Search Results (627)

Search Parameters:
Keywords = net color

19 pages, 5198 KiB  
Article
Research on a Fault Diagnosis Method for Rolling Bearings Based on the Fusion of PSR-CRP and DenseNet
by Beining Cui, Zhaobin Tan, Yuhang Gao, Xinyu Wang and Lv Xiao
Processes 2025, 13(8), 2372; https://doi.org/10.3390/pr13082372 - 25 Jul 2025
Abstract
To address the challenges of unstable vibration signals, indistinct fault features, and difficulties in feature extraction during rolling bearing operation, this paper presents a novel fault diagnosis method based on the fusion of PSR-CRP and DenseNet. The Phase Space Reconstruction (PSR) method transforms one-dimensional bearing vibration data into a three-dimensional space. Euclidean distances between phase points are calculated and mapped into a Color Recurrence Plot (CRP) to represent the bearings' operational state. This approach reduces feature extraction ambiguity compared with RP, GAF, and MTF methods. Fault features are extracted and classified using DenseNet's densely connected topology; compared with CNN and ViT models, DenseNet improves diagnostic accuracy by reusing limited features across multiple dimensions. Under five-fold cross-validation, training set accuracies were 99.82% and 99.90%, and test set accuracies were 97.03% and 95.08% on the CWRU and JNU datasets, respectively; F1 scores were 0.9739 and 0.9537. The method achieves highly accurate diagnosis for non-stationary signals with inconspicuous fault characteristics and is applicable to fault diagnosis of precision components in aerospace, military systems, robotics, and related fields.
(This article belongs to the Section Process Control and Monitoring)
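The PSR-to-CRP pipeline this abstract describes is straightforward to prototype. Below is a minimal sketch, assuming an embedding dimension of 3 and a fixed delay (the paper's delay-selection procedure is not given here); the pairwise-distance matrix rendered with a colormap plays the role of the CRP image fed to DenseNet.

```python
import numpy as np
import matplotlib.pyplot as plt

def phase_space_reconstruction(signal: np.ndarray, delay: int = 8, dim: int = 3) -> np.ndarray:
    """Embed a 1D signal into `dim`-dimensional phase space with time delay `delay`."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i * delay : i * delay + n] for i in range(dim)], axis=1)

def color_recurrence_plot(points: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between phase points; a colormap rendering
    replaces the binary thresholding of a classical recurrence plot."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    vibration = np.sin(2 * np.pi * 30 * t) + 0.3 * np.random.randn(t.size)  # toy signal
    crp = color_recurrence_plot(phase_space_reconstruction(vibration))
    plt.imshow(crp, cmap="jet")   # the color mapping is what makes this a "CRP"
    plt.savefig("crp.png")        # images like this would then be fed to DenseNet
```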

19 pages, 9361 KiB  
Article
A Multi-Domain Enhanced Network for Underwater Image Enhancement
by Tianmeng Sun, Yinghao Zhang, Jiamin Hu, Haiyuan Cui and Teng Yu
Information 2025, 16(8), 627; https://doi.org/10.3390/info16080627 - 23 Jul 2025
Abstract
Owing to the intricate variability of underwater environments, images suffer from degradation including light absorption, scattering, and color distortion. U-Net architectures limit global context utilization because of their fixed-receptive-field convolutions, while traditional attention mechanisms incur quadratic complexity and fail to fuse spatial and frequency features efficiently. Unlike methods focused on local enhancement, the proposed HMENet integrates a transformer sub-network for long-range dependency modeling and dual-domain attention for bidirectional spatial–frequency fusion. This design enlarges the receptive field while maintaining linear complexity. On the UIEB and EUVP datasets, HMENet achieves PSNR/SSIM of 25.96/0.946 and 27.92/0.927, surpassing HCLR-Net by 0.97 dB and 1.88 dB in PSNR, respectively.
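A minimal sketch of the dual-domain idea described above: a convolutional spatial branch alongside an FFT branch whose frequency content is reweighted, fused by a 1x1 convolution. This illustrates spatial–frequency fusion generically; it is not HMENet's published architecture, and the module name is hypothetical.

```python
import torch
import torch.nn as nn

class DualDomainBlock(nn.Module):
    """Toy dual-domain block: a spatial conv branch plus an FFT branch that
    rescales frequency content per channel, fused by a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq_weight = nn.Parameter(torch.ones(channels, 1, 1))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        s = self.spatial(x)                                   # local features
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = spec * self.freq_weight                        # reweight global frequencies
        f = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))

out = DualDomainBlock(16)(torch.randn(1, 16, 64, 64))  # shape preserved
```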

18 pages, 2545 KiB  
Article
Reliable Indoor Fire Detection Using Attention-Based 3D CNNs: A Fire Safety Engineering Perspective
by Mostafa M. E. H. Ali and Maryam Ghodrat
Fire 2025, 8(7), 285; https://doi.org/10.3390/fire8070285 - 21 Jul 2025
Abstract
Despite recent advances in deep learning for fire detection, much of the current research prioritizes model-centric metrics over dataset fidelity, particularly from a fire safety engineering perspective. Commonly used datasets are often dominated by fully developed flames, mislabel smoke-only frames as non-fire, or lack intra-video diversity due to redundant frames from limited sources. Some works treat smoke detection alone as early-stage detection, even though many fires (e.g., electrical or chemical) begin with visible flames and no smoke. Additionally, attempts to improve model applicability through mixed-context datasets—combining indoor, outdoor, and wildland scenes—often overlook the unique false alarm sources and detection challenges specific to each environment. To address these limitations, we curated a new video dataset comprising 1108 annotated fire and non-fire clips captured via indoor surveillance cameras. Unlike existing datasets, ours emphasizes early-stage fire dynamics (pre-flashover) and includes varied fire sources (e.g., sofa, cupboard, and attic fires), realistic false alarm triggers (e.g., flame-colored objects, artificial lighting), and a wide range of spatial layouts and illumination conditions. This collection enables robust training and benchmarking for early indoor fire detection. Using this dataset, we developed a spatiotemporal fire detection model based on the mixed-convolution ResNet (MC3_18) architecture, augmented with Convolutional Block Attention Modules (CBAM). The proposed model achieved 86.11% accuracy, 88.76% precision, and 84.04% recall, along with low false positive (11.63%) and false negative (15.96%) rates. Compared to its CBAM-free baseline, the model exhibits notable improvements in F1-score and interpretability, as confirmed by Grad-CAM++ visualizations highlighting attention to semantically meaningful fire features. These results demonstrate that effective early fire detection is inseparable from high-quality, context-specific datasets. Our work introduces a scalable, safety-driven approach that advances the development of reliable, interpretable, and deployment-ready fire detection systems for residential environments.
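CBAM is a published module (Woo et al., 2018), so a standard 2D sketch is shown below; the paper applies it inside a 3D video network (MC3_18), which this simplified 4D-tensor version does not reproduce.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))          # squeeze via average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))           # squeeze via max pooling
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)
    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))  # per-pixel attention map

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in Woo et al. (2018)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()
    def forward(self, x):
        return self.sa(self.ca(x))

out = CBAM(64)(torch.randn(2, 64, 32, 32))  # shape preserved: (2, 64, 32, 32)
```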

20 pages, 41202 KiB  
Article
Copper Stress Levels Classification in Oilseed Rape Using Deep Residual Networks and Hyperspectral False-Color Images
by Yifei Peng, Jun Sun, Zhentao Cai, Lei Shi, Xiaohong Wu, Chunxia Dai and Yubin Xie
Horticulturae 2025, 11(7), 840; https://doi.org/10.3390/horticulturae11070840 - 16 Jul 2025
Abstract
In recent years, heavy metal contamination in agricultural products has become a growing concern in the field of food safety. Copper (Cu) stress in crops not only leads to significant reductions in both yield and quality but also poses potential health risks to humans. This study proposes an efficient and precise non-destructive detection method for Cu stress in oilseed rape, which is based on hyperspectral false-color image construction using principal component analysis (PCA). By comprehensively capturing the spectral representation of oilseed rape plants, both the one-dimensional (1D) spectral sequence and spatial image data were utilized for multi-class classification. The classification performance of models based on 1D spectral sequences was compared from two perspectives: first, between machine learning and deep learning methods (best accuracy: 93.49% vs. 96.69%); and second, between shallow and deep convolutional neural networks (CNNs) (best accuracy: 95.15% vs. 96.69%). For spatial image data, deep residual networks were employed to evaluate the effectiveness of visible-light and false-color images. The RegNet architecture was chosen for its flexible parameterization and proven effectiveness in extracting multi-scale features from hyperspectral false-color images. This flexibility enabled RegNetX-6.4GF to achieve optimal performance on the dataset constructed from three types of false-color images, with the model reaching a Macro-Precision, Macro-Recall, Macro-F1, and Accuracy of 98.17%, 98.15%, 98.15%, and 98.15%, respectively. Furthermore, Grad-CAM visualizations revealed that latent physiological changes in plants under heavy metal stress guided feature learning within CNNs, and demonstrated the effectiveness of false-color image construction in extracting discriminative features. Overall, the proposed technique can be integrated into portable hyperspectral imaging devices, enabling real-time and non-destructive detection of heavy metal stress in modern agricultural practices.
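A minimal sketch of building a false-color image from a hyperspectral cube with PCA, as the abstract describes; the cube shape, band count, and min-max scaling are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def false_color_from_cube(cube: np.ndarray) -> np.ndarray:
    """cube: (H, W, bands) reflectance -> (H, W, 3) uint8 false-color image,
    using the first three principal components as the R, G, B planes."""
    h, w, bands = cube.shape
    scores = PCA(n_components=3).fit_transform(cube.reshape(-1, bands))
    scores = scores.reshape(h, w, 3)
    lo, hi = scores.min(axis=(0, 1)), scores.max(axis=(0, 1))
    return ((scores - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

demo = false_color_from_cube(np.random.rand(64, 64, 120))  # toy 120-band cube
```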

21 pages, 5889 KiB  
Article
Mobile-YOLO: A Lightweight Object Detection Algorithm for Four Categories of Aquatic Organisms
by Hanyu Jiang, Jing Zhao, Fuyu Ma, Yan Yang and Ruiwen Yi
Fishes 2025, 10(7), 348; https://doi.org/10.3390/fishes10070348 - 14 Jul 2025
Abstract
Accurate and rapid aquatic organism recognition is a core technology for fisheries automation and aquatic organism statistical research. However, due to absorption and scattering effects, images of aquatic organisms often suffer from poor contrast and color distortion, and the clustering behavior of aquatic organisms often leads to occlusion, further complicating identification. This study proposes a lightweight object detection model, Mobile-YOLO, for recognizing four representative aquatic organisms: holothurian, echinus, scallop, and starfish. The model first uses our proposed Mobile-Nano backbone, which enhances feature perception while maintaining a lightweight design. We then propose a lightweight detection head, LDtect, which balances a compact structure with high accuracy. Additionally, we introduce Dysample (dynamic sampling) and HWD (Haar wavelet downsampling) modules to optimize the feature fusion structure by improving upsampling and downsampling; these modules also help compensate for the accuracy loss caused by the lightweight design of LDtect. Compared with the baseline model, ours reduces Params (parameters) by 32.2%, FLOPs (floating point operations) by 28.4%, and weights (model storage size) by 30.8%, while improving FPS (frames per second) by 95.2% and increasing mAP (mean average precision) by 1.6%; these gains translate to better accuracy in practical applications such as marine species monitoring, conservation, and biodiversity assessment. Compared with the YOLO (You Only Look Once) series (YOLOv5-12), SSD (Single Shot MultiBox Detector), EfficientDet, RetinaNet, and RT-DETR (Real-Time Detection Transformer), our model achieves leading overall performance in terms of both accuracy and lightweight design. The results indicate that our research provides technological support for precise and rapid aquatic organism recognition.
(This article belongs to the Special Issue Technology for Fish and Fishery Monitoring)
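Haar wavelet downsampling replaces lossy strided pooling with a lossless 2x2 Haar transform followed by a channel projection. The sketch below is a generic HWD-style block under that reading, not the authors' exact module; it assumes even spatial dimensions.

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """HWD-style block: a 2x2 Haar transform halves spatial resolution without
    discarding information, then a 1x1 conv mixes the four sub-bands."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Conv2d(4 * in_ch, out_ch, 1)

    def forward(self, x):  # x: (B, C, H, W) with even H and W
        a, b = x[..., ::2, ::2], x[..., ::2, 1::2]    # even rows: even/odd columns
        c, d = x[..., 1::2, ::2], x[..., 1::2, 1::2]  # odd rows:  even/odd columns
        ll, lh = (a + b + c + d) / 2, (a - b + c - d) / 2  # low-pass / horizontal detail
        hl, hh = (a + b - c - d) / 2, (a - b - c + d) / 2  # vertical / diagonal detail
        return self.proj(torch.cat([ll, lh, hl, hh], dim=1))

out = HaarDownsample(16, 32)(torch.randn(1, 16, 64, 64))  # -> (1, 32, 32, 32)
```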

22 pages, 7562 KiB  
Article
FIGD-Net: A Symmetric Dual-Branch Dehazing Network Guided by Frequency Domain Information
by Luxia Yang, Yingzhao Xue, Yijin Ning, Hongrui Zhang and Yongjie Ma
Symmetry 2025, 17(7), 1122; https://doi.org/10.3390/sym17071122 - 13 Jul 2025
Abstract
Image dehazing is a crucial component of intelligent transportation and autonomous driving. However, most existing dehazing algorithms operate only in the spatial domain and fail to exploit the rich information in the frequency domain, which leaves residual haze in the images. To address this issue, we propose a novel Frequency-domain Information Guided Symmetric Dual-branch Dehazing Network (FIGD-Net), which uses the spatial branch to extract local haze features and the frequency branch to capture the global haze distribution, thereby guiding feature learning in the spatial branch. FIGD-Net consists of three key modules: the Frequency Detail Extraction Module (FDEM), the Dual-Domain Multi-scale Feature Extraction Module (DMFEM), and the Dual-Domain Guidance Module (DGM). First, the FDEM applies the Discrete Cosine Transform (DCT) to convert features from the spatial to the frequency domain and selectively extracts high- and low-frequency features in predefined proportions. The high-frequency features, which carry haze-related information, are correlated with the overall characteristics of the low-frequency features to enhance the representation of haze attributes. Next, the DMFEM uses stacked residual blocks and gradient feature flows to capture local details, with frequency-guided weights adjusting the focus of feature channels to improve multi-scale feature capture and the discrimination of haze features. Finally, the DGM adjusts channel weights under frequency guidance, smoothing redundant signals and enabling cross-branch information exchange, which helps restore the original image colors. Extensive experiments demonstrate that the proposed FIGD-Net achieves superior dehazing performance on multiple synthetic and real-world datasets.
(This article belongs to the Section Computer)
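A minimal sketch of a DCT-based high/low-frequency split in the spirit of the FDEM, assuming a square low-frequency corner mask; the paper's "predefined proportions" are represented here by the low_ratio parameter, which is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def split_frequency(img: np.ndarray, low_ratio: float = 0.25):
    """Split a grayscale image into low- and high-frequency parts via 2D DCT:
    coefficients in the top-left `low_ratio` corner count as low frequency."""
    coeffs = dctn(img, norm="ortho")
    h, w = img.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * low_ratio), : int(w * low_ratio)] = 1.0
    low = idctn(coeffs * mask, norm="ortho")        # smooth global structure
    high = idctn(coeffs * (1 - mask), norm="ortho") # edges and fine detail
    return low, high

low, high = split_frequency(np.random.rand(128, 128))  # toy image
```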

18 pages, 10703 KiB  
Article
An Emergency Response Framework Design and Performance Analysis for Ship Fire Incidents in Waterway Tunnels
by Jian Deng, Shaoyong Liu and Xiaohan Zeng
Fire 2025, 8(7), 278; https://doi.org/10.3390/fire8070278 - 12 Jul 2025
Abstract
Waterway tunnels, a novel type of infrastructure for inland waterways in mountainous gorge regions, have developed rapidly in recent years. However, their unique structural characteristics and specific shipping activities pose significant risks in the event of an accident. To enhance the scientific rigor and efficiency of emergency responses to vessel incidents in tunnels, this study focuses on fire accidents in waterway tunnels. Considering the unique challenges of emergency response in such scenarios, we propose an emergency response framework using Business Process Modeling Notation (BPMN). The framework is mapped onto a Petri net model spanning three key stages: detection and early warning, emergency response actions, and recovery. A Colored Hierarchical Timed Petri Net (CHTPN) emergency response model is then developed from fire incident data and emergency response time functions, and a homomorphic Markov chain is employed to assess the network's validity and performance. Finally, optimization strategies are proposed to improve the response process. The results indicate that the emergency response network demonstrates strong accessibility and effectively mitigates information bottlenecks at critical stages of the response process. It provides accurate and rapid decision support for different tunnel ship fire scenarios, allocates emergency resources and response teams efficiently and reasonably, and monitors the operation of key response stages. This enhances the efficiency of emergency operations and provides robust support for decision-making in waterway tunnel fire emergencies.
(This article belongs to the Special Issue Modeling, Experiment and Simulation of Tunnel Fire)

13 pages, 2569 KiB  
Article
Research on the Denitrification Efficiency of Anammox Sludge Based on Machine Vision and Machine Learning
by Yiming Hu, Dongdong Xu, Meng Zhang, Shihao Ge, Dongyu Shi and Yunjie Ruan
Water 2025, 17(14), 2084; https://doi.org/10.3390/w17142084 - 12 Jul 2025
Abstract
This study combines machine vision and deep learning to rapidly assess the activity of anaerobic ammonium oxidation (Anammox) granular sludge. As a highly efficient nitrogen removal technology for wastewater treatment, the Anammox process has been widely applied globally owing to its energy-saving and environmentally friendly features; however, existing sludge activity monitoring methods are inefficient, costly, and difficult to implement in real time. In this study, we collected and augmented 1000 images of Anammox granular sludge, extracted color features, and trained machine learning and deep learning models, such as XGBoost and a ResNet50d neural network, to relate sludge image color to denitrification efficiency. The ResNet50d-based model performed best, with a coefficient of determination (R²) of 0.984 and a mean squared error (MSE) of 523.38, significantly better than traditional machine learning models (R² up to 0.952). The experiments also showed that the specific activity of Anammox granular sludge peaked at 470.1 mg-N/(g-VSS·d) under a nitrogen load of 2.22 kg-N/(m³·d), with further increases in nitrogen load inhibiting sludge activity. This research provides an efficient and cost-effective solution for online monitoring of the Anammox process and has the potential to drive the digital transformation of the wastewater treatment industry.
(This article belongs to the Special Issue AI, Machine Learning and Digital Twin Applications in Water)
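A minimal sketch of the color-to-activity regression idea, using per-channel color statistics as features and scikit-learn's gradient boosting as a stand-in for XGBoost; the images, features, and activity labels are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def color_features(img_rgb: np.ndarray) -> np.ndarray:
    """Mean and standard deviation per RGB channel -> 6-dim feature vector."""
    pixels = img_rgb.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(0), pixels.std(0)])

# Toy data: 100 "sludge images" (64x64 RGB) with synthetic activity labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64, 3))
activity = rng.uniform(100, 500, size=100)  # mg-N/(g-VSS.d), synthetic

X = np.stack([color_features(im) for im in images])
model = GradientBoostingRegressor().fit(X, activity)
print(model.predict(X[:3]))
```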

23 pages, 3645 KiB  
Article
Color-Guided Mixture-of-Experts Conditional GAN for Realistic Biomedical Image Synthesis in Data-Scarce Diagnostics
by Patrycja Kwiek, Filip Ciepiela and Małgorzata Jakubowska
Electronics 2025, 14(14), 2773; https://doi.org/10.3390/electronics14142773 - 10 Jul 2025
Abstract
Background: Limited availability of high-quality labeled biomedical image datasets presents a significant challenge for training deep learning models in medical diagnostics. This study proposes a novel image generation framework combining conditional generative adversarial networks (cGANs) with a Mixture-of-Experts (MoE) architecture and color histogram-aware loss functions to enhance synthetic blood cell image quality. Methods: RGB microscopic images from the BloodMNIST dataset (eight blood cell types, resolution 3 × 128 × 128) were preprocessed with k-means clustering to extract the dominant colors and with UMAP to visualize class similarity. Spearman correlation-based distance matrices were used to evaluate the discriminative power of each RGB channel. A MoE–cGAN architecture was developed with residual blocks and LeakyReLU activations. Expert generators were conditioned on cell type, and the generator's loss was augmented with a Wasserstein distance-based term comparing red and green channel histograms, which were found most relevant for class separation. Results: The red and green channels contributed most to class discrimination; the blue channel had minimal impact. The proposed model achieved 0.97 classification accuracy on generated images (ResNet50), with 0.96 precision, 0.97 recall, and a 0.96 F1-score. The best Fréchet Inception Distance (FID) was 52.1. Misclassifications occurred mainly among visually similar cell types. Conclusions: Integrating histogram alignment into the MoE–cGAN training significantly improves the realism and class-specific variability of synthetic images, supporting robust model development under data scarcity in hematological imaging.
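The histogram-alignment term can be sketched as a 1D Wasserstein-1 distance between channel histograms, computed as the mean absolute difference of their CDFs. This is an illustrative reconstruction, not the authors' exact loss; the hard binning shown here is not differentiable end-to-end.

```python
import torch

def intensity_histogram(channel: torch.Tensor, bins: int = 64) -> torch.Tensor:
    """Normalized histogram of pixel intensities in [0, 1] (hard binning for
    brevity; a training-time loss would need a smooth estimator)."""
    idx = (channel.clamp(0, 1) * (bins - 1)).long().flatten()
    hist = torch.bincount(idx, minlength=bins).float()
    return hist / hist.sum()

def histogram_wasserstein(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """1D Wasserstein-1 between histograms = mean |CDF difference|, applied to
    the red and green channels only, per the abstract."""
    loss = torch.tensor(0.0)
    for ch in (0, 1):  # red, green
        h_r = intensity_histogram(real[:, ch])
        h_f = intensity_histogram(fake[:, ch])
        loss = loss + (h_r.cumsum(0) - h_f.cumsum(0)).abs().mean()
    return loss

print(histogram_wasserstein(torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128)))
```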

13 pages, 1697 KiB  
Article
A Real-Time Vision-Based Adaptive Follow Treadmill for Animal Gait Analysis
by Guanghui Li, Salif Komi, Jakob Fleng Sorensen and Rune W. Berg
Sensors 2025, 25(14), 4289; https://doi.org/10.3390/s25144289 - 9 Jul 2025
Abstract
Treadmills are a convenient tool for studying animal gait and behavior. Traditional animal treadmill designs often entail preset speeds and therefore adapt poorly to animals' dynamic behavior, restricting the experimental scope. Advances in computer vision and automation allow these limitations to be circumvented. Here, we introduce a series of real-time adaptive treadmill systems that track experimental animals using both marker-based visual fiducial methods (colored blocks or AprilTags) and a marker-free method based on a pre-trained model. We demonstrate their real-time object recognition capabilities in practical tests and highlight the performance of the marker-free method, built on an object detection machine learning algorithm (a FOMO MobileNetV2 network), which detects a moving rat with higher robustness and accuracy than the marker-based methods. Combining this computer vision system with treadmill control overcomes the issues of traditional treadmills by adjusting belt speed and direction based on animal movement.
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
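A minimal sketch of the closed loop such a system implies: a proportional controller nudges belt speed so the tracked animal stays near the belt midpoint. The detector stub, gain, and speed limit are assumptions; the actual systems use colored-block, AprilTag, or FOMO MobileNetV2 detectors.

```python
import random

BELT_MID = 0.5   # m, midpoint of a 1 m belt (assumed geometry)
GAIN = 2.0       # proportional gain (assumed; tuned per setup)
MAX_SPEED = 0.8  # m/s belt speed limit (assumed)

def detect_position() -> float:
    """Stand-in for the vision detector: simulates a noisy position reading
    of the animal along the belt, in meters."""
    return 0.5 + random.uniform(-0.2, 0.2)

speed = 0.0
for _ in range(10):                       # one iteration per camera frame
    error = detect_position() - BELT_MID  # >0: animal drifted forward
    speed = max(-MAX_SPEED, min(MAX_SPEED, speed + GAIN * error))
    print(f"belt speed -> {speed:+.2f} m/s")
```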

25 pages, 2841 KiB  
Article
Dynamic Graph Neural Network for Garbage Classification Based on Multimodal Feature Fusion
by Yuhang Yang, Yuanqing Luo, Yingyu Yang and Shuang Kang
Appl. Sci. 2025, 15(14), 7688; https://doi.org/10.3390/app15147688 - 9 Jul 2025
Abstract
Amid the accelerating pace of global urbanization, the volume of municipal solid garbage has surged dramatically, demanding more efficient and precise garbage management technologies. In this paper, we introduce a novel garbage classification approach that leverages a dynamic graph neural network based on multimodal feature fusion. Specifically, the proposed method employs an enhanced Residual Network Attention Module (RNAM) network to capture deep semantic features and utilizes CIELAB color (LAB) histograms to extract color distribution characteristics, achieving a complementary integration of multimodal information. An adaptive K-nearest neighbor algorithm constructs the dynamic graph structure, while a multi-head attention layer within the graph neural network efficiently aggregates both local and global features. This design significantly enhances the model's ability to discriminate among garbage categories. Experimental evaluations reveal that on our self-curated KRHO dataset all performance metrics approach 1.00 and the overall classification accuracy reaches 99.33%, surpassing existing mainstream models; on the public TrashNet dataset, the proposed method demonstrates equally outstanding classification performance and robustness, achieving an overall accuracy of 99.49%. Additionally, hyperparameter studies indicate that the model attains optimal performance with a learning rate of 2 × 10⁻⁴, a dropout rate of 0.3, an initial neighbor count of 20, and 8 attention heads.
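A minimal sketch of the color branch and graph construction: LAB histograms as node features and a k-nearest-neighbor adjacency over them. A fixed k is assumed; the paper's adaptive-k scheme is not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.neighbors import kneighbors_graph

def lab_histogram(img_rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenated per-channel histograms in CIELAB space (the color cue)."""
    lab = rgb2lab(img_rgb)  # expects float RGB in [0, 1]
    ranges = [(0, 100), (-128, 127), (-128, 127)]  # L, a, b value ranges
    feats = [np.histogram(lab[..., i], bins=bins, range=r, density=True)[0]
             for i, r in enumerate(ranges)]
    return np.concatenate(feats)

# Toy batch of 32 images -> node features -> kNN adjacency for a GNN.
imgs = np.random.rand(32, 64, 64, 3)
nodes = np.stack([lab_histogram(im) for im in imgs])
adj = kneighbors_graph(nodes, n_neighbors=5, mode="connectivity")
print(adj.shape)  # (32, 32) sparse adjacency
```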

14 pages, 239 KiB  
Article
The Willingness to Pay for Non-Alcoholic Beer: A Survey on the Sociodemographic Factors and Consumption Behavior of Italian Consumers
by Antonietta Baiano
Foods 2025, 14(13), 2399; https://doi.org/10.3390/foods14132399 - 7 Jul 2025
Abstract
The Italian market for non-alcoholic beer is very small, with a per capita volume of around 0.7 L. However, there are interesting prospects for future growth, for reasons ranging from strict traffic code rules on alcohol intake to simple curiosity. This research investigated Italian consumers' and potential consumers' willingness to pay (WTP) for non-alcoholic beer. To accomplish this, a questionnaire was administered using the Google Forms application; 392 people participated voluntarily and without monetary compensation. A probit regression model was used to estimate the impact on WTP of certain sociodemographic characteristics (number of inhabitants of the place of residence, region of residence, age group, gender, education level, employment situation, and annual net income), participants' consumption habits with respect to alcoholic beer, and participants' knowledge of and preference for non-alcoholic beers. The prices respondents were willing to pay ranged from EUR 1.51 to 2.00 for a 33 cL glass bottle. Only two factors significantly affected WTP (p < 0.1): age and non-alcoholic beer color. WTP decreased as respondent age increased and was higher for darker beer.
(This article belongs to the Section Sensory and Consumer Sciences)
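A probit model of this kind can be fitted in a few lines with statsmodels. The sketch below uses synthetic survey data and two illustrative regressors (age and a dark-beer preference dummy), not the study's dataset; it merely mirrors the direction of the reported age effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 392                                         # matches the survey size
age = rng.integers(18, 75, n)
dark_beer = rng.integers(0, 2, n)               # 1 = prefers darker beer
latent = -0.03 * age + 0.5 * dark_beer + rng.normal(0, 1, n)
wtp_high = (latent > latent.mean()).astype(int) # 1 = willing to pay a higher price

X = sm.add_constant(np.column_stack([age, dark_beer]))
probit = sm.Probit(wtp_high, X).fit(disp=False)
print(probit.summary())  # negative age coefficient mirrors the reported finding
```
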
20 pages, 2968 KiB  
Article
Real-Time Lightweight Morphological Detection for Chinese Mitten Crab Origin Tracing
by Xiaofei Ma, Nannan Shen, Yanhui He, Zhuo Fang, Hongyan Zhang, Yun Wang and Jinrong Duan
Appl. Sci. 2025, 15(13), 7468; https://doi.org/10.3390/app15137468 - 3 Jul 2025
Abstract
During the cultivation and circulation of the Chinese mitten crab (Eriocheir sinensis), the difficulty of tracing geographic origin leads to quality uncertainty and market disorder. To address this challenge, this study proposes a two-stage origin traceability framework that integrates a lightweight object detector and a high-precision classifier. In the first stage, an improved YOLOv10n-based model incorporates omni-dimensional dynamic convolution, a SlimNeck structure, and a Lightweight Shared Convolutional Detection head, effectively enhancing crab detection accuracy in complex multi-scale environments while reducing computational cost. In the second stage, an improved GoogLeNet-style Inception network for crab classification is developed based on the Inception module, with Asymmetric Convolution Blocks and Squeeze-and-Excitation modules further integrated to improve feature extraction and classification of regional origin. A comprehensive crab dataset is constructed, containing images from diverse farming sites with variations in species, color, size, angle, and background conditions. Experimental results show that the proposed detector achieves an mAP50 of 99.5% and an mAP50-95 of 88.5% while maintaining 309 FPS and reducing GFLOPs by 35.3%; the classification model achieves high accuracy with only 17.4% and 40% of the parameters of VGG16 and AlexNet, respectively. In conclusion, the proposed method achieves an optimal accuracy-speed-complexity trade-off, enabling robust real-time traceability for aquaculture systems.
(This article belongs to the Section Computing and Artificial Intelligence)
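The Squeeze-and-Excitation module mentioned above is a published block (Hu et al., 2018); below is a standard minimal implementation for reference.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pool ("squeeze"), a two-layer
    bottleneck ("excitation"), then channel-wise rescaling of the features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # per-channel weights
        return x * w

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # shape preserved: (2, 64, 32, 32)
```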

28 pages, 2676 KiB  
Article
Improved Filter Designs Using Image Processing Techniques for Color Vision Deficiency (CVD) Types
by Fatma Akalın, Nilgün Özkan Aksoy, Dilara Top and Esma Kara
Symmetry 2025, 17(7), 1046; https://doi.org/10.3390/sym17071046 - 2 Jul 2025
Abstract
The eye is one of our five sense organs, integrating optical and neural structures and working in synchrony with the brain to form meaningful images. However, dysfunction, complete absence, or structural abnormalities of the cone cells in the retina cause the various types of Color Vision Deficiency (CVD). This deficiency is characterized by an inability to clearly distinguish colors in the same region of the spectrum and greatly affects patients' quality of life, so it is important to develop filters that help such colors be distinguished successfully. In this study, an original filter design was developed, built on a five-stage systematic structure whose stages complement and support one another. Because performance optimization must be tested with objective methods independent of human judgment, original and enhanced images simulating the vision of patients with seven different CVD types were classified with the MobileNet transfer learning model to provide performance analyses based on objective evaluation criteria. The classification results show that the developed final filter greatly reduces the differences in color perception levels between the two eyes. Color stimulation between the two eyes thus becomes more balanced, creating perceptual symmetry; with perceptual symmetry, environmental colors are perceived more consistently and distinguishably, reducing the visual difficulties that color-blind individuals encounter in daily life.
(This article belongs to the Special Issue Symmetry in Computational Intelligence and Applications)

20 pages, 3602 KiB  
Article
Dust Aerosol Classification in Northwest China Using CALIPSO Data and an Enhanced 1D U-Net Network
by Xin Gong, Delong Xiu, Xiaoling Sun, Ruizhao Zhang, Jiandong Mao, Hu Zhao and Zhimin Rao
Atmosphere 2025, 16(7), 812; https://doi.org/10.3390/atmos16070812 - 2 Jul 2025
Abstract
Dust aerosols significantly affect climate and air quality in Northwest China (30–50° N, 70–110° E), where frequent dust storms complicate accurate aerosol classification using CALIPSO satellite data. This study introduces an Enhanced 1D U-Net model to improve dust aerosol retrieval, incorporating Inception modules for multi-scale feature extraction, Transformer blocks for global contextual modeling, CBAM attention mechanisms for improved feature selection, and residual connections for training stability. Using CALIPSO Level 1B and Level 2 Vertical Feature Mask (VFM) data from 2015 to 2020, the model processed backscatter coefficients, polarization characteristics, and color ratios at 532 nm and 1064 nm to classify aerosol types. The model achieved a precision of 94.11%, recall of 99.88%, and F1 score of 96.91% for dust aerosols, outperforming baseline models. Dust aerosols were predominantly detected between 0.44 and 4 km, consistent with CALIPSO observations. These results highlight the model's potential to improve climate modeling and air quality monitoring, providing a scalable framework for future atmospheric research.
(This article belongs to the Section Aerosols)
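A heavily reduced 1D U-Net sketch for per-bin profile classification of the kind described, with one downsampling stage and one skip connection; the channel widths, input feature count, and number of vertical bins are illustrative, and the paper's Inception, Transformer, and CBAM additions are omitted.

```python
import torch
import torch.nn as nn

class TinyUNet1D(nn.Module):
    """Minimal 1D U-Net: encoder, bottleneck, decoder with one skip connection.
    Outputs per-vertical-bin class logits for an atmospheric profile."""
    def __init__(self, in_ch: int, n_classes: int):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool1d(2)
        self.mid = nn.Sequential(nn.Conv1d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose1d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv1d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv1d(32, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                                 # (B, 32, L)
        m = self.up(self.mid(self.down(e)))             # (B, 32, L)
        return self.head(self.dec(torch.cat([e, m], 1)))

# Toy profile batch: 5 input features (backscatter, depolarization, color
# ratios, ...) over 512 vertical bins, 4 aerosol classes.
logits = TinyUNet1D(5, 4)(torch.randn(4, 5, 512))
print(logits.shape)  # torch.Size([4, 4, 512])
```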
