Search Results (1,109)

Search Parameters:
Keywords = MobileNet v2

29 pages, 1696 KB  
Article
Optimizing Lightweight Convolutional Networks via Topological Attention and Entropy-Constrained Distillation: A Spectral–Topological Approach for Robust Facial Expression Recognition
by Xiaohong Dong, Yu Gao, Mengyan Liu and Wenxiaoman Yu
Algorithms 2026, 19(3), 177; https://doi.org/10.3390/a19030177 - 26 Feb 2026
Viewed by 63
Abstract
Deep learning models typically rely on large-scale datasets with accurate annotations, yet real-world applications inevitably suffer from label noise, which severely degrades generalization—particularly for lightweight neural networks with limited capacity. Existing learning-with-noisy-labels methods are mainly designed for over-parameterized models and are often unsuitable for resource-constrained deployment. To address this challenge, we propose a robust framework that integrates a Micro Hybrid Attention Module (MHAM) with knowledge distillation (KD) for lightweight architectures such as MobileNetV3. MHAM employs a decoupled channel–spatial attention design to enhance discriminative feature extraction while suppressing noise-sensitive background responses. From a graph–signal perspective, MHAM can be interpreted as a spectral smoothing operator that improves optimization stability. In addition, knowledge distillation with soft teacher supervision mitigates overfitting to corrupted hard labels and reduces prediction uncertainty. Extensive experiments demonstrate the effectiveness of the proposed method. On FER2013, a real-world noisy facial expression recognition benchmark, our approach achieves 68.5% accuracy with only 0.52M parameters, while reducing optimization variance by 24%. On CIFAR-10 with 40% symmetric label noise, it improves accuracy from 54.85% to 60.10%. On CIFAR-10N with multiple types of real-world human annotation noise, the proposed method consistently achieves 63.9–71.9% accuracy under different noise protocols. These results show that the proposed framework provides an efficient and robust solution for noisy-label learning in lightweight facial expression and object classification on edge devices.
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms (2nd Edition))
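As an illustrative aside, the soft teacher supervision mentioned in this abstract can be sketched generically in plain Python. This is the standard Hinton-style temperature-softened distillation loss, not the authors' implementation:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields softer distributions.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 as in the classic formulation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (T ** 2) * kl

identical = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # loss is zero
different = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])  # positive loss
```

Because the loss is computed against the teacher's soft distribution rather than a (possibly corrupted) hard label, disagreement with a noisy annotation is penalized less sharply, which is the mechanism the abstract credits for robustness.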

22 pages, 4003 KB  
Article
Deep Learning-Based Classification of Paddy Crop Diseases Using a Custom Image Dataset
by Baghavathi Priya Sankaralingam, Krithikha Sanju Saravanan, Venisree Kalyana Sundaram, Richa Kumari Jaishwal and Jayanth Natarajan
AgriEngineering 2026, 8(3), 80; https://doi.org/10.3390/agriengineering8030080 - 26 Feb 2026
Viewed by 144
Abstract
Plant diseases pose significant threats globally due to the high economic losses and effects on food security. Traditional disease identification methods usually have limitations regarding their accuracy and efficiency. This study discusses six advanced deep learning models: VGG19, DenseNet201, Xception, InceptionResNetV2, MobileNetV2, and EfficientNetV2B3. A diverse dataset containing high-quality images of diseased plant sections is used. These deep models are compared for their efficiency in recognizing plant diseases accurately. EfficientNetV2B3 and Xception outperformed the rest of the models due to their ability to capture major features from the image of the infected region. MobileNetV2 was also useful, providing a good trade-off between accuracy and computational efficiency. The study further applied transfer learning and image augmentation to boost model performance and address class imbalance in the dataset. Results showed that the proposed approach is more reliable and efficient than conventional approaches to plant disease detection. Future efforts will focus on early disease detection to further assist farmers and researchers in improving crop-management practices. Additional data, including hyperspectral images and environmental factors, will be integrated to develop a robust and efficient plant disease detection system. These models will be deployed in intelligent farming systems.
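Alongside augmentation, a common remedy for the class imbalance this abstract mentions is inverse-frequency class weighting in the loss. A minimal sketch (the class names and counts are hypothetical, not from the paper):

```python
def class_weights(counts):
    # Inverse-frequency weights, normalized so a perfectly balanced
    # dataset yields weight 1.0 for every class.
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Hypothetical per-class image counts for a paddy disease dataset.
counts = {"blast": 800, "blight": 400, "tungro": 200, "healthy": 600}
w = class_weights(counts)  # rare classes receive larger weights
```

The weights feed directly into a weighted cross-entropy, so under-represented diseases contribute more per sample during training.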

12 pages, 884 KB  
Article
Classification of Pancreatic Cancer and Normal Tissue in 2D and 3D Optical Coherence Tomography Images Using Convolutional Neural Networks: A Comparative Study
by Maria Druzenko, Bastian Westerheide, Caroline Girmen, Niels König, Robert Schmitt, Svetlana Warkentin, Katharina Jöchle, Sebastian Cammann, Georg Wiltberger, Martin W. von Websky, Thomas Vogel, Florian W. R. Vondran and Iakovos Amygdalos
Cancers 2026, 18(5), 732; https://doi.org/10.3390/cancers18050732 - 25 Feb 2026
Viewed by 162
Abstract
Background/Objectives: Early and complete (R0) surgical resection is essential for optimal outcomes in pancreatic cancer. Optical coherence tomography (OCT) combined with artificial intelligence (AI) may offer real-time intraoperative guidance, potentially reducing reliance on frozen sections. This ex vivo study evaluated convolutional neural networks (CNNs) for distinguishing pancreatic ductal adenocarcinoma (PDAC) from normal pancreatic tissue in OCT images obtained ex vivo. Methods: Between October 2020 and April 2021, OCT scans were obtained from resected pancreatic specimens of 27 adult patients. Tumor and adjacent normal tissue were imaged using a 1310 nm OCT system, followed by histopathological confirmation. A total of 25 PDAC and 30 non-malignant scans were preprocessed and analyzed using cross-validated CNN models (ResNet50, DenseNet121, and MobileNetV2) with both 2D and 3D inputs. Results: Using five-fold stratified cross-validation on 9040 2D and 3000 3D samples (224 px resolution), the 3D DenseNet121 model achieved the highest performance, with an F1-score of 0.74, sensitivity of 72%, and specificity of 81%. Other architectures demonstrated comparable results. Conclusions: AI-assisted OCT can accurately differentiate PDAC from normal pancreatic tissue ex vivo, supporting its potential as a rapid intraoperative diagnostic adjunct. Further studies are warranted to assess its in vivo performance and utility in evaluating resection margins.
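The five-fold stratified cross-validation used here keeps the PDAC/normal ratio roughly constant in every fold. A minimal sketch of the idea (a round-robin assignment, not the paper's exact splitting code):

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    # Distribute the samples of each class round-robin across folds so
    # every fold preserves the overall class proportions.
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Mirrors the scan counts in the abstract: 25 PDAC and 30 non-malignant.
labels = ["PDAC"] * 25 + ["normal"] * 30
folds = stratified_kfold(labels, k=5)  # each fold: 5 PDAC + 6 normal
```

In practice a library routine such as scikit-learn's `StratifiedKFold` (with shuffling) would be used; the sketch only shows why stratification matters for a small, mildly imbalanced cohort.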

33 pages, 40829 KB  
Article
Lightweight Hybrid Deep Learning for Strawberry Disease Recognition and Edge Deployment Using Dynamic Multi-Scale CNN–Transformer Fusion
by Nasreddine Haqiq, Mounia Zaim, Mohamed Sbihi, Mustapha El Alaoui, Khalid El Amraoui, Youssef El Kazini, Hassane Roukhe and Lhoussaine Masmoudi
AgriEngineering 2026, 8(2), 75; https://doi.org/10.3390/agriengineering8020075 - 22 Feb 2026
Viewed by 211
Abstract
To implement successful strawberry (Fragaria × ananassa) farming, fungal diseases must be detected in a timely manner so that informed crop protection decisions can be made. While field scouting is an option, it is manual and labor-intensive. Scouting is also inaccurate and inefficient due to micro-climatic lighting and field clutter, among other factors. StrawberryDualNet is a framework that supports Integrated Pest Management and automates symptom surveillance. We present a dual-path CNN–Transformer fusion design that integrates two branches: a dynamic multi-scale convolution and a lightweight transformer. The former captures fine-grained morphological lesion textures, while the latter captures overall contextual patterns. The two representations are fused through a learnable gating mechanism to decrease visual uncertainty amongst differing symptoms. We used stratified five-fold cross-validation to evaluate the framework on five economically significant pathogens. Our approach significantly outperformed other automated scouting baselines, achieving 95.1% accuracy and 95.3% precision across Anthracnose, Gray Mold, Powdery Mildew, Rhizopus Rot, and Black Spot. The model is also far smaller than comparable networks (0.04 M parameters; 0.72 MB, 13–20× smaller than MobileNetV2/ShuffleNetV2) and can thus be deployed on devices with limited computational resources. For edge feasibility, we assessed reduced-precision inference; 16-bit floating-point quantization preserved baseline performance at 83 FPS, whereas 8-bit integer quantization caused notable accuracy degradation. Overall, the proposed local–global fusion design provides an accurate, interpretable, and scalable tool for real-time disease phenotyping in precision horticulture.
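The "learnable gating mechanism" fusing the two branches can be illustrated with a toy per-dimension gate. The parameterization below (a sigmoid over a weighted difference of the two branches) is purely hypothetical; the paper's actual gate is not reproduced here:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(local_feat, global_feat, gate_w, gate_b):
    # Per-dimension gate g blends the convolutional (local) branch and
    # the transformer (global) branch: out = g*local + (1-g)*global.
    fused = []
    for l, g_, w in zip(local_feat, global_feat, gate_w):
        g = sigmoid(w * (l - g_) + gate_b)
        fused.append(g * l + (1 - g) * g_)
    return fused

# With zero weights/bias the gate is 0.5 everywhere: an even blend.
out = gated_fusion([1.0, 0.0], [0.0, 1.0], gate_w=[0.0, 0.0], gate_b=0.0)
```

During training the gate parameters are learned, letting the network lean on lesion texture for some symptoms and on global context for others.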

27 pages, 4478 KB  
Article
Estimating Saturated Hydraulic Conductivity and Effective Net Capillary Drive Using a Portable Drip Infiltrometer Method
by Wendy L. Puente-Castillo, Lorenzo Borselli, Damiano Sarocchi, Azalea J. Ortiz-Rodriguez and Dino Torri
Geotechnics 2026, 6(1), 22; https://doi.org/10.3390/geotechnics6010022 - 14 Feb 2026
Viewed by 296
Abstract
Reliable field estimation of near-surface soil hydraulic parameters remains challenging, particularly in heterogeneous or stony soil environments. Conventional drip infiltrometers (DI) are widely used, but their field deployment may limit mobility and testing efficiency. This study presents a portable drip infiltrometer (PDI) methodology that enhances field applicability while reducing testing time without compromising parameter robustness. The approach enables estimation of saturated hydraulic conductivity (Ks), effective net capillary drive (G), and sorptivity (S) by integrating image-based analysis of ponded surface areas using the Portable Drip Infiltrometer Software (PDIS v1.5) with linear and non-linear infiltration formulations optimized through evolutionary algorithms. A total of 34 PDI field tests were conducted across two Mexican regions with contrasting climatic and soil conditions. In semi-arid environments, Ks ranged from 1.07 to 12.82 mm h−1 and G from 89.1 to 1999.99 mm, whereas in semi-warm sub-humid settings, Ks ranged from 30.68 to 117.68 mm h−1 and G from 2.65 to 121.64 mm. Results indicate that linear formulations perform adequately under relatively homogeneous conditions, while non-linear PDI formulations become necessary as surface structural complexity increases. The PDI–PDIS framework provides a rapid, repeatable, and physically grounded tool for parameterizing near-surface hydraulic processes in heterogeneous soils.
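To give a flavor of what "linear infiltration formulations" can look like, the toy sketch below fits Philip's classic two-term model, cumulative infiltration I(t) = S·√t + A·t, by linear least squares. This is an assumption for illustration only; the paper's PDIS formulations and evolutionary optimization are different:

```python
import math

def fit_philip(times, cumulative):
    # Least-squares fit of I(t) = S*sqrt(t) + A*t by solving the
    # 2x2 normal equations on the basis [sqrt(t), t].
    a11 = sum(times)                               # sum sqrt(t)^2
    a12 = sum(t * math.sqrt(t) for t in times)     # sum sqrt(t)*t
    a22 = sum(t * t for t in times)                # sum t^2
    b1 = sum(i * math.sqrt(t) for t, i in zip(times, cumulative))
    b2 = sum(i * t for t, i in zip(times, cumulative))
    det = a11 * a22 - a12 * a12
    S = (b1 * a22 - a12 * b2) / det   # sorptivity-like coefficient
    A = (a11 * b2 - a12 * b1) / det   # steady-rate coefficient
    return S, A

# Noise-free synthetic data generated with S = 2.0, A = 0.5.
ts = [0.25, 1.0, 2.25, 4.0]
Is = [2.0 * math.sqrt(t) + 0.5 * t for t in ts]
S, A = fit_philip(ts, Is)  # recovers S = 2.0, A = 0.5
```

With noise-free data from the same model, the fit recovers both coefficients exactly, which is the sense in which linear formulations "perform adequately" when the underlying process is simple.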

42 pages, 11792 KB  
Article
Automatic Childhood Pneumonia Diagnosis Based on Multi-Model Feature Fusion Using Chi-Square Feature Selection
by Amira Ouerhani, Tareq Hadidi, Hanene Sahli and Halima Mahjoubi
J. Imaging 2026, 12(2), 81; https://doi.org/10.3390/jimaging12020081 - 14 Feb 2026
Viewed by 247
Abstract
Pneumonia is one of the main reasons for child mortality, with chest radiography (CXR) being essential for its diagnosis. However, the low radiation exposure in pediatric analysis complicates the accurate detection of pneumonia, making traditional examination ineffective. Progress in medical imaging with convolutional neural networks (CNNs) has considerably improved diagnostic performance. This paper proposes an accurate pneumonia detection method based on different deep CNN architectures combined through optimal feature fusion. Enhanced VGG-19, ResNet-50, and MobileNet-V2 are trained on the most widely used pneumonia dataset, applying appropriate transfer learning and fine-tuning strategies. To create an effective feature input, the Chi-Square technique removes inappropriate features from every enhanced CNN. The resulting subsets are subsequently fused horizontally to generate a more diverse and robust feature representation for binary classification. By combining the 1000 best features from the VGG-19 and MobileNet-V2 models, the suggested approach records the best accuracy (97.59%), recall (98.33%), and F1-score (98.19%) on the test set using a supervised support vector machine (SVM) classifier. The results demonstrate that our approach provides a significant performance enhancement over previous studies using various ensemble fusion techniques while ensuring computational efficiency. We expect this fused-feature system to significantly aid timely detection of childhood pneumonia, especially within constrained healthcare systems.
(This article belongs to the Section Medical Imaging)
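The Chi-Square selection step scores each feature by how strongly it co-varies with the class label. For a binarized feature and a binary label this reduces to the familiar 2×2 contingency statistic, sketched here (a generic illustration, not the paper's pipeline, which operates on CNN feature vectors):

```python
def chi_square_score(feature, labels):
    # Chi-square statistic of a binarized feature against a binary
    # class label, from the 2x2 contingency table counts a, b, c, d.
    a = sum(1 for f, y in zip(feature, labels) if f and y)
    b = sum(1 for f, y in zip(feature, labels) if f and not y)
    c = sum(1 for f, y in zip(feature, labels) if not f and y)
    d = sum(1 for f, y in zip(feature, labels) if not f and not y)
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

# A feature perfectly aligned with the label scores n; an
# uninformative one scores 0.
informative = chi_square_score([1, 1, 0, 0], [1, 1, 0, 0])
useless = chi_square_score([1, 0, 1, 0], [1, 1, 0, 0])
```

Keeping only the top-scoring features per backbone (1000 in the paper) before horizontal fusion shrinks the SVM's input while preserving discriminative power.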

30 pages, 7886 KB  
Article
Detection and Precision Application Path Planning for Cotton Spider Mite Based on UAV Multispectral Remote Sensing
by Hua Zhuo, Mei Yang, Bei Wu, Yuqin Xiao, Jungang Ma, Yanhong Chen, Manxian Yang, Yuqing Li, Yikun Zhao and Pengfei Shi
Agriculture 2026, 16(4), 424; https://doi.org/10.3390/agriculture16040424 - 12 Feb 2026
Viewed by 185
Abstract
Cotton spider mites pose a significant threat to cotton production, while traditional manual investigation and blanket pesticide application are inefficient for precision pest management in large-scale cotton fields. To address this challenge, this study developed an integrated UAV multispectral remote sensing system for spider mite monitoring and precision spraying. Multispectral imagery was acquired from cotton fields in Shaya County, Xinjiang using UAV-mounted cameras, and vegetation indices including RDVI, MSAVI, SAVI, and OSAVI were selected through feature optimization. Comparative evaluation of three machine learning models (Logistic Regression, Random Forest, and Support Vector Machine) and two deep learning models (1D-CNN and MobileNetV2) was conducted. Considering classification performance and computational efficiency for real-time UAV deployment, Random Forest was identified as optimal, achieving 85.47% accuracy, an 85.24% F1-score, and an AUC of 0.912. The model generated centimeter-level spatial distribution maps for precise spray zone delineation. An improved NSGA-III multi-objective path optimization algorithm was proposed, incorporating PCA-based heuristic initialization, differential evolution operators, and co-evolutionary dual population strategies to optimize deadheading distance, energy consumption, operation time, turning frequency, and load balancing. An ablation study validated the effectiveness of each component, with the fully improved algorithm reducing IGD by 59.94% and increasing HV by 5.90% compared to standard NSGA-III. Field validation showed 98.5% coverage of infested areas with only 3.6% path repetition, effectively minimizing pesticide waste and phytotoxicity risks. This study established a complete technical pipeline from monitoring to application, providing a valuable reference for precision pest control in large-scale cotton production systems. The framework demonstrated robust performance across multiple field sites, though its generalization is currently limited to one geographic region and growth stage. Future work will extend its application to additional cotton varieties, growth stages, and geographic regions.
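The IGD metric reported in the ablation study measures how closely a solution set approximates a reference Pareto front: the mean distance from each reference point to its nearest solution, so lower is better. A minimal sketch with made-up 2D points:

```python
import math

def igd(reference, solutions):
    # Inverted Generational Distance: mean Euclidean distance from each
    # reference-front point to its nearest obtained solution.
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(r, s) for s in solutions)
               for r in reference) / len(reference)

# Hypothetical reference front for two minimized objectives.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
exact = igd(ref, ref)                        # solutions cover the front
offset = igd(ref, [(0.0, 2.0), (2.0, 0.0)])  # solutions miss the front
```

A 59.94% IGD reduction, as reported, therefore means the improved algorithm's solutions sit much closer to (and spread better along) the reference front than standard NSGA-III's.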

28 pages, 21245 KB  
Article
A Comparative Study of OCR Architectures for Korean License Plate Recognition: CNN–RNN-Based Models and MobileNetV3–Transformer-Based Models
by Seungju Lee and Gooman Park
Sensors 2026, 26(4), 1208; https://doi.org/10.3390/s26041208 - 12 Feb 2026
Viewed by 269
Abstract
This paper presents a systematic comparative study of optical character recognition (OCR) architectures for Korean license plate recognition under identical detection conditions. Although recent automatic license plate recognition (ALPR) systems increasingly adopt Transformer-based decoders, it remains unclear whether performance differences arise primarily from sequence modeling strategies or from backbone feature representations. To address this issue, we employ a unified YOLOv12-based license plate detector and evaluate multiple OCR configurations, including a CNN with an Attention-LSTM decoder and a MobileNetV3 with a Transformer decoder. To ensure a fair comparison, a controlled ablation study is conducted in which the CNN backbone is fixed to ResNet-18 while varying only the sequence decoder. Experiments are performed on both static image datasets and tracking-based sequential datasets, assessing recognition accuracy, error characteristics, and processing speed across GPU and embedded platforms. The results demonstrate that the effectiveness of sequence decoders is highly dataset-dependent and strongly influenced by feature quality and region-of-interest (ROI) stability. Quantitative analysis further shows that tracking-induced error accumulation dominates OCR performance in sequential recognition scenarios. Moreover, Korean license plate–specific error patterns reveal failure modes not captured by generic OCR benchmarks. Finally, experiments on embedded platforms indicate that Transformer-based OCR models introduce significant computational and memory overhead, limiting their suitability for real-time deployment. These findings suggest that robust license plate recognition requires joint consideration of detection, tracking, and recognition rather than isolated optimization of OCR architectures.
(This article belongs to the Section Sensing and Imaging)
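Error characteristics in OCR comparisons like this one are typically summarized by edit-distance-based metrics such as character error rate (CER). A generic sketch (the paper's exact evaluation protocol is not reproduced):

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance via dynamic programming.
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def cer(ref, hyp):
    # Character error rate: edits normalized by reference length.
    return edit_distance(ref, hyp) / max(len(ref), 1)

# One wrong final digit on a hypothetical Korean-style plate string.
err = cer("12가3456", "12가3457")
```

Unlike plate-level (exact-match) accuracy, CER distinguishes a single confused Hangul character or digit from a completely failed read, which is what makes per-character error patterns visible.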

6 pages, 699 KB  
Proceeding Paper
Towards Electoral Digitization: Automatic Classification of Handwritten Numbers in PREP System Records
by Miguel Angel Camargo Rojas, Gabriel Sánchez Pérez, José Portillo-Portillo, Linda Karina Toscano Medina, Aldo Hernández Suárez, Jesús Olivares Mercado, Héctor Manuel Pérez Meana and Luis Javier García Villalba
Eng. Proc. 2026, 123(1), 35; https://doi.org/10.3390/engproc2026123035 - 12 Feb 2026
Viewed by 254
Abstract
The digitization of electoral processes requires robust systems for processing handwritten numerical data from voting documents. This paper presents a convolutional neural network study for handwritten digit recognition in Mexico's PREP (Programa de Resultados Electorales Preliminares) system. Rather than individual digit classification, we approach the problem as direct 1000-class classification, treating each three-digit combination as a single class to maximize accuracy and simplify inference. We evaluated eight CNN architectures including ResNet variants, MobileNetV3, ShuffleNetV2, and EfficientNet, with ResNet-18 emerging as optimal for balancing accuracy and computational efficiency under CPU-only deployment. To address dataset challenges including class imbalance and image artifacts, we developed a customized RandAugment strategy applying photometric and limited geometric transformations that preserve semantic integrity. Our methodology demonstrates the feasibility of deploying robust digit recognition systems in resource-constrained electoral environments while maintaining high accuracy. The research provides a practical framework for automated electoral data processing adaptable to similar systems across Latin America.
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
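The 1000-class formulation is just a bijection between three-digit tallies (000–999) and class indices, which is what lets one classifier head replace three per-digit heads:

```python
def digits_to_class(d1, d2, d3):
    # Each three-digit vote tally 000-999 maps to one of 1000 classes.
    return 100 * d1 + 10 * d2 + d3

def class_to_digits(c):
    # Inverse mapping: recover the three digits from a class index.
    return c // 100, (c // 10) % 10, c % 10

cls = digits_to_class(4, 0, 7)   # tally "407" -> class 407
back = class_to_digits(cls)      # -> (4, 0, 7)
```

The trade-off is that the classifier must see enough examples of every combination, which is why the abstract pairs this formulation with an imbalance-aware augmentation strategy.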

11 pages, 683 KB  
Proceeding Paper
Adaptive Marine Predators Algorithm for Optimizing CNNs in Malaria Detection
by Abubakar Salisu Bashir, Usman Mahmud, Abdulkadir Abubakar Bichi, Abubakar Ado, Abdulrauf Garba Sharifai and Mansir Abubakar
Eng. Proc. 2026, 124(1), 25; https://doi.org/10.3390/engproc2026124025 - 11 Feb 2026
Viewed by 239
Abstract
Malaria remains a major global health burden, requiring rapid and reliable diagnostic tools to complement or replace labor-intensive manual microscopy. Although deep learning methods have demonstrated strong potential for automated malaria diagnosis, many existing approaches depend on computationally expensive transfer learning architectures or exhibit sensitivity to suboptimal hyperparameter configurations. This study proposes a lightweight automated framework for binary classification of malaria cell images using a custom Convolutional Neural Network (CNN) optimized by a novel Adaptive Marine Predators Algorithm (AMPA). The proposed AMPA integrates a state-aware adaptive control factor that dynamically adjusts step size based on population loss, thereby improving search efficiency and reducing susceptibility to local optima. The framework was evaluated on the NIH Malaria Cell Image Dataset containing 27,558 single-cell images. Experimental results show that the AMPA-optimized CNN achieves a testing accuracy of 95.00% and an Area Under the Curve of 0.986. Comparative experiments indicate that the proposed model outperforms several reported lightweight architectures, including MobileNetV2 (92.00%) and YOLO-based detectors (94.07%), while achieving performance comparable to deeper networks such as VGG-16 (94.88%), with substantially lower computational complexity. The model further attains high sensitivity (0.94) and precision (0.96), supporting its suitability as a robust and resource-efficient approach for automated malaria screening research.
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
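The AMPA itself is not public, but the general idea of a "state-aware adaptive control factor" can be caricatured as a step size that grows while the population loss improves and shrinks when it stalls. The rule and constants below are entirely hypothetical, for illustration only:

```python
def adaptive_step(base_step, prev_loss, curr_loss,
                  grow=1.2, shrink=0.5, floor=1e-4):
    # Hypothetical state-aware control: enlarge the step while the
    # population loss is improving, shrink it (down to a floor)
    # when progress stalls, to escape or refine around optima.
    if curr_loss < prev_loss:
        return base_step * grow
    return max(base_step * shrink, floor)

bigger = adaptive_step(0.1, prev_loss=1.0, curr_loss=0.8)   # improving
smaller = adaptive_step(0.1, prev_loss=1.0, curr_loss=1.1)  # stalled
```

Schemes of this family aim for the balance the abstract describes: aggressive exploration early, with automatic damping that reduces susceptibility to overshooting near a good region.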

18 pages, 7470 KB  
Article
Real-Time Bernoulli-Based Sequence Modeling for Efficient Intrusion Detection in Network Flow Data
by Abderrahman El Alami, Ismail El Batteoui and Khalid Satori
J. Cybersecur. Priv. 2026, 6(1), 32; https://doi.org/10.3390/jcp6010032 - 10 Feb 2026
Viewed by 186
Abstract
The exponential growth of network traffic and the increasing sophistication of cyberattacks have underscored the need for intelligent and real-time Intrusion Detection Systems (IDS). Traditional flow-based IDS models typically analyze each network flow independently, ignoring the temporal and contextual dependencies among flows, which reduces their ability to recognize coordinated or multi-stage attacks. To address this limitation, this paper proposes a Bernoulli-based probabilistic sequence modeling framework that integrates statistical learning with visual feature representation for efficient intrusion detection. The approach begins with a comprehensive data-preprocessing pipeline that performs feature cleaning, encoding, normalization, and sequence aggregation. Each aggregated feature vector is then transformed into a 6 × 6 grayscale image, allowing the system to capture spatial correlations among network features through convolutional operations. A logistic regression model first estimates per-flow attack probabilities, and these are combined using the Bernoulli probability law to infer the likelihood of malicious activity across flow sequences. The resulting sequence-level representations are evaluated using lightweight classifiers such as TinyNet-6 × 6, MobileNetV2, and ResNet18. Experimental results on the CICIDS2017 dataset demonstrate that the proposed method achieves high detection accuracy with reduced computational cost compared to state-of-the-art deep models, highlighting its suitability for scalable, real-time IDS deployment.
(This article belongs to the Section Security Engineering & Applications)
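One standard way to combine per-flow Bernoulli probabilities into a sequence-level score, assuming independent flows, is the complement rule: the chance that at least one flow is malicious. This sketch shows that textbook combination, not necessarily the paper's exact aggregation:

```python
def sequence_attack_probability(flow_probs):
    # Treat each flow as an independent Bernoulli trial with attack
    # probability p_i. The sequence is malicious if at least one flow
    # is: P = 1 - prod(1 - p_i).
    benign = 1.0
    for p in flow_probs:
        benign *= (1.0 - p)
    return 1.0 - benign

# Three flows scored by a per-flow classifier (made-up probabilities).
p_seq = sequence_attack_probability([0.1, 0.2, 0.5])  # ~0.64
```

Even individually weak evidence compounds quickly across a sequence, which is what lets sequence-level aggregation surface coordinated, multi-stage attacks that per-flow thresholds miss.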

49 pages, 5201 KB  
Article
CancerNet-W: A Symmetry-Driven Adaptive Deep Learning Pipeline with Dynamic Learning Rate Control for Early Breast and Cervical Cancer Detection
by Kais Khrouf, Sameh Abd El-Ghany and A. A. Abd El-Aziz
Symmetry 2026, 18(2), 314; https://doi.org/10.3390/sym18020314 - 9 Feb 2026
Viewed by 245
Abstract
Malignant lymphoma and other cancer types remain major global health concerns due to their rapid progression and potential for fatal outcomes. Conventional diagnostic approaches are often invasive and time-consuming, contributing to delays in early detection and treatment. These limitations highlight the urgent need for more accurate, efficient, and non-invasive diagnostic solutions that support timely clinical decision-making. In this study, we introduce CancerNet-W, a deep learning (DL) model built upon EfficientNet-B3 for automated classification of breast and cervical cancers using histopathological images (HIs). The model incorporates an Intelligent Learning Rate Controller (ILRC) that adaptively optimizes the LR during training, enhancing stability and performance. The preprocessing pipeline includes data augmentation, resizing, and normalization for the two datasets to improve feature extraction. The breast cancer HI classification (BreakHis) dataset contains 10,000 HIs, and the cervical cancer (SipaKMed) dataset consists of 25,000 images. Importantly, the model leverages morphological cues such as cellular symmetry, which plays a key role in differentiating normal tissue, which typically exhibits more symmetric cellular organization, from malignant tissue, where cancer progression disrupts structural symmetry and leads to notable nuclear and architectural asymmetry, a hallmark of breast and cervical malignancies. This observation aligns with established findings on symmetry breaking in tumorigenesis and nuclear pleomorphism in cancer pathology. CancerNet-W achieved remarkable performance as a general model, yielding 100% accuracy for cervical cancer and 99.89% for breast cancer, outperforming state-of-the-art models including EfficientNet-B4, EfficientNet-B5, DenseNet-201, and MobileNet-V2. To promote strong learning and reduce overfitting, stratified five-fold cross-validation was utilized for the training-validation dataset. Model selection and optimization were based solely on validation performance. An independent test set, kept separate from both training and validation, was employed for final evaluation. The resulting accuracies, 98.11% for breast cancer and 99.60% for cervical cancer, reflect the average test performance of the model trained across the five folds. Therefore, the proposed framework provides consistent and dependable diagnostic predictions while significantly reducing the time and cost associated with cancer detection, demonstrating its potential as a valuable tool for clinical applications.
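The ILRC's internals are not given in the abstract; a common baseline it can be compared against is a reduce-on-plateau controller, sketched below (an illustrative stand-in, not the paper's controller):

```python
class PlateauLRController:
    # Illustrative learning-rate controller: halve the LR after
    # `patience` consecutive epochs without validation-loss improvement.
    def __init__(self, lr=1e-3, patience=2, factor=0.5):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

ctrl = PlateauLRController(lr=0.01, patience=2)
history = [ctrl.step(l) for l in [1.0, 0.9, 0.95, 0.93, 0.91]]
```

Here the LR is halved at the fourth epoch, after two epochs without beating the best loss of 0.9; an "intelligent" controller would adapt more dynamically than this fixed schedule.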

29 pages, 11323 KB  
Article
DenseNet-CSL: An Enhanced Network for Multi-Class Recognition of Agricultural Pests, Weeds, and Crop Diseases
by Yiqi Huang, Tao Huang, Jing Du, Jinxue Qiu, Conghui Liu, Fanghao Wan, Wanqiang Qian, Xi Qiao and Liang Wang
Agriculture 2026, 16(4), 394; https://doi.org/10.3390/agriculture16040394 - 8 Feb 2026
Viewed by 237
Abstract
Ensuring food security and agricultural biosecurity increasingly depends on the rapid and accurate identification of harmful organisms that threaten crop production. Traditional identification methods rely heavily on expert knowledge, are time-consuming, and often fail in complex multi-species scenarios. To address these limitations, this study establishes a comprehensive image dataset that includes three major categories of agricultural harmful organisms—pests, weeds, and crop diseases—and proposes an enhanced convolutional neural network, DenseNet-CSL (DenseNet with Coordinate Attention, Deep Supervision, and Label Smoothing), developed based on DenseNet121 for efficient multi-class recognition. The dataset comprises 62 pest species, 28 weed species, and 30 major crop diseases, totaling 23,995 images collected under diverse growth stages, ecological conditions, and imaging environments. DenseNet-CSL incorporates three targeted improvements: a Coordinate Attention mechanism to strengthen spatial and channel feature representation, Deep Supervision to accelerate convergence and enhance generalization, and Label Smoothing Loss to regularize the output distribution and reduce overconfidence, which is beneficial under imbalanced and noisy data. Experimental results demonstrate that DenseNet-CSL achieves a precision of 81.3%, a recall of 80.1%, and an F1-score of 80% on the constructed dataset—outperforming DenseNet121, ResNet101, EfficientNetV2, and MobileNetV3—while shortening inference time by 1.36 s and adding only 1.772 MB of model parameters. These findings highlight the effectiveness of DenseNet-CSL for multi-class recognition of agricultural pests, weeds, and diseases, and underscore the importance of multi-source, multi-scene datasets for improving model robustness and generalization. The proposed framework provides a viable technical pathway for intelligent diagnosis and monitoring of agricultural harmful organisms, supporting port quarantine and agricultural biosecurity applications.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
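The Label Smoothing Loss mentioned in the abstract above has a standard form: the one-hot target is mixed with a uniform distribution before computing cross-entropy. A minimal pure-Python sketch of that idea follows; the function name, the `eps` default, and the scalar-loop formulation are illustrative assumptions, not the paper's implementation:

```python
import math

def label_smoothing_ce(logits, target, num_classes, eps=0.1):
    """Cross-entropy with label smoothing: the one-hot target is mixed
    with a uniform distribution, which softens overconfident predictions.
    `eps` is the smoothing coefficient (eps=0 recovers plain cross-entropy)."""
    # log-softmax via log-sum-exp for numerical stability
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    # smoothed target: (1 - eps) on the true class, eps spread uniformly
    loss = 0.0
    for k in range(num_classes):
        q = (1.0 - eps) * (1.0 if k == target else 0.0) + eps / num_classes
        loss -= q * log_probs[k]
    return loss
```

With `eps=0` this reduces to ordinary cross-entropy; with `eps > 0` a confidently correct prediction incurs a slightly higher loss, which is the regularizing effect the abstract attributes to the method under imbalanced and noisy data.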

28 pages, 7334 KB  
Article
I-GhostNetV3: A Lightweight Deep Learning Framework for Vision-Sensor-Based Rice Leaf Disease Detection in Smart Agriculture
by Puyu Zhang, Rui Li, Yuxuan Liu, Guoxi Sun and Chenglin Wen
Sensors 2026, 26(3), 1025; https://doi.org/10.3390/s26031025 - 4 Feb 2026
Cited by 1 | Viewed by 347
Abstract
Accurate and timely diagnosis of rice leaf diseases is crucial for smart agriculture leveraging vision sensors. However, existing lightweight convolutional neural networks (CNNs) often struggle in complex field environments, where small lesions, cluttered backgrounds, and varying illumination complicate recognition. This paper presents I-GhostNetV3, an incrementally improved GhostNetV3-based network for RGB rice leaf disease recognition. I-GhostNetV3 introduces two modular enhancements with controlled overhead: (1) Adaptive Parallel Attention (APA), which integrates edge-guided spatial and channel cues and is selectively inserted to enhance lesion-related representations (at the cost of additional computation), and (2) Fusion Coordinate-Channel Attention (FCCA), a near-neutral SE replacement that enables efficient spatial–channel feature fusion to suppress background interference. Experiments on the Rice Leaf Bacterial and Fungal Disease (RLBF) dataset show that I-GhostNetV3 achieves 90.02% Top-1 accuracy with 1.831 million parameters and 248.694 million FLOPs, outperforming MobileNetV2 and EfficientNet-B0 under our experimental setup while remaining compact relative to the original GhostNetV3. In addition, evaluation on PlantVillage-Corn serves as a supplementary transfer sanity check; further validation on independent real-field target domains and on-device profiling will be explored in future work. These results indicate that I-GhostNetV3 is a promising efficient backbone for future edge deployment in precision agriculture. Full article
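The FCCA module described above is presented as a near-neutral replacement for SE-style channel attention: pool each channel to a descriptor, derive a gate, and rescale the channel. A rough, parameter-free caricature of that channel-gating idea is sketched below; this is not the paper's FCCA (which also fuses coordinate/spatial cues and uses learned weights), and all names here are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_gate(feature_map):
    """feature_map: a list of C channels, each an HxW list of lists.
    Global-average-pool each channel to a scalar descriptor, squash it
    through a sigmoid gate, and rescale the channel by that gate --
    a parameter-free caricature of SE-style channel attention."""
    gated = []
    for channel in feature_map:
        h, w = len(channel), len(channel[0])
        mean = sum(sum(row) for row in channel) / (h * w)
        g = sigmoid(mean)  # gate in (0, 1): strong channels pass, weak ones shrink
        gated.append([[v * g for v in row] for row in channel])
    return gated
```

Real SE blocks learn the pooling-to-gate mapping with a small two-layer bottleneck; the point of the sketch is only the pool-gate-rescale flow that FCCA-style modules extend with spatial coordinates.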

26 pages, 4671 KB  
Article
MobileSteelNet: A Lightweight Steel Surface Defect Classification Network with Cross-Interactive Efficient Multi-Scale Attention
by Xiang Zou, Zhongming Liu, Chengjun Xu, Jiawei Zhang and Zhaoyu Li
Sensors 2026, 26(3), 1022; https://doi.org/10.3390/s26031022 - 4 Feb 2026
Viewed by 219
Abstract
Steel surface defect classification is critical for industrial quality control, yet existing methods struggle to balance accuracy and efficiency for real-time deployment in vision-based sensor systems. This paper presents MobileSteelNet, a lightweight deep learning framework that introduces two novel modules: a multi-scale feature fusion (MSFF) module that integrates multi-stage features, and Cross-Interactive Efficient Multi-Scale Attention (CIEMA), which unifies inter-channel interaction, parallel multi-scale spatial extraction, and grouped efficient computation. Experiments on the NEU-DET dataset demonstrate that MobileSteelNet achieves 91.36% average accuracy, surpassing ResNet-50 (88.01%) and lightweight networks, including MobileNetV2 (86.08%). Notably, it achieves 93.70% accuracy on Scratch-type defects, representing an 82.12 percentage point improvement over baseline MobileNetV1. With a model size of only 8.2 MB, MobileSteelNet maintains superior performance while meeting lightweight deployment requirements, making it suitable for edge deployment in vision sensor systems for steel manufacturing. Full article
(This article belongs to the Special Issue Advanced Sensing Technologies in Industrial Defect Detection)
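The MSFF idea above — extracting features at several scales and fusing them — can be caricatured in one dimension: pool the signal at each scale, upsample back, and combine. Everything in the sketch below (function name, the choice of scales, averaging as the fusion rule) is an illustrative assumption, not the paper's design:

```python
def multi_scale_fuse(x, scales=(1, 2)):
    """1-D toy of multi-scale feature fusion: average-pool the signal at
    several window sizes, upsample each result back by nearest-neighbour
    repetition, and average the branches. len(x) is assumed divisible by
    every scale for simplicity."""
    n = len(x)
    fused = [0.0] * n
    for s in scales:
        # average pooling with window s and stride s
        pooled = [sum(x[i:i + s]) / s for i in range(0, n, s)]
        # nearest-neighbour upsample back to length n
        up = [pooled[i // s] for i in range(n)]
        fused = [f + u for f, u in zip(fused, up)]
    return [f / len(scales) for f in fused]
```

Larger windows contribute coarse context while the stride-1 branch preserves detail; real MSFF modules do the analogous combination over 2-D feature maps from multiple network stages, typically with learned fusion weights rather than a plain average.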
