Search Results (1,079)

Search Parameters:
Keywords = improved V-Net

21 pages, 5633 KiB  
Article
Duck Egg Crack Detection Using an Adaptive CNN Ensemble with Multi-Light Channels and Image Processing
by Vasutorn Chaowalittawin, Woranidtha Krungseanmuang, Posathip Sathaporn and Boonchana Purahong
Appl. Sci. 2025, 15(14), 7960; https://doi.org/10.3390/app15147960 - 17 Jul 2025
Abstract
Duck egg quality classification is critical in farms, hatcheries, and salted egg processing plants, where cracked eggs must be identified before further processing or distribution. However, duck eggs present a unique challenge due to their white eggshells, which make cracks difficult to detect visually. In current practice, human inspectors use standard white light for crack detection, and many researchers have focused primarily on improving detection algorithms without addressing lighting limitations. Therefore, this paper presents duck egg crack detection using an adaptive convolutional neural network (CNN) model ensemble with multi-light channels. We began by developing a portable crack detection system capable of controlling various light sources to determine the optimal lighting conditions for crack visibility. A total of 23,904 images were collected and evenly distributed across four lighting channels (red, green, blue, and white), with 1494 images per channel. The dataset was then split into 836 images for training, 209 images for validation, and 449 images for testing per lighting condition. To enhance image quality prior to model training, several image pre-processing techniques were applied, including normalization, histogram equalization (HE), and contrast-limited adaptive histogram equalization (CLAHE). The Adaptive MobileNetV2 was employed to evaluate the performance of crack detection under different lighting and pre-processing conditions. The results indicated that, under red lighting, the model achieved 100.00% accuracy, precision, recall, and F1-score across almost all pre-processing methods. Under green lighting, the highest accuracy of 99.80% was achieved using the image normalization method. For blue lighting, the model reached 100.00% accuracy with the HE method. Under white lighting, the highest accuracy of 99.83% was achieved using both the original and HE methods. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
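The pre-processing chain named above (normalization, HE, CLAHE) is standard image enhancement. Below is a minimal sketch of the CLAHE step, assuming OpenCV; the file name, clip limit, and tile size are illustrative assumptions, since the abstract does not state them.

```python
import cv2
import numpy as np

img = cv2.imread("egg.png")                    # hypothetical input image
red = img[:, :, 2]                             # OpenCV loads BGR, so index 2 is the red channel
# Contrast-limited adaptive histogram equalization; parameters are common
# defaults, assumed here rather than taken from the paper.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(red)
normalized = enhanced.astype(np.float32) / 255.0   # [0, 1] normalization before training
```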

29 pages, 9069 KiB  
Article
Prediction of Temperature Distribution with Deep Learning Approaches for SM1 Flame Configuration
by Gökhan Deveci, Özgün Yücel and Ali Bahadır Olcay
Energies 2025, 18(14), 3783; https://doi.org/10.3390/en18143783 - 17 Jul 2025
Abstract
This study investigates the application of deep learning (DL) techniques for predicting temperature fields in the SM1 swirl-stabilized turbulent non-premixed flame. Two distinct DL approaches were developed using a comprehensive CFD database generated via the steady laminar flamelet model coupled with the SST k-ω turbulence model. The first approach employs a fully connected dense neural network to directly map scalar input parameters (fuel velocity, swirl ratio, and equivalence ratio) to high-resolution temperature contour images. This dense model was compared against ResNet, EfficientNetB0, and InceptionV3 networks to better understand its performance; the InceptionV3 model and the developed dense model outperformed ResNet and EfficientNetB0, and file sizes and usability were also examined. The second framework employs a U-Net-based convolutional neural network enhanced by an RGB Fusion preprocessing technique, which integrates multiple scalar fields from non-reacting (cold flow) conditions into composite images, significantly improving spatial feature extraction. Both models were trained on 80% of the CFD data and tested on the remaining 20%, which helped assess their ability to generalize to new input conditions. As in the first approach, the second approach was also benchmarked against ResNet, EfficientNetB0, and InceptionV3; the refined U-Net model stands out with its low error and small file size. The dense network is appropriate for direct parametric analyses, while the image-based U-Net model provides a rapid and scalable option for exploiting the cold flow CFD images. This framework can be further refined in future research to estimate more flow variables and tested against experimental measurements for enhanced applicability. Full article
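A minimal sketch of the first approach (scalar operating parameters mapped to a temperature field) in PyTorch; the layer widths and the 128 × 128 output grid are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DenseTempModel(nn.Module):
    def __init__(self, out_h=128, out_w=128):
        super().__init__()
        self.out_h, self.out_w = out_h, out_w
        self.net = nn.Sequential(
            nn.Linear(3, 256), nn.ReLU(),       # inputs: fuel velocity, swirl ratio, equivalence ratio
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, out_h * out_w),     # one temperature value per pixel
        )

    def forward(self, x):                       # x: (batch, 3) operating-point scalars
        return self.net(x).view(-1, 1, self.out_h, self.out_w)

model = DenseTempModel()
t_field = model(torch.tensor([[20.0, 0.5, 0.8]]))  # hypothetical operating point
```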

15 pages, 1794 KiB  
Article
Lightweight Dual-Attention Network for Concrete Crack Segmentation
by Min Feng and Juncai Xu
Sensors 2025, 25(14), 4436; https://doi.org/10.3390/s25144436 - 16 Jul 2025
Abstract
Structural health monitoring in resource-constrained environments demands crack segmentation models that match the accuracy of heavyweight convolutional networks while conforming to the power, memory, and latency limits of watt-level edge devices. This study presents a lightweight dual-attention network: a four-stage U-Net compressed to one-quarter of the channel depth and augmented, exclusively at the deepest layer, with a compact dual-attention block that couples channel excitation with spatial self-attention. The added mechanism increases computation by only 19%, limits the weight budget to 7.4 MB, and remains fully compatible with post-training INT8 quantization. On a pixel-labelled concrete crack benchmark, the proposed network achieves an intersection over union of 0.827 and an F1 score of 0.905, outperforming CrackTree, Hybrid 2020, MobileNetV3, and ESPNetv2. Ablation experiments show that the dual-attention module is the main factor driving accuracy, while refined weight initialization and a Dice-augmented loss provide slight further improvements. Hardware-in-the-loop tests validate real-time viability: 110 frames per second on a 10 W Jetson Nano and 220 frames per second on a 5 W Coral TPU, with no observable accuracy loss. The proposed network thus offers cutting-edge crack segmentation at the kiloflop scale, facilitating ongoing, on-device civil infrastructure inspection. Full article
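A sketch of a dual-attention block of the kind the abstract describes (channel excitation followed by spatial self-attention at the bottleneck); the channel count, reduction ratio, and head count are assumptions.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, ch, reduction=4, heads=4):
        super().__init__()
        self.excite = nn.Sequential(                    # squeeze-and-excitation style channel gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x):
        x = x * self.excite(x)                          # channel re-weighting
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)              # (b, h*w, c) token sequence
        out, _ = self.attn(seq, seq, seq)               # spatial self-attention
        return x + out.transpose(1, 2).view(b, c, h, w) # residual connection

block = DualAttention(64)
y = block(torch.randn(1, 64, 16, 16))                   # bottleneck-sized feature map
```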

26 pages, 7857 KiB  
Article
Investigation of an Efficient Multi-Class Cotton Leaf Disease Detection Algorithm That Leverages YOLOv11
by Fangyu Hu, Mairheba Abula, Di Wang, Xuan Li, Ning Yan, Qu Xie and Xuedong Zhang
Sensors 2025, 25(14), 4432; https://doi.org/10.3390/s25144432 - 16 Jul 2025
Abstract
Cotton leaf diseases can lead to substantial yield losses and economic burdens. Traditional detection methods are challenged by low accuracy and high labor costs. This research presents the ACURS-YOLO network, an advanced cotton leaf disease detection architecture developed on the foundation of YOLOv11. By integrating a medical image segmentation model, it effectively tackles challenges including complex background interference, the missed detection of small targets, and restricted generalization ability. Specifically, the U-Net v2 module is embedded in the backbone network to boost the multi-scale feature extraction performance in YOLOv11. Meanwhile, the CBAM attention mechanism is integrated to emphasize critical disease-related features. To lower the computational complexity, the SPPF module is substituted with SimSPPF. The C3k2_RCM module is appended for long-range context modeling, and the ARelu activation function is employed to alleviate the vanishing gradient problem. A database comprising 3000 images covering six types of cotton leaf diseases was constructed, and data augmentation techniques were applied. The experimental results show that ACURS-YOLO attains impressive performance indicators, encompassing a mAP_0.5 value of 94.6%, a mAP_0.5:0.95 value of 83.4%, 95.5% accuracy, 89.3% recall, an F1 score of 92.3%, and a frame rate of 148 frames per second. It outperforms YOLOv11 and other conventional models with regard to both detection precision and overall functionality. Ablation tests additionally validate the efficacy of each component, affirming the framework’s advantage in addressing complex detection environments. This framework provides an efficient solution for the automated monitoring of cotton leaf diseases, advancing the development of smart sensors through improved detection accuracy and practical applicability. Full article
(This article belongs to the Section Smart Agriculture)
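Of the components listed above, the CBAM attention mechanism is the most self-contained; here is a sketch with common default hyperparameters (reduction ratio 16, 7 × 7 spatial kernel), which the abstract does not specify.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, ch, reduction=16, k=7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1),
        )
        self.spatial = nn.Conv2d(2, 1, k, padding=k // 2)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                 # channel attention gate
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))       # spatial attention gate

m = CBAM(64)
y = m(torch.randn(1, 64, 32, 32))
```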

16 pages, 3953 KiB  
Article
Skin Lesion Classification Using Hybrid Feature Extraction Based on Classical and Deep Learning Methods
by Maryem Zahid, Mohammed Rziza and Rachid Alaoui
BioMedInformatics 2025, 5(3), 41; https://doi.org/10.3390/biomedinformatics5030041 - 16 Jul 2025
Abstract
This paper proposes a hybrid method for skin lesion classification that combines deep learning features with conventional descriptors such as HOG, Gabor, SIFT, and LBP. Features of interest were extracted within the tumor area using the suggested fusion methods. We tested and compared features obtained from different deep learning models coupled with HOG-based features. Dimensionality reduction and performance improvement were achieved by Principal Component Analysis, after which an SVM was used for classification. The compared methods were tested on the reference database "skin cancer-malignant-vs-benign". The results show a significant improvement in accuracy due to the complementarity between the conventional and deep learning-based methods. Specifically, the addition of HOG descriptors led to an accuracy increase of 5% for EfficientNetB0, 7% for ResNet50, 5% for ResNet101, 1% for NASNetMobile, 1% for DenseNet201, and 1% for MobileNetV2. These findings confirm that feature fusion significantly enhances performance compared to the individual application of each method. Full article
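A sketch of this fusion pipeline (CNN features concatenated with HOG, then PCA and SVM), assuming scikit-image, scikit-learn, and a torchvision ResNet50 backbone; all dimensions and hyperparameters are illustrative.

```python
import numpy as np
import torch
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from torchvision.models import resnet50

backbone = resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()                  # expose the 2048-d penultimate features
backbone.eval()

def fused_features(img_gray, img_tensor):
    """img_gray: 2-D array; img_tensor: normalized (3, 224, 224) tensor of the same lesion."""
    h = hog(img_gray, pixels_per_cell=(16, 16))    # classical HOG descriptor
    with torch.no_grad():
        d = backbone(img_tensor.unsqueeze(0)).squeeze(0).numpy()
    return np.concatenate([h, d])                  # feature-level fusion

clf = make_pipeline(PCA(n_components=128), SVC(kernel="rbf"))
# clf.fit(np.stack([fused_features(g, t) for g, t in samples]), labels)  # hypothetical data
```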

16 pages, 2355 KiB  
Article
Generalising Stock Detection in Retail Cabinets with Minimal Data Using a DenseNet and Vision Transformer Ensemble
by Babak Rahi, Deniz Sagmanli, Felix Oppong, Direnc Pekaslan and Isaac Triguero
Mach. Learn. Knowl. Extr. 2025, 7(3), 66; https://doi.org/10.3390/make7030066 - 16 Jul 2025
Abstract
Generalising deep-learning models to perform well on unseen data domains with minimal retraining remains a significant challenge in computer vision. Even when the target task, such as quantifying the number of elements in an image, stays the same, variations in data quality, shape, or form can deviate from the training conditions, often necessitating manual intervention. As a real-world industry problem, we aim to automate stock level estimation in retail cabinets. As technology advances, new cabinet models with varying shapes emerge alongside new camera types, and this evolving scenario poses a substantial obstacle to deploying long-term, scalable solutions. To surmount the challenge of generalising to new cabinet models and cameras from minimal sample images, this paper proposes a novel ensemble model that combines DenseNet-201 and Vision Transformer (ViT-B/8) architectures for stock-level classification. The novelty of our solution lies in combining a transformer with a DenseNet model to capture both the local, hierarchical details and the long-range dependencies within the images, improving generalisation accuracy with less data. Key contributions include (i) a novel DenseNet-201 + ViT-B/8 feature-level fusion, (ii) an adaptation workflow that needs only two images per class, (iii) a balanced layer-unfreezing schedule, (iv) a publicly described domain-shift benchmark, and (v) a 47 pp accuracy gain over four standard few-shot baselines. Our approach leverages fine-tuning techniques to adapt two pre-trained models to new retail cabinets (i.e., standing or horizontal) and camera types using only two images per class. Experimental results demonstrate that our method achieves high accuracy rates of 91% on new cabinets with the same camera and 89% on new cabinets with different cameras, significantly outperforming standard few-shot learning methods. Full article
(This article belongs to the Section Data)
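A sketch of the feature-level fusion, assuming the timm library provides both backbones; the head width and the five stock-level classes are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn
import timm

class FusionEnsemble(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = timm.create_model("densenet201", pretrained=True, num_classes=0)
        self.vit = timm.create_model("vit_base_patch8_224", pretrained=True, num_classes=0)
        dim = self.cnn.num_features + self.vit.num_features
        self.head = nn.Linear(dim, n_classes)      # stock-level classes (assumed count)

    def forward(self, x):                          # x: (batch, 3, 224, 224)
        feats = torch.cat([self.cnn(x), self.vit(x)], dim=1)   # feature-level fusion
        return self.head(feats)
```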

21 pages, 4147 KiB  
Article
AgriFusionNet: A Lightweight Deep Learning Model for Multisource Plant Disease Diagnosis
by Saleh Albahli
Agriculture 2025, 15(14), 1523; https://doi.org/10.3390/agriculture15141523 - 15 Jul 2025
Abstract
Timely and accurate identification of plant diseases is critical to mitigating crop losses and enhancing yield in precision agriculture. This paper proposes AgriFusionNet, a lightweight and efficient deep learning model designed to diagnose plant diseases using multimodal data sources. The framework integrates RGB and multispectral drone imagery with IoT-based environmental sensor data (e.g., temperature, humidity, soil moisture), recorded over six months across multiple agricultural zones. Built on the EfficientNetV2-B4 backbone, AgriFusionNet incorporates Fused-MBConv blocks and Swish activation to improve gradient flow, capture fine-grained disease patterns, and reduce inference latency. The model was evaluated using a comprehensive dataset composed of real-world and benchmarked samples, showing superior performance with 94.3% classification accuracy, 28.5 ms inference time, and a 30% reduction in model parameters compared to state-of-the-art models such as Vision Transformers and InceptionV4. Extensive comparisons with both traditional machine learning and advanced deep learning methods underscore its robustness, generalization, and suitability for deployment on edge devices. Ablation studies and confusion matrix analyses further confirm its diagnostic precision, even in visually ambiguous cases. The proposed framework offers a scalable, practical solution for real-time crop health monitoring, contributing toward smart and sustainable agricultural ecosystems. Full article
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)
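A sketch of the multimodal fusion idea (image backbone plus a small MLP over sensor readings); timm's closest published EfficientNetV2 variant is used as a stand-in for the paper's B4 backbone, and all head sizes are assumptions.

```python
import torch
import torch.nn as nn
import timm

class AgriFusionSketch(nn.Module):
    def __init__(self, n_classes=10, n_sensors=3):
        super().__init__()
        # Stand-in backbone: timm's EfficientNetV2-B3 (the paper uses B4).
        self.backbone = timm.create_model("tf_efficientnetv2_b3", pretrained=True, num_classes=0)
        self.sensor_mlp = nn.Sequential(nn.Linear(n_sensors, 32), nn.SiLU())  # SiLU == Swish
        self.head = nn.Linear(self.backbone.num_features + 32, n_classes)

    def forward(self, img, sensors):               # sensors: temperature, humidity, soil moisture
        z = torch.cat([self.backbone(img), self.sensor_mlp(sensors)], dim=1)
        return self.head(z)                        # disease-class logits
```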

19 pages, 3165 KiB  
Article
Majority Voting Ensemble of Deep CNNs for Robust MRI-Based Brain Tumor Classification
by Kuo-Ying Liu, Nan-Han Lu, Yung-Hui Huang, Akari Matsushima, Koharu Kimura, Takahide Okamoto and Tai-Been Chen
Diagnostics 2025, 15(14), 1782; https://doi.org/10.3390/diagnostics15141782 - 15 Jul 2025
Abstract
Background/Objectives: Accurate classification of brain tumors is critical for treatment planning and prognosis. While deep convolutional neural networks (CNNs) have shown promise in medical imaging, few studies have systematically compared multiple architectures or integrated ensemble strategies to improve diagnostic performance. This study aimed to evaluate various CNN models and optimize classification performance using a majority voting ensemble approach on T1-weighted MRI brain images. Methods: Seven pretrained CNN architectures were fine-tuned to classify four categories: glioblastoma, meningioma, pituitary adenoma, and no tumor. Each model was trained using two optimizers (SGDM and ADAM) and evaluated on a public dataset split into training (70%), validation (10%), and testing (20%) subsets, and further validated on an independent external dataset to assess generalizability. A majority voting ensemble was constructed by aggregating predictions from all 14 trained models. Performance was assessed using accuracy, Kappa coefficient, true positive rate, precision, confusion matrix, and ROC curves. Results: Among individual models, GoogLeNet and Inception-v3 with ADAM achieved the highest classification accuracy (0.987). However, the ensemble approach outperformed all standalone models, achieving an accuracy of 0.998, a Kappa coefficient of 0.997, and AUC values above 0.997 for all tumor classes. The ensemble demonstrated improved sensitivity, precision, and overall robustness. Conclusions: The majority voting ensemble of diverse CNN architectures significantly enhanced the performance of MRI-based brain tumor classification, surpassing that of any single model. These findings underscore the value of model diversity and ensemble learning in building reliable AI-driven diagnostic tools for neuro-oncology. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
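The majority-voting step itself reduces to a per-image mode over the 14 models' predicted labels; a minimal sketch with stand-in predictions:

```python
import numpy as np
from scipy import stats

# preds: one row of integer class labels per trained model -> shape (14, N)
preds = np.random.randint(0, 4, size=(14, 100))       # stand-in for real model outputs
vote, _ = stats.mode(preds, axis=0, keepdims=False)   # per-image majority class
```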

22 pages, 6645 KiB  
Article
Visual Detection on Aircraft Wing Icing Process Using a Lightweight Deep Learning Model
by Yang Yan, Chao Tang, Jirong Huang, Zhixiong Cen and Zonghong Xie
Aerospace 2025, 12(7), 627; https://doi.org/10.3390/aerospace12070627 - 12 Jul 2025
Abstract
Aircraft wing icing significantly threatens aviation safety, causing substantial losses to the aviation industry each year. The high transparency and blurred edges of icing areas in wing images pose challenges for machine-vision-based icing detection. To address these challenges, this study proposes a detection model, Wing Icing Detection DeeplabV3+ (WID-DeeplabV3+), for efficient and precise detection of icing on the aircraft wing leading edge under natural lighting conditions. WID-DeeplabV3+ adopts the lightweight MobileNetV3 as its backbone network to enhance the extraction of edge features in icing areas. Ghost Convolution and Atrous Spatial Pyramid Pooling modules are incorporated to reduce model parameters and computational complexity. The model is optimized using transfer learning, where pre-trained weights are utilized to accelerate convergence and enhance performance. Experimental results show that WID-DeeplabV3+ segments the icing edge in a 1920 × 1080 image within 0.03 s. The model achieves an accuracy of 97.15%, an IoU of 94.16%, a precision of 97%, and a recall of 96.96%, representing respective improvements of 1.83%, 3.55%, 1.79%, and 2.04% over DeeplabV3+. The number of parameters and the computational complexity are reduced by 92% and 76%, respectively. With high accuracy, superior IoU, and fast inference speed, WID-DeeplabV3+ provides an effective solution for wing icing detection. Full article
(This article belongs to the Section Aeronautics)
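torchvision ships a DeepLabV3 head over a MobileNetV3 backbone, which makes a convenient baseline for this kind of model; the paper's Ghost Convolution and modified ASPP are not reproduced in this sketch.

```python
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# Binary segmentation (ice vs. background); weights_backbone loads an
# ImageNet-pretrained MobileNetV3, mirroring the transfer-learning setup.
model = deeplabv3_mobilenet_v3_large(weights_backbone="IMAGENET1K_V1", num_classes=2)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 512, 512))["out"]  # logits; smaller than the paper's 1920x1080 frames
```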

19 pages, 1442 KiB  
Article
Hyperspectral Imaging for Enhanced Skin Cancer Classification Using Machine Learning
by Teng-Li Lin, Arvind Mukundan, Riya Karmakar, Praveen Avala, Wen-Yen Chang and Hsiang-Chen Wang
Bioengineering 2025, 12(7), 755; https://doi.org/10.3390/bioengineering12070755 - 11 Jul 2025
Abstract
Objective: The classification of skin cancer is very helpful for its early diagnosis and treatment, given the complexity involved in differentiating actinic keratosis (AK) from basal cell carcinoma (BCC) and seborrheic keratosis (SK). These conditions are generally not easily distinguishable due to their comparable clinical presentations. Method: This paper presents the Spectrum-Aided Vision Enhancer (SAVE), a new hyperspectral approach for enhancing the visualization of skin lesions. SAVE can convert any RGB image into a narrow-band image (NBI) by leveraging hyperspectral imaging (HSI), increasing the contrast of cancerous lesion areas relative to normal tissue and thereby improving classification accuracy. The study investigates ten machine learning algorithms for the classification of AK, BCC, and SK: a convolutional neural network (CNN), random forest (RF), you only look once (YOLO) version 8, a support vector machine (SVM), ResNet50, MobileNetV2, logistic regression, an SVM with a stochastic gradient descent (SGD) classifier, an SVM with a logarithmic (LOG) classifier, and an SVM with a polynomial classifier, assessing the capability of the system to differentiate AK from BCC and SK with heightened accuracy. Results: The results demonstrated that SAVE enhanced classification performance, increasing accuracy, sensitivity, and specificity compared to a traditional RGB imaging approach. Conclusions: This advanced method offers dermatologists a tool for early and accurate diagnosis, reducing the likelihood of misclassification and improving patient outcomes. Full article
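For the classical classifiers in this list, a comparison harness is straightforward; a scikit-learn sketch follows, where mapping the SGD/LOG variants to SGDClassifier losses is an interpretation, and feature extraction plus the SAVE conversion are outside its scope.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

models = {
    "RF": RandomForestClassifier(),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM-SGD": SGDClassifier(loss="hinge"),       # linear SVM trained by SGD
    "SVM-LOG": SGDClassifier(loss="log_loss"),    # logistic-loss variant
    "SVM-Poly": SVC(kernel="poly"),
}
# for name, m in models.items():                  # X, y: hypothetical lesion features/labels
#     print(name, cross_val_score(m, X, y, cv=5).mean())
```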

28 pages, 14588 KiB  
Article
CAU2DNet: A Dual-Branch Deep Learning Network and a Dataset for Slum Recognition with Multi-Source Remote Sensing Data
by Xi Lyu, Chenyu Zhang, Lizhi Miao, Xiying Sun, Xinxin Zhou, Xinyi Yue, Zhongchang Sun and Yueyong Pang
Remote Sens. 2025, 17(14), 2359; https://doi.org/10.3390/rs17142359 - 9 Jul 2025
Abstract
The efficient and precise identification of urban slums is a significant challenge for urban planning and sustainable development, as their morphological diversity and complex spatial distribution make traditional remote sensing inversion methods difficult to apply. Current deep learning (DL) methods mainly face challenges such as limited receptive fields and insufficient sensitivity to spatial locations when integrating multi-source remote sensing data, and high-quality datasets that integrate multi-spectral and geoscientific indicators to support them are scarce. In response to these issues, this study proposes a DL model, the coordinate-attentive U2-DeepLab network (CAU2DNet), that integrates multi-source remote sensing data. The model combines the multi-scale feature extraction capability of U2-Net with the global receptive field advantage of DeepLabV3+ through a dual-branch architecture. The spatial semantic perception capability is enhanced by introducing the CoordAttention mechanism, and ConvNextV2 is adopted to optimize the backbone network of the DeepLabV3+ branch, thereby improving the modeling of low-resolution geoscientific features. The two branches are combined by decision-level fusion: the outputs of each branch are weighted and summed using learnable weights to obtain the final output feature map. Furthermore, because no multi-spectral slum dataset was available, this study constructs the São Paulo slums dataset for model training. This dataset covers 7978 samples of 512 × 512 pixels, integrating high-resolution RGB images, Normalized Difference Vegetation Index (NDVI)/Modified Normalized Difference Water Index (MNDWI) geoscientific indicators, and POI infrastructure data, significantly enriching the available multi-source slum remote sensing data. Experiments show that CAU2DNet achieves an intersection over union (IoU) of 0.6372 and an F1 score of 77.97% on the São Paulo slums dataset, a significant improvement in accuracy over the baseline model. Ablation experiments verify that the improvements made in this study yield a 16.12% increase in precision. Moreover, CAU2DNet also achieved the best results on all metrics in cross-domain testing on the WHU building dataset, further confirming the model’s generalizability. Full article
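A minimal sketch of the decision-level fusion described above: two branch outputs combined with learnable scalar weights. The toy 1 × 1 convolutions stand in for the U2-Net and DeepLabV3+ branches.

```python
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    def __init__(self, branch_a, branch_b):
        super().__init__()
        self.a, self.b = branch_a, branch_b
        self.w = nn.Parameter(torch.tensor([0.5, 0.5]))        # learnable fusion weights

    def forward(self, x):
        return self.w[0] * self.a(x) + self.w[1] * self.b(x)  # weighted sum of branch outputs

fusion = DecisionFusion(nn.Conv2d(3, 1, 1), nn.Conv2d(3, 1, 1))  # toy stand-in branches
mask_logits = fusion(torch.randn(1, 3, 64, 64))
```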

16 pages, 1347 KiB  
Article
Detection of Helicobacter pylori Infection in Histopathological Gastric Biopsies Using Deep Learning Models
by Rafael Parra-Medina, Carlos Zambrano-Betancourt, Sergio Peña-Rojas, Lina Quintero-Ortiz, Maria Victoria Caro, Ivan Romero, Javier Hernan Gil-Gómez, John Jaime Sprockel, Sandra Cancino and Andres Mosquera-Zamudio
J. Imaging 2025, 11(7), 226; https://doi.org/10.3390/jimaging11070226 - 7 Jul 2025
Abstract
Traditionally, Helicobacter pylori (HP) gastritis has been diagnosed by pathologists through the examination of gastric biopsies under optical microscopy with standard hematoxylin and eosin (H&E) staining. However, with the adoption of digital pathology, the identification of HP faces certain limitations, particularly due to insufficient resolution in some scanned images. Moreover, interobserver variability has been well documented in the traditional diagnostic approach, which may further complicate consistent interpretation. In this context, deep convolutional neural network (DCNN) models are showing promising results in the automated detection of this infection in whole-slide images (WSIs). The aim of the present article is to detect the presence of HP infection in our own institutional dataset of histopathological gastric biopsy samples using different pretrained, recognized DCNNs and AutoML approaches. The dataset comprises 100 H&E-stained WSIs of gastric biopsies; HP infection had previously been confirmed by immunohistochemistry. A total of 45,795 patches were selected for model development. InceptionV3, ResNet50, and VGG16 achieved AUC (area under the curve) values of 1.0. However, InceptionV3 showed superior metrics such as accuracy (97%), recall (100%), F1 score (97%), and MCC (93%). BoostedNet and AutoKeras achieved accuracy, precision, recall, specificity, and F1 scores below 85%. The InceptionV3 model was used for external validation, and the predictions across all patches yielded a global accuracy of 78%. In conclusion, DCNN models showed stronger potential for diagnosing HP in gastric biopsies than the AutoML approaches. However, due to variability across pathology applications, no single model is universally optimal, and a problem-specific approach is essential. With growing WSI adoption, DL can improve diagnostic accuracy, reduce variability, and streamline pathology workflows through automation. Full article
(This article belongs to the Section Medical Imaging)
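Patch-level predictions still have to be aggregated into a slide-level call; one simple rule is sketched below with assumed 0.5 thresholds (the abstract does not state the aggregation rule used).

```python
import numpy as np

def slide_prediction(patch_probs, patch_threshold=0.5, slide_threshold=0.5):
    """Call a WSI positive when enough of its patches are called positive."""
    positive = np.asarray(patch_probs) >= patch_threshold   # per-patch calls
    return positive.mean() >= slide_threshold               # slide-level call

print(slide_prediction([0.9, 0.2, 0.7, 0.8]))   # -> True (3 of 4 patches positive)
```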

14 pages, 1023 KiB  
Article
Economic Impact of Abortions in Dairy Cow Herds
by Osvaldo Palma, Lluís M. Plà-Aragonès, Alejandro Mac Cawley and Víctor M. Albornoz
Vet. Sci. 2025, 12(7), 645; https://doi.org/10.3390/vetsci12070645 - 7 Jul 2025
Abstract
This study explores Markov decision methods for the dairy cow replacement problem, incorporating the special characteristics of two types of abortion, arising from different sanitary causes, that influence the economic, production, and reproduction performance of these animals. The model was successfully validated against other models published in the literature. It was implemented in Python (v3.13) to ease future extensions with the inclusion of new variables. The results provide tools that allow the veterinarian to explore more realistic scenarios by running a Markov simulation model that avoids the complexities leading to the dimensionality problem in dynamic optimization models. In our study, the economic value of the herd considering RA and NLA abortions shows a maximum net benefit of USD 178.77 per cow, with non-pregnant cows slaughtered upon reaching six months of lactation, a value within the range reported in the literature we identified. At the optimum, the replacement model extended with abortion generates a difference of USD 0.69 per cow per month compared to the model without the special abortion features. The changes in the net present value of each cow according to the month of culling depend on the variability of milk income, slaughter value, and heifer replacement values, suggesting that any measure seeking to improve the economic benefit of dairy cows should take greater account of these variables. Full article
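A toy value-iteration sketch of the keep-versus-replace decision underlying such models; the 12-month horizon, margins, and replacement payoff are illustrative only, far simpler than the paper's lactation/abortion state space.

```python
import numpy as np

n, gamma = 12, 0.99                       # months in lactation, monthly discount factor
r_keep = np.linspace(200.0, 50.0, n)      # declining monthly net margin if the cow is kept (USD)
r_replace = 30.0                          # salvage value minus heifer cost (USD)

V = np.zeros(n)
for _ in range(1000):
    keep = r_keep + gamma * np.append(V[1:], V[0])   # keep: advance one month (wrap = new lactation)
    replace = r_replace + gamma * V[0]               # replace: a fresh cow restarts at month 0
    V = np.maximum(keep, replace)                    # Bellman update over both actions

policy = np.where(keep >= replace, "keep", "replace")
print(policy)                                        # optimal action per lactation month
```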

31 pages, 2044 KiB  
Article
Optimized Two-Stage Anomaly Detection and Recovery in Smart Grid Data Using Enhanced DeBERTa-v3 Verification System
by Xiao Liao, Wei Cui, Min Zhang, Aiwu Zhang and Pan Hu
Sensors 2025, 25(13), 4208; https://doi.org/10.3390/s25134208 - 5 Jul 2025
Abstract
The increasing sophistication of cyberattacks on smart grid infrastructure demands advanced anomaly detection and recovery systems that balance high recall rates with acceptable precision while providing reliable data restoration capabilities. This study presents an optimized two-stage anomaly detection and recovery system combining an enhanced TimerXL detector with a DeBERTa-v3-based verification and recovery mechanism. The first stage employs an optimized increment-based detection algorithm achieving 95.0% for recall and 54.8% for precision through multidimensional analysis. The second stage leverages a modified DeBERTa-v3 architecture with comprehensive 25-dimensional feature engineering per variable to verify potential anomalies, improving the precision to 95.1% while maintaining 84.1% for recall. Key innovations include (1) a balanced loss function combining focal loss (α = 0.65, γ = 1.2), Dice loss (weight = 0.5), and contrastive learning (weight = 0.03) to reduce over-rejection by 73.4%; (2) an ensemble verification strategy using multithreshold voting, achieving 91.2% accuracy; (3) optimized sample weighting prioritizing missed positives (weight = 10.0); (4) comprehensive feature extraction, including frequency domain and entropy features; and (5) integration of a generative time series model (TimER) for high-precision recovery of tampered data points. Experimental results on 2000 hourly smart grid measurements demonstrate an F1-score of 0.873 ± 0.114 for detection, representing a 51.4% improvement over ARIMA (0.576), 621% over LSTM-AE (0.121), 791% over standard Anomaly Transformer (0.098), and 904% over TimesNet (0.087). The recovery mechanism achieves remarkably precise restoration with a mean absolute error (MAE) of only 0.0055 kWh, representing a 99.91% improvement compared to traditional ARIMA models and 98.46% compared to standard Anomaly Transformer models. We also explore an alternative implementation using the Lag-LLaMA architecture, which achieves an MAE of 0.2598 kWh. The system maintains real-time capability with a 66.6 ± 7.2 ms inference time, making it suitable for operational deployment. Sensitivity analysis reveals robust performance across anomaly magnitudes (5–100 kWh), with the detection accuracy remaining above 88%. Full article
(This article belongs to the Section Electronic Sensors)
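A sketch of the balanced loss in point (1), using the weights quoted above (focal α = 0.65, γ = 1.2; Dice 0.5; contrastive 0.03); the contrastive term here is a generic placeholder, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.65, gamma=1.2):
    p = torch.sigmoid(logits)
    pt = p * targets + (1 - p) * (1 - targets)           # probability of the true class
    at = alpha * targets + (1 - alpha) * (1 - targets)   # class balancing factor
    return (-at * (1 - pt) ** gamma * torch.log(pt.clamp_min(1e-8))).mean()

def dice_loss(logits, targets, eps=1.0):
    p = torch.sigmoid(logits)
    return 1 - (2 * (p * targets).sum() + eps) / (p.sum() + targets.sum() + eps)

def combined_loss(logits, targets, z1, z2, same_label):
    contrast = F.cosine_embedding_loss(z1, z2, same_label)  # placeholder contrastive term
    return focal_loss(logits, targets) + 0.5 * dice_loss(logits, targets) + 0.03 * contrast
```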

31 pages, 20469 KiB  
Article
YOLO-SRMX: A Lightweight Model for Real-Time Object Detection on Unmanned Aerial Vehicles
by Shimin Weng, Han Wang, Jiashu Wang, Changming Xu and Ende Zhang
Remote Sens. 2025, 17(13), 2313; https://doi.org/10.3390/rs17132313 - 5 Jul 2025
Abstract
Unmanned Aerial Vehicles (UAVs) face a significant challenge in balancing high accuracy and high efficiency when performing real-time object detection tasks, especially amidst intricate backgrounds, diverse target scales, and stringent onboard computational resource constraints. To tackle these difficulties, this study introduces YOLO-SRMX, a lightweight real-time object detection framework specifically designed for infrared imagery captured by UAVs. Firstly, the model utilizes ShuffleNetV2 as an efficient lightweight backbone and integrates the novel Multi-Scale Dilated Attention (MSDA) module. This strategy not only facilitates a substantial 46.4% reduction in parameter volume but also, through the flexible adaptation of receptive fields, boosts the model’s robustness and precision in multi-scale object recognition tasks. Secondly, within the neck network, multi-scale feature extraction is facilitated through the design of novel composite convolutions, ConvX and MConv, based on a “split–differentiate–concatenate” paradigm. Furthermore, the lightweight GhostConv is incorporated to reduce model complexity. By synthesizing these principles, a novel composite receptive field lightweight convolution, DRFAConvP, is proposed to further optimize multi-scale feature fusion efficiency and promote model lightweighting. Finally, the Wise-IoU loss function is adopted to replace the traditional bounding box loss. This is coupled with a dynamic non-monotonic focusing mechanism formulated using the concept of outlier degrees. This mechanism intelligently assigns elevated gradient weights to anchor boxes of moderate quality by assessing their relative outlier degree, while concurrently diminishing the gradient contributions from both high-quality and low-quality anchor boxes. Consequently, this approach enhances the model’s localization accuracy for small targets in complex scenes. Experimental evaluations on the HIT-UAV dataset corroborate that YOLO-SRMX achieves an mAP50 of 82.8%, representing a 7.81% improvement over the baseline YOLOv8s model; an F1 score of 80%, marking a 3.9% increase; and a substantial 65.3% reduction in computational cost (GFLOPs). YOLO-SRMX demonstrates an exceptional trade-off between detection accuracy and operational efficiency, thereby underscoring its considerable potential for efficient and precise object detection on resource-constrained UAV platforms. Full article
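Of the lightweight pieces named above, GhostConv is the most standard; here is a sketch of a Ghost Convolution block with the common 1:1 split between primary and cheap depthwise features (the ratio is an assumption).

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, c_in, c_out, k=1):            # assumes an even c_out
        super().__init__()
        c_prim = c_out // 2
        self.primary = nn.Sequential(                # slim primary convolution
            nn.Conv2d(c_in, c_prim, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_prim), nn.SiLU())
        self.cheap = nn.Sequential(                  # cheap depthwise "ghost" features
            nn.Conv2d(c_prim, c_prim, 5, padding=2, groups=c_prim, bias=False),
            nn.BatchNorm2d(c_prim), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # concatenate primary + ghost maps

g = GhostConv(32, 64)
y = g(torch.randn(1, 32, 80, 80))                    # -> (1, 64, 80, 80)
```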
