Search Results (3,654)

Search Parameters:
Keywords = loss classification

26 pages, 5753 KB  
Article
An Optimized Few-Shot Learning Framework for Fault Diagnosis in Milling Machines
by Faisal Saleem, Muhammad Umar and Jong-Myon Kim
Machines 2025, 13(11), 1010; https://doi.org/10.3390/machines13111010 (registering DOI) - 2 Nov 2025
Abstract
Reliable fault diagnosis of milling machines is essential for maintaining operational stability and cost-effective maintenance; however, it remains challenging due to limited labeled data and the highly non-stationary nature of acoustic emission (AE) signals. This study introduces an optimized Few-Shot Learning framework (FSL) that integrates time–frequency analysis with attention-guided representation learning and distribution-aware classification for data-efficient fault detection. The framework converts AE signals into Continuous Wavelet Transform (CWT) scalograms, which are processed using a self-attention-enhanced ResNet-50 backbone to capture both local texture features and long-range dependencies in the signal. Adaptive prototype computation with learnable importance weighting refines class representations, while Mahalanobis distance-based matching ensures robust alignment between query and prototype embeddings under limited sample conditions. To further strengthen discriminability, contrastive loss with hard negative mining enforces compact intra-class clustering and clear inter-class separation. Comprehensive experiments under 7-way 5-shot settings and 5-fold stratified cross-validation demonstrate consistent and reliable performance, achieving a mean accuracy of 98.86% ± 0.97% (95% CI: [98.01%, 99.71%]). Additional evaluations across multiple spindle speeds (660 rpm and 1440 rpm) confirm that the model generalizes effectively under varying operating conditions. Grad-CAM++ activation maps further illustrate that the network focuses on physically meaningful fault-related regions, enhancing interpretability. The results verify that the proposed framework achieves robust, scalable, and interpretable fault diagnosis using minimal labeled data, offering a practical solution for predictive maintenance in modern intelligent manufacturing environments. Full article
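
A minimal sketch of the prototype-matching step this abstract describes, assuming precomputed support and query embeddings; `mahalanobis_prototypes` is an illustrative name, the pooled shared covariance is a simplification, and the paper's learnable importance weighting is omitted:

```python
import numpy as np

def mahalanobis_prototypes(support, labels, query, eps=1e-3):
    """Classify query embeddings by Mahalanobis distance to class prototypes."""
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # Shared covariance pooled over all support embeddings, regularized
    # so the inverse exists in the few-shot (few-sample) regime.
    cov = np.cov(support, rowvar=False) + eps * np.eye(support.shape[1])
    inv = np.linalg.inv(cov)
    diff = query[:, None, :] - protos[None, :, :]      # (Q, C, D)
    d2 = np.einsum('qcd,de,qce->qc', diff, inv, diff)  # squared distances
    logits = -d2
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return classes[np.argmax(probs, axis=1)], probs
```

In a 7-way 5-shot episode, `support` would hold the 35 labeled embeddings and `query` the held-out samples to score.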

26 pages, 1512 KB  
Article
Pulse-Driven Spin Paradigm for Noise-Aware Quantum Classification
by Carlos Riascos-Moreno, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computers 2025, 14(11), 475; https://doi.org/10.3390/computers14110475 (registering DOI) - 1 Nov 2025
Abstract
Quantum machine learning (QML) integrates quantum computing with classical machine learning. Within this domain, QML-CQ classification tasks, where classical data is processed by quantum circuits, have attracted particular interest for their potential to exploit high-dimensional feature maps, entanglement-enabled correlations, and non-classical priors. Yet, practical realizations remain constrained by the Noisy Intermediate-Scale Quantum (NISQ) era, where limited qubit counts, gate errors, and coherence losses necessitate frugal, noise-aware strategies. The Data Re-Uploading (DRU) algorithm has emerged as a strong NISQ-compatible candidate, offering universal classification capabilities with minimal qubit requirements. While DRU has been experimentally demonstrated on ion-trap, photonic, and superconducting platforms, no implementations exist for spin-based quantum processing units (QPU-SBs), despite their scalability potential via CMOS-compatible fabrication and recent demonstrations of multi-qubit processors. Here, we present a pulse-level, noise-aware DRU framework for spin-based QPUs, designed to bridge the gap between gate-level models and realistic spin-qubit execution. Our approach includes (i) compiling DRU circuits into hardware-proximate, time-domain controls derived from the Loss–DiVincenzo Hamiltonian, (ii) explicitly incorporating coherent and incoherent noise sources through pulse perturbations and Lindblad channels, (iii) enabling systematic noise-sensitivity studies across one-, two-, and four-spin configurations via continuous-time simulation, and (iv) developing a noise-aware training pipeline that benchmarks gate-level baselines against spin-level dynamics using information-theoretic loss functions. Numerical experiments show that our simulations reproduce gate-level dynamics with fidelities near unity while providing a richer error characterization under realistic noise. Moreover, divergence-based losses significantly enhance classification accuracy and robustness compared to fidelity-based metrics. Together, these results establish the proposed framework as a practical route for advancing DRU on spin-based platforms and motivate future work on error-attentive training and spin–quantum-dot noise modeling. Full article
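
The data re-uploading idea can be illustrated at gate level with a single qubit: the same datum is encoded repeatedly, interleaved with trainable rotations. This is a deliberately minimal, noise-free sketch (Ry rotations only; `dru_score` is a hypothetical helper), not the pulse-level Loss–DiVincenzo simulation the paper develops:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y-axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def dru_score(x, thetas):
    """Data re-uploading forward pass: alternate data-encoding rotations
    with trainable rotations, then return |<0|psi>|^2 as a class score."""
    state = np.array([1.0, 0.0], dtype=complex)
    for w in thetas:           # one layer per trainable angle
        state = ry(x) @ state  # re-upload the (scalar) datum
        state = ry(w) @ state  # trainable rotation
    return abs(state[0]) ** 2
```

Training would adjust `thetas` so the scores separate the classes; the paper replaces these ideal gates with time-domain pulses and Lindblad noise channels.
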
24 pages, 2473 KB  
Article
Estimating Indirect Accident Cost Using a Two-Tiered Machine Learning Algorithm for the Construction Industry
by Ayesha Munira Chowdhury, Jurng-Jae Yee, Sang I. Park, Eun-Ju Ha and Jae-Ho Choi
Buildings 2025, 15(21), 3947; https://doi.org/10.3390/buildings15213947 (registering DOI) - 1 Nov 2025
Abstract
Accurately estimating total accident costs is essential for managing construction safety budgets. While direct costs are well-documented, indirect costs—such as productivity loss, material damage, and legal expenses—are difficult to predict and often overlooked. Traditional ratio-based methods lack accuracy due to variability across projects and accident types. This study introduces a two-tiered machine learning framework for real-time indirect cost estimation. In the first tier, classification models (decision tree, random forest, k-nearest neighbor, and XGBoost) predict total cost categories; in the second, regression models (decision tree, random forest, gradient boosting, and light gradient boosting machine (LightGBM)) estimate indirect costs. Using a dataset of 1036 construction accidents collected over two years, the model achieved accuracies above 87% in classification and an R² of 0.95 with a training MSE of 0.21 in regression. Compared to conventional statistical and single-step models, it demonstrated superior predictive performance, reducing average deviations to $362.63 and sometimes achieving zero deviation. This framework enables more precise, real-time estimation of hidden costs, promoting better safety investment, reduced financial risk, and adaptive learning through retraining. When integrated with a national accident cost database, it supports ongoing improvement and informed risk management for construction stakeholders. Full article
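
The two-tiered routing (assign a cost category first, then apply a category-specific regressor) can be sketched with deliberately simple stand-ins. The paper uses tree ensembles; this hypothetical `TwoTierEstimator` substitutes a nearest-centroid tier 1 and per-category least squares for tier 2 to show only the routing structure:

```python
import numpy as np

class TwoTierEstimator:
    """Tier 1 assigns a cost category; tier 2 applies a per-category
    linear model. Simple stand-ins for the paper's tree-based learners."""
    def fit(self, X, cat, y):
        self.cats = np.unique(cat)
        self.centroids = np.stack([X[cat == c].mean(axis=0) for c in self.cats])
        # Per-category least-squares fit with a bias column.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        self.coefs = {c: np.linalg.lstsq(Xb[cat == c], y[cat == c], rcond=None)[0]
                      for c in self.cats}
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None]) ** 2).sum(axis=-1)
        cat = self.cats[d.argmin(axis=1)]          # tier 1: route by category
        Xb = np.hstack([X, np.ones((len(X), 1))])
        yhat = np.array([xb @ self.coefs[c] for xb, c in zip(Xb, cat)])
        return cat, yhat                           # tier 2: per-category estimate
```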

26 pages, 4723 KB  
Article
Time-Frequency-Based Separation of Earthquake and Noise Signals on Real Seismic Data: EMD, DWT and Ensemble Classifier Approaches
by Yunus Emre Erdoğan and Ali Narin
Sensors 2025, 25(21), 6671; https://doi.org/10.3390/s25216671 (registering DOI) - 1 Nov 2025
Abstract
Earthquakes are sudden and destructive natural events caused by tectonic movements in the Earth’s crust. Although they cannot be predicted with certainty, rapid and reliable detection is essential to reduce loss of life and property. This study aims to automatically distinguish earthquake and noise signals from real seismic data by analyzing time-frequency features. Signals were scaled using z-score normalization, and features were extracted with Empirical Mode Decomposition (EMD), Discrete Wavelet Transform (DWT), and combined EMD+DWT methods. Feature selection methods such as Lasso, ReliefF, and Student’s t-test were applied to identify the most discriminative features. Classification was performed with Ensemble Bagged Trees, Decision Trees, Random Forest (RF), k-Nearest Neighbors (k-NN), and Support Vector Machines (SVM). The highest performance was achieved using the RF classifier with the Lasso-based EMD+DWT feature set, reaching 100% accuracy, specificity, and sensitivity. Overall, DWT and EMD+DWT features yielded higher performance than EMD alone. While k-NN and SVM were less effective, tree-based methods achieved superior results. Moreover, Lasso and ReliefF outperformed Student’s t-test. These findings show that time-frequency-based features are crucial for separating earthquake signals from noise and provide a basis for improving real-time detection. The study contributes to the academic literature and holds significant potential for integration into early warning and earthquake monitoring systems. Full article
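
The feature pipeline (z-score scaling, wavelet decomposition, statistics per sub-band) can be sketched as follows, using a one-level Haar transform as a stand-in for the paper's DWT and an illustrative four-feature set:

```python
import numpy as np

def zscore(x):
    """Scale a signal to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    x = x[:len(x) // 2 * 2]                 # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (detail)
    return a, d

def tf_features(sig):
    """Illustrative statistical features from the two sub-bands, in the
    spirit of the earthquake-vs-noise feature sets described above."""
    a, d = haar_dwt(zscore(sig))
    return np.array([a.var(), d.var(), np.abs(a).mean(), np.abs(d).mean()])
```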

27 pages, 2119 KB  
Article
Analyzing Surface Spectral Signature Shifts in Fire-Affected Areas of Elko County Nevada
by Ibtihaj Ahmad and Haroon Stephen
Fire 2025, 8(11), 429; https://doi.org/10.3390/fire8110429 (registering DOI) - 31 Oct 2025
Abstract
This study investigates post-fire vegetation transitions and spectral responses in the Snowstorm Fire (2017) and South Sugarloaf Fire (2018) in Nevada using Landsat 8 Operational Land Imager (OLI) surface reflectance imagery and unsupervised ISODATA classification. By comparing pre-fire and post-fire conditions, we have assessed changes in vegetation composition, spectral signatures, and the emergence of novel land cover types. The results revealed widespread conversion of shrubland and conifer-dominated systems to herbaceous cover with significant reductions in near-infrared reflectance and elevated shortwave infrared responses, indicative of vegetation loss and surface alteration. In the South Sugarloaf Fire, three new spectral classes emerged post-fire, representing ash-dominated, charred, and sparsely vegetated conditions. A similar new class emerged in Snowstorm, highlighting the spatial heterogeneity of fire effects. Class stability analysis confirmed low persistence of shrub and conifer types, with grassland and herbaceous classes showing dominant post-fire expansion. The findings highlight the ecological consequences of high-severity fire in sagebrush ecosystems, including reduced resilience, increased invasion risk, and type conversion. Unsupervised classification and spectral signature analysis proved effective for capturing post-fire landscape change and can support more accurate, site-specific post-fire assessment and restoration planning. Full article
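
ISODATA is essentially iterative k-means with extra heuristics that split and merge clusters to adapt the number of spectral classes. A sketch of the shared assignment/update core (plain k-means, illustrative only; the split/merge logic and the actual Landsat 8 reflectance bands are omitted):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on feature vectors (e.g., per-pixel band reflectances).
    ISODATA wraps this same loop with cluster split/merge rules."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        lab = d.argmin(axis=1)              # assign to nearest center
        for j in range(k):
            if np.any(lab == j):            # update non-empty clusters
                centers[j] = X[lab == j].mean(axis=0)
    return lab, centers
```
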
22 pages, 3999 KB  
Article
Seagrass Mapping in Cyprus Using Earth Observation Advances
by Despoina Makri, Spyridon Christofilakos, Dimitris Poursanidis, Dimosthenis Traganos, Christodoulos Mettas, Neophytos Stylianou and Diofantos Hadjimitsis
Remote Sens. 2025, 17(21), 3610; https://doi.org/10.3390/rs17213610 (registering DOI) - 31 Oct 2025
Abstract
Seagrass meadows are vital for biodiversity and provide a plethora of ecosystem services, but significant losses due to human activity and climate change have been observed in recent decades. This study aims to evaluate whether the integration of Sentinel-2 composites, cloud computing (Google Earth Engine, GEE), and machine learning (ML) classifiers can produce accurate, scalable maps of seagrass habitats, enabling reliable estimates of associated carbon stocks. In this case study, we developed a methodological workflow for local-scale seagrass mapping in Cyprus, covering a total area of 310 km2. ML techniques, specifically the Random Forest (RF) classifier and Classification And Regression Tree (CART), were employed in the main processing stage. The RF classifier achieved an overall accuracy of 73.5%, with a seagrass-specific F1-score of 69.4%. Class-specific F1-scores ranged from 63.2% for hard bottoms to 98.2% for deep water, accounting for variability in habitat separability. The workflow is designed to be scalable across Cyprus and potentially the broader EMMENA region (Eastern Mediterranean, Middle East, and North Africa). Based on the mapped extent of Posidonia oceanica meadows, preliminary estimates suggest a carbon stock of approximately 19,000 Mg C in Cyprus. Full article
(This article belongs to the Section Environmental Remote Sensing)
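
The class-specific F1-scores reported above come from per-class precision and recall over a confusion matrix, which can be sketched as:

```python
import numpy as np

def per_class_f1(y_true, y_pred, n_classes):
    """Per-class F1 from a confusion matrix (rows: true, cols: predicted)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    prec = tp / np.maximum(cm.sum(axis=0), 1)   # column sums: predicted totals
    rec = tp / np.maximum(cm.sum(axis=1), 1)    # row sums: true totals
    return 2 * prec * rec / np.maximum(prec + rec, 1e-12)
```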

18 pages, 6703 KB  
Article
Lightweight Attention-Based Architecture for Accurate Melanoma Recognition
by Mohammad J. Beirami, Fiona Gruzmark, Rayyan Manwar, Maria Tsoukas and Kamran Avanaki
Electronics 2025, 14(21), 4281; https://doi.org/10.3390/electronics14214281 (registering DOI) - 31 Oct 2025
Abstract
Dermoscopy, a non-invasive imaging technique, has transformed dermatology by enabling early detection and differentiation of skin conditions. Integrating deep learning with dermoscopic images enhances diagnostic potential but raises computational challenges. This study introduces APNet, an attention-based architecture designed for melanoma detection, offering fewer parameters than conventional convolutional neural networks. Two baseline models are considered: HU-Net, a trimmed U-Net that uses only the encoding path for classification, and Pocket-Net, a lightweight U-Net variant that reduces parameters through fewer feature maps and efficient convolutions. While Pocket-Net is highly resource-efficient, its simplification can reduce performance. APNet extends Pocket-Net by incorporating squeeze-and-excitation (SE) attention blocks into the encoding path. These blocks adaptively highlight the most relevant dermoscopic features, such as subtle melanoma patterns, improving classification accuracy. The study evaluates APNet against Pocket-Net and HU-Net using four large, annotated dermoscopy datasets (ISIC 2017–2020), covering melanoma, benign nevi, and other lesions. Results show that APNet achieves faster processing than HU-Net while overcoming the performance loss observed in Pocket-Net. By reducing parameters without sacrificing accuracy, APNet provides a practical solution for computationally demanding dermoscopy, offering efficient and accurate melanoma detection where medical imaging resources are limited. Full article
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)
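
The squeeze-and-excitation block that APNet adds can be sketched in a few lines. This is a single-image numpy version with hypothetical weight matrices `w1` and `w2`; a real implementation would operate on batched tensors in a deep-learning framework:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation: global-average-pool each channel, pass the
    channel descriptor through a two-layer bottleneck, then rescale channels.
    x: (C, H, W); w1: (C//r, C); w2: (C, C//r) with reduction ratio r."""
    z = x.mean(axis=(1, 2))                    # squeeze: one value per channel
    s = np.maximum(w1 @ z, 0.0)                # excitation, ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))        # sigmoid gates in (0, 1)
    return x * s[:, None, None]                # channel-wise rescale
```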

21 pages, 5509 KB  
Article
A Deep Learning Approach for High-Resolution Canopy Height Mapping in Indonesian Borneo by Fusing Multi-Source Remote Sensing Data
by Andrew J. Chamberlin, Zac Yung-Chun Liu, Christopher G. L. Cross, Julie Pourtois, Iskandar Zulkarnaen Siregar, Dodik Ridho Nurrochmat, Yudi Setiawan, Kinari Webb, Skylar R. Hopkins, Susanne H. Sokolow and Giulio A. De Leo
Remote Sens. 2025, 17(21), 3592; https://doi.org/10.3390/rs17213592 - 30 Oct 2025
Abstract
Accurate mapping of forest canopy height is essential for monitoring forest structure, assessing biodiversity, and informing sustainable management practices. However, obtaining high-resolution canopy height data across large tropical landscapes remains challenging and prohibitively expensive. While machine learning approaches like Random Forest have become standard for predicting forest attributes from remote sensing data, deep learning methods remain underexplored for canopy height mapping despite their potential advantages. To address this limitation, we developed a rapid, automatic, scalable, and cost-efficient deep learning framework that predicts tree canopy height at fine-grained resolution (30 × 30 m) across Indonesian Borneo’s tropical forests. Our approach integrates diverse remote sensing data, including Landsat-8, Sentinel-1, land cover classifications, digital elevation models, and NASA Carbon Monitoring System airborne LiDAR, along with derived vegetation indices, texture metrics, and climatic variables. This comprehensive data pipeline produced over 300 features from approximately 2 million observations across Bornean forests. Using LiDAR-derived canopy height measurements from ~100,000 ha as training data, we systematically compared multiple machine learning approaches and found that our neural network model achieved canopy height predictions with an R² of 0.82 and an RMSE of 4.98 m, substantially outperforming traditional machine learning approaches such as Random Forest (R² of 0.57–0.59). The model performed particularly well for forests with canopy heights between 10 and 40 m, though systematic biases were observed at the extremes of the height distribution. This framework demonstrates how freely available satellite data can be leveraged to extend the utility of limited LiDAR coverage, enabling cost-effective forest structure monitoring across vast tropical landscapes. The approach can be adapted to other forest regions worldwide, supporting applications in ecological research, conservation planning, and forest loss mitigation. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing and Geodata)
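
The R² and RMSE figures used to compare the neural network against Random Forest follow the standard definitions, e.g.:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error."""
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2) # total sum of squares
    return 1 - ss_res / ss_tot, np.sqrt(np.mean((y_true - y_pred) ** 2))
```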

22 pages, 4001 KB  
Article
SolPowNet: Dust Detection on Photovoltaic Panels Using Convolutional Neural Networks
by Ömer Faruk Alçin, Muzaffer Aslan and Ali Ari
Electronics 2025, 14(21), 4230; https://doi.org/10.3390/electronics14214230 - 29 Oct 2025
Abstract
In recent years, the widespread adoption of photovoltaic (PV) panels for electricity generation has provided significant momentum toward sustainable energy goals. However, it has been observed that the accumulation of dust and contaminants on panel surfaces markedly reduces efficiency by blocking solar radiation from reaching the surface. Consequently, dust detection has become a critical area of research into the energy efficiency of PV systems. This study proposes SolPowNet, a novel Convolutional Neural Network (CNN) model based on deep learning with a lightweight architecture that is capable of reliably distinguishing between images of clean and dusty panels. The performance of the proposed model was evaluated by testing it on a dataset containing images of 502 clean panels and 340 dusty panels and comprehensively comparing it with state-of-the-art CNN-based approaches. The experimental results demonstrate that SolPowNet achieves an accuracy of 98.82%, providing 5.88%, 3.57%, 4.7%, 18.82%, and 0.02% higher accuracy than the AlexNet, VGG16, VGG19, ResNet50, and Inception V3 models, respectively. These experimental results reveal that the proposed architecture exhibits more effective classification performance than other CNN models. In conclusion, SolPowNet, with its low computational cost and lightweight structure, enables integration into embedded and real-time applications. Thus, it offers a practical solution for optimizing maintenance planning in photovoltaic systems, managing panel cleaning intervals based on data, and minimizing energy production losses. Full article

16 pages, 4199 KB  
Article
Campus Abnormal Behavior Detection with a Spatio-Temporal Fusion–Temporal Difference Network
by Fupeng Wei, Yibo Jiao, Nan Wang, Kai Zheng, Ge Shi, Mengfan Yang and Wen Zhao
Electronics 2025, 14(21), 4221; https://doi.org/10.3390/electronics14214221 - 29 Oct 2025
Abstract
The detection of abnormal behavior has consistently garnered significant attention. Conventional methods employ vision-based dual-stream networks or 3D convolutions to represent spatio-temporal information in video sequences and distinguish normal from abnormal behaviors. Nonetheless, these methodologies generally rely on datasets that are balanced across categories and contain only two classes. In practice, anomalous behaviors frequently display multi-category characteristics, with each category’s distribution demonstrating a pronounced long-tail phenomenon. This paper presents a video-based technique for detecting multi-category abnormal behavior, termed the Spatio-Temporal Fusion–Temporal Difference Network (STF-TDN). The system first employs a temporal difference network (TDN) model to capture video temporal dynamics via local and global modeling. To enhance recognition performance, this study develops a feature fusion module—Spatial-Temporal Fusion (STF)—which augments the model’s representational capacity by fusing spatial and temporal information. Furthermore, given the long-tailed distribution of the datasets, this study employs focal loss rather than the conventional cross-entropy loss function to enhance the model’s recognition capability for under-represented categories. We perform comprehensive experiments and ablation studies on two datasets. Precision is 96.3% on the Violence5 dataset and 87.5% on the RWF-2000 dataset. The experimental results demonstrate the effectiveness of the proposed strategy in detecting anomalous behavior. Full article
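
The focal loss used in place of cross-entropy can be sketched as follows (numpy version over softmax outputs; with gamma = 0 and no class weights it reduces to ordinary cross-entropy):

```python
import numpy as np

def focal_loss(probs, y, gamma=2.0, alpha=None):
    """Multi-class focal loss: down-weights well-classified samples by
    (1 - p_t)^gamma so rare (long-tail) classes dominate the gradient.
    probs: (N, C) softmax outputs; y: (N,) integer labels."""
    pt = probs[np.arange(len(y)), y]           # probability of the true class
    w = (1.0 - pt) ** gamma                    # focusing term
    if alpha is not None:                      # optional per-class weights
        w = w * np.asarray(alpha)[y]
    return float(np.mean(-w * np.log(np.clip(pt, 1e-12, None))))
```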

17 pages, 1214 KB  
Article
A Study of Gene Expression Levels of Parkinson’s Disease Using Machine Learning
by Sonia Lilia Mestizo-Gutiérrez, Joan Arturo Jácome-Delgado, Nicandro Cruz-Ramírez, Alejandro Guerra-Hernández, Jesús Alberto Torres-Sosa, Viviana Yarel Rosales-Morales and Gonzalo Emiliano Aranda-Abreu
BioMedInformatics 2025, 5(4), 60; https://doi.org/10.3390/biomedinformatics5040060 - 29 Oct 2025
Abstract
Parkinson’s disease (PD) is the second most common neurodegenerative disorder, characterized primarily by motor impairments due to the loss of dopaminergic neurons. Despite extensive research, the precise causes of PD remain unknown, and reliable non-invasive biomarkers are still lacking. This study aimed to explore gene expression profiles in peripheral blood to identify potential biomarkers for PD using machine learning approaches. We analyzed microarray-based gene expression data from 105 individuals (50 PD patients, 33 with other neurodegenerative diseases, and 22 healthy controls) obtained from the GEO database (GSE6613). Preprocessing was performed using the “affy” package in R with Expresso normalization. Feature selection and classification were conducted using a decision tree approach (C4.5/J48 algorithm in WEKA), and model performance was evaluated with 10-fold cross-validation. Additional classifiers, namely Support Vector Machine (SVM), Naive Bayes, and Multilayer Perceptron Neural Network (MLP), were used for comparison. ROC curve analysis and Gene Ontology (GO) enrichment analysis were applied to the selected genes. A nine-gene decision tree model (TMEM104, TRIM33, GJB3, SPON2, SNAP25, TRAK2, SHPK, PIEZO1, RPL37) achieved 86.71% accuracy, 88% sensitivity, and 87% specificity. The model significantly outperformed the other classifiers (SVM, Naive Bayes, MLP) in overall predictive accuracy. ROC analysis showed moderate discrimination for some genes (e.g., TRAK2, TRIM33, PIEZO1), and GO enrichment revealed associations with synaptic processes, inflammation, mitochondrial transport, and stress response pathways. Our decision tree model based on blood gene expression profiles effectively discriminates between PD, other neurodegenerative conditions, and healthy controls, offering a non-invasive method for potential early diagnosis. Notably, TMEM104, TRIM33, and SNAP25 emerged as promising candidate biomarkers, warranting further investigation in larger and synthetic datasets to validate their clinical relevance. Full article
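
The 10-fold stratified cross-validation keeps each fold's class proportions close to the full dataset's, which matters for imbalanced cohorts like this one. A sketch of the fold construction (illustrative `stratified_folds` helper; the classifiers themselves are omitted):

```python
import numpy as np

def stratified_folds(y, k, seed=0):
    """Yield (train_idx, test_idx) pairs whose folds preserve the class
    proportions of the label vector y."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for c in np.unique(y):                          # deal each class out
        idx = rng.permutation(np.where(y == c)[0])  # round-robin into folds
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    for i in range(k):
        test = np.array(sorted(folds[i]))
        train = np.array(sorted(set(range(len(y))) - set(folds[i])))
        yield train, test
```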

14 pages, 2237 KB  
Article
LPI Radar Waveform Modulation Recognition Based on Improved EfficientNet
by Yuzhi Qi, Lei Ni, Xun Feng, Hongquan Li and Yujia Zhao
Electronics 2025, 14(21), 4214; https://doi.org/10.3390/electronics14214214 - 28 Oct 2025
Abstract
To address the challenge of low modulation recognition accuracy for Low Probability of Intercept (LPI) radar waveforms under low Signal-to-Noise Ratio (SNR) conditions—a critical limitation in current radar signal processing research—this study proposes a novel recognition framework anchored in an improved EfficientNet model. First, time–frequency images are generated by analyzing the radar signals with the Choi–Williams Distribution (CWD). Second, the Mobile Inverted Bottleneck Convolution (MBConv) structure incorporates the Simple Attention Module (SimAM) to improve the network’s capacity to extract features from time–frequency images. Specifically, the original serial mechanism within the MBConv structure is replaced with a parallel convolution and attention approach, further optimizing feature extraction efficiency. Third, the network’s loss function is upgraded to Focal Loss. This modification mitigates low recognition rates for specific radar signal types during training: by dynamically adjusting the loss weights of hard-to-recognize samples, it improves the classification accuracy of challenging categories. Simulation experiments were conducted on 13 distinct types of LPI radar signals. The results validate the effectiveness of the proposed approach for LPI waveform modulation recognition, with the improved model achieving an overall recognition accuracy of 96.48% on the test set. Full article
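
SimAM is parameter-free: each activation is gated by an energy term built from its squared deviation from the channel mean. A single-image numpy sketch following the commonly published formulation (illustrative; the paper embeds this inside MBConv):

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a feature map x of shape (C, H, W).
    Activations far from their channel mean get larger sigmoid gates."""
    n = x.shape[1] * x.shape[2] - 1
    d = (x - x.mean(axis=(1, 2), keepdims=True)) ** 2   # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n           # channel variance
    e_inv = d / (4 * (v + lam)) + 0.5                   # inverse energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))           # sigmoid gating
```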

17 pages, 11184 KB  
Article
Automated Crack Detection in Micro-CT Scanning for Fiber-Reinforced Concrete Using Super-Resolution and Deep Learning
by João Pedro Gomes de Souza, Aristófanes Corrêa Silva, Marcello Congro, Deane Roehl, Anselmo Cardoso de Paiva, Sandra Pereira and António Cunha
Electronics 2025, 14(21), 4208; https://doi.org/10.3390/electronics14214208 - 28 Oct 2025
Abstract
Fiber-reinforced concrete is a crucial material for civil construction, and monitoring its health is important for preserving structures and preventing accidents and financial losses. Among non-destructive monitoring methods, Micro Computed Tomography (Micro-CT) imaging stands out as an inexpensive method that is free from noise and external interference. However, manual inspection of these images is subjective and requires significant human effort. In recent years, several studies have successfully utilized Deep Learning models for the automatic detection of cracks in concrete. However, according to the literature, a gap remains in the context of detecting cracks using Micro-CT images of fiber-reinforced concrete. Therefore, this work proposes a framework for automatic crack detection that combines the following: (a) super-resolution-based preprocessing to generate, for each image, versions with double and quadruple the original resolution; (b) a classification step using EfficientNetB0 to classify the type of concrete matrix; (c) specific training of Detection Transformer (DETR) models for each type of matrix and resolution; and (d) voting-committee-based post-processing among the models trained for each resolution to reduce false positives. The model was trained on a new publicly available dataset, the FIRECON dataset, which consists of 4064 images annotated by an expert, achieving metrics of 86.098% Intersection over Union, 89.37% Precision, 83.26% Recall, 84.99% F1-Score, and 44.69% Average Precision. The framework, therefore, significantly reduces analysis time and improves consistency compared to the manual methods used in previous studies. The results demonstrate the potential of Deep Learning to aid image analysis in damage assessments, providing valuable insights into the damage mechanisms of fiber-reinforced concrete and contributing to the development of durable, high-performance engineering materials. Full article
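
The voting-committee post-processing can be sketched as an IoU-based agreement filter: a candidate box survives only if detectors at the other resolutions produce an overlapping detection. These are illustrative helper names, not the authors' code:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def committee_vote(candidates, others, min_votes=2, thr=0.5):
    """Keep a candidate box only if enough of the other detectors agree
    (IoU >= thr), acting as a simple false-positive filter."""
    kept = []
    for box in candidates:
        votes = 1 + sum(any(iou(box, b) >= thr for b in dets) for dets in others)
        if votes >= min_votes:
            kept.append(box)
    return kept
```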

19 pages, 2107 KB  
Article
Multi-Feature Fusion and Cloud Restoration-Based Approach for Remote Sensing Extraction of Lake and Reservoir Water Bodies in Bijie City
by Bai Xue, Yiying Wang, Yanru Song, Changru Liu and Pi Ai
Appl. Sci. 2025, 15(21), 11490; https://doi.org/10.3390/app152111490 - 28 Oct 2025
Abstract
Current lake and reservoir water body extraction algorithms are confronted with two critical challenges: (1) design dependency on specific geographical features, leading to constrained cross-regional adaptability (e.g., the JRC Global Water Body Dataset achieves ~90% overall accuracy globally, while the ESA WorldCover 2020 reaches ~92% for water body classification, both showing degraded performance in complex karst terrains); (2) information loss due to cloud occlusion, compromising dynamic monitoring accuracy. To address these limitations, this study presents a multi-feature fusion and multi-level hierarchical extraction algorithm for lake and reservoir water bodies, leveraging the Google Earth Engine (GEE) cloud platform and Sentinel-2 multispectral imagery in the karst landscape of Bijie City. The proposed method integrates the Automated Water Extraction Index (AWEIsh) and Modified Normalized Difference Water Index (MNDWI) for initial water body extraction, followed by a comprehensive fusion of multi-source data—including Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), Normalized Difference Red-Edge Index (NDREI), Sentinel-2 B8/B9 spectral bands, and Digital Elevation Model (DEM). This strategy hierarchically suppresses non-water targets such as vegetation shadows, topographic shadows, and artificial features. A temporal flood frequency algorithm is employed to restore cloud-occluded water bodies, complemented by morphological filtering to exclude non-target water features (e.g., rivers and canals). Experimental validation using high-resolution reference data demonstrates that the algorithm achieves an overall extraction accuracy exceeding 96% in Bijie City, effectively suppressing dark object interference (e.g., false positives due to topographic and anthropogenic features) while preserving water body boundary integrity.
Compared with single-index methods (e.g., MNDWI), this method reduces false positive rates caused by building shadows and terrain shadows by 15–20%, and improves the IoU (Intersection over Union) by 6–13% in typical karst sub-regions. This research provides a universal technical framework for large-scale dynamic monitoring of lakes and reservoirs, particularly addressing the challenges of regional adaptability and cloud compositing in karst environments. Full article
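The two indices used for the initial extraction have standard published forms, which can be sketched per pixel as below. This is an illustrative stand-alone version, not the authors' GEE code; the band names follow the usual Sentinel-2 convention (B2 = blue, B3 = green, B8 = NIR, B11 = SWIR1, B12 = SWIR2, as surface reflectance), and the zero thresholds are placeholder defaults.

```python
def mndwi(green, swir1):
    """Modified Normalized Difference Water Index (Xu, 2006)."""
    denom = green + swir1
    return (green - swir1) / denom if denom != 0 else 0.0

def awei_sh(blue, green, nir, swir1, swir2):
    """Automated Water Extraction Index, shadow variant (Feyisa et al., 2014)."""
    return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2

def initial_water_mask(pixel, mndwi_thresh=0.0, awei_thresh=0.0):
    """Flag a pixel as a water candidate when both indices exceed their
    thresholds; pixel is (B2, B3, B8, B11, B12) reflectances."""
    b2, b3, b8, b11, b12 = pixel
    return mndwi(b3, b11) > mndwi_thresh and awei_sh(b2, b3, b8, b11, b12) > awei_thresh
```

Water is bright in green and dark in SWIR, so both indices go positive over water while shadows and built-up surfaces tend to fail at least one test, which is why the paper combines them before the hierarchical refinement.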

22 pages, 4342 KB  
Article
Cloud-Based Personalized sEMG Classification Using Lightweight CNNs for Long-Term Haptic Communication in Deaf-Blind Individuals
by Kaavya Tatavarty, Maxwell Johnson and Boris Rubinsky
Bioengineering 2025, 12(11), 1167; https://doi.org/10.3390/bioengineering12111167 - 27 Oct 2025
Abstract
Deaf-blindness, particularly in progressive conditions such as Usher syndrome, presents profound challenges to communication, independence, and access to information. Existing tactile communication technologies for individuals with Usher syndrome are often limited by the need for close physical proximity to trained interpreters, typically requiring hand-to-hand contact. In this study, we introduce a novel, cloud-based, AI-assisted gesture recognition and haptic communication system designed for long-term use by individuals with Usher syndrome, whose auditory and visual abilities deteriorate with age. Central to our approach is a wearable haptic interface that relocates tactile input and output from the hands to an arm-mounted sleeve, thereby preserving manual dexterity and enabling continuous, bidirectional tactile interaction. The system uses surface electromyography (sEMG) to capture user-specific muscle activations in the hand and forearm and employs lightweight, personalized convolutional neural networks (CNNs), hosted on a centralized server, to perform real-time gesture classification. A key innovation of the system is its ability to adapt over time to each user’s evolving physiological condition, including the progressive loss of vision and hearing. Experimental validation using a public dataset, along with real-time testing involving seven participants, demonstrates that personalized models consistently outperform cross-user models in terms of accuracy, adaptability, and usability. This platform offers a scalable, longitudinally adaptable solution for non-visual communication and holds significant promise for advancing assistive technologies for the deaf-blind community. Full article
(This article belongs to the Section Biosignal Processing)
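Before any CNN sees the sEMG stream, signals of this kind are conventionally segmented into overlapping windows and summarized with time-domain features. The sketch below is a generic, hypothetical preprocessing step in that spirit (the paper feeds windows to personalized lightweight CNNs; the window and step sizes here are arbitrary sample counts, not values from the study).

```python
import math

def sliding_windows(signal, win, step):
    """Split a 1-D sEMG signal into overlapping fixed-length windows."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def rms(window):
    """Root mean square: overall muscle-activation energy in the window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def mav(window):
    """Mean absolute value: average rectified amplitude."""
    return sum(abs(x) for x in window) / len(window)

def features(signal, win=200, step=100):
    """One (RMS, MAV) feature pair per window."""
    return [(rms(w), mav(w)) for w in sliding_windows(signal, win, step)]
```

The 50% window overlap (step = win / 2) is a common choice for real-time gesture recognition, trading latency against the number of classification opportunities per second.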
