Search Results (10,035)

Search Parameters:
Keywords = transforming learning

23 pages, 13580 KiB  
Article
Enabling Smart Grid Resilience with Deep Learning-Based Battery Health Prediction in EV Fleets
by Muhammed Cavus and Margaret Bell
Batteries 2025, 11(8), 283; https://doi.org/10.3390/batteries11080283 - 24 Jul 2025
Abstract
The widespread integration of electric vehicles (EVs) into smart grid infrastructures necessitates intelligent and robust battery health diagnostics to ensure system resilience and performance longevity. While numerous studies have addressed the estimation of State of Health (SOH) and the prediction of remaining useful life (RUL) using machine and deep learning, most existing models fail to capture both short-term degradation trends and long-range contextual dependencies jointly. In this study, we introduce V2G-HealthNet, a novel hybrid deep learning framework that uniquely combines Long Short-Term Memory (LSTM) networks with Transformer-based attention mechanisms to model battery degradation under dynamic vehicle-to-grid (V2G) scenarios. Unlike prior approaches that treat SOH estimation in isolation, our method directly links health prediction to operational decisions by enabling SOH-informed adaptive load scheduling and predictive maintenance across EV fleets. Trained on over 3400 proxy charge-discharge cycles derived from 1 million telemetry samples, V2G-HealthNet achieved state-of-the-art performance (SOH RMSE: 0.015, MAE: 0.012, R2: 0.97), outperforming leading baselines including XGBoost and Random Forest. For RUL prediction, the model maintained an MAE of 0.42 cycles over a five-cycle horizon. Importantly, deployment simulations revealed that V2G-HealthNet triggered maintenance alerts at least three cycles ahead of critical degradation thresholds and redistributed high-load tasks away from ageing batteries—capabilities not demonstrated in previous works. These findings establish V2G-HealthNet as a deployable, health-aware control layer for smart city electrification strategies. Full article
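
The abstract does not include code, but the LSTM-plus-Transformer-attention pairing it describes can be illustrated with a minimal PyTorch sketch. All names and layer sizes below (including `V2GHealthSketch`) are assumptions for illustration, not the authors' V2G-HealthNet.

```python
# Minimal sketch of an LSTM + Transformer-attention regressor for SOH estimation.
# Hypothetical layer sizes; not the authors' V2G-HealthNet implementation.
import torch
import torch.nn as nn

class V2GHealthSketch(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_heads=4):
        super().__init__()
        # LSTM captures short-term degradation dynamics across the cycle window.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # Transformer encoder layer adds long-range attention over the same window.
        self.attn = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, batch_first=True)
        # Regression head maps the last time step to a scalar SOH estimate.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time, n_features)
        h, _ = self.lstm(x)          # (batch, time, hidden)
        h = self.attn(h)             # self-attention over time steps
        return self.head(h[:, -1])   # (batch, 1) predicted SOH

model = V2GHealthSketch()
cycles = torch.randn(16, 50, 8)      # 16 windows of 50 time steps, 8 telemetry features
print(model(cycles).shape)           # torch.Size([16, 1])
```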

15 pages, 2317 KiB  
Article
An Ensemble-Based AI Approach for Continuous Blood Pressure Estimation in Health Monitoring Applications
by Rafita Haque, Chunlei Wang and Nezih Pala
Sensors 2025, 25(15), 4574; https://doi.org/10.3390/s25154574 - 24 Jul 2025
Abstract
Continuous blood pressure (BP) monitoring provides valuable insight into the body’s dynamic cardiovascular regulation across various physiological states such as physical activity, emotional stress, postural changes, and sleep. Continuous BP monitoring captures different variations in systolic and diastolic pressures, reflecting autonomic nervous system activity, vascular compliance, and circadian rhythms. This enables early identification of abnormal BP trends and allows for timely diagnosis and interventions to reduce the risk of cardiovascular diseases (CVDs) such as hypertension, stroke, heart failure, and chronic kidney disease as well as chronic stress or anxiety disorders. To facilitate continuous BP monitoring, we propose an AI-powered estimation framework. The proposed framework first uses an expert-driven feature engineering approach that systematically extracts physiological features from photoplethysmogram (PPG)-based arterial pulse waveforms (APWs). Extracted features include pulse rate, ascending/descending times, pulse width, slopes, intensity variations, and waveform areas. These features are fused with demographic data (age, gender, height, weight, BMI) to enhance model robustness and accuracy across diverse populations. The framework utilizes a Tab-Transformer to learn rich feature embeddings, which are then processed through an ensemble machine learning framework consisting of CatBoost, XGBoost, and LightGBM. Evaluated on a dataset of 1000 subjects, the model achieves Mean Absolute Errors (MAE) of 3.87 mmHg (SBP) and 2.50 mmHg (DBP), meeting British Hypertension Society (BHS) Grade A and Association for the Advancement of Medical Instrumentation (AAMI) standards. The proposed architecture advances non-invasive, AI-driven solutions for dynamic cardiovascular health monitoring. Full article
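
As a rough illustration of the boosting ensemble named in the abstract, the sketch below averages CatBoost, XGBoost, and LightGBM regressors (their scikit-learn-style wrappers) on synthetic data; the Tab-Transformer embedding stage and the real PPG features are omitted, so this is not the authors' pipeline.

```python
# Simplified stand-in for the boosting ensemble described in the abstract:
# average the predictions of CatBoost, XGBoost and LightGBM regressors.
# Inputs are synthetic placeholders for waveform + demographic features.
import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # synthetic feature matrix
y = 120 + 10 * X[:, 0] + rng.normal(size=1000)  # synthetic systolic BP target (mmHg)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingRegressor([
    ("xgb", XGBRegressor(n_estimators=200, random_state=0)),
    ("lgbm", LGBMRegressor(n_estimators=200, random_state=0)),
    ("cat", CatBoostRegressor(n_estimators=200, random_seed=0, verbose=0)),
])
ensemble.fit(X_tr, y_tr)
print("MAE (mmHg):", mean_absolute_error(y_te, ensemble.predict(X_te)))
```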

24 pages, 1572 KiB  
Article
Optimizing DNA Sequence Classification via a Deep Learning Hybrid of LSTM and CNN Architecture
by Elias Tabane, Ernest Mnkandla and Zenghui Wang
Appl. Sci. 2025, 15(15), 8225; https://doi.org/10.3390/app15158225 - 24 Jul 2025
Abstract
This study examines the performance of deep learning models for human DNA sequence classification through an exploration of ideal feature representation, model architecture, and hyperparameter tuning. It contrasts traditional machine learning with advanced deep learning approaches to ascertain performance with respect to genomic data complexity. A hybrid network combining long short-term memory (LSTM) and convolutional neural networks (CNN) was developed to extract long-distance dependencies as well as local patterns from DNA sequences. The hybrid LSTM + CNN model achieved a classification accuracy of 100%, which is significantly higher than traditional approaches such as logistic regression (45.31%), naïve Bayes (17.80%), and random forest (69.89%), as well as other machine learning models such as XGBoost (81.50%) and k-nearest neighbor (70.77%). Among deep learning techniques, the DeepSea model also performed well (76.59%), while others such as DeepVariant (67.00%) and graph neural networks (30.71%) scored relatively lower. Preprocessing techniques such as one-hot encoding and DNA embeddings were central to transforming the sequence data into a form compatible with deep learning. The findings underscore the robustness of hybrid architectures in genomic classification tasks and warrant future research on encoding strategies and model and hyperparameter tuning to further improve accuracy and generalization in DNA sequence analysis. Full article
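
A hybrid of this kind can be sketched in a few lines of PyTorch; the dimensions, class count, and layer choices below are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch of a CNN + LSTM hybrid for one-hot-encoded DNA sequences.
# Dimensions and class count are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class DNAConvLSTM(nn.Module):
    def __init__(self, n_classes=7, seq_len=200):
        super().__init__()
        # 1D convolution scans for local motifs over the 4-channel one-hot input.
        self.conv = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2))
        # LSTM models longer-range dependencies over the pooled feature sequence.
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, 4, seq_len) one-hot DNA
        h = self.conv(x)              # (batch, 32, seq_len // 2)
        h, _ = self.lstm(h.transpose(1, 2))
        return self.fc(h[:, -1])      # class logits

model = DNAConvLSTM()
batch = torch.randn(8, 4, 200)        # stand-in for one-hot encoded sequences
print(model(batch).shape)             # torch.Size([8, 7])
```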

15 pages, 2123 KiB  
Article
Multi-Class Visual Cyberbullying Detection Using Deep Neural Networks and the CVID Dataset
by Muhammad Asad Arshed, Zunera Samreen, Arslan Ahmad, Laiba Amjad, Hasnain Muavia, Christine Dewi and Muhammad Kabir
Information 2025, 16(8), 630; https://doi.org/10.3390/info16080630 - 24 Jul 2025
Abstract
In an era where online interactions increasingly shape social dynamics, the pervasive issue of cyberbullying poses a significant threat to the well-being of individuals, particularly among vulnerable groups. Despite extensive research on text-based cyberbullying detection, the rise of visual content on social media platforms necessitates new approaches to address cyberbullying using images. This domain has been largely overlooked. In this paper, we present a novel dataset specifically designed for the detection of visual cyberbullying, encompassing four distinct classes: abuse, curse, discourage, and threat. The initial version of the dataset, the cyberbullying visual indicators dataset (CVID), comprised 664 samples for training and validation and was expanded through data augmentation techniques to ensure balanced and accurate results across all classes. We analyzed this dataset using several advanced deep learning models, including VGG16, VGG19, MobileNetV2, and Vision Transformer. The proposed model, based on DenseNet201, achieved the highest test accuracy of 99%, demonstrating its efficacy in identifying the visual cues associated with cyberbullying. To assess the proposed model’s generalizability, 5-fold stratified cross-validation was also performed, and the model achieved an average test accuracy of 99%. This work introduces a dataset and highlights the potential of leveraging deep learning models to address the multifaceted challenges of detecting cyberbullying in visual content. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
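
The transfer-learning setup implied by the abstract, a DenseNet201 backbone with a four-class head, might look roughly like the following PyTorch sketch; the hyperparameters and the single training step are placeholder assumptions.

```python
# Sketch of a DenseNet201 backbone with its classifier replaced by a 4-class head
# (abuse, curse, discourage, threat). Illustrative only; not the authors' setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)  # downloads ImageNet weights
model.classifier = nn.Linear(model.classifier.in_features, 4)           # 4 cyberbullying classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(2, 3, 224, 224)      # placeholder batch of RGB images
labels = torch.tensor([0, 3])             # placeholder class indices
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```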

23 pages, 6229 KiB  
Article
Damage Classification Approach for Concrete Structure Using Support Vector Machine Learning of Decomposed Electromechanical Admittance Signature via Discrete Wavelet Transform
by Jingwen Yang, Demi Ai and Duluan Zhang
Buildings 2025, 15(15), 2616; https://doi.org/10.3390/buildings15152616 - 23 Jul 2025
Abstract
The identification of structural damage types remains a key challenge in the electromechanical impedance/admittance (EMI/EMA)-based structural health monitoring field. This paper proposes a damage classification approach for concrete structures that integrates discrete wavelet transform (DWT) decomposition of EMA signatures with supervised machine learning. In this approach, the EMA signals of arranged piezoelectric ceramic (PZT) patches were successively measured at the initial undamaged and post-damaged states, and the signals were decomposed and processed using the DWT technique to derive indicators including the wavelet energy, the variance, the mean, and the entropy. These indicators, together with traditional ones including root mean square deviation (RMSD), baseline-changeable RMSD named RMSDk, correlation coefficient (CC), and mean absolute percentage deviation (MAPD), were then processed by a support vector machine (SVM) model, and finally the damage type could be automatically classified and identified. To validate the approach, experiments on a full-scale reinforced concrete (RC) slab and an application to a practical tunnel segment RC slab structure instrumented with multiple PZT patches were conducted to classify severe transverse cracking and minor crack/impact damages. The experimental and application results demonstrated that the proposed DWT-based approach can classify different types of damage on concrete structures more accurately than traditional indicators, highlighting the potential of DWT-decomposed EMA signatures for damage characterization in concrete infrastructure. Full article
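
The core pipeline (DWT decomposition, coefficient statistics, SVM classification) can be approximated with pywt and scikit-learn as below; the wavelet, decomposition level, labels, and synthetic signals are assumptions, not the paper's measured EMA signatures.

```python
# Minimal sketch of the DWT-feature + SVM idea: decompose each signal with a
# discrete wavelet transform, summarise the coefficients (energy, variance,
# mean, entropy), and classify damage type with an SVM. Signals are synthetic.
import numpy as np
import pywt
from scipy.stats import entropy
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    feats = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):
        p = np.abs(coeffs) / (np.sum(np.abs(coeffs)) + 1e-12)
        feats += [np.sum(coeffs**2),        # wavelet energy
                  np.var(coeffs),           # variance
                  np.mean(coeffs),          # mean
                  entropy(p)]               # wavelet entropy
    return np.array(feats)

rng = np.random.default_rng(0)
# Synthetic "signatures": class 1 adds a higher-frequency component.
X = np.array([dwt_features(np.sin(np.linspace(0, 20, 512)) +
                           c * 0.5 * np.sin(np.linspace(0, 200, 512)) +
                           0.1 * rng.normal(size=512))
              for c in (0, 1) for _ in range(30)])
y = np.repeat([0, 1], 30)                   # 0 = minor damage, 1 = cracking (labels assumed)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```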

29 pages, 2105 KiB  
Article
The Impact of Rural Digital Economy Development on Agricultural Carbon Emission Efficiency: A Study of the N-Shaped Relationship
by Yong Feng, Shuokai Wang and Fangping Cao
Agriculture 2025, 15(15), 1583; https://doi.org/10.3390/agriculture15151583 - 23 Jul 2025
Abstract
This study investigates the impact of rural digital economy development on agricultural carbon emission efficiency, aiming to elucidate the intrinsic mechanisms and pathways through which digital technology enables low-carbon transformation in agriculture, thereby contributing to the achievement of agricultural carbon neutrality goals. Based on provincial-level panel data from China spanning 2011 to 2022, this study examines the relationship between the rural digital economy and agricultural carbon emission efficiency, along with its underlying mechanisms, using bidirectional fixed effects models, mediation effect analysis, and Spatial Durbin Models. The results indicate the following: (1) A significant N-shaped-curve relationship exists between rural digital economy development and agricultural carbon emission efficiency. Specifically, agricultural carbon emission efficiency exhibits a three-phase trajectory of “increase, decrease, and renewed increase” as the rural digital economy advances, ultimately driving a sustained improvement in efficiency. (2) Industrial integration acts as a critical mediating mechanism. Rural digital economy development accelerates the formation of the N-shaped curve by promoting the integration between agriculture and other sectors. (3) Spatial spillover effects significantly influence agricultural carbon emission efficiency. Due to geographical proximity, regional diffusion, learning, and demonstration effects, local agricultural carbon emission efficiency fluctuates with changes in neighboring regions’ digital economy development levels. (4) The relationship between rural digital economy development and agricultural carbon emission efficiency exhibits a significant inverted N-shaped pattern in regions with higher marketization levels, planting-dominated areas of southeast China, and digital economy demonstration zones. Further analysis reveals that within rural digital economy development, production digitalization and circulation digitalization demonstrate a more pronounced inverted N-shaped relationship with agricultural carbon emission efficiency. This study proposes strategic recommendations to maximize the positive impact of the rural digital economy on agricultural carbon emission efficiency, unlock its spatially differentiated contribution potential, identify and leverage inflection points of the N-shaped relationship between digital economy development and emission efficiency, and implement tailored policy portfolios—ultimately facilitating agriculture’s green and low-carbon transition. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
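
The N-shaped test described here is typically implemented as a two-way fixed-effects regression with linear, squared, and cubed digital-economy terms; the sketch below shows such a specification on synthetic panel data and is not the authors' exact model.

```python
# Generic two-way fixed-effects specification for an N-shaped relationship:
# regress carbon-emission efficiency on the digital-economy index plus its
# square and cube, with province and year dummies. Data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "province": np.repeat(np.arange(30), 12),
    "year": np.tile(np.arange(2011, 2023), 30),
    "dig": rng.uniform(0, 1, 30 * 12),          # digital economy index (synthetic)
})
# Synthetic N-shaped outcome: rises, dips, then rises again in dig.
panel["eff"] = (0.8 * panel.dig - 2.0 * panel.dig**2 + 1.5 * panel.dig**3
                + 0.05 * rng.normal(size=len(panel)))

model = smf.ols("eff ~ dig + I(dig**2) + I(dig**3) + C(province) + C(year)",
                data=panel).fit(cov_type="cluster",
                                cov_kwds={"groups": panel["province"]})
print(model.params.filter(like="dig"))   # positive, negative, positive -> N-shape
```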

17 pages, 1377 KiB  
Article
Technology Adoption Framework for Supreme Audit Institutions Within the Hybrid TAM and TOE Model
by Babalwa Ceki and Tankiso Moloi
J. Risk Financial Manag. 2025, 18(8), 409; https://doi.org/10.3390/jrfm18080409 - 23 Jul 2025
Abstract
Advanced technologies, such as robotic process automation, blockchain, and machine learning, increase audit efficiency. Nonetheless, some Supreme Audit Institutions (SAIs) have not undergone digital transformation. This research aimed to develop a comprehensive framework for supreme audit institutions to adopt and integrate emerging technologies into their auditing processes using a hybrid theoretical approach based on the TAM (Technology Acceptance Model) and TOE (Technology–Organisation–Environment) models. The framework was informed by insights from nineteen highly experienced experts in the field from eight countries. Through a two-round Delphi questionnaire, the experts provided valuable input on the key factors, challenges, and strategies for successful technology adoption by public sector audit organisations. The findings of this research reveal that technology adoption in SAIs starts with solid management support led by the chief technology officer. They must evaluate the IT infrastructure and readiness for advanced technologies, considering the budget and funding. Integrating solutions like the SAI of Ghana’s Audit Management Information System can significantly enhance audit efficiency. Continuous staff training is essential to build a positive attitude toward new technologies, covering areas like data algorithm auditing and big data analysis. Assessing the complexity and compatibility of new technologies ensures ease of use and cost-effectiveness. Continuous support from technology providers and monitoring advancements will keep SAIs aligned with technological developments, enhancing their auditing capabilities. Full article
(This article belongs to the Special Issue Financial Management)

30 pages, 5118 KiB  
Article
Effective Comparison of Thermo-Mechanical Characteristics of Self-Compacting Concretes Through Machine Learning-Based Predictions
by Armando La Scala and Leonarda Carnimeo
Fire 2025, 8(8), 289; https://doi.org/10.3390/fire8080289 - 23 Jul 2025
Abstract
This study proposes different machine learning-based predictors for the assessment of the residual compressive strength of Self-Compacting Concrete (SCC) subjected to high temperatures. The investigation draws on several algorithmic approaches from the literature: Artificial Neural Networks with distinct training algorithms (Bayesian Regularization, Levenberg–Marquardt, Scaled Conjugate Gradient, and Resilient Backpropagation), Support Vector Regression, and Random Forest methods. A training database of 150 experimental data points is derived from a careful literature review, incorporating temperature (20–800 °C), geometric ratio (height/diameter), and corresponding compressive strength values. A statistical analysis revealed complex non-linear relationships between variables, with a strong negative correlation between temperature and strength and a heteroscedastic data distribution, justifying the selection of advanced machine learning techniques. Feature engineering improved model performance through the incorporation of quadratic terms, interaction variables, and cyclic transformations. The Resilient Backpropagation algorithm demonstrated superior performance with the lowest prediction errors, followed by Bayesian Regularization. Support Vector Regression achieved competitive accuracy despite its simpler architecture. Experimental validation using specimens tested up to 800 °C showed good reliability of the developed systems, with prediction errors ranging from 0.33% to 23.35% across different temperature ranges. Full article
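
A scaled-down version of this kind of comparison, with quadratic and interaction features feeding a neural, SVR, and random-forest regressor, could look like the scikit-learn sketch below; the data are synthetic and the models are generic stand-ins for those trained in the paper.

```python
# Illustrative comparison of regressors for residual compressive strength,
# using quadratic/interaction features as mentioned in the abstract.
# Synthetic data; the paper's 150-point experimental database is not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T = rng.uniform(20, 800, 200)               # exposure temperature (degrees C)
hd = rng.uniform(1.0, 2.0, 200)             # height/diameter ratio
strength = 60 * np.exp(-T / 600) * (2.2 - 0.5 * hd) + rng.normal(0, 2, 200)
X = np.column_stack([T, hd])

X_tr, X_te, y_tr, y_te = train_test_split(X, strength, random_state=0)
models = {
    "MLP": make_pipeline(StandardScaler(), PolynomialFeatures(2),
                         MLPRegressor(hidden_layer_sizes=(32, 16),
                                      max_iter=3000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), PolynomialFeatures(2), SVR(C=100)),
    "RandomForest": RandomForestRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, "MAE (MPa):", round(mean_absolute_error(y_te, m.predict(X_te)), 2))
```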

23 pages, 372 KiB  
Review
What Does Digital Well-Being Mean for School Development? A Theoretical Review with Perspectives on Digital Inequality
by Philipp Michael Weber, Rudolf Kammerl and Mandy Schiefner-Rohs
Educ. Sci. 2025, 15(8), 948; https://doi.org/10.3390/educsci15080948 - 23 Jul 2025
Abstract
As digital transformation progresses, schools are increasingly confronted with psychosocial challenges such as technostress, digital overload, and unequal participation in digital (learning) environments. This article investigates the conceptual relevance of digital well-being for school development, particularly in relation to social inequality. Despite growing attention, the term remains theoretically underdefined in educational research—a gap addressed through a theory-driven review. Drawing on a systematic search, 25 key studies were analyzed for their conceptual understanding and refinement of digital well-being, with a focus on educational relevance. Findings suggest that digital well-being constitutes a multidimensional state shaped by individual, media-related, and socio-structural factors. It emerges when individuals are able to successfully manage the demands of digital environments and is closely linked to digital inequality—particularly in terms of access, usage practices, and the resulting opportunities for participation and health promotion. Since the institutional role of schools has thus far received limited attention, this article shifts the focus toward schools as key arenas for negotiating digital norms and practices and calls for an equity-sensitive and health-conscious perspective on school development in the context of digitalization. In doing so, digital well-being is repositioned as a pedagogical cross-cutting issue that requires coordinated efforts across all levels of the education system, highlighting that equitable digital transformation in schools depends on a critical reflection of power asymmetries within society and educational institutions. The article concludes by advocating for the systematic integration of digital well-being into school development processes as a way to support inclusive digital participation and to foster a health-oriented digital school culture. Full article

40 pages, 4462 KiB  
Article
Leveraging Feature Extraction to Perform Time-Efficient Selection for Machine Learning Applications
by Duarte Coelho, Ana Madureira, Ivo Pereira, Ramiro Gonçalves, Susana Nicola, Inês César and Daniel Alves de Oliveira
Appl. Sci. 2025, 15(15), 8196; https://doi.org/10.3390/app15158196 - 23 Jul 2025
Abstract
In the age of rapidly advancing machine learning capabilities, the pursuit of maximum performance encounters the practical limitations imposed by limited resources in several fields. This work presents a cost-effective proposal for feature selection, which is a crucial part of machine learning processes, and intends to partly solve this problem through computational time reduction. The proposed methodology aims to strike a careful balance between feature exploration and strict computational time concerns, by enhancing the quality and relevance of data. This approach focuses on the use of interim representations of feature combinations to significantly speed up a potentially slow and computationally expensive process. This strategy is evaluated in several datasets against other feature selection methods, and the results indicate a significant reduction in the temporal costs associated with this process, achieving a mean percentage decrease of 85%. Furthermore, this reduction is achieved while maintaining competitive model performance, demonstrating that the selected features remain effective for the learning task. These results emphasize the method’s feasibility, confirming its ability to transform machine learning applications in environments with limited resources. Full article
(This article belongs to the Special Issue Machine Learning and Soft Computing: Current Trends and Applications)
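
The paper's interim-representation method is not spelled out in the abstract; the generic sketch below merely illustrates the time gap it targets by timing a cheap filter selector against a slower wrapper on synthetic data. It is not the authors' algorithm.

```python
# Generic illustration of the computational-cost gap the paper targets:
# a cheap filter-based selector versus a slower wrapper, timed on the same data.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=2000, n_features=100, n_informative=10,
                           random_state=0)

t0 = time.perf_counter()
SelectKBest(mutual_info_classif, k=10).fit(X, y)        # filter method
t_filter = time.perf_counter() - t0

t0 = time.perf_counter()
RFE(RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=10, step=10).fit(X, y)         # wrapper method
t_wrapper = time.perf_counter() - t0

print(f"filter: {t_filter:.2f}s, wrapper: {t_wrapper:.2f}s, "
      f"speed-up ~{t_wrapper / t_filter:.0f}x")
```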

17 pages, 3715 KiB  
Article
Robust Low-Snapshot DOA Estimation for Sparse Arrays via a Hybrid Convolutional Graph Neural Network
by Hongliang Zhu, Hongxi Zhao, Chunshan Bao, Yiran Shi and Wenchao He
Sensors 2025, 25(15), 4563; https://doi.org/10.3390/s25154563 - 23 Jul 2025
Abstract
We propose a hybrid Convolutional Graph Neural Network (C-GNN) for direction-of-arrival (DOA) estimation in sparse sensor arrays under low-snapshot conditions. The C-GNN architecture combines 1D convolutional layers for local spatial feature extraction with graph convolutional layers for global structural learning, effectively capturing both fine-grained and long-range array dependencies. Leveraging the difference coarray technique, the sparse array is transformed into a virtual uniform linear array (VULA) to enrich the spatial sampling; real-valued covariance matrices derived from the array measurements are used as the network’s input features. A final multi-layer perceptron (MLP) regression module then maps the learned representations to continuous DOA angle estimates. This approach capitalizes on the increased degrees of freedom offered by the virtual array while inherently incorporating the array’s geometric relationships via graph-based learning. The proposed C-GNN demonstrates robust performance in noisy, low-data scenarios, reliably estimating source angles even with very limited snapshots. By focusing on methodological innovation rather than bespoke architectural tuning, the framework shows promise for data-efficient DOA estimation in challenging practical conditions. Full article
(This article belongs to the Section Communications)
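
The convolution-plus-graph-convolution idea can be sketched in plain PyTorch as below; the adjacency structure, layer sizes, and single-source regression head are illustrative assumptions, not the authors' C-GNN.

```python
# Minimal sketch of the hybrid idea: 1D convolution for local features over the
# covariance input, a simple normalized-adjacency graph convolution for
# sensor-to-sensor structure, and an MLP head that regresses a DOA angle.
import torch
import torch.nn as nn

class CGNNSketch(nn.Module):
    def __init__(self, n_sensors=10):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_sensors, 16, kernel_size=3, padding=1),
                                  nn.ReLU())
        # Chain adjacency over array elements with self-loops, symmetrically normalized.
        A = torch.diag(torch.ones(n_sensors - 1), 1)
        A = A + A.T + torch.eye(n_sensors)
        d = A.sum(1)
        self.register_buffer("A_norm", A / torch.sqrt(d[:, None] * d[None, :]))
        self.gcn = nn.Linear(16, 16)                       # graph-conv weight
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * n_sensors, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, R):                # R: (batch, n_sensors, n_sensors) real covariance
        h = self.conv(R)                 # (batch, 16, n_sensors) local features
        h = h.transpose(1, 2)            # (batch, n_sensors, 16) node features
        h = torch.relu(self.gcn(self.A_norm @ h))   # one graph-convolution step
        return self.head(h)              # (batch, 1) DOA estimate

model = CGNNSketch()
R = torch.randn(4, 10, 10)               # stand-in covariance matrices (few snapshots)
print(model(R).shape)                     # torch.Size([4, 1])
```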

17 pages, 2885 KiB  
Article
Energy Management of Electric–Hydrogen Coupled Integrated Energy System Based on Improved Proximal Policy Optimization Algorithm
by Jingbo Zhao, Zhengping Gao and Zhe Chen
Energies 2025, 18(15), 3925; https://doi.org/10.3390/en18153925 - 23 Jul 2025
Abstract
The electric–hydrogen coupled integrated energy system (EHCS) is a critical pathway for the low-carbon transition of energy systems. However, the inherent uncertainties of renewable energy sources present significant challenges to optimal energy management in the EHCS. To address these challenges, this paper proposes an energy management method for the EHCS based on an improved proximal policy optimization (IPPO) algorithm. The method aims to overcome the limitations of traditional heuristic algorithms, such as low solution accuracy, and the inefficiencies of mathematical programming methods. First, a mathematical model of the EHCS is established. Then, by introducing the Markov decision process (MDP), this mathematical model is transformed into a deep reinforcement learning framework. On this basis, the state space and action space of the system are defined, and a reward function that accounts for the system constraints is designed to guide the agent toward the optimal strategy. Finally, the efficacy and economic viability of the proposed method are validated through numerical simulation. Full article
(This article belongs to the Special Issue Advances in Hydrogen Energy and Power System)
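
A toy version of this formulation, a storage-dispatch MDP wrapped as a Gymnasium environment and trained with an off-the-shelf PPO implementation, is sketched below; the state, action, prices, and reward are illustrative assumptions, and stable-baselines3 stands in for the paper's improved PPO.

```python
# Toy sketch: a one-bus electric-hydrogen storage environment as a Gymnasium MDP,
# trained with PPO. All quantities are illustrative, not the paper's model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class EHCSEnv(gym.Env):
    def __init__(self):
        super().__init__()
        # State: [hour, hydrogen storage level, net load]; action in [-1, 1].
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5
        return self._obs(), {}

    def _obs(self):
        net_load = np.sin(2 * np.pi * self.t / 24)           # synthetic daily net-load profile
        return np.array([self.t / 24, self.soc, net_load], dtype=np.float32)

    def step(self, action):
        p = float(action[0])                                  # >0: charge (electrolyser), <0: discharge (fuel cell)
        new_soc = float(np.clip(self.soc + 0.1 * p, 0.0, 1.0))
        delta = new_soc - self.soc                            # realised storage change (respects limits)
        self.soc = new_soc
        price = 0.5 + 0.5 * np.sin(2 * np.pi * self.t / 24)   # synthetic electricity price
        # Pay to charge; earn (with conversion losses) when discharging.
        reward = -price * delta / 0.1 if delta > 0 else -0.8 * price * delta / 0.1
        self.t += 1
        return self._obs(), reward, self.t >= 24, False, {}

model = PPO("MlpPolicy", EHCSEnv(), verbose=0)
model.learn(total_timesteps=5_000)
```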

18 pages, 1794 KiB  
Article
Detection of Cumulative Bruising in Prunes Using Vis–NIR Spectroscopy and Machine Learning: A Nonlinear Spectral Response Approach
by Lisi Lai, Hui Zhang, Jiahui Gu and Long Wen
Appl. Sci. 2025, 15(15), 8190; https://doi.org/10.3390/app15158190 - 23 Jul 2025
Abstract
Early and accurate detection of mechanical damage in prunes is crucial for preserving postharvest quality and enabling automated sorting. This study proposes a practical and reproducible method for identifying cumulative bruising in prunes using visible–near-infrared (Vis–NIR) reflectance spectroscopy coupled with machine learning techniques. A self-developed impact simulation device was designed to induce progressive damage under controlled energy levels, simulating realistic postharvest handling conditions. Spectral data were collected from the equatorial region of each fruit and processed using a hybrid modeling framework comprising continuous wavelet transform (CWT) for spectral enhancement, uninformative variable elimination (UVE) for optimal wavelength selection, and support vector machine (SVM) for classification. The proposed CWT-UVE-SVM model achieved an overall classification accuracy of 93.22%, successfully distinguishing intact, mildly bruised, and cumulatively damaged samples. Notably, the results revealed nonlinear reflectance variations in the near-infrared region associated with repeated low-energy impacts, highlighting the capacity of spectral response patterns to capture progressive physiological changes. This research not only advances nondestructive detection methods for prune grading but also provides a scalable modeling strategy for cumulative mechanical damage assessment in soft horticultural products. Full article
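
A simplified stand-in for the CWT-UVE-SVM pipeline is sketched below using pywt and scikit-learn, with a basic univariate selector in place of UVE and synthetic spectra in place of measured Vis–NIR data.

```python
# Simplified sketch of the spectral pipeline: continuous wavelet transform (CWT)
# of each spectrum, a basic univariate wavelength selection step (a stand-in for
# UVE), and an SVM classifier. Spectra and labels here are synthetic.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wavelengths = 256

def cwt_row(spectrum, scale=8):
    coeffs, _ = pywt.cwt(spectrum, scales=[scale], wavelet="mexh")
    return coeffs[0]                                   # one CWT scale per spectrum

def make_spectrum(bruised):
    # Synthetic "spectra": bruised samples get an extra absorption-like dip.
    base = np.exp(-((np.arange(wavelengths) - 128) / 60.0) ** 2)
    dip = bruised * 0.2 * np.exp(-((np.arange(wavelengths) - 180) / 10.0) ** 2)
    return base - dip + 0.01 * rng.normal(size=wavelengths)

X = np.array([cwt_row(make_spectrum(b)) for b in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)                              # 0 = intact, 1 = bruised (labels assumed)

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=30), SVC())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```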

17 pages, 3726 KiB  
Article
LEAD-Net: Semantic-Enhanced Anomaly Feature Learning for Substation Equipment Defect Detection
by Linghao Zhang, Junwei Kuang, Yufei Teng, Siyu Xiang, Lin Li and Yingjie Zhou
Processes 2025, 13(8), 2341; https://doi.org/10.3390/pr13082341 - 23 Jul 2025
Abstract
Substation equipment defect detection is a critical aspect of ensuring the reliability and stability of modern power grids. However, existing deep-learning-based detection methods often face significant challenges in real-world deployment, primarily due to low detection accuracy and inconsistent anomaly definitions across different substation environments. To address these limitations, this paper proposes the Language-Guided Enhanced Anomaly Power Equipment Detection Network (LEAD-Net), a novel framework that leverages text-guided learning during training to significantly improve defect detection performance. Unlike traditional methods, LEAD-Net integrates textual descriptions of defects, such as historical maintenance records or inspection reports, as auxiliary guidance during training. A key innovation is the Language-Guided Anomaly Feature Enhancement Module (LAFEM), which refines channel attention using these text features. Crucially, LEAD-Net operates solely on image data during inference, ensuring practical applicability. Experiments on a real-world substation dataset, comprising 8307 image–text pairs and encompassing a diverse range of defect categories encountered in operational substation environments, demonstrate that LEAD-Net significantly outperforms state-of-the-art object detection methods (Faster R-CNN, YOLOv9, DETR, and Deformable DETR), achieving a mean Average Precision (mAP) of 79.51%. Ablation studies confirm the contributions of both LAFEM and the training-time text guidance. The results highlight the effectiveness and novelty of using training-time defect descriptions to enhance visual anomaly detection without requiring text input at inference. Full article
(This article belongs to the Special Issue Smart Optimization Techniques for Microgrid Management)
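
A text-guided channel-attention block in the spirit of the described LAFEM module might look like the PyTorch sketch below; the fusion strategy and dimensions are assumptions, not the authors' implementation.

```python
# Minimal sketch of text-guided channel attention: a text embedding re-weights
# the channels of a visual feature map during training. Sizes are illustrative.
import torch
import torch.nn as nn

class TextGuidedChannelAttention(nn.Module):
    def __init__(self, channels=256, text_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, channels)   # project defect-text embedding
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, feat, text_emb):
        # feat: (batch, C, H, W) visual features; text_emb: (batch, text_dim)
        pooled = feat.mean(dim=(2, 3))                   # global average pooling -> (batch, C)
        t = self.text_proj(text_emb)                     # (batch, C)
        weights = self.gate(torch.cat([pooled, t], dim=1))   # channel attention in (0, 1)
        return feat * weights[:, :, None, None]          # re-weighted feature map

attn = TextGuidedChannelAttention()
feat = torch.randn(2, 256, 32, 32)
text_emb = torch.randn(2, 512)                           # e.g. from a frozen text encoder
print(attn(feat, text_emb).shape)                        # torch.Size([2, 256, 32, 32])
```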

34 pages, 1247 KiB  
Article
SBCS-Net: Sparse Bayesian and Deep Learning Framework for Compressed Sensing in Sensor Networks
by Xianwei Gao, Xiang Yao, Bi Chen and Honghao Zhang
Sensors 2025, 25(15), 4559; https://doi.org/10.3390/s25154559 - 23 Jul 2025
Abstract
Compressed sensing is widely used in modern resource-constrained sensor networks. However, achieving high-quality and robust signal reconstruction under low sampling rates and noise interference remains challenging. Traditional CS methods have limited performance, so many deep learning-based CS models have been proposed. Although these models show strong fitting capabilities, they often lack the ability to handle complex noise in sensor networks, which affects their performance stability. To address these challenges, this paper proposes SBCS-Net. This framework innovatively expands the iterative process of sparse Bayesian compressed sensing using convolutional neural networks and Transformer. The core of SBCS-Net is to optimize key SBL parameters through end-to-end learning. This can adaptively improve signal sparsity and probabilistically process measurement noise, while fully leveraging the powerful feature extraction and global context modeling capabilities of deep learning modules. To comprehensively evaluate its performance, we conduct systematic experiments on multiple public benchmark datasets. These studies include comparisons with various advanced and traditional compressed sensing methods, comprehensive noise robustness tests, ablation studies of key components, computational complexity analysis, and rigorous statistical significance tests. Extensive experimental results consistently show that SBCS-Net outperforms many mainstream methods in both reconstruction accuracy and visual quality. In particular, it exhibits excellent robustness under challenging conditions such as extremely low sampling rates and strong noise. Therefore, SBCS-Net provides an effective solution for high-fidelity, robust signal recovery in sensor networks and related fields. Full article
(This article belongs to the Section Sensor Networks)
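
The abstract describes an unrolled, learning-augmented reconstruction; the generic sketch below shows the unrolling pattern (a data-consistency gradient step followed by a learned refinement per stage) without reproducing the sparse Bayesian parameter updates of SBCS-Net.

```python
# Generic sketch of deep-unrolled compressed sensing: each stage applies a
# gradient step toward data consistency (y = Phi x) followed by a small learned
# refinement network. Sizes and stage count are illustrative assumptions.
import torch
import torch.nn as nn

class UnrolledCSSketch(nn.Module):
    def __init__(self, n=256, m=64, stages=5):
        super().__init__()
        self.register_buffer("Phi", torch.randn(m, n) / m**0.5)   # random sensing matrix
        self.step = nn.Parameter(torch.full((stages,), 0.5))       # learned step sizes
        self.refine = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv1d(16, 1, 3, padding=1))
            for _ in range(stages)])

    def forward(self, y):                         # y: (batch, m) measurements
        x = y @ self.Phi                          # initial back-projection, (batch, n)
        for k, net in enumerate(self.refine):
            grad = (x @ self.Phi.T - y) @ self.Phi            # gradient of 0.5*||Phi x - y||^2
            x = x - self.step[k] * grad
            x = x + net(x.unsqueeze(1)).squeeze(1)            # learned residual refinement
        return x

model = UnrolledCSSketch()
x_true = torch.zeros(8, 256); x_true[:, ::32] = 1.0            # sparse test signals
y = x_true @ model.Phi.T                                        # simulated measurements
print(model(y).shape)                                           # torch.Size([8, 256])
```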