Search Results (3,392)

Search Parameters:
Keywords = deep metric learning

43 pages, 9628 KB  
Article
Comparative Analysis of R-CNN and YOLOv8 Segmentation Features for Tomato Ripening Stage Classification and Quality Estimation
by Ali Ahmad, Jaime Lloret, Lorena Parra, Sandra Sendra and Francesco Di Gioia
Horticulturae 2026, 12(2), 127; https://doi.org/10.3390/horticulturae12020127 - 23 Jan 2026
Abstract
Accurate classification of tomato ripening stages and quality estimation are pivotal for optimizing post-harvest management and ensuring market value. This study presents a rigorous comparative analysis of morphological and colorimetric features extracted via two state-of-the-art deep learning-based instance segmentation frameworks—Mask R-CNN and YOLOv8n-seg—and their efficacy in machine learning-driven ripening stage classification and quality prediction. Using 216 fresh-market tomato fruits across four defined ripening stages, we extracted 27 image-derived features per model, alongside 12 laboratory-measured physio-morphological traits. Multivariate analyses revealed that R-CNN features capture nuanced colorimetric and structural variations, while YOLOv8 emphasizes morphological characteristics. Machine learning classifiers trained with stratified 10-fold cross-validation achieved up to a 95.3% F1-score when combining both feature sets, with R-CNN and YOLOv8 features alone attaining 96.9% and 90.8% accuracy, respectively. These findings highlight a trade-off between the superior precision of R-CNN and the real-time scalability of YOLOv8. Our results demonstrate the potential of integrating complementary segmentation-derived features with laboratory metrics to enable robust, non-destructive phenotyping. This work advances the application of vision-based machine learning in precision agriculture, facilitating automated, scalable, and accurate monitoring of fruit maturity and quality. Full article
(This article belongs to the Special Issue Sustainable Practices in Smart Greenhouses)
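As a rough illustration of the evaluation protocol this abstract describes (classifiers trained with stratified 10-fold cross-validation on combined segmentation-derived feature sets), here is a minimal Python sketch. The feature matrices, labels, and the random-forest classifier are placeholders, not the authors' data or model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    X_rcnn = rng.normal(size=(216, 27))   # stand-in for 27 R-CNN-derived features
    X_yolo = rng.normal(size=(216, 27))   # stand-in for 27 YOLOv8-derived features
    y = rng.integers(0, 4, size=216)      # four ripening stages

    X_combined = np.hstack([X_rcnn, X_yolo])
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X_combined, y, cv=cv, scoring="f1_macro")
    print(f"mean macro F1 over 10 folds: {scores.mean():.3f}")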
55 pages, 3089 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs), often trained and run on graphics processing units (GPUs), can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques (compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training) that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design, from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions, that aligns computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules-per-inference metrics. Our aim is a practical, battery-friendly wireless sensing stack ready for smart-home and 6G-era deployments. Full article
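To make one of the lightweight-learning levers named above concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch on a toy CSI classifier; the architecture and class count are illustrative assumptions, not drawn from any surveyed system.

    import torch
    import torch.nn as nn

    csi_classifier = nn.Sequential(      # toy CSI feature classifier (placeholder)
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 6),                # e.g. six activity classes
    )
    quantized = torch.quantization.quantize_dynamic(
        csi_classifier, {nn.Linear}, dtype=torch.qint8
    )
    x = torch.randn(1, 256)              # one CSI-derived feature vector
    print(quantized(x).shape)            # same interface, int8 weights in Linear layers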
28 pages, 2184 KB  
Article
AptEVS: Adaptive Edge-and-Vehicle Scheduling for Hierarchical Federated Learning over Vehicular Networks
by Yu Tian, Nina Wang, Zongshuai Zhang, Wenhao Zou, Liangjie Zhao, Shiyao Liu and Lin Tian
Electronics 2026, 15(2), 479; https://doi.org/10.3390/electronics15020479 - 22 Jan 2026
Abstract
Hierarchical federated learning (HFL) has emerged as a promising paradigm for distributed machine learning over vehicular networks. Despite recent advances in vehicle selection and resource allocation, most existing approaches still adopt a fixed Edge-and-Vehicle Scheduling (EVS) configuration that keeps the number of participating edge nodes and vehicles per node constant across training rounds. However, given the diverse training tasks and dynamic vehicular environments, our experiments confirm that such static configurations struggle to efficiently meet the task-specific requirements across model accuracy, time delay, and energy consumption. To address this, we first formulate a unified, long-term training cost metric that balances these conflicting objectives. We then propose AptEVS, an adaptive scheduling framework based on deep reinforcement learning (DRL), designed to minimize this cost. The core of AptEVS is its phase-aware design, which adapts the scheduling strategy by first identifying the current training phase and then switching to specialized strategies accordingly. Extensive simulations demonstrate that AptEVS learns an effective scheduling policy online from scratch, consistently outperforming baselines and reducing the long-term training cost by up to 66.0%. Our findings demonstrate that phase-aware DRL is both feasible and highly effective for resource scheduling over complex vehicular networks. Full article
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)
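The abstract refers to a unified, long-term training cost that balances accuracy, delay, and energy; the snippet below is a purely illustrative weighted per-round combination of those three terms, not the paper's actual formulation.

    def round_cost(accuracy, delay_s, energy_j, w_acc=1.0, w_delay=0.01, w_energy=0.001):
        # illustrative trade-off weights; the coefficients are assumptions
        return w_acc * (1.0 - accuracy) + w_delay * delay_s + w_energy * energy_j

    print(round_cost(accuracy=0.92, delay_s=45.0, energy_j=1200.0))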
20 pages, 908 KB  
Article
Wearable ECG-PPG Deep Learning Model for Cardiac Index-Based Noninvasive Cardiac Output Estimation in Cardiac Surgery Patients
by Minwoo Kim, Min Dong Sung, Jimyeoung Jung, Sung Pil Cho, Junghwan Park, Sarah Soh, Hyun Chel Joo and Kyung Soo Chung
Sensors 2026, 26(2), 735; https://doi.org/10.3390/s26020735 - 22 Jan 2026
Abstract
Accurate cardiac output (CO) measurement is vital for hemodynamic management; however, it usually requires invasive monitoring, which limits its continuous and out-of-hospital use. Wearable sensors integrated with deep learning offer a noninvasive alternative. This study developed and validated a lightweight deep learning model using wearable electrocardiography (ECG) and photoplethysmography (PPG) signals to predict CO and examined whether cardiac index-based normalization (Cardiac Index (CI) = CO/body surface area) improves performance. Twenty-seven patients who underwent cardiac surgery and had pulmonary artery catheters were prospectively enrolled. Single-lead ECG (HiCardi+ chest patch) and finger PPG (WristOx2 3150) were recorded simultaneously and processed through an ECG–PPG fusion network with cross-modal interaction. Three models were trained: (1) CI prediction, (2) direct CO prediction, and (3) indirect CO prediction, in which CO = predicted CI × body surface area. Reference values were derived from thermodilution. The CI model achieved the best performance, and the indirect CO model showed significant reductions in error/agreement metrics (MAE/RMSE/bias; p < 0.0001), while correlation-based metrics were reported descriptively without implying statistical significance. The Pearson correlation coefficient (PCC) and percentage error (PE) for the indirect CO estimates were 0.904 and 23.75%, respectively. The indirect CO estimates met the predefined PE < 30% agreement benchmark for method comparison, although this threshold is not a universal clinical standard. These results demonstrate that wearable ECG–PPG fusion deep learning can achieve accurate, noninvasive CO estimation and that CI-based normalization enhances model agreement with pulmonary artery catheter measurements, supporting continuous catheter-free hemodynamic monitoring. Full article
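A minimal sketch of the indirect-CO step and the percentage-error check mentioned above, assuming the Du Bois body-surface-area formula and the common 1.96 × SD(bias) / mean(reference) definition of percentage error; the abstract specifies neither choice, and the numbers below are placeholders rather than study data.

    import numpy as np

    def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
        # Du Bois formula (an assumption; the paper only says "body surface area")
        return 0.007184 * height_cm**0.725 * weight_kg**0.425

    def indirect_co(predicted_ci: np.ndarray, bsa_m2: float) -> np.ndarray:
        return predicted_ci * bsa_m2                      # CO = CI x body surface area

    def percentage_error(co_pred: np.ndarray, co_ref: np.ndarray) -> float:
        bias = co_pred - co_ref
        return 100.0 * 1.96 * bias.std(ddof=1) / co_ref.mean()

    bsa = bsa_du_bois(172.0, 70.0)                        # illustrative patient
    co_ref = np.array([4.8, 5.2, 4.5, 5.9, 5.0])          # stand-in thermodilution values (L/min)
    co_pred = indirect_co(np.array([2.55, 2.80, 2.40, 3.10, 2.70]), bsa)
    print(f"PE = {percentage_error(co_pred, co_ref):.1f}% (study benchmark: < 30%)")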
24 pages, 883 KB  
Article
SDA-Net: A Symmetric Dual-Attention Network with Multi-Scale Convolution for MOOC Dropout Prediction
by Yiwen Yang, Chengjun Xu and Guisheng Tian
Symmetry 2026, 18(1), 202; https://doi.org/10.3390/sym18010202 - 21 Jan 2026
Abstract
With the rapid development of Massive Open Online Courses (MOOCs), high dropout rates have become a major challenge, limiting the quality of online education and the effectiveness of targeted interventions. Although existing MOOC dropout prediction methods have incorporated deep learning and attention mechanisms to improve predictive performance to some extent, they still face limitations in modeling differences in course difficulty and learning engagement, capturing multi-scale temporal learning behaviors, and controlling model complexity. To address these issues, this paper proposes a MOOC dropout prediction model that integrates multi-scale convolution with a symmetric dual-attention mechanism, termed SDA-Net. In the feature modeling stage, the model constructs a time allocation ratio matrix (MRatio), a resource utilization ratio matrix (SRatio), and a relative group-level ranking matrix (Rank) to characterize learners’ behavioral differences in terms of time investment, resource usage structure, and relative performance, thereby mitigating the impact of course difficulty and individual effort disparities on prediction outcomes. Structurally, SDA-Net extracts learning behavior features at different temporal scales through multi-scale convolution and incorporates a symmetric dual-attention mechanism composed of spatial and channel attention to adaptively focus on information highly correlated with dropout risk, enhancing feature representation while maintaining a relatively lightweight architecture. Experimental results on the KDD Cup 2015 and XuetangX public datasets demonstrate that SDA-Net achieves more competitive performance than traditional machine learning methods, mainstream deep learning models, and attention-based approaches on major evaluation metrics; in particular, it attains an accuracy of 93.7% on the KDD Cup 2015 dataset and achieves an absolute improvement of 0.2 percentage points in Accuracy and 0.4 percentage points in F1-Score on the XuetangX dataset, confirming that the proposed model effectively balances predictive performance and model complexity. Full article
(This article belongs to the Section Computer)
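The sketch below shows a generic channel-plus-spatial attention block over a 1D behavioral feature map, in the spirit of the symmetric dual-attention mechanism described above; it is not the authors' SDA-Net implementation, and all sizes are illustrative.

    import torch
    import torch.nn as nn

    class DualAttention(nn.Module):
        """Generic channel + spatial attention over a feature map of shape (B, C, T)."""
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )
            self.spatial_conv = nn.Sequential(
                nn.Conv1d(2, 1, kernel_size=7, padding=3), nn.Sigmoid()
            )

        def forward(self, x):                       # x: (B, C, T)
            ca = self.channel_mlp(x.mean(dim=2))    # (B, C) channel weights
            x = x * ca.unsqueeze(-1)
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)   # (B, 2, T)
            sa = self.spatial_conv(pooled)          # (B, 1, T) spatial weights
            return x * sa

    feats = torch.randn(8, 32, 30)                  # e.g. 32 channels over 30 time steps
    print(DualAttention(32)(feats).shape)           # torch.Size([8, 32, 30])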
36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address issues such as the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of diagnostic results. PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture based on physical topology graphs and signal similarity graphs. The experimental results show that the dual-graph complementary architecture enhances diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) technology is used to analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that includes performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) technology is innovatively combined to integrate model performance and Shapley Additive Explanations (SHAP) reliability assessment through the main report prompt, while the sub-report prompt enables detailed fault analysis and repair decision generation. Finally, experiments demonstrate that this approach avoids the uncertainty of directly using large models for fault diagnosis: we delegate all fault diagnosis tasks and core explainability tasks to more mature deep learning algorithms and XAI technology and only leverage the powerful textual reasoning capabilities of large models to process pre-quantified, fact-based textual information (e.g., model performance metrics, SHAP explanation results). This method enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic while reducing the risk of large model hallucinations by restricting large models to reasoning over grounded, fact-based textual content rather than direct fault diagnosis, providing verifiable intelligent decision support for industrial fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
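As a concrete illustration of the "pre-quantified, fact-based" explanation inputs described above, the sketch below computes SHAP values for a stand-in fault classifier and renders the top attributions as a short text fact that could be slotted into an LLM prompt; the model, features, and wording are assumptions, not the paper's PI-Dual-STGCN pipeline.

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # toy "fault present" label
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    sv = shap.TreeExplainer(model).shap_values(X[:1])     # (1, n_features) for this binary model
    contrib = np.abs(np.asarray(sv)).reshape(-1)          # attribution magnitude per feature
    top = np.argsort(contrib)[::-1][:3]
    facts = "; ".join(f"feature {i}: |SHAP| = {contrib[i]:.3f}" for i in top)
    print("Fact snippet for the LLM prompt ->", facts)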
37 pages, 1683 KB  
Review
Sustainable Estimation of Tree Biomass and Volume Using UAV Imagery: A Comprehensive Review
by Dan Munteanu, Simona Moldovanu, Gabriel Murariu and Lucian Dinca
Sustainability 2026, 18(2), 1095; https://doi.org/10.3390/su18021095 - 21 Jan 2026
Abstract
Accurate estimation of tree biomass and volume is essential for sustainable forest management, climate change mitigation, and ecosystem service assessment. Recent advances in unmanned aerial vehicle (UAV) technology enable the acquisition of ultra-high-resolution optical and three-dimensional data, providing a resource-efficient alternative to traditional field-based inventories. This review synthesizes 181 peer-reviewed studies on UAV-based estimation of tree biomass and volume across forestry, agricultural, and urban ecosystems, integrating bibliometric analysis with qualitative literature review. The results reveal a clear methodological shift from early structure-from-motion photogrammetry toward integrated frameworks combining three-dimensional canopy metrics, multispectral or LiDAR data, and machine learning or deep learning models. Across applications, tree height, crown geometry, and canopy volume consistently emerge as the most robust predictors of biomass and volume, enabling accurate individual-tree and plot-level estimates while substantially reducing field effort and ecological disturbance. UAV-based approaches demonstrate particularly strong performance in orchards, plantation forests, and urban environments, and increasing applicability in complex systems such as mangroves and mixed forests. Despite significant progress, key challenges remain, including limited methodological standardization, insufficient uncertainty quantification, scaling constraints beyond local extents, and the underrepresentation of biodiversity-rich and structurally complex ecosystems. Addressing these gaps is critical for the operational integration of UAV-derived biomass and volume estimates into sustainable land management, carbon accounting, and climate-resilient monitoring frameworks. Full article
32 pages, 472 KB  
Review
Electrical Load Forecasting in the Industrial Sector: A Literature Review of Machine Learning Models and Architectures for Grid Planning
by Jannis Eckhoff, Simran Wadhwa, Marc Fette, Jens Peter Wulfsberg and Chathura Wanigasekara
Energies 2026, 19(2), 538; https://doi.org/10.3390/en19020538 - 21 Jan 2026
Abstract
The energy transition, driven by the global shift toward renewables and electrification, necessitates accurate and efficient prediction of electrical load profiles to quantify energy consumption. Therefore, this systematic literature review (SLR), following PRISMA guidelines, synthesizes hybrid architectures for sequential electrical load profiles, aiming to span statistical techniques, machine learning (ML), and deep learning (DL) strategies for optimizing performance and practical viability. The findings reveal a dominant trend towards complex hybrid models leveraging the combined strengths of DL architectures such as long short-term memory (LSTM) and optimization algorithms such as genetic algorithms and Particle Swarm Optimization (PSO) to capture non-linear relationships. Thus, hybrid models achieve superior performance by synergistically integrating components such as Convolutional Neural Networks (CNNs) for feature extraction and LSTMs for temporal modeling with feature selection algorithms, which collectively capture local trends, cross-correlations, and long-term dependencies in the data. A crucial challenge identified is the lack of an established framework to manage adaptable output lengths in dynamic neural network forecasting. Addressing this, we propose the first explicit idea of decoupling output length predictions from the core signal prediction task. A key finding is that while models, particularly optimization-tuned hybrid architectures, have demonstrated quantitative superiority over conventional shallow methods, their performance assessment relies heavily on statistical measures like Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). However, for comprehensive performance assessment, there is a crucial need for developing tailored, application-based metrics that integrate system economics and major planning aspects to ensure reliable domain-specific validation. Full article
(This article belongs to the Special Issue Power Systems and Smart Grids: Innovations and Applications)
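To ground the CNN-plus-LSTM hybrid pattern highlighted above, here is a minimal PyTorch sketch of a load forecaster that uses 1D convolutions for local feature extraction and an LSTM for temporal modeling; the layer sizes, one-week hourly input window, and 24-step horizon are illustrative assumptions, not any reviewed model.

    import torch
    import torch.nn as nn

    class CNNLSTMForecaster(nn.Module):
        """CNN front-end for local patterns, LSTM for longer temporal dependencies."""
        def __init__(self, horizon: int = 24):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
            self.head = nn.Linear(64, horizon)

        def forward(self, load):                   # load: (B, T) past load values
            z = self.cnn(load.unsqueeze(1))        # (B, 32, T) local features
            out, _ = self.lstm(z.transpose(1, 2))  # (B, T, 64)
            return self.head(out[:, -1])           # next `horizon` steps

    x = torch.randn(4, 168)                        # one week of hourly load per sample
    print(CNNLSTMForecaster()(x).shape)            # torch.Size([4, 24])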
15 pages, 2430 KB  
Article
Improved Detection of Small (<2 cm) Hepatocellular Carcinoma via Deep Learning-Based Synthetic CT Hepatic Arteriography: A Multi-Center External Validation Study
by Jung Won Kwak, Sung Bum Cho, Ki Choon Sim, Jeong Woo Kim, In Young Choi and Yongwon Cho
Diagnostics 2026, 16(2), 343; https://doi.org/10.3390/diagnostics16020343 - 21 Jan 2026
Abstract
Background/Objectives: Early detection of hepatocellular carcinoma (HCC), particularly small lesions (<2 cm), which is crucial for curative treatment, remains challenging with conventional liver dynamic computed tomography (LDCT). We aimed to develop a deep learning algorithm to generate synthetic CT during hepatic arteriography (CTHA) from non-invasive LDCT and evaluate its lesion detection performance. Methods: A cycle-consistent generative adversarial network with an attention module [Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization (U-GAT-IT)] was trained using paired LDCT and CTHA images from 277 patients. The model was validated using internal (68 patients, 139 lesions) and external sets from two independent centers (87 patients, 117 lesions). Two radiologists assessed detection performance using a 5-point scale and the detection rate. Results: Synthetic CTHA significantly improved the detection of sub-centimeter (<1 cm) HCCs compared with LDCT in the internal set (69.6% vs. 47.8%, p < 0.05). This improvement was robust in the external set; synthetic CTHA detected a greater number of small lesions than LDCT. Quantitative metrics (structural similarity index measure and peak signal-to-noise ratio) indicated high structural fidelity. Conclusions: Deep-learning–based synthetic CTHA significantly enhanced the detection of small HCCs compared with standard LDCT, offering a non-invasive alternative with high detection sensitivity, which was validated across multicentric data. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
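A minimal sketch of the two fidelity metrics named above (structural similarity index measure and peak signal-to-noise ratio), computed with scikit-image on placeholder arrays standing in for a synthetic CTHA slice and its reference image.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(0)
    reference = rng.random((256, 256)).astype(np.float32)   # stand-in reference slice
    synthetic = np.clip(reference + 0.05 * rng.standard_normal((256, 256)), 0, 1).astype(np.float32)

    print("PSNR:", peak_signal_noise_ratio(reference, synthetic, data_range=1.0))
    print("SSIM:", structural_similarity(reference, synthetic, data_range=1.0))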
22 pages, 7392 KB  
Article
Recursive Deep Feature Learning for Hyperspectral Image Super-Resolution
by Jiming Liu, Chen Yi and Hehuan Li
Appl. Sci. 2026, 16(2), 1060; https://doi.org/10.3390/app16021060 - 20 Jan 2026
Abstract
The advancement of hyperspectral image super-resolution (HSI-SR) has been significantly propelled by deep learning techniques. However, current methods predominantly rely on 2D or 3D convolutional networks, which are inherently local and thus limited in modeling long-range spectral–depth interactions. This work introduces a novel network architecture designed to address this gap through recursive deep feature learning. Our model initiates with 3D convolutions to extract preliminary spectral–spatial features, which are progressively refined via densely connected grouped convolutions. A core innovation is a recursively formulated generalized self-attention mechanism, which captures long-range dependencies across the spectral dimension with linear complexity. To reconstruct fine spatial details across multiple scales, a progressive upsampling strategy is further incorporated. Evaluations on several public benchmarks demonstrate that the proposed approach outperforms existing state-of-the-art methods in both quantitative metrics and visual quality. Full article
(This article belongs to the Special Issue Remote Sensing Image Processing and Application, 2nd Edition)
21 pages, 4209 KB  
Article
High-Resolution Forest Structure Mapping with Deep Learning to Evaluate Restoration Outcomes
by J. Nicholas Hendershot, Becky L. Estes and Kristen N. Wilson
Remote Sens. 2026, 18(2), 346; https://doi.org/10.3390/rs18020346 - 20 Jan 2026
Abstract
Forest management interventions in fire-prone western U.S. forests aim to restore structural heterogeneity, yet tracking treatment efficacy at landscape scales remains a persistent challenge. Traditional monitoring tools often lack the spatial resolution or temporal frequency needed to assess fine-scale structural outcomes. While deep learning approaches for mapping canopy structure from high-resolution satellite imagery have advanced rapidly, their application to operational monitoring of restoration outcomes with independent validation remains limited. This study demonstrates and validates a scalable monitoring workflow that integrates high-resolution PlanetScope multispectral imagery (~4.77 m) with a residual U-Net convolutional neural network (CNN) to quantify canopy structure dynamics in support of forest restoration programs. Trained using 3 m canopy cover data from the California Forest Observatory (CFO) as a reference, the model accurately segmented forest canopy from openings across a large, independent test area of ~1761 km2, with an overall accuracy of 92.2%, and an F1-score of 95.1%. Independent validation against airborne LiDAR across 140 km2 of heterogeneous terrain confirmed operational performance (overall accuracy 85.9%, F1-score 0.77 for canopy gaps). We applied this framework to quantify structural changes within the North Yuba Collaborative Forest Landscape Restoration Program from 2020 to 2024, providing managers with actionable metrics to evaluate treatment effectiveness against historical reference conditions. The treatments created 564 acres of new openings, significantly increasing structural heterogeneity, with 56% of new open area located within 12 m of residual canopy. While treatment outcomes aligned with the goal of fragmenting dense canopy, the resulting large openings (>5 acres) slightly exceeded historical reference conditions for the area. This validated workflow translates high-resolution satellite imagery into timely, actionable metrics of forest structure, enabling managers to rapidly evaluate treatment impacts and refine restoration strategies in fire-prone ecosystems. Full article
(This article belongs to the Section Ecological Remote Sensing)
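A minimal sketch of the reported evaluation metrics (overall accuracy and F1) for a binary canopy-versus-gap mask, using scikit-learn on illustrative arrays rather than the study's PlanetScope or LiDAR data.

    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score

    rng = np.random.default_rng(0)
    reference = rng.integers(0, 2, size=(512, 512))                  # stand-in reference mask
    predicted = np.where(rng.random((512, 512)) < 0.9, reference, 1 - reference)

    print("overall accuracy:", accuracy_score(reference.ravel(), predicted.ravel()))
    print("F1 (gap class = 1):", f1_score(reference.ravel(), predicted.ravel(), pos_label=1))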
40 pages, 7546 KB  
Article
Hierarchical Soft Actor–Critic Agent with Automatic Entropy, Twin Critics, and Curriculum Learning for the Autonomy of Rock-Breaking Machinery in Mining Comminution Processes
by Guillermo González, John Kern, Claudio Urrea and Luis Donoso
Processes 2026, 14(2), 365; https://doi.org/10.3390/pr14020365 - 20 Jan 2026
Abstract
This work presents a hierarchical deep reinforcement learning (DRL) framework based on Soft Actor–Critic (SAC) for the autonomy of rock-breaking machinery in surface mining comminution processes. The proposed approach explicitly integrates mobile navigation and hydraulic manipulation as coupled subprocesses within a unified decision-making architecture, designed to operate under the unstructured and highly uncertain conditions characteristic of open-pit mining operations. The system employs a hysteresis-based switching mechanism between specialized SAC subagents, incorporating automatic entropy tuning to balance exploration and exploitation, twin critics to mitigate value overestimation, and curriculum learning to manage the progressive complexity of the task. Two coupled subsystems are considered, namely: (i) a tracked mobile machine with a differential drive, whose continuous control enables safe navigation, and (ii) a hydraulic manipulator equipped with an impact hammer, responsible for the fragmentation and dismantling of rock piles through continuous joint torque actuation. Environmental perception is modeled using processed perceptual variables obtained from point clouds generated by an overhead depth camera, complemented with state variables of the machinery. System performance is evaluated in unstructured and uncertain simulated environments using process-oriented metrics, including operational safety, task effectiveness, control smoothness, and energy consumption. The results show that the proposed framework yields robust, stable policies that achieve superior overall process performance compared to equivalent hierarchical configurations and ablation variants, thereby supporting its potential applicability to DRL-based mining automation systems. Full article
(This article belongs to the Special Issue Advances in the Control of Complex Dynamic Systems)
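The abstract mentions a hysteresis-based switching mechanism between specialized SAC subagents; the toy sketch below shows what hysteresis switching looks like in code, keyed on an assumed distance-to-rock-pile state with made-up engage/release thresholds (not the paper's actual rule).

    def select_subagent(distance_m: float, current: str,
                        engage_at: float = 1.5, release_at: float = 2.5) -> str:
        # different engage/release thresholds prevent rapid back-and-forth switching
        if current == "navigate" and distance_m <= engage_at:
            return "manipulate"
        if current == "manipulate" and distance_m >= release_at:
            return "navigate"
        return current

    mode = "navigate"
    for d in [4.0, 2.0, 1.4, 1.8, 2.6]:
        mode = select_subagent(d, mode)
        print(d, "->", mode)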
18 pages, 10969 KB  
Article
Simulation Data-Based Dual Domain Network (Sim-DDNet) for Motion Artifact Reduction in MR Images
by Seong-Hyeon Kang, Jun-Young Chung, Youngjin Lee and for The Alzheimer’s Disease Neuroimaging Initiative
Magnetochemistry 2026, 12(1), 14; https://doi.org/10.3390/magnetochemistry12010014 - 20 Jan 2026
Abstract
Brain magnetic resonance imaging (MRI) is highly susceptible to motion artifacts that degrade fine structural details and undermine quantitative analysis. Conventional U-Net-based deep learning approaches for motion artifact reduction typically operate only in the image domain and are often trained on data with simplified motion patterns, thereby limiting physical plausibility and generalization. We propose Sim-DDNet, a simulation-data-based dual-domain network that combines k-space-based motion simulation with a joint image-k-space reconstruction architecture. Motion-corrupted data were generated from T2-weighted Alzheimer’s Disease Neuroimaging Initiative brain MR scans using a k-space replacement scheme with three to five random rotational and translational events per volume, yielding 69,283 paired samples (49,852/6969/12,462 for training/validation/testing). Sim-DDNet integrates a real-valued U-Net-like image branch and a complex-valued k-space branch using cross attention, FiLM-based feature modulation, soft data consistency, and composite loss comprising L1, structural similarity index measure (SSIM), perceptual, and k-space-weighted terms. On the independent test set, Sim-DDNet achieved a peak signal-to-noise ratio of 31.05 dB, SSIM of 0.85, and gradient magnitude similarity deviation of 0.077, consistently outperforming U-Net and U-Net++ across all three metrics while producing less blurring, fewer residual ghost/streak artifacts, and reduced hallucination of non-existent structures. These results indicate that dual-domain, data-consistency-aware learning, which explicitly exploits k-space information, is a promising approach for physically plausible motion artifact correction in brain MRI. Full article
(This article belongs to the Special Issue Magnetic Resonances: Current Applications and Future Perspectives)
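A minimal 2D sketch of the k-space-replacement motion simulation idea described above: several "poses" of the image are generated, and their k-space lines overwrite the clean acquisition from random positions onward. The paper works on 3D ADNI volumes with three to five events per volume; the 2D phantom, motion ranges, and scipy-based resampling here are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import rotate, shift

    def simulate_motion(image: np.ndarray, n_events: int = 3, rng=None) -> np.ndarray:
        rng = rng or np.random.default_rng(0)
        k_corrupt = np.fft.fftshift(np.fft.fft2(image)).copy()
        n_rows = image.shape[0]
        for start in sorted(rng.integers(0, n_rows, size=n_events)):
            # a new pose takes effect from this phase-encoding line onward
            moved = shift(rotate(image, angle=rng.uniform(-5, 5), reshape=False),
                          shift=rng.uniform(-3, 3, size=2))
            k_moved = np.fft.fftshift(np.fft.fft2(moved))
            k_corrupt[start:, :] = k_moved[start:, :]
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))

    clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0   # toy phantom
    corrupted = simulate_motion(clean)
    print(corrupted.shape, float(np.abs(corrupted - clean).mean()))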
33 pages, 4465 KB  
Article
Environmentally Sustainable HVAC Management in Smart Buildings Using a Reinforcement Learning Framework SACEM
by Abdullah Alshammari, Ammar Ahmed E. Elhadi and Ashraf Osman Ibrahim
Sustainability 2026, 18(2), 1036; https://doi.org/10.3390/su18021036 - 20 Jan 2026
Abstract
Heating, ventilation, and air-conditioning (HVAC) systems dominate energy consumption in hot-climate buildings, where maintaining occupant comfort under extreme outdoor conditions remains a critical challenge, particularly under emerging time-of-use (TOU) electricity pricing schemes. While deep reinforcement learning (DRL) has shown promise for adaptive HVAC control, existing approaches often suffer from comfort violations, myopic decision making, and limited robustness to uncertainty. This paper proposes a comfort-first hybrid control framework that integrates Soft Actor–Critic (SAC) with a Cross-Entropy Method (CEM) refinement layer, referred to as SACEM. The framework combines data-efficient off-policy learning with short-horizon predictive optimization and safety-aware action projection to explicitly prioritize thermal comfort while minimizing energy use, operating cost, and peak demand. The control problem is formulated as a Markov Decision Process using a simplified thermal model representative of commercial buildings in hot desert climates. The proposed approach is evaluated through extensive simulation using Saudi Arabian summer weather conditions, realistic occupancy patterns, and a three-tier TOU electricity tariff. Performance is assessed against state-of-the-art baselines, including PPO, TD3, and standard SAC, using comfort, energy, cost, and peak demand metrics, complemented by ablation and disturbance-based stress tests. Results show that SACEM achieves a comfort score of 95.8%, while reducing energy consumption and operating cost by approximately 21% relative to the strongest baseline. The findings demonstrate that integrating comfort-dominant reward design with decision-time look-ahead yields robust, economically viable HVAC control suitable for deployment in hot-climate smart buildings. Full article
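To illustrate the Cross-Entropy Method refinement layer that SACEM adds on top of SAC, here is a generic CEM action-refinement sketch; the one-step cost model, action bounds, and hyperparameters are assumptions for demonstration, not the paper's.

    import numpy as np

    def cem_refine(policy_action, cost_fn, iters=5, pop=64, elite=8, sigma=0.2, rng=None):
        # iteratively re-fit a Gaussian around the lowest-cost candidates near the SAC proposal
        rng = rng or np.random.default_rng(0)
        mean = np.array(policy_action, dtype=float)
        std = np.full_like(mean, sigma)
        for _ in range(iters):
            candidates = np.clip(rng.normal(mean, std, size=(pop, mean.size)), -1.0, 1.0)
            costs = np.array([cost_fn(a) for a in candidates])
            elites = candidates[np.argsort(costs)[:elite]]
            mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
        return mean

    # toy one-step cost: comfort-setpoint deviation weighted above an energy term
    cost = lambda a: 10.0 * (a[0] - 0.3) ** 2 + 0.5 * abs(a[1])
    print(cem_refine(np.array([0.0, 0.8]), cost))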
32 pages, 3054 KB  
Article
Identification of Cholesterol in Plaques of Atherosclerotic Using Magnetic Resonance Spectroscopy and 1D U-Net Architecture
by Angelika Myśliwiec, Dawid Leksa, Avijit Paul, Marvin Xavierselvan, Adrian Truszkiewicz, Dorota Bartusik-Aebisher and David Aebisher
Molecules 2026, 31(2), 352; https://doi.org/10.3390/molecules31020352 - 19 Jan 2026
Abstract
Cholesterol plays a fundamental role in the human body—it stabilizes cell membranes, modulates gene expression, and is a precursor to steroid hormones, vitamin D, and bile salts. Its correct level is crucial for homeostasis, while both excess and deficiency are associated with serious metabolic and health consequences. Excessive accumulation of cholesterol leads to the development of atherosclerosis, while its deficiency disrupts the transport of fat-soluble vitamins. Magnetic resonance spectroscopy (MRS) enables the detection of cholesterol esters and the differentiation between their liquid and crystalline phases, but the technical limitations of clinical MRI systems require the use of dedicated coils and sequence modifications. This study demonstrates the feasibility of using MRS to identify cholesterol-specific spectral signatures in atherosclerotic plaque through ex vivo analysis. Using a custom-designed experimental coil adapted for small-volume samples, we successfully detected characteristic cholesterol peaks from plaque material dissolved in chloroform, with spectral signatures corresponding to established NMR databases. To further enhance spectral quality, a deep-learning denoising framework based on a 1D U-Net architecture was implemented, enabling the recovery of low-intensity cholesterol peaks that would otherwise be obscured by noise. The trained U-Net was applied to experimental MRS data from atherosclerotic plaques, where it significantly outperformed traditional denoising methods (Gaussian, Savitzky–Golay, wavelet, median) across six quantitative metrics (SNR, PSNR, SSIM, RMSE, MAE, correlation), enhancing low-amplitude cholesteryl ester detection. This approach substantially improved signal clarity and the interpretability of cholesterol-related resonances, supporting more accurate downstream spectral assessment. The integration of MRS with NMR-based lipidomic analysis, which allows the identification of lipid signatures associated with plaque progression and destabilization, is becoming increasingly important. At the same time, the development of high-resolution techniques such as μOCT provides evidence for the presence of cholesterol crystals and their potential involvement in the destabilization of atherosclerotic lesions. In summary, nanotechnology-assisted MRI has the potential to become an advanced tool in the proof-of-concept of atherosclerosis, enabling not only the identification of cholesterol and its derivatives, but also the monitoring of treatment efficacy. However, further clinical studies are necessary to confirm the practical usefulness of these solutions and their prognostic value in assessing cardiovascular risk. Full article