Search Results (5,363)

Search Parameters:
Keywords = deep neural network algorithms

29 pages, 2671 KB  
Article
Sustainable and Reliable Smart Grids: An Abnormal Condition Diagnosis Method for Low-Voltage Distribution Nodes via Multi-Source Domain Deep Transfer Learning and Cloud-Edge Collaboration
by Dongli Jia, Tianyuan Kang, Xueshun Ye, Jun Zhou and Zhenyu Zhang
Sustainability 2026, 18(3), 1550; https://doi.org/10.3390/su18031550 - 3 Feb 2026
Abstract
The transition toward sustainable and resilient new-type power systems requires robust diagnostic frameworks for terminal power supply units to ensure continuous grid stability. To that end, this paper proposes a multi-source-domain deep transfer learning method for the abnormal condition diagnosis of low-voltage distribution nodes within a cloud-edge collaborative framework. This approach integrates feature selection based on the Categorical Boosting (CatBoost) algorithm with a hybrid architecture combining a Convolutional Neural Network (CNN) and a Residual Network (ResNet). Additionally, it utilizes a multi-loss adaptation strategy consisting of Multi-Kernel Maximum Mean Discrepancy (MK-MMD), Local Maximum Mean Discrepancy (LMMD), and Mean Squared Error (MSE) to effectively bridge domain gaps and ensure diagnostic consistency. By balancing global commonality with local adaptation, the framework optimizes resource efficiency, reducing collaborative training time by 19.3%. Experimental results confirm that the method effectively prevents equipment failure, achieving diagnostic accuracies of 98.29% for low-voltage anomalies and 88.96% for three-phase imbalance conditions.
(This article belongs to the Special Issue Microgrids, Electrical Power and Sustainable Energy Systems)
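Since the abstract above hinges on MMD-style adaptation losses, here is a minimal PyTorch sketch of a multi-kernel MMD estimate between source- and target-domain feature batches; the Gaussian bandwidths in `sigmas` are illustrative choices, not the paper's settings.

```python
import torch

def mk_mmd(source: torch.Tensor, target: torch.Tensor,
           sigmas=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    """Biased multi-kernel MMD estimate between two (n, d) feature batches."""
    x = torch.cat([source, target], dim=0)
    d2 = torch.cdist(x, x).pow(2)                  # pairwise squared distances
    # Average a bank of Gaussian kernels with different bandwidths
    k = sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas) / len(sigmas)
    n = source.size(0)
    return k[:n, :n].mean() + k[n:, n:].mean() - 2.0 * k[:n, n:].mean()
```

In a transfer learning loop, a term like this is added to the task loss so the source and target feature distributions are pulled together.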
8 pages, 780 KB  
Proceeding Paper
Enhancing Imbalanced Data Classification Using Style-Based Generative Adversarial Network-Based Data Augmentation: A Case Study of Computed Tomography Images of Brain Stroke
by Jhao-Sin Lai, Liang-Sian Lin, Pin-Chi Chen, Cheng-En Xie, Yao-Yu Chiang and Chien-Hsin Lin
Eng. Proc. 2025, 120(1), 26; https://doi.org/10.3390/engproc2025120026 - 2 Feb 2026
Abstract
Stroke is a leading cause of death and disability. However, brain computed tomography image classification using machine learning and deep learning algorithms frequently suffers from a class imbalance problem, making it difficult to effectively extract deep, detailed features from instances of minority stroke lesions. In this study, we systematically implement three style-based generative adversarial network (StyleGAN) data augmentation approaches (StyleGAN2, StyleGAN3, and conditional StyleGAN3) to address class imbalance in brain stroke classification. Furthermore, we deploy an ensemble learning-based deep neural network to enhance the effect of these data augmentation algorithms on downstream classification tasks. Experimental results show that StyleGAN3 outperforms the other two StyleGAN data augmentation approaches in terms of precision, recall, and F1-score when addressing highly imbalanced brain stroke classification. Overall, this paper demonstrates the efficacy of three StyleGAN-based data augmentation approaches in addressing imbalanced brain stroke detection.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
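To make the augmentation idea concrete, the sketch below pads a minority class with generator output until the classes are even. `generate` is a hypothetical stand-in for sampling from a trained StyleGAN-style model; the real sampling call depends on the implementation used.

```python
import numpy as np

def balance_with_synthetic(images, labels, minority, generate, seed=0):
    """Pad the minority class with synthetic samples until classes are even.

    `generate(n)` is a placeholder for drawing n images from a trained
    generator (e.g., StyleGAN2/3); it is not a real library call.
    """
    labels = np.asarray(labels)
    n_min = int((labels == minority).sum())
    n_maj = len(labels) - n_min
    if n_maj <= n_min:
        return images, labels                      # already balanced
    synth = generate(n_maj - n_min)                # synthetic minority images
    images = np.concatenate([images, synth])
    labels = np.concatenate([labels, np.full(len(synth), minority)])
    idx = np.random.default_rng(seed).permutation(len(labels))
    return images[idx], labels[idx]                # reshuffle after padding
```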
31 pages, 3609 KB  
Review
The Machine-Learning-Driven Transformation of Forest Biometrics: Progress and Pathways Ahead
by Markos Progios and Maria J. Diamantopoulou
Forests 2026, 17(2), 200; https://doi.org/10.3390/f17020200 - 2 Feb 2026
Abstract
Forest biometrics has emerged as one of the fastest-growing scientific disciplines within environmental sciences. Machine learning (ML), an increasingly essential approach that uses effective algorithms, has proven to be an accurate and cost-efficient solution to forest-related problems. Recently, ML methods have evolved from traditional machine learning (TML) algorithms to more sophisticated approaches, such as deep learning (DL) and ensemble (ENS) methods. To uncover these developments, a structured review and analysis of 150 peer-reviewed studies was conducted, following a standardized workflow. The analysis reveals clear shifts in methodological adoption. During the most recent five-year period (2021–2025), DL and shallow neural network (SNN) methods dominated the literature, accounting for 37.5% of published studies, followed by ENS and TML methods, contributing 29.2% and 27.1%, respectively, reflecting a marked increase in the utilization of artificial neural networks (ANNs) and related algorithms across the domains of forest biometrics. Nevertheless, overall trends indicate that the benefits of TML methods still need further exploration for ground-based data. Advances in remote sensing and satellite data have brought large-scale remotely sensed data into environmental research, further boosting ML utilization. However, each field could be strengthened by implementing standardized evaluation metrics and broadening geographic representation. In this way, robust and widely transferable modeling frameworks for forest ecosystems can be developed. At the same time, further research on algorithms and their applicability to natural resources remains a key component of comprehensive and sustainable forest management.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
8 pages, 2335 KB  
Proceeding Paper
Evaluation of Impact of Convolutional Neural Network-Based Feature Extractors on Deep Reinforcement Learning for Autonomous Driving
by Che-Cheng Chang, Po-Ting Wu and Yee-Ming Ooi
Eng. Proc. 2025, 120(1), 27; https://doi.org/10.3390/engproc2025120027 - 2 Feb 2026
Abstract
Reinforcement Learning (RL) enables learning optimal decision-making strategies by maximizing cumulative rewards. Deep reinforcement learning (DRL) enhances this process by integrating deep neural networks (DNNs) for effective feature extraction from high-dimensional input data. Unlike prior studies focusing on algorithm design, we investigated the impact of different DNN feature extractors on DRL performance. We propose an enhanced feature extraction model to improve control effectiveness based on the proximal policy optimization (PPO) framework in autonomous driving scenarios. Through a comparative analysis of well-known convolutional neural networks (CNNs), namely MobileNet, SqueezeNet, and ResNet, the experimental results demonstrate that our model achieves higher cumulative rewards and better control stability, providing valuable insights for DRL applications in autonomous systems.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
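The comparison above boils down to swapping the trunk that turns camera frames into features. A minimal PyTorch sketch of that pluggable design, assuming a PPO-style actor-critic head (this is not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class PolicyWithExtractor(nn.Module):
    """Actor-critic heads on top of a swappable CNN feature extractor."""
    def __init__(self, extractor: nn.Module, feat_dim: int, n_actions: int):
        super().__init__()
        self.extractor = extractor                 # e.g. a MobileNet/ResNet trunk
        self.actor = nn.Linear(feat_dim, n_actions)
        self.critic = nn.Linear(feat_dim, 1)

    def forward(self, obs: torch.Tensor):
        z = self.extractor(obs)                    # image batch -> (B, feat_dim)
        return self.actor(z), self.critic(z)       # action logits, state value
```

Benchmarking extractors then amounts to constructing this module once per trunk and running the identical PPO training loop.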
28 pages, 2553 KB  
Review
Comparative Study of Supervised Deep Learning Architectures for Background Subtraction and Motion Segmentation on CDnet2014
by Oussama Boufares, Wajdi Saadaoui and Mohamed Boussif
Signals 2026, 7(1), 14; https://doi.org/10.3390/signals7010014 - 2 Feb 2026
Abstract
Foreground segmentation and background subtraction are critical components in many computer vision applications, such as intelligent video surveillance, urban security systems, and obstacle detection for autonomous vehicles. Although extensively studied over the past decades, these tasks remain challenging, particularly due to rapid illumination changes, dynamic backgrounds, cast shadows, and camera movements. The emergence of supervised deep learning-based methods has significantly enhanced performance, surpassing traditional approaches on the benchmark dataset CDnet2014. In this context, this paper provides a comprehensive review of recent supervised deep learning techniques applied to background subtraction, along with an in-depth comparative analysis of state-of-the-art approaches available on the official CDnet2014 results platform. Specifically, we examine several key architecture families, including convolutional neural networks (CNN and FCN), encoder–decoder models such as FgSegNet and Motion U-Net, adversarial frameworks (GAN), Transformer-based architectures, and hybrid methods combining intermittent semantic segmentation with rapid detection algorithms such as RT-SBS-v2. Beyond summarizing existing works, this review contributes a structured cross-family comparison under a unified benchmark, a focused analysis of performance behavior across challenging CDnet2014 scenarios, and a critical discussion of the trade-offs between segmentation accuracy, robustness, and computational efficiency for practical deployment.
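For contrast with the supervised models reviewed here, a classical running-average baseline of the kind CDnet2014 methods are measured against; the learning rate and threshold below are illustrative, not values from any surveyed paper.

```python
import numpy as np

def running_average_masks(frames, alpha=0.02, thresh=30):
    """Classical background subtraction: adapt a per-pixel background model
    and flag pixels that deviate from it by more than `thresh`."""
    bg = frames[0].astype(np.float32)
    masks = []
    for f in frames[1:]:
        f = f.astype(np.float32)
        masks.append((np.abs(f - bg) > thresh).astype(np.uint8))
        bg = (1.0 - alpha) * bg + alpha * f        # slow background update
    return masks
```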
34 pages, 2320 KB  
Article
Research on a Computing First Network Based on Deep Reinforcement Learning
by Qianwen Xu, Jingchao Wang, Shuangyin Ren, Zhongbo Li and Wei Gao
Electronics 2026, 15(3), 638; https://doi.org/10.3390/electronics15030638 - 2 Feb 2026
Abstract
The joint optimization of computing resources and network routing constitutes a central challenge in Computing First Networks (CFNs). However, existing research has predominantly focused on computation offloading decisions, whereas the cooperative optimization of computing power and network routing remains underexplored. Therefore, this study investigates the joint routing optimization problem within the CFN framework. We first propose a computing resource scheduling architecture for CFN, termed SICRSA, which integrates Software-Defined Networking (SDN) and Information-Centric Networking (ICN). Building upon this architecture, we further introduce an ICN-based hierarchical naming scheme for computing services, design a computing service request packet format that extends the IP header, and detail the corresponding service request identification process and workflow. Furthermore, we propose Computing-Aware Routing via Graph and Long-term Dependency Learning (CRGLD), a Graph Neural Network (GNN)- and Long Short-Term Memory (LSTM)-based routing optimization algorithm, within the SICRSA framework to address the computing-aware routing (CAR) problem. The algorithm incorporates a decision-making framework grounded in spatiotemporal feature learning, thereby enabling the joint and coordinated selection of computing nodes and transmission paths. Simulation experiments conducted on real-world network topologies demonstrate that CRGLD enhances both the quality of service and the intelligence of routing decisions in dynamic network environments. Moreover, CRGLD exhibits strong generalization capability when confronted with unfamiliar topologies and topological changes, effectively mitigating the poor generalization typical of traditional Deep Reinforcement Learning (DRL)-based routing models in dynamic settings.
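A toy illustration of the spatiotemporal pattern the CRGLD abstract describes: one graph message-passing step per timestep, then an LSTM over the resulting sequence. The dimensions, wiring, and final per-node score are illustrative, not the paper's design.

```python
import torch
import torch.nn as nn

class SpatioTemporalScorer(nn.Module):
    """Graph message passing per timestep, LSTM across timesteps."""
    def __init__(self, n_feat: int, hidden: int):
        super().__init__()
        self.msg = nn.Linear(n_feat, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)          # e.g. per-node routing score

    def forward(self, x_seq: torch.Tensor, adj: torch.Tensor):
        # x_seq: (T, N, F) node features over time; adj: (N, N) adjacency
        h = torch.stack([adj @ torch.relu(self.msg(x)) for x in x_seq])
        out, _ = self.lstm(h.transpose(0, 1))      # treat nodes as the batch
        return self.score(out[:, -1])              # score from last timestep
```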
22 pages, 2193 KB  
Article
Deep Reinforcement Learning-Based Experimental Scheduling System for Clay Mineral Extraction
by Bo Zhou, Lei He, Yongqiang Li, Zhandong Lv and Shiping Zhang
Electronics 2026, 15(3), 617; https://doi.org/10.3390/electronics15030617 - 31 Jan 2026
Abstract
Efficient and non-destructive extraction of clay minerals is fundamental for shale oil and gas reservoir evaluation and enrichment mechanism studies. However, traditional manual extraction experiments face bottlenecks such as low efficiency and reliance on operator experience, which limit their scalability and adaptability to intelligent research demands. To address these limitations, this paper proposes an intelligent experimental scheduling system for clay mineral extraction based on deep reinforcement learning. First, the complex experimental process is deconstructed, and its core scheduling stages are abstracted into a Flexible Job Shop Scheduling Problem (FJSP) model with resting time constraints. Then, a scheduling agent based on the Proximal Policy Optimization (PPO) algorithm is developed and integrated with an improved Heterogeneous Graph Neural Network (HGNN) to represent the relationships among operations, machines, and constraints. This enables effective capture of the complex topological structure of the experimental environment and facilitates efficient sequential decision-making. To support future practical deployment, a four-layer system architecture is proposed, comprising the physical equipment layer, execution control layer, scheduling decision layer, and interactive application layer. A digital twin module is designed to bridge the gap between theoretical scheduling and physical execution. This study focuses on validating the core scheduling algorithm through realistic simulations. Simulation results demonstrate that the proposed HGNN-PPO scheduling method significantly outperforms traditional heuristic rules (FIFO, SPT), meta-heuristic algorithms (GA), and simplified reinforcement learning methods (PPO-MLP). Specifically, in large-scale problems, our method reduces the makespan by over 9% compared to the PPO-MLP baseline, and the algorithm runs more than 30 times faster than GA. This highlights its superior performance and scalability. This study provides an effective solution for intelligent scheduling in automated chemical laboratory workflows and holds significant theoretical and practical value for advancing intelligent automation in the experimental sciences, including shale oil and gas research.
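To ground the kind of FJSP baseline the authors benchmark against, here is a minimal FIFO dispatching rule on a toy flexible job shop where every operation may run on any machine; the data layout is an assumption for illustration.

```python
def fifo_makespan(ops, n_machines):
    """Makespan under FIFO dispatch: take operations (job_id, proc_time) in
    arrival order, give each to the earliest-free machine, and respect the
    rule that a job's next operation starts only after its previous one."""
    machine_free = [0.0] * n_machines
    job_free = {}                                  # when each job last finished
    for job, p in ops:
        m = min(range(n_machines), key=machine_free.__getitem__)
        start = max(machine_free[m], job_free.get(job, 0.0))
        machine_free[m] = job_free[job] = start + p
    return max(machine_free)

# Three jobs, two machines; operations listed in precedence order per job
print(fifo_makespan([(0, 3), (1, 2), (0, 2), (2, 4), (1, 1), (2, 5)], 2))
```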
16 pages, 2368 KB  
Article
Full-Depth Inversion of the Sound Speed Profile Using Remote Sensing Parameters via a Physics-Informed Neural Network
by Ke Qu, Zhanglong Li, Zixuan Zhang and Guangming Li
Remote Sens. 2026, 18(3), 438; https://doi.org/10.3390/rs18030438 - 30 Jan 2026
Abstract
Due to the limited number of deep sound speed profile (SSP) samples, existing wide-area SSP inversion methods cannot estimate the full-depth SSP. In this paper, full-depth SSP inversion is achieved by adding physical mechanism constraints to the neural network inversion algorithm. A dimensionality reduction approach for SSP perturbation, based on the hydrodynamic mechanism of seawater, is proposed. Constrained by the characteristics of ocean stratification, a self-organizing map is employed to invert the depth of the sound channel axis and reconstruct the SSP from the sea surface to the sound channel axis. The SSP from the sound channel axis to the seabed is reconstructed by integrating the characteristics of the sound channel axis with the sound speed gradient of the deep-sea isothermal layer. The efficacy of the method was validated with Argo data from the South China Sea. The average root mean square error of the reconstructed full-depth SSP is 2.85 m/s. Additionally, the average error of transmission loss prediction within 50 km is 2.50 dB. The proposed method can furnish effective full-depth SSP information without the need for any in situ measurements, thereby meeting the requirements of certain underwater acoustic applications.
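One recoverable detail is the deep-layer constraint: below the sound channel axis, sound speed rises almost linearly with depth because temperature is nearly constant and pressure dominates. A sketch of that extrapolation step, using the textbook gradient of roughly 0.017 s⁻¹ rather than the paper's fitted constraint:

```python
import numpy as np

def extend_below_axis(z_axis, c_axis, z_bottom, dz=10.0, grad=0.017):
    """Extend an SSP from the sound channel axis (depth z_axis in m, speed
    c_axis in m/s) to the seabed with a constant deep-isothermal gradient."""
    z = np.arange(z_axis, z_bottom + dz, dz)       # depth grid in meters
    return z, c_axis + grad * (z - z_axis)         # near-linear deep profile
```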
40 pages, 581 KB  
Review
A Survey of AI-Enabled Predictive Maintenance for Railway Infrastructure: Models, Data Sources, and Research Challenges
by Francisco Javier Bris-Peñalver, Randy Verdecia-Peña and José I. Alonso
Sensors 2026, 26(3), 906; https://doi.org/10.3390/s26030906 - 30 Jan 2026
Abstract
Rail transport is central to achieving sustainable and energy-efficient mobility, and its digitalization is accelerating the adoption of condition-based maintenance (CBM) strategies. However, existing maintenance practices remain largely reactive or rely on limited rule-based diagnostics, which constrain safety, interoperability, and lifecycle optimization. This survey provides a comprehensive and structured review of Artificial Intelligence techniques applied to the preventive, predictive, and prescriptive maintenance of railway infrastructure. We analyze and compare machine learning and deep learning approaches (including neural networks, support vector machines, random forests, genetic algorithms, and end-to-end deep models) applied to parameters such as track geometry, vibration-based monitoring, and imaging-based inspection. The survey highlights the dominant data sources and feature engineering techniques, evaluates the model performance across subsystems, and identifies research gaps related to data quality, cross-network generalization, model robustness, and integration with real-time asset management platforms. We further discuss emerging research directions, including Digital Twins, edge AI, and Cyber–Physical predictive systems, which position AI as an enabler of autonomous infrastructure management. This survey defines the key challenges and opportunities to guide future research and standardization in intelligent railway maintenance ecosystems.
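As a concrete instance of the model families surveyed, a random forest on track-geometry-style summary features; the feature names and synthetic data here are purely illustrative, not from any dataset in the survey.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Illustrative features: gauge deviation, twist, alignment error, vibration RMS
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 1.0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:400], y[:400])                          # train on the first 400 rows
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```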
12 pages, 874 KB  
Proceeding Paper
Smart Pavement Systems with Embedded Sensors for Traffic and Environmental Monitoring
by Wai Yie Leong
Eng. Proc. 2025, 120(1), 12; https://doi.org/10.3390/engproc2025120012 - 29 Jan 2026
Abstract
The evolution of next-generation urban infrastructure necessitates the deployment of intelligent pavement systems capable of real-time data acquisition, adaptive response, and predictive analytics. This article presents the design, implementation, and performance evaluation of a smart pavement system (SPS) incorporating multimodal embedded sensors for traffic density analysis, structural health monitoring, and environmental surveillance. The SPS integrates piezoelectric transducers, micro-electro-mechanical system accelerometers, inductive loop coils, fiber Bragg grating (FBG) sensors, and capacitive moisture and temperature sensors within the asphalt and sub-base layers, forming a distributed sensor network that interfaces with an edge-AI-enabled data acquisition and control module. Each sensor node performs localized pre-processing using low-power microcontrollers and transmits spatiotemporal data to a centralized IoT gateway over an adaptive mesh topology via long-range wide-area network or 5G Vehicle-to-Everything protocols. Data fusion algorithms employing Kalman filters, sensor drift compensation models, and deep convolutional recurrent neural networks enable accurate classification of vehicular loads and traffic, as well as anomaly detection. Additionally, the system supports real-time air pollutant detection (e.g., NO2, CO, and PM2.5) using embedded electrochemical and optical gas sensors linked to mobile roadside units. Field deployments on a 1.2 km highway testbed demonstrate the system’s capability to achieve 95.7% classification accuracy for vehicle type recognition, ±1.5 mm resolution in rut depth measurement, and ±0.2 °C thermal sensitivity across dynamic weather conditions. Predictive analytics driven by long short-term memory networks yield a 21.4% improvement in maintenance planning accuracy, significantly reducing unplanned downtimes and repair costs. The architecture also supports vehicle-to-infrastructure feedback loops for adaptive traffic signal control and incident response. The proposed SPS architecture demonstrates a scalable and resilient framework for cyber-physical infrastructure, paving the way for smart cities that are responsive, efficient, and sustainable.
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
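The data fusion step mentioned above typically reduces, per sensor channel, to something like the scalar Kalman filter below; the process and measurement noise variances are illustrative, not values from the paper.

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with identity dynamics: smooth a noisy, drifting
    sensor stream. q and r are process/measurement noise variances."""
    x, p, smoothed = x0, p0, []
    for z in measurements:
        p += q                                     # predict: uncertainty grows
        k = p / (p + r)                            # Kalman gain
        x += k * (z - x)                           # correct toward measurement
        p *= 1.0 - k
        smoothed.append(x)
    return smoothed
```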
24 pages, 2292 KB  
Article
Tuning for Precision Forecasting of Green Market Volatility Time Series
by Sonia Benghiat and Salim Lahmiri
Stats 2026, 9(1), 12; https://doi.org/10.3390/stats9010012 - 29 Jan 2026
Abstract
In recent years, the green financial market has exhibited heightened daily volatility, largely due to policy changes and economic shifts. To explore the broader potential of predictive modeling for short-term volatility time series, this study analyzes how fine-tuning hyperparameters in predictive models is essential for improving short-term forecasts of market volatility, particularly within the rapidly evolving domain of green financial markets. While traditional econometric models have long been employed to model market volatility, their application to green markets remains limited, especially when contrasted with the emerging potential of machine-learning and deep-learning approaches for capturing complex dynamics in this context. This study evaluates the performance of several data-driven forecasting models, namely the machine-learning models regression tree (RT) and support vector regression (SVR) and the deep-learning models long short-term memory (LSTM), convolutional neural network (CNN), and gated recurrent unit (GRU), applied to over a decade of daily estimated volatility data from three distinct green markets. Predictive accuracy is compared both with and without hyperparameter optimization. In addition, this study introduces the quantile loss metric, alongside two widely used evaluation metrics, to better capture the skewness and heavy tails inherent in these financial series. This comparative analysis yields significant numerical and graphical insights, enhancing the understanding of short-term volatility predictability in green markets and advancing a relatively underexplored research domain. The study demonstrates that the deep-learning predictors outperform the machine-learning ones and that hyperparameter tuning yields consistent improvements across all deep-learning models and all volatility time series.
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
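The quantile (pinball) loss the study introduces is compact enough to state exactly; this NumPy version follows the standard definition, with q = 0.5 reducing to half the mean absolute error.

```python
import numpy as np

def quantile_loss(y_true, y_pred, q=0.5):
    """Pinball loss: under-prediction is weighted by q, over-prediction by
    1 - q, which suits skewed, heavy-tailed volatility series."""
    e = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))
```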
24 pages, 8146 KB  
Article
A Cattle Behavior Recognition Method Based on Graph Neural Network Compression on the Edge
by Hongbo Liu, Ping Song, Xiaoping Xin, Yuping Rong, Junyao Gao, Zhuoming Wang and Yinglong Zhang
Animals 2026, 16(3), 430; https://doi.org/10.3390/ani16030430 - 29 Jan 2026
Abstract
Cattle behavior is closely related to their health status, and monitoring cattle behavior using intelligent devices can assist herders in achieving precise and scientific livestock management. Current behavior recognition algorithms are typically executed on server platforms, resulting in increased power consumption due to data transmission from edge devices and hindering real-time computation. An edge-based cattle behavior recognition method via Graph Neural Network (GNN) compression is proposed in this paper. Firstly, this paper proposes a wearable device that integrates data acquisition and model inference; the device achieves low-power edge inference through a high-performance embedded microcontroller. Secondly, a sequential residual model tailored for single-frame data, based on Inertial Measurement Unit (IMU) and displacement information, is proposed. The model incrementally extracts deep features through two Residual Blocks (Resblocks), enabling effective cattle behavior classification. Finally, a compression method based on GNNs is introduced to adapt to edge devices’ limited storage and computational resources. The method adopts GNNs as the backbone of the Actor–Critic model to autonomously search for an optimal pruning strategy under Floating-Point Operations (FLOPs) constraints. The experimental results demonstrate the effectiveness of the proposed method in cattle behavior classification. Moreover, enabling real-time inference on edge devices significantly reduces computational latency and power consumption, highlighting the proposed method’s advantages for low-power, long-term operation.
(This article belongs to the Section Cattle)
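The compression step searches for per-layer pruning ratios; once a ratio is chosen, applying it can look like the structured magnitude pruning below. This hand-rolled PyTorch sketch stands in for the GNN-driven Actor–Critic search described in the abstract.

```python
import torch
import torch.nn as nn

def prune_linear(layer: nn.Linear, keep: float) -> nn.Linear:
    """Keep the `keep` fraction of output units with the largest L1 norm."""
    n_keep = max(1, int(keep * layer.out_features))
    scores = layer.weight.abs().sum(dim=1)         # L1 norm per output unit
    idx = scores.topk(n_keep).indices
    pruned = nn.Linear(layer.in_features, n_keep, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[idx])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[idx])
    return pruned
```

In a full network, the next layer's input dimension must be shrunk to match, which is part of what makes the ratio search non-trivial.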
18 pages, 4545 KB  
Article
3D Medical Image Segmentation with 3D Modelling
by Mária Ždímalová, Kristína Boratková, Viliam Sitár, Ľudovít Sebö, Viera Lehotská and Michal Trnka
Bioengineering 2026, 13(2), 160; https://doi.org/10.3390/bioengineering13020160 - 29 Jan 2026
Abstract
Background/Objectives: The segmentation of three-dimensional radiological images constitutes a fundamental task in medical image processing for isolating tumors from complex datasets in computed tomography or magnetic resonance imaging. Precise visualization, volumetry, and treatment monitoring are enabled, which are critical for oncology diagnostics and planning. Volumetric analysis surpasses standard criteria by detecting subtle tumor changes, thereby aiding adaptive therapies. The objective of this study was to develop an enhanced, interactive Graphcut algorithm for 3D DICOM segmentation, specifically designed to improve boundary accuracy and 3D modeling of breast and brain tumors in datasets with heterogeneous tissue intensities. Methods: The standard Graphcut algorithm was augmented with a clustering mechanism (utilizing k = 2–5 clusters) to refine boundary detection in tissues with varying intensities. DICOM datasets were processed into 3D volumes using pixel spacing and slice thickness metadata. User-defined seeds were utilized for tumor and background initialization, constrained by bounding boxes. The method was implemented in Python 3.13 using the PyMaxflow library for graph optimization and pydicom for data transformation. Results: The proposed segmentation method outperformed standard thresholding and region growing techniques, demonstrating reduced noise sensitivity and improved boundary definition. An average Dice Similarity Coefficient (DSC) of 0.92 ± 0.07 was achieved for brain tumors and 0.90 ± 0.05 for breast tumors. These results were found to be comparable to state-of-the-art deep learning benchmarks (typically ranging from 0.84 to 0.95), achieved without the need for extensive pre-training. Boundary edge errors were reduced by a mean of 7.5% through the integration of clustering. Therapeutic changes were quantified accurately (e.g., a reduction from 22,106 mm3 to 14,270 mm3 post-treatment) with an average processing time of 12–15 s per stack. Conclusions: An efficient, precise 3D tumor segmentation tool suitable for diagnostics and planning is presented. This approach is demonstrated to be a robust, data-efficient alternative to deep learning, particularly advantageous in clinical settings where the large annotated datasets required for training neural networks are unavailable.
(This article belongs to the Section Biosignal Processing)
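Since the authors name PyMaxflow, here is a minimal 3D graph cut in that library: unary terms from distance to user-seeded intensities and a constant pairwise smoothness term on the voxel grid. It omits the paper's clustering refinement, so it is a simplified sketch of their pipeline, not a reproduction.

```python
import numpy as np
import maxflow  # PyMaxflow

def graphcut_segment(volume, fg_val, bg_val, smooth=50.0):
    """Segment a 3D volume by min-cut: voxels close in intensity to the
    foreground seed value end up on the sink (True) side of the cut."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(volume.shape)
    g.add_grid_edges(nodes, smooth)                # grid smoothness term
    fg_cost = np.abs(volume - fg_val)              # low where voxel looks like tumor
    bg_cost = np.abs(volume - bg_val)
    g.add_grid_tedges(nodes, fg_cost, bg_cost)
    g.maxflow()
    return g.get_grid_segments(nodes)              # boolean foreground mask
```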
25 pages, 876 KB  
Article
Multi-Scale Digital Twin Framework with Physics-Informed Neural Networks for Real-Time Optimization and Predictive Control of Amine-Based Carbon Capture: Development, Experimental Validation, and Techno-Economic Assessment
by Mansour Almuwallad
Processes 2026, 14(3), 462; https://doi.org/10.3390/pr14030462 - 28 Jan 2026
Abstract
Carbon capture and storage (CCS) is essential for achieving net-zero emissions, yet amine-based capture systems face significant challenges, including high energy penalties (20–30% of power plant output) and operational costs ($50–120/tonne CO2). This study develops and validates a novel multi-scale Digital Twin (DT) framework integrating Physics-Informed Neural Networks (PINNs) to address these challenges through real-time optimization. The framework combines molecular dynamics, process simulation, computational fluid dynamics, and deep learning to enable real-time predictive control. A key innovation is the sequential training algorithm with domain decomposition, specifically designed to handle the nonlinear transport equations governing CO2 absorption with enhanced convergence properties. The algorithm achieves prediction errors below 1% for key process variables (R2 > 0.98) when validated against CFD simulations across 500 test cases. Experimental validation against pilot-scale absorber data (12 m packing, 30 wt% MEA) confirms good agreement with measured profiles, including temperature (RMSE = 1.2 K), CO2 loading (RMSE = 0.015 mol/mol), and capture efficiency (RMSE = 0.6%). The trained surrogate enables computational speedups of up to four orders of magnitude, supporting real-time inference with response times below 100 ms suitable for closed-loop control. Under the conditions studied, the framework demonstrates reboiler duty reductions of 18.5% and operational cost reductions of approximately 31%. Sensitivity analysis identifies liquid-to-gas ratio and MEA concentration as the most influential parameters, with mechanistic explanations linking these to mass transfer enhancement and reaction kinetics. Techno-economic assessment indicates favorable investment metrics, though results depend on site-specific factors. The framework architecture is designed for extensibility to alternative solvent systems, with future work planned for industrial-scale validation and uncertainty quantification through Bayesian approaches.
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
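The "physics-informed" part means adding a PDE residual to the ordinary data loss. A toy PyTorch version for a 1D advection-reaction equation follows; the equation and the parameters u and k are placeholders, not the transport model used in the paper.

```python
import torch

def pde_residual_loss(net, z, t, u=1.0, k=0.5):
    """Mean squared residual of c_t + u * c_z + k * c = 0 for a network
    c = net([z, t]); autograd supplies the partial derivatives."""
    z = z.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    c = net(torch.stack([z, t], dim=-1)).squeeze(-1)
    c_z, c_t = torch.autograd.grad(c.sum(), (z, t), create_graph=True)
    return ((c_t + u * c_z + k * c) ** 2).mean()
```

Training minimizes this residual on collocation points alongside a supervised loss on measured data.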
13 pages, 1455 KB  
Article
Deep Learning-Based All-Sky Cloud Image Recognition
by Ying Jiang, Debin Su, Yanbin Huang, Ning Yang and Jie Ao
Atmosphere 2026, 17(2), 142; https://doi.org/10.3390/atmos17020142 - 28 Jan 2026
Abstract
Accurate cloud identification is crucial for understanding the rapid evolution of weather systems, improving the accuracy of short-term forecasts, and ensuring aviation safety. Compared with traditional cloud image recognition methods, deep learning technology offers advantages such as automatic learning of complex features, high-precision recognition, and strong robustness in changing environments, providing more reliable and detailed cloud information. This study utilized 256 cloud images collected by an all-sky imager from 3 to 30 November 2023 at the Tunchang County Meteorological Bureau in Hainan Province (19°21′ N, 110°06′ E). A Convolutional Neural Network (CNN) model was employed for cloud image recognition. The results show that, for clear skies and stratus clouds, cumulus clouds, and cirrus clouds, the constructed CNN model achieved an accuracy rate, recall rate, and F1 score of 100%, 91%, and 95%, respectively, with an average recognition accuracy of 95%. For cloud cover detection, when comparing the Normalized Red Blue Ratio (NRBR) and K-Means clustering algorithms against the system’s built-in monitoring results, the NRBR method performed best in cloud region segmentation, with cloud cover estimates closer to the actual distribution. In summary, deep learning demonstrates high accuracy and strong robustness in all-sky cloud image recognition.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
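The NRBR index used in the cloud-cover comparison is simple to compute from an RGB all-sky frame; the threshold below is a common illustrative choice, not the study's calibrated value.

```python
import numpy as np

def nrbr_cloud_mask(rgb, thresh=0.05):
    """Normalized Red-Blue Ratio, NRBR = (R - B) / (R + B). Clear sky is
    strongly blue (NRBR well below 0); clouds are near-gray (NRBR near 0)."""
    r = rgb[..., 0].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    nrbr = (r - b) / np.maximum(r + b, 1e-6)       # guard against division by 0
    return nrbr > thresh                           # True where cloudy
```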