Search Results (570)

Search Parameters:
Keywords = auto train

32 pages, 6589 KiB  
Article
Machine Learning (AutoML)-Driven Wheat Yield Prediction for European Varieties: Enhanced Accuracy Using Multispectral UAV Data
by Krstan Kešelj, Zoran Stamenković, Marko Kostić, Vladimir Aćin, Dragana Tekić, Tihomir Novaković, Mladen Ivanišević, Aleksandar Ivezić and Nenad Magazin
Agriculture 2025, 15(14), 1534; https://doi.org/10.3390/agriculture15141534 - 16 Jul 2025
Abstract
Accurate and timely wheat yield prediction is valuable globally for enhancing agricultural planning, optimizing resource use, and supporting trade strategies. This study addresses the need for precision in yield estimation by applying machine-learning (ML) regression models to high-resolution Unmanned Aerial Vehicle (UAV) multispectral (MS) and Red-Green-Blue (RGB) imagery. The research analyzes five European wheat cultivars across 400 experimental plots created by combining 20 nitrogen, phosphorus, and potassium (NPK) fertilizer treatments. Yield variations from 1.41 to 6.42 t/ha strengthen model robustness by providing diverse data. The ML approach is automated using PyCaret, which optimized and evaluated 25 regression models based on 65 vegetation indices and yield data, resulting in 66 feature variables across 400 observations. The dataset, split into training (70%) and testing sets (30%), was used to predict yields at three growth stages: 9 May, 20 May, and 6 June 2022. Key models achieved high accuracy, with the Support Vector Regression (SVR) model reaching R2 = 0.95 on 9 May and R2 = 0.91 on 6 June, and the Multi-Layer Perceptron (MLP) Regressor attaining R2 = 0.94 on 20 May. The findings underscore the effectiveness of precisely measured MS indices and a rigorous experimental approach in achieving high-accuracy yield predictions. This study demonstrates how a precise experimental setup, large-scale field data, and AutoML can harness the potential of UAVs and machine learning to enhance wheat yield predictions. The main limitations of this study lie in its focus on experimental fields under specific conditions; future research could explore adaptability to diverse environments and wheat varieties for broader applicability. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
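As a rough illustration of the AutoML workflow described in the abstract, the sketch below uses PyCaret's regression module to compare candidate models on a 70/30 split. The file name and column names (`yield_t_ha` and the vegetation-index features) are placeholders, not the study's actual variables or pipeline.

```python
# Minimal PyCaret AutoML sketch (assumed column names; not the authors' exact pipeline).
import pandas as pd
from pycaret.regression import setup, compare_models, predict_model

# 400 plot-level observations: vegetation-index features plus yield in t/ha (hypothetical file).
data = pd.read_csv("plot_indices_with_yield.csv")

# 70/30 train/test split handled inside setup(); session_id fixes the random seed.
setup(data=data, target="yield_t_ha", train_size=0.7, session_id=42)

# Rank candidate regressors (SVR, MLP, tree ensembles, ...) by cross-validated R2.
best = compare_models(sort="R2")

# Evaluate the best model on the 30% hold-out set.
holdout_results = predict_model(best)
```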

15 pages, 4530 KiB  
Article
Clinical Validation of a Computed Tomography Image-Based Machine Learning Model for Segmentation and Quantification of Shoulder Muscles
by Hamidreza Rajabzadeh-Oghaz, Josie Elwell, Bradley Schoch, William Aibinder, Bruno Gobbato, Daniel Wessell, Vikas Kumar and Christopher P. Roche
Algorithms 2025, 18(7), 432; https://doi.org/10.3390/a18070432 - 14 Jul 2025
Viewed by 80
Abstract
Introduction: We developed a computed tomography (CT)-based tool designed for automated segmentation of deltoid muscles, enabling quantification of radiomic features and muscle fatty infiltration. Prior to use in a clinical setting, this machine learning (ML)-based segmentation algorithm requires rigorous validation. The aim of this study is to conduct shoulder expert validation of a novel deltoid ML auto-segmentation and quantification tool. Materials and Methods: A SwinUnetR-based ML model trained on labeled CT scans was validated by three expert shoulder surgeons for 32 unique patients. The validation evaluated the quality of the auto-segmented deltoid images. Specifically, each of the three surgeons reviewed the auto-segmented masks relative to the CT images, rated the masks for clinical acceptance, and corrected the ML-generated deltoid mask if it did not completely contain the full deltoid muscle or if it included any tissue other than the deltoid. Non-inferiority of the ML model was assessed by comparing the differences between ML-generated and surgeon-corrected deltoid masks with the inter-surgeon variation in metrics such as volume and fatty infiltration. Results: The results of our expert shoulder surgeon validation demonstrate that 97% of ML-generated deltoid masks were clinically acceptable. Only two of the ML-generated deltoid masks required major corrections and only one was deemed clinically unacceptable. These corrections had little impact on the deltoid measurements, as the median error in the volume and fatty infiltration measurements was <1% between the ML-generated deltoid masks and the surgeon-corrected deltoid masks. The non-inferiority analysis demonstrates no significant difference between ML-generated and surgeon-corrected masks relative to inter-surgeon variations. Conclusions: Shoulder expert validation of this CT image analysis tool demonstrates clinically acceptable performance for deltoid auto-segmentation, with no significant differences observed between deltoid image-based measurements derived from the ML-generated masks and those corrected by surgeons. These findings suggest that this CT image analysis tool has the potential to reliably quantify deltoid muscle size, shape, and quality. Incorporating these CT image-based measurements into the pre-operative planning process may facilitate more personalized treatment decision making and help orthopedic surgeons make more evidence-based clinical decisions. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
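To make the mask-derived measurements concrete, the sketch below computes deltoid volume and a simple fatty-infiltration proxy (fraction of intramuscular voxels in a fat-like HU window) from a binary mask and a CT volume. The HU thresholds and voxel spacing are illustrative assumptions, not the definitions used by the validated tool.

```python
# Hedged sketch of mask-derived deltoid measurements (assumed HU window, not the tool's definition).
import numpy as np

def deltoid_metrics(ct_hu: np.ndarray, mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)):
    """ct_hu: CT volume in Hounsfield units; mask: boolean deltoid segmentation."""
    voxel_ml = np.prod(spacing_mm) / 1000.0          # mm^3 -> mL
    volume_ml = mask.sum() * voxel_ml
    muscle_hu = ct_hu[mask.astype(bool)]
    fatty_fraction = np.mean((muscle_hu > -190) & (muscle_hu < -30))  # fat-like HU window
    return volume_ml, fatty_fraction

# Toy example with random HU values inside a box-shaped "deltoid".
ct = np.random.randint(-200, 200, size=(64, 128, 128)).astype(np.int16)
mask = np.zeros_like(ct, dtype=bool)
mask[20:50, 40:90, 40:70] = True
print(deltoid_metrics(ct, mask, spacing_mm=(3.0, 0.7, 0.7)))
```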

19 pages, 2299 KiB  
Article
A Supervised Machine Learning-Based Approach for Task Workload Prediction in Manufacturing: A Case Study Application
by Valentina De Simone, Valentina Di Pasquale, Joanna Calabrese, Salvatore Miranda and Raffaele Iannone
Machines 2025, 13(7), 602; https://doi.org/10.3390/machines13070602 - 12 Jul 2025
Viewed by 194
Abstract
Predicting workload for tasks in manufacturing is a complex challenge due to the numerous variables involved. In small- and medium-sized enterprises (SMEs), this process is often experience-based, leading to inaccurate predictions that significantly impact production planning, order management, and consequently the ability to meet customer deadlines. This paper presents an approach that leverages machine learning to enhance workload prediction with minimal data collection, making it particularly suitable for SMEs. A case study application using supervised machine learning models for regression, trained in an open-source data analytics, reporting, and integration platform (KNIME Analytics Platform), has been carried out. An Automated Machine Learning (AutoML) regression approach was employed to identify the most suitable model for task workload prediction based on minimising the Mean Absolute Error (MAE) scores. Specifically, the Regression Tree (RT) model demonstrated superior accuracy compared to more traditional simple averaging and manual predictions when modelling data for a single product type. When incorporating all available product data, despite a slight performance decrease, the XGBoost Tree Ensemble still outperformed the traditional approaches. These findings highlight the potential of machine learning to improve workload forecasting in manufacturing, offering a practical and easily implementable solution for SMEs. Full article
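The workflow above was built in KNIME's visual environment rather than in code; as a loose stand-in for the model comparison it describes, the Python sketch below ranks a regression tree and a gradient-boosted ensemble against a simple-average baseline by MAE. The file name, column names, and the choice of gradient boosting in place of the XGBoost Tree Ensemble are assumptions for illustration only.

```python
# Hedged stand-in for the KNIME AutoML comparison described above (not the authors' workflow).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical task-level data: order/product attributes plus the measured workload in hours.
df = pd.read_csv("task_features.csv")
X, y = df.drop(columns=["workload_hours"]), df["workload_hours"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "regression_tree": DecisionTreeRegressor(random_state=0),
    "boosted_ensemble": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f} h")

# Simple-average baseline, analogous to the traditional prediction it is compared against.
baseline_mae = mean_absolute_error(y_test, [y_train.mean()] * len(y_test))
print(f"historical average: MAE = {baseline_mae:.2f} h")
```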

19 pages, 684 KiB  
Article
A Wi-Fi Fingerprinting Indoor Localization Framework Using Feature-Level Augmentation via Variational Graph Auto-Encoder
by Dongdeok Kim, Jae-Hyeon Park and Young-Joo Suh
Electronics 2025, 14(14), 2807; https://doi.org/10.3390/electronics14142807 - 12 Jul 2025
Viewed by 191
Abstract
Wi-Fi fingerprinting is a widely adopted technique for indoor localization in location-based services (LBS) due to its cost-effectiveness and ease of deployment using existing infrastructure. However, the performance of these systems often suffers due to missing received signal strength indicator (RSSI) measurements, which can arise from complex indoor structures, device limitations, or user mobility, leading to incomplete and unreliable fingerprint data. To address this critical issue, we propose Feature-level Augmentation for Localization (FALoc), a novel framework that enhances Wi-Fi fingerprinting-based localization through targeted feature-level data augmentation. FALoc uniquely models the observation probabilities of RSSI signals by constructing a bipartite graph between reference points and access points, which is then processed by a variational graph auto-encoder (VGAE). Based on these learned probabilities, FALoc intelligently imputes likely missing RSSI values or removes unreliable ones, effectively enriching the training data. We evaluated FALoc using an MLP (Multi-Layer Perceptron)-based localization model on the UJIIndoorLoc and UTSIndoorLoc datasets. The experimental results demonstrate that FALoc significantly improves localization accuracy, achieving mean localization errors of 7.137 m on UJIIndoorLoc and 7.138 m on UTSIndoorLoc, which represent improvements of approximately 12.9% and 8.6% over the respective MLP baselines (8.191 m and 7.808 m), highlighting the efficacy of our approach in handling missing data. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)
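FALoc's observation-probability modelling is described only at a high level above; the sketch below shows the generic variational graph auto-encoder pattern in PyTorch Geometric that such a design could build on. The graph construction (reference points and access points joined by observed-RSSI edges), tensor shapes, and training loop are illustrative assumptions, not the paper's implementation.

```python
# Generic VGAE pattern (PyTorch Geometric), used here only to illustrate learning
# edge (observation) probabilities on a reference-point / access-point graph.
import torch
from torch_geometric.nn import GCNConv, VGAE

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv_mu = GCNConv(hidden_dim, latent_dim)
        self.conv_logstd = GCNConv(hidden_dim, latent_dim)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

# Toy graph: node features for reference points + access points, edges = observed RSSI links.
x = torch.randn(200, 16)                      # 200 nodes, 16-dim features (illustrative)
edge_index = torch.randint(0, 200, (2, 800))  # random edges standing in for observations

model = VGAE(Encoder(16, 32, 8))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    z = model.encode(x, edge_index)
    # Reconstruction + KL terms; the decoded edge probabilities play the role of
    # "observation probabilities" used to impute or drop RSSI entries.
    loss = model.recon_loss(z, edge_index) + model.kl_loss() / x.size(0)
    loss.backward()
    optimizer.step()

edge_prob = model.decode(z, edge_index)  # probability of each (RP, AP) observation
```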

24 pages, 1307 KiB  
Article
A Self-Supervised Specific Emitter Identification Method Based on Contrastive Asymmetric Masked Learning
by Dong Wang, Yonghui Huang, Tianshu Cui and Yan Zhu
Sensors 2025, 25(13), 4023; https://doi.org/10.3390/s25134023 - 27 Jun 2025
Viewed by 227
Abstract
Specific emitter identification (SEI) is a core technology for wireless device security that plays a crucial role in protecting wireless communication systems from various security threats. However, current deep learning-based SEI methods heavily rely on large amounts of labeled data for supervised training, facing challenges in non-cooperative communication scenarios. To address these issues, this paper proposes a novel contrastive asymmetric masked learning-based SEI (CAML-SEI) method, effectively solving the problem of SEI under scarce labeled samples. The proposed method constructs an asymmetric auto-encoder architecture, comprising an encoder network based on channel squeeze-and-excitation residual blocks to capture radio frequency fingerprint (RFF) features embedded in signals, while employing a lightweight single-layer convolutional decoder for masked signal reconstruction. This design promotes the learning of fine-grained local feature representations. To further enhance feature discriminability, a learnable non-linear mapping is introduced to compress high-dimensional encoded features into a compact low-dimensional space, accompanied by a contrastive loss function that simultaneously achieves feature aggregation of positive samples and feature separation of negative samples. Finally, the network is jointly optimized by combining signal reconstruction and feature contrast tasks. Experiments conducted on real-world ADS-B and Wi-Fi datasets demonstrate that the proposed method effectively learns generalized RFF features, and the results show superior performance compared with other SEI methods. Full article
(This article belongs to the Section Communications)

22 pages, 3852 KiB  
Article
Early Detection of the Marathon Wall to Improve Pacing Strategies in Recreational Marathoners
by Mohamad-Medhi El Dandachi, Veronique Billat, Florent Palacin and Vincent Vigneron
AI 2025, 6(6), 130; https://doi.org/10.3390/ai6060130 - 19 Jun 2025
Viewed by 511
Abstract
The optimal individual marathon pacing that spares the runner from hitting the “wall” after about 2 h of running remains unclear. In the current study we examined to what extent a deep neural network can help identify the individual optimal pacing by training a Variational Auto Encoder (VAE) on a small dataset of nine runners. This dataset was constructed from an original one containing the values of multiple physiological variables for 10 different runners during a marathon. We plotted the Lyapunov exponent over time for these variables for each runner, showing that the marathon wall can be anticipated. The pacing strategy that this innovative technique sheds light on is to predict and delay the moment when the runner empties his reserves and ‘hits the wall’, while considering the individual physical capabilities of each athlete. Our data suggest that the growing number of marathon runners using a cardio-GPS could benefit from AI that learns how to self-pace the marathon race, optimizing performance and avoiding hitting the wall. Full article

18 pages, 4601 KiB  
Article
An Intrusion Detection Method Based on Symmetric Federated Deep Learning in Complex Networks
by Lei Wang, Xuanrui Ren and Chunyi Wu
Symmetry 2025, 17(6), 952; https://doi.org/10.3390/sym17060952 - 15 Jun 2025
Viewed by 362
Abstract
The rapid development of current 5G/6G networks has placed tremendous pressure on traditional security detection when dealing with large-scale network attacks, resulting in high time complexity and low efficiency of attack identification. Building on deep networks and their symmetry principle, this paper proposes a complex network intrusion detection and recognition method based on symmetric federated optimization, named IDS, which aims to reduce time complexity and improve the accuracy and efficiency of attack identification. By using symmetric UNet-based deep feature learning to reconstruct the data and construct the input matrix, we optimize the federated deep learning algorithm with a symmetric auto-encoder to make it more suitable for a complex network environment. The experimental results demonstrate that the technology based on the symmetric network proposed in this paper possesses significant advantages in terms of intrusion detection accuracy and effectiveness; it can effectively identify network intrusions and improve the accuracy of current complex network intrusion detection. The proposed symmetric intrusion detection method not only overcomes the bottleneck of traditional detection methods and improves the training efficiency of the model, but also provides a new idea and solution for network security research. Full article

15 pages, 2843 KiB  
Article
Improving the Precision of Deep-Learning-Based Head and Neck Target Auto-Segmentation by Leveraging Radiology Reports Using a Large Language Model
by Libing Zhu, Jean-Claude M. Rwigema, Xue Feng, Bilaal Ansari, Jingwei Duan, Yi Rong and Quan Chen
Cancers 2025, 17(12), 1935; https://doi.org/10.3390/cancers17121935 - 10 Jun 2025
Viewed by 478
Abstract
Background/Objectives: The accurate delineation of primary tumors (GTVp) and metastatic lymph nodes (GTVn) in head and neck (HN) cancers is essential for effective radiation treatment planning, yet remains a challenging and laborious task. This study aims to develop a deep-learning-based auto-segmentation (DLAS) model trained on external datasets with false-positive elimination using clinical diagnosis reports. Methods: The DLAS model was trained on a multi-institutional public dataset with 882 cases. Forty-four institutional cases were randomly selected as the external testing dataset. DLAS-generated GTVp and GTVn were validated against clinical diagnosis reports to identify false-positive and false-negative segmentation errors using two large language models: ChatGPT-4 and Llama-3. False-positive ruling out was conducted by matching the centroids of AI-generated contours with the slice locations or anatomical regions described in the reports. Performance was evaluated using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), and tumor detection precision. Results: ChatGPT-4 outperformed Llama-3 in accurately extracting tumor locations from the diagnostic reports. False-positive contours were identified in 15 out of 44 cases. The mean DSC of the DLAS contours for GTVp and GTVn increased from 0.68 to 0.75 and from 0.69 to 0.75, respectively, after the ruling-out process. Notably, the average HD95 value for GTVn decreased from 18.81 mm to 5.2 mm. Post ruling out, the model achieved 100% precision for GTVp and GTVn when compared with the results of physician-determined contours. Conclusions: The false-positive ruling-out approach based on diagnostic reports effectively enhances the precision of DLAS in the HN region. The model accurately identifies the tumor location and detects all false-negative errors. Full article

27 pages, 19294 KiB  
Article
Classifying X-Ray Tube Malfunctions: AI-Powered CT Predictive Maintenance System
by Ladislav Pomšár, Maryna Tsvietaieva, Maros Krupáš and Iveta Zolotová
Appl. Sci. 2025, 15(12), 6547; https://doi.org/10.3390/app15126547 - 10 Jun 2025
Viewed by 496
Abstract
Computed tomography scans are among the most used medical imaging modalities. With increased popularity and usage, the need for maintenance also increases. In this work, the problem is tackled using machine learning methods to create a predictive maintenance system for the classification of faulty X-ray tubes. Data for 137 different CT machines were collected, of which 128 were deemed to fulfil the quality criteria of the study. Of these, 66 had their X-ray tubes subsequently replaced. Afterwards, auto-regressive model coefficients and wavelet coefficients, standard features in this area, were extracted. For classification, a set of classical machine learning approaches was used alongside two neural network architectures: a 1D VGG-style CNN and an LSTM RNN. In total, seven different machine learning models were investigated. The best-performing model proved to be an LSTM trained on trimmed and normalised input data, with an accuracy of 87% and a recall of 100% for the faulty class. The developed model has the potential to maximise the uptime of CT machines and help mitigate the adverse effects of machine breakdowns. Full article
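As a hedged sketch of the best-performing configuration described above (an LSTM on trimmed, normalised sequences), the snippet below builds a small binary LSTM classifier in Keras. The sequence length, feature count, layer sizes, and class-weighting choice are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal LSTM fault classifier sketch (Keras); shapes and hyperparameters are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_FEATURES = 256, 4   # trimmed window length and per-step features (assumed)

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # 1 = X-ray tube subsequently replaced
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.Recall(name="recall")])

# X: normalised sensor-log sequences per machine, y: faulty/healthy labels (placeholder data).
X = np.random.rand(128, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(128,))

# Class weighting helps push recall on the faulty class, which the study prioritises.
model.fit(X, y, epochs=20, batch_size=16, validation_split=0.2,
          class_weight={0: 1.0, 1: 2.0})
```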

19 pages, 1563 KiB  
Article
Small Object Tracking in LiDAR Point Clouds: Learning the Target-Awareness Prototype and Fine-Grained Search Region
by Shengjing Tian, Yinan Han, Xiantong Zhao and Xiuping Liu
Sensors 2025, 25(12), 3633; https://doi.org/10.3390/s25123633 - 10 Jun 2025
Viewed by 583
Abstract
Light Detection and Ranging (LiDAR) point clouds are an essential perception modality for artificial intelligence systems like autonomous driving and robotics, where the ubiquity of small objects in real-world scenarios substantially challenges the visual tracking of small targets amidst the vastness of point cloud data. Current methods predominantly focus on developing universal frameworks for general object categories, often sidelining the persistent difficulties associated with small objects. These challenges stem from a scarcity of foreground points and a low tolerance for disturbances. To this end, we propose a deep neural network framework that trains a Siamese network for feature extraction and innovatively incorporates two pivotal modules: the target-awareness prototype mining (TAPM) module and the regional grid subdivision (RGS) module. The TAPM module utilizes the reconstruction mechanism of the masked auto-encoder to distill prototypes within the feature space, thereby enhancing the salience of foreground points and aiding in the precise localization of small objects. To heighten the tolerance of disturbances in feature maps, the RGS module is devised to retrieve detailed features of the search area, capitalizing on Vision Transformer and pixel shuffle technologies. Furthermore, beyond standard experimental configurations, we have meticulously crafted scaling experiments to assess the robustness of various trackers when dealing with small objects. Comprehensive evaluations show our method achieves a mean Success of 64.9% and 60.4% under original and scaled settings, outperforming benchmarks by +3.6% and +5.4%, respectively. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)

13 pages, 1739 KiB  
Article
CYTO-SV-ML: A Machine Learning Tool for Cytogenetic Structural Variant Analysis in Somatic Cell Type Using Genome Sequences
by Tao Zhang, Paul Auer, Stephen R. Spellman, Jing Dong, Wael Saber and Yung-Tsi Bolon
Life 2025, 15(6), 929; https://doi.org/10.3390/life15060929 - 9 Jun 2025
Viewed by 438
Abstract
(1) Background: Although whole genome sequencing (WGS) has enabled the comprehensive analyses of structural variants (SVs), more accurate and efficient methods are needed to distinguish large somatic SVs (SV size ≥ 1 Mb) traditionally detected through cytogenetic testing from germline SVs. (2) Methods: A customized machine learning pipeline (CYTO-SV-ML) under a Snakemake automation workflow was developed with a user interface to identify somatic cytogenetic SVs in WGS data. This tool was then applied to characterize structural variation profiles in the whole blood of patients with myelodysplastic syndromes (MDSs). Known SVs mapped from well-established open databases were split into training and validation subsets for an AUTO-ML machine learning model in the CYTO-SV-ML pipeline. (3) Results: The benchmarking performance of the CYTO-SV-ML pipeline on somatic cytogenetic SV classification displayed an area under the receiver operating characteristic curve (AUC-ROC) of 0.94 for translocations and 0.92 for non-translocations, a sensitivity of 0.83 for translocations and 0.85 for non-translocations, and a specificity of 0.96 for translocations and 0.82 for non-translocations. Our method (207 somatic cytogenetic SVs) outperformed a conventional SV calling pipeline (143 somatic cytogenetic SVs) in an independent validation of clinical cytogenetic records. In addition, the CYTO-SV-ML pipeline uncovered novel somatic cytogenetic SVs in 49 (89%) of 55 patients without successful clinical cytogenetic results. (4) Conclusions: Our study demonstrates the high-performance machine learning approach of CYTO-SV-ML for benchmarking SV classification from genomic sequencing data; further validation of novel anomalies by orthogonal methods will be essential to unlock its full clinical potential for cytogenetic diagnostics. Full article
(This article belongs to the Special Issue Molecular and Cellular Biology of Transplantation)

20 pages, 13445 KiB  
Article
Improving Tropical Forest Canopy Height Mapping by Fusion of Sentinel-1/2 and Bias-Corrected ICESat-2–GEDI Data
by Aobo Liu, Yating Chen and Xiao Cheng
Remote Sens. 2025, 17(12), 1968; https://doi.org/10.3390/rs17121968 - 6 Jun 2025
Viewed by 677
Abstract
Accurately estimating the forest canopy height is essential for quantifying forest biomass and carbon storage. Recently, the ICESat-2 and GEDI spaceborne LiDAR missions have significantly advanced global canopy height mapping. However, due to inherent sensor limitations, their footprint-level estimates often show systematic bias. Tall forests tend to be underestimated, while short forests are often overestimated. To address this issue, we used coincident G-LiHT airborne LiDAR measurements to correct footprint-level canopy heights from both ICESat-2 and GEDI, aiming to improve the canopy height retrieval accuracy across Puerto Rico’s tropical forests. The bias-corrected LiDAR dataset was then combined with multi-source predictors derived from Sentinel-1/2 and the 3DEP DEM. Using these inputs, we trained a canopy height inversion model based on the AutoGluon stacking ensemble method. Accuracy assessments show that, compared to models trained on uncorrected single-source LiDAR data, the new model built on the bias-corrected ICESat-2–GEDI fusion performed better in both overall accuracy and consistency across canopy height gradients. The final model achieved a correlation coefficient (R) of 0.80, with a root mean square error (RMSE) of 3.72 m and a relative RMSE of 0.22. The proposed method offers a robust and transferable approach for high-resolution canopy structure mapping and provides valuable support for carbon accounting and tropical forest management. Full article
(This article belongs to the Special Issue Machine Learning in Global Change Ecology: Methods and Applications)
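The AutoGluon stacking-ensemble step described above maps onto a small amount of code; the sketch below shows the generic TabularPredictor pattern. The file paths and the `canopy_height_m` column name are placeholders rather than the study's actual feature set.

```python
# AutoGluon stacking-ensemble regression sketch (placeholder files and column names).
from autogluon.tabular import TabularDataset, TabularPredictor

# Each row: one footprint with bias-corrected LiDAR canopy height plus
# Sentinel-1/2 and DEM-derived predictors (hypothetical CSV layout).
train = TabularDataset("footprints_train.csv")
test = TabularDataset("footprints_test.csv")

predictor = TabularPredictor(label="canopy_height_m",
                             eval_metric="root_mean_squared_error")
# presets="best_quality" enables multi-layer stacking and bagging of base learners.
predictor.fit(train, presets="best_quality", time_limit=3600)

print(predictor.leaderboard(test))
pred = predictor.predict(test.drop(columns=["canopy_height_m"]))
```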

16 pages, 2032 KiB  
Article
Auto-Segmentation and Auto-Planning in Automated Radiotherapy for Prostate Cancer
by Sijuan Huang, Jingheng Wu, Xi Lin, Guangyu Wang, Ting Song, Li Chen, Lecheng Jia, Qian Cao, Ruiqi Liu, Yang Liu, Xin Yang, Xiaoyan Huang and Liru He
Bioengineering 2025, 12(6), 620; https://doi.org/10.3390/bioengineering12060620 - 6 Jun 2025
Viewed by 540
Abstract
Objective: The objective of this study was to develop and assess the clinical feasibility of auto-segmentation and auto-planning methodologies for automated radiotherapy in prostate cancer. Methods: A total of 166 patients were used to train a 3D Unet model for segmentation of the gross tumor volume (GTV), clinical tumor volume (CTV), nodal CTV (CTVnd), and organs at risk (OARs). Performance was assessed by the Dice similarity coefficient (DSC), Recall, Precision, Volume Ratio (VR), the 95% Hausdorff distance (HD95%), and the volumetric revision degree (VRD). An auto-planning network based on a 3D Unet was trained on 77 treatment plans derived from the 166 patients. Dosimetric differences and clinical acceptability of the auto-plans were studied. The effect of OAR editing on dosimetry was also evaluated. Results: On an independent set of 50 cases, the auto-segmentation process took 1 min 20 s per case. The DSCs for GTV, CTV, and CTVnd were 0.87, 0.88, and 0.82, respectively, with VRDs ranging from 0.09 to 0.14. The segmentation of OARs demonstrated high accuracy (DSC ≥ 0.83, Recall/Precision ≈ 1.0). The auto-planning process required 1–3 optimization iterations for 50%, 40%, and 10% of cases, respectively, and exhibited significantly better conformity (p ≤ 0.01) and OAR sparing (p ≤ 0.03) while maintaining comparable target coverage. Only 6.7% of auto-plans were deemed unacceptable compared to 20% of manual plans, with 75% of auto-plans considered superior. Notably, the editing of OARs had no significant impact on doses. Conclusions: The accuracy of auto-segmentation is comparable to that of manual segmentation, and the auto-planning offers equivalent or better OAR protection, meeting the requirements of online automated radiotherapy and facilitating its clinical application. Full article
(This article belongs to the Special Issue Novel Imaging Techniques in Radiotherapy)
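For readers unfamiliar with the overlap metrics quoted above, the short sketch below computes DSC, Recall, Precision, and Volume Ratio for a pair of binary masks with NumPy (HD95 is omitted because it needs surface-distance machinery). It is a generic illustration, not the evaluation code used in the study.

```python
# Generic overlap metrics for binary segmentation masks (illustrative, not the study's code).
import numpy as np

def overlap_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    """pred, ref: boolean arrays of identical shape (auto vs. manual contour)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    dsc = 2 * tp / (pred.sum() + ref.sum())          # Dice similarity coefficient
    recall = tp / ref.sum()                          # fraction of reference captured
    precision = tp / pred.sum()                      # fraction of prediction that is correct
    volume_ratio = pred.sum() / ref.sum()            # VR near 1.0 means matched volumes
    return {"DSC": dsc, "Recall": recall, "Precision": precision, "VR": volume_ratio}

# Toy example: two overlapping boxes in a small volume.
ref = np.zeros((32, 64, 64), dtype=bool)
ref[8:24, 16:48, 16:48] = True
pred = np.zeros_like(ref)
pred[9:25, 18:50, 16:48] = True
print(overlap_metrics(pred, ref))
```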

28 pages, 11557 KiB  
Review
Physics-Informed Neural Networks for Higher-Order Nonlinear Schrödinger Equations: Soliton Dynamics in External Potentials
by Leonid Serkin and Tatyana L. Belyaeva
Mathematics 2025, 13(11), 1882; https://doi.org/10.3390/math13111882 - 4 Jun 2025
Viewed by 893
Abstract
This review summarizes the application of physics-informed neural networks (PINNs) for solving higher-order nonlinear partial differential equations belonging to the nonlinear Schrödinger equation (NLSE) hierarchy, including models with external potentials. We analyze recent studies in which PINNs have been employed to solve NLSE-type evolution equations up to the fifth order, demonstrating their ability to obtain one- and two-soliton solutions, as well as other solitary waves with high accuracy. To provide benchmark solutions for training PINNs, we employ analytical methods such as the nonisospectral generalization of the AKNS scheme of the inverse scattering transform and the auto-Bäcklund transformation. Finally, we discuss recent advancements in PINN methodology, including improvements in network architecture and optimization techniques. Full article
(This article belongs to the Special Issue New Trends in Nonlinear Dynamics and Nonautonomous Solitons)
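To make the PINN idea concrete for the lowest member of the hierarchy discussed in the review, the sketch below assembles the physics residual of the basic cubic NLSE, i u_t + 0.5 u_xx + |u|^2 u = 0, in PyTorch; higher-order terms and external potentials would be added to the residual analogously. The network size, collocation domain, and splitting into real/imaginary parts are illustrative choices, not a reproduction of any specific study reviewed.

```python
# PINN residual sketch for the basic cubic NLSE (illustrative; higher-order terms omitted).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),          # outputs (Re u, Im u)
)

def pde_residual(x, t):
    x.requires_grad_(True)
    t.requires_grad_(True)
    out = net(torch.cat([x, t], dim=1))
    u, v = out[:, :1], out[:, 1:]    # real and imaginary parts of u(x, t)
    grad = lambda f, w: torch.autograd.grad(f, w, torch.ones_like(f), create_graph=True)[0]
    u_t, v_t = grad(u, t), grad(v, t)
    u_x, v_x = grad(u, x), grad(v, x)
    u_xx, v_xx = grad(u_x, x), grad(v_x, x)
    mod2 = u**2 + v**2
    # i*u_t + 0.5*u_xx + |u|^2 u = 0, split into real and imaginary components
    f_re = -v_t + 0.5 * u_xx + mod2 * u
    f_im =  u_t + 0.5 * v_xx + mod2 * v
    return f_re, f_im

x = torch.rand(256, 1) * 10 - 5      # collocation points in x within [-5, 5]
t = torch.rand(256, 1)               # and t within [0, 1]
f_re, f_im = pde_residual(x, t)
physics_loss = (f_re**2 + f_im**2).mean()   # added to the data/boundary loss during training
```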

18 pages, 2167 KiB  
Article
High-Cycle Fatigue Life Prediction of Additive Manufacturing Inconel 718 Alloy via Machine Learning
by Zongxian Song, Jinling Peng, Lina Zhu, Caiyan Deng, Yangyang Zhao, Qingya Guo and Angran Zhu
Materials 2025, 18(11), 2604; https://doi.org/10.3390/ma18112604 - 3 Jun 2025
Viewed by 556
Abstract
This study established a machine learning framework to enhance the accuracy of very-high-cycle fatigue (VHCF) life prediction in selective laser melted Inconel 718 alloy by systematically comparing the use of generative adversarial networks (GANs) and variational auto-encoders (VAEs) for data augmentation. We quantified the influence of critical defect parameters (dimensions and stress amplitudes) extracted from fracture analyses on fatigue life and compared the performance of GANs versus VAEs in generating synthetic training data for three regression models (ANN, Random Forest, and SVR). The experimental fatigue data were augmented using both generative models, followed by hyperparameter optimization and rigorous validation against independent test sets. The results demonstrated that the GAN-generated data significantly improved the prediction metrics, with GAN-enhanced models achieving superior R2 scores (0.91–0.97 vs. 0.86–0.87) and lower MAEs (1.13–1.62% vs. 2.00–2.64%) compared to the VAE-based approaches. This work not only establishes GANs as a breakthrough tool for AM fatigue prediction but also provides a transferable methodology for data-driven modeling of defect-dominated failure mechanisms in advanced materials. Full article
(This article belongs to the Special Issue High Temperature-Resistant Ceramics and Composites)
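As a hedged illustration of the GAN-based augmentation compared above, the sketch below trains a minimal tabular GAN on standardised fatigue records and draws synthetic samples to append to the regressors' training set. The feature list, layer sizes, and training schedule are assumptions for illustration, not the study's model.

```python
# Minimal tabular GAN sketch for augmenting fatigue records (illustrative architecture only).
import torch
import torch.nn as nn

N_FEATURES, LATENT_DIM = 3, 16   # e.g. [defect_size, stress_amplitude, log_fatigue_life] (assumed)

G = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, N_FEATURES))
D = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(120, N_FEATURES)   # placeholder for standardised experimental records

for step in range(2000):
    # Discriminator update: real records vs. detached generator samples.
    z = torch.randn(real.size(0), LATENT_DIM)
    fake = G(z).detach()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator.
    z = torch.randn(real.size(0), LATENT_DIM)
    g_loss = bce(D(G(z)), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Synthetic samples to append to the training set of the downstream regressors (ANN/RF/SVR).
synthetic = G(torch.randn(500, LATENT_DIM)).detach()
```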