Search Results (453)

Search Parameters:
Keywords = while approaching another train

22 pages, 5390 KB  
Article
Joint Optimization of Time Slot and Power Allocation in Underwater Acoustic Communication Networks
by Xuan Geng and Yongkang Hu
Sensors 2026, 26(7), 2188; https://doi.org/10.3390/s26072188 - 1 Apr 2026
Viewed by 304
Abstract
This paper proposes a joint optimization algorithm based on reinforcement learning to address the time-slot and power allocation problem in underwater acoustic communication networks (UACNs). Taking the total capacity of successful transmissions as the optimization objective, two sub-objectives are formulated, corresponding to time-slot scheduling and power allocation. The time-slot scheduling sub-objective is addressed by constructing a Markov Decision Process (MDP) model solved with Deep Q-Network (DQN) learning; in this model, the agent learns a time-slot allocation policy that increases the number of successfully transmitted links while reducing collisions. For the power allocation sub-objective, another MDP model is developed and solved by the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm, in which each underwater transmission node acts as an independent agent. The MADDPG approach improves channel capacity under energy limitations, maximizing the total capacity of successfully transmitted links. In terms of model execution, DQN uses centralized training and centralized time-slot allocation, while MADDPG uses centralized training with distributed execution, each node selecting its own transmission power. Simulation results show that the proposed joint optimization algorithm outperforms TDMA, Slotted ALOHA, and other algorithms in both the number of successfully transmitted links and channel capacity.
(This article belongs to the Special Issue Sensor Networks and Communication with AI)
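The slot-scheduling agent's goal, increasing collision-free links, can be made concrete with a toy reward function. This is a hypothetical sketch for illustration only, not the paper's reward (which also weighs channel capacity): a slot carrying exactly one transmission succeeds, and any slot shared by two or more nodes collides.

```python
from collections import Counter

def slot_reward(assignments):
    """Count collision-free transmissions given each node's chosen slot.

    A slot carrying exactly one transmission succeeds; two or more
    transmissions in the same slot collide and all of them fail.
    (Hypothetical reward; the paper's objective also weighs capacity.)
    """
    slot_counts = Counter(assignments)
    return sum(1 for slot in assignments if slot_counts[slot] == 1)

# Four nodes, three slots: nodes 0 and 1 both pick slot 2 and collide.
print(slot_reward([2, 2, 0, 1]))  # → 2
```

A DQN agent trained against such a reward learns to spread nodes across slots rather than letting them contend.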

16 pages, 2916 KB  
Article
Deep Learning-Based Relay Selection in a Decode-and-Forward Cooperative System with Energy Harvesting and Signal Space Diversity
by Ahmed Oun, Divyessh Maheshwari and Ahmed Ammar
Electronics 2026, 15(7), 1363; https://doi.org/10.3390/electronics15071363 - 25 Mar 2026
Viewed by 338
Abstract
Deep learning techniques have been widely applied in wireless communication systems to enhance resilience and reduce computational complexity. This paper investigates both traditional and deep learning-based approaches for real-time relay selection in a cooperative communication system with multiple energy-harvesting relays and signal space diversity. The relays use decode-and-forward (DF), with selection based on successful decoding of the source signal, sufficient energy availability, and the best channel to the destination. System performance is evaluated in terms of outage probability. Monte Carlo simulations are used to determine the exact outage probability of the system and to generate datasets for training machine learning models. The traditional models implemented are Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Support Vector Machines (SVMs); the deep learning-based method is a deep neural network (DNN). Two datasets, one with six features and another with nine, were used for training and testing; the 6-feature datasets are less random and complex than the 9-feature ones. Among the traditional models, KNN achieves the highest accuracy and is therefore used as the benchmark against DNN performance. For the 9-feature datasets, both KNN and DNN struggle to approximate the exact outage probability, suggesting that these datasets are too complex and noisy for effective modeling. On the 6-feature datasets, however, KNN achieves 77% accuracy while the DNN reaches 99%; owing to this high accuracy, the DNN closely approximates the exact outage probability while offering greater computational efficiency than KNN. These results underscore the potential of deep learning for optimizing real-time relay selection in energy-harvesting cooperative communication systems.
(This article belongs to the Special Issue Advances in Networked Systems and Communication Protocols)
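The Monte Carlo outage estimation mentioned in the abstract can be sketched in a few lines. This is a simplified stand-in, not the paper's system model: it estimates the probability that a single Rayleigh-fading link's power gain (exponentially distributed with unit mean) falls below a threshold, and compares against the closed form 1 − exp(−γ).

```python
import math
import random

def outage_probability(threshold, trials=200_000, seed=1):
    """Monte Carlo estimate of P(channel power gain < threshold) under
    Rayleigh fading, i.e. |h|^2 ~ Exp(1).  Illustrative only; the
    paper's system additionally models DF relays and harvested energy."""
    rng = random.Random(seed)
    outages = sum(rng.expovariate(1.0) < threshold for _ in range(trials))
    return outages / trials

# Compare the estimate with the closed form 1 - exp(-threshold).
print(outage_probability(0.5), 1 - math.exp(-0.5))
```

The same simulated datasets of channel draws (plus labels) are what a classifier such as KNN or a DNN would be trained on.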

25 pages, 2129 KB  
Article
Stability and Forward Bifurcation Analysis of an SIPIVR Model for Poliovirus Transmission with Neural Network
by Abid Ali, Muhammad Arfan and Muhammad Asif
Symmetry 2026, 18(3), 435; https://doi.org/10.3390/sym18030435 - 2 Mar 2026
Viewed by 277
Abstract
The aim of this research is to formulate and analyze a modified SIpIVR mathematical model to study the transmission dynamics of poliovirus and assess the impact of vaccination on disease control. The proposed model extends classical SEIV-type frameworks by incorporating a recovered compartment with long-term immunity and by replacing the traditional exposed class with a pre-infectious compartment (Ip) that captures silent viral shedding during the incubation phase of poliovirus. This modification addresses the critical epidemiological feature that individuals can transmit the virus before showing symptoms, while maintaining biological accuracy in the compartment definitions. Several fundamental analytical properties are rigorously established, including positivity, boundedness, and the existence of a biologically meaningful invariant region. The basic reproduction number R0 is derived using the next-generation matrix approach, and a comprehensive stability analysis is carried out, showing that the disease-free equilibrium (DFE) is locally and globally asymptotically stable whenever R0 < 1. Using center manifold theory, a forward bifurcation is rigorously demonstrated, indicating that disease persistence emerges smoothly as R0 crosses unity. Local and global sensitivity analyses of R0 identify critical epidemiological parameters, pointing to vaccination coverage and transmission rates as key drivers of outbreak dynamics. Numerical simulations confirm the analytical results and illustrate two epidemiological scenarios, one with R0 < 1 and another with R0 > 1, together with a neural network analysis that uses the data from both cases in a built-in MATLAB 2020 toolbox; the network's hidden layers are used to assess training and validation performance and to compute absolute and mean squared errors. The simulations also show how vaccination suppresses the spread of infection. These findings provide a strong mathematical basis for public health policy, offering strategic insight into how vaccination campaigns might be optimized to accelerate progress toward global polio eradication.
(This article belongs to the Section Mathematics)
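The R0 threshold behavior described in the abstract can be demonstrated numerically with a reduced model. This sketch uses a basic SIR system (a stand-in for the paper's SIpIVR model, with hypothetical parameter values) and forward-Euler integration: when R0 = beta/gamma < 1, the infected fraction never grows beyond its initial value and decays toward the disease-free equilibrium.

```python
def simulate_sir(beta, gamma, i0=0.01, steps=2000, dt=0.1):
    """Forward-Euler integration of a basic SIR model, a reduced
    stand-in for the paper's SIpIVR system.  Here R0 = beta / gamma.
    Returns the final infected fraction and the peak infected fraction."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        rec = gamma * i * dt          # recoveries this step
        s, i = s - new_inf, i + new_inf - rec
        peak = max(peak, i)
    return i, peak

# R0 = 0.1 / 0.2 = 0.5 < 1: the epidemic cannot take off.
final_i, peak = simulate_sir(beta=0.1, gamma=0.2)
print(f"peak={peak:.4f}  final={final_i:.2e}")
```

Re-running with beta > gamma (R0 > 1) shows the peak rising well above i0 before the outbreak burns out, the other scenario illustrated in the paper.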

22 pages, 3981 KB  
Article
Rotating Electric Machine Fault Diagnosis with Magnetic Flux Measurement Using Deep Learning Models
by Obinna Onodugo, Innocent Enyekwe and Emmanuel Agamloh
Energies 2026, 19(4), 1106; https://doi.org/10.3390/en19041106 - 22 Feb 2026
Viewed by 836
Abstract
This paper presents new techniques for electric machine diagnostics that combine advanced signal processing with artificial intelligence (AI), using magnetic flux measurements acquired under various operating conditions. An effective electric machine diagnostics tool is paramount for increasing industrial productivity and extending machine service life. Existing diagnostic tools face two issues: classical methods can falsely indicate faults, and data-driven machine learning (ML) methods fail to transfer model knowledge to unseen datasets from motors of different types or power ratings because of structural differences. To overcome these drawbacks of statistical ML classifiers and classical approaches, innovative feature selection methods were employed in this work to preprocess the measured magnetic flux into spectrogram images, and transfer learning (TL) was applied to fine-tune ImageNet-pretrained convolutional neural network (CNN) models. Experimental results show that the trained statistical ML classifiers and a traditional CNN, evaluated on unseen BU data and on external data, fail to generalize to external datasets of different power ratings or structures; models with such drawbacks cannot support effective diagnostic systems. Applying TL to several deep ImageNet-pretrained CNN models, with spectrogram images as inputs, yielded an advanced and improved electric machine diagnostic system that addresses these shortcomings of current ML-based diagnostic systems. The generalized model built on ResNet50 outperformed the other pretrained models in correctly diagnosing faults on both the dataset generated in the authors' lab and an external dataset of a different machine from another research lab.

17 pages, 3650 KB  
Article
Multi-Entropy Feature Concatenation for Data-Efficient Cross-Subject Classification of Alzheimer’s Disease and Frontotemporal Dementia from Single-Channel EEG
by Jiawen Li, Chen Ling, Weidong Zhang, Jujian Lv, Xianglei Hu, Kaihan Lin, Jun Yuan, Shuang Zhang and Rongjun Chen
Entropy 2026, 28(2), 212; https://doi.org/10.3390/e28020212 - 12 Feb 2026
Viewed by 360
Abstract
Alzheimer's disease (AD) and frontotemporal dementia (FTD) are neurodegenerative disorders for which early detection is vital. However, the need for long-term monitoring is incompatible with data-scarce settings, and methods trained on one subject often fail on another due to cross-subject variability. To address these limitations, this study proposes a cross-subject, single-channel electroencephalography (EEG)-based method that uses Multi-Entropy Feature Concatenation (MEFC) to classify AD and FTD. First, single-channel EEG is decomposed with the Discrete Wavelet Transform (DWT) into five rhythms: delta, theta, alpha, beta, and gamma. Permutation Entropy (PE), Singular Spectrum Entropy (SSE), and Sample Entropy (SE) are then calculated for each rhythm and concatenated into a combined MEFC vector that characterizes the non-linear dynamic properties of the EEG. Lastly, Dynamic Time Warping (DTW), the Pearson Correlation Coefficient (PCC), Wavelet Coherence (WC), and Hilbert Transform Correlation (HTC) are employed to measure the similarity between an unknown rhythmic MEFC and those of the AD, FTD, and Healthy Control (HC) groups, yielding a data-driven classification via similarity measurement. On 88 subjects from the AHEPA dataset, the beta rhythm with PCC yields a three-class accuracy of 76.14% using single-channel FP2; on the 48-subject Florida-Based dataset, the theta rhythm with WC achieves a two-class accuracy of 83.33%, also using FP2. A MATLAB R2023b-based toolbox implementing the proposed method has also been developed. These outcomes are notable given the limited data per individual (data-efficient), reliable performance on new subjects (cross-subject), and compatibility with wearable devices (single-channel), providing a novel entropy-based approach for EEG-based applications in biomedical engineering.
(This article belongs to the Special Issue Entropy in Biomedical Engineering, 3rd Edition)
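Of the three entropies concatenated into the MEFC vector, permutation entropy is the simplest to state precisely. The sketch below is a standard order-3, lag-1 implementation (parameters chosen for illustration; the paper's settings are not given in the abstract): count the ordinal patterns of consecutive triples, take the Shannon entropy of their distribution, and normalize by log(3!) so the result lies in [0, 1].

```python
import math

def permutation_entropy(signal, order=3):
    """Normalized permutation entropy of a 1-D signal (order m, lag 1):
    the Shannon entropy of the ordinal-pattern distribution, divided by
    log(m!) so a fully regular signal scores 0 and white noise scores
    close to 1."""
    counts = {}
    for k in range(len(signal) - order + 1):
        window = signal[k:k + order]
        # Ordinal pattern: the argsort of the window's values.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(order))

# A monotone signal has a single ordinal pattern, so entropy is 0.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))
```

Computing PE (alongside SSE and SE) per wavelet rhythm and concatenating the values is what produces the MEFC feature vector described above.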

18 pages, 12622 KB  
Article
Flexible Solar Panel Recognition Using Deep Learning
by Mingyang Sun and Dinh Hoa Nguyen
Energies 2026, 19(4), 872; https://doi.org/10.3390/en19040872 - 7 Feb 2026
Viewed by 604
Abstract
Solar panels are important devices that convert light energy into electricity, not only from the sun but also from artificial light sources such as light-emitting diodes (LEDs) or lasers. Recent advances in solar cell technologies have made panels flexible, allowing them to be attached to objects of different sizes and shapes. It is therefore challenging for AI-equipped systems to automatically recognize and distinguish flexible solar panels from surrounding objects in realistic, complicated environments, and traditional recognition methods usually suffer from low recognition accuracy and high computational cost. Hence, this paper proposes a deep learning method for solar panel recognition with a complete workflow that covers data acquisition and dataset construction, YOLOv8-based model training, real-time solar panel recognition, and extended functionality. The proposed method accurately identifies realistic flat and flexible solar panels, including bent and partially shaded panels, with a mean average precision (mAP)@0.5 of 99.4% and an mAP@0.5:0.95 of 90.4%. The Pareto front of the multi-objective loss function minimization problem is also investigated to determine the optimal set of weighting parameters for the loss components. Furthermore, an additional functionality detects the sizes of different solar panels when multiple panels co-exist. These features provide a promising foundation for applying the proposed deep learning approach to flexible solar panel recognition in realistic contexts.
(This article belongs to the Special Issue Renewable Energy System Technologies: 3rd Edition)
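The mAP@0.5 and mAP@0.5:0.95 metrics quoted above both rest on intersection-over-union (IoU) between predicted and ground-truth boxes; at mAP@0.5, a detection counts as correct when IoU ≥ 0.5. A minimal IoU computation for axis-aligned boxes, included here as general background rather than as code from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2).  mAP@0.5 treats a detection as a true positive
    when its IoU with the matched ground-truth box is at least 0.5."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x2 strip: IoU = 2 / 6.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # → 0.333…
```

mAP@0.5:0.95 averages the same computation over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two figures.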

25 pages, 51444 KB  
Article
Local Contrast Enhancement in Digital Images Using a Tunable Modified Hyperbolic Tangent Transformation
by Camilo E. Echeverry and Manuel G. Forero
Mathematics 2026, 14(3), 571; https://doi.org/10.3390/math14030571 - 5 Feb 2026
Viewed by 407
Abstract
Low contrast is a frequent challenge in image analysis, especially in medical imaging and highly saturated scenes. To address this issue, we present a nonlinear transformation for local contrast enhancement in digital images. Our method adapts the hyperbolic tangent function using two parameters: one selects the intensity range to modify and the other controls the degree of enhancement. The approach outperforms conventional histogram-based techniques such as histogram equalization and specification for local contrast enhancement, without increasing computational cost, and produces smooth, artifact-free results in user-defined regions of interest. Compared with CLAHE on MRI images, the proposed method, unlike CLAHE, does not amplify the noise present in the image background. Furthermore, in deep learning contexts where dataset size is often limited, the method could serve as an effective data augmentation tool, generating images of varied contrast while preserving anatomical structures and thereby improving neural network training for brain tumor detection in magnetic resonance imaging. The ability to manipulate local contrast may also offer a pathway toward better interpretability of convolutional neural networks, as targeted contrast adjustments allow researchers to probe model sensitivity and enhance the explainability of classification and detection mechanisms.
(This article belongs to the Special Issue Data Mining and Algorithms Applied in Image Processing)
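A two-parameter tanh intensity mapping of the kind described can be sketched directly. This is an illustrative formulation in the spirit of the paper, not its exact transform: `center` selects the intensity range to stretch, `gain` sets the enhancement strength, and the output is rescaled so the mapping still spans [0, 1].

```python
import math

def tanh_enhance(x, center=0.5, gain=5.0):
    """Hypothetical tanh-based contrast mapping on normalized intensity
    x in [0, 1].  Intensities near `center` are stretched apart while
    the extremes are compressed; larger `gain` means stronger
    enhancement.  Rescaled so that 0 maps to 0 and 1 maps to 1."""
    lo = math.tanh(gain * (0.0 - center))
    hi = math.tanh(gain * (1.0 - center))
    return (math.tanh(gain * (x - center)) - lo) / (hi - lo)

# Mid-tones 0.45 and 0.55 end up much farther apart after the mapping.
print(round(tanh_enhance(0.45), 3), round(tanh_enhance(0.55), 3))
```

Because the mapping is a fixed per-pixel lookup, it costs no more than histogram equalization, consistent with the "without increasing computational cost" claim.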

20 pages, 1226 KB  
Review
Enhancing Performance and Quality of Life in Lower Limb Amputees: Physical Activity, a Valuable Tool—A Scoping Review
by Federica Delbello, Leonardo Zullo, Andrea Giacomini and Emiliana Bizzarini
Healthcare 2026, 14(2), 253; https://doi.org/10.3390/healthcare14020253 - 20 Jan 2026
Viewed by 663
Abstract
Background/Objectives: Lower limb amputation (LLA) negatively affects the physical and psychological health of individuals, leading to lower quality of life and a sedentary lifestyle. The objective of this scoping review is to gather evidence on physical activity interventions in individuals with LLA, investigating improvements in specific outcomes related to quality of life and performance. Methods: The study was structured according to the PRISMA extension for scoping reviews. The search was conducted between 26 July 2023 and 30 September 2023 in the PubMed, Cochrane, and PEDro databases, guided by two PICO questions (P = amputation, I = physical exercise, O1 = quality of life, O2 = performance). Included studies covered subjects with LLA of any etiology, in the prosthetic or pre-prosthetic phase, practicing non-competitive physical activity; results were subjected to both qualitative and quantitative analysis. Results: Of the 615 studies identified, 18 were included in the review: 6 systematic reviews (SR), 5 RCTs, 4 case-control studies, 1 case report (CR), and 2 cross-sectional studies (CS). Physical activity (PA) interventions were extremely heterogeneous and were therefore categorized into six modalities: surveys were the most frequently reported (57%), followed by personalized training (23%), strength training (13%), endurance training (13%), combined training (2%), and gait training (5%). Given the heterogeneity of the studies, the variety of interventions proposed, and the different outcomes registered, there is no evidence that one approach is more effective than another, although each group showed benefits on different specific outcomes. In total, five outcome categories were identified: quality of life was the most frequently analysed (42%), followed by cardiovascular fitness (20%), muscular fitness (14%), gait parameters (13%), and functionality and disability (11%). Conclusions: PA represents a valuable strategy for improving performance and quality of life in individuals with LLA, offering a variety of interventions. Although no single strategy is demonstrably better than the others, each activity has proven effective on specific outcomes, so the choice must depend on the patient's needs. The preferred option is training personalized to individual needs, coupled with long-term planning and remote monitoring; creating meeting places and supporting opportunities for sports activities could also be valuable. Further research could help clarify the benefits of such interventions and improve understanding of how to optimize the management of LLA patients.

23 pages, 1141 KB  
Article
Randomized Algorithms and Neural Networks for Communication-Free Multiagent Singleton Set Cover
by Guanchu He, Colton Hill, Joshua H. Seaton and Philip N. Brown
Games 2026, 17(1), 3; https://doi.org/10.3390/g17010003 - 12 Jan 2026
Viewed by 601
Abstract
This paper considers how a system designer can program a team of autonomous agents to coordinate with one another such that each agent selects (or covers) an individual resource, with the collective goal of covering the maximum number of resources. Specifically, we study how agents can formulate strategies without information about other agents' actions so that system-level performance remains robust under communication failures. First, we take an algorithmic approach to the scenario in which all agents lose the ability to communicate with one another, share a symmetric set of resources to choose from, and select actions independently according to a probability distribution over the resources. We show that the distribution maximizing the expected system-level objective under this approach can be computed by solving a convex optimization problem, and we introduce a novel polynomial-time heuristic based on subset selection; both methods are guaranteed to achieve at least a 1 − 1/e fraction of the system optimum in expectation. Second, we take a learning-based approach to study how a system designer can employ neural networks to approximate optimal agent strategies in the presence of communication failures. The neural network, trained on system-level optimal outcomes obtained through brute-force enumeration, generates utility functions that enable agents to make decisions in a distributed manner. Empirical results indicate that the neural network often outperforms greedy and randomized baseline algorithms. Collectively, these findings provide a broad study of optimal agent behavior and its impact on system-level performance when the information available to agents is extremely limited.
(This article belongs to the Section Algorithmic and Computational Game Theory)
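The 1 − 1/e figure has a familiar origin in this symmetric setting. As a simple illustrative calculation (not the paper's optimized distribution), if each of n agents independently picks one of n resources uniformly at random, linearity of expectation gives an expected covered fraction of 1 − (1 − 1/n)^n, which decreases toward 1 − 1/e as n grows:

```python
import math

def expected_coverage(n):
    """Expected fraction of n symmetric resources covered when each of
    n agents independently picks one resource uniformly at random.
    Each resource is missed with probability (1 - 1/n)^n, so the
    expected covered fraction is 1 - (1 - 1/n)^n."""
    return 1 - (1 - 1 / n) ** n

for n in (2, 10, 1000):
    print(n, round(expected_coverage(n), 4))
print(round(1 - 1 / math.e, 4))  # limiting value ≈ 0.6321
```

The guarantee in the abstract says the convex-program and subset-selection strategies never do worse than this 1 − 1/e fraction of the optimum in expectation.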

19 pages, 4383 KB  
Article
Integrating GAN-Generated SAR and Optical Imagery for Building Damage Mapping
by Chia Yee Ho, Bruno Adriano, Gerald Baier, Erick Mas, Sesa Wiguna, Magaly Koch and Shunichi Koshimura
Remote Sens. 2026, 18(1), 134; https://doi.org/10.3390/rs18010134 - 31 Dec 2025
Viewed by 1025
Abstract
Reliable assessment of building damage is essential for effective disaster management. Synthetic Aperture Radar (SAR) has become a valuable tool for damage detection, as it operates independently of daylight and weather conditions. However, the limited availability of high-resolution pre-disaster SAR data remains a major obstacle to accurate damage evaluation, constraining the applicability of traditional change-detection approaches. This study proposes a comprehensive framework that leverages generated SAR data alongside optical imagery for building damage detection, and further examines how elevation data quality influences SAR synthesis and model performance. The method integrates SAR image synthesis from a Digital Surface Model (DSM) and land cover inputs with a multimodal deep learning architecture that jointly localizes buildings and classifies damage levels. Two data modality scenarios are evaluated: a change-detection setting using authentic pre-disaster SAR and another using GAN-generated SAR, both combined with post-disaster SAR imagery for building damage assessment. Experimental results demonstrate that GAN-generated SAR can effectively substitute for authentic SAR in multimodal damage mapping: models using generated pre-disaster SAR achieved comparable or superior performance to those using authentic SAR, with F1 scores of 0.730, 0.442, and 0.790 for the survived, moderate, and destroyed classes, respectively. Ablation studies further reveal that the model relies more heavily on land cover segmentation than on fine elevation detail, suggesting that coarse-resolution (30 m) DSMs are sufficient as auxiliary input. Incorporating additional training regions further improved generalization and inter-class balance, confirming that high-quality generated SAR can serve as a viable alternative, especially in the absence of authentic SAR, for scalable post-disaster building damage assessment.
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)

14 pages, 7150 KB  
Article
Using Tourist Diver Images to Estimate Coral Cover and Bleaching Prevalence in a Remote Indian Ocean Coral Reef System
by Anderson B. Mayfield and Alexandra C. Dempsey
Oceans 2026, 7(1), 1; https://doi.org/10.3390/oceans7010001 - 24 Dec 2025
Cited by 1 | Viewed by 845
Abstract
Citizen science approaches for monitoring, and even restoring, coral reefs have grown in popularity, though they tend to be restricted to those who have taken courses covering the relevant methodologies. Now that cheap (~10 USD) waterproof pouches for smartphones are widely available, there is the potential for mass acquisition of coral reef images by non-scientists. Furthermore, with the emergence of better machine-learning-based image classification approaches, high-quality data can be extracted from low-resolution images, provided that key benthic organisms (corals, other invertebrates, and algae) can be distinguished. To determine whether informally captured images could yield ecological data comparable to point-intercept plus photo-quadrat surveys conducted by highly proficient research divers, we trained an artificial intelligence (AI), CoralNet, with images taken before and during a 2015 bleaching event in Chagos (Indian Ocean). Overall percent coral cover from the formal "gold standard" method and the informal "tourist diver" approach, 38.7% and 35.1%, respectively, agreed within ~10% of one another, and coral bleaching percentages of 30.5% and 31.8%, respectively, were statistically comparable. Although the AI classified bleached corals as healthy in roughly one-third of cases, the fact that these data could be collected by someone with no knowledge of coral reef ecology may justify the approach wherever divers or snorkelers have access to waterproof cameras and are keen to document coral reef condition.
(This article belongs to the Special Issue Ocean Observing Systems: Latest Developments and Challenges)

21 pages, 1171 KB  
Article
Methodology for Detecting Suspicious Claims in Health Insurance Using Supervised Machine Learning
by Jose Villegas-Ortega, Luis Napoleon Quiroz Aviles, Juan Nazario Arancibia, Wilder Carpio Montenegro, Rosa Delgadillo and David Mauricio
Future Internet 2025, 17(12), 584; https://doi.org/10.3390/fi17120584 - 18 Dec 2025
Cited by 1 | Viewed by 867
Abstract
Health insurance fraud (HIF) places a substantial economic burden on global health systems. While supervised machine learning (SML) offers a promising solution for its detection, most approaches are ad hoc and lack a systematic methodological framework ensuring replicability, adaptability, and effectiveness, especially in contexts with severe class imbalance. We developed PDHIF (Phases for Detecting Fraud in Health Insurance), a six-phase systematic methodology with a holistic focus that integrates fraud theory, actors, manifestations, and factors with the complete SML lifecycle. We applied the methodology in a case study on a dataset of 8.5 million claims from a public health insurance system in Peru, training and evaluating three SML models (Random Forest, XGBoost, and a multilayer perceptron) in two experimental scenarios: one with the original, highly unbalanced dataset and another with a training set balanced via the K-means SMOTE technique. The results revealed a stark contrast: in the unbalanced scenario, the models were ineffective at detecting fraud (F1 score < 0.521) despite high accuracy (>98%), whereas in the balanced scenario performance improved dramatically. The best-performing model, Random Forest, achieved an F1 score of 0.994, a sensitivity of 0.994, and an AUC of 0.994 on the test set, demonstrating a robust ability to distinguish suspicious claims.
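The class-balancing step is the pivot of the reported results. As background, the core SMOTE idea can be sketched in pure Python (the paper uses the K-means SMOTE variant, which first clusters the minority class; the clustering step is omitted here): each synthetic sample is a random interpolation between a minority point and one of its k nearest minority neighbours.

```python
import random

def smote_samples(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE-style oversampling sketch.  For each synthetic
    point: pick a random minority sample, find its k nearest minority
    neighbours by squared Euclidean distance, pick one, and interpolate
    at a random fraction t along the connecting segment."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, x)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()
        out.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return out

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(smote_samples(pts))
```

Training on a set balanced this way is what moved the F1 score from below 0.521 to 0.994 in the case study; the accuracy-only view (>98% on the unbalanced data) hides that gap entirely.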

17 pages, 4150 KB  
Article
An International Inter-Consortium Validation of Knowledge-Based Plan Prediction Modeling for Whole Breast Radiotherapy Treatment
by Lorenzo Placidi, Peter Griffin, Roberta Castriconi, Alessia Tudda, Giovanna Benecchi, Mark Burns, Elisabetta Cagni, Cathy Markham, Valeria Landoni, Eugenia Moretti, Caterina Oliviero, Giulia Rambaldi Guidasci, Guenda Meffe, Tiziana Rancati, Alessandro Scaggion, Karen McGoldrick, Vanessa Panettieri and Claudio Fiorino
Cancers 2025, 17(21), 3576; https://doi.org/10.3390/cancers17213576 - 5 Nov 2025
Viewed by 730
Abstract
Background: Knowledge-based (KB) planning is a promising approach to model prior planning experience and optimize radiotherapy. To enable the sharing of models across institutions, their transferability must be evaluated. This study aimed to validate KB prediction models developed by a national consortium using data from another multi-institutional consortium in a different country. Methods: Ten right whole breast tangential field (RWB-TF) models were built within the national consortium. A cohort of 20 patients from the external consortium was used for testing. A model was considered transferable when the ipsilateral (IPSI) lung first principal component (PC1) fell within the 10th–90th percentile of the training set. Predicted dose–volume parameters were compared with clinical dose–volume histograms (cDVHs). Results: Planning target volume (PTV) coverage strategies were comparable between the two consortia, even though significant volume differences were observed for the PTV and contralateral breast (p = 0.002 and p = 0.02, respectively). For the IPSI lung, the standard deviation of the predicted mean dose/volume receiving more than 20 Gy (V20 Gy) was 1.13 Gy/2.9% in the external consortium versus 0.55 Gy/1.6% in the training consortium. Differences between clinical and predicted IPSI lung mean dose and V20 Gy were <2 Gy and <5% in 88.7% and 92.3% of cases, respectively. PC1 values fell within the 10th–90th percentile for ≥90% of patients in 6/10 models and for 65–85% of patients in the remaining 4. Conclusions: This study demonstrates the feasibility of applying RWB-TF KB models beyond the consortium in which they were developed, supporting broader clinical implementation. This retrospective study was supported by AIRC (Associazione Italiana per la Ricerca sul Cancro) and registered on ClinicalTrials.gov (NCT06317948, 12 March 2024). Full article
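The transferability criterion described in the Methods, a test patient's first principal component falling within the 10th–90th percentile of the training distribution, can be sketched with plain NumPy. The DVH matrices below are random stand-ins, and the cohort sizes, dose-bin count, and PCA-via-SVD details are illustrative assumptions, not the consortium's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ipsilateral-lung DVH samples: rows = patients, cols = dose bins.
train_dvh = rng.normal(size=(50, 30)).cumsum(axis=1)
test_dvh = rng.normal(size=(20, 30)).cumsum(axis=1)

# First principal component of the training DVHs (SVD of centered data).
mean = train_dvh.mean(axis=0)
_, _, vt = np.linalg.svd(train_dvh - mean, full_matrices=False)
pc1_train = (train_dvh - mean) @ vt[0]
pc1_test = (test_dvh - mean) @ vt[0]

# Transferability criterion from the study: a test patient's PC1 must lie
# within the 10th-90th percentile of the training distribution.
lo, hi = np.percentile(pc1_train, [10, 90])
in_range = (pc1_test >= lo) & (pc1_test <= hi)
coverage = in_range.mean()  # fraction of external patients covered
```

The study's 6/10 models reaching ≥90% coverage corresponds to `coverage >= 0.9` under this check.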

21 pages, 2585 KB  
Article
Application of the WRF Model for Operational Wind Power Forecasting in Northeast Brazil
by Thiago Silva, Alexandre Costa, Olga C. Vilela, Ramiro Willmersdorf, José Vailson dos Santos Júnior, Luís Henrique Bezerra Alves, Pedro Tyaquiçã, Mateus Francisco Silva de Lima, Herbert Rafael Barbosa de Souza and Doris Veleda
Energies 2025, 18(21), 5731; https://doi.org/10.3390/en18215731 - 31 Oct 2025
Cited by 3 | Viewed by 909
Abstract
Northeastern Brazil (NEB) has a high potential for wind energy generation, making it a strategic area for the development of this renewable source. However, the region’s complex wind regime, driven by interactions between large-scale atmospheric systems, local circulations, and coastal topography, presents significant challenges for weather forecasting and wind energy applications. Despite this, detailed assessments of forecast performance using mesoscale models remain limited. The main objective was to develop an efficient strategy that achieves satisfactory results by optimizing data assimilation, land-use and topography information, physical parameterizations, and post-processing while limiting computational effort. Forecasts produced during 2020 were validated against data from 20 anemometric measurement towers (AMTs) located at strategic points across various wind power complexes. The model’s performance was evaluated using statistical metrics such as MBE, MAE, nRMSE, the standard deviation ratio, and correlation. Additionally, the impact of bias removal was assessed using two approaches: one that subtracts the mean error per forecasted time step and another that trains an artificial intelligence model for bias removal. The results revealed distinct characteristics for each analyzed location, with errors of diverse nature due to local measurement nuances. Nevertheless, both bias removal approaches yielded significant improvements in wind characterization across all complexes. Full article
(This article belongs to the Section B: Energy and Environment)
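The first of the two bias-removal approaches, subtracting the mean error per forecasted time step, can be sketched as follows. The data are synthetic (a diurnal wind cycle with an artificial +1.5 m/s forecast bias), and the train/verify split, noise levels, and per-hour-of-day grouping are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hourly wind-speed observations over 30 days (m/s),
# with a diurnal cycle plus measurement noise.
hours, days = 24, 30
diurnal = 8 + 2 * np.sin(np.linspace(0, 2 * np.pi, hours))
obs = diurnal[None, :] + rng.normal(0, 0.5, (days, hours))
# Forecast with a systematic +1.5 m/s bias plus its own noise.
fc = obs + 1.5 + rng.normal(0, 0.5, (days, hours))

# Bias removal: estimate the mean error per forecasted time step
# (here, per hour of day) on a training period, then subtract it.
bias = (fc[:20] - obs[:20]).mean(axis=0)   # per-hour bias, first 20 days
fc_corr = fc[20:] - bias                   # corrected forecasts, last 10 days

mae_raw = np.abs(fc[20:] - obs[20:]).mean()
mae_corr = np.abs(fc_corr - obs[20:]).mean()
```

With a stable systematic bias like this one, the per-time-step correction removes most of the MAE; the AI-based approach in the paper targets the remaining state-dependent errors.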

13 pages, 474 KB  
Article
Necessary and Sufficient Reservoir Condition for Universal Reservoir Computing
by Shuhei Sugiura, Ryo Ariizumi, Toru Asai and Shun-ichi Azuma
Mathematics 2025, 13(21), 3440; https://doi.org/10.3390/math13213440 - 28 Oct 2025
Viewed by 1234
Abstract
We discuss necessary and sufficient conditions for universal approximation using reservoir computing. Reservoir computing is a machine learning method that trains a dynamical system model by tuning only the static part of the model. Universality is the ability of the model to approximate any dynamical system with any precision. In previous studies, we provided two sufficient conditions for universality, employing the universality definition that has been discussed since the earliest studies on reservoir computing. In the present paper, we prove that these two conditions and universality are mutually equivalent. Using this equivalence, we show that a universal model must have a “pathological” property that can only be achieved or approached by chaotic reservoirs. Full article
(This article belongs to the Special Issue Machine Learning: Mathematical Foundations and Applications)
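The defining feature of reservoir computing mentioned in the abstract, tuning only the static part (a linear readout) while the dynamical reservoir stays fixed, can be sketched as a minimal echo state network. Everything here (reservoir size, spectral radius, the delay-3 memory task, the ridge regularizer) is an illustrative assumption, not the paper's formal setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500
u = rng.uniform(-1, 1, (T, n_in))     # input sequence
target = np.roll(u[:, 0], 3)          # hypothetical task: recall input 3 steps ago

# Fixed random reservoir: the dynamical part is NOT trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Only the static readout is trained, by ridge regression on the states.
washout = 50                          # discard initial transient
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
mse = np.mean((S @ W_out - y) ** 2)
```

The reservoir matrices `W` and `W_in` never change during training; only `W_out` is fit, which is what makes the universality question about the reservoir's dynamics, not the readout.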
