Search Results (3,365)

Search Parameters:
Keywords = k-nearest neighbors

25 pages, 17505 KiB  
Article
A Hybrid Spatio-Temporal Graph Attention (ST D-GAT Framework) for Imputing Missing SBAS-InSAR Deformation Values to Strengthen Landslide Monitoring
by Hilal Ahmad, Yinghua Zhang, Hafeezur Rehman, Mehtab Alam, Zia Ullah, Muhammad Asfandyar Shahid, Majid Khan and Aboubakar Siddique
Remote Sens. 2025, 17(15), 2613; https://doi.org/10.3390/rs17152613 - 28 Jul 2025
Abstract
Reservoir-induced landslides threaten infrastructure and downstream communities, making continuous deformation monitoring vital. Time-series InSAR, notably the SBAS algorithm, provides high-precision surface-displacement mapping but suffers from voids due to layover/shadow effects and temporal decorrelation. Existing deep-learning approaches often operate on fixed-size patches or ignore irregular spatio-temporal dependencies, limiting their ability to recover missing pixels. With this objective, a hybrid spatio-temporal Graph Attention (ST-GAT) framework was developed and trained on SBAS-InSAR values using 24 influential features. A unified spatio-temporal graph is constructed, where each node represents a pixel at a specific acquisition time. The nodes are connected via inverse-distance spatial edges to their K-nearest neighbors and via bidirectional temporal edges to themselves in adjacent acquisitions. The two spatial GAT layers capture terrain-driven influences, while the two temporal GAT layers model annual deformation trends. A compact MLP with per-map bias converts the fused node embeddings into normalized LOS estimates. The SBAS-InSAR results reveal LOS deformation with 48% of pixels missing, 20% of them located near the Dasu dam. ST D-GAT reconstructed fully continuous spatio-temporal displacement fields, filling voids at critical sites. The model was validated and achieved an overall R2 of 0.907, ρ of 0.947, per-map R2 ≥ 0.807 with RMSE ≤ 9.99, and a ROC-AUC of 0.91. It also outperformed all six baseline models compared (IDW, KNN, RF, XGBoost, MLP, simple NN) in both RMSE and R2. By combining observed LOS values with 24 covariates, the proposed model delivers physically consistent gap-filling and enables continuous, high-resolution landslide monitoring in radar-challenged mountainous terrain.
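The inverse-distance K-nearest-neighbor spatial edges described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration only: the coordinates below are hypothetical, and the paper's 24 covariates, temporal edges, and attention layers are not modeled.

```python
import numpy as np

def knn_edges(coords, k):
    """Build directed K-nearest-neighbor edges with inverse-distance weights.

    coords : (n, 2) array of pixel coordinates
    Returns a list of (src, dst, weight) tuples, one per neighbor link.
    """
    coords = np.asarray(coords, dtype=float)
    # Pairwise Euclidean distances between all nodes.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)          # exclude self-loops
    edges = []
    for i in range(len(coords)):
        for j in np.argsort(dist[i])[:k]:   # k closest nodes to i
            edges.append((i, int(j), 1.0 / dist[i, j]))
    return edges

# Four pixels on a line: each node links to its two nearest pixels.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
E = knn_edges(pts, k=2)
```

In a graph-attention setting these weighted edges would define the adjacency over which spatial attention is computed.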

17 pages, 1149 KiB  
Article
The Relationship Between Smartphone and Game Addiction, Leisure Time Management, and the Enjoyment of Physical Activity: A Comparison of Regression Analysis and Machine Learning Models
by Sevinç Namlı, Bekir Çar, Ahmet Kurtoğlu, Eda Yılmaz, Gönül Tekkurşun Demir, Burcu Güvendi, Batuhan Batu and Monira I. Aldhahi
Healthcare 2025, 13(15), 1805; https://doi.org/10.3390/healthcare13151805 - 25 Jul 2025
Abstract
Background/Objectives: Smartphone addiction (SA) and gaming addiction (GA) have become risk factors for individuals of all ages in recent years. Especially during adolescence, it has become very difficult for parents to control this situation. Physical activity and the effective use of free time are the most important factors in eliminating such addictions. This study aimed to test a new machine learning method by combining routine regression analysis with the gradient-boosting machine (GBM) and random forest (RF) methods to analyze the relationship of SA and GA with leisure time management (LTM) and the enjoyment of physical activity (EPA) among adolescents. Methods: This study presents the results obtained using our developed GBM + RF hybrid model, which incorporates LTM and EPA scores as inputs for predicting SA and GA, following the preprocessing of data collected from 1107 high school students aged 15–19 years. The results were compared with those obtained using routine regression as well as the lasso, ElasticNet, RF, GBM, AdaBoost, bagging, support vector regression (SVR), K-nearest neighbors (KNN), multi-layer perceptron (MLP), and light gradient-boosting machine (LightGBM) models. In the GBM + RF model, probability scores obtained from GBM were used as input to RF to produce the final predictions. The performance of the models was evaluated using the R2, mean absolute error (MAE), and mean squared error (MSE) metrics. Results: Classical regression analyses revealed a significant negative relationship between SA scores and both LTM and EPA scores. Specifically, as LTM and EPA scores increased, SA scores decreased significantly. In contrast, GA scores showed a significant negative relationship only with LTM scores, whereas EPA was not a significant determinant of GA. In contrast to the relatively low explanatory power of classical regression models, the ML algorithms demonstrated significantly higher prediction accuracy. The best performance for SA prediction was achieved using the hybrid GBM + RF model (MAE = 0.095, MSE = 0.010, R2 = 0.9299), whereas the SVR model showed the weakest performance (MAE = 0.310, MSE = 0.096, R2 = 0.8615). Similarly, the hybrid GBM + RF model also showed the highest performance for GA prediction (MAE = 0.090, MSE = 0.014, R2 = 0.9699). Conclusions: These findings demonstrate that classical regression analyses have limited explanatory power in capturing complex relationships between variables, whereas ML algorithms, particularly our GBM + RF hybrid model, offer more robust and accurate modeling capabilities for multifactorial cognitive and performance-related predictions.
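The stacking pattern this abstract describes — stage-one scores fed as an input feature to a stage-two model — can be sketched with simple stand-ins. Here a least-squares linear model stands in for the GBM and a 1-NN regressor for the RF; the data is a toy example, not the study's dataset.

```python
import numpy as np

def stage1_linear(X, y):
    """Stage 1: least-squares linear model standing in for the GBM."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

def stage2_knn(X_aug, y, k=1):
    """Stage 2: KNN regressor standing in for the RF, trained on features
    augmented with the stage-1 scores."""
    def predict(Z_aug):
        out = []
        for z in Z_aug:
            d = ((X_aug - z) ** 2).sum(1)
            out.append(y[np.argsort(d)[:k]].mean())
        return np.array(out)
    return predict

# Toy data: the target is an exact linear function of one feature.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 2.0, 4.0, 6.0])

f1 = stage1_linear(X, y)
X_aug = np.c_[X, f1(X)]       # append stage-1 scores as a new column
f2 = stage2_knn(X_aug, y, k=1)
pred = f2(X_aug)
```

In practice the stage-1 scores would be produced out-of-fold to avoid leaking training targets into stage 2.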

22 pages, 3429 KiB  
Article
Indoor Positioning and Tracking System in a Multi-Level Residential Building Using WiFi
by Elmer Magsino, Joshua Kenichi Sim, Rica Rizabel Tagabuhin and Jan Jayson Tirados
Information 2025, 16(8), 633; https://doi.org/10.3390/info16080633 - 24 Jul 2025
Abstract
The implementation of an Indoor Positioning System (IPS) in a three-storey residential building employing WiFi signals, which can also be used to track indoor movements, is presented in this study. The movement of inhabitants is monitored through an Android smartphone by detecting the Received Signal Strength Indicator (RSSI) signals from WiFi Anchor Points (APs). Indoor movement is detected through a successive estimation of a target's multiple positions. Using the K-Nearest Neighbors (KNN) and Particle Swarm Optimization (PSO) algorithms, these RSSI measurements are trained for estimating the position of an indoor target. Additionally, Density-based Spatial Clustering of Applications with Noise (DBSCAN) has been integrated into the PSO method to remove outliers among the RSSI-estimated positions of the mobile device, further improving indoor position detection and monitoring accuracy. We also employed Time Reversal Resonating Strength (TRRS), a correlation technique, as the third method of localization. Our extensive and rigorous experimentation covers the influence of various weather conditions on indoor detection. Our proposed localization methods have maximum accuracies of 92%, 80%, and 75% for TRRS, KNN, and PSO + DBSCAN, respectively. Each method also deviates from the true target position by roughly one meter.
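The KNN step in this kind of RSSI fingerprinting can be sketched as a weighted nearest-neighbor lookup over a survey database. The fingerprints, positions, and live reading below are hypothetical; the paper's actual survey layout and PSO/DBSCAN stages are not shown.

```python
import numpy as np

def knn_locate(fingerprints, positions, rssi, k=2):
    """Weighted KNN position estimate from an RSSI fingerprint database.

    fingerprints : (n, m) RSSI vectors recorded at known survey points
    positions    : (n, 2) coordinates of those survey points
    rssi         : (m,) live measurement from the target device
    """
    F, P = np.asarray(fingerprints, float), np.asarray(positions, float)
    d = np.sqrt(((F - np.asarray(rssi, float)) ** 2).sum(1))
    idx = np.argsort(d)[:k]                 # k most similar survey points
    w = 1.0 / (d[idx] + 1e-9)               # inverse-distance weights
    return (P[idx] * w[:, None]).sum(0) / w.sum()

# Two APs, three survey points; the live reading sits between the first two.
F = [[-40, -70], [-50, -60], [-70, -40]]
P = [[0, 0], [1, 0], [3, 0]]
est = knn_locate(F, P, rssi=[-45, -65], k=2)
```

Because the live reading is equidistant from the first two fingerprints, the estimate lands at their midpoint.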

33 pages, 3019 KiB  
Article
Aging Assessment of Power Transformers with Data Science
by Samuel Lessinger, Alzenira da Rosa Abaide, Rodrigo Marques de Figueiredo, Lúcio Renê Prade and Paulo Ricardo da Silva Pereira
Energies 2025, 18(15), 3960; https://doi.org/10.3390/en18153960 - 24 Jul 2025
Abstract
Maintenance techniques are fundamental in the context of the safe operation of continuous process installations, especially in electrical energy transmission and/or distribution substations. The operating conditions of power transformers are fundamental to the safe functioning of the electrical power system. Predictive maintenance consists of periodically monitoring the asset in use in order to anticipate critical situations. This article proposes a methodology based on data science, machine learning and the Internet of Things (IoT) to track operational conditions over time and evaluate transformer aging. This is achieved with the development of a synchronization method for different databases and the construction of a model for estimating ambient temperatures using k-Nearest Neighbors. In this way, a more consistent historical assessment is carried out, given the environmental conditions faced by the equipment. The work evaluated data from three power transformers in different geographic locations, demonstrating the initial applicability of the method in identifying equipment aging. Transformer TR1 showed aging of 3.24×10⁻³%, followed by TR2 with 8.565×10⁻³% and TR3 with 294.17×10⁻⁶% over the evaluated period of time.
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 4th Edition)

24 pages, 1572 KiB  
Article
Optimizing DNA Sequence Classification via a Deep Learning Hybrid of LSTM and CNN Architecture
by Elias Tabane, Ernest Mnkandla and Zenghui Wang
Appl. Sci. 2025, 15(15), 8225; https://doi.org/10.3390/app15158225 - 24 Jul 2025
Abstract
This study addresses the performance of deep learning models for human DNA sequence classification through an exploration of ideal feature representation, model architecture, and hyperparameter tuning. It contrasts traditional machine learning with advanced deep learning approaches to ascertain performance with respect to genomic data complexity. A hybrid network combining long short-term memory (LSTM) and convolutional neural networks (CNN) was developed to extract long-distance dependencies as well as local patterns from DNA sequences. The hybrid LSTM + CNN model achieved a classification accuracy of 100%, significantly higher than traditional approaches such as logistic regression (45.31%), naïve Bayes (17.80%), and random forest (69.89%), as well as other machine learning models such as XGBoost (81.50%) and k-nearest neighbor (70.77%). Among deep learning techniques, the DeepSea model also performed well (76.59%), while others such as DeepVariant (67.00%) and graph neural networks (30.71%) scored relatively lower. Preprocessing techniques, chiefly one-hot encoding and DNA embeddings, were used to transform the sequence data into a form compatible with deep learning. The findings underscore the robustness of hybrid architectures in genomic classification tasks and warrant future research on encoding strategies and hyperparameter tuning to further improve accuracy and generalization in DNA sequence analysis.
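The one-hot encoding step named in this abstract is straightforward to sketch: each base maps to one of four indicator columns. This is a generic illustration, not the paper's exact preprocessing pipeline.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string into a (len, 4) matrix, one row per base."""
    m = np.zeros((len(seq), len(BASES)))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0   # set the column for this base
    return m

# The sequence "ACGT" hits each base once, giving the 4x4 identity matrix.
x = one_hot("ACGT")
```

Matrices of this shape are what a 1-D CNN or LSTM layer would consume, one row per sequence position.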

21 pages, 4847 KiB  
Article
The Application of KNN-Optimized Hybrid Models in Landslide Displacement Prediction
by Hongwei Jiang, Jiayi Wu, Hao Zhou, Mengjie Liu, Shihao Li, Yuexu Wu and Yongfan Guo
Eng 2025, 6(8), 169; https://doi.org/10.3390/eng6080169 - 23 Jul 2025
Abstract
Early warning systems depend heavily on the accuracy of landslide displacement forecasts. This study focuses on the Bazimen landslide located in the Three Gorges Reservoir region and proposes a hybrid prediction approach combining support vector regression (SVR) and long short-term memory (LSTM) networks. These models are optimized via the K-Nearest Neighbor (KNN) algorithm. Initially, cumulative displacement data were separated into trend and cyclic elements using a smoothing approach. SVR and LSTM were then used to predict the components, and KNN was introduced to optimize input factors and classify the results, improving accuracy. The final KNN-optimized SVR-LSTM model effectively integrates static and dynamic features, addressing limitations of traditional models. The results show that LSTM performs better than SVR, with an RMSE and MAPE of 24.73 mm and 1.87% at monitoring point ZG111, compared to 30.71 mm and 2.15% for SVR. The sequential hybrid model based on KNN-optimized SVR and LSTM achieved the best performance, with an RMSE and MAPE of 23.11 mm and 1.68%, respectively. This integrated model, which combines multiple algorithms, offers improved prediction of landslide displacement and practical value for disaster forecasting in the Three Gorges area.
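The separation of cumulative displacement into trend and cyclic components via smoothing can be sketched with a centered moving average: the smoothed series is the trend, and the residual is the cyclic part. The window size and series below are illustrative, not the study's values.

```python
import numpy as np

def decompose(series, window):
    """Split a displacement series into a trend (centered moving average)
    and a cyclic residual, mirroring a smoothing-based separation step."""
    s = np.asarray(series, float)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(s, pad, mode="edge")    # hold edge values at the ends
    trend = np.convolve(padded, kernel, mode="valid")
    return trend, s - trend

series = [1.0, 2.0, 3.0, 4.0, 5.0]
trend, cyclic = decompose(series, window=3)
```

By construction the two components sum back to the original series, so each can be predicted separately and recombined.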
(This article belongs to the Section Chemical, Civil and Environmental Engineering)

25 pages, 6316 KiB  
Article
Integration of Remote Sensing and Machine Learning Approaches for Operational Flood Monitoring Along the Coastlines of Bangladesh Under Extreme Weather Events
by Shampa, Nusaiba Nueri Nasir, Mushrufa Mushreen Winey, Sujoy Dey, S. M. Tasin Zahid, Zarin Tasnim, A. K. M. Saiful Islam, Mohammad Asad Hussain, Md. Parvez Hossain and Hussain Muhammad Muktadir
Water 2025, 17(15), 2189; https://doi.org/10.3390/w17152189 - 23 Jul 2025
Abstract
The Ganges–Brahmaputra–Meghna (GBM) delta, characterized by complex topography and hydrological conditions, is highly susceptible to recurrent flooding, particularly in its coastal regions where tidal dynamics hinder floodwater discharge. This study integrates Synthetic Aperture Radar (SAR) imagery with machine learning (ML) techniques to assess near real-time flood inundation patterns associated with extreme weather events, including recent cyclones between 2017 and 2024 (namely, Mora, Titli, Fani, Amphan, Yaas, Sitrang, Midhili, and Remal) as well as intense monsoonal rainfall during the same period, across a large spatial scale, to support disaster risk management efforts. Three machine learning algorithms, namely random forest (RF), support vector machine (SVM), and K-nearest neighbors (KNN), were applied to flood extent data derived from SAR imagery to enhance flood detection accuracy. Among these, the SVM algorithm demonstrated the highest classification accuracy (75%) and exhibited superior robustness in delineating flood-affected areas. The analysis reveals that both cyclone intensity and rainfall magnitude significantly influence flood extent, with the western coastal zone (e.g., Morrelganj and Kaliganj) being most consistently affected. The peak inundation extent was observed during the 2023 monsoon (10,333 sq. km), while interannual variability in rainfall intensity directly influenced the spatial extent of flood-affected zones. In parallel, eight major cyclones, including Amphan (2020) and Remal (2024), triggered substantial flooding, with the most severe inundation recorded during Cyclone Remal (9243 sq. km). Morrelganj and Chakaria were consistently identified as flood hotspots during both monsoonal and cyclonic events. Comparative analysis indicates that cyclones result in larger areas with low-level inundation (19,085 sq. km) compared to monsoons (13,829 sq. km). However, monsoon events result in a larger area impacted by frequent inundation, underscoring the critical role of rainfall intensity. These findings underscore the utility of SAR-ML integration in operational flood monitoring and highlight the urgent need for localized, event-specific flood risk management strategies to enhance flood resilience in the GBM delta.

19 pages, 1406 KiB  
Article
A Comparative Study of Dimensionality Reduction Methods for Accurate and Efficient Inverter Fault Detection in Grid-Connected Solar Photovoltaic Systems
by Shahid Tufail and Arif I. Sarwat
Electronics 2025, 14(14), 2916; https://doi.org/10.3390/electronics14142916 - 21 Jul 2025
Abstract
The continuous, effective operation of grid-connected photovoltaic (GCPV) systems depends on dependable inverter fault detection. Early, precise fault diagnosis improves overall system dependability, lowers maintenance costs, and reduces downtime. Although computational efficiency remains a challenge, particularly in resource-limited contexts, machine learning-based fault detection offers promising accuracy and responsiveness. By reducing data complexity and allowing faster and more effective fault diagnosis, dimensionality reduction methods play a vital role. Using dimensionality reduction and ML techniques, this work explores inverter fault detection in GCPV systems. Photovoltaic inverter operational data was normalized and preprocessed. Next, dimensionality reduction using Principal Component Analysis (PCA) and autoencoder-based feature extraction was explored. Four classifiers were trained: Random Forest (RF), logistic regression (LR), decision tree (DT), and K-Nearest Neighbors (KNN). Trained on the whole standardized dataset, the RF model routinely produced the highest accuracy, 99.87%, efficiently capturing complicated feature interactions but requiring substantial processing resources and a training time of 36.47 s. The LR model was less accurate but trained far faster than the other models. Further, PCA greatly lowered computing demands, especially improving inference speed for LR and KNN. Autoencoder-derived features maintained high accuracy (99.23%) across all models.
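The PCA step this abstract describes — projecting centered data onto its leading principal components before classifier training — can be sketched via the SVD. The toy data below is illustrative, not the inverter dataset.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project data onto its top principal components (via SVD),
    the dimensionality-reduction step applied before classifier training."""
    Xc = X - X.mean(0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T         # scores in the reduced space

# Points along the line y = x: one component captures all the variance,
# so the squared norms of the scores equal those of the centered data.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
Z = pca_transform(X, 1)
```

Halving (or better) the feature count this way is what drives the inference-speed gains reported for LR and KNN.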

14 pages, 959 KiB  
Article
Non-Invasive Assessment of Heat Comfort in Dairy Calves Based on Thermal Signature
by Rafael Vieira de Sousa, Jéssica Caetano Dias Campos, Gabriel Pagin, Danilo Florentino Pereira, Aline Rabello Conceição, Rubens André Tabile and Luciane Silva Martello
Dairy 2025, 6(4), 38; https://doi.org/10.3390/dairy6040038 - 21 Jul 2025
Abstract
Infrared thermography (IRT) is explored as a non-invasive method for indirectly measuring parameters related to animal performance and welfare. This study investigates a feature extraction method termed the “thermal signature” (TS), a descriptor vector derived from the temperature matrix of an animal’s body surface, representing the percentage distribution of temperatures within predefined ranges. The TS, combined with environmental data, serves as a predictor attribute for machine learning-based classifier models to assess heat stress levels. The methodology was applied to a dataset collected from two groups of five dairy calves housed in a climate-controlled chamber and exposed to two artificial heat waves over 13 days. Data, including IRT measurements, respiratory rate (RR), rectal temperature (RT), and environmental variables, were collected five times daily (from 6 a.m. to 10 p.m., every four hours). Classifier models were developed using random forest (RF), support vector machine (SVM), artificial neural network (ANN), and K-nearest neighbor (KNN) algorithms. The RF models based on RR achieved the highest accuracies, 94.1% for two heat stress levels and 80.3% for three heat stress levels, using TS configurations with six temperature ranges. The integration of TS with machine learning-based models demonstrates promising results for developing or enhancing classifiers of heat stress levels in dairy calves.
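The "thermal signature" as defined here — the percentage of body-surface temperatures falling in each predefined range — is essentially a normalized histogram. The temperature matrix and bin edges below are hypothetical, not the study's six-range configuration.

```python
import numpy as np

def thermal_signature(temps, edges):
    """Percentage of body-surface pixels falling in each temperature range:
    the descriptor vector the abstract calls a thermal signature (TS)."""
    t = np.asarray(temps, float).ravel()
    counts, _ = np.histogram(t, bins=edges)
    return 100.0 * counts / t.size

# Hypothetical 2x3 surface-temperature matrix and three 1 °C-wide ranges.
temps = [[36.1, 36.8, 37.2], [38.5, 36.4, 37.9]]
sig = thermal_signature(temps, edges=[36.0, 37.0, 38.0, 39.0])
```

The resulting fixed-length vector (which always sums to 100) is what gets concatenated with environmental data and fed to the classifiers.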
(This article belongs to the Section Dairy Animal Nutrition and Welfare)

29 pages, 4788 KiB  
Article
Statistical and Machine Learning Classification Approaches to Predicting and Controlling Peak Temperatures During Friction Stir Welding (FSW) of Al-6061-T6 Alloys
by Assad Anis, Muhammad Shakaib and Muhammad Sohail Hanif
J. Manuf. Mater. Process. 2025, 9(7), 246; https://doi.org/10.3390/jmmp9070246 - 21 Jul 2025
Abstract
This paper presents the optimization of peak temperatures achieved during friction stir welding (FSW) of Al-6061-T6 alloys. This research work employed a novel approach by investigating the effect of FSW process parameters on peak temperatures through the implementation of finite element analysis (FEA), the Taguchi method, analysis of variance (ANOVA), and machine learning (ML) algorithms. COMSOL 6.0 Multiphysics was used to perform the FEA to predict peak temperatures, incorporating seven distinct welding parameters: tool material, pin diameter, shoulder diameter, tool rotational speed, welding speed, axial force, and coefficient of friction. The influence of these parameters was investigated using an L32 Taguchi array and ANOVA, revealing that axial force and tool rotational speed were the most significant parameters affecting peak temperatures. Some simulations showed temperatures exceeding the material's melting point, indicating the need for improved thermal control. This was achieved using three machine learning (ML) algorithms: Logistic Regression, k-Nearest Neighbors (k-NN), and Naive Bayes. A dataset of 324 data points was prepared using a factorial design to implement these algorithms, which predicted the welding conditions under which the temperature exceeded the melting temperature of Al-6061-T6. The Logistic Regression classifier demonstrated the highest performance, achieving an accuracy of 98.14%, compared with the Naive Bayes and k-NN classifiers. These findings contribute to sustainable welding practices by minimizing excessive heat generation, preserving material properties, and enhancing weld quality.

37 pages, 5856 KiB  
Article
Machine Learning-Based Recommender System for Pulsed Laser Ablation in Liquid: Recommendation of Optimal Processing Parameters for Targeted Nanoparticle Size and Concentration Using Cosine Similarity and KNN Models
by Anesu Nyabadza and Dermot Brabazon
Crystals 2025, 15(7), 662; https://doi.org/10.3390/cryst15070662 - 20 Jul 2025
Abstract
Achieving targeted nanoparticle (NP) size and concentration combinations in Pulsed Laser Ablation in Liquid (PLAL) remains a challenge due to the highly nonlinear relationships between laser processing parameters and NP properties. Despite the promise of PLAL as a surfactant-free, scalable synthesis method, its industrial adoption is hindered by empirical trial-and-error approaches and the lack of predictive tools. The current literature offers limited application of machine learning (ML), particularly recommender systems, in PLAL optimization and automation. This study addresses this gap by introducing an ML-based recommender system trained on a 3 × 3 design of experiments with three replicates covering variables such as fluence (1.83–1.91 J/cm2), ablation time (5–25 min), and laser scan speed (3000–3500 mm/s) in producing magnesium nanoparticles from powders. Multiple ML models were evaluated, including K-Nearest Neighbors (KNN), Extreme Gradient Boosting (XGBoost), Random Forest, and Decision Tree (DT). The DT model achieved the best performance for predicting NP size, with a mean percentage error (MPE) of 10%. The XGBoost model was optimal for predicting NP concentration, attaining a competitive MPE of 2%. KNN and cosine-similarity recommender systems were developed based on a database generated by the ML predictions. This intelligent, data-driven framework demonstrates the potential of ML-guided PLAL for scalable, precise NP fabrication in industrial applications.
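A cosine-similarity recommender of the kind this abstract names can be sketched as: score every stored outcome vector against the target, then return the processing parameters of the closest matches. The parameter names and database values below are hypothetical stand-ins, not the paper's actual database.

```python
import numpy as np

def recommend(db_params, db_outcomes, target, k=1):
    """Rank stored (size, concentration) outcomes by cosine similarity to a
    target outcome and return the processing parameters of the best matches."""
    O = np.asarray(db_outcomes, float)
    t = np.asarray(target, float)
    # Cosine similarity of each database row against the target vector.
    sims = (O @ t) / (np.linalg.norm(O, axis=1) * np.linalg.norm(t))
    best = np.argsort(-sims)[:k]            # most similar first
    return [db_params[i] for i in best]

# Hypothetical database: parameters -> (NP size in nm, concentration in ppm).
params = [{"fluence": 1.83, "time": 5}, {"fluence": 1.91, "time": 25}]
outcomes = [[40.0, 10.0], [80.0, 30.0]]
rec = recommend(params, outcomes, target=[78.0, 29.0], k=1)
```

Here the target (78 nm, 29 ppm) is closest in direction to the second stored outcome, so its parameter set is recommended.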

16 pages, 386 KiB  
Article
State Space Correspondence and Cross-Entropy Methods in the Assessment of Bidirectional Cardiorespiratory Coupling in Heart Failure
by Beatrice Cairo, Riccardo Pernice, Nikola N. Radovanović, Luca Faes, Alberto Porta and Mirjana M. Platiša
Entropy 2025, 27(7), 770; https://doi.org/10.3390/e27070770 - 20 Jul 2025
Abstract
The complex interplay between the cardiac and respiratory systems, termed cardiorespiratory coupling (CRC), is a bidirectional phenomenon that can be affected by pathologies such as heart failure (HF). In the present work, potential changes in the strength of directional CRC were assessed in HF patients classified according to their cardiac rhythm via two measures of coupling based on k-nearest neighbor (KNN) estimation approaches, cross-entropy (CrossEn) and state space correspondence (SSC), applied to the heart period (HP) and respiratory (RESP) variability series, while also accounting for the complexity of the cardiac and respiratory rhythms. We tested the measures on 25 HF patients with sinus rhythm (SR, age: 58.9 ± 9.7 years; 23 males) and 41 HF patients with ventricular arrhythmia (VA, age: 62.2 ± 11.0 years; 30 males). A predominant directionality of interaction from the cardiac to the respiratory rhythm was observed in both cohorts and using both methodologies, with similar statistical power, while a lower complexity for the RESP series compared to the HP series was observed in the SR cohort. We conclude that CrossEn and SSC can be considered strictly related to each other when using a KNN technique for the estimation of the cross-predictability markers.
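The core idea behind KNN cross-predictability markers — predicting one series from the neighborhood structure of another — can be shown with a deliberately simplified leave-one-out sketch. This is a generic stand-in, not the CrossEn or SSC estimators used in the paper, and the series below are toy data.

```python
import numpy as np

def knn_cross_predict(x, y, k=1):
    """Leave-one-out KNN cross-prediction: estimate each y[i] from the y
    values at the k nearest neighbors of x[i] in the x series. Strong
    coupling between the series yields low cross-prediction error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    preds = []
    for i in range(len(x)):
        d = np.abs(x - x[i])
        d[i] = np.inf                       # leave the target sample out
        preds.append(y[np.argsort(d)[:k]].mean())
    return np.array(preds)

# y is a deterministic function of x, so the cross-prediction tracks y
# up to the spacing between neighboring samples.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x
pred = knn_cross_predict(x, y, k=1)
```

Comparing such errors in both directions (HP→RESP vs. RESP→HP) is what gives the directional coupling picture described above.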
(This article belongs to the Special Issue Entropy Methods for Cardiorespiratory Coupling Analysis)

23 pages, 2695 KiB  
Article
Estimation of Subtropical Forest Aboveground Biomass Using Active and Passive Sentinel Data with Canopy Height
by Yi Wu, Yu Chen, Chunhong Tian, Ting Yun and Mingyang Li
Remote Sens. 2025, 17(14), 2509; https://doi.org/10.3390/rs17142509 - 18 Jul 2025
Abstract
Forest biomass is closely related to carbon sequestration capacity and can reflect the level of forest management. This study utilizes four machine learning algorithms, namely Multivariate Stepwise Regression (MSR), K-Nearest Neighbors (k-NN), Artificial Neural Network (ANN), and Random Forest (RF), to estimate forest aboveground biomass (AGB) in Chenzhou City, Hunan Province, China. In addition, a canopy height model, constructed from a digital surface model (DSM) derived from Sentinel-1 Interferometric Synthetic Aperture Radar (InSAR) and an ICESat-2-corrected SRTM DEM, is incorporated to quantify its impact on the accuracy of AGB estimation. The results indicate the following: (1) The incorporation of multi-source remote sensing data significantly improves the accuracy of AGB estimation, among which the RF model performs the best (R2 = 0.69, RMSE = 24.26 t·ha−1) compared with the single-source models. (2) The canopy height model (CHM) obtained from InSAR-LiDAR effectively alleviates the signal saturation effect of optical and SAR data in high-biomass areas (>200 t·ha−1). When forest canopy height (FCH) is added to the RF model combined with multi-source remote sensing data, the R2 of the AGB estimation model improves to 0.74. (3) In 2018, AGB in Chenzhou City showed clear spatial heterogeneity, with a mean of 51.87 t·ha−1. Biomass increases from the western hilly part (32.15–68.43 t·ha−1) to the eastern mountainous area (89.72–256.41 t·ha−1), peaking in Dongjiang Lake National Forest Park (256.41 t·ha−1). This study proposes a comprehensive feature integration framework that combines red-edge spectral indices for capturing vegetation physiological status, SAR-derived texture metrics for assessing canopy structural heterogeneity, and canopy height metrics for characterizing forest three-dimensional structure. This integrated approach enables robust and accurate monitoring of carbon storage in subtropical forests.
(This article belongs to the Collection Feature Paper Special Issue on Forest Remote Sensing)
15 pages, 3326 KiB  
Article
Radiomics and Machine Learning Approaches for the Preoperative Classification of In Situ vs. Invasive Breast Cancer Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE–MRI)
by Luana Conte, Rocco Rizzo, Alessandra Sallustio, Eleonora Maggiulli, Mariangela Capodieci, Francesco Tramacere, Alessandra Castelluccia, Giuseppe Raso, Ugo De Giorgi, Raffaella Massafra, Maurizio Portaluri, Donato Cascio and Giorgio De Nunzio
Appl. Sci. 2025, 15(14), 7999; https://doi.org/10.3390/app15147999 - 18 Jul 2025
Abstract
Accurate preoperative distinction between in situ and invasive Breast Cancer (BC) is critical for clinical decision-making and treatment planning. Radiomics and Machine Learning (ML) have shown promise in enhancing diagnostic performance from breast MRI, yet their application to this specific task remains underexplored. The aim of this study was to evaluate the performance of several ML classifiers, trained on radiomic features extracted from DCE–MRI and supported by basic clinical information, for the classification of in situ versus invasive BC lesions. We retrospectively analysed 71 post-contrast DCE–MRI scans (24 in situ, 47 invasive cases). Radiomic features were extracted from manually segmented tumour regions using the PyRadiomics library, and a limited set of basic clinical variables was also included. Several ML classifiers were evaluated in a Leave-One-Out Cross-Validation (LOOCV) scheme. Feature selection was performed using two strategies: Minimum Redundancy Maximum Relevance (MRMR) and mutual information. Axial 3D rotation was used for data augmentation. Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) were the best-performing models, with Areas Under the Curve (AUC) ranging from 0.77 to 0.81. Notably, KNN achieved the best balance between sensitivity and specificity without the need for data augmentation. Our findings confirm that radiomic features extracted from DCE–MRI, combined with well-validated ML models, can effectively support the differentiation of in situ and invasive breast cancer. The approach remains robust even on small datasets and may aid preoperative planning. Further validation on larger cohorts and integration with additional imaging or clinical data are recommended. Full article
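The LOOCV pipeline described above can be sketched in scikit-learn. This is a minimal illustration on synthetic data: the sample size and class balance mimic the abstract (71 cases, roughly one-third in situ), mutual-information feature selection stands in for the selection step (MRMR itself has no scikit-learn implementation), and all numbers are illustrative.

```python
# Illustrative sketch: KNN evaluated with Leave-One-Out CV after
# mutual-information feature selection, performed inside each fold via a
# Pipeline to avoid selection leakage. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 71 "patients", 50 radiomic-like features, ~34%/66% class split.
X, y = make_classification(n_samples=71, n_features=50, n_informative=8,
                           weights=[0.34, 0.66], random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),  # refit on each LOOCV training set
    KNeighborsClassifier(n_neighbors=5),
)
acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2f}")
```

Wrapping scaling and feature selection in the pipeline matters: selecting features on the full dataset before LOOCV would leak the held-out case into training and inflate performance, a common pitfall in small radiomics cohorts.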

36 pages, 4468 KiB  
Article
Apis mellifera Bee Verification with IoT and Graph Neural Network
by Apolinar Velarde Martínez, Gilberto González Rodríguez and Juan Carlos Estrada Cabral
Appl. Sci. 2025, 15(14), 7969; https://doi.org/10.3390/app15147969 - 17 Jul 2025
Abstract
Automatic recognition systems (ARS) have been proposed in scientific and technological research for the care and preservation of endangered species; these systems, which combine Internet of Things (IoT) devices with artificial intelligence (AI) object-recognition techniques, have emerged as solutions for detecting and preventing parasite attacks on Apis mellifera bees. This article presents a pilot ARS for the recognition and analysis of honeybees at the hive entrance, using IoT devices and automatic object-recognition techniques for the early detection of the Varroa mite in test apiaries. Two object-recognition techniques, the k-Nearest Neighbor (kNN) algorithm and a Graph Neural Network (GNN), were evaluated on a dataset of 600 images from a single beehive. The experiments show the viability of using a GNN in real environments: the GNN achieves higher bee-recognition accuracy at the cost of longer processing time, while the kNN classifier requires fewer processing resources but is less accurate. Full article
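The kNN side of the comparison above can be sketched as image classification on flattened pixel vectors. This is an assumed setup, not the authors' pipeline: scikit-learn's bundled digits dataset stands in for the 600-image hive dataset, since the original images are not available.

```python
# Minimal kNN image-classification sketch: small images flattened to pixel
# vectors and classified by nearest neighbors, as a stand-in for the
# bee-recognition task. The digits dataset substitutes for the hive images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images, 64 features each
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"kNN accuracy: {knn.score(X_te, y_te):.2f}")
```

The trade-off reported in the abstract follows from the method itself: kNN needs no training beyond storing examples (cheap on IoT hardware), but raw-pixel distances generalize less well than the learned representations of a neural model such as a GNN.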
(This article belongs to the Special Issue Applications of Artificial Intelligence in the IoT)
