Search Results (363)

Search Parameters:
Keywords = k-nearest neighbour

24 pages, 3553 KiB  
Article
A Hybrid Artificial Intelligence Framework for Melanoma Diagnosis Using Histopathological Images
by Alberto Nogales, María C. Garrido, Alfredo Guitian, Jose-Luis Rodriguez-Peralto, Carlos Prados Villanueva, Delia Díaz-Prieto and Álvaro J. García-Tejedor
Technologies 2025, 13(8), 330; https://doi.org/10.3390/technologies13080330 - 1 Aug 2025
Viewed by 166
Abstract
Cancer remains one of the most significant global health challenges due to its high mortality rates and the limited understanding of its progression. Early diagnosis is critical to improving patient outcomes, especially in skin cancer, where timely detection can significantly enhance recovery rates. Histopathological analysis is a widely used diagnostic method, but it is a time-consuming process that heavily depends on the expertise of highly trained specialists. Recent advances in Artificial Intelligence have shown promising results in image classification, highlighting its potential as a supportive tool for medical diagnosis. In this study, we explore the application of hybrid Artificial Intelligence models for melanoma diagnosis using histopathological images. The dataset consisted of 506 histopathological images, from which 313 curated images were selected after quality control and preprocessing. We propose a two-step framework that employs an Autoencoder for dimensionality reduction and feature extraction, followed by a classification algorithm, trained on the feature vectors extracted from the bottleneck of the Autoencoder, to distinguish between melanoma and nevus. We evaluated Support Vector Machines, Random Forest, Multilayer Perceptron, and K-Nearest Neighbours as classifiers. Among these, the combination of the Autoencoder with K-Nearest Neighbours achieved the best performance and inference time, reaching an average accuracy of approximately 97.95% on the test set and requiring 3.44 min per diagnosis. The baseline comparison results were consistent, demonstrating strong generalisation and outperforming the other models by 2 to 13 percentage points.
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
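
The two-step pipeline described above lends itself to a compact illustration. Below is a minimal sketch, assuming scikit-learn and synthetic stand-ins for the 313 curated images: an MLP is trained to reconstruct its own input as a simple autoencoder, the bottleneck activations are extracted by a manual forward pass, and a K-Nearest Neighbours classifier is fitted on them. Layer sizes, image dimensionality, and labels are illustrative, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((313, 784))      # flattened-image stand-ins (dimensionality reduced for the sketch)
y = rng.integers(0, 2, 313)     # placeholder labels: 0 = nevus, 1 = melanoma

# Autoencoder: train an MLP to reconstruct its own input (X -> X); the
# 32-unit middle layer is the bottleneck. max_iter kept small for the sketch.
ae = MLPRegressor(hidden_layer_sizes=(256, 32, 256), activation="relu",
                  max_iter=100, random_state=0).fit(X, X)

def encode(X):
    # Manual forward pass up to the 32-unit bottleneck layer (ReLU throughout).
    h = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
    return np.maximum(0, h @ ae.coefs_[1] + ae.intercepts_[1])

Z_train, Z_test, y_train, y_test = train_test_split(encode(X), y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
print("test accuracy:", knn.score(Z_test, y_test))
```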

18 pages, 4863 KiB  
Article
Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
by José Ramón Trillo, Felipe González-López, Juan Antonio Morente-Molinera, Roberto Magán-Carrión and Pablo García-Sánchez
Electronics 2025, 14(15), 3073; https://doi.org/10.3390/electronics14153073 - 31 Jul 2025
Viewed by 186
Abstract
As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services emerges as a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses, not direct attacks, to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes and support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, but explainable models like HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
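
As a rough illustration of the performance-versus-transparency comparison, the sketch below (synthetic data, scikit-learn; not the study's dataset or its HFER algorithm) scores an explainable k-NN and decision tree against an opaque neural network using the macro F1 metric reported in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

# Synthetic stand-in for the engineered traffic features (three classes).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "k-NN (explainable)": KNeighborsClassifier(n_neighbors=7),
    "decision tree (explainable)": DecisionTreeClassifier(max_depth=6, random_state=0),
    "neural network (opaque)": MLPClassifier(hidden_layer_sizes=(64,),
                                             max_iter=500, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: macro F1 = {f1_score(y_te, pred, average='macro'):.4f}")
```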

23 pages, 1885 KiB  
Article
Applying Machine Learning to DEEC Protocol: Improved Cluster Formation in Wireless Sensor Networks
by Abdulla Juwaied and Lidia Jackowska-Strumillo
Network 2025, 5(3), 26; https://doi.org/10.3390/network5030026 - 24 Jul 2025
Viewed by 185
Abstract
Wireless Sensor Networks (WSNs) are specialised ad hoc networks composed of small, low-power, and often battery-operated sensor nodes with various sensors and wireless communication capabilities. These nodes collaborate to monitor and collect data from the physical environment, transmitting it to a central location or sink node for further processing and analysis. This study proposes two machine learning-based enhancements to the DEEC protocol for WSNs by integrating the K-Nearest Neighbours (K-NN) and K-Means (K-M) machine learning (ML) algorithms. The resulting Distributed Energy-Efficient Clustering approaches with K-NN (DEEC-KNN) and K-Means (DEEC-KM) dynamically optimise cluster head selection to improve energy efficiency and network lifetime. These methods are validated through extensive simulations, demonstrating up to 110% improvement in packet delivery and significant gains in network stability compared with the original DEEC protocol. The adaptive clustering enabled by K-NN and K-Means is particularly effective for large-scale and dynamic WSN deployments where node failures and topology changes are frequent. These findings suggest that integrating ML with clustering protocols is a promising direction for future WSN design.
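
The cluster-formation idea can be sketched in a few lines. This is an illustrative stand-in for DEEC-KM, not the authors' implementation: K-Means groups nodes by position, and each cluster promotes a head using a simple score that trades residual energy against distance to the centroid (the 0.01 weight is an arbitrary assumption).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
pos = rng.random((100, 2)) * 100   # node coordinates in a 100 m x 100 m field
energy = rng.random(100)           # residual energy per node (normalised)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(pos)
heads = []
for c in range(k):
    members = np.flatnonzero(labels == c)
    centroid = pos[members].mean(axis=0)
    dist = np.linalg.norm(pos[members] - centroid, axis=1)
    # Composite score: favour high residual energy and proximity to the centroid.
    score = energy[members] - 0.01 * dist
    heads.append(members[np.argmax(score)])
print("cluster heads:", heads)
```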

20 pages, 12036 KiB  
Article
Spatiotemporal Mapping of Grazing Livestock Behaviours Using Machine Learning Algorithms
by Guo Ye and Rui Yu
Sensors 2025, 25(15), 4561; https://doi.org/10.3390/s25154561 - 23 Jul 2025
Viewed by 295
Abstract
Grassland ecosystems are fundamentally shaped by the complex behaviours of livestock. While most previous studies have monitored grassland health using vegetation indices, such as NDVI and LAI, fewer have investigated livestock behaviours as direct drivers of grassland degradation. In particular, the spatial clustering and temporal concentration patterns of livestock behaviours are critical yet underexplored factors that significantly influence grassland ecosystems. This study investigated the spatiotemporal patterns of livestock behaviours under different grazing management systems and grazing-intensity gradients (GIGs) in Wenchang, China, using high-resolution GPS tracking data and machine learning classification. The K-Nearest Neighbours (KNN) model combined with SMOTE-ENN resampling achieved the highest accuracy, with F1-scores of 0.960 and 0.956 for the continuous and rotational grazing datasets, respectively. The results showed that the continuous grazing system failed to mitigate grazing pressure when grazing intensity was reduced, as the spatial clustering of livestock behaviours did not decrease accordingly, and the frequency of temporal peaks in grazing behaviour even showed an increasing trend. Conversely, the rotational grazing system responded more effectively, as reduced GIGs led to more evenly distributed temporal activity patterns and lower spatial clustering. These findings highlight the importance of incorporating livestock behavioural patterns into grassland monitoring and offer data-driven insights for sustainable grazing management.
(This article belongs to the Section Smart Agriculture)
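
A minimal sketch of the winning pairing, assuming the imbalanced-learn package and synthetic imbalanced labels in place of the GPS-derived behaviour classes: SMOTE-ENN resampling feeds a k-NN classifier inside an imblearn pipeline, so resampling touches only the training data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier
from imblearn.combine import SMOTEENN
from imblearn.pipeline import Pipeline

# Synthetic imbalanced stand-in for the behaviour classes (e.g. grazing,
# resting, moving); the 70/20/10 split mimics class imbalance, not real data.
X, y = make_classification(n_samples=3000, n_features=12, n_classes=3,
                           n_informative=6, weights=[0.7, 0.2, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = Pipeline([("resample", SMOTEENN(random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5))])
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```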

18 pages, 2960 KiB  
Article
Early Leak and Burst Detection in Water Pipeline Networks Using Machine Learning Approaches
by Kiran Joseph, Jyoti Shetty, Rahul Patnaik, Noel S. Matthew, Rudi Van Staden, Wasantha P. Liyanage, Grant Powell, Nathan Bennett and Ashok K. Sharma
Water 2025, 17(14), 2164; https://doi.org/10.3390/w17142164 - 21 Jul 2025
Viewed by 473
Abstract
Leakages in water distribution networks pose a formidable challenge, often leading to substantial water wastage and escalating operational costs. Traditional methods for leak detection often fall short, particularly when dealing with complex or subtle data patterns. To address this, a comprehensive comparison of fourteen machine learning algorithms was conducted, evaluated with multi-class classification metrics, including micro and macro averages of accuracy, precision, recall, and F1-score. The data, collected from an experimental site under leak, major leak, and no-leak scenarios, was used to perform multi-class classification. The results highlight the superiority of models such as Random Forest, K-Nearest Neighbours, and Decision Tree in detecting leaks with high accuracy and robustness. Multiple models effectively captured the nuances in the data and accurately predicted the presence of a leak, burst, or no leak, thus automating leak detection and contributing to water conservation efforts. This research demonstrates the practical benefits of applying machine learning models in water distribution systems, offering scalable solutions for real-time leak detection. Furthermore, it emphasises the role of machine learning in modernising infrastructure management, reducing water losses, and promoting the sustainability of water resources, while laying the groundwork for future advancements in predictive maintenance and resilience of water infrastructure.
(This article belongs to the Special Issue Urban Water Resources: Sustainable Management and Policy Needs)
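
The three-class evaluation can be sketched as follows, with synthetic features standing in for the experimental leak / major leak / no-leak data; classification_report prints the per-class precision, recall, and F1 together with the macro averages the study compares. The class-name mapping is hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1500, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Three of the study's stronger models, compared on identical splits.
for name, model in [("Random Forest", RandomForestClassifier(random_state=0)),
                    ("k-NN", KNeighborsClassifier()),
                    ("Decision Tree", DecisionTreeClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te),
                                target_names=["no leak", "leak", "major leak"]))
```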

16 pages, 2287 KiB  
Article
A Data-Driven Machine Learning Framework Proposal for Selecting Project Management Research Methodologies
by Otniel Didraga, Andrei Albu, Viorel Negrut, Diogen Babuc and Ovidiu Dobrican
Appl. Sci. 2025, 15(13), 7263; https://doi.org/10.3390/app15137263 - 27 Jun 2025
Viewed by 475
Abstract
Selecting appropriate research methodologies in project management traditionally relies on individual expertise and intuition, leading to variability in study design and reproducibility challenges. To address this gap, we introduce a machine learning-driven recommendation system that objectively matches project management use cases to suitable research methods. Leveraging a curated dataset of 156 instances extracted from over 100 peer-reviewed articles, each example is annotated by one of five application domains (cost estimation, performance analysis, risk assessment, prediction, comparison) and one of seven methodology classes (e.g., regression analysis, time-series analysis, case study). We transformed textual descriptions into TF-IDF features and one-hot-encoded contextual domains, then trained and rigorously tuned three classifiers (random forest, support vector machine, and K-nearest neighbours) using stratified five-fold cross-validation. The random forest model achieved superior performance (93.8% ± 1.9% accuracy, macro-F1 = 0.93, ROC-AUC = 0.94), demonstrating robust generalisability across diverse scenarios, while SVM offered the highest precision on dominant classes. Our framework establishes a transparent, reproducible workflow, from literature extraction and annotation to model evaluation, and promises to standardise methodology selection, enhancing consistency and rigour in project management research design.
(This article belongs to the Special Issue Machine Learning and Soft Computing: Current Trends and Applications)
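
A minimal sketch of the feature pipeline, with hypothetical column names and toy rows in place of the 156 annotated instances: TF-IDF on the textual description, one-hot encoding of the application domain, and a k-NN classifier tuned via stratified five-fold cross-validation (the paper tunes random forest and SVM the same way).

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

df = pd.DataFrame({  # toy stand-ins for the annotated instances
    "description": ["estimate project cost from historic data"] * 10
                 + ["compare agile and waterfall outcomes"] * 10,
    "domain": ["cost estimation"] * 10 + ["comparison"] * 10,
    "method": ["regression analysis"] * 10 + ["case study"] * 10,
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "description"),              # text column
    ("domain", OneHotEncoder(handle_unknown="ignore"), ["domain"]),
])
pipe = Pipeline([("features", features), ("knn", KNeighborsClassifier())])
search = GridSearchCV(pipe, {"knn__n_neighbors": [3, 5, 7]},
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                      scoring="f1_macro")
search.fit(df, df["method"])
print("best params:", search.best_params_,
      "macro F1:", round(search.best_score_, 3))
```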

17 pages, 2468 KiB  
Article
A Solution Surface in Nine-Dimensional Space to Optimise Ground Vibration Effects Through Artificial Intelligence in Open-Pit Mine Blasting
by Onalethata Saubi, Rodrigo S. Jamisola, Kesalopa Gaopale, Raymond S. Suglo and Oduetse Matsebe
Mining 2025, 5(3), 40; https://doi.org/10.3390/mining5030040 - 26 Jun 2025
Viewed by 307
Abstract
In this study, we model a solution surface, with each point having nine components, using artificial intelligence (AI) to optimise the effects of ground vibration during blasting operations in an open-pit diamond mine. This model has eight input parameters that can be adjusted by blasting engineers to arrive at a desired output value of ground vibration. It is built using the artificial neural network architecture that best fits the blasting data from 100 blasting events provided by the Debswana diamond mine. Other AI algorithms used to compare the model's performance were the k-nearest neighbour, support vector machine, and random forest, together with more traditional statistical approaches, i.e., multivariate and regression analyses. The input parameters were burden, spacing, stemming length, hole depth, hole diameter, distance from the blast face to the monitoring point, maximum charge per delay, and powder factor. The optimised model allows for variations in the input values, given the constraints, such that the output ground vibration will be within the minimum acceptable value. Through unconstrained optimisation, the minimum value of ground vibration is around 0.1 mm/s, which is within the vibration range caused by a passing vehicle.
(This article belongs to the Special Issue Mine Automation and New Technologies)
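
The surrogate-plus-search idea reads naturally as code. The sketch below uses a synthetic response in place of the Debswana blasting records: a small neural network learns the mapping from the eight design inputs to ground vibration, then randomly sampled candidate designs are screened against a vibration limit. The toy response function and the 0.2 threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 8))   # eight normalised blast-design inputs per event
# Toy ground-vibration response dominated by "maximum charge per delay" (col 6).
ppv = 2.0 * X[:, 6] + 0.5 * X[:, 0] + rng.normal(0, 0.05, 100)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X, ppv)

# Screen random candidate designs and keep those predicted under the limit.
candidates = rng.random((10000, 8))
pred = model.predict(candidates)
feasible = candidates[pred < 0.2]
print(f"{len(feasible)} of 10000 candidate designs fall below the vibration limit")
```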

17 pages, 469 KiB  
Article
Similarity-Based Decision Support for Improving Agricultural Practices and Plant Growth
by Iulia Baraian, Honoriu Valean, Oliviu Matei and Rudolf Erdei
Appl. Sci. 2025, 15(12), 6936; https://doi.org/10.3390/app15126936 - 19 Jun 2025
Cited by 1 | Viewed by 332
Abstract
Similarity-based decision support systems have become essential tools for providing tailored and adaptive guidance across various domains. In agriculture, where managing extensive land areas poses significant challenges, the primary objective is often to maximize harvest yields while reducing costs, preserving crop health, and minimizing the use of chemical adjuvants. The application of similarity-based analysis enables the development of personalized farming recommendations, refined through shared data and insights, which contribute to improved plant growth and enhanced annual harvest outcomes. This study employs two algorithms, K-Nearest Neighbour (KNN) and Approximate Nearest Neighbour (ANN) with Locality-Sensitive Hashing (LSH), and evaluates their effectiveness in agricultural decision-making. The results demonstrate that, under comparable farming conditions, KNN yields more accurate recommendations due to its reliance on exact matches, whereas ANN provides a more scalable solution well suited to large datasets. Both approaches support improved agricultural decisions and promote more sustainable farming strategies. While KNN is more effective for smaller datasets, ANN proves advantageous in real-time applications that demand fast response times. The implementation of these algorithms represents a significant advancement toward data-driven and efficient agricultural practices.
(This article belongs to the Special Issue Biosystems Engineering: Latest Advances and Prospects)
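
The exact-versus-approximate trade-off can be illustrated directly. The sketch below is not the paper's implementation: exact neighbours come from a brute-force index, while a tiny random-hyperplane LSH stands in for a production LSH library, restricting each query to a single hash bucket.

```python
import numpy as np
from collections import defaultdict
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
farms = rng.random((5000, 16))   # feature vectors per farm record (synthetic)

# Exact k-NN: guaranteed nearest matches; cost grows with dataset size.
exact = NearestNeighbors(n_neighbors=5).fit(farms)

# LSH: hash each vector by the sign pattern of 8 random projections; only the
# colliding bucket is searched, trading a little accuracy for speed at scale.
planes = rng.normal(size=(8, 16))
def bucket(v):
    return tuple((planes @ v > 0).astype(int))

index = defaultdict(list)
for i, v in enumerate(farms):
    index[bucket(v)].append(i)

query = rng.random(16)
print("exact neighbours:", exact.kneighbors([query], return_distance=False)[0])
print("LSH bucket candidates:", index[bucket(query)][:5])
```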

19 pages, 1433 KiB  
Article
Cost-Optimised Machine Learning Model Comparison for Predictive Maintenance
by Yating Yang and Muhammad Zahid Iqbal
Electronics 2025, 14(12), 2497; https://doi.org/10.3390/electronics14122497 - 19 Jun 2025
Viewed by 647
Abstract
Predictive maintenance is essential for reducing industrial downtime and costs, yet real-world datasets frequently exhibit class imbalance and require cost-sensitive evaluation due to costly misclassification errors. This study utilises the SCANIA Component X dataset to advance predictive maintenance through machine learning, employing seven supervised algorithms (Support Vector Machine, Random Forest, Decision Tree, K-Nearest Neighbours, Multi-Layer Perceptron, XGBoost, and LightGBM) trained on time-series features extracted via a sliding window approach. A bespoke cost-sensitive metric, aligned with SCANIA's misclassification cost matrix, assesses model performance. Three imbalance mitigation strategies were explored: downsampling, downsampling with SMOTETomek, and manual class weighting, with downsampling proving most effective. Random Forest and Support Vector Machine models achieved high accuracy and low misclassification costs, whilst a voting ensemble further enhanced cost efficiency. This research emphasises the critical role of cost-aware evaluation and imbalance handling, proposing an ensemble-based framework to improve predictive maintenance in industrial applications.
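
Cost-sensitive scoring of this kind is easy to make concrete. In the sketch below the cost matrix is illustrative (SCANIA's actual matrix differs): the total cost is the elementwise product of the confusion matrix with the per-error cost matrix, so a model with slightly lower accuracy can still win on cost when it avoids the expensive errors.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels for three severity classes (0 = healthy ... 2 = imminent failure).
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 1])
y_pred = np.array([0, 1, 0, 1, 0, 2, 2, 1, 2, 1])

# cost[i, j] = cost of predicting class j when the truth is class i; missed
# failures (lower-left) are priced far above false alarms. Values are assumed.
cost = np.array([[  0,  10,  10],
                 [500,   0,  10],
                 [900, 200,   0]])

cm = confusion_matrix(y_true, y_pred)
print("total misclassification cost:", int((cm * cost).sum()))
```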

22 pages, 14296 KiB  
Article
An Investigation of GNSS Radio Occultation Data Pattern for Temperature Monitoring and Analysis over Africa
by Usman Sa’i Ibrahim, Kamorudeen Aleem, Tajul Ariffin Musa, Terwase Tosin Youngu, Yusuf Yakubu Obadaki, Wan Anom Wan Aris and Kelvin Tang Kang Wee
NDT 2025, 3(2), 15; https://doi.org/10.3390/ndt3020015 - 18 Jun 2025
Viewed by 1476
Abstract
Climate change monitoring and analysis is a critical task that involves the consideration of both spatial and temporal dimensions. The improved spatial distribution of the global navigation satellite system (GNSS) ground-based Continuous Operating Reference (COR) stations can lead to enhanced results when coupled with a continuous flow of data over time. In Africa, a significant number of COR stations do not operate continuously and lack collocation with meteorological sensors essential for climate studies. Consequently, Africa faces challenges related to inadequate spatial distribution and temporal data flow from GNSS ground-based stations, impacting climate change monitoring and analysis. This research delves into the pattern of GNSS radio occultation (RO) data across Africa, addressing the limitations of the GNSS ground-based data for climate change research. The spatial analysis employed Ripley's F-, G-, K-, and L-functions, along with calculations of nearest neighbour and kernel density. The analysis yielded a Moran's p-value of 0.001 and a Moran's I value approaching 1.0. For temporal analysis, the study investigated the data availability period of selected GNSS RO missions. Additionally, it examined seasonal temperature variations from May 2001 to May 2023, showcasing alignment with findings from other researchers worldwide. Hence, this study suggests the utilisation of GNSS RO missions/campaigns like METOP and COSMIC owing to their superior spatial and temporal resolution.
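
One of the point-pattern diagnostics is easy to sketch: a Clark-Evans style nearest-neighbour test compares the observed mean nearest-neighbour distance of occultation locations against the value expected under complete spatial randomness. The coordinates below are uniform stand-ins, not real RO profiles, and the planar CSR formula is a simplification for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((500, 2)) * [70.0, 75.0]   # lon/lat-like stand-ins over Africa

tree = cKDTree(points)
# k=2 because each point's nearest neighbour at k=1 is itself.
d, _ = tree.query(points, k=2)
observed = d[:, 1].mean()

area = 70.0 * 75.0
expected = 0.5 / np.sqrt(len(points) / area)   # CSR expectation: 1 / (2 sqrt(density))
print(f"Clark-Evans ratio: {observed / expected:.2f}  (<1 clustered, >1 dispersed)")
```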

14 pages, 1363 KiB  
Article
Predicting Ischemic Stroke Patients to Transfer for Endovascular Thrombectomy Using Machine Learning: A Case Study
by Noreen Kamal, Joon-Ho Han, Simone Alim, Behzad Taeb, Abhishek Devpura, Shadi Aljendi, Judah Goldstein, Patrick T. Fok, Michael D. Hill, Joe Naoum-Sawaya and Elena Adela Cora
Healthcare 2025, 13(12), 1435; https://doi.org/10.3390/healthcare13121435 - 16 Jun 2025
Viewed by 438
Abstract
Introduction: Endovascular thrombectomy (EVT) is highly effective for ischemic stroke patients with a large vessel occlusion. EVT is typically only offered at urban hospitals; therefore, patients are transferred for EVT from hospitals that solely offer thrombolysis. There is uncertainty around patient selection for transfer, which results in a large number of futile transfers. Machine learning (ML) may be able to provide a model that better predicts which patients to transfer for EVT. Objective: The objective of the study is to determine if ML can provide decision support to more accurately select patients to transfer for EVT. Methods: This is a retrospective study. Data from Nova Scotia, Canada, from 1 January 2018 to 31 December 2022 was used. Four supervised binary classification ML algorithms were applied, as follows: logistic regression, decision tree, random forest, and support vector machine. We also applied an ensemble method using the results of these four classification algorithms. The data was split into 80% training and 20% testing, and five-fold cross-validation was employed. Missing data was imputed using the k-nearest neighbours algorithm. Model performance was assessed using accuracy, the futile transfer rate, and the false negative rate. Results: A total of 5156 ischemic stroke patients were identified during the time period. After exclusions, a final dataset of 93 patients was obtained. The accuracy of the logistic regression, decision tree, random forest, support vector machine, and ensemble models was 68%, 79%, 74%, 63%, and 68%, respectively. The futile transfer rate with random forest and decision tree was 0% and 18.9%, respectively, and the false negative rate was 5.37% and 4.3%, respectively. Conclusions: ML models can potentially reduce futile transfer rates, but future studies with larger datasets are needed to validate this finding and generalize it to other systems.
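
The pipeline shape described, k-NN imputation feeding an ensemble vote over the four named classifiers, can be sketched with scikit-learn; the data below are synthetic with gaps injected, not the Nova Scotia cohort, and the hyperparameters are defaults rather than the study's tuned values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=93, n_features=10, random_state=0)
mask = np.random.default_rng(0).random(X.shape) < 0.1
X[mask] = np.nan                               # inject ~10% missing values

pipe = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),     # k-NN fills the gaps
    ("vote", VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ], voting="soft")),
])
print("5-fold accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(2))
```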

39 pages, 4295 KiB  
Article
Evaluation of Smart Building Integration into a Smart City by Applying Machine Learning Techniques
by Mustafa Muthanna Najm Shahrabani and Rasa Apanaviciene
Buildings 2025, 15(12), 2031; https://doi.org/10.3390/buildings15122031 - 12 Jun 2025
Cited by 1 | Viewed by 620
Abstract
Smart buildings' role is crucial for advancing smart cities' performance in achieving environmental sustainability, resiliency, and efficiency. Integration barriers persist due to misalignments in technology, infrastructure, and operations, and are exacerbated by inadequate assessment frameworks and classification systems. The existing literature on assessment methodologies reveals diverging evaluation frameworks for smart buildings and smart cities, non-uniform metrics and taxonomies that hinder scalability, and the low use of machine learning in predictive integration modelling. To fill these gaps, this paper introduces a novel machine learning model to predict smart building integration into smart city levels and assess their impact on smart city performance by leveraging data from 147 smart buildings in 13 regions. Six optimised machine learning algorithms (K-Nearest Neighbours (KNNs), Support Vector Regression (SVR), Random Forest, Adaptive Boosting (AdaBoost), Decision Tree (DT), and Extra Tree (ET)) were employed to train the model and perform feature engineering and permutation importance analysis. The SVR-trained model substantially outperformed the others, achieving an R-squared of 0.81, a Root Mean Square Error (RMSE) of 0.33, and a Mean Absolute Error (MAE) of 0.27, enabling precise integration prediction. Case studies revealed that low-integration buildings gain significant benefits from progressive target upgrades, whilst buildings that have already implemented some integrated systems tend to experience diminishing marginal benefits with further, potentially disruptive upgrades. This study concludes that, by utilising the developed machine learning model, owners and policymakers can significantly improve the integration of smart buildings to build better, more sustainable, and resilient urban environments.
(This article belongs to the Section Construction Management, and Computers & Digitization)
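
A hedged sketch of the SVR-plus-permutation-importance workflow, using random stand-ins for the 147 buildings and their features; the scaler, kernel settings, and toy target are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((147, 6))                                # six building features
y = 2 * X[:, 0] + X[:, 3] + rng.normal(0, 0.1, 147)     # toy integration level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10, epsilon=0.05)).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 2),
      "MAE:", round(mean_absolute_error(y_te, pred), 2))
# Permutation importance: drop in score when each feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
print("feature importances:", imp.importances_mean.round(3))
```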

17 pages, 775 KiB  
Article
A Multi-Objective Bio-Inspired Optimization for Voice Disorders Detection: A Comparative Study
by Maria Habib, Victor Vicente-Palacios and Pablo García-Sánchez
Algorithms 2025, 18(6), 338; https://doi.org/10.3390/a18060338 - 4 Jun 2025
Viewed by 452
Abstract
As early detection of voice disorders can significantly improve patients' outcomes, automated detection using Artificial Intelligence techniques can be crucial in a wide range of applications. This paper introduces a multi-objective, bio-inspired, AI-based optimization approach for the automated detection of voice disorders. Different multi-objective evolutionary algorithms (the Non-dominated Sorting Genetic Algorithm (NSGA-II), Strength Pareto Evolutionary Algorithm (SPEA-II), and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D)) have been compared to detect voice disorders by optimizing two conflicting objectives: error rate and the number of features. The optimization problem has been formulated as a wrapper-based algorithm for feature selection and multi-objective optimization relying on four machine learning algorithms: the K-Nearest Neighbour algorithm (KNN), Random Forest (RF), Multilayer Perceptron (MLP), and Support Vector Machine (SVM). Three publicly available voice disorder datasets have been utilized, and results have been compared based on Inverted Generational Distance, Hypervolume, spacing, and spread. The results reveal that NSGA-II with the MLP algorithm attained the best convergence and performance. Further, conformal prediction is leveraged to quantify uncertainty in the feature-selected models, ensuring statistically valid confidence intervals for predictions.
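
The wrapper objective at the heart of the comparison is straightforward to sketch. The code below is not NSGA-II (or SPEA-II/MOEA/D); it simply scores random feature masks on the paper's two conflicting objectives, cross-validated error rate and feature count, and keeps the non-dominated set that an evolutionary algorithm would then refine.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)

def evaluate(mask):
    # Objective 1: cross-validated error rate; objective 2: feature count.
    err = 1 - cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return (err, int(mask.sum()))

candidates = [rng.random(20) < 0.4 for _ in range(60)]
scores = [evaluate(m) for m in candidates if m.any()]

# Keep solutions not dominated on both objectives simultaneously.
pareto = [s for s in scores
          if not any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in scores)]
print("non-dominated (error rate, n_features):", sorted(pareto, key=lambda s: s[1]))
```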

19 pages, 2079 KiB  
Article
Evaluation of Feature Selection and Regression Models to Predict Biomass of Sweet Basil by Using Drone and Satellite Imagery
by Luana Centorame, Nicolò La Porta, Michela Papandrea, Adriano Mancini and Ester Foppa Pedretti
Appl. Sci. 2025, 15(11), 6227; https://doi.org/10.3390/app15116227 - 31 May 2025
Viewed by 933
Abstract
The integration of precision agriculture technologies, such as remote sensing through drones and satellites, has significantly enhanced real-time crop monitoring. This study is among the first to combine multispectral data from both a drone equipped with an Altum-PT camera and PlanetScope satellite imagery to predict fresh biomass in sweet basil grown in an open field, demonstrating the added value of integrating different spatial scales. A dataset of 40 sampling points was built by combining remote sensing data with field measurements, and seven vegetation indices were calculated for each point. Feature selection was performed using three different methods (F-score, Recursive Feature Elimination, and model-based selection), and the most informative features were then processed through Principal Component Analysis. Eight regression models were trained and evaluated using leave-one-out cross-validation. The best-performing models were Random Forest (R2 = 0.96 in training, R2 = 0.65 in testing) and k-Nearest Neighbours (R2 = 0.74 in training, R2 = 0.94 in testing), with kNN demonstrating superior generalization capability on unseen data. These findings highlight the potential of combining drone and satellite imagery for modelling basil agronomic traits, offering valuable insights for optimizing crop management strategies.
(This article belongs to the Special Issue Applications of Image Processing Technology in Agriculture)
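
The evaluation chain can be sketched end to end with scikit-learn, using synthetic vegetation-index features in place of the 40 field points: feature selection (RFE standing in for the paper's three methods), PCA, and a k-NN regressor scored under the same leave-one-out protocol.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((40, 7))                                  # seven vegetation indices
y = 3 * X[:, 0] + 2 * X[:, 2] + rng.normal(0, 0.1, 40)   # fresh-biomass stand-in

pipe = Pipeline([
    ("select", RFE(LinearRegression(), n_features_to_select=4)),
    ("pca", PCA(n_components=3)),
    ("knn", KNeighborsRegressor(n_neighbors=3)),
])
pred = cross_val_predict(pipe, X, y, cv=LeaveOneOut())
print("LOOCV R2:", round(r2_score(y, pred), 2))
```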

9 pages, 1736 KiB  
Proceeding Paper
Efficiency Enhancement and Estimation of Photovoltaic Energy Generation Using Dual-Axis Tracking Systems
by Aditya Aggarwal, Himanshu Himanshu, Manav Sidana, Girish Gupta, Ishtdeep Singh Sodhi and Anamika Sharma
Eng. Proc. 2025, 95(1), 4; https://doi.org/10.3390/engproc2025095004 - 29 May 2025
Viewed by 399
Abstract
The global need to transition towards sustainable energy sources has increased the exploration of efficient methods to harness solar energy. Traditional solar panels, being stationary, often fail to capture the rays of the sun optimally across the day. This paper presents a SunPath navigator system that dynamically adjusts the solar panel's angle, ensuring maximum exposure to the sun; the developed system achieves a 27.67% average energy gain. This work also applies various machine learning models, namely decision trees, AdaBoost, and K-nearest neighbour, to predict energy generation. The models are compared using multiple error measures, namely MAE, MSE, and RMSE, along with R2. The decision tree outperforms the other two models with the lowest error rates, paving the way for a future where solar energy is a primary, economical, and user-friendly power source in urban and rural areas. The dual-axis tracking system not only enhances energy generation but also estimates future energy generation.
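
A small sketch of the model comparison, with synthetic features standing in for the tracker's logged data; the same four metrics reported above (MAE, MSE, RMSE, R2) are computed for a decision tree, AdaBoost, and k-NN. Feature and target definitions are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                # e.g. hour, irradiance, temp, tilt
y = 5 * X[:, 1] + X[:, 0] + rng.normal(0, 0.2, 500)     # generated-energy stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("decision tree", DecisionTreeRegressor(random_state=0)),
                    ("AdaBoost", AdaBoostRegressor(random_state=0)),
                    ("k-NN", KNeighborsRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MSE={mse:.3f} RMSE={np.sqrt(mse):.3f} R2={r2_score(y_te, pred):.3f}")
```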
