Search Results (5,457)

Search Parameters:
Keywords = machining task

18 pages, 7391 KiB  
Article
Reliable QoE Prediction in IMVCAs Using an LMM-Based Agent
by Michael Sidorov, Tamir Berger, Jonathan Sterenson, Raz Birman and Ofer Hadar
Sensors 2025, 25(14), 4450; https://doi.org/10.3390/s25144450 (registering DOI) - 17 Jul 2025
Abstract
Face-to-face interaction is one of the most natural forms of human communication. Unsurprisingly, Video Conferencing (VC) Applications have experienced a significant rise in demand over the past decade. With the widespread availability of cellular devices equipped with high-resolution cameras, Instant Messaging Video Call Applications (IMVCAs) now constitute a substantial portion of VC communications. Given the multitude of IMVCA options, maintaining a high Quality of Experience (QoE) is critical. While content providers can measure QoE directly through end-to-end connections, Internet Service Providers (ISPs) must infer QoE indirectly from network traffic—a non-trivial task, especially when most traffic is encrypted. In this paper, we analyze a large dataset collected from WhatsApp IMVCA, comprising over 25,000 s of VC sessions. We apply four Machine Learning (ML) algorithms and a Large Multimodal Model (LMM)-based agent, achieving mean errors of 4.61%, 5.36%, and 13.24% for three popular QoE metrics: BRISQUE, PIQE, and FPS, respectively. Full article
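
As a rough illustration of the traffic-side modeling step, the sketch below fits a random-forest regressor that maps per-second features an ISP could observe for an encrypted stream (packet count, mean packet size, byte volume, jitter; all synthetic placeholders) to a QoE target such as FPS. It is a minimal stand-in under stated assumptions; the paper's WhatsApp dataset and LMM-based agent are not reproduced.

```python
# Hypothetical sketch: estimating a per-second QoE metric (e.g., FPS) for an
# encrypted IMVCA stream from traffic-level features. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 5000
# Illustrative per-second traffic features: packet count, mean packet size,
# total bytes, inter-arrival jitter (all synthetic here).
X = rng.uniform([50, 200, 1e4, 0.1], [400, 1200, 5e5, 20.0], size=(n, 4))
# Synthetic ground-truth FPS loosely tied to throughput, just to make the demo run.
y = np.clip(X[:, 2] / 2e4 + rng.normal(0, 2, n), 5, 30)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(f"mean relative error: {mean_absolute_percentage_error(y_te, model.predict(X_te)):.2%}")
```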

17 pages, 10396 KiB  
Article
Feature Selection Based on Three-Dimensional Correlation Graphs
by Adam Dudáš and Aneta Szoliková
AppliedMath 2025, 5(3), 91; https://doi.org/10.3390/appliedmath5030091 (registering DOI) - 17 Jul 2025
Abstract
The process of feature selection is a critical component of any decision-making system incorporating machine or deep learning models applied to multidimensional data. Feature selection on input data can be performed using a variety of techniques, such as correlation-based methods, wrapper-based methods, or embedded methods. However, many conventionally used approaches do not support backwards interpretability of the selected features, making their application in real-world scenarios impractical and difficult to implement. This work addresses that limitation by proposing a novel correlation-based strategy for feature selection in regression tasks, based on a three-dimensional visualization of correlation analysis results—referred to as three-dimensional correlation graphs. The main objective of this study is the design, implementation, and experimental evaluation of this graphical model through a case study using a multidimensional dataset with 28 attributes. The experiments assess the clarity of the visualizations and their impact on regression model performance, demonstrating that the approach reduces dimensionality while maintaining or improving predictive accuracy, enhances interpretability by uncovering hidden relationships, and achieves better or comparable results to conventional feature selection methods. Full article
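
A minimal correlation-based selection routine in the spirit of the approach described above: rank features by absolute correlation with the regression target and drop candidates that are strongly correlated with an already selected feature. The thresholds and toy data are assumptions; the paper's three-dimensional correlation graph visualization is not reproduced.

```python
# Correlation-based feature selection sketch for a regression task.
import numpy as np
import pandas as pd

def select_features(df: pd.DataFrame, target: str, min_target_corr=0.3, max_mutual_corr=0.8):
    corr = df.corr(numeric_only=True)
    # Rank candidate features by absolute correlation with the target.
    candidates = corr[target].drop(target).abs().sort_values(ascending=False)
    selected = []
    for feat, c in candidates.items():
        if c < min_target_corr:
            break
        # Skip features strongly correlated with an already-selected one.
        if all(abs(corr.loc[feat, s]) < max_mutual_corr for s in selected):
            selected.append(feat)
    return selected

# Tiny synthetic example standing in for a wider multidimensional dataset.
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
df = pd.DataFrame({"x1": x1, "x2": x1 * 0.95 + rng.normal(0, 0.1, 500),
                   "x3": rng.normal(size=500), "y": 2 * x1 + rng.normal(0, 0.5, 500)})
print(select_features(df, target="y"))  # expected: ['x1'] (x2 dropped as redundant)
```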

17 pages, 1296 KiB  
Article
Machine Learning Ensemble Algorithms for Classification of Thyroid Nodules Through Proteomics: Extending the Method of Shapley Values from Binary to Multi-Class Tasks
by Giulia Capitoli, Simone Magnaghi, Andrea D'Amicis, Camilla Vittoria Di Martino, Isabella Piga, Vincenzo L'Imperio, Marco Salvatore Nobile, Stefania Galimberti and Davide Paolo Bernasconi
Stats 2025, 8(3), 64; https://doi.org/10.3390/stats8030064 - 16 Jul 2025
Abstract
The need to improve medical diagnosis is of utmost importance in medical research, consisting of the optimization of accurate classification models able to assist clinical decisions. To minimize the errors that can be caused by using a single classifier, the voting ensemble technique can be used, combining the classification results of different classifiers to improve the final classification performance. This paper aims to compare the existing voting ensemble techniques with a new game-theory-derived approach based on Shapley values. We extended this method, originally developed for binary tasks, to the multi-class setting in order to capture complementary information provided by different classifiers. In heterogeneous clinical scenarios such as thyroid nodule diagnosis, where distinct models may be better suited to identify specific subtypes (e.g., benign, malignant, or inflammatory lesions), ensemble strategies capable of leveraging these strengths are particularly valuable. The motivating application focuses on the classification of thyroid cancer nodules whose cytopathological clinical diagnosis is typically characterized by a high number of false positive cases that may result in unnecessary thyroidectomy. We apply and compare the performance of seven individual classifiers, along with four ensemble voting techniques (including Shapley values), in a real-world study focused on classifying thyroid cancer nodules using proteomic features obtained through mass spectrometry. Our results indicate a slight improvement in the classification accuracy for ensemble systems compared to the performance of single classifiers. Although the Shapley value-based voting method remains comparable to the other voting methods, we envision this new ensemble approach could be effective in improving the performance of single classifiers in further applications, especially when complementary algorithms are considered in the ensemble. The application of these techniques can lead to the development of new tools to assist clinicians in diagnosing thyroid cancer using proteomic features derived from mass spectrometry. Full article
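
The sketch below illustrates the general idea of Shapley-value-weighted voting: each classifier's weight is its Shapley value in a cooperative game whose payoff is the validation accuracy of majority voting within a coalition. The classifiers, data, and payoff definition are illustrative assumptions; the paper's multi-class extension may differ in detail.

```python
# Shapley values over classifier coalitions, usable as ensemble voting weights.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from scipy import stats

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.4, random_state=0)

clfs = {"lr": LogisticRegression(max_iter=1000),
        "dt": DecisionTreeClassifier(random_state=0),
        "knn": KNeighborsClassifier()}
preds = {name: clf.fit(X_tr, y_tr).predict(X_val) for name, clf in clfs.items()}

def coalition_accuracy(members):
    # Payoff of a coalition: accuracy of its majority vote on the validation set.
    if not members:
        return 0.0
    votes = np.vstack([preds[m] for m in members])
    majority = stats.mode(votes, axis=0, keepdims=False).mode
    return accuracy_score(y_val, majority)

names = list(clfs)
n = len(names)
shapley = {}
for name in names:
    others = [m for m in names if m != name]
    value = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += w * (coalition_accuracy(list(subset) + [name]) - coalition_accuracy(list(subset)))
    shapley[name] = value

print("Shapley weights:", shapley)  # use these as per-classifier voting weights
```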

42 pages, 6065 KiB  
Review
Digital Alchemy: The Rise of Machine and Deep Learning in Small-Molecule Drug Discovery
by Abdul Manan, Eunhye Baek, Sidra Ilyas and Donghun Lee
Int. J. Mol. Sci. 2025, 26(14), 6807; https://doi.org/10.3390/ijms26146807 - 16 Jul 2025
Abstract
This review provides a comprehensive analysis of the transformative impact of artificial intelligence (AI) and machine learning (ML) on modern drug design, specifically focusing on how these advanced computational techniques address the inherent limitations of traditional small-molecule drug design methodologies. It begins by outlining the historical challenges of the drug discovery pipeline, including protracted timelines, exorbitant costs, and high clinical failure rates. Subsequently, it examines the core principles of structure-based virtual screening (SBVS) and ligand-based virtual screening (LBVS), establishing the critical bottlenecks that have historically impeded efficient drug development. The central sections elucidate how cutting-edge ML and deep learning (DL) paradigms, such as generative models and reinforcement learning, are revolutionizing chemical space exploration, enhancing binding affinity prediction, improving protein flexibility modeling, and automating critical design tasks. Illustrative real-world case studies demonstrating quantifiable accelerations in discovery timelines and improved success probabilities are presented. Finally, the review critically examines prevailing challenges, including data quality, model interpretability, ethical considerations, and evolving regulatory landscapes, while offering forward-looking critical perspectives on the future trajectory of AI-driven pharmaceutical innovation. Full article
(This article belongs to the Special Issue Advances in Computer-Aided Drug Design Strategies)

23 pages, 3542 KiB  
Article
An Intuitive and Efficient Teleoperation Human–Robot Interface Based on a Wearable Myoelectric Armband
by Long Wang, Zhangyi Chen, Songyuan Han, Yao Luo, Xiaoling Li and Yang Liu
Biomimetics 2025, 10(7), 464; https://doi.org/10.3390/biomimetics10070464 - 15 Jul 2025
Abstract
Although artificial intelligence technologies have significantly enhanced autonomous robots’ capabilities in perception, decision-making, and planning, their autonomy may still fail when faced with complex, dynamic, or unpredictable environments. Therefore, it is critical to enable users to take over robot control in real-time and efficiently through teleoperation. The lightweight, wearable myoelectric armband, due to its portability and environmental robustness, provides a natural human–robot gesture interaction interface. However, current myoelectric teleoperation gesture control faces two major challenges: (1) poor intuitiveness due to visual-motor misalignment; and (2) low efficiency from discrete, single-degree-of-freedom control modes. To address these challenges, this study proposes an integrated myoelectric teleoperation interface. The interface integrates the following: (1) a novel hybrid reference frame aimed at effectively mitigating visual-motor misalignment; and (2) a finite state machine (FSM)-based control logic designed to enhance control efficiency and smoothness. Four experimental tasks were designed using different end-effectors (gripper/dexterous hand) and camera viewpoints (front/side view). Compared to benchmark methods, the proposed interface demonstrates significant advantages in task completion time, movement path efficiency, and subjective workload. This work demonstrates the potential of the proposed interface to significantly advance the practical application of wearable myoelectric sensors in human–robot interaction. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
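
A toy finite state machine of the kind the abstract describes for gesture-driven teleoperation is sketched below. The gesture names, states, and transition table are assumptions for illustration, not the authors' control logic.

```python
# Illustrative FSM for myoelectric gesture-driven teleoperation.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    TRANSLATE = auto()   # armband orientation drives end-effector translation
    ROTATE = auto()      # armband orientation drives end-effector rotation
    GRIP = auto()        # open/close the gripper or dexterous hand

# Transition table: (current state, recognized gesture) -> next state
TRANSITIONS = {
    (State.IDLE, "fist"): State.TRANSLATE,
    (State.TRANSLATE, "fist"): State.ROTATE,
    (State.ROTATE, "fist"): State.TRANSLATE,
    (State.TRANSLATE, "spread"): State.GRIP,
    (State.ROTATE, "spread"): State.GRIP,
    (State.GRIP, "spread"): State.TRANSLATE,
}

def step(state: State, gesture: str) -> State:
    if gesture == "rest":
        return State.IDLE                     # "rest" always returns to IDLE
    return TRANSITIONS.get((state, gesture), state)  # unknown gesture: stay put

state = State.IDLE
for g in ["fist", "fist", "spread", "spread", "rest"]:
    state = step(state, g)
    print(g, "->", state.name)
```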

27 pages, 1817 KiB  
Article
A Large Language Model-Based Approach for Multilingual Hate Speech Detection on Social Media
by Muhammad Usman, Muhammad Ahmad, Grigori Sidorov, Irina Gelbukh and Rolando Quintero Tellez
Computers 2025, 14(7), 279; https://doi.org/10.3390/computers14070279 - 15 Jul 2025
Abstract
The proliferation of hate speech on social media platforms poses significant threats to digital safety, social cohesion, and freedom of expression. Detecting such content—especially across diverse languages—remains a challenging task due to linguistic complexity, cultural context, and resource limitations. To address these challenges, this study introduces a comprehensive approach for multilingual hate speech detection. To facilitate robust hate speech detection across diverse languages, this study makes several key contributions. First, we created a novel trilingual hate speech dataset consisting of 10,193 manually annotated tweets in English, Spanish, and Urdu. Second, we applied two innovative techniques—joint multilingual and translation-based approaches—for cross-lingual hate speech detection that have not been previously explored for these languages. Third, we developed detailed hate speech annotation guidelines tailored specifically to all three languages to ensure consistent and high-quality labeling. Finally, we conducted 41 experiments employing machine learning models with TF–IDF features, deep learning models utilizing FastText and GloVe embeddings, and transformer-based models leveraging advanced contextual embeddings to comprehensively evaluate our approach. Additionally, we employed a large language model with advanced contextual embeddings to identify the best solution for the hate speech detection task. The experimental results showed that our GPT-3.5-turbo model significantly outperforms strong baselines, achieving up to an 8% improvement over XLM-R in Urdu hate speech detection and an average gain of 4% across all three languages. This research not only contributes a high-quality multilingual dataset but also offers a scalable and inclusive framework for hate speech detection in underrepresented languages. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
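
A minimal TF-IDF plus linear SVM baseline, representative of the classical-ML branch of the experiments above. Character n-grams are used so the same pipeline can be applied across languages; the texts and labels are toy placeholders, not the annotated dataset.

```python
# TF-IDF + linear SVM baseline for (multilingual) hate speech detection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["I hate this group of people",      # toy examples only
         "what a lovely day",
         "odio a esa gente",                 # Spanish
         "que bonito dia"]
labels = [1, 0, 1, 0]                        # 1 = hate speech, 0 = not

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams travel across languages
    LinearSVC()
)
clf.fit(texts, labels)
print(clf.predict(["esa gente es horrible", "such a nice morning"]))
```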

30 pages, 2302 KiB  
Review
Early Detection of Alzheimer’s Disease Using Generative Models: A Review of GANs and Diffusion Models in Medical Imaging
by Md Minul Alam and Shahram Latifi
Algorithms 2025, 18(7), 434; https://doi.org/10.3390/a18070434 - 15 Jul 2025
Abstract
Alzheimer’s disease (AD) is a progressive, non-curable neurodegenerative disorder that poses persistent challenges for early diagnosis due to its gradual onset and the difficulty in distinguishing pathological changes from normal aging. Neuroimaging, particularly MRI and PET, plays a key role in detection; however, limitations in data availability and the complexity of early structural biomarkers constrain traditional diagnostic approaches. This review investigates the use of generative models, specifically Generative Adversarial Networks (GANs) and Diffusion Models, as emerging tools to address these challenges. These models are capable of generating high-fidelity synthetic brain images, augmenting datasets, and enhancing machine learning performance in classification tasks. The review synthesizes findings across multiple studies, revealing that GAN-based models achieved diagnostic accuracies up to 99.70%, with image quality metrics such as SSIM reaching 0.943 and PSNR up to 33.35 dB. Diffusion Models, though relatively new, demonstrated strong performance with up to 92.3% accuracy and FID scores as low as 11.43. Integrating generative models with convolutional neural networks (CNNs) and multimodal inputs further improved diagnostic reliability. Despite these advancements, challenges remain, including high computational demands, limited interpretability, and ethical concerns regarding synthetic data. This review offers a comprehensive perspective to inform future AI-driven research in early AD detection. Full article
(This article belongs to the Special Issue Advancements in Signal Processing and Machine Learning for Healthcare)
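
For reference, the image-quality metrics quoted above (SSIM and PSNR) can be computed with scikit-image as sketched below; the arrays are random stand-ins for real and synthetic brain slices, and no generative model is trained in this snippet.

```python
# Computing SSIM and PSNR for a real vs. synthetic image pair.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
real = rng.random((128, 128))                                      # placeholder "real" slice
synthetic = np.clip(real + rng.normal(0, 0.05, real.shape), 0, 1)  # placeholder "generated" slice

ssim = structural_similarity(real, synthetic, data_range=1.0)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```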

26 pages, 3020 KiB  
Article
Data-Driven Loan Default Prediction: A Machine Learning Approach for Enhancing Business Process Management
by Xinyu Zhang, Tianhui Zhang, Lingmin Hou, Xianchen Liu, Zhen Guo, Yuanhao Tian and Yang Liu
Systems 2025, 13(7), 581; https://doi.org/10.3390/systems13070581 - 15 Jul 2025
Abstract
Loan default prediction is a critical task for financial institutions, directly influencing risk management, loan approval decisions, and profitability. This study evaluates the effectiveness of machine learning models, specifically XGBoost, Gradient Boosting, Random Forest, and LightGBM, in predicting loan defaults. The research investigates the following question: How effective are machine learning models in predicting loan defaults compared to traditional approaches? A structured machine learning pipeline is developed, including data preprocessing, feature engineering, class imbalance handling (SMOTE and class weighting), model training, hyperparameter tuning, and evaluation. Models are assessed using accuracy, F1-score, ROC AUC, precision–recall curves, and confusion matrices. The results show that Gradient Boosting achieves the highest overall classification performance (accuracy = 0.8887, F1-score = 0.8084, recall = 0.8021), making it the most effective model for identifying defaulters. XGBoost exhibits superior discriminatory power with the highest ROC AUC (0.9714). A cost-sensitive threshold-tuning procedure is embedded to align predictions with regulatory loss weights to support audit requirements. Full article
(This article belongs to the Special Issue Data-Driven Methods in Business Process Management)
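
A hedged sketch of this kind of pipeline, using scikit-learn and imbalanced-learn: SMOTE oversampling, a Gradient Boosting classifier, ROC AUC evaluation, and a cost-sensitive decision threshold. The synthetic data and the assumed 5:1 cost ratio for missed defaults are illustrative, not the study's configuration.

```python
# SMOTE + Gradient Boosting + cost-sensitive threshold tuning (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # handle class imbalance
model = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)

proba = model.predict_proba(X_te)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_te, proba), 4))

# Cost-sensitive threshold tuning: assume a missed default costs 5x a false alarm.
thresholds = np.linspace(0.05, 0.95, 19)
costs = [5 * np.sum((proba < t) & (y_te == 1)) + np.sum((proba >= t) & (y_te == 0)) for t in thresholds]
best_t = thresholds[int(np.argmin(costs))]
print("chosen threshold:", best_t, "F1:", round(f1_score(y_te, proba >= best_t), 4))
```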

21 pages, 2217 KiB  
Article
AI-Based Prediction of Visual Performance in Rhythmic Gymnasts Using Eye-Tracking Data and Decision Tree Models
by Ricardo Bernardez-Vilaboa, F. Javier Povedano-Montero, José Ramon Trillo, Alicia Ruiz-Pomeda, Gema Martínez-Florentín and Juan E. Cedrún-Sánchez
Photonics 2025, 12(7), 711; https://doi.org/10.3390/photonics12070711 - 14 Jul 2025
Abstract
Background/Objective: This study aims to evaluate the predictive performance of three supervised machine learning algorithms—decision tree (DT), support vector machine (SVM), and k-nearest neighbors (KNN) in forecasting key visual skills relevant to rhythmic gymnastics. Methods: A total of 383 rhythmic gymnasts aged 4 to 27 years were evaluated in various sports centers across Madrid, Spain. Visual assessments included clinical tests (near convergence point accommodative facility, reaction time, and hand–eye coordination) and eye-tracking tasks (fixation stability, saccades, smooth pursuits, and visual acuity) using the DIVE (Devices for an Integral Visual Examination) system. The dataset was split into training (70%) and testing (30%) subsets. Each algorithm was trained to classify visual performance, and predictive performance was assessed using accuracy and macro F1-score metrics. Results: The decision tree model demonstrated the highest performance, achieving an average accuracy of 92.79% and a macro F1-score of 0.9276. In comparison, the SVM and KNN models showed lower accuracies (71.17% and 78.38%, respectively) and greater difficulty in correctly classifying positive cases. Notably, the DT model outperformed the others in predicting fixation stability and accommodative facility, particularly in short-duration fixation tasks. Conclusion: The decision tree algorithm achieved the highest performance in predicting short-term fixation stability, but its effectiveness was limited in tasks involving accommodative facility, where other models such as SVM and KNN outperformed it in specific metrics. These findings support the integration of machine learning in sports vision screening and suggest that predictive modeling can inform individualized training and performance optimization in visually demanding sports such as rhythmic gymnastics. Full article
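
A compact comparison of the three classifiers named above (DT, SVM, KNN) with a 70/30 split and accuracy plus macro F1 is sketched below, on synthetic stand-in features rather than the DIVE eye-tracking measurements.

```python
# DT vs. SVM vs. KNN with a 70/30 split, accuracy and macro F1.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

X, y = make_classification(n_samples=383, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}, "
          f"macro F1={f1_score(y_te, pred, average='macro'):.3f}")
```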

14 pages, 3218 KiB  
Article
Multi-Task Regression Model for Predicting Photocatalytic Performance of Inorganic Materials
by Zai Chen, Wen-Jie Hu, Hua-Kai Xu, Xiang-Fu Xu and Xing-Yuan Chen
Catalysts 2025, 15(7), 681; https://doi.org/10.3390/catal15070681 - 14 Jul 2025
Abstract
As renewable energy technologies advance, identifying efficient photocatalytic materials for water splitting to produce hydrogen has become an important research focus in materials science. This study presents a multi-task regression model (MTRM) designed to predict the conduction band minimum (CBM), valence band maximum (VBM), and solar-to-hydrogen efficiency (STH) of inorganic materials. Utilizing crystallographic and band gap data from over 15,000 materials in the SNUMAT database, machine-learning methods are applied to predict CBM and VBM, which are subsequently used as additional features to estimate STH. A deep neural network framework with a multi-branch, multi-task regression structure is employed to address the issue of error propagation in traditional cascading models by enabling feature sharing and joint optimization of the tasks. The calculated results show that, while traditional tree-based models perform well in single-task predictions, MTRM achieves superior performance in the multi-task setting, particularly for STH prediction, with an MSE of 0.0001 and an R2 of 0.8265, significantly outperforming cascading approaches. This research provides a new approach to predicting photocatalytic material performance and demonstrates the potential of multi-task learning in materials science. Full article
(This article belongs to the Special Issue Recent Developments in Photocatalytic Hydrogen Production)
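
A hedged PyTorch sketch of a multi-branch, multi-task regressor in the spirit described above: a shared trunk, separate CBM and VBM heads, and an STH head that reuses the two band-edge estimates as extra features so all three targets are optimized jointly. Layer sizes and the input dimension are assumptions.

```python
# Multi-branch, multi-task regression: joint CBM / VBM / STH prediction.
import torch
import torch.nn as nn

class MultiTaskRegressor(nn.Module):
    def __init__(self, n_features: int = 32, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.cbm_head = nn.Linear(hidden, 1)
        self.vbm_head = nn.Linear(hidden, 1)
        # The STH branch sees the shared representation plus the two band-edge estimates.
        self.sth_head = nn.Sequential(nn.Linear(hidden + 2, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, x):
        h = self.trunk(x)
        cbm, vbm = self.cbm_head(h), self.vbm_head(h)
        sth = self.sth_head(torch.cat([h, cbm, vbm], dim=1))
        return cbm, vbm, sth

model = MultiTaskRegressor()
x = torch.randn(16, 32)                       # 16 materials, 32 descriptors (placeholder)
targets = [torch.randn(16, 1) for _ in range(3)]
loss = sum(nn.functional.mse_loss(pred, t) for pred, t in zip(model(x), targets))  # joint optimization
loss.backward()
print(float(loss))
```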

22 pages, 2775 KiB  
Article
Surface Broadband Radiation Data from a Bipolar Perspective: Assessing Climate Change Through Machine Learning
by Alice Cavaliere, Claudia Frangipani, Daniele Baracchi, Maurizio Busetto, Angelo Lupi, Mauro Mazzola, Simone Pulimeno, Vito Vitale and Dasara Shullani
Climate 2025, 13(7), 147; https://doi.org/10.3390/cli13070147 - 13 Jul 2025
Abstract
Clouds modulate the net radiative flux that interacts with both shortwave (SW) and longwave (LW) radiation, but the uncertainties regarding their effect in polar regions are especially high because ground observations are lacking and evaluation through satellites is made difficult by high surface reflectance. In this work, sky conditions for six different polar stations, two in the Arctic (Ny-Ålesund and Utqiagvik [formerly Barrow]) and four in Antarctica (Neumayer, Syowa, South Pole, and Dome C) will be presented, considering the decade between 2010 and 2020. Measurements of broadband SW and LW radiation components (both downwelling and upwelling) are collected within the frame of the Baseline Surface Radiation Network (BSRN). Sky conditions—categorized as clear sky, cloudy, or overcast—were determined using cloud fraction estimates obtained through the RADFLUX method, which integrates shortwave (SW) and longwave (LW) radiative fluxes. RADFLUX was applied with daily fitting for all BSRN stations, producing two cloud fraction values: one derived from shortwave downward (SWD) measurements and the other from longwave downward (LWD) measurements. The variation in cloud fraction used to classify conditions from clear sky to overcast appeared consistent and reasonable when compared to seasonal changes in shortwave downward (SWD) and diffuse radiation (DIF), as well as longwave downward (LWD) and longwave upward (LWU) fluxes. These classifications served as labels for a machine learning-based classification task. Three algorithms were evaluated: Random Forest, K-Nearest Neighbors (KNN), and XGBoost. Input features include downward LW radiation, solar zenith angle, surface air temperature (Ta), relative humidity, and the ratio of water vapor pressure to Ta. Among these models, XGBoost achieved the highest balanced accuracy, with the best scores of 0.78 at Ny-Ålesund (Arctic) and 0.78 at Syowa (Antarctica). The evaluation employed a leave-one-year-out approach to ensure robust temporal validation. Finally, the results from cross-station models highlighted the need for deeper investigation, particularly through clustering stations with similar environmental and climatic characteristics to improve generalization and transferability across locations. Additionally, the use of feature normalization strategies proved effective in reducing inter-station variability and promoting more stable model performance across diverse settings. Full article
(This article belongs to the Special Issue Addressing Climate Change with Artificial Intelligence Methods)
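
The leave-one-year-out evaluation described above can be expressed with scikit-learn's LeaveOneGroupOut and an XGBoost classifier, as sketched below on synthetic placeholder features (LWD, solar zenith angle, Ta, relative humidity, vapor-pressure ratio) and labels.

```python
# Leave-one-year-out validation of an XGBoost sky-condition classifier.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import balanced_accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 5))                # placeholder radiative/meteorological features
y = rng.integers(0, 3, n)                  # 0=clear, 1=cloudy, 2=overcast (synthetic labels)
years = rng.integers(2010, 2021, n)        # grouping variable for temporal validation

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=years):
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
    clf.fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print("mean balanced accuracy:", round(float(np.mean(scores)), 3))
```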

23 pages, 16046 KiB  
Article
A False-Positive-Centric Framework for Object Detection Disambiguation
by Jasper Baur and Frank O. Nitsche
Remote Sens. 2025, 17(14), 2429; https://doi.org/10.3390/rs17142429 - 13 Jul 2025
Abstract
Existing frameworks for classifying the fidelity of object detection tasks do not consider false positive likelihood and object uniqueness. Inspired by the Detection, Recognition, Identification (DRI) framework proposed by Johnson (1958), we propose a new modified framework that defines three categories as visible anomaly, identifiable anomaly, and unique identifiable anomaly (AIU) as determined by human interpretation of imagery or geophysical data. These categories are designed to better capture false positive rates and emphasize the importance of identifying unique versus non-unique targets compared to the DRI Index. We then analyze visual, thermal, and multispectral UAV imagery collected over a seeded minefield and apply the AIU Index for the landmine detection use-case. We find that RGB imagery provided the most value per pixel, achieving a 100% identifiable anomaly rate at 125 pixels on target, and the highest unique target classification compared to thermal and multispectral imaging for the detection and identification of surface landmines and UXO. We also investigate how the AIU Index can be applied to machine learning for the selection of training data and informing the required action to take after object detection bounding boxes are predicted. Overall, the anomaly, identifiable anomaly, and unique identifiable anomaly index provides essential context for false-positive-sensitive or resolution-poor object detection tasks with applications in modality comparison, machine learning, and remote sensing data acquisition. Full article

20 pages, 1916 KiB  
Article
Pre-Symptomatic Detection of Nicosulfuron Phytotoxicity in Vegetable Soybeans via Hyperspectral Imaging and ResNet-18
by Yun Xiang, Tian Liang, Yuanpeng Bu, Shiqiang Cai, Jingjie Guo, Zhongjing Su, Jinxuan Hu, Chang Cai, Bin Wang, Zhijuan Feng, Guwen Zhang, Na Liu and Yaming Gong
Agronomy 2025, 15(7), 1691; https://doi.org/10.3390/agronomy15071691 - 12 Jul 2025
Abstract
Herbicide phytotoxicity represents a critical constraint on crop safety in soybean–corn intercropping systems, where early detection of herbicide stress is essential for implementing timely mitigation strategies to preserve yield potential. Current methodologies lack rapid, non-invasive approaches for early-stage prediction of herbicide-induced stress. To develop and validate a spectral-feature-based prediction model for herbicide concentration classification, we conducted a controlled experiment exposing three-leaf-stage vegetable soybean (Glycine max L.) seedlings to aqueous solutions containing three concentrations of nicosulfuron herbicide (0.5, 1, and 2 mL/L) alongside a water control. Hyperspectral imaging of randomly selected seedling leaves was systematically performed at 1, 3, 5, and 7 days post-treatment. We developed predictive models for herbicide phytotoxicity through advanced machine learning and deep learning frameworks. Key findings revealed that the ResNet-18 deep learning model achieved exceptional classification performance when analyzing the 386–1004 nm spectral range at day 7 post-treatment: 100% accuracy in binary classification (herbicide-treated vs. water control), 93.02% accuracy in three-class differentiation (water control, low/high concentration), and 86.53% accuracy in four-class discrimination across specific concentration gradients (0, 0.5, 1, 2 mL/L). Spectral analysis identified significant reflectance alterations between 518 and 690 nm through normalized reflectance and first-derivative transformations. Subsequent model optimization using this diagnostic spectral subrange maintained 100% binary classification accuracy while achieving 94.12% and 82.11% accuracy for three- and four-class recognition tasks, respectively. This investigation demonstrated the synergistic potential of hyperspectral imaging and deep learning for early herbicide stress detection in vegetable soybeans. Our findings established a novel methodological framework for pre-symptomatic stress diagnostics while demonstrating the technical feasibility of employing targeted spectral regions (518–690 nm) in field-ready real-time crop surveillance systems. Furthermore, these innovations offer significant potential for advancing precision agriculture in intercropping systems, specifically through refined herbicide application protocols and yield preservation via early-stage phytotoxicity mitigation. Full article
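
A minimal sketch of adapting an off-the-shelf ResNet-18 to hyperspectral input: the first convolution is widened to the number of spectral bands and the final layer resized to the four concentration classes. The band count and image size are assumptions; the paper's preprocessing and training procedure are not reproduced.

```python
# Adapting torchvision's ResNet-18 to hyperspectral leaf-image cubes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_BANDS = 128      # assumed number of spectral bands after preprocessing
NUM_CLASSES = 4      # 0, 0.5, 1, 2 mL/L nicosulfuron

model = resnet18(weights=None)
model.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

x = torch.randn(2, NUM_BANDS, 64, 64)        # two placeholder leaf image cubes
logits = model(x)
print(logits.shape)                          # torch.Size([2, 4])
```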

19 pages, 2299 KiB  
Article
A Supervised Machine Learning-Based Approach for Task Workload Prediction in Manufacturing: A Case Study Application
by Valentina De Simone, Valentina Di Pasquale, Joanna Calabrese, Salvatore Miranda and Raffaele Iannone
Machines 2025, 13(7), 602; https://doi.org/10.3390/machines13070602 - 12 Jul 2025
Abstract
Predicting workload for tasks in manufacturing is a complex challenge due to the numerous variables involved. In small- and medium-sized enterprises (SMEs), this process is often experience-based, leading to inaccurate predictions that significantly impact production planning, order management, and consequently the ability to meet customer deadlines. This paper presents an approach that leverages machine learning to enhance workload prediction with minimal data collection, making it particularly suitable for SMEs. A case study application using supervised machine learning models for regression, trained in an open-source data analytics, reporting, and integration platform (KNIME Analytics Platform), has been carried out. An Automated Machine Learning (AutoML) regression approach was employed to identify the most suitable model for task workload prediction based on minimising the Mean Absolute Error (MAE) scores. Specifically, the Regression Tree (RT) model demonstrated superior accuracy compared to more traditional simple averaging and manual predictions when modelling data for a single product type. When incorporating all available product data, despite a slight performance decrease, the XGBoost Tree Ensemble still outperformed the traditional approaches. These findings highlight the potential of machine learning to improve workload forecasting in manufacturing, offering a practical and easily implementable solution for SMEs. Full article
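
A hedged scikit-learn analogue of the model-selection step above: a regression tree compared against a simple-average baseline by MAE. The order features and synthetic workload values are assumptions standing in for the case-study data (the paper itself works in the KNIME Analytics Platform).

```python
# Regression tree vs. simple-average baseline for task workload prediction (MAE).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 400
# Placeholder order features: quantity, product complexity score, number of operations.
X = np.column_stack([rng.integers(1, 100, n), rng.uniform(1, 5, n), rng.integers(1, 12, n)])
y = 0.2 * X[:, 0] * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 3, n)   # synthetic workload in hours

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("mean baseline", DummyRegressor(strategy="mean")),
                    ("regression tree", DecisionTreeRegressor(max_depth=5, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.2f} h")
```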

29 pages, 1234 KiB  
Article
Automatic Detection of the CaRS Framework in Scholarly Writing Using Natural Language Processing
by Olajide Omotola, Nonso Nnamoko, Charles Lam, Ioannis Korkontzelos, Callum Altham and Joseph Barrowclough
Electronics 2025, 14(14), 2799; https://doi.org/10.3390/electronics14142799 - 11 Jul 2025
Abstract
Many academic introductions suffer from inconsistencies and a lack of comprehensive structure, often failing to effectively outline the core elements of the research. This not only impacts the clarity and readability of the article but also hinders the communication of its significance and objectives to the intended audience. This study aims to automate the CaRS (Creating a Research Space) model using machine learning and natural language processing techniques. We conducted a series of experiments using a custom-developed corpus of 50 biology research article introductions, annotated with rhetorical moves and steps. The dataset was used to evaluate the performance of four classification algorithms: Prototypical Network (PN), Support Vector Machines (SVM), Naïve Bayes (NB), and Random Forest (RF); in combination with six embedding models: Word2Vec, GloVe, BERT, GPT-2, Llama-3.2-3B, and TEv3-small. Multiple experiments were carried out to assess performance at both the move and step levels using 5-fold cross-validation. Evaluation metrics included accuracy and weighted F1-score, with comprehensive results provided. Results show that the SVM classifier, when paired with Llama-3.2-3B embeddings, consistently achieved the highest performance across multiple tasks when trained on preprocessed dataset, with 79% accuracy and weighted F1-score on rhetorical moves and strong results on M2 steps (75% accuracy and weighted F1-score). While other combinations showed promise, particularly NB and RF with newer embeddings, none matched the consistency of the SVM–Llama pairing. Compared to existing benchmarks, our model achieves similar or better performance; however, direct comparison is limited due to differences in datasets and experimental setups. Despite the unavailability of the benchmark dataset, our findings indicate that SVM is an effective choice for rhetorical classification, even in few-shot learning scenarios. Full article
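
A small sketch of the move-level classification setup: an SVM evaluated with 5-fold cross-validation and weighted F1. TF-IDF vectors stand in for the Llama-3.2-3B embeddings paired with SVM in the paper, and the sentences and move labels are toy examples.

```python
# SVM rhetorical-move classifier with 5-fold CV and weighted F1 (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

sentences = ["Prior studies have examined X in detail.",      # Move 1: establishing a territory
             "However, little is known about Y.",              # Move 2: establishing a niche
             "Here we investigate Y using method Z.",          # Move 3: occupying the niche
             ] * 10                                            # repeated so 5-fold CV has data
moves = [1, 2, 3] * 10

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
scores = cross_val_score(clf, sentences, moves, cv=5, scoring="f1_weighted")
print("weighted F1 per fold:", scores.round(3))
```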
