Search Results (463)

Search Parameters:
Keywords = AIS closing

19 pages, 2135 KiB  
Article
Development of an Automotive Electronics Internship Assistance System Using a Fine-Tuned Llama 3 Large Language Model
by Ying-Chia Huang, Hsin-Jung Tsai, Hui-Ting Liang, Bo-Siang Chen, Tzu-Hsin Chu, Wei-Sho Ho, Wei-Lun Huang and Ying-Ju Tseng
Systems 2025, 13(8), 668; https://doi.org/10.3390/systems13080668 - 6 Aug 2025
Abstract
This study develops and validates an artificial intelligence (AI)-assisted internship learning platform for automotive electronics based on the Llama 3 large language model, aiming to enhance pedagogical effectiveness within vocational training contexts. Addressing critical issues such as the persistent theory–practice gap and limited innovation capability prevalent in existing curricula, we leverage the natural language processing (NLP) capabilities of Llama 3 through fine-tuning based on transfer learning to establish a specialized knowledge base encompassing fundamental circuit principles and fault diagnosis protocols. The implementation employs the Hugging Face Transformers library with optimized hyperparameters, including a learning rate of 5 × 10⁻⁵ across five training epochs. Post-training evaluations revealed an accuracy of 89.7% on validation tasks (representing a 12.4% improvement over the baseline model), a semantic comprehension precision of 92.3% in technical question-and-answer assessments, a mathematical computation accuracy of 78.4% (highlighting this as a current limitation), and a latency of 6.3 s under peak operational workloads (indicating a system bottleneck). Although direct trials involving students were deliberately avoided, the platform’s technical feasibility was validated through multidimensional benchmarking against established models (BERT-base and GPT-2), confirming superior domain adaptability (F1 = 0.87) and enhanced error tolerance (σ² = 1.2). Notable limitations emerged in numerical reasoning tasks (Cohen’s d = 1.15 compared to human experts) and in real-time responsiveness deterioration when exceeding 50 concurrent users. The study concludes that Llama 3 demonstrates considerable promise for automotive electronics skills development.
Proposed future enhancements include integrating symbolic AI modules to improve computational reliability, implementing Kubernetes-based load balancing to ensure latency below 2 s at scale, and conducting longitudinal pedagogical validation studies with trainees. This research provides a robust technical foundation for AI-driven vocational education, especially suited to mechatronics fields that require close integration between theoretical knowledge and practical troubleshooting skills.
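
The fine-tuning setup the abstract reports (learning rate 5 × 10⁻⁵, five epochs, Hugging Face Transformers) can be sketched as a configuration dict of the kind passed to `TrainingArguments`. Only the learning rate and epoch count come from the abstract; the batch-size and accumulation values below are illustrative assumptions, not the authors' settings.

```python
# Hypothetical fine-tuning configuration; only learning_rate and
# num_train_epochs are reported in the abstract.
finetune_config = {
    "learning_rate": 5e-5,             # reported in the abstract
    "num_train_epochs": 5,             # reported in the abstract
    "per_device_train_batch_size": 4,  # assumption, not stated
    "gradient_accumulation_steps": 8,  # assumption, not stated
}

# In a Transformers workflow these keys would be unpacked into
# TrainingArguments(**finetune_config, output_dir=...).
print(finetune_config["learning_rate"])
```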

21 pages, 552 KiB  
Article
AgentsBench: A Multi-Agent LLM Simulation Framework for Legal Judgment Prediction
by Cong Jiang and Xiaolei Yang
Systems 2025, 13(8), 641; https://doi.org/10.3390/systems13080641 - 1 Aug 2025
Viewed by 294
Abstract
The justice system has increasingly applied AI techniques for legal judgment to enhance efficiency. However, most AI techniques focus on decision-making outcomes, failing to capture the deliberative nature of the real-world judicial process. To address these challenges, we propose a large language model-based multi-agent framework named AgentsBench. Our approach leverages multiple LLM-driven agents that simulate the discussion process of the Chinese judicial bench, which is typically composed of professional and lay judges. We conducted experiments on a legal judgment prediction task, and the results show that our framework outperforms existing LLM-based methods in terms of performance and decision quality. By incorporating these elements, our framework reflects real-world judicial processes more closely, enhancing accuracy, fairness, and societal consideration. While the simulation is based on China’s lay judge system, our framework is generalizable and can be adapted to various legal scenarios and other legal systems involving collective decision-making processes.
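
A bench-style deliberation among judge agents can be sketched as below. The agent "opinions" are stubbed with fixed threshold functions and the majority vote is an illustrative assumption; AgentsBench itself drives each agent with an LLM and its aggregation rule may differ.

```python
# Toy simulation of a mixed bench: each agent maps a case to a verdict,
# and the bench adopts the majority view.
from collections import Counter

def deliberate(agents, case):
    """Collect each agent's verdict and return the majority decision."""
    opinions = [agent(case) for agent in agents]
    verdict, _ = Counter(opinions).most_common(1)[0]
    return verdict, opinions

# Stub agents (not LLM-driven): one professional judge, two lay judges,
# each with a different evidence threshold.
professional = lambda case: "guilty" if case["evidence_strength"] > 0.7 else "not guilty"
lay_1 = lambda case: "guilty" if case["evidence_strength"] > 0.5 else "not guilty"
lay_2 = lambda case: "guilty" if case["evidence_strength"] > 0.9 else "not guilty"

verdict, opinions = deliberate([professional, lay_1, lay_2], {"evidence_strength": 0.8})
print(verdict)  # majority verdict of the three stub agents
```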
(This article belongs to the Special Issue AI-Empowered Modeling and Simulation for Complex Systems)

32 pages, 2027 KiB  
Review
Harnessing the Loop: The Perspective of Circular RNA in Modern Therapeutics
by Yang-Yang Zhao, Fu-Ming Zhu, Yong-Juan Zhang and Huanhuan Y. Wei
Vaccines 2025, 13(8), 821; https://doi.org/10.3390/vaccines13080821 - 31 Jul 2025
Viewed by 333
Abstract
Circular RNAs (circRNAs) have emerged as a transformative class of RNA therapeutics, distinguished by their closed-loop structure conferring nuclease resistance, reduced immunogenicity, and sustained translational activity. While challenges in pharmacokinetic control and manufacturing standardization require resolution, emerging synergies between computational design tools and modular delivery platforms are accelerating clinical translation. In this review, we synthesize recent advances in circRNA therapeutics, with a focused analysis of their stability and immunogenic properties in vaccine and drug development. Notably, key synthesis strategies, delivery platforms, and AI-driven optimization methods enabling scalable production are discussed. Moreover, we summarize preclinical and emerging clinical studies that underscore the potential of circRNA in vaccine development and protein replacement therapies. As both a promising expression vehicle and programmable regulatory molecule, circRNA represents a versatile platform poised to advance next-generation biologics and precision medicine.
(This article belongs to the Special Issue Evaluating the Immune Response to RNA Vaccine)

15 pages, 10795 KiB  
Article
DigiHortiRobot: An AI-Driven Digital Twin Architecture for Hydroponic Greenhouse Horticulture with Dual-Arm Robotic Automation
by Roemi Fernández, Eduardo Navas, Daniel Rodríguez-Nieto, Alain Antonio Rodríguez-González and Luis Emmi
Future Internet 2025, 17(8), 347; https://doi.org/10.3390/fi17080347 - 31 Jul 2025
Viewed by 243
Abstract
The integration of digital twin technology with robotic automation holds significant promise for advancing sustainable horticulture in controlled environment agriculture. This article presents DigiHortiRobot, a novel AI-driven digital twin architecture tailored for hydroponic greenhouse systems. The proposed framework integrates real-time sensing, predictive modeling, task planning, and dual-arm robotic execution within a modular, IoT-enabled infrastructure. DigiHortiRobot is structured into three progressive implementation phases: (i) monitoring and data acquisition through a multimodal perception system; (ii) decision support and virtual simulation for scenario analysis and intervention planning; and (iii) autonomous execution with feedback-based model refinement. The physical layer encompasses crops, infrastructure, and a mobile dual-arm robot; the virtual layer incorporates semantic modeling and simulation environments; and the synchronization layer enables continuous bi-directional communication via a nine-tier IoT architecture inspired by FIWARE standards. A robot task assignment algorithm is introduced to support operational autonomy while maintaining human oversight. The system is designed to optimize horticultural workflows such as seeding and harvesting while allowing farmers to interact remotely through cloud-based interfaces. Compared to previous digital agriculture approaches, DigiHortiRobot enables closed-loop coordination among perception, simulation, and action, supporting real-time task adaptation in dynamic environments. Experimental validation in a hydroponic greenhouse confirmed robust performance in both seeding and harvesting operations, achieving over 90% accuracy in localizing target elements and successfully executing planned tasks. The platform thus provides a strong foundation for future research in predictive control, semantic environment modeling, and scalable deployment of autonomous systems for high-value crop production.
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)

40 pages, 7941 KiB  
Article
Synergistic Hierarchical AI Framework for USV Navigation: Closing the Loop Between Swin-Transformer Perception, T-ASTAR Planning, and Energy-Aware TD3 Control
by Haonan Ye, Hongjun Tian, Qingyun Wu, Yihong Xue, Jiayu Xiao, Guijie Liu and Yang Xiong
Sensors 2025, 25(15), 4699; https://doi.org/10.3390/s25154699 - 30 Jul 2025
Viewed by 402
Abstract
Autonomous Unmanned Surface Vehicle (USV) operations in complex ocean engineering scenarios necessitate robust navigation, guidance, and control technologies. These systems require reliable sensor-based object detection and efficient, safe, and energy-aware path planning. To address these multifaceted challenges, this paper proposes a novel synergistic AI framework. The framework integrates (1) a novel adaptation of the Swin-Transformer that generates a dense, semantic risk map from raw visual data, enabling the system to interpret ambiguous marine conditions such as sun glare and choppy water and providing the real-time environmental understanding crucial for guidance; (2) a Transformer-enhanced A-star (T-ASTAR) algorithm with spatio-temporal attentional guidance to generate globally near-optimal and energy-aware static paths; (3) a domain-adapted TD3 agent for dynamic local path optimization and real-time obstacle avoidance, featuring a novel energy-aware reward function that respects USV hydrodynamic constraints and suits long-endurance missions, forming a key control element; and (4) CUDA acceleration to meet the computational demands of real-time ocean engineering applications. Simulations and real-world data verify the framework’s superiority over benchmarks like A* and RRT, achieving 30% shorter routes, 70% fewer turns, 64.7% fewer dynamic collisions, and a 215-fold speed improvement in map generation via CUDA acceleration. This research underscores the importance of integrating powerful AI components within a hierarchical synergy, encompassing AI-based perception, hierarchical decision planning for guidance, and multi-stage optimal search algorithms for control. The proposed solution significantly advances USV autonomy, addressing critical ocean engineering challenges such as navigation in dynamic environments, object avoidance, and energy-constrained operations for unmanned maritime systems.
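
An energy-aware reward of the kind the TD3 agent optimizes can be sketched as goal progress minus penalties for collisions and thrust (energy) use. The terms and weights below are illustrative assumptions, not the paper's actual reward function.

```python
# Hedged sketch of an energy-aware RL reward for a USV controller.
def energy_aware_reward(progress, collision, thrust,
                        w_prog=1.0, w_col=10.0, w_energy=0.1):
    """progress: metres gained toward the goal this step;
    thrust: commanded thrust normalised to [0, 1]."""
    reward = w_prog * progress
    if collision:
        reward -= w_col              # large penalty for any collision
    reward -= w_energy * thrust**2   # quadratic penalty discourages hard thrust
    return reward

# 2 m of progress at half thrust, no collision:
print(energy_aware_reward(progress=2.0, collision=False, thrust=0.5))
```

A quadratic (rather than linear) energy term is a common choice because it penalizes aggressive thrust spikes more than steady cruising, which matches the long-endurance objective.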

15 pages, 856 KiB  
Article
Automated Assessment of Word- and Sentence-Level Speech Intelligibility in Developmental Motor Speech Disorders: A Cross-Linguistic Investigation
by Micalle Carl and Michal Icht
Diagnostics 2025, 15(15), 1892; https://doi.org/10.3390/diagnostics15151892 - 28 Jul 2025
Viewed by 174
Abstract
Background/Objectives: Accurate assessment of speech intelligibility is necessary for individuals with motor speech disorders. Transcription or scaled rating methods by naïve listeners are the most reliable tasks for these purposes; however, they are often resource-intensive and time-consuming within clinical contexts. Automatic speech recognition (ASR) systems, which transcribe speech into text, have been increasingly utilized for assessing speech intelligibility. This study investigates the feasibility of using an open-source ASR system to assess speech intelligibility in Hebrew and English speakers with Down syndrome (DS). Methods: Recordings from 65 Hebrew- and English-speaking participants were included: 33 speakers with DS and 32 typically developing (TD) peers. Speech samples (words, sentences) were transcribed using Whisper (OpenAI) and by naïve listeners. The proportion of agreement between ASR transcriptions and those of naïve listeners was compared across speaker groups (TD, DS) and languages (Hebrew, English) for word-level data. Further comparisons for Hebrew speakers were conducted across speaker groups and stimuli (words, sentences). Results: The strength of the correlation between listener and ASR transcription scores varied across languages, and was higher for English (r = 0.98) than for Hebrew (r = 0.81) for speakers with DS. A higher proportion of listener–ASR agreement was demonstrated for TD speakers, as compared to those with DS (0.94 vs. 0.74, respectively), and for English, in comparison to Hebrew speakers (0.91 for English DS speakers vs. 0.74 for Hebrew DS speakers). Listener–ASR agreement for single words was consistently higher than for sentences among Hebrew speakers. Speakers’ intelligibility influenced word-level agreement among Hebrew- but not English-speaking participants with DS. 
Conclusions: ASR performance for English closely approximated that of naïve listeners, suggesting potential near-future clinical applicability within single-word intelligibility assessment. In contrast, a lower proportion of agreement between human listeners and ASR for Hebrew speech indicates that broader clinical implementation may require further training of ASR models in this language.
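
The word-level listener–ASR agreement the study reports can be sketched as the share of target words on which the naïve-listener transcription and the Whisper transcription agree (both correct or both incorrect). The scoring rule here is an assumption for illustration; the study's exact alignment and scoring protocol may differ.

```python
# Proportion of target words where listener and ASR transcriptions agree.
def agreement_proportion(targets, listener, asr):
    """Each argument is a list of transcribed words aligned to the targets."""
    agree = sum(
        (l.lower() == t.lower()) == (a.lower() == t.lower())
        for t, l, a in zip(targets, listener, asr)
    )
    return agree / len(targets)

targets  = ["ball", "house", "water", "dog"]
listener = ["ball", "mouse", "water", "dog"]  # listener misses one word
asr      = ["ball", "house", "water", "fog"]  # ASR misses a different word
print(agreement_proportion(targets, listener, asr))  # agree on 2 of 4 words
```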
(This article belongs to the Special Issue Evaluation and Management of Developmental Disabilities)

26 pages, 27333 KiB  
Article
Gest-SAR: A Gesture-Controlled Spatial AR System for Interactive Manual Assembly Guidance with Real-Time Operational Feedback
by Naimul Hasan and Bugra Alkan
Machines 2025, 13(8), 658; https://doi.org/10.3390/machines13080658 - 27 Jul 2025
Viewed by 269
Abstract
Manual assembly remains essential in modern manufacturing, yet the increasing complexity of customised production imposes significant cognitive burdens and error rates on workers. Existing Spatial Augmented Reality (SAR) systems often operate passively, lacking adaptive interaction, real-time feedback, and gesture-based control. In response, we present Gest-SAR, a SAR framework that integrates a custom MediaPipe-based gesture classification model to deliver adaptive light-guided pick-to-place assembly instructions and real-time error feedback within a closed-loop interaction. In a within-subject study, ten participants completed standardised Duplo-based assembly tasks using Gest-SAR, paper-based manuals, and tablet-based instructions; performance was evaluated via assembly cycle time, selection and placement error rates, cognitive workload assessed by NASA-TLX, and usability via post-experimental questionnaires. Quantitative results demonstrate that Gest-SAR significantly reduces cycle times, averaging 3.95 min compared to paper (mean = 7.89 min, p < 0.01) and tablet (mean = 6.99 min, p < 0.01). It also achieved sevenfold lower average error rates while lowering perceived cognitive workload (p < 0.05 for mental demand) compared to conventional modalities. In total, 90% of the users preferred SAR over the paper and tablet modalities. These outcomes indicate that natural hand-gesture interaction coupled with real-time visual feedback enhances both the efficiency and accuracy of manual assembly. By embedding AI-driven gesture recognition and AR projection into a human-centric assistance system, Gest-SAR advances the collaborative interplay between humans and machines, aligning with Industry 5.0 objectives of resilient, sustainable, and intelligent manufacturing.
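
The closed interaction loop can be sketched as a small controller in which a classified hand gesture drives the projected guidance state. The gesture labels, transitions, and action names below are invented for illustration; the real system classifies MediaPipe hand landmarks with a custom model and projects light-guided instructions.

```python
# Toy gesture-to-guidance state machine for step-by-step assembly.
def step_controller(state, gesture, n_steps):
    """state: current instruction index; returns (new index, projector action)."""
    if gesture == "swipe_right" and state < n_steps - 1:
        return state + 1, "project_next_step"
    if gesture == "swipe_left" and state > 0:
        return state - 1, "reproject_previous_step"
    if gesture == "thumbs_up":
        return state, "confirm_placement"
    return state, "ignore"  # unrecognised gesture or out-of-range move

state, action = step_controller(state=0, gesture="swipe_right", n_steps=5)
print(state, action)  # advance from step 0 to step 1
```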
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)

17 pages, 8549 KiB  
Article
A Fully Automated Analysis Pipeline for 4D Flow MRI in the Aorta
by Ethan M. I. Johnson, Haben Berhane, Elizabeth Weiss, Kelly Jarvis, Aparna Sodhi, Kai Yang, Joshua D. Robinson, Cynthia K. Rigsby, Bradley D. Allen and Michael Markl
Bioengineering 2025, 12(8), 807; https://doi.org/10.3390/bioengineering12080807 - 27 Jul 2025
Viewed by 341
Abstract
Four-dimensional (4D) flow MRI has shown promise for the assessment of aortic hemodynamics. However, data analysis traditionally requires manual and time-consuming human input at several stages. This limits reproducibility and affects analysis workflows, such that large-cohort 4D flow studies are lacking. Here, a fully automated artificial intelligence (AI) 4D flow analysis pipeline was developed and evaluated in a cohort of over 350 subjects. The 4D flow MRI analysis pipeline integrated a series of previously developed and validated deep learning networks, which replaced traditionally manual processing tasks (background-phase correction, noise masking, velocity anti-aliasing, aorta 3D segmentation). Hemodynamic parameters (global aortic pulse wave velocity (PWV), peak velocity, flow energetics) were automatically quantified. The pipeline was evaluated in a heterogeneous single-center cohort of 379 subjects (age = 43.5 ± 18.6 years, 118 female) who underwent 4D flow MRI of the thoracic aorta (n = 147 healthy controls, n = 147 patients with a bicuspid aortic valve [BAV], n = 10 with mechanical valve prostheses, n = 75 pediatric patients with hereditary aortic disease). Pipeline performance with BAV and control data was evaluated by comparing to manual analysis performed by two human observers. A fully automated 4D flow pipeline analysis was successfully performed in 365 of 379 patients (96%). Pipeline-based quantification of aortic hemodynamics was closely correlated with manual analysis results (peak velocity: r = 1.00, p < 0.001; PWV: r = 0.99, p < 0.001; flow energetics: r = 0.99, p < 0.001; overall r ≥ 0.99, p < 0.001). Bland–Altman analysis showed close agreement for all hemodynamic parameters (bias 1–3%, limits of agreement 6–22%). Notably, limits of agreement between different human observers’ quantifications were moderate (4–20%). 
In addition, the pipeline 4D flow analysis closely reproduced hemodynamic differences between age-matched adult BAV patients and controls (median peak velocity: 1.74 m/s [automated] or 1.76 m/s [manual] BAV vs. 1.31 [auto.] or 1.29 [manu.] controls, p < 0.005; PWV: 6.4–6.6 m/s all groups, any processing [no significant differences]; kinetic energy: 4.9 μJ [auto.] or 5.0 μJ [manu.] BAV vs. 3.1 μJ [both] control, p < 0.005). This study presents a framework for the complete automation of quantitative 4D flow MRI data processing with a failure rate of less than 5%, offering improved measurement reliability in quantitative 4D flow MRI. Future studies are warranted to reduce failure rates and evaluate pipeline performance across multiple centers.
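
The Bland–Altman statistics used to compare automated and manual measurements reduce to a mean difference (bias) and bias ± 1.96 SD limits of agreement. The measurements below are invented for the sketch, not data from the study.

```python
# Bland-Altman bias and limits of agreement for paired measurements.
from statistics import mean, stdev

def bland_altman(auto, manual):
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical peak-velocity pairs (m/s), invented for illustration:
auto   = [1.70, 1.32, 1.76, 1.28, 1.55]
manual = [1.74, 1.30, 1.75, 1.31, 1.52]
bias, (lo, hi) = bland_altman(auto, manual)
print(round(bias, 3))
```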
(This article belongs to the Special Issue Recent Advances in Cardiac MRI)

24 pages, 12286 KiB  
Article
A UAV-Based Multi-Scenario RGB-Thermal Dataset and Fusion Model for Enhanced Forest Fire Detection
by Yalin Zhang, Xue Rui and Weiguo Song
Remote Sens. 2025, 17(15), 2593; https://doi.org/10.3390/rs17152593 - 25 Jul 2025
Viewed by 437
Abstract
UAVs are essential for forest fire detection due to vast forest areas and the inaccessibility of high-risk zones, enabling rapid long-range inspection and detailed close-range surveillance. However, aerial photography faces challenges like multi-scale target recognition and complex scenario adaptation (e.g., deformation, occlusion, lighting variations). RGB-Thermal fusion methods effectively integrate visible-light texture and thermal infrared temperature features, but current approaches are constrained by limited datasets and insufficient exploitation of cross-modal complementary information, ignoring cross-level feature interaction. To address data scarcity in wildfire scenarios, we constructed a time-synchronized, multi-scene, multi-angle aerial RGB-Thermal dataset (RGBT-3M) with “Smoke–Fire–Person” annotations and modal alignment via the M-RIFT method. We further propose a CP-YOLOv11-MF fusion detection model based on the advanced YOLOv11 framework, which learns the heterogeneous, complementary features of each modality in a progressive manner. Experimental validation proves the superiority of our method, with a precision of 92.5%, a recall of 93.5%, a mAP50 of 96.3%, and a mAP50-95 of 62.9%. The model’s RGB-Thermal fusion capability enhances early fire detection, offering a benchmark dataset and methodological advancement for intelligent forest conservation, with implications for AI-driven ecological protection.
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)

16 pages, 1143 KiB  
Article
AI-Driven Automated Test Generation Framework for VCU: A Multidimensional Coupling Approach Integrating Requirements, Variables and Logic
by Guangyao Wu, Xiaoming Xu and Yiting Kang
World Electr. Veh. J. 2025, 16(8), 417; https://doi.org/10.3390/wevj16080417 - 24 Jul 2025
Viewed by 322
Abstract
This paper proposes an AI-driven automated test generation framework for vehicle control units (VCUs), integrating natural language processing (NLP) and dynamic variable binding. To address the critical limitation of traditional AI-generated test cases lacking executable variables, the framework establishes a closed-loop transformation from requirements to executable code through a five-layer architecture: (1) structured parsing of PDF requirements using domain-adaptive prompt engineering; (2) construction of a multidimensional variable knowledge graph; (3) semantic atomic decomposition of requirements and logic expression generation; (4) dynamic visualization of cause–effect graphs; (5) path-sensitization-driven optimization of test sequences. Validated on VCU software from a leading OEM, the method achieves 97.3% variable matching accuracy and 100% test case executability, reducing invalid cases by 63% compared to conventional NLP approaches. This framework provides an explainable and traceable automated solution for intelligent vehicle software validation, significantly enhancing efficiency and reliability in automotive testing.
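
The variable-binding step — resolving phrases extracted from a requirement against a variable knowledge graph so generated test cases reference real, executable signal names — can be sketched as a fuzzy lookup. The signal names and the `difflib`-based matching are illustrative assumptions, not the paper's implementation.

```python
# Toy knowledge base mapping requirement phrases to executable VCU signals.
import difflib

variable_kb = {
    "vehicle speed": "VehSpd_kph",            # hypothetical signal names
    "brake pedal position": "BrkPedalPos_pct",
    "motor torque request": "MotTqReq_Nm",
}

def bind_variable(phrase, kb=variable_kb, cutoff=0.6):
    """Return the executable signal name for a requirement phrase, or None."""
    match = difflib.get_close_matches(phrase.lower(), kb.keys(), n=1, cutoff=cutoff)
    return kb[match[0]] if match else None

print(bind_variable("Vehicle Speed"))        # exact match (case-insensitive)
print(bind_variable("brake pedal postion"))  # fuzzy match tolerates the typo
```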
(This article belongs to the Special Issue Intelligent Electric Vehicle Control, Testing and Evaluation)

26 pages, 4687 KiB  
Article
Comparative Evaluation of YOLO and Gemini AI Models for Road Damage Detection and Mapping
by Zeynep Demirel, Shvan Tahir Nasraldeen, Öykü Pehlivan, Sarmad Shoman, Mustafa Albdairi and Ali Almusawi
Future Transp. 2025, 5(3), 91; https://doi.org/10.3390/futuretransp5030091 - 22 Jul 2025
Viewed by 509
Abstract
Efficient detection of road surface defects is vital for timely maintenance and traffic safety. This study introduces a novel AI-powered web framework, TriRoad AI, that integrates multiple versions of the You Only Look Once (YOLO) object detection algorithms—specifically YOLOv8 and YOLOv11—for automated detection of potholes and cracks. A user-friendly browser interface was developed to enable real-time image analysis, confidence-based prediction filtering, and severity-based geolocation mapping using OpenStreetMap. Experimental evaluation was conducted using two datasets: one from online sources and another from field-collected images in Ankara, Turkey. YOLOv8 achieved a mean accuracy of 88.43% on internet-sourced images, while YOLOv11-B demonstrated higher robustness in challenging field environments with a detection accuracy of 46.15%, and YOLOv8 followed closely with 44.92% on mixed field images. The Gemini AI model, although highly effective in controlled environments (97.64% detection accuracy), exhibited a significant performance drop of up to 80% in complex field scenarios, with its accuracy falling to 18.50%. The proposed platform’s uniqueness lies in its fully integrated, browser-based design, requiring no device-specific installation, and its incorporation of severity classification with interactive geospatial visualization. These contributions address current gaps in generalization, accessibility, and practical deployment, offering a scalable solution for smart infrastructure monitoring and preventive maintenance planning in urban environments.

24 pages, 73556 KiB  
Article
Neural Network-Guided Smart Trap for Selective Monitoring of Nocturnal Pest Insects in Agriculture
by Joel Hinojosa-Dávalos, Miguel Ángel Robles-García, Melesio Gutiérrez-Lomelí, Ariadna Berenice Flores Jiménez and Cuauhtémoc Acosta Lúa
Agriculture 2025, 15(14), 1562; https://doi.org/10.3390/agriculture15141562 - 21 Jul 2025
Viewed by 314
Abstract
Insect pests remain a major threat to agricultural productivity, particularly in open-field cropping systems where conventional monitoring methods are labor-intensive and lack scalability. This study presents the design, implementation, and field evaluation of a neural network-guided smart trap specifically developed to monitor and selectively capture nocturnal insect pests under real agricultural conditions. The proposed trap integrates light and rain sensors, servo-controlled mechanical gates, and a single-layer perceptron neural network deployed on an ATmega-2560 microcontroller by Microchip Technology Inc. (Chandler, AZ, USA). The perceptron processes normalized sensor inputs to autonomously decide, in real time, whether to open or close the gate, thereby enhancing the selectivity of insect capture. The system features a removable tray containing a food-based attractant and yellow and green LEDs designed to lure target species such as moths and flies from the orders Lepidoptera and Diptera. Field trials were conducted between June and August 2023 in La Barca, Jalisco, Mexico, under diverse environmental conditions. Captured insects were analyzed and classified using the iNaturalist platform, with the successful identification of key pest species including Tetanolita floridiana, Synchlora spp., Estigmene acrea, Sphingomorpha chlorea, Gymnoscelis rufifasciata, and Musca domestica, while minimizing the capture of non-target organisms such as Carpophilus spp., Hexagenia limbata, and Chrysoperla spp. Statistical analysis using the Kruskal–Wallis test confirmed significant differences in capture rates across environmental conditions. The results highlight the potential of this low-cost device to improve pest monitoring accuracy, and lay the groundwork for the future integration of more advanced AI-based classification and species recognition systems targeting nocturnal Lepidoptera and other pest insects.
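
A single-layer perceptron over normalized sensor inputs, as described for the trap's gate decision, can be sketched in a few lines. The weights, bias, and inputs below are invented for illustration, not the trained values deployed on the ATmega-2560.

```python
# Illustrative perceptron: normalised light and rain readings in,
# open/close gate decision out (step activation).
def gate_decision(light, rain, w_light=-1.0, w_rain=-1.5, bias=0.6):
    """Inputs normalised to [0, 1]; returns True to open the gate."""
    activation = w_light * light + w_rain * rain + bias
    return activation > 0  # classic perceptron threshold

print(gate_decision(light=0.1, rain=0.0))  # dark, dry night: open
print(gate_decision(light=0.9, rain=0.0))  # daylight: stay closed
```

Negative weights on light and rain encode the target behaviour: the trap opens only on dark, dry nights when the nocturnal target species are active.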
(This article belongs to the Special Issue Design and Development of Smart Crop Protection Equipment)

16 pages, 2946 KiB  
Article
AI-Driven Comprehensive SERS-LFIA System: Improving Virus Automated Diagnostics Through SERS Image Recognition and Deep Learning
by Shuai Zhao, Meimei Xu, Chenglong Lin, Weida Zhang, Dan Li, Yusi Peng, Masaki Tanemura and Yong Yang
Biosensors 2025, 15(7), 458; https://doi.org/10.3390/bios15070458 - 16 Jul 2025
Abstract
Highly infectious and pathogenic viruses seriously threaten global public health, underscoring the need for rapid and accurate diagnostic methods to effectively manage and control outbreaks. In this study, we developed a comprehensive Surface-Enhanced Raman Scattering–Lateral Flow Immunoassay (SERS-LFIA) detection system that integrates SERS scanning imaging with artificial intelligence (AI)-based result discrimination. The system is built on an ultra-sensitive SERS-LFIA strip with SiO2-Au NSs as the immunoprobe (theoretical limit of detection (LOD) of 1.8 pg/mL). On this basis, a negative–positive discrimination method combining SERS scanning imaging with a deep learning model (ResNet-18) was developed to analyze probe distribution patterns near the T line. The proposed machine learning method significantly reduced the interference of abnormal signals and achieved reliable detection at concentrations as low as 2.5 pg/mL, close to the theoretical Raman LOD. The ResNet-18 image recognition model reached an accuracy of 100% on the training set and 94.52% on the testing set. In summary, the proposed SERS-LFIA detection system, which integrates detection, scanning, imaging, and automated AI result determination, simplifies the detection process, eliminates the need for specialized personnel, reduces test time, and improves diagnostic reliability. It exhibits great clinical potential and offers a robust technical foundation for detecting other highly pathogenic viruses, providing a versatile and highly sensitive detection method adaptable to future pandemic prevention. Full article
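The negative–positive discrimination step reduces a scanned SERS intensity map to a binary call based on probe distribution near the T line. As an illustrative sketch only: the study uses a trained ResNet-18 on probe-distribution images, whereas here a simple mean-intensity threshold over the T-line region stands in for that model, and all intensity values and parameters are hypothetical.

```python
# Illustrative sketch: reducing a SERS scanning image to a T-line decision.
# A mean-intensity threshold stands in for the paper's ResNet-18 classifier;
# all numbers are hypothetical placeholders, not data from the study.

def t_line_positive(image, t_rows, threshold):
    """Classify a scan as positive when the mean intensity over the
    T-line rows exceeds the threshold."""
    region = [v for r in t_rows for v in image[r]]
    return sum(region) / len(region) > threshold

scan = [
    [0.1, 0.2, 0.1, 0.1],   # background row
    [0.9, 1.1, 1.0, 0.8],   # T-line rows with bound SERS immunoprobes
    [1.0, 0.9, 1.2, 1.1],
    [0.1, 0.1, 0.2, 0.1],   # background row
]
print(t_line_positive(scan, t_rows=[1, 2], threshold=0.5))  # -> True
```

The advantage of learning the spatial probe distribution (as ResNet-18 does) over a plain threshold like this one is robustness to localized abnormal signals, which is exactly the interference the authors report suppressing.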
(This article belongs to the Special Issue Surface-Enhanced Raman Scattering in Biosensing Applications)

18 pages, 1010 KiB  
Review
Engineering IsPETase and Its Homologues: Advances in Enzyme Discovery and Host Optimisation
by Tolu Sunday Ogunlusi, Sylvester Sapele Ikoyo, Mohammad Dadashipour and Hong Gao
Int. J. Mol. Sci. 2025, 26(14), 6797; https://doi.org/10.3390/ijms26146797 - 16 Jul 2025
Abstract
Polyethylene terephthalate (PET) pollution represents a significant environmental challenge due to its widespread use and recalcitrant nature. PET-degrading enzymes, particularly Ideonella sakaiensis PETases (IsPETase), have emerged as promising biocatalysts for mitigating this problem. This review provides a comprehensive overview of recent advancements in the discovery and heterologous expression of IsPETase and closely related enzymes. We highlight innovative approaches, such as in silico and AI-based enzyme screening and advanced screening assays. Strategies to enhance enzyme secretion and solubility, such as using signal peptides, fusion tags, chaperone co-expression, cell surface display systems, and membrane permeability modulation, are critically evaluated. Despite considerable progress, challenges remain in achieving industrial-scale production and application. Future research must focus on integrating cutting-edge molecular biology techniques with host-specific optimisation to achieve sustainable and cost-effective solutions for PET biodegradation and recycling. This review aims to provide a foundation for further exploration and innovation in the field of enzymatic plastic degradation. Full article
(This article belongs to the Special Issue The Characterization and Application of Enzymes in Bioprocesses)

18 pages, 54426 KiB  
Article
Artificial Intelligence-Driven Identification of Favorable Geothermal Sites Based on Radioactive Heat Production: Case Study from Western Türkiye
by Elif Meriç İlkimen, Cihan Çolak, Mahrad Pisheh Var, Hakan Başağaoğlu, Debaditya Chakraborty and Ali Aydın
Appl. Sci. 2025, 15(14), 7842; https://doi.org/10.3390/app15147842 - 13 Jul 2025
Abstract
In recent years, the exploration and utilization of geothermal energy have received growing attention as a sustainable alternative to conventional energy sources. Reliable, data-driven identification of geothermal reservoirs, particularly in crystalline basement terrains, is crucial for reducing exploration uncertainties and costs. In such geological settings, magnetic susceptibility, radioactive heat production, and seismic wave characteristics play a vital role in evaluating geothermal energy potential. Building on this foundation, our study integrates in situ and laboratory measurements, collected using advanced sensors from spatially diverse locations, with statistical and unsupervised artificial intelligence (AI) clustering models. This integrated framework improves the effectiveness and reliability of identifying clusters of potential geothermal sites. We applied this methodology to the migmatitic gneisses within the Simav Basin in western Türkiye. Among the statistical and AI-based models evaluated, Density-Based Spatial Clustering of Applications with Noise and Autoencoder-Based Deep Clustering identified the most promising and spatially confined subregions with high geothermal production potential. The potential geothermal sites identified by the AI models align closely with those identified by statistical models and show strong agreement with independent datasets, including existing drilling locations, thermal springs, and the distribution of major earthquake epicenters in the region. Full article
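One of the clustering models named above, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), groups measurement sites that are dense in feature space and flags isolated sites as noise. The sketch below is a minimal pure-Python illustration of the algorithm itself; the two-dimensional toy points, `eps`, and `min_pts` are hypothetical and do not reproduce the study's actual features, scaling, or parameters.

```python
# Illustrative sketch of DBSCAN clustering (e.g., of site measurements such
# as radioactive heat production vs. magnetic susceptibility).
# eps, min_pts, and the sample points are hypothetical placeholders.

def dbscan(points, eps, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:
                queue.extend(j_seeds)  # j is also core: expand the cluster
    return labels

# Two dense groups of "sites" plus one isolated outlier:
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (20, 20)]
print(dbscan(pts, eps=0.5, min_pts=2))  # -> [0, 0, 0, 1, 1, 1, -1]
```

Because DBSCAN needs no preset cluster count and discards sparse outliers, it suits exactly the task described: delineating spatially confined high-potential subregions against a noisy regional background.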
(This article belongs to the Special Issue Applications of Machine Learning in Earth Sciences—2nd Edition)