Search Results (300)

Search Parameters:
Keywords = low-cost AI systems

15 pages, 740 KB  
Article
A Scalable and Low-Cost Mobile RAG Architecture for AI-Augmented Learning in Higher Education
by Rodolfo Bojorque, Andrea Plaza, Pilar Morquecho and Fernando Moscoso
Appl. Sci. 2026, 16(2), 963; https://doi.org/10.3390/app16020963 - 17 Jan 2026
Abstract
This paper presents a scalable and low-cost Retrieval-Augmented Generation (RAG) architecture designed to enhance learning in university-level courses, with a particular focus on supporting students from economically disadvantaged backgrounds. Recent advances in large language models (LLMs) have demonstrated considerable potential in educational contexts; however, their adoption is often limited by computational costs and the need for stable broadband access, issues that disproportionately affect low-income learners. To address this challenge, we propose a lightweight, mobile-friendly RAG system that integrates the LLaMA language model with the Milvus vector database, enabling efficient on-device retrieval and context-grounded generation using only modest hardware resources. The system was implemented in a university-level Data Mining course and evaluated over four semesters using a quasi-experimental design with randomized assignment to experimental and control groups. Students in the experimental group had voluntary access to the RAG assistant, while the control group followed the same instructional schedule without exposure to the tool. The results show statistically significant improvements in academic performance for the experimental group, with p < 0.01 in the first semester and p < 0.001 in the subsequent three semesters. Effect sizes, measured using Hedges' g to account for small cohort sizes, increased from 0.56 (moderate) to 1.52 (extremely large), demonstrating a clear and growing pedagogical impact over time. Qualitative feedback further indicates increased learner autonomy, confidence, and engagement. These findings highlight the potential of mobile RAG architectures to deliver equitable, high-quality AI support to students regardless of socioeconomic status. The proposed solution offers a practical engineering pathway for institutions seeking inclusive, scalable, and resource-efficient approaches to AI-enhanced education.
(This article belongs to the Section Computing and Artificial Intelligence)
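The retrieve-then-generate pattern at the heart of the RAG architecture above can be sketched in a few lines. This is a toy illustration with hand-picked embedding vectors, not the authors' LLaMA/Milvus implementation; `retrieve` and `build_prompt` are hypothetical names.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k passages whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

def build_prompt(question, passages):
    """Ground the LLM's answer in the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

# Toy corpus: pre-computed (invented) embeddings for course material snippets.
corpus = [
    {"text": "Decision trees split on information gain.", "vec": [0.9, 0.1, 0.0]},
    {"text": "K-means clusters points by centroid distance.", "vec": [0.1, 0.9, 0.0]},
    {"text": "Apriori mines frequent itemsets.", "vec": [0.0, 0.2, 0.9]},
]
top = retrieve([0.8, 0.2, 0.1], corpus, k=1)
print(build_prompt("How do decision trees choose splits?", top))
```

In a production system the corpus embeddings would live in a vector database such as Milvus and the prompt would be passed to the on-device LLM.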

22 pages, 2873 KB  
Article
Resource-Constrained Edge AI Solution for Real-Time Pest and Disease Detection in Chili Pepper Fields
by Hoyoung Chung, Jin-Hwi Kim, Junseong Ahn, Yoona Chung, Eunchan Kim and Wookjae Heo
Agriculture 2026, 16(2), 223; https://doi.org/10.3390/agriculture16020223 - 15 Jan 2026
Viewed by 41
Abstract
This paper presents a low-cost, fully on-premise Edge Artificial Intelligence (AI) system designed to support real-time pest and disease detection in open-field chili pepper cultivation. The proposed architecture integrates AI-Thinker ESP32-CAM module (ESP32-CAM) image acquisition nodes (“Sticks”) with a Raspberry Pi 5–based edge server (“Module”), forming a plug-and-play Internet of Things (IoT) pipeline that enables autonomous operation upon simple power-up, making it suitable for aging farmers and resource-limited environments. A Leaf-First 2-Stage vision model was developed by combining YOLOv8n-based leaf detection with a lightweight ResNet-18 classifier to improve the diagnostic accuracy for small lesions commonly occurring in dense pepper foliage. To address network instability, which is a major challenge in open-field agriculture, the system adopted a dual-protocol communication design using Hyper Text Transfer Protocol (HTTP) for Joint Photographic Experts Group (JPEG) transmission and Message Queuing Telemetry Transport (MQTT) for event-driven feedback, enhanced by Redis-based asynchronous buffering and state recovery. Deployment-oriented experiments under controlled conditions demonstrated an average end-to-end latency of 0.86 s from image capture to Light Emitting Diode (LED) alert, validating the system’s suitability for real-time decision support in crop management. Compared to heavier models (e.g., YOLOv11 and ResNet-50), the lightweight architecture reduced the computational cost by more than 60%, with minimal loss in detection accuracy. This study highlights the practical feasibility of resource-constrained Edge AI systems for open-field smart farming by emphasizing system-level integration, robustness, and real-time operability, and provides a deployment-oriented framework for future extension to other crops.
(This article belongs to the Special Issue Smart Sensor-Based Systems for Crop Monitoring)
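The Leaf-First 2-Stage idea (detect leaf regions first, then classify each cropped region) can be sketched with stub stages standing in for YOLOv8n and ResNet-18. The stubs, threshold, and image layout below are assumptions for illustration only.

```python
def detect_leaves(image):
    """Stage 1 stub: return (x, y, w, h, confidence) leaf boxes.

    A real system would run a detector such as YOLOv8n here; this stub
    just proposes two fixed regions with different confidences.
    """
    h, w = len(image), len(image[0])
    return [(0, 0, w // 2, h // 2, 0.9), (w // 2, 0, w // 2, h // 2, 0.4)]

def classify_crop(crop):
    """Stage 2 stub: label a cropped leaf region by mean intensity.

    A real system would run a lightweight classifier (e.g. ResNet-18).
    """
    mean = sum(sum(row) for row in crop) / (len(crop) * len(crop[0]))
    return "diseased" if mean < 100 else "healthy"

def pipeline(image, conf_threshold=0.5):
    """Detect leaf boxes, drop low-confidence ones, classify the rest."""
    results = []
    for (x, y, w, h, conf) in detect_leaves(image):
        if conf < conf_threshold:  # discard low-confidence proposals
            continue
        crop = [row[x:x + w] for row in image[y:y + h]]
        results.append(classify_crop(crop))
    return results

print(pipeline([[50] * 8 for _ in range(8)]))  # ['diseased']
```

The two-stage split is what lets small lesions be judged on a zoomed-in crop rather than the full frame.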

28 pages, 4532 KB  
Article
Green Transition Risks in the Construction Sector: A Qualitative Analysis of European Green Deal Policy Documents
by Muhammad Mubasher, Alok Rawat, Emlyn Witt and Simo Ilomets
Sustainability 2026, 18(2), 822; https://doi.org/10.3390/su18020822 - 14 Jan 2026
Viewed by 90
Abstract
The construction sector is central to achieving the objectives of the European Green Deal (EGD). While existing research on transition risks predominantly focuses on project- or firm-level challenges, less is known about the transition risks implied by high-level EU policy documents. This study addresses this gap by systematically analysing 101 EGD-related policy and guidance documents published between 2019 and February 2025. A mixed human–AI content analysis approach was applied, combining human expert manual coding with automated validation using large language models (Kimi K2 and GLM 4.6). The final dataset contains 2752 coded risk references organised into eight main categories and twenty-six subcategories. Results show that transition risks are most frequently associated with environmental, economic, and legislative domains, with Climate Change Impact, Cost of Transition, Pollution, Investment Risks, and Implementation Variability emerging as the most prominent risks across the corpus. Technological and social risks appear less frequently but highlight important systemic and contextual vulnerabilities. Overall, analysis of the EGD policy texts reveals the green transition as being constrained not only by environmental pressures but also by financial feasibility and execution capacity. The study provides a structured, policy-level risk profile of the EGD and demonstrates the value of hybrid human–LLM analysis for large-scale policy content analysis and interpretation. These insights help policymakers and industry stakeholders anticipate structural uncertainties that may affect the construction sector’s transition toward a low-carbon, circular economy.
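Aggregating coded risk references into category counts, as in the 2752-reference dataset above, is a straightforward tally. A minimal sketch with hypothetical rows follows; the category names come from the abstract, but the document IDs and row contents are invented.

```python
from collections import Counter

# Hypothetical coded references: (document_id, main_category, subcategory).
coded = [
    ("doc01", "Environmental", "Climate Change Impact"),
    ("doc01", "Economic", "Cost of Transition"),
    ("doc02", "Environmental", "Pollution"),
    ("doc02", "Economic", "Investment Risks"),
    ("doc03", "Legislative", "Implementation Variability"),
    ("doc03", "Environmental", "Climate Change Impact"),
]

# Tally risk references per main category and per subcategory.
by_category = Counter(cat for _, cat, _ in coded)
by_subcategory = Counter(sub for _, _, sub in coded)

print(by_category.most_common(1))  # [('Environmental', 3)]
```

The same tallies, run over the full corpus, yield the category frequencies the study reports.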

34 pages, 12645 KB  
Article
Multimodal Intelligent Perception at an Intersection: Pedestrian and Vehicle Flow Dynamics Using a Pipeline-Based Traffic Analysis System
by Bao Rong Chang, Hsiu-Fen Tsai and Chen-Chia Chen
Electronics 2026, 15(2), 353; https://doi.org/10.3390/electronics15020353 - 13 Jan 2026
Viewed by 190
Abstract
Traditional automated monitoring systems adopted for Intersection Traffic Control still face challenges, including high costs, maintenance difficulties, insufficient coverage, poor multimodal data integration, and limited traffic information analysis. To address these issues, the study proposes a sovereign AI-driven Smart Transportation governance approach, developing a mobile AI solution equipped with multimodal perception, task decomposition, memory, reasoning, and multi-agent collaboration capabilities. The proposed system integrates computer vision, multi-object tracking, natural language processing, Retrieval-Augmented Generation (RAG), and Large Language Models (LLMs) to construct a Pipeline-based Traffic Analysis System (PTAS). The PTAS can produce real-time statistics on pedestrian and vehicle flows at intersections, incorporating potential risk factors such as traffic accidents, construction activities, and weather conditions for multimodal data fusion analysis, thereby providing forward-looking traffic insights. Experimental results demonstrate that the enhanced DuCRG-YOLOv11n pre-trained model, equipped with our proposed new activation function βsilu, can accurately identify various vehicle types in object detection, achieving a frame rate of 68.25 FPS and a precision of 91.4%. Combined with ByteTrack, it can track over 90% of vehicles in medium- to low-density traffic scenarios, achieving a MOTA of 0.719 and an MOTP of 0.08735. In traffic flow analysis, the RAG of Vertex AI, combined with Claude Sonnet 4 LLMs, provides a more comprehensive view, precisely interpreting the causes of peak-hour congestion and effectively compensating for missing data through contextual explanations. The proposed method can enhance the efficiency of urban traffic regulation and optimize decision support in intelligent transportation systems.
(This article belongs to the Special Issue Interactive Design for Autonomous Driving Vehicles)
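Flow statistics from multi-object tracking typically reduce to counting distinct track IDs that cross a virtual line at the intersection. A minimal sketch of that counting step follows; this is not the PTAS implementation, and the track data layout is an assumption.

```python
def count_crossings(tracks, line_y):
    """Count distinct track IDs whose centroid crosses a horizontal line.

    tracks: {track_id: [(frame, x, y), ...]} centroid histories.
    A crossing is a sign change of (y - line_y) between consecutive frames.
    """
    crossed = set()
    for tid, history in tracks.items():
        ys = [y for _, _, y in sorted(history)]  # order by frame number
        for prev, curr in zip(ys, ys[1:]):
            if (prev - line_y) * (curr - line_y) < 0:  # sign change = crossing
                crossed.add(tid)
                break
    return len(crossed)

tracks = {
    1: [(0, 10, 5), (1, 10, 15), (2, 10, 25)],  # moves across y = 20
    2: [(0, 30, 5), (1, 30, 8)],                # stays on one side
}
print(count_crossings(tracks, line_y=20))  # 1
```

Separate lines per lane and per travel direction give the directional pedestrian and vehicle counts a system like PTAS reports.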

32 pages, 946 KB  
Review
Paper-Based Microfluidic Chips for At-Home Point-of-Care Nucleic Acid Testing: Applications and Challenges
by Hao Liu, Yuhan Jia, Yitong Jiang, You Nie and Rongzhang Hao
Diagnostics 2026, 16(2), 251; https://doi.org/10.3390/diagnostics16020251 - 13 Jan 2026
Viewed by 134
Abstract
Along with the growing demands for personalized medicine and public health surveillance, diagnostic technologies capable of rapid and accurate pathogen nucleic acid testing in home settings are becoming increasingly crucial. Paper-based microfluidic chips (μPADs) have emerged as a potential core platform for enabling molecular testing at home, owing to their advantages of low cost, portability, and independence from complex instrumentation. However, significant challenges remain in current μPAD systems regarding nucleic acid extraction efficiency, isothermal amplification stability, and signal readout standardization, which hinder their practical and large-scale application. This review systematically summarizes recent research progress in μPADs for home-based nucleic acid testing from four key aspects: extraction–amplification–detection system integration, with a particular focus on the synergistic effects and development trends of critical technologies such as material engineering, fluid control, signal transduction, and intelligent readout. We further analyze typical application cases of this technology in the rapid screening of infectious diseases. Promising optimization pathways are proposed, focusing on standardized manufacturing, cold-chain-independent storage, and AI-assisted result interpretation, aiming to provide a feasible framework and forward-looking perspectives for constructing home-based molecular diagnostic systems.
(This article belongs to the Special Issue Point-of-Care Testing (POCT) for Infectious Diseases)

20 pages, 2221 KB  
Article
Hybrid Web Architecture with AI and Mobile Notifications to Optimize Incident Management in the Public Sector
by Luis Alberto Pfuño Alccahuamani, Anthony Meza Bautista and Hesmeralda Rojas
Computers 2026, 15(1), 47; https://doi.org/10.3390/computers15010047 - 12 Jan 2026
Viewed by 125
Abstract
This study addresses the persistent inefficiencies in incident management within regional public institutions, where dispersed offices and limited digital infrastructure constrain timely technical support. The research aims to evaluate whether a hybrid web architecture integrating AI-assisted interaction and mobile notifications can significantly improve efficiency in this context. The ITIMS (Intelligent Technical Incident Management System) was designed using a Laravel 10 MVC backend, a responsive Bootstrap 5 interface, and a relational MariaDB/MySQL model optimized with migrations and composite indexes, and incorporated two low-cost integrations: a stateless AI chatbot through the OpenRouter API and asynchronous mobile notifications using the Telegram Bot API managed via Laravel Queues and webhooks. Developed through four Scrum sprints and deployed on an institutional XAMPP environment, the solution was evaluated from January to April 2025 with 100 participants using operational metrics and the QWU usability instrument. Results show a reduction in incident resolution time from 120 to 31 min (74.17%), an 85.48% chatbot interaction success rate, a 94.12% notification open rate, and a 99.34% incident resolution rate, alongside an 88% usability score. These findings indicate that a modular, low-cost, and scalable architecture can effectively strengthen digital transformation efforts in the public sector, especially in regions with resource and connectivity constraints.
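The asynchronous notification design (Laravel Queues plus a Telegram webhook in the paper) boils down to a producer/worker queue: the request handler enqueues a job and returns immediately, while a background worker delivers the message. A minimal in-process Python sketch follows; the job fields and message text are hypothetical stand-ins for the actual Telegram Bot API call.

```python
import queue
import threading

jobs = queue.Queue()
sent = []  # stands in for messages delivered via the Telegram Bot API

def worker():
    """Drain the queue in the background so request handling never blocks."""
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut down the worker
            break
        sent.append(f"notify {job['user']}: incident {job['incident_id']} updated")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The web request handler just enqueues and returns immediately.
jobs.put({"user": "tech-team", "incident_id": 42})

jobs.put(None)  # stop the worker once the queue drains
t.join()
print(sent[0])  # notify tech-team: incident 42 updated
```

Decoupling delivery from the request path is what keeps the reported response times low even when the notification backend is slow.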

64 pages, 13395 KB  
Review
Low-Cost Malware Detection with Artificial Intelligence on Single Board Computers
by Phil Steadman, Paul Jenkins, Rajkumar Singh Rathore and Chaminda Hewage
Future Internet 2026, 18(1), 46; https://doi.org/10.3390/fi18010046 - 12 Jan 2026
Viewed by 492
Abstract
The proliferation of Internet of Things (IoT) devices has significantly expanded the threat landscape for malicious software (malware), rendering traditional signature-based detection methods increasingly ineffective in coping with the volume and evolving nature of modern threats. In response, researchers are utilising artificial intelligence (AI) for a more dynamic and robust malware detection solution. One innovative approach focuses on image classification techniques to detect malware on resource-constrained Single-Board Computers (SBCs) such as the Raspberry Pi. In this method, malware binaries are converted into 2D images, which can be analysed by deep learning models such as convolutional neural networks (CNNs) to classify them as benign or malicious. The results show that the image-based approach demonstrates high efficacy, with many studies reporting detection accuracy rates exceeding 98%. That said, deploying these demanding models on devices with limited processing power and memory remains a significant challenge, particularly with respect to both computational and time complexity. Overcoming this issue requires critical model optimisation strategies. Successful approaches include the use of lightweight CNN architectures and federated learning, which can preserve privacy by training models on decentralised data. This hybrid workflow, in which models are trained on powerful servers before being deployed on SBCs, is an emerging field attracting significant interest in cybersecurity. This paper synthesises the current state of the art, performance compromises, and optimisation techniques, contributing to the understanding of how AI and image representation can enable effective low-cost malware detection on resource-constrained systems.
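The binary-to-image conversion the review describes is commonly done by mapping each byte of the executable to a grayscale pixel at a fixed row width. A minimal sketch follows; the width and zero-padding scheme are assumptions for illustration, not taken from any specific study.

```python
def bytes_to_image(data: bytes, width: int = 16):
    """Map each byte to a grayscale pixel (0-255), row-major at fixed width.

    The tail is zero-padded so the last row is complete; the resulting 2D
    list can be fed to an image classifier such as a CNN.
    """
    pad = (-len(data)) % width
    padded = data + b"\x00" * pad
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

img = bytes_to_image(bytes(range(40)), width=16)
print(len(img), len(img[0]))  # 3 16
```

Because byte patterns from the same malware family produce visually similar textures, a CNN trained on these images can separate families without executing the binary.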

20 pages, 641 KB  
Review
Telemedicine in Oral and Maxillofacial Surgery: A Narrative Review of Clinical Applications, Outcomes and Future Directions
by Luigi Angelo Vaira, Valentina Micheluzzi, Jerome R. Lechien, Antonino Maniaci, Fabio Maglitto, Giovanni Cammaroto, Stefania Troise, Carlos M. Chiesa-Estomba, Giuseppe Consorti, Giulio Cirignaco, Alberto Maria Saibene, Giannicola Iannella, Carlos Navarro-Cuéllar, Giovanni Maria Soro, Giovanni Salzano, Gavino Casu and Giacomo De Riu
J. Clin. Med. 2026, 15(2), 452; https://doi.org/10.3390/jcm15020452 - 7 Jan 2026
Viewed by 156
Abstract
Objectives: Telemedicine has rapidly expanded in oral and maxillofacial surgery (OMFS), especially during the COVID-19 pandemic, but its specific roles and limitations across the care pathway remain unclear. This narrative review aimed to map telemedicine modalities and indications in OMFS, summarize reported outcomes, and identify priorities for future research. Methods: A narrative synthesis was undertaken after a systematic search of medical and engineering databases to 10 October 2025. Studies applying telemedicine, telehealth, telepresence or teleradiology to OMFS practice were eligible, including trials, observational cohorts, technical reports and surveys. Data were extracted in duplicate and organized thematically; heterogeneity precluded meta-analysis. Results: Fifty studies met the inclusion criteria. Telemedicine was mainly used for preoperative consultation and triage, postoperative follow-up, trauma teleradiology and tele-expertise, oncologic and oral medicine follow-up, temporomandibular disorders, and education or humanitarian work. In low-risk outpatient and postoperative settings, remote consultations showed high concordance with in-person plans, similar complication or reattendance rates, reduced travel, and high satisfaction. In trauma networks, telemedicine supported timely triage and reduced unnecessary inter-hospital transfers. Evidence in oral oncology and complex mucosal disease was more cautious, favouring hybrid models and escalation to face-to-face assessment. Data on cost-effectiveness and impacts on equity were limited. Conclusions: Telemedicine in OMFS has moved from niche innovation to a pragmatic adjunct across the clinical pathway. Current evidence supports its use for selected pre- and postoperative care and trauma triage within risk-stratified hybrid models, while underscoring the need for stronger comparative and implementation studies, clear governance on equity and data protection, and alignment with wider digital and AI-enabled health systems.
(This article belongs to the Special Issue Recent Advances in Reconstructive Oral and Maxillofacial Surgery)

14 pages, 1025 KB  
Article
visionMC: A Low-Cost AI System Using Facial Recognition and Voice Interaction to Optimize Primary Care Workflows
by Marius Cioca and Adriana Lavinia Cioca
Inventions 2026, 11(1), 6; https://doi.org/10.3390/inventions11010006 - 6 Jan 2026
Viewed by 160
Abstract
This pilot study evaluated the visionMC system, a low-cost artificial intelligence system integrating HOG-based facial recognition and voice notifications, for workflow optimization in a family medicine practice. Implemented on a Raspberry Pi 4, the system was tested over two weeks with 50 patients. It achieved 85% recognition accuracy and an average detection time of 3.4 s. Compared with baseline, patient waiting times were substantially reduced, and administrative workload decreased by 5–7 min per patient. A satisfaction survey (N = 35) indicated high acceptance, with all scores above 4.5/5, particularly for usefulness and waiting time reduction. These results suggest that visionMC can improve efficiency and enhance patient experience with minimal financial and technical requirements. Larger multicenter studies are warranted to confirm scalability and generalizability. visionMC demonstrates that effective AI integration in small practices is feasible with minimal resources, supporting scalable digital health transformation. Beyond biometric identification, the system’s primary contribution is streamlining practice management by instantly displaying the arriving patient and enabling rapid chart preparation. Personalized greetings enhance patient experience, while email alerts on motion events provide a secondary security benefit. These combined effects drove the observed reductions in waiting and administrative times.
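The HOG (histogram of oriented gradients) features underlying visionMC's facial recognition reduce an image patch to a histogram of gradient directions. The bare-bones sketch below shows only that core idea, for a single cell with no block normalization; `orientation_histogram` is an illustrative name, not the system's API.

```python
import math

def orientation_histogram(img, bins=9):
    """Normalized histogram of gradient orientations (0-180 degrees).

    img is a 2D list of pixel intensities. Each interior pixel votes for
    the bin of its gradient direction, weighted by gradient magnitude.
    """
    hist = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# A horizontal intensity ramp has purely horizontal gradients (angle 0).
ramp = [[x * 10 for x in range(6)] for _ in range(6)]
print(orientation_histogram(ramp)[0])
```

A full HOG descriptor tiles the face into many such cells and concatenates the normalized histograms, which is cheap enough to run on a Raspberry Pi.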

34 pages, 4007 KB  
Review
Symbiotic Intelligence for Sustainable Cities: A Decadal Review of Generative AI, Ethical Algorithms, and Global South Innovations in Urban Green Space Research
by Tianrong Xu, Ainoriza Mohd Aini, Nikmatul Adha Nordin, Qi Shen, Liyan Huang and Wenbo Xu
Buildings 2026, 16(1), 231; https://doi.org/10.3390/buildings16010231 - 5 Jan 2026
Viewed by 233
Abstract
Urban Green Spaces (UGS) are integral components of the built environment, significantly contributing to its ecological, social, and performance dimensions, including microclimate regulation, occupant well-being, and energy efficiency. This decadal review (2015–2025) systematically analyzes 70 high-impact studies to propose a “Symbiotic Intelligence” framework. This framework integrates Generative AI, ethical algorithms, and innovations from the Global South to revolutionize the planning, design, and management of UGS within building landscapes and urban fabrics. Our analysis reveals that Generative AI can optimize participatory design processes and generate efficient planning schemes, increasing public satisfaction by 41% and achieving fivefold efficiency gains. Metaverse digital twins enable high-fidelity simulation of UGS performance with a mere 3.2% error rate, providing robust tools for building environment analysis. Ethical algorithms, employing fairness metrics and SHAP values, are pivotal for equitable resource distribution, having been shown to reduce UGS allocation disparities in low-income communities by 67%. Meanwhile, innovations from the Global South, such as lightweight federated learning and low-cost sensors, offer scalable solutions for building-environment monitoring under resource constraints, reducing model generalization error by 18% and decreasing data acquisition costs by 90%. However, persistent challenges, including data heterogeneity, algorithmic opacity (with only 23% of studies adopting interpretability tools), and significant data gaps in the Global South (coverage < 15%), hinder equitable progress. Future research should prioritize developing UGS-climate-building coupling models, decentralized federated frameworks for building management systems, and blockchain-based participatory planning to establish a more robust foundation for sustainable built environments. This study provides an interdisciplinary roadmap for integrating intelligent UGS into building practices, contributing to the advancement of green buildings, occupant-centric design, and the overall sustainability and resilience of our built environment.

21 pages, 279 KB  
Review
AI Applications in Electrocardiography for Ischemic and Structural Heart Disease: A Review of the Current State
by Eugene J. Kim, Dhir Gala, Mohammed Ayyad, Manaal Pramanik and Amgad N. Makaryus
J. Clin. Med. 2026, 15(1), 316; https://doi.org/10.3390/jcm15010316 - 1 Jan 2026
Viewed by 299
Abstract
Cardiovascular disease is the leading cause of morbidity and mortality worldwide, with ischemic and structural heart diseases being key contributors. While the 12-lead electrocardiogram (ECG) is a common low-cost diagnostic test, its interpretation is limited by human variability. Through machine learning with large diverse ECG data sets and artificial intelligence (AI) algorithms, ECG analysis can be automated for pattern recognition with higher accuracy. AI-augmented ECG algorithms have been demonstrated to be able to detect myocardial infarction with high accuracy and reduce door-to-balloon coronary intervention times. Similar models can be utilized to detect subtle ECG waveforms suggestive of current or future asymptomatic left ventricular dysfunction, aortic stenosis, and hypertrophic cardiomyopathy. Despite these promising results, there is concern for generalizability and bias or errors in training data. As AI systems evolve to multimodal integration, AI-augmented ECG has the potential to redefine cardiovascular diagnostics and enable earlier detection, risk stratification, and precision-guided interventions.
26 pages, 3943 KB  
Review
Review of Numerical Simulation of Overburden Grouting in Foundation Improvement
by Pengfei Guo, Weiquan Zhao, Linxiu Qu, Xifeng Li, Yahui Ma and Pan Li
Geotechnics 2026, 6(1), 3; https://doi.org/10.3390/geotechnics6010003 - 1 Jan 2026
Viewed by 246
Abstract
Overburden layers, composed of unconsolidated sediments, are widely distributed in construction, transportation, and water conservancy projects, but their inherent defects (e.g., developed pores, low strength) easily induce engineering disasters. Grouting is a core reinforcement technology, yet traditional design relying on empirical formulas and on-site trials suffers from high costs and low prediction accuracy. Numerical simulation has become a key bridge connecting grouting theory and practice. This study systematically reviews the numerical simulation of overburden grouting based on 82 core articles screened via the PRISMA framework. First, the theoretical system is clarified: core governing equations for seepage, stress, grout diffusion, and chemical fields, as well as their coupling mechanisms (e.g., HM coupling via the effective stress principle), are sorted out, and the advantages and disadvantages of different equations are quantified. The material parameter characterization focuses on grout rheological models (Newtonian, power-law, Bingham) and overburden heterogeneity modeling. Second, numerical methods and engineering applications are analyzed: discrete (DEM) and continuous (FEM/FDM) methods, as well as their coupling modes, are compared; the simulation advantages (visualization of diffusion mechanisms, parameter controllability, low-cost risk prediction) are verified by typical cases. Third, current challenges and trends are identified: bottlenecks include the poor adaptability of models in heterogeneous strata, an unbalanced accuracy–efficiency trade-off, insufficient rheological models for complex grouts, and theoretical limitations of multi-field coupling. Future directions involve AI-driven parameter optimization, cross-scale simulation, HPC-enhanced computing efficiency, and targeted models for environmentally friendly grouts. The study concludes that overburden grouting simulation has formed a complete “theory–parameter–method–application” system, evolving from a “theoretical tool” to the “core of engineering decision-making”. The core contradiction lies in the conflict between refinement requirements and technical limitations, and breakthroughs rely on the interdisciplinary integration of AI, multi-scale simulation, and HPC. This review provides a clear technical context for researchers and practical reference for engineering technicians.
(This article belongs to the Special Issue Recent Advances in Geotechnical Engineering (3rd Edition))
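The grout rheological models the review compares (Newtonian, power-law, Bingham) are simple constitutive relations between shear stress and shear rate. A minimal sketch of the textbook forms follows; this is not the paper's simulation code, and SI units (Pa, Pa·s, 1/s) are assumed.

```python
def bingham_stress(shear_rate, yield_stress, plastic_viscosity):
    """Bingham plastic: tau = tau_y + mu_p * gamma_dot.

    The grout behaves as a solid below the yield stress and flows
    linearly above it.
    """
    return yield_stress + plastic_viscosity * shear_rate

def power_law_stress(shear_rate, consistency_k, flow_index_n):
    """Power-law fluid: tau = K * gamma_dot**n (shear-thinning for n < 1)."""
    return consistency_k * shear_rate ** flow_index_n

def newtonian_stress(shear_rate, viscosity):
    """Newtonian fluid: tau = mu * gamma_dot."""
    return viscosity * shear_rate

print(bingham_stress(10.0, 5.0, 0.2))  # 7.0
```

In a grouting simulation these relations close the momentum equations: the chosen model determines how far the grout front diffuses at a given injection pressure.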

21 pages, 3769 KB  
Article
Benchmarking Robust AI for Microrobot Detection with Ultrasound Imaging
by Ahmed Almaghthawi, Changyan He, Suhuai Luo, Furqan Alam, Majid Roshanfar and Lingbo Cheng
Actuators 2026, 15(1), 16; https://doi.org/10.3390/act15010016 - 29 Dec 2025
Viewed by 316
Abstract
Microrobots are emerging as transformative tools in minimally invasive medicine, with applications in non-invasive therapy, real-time diagnosis, and targeted drug delivery. Effective use of these systems critically depends on accurate detection and tracking of microrobots within the body. Among commonly used imaging modalities, [...] Read more.
Microrobots are emerging as transformative tools in minimally invasive medicine, with applications in non-invasive therapy, real-time diagnosis, and targeted drug delivery. Effective use of these systems critically depends on accurate detection and tracking of microrobots within the body. Among commonly used imaging modalities, including MRI, CT, and optical imaging, ultrasound (US) offers an advantageous balance of portability, low cost, non-ionizing safety, and high temporal resolution, making it particularly suitable for real-time microrobot monitoring. This study reviews current detection strategies and presents a comparative evaluation of six advanced AI-based multi-object detectors (ConvNeXt, Res2NeXt-101, ResNeSt-269, U-Net, and the latest YOLO variants v11 and v12) applied to microrobot detection in US imaging. Performance is assessed using standard metrics (AP50–95, precision, recall, and F1-score) and robustness to four visual perturbations: blur, brightness variation, occlusion, and speckle noise. Additionally, feature-level sensitivity analyses are conducted to identify the contributions of different visual cues. Computational efficiency is also measured to assess suitability for real-time deployment. Results show that ResNeSt-269 achieved the highest detection accuracy, followed by Res2NeXt-101 and ConvNeXt, while YOLO-based detectors provided superior computational efficiency. These findings offer actionable insights for developing robust and efficient microrobot tracking systems with strong potential in diagnostic and therapeutic healthcare applications. Full article
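The precision, recall, and F1 metrics used in the benchmark are derived from box overlaps between predictions and ground truth. A minimal sketch, assuming axis-aligned boxes and a greedy one-to-one match at an IoU threshold of 0.5 (the matching strategy here is an assumption for illustration, not taken from the paper):

```python
def box_area(r):
    """Area of a box given as (x1, y1, x2, y2)."""
    return (r[2] - r[0]) * (r[3] - r[1])

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union else 0.0

def prf1(preds, gts, thr=0.5):
    """Greedily match each prediction to the best unmatched ground-truth
    box at IoU >= thr; return (precision, recall, F1)."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            v = iou(p, g)
            if i not in matched and v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# One true positive and one false positive against a single ground-truth box:
p, r, f = prf1([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)])
```

AP50–95 extends this idea by averaging precision over recall levels and over IoU thresholds from 0.5 to 0.95.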

9 pages, 2357 KB  
Proceeding Paper
AI-Enhanced Mono-View Geometry for Digital Twin 3D Visualization in Autonomous Driving
by Ing-Chau Chang, Yu-Chiao Chang, Chunghui Kuo and Chin-En Yen
Eng. Proc. 2025, 120(1), 6; https://doi.org/10.3390/engproc2025120006 - 25 Dec 2025
Viewed by 314
Abstract
To address the critical problem of 3D object detection in autonomous driving scenarios, we developed a novel digital twin architecture. This architecture combines AI models with geometric optics algorithms of camera systems for autonomous vehicles, characterized by low computational cost and high generalization capability. The architecture leverages monocular images to estimate the real-world heights and 3D positions of objects using vanishing lines and the pinhole camera model. The You Only Look Once (YOLOv11) object detection model is employed for accurate object category identification. These components are seamlessly integrated to construct a digital twin system capable of real-time reconstruction of the surrounding 3D environment. This enables the autonomous driving system to perform real-time monitoring and optimized decision-making. Compared with conventional deep-learning-based 3D object detection models, the architecture offers several notable advantages. Firstly, it mitigates the significant reliance on large-scale labeled datasets typically required by deep learning approaches. Secondly, its decision-making process is inherently interpretable. Thirdly, it demonstrates robust generalization capabilities across diverse scenes and object types. Finally, its low computational complexity makes it particularly well-suited for resource-constrained in-vehicle edge devices. Preliminary experimental results validate the reliability of the proposed approach, showing a depth prediction error of less than 5% in driving scenarios. Furthermore, the proposed method runs significantly faster, requiring only 42%, 27%, and 22% of the runtime of MonoAMNet, MonoSAID, and MonoDFNet, respectively. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
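The depth estimate above rests on the pinhole projection relation, which can be sketched in a few lines. This omits the vanishing-line step the authors use to obtain real-world heights, and the focal length and object sizes below are made-up illustrative values:

```python
def depth_from_pixel_height(f_px, real_height_m, pixel_height):
    """Pinhole model: an object of real height H at depth Z projects to
    h = f * H / Z pixels, hence Z = f * H / h."""
    return f_px * real_height_m / pixel_height

def real_height_from_depth(f_px, depth_m, pixel_height):
    """Inverse relation: recover real-world height once depth is known."""
    return pixel_height * depth_m / f_px

# Illustration: a ~1.5 m object spanning 100 px under a 1000 px focal length
Z = depth_from_pixel_height(1000.0, 1.5, 100.0)  # 15.0 m
```

This is why the pipeline needs a height prior per object category (supplied here by the YOLOv11 class label plus vanishing-line geometry): with a monocular camera, depth and real-world size are only determined jointly.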

15 pages, 1841 KB  
Article
RFID Tag-Integrated Multi-Sensors with AIoT Cloud Platform for Food Quality Analysis
by Zeyu Cao, Zhipeng Wu and John Gray
Electronics 2026, 15(1), 106; https://doi.org/10.3390/electronics15010106 - 25 Dec 2025
Viewed by 415
Abstract
RFID (Radio Frequency Identification) technology has become an essential instrument in numerous industrial sectors, enhancing process efficiency and streamlining operations, allowing for the automated tracking of goods and equipment without the need for manual intervention. Nevertheless, the deployment of industrial IoT systems necessitates the establishment of complex sensor networks to enable detailed multi-parameter monitoring of items. Despite these advancements, challenges remain in item-level sensing, data analysis, and the management of power consumption. To mitigate these shortcomings, this study presents a holistic AI-assisted, semi-passive RFID-integrated multi-sensor system designed for robust food quality monitoring. The primary contributions are threefold: First, a compact (45 mm × 38 mm) semi-passive UHF RFID tag is developed, featuring a rechargeable lithium battery to ensure long-term operation and extend the readable range up to 10 m. Second, a dedicated IoT cloud platform is implemented to handle big data storage and visualization, ensuring reliable data management. Third, the system integrates a long short-term memory (LSTM) machine learning model to analyze sensing data for real-time food quality assessment. The system’s efficacy is validated through real-world experiments on food products, demonstrating its capability for low-cost, long-distance, and intelligent quality control. This technology enables low-cost, timely, and sustainable quality assessments over medium and long distances, with battery life extending up to 27 days under specific conditions. By deploying this technology, quantified food quality assessment and control can be achieved. Full article
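The paper's quality model is an LSTM over the tag's sensor streams. As a much simpler, self-contained illustration of turning a temperature log into a single quality indicator, the sketch below computes the mean kinetic temperature (MKT), a standard cold-chain summary of cumulative thermal stress; this is a generic baseline, not the method used in the paper:

```python
import math

def mean_kinetic_temperature(temps_c, dh_over_r=10000.0):
    """Mean kinetic temperature (MKT) of a temperature log, in Celsius.
    temps_c: recorded temperatures in Celsius at equal intervals.
    dh_over_r: activation energy over the gas constant, in Kelvin
    (~10,000 K is a commonly used default)."""
    temps_k = [t + 273.15 for t in temps_c]
    # Arrhenius-weighted average: warm excursions count more than cool spells
    mean_exp = sum(math.exp(-dh_over_r / t) for t in temps_k) / len(temps_k)
    return dh_over_r / (-math.log(mean_exp)) - 273.15

# A cold-chain log with one warm excursion; MKT exceeds the arithmetic mean
log = [4.0, 4.5, 5.0, 12.0, 4.0]
mkt = mean_kinetic_temperature(log)
```

Because degradation rates grow roughly exponentially with temperature, MKT weights warm excursions more heavily than a plain average, which matches the intuition behind feeding raw time-series data (rather than averages) into the LSTM.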
