Search Results (72)

Search Parameters:
Keywords = automated sensor-screening

19 pages, 4400 KB  
Article
Enhancing Fire Safety Education Through PLC and HMI-Driven Interactive Learning
by Musa Al-Yaman, Miral AlMashayeikh, Majd AlFedailat, Ahmad M. A. Malkawi and Majid Al-Taee
Fire 2026, 9(3), 121; https://doi.org/10.3390/fire9030121 - 12 Mar 2026
Viewed by 711
Abstract
Fire safety plays a vital role in protecting lives, property, and the environment, and it keeps communities and organizations running safely. Many existing fire pump control systems fall short in educational and small-to-medium industrial settings: they often control only one pump at a time, rely heavily on manual monitoring, and come with high costs that limit accessibility. To address these gaps, we developed an affordable, hands-on educational kit that brings real-world fire safety systems into the classroom using modern automation technology. The system is built around a Delta DVP12SA211R PLC chosen for its built-in real-time clock, integrated RS-232/RS-485 ports for reliable communication, and expanded with DVP16SP11R digital I/O and DVP04AD-S2 analog input modules to interface with simulated sensors mimicking smoke detection and water pressure. Students interact with the system through a Delta DOP-110IS HMI, which features Ethernet connectivity for remote observation, electrical isolation for safe operation, and a 200 ms screen update rate to ensure responsive, realistic feedback. The kit enables learners to explore critical emergency scenarios, including automatic switching between jockey and main pumps, low-pressure alerts, and system failover, transforming theoretical concepts into tangible skills. In user evaluations, 57.1% of students with no prior experience reported that the simulations closely mirrored real-world systems, while 80% of those with a fire safety background found the kit reinforced their existing knowledge; notably, 57.1% of instructors rated it as highly effective for teaching core fire safety principles across diverse learner profiles. By integrating industrial-grade hardware with scenario-based learning, this tool not only deepens understanding of fire protection systems but also better prepares future engineers for the practical demands of fire safety and industrial automation careers. Full article
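The jockey/main pump switching the kit teaches can be pictured as a simple threshold controller. This is an illustrative sketch only — the pressure values, units, and single-setpoint logic are assumptions for exposition, not taken from the article's PLC program:

```python
def pump_command(pressure_psi, jockey_on_below=110.0, main_on_below=90.0):
    """Decide which pumps should run for a given line pressure.

    Thresholds are hypothetical. A small pressure drop starts the jockey
    pump to make up minor leakage; a large drop (e.g. an open sprinkler
    head) starts the main pump. Real controllers add hysteresis, run-on
    timers, and failover between pumps.
    """
    if pressure_psi < main_on_below:
        return {"jockey": False, "main": True}   # large demand: main pump
    if pressure_psi < jockey_on_below:
        return {"jockey": True, "main": False}   # small drop: jockey tops up
    return {"jockey": False, "main": False}      # system pressurized
```

On the actual kit this logic lives in the Delta PLC's ladder program, with the HMI mirroring the pump states.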

22 pages, 2804 KB  
Article
A Comprehensive Evaluation Method for Greenhouse-Grown Lettuce Based on RGB Images and Hyperspectral Data
by Duoer Ma, Hong Ren, Qi Zeng, Yidi Liu, Lulu Ma, Qiang Zhang, Ze Zhang and Jiangli Wang
Agronomy 2026, 16(6), 600; https://doi.org/10.3390/agronomy16060600 - 11 Mar 2026
Viewed by 402
Abstract
Quality grading of greenhouse lettuce requires rapid external appearance screening and nondestructive internal quality assessment. However, existing detection methods struggle to simultaneously evaluate both external and internal quality while maintaining efficiency, resulting in a lack of scientific and comprehensive integrated evaluation standards for current crop grading. To address this issue, this study leveraged the technical strengths of different sensors to construct separate models: an RGB image-based monitoring model for external quality and a hyperspectral-based estimation model for internal quality. Using a combined objective–subjective weighting method, this approach scientifically integrated external and internal quality monitoring indicators to establish a comprehensive evaluation method for greenhouse lettuce quality. The results demonstrate that features such as canopy projection area, compactness, and color components can be extracted from RGB images. Combined with Ridge regression, this approach achieves high-accuracy estimation of lettuce fresh weight and leaf area (R2 ≥ 0.880). For intrinsic quality, by combining hyperspectral data with the CARS and SPA band selection algorithms, a Random Forest (RF)-based inversion model for chlorophyll, soluble sugar, protein, and vitamin C content was developed. The AHP-CRITIC method effectively resolved the weight imbalance caused by an excessive coefficient of variation in appearance indicators, thereby achieving the scientific integration of appearance and internal quality data. The grading outcomes of this integrated evaluation method were highly consistent with industry standards (kappa coefficient: 0.788). This approach establishes an effective link between the rapid monitoring of external and internal quality for comprehensive evaluation, providing a novel technical pathway and scientific basis for nondestructive post-harvest detection and automated grading of greenhouse vegetables. Full article
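The external-quality step pairs RGB-derived features with Ridge regression. A minimal closed-form sketch — the synthetic data in the test and the tiny regularization strength are illustrative, not the authors' dataset or tuning:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form Ridge regression: w = (X^T X + alpha*I)^{-1} X^T y.

    X holds standardized image features (e.g. canopy projection area,
    compactness, colour components); y is fresh weight or leaf area.
    """
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def r_squared(y, y_hat):
    """Coefficient of determination, the R^2 reported in the abstract."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

The closed form avoids iterative solvers and makes the role of `alpha` (shrinkage against collinear image features) explicit.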
(This article belongs to the Section Precision and Digital Agriculture)

14 pages, 1565 KB  
Article
Non-Invasive Detection of Coronary Artery Disease Using Wearable Vest with Integrated Phonocardiogram Sensors
by Matthew Fynn, Milan Marocchi, Javed Rashid, Yue Rong, Goutam Saha and Kayapanda Mandana
J. Vasc. Dis. 2026, 5(2), 11; https://doi.org/10.3390/jvd5020011 - 26 Feb 2026
Viewed by 470
Abstract
Background: Cardiovascular disease (CVD) remains the leading cause of death and disability worldwide. Among its subtypes, coronary artery disease (CAD) is the most common and often develops silently, without noticeable symptoms. CAD-related murmurs typically fall below the human hearing threshold, limiting the effectiveness of traditional stethoscope-based auscultation. Currently, the gold standard for CAD diagnosis is coronary angiography, an invasive and expensive procedure usually reserved for symptomatic patients. This highlights the global need for a non-invasive, cost-effective pre-screening tool for asymptomatic CAD detection. Objectives: This study investigates the effectiveness of a wearable vest equipped with multiple digital stethoscopes to detect CAD. By applying signal processing and machine learning to multichannel phonocardiogram (PCG) data, we aim to evaluate the accuracy of CAD detection. We further assess the impact of incorporating patient metadata to enhance model performance. Methods: Data were collected from 40 CAD patients and 40 non-CAD individuals using a wearable vest with seven embedded PCG sensors. Subjects performed 10 s breath-hold recordings in a clinical setting. Linear-frequency cepstral coefficients were extracted from the PCG signals and classified using a support vector machine. Metadata, including body mass index, blood pressure, type 2 diabetes, and hypertension, were integrated to assess performance gains. Results: A combination of four channels achieved an accuracy of 80.44%, a 7% improvement over the best single-channel result. Incorporating metadata increased accuracy to 82.08%. Conclusions: The wearable vest demonstrated promising clinical potential, exceeding a 75% sensitivity-specificity average, and may support accessible, automated CAD screening in future validated settings. Full article
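Linear-frequency cepstral coefficients follow the familiar cepstral chain (window → |FFT| → filterbank → log → DCT-II), with the triangular filters spaced linearly rather than mel-warped. A toy single-frame sketch; the filterbank size, frame length, and window are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def lfcc(signal, n_filters=20, n_coeffs=12, frame_len=1024):
    """Toy linear-frequency cepstral coefficients for one PCG frame."""
    frame = signal[:frame_len] * np.hanning(frame_len)
    spectrum = np.abs(np.fft.rfft(frame))
    # Triangular filters with LINEARLY spaced edges (the "L" in LFCC).
    n_bins = len(spectrum)
    edges = np.linspace(0, n_bins - 1, n_filters + 2).astype(int)
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        up = np.linspace(0, 1, mid - lo, endpoint=False)
        down = np.linspace(1, 0, hi - mid, endpoint=False)
        energies[i] = np.dot(spectrum[lo:hi], np.concatenate([up, down])) + 1e-10
    log_e = np.log(energies)
    # DCT-II decorrelates the log filterbank energies into cepstra.
    k = np.arange(n_coeffs)[:, None]
    n = np.arange(n_filters)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters))
    return dct @ log_e
```

In the study these per-frame vectors (over seven channels) feed a support vector machine; any SVM library would sit downstream of this feature step.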
(This article belongs to the Section Cardiovascular Diseases)

15 pages, 2307 KB  
Article
An Open-Source Horizontal Strabismus Simulator as an Evaluation Platform for Monocular Gaze Estimation Using Deep Learning Models
by Shumpei Takinami, Yuka Morita, Jun Seita and Tetsuro Oshika
J. Eye Mov. Res. 2026, 19(1), 20; https://doi.org/10.3390/jemr19010020 - 9 Feb 2026
Viewed by 1268
Abstract
Strabismus affects 2–4% of the global population, with horizontal cases accounting for more than 90%. Automated screening using monocular gaze estimation technology shows promise for early detection. However, existing models assume normal binocular vision, and their applicability to strabismus remains unvalidated due to the lack of evaluation platforms capable of reproducing disconjugate eye movements with known ground-truth angles. To address this gap, we developed an open-source, low-cost (approximately 200 USD) horizontal strabismus simulator. The simulator features two independently controllable artificial eyeballs mounted on a two-axis gimbal mechanism with servo motors and gyro sensors for real-time angle measurement. Mechanical accuracy achieved a mean absolute error of less than 0.1° across all axes, well below the clinical detection threshold of 1 prism diopter (≈0.57°). An evaluation of three representative AI models (Single Eye, GazeNet, and EyeNet) revealed estimation errors of 6.44–8.75°, substantially exceeding the clinical target of 2.8°. At this error level, small-angle strabismus (<15 prism diopters) would likely be missed, underscoring the need for strabismus-specific model development. Moreover, rapid accuracy degradation was observed beyond ±15° gaze angles. This platform establishes baseline performance metrics and provides a foundation for advancing gaze estimation technology for strabismus screening. Full article
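The clinical thresholds quoted above (1 prism diopter ≈ 0.57°, 15 PD small-angle cutoff) follow from the standard definition of a prism diopter: 1 cm of deviation at 1 m, i.e. tan(θ) = PD/100:

```python
import math

def prism_diopters_to_degrees(pd):
    """1 prism diopter = 1 cm deviation at 1 m: theta = atan(pd / 100)."""
    return math.degrees(math.atan(pd / 100.0))

def degrees_to_prism_diopters(deg):
    """Inverse conversion: pd = 100 * tan(theta)."""
    return 100.0 * math.tan(math.radians(deg))
```

This also shows why the reported 6.44–8.75° model errors matter: 15 PD is only about 8.5°, so errors of that size swamp small-angle strabismus.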

25 pages, 2339 KB  
Article
An Operational Ground-Based Vicarious Radiometric Calibration Method for Thermal Infrared Sensors: A Case Study of GF-5A WTI
by Jingwei Bai, Yunfei Bao, Guangyao Zhou, Shuyan Zhang, Hong Guan, Mingmin Zhang, Yongchao Zhao and Kang Jiang
Remote Sens. 2026, 18(2), 302; https://doi.org/10.3390/rs18020302 - 16 Jan 2026
Viewed by 438
Abstract
High-resolution TIR missions require sustained and well-characterized radiometric accuracy to support applications such as land surface temperature retrieval, drought monitoring, and surface energy budget analysis. To address this need, we develop an operational and automated ground-based vicarious radiometric calibration framework for TIR sensors and demonstrate its performance using the Wide-swath Thermal Infrared Imager (WTI) onboard Gaofen-5 01A (GF-5A). Three arid Gobi calibration sites were selected by integrating Moderate Resolution Imaging Spectroradiometer (MODIS) cloud products, Shuttle Radar Topography Mission (SRTM)-derived topography, and WTI-based radiometric uniformity metrics to ensure low cloud cover, flat terrain, and high spatial homogeneity. Automated ground stations deployed at Golmud, Dachaidan, and Dunhuang have continuously recorded 1 min contact surface temperature since October 2023. Field-measured emissivity spectra, Integrated Global Radiosonde Archive (IGRA) radiosonde profiles, and MODTRAN (MODerate resolution atmospheric TRANsmission) v5.2 simulations were combined to compute top-of-atmosphere (TOA) radiances, which were subsequently collocated with WTI imagery. After data screening and gain-stratified regression, linear calibration coefficients were derived for each TIR band. Based on 189 scenes from February–July 2024, all four bands exhibit strong linearity (R-squared greater than 0.979). Validation using 45 independent scenes yields a mean brightness–temperature root-mean-square error (RMSE) of 0.67 K. A full radiometric-chain uncertainty budget—including contact temperature, emissivity, atmospheric profiles, and radiative transfer modeling—results in a combined standard uncertainty of 1.41 K. 
The proposed framework provides a low-maintenance, traceable, and high-frequency solution for the long-term on-orbit radiometric calibration of GF-5A WTI and establishes a reproducible pathway for future TIR missions requiring sustained calibration stability. Full article
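The regression step fits, per band and per gain setting ("gain-stratified"), a linear map from sensor digital numbers to the simulated TOA radiances; validation then reports RMSE. A single-stratum sketch with assumed variable names and synthetic numbers:

```python
import numpy as np

def fit_calibration(dn, toa_radiance):
    """Least-squares linear calibration L = gain * DN + offset.

    dn: digital numbers from collocated WTI pixels; toa_radiance: the
    MODTRAN-simulated top-of-atmosphere radiances. The paper does this
    per band and per gain stratum; one stratum is shown here.
    """
    gain, offset = np.polyfit(dn, toa_radiance, 1)
    return gain, offset

def rmse(pred, truth):
    """Root-mean-square error, as used for the validation scenes."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))
```

The paper's 0.67 K figure is an RMSE in brightness temperature, i.e. computed after converting radiances through the band's inverse Planck response; that conversion is omitted here.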
(This article belongs to the Special Issue Radiometric Calibration of Satellite Sensors Used in Remote Sensing)

10 pages, 2555 KB  
Proceeding Paper
Mine Gas Emission Monitoring Following the Cessation of Mining Activities in a Hard Coal Region
by Vladimír Krenžel, Petr Mierva, Jan Vostřez, Petr Křístek, Daniel Gogol, Andrea Siroká and David Semančík
Eng. Proc. 2025, 116(1), 45; https://doi.org/10.3390/engproc2025116045 - 13 Jan 2026
Cited by 1 | Viewed by 257
Abstract
This article provides an in-depth overview of mine gas emission monitoring practices in the Ostrava-Karviná Coalfield (OKR), one of the most significant regions in Central Europe affected by post-mining methane leakage. The study presents field measurement techniques, including atmogeochemical surveys, systematic methane screening in soil air, and surface emission rate monitoring using accumulation chambers. Over the course of several long-term projects, more than 43 km2 of land were surveyed, and risk classification maps were developed based on measured methane concentrations and surface release rates. These data support land-use planning, the design of degasification measures, and the verification of their effectiveness. Results confirm that methane emissions persist even decades after mine closures and vary depending on atmospheric pressure and local geological conditions. The OKR methodology was also compared to international practices in Poland, Canada, and China. The article concludes with future research directions focused on automation, integration of sensor networks, and predictive modeling of gas migration in post-mining environments. Full article
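Accumulation-chamber monitoring, as mentioned above, typically derives the surface emission rate from the standard chamber relation F = (dC/dt)·V/A. A sketch under assumed units; the OKR surveys' exact chamber geometry and protocol are not given in the abstract:

```python
import numpy as np

def surface_emission_rate(times_s, conc_g_per_m3, volume_m3, area_m2):
    """Methane surface emission rate from an accumulation-chamber record.

    F = (dC/dt) * V / A, with dC/dt taken as the slope of a linear fit
    over the early, near-linear part of the concentration record.
    With these units (g m^-3, m^3, m^2, s), F comes out in g m^-2 s^-1.
    """
    slope, _ = np.polyfit(times_s, conc_g_per_m3, 1)  # dC/dt
    return slope * volume_m3 / area_m2
```

Risk maps like those described would then bin F (together with soil-air concentrations) into release-rate classes per site.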

41 pages, 701 KB  
Review
New Trends in the Use of Artificial Intelligence and Natural Language Processing for Occupational Risks Prevention
by Natalia Orviz-Martínez, Efrén Pérez-Santín and José Ignacio López-Sánchez
Safety 2026, 12(1), 7; https://doi.org/10.3390/safety12010007 - 8 Jan 2026
Viewed by 1301
Abstract
In an increasingly technologized and automated world, workplace safety and health remain a major global challenge. After decades of regulatory frameworks and substantial technical and organizational advances, the expanding interaction between humans and machines and the growing complexity of work systems are gaining importance. In parallel, the digitalization of Industry 4.0/5.0 is generating unprecedented volumes of safety-relevant data and new opportunities to move from reactive analysis to proactive, data-driven prevention. This review maps how artificial intelligence (AI), with a specific focus on natural language processing (NLP) and large language models (LLMs), is being applied to occupational risk prevention across sectors. A structured search of the Web of Science Core Collection (2013–October 2025) combined OSH-related terms with AI, NLP and LLM terms. After screening and full-text assessment, 123 studies were retained for analysis. Early work relied on text mining and traditional machine learning to classify accident types and causes, extract risk factors and support incident analysis from free-text narratives. More recent contributions use deep learning to predict injury severity, potential serious injuries and fatalities (PSIF) and field risk control program (FRCP) levels and to fuse textual data with process, environmental and sensor information in multi-source risk models. The latest wave of studies deploys LLMs, retrieval-augmented generation and vision–language architectures to generate task-specific safety guidance, support accident investigation, map occupations and job tasks and monitor personal protective equipment (PPE) compliance. Together, these developments show that AI-, NLP- and LLM-based systems can exploit unstructured OSH information to provide more granular, timely and predictive safety insights.
However, the field is still constrained by data quality and bias, limited external validation, opacity, hallucinations and emerging regulatory and ethical requirements. In conclusion, this review positions AI and LLMs as tools to support human decision-making in OSH and outlines a research agenda centered on high-quality datasets and rigorous evaluation of fairness, robustness, explainability and governance. Full article
(This article belongs to the Special Issue Advances in Ergonomics and Safety)

36 pages, 2139 KB  
Systematic Review
A Systematic Review of the Practical Applications of Synthetic Aperture Radar (SAR) for Bridge Structural Monitoring
by Homer Armando Buelvas Moya, Minh Q. Tran, Sergio Pereira, José C. Matos and Son N. Dang
Sustainability 2026, 18(1), 514; https://doi.org/10.3390/su18010514 - 4 Jan 2026
Viewed by 998
Abstract
Within the field of the structural monitoring of bridges, numerous technologies and methodologies have been developed. Among these, methods based on synthetic aperture radar (SAR) which utilise satellite data from missions such as Sentinel-1 (European Space Agency-ESA) and COSMO-SkyMed (Agenzia Spaziale Italiana—ASI) to capture displacements, temperature-related changes, and other geophysical measurements have gained increasing attention. However, SAR has yet to establish its value and potential fully; its broader adoption hinges on consistently demonstrating its robustness through recurrent applications, well-defined use cases, and effective strategies to address its inherent limitations. This study presents a systematic literature review (SLR) conducted in accordance with key stages of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 framework. An initial corpus of 1218 peer-reviewed articles was screened, and a final set of 25 studies was selected for in-depth analysis based on citation impact, keyword recurrence, and thematic relevance from the last five years. The review critically examines SAR-based techniques—including Differential Interferometric SAR (DInSAR), multi-temporal InSAR (MT-InSAR), and Persistent Scatterer Interferometry (PSI), as well as approaches to integrating SAR data with ground-based measurements and complementary digital models. Emphasis is placed on real-world case studies and persistent technical challenges, such as atmospheric artefacts, Line-of-Sight (LOS) geometry constraints, phase noise, ambiguities in displacement interpretation, and the translation of radar-derived deformations into actionable structural insights. The findings underscore SAR’s significant contribution to the structural health monitoring (SHM) of bridges, consistently delivering millimetre-level displacement accuracy and enabling engineering-relevant interpretations. 
While standalone SAR-based techniques offer wide-area monitoring capabilities, their full potential is realised only when integrated with complementary procedures such as thermal modelling, multi-sensor validation, and structural knowledge. Finally, this document highlights the persistent technical constraints of InSAR in bridge monitoring—including measurement ambiguities, SAR image acquisition limitations, and a lack of standardised, automated workflows—that continue to impede operational adoption but also point toward opportunities for methodological improvement. Full article
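The millimetre-level displacement accuracy discussed above rests on the DInSAR phase-to-displacement relation d = −(λ/4π)·Δφ: one full interferometric fringe corresponds to λ/2 of line-of-sight motion. A minimal sketch using the approximate Sentinel-1 C-band wavelength (the sign convention varies between processors):

```python
import math

C_BAND_WAVELENGTH_M = 0.0555  # Sentinel-1 C-band, approximate

def los_displacement_m(delta_phase_rad, wavelength_m=C_BAND_WAVELENGTH_M):
    """Unwrapped differential phase -> line-of-sight displacement.

    d = -(lambda / (4*pi)) * delta_phi. A 2*pi fringe maps to lambda/2
    (~2.8 cm for Sentinel-1), which is why phase noise at a fraction of
    a fringe still yields millimetre-level sensitivity.
    """
    return -wavelength_m / (4.0 * math.pi) * delta_phase_rad
```

Interpreting d for a bridge still requires projecting the LOS vector onto structural axes, one of the geometry constraints the review highlights.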
(This article belongs to the Special Issue Sustainable Practices in Bridge Construction)

12 pages, 2357 KB  
Article
Real-Time Cr(VI) Concentration Monitoring in Chrome Plating Wastewater Using RGB Sensor and Machine Learning
by Hanui Yang and Donghee Park
Eng 2026, 7(1), 17; https://doi.org/10.3390/eng7010017 - 1 Jan 2026
Viewed by 573
Abstract
The transition to the 4th Industrial Revolution (4IR) in the electroplating industry necessitates intelligent, real-time monitoring systems to replace traditional, time-consuming offline analysis. In this study, we developed a cost-effective, automated measurement system for hexavalent chromium (Cr(VI)) in plating wastewater using an Arduino-based RGB sensor. Unlike conventional single-variable approaches, we conducted a comprehensive feature sensitivity analysis on multi-sensor data (including pH, ORP, and EC). While electrochemical sensors were found to be susceptible to pH interference, the analysis identified that the Red and Green optical channels are the most critical indicators due to the distinct chromatic characteristics of Cr(VI). Specifically, the combination of these two channels effectively functions as a dual-variable sensing mechanism, compensating for potential interferences. To optimize prediction accuracy, a systematic machine learning strategy was employed. While the Convolutional Neural Network (CNN) achieved the highest classification accuracy of 89% for initial screening, a polynomial regression algorithm was ultimately implemented to model the non-linear relationship between sensor outputs and concentration. The derived regression model achieved an excellent determination coefficient (R2 = 0.997), effectively compensating for optical saturation effects at high concentrations. Furthermore, by integrating this sensing model with the chemical stoichiometry of the reduction process, the proposed system enables the precise, automated dosing of reducing agents. This capability facilitates the establishment of a “Digital Twin” for wastewater treatment, offering a practical ICT (Information and Communication Technology)-based solution for autonomous process control and strict environmental compliance. Full article
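The dual-channel idea — Red and Green readings jointly predicting Cr(VI) concentration through a non-linear model — can be sketched as ordinary least squares on polynomial features. The feature set, degree, and synthetic data below are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np

def poly_features(red, green, degree=2):
    """Polynomial features of the R and G channels: 1, R, G, R^2, RG, G^2, ..."""
    red, green = np.asarray(red, float), np.asarray(green, float)
    cols = [np.ones_like(red)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            cols.append(red ** (d - i) * green ** i)
    return np.column_stack(cols)

def fit_conc_model(red, green, conc, degree=2):
    """Least-squares polynomial regression of Cr(VI) concentration on R, G."""
    X = poly_features(red, green, degree)
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(conc, float), rcond=None)
    return coeffs
```

The cross term R·G is what lets the fit compensate when one channel saturates at high concentration, the effect the abstract attributes to its dual-variable sensing.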
(This article belongs to the Section Chemical, Civil and Environmental Engineering)

24 pages, 20297 KB  
Review
Artificial Intelligence-Aided Microfluidic Cell Culture Systems
by Muhammad Sohail Ibrahim and Minseok Kim
Biosensors 2026, 16(1), 16; https://doi.org/10.3390/bios16010016 - 24 Dec 2025
Viewed by 1692
Abstract
Microfluidic cell culture systems and organ-on-a-chip platforms provide powerful tools for modeling physiological processes, disease progression, and drug responses under controlled microenvironmental conditions. These technologies rely on diverse cell culture methodologies, including 2D and 3D culture formats, spheroids, scaffold-based systems, hydrogels, and organoid models, to recapitulate tissue-level functions and generate rich, multiparametric datasets through high-resolution imaging, integrated sensors, and biochemical assays. The heterogeneity and volume of these data introduce substantial challenges in pre-processing, feature extraction, multimodal integration, and biological interpretation. Artificial intelligence (AI), particularly machine learning and deep learning, offers solutions to these analytical bottlenecks by enabling automated phenotyping, predictive modeling, and real-time control of microfluidic environments. Recent advances also highlight the importance of technical frameworks such as dimensionality reduction, explainable feature selection, spectral pre-processing, lightweight on-chip inference models, and privacy-preserving approaches that support robust and deployable AI–microfluidic workflows. AI-enabled microfluidic and organ-on-a-chip systems now span a broad application spectrum, including cancer biology, drug screening, toxicity testing, microbial and environmental monitoring, pathogen detection, angiogenesis studies, nerve-on-a-chip models, and exosome-based diagnostics. These platforms also hold increasing potential for precision medicine, where AI can support individualized therapeutic prediction using patient-derived cells and organoids. As the field moves toward more interpretable and autonomous systems, explainable AI will be essential for ensuring transparency, regulatory acceptance, and biological insight. 
Recent AI-enabled applications in cancer modeling, drug screening, etc., highlight how deep learning can enable precise detection of phenotypic shifts, classify therapeutic responses with high accuracy, and support closed-loop regulation of microfluidic environments. These studies demonstrate that AI can transform microfluidic systems from static culture platforms into adaptive, data-driven experimental tools capable of enhancing assay reproducibility, accelerating drug discovery, and supporting personalized therapeutic decision-making. This narrative review synthesizes current progress, technical challenges, and future opportunities at the intersection of AI, microfluidic cell culture platforms, and advanced organ-on-a-chip systems, highlighting their emerging role in precision health and next-generation biomedical research. Full article
(This article belongs to the Collection Microsystems for Cell Cultures)

26 pages, 2310 KB  
Systematic Review
A Systematic Review of Intelligent Navigation in Smart Warehouses Using Prisma: Integrating AI, SLAM, and Sensor Fusion for Mobile Robots
by Domagoj Zimmer, Mladen Jurišić, Ivan Plaščak, Željko Barač, Hrvoje Glavaš, Dorijan Radočaj and Robert Benković
Eng 2025, 6(12), 339; https://doi.org/10.3390/eng6120339 - 1 Dec 2025
Viewed by 2086
Abstract
This systematic review focuses on intelligent navigation as a core enabler of autonomy in smart warehouses, where mobile robots must dynamically perceive, reason, and act in complex, human-shared environments. By synthesizing advancements in AI-driven decision-making, SLAM, and multi-sensor fusion, the study highlights how intelligent navigation architectures reduce operational uncertainty and enhance task efficiency in logistics automation. Smart warehouses, powered by mobile robots and AGVs and integrated with AI and algorithms, are enabling more efficient storage with less human labour. This systematic review followed PRISMA 2020 guidelines to identify, screen, and synthesize evidence from 106 peer-reviewed scientific articles (including primary studies, technical papers, and reviews) published between 2020 and 2025, sourced from Web of Science. Thematic synthesis was conducted across 8 domains: AI, SLAM, sensor fusion, safety, network, path planning, implementation, and design. The transition to smart warehouses requires modern technologies to automate tasks and optimize resources. This article examines how intelligent systems can be integrated with mathematical models to improve navigation accuracy, reduce costs and prioritize human safety. Real-time data management with precise information for AMRs and AGVs is crucial for low-risk operation. This article studies AI, the IoT, LiDAR, machine learning (ML), SLAM and other new technologies for the successful implementation of mobile robots in smart warehouses. Modern technologies such as reinforcement learning optimize the routes and tasks of mobile robots. Data and sensor fusion methods integrate information from various sources to provide a more precise understanding of the indoor environment and inventory. Semantic mapping enables mobile robots to navigate and interact with complex warehouse environments with high accuracy in real time.
The article also analyses how virtual reality (VR) can improve the spatial orientation of mobile robots by developing sophisticated navigation solutions that reduce time and financial costs. Full article
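The path-planning stage surveyed above can be illustrated, at its simplest, by shortest-path search on an occupancy grid. This is a minimal stand-in: production AMR stacks use A* or D* Lite with costs derived from sensor fusion, but the grid-and-graph structure is the same:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first shortest path on a 4-connected occupancy grid.

    grid[r][c] == 1 marks an occupied cell (rack, obstacle); start and
    goal are (row, col) tuples. Returns the cell sequence, or None if
    the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:               # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

SLAM and semantic mapping, in this picture, are what keep `grid` accurate in real time.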
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

18 pages, 2653 KB  
Article
Compact Microcontroller-Based LED-Driven Photoelectric System for Accurate Photoresponse Mapping Compatible with Internet of Things
by Bohdan Sus, Alexey Kozynets, Sergii Litvinenko, Alla Ivanyshyn, Tetiana Bubela, Mikołaj Skowron and Krzysztof Przystupa
Electronics 2025, 14(23), 4614; https://doi.org/10.3390/electronics14234614 - 24 Nov 2025
Viewed by 741
Abstract
A compact LED (light-emitting diode)-based illumination unit controlled by a microcontroller was developed for recombination-type silicon sensor structures. The system employs an 8 × 8 LED matrix that provides programmable spatial excitation patterns across a 2.2 × 2.2 mm sensor surface. Its operation is based on changes in the silicon surface recombination properties upon analyte interaction, producing photocurrent variations of 10–50 nA depending on the dipole moment. Compared with conventional laser-based systems, the proposed LED illumination significantly reduces cost, complexity, and power consumption while maintaining sufficient optical intensity for reliable photoresponse detection. The embedded controller enables precise timing, synchronization with the photocurrent acquisition unit, and flexible adaptation for various biological fluid analyses. This implementation demonstrates a scalable and cost-efficient alternative to stationary LBIC setups and supports integration into portable or IoT-compatible diagnostic systems. For comparative screening, the LED array was used instead of the focused laser beam typically employed in LBIC (laser beam-induced current) measurements. This substitution substantially reduced the peak optical intensity at the sample surface, minimizing the local thermal heating that is critical for enzyme-based or plasma samples sensitive to temperature fluctuations. Photocurrent mapping reveals charge-state modification of recombination centers at the SiOx/Si interface under optical excitation. Further optimization is expected for compact or simplified configurations, particularly those aimed at portable applications and automated physiological monitoring systems. Full article
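Photocurrent mapping with the 8 × 8 matrix amounts to recording one current value per lit LED and post-processing the resulting array. A sketch — the dark-current subtraction and peak normalization shown here are illustrative processing assumptions, not necessarily the authors' procedure:

```python
import numpy as np

def photoresponse_map(photocurrent_nA, dark_current_nA):
    """Normalized 8x8 photoresponse map from an LED-matrix scan.

    photocurrent_nA[i, j] is the current measured while only LED (i, j)
    is lit. Subtracting the dark current and scaling to the peak makes
    spatial variations in surface recombination directly comparable
    between scans.
    """
    delta = np.asarray(photocurrent_nA, float) - dark_current_nA
    peak = delta.max()
    return delta / peak if peak > 0 else delta
```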

19 pages, 1738 KB  
Article
Design and Implementation of a Smart Parking System with Real-Time Slot Detection and Automated Gate Access
by Mohammad Ali Sahraei
Technologies 2025, 13(11), 503; https://doi.org/10.3390/technologies13110503 - 1 Nov 2025
Viewed by 6360
Abstract
As the number of vehicles grows, an intelligent parking system can help drivers find parking slots by providing real-time information. To address this issue, this study developed an Arduino-based automated parking system integrating sensors to assist drivers in quickly discovering available parking slots with real-time space detection and dynamic access control. The system consists of ultrasonic sensors, a NodeMCU, an LCD screen, a servo motor, and an Arduino Uno. Each ultrasonic sensor is assigned a number corresponding to its slot, which identifies the location. The sensors are connected to the NodeMCU, which collects, processes, and transfers their data to the Arduino board. If an ultrasonic sensor does not detect a vehicle in its parking space, the LCD screen shows that slot number as available. When a separate ultrasonic sensor at the entrance detects a vehicle, the Arduino uses the servo motor to open the entrance gate; when all available spaces are occupied, the system prevents any vehicle from entering the parking area. The system prototype was constructed and empirically evaluated to verify its performance and efficiency. The results indicate that the system successfully monitors parking-spot occupancy and validates its capacity for real-time information updates. Full article
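The control logic the abstract describes, free slots shown on the display and a gate that opens only while space remains, can be sketched in a few lines. This is a hypothetical host-side model of the behavior, not the authors' Arduino firmware; the function names are illustrative.

```python
# Hypothetical sketch of the described control logic: each slot sensor
# reports occupancy; the display lists free slot numbers, and the
# entrance gate opens only while at least one slot is free.

def free_slots(occupancy):
    """occupancy: dict slot_number -> True if a vehicle is detected."""
    return sorted(n for n, occupied in occupancy.items() if not occupied)

def gate_should_open(vehicle_at_entrance, occupancy):
    """Open the gate only when a car is waiting and a slot is available."""
    return vehicle_at_entrance and bool(free_slots(occupancy))

occupancy = {1: True, 2: False, 3: True, 4: False}
print(free_slots(occupancy))                        # slot numbers for the LCD
print(gate_should_open(True, occupancy))            # car waiting, space free
print(gate_should_open(True, {1: True, 2: True}))   # lot full: gate stays shut
```

On the actual hardware, `occupancy` would be refreshed from the ultrasonic distance readings each loop iteration, and `gate_should_open` would drive the servo angle.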

25 pages, 3956 KB  
Review
Multi-Sensor Monitoring, Intelligent Control, and Data Processing for Smart Greenhouse Environment Management
by Emmanuel Bicamumakuba, Md Nasim Reza, Hongbin Jin, Samsuzzaman, Kyu-Ho Lee and Sun-Ok Chung
Sensors 2025, 25(19), 6134; https://doi.org/10.3390/s25196134 - 3 Oct 2025
Cited by 13 | Viewed by 9202
Abstract
Management of smart greenhouses represents a transformative advancement in precision agriculture, enabling sustainable intensification of food production through the integration of multi-sensor networks, intelligent control, and sophisticated data filtering techniques. Unlike conventional greenhouses that rely on manual monitoring, smart greenhouses combine environmental sensors, Internet of Things (IoT) platforms, and artificial intelligence (AI)-driven decision making to optimize microclimates, improve yields, and enhance resource efficiency. This review systematically investigates three key technological pillars of smart greenhouse environment management: multi-sensor monitoring, intelligent control, and data filtering techniques. A structured literature screening of 114 peer-reviewed studies was conducted across major databases to ensure methodological rigor. The analysis compared sensor technologies for temperature, humidity, carbon dioxide (CO2), light, and energy; evaluated control strategies such as IoT-based automation, fuzzy logic, model predictive control, and reinforcement learning; and examined filtering methods including time- and frequency-domain, Kalman, AI-based, and hybrid models. Major findings revealed that multi-sensor integration enhanced precision and resilience but faced challenges in calibration and interoperability. Intelligent control improved energy and water efficiency yet required robust datasets and computational resources. Advanced filtering strengthened data integrity but raised concerns about scalability and computational cost. The distinct contribution of this review is an integrated synthesis linking technical performance to implementation feasibility, highlighting pathways towards affordable, scalable, and resilient smart greenhouse systems. Full article
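One of the filtering methods the review surveys, Kalman filtering, can be illustrated with a minimal one-dimensional sketch smoothing a noisy temperature stream. The constant-state model and the noise parameters here are illustrative assumptions, not values taken from the review.

```python
# Minimal 1-D Kalman filter sketch for a noisy greenhouse temperature
# stream. Model: x_k = x_{k-1} + w (process noise q), z_k = x_k + v
# (measurement noise r). Parameters are illustrative.

def kalman_1d(measurements, q=1e-3, r=0.5, x0=20.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update toward the measurement residual
        p *= (1 - k)         # reduced posterior variance
        estimates.append(x)
    return estimates

noisy = [20.4, 19.7, 20.9, 20.1, 19.8, 20.3]  # degrees Celsius
smooth = kalman_1d(noisy)
```

Tuning `q` and `r` trades responsiveness against smoothing: a larger `r` (less trusted sensor) yields heavier smoothing, which is the practical knob behind the calibration issues the review highlights.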
(This article belongs to the Section Smart Agriculture)

25 pages, 783 KB  
Systematic Review
KAVAI: A Systematic Review of the Building Blocks for Knowledge-Assisted Visual Analytics in Industrial Manufacturing
by Adrian J. Böck, Stefanie Größbacher, Jan Vrablicz, Christina Stoiber, Alexander Rind, Josef Suschnigg, Tobias Schreck, Wolfgang Aigner and Markus Wagner
Appl. Sci. 2025, 15(18), 10172; https://doi.org/10.3390/app151810172 - 18 Sep 2025
Viewed by 1186
Abstract
Industry 4.0 produces large volumes of sensor and machine data, offering new possibilities for manufacturing analytics but also creating challenges in combining domain knowledge with visual analysis. We present a systematic review of 13 peer-reviewed knowledge-assisted visual analytics (KAVA) systems published between 2014 and 2024, following PRISMA guidelines for the identification, screening, and inclusion processes. The survey is organized around six predefined building blocks, namely, user group, industrial domain, visualization, knowledge, data, and machine learning, with a specific emphasis on the integration of knowledge and visualization in the reviewed studies. We find that ontologies, taxonomies, rule sets, and knowledge graphs provide explicit representations of expert understanding, sometimes enriched with annotations and threshold specifications. These structures are stored in RDF or graph databases, relational tables, or flat files, though interoperability is limited, and post-design contributions are not always persisted. Explicit knowledge is visualized through standard and specialized techniques, including thresholds in time-series plots, annotated dashboards, node–link diagrams, customized machine views derived from ontologies, and 3D digital twins with expert-defined rules. Line graphs, bar charts, and scatterplots are the most frequently used chart types, often augmented with thresholds and annotations derived from explicit knowledge. Recurring challenges include fragmented storage, heterogeneous data and knowledge types, limited automation, inconsistent validation of user input, and scarce long-term evaluations. Addressing these gaps will be essential for developing adaptable, reusable KAVA systems for industrial analytics. Full article
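The integration pattern described above, expert-defined thresholds applied to a sensor time series to drive annotations in a visualization, can be sketched briefly. This is a hypothetical illustration of the pattern, not code from any reviewed system; the signal name and threshold value are invented for the example.

```python
# Hypothetical sketch: an expert-defined threshold (explicit knowledge)
# applied to a sensor time series, yielding the indices a KAVA dashboard
# would annotate or highlight. Signal name and limit are illustrative.

THRESHOLDS = {"vibration_mm_s": 4.5}  # illustrative expert rule

def flag_exceedances(series, signal, thresholds=THRESHOLDS):
    """Return indices where the signal exceeds its expert-set limit."""
    limit = thresholds[signal]
    return [i for i, value in enumerate(series) if value > limit]

vibration = [2.1, 3.0, 4.8, 5.2, 3.9, 4.6]
alerts = flag_exceedances(vibration, "vibration_mm_s")
```

Persisting `THRESHOLDS` in a shared store rather than hard-coding it is exactly the kind of knowledge-externalization step whose fragmented handling the review identifies as a recurring gap.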
(This article belongs to the Section Applied Industrial Technologies)
