Search Results (447)

Search Parameters:
Keywords = intelligent and integrated lighting

22 pages, 3506 KiB  
Review
Spectroscopic and Imaging Technologies Combined with Machine Learning for Intelligent Perception of Pesticide Residues in Fruits and Vegetables
by Haiyan He, Zhoutao Li, Qian Qin, Yue Yu, Yuanxin Guo, Sheng Cai and Zhanming Li
Foods 2025, 14(15), 2679; https://doi.org/10.3390/foods14152679 - 30 Jul 2025
Viewed by 50
Abstract
Pesticide residues in fruits and vegetables pose a serious threat to food safety. Traditional detection methods suffer from complex operation, high cost, and long detection times, so developing rapid, non-destructive, and efficient detection technologies and equipment is of great significance. In recent years, the combination of spectroscopic and imaging techniques with machine learning algorithms has developed rapidly, offering a new route to solving this problem. This review surveys progress in combining spectroscopic techniques (near-infrared spectroscopy (NIRS), hyperspectral imaging (HSI), surface-enhanced Raman scattering (SERS), and laser-induced breakdown spectroscopy (LIBS)) and imaging techniques (visible-light (VIS) imaging, NIRS imaging, HSI, and terahertz imaging) with machine learning algorithms for detecting pesticide residues in fruits and vegetables. It also examines the major challenges these combined approaches face in intelligent perception of pesticide residues: the performance of machine learning models requires further enhancement, fusing imaging and spectral data presents technical difficulties, and the commercialization of hardware devices remains underdeveloped. Finally, the review proposes an approach that integrates spectral and image data and builds interpretable machine learning models to improve the accuracy of pesticide residue detection, supporting intelligent sensing and analysis of agricultural and food products. Full article
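The review above surveys pipelines that pair spectral measurements with machine learning classifiers. As a rough illustration of the classification step only, here is a minimal nearest-centroid sketch on synthetic 10-band "spectra" (all data, the band position of the residue feature, and the class names are invented for illustration, not taken from the paper):

```python
import numpy as np

def nearest_centroid_fit(spectra, labels):
    """Compute one mean spectrum (centroid) per class."""
    labels = np.asarray(labels)
    return {c: spectra[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid_predict(centroids, spectrum):
    """Pick the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

# Synthetic 10-band "spectra": residue-free vs. pesticide-treated samples.
rng = np.random.default_rng(0)
clean = rng.normal(1.0, 0.05, size=(20, 10))
treated = rng.normal(1.0, 0.05, size=(20, 10))
treated[:, 4] += 0.5  # hypothetical absorption feature from the residue

X = np.vstack([clean, treated])
y = ["clean"] * 20 + ["treated"] * 20
centroids = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(centroids, treated[0]))
```

Real systems replace the centroid rule with the chemometric and deep models the review discusses, but the fit/predict split is the same.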

35 pages, 4940 KiB  
Article
A Novel Lightweight Facial Expression Recognition Network Based on Deep Shallow Network Fusion and Attention Mechanism
by Qiaohe Yang, Yueshun He, Hongmao Chen, Youyong Wu and Zhihua Rao
Algorithms 2025, 18(8), 473; https://doi.org/10.3390/a18080473 - 30 Jul 2025
Viewed by 137
Abstract
Facial expression recognition (FER) is a critical research direction in artificial intelligence, widely used in intelligent interaction, medical diagnosis, security monitoring, and other domains; these applications highlight its considerable practical value and social significance. FER models often need to run efficiently on mobile or edge devices, so research on lightweight FER is particularly important. However, the feature extraction and classification methods of most current lightweight convolutional neural network FER algorithms are not fully optimized for the characteristics of facial expression images and fail to make full use of the feature information those images contain. To address the lack of FER models that are both lightweight and effectively optimized for expression-specific feature extraction, this study proposes a novel network design tailored to the characteristics of facial expressions. Drawing on the backbone architecture of MobileNet V2, we design LightExNet, a lightweight convolutional neural network based on deep-shallow layer fusion, an attention mechanism, and a joint loss function. In LightExNet's architecture, deep and shallow features are first fused to fully extract shallow features from the original image, reduce information loss, alleviate the vanishing-gradient problem as the number of convolutional layers increases, and achieve multi-scale feature fusion; the MobileNet V2 architecture is also streamlined to integrate the deep and shallow networks seamlessly. Secondly, a new channel and spatial attention mechanism, designed around the intrinsic characteristics of facial expression features, encodes as much feature information from the different expression regions as possible, effectively improving recognition accuracy. Finally, an improved center loss function is superimposed to further improve classification accuracy, with corresponding measures taken to significantly reduce the computational cost of the joint loss function. LightExNet is evaluated on three mainstream facial expression datasets: Fer2013, CK+, and RAF-DB. It has 3.27 M parameters and 298.27 M FLOPs, and achieves accuracies of 69.17%, 97.37%, and 85.97% on the three datasets, respectively. Its overall performance exceeds that of current mainstream lightweight expression recognition algorithms such as MobileNet V2, IE-DBN, Self-Cure Net, Improved MobileViT, MFN, Ada-CM, and Parallel CNN (Convolutional Neural Network). The experimental results confirm that LightExNet improves recognition accuracy and computational efficiency while reducing energy consumption and enhancing deployment flexibility, underscoring its strong potential for real-world lightweight FER applications. Full article
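The joint objective described above (a classification loss plus an improved center loss) follows a well-known pattern. The NumPy sketch below shows a generic cross-entropy-plus-center-loss objective, not the paper's exact formulation; the weighting term `lam`, the toy batch, and the zero-initialized class centers are all assumptions for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(logits, features, labels, centers, lam=0.01):
    """Cross-entropy plus a center-loss term that pulls each sample's
    feature vector toward its class center; lam weights the center term."""
    n = len(labels)
    ce = -np.log(softmax(logits)[np.arange(n), labels]).mean()
    center = 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()
    return ce + lam * center

# Toy batch: 8 samples, 3 classes, 4-dim feature embeddings.
rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 3))
features = rng.normal(size=(8, 4))
labels = rng.integers(0, 3, size=8)
centers = np.zeros((3, 4))  # class centers, normally updated during training
print(joint_loss(logits, features, labels, centers))
```

In training frameworks the centers are updated alongside the network weights; here they are fixed only to keep the sketch self-contained.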

26 pages, 27333 KiB  
Article
Gest-SAR: A Gesture-Controlled Spatial AR System for Interactive Manual Assembly Guidance with Real-Time Operational Feedback
by Naimul Hasan and Bugra Alkan
Machines 2025, 13(8), 658; https://doi.org/10.3390/machines13080658 - 27 Jul 2025
Viewed by 166
Abstract
Manual assembly remains essential in modern manufacturing, yet the increasing complexity of customised production imposes significant cognitive burdens and error rates on workers. Existing Spatial Augmented Reality (SAR) systems often operate passively, lacking adaptive interaction, real-time feedback, and gesture-based control. In response, we present Gest-SAR, a SAR framework that integrates a custom MediaPipe-based gesture classification model to deliver adaptive light-guided pick-to-place assembly instructions and real-time error feedback within a closed-loop interaction. In a within-subject study, ten participants completed standardised Duplo-based assembly tasks using Gest-SAR, paper-based manuals, and tablet-based instructions; performance was evaluated via assembly cycle time, selection and placement error rates, cognitive workload assessed by NASA-TLX, and usability assessed by post-experimental questionnaires. Quantitative results demonstrate that Gest-SAR significantly reduces cycle times, averaging 3.95 min versus Paper (mean = 7.89 min, p < 0.01) and Tablet (mean = 6.99 min, p < 0.01). It also achieved average error rates seven times lower while reducing perceived cognitive workload (p < 0.05 for mental demand) compared to the conventional modalities, and 90% of users preferred SAR over the paper and tablet modalities. These outcomes indicate that natural hand-gesture interaction coupled with real-time visual feedback enhances both the efficiency and accuracy of manual assembly. By embedding AI-driven gesture recognition and AR projection into a human-centric assistance system, Gest-SAR advances the collaborative interplay between humans and machines, aligning with Industry 5.0 objectives of resilient, sustainable, and intelligent manufacturing. Full article
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)
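Gest-SAR's gesture classifier is built on MediaPipe hand landmarks. As a toy stand-in for such a classifier, the rule below separates an open palm from a fist by mean fingertip-to-wrist distance; the landmark indices follow MediaPipe's standard hand-landmark numbering, but the threshold and test poses are invented and the paper's actual model is a learned classifier, not this heuristic:

```python
import numpy as np

# MediaPipe's standard hand-landmark numbering: 0 = wrist; fingertips
# are landmarks 4, 8, 12, 16, and 20.
WRIST, FINGERTIPS = 0, [4, 8, 12, 16, 20]

def classify_gesture(landmarks, threshold=0.25):
    """Crude open-palm vs. fist rule: mean fingertip-to-wrist distance
    (normalized image coordinates) above an invented threshold."""
    lm = np.asarray(landmarks, dtype=float)
    dists = np.linalg.norm(lm[FINGERTIPS] - lm[WRIST], axis=1)
    return "open_palm" if dists.mean() > threshold else "fist"

# Invented 21-point (x, y) poses: fingertips far from vs. near the wrist.
open_hand = np.zeros((21, 2)); open_hand[FINGERTIPS] = [0.4, 0.4]
fist = np.zeros((21, 2)); fist[FINGERTIPS] = [0.1, 0.1]
print(classify_gesture(open_hand), classify_gesture(fist))
```

A real pipeline would feed the 21 landmarks per frame from MediaPipe's hand tracker into the trained model, with the predicted gesture driving the projector feedback loop.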

24 pages, 12286 KiB  
Article
A UAV-Based Multi-Scenario RGB-Thermal Dataset and Fusion Model for Enhanced Forest Fire Detection
by Yalin Zhang, Xue Rui and Weiguo Song
Remote Sens. 2025, 17(15), 2593; https://doi.org/10.3390/rs17152593 - 25 Jul 2025
Viewed by 334
Abstract
UAVs are essential for forest fire detection, given vast forest areas and the inaccessibility of high-risk zones, enabling rapid long-range inspection and detailed close-range surveillance. However, aerial photography faces challenges such as multi-scale target recognition and adaptation to complex scenarios (e.g., deformation, occlusion, lighting variations). RGB-Thermal fusion methods effectively integrate visible-light texture with thermal infrared temperature features, but current approaches are constrained by limited datasets and insufficient exploitation of cross-modal complementary information, ignoring cross-level feature interaction. To address data scarcity in wildfire scenarios, we constructed a time-synchronized, multi-scene, multi-angle aerial RGB-Thermal dataset (RGBT-3M) with "Smoke–Fire–Person" annotations and modal alignment via the M-RIFT method. We further propose CP-YOLOv11-MF, a fusion detection model based on the advanced YOLOv11 framework that progressively learns the complementary heterogeneous features of each modality. Experimental validation demonstrates the superiority of our method, with a precision of 92.5%, a recall of 93.5%, a mAP50 of 96.3%, and a mAP50-95 of 62.9%. The model's RGB-Thermal fusion capability enhances early fire detection, offering a benchmark dataset and methodological advancement for intelligent forest conservation, with implications for AI-driven ecological protection. Full article
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)
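CP-YOLOv11-MF fuses RGB and thermal features progressively inside the detector. The gated blend below is only a schematic of modality-weighted feature fusion on toy single-channel maps, not the paper's architecture; the sigmoid gate and the map sizes are assumptions:

```python
import numpy as np

def fuse_rgbt(rgb_feat, thermal_feat):
    """Gated fusion sketch: a sigmoid of the thermal response decides,
    per position, how much each modality contributes to the fused map."""
    gate = 1.0 / (1.0 + np.exp(-thermal_feat))  # in (0, 1)
    return gate * thermal_feat + (1.0 - gate) * rgb_feat

# Toy 8x8 single-channel feature maps standing in for real backbone outputs.
rng = np.random.default_rng(2)
rgb = rng.normal(size=(8, 8))
thermal = rng.normal(size=(8, 8))
fused = fuse_rgbt(rgb, thermal)
print(fused.shape)
```

Because the gate is in (0, 1), each fused value is a convex combination of the two modalities, which is the basic property cross-modal fusion blocks build on.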

22 pages, 1329 KiB  
Review
Visual Field Examinations for Retinal Diseases: A Narrative Review
by Ko Eun Kim and Seong Joon Ahn
J. Clin. Med. 2025, 14(15), 5266; https://doi.org/10.3390/jcm14155266 - 25 Jul 2025
Viewed by 154
Abstract
Visual field (VF) testing remains a cornerstone in assessing retinal function by measuring how well different parts of the retina detect light. It is essential for early detection, monitoring, and management of many retinal diseases. By mapping retinal sensitivity, VF exams can reveal functional loss before structural changes become visible. This review summarizes how VF testing is applied across key conditions: hydroxychloroquine (HCQ) retinopathy, age-related macular degeneration (AMD), diabetic retinopathy (DR) and diabetic macular edema (DME), and inherited retinal dystrophies such as retinitis pigmentosa (RP). Traditional methods like Goldmann kinetic perimetry and simple tools such as the Amsler grid help identify large or central VF defects. Automated perimetry (e.g., Humphrey Field Analyzer) provides detailed, quantitative data critical for detecting subtle paracentral scotomas in HCQ retinopathy and central vision loss in AMD. Frequency-doubling technology (FDT) reveals early neural deficits in DR before blood vessel changes appear. Microperimetry offers precise, localized sensitivity maps for macular diseases. Despite its value, VF testing faces challenges including patient fatigue, variability in responses, and interpretation of unreliable results. Recent advances in artificial intelligence, virtual reality perimetry, and home-based perimetry systems are improving test accuracy, accessibility, and patient engagement. Integrating VF exams with these emerging technologies promises more personalized care, earlier intervention, and better long-term outcomes for patients with retinal disease. Full article
(This article belongs to the Special Issue New Advances in Retinal Diseases)

80 pages, 962 KiB  
Review
Advancements in Hydrogels: A Comprehensive Review of Natural and Synthetic Innovations for Biomedical Applications
by Adina-Elena Segneanu, Ludovic Everard Bejenaru, Cornelia Bejenaru, Antonia Blendea, George Dan Mogoşanu, Andrei Biţă and Eugen Radu Boia
Polymers 2025, 17(15), 2026; https://doi.org/10.3390/polym17152026 - 24 Jul 2025
Viewed by 719
Abstract
In the rapidly evolving field of biomedical engineering, hydrogels have emerged as highly versatile biomaterials that bridge biology and technology through their high water content, exceptional biocompatibility, and tunable mechanical properties. This review provides an integrated overview of both natural and synthetic hydrogels, examining their structural properties, fabrication methods, and broad biomedical applications, including drug delivery systems, tissue engineering, wound healing, and regenerative medicine. Natural hydrogels derived from sources such as alginate, gelatin, and chitosan are highlighted for their biodegradability and biocompatibility, though often limited by poor mechanical strength and batch variability. Conversely, synthetic hydrogels offer precise control over physical and chemical characteristics via advanced polymer chemistry, enabling customization for specific biomedical functions, yet may present challenges related to bioactivity and degradability. The review also explores intelligent hydrogel systems with stimuli-responsive and bioactive functionalities, emphasizing their role in next-generation healthcare solutions. In modern medicine, temperature-, pH-, enzyme-, light-, electric field-, magnetic field-, and glucose-responsive hydrogels are among the most promising “smart materials”. Their ability to respond to biological signals makes them uniquely suited for next-generation therapeutics, from responsive drug systems to adaptive tissue scaffolds. Key challenges such as scalability, clinical translation, and regulatory approval are discussed, underscoring the need for interdisciplinary collaboration and continued innovation. Overall, this review fosters a comprehensive understanding of hydrogel technologies and their transformative potential in enhancing patient care through advanced, adaptable, and responsive biomaterial systems. Full article
25 pages, 9119 KiB  
Article
An Improved YOLOv8n-Based Method for Detecting Rice Shelling Rate and Brown Rice Breakage Rate
by Zhaoyun Wu, Yehao Zhang, Zhongwei Zhang, Fasheng Shen, Li Li, Xuewu He, Hongyu Zhong and Yufei Zhou
Agriculture 2025, 15(15), 1595; https://doi.org/10.3390/agriculture15151595 - 24 Jul 2025
Viewed by 231
Abstract
Accurate and real-time detection of rice shelling rate (SR) and brown rice breakage rate (BR) is crucial for intelligent hulling sorting but remains challenging: small grain size, dense adhesion, and uneven illumination cause missed detections and blurred boundaries in the baseline YOLOv8n. This paper proposes a high-precision, lightweight solution based on an enhanced YOLOv8n with improvements in network architecture, feature fusion, and attention mechanism. The backbone's C2f module is replaced with C2f-Faster-CGLU, integrating partial convolution (PConv) and convolutional gated linear unit (CGLU) gating to reduce computational redundancy via sparse interaction and enhance small-target feature extraction. A bidirectional feature pyramid network (BiFPN) weights multiscale feature fusion to improve the edge-positioning accuracy of dense grains. An attention mechanism for fine-grained classification (AFGC) is embedded to focus on texture and damage details, enhancing adaptability to lighting fluctuations. The Detect_Rice lightweight head compresses parameters via group normalization and dynamic convolution sharing, optimizing small-target response. The improved model achieved 96.8% precision and 96.2% mAP. Combined with a quantity–mass model, SR/BR detection errors were reduced to 1.11% and 1.24%, meeting national standard (GB/T 29898-2013) requirements and providing an effective real-time solution for intelligent hulling sorting. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
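The abstract's SR/BR figures come from a quantity–mass model applied to detected grains. Ignoring the mass weighting, a count-only simplification of the two rates might look like this (the counts and the exact rate definitions are assumptions for illustration, not the paper's model):

```python
def shelling_rate(total_grains, unshelled):
    """SR: fraction of grains whose husk has been removed."""
    return (total_grains - unshelled) / total_grains

def breakage_rate(whole_brown, broken_brown):
    """BR: broken kernels as a fraction of all brown rice kernels."""
    return broken_brown / (whole_brown + broken_brown)

# Hypothetical counts from one detection frame.
sr = shelling_rate(total_grains=500, unshelled=40)
br = breakage_rate(whole_brown=430, broken_brown=30)
print(f"SR = {sr:.1%}, BR = {br:.1%}")
```

In the paper, the detector supplies the per-class counts and the quantity–mass model converts them into mass-based rates before comparison with the national standard.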

36 pages, 5908 KiB  
Review
Exploring the Frontier of Integrated Photonic Logic Gates: Breakthrough Designs and Promising Applications
by Nikolay L. Kazanskiy, Ivan V. Oseledets, Artem V. Nikonorov, Vladislava O. Chertykovtseva and Svetlana N. Khonina
Technologies 2025, 13(8), 314; https://doi.org/10.3390/technologies13080314 - 23 Jul 2025
Viewed by 500
Abstract
The increasing demand for high-speed, energy-efficient computing has propelled the development of integrated photonic logic gates, which utilize the speed of light to surpass the limitations of traditional electronic circuits. These gates enable ultrafast, parallel data processing with minimal power consumption, making them ideal for next-generation computing, telecommunications, and quantum applications. Recent advancements in nanofabrication, nonlinear optics, and phase-change materials have facilitated the seamless integration of all-optical logic gates onto compact photonic chips, significantly enhancing performance and scalability. This paper explores the latest breakthroughs in photonic logic gate design, key material innovations, and their transformative applications. While challenges such as fabrication precision and electronic–photonic integration remain, integrated photonic logic gates hold immense promise for revolutionizing optical computing, artificial intelligence, and secure communication. Full article
(This article belongs to the Section Information and Communication Technologies)

29 pages, 7403 KiB  
Article
Development of Topologically Optimized Mobile Robotic System with Machine Learning-Based Energy-Efficient Path Planning Structure
by Hilmi Saygin Sucuoglu
Machines 2025, 13(8), 638; https://doi.org/10.3390/machines13080638 - 22 Jul 2025
Viewed by 370
Abstract
This study presents the design and development of a structurally optimized mobile robotic system with a machine learning-based energy-efficient path planning framework. Topology optimization (TO) and finite element analysis (FEA) were applied to reduce structural weight while maintaining mechanical integrity. The optimized components were manufactured using Fused Deposition Modeling (FDM) with ABS (Acrylonitrile Butadiene Styrene) material. A custom power analysis tool was developed to compare energy consumption between the optimized and initial designs. Real-world current consumption data were collected under various terrain conditions, including inclined surfaces, vibration-inducing obstacles, gravel, and direction-altering barriers. Based on this dataset, a path planning model was developed using machine learning algorithms, capable of simultaneously optimizing both energy efficiency and path length to reach a predefined target. Unlike prior works that focus separately on structural optimization or learning-based navigation, this study integrates both domains within a single real-world robotic platform. Performance evaluations demonstrated superior results compared to traditional planning methods, which typically optimize distance or energy independently and lack real-time consumption feedback. The proposed framework reduces total energy consumption by 5.8%, cuts prototyping time by 56%, and extends mission duration by ~20%, highlighting the benefits of jointly applying TO and ML for sustainable and energy-aware robotic design. This integrated approach addresses a critical gap in the literature by demonstrating that mechanical light-weighting and intelligent path planning can be co-optimized in a deployable robotic system using empirical energy data. Full article
(This article belongs to the Special Issue Design and Manufacturing: An Industry 4.0 Perspective)
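The planner above jointly weighs energy and path length. One conventional way to sketch that trade-off (not the paper's learned model) is a Dijkstra search whose edge weight blends a per-cell energy estimate with unit distance; the grid, the terrain costs, and the blending factor `alpha` are all hypothetical:

```python
import heapq

def energy_aware_path(grid_cost, start, goal, alpha=0.7):
    """Dijkstra over a 4-connected grid; each move's weight blends the
    destination cell's energy estimate with unit distance via alpha."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                w = alpha * grid_cost[v[0]][v[1]] + (1 - alpha) * 1.0
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, (r, c)
                    heapq.heappush(pq, (d + w, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# 3x3 terrain map: the center cell is expensive (e.g. gravel).
terrain = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
print(energy_aware_path(terrain, (0, 0), (2, 2)))
```

With these costs the search routes around the expensive center cell, the same qualitative behavior the paper obtains from its measured current-consumption data.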

9 pages, 2459 KiB  
Proceeding Paper
Beyond the Red and Green: Exploring the Capabilities of Smart Traffic Lights in Malaysia
by Mohd Fairuz Muhamad@Mamat, Mohamad Nizam Mustafa, Lee Choon Siang, Amir Izzuddin Hasani Habib and Azimah Mohd Hamdan
Eng. Proc. 2025, 102(1), 4; https://doi.org/10.3390/engproc2025102004 - 22 Jul 2025
Viewed by 231
Abstract
Traffic congestion poses a significant challenge to modern urban environments, impacting both driver satisfaction and road safety. This paper investigates the effectiveness of a smart traffic light system (STL), a solution developed under the Intelligent Transportation System (ITS) initiative by the Ministry of Works Malaysia, to address these issues in Malaysia. The system integrates a network of sensors, AI-enabled cameras, and Automatic Number Plate Recognition (ANPR) technology to gather real-time data on traffic volume and vehicle classification at congested intersections. These data are used to dynamically adjust traffic light timings, prioritizing traffic flow on heavily congested roads while maintaining safety standards. To evaluate the system's performance, a comprehensive study was conducted at a selected intersection: traffic patterns were automatically analyzed using camera systems, and average travel time from the start to the end intersection was measured and compared between the STL and traditional traffic signal systems. Preliminary findings indicate that the STL significantly reduces travel times and improves overall traffic flow at the intersection, with average travel time reductions ranging from 7.1% to 28.6%, depending on site-specific factors. While further research is necessary to quantify the full extent of the system's impact, these initial results demonstrate the promising potential of STL technology to move beyond traditional traffic signal functionality and deliver more efficient, safer roadways and enhanced urban mobility. Full article
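The STL adjusts signal timings from measured volumes; the paper does not describe its control law at this level, so the proportional green-time split below is only an illustration of the idea, with `cycle`, `min_green`, and the vehicle counts all invented:

```python
def allocate_green(volumes, cycle=120, min_green=10):
    """Split a fixed cycle (seconds) among approaches in proportion to
    measured volume, with a safety floor of min_green per approach."""
    spare = cycle - min_green * len(volumes)
    total = sum(volumes)
    return [min_green + spare * v / total for v in volumes]

# Hypothetical vehicle counts from AI cameras on four approaches.
greens = allocate_green([120, 45, 80, 15])
print([round(g, 1) for g in greens])
```

The heavily loaded approach receives the largest share of the cycle while every approach keeps its safety minimum, mirroring the "prioritize congested roads while maintaining safety standards" behavior described above.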

17 pages, 1927 KiB  
Article
ConvTransNet-S: A CNN-Transformer Hybrid Disease Recognition Model for Complex Field Environments
by Shangyun Jia, Guanping Wang, Hongling Li, Yan Liu, Linrong Shi and Sen Yang
Plants 2025, 14(15), 2252; https://doi.org/10.3390/plants14152252 - 22 Jul 2025
Viewed by 324
Abstract
To address the challenges of low recognition accuracy and substantial model complexity in crop disease identification models operating in complex field environments, this study proposed a novel hybrid model named ConvTransNet-S, which integrates Convolutional Neural Networks (CNNs) and transformers for crop disease identification tasks. Unlike existing hybrid approaches, ConvTransNet-S introduces three key innovations. First, a Local Perception Unit (LPU) and Lightweight Multi-Head Self-Attention (LMHSA) modules synergistically enhance the extraction of fine-grained plant disease details and model global dependency relationships, respectively. Second, an Inverted Residual Feed-Forward Network (IRFFN) optimizes the feature propagation path, enhancing the model's robustness against interferences such as lighting variations and leaf occlusions. This combination of an LPU, LMHSA, and an IRFFN achieves a dynamic equilibrium between local texture perception and global context modeling, effectively resolving the trade-offs inherent in standalone CNNs or transformers. Finally, a phased architecture design efficiently fuses multi-scale disease features, enhancing feature discriminability while reducing model complexity. The experimental results indicated that ConvTransNet-S achieved a recognition accuracy of 98.85% on the PlantVillage public dataset, with only 25.14 million parameters, a computational load of 3.762 GFLOPs, and an inference time of 7.56 ms. Testing on a self-built in-field complex-scene dataset comprising 10,441 images showed that ConvTransNet-S achieved an accuracy of 88.53%, improvements of 14.22%, 2.75%, and 0.34% over EfficientNetV2, Vision Transformer, and Swin Transformer, respectively. Furthermore, ConvTransNet-S achieved up to 14.22% higher disease recognition accuracy under complex background conditions while reducing the parameter count by 46.8%. This confirms that its multi-scale feature mechanism can effectively distinguish disease features from background features, providing a novel technical approach for disease diagnosis in complex agricultural scenarios and demonstrating significant application value for intelligent agricultural management. Full article
(This article belongs to the Section Plant Modeling)

19 pages, 1563 KiB  
Review
Autonomous Earthwork Machinery for Urban Construction: A Review of Integrated Control, Fleet Coordination, and Safety Assurance
by Zeru Liu and Jung In Kim
Buildings 2025, 15(14), 2570; https://doi.org/10.3390/buildings15142570 - 21 Jul 2025
Viewed by 225
Abstract
Autonomous earthwork machinery is gaining traction as a means to boost productivity and safety on space-constrained urban sites, yet the fast-growing literature has not been fully integrated. To clarify current knowledge, we systematically searched Scopus and screened 597 records, retaining 157 peer-reviewed papers (2015–March 2025) that address autonomy, integrated control, or risk mitigation for excavators, bulldozers, and loaders. Descriptive statistics, VOSviewer mapping, and qualitative synthesis show the output rising rapidly and peaking at 30 papers in 2024, led by China, Korea, and the USA. Four tightly linked themes dominate: perception-driven machine autonomy, IoT-enabled integrated control systems, multi-sensor safety strategies, and the first demonstrations of fleet-level collaboration (e.g., coordinated excavator clusters and unmanned aerial vehicle and unmanned ground vehicle (UAV–UGV) site preparation). Advances include centimeter-scale path tracking, real-time vision-light detection and ranging (LiDAR) fusion and geofenced safety envelopes, but formal validation protocols and robust inter-machine communication remain open challenges. The review distils five research priorities, including adaptive perception and artificial intelligence (AI), digital-twin integration with building information modeling (BIM), cooperative multi-robot planning, rigorous safety assurance, and human–automation partnership that must be addressed to transform isolated prototypes into connected, self-optimizing fleets capable of delivering safer, faster, and more sustainable urban construction. Full article
(This article belongs to the Special Issue Automation and Robotics in Building Design and Construction)
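Among the safety strategies the review surveys are geofenced safety envelopes. A standard ray-casting point-in-polygon check sketches how a machine's position could be tested against such an envelope; the site coordinates and zone shape are hypothetical, and real systems layer this on top of sensing and communication safeguards:

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test for a machine's (x, y)
    position against a geofenced safety envelope."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray extending from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular work zone in site coordinates (meters).
zone = [(0, 0), (50, 0), (50, 30), (0, 30)]
print(inside_geofence((25, 15), zone), inside_geofence((60, 15), zone))
```

An excavator leaving the envelope (second query) would trigger the kind of stop or alert behavior the reviewed safety systems describe.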

25 pages, 5160 KiB  
Review
A Technological Review of Digital Twins and Artificial Intelligence for Personalized and Predictive Healthcare
by Silvia L. Chaparro-Cárdenas, Julian-Andres Ramirez-Bautista, Juan Terven, Diana-Margarita Córdova-Esparza, Julio-Alejandro Romero-Gonzalez, Alfonso Ramírez-Pedraza and Edgar A. Chavez-Urbiola
Healthcare 2025, 13(14), 1763; https://doi.org/10.3390/healthcare13141763 - 21 Jul 2025
Viewed by 531
Abstract
Digital transformation is reshaping the healthcare field by streamlining diagnostic workflows and improving disease management. Within this transformation, Digital Twins (DTs), which are virtual representations of physical systems continuously updated by real-world data, stand out for their ability to capture the complexity of human physiology and behavior. When coupled with Artificial Intelligence (AI), DTs enable data-driven experimentation, precise diagnostic support, and predictive modeling without posing direct risks to patients. However, their integration into healthcare requires careful consideration of ethical, regulatory, and safety constraints in light of the sensitivity and nonlinear nature of human data. In this review, we examine recent progress in DTs over the past seven years and explore broader trends in AI-augmented DTs, focusing particularly on movement rehabilitation. Our goal is to provide a comprehensive understanding of how DTs bolstered by AI can transform healthcare delivery, medical research, and personalized care. We discuss implementation challenges such as data privacy, clinical validation, and scalability along with opportunities for more efficient, safe, and patient-centered healthcare systems. By addressing these issues, this review highlights key insights and directions for future research to guide the proactive and ethical adoption of DTs in healthcare. Full article
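The abstract's core idea, a virtual representation continuously updated by real-world data and usable for prediction without risk to the patient, can be sketched minimally as follows. The class name, the smoothing filter, the `heart_rate` field, and the risk threshold are illustrative assumptions, not a clinical model.

```python
class PatientTwin:
    """Minimal digital-twin sketch: a virtual state kept in sync with
    streamed measurements and queried for risk prediction offline."""

    def __init__(self, smoothing=0.3):
        self.smoothing = smoothing
        self.state = {}  # latest smoothed vitals, keyed by signal name

    def sync(self, measurement):
        # Continuously update the twin from real-world data
        # (exponential moving average as a stand-in for state estimation).
        for key, value in measurement.items():
            prev = self.state.get(key, value)
            self.state[key] = (1 - self.smoothing) * prev + self.smoothing * value

    def predict_risk(self):
        # Toy predictive rule on the smoothed signal; real twins would run
        # validated physiological or AI models here.
        hr = self.state.get("heart_rate", 0.0)
        return "elevated" if hr > 100 else "normal"
```

The point of the sketch is the separation of concerns the review describes: data ingestion (`sync`) keeps the twin current, while experimentation and prediction (`predict_risk`) run on the virtual copy rather than the patient.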
22 pages, 14158 KiB  
Article
Enhanced YOLOv8 for Robust Pig Detection and Counting in Complex Agricultural Environments
by Jian Li, Wenkai Ma, Yanan Wei and Tan Wang
Animals 2025, 15(14), 2149; https://doi.org/10.3390/ani15142149 - 21 Jul 2025
Abstract
Accurate pig counting is crucial for precision livestock farming, enabling optimized feeding management and health monitoring. Detection-based counting methods face significant challenges due to mutual occlusion, varying illumination conditions, diverse pen configurations, and substantial variations in pig densities. Previous approaches often struggle with complex agricultural environments where lighting conditions, pig postures, and crowding levels create challenging detection scenarios. To address these limitations, we propose EAPC-YOLO (enhanced adaptive pig counting YOLO), a robust architecture integrating density-aware processing with advanced detection optimizations. The method consists of (1) an enhanced YOLOv8 network incorporating multiple architectural improvements for better feature extraction and object localization, namely DCNv4 deformable convolutions for irregular pig postures, BiFPN bidirectional feature fusion for multi-scale information integration, EfficientViT linear attention for computational efficiency, and PIoU v2 loss for improved overlap handling; and (2) a density-aware post-processing module with intelligent NMS strategies that adapt to different crowding scenarios. Experimental results on a comprehensive dataset spanning diverse agricultural scenarios (nighttime, controlled indoor, and natural daylight environments with density variations from 4 to 30 pigs) demonstrate our method achieves 94.2% mAP@0.5 for detection performance and 96.8% counting accuracy, representing 12.3% and 15.7% improvements compared to the strongest baseline, YOLOv11n. This work enables robust, accurate pig counting across challenging agricultural environments, supporting precision livestock management. Full article
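The density-aware NMS idea in the abstract can be sketched as follows: estimate scene crowding from the detections themselves, then loosen the IoU suppression threshold in crowded pens so genuinely overlapping animals are not merged into one detection. This is a minimal illustration under assumed thresholds, not the paper's actual EAPC-YOLO module.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def adaptive_nms(boxes, scores, sparse_thr=0.45, crowded_thr=0.65, crowd_cutoff=0.3):
    """Greedy NMS whose IoU threshold adapts to estimated crowding."""
    if not boxes:
        return []
    # Crude crowding estimate: mean IoU over all detection pairs.
    pairs = [(i, j) for i in range(len(boxes)) for j in range(i + 1, len(boxes))]
    mean_iou = (sum(iou(boxes[i], boxes[j]) for i, j in pairs) / len(pairs)
                if pairs else 0.0)
    thr = crowded_thr if mean_iou > crowd_cutoff else sparse_thr
    # Standard greedy suppression at the chosen threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thr]
    return keep
```

In a sparse pen the stricter threshold removes duplicate boxes aggressively; in a crowded pen the looser threshold keeps adjacent, partially overlapping pigs as separate detections, which is what drives the counting accuracy.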
40 pages, 16352 KiB  
Review
Surface Protection Technologies for Earthen Sites in the 21st Century: Hotspots, Evolution, and Future Trends in Digitalization, Intelligence, and Sustainability
by Yingzhi Xiao, Yi Chen, Yuhao Huang and Yu Yan
Coatings 2025, 15(7), 855; https://doi.org/10.3390/coatings15070855 - 20 Jul 2025
Abstract
As vital material carriers of human civilization, earthen sites are experiencing continuous surface deterioration under the combined effects of weathering and anthropogenic damage. Traditional surface conservation techniques, due to their poor compatibility and limited reversibility, struggle to address the compound challenges of micro-scale degradation and macro-scale deformation. With the deep integration of digital twin technology, spatial information technologies, intelligent systems, and sustainable concepts, earthen site surface conservation technologies are transitioning from single-point applications to multidimensional integration. However, challenges remain in terms of the insufficient systematization of technology integration and the absence of a comprehensive interdisciplinary theoretical framework. Based on the dual-core databases of Web of Science and Scopus, this study systematically reviews the technological evolution of surface conservation for earthen sites between 2000 and 2025. CiteSpace 6.2 R4 and VOSviewer 1.6 were used for bibliometric visualization analysis, which was innovatively combined with manual close reading of the key literature and GPT-assisted semantic mining (error rate < 5%) to efficiently identify core research themes and infer deeper trends. 
The results reveal the following: (1) technological evolution follows a three-stage trajectory—from early point-based monitoring technologies, such as remote sensing (RS) and the Global Positioning System (GPS), to spatial modeling technologies, such as light detection and ranging (LiDAR) and geographic information systems (GIS), and, finally, to today’s integrated intelligent monitoring systems based on multi-source fusion; (2) the key surface technology system comprises GIS-based spatial data management, high-precision modeling via LiDAR, 3D reconstruction using oblique photogrammetry, and building information modeling (BIM) for structural protection, while cutting-edge areas focus on digital twin (DT) and the Internet of Things (IoT) for intelligent monitoring, augmented reality (AR) for immersive visualization, and blockchain technologies for digital authentication; (3) future research is expected to integrate big data and cloud computing to enable multidimensional prediction of surface deterioration, while virtual reality (VR) will overcome spatial–temporal limitations and push conservation paradigms toward automation, intelligence, and sustainability. This study, grounded in the technological evolution of surface protection for earthen sites, constructs a triadic framework of “intelligent monitoring–technological integration–collaborative application,” revealing the integration needs between DT and VR for surface technologies. It provides methodological support for addressing current technical bottlenecks and lays the foundation for dynamic surface protection, solution optimization, and interdisciplinary collaboration. Full article