Search Results (366)

Search Parameters:
Keywords = sim–real

21 pages, 3473 KiB  
Article
Reinforcement Learning for Bipedal Jumping: Integrating Actuator Limits and Coupled Tendon Dynamics
by Yudi Zhu, Xisheng Jiang, Xiaohang Ma, Jun Tang, Qingdu Li and Jianwei Zhang
Mathematics 2025, 13(15), 2466; https://doi.org/10.3390/math13152466 - 31 Jul 2025
Abstract
In high-dynamic bipedal locomotion control, robotic systems are often constrained by motor torque limitations, particularly during explosive tasks such as jumping. One of the key challenges in reinforcement learning lies in bridging the sim-to-real gap, which mainly stems from both inaccuracies in simulation models and the limitations of motor torque output, ultimately leading to the failure of deploying learned policies in real-world systems. Traditional RL methods usually focus on peak torque limits but ignore that motor torque changes with speed. By only limiting peak torque, they prevent the torque from adjusting dynamically based on velocity, which can reduce the system’s efficiency and performance in high-speed tasks. To address these issues, this paper proposes a reinforcement learning jump-control framework tailored for tendon-driven bipedal robots, which integrates dynamic torque boundary constraints and torque error-compensation modeling. First, we developed a torque transmission coefficient model based on the tendon-driven mechanism, taking into account tendon elasticity and motor-control errors, which significantly improves the modeling accuracy. Building on this, we derived a dynamic joint torque limit that adapts to joint velocity, and designed a torque-aware reward function within the reinforcement learning environment, aimed at encouraging the policy to implicitly learn and comply with physical constraints during training, effectively bridging the gap between simulation and real-world performance. Hardware experimental results demonstrate that the proposed method effectively satisfies actuator safety limits while achieving more efficient and stable jumping behavior. This work provides a general and scalable modeling and control framework for learning high-dynamic bipedal motion under complex physical constraints. Full article
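The velocity-dependent torque bound described above can be sketched as a linear torque-speed envelope plus a reward penalty for exceeding it. A minimal sketch; the motor parameters, penalty gain, and function names are hypothetical, not taken from the paper:

```python
def torque_limit(omega, tau_stall=40.0, omega_noload=30.0):
    """Dynamic torque bound from a linear torque-speed curve:
    available torque shrinks as joint speed rises (hypothetical parameters)."""
    return max(0.0, tau_stall * (1.0 - abs(omega) / omega_noload))

def torque_penalty(tau_cmd, omega, k=0.1):
    """Torque-aware reward term: penalize commands beyond the dynamic limit,
    nudging the policy to respect actuator physics during training."""
    excess = max(0.0, abs(tau_cmd) - torque_limit(omega))
    return -k * excess
```

A policy that commands 25 N·m at 15 rad/s would be penalized, since only 20 N·m is available at that speed under this envelope.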

24 pages, 783 KiB  
Article
An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing
by Min Cui and Yipeng Wang
Sensors 2025, 25(15), 4705; https://doi.org/10.3390/s25154705 - 30 Jul 2025
Abstract
Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users’ specific QoS requirements still remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to avoid the algorithm falling into local optimal solutions. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (Cybershake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first. Full article
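The Lévy-flight perturbation that HLWOA adds to WOA is commonly generated with Mantegna's algorithm, and the abstract notes the HEFT schedule seeds the initial population. A sketch under those assumptions (function names are ours, not the paper's):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Mantegna's algorithm: draw one heavy-tailed step for a Levy flight,
    used to kick search agents out of local optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def init_population(heft_solution, pop_size, random_solution):
    """Seed WOA with the HEFT schedule to speed convergence, filling the
    rest of the population with random task-to-VM mappings."""
    return [heft_solution] + [random_solution() for _ in range(pop_size - 1)]
```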
(This article belongs to the Section Internet of Things)

22 pages, 3950 KiB  
Article
A Deep Reinforcement Learning-Based Concurrency Control of Federated Digital Twin for Software-Defined Manufacturing Systems
by Rubab Anwar, Jin-Woo Kwon and Won-Tae Kim
Appl. Sci. 2025, 15(15), 8245; https://doi.org/10.3390/app15158245 - 24 Jul 2025
Abstract
Modern manufacturing demands real-time, scalable coordination that legacy manufacturing management systems cannot provide. Digital transformation encompasses the entire manufacturing infrastructure, which can be represented by digital twins for facilitating efficient monitoring, prediction, and optimization of factory operations. A Federated Digital Twin (FDT) emerges by combining heterogeneous digital twins, enabling real-time collaboration, data sharing, and collective decision-making. However, deploying FDTs introduces new concurrency control challenges, such as priority inversion and synchronization failures, which can potentially cause process delays, missed deadlines, and reduced customer satisfaction. Traditional concurrency control approaches in the computing domain, due to their reliance on static priority assignments and centralized control, are inadequate for managing dynamic, real-time conflicts effectively in real production lines. To address these challenges, this study proposes a novel concurrency control framework combining Deep Reinforcement Learning with the Priority Ceiling Protocol. Using SimPy-based discrete-event simulations, which accurately model the asynchronous nature of FDT interactions, the proposed approach adaptively optimizes resource allocation and effectively mitigates priority inversion. The results demonstrate that, compared with the rule-based PCP controller, the hybrid DRLCC improves completion time by between 1.51% and 24.27% and urgent-job delay by between 2.18% and 6.65%, while keeping priority inversions of lower-priority tasks low. Full article
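The Priority Ceiling Protocol at the core of that framework grants a lock only when the requester's priority exceeds the highest ceiling among resources locked by other tasks. A minimal admission-test sketch (numeric priorities with higher = more urgent is our simplification):

```python
def can_lock(task_priority, ceilings_held_by_others):
    """PCP admission test: a task may acquire a resource only if its priority
    is strictly higher than every ceiling of resources currently locked by
    other tasks, which bounds blocking time and prevents deadlock."""
    system_ceiling = max(ceilings_held_by_others, default=float("-inf"))
    return task_priority > system_ceiling
```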

22 pages, 2538 KiB  
Article
Enhancing Supervisory Control with GPenSIM
by Reggie Davidrajuh, Shuanglin Tang and Yuming Feng
Machines 2025, 13(8), 641; https://doi.org/10.3390/machines13080641 - 23 Jul 2025
Abstract
Supervisory control theory (SCT), based on Petri nets, offers a robust framework for modeling and controlling discrete-event systems but faces significant challenges in scalability, expressiveness, and practical implementation. This paper introduces General-purpose Petri Net Simulator and Real-Time Controller (GPenSIM), a MATLAB version 24.1.0.2689473 (R2024a) Update 6-based modular Petri net framework, as a novel solution to these limitations. GPenSIM leverages modular decomposition to mitigate state-space explosion, enabling parallel execution of weakly coupled Petri modules on multi-core systems. Its programmable interfaces (pre-processors and post-processors) extend classical Petri nets’ expressiveness by enforcing nonlinear, temporal, and conditional constraints through custom MATLAB scripts, addressing the rigidity of traditional linear constraints. Furthermore, the integration of GPenSIM with MATLAB facilitates real-time control synthesis, performance optimization, and seamless interaction with external hardware and software, bridging the gap between theoretical models and industrial applications. Empirical studies demonstrate the efficacy of GPenSIM in reconfigurable manufacturing systems, where it reduced downtime by 30%, and in distributed control scenarios, where decentralized modules minimized synchronization delays. Grounded in systems theory principles of interconnectedness, GPenSIM emphasizes dynamic relationships between components, offering a scalable, adaptable, and practical tool for supervisory control. This work highlights the potential of GPenSIM to overcome longstanding limitations in SCT, providing a versatile platform for both academic research and industrial deployment. Full article
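The classical token-game semantics that GPenSIM's pre- and post-processors extend can be stated in a few lines. A plain-Python sketch of the firing rule, not GPenSIM's MATLAB API:

```python
def fire(marking, transition):
    """Fire a Petri net transition if enabled: consume one token from each
    input place and add one to each output place; return the new marking
    (unchanged if the transition is not enabled)."""
    if any(marking[p] < 1 for p in transition["inputs"]):
        return marking
    new = dict(marking)
    for p in transition["inputs"]:
        new[p] -= 1
    for p in transition["outputs"]:
        new[p] += 1
    return new
```

GPenSIM's pre-processors would hook in just before the enabledness check to impose extra (e.g., temporal or conditional) constraints.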
(This article belongs to the Section Automation and Control Systems)

23 pages, 6199 KiB  
Article
PDAA: An End-to-End Polygon Dynamic Adjustment Algorithm for Building Footprint Extraction
by Longjie Luo, Jiangchen Cai, Bin Feng and Liufeng Tao
Remote Sens. 2025, 17(14), 2495; https://doi.org/10.3390/rs17142495 - 17 Jul 2025
Abstract
Buildings are a significant component of urban space and are essential to smart cities, catastrophe monitoring, and land use planning. However, precisely extracting building polygons from remote sensing images remains difficult because of the variety of building designs and intricate backgrounds. This paper proposes an end-to-end polygon dynamic adjustment algorithm (PDAA) to improve the accuracy and geometric consistency of building contour extraction by dynamically generating and optimizing polygon vertices. The method first locates building instances through the region of interest (RoI) to generate initial polygons, and then uses four core modules for collaborative optimization: (1) the feature enhancement module captures local detail features to improve the robustness of vertex positioning; (2) the contour vertex tuning module fine-tunes vertex coordinates through displacement prediction to enhance geometric accuracy; (3) the learnable redundant vertex removal module screens key vertices based on a classification mechanism to eliminate redundancy; and (4) the missing vertex completion module iteratively restores missed vertices to ensure the integrity of complex contours. PDAA dynamically adjusts the number of vertices to adapt to the geometric characteristics of different buildings, while simplifying the prediction process and reducing computational complexity. Experiments on public datasets such as WHU, Vaihingen, and Inria show that PDAA outperforms existing methods by at least 2% in average precision (AP) and also leads in polygon similarity (PolySim), with generated contours closer to the real building geometry. On the WHU dataset, PDAA achieves 75.4% AP and 84.9% PolySim, effectively solving the problems of redundant vertices and contour smoothing, and providing high-precision building vector data support for scenarios such as smart cities and emergency response. Full article
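PDAA learns which vertices are redundant; a purely geometric stand-in for that classifier simply drops vertices that are collinear with their neighbours. This cross-product test is our simplification, not the paper's learned module:

```python
def prune_collinear(poly, tol=1e-6):
    """Drop polygon vertices that lie on the segment between their
    neighbours (zero cross product), leaving the contour shape unchanged --
    a geometric stand-in for a learned redundant-vertex classifier."""
    out = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = poly[i - 1], poly[i], poly[(i + 1) % n]
        cross = (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0)
        if abs(cross) > tol:
            out.append(poly[i])
    return out
```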

22 pages, 11043 KiB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation. Full article
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)

22 pages, 76473 KiB  
Article
Modeling Renewable Energy Feed-In Dynamics in a German Metropolitan Region
by Sebastian Bottler and Christian Weindl
Processes 2025, 13(7), 2270; https://doi.org/10.3390/pr13072270 - 16 Jul 2025
Abstract
This study presents community-specific modeling approaches for simulating power injection from photovoltaic and wind energy systems in a German metropolitan region. Developed within the EMN_SIM project and based on openly accessible datasets, the methods are broadly transferable across Germany. For PV, a cluster-based model groups systems by geographic and technical characteristics, using real weather data to reduce computational effort. Validation against measured specific yields shows strong agreement, confirming energetic accuracy. The wind model operates on a per-turbine basis, integrating technical specifications, land use, and high-resolution wind data. Energetic validation indicates good consistency with Bavarian reference values, while power-based comparisons with selected turbines show reasonable correlation, subject to expected limitations in wind data resolution. The resulting high-resolution generation profiles reveal spatial and temporal patterns valuable for grid planning and targeted policy design. While further validation with additional measurement data could enhance model precision, the current results already offer a robust foundation for urban energy system analyses and future grid integration studies. Full article
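At its simplest, the energetic validation such a PV model performs reduces to integrating irradiance against cluster area and efficiency. A deliberately minimal sketch; the parameter values are illustrative, not the EMN_SIM model's:

```python
def pv_energy_kwh(irradiance_w_m2, area_m2, efficiency, dt_h=1.0):
    """Cluster-level PV yield in kWh from an hourly irradiance series
    (W/m^2): a flat area x efficiency model, ignoring temperature and
    inverter effects (hypothetical simplification)."""
    return sum(g * area_m2 * efficiency * dt_h for g in irradiance_w_m2) / 1000.0
```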
(This article belongs to the Special Issue Recent Advances in Energy and Dynamical Systems)

23 pages, 951 KiB  
Article
Multi-Objective Evolution and Swarm-Integrated Optimization of Manufacturing Processes in Simulation-Based Environments
by Panagiotis D. Paraschos, Georgios Papadopoulos and Dimitrios E. Koulouriotis
Machines 2025, 13(7), 611; https://doi.org/10.3390/machines13070611 - 16 Jul 2025
Abstract
This paper presents a digital twin-driven multi-objective optimization approach for enhancing the performance and productivity of a multi-product manufacturing system under complex operational challenges. More specifically, the concept of digital twin is applied to virtually replicate a physical system that leverages real-time data fusion from Internet of Things devices or sensors. JaamSim serves as the platform for modeling the digital twin, simulating the dynamics of the manufacturing system. The implemented digital twin is a manufacturing system that incorporates a three-stage production line to complete and stockpile two gear types. The production line is subject to unpredictable events, including equipment breakdowns, maintenance, and product returns. The stochasticity of these real-world-like events is modeled using a normal distribution. Manufacturing control strategies, such as CONWIP and Kanban, are implemented to evaluate the impact on the performance of the manufacturing system in a simulation environment. The evaluation is performed based on three key indicators: service level, the amount of work-in-progress items, and overall system profitability. Multiple objective functions are formulated to optimize the behavior of the system by reducing the work-in-progress items and improving both cost-effectiveness and service level. To this end, the proposed approach couples the JaamSim-based digital twins with evolutionary and swarm-based algorithms to carry out the multi-objective optimization under varying conditions. In this sense, the present work offers an early demonstration of an industrial digital twin, implementing an offline simulation-based manufacturing environment that utilizes optimization algorithms. Results demonstrate the trade-offs between the employed strategies and offer insights on the implementation of hybrid production control systems in dynamic environments. Full article
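The CONWIP policy evaluated in the simulation caps total work-in-progress on the line. A minimal release rule looks like this (function and parameter names are ours, not the paper's):

```python
def conwip_release(wip_now, wip_cap, queue):
    """CONWIP: release jobs from the backlog into the line only while total
    WIP stays below the cap; returns the jobs released this step and
    consumes them from the queue."""
    released = []
    while queue and wip_now + len(released) < wip_cap:
        released.append(queue.pop(0))
    return released
```

Kanban differs in that each production stage holds its own card count rather than one line-wide cap.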
(This article belongs to the Section Advanced Manufacturing)

21 pages, 12122 KiB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration that deploys local discriminators on partitioned feature maps. This mechanism directly aligns texture, shadow, and illumination discrepancies which challenge conventional global alignment methods. Extensive experiments on UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, particularly on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception. Full article
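The contrastive branch of a dual-branch pretraining scheme like RA3T's typically optimizes an InfoNCE-style objective. A single-anchor sketch (the temperature value is illustrative, and the function name is ours):

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the positive
    pair's similarity against the negatives; tau is the temperature.
    Computed with the max-subtraction trick for numerical stability."""
    logits = [sim_pos / tau] + [s / tau for s in sim_negs]
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```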
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)

19 pages, 1293 KiB  
Article
Open-Source Real-Time SDR Platform for Rapid Prototyping of LANS AFS Receiver
by Rion Sobukawa and Takuji Ebinuma
Aerospace 2025, 12(7), 620; https://doi.org/10.3390/aerospace12070620 - 10 Jul 2025
Abstract
The Lunar Augmented Navigation Service (LANS) is the lunar equivalent of GNSS for future lunar explorations. It offers users accurate position, navigation, and timing (PNT) capabilities on and around the Moon. The Augmented Forward Signal (AFS) is a standardized signal structure for LANS, and its recommended standard was published online on 7 February 2025. This work presents software-defined radio (SDR) implementations of the LANS AFS simulator and receiver, which were rapidly developed within a month of the signal specification release. Based on open-source GNSS software, including GPS-SDR-SIM and Pocket SDR, our system provides a valuable platform for future algorithm research and hardware-in-the-loop testing. The receiver can operate on embedded platforms, such as the Raspberry Pi 5, in real-time. This feature makes it suitable for lunar surface applications, where conventional PC-based SDR systems are impractical due to their size, weight, and power requirements. Our approach demonstrates how open-source SDR frameworks can be rapidly applied to emerging satellite navigation signals, even for extraterrestrial PNT applications. Full article
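Acquiring a spread-spectrum signal like AFS comes down to finding the code-phase peak of a circular correlation between the received samples and a local PRN replica. A toy example with a four-chip code; a real receiver correlates over thousands of chips and also searches Doppler bins:

```python
def correlate_circular(signal, code):
    """Circular correlation at every lag: the peak reveals the code phase
    (delay) of the received PRN sequence."""
    n = len(code)
    return [sum(signal[(i + k) % n] * code[i] for i in range(n))
            for k in range(n)]
```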
(This article belongs to the Section Astronautics & Space Science)

32 pages, 12851 KiB  
Article
Research on Autonomous Vehicle Lane-Keeping and Navigation System Based on Deep Reinforcement Learning: From Simulation to Real-World Application
by Chia-Hsin Cheng, Hsiang-Hao Lin and Yu-Yong Luo
Electronics 2025, 14(13), 2738; https://doi.org/10.3390/electronics14132738 - 7 Jul 2025
Abstract
In recent years, rapid advances in computing power have accelerated deep learning research across fields such as computer vision, natural language processing, and medical imaging, and self-driving cars are no exception: many technology companies and automobile manufacturers have invested substantial resources in autonomous driving, with most car manufacturers already reaching the L2 level of the self-driving classification standards and moving towards L3 and L4. However, existing autonomous driving technologies still face significant challenges in achieving robust lane-keeping and navigation performance, especially when transferring learned models from simulation to real-world environments due to environmental complexity and domain gaps. This study applies deep reinforcement learning (DRL) to train autonomous vehicles with lane-keeping and navigation capabilities. Through simulation training and Sim2Real strategies, including domain randomization and CycleGAN, the trained models are evaluated in real-world environments to validate performance. The results demonstrate the feasibility of DRL-based autonomous driving and highlight the challenges in transferring models from simulation to reality. Full article
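Domain randomization, one of the Sim2Real strategies named above, perturbs the appearance of training frames so the policy does not overfit to the simulator's rendering. A minimal sketch on a grayscale frame represented as nested lists; the brightness range and noise level are hypothetical:

```python
import random

def randomize_frame(img_rows, brightness=(0.7, 1.3), noise=0.02):
    """Domain randomization for one grayscale frame (rows of floats in
    [0, 1]): apply a random brightness scale plus per-pixel noise, then
    clamp back into [0, 1]."""
    scale = random.uniform(*brightness)
    return [[min(1.0, max(0.0, p * scale + random.uniform(-noise, noise)))
             for p in row] for row in img_rows]
```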
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

17 pages, 2210 KiB  
Article
An Adaptive Vehicle Stability Enhancement Controller Based on Tire Cornering Stiffness Adaptations
by Jianbo Feng, Zepeng Gao and Bingying Guo
World Electr. Veh. J. 2025, 16(7), 377; https://doi.org/10.3390/wevj16070377 - 4 Jul 2025
Abstract
This study presents an adaptive integrated chassis control strategy for enhancing vehicle stability under different road conditions, specifically through the real-time estimation of tire cornering stiffness. A hierarchical control architecture is developed, combining active front steering (AFS) and direct yaw moment control (DYC). A recursive regularized weighted least squares algorithm is designed to estimate tire cornering stiffness from measurable vehicle states, eliminating the need for additional tire sensors. Leveraging this estimation, an adaptive sliding mode controller (ASMC) is proposed in the upper layer, where a novel self-tuning mechanism adjusts control parameters based on tire saturation levels and cornering stiffness variation trends. The lower-layer controller employs a weighted least squares allocation method to distribute control efforts while respecting physical and friction constraints. Co-simulations using MATLAB 2018a/Simulink and CarSim validate the effectiveness of the proposed framework under both high- and low-friction scenarios. Compared with conventional ASMC and DYC strategies, the proposed controller exhibits improved robustness, reduced sideslip, and enhanced trajectory tracking performance. The results demonstrate the significance of the real-time integration of tire dynamics into chassis control in improving vehicle handling and stability. Full article
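The recursive weighted least squares estimator at the heart of that stiffness-estimation step is, in its generic form, the standard forgetting-factor RLS update. A numpy sketch for any linear-in-parameters regression; the symbols are generic, not the paper's tire model:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One forgetting-factor RLS step: theta holds the parameter estimates,
    P the (scaled) inverse information matrix, lam the forgetting factor
    that discounts old data so the estimate can track slow variation."""
    x = x.reshape(-1, 1)
    denom = lam + (x.T @ P @ x).item()
    k = P @ x / denom                  # gain vector
    err = y - (x.T @ theta).item()     # innovation
    theta = theta + k * err
    P = (P - k @ x.T @ P) / lam
    return theta, P
```

In the paper's setting the regressor would be built from measurable vehicle states and the unknowns would be the front/rear cornering stiffnesses, with an added regularization term.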

19 pages, 2374 KiB  
Article
Analysis of Opportunities to Reduce CO2 and NOX Emissions Through the Improvement of Internal Inter-Operational Transport
by Szymon Pawlak, Tomasz Małysa, Agnieszka Fornalczyk, Agnieszka Sobianowska-Turek and Marzena Kuczyńska-Chałada
Sustainability 2025, 17(13), 5974; https://doi.org/10.3390/su17135974 - 29 Jun 2025
Abstract
The reduction of environmental pollutant emissions—including greenhouse gases, particulate matter, and other harmful substances—represents one of the foremost challenges in climate policy, economics, and industrial management today. Excessive emissions of CO2, NOX, and suspended particulates exert significant impacts on climate change as well as human health and welfare. Consequently, numerous studies and regulatory and technological initiatives are underway to mitigate these emissions. One critical area is intra-plant transport within manufacturing facilities, which, despite its localized scope, can substantially contribute to a company’s total emissions. This paper aims to assess the potential of computer simulation using FlexSim software as a decision-support tool for planning inter-operational transport, with a particular focus on environmental aspects. The study analyzes real operational data from a selected production plant (case study), concentrating on the optimization of the number of transport units, their routing, and the layout of workstations. It is hypothesized that reducing the number of trips, shortening transport routes, and efficiently utilizing transport resources can lead to lower emissions of carbon dioxide (CO2) and nitrogen oxides (NOX). The findings provide a basis for a broader adoption of digital tools in sustainable production planning, emphasizing the integration of environmental criteria into decision-making processes. Furthermore, the results offer a foundation for future analyses that consider the development of green transport technologies—such as electric and hydrogen-powered vehicles—in the context of their implementation in the internal logistics of manufacturing enterprises. Full article
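The emissions lever the study analyzes is linear: total CO2 or NOX scales with trips × route length × an emission factor, so cutting trip counts or shortening routes cuts emissions proportionally. A one-line model with illustrative placeholder factors, not the paper's measured values:

```python
def route_emissions_g(trips, km_per_trip, factor_g_per_km):
    """Total emissions in grams for one internal transport route:
    trips x distance per trip x per-km emission factor (linear model)."""
    return trips * km_per_trip * factor_g_per_km
```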

30 pages, 9159 KiB  
Article
Accuracy–Efficiency Trade-Off: Optimizing YOLOv8 for Structural Crack Detection
by Jiahui Zhang, Zoia Vladimirovna Beliaeva and Yue Huang
Sensors 2025, 25(13), 3873; https://doi.org/10.3390/s25133873 - 21 Jun 2025
Abstract
To address the accuracy–efficiency trade-off faced by deep learning models in structural crack detection, this paper proposes an optimized version of the YOLOv8 model. YOLO (You Only Look Once) is a real-time object detection algorithm known for its high speed and decent accuracy. To improve crack feature representation, the backbone is enhanced with the SimAM attention mechanism. A lightweight C3Ghost module reduces the parameter count and computation, while a bidirectional multi-scale feature fusion structure replaces the standard neck to enhance efficiency. Experimental results show that the proposed model achieves a mean Average Precision (mAP) of 88.7% at 0.5 IoU and 69.4% for mAP@0.5:0.95, with 12.3% fewer giga floating-point operations (GFLOPs) and faster inference. These improvements significantly enhance the detection of fine cracks while maintaining real-time performance, making the model suitable for engineering scenarios. Full article
(This article belongs to the Section Sensing and Imaging)
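The SimAM attention mentioned in the abstract is parameter-free and can be sketched directly from the closed-form energy function in the SimAM paper. The NumPy version below is an illustrative re-implementation, not the authors' code; the lambda value and tensor shapes are assumptions.

```python
# Hedged sketch of parameter-free SimAM attention on an NCHW feature map,
# following the closed-form inverse-energy formula from the SimAM paper.
import numpy as np

def simam(x: np.ndarray, lam: float = 1e-4) -> np.ndarray:
    """Reweight a feature map of shape (N, C, H, W) by per-position saliency."""
    n = x.shape[2] * x.shape[3] - 1
    mu = x.mean(axis=(2, 3), keepdims=True)
    d = (x - mu) ** 2                           # squared deviation per position
    v = d.sum(axis=(2, 3), keepdims=True) / n   # per-channel variance estimate
    e_inv = d / (4.0 * (v + lam)) + 0.5         # inverse energy: higher = more salient
    return x / (1.0 + np.exp(-e_inv))           # x * sigmoid(e_inv)

feat = np.random.randn(1, 8, 16, 16).astype(np.float32)
out = simam(feat)
assert out.shape == feat.shape
```

Because the sigmoid gate lies in (0, 1), SimAM only attenuates features, emphasizing positions that deviate most from the channel mean without adding any learnable parameters.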

22 pages, 11784 KiB  
Article
RST-YOLOv8: An Improved Chip Surface Defect Detection Model Based on YOLOv8
by Wenjie Tang, Yangjun Deng and Xu Luo
Sensors 2025, 25(13), 3859; https://doi.org/10.3390/s25133859 - 21 Jun 2025
Viewed by 661
Abstract
Surface defect detection in chips is crucial for ensuring product quality and reliability. This paper addresses the challenge of low identification accuracy in chip surface defect detection, which arises from the similarity of defect characteristics, small sizes, and significant scale differences. We propose an enhanced chip surface defect detection algorithm based on YOLOv8, termed RST-YOLOv8. This study introduces the C2f_RVB module, which incorporates RepViTBlock technology. This integration effectively optimizes feature representation while significantly reducing the model’s parameter count. By enhancing the expressive power of deep features, we achieve a marked improvement in the identification accuracy of small defect targets. Additionally, we employ the SimAM attention mechanism, enabling the model to learn three-dimensional attention information and thereby strengthening its perception of defect characteristics. To address missed and false detections of small targets in chip surface defect detection, we designed a task-aligned dynamic detection head (TADDH) that facilitates interaction between the localization and classification heads, improving small-target detection accuracy. Experimental evaluations on the PCB_DATASET show that our model improved mAP@0.5 by 10.3%. Furthermore, on the chip surface defect dataset, mAP@0.5 increased by 5.4%. At the same time, the model keeps both parameter count and GFLOPs under control, demonstrating a balance between high precision and a lightweight design. The experimental results show that RST-YOLOv8 holds a significant advantage in detection accuracy for chip surface defects compared to other models. It not only enhances detection accuracy but also achieves an effective balance between computational resource consumption and real-time performance, providing a practical technical pathway for chip surface defect detection tasks. Full article
(This article belongs to the Section Sensing and Imaging)
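The mAP@0.5 and mAP@0.5:0.95 figures reported in these abstracts rest on a box-overlap test. The sketch below shows the underlying IoU computation and the standard ten-threshold sweep used for mAP@0.5:0.95; the sample boxes are invented values, not data from either experiment.

```python
# Hedged sketch of the IoU check behind mAP@0.5 and mAP@0.5:0.95.
# Boxes use (x1, y1, x2, y2) corner format; sample values are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# mAP@0.5:0.95 averages AP over these ten IoU thresholds:
THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]

pred, gt = (10, 10, 30, 30), (12, 12, 32, 32)
score = iou(pred, gt)
# A prediction counts as a true positive at a given threshold when score >= threshold.
matches = [score >= t for t in THRESHOLDS]
```

Small defects are especially sensitive to this test: a few pixels of localization error shifts the IoU of a tiny box far more than that of a large one, which is why the detection-head improvements above target small objects.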
