Search Results (90)

Search Parameters:
Keywords = sim-to-real world

21 pages, 3473 KiB  
Article
Reinforcement Learning for Bipedal Jumping: Integrating Actuator Limits and Coupled Tendon Dynamics
by Yudi Zhu, Xisheng Jiang, Xiaohang Ma, Jun Tang, Qingdu Li and Jianwei Zhang
Mathematics 2025, 13(15), 2466; https://doi.org/10.3390/math13152466 - 31 Jul 2025
Abstract
In high-dynamic bipedal locomotion control, robotic systems are often constrained by motor torque limitations, particularly during explosive tasks such as jumping. One of the key challenges in reinforcement learning lies in bridging the sim-to-real gap, which mainly stems from both inaccuracies in simulation models and the limitations of motor torque output, ultimately leading to the failure of deploying learned policies in real-world systems. Traditional RL methods usually focus on peak torque limits but ignore that motor torque changes with speed. By only limiting peak torque, they prevent the torque from adjusting dynamically based on velocity, which can reduce the system’s efficiency and performance in high-speed tasks. To address these issues, this paper proposes a reinforcement learning jump-control framework tailored for tendon-driven bipedal robots, which integrates dynamic torque boundary constraints and torque error-compensation modeling. First, we developed a torque transmission coefficient model based on the tendon-driven mechanism, taking into account tendon elasticity and motor-control errors, which significantly improves the modeling accuracy. Building on this, we derived a dynamic joint torque limit that adapts to joint velocity, and designed a torque-aware reward function within the reinforcement learning environment, aimed at encouraging the policy to implicitly learn and comply with physical constraints during training, effectively bridging the gap between simulation and real-world performance. Hardware experimental results demonstrate that the proposed method effectively satisfies actuator safety limits while achieving more efficient and stable jumping behavior. This work provides a general and scalable modeling and control framework for learning high-dynamic bipedal motion under complex physical constraints.
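
The velocity-dependent torque bound and torque-aware reward penalty described in this abstract can be illustrated with a minimal sketch; the motor parameters and penalty weight below are hypothetical placeholders, not the paper's tendon-transmission model or reward design.

```python
import numpy as np

# Hypothetical motor parameters -- not the values used in the paper.
TAU_PEAK = 120.0      # peak torque at standstill [Nm]
OMEGA_MAX = 25.0      # no-load speed [rad/s]

def dynamic_torque_limit(omega):
    """Velocity-dependent torque bound from a linear torque-speed curve."""
    return np.clip(TAU_PEAK * (1.0 - np.abs(omega) / OMEGA_MAX), 0.0, TAU_PEAK)

def torque_aware_penalty(tau_cmd, omega, weight=0.01):
    """Penalize commanded torque that exceeds the current dynamic bound,
    nudging the policy to respect actuator limits during training."""
    excess = np.maximum(np.abs(tau_cmd) - dynamic_torque_limit(omega), 0.0)
    return -weight * np.sum(excess ** 2)

# Example: a jump command issued near the speed limit is penalized.
print(torque_aware_penalty(tau_cmd=np.array([110.0]), omega=np.array([20.0])))
```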

23 pages, 783 KiB  
Article
An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing
by Min Cui and Yipeng Wang
Sensors 2025, 25(15), 4705; https://doi.org/10.3390/s25154705 - 30 Jul 2025
Abstract
Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users’ specific QoS requirements remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to prevent the algorithm from falling into local optima. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (Cybershake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first.
(This article belongs to the Section Internet of Things)
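
As a rough illustration of the Lévy-flight perturbation used to escape local optima in the improved WOA, the sketch below draws Lévy-distributed steps via Mantegna's algorithm; the step scaling and the way it wraps the whale update are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_perturb(position, best, scale=0.01, rng=np.random.default_rng()):
    """Perturb a candidate task-to-VM assignment (continuous encoding)
    around the current best solution with a Levy flight."""
    return position + scale * levy_step(position.size, rng=rng) * (position - best)

candidate = np.random.rand(10)   # 10 tasks, real-valued encoding (illustrative)
best = np.random.rand(10)
print(levy_perturb(candidate, best))
```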

23 pages, 951 KiB  
Article
Multi-Objective Evolution and Swarm-Integrated Optimization of Manufacturing Processes in Simulation-Based Environments
by Panagiotis D. Paraschos, Georgios Papadopoulos and Dimitrios E. Koulouriotis
Machines 2025, 13(7), 611; https://doi.org/10.3390/machines13070611 - 16 Jul 2025
Abstract
This paper presents a digital twin-driven multi-objective optimization approach for enhancing the performance and productivity of a multi-product manufacturing system under complex operational challenges. More specifically, the concept of digital twin is applied to virtually replicate a physical system that leverages real-time data fusion from Internet of Things devices or sensors. JaamSim serves as the platform for modeling the digital twin, simulating the dynamics of the manufacturing system. The implemented digital twin is a manufacturing system that incorporates a three-stage production line to complete and stockpile two gear types. The production line is subject to unpredictable events, including equipment breakdowns, maintenance, and product returns. The stochasticity of these real-world-like events is modeled using a normal distribution. Manufacturing control strategies, such as CONWIP and Kanban, are implemented to evaluate their impact on the performance of the manufacturing system in a simulation environment. The evaluation is performed based on three key indicators: service level, the amount of work-in-progress items, and overall system profitability. Multiple objective functions are formulated to optimize the behavior of the system by reducing the work-in-progress items and improving both cost-effectiveness and service level. To this end, the proposed approach couples the JaamSim-based digital twins with evolutionary and swarm-based algorithms to carry out the multi-objective optimization under varying conditions. In this sense, the present work offers an early demonstration of an industrial digital twin, implementing an offline simulation-based manufacturing environment that utilizes optimization algorithms. Results demonstrate the trade-offs between the employed strategies and offer insights into the implementation of hybrid production control systems in dynamic environments.
(This article belongs to the Section Advanced Manufacturing)
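
To make the production-control and fitness ideas concrete, here is a minimal sketch of a CONWIP admission rule and a scalarized fitness over service level, work-in-progress, and profit; the cap, weights, and normalization constants are illustrative assumptions, not values from the paper or the JaamSim model.

```python
def conwip_can_release(wip_now: int, wip_cap: int) -> bool:
    """CONWIP: release a new job into the line only while total
    work-in-progress stays below a fixed cap."""
    return wip_now < wip_cap

def fitness(service_level: float, wip: float, profit: float,
            weights=(0.4, 0.3, 0.3), wip_norm=100.0, profit_norm=10_000.0) -> float:
    """Weighted-sum multi-objective score: higher service level and profit
    are rewarded, work-in-progress is penalized."""
    w_sl, w_wip, w_profit = weights
    return (w_sl * service_level
            - w_wip * min(wip / wip_norm, 1.0)
            + w_profit * min(profit / profit_norm, 1.0))

print(conwip_can_release(wip_now=12, wip_cap=15))
print(fitness(service_level=0.95, wip=40, profit=6_500))
```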

21 pages, 12122 KiB  
Article
RA3T: An Innovative Region-Aligned 3D Transformer for Self-Supervised Sim-to-Real Adaptation in Low-Altitude UAV Vision
by Xingrao Ma, Jie Xie, Di Shao, Aiting Yao and Chengzu Dong
Electronics 2025, 14(14), 2797; https://doi.org/10.3390/electronics14142797 - 11 Jul 2025
Abstract
Low-altitude unmanned aerial vehicle (UAV) vision is critically hindered by the Sim-to-Real Gap, where models trained exclusively on simulation data degrade under real-world variations in lighting, texture, and weather. To address this problem, we propose RA3T (Region-Aligned 3D Transformer), a novel self-supervised framework that enables robust Sim-to-Real adaptation. Specifically, we first develop a dual-branch strategy for self-supervised feature learning, integrating Masked Autoencoders and contrastive learning. This approach extracts domain-invariant representations from unlabeled simulated imagery to enhance robustness against occlusion while reducing annotation dependency. Leveraging these learned features, we then introduce a 3D Transformer fusion module that unifies multi-view RGB and LiDAR point clouds through cross-modal attention. By explicitly modeling spatial layouts and height differentials, this component significantly improves recognition of small and occluded targets in complex low-altitude environments. To address persistent fine-grained domain shifts, we finally design region-level adversarial calibration that deploys local discriminators on partitioned feature maps. This mechanism directly aligns texture, shadow, and illumination discrepancies that challenge conventional global alignment methods. Extensive experiments on UAV benchmarks VisDrone and DOTA demonstrate the effectiveness of RA3T. The framework achieves +5.1% mAP on VisDrone and +7.4% mAP on DOTA over the 2D adversarial baseline, particularly on small objects and sparse occlusions, while maintaining real-time performance of 17 FPS at 1024 × 1024 resolution on an RTX 4080 GPU. Visual analysis confirms that the synergistic integration of 3D geometric encoding and local adversarial alignment effectively mitigates domain gaps caused by uneven illumination and perspective variations, establishing an efficient pathway for simulation-to-reality UAV perception.
(This article belongs to the Special Issue Innovative Technologies and Services for Unmanned Aerial Vehicles)
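
A minimal PyTorch sketch of the region-level adversarial idea: pool a feature map into a grid of regions and score each region with a small shared domain discriminator. The module sizes, grid resolution, and loss form are assumptions for illustration, not the RA3T implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionDiscriminator(nn.Module):
    """Tiny per-region domain classifier (sim vs. real) on pooled features."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def region_adversarial_loss(feat, disc, domain_label, grid=4):
    """Pool a B x C x H x W feature map into a grid x grid layout and score
    each region; training the backbone to fool the discriminator aligns
    local texture/illumination statistics across domains."""
    pooled = F.adaptive_avg_pool2d(feat, grid)              # B x C x grid x grid
    regions = pooled.flatten(2).transpose(1, 2)             # B x (grid*grid) x C
    logits = disc(regions).squeeze(-1)                      # B x (grid*grid)
    target = torch.full_like(logits, float(domain_label))
    return F.binary_cross_entropy_with_logits(logits, target)

feat = torch.randn(2, 256, 32, 32)          # stand-in simulated-domain features
disc = RegionDiscriminator(256)
print(region_adversarial_loss(feat, disc, domain_label=1).item())
```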

32 pages, 12851 KiB  
Article
Research on Autonomous Vehicle Lane-Keeping and Navigation System Based on Deep Reinforcement Learning: From Simulation to Real-World Application
by Chia-Hsin Cheng, Hsiang-Hao Lin and Yu-Yong Luo
Electronics 2025, 14(13), 2738; https://doi.org/10.3390/electronics14132738 - 7 Jul 2025
Abstract
In recent years, the rapid development of science and technology and the substantial improvement of computing power have accelerated research across deep learning. Many fields, such as computer vision, natural language processing, and medical imaging, have benefited from this wave, and the field of self-driving cars is no exception. Many technology companies and automobile manufacturers have invested substantial resources in the research and development of self-driving technology; with the emergence of different levels of driving automation, most car manufacturers have already reached the L2 level of the self-driving classification standards and are moving towards L3 and L4. However, existing autonomous driving technologies still face significant challenges in achieving robust lane-keeping and navigation performance, especially when transferring learned models from simulation to real-world environments due to environmental complexity and domain gaps. This study applies deep reinforcement learning (DRL) to train autonomous vehicles with lane-keeping and navigation capabilities. Through simulation training and Sim2Real strategies, including domain randomization and CycleGAN, the trained models are evaluated in real-world environments to validate performance. The results demonstrate the feasibility of DRL-based autonomous driving and highlight the challenges in transferring models from simulation to reality.
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
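
As a hedged illustration of the domain-randomization half of the Sim2Real strategy, the sketch below samples a fresh set of visual and dynamics parameters per training episode; the parameter names, ranges, and the reset_simulator hook are assumptions, not the paper's simulator settings.

```python
import random

def sample_randomized_domain(rng=random.Random()):
    """Draw per-episode simulator parameters so the policy never overfits
    to one rendering or dynamics configuration."""
    return {
        "light_intensity": rng.uniform(0.4, 1.6),   # relative brightness
        "lane_texture_id": rng.randrange(8),        # one of several textures
        "camera_pitch_deg": rng.uniform(-3.0, 3.0),
        "tire_friction": rng.uniform(0.7, 1.1),
        "sensor_noise_std": rng.uniform(0.0, 0.02),
    }

for episode in range(3):
    params = sample_randomized_domain()
    # reset_simulator(**params)  # hypothetical hook into the training loop
    print(episode, params)
```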

24 pages, 9889 KiB  
Article
An Intelligent Management System and Advanced Analytics for Boosting Date Production
by Shaymaa E. Sorour, Munira Alsayyari, Norah Alqahtani, Kaznah Aldosery, Anfal Altaweel and Shahad Alzhrani
Sustainability 2025, 17(12), 5636; https://doi.org/10.3390/su17125636 - 19 Jun 2025
Abstract
The date palm industry is a vital pillar of agricultural economies in arid and semi-arid regions; however, it remains vulnerable to challenges such as pest infestations, post-harvest diseases, and limited access to real-time monitoring tools. This study applied the baseline YOLOv11 model and its optimized variant, YOLOv11-Opt, to automate the detection, classification, and monitoring of date fruit varieties and disease-related defects. The models were trained on a curated dataset of real-world images collected in Saudi Arabia and enhanced through advanced data augmentation techniques, dynamic label assignment (SimOTA++), and extensive hyperparameter optimization. The experimental results demonstrated that YOLOv11-Opt significantly outperformed the baseline YOLOv11, achieving an overall classification accuracy of 99.04% for date types and 99.69% for disease detection, with ROC-AUC scores exceeding 99% in most cases. The optimized model effectively distinguished visually complex diseases, such as scale insect and dry date skin, across multiple date types, enabling high-resolution, real-time inference. Furthermore, a visual analytics dashboard was developed to support strategic decision-making by providing insights into production trends, disease prevalence, and varietal distribution. These findings underscore the value of integrating optimized deep learning architectures and visual analytics for intelligent, scalable, and sustainable precision agriculture.
(This article belongs to the Special Issue Sustainable Food Processing and Food Packaging Technologies)
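
The per-task metrics reported above (classification accuracy and ROC-AUC) can be computed as in this small scikit-learn sketch; the arrays and class names are dummy illustrative values, not the paper's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Dummy predictions for a 3-class date-type problem (illustrative labels only).
classes = ["Sukkari", "Khalas", "Ajwa"]
y_true = np.array([0, 1, 2, 1, 0, 2])
y_prob = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8, 0.1],
                   [0.05, 0.1, 0.85],
                   [0.2, 0.7, 0.1],
                   [0.8, 0.1, 0.1],
                   [0.1, 0.2, 0.7]])

y_pred = y_prob.argmax(axis=1)
print("accuracy:", accuracy_score(y_true, y_pred))
print("ROC-AUC (one-vs-rest):", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```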

20 pages, 4951 KiB  
Article
LNT-YOLO: A Lightweight Nighttime Traffic Light Detection Model
by Syahrul Munir and Huei-Yung Lin
Smart Cities 2025, 8(3), 95; https://doi.org/10.3390/smartcities8030095 - 6 Jun 2025
Abstract
Autonomous vehicles are one of the key components of smart mobility that leverage innovative technology to navigate and operate safely in urban environments. Traffic light detection (TLD) systems, as a core component of autonomous vehicles, play a key role in navigation during challenging traffic scenarios. Nighttime driving poses significant challenges for autonomous vehicle navigation, particularly in regard to the accuracy of TLD systems. Existing TLD methodologies frequently encounter difficulties under low-light conditions due to factors such as variable illumination, occlusion, and the presence of distracting light sources. Moreover, most recent works have focused only on daytime scenarios, often overlooking the significantly increased risk and complexity associated with nighttime driving. To address these critical issues, this paper introduces a novel approach for nighttime traffic light detection using the LNT-YOLO model, which is based on the YOLOv7-tiny framework. LNT-YOLO incorporates enhancements specifically designed to improve the detection of small and poorly illuminated traffic signals. Low-level feature information is utilized to extract small-object features that are otherwise lost because of the pyramid structure of the YOLOv7-tiny neck. A novel SEAM attention module is proposed to refine features that represent both spatial and channel information by leveraging the Simple Attention Module (SimAM) and the Efficient Channel Attention (ECA) mechanism. The HSM-EIoU loss function is also proposed to accurately detect small traffic lights by amplifying the loss for hard-sample objects. In response to the limited availability of datasets for nighttime traffic light detection, this paper also presents the TN-TLD dataset. This newly curated dataset comprises carefully annotated images from real-world nighttime driving scenarios, featuring both circular and arrow traffic signals. Experimental results demonstrate that the proposed model achieves high accuracy in recognizing traffic lights in the TN-TLD dataset and in the publicly available LISA dataset. The LNT-YOLO model outperforms the original YOLOv7-tiny model and other state-of-the-art object detection models in mAP by 13.7% to 26.2% on the TN-TLD dataset and by 9.5% to 24.5% on the LISA dataset. These results underscore the model’s feasibility and robustness. The source code and dataset will be available through the GitHub repository.
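
A minimal PyTorch sketch of combining SimAM-style spatial attention with ECA-style channel attention, in the spirit of the proposed SEAM module; the kernel size, ordering, and fusion are an interpretation for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free spatial attention: weight each activation by its
    energy relative to the channel mean."""
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        v = d.sum(dim=(2, 3), keepdim=True) / n
        energy = d / (4 * (v + self.eps)) + 0.5
        return x * torch.sigmoid(energy)

class ECA(nn.Module):
    """Efficient Channel Attention: 1D conv over pooled channel descriptors."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3))                      # B x C
        y = self.conv(y.unsqueeze(1)).squeeze(1)    # B x C
        return x * torch.sigmoid(y)[..., None, None]

class SEAMSketch(nn.Module):
    """Spatial (SimAM) refinement followed by channel (ECA) refinement."""
    def __init__(self):
        super().__init__()
        self.simam, self.eca = SimAM(), ECA()

    def forward(self, x):
        return self.eca(self.simam(x))

print(SEAMSketch()(torch.randn(1, 64, 40, 40)).shape)
```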

20 pages, 3265 KiB  
Article
Enhancing Rare Class Performance in HOI Detection with Re-Splitting and a Fair Test Dataset
by Gyubin Park and Afaque Manzoor Soomro
Information 2025, 16(6), 474; https://doi.org/10.3390/info16060474 - 6 Jun 2025
Abstract
In Human–Object Interaction (HOI) detection, class imbalance severely limits the performance of a model on infrequent interaction categories. To overcome this problem, a Re-Splitting algorithm has been developed that uses DreamSim-based clustering and k-means partitioning to restructure the train–test splits. By doing so, the approach balances rare and frequent interaction classes, thereby increasing robustness. A real-world test dataset has also been introduced that serves as a truly independent benchmark and is designed to address the class distribution bias commonly present in traditional test sets. As shown in the Experiment and Evaluation subsection, strong general-case performance can still be achieved across different few-shot and rare-class training configurations. Models trained solely on the re-split dataset show significant improvements in rare-class mAP, particularly for one-stage models. Evaluation on the real-world test dataset further exposes previously overlooked aspects of model performance and supports fair dataset structuring. The methods are validated with extensive experiments using five one-stage and two two-stage models. Our analysis shows that reshaping dataset distributions improves rare-class detection by as much as 8.0 mAP. This study paves the way for balanced training and evaluation, leading toward a general framework for scalable, fair, and generalizable HOI detection.
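
The re-splitting idea can be sketched as clustering per-sample image embeddings and then stratifying the train–test split by cluster; the sketch below uses generic random embeddings and scikit-learn, and does not reproduce DreamSim feature extraction or the paper's exact partitioning rules.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))   # stand-in for DreamSim features

# Group visually similar samples, then split within each cluster so that
# rare appearance modes are represented in both train and test.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
idx = np.arange(len(embeddings))
train_idx, test_idx = train_test_split(
    idx, test_size=0.2, stratify=clusters, random_state=0)

print(len(train_idx), len(test_idx))
```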

24 pages, 11083 KiB  
Article
DTTF-Sim: A Digital Twin-Based Simulation System for Continuous Autonomous Driving Testing
by Zhigang Liang, Jian Wang, Tingyu Zhang and Xinyu Yong
Sensors 2025, 25(11), 3447; https://doi.org/10.3390/s25113447 - 30 May 2025
Abstract
As autonomous driving technology matures, the focus shifts to enhancing the safety and reliability of these systems. Simulation testing is a critical method for efficiently and rapidly validating the performance of autonomous vehicles (AVs). A robust AV system requires extensive testing across a wide range of scenarios and iterative improvements. However, current simulation systems have limitations in supporting diverse scenarios, often relying on expert-designed situations. To address these challenges, we introduce DTTF-Sim, a novel simulation system based on Digital Twin technology for traffic flow. DTTF-Sim aims to accurately replicate real-world traffic flow conditions, offering continuous long-term simulation capabilities for AV testing. The system can simulate detailed dynamic traffic scenarios with a focus on interactions between multiple vehicles and between AVs and background traffic vehicles, modeling the strategic decision-making processes that occur in these encounters. This paper outlines the architecture and functionalities of DTTF-Sim, highlighting its ability to overcome the shortcomings of existing simulation platforms. We demonstrate the effectiveness of DTTF-Sim through case studies and experimental results, showing its potential to significantly advance the development and testing of autonomous driving technologies.
(This article belongs to the Special Issue Data and Network Analytics in Transportation Systems)
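
As one hedged example of how background traffic vehicles in such a digital-twin simulation can be driven, the sketch below implements the standard Intelligent Driver Model (IDM) for car-following; DTTF-Sim's actual behavior models are not reproduced here, and the parameters are textbook values.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=15.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: acceleration of a following vehicle given
    its speed v, the leader's speed v_lead, and the bumper-to-bumper gap."""
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

# One step of a follower reacting to a slower lead vehicle.
print(idm_acceleration(v=12.0, v_lead=8.0, gap=20.0))
```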

36 pages, 5341 KiB  
Review
Deep Reinforcement Learning of Mobile Robot Navigation in Dynamic Environment: A Review
by Yingjie Zhu, Wan Zuha Wan Hasan, Hafiz Rashidi Harun Ramli, Nor Mohd Haziq Norsahperi, Muhamad Saufi Mohd Kassim and Yiduo Yao
Sensors 2025, 25(11), 3394; https://doi.org/10.3390/s25113394 - 28 May 2025
Abstract
Deep reinforcement learning (DRL), a vital branch of artificial intelligence, has shown great promise in mobile robot navigation within dynamic environments. However, existing studies mainly focus on simplified dynamic scenarios or the modeling of static environments, which results in trained models lacking sufficient generalization and adaptability when faced with real-world dynamic environments, particularly in handling complex task variations, dynamic obstacle interference, and multimodal data fusion. Addressing these gaps is essential for enhancing the real-time performance and versatility of DRL-based navigation. Through a comparative analysis of classical DRL algorithms, this study highlights their advantages and limitations in handling real-time navigation tasks under dynamic environmental conditions. In particular, the paper systematically examines value-based, policy-based, and hybrid-based DRL methods, discussing their applicability to different navigation challenges. Additionally, by reviewing recent studies from 2021 to 2024, it identifies key trends in DRL-based navigation, revealing a strong focus on indoor environments while outdoor navigation and multi-robot collaboration remain underexplored. The analysis also highlights challenges in real-world deployment, particularly in sim-to-real transfer and sensor fusion. Based on these findings, this paper outlines future directions to enhance real-time adaptability, multimodal perception, and collaborative learning frameworks, providing theoretical and technical insights for advancing DRL in dynamic environments.
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
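
To ground the distinction the review draws between value-based and policy-based methods, here is a minimal tabular Q-learning update (the value-based case); policy-based methods instead adjust action probabilities directly via a policy gradient. The sizes and values are illustrative only.

```python
import numpy as np

n_states, n_actions = 16, 4          # tiny grid-world-sized example
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99             # learning rate, discount factor

def q_update(s, a, r, s_next, done):
    """Value-based update: move Q(s, a) toward the bootstrapped target."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(s=0, a=2, r=-0.1, s_next=1, done=False)
print(Q[0])
```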

31 pages, 8007 KiB  
Review
Sustainable Innovation Management: Balancing Economic Growth and Environmental Responsibility
by Morgan Alamandi
Sustainability 2025, 17(10), 4362; https://doi.org/10.3390/su17104362 - 12 May 2025
Abstract
Sustainable innovation management (SIM) is increasingly recognized as a pivotal framework for addressing the dual challenges of economic growth and environmental responsibility. In response to escalating global pressures, this review explores how SIM can drive sustainable development by balancing profitability with ecological stewardship. Drawing on recent academic and industry sources, the paper examines the intersection of circular economy principles, emerging technologies, and policy frameworks in shaping sustainable innovation strategies. The review is structured around three key pillars: the integration of technologies, such as artificial intelligence, blockchain, and the Internet of Things, in sustainable operations; the influence of regulatory drivers, including carbon pricing and environmental, social, and governance standards; and empirical case studies that highlight both challenges and success factors in SIM adoption. By synthesizing real-world applications across sectors and geographies, this study provides qualitative insights and quantitative indicators (e.g., CO2 reduction, return on investment, material reuse rates) to inform practical strategies for business leaders and policymakers. Addressing gaps such as the lack of global harmonization in sustainability metrics and the under-representation of developing economies, this review contributes to a more inclusive and actionable understanding of SIM. This paper concludes by offering future research directions and policy recommendations aimed at accelerating the transition toward sustainable and circular business models.
(This article belongs to the Section Sustainable Management)

19 pages, 36390 KiB  
Article
TerrAInav Sim: An Open-Source Simulation of UAV Aerial Imaging from Map-Based Data
by Seyedeh Parisa Dajkhosh, Peter M. Le, Orges Furxhi and Eddie L. Jacobs
Remote Sens. 2025, 17(8), 1454; https://doi.org/10.3390/rs17081454 - 18 Apr 2025
Abstract
Capturing real-world aerial images for vision-based navigation (VBN) is challenging due to limited availability and conditions that make it nearly impossible to access all desired images from any location. The complexity increases when multiple locations are involved. State-of-the-art solutions, such as deploying UAVs (unmanned aerial vehicles) for aerial imaging or relying on existing research databases, come with significant limitations. TerrAInav Sim offers a compelling alternative by simulating a UAV to capture bird’s-eye view map-based images at zero yaw with real-world visible-band specifications. This open-source tool allows users to specify the bounding box (top-left and bottom-right) coordinates of any region on a map. Without the need to physically fly a drone, the virtual Python UAV performs a raster search to capture images. Users can define parameters such as the flight altitude, aspect ratio, diagonal field of view of the camera, and the overlap between consecutive images. TerrAInav Sim’s capabilities range from capturing a few low-altitude images for basic applications to generating extensive datasets of entire cities for complex tasks like deep learning. This versatility makes TerrAInav a valuable tool for not only VBN but also other applications, including environmental monitoring, construction, and city management. The open-source nature of the tool also allows for the extension of the raster search to other missions. A dataset of Memphis, TN, has been provided along with this simulator. A supplementary dataset is also provided, which includes data from a 3D world generation package for comparison.
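
The ground footprint and raster spacing implied by the altitude, aspect ratio, diagonal field of view, and overlap parameters can be sketched as below; this is basic pinhole geometry for a straight-down camera at zero yaw over flat terrain, not TerrAInav Sim's code, and the sample numbers are arbitrary.

```python
import math

def footprint(altitude_m, diag_fov_deg, aspect_ratio):
    """Ground width/height covered by a nadir image from a given altitude."""
    diag = 2.0 * altitude_m * math.tan(math.radians(diag_fov_deg) / 2.0)
    height = diag / math.sqrt(1.0 + aspect_ratio ** 2)
    return aspect_ratio * height, height          # (width, height) in meters

def raster_step(altitude_m, diag_fov_deg, aspect_ratio, overlap):
    """Distance between consecutive capture points for a given overlap."""
    w, h = footprint(altitude_m, diag_fov_deg, aspect_ratio)
    return w * (1.0 - overlap), h * (1.0 - overlap)

print(footprint(120.0, 78.8, 4 / 3))
print(raster_step(120.0, 78.8, 4 / 3, overlap=0.3))
```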

17 pages, 8413 KiB  
Article
Monocular Vision-Based Depth Estimation of Forward-Looking Scenes for Mobile Platforms
by Li Wei, Meng Ding and Shuai Li
Appl. Sci. 2025, 15(8), 4267; https://doi.org/10.3390/app15084267 - 12 Apr 2025
Abstract
The depth estimation of forward-looking scenes is one of the fundamental tasks for an Intelligent Mobile Platform to perceive its surrounding environment. In response to this requirement, this paper proposes a self-supervised monocular depth estimation method that can be utilized across various mobile platforms, including unmanned aerial vehicles (UAVs) and autonomous ground vehicles (AGVs). Building on the foundational framework of Monodepth2, we introduce an intermediate module between the encoder and decoder of the depth estimation network to facilitate multiscale fusion of feature maps. Additionally, we integrate the channel attention mechanism ECANet into the depth estimation network to enhance the significance of important channels. Consequently, the proposed method addresses the issue of losing critical features, which can lead to diminished accuracy and robustness. The experiments presented in this paper are conducted on two datasets: KITTI, a publicly available dataset collected from real-world environments used to evaluate depth estimation performance for AGV platforms, and AirSim, a custom dataset generated using simulation software to assess depth estimation performance for UAV platforms. The experimental results demonstrate that the proposed method can overcome the adverse effects of varying working conditions and accurately perceive detailed depth information in specific regions, such as object edges and targets of different scales. Furthermore, the depth predicted by the proposed method is quantitatively compared with the ground truth depth, and a variety of evaluation metrics confirm that our method exhibits superior inference capability and robustness.
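
The evaluation metrics typically used for monocular depth (absolute relative error, RMSE, and threshold accuracy δ < 1.25) can be computed as in this short sketch; the arrays are dummy values and masking of invalid ground-truth pixels is assumed to have been done already.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular depth metrics over valid ground-truth pixels."""
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta<1.25": delta1}

gt = np.random.uniform(1.0, 80.0, size=10_000)      # dummy depths in meters
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)
print(depth_metrics(pred, gt))
```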

15 pages, 3698 KiB  
Article
On Slope Attitude Angle Estimation for Mass-Production Range-Extended Electric Vehicles Based on the Extended Kalman Filter Approach
by Ye Wang, Hanchi Hong, Yan Xiao, Honglei Zhang, Rui Wang, Zhenyu Qin and Shuiwen Shen
World Electr. Veh. J. 2025, 16(4), 210; https://doi.org/10.3390/wevj16040210 - 2 Apr 2025
Abstract
Since vehicle attitude cannot be readily measured, this paper designs a state observer based on the information available on the CAN bus. The attitude angle estimated in this way is not only robust in practical applications but can also replace an IMU sensor for accurate remaining fuel range prediction under complex driving conditions. The primary innovation of this work is the development of an extended Kalman filter (EKF)-based estimation of the vehicle pitch attitude angle and its deployment in real-world vehicle systems. Firstly, a vehicle longitudinal model considering the suspension dynamics is established, followed by a model-based EKF design. Then, the EKF algorithm is verified by a Simulink–CarSim co-simulation under typical working conditions. Numerical tests indicate the effectiveness of the EKF algorithm, with the estimation error being below 0.5°. Finally, the proposed EKF is deployed on mass-production range-extended NETA electric vehicles and applied to reliable remaining fuel range prediction. The mass-production application proves that the EKF observer can respond to changes in body pitch motion stably and rapidly, and the estimated error is less than 1.5°.
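
A generic extended Kalman filter predict/update cycle of the kind this estimator builds on is sketched below; the two-state example, measurement model, and noise covariances are placeholders, not the paper's longitudinal-suspension model.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: nonlinear predict with f, linearized update with h."""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Placeholder 2-state example (e.g., pitch angle and pitch rate).
f = lambda x, u: np.array([x[0] + 0.01 * x[1], x[1]])
F_jac = lambda x, u: np.array([[1.0, 0.01], [0.0, 1.0]])
h = lambda x: np.array([x[0]])
H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, u=None, z=np.array([0.02]), f=f, F_jac=F_jac,
                h=h, H_jac=H_jac, Q=1e-4 * np.eye(2), R=np.array([[1e-2]]))
print(x)
```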

16 pages, 2027 KiB  
Article
Estimating Bus Mass Using a Hybrid Approach: Integrating Forgetting Factor Recursive Least Squares with the Extended Kalman Filter
by Jingyang Du, Qian Wang and Xiaolei Yuan
Sensors 2025, 25(6), 1741; https://doi.org/10.3390/s25061741 - 11 Mar 2025
Abstract
The vehicle mass is a crucial state variable for achieving safe and energy-efficient driving, as it directly impacts the vehicle’s power performance, braking efficiency, and handling stability. However, current methods frequently rely on particular operating conditions or supplementary sensors, which limits their ability to provide accurate, stable, and convenient vehicle mass estimation. Moreover, as a form of public transportation, buses are subject to stringent safety standards. The frequent variations in passenger numbers result in substantial fluctuations in vehicle mass, thereby complicating accurate mass estimation. To address these challenges, this paper proposes a hybrid vehicle mass estimation algorithm that integrates Robust Forgetting Factor Recursive Least Squares (Robust FFRLS) and the Extended Kalman Filter (EKF). By sequentially employing these two methods, the algorithm conducts dual-stage mass estimation and incorporates a proportional coordination factor to balance the outputs from FFRLS and EKF, thereby improving the accuracy of the estimated mass. Importantly, the proposed method does not necessitate the installation of new sensors, relying instead on data from existing CAN-bus and IMU sensors, thus addressing cost control concerns for mass-produced vehicles. The algorithm was validated through MATLAB (2022b)/TruckSim (2019.0) simulations under three loading conditions: empty, half-load, and full-load. The results demonstrate that the proposed algorithm maintains an error rate below 10% across all conditions, outperforming single-method approaches and meeting the stringent requirements for vehicle mass estimation in safety and stability functions. Future work will focus on conducting real-world tests under various driving conditions to further validate the robustness and applicability of the proposed method.
(This article belongs to the Section Vehicular Sensing)
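
The forgetting-factor RLS half of the hybrid estimator can be illustrated with a textbook FFRLS update applied to a simple longitudinal-dynamics regression; the regressor construction, noise levels, and the EKF fusion with its coordination factor are assumptions and are not reproduced here.

```python
import numpy as np

class FFRLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)       # e.g., [1/mass]
        self.P = p0 * np.eye(n_params)
        self.lam = lam

    def update(self, phi, y):
        """phi: regressor vector, y: measured output (e.g., acceleration)."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)              # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

est = FFRLS(n_params=1)
true_inv_mass = 1.0 / 12_000.0                          # 12 t bus (illustrative)
for _ in range(200):
    force = np.random.uniform(5_000.0, 20_000.0)        # net traction force [N]
    accel = true_inv_mass * force + np.random.normal(0, 0.01)
    est.update(np.array([force]), accel)
print("estimated mass [kg]:", 1.0 / est.theta[0])
```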