Article

Delay-Compensated Lane-Coordinate Vehicle State Estimation Using Low-Cost Sensors

Minsu Kim, Weonmo Kang and Changsun Ahn *

School of Mechanical Engineering, Pusan National University, Busan 46241, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6251; https://doi.org/10.3390/s25196251
Submission received: 8 September 2025 / Revised: 3 October 2025 / Accepted: 7 October 2025 / Published: 9 October 2025
(This article belongs to the Special Issue Applications of Machine Learning in Automotive Engineering)

Abstract

Accurate vehicle state estimation in a lane coordinate system is essential for safe and reliable operation of Advanced Driver Assistance Systems (ADASs) and autonomous driving. However, achieving robust lane-based state estimation using only low-cost sensors, such as a camera, an IMU, and a steering angle sensor, remains challenging due to the complexity of vehicle dynamics and the inherent signal delays in vision systems. This paper presents a lane-coordinate-based vehicle state estimator that addresses these challenges by combining a vehicle dynamics-based bicycle model with an Extended Kalman Filter (EKF) and a signal delay compensation algorithm. The estimator performs real-time estimation of lateral position, lateral velocity, and heading angle, including the unmeasurable lateral velocity about the lane, by predicting the vehicle’s state evolution during camera processing delays. A computationally efficient camera processing pipeline, incorporating lane segmentation via a pre-trained network and lane-based state extraction, is implemented to support practical applications. Validation using real vehicle driving data on straight and curved roads demonstrates that the proposed estimator provides continuous, high-accuracy, and delay-compensated lane-coordinate-based vehicle states. Compared to conventional camera-only methods and estimators without delay compensation, the proposed approach significantly reduces estimation errors and phase lag, enabling the reliable and real-time acquisition of vehicle-state information critical for ADAS and autonomous driving applications.

1. Introduction

Advancements in Advanced Driver Assistance Systems (ADASs) and fully autonomous driving (AD) are critical for safe and efficient future transportation systems [1,2]. Functions such as Lane Keeping Assist (LKA), Adaptive Cruise Control (ACC), and urban autonomous driving rely heavily on accurate and robust perception of the vehicle’s state within its environment [3]. For stable operation, it is essential to estimate not only the vehicle’s absolute position in a global coordinate system but also its relative state with respect to the lane. In particular, lateral position, lateral velocity, and heading angle in a lane coordinate system (curvilinear coordinates) provide more intuitive and useful information for path planning and control algorithms [4,5,6]. Among these, lateral velocity relative to the lane is a critical state for motion prediction but cannot be directly measured with typical onboard sensors.
Current state-of-the-art approaches achieve centimeter-level accuracy by fusing high-cost sensors such as HD maps, LiDAR, radar, and GPS [2,7,8,9]. For example, Bersani et al. proposed a framework that integrates pre-built road maps with multi-sensor data to estimate the vehicle and surrounding obstacles in the lane coordinate system [4]. However, these methods are costly and lack scalability for general-purpose vehicles. Consequently, there is an urgent need for map-less state estimation methods that utilize only low-cost sensors, such as cameras and IMUs [10].
Low-cost sensor-based approaches face two main challenges. First, many existing methods rely on simple kinematic models, such as the Constant Turn Rate and Acceleration (CTRA) model [11,12,13]. While sufficient for stable driving, these models fail to capture real vehicle behavior during sudden steering or evasive maneuvers, where vehicle dynamics—including tire lateral forces and slip—dominate. This limitation can severely degrade estimation performance. Recent studies emphasize the importance of dynamic models that accurately reflect a vehicle’s physical characteristics for precise estimation, particularly of lateral velocity [4,9,14,15].
The second challenge is the inherent delay of vision-based sensors. Cameras and other perception sensors have slow and variable data acquisition rates due to the large amount of data processing, leading to unevenly delayed measurements [16,17]. Such delays can degrade perception performance and destabilize control loops [18]. Various attempts have been made to solve the problems caused by such delays. For example, to fundamentally reduce the delay itself, a wide range of methods have been explored [19,20], from traditional computer vision techniques for fast lane detection to modern approaches that use deep learning with lightweight models [21] or fusing different feature extraction methods [22]. However, even with these efforts to reduce processing time, a slight time delay can still lead to a significant performance degradation in vehicle control systems.
Another approach involves compensating for signal delays that are unavoidable due to heavy computational loads or communication processes. Currently, a widely used method for signal delay compensation is to modify the timestamp of the input signal by the amount of the delay [23,24,25]. However, most research on vehicle state estimation using signal delay compensation has not yet incorporated the vehicle’s dynamic characteristics [26,27]. Although there was a study by Wang et al. that considered vehicle dynamics and vision sensor signal delay to estimate the in-lane lateral position and angle [16], it did not estimate the lane-based lateral velocity, which still leaves a limitation in fully explaining the vehicle’s in-lane behavior.
To address these challenges, this study makes a significant contribution by proposing a novel lane-coordinate-based state estimator that operates using only low-cost sensors (a camera, IMU, and steering angle sensor) without relying on HD maps. The core of our approach is the tight integration of a vehicle dynamics-based bicycle model within an Extended Kalman Filter (EKF). This foundation overcomes the limitations of traditional kinematic models, and its most critical achievement is the accurate, real-time estimation of the vehicle's lateral velocity relative to the lane, a key state for motion prediction that cannot be directly measured with typical onboard sensors. By robustly estimating this crucial, unmeasurable state alongside lateral position and heading angle, our method provides a comprehensive understanding of the vehicle's dynamic behavior within the lane, which is essential for advanced vehicle control. Furthermore, by using the dynamics model to propagate the state forward during sensor processing periods, the estimator effectively compensates for the signal delay from the vision system, ensuring the final output remains timely and accurate. To support practical implementation, a computationally efficient camera processing pipeline is also introduced, including lane segmentation using a pre-trained network and lane-based state extraction. The overall structure of the proposed estimator is illustrated in Figure 1, which highlights the integration of signals from low-cost sensors, the adaptive delay compensation and asynchronous compensation structure within the estimator, and the estimation outputs, which include the lateral velocity relative to the lane.
The proposed estimator provides continuous, accurate, and delay-free lane-coordinate-based vehicle states, including the unmeasurable lateral velocity, which are directly applicable to ADAS and autonomous driving. Validation with real vehicle data on straight and curved roads demonstrates the estimator’s effectiveness under realistic driving conditions and confirms its real-time performance.

2. Vehicle Model for Estimator

2.1. Road-Following Vehicle Model in Vehicle Coordinates

A reliable vehicle model is essential for state estimation. A commonly used representation is the road-following vehicle model, which assumes small variations in lateral deviations and heading angle, ensuring the validity of linear approximations [28,29]. This model describes the lateral behavior of a vehicle based on a given steering angle. It is particularly suitable for explaining a vehicle’s lane-based state while considering its dynamic characteristics, as it defines the state in terms of lateral error relative to a given path or lane. The model is expressed as:
$$
\begin{bmatrix} \dot d \\ \dot v \\ \dot\psi - \dot\psi_d \\ \dot r \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & u & 0 \\
0 & -\dfrac{C_{\alpha f}+C_{\alpha r}}{mu} & 0 & \dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu}-u \\
0 & 0 & 0 & 1 \\
0 & \dfrac{bC_{\alpha r}-aC_{\alpha f}}{I_z u} & 0 & -\dfrac{a^2 C_{\alpha f}+b^2 C_{\alpha r}}{I_z u}
\end{bmatrix}
\begin{bmatrix} d \\ v \\ \psi-\psi_d \\ r \end{bmatrix}
+
\begin{bmatrix} 0 \\ \dfrac{C_{\alpha f}}{m} \\ 0 \\ \dfrac{aC_{\alpha f}}{I_z} \end{bmatrix}\delta_f
+
\begin{bmatrix} 0 \\ 0 \\ -1 \\ 0 \end{bmatrix} r_d
\tag{1}
$$
where d is the lateral displacement relative to the lane centerline, v is the lateral velocity, r is the yaw rate, ψ is the vehicle heading angle, ψd is the desired heading angle, u is the longitudinal velocity (assumed constant), Cαf and Cαr are the front and rear tire cornering stiffnesses, m is the vehicle mass, a and b are the distances from the front and rear axles to the center of gravity (CG), Iz is the yaw moment of inertia, δf is the front steering angle, and rd is the desired yaw rate along the lane center. These variables are also summarized in Table 1.
This formulation is expressed in the vehicle coordinate system and will serve as the basis for its representation in the lane coordinate system, described next.
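To make the model structure concrete, the sketch below assembles the matrices of Equation (1) in Python with NumPy, using the parameter values from Table 2. It is an illustrative sketch of the model, not the authors' implementation.

```python
import numpy as np

# Vehicle parameters taken from Table 2 (illustrative use only).
m, Iz = 1850.0, 2800.0          # mass [kg], yaw moment of inertia [kg*m^2]
a, b = 1.35, 1.45               # CG-to-front/rear-axle distances [m]
Caf, Car = 86000.0, 92000.0     # front/rear cornering stiffness [N/rad]

def road_following_matrices(u):
    """Continuous-time A, B (steering), and E (desired-yaw-rate) matrices
    of Eq. (1) for the state [d, v, psi - psi_d, r] at speed u [m/s]."""
    A = np.array([
        [0.0, 1.0, u, 0.0],
        [0.0, -(Caf + Car) / (m * u), 0.0, (b * Car - a * Caf) / (m * u) - u],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, (b * Car - a * Caf) / (Iz * u), 0.0,
         -(a**2 * Caf + b**2 * Car) / (Iz * u)],
    ])
    B = np.array([[0.0], [Caf / m], [0.0], [a * Caf / Iz]])   # multiplies delta_f
    E = np.array([[0.0], [0.0], [-1.0], [0.0]])               # multiplies r_d
    return A, B, E

A, B, E = road_following_matrices(u=60 / 3.6)   # e.g., 60 km/h
```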

2.2. Vehicle Model in Lane Coordinates

To represent the vehicle model in the lane coordinate system, as shown in Figure 2, additional states must be introduced: the lateral displacement from the lane center and the corresponding lateral velocity. These are expressed as:
$$\dot d = v_n, \tag{2}$$

$$v_n = u\sin(\Delta\psi) + v\cos(\Delta\psi), \tag{3}$$
where Δψ = ψ − ψd is the heading angle offset between the vehicle and the lane center. Assuming constant longitudinal velocity, the derivative of the vehicle’s lateral velocity relative to the lane center is:
$$\dot v_n = \dot v\cos(\Delta\psi) + u\cos(\Delta\psi)\,\Delta\dot\psi - v\sin(\Delta\psi)\,\Delta\dot\psi. \tag{4}$$
For small Δψ, Equation (4) can be simplified using the relation from (1).
$$\dot v_n = -\frac{C_{\alpha f}+C_{\alpha r}}{mu}v + \frac{bC_{\alpha r}-aC_{\alpha f}}{mu}r + \frac{C_{\alpha f}}{m}\delta_f - \Delta\psi\, v\,(r - r_d) - u\,r_d, \tag{5}$$
Because the lane curvature κ changes gradually, it is assumed constant over short intervals, yielding the relation between desired yaw rate, curvature, and longitudinal velocity:
$$r_d = u\kappa, \qquad \dot\kappa \approx 0. \tag{6}$$
Substituting into (5) gives:
$$\dot v_n = -\frac{C_{\alpha f}+C_{\alpha r}}{mu}v + \frac{bC_{\alpha r}-aC_{\alpha f}}{mu}r + \frac{C_{\alpha f}}{m}\delta_f - \Delta\psi\, v\,(r - u\kappa) - u^2\kappa. \tag{7}$$
The heading angle offset relative to the lane center is expressed as:
$$\Delta\dot\psi = \dot\psi - \dot\psi_d = r - r_d = r - u\kappa. \tag{8}$$
Combining the above, the nonlinear vehicle model in lane coordinates is expressed as:
$$
\dot X = f(X,U) = \begin{bmatrix}
v_n \\
r - u\kappa \\
-\dfrac{C_{\alpha f}+C_{\alpha r}}{mu}v + \dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu}r - \Delta\psi\,v\,r + \dfrac{C_{\alpha f}}{m}\delta_f + \Delta\psi\,v\,u\kappa - u^2\kappa \\
-\dfrac{C_{\alpha f}+C_{\alpha r}}{mu}v + \left(\dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu} - u\right) r + \dfrac{C_{\alpha f}}{m}\delta_f \\
\dfrac{bC_{\alpha r}-aC_{\alpha f}}{I_z u}v - \dfrac{a^2 C_{\alpha f}+b^2 C_{\alpha r}}{I_z u}r + \dfrac{aC_{\alpha f}}{I_z}\delta_f \\
0
\end{bmatrix}, \tag{9}
$$

$$\text{where } X = \begin{bmatrix} d & \Delta\psi & v_n & v & r & \kappa \end{bmatrix}^T, \quad U = \delta_f.$$
Therefore, in addition to v and r, curvature κ must also be included as a state. While κ can be directly extracted from camera images, such measurements suffer from delays and low update rates, making them unsuitable as direct model inputs. Moreover, assuming a zero-curvature rate of change prevents accurate estimation using the system model alone. Therefore, curvature is incorporated through a measurement model, with its implications addressed in the estimator design section.
The derivation above establishes a vehicle model expressed in lane coordinates, where lateral dynamics and road curvature are explicitly incorporated. Since curvature cannot be accurately predicted from the system model alone, reliable lane information is required. The following section introduces the lane detection framework, which provides the curvature and heading references necessary for the estimator design.

3. Vehicle State Estimator in Lane Coordinates

3.1. Lane Detection Algorithm

To provide the lane information required for curvature and heading references in the estimator, we employ a camera-based detection method. Unlike other sensors, a camera does not directly measure lane geometry; instead, image data must be processed to extract relevant lane features. Since the system operates on a moving vehicle, real-time performance is critical, and, therefore, the computational efficiency of the detection algorithm is a key concern.
We employ a method that balances robustness and efficiency, comprising three main steps: lane segmentation, coordinate transformation, and lane shape extraction. The overall procedure is illustrated in Figure 3.
First, lane segmentation is performed using a pre-trained Fully Convolutional Network (FCN) with ResNet50 as the backbone. This network is a highly accessible model commonly used for image classification and segmentation [19,21]. Given these characteristics, we adopted this segmentation-based methodology as a representative example of a lane detection method that introduces signal delay. Specifically, we chose FCN as the base architecture and used a pre-trained ResNet50 provided by PyTorch v1.12 as the backbone. This model was then fine-tuned on a lane recognition dataset [30,31] and subsequently utilized as a lane segmentation network. The network is trained to segment lane components from input road images, and the output retains only the pixels corresponding to lane markings:
$$I_{seg} = \mathrm{net}(I). \tag{10}$$
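As an illustration of this step, the sketch below instantiates an FCN with a pre-trained ResNet50 backbone via torchvision, re-headed for two classes (background, lane). The head size and the argmax thresholding are assumptions for illustration; the fine-tuning on the lane dataset [30,31] described above is omitted.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# FCN with an ImageNet-pre-trained ResNet50 backbone, two output classes.
# Fine-tuning on the lane recognition dataset [30,31] is omitted here.
model = fcn_resnet50(weights=None, weights_backbone="DEFAULT", num_classes=2)
model.eval()

def segment_lanes(image: torch.Tensor) -> torch.Tensor:
    """image: normalized (1, 3, H, W) tensor -> boolean lane mask (H, W)."""
    with torch.no_grad():
        logits = model(image)["out"]        # (1, 2, H, W) per-class scores
    return logits.argmax(dim=1)[0] == 1     # keep only lane-class pixels
```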
Because the segmented image is expressed in a distorted camera coordinate system, it is transformed into the vehicle's local coordinate system. Using the camera information (denoted info in Equation (11)), namely the camera's installation parameters, orientation, and distortion characteristics, the image region between the vanishing point and the vehicle's position is projected onto the horizontal plane:
$$I_{loc} = \mathrm{transformation}(I_{seg},\, info). \tag{11}$$
Even after projection, lanes may not be distinctly separated. An adjacent-point searching and clustering method is therefore applied to distinguish the left and right lanes:
$$I_{lane\_i} = \mathrm{clustering}(I_{loc}), \quad i \in \{left,\, right\}. \tag{12}$$
Finally, a third-order polynomial is fitted to the lane shape. With an appropriate meter-to-pixel ratio, the lane geometry is obtained in the vehicle’s local coordinate system:
$$P_i = r_{p2m}\,\mathrm{linefit}(I_{lane\_i}), \quad i \in \{left,\, right\}. \tag{13}$$
This polynomial representation directly yields the required lane-based states: lateral offset d, heading angle Δψ, and curvature κ. Under the small-angle assumption for Δψ, these states are computed as:
$$d_i = P_i(0), \tag{14}$$

$$\Delta\psi_i = \arctan\!\big(P_i'(0)\big) \approx P_i'(0), \tag{15}$$

$$\kappa_i = \frac{P_i''(0)}{\big(1 + P_i'(0)^2\big)^{3/2}} \approx P_i''(0), \tag{16}$$

$$\text{where } P_i(x) = p_{3\_i}x^3 + p_{2\_i}x^2 + p_{1\_i}x + p_{0\_i} \;\; (i = left,\, right), \quad \text{under } \Delta\psi \approx 0 \text{ and } P_i'(0)^2 \approx 0.$$
Alternatively, they can be expressed in terms of the polynomial coefficients, with improved accuracy obtained by using the midpoint between the left and right lanes:
$$d = \frac{p_{0\_left} + p_{0\_right}}{2}, \tag{17}$$

$$\Delta\psi = \frac{p_{1\_left} + p_{1\_right}}{2}, \tag{18}$$

$$\kappa = \frac{2p_{2\_left} + 2p_{2\_right}}{2}. \tag{19}$$
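A minimal sketch of the fitting and extraction steps in Equations (13)–(19), assuming the clustered lane points are already expressed in vehicle coordinates in meters; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def lane_states(left_xy: np.ndarray, right_xy: np.ndarray):
    """left_xy, right_xy: (N, 2) arrays of (x forward, y lateral) lane points
    in vehicle coordinates [m]. Returns (d, dpsi, kappa) per Eqs. (17)-(19)."""
    coeffs = {}
    for name, pts in (("left", left_xy), ("right", right_xy)):
        # np.polyfit returns [p3, p2, p1, p0] for a cubic fit, Eq. (13)
        coeffs[name] = np.polyfit(pts[:, 0], pts[:, 1], deg=3)
    p_l, p_r = coeffs["left"], coeffs["right"]
    d = (p_l[3] + p_r[3]) / 2               # Eq. (17): lateral offset
    dpsi = (p_l[2] + p_r[2]) / 2            # Eq. (18): heading offset (small angle)
    kappa = (2 * p_l[1] + 2 * p_r[1]) / 2   # Eq. (19): curvature = P''(0)
    return d, dpsi, kappa
```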
In summary, the algorithm provides accurate lane-based states, but the processing introduces non-negligible delays due to its computational complexity. These delays lead to errors in estimating the vehicle's instantaneous state relative to the lane, so an estimator is needed that recovers the current state by properly accounting for the delay. Accurate compensation in turn requires knowing the exact amount of delay; for this purpose, the implementation of the algorithm described above includes a function that measures the image processing time of each frame. The following section presents a methodology to compensate for this delay through estimator design.

3.2. Vehicle State Estimator

To achieve robust estimation in the presence of sensor delays and signal faults, careful estimator design is required. As derived in Equation (9), the vehicle states of interest are governed by nonlinear dynamics. At the same time, real-time performance is essential for vehicle applications, necessitating an estimation framework that balances accuracy with computational efficiency.
In this study, the Extended Kalman Filter (EKF) is adopted. The EKF is particularly well suited to this problem because it can accommodate nonlinear system models while maintaining relatively low computational cost, avoiding the need for sigma-point propagation as in the Unscented Kalman Filter [32]. The EKF operates through prediction and correction steps based on local linearization of the nonlinear model.
The state vector x is defined to include the target lane-coordinate-based states and their auxiliary dynamics, rearranged from the relationships derived in Section 2:
$$
\dot X = f(X,U) = \begin{bmatrix}
x_3 \\
x_5 - u x_6 \\
-\dfrac{C_{\alpha f}+C_{\alpha r}}{mu}x_4 + \dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu}x_5 - x_2 x_4 x_5 + \dfrac{C_{\alpha f}}{m}\delta_f + u x_2 x_4 x_6 - u^2 x_6 \\
-\dfrac{C_{\alpha f}+C_{\alpha r}}{mu}x_4 + \left(\dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu} - u\right) x_5 + \dfrac{C_{\alpha f}}{m}\delta_f \\
\dfrac{bC_{\alpha r}-aC_{\alpha f}}{I_z u}x_4 - \dfrac{a^2 C_{\alpha f}+b^2 C_{\alpha r}}{I_z u}x_5 + \dfrac{aC_{\alpha f}}{I_z}\delta_f \\
0
\end{bmatrix}, \tag{20}
$$

$$\text{where } X = [x_1\;\, x_2\;\, x_3\;\, x_4\;\, x_5\;\, x_6]^T = \begin{bmatrix} d & \Delta\psi & v_n & v & r & \kappa \end{bmatrix}^T, \quad U = \delta_f.$$
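For reference, the sketch below transcribes the process model of Equation (20) directly into Python, reusing the parameters defined in the earlier sketch; it is illustrative rather than the authors' code.

```python
import numpy as np

def f(X, delta_f, u):
    """Process model of Eq. (20). State X = [d, dpsi, vn, v, r, kappa];
    input delta_f [rad]; longitudinal speed u [m/s]."""
    d, dpsi, vn, v, r, kappa = X
    c0 = -(Caf + Car) / (m * u)
    c1 = (b * Car - a * Caf) / (m * u)
    dX = np.empty(6)
    dX[0] = vn                                          # d_dot
    dX[1] = r - u * kappa                               # dpsi_dot
    dX[2] = (c0 * v + c1 * r - dpsi * v * r             # vn_dot
             + Caf / m * delta_f + u * dpsi * v * kappa - u**2 * kappa)
    dX[3] = c0 * v + (c1 - u) * r + Caf / m * delta_f   # v_dot
    dX[4] = ((b * Car - a * Caf) / (Iz * u) * v         # r_dot
             - (a**2 * Caf + b**2 * Car) / (Iz * u) * r
             + a * Caf / Iz * delta_f)
    dX[5] = 0.0                                         # kappa_dot, Eq. (6)
    return dX
```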
The measurable outputs Z, obtained from the IMU and camera sensors, are related to the state vector as:
$$
Z = h(X,U) = \begin{bmatrix}
-\dfrac{C_{\alpha f}+C_{\alpha r}}{mu}x_4 + \dfrac{bC_{\alpha r}-aC_{\alpha f}}{mu}x_5 + \dfrac{C_{\alpha f}}{m}\delta_f \\
x_5 \\ x_1 \\ x_2 \\ x_6
\end{bmatrix}, \tag{21}
$$

$$\text{where } Z = \begin{bmatrix} z_{IMU} & z_{cam} \end{bmatrix}^T, \quad z_{IMU} = \begin{bmatrix} a_y & r \end{bmatrix}^T, \quad z_{cam} = \begin{bmatrix} d & \Delta\psi & \kappa \end{bmatrix}^T, \quad a_y = \dot v + u r.$$
By introducing the process noise covariance Q and measurement noise covariance R, with respective noise terms wk and vk, the EKF problem formulation can be expressed as:
$$X_k = f(X_{k-1}, U_k) + w_k, \qquad Z_k = h(X_k, U_k) + v_k. \tag{22}$$
Accordingly, the EKF proceeds with prediction and measurement update steps [33]. First, during the prediction step, the nonlinear system is linearized for state propagation:
$$F_k = \left.\frac{\partial f}{\partial X}\right|_{\hat X_{k-1|k-1},\,U_k}, \qquad H_k = \left.\frac{\partial h}{\partial X}\right|_{\hat X_{k|k-1},\,U_k}. \tag{23}$$
The predicted state and covariance are then updated as:
$$\hat X_{k|k-1} = f(\hat X_{k-1|k-1}, U_k), \qquad \hat P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k. \tag{24}$$
Next, during the measurement update step, the state and covariance are corrected using the measurement information:
$$\hat X_{k|k} = \hat X_{k|k-1} + K_k\big(Z_k - h(\hat X_{k|k-1})\big), \qquad P_{k|k} = (I - K_k H_k)\,\hat P_{k|k-1}, \tag{25}$$

$$\text{where } K_k = \hat P_{k|k-1} H_k^T S_k^{-1}, \qquad S_k = H_k \hat P_{k|k-1} H_k^T + R_k.$$
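A compact sketch of the prediction and update steps in Equations (23)–(25), discretized with a forward-Euler step of length dt and using numerical Jacobians for brevity (the paper does not state whether the linearization is analytic or numeric):

```python
import numpy as np

def numerical_jacobian(fun, x, eps=1e-6):
    """Forward-difference Jacobian of fun at x."""
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (fun(x + dx) - fx) / eps
    return J

def ekf_predict(x, P, delta_f, u, dt, Q):
    """Eqs. (23)-(24) with the process model f of Eq. (20)."""
    fd = lambda xx: xx + dt * f(xx, delta_f, u)   # forward-Euler discretization
    F = numerical_jacobian(fd, x)
    return fd(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, R):
    """Eq. (25) for a measurement z with measurement model h."""
    H = numerical_jacobian(h, x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - h(x))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```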
A key challenge arises from the fact that the sensors operate at different update rates. In particular, camera-based measurements incur a significant delay due to image processing. To overcome these issues, the proposed framework incorporates two dedicated strategies, which will be described in the following subsection.

3.3. Compensation for Asynchronous Measurements

In conventional approaches, when a sensor such as a camera operates at a slower update frequency than other sensors, two options are typically used: (1) generating virtual measurements through zero-order hold or extrapolation, or (2) forcing the entire estimator to operate at the slower frequency. Both approaches have notable drawbacks. Virtual measurements may accumulate errors over time, while slowing the estimator prevents effective use of higher-frequency sensor information and may hinder downstream applications, such as control systems, that require fast updates.
To overcome this limitation, the proposed methodology omits the measurement update whenever a new camera measurement is unavailable and performs only the model-based prediction step of the EKF, as expressed in Equation (26):
$$\hat X_{k|k} = \hat X_{k|k-1}, \qquad P_{k|k} = \hat P_{k|k-1}. \tag{26}$$
This approach eliminates the errors introduced by virtual measurements while still enabling continuous estimation of vehicle states using high-frequency inputs from the IMU and steering angle sensors.
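One plausible reading of this scheme is sketched below: the IMU rows of Equation (21) are applied every fast cycle, the camera rows only when a freshly processed frame is available, and with no new measurement the step reduces to Equation (26). The split of h into IMU and camera parts follows Equation (21); the code organization itself is an assumption.

```python
import numpy as np

def estimator_step(x, P, delta_f, u, dt, z_imu, z_cam, Q, R_imu, R_cam):
    """One estimator cycle at the fast IMU/steering rate."""
    x, P = ekf_predict(x, P, delta_f, u, dt, Q)

    def h_imu(xx):   # [a_y, r] rows of Eq. (21)
        ay = (-(Caf + Car) / (m * u) * xx[3]
              + (b * Car - a * Caf) / (m * u) * xx[4] + Caf / m * delta_f)
        return np.array([ay, xx[4]])

    def h_cam(xx):   # [d, dpsi, kappa] rows of Eq. (21)
        return np.array([xx[0], xx[1], xx[5]])

    if z_imu is not None:                    # IMU arrives every cycle
        x, P = ekf_update(x, P, z_imu, h_imu, R_imu)
    if z_cam is not None:                    # camera is slower; often absent
        x, P = ekf_update(x, P, z_cam, h_cam, R_cam)
    return x, P                              # no measurement at all: Eq. (26)
```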
Naturally, the absence of measurements introduces concerns about observability. However, since this study assumes that no signal loss persists beyond a single measurement period, periodic observability is sufficient for reliable estimation. Observability was verified by conducting a rank test on the observability matrix O, defined using the linearized system matrix F and measurement matrix H, as given in Equation (27):
$$\mathcal{O} = \begin{bmatrix} H^T & (HF)^T & \cdots & \left(HF^{\,n-1}\right)^T \end{bmatrix}^T. \tag{27}$$
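A minimal numerical version of this rank test, using the linearized F and H from the EKF sketch above:

```python
import numpy as np

def is_observable(F, H):
    """Rank test of the observability matrix in Eq. (27) at one
    linearization point (F, H)."""
    n = F.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n
```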

3.4. Compensation for Delayed Measurements

As discussed earlier, the processing of camera images to extract lane information introduces an inherent signal delay. This delay, combined with errors caused by the limited field of view, can significantly reduce the accuracy of vehicle state estimation relative to the lane. Accordingly, proper delay compensation is essential.
In the proposed framework, delay compensation is achieved by correcting the timestamp of the delayed measurement and re-estimating the state from the corrected timestamp up to the current time. Specifically, if the processing delay of the camera is denoted as τ, the corrected measurement timestamp is given by Equation (28):
$$z_{cam,true}(t - \tau) = z_{cam,m}(t). \tag{28}$$
In this study, only the processing delay of the camera is considered, as transmission delays are negligible. Since the image processing time required to extract lane information is known, the exact delay τ can be determined for each incoming measurement. By correcting the timestamp, the lane measurement can be aligned with the actual time at which it was captured.
To incorporate this corrected measurement into the current state estimate, the system performs a re-estimation over the interval t = [tcurτ, tcur], as illustrated in Figure 4 and expressed in Equation (29):
$$\hat X(t + \Delta t) = \mathrm{EKF}\big(X(t),\, U(t),\, Z(t),\, P(t)\big). \tag{29}$$
During this re-estimation process, measurement updates are applied whenever new measurements are available. However, since only the camera signal is delay-compensated in this study, the procedure consists primarily of time updates using steering angle and IMU measurements, except for the initial step that incorporates the delay-corrected camera measurement.
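The buffering-and-replay mechanism of Figure 4 and Equations (28)–(29) can be sketched as follows; the buffer layout, horizon length, and method names are illustrative assumptions rather than the authors' implementation.

```python
from collections import deque

class DelayCompensatedEKF:
    """Buffers fast-rate steps; on camera arrival with measured delay tau,
    rolls back to t - tau, applies the camera update there, and replays
    the buffered steps up to the present (sketch only)."""

    def __init__(self, x0, P0, horizon=100):
        self.x, self.P = x0, P0
        self.history = deque(maxlen=horizon)   # (t, x, P, inputs, z_imu)

    def step(self, t, delta_f, u, dt, z_imu, Q, R_imu, R_cam):
        self.history.append((t, self.x.copy(), self.P.copy(),
                             (delta_f, u, dt), z_imu))
        self.x, self.P = estimator_step(self.x, self.P, delta_f, u, dt,
                                        z_imu, None, Q, R_imu, R_cam)

    def camera_update(self, t_now, z_cam, tau, Q, R_imu, R_cam):
        # 1. Find the buffered step closest to the capture time t_now - tau.
        idx = min(range(len(self.history)),
                  key=lambda i: abs(self.history[i][0] - (t_now - tau)))
        _, x, P, _, _ = self.history[idx]
        # 2. Replay: camera update at the capture time, then fast-rate
        #    steps forward to the present.
        for i in range(idx, len(self.history)):
            _, _, _, (delta_f, u, dt), z_imu = self.history[i]
            zc = z_cam if i == idx else None
            x, P = estimator_step(x, P, delta_f, u, dt,
                                  z_imu, zc, Q, R_imu, R_cam)
        self.x, self.P = x, P
```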

4. Validation

4.1. Experiment Configuration

The proposed estimator was validated using data collected in real-world driving environments. This approach ensures that robustness is evaluated against actual signal faults encountered in practice, rather than against artificially generated signal errors.
For the experimental setup, a medium-sized sedan was equipped with the necessary sensors and devices. An OxTS RTK-GNSS (RT-K) system and a high-definition (HD) lane map were used to obtain ground-truth vehicle states, as shown in Figure 5. All validation data—including RT-K, IMU, and steering angle sensor signals—were logged via a dSPACE AutoBox using high-speed CAN. The AutoBox is a platform developed to support real-time Hardware-in-the-Loop (HIL) testing for vehicle Electronic Control Units (ECUs). Signals input into this device are recorded along with the device’s internal time, allowing the recorded data to be processed based on the precise time of input. Furthermore, in this study, it was assumed that the time taken for signals from each sensor, including the camera, to be transmitted via wired communication is very small and can therefore be considered negligible. This means that the estimator can process all sensor signals based on their generation time. The only signal delay that needs to be considered in this research is the processing delay that occurs during the extraction of the vehicle’s lane-based state from the camera image. The camera was connected to a laptop interfaced with the AutoBox, ensuring that all measurements were synchronized. The RT-K system provides highly accurate position and state information, which, when combined with the HD lane map, enables precise determination of the vehicle’s true states in the lane coordinate system. In addition, the estimator implementation required vehicle parameters (e.g., mass, inertia, tire stiffness). For this, vehicle parameters identified from prior dynamic behavior test data were used, as presented in Table 2.
Validation experiments were conducted on public roads featuring both straight and curved sections. To ensure sufficiently rich excitation of lateral dynamics, the vehicle was driven with maneuvers such as sine-wave steering within the lane and a Double-Lane Change (DLC), such as overtaking a preceding vehicle. Driving speeds ranged between 30 and 60 kph. Consequently, each scenario was composed of driving data lasting between 10 and 18 s and a trajectory length of 100 to 250 m. The data for each scenario was of sufficient length to fully exhibit the characteristics of the corresponding maneuver, making it suitable for providing meaningful validation results for lateral velocity estimation.
Because this study introduces a new lane-based vehicle state estimator with delay compensation, the vehicle states directly extracted from the camera (without estimation) were used as the first baseline for comparison. However, the lateral velocity vn in the lane coordinate system cannot be directly measured by any onboard sensor. Therefore, it was assumed that the vehicle’s lateral slip velocity v was sufficiently small to be neglected. Under this assumption, the baseline vn was computed using the vehicle’s longitudinal velocity u and the camera-measured heading angle deviation Δψ, as shown in Equation (30):
$$\mathrm{Cam}_{only} = \begin{bmatrix} d & \Delta\psi & u\sin(\Delta\psi) \end{bmatrix}^T. \tag{30}$$
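A one-line realization of this baseline, with the body-frame slip velocity v set to zero as stated above:

```python
import numpy as np

def camera_only_states(d_cam, dpsi_cam, u):
    """Camera-only baseline of Eq. (30): vn approximated as u*sin(dpsi),
    assuming the lateral slip velocity v is negligible."""
    return np.array([d_cam, dpsi_cam, u * np.sin(dpsi_cam)])
```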
Additionally, to evaluate the effect of the proposed delay compensation, estimation results obtained without applying delay compensation were used as a second baseline. This allowed direct assessment of the performance improvements achieved by the proposed methodology in reducing phase lag.

4.2. Experiment Result and Analysis

4.2.1. Computing Efficiency Analysis

Before presenting the validation results, we first analyzed the processing time of the camera image-to-vehicle state extraction process and the proposed estimator. As mentioned earlier, despite the use of a pre-trained network for computational efficiency, the camera image processing required approximately 0.3 s per frame, as shown in Figure 6. This represents a considerable computational burden, even when executed on a high-performance laptop (AMD Ryzen 9 7845HX with NVIDIA RTX 4070). Such latency can critically impair the accuracy of vehicle state estimation. In contrast, the proposed estimator with signal delay compensation accurately estimates the current vehicle state regardless of the delay magnitude. Moreover, as illustrated in the second graph of Figure 6, the estimator itself operates at very high speed, ensuring real-time estimation without performance issues.

4.2.2. Sine Wave Driving Scenario

The first experiment was conducted at 60 km/h on a straight road during a sine-wave maneuver with an amplitude of one meter, as shown in Figure 7. The results (right plot of Figure 7) show that the raw camera measurement is significantly delayed. Without delay compensation, this delay propagates into the estimation results, as seen in the blue trace. However, with the proposed delay-compensated estimator, the results align closely with the ground truth and are free from phase lag. In addition, by accounting for signal characteristics, the estimator achieves a substantially lower error than the raw measurement.
The second experiment (Figure 8) was performed at 30 km/h on a curved road with a 150 m radius. Although the maneuver was similar, the lower speed required larger and faster steering inputs, producing a larger Δψ while lateral offset and vn remained similar. Again, the proposed estimator accurately tracked the vehicle’s state, effectively compensating for delay even under large lateral motions at low speed. Due to the shorter time axis, it is particularly evident that the camera-only signals appear as sparse, delayed points. In contrast, the proposed estimator provided smooth, continuous estimates at 100 Hz without phase lag. The non-compensated estimator still lagged and failed to capture the vehicle’s motion fully.

4.2.3. Double-Lane-Change Scenario

The third and fourth scenarios (Figure 9 and Figure 10) involve Double-Lane-Change (DLC) maneuvers at approximately 40 km/h. Scenario 3 was conducted on a straight road, while scenario 4 was performed on a curved road with a 100 m radius.
In these cases, the lateral displacement was about 3 m (the width of a lane), resulting in lateral velocities roughly twice those of the sine wave experiments. The results again confirm that camera-only measurements and estimators without delay compensation suffer significant delays. In contrast, the proposed estimator closely tracks the ground truth without delay, even during large and rapid lateral movements.

4.2.4. Overall Validation Analysis

For a comprehensive quantitative analysis of the four driving scenarios (sine wave maneuvers on a straight and a curved road, and double-lane changes on each), we calculated the Root Mean Square Error (RMSE) and Peak Error (PE) of each state variable and summarized them in Table 3. Two baselines were used: the states directly measured by the camera and the proposed estimator without delay compensation. Their performance was compared against that of the proposed delay-compensated estimator.
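For reference, the two metrics can be computed per state as below (standard definitions; the paper does not list the formulas explicitly):

```python
import numpy as np

def rmse(estimate, truth):
    """Root mean square error between an estimated and a ground-truth trace."""
    err = np.asarray(estimate) - np.asarray(truth)
    return float(np.sqrt(np.mean(err ** 2)))

def peak_error(estimate, truth):
    """Maximum absolute error between an estimated and a ground-truth trace."""
    err = np.asarray(estimate) - np.asarray(truth)
    return float(np.max(np.abs(err)))
```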
The numerical results in Table 3 clearly demonstrate the superiority of the proposed methodology. The proposed estimator provides substantial quantitative improvements over the camera-only approach. In terms of RMSE, the proposed method reduced the error for d from a range of 0.121–0.201 m to 0.055–0.090 m, for Δψ from 0.486–2.040° to 0.269–0.821°, and for vn from 0.166–0.271 m/s to 0.068–0.117 m/s. Furthermore, the Peak Error also showed significant reductions for most states. The error for d decreased from 0.273–0.454 m to 0.122–0.233 m, and for vn from 0.357–0.475 m/s to 0.162–0.288 m/s. For Δψ, the Peak Error was generally reduced from 1.224–3.565° to 0.671–1.724°, with the exception of Scenario 3, where a slight increase from 1.522° to 1.811° was observed. Notably, the estimator that does not incorporate delay compensation shows performance that is either similar to or even worse than simply calculating the target state using only camera information. This is because the non-compensated estimator acts as a type of low-pass filter, causing the state to be estimated with a slight but persistent delay relative to the camera measurements.
This performance improvement can be attributed to two key elements of our research. First, the adaptive signal delay compensation algorithm effectively eliminates the substantial delay from camera image processing, allowing for accurate and real-time estimation of the vehicle’s current states. The significant error reduction, particularly in the high lateral velocity Double-Lane-Change (DLC) scenarios, proves the critical importance of delay compensation during dynamic maneuvers. Second, by fusing the vehicle dynamics model with IMU and steering angle sensor data, we were able to physically and logically estimate the intervals between the discontinuous and noisy camera measurements and effectively filter out noise by considering signal characteristics.
To validate the estimator’s performance while considering real-world driving scenarios, we selected sinusoidal driving and double-lane-change driving, which generate significant lateral motion on straight and curved roads, respectively, as validation scenarios. Since the proposed methodology consistently demonstrated performance across all these scenarios, we expect it to perform appropriately in most similar real-world driving situations. However, more complex driving conditions, such as rapid acceleration/deceleration or inclement weather, were not included in the scope of this study. Therefore, its performance in these typical driving environments requires further validation through future research.
In summary, the proposed estimator demonstrated robust and reliable performance across dynamic maneuvers and diverse road conditions. These results confirm that our methodology overcomes fundamental limitations of camera-based sensing, providing highly accurate real-time state information essential for ADAS and autonomous driving applications.

5. Conclusions

This paper presented a robust, real-time state estimator capable of determining a vehicle’s complete lateral state within the lane coordinate system. The primary contribution lies in the successful estimation of the lane-based lateral velocity—a critical yet unmeasurable state for motion prediction—using only a camera, IMU, and steering angle sensor, without reliance on high-cost sensors or HD maps. This was achieved by implementing a vehicle dynamics-based bicycle model within an Extended Kalman Filter (EKF), which proved superior to conventional kinematic approaches in capturing actual vehicle behavior, especially during dynamic maneuvers.
Furthermore, this dynamics-based framework inherently addresses the challenge of vision sensor latency. By predicting the vehicle’s state evolution during measurement delays, the proposed method effectively reduces estimation errors and phase lag. Validation with real driving data demonstrated that the estimator provides continuous and accurate state estimates across diverse scenarios, including lane changes and sine-wave maneuvers, confirming its reliability and real-time performance for practical applications.
Future work will focus on integrating the estimator with lateral controllers for closed-loop validation, testing robustness under adverse weather, and extending the approach to estimate the lane-coordinate-based states of surrounding vehicles through multi-sensor fusion.

Author Contributions

Conceptualization, C.A. and W.K.; methodology, W.K.; software, M.K.; validation, W.K. and M.K.; formal analysis, M.K.; resources, C.A.; data curation, M.K.; writing—original draft preparation, M.K.; writing—review and editing, C.A.; visualization, M.K.; supervision, C.A.; project administration, C.A.; funding acquisition, C.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef]
  2. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  3. Ghorai, P.; Eskandarian, A.; Kim, Y.K.; Mehr, G. State Estimation and Motion Prediction of Vehicles and Vulnerable Road Users for Cooperative Autonomous Driving: A Survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16983–17002. [Google Scholar] [CrossRef]
  4. Bersani, M.; Mentasti, S.; Dahal, P.; Arrigoni, S.; Vignati, M.; Cheli, F.; Matteucci, M. An integrated algorithm for ego-vehicle and obstacles state estimation for autonomous driving. Robot. Auton. Syst. 2021, 139, 103662. [Google Scholar] [CrossRef]
  5. Jo, K.; Lee, M.; Kim, J.; Sunwoo, M. Tracking and Behavior Reasoning of Moving Vehicles Based on Roadway Geometry Constraints. IEEE Trans. Intell. Transp. Syst. 2017, 18, 460–476. [Google Scholar] [CrossRef]
  6. Kim, J.; Jo, K.; Lim, W.; Lee, M.; Sunwoo, M. Curvilinear-Coordinate-Based Object and Situation Assessment for Highly Automated Vehicles. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1559–1575. [Google Scholar] [CrossRef]
  7. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards fully autonomous driving: Systems and algorithms. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 163–168. [Google Scholar]
  8. Ghallabi, F.; El-Haj-Shhade, G.; Mittet, M.A.; Nashashibi, F. LIDAR-Based road signs detection For Vehicle Localization in an HD Map. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1484–1490. [Google Scholar]
  9. Vázquez, J.L.; Brühlmeier, M.; Liniger, A.; Rupenyan, A.; Lygeros, J. Optimization-Based Hierarchical Motion Planning for Autonomous Racing. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 2397–2403. [Google Scholar]
  10. Güzel, M.S. Autonomous Vehicle Navigation Using Vision and Mapless Strategies: A Survey. Adv. Mech. Eng. 2013, 5, 234747. [Google Scholar] [CrossRef]
  11. Tsogas, M.; Polychronopoulos, A.; Amditis, A. Unscented Kalman filter design for curvilinear motion models suitable for automotive safety applications. In Proceedings of the 2005 7th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005; p. 8. [Google Scholar]
  12. Gruyer, D.; Belaroussi, R.; Revilloud, M. Accurate lateral positioning from map data and road marking detection. Expert Syst. Appl. 2016, 43, 1–8. [Google Scholar] [CrossRef]
  13. Bersani, M.; Vignati, M.; Mentasti, S.; Arrigoni, S.; Cheli, F. Vehicle state estimation based on Kalman filters. In Proceedings of the 2019 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), Turin, Italy, 2–4 July 2019; pp. 1–6. [Google Scholar]
  14. Guo, H.; Cao, D.; Chen, H.; Lv, C.; Wang, H.; Yang, S. Vehicle dynamic state estimation: State of the art schemes and perspectives. IEEE/CAA J. Autom. Sin. 2018, 5, 418–431. [Google Scholar] [CrossRef]
  15. Liu, H.; Wang, P.; Lin, J.; Ding, H.; Chen, H.; Xu, F. Real-Time Longitudinal and Lateral State Estimation of Preceding Vehicle Based on Moving Horizon Estimation. IEEE Trans. Veh. Technol. 2021, 70, 8755–8768. [Google Scholar] [CrossRef]
  16. Wang, Y.; Liu, Y.; Fujimoto, H.; Hori, Y. Vision-Based Lateral State Estimation for Integrated Control of Automated Vehicles Considering Multirate and Unevenly Delayed Measurements. IEEE/ASME Trans. Mechatron. 2018, 23, 2619–2627. [Google Scholar] [CrossRef]
  17. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
  18. Nam, H.; Choi, W.; Ahn, C. Model Predictive Control for Evasive Steering of an Autonomous Vehicle. Int. J. Automot. Technol. 2019, 20, 1033–1042. [Google Scholar] [CrossRef]
  19. Zakaria, N.J.; Shapiai, M.I.; Ghani, R.A.; Yassin, M.N.M.; Ibrahim, M.Z.; Wahid, N. Lane Detection in Autonomous Vehicles: A Systematic Review. IEEE Access 2023, 11, 3729–3765. [Google Scholar] [CrossRef]
  20. Tang, J.; Li, S.; Liu, P. A review of lane detection methods based on deep learning. Pattern Recognit. 2021, 111, 107623. [Google Scholar] [CrossRef]
  21. Qin, Z.; Zhang, P.; Li, X. Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2555–2568. [Google Scholar] [CrossRef]
  22. Kao, Y.; Che, S.; Zhou, S.; Guo, S.; Zhang, X.; Wang, W. LHFFNet: A hybrid feature fusion method for lane detection. Sci. Rep. 2024, 14, 16353. [Google Scholar] [CrossRef]
  23. Valls, M.I.; Hendrikx, H.F.C.; Reijgwart, V.J.F.; Meier, F.V.; Sa, I.; Dubé, R.; Gawel, A.; Bürki, M.; Siegwart, R. Design of an Autonomous Racecar: Perception, State Estimation and System Integration. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2048–2055. [Google Scholar]
  24. Byun, Y.-S.; Jeong, R.-G.; Kang, S.-W. Vehicle Position Estimation Based on Magnetic Markers: Enhanced Accuracy by Compensation of Time Delays. Sensors 2015, 15, 28807–28825. [Google Scholar] [CrossRef] [PubMed]
  25. Wischnewski, A.; Stahl, T.; Betz, J.; Lohmann, B. Vehicle Dynamics State Estimation and Localization for High Performance Race Cars. IFAC-PapersOnLine 2019, 52, 154–161. [Google Scholar] [CrossRef]
  26. Bai, S.; Hu, J.; Yan, Y.; Pi, D.; Ding, H.; Shen, L.; Yin, G. An Adaptive UKF for Vehicle State Estimation with Delayed Measurements and Packet Loss. IEEE/ASME Trans. Mechatron. 2025, 30, 236–251. [Google Scholar] [CrossRef]
  27. Wang, Z.; Gao, Y.; Fang, C.; Liu, L.; Zeng, D.; Dong, M. State-Estimation-Based Control Strategy Design for Connected Cruise Control with Delays. IEEE Syst. J. 2023, 17, 99–110. [Google Scholar] [CrossRef]
  28. Reina, G.; Paiano, M.; Blanco-Claraco, J.-L. Vehicle parameter estimation using a model-based estimator. Mech. Syst. Signal Process. 2017, 87, 227–241. [Google Scholar] [CrossRef]
  29. Anderson, R.; Bevly, D.M. Using GPS with a model-based estimator to estimate critical vehicle states. Veh. Syst. Dyn. 2010, 48, 1413–1438. [Google Scholar] [CrossRef]
  30. AIHub. Lane/Crosswalk Recognition Images (Capital Area). Available online: https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=197 (accessed on 25 September 2025).
  31. PyTorch. resnet50. Available online: https://docs.pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html#torchvision.models.ResNet50_Weights (accessed on 25 September 2025).
  32. Konatowski, S.; Kaniewski, P.; Matuszewski, J. Comparison of Estimation Accuracy of EKF, UKF and PF Filters. Annu. Navig. 2016, 23, 69–87. [Google Scholar] [CrossRef]
  33. Ribeiro, M.I. Kalman and Extended Kalman Filters: Concept, Derivation and Properties; Institute for Systems and Robotics: Lisbon, Portugal, 2004. [Google Scholar]
Figure 1. Estimator structure.
Figure 2. Vehicle expressed in lane coordinate.
Figure 3. Overview of the lane detection algorithm pipeline. The process consists of three main stages: (1) lane segmentation using a pre-trained deep neural network, (2) transformation of the segmented image into the vehicle’s local coordinate system, and (3) lane shape extraction via point clustering and polynomial fitting. The resulting polynomial representation provides the lane geometry—including lateral offset, heading angle deviation, and curvature—expressed in the vehicle coordinate system.
Figure 4. Conceptual illustration of the delay compensation process.
Figure 5. Experimental setup for data collection and validation.
Figure 6. Camera and proposed estimation process time.
Figure 7. Experiment road and result for scenario 1. Sine wave driving on a straight road in the direction of the red arrow.
Figure 8. Experiment road and result for scenario 2. Sine wave driving on a curved road in the direction of the red arrow.
Figure 9. Experiment road and result for scenario 3. Double-Lane Change on a straight road in the direction of the red arrow.
Figure 10. Experiment road and result for scenario 4. Double-Lane Change on a curved road in the direction of the red arrow.
Table 1. Symbols.

| Symbol | Description | Symbol | Description |
|---|---|---|---|
| d | Lateral offset about the lane [m] | m | Vehicle mass [kg] |
| ψ | Heading angle in the global frame [rad] | Iz | Yaw moment of inertia [kg·m²] |
| ψd | Heading angle of the lane [rad] | a | Distance from front axle to CG [m] |
| Δψ | Heading angle from the lane [rad] | b | Distance from rear axle to CG [m] |
| u | Longitudinal velocity [m/s] | Cαf | Front tire cornering stiffness [N/rad] |
| v | Lateral velocity [m/s] | Cαr | Rear tire cornering stiffness [N/rad] |
| vn | Lateral velocity about the lane [m/s] | δf | Front steering angle [rad] |
| r | Yaw rate [rad/s] | τ | Amount of signal delay [s] |
| rd | Desired yaw rate along the lane [rad/s] | z | Measurements from the sensors |
Table 2. Vehicle parameters used in the estimator model.

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| m | 1850 kg | Iz | 2800 kg·m² |
| a | 1.35 m | Cαf | 86,000 N/rad |
| b | 1.45 m | Cαr | 92,000 N/rad |
Table 3. Root mean square error (RMSE) and peak error (PE) of experiment results.

| State | Metric | Scen. 1 Cam. 1 | Scen. 1 Est. 2 | Scen. 1 Prop. 3 | Scen. 2 Cam. 1 | Scen. 2 Est. 2 | Scen. 2 Prop. 3 | Scen. 3 Cam. 1 | Scen. 3 Est. 2 | Scen. 3 Prop. 3 | Scen. 4 Cam. 1 | Scen. 4 Est. 2 | Scen. 4 Prop. 3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| d (m) | RMSE | 0.201 | 0.256 | 0.055 | 0.121 | 0.131 | 0.061 | 0.171 | 0.177 | 0.090 | 0.183 | 0.192 | 0.077 |
| d (m) | PE | 0.369 | 0.490 | 0.122 | 0.273 | 0.288 | 0.160 | 0.454 | 0.454 | 0.233 | 0.418 | 0.450 | 0.196 |
| Δψ (deg) | RMSE | 1.003 | 1.008 | 0.269 | 2.040 | 2.059 | 0.821 | 0.577 | 0.559 | 0.481 | 0.486 | 0.511 | 0.332 |
| Δψ (deg) | PE | 1.803 | 1.773 | 0.671 | 3.565 | 3.626 | 1.724 | 1.522 | 1.442 | 1.811 | 1.224 | 1.242 | 1.150 |
| vn (m/s) | RMSE | 0.271 | 0.267 | 0.079 | 0.185 | 0.127 | 0.068 | 0.170 | 0.143 | 0.117 | 0.166 | 0.216 | 0.081 |
| vn (m/s) | PE | 0.475 | 0.525 | 0.288 | 0.414 | 0.261 | 0.162 | 0.469 | 0.345 | 0.281 | 0.357 | 0.513 | 0.196 |

1 Camera only, 2 Estimator without delay compensation, 3 Proposed estimator with delay compensation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
