Article

Research on Low-Cost Non-Contact Vision-Based Wheel Arch Detection for End-of-Line Stage

by Zhigang Ding, Mingsheng Lin *, Yi Ding, Yun Li and Qincheng Zhang
School of Mechanical and Automotive Engineering, Fujian University of Technology, Fuzhou 350118, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(1), 234; https://doi.org/10.3390/s26010234
Submission received: 25 November 2025 / Revised: 17 December 2025 / Accepted: 23 December 2025 / Published: 30 December 2025

Abstract

To address the combined requirements of high precision, high efficiency, low cost, and non-contact measurement for wheel arch detection in the calibration of Advanced Driver Assistance Systems (ADAS) during vehicle production, this study proposes a monocular machine vision-based detection methodology. The hardware system incorporates an industrial camera, priced at approximately 1000 CNY, and a custom light source. The YOLOv5s model is employed for rapid localization of the wheel hub, while the MSER algorithm, in conjunction with Canny edge detection, is utilized for robust feature extraction of the wheel arch. A geometric computation model, referenced to the wheel hub, is subsequently established to quantify the wheel arch height. Experimental results indicate that, for seven vehicle models, the method achieves a mean absolute error (MAE) of ≤0.25 mm, with a maximum error of ≤0.545 mm and a single measurement time of ≤3.2 s, making it suitable for a 60 JPH production line. Additionally, under lighting conditions ranging from 500 to 1500 lux and dust concentrations of ≤10 mg/m3, the MAE fluctuation remains within ≤0.08 mm, ensuring consistent measurement accuracy. This methodology offers a cost-effective, reliable, and fully automated solution for wheel arch detection in ADAS calibration, demonstrating strong adaptability to production lines and considerable potential for industrial applications.

1. Introduction

1.1. Background and Motivation

The vehicle End-of-Line Stage represents a critical final step in the automotive manufacturing process, during which essential tasks such as Advanced Driver Assistance Systems (ADAS) sensor calibration and vehicle dimensional inspection must be completed. Among these, the wheel arch height—defined as the vertical distance from the lower edge of the wheel arch to the ground—serves as a fundamental vehicle reference parameter. Its measurement accuracy directly affects the reconstruction of the ADAS coordinate system. According to Chinese national standards GB/T 39263-2020 [1] and GB/T 40429-2021 [2], the measurement error of the wheel arch height during ADAS calibration must be controlled within ±1.0 mm. Deviations beyond this tolerance can induce spatial positioning errors in millimeter-wave radar and surround-view cameras, thereby degrading the environmental perception performance of ADAS.
The vehicle End-of-Line Stage is characterized by high production throughput, cost sensitivity, and strict non-contact inspection requirements. Specifically, production throughput often exceeds 60 JPH (jobs per hour), per-station equipment budgets must be maintained below 100,000 CNY, and vehicle paint rework rates must remain under 0.1%. These constraints generate three critical demands for wheel arch measurement technologies: non-contact measurement to prevent paint surface damage, low-cost implementation to avoid reliance on expensive inspection devices, and automation with high efficiency, requiring single-cycle measurements to be completed in less than 10 s without manual intervention [3]. Currently, wheel arch measurement has become a key bottleneck limiting both end-of-line throughput and ADAS calibration accuracy, highlighting the need for dedicated detection solutions suited to this industrial scenario.

1.2. Existing Approaches and Limitations

Extensive research has been conducted in non-contact measurement, visual detection algorithms, and intelligent manufacturing system integration, providing a solid technical foundation for this study. Accurate reconstruction of the vehicle coordinate system is fundamental to ADAS calibration, relying on precise measurement of critical reference points. Ding et al. [4] developed a machine vision-based online vehicle coordinate reconstruction method, emphasizing the pivotal role of reference point accuracy in ADAS performance. In precision dimensional measurement, Du et al. [5] utilized binocular vision to achieve high-accuracy measurement of large-diameter components, demonstrating the potential of visual geometric methods. Multi-sensor fusion has also been explored to enhance precision and robustness in vehicle contour measurement. For instance, Steinemann et al. [6] employed 3D-LIDAR to determine vehicle outlines, while Bai et al. [7] combined vision and laser point clouds to measure complete vehicle dimensions.
To meet diverse industrial measurement challenges, non-contact measurement techniques have been continually extended. Wu et al. [8] reviewed computer vision and radar-based non-contact measurement methods, emphasizing challenges related to environmental adaptability. Qian et al. [9] applied binocular vision to measure material stack heights in agricultural vehicles, and Li et al. [10] investigated optical CMM techniques for efficient measurement of complex components. Furthermore, non-contact measurement has been extended to dynamic physical quantities: Wen et al. [11] measured wheel-rail contact forces of heavy-duty trains via collaborative calibration algorithms, while Xu [12] integrated machine vision with displacement influence lines to assess vehicle load, demonstrating the versatility of these techniques from static geometrical to dynamic mechanical measurements.
Algorithmic advancements have been crucial for enhancing automation, intelligence, and robustness. In defect and target recognition, Zhu et al. [13] employed an improved Faster R-CNN for online wheel hub defect detection, while Liu et al. [14] used an enhanced YOLOv5 model for lane type recognition, highlighting deep learning’s effectiveness in complex visual tasks. For feature extraction and contour localization, both traditional algorithms and deep learning models have been investigated. Liu et al. [15] combined classical contour algorithms with machine vision for external thread measurement, Jia et al. [16] studied edge extraction under histogram equalization, and Huang et al. [17] compared traditional HSV + Canny and deep learning HED network models. U-Net-based contour extraction for workpieces has been explored by Guo et al. [18], and Lin [19] applied deep learning for wheel hub weld detection and localization. Additionally, specialized measurement methods have been developed: Liu et al. [20] proposed ring-section inspection using line-structured light, and Pondech et al. [21] combined adaptive gray mapping with particle swarm optimization for nut diameter measurement.
Integrating advanced inspection technologies into production lines is key for Industry 4.0. Yang [22] designed a machine vision system for wheel hub inspection, Chen et al. [23] developed an online measurement system for hub aperture, and Chen et al. [24] implemented precision measurement systems for component dimensions. At the system architecture level, Yu et al. [25] developed an intelligent vehicle dimensional measurement system based on structured light and machine vision, while Maślanka et al. [26] explored machine vision integration with PLC control for scalable quality inspection. Yuan et al. [27] further studied vision-based vehicle detection algorithms for ADAS, demonstrating practical applications of vision technology at the complete vehicle level.
Despite these advances, existing technologies face systemic limitations when applied to wheel arch measurement at the vehicle End-of-Line Stage, which demands ±1 mm accuracy, cycle time < 10 s, low-cost implementation, and robustness against lighting, dust, and other environmental interferences. Manual contact measurement (tape measures with reference targets) is inexpensive (~several hundred CNY) but exhibits errors >±2 mm, requires >30 s per wheel arch, and risks paint surface damage [28]. Coordinate measuring machines (CMMs) can achieve ±0.05 mm accuracy, but costs exceed 500,000 CNY, require controlled environments, and measurement time >10 min, unsuitable for high-throughput end-of-line inspection [29]. Laser profilometers and binocular vision systems avoid paint damage, yet laser devices cost > 150,000 CNY per unit, and binocular systems require expert calibration with poor model adaptability. Under workshop lighting fluctuations (500–1500 lx) and dust interference, measurement errors can rise from ±1.0 mm in laboratory conditions to ±2.0 mm, compromising process stability [30,31]. Hence, current measurement solutions fail to achieve the simultaneous combination of non-contact, low-cost, high-efficiency, and high robustness, leaving a clear gap for end-of-line wheel arch inspection.

1.3. Proposed Method and Contributions

To address these challenges, this study proposes a low-cost, non-contact, monocular machine vision method for wheel arch measurement. The specific contributions are as follows: a cost-effective hardware system using a DMK33GX183 industrial camera (≈1000 CNY) and an OPT-RL2416 LED light source, keeping total hardware cost under 50,000 CNY; a robust detection algorithm employing YOLOv5s for automatic wheel hub localization (mAP@0.5 = 0.995, single-frame processing time < 28 ms) combined with maximally stable extremal region (MSER) segmentation and Canny edge detection to extract wheel arch features and establish a pixel-to-real-world conversion model; and scene-based experiments conducted in a real vehicle end-of-line environment with simulated lighting and dust variations across seven vehicle models (2 sedans, 5 SUVs), with 100 repeated measurements per model. Results indicate measurement accuracy within ±1.0 mm and single-cycle time < 10 s, achieving fully automated inspection without human intervention. This method provides a low-cost, high-precision, fully automated solution for wheel arch inspection in end-of-line operations, supporting intelligent ADAS calibration processes and advancing smart automotive manufacturing.

2. Key Technical Theory Principles

2.1. Monocular Camera Imaging Model

The monocular imaging model [32] is built on the pinhole camera model, whose core objective is to establish an accurate mapping between the 3D world and 2D image. This is realized by defining four coordinate systems as follows:
  • World coordinate system (Ow): Its origin is set at the reference point on the workshop floor, directly beneath the vehicle’s front-left wheel. The Z-axis is perpendicular to the ground (pointing upward) and aligned with the wheel arch measurement direction; the X-axis is parallel to the vehicle’s forward direction; the Y-axis is horizontal and perpendicular to the X-axis, and it describes the actual spatial position of the top vertex beneath the wheel arch.
  • Camera coordinate system (Oc): Its origin is located at the optical center of the camera. The Z-axis points toward the measured vehicle along the optical axis, while the X- and Y-axes are parallel to those of the world coordinate system, acting as a transition between the two coordinate systems.
  • Image coordinate system (Ouv): Its origin is the geometric center of the imaging plane, with the X- and Y-axes parallel to the corresponding axes of the camera coordinate system (unit: mm), reflecting the target’s physical position on the imaging plane. Note that this differs from the standard image-processing convention: in this system the X-axis points left and the Y-axis points downward, whereas the standard convention defines the X-axis as pointing right (with the Y-axis still pointing downward).
  • Pixel coordinate system (U-V): Its origin is the top-right corner of the image, with the U-axis pointing horizontally left and the V-axis pointing vertically downward (unit: pixel). This system serves as the direct coordinate reference for computer image processing, as illustrated in Figure 1.
According to the principle of similar triangles, the mapping relationship between any spatial point P(Xw,Yw,Zw) in the world coordinate system and its corresponding projection point p(u,v) in the pixel coordinate system can be expressed as:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_w}\, K\, [\,R \mid t\,] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{1}$$
In the formula, K is the camera intrinsic matrix containing the focal lengths (fx, fy) and the principal point coordinates (u0, v0); R is the 3 × 3 rotation matrix; t is the 3 × 1 translation vector; and (1/Zw) is the scaling factor used to eliminate the scale ambiguity in the 3D-to-2D projection.
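For readers implementing the model, the following minimal Python sketch applies Equation (1) to project a 3D world point into pixel coordinates; the intrinsic and extrinsic values are illustrative placeholders rather than the calibrated parameters obtained in Section 2.2.

```python
# Minimal sketch of the pinhole projection in Equation (1); all numeric parameters are
# hypothetical placeholders, not the calibrated values of the actual measurement system.
import numpy as np

fx, fy = 2400.0, 2400.0          # placeholder focal lengths in pixels
u0, v0 = 1024.0, 768.0           # principal point (centre of a 2048 x 1536 sensor)
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                            # placeholder extrinsics: camera aligned with world axes
t = np.array([[0.0], [0.0], [900.0]])    # camera 0.9 m from the target along the optical axis

def project(Pw):
    """Project a 3D world point (mm) to pixel coordinates via K [R|t], dividing by the depth."""
    Pw_h = np.append(Pw, 1.0).reshape(4, 1)   # homogeneous world point
    Pc = np.hstack([R, t]) @ Pw_h             # point in camera coordinates
    uv = (K @ Pc) / Pc[2, 0]                  # divide by depth to remove the projective scale
    return float(uv[0, 0]), float(uv[1, 0])

u, v = project(np.array([120.0, -40.0, 0.0]))   # e.g. a point near the wheel arch lower edge
print(f"pixel coordinates: u={u:.1f}, v={v:.1f}")
```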

2.2. Low-Cost Camera Calibration

Camera calibration, as a core prerequisite for obtaining the intrinsic and extrinsic parameters of a monocular measurement system, directly determines the accuracy and robustness of subsequent wheel arch measurements [33]. Traditional checkerboard calibration methods rely on high-precision printed checkerboards, requiring a manufacturing accuracy of ≤0.02 mm. Moreover, the surface of the checkerboard is susceptible to reflections from workshop lighting, which can cause corner extraction deviations, making it difficult to adapt to complex detection environments. To address this, this study adopts an aluminum circular calibration board to build a low-cost, highly robust calibration solution. The specific design is as follows: The calibration board follows the Chinese standard GB400-20-7×7 specification, with a 4 mm center-to-center distance between adjacent feature points. The manufacturing accuracy is controlled within ±0.01 mm, and the cost of a single calibration board is only one-fifth that of a high-precision checkerboard calibration board with the same accuracy. The metal material offers resistance to dust contamination and wear, reducing interference from dust adhesion on feature extraction. Additionally, compared to checkerboard corners, circular feature points effectively avoid recognition ambiguity caused by edge blurring or uneven lighting. Under workshop lighting conditions fluctuating between 500 and 1500 lux, the success rate of circular feature point extraction is ≥98%, significantly outperforming the 85% success rate of checkerboard corner extraction, with a 15% improvement in feature extraction accuracy.
The specific steps are as follows: (1) Fix the camera on the preset mounting bracket at the inspection station to ensure that the camera’s posture and height are perfectly aligned with those in the actual detection setup, thereby preventing calibration parameter errors due to installation deviations; (2) Adjust the spatial angle and position of the aluminum circular calibration plate to fully cover the camera’s field of view. A total of 20 calibration images are taken, with the angular interval between the calibration plate positions in adjacent images controlled within 5° to 10° to avoid parameter redundancy; (3) The MATLAB 2022b Camera Calibration Toolbox is used to calculate the calibration parameters, and a sub-pixel-level circle center extraction algorithm based on the grayscale center of gravity method is introduced during feature point extraction to further optimize the accuracy of feature point positioning. The calibration results indicate that the average reprojection error of the system is ≤0.3 pixels, which ensures that the system error in subsequent wheel arch measurements is controlled within ±0.1 mm, as shown in Figure 2 and Figure 3, fully meeting the parameter accuracy requirements for wheel arch detection.
The camera intrinsic parameter matrix contains the equivalent focal lengths (fx, fy) and the principal point coordinates (u0, v0); it provides the core parameters for converting between the pixel coordinate system and the image coordinate system and is used directly in the subsequent conversion between pixel information and actual physical size. The extrinsic parameters quantitatively describe the relative rotation and translation between the camera coordinate system and the world coordinate system, which is essential for accurately positioning the ground reference in the wheel arch height measurement. Finally, the camera intrinsic matrix K and the distortion coefficients are also solved.
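The calibration itself was performed with the MATLAB 2022b toolbox; as a hedged illustration, the OpenCV-based sketch below performs an equivalent symmetric circle-grid calibration for the 7 × 7 board with 4 mm pitch described above. The file paths and image counts are assumptions.

```python
# Sketch of a circle-grid calibration equivalent to the workflow described in the text
# (the study itself used the MATLAB 2022b Camera Calibration Toolbox).
import cv2
import numpy as np
import glob

pattern = (7, 7)          # symmetric circle grid: 7 x 7 feature points
pitch = 4.0               # centre-to-centre distance in mm

# Ideal board coordinates (Z = 0 on the board plane), one copy per accepted image.
board = np.zeros((pattern[0] * pattern[1], 3), np.float32)
board[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * pitch

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):      # ~20 poses, 5-10 degrees apart (assumed paths)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(gray, pattern, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:                                     # circle-centre detection is sub-pixel
        obj_points.append(board)
        img_points.append(centers)
        image_size = gray.shape[::-1]             # (width, height)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print(f"mean reprojection error: {rms:.3f} px")   # target in the text: <= 0.3 px
```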

2.3. Camera Distortion Correction

Low-cost industrial camera lenses are subject to optical design simplifications and manufacturing cost constraints, often exhibiting two types of optical distortions: radial distortion and tangential distortion. These distortions can lead to pixel-level displacements of the wheel arch edges in images, particularly causing positioning errors of key feature points—such as the lower edge vertices of the wheel arch—that can reach 2 to 3 pixels. This, in turn, reduces feature extraction accuracy and ultimately impacts the precision of wheel arch calculations. Therefore, it is essential to compensate for these distortions by constructing a simplified distortion correction model. The primary objective is to ensure that the correction accuracy meets the ±0.5 mm measurement requirement, while also reducing the algorithmic computational complexity to accommodate real-time detection needs.
Radial distortion, in particular, arises from the uneven distribution of the refractive index in the lens optics and assembly errors, and it typically manifests as geometric distortion in areas farther from the optical center of the image, as illustrated in Figure 4. To balance correction accuracy and computational efficiency, this study employs a polynomial radial distortion model with three terms, including coefficients up to k3 (third-order radial distortion), to compensate for radial distortion. Compared with higher-order polynomial models, the proposed model reduces the computational load during distortion correction by approximately 40%, while ensuring that the residual radial distortion error remains within 0.1 pixels. This effectively prevents production delays that would otherwise exceed the vehicle-level requirement of 60 JPH due to prolonged computation times. The correction principle involves establishing a mapping relationship between the distorted pixel coordinates and the ideal pixel coordinates, which can be described by Equations (2) and (3):
$$x' = x\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) \tag{2}$$
$$y' = y\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) \tag{3}$$
Tangential distortion arises from misalignment or tilt between the lens elements, causing objects in the image to appear skewed or slanted, so that straight lines can appear tilted at certain angles. The tangential distortion model is:
$$x' = x + \left[2 p_1 x y + p_2\left(r^2 + 2 x^2\right)\right] \tag{4}$$
$$y' = y + \left[2 p_2 x y + p_1\left(r^2 + 2 y^2\right)\right] \tag{5}$$
Combining the radial model of Equations (2) and (3) with the tangential model of Equations (4) and (5) gives the overall distortion model for radial and tangential distortion:
$$x' = x + x\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right) + \left[2 p_1 x y + p_2\left(r^2 + 2 x^2\right)\right] \tag{6}$$
$$y' = y + y\left(k_1 r^2 + k_2 r^4 + k_3 r^6\right) + \left[2 p_2 x y + p_1\left(r^2 + 2 y^2\right)\right] \tag{7}$$
where $r^2 = x^2 + y^2$, $(x, y)$ are the uncorrected pixel coordinates, and $(x', y')$ are the corrected pixel coordinates; $k_1$ to $k_3$ are the radial distortion coefficients, and $p_1$ and $p_2$ are the tangential distortion coefficients.
To further enhance real-time detection efficiency, a precomputed mapping table method is introduced to optimize the calculations [34]. This method allows direct access to the corrected position of each pixel through table lookup during the real-time detection phase, eliminating the need for repetitive polynomial solving or other redundant calculations, thereby effectively reducing computational complexity. The correction results are shown in Figure 5.
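A minimal sketch of this lookup-table approach with OpenCV is given below; the intrinsic matrix and distortion coefficients are placeholders standing in for the calibrated values.

```python
# Sketch of the precomputed mapping-table correction: the model of Equations (6)-(7) is
# evaluated once per pixel by initUndistortRectifyMap, and real-time frames only pay the cost
# of a remap (table lookup). K and dist below are placeholder values.
import cv2
import numpy as np

K = np.array([[2400.0, 0.0, 1024.0],
              [0.0, 2400.0, 768.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0004, -0.0003, 0.0])   # k1, k2, p1, p2, k3 (placeholders)

h, w = 1536, 2048
# Built once at start-up: per-pixel lookup tables for the corrected coordinates.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_16SC2)

def undistort(frame):
    """Correct one frame by table lookup only; no polynomial evaluation at run time."""
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```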

2.4. Wheel Arch Calculation Model

Given the diversity of vehicle body parameters and the interference characteristics of the workshop environment in the vehicle end-of-line scenario, this study develops a wheel arch calculation model based on the center of the wheel hub as the intermediate reference. This reference point, inherent to the vehicle’s structural design, maintains a stable relative position with both the wheel arch and the ground within the same model. Consequently, there is no need for additional ground calibration marks, simplifying the detection process, avoiding environmental interference issues with the ground reference, and significantly enhancing the model’s anti-interference capability.
Traditional wheel arch measurement models typically rely on direct distance measurements from a ground reference point. These methods require the installation of high-precision ground calibration marks at the inspection station, often using metal markers for location. However, these markers are easily covered by dust or worn due to vehicle movement in the workshop, leading to increased datum positioning errors. Additionally, for vehicles with varying wheelbases and wheel diameters, the ground reference position must be repeatedly adjusted, which reduces adaptability and complicates batch testing.
In contrast, this study proposes a more adaptable wheel arch calculation model, as shown in Figure 6. The model divides the measurement process into three key elements: wheel arch, wheel hubs, and the ground. The lower edge vertex of the wheel arch, denoted as point M, serves as the endpoint for height measurement. The hub center, referred to as point C, is the central reference, while the ground, represented by point G, marks the starting point of the height measurement at the contact point between the hub and the ground. The formula is derived by establishing a geometric relationship between these three points.
$$H = H_1 + H_2 \tag{8}$$
In the formula, H represents the actual height from the wheel arch to the ground, H1 is the height from the wheel hub center to the lower edge of the wheel arch, and H2 is the distance from the wheel hub center to the ground.
Based on the monocular pinhole imaging model constructed in Section 2.1, a geometric ranging model for calculating the wheel arch height is derived as follows. First, the pixel coordinates of points M and C, (uM, vM) and (uC, vC), are obtained through image feature extraction. Next, using the intrinsic parameters (fx, fy) and (u0, v0) and the extrinsic parameters (R, t) calibrated in Section 2.2, these pixel coordinates are converted into three-dimensional coordinates in the world coordinate system, (XM, YM, ZM) and (XC, YC, ZC). Finally, because the Z-axis of the world coordinate system is aligned with the wheel arch measurement direction, the known vertical distance HCG from the wheel hub center to the ground can be used directly: during vehicle end-of-line production, with the vehicle static, unloaded, and at standard tire pressure, this distance is a fixed parameter for a given model.
To correct the inherent scale errors and unmodeled system biases in the monocular vision measurement model, this study introduces a comprehensive scaling correction factor k1 and an offset correction factor b. These corrections compensate for systematic errors in the pixel-to-world coordinate transformation caused by factors such as the simplified camera model, installation misalignment, and variations in working distance. The two factors are calibrated through high-precision experiments: a PLC-controlled linear guide drives a calibrated reference fixture with 1.0 mm precision across the system’s working distance range, images are captured at each position, and the corresponding pairs of actual displacement and pixel displacement are collected, with more than 100 valid samples accumulated. The dataset is then fitted linearly using the least squares method, yielding the optimal scaling correction factor k1 and offset correction factor b. This process effectively absorbs multiple system errors, including residual distortion and installation imperfections, into the two parameters.
The final corrected wheel arch calculation model is given by Equation (9).
$$H = k_1 Z_m + b \tag{9}$$
In this equation, Zm represents the raw measurement value based on the initial camera parameters. As shown in Figure 7, the empirical correction model significantly reduces the system’s theoretical calibration error from ≤1.389 mm to ≤0.058 mm through calibration experiments. During subsequent batch validation involving seven vehicle models—two sedans and five SUVs—the corrected wheel arch measurement error is consistently controlled within ±0.3 mm. This demonstrates that the proposed compensation mechanism using k1 and b effectively enhances the overall accuracy and robustness of the monocular vision system in real-world workshop environments.
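For illustration, the least-squares fit of k1 and b can be sketched as follows; the displacement pairs here are synthetic stand-ins for the more than 100 samples collected with the PLC-driven guide.

```python
# Minimal sketch of fitting k1 and b of Equation (9) by least squares; the data below are
# synthetic placeholders, not measured values.
import numpy as np

h_ref = np.linspace(700.0, 900.0, 120)                              # reference heights (mm)
rng = np.random.default_rng(0)
z_raw = (h_ref - 1.8) / 0.998 + rng.normal(0.0, 0.05, h_ref.size)   # simulated raw readings (mm)

# Least-squares fit of H = k1 * Zm + b.
k1, b = np.polyfit(z_raw, h_ref, deg=1)
residual = h_ref - (k1 * z_raw + b)
print(f"k1 = {k1:.5f}, b = {b:.3f} mm, max residual = {np.abs(residual).max():.3f} mm")
```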

3. System Design and Implementation

3.1. Hardware System Design

The hardware selection adheres to the principle of “adequate is optimal and cost is controllable,” prioritizing cost-effective components while ensuring that performance meets the accuracy and efficiency requirements for wheel arch inspection. The specific configuration is illustrated in Figure 8:

3.1.1. Core Perception Module

Industrial Camera: The Imaging Source DMK33GX183 CMOS 3-megapixel camera (The Imaging Source Asia Co., Ltd., Shanghai, China) is used, offering a resolution of 2048 × 1536 pixels, a pixel size of 3.45 μm × 3.45 μm, and a maximum frame rate of 60 fps. This camera costs only one-third of a high-end industrial camera with the same resolution, such as the Basler acA2500 (Basler AG, Ahrensburg, Germany; assembled in Suzhou, China), while fully meeting the requirements for capturing wheel arch details. At a working distance of 0.6–1.2 m, the pixel width of the wheel arch edges is ≥3 pixels, ensuring high accuracy in feature extraction. The 60 fps frame rate allows for fast image capture, preventing motion blur caused by slight vehicle movements.
Lighting System: The system employs the OPT-RL2416 blue bar LED light source (Opto Engineering China Co., Ltd., Shenzhen, China), paired with a diffuser and light shield. Blue light has a reflectivity of approximately 30% on metallic car paint, lower than the 60% reflectivity of white light, effectively reducing reflective interference. The 20 W adjustable power range (300–2000 lux) accommodates workshop lighting fluctuations between 500 and 1500 lux. The diffuser ensures lighting uniformity of ≥90%, maintaining a consistent gray-level distribution across the wheel arch area.

3.1.2. Fixing and Positioning Module

The camera bracket is constructed from 4040 aluminum profiles in an ‘L’ shape and is controlled by a PLC for rail adjustment. The adjustable range spans 800–1500 mm, fully covering the detection needs for both sedans and SUVs. The horizontal travel distance of 1500 mm enables precise alignment of the camera’s field of view with the center of the wheel arch during installation and calibration. To address workshop vibration interference, the offline adaptation design incorporates the following features: (1) The bracket’s base is rigidly connected to a 10 mm thick steel plate using M12 expansion bolts, securely fixed to the workshop floor, effectively reducing vibration transmission from conveyor operations (20–50 Hz) to below 10%; (2) A 5 mm thick nitrile rubber vibration-damping pad is added at the camera mounting point, achieving a vibration attenuation rate of ≥90%. Laser interferometer tests show that the repetitive positioning error of the camera is ≤0.1 mm, effectively preventing calibration drift due to vibration and ensuring long-term measurement accuracy. The detailed structure is illustrated in Figure 9:
Vehicle Centering Assistance: The existing roller side guide wheel device at the offline station is reused, with the guide wheels on both sides of the roller track adjusted to accommodate the wheel distance of different vehicle models. This works in conjunction with synchronized roller speed control to achieve preliminary vehicle positioning. To further enhance accuracy, an infrared beam sensor is installed at the end of the guide wheel to detect the edge position of the wheel in real time. The guide wheel is then fine-tuned by the PLC to within ±1 mm, ensuring that the horizontal deviation between the center of the wheel hub and the optical axis of the camera is ≤5 mm. This design not only ensures that the wheel arch area remains within 80% of the camera’s field of view, reducing the proportion of invalid detections from 8% to less than 0.5%, but also establishes a coordinate linkage between the centering reference and the ADAS calibration station during the vehicle off-line process. This allows for data connectivity without the need for additional calibration, ultimately improving the overall detection efficiency.
After batch testing, which involved continuous detection of 500 vehicles, the average horizontal deviation between the center of the wheel hub and the optical axis of the camera was found to be 2.3 mm, with a maximum deviation of 4.8 mm, all within the 5 mm threshold. The coverage rate of the wheel arch area within the camera’s field of view was 100%, with no instances of the wheel arch exceeding the field of view due to vehicle offset. This ensures the continuity of detection, as demonstrated in Figure 10.

3.1.3. Computation and Control Module

The calculation and control module serves as the data processing and coordination hub for the wheel arch detection system. An Advantech IPC-610L (Advantech Co., Ltd., Suzhou, China) open industrial computer was selected, configured with an Intel Core i5-10400 processor, 16 GB DDR4 memory, and a 512 GB SSD. This configuration meets the system’s critical requirements for “real-time performance, stability, and scalability.” In terms of computing power, the USB 3.0 interface supports single-frame image transmission at a resolution of 3 million pixels, with a transmission time ≤20 ms, ensuring no data loss at the camera’s 20 fps frame rate. The CPU can run core OpenCV algorithms in a single thread with a processing time ≤800 ms, well below the ≤5 s threshold for single detection cycles. The integrated UHD Graphics 630 GPU also accelerates YOLOv5s inference, enhancing wheel arch localization efficiency.
In terms of industrial stability, the industrial-grade motherboard can withstand workshop environments with dust concentrations of ≤10 mg/m3 and voltage fluctuations of 220 ± 10 V, supporting continuous operation for up to 8 h and adapting to a production pace of 60 jobs per hour (JPH). Regarding system scalability, the reserved RS485 interface allows direct communication with the production line MES system, enabling real-time uploading of detection results and closed-loop quality data management, thus supporting full-process automation from image acquisition to height output.

3.2. Wheel Arch Measurement Software System

The vehicle wheel arch testing system monitoring software developed in this study adopts a modular dark-themed user interface design (Figure 11), integrating functional modules for real-time four-wheel arch status monitoring, equipment control, parameter configuration, and performance analysis. The left panel of the interface displays the rim dimensions, wheel center positions, wheel arch, and measurement status of the four wheels, namely front-left (FL), rear-left (RL), rear-right (RR), and front-right (FR). The right-side control panel provides a complete operational workflow, including camera and illumination control, focus measurement, and servo adjustment.
To enhance efficiency under vehicle end-of-line conditions, a wheel arch measurement time statistics module is innovatively integrated, enabling real-time visualization of processing time at each measurement point. Through parallel task scheduling and data buffering strategies, the total processing time is stably controlled within 3200 ms, which is significantly lower than the vehicle end-of-line takt time requirement. The software is developed in Python 3.8, leveraging open-source libraries such as OpenCV and PyTorch (version 1.13.1), and demonstrates high real-time performance, good maintainability, and low implementation cost. Validation on an actual production line confirms that the system achieves stable measurement accuracy and reliable operational performance.

3.2.1. Wheel Hub Detection Based on YOLOv5s

To enhance the adaptability of the wheel arch detection system to mixed-model production during vehicle end-of-line operations and to enable accurate identification of wheel arch regions as well as reliable body parameter extraction across different vehicle models, a core perception module was constructed based on the framework described in Section 3.1.1. Under shooting angles ranging from 0° to 15° and illumination conditions between 500 and 1500 lux, a dataset comprising 1000 high-resolution images was collected from seven vehicle models, including both sedans and SUVs. The dataset was divided into a training set (80%, 800 images) and a testing set (20%, 200 images).
Wheel hub bounding boxes, vehicle model categories, and center coordinates were annotated using LabelImg and subsequently converted into the YOLO format. To improve generalization performance, data augmentation techniques—including horizontal flipping, rotation within −10° to +10°, and brightness perturbations of ±20%—were applied to the training set. A lightweight YOLOv5s network was selected as the detection model, and transfer learning was implemented using COCO pre-trained weights. The model was trained for 200 epochs with a batch size of 16, employing the Adam optimizer with an initial learning rate of 0.001, which was gradually reduced to 0.0001 using a cosine annealing schedule. The CIoU loss function was adopted, and early stopping was applied to mitigate overfitting.
After training, the classification accuracy reached 99.8%. Evaluation on the testing set demonstrated that the YOLOv5s model [35] achieved an mAP@0.5 of 0.995, with 100% vehicle model recognition accuracy and a wheel hub bounding box localization error of no more than 2 pixels. With TensorRT acceleration, the single-frame inference time was maintained below 28 ms. Under challenging conditions involving 500 lux illumination and a 15° shooting angle deviation, the detection confidence remained above 0.95. Furthermore, after suppressing surface reflections using a blue LED light source, the overall detection accuracy remained at 99.2%. The experimental results, as illustrated in Figure 12, confirm the high reliability and strong scene adaptability of the proposed approach, providing a robust foundation for subsequent wheel arch feature extraction and height measurement.
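A hedged sketch of the inference step is shown below, loading custom-trained weights through the public torch.hub interface of the ultralytics/yolov5 repository; the weight file name, image path, and confidence threshold are assumptions rather than values from this study.

```python
# Sketch of wheel-hub localization with a custom-trained YOLOv5s model via torch.hub;
# "wheel_hub_yolov5s.pt" and "frame_front_left.png" are hypothetical names.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="wheel_hub_yolov5s.pt")
model.conf = 0.5                              # confidence threshold (illustrative)

results = model("frame_front_left.png")       # single preprocessed frame
det = results.xyxy[0]                         # tensor rows: [x1, y1, x2, y2, conf, class]
if det.shape[0] > 0:
    x1, y1, x2, y2, conf, cls = det[0].tolist()
    hub_cx, hub_cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # coarse hub centre for the ROI
    print(f"hub box centre: ({hub_cx:.1f}, {hub_cy:.1f}) px, confidence {conf:.2f}")
```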

3.2.2. Image Preprocessing for Robust Wheel Arch Detection

To suppress interference caused by workshop dust, illumination fluctuations, and vehicle body reflections, and to enhance the robustness of wheel arch region feature recognition for subsequent wheel hub localization and wheel arch feature extraction, a four-stage image preprocessing pipeline—namely denoising, enhancement, cropping, and adaptation—was designed. The effects of each preprocessing stage on image quality, processing efficiency, and detection performance were quantitatively evaluated. All experiments were conducted using workshop images with a resolution of 2048 × 1536.
After applying multi-scale Gaussian filtering for noise suppression and a Retinex–CLAHE [36] fusion enhancement strategy [37], the image signal-to-noise ratio (SNR) increased from 28 dB to 42 dB, while the background gray-level contrast in the wheel arch region increased from 18 to above 30. Meanwhile, the effective data volume was reduced to 95% of the original size through region cropping and adaptive processing. As a result, the wheel arch detection accuracy improved from 89.2% to over 99.5%, and the false detection rate was reduced from 5.7% to below 0.3%. Furthermore, the proposed preprocessing scheme demonstrated compatibility with approximately 98% of workshop operating conditions, fully satisfying the robustness and reliability requirements of industrial inspection.
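The sketch below illustrates a simplified version of this pipeline (Gaussian smoothing, CLAHE on the luminance channel, and ROI cropping around the hub box); the Retinex branch and the exact fusion weights of the full Retinex–CLAHE scheme are omitted, and the margin value is an assumption.

```python
# Simplified denoise-enhance-crop sketch; the full Retinex-CLAHE fusion of the text is
# replaced here by CLAHE alone, and the crop margin is a placeholder.
import cv2

def preprocess(bgr, hub_box, margin=200):
    """Return an enhanced, cropped wheel-arch ROI; hub_box = (x1, y1, x2, y2) in pixels."""
    den = cv2.GaussianBlur(bgr, (5, 5), 0)                     # noise suppression
    lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))                    # local contrast enhancement
    enh = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    x1, y1, x2, y2 = map(int, hub_box)
    h, w = enh.shape[:2]
    # Crop a band around and above the hub so the wheel-arch lower edge stays in view.
    return enh[max(0, y1 - margin):min(h, y2 + margin // 2),
               max(0, x1 - margin // 2):min(w, x2 + margin // 2)]
```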

3.2.3. Wheel Hub Center Feature Point Extraction and Error Compensation

As a critical reference point for wheel arch measurement, the localization accuracy of the wheel hub center directly determines the precision of the final inspection results. Under vehicle end-of-line conditions, wheel hub center extraction is often affected by workshop illumination fluctuations, dust contamination, and surface reflections. Current mainstream approaches for wheel hub center detection include Canny edge detection, circle fitting, Hough transform-based methods, deep learning-based detection, and template matching techniques [38,39].
To evaluate the adaptability and accuracy of different methods under industrial conditions, comparative experiments were conducted using wheel diameter reference values measured by a coordinate measuring machine (CMM), as illustrated in Figure 13. The processing results obtained using threshold segmentation, the Maximum Stable Extremal Region (MSER) method, and the Canny edge detection method are summarized in Table 1. The experimental results indicate that the wheel hub center extracted using the MSER method exhibits higher positioning accuracy than those obtained by the other two approaches. Specifically, the MSER-based method achieves the smallest measurement deviation, with an error of only −0.391 mm relative to the reference value, while demonstrating superior robustness against illumination variations.
The MSER algorithm, known for its robustness under varying lighting conditions and its ability to handle noise interference, was chosen for wheel hub center feature point extraction. This algorithm has been extensively shown to perform well in environments with fluctuating illumination and dust contamination [40]. Additionally, the combination of MSER with morphological processing significantly enhances feature extraction by eliminating noise and improving the precision of the wheel hub center localization. Morphological operations, such as dilation and erosion, help clarify the boundaries of the wheel hub and improve the accuracy of center extraction [41].
The −0.391 mm error in the extracted wheel hub center propagates through the height measurement process. However, this error propagation is controlled through a combination of geometric calibration and error compensation models. Since the wheel arch measurement is based on the relative positions of the wheel hub center and the wheel arch, small errors in hub center localization have a limited effect on the final height measurement. The error is minimized by:
  • Geometric Calibration: Accurate calibration of the camera and system ensures that even small deviations in hub center position do not result in significant errors in the final measurements.
  • Error Compensation: A compensation algorithm is employed to correct small errors caused by wheel hub center localization. This ensures that the final wheel arch measurement remains within the required ±1.0 mm precision.
Therefore, the −0.391 mm error in the hub center positioning has a minimal impact on the final wheel arch measurement, maintaining the required accuracy and ensuring the reliability of the system.
The specific steps for hub area detection and center positioning are as follows: First, grayscale conversion transforms the color image (after histogram equalization in Section 3.2.2) into a grayscale image, with the conversion formula shown in Equation (10). Then, the “segmentation-optimization-fitting” process is followed:
  • Segmentation: Based on the watershed concept, the MSER algorithm, which utilizes affine invariance and grayscale robustness, is used to calculate regional stability according to Equation (11) to segment and obtain the initial hub region (Figure 14).
  • Optimization: A bitwise OR operation (cv2.bitwise_or()) is used to merge discrete hub regions. Then, cv2.floodFill() is used to fill holes within the region, eliminating interference from hub spokes, resulting in the complete hub region (Figure 15).
  • Fitting: Finally, the cv2.minEnclosingCircle() function is used to fit the minimum enclosing circle to the complete region, outputting the hub center coordinates and radius (Figure 16), with the final center positioning error ≤ 0.39 mm, meeting the baseline accuracy requirement.
$$\mathrm{Gray}(i,j) = \max\{R(i,j),\; G(i,j),\; B(i,j)\} \tag{10}$$
In the formula, Gray(i,j) is the grayscale value at pixel (i,j) after conversion, and R(i,j), G(i,j), B(i,j) are the red, green, and blue brightness values at pixel (i,j) in the original image, respectively.
$$q(i) = \frac{\left|R(i+\Delta) - R(i)\right|}{\left|R(i)\right|} \tag{11}$$
In the formula, R(i) represents a certain connected region at threshold i, Δ is a small increment of the grayscale threshold, q(i) is the rate of change in region R(i) when the threshold is i, and |R(i)| denotes the area of region R(i).
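The segmentation-optimization-fitting chain above can be sketched in OpenCV as follows; the MSER parameters and the morphological kernel size are illustrative assumptions rather than the tuned values used in this study.

```python
# Sketch of hub-centre extraction: channel-maximum grayscale (Eq. 10), MSER segmentation,
# morphological merging, and minimum-enclosing-circle fitting; parameters are placeholders.
import cv2
import numpy as np

def hub_center(bgr_roi):
    """Return ((cx, cy), radius) of the fitted hub circle, or (None, None) if nothing is found."""
    gray = bgr_roi.max(axis=2).astype(np.uint8)    # Equation (10): channel-wise maximum

    # Segmentation: MSER regions are stable under illumination changes.
    mser = cv2.MSER_create(delta=5, min_area=500)
    regions, _ = mser.detectRegions(gray)

    # Optimization: merge all stable regions into one mask and close spoke gaps.
    mask = np.zeros_like(gray)
    for pts in regions:
        cv2.fillPoly(mask, [pts.reshape(-1, 1, 2)], 255)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))

    # Fitting: minimum enclosing circle of the largest connected region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    hub = max(contours, key=cv2.contourArea)
    (cx, cy), radius = cv2.minEnclosingCircle(hub)
    return (cx, cy), radius
```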

3.2.4. Wheel Arch Edge Contour Extraction for Height Calculation

The extraction of wheel arch edge contours is critical for obtaining feature points used in height calculation. This process is implemented using the Canny edge detection algorithm combined with region filtering [42], as shown in Figure 17; a minimal code sketch follows the list below:
  • Edge Detection: The grayscale image obtained in the previous section is used as input. The Canny algorithm is applied with double thresholds of 60/180 and an aperture size of 3 to extract all edges in the image, resulting in an edge distribution map (Figure 17a).
  • Region Filtering: A wheel arch-specific Region of Interest (ROI) is defined based on the wheel hub center position to filter out irrelevant edges, such as those from car doors and bumpers. This leaves only the wheel arch edges (Figure 17b).
  • Contour Segmentation: The filtered wheel arch edges are segmented into line segments. Short noise lines, shorter than 50 pixels, are discarded, while the approximate vertical main edges of the wheel arch, with an orientation angle of 80–100°, are retained. This results in the wheel arch edge contour line segments (Figure 17c).
  • Result Output: The final wheel arch edges are marked with blue contour lines (Figure 17d), providing clear feature boundaries for subsequent height calculation.
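A minimal sketch of this edge extraction and filtering stage is given below; the Canny thresholds (60/180), the minimum segment length (50 px), and the 80–100° orientation band follow the text, while the Hough transform parameters are assumptions.

```python
# Sketch of wheel-arch edge extraction: Canny edges, ROI masking above the hub centre,
# and segment filtering by length and orientation; Hough parameters are placeholders.
import cv2
import numpy as np

def wheel_arch_edges(gray_roi, hub_cy):
    """Return the filtered edge map and the retained near-vertical segments above the hub centre."""
    edges = cv2.Canny(gray_roi, 60, 180, apertureSize=3)

    # Region filtering: keep only the band above the hub centre where the arch edge can lie.
    mask = np.zeros_like(edges)
    mask[: int(hub_cy), :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Contour segmentation: extract line segments, drop short noise lines (< 50 px),
    # keep segments whose orientation lies in the 80-100 degree band.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=50, maxLineGap=10)
    kept = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if 80.0 <= angle <= 100.0:
                kept.append((x1, y1, x2, y2))
    return edges, kept
```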

3.2.5. Wheel Arch Calculation and Measurement Algorithm

Calculation of the distance from the wheel hub center to the wheel arch edge is achieved through the superimposition of visual features for precise measurement. First, the wheel hub center point extracted in Section 3.2.3 (shown in Figure 16, denoted as center C (Xc, Yc)) is aligned at pixel level with the wheel arch contour shown in Figure 17c and merged to obtain Figure 18a; then, a vertical line is drawn through center C, and its intersection point G(Xg, Yg) with the wheel arch edge is obtained (Figure 18b). The vertical pixel distance between the two points is calculated as shown in Equation (12):
$$\Delta y = \left|y_g - y_c\right| \tag{12}$$
Finally, by using the pixel-to-physical size coefficient k, which is calibrated based on the wheel hub diameter, the actual distance from the center of the wheel hub to the edge of the wheel arch is calculated by substituting into Equation (13).
$$h_1 = \Delta y \times k \tag{13}$$
Since monocular vision alone cannot directly determine three-dimensional depth, the distance from the wheel hub center to the ground must be derived indirectly from known physical references. As shown in Figure 19, neglecting camera installation errors, let the optical axis center O coincide with the origin of the camera coordinate system at an initial height h0 above the ground; the actual vertical distance d from O to the wheel hub center b can then be calculated through the camera intrinsic matrix. Finally, from the geometric relationship in Equation (14):
$$h_2 = h_0 - d \tag{14}$$
The actual distance from the wheel hub center to the ground is then obtained. Finally, the actual height of the wheel arch is optimized based on error compensation and the correction coefficient Equation (9), completing the wheel arch detection.
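Putting the pieces together, the sketch below shows one way to compose Equations (8), (9), (12), (13), and (14) into the final height value; all numeric inputs are made-up placeholders, and the mm-per-pixel coefficient is derived from an assumed hub radius.

```python
# End-to-end sketch of the height computation of Sections 2.4 and 3.2.5; every numeric
# value in the example call is a placeholder, not a measured quantity.
def wheel_arch_height(yc, yg, hub_radius_px, hub_radius_mm, h0_mm, d_mm, k1, b):
    """Return the corrected wheel-arch height H in mm."""
    k = hub_radius_mm / hub_radius_px        # mm-per-pixel scale from the known hub size
    h1 = abs(yg - yc) * k                    # hub centre -> arch lower edge (Eqs. 12-13)
    h2 = h0_mm - d_mm                        # hub centre -> ground (Eq. 14)
    zm = h1 + h2                             # raw height (Eq. 8)
    return k1 * zm + b                       # empirical correction (Eq. 9)

# Illustrative call with made-up values:
H = wheel_arch_height(yc=760, yg=320, hub_radius_px=240, hub_radius_mm=241.3,
                      h0_mm=330.0, d_mm=15.0, k1=0.999, b=1.2)
print(f"wheel arch height: {H:.2f} mm")
```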

4. Experimental Results and Analysis

To verify the accuracy, vehicle adaptability, and environmental robustness of the wheel arch detection method in a full-vehicle offline scenario, experiments were conducted using a dual-scenario approach: ‘laboratory-simulated interference + on-site production line adaptation.’ The reliability of the system was evaluated by constructing a standardized platform, collecting multi-vehicle data, and quantifying error indicators.

4.1. On-Site Platform Construction

This study focuses on the wheel arch detection system designed in Section 3.1, enhanced with auxiliary equipment to improve experimental controllability and engineering applicability. Calibration fixtures were incorporated, and the platform setup, shown in Figure 20, follows three main principles: consistency with actual production line conditions, quantifiable parameter calibration, and alignment with workshop cost constraints. These principles ensure that the experimental results can be directly applied to the full-vehicle offline scenario. The overall platform structure is shown in Figure 21, with the camera positioned horizontally and the light source arranged at a 45° angle. The total hardware cost is ≤50,000 CNY, fully meeting the cost and space constraints of a complete vehicle offline workstation, while also enabling the simulation of various workshop lighting environments through adjustments to the light source power and camera exposure parameters.

4.2. Experimental Design and Data Collection

The experiment selected seven mainstream vehicle models as test subjects, including four SUVs with wheel arch heights ranging from 825 to 875 mm and three sedans with wheel arch heights ranging from 710 to 845 mm. For each model, 100 datasets were collected repeatedly. Environmental stability variables were controlled to compare detection performance across two different scenarios.

4.2.1. Laboratory Simulation Scenario

This experiment is conducted in a standardized enclosed site to eliminate the influence of external factors on measurement results. The ground flatness is controlled within ≤0.5 mm/m, calibrated using a laser level to avoid vehicle attitude deviations caused by ground inclination. The lighting environment simulates natural light changes in a workshop during dawn and dusk using adjustable LED light sources, with an intensity fluctuation range set between 500 and 1500 lux. Throughout the experiment, no dust interference occurs, as industrial dust removal fans are used to maintain site cleanliness, ensuring stable image acquisition quality. The experimental procedure follows a standardized process:
  • Vehicle Positioning and Calibration: Seven vehicle models (A to G) are tested. After positioning through the guide wheel device on the side of the roller, the camera is calibrated using a camera calibration board to ensure that the camera’s imaging center is vertically aligned with the wheel hub center. The alignment deviation is controlled to ≤±0.2 mm.
  • Lighting Adjustment: The light source power is dynamically adjusted according to real-time light intensity. It is set to 15 W at 500 lux to compensate for low light and reduced to 8 W at 1500 lux to suppress reflection. The camera parameters, such as exposure time (500 μs) and gain (1 dB), are fixed to avoid image quality differences caused by parameter fluctuations.
  • Image Clarity Selection: For each vehicle model, three frames of front wheel images are collected at a single time. Image clarity is evaluated as the variance of the Laplacian operator [42]; clear images with a variance > 300 are retained, while blurred frames caused by vibration are discarded (a minimal sharpness-check sketch is given after this list).
  • Reference True Value Measurement: The wheel arch reference values are measured using the CMM (Coordinate Measuring Machine), with an accuracy of ±0.02 mm. The reference values for the models are as follows: Model A—765 mm, Model B—795 mm, Model C—845 mm, Model D—875 mm, Model E—825 mm, Model F—710 mm, Model G—805 mm.
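The sharpness check referenced above can be sketched as follows, using the variance-of-Laplacian criterion with the > 300 threshold from the text; the frame file names are illustrative.

```python
# Minimal variance-of-Laplacian sharpness gate; file names are hypothetical.
import cv2

def is_sharp(gray, threshold=300.0):
    """Keep a frame only if the variance of its Laplacian exceeds the threshold."""
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE)
          for p in ("frame_0.png", "frame_1.png", "frame_2.png")]
sharp = [f for f in frames if f is not None and is_sharp(f)]
```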
100 sets of raw measurement data were randomly sampled for each model, and 10 sets of measurements were selected for representative analysis, as shown in Table 2. From the statistical results, it is evident that the measured values of each model fluctuate slightly around their corresponding reference values, with no obvious systematic deviation. For example, the average of 10 measurements for Model A is 764.833 mm, which shows a very small deviation from the reference value of 765 mm. Similarly, the average of 10 measurements for Model G is 804.881 mm, which is close to the reference value of 805 mm. The concentration and small fluctuation of the measured values suggest that the proposed detection method can maintain stable measurement output across different vehicle models, thus preliminarily validating the method’s basic effectiveness.

4.2.2. On-Site Production Line Scenario

To further verify the applicability of the wheel arch detection method in an actual production line environment, the experiment was conducted in the off-line inspection area of a car manufacturing facility. The field environment was carefully set up to match the actual configuration of the production line workstation, with the following setup:
  • Lighting: The lighting system uses a combination of “workshop ceiling lights + LED fill lights,” with the intensity controlled at a constant 1000 ± 50 lux, ensuring no interference from natural light. This setup eliminates light fluctuations, which could otherwise affect image feature extraction.
  • Camera Setup: The camera is mounted horizontally at a preset angle relative to the production line roller track, ensuring that the wheel arch area appears in a standard projection form in the image.
  • Ground Setup: The ground is the standard roller track of the production line, calibrated with a laser level to ensure a flatness of ≤0.3 mm/m. This eliminates any vehicle attitude deviations caused by ground tilt, ensuring the consistency of the measurement reference.
The only environmental variable considered in this experiment was the removal of light fluctuation interference, leaving stable lighting conditions from the production line to isolate the impact of environmental changes on the results. This setup focused on verifying the stable output capability of the detection method in real-world industrial scenarios.
100 sets of raw measurement data were randomly sampled for each model, with 10 sets selected as representative samples, as shown in Table 3. The analysis results indicate that the average field measurement value for each model is closer to the reference value obtained using the CMM. For example, the average value of 10 measurements for Model F is 709.9809 mm, with a deviation of only 0.0191 mm. Similarly, the average value of 10 measurements for Model D is 875.0176 mm, with a deviation of only 0.0176 mm—both significantly smaller than the measurement deviations observed in the laboratory scenario.
Furthermore, the standard deviation of the measured values in the field scenario was ≤0.08 mm, which is notably lower than the standard deviation of ≤ 0.3 mm observed in the laboratory scenario. This indicates that the stable industrial environment significantly reduces measurement errors and improves data stability.

4.3. Error Quantification Analysis

The data are shown in Table 4, and the results indicate that the error in the laboratory scenario is primarily due to variations in image grayscale distribution caused by lighting fluctuations ranging from 500 to 1500 lux. Although these fluctuations are mitigated by the preprocessing steps described in Section 3.2.2, a residual fluctuation of ≤±1 mm remains. However, this level of fluctuation still meets the preliminary detection requirements for the vehicle end-of-line inspection phase.
The minimum Mean Absolute Error (MAE) is just 0.0004 mm (for Model F), and the maximum error is ≤±0.545 mm, which fully satisfies the ADAS calibration accuracy requirement of ±1.0 mm. The standard deviation (SD) is approximately 0.07 mm, demonstrating excellent data stability. The corresponding error distribution is shown in Figure 22. In the laboratory scenario, errors follow a wide normal distribution, whereas in the field scenario, errors are tightly concentrated within the ±0.2 mm range. This confirms that environmental stability plays a critical role in improving detection accuracy.
To assess the performance of the method in the production scenario of vehicles rolling off the production line and in complex environments, further verification of its adaptability and robustness was carried out:
Model Adaptation: For seven vehicle models, including four SUVs and three sedans, the error trends in both scenarios were highly consistent. The average Mean Absolute Error (MAE) for sedans (Models A, B, C, F) in the laboratory scenario was 0.321 mm, while the average MAE for SUVs (Models D, E, G) was 0.308 mm. The accuracy difference between the two categories of models was only 0.013 mm, showing no significant deviation. In the field scenario, the average MAE for sedans and SUVs further converged to 0.022 mm and 0.025 mm, respectively. Notably, there was no need to adjust algorithm parameters for individual models throughout the process, and no accuracy fluctuations occurred due to structural differences between models. This strongly confirms the method’s adaptability to mixed-line production.
Environmental Robustness: By comparing the core accuracy indicators between the two scenarios, the maximum deviation in the average measurement value of the same model was only 0.124 mm (e.g., Model B: laboratory 794.971 mm vs. field 794.847 mm), with no systematic deviation (fluctuation ≤ 0.005 mm). These results demonstrate that the method exhibits strong resistance to typical workshop interferences and can be stably adapted to real-world production environments.

4.4. Conclusions of Experimental Results

The core performance indicators of this wheel arch detection method fully meet industry standards. In real production scenarios, the mean absolute error (MAE) of wheel arch detection is ≤0.046 mm, with the maximum error ≤±0.545 mm, fully satisfying the precision requirements for vehicle offline ADAS calibration. The method demonstrates excellent vehicle model adaptability; for seven models, including four SUVs and three sedans, the detection error trends are highly consistent. The algorithm operates stably without requiring parameter adjustments, making it fully adaptable to mixed-line offline vehicle production.
The method also shows significant environmental robustness, maintaining stable detection performance even under typical workshop interferences, such as fluctuating lighting and dust. The average measurement deviation for the same model in both laboratory and field scenarios is ≤0.124 mm, with no systematic loss of accuracy.
Additionally, the method offers a strong cost advantage, with the total experimental platform hardware cost ≤50,000 CNY—much lower than the investment required for traditional laser profilometers—demonstrating economic feasibility for large-scale deployment.
In summary, this method comprehensively meets the core requirements of the vehicle offline process—“low cost, high precision, strong adaptability”—and serves as a reliable pre-detection solution for ADAS calibration.

5. Conclusions and Future Outlook

In response to the core requirements of “low cost, non-contact, high efficiency, and high precision” for wheel arch detection in the offline phase of vehicle production, this paper develops and validates an inspection system based on monocular machine vision. The total hardware cost is only 20,000 CNY, which is 90.3% lower than that of a laser profiler and 92.6% lower than a coordinate measuring instrument. Additionally, the 5-year lifecycle cost is 30.4% of the manual tape measure approach. Testing across seven vehicle models (3 cars and 4 SUVs) resulted in an MAE ≤ 0.25 mm, with a maximum error ≤ 0.48 mm, meeting the ADAS calibration requirement of ±0.5 mm. The system detects all four wheel arches of a single vehicle in ≤3.2 s, meeting the industry standard cycle time of 60 jobs per hour (JPH), and shows an MAE fluctuation of ≤0.08 mm in environments with 500–1500 lux lighting and ≤10 mg/m3 dust. The non-contact design also prevents body paint damage. Relying on open-source technology, the system can be deployed within 3 days and supports expansion to over 20 vehicle models.
However, the system has some limitations: it currently supports only stationary vehicle detection (with error increasing to ±0.8 mm when the vehicle is dynamically conveyed at 0.5–1 m/s). Additionally, feature extraction stability is reduced under extreme conditions (strong light > 2000 lux, high humidity RH > 85%). Future improvements will include the Lucas-Kanade optical flow method for dynamic detection at speeds up to 1 m/s (error ≤ 0.5 mm), integration of a low-cost TOF laser module to extend environmental adaptability (50–3000 lux illumination, 30–90% RH), and optimization of the YOLOv5s model via transfer learning to reduce the adaptation time for new models from 2 h to 10 min, further enhancing the system’s adaptability in mixed-line production.
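For the planned dynamic-detection extension, tracking the previously extracted arch edge points across consecutive frames with pyramidal Lucas-Kanade optical flow could take roughly the following form; the window size, pyramid depth, and termination criteria shown here are illustrative assumptions rather than settings reported in this paper.

import cv2
import numpy as np

# Illustrative LK parameters; not values reported in this paper.
LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_arch_points(prev_gray, next_gray, arch_points_xy):
    # Propagate wheel arch edge points to the next frame with pyramidal
    # Lucas-Kanade optical flow, keeping only successfully tracked points.
    p0 = np.asarray(arch_points_xy, dtype=np.float32).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None, **LK_PARAMS)
    return p1[status.ravel() == 1].reshape(-1, 2)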

Author Contributions

Conceptualization, M.L.; methodology, M.L.; software, Y.D. and Q.Z.; validation, Z.D. and Y.L.; formal analysis, M.L.; investigation, M.L.; resources, M.L.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L.; visualization, M.L.; supervision, M.L.; project administration, M.L.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62376059).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Pinhole Imaging Model Diagram.
Figure 2. Camera Calibration Diagram.
Figure 3. Calibration Error Histogram.
Figure 4. Radial Distortion Diagram.
Figure 5. Comparison of Wheel Arch Corrections.
Figure 6. Wheel Arch Calculation Model Schematic Diagram.
Figure 7. Positioning Test Calibration Result.
Figure 8. Overall Structure Diagram of Laboratory Hardware.
Figure 9. Camera Mount.
Figure 10. Positioning Detection Results.
Figure 11. Wheel Arch Detection Software Interface.
Figure 12. Deep Learning Training Results.
Figure 13. Diagram of the Hub Diameter Reference Value Measuring Device.
Figure 14. Wheel Hub Area Extraction Diagram.
Figure 15. Hole Filling Processing Diagram.
Figure 16. Extraction of Circle Center Result.
Figure 17. Edge Contour Extraction Process of Wheel Arch. (a) Contour extraction via threshold segmentation. (b) Wheel arch contour filtering. (c) Wheel arch contour extraction. (d) Display of wheel arch contour on the original image.
Figure 18. Wheel Arch Edge Computing Processing Diagram. (a) Schematic Diagram of Feature Extraction. (b) Schematic Diagram of Synthetic Region.
Figure 19. Wheel Hub Center to Ground Height Measurement Model.
Figure 20. Calibrating Bracket Device.
Figure 21. Structure Diagram of On-site Platform Device.
Figure 22. Error Distribution of Two Experimental Measurement Data.
Table 1. Hub Measurement Comparison Parameters.

Method             Row (px)    Column (px)    Radius (px)    Measured Value/mm    Reference Value/mm
Threshold method   1151.28     620.446        506.697        498.705              498
MSER method        1149.01     621.921        505.586        497.609              498
Canny method       1149.39     622.056        506.498        498.508              498
Table 2. Experimental Data of Front Wheel Measurements for 7 Vehicle Types at the Test Site.

No.       A/mm       B/mm       C/mm       D/mm       E/mm       F/mm       G/mm
1         764.08     795.466    845.798    874.125    825.747    709.841    805.462
2         764.477    794.301    844.373    875.42     825.344    710.036    804.428
3         764.113    795.502    845.274    874.416    824.896    709.021    804.789
4         765.858    794.832    845.692    875.508    824.867    710.483    805.323
5         765.094    794.606    844.887    874.594    825.518    710.397    804.887
6         764.731    795.589    844.338    874.789    825.192    709.71     804.128
7         764.502    794.318    845.289    874.213    825.409    709.048    804.568
8         765.694    795.499    844.449    875.567    824.697    709.589    804.267
9         765.338    794.253    844.651    874.538    824.632    710.86     805.326
10        764.439    795.348    845.872    875.656    825.801    710.688    805.629
Average   764.833    794.971    845.062    874.883    825.210    709.967    804.881
Table 3. Field Site Experimental Data of Front Wheel Measurements for 7 Types of Vehicles.

No.       A/mm        B/mm        C/mm        D/mm        E/mm        F/mm        G/mm
1         764.851     794.794     844.884     875.005     824.836     709.988     804.819
2         764.868     794.891     844.905     875.148     825.174     709.995     804.805
3         764.840     794.841     844.802     874.973     824.946     709.989     804.837
4         764.903     794.788     845.103     874.982     825.036     709.977     804.798
5         764.842     795.010     845.061     874.849     825.099     709.975     804.808
6         764.783     794.992     844.999     874.913     824.985     709.981     804.809
7         764.787     794.455     844.997     875.029     824.878     709.980     804.832
8         764.946     794.949     844.940     875.164     825.133     709.987     804.827
9         764.766     794.936     844.988     875.051     825.122     709.976     804.820
10        764.886     794.817     844.869     875.062     824.863     709.961     804.829
Average   764.8472    794.8473    844.9548    875.0176    825.0072    709.9809    804.8184
Table 4. Error Analysis Parameters.

         Laboratory Scene                           On-Site Scenario
Model    MAE (mm)   Max Error (mm)   SD (mm)        MAE (mm)   Max Error (mm)   SD (mm)
A        0.388      ±0.92            0.612          0.026      ±0.234           0.071
B        0.288      ±0.747           0.589          0.046      ±0.548           0.082
C        0.33       ±0.872           0.595          0.01       ±0.198           0.069
D        0.334      ±0.875           0.603          0.009      ±0.164           0.067
E        0.205      ±0.801           0.578          0.014      ±0.174           0.073
F        0.375      ±0.979           0.621          0.0004     ±0.039           0.065
G        0.269      ±0.872           0.591          0.033      ±0.202           0.07
