A Flexible Wheel Alignment Measurement Method via APCS-SwinUnet and Point Cloud Registration
Abstract
1. Introduction
1.1. Challenges of 3D Vision-Based Methods
- (1)
- Complicated calibration and limited flexibility: Most passive and active 3D measurement systems require a complex and rigorous calibration procedure to reconstruct the wheel shape. Once calibrated, the cameras and light sources must remain fixed, which limits the system's flexibility and application range. Moreover, the reflective surface of the wheel hub provides little texture or feature information, which degrades stereo matching accuracy and ultimately the accuracy of 3D reconstruction.
- (2)
- Additional target board and potential damage: Commercial scanner-based methods often require mounting a clamped target board with a special reflective film onto the wheel. This process is time-consuming, may cause secondary damage to the wheel, and incurs additional costs due to the consumable film, thereby reducing overall efficiency.
- (3)
- Inefficient and noise-sensitive full-cloud registration: Directly using the entire 3D point cloud acquired by the sensor to estimate wheel angles not only increases computational cost and reduces measurement efficiency, but also introduces interference from background, noise, and points from the vehicle body during point-cloud registration. These irrelevant points can significantly degrade registration accuracy. Therefore, it is more effective to perform 3D registration and angle estimation using only a stable, wheel-related subset of the point cloud.
- (4)
- Challenges in precise wheel segmentation: The reconstructed point cloud includes not only the wheel but also background elements, making accurate region-of-interest (ROI) extraction essential. Existing methods often rely on traditional image processing algorithms (e.g., Canny edge detection and Hough transforms), which are sensitive to background clutter and environmental changes. They also require manual parameter tuning and lack the precision needed for accurate wheel boundary and detail extraction (a minimal sketch of such a pipeline follows this list).
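For concreteness, the sketch below shows the kind of Canny/Hough ROI pipeline criticized in (4), using OpenCV. Every threshold and radius bound here is a hypothetical value chosen for illustration; in practice each would need re-tuning whenever the background, illumination, or camera distance changes, which is exactly the fragility noted above.

```python
import cv2
import numpy as np

def detect_wheel_roi(image_bgr):
    """Return an (x, y, w, h) ROI around the strongest circle, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                  # suppress specular noise
    # HoughCircles runs Canny internally; param1 is its upper edge threshold.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
        param1=150, param2=60,                      # accumulator threshold
        minRadius=80, maxRadius=400)                # assumed rim size in pixels
    if circles is None:
        return None                                 # fails under clutter or low contrast
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest circle first
    return (x - r, y - r, 2 * r, 2 * r)
```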
1.2. Outline of Our Work
- (1)
- As a relatively stable region of the wheel, the wheel rim provides reliable geometric cues, so using only the rim point cloud to estimate wheel angles helps improve both measurement efficiency and angle accuracy. Building on this idea, the proposed method reformulates the wheel alignment task as a pipeline of wheel rim mask extraction, corresponding point cloud registration, and wheel angle calculation. Compared with existing approaches, it offers greater flexibility, lower cost, and higher efficiency, as it removes the need for complex calibration procedures, target boards, and additional auxiliary equipment or materials.
- (2)
- Wheel rim extraction is critical for accurate angle estimation. To enhance the precision of wheel rim segmentation, we propose APCS-SwinUnet, a task-driven adaptation of Swin-Unet for wheel rim extraction. Atrous spatial pyramid pooling is embedded into the encoder to capture multi-scale contextual information for wheel rims of different sizes and viewpoints, while a channel-spatial attention mechanism in the decoder selectively enhances rim features and suppresses background clutter (both modules are sketched after this list). This design jointly improves the representation of global wheel contours and fine-grained rim structures, both essential for reliable angle estimation. Compared with traditional image processing pipelines and baseline deep networks, APCS-SwinUnet achieves higher accuracy and robustness in wheel rim segmentation.
- (3)
- The segmented mask is used to isolate the wheel rim point cloud, effectively filtering out irrelevant background data. The iterative closest point algorithm is then employed to register the initial and target wheel rim point clouds. After registration, the corresponding rotation and translation matrices are obtained, and the rotation matrix is subsequently used to compute the wheel’s toe and camber angles (a registration sketch follows this list).
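To make the segmentation additions in (2) concrete, here is a minimal PyTorch sketch of the two modules named above: an atrous spatial pyramid pooling block and a CBAM-style channel-spatial attention fusion. The dilation rates, channel widths, and reduction ratio are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions capture context at several receptive fields."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):  # assumed rates
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse branches

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class ChannelSpatialAttention(nn.Module):
    """Reweight channels, then highlight rim pixels with a spatial map."""
    def __init__(self, ch, reduction=8):  # assumed reduction ratio
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                          # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(pooled)                  # spatial attention
```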
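A corresponding sketch of the registration and angle step in (3), assuming an organized H×W×3 point cloud aligned with the segmentation mask and Open3D's point-to-point ICP. The correspondence threshold and the axis convention used to read toe and camber from the rotation matrix (toe as yaw about the vertical z-axis, camber as roll about the longitudinal x-axis) are our assumptions, not necessarily the paper's frame.

```python
import numpy as np
import open3d as o3d

def rim_point_cloud(cloud_hw3, rim_mask):
    """Isolate rim points from an organized HxWx3 cloud with the 2D mask."""
    pts = cloud_hw3[rim_mask.astype(bool)]          # keep masked pixels only
    pts = pts[np.isfinite(pts).all(axis=1)]         # drop invalid depth points
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    return pcd

def toe_camber(source_pcd, target_pcd, max_corr_dist=5.0):  # threshold in cloud units
    """Register the initial rim cloud to the target one and read off angles."""
    result = o3d.pipelines.registration.registration_icp(
        source_pcd, target_pcd, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    R = result.transformation[:3, :3]               # rotation part of the pose
    # Z-Y-X Euler decomposition: yaw about z -> toe, roll about x -> camber
    toe = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    camber = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return toe, camber
```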
2. Related Work
2.1. Wheel Alignment Sensing and Measurement Methods
2.1.1. Inertial-Based Wheel Alignment Methods
2.1.2. Vision-Based Wheel Alignment Methods
- 2D Vision-Based Measurement Methods
- 3D Vision-Based Measurement Methods
2.2. Deep Learning-Based Thin Structure Segmentation Methods
3. Framework of Proposed Solution
4. Wheel Rim Segmentation Based on APCS-SwinUnet
4.1. Motivation for Using Segmentation Network
4.2. Network Architecture of APCS-SwinUnet
4.2.1. Atrous Spatial Pyramid Pooling
4.2.2. Attention Fusion Module
4.2.3. Hybrid Loss Function
5. Toe and Camber Angles Calculation Based on Iterative Closest Point
5.1. Point Cloud Extraction of Wheel Rim
5.2. Point Cloud Registration of Wheel Rim Based on Iterative Closest Point
5.3. Toe and Camber Angles Calculation
6. Experimental Setup and Configuration
6.1. Measurement System
6.2. Three-Dimensional Scanner Description
6.3. Description of Clinometer
6.4. Server Configuration
7. Experimental Results and Analysis
7.1. Wheel Segmentation Experiments
7.1.1. Datasets for Training and Testing
7.1.2. Parameter Settings of APCS-SwinUnet
7.1.3. Evaluation Criteria for Segmentation Network
7.1.4. Comparisons with Different Segmentation Networks
- (1)
- Quantitative Analysis:
- (2)
- Qualitative Analysis:
7.1.5. Comparison Results Under Inconsistent Illuminations
7.1.6. Feature Visualization of Encoder and Decoder
7.1.7. Comparison Results on the Public Wheel Dataset
7.2. Point Cloud Extraction Result of Wheel Rim
7.3. Wheel Angle Measurement and Evaluation
7.3.1. Point Cloud Registration Result
7.3.2. Measurement Results of Toe and Camber Angles
7.4. Extended Experiments
7.4.1. Angle Measurement Results of the Raw Point Clouds
7.4.2. Impact of Pixel Shifts on Angle Measurement
7.4.3. Impact of Segmentation Methods on Angle Measurement
7.4.4. Runtime and Computational Efficiency
7.4.5. Repeatability Experiments at Different Distances and Illuminations
8. Conclusions and Future Development
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Methods | Dice (%) | HD95 |
|---|---|---|
| FCN | 84.12 | 8.62 |
| DeepLabV3 | 86.67 | 8.18 |
| OcNet | 88.11 | 8.06 |
| U-Net | 85.76 | 11.90 |
| Att-Unet | 88.65 | 9.65 |
| UNet++ | 89.10 | 3.00 |
| U-Net 3+ | 89.49 | 3.43 |
| TransUnet | 89.85 | 2.56 |
| SwinUnet | 90.23 | 2.47 |
| APCS-SwinUnet | 90.66 | 2.11 |
| Methods | Dice (%) | HD95 |
|---|---|---|
| FCN | 84.35 | 16.95 |
| DeepLabV3 | 86.57 | 10.51 |
| OcNet | 87.74 | 17.48 |
| U-Net | 84.10 | 21.48 |
| Att-Unet | 86.97 | 31.55 |
| UNet++ | 86.89 | 25.86 |
| U-Net 3+ | 88.60 | 12.46 |
| TransUnet | 90.19 | 2.80 |
| SwinUnet | 90.16 | 2.71 |
| APCS-SwinUnet | 90.46 | 2.51 |
| Methods | Dice (%) | HD95 |
|---|---|---|
| FCN | 70.38 | 28.79 |
| DeepLabV3 | 72.47 | 4.37 |
| OcNet | 72.24 | 4.58 |
| U-Net | 72.87 | 16.68 |
| Att-Unet | 74.31 | 6.13 |
| UNet++ | 74.49 | 5.53 |
| U-Net 3+ | 75.50 | 9.51 |
| TransUnet | 75.35 | 4.00 |
| SwinUnet | 75.27 | 9.92 |
| APCS-SwinUnet | 75.88 | 3.84 |
| α (rad) | β (rad) | γ (rad) | α (°) | β (°) | γ (°) |
|---|---|---|---|---|---|
| −0.00020 | 0.04952 | −0.00340 | −0.01146 | 2.83729 | −0.19481 |
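As a consistency check on the table above, the degree columns are simply the radian entries scaled by $180/\pi$:

$$\theta_{(^\circ)} = \theta_{(\mathrm{rad})} \cdot \frac{180}{\pi}, \qquad 0.04952\ \mathrm{rad} \cdot \frac{180}{\pi} \approx 2.83729^{\circ}.$$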
| Noise Level | Real (°) | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|
| 0.01 | 2.8500 | 2.8373 | −0.0127 | 0.45 |
| 0.02 | 2.8500 | 2.8341 | −0.0159 | 0.56 |
| 0.03 | 2.8500 | 2.8333 | −0.0167 | 0.59 |
| 0.04 | 2.8500 | 2.8334 | −0.0166 | 0.58 |
| 0.05 | 2.8500 | 2.8339 | −0.0161 | 0.56 |
| 0.06 | 2.8500 | 2.8348 | −0.0152 | 0.53 |
| 0.07 | 2.8500 | 2.8340 | −0.0160 | 0.56 |
| 0.08 | 2.8500 | 2.8333 | −0.0167 | 0.59 |
| 0.09 | 2.8500 | 2.8339 | −0.0161 | 0.56 |
| 0.10 | 2.8500 | 2.8336 | −0.0164 | 0.58 |
| No | d₁ (mm) | d₂ (mm) | Real (°) | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|---|---|
| 1 | 687 | 743 | −1.450 | −1.457 | −0.007 | 0.48 |
| 2 | 677 | 754 | 2.850 | 2.837 | −0.013 | 0.46 |
| 3 | 691 | 733 | −1.650 | −1.755 | −0.105 | 6.36 |
| No | d₁ (mm) | d₂ (mm) | Real (°) | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|---|---|
| 1 | 751 | 802 | −0.750 | −0.807 | −0.057 | 7.60 |
| 2 | 778 | 821 | −0.650 | −0.685 | −0.035 | 5.38 |
| 3 | 750 | 808 | 3.200 | 3.358 | 0.158 | 4.94 |
| Sampling Rate | Time (s) | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|
| 10% | 260.105 | 1.766 | −1.084 | 38.04 |
| 20% | 1006.079 | 1.747 | −1.103 | 38.70 |
| 30% | 2461.575 | 1.732 | −1.118 | 39.23 |
| 40% | 5875.771 | 1.728 | −1.122 | 39.37 |
| 50% | 8530.951 | 1.728 | −1.122 | 39.37 |
| 60% | 10,223.756 | 1.721 | −1.129 | 39.61 |
| 70% | 16,907.151 | 1.720 | −1.130 | 39.65 |
| 80% | 24,226.577 | 1.723 | −1.127 | 39.54 |
| Shift (px) | Meas (°) | Error (°) | Rel. Error (%) | Shift (px) | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|---|---|---|
| 0 | −1.457 | 0.011 | 0.48 | 0 | −1.457 | 0.011 | 0.48 |
| −1 | −1.439 | 0.011 | 0.76 | 1 | −1.469 | −0.019 | 1.31 |
| −2 | −1.424 | 0.026 | 1.79 | 2 | −1.481 | −0.031 | 2.14 |
| −3 | −1.416 | 0.034 | 2.34 | 3 | −1.498 | −0.048 | 3.31 |
| −4 | −1.405 | 0.045 | 3.10 | 4 | −1.510 | −0.060 | 4.14 |
| −5 | −1.394 | 0.056 | 3.86 | 5 | −1.528 | −0.078 | 5.38 |
| −6 | −1.357 | 0.093 | 6.41 | 6 | −1.543 | −0.093 | 6.41 |
| −7 | −1.321 | 0.129 | 8.90 | 7 | −1.560 | −0.110 | 7.59 |
| −8 | −1.291 | 0.159 | 10.97 | 8 | −1.594 | −0.144 | 9.93 |
| −9 | −1.257 | 0.193 | 13.31 | 9 | −1.612 | −0.162 | 11.17 |
| −10 | −1.202 | 0.248 | 17.10 | 10 | −1.666 | −0.216 | 14.90 |
| −15 | −0.840 | 0.610 | 42.07 | 15 | −2.019 | −0.569 | 39.24 |
| −20 | −0.480 | 0.970 | 66.90 | 20 | −2.431 | −0.981 | 67.66 |
| −30 | 0.207 | 1.657 | 114.28 | 30 | −3.101 | −1.651 | 113.86 |
| −40 | 0.963 | 2.413 | 166.41 | 40 | −3.833 | −2.383 | 164.34 |
| −50 | 1.850 | 3.300 | 227.59 | 50 | −4.892 | −3.442 | 237.38 |
| Methods | Dice (%) | HD95 | Meas (°) | Error (°) | Rel. Error (%) |
|---|---|---|---|---|---|
| FCN | 83.29 | 2.24 | 3.425 | 0.225 | 7.03 |
| DeepLabV3 | 85.12 | 3.00 | 3.530 | 0.330 | 10.31 |
| OcNet | 86.36 | 2.24 | 3.515 | 0.315 | 9.84 |
| U-Net | 87.94 | 4.47 | 3.769 | 0.569 | 17.78 |
| Att-Unet | 89.67 | 3.61 | 3.646 | 0.446 | 13.94 |
| UNet++ | 89.13 | 4.00 | 2.741 | −0.459 | 14.34 |
| U-Net 3+ | 89.27 | 2.00 | 2.980 | −0.220 | 6.88 |
| TransUnet | 89.72 | 2.83 | 2.874 | −0.326 | 10.19 |
| SwinUnet | 90.09 | 2.24 | 3.404 | 0.204 | 6.38 |
| APCS-SwinUnet | 90.55 | 2.00 | 3.358 | 0.158 | 4.94 |
Measurement conditions: Toe-In (12:45 p.m.~12:59 p.m., d₁ = 698 mm, d₂ = 744 mm); Toe-Out (11:39 a.m.~11:57 a.m., d₁ = 704 mm, d₂ = 742 mm); Negative Camber (18:01~18:19, d₁ = 769 mm, d₂ = 831 mm); Positive Camber (19:08~19:25, d₁ = 698 mm, d₂ = 745 mm).

| No | Toe-In Real (°) | Toe-In Meas (°) | Toe-In Error (°) | Toe-Out Real (°) | Toe-Out Meas (°) | Toe-Out Error (°) | Neg. Camber Real (°) | Neg. Camber Meas (°) | Neg. Camber Error (°) | Pos. Camber Real (°) | Pos. Camber Meas (°) | Pos. Camber Error (°) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | −0.900 | −0.902 | −0.002 | 1.400 | 1.307 | −0.093 | −0.900 | −1.014 | −0.114 | 1.250 | 1.323 | 0.073 |
| 2 | −0.900 | −0.946 | −0.046 | 1.400 | 1.292 | −0.108 | −0.900 | −0.932 | −0.032 | 1.250 | 1.260 | 0.010 |
| 3 | −0.900 | −0.933 | −0.033 | 1.400 | 1.227 | −0.173 | −0.900 | −0.954 | −0.054 | 1.250 | 1.224 | −0.026 |
| 4 | −0.900 | −0.877 | 0.023 | 1.400 | 1.283 | −0.117 | −0.900 | −1.000 | −0.100 | 1.250 | 1.228 | −0.022 |
| 5 | −0.900 | −0.912 | −0.012 | 1.400 | 1.281 | −0.119 | −0.900 | −0.997 | −0.097 | 1.250 | 1.315 | 0.065 |
| 6 | −0.900 | −0.873 | 0.027 | 1.400 | 1.348 | −0.052 | −0.900 | −0.935 | −0.035 | 1.250 | 1.401 | 0.151 |
| 7 | −0.900 | −0.904 | −0.004 | 1.400 | 1.311 | −0.089 | −0.900 | −0.887 | 0.013 | 1.250 | 1.325 | 0.075 |
| 8 | −0.900 | −0.873 | 0.027 | 1.400 | 1.259 | −0.141 | −0.900 | −0.888 | 0.012 | 1.250 | 1.289 | 0.039 |
| 9 | −0.900 | −0.838 | 0.062 | 1.400 | 1.314 | −0.086 | −0.900 | −0.924 | −0.024 | 1.250 | 1.287 | 0.037 |
| 10 | −0.900 | −0.863 | 0.037 | 1.400 | 1.318 | −0.082 | −0.900 | −0.938 | −0.038 | 1.250 | 1.389 | 0.139 |
| 11 | −0.900 | −0.906 | −0.006 | 1.400 | 1.380 | −0.020 | −0.900 | −0.997 | −0.097 | 1.250 | 1.346 | 0.096 |
| 12 | −0.900 | −0.959 | −0.059 | 1.400 | 1.355 | −0.045 | −0.900 | −0.925 | −0.025 | 1.250 | 1.274 | 0.024 |
| 13 | −0.900 | −0.927 | −0.027 | 1.400 | 1.313 | −0.087 | −0.900 | −0.926 | −0.026 | 1.250 | 1.251 | 0.001 |
| 14 | −0.900 | −0.870 | 0.030 | 1.400 | 1.358 | −0.042 | −0.900 | −0.958 | −0.058 | 1.250 | 1.235 | −0.015 |
| 15 | −0.900 | −0.906 | −0.006 | 1.400 | 1.368 | −0.032 | −0.900 | −0.974 | −0.074 | 1.250 | 1.342 | 0.092 |
| 16 | −0.900 | −0.873 | 0.027 | 1.400 | 1.258 | −0.142 | −0.900 | −0.958 | −0.058 | 1.250 | 1.392 | 0.142 |
| 17 | −0.900 | −0.924 | −0.024 | 1.400 | 1.256 | −0.144 | −0.900 | −0.891 | 0.009 | 1.250 | 1.316 | 0.066 |
| 18 | −0.900 | −0.909 | −0.009 | 1.400 | 1.214 | −0.186 | −0.900 | −0.907 | −0.007 | 1.250 | 1.304 | 0.054 |
| 19 | −0.900 | −0.850 | 0.050 | 1.400 | 1.266 | −0.134 | −0.900 | −0.944 | −0.044 | 1.250 | 1.283 | 0.033 |
| 20 | −0.900 | −0.899 | 0.001 | 1.400 | 1.279 | −0.121 | −0.900 | −0.948 | −0.048 | 1.250 | 1.381 | 0.131 |
| 21 | −0.900 | −0.889 | 0.011 | 1.400 | 1.342 | −0.058 | −0.900 | −0.922 | −0.022 | 1.250 | 1.343 | 0.093 |
| 22 | −0.900 | −0.918 | −0.018 | 1.400 | 1.326 | −0.074 | −0.900 | −0.851 | 0.049 | 1.250 | 1.275 | 0.025 |
| 23 | −0.900 | −0.902 | −0.002 | 1.400 | 1.272 | −0.128 | −0.900 | −0.868 | 0.032 | 1.250 | 1.248 | −0.002 |
| 24 | −0.900 | −0.842 | 0.058 | 1.400 | 1.331 | −0.069 | −0.900 | −0.907 | −0.007 | 1.250 | 1.236 | −0.014 |
| 25 | −0.900 | −0.877 | 0.023 | 1.400 | 1.342 | −0.058 | −0.900 | −0.909 | −0.009 | 1.250 | 1.343 | 0.093 |

MAE: −0.045° (toe), 0.010° (camber); RMSE: 0.078° (toe), 0.066° (camber).