
Field Calibration of the Optical Properties of Pedestrian Targets in Autonomous Emergency Braking Tests Using a Three-Dimensional Multi-Faceted Standard Body

by Weijie Wang 1, Chundi Zheng 1, Houping Wu 1, Guojin Feng 1, Ruoduan Sun 1, Tao Liang 1,2, Xikuai Xie 1,2, Qiaoxiang Zhang 1, Yingwei He 1,* and Haiyong Gan 1,*

1 Optics Division, National Institute of Metrology, Beijing 100029, China
2 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(16), 5145; https://doi.org/10.3390/s25165145
Submission received: 28 June 2025 / Revised: 20 July 2025 / Accepted: 9 August 2025 / Published: 19 August 2025
(This article belongs to the Section Optical Sensors)


Highlights

What are the main findings?
  • Novel calibration apparatus: This manuscript introduces a three-dimensional multi-faceted standard body used as a value transfer medium to calibrate the optical properties (expressed as BRDF values) of pedestrian test targets. This approach is innovative because it addresses the challenges of traditional BRDF measurement methods, especially in outdoor, real-world testing scenarios.
  • Integration of imaging and algorithmic methods: The proposed method combines a camera-based analytical algorithm with imaging techniques to map and transfer calibration values from a standard white plate to complex pedestrian surfaces. This integration is shown to improve the reliability of sensor calibration on non-ideal and dynamically changing targets.
What are the implications of the main findings?
  • Enhanced calibration reliability: The development of an SI-traceable, field-applicable calibration method fills an important gap in ensuring that pedestrian target test objects used in AEB evaluations maintain consistent and reliable optical properties. This directly influences the accuracy of sensor-based safety systems in autonomous vehicles.
  • Quantitative insights into target degradation: This article documents how repeated usage (i.e., crash–scatter–reassembly cycles) affects the BRDF properties of targets—showing measurable changes such as decreased uniformity and altered reflectivity. These insights have strong implications for interpreting AEB test data and for establishing maintenance or replacement schedules for test targets.

Abstract

To address the growing need for field calibration of the optical properties of pedestrian targets used in autonomous emergency braking (AEB) tests, a novel three-dimensional multi-faceted standard body (TDMFSB) was developed. A camera-based analytical algorithm was proposed to evaluate the bidirectional reflectance distribution function (BRDF) characteristics of pedestrian targets. Additionally, applying the field calibration method in AEB testing scenarios (CPFAO and CPLA protocols) to one new and one aged pedestrian target of the same type revealed a 21% decrease in the BRDF uniformity of the aged target compared to the new one, confirming optical degradation due to repeated “crash–scatter–reassembly” cycles. The surface wear of the aged target on the side facing the vehicle produced a smoother surface, increasing its BRDF magnitude by 25% compared to the new target and making it more easily detectable by the vehicle’s perception system. This led to “score inversion,” a safety risk in performance evaluation, necessitating timely calibration of AEB pedestrian targets to ensure reliable test results. The findings provide valuable insights into the development of regulatory techniques, evaluation standards, and technical specifications for test targets and offer a practical path toward full-life-cycle traceability and quality control.

1. Introduction

In recent years, the rapid increase in the number of motor vehicles has been accompanied by a steady rise in traffic accidents. According to the Global Status Report on Road Safety 2018 [1], approximately 1.35 million people worldwide lose their lives each year due to road traffic accidents. Excessive vehicle speed is one of the primary contributing factors. When a vehicle travels at speeds below 50 km/h, the risk of fatal injury to an adult pedestrian in a collision is relatively low. However, when vehicle speed exceeds 80 km/h, the fatality risk for adult pedestrians rises significantly. Research has shown that a 5% reduction in average vehicle speed can reduce traffic-related fatalities by as much as 30% [2].
The automatic emergency braking (AEB) system is an active safety feature that can autonomously apply the brakes when an imminent collision is detected, thereby reducing vehicle speed [3,4,5]. The presence of AEB systems significantly lowers the fatality rate in traffic accidents and is now considered a key indicator in evaluating the reliability and safety of advanced driver assistance systems (ADASs). It is also an important component of autonomous driving tests [6].
Between 2018 and 2021, ISO technical committees in Europe established standardized guidelines for test targets used in various traffic scenarios. These include specifications for rear-end vehicle test targets [7], pedestrian targets [8], three-dimensional vehicle targets, and cyclist targets [9]. For example, ISO 19206-2:2018 [10] stipulates that the surface reflectivity of pedestrian targets—especially the infrared reflectivity of skin-like areas—should closely match that of real human skin, typically ranging from 40% to 60%. Additionally, variations in the reflectance of clothing materials from different viewing angles should not exceed 20%.
Among all road users, pedestrians are the most vulnerable and tend to exhibit highly unpredictable movement patterns, which pose significant challenges for AEB systems in terms of detection accuracy and decision-making [11]. To date, several studies have investigated the calibration of pedestrian targets’ radar cross-section (RCS) properties, given that the RCS serves as an effective model for representing the reflectivity characteristics of target vehicles in vehicle-to-vehicle (V2V) scenarios [12]. The China Automotive Technology and Research Center Co., Ltd. (Tianjin, China) has developed radar reflection signal calibration methods for existing AEB pedestrian targets [13]. Hunan University (Changsha, China) has established standard dimensions for adult AEB test targets based on the 50th-percentile body dimensions of Chinese adults [14]. The National Intelligent Connected Vehicle Quality Infrastructure Center of China (Shenzhen, China) has implemented a calibration scheme using triangular cones for RCS data, along with signal filtering, to establish a comprehensive RCS testing protocol [15]. The University of Michigan Transportation Research Institute (UMTRI) (Ann Arbor, MI, USA) has also proposed a strategy for calibrating the RCS of AEB targets; this method effectively validates the millimeter-wave radar response characteristics of various targets, including pedestrians, cyclists, and vehicles [16].
The surface reflectance of objects significantly affects the performance of sensors in autonomous driving systems [17,18,19]. However, the reflectance of AEB test targets is influenced by factors such as the light source direction, radiation intensity, and viewing angle. These optical scattering behaviors must be characterized using bidirectional reflectance distribution function (BRDF) parameters. For pedestrian detection, most autonomous vehicles rely on technologies such as machine vision (including both visible and infrared imaging), millimeter-wave radar, multi-sensor fusion, and convolutional neural networks [20]. In this context, since image-based perception plays a critical role in identifying vulnerable road users, this study specifically focuses on analyzing the BRDF optical characteristics of pedestrian targets used in AEB systems.
Common BRDF measurement techniques and devices include a wide variety of mainstream approaches, such as the gonioreflectometer [21,22], specular BRDF acquisition systems [23], and imaging-based systems [24]. However, these systems are often limited by the shape and size of the sample object. While they allow relatively fast BRDF data collection, this often comes at the cost of reduced accuracy. Additionally, their performance is constrained by the number and angular distribution of light sources and sensors. The National Institute of Metrology, China, has developed a hemispherical BRDF calibration system, which currently serves as the national reference for BRDF calibration. For outdoor measurements, portable BRDF devices based on drone platforms have also been developed [25,26,27].
This paper proposes an in situ BRDF measurement approach for AEB test targets. The proposed method ensures the accuracy of autonomous driving test results by enabling the rapid acquisition of BRDF data from these targets. Preliminary research on imaging techniques, combined with a BRDF analytical algorithm [28], has shown the theoretical feasibility of carrying out field calibration of the optical properties of pedestrian targets. Compared to previous work [28], this study places greater emphasis on a field-viable methodology supported by experimental evaluation. Key contributions include (1) the addition of the EMD-based matching facet calculation algorithm for metrological traceability of the BRDF and (2) light source vector refinements: instead of calculating a single combined vector, separate synthesized light vectors of the top facets are calculated for different viewing directions, making the method more adaptable to field environmental constraints. These additions establish a comprehensive methodological framework for robust field assessment.
Compared to other mainstream BRDF acquisition methods, the proposed system demonstrates notable advantages in terms of cost, scalability, and accuracy.
From a cost perspective, traditional gonioreflectometers typically cost over one million CNY (Chinese yuan), while UAV-based BRDF acquisition platforms often exceed several hundred thousand CNY. In contrast, our system is designed to be cost-effective, with a total investment of about one hundred thousand CNY, making its practical deployment more viable. In terms of scalability, gonioreflectometers are mainly confined to laboratory settings and are suitable only for standardized samples. UAV-based systems offer better field adaptability but may face limitations under complex lighting or terrain conditions. The system developed in this study is inherently modular and field-oriented, offering high adaptability to a variety of testing environments.
Regarding measurement accuracy, gonioreflectometers represent the highest laboratory-grade reference standard. UAV-based platforms, while traceable to laboratory references, are often affected by environmental fluctuations such as light. Our system, however, is designed with on-site traceability in mind, incorporating light vector decomposition and determination. This allows it to maintain a balance between high-efficiency field measurements and reliable accuracy, providing a practical and quantifiable solution for BRDF calibration in real-world AEB testing scenarios.
A pre- and post-crash field measurement scene is illustrated in Figure 1. To account for the long-term use of pedestrian targets by major autonomous driving manufacturers and to ensure the representativeness and engineering comparability of the experiment, two sets of pedestrian targets were specially customized by a professional manufacturer for this study. One set consists of brand-new, unused targets serving as a reference under ideal conditions; the other set comprises targets that have been in regular use for over six months. After thorough coordination with the test site, the used targets were subjected to standard testing conditions to simulate long-term operational scenarios, allowing for evaluation of how changes in optical properties impact the performance of AEB systems.
To address the challenges in evaluating the optical properties of AEB test target surfaces and the current lack of field calibration methods, this study introduces a calibration method for the spatially distributed optical reflectance characteristics of typical humanoid targets. A field-traceable calibration apparatus was developed, which integrates a three-dimensional multi-faceted standard body with a camera. By combining imaging techniques with a BRDF analytical algorithm, the system enables rapid measurement and traceable quantification of BRDF values, including in regions that have sustained damage.

2. Calibration Principles and Procedures

The bidirectional reflectance distribution function (BRDF) characterizes the scattering and radiative properties of rough surfaces. It is crucial in various applications, including target detection, tracking, recognition, feature extraction, and stealth technology [29]. As illustrated in Figure 2, the BRDF is defined as the ratio of the reflected radiance $\mathrm{d}L_r(\theta_i,\Phi_i;\theta_r,\Phi_r)$ in the reflected direction $(\theta_r,\Phi_r)$ within a small solid angle to the incident illuminance $\mathrm{d}E_i(\theta_i,\Phi_i)$ from the incident direction $(\theta_i,\Phi_i)$, also within a small solid angle. This relationship is expressed by the following equation:

$$f_r(\theta_i,\Phi_i;\theta_r,\Phi_r) = \frac{\mathrm{d}L_r(\theta_i,\Phi_i;\theta_r,\Phi_r;E_i)}{\mathrm{d}E_i(\theta_i,\Phi_i)}\ [\mathrm{sr}^{-1}]. \tag{1}$$
This study establishes a BRDF calibration workflow for AEB targets based on value transfer. First, the BRDF of a standard white plate (WP) is calibrated in a laboratory setting to serve as a reference. A six-axis robotic arm (Mitsubishi, Nagoya, Japan) is then employed to capture multi-angle images of both the standard white plate and a three-dimensional multi-faceted standard body. This step establishes a mapping between grayscale image values and BRDF values, thereby completing the value transfer from the standard plate to the multi-faceted body.
Subsequently, the three-dimensional multi-faceted body is used as the transfer medium. Within a unified coordinate system, multi-angle images of the AEB target are captured. Image processing algorithms are then applied to extract vectors for the light source and viewing direction, from which BRDF values in the damaged regions of the target are derived. This enables dynamic assessment and standardized evaluation of the target’s reflective properties. The overall calibration process is illustrated in Figure 3.

2.1. Value Transfer Process for the Standard White Plate

The National Institute of Metrology, China, utilizes a hemispherical BRDF calibration apparatus that currently serves as the national standard for BRDF calibration. This system supports high-precision calibration of the BRDF values of standard white plates [30].

2.2. Value Transfer Process for the Multi-Faceted Standard Body

Once the BRDF values of the standard white plate are obtained, the multi-faceted body module undergoes BRDF calibration and value transfer in a laboratory environment. The calibration procedure is as follows.
A specific wavelength, λ, is selected during image acquisition. The incident angle of the light source is denoted as $(\theta_i,\Phi_i)$, and the camera's viewing angle is denoted as $(\theta_r,\Phi_r)$. A six-axis robotic arm is used to alternately position the standard white plate and the multi-faceted standard body for calibration. The BRDF value of the standard white plate is denoted as $f_r^{WP}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$. A camera is mounted on a circular rail. Using a light source with the same wavelength and maintaining the same lighting and viewing conditions as in field acquisition, images are captured from various angles. The calibration process is shown in Figure 4.

For calibration, the facet of the multi-faceted body under consideration and the corresponding facet of the standard white plate are aligned to the same position and height, thereby forming the calibration plane. The average grayscale values of the white plate and the facet to be calibrated are recorded as $L_r^{WP}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ and $L_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$, respectively. The BRDF of the facet on the multi-faceted body is then calculated using the comparative method described by Equation (2):

$$f_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda) = \frac{L_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)}{L_r^{WP}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)} \times f_r^{WP}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda). \tag{2}$$
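For concreteness, the comparative transfer of Equation (2) reduces to a one-line computation once the mean grayscale values have been extracted from the calibration images. The following sketch uses illustrative placeholder values, not measured data from this study:

```python
# Minimal sketch of the comparative method in Equation (2).
# All numeric values are illustrative placeholders.
f_r_WP = 0.31      # laboratory-calibrated BRDF of the white plate [sr^-1]
L_WP = 182.4       # mean grayscale of the white-plate region in the image
L_TDMFSB = 95.7    # mean grayscale of the facet under calibration

# Equation (2): the grayscale ratio scales the reference BRDF.
f_r_TDMFSB = (L_TDMFSB / L_WP) * f_r_WP
print(f"facet BRDF = {f_r_TDMFSB:.4f} sr^-1")  # ~0.1626
```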

2.3. Transfer Process for the AEB Target

2.3.1. Construction of the Measurement Scene

Field calibration must follow standardized procedures to ensure data reliability (a minimal configuration-check sketch follows this list):
  • Measurement Scene Selection: Priority should be given to open, unobstructed outdoor spaces. The three-dimensional multi-faceted standard body and AEB pedestrian target must be positioned away from high-reflectivity objects (e.g., metal structures or glass facades) to avoid environmental light interference. Although their spatial positions can be flexibly adjusted, an unobstructed testing field of view must be maintained to prevent occlusion in optical measurements.
  • Outdoor Light Source Configuration: Natural sunlight should be used as the sole light source. Calibration should be conducted under clear weather conditions (illuminance ≥ 2000 lx), avoiding harsh midday sunlight or shadow interference.
  • Indoor Light Source Configuration: A single main light source (e.g., an LED array) should be used, and all other ambient light sources must be turned off. The distance between the light source and the target must be at least five times the target length to ensure uniform illumination.
  • Camera Parameter Calibration: The camera, standard body, and measurement area of the target must be at the same horizontal height. To reduce perspective distortion, the distance between the standard body and the target should be ≤x cm, and the distance between the camera and both the target and standard body should be ≥20x cm. A top-down perspective is employed during image acquisition to cover the full frontal and lateral features of the target.
  • Standard Body Pre-calibration: After capturing a baseline image directly facing the camera, the camera is rotated at 90° intervals to acquire images from multiple angles. These images are used to calculate incident angles and reflected direction vectors.
  • Dynamic Measurement of the Target: After fixing the incident light direction, the pedestrian target is adjusted to a predefined damage position, and images covering the full extent of the damaged region are acquired. Simultaneously, a photometer records light intensity values, sampling once per image frame.
  • Dynamic Light Intensity Monitoring: For outdoor experiments, an illuminance meter is employed to monitor ambient light intensity in real time. The meter should be placed in an unobstructed area, and each image must be tagged with a corresponding light intensity timestamp for subsequent BRDF compensation calculations.
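As referenced above, the geometric and radiometric constraints in this checklist lend themselves to an automated pre-measurement check. The sketch below encodes only the constraints stated explicitly in the list; the class and function names are our own, and the protocol's distance parameter x is passed in rather than assumed:

```python
from dataclasses import dataclass

@dataclass
class SceneSetup:
    illuminance_lx: float      # ambient illuminance at the target (outdoor case)
    body_to_target_cm: float   # standard body <-> target distance
    camera_to_scene_cm: float  # camera <-> target/standard body distance
    x_cm: float                # protocol distance parameter (kept symbolic above)

def validate_scene(s: SceneSetup) -> list[str]:
    """Return the list of violated checklist constraints (empty list = pass)."""
    issues = []
    if s.illuminance_lx < 2000:
        issues.append("illuminance below 2000 lx")
    if s.body_to_target_cm > s.x_cm:
        issues.append("standard body farther than x cm from the target")
    if s.camera_to_scene_cm < 20 * s.x_cm:
        issues.append("camera closer than 20x cm to the scene")
    return issues
```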

2.3.2. Algorithm Processing

Following image acquisition, processing proceeds through a six-step algorithmic workflow, illustrated in Figure 5.

2.3.3. Coordinate System Based on TDMFSB

First, the multi-faceted body module is extracted from the captured images. The outer boundary of the module is identified, and its sphere center $(X_0, Y_0, Z_0)$ is used as the origin of the coordinate system. The line connecting the sphere center and the camera center is defined as the Z-axis. Based on this, the coordinate system is constructed, and the radius R of the multi-faceted body module is computed. Taking the chest center of the AEB pedestrian target as the origin of the pedestrian target coordinate system, a similar coordinate system is constructed using the line connecting this point to the camera as the Z-axis. The coordinate system is illustrated in Figure 6. The distance between the AEB pedestrian target and the multi-faceted body module is denoted as Ld.

2.3.4. Determine the Incident Light Vector

To determine the angular orientation of the normal vector for each polygon, four individual images of the standard body are used. For each image, the constituent polygons on the multi-faceted body module are identified, and the center coordinates of their circumscribed circles are computed. Taking the blue polygon in Figure 7 as an example, its center coordinate $(X_2, Y_2, Z_2)$ represents the projection center of the polygon on the XY plane. Accordingly, the angle θ between the polygon normal and the Z-axis satisfies the following:

$$\theta = \arccos\frac{\sqrt{(X_2-X_0)^2+(Y_2-Y_0)^2+(Z_2-Z_0)^2}}{R}. \tag{3}$$

The angle Φ between the NZ plane (formed by the blue polygon normal N and the Z-axis) and the XZ plane satisfies the following:

$$\Phi = \arctan\left(\frac{Y_2-Y_0}{X_2-X_0}\right). \tag{4}$$
Using the same method, the angles θ and Φ can be calculated for all polygons visible in the images. Subsequently, the synthetic vector of the light source is calculated. The principle is to infer the direction from the orientation of the brightest polygons. For each polygon, the directional angles θ and Φ of its normal vector are recorded, and the average grayscale value Pn in the original image is used as the magnitude of the vector within the polygon area. The resulting polar coordinate vector Nn for each polygon is expressed as Nn = [θn, Φn, Pn].
From all the vectors, the one with the maximum magnitude—e.g., N2 in the figure—is identified. Then, all vectors corresponding to polygon centers with a Euclidean distance from the center (X2, Y2, Z2) of the polygon with the maximum vector N2 less than aR (where R is the radius of the multi-faceted body module) are selected—e.g., N1 through N6 in the figure. These vectors lying within the selected region are then combined to form a synthetic vector.
The same procedure is repeated for all four images, resulting in four light source vector components. After performing coordinate transformations to unify their orientations, the four light source vectors are fused into a final synthetic incident light vector. The direction of this vector defines the incident direction of the composite light source, denoted as (θi, Φi). The complete procedure for determining the incident vector is shown in Figure 8.
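A compact sketch of this synthesis step for one of the images is given below. It assumes the per-facet quantities (normal angles, mean grayscale, and facet-center pixel coordinates) have already been extracted; the numbers are illustrative, and the facet data format is our own choice:

```python
import numpy as np

# Per-facet data for one image: (theta, phi) of the facet normal in radians,
# mean grayscale P, and facet-center pixel coordinates (cx, cy). Illustrative.
facets = [
    (0.35, 1.05, 240.0, 410, 220),
    (0.42, 0.95, 228.0, 455, 250),
    (0.55, 1.20, 180.0, 380, 300),
    (0.60, 0.40, 120.0, 520, 330),
]
a, R = 0.8, 150.0  # selection-radius factor and module radius (pixels)

def to_cartesian(theta, phi, p):
    """Polar vector N_n = [theta_n, phi_n, P_n] -> Cartesian, magnitude P_n."""
    return p * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

# The brightest facet anchors the selection region.
k = max(range(len(facets)), key=lambda i: facets[i][2])
cx0, cy0 = facets[k][3], facets[k][4]

# Keep facets whose centers lie within a*R of the brightest facet, then sum.
selected = [f for f in facets if np.hypot(f[3] - cx0, f[4] - cy0) < a * R]
v = sum(to_cartesian(t, p, g) for (t, p, g, _, _) in selected)

theta_i = np.degrees(np.arccos(v[2] / np.linalg.norm(v)))
phi_i = np.degrees(np.arctan2(v[1], v[0]))
print(theta_i, phi_i)  # synthetic incident direction for this image
```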

2.3.5. EMD-Based Matching Facet Calculation Method

Once the composite incident direction is determined, the next step is to identify the facet on the standard body that best matches the direction of the measured object for BRDF value transfer. The Earth Mover’s Distance (EMD) algorithm is employed for this purpose. Also known as the “bulldozer distance,” the EMD is a metric used to measure the similarity between two probability distributions [31]. The concept is analogous to reshaping terrain by moving soil—one distribution is considered a “pile of earth,” and the EMD quantifies the minimum effort required to reshape it into another. Rubner et al. [32] extended this concept to image retrieval, and it has since gained widespread academic attention:
$$EMD(P,Q) = \min_{F=\{f_{ij}\}} \frac{\sum_{i}\sum_{j} f_{ij}\, d_{ij}}{\sum_{i}\sum_{j} f_{ij}}. \tag{5}$$

Mathematically, the EMD measures the "transportation cost" between two point sets or distributions. A distribution of size N is defined as $S = \{(m_j, w_j)\}_{j=1}^{N}$, where $m_j$ is the j-th element and $w_j$ is its weight. Two sets $P = \{(p_i, u_i)\}_{i=1}^{m}$ and $Q = \{(q_j, v_j)\}_{j=1}^{n}$ containing m and n elements, respectively, are given. The EMD between these sets becomes a transportation problem in which the elements of P act as suppliers with supplies $u_i$, and the elements of Q act as consumers with demands $v_j$; the flow $f_{ij}$ is the mass transported from $p_i$ to $q_j$, and $d_{ij}$ is the corresponding ground distance. The EMD is defined as the minimum total (normalized) cost required to satisfy the supply–demand problem [33].
To apply the EMD in the context of matching the standard body to the measured pedestrian target, a hexagon of the same size as those on the multi-faceted standard body is affixed to the test location on the target. During calibration, the target is rotated at a specific angle, which causes deformation of the hexagon, as shown in Figure 9.
As shown in Figure 10, the pre- and post-rotation hexagons are extracted, and their vertices form two point sets, P and Q. The Euclidean distance between each vertex Pi and Qj is treated as the weight ωij.
The pairwise weights are computed as

$$\omega_{ij} = \lVert p_i - q_j \rVert. \tag{6}$$

This yields a distance matrix D, whose elements $d_{ij} = \omega_{ij}$ represent the distance between the i-th vertex of the base polygon P and the j-th vertex of the target polygon Q:

$$D = \begin{bmatrix} \omega_{11} & \omega_{12} & \cdots & \omega_{1j} \\ \omega_{21} & \omega_{22} & \cdots & \omega_{2j} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{i1} & \omega_{i2} & \cdots & \omega_{ij} \end{bmatrix}. \tag{7}$$
Assume that the computed distance matrix is

$$D = \begin{bmatrix} 0.5 & 1.2 & 0.9 & 1.8 & 0.7 \\ 1.1 & 0.4 & 1.5 & 0.9 & 1.3 \\ 0.3 & 1.0 & 0.6 & 0.8 & 1.4 \\ 0.1 & 1.3 & 0.4 & 0.5 & 1.1 \\ 1.0 & 0.6 & 0.8 & 0.7 & 0.2 \end{bmatrix}. \tag{8}$$

Using linear assignment, the optimal assignment solution yields a total cost of 0.5 + 0.4 + 0.6 + 0.5 + 0.2 = 2.2. The total mass to be transported, i.e., the number of vertices, is 5. Thus, the resulting EMD is 2.2/5 = 0.44.
In practice, a new coordinate system is established in which the hexagon on the pedestrian target and each polygon on the standard body are translated to the origin. The EMD is computed between the transformed target hexagon and every polygon on the standard body. If the number of vertices on two faces is unequal, the lower count is used as the denominator in the EMD calculation. For example, when comparing a hexagon to a pentagon, the transported mass is 5. The polygon on the standard body with the smallest EMD is selected as the matching facet (MF).
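Because all vertex weights are equal here, the EMD reduces to an optimal one-to-one assignment on the distance matrix, which can be solved with the Hungarian algorithm. The sketch below follows the conventions stated above (pairwise Euclidean costs; the lower vertex count as denominator); the hexagon coordinates are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_match(p, q):
    """EMD between two vertex sets (rows are 2-D vertex coordinates)."""
    # Distance matrix D of pairwise Euclidean distances, cf. Equations (6)-(7).
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(d)  # optimal vertex assignment
    # Transported mass = lower vertex count, per the convention above.
    return d[rows, cols].sum() / min(len(p), len(q))

# Illustrative example: a unit hexagon vs. a sheared (perspective-deformed) copy.
hexagon = np.array([[np.cos(t), np.sin(t)]
                    for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
deformed = hexagon @ np.array([[0.95, -0.20], [0.20, 0.80]])
print(emd_match(hexagon, deformed))

# The matching facet is the standard-body facet minimizing the EMD:
# matching_facet = min(body_facets, key=lambda f: emd_match(target_patch, f))
```

Applied to the 5 × 5 matrix in the worked example, the assignment solver recovers the diagonal pairing with total cost 2.2, reproducing the EMD of 0.44.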

2.3.6. Determination of the Reflected Vector

When capturing images directly facing the target, the line connecting the center of the sphere of the multi-faceted body module and the center of the camera is defined as the Z-axis. Under this condition, the viewing angle of the multi-faceted body module is (0°, 0°). Given that the distance from the camera to the target Lr is significantly greater than the distance Ld between the multi-faceted body module and the AEB pedestrian target, the viewing angle of the AEB pedestrian target can also be approximated as (0°, 0°).
However, when either the camera or the target is rotated by a certain angle, the camera’s angle relative to the matched facet changes and must be recalculated. If the camera is rotated by (α°, β°) after initially facing the target, the matched facet is changed to facet 5, as illustrated in Figure 6. To determine the camera’s rotation angle, the normal vector N5 of facet 5 must be rotated to align with the Z-axis of the coordinate system.
During this rotation, the distance between the light source and the standard body is assumed to be significantly larger than the distance from the center of the sphere to facet 5. Therefore, the light rays can be approximated as passing through both facet 5 and the sphere center simultaneously.
To construct the spatial relationship among the normal vector N5 of facet 5, the light source, and the camera, three spatial distance components are introduced: L5, LL, and Lc. These represent, respectively, the radius R of the multi-faceted body module, the distance from the light source to the module LL, and the distance from the camera to the module Lc. These parameters define the spatial vectors used in the calculation.
A polar coordinate system is then established with the origin at the sphere center $(X_0, Y_0, Z_0)$. The polar coordinates of the vectors $N_L$, $N_5$, and $N_c$ are expressed as $(\theta_L, \Phi_L, L_L)$, $(\theta_5, \Phi_5, L_5)$, and $(0, 0, L_c)$, respectively. These polar coordinates are subsequently converted into Cartesian coordinates, denoted as $(X_L, Y_L, Z_L)$, $(X_5, Y_5, Z_5)$, and $(0, 0, Z_c)$.
The rotation angles α and β can then be calculated as

$$\alpha = \arctan\left(\frac{Y_5}{Z_5}\right), \qquad \beta = \arctan\left(\frac{X_5}{Z_5}\right). \tag{9}$$

To align N5 with the Z-axis, it must be rotated clockwise by −α around the X-axis and clockwise by β around the Y-axis. After this transformation, facet 5 can be considered to directly face the camera, and its viewing angle is then $(\theta_r, \Phi_r) = (\alpha, \beta)$. This transformation demonstrates the change in the viewing angle relative to the matched facet, as shown in Figure 11.
Using coordinate transformation matrices, the rotation process can be validated. The Cartesian coordinates of the rotated normal vector $N'_5$ can be obtained using the following equation [22]:

$$\begin{bmatrix} X'_5 \\ Y'_5 \\ Z'_5 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_5 \\ Y_5 \\ Z_5 \\ 1 \end{bmatrix}. \tag{10}$$
The accuracy of the result can be confirmed by comparing the coordinates obtained through matrix operations with those derived from algorithmic computation.
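The alignment can also be checked numerically. The sketch below applies the two rotations to a hypothetical facet-5 normal and confirms that its X and Y components vanish; the sign conventions assume a right-handed frame, so they may need flipping to match a particular implementation of Equation (10):

```python
import numpy as np

def rot_x(a):
    """Rotation about the X-axis by angle a (right-handed convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation about the Y-axis by angle a (right-handed convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

# Hypothetical facet-5 normal in the standard-body coordinate system.
n5 = np.array([0.30, 0.20, 0.93])
n5 = n5 / np.linalg.norm(n5)

# First rotation (about X): bring the normal into the XZ plane, cf. Equation (9).
alpha = np.arctan2(n5[1], n5[2])
v = rot_x(alpha) @ n5          # y-component is now ~0

# Second rotation (about Y): align the normal with the Z-axis.
beta = np.arctan2(v[0], v[2])
n5_aligned = rot_y(-beta) @ v

print(np.round(n5_aligned, 6))  # ~ [0, 0, 1]: facet 5 now faces the camera
```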

2.3.7. BRDF Transfer Process

Before initiating the BRDF transfer process, it is essential to conduct a preliminary analysis of the incident lighting conditions. These conditions are influenced not only by direct solar radiation but also by scattered and reflected light emanating from atmospheric and environmental sources. Direct sunlight, as the primary light source, is characterized by strong directionality and high intensity. However, as it passes through the atmosphere, it undergoes Rayleigh and Mie scattering, resulting in diffuse illumination. This phenomenon is particularly pronounced in the short-wavelength blue region and constitutes a significant component of atmospheric scattered light [34].
Additionally, environmental surfaces, such as the ground, buildings, and vegetation, reflect sunlight and thereby act as secondary light sources. These are collectively referred to as ambient reflected light. The combined effects of direct sunlight, atmospheric scattering, ambient reflection, and cloud scattering result in complex and variable illumination conditions. Consequently, light arrives from all directions under virtually all weather conditions.
Tian et al. [35] from Peking University (Beijing, China) proposed that the BRDF (bidirectional reflectance distribution function) of vegetation canopies can be expressed as the sum of single and multiple scattering components. In practice, light often strikes a surface from various directions, and the contributions of the BRDF components vary accordingly. Therefore, the overall BRDF of a surface can be modeled as a weighted sum of the BRDF values from all incident directions.
Similarly, as illustrated in Figure 12, the BRDF of a matching facet on a multi-faceted standard body can be approximated as a weighted sum of the BRDFs of its individual facets. However, during field calibration, only facets 1 through 5 are considered in calculating the total incident light vector. The remaining facets are primarily illuminated by light reflected from the ground and surrounding walls. As a result, the BRDF contribution for facet 1 is mainly derived from facets 1 to 5.
The weight of each contributing facet is calculated based on the ratio of its grayscale value to the total grayscale value of all contributing facets.
Using a comparative method, the BRDF value of the calibration facet on the multi-faceted standard body is calculated as follows:

$$f_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda) = \frac{L_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)\,/\,E_{TDMFSB}^{a}}{L_r^{WP\,a}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)\,/\,E_{WP}^{a}} \times f_r^{WP\,a}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda), \tag{11}$$

where $f_r^{WP\,a}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ is the BRDF of the standard white plate under various incident light angles; $L_r^{WP\,a}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ is the average grayscale value of the white plate captured by the camera; $L_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ is the average grayscale value of the multi-faceted standard body surface; and $E_{TDMFSB}^{a}$ and $E_{WP}^{a}$ are the ambient illuminance values during the measurements of the multi-faceted standard body and the white plate, respectively.
Compared to Equation (2), Equation (11) introduces two additional parameters, ETDMFSB a and EWP a, which represent the ambient illuminance values measured during the imaging of the multi-faceted standard body and the white plate, respectively. In practice, these parameters reflect the stability of the light source illuminance at the calibration position, as indicated by the readings from the monitoring detector. The aim of including these two parameters is to correct for fluctuations in light source intensity that may affect the grayscale values in the process.
During image capture, the grayscale value recorded by the camera not only reflects the surface reflectance of the object but is also significantly influenced by the prevailing lighting conditions. Without compensating for illumination variation, BRDF values obtained from the same surface at different times may exhibit systematic errors.
Therefore, in Equation (11), grayscale values are normalized by their corresponding illuminance values to eliminate the influence of lighting variation, thereby enhancing the accuracy and repeatability of the BRDF calculation.
$$\eta_a = \frac{L_{TDMFSB}^{a}\,/\,E_{TDMFSB}^{a}}{\sum_{a'=1}^{5} L_{TDMFSB}^{a'}\,/\,E_{TDMFSB}^{a'}}. \tag{12}$$

The BRDF value $f_r^{MF}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ of the matching facet is then given by the weighted sum over the contributing facets:

$$f_r^{MF}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda) = \sum_{a=1}^{5} \eta_a \times f_r^{TDMFSB\,a}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda). \tag{13}$$

2.3.8. Value Transfer from Standard Body to Target

After determining the BRDF of the matching facet on the standard body, a field image is loaded, and the grayscale values $L_{MF}$ and $L_{MeF}$ of the matching facet and the measured facet (MeF) on the pedestrian target are extracted. The BRDF of the target surface is then calculated using the following formula:

$$f_r^{MeF}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda) = f_r^{MF}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)\,\frac{L_{MeF}}{L_{MF}}. \tag{14}$$
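Putting Equations (11) through (14) together, the field transfer chain is short enough to express end to end. The sketch below uses illustrative placeholder numbers throughout (five contributing facets, their laboratory-calibrated BRDFs, and the field grayscale and illuminance readings):

```python
import numpy as np

# Per contributing facet a = 1..5: field grayscale L_a, illuminance E_a at
# capture time, and the laboratory-calibrated facet BRDF (Equation (11)).
L = np.array([210.0, 195.0, 180.0, 160.0, 150.0])        # grayscale
E = np.array([2450.0, 2440.0, 2460.0, 2455.0, 2430.0])   # illuminance [lx]
f_facet = np.array([0.030, 0.029, 0.027, 0.026, 0.025])  # BRDF [sr^-1]

# Equation (12): illuminance-normalized grayscale weights.
eta = (L / E) / np.sum(L / E)

# Equation (13): matching-facet BRDF as the weighted sum over facets.
f_MF = np.sum(eta * f_facet)

# Equation (14): transfer from the matching facet to the measured facet.
L_MF, L_MeF = 172.0, 118.0  # mean grayscale of matching / measured facet
f_MeF = f_MF * (L_MeF / L_MF)
print(f"target-facet BRDF = {f_MeF:.4f} sr^-1")
```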

3. Experimental Procedure

This section begins by presenting the experimental validation of the light vector algorithm’s effectiveness under laboratory optical darkroom conditions (Section 3.1). Subsequently, Section 3.2 and Section 3.3 detail the field implementation methodology for pedestrian target characterization based on the theoretical framework established in Section 2, including the development of a comprehensive field calibration workflow. Section 3.4 quantitatively examines how the target’s optical properties influence AEB test outcomes.

3.1. Indoor Light Vector Algorithm Experiments

Preliminary validation of the algorithm was conducted in a laboratory of the National Institute of Metrology, Beijing, China. The experimental setup included a three-dimensional multi-faceted standard body, a camera, and two standard white plates.
The multi-faceted standard body was fabricated via three-dimensional printing. To mitigate light leakage at the seams of the polygonal panels and enhance edge definition, black adhesive tape (Deli, Ningbo, China) was applied along the edges of each polygon.
The camera used in the experiment was a monochrome camera (Thorlabs, Newton, NJ, USA), featuring a 5-megapixel resolution and a maximum frame rate of 35 fps. The two standard white plates (NIM, Beijing, China) were made of ceramic; white plate 1 measured 10 cm × 10 cm, and white plate 2 had a diameter of 15 cm.
The verification experiment was divided into two parts. The first part aimed to validate the accuracy of the light source calculation algorithm. This experiment was conducted in a darkroom located in the underground laboratory at the Changping campus of the National Institute of Metrology, Beijing, China. The multi-faceted standard body was positioned at a height of H above the ground (in this experiment, the height from the ground to the chest area of the pedestrian target was 1.5 m; to ensure consistency with subsequent field experiments, the height H of the multi-faceted standard body above the ground was also set to 1.5 m) and at a distance of Lc from the camera (in this experiment, due to the limited size of the laboratory space, Lc = 3 m). This configuration was recorded as position 1. Subsequently, the camera was moved Lc to the left and right of the multi-faceted body, denoted as positions 2 and 3, respectively.
The results of the algorithmic processing are shown in Figure 13 and Figure 14.
With the reference direction established, the actual incident angle of the light source relative to the multi-faceted body was measured at (69.95°, 29.05°). The angle calculated using the synthetic light source vector was (70.76°, 30.01°). The automated detection and analysis thus yielded deviations of 0.81° and 0.96° in the two angles, both within the 3° tolerance, as illustrated in Figure 15.
It should be noted that the synthesized incident direction may be influenced by refracted light from surrounding objects. As a result, there may be a certain degree of deviation between the calculated and actual incident angles.

3.2. Outdoor Field Testing

Based on the algorithm described in the previous section, on-site BRDF transfer for the target can be achieved. The overall field calibration workflow is illustrated in Figure 16.
Field validation experiments were conducted at the testing grounds of an institute related to intelligent connected vehicles. Two typical pedestrian targets were selected for evaluation: a new pedestrian target and an aged pedestrian target. The experimental setup included a monochrome camera equipped with a 550 nm bandpass optical filter, an Archimedean multi-faceted standard body, and the pedestrian targets. The three-dimensional multi-faceted standard body was fabricated through three-dimensional printing. The calibration process strictly followed the field calibration procedure illustrated in Figure 16.
The testing procedure consisted of the following seven steps.

3.2.1. Selection of the Measurement Scene

Test scenes were selected based on the CPFAO and CPLA scenarios defined in the C-NCAP standards [36,37]:
  • A pedestrian crossing from behind an obstacle while the vehicle proceeds straight.
  • A pedestrian walking longitudinally in front of a vehicle moving straight.
These scenarios established the vehicle speed and crash points for the subsequent full-scale tests.

3.2.2. Deployment of Measurement Equipment

Following the layout of the measurement scene, the equipment was deployed accordingly. To capture the images required for light source calculation, the multi-faceted standard body was placed along the road centerline to avoid shadow occlusion from the surrounding background. The camera was first positioned facing direction 1 at a distance of Lc from the standard body (to ensure consistency with the calibration procedure conducted in the laboratory, the distance was also set to Lc = 3 m), and this location was marked as position 1. Subsequently, the camera was placed at positions 2 and 3, facing directions 2 and 3, respectively, each at the same distance from the standard body, and images were taken at each position. The field experiment scene is illustrated in Figure 17.
For BRDF value transfer imaging, the pedestrian target was positioned Ld cm to the right of the standard body (in this experiment, Ld = 10 cm). Three camera positions were established: position 1 was located at a distance of L meters along the perpendicular to the line connecting the target and the standard body (in this experiment, owing to the relevant provisions of ISO 19206, L = 10 m); positions 2 and 3 were placed 3 m to the left and right of position 1, respectively. These positions provided a viewing angle range of −16.7° to 16.7°. Positions 2 and 3 were slightly adjusted forward to maintain a constant 10 m distance from the target. Images captured during the light source vector calculation phase are illustrated in Figure 18.

3.2.3. Image Capture

After setting up the equipment, image acquisition commenced. The pedestrian target was first oriented facing the camera. Three images were taken at position 1, and the illuminance readings at the time of imaging were recorded. The target was then rotated to face sideways and backwards, and images were captured for each orientation in the same manner. After completing the sequence at position 1, the camera was moved to positions 2 and 3, and the process was repeated. The images were collected during the entire calibration process.

3.2.4. Algorithmic Calculation of Incident Light Vector

Following image capture, the images from the light source vector calculation phase were processed. Since the upper portion of the standard body receives the strongest illumination and the least environmental reflection, only the upper incident vector was computed. Using the algorithm, facet-normal vectors were derived, and the corresponding polygons were filtered within a circular region with a Euclidean distance from the center of the polygon with the maximum vector of less than aR (with a set to 0.8 in this experiment), establishing the coordinate system. Within this system, the zenith and azimuth angles were defined as (θ1, φ1) = (−10.1°, 79.9°). The remaining facets were processed in a similar manner.

3.2.5. Calculation of Viewing Vectors

The camera was placed at three different positions. Within the defined coordinate system, the viewing angle was expressed in terms of θr and φr. Because the camera moved only within the XZ plane, φr remained at 0°, while θr took on three values:
  • θr1 = −16.7° (left view);
  • θr2 = 0° (frontal view);
  • θr3 = 16.7° (right view).
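These side-view angles follow directly from the deployment geometry in Section 3.2.2 (a 3 m lateral offset at a 10 m range); a quick check:

```python
import numpy as np

# Lateral offset of positions 2/3 relative to position 1, at 10 m range.
print(np.degrees(np.arctan2(3.0, 10.0)))  # ~16.7 degrees
```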
Following the above steps, the incident and viewing angles for facets 1 through 5 were determined. The angular data are summarized in Table 1.

3.2.6. Calculation of the Matching Facet and Extraction of Image Grayscale Information

The matching facet between the multi-faceted standard body and the pedestrian target was determined using a sample scene, in which the camera, placed at position 1, captured a frontal image of the pedestrian target. In Figure 19, the red region on the pedestrian target was approximated as a plane. Hexagonal patches were pre-applied to the target, and each patch was compared with every facet of the standard body using the Earth Mover’s Distance (EMD) algorithm to identify the best match.
For the scene with the camera positioned in front of the pedestrian target, the EMD values between surface A on the pedestrian target and all facets of the standard body were calculated using Equation (5). These results are presented in Table 2. Comparison of the values revealed that facet 8 of the standard body had the smallest EMD value (22.91) relative to surface A. Therefore, facet 8 was identified as the matching facet for the approximate frontal region of the pedestrian target.
Subsequently, the average grayscale values LMF and LMeF of the approximate region on the pedestrian target and its corresponding matching facet on the standard body were extracted and recorded.

3.2.7. Laboratory Calibration of the TDMFSB

After obtaining the incident light vectors for facets 1 to 5, the standard body was transported to the laboratory for calibration, following the procedure described in Section 2. The laboratory setup included a six-degree-of-freedom robotic arm and a monochrome camera. Both the standard white plate and the standard body undergoing calibration were positioned 2.3 m from the camera. To match the 550 nm bandpass filter mounted on the camera lens, a tunable broadband laser source set to 550 nm was used for illumination.
In the field experiment, the incident angle of the light source was (θi, Φi), and the angle between the laser and the camera was denoted by θ. During laboratory calibration, the camera was placed in three orientations relative to the standard body: directly in front, 16.7° to the left, and 16.7° to the right. The robotic arm’s zenith angle was adjusted to simulate these camera viewing angles, as defined by φr in Table 1.
The standard white plate and the standard body were alternately mounted on the robotic arm. At each angular position listed in Table 1, images were captured in accordance with the method described in Section 2. Illuminance readings were recorded for each image. Representative images obtained during this process are shown in Figure 20. Labels 1–5 correspond to the sources of incident light vectors; φ1, φ2, and φ3 represent the angular positions listed in Table 1, simulating the left view (arm raised), frontal view (arm level), and right view (arm lowered), respectively—corresponding to field calibration perspectives.
After image acquisition, each image set was processed to extract the grayscale values $L_r^{WP}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ and $L_r^{TDMFSB}(\theta_i,\Phi_i,\theta_r,\Phi_r,\lambda)$ from both the standard white plate and the corresponding facet of the standard body. Using Equations (12) and (13), the synthetic BRDF value for the matching facet of the standard body was calculated. The resulting BRDF values were visualized using pseudo-color processing applied to monochrome images of the standard body. The results are displayed in Figure 21.

3.3. Computation of BRDF Calibration Results

After obtaining the BRDF values for each facet of the standard body through laboratory calibration, the final BRDF values of the pedestrian target were computed. This was achieved by combining the calibrated BRDF values with the average grayscale values of the matching and measured facets, using the equations previously described. Both regional averaging and pixel-wise computation approaches were employed.
Calibration was conducted from three camera positions—front, side, and back—for each orientation of the pedestrian target, resulting in nine sets of data. The computed BRDF results were then visualized using pseudo-color rendering. The visualization outcomes are presented in Figure 22. The calculation process for the quantities labeled in the figure is described in Section 3.4.3.

3.4. Correlative Analysis of Pedestrian Target Characteristics and AEB Performance in Autonomous Driving

Following the BRDF calibration of the pedestrian targets, a series of full-scale vehicle tests were performed to evaluate AEB (autonomous emergency braking) performance. These tests, which assessed targets with differing characteristics, were conducted in accordance with the CPFAO and CPLA test scenarios outlined in the C-NCAP protocol. CPFAO (car-to-pedestrian farside adult with obstruction) refers to a scenario in which the vehicle collides with a farside adult pedestrian target under an obstruction condition. In this scenario, the sensors of the vehicle under test primarily detect the front and side information of the pedestrian target. CPLA (car-to-pedestrian longitudinal adult) refers to a pedestrian target walking longitudinally in the vehicle’s path. In this scenario, the sensors of the vehicle under test (VUT) primarily detect the rear-side information of the pedestrian target.
In Figure 23a, F stands for the acceleration distance of the target; D stands for the distance from the target’s starting point to the theoretical crash point; L stands for the 25% crash point on the front structure of the test vehicle. In Figure 23b, G stands for the acceleration distance of the pedestrian target; S stands for the constant-speed walking distance of the pedestrian target; and N stands for the 50% crash point on the front structure of the test vehicle. In addition, a grating door is the trigger device for the pedestrian target’s traction system, designed to synchronize the timing between the vehicle under test (VUT) and the pedestrian target. The position of the grating door varies with the vehicle speed to ensure that when the VUT passes through the grating door, the traction system begins to move the pedestrian target. Without any braking intervention by the VUT, this setup enables a collision to occur between the vehicle and the pedestrian target at the anticipated crash point.
Scoring criteria were established to assess the performance of intelligent vehicles, providing a standardized framework for evaluating their perception capabilities. As shown in Figure 23, the CPFAO and CPLA scoring systems are based on the distance between the vehicle’s stop point and the theoretical crash point. A score of 0 was assigned if the stop point was within 0–15 cm of the crash point. The score increased by 5 points for every additional 15 cm: a score of 5 for 15–30 cm, a score of 10 for 30–45 cm, and so on. A maximum score of 100 was awarded when the stop point exceeded 3 m from the theoretical crash point. The scoring system is summarized in Table 3.
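The stepwise rule maps directly to a small scoring function; a sketch under the banding described above (the function name is ours):

```python
def aeb_score(stop_distance_m: float) -> int:
    """Score from the distance between the stop point and the theoretical
    crash point: 0 points within 0-0.15 m, +5 per additional 0.15 m band,
    capped at 100 beyond 3 m (cf. Table 3)."""
    if stop_distance_m >= 3.0:
        return 100
    return int(stop_distance_m // 0.15) * 5

# Examples consistent with the text: 0.10 m -> 0, 0.20 m -> 5, 0.40 m -> 10.
assert aeb_score(0.10) == 0 and aeb_score(0.20) == 5 and aeb_score(0.40) == 10
```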
Collision avoidance tests were conducted using a typical pedestrian target, considered a vulnerable road user (VRU), in both the CPFAO and CPLA scenarios as defined by C-NCAP, using a range of vehicle speeds.
In the CPFAO scenario, the test vehicle accelerated from a standstill to a constant speed of 10 km/h, 20 km/h, or 30 km/h. The pedestrian target crossed the street perpendicularly at a constant speed of 6.5 km/h, emerging from behind an obstacle on the far side of the road (opposite the driver’s side).
In the CPLA scenario, the test vehicle maintained a constant speed of 10 km/h, 15 km/h, or 20 km/h, while the pedestrian target moved longitudinally at 6.5 km/h. The target began 25 m ahead of the vehicle and moved in the same direction.
In both scenarios, once the vehicle’s sensors detected the pedestrian target, the AEB system was activated. The system first issued a warning and then automatically initiated emergency braking in an effort to avoid a collision. During each test, the response time following AEB activation and the final stop position of the vehicle were precisely recorded. A theoretical crash point—defined as the collision location assuming no evasive action by the pedestrian and no braking by the vehicle—was established. The distance between the actual stop point and the theoretical crash point was then measured.
For each vehicle speed, three collision test trials were conducted. The displacement and velocity profiles of various objects were derived from the experiments and are presented below. Figure A1, Figure A2, Figure A3, Figure A4, Figure A5 and Figure A6 of Appendix A display the experimental data obtained at different vehicle speeds in the CPFAO and CPLA test scenarios.

3.4.1. CPFAO Test Results

Tests were conducted in the CPFAO scenario at vehicle speeds of 10 km/h, 20 km/h, and 30 km/h, with the following data collected.
(1) CPFAO—10 km/h
In the CPFAO scenario, real vehicle tests were conducted at a speed of 10 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A1. The average collision distance for the aged pedestrian target was 1.15 m, while that for the new pedestrian target was 0.43 m.
(2) CPFAO—20 km/h
In the CPFAO scenario, real vehicle tests were conducted at a speed of 20 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A2. The average collision distance for the aged pedestrian target was 1.04 m, while that for the new pedestrian target was 1.4 m.
(3) CPFAO—30 km/h
In the CPFAO scenario, real vehicle tests were conducted at a speed of 30 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A3. The average collision distance for the aged pedestrian target was 0.53 m, while that for the new pedestrian target was 0.09 m.
The scoring results for the CPFAO test scenario are presented in Table 4.

3.4.2. CPLA Test Results

Tests were conducted in the CPLA scenario at vehicle speeds of 10 km/h, 15 km/h, and 20 km/h, with the following data collected.
(1) CPLA—10 km/h
In the CPLA scenario, real vehicle tests were conducted at a speed of 10 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A4. The average collision distance for the aged pedestrian target was 2.42 m, while that for the new pedestrian target was 0.85 m.
(2) CPLA—15 km/h
In the CPLA scenario, real vehicle tests were conducted at a speed of 15 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A5. The average collision distance for the aged pedestrian target was 0.30 m, while that for the new pedestrian target was 0.14 m.
(3) CPLA—20 km/h
In the CPLA scenario, real vehicle tests were conducted at a speed of 20 km/h using the two types of pedestrian targets. The experimental results are shown in Figure A6. The average collision distance for the aged pedestrian target was 0.38 m, while that for the new pedestrian target was 0.65 m.
The scoring results for the CPLA test scenario are presented in Table 5.

3.4.3. BRDF vs. Score Data Analysis

The results above were summarized based on the vehicle scores in both test scenarios. First, the BRDF calibration values of the pedestrian targets were evaluated. For each target, the average BRDF values for the front, side, and back are denoted as frfront, frside, and frback, respectively.
For the CPFAO scenario, in which the pedestrian crosses the street, the combined average of the front and side BRDF values was used as the representative calibration value, denoted as fr(CPFAO). In the CPLA scenario, where the vehicle follows the pedestrian, the back BRDF value was used as the representative value, denoted as fr(CPLA). The overall average BRDF value across all facets of a pedestrian target is denoted as fravg, and the relative spread of the facet values (a non-uniformity metric) is represented as fr-uni.
The computational formula is given by

$$f_{r\text{-}uni} = \frac{f_{r\,\max} - f_{r\,\min}}{f_{r\,avg}}. \tag{15}$$
Based on the calculation using Equation (15), the BRDF uniformity values fr-uni(aged) and fr-uni(new) for the aged and new pedestrian targets are 15.1% and 12.0%, respectively, a relative difference of 21% (taking the aged value as the denominator). In the CPFAO scenario, the approaching vehicle primarily observes the front and side surfaces of the pedestrian target. The average BRDF values of the front and side surfaces, denoted as fr(CPFAO)aged and fr(CPFAO)new, are 0.0274 and 0.0206, respectively: a difference amounting to roughly 25% of the aged target's value. Similarly, in the CPLA scenario, where the vehicle approaches from behind, the observed surface is the back of the pedestrian target. The average BRDF values of the back surfaces, denoted as fr(CPLA)aged and fr(CPLA)new, are 0.0247 and 0.0187, respectively; again, the difference is about 25% of the aged target's value.
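These percentages can be reproduced from the reported values, with the aged target's figure used as the denominator in each case:

```python
# Reproducing the reported percentages from the values in this subsection.
f_uni_aged, f_uni_new = 0.151, 0.120
print((f_uni_aged - f_uni_new) / f_uni_aged)  # ~0.21 -> the 21% uniformity gap

fr_cpfao = (0.0274, 0.0206)  # (aged, new), front/side average BRDF
fr_cpla = (0.0247, 0.0187)   # (aged, new), back-surface average BRDF
for aged, new in (fr_cpfao, fr_cpla):
    print((aged - new) / aged)  # ~0.25 in both scenarios
```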
Based on the above calculation rules, the BRDF calibration results and variation ranges for the pedestrian targets are illustrated in Figure 24.
The final experimental scores for the two pedestrian targets are summarized in Figure 25. Let the test score of pedestrian target i (where i = new, aged) in test scenario s (where s = CPFAO, CPLA) during the j-th trial be denoted as $S_{i,s,j}$. The final average score $\bar{S}_i$ of each type of pedestrian target is calculated as follows:

$$\bar{S}_i = \frac{1}{N} \sum_{s} \sum_{j=1}^{n_s} S_{i,s,j}. \tag{16}$$

Here, N represents the total number of tests conducted for each target across both scenarios, and $n_s$ denotes the number of test repetitions under scenario s (in this study, $n_{CPFAO} = 9$ and $n_{CPLA} = 9$; thus, N = 18). $S_{i,s,j}$ is the score of the j-th test. The average scores $\bar{S}_{new}$ and $\bar{S}_{aged}$ are calculated separately for the new and aged pedestrian targets, and the results are summarized in Figure 25.

3.4.4. Conclusions from Experimental Data

Based on the experimental data described above, the final conclusions are as follows:
  • In both the CPFAO and CPLA test scenarios, as vehicle speed increased, the corresponding test scores decreased. This trend indicates that overall safety performance declines with rising vehicle velocity.
  • Although the new and aged pedestrian targets belong to the same pedestrian target type, the BRDF value of the aged target in the direction of vehicle approach was approximately 25% higher than that of the new target in both test scenarios. Consequently, the aged target was more easily detected by the vehicle under the same environmental conditions, allowing the vehicle to initiate braking more effectively. This surface change increases BRDF values and enhances the target’s detectability, potentially leading to artificially improved safety performance scores during AEB testing.
  • After approximately six months of repeated use, the conformity of the aged target with ISO requirements declined by around 21% compared to that of the new target. Repeated impacts and surface abrasion significantly degraded the diffuse reflection properties of the target's surface. The front-facing surface of the aged target became smoother, resulting in stronger reflectivity and improved detectability. As a result, tests using the aged target yielded higher scores than those using the new one: a safety paradox in which degraded physical targets produce misleadingly favorable results, posing safety risks with continued usage.

4. Summary and Discussion

This study presents a comprehensive account of the development and application of perception testing modules and field calibration techniques for typical target objects in intelligent connected vehicles. As the technology behind intelligent connected vehicles continues to rapidly evolve, the need for reliable and robust testing methodologies has become increasingly urgent.
Among the critical safety functions of advanced driver assistance systems (ADASs), autonomous emergency braking (AEB) is particularly vital. However, inconsistencies in the quality and optical reflectance characteristics of test targets have compromised the accuracy and reliability of current AEB evaluations, ultimately undermining confidence in the assessed safety performance of autonomous vehicles.
To address this issue, this study introduces a field calibration method based on bidirectional reflectance distribution function (BRDF) imaging. A novel three-dimensional multi-faceted standard body was employed as a value transfer medium to enable accurate on-site calibration of the optical reflectance characteristics of typical pedestrian targets. The proposed method comprehensively evaluates target reflectance by constructing a three-dimensional coordinate system, capturing multiple images, calculating light source and viewing vectors, and deriving BRDF values.
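To make the value-transfer step concrete, the sketch below uses the generic reference-plate relation common in image-based BRDF work; we assume it approximates the white-plate-to-target transfer described above, and the function name, variable names, and cosine correction are our assumptions, not the authors' implementation.

```python
import math

# Illustrative value-transfer step: BRDF of a target pixel from camera
# digital numbers (DN) and a reference plate of known BRDF, correcting
# for the two incidence angles (degrees).

def transfer_brdf(dn_target, dn_ref, fr_ref, theta_i_target, theta_i_ref):
    cos_t = math.cos(math.radians(theta_i_target))
    cos_r = math.cos(math.radians(theta_i_ref))
    return fr_ref * (dn_target / dn_ref) * (cos_r / cos_t)

# A target pixel half as bright as an assumed near-Lambertian plate
# (fr ~= 0.99 / pi) under identical illumination geometry:
print(transfer_brdf(512, 1024, 0.99 / math.pi, 45.0, 45.0))  # ~0.158 sr^-1
```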
Experimental results confirmed that the proposed method can verify compliance with the optical reflectance requirements specified in standards such as ISO 19206, providing strong assurance of the reliability and consistency of AEB testing.
In practical application, this study demonstrates the effectiveness of the autonomous driving perception testing module and the field calibration techniques through full-scale vehicle tests. By creating modular test scenarios that replicate a range of real-world road and traffic conditions, the study enabled a thorough evaluation of perception, decision-making, and execution capabilities in autonomous vehicles. The test results show that the developed testing module and calibration technology significantly enhance both the efficiency and accuracy of autonomous vehicle testing. This lays a strong foundation for the broader commercialization of intelligent connected vehicles.
Importantly, this study also highlights a potential safety concern in AEB testing: when a pedestrian target's optical properties drift significantly, as observed here, where the aged target's BRDF uniformity rose to about 15% and its BRDF magnitude increased by roughly 25%, aged targets may counterintuitively receive higher scores than new ones, a situation referred to as "score inversion." This outcome compromises the fairness and reliability of safety performance assessments. AEB test developers and relevant stakeholders should therefore pay close attention to BRDF uniformity during testing.
To enhance the study's practical value, we propose a three-tiered compensation framework: (1) short-term correction factors derived from real-time BRDF monitoring to neutralize uniformity bias, as sketched below; (2) medium-term replacement cycles calibrated to target degradation rates; and (3) long-term standardization of a BRDF uniformity threshold (e.g., ≤15%). Such a framework would help prevent misleading results and support more robust and fair evaluation standards.
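As an illustration of tier (1), the sketch below deflates raw scores earned with an over-reflective target and rejects targets beyond a uniformity gate; the linear scaling and the 15% gate are illustrative policy choices drawn from the discussion above, not values prescribed by ISO 19206 or C-NCAP.

```python
# Sketch of a tier (1) correction factor; all names and rules illustrative.

def corrected_score(raw_score, fr_measured, fr_baseline, uni_measured,
                    uni_threshold=0.15):
    if uni_measured > uni_threshold:
        raise ValueError("target out of tolerance: recalibrate or replace")
    # Scale down scores obtained with a brighter-than-baseline target.
    k = min(1.0, fr_baseline / fr_measured)
    return raw_score * k

# Aged target 25% brighter than baseline: a raw 60 deflates to 48 points.
print(corrected_score(60, fr_measured=0.025, fr_baseline=0.020, uni_measured=0.12))
```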
In conclusion, the development of an autonomous driving perception testing module and field calibration techniques for typical targets has effectively improved the testing accuracy and efficiency of intelligent connected vehicles. These contributions offer essential support for the commercialization of autonomous driving technologies and address a critical gap in the reliable evaluation of the optical properties of test targets. Looking ahead, as technology continues to advance and applications expand, these techniques are expected to play an increasingly pivotal role in accelerating the development of the intelligent connected vehicle industry.

Author Contributions

Conceptualization, Y.H. and W.W.; methodology, Y.H., W.W., and R.S.; software, W.W., X.X., and Y.H.; validation, W.W., Y.H., and C.Z.; formal analysis, Y.H., W.W., T.L., and X.X.; investigation, Y.H. and W.W.; resources, Y.H., C.Z., H.W., and G.F.; data curation, W.W., X.X., T.L., and Y.H.; writing—original draft preparation, W.W. and Y.H.; writing—review and editing, Y.H., W.W., R.S., and Q.Z.; visualization, Y.H., W.W., X.X., and R.S.; supervision, Y.H. and H.G.; project administration, Y.H. and H.G.; funding acquisition, H.G., Y.H., G.F., H.W., and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key R&D Projects, China (2021YFF0600204); the National Institute of Metrology Research Project (YFCZL2202; AKYZD1909-2&2004-5); and the International Science & Technology Cooperation Program of China (2015DFA60390).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors acknowledge the support of Hebei PriAuto Tech Co. Ltd.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AEB: Automatic Emergency Braking System
BRDF: Bidirectional Reflectance Distribution Function
EMD: Earth Mover's Distance
RCS: Radar Cross-Section
TDMFSB: Three-Dimensional Multi-Faceted Standard Body
CPFAO: Car-to-Pedestrian Farside Adult with Obstruction
CPLA: Car-to-Pedestrian Longitudinal Adult
ADAS: Advanced Driver Assistance System
V2V: Vehicle-To-Vehicle
UAV: Unmanned Aerial Vehicle
CNY: Chinese Yuan
WP: White Plate
MF: Matching Facet
MeF: Measured Facet

Appendix A

To facilitate understanding, the annotations in Figure A1a are explained as an illustrative example. The labeling conventions in Figure A1b–d and Figure A2, Figure A3, Figure A4, Figure A5 and Figure A6 are identical to those defined in Figure A1a.
Figure A1a shows the following:
  • Grating door time: The moment when the front end of the VUT passes through the grating door, at which time the pedestrian target’s traction system is activated, and the pedestrian target begins to move.
  • Warning time: The first moment when the AEB system issues an alert to the driver.
  • Stop time: The moment when the VUT comes to a complete stop.
  • Pedestrian target displacement: The curve represents the change in displacement of the pedestrian target over time.
  • Vehicle displacement: The curve represents the change in displacement of the VUT over time.
  • Pedestrian target velocity: The curve represents the change in velocity of the pedestrian target over time.
  • Vehicle velocity: The curve represents the change in velocity of the VUT over time.
  • Left vertical axis: The velocity axis.
  • Right vertical axis: The displacement axis.
  • Horizontal axis: The time axis, with the moment the VUT begins to move set as the time-zero point.
  • Distance between stop point and crash point: The measured distance between the front end of the vehicle and the anticipated crash point after the VUT comes to a complete stop.
  • Distance between alert point and crash point: The sum of the distance between the alert point and the stop point and the distance between the stop point and the crash point (see the sketch after this list).
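The last two bulleted distances combine by simple addition; a short worked example follows, with made-up numbers that do not correspond to any specific trial in Figures A1–A6.

```python
# Worked example of the two distances defined above; numbers are made up.
stop_to_crash = 0.72   # m, VUT front end to anticipated crash point at standstill
alert_to_stop = 6.30   # m, travelled between the AEB warning and the full stop
alert_to_crash = alert_to_stop + stop_to_crash
print(f"alert point was {alert_to_crash:.2f} m before the crash point")  # 7.02 m
```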
Figure A1. CPFAO—10 km/h test results: (a) CPFAO aged pedestrian target 10 km/h test 1; (b) CPFAO aged pedestrian target 10 km/h test 2; (c) CPFAO aged pedestrian target 10 km/h test 3; (d) CPFAO new pedestrian target 10 km/h test 1.
Figure A2. CPFAO—20 km/h test results: (a) CPFAO aged pedestrian target 20 km/h test 1; (b) CPFAO aged pedestrian target 20 km/h test 2; (c) CPFAO new pedestrian target 20 km/h test 1; (d) CPFAO new pedestrian target 20 km/h test 2; (e) CPFAO new pedestrian target 20 km/h test 3.
Figure A3. CPFAO—30 km/h test results: (a) CPFAO aged pedestrian target 30 km/h test 1; (b) CPFAO aged pedestrian target 30 km/h test 2; (c) CPFAO aged pedestrian target 30 km/h test 3; (d) CPFAO new pedestrian target 30 km/h test 1; (e) CPFAO new pedestrian target 30 km/h test 2; (f) CPFAO new pedestrian target 30 km/h test 3.
Figure A4. CPLA—10 km/h test results: (a) CPLA aged pedestrian target 10 km/h test 1; (b) CPLA aged pedestrian target 10 km/h test 2; (c) CPLA new pedestrian target 10 km/h test 1; (d) CPLA new pedestrian target 10 km/h test 2; (e) CPLA new pedestrian target 10 km/h test 3.
Figure A5. CPLA—15 km/h test results: (a) CPLA aged pedestrian target 15 km/h test 1; (b) CPLA aged pedestrian target 15 km/h test 2; (c) CPLA aged pedestrian target 15 km/h test 3; (d) CPLA new pedestrian target 15 km/h test 1; (e) CPLA new pedestrian target 15 km/h test 2; (f) CPLA new pedestrian target 15 km/h test 3.
Figure A6. CPLA—20 km/h test results: (a) CPLA aged pedestrian target 20 km/h test 1; (b) CPLA aged pedestrian target 20 km/h test 2; (c) CPLA aged pedestrian target 20 km/h test 3; (d) CPLA new pedestrian target 20 km/h test 1; (e) CPLA new pedestrian target 20 km/h test 2; (f) CPLA new pedestrian target 20 km/h test 3.

References

  1. World Health Organization. Global Status Report on Road Safety: Time for Action; World Health Organization: Geneva, Switzerland, 2015.
  2. Searson, D.; Anderson, R.W.; Hutchinson, T.P. Integrated assessment of pedestrian head impact protection in testing secondary safety and autonomous emergency braking. Accid. Anal. Prev. 2014, 63, 1–8.
  3. Li, L.; Zhu, X.C.; Dong, X.F.; Ma, Z.X. A Research on the Collision Avoidance Strategy for Autonomous Emergency Braking System. Automot. Eng. 2015, 37, 168–174.
  4. Zuo, P.W.; Zhang, L.S.; Li, Y.X. Current Status and Future Trends in the Development of Automatic Emergency Braking Systems. Auto Ind. Res. 2017, 2, 25–29.
  5. Wada, T.; Doi, S.I.; Tsuru, N.; Isaji, K.; Kaneko, H. Characterization of expert drivers' last-second braking and its application to a collision avoidance system. IEEE Trans. Intell. Transp. Syst. 2010, 11, 413–422.
  6. Ren, Z.B.; Guo, C.C.; Teng, J.K. Test Scheme for Autonomous Driving of Intelligent Connected Vehicles. Commer. Veh. 2024, 1, 92–93.
  7. ISO 19206-1-2018; Road Vehicles—Test Devices for Target Vehicles, Vulnerable Road Users and Other Objects, for Assessment of Active Safety Functions—Part 1: Requirements for Passenger Vehicle Rear-End Targets. ISO: Geneva, Switzerland, 2018.
  8. ISO 19206-2-2018; Road Vehicles—Test Devices for Target Vehicles, Vulnerable Road Users and Other Objects, for Assessment of Active Safety Functions—Part 2: Requirements for Pedestrian Targets. ISO: Geneva, Switzerland, 2018.
  9. ISO 19206-4-2020; Road Vehicles—Test Devices for Target Vehicles, Vulnerable Road Users and Other Objects, for Assessment of Active Safety Functions—Part 4: Requirements for Bicyclist Targets. ISO: Geneva, Switzerland, 2020.
  10. ISO 19206-3-2021; Road Vehicles—Test Devices for Target Vehicles, Vulnerable Road Users and Other Objects, for Assessment of Active Safety Functions—Part 3: Requirements for Passenger Vehicle 3D Targets. ISO: Geneva, Switzerland, 2021.
  11. Khan, A.U.; Mint, S.J.; Shah, S.N.H.; Schneider, C.; Robert, J. Exploring the Impact of Bistatic Target Reflectivity in ISAC-Enabled V2V Setup Across Diverse Geometrical Road Layouts. IEEE Open J. Veh. Technol. 2025, 6, 948–968.
  12. Jiang, Y.N. Pedestrian Tracking Research in Automated Driving Scene. Master's Thesis, Chongqing University of Technology, Chongqing, China, 2020.
  13. Liu, Z.; Ma, W.; Liu, W.; Zhang, S. Research on Calibration Technology of AEB Target Pedestrian Dummy Based on Radar and Infrared Reflected Signal. In Proceedings of the 2021 13th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), Beihai, China, 16–17 January 2021.
  14. Luo, R.H. Development and Research of AEB Dummy Based on Chinese Human Body Characteristics. Master's Thesis, Hunan University, Changsha, China, 2020.
  15. Xie, X.B.; Feng, X.F. Exploration of Metrological Calibration Methods for Test Targets in Intelligent and Connected Vehicle Testing. China Metrol. 2024, 6, 37–38.
  16. Belgovane, D.J., Jr. Advancing Millimeter-Wave Vehicular Radar Test Targets for Automatic Emergency Braking (AEB) Sensor Evaluation. Ph.D. Thesis, Ohio State University, Columbus, OH, USA, 2017.
  17. Sharma, A.; Malhotra, J. Evaluating the effects of material reflectivity and atmospheric attenuation on photonic radar performance in free space optical channels. J. Opt. Commun. 2023, s2051–s2057. Available online: https://www.degruyterbrill.com/document/doi/10.1515/joc-2023-0176/html (accessed on 8 August 2025).
  18. Stark, B.; Zhao, T.; Chen, Y. An analysis of the effect of the bidirectional reflectance distribution function on remote sensing imagery accuracy from small unmanned aircraft systems. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016.
  19. Zou, Y.; Zhang, L.; Zhang, J.; Li, B.; Lv, X. Developmental trends in the application and measurement of the Bidirectional Reflection Distribution Function. Sensors 2022, 22, 1739.
  20. Jing, G.; Luo, Z.; Gao, Y.; Wang, X.; Hong, H. Review on metrological testing technology for essential elements of highway transport. J. Highw. Transp. Res. Dev. 2025, 42, 1–21.
  21. Wang, B.; Wu, Z.-S.; Gong, Y. Influence of surface material characteristics on laser radar 3D imaging of targets. In Proceedings of the Optical Metrology and Inspection for Industrial Applications, Beijing, China, 18–20 October 2010.
  22. Liu, D.; Huang, W.; Chu, R.; Li, Z.; Jin, X.; Zhang, H.; Wang, Y.; Ji, S. Study on Performance Testing and Evaluation of Autonomous Emergency Braking System Based on Self-Constructed Comprehensive Performance Evaluation Index Model. Sensors 2025, 25, 2171.
  23. Murray-Coleman, J.; Smith, A. The automated measurement of BRDFs and their application to luminaire modeling. J. Illumin. Eng. Soc. 1990, 19, 87–99.
  24. Dana, K.J. BRDF/BTF measurement device. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001.
  25. Matusik, W. A Data-Driven Reflectance Model. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2003.
  26. Kim, M.; Jin, C.; Lee, S.; Kim, K.-M.; Lim, J.; Choi, C. Calibration of BRDF based on the field goniometer system using a UAV multispectral camera. Sensors 2022, 22, 7476.
  27. Weil-Zattelman, S.; Kizel, F. Image-Based BRDF Measurement. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 3, 171–177.
  28. Wang, W.; Wu, H.; Zheng, C.; Liang, T.; Cui, S.; Xie, X.; Feng, G.; Gan, H.; He, Y. Research on calibration method for optical characteristics of pedestrian targets in autonomous vehicle testing. Proc. SPIE—Int. Soc. Opt. Eng. 2024, 13498, 64–76.
  29. Jiao, Z.; Dong, Y.; Schaaf, C.B.; Chen, J.M.; Román, M.; Wang, Z.; Zhang, H.; Ding, A.; Erb, A.; Hill, M.J. An algorithm for the retrieval of the clumping index (CI) from the MODIS BRDF product using an adjusted version of the kernel-driven BRDF model. Remote Sens. Environ. 2018, 209, 594–611.
  30. Wu, H.; Feng, G.; Zheng, C.; Li, P.; Wang, Y. Study of Space Spectral Characteristics of the BRDF Diffuse Standard Plate in Visible and Infrared Bands. In Proceedings of the Applied Optics and Photonics China (AOPC 2015), Beijing, China, 5–7 May 2015.
  31. Zhang, C.; Cai, Y.; Lin, G.; Shen, C. Deep EMD: Differentiable earth mover's distance for few-shot learning. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5632–5648.
  32. Chambers, J.M.; Mallows, C.L.; Stuck, B. A Method for Simulating Stable Random Variables. J. Am. Stat. Assoc. 1976, 71, 340–344.
  33. Wang, Y. Long-Range EEG Epilepsy Automatic Detection Algorithm Based on Bulldozer Distance. Master's Thesis, Shandong University, Jinan, China, 2014.
  34. Dong, H.; Wang, Y.; Li, J.F. Study on distribution characteristics of diffuse solar radiation in hemisphere sky. J. Xi'an Univ. Archit. Technol. (Nat. Sci. Ed.) 2018, 50, 435–440.
  35. Pinty, B.; Lattanzio, A.; Martonchik, J.V.; Verstraete, M.M.; Gobron, N.; Taberner, M.; Widlowski, J.-L.; Dickinson, R.E.; Govaerts, Y. Coupling diffuse sky radiation and surface albedo. J. Atmos. Sci. 2005, 62, 2580–2591.
  36. Tian, D.F.; Yang, S.Q.; Xu, D.W.; Ren, H.Z.; Fan, W.J.; Liu, R.Y. FAPAR retrieval from GF-5 hyperspectral images based on unified BRDF model. Natl. Remote Sens. Bull. 2023, 27, 711–723.
  37. China Automotive Technology and Research Center Co., Ltd. C-NCAP Administrative Rules (2021 Edition): Pedestrian Automatic Emergency Braking System Testing; CATARC: Tianjin, China, 2021.
Figure 1. Pre- and post-crash field measurement scene.
Figure 2. Definition of BRDF.
Figure 3. Overall calibration workflow.
Figure 4. BRDF transfer process.
Figure 5. Algorithmic processing workflow.
Figure 6. Coordinate system used (the angular parameters θi and φi are denoted by blue lines, and the calibration region is highlighted in red).
Figure 7. Procedure for computing polygon angles.
Figure 8. Determination of the incident vector.
Figure 9. Deformation of hexagon (a) before and (b) after rotation.
Figure 10. EMD computation using hexagons.
Figure 11. Determination of the reflected vector.
Figure 12. Schematic of BRDF synthesis.
Figure 13. Algorithm-processed images from the verification phase.
Figure 14. Synthetic light source vector.
Figure 15. Verification of indoor light source vector.
Figure 16. Field calibration workflow.
Figure 17. Field test scene: (a) layout of on-site equipment; (b) camera positions.
Figure 18. Images captured during the light source vector calculation phase.
Figure 19. The EMD matching process: (a) partial matching regions between the target and the standard body; (b) all facets of the standard body involved in EMD computation; (c) the computed matching facet.
Figure 20. Images captured during calibration of the standard white plate and standard body.
Figure 21. Pseudo-color visualization of BRDF results for the standard body: (a) BRDF visualization when the target is captured from θr1 = −16.7°; (b) BRDF visualization when the target is captured from θr2 = 0°; (c) BRDF visualization when the target is captured from θr3 = 16.7°.
Figure 22. BRDF calibration and pseudo-color visualization results for new and aged pedestrian targets (front, side, back views).
Figure 23. Test scenes: (a) CPFAO and (b) CPLA. F: the acceleration distance of the target; D: the distance from the target’s starting point to the theoretical crash point; point L: the 25% crash point on the front structure of the test vehicle; G: the acceleration distance of the pedestrian target; S: the constant-speed walking distance of the pedestrian target; point N: the 50% crash point on the front structure of the test vehicle; VUT: vehicle under test; OV: obstacle vehicle.
Figure 24. BRDF calibration results and variation ranges of pedestrian targets.
Figure 25. Summary of experimental scores for the two targets.
Table 1. Angular computation results for facets 1–5 during field and laboratory calibration stages.

| Source of Light Vector | Light Vector Zenith Angle φi | Viewing Vector Zenith Angle φr | Viewing Vector Azimuth Angle θr1 (Left View) | Viewing Vector Azimuth Angle θr2 (Frontal View) | Viewing Vector Azimuth Angle θr3 (Right View) | Angle Between Light and Camera Viewing Direction θi1 (Frontal View) | Angle Between Light and Camera Viewing Direction θi2 (Left or Right View) | Robotic Arm Zenith Angle φ1 (Left View) | Robotic Arm Zenith Angle φ2 (Frontal View) | Robotic Arm Zenith Angle φ3 (Right View) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Facet 1 | 79.9° | | −16.7° | | 16.7° | 79.88° | 80.83° | −16.7° | | 16.7° |
| Facet 2 | 69.1° | | −16.7° | | 16.7° | 69.73° | 66.27° | −16.7° | | 16.7° |
| Facet 3 | 80° | | −16.7° | | 16.7° | 80.06° | 81.92° | −16.7° | | 16.7° |
| Facet 4 | 69.5° | | −16.7° | | 16.7° | 70.7° | 77.20° | −16.7° | | 16.7° |
| Facet 5 | 80.5° | | −16.7° | | 16.7° | 80.74° | 84.92° | −16.7° | | 16.7° |
Table 2. EMD values between surface A and facets 1–16 of the standard body.

| Surface No. | EMD Value to Surface A | Surface No. | EMD Value to Surface A |
| --- | --- | --- | --- |
| Facet 1 | 69.9691 | Facet 2 | 63.3331 |
| Facet 3 | 51.5347 | Facet 4 | 35.7649 |
| Facet 5 | 75.5017 | Facet 6 | 78.9820 |
| Facet 7 | 36.9407 | Facet 8 | 22.9104 |
| Facet 9 | 52.1091 | Facet 10 | 61.2501 |
| Facet 11 | 68.0692 | Facet 12 | 52.6523 |
| Facet 13 | 41.0880 | Facet 14 | 67.0089 |
| Facet 15 | 62.8452 | Facet 16 | 77.0957 |
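Reading Table 2, the matching facet is the one with the smallest EMD to surface A; assuming the matching rule is a straightforward arg-min (consistent with the "computed matching facet" in Figure 19c), a minimal sketch:

```python
# Facet matching over the Table 2 values; arg-min rule is our assumption.
emd_to_surface_a = {
    1: 69.9691, 2: 63.3331, 3: 51.5347, 4: 35.7649,
    5: 75.5017, 6: 78.9820, 7: 36.9407, 8: 22.9104,
    9: 52.1091, 10: 61.2501, 11: 68.0692, 12: 52.6523,
    13: 41.0880, 14: 67.0089, 15: 62.8452, 16: 77.0957,
}
matching_facet = min(emd_to_surface_a, key=emd_to_surface_a.get)
print(matching_facet)  # -> 8
```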
Table 3. CPFAO and CPLA scoring.

| Stop Distance (cm) | Score | Stop Distance (cm) | Score | Stop Distance (cm) | Score |
| --- | --- | --- | --- | --- | --- |
| 0~15 | 0 | 105~120 | 35 | 210~225 | 70 |
| 15~30 | 5 | 120~135 | 40 | 225~240 | 75 |
| 30~45 | 10 | 135~150 | 45 | 240~255 | 80 |
| 45~60 | 15 | 150~165 | 50 | 255~270 | 85 |
| 60~75 | 20 | 165~180 | 55 | 270~285 | 90 |
| 75~90 | 25 | 180~195 | 60 | 285~300 | 95 |
| 90~105 | 30 | 195~210 | 65 | >300 | 100 |
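For convenience, the Table 3 mapping from stop distance to score can be expressed as a lookup; how values falling exactly on a bin boundary are scored is our assumption, chosen to match the trials in Tables 4 and 5 (e.g., a 45 cm stop distance scoring 15 points).

```python
import bisect

# Table 3 lookup: each bin edge (cm) crossed adds 5 points, capped at 100.
EDGES = [15, 30, 45, 60, 75, 90, 105, 120, 135, 150,
         165, 180, 195, 210, 225, 240, 255, 270, 285, 300]

def score_from_distance(stop_cm):
    """Score for a given stop-point-to-crash-point distance in cm."""
    return 5 * bisect.bisect_right(EDGES, stop_cm)

print(score_from_distance(193))  # -> 60, cf. the aged target's 1.93 m CPFAO run
```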
Table 4. Test results of CPFAO scenario for new and aged pedestrian targets. Distances between stopping point and theoretical collision point and AEB test scores at different speeds and in different trials.

| Target Type | Speed (km/h) | Test No. | Distance (m) | Score (points) |
| --- | --- | --- | --- | --- |
| New pedestrian target | 10 | Test 1 | 0.43 | 10 |
| | | Test 2 | / | / |
| | | Test 3 | / | / |
| | 20 | Test 1 | 1.40 | 45 |
| | | Test 2 | 0 | 0 |
| | | Test 3 | 0 | 0 |
| | 30 | Test 1 | 0 | 0 |
| | | Test 2 | 0.09 | 0 |
| | | Test 3 | 0 | 0 |
| Aged pedestrian target | 10 | Test 1 | 1.93 | 60 |
| | | Test 2 | 0.54 | 20 |
| | | Test 3 | 0.98 | 30 |
| | 20 | Test 1 | 0 | 0 |
| | | Test 2 | / | / |
| | | Test 3 | 1.04 | 30 |
| | 30 | Test 1 | 0.72 | 20 |
| | | Test 2 | 0.53 | 15 |
| | | Test 3 | 0.33 | 10 |
Table 5. Test results of CPLA scenario for new and aged pedestrian targets. Distances between stopping point and theoretical collision point and AEB test scores at different speeds and in different trials.

| Target Type | Speed (km/h) | Test No. | Distance (m) | Score (points) |
| --- | --- | --- | --- | --- |
| New pedestrian target | 10 | Test 1 | 0 | 0 |
| | | Test 2 | 1.3 | 40 |
| | | Test 3 | 1.15 | 30 |
| | 15 | Test 1 | 0.23 | 5 |
| | | Test 2 | 0.05 | 0 |
| | | Test 3 | 0 | 0 |
| | 20 | Test 1 | 0.2 | 5 |
| | | Test 2 | 1.3 | 40 |
| | | Test 3 | 0.45 | 15 |
| Aged pedestrian target | 10 | Test 1 | 0 | 0 |
| | | Test 2 | 2.42 | 80 |
| | | Test 3 | / | / |
| | 15 | Test 1 | 0.01 | 0 |
| | | Test 2 | 0.59 | 15 |
| | | Test 3 | 0 | 0 |
| | 20 | Test 1 | 0.49 | 15 |
| | | Test 2 | 0.54 | 15 |
| | | Test 3 | 0.12 | 0 |
