Height Fluctuations and Surface Gradients in Topographic Measurements

Topographic maps are composed of pixels associated with coordinates (x, y, z) on a surface. Each pixel location (x, y) is linked with fluctuations in a measured height sample (z). Fluctuations here are uncertainties in heights estimated from multiple topographic measurements at the same position. Height samples (z) are measured at individual locations (x, y) in topographic measurements and compared with gradients on topographies. Here, gradients are slopes on a surface, calculated at the scale of the sampling interval from the inclination angles of vectors normal to triangular facets formed by adjacent height samples (z = z(x, y)). Similarities between maps of gradient logarithms and maps of height fluctuations are apparent, showing that the fluctuations depend exponentially on local surface gradients. The highest fluctuations correspond to tool/material interactions for turned surfaces and to regions of maximum plastic deformation for sandblasted surfaces. Finally, for abraded, heterogeneous, multilayer surfaces, fluctuations depend on both abrasion and light/sub-layer interactions. It appears that the nature of irregular surface topographies governs fluctuation regimes, and that regions indicative of surface functionality, or integrity, can have the highest fluctuations.


Introduction
Surface topographies are important for understanding physical, chemical, and biological phenomena and surface creation in all manufacturing processes. Krolczyk et al. [1] show the importance and the influence of manufacturing process parameters on surface functionalities through surface topographies. Value creation in manufacturing industries derives from providing desired functionality and aesthetics for customers and stakeholders. These functionalities and aesthetics can be related to surface topographies. Evaluations of surface topographies, which are necessary for evidence-based product and process design, quality control in manufacturing, and for advancing science, start with topographic measurements. These generally consist of hundreds of thousands to millions of individual height samplings located in regular, spatial arrays. Evaluating the quality of these measurements is key to evaluating topographies and understanding how they influence functionality and are influenced by processing.
Recently, Lemesle et al. [2] showed in a top-down study of fluctuations how location-specific fluctuations in height measurements, interpreted as uncertainties, can be estimated from multiple measurements at the same position. Pixel heights are observed at the same positions; the analysis needs the whole dataset, and the evaluation can begin immediately after the scans. Another approach to surface roughness uncertainties is that of Mills [10], where the roughness of road pavements in Kansas was simulated using hierarchical Markov Chain Monte Carlo methods. It was then possible to reflect prevailing roughness conditions without neglecting uncertainty.
Even with precautions and careful analyses, one wonders whether the results of studies on the dispersion of measurement data sets can be guaranteed. The weakness of this vision is determining the reliability of the data set. More precisely, the quality of topographic measurements needs to be quantified. Even when certain analyses have excellent discrimination capacities, it is unclear whether differences should be attributed to the topographies or to measurement fluctuations. This is why it makes sense to assess fluctuation-based uncertainties in measurements in order to evaluate their influence on discrimination or correlation results.
A general lack of terminology is observed in the literature regarding the characterization of surface features, while ISO 8785 [11] shows a simplified view of a large range of surface features. A similar observation can be made for gradients, for which multiple terminologies and descriptions can be used to describe surfaces. Gradients are scalar representations of spatial variations of heights (z) between one location and its neighbors. Gradients, as mentioned above and demonstrated below, are calculated from normal vectors obtained from triangular facets fitted to adjacent height samples. Gradients, as used here, are sometimes referred to as slopes on a surface. Gradients are calculated here at each location (x, y) on the surface topography and are used to generate gradient maps.
The objective of this work is to determine the extent to which gradients could be origins of fluctuation-based uncertainties in optical measurements of topographies. This extends a previous top-down study [2]. Fluctuation-based uncertainties are indicated by fluctuations in height measurements (z) repeated at a single location (x, y). Gradients are a local geometric property of a measurand. This paper examines relations between height fluctuations and local gradients at any location on topographic measurement maps. This is a new approach to fluctuation quantification of topographic measurements. It can increase the value of surface metrology through insights into measurement repeatability, and for finding useful functional correlations and discrimination, which also depend on using pertinent scales and topographic characterization parameters [3]. Lack of repeatability, as indicated by large fluctuations, can impact these abilities.
In optical surface metrology, the optical properties of measurands, which include height gradients, can influence measurement quality. The measurand and the physical phenomena exploited by the measurement apparatus are linked in how they influence measurement quality, since point-focus detection criteria differ between the measurement technologies used. The acquisition of height fluctuations and height gradients therefore depends on the physics used in the measurement systems. At a given scale, resolution depends on the optical properties of surface features, such as brightness and transparency. Not all measurement systems are able to measure the same information. At the same scale and resolution, it is better to select an apparatus according to its ability to measure a given type of surface with low height fluctuations. Similar reasoning applies when strong surface gradients must be measured.
In addition to the scale, surface resolution has an impact on the calculation of some parameters. In a nutshell, the resolution of each measurement apparatus depends on the measuring scale, which in turn depends on the surface being measured. For instance, slope as characterized by the Sdq parameter is based on triangular facets carried by the surface topography point heights, with some smoothing. Sdq results depend on sampling intervals, although ISO 25178 does not yet require these to be reported with Sdq [12].
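Because Sdq is built from finite differences at the sampling interval, its value changes with that interval. A minimal numerical illustration on a synthetic sinusoidal surface (the `sdq` helper and all values are illustrative assumptions, not the ISO implementation):

```python
import numpy as np

def sdq(z, dx):
    """RMS gradient (Sdq-like) from finite differences at spacing dx."""
    gy, gx = np.gradient(z, dx, dx)          # dz/dy, dz/dx
    return np.sqrt(np.mean(gx ** 2 + gy ** 2))

wavelength = 10.0                            # dominant surface wavelength, in µm
for dx in (0.5, 2.0):                        # two sampling intervals, in µm
    x = np.arange(0.0, 100.0, dx)
    X, Y = np.meshgrid(x, x)
    z = np.sin(2 * np.pi * X / wavelength) * np.cos(2 * np.pi * Y / wavelength)
    # Coarser sampling attenuates the finite-difference slopes, lowering Sdq.
    print(f"dx = {dx} µm -> Sdq = {sdq(z, dx):.3f}")
```

The same surface thus yields different Sdq values at different sampling intervals, which is why reporting the interval alongside Sdq matters.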
However, one of the most important criteria is the manufactured surface topography. Due to geometric and optical properties at each location, an apparatus' response to height could depend on the numerical aperture, lateral resolution, or light reflection. For example, surface reflectivity plays a crucial role in shaping the white light correlogram by determining which wavelengths of light are reflected or absorbed by a surface. High reflectivity leads to a correlogram with strong correlations across the spectrum, while low reflectivity results in reduced correlations for certain wavelengths. More precisely, reflectivity can influence white light interferograms by affecting the intensity, contrast, and visibility, and potentially introducing phase shifts in the interference fringes. Gross et al. [13] demonstrated that the statistical error of each measurement location depends on the brightness of the associated speckle; a dark speckle gives a more uncertain measurement than a bright one. If the brightness is lower than the camera's noise threshold, the measurement fails completely, and an outlier appears that leads to a NaN value. Pavel Pavliček and Ondřej Hýbl [14] showed, under both theoretical and experimental considerations, that the measurement uncertainty caused by surface roughness depends on the intensity of the individual speckle: the brighter the corresponding speckle, the more precise the measurement. They also showed that the distribution of the measurement error caused by surface roughness depends on the roughness itself.
Local material properties can also affect the amplitude of fluctuations and are principally governed by fluctuations in the refractive index [15]. More precisely, refractive index fluctuations refer to the variations in the refractive index of a material or medium over space or time. In an ideal situation, the refractive index of a material is considered constant and uniform. However, in reality, many factors can cause the refractive index to fluctuate. These fluctuations can occur due to variations in temperature, density, composition, or any other physical parameter that influences the optical properties of the medium and affects the shape of the white-light correlogram due to the first-order chromatic dispersion of the refractive index [16].
Landscapes could be described by many maps, including heights, gradients, and grey levels. These maps can be evaluated by correlating them and through localized measurement uncertainty maps. The aim here is not to quantify the uncertainties of single surfaces directly but rather to link the gradients and height fluctuations from a series of surface measurements for an uncertainty deduction.
The methodology proposed in this paper is summarized as follows:
1. Measuring the same surface zone n times;
2. Calculating the height standard deviation (fluctuations) and the mean gradient at each pixel from the set of n maps;
3. Finding a correlation between fluctuations and gradients by using a newly developed tool, named the Bounded Bivariate Density (B 2 D) plotting method (2D or 3D view).
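These steps can be sketched as follows — a minimal synthetic illustration (all names and the noise model are assumptions; the noise amplitude is deliberately made to grow with the local slope so as to mimic the gradient-dependent fluctuations discussed later in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for n repeated measurements of one zone (illustrative).
I, J, n = 64, 64, 10
y, x = np.mgrid[0:I, 0:J].astype(float)
base = np.sin(x / 5.0) + np.cos(y / 7.0)              # smooth "true" topography
gy0, gx0 = np.gradient(base)
noise_std = 0.001 + 0.05 * np.sqrt(gx0**2 + gy0**2)   # slope-dependent noise
maps = base + noise_std * rng.normal(size=(n, I, J))

# Step 2: per-pixel mean, fluctuations, and the mean-gradient magnitude
# computed from adjacent samples only.
mu = maps.mean(axis=0)
sigma = maps.std(axis=0)
gy, gx = np.gradient(mu)
grad = np.sqrt(gx**2 + gy**2)

# Step 3: a crude global indicator of the fluctuation/gradient link.
r = np.corrcoef(np.log10(grad.ravel() + 1e-12),
                np.log10(sigma.ravel() + 1e-12))[0, 1]
print(f"log-log fluctuation/gradient correlation: {r:.2f}")
```

On data built this way, the log-log correlation is strongly positive, which is the kind of relation the B 2 D analysis below makes visible per pixel rather than globally.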

Choice of Surfaces
The method is applied on the same surfaces analyzed in [2]: one turned surface, two sandblasted surfaces and two zones of a biplane, multilayered ophthalmic lens (Essilor, Creteil, France).
These surfaces were selected as representative of surface topographies for application of the following method. The turned surface (Figure 1a) was machined at 120 m/min with a tungsten carbide D-type insert on an aluminum alloy AU4G part. It has periodic motifs and multiscale topographies oriented in the cutting direction. The sandblasted surfaces (Figure 1b) were created by propelling corundum particles at 3 and 6 bars onto aluminum alloy AU4G parts for 60 s. These surfaces have isotropic, complex multiscale topographies. Two surfaces were studied to observe the influence of changing the process pressure on the surfaces. Finally, the third surfaces (Figure 1c) were obtained by abrading an ophthalmic lens with an abrasive media for 300 cycles (150 cycles per minute). This ophthalmic lens is studied because it includes distinct deep scratches, nano scratches on the surface bulk, and pits. To sum up, a wide choice of surface topographies might be analyzed, but only three types of surfaces were selected for the analysis because of their local characteristics, such as the gradient, or their particular functionalities, such as transparency. These three types of surfaces are used to:
• Quantify the role of optical properties on the nature of fluctuations (transparent and opaque ophthalmic lens);
• Quantify the influence of directional gradients on fluctuations (isotropic for sandblasted surfaces and anisotropic for tooled surfaces);
• Check the repeatability of fluctuation estimation by taking two different locations on the same ophthalmic lens;
• Quantify the role of roughness amplitude on fluctuation amplitude (two sandblasted surfaces with Sa of 2.8 µm for a sandblasting pressure of 3 bars and Sa of 3.9 µm for a pressure of 6 bars).

Measurement Methods
The turned and sandblasted surfaces were measured on a WLI (Bruker Contour GT™) with a 20× magnification, a numerical aperture of 0.46, an optical lateral resolution of 0.7 µm, and a sampling interval of 0.49 µm. The abraded ophthalmic lenses were measured on a SWLI (Zygo NewView™ 7300) with 50× magnification, a numerical aperture of 0.55, an optical lateral resolution of 0.52 µm, and a sampling interval of 0.22 µm.
A study on roughness parameters (Sa, Sal, Str, Sdr, Spd, and Spc) is presented in Appendix B. Each parameter was calculated on each surface presented here.

Calculation of Fluctuations
The method for fluctuation calculation proposed by Lemesle et al. [2] is summarized here. Measured surface topographies consist of amplitudes z i,j,n , where (i, j) are the pixel indices and n is the index of the repeated measurement. From the set of n measured maps:
1. Mean map, µ i,j (Equation (1));
2. Standard deviation (fluctuation) map, σ i,j , calculated from the n measured maps (Equation (2));
3. Normalization of the standard deviation (Equation (3)) by dividing by the root mean square roughness parameter of the entire measured surface, S q , which is the standard deviation (second statistical moment) of the surface heights (Equation (4)), where Π d,i,j,n is the polynomial of degree d which best fits the surface by least-squares interpolation on the heights z i,j,n and removes the form from the surface.
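The bodies of Equations (1)–(4) are not reproduced above; a plausible reconstruction, consistent with the textual definitions (here m indexes the n repeated maps, I × J is the map size, and averaging S q over all maps is an assumption), is:

```latex
\mu_{i,j} = \frac{1}{n}\sum_{m=1}^{n} z_{i,j,m} \tag{1}

\sigma_{i,j} = \sqrt{\frac{1}{n}\sum_{m=1}^{n}\left(z_{i,j,m}-\mu_{i,j}\right)^{2}} \tag{2}

\tilde{\sigma}_{i,j} = \frac{\sigma_{i,j}}{S_q} \tag{3}

S_q = \sqrt{\frac{1}{I\,J\,n}\sum_{m=1}^{n}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(z_{i,j,m}-\Pi_{d,i,j,m}\right)^{2}} \tag{4}
```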

Gradient Maps
The idea here is to study the relations of a height sample, i.e., a pixel, at a location (i, j) with its close neighbors. Spatial scales must be decorrelated as much as possible, so only the adjacent points are used to compute the gradient.
The computation of the surface gradient in topography is standardized by ISO 25178-2 [7], in which the calculation of the Root Mean Square Gradient, Sdq, is proposed (Equation (5)).
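For reference, the continuous form of the root mean square gradient as defined in ISO 25178-2 (the Equation (5) referenced above) reads, with A the definition area:

```latex
S_{dq} = \sqrt{\frac{1}{A}\iint_{A}\left[\left(\frac{\partial z(x,y)}{\partial x}\right)^{2}+\left(\frac{\partial z(x,y)}{\partial y}\right)^{2}\right]\mathrm{d}x\,\mathrm{d}y}
```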
Blateyron [17] proposed associating two plots representing the distribution of the z normal on each triangular facet of the surface mesh and the orientation of the facet in the (x, y) plane. He thus showed the great richness of looking at the probability density of the local gradients. The computation is described in the Surfstand project by Blunt et al. [18], who used a seven-point Lagrange interpolation. This proposition was extended to 3D by Dong [19], based on an initial idea by Chetwynd [20] (Equation (6)).
However, the use of this interpolation causes some problems. The interpolation strongly correlates the errors over nine points (a 3 × 3 grid). What is sought here, however, are the fluctuations, in particular at high local surface gradients where the points are still measurable, i.e., without NaN, especially for optical methods. The probability of having one non-measured point out of the nine, or twelve, needed for an interpolation becomes high and makes the interpolated gradient incalculable. This is highlighted by Gomez et al. [21]: by using a denoising technique, i.e., a 3 × 3 pixel denoising filter, the nine neighboring pixels become correlated, and the number of uncorrelated image points P is reduced by a factor of nine. Using the ISO 25178 method would lead to correlating 49 points in this study (Equation (7)). Therefore, the idea is not to obtain a smooth gradient, which would be well able to characterize a surface [22], but a gradient which can capture the noise. As the noise has fractal characteristics, a method which characterizes it must be retained. Ungar et al. [23] showed that triangulation captures the fractal aspect with optical methods. As a consequence, the computed gradient is the gradient of the elementary facet. Lemesle et al. showed that computing the gradient on a facet allows for a better description of the physics of failure at very small scales [24].
The z gradient, i.e., the gradient derived from the normal (ν i,j,n ) to the facet plane (Figure 2), is noted ∇ i,j,n (Equation (8)). This gradient is computed from the z normal because it is assumed that the fluctuations will depend on the gradient. The maximum normalized gradient is limited to 1, which corresponds to an infinite gradient. The mean gradient, ∇ i,j , is obtained by averaging the n elementary maps, including NaN, as described by Equation (9).

Figure 2. Spatial representation of the gradient given by a 3D triangular facet obtained from the surface mesh, inspired by Blateyron [17].
∇ i,j,n = tan(θ i,j,n ) = tan(arccos(z⃗ · ν⃗ i,j,n )), with ‖z⃗‖ = 1 and ‖ν⃗ i,j,n ‖ = 1 (Equation (8)). The moments Q q of all gradients, based on the empirical probability density of ∇ i,j,n , can be calculated. This probability density is noted p ∇ (Equations (10) and (11)). The moments Q̂ q of the mean gradient map, based on the empirical probability density of ∇ i,j , can be calculated. This probability density is noted p ∇̄ (Equations (12) and (13)).
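The facet-based gradient described above (the normal of the triangle through three adjacent height samples, then the tangent of its inclination angle to the z axis) can be sketched as follows; the function name and grid are illustrative:

```python
import numpy as np

def facet_gradient(z, dx=1.0, dy=1.0):
    """Per-facet gradient tan(theta) from the lower-left triangle of each cell.

    theta is the angle between the facet normal and the z axis, so tan(theta)
    equals the slope magnitude of the facet.
    """
    # Two edge vectors of the triangle: p(i,j)->p(i,j+1) and p(i,j)->p(i+1,j).
    ex = np.stack([np.full_like(z[:-1, :-1], dx),
                   np.zeros_like(z[:-1, :-1]),
                   z[:-1, 1:] - z[:-1, :-1]], axis=-1)
    ey = np.stack([np.zeros_like(z[:-1, :-1]),
                   np.full_like(z[:-1, :-1], dy),
                   z[1:, :-1] - z[:-1, :-1]], axis=-1)
    nvec = np.cross(ex, ey)                        # facet normal
    nvec /= np.linalg.norm(nvec, axis=-1, keepdims=True)
    cos_theta = np.abs(nvec[..., 2])               # angle with the unit z vector
    return np.tan(np.arccos(np.clip(cos_theta, 0.0, 1.0)))

# An inclined plane z = 0.5 x has slope 0.5 everywhere.
yy, xx = np.mgrid[0:8, 0:8].astype(float)
print(round(facet_gradient(0.5 * xx)[0, 0], 6))    # → 0.5
```

Note that, unlike interpolation over a 3 × 3 grid, each facet uses only three adjacent samples, so one NaN invalidates a single facet rather than a whole neighborhood.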
A major interest is calculating fluctuation statistics by gradient class. The entire range of values is divided into a series of equal intervals in log scale. From preliminary studies, a homogeneous distribution of the fluctuations is obtained if the interval is 0.01, by performing the variable change described by Equation (14). A set of k − 1 continuous intervals is therefore obtained.
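The division into equal log-scale gradient classes can be implemented, for illustration, as follows (the 0.01 log10 interval follows the text; the helper name and synthetic values are assumptions):

```python
import numpy as np

def gradient_classes(grad, step=0.01):
    """Assign each (positive) gradient value to an equal-width log10 interval."""
    logg = np.log10(grad)
    # Edges aligned on the step grid; the extra step guarantees that the last
    # edge lies strictly above the maximum, so k edges give k - 1 intervals.
    edges = np.arange(np.floor(logg.min() / step) * step,
                      logg.max() + 2 * step, step)
    idx = np.digitize(logg, edges) - 1             # class index per value
    return idx, edges

rng = np.random.default_rng(1)
grad = 10 ** rng.uniform(-4, 0, size=1000)         # synthetic gradients
idx, edges = gradient_classes(grad)
print(len(edges) - 1, "classes")
```

Statistics (mean, median, percentiles) of the fluctuations can then be accumulated per class index.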

B 2 D Plot
Bounded Bivariate Density (B 2 D) involves plotting the bivariate density of the gradient and the individual standardized fluctuations. The individual standardized fluctuation, Se i,j,n , is calculated at each location (i, j) of an elementary measured map and is described by Equation (15). B 2 D is constructed with the pairs (Se i,j,n , ∇ i,j,n ) and the 2D probability density function, p ∇,Se , defined by Equation (16).
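Since a B 2 D plot is essentially the bivariate probability density of (gradient, standardized fluctuation) pairs, a minimal sketch using a 2D histogram follows (names are illustrative; the bounding and display conventions of the actual tool are not reproduced):

```python
import numpy as np

def b2d_density(grad, se, bins=64):
    """2D probability density of (log10 gradient, standardized fluctuation)."""
    # Drop NaN / non-measured points before estimating the density.
    mask = np.isfinite(grad) & np.isfinite(se) & (grad > 0)
    h, g_edges, se_edges = np.histogram2d(np.log10(grad[mask]), se[mask],
                                          bins=bins, density=True)
    return h, g_edges, se_edges

rng = np.random.default_rng(2)
grad = 10 ** rng.uniform(-4, 0, size=20000)
se = rng.normal(scale=0.01 + 0.1 * grad)           # fluctuations grow with gradient
h, ge, see = b2d_density(grad, se)
print(h.shape)                                     # → (64, 64)
```

With `density=True`, the histogram integrates to one over the (log10 gradient, Se) plane, making it directly comparable to the probability density p ∇,Se.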

Fluctuations/Gradient Graphs
From these computations defined previously, several graphs can be constructed.

• Global standardized fluctuations versus Gradients
The graphs of statistics for each gradient interval defined by Equation (14) are plotted. The probability density function of the global standardized fluctuations σ i,j is defined according to Equations (17) and (18).
Several conditions on the density function can be built by gradient class, indexed by k (Equation (19)), with their associated moments {µσ, Q50σ, minσ, maxσ, Q5σ, and Q95σ}.
The six following graphs can therefore be plotted in log-log scale: (log10 ∇ k , log10 µσ,k ), (log10 ∇ k , log10 Q50σ,k ), (log10 ∇ k , log10 minσ,k ), (log10 ∇ k , log10 maxσ,k ), (log10 ∇ k , log10 Q5σ,k ), and (log10 ∇ k , log10 Q95σ,k ) (Equation (20)).

• Individual standardized fluctuations versus Gradients

This graph seems similar to the one described by Equation (19), but it provides complementary information. Instead of taking the I × J values of σ i,j , i.e., over the entire set of measurements, to estimate the probability density function, the I × J × n values of standardized fluctuations, Se, are taken (Equations (21) and (22)).
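The per-class statistics {µ, Q50, min, max, Q5, Q95} used for these curves can be gathered, for illustration, as follows (the bin width and data are synthetic assumptions):

```python
import numpy as np

def class_stats(grad, sig, step=0.25):
    """Fluctuation statistics per log10-gradient class (illustrative bin width)."""
    logg = np.log10(grad)
    edges = np.arange(np.floor(logg.min()), np.ceil(logg.max()) + step, step)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = sig[(logg >= lo) & (logg < hi)]
        if s.size:
            out.append((10 ** ((lo + hi) / 2),     # class center (gradient)
                        s.mean(), np.median(s),
                        s.min(), s.max(),
                        np.percentile(s, 5), np.percentile(s, 95)))
    return out

rng = np.random.default_rng(3)
grad = 10 ** rng.uniform(-3, 0, size=5000)
sig = np.abs(rng.normal(scale=0.01 + 0.1 * grad))  # synthetic fluctuations
stats = class_stats(grad, sig)
# On this synthetic data, the mean fluctuation rises with the gradient class.
print(stats[0][1] < stats[-1][1])
```

Plotting each tuple against the class center in log-log scale reproduces the structure of the curves described above.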

Results
Figure 3b represents the mean gradient map of the turned surface calculated from Equation (10). This map ∇ i,j closely resembles the map σ i,j (Figure 3a); visually, these two maps appear highly correlated. Equation (20) is thus plotted (Figure 3c). In log-log scale, the different curves of the global standardized fluctuations σ i,j follow an exponential growth with the mean gradient ∇ i,j . This highlights the relation between the fluctuations and the surface gradient: fluctuation-based uncertainties are related to local surface gradients. Therefore, it is difficult to quantify measurement uncertainties without the surface topography. As a result, such top-down approaches are vital for understanding topographic measurement uncertainties.
In Figure 3d, plotting individual standardized fluctuations (standardized error) versus gradients, the same tendency is observed, but with a plateau for low gradients over three decades, from 10⁻⁷ to 10⁻⁴. There is a critical gradient below which the fluctuations no longer depend on the gradient. However, the fluctuation range remains high over three decades (5th percentile of 10⁻⁴, median of 10⁻², and 95th percentile of 10⁻¹). The visualization of the gradient concentration is possible thanks to Figure 4a. The fluctuations are symmetrical and centered with respect to null values, with a high density of quasi-zero fluctuations. The symmetry shows that these amplitude fluctuations are not biased. The continuous shape of the fluctuations plot means that there is no z-offset between each map measurement, nor any recalibration after each measurement. Visually, the fluctuation range progresses according to a continuous exponential growth with the log of the gradients.
Despite that, a question remains open regarding the functional field of surface finishing: manufactured surface topographies are not adequately characterized by amplitudes alone; gradients provide better characterizations. Indeed, many physical models' formulations include differential operators, such as ∂z/∂x, ∂z/∂y, ∂²z/∂x², ∂²z/∂y². For example, friction depends on asperities' shapes and their gradients, which form angles of attack, or on rake angles for characterizing cutting-like interactions with a counterface. Often, greater gradients provoke greater physical responses. Fluctuations are important because they correspond to gradients. Therefore, fluctuation-based uncertainties are problematic for functional characterization of, and for numerical modelling on, measured surfaces.
Finally, this method is applied to the ophthalmic lens in Figures 6 and 7. A projection of fluctuations and mean gradients is presented in Figure 6a,b, respectively. More computation graphs are shown in Appendices A.3 and A.4. Lemesle et al. [2] highlighted two zones which are separated by an uncertainty border (Z1 and Z2 in Figure 6). The separation of these zones is not due to their gradients, because these two zones do not appear on the gradient map (Figure 6b). Through an analysis of B 2 D plots (Figure 7a,c), an offset is observed on the probability density, with a bimodal aspect in the direction of the fluctuations. More precisely, at the second position in Figure 7a, a multiplication of 2D B 2 D plot modes can be observed, due to a problem that arose during the surface measurements. This mode multiplication is caused by four topographical maps of the measurement set. Appendix B.5 shows the difference between these four maps and the other maps in the same measurement set on the roughness parameter control charts. Moreover, the histogram of fluctuations σ i,j puts forward a bimodal structure [2]. In the direction of the high gradients on the B 2 D plot, there is a 'new' probability density, highly extended in fluctuation but centered on the null value, corresponding to gradients greater than 10⁻¹ (Figure 7b). This zone reflects high gradients due to a multitude of scratches created by the abrasive grains on the multilayer lens. The amplitude of this second histogram compared to the first one (below 0.1) reflects the fraction of damaged zones with high fluctuation.
The most striking surface feature is the boundary between the two fluctuation zones, defined by the high-amplitude scratch. An interpretation could be provided from the measurement system. The multilayer, transparent lens is measured with an interferometric microscope, and the high-amplitude scratch has locally modified the thickness of the layers. The hypothesis of a complex multi-reflection of light in this groove and the multilayers is a potential interpretation [25]. A comparison can be made between the lowest fluctuation plateau of the two fluctuation zones and Figure 6a of the second lens position. It shows that the fluctuation level is estimated at 10^−2.5 = 0.0032 µm (3.2 nm), i.e., factors of 6 and 10, respectively, with respect to the scratch with no large amplitude.
This observation shows the complexity of the interaction between the measuring system and the measurand: the uncertainties are Topographically and Material Dependent (UTMD). Leach et al. [26] made this attribution in their 2021 paper on metrological characteristics. It seems unreasonable to envisage a mathematical physical model detailing this phenomenon, which has not been encountered in other measurements performed on the biplane lens. Therefore, a top-down approach seems necessary to capture the manifestation of this optical disturbance of the interferometer system on a UTMD surface with complex abrasion mechanisms. This optical artefact could introduce errors in the calculation of roughness parameters, or even a bias which would diminish the understanding of the different tribological phenomena responsible for the damage.
Liu et al. [27] proposed a model to estimate the measurement uncertainty caused by surface gradient for a white light interferometer. Within the measurement range of one single CCD (Charge-Coupled Device) pixel, height differences of the workpiece cannot be resolved laterally. Measurements are influenced by light coming from the specified range. The coherent light also results in a speckle pattern. The effect of the continuum of light illumination is calculated numerically; it is treated as N scattering regions, where N is a large number. The idea of Liu et al. [27] is to use the work of Pavliček and Soubusta [28], who created a model of uncertainties on a pixel due to scattering on a 'flat' rough surface, to tilt the reference plane and analyze the effect of the tilt amplitude. Pavliček and Soubusta [28] thereby found a relation for the measurement uncertainty, δz (Equation (23)).
where ⟨I⟩ is the mean speckle-pattern intensity, I is the individual speckle intensity, and σ_h is the standard deviation of the rough-surface height distribution (Equation (24)), with the assumption described in Equation (25), where l_c is the coherence length.
Pavliček and Hýbl [29] proposed using a CCD camera and an LED with a central wavelength of 850 nm and an FWHM (Full Width at Half Maximum) spectral width of 40 nm for the interferometric measurements. For this light-source configuration, the coherence length l_c is equal to 4.8 µm from Equation (26).
After analytical computation on a profile (i.e., in 2D), Liu et al. [27] obtained Equation (27) without fixing a probability density function on the amplitudes z_i. The only geometrical assumption formulated is that the reference plane has no uncertainty and that the height amplitude distribution of z_i is symmetric about the reference plane.
where d is the pixel size and θ is the tilt angle; d·tan(θ) is considered as the numerical gradient of the function. They then proposed a 3D extension (Equation (28)).
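The (d·tan θ)² term in Equation (28) comes from the variance of heights sampled uniformly across one tilted pixel, which is where the factor of 12 originates; a Monte Carlo sketch can check this (the pixel size and tilt angle below are chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 0.21                   # illustrative pixel size in µm
theta = np.deg2rad(10.0)   # illustrative tilt angle

# heights sampled uniformly across one tilted pixel of width d
x = rng.uniform(-d / 2, d / 2, size=1_000_000)
z = x * np.tan(theta)

# variance of a uniform ramp of total height d·tan(θ) is (d·tan(θ))²/12
analytic = (d * np.tan(theta)) ** 2 / 12.0
print(z.var(), analytic)
```

The agreement confirms that the 12 is a purely geometrical consequence of uniform sampling across the pixel.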
They found the final expression of the uncertainty of a pixel located at (x, y) of the topographic map (Equation (29)) by replacing σ_h with σ'_h in Equation (23).
where δ_r is the random uncertainty introduced by environmental effects. Neglecting these environmental effects, Equation (29) explains the local uncertainties at (x, y) through a proportional relation with the local surface facet gradients. This method tends to establish a relation in which uncertainties increase with the gradient on a pixel. A robust statistic has to be found to confirm or refute the linearity of the uncertainty/gradient relation.

Formulation
Since the pixel is square, ∆x = ∆y = ∆s. Assuming the gradient becomes large enough that the terms σ_h² and δ_r² are negligible, Equation (30) is obtained. A relationship between the local mean uncertainty σ_i,j and the average local gradient ∇_i,j of pixel (i, j) is then obtained in Equation (31).
Equation (29) was obtained by Liu et al. [27] based on the work of Pavliček and Soubusta [28]. Pavliček and Soubusta assume that the object's surface is rough and that the diffraction-limited point-spread function of the imaging system is broad compared with the microscopic surface variations, i.e., the microstructure of the surface is not laterally resolved by the imaging system. This causes the object wave to consist of contributions from many independent scattering regions. However, when the tilt method of Liu et al. is used, a correlation is introduced between scattering regions within a pixel that is not taken into account by Pavliček and Soubusta. The factor of 12 in Equation (28) is also based on purely geometrical assumptions, not statistical ones. Finally, in the case of fractal surfaces, where the gradient depends on the scale, there is no clear cut-off between the roughness of the surface and the roughness of the facet in the scattering region. For all of these reasons, we propose to include these effects by weighting the gradient effect with a factor, c, which leads to the final formula (Equation (32)).
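In the gradient-dominated regime, the weighting factor c can be estimated by least squares from the fluctuation and gradient maps. A minimal sketch on fabricated stand-in data (the lognormal gradient map, the noise level, and c_true are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-ins for the per-pixel gradient map and fluctuation map;
# in the study these come from repeated measurements, here they are
# fabricated with a known weighting factor c_true to demonstrate the fit
grad = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
c_true = 0.2
sigma = c_true * grad * (1.0 + 0.05 * rng.standard_normal(grad.size))

# least-squares estimate of c for the proportional model sigma = c * grad
c_hat = np.sum(sigma * grad) / np.sum(grad * grad)
print(round(c_hat, 2))  # 0.2
```

With real maps, a robust regression (e.g., on log-transformed quantities) may be preferable, since high-gradient pixels dominate the plain least-squares fit.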

Application
The method was applied to a fractal surface (fractal dimension of 2.3) obtained by grinding a TA6V specimen with a grade 80 SiC paper. A zone of this specimen was measured 100 times by SWLI (Zygo NewView™ 7300) with a 50× lens over a range of 140 µm × 105 µm, with ∆s = 0.21 µm (Figure 8). The resulting S_q is 0.36 µm.
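As a sketch of the processing chain, the fluctuation map is the per-pixel standard deviation over the stack of repeated maps, and the gradient map is evaluated at the sampling-interval scale. Here np.gradient (central differences) stands in for the facet-normal calculation used in the paper, and the height stack is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
ds = 0.21  # µm, sampling interval as in this study

# stand-in for 100 repeated topographic maps, shape (n_maps, ny, nx);
# real data would come from the repeated SWLI measurements
z_stack = rng.standard_normal((100, 64, 64)) * 0.01 + rng.standard_normal((64, 64))

# per-pixel fluctuation map: std of the repeated height samples
sigma_map = z_stack.std(axis=0, ddof=1)

# per-pixel gradient magnitude of the mean map at the sampling-interval scale
z_mean = z_stack.mean(axis=0)
dzdy, dzdx = np.gradient(z_mean, ds)
grad_map = np.hypot(dzdx, dzdy)

print(sigma_map.shape, grad_map.shape)  # (64, 64) (64, 64)
```

Plotting log10(sigma_map) against log10(grad_map) pixel by pixel is then the basis of the B²D representation discussed below.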
A c value of 0.2 is found, which shows that the effect of the gradient must be reduced by 80% compared with Liu's initial model.

Discussion
The B²D plot is a visual tool which gives a quick representation of data reliability. It augments traditional quality maps because it offers another point of view on surface topographies. This tool can therefore guide measurement parameter selection, especially where local gradients are high. The method of gradient calculation used here is quite basic.
Other calculation methods could be tested to see how they describe correlations between uncertainty and gradient maps. It will therefore be necessary to build quality indicators to determine whether a B²D plot calculated with method "X" is better than one calculated with method "Y". This requires different surfaces for testing calculation efficiency. Numerical simulation of surfaces can be a suitable tool to test the appearance of typical patterns in the B²D plot, such as the role of a z offset on each map, the effect of a spatial correlation over time, the effect of an x/y offset during multi-map acquisition, and the addition of white noise, waviness, and tilt. Artefacts/uncertainties computed by numerical algorithms from the metrological domain could also be assessed on the B²D plot, for example, through simulation of tactile measurement with a variable stylus curvature radius. This approach could then allow the bottom-up approach to be introduced into the top-down philosophy.
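Such a simulation could look like the following sketch, which fabricates a multi-map from one synthetic topography plus a per-acquisition z offset, a small tilt drift, and white noise (all magnitudes are arbitrary), and inspects the resulting fluctuation map:

```python
import numpy as np

rng = np.random.default_rng(4)
ny, nx, n_maps = 64, 64, 50

base = rng.standard_normal((ny, nx))  # one synthetic topography
y, x = np.mgrid[0:ny, 0:nx]

maps = np.empty((n_maps, ny, nx))
for k in range(n_maps):
    z_offset = rng.normal(0.0, 0.05)           # per-acquisition z offset
    tilt = rng.normal(0.0, 1e-3) * x           # small tilt drift along x
    noise = rng.normal(0.0, 0.02, (ny, nx))    # white measurement noise
    maps[k] = base + z_offset + tilt + noise

# the z offset and tilt inflate the per-pixel std well beyond the
# white-noise floor of 0.02, and the tilt makes it grow with x
print(maps.std(axis=0, ddof=1).mean())
```

Each perturbation leaves a distinct signature (uniform inflation for the z offset, an x-dependent ramp for the tilt), which is exactly the kind of typical pattern one would look for in the B²D plot.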
The effect of non-measured points on the construction must also be studied by simulation. Indeed, the probability that the measuring system considers a point to be non-measurable depends on each apparatus and, obviously, on the internal height-calculation algorithm, which could introduce a threshold function. The effect of missing data has not been examined in this study, although it is considered in the statistical calculations; missing data could be problematic for the different algorithms used for gradient calculation, for example.
Moreover, a paradox can emerge: the more a surface is 'noisy' or has a topography with strong local gradients, the higher the probability of obtaining NaN values on the multi-map. A bias thus appears: the number of data points z_i,j with high gradient decreases, when it should increase with an average increase of the local gradients. This bias can be significant and causes a problem when calculating roughness parameters on surfaces with NaN values; beyond any computational difficulty, the result will also be biased. The use of a 'smooth' function for filling points would only amplify this bias by adding smoothed data where the data should be rougher. This is a real challenge in the world of surface topography. Once these analyses have been performed, it will be useful to build indicators that help to interpret uncertainties and to set up the apparatus using a top-down approach.
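This bias can be demonstrated with a toy simulation in which the probability of a non-measured point grows with the local gradient (the lognormal gradient population and the dropout law are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
grad = rng.lognormal(0.0, 0.7, size=100_000)  # synthetic local gradients

# assume the instrument drops a point with probability rising with gradient
p_nan = np.clip(0.2 * grad, 0.0, 0.95)
measured = rng.uniform(size=grad.size) >= p_nan

mean_all = grad.mean()
mean_measured = grad[measured].mean()
print(mean_measured < mean_all)  # True: surviving pixels are biased low
```

The surviving pixels systematically under-represent high-gradient regions, so any roughness parameter computed only from measured points inherits this bias.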

Conclusions
The presented method is built to quantify the uncertainties and to link them to the surface gradients. A visual graphic tool, the B²D plot in 2D and 3D, is developed to highlight measurement anomalies and build quantifiable indicators. In addition, this visualization tool can aid in setting up the measurement apparatus through a hundred measurement repetitions at the same position. The measurement aberrations could be distinguished with the control charts made in MountainsMap 9.3 (Appendix B). The final results can be summarized in the following five points:
1. The nature of optical measurements of irregular topographies is such that fluctuation-based uncertainties observed in repeated height measurements at one location vary with respect to the height gradients.
2. Fluctuation maps, which are based on repeated measurements, and maps of the logs of height gradients appear similar, which indicates that they are correlated spatially.
3. The highest fluctuation-based uncertainties correspond to signatures of the tool/material interaction for the turned surface, and to regions with the maximum plastic deformation for the sandblasted surfaces.
4. Fluctuation-based uncertainties for an abraded surface depend on both abrasion and on light/sub-layer interactions.
5. Topographic regions which characterize surface functionality, or integrity, have the highest fluctuation-based uncertainties in their height measurements at individual locations in a measurement.
This paper opens another future possibility regarding the integration of uncertainties into roughness parameters. While this integration may be straightforward for simple roughness parameters, such as S_q, it could be difficult for others, such as motif parameters.
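For S_q, a Monte Carlo propagation is indeed straightforward: perturb the map with its per-pixel uncertainties and observe the spread of the recomputed parameter. A sketch on synthetic data (the map, the uncertainty level, and the number of draws are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

z = rng.standard_normal((64, 64)) * 0.36   # synthetic map, S_q ≈ 0.36 µm
sigma = np.full_like(z, 0.003)             # assumed per-pixel uncertainty map

# Monte Carlo propagation: perturb the map with its local uncertainties
# and look at the spread of the resulting S_q values
sq_samples = []
for _ in range(500):
    zp = z + rng.standard_normal(z.shape) * sigma
    zp = zp - zp.mean()                    # remove the mean plane (levelling)
    sq_samples.append(np.sqrt(np.mean(zp ** 2)))

sq_samples = np.asarray(sq_samples)
print(sq_samples.mean(), sq_samples.std())
```

A uniform sigma is used here for simplicity; a gradient-dependent uncertainty map, as established above, would concentrate the propagated uncertainty in high-gradient regions. Motif parameters resist this treatment because segmentation boundaries can change discontinuously under perturbation.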
A large database including different types of surface topography is currently being developed, in order to apply the method and assess it against others. A software solution is in development to list and store all surface topographies related to our laboratory in the database.

Funding: This research was funded by the ELSAT2020 project, co-financed by the European Union with the European Regional Development Fund, the French state, and the Hauts-de-France region.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.
Acknowledgments: The authors thank the company, Essilor, for providing the ophthalmic lens, and the MorphoMeca platform for the use of the measuring apparatus.

Conflicts of Interest: The authors declare no conflict of interest.

A shorter autocorrelation length indicates more localized or short-range roughness features. S_tr, the texture-aspect ratio, is also studied. It corresponds to the ratio between the characteristic wavelength of a texture and its aspect ratio, and quantifies the anisotropy or elongation of texture features on a surface. The characteristic wavelength is determined using a Fourier-transform analysis of the texture profile, and the aspect ratio represents the elongation or stretching of the texture features.

In simpler terms, the texture-aspect ratio describes the elongation or stretching of texture patterns on a surface. A higher texture-aspect ratio indicates a more elongated or anisotropic texture, while a lower ratio represents a more isotropic or uniformly distributed texture.
• Hybrid parameters: these combine both height and spatial aspects to provide a more comprehensive description of the surface texture. The S_dr parameter (developed interfacial area ratio) is observed. It is expressed as the percentage of additional surface area contributed by the texture as compared to the planar definition area. The S_dr value of a completely level surface is null.
• Feature parameters: in the context of surface-texture analysis, these refer to measurements that describe particular aspects or characteristics of surface features. They provide quantitative information about specific types of irregularities or patterns present on a surface. Watershed segmentation with Wolf pruning is used to divide the topography into hills and compute the S_pd and S_pc parameters. S_pd, the density of peaks, refers to the concentration of peaks in the surface texture of a material. It provides a measure of how closely spaced or abundant the peaks are on the surface, and can be expressed as the number of peaks per unit area or per unit length, depending on the analysis method used. S_pc, the arithmetic mean peak curvature, represents the arithmetic mean of the principal curvatures of the peaks on the surface.
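The S_dr definition above can be sketched numerically; this assumes the gradient-based approximation of the developed area rather than an exact triangulated-facet computation:

```python
import numpy as np

# minimal sketch of S_dr (developed interfacial area ratio), assuming the
# gradient-based approximation: mean excess of the local facet area
# sqrt(1 + |∇z|²) over the planar area, expressed as a percentage
def sdr(z, ds):
    dzdy, dzdx = np.gradient(z, ds)
    return 100.0 * np.mean(np.sqrt(1.0 + dzdx**2 + dzdy**2) - 1.0)

flat = np.zeros((32, 32))
print(sdr(flat, 0.21))  # 0.0 for a completely level surface
```

Any texture increases the developed area, so S_dr is strictly positive for a non-flat map, consistent with the null value stated for a completely level surface.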