Measuring the Thickness of Metal Coatings: A Review of the Methods

Thickness dramatically affects the functionality of coatings. Accordingly, the techniques in use to determine the thickness are of utmost importance for coatings research and technology. In this review, we analyse some of the most appropriate methods for determining the thickness of metallic coatings. In doing so, we classify the techniques into two categories: (i) destructive and (ii) non-destructive. We report on the peculiarities and accuracy of each of these methods with a focus on the pros and cons. The manuscript also covers practical issues, such as the complexity of the procedure and the time required to obtain results. While the analysis focuses mostly on metal coatings, many methods are also applicable to films of other materials.


Introduction
The use of thin films has become a ubiquitous practice across the scientific and industrial sectors. Coatings are widely used to obtain a synergistic action between the characteristics of the substrate and the covering material. They can enhance physical, chemical and aesthetic properties and lower the costs of the final product. For all these reasons, measurement of the thickness in composite materials is mandatory, both to obtain the right characteristics in the final artefact and to keep the costs under control. The variety of film materials is extremely vast: dielectrics (organic, such as polymers and self-assembled monolayers (SAM), or inorganic, such as metal oxides), semiconductors and metals are all used in the formation of films. In this study, we focused on the characterization of metallic films obtained through electrodeposition or vapor phase deposition, but most of the techniques mentioned in this article can be applied to films of other materials. As far as the dimensions of the film are concerned, the thinnest measurable thickness coincides with an atomic monolayer (ML), while the thickest layers can reach hundreds of microns in the case of electroforming. Therefore, in this review, we provide an overview of the investigation techniques covering this entire range.
According to the type, composition and thickness of the film itself and the substrate, various methodologies can be employed to investigate the thickness of the layers [1]. The processes can be classified into two different categories: destructive and non-destructive techniques.
A destructive technique permits the direct measurement of the thickness. This requires the sample to be modified at the macroscopic or microscopic level; therefore, the analyzed sample cannot be used afterwards and must be disposed of. On the other hand, a non-destructive technique leaves the sample intact and usable after the measurement.

In order to accurately measure the thicknesses by cross-sectioning, the sample must first be lapped; as a rule of thumb, the surface roughness, and consequently the abrasive grain size, must be of the same order of magnitude as the thickness to be analyzed or smaller, generally 0.3-1 micron. The lapping process can be carried out by hand or using special automatic lapping machines. Wet abrasion is carried out first using gradually finer sandpaper and then with a cloth soaked in an abrasive suspension. The suspended particles can be alumina or diamond dust. Initially, with sufficiently coarse sandpaper, the entire part that was damaged during cutting is removed, about 1 mm in depth; then, the grain size of the sandpaper is decreased. A fine grain size is not used immediately, as it would not be possible to remove the deepest scratches. In the event of lapping by hand, the sample must not be pressed too hard, and care must be taken not to consume one side more than the other to avoid introducing a parallax error. For this reason, it is advisable to rotate the sample in the same direction as the lapping machine disk and to occasionally turn it by 90° with respect to its normal axis. When the sample is thought to be ready, it is washed and inspected visually or, if available, under an optical microscope to check that all the scratches have been flattened. If the sample is sufficiently smooth, the next phase of microscopic analysis can be carried out. By way of example, Figure 1 shows a microscopic analysis of the same cross-section, comparing optical and electron microscope images. The multilayer sample is made of brass/Cu/bronze/Pd/Au/Ni.
With the optical microscope, it is possible to distinguish most of the layers. Even the thin gold layer is visible, although not quantifiable; however, it is not possible to discriminate between the bronze and palladium layers. Layers with a thickness over 500 nm can be measured with an optical microscope, but for thinner coatings, and to distinguish the bronze and palladium layers, the electron microscope image is necessary.

Focused Ion Beam Cross-Sectioning
Another series of techniques enabling fast characterization of thin metallic films employs focused beams of ions. Ionic probes display different beam-matter interactions with respect to electrons, resulting in different effects of the impinging beam on the surface [44][45][46][47][48][49]. First of all, it is worth noting that the ion beam-matter interaction always modifies the surface of the sample (often in a destructive way, which must be taken into account for each of the ion methods listed below). In particular, when an ion beam hits a surface, it can generate a series of effects different from those of an electron beam; all these effects are a consequence of elastic or inelastic collisions between the charged particles and the atoms forming the surface. The most important interactions can be listed as follows: (a) surface sputtering, (b) secondary ion emission, (c) deformation of the surface lattice, (d) ion implantation, (e) emission of electrons, (f) emission of electromagnetic waves and (g) heating. All these effects manifest at the same time; their ratio is strongly dependent on the physicochemical nature of both the probe and the targeted surface, and on the apparatus set-up. For a specific ion source, parameters such as beam energy, beam current and impact angle can be tailored to favor one phenomenon with respect to the others; as an example, by decreasing the accelerating voltage of the emitted ions, it is possible to strongly decrease the deformation of the surface lattice (amorphization), at the cost of reduced control over the ion beam. Among all the ion beam-matter interactions, three are used for the characterization of thin metallic films: (a) the production of electrons [50], (b) the sputtering of neutral surface atoms [51] and (c) secondary ion (SI) emission [52]. The first one exploits a signal that is similar to the one used for SEM imaging, which permits the acquisition of topographic information on the surfaces.
The second phenomenon allows subtractive manufacturing to be performed on the sample surface, exposing details of its inmost structure. The third enables qualitative and quantitative analysis of the nature of the material. An FIB apparatus can permit both thin film preparation and analysis in a single workflow by preparing the surface cross-section and acquiring its images, though not all FIB columns are fitted to perform both processes well. As already mentioned, different elements can be used to produce ions; their charge/mass ratio modifies the sputtering ratios, profoundly affecting the production of SI and SE. Today, the main commercial FIB machines are engineered to work using He, Ga, Ar or Xe ions [53]. Smaller ions, like He, are well suited for imaging purposes, while larger ions like Ar and Xe are used for fast sputtering; Ga FIB machines are considered a good trade-off between the two effects and are still regarded as the best choice for all-purpose tasks.
From a technical point of view, FIB machines are very similar to SEMs. Both of them have a column that can be divided into (a) a top part composed of the source, responsible for the ion/electron generation, and (b) a bottom part, the focusing apparatus, a series of electrostatic/electromagnetic lenses, condensers and apertures to focus and control the beam [45]. In FIBs, the working principle exploited for ion emission varies depending on the physicochemical properties of the element of choice; we can divide the sources into three main branches: gaseous field ionization sources (GFISs) for He [54], liquid metal ion sources (LMISs) for Ga [45] and plasma sources for Ar and Xe [55,56]. Unlike in SEMs, in FIBs the focusing apparatus is composed of electrostatic rather than electromagnetic elements (lenses, condensers, etc.). Ions are in fact only weakly affected by Lorentz forces due to their slower travelling speed (with respect to electrons) in the column, which also means that these instruments are less prone to suffer from stray external magnetic fields. Ion sources are not interchangeable; the focusing apparatus has to be finely engineered to suit the particular physical properties of the emitted element. This also means that the choice between different FIB machines, exploiting different sources, has to be carefully planned with respect to the intended application. Historically, industrial FIB columns were intended as standalone instruments; these devices were used especially in the semiconductor/electronics industry, where they have been employed for both quality checking and prototyping since the 1970s. Today, FIBs can be found mainly paired with an SEM column [57]. This double-column configuration allows for a vast array of different procedures for the characterization of materials in a range from mm down to nm for all R&D fields.
Moreover, FIB/SEMs can be equipped with a vast array of accessories and sensors to enable different characterization techniques, fully exploiting both the electron and the ionic beam. Particularly, all the FIB methods are well suited to the characterization of metallic films of thicknesses ranging from 50 µm down to fractions of a nm.
Among the available accessories, the gas injection system (GIS) has particular relevance, as it allows for additive manufacturing in the range of tens of nm using both ionic and electronic beams [58][59][60]. The GIS consists of a series of external reservoirs containing the precursors of the elements to be deposited onto surfaces, which are heated to produce a reactive gas. A hollow needle injects this gas in the vicinity of the sample surface. The beam-molecule interaction results in the degradation of the precursors followed by precipitation of the precursor elements onto the surface. Various elements can be deposited in this way; among them, the most common deposits are constituted by W, Pt, Au, Cr and C [59][60][61].
A cross-sectioning procedure can also be performed by focused ion beams. By exploiting the sputtering effect of FIBs, it is possible to raster a surface producing precise trenches, which can be used to observe the in-depth evolution of the sample [62]. The trench dimensions can vary from mm (for Xe or Ar plasma FIBs) to less than 1 µm, with a maximum depth in the range of hundreds of microns. Due to the small dimensions of the holes produced on the surface, this process (unlike mechanical cross-sectioning) is sometimes considered to be semi-destructive. FIB cross-sectioning is also faster than mechanical cross-sectioning, because it permits one to produce clean cuts in 20-60 min (depending on the size of the hole) without the need for further polishing. For these reasons, it is particularly well suited for the characterization of thin films with thicknesses below a few microns. Due to the small dimensions of the cross-sections, SEM, SIM or EDS analysis is usually adopted as a subsequent characterization technique.
While coarse determination of thick films can be performed with a standalone FIB, the precise determination of films below about 10 µm requires the use of an FIB/SEM equipped with the GIS. This is because, in order to avoid FIB-induced surface erosion, the surface of interest has to be covered by a thin protective layer. In FIB/SEMs, the protective layer is produced by a two-step deposition process. First, a thin layer of metal is deposited on the surface using the electron beam (e-beam deposition). Then, a thicker metallic layer is deposited using the ionic beam (i-beam deposition). The e-beam deposition is needed to avoid surface degradation (and thus loss of thickness information on the topmost layer) due to direct impingement of the ionic beam on the surface [63]. The full workflow for the preparation of a cross-section using an FIB is shown in Figure 2.
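As a rough, back-of-the-envelope illustration of why FIB cross-sectioning takes tens of minutes, the ideal milling time can be estimated from the trench volume, the beam current and a volume sputter rate. This sketch is not from the original text, and the 0.25 µm³/nC rate is only a nominal, assumed figure; real rates depend strongly on ion species, target material and incidence angle.

```python
# Illustrative FIB milling-time estimate (assumed nominal sputter rate).

def milling_time_minutes(width_um, depth_um, length_um,
                         beam_current_na, sputter_rate_um3_per_nc=0.25):
    """Ideal milling time (minutes) for a rectangular trench.

    sputter_rate_um3_per_nc: removed volume per unit of beam charge
    (um^3/nC), a rough material-dependent assumption.
    """
    volume_um3 = width_um * depth_um * length_um
    charge_nc = volume_um3 / sputter_rate_um3_per_nc
    seconds = charge_nc / beam_current_na  # nC / nA = seconds
    return seconds / 60.0

# A 20 x 10 x 15 um trench milled at 5 nA:
t_min = milling_time_minutes(20, 10, 15, 5)  # -> 40.0 minutes
```

With these assumed numbers the estimate lands at 40 min, consistent with the 20-60 min window quoted above; in practice, beam current is traded off against cut quality.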
Figure 2. An example of an FIB cross-sectioning workflow: (a) the sample is inserted in the chamber with 0° tilt; (b) a thin layer of metal is deposited on the surface using the electron beam; (c) the stage is tilted and a thicker metallic layer is deposited using the ionic beam; (d) the focused ionic beam is rastered on the surface, creating the cross-section by sputtering effect; (e) final SEM image of the cross-section.

Angle Lapping
Angle lapping is a sample preparation method used to increase the resolution in thin-film thickness determination of the adopted microscopy characterization technique; it is based on a change in cutting geometry for cross-sectioning.
During traditional preparation procedures, to obtain the stratigraphic information from the sample, the cutting plane is perpendicular to the surface (θcut = 90°) [64]. The uncovered section gives direct stratigraphic data on the displacement and thicknesses of the layers above the substrate. However, in angle lapping, the sectioning cut is performed at very low angles with respect to the sample surface (θcut < 10°) (Figure 3). This produces a "magnification effect" on the newly created section surface, on which all the layers appear stretched. Knowing the cutting angle and using trigonometry, it is consequently easy to derive the film thickness. The main advantage of this method is the magnification effect, which allows the resolution limits of the microscopic technique adopted for the quantification to be overcome, even for extremely thin films. A very flat film surface is mandatory for the precise determination of the thickness of the layers underneath; moreover, the operator must put special care into the preparation of the surface after the cut. It is common, especially for mechanically machined soft materials, to introduce thin-film deformation onto the cutting surface [65]. This sample preparation technique originates from the first metallographic studies, and it is still considered a valuable and low-cost method to overcome the resolution limits of optical microscopy characterization methods. Over time, its role has largely been taken over by FIB cross-sectioning and lamella preparation, which enable the fine characterization of ultrathin films.
Figure 3. Comparison between film thickness determination methods: traditional cross-sectioning (using a cut perpendicular to the surface, left column) and angle lapping (using small cutting angles with respect to the surface, right column).
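The trigonometry behind the magnification effect can be sketched as follows (a minimal illustration, not from the original text): a layer of true thickness t, exposed by a cut at angle θcut to the surface, appears on the lapped plane with apparent width w = t / sin(θcut), so the thickness is recovered as t = w · sin(θcut).

```python
import math

# Angle-lapping "magnification effect": recover true thickness from the
# apparent (stretched) width measured on the lapped plane.

def true_thickness(apparent_width_um, theta_cut_deg):
    """t = w * sin(theta_cut), with theta_cut the cut angle to the surface."""
    return apparent_width_um * math.sin(math.radians(theta_cut_deg))

# A 100 nm (0.1 um) film lapped at 2 degrees appears ~2.87 um wide,
# a ~29x magnification that an optical microscope can resolve:
w = 0.1 / math.sin(math.radians(2))  # apparent width on the lapped plane, um
t = true_thickness(w, 2)             # recovered thickness, um -> 0.1
```

The example shows why even films below the optical resolution limit become measurable: the small cutting angle stretches a 0.1 µm layer to nearly 3 µm.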


TEM Lamella Preparation
The ultimate procedure for thickness determination of thin films is the TEM lamella preparation process. During this preparation procedure, a small portion of the sample, usually a 10 × 5 × 1 µm solid, is extracted from the surface. This parallelepiped-shaped lamella is then thinned down to less than 100 nm to be observed transversally, uncovering the surface stratigraphy [66,67]. The TEM lamella preparation process is relevant to characterize thin films (below 1 µm) using electron beam methods. The small thickness of the lamella abruptly decreases the interaction volume, cutting down the signal coming from in-depth SE and BSE during SEM (sharpening the image), and permitting electron transmission during TEM or STEM (Figure 4).
To prepare a lamella, an FIB/SEM equipped with a GIS and a nanomanipulator is required. Moreover, the lamella preparation process is quite complex and consists of numerous steps that can vary depending on the load-out of the adopted machine, on the geometry of the sample chamber and on the sample architecture. Generally, the workflow can be divided into three main stages. In the first step, the lamella is shaped (carved) directly into the surface of the sample. In the second step, called lift-out, the lamella is detached from the sample and mounted on a TEM support grid. In the third step, the lamella is thinned down to enable transmission electron analysis. This last step is the most delicate for obtaining clean, defect-free lamellas. In Figure 5, an example of a workflow for a Tescan GAIA 3 FIB/SEM is shown: (c) a nanomanipulator is moved to the side of the lamella and soldered to its body, and then the lamella is detached from the surface; (d-f) the lamella is moved from the surface of the sample to the TEM support; (g) the lamella is soldered to the support; (h) the nanomanipulator is detached from the lamella; (i) the lamella is thinned down until electron-transparency; (j) in the end, lamella quality is tested using the microscope's built-in STEM detector.
The lamella preparation procedure enables the study of coatings even at sub-nanometric lateral resolution (for HR-TEMs).


Calo Tester
The Calo tester, also known as ball cratering or crater grinding, is a semi-destructive technique that is not very widespread but extremely practical in some cases; in addition, it is standardized in ISO 26423 [68] (formerly EN-1071 and VDI 3198 [69]). Compared to cross-sectioning, it has the advantage of being only locally destructive; therefore, the sample, instead of having to be cut in half, is excavated in an area with a diameter of about one millimeter [70,71]. Furthermore, the analysis is much faster, as the sample does not have to be embedded and lapped.
This technique consists of fixing the sample on a variable-angle support, onto which a steel sphere covered by an abrasive suspension is placed; the sphere is also in contact with a rotating cylinder that makes it roll (Figure 6). Within a few minutes, depending on the hardness of the sample and on its angle of inclination, which determines the load the sphere exerts on it, a circular crater will form, revealing all the layers [72]. Since the abrasion angle is small (due to the diameter of the sphere), layers of even a few microns will have a much greater apparent size and can be appreciated under a standard optical microscope. As in the case of the cross-section, only layers having a different color can be distinguished. Once the diameters of the concentric circles created on the sample have been measured, the real thickness of the films can be obtained using a dedicated formula (Figure 6) that takes into account the diameter of the abrasion sphere. Manufacturers state that the range of thicknesses that can be measured is between 0.1 and 50 microns.
Figure 6. Experimental setup, working principle and equation to calculate the film thickness (t) with a Calo tester; r is the radius of the sphere; x is the difference between the radius of the crater and the radius of the crater at the bottom of the coating; y is the diameter of the crater minus x.
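Assuming the standard spherical-cap geometry behind the ball-crater method, and craters much smaller than the ball, the thickness formula reduces to t = x·y/(2r), or equivalently t = (D² − d²)/(8r) in terms of the outer crater diameter D and the inner diameter d at the coating/substrate interface. A minimal sketch (the numeric values below are hypothetical example measurements, not from the text):

```python
# Ball-crater (Calo tester) coating thickness from crater diameters,
# using the small-crater approximation t = (D^2 - d^2) / (8 r).

def calo_thickness(D_mm, d_mm, r_mm):
    """Coating thickness (mm) from outer crater diameter D, inner
    crater diameter d and grinding-sphere radius r (all in mm)."""
    return (D_mm**2 - d_mm**2) / (8.0 * r_mm)

# Hypothetical example: 25 mm ball (r = 12.5 mm), D = 0.80 mm, d = 0.60 mm.
t_mm = calo_thickness(0.80, 0.60, 12.5)
t_um = t_mm * 1000  # -> 2.8 um coating
```

Note how a 2.8 µm coating produces crater rings hundreds of microns apart, which is exactly the geometric magnification that makes the method readable under a standard optical microscope.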

Optical Microscopy
To measure a cross-section of a sample, it is necessary to use the microscope in reflection mode; additionally, to be able to recognize the different layers, it is essential to have sufficiently high contrast, or more simply, different colors. Distinguishing different layers made, for example, of the same silvery metals, is impossible, while it is straightforward to measure, for example, a silver film on copper or brass.
The lateral resolution, i.e., the minimum distance between two resolved points, is defined by the Abbe principle. The resolution is related to the wavelength of the source used, in this case, visible photons. The theoretical resolution of an optical microscope using white light, not taking into account optical aberrations, is about 0.2 microns. However, more realistically, the measurement of films with a thickness of less than one micron is difficult and carries a considerable uncertainty [73-76].
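The 0.2 micron figure follows directly from the Abbe relation d = λ/(2·NA); a quick check, assuming green light (~550 nm) and a high-end oil-immersion objective (NA ≈ 1.4):

```python
# Abbe diffraction limit for an optical microscope: d = lambda / (2 * NA).

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Minimum resolvable distance (nm) between two points."""
    return wavelength_nm / (2.0 * numerical_aperture)

d = abbe_limit_nm(550, 1.4)  # ~196 nm, i.e. about 0.2 um
```

For dry objectives (NA < 1) the limit is proportionally worse, which is why sub-micron film thicknesses remain hard to quantify optically in practice.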


Electron Microscopy
Scanning electron microscopy is used when the thicknesses are too small to be analyzed with the optical microscope, or if the layers have an optical contrast among them that is too low. As a general rule, if the cross-section of a sample can be adequately analyzed with an optical microscope, there are no valid reasons to use an electron microscope; in fact, the sample needs certain characteristics in order to be measured with an SEM. Moreover, SEM analysis is intrinsically more expensive than a simple optical analysis. Despite this, for most coatings, optical microscopy is not enough to obtain adequate results. A sample that is analyzed by SEM must be stable in high vacuum (about 10^-7 bar) and under irradiation by an electron beam, and must be conductive. These limitations can be circumvented by means of certain techniques: there are specific environmental or low-vacuum SEMs that allow analysis to be carried out with a pressure in the order of one Pascal; the acceleration potential of the electron beam can be reduced to a few kV in order not to damage the sample (but with a consequent decrease in image quality); non-conductive samples can be graphitized to avoid the accumulation of surface charges that, in this way, are dispersed to ground. However, these problems arise when biological, organic or polymeric samples are analyzed; for metallographic cross-sections, there are rarely complications of this type.
Numerous advances have been made in the last few decades in the field of electron microscopy. The resolution of these instruments is quite variable and ranges from about 20 nm for older instruments with thermionic emission down to below one nanometer for newer instruments with a field emission source (FEG-SEM). In SEM, the main limit to the resolution is not so much the wavelength of the probe (the electrons) as the diameter of the beam, and therefore its focus and collimation. For this reason, one stratagem to improve the resolution consists in bringing the sample closer to the source, reducing the working distance and, consequently, the opening of the electron beam cone. SEM images can be acquired using secondary electrons (SEs), backscattered electrons (BSEs) or through a microanalysis map. Because of the different nature of these signals, they differ in the depth from which the signal comes (Table 2) and in their volume of interaction (Figure 7). A higher volume of interaction results in a lower lateral resolution, which translates into a less clear separation of the edges between the films to be analyzed [1,14].
SEs are produced when a primary electron from the beam excites an electron of the atoms of the sample to the point of tearing it from the nucleus; these electrons are low in energy (<50 eV), and only those generated closest to the surface are detected (Figure 8a).
SEs carry the morphological information with them, so the contrast in the image is based on the heights of the sample. Since our sample has been levelled, the contrast produced can only be due to the different natures of the materials. SEs also carry, to a small extent, compositional information, since the number of electrons emitted is proportional to the atomic number of the element under the electron beam. Thus, layers of elements with significantly different atomic numbers can be distinguished [77,78]. A trick that allows similar elements to be observed with SEs is to chemically etch the sample before the analysis in order to corrode one material slightly more than another; in this way, morphological differences, clearly visible to the SEs, are reintroduced between the different layers. However, the etching must be designed appropriately for the sample to be analyzed.
BSEs are the primary electrons which, interacting with the positive nuclei of the atoms of the sample, are backscattered towards the source; for this reason, they are very energetic and penetrating (Figure 8b). BSEs therefore carry mainly compositional information: heavier elements generate more BSEs. By sacrificing part of the lateral resolution, the image obtained is much more contrasted, highlighting differences between layers that can be below one atomic number unit for more modern systems, in some cases allowing even a deposit of a metal to be distinguished from its alloy (for example, copper on brass).
Finally, the EPMA signal can be used (Figures 7d and 8c). This is the signal with the highest volume of interaction, and for this reason, its lateral resolution is inferior to the previous ones. In this case, the objects analyzed are not electrons but the X-ray photons emitted from the sample after the interaction with the primary electrons, which contain detailed information on the composition. With EPMA, alloys whose compositions differ by a few percentage points can be distinguished. The different layers can then be highlighted by acquiring a map or a line scan perpendicular to the layers. However, given the low lateral resolution, films just below one micron are challenging to quantify.
Another trick to increase the lateral resolution is to reduce the volume of interaction, which is possible by decreasing the energy of the electrons, i.e., lowering their acceleration potential. This dramatically reduces the signal but, with modern instruments, good images can be obtained using acceleration potentials of only a few kV. The margin for this strategy is limited, however: in the case of EPMA, to have a good emission of the X-ray photons, the electron beam must have an acceleration potential of roughly one and a half times the energy of the emission peak of interest.
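The overvoltage rule of thumb quoted above lends itself to a one-line estimate. The emission-line energies in the table below are standard tabulated values; the helper function itself is only an illustrative sketch.

```python
# Rule of thumb from the text: the beam acceleration potential should be
# roughly 1.5x the energy of the X-ray emission peak of interest.
LINE_ENERGY_KEV = {"Cu Ka": 8.05, "Au La": 9.71, "Ag Ka": 22.16}

def min_beam_kv(line_energy_kev: float, factor: float = 1.5) -> float:
    """Minimum suggested SEM acceleration potential, in kV."""
    return factor * line_energy_kev

for line, energy in LINE_ENERGY_KEV.items():
    print(f"{line}: {energy} keV line -> beam of at least {min_beam_kv(energy):.1f} kV")
```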

Scanning Ion Microscopy
Scanning ion microscopy exploits the electron emission from a surface targeted with an energetic ion beam. These emitted electrons, with energies comparable to those of the secondary electrons produced by beam-sample interaction in SEM, can be collected by a detector to form an image. Like scanning electron microscopy, SIM can be used for the study of cross-sections, yielding more surface-sensitive images. Ions tend to have smaller mean free paths inside matter than electrons [54,79]; this smaller beam-surface interaction volume yields crisper, less bulk-mediated images. Moreover, the number of SEs produced per impacting ion is larger than the number of SEs produced by the impact of an electron, which leads to an enhanced signal [80] even at low ion currents. Finally, owing to their size, ions tend to be particularly sensitive to crystal orientation; images acquired by SIM possess stronger crystalline contrast than their SEM counterparts, due to the channeling effect [81]. Despite all these strong points, two technical downsides prevent SIM from being a valuable alternative to SEM. Ion image acquisition is a destructive process; beam rastering alters the surface of the sample by producing sputtering. Furthermore, even with recent beam columns, SEM resolutions still cannot be reached: the most common LMIS sources today obtain a resolution of tens of nanometers (about 20 nm for Ga), strongly depending on the chemical nature of the primary ions.
SIM can usually be performed in every FIB/SEM apparatus, using any ion source, by adopting the Everhart-Thornley scintillator used for SEM SE imaging [14]. This detector can also be found in standalone FIB machines, which are harder to find today due to the wider diffusion of combined FIB/SEMs. Only one dedicated SIM device still holds a small portion of the market, namely the helium ion microscope (HIM). This microscope represents one of the best-performing machines for scanning imaging: HIM is capable of delivering very narrow beam spots, reaching resolutions below 0.27 nm with a minimal bulk interaction volume [80,82,83]. Moreover, due to its chemical nature, the use of He minimizes the implantation and reaction phenomena that can occur when adopting more reactive ions like gallium.

Data Analysis
Once the image is obtained with a microscopic technique, pixels must be converted into a unit of length. Most of the software used to acquire microscopy images (both optical and electronic) commonly has a tool for the extraction of this information; otherwise, there is free or commercial software that can do the same job, e.g., ImageJ, Gimp and Adobe Photoshop. In order to convert the pixels to lengths, a reference scale must be printed on the image, and through that scale, all the dimensions of interest in the image can be obtained. The thickness of the films can therefore be measured, with appropriate care taken to ensure the measurement is perpendicular to the film. By making the measurement at several points, statistics of the thicknesses can also be gathered. However, if the edges of the film in the image are not well defined, finding the limits by eye can be difficult, as shown in the blurred image of an Au film reported in Figure 1. In these cases, instead of analyzing the image, it is more practical to observe the profile graph in which the greyscale values are reported. If the different layers have a contrast between them, they also have distinct grey values; the delimitation between one layer and another can be defined as the point where the value is intermediate between the two layers or, more rigorously, at the inflexion points of the graph. In this case the spatial resolution, and therefore the uncertainty of the measurement, is defined as the distance between the points where the variation of the grey value is in the range of 20%-80% [84], as required by ISO 18516 [85]. On the other hand, an incorrect way to make the separation of the layers more defined is to digitally postprocess the images by acting on brightness and contrast.
In fact, as opposed to varying these parameters during the measurement, by performing software alterations of the image, the edge of the film can move as the different shades of grey are processed, as demonstrated in Figure 9.
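The 20%-80% criterion described above can be sketched in a few lines of code. The profile below is synthetic and the function names are our own; this is only an illustration of the procedure, not the software used in practice.

```python
import numpy as np

def edge_position_and_width(profile, px_size_um):
    """Locate a layer boundary on a greyscale line profile.

    The boundary is placed where the grey value is midway between the two
    plateaus; the spatial resolution (measurement uncertainty) is taken as
    the 20%-80% transition distance, following the ISO 18516 convention.
    """
    x = np.arange(len(profile)) * px_size_um
    norm = (profile - profile.min()) / (profile.max() - profile.min())
    p20, p50, p80 = (np.interp(level, norm, x) for level in (0.2, 0.5, 0.8))
    return p50, abs(p80 - p20)

# Synthetic blurred edge between a dark layer (~40) and a bright one (~200)
profile = 120.0 + 80.0 * np.tanh(np.linspace(-3.0, 3.0, 61))
pos_um, width_um = edge_position_and_width(profile, px_size_um=0.05)
print(f"boundary at {pos_um:.2f} um, 20-80% width {width_um:.2f} um")
```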


Profilometry and Scanning Probe Microscopy
Profilometry is a class of techniques that permits the characterization of the line profiles of surfaces. It can be divided into non-contact and contact profilometry, depending on the probe used. In non-contact profilometry, light is used as the probing medium; information is then extracted using interferometric methods. On the contrary, contact profilometry relies on the use of a sharp needle, the tip of which is shifted horizontally on the sample surface by following a predetermined pattern. During this in-plane (x-y) movement, the needle is free to move on the vertical (z) axis according to the surface profile, and thus to collect the height value with respect to the horizontal movement, giving a line-profile of the surface [86]. Contact profilometry is the oldest profilometric technique but is still widely adopted in industrial contexts to acquire fast information on surface roughness. Contact profilometry can also be used to determine the thickness of thin films adopting the "scratch method". This technique exploits discontinuities on the coating surface, which uncover the base substrate underneath. By horizontally moving the tip between the substrate surface and the coating surface, it is then possible to record a height difference between the base material and film surface and thus determine the value of the coating thickness ( Figure 10). Despite its apparent simplicity, the scratch method has some intrinsic limitations: (a) Test samples have to be prepared accordingly, with coating-free zones for the substrate height measurement or (b) a scratch has to be performed on the continuum of the sample coating, to uncover the substrate. This second procedure is risky because the base surface can easily be deformed by the scratching process, hindering precise thickness measurement. 
Moreover, coatings could show different thickness values near the boundaries with the uncovered zones, both due to a "border effect" arising during the film deposition or to plastic deformation during the scratching of the surface. It is then mandatory to perform various line acquisitions to acquire some profile statistics.
Figure 10. Line-scan performed using the "scratch method", side-view; h represents the measured height.
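The extraction of h from a scratch-method line scan can be sketched as follows; the data are synthetic and the function name is our own, purely illustrative choice.

```python
import numpy as np

def scratch_step_height(profile_nm, substrate_idx, film_idx):
    """Coating thickness h from a 'scratch method' line scan.

    Medians of the two plateaus are used so that pile-up spikes at the
    scratch borders do not bias the levels.
    """
    return float(np.median(profile_nm[film_idx]) - np.median(profile_nm[substrate_idx]))

# Synthetic scan: uncovered substrate at ~0 nm, coating plateau at ~250 nm
rng = np.random.default_rng(seed=0)
scan = np.concatenate([np.zeros(100), np.full(100, 250.0)])
scan += rng.normal(0.0, 2.0, scan.size)  # instrument noise
h = scratch_step_height(scan, substrate_idx=slice(10, 90), film_idx=slice(110, 190))
print(f"coating thickness h ~ {h:.0f} nm")
```

Averaging several such line scans, as the text recommends, amounts to repeating this call on each profile and taking statistics of the results.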
Scanning probe microscopy (SPM) can be used to perform thickness measurements. This class of techniques, which is considered as a subgenre of contact profilometry, exploits different tip-surface interactions to trace the surface profile; mechanical contact, van der Waals forces, capillary forces, electrostatic forces and chemical bonding are among the most important ones. Atomic force microscopy (AFM) [87,88], which usually exploits mechanical interactions between the probe and the surface, can be considered as a natural evolution of contact profilometry. AFM permits a surface to be reconstructed by acquiring parallel line profiles, which can be stitched to form an image that contains 3D information of the surface. Just like contact profilometry, AFM uses a small needle mounted on a cantilever to acquire the surface image. The rastering of this probe across the surface is responsible for the acquisition of line profiles, which are encoded by a laser/CCD system. AFM can work both in contact mode or tapping mode. In contact mode, the tip is shifted continuously on the surface just like in profilometric measurements; tapping mode exploits a periodic vibration of the tip on the surface [89]. The choice between these two scanning modes depends mainly on the hardness of the surface under study. The tapping mode is suitable for soft materials, like polymer films, where the continuous shifting of the tip on the surface would produce deformation and artefacts.
In contrast, for hard surfaces (most of the metallic, ceramic or semiconductor coatings/layers), contact mode can be used without producing deformation of the surface. There are also other SPM methods to acquire the thickness of thin films directly. Conductive AFM (C-AFM) can be used to measure coating thicknesses in materials with high electric resistance by applying Ohm's second law. Thickness sensitivity of C-AFM can span from hundreds of microns (for the cheaper bench apparatus) to tens of nanometers, with a scanning length that can reach tens of millimeters. AFM is much more sensitive; it can reach sub-nanometric z sensitivity but, in contrast to common profilometry, is limited to a relatively small working area, which can hardly exceed about 50 × 50 µm².
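As an illustration of the C-AFM approach, Ohm's second law, R = ρ·t/A, can be inverted for the film thickness; the resistance, contact area and resistivity below are assumed, order-of-magnitude numbers, not values from the text.

```python
def thickness_from_resistance(resistance_ohm, contact_area_m2, resistivity_ohm_m):
    """Ohm's second law, R = rho * t / A, solved for the thickness t (in m)."""
    return resistance_ohm * contact_area_m2 / resistivity_ohm_m

# Assumed example: 100 GOhm measured through a ~(10 nm)^2 tip contact on a
# highly resistive film with rho = 100 Ohm*m  ->  t = 100 nm
t_m = thickness_from_resistance(1e11, 1e-16, 1e2)
print(f"t = {t_m * 1e9:.0f} nm")
```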

Secondary Ion Mass Spectrometry
Secondary ion mass spectrometry exploits the ion emission [90] produced from a surface targeted by a primary ion beam to acquire compositional depth-profiles of a sample. The ions produced by the primary beam, called secondary ions (SIs), are a small percentage with respect to the neutral sputtered atoms (around 1% ions, depending on the ion source and surface [91]), but they can be collected by a mass analyzer in order to acquire the elemental composition of the sample. By exploiting the destructive ion-matter interaction, it is then possible to peel away layers of the surface by rastering (similarly to EDS mapping), to collect the point-by-point SI signal and to reconstruct 3D compositional maps of the topmost layers of a surface. In order to perform SIMS, two components are needed: (1) a scanning ion column, to produce a controlled ion beam, and (2) a mass selector/detector. Again, the nature, dimension and energy of the ion beam deeply influence the operating parameters, in particular the SI yield. In SIMS, the same LMIS and GFIS sources can be used, with the addition of low-energy cluster ion sources [92], which are "softer" on impact with the surface, producing larger SIs and shallower etchings [93]. Due to their low fragmentation capability, cluster ion sources are usually adopted for the study of light elements and organic molecules, such as polymers.
By contrast, the use of single ions as beam components (such as Ga LMIS) is particularly suited to elemental analysis in inorganic chemistry, due to their extensive fragmentation capability upon impact with the surface. As regards the mass devices responsible for the analysis, they can be divided into three categories, depending on the principle used to select and separate the incoming ions: magnetic sector, quadrupole and time of flight (TOF). Time of flight is the most common for inorganic characterization [91,93,94]; it is capable of separating SIs with small mass-to-charge differences, such as those typically produced by sputtering hard metallic coatings, semiconductors or ceramics with a hard, energetic ion beam (like Ga). TOF-SIMS can be easily implemented in almost all FIB/SEM devices using the in-built ion sources.
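The TOF separation principle can be sketched numerically: an ion accelerated through a potential U leaves with v = sqrt(2·z·e·U/m), so heavier ions arrive later at the detector. The drift length and acceleration voltage below are assumed instrument parameters for illustration only.

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def flight_time_us(mass_amu, charge_state, drift_length_m, accel_volts):
    """TOF mass separation: q*U = m*v^2/2  ->  t = L * sqrt(m / (2*q*U))."""
    mass_kg = mass_amu * AMU
    v = math.sqrt(2.0 * charge_state * E_CHARGE * accel_volts / mass_kg)
    return drift_length_m / v * 1e6  # seconds -> microseconds

# Assumed geometry: 1 m drift tube, 2 kV acceleration; nearby masses separate
for ion, mass in (("63Cu+", 62.93), ("64Zn+", 63.93)):
    print(f"{ion}: {flight_time_us(mass, 1, 1.0, 2000.0):.3f} us")
```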

Thickness Determination by Chemical Dissolution
To determine the thickness of a film, it is also possible to exploit techniques of chemical analysis such as atomic absorption spectroscopy (AAS) [95,96] or inductively coupled plasma mass spectrometry (ICP-MS) [97]. The procedure consists of dissolving the film in solution and then analyzing its concentration. Knowing the surface area and the density of the material, it is possible to find the thickness of the film. Thanks to the high sensitivity of these analytical techniques, even the thicknesses of ultra-thin films can be measured. AAS is the less sensitive of the two methods, yet it is capable of probing concentrations down to 0.1 ppm. A 0.1 ppm solution, obtained from an object with a 1 cm² surface on which a material with a density of 10 g/cm³ was deposited and then dissolved in 10 mL of solution, corresponds to a thickness of just 1 nm. On the other hand, if the film is very thick, the solution will be too concentrated for direct spectroscopic analysis and a dilution will be needed.
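The worked example above can be checked with a short script; the only assumption is the usual unit bookkeeping, ppm ≈ mg/L for dilute aqueous solutions.

```python
def film_thickness_nm(conc_ppm, volume_mL, density_g_cm3, area_cm2):
    """Average film thickness from a dissolution analysis.

    t = dissolved mass / (density * area); 1 ppm ~ 1 mg/L in dilute
    aqueous solution, so mass[g] = conc * volume[mL] * 1e-6.
    """
    mass_g = conc_ppm * volume_mL * 1e-6
    thickness_cm = mass_g / (density_g_cm3 * area_cm2)
    return thickness_cm * 1e7  # cm -> nm

# The example from the text: 0.1 ppm, 10 mL, density 10 g/cm3, area 1 cm2
print(film_thickness_nm(0.1, 10.0, 10.0, 1.0))  # -> 1.0 (nm)
```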
Depending on the type of material to be dissolved, the appropriate solution must be used. The dissolution can be electrolytic [98] or chemical; in the first case, an anodic current is applied to favor the oxidation of the metal, while in the second, an oxidizing species is present in solution. The following are chemical dissolution solutions for select metals [99]:

The advantage of this method is the ability to analyze very thin layers with relative simplicity; however, there are problems that limit its use. In addition to being destructive, the measurement is not local but provides the average thickness over the entire sample. Since the total metal content in solution is analyzed, and the thickness is obtained by knowing the density and area of the sample, the latter must be known exactly. For flat surfaces, it is easy to measure the geometric area, but objects with three-dimensional and complex geometries may require the use of a 3D scanner or electrochemical measurements. Moreover, with very aggressive solutions it is difficult to limit the attack to a single layer, especially when removing the film of a noble metal from a non-noble substrate. If more than one layer contains the same metal, for example, a copper film on brass, determining the thickness is much more complicated. Furthermore, in alloys one metal can be attacked preferentially over the other.

X-ray Fluorescence Spectroscopy
X-ray fluorescence spectroscopy is an analysis tool widely used for the elemental and chemical analysis of materials. When materials are exposed to high-energy X-rays, ionization of their component atoms may take place, exciting them; during the relaxation process, characteristic X-ray photons are emitted and detected for analysis. Due to the incident high-energy X-rays, inner-shell (K, L, M, etc.) transition phenomena occur within 100 fs, producing characteristic fluorescence radiation. Ionization consists of the ejection of one or more electrons from the atom and may occur if the atom is exposed to radiation with an energy larger than its ionization energy. X-rays and gamma rays can be energetic enough to eject tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way makes the electronic structure of the atom unstable, and electrons in higher orbitals "fall" into the lower orbital to fill the holes left behind. In falling, energy is released in the form of photons with an energy equal to the difference between the two orbitals involved. Thus, materials emit radiation at the characteristic energies of the atoms present. The apparatus (Figure 11) consists of (i) an X-ray source or tube; (ii) a fixed or exchangeable collimator that determines the measurement spot; (iii) one or more filters to attenuate the characteristic lines of the tube, making the light more "white"; and (iv) a detector.

A variety of samples in different states, such as solids, powders and liquids, can be analyzed using this technique. It can also be used to measure the composition and thickness of coatings and layers. The characteristic photons of the sample are collected by a detector that uses the same working principle as EPMA.
Both the source and emitted photons can pass through an analyzing crystal that acts as a monochromator, differentiating between energy dispersive (ED) XRF without analyzing the crystal; wavelength dispersive (WD) XRF, in which the emitted photons are selected with a monochromator; and monochromatic wavelength dispersive (MWD) XRF, in which two optics are used, one for the source and one for the emitted photons. For reasons of cost and ease of use, energy dispersion instruments are the most used. The incoming high-energy beam is very penetrating ( Figure 12); for this reason, the maximum detectable thickness is related to the energy of the emitted X-rays. A variety of samples in different states, such as solids, powders and liquids, can be analyzed using this technique. It can also be used to measure the composition and thickness of coating and layers. The characteristic photons of the sample are collected by a detector that uses the same working principle of EPMA. Both the source and emitted photons can pass through an analyzing crystal that acts as a monochromator, differentiating between energy dispersive (ED) XRF without analyzing the crystal; wavelength dispersive (WD) XRF, in which the emitted photons are selected with a monochromator; and monochromatic wavelength dispersive (MWD) XRF, in which two optics are used, one for the source and one for the emitted photons. For reasons of cost and ease of use, energy dispersion instruments are the most used. The incoming high-energy beam is very penetrating ( Figure 12); for this reason, the maximum detectable thickness is related to the energy of the emitted X-rays. XRF is the most common instrument used by industries for film thickness investigations, since it is fast, non-destructive and relatively simple to use, making it perfect for the quality control of the products [100,101]. For this reason, there are also standard procedures to measure for compliance with ISO 3497 [102] and ASTM B568 [103] regulations. 
Commercial instruments can measure easily the thickness of almost any material (with some restrictions for lighter elements), whether conductive or not, in the range from 10 nm to 100 µm [104,105]; nevertheless, depending on the materials under investigation and the instrumental settings, the limits of measurement can be extended from less than 1 nm [106] to a few centimeters [107]. The lateral resolution of XRF is very low, and the spot size commonly ranges from 0.1 to 15 mm. The relative intensity (normalized respect to a bulk element) of an emission from a film follows an exponential trend [19] but can be approximated to a second-order curve for small ranges far from the saturation thickness [8]. The emission of a gold film on a Cu substrate, as a function of the thickness, is reported in Figure 13 using a log-log scale. In this case, the range of thicknesses between the relative intensities of 0.9 (semi-infinite thickness [108]) and 0.1 (infinitely small) of gold is between 0.2 and 50 µm. The output of the instrument is a spectrum in which the position of the peaks corresponds to the spectroscopic emission of the elements present in the sample, while the intensity is correlated to the sample composition in the volume of the interaction of the incident beam. For this reason, there is no direct information on the thicknesses, but the intensity of the peaks in the spectra will be a function of thickness. A sample with a thicker coating will emit more photons from the film and fewer from the substrate than a thinner one. 
Since no information about the thickness can be extracted a priori from the spectra, the thickness can be deconvoluted only with the right assumptions about the nature of the sample and the use of a good calibration curve; this complication can introduce large uncertainties or even erroneous results.
Deriving the coating's thickness from the X-ray spectrum requires an experimental calibration curve built from standards; however, due to the strong dependence of the X-ray spectrum on the nature of the coating and the substrate, standards are not always available. The variability of thickness, layer composition, multilayer architectures and the substrate's chemical nature makes it difficult to produce certified standards. This issue is critical in industrial applications; the determination of precious metal coatings in the fashion industry is a prominent example, where the products are made with many combinations of coatings and substrates with extreme variability. A calibration curve obtained with standards of known thickness was used to measure nanometric vanadium (V) oxide films on glass with a portable XRF, measuring the attenuation of the Ca emissions [5]. Hamann [109] was able to detect fractions as small as 1% of a monolayer in over 20 samples without the use of standards or models, combining WD-XRF and XRD measurements to obtain the proportionality constant between the emitted X-ray intensity and the number of atoms per unit area.
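A calibration of the substrate-attenuation kind used in the V-oxide study can be sketched as a log-linear least-squares fit over standards of known thickness, then inverted for an unknown sample. All numbers below are hypothetical:

```python
import math

def fit_attenuation(standards):
    """Least-squares fit of ln(I) = ln(I0) - k*t to standards of known
    thickness, i.e. the substrate-line model I(t) = I0 * exp(-k*t)."""
    ts = [t for t, _ in standards]
    ys = [math.log(i) for _, i in standards]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return math.exp(ybar - slope * tbar), -slope  # (I0, k)

def thickness_from_substrate(i_meas, i0, k):
    """Coating thickness from the attenuated substrate-line intensity."""
    return math.log(i0 / i_meas) / k

# Hypothetical standards: (thickness in um, substrate-line counts).
standards = [(0.5, 8200), (1.0, 6700), (2.0, 4500), (4.0, 2050)]
i0, k = fit_attenuation(standards)
t = thickness_from_substrate(3000.0, i0, k)  # thickness of an unknown sample
```

The fit quality (and hence the thickness uncertainty) degrades quickly when the standards do not match the coating/substrate pair, which is exactly the availability problem described above.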
Nowadays, the most common approach is the use of the fundamental parameter (FP) method [6,104,110,111]. FP relies on theoretical equations that consider the composition and thickness of a sample to evaluate the XRF intensity. In practice, the FP method is combined with a few pure-element empirical standards to correct unpredicted deviations due to matrix effects [112,113]. With the FP method, it is possible to determine the film thickness of single-layer and even multilayer samples if the structure and the composition are known exactly; nevertheless, the error associated with the measurement is significant. Typical accuracy for single-layer samples is ±5%, while for multilayer samples this value grows to ±10% for the upper layer and ±37% for the first underlayer [114][115][116], due to inaccuracies of the method for complex samples.
Often, the thickness and composition of the underlying layers in multilayer architectures are not exactly known, and they are introduced into the measurement software using an initial estimation [117]. Many authors have investigated the FP method for multilayered samples in the micron range (Au/Ni/Cu [114]) as well as in the nanometer range (Ni/Cu/Si [118]). This method is advantageous when it is difficult to obtain accurate certified reference materials for layer thickness calibration, as in semiconductor research [119]. Vrilink [115] showed a good correlation between FP, SEM and profilometry measurements of multilayer samples with different compositions (Rh, Ta, W, Ti, Pd, Pt, Ni, Au, Cr) between 20 and 250 nm, considering the density variation of thin films. Ager [117] highlighted the discrepancy between SEM and non-destructive techniques like RBS and XRF, caused by the difference in density between bulk metal and thin films arising from porosity; in that paper, comparisons between reference and electroplated samples were performed to prove the hypothesis for ancient gildings, but the considerations are valid in many other fields. Exploiting the FP method, both the emission line of the top layer and the reduction in the intensity of the underlying layer can be used for thickness determination, as shown in a 2017 study in which atomic layer deposition (ALD) oxide samples were tested [108].
An alternative to the use of standards and the FP method consists of a semiquantitative approach based on calibration curves obtained with simulation software using Monte Carlo (MC) algorithms. When the materials and the architecture to simulate are chosen, it is also possible to specify the density of the materials; in this way, the user can simulate materials with a porosity different from the nominal one due to the deposition method, as happens for example in electroplating, where the density of the coatings is often lower than that of the bulk material. Moreover, the MC method simulates X-ray spectra using a statistical approach that counts the photon interactions in the sample. With this approach, inhomogeneities of the sample, spectral and spatial distributions of the beam, polarization effects, photo-absorption, multiple fluorescence and scattering effects, which are difficult to model with the FP method, can be taken into account. The simulation approach is not very common, probably because the FP method was preferred for many years as computationally cheaper; however, with the latest technological developments, even a personal computer can produce a good simulation in a relatively short time. The two main software programs that provide a simulated spectrum with the MC approach are XRMC [120] and XMI-MSIM [121]. Both codes use the Xraylib database [122,123]. XRMC is generally used for complex 3D geometries, while XMI-MSIM can only simulate samples composed of parallel layers; however, for simple geometries, XMI-MSIM is currently superior to XRMC in simulating XRF experiments [124]. Thickness evaluation using the MC method is widely used in cultural heritage applications.
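A common way to use such simulated spectra is to compare each candidate simulation with the measured spectrum and keep the best match. A minimal sketch of that selection step, with hypothetical channel counts for three assumed thicknesses, could be:

```python
def chi_squared(measured, simulated):
    """Pearson chi-squared between a measured spectrum and an MC-simulated
    one, using the simulated counts as expected values (channels with zero
    expected counts are skipped)."""
    return sum((m - s) ** 2 / s for m, s in zip(measured, simulated) if s > 0)

def best_model(measured, candidates):
    """Pick the candidate (thickness, simulated_spectrum) pair whose
    spectrum minimizes chi-squared, and return its thickness."""
    return min(candidates, key=lambda kv: chi_squared(measured, kv[1]))[0]

# Hypothetical channel counts; in practice these come from XRMC/XMI-MSIM
# runs at different assumed coating thicknesses (in um).
measured = [120, 340, 560, 300, 90]
candidates = [
    (0.5, [150, 420, 640, 350, 110]),
    (1.0, [118, 335, 570, 295, 88]),
    (2.0, [80, 250, 480, 240, 70]),
]
```

In a real workflow the model would then be corrected and re-simulated until the chi-squared stops improving, rather than choosing from a fixed grid.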
Schiavon [125] used the XRMC code to obtain the thickness and composition of Nuragic artifacts, comparing the simulations with the experimental measurements to confirm hypotheses based on bulk chemical composition, structural observations and historical information. A similar approach was used by Brunetti [126] and Bottaini [127] for Peruvian and Portuguese artifacts; an MC simulation was performed defining the experimental setup and the sample, and the simulated spectra were then compared to the measured one, both visually and with the chi-squared test. If differences were found, the model was corrected until the two spectra matched, determining both the composition and the structure. Besides this comparative method, MC simulation can also be employed to obtain a calibration curve based on simulated standards. XMI-MSIM has been successfully used for this purpose on electroplated samples, normalizing the result against the semi-infinite bulk element, with even better results than the FP semi-empirical method [8]. Combining MC simulation with multivariate analysis, it is even possible to obtain reliable results for multilayer samples or complex matrices like copper coatings on copper alloy [7]. A similar approach was used by Pessanha [9] for cultural heritage gildings on Pb using the PENELOPE code, exploiting the ratio between two lines of the same element for normalization, as if it were an internal standard. Cesareo and his collaborators have used this data processing approach widely in the last decade [128][129][130][131][132][133][134], exploiting the differential attenuation (or self-attenuation) of two lines of the same element of the substrate (or coating). The curves of the X-ray line ratios versus thickness can be obtained by knowing the value of these ratios for an infinitely thin layer and a semi-infinite one; these values are tabulated. The accuracy of MC methods in cultural heritage has been assessed as approximately 0.1 µm [134].
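The two-line (differential attenuation) method lends itself to a closed-form inversion: the ratio of two lines of the same substrate element decays exponentially with coating thickness because the coating attenuates the two lines differently. A sketch with illustrative (not tabulated) attenuation coefficients:

```python
import math

def thickness_from_line_ratio(r_meas, r_thin, mu_a, mu_b, density_g_cm3,
                              sin_takeoff=1.0):
    """Two-line method: R(t) = R_thin * exp(-(mu_a - mu_b)*rho*t/sin),
    where R_thin is the line ratio for an infinitely thin coating and
    mu_a, mu_b are the coating's mass attenuation coefficients (cm^2/g)
    for the two lines.  Returns the thickness in um."""
    delta = (mu_a - mu_b) * density_g_cm3 / sin_takeoff  # 1/cm
    return math.log(r_thin / r_meas) / delta * 1e4

# Hypothetical values: the uncoated reference ratio is 5.0; measuring 4.0
# through a dense coating (rho = 19.3 g/cm^3) implies a micron-scale layer.
t = thickness_from_line_ratio(4.0, 5.0, 200.0, 120.0, 19.3)
```

Because the ratio is self-normalizing, the method does not require knowing the absolute beam intensity, which is what makes it attractive for portable instruments in the field.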
XRF techniques are not commonly used for organic films, since these do not give fluorescent radiation detectable in air. Porcinai [135] used the X-ray attenuation of the substrate, calculating the ratio between two emission lines of the same element, to evaluate the thickness of protective polymer layers; empirical, semi-empirical and analytical (FP) methods were compared. Recently, De Almeida [136] used a multivariate approach to evaluate multiple regions of the XRF spectra and obtain the thickness of polymeric films. Reviews [137][138][139] of the advances in the XRF family of techniques are published every year, highlighting the latest developments in instrumentation, methodologies and data handling in this field.

Electron Probe Microanalysis
Electron probe microanalysis was first developed in 1951 by Castaing [140]. EPMA permits one to analyze the composition of homogeneous materials in a region of a few microns from the surface. EPMA can be conducted using two different approaches: wavelength dispersive X-ray spectroscopy (WDS) [141] or energy dispersive X-ray spectroscopy (EDS) [142][143][144][145][146]. WDS is generally considered an excellent method for microanalysis because it is more sensitive and has a higher resolution than EDS, but it is more expensive and needs a dedicated device. EDS, on the other hand, can be conducted by merely coupling a detector to a SEM, a widespread instrument in the academic and industrial sphere, especially since the recent availability of inexpensive benchtop instruments.
As mentioned above, EPMA can be used in a destructive procedure for mapping cross-sectioned samples; here, it is used as a non-destructive technique, measuring the sample perpendicularly to the surface [147]. This technique interprets every sample as homogeneous, since the output information is a spectrum. For this reason, there is no direct information on the thickness, but the intensity of the peaks in the spectra is a function of thickness, as it is for XRF. EPMA is not well known for thickness measurement, but it is an attractive candidate because it enables fast, quantitative [148,149] and non-destructive [150] analysis with the additional benefit of a lateral resolution in the micron range [151]. In addition, the probe (electrons) is not very penetrating (Figure 14), and for this reason it is possible (by adjusting the beam energy) to analyze ultrathin films or just the top layer to obtain their composition [152][153][154][155][156]. The EPMA detector is available as an upgrade of a conventional SEM, but most instruments come at least with an EDS detector by default. The electron bombardment of the beam excites the atoms in the sample, knocking out electrons from the inner shells. Such a state is unstable, and the resulting electron-hole is immediately filled by an electron from a higher atomic orbital. The energy difference is released in the form of an X-ray quantum. The resulting X-ray radiation is characteristic of the transition and the atom. For a single element, different transitions are allowed, depending on which shell the higher-energy electron comes from and in which shell the hole must be filled. This results in X-ray quanta, which are labelled Kα, Kβ, Lα, etc. The energy of the X-ray lines (the position of the lines in the spectrum) indicates which element is under investigation. The intensity of a line depends on the concentration of the element within the sample.
Furthermore, the electrons, slowing down in the electric field of the atomic nuclei, generate X-ray braking radiation, called bremsstrahlung, which constitutes the continuous background of the EPMA spectrum. The EPMA detector exploits the interaction between X-rays and a suitable material, generally a single silicon crystal doped with lithium, coated at both ends with a conductive gold layer and kept at −192 °C with liquid nitrogen. Other variants are high-purity germanium detectors and silicon drift detectors (SDD) with Peltier cooling. When an X-ray photon is absorbed in the sensitive area of the detector, electron-hole pairs are produced, which generate an electric current that is then sensitively amplified. WDS instruments present a diffracting crystal that selects the photons to be sent to the detector, which measures only the number of pulses, i.e., photons. In the EDS system there is no photon selector; thus, the signal of each photon is processed to obtain its energy value, and during this time the system rejects every other signal, resulting in a dead time. A high dead time produces spectra with high resolution but a low signal, because many photons are rejected; on the other hand, a low dead time produces a high signal but wide peaks. A longer process time is needed for quantitative analysis where spectral resolution is important, whereas if maximizing the number of X-rays in a spectrum or map is most important, a shorter process time can be used. Si (Li) detectors operate at count rates of about 1 to 20 kCPS with optimal dead times of 20%-30%. SDDs are now preferred to Si (Li) detectors because they can handle much higher count rates of >100 kCPS and dead times of 50%.
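The relation between dead time and the true photon flux is usually handled with a simple counting model. A sketch assuming the common non-paralyzable dead-time model (the pulse-processing time tau below is an assumption, not a vendor specification):

```python
def dead_time_fraction(measured_cps, tau_s):
    """Fraction of real time the detector is busy (the quoted dead-time %)."""
    return measured_cps * tau_s

def true_count_rate(measured_cps, tau_s):
    """Non-paralyzable dead-time correction: n = m / (1 - m*tau), recovering
    the true count rate n from the measured rate m."""
    busy = dead_time_fraction(measured_cps, tau_s)
    if busy >= 1.0:
        raise ValueError("detector saturated")
    return measured_cps / (1.0 - busy)

# A Si(Li)-like operating point: 20 kCPS measured with tau = 15 us gives a
# 30% dead time, consistent with the 20%-30% optimum quoted in the text.
rate = true_count_rate(20_000.0, 15e-6)
```

The model makes the trade-off explicit: raising the beam current raises the measured rate, but the correction (and its uncertainty) grows as the dead-time fraction approaches one.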
The count rate can be optimized by adjusting the beam current (probe current or spot size) and the processing time. It is important to select a processing time and beam current that give an acceptable X-ray count rate and detector dead time for the analysis, as well as the desired spectral resolution. The typical energy resolution of an EDS detector is 130-140 eV, while it is less than about 10 eV for WDS systems. Moreover, EDS systems have much lower count rates and poorer reproducibility, generally by a factor of ten, with respect to WDS detectors. The beam energy can be varied to increase the sensitivity for thinner or thicker coatings (Figure 15). The film thickness can be obtained from the measured spectra through various approaches. The calibration curve can be obtained using standards of known thickness [157] or Monte Carlo simulations [20,21]. In terms of the quantification method, i.e., the quantity used in the calibration curves, multiple alternatives have been evaluated: the K-ratio [153,[158][159][160], which is the ratio between the intensity in the sample and the intensity in a standard of known composition, commonly used in quantification; the ratio of intensities [161]; and the atomic ratio [157], obtained by performing the ZAF correction algorithm on the K-ratios.
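Once a calibration curve of K-ratio versus thickness is available (from standards or simulation), reading off an unknown sample reduces to an interpolation. A minimal sketch with hypothetical calibration points:

```python
def k_ratio(i_sample, i_standard):
    """K-ratio: film line intensity divided by the pure bulk standard's."""
    return i_sample / i_standard

def thickness_from_curve(k, curve):
    """Linear interpolation on a monotonic calibration curve given as
    (thickness_nm, k_ratio) points, e.g. measured on standards or produced
    by a Monte Carlo code."""
    pts = sorted(curve)
    for (t0, k0), (t1, k1) in zip(pts, pts[1:]):
        if k0 <= k <= k1:
            return t0 + (k - k0) * (t1 - t0) / (k1 - k0)
    raise ValueError("k-ratio outside calibrated range")

# Hypothetical calibration for a film line that grows with thickness.
curve = [(0, 0.0), (50, 0.18), (100, 0.33), (200, 0.55), (400, 0.80)]
```

In practice the curve flattens toward saturation, so a denser grid of calibration points (or a non-linear fit) is needed at the thick end of the range.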
Both the absolute thickness [162,163] as well as the mass thickness [164][165][166][167] have been taken into consideration for the quantification.
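The two quantities are linked by the film density, which is also where the porosity issue discussed for XRF reappears. A one-line conversion makes the dependence explicit:

```python
def linear_thickness_nm(mass_thickness_ug_cm2, density_g_cm3):
    """Convert mass thickness (ug/cm^2) to linear thickness (nm):
    t = (m/A) / rho.  Using the film's actual density matters: a porous
    electroplated film is less dense than the bulk metal, so the same mass
    thickness corresponds to a thicker layer."""
    t_cm = mass_thickness_ug_cm2 * 1e-6 / density_g_cm3
    return t_cm * 1e7
```

For example, 19.3 µg/cm² of gold at the bulk density of 19.3 g/cm³ is exactly 10 nm, but the same deposit at a (hypothetical) porous-film density of 17 g/cm³ is over 11 nm.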
In the last fifty years, many software programs have been written to simulate EDS spectra [168]; many of them were written by researchers, and some are commercial: MAGIC [169,170], STRATAGEM [171][172][173], GMRFILM [148], Electron Flight Simulator [174,175], ThinFilmID [176], LayerProbe [176,177], pyPENELOPE [178,179], Win X-ray [180,181], MC X-ray [180,182], XFilms [183], CASINO [150,[184][185][186][187], CalcZAF [188,189] and DTSA-II [190][191][192]. Many of these programs exploit the PENEPMA algorithm [179]. PENEPMA is a simplified version of another code, PENELOPE, dedicated to EPMA and written to simulate X-ray spectra and calculate different quantities of interest. PENELOPE (penetration and energy loss of positrons and electrons) is a general-purpose Monte Carlo code system for the simulation of coupled electron-photon transport in arbitrary materials. PENELOPE covers the energy range from 1 GeV down to, nominally, 50 eV. The physical interaction models implemented in the code are based on the most reliable information available at present, limited only by the required generality of the code. These models combine results from first-principles calculations, semi-empirical models and evaluated databases. It should be borne in mind that although PENELOPE can run particles down to 50 eV, the interaction cross-sections for energies below 1 keV may be affected by sizeable uncertainties; the results for these energies should be considered semi-quantitative. PENELOPE incorporates a flexible geometry package called PENGEOM that permits automatic tracking of particles in complex geometries consisting of homogeneous bodies limited by quadratic surfaces. The PENELOPE code system is distributed by the OECD/NEA data bank.
The distribution package includes a report [193] that provides detailed information on the physical models and random sampling algorithms adopted in PENELOPE, on the PENGEOM geometry package, and on the structure and operation of the simulation routines. PENELOPE is coded as a set of FORTRAN subroutines, which perform the random sampling of interactions and the tracking of particles (either electrons, positrons or photons). In principle, the user should provide the main steering program to follow the particle histories through the material structure and to keep scores of the quantities of interest. In PENEPMA, photon interactions are simulated in chronological succession, allowing the calculation of X-ray fluorescence in complex geometries. PENEPMA makes extensive use of interaction forcing (a variance-reduction technique that artificially increases the probability of occurrence of relevant interactions) to improve the efficiency. The CalcZAF [188] simulation software is based on PENEPMA and is a general-purpose software package for the simulation of both relativistic and sub-relativistic electron interactions with matter. Even in this case, the characteristics and the geometry of the detector are not taken into account, and the output consists of a line-like unconvoluted spectrum. DTSA-II [190] shares many physical models with PENEPMA but was designed exclusively for the simulation of X-ray spectra generated by sub-relativistic electrons. DTSA-II uses variance reduction techniques unsuited to a general-purpose code. These optimizations make the program orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations are executed in minutes rather than hours, and the differences that result from varying the detector position can be modelled. It is possible to specify the characteristics and the geometry of the detector in DTSA-II, which is capable of handling complex sample geometries.
The primary and secondary bremsstrahlung and fluorescence can be calculated. The output consists of a realistic-looking spectrum, since it is convolved with the detector resolution; even the electron trajectories can be visualized. CASINO [184] is a single-scattering Monte Carlo simulation software of electron trajectories in solids, specifically designed for low-energy beam interactions in bulk samples and thin foils. This software can be used to generate many of the signals recorded in a scanning electron microscope (X-rays and backscattered electrons). The program can also be used efficiently at all the acceleration voltages found on a field emission scanning electron microscope (0.1 to 30 kV). The characteristics and the geometry of the detector are not taken into account, and the output is not a spectrum but rather the characteristic emission line intensities as a function of depth.
The X-ray depth distribution of the emissions is described by the ϕ(ρz) curve, which can be used for the determination of thin film thicknesses [13]. The thickness of the film can vary between two extremes relative to this curve: extremely thin or extremely thick [163]. In the first case, the emission corresponds to a bulk sample with the composition of the substrate; in the second case, to a bulk sample with the composition of the film. In the intermediate cases, the ϕ(ρz) curves vary between these two extremes. The maximum thickness that can be analyzed with the EPMA method is of the order of microns; it is determined by the acceleration potential of the electrons together with the atomic number of the elements in the sample [194]. On the other hand, the minimum detectable thickness (lower detection limit) is given by the combination of the characteristic X-ray energies of the elements in the sample and the properties of the detector, and can be as low as a few monolayers or less [195].
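The role of the ϕ(ρz) curve can be sketched numerically: the fraction of the characteristic emission generated inside the film is the integral of ϕ(ρz) up to the film's mass depth, normalized by the integral over the whole emission depth. The ϕ(ρz) shape below is illustrative, not a fitted model:

```python
import math

def film_fraction(phi, rho_t, rho_z_max, n=2000):
    """Fraction of the characteristic emission generated within the film:
    integral of phi(rho*z) from 0 to rho*t divided by the integral over
    the full emission depth, computed with the trapezoidal rule."""
    def trapz(b):
        h = b / n
        return h * (0.5 * (phi(0.0) + phi(b))
                    + sum(phi(i * h) for i in range(1, n)))
    return trapz(rho_t) / trapz(rho_z_max)

# An assumed phi(rho z) shape: rises just below the surface (backscattered
# electrons enhance ionization there), then decays with depth.
phi = lambda rz: (1.0 + 1.5 * rz) * math.exp(-8.0 * rz)
```

As the text describes, the fraction runs from 0 (infinitely thin film, all signal from the substrate) to 1 (film thicker than the whole interaction volume), and intermediate values are what carry the thickness information.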
Exploiting the STRATAGEM software, Kühn [172] was able to obtain both the elemental composition and the thickness of thin films of the ternary alloy Pd-Ni-Co, co-deposited via magnetron sputtering on a silicon wafer, by ED-EPMA in the range of 50 to 250 nm. The results were confirmed by AES and XPS measurements for the composition and by SEM imaging for the thickness. CASINO simulations confirmed the interaction volume. A similar approach was used for the determination of electrodeposited Ni, Pd and Au on Cu, comparing the results of CASINO, CalcZAF and DTSA [19].
A comparison between GMRFILM, DTSA-II and PENEPMA was performed for very thin films (5-20 nm) of Al and Cu on Bi; in this study, the variation in the film density with respect to the bulk material was also evaluated [196]. Ultra-thin films of Ge, Sn, Ag and Au on Si wafers were also evaluated by Campos, performing multiple analyses with different beam energies [197]. DTSA-II was also used to determine the sputter-coated deposition of Ti and Ag on Si for medical applications [198]. Osada [162] developed new MC simulation software to evaluate the thickness of aluminum oxide on aluminum sheets in the range of 5 to 50 nm. Recently, in 2018, Darznek performed thickness measurements tilting the sample off the normal incidence angle to increase the signal of the superficial coating, specifically to determine the thickness of a chromium film on a silicon substrate. With this approach, he was able to determine up to 10^14 atoms per square centimeter, with an error of less than 10%, exploiting K-ratio measurement with MC simulation.
In 2016, Sokolov [150] measured the thickness of silicon dioxide and silicon nitride thin films using EDS, varying the penetration depth of the analysis by changing the acceleration voltage of the beam and correlating the film thickness with the ratio of the substrate elements' signal to the collected noise. Stanford [199], in 2020, measured the oxide layer formation on Pu from 35 to 400 nm, using standards measured by FIB-SEM analysis to build the K-ratio calibration curves of oxygen. Previously, Bastin carried out extensive work collecting the K-ratios of Al [200] and Pd [201] films from 10 to 320 nm in thickness, at various beam energies between 3 and 30 kV, on many substrates from Be to Bi.
Even the thickness of multilayered samples can be measured using EPMA [202,203]. In 2019, Pazzaglia [204] developed a new model for the standardless determination of mass thickness and composition by EDS for multilayer samples, with an accuracy of 10 µg/cm². Previously, Lesch used a sputtering method combined with EPMA, where signal deconvolution with a maximum-entropy algorithm provided the thickness of Ti/Al/Ti layers deposited on Si.
EPMA has been used by some authors to evaluate critical information about layered samples beyond their thickness. Christien [145] used EDS measurements to determine the interdiffusion coefficient between thin films of miscible metals; using various annealing temperatures and Fick's diffusion equations, he was able to estimate the coefficients for a Ni film on Pd. Darznek [205] proposed a method to evaluate the thickness uniformity of nanofilms by means of MC simulations, correlating the intensities of the peaks in the EDS spectrum to the film thickness. In 2016, Ortel [142] developed a technique combining EPMA measurements for mass deposition determination with SEM analysis for thickness determination, obtaining the change in density of the films with respect to the bulk materials and consequently extrapolating the porosity of the coatings.

Ellipsometry
Ellipsometry is a non-destructive optical technique that enables one to determine the optical constants, the microstructure and the thickness of thin films [206]. At its core, the method is based on the reflection of a linearly polarized light beam from the sample surface. Upon reflection, the light becomes elliptically polarized, because the p- and s-polarized components undergo different changes in phase and amplitude [207][208][209]. Ellipsometry in fact exploits the reflection/refraction of light that occurs at the boundary between two different materials. When a photon flux hits the surface of a thin film, part of it is reflected and part is refracted at a different angle inside the material. The same process occurs at the boundary between the thin film and the underlying bulk material, giving rise to interference between the surface-reflected and the bulk-reflected light. These interference patterns can be fitted to a model in order to extract various parameters, among them the film thickness and roughness. The analysis can be quite complex, especially for multi-layered materials.
This technique exploits the penetrating power of visible light, usually produced by a laser, in matter: the light has to travel down to the substrate to acquire thickness-sensitive data. For this reason, it is particularly suited to transparent films, where it can determine thicknesses up to a few microns. This leads to a significant limitation for metallic films: due to the high absorption coefficient of metals over a vast portion of the spectrum, it is nearly impossible to use this technique for layers thicker than about 100 nm [207].
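This limit can be rationalized through the 1/e intensity penetration depth, δ = λ/(4πk), where k is the extinction coefficient of the metal. The values below are approximate literature values near 633 nm, used only for illustration:

```python
import math

def penetration_depth_nm(wavelength_nm, k):
    """1/e intensity penetration depth from the extinction coefficient k."""
    return wavelength_nm / (4 * math.pi * k)

# Approximate extinction coefficients of common metals near 633 nm
for metal, k in [("Au", 3.1), ("Ag", 3.9), ("Cu", 3.2)]:
    print(metal, round(penetration_depth_nm(633, k), 1), "nm")
```

With penetration depths on the order of 15 nm, light reflected from the film/substrate interface of a metal layer much thicker than a few tens of nanometers is essentially extinguished, which is why ellipsometry loses thickness sensitivity well below 100 nm for metals.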

Rutherford Backscattering Spectroscopy
Rutherford backscattering spectroscopy permits the depth profiling and thickness determination of metallic thin films up to a few microns. The method exploits the elastic backscattering of an energetic ion beam (usually He²⁺, with energies from 2 to 4 MeV) from the atomic nuclei of the target sample [30,210,211]. The backscattered ions are analyzed as a function of energy and scattering angle, and the resulting spectra can then be fitted using programs such as SIMNRA [212] or SRIM [213] in order to obtain film thicknesses.
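The mass sensitivity of RBS comes from the kinematic factor K = E_out/E_in of the elastic collision, which depends only on the ion mass, the target mass and the scattering angle; the thickness then follows from the energy width of a film's peak divided by the stopping cross-section factor, which codes such as SIMNRA evaluate. A sketch of the kinematic factor:

```python
import math

def kinematic_factor(m_ion, m_target, theta_deg):
    """Elastic backscattering kinematic factor K = E_out / E_in."""
    th = math.radians(theta_deg)
    root = math.sqrt(m_target**2 - (m_ion * math.sin(th))**2)
    return ((root + m_ion * math.cos(th)) / (m_ion + m_target))**2

# He-4 at a 170 deg scattering angle: heavier targets retain more energy,
# which is what separates the elements along the energy axis of the spectrum
for name, m in [("Si", 28.09), ("Cu", 63.55), ("Au", 196.97)]:
    print(name, round(kinematic_factor(4.0026, m, 170), 3))
```

At θ = 180° the expression reduces to K = ((M₂ − M₁)/(M₂ + M₁))², the familiar head-on collision result.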
This technique permits very fast acquisition times (less than ten minutes) on conducting, semiconducting or insulating thin films, without the use of standards, for thicknesses ranging from a few nanometers up to 5-7 microns. The main limitation to its use is the need for a linear particle accelerator, which is costly and not available in most service facilities. Moreover, the technique is not very sensitive to light elements (below Ne) or to stacks of different layers. Samples have to be kept in ultra-high vacuum during the measurement. Finally, the density of the film has to be known precisely in order to quantify the thickness correctly when fitting the experimental data with models.

Mechanical Resonance
Resonant techniques allow the characteristics of a film to be evaluated by observing the variation in the resonance frequency of a mechanical oscillation or of electromagnetic radiation. Using vibrating doubly clamped micro- or nano-beams, the elastic modulus and the density or thickness of an ultra-thin film can be obtained, and even the residual stress can be calculated [214][215][216]. A thin coating deposited on an elastic substrate generates stress and alters the resonance properties of the latter; the film can be metallic, polymeric or of another nature. These quantities are extracted by measuring how the resonant frequency of the in-plane and out-of-plane vibration modes depends on the tension parameter. To perform these measurements, the substrate must have specific geometric characteristics; therefore, the film must be deposited directly on the measuring structure, and films on bulk samples cannot be analyzed. With this technique it is possible to measure films with thicknesses from a few nanometers to several microns. Using the same principle, cantilevers can also be employed [217][218][219]. Cantilevers can be fabricated from silicon selectively oxidized through photolithography; the oxide regions are then removed with hydrofluoric acid, leaving only the silicon cantilever [220].
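The simplest limit of these measurements is pure mass loading: if the film adds mass but negligible stiffness, the resonance frequency scales as f ∝ 1/√m, so the areal mass of the film follows from the frequency downshift, ρ_f·t_f = ρ_s·t_s·((f₀/f₁)² − 1). This is a deliberately simplified approximation (the full models in the cited works include stress and stiffness contributions, which can even raise the frequency), with hypothetical numbers:

```python
def film_thickness_mass_loading(f0, f1, rho_sub, t_sub, rho_film):
    """Film thickness from the resonance downshift, pure mass-loading model:
    rho_f * t_f = rho_s * t_s * ((f0/f1)^2 - 1)."""
    return rho_sub * t_sub * ((f0 / f1)**2 - 1) / rho_film

# Hypothetical Si beam (2330 kg/m^3, 200 nm thick) coated with Au (19300 kg/m^3):
# a 1% frequency drop corresponds to roughly half a nanometer of gold
t = film_thickness_mass_loading(1.00e6, 0.99e6, 2330, 200e-9, 19300)
print(f"t_film ~ {t*1e9:.2f} nm")
```

The example shows why these resonators reach sub-nanometer sensitivity: even a 1% frequency shift resolves a fraction of a monolayer-scale gold deposit.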

Electromagnetic Resonance
Electromagnetic radiation can also be used to determine the thickness of a film by exploiting resonance frequencies, using both microwaves [221] and visible light [222,223]. Surface plasmon resonance (SPR) is a technique widely used in chemistry and biology to detect changes on the surface of a metal substrate and correlate them to the concentration of an analyte. The technique is based on the collective oscillations of free electrons, called surface plasmons, which can be optically excited by attenuated total reflection under resonant conditions. The surface plasmon field decays exponentially away from the surface, and the resonance condition is extremely sensitive to changes in the refractive index of the surrounding medium, which is influenced by the chemical environment at the surface. At resonance, a sharp decrease in the power of the reflected light is observed. This phenomenon can also be exploited to determine the thickness of films that change the SPR properties of the substrate [224,225]. Given the exponential decay of the surface plasmon field, thicknesses on the order of nanometers can be measured with this technique.
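The resonance condition can be sketched from the surface-plasmon dispersion relation: in the common Kretschmann (prism-coupled) configuration, resonance occurs at the angle where the in-plane photon momentum matches that of the plasmon, sin θ_spr = Re[√(ε_m ε_d/(ε_m + ε_d))]/n_prism. The permittivities below are illustrative values near 633 nm; an actual thickness fit would require a full multilayer Fresnel model of the reflectivity curve:

```python
import cmath
import math

def spr_angle_deg(eps_metal, eps_dielectric, n_prism):
    """Kretschmann-configuration resonance angle from the
    surface-plasmon dispersion relation (momentum matching)."""
    n_eff = cmath.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    return math.degrees(math.asin(n_eff.real / n_prism))

# Illustrative values at 633 nm: gold (eps ~ -12 + 1.2j), water (eps ~ 1.77),
# BK7 glass prism (n ~ 1.515); the resonance falls near the typical ~72 deg
theta = spr_angle_deg(-12 + 1.2j, 1.77, 1.515)
print(f"theta_spr ~ {theta:.1f} deg")
```

A film deposited on the metal shifts the effective dielectric constant seen by the plasmon field, and hence the resonance angle; calibrating that shift against known thicknesses is what makes SPR a nanometer-scale thickness probe.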

Conclusions
Thickness measurement is a challenge that concerns many scientists and technicians, in both academic and industrial environments. The composite materials on which metallic coatings are applied are ubiquitous and provide properties that a single material would not have. Thickness control is decisive for obtaining well-defined technological properties, and thus its measurement is necessary. In this review, we described the primary techniques, both for preparation and for analysis, that are used in research and in the industrial sector for this purpose. There is in fact no perfect technique suitable for every type of sample; rather, the most appropriate route must be chosen for each need. At the end of this work, it seems reasonable to report a rough comparison between the various methods as regards the range of thicknesses that can be analyzed (Figure 16) and the time required to carry out the analysis (Figure 17). Cost is also an important parameter to consider, and it is highly variable; from a general point of view, mechanical and optical techniques are cheaper than electronic, ionic and spectroscopic ones. It should also be taken into account that some instruments combine multiple analyzers, as in the case of SE, BSE and EDS, or of FIB techniques.
In terms of the measurable ranges of thicknesses, microscopic techniques tend to have only a lower limit, dictated by the aberrations that the beam undergoes below certain dimensions, while the magnification can be lowered until features above the millimeter are observed. Spectroscopic techniques instead suffer from the attenuation of the signal inside the sample and therefore cannot measure coatings beyond a certain size, which appear as infinitely thick.
Furthermore, for those analysis techniques that require sample preparation, the range that can be analyzed is the intersection of the ranges of the individual techniques.
The preparation and measurement time is highly dependent on the presence of automated systems and on the experience and manual skill of the operator, as well as on the degree of accuracy required for the result. In general, XRF, in addition to being extremely versatile, is the fastest technique, not even requiring sample preparation. On the other hand, microscopic techniques coupled with cross-sectioning are widespread, as they provide a result in which the thickness is directly visible. Furthermore, cross-sectioning, although time-consuming, is a procedure that is generally automated.
Some of the illustrated techniques are well established and have not undergone many innovations in recent years, apart from engineering optimization of performance and cost; for others, however, research is still very active, as we have shown in this work. They should therefore be followed with interest, in order to make the best use of these powerful analytical tools.