Spatial Frequency Domain Imaging System Calibration, Correction and Application for Pear Surface Damage Detection

Spatial frequency domain imaging (SFDI) is a non-contact, wide-field optical imaging technique for measuring optical properties. This study aimed to establish an SFDI system and investigate the effects of system calibration, error analysis and correction on the measurement of optical properties. The calibrated system was used to measure the optical characteristic parameters of normal pears and of pears with three different damage types. The obtained absorption coefficient μ_a and reduced scattering coefficient μ'_s were used to discriminate pears with different surface damage using a linear discriminant analysis model. The results showed that at 527 nm and 675 nm, the four-class (normal, bruised, scratched and abraded) accuracy of the SFDI technique was 92.5% and 83.8%, respectively, an improvement over the conventional planar-light classification results of 82.5% and 77.5%. The three-class (normal, minor damage and serious damage) accuracy of the SFDI technique was as high as 100% and 98.8% at 527 nm and 675 nm, respectively, while the classification accuracy under conventional planar light was 93.8% at both wavelengths. These results indicate that SFDI has the potential to detect different damage types in fruit and that the technique holds promise for agricultural product quality inspection in further research.


Introduction
During the collection and transportation of fruit, collisions are likely to occur, causing surface defects such as bruises, scratches and abrasions, and damaged fruits are prone to decay and can affect neighboring normal fruits. To reduce unnecessary losses, it is essential to grade fruits according to the degree of damage suffered. At present, the commonly used rapid and non-destructive detection methods are mainly visible light imaging and hyperspectral imaging techniques based on reflectance measurements [1,2], and Raman techniques based on inelastic scattering [3]. However, it is difficult for these techniques to detect fine defects and early-stage damage in fruit [4,5].
Recent studies have suggested that the degree of small bruises in fruit can be quantified by studying the variation in the absorption coefficient μ_a and the reduced scattering coefficient μ'_s [6]. At present, the commonly used methods for measuring optical characteristic parameters mainly include the integrating sphere technique [7], the time-resolved (TR) technique [8] and the spatially resolved (SR) technique [9]. The integrating sphere technique is a destructive measurement method with a tedious sample preparation process; it cannot achieve rapid non-destructive measurement and is often used as a reference method. TR techniques have high measurement accuracy, but the system is expensive and requires the probe to be in contact with the sample, which makes it difficult to apply to damage detection in fruit. The SR technique is cheaper and non-destructive, but it can only perform point measurements.
The imaging unit was a 16-bit high-performance scientific complementary metal-oxide-semiconductor (sCMOS) camera (Iris 9TM, Photometrics, Inc., Tucson, AZ, USA). The sensor array of the sCMOS camera is 2960 × 2960 pixels, and the peak quantum efficiency (QE) is greater than 73%. During image acquisition, the sCMOS camera was cooled to around 0 °C at an ambient temperature of 30 °C to minimize dark current. The camera was connected to the computer through the manufacturer's paired peripheral component interconnect express (PCI-Express) capture card, which simultaneously implemented camera control and data transmission and greatly increased the transmission speed of the 16-bit image data. A large-aperture lens (LEM2520CD-H2, Vision Datum Co., Ltd., Hangzhou, China) was mounted on the camera, with a filter wheel (FW102, Beijing Optical Century Instrument Co., Ltd., Beijing, China) and a polarizer (OCZ203, Beijing Optical Century Instrument Co., Ltd., Beijing, China) in front of the lens. Six band-pass filters (Mega-9 Co., Ltd., Shanghai, China) were installed in the filter wheel. The distance between the camera and the sample stage was about 400 mm, and the field of view was about 150 × 150 mm². A rangefinder (CD33, Optex Co., Ltd., Shiga, Japan) on the side of the camera measured the distance from the sample to the camera in the vertical direction, which affected the frequency calibration. The light source was a Nippon Electric Company (NEC) projector (NEC NP-V302WC, NEC Corporation, Tokyo, Japan) with its color wheel removed. A neutral density filter (NE2R10A, Thorlabs, Inc., Newton, NJ, USA) and a polarizer (OCZ203, Beijing Optical Century Instrument Co., Ltd., Beijing, China) were placed in front of the projector.
The neutral density filter can uniformly attenuate the light, and the polarizer in front of the projector can be used in conjunction with the polarizer in front of the camera lens to eliminate specular reflections.
In order to obtain a good projection effect, the optical axis of the projector and the camera should be on the same plane and at a small angle to obtain sinusoidal structured light and reduce specular reflection. Both the camera and the projector were installed above the sample stage.
The sample stage consisted of a horizontal translation stage (MTS123, Beijing Optical Century Instrument Co., Ltd., Beijing, China) and a vertical translation stage (MVS101, Beijing Optical Century Instrument Co., Ltd., Beijing, China). Under the control of the motor controller, the sample could be adjusted to the proper position. The horizontal translation stage had a travel of 300 mm and a displacement resolution of 0.0032 mm. The vertical translation stage had a travel of 55 mm and a displacement resolution of 0.1 mm.
As shown in Figure 2, the acquisition and control software of the SFDI system was developed in the LabVIEW 2018 environment (National Instruments, Austin, TX, USA). In the LabVIEW programming environment, software development kits (SDKs) provided by the camera manufacturers were used to implement camera control, synchronized projection and acquisition, stage movement and camera-stage distance measurement. A histogram of each captured image was displayed during acquisition and could be used to determine whether the image was overexposed.
The preprocessed projection image was read from a specified folder on the computer by the in-house software. Projection and capture were performed sequentially, and a short delay (1.5 s) was added to keep the projection and capture operations in order.

System Operation
Before collecting images, the parameters of the projector (sharpness, contrast, brightness, etc.) should be set so that the projection contrast is high enough but not saturated. The projection image was pre-processed by keystone correction. The picture size was set to the projector's standard resolution of 1280 × 800 pixels, which allowed the image to be projected without loss of resolution.
Before shooting, the path of the projection images and the path for saving the captured images were set. The software automatically read the serial number of each picture, and read the image to project or saved the captured image in order. To ensure imaging quality, the exposure time had to be set after each sample placement. A single image was taken, and the sample position was observed and adjusted to the middle of the field of view. A histogram was used to set the exposure time to increase the signal-to-noise ratio (SNR).
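The histogram-based exposure check described above can be illustrated with a short sketch. The thresholds and function names below are our own choices, not taken from the paper's LabVIEW software; the idea is simply to reject frames that are saturated or that underuse the 16-bit dynamic range:

```python
import numpy as np

def exposure_ok(img, bit_depth=16, max_sat_frac=0.001, min_peak=0.5):
    """Heuristic exposure check on a raw frame.

    Rejects the frame if more than `max_sat_frac` of pixels sit at full
    scale (overexposed), or if the 99.9th-percentile count uses less
    than `min_peak` of the dynamic range (underexposed, poor SNR).
    """
    full_scale = 2 ** bit_depth - 1
    sat_frac = float(np.mean(img >= full_scale))
    peak = float(np.percentile(img, 99.9)) / full_scale
    return sat_frac <= max_sat_frac and peak >= min_peak

# Synthetic 16-bit frame peaking near 80% of full scale: acceptable
frame = (np.random.rand(100, 100) * 0.8 * 65535).astype(np.uint16)
print(exposure_ok(frame))  # True
```

In practice the exposure time would be increased or decreased until such a check passes for the sample in place.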

System Calibrations
The system calibration aimed to obtain an accurate gray-level response of the projection, calibrated against a standard whiteboard (150 × 150 mm², reflectance 0.99 in this study; Guangzhou Jingyi Photoelectric Technology Co., Ltd., Guangzhou, China):

R_ac(x, f_x) = [I_ac(x, f_x) / I_ref(x)] × R_d,ref(f_x)    (1)

In the optical calibration and correction experiments, I_ac(x, f_x) is the reflected gray level of the projected sinusoidal pattern at each frequency, I_ref is the reflected gray level of a full-white projected pattern, R_d,ref(f_x) is the reflectance of the standard whiteboard and R_ac(x, f_x) is the reflectance after calibration.
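The whiteboard normalization can be written compactly in code. The sketch below (array names are illustrative; the paper's processing was done in MATLAB/LabVIEW) divides each pattern image by the full-white image and scales by the known whiteboard reflectance:

```python
import numpy as np

def calibrate_reflectance(I_ac, I_ref, R_d_ref=0.99):
    """Normalize a captured pattern image against a standard whiteboard.

    I_ac : image captured under the sinusoidal pattern at frequency fx
    I_ref: image of the whiteboard under a full-white projection
    R_d_ref: known reflectance of the whiteboard (0.99 in this study)
    """
    I_ref = np.where(I_ref == 0, np.finfo(float).eps, I_ref)  # avoid /0
    return I_ac / I_ref * R_d_ref

I_ac = np.array([[100.0, 200.0], [300.0, 400.0]])
I_ref = np.array([[400.0, 400.0], [400.0, 400.0]])
print(calibrate_reflectance(I_ac, I_ref))
# pixelwise (I_ac / I_ref) * 0.99, e.g. 100/400 * 0.99 = 0.2475
```

The same per-pixel normalization is reused later for the phantom and pear measurements.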


Projector-Camera Calibration
The camera calibration was performed by MATLAB (The MathWorks, Inc., Natick, MA, USA). The external parameter matrix of the camera was used to determine if the camera was installed correctly.
After the camera calibration, a ceramic checkerboard was used for projector-camera calibration. Gray code patterns were projected onto the ceramic checkerboard and decoded to obtain the correspondence between the calibration board's corner points and the projector's coordinates [26]. The Gray code patterns were generated with Visual Studio 2017 (Microsoft Corporation, Redmond, WA, USA). After calibration, the internal parameter matrices of the projector and the camera, the external parameter matrix of the projector-camera setup, and the coordinates of the corner points on the camera image and the projected pattern could be obtained. The calibration result was judged by the reprojection error.
The relative position of the projector and the camera was determined by the external parameter matrix of the projector-camera setup, and the optical axis of the projector and the camera were adjusted to the same plane. The height h of the projector from the checkerboard plane and the projection angle (the pitch angle α and the yaw angle β) were given by the external parameter matrix of the camera and the projector-camera setup. The average pixel length of the checkerboard X c,mean was obtained by calculating the average distance between the coordinates of all corner points.
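Binary-reflected Gray code patterns of the kind used for projector-camera calibration are straightforward to generate and decode. The following Python sketch (not the Visual Studio implementation used in the study) encodes each projector column as gray = c ^ (c >> 1) and recovers the column index from the per-pixel bits:

```python
import numpy as np

def graycode_patterns(width, height):
    """Vertical binary-reflected Gray code patterns, MSB first.

    Each projector column index c is encoded as gray = c ^ (c >> 1);
    pattern k displays bit k of that code in every pixel of column c.
    Complementary (inverted) patterns, as used in the paper, are 1 - p.
    """
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)
    patterns = []
    for k in range(n_bits - 1, -1, -1):
        row = ((gray >> k) & 1).astype(np.uint8)
        patterns.append(np.tile(row, (height, 1)))
    return patterns

def gray_to_binary(g):
    """Decode a Gray code value back to the binary column index."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

# Recover column 5 of an 8-column pattern set from its per-pixel bits
pats = graycode_patterns(8, 2)
bits = [int(p[0, 5]) for p in pats]      # MSB-first Gray bits at column 5
g = 0
for b in bits:
    g = (g << 1) | b
print(gray_to_binary(g))  # 5
```

Thresholding the captured pattern/complement pairs yields these bits per camera pixel, which is how the decoded projector coordinates are matched to the checkerboard corners.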

Keystone Correction
The existence of the projection angle will cause keystone distortion of the projected pattern. The projection angles are the pitch angle α, the yaw angle β and the roll angle. In actual installation, the pitch angle is inherent to the system, but the yaw angle and the roll angle should be avoided as much as possible. The hardware design of this study can avoid the existence of the roll angle. Therefore, in this study, a small amount of yaw error was considered, and the roll angle error was ignored to simplify the calculation formula. The effect of the pitch angle α and the yaw angle β on the projection was mainly considered, and the effect of keystone distortion was minimized through error parameters.
The method of projector keystone correction in this study is similar to the inverse perspective transform [27]. The original projection image was used to generate a corrected projection image through inverse perspective transformation, and then an image without keystone distortion was projected. The specific steps were as follows:
1. Use the projector's calibration parameters and the original image to generate point coordinates on the world coordinate plane, where the set angle is opposite to the actual angle of the projector.
2. Convert the scattered point coordinates into pixel coordinates to obtain a corrected projection image.
3. Calculate the error and set the projection image at different angles until the error is minimized.
A simplified correction formula can be obtained when the roll angle is ignored. P_i = {p_p, q_p, 1, 1} is a point in the image plane of the projector, where p_p and q_p are pixel coordinates, and P_g is the corresponding point on the checkerboard plane. The relationship between the two is P_g = T_i^g P_i, where α is the pitch angle; β is the yaw angle; c_1 = cos α; s_1 = sin α; c_2 = cos β; s_2 = sin β; h is the height of the projector from the checkerboard plane, given by the external parameter matrix; and f_u, f_v, c_u and c_v represent the horizontal focal length, the vertical focal length, the horizontal optical center coordinate and the vertical optical center coordinate of the projector, respectively, given by the internal parameter matrix of the projector. P_g = {x_g, y_g, −h, 1} represents the world coordinate point, which is also a point on the checkerboard plane. The corrected image coordinate point P_ic is calculated by P_ic = T_g^i P_g. Through this inverse process, pixel coordinates can be obtained from world coordinates. After the two transformation steps, the original image can be transformed into a keystone-corrected pattern.
The error is the average of the standard deviation of the pixels in the vertical fringe direction; that is, for an m × n pixel area with a horizontal fringe:

e_p = (1/n) Σ_{i=1}^{n} std_col(i)

where std_col(i) represents the normalized pixel standard deviation of the i-th column. Different pitch angles α and yaw angles β around the calibrated angle were set to generate the corresponding keystone-corrected patterns. After obtaining the reflectance image, we calculated the keystone correction error by Equations (3) and (4) to find the optimal keystone correction angle.
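The angle search driven by this error metric can be sketched as follows. Here `capture(a, b)` is a placeholder for acquiring the reflectance image of the pattern corrected with pitch a and yaw b, and the synthetic images merely illustrate that the per-column standard deviation vanishes for an undistorted fringe:

```python
import numpy as np

def keystone_error(img):
    """Mean normalized per-column standard deviation.

    For a fringe image in which each column is nominally constant, a
    well-corrected projection drives this error toward zero.
    """
    norm = img / img.max()
    return float(np.mean(np.std(norm, axis=0)))

def best_angles(capture, pitches, yaws):
    """Grid search: pick the (pitch, yaw) pair minimizing the error."""
    errs = {(a, b): keystone_error(capture(a, b))
            for a in pitches for b in yaws}
    return min(errs, key=errs.get)

# Synthetic check: an undistorted fringe (constant columns) scores lower
# than a sheared one.
x = np.linspace(0, 4 * np.pi, 64)
rows = np.arange(64)[:, None]
ideal = np.sin(x[None, :]) * 0.5 + 0.5
sheared = np.sin(x[None, :] + 0.02 * rows) * 0.5 + 0.5
print(keystone_error(ideal) < keystone_error(sheared))  # True
```

In the paper's experiment this search runs over pitch 12°-22° (step 1°) and yaw 0°-1° (step 0.1°), one captured image per candidate pair.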

Frequency Calibration
Unlike traditional 3D reconstruction with structured light, SFDI projects sinusoidal structured light onto the surface of the object and analyzes its propagation in a highly scattering medium. The calculations of μ_a and μ'_s are frequency sensitive [28], so the frequency at which the sinusoidal patterns are actually generated needs to be calibrated.
For frequency calibration, an image of a ceramic checkerboard placed horizontally on the sample stage and perpendicular to the camera's optical axis was required. When whiteboard or liquid phantom images are collected, the height of the vertical translation stage needs to be adjusted so that the collection plane is at the same height to ensure that the actual frequency does not change.
The camera calibration results can be used to calculate the average pixel length X_c,mean of the checkerboard. The actual period of the projected pattern can then be calculated using Equation (5):

T_r = T_c × X_r / X_c,mean    (5)

where T_r represents the actual period of the projected pattern, T_c represents the pixel period of the image captured by the camera, X_r represents the actual cell length of the checkerboard (5 mm in this study) and X_c,mean represents the average pixel length of the checkerboard image. The ratio of the fringe generation period T_p to the actual period T_r is:

r_p^r = T_p / T_r    (6)

When the required actual period is known, the pixel period can be obtained from the period ratio. First, a sinusoidal pattern was preset according to the actual projection situation (the size of the pattern actually projected on the sample) with a period of T_p(0) = 40 pixels. This was projected onto a standard whiteboard to calibrate the image reflectance. A region of interest (ROI) was selected and averaged along the direction of the vertical fringes to obtain an average sinusoidal reflectance curve. The number of fringe periods in the ROI was counted from the minimum points on the curve, and T_c was calculated from the pixel coordinates of the minimum points. The period ratio r_p^r(0) could then be calculated.

Sixteen reference frequencies were used: 0, 0.007, 0.0084, 0.0098, 0.0112, 0.0126, 0.014, 0.028, 0.042, 0.056, 0.070, 0.084, 0.098, 0.112, 0.126 and 0.14 mm⁻¹. The expected actual period is the reciprocal of the reference frequency. Using r_p^r(0) and the 16 expected actual periods T_re(k) (k = 1, 2, . . . , 16), the 16 projection periods T_p(k) were calculated by Equation (6).
In the first step, the 16 generated patterns were projected and T_c(k) was calculated. The 16 actual period verification values T_v(k) were then calculated by Equation (5). Relative to the expected actual period T_re, the period percentage error e_t was:

e_t = |T_v − T_re| / T_re × 100%    (7)

In the second step, we substituted T_v for T_r in Equation (6) and updated the 16 corresponding period ratios r_p^r(k). Using the updated period ratios r_p^r(k) and the expected actual periods T_re(k), the projection periods T_p(k) were calculated again. The actual period verification values T_v(k) and the relative period errors e_t(k) were then recalculated.
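The period relations used in this two-step calibration can be sketched numerically. The mean checkerboard pixel length below (about 78.3 px) is an assumed figure, chosen so that the worked numbers reproduce the values reported in the Results (actual period ≈ 10.1 mm, period ratio ≈ 3960 pixel m⁻¹); it is not stated in the paper:

```python
def actual_period(T_c, X_r, X_c_mean):
    """Actual period (mm): captured pixel period T_c scaled by the
    checkerboard cell length X_r (mm) per its mean pixel length."""
    return T_c * X_r / X_c_mean

def period_ratio(T_p, T_r):
    """Projection pixels per mm of actual period (T_p / T_r)."""
    return T_p / T_r

def period_error_pct(T_v, T_re):
    """Percentage error between verified and expected actual periods."""
    return abs(T_v - T_re) / T_re * 100.0

# Worked numbers: generation period 40 px, captured period 158.2 px,
# 5 mm checkerboard cells, assumed mean cell pixel length 78.3 px.
T_r0 = actual_period(158.2, 5.0, 78.3)
r0 = period_ratio(40, T_r0)
print(round(T_r0, 1), round(r0, 2))  # 10.1 3.96  (i.e. ~3960 pixel/m)
```

Given a target actual period, the required generation period in projector pixels is then simply the target times this ratio, which is the update applied in the second step.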

Calibration of the Optical Properties
Phantoms are often used to verify the accuracy of a system or an algorithm. In this study, TiO2 (T104950-500 g, Aladdin, Shanghai, China) was used as the scattering agent, and Indian ink (Royal Talens, Apeldoorn, The Netherlands) was used as the absorber. Different concentration gradients were set to linearly correct μ_a and μ'_s.
In total, 6 wavelengths were used in this study: 460 nm, 503 nm, 527 nm, 630 nm, 658 nm and 675 nm. Two sets of liquid phantoms were prepared, each with 9 samples. Liquid phantoms in the first set were labeled #1 to #9; the TiO2 volume fraction was 0.1%, and the ink volume fraction ranged from 0.004% to 0.02% (with an interval of 0.002%). Liquid phantoms in the second set were labeled #10 to #18; the ink volume fraction was 0.006%, and the TiO2 volume fraction ranged from 0.04% to 0.2% (with an interval of 0.02%).
The reference values of μ_a of the liquid phantoms were calculated by the Lambert-Beer law. A spectrometer (QE65pro, Ocean Insight Co., Ltd., Orlando, FL, USA) was used to measure the collimated transmittance T of the pure-absorption sample, and μ_a was calculated by:

μ_a = −ln(T) / d    (8)

where d is the optical path length through the pure-absorption solution. The reduced scattering coefficient μ'_s of the liquid phantoms was calculated by Mie scattering simulation. Given the refractive index, volume fraction and diameter of the TiO2 particles, the refractive index of deionized water and the wavelength, reference μ'_s values can be obtained using a MATLAB-based Mie program [29].
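The Lambert-Beer reference calculation is a one-liner; a minimal sketch with illustrative numbers (the transmittance and path length here are examples, not measured values from the study):

```python
import math

def mu_a_from_transmittance(T, d_mm):
    """Lambert-Beer law for a purely absorbing solution: mu_a = -ln(T)/d.

    T is the collimated transmittance (0..1), d_mm the optical path
    length in mm; the result is in mm^-1.
    """
    return -math.log(T) / d_mm

# e.g. 90% transmittance through a 10 mm cuvette
print(round(mu_a_from_transmittance(0.9, 10.0), 4))  # 0.0105
```

Scaling this value by the ink dilution factor gives the reference μ_a for each phantom concentration.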
The three-phase patterns with projection frequency f_x were applied to the liquid phantoms. Three-phase demodulation was performed using Equation (9), and the reflectance was calibrated against the standard whiteboard using Equation (10):

I_ac(x, f_x) = (√2/3) × √[(I_1 − I_2)² + (I_2 − I_3)² + (I_3 − I_1)²]    (9)

R_d(x, f_x) = [I_ac(x, f_x) / I_ref(x)] × R_d,ref(f_x)    (10)

where I_1, I_2 and I_3 represent the reflection intensities of the sample collected under the three phase-shifted projected patterns and R_d(x, f_x) represents the reflectance after calibration. After the reflectance had been calibrated, multi-frequency fitting inversion was performed by Equation (11):

R_d(f_x) = 3A a' / [(μ'_eff/μ_tr + 1)(μ'_eff/μ_tr + 3A)]    (11)

where:

A = (1 − R_eff) / [2(1 + R_eff)]    (12)

R_eff = 0.0636n + 0.668 + 0.710/n − 1.440/n²    (13)

μ_eff = √[3μ_a(μ_a + μ'_s)]    (14)

μ'_eff = √[μ_eff² + (2π f_x)²]    (15)

Here R_d(f_x) represents the calibrated reflectance, A represents the proportionality constant, R_eff represents the effective reflection coefficient and n represents the refractive index. μ'_eff represents the scalar attenuation coefficient in the spatial frequency domain, μ_eff represents the effective attenuation coefficient, a' = μ'_s/μ_tr is the reduced albedo and μ_tr = μ_a + μ'_s represents the total attenuation coefficient, where μ_a and μ'_s refer to the absorption coefficient and the reduced scattering coefficient, respectively. For a set of m phantoms, the linearity error of the system due to TiO2 precipitation and other causes was corrected by:

k(μ_a)_λ = (1/m) Σ_{i=1}^{m} [μ_a,ref(i) / μ_a,mea(i)]    (16)

k(μ'_s)_λ = (1/m) Σ_{j=1}^{m} [μ'_s,ref(j) / μ'_s,mea(j)]    (17)

where μ_a,ref(i) and μ_a,mea(i), respectively, represent the reference and measured values of μ_a for the i-th phantom; μ'_s,ref(j) and μ'_s,mea(j) represent the reference and measured values of μ'_s for the j-th phantom, respectively; and k(μ_a)_λ and k(μ'_s)_λ represent the correction ratios of μ_a and μ'_s, respectively. Therefore:

μ_a,cal = k(μ_a)_λ × μ_a,mea    (18)

μ'_s,cal = k(μ'_s)_λ × μ'_s,mea    (19)

where μ_a,cal and μ_a,mea represent the calibrated and measured values of μ_a for the phantoms, and μ'_s,cal and μ'_s,mea represent the calibrated and measured values of μ'_s for the phantoms.
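The demodulation and the diffusion-approximation reflectance model can be sketched in Python. The model below follows the standard SFDI diffusion form (Cuccia et al.); the refractive index and optical properties in the example are typical illustrative values, not measurements from this study:

```python
import numpy as np

def demodulate_ac(I1, I2, I3):
    """Standard three-phase AC demodulation (phases 0, 2pi/3, 4pi/3)."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2)

def diffuse_reflectance(mu_a, mu_s_prime, fx, n=1.35):
    """Diffusion-approximation R_d(fx), standard SFDI form.

    n = 1.35 is a typical tissue-like refractive index, used here only
    as an illustrative default.
    """
    mu_tr = mu_a + mu_s_prime                      # total attenuation
    a_prime = mu_s_prime / mu_tr                   # reduced albedo
    mu_eff = np.sqrt(3.0 * mu_a * mu_tr)
    mu_eff_prime = np.sqrt(mu_eff ** 2 + (2 * np.pi * fx) ** 2)
    R_eff = 0.0636 * n + 0.668 + 0.710 / n - 1.440 / n ** 2
    A = (1 - R_eff) / (2 * (1 + R_eff))
    return 3 * A * a_prime / ((mu_eff_prime / mu_tr + 1)
                              * (mu_eff_prime / mu_tr + 3 * A))

# R_d falls off with spatial frequency for a scattering medium
fx = np.array([0.0, 0.05, 0.1, 0.2])               # mm^-1
rd = diffuse_reflectance(0.01, 1.0, fx)            # mu_a, mu_s' in mm^-1
print(np.all(np.diff(rd) < 0))                     # True: monotone decreasing
```

The multi-frequency inversion then fits this forward model to the calibrated R_d(f_x) values to recover μ_a and μ'_s per pixel.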

Sample Preparation
Eighty normal crown pears with no surface defects were purchased from a local fruit store and placed in a laboratory at 19 °C (room temperature) and 55% humidity for about 24 h. All pears were equally divided into four groups. Group A was the normal group, without any treatment. Group B was the bruise group: in a pendulum device, a 20 mm diameter steel ball weighing 31.3995 g was released from a height of 0.3 m and hit the equatorial part of the pear to cause slight bruises. Group C was the scratch group: a razor blade was used to make a few light strokes near the equator of the pear to produce shallow scratches. Group D was the abrasion group: sandpaper was gently rubbed a few times near the equator of the pear, causing slight abrasion (Figure 3).


Discriminant Model Analysis
Linear discriminant analysis (LDA) is a supervised pattern recognition method [30], which mainly projects high-dimensional pattern samples into the optimal discriminative vector space to extract classification information and compress the dimensionality of the feature space. Optimization analysis was performed using the Statistics and Machine Learning Toolbox 11.3 in MATLAB 2018a.
Two LDA models were compared to assess the classification accuracy for pears with different damage types. The first model used a subset of the SFDI data: the images acquired at a spatial frequency of 0 mm⁻¹, which represents the dataset under conventional planar light irradiation. The second model used the full SFDI dataset.
An ROI of 400 × 400 pixels (26 × 26 mm²) was selected for data processing, and a 200 × 200 pixel region was generated by 2 × 2 binning to reduce the processing time. Each sample was labeled in one of the four categories described above (A (normal pear), B (bruised pear), C (scratched pear) and D (abraded pear)), μ_a and μ'_s were obtained by inversion and then averaged, and the models were trained on these features. The accuracy of the model was determined using 5-fold cross-validation.
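The comparison ultimately reduces to training an LDA classifier on (μ_a, μ'_s) features with 5-fold cross-validation. A minimal scikit-learn sketch with synthetic stand-in data follows; the class means and spreads are invented for illustration and are not the paper's measured values:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: per-sample mean (mu_a, mu_s') at one wavelength,
# 20 samples per class (A, B, C, D). Means/scales are illustrative only.
means = [(0.020, 1.0), (0.035, 1.3), (0.050, 0.8), (0.030, 0.6)]
X = np.vstack([rng.normal(m, (0.004, 0.08), size=(20, 2)) for m in means])
y = np.repeat([0, 1, 2, 3], 20)

lda = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(lda, X, y, cv=cv)   # accuracy per fold
print(f"5-fold accuracy: {scores.mean():.2f}")
```

In the paper, the planar-light model would see only the f_x = 0 features while the SFDI model uses the full multi-frequency feature set; both are evaluated with the same cross-validation scheme.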

Projector-Camera Calibration Results
The normalized reflectance image of the projection was obtained from a standard whiteboard via Equation (1). Whiteboard images were acquired under a white field and under projections at the set frequencies.
As shown in Figure 4a, a total of 21 Gray code patterns and their complementary images were generated. The generated Gray code patterns were projected onto a ceramic checkerboard calibration board, as shown in Figure 4b. About 10 groups of images were captured at different calibration-plate angles to improve the accuracy of the result. Gray code decoding and projector calibration were realized with open-source software [26]; the calibration software interface is shown in Figure 4c. The reprojection error of the projector was 0.63 pixels, and that of the camera was 2.32 pixels. The camera reprojection error was larger than that obtained by Moreno and Taubin [26], possibly because the program parameters cannot be configured when using the software, but the projector reprojection error was similar. Overall, the projector-camera calibration results are reliable.
Figure 5a shows the sinusoidal pattern without keystone correction, and Figure 5b shows the keystone-corrected pattern. The keystone-corrected pattern was projected onto a standard whiteboard. The reflectivity of the image was calculated by Equation (1) and used to evaluate the correction effect. Figure 5c shows the reflectivity of the image before correction, and Figure 5d shows the reflectivity after correction. The range of α was 12°-22° with a gradient of 1°, and the range of β was 0°-1° with a gradient of 0.1°. As shown in Figure 5e, the minimum correction error was e_p = 0.017 pixels, at a pitch angle α = 20° and a yaw angle β = 0.5°. A smaller angle gradient would increase the number of images to be acquired, and including the roll angle would greatly complicate the equations; however, the hardware design of this study avoids a roll angle. The error after keystone correction was very small, so the captured image can be considered free of obvious deformation. Even with the simplified correction process, the keystone distortion can still be corrected accurately.

Frequency Calibration Results
After keystone correction, the actual frequency of the sinusoidal pattern was calibrated. The average pixel period of the captured image was Tc(0) = 158.2 pixels. The calculation showed that the actual period of the projected pattern was Tr(0) = 10.1 mm, and the period ratio was rp(0) = 3960 pixel m−1. The calculated value rp(0) was used as the initial period ratio, and the required projection pixel period was calculated from it. Verification results were obtained at each frequency; the period ratio rp(k) was updated at each frequency by Equations (5) and (6) and re-checked after projection and collection.
When the frequency was less than 0.014 mm−1, a complete sinusoidal pattern could not be captured on the whiteboard. Therefore, only the nine frequencies larger than 0.014 mm−1 were used for the calibration; other frequencies were estimated from the frequency calibration results. Frequency is the reciprocal of the period. The relationship between the expected frequency and its two-step verification values at each frequency is shown in Figure 6a. The relative error of the period at each frequency was calculated by Equation (7). As shown in Figure 6b, the average percentage error was reduced from 2.1% before calibration to 0.1% after calibration.
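The period-ratio bookkeeping can be illustrated as follows. This is a simplified sketch, not Equations (5) and (6) themselves (which are not reproduced here): it assumes the period ratio is a pixels-per-metre scale on the target, so the projector pixel period for a desired spatial frequency is the ratio times the desired period, and the ratio is re-estimated from each projection/measurement pair.

```python
def projector_period_px(f_target_mm, r_p):
    """Projector pixel period needed for a target spatial frequency in mm^-1,
    assuming r_p is the period ratio in pixels per metre on the target."""
    period_m = 1.0 / (f_target_mm * 1000.0)  # period in metres
    return r_p * period_m

def update_period_ratio(period_projected_px, period_measured_m):
    """Simplified stand-in for the update in Equations (5) and (6): after
    projecting a known pixel period and measuring the realised period on the
    whiteboard, re-estimate the pixels-per-metre ratio."""
    return period_projected_px / period_measured_m

# Initial values from the calibration: r_p(0) = 3960 pixel/m.
r_p0 = 3960.0
period_px = projector_period_px(0.02, r_p0)  # e.g. f = 0.02 mm^-1 > 0.014 mm^-1
print(round(period_px, 1))  # 198.0 projector pixels
```

With Tr(0) = 10.1 mm this ratio implies a projected pixel period of about 40 pixels (3960 × 0.0101), which is consistent with the reported initial calibration.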

Optical Property Calibration Results
Liquid phantoms were used to calibrate the optical properties measured by the SFDI system. The reference values of μa and μ's of the phantoms were calculated at six wavelengths. The relationship between the reference values of the optical properties and the volume fraction is shown in Figure 7. The reference value of μa was linearly related to the volume fraction of ink, and the reference value of μ's was linearly related to the volume fraction of TiO2. Both μa and μ's increased with increasing wavelength. The determination coefficients between the absorption coefficient μa and the ink volume fraction at the six wavelengths are shown in Table 1; they confirm that the reference values of μa and the volume fraction of ink are highly linearly related.
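The linearity check behind Table 1 amounts to a least-squares line fit and a coefficient of determination. A minimal sketch, using hypothetical phantom values (the real ones come from the ink/TiO2 recipe at each wavelength):

```python
import numpy as np

# Hypothetical reference absorption values (mm^-1) at one wavelength for
# increasing ink volume fractions; illustrative only.
ink_fraction = np.array([0.002, 0.004, 0.006, 0.008, 0.010])
mu_a_ref = np.array([0.021, 0.042, 0.061, 0.083, 0.102])

# Least-squares line and coefficient of determination R^2.
slope, intercept = np.polyfit(ink_fraction, mu_a_ref, 1)
pred = slope * ink_fraction + intercept
ss_res = np.sum((mu_a_ref - pred) ** 2)
ss_tot = np.sum((mu_a_ref - mu_a_ref.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.4f}")  # close to 1 for a highly linear relationship
```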
The projection image was corrected and calibrated. The captured images were demodulated in three phases after the ROI (300 × 300 pixels) was selected, and the reflectance was calibrated. Multi-frequency fitting was used for inversion. After the optical properties had been obtained, the measured values and the reference values were fitted linearly to eliminate the linearity error of the system: the first set of data was used for μa calibration and the μ's test; the second set of data was used for the μa test and μ's calibration.
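The three-phase demodulation step uses the standard SFDI identity for sinusoidal patterns shifted by 120°: the AC modulation amplitude is recovered from the three captures, and the DC amplitude corresponds to planar illumination. A minimal per-pixel sketch:

```python
import numpy as np

def demodulate_three_phase(i1, i2, i3):
    """Standard three-phase SFDI demodulation for patterns shifted by 120 deg:
    returns the AC modulation amplitude and the DC (planar) amplitude."""
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc

# Synthetic check on a single pixel: I_k = DC + AC * cos(phi + offset_k).
offsets = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
dc, ac, phi = 0.6, 0.25, 1.1
i1, i2, i3 = dc + ac * np.cos(phi + offsets)
m_ac, m_dc = demodulate_three_phase(i1, i2, i3)
print(round(m_ac, 3), round(m_dc, 3))  # recovers 0.25 and 0.6
```

Applied to full images, the same expressions run element-wise, and the calibrated diffuse reflectance is then obtained by ratioing the sample amplitudes against those of a reference standard.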
The average error at the six selected wavelengths is shown in Table 2. The overall average error of μa and μ's at the six wavelengths was less than 8.88% and 4.54%, respectively, which is better than the values reported by Matthew et al. (23% and 6%) [31]. Linear calibration with the liquid phantoms brings the measured value of the sample closer to the true value. Some error in μa and μ's remains after the linear calibration, which means that other non-linear errors are still present, such as an unstable light source [32] and theoretical errors of the diffusion approximation [33], which have a noticeable impact on the measurement of μa. In this study, the inversion method of multi-frequency fitting was adopted; other improved inversion methods [17] can be adopted in further research.
Figure 8 shows the μa and μ's pseudo-color plots of normal pears and the three damage types (bruised, scratched and abraded) at 527 and 675 nm. The μa of the damaged region is larger than that of the normal area, while μ's shows the opposite behaviour, similar to other results in the literature [6,17]. Compared with μa mapping, the type of damage is easier to identify in μ's mapping and can be roughly distinguished. This is because a pear generally undergoes physical structural changes when damage occurs, which affect the μ's of the sample [18]. At 527 nm, both μa and μ's are larger than at 675 nm, because the pear has a small absorption peak near 527 nm, while the μ's of the pear tapers off over the visible (VIS) and near-infrared (NIR) range [34]. The many spots visible in the figure are the pear's epidermal breathing pores.
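The multi-frequency fitting inversion can be illustrated with the widely used diffusion-approximation reflectance model for SFDI (Cuccia et al.). The refractive index, parameter grids and brute-force search below are illustrative assumptions for a sketch, not the system's actual inversion code; practical systems typically use look-up tables or non-linear least squares.

```python
import numpy as np

def diffuse_reflectance(fx, mu_a, mu_sp, n=1.33):
    """Diffusion-approximation SFDI reflectance model (after Cuccia et al.)
    for spatial frequency fx (mm^-1), absorption mu_a and reduced scattering
    mu_sp (both mm^-1); n is an assumed tissue refractive index."""
    mu_tr = mu_a + mu_sp
    a_prime = mu_sp / mu_tr                       # reduced albedo
    mu_eff = np.sqrt(3.0 * mu_a * mu_tr)
    mu_eff_fx = np.sqrt(mu_eff**2 + (2.0 * np.pi * fx) ** 2)
    r_eff = 0.0636 * n + 0.668 + 0.710 / n - 1.440 / n**2
    A = (1.0 - r_eff) / (2.0 * (1.0 + r_eff))     # boundary mismatch term
    return 3.0 * A * a_prime / (
        (mu_eff_fx / mu_tr + 1.0) * (mu_eff_fx / mu_tr + 3.0 * A)
    )

def invert_grid(fx, rd_meas):
    """Toy multi-frequency inversion: brute-force least squares over a coarse
    (mu_a, mu_sp) grid; ranges are illustrative assumptions."""
    best, best_pair = np.inf, None
    for ma in np.linspace(0.001, 0.2, 100):
        for ms in np.linspace(0.2, 3.0, 100):
            err = np.sum((diffuse_reflectance(fx, ma, ms) - rd_meas) ** 2)
            if err < best:
                best, best_pair = err, (ma, ms)
    return best_pair

fx = np.array([0.0, 0.05, 0.1, 0.15, 0.2])   # spatial frequencies, mm^-1
rd = diffuse_reflectance(fx, 0.05, 1.5)      # synthetic "measurement"
ma_fit, ms_fit = invert_grid(fx, rd)         # recovers approx (0.05, 1.5)
```

The reflectance falls off with spatial frequency, and the fall-off rate carries the scattering information, which is why several frequencies are fitted jointly.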

Damage Discrimination Results
From the results (Table 3) of the four-class discrimination (normal, bruised, scratched and abraded), it can be seen that the SFDI technique was better at detecting and classifying pears with different damage types than the traditional planar light technique. The classification accuracy at 527 nm was higher than that at 675 nm, probably because of the absorption peak of pears near 527 nm. Analysis of the classification results showed that bruised and scratched pears were misclassified most often, which may be because both belong to minor damage and thus the average μa and μ's values in the region are close. Hence, the pears were reclassified into three classes (normal, minor damage and serious damage), and the same LDA method was used to classify them.
The results showed that the three-class discrimination was more accurate; in particular, the classification accuracy of the SFDI technique at 527 nm was up to 100%. In the study of Zhang et al. [6], the optical properties of apples with different levels of bruising were measured using integrating spheres and classified on that basis, with an accuracy of 92.5%. This further illustrates the feasibility of using optical properties for non-destructive detection of early fruit damage. Compared with the reference literature [18,35], the μa obtained in this experiment is small and the μ's is large. To explore the reason for this, the peel of the same batch of samples was removed and the flesh was imaged; it was found that the peel has a certain influence on the μa and μ's of pears, as reported in the study of Hu et al. [36]. In addition, the experiment was only carried out at two wavelengths (527 and 675 nm), and the near-infrared range was not studied due to system limitations; however, pears have significant absorption peaks in the near-infrared range [34], and the SFDI system could be improved to carry out experiments at more wavelengths in the future.
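The LDA discrimination used above can be sketched with a minimal from-scratch implementation (pooled within-class covariance plus class means) on hypothetical (μa, μ's) region averages; the class centres below are illustrative, not the measured pear values.

```python
import numpy as np

def lda_fit(X, y):
    """Minimal LDA: class means, pooled within-class covariance, priors."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooled within-class covariance (sum of class scatters / (N - K)).
    cov = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    cov = cov / (len(y) - len(classes))
    priors = {c: np.mean(y == c) for c in classes}
    return classes, means, np.linalg.inv(cov), priors

def lda_predict(model, X):
    """Assign each row to the class with the highest linear discriminant score."""
    classes, means, cov_inv, priors = model
    scores = np.stack([
        X @ cov_inv @ means[c]
        - 0.5 * means[c] @ cov_inv @ means[c]
        + np.log(priors[c])
        for c in classes
    ], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Hypothetical (mu_a, mu_s') region averages for three classes:
# normal, minor damage, serious damage (values illustrative only).
rng = np.random.default_rng(0)
centres = {"normal": (0.04, 1.8), "minor": (0.07, 1.4), "serious": (0.12, 0.9)}
X = np.vstack([rng.normal(c, 0.01, size=(30, 2)) for c in centres.values()])
y = np.repeat(list(centres), 30)
model = lda_fit(X, y)
acc = np.mean(lda_predict(model, X) == y)
print(f"training accuracy = {acc:.2f}")
```

With well-separated class centres, as in this toy data, LDA separates the groups cleanly; the real discrimination accuracy depends on how distinct the measured μa and μ's averages are between damage classes.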

Conclusions
In this study, an optical calibration and correction method was proposed for the SFDI system. After calibration and correction, the accuracy of the system's measurements was verified using liquid phantoms. Projector-camera calibration, projection keystone correction and frequency calibration ensured that the projected pattern was not distorted, bringing the experimental conditions closer to ideal and reducing the experimental error. Next, the optical properties of normal pears and pears with three different damage types were measured using the calibrated SFDI system, and the LDA method was used to discriminate pears with different surface damage types based on the obtained μa and μ's. Further studies can investigate the application prospects of the SFDI technique for the detection of agricultural products and foodstuffs.