A Fast Iterative Procedure for Adjacency Effects Correction on Remote Sensed Data

This paper describes a simple, iterative atmospheric correction procedure based on the MODTRAN® 5 radiative transfer code. The procedure receives as input a spectrally resolved at-sensor radiance image, evaluates the different contributions to the received radiation, and corrects the adjacency effect from surrounding pixels, permitting the retrieval of the ground reflectance spectrum for each pixel of the image. The procedure output is a spectral ground reflectance image obtained without the need for any user-provided a priori hypothesis. The novelty of the proposed method lies in its iterative approach for evaluating the contribution of surrounding pixels: a first run of the atmospheric correction procedure is performed by assuming that the spectral reflectance of the surrounding pixels is equal to that of the pixel under investigation. This information is used in the subsequent iteration steps to estimate the spectral radiance of the surrounding pixels, in order to make a more accurate evaluation of the reflectance image. The results are presented and discussed for two different cases: synthetic images produced with the hyperspectral simulation tool PRIMUS, and real images acquired by the CHRIS-PROBA sensor. The retrieved reflectance error drops after a few iterations, providing a quantitative estimate of the number of iterations needed. The relative error after the procedure converges is on the order of a few percent, and the causes of the remaining uncertainty in the retrieved spectra are discussed.


Introduction
The radiance observed by a hyperspectral imager depends on the illumination and observation geometry and on the interaction of solar radiation with the ground and atmosphere [1]. Gas absorption is the main cause of the modulation of the atmospheric transmission spectrum. Furthermore, scattering by molecules and aerosols plays a significant role in the extinction of radiation [2], so that the radiation received by the sensor can be ascribed to different contributions, depending on the scattering events occurring along each photon's trajectory and its interaction with the ground. The estimate of these different contributions to the received radiation can be carried out by means of a direct Radiative Transfer Model (RTM) [3].
An atmospheric correction procedure performs the so-called inversion of the direct model: it retrieves the surface spectral reflectance from the (calibrated) at-sensor spectral radiance [4][5][6][7] and permits the characterization of the observed surface, regardless of the effects of atmosphere, illumination and observation geometry. Ground surface spectral reflectance is the starting point for terrain classification, the determination of different environmental parameters and the Earth radiation budget estimate [8]. Inversion is performed by simulating, via the RTM, the propagation of solar radiation through the atmospheric medium and its interaction with ground and air components.
Due to atmospheric scattering, a single sensor pixel collects, together with the radiance directly received from the corresponding ground pixel along the ground-to-sensor direction, two different contributions [5,6,9]: radiation diffused by the atmosphere without interacting with the ground (path radiance), and radiation reflected by neighbouring ground pixels and scattered into the line of sight (the adjacency effect).

Materials and Methods
The atmospheric correction procedure proposed in this paper is an iterative procedure that uses MODTRAN® to model electromagnetic radiation propagation through the atmosphere. MODTRAN®, one of the most widely used radiative transfer models, assumes that the atmosphere is a stratified medium made up of plane-parallel layers between the ground and the Top Of Atmosphere (TOA) [16]. Starting from version 5, the MODTRAN® radiative transfer code implements the modelling approach proposed by Verhoef [1,6,10,31] for adjacency effects. The atmospheric correction input parameters are the Sun and observation (i.e., sensor) geometries and the atmospheric scattering and absorption properties, calculated from the constituent (i.e., gas and aerosol) profiles at a given spectral resolution. The proposed approach considers flat terrain, absence of cloud cover, and Lambertian surfaces, and neglects light polarization.
Following the nomenclature adopted by Verhoef [6,10], the spectral irradiance impinging on the generic ground pixel (with reflectance ρ) can be expressed as the sum of different contributions (Figure 1a): direct (modulated by atmospheric absorption and propagated along the Sun-to-ground-pixel line), diffuse (impinging on the pixel after one or more atmospheric scattering events) and diffuse after multiple interactions between the atmosphere (with albedo S) and the surrounding ground (with average reflectance ρ̄). As a consequence, the irradiance illuminating the generic ground pixel depends on the illumination/observation geometry, on the atmospheric medium, and on the contribution due to the reflectance (also called "hemispherical reflectance") of the neighbouring terrain.
Figure 1. (a) Contributions to the irradiance impinging on the generic ground pixel (with reflectance ρ) subtended by the IFOV of the corresponding image pixel: (0) direct Sun-to-pixel, (1) diffuse (impinging on the pixel after one or more scattering events, but without interaction with ground), (2) trapped between ground (with average reflectance ρ̄) and overlying atmosphere (with albedo S). (b) Contributions to the radiation received into the generic IFOV: (3) directly from the corresponding ground pixel without scattering events, (4) from neighbouring pixels via diffuse radiation, and (5) path radiance non-interacting with ground.
The radiance reaching the sensor (a function of the irradiance impinging on the ground pixel and of its reflectance ρ) can further be divided into the following contributions (Figure 1b): the directly transmitted radiance modulated by the ground-pixel-to-sensor atmospheric transmittance, the atmosphere-backscattered radiation (with no ground interaction), and the diffuse radiance scattered into the IFOV after being reflected by adjacent ground pixels.
It is interesting to note that, without contributions 2 (Figure 1a) and 4 (Figure 1b), the radiance acquired by each sensor pixel would be independent of the others. Pixel interdependence is caused by atmospheric scattering, either through the multiple ground reflections of contribution 2 (Figure 1a) or through the adjacency contribution 4 (Figure 1b).
For a generic wavelength λ and a Sun zenith angle ϑ_s, we consider E_grnd(λ, ϑ_s) as the total ground irradiance (direct plus diffuse, i.e., made up of the contributions 0 plus 1 of Figure 1a) impinging on a black ground (i.e., having zero reflectance). Therefore, the irradiance E_g illuminating the generic ground pixel (having non-zero ground reflectance) can be written as the sum of the contributions 0, 1 and 2:

E_g(λ, ϑ_s) = E_grnd(λ, ϑ_s) [1 + ρ̄(λ)S(λ) + (ρ̄(λ)S(λ))² + …] (1)

By substituting the corresponding limit of the geometric series in (1), E_g = E_grnd/(1 − ρ̄S), we get the total irradiance illuminating the pixel in the general case, considering for contribution 2 an average reflectance ρ̄.
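As a quick numerical illustration of Equation (1), the truncated series converges to the closed-form limit E_grnd/(1 − ρ̄S); the values below are arbitrary illustrative numbers, not MODTRAN output.

```python
# Illustrative check of Equation (1): the ground-atmosphere trapping terms
# form a geometric series in (rho_bar * S), whose limit is the closed form
# E_g = E_grnd / (1 - rho_bar * S). Values are illustrative placeholders.

def total_irradiance_series(e_grnd: float, rho_bar: float, s: float, n_terms: int) -> float:
    """Sum the first n_terms of the series in Equation (1)."""
    return e_grnd * sum((rho_bar * s) ** k for k in range(n_terms))

def total_irradiance_closed(e_grnd: float, rho_bar: float, s: float) -> float:
    """Closed-form limit of the series (requires rho_bar * S < 1)."""
    return e_grnd / (1.0 - rho_bar * s)

e_grnd, rho_bar, s = 100.0, 0.3, 0.2   # irradiance, average reflectance, albedo
series = total_irradiance_series(e_grnd, rho_bar, s, 10)
closed = total_irradiance_closed(e_grnd, rho_bar, s)
```

Because ρ̄S < 1 in practice, only a few terms of the series are needed before the truncation error becomes negligible.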
Following the MODTRAN® notation as in [31] and defining ϑ_v and ϕ_r as the view zenith angle and the relative Sun-observer azimuth angle, the observed at-sensor radiance L_obs(λ, ϑ_v, ϑ_s, ϕ_r) can be written as the sum of three terms: the path radiance L_path(λ, ϑ_v, ϑ_s, ϕ_r), the radiance received directly from the ground pixel and modulated by the ground-to-sensor transmittance, and the radiance from its neighbouring pixels, the latter taking into account the spatially averaged reflectance ρ̄ (Figure 1b):

L_obs(λ, ϑ_v, ϑ_s, ϕ_r) = L_path(λ, ϑ_v, ϑ_s, ϕ_r) + E_grnd(λ, ϑ_s) ρ(λ) T_obs / [π (1 − ρ̄(λ)S(λ))] + E_grnd(λ, ϑ_s) ρ̄(λ) t_obs / [π (1 − ρ̄(λ)S(λ))] (2)

where T_obs and t_obs are, respectively, the direct and diffuse transmittance of the atmosphere along the ground-to-sensor path, and L_path is the contribution given by the path radiance. The terms T_obs, t_obs, E_grnd and S are determined by the MODTRAN® RTM output for a given atmospheric model, Sun position and observation geometry. The term L_path depends on the observation geometry; therefore, it varies according to the position of the pixels in the at-sensor radiance image. To calculate it, we assume scene scanning along the sensor trajectory (common to both push-broom and whisk-broom acquisition), which fixes the observation geometry for each column of the image. L_path is provided by the RTM for three different values of ϑ_v and ϕ_r corresponding, respectively, to the leftmost, central and rightmost columns of the image. L_path(λ, ϑ_v, ϑ_s, ϕ_r) is then calculated by interpolation for each pixel of the image line.
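The per-column interpolation of L_path can be sketched as follows. The paper does not specify the interpolation scheme, so a per-band quadratic through the three MODTRAN anchor columns is used here as one plausible choice; the arrays are hypothetical placeholders, not real MODTRAN output.

```python
import numpy as np

# Sketch: MODTRAN provides L_path spectra for the leftmost, central and
# rightmost image columns; intermediate columns are filled by interpolation.
# A parabola through the three anchors reproduces them exactly (assumption:
# the scheme; linear interpolation between neighbours would also work).

n_cols, n_bands = 11, 4
lpath_left = np.full(n_bands, 1.0)     # hypothetical L_path at column 0
lpath_centre = np.full(n_bands, 1.2)   # hypothetical L_path at the central column
lpath_right = np.full(n_bands, 1.1)    # hypothetical L_path at the last column

cols = np.array([0.0, (n_cols - 1) / 2.0, n_cols - 1.0])
anchors = np.stack([lpath_left, lpath_centre, lpath_right])  # shape (3, n_bands)

# Fit a parabola per band through the three anchors and evaluate it at
# every column of the image line.
lpath_image = np.empty((n_cols, n_bands))
for b in range(n_bands):
    coeffs = np.polyfit(cols, anchors[:, b], deg=2)
    lpath_image[:, b] = np.polyval(coeffs, np.arange(n_cols))
```

By construction the interpolated field matches the three MODTRAN runs at their anchor columns and varies smoothly across the swath.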
Equation (2) can be expressed as an apparent reflectance

ρ_a(λ) = π L_obs(λ, ϑ_v, ϑ_s, ϕ_r) / [E_0(λ) cos ϑ_s]

by normalizing for the lighting conditions, where E_0 is the TOA solar irradiance:

ρ_a(λ) = ρ_0(λ) + (T_s + t_s) [ρ(λ) T_obs + ρ̄(λ) t_obs] / (1 − ρ̄(λ)S(λ)) (3)

Here T_s and t_s are, respectively, the direct and diffuse transmittance of the atmosphere along the Sun-to-ground path, and ρ_0(λ) = π L_path / [E_0(λ) cos ϑ_s]. Equation (3) is made up of three components: the first term never interacts with the ground; the second term takes into account the interaction of solar radiation with the observed pixel; and the third term considers the adjacency effects due to the neighbouring pixels, which have spatially averaged reflectance ρ̄. The apparent spectral reflectance ρ_a(λ) must be integrated over the spectral width of the considered instrumental spectral channel. After making the angular and spectral dependences implicit, Equation (3) can be rewritten as

ρ_a,f = ρ_0,f + [(T_s + t_s)(ρ T_obs + ρ̄ t_obs) / (1 − ρ̄S)]_f (4)

where the operator x_f = ∫_Δλ x(λ) f(λ) dλ takes into account the integration over the channel spectral response, in which Δλ is the spectral interval and f(λ) is the spectral response function normalized to unit integral. It is an acceptable hypothesis that the spectral variation of the reflectance terms within the channel is modest; thus, the terms ρ_f and ρ̄_f (reflectances spectrally averaged in the channel) can be factored out of the spectral integration. Reintroducing the geometric series (as in [31]) and considering only its first two terms, Equation (4) can be rewritten as

ρ_a,f = ρ_0,f + ρ_f S_1 + ρ̄_f S_2 (5)

with

S_1 = [(T_s + t_s) T_obs (1 + ρ̄S)]_f,  S_2 = [(T_s + t_s) t_obs (1 + ρ̄S)]_f (6)

This last formulation suggests that, for each spectral channel, the atmospheric albedo can be regarded as made up of two contributions: S_1, related to the pixel of interest, and S_2, related to the surrounding pixels.
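As a numerical illustration of the channel-level budget in Equation (5), the following sketch evaluates S_1, S_2 and the apparent reflectance for a single hypothetical channel; every value below is an illustrative placeholder, not a MODTRAN-derived quantity.

```python
# Sketch of the channel-averaged forward model in Equation (5):
# rho_a = rho_0 + rho * S1 + rho_bar * S2, where S1 collects the terms
# modulated by the direct ground-to-sensor transmittance T_obs and S2
# those modulated by the diffuse transmittance t_obs.

T_s, t_s = 0.80, 0.10        # Sun-to-ground direct / diffuse transmittance
T_obs, t_obs = 0.85, 0.08    # ground-to-sensor direct / diffuse transmittance
S = 0.15                     # spherical albedo of the atmosphere
rho_bar = 0.30               # spatially averaged surrounding reflectance

# Two-term truncation of the geometric series, as in the text:
# 1 / (1 - rho_bar * S) ~= 1 + rho_bar * S
S1 = (T_s + t_s) * T_obs * (1.0 + rho_bar * S)
S2 = (T_s + t_s) * t_obs * (1.0 + rho_bar * S)

rho_0 = 0.05                 # path-radiance term (never interacts with ground)
rho = 0.40                   # reflectance of the observed pixel
rho_a = rho_0 + rho * S1 + rho_bar * S2   # apparent reflectance, Eq. (5)
```

Note that S_1 and S_2 differ only through T_obs and t_obs, which is what separates the target-pixel contribution from the adjacency contribution.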

The Image Simulation Tool PRIMUS
The image simulation tool PRIMUS (PRISMA Realistic IMage Utility for Simulation) [32] takes into account the instrumental characteristics, the acquisition geometry and the major phenomena and interactions that affect the acquired hyperspectral images. The simulation process considers the illumination and acquisition geometry, the spatial and spectral variability of the target, the atmosphere (aerosol and gas constituents) and its interaction with the soil. PRIMUS uses the approach proposed by Verhoef and Bach [6] to take into account adjacency effects: the at-sensor radiance is calculated using the formula in Equation (3), assuming that the surrounding pixels are characterized by a spectral reflectance equal to their average value ρ̄. Finally, the simulation process considers the main effects introduced by the instrument, such as sensor sampling, instrument optical characteristics, and noise. The image simulation tool is composed of three independent blocks:
1. The scenario builder, which creates a reflectance ground map (associating combinations of spectra with the classes of a thematic map);
2. The atmospheric propagation calculator, which calculates the at-sensor radiance datacube using as input the reflectance map provided by the scenario builder. This step also simulates adjacency;
3. The sensor simulator, which computes the synthetic images of the selected scenario considering instrumental effects: foreoptics transmittance, Modulation Transfer Function (MTF), detector sampling (spectral and spatial), and noise (e.g., photonic noise and detector thermal noise).
PRIMUS is a powerful simulation tool that can be used to test and validate algorithms and procedures, thanks to the controlled flux of data and the presence of a verified "ground truth".
In the framework of this paper, PRIMUS was used to generate synthetic images to test critical issues of the iterative procedure described in the next section, in particular in the case of sharp spatial/spectral gradients inside the image, when the contribution to the apparent reflectance given by adjacency effects is maximum. PRIMUS is an end-to-end simulator that offers the possibility of comparing retrieved data with input data (acting as ground truth). The simulation is performed at a high spectral resolution, which allows convolution with the sensor response for each spectral channel.

Iterative Atmospheric Correction Procedure
The proposed iterative method aimed at solving Equation (5) by assuming that, at the first iteration, the spectral reflectance of the surrounding pixels was approximately equal to that of the observed pixel, i.e., ρ̄ ≈ ρ. Following the iterative approach, the first run of the correction procedure evaluated the spectral reflectance of each pixel of the entire image on the assumption that ρ̄ = ρ. The reflectance retrieved in the first run was used in the following step to calculate the new value for the mean spectral reflectance of the surrounding pixels ρ̄, which was then used in a new evaluation of the reflectance image. The iteration process can be summarized as follows:

STEP 0: Assuming ρ̄ ≈ ρ, the terms S_1 and S_2 of Equation (6) can be lumped together:

S_1 + S_2 = [(T_s + t_s)(T_obs + t_obs)(1 + ρS)]_f (7)

Under these assumptions, the apparent reflectance can be expressed as

ρ_a,f = ρ_0,f + ρ⁰_f (S_1 + S_2) (8)

and the at-ground spectral reflectance at order 0 is

ρ⁰_f = (ρ_a,f − ρ_0,f) / (S_1 + S_2) (9)

For each pixel of the processed image, the value of reflectance in (9) was used to determine the mean spectral reflectance ρ̄⁰_f averaged over the entire image. ρ̄⁰_f then became the input for the next iteration.

STEP 1:
Using the average ρ̄⁰_f as the mean reflectance of the surrounding pixels, the spectral reflectance at the first order was expressed as

ρ¹_f = (ρ_a,f − ρ_0,f − ρ̄⁰_f S_2) / S_1 (10)

Then the average value ρ̄¹_f was calculated by spatial averaging, as in the previous step, and used as input for the next iteration.

STEP N: The N-th order reflectance follows the same update:

ρᴺ_f = (ρ_a,f − ρ_0,f − ρ̄^(N−1)_f S_2) / S_1 (11)

At present, the number of iterations is set by the user. An alternative is to stop the iteration as soon as the minimum of a suitable user-defined functional is reached. A candidate for the functional could be the global (i.e., calculated on the entire image or on the main regions of interest) relative variation of the reflectance image ρᴺ_f with respect to ρ^(N−1)_f as calculated in the previous step. The model proposed by Verhoef and Bach [6] and described in the previous section assumes, for the evaluation of the average value ρ̄ of Equation (1), an equal contribution from all the surrounding pixels in the image (i.e., a spatial average over the entire image). In principle, our approach can exploit, when available, a non-uniform distribution for spatially weighting the average reflectance ρ̄, taking into account the surrounding pixel contribution as a function of the distance from the target pixel. In the trivial case in which the weighting function is the uniform distribution, the value of ρ̄ⁱ_f is the reflectance spatially averaged over the entire image and spectrally averaged in the channel f, as adopted by Verhoef and in the present paper.
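The STEP 0 to STEP N updates above can be sketched as a short routine. The following is an illustrative single-channel implementation under the uniform-weighting assumption, with made-up values standing in for the MODTRAN-derived terms ρ_0, S_1 and S_2; it also demonstrates a possible stopping functional based on the maximum per-pixel variation between iterations.

```python
import numpy as np

# Minimal sketch of the iterative retrieval (STEP 0 ... STEP N) for one
# spectral channel. rho_a_img is the apparent-reflectance image; rho_0, S1
# and S2 are hypothetical per-channel scalars.

def iterate_reflectance(rho_a_img, rho_0, S1, S2, n_iter=5, tol=1e-6):
    """Invert rho_a = rho_0 + rho*S1 + rho_bar*S2 iteratively."""
    # STEP 0: assume rho_bar == rho for every pixel, Eq. (9).
    rho = (rho_a_img - rho_0) / (S1 + S2)
    for _ in range(n_iter):
        rho_bar = rho.mean()          # uniform spatial weighting over the image
        rho_new = (rho_a_img - rho_0 - rho_bar * S2) / S1   # STEP N, Eq. (11)
        if np.abs(rho_new - rho).max() < tol:   # candidate stopping functional
            rho = rho_new
            break
        rho = rho_new
    return np.clip(rho, 0.0, 1.0)     # reflectance clipped to the physical range

# Synthetic consistency check: build rho_a from a known truth and invert it.
truth = np.array([[0.1, 0.5], [0.9, 0.3]])
rho_0, S1, S2 = 0.05, 0.75, 0.06
rho_a = rho_0 + truth * S1 + truth.mean() * S2
retrieved = iterate_reflectance(rho_a, rho_0, S1, S2, n_iter=20)
```

Because the update contracts with factor S_2/S_1 (small, since t_obs ≪ T_obs), the retrieved image converges to the truth in a few iterations, consistent with the behaviour reported in the Results section.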

Implementation of the Procedure
The iterative procedure was implemented in a stand-alone software tool using as input the output data of the MODTRAN® 5.3.2 radiative transfer code. In this version of MODTRAN, the four-stream model [6] produces outputs useful for the calculation of the terms introduced in Equation (2). In particular, from the dedicated *.acd MODTRAN output file, the following five quantities were extracted and spectrally averaged by using a user-selected filter (Gaussian, triangular or rectangular) with the central wavelength and the Full Width at Half Maximum (FWHM) of each spectral band of the sensor, implementing the calculation of Equations (7)-(11): the Sun-to-ground diffuse transmittance of the atmosphere t_s, the Sun-to-ground direct transmittance T_s, the observer-to-ground embedded diffuse transmittance t_obs, the observer (or sensor)-to-ground direct transmittance T_obs, and the spherical albedo of the atmosphere S. The value of ρ_0 was calculated from L_path and E_0, and the Sun zenith angle was computed for user-selected geographic coordinates and time of day.
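The channel averaging x_f applied to these MODTRAN quantities can be sketched as follows, here for the Gaussian filter option; the wavelength grid, channel centre and FWHM are illustrative, and the transmittance spectrum is a placeholder rather than real *.acd output.

```python
import numpy as np

# Sketch of the channel spectral averaging x_f = integral of x(l) f(l) dl,
# with a Gaussian response f normalised to unit integral, as used to average
# the MODTRAN quantities (t_s, T_s, t_obs, T_obs, S) over each sensor band.

def gaussian_response(wl, centre, fwhm):
    """Gaussian spectral response, normalised so that its integral is 1."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    f = np.exp(-0.5 * ((wl - centre) / sigma) ** 2)
    return f / np.trapz(f, wl)

def channel_average(x, wl, centre, fwhm):
    """Spectrally average the quantity x over one instrument channel."""
    f = gaussian_response(wl, centre, fwhm)
    return np.trapz(x * f, wl)

wl = np.arange(400.0, 1001.0, 1.0)              # nm, 1 nm step
transmittance = np.linspace(0.6, 0.9, wl.size)  # hypothetical T_s spectrum
t_avg = channel_average(transmittance, wl, centre=700.0, fwhm=10.0)
```

Since the response integrates to one, averaging a spectrally flat quantity returns that quantity unchanged, which is a convenient sanity check for the filter implementation.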
Concerning the observation geometry, we assumed a pushbroom acquisition mode (across-track acquisition); that is, the sensor observed a spectrally resolved single line on the ground, orthogonal to the flight direction. In pushbroom geometry, observation angles are constant along the flight direction (along-track), i.e., for each column of the image. Differences in observation angles for each pixel in each row of the image are managed by interpolating three different MODTRAN runs with the proper ϑ_v and ϕ_r angles for, respectively, the left, central and right columns of the image (with respect to the flight direction).
The spectrally averaged values of each parameter are angularly interpolated and the zero-order reflectance image is calculated. Further iterations allow the calculation of the N-order reflectance ρᴺ_f, integrated over the spectral width of the considered band as in (11). The selection of the input data and the settings for the output, as well as the number of iterations, can be customised.
The code is entirely developed in Java and runs on every OS implementing a Java Virtual Machine able to run standard Java bytecode.

Results
The iterative algorithm was applied to two different images: a simulated image, produced with the in-house developed hyperspectral image simulator PRIMUS (introduced in Section 2.1), and a real image acquired by the CHRIS sensor on the PROBA-1 (Project for On-Board Autonomy-1) platform. The simulated image was used to test the algorithm's performance on a supervised data set in which almost all the parameters involved in hyperspectral image acquisition can be controlled. The test on the CHRIS image was exploited to assess the performance of the algorithm when applied to a realistic case study.

Atmospheric Correction Procedure Applied to the Simulated Image
For the purposes of this paper, the simulation performed by PRIMUS was simplified by directly providing as input a high-spectral-resolution ground-reflectance datacube and by neglecting the simulation of electronic noise and MTF (the effect of the detector was given by the spectral convolution of the image in each spectral channel). The input for the simulation of the at-sensor-radiance image was provided by a synthetic, high-spectral-resolution, 30 × 20 pixel reflectance datacube with 601 spectral bands at a step of 1 nm in the range of 400 to 1000 nm. This ground reflectance image was generated to represent a case in which the adjacency contribution could influence pixel mixing, leading to an incorrect estimation of the ground reflectance. The image was, consequently, non-representative of natural ground features; on the contrary, it presented different combinations of ground spatial and spectral patterns. This ground reflectance image was used as input to generate the corresponding at-sensor radiance image. The sensor simulated by PRIMUS has 64 channels with Gaussian spectral response, mirroring the characteristics of existing hyperspectral sensors in the same spectral range. The positions and spectral widths of the channels are reported in Table 1. As a consequence of the spectral convolution with the simulated sensor's channels, the simulated at-sensor-radiance image maintained the original spatial dimension of 30 × 20 pixels but with only 64 spectral bands in the 400-1000 nm range. At this point, the PRIMUS tool provided the at-sensor-radiance input image to test our retrieval procedure. The atmospheric vertical profile used for the simulation is shown in Figure 2. The image is made up of six different subimages, characterized by six different 10 × 10 pixel reflectance patterns. The dots in Figure 3 show the pixel positions for which the reflectance was extracted.
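A reflectance datacube of this kind can be assembled in a few lines. The sketch below builds a 30 × 20 × 601 cube with three of the pattern types described (uniform grey, spectral ramp, chessboard); the actual scene in the paper contains six 10 × 10 subimages, so this is only a partial, illustrative reconstruction.

```python
import numpy as np

# Sketch of a synthetic high-spectral-resolution reflectance datacube of the
# kind used as PRIMUS input: 30 x 20 pixels, 601 bands (400-1000 nm, 1 nm).

n_rows, n_cols, n_bands = 30, 20, 601
wl = np.linspace(400.0, 1000.0, n_bands)
cube = np.zeros((n_rows, n_cols, n_bands))

# Uniform grey block: spectrally constant rho = 0.5 (as for pixel B).
cube[0:10, 0:10, :] = 0.5

# Spectral ramp block: rho increasing linearly from 0.1 to 0.9 (as for C).
ramp = 0.1 + 0.8 * (wl - 400.0) / 600.0
cube[0:10, 10:20, :] = ramp

# Chessboard block: alternating rho = 0.5 and rho = 1.0 (as around D).
rows, cols = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
checker = np.where((rows + cols) % 2 == 0, 0.5, 1.0)
cube[10:20, 0:10, :] = checker[:, :, None]
```

Such a cube would then be convolved with the 64 Gaussian channel responses to produce the simulated at-sensor spectral sampling.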
The pixels were chosen following the desiderata described in the following list.
Pixels surrounded by spatially uniform reflectance:
A: pixel with black reflectance (spectrally constant: ρ(λ) = 0.0) surrounded by pixels with the same reflectance value;
B: pixel with grey reflectance (spectrally constant: ρ(λ) = 0.5) surrounded by pixels with the same reflectance value;
C: pixel with spectral reflectance linearly increasing from 0.1 (at 400 nm) to 0.9 (at 1000 nm) surrounded by pixels with the same reflectance value;
Pixels surrounded by a chessboard spatial pattern, spectrally constant:
D: pixel with grey reflectance (spectrally constant: ρ(λ) = 0.5) surrounded by a chessboard spatial pattern of pixels with reflectance between 0.5 and 1.0;
E: pixel with unitary reflectance (spectrally constant: ρ(λ) = 1.0) surrounded by a chessboard spatial pattern of pixels with reflectance increasing from 0.0 (at 400 nm) to 0.5 (at 1000 nm);
Pixels surrounded by a random spatial pattern, spectrally variable:
F: pixel with ground reflectance varying from 0.5 (at 400 nm) to 1.0 (at 1000 nm) surrounded by a random spatial pattern, spectrally variable;
G: pixel with ground reflectance varying between 0.0 and 1.0 with two maxima (at 500 nm and 900 nm) surrounded by a random spatial pattern, spectrally variable;
H: pixel with ground reflectance varying between 0.0 and 1.0 with two maxima (at 500 nm and 900 nm) surrounded by a random spatial pattern, spectrally variable.
Figure 6. Reflectance spectra of the pixels F, G and H as the number of iterations increases. The pattern of surrounding pixels affects the first retrieval (iteration 0). The retrieved reflectance converges to the real value after the subsequent iterations. Both the reflectance spectra (at the top) and their difference with the ground truth (at the bottom) are shown.
The procedure for determining ground reflectance was run iteratively. The result after each iteration was compared with the ground truth to test the goodness of the reconstructed reflectance.
The retrieved reflectance spectrum was compared with the corresponding ground truth image, which is, by definition, the image observed by the sensor, normalized for illumination effects and observed without the presence of the atmosphere. As a consequence, the ground truth has the physical dimensions of a reflectance. It was obtained directly by the convolution of the high-spectral-resolution reflectance image (the same used as input for the at-sensor radiance simulation by PRIMUS) with the sensor spectral response of each channel (as in Table 1). Figure 3 shows, for the first and last spectral channels, how the retrieved reflectance changes as the iteration number increases. The ground truth reflectance is also shown, together with the positions of the eight points whose spectra were individually analysed. Each point was representative of different ground conditions: pixels surrounded by spatially uniform reflectance (A, B, C); pixels surrounded by a chessboard spatial pattern, spectrally constant (D, E); pixels surrounded by a random spatial pattern, spectrally variable (F, G, H).
For points A to H, Figures 4-6 illustrate the results of successive iterations compared with the "ground truth" data. The reflectance spectra and their difference from the ground truth were plotted for all the selected points. Note that the algorithm clips to 1 all retrieved reflectance values greater than 1. We observed that the ground pattern affected the first retrieval of ground reflectance (iteration 0), given that it was a spectrum retrieved under the assumption of a spatially constant reflectance value. The retrieved values of reflectance converged to the ground truth (grey line) after the first iteration.
An analogous analysis was carried out for the entire image for each spectral channel: the difference between the entire retrieved reflectance image and the ground truth was evaluated as the iteration number increased. Figure 7 shows, as a function of the wavelength, the difference between the spatially averaged reflectance image and the spatially averaged ground truth (a) and the maximum error value between the retrieved reflectance and the ground truth (b) after each iteration. The analysis was extended to the root-mean-square (RMS) error, as shown in Figure 8. The RMS was calculated, respectively: (a) as the root of the squared difference between the retrieved reflectance at points A to H and the corresponding ground truth, averaged over each point of the spectrum; (b) as the root of the squared difference between all the retrieved reflectance image pixels and the ground truth, averaged both spatially and spectrally.
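The two RMS figures of merit can be written compactly as below; the arrays are small random placeholders standing in for real retrieved and ground-truth datacubes.

```python
import numpy as np

# Sketch of the two RMS error metrics described above: (a) a per-pixel
# spectral RMS between a retrieved spectrum and its ground truth, and
# (b) a global RMS over the whole datacube (spatial and spectral average).

def spectral_rms(retrieved_spectrum, truth_spectrum):
    """Root of the squared difference, averaged over the spectrum (case a)."""
    return np.sqrt(np.mean((retrieved_spectrum - truth_spectrum) ** 2))

def global_rms(retrieved_cube, truth_cube):
    """RMS averaged both spatially and spectrally (case b)."""
    return np.sqrt(np.mean((retrieved_cube - truth_cube) ** 2))

rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 1.0, size=(30, 20, 64))   # hypothetical ground truth
retrieved = truth + 0.01                           # uniform 1% offset as toy error
```

With a spatially and spectrally constant offset, both metrics reduce to the offset itself, which makes the two definitions easy to verify.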

Figure 7. Difference between the spatially averaged reflectance image and the spatially averaged ground truth (a) and maximum error between retrieved reflectance and ground truth (b) as a function of the wavelength after each iteration step.
Figure 8. (a) Standard deviation with respect to the ground truth for the retrieved reflectance of each pixel from A to H as a function of the number of iterations. The standard deviation is calculated as the root of the squared difference between the retrieved reflectance spectrum and the ground truth, averaged over all the values of the spectrum of each point. (b) Standard deviation with respect to the ground truth for the retrieved reflectance of the entire image as a function of the number of iterations. Analogously to (a), the standard deviation is calculated as the root of the squared difference between the retrieved reflectance image and the ground truth, but averaging both spectrally and spatially over each value of the entire datacube.

Atmospheric Correction on CHRIS-PROBA Image
CHRIS [32], developed by SIRA (UK) and mounted on the mini-satellite PROBA-1, was the first hyperspectral sensor capable of acquiring the same image at five different viewing angles (Viewing Zenith Angle equal to 0°, ±36°, ±55°). In the framework of the ESA project EOPI cat. 1-LBR Project ID.2832 "Assimilation of biophysical and biochemical variables in biochemical and hydrological models at landscape scale", the CHRIS sensor acquired a large number of images over the Cal/Val test site inside the Migliarino-San Rossore-Massaciuccoli Regional Park (Italy), managed by our institution. CHRIS performed acquisitions over 18 spectral bands in the visible and near infrared with a spatial resolution of 18 m. Table 2 summarizes CHRIS instrumental characteristics. During the CHRIS acquisition, an in-field campaign for data validation and assimilation was executed at San Rossore. A Fieldspec spectroradiometer by ASD Inc. was used to acquire in situ spectral reflectance measurements of different materials belonging to three classes: soil, vegetation and manmade materials. Measurements mainly consisted of the spectral reflectance of vegetation and soils, but spectra of manmade materials such as cement and tiles were also acquired. Measurements of inland waters were also carried out, in particular on freshwater canals in the park. As far as vegetation was concerned, the reflectance spectra of grass, leaves (poplar, holm, oak and pine), shrubs (bramble), dry grass and dry shrubs were acquired. Spectral reflectance measurements of soils and sands were performed in different areas. During acquisitions, the change in solar illumination was tracked by taking a white reference measurement every 10 min using a 25 × 25 cm Spectralon tablet. The spectrum of each material was obtained as the average of 10 measurements. Figure 9 shows different acquired natural and manmade spectra.
Figure 9. Spectral reflectance measurements of different materials acquired during a field campaign in the Migliarino-San Rossore-Massaciuccoli Regional Park (Italy) using a Fieldspec spectroradiometer. Reference measurements were performed using a Spectralon tablet. The spectrum of each material was obtained as an average of 10 measurements. The spectra of manmade materials and different types of soil are shown in (a). Vegetation and water spectra are shown in (b).
Samples of natural targets were also collected, and their reflectance was measured in a laboratory using a PERKIN ELMER LAMBDA 19 spectroradiometer. Both in-field and laboratory measurements were used to validate the results of atmospheric correction procedures.
A selected CHRIS image, characterized by high radiometry and clear-sky acquisition, together with the corresponding ground truth acquired for selected pixels, made it possible to test our iterative procedure on a real dataset. Figure 10a shows the at-sensor radiance image acquired with Viewing Zenith Angle (VZA) equal to 0° by CHRIS over San Rossore Park on 25 July 2003. Figure 10b shows the radiance spectra extracted from pixels containing natural or manmade targets, where the discontinuity at 551 nm introduced in the last release (Release 4.1) of CHRIS radiometrically calibrated data is evident. This release mitigated unexpected peaks (present in Release 3.1) at 490 nm (close to an O3 feature), 753 nm (close to an O2 feature), and 911 nm (close to an H2O feature), but introduced a new one at 551 nm [33]. The related reflectance spectra obtained by applying the iterative atmospheric correction procedure are shown in Figure 10c. As for the previous case with simulated data, the iterations were stopped at the tenth step after observing that the retrieved reflectance spectrum reached an asymptotic value. On the basis of the results obtained for the simulated data, we first assumed that the asymptotic value found by the algorithm was the true reflectance spectrum. Then the asymptotic value was compared with the one directly measured in-field for selected pixels.
Following this approach, for each iteration the differences between the estimated reflectance value at the generic iteration step and the asymptotic one were evaluated. The difference between each reflectance retrieval and its asymptotic value was spectrally dependent as shown in Figure 11a. The value of this maximum as the iteration step increased is shown in Figure 11b, where it was evaluated for all the targets shown in Figure 10b,c.
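The convergence diagnostic described above (maximum spectral deviation of each iterate from the asymptotic spectrum) can be sketched as follows; the spectra, band centres and decay rate below are hypothetical stand-ins, not CHRIS data:

```python
import numpy as np

def max_deviation_per_iteration(spectra):
    """Treat the last retrieved spectrum as the asymptotic value and
    return, for each iteration, the maximum spectral difference from it
    (the quantity plotted against the iteration number)."""
    asymptotic = spectra[-1]
    return [float(np.max(np.abs(s - asymptotic))) for s in spectra]

# Hypothetical sequence converging geometrically to an asymptote.
wl = np.linspace(400.0, 1000.0, 18)        # CHRIS-like band centres, nm
asym = 0.3 + 0.1 * np.sin(wl / 100.0)      # stand-in "true" spectrum
spectra = [asym + (0.4 ** n) * 0.05 * np.cos(wl / 50.0) for n in range(11)]

devs = max_deviation_per_iteration(spectra)
# devs decreases monotonically; already at iteration 2 the maximum
# difference has dropped by more than 80% with respect to iteration 0.
```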
Finally, spectral reflectance values were numerically compared with reflectance spectra measured in situ and used as ground truth data. The comparison was performed using data coming from selected areas within San Rossore Park. The selection criteria are listed below:
• areas should be accessible, to easily perform the measurements during acquisition;
• areas should, as far as possible, be uniform within a spatial dimension of 54 × 54 m (equal to three times the CHRIS Ground Sampling Distance (GSD)) in order to avoid mixed pixels when comparing ground measurements with reflectance spectra retrieved from the CHRIS atmospherically corrected images.
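The uniformity criterion (a 3 × 3 pixel, i.e., 54 × 54 m, window at CHRIS's 18 m GSD) could be checked numerically along these lines; the relative-standard-deviation threshold is a hypothetical choice, not a value from the paper:

```python
import numpy as np

def is_uniform_window(image, row, col, half=1, max_rel_std=0.05):
    """Check spatial uniformity of a (2*half+1)^2 pixel window (3x3 for
    half=1) in every spectral band of a (rows, cols, bands) cube.
    The 5% relative-standard-deviation threshold is illustrative."""
    win = image[row - half:row + half + 1, col - half:col + half + 1, :]
    rel_std = win.std(axis=(0, 1)) / win.mean(axis=(0, 1))
    return bool(np.all(rel_std < max_rel_std))

# Toy image: a perfectly uniform patch embedded in a variable background.
rng = np.random.default_rng(1)
img = rng.uniform(0.1, 0.7, size=(10, 10, 18))
img[3:6, 3:6, :] = 0.4                     # uniform 3x3 area

uniform = is_uniform_window(img, 4, 4)     # centre of the uniform patch
mixed = is_uniform_window(img, 1, 1)       # random background window
```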
Taking into account the considerations listed above, a pixel relative to sand and another relative to vegetation were selected in the image. Their comparison is shown in Figure 12a,b, where the retrieved reflectance spectra at the first and last iteration are reported together with the corresponding ground truth values. The maximum and mean values of the relative error between the ground truth and the retrieved values as the iteration number increased are reported in Figure 12c. The graph stops at step 7 since an asymptotic value was reached. The relative error as a function of wavelength is shown in Figure 12d. The difference between the ground truth and the (interpolated) retrieved reflectance spectra for the first 4 iterations is reported in Figure 12e,f.
Figure 11. Spectral dependence of the difference between the retrieved reflectance at the first iteration and the asymptotic value reached at iteration n = 10 (a). Maximum value of the difference between the retrieved reflectance at iteration n and the asymptotic value reached at iteration n = 10 (b). The difference was evaluated for different coverages.
Figure 12. Spectral reflectance retrieved by the atmospheric correction procedure at iteration step 0 (grey) and after reaching the asymptotic value, compared with measured (ground truth) data (dashed line) for the "sand" pixel (a) and for the "meadow" pixel (b). (c) Max and mean relative error as the iteration number increases. (d) Relative error as a function of the wavelength between asymptotic values and ground truth data. Difference between the measured ground truth and the retrieved reflectance for the "sand" (e) and the "meadow" (f) pixels for the first 4 iterations; after the second iteration the curves overlap.

Atmospheric Correction Procedure Applied to the Simulated Image
As stated in the previous section, a simulated image was used to test and validate the iterative procedure, thanks to the controlled flux of data and the presence of a verified "ground truth". Due to the use of synthetic data, any spatial effect of pixel mixing introduced by the sensor and optics was completely absent, and the goodness of the retrieval procedure could be tested in a controlled environment for different ground reflectance patterns.
The spectra relative to points A to H of Figures 4-6 are representative of different scenarios and were selected to analyse the performance of the retrieval algorithm on a pixel with:
• a spatially uniform background and a constant or linearly variable reflectance spectrum (points A to C);
• a background characterized by a chessboard pattern and a spectrally constant reflectance spectrum (points D, E);
• a background characterized by a chessboard pattern and a spectrally variable (and non-linear) spectrum (points F, G, H).
For all the points, the contribution of surrounding pixels to retrieved reflectance was visible for iteration step 0, i.e., for the first-guess of the ground reflectance spectrum obtained by considering a spatially constant ground. Thereafter, the algorithm converged to the ground truth value with uncertainty already less than 1% at step 2. We noted that at each iteration step, both for points A to C (uniform background) and for points D to H (structured background), the retrieved spectrum "oscillated" around the true value (i.e., the difference between retrieved and true spectra was either overestimated or underestimated), converging to the ground truth when the number of iterations increased. Such behaviour appeared to be independent of the spectral structure of the reflectance, as it was observed for reflectance spectra having different spectral shapes (both in Figure 4 and in Figures 5 and 6).
The cases with a spectrally constant reflectance spectrum (points A, B, D and E in Figures 4 and 5) or a linearly variable one (point C in Figure 4) clearly showed that the error on the retrieved reflectance at step 0 (i.e., considering constant ground reflectance) was greater in the shorter-wavelength (blue) region of the spectrum. This behaviour can be explained by considering that:
• step 0 did not consider spatial differences between the generic pixel and its surroundings, and
• atmospheric scattering was greater in this spectral region (due to the effects of Rayleigh scattering, as clearly shown by the transmittance spectrum of Figure 2), thereby increasing the effects of the surrounding pixel reflectance on the retrieved spectrum.
Subsequent iterations correctly converged to the "ground truth" value of the reflectance spectrum.
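The iterative logic and its step-0 assumption can be illustrated with a deliberately simplified, single-band scalar model; the constants and the use of a plain spatial mean for the background are illustrative stand-ins for the paper's MODTRAN®-based formulation and adjacency kernel:

```python
import numpy as np

# Simplified single-band radiative model (all symbols are stand-ins):
#   L = L_path + E * (t_dir * rho + t_dif * rho_bg)
# where rho is the target reflectance and rho_bg the (spatially
# averaged) reflectance of the surrounding pixels.
L_PATH, E, T_DIR, T_DIF = 5.0, 100.0, 0.6, 0.2

def forward(rho, rho_bg):
    """At-sensor radiance for given target and background reflectance."""
    return L_PATH + E * (T_DIR * rho + T_DIF * rho_bg)

def retrieve(radiance, n_iter=10):
    """Iterative inversion: step 0 assumes rho_bg == rho for every
    pixel; later steps take rho_bg from the spatial mean of the
    previous retrieval (a crude stand-in for the adjacency kernel)."""
    rho = (radiance - L_PATH) / (E * (T_DIR + T_DIF))   # step 0
    for _ in range(n_iter):
        rho_bg = rho.mean()                             # neighbours
        rho = (radiance - L_PATH - E * T_DIF * rho_bg) / (E * T_DIR)
    return rho

# Chessboard-like ground truth: alternating dark and bright pixels.
truth = np.array([0.1, 0.5, 0.1, 0.5])
radiance = forward(truth, truth.mean())
retrieved = retrieve(radiance)
# In this toy setting the retrieval matches the ground truth, while the
# step-0 estimate over/underestimates the dark/bright pixels.
```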
A similar behaviour was observed when considering the entire, spatially averaged image. Figure 7a shows the spectrum of the difference between the spatial average of the retrieved reflectance image (for different steps) and the spatial average of the ground truth. Figure 7b displays the trend of the maximum error in reflectance retrieval, calculated for all the pixels of the image with respect to the ground truth, for different iteration steps. Again, the uncertainty at the first steps was greater at the shortest wavelengths. We also noted the effects of absorption bands (mainly of water, between 900 and 1000 nm) deteriorating the quality of the retrieval.
A more quantitative result is offered by Figure 8, showing the RMS between the retrieved reflectance spectrum and the ground truth, where it is clear that after iteration step 3 the retrieval uncertainty was approximately an order of magnitude smaller than at step 0 (i.e., considering spatially constant reflectance values for all the pixels): respectively, less than 0.01 for Figure 8a and less than 0.001 for Figure 8b. It is worth noting that the results of Figure 8a,b are not intercomparable, because the RMS is computed on the spectrum of each selected point for (a) and on all the points of the datacube for (b).

Atmospheric Correction on CHRIS-PROBA Image
Section 4.2 showed the results obtained by applying the iterative procedure to real data, i.e., to the CHRIS acquisition shown in Figure 10a. In this case too, the retrieved reflectance spectra reached an asymptotic value after a few iterations. Such an asymptotic value was considered the "true reflectance spectrum", and it was used to evaluate the differences in the estimated reflectance value obtained at the generic iteration n. These differences were spectrally dependent, as the position and the value of the maximum changed with the target, as shown in Figure 11a. This behaviour can be ascribed to the fact that the first iteration did not consider spectral differences between the generic pixel and its surroundings: the greater the spectral difference between the spectra, the greater the error in the retrieved spectral reflectance at the first iteration.
This effect quickly disappeared as the iteration number increased. Figure 11b shows that the value of the maximum of the differences plotted in Figure 11a fell starting from iteration 2 with a reduction of at least 80% with respect to step 0. After iteration 3, the estimated spectral reflectance reached a stable value. Furthermore, in Figure 11a, the differences in the visible spectral range were greater than the ones obtained in the near infrared. Such behaviour had also been observed for the simulated image. In this case also, atmospheric scattering increased the effects of the surrounding pixels' reflectance on the retrieved reflectance. The test on the real image confirmed what was noticed on the simulated dataset: the retrieved spectral reflectance rapidly converged for all the selected coverages.
Retrieved spectral reflectance was numerically compared with the "ground truth" reflectance spectra measured during the CHRIS acquisition. Specific areas within San Rossore Park were selected to avoid mixed pixels. The selection criteria listed in Section 4.2 aimed to guarantee that the selected area was uniform over at least three times the CHRIS GSD. It is worth mentioning that some portions of the San Rossore test site (such as sea shores and urban areas) showed spatial variability at scales finer than the desired value. Such areas were not selected for the comparison experiment. The results derived by the iterative procedure were compared with the ground truth for two different coverages: sand and vegetation. The maximum and mean values of the relative error reported in Figure 12c showed that for the "sand" pixel (Figure 12a) the error in the retrieved spectral reflectance decreased from an average value of 23% at iteration 0 to an average value of 6% for the asymptotic value. For the "vegetation" pixel (Figure 12b), the average relative error was equal to 11% at iteration 0 and 6% for the asymptotic value. The same behaviour was observed for the maximum error: the value decreased from 33% to 11% for sand, while for vegetation it fell from 23% to 13%. The mean error reached an asymptotic value equal to 6% for both kinds of coverage. Such a residual error can be mainly ascribed to the uncertainty in the atmospheric model: due to the lack of knowledge of some atmospheric parameters, a standard MODTRAN® MID LATITUDE SUMMER atmosphere was used for atmospheric correction. Figure 12d shows that the asymptotic value of the relative error for both pixels (a) and (b) was greater at wavelengths close to the absorption bands and in the visible region, confirming that the uncertainty in the atmospheric model was the main source of error in the retrieval of spectral reflectance.
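The mean and maximum relative errors quoted above can be computed along these lines; the numbers in the example are arbitrary, not the CHRIS results:

```python
import numpy as np

def relative_error(retrieved, truth):
    """Mean and maximum relative error (percent) between a retrieved
    reflectance spectrum and its ground truth."""
    rel = 100.0 * np.abs(retrieved - truth) / truth
    return float(rel.mean()), float(rel.max())

# Arbitrary illustrative spectra (three bands).
truth = np.array([0.20, 0.30, 0.40])
retrieved = np.array([0.21, 0.33, 0.38])

mean_err, max_err = relative_error(retrieved, truth)   # ~6.7%, ~10%
```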
Figure 12e,f provide an absolute estimate for the difference between the measured and the retrieved spectral reflectance. We noted that after the second iteration the curves in practice overlapped. The asymptotic value of the difference was greater in proximity to the atmospheric absorption bands, confirming that an important source of error was the atmospheric model and its parametrization.

Conclusions
The paper presented an atmospheric correction procedure based on the direct model implemented in the widely known radiative transfer code MODTRAN®, making direct use of the output of the MODTRAN software package starting from version 5.2.3.
The procedure demonstrated the ability to invert the direct model simulated by MODTRAN, iteratively determining the ground reflectance of each pixel of the image while separating the contribution of the surrounding pixels. The procedure was tested both on synthetic images (to take advantage of a simulated, i.e., controlled, environment and a fully known ground truth) and on real hyperspectral images with ground reflectance measurements. The proposed iterative procedure converged rapidly, after step 2, for both the simulated and the real case and for different ground reflectance spectra and patterns.
In the considered case of a real CHRIS image, the mean relative error in retrieved reflectance after the second iteration remained under 6%, with peaks of approximately 10%. A major source of error can be attributed to the RTM parametrization, in particular to the molecular absorption model, having a greater error around the atmospheric absorption bands. As a consequence, a better a priori knowledge of the gaseous components of the atmosphere could improve retrieval accuracy.
The procedure can also use, whenever available, alternative models to evaluate the surrounding pixels' contribution (e.g., an analytical relation as a function of the distance from the target pixel), thus allowing the use of different radiative transfer models.
The algorithm can also be embedded in more sophisticated atmospheric correction procedures, aiming to autonomously tune both the abundances of the main atmospheric components (gases and aerosol) and the ground spectral reflectance map.