MCSNet: A Radio Frequency Interference Suppression Network for Spaceborne SAR Images via Multi-Dimensional Feature Transform

Abstract: Spaceborne synthetic aperture radar (SAR) is a promising remote sensing technique, as it can produce high-resolution imagery over a wide surveillance area with all-weather and all-day capabilities. However, spaceborne SAR sensors may suffer severe radio frequency interference (RFI) from signals in nearby frequency bands, resulting in image quality degradation, blind spots, and target loss. To remove the RFI features presented in spaceborne SAR images, we propose a multi-dimensional calibration and suppression network (MCSNet) that exploits feature learning of both the spaceborne SAR image and the RFI. In the scheme, a joint model of the spaceborne SAR image and the RFI is established based on the relationship between the SAR echo and the scattering matrix. Then, to suppress the RFI presented in images, the main structure of MCSNet is built on a multi-dimensional and multi-channel strategy, in which a feature calibration module (FCM) is designed for global deep feature extraction. In addition, MCSNet repeatedly performs planned mapping on the feature maps under the supervision of the interfered SAR image, compensating for the discrepancies introduced during RFI suppression. Finally, a detail restoration module based on a residual network is conceived to maintain the scattering characteristics of the underlying scene in interfered SAR images. Experiments on simulated data and Sentinel-1 data, covering different landscapes and different forms of RFI, validate the effectiveness of the proposed method. The results demonstrate that MCSNet outperforms state-of-the-art methods and can greatly suppress RFI in spaceborne SAR.


Introduction
Synthetic aperture radar (SAR) is an advanced sensor [1] that supports all-weather, all-day operation [2]. SAR performs pulse compression on the returned echo signal and then applies imaging techniques to generate high-precision images of the target [3]. Spaceborne SAR operates at extremely high altitude and thus enables observation over a wide area [4]; it is widely applied in environmental monitoring [5], disaster warning [6], and geographic inversion [7]. However, the growing number of electromagnetic devices in space, on the ground, and at sea leads to overlapping use of the same spectrum, so the electromagnetic environment in which spaceborne SAR operates becomes increasingly harsh [8]. In such an environment, the echoes are easily affected by electromagnetic interference, commonly referred to as radio frequency interference (RFI). RFI appears as striped or blocky electromagnetic artifacts in SAR images, degrading image quality [9]. With high-power RFI, these artifacts can obscure entire images, which is detrimental to observation [10]. Therefore, to fully extract geographic information from the images, considerable effort is required to investigate RFI suppression approaches [11].
For high-power RFI features presented directly in the image, the methods described above lack effective countermeasures. To remove interference features presented in SAR images, we propose the Multi-dimensional Calibration and Suppression Network (MCSNet). The network targets Level 1-B spaceborne SAR data, which carry real-valued magnitude information. A brief summary of the contributions is given below:

1. A strategy for SAR image RFI suppression across multiple dimensions and multiple channels.

2. A module applied to the global structure, with functions for extracting deep image features and calibrating the mapping of feature maps, together with a novel supervised mechanism for calibrating image features while maintaining the fine-detail scattering characteristics of SAR images.

3. Experimental results showing that MCSNet provides effective RFI suppression.
The remainder of this paper consists of five sections. Section 2 introduces the SAR image model and the associated formulations. Section 3 explains the overall structure of MCSNet and the role each module plays. Section 4 provides the experimental procedure and results. Finally, Section 5 gives a concise summary.

SAR Image Model and Equations
The beam emitted by the SAR returns a scattered echo when it hits the region of interest. When the frequency of an interference source falls within the operating bandwidth of the SAR, the SAR system suffers interference [36], which can be expressed as

S_Y(t) = S_X(t) + S_I(t) + S_N(t),    (1)

where S_Y(t) denotes the mixed echo received by the SAR receiver, S_X(t) denotes the target echo formed by scattering from the imaging area, and S_I(t) denotes the interfering signal. In addition, S_N(t) denotes the system background noise and t denotes the range fast time. The geometric interpretation [37] of the above process is given in Figure 1. For the spaceborne SAR operating environment, S_I(t) can be considered RFI, which in general can be divided into frequency-modulation RFI (FM_RFI) and amplitude-modulation RFI (AM_RFI) [23]. FM_RFI with a large bandwidth can be expressed as

S_FM(t_f) = Σ_{m=1}^{M} A_m exp(jπ K_m t_f²),    (2)

where A_m and K_m denote the amplitude and modulation slope of the m-th FM_RFI component, M denotes the number of FM_RFI components, and t_f represents the range sampling moment. The bandwidth of AM_RFI is generally larger than the sampling interval of the SAR, and it can be expressed as

S_AM(t) = Σ_{m=1}^{M} A_m(t) exp(j2π f_m t),    (3)

where M denotes the number of AM_RFI components, A_m(t) denotes the time-varying amplitude, and f_m denotes the carrier frequency of the m-th component. As Equation (3) shows, the amplitude of AM_RFI varies with time, so some unintentional interference also exhibits amplitude-modulation characteristics.
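As a minimal illustration of the mixed-echo model above, the following NumPy sketch forms S_Y(t) from a target echo, a single chirp-type (FM) interferer, and complex Gaussian noise. The sampling rate, amplitude, and chirp rate used here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def mixed_echo(s_x: np.ndarray, fs: float, k_m: float, a_m: float,
               noise_std: float = 0.01, seed: int = 0) -> np.ndarray:
    """Form S_Y(t) = S_X(t) + S_I(t) + S_N(t) with a single chirp (FM)
    interferer of amplitude a_m and chirp rate k_m (Hz/s)."""
    rng = np.random.default_rng(seed)
    n = s_x.size
    t = np.arange(n) / fs                          # range fast time
    s_i = a_m * np.exp(1j * np.pi * k_m * t ** 2)  # FM RFI component
    s_n = noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return s_x + s_i + s_n

# Example: a unit-amplitude target chirp contaminated by a stronger interferer.
s_x = np.exp(1j * np.pi * 1e9 * (np.arange(1024) / 1e6) ** 2)
s_y = mixed_echo(s_x, fs=1e6, k_m=5e9, a_m=3.0)
```

Because the interferer here is three times the target amplitude, the mean power of the mixed echo is dominated by the RFI term, mirroring the high-power RFI regime discussed later.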
Based on the electromagnetic scattering relationship between the region of interest and S_Y(t), the final SAR image model is obtained by imaging the echo data [38]:

I_Y = H_imaging(S_Y(t)) = I_X + I_RFI + I_N,    (4)

where I_Y ∈ R^(H×W×3) denotes the image with interference and H_imaging denotes the imaging function. I_X ∈ R^(H×W×C) represents the target image without RFI. Here, we set H and W to 512, while C is set according to the training effect, usually 8 or a multiple of 8. Additionally, I_RFI ∈ R^(H×W×3) and I_N ∈ R^(H×W×3) denote the RFI image and the background noise image, respectively.

The Proposed Method
We aim to recover I_X from I_Y in Equation (4). In fact, I_X and I_RFI deviate from each other in both the information they carry and their generation mechanisms. We therefore treat them as independent and construct a convex optimization model [39,40] with an l2-norm constraint [41]:

I_opt = arg min_{I_X} ||I_Y − I_X − I_RFI||_2²,    (5)

where I_opt ∈ R^(H×W×3) represents the ideal output image of our algorithm. Based on Equation (5), we construct an end-to-end network, MCSNet, to obtain the optimal I_X; its structure is shown in Figure 2, where I_Y ∈ R^(512×512×3) and I_X ∈ R^(512×512×3) denote the input and output images of MCSNet. Top_En and Bot_En indicate the top and bottom encoders, respectively, and MSN denotes the multi-dimensional suppression network. MCSNet splits the image into top and bottom parts for training. The architecture is mainly composed of the following components: (1) the Feature Calibration Module (FCM), designed with attention mechanisms to capture the global information of the network; (2) the Multi-dimensional Suppression Network (MSN), designed for interference suppression, which mainly consists of three parts (the top encoder, the bottom encoder, and the decoder) with feature interaction among them; (3) the Image Calibration Network (ICN), which uses the input image to calibrate the feature maps output by the MSN and preserve valuable information; and (4) the Residual Restoration Module (RRM), which exchanges features with the MSN and ultimately outputs high-resolution images without interference.

FCM
In SAR image processing tasks, we need a module that extracts features while also calibrating them. Therefore, to preserve the feature mapping, the FCM is designed to acquire image information over multiple channels. The architecture of the FCM is shown in Figure 3, where g_1 ∈ R^(H×W×C) and g_7 ∈ R^(H×W×C) denote the input and output feature maps of the FCM, and ACM denotes the Attention Correction Module, the main component of the FCM. First, we employ 1×1 CONVs to perform a multi-channel transformation on g_1, characterizing as much image information as possible:

g_2 = H_PReLu(conv_1×1(g_1, w_1)), g_3 = H_PReLu(conv_1×1(g_2, w_2)),    (6)

where conv_1×1(*, w_i) indicates a convolution with kernel size 1 and weights w_i, and g_2 ∈ R^(H×W×C/4) and g_3 ∈ R^(H×W×C) represent the output feature maps of the corresponding CONV operations. H_PReLu(*) denotes the PReLU activation function [42], which adaptively corrects the linear unit parameters.
The ACM enhances the representation of the feature maps, with g_3 and g_6 ∈ R^(H×W×C) as its input and output. We first acquire the mean and standard deviation of g_3 and then integrate them:

g_4 = H_cat(H_std(g_3), H_Mean(g_3)),    (7)

where H_std(*) and H_Mean(*) denote standard-deviation pooling and mean pooling [43], respectively, and H_cat(*, *) denotes CAT. In addition, g_4 ∈ R^(1×1×C) denotes the joint pooling feature. Afterward, a two-dimensional CONV is employed for the multi-channel transformation of g_4 to produce g_5 ∈ R^(1×1×C), similar to Equation (6). After activation by the Sigmoid function, g_5 interacts with g_3 to produce the attention feature g_6 ∈ R^(H×W×C):

g_6 = H_Sigmoid(g_5) ⊗ g_3,    (8)

where H_Sigmoid(*) denotes the Sigmoid activation function and ⊗ denotes element-wise multiplication. Finally, we fuse g_1 and g_6 to obtain the output of the FCM:

g_7 = g_1 ⊕ g_6,    (9)

where ⊕ represents element-wise addition.
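The gating step of the ACM can be sketched in NumPy as follows. This is a simplified stand-in, not the exact module: the paper's pair of 1×1 CONVs on the pooled feature is replaced here by a single dense channel transform with hypothetical weights `w` and bias `b`.

```python
import numpy as np

def acm(g3: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Attention Correction Module sketch: joint std/mean pooling,
    a channel transform, Sigmoid gating of the input feature map.
    g3: (H, W, C) feature map; w: (2C, C) weights; b: (C,) bias."""
    mean_pool = g3.mean(axis=(0, 1))            # H_Mean -> (C,)
    std_pool = g3.std(axis=(0, 1))              # H_std  -> (C,)
    g4 = np.concatenate([std_pool, mean_pool])  # H_cat  -> (2C,)
    g5 = g4 @ w + b                             # channel transform -> (C,)
    gate = 1.0 / (1.0 + np.exp(-g5))            # Sigmoid
    return g3 * gate                            # broadcast gate over H, W
```

With zero weights the Sigmoid outputs 0.5 for every channel, so the module halves the feature map; trained weights would instead emphasize informative channels.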

MSN
The MSN takes the top encoder, bottom encoder, and decoder as its main structure and is mainly used for removing interference features from the SAR image. The overall structure of the MSN is shown in Figure 2. To reduce the amount of computation and remove RFI features more effectively, we divide the image into two parts for processing [44], I_Y_Top ∈ R^(H/2×W×3) and I_Y_Bot ∈ R^(H/2×W×3). Figure 4 gives the construction of the top encoder, bottom encoder, and decoder, where DAM stands for the Downsampling Attention Module, which consists of multiple FCMs connected in series and a bilinear downsampling module. Correspondingly, UAM stands for the Upsampling Attention Module, which consists of multiple FCMs connected in series and a bilinear upsampling module. The encoder increases the number of channels while continuously compressing the spatial dimensions of the input feature maps.
…, and J_E4 ∈ R^(H/8×W/4×(C+2×C_a)) are the feature maps output by the successive stages of the encoder, where C_a ∈ R indicates a fixed increase in the number of channels. The decoder reduces the number of channels while restoring the feature maps to the original dimensions. Among them, J_D1 ∈ R^(H×W×C), J_D2 ∈ R^(H×W×C), J_D3 ∈ R^(H/2×W/2×(C+C_a)), and J_D4 ∈ R^(H/4×W/4×(C+2×C_a)) are the feature maps output by each stage of the decoder. This multi-dimensional transformation of the feature maps generates more contextual information, helping the model learn better. In addition, the multi-dimensional and multi-channel squeezing makes it easier to filter out interference features.
In the MSN framework in Figure 2, the red dashed lines indicate the transfer feature stream. This is an attention protection mechanism that prevents useful information from being lost during dimensional transformations. Referring to Figure 4, assume the stage features of the top encoder are {J_TE1, J_TE2, J_TE3, J_TE4} and, similarly, those of the bottom encoder are {J_BE1, J_BE2, J_BE3, J_BE4}. The attention protection mechanism can then be expressed as

J_Eni = H_cat(J_TEi, J_BEi), i = 1, 2, 3, 4,    (10)

where J_Eni denotes the combined output features of the encoder at the i-th stage.
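The per-stage combination of top- and bottom-encoder features can be sketched as below. The concatenation axis is an assumption: since the input image is split into top and bottom halves, the features are rejoined here along the height axis, though a channel-wise CAT would be coded analogously.

```python
import numpy as np

def combine_stage(j_te: np.ndarray, j_be: np.ndarray) -> np.ndarray:
    """Attention protection sketch: J_Eni = cat(J_TEi, J_BEi),
    concatenating one stage's top and bottom features along height."""
    if j_te.shape[1:] != j_be.shape[1:]:
        raise ValueError("stage features must share width and channels")
    return np.concatenate([j_te, j_be], axis=0)
```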

ICN
The MSN filters out interference features; however, the dimensional transformations of the feature maps cause the loss of useful information and introduce discrepancies, so we design the ICN to calibrate the feature information. The overall structure of the ICN is shown in Figure 5, where K_1 ∈ R^(H×W×C) and K_6 ∈ R^(H×W×C) are the input and output of the ICN. We first apply a 1×1 CONV to K_1 and fuse the result with I_Y to generate K_2 ∈ R^(512×512×3):

K_2 = conv_1×1(K_1, w) ⊕ I_Y.    (11)

Next, we adopt the FCM to extract useful information from K_2 and generate K_4 ∈ R^(512×512×C) carrying I_Y features:

K_4 = H_FCM(K_2),    (12)

where H_FCM(*) denotes the FCM response function. The gated channel transformation block (GCTB) [45] can comprehensively learn the relationships between channels and convey valid information, so we adopt the GCTB to collect the channel information of K_1:

K_3 = H_GCTB(K_1),    (13)

where K_3 ∈ R^(512×512×C) denotes the channel-wise feature and H_GCTB(*) denotes the GCTB response function. Afterward, we perform attention interaction between K_4 and K_3 to produce K_5 ∈ R^(512×512×C), and finally fuse K_5 with K_1 to obtain K_6 ∈ R^(512×512×C):

K_5 = K_4 ⊗ K_3, K_6 = K_5 ⊕ K_1.    (14)

RRM
The RRM is designed to further refine the output feature maps of the ICN and generate fine details for image restoration. Figure 6 gives the structure of the RRM, which resembles a residual network. P_1 ∈ R^(H×W×C) and I_X ∈ R^(H×W×3) are the input and output of the RRM. Additionally, CEU denotes the feature extraction unit, a series combination of the GCTB and the FCM. First, P_1 passes through a series of CEUs to produce the abundant feature P_2 ∈ R^(H×W×C); then P_2 and P_1 are fused to produce P_3 ∈ R^(H×W×C):

P_2 = H_CEU(…(H_CEU(P_1))), P_3 = P_2 ⊕ P_1,    (15)

where H_CEU(*) denotes the CEU response function. After a 1×1 CONV on P_3, the final restored ground image I_X is obtained under the supervision of I_Y:

I_X = conv_1×1(P_3, w) ⊕ I_Y.    (16)

Experiments and Results
In this section, we elaborate on the experiments and present the final results. First, the dataset composition, the loss function, and the evaluation metrics are described. Then, qualitative and quantitative results on simulated data and measured Sentinel-1 data are given in comparison with other state-of-the-art algorithms.

Dataset
There is currently a lack of end-to-end data in the field of SAR jamming. Therefore, based on Equation (4), we conduct interference experiments to construct the simulated dataset. In addition, the Sentinel-1 satellite carries a C-band SAR, which provides continuous all-weather images. We take advantage of the revisit periodicity of Sentinel-1 to construct the measured dataset. Typically, the interfered region is very small relative to the entire Sentinel-1 scene; to reduce the computational effort and improve the processing efficiency of the method, we crop the Sentinel-1 images. In our experiments, we combine 1600 pairs of simulated images and 400 pairs of measured images into one training set X_i = {Iin_i, Icl_i}, i = 1, 2, …, 2000, where Iin_i and Icl_i denote SAR images with and without interference, respectively.

Loss Function
Typically, the l2 loss function makes the output image too smooth, which is not suitable for our task. To make training converge and obtain fine images, we adopt the Charbonnier loss as the main term, approximating the l1 loss, to enhance the performance of MCSNet. The entire loss function can be expressed as

L_S = Char(I_X, Icl) + µ · Char(∇I_X, ∇Icl),    (17)

where I_X ∈ R^(512×512×3) denotes the image predicted by MCSNet and Icl ∈ R^(512×512×3) denotes the clean image. µ denotes the weight coefficient, L_S denotes the value of the loss function, and ∇(*) denotes the gradient operator. In addition, Char(*, *) denotes the Charbonnier loss, which can be further expressed as

Char(A, B) = sqrt(||A − B||² + ε²),    (18)

where A ∈ R^(H×W×C) and B ∈ R^(H×W×C) denote tensors and ε denotes the penalty factor.
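The loss above can be sketched in NumPy as follows. Two details are assumptions rather than the paper's exact choices: the per-pixel Charbonnier penalty is averaged over the image, and the gradient term uses a first-order finite-difference magnitude; the weight `mu` and penalty `eps` are illustrative values.

```python
import numpy as np

def charbonnier(a: np.ndarray, b: np.ndarray, eps: float = 1e-3) -> float:
    """Charbonnier loss: a smooth approximation of the l1 distance."""
    return float(np.mean(np.sqrt((a - b) ** 2 + eps ** 2)))

def grad(img: np.ndarray) -> np.ndarray:
    """Finite-difference gradient magnitude (edge-preservation term)."""
    gy = np.diff(img, axis=0, append=img[-1:])
    gx = np.diff(img, axis=1, append=img[:, -1:])
    return np.sqrt(gx ** 2 + gy ** 2)

def loss(pred: np.ndarray, clean: np.ndarray, mu: float = 0.05) -> float:
    """L_S = Char(I_X, I_cl) + mu * Char(grad I_X, grad I_cl)."""
    return charbonnier(pred, clean) + mu * charbonnier(grad(pred), grad(clean))
```

Unlike the l2 loss, the sqrt keeps the penalty close to |a − b| for large errors, which avoids over-smoothing while `eps` keeps the gradient well defined at zero.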

Assessment Indicators
To compare the quantitative results of the different methods, the difference in Equivalent Number of Looks (∆ENL) [34,46], the Structural Similarity (SSIM) [47], and the Peak Signal-to-Noise Ratio (PSNR) [48] are selected as assessment indicators.
Typically, the ENL is employed for grey-scale statistics of SAR images:

ENL = µ_IX² / σ_IX²,    (19)

where ENL indicates the Equivalent Number of Looks and µ_IX and σ_IX denote the mean and standard deviation of I_X. In the field of SAR anti-interference, ∆ENL reflects the closeness between the image after interference suppression and the clean image:

∆ENL = |ENL_IX − ENL_Icl|,    (20)

where ENL_IX and ENL_Icl denote the ENL values of I_X and Icl. As explained above, a lower ∆ENL value indicates better interference suppression performance. We employ the SSIM to measure the similarity between SAR images:

SSIM(Icl, y) = (2µ_x µ_y + C_1)(2σ_xy + C_2) / ((µ_x² + µ_y² + C_1)(σ_x² + σ_y² + C_2)),    (21)

where Icl ∈ R^(H×W×3) denotes the clean image and y ∈ R^(H×W×3) denotes the image to be measured. µ_x and µ_y denote the pixel means of Icl and y, σ_x² and σ_y² their pixel variances, and σ_xy the covariance between Icl and y. Additionally, C_1 = (K_1 L)² and C_2 = (K_2 L)², where K_1 and K_2 default to 0.01 and 0.03, respectively, and L denotes the pixel range of the SAR image.
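The ENL and ∆ENL metrics above reduce to a few lines of NumPy; this sketch assumes the statistics are computed over the whole image (the paper may compute them over homogeneous patches).

```python
import numpy as np

def enl(img: np.ndarray) -> float:
    """Equivalent number of looks: mean^2 / variance of the image."""
    return float(img.mean() ** 2 / img.var())

def delta_enl(restored: np.ndarray, clean: np.ndarray) -> float:
    """|ENL(I_X) - ENL(I_cl)|: lower means the restored image's
    grey-level statistics are closer to the clean reference."""
    return abs(enl(restored) - enl(clean))
```

A perfectly restored image yields ∆ENL = 0, while residual RFI inflates the variance and drives ∆ENL up.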
The PSNR is a widely used objective measure for assessing the quality of SAR images:

PSNR(Icl, y) = 10 · log10(MAX_y² / MSE(Icl, y)),    (22)

where MAX_y denotes the maximum pixel value of y and MSE(Icl, y) denotes the mean squared error between Icl and y. Typically, higher PSNR and SSIM values indicate better image quality.
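A direct NumPy implementation of the PSNR definition above, assuming 8-bit amplitude images (peak value 255):

```python
import numpy as np

def psnr(clean: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB against a clean reference."""
    mse = float(np.mean((clean.astype(np.float64) -
                         test.astype(np.float64)) ** 2))
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```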

Simulation Data Results
We first verify the effectiveness of the proposed method using simulated data. The simulated data mainly contain three types of interference [49]: squelching interference (SI), multi-point frequency-shifting interference (MFSI), and RFI. The implementation of RFI has been described in Equations (2) and (3). The principle of MFSI is to generate a range-oriented delay after SAR matched filtering, where f_d denotes the amount of frequency shift and φ_f denotes a random phase, which makes the phase between pulses incoherent and thus produces line-like interference. The principle of SI builds on MFSI with a frequency shift that increases over the synthetic aperture time, where Q denotes a positive integer that varies with azimuth time, f_bd denotes the fixed frequency-shift increment, and F_SD denotes the total amount of frequency shift.
To show the superiority of MCSNet, we compare it with excellent denoising algorithms from the vision field, RESCAN [32] and SPANet [33], which are commonly used in tasks such as de-raining, de-fogging, and de-blurring.

Visual Results
The visual results on the simulated data are shown in Figure 7. Both the input interfered images and the clean images without interference are simulated. The first column shows the input interfered images, where (a) indicates SI, (b) indicates MFSI, and (c) indicates RFI. The second column shows the corresponding clean images without interference, and the third to fifth columns show the results of each method. The overall performance of SPANet [33] is not satisfactory, with many residual interference textures remaining in its results. RESCAN [32] performs well on (a) and (c), but for (b) the interference features are not completely removed. The results of our method meet visual expectations and are all close to the clean images.

Closeness between Results and Clean Images
Naked-eye observation is not sufficient to demonstrate the performance of our method. Therefore, based on Equation (20), we adopt ∆ENL to check the ability of each method to maintain the scattering characteristics of SAR images. The corresponding results are given in Table 1, from which it can be seen that for (a), (b), and (c), our method achieves the lowest ∆ENL values. SSIM and PSNR are then employed to evaluate the image quality on the simulated data (Table 2). Our method achieves the highest PSNR and SSIM values in all cases; for (a), our result holds a PSNR of 30.0051 and an SSIM of 0.9962, much higher than the other methods. In short, on simulated data, MCSNet can suppress RFI while yielding high-resolution SAR images.

Measured Data Results
Our measured data come from the European Space Agency's Sentinel-1A satellite, operating in C-band. The data are of the Ground Range Detected (GRD) type, with VV or VH polarization. In general, GRD data mainly contain real-valued amplitude information reflecting the scattering intensity of the region. To cover the various forms of RFI in SAR images and different landscapes, several typical cases have been selected as measured data for qualitative and quantitative analysis. The geographical locations of these measured data are presented in Figure 8, where the red boxes indicate the areas from which the interfered SAR images come.

Closeness between Results and Clean Images
Qualitative results from visual observation are not sufficient to judge the merits of the methods, so we further compare the measured data quantitatively. As mentioned above, ∆ENL measures the closeness between the images after interference suppression and the clean images. The comparison results for ∆ENL are given in Table 3. For (a) through (f), our method achieves the lowest ∆ENL values, and for (b) it even reaches 0.0024, indicating that the results of MCSNet are closest to the clean images and that the scattering characteristics of the original image are preserved. The convolution and scaling operations in the network inevitably cause some distortion to the image. Therefore, we adopt the SSIM and PSNR as criteria to evaluate the image quality of each method, taking the clean images as references; the corresponding results are given in Table 4. It can be observed that for every scene our method achieves the highest PSNR and SSIM. In particular, for (b), MCSNet reaches a PSNR of 30.9641 and an SSIM of 0.9896, much higher than the other methods. The overall results reflect that our method conserves the useful information and details of the original image. Typically, the grayscale statistics of SAR images reflect the intensity of the scattering coefficients of ground objects [50]; two images with similar gray-value magnitudes have similar scattering intensities. The results of the scattering characteristics comparison are shown in Figure 12. To concretely exhibit the capability of each method to preserve scattering characteristics, we select (a) and (b) in Figure 9 as test data. In Figure 12, the orange horizontal lines indicate the selected gray-value profiles. The trajectories of the blue and red curves approximately coincide, indicating that our results have scattering properties similar to those of the clean images, achieving the best performance.

Conclusions
To address the observational impact caused by multiple forms of RFI in spaceborne SAR operation, this paper proposes a highly adaptive Multi-dimensional Calibration and Suppression Network (MCSNet), which operates on real-valued data. Guided by the SAR image model, the input image is split into two parts for processing. First, the FCM is designed to capture global information. In addition, the Multi-dimensional Suppression Network is designed to suppress RFI at multi-channel and multi-scale levels. Next, a valid method is proposed that applies the input image as a reference to correct the features in the network and preserve valid information. Finally, a residual module with a channel attention mechanism is proposed to restore fine image details, yielding high-resolution images without RFI. Experiments on both simulated data and measured Sentinel-1 data verify the effectiveness of the proposed method. In comparison with state-of-the-art denoising methods from the field of computer vision, our method achieves the best results both qualitatively and quantitatively, demonstrating its specific capability for RFI suppression in spaceborne SAR.

Discussion
For SAR images with real-valued information, our method indeed makes a difference. However, when interference features completely overwhelm the whole image, the problem must be addressed using the complex-valued information of the SAR echo data and imaging results. Therefore, to make this interference suppression idea more widely applicable, in future research we will consider designing networks that can handle complex-valued information.

Figure 1 .
Figure 1. Geometric interpretation of interference to spaceborne SAR.

Figure 2 .
Figure 2. The overall structure of the Multi-dimensional Calibration and Suppression Network (MCSNet). I_Y and I_X denote the input image with interference and the output result of MCSNet, respectively. The black arrows indicate the transfer information stream and the red dashed lines indicate the transfer feature stream. CONV and CAT denote convolution and concatenation operations, respectively.

Figure 3 .
Figure 3. The overall structure of the FCM. The black arrows indicate the transfer information stream. ⊕ and ⊗ denote feature fusion and element-wise multiplication, respectively.

Figure 4 .
Figure 4. The overall structure of the top encoder, bottom encoder, and decoder. (a) The structure of the top and bottom encoders; (b) the structure of the decoder. The black arrows indicate the transfer information stream. DAM indicates the Downsampling Attention Module and UAM the Upsampling Attention Module.

Figure 5 .
Figure 5. The overall structure of the ICN. The black arrows indicate the transfer information stream. ⊕ and ⊗ denote feature fusion and element-wise multiplication, respectively.

Figure 6 .
Figure 6. The overall structure of the RRM. The black arrows indicate the transfer information stream and ⊕ denotes feature fusion.

Figure 7 .
Figure 7. Visual results on simulated data. The first column gives the interfered SAR images and the second column the corresponding ground truth. The third to fifth columns give the results of the different methods. (a–c) indicate the different types of interference.

Figure 8 .
Figure 8. Geographical location description of the measured data: (I) the Korean Peninsula region; (II) the sea and islands off Nagasaki, Japan; (III) Astrakhan, Russia; (IV) Krasnodar Krai, Russia. The red boxes indicate the interfered areas, with a unit length of 100 km.

Visual Results

The visual results of the evaluation images are given in Figure 9, where the first column shows the input interfered images. The second column shows the corresponding images without interference; these are clean images acquired at the same place at different times, exploiting the revisit periodicity of Sentinel-1A. The third and fourth columns show the results of RESCAN and SPANet, and the last column shows the results of MCSNet. Scene (a) corresponds to Figure 8(I) and was acquired by Sentinel-1 over the Korean Peninsula on 29 March 2021; a clear white ripple-like interference appears over the hilly terrain. Scene (b) corresponds to Figure 8(II) and was obtained on 12 February 2022 near Nagasaki, Japan; white striped RFI spans the island and the sea, causing some visual obstruction. Scenes (c), (d), and (e) correspond to Figure 8(III) and were acquired on 5 September 2021 in the Astrakhan region of Russia; white block-like and stripe-like RFI overlays the images, blurring some geographical features. Scene (f) corresponds to Figure 8(IV) and was acquired on 10 July 2021 in the Russian Volga estuary region, where high-power RFI covers a large area, making the geographic information invisible. The results of RESCAN are not satisfactory: for (a) through (f), RFI features remain in the images. The results of SPANet are also unsatisfactory, failing to locate and completely remove the RFI. In contrast, MCSNet effectively suppresses the interference in every image, and our results are visually close to the clean images, reflecting the adaptability of MCSNet to different landscapes and different forms of RFI.

Figure 9.
Figure 9. Visual results of measured data. The first column gives the interfered SAR images and the second column the corresponding ground truth. The third to fifth columns give the results of the different methods. (a–f) indicate the different scenes from Figure 8. For (a) and (c), enlarged regions of interest are shown in Figures 10 and 11.

Figure 10.
Figure 10. The region of interest for enlarged display in (a) of Figure 9.

Figure 11 .
Figure 11. The region of interest for enlarged display in (c) of Figure 9.

Figure 12 .
Figure 12. Scattering characteristics analysis. The orange horizontal lines indicate the gray-value profiles. The red curves indicate the scattering analysis of our results and the blue curves that of the clean images.

Table 1 .
Comparisons of ∆ENL for simulation data.

Table 2 .
Comparisons of SSIM and PSNR for simulation data.

Table 3 .
Comparisons of ∆ENL for measured data.

Table 4 .
Comparisons of SSIM and PSNR for measured data.