Article

Quantitative Assessment of the Computing Performance for the Parallel Implementation of a Time-Domain Airborne SAR Raw Data Focusing Procedure

1
Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council (CNR), 80124 Napoli, Italy
2
Instituto de Capacitación Especial y Desarrollo de Ingeniería Asistida por Computadora (CEDIAC), Facultad de Ingeniería (FI), Universidad Nacional de Cuyo (UNCUYO), Mendoza 5502, Argentina
3
Department of Engineering (DI), University “Parthenope”, 80143 Napoli, Italy
*
Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 221; https://doi.org/10.3390/rs18020221
Submission received: 4 December 2025 / Revised: 2 January 2026 / Accepted: 7 January 2026 / Published: 9 January 2026

Highlights

What are the main findings?
  • Two implementation strategies for Time-Domain (TD) airborne Synthetic Aperture Radar (SAR) focusing are presented and quantitatively compared using real airborne SAR data. The pixel-wise strategy consistently outperforms the matrix-wise strategy by 19–36% in computing time, with the performance gap widening for larger output grids, while both methods produce identical focused images.
  • For both proposed implementation strategies, TD focusing can be substantially accelerated through the proper application of parallel processing techniques. Depending on the available hardware, significant speedup factors can be achieved: with the considered Information Technology (IT) platform, up to 177×. For instance, for a SAR raw data size of 4000 × 200,000 samples, covering an area of approximately 3 km in slant range and 8 km in azimuth, applying the pixel-wise TD focusing at a 30 cm azimuth resolution over an output grid of 4000 × 32,000 pixels requires 1560 min of computing time with a single job of the IT platform available at IREA-CNR. Exploiting 256 jobs on the same platform reduces this computing time to just 8 min.
What are the implications of the main findings?
  • Time-Domain SAR focusing becomes operationally viable for near-real-time airborne monitoring and emergency response applications, with processing times reduced to minutes for large-scale datasets when appropriate parallelization strategies are applied.
  • The identified scaling behavior and efficiency limits allow for predicting processing times and optimization of resource allocation in operational SAR scenarios. Since computing efficiency deteriorates beyond specific parallelization thresholds, the presented results allow establishing practical criteria for balancing performance improvements against costs/inefficiencies related to parallelization.

Abstract

In this work, different implementation strategies for a Time-Domain (TD) focusing procedure applied to airborne Synthetic Aperture Radar (SAR) raw data are presented, with the key objective of quantitatively assessing their computing time. In particular, two methodological approaches are proposed: a pixel-wise strategy, which processes each image pixel independently, and a matrix-wise strategy, which handles data blocks collectively. Both strategies are further extended to parallel execution frameworks to exploit multi-threading and multi-node capabilities. The presented analysis is conducted within the context of the airborne SAR infrastructure developed at the Institute for Electromagnetic Sensing of the Environment (IREA) of the National Research Council (CNR) in Naples, Italy. This infrastructure integrates an airborne SAR sensor and a high-performance Information Technology (IT) platform well-tailored to the parallel processing of huge amounts of data. Experimental results indicate an advantage of the pixel-wise strategy over the matrix-wise counterpart in terms of computing time. Furthermore, the adoption of parallel processing techniques yields substantial speedups, highlighting its relevance for time-critical SAR applications. These findings are particularly relevant in operational scenarios that demand a rapid data turnaround, such as near-real-time airborne monitoring in emergency response contexts.

Graphical Abstract

1. Introduction

Synthetic Aperture Radar (SAR) systems are advanced microwave imaging sensors installed on mobile platforms—ground-based, airborne, or spaceborne—that synthesize large antenna apertures to achieve high-resolution imaging [1]. Among these, airborne SAR platforms—such as those mounted on airplanes, helicopters, or drones—offer high operational flexibility and rapid deployment capabilities, enabling great responsiveness and short revisit times [2]. Moreover, their closer proximity to the observed scene allows the use of smaller antennas, thus ensuring high azimuth resolution [1]. All these features make airborne SAR systems particularly suitable for rapid-response applications over relatively small areas and a variety of operational scenarios [2].
The focusing process of airborne SAR data must account for the so-called motion errors, which are due to the platform attitude instabilities and deviations from the nominal flight path. Accurate SAR focusing requires precise knowledge of the phase center [3] position and pointing direction of the radar antenna during the acquisition, as well as the availability of a Digital Elevation Model (DEM) of the observed scene [4,5]. Airborne SAR focusing can be implemented in the Frequency Domain (FD) [4,5] or in the Time Domain (TD) [6,7,8,9]. FD algorithms are computationally efficient but rely on approximations [6,7,8,9] that, in some cases, may impair the quality of the final Single-Look Complex (SLC) images. In contrast, TD algorithms avoid such approximations; therefore, compared to their FD counterparts, they generally yield more accurate SLC images at the expense, however, of a significantly higher computing time.
In this context, the present work addresses the numerical implementation of the TD SAR raw data focusing algorithm presented in [9]. Two different implementation strategies—namely, pixel-wise and matrix-wise—are proposed and assessed in terms of the involved computing time. Parallel implementation of such strategies is addressed as well. The study is conducted using a dedicated airborne SAR infrastructure [10] operated by the Institute for Electromagnetic Sensing of the Environment (IREA) of the National Research Council (CNR) in Naples, Italy. This infrastructure consists of a SAR sensor named MIPS [10] and a multi-node, multi-threaded Information Technology (IT) platform designed to support high-performance parallel processing of huge amounts of SAR data. In this framework, the proposed strategies are implemented in Interactive Data Language (IDL), Version 8.2 (linux x86_64 m64); therefore, the reported performance results are specific to IDL and to the IREA-CNR IT platform. However, the proposed approaches are not tied to any specific IT platform and can be implemented in other programming languages and computing environments.
The work is organized as follows. In Section 2, the rationale behind the TD airborne SAR focusing procedure presented in [9] is recalled. Then, the proposed pixel-wise and matrix-wise implementation strategies are described. Finally, the parallel implementation of both strategies, aimed at reducing the computing time, is presented. Section 3 shows the results obtained by applying these two different strategies to a specific dataset acquired and processed using the IREA-CNR infrastructure. A discussion of the results is provided in Section 4, followed by concluding remarks reported in Section 5.

2. Materials and Methods

In this section, we first recall the main rationale of the Time-Domain (TD) airborne SAR raw data focusing procedure presented in [9]. Then, we describe the two different strategies that we consider in this work for its numerical implementation. Note also that the considered signals are assumed hereafter to be range-focused [1,11].

2.1. Rationale of the TD Airborne SAR Focusing Procedure

Let us consider an airborne SAR acquisition geometry, where the position of the antenna Phase Center (PC) at the instant $t_n$ and of a generic target to be focused are represented by the vectors $\mathbf{A}_n$ and $\mathbf{T}$, respectively. Let us then consider the vector $\mathbf{R}_n^T$ representing the position of the target with respect to the antenna PC position at the time $t_n$ [9]:
$$\mathbf{R}_n^T = \mathbf{T} - \mathbf{A}_n \tag{1}$$
The Time-Domain focusing procedure basically involves the following coherent summation for each target $\mathbf{T}$:
$$f(\mathbf{T}) = \sum_{n = k_T - K_T/2}^{k_T + K_T/2} h(n, m_n)\, \exp\!\left(-j \frac{4\pi}{\lambda} R_n^T\right) \omega\!\left(n, R_n^T\right) \tag{2}$$
wherein
$$R_n^T = \left\| \mathbf{R}_n^T \right\| \tag{3}$$
represents the antenna-to-target distance at the time $t_n$, and
  • $h(\cdot,\cdot)$ is the considered range-focused SAR signal [11] stored in the $N \times M$ matrix $\underline{\underline{h}}$;
  • $N$ and $M$ are the total number of azimuth and range samples, respectively, of the considered range-focused SAR signal;
  • $n \in \{1, 2, \ldots, N\}$ are the azimuth indices of the range-focused SAR signal; they are thus related to the so-called SAR slow time $t_n$;
  • $m \in \{1, 2, \ldots, M\}$ are the range indices of the range-focused SAR signal; they are thus related to the so-called SAR fast time;
  • $m_n = m_n(R_n^T)$ is the range index corresponding to the antenna-to-target distance $R_n^T$;
  • $\omega(\cdot)$ is the weighting function that compensates for signal attenuation effects;
  • $k_T$ and $K_T$ are the center azimuth index and the length (in pixels) of the synthetic aperture, respectively, relevant to the target $\mathbf{T}$.
As detailed in [9], for each target to be focused, the index $k_T$ accounts for the variations in the antenna PC position and pointing direction throughout the radar data acquisition process. Accordingly, the computation of $k_T$ depends on the changes in both the attitude and position of the airborne platform during the data collection. Meanwhile, $K_T$ depends on the required azimuth resolution and, therefore, on the antenna-to-target distance [11]. Overall, in (2), both $k_T$ and $K_T$ are target-dependent indices.
For further details on the main rationale of the TD airborne SAR focusing, the reader can refer to [9].

2.2. Implementation of the TD Airborne SAR Focusing Procedure

To compute the summation in (2), it is mandatory to define the desired azimuth resolution as well as the geometric grid (named output grid) in which the targets will be focused. This leads to an output grid of dimensions $I \times L$ that will include all the focused targets. Over this output grid, the following data matrices are generated (see Figure 1):
  • $\underline{\underline{F}}$ is an $I \times L$ matrix that stores all the targets to be focused;
  • $\underline{\underline{k}}$ and $\underline{\underline{K}}$ are $I \times L$ matrices that contain the quantities $k_T$ and $K_T$, respectively, relevant to the targets of the considered output grid;
  • an external Digital Elevation Model (DEM) specifies, in a defined coordinate system, the ground position of each target of the considered output grid. Specifically, the DEM comprises three $I \times L$ matrices ($\underline{\underline{D}}_x$, $\underline{\underline{D}}_y$, $\underline{\underline{D}}_z$ in Figure 1); in a geocentric Cartesian system (just to quote an example), these three matrices store the x, y, and z coordinates of the ground positions of the targets.
Note that all the above-mentioned data matrices have $I \times L$ dimensions matching the output grid. Additionally, the antenna PC positions from the onboard navigation system (GNSS), expressed in the same coordinate system as the DEM, are recorded into three vectors ($\underline{P}_x$, $\underline{P}_y$, $\underline{P}_z$ in Figure 1). Each vector comprises $N$ samples, matching the azimuth extent of the range-focused SAR signal $h(\cdot,\cdot)$.
In general, the raw data focusing procedure outlined in [9] requires evaluating the antenna-to-target distance $R_n^T$ using the external DEM of the surveyed area, along with the flight data from the GNSS. Based on the output grid and the number of synthetic aperture samples $K_T$ required to achieve the desired azimuth resolution, the navigation data are then employed to determine the segment of the flight path where the radar antenna illuminates the target around the direction of maximum radiation. Finally, the summation in (2) can be computed for each target.
Within this framework, two distinct strategies—referred to as pixel-wise and matrix-wise, respectively—are proposed in the following to compute the summation in (2) for all targets in the output grid.

2.2.1. Pixel-Wise Strategy

In the pixel-wise approach, the focusing procedure operates on a pixel-by-pixel basis: each target is processed separately by aggregating all relevant SAR data samples of $\underline{\underline{h}}$ in the summation defined in (2).
To better clarify this process, let us consider Figure 2, which illustrates the focusing procedure for a generic target, whose location in the $\underline{\underline{F}}$ matrix is highlighted in blue in the figure. The following quantities are needed to focus this target:
  • the target position, recorded in the three matrices $\underline{\underline{D}}_x$, $\underline{\underline{D}}_y$, and $\underline{\underline{D}}_z$ in correspondence to the yellow cell in Figure 2, which matches the blue cell of the matrix $\underline{\underline{F}}$;
  • the synthetic aperture center index, pointing to the dark green cell of the GNSS vectors in Figure 2;
  • the synthetic aperture length, which allows selecting the light green cells, around the dark green ones, of the GNSS vectors in Figure 2.
Once these quantities are properly selected in the matrices/vectors mentioned above, the summation bounds in (2), that is, the azimuth indices $n$, are thereby determined, identifying the azimuth positions of the samples in the matrix $\underline{\underline{h}}$ that must be coherently combined to focus the considered target (light-green-highlighted region of the matrix $\underline{\underline{h}}$ in Figure 2). Subsequently, the distances between the selected target and all the antenna positions along the radar acquisition path are computed via (1) and (3) using the data selected in the GNSS vectors and DEM matrices. As pictorially depicted in Figure 2, these distances allow computing the indices $m_n$ in (2) and, thus, selecting within the matrix $\underline{\underline{h}}$ the contributing cells, which are highlighted in red in the figure. These samples are then combined according to (2), and the result is placed into the blue cell of the matrix $\underline{\underline{F}}$ in Figure 2. This procedure is performed for each target in the $I \times L$ output grid, and the entire image is focused once all targets have been processed. Consequently, the pixel-wise strategy entails $I \times L$ computational steps, corresponding to the total number of pixels (i.e., targets) of the output grid.
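The article's processor is implemented in IDL; purely as an illustration of the pixel-wise flow just described, a minimal NumPy sketch might look as follows. The linear range-to-index mapping $m_n = \mathrm{round}((R - r_0)/\Delta r)$ and the unit weighting (in place of $\omega$) are simplifying assumptions of this sketch, not details taken from the article.

```python
import numpy as np

def focus_pixel_wise(h, D, P, k_idx, K_len, wavelength, r0, dr):
    """Pixel-wise TD focusing sketch: one coherent summation, equation (2),
    per cell of the I x L output grid (unit weighting for simplicity).

    h      : (N, M) complex range-focused SAR signal
    D      : (I, L, 3) target positions (the DEM matrices Dx, Dy, Dz)
    P      : (N, 3) antenna phase-center track (the GNSS vectors Px, Py, Pz)
    k_idx  : (I, L) integer aperture-center indices k_T
    K_len  : (I, L) integer aperture lengths K_T (pixels)
    r0, dr : near range and range-bin size of the assumed index mapping
    """
    I, L = k_idx.shape
    F = np.zeros((I, L), dtype=complex)
    for i in range(I):            # one computational step per target,
        for l in range(L):        # hence I x L steps in total
            kT, KT = int(k_idx[i, l]), int(K_len[i, l])
            for n in range(kT - KT // 2, kT + KT // 2 + 1):
                R = np.linalg.norm(D[i, l] - P[n])     # equations (1), (3)
                m_n = int(round((R - r0) / dr))        # range index m_n
                F[i, l] += h[n, m_n] * np.exp(-1j * 4 * np.pi / wavelength * R)
    return F
```

Since every cell of $F$ is computed independently of the others, this formulation also makes the inherent parallelism of the procedure (exploited in Section 2.2.3) immediately visible.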

2.2.2. Matrix-Wise Strategy

In contrast to the pixel-wise strategy, the matrix-wise approach processes all points in the output grid simultaneously, using vectorized operations. The motivation behind this strategy lies in the fact that the performance difference between vectorized operations and explicit loops can be significant in certain programming languages, such as the Interactive Data Language (IDL) [12] or MATLAB [13], which exploit specialized libraries (e.g., BLAS, LAPACK) optimized for matrix computations [14].
The matrix-wise strategy, whose main rationale is illustrated in Figure 3, makes use of the same matrices and vectors (summarized in Figure 1) as the pixel-wise approach. Moreover, an additional $I \times L$ matrix, say $\underline{\underline{G}}$, is used.
Specifically, in the matrix-wise case, at each processing step, the summation in (2) is updated, simultaneously for all the targets of the image, with an additional term. For this reason, overall, the matrix-wise approach requires a number of processing steps equal to the number of terms in the summation in (2). As clarified above, this number is target-dependent. Accordingly, the number of processing steps required by the matrix-wise strategy is equal to the maximum value of $K_T$, that is, the maximum value recorded in the matrix $\underline{\underline{K}}$.
To better clarify the operating principle of the matrix-wise approach, let us assume, for the sake of simplicity, that the synthetic aperture length is the same for all the targets to be focused (we will remove this assumption later on). In this case, the matrix $\underline{\underline{K}}$ is replaced with a single value, say $\tilde{K}$, valid for all the targets and thus representing the number of iterations needed to focus the overall image. At each step $s$, with $s \in \{0, 1, 2, \ldots, \tilde{K}-1\}$, the summation in (2) is updated for all the targets of the image. The matrix $\underline{\underline{F}}$ is therefore updated at each iteration with the properly updated partial sum in (2). In the following, we denote by $\underline{\underline{F}}_s$ the matrix $\underline{\underline{F}}$ obtained at the $s$-th step, with $\underline{\underline{F}}_{\tilde{K}-1}$ being the final focused image.
At the processing step $s = 0$, for all the targets, the summation in (2) is initialized with the central term. Accordingly, for $s = 0$, for each target $\mathbf{T}$, only the term of the sum in (2) with $n = k_T$ is computed and stored in the matrix $\underline{\underline{F}}_0$. Since this operation is carried out simultaneously for all the targets, all the cells of the matrix $\underline{\underline{F}}_0$ (highlighted in blue in Figure 3) are of interest, and the following quantities are needed to compute the values to store therein:
  • the target positions, recorded in the three matrices $\underline{\underline{D}}_x$, $\underline{\underline{D}}_y$, and $\underline{\underline{D}}_z$ in correspondence to all the cells (highlighted in yellow in Figure 3), matched to all the cells of the matrix $\underline{\underline{F}}_0$;
  • the synthetic aperture center indices relevant to all the targets, recorded in all the cells (highlighted in yellow in Figure 3) of the matrix $\underline{\underline{k}}$, pointing to the dark green cells of the GNSS vectors in Figure 3.
As better clarified in the following, the matrix $\underline{\underline{k}}$ is updated at each processing step (see, again, Figure 3); accordingly, for $s = 0$, we name it $\underline{\underline{k}}_0$. The selected cells of the GNSS vectors allow identifying the azimuth positions of the samples in the matrix $\underline{\underline{h}}$ (highlighted in green in Figure 3) that must be picked up to update the summation in (2) for all the image targets.
Subsequently, the distances between each target $\mathbf{T}$ and the corresponding antenna PC position $\mathbf{A}_{k_T}$ are computed via (1) and (3) using the data selected in the GNSS vectors and the DEM matrices, properly linked to each other by the elements of the matrix $\underline{\underline{k}}$. As pictorially depicted in Figure 3, these distances allow us to compute, for each target $\mathbf{T}$, the corresponding index $m_{k_T}$ in (2) and, thus, to select within the matrix $\underline{\underline{h}}$ the cell $(k_T, m_{k_T})$. Since this procedure is carried out simultaneously for all the targets of the image, overall, all the cells of the matrix $\underline{\underline{h}}$ highlighted in red in Figure 3 are of interest at this step.
By doing so, for each target it is possible to compute the term of the sum in (2) with $n = k_T$ and to store it in the matrix $\underline{\underline{G}}_0$ (where, as usual, the subscript means that the matrix $\underline{\underline{G}}$ is updated at each processing step). For $s = 0$, we let:
$$\underline{\underline{F}}_0 = \underline{\underline{G}}_0 \tag{4}$$
At the processing step $s = 1$, for all the targets, the summation in (2) is updated by considering the additional term next to the central one. The matrix $\underline{\underline{k}}$ is therefore updated as follows: $\underline{\underline{k}}_1 = \underline{\underline{k}}_0 + \underline{\underline{1}}$, with $\underline{\underline{1}}$ being an $I \times L$ all-one matrix. Then, the procedure described above can be carried out by exploiting the matrix $\underline{\underline{k}}_1$, which allows computing, for each target, the term of the sum in (2) with $n = k_T + 1$, and storing it in the matrix $\underline{\underline{G}}_1$. Accordingly, for $s = 1$, we let:
$$\underline{\underline{F}}_1 = \underline{\underline{F}}_0 + \underline{\underline{G}}_1 \tag{5}$$
The procedure can be repeated iteratively by updating the matrix $\underline{\underline{k}}$ in such a way that, at the generic processing step $s$, $\underline{\underline{k}}_s$ properly points at the desired index of the summation in (2). At each step, the matrix $\underline{\underline{F}}$ is then updated as follows:
$$\underline{\underline{F}}_s = \underline{\underline{F}}_{s-1} + \underline{\underline{G}}_s \tag{6}$$
As observed above, the overall procedure stops after $\tilde{K}$ steps, and $\underline{\underline{F}}_{\tilde{K}-1}$ stores the final focused image.
We recall that, to simplify the discussion, we have assumed that the synthetic aperture length is the same for all the targets to be focused. Removing this assumption is easy to manage: it just implies that, at each processing step, we have to check, for each target $\mathbf{T}$, whether the index $n$, stored in the matrix $\underline{\underline{k}}_s$, satisfies the following condition:
$$k_T - \frac{K_T}{2} \le n \le k_T + \frac{K_T}{2} \tag{7}$$
Condition (7) can be checked for each target $\mathbf{T}$ through the corresponding values stored in the matrices $\underline{\underline{k}}_0$ (properly saved at the beginning of the overall procedure), $\underline{\underline{k}}_s$, and $\underline{\underline{K}}$; for the targets that do not satisfy it, the corresponding cells in the matrix $\underline{\underline{G}}_s$ are set equal to 0.
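Again, the actual processor is written in IDL; the vectorized updates of equations (4)–(6), with condition (7) as a mask, can be sketched in NumPy as below. For compactness, this sketch sweeps each aperture from its lower bound upwards rather than outwards from the central term, which leaves the final sum unchanged; the linear range-to-index mapping and unit weighting are the same hypothetical simplifications used in the pixel-wise illustration.

```python
import numpy as np

def focus_matrix_wise(h, D, P, k_idx, K_len, wavelength, r0, dr):
    """Matrix-wise TD focusing sketch: at each step s, the partial sums of
    ALL targets are updated at once with vectorized array operations; the
    mask implements condition (7) for target-dependent aperture lengths.

    Argument shapes are as in the pixel-wise sketch: h is (N, M) complex,
    D is (I, L, 3), P is (N, 3), k_idx and K_len are (I, L) integers."""
    I, L = k_idx.shape
    F = np.zeros((I, L), dtype=complex)
    k_lo = k_idx - K_len // 2                 # lower summation bound, per target
    k_hi = k_idx + K_len // 2                 # upper summation bound, per target
    for s in range(int(K_len.max()) + 1):     # max(K_T) + 1 processing steps
        n = k_lo + s                          # current azimuth index, per target
        valid = n <= k_hi                     # condition (7)
        n_c = np.clip(n, 0, P.shape[0] - 1)   # keep masked-out indices in range
        R = np.linalg.norm(D - P[n_c], axis=-1)   # eqs (1), (3), all targets at once
        m = np.rint((R - r0) / dr).astype(int)    # range indices m_n
        G = h[n_c, m] * np.exp(-1j * 4 * np.pi / wavelength * R)
        F += np.where(valid, G, 0)            # masked update, equation (6)
    return F
```

As in the article, the two strategies differ only in the order of the operations, so this sketch produces the same focused matrix as the pixel-wise loop, consistent with the identical images reported in Section 3.1.1.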

2.2.3. Parallel Implementation

Since the TD focusing procedure allows each target T to be focused independently, the algorithm exhibits an inherent parallelism that can be exploited through a divide-and-conquer strategy, applicable to both the pixel-wise and the matrix-wise approaches. Typically, to deal with parallel processing, the output grid is partitioned into multiple independent and equally sized segments. Each segment is focused separately, and the results are then recomposed to generate the final focused image.
Specifically, the following three primary steps should be performed according to the flowchart shown in Figure 4:
1.
Assessment of the Available Computational Resources. The parallelization degree J is determined by multiplying the number of Worker Nodes (WNs) times the number of cores available in the processing infrastructure. This evaluation establishes the number J of portions into which the output grid will be divided.
2.
Output Grid Splitting and Focusing. Based on the previous evaluation, the output grid is split into J adjacent, non-overlapping, equal-sized portions along the azimuth direction, while keeping the full range dimension intact. Each portion represents a task, which we refer to as a “job”, to be processed (i.e., focused) by using a single core on a designated WN. To prevent memory overload in the computing system, only the necessary data (referred to as “data package”) for performing the assigned job are allocated to each core.
For each job, the data package includes the relevant portions of the external DEM, along with the corresponding segments of the matrices $\underline{\underline{k}}$ and $\underline{\underline{K}}$, which are partitioned consistently with the output grid. Since these data matrices have the same dimensions as the output grid, each data portion has the same dimensions as well, making their partitioning straightforward once $J$ is fixed.
Selecting, for each job, the proper azimuth portions of the GNSS vectors and of the range-focused SAR data matrix $\underline{\underline{h}}$ is, instead, not as straightforward. Indeed, the GNSS vectors and the data matrix $\underline{\underline{h}}$ all share the same azimuth dimension, which is, however, different from the azimuth dimension of the output grid. Therefore, this operation requires determining the minimum and maximum azimuth indices ($n_{min}$ and $n_{max}$, respectively) of interest for the considered job. This is achieved by inspecting the indices contained in the portions of the matrices $\underline{\underline{k}}$ and $\underline{\underline{K}}$ relevant to the considered job. These indices indeed allow marking the track segment (i.e., the bounds of the GNSS vectors) that should be considered to focus the entire portion and, in turn, the azimuth boundaries of the matrix $\underline{\underline{h}}$, which, similarly to the output grid, is not cut in the range dimension.
3.
Tiling of Focused Data. Once all the portions are separately focused, each one is mapped directly to its corresponding position within the output grid. Since the portions are non-overlapping, no merging operations are required. This operation ultimately yields the final SLC image.
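The splitting logic of step 2 can be sketched as follows. This is an illustrative NumPy fragment, assuming (for simplicity) that the first axis of the output grid is azimuth and that I is divisible by J; all names are hypothetical.

```python
import numpy as np

def make_jobs(k_idx, K_len, J):
    """Split the I x L output grid into J equal, non-overlapping azimuth
    portions and derive, for each job, the azimuth bounds [n_min, n_max]
    of the GNSS vectors / matrix h needed to focus that portion.

    k_idx, K_len : (I, L) integer matrices of aperture centers and lengths
    Returns a list of (row_slice, n_min, n_max) tuples, one per job."""
    I, L = k_idx.shape
    rows = I // J                      # equal-sized portions along azimuth
    jobs = []
    for j in range(J):
        sl = slice(j * rows, (j + 1) * rows)
        k_part, K_part = k_idx[sl], K_len[sl]
        # widest aperture extent over all targets of this portion
        n_min = int((k_part - K_part // 2).min())
        n_max = int((k_part + K_part // 2).max())
        jobs.append((sl, n_min, n_max))
    return jobs
```

Each job then receives only its DEM, $\underline{\underline{k}}$, and $\underline{\underline{K}}$ portions plus the GNSS and $\underline{\underline{h}}$ slice $[n_{min}, n_{max}]$, i.e., the "data package" of step 2.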

3. Results

To quantitatively assess the effectiveness of the processing strategies detailed in Section 2, the proposed approaches were applied to an X-band dataset acquired and processed using the IREA-CNR airborne SAR infrastructure [10], which comprises both flight and ground segments. The flight segment includes the radar sensor, namely the MIPS system [10], and software tools for flight planning. The ground segment integrates a SAR data processing chain—mainly implemented using the Interactive Data Language (IDL)—as well as an Information Technology (IT) platform—housed at the IREA-CNR laboratories in Naples, Italy—dedicated to data storage and processing. The IT platform is a CPU-based system composed of 22 Worker Nodes (WNs), each featuring 2 CPUs with 32 cores per CPU, 2 TB of RAM, and 192 TB of raw disk storage [10].
The used airborne SAR dataset is relevant to an area located in Grumento Nova, in the province of Potenza, Basilicata region, Southern Italy, and the imaged area spans approximately 3 km in slant range and 8 km in azimuth. The main parameters of the acquired raw data are reported in Table 1.
To compare the strategies described in Section 2, we applied them by considering different numbers of targets to be focused and setting an azimuth resolution of 30 cm. Specifically, we used the two strategies to focus six ground portions of different extensions. In particular, the ground pixels were selected according to the radar (range × azimuth) grid and, in all the considered cases, 4000 pixels in range were used. Consequently, increasing the number of ground points to be focused requires expanding the azimuthal extent of the imaged area. More specifically, we considered the ground portions summarized in Table 2 and reported in the following: 4000 × 250 = 1 million pixels, 4000 × 1000 = 4 million pixels, 4000 × 4000 = 16 million pixels, 4000 × 8000 = 32 million pixels, 4000 × 16,000 = 64 million pixels, and 4000 × 32,000 = 128 million pixels.

3.1. Pixel-Wise vs. Matrix-Wise Strategy

3.1.1. Single-Job Case

To ensure a fair comparison between the pixel-wise and matrix-wise approaches, identical benchmarking conditions were enforced by intentionally disabling parallelization and executing both strategies in a single-job configuration.
It is important to highlight that both approaches yield bitwise-identical final SLC-focused images. Accordingly, the two strategies are identical in terms of imaging capabilities. To show this, we report in Figure 5a,b the Multi-Look Complex (MLC) images generated by applying to the same data set the pixel-wise and matrix-wise strategies, respectively. Moreover, Figure 5c presents the corresponding interferometric phase difference, which, of course, is identically zero, with zero mean and zero standard deviation.
Turning to the evaluation of the processing times achieved with the two strategies, the results are summarized in Table 3 and illustrated in Figure 6. Comparison of the processing times reveals that the pixel-wise approach consistently outperforms the matrix-wise strategy across all conducted experiments (see Figure 6). As shown in Table 3, the performance gap becomes more pronounced as the image size increases. While the matrix-wise approach is about 19% slower for smaller grids (e.g., 4000 × 250 pixels), the relative difference stabilizes around 35–36% for larger grids (from 4000 × 4000 pixels upwards). To explain this result, we note first of all that the two implementations are equivalent in terms of the required number of elementary operations (specifically, summations). To better clarify this point, let us assume, just for the sake of simplicity, that the synthetic aperture length is the same for all the targets to be focused, that is, the matrix $\underline{\underline{K}}$ is replaced with a single value, say $\tilde{K}$, valid for all the targets. Application of the pixel-wise strategy requires carrying out $\tilde{K}-1$ summations at each iteration, where the total number of iterations is equal to the number of targets of the output grid (that is, $I \times L$). Application of the matrix-wise strategy requires instead carrying out $I \times L$ summations (one for each target of the output grid) at each iteration, where the total number of iterations is equal to $\tilde{K}-1$, with $\tilde{K}$ being the synthetic aperture length. Accordingly, for both strategies, the number of involved summations is $I \times L \times (\tilde{K}-1)$. The measured discrepancies between the computing times in the two cases are therefore not related to the number of summations required by the two strategies but, instead, to other factors depending on the specific programming language exploited to implement them.
A possible explanation of the behavior reported in Figure 6 and Table 3 is that the matrix-wise strategy involves, at each iteration, an indexing burden (on the order of the number of targets of the output grid) significantly greater than that of the pixel-wise strategy (on the order of the number of samples of the synthetic aperture). It is reasonable to assume that, for the exploited programming language, this represents a bottleneck, as evidenced by the fact that the discrepancies tend to shrink as the size of the output grid decreases (see Figure 6 and Table 3). Nevertheless, identifying the underlying causes of the measured processing-time differences is beyond the scope of this research, as they could be influenced by the programming language we used.
In any case, the general trend shows that both implementations scale approximately linearly with the output-grid size. This quasi-linear behavior allows for reliable prediction of processing times for a given dataset, based on the required azimuth resolution and output grid configuration.
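As a worked example of this prediction, the single-job time reported for the pixel-wise strategy on the 4000 × 32,000-pixel grid (1560 min, see the Highlights) can be rescaled linearly to another grid size. The sketch below is an extrapolation rule of thumb supported by the observed quasi-linear trend, not an exact timing model:

```python
def predict_time(t_ref_min, pixels_ref, pixels_new):
    """Estimate single-job focusing time by linear scaling with the
    number of output-grid pixels (quasi-linear trend of Section 3.1.1)."""
    return t_ref_min * pixels_new / pixels_ref

# 1560 min measured for 4000 x 32,000 pixels -> estimate for 4000 x 4000
estimate = predict_time(1560.0, 4000 * 32000, 4000 * 4000)
print(estimate)   # 195.0 (minutes)
```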

3.1.2. Multi-Job Case

Based on the results obtained in Section 3.1.1 under a single-job configuration, we enabled the parallel processing procedure shown in Figure 4 and repeated the experiments for the 4000 × 250, 4000 × 4000, and 4000 × 32,000 pixels output grids. The results are presented in Figure 7 for both the pixel-wise and matrix-wise strategies, where the computing time is represented as a function of the number of used jobs.
Regardless of the strategy employed, the effectiveness of parallel computing is evident, with processing times dramatically reduced as the number of jobs increases. Moreover, the nearly linear scalability observed in the single-job configuration is also replicated here, but only up to a certain number of jobs, beyond which performance reaches a plateau or even degrades. In fact, Figure 7 shows that processing times begin to increase when the number of jobs exceeds 32 for the 4000 × 250 pixels output grid, and 128 for the 4000 × 4000 pixels output grid, indicating a threshold beyond which parallel efficiency deteriorates. This phenomenon will be further analyzed in Section 3.2.
Here, we note that, although parallelization significantly improves the overall processing performance, it does not favor one strategy over the other: the pixel-wise strategy indeed remains computationally more efficient than the matrix-wise one. Additionally, as the number of jobs increases beyond the point at which the computational efficiency of the parallel implementation deteriorates, the performance of the two strategies tends to converge.

3.2. Pixel-Wise Strategy: Parallel Processing Case

Considering the superior performance of the pixel-wise approach, the analysis related to the parallel processing implementation is further explored using this strategy.
Specifically, we present the results obtained for all the output grids of Table 2. Figure 8 summarizes the computing time as a function of the number of jobs, ranging from 1 to 256, for output grids of 4000 × 250, 4000 × 1000, and 4000 × 4000 pixels, while Figure 9 extends the analysis to output grids of 4000 × 8000, 4000 × 16,000, and 4000 × 32,000 pixels. In both figures, blue markers denote the raw data focusing times, whereas red dots represent half of the focusing time achieved by halving the number of jobs (that is, half of the value of the preceding blue marker). The curve obtained with the red dots, referred to as Half Time Law [15], reflects the ideal assumption that doubling the number of jobs halves the processing time. To provide an alternative representation, Figure 10 shows the computing time as a function of the total number of focused points for all job configurations (Figure 10a). Moreover, for the job configurations that show a behavior different from that expected from the Half Time Law (see Figure 8 and Figure 9), Figure 10b also reports such Half Time Law reference (square markers).
As seen in Section 3.1.2, increasing the number of jobs leads to reduced computing times, particularly up to 8 jobs (see the left-hand column in the cases shown in Figure 8 and Figure 9). Within this range, doubling the number of jobs roughly halves the focusing time. This trend is visible in both figures, where the blue markers closely follow the red dots. However, the results from 16 to 256 jobs (see the right-hand column in the cases shown in Figure 8 and Figure 9) indicate that an indiscriminate increase in the number of jobs does not always lead to improved computational performance. In fact, for each output grid, there is a specific number of jobs beyond which the efficiency of the parallel processing begins to diminish, even if total processing time continues to decrease. This behavior becomes evident when the focusing times start to diverge from the values predicted by the Half Time Law, which assumes ideal linear speedup. This is also noticeable in Figure 10b, where the processing time progressively deviates from the Half Time Law either when increasing the number of jobs (i.e., when moving across curves in the figure at a fixed number of focused points) or when decreasing the total number of focused points (i.e., when moving along a given job-configuration curve in the figure). Once the processing times start to diverge from the Half Time Law, the scalability of the parallel implementation deteriorates, ultimately leading to a regime where adding more jobs becomes counterproductive in terms of execution time. This is particularly evident in Figure 8a, for the 4000 × 250 pixels output grid, where the processing time begins to increase significantly from 32 jobs onward. A similar pattern is observed for the 4000 × 1000 pixels output grid starting at 64 jobs, and for the 4000 × 4000 pixels output grid from 128 jobs (Figure 8b and Figure 8c, respectively). 
This loss in efficiency could be attributed to the overhead associated with managing a large number of parallel jobs, which outweighs the computational benefits of workload distribution. However, a detailed root-cause attribution of the observed deviation from the Half Time Law would require a dedicated performance study and is beyond the scope of this work.
In this context, we consider a time reduction ratio of 1.7 as the minimum acceptable threshold to ensure a reasonable trade-off between computational gain and the additional costs introduced by parallel execution. This guarantees that doubling the number of jobs yields at least a ~40% reduction in processing time. While ideal scalability would correspond to a ratio of 2 (a 50% time reduction), configurations resulting in lower ratios become progressively less efficient and ultimately suboptimal.
According to our experiments, the largest number of jobs that still achieves a time reduction ratio of at least 1.7 is as follows: 8 jobs for the 4000 × 250 pixels output grid, 16 for the 4000 × 1000 pixels case, 32 for the 4000 × 4000 pixels configuration, 64 for the 4000 × 8000 pixels output grid, and 128 for both the 4000 × 16,000 and 4000 × 32,000 pixels cases.
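The selection rule described above can be sketched as follows. This is an illustrative Python fragment with hypothetical timing values, not the procedure actually used to produce the reported figures: starting from the smallest configuration, the job count is doubled as long as each doubling reduces the measured time by a factor of at least 1.7.

```python
# Sketch of the 1.7 time-reduction criterion: doubling the number of jobs
# is considered worthwhile only while T(n) / T(2n) >= ratio. Timing
# values are hypothetical placeholders.

def max_efficient_jobs(measured, ratio=1.7):
    """Largest job count reachable through successive doublings that each
    satisfy the minimum time-reduction ratio.
    measured: dict {n_jobs: time_minutes}."""
    n = min(measured)
    while 2 * n in measured and measured[n] / measured[2 * n] >= ratio:
        n *= 2
    return n

measured = {1: 192.0, 2: 100.0, 4: 54.0, 8: 30.0, 16: 19.0, 32: 14.0}
print(max_efficient_jobs(measured))  # -> 8 (since 30/19 < 1.7)
```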

4. Discussion

We now discuss the results presented in Section 3.
The performance analysis of the proposed focusing strategies—namely, the pixel-wise and matrix-wise approaches—as well as their parallel implementation, was carried out using the IDL programming environment and the IT platform available at IREA-CNR [10].
Regarding the comparison between pixel-wise and matrix-wise strategies, the results shown in Section 3.1 demonstrate that the pixel-wise approach consistently outperforms the matrix-wise strategy in terms of processing time, with the performance gap widening as the output grid size increases. While both methods yield identical focused SLC images (see Figure 5), the matrix-wise implementation incurs a significant overhead likely associated with the management of large data matrices, which becomes increasingly detrimental as the number of focused points grows (see Table 3). This behavior is particularly notable given that the underlying programming environment (IDL) is optimized for vectorized operations, suggesting that, in practice, the expected computational advantages of matrix operations are outweighed by memory and data handling inefficiencies at large scales.
Parallelization was shown to be highly effective in reducing processing times for both strategies, especially for moderate numbers of parallel jobs. Specifically, the findings in Section 3.2 show that processing times could be decreased by factors ranging from 1.9 to nearly 177 when comparing multi-job configurations (ranging from 2 to 256 jobs) to a single-job approach. Moreover, the parallel strategy remains effective even as the number of focused points increases significantly. For instance, with the pixel-wise strategy, when processing the considered dataset on a 4000 × 32,000 pixels output grid at 30 cm resolution using 128 jobs, the computing time is approximately 14 min, which is comparable to the time required to process the same dataset on a 4000 × 4000 pixels output grid with 16 jobs (~13 min).
The analysis revealed that, up to a certain threshold, increasing the number of jobs led to near-ideal reductions in processing time, closely following the Half Time Law. However, beyond this threshold, further increases in the number of jobs led to a reduction in the computing time worse than that expected by the Half Time Law, and in some cases, even to an increase in computing time. The optimal number of parallel jobs depends on the size of the output grid, with larger grids benefiting from a higher degree of parallelization before reaching the efficiency plateau (see Figure 8, Figure 9 and Figure 10).
This efficiency loss may be attributed to the overhead associated with managing a large number of concurrent processes, which can eventually outweigh the benefits of workload distribution. This is one of the reasons we did not conduct experiments with more than 256 jobs, despite the IT platform’s capacity to manage up to 1408 parallel processes. Another important consideration is that the IT resources are shared with other activities conducted at the IREA-CNR laboratories, and are not exclusively allocated for the focusing procedures.
Additionally, the nearly linear scalability of the pixel-wise approach, with respect to data size, further supports its suitability for large-scale SAR applications, as it enables reliable prediction of processing times based on grid configuration and resolution requirements.
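As an illustration of this predictability, the single-job pixel-wise times in Table 3 grow almost linearly with the number of focused pixels, so a simple per-pixel rate can be fitted and used for rough runtime estimates. In the Python sketch below, the fitted rate and the ideal division by the number of jobs are our own simplifying assumptions, valid only within the efficient-scaling regime discussed above; the input values are taken from Table 3.

```python
# Sketch: estimating a per-pixel processing rate from the single-job
# pixel-wise times of Table 3, and using it for rough runtime prediction.
# The linear model and the ideal 1/n_jobs division are simplifying
# assumptions, not results from the paper.

# (grid pixels, measured single-job time in minutes), from Table 3
measurements = [
    (4000 * 250, 12.12),
    (4000 * 1000, 48.05),
    (4000 * 4000, 192.35),
    (4000 * 8000, 388.52),
    (4000 * 16000, 781.25),
    (4000 * 32000, 1560.37),
]

# Average per-pixel processing rate (minutes per focused pixel)
rate = sum(t / n for n, t in measurements) / len(measurements)

def predict_minutes(n_pixels, n_jobs=1):
    """Predicted time assuming linear scaling in pixels and ideal job split."""
    return rate * n_pixels / n_jobs

print(f"fitted rate: {rate:.3e} min/pixel")
print(f"predicted, 4000 x 32,000 pixels, 1 job: {predict_minutes(4000 * 32000):.0f} min")
```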
It should be noted that the reported analyses (see Figure 8, Figure 9 and Figure 10) are specific to the tested IREA-CNR IT platform and to the use of IDL for the implementation; therefore, absolute runtimes may differ when adopting other programming languages, software environments, or computing architectures. Nevertheless, the proposed focusing strategies are algorithmically general and portable. These findings highlight the importance of carefully balancing the complexity of the problem to be addressed and the resources required by the focusing procedure, in order to maximize processing efficiency, which is not a trivial task, particularly in shared or resource-constrained environments. These aspects represent matters of current study and future work.
Looking forward, efforts are currently underway to migrate the parallelized focusing system from a CPU-based to a GPU-based architecture, supported by a planned near-term upgrade of the IT platform to include GPUs. This transition is expected to significantly reduce processing times, as GPUs are inherently more suitable for high-throughput, computation-intensive operations [16,17,18,19].

5. Conclusions

In this work, two strategies for implementing the TD airborne SAR raw data focusing procedure described in [9] were presented and assessed. The main objective was to provide a quantitative evaluation of their computational performance, leveraging the airborne SAR infrastructure available at IREA-CNR. This infrastructure comprises a dedicated airborne SAR sensor and a multi-node, multi-threaded computing platform for data storage and processing, offering a suitable environment for the application of parallel computing techniques.
The tests were conducted in two main phases. The first phase was aimed at performing a comparative analysis between the two proposed strategies, referred to as the pixel-wise and matrix-wise, respectively. The results, achieved through experiments conducted considering different numbers of focused pixels, demonstrated that the pixel-wise strategy offers superior computational efficiency. This advantage persists even when parallel processing is employed, making the pixel-wise approach the preferred option for large-scale SAR focusing tasks.
These results motivated the second phase of testing, which was concentrated on the parallel implementation of the pixel-wise strategy. The results show that the computational efficiency can be significantly improved through a parallel processing framework. Specifically, speedup factors of up to 177 were observed when comparing a single-job setup to multi-job configurations (up to 256 jobs).
As a practical example, if we consider a raw data size of 4000 × 200,000 samples, covering an area of approximately 3 km in slant range and 8 km in azimuth, and apply the pixel-wise approach to implement the TD focusing procedure in [9] by setting an azimuth resolution of 30 cm, over an output grid of 4000 × 32,000 pixels, with a single job of the IT platform available at IREA-CNR, we need 1560 min of computing time. Exploiting 256 jobs on the same platform, we reduce this computing time to just 8 min.
Although parallelization significantly accelerates processing, its effectiveness is limited by several constraints. First, the amount of computational resources that can be employed may be limited when working within a shared infrastructure that is not exclusively allocated to SAR focusing tasks, as is the case with the IREA-CNR IT platform. Furthermore, there is an inherent limitation in the processing itself, whereby increasing the number of parallel jobs does not necessarily improve computing performance and may even degrade it, likely owing to the overhead associated with managing an excessively large number of parallel tasks. This behavior is more evident when the size of the output grid (i.e., the number of pixels to focus) is relatively small. Identifying the optimal degree of parallelization is thus essential for achieving an optimal trade-off between processing speed and resource utilization, and is certainly worth investigating in future analyses.

Author Contributions

Conceptualization, J.E., P.B. and S.P.; methodology, J.E., P.B., C.E., A.N., R.L. and S.P.; software, J.E.; validation, J.E., P.B. and S.P.; formal analysis, J.E., P.B., C.E., A.N., R.L. and S.P.; investigation, J.E., P.B., C.E., A.N., R.L. and S.P.; writing—original draft preparation, J.E. and S.P.; writing—review and editing, J.E. and S.P.; visualization, J.E.; supervision, R.L. and S.P.; funding acquisition, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the Italian Department of Civil Protection (DPC) within the framework of the DPC-IREA Agreement (2025–2027), although it does not necessarily represent official DPC opinion and policies. This research was also partially supported by the European Union—NextGenerationEU (PNRR-M4C2) program through the ICSC—CN-HPC (CN00000013). Moreover, the work was also partially supported by the European Union under the Italian National Recovery and Resilience Plan (PNRR) of NextGenerationEU within the Project MANSAR-HPC—Monitoraggio Ambientale di aree Naturali tramite tecnologia SAR e sistemi HPC—call “Bando a Cascata”, D. R. Università degli Studi di Bari Aldo Moro n. 1433 del 17/04/2024, Project “HPC—National Centre for HPC, Big Data and Quantum Computing”, Code CN00000013, CUP H93C22000450007, Spoke 5 “Environment & Natural Disasters”, call MUR n. 3138 16/12/2021 PNRR, Missione 4, Componente 2, Investimento 1.4–NextGenerationEU.

Data Availability Statement

Restrictions apply to the datasets exploited for this article. Data acquisition has been carried out in the frame of funded projects: the datasets are thus subject to the restrictions imposed by non-disclosure agreements signed with the partners.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Curlander, J.C.; McDonough, R.N. Synthetic Aperture Radar: Systems and Signal Processing; Wiley: New York, NY, USA, 1991; ISBN 978-0-471-85770-9.
  2. Perna, S.; Soldovieri, F.; Amin, M. Editorial for Special Issue “Radar Imaging in Challenging Scenarios from Smart and Flexible Platforms”. Remote Sens. 2020, 12, 1272.
  3. Esposito, C.; Gifuni, A.; Perna, S. Measurement of the Antenna Phase Center Position in Anechoic Chamber. IEEE Antennas Wirel. Propag. Lett. 2018, 17, 2183–2187.
  4. Moreira, A.; Huang, Y. Airborne SAR Processing of Highly Squinted Data Using a Chirp Scaling Approach with Integrated Motion Compensation. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1029–1040.
  5. Fornaro, G. Trajectory Deviations in Airborne SAR: Analysis and Compensation. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 997–1009.
  6. Yegulalp, A.F. Fast Backprojection Algorithm for Synthetic Aperture Radar. In Proceedings of the 1999 IEEE Radar Conference. Radar into the Next Millennium (Cat. No.99CH36249), Waltham, MA, USA, 22 April 1999; IEEE: Piscataway, NJ, USA, 1999; pp. 60–65.
  7. Ulander, L.M.H.; Hellsten, H.; Stenstrom, G. Synthetic-Aperture Radar Processing Using Fast Factorized Back-Projection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 760–776.
  8. Frey, O.; Magnard, C.; Ruegg, M.; Meier, E. Focusing of Airborne Synthetic Aperture Radar Data From Highly Nonlinear Flight Tracks. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1844–1858.
  9. Berardino, P.; Natale, A.; Esposito, C.; Lanari, R.; Perna, S. On the Time-Domain Airborne SAR Focusing in the Presence of Strong Azimuth Variations of the Squint Angle. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5212818.
  10. Esposito, C.; Natale, A.; Lanari, R.; Berardino, P.; Perna, S. On the Capabilities of the IREA-CNR Airborne SAR Infrastructure. Remote Sens. 2024, 16, 3704.
  11. Franceschetti, G.; Lanari, R. Synthetic Aperture Radar Processing; CRC Press: Boca Raton, FL, USA, 1999; ISBN 978-0-203-73748-4.
  12. IDL Interactive Data Language. Available online: https://www.nv5geospatialsoftware.com/docs/using_idl_home.html (accessed on 12 May 2025).
  13. MATLAB. Available online: https://it.mathworks.com/products/matlab.html (accessed on 12 May 2025).
  14. Altman, Y.M. Accelerating MATLAB Performance: 1001 Tips to Speed up MATLAB Programs; Chapman and Hall/CRC: Boca Raton, FL, USA, 2014; ISBN 978-0-429-18874-9.
  15. Grama, A.Y.; Gupta, A.; Kumar, V. Isoefficiency: Measuring the Scalability of Parallel Algorithms and Architectures. IEEE Parallel Distrib. Technol. Syst. Appl. 1993, 1, 12–21.
  16. Robey, R.; Zamora, Y. Parallel and High Performance Computing, 1st ed.; Manning Publications: New York, NY, USA, 2021; ISBN 978-1-61729-646-8.
  17. Buber, E.; Diri, B. Performance Analysis and CPU vs GPU Comparison for Deep Learning. In Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology (CEIT), Istanbul, Turkey, 25–27 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
  18. Asano, S.; Maruyama, T.; Yamaguchi, Y. Performance Comparison of FPGA, GPU and CPU in Image Processing. In Proceedings of the 2009 International Conference on Field Programmable Logic and Applications, Prague, Czech Republic, 31 August–2 September 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 126–131.
  19. Frey, O.; Werner, C.L.; Wegmuller, U. GPU-Based Parallelized Time-Domain Back-Projection Processing for Agile SAR Platforms. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1132–1135.
Figure 1. Data matrices involved in the TD airborne SAR focusing procedure. The F matrix contains all targets to be focused; the k and K matrices store the quantities k_T and K_T in (2), respectively, relevant to each target; the three matrices D_x, D_y, D_z provide the ground position of each target T in a common coordinate system (e.g., a geocentric Cartesian system). All these matrices have the same dimensions as the output grid (I × L). The GNSS vectors provide the platform position in the same coordinate system as the DEM, and their lengths correspond to the azimuth dimension of the range-focused SAR matrix h.
Figure 2. Main rationale of the pixel-wise focusing procedure. The details are reported in the body of the manuscript.
Figure 3. Main rationale of the matrix-wise focusing procedure. The matrices k_s, F_s, G_s are updated at each processing step s. The figure illustrates the processing step s = 0, assuming that the synthetic aperture length is the same for all the targets to be focused (K̃). More details are reported in the body of the manuscript.
Figure 4. Flowchart of the parallel implementation of the TD SAR focusing procedure. Each job and its corresponding data package are represented in different colors. For job j (shown as a general example), the relevant segments of the DEM, k and K matrices, as well as the portions of the GNSS vectors and the range-focused SAR data matrix h, are highlighted in yellow.
Figure 5. Comparison of pixel-wise and matrix-wise 4000 × 32,000 pixels SLC outputs: (a,b) show the MLC amplitude images obtained using the pixel-wise and matrix-wise approaches, respectively; (c) displays the multi-look interferometric phase difference between the data focused with the two different strategies. Range pixel spacing: 0.75 m (SLC) and 3.75 m (MLC). Range resolution: 0.75 m (SLC) and 3.75 m (MLC). Azimuth pixel spacing: 0.25 m (SLC) and 4 m (MLC). Azimuth resolution: 0.30 m (SLC) and 4.8 m (MLC). Range extension: 3 km. Azimuth extension: 8 km.
Figure 6. Graphical representation of the results reported in Table 3, that is, processing time (minutes) of the two considered strategies with a single job, for different numbers of focused pixels. Azimuth resolution: 30 cm.
Figure 7. Processing time (minutes) of the two considered strategies implemented through a multi-job configuration and using three different output grids characterized by different numbers of pixels: (a) 4000 × 250 pixels; (b) 4000 × 4000 pixels; (c) 4000 × 32,000 pixels. Azimuth resolution: 30 cm.
Figure 8. Focusing time for the parallel implementation of the pixel-wise strategy, using 1 to 256 jobs. Results are presented for the following output grid configurations: (a) 4000 × 250 pixels; (b) 4000 × 1000 pixels; (c) 4000 × 4000 pixels. Blue markers indicate the focusing times, in minutes, as a function of the total number of jobs, shown on a logarithmic scale. In each case, the red dot corresponds to half of the focusing time of the previous job configuration. Azimuth resolution: 30 cm.
Figure 9. As in Figure 8, but for output grids of (a) 4000 × 8000 pixels; (b) 4000 × 16,000 pixels; (c) 4000 × 32,000 pixels.
Figure 10. Processing time (minutes) of the parallel implementation of the pixel-wise strategy as a function of the total number of focused points: (a) job configurations from 1 to 256; (b) job configurations from 16 to 256. In (b), the corresponding Half Time Law for each job configuration is also shown (square markers). Azimuth resolution: 30 cm.
Table 1. Main parameters of the raw data for the Grumento X-band dataset acquired by the MIPS system.
Carrier frequency [GHz]: 9.55
Azimuth sampling [m]: 0.04
Range sampling [m]: 0.75
Number of azimuth samples: ~200,000
Number of range samples: ~4000
Table 2. Main processing settings for the Grumento X-band dataset acquired by the MIPS system.
Output grid size [pixels]: 4000 × 250; 4000 × 1000; 4000 × 4000; 4000 × 8000; 4000 × 16,000; 4000 × 32,000
Processed range resolution [m]: 0.75 (all grids)
Processed azimuth resolution [m]: 0.30 (all grids)
Range pixel spacing [m]: 0.75 (all grids)
Azimuth pixel spacing [m]: 0.25 (all grids)
Covered area, range [m]: 3000 (all grids)
Covered area, azimuth [m]: 62.5; 250; 1000; 2000; 4000; 8000 (for the six grids, respectively)
Table 3. Processing times (in minutes) relevant to the two implemented strategies, executed in a single-job configuration. The relative differences are computed using the pixel-wise approach as the reference. Azimuth resolution: 30 cm.
Output Grid Size [Pixels] | Pixel-Wise Processing Time [min] | Matrix-Wise Processing Time [min] | Relative Difference [%]
4000 × 250     | 12.12   | 14.45   | 19%
4000 × 1000    | 48.05   | 58.63   | 22%
4000 × 4000    | 192.35  | 260.90  | 36%
4000 × 8000    | 388.52  | 523.35  | 35%
4000 × 16,000  | 781.25  | 1052.18 | 35%
4000 × 32,000  | 1560.37 | 2106.73 | 35%
