Article

Theoretical Modeling and Experimental Study on Low-Altitude Slow-Small Target (LSS) Detection Based on Broadband Spectral Modulation Imaging

School of Physics, Changchun University of Science and Technology, Changchun 130022, China
*
Author to whom correspondence should be addressed.
Sensors 2026, 26(3), 909; https://doi.org/10.3390/s26030909
Submission received: 16 December 2025 / Revised: 21 January 2026 / Accepted: 27 January 2026 / Published: 30 January 2026
(This article belongs to the Section Optical Sensors)

Abstract

The detection of low-altitude slow-small (LSS) targets, such as drones, is challenged by their small radar cross-section (RCS) and low signal-to-clutter ratio (SCR), resulting in short effective range and susceptibility to background clutter in complex environments. To overcome the limitations of conventional radar and electro-optical methods, this paper proposes a novel detection theory based on broadband spectral modulation imaging (BSMI). We analyze the recognition accuracy for drone targets across different zenith angles and detection ranges through numerical simulations. A snapshot-based BSMI detection system was designed and implemented, with experiments conducted under consistent conditions for validation. Results demonstrate that the system achieves over 90% classification accuracy, confirming the theory’s effectiveness. This study significantly enhances detection probability and suppresses false alarms for low-altitude drones, providing a viable technical solution for monitoring unauthorized aerial activities.

1. Introduction

The detection of low-altitude, slow, and small (LSS) targets, including drones and other low-altitude aircraft, is of paramount importance for urban security, airspace management, and public safety [1]. With the rapid proliferation of drone technology for personal and commercial applications, there has been a significant increase in unauthorized flights, often referred to as “clandestine” or “unregulated” flights, particularly within restricted zones, posing serious challenges to urban governance [2]. Characterized by low observability, slow velocity, and low-altitude flight patterns, LSS targets substantially complicate detection efforts [3]. Consequently, the development of specialized detection systems capable of accurately identifying such targets is critically needed. These systems are essential for safeguarding citizen privacy, preventing illicit activities, and fostering the sustainable development of urban environments.
Current methodologies for detecting LSS targets primarily fall into three categories: acoustic, radar, and electro-optical (including visible and infrared) techniques. Acoustic detection identifies targets by analyzing their acoustic signatures, with studies by Svanström and Zhang demonstrating effective drone recognition [4,5]. However, its utility is constrained to short ranges, typically under 200 m. Radar techniques, including Track-Before-Detect (TBD) and noise-whitening processes, leverage the Doppler effect to estimate kinematic and geometric target attributes [6,7]. Despite their robustness, radar systems are frequently susceptible to urban clutter and avian interference, resulting in elevated false alarm rates. In infrared detection, methods such as two-dimensional least mean square (TDLMS) filtering and grayscale difference (GSD) algorithms have been employed to suppress background noise and enhance dim targets [8]. Nevertheless, the low thermal emissions characteristic of electric drones lead to poor thermal contrast with the environment, thereby reducing detection accuracy [9,10]. Visible-light detection outperforms infrared at close ranges and supports a spectrum of algorithms from traditional frame differencing to modern deep learning models [11,12,13,14]. Its performance, however, degrades over long distances due to atmospheric disturbances and occlusions. Collectively, each approach exhibits distinct limitations concerning operational range, false alarm control, or environmental adaptability, underscoring the pressing need for more robust LSS detection solutions.
Parallel to the evolution of conventional methods, spectral imaging technology has witnessed significant advancements in the realm of drone detection and tracking. Early research predominantly relied on traditional narrowband scanning or staring spectral imaging, which faced limitations in dynamic target recognition and long-range detection energy efficiency. Over the past five years, breakthroughs in snapshot spectral imaging and broadband modulation-demodulation technology have directed the field toward new developmental trajectories. In 2022, Wei Kai et al. utilized a liquid crystal tunable filter (LCTF) to analyze specific drone materials, achieving effective detection with dedicated algorithms [15]. In 2024, Sun et al. proposed a hyperspectral tracking algorithm for low-altitude drones that integrated deep learning with an improved kernel correlation filter, thereby enhancing both recognition accuracy and tracking efficiency [16]. Of particular note, broadband modulation-demodulation spectral imaging technology employs micro-nano structures such as metamaterials, quantum dots, or photonic crystals to achieve spectral encoding, endowing each pixel with unique broadband transmission characteristics and enabling highly efficient spectral information acquisition at the hardware level [17,18]. In 2025, Yu et al. demonstrated, for the first time, the identification of dynamic drone targets based on spectral features using snapshot spectral imaging, highlighting the application potential of this technology for low-altitude target detection [19]. These advancements signify a transition of spectral imaging from traditional fine spectral analysis towards novel detection systems that combine high energy efficiency, rapid response, and strong environmental adaptability, offering a new pathway for the robust detection of LSS targets against complex backgrounds.
The innovation of this work lies in the development of a snapshot spectral imaging system for drone target detection against complex backgrounds—such as clouds, forest canopies, and the sky—based on the rapidly advancing broadband modulation-demodulation spectral imaging technology. The primary contributions of this study are threefold:
First, systemic integration innovation. We have, for the first time, integrated a broadband modulation spectral chip [20] with a large field-of-view lens to construct a long-range LSS target detection system. This design overcomes the low energy efficiency of narrowband spectral detectors at long distances and mitigates the limitations of traditional scanning or staring spectral cameras in recognizing dynamic targets [10,21,22,23,24].
Second, enhanced detection mechanism and performance. By exploiting the distinct spectral differences among drones, interfering objects, and sky radiance [25,26,27,28], the proposed method effectively suppresses false alarms caused by cluttered backgrounds, enabling precise LSS target identification. Experimental results demonstrate that the system achieves both precision and recall rates exceeding 90% in real-world scenarios, representing a significant enhancement in overall detection performance.
Third, systematic evaluation of environmental adaptability. Through simulations and experimental tests, we have systematically assessed the impact of various zenith angles and atmospheric transmittance conditions on classification accuracy, providing crucial insights for the system’s deployment in diverse and variable practical environments.
This study not only validates the efficacy of broadband modulation-demodulation spectral imaging technology in addressing the challenges of long-range LSS target detection within complex backgrounds but also lays a key technological foundation for its engineering application and the advancement of intelligent perception capabilities in photoelectric systems.

2. Materials and Methods

2.1. Wide-Spectrum Modulation Spectral Imaging Detector: System Description

The wide-spectrum modulation spectral imaging detector, a key component in computational spectroscopy, is structured as shown in Figure 1. It integrates a wide-spectrum modulation filtering unit—composed of materials such as plasmonic filters [29,30,31], metasurfaces [32,33,34], or photonic crystal plates [35,36]—with a CMOS image sensor. Each filter unit modulates incident broadband light, overcoming the traditional compromise between energy throughput and spectral resolution in narrowband systems and thereby enhancing optical energy utilization. Let F(λ) denote the incident spectrum and Xi(λ) the transmission spectrum of the i-th band-modulating filter unit, where i = 1, 2, 3, …, N (N = 9 in Figure 1), and let ni represent the noise signal in channel i. The response curve of the CMOS image sensor is S(λ). The signal intensity Ii reaching each channel can then be described as:
I_i = ∫_{λ1}^{λ2} F(λ) X_i(λ) S(λ) dλ + n_i
Discretizing the above equation yields:
I_i = Σ_λ f(λ) x_i(λ) s(λ) + n_i
Let ai(λ) = xi(λ)s(λ). The function ai(λ), which represents the combined system response, can be determined empirically by measuring its specific transmission spectrum. Consequently, the signal incident spectral irradiance I can be represented by a system of linear equations:
I = A F(λ) + n
In this formulation, I ∈ R^(N×1), A ∈ R^(N×M), and F(λ) ∈ R^(M×1), with N being the number of wideband filter units and M the number of spectral sampling points. The case of M < N yields an overdetermined system that can be solved directly in the least-squares sense, whereas the typical underdetermined case (M > N) must be solved using regularization or data-driven methods such as compressed sensing or convolutional neural networks.
In spatially modulated multispectral imaging systems, the inherently ill-posed nature of the system response matrix fundamentally limits spectral reconstruction quality. Directly applying traditional least squares methods yields solutions that are highly sensitive to noise, significantly compromising spectral accuracy. To mitigate this, we introduce a regularized least squares approach that incorporates appropriate constraints to embed prior knowledge into the inversion process. This strategy not only improves numerical stability and computational efficiency but also preserves spectral fidelity. By processing the spectral image across nine distinct bands, the proposed method achieves high-quality reconstruction of the target’s spectral information.
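The regularized inversion described above can be sketched as follows. This is a minimal illustration with a synthetic response matrix and a toy Gaussian spectrum (both assumptions, not measured data), using Tikhonov regularization as one representative regularized least-squares scheme:

```python
import numpy as np

# Illustrative dimensions: N = 9 broadband channels, M = 101 spectral
# samples (400-900 nm at 5 nm). A is a random stand-in for the measured
# system response matrix.
rng = np.random.default_rng(0)
N, M = 9, 101
A = rng.random((N, M))                       # channel x wavelength response
lam = np.linspace(400, 900, M)
f_true = np.exp(-0.5 * ((lam - 650) / 60.0) ** 2)   # toy target spectrum
I = A @ f_true + rng.normal(0, 0.01, N)      # noisy 9-channel measurement

# Tikhonov-regularized least squares: minimize ||A f - I||^2 + mu ||f||^2,
# with closed-form solution f = (A^T A + mu I)^(-1) A^T I.
# mu weights the smoothness/energy prior against data fidelity.
mu = 1e-2
f_hat = np.linalg.solve(A.T @ A + mu * np.eye(M), A.T @ I)
```

Larger mu stabilizes the inversion against noise at the cost of some spectral fidelity; in practice it would be tuned against calibration targets.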

2.2. Calibration of the Wide-Spectrum Modulated Spectral Detector

The calibration of the wide-spectrum modulated spectral detector establishes the critical correspondence between incident radiation and sensor response, laying a theoretical foundation for accurate spectral reconstruction. The calibration system employs an optical configuration consisting of a bromine-tungsten lamp (400–2500 nm, Zolix, Beijing, China) coupled with a monochromator. Quasi-monochromatic light is generated across the detector’s operational band of 400–900 nm at 5 nm intervals. The light is homogenized by an integrating sphere before illuminating the detector surface, ensuring uniform field distribution. At each wavelength step, the digital responses of all modulation units are recorded simultaneously, thereby systematically acquiring the spectral transmission function of each filter element under controlled conditions.
Based on the complete dataset, the system response matrix A is constructed. Each column of the matrix corresponds to the normalized spectral response curve of a filter unit sampled at 5 nm resolution over the 400–900 nm range. This discrete representation quantitatively characterizes the modulation properties of the detector and forms the forward model for spectral encoding. Validation experiments using a standard reflectance white board were conducted, and the results were compared with measurements from a widely-used high-precision spectrometer (Ocean Optics QE65pro, Orlando, FL, USA). The Goodness of Fit Coefficient (GFC) exceeded 90%, demonstrating the system’s capability for high-accuracy spectral reconstruction across the 400–900 nm band. It should be noted that, for the purpose of LSS target detection in this study, spectral reconstruction does not aim for continuous high resolution over the full band. Instead, reconstruction is based on nine uniformly divided spectral segments—a level of accuracy that fully meets the requirements of the subsequent classification and identification tasks.
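The assembly of the response matrix from the monochromator sweep can be sketched as below. The readout function is a hypothetical placeholder for the hardware acquisition step; only the sweep grid (400–900 nm at 5 nm) and the per-channel normalization follow the procedure described above:

```python
import numpy as np

wavelengths = np.arange(400, 905, 5)   # 400-900 nm at 5 nm steps (M = 101)
N = 9                                  # number of modulation channels

def read_channel_responses(wl):
    # Hypothetical stand-in for recording the N channel digital responses
    # under quasi-monochromatic illumination at wavelength wl.
    return np.abs(np.sin(wl / 100.0 + np.arange(N)))

# Each column of this table is one filter unit's response curve over
# wavelength; transposing gives the forward operator A (N x M).
responses = np.stack([read_channel_responses(wl) for wl in wavelengths])
A = responses.T
A = A / A.max(axis=1, keepdims=True)   # normalize each channel's curve
```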

2.3. Theory of Small-Target Detection via Wide-Spectrum Modulation

The performance of long-range small-target detection depends critically on irradiance conditions and atmospheric transmittance, which vary with season, latitude, and aerosol composition. The proposed theory utilizes the high optical throughput and spectral encoding capabilities of the wide-spectrum modulation detector to establish a spectral transfer equation. This equation models the target’s reflectance across the detector’s broadband channels, with its specific parameters defined below.
I_1(t,λ,θ) = ∫_{λ1}^{λ2} sun(t,λ,θ) · R(λ) · AirT(t,λ,θ) · LenT(λ) · X_1(λ) · S(λ) dλ + n_1
I_2(t,λ,θ) = ∫_{λ1}^{λ2} sun(t,λ,θ) · R(λ) · AirT(t,λ,θ) · LenT(λ) · X_2(λ) · S(λ) dλ + n_2
I_3(t,λ,θ) = ∫_{λ1}^{λ2} sun(t,λ,θ) · R(λ) · AirT(t,λ,θ) · LenT(λ) · X_3(λ) · S(λ) dλ + n_3
I_i(t,λ,θ) = ∫_{λ1}^{λ2} sun(t,λ,θ) · R(λ) · AirT(t,λ,θ) · LenT(λ) · X_i(λ) · S(λ) dλ + n_i
The key variable in the spectral transfer equation is Ii(t,λ,θ), representing the signal intensity at the i-th wideband modulation unit. It varies with time (expressed as a seasonal/monthly parameter t), wavelength λ, and the solar zenith angle θ. The equation integrates several physical components: the illumination source sun(t,λ,θ), the target’s reflective property R(λ), the transmission losses through the atmosphere AirT(t,λ,θ) and the lens LenT(λ), the spectral modulation Xi(λ), the sensor response S(λ), and the noise ni per channel.
The target reflectance spectrum, when processed through the spectral transfer equation, produces a uniquely modulated signal across the nine broadband channels. This encoding strategy enhances the original spectral information and significantly increases detector sensitivity. Consequently, the system surpasses the detection range of traditional RGB and narrowband methods using the same image sensor. Thus, wideband modulation extends system range and improves recognition performance via optimized spectral processing.
Figure 2 illustrates the complete imaging process of a UAV target captured by a snapshot spectral camera. Solar radiation incident upon the target generates a reflected spectral signal, which is then modulated by atmospheric transmittance before entering the camera system. The camera, comprising a lens assembly and a multispectral detector, further modifies this signal. As the light passes through the lens, its spectral profile is shaped by the lens’s optical transfer function and transmittance curve. This doubly modulated signal—by the atmosphere and the lens—is then projected onto the multispectral detector. There, it undergoes a third modulation by the spectral channel filter layers. Finally, photoelectric conversion at the silicon substrate yields the wide-spectrum modulated signal of the UAV target.
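The chain of modulations above (source, target reflectance, atmosphere, lens, filter, sensor) reduces to a per-channel wavelength integral. The sketch below implements that discrete integral with synthetic placeholder curves (all toy assumptions, not MODTRAN output or measured responses):

```python
import numpy as np

lam = np.arange(400, 905, 5).astype(float)         # nm, 5 nm grid
d_lam = 5.0                                        # integration step (nm)

# Toy stand-ins for the physical curves in the spectral transfer equation.
sun   = 1.0 + 0.3 * np.sin(lam / 80.0)             # solar irradiance
R     = 0.4 + 0.2 * np.cos(lam / 120.0)            # target reflectance
air_t = np.exp(-0.0005 * (900.0 - lam) / 100.0)    # atmospheric transmittance
len_t = np.full_like(lam, 0.92)                    # lens transmittance
S     = np.clip((lam - 380.0) / 300.0, 0.0, 1.0)   # sensor response

rng = np.random.default_rng(1)
X = rng.random((9, lam.size))                      # 9 broadband modulation curves

# Discrete wavelength integral per channel, plus Gaussian channel noise n_i.
I = (X * (sun * R * air_t * len_t * S)).sum(axis=1) * d_lam + rng.normal(0, 0.1, 9)
```

Swapping in measured reflectance spectra and MODTRAN transmittance curves in place of the toy arrays would reproduce the forward model used to generate the simulated datasets later in the paper.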

2.4. SVM-Based Classification and Evaluation Model for Small-Target Modulated Spectra

Based on the previously established broadband modulation small target detection theory, this section employs Support Vector Machine (SVM) [31,37,38] to classify and identify UAV targets from background scenes (such as sky, clouds, birds, etc.). The input to the algorithm model is the nine-band spectral curve data of targets or backgrounds including UAVs, cloud layers, and birds. Firstly, spectral data of the targets and background are acquired within each modulation channel, followed by feature extraction from the spectral modulation signal curves per cycle. The extracted features are normalized using Z-score standardization to eliminate scale differences among feature dimensions, thereby enhancing data consistency and model robustness. The overall identification workflow based on the SVM algorithm is illustrated in Figure 3.
A linear kernel SVM was selected as the classifier for this spectral classification task, based on its computational efficiency and proven effectiveness with high-dimensional spectral features. To ensure robust performance evaluation, the dataset was partitioned using stratified k-fold cross-validation, which preserves the class distribution across all folds. Furthermore, the multi-class classification problem was decomposed into a set of binary tasks through a label binarization strategy, enabling the use of binary SVM classifiers for multi-class recognition.
To classify the preprocessed spectral features, a linear kernel SVM is employed. The algorithm aims to find an optimal decision hyperplane in the feature space that maximally separates the two classes by maximizing the classification margin—the minimum distance from any sample to the hyperplane. Under ideal conditions, SVM achieves perfect linear separation by minimizing the Euclidean norm of the weight vector, thereby enhancing generalization. In practical applications, however, a soft-margin strategy is adopted to handle non-separable cases and noise, introducing slack variables to allow some samples to violate margin constraints. This leads to an optimization problem that balances structural risk and empirical loss, regulated by a key parameter controlling the trade-off between model complexity and classification error. Algorithmically, SVM uses the hinge loss to linearly penalize misclassifications. Its convex optimization nature guarantees a global optimum, and the solution depends only on a subset of training samples called support vectors, providing desirable sparsity, computational efficiency, and strong generalization performance.
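The pipeline described above (z-score standardization, soft-margin linear SVM, stratified k-fold evaluation) can be sketched with scikit-learn. The nine-band spectra and class means here are synthetic placeholders; C = 1.0 is an assumed default for the margin/error trade-off, not a value reported by the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic 9-band spectra: 40 samples for each of four classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (40, 9)) for m in (0.0, 1.0, 2.0, 3.0)])
y = np.repeat(["cloud", "drone", "sky", "tree"], 40)

# StandardScaler performs the z-score normalization; SVC with a linear
# kernel is the soft-margin classifier (C balances margin vs. slack).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

# Stratified k-fold preserves the class balance in every fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
```

scikit-learn's `SVC` handles the multi-class case internally via pairwise binary classifiers, mirroring the label-binarization strategy described above.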
The classification performance is evaluated by calculating the ROC (Receiver Operating Characteristic) curves and AUC (Area Under the (ROC) Curve) values for each class, as well as computing the macro-average and micro-average ROC curves to comprehensively reflect the model’s overall performance. Additionally, to further analyze the model’s classification effectiveness, a confusion matrix is generated to quantify the prediction results, allowing for the calculation of classification accuracy and recall for each target class, thereby revealing the classifier’s recognition capability for different targets. The experimental results are visually presented through ROC curves, AUC values, and confusion matrices, providing an efficient analytical framework and performance evaluation basis for the spectral classification of small targets.
The SVM classifier is applied to the modulated spectra of drone targets, sky, and white clouds, and its recognition performance is evaluated using accuracy, recall, the ROC curve, and the confusion matrix.
Accuracy:
Accuracy = (TP + TN) / (TP + FP + FN + TN)
Recall:
Recall = TP / (TP + FN)
TP is a true positive example, TN is a true negative example, FP is a false positive example, and FN is a false negative example.
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
TPR is the true positive rate, and FPR is the false positive rate. The ROC curve shows the relationship between TPR and FPR at different thresholds, where the horizontal axis represents FPR and the vertical axis represents TPR, and the coordinate range is 0 to 1. The better the model performance, the closer its ROC curve is to the upper left corner and away from the diagonal line (the straight line from (0, 0) to (1, 1)). The AUC value represents the area under the ROC curve. The closer the value is to 1, the stronger the classification and recognition ability of the model.
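The metric definitions above can be collected into a small helper. The counts in the usage line are hypothetical, chosen only to exercise the formulas (note that FPR divides by the negative-class total FP + TN):

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, recall (= TPR), and FPR from binary confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)          # true positive rate
    fpr = fp / (fp + tn)             # false positive rate
    return accuracy, recall, fpr

# Hypothetical counts for one class in a 160-sample evaluation.
acc, rec, fpr = binary_metrics(tp=39, tn=115, fp=1, fn=5)
```

Sweeping the decision threshold and recording (FPR, TPR) pairs traces out the ROC curve; the AUC is the integral of that curve.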

3. Simulation: Setup, Results, and Analysis

3.1. Simulation Parameters and Setup

Based on the theoretical framework of wide-spectrum modulated small target recognition, a simulation scenario of a drone target against a sky background was constructed. The key simulation parameters comprised the reflectance spectrum of the drone material, the solar irradiance spectrum, atmospheric transmittance, and lens transmittance. A wide-spectrum modulation detector with nine distinct spectral channels was adopted for the detection process. The specific simulation parameters are summarized in Table 1.
The drone materials were selected from commonly used aerospace composites, including fiberglass, polypropylene, carbon fiber, and titanium alloy. The target scenario incorporated both sky and cloud backgrounds, while interference scenarios simulated typical forest occlusion conditions. For atmospheric transmission modeling, low-latitude summer aerosol parameters were adopted along with a rural visibility setting of 23 km. Solar radiation spectra were configured at zenith angles of 0° and 30°. Atmospheric transmittance curves and solar radiance spectra were simulated using MODTRAN, generating full-band spectral curves that account for atmospheric absorption and scattering effects. By integrating material properties, environmental parameters, and atmospheric data, physics-based simulations were performed using the spectral transfer equation established in Section 2.3, ultimately producing modulated spectral datasets for both target and interference scenarios.
Noise is a key factor affecting the fidelity of simulations. To align with the detector used in experiments, a noise model based on actual hardware parameters was integrated into the simulation to improve reliability. The key detector parameters are as follows: a bit depth of 10 bits, a full-well capacity corresponding to a maximum digital number (DN) of 1023, and a measured dark noise standard deviation of approximately 64 DN. An additive Gaussian noise term was introduced to emulate practical signal degradation, with its amplitude varying dynamically—randomly fluctuating between 0% and 6% of the original signal value across channels—thereby more accurately representing the non-stationary noise characteristics under operational conditions.
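The noise injection described above (10-bit quantization with additive Gaussian noise whose amplitude fluctuates between 0% and 6% of the signal per channel) can be sketched as follows; the clean channel values are made-up examples:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_detector_noise(signal_dn, max_frac=0.06, dn_max=1023):
    """Add per-channel Gaussian noise with sigma drawn uniformly in
    [0, max_frac] of the signal, then quantize to the 10-bit DN range."""
    sigma = rng.uniform(0.0, max_frac, size=signal_dn.shape) * signal_dn
    noisy = signal_dn + rng.normal(0.0, 1.0, size=signal_dn.shape) * sigma
    return np.clip(np.round(noisy), 0, dn_max)

# Hypothetical clean channel values for a nine-channel measurement.
clean = np.array([200.0, 450.0, 800.0, 1023.0, 120.0, 300.0, 600.0, 50.0, 990.0])
noisy = add_detector_noise(clean)
```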

3.2. Simulation Results and Discussion

Simulations were conducted under four representative scenarios, formed by combining solar zenith angles (0° and 30°) with detection distances (100 m and 200 m), using the parameters defined in Table 1. The target set included four common drone materials—fiberglass, polypropylene, carbon fiber, and titanium alloy—whose reflectance spectra were precisely measured with a standard spectrometer (Ocean QE65 Pro) to build a dedicated spectral library. To emulate realistic flight conditions, background and interference spectra were also incorporated, covering three typical categories: trees (terrestrial occlusion), blue sky (Rayleigh scattering), and white clouds (Mie scattering). Following the theory of wide-spectrum modulated point target detection, the spectral transfer equation was applied to simulate the system’s response, generating modulated channel values for each target, interference, and background spectrum. The resulting modulated spectral data for the four drone types were subsequently classified using a Support Vector Machine (SVM) algorithm. Final recognition performance was evaluated quantitatively through a confusion matrix and ROC curve.
The classification performance of the SVM algorithm for the modulated spectra of drones, trees, clouds, and the sky was evaluated via confusion matrices. In each simulation scenario (defined by distance and zenith angle), a balanced dataset of 160 spectral samples (40 per target class) was used. Figure 4a presents the confusion matrix obtained under a solar zenith angle of 0° and a detection distance of 100 m, where rows denote true labels and columns represent predicted labels. Diagonal entries indicate correct classifications, with counts of 40 (clouds), 39 (drones), 35 (sky), and 39 (trees). Off-diagonal misclassifications include one drone identified as a tree, five sky samples as trees, and one tree sample as sky. Corresponding per-class accuracy values are 1.00, 0.975, 0.875, and 0.975, with recall rates of 1.00, 1.00, 0.972, and 0.86. These results confirm the effectiveness of the SVM-based classifier under the specified simulation conditions.
Further analysis of the confusion matrices under varying solar zenith angles and detection distances reveals distinct performance trends. As shown in Figure 4b, under a solar zenith angle of 30° and a detection distance of 100 m, the counts of correctly classified samples for clouds, drones, sky, and trees are 40, 39, 35, and 38, respectively. While all cloud samples are correctly identified, one drone is misclassified as a tree. For the sky category, three samples are mistaken for clouds and two for trees; in the tree category, one sample is misclassified as a drone and another as sky. The corresponding accuracy rates are 1.00, 0.975, 0.875, and 0.950, with recall rates of 0.93, 0.975, 0.972, and 0.926. These results indicate that as the solar zenith angle changes, the misclassification rate tends to increase, primarily due to altered solar spectral irradiance, which subsequently affects the modulation characteristics of the target reflection spectrum.
When the solar zenith angle is fixed at 0° and the detection distance extends to 200 m, the numbers of correctly classified samples for the four targets are 40, 38, 36, and 38, corresponding to accuracy rates of 1.00, 0.95, 0.90, and 0.95, and recall rates of 0.975, 0.974, 0.923, and 0.926, respectively. Compared with the 100-m case at the same zenith angle, classification accuracy declines for all targets except the sky category. This decrease is attributed to the increased influence of atmospheric transmittance on spectral band characteristics over longer distances. Nevertheless, as the detection range remains relatively short, overall classification performance remains satisfactory.
Under the most challenging simulated condition of 200 m detection distance and 30° solar zenith angle, the classification performance for all four target categories declined, with accuracy rates of 1.00, 0.925, 0.900, and 0.900, and recall rates of 0.952, 0.925, 0.947, and 0.900, respectively. All metrics stabilized around 0.9, demonstrating that the classification model maintains balanced and reliable accuracy even under degraded conditions. In summary, while variations in solar zenith angle and detection distance influence modulation spectrum classification, the model consistently sustains high accuracy within a reasonable operational range.
Based on the classification results under different zenith angles and detection distances, the corresponding ROC curves were calculated, as shown in Figure 5a. At a solar zenith angle of 0° and a detection distance of 100 m, the AUC values for targets such as drones and trees reach 0.99. This indicates that the reflectance spectra of different targets exhibit distinct characteristics after modulation through the spectral transfer equation, and the SVM classifier effectively extracts these features, achieving excellent classification performance. When the distance increases to 200 m while the zenith angle remains 0°, the ROC curves shift slightly downward, yet the detection capability is not substantially degraded. Since detector noise is not included in the simulation, signal attenuation has limited impact on classification accuracy. When the distance is held constant and the zenith angle increases to 30°, the ROC curves for each target show modest fluctuations relative to the 0° case but remain close to the upper-left corner. Variations in solar illumination angle influence the modulation of target spectral features but do not cause a pronounced drop in classification accuracy. These results demonstrate that although the modulated spectral signal attenuates and fluctuates with increasing distance and zenith angle, the SVM model based on modulated spectral features maintains strong robustness and is well-suited for detecting LSS targets in complex environments.

4. Experimental Validation

4.1. Experimental System Setup

The developed snapshot-type LSS target detection system, based on wide-spectrum modulation, integrates three core components: a wide-spectrum modulation spectral chip, a wide-band large-field-of-view lens, and a 360° rotating mount. The spectral chip—a nine-channel imaging spectral detector fabricated by Changchun University of Science and Technology [10]—covers the 400–900 nm range with a pixel size of 1.8 μm × 1.8 μm and a spatial resolution of 1600 × 1200 pixels, supporting spectral image acquisition at 30 frames per second. The matched lens has a focal length of 2.48 mm, providing horizontal, vertical, and diagonal fields of view of 45.1°, 59.6°, and 76.5°, respectively. The entire optical assembly is mounted on a rotating bracket, enabling full-azimuth detection and identification of drone targets.
Upon completion of the system assembly, experimental validation was conducted under conditions consistent with the simulation. The tests were performed in a suburban environment with an atmospheric visibility of approximately 23 km. All experiments were carried out in June at a fixed detection distance of 100 m, with measurements taken at solar zenith angles of 0° (solar noon) and 30° (early afternoon). During testing, the system was securely positioned on the ground to capture drone targets against complex backgrounds including blue sky, white clouds, and woodland. To ensure adequate signal-to-noise ratio, imaging parameters were adjusted so that the average grayscale value of the spectral images remained between 180 and 220. Modulation spectrum data corresponding to the drone target, sky, and clouds were then extracted from the acquired images for subsequent analysis.
To ensure the precision consistency between experimental and simulation results, under stable experimental conditions, we conducted multiple tests on the scene data. Each test involved multiple captures of spectral images, which were then averaged to enhance the signal-to-noise ratio of the imagery. Furthermore, during the spectral reconstruction process, multiple samples were taken from different regions of the same target to ensure high accuracy in extracting the target spectrum.
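The multi-capture averaging used to raise the SNR rests on the fact that averaging K frames with independent noise reduces the noise standard deviation by a factor of √K. A minimal sketch with a synthetic constant scene (frame count and scene values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((120, 160), 200.0)          # toy scene at a constant 200 DN

# 16 noisy captures with the ~64 DN dark-noise level quoted earlier.
frames = truth + rng.normal(0.0, 64.0, (16,) + truth.shape)
averaged = frames.mean(axis=0)

residual_single = frames[0].std()           # ~64 DN in one capture
residual_avg = averaged.std()               # reduced by roughly sqrt(16)
```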
Figure 6 details the process through which the detection system acquires modulated spectral information from aerial drone targets and background elements such as the sky, white clouds, and green vegetation. First, under set distance conditions, the system captures raw multispectral image data. This data reflects the response of the target within the periodic spectral modulation channels after being shaped by the system’s spectral transfer function. The acquired raw data is then processed to reconstruct spectral images for each modulation channel, producing multi-condition spectral representations of the target and its surrounding interference scenarios. Subsequently, gray values from selected regions of the modulated spectral data are extracted to derive their corresponding modulated spectral profiles. Experimental results reveal clear distinctions between the modulated spectral features of drone targets and background objects such as the sky, clouds, and vegetation. These differences provide a reliable data foundation and discriminative basis for subsequent target recognition and classification.
Figure 7 presents the spectral images extracted from modulated data across different channels. Due to varying degrees of spectral modulation among channels, their grayscale values exhibit distinct profiles. Different elements within the scene—such as the drone target, sky background, white clouds, and forest—also show characteristic grayscale representations in each channel. Although the drone occupies only a small proportion of pixels in the image, it displays noticeable grayscale variation across multiple modulation channels, enabling it to be distinguished from background and other interfering objects. This result further confirms the feasibility and effectiveness of using target-modulated spectra for recognition and classification. On one hand, modulated spectra provide strong discriminability, effectively enhancing target-background contrast. On the other hand, the multispectral information from the nine channels offers richer spectral features, substantially improving the recognition capability and classification accuracy for typical targets such as drones. Consequently, modulated spectral-based multi-channel imaging demonstrates promising potential for detecting and identifying drone targets in complex backgrounds.
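A minimal sketch of the gray-value extraction described above, assuming the reconstructed channel images are stacked into a `(C, H, W)` array (C = 9 for the system described here). The ROI format and the max-normalization are illustrative choices, not the authors' exact implementation:

```python
import numpy as np

def modulated_spectrum(channel_images, rois):
    """Estimate a target's modulated spectral curve.

    channel_images: (C, H, W) stack, one image per modulation channel.
    rois: (row0, row1, col0, col1) windows covering the same target;
          averaging several regions stabilizes the estimate.
    Returns a max-normalized C-element curve so that scene elements of
    different brightness can be compared by line shape alone.
    """
    curves = [channel_images[:, r0:r1, c0:c1].mean(axis=(1, 2))
              for r0, r1, c0, c1 in rois]
    curve = np.mean(curves, axis=0)
    return curve / curve.max()

# Toy 9-channel stack: channel k is a flat image of gray value k + 1
stack = np.arange(1, 10, dtype=np.float64)[:, None, None] * np.ones((9, 8, 8))
spec = modulated_spectrum(stack, [(0, 4, 0, 4), (4, 8, 4, 8)])
```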

4.2. Experimental Results and Discussion

Based on the foregoing simulation results, experimental validation was performed using the wide-spectrum modulation detector. In the experiment, the modulated spectral curves of targets were extracted from multi-scene spectral modulation images, and classification was conducted using an SVM algorithm with identical design parameters. The confusion matrices derived from the experimental results are shown in Figure 8. To ensure data randomness and experimental repeatability, datasets of different sizes were collected under varying environmental conditions, comprising 320, 240, 200, and 200 samples for the four distance and zenith-angle conditions, respectively. Each dataset contained an equal number of samples from the four target categories to maintain balance for the classification task.
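The classification stage can be sketched with scikit-learn's `SVC`. The paper's exact SVM design parameters are not reproduced here, so the RBF kernel, the value of `C`, and the synthetic 9-channel curves below are illustrative assumptions rather than the authors' configuration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four classes (clouds, drones, sky, trees):
# one 9-channel "modulated spectral curve" prototype per class, plus noise.
prototypes = rng.uniform(0.2, 0.9, size=(4, 9))
X = np.vstack([p + 0.02 * rng.standard_normal((80, 9)) for p in prototypes])
y = np.repeat(np.arange(4), 80)  # 80 samples per class, 320 total

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # assumed hyperparameters
clf.fit(X, y)
cm = confusion_matrix(y, clf.predict(X))  # 4x4 matrix, rows = true classes
```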
Figure 8a presents the classification results for the four target categories: clouds, drones, sky, and trees. The numbers of correctly classified samples are 77, 76, 74, and 76, corresponding to accuracy values of 0.974, 0.95, 0.913, and 0.95, and recall rates of 1, 0.926, 0.936, and 0.926, respectively. For drone targets specifically, the accuracy and recall are 0.95 and 0.926. The primary factor affecting drone classification accuracy is the misclassification of sky samples as drones. This phenomenon is largely attributable to atmospheric transmittance and detector noise under long-distance testing conditions, which reduce the signal-to-noise ratio of drone targets in the modulated spectral images. In particular, at the edge regions of target pixels, the interference from sky background pixels diminishes the discriminability of the modulated spectral curve, leading to the misclassification of sky background as drone targets.
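The per-class accuracy and recall values quoted above follow directly from a confusion matrix; a small helper, written here purely for illustration, makes the definitions explicit:

```python
def per_class_metrics(cm):
    """Precision and recall for each class of a square confusion matrix.

    cm[i][j] = number of samples with true class i predicted as class j.
    precision_k = cm[k][k] / column-sum k; recall_k = cm[k][k] / row-sum k.
    """
    n = len(cm)
    col = [sum(cm[i][j] for i in range(n)) for j in range(n)]
    row = [sum(cm[i]) for i in range(n)]
    precision = [cm[k][k] / col[k] if col[k] else 0.0 for k in range(n)]
    recall = [cm[k][k] / row[k] if row[k] else 0.0 for k in range(n)]
    return precision, recall

# Toy 2-class example: 8/10 and 9/10 samples classified correctly
p, r = per_class_metrics([[8, 2], [1, 9]])
# r == [0.8, 0.9]; p == [8/9, 9/11]
```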
Further analysis was conducted with detection distance and solar zenith angle as key variables. When the detection distance remained fixed and the zenith angle increased to 30°, the overall classification accuracy for all targets decreased markedly, accompanied by a rise in misclassified samples. Notably, the recognition accuracy for drone targets declined further, with correct classifications being noticeably affected by interference from both the sky and trees. While in simulations solar radiation had limited influence on classification outcomes, in actual measurements the presence of detector noise altered the line shape of the target-modulated spectrum. This spectral distortion caused the modulated spectral signature of the drone to resemble those of the sky and trees more closely, thereby increasing classification errors.
When the solar zenith angle is 0° and the detection distance increases to 200 m, the classification accuracy for drones remains relatively stable, while the recall rate decreases to 0.88. This indicates an increase in false negatives, i.e., drone samples being misclassified into other categories. When both the detection distance and solar zenith angle change simultaneously, the overall classification accuracy for all targets declines to approximately 0.9, with drone classification accuracy reaching 0.92. These results demonstrate that under long-range detection conditions, the modulated spectral characteristics of targets such as drones are notably affected by atmospheric interference, leading to signal attenuation and a reduced signal-to-noise ratio, which in turn significantly raises the misclassification and false alarm rates.
As shown in Figure 9, the stability of the model was analyzed using ROC curves and AUC values, with classification performance evaluated across different detection distances, solar zenith angles, and target categories. The results indicate that the AUC values for drone targets remain consistently above 0.95, while the AUC values for the sky and trees exhibit greater variability, with minimum values of 0.89 and 0.90, respectively. Although the classification stability for the latter two categories shows some fluctuation, both the micro-average and macro-average AUC values across all categories exceed 0.95. This demonstrates that the model maintains robust overall classification performance and high recognition accuracy.
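The AUC values reported above can be computed, for each one-vs-rest class, as the Mann-Whitney statistic: the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one. This small pure-Python version is an illustrative re-implementation, not the authors' code:

```python
def binary_auc(scores, labels):
    """AUC as the probability that a random positive sample outscores a
    random negative one (ties count as 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            wins += 1.0 if p > q else (0.5 if p == q else 0.0)
    return wins / (len(pos) * len(neg))

def macro_auc(score_matrix, y):
    """Macro-average AUC: mean of the one-vs-rest AUCs over all classes.

    score_matrix[i][k] = classifier score of sample i for class k.
    """
    n_classes = len(score_matrix[0])
    aucs = [binary_auc([row[k] for row in score_matrix],
                       [1 if t == k else 0 for t in y])
            for k in range(n_classes)]
    return sum(aucs) / n_classes

auc = binary_auc([0.9, 0.3, 0.8, 0.2], [1, 1, 0, 0])  # -> 0.75
```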

5. Discussion

5.1. Synthesis and Comparative Analysis of Simulation and Experimental Results

As shown in Table 2, the comparison between simulation and experimental results indicates that the simulated classification accuracy for targets such as drones exceeds the experimentally measured performance. Notably, the Root Mean Square Error (RMSE) between simulated and experimental outcomes for drones is the smallest among all targets, measuring only 0.026. The RMSE, defined as:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{Sim}_i - \mathrm{Exp}_i\right)^2}$$
where $\mathrm{Sim}_i$ and $\mathrm{Exp}_i$ are the simulated and experimental values under the $i$-th condition, respectively, and $n$ is the number of conditions. The RMSE quantifies the average deviation between the two datasets; the low value for drones reflects the closest agreement between simulation and experiment and confirms the high recognition accuracy achieved by the model. This finding further substantiates the strong generalizability and practical validity of the wide-spectrum modulation theory for small-target recognition, particularly drone identification.
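As a check, applying this definition to the drone accuracy values reported in Table 2 reproduces the stated RMSE of 0.026:

```python
import math

def rmse(sim, exp):
    """Root-mean-square error between paired simulated and measured values."""
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(sim))

# Drone classification accuracy under the four (zenith angle, distance)
# conditions, simulated vs. experimental (values from Table 2)
sim_drone = [0.975, 0.95, 0.975, 0.925]
exp_drone = [0.95, 0.96, 0.93, 0.92]
print(round(rmse(sim_drone, exp_drone), 3))  # -> 0.026
```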
As shown in Table 3, the comparison of recall rates between simulation and experimental target recognition results indicates that for drone targets, the simulated recall ranges from 0.925 to 1, whereas the experimental recall ranges from 0.88 to 0.982. Although the experimental results are generally slightly lower than the simulation, they still maintain high accuracy. The RMSE between the two sets is 0.06, reflecting good agreement. However, relative to cloud and tree targets, the agreement between simulation and experiment is somewhat lower for drones. The primary reason is that extracting the modulated spectral curve from drone targets is more sensitive to image quality and the target’s pixel proportion in the scene. Especially under long-range detection, drone targets often occupy only tens or even fewer pixels, which increases the discrepancy between simulated and experimental outcomes.

5.2. Limitations of the Current Study and Future Perspectives

While this study has successfully established and validated a novel broadband modulation-demodulation framework for the detection of LSS targets, several limitations must be acknowledged, which also outline clear directions for future research. Firstly, the atmospheric model and noise injection methods employed in this work, although improved, remain relatively simplified. Future work will incorporate more complex, time-varying atmospheric profiles and a comprehensive noise model that includes fixed-pattern noise and nonlinear effects to enhance prediction fidelity. Additionally, while the current detection algorithm has proven effective, it has primarily been validated on a specific set of target types. Its generalization capability requires further evaluation across a broader spectrum of both LSS and non-LSS targets. Finally, systematic comparative analysis with other advanced spectral imaging techniques and deep learning-based classifiers is beyond the scope of this foundational study. Future research will focus on such benchmarking to quantitatively assess the advantages of the proposed technology. Addressing these limitations will advance the current proof-of-concept into a mature, field-deployable sensing solution.
Meanwhile, the current study primarily validates the detection performance under single or homogeneous backgrounds. Future work will focus on extending the research to various typical complex scenarios, such as urban and forest environments, while also considering the influence of different seasons, weather conditions, and lighting variations. This will allow for a systematic evaluation of the method’s generalization capability and environmental adaptability. Additionally, further efforts will be made to optimize the robustness of the algorithm in complex backgrounds, thereby advancing the technology toward practical engineering applications.
Finally, the current tests primarily focused on broadleaf trees as the vegetation target and cumulus clouds as the cloud target. The accuracy and generalizability of both simulation and experimental methods in more diverse scenarios require further improvement. Additionally, there are limitations in the current precision characterization of the data. In future work, we will continue to optimize the reflectance spectral database for different target categories and systematically enhance the generalization capability of the model in complex scenarios, thereby laying a foundation for the engineering application of this technology.

6. Conclusions

This study systematically presents a small-target detection method based on snapshot spectral imaging technology. Utilizing the operational principle of the wide-spectrum modulation spectral detector, the method comprehensively incorporates key factors in low-altitude slow-small (LSS) target detection, including material reflectance spectra, solar irradiance spectra, and atmospheric transmittance. Based on these factors, a theoretical framework for wide-spectrum modulated small-target recognition was established, followed by simulation analysis founded on the classification of small-target modulated spectra. Finally, a wide-spectrum modulation small-target detection system was constructed and tested on drone and woodland targets. Experimental results show that the recognition accuracy for drone targets lies between 0.92 and 0.95, with recall rates ranging from 0.88 to 0.982, aligning closely with simulation outcomes and confirming the effectiveness of this technical approach for LSS target detection. In practical detection, however, factors such as detector noise and limited dataset size can affect model performance, introducing fluctuations due to solar zenith angle and detection distance. Future work will focus on enhancing model stability and robustness for LSS target recognition by refining the noise term in the spectral transfer equation and expanding the training dataset.

Author Contributions

Conceptualization, D.L. and H.C.; Data curation, D.L. and Y.H.; Investigation, D.L. and H.C.; Methodology, D.L.; Resources, D.L. and J.L.; Supervision, H.C.; Validation, S.S.; Writing—original draft, D.L.; Writing—review and editing, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Scientific Research Project of the Education Department of Jilin Province (doctoral project, Grant No. JJKH20250475BS), "UAV Detection Technology Based on Snapshot Spectral Imaging under the Background of the Low-Altitude Economy".

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to express our sincere gratitude to Yangming Cao from the University of Sheffield for his guidance on the algorithm, including design ideas for aspects such as accuracy calculation and model generation.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Schematic diagram of a wideband modulated imaging detector.
Figure 2. Theoretical flow of small target detection based on wideband modulation.
Figure 3. Process design of SVM classification and recognition algorithm.
Figure 4. Confusion matrices of simulation results for target classification and recognition (clouds, drones, sky, and trees). (a) Simulation results at 100 m distance and 0° zenith angle. (b) Simulation results at 100 m distance and 30° zenith angle. (c) Simulation results at 200 m distance and 0° zenith angle. (d) Simulation results at 200 m distance and 30° zenith angle.
Figure 5. ROC curve of target classification and recognition simulation results at different zenith angles and detection distances. (a) ROC curve (100 m/0° Simulation). (b) ROC curve (100 m/30° Simulation). (c) ROC curve (200 m/0° Simulation). (d) ROC curve (200 m/30° Simulation).
Figure 6. Schematic diagram of target modulation spectrum extraction.
Figure 7. Modulated spectral images of different channels against the sky background.
Figure 8. Confusion matrices of experimental results for target classification and recognition (clouds, drones, sky, and trees). (a) Experimental results at 100 m distance and 0° zenith angle. (b) Experimental results at 100 m distance and 30° zenith angle. (c) Experimental results at 200 m distance and 0° zenith angle. (d) Experimental results at 200 m distance and 30° zenith angle.
Figure 9. ROC curves of target classification and recognition experimental results at different zenith angles and detection distances. (a) ROC curve (100 m/0° experiment). (b) ROC curve (100 m/30° experiment). (c) ROC curve (200 m/0° experiment). (d) ROC curve (200 m/30° experiment).
Table 1. Wide-spectrum modulation small target recognition simulation parameters.
| Simulation Variables | Specific Parameters |
|---|---|
| Drone material reflectance spectra | Glass fiber, polypropylene, carbon fiber, titanium alloy, etc. |
| Environment and interference target spectra | Sky radiation spectrum, white cloud scattering spectrum, tree reflection spectrum |
| Atmospheric transmittance (aerosol model) | Rural, visibility = 23 km |
| Irradiation type and zenith angle | Solar zenith angles 0° and 30° |
| Regional latitude | Low latitude |
| Season and date | June (day 180), summer |
Table 2. Comparative analysis of the accuracy of simulation and experimental target classification and recognition results.
| Condition | Sim (Drones) | Exp (Drones) | Sim (Cloud) | Exp (Cloud) | Sim (Sky) | Exp (Sky) | Sim (Trees) | Exp (Trees) |
|---|---|---|---|---|---|---|---|---|
| (0°, 100 m) | 0.975 | 0.95 | 1 | 0.974 | 0.875 | 0.913 | 0.975 | 0.95 |
| (0°, 200 m) | 0.95 | 0.96 | 1 | 0.92 | 0.9 | 0.86 | 0.95 | 0.92 |
| (30°, 100 m) | 0.975 | 0.93 | 1 | 1 | 0.875 | 0.88 | 0.95 | 0.916 |
| (30°, 200 m) | 0.925 | 0.92 | 1 | 0.98 | 0.9 | 0.878 | 0.9 | 0.86 |

RMSE (Sim vs. Exp): drones 0.026, cloud 0.043, sky 0.03, trees 0.052.
Table 3. Comparative analysis of recall rates of simulation and experimental target classification and recognition results.
| Condition | Sim (Drones) | Exp (Drones) | Sim (Cloud) | Exp (Cloud) | Sim (Sky) | Exp (Sky) | Sim (Trees) | Exp (Trees) |
|---|---|---|---|---|---|---|---|---|
| (0°, 100 m) | 1 | 0.926 | 1 | 1 | 0.972 | 0.936 | 0.866 | 0.926 |
| (0°, 200 m) | 0.974 | 0.88 | 0.975 | 0.95 | 0.923 | 0.89 | 0.926 | 0.92 |
| (30°, 100 m) | 0.975 | 0.982 | 0.93 | 0.952 | 0.972 | 0.852 | 0.926 | 0.932 |
| (30°, 200 m) | 0.925 | 0.938 | 0.952 | 0.907 | 0.947 | 0.897 | 0.9 | 0.895 |

RMSE (Sim vs. Exp): drones 0.06, cloud 0.028, sky 0.069, trees 0.03.

Li, D.; Hua, Y.; Song, S.; Liu, J.; Cai, H. Theoretical Modeling and Experimental Study on Low-Altitude Slow-Small Target (LSS) Detection Based on Broadband Spectral Modulation Imaging. Sensors 2026, 26, 909. https://doi.org/10.3390/s26030909
