Search Results (185)

Search Parameters:
Keywords = military target detection

20 pages, 6748 KiB  
Article
YOLO-SSFA: A Lightweight Real-Time Infrared Detection Method for Small Targets
by Yuchi Wang, Minghua Cao, Qing Yang, Yue Zhang and Zexuan Wang
Information 2025, 16(7), 618; https://doi.org/10.3390/info16070618 - 20 Jul 2025
Viewed by 429
Abstract
Infrared small target detection is crucial for military surveillance and autonomous driving. However, complex scenes and weak signal characteristics make the identification of such targets particularly difficult. This study proposes YOLO-SSFA, an enhanced You Only Look Once version 11 (YOLOv11) model with three modules: Scale-Sequence Feature Fusion (SSFF), LiteShiftHead detection head, and Noise Suppression Network (NSN). SSFF improves multi-scale feature representation through adaptive fusion; LiteShiftHead boosts efficiency via sparse convolution and dynamic integration; and NSN enhances localization accuracy by focusing on key regions. Experiments on the HIT-UAV and FLIR datasets show mAP50 scores of 94.9% and 85%, respectively. These findings showcase YOLO-SSFA’s strong potential for real-time deployment in challenging infrared environments. Full article
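The mAP50 figures above count a detection as correct when its predicted box overlaps a ground-truth box with an IoU of at least 0.5. As an illustrative aside (not code from the paper; the corner-format boxes `(x1, y1, x2, y2)` are an assumption), the overlap measure can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under mAP50, a prediction with `iou(pred, gt) >= 0.5` against some unmatched ground-truth box counts as a true positive.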

24 pages, 20337 KiB  
Article
MEAC: A Multi-Scale Edge-Aware Convolution Module for Robust Infrared Small-Target Detection
by Jinlong Hu, Tian Zhang and Ming Zhao
Sensors 2025, 25(14), 4442; https://doi.org/10.3390/s25144442 - 16 Jul 2025
Viewed by 326
Abstract
Infrared small-target detection remains a critical challenge in military reconnaissance, environmental monitoring, forest-fire prevention, and search-and-rescue operations, owing to the targets’ extremely small size, sparse texture, low signal-to-noise ratio, and complex background interference. Traditional convolutional neural networks (CNNs) struggle to detect such weak, low-contrast objects due to their limited receptive fields and insufficient feature extraction capabilities. To overcome these limitations, we propose a Multi-Scale Edge-Aware Convolution (MEAC) module that enhances feature representation for small infrared targets without increasing parameter count or computational cost. Specifically, MEAC fuses (1) original local features, (2) multi-scale context captured via dilated convolutions, and (3) high-contrast edge cues derived from differential Gaussian filters. After fusing these branches, channel and spatial attention mechanisms are applied to adaptively emphasize critical regions, further improving feature discrimination. The MEAC module is fully compatible with standard convolutional layers and can be seamlessly embedded into various network architectures. Extensive experiments on three public infrared small-target datasets (SIRSTD-UAVB, IRSTDv1, and IRSTD-1K) demonstrate that networks augmented with MEAC significantly outperform baseline models using standard convolutions. When compared to eleven mainstream convolution modules (ACmix, AKConv, DRConv, DSConv, LSKConv, MixConv, PConv, ODConv, GConv, and Involution), our method consistently achieves the highest detection accuracy and robustness. Experiments conducted across multiple versions, including YOLOv10, YOLOv11, and YOLOv12, as well as various network levels, demonstrate that the MEAC module achieves stable improvements in performance metrics while slightly increasing computational and parameter complexity. 
These results validate MEAC's effectiveness in enhancing the detection of small, weak targets and suppressing interference from complex backgrounds, highlighting its strong generalization ability and practical application potential. Full article
(This article belongs to the Section Sensing and Imaging)

14 pages, 5319 KiB  
Article
Efficiency Analysis of Disruptive Color in Military Camouflage Patterns Based on Eye Movement Data
by Xin Yang, Su Yan, Bentian Hao, Weidong Xu and Haibao Yu
J. Eye Mov. Res. 2025, 18(4), 26; https://doi.org/10.3390/jemr18040026 - 2 Jul 2025
Viewed by 298
Abstract
Disruptive color on animals’ bodies can reduce the risk of being caught. This study explores the camouflaging effect of disruptive color when applied to military targets. Disruptive and non-disruptive color patterns were placed on the target surface to form simulation materials. Then, the simulation target was set in woodland-, grassland-, and desert-type background images. The detectability of the target in the background was obtained by collecting eye movement indicators after the observer observed the background targets. The influence of background type (local and global), camouflage pattern type, and target viewing angle on the disruptive-color camouflage pattern was investigated. Eye movement observation experiments were designed to statistically analyze first discovery time, discovery frequency, and first-scan amplitude in the target area. The experimental results show that the first discovery time of mixed disruptive-color targets in a forest background was significantly higher than that of non-mixed disruptive-color targets (t = 2.54, p = 0.039), and the click frequency was reduced by 15% (p < 0.05), indicating that mixed disruptive color has better camouflage effectiveness in complex backgrounds. In addition, the camouflage effect of mixed disruptive colors on large-scale targets (viewing angle ≥ 30°) is significantly improved (F = 10.113, p = 0.01), providing theoretical support for close-range reconnaissance camouflage design. Full article
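The reported t statistic compares mean first discovery times between the two pattern groups. A minimal sketch of the independent-samples (Welch) t statistic such an analysis typically relies on; this is illustrative only, not the authors' analysis code:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (unequal variances allowed; sample variance uses n - 1)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

The p-value then follows from the t distribution with Welch-Satterthwaite degrees of freedom, which statistics packages compute for you.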

18 pages, 3974 KiB  
Article
LKD-YOLOv8: A Lightweight Knowledge Distillation-Based Method for Infrared Object Detection
by Xiancheng Cao, Yueli Hu and Haikun Zhang
Sensors 2025, 25(13), 4054; https://doi.org/10.3390/s25134054 - 29 Jun 2025
Viewed by 533
Abstract
Currently, infrared object detection is utilized in a broad spectrum of fields, including military applications, security, and aerospace. Nonetheless, the limited computational power of edge devices presents a considerable challenge in achieving an optimal balance between accuracy and computational efficiency in infrared object detection. In order to enhance the accuracy of infrared target detection and strengthen the implementation of robust models on edge platforms for rapid real-time inference, this paper presents LKD-YOLOv8, an innovative infrared object detection method that integrates YOLOv8 architecture with masked generative distillation (MGD), further augmented by a lightweight convolution design and an attention mechanism for improved feature adaptability. Linear deformable convolution (LDConv) strengthens spatial feature extraction by dynamically adjusting kernel offsets, while coordinate attention (CA) refines feature alignment through channel-wise interaction. We employ a large-scale model (YOLOv8s) as the teacher to impart knowledge and supervise the training of a compact student model (YOLOv8n). Experiments show that LKD-YOLOv8 achieves a 1.18% mAP@0.5:0.95 improvement over baseline methods while reducing the parameter size by 7.9%. Our approach effectively balances accuracy and efficiency, rendering it applicable for resource-constrained edge devices in infrared scenarios. Full article
(This article belongs to the Section Sensing and Imaging)
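The paper distills a YOLOv8s teacher into a YOLOv8n student via masked generative distillation at the feature level. As a simpler illustration of the general teacher-student idea, here is classic temperature-scaled logit distillation (Hinton-style KL loss), which is not the MGD loss used in the paper:

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / t) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by t*t so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return t * t * kl
```

The student minimizes this term alongside its ordinary detection loss; feature-based schemes such as MGD replace the logit KL with a loss on intermediate feature maps.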

31 pages, 1240 KiB  
Article
An Adaptive PSO Approach with Modified Position Equation for Optimizing Critical Node Detection in Large-Scale Networks: Application to Wireless Sensor Networks
by Abdelmoujib Megzari, Walid Osamy, Bader Alwasel and Ahmed M. Khedr
J. Sens. Actuator Netw. 2025, 14(3), 62; https://doi.org/10.3390/jsan14030062 - 16 Jun 2025
Viewed by 788
Abstract
In recent years, wireless sensor networks (WSNs) have been employed across various domains, including military services, healthcare, disaster response, industrial automation, and smart infrastructure. Due to the absence of fixed communication infrastructure, WSNs rely on ad hoc connections between sensor nodes to transmit sensed data to target nodes. Within a WSN, a sensor node whose failure partitions the network into disconnected segments is referred to as a critical node or cut vertex. Identifying such nodes is a fundamental step toward ensuring the reliability of WSNs. The critical node detection problem (CNDP) focuses on determining the set of nodes whose removal most significantly affects the network’s connectivity, stability, functionality, robustness, and resilience. CNDP is a significant challenge in network analysis that involves identifying the nodes that have a significant influence on connectivity or centrality measures within a network. However, achieving an optimal solution for the CNDP is often hindered by its time-consuming and computationally intensive nature, especially when dealing with large-scale networks. In response to this challenge, we present a method based on particle swarm optimization (PSO) for the detection of critical nodes. We employ discrete PSO (DPSO) along with the modified position equation (MPE) to effectively solve the CNDP, making it applicable to various k-vertex variations of the problem. We examine the impact of population size on both execution time and result quality. Experimental analysis using different neighborhood topologies—namely, the star topology and the dynamic topology—was conducted to analyze their impact on solution effectiveness and adaptability to diverse network configurations. We consistently observed better result quality with the dynamic topology compared to the star topology for the same population size, while the star topology exhibited better execution time. 
Our findings reveal the promising efficacy of the proposed solution in addressing the CNDP, achieving high-quality solutions compared to existing methods. Full article
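Independent of the PSO heuristic, the cut vertices the abstract defines can be found exactly in linear time with Tarjan's low-link DFS. A minimal sketch (the adjacency-dict graph representation is an assumption; recursion depth limits apply to very large networks):

```python
def cut_vertices(adj):
    """Articulation points of an undirected graph given as
    {node: iterable_of_neighbours}, via Tarjan's low-link DFS."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge: update low-link
                low[u] = min(low[u], disc[v])
            else:                              # tree edge: recurse
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is a cut vertex if some child cannot
                # reach above u without going through u
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:    # root with >1 DFS children
            cuts.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cuts
```

Exact enumeration like this scales poorly only for the k-vertex optimization variants of CNDP, which is where metaheuristics such as DPSO come in.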

16 pages, 2071 KiB  
Article
Long-Term miRNA Changes Predicting Resiliency Factors of Post-Traumatic Stress Disorder in a Large Military Cohort—Millennium Cohort Study
by Ruoting Yang, Swapna Kannan, Aarti Gautam, Teresa M. Powell, Cynthia A. LeardMann, Allison V. Hoke, George I. Dimitrov, Marti Jett, Carrie J. Donoho, Rudolph P. Rull and Rasha Hammamieh
Int. J. Mol. Sci. 2025, 26(11), 5195; https://doi.org/10.3390/ijms26115195 - 28 May 2025
Viewed by 709
Abstract
Post-traumatic stress disorder (PTSD) is a complex, debilitating condition prevalent among military personnel exposed to traumatic events, necessitating biomarkers for early detection and intervention. Using data from the Millennium Cohort Study, the largest and longest-running military health study initiated in 2001, our objective was to identify specific microRNA (miRNA) expression patterns associated with distinct PTSD symptom trajectories among service members and veterans and assess their potential for predicting resilience and symptom severity. We analyzed 1052 serum samples obtained from the Department of Defense Serum Repository and linked with survey data collected at baseline and across three follow-up waves (2001–2011), using miRNA sequencing and statistical modeling. Our analysis identified five PTSD trajectories—resilient, pre-existing, new-onset moderate, new-onset severe, and adaptive—and revealed significant dysregulation of three key miRNAs (miR-182-5p, miR-9-5p, miR-204-5p) in participants with PTSD compared to resilient individuals. These miRNAs, which inhibit brain-derived neurotrophic factor (BDNF) and target pathways like NFκB, Notch, and TGF-alpha, were associated with neuronal plasticity, inflammation, and tissue repair, reflecting PTSD pathophysiology. These findings suggest that miRNA profiles could serve as biomarkers for early identification of PTSD risk and resilience, guiding targeted interventions to improve long-term health outcomes for military personnel. Full article

21 pages, 1166 KiB  
Article
Sea Clutter Suppression Method Based on Correlation Features
by Zhen Li, Huafeng He, Liyuan Wang, Tao Zhou, Yizhe Sun and Yaomin He
J. Mar. Sci. Eng. 2025, 13(5), 998; https://doi.org/10.3390/jmse13050998 - 21 May 2025
Viewed by 387
Abstract
Radar target detection in a sea clutter environment is of significant importance in both civilian and military applications, with the detection of small maneuvering targets being particularly challenging. To address this issue, this paper introduces the autocorrelation characteristics of sea clutter into orthogonal projection operations to suppress sea clutter and enhance the detection capability of small maneuvering targets on the sea surface. The proposed method first generates speckle components that are consistent with the correlation characteristics of the observed sea clutter. Then, it uses these speckle components to derive the feature subspace of the sea clutter and applies this subspace in an orthogonal projection suppression algorithm, thereby achieving effective suppression of the sea clutter. This method does not rely on the covariance matrix estimation of sea clutter from reference cells but instead directly utilizes the autocorrelation characteristics of the observed sea clutter data to obtain the feature subspace, making it more adaptable to different environments. Simulation and experimental results demonstrate that this method significantly suppresses sea clutter and effectively improves the performance of target detection on the sea surface. Full article
(This article belongs to the Section Physical Oceanography)
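The orthogonal projection step described above removes the component of the received signal that lies in the clutter subspace. A generic sketch of that operation in plain Python (Gram-Schmidt basis; not the authors' implementation, which derives the subspace from speckle components matched to the observed clutter correlation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def orthonormal_basis(vectors, eps=1e-12):
    """Gram-Schmidt orthonormalisation of a list of vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:                         # subtract projections on prior basis vectors
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = dot(w, w) ** 0.5
        if n > eps:                             # drop linearly dependent vectors
            basis.append([wi / n for wi in w])
    return basis

def project_out(y, clutter_vectors):
    """Project y onto the orthogonal complement of the clutter subspace:
    y_clean = y - sum over basis u of <y, u> u."""
    basis = orthonormal_basis(clutter_vectors)
    y_clean = list(y)
    for u in basis:
        c = dot(y_clean, u)
        y_clean = [yi - c * ui for yi, ui in zip(y_clean, u)]
    return y_clean
```

Whatever remains after the projection is, ideally, the target return plus noise outside the clutter subspace.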

19 pages, 12427 KiB  
Article
Oriented SAR Ship Detection Based on Edge Deformable Convolution and Point Set Representation
by Tianyue Guan, Sheng Chang, Yunkai Deng, Fengli Xue, Chunle Wang and Xiaoxue Jia
Remote Sens. 2025, 17(9), 1612; https://doi.org/10.3390/rs17091612 - 1 May 2025
Cited by 1 | Viewed by 673
Abstract
Ship detection in synthetic aperture radar (SAR) images holds significant importance for both military and civilian applications, including maritime traffic supervision, marine search and rescue operations, and emergency response initiatives. Although extensive research has been conducted in this field, the interference of speckle noise in SAR images and the potential discontinuity of target contours continue to pose challenges for the accurate detection of multi-directional ships in complex scenes. To address these issues, we propose a novel ship detection method for SAR images that leverages edge deformable convolution combined with point set representation. By integrating edge deformable convolution with backbone networks, we learn the correlations between discontinuous target blocks in SAR images. This process effectively suppresses speckle noise while capturing the overall offset characteristics of targets. On this basis, a multi-directional ship detection module utilizing radial basis function (RBF) point set representation is developed. By constructing a point set transformation function, we establish efficient geometric alignment between the point set and the predicted rotated box, and we impose constraints on the penalty term associated with point set transformation to ensure accurate mapping between point set features and directed prediction boxes. This methodology enables the precise detection of multi-directional ship targets even in dense scenes. The experimental results derived from two publicly available datasets, RSDD-SAR and SSDD, demonstrate that our proposed method achieves state-of-the-art performance when benchmarked against other advanced detection models. Full article

27 pages, 23958 KiB  
Article
Cross-Scene Multi-Object Tracking for Drones: Leveraging Meta-Learning and Onboard Parameters with the New MIDDTD
by Chenghang Wang, Xiaochun Shen, Zhaoxiang Zhang, Chengyang Tao and Yuelei Xu
Drones 2025, 9(5), 341; https://doi.org/10.3390/drones9050341 - 30 Apr 2025
Cited by 1 | Viewed by 615 | Correction
Abstract
Multi-object tracking (MOT) is a key intermediate task in many practical applications and theoretical fields, facing significant challenges due to complex scenarios, particularly in the context of drone-based air-to-ground military operations. During drone flight, factors such as high-altitude environments, small target proportions, irregular target movement, and frequent occlusions complicate the multi-object tracking task. This paper proposes a cross-scene multi-object tracking (CST) method to address these challenges. Firstly, a lightweight object detection framework is proposed to optimize key sub-tasks by integrating multi-dimensional temporal and spatial information. Secondly, trajectory prediction is achieved through the implementation of Model-Agnostic Meta-Learning, enhancing adaptability to dynamic environments. Thirdly, re-identification is facilitated using Dempster–Shafer Theory, which effectively manages uncertainties in target recognition by incorporating aircraft state information. Finally, a novel dataset, termed the Multi-Information Drone Detection and Tracking Dataset (MIDDTD), is introduced, containing rich drone-related information and diverse scenes, thereby providing a solid foundation for the validation of cross-scene multi-object tracking algorithms. Experimental results demonstrate that the proposed method improves the IDF1 tracking metric by 1.92% compared to existing state-of-the-art methods, showcasing strong cross-scene adaptability and offering an effective solution for multi-object tracking from a drone’s perspective, thereby advancing theoretical and technical support for related fields. Full article
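Re-identification in CST fuses uncertain evidence with Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination (the hypothesis names 'target' and 'decoy' are illustrative, not from the paper):

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to
    mass) with Dempster's rule, renormalising by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:                      # disjoint hypotheses contribute to conflict
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {h: m / k for h, m in combined.items()}
```

In a tracker, one mass function might come from appearance similarity and another from aircraft-state cues, with the combined masses ranking candidate identities.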

20 pages, 5808 KiB  
Article
Enhanced YOLOv7 Based on Channel Attention Mechanism for Nearshore Ship Detection
by Qingyun Zhu, Zhen Zhang and Ruizhe Mu
Electronics 2025, 14(9), 1739; https://doi.org/10.3390/electronics14091739 - 24 Apr 2025
Viewed by 502
Abstract
Nearshore ship detection is an important task in marine monitoring, playing a significant role in navigation safety and controlling illegal smuggling. The continuous research and development of Synthetic Aperture Radar (SAR) technology is not only of great importance in military and maritime security fields but also has great potential in civilian fields, such as disaster emergency response, marine resource monitoring, and environmental protection. Due to the limited sample size of nearshore ship datasets, it is difficult to meet the demand for the large quantity of training data required by existing deep learning algorithms, which limits the recognition accuracy. At the same time, artificial environmental features such as buildings can cause significant interference to SAR imaging, making it more difficult to distinguish ships from the background. Ship target images are greatly affected by speckle noise, posing additional challenges to data-driven recognition methods. Therefore, we utilized a Concurrent Single-Image GAN (ConSinGAN) to generate high-quality synthetic samples for re-labeling and fused them with the dataset extracted from the SAR-Ship dataset for nearshore image extraction and dataset division. Experimental analysis showed that the ship recognition model trained with augmented images had an accuracy increase of 4.66%, a recall rate increase of 3.68%, and an increase of 3.24% in average precision (AP) at an Intersection over Union (IoU) threshold of 0.5. Subsequently, an enhanced YOLOv7 algorithm (YOLOv7 + ESE) incorporating channel-wise information fusion was developed based on the YOLOv7 architecture integrated with the Squeeze-and-Excitation (SE) channel attention mechanism. Through comparative experiments, the analytical results demonstrated that the proposed algorithm achieved performance improvements of 0.36% in precision, 0.52% in recall, and 0.65% in average precision (AP@0.5) compared to the baseline model. 
This optimized architecture enables accurate detection of nearshore ship targets in SAR imagery. Full article
(This article belongs to the Special Issue Intelligent Systems in Industry 4.0)

24 pages, 2629 KiB  
Article
Robust Infrared–Visible Fusion Imaging with Decoupled Semantic Segmentation Network
by Xuhui Zhang, Yunpeng Yin, Zhuowei Wang, Heng Wu, Lianglun Cheng, Aimin Yang and Genping Zhao
Sensors 2025, 25(9), 2646; https://doi.org/10.3390/s25092646 - 22 Apr 2025
Viewed by 649
Abstract
The fusion of infrared and visible images provides complementary information from both modalities and has been widely used in surveillance, military, and other fields. However, most of the available fusion methods have only been evaluated with subjective metrics of visual quality of the fused images, which are often independent of the following relevant high-level visual tasks. Moreover, as a useful technique especially used in low-light scenarios, the effect of low-light conditions on the fusion result has not been well-addressed yet. To address these challenges, a decoupled and semantic segmentation-driven infrared and visible image fusion network is proposed in this paper, which connects both image fusion and the downstream task to drive the network to be optimized. Firstly, a cross-modality transformer fusion module is designed to learn rich hierarchical feature representations. Secondly, a semantic-driven fusion module is developed to enhance the key features of prominent targets. Thirdly, a weighted fusion strategy is adopted to automatically adjust the fusion weights of different modality features. This effectively merges the thermal characteristics from infrared images and detailed information from visible images. Additionally, we design a refined loss function that employs the decoupling network to constrain the pixel distributions in the fused images and produce more-natural fusion images. To evaluate the robustness and generalization of the proposed method in practically challenging applications, a Maritime Infrared and Visible (MIV) dataset is created and verified for maritime environmental perception, which will be made available soon. The experimental results from both widely used public datasets and the practically collected MIV dataset highlight the notable strengths of the proposed method with the best-ranking quality metrics among its counterparts. More importantly, the fused image achieved with the proposed method has over 96% target detection accuracy and a high mAP@[50:95] value that far surpasses all the competitors. Full article

25 pages, 13401 KiB  
Article
Enhanced U-Net for Underwater Laser Range-Gated Image Restoration: Boosting Underwater Target Recognition
by Peng Liu, Shuaibao Chen, Wei He, Jue Wang, Liangpei Chen, Yuguang Tan, Dong Luo, Wei Chen and Guohua Jiao
J. Mar. Sci. Eng. 2025, 13(4), 803; https://doi.org/10.3390/jmse13040803 - 17 Apr 2025
Viewed by 636
Abstract
Underwater optical imaging plays a crucial role in maritime safety, enabling reliable navigation, efficient search and rescue operations, precise target recognition, and robust military reconnaissance. However, conventional underwater imaging methods often suffer from severe backscattering noise, limited detection range, and reduced image clarity—challenges that are exacerbated in turbid waters. To address these issues, Underwater Laser Range-Gated Imaging has emerged as a promising solution. By selectively capturing photons within a controlled temporal gate, this technique effectively suppresses backscattering noise, enhancing image clarity, contrast, and detection range. Nevertheless, residual noise within the imaging slice can still degrade image quality, particularly in challenging underwater conditions. In this study, we propose an enhanced U-Net neural network designed to mitigate noise interference in underwater laser range-gated images, improving target recognition performance. Built upon the U-Net architecture with added residual connections, our network combines a VGG16-based perceptual loss with Mean Squared Error (MSE) as the loss function, effectively capturing high-level semantic features while preserving critical target details during reconstruction. Trained on a semi-synthetic grayscale dataset containing synthetically degraded images paired with their reference counterparts, the proposed approach demonstrates improved performance compared to several existing underwater image restoration methods in our experimental evaluations. Through comprehensive qualitative and quantitative evaluations, underwater target detection experiments, and real-world oceanic validations, our method demonstrates significant potential for advancing maritime safety and related applications. Full article

10 pages, 2124 KiB  
Article
Multifunctional Hierarchical Metamaterials: Synergizing Visible-Laser-Infrared Camouflage with Thermal Management
by Shenglan Wu, Hao Huang, Zhenyong Huang, Chunhui Tian, Lina Guo, Yong Liu and Shuang Liu
Photonics 2025, 12(4), 387; https://doi.org/10.3390/photonics12040387 - 16 Apr 2025
Viewed by 621
Abstract
With the rapid development of multispectral detection technology, realizing the synergistic camouflage and thermal management of materials in multi-band has become a major challenge. In this paper, a multifunctional radiation-selective hierarchical metamaterial (RSHM) is designed to realize the modulation of optical properties in a wide spectral range through the delicate design of microstructures and nanostructures. In the atmospheric windows of 3–5 μm and 8–14 μm, the emissivity of the material is as low as 0.14 and 0.25, which can effectively suppress the radiation characteristics of the target in the infrared band, thus realizing efficient infrared stealth. Simultaneously, it exhibits high emissivity in the 2.5–3 μm (up to 0.80) and 5–8 μm (up to 0.98) bands, significantly improving thermal radiation efficiency and enabling active thermal management. Notably, RSHM achieves low reflectivity at 1.06 μm (0.13) and 1.55 μm (0.005) laser wavelengths, as well as in the 8–14 μm (0.06) band, substantially improving laser stealth performances. Additionally, it maintains high transmittance in the visible light range, ensuring excellent visual camouflage effects. Furthermore, the RSHM demonstrates exceptional incident angle and polarization stability, maintaining robust performances even under complex detection conditions. This design is easy to expand relative to other frequency bands of the electromagnetic spectrum and holds significant potential for applications in military camouflage, energy-efficient buildings, and optical devices. Full article

20 pages, 1641 KiB  
Article
Spectral Information Divergence-Driven Diffusion Networks for Hyperspectral Target Detection
by Jinfu Gong, Zhen Huang, Zhengye Yang, Xuezhuan Ding and Fanming Li
Appl. Sci. 2025, 15(8), 4076; https://doi.org/10.3390/app15084076 - 8 Apr 2025
Viewed by 504
Abstract
Hyperspectral Imagery (HSI) plays a crucial role in military and civilian target detection. However, HSI target detection remains highly challenging due to interference from complex and diverse real-world scenes. This paper proposes a Spectral Information Divergence-driven Diffusion Network (SID-DN) for hyperspectral target detection, which significantly enhances detection robustness in complex scenes by decoupling background distribution modeling from target detection. The method learns the background distribution of the hyperspectral data and detects targets by accurately reconstructing background samples, so that differences between background and target samples become apparent. An adaptive coarse detection module optimizes the coarse detection stage of generative hyperspectral target detection, effectively reducing background–target misclassification. Additionally, a SID-based diffusion model is designed that optimizes the diffusion loss, effectively reducing the interference of suspected target samples during background learning. Experiments on three real-world datasets demonstrate that the method is highly competitive, with detection results significantly outperforming current state-of-the-art methods. Full article
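Spectral Information Divergence, the measure driving SID-DN, treats each (nonnegative) spectrum as a probability distribution and sums the two relative entropies between them, yielding a symmetric dissimilarity that is zero only for identical spectral shapes. A minimal NumPy sketch of the standard SID measure (an illustration, not the paper's implementation):

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """Symmetric SID between two spectra: KL(p||q) + KL(q||p),
    where p and q are the spectra normalized to unit sum."""
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    p = np.clip(p, eps, None)  # guard against log(0)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Because SID compares spectral *shape* rather than magnitude, it is insensitive to uniform illumination scaling, which is what makes it attractive as a reconstruction loss for separating background from target signatures.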

23 pages, 31391 KiB  
Article
A Method for Airborne Small-Target Detection with a Multimodal Fusion Framework Integrating Photometric Perception and Cross-Attention Mechanisms
by Shufang Xu, Heng Li, Tianci Liu and Hongmin Gao
Remote Sens. 2025, 17(7), 1118; https://doi.org/10.3390/rs17071118 - 21 Mar 2025
Viewed by 1116
Abstract
In recent years, the rapid advancement and pervasive deployment of unmanned aerial vehicle (UAV) technology have catalyzed transformative applications across military, civilian, and scientific domains. While aerial imaging has become a pivotal tool in modern remote sensing systems, robust small-target detection under complex all-weather conditions remains a persistent challenge. This paper presents an innovative multimodal fusion framework incorporating photometric perception and cross-attention mechanisms to address critical limitations of current single-modality detection systems, particularly their reduced accuracy and elevated false-negative rates in adverse environmental conditions. The architecture introduces three novel components: (1) a bidirectional hierarchical feature extraction network that enables synergistic processing of heterogeneous sensor data; (2) a cross-modality attention mechanism that dynamically establishes inter-modal feature correlations through learnable attention weights; and (3) an adaptive photometric weighting fusion module that performs spectral characteristic-aware feature recalibration. The system achieves multimodal complementarity in two phases: it first establishes cross-modal feature correspondences through attention-guided feature alignment, then performs weighted fusion based on a photometric reliability assessment. Comprehensive experiments demonstrate that the framework improves mAP by at least 3.6% over competing models on the challenging LLVIP dataset, with particular gains in detection reliability on the KAIST dataset. This research advances the state of the art in aerial target detection by providing a principled approach to multimodal sensor fusion, with significant implications for surveillance, disaster response, and precision agriculture. Full article
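The cross-modality attention in component (2) can be sketched as scaled dot-product attention in which queries come from one modality and keys/values from the other, so each visible-light feature gathers a weighted summary of the infrared features. A minimal single-head, unbatched NumPy sketch (feature dimensions and weight shapes are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(feat_vis, feat_ir, w_q, w_k, w_v):
    """Visible-light tokens query infrared tokens (single head, no batch dim)."""
    q = feat_vis @ w_q                        # queries from modality A: (n_vis, d)
    k = feat_ir @ w_k                         # keys from modality B:    (n_ir, d)
    v = feat_ir @ w_v                         # values from modality B:  (n_ir, d)
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled dot products:     (n_vis, n_ir)
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return attn @ v, attn                     # fused features, attention map
```

In the paper's framework the projection weights are learned end to end; random weights here merely demonstrate the shapes and the row-stochastic attention map that establishes the inter-modal correspondences.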
