Search Results (5)

Search Parameters:
Keywords = multiplexing multi-scale features network

18 pages, 2474 KB  
Article
A MFR Work Modes Recognition Method Based on Dual-Scale Feature Extraction
by Zhiyuan Li, Xuan Fu, Chengjian Mo, Jianlong Tang, Ronghua Guo and Wenbo Li
Remote Sens. 2025, 17(6), 1054; https://doi.org/10.3390/rs17061054 - 17 Mar 2025
Viewed by 956
Abstract
Multi-function radar (MFR) work mode recognition is an important research topic in the electronic reconnaissance field. When facing MFR systems with complex mode-waveform mapping relationships and flexible beam-scanning techniques, the intercepted work mode pulse sequences exhibit a wide temporal range of feature distributions and variable durations, which poses significant challenges for accurate recognition. To address this issue, this study constructs a novel hierarchical MFR signal model with waveform multiplexing and waveform scheduling laws with spatial beam arrangement, and proposes a work mode recognition method based on dual-scale feature extraction. The method first obtains variable-length sequence processing capability through pulse sequence segmentation. A structure composed of a convolutional neural network (CNN) and long short-term memory (LSTM) then extracts deep time-series features at the intra-segment scale, and the features of each segment are concatenated along the time dimension. Subsequently, an LSTM-Attention network extracts inter-segment-scale features while adaptively assigning higher weights to important waveform segments. Finally, the work mode recognition results are obtained. Experimental results show that the proposed method performs advantageously in recognizing work modes under the comprehensive MFR signal model.
(This article belongs to the Section Engineering Remote Sensing)
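The two ideas at the heart of the abstract above — pulse sequence segmentation for variable-length inputs, and attention pooling that weights important segments more heavily — can be sketched in a minimal form. This is a hypothetical illustration, not the paper's implementation: the per-segment mean stands in for the CNN+LSTM feature extractor, and a single dot-product query stands in for the learned LSTM-Attention network.

```python
import numpy as np

def segment_pulses(pulse_seq, seg_len):
    """Split a 1-D pulse parameter sequence into fixed-length segments,
    zero-padding the tail so every segment has the same length."""
    n = len(pulse_seq)
    n_seg = -(-n // seg_len)  # ceiling division
    padded = np.zeros(n_seg * seg_len)
    padded[:n] = pulse_seq
    return padded.reshape(n_seg, seg_len)

def attention_pool(seg_feats, query):
    """Softmax attention over segment features (rows of seg_feats):
    returns the weights and the weighted sum of segment features."""
    scores = seg_feats @ query
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w, w @ seg_feats

pulses = np.arange(10.0)                   # toy pulse parameter sequence
segs = segment_pulses(pulses, 4)           # 3 segments of length 4
feats = segs.mean(axis=1, keepdims=True)   # stand-in per-segment feature
weights, pooled = attention_pool(feats, np.ones(1))
```

Segments with larger scores receive exponentially larger weights, which is the mechanism the abstract describes for emphasizing important waveform segments.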

12 pages, 1924 KB  
Article
Multiplexing Multi-Scale Features Network for Salient Target Detection
by Xiaoxuan Liu, Yanfei Peng, Gang Wang and Jing Wang
Appl. Sci. 2024, 14(17), 7940; https://doi.org/10.3390/app14177940 - 5 Sep 2024
Viewed by 1232
Abstract
This paper proposes a multiplexing multi-scale features network (MMF-Network) for salient target detection, tackling the problem of incomplete detection structures when identifying salient targets across different scales. The network, based on an encoder–decoder architecture, integrates a multi-scale aggregation module and a multi-scale visual interaction module. First, a multi-scale aggregation module is constructed which, despite potentially introducing a small amount of noise, significantly enhances the high-level semantic and geometric information of features. SimAM is then employed to emphasize feature information and highlight the salient target. A multi-scale visual interaction module is designed to make low-resolution and high-resolution feature maps compatible, with dilated convolutions used to expand the receptive field of the high-resolution feature maps. Finally, the proposed MMF-Network is tested on three datasets: DUTS-TE, HKU-IS, and PASCAL-S, achieving scores of 0.887, 0.811, and 0.031 in terms of F-value, SSIM, and MAE, respectively. The experimental results demonstrate that the MMF-Network exhibits superior performance in salient target detection.
(This article belongs to the Section Computing and Artificial Intelligence)
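SimAM, the attention module this abstract leans on, is parameter-free and has a simple closed form: each activation is reweighted by a sigmoid of its energy, computed from the per-channel mean and variance. A minimal NumPy sketch, following the commonly published SimAM formulation (`lam` is the energy regularization constant; this is an illustrative stand-in, not the paper's code):

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention on a (C, H, W) feature map:
    energy e = (x - mu)^2 / (4 * (var + lam)) + 0.5 per channel,
    then the map is reweighted element-wise by sigmoid(e)."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    n = x.shape[1] * x.shape[2] - 1
    var = ((x - mu) ** 2).sum(axis=(1, 2), keepdims=True) / n
    e = (x - mu) ** 2 / (4 * (var + lam)) + 0.5
    return x * (1.0 / (1.0 + np.exp(-e)))

x = np.linspace(-1.0, 1.0, 32).reshape(2, 4, 4)
y = simam(x)
```

Because the sigmoid output lies in (0, 1), the module can only attenuate activations; distinctive (high-energy) activations are attenuated least, which is how it highlights the salient target without adding any learnable parameters.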

16 pages, 11469 KB  
Article
MTUW-GAN: A Multi-Teacher Knowledge Distillation Generative Adversarial Network for Underwater Image Enhancement
by Tianchi Zhang and Yuxuan Liu
Appl. Sci. 2024, 14(2), 529; https://doi.org/10.3390/app14020529 - 8 Jan 2024
Cited by 4 | Viewed by 3080
Abstract
Underwater imagery is plagued by image blurring and color distortion, which significantly impede the detection and operational capabilities of underwater robots, specifically Autonomous Underwater Vehicles (AUVs). Previous deep-learning approaches to image fusion or multi-scale feature fusion required multi-branch image preprocessing before merging through fusion modules. However, these methods have intricate network structures and high computational demands, making them unsuitable for deployment on resource-limited AUVs. To tackle these challenges, we propose a multi-teacher knowledge distillation GAN for underwater image enhancement (MTUW-GAN). In our approach, multiple teacher networks instruct a student network simultaneously, enabling it to enhance color and detail in degraded images from various perspectives and thus achieve image-fusion-level performance. Additionally, we employ middle-layer channel distillation in conjunction with an attention mechanism to extract and transfer rich middle-layer feature information from the teacher models to the student model. By eliminating multiplexed branching and fusion modules, our lightweight student model can directly generate enhanced underwater images through model compression. Furthermore, we introduce a multimodal objective enhancement function to refine the overall framework training, striking a balance between low computational effort and high-quality image enhancement. Comparisons with existing approaches demonstrate the clear advantages of our method in terms of visual quality, model parameters, and real-time performance. Consequently, our method serves as an effective solution for real-time underwater image enhancement, specifically tailored for deployment on AUVs.
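The multi-teacher idea can be illustrated with a deliberately simplified loss: the student is penalized by a weighted sum of per-teacher reconstruction errors. This is a hypothetical sketch, not MTUW-GAN's actual objective, which also includes adversarial and middle-layer channel distillation terms:

```python
import numpy as np

def multi_teacher_distill_loss(student_out, teacher_outs, weights=None):
    """Weighted sum of L1 losses between the student's output image and
    each teacher's output image; uniform weights by default."""
    if weights is None:
        weights = np.ones(len(teacher_outs)) / len(teacher_outs)
    per_teacher = [np.abs(student_out - t).mean() for t in teacher_outs]
    return float(np.dot(weights, per_teacher))

# Toy example: a zero "student image" against two constant teacher images.
student = np.zeros((4, 4))
teachers = [np.ones((4, 4)), np.ones((4, 4))]
loss = multi_teacher_distill_loss(student, teachers)
```

Non-uniform weights let teachers with complementary strengths (e.g. color correction vs. deblurring) contribute unequally, which is one way the "various perspectives" in the abstract could be balanced.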

13 pages, 4221 KB  
Article
Research on Orbital Angular Momentum Multiplexing Communication System Based on Neural Network Inversion of Phase
by Yang Cao, Zupeng Zhang, Xiaofeng Peng, Yuhan Wang and Huaijun Qin
Electronics 2022, 11(10), 1592; https://doi.org/10.3390/electronics11101592 - 17 May 2022
Cited by 1 | Viewed by 2457
Abstract
An adaptive optics wavefront recovery method based on a residual attention network is proposed to address the performance degradation of an Orbital Angular Momentum (OAM) multiplexing communication system caused by atmospheric turbulence in free-space optical communication. To prevent the degeneration phenomenon of deep neural networks, a residual network is used as the backbone and a multi-scale residual hybrid attention network is constructed. Distributed feature extraction by convolutional kernels at different scales enhances the network's ability to represent light-intensity image features, and the attention mechanism improves the recognition rate for broken light-spot features. The network loss function is designed around realistic evaluation indexes so as to obtain Zernike coefficients that match the actual wavefront aberration. Simulation experiments under different atmospheric turbulence intensities show that the residual attention network can reconstruct the turbulent phase quickly and accurately. The peak-to-valley values of the recovered residual aberrations were between 0.1 and 0.3 rad, and the root-mean-square values were between 0.02 and 0.12 rad. The results obtained by the residual attention network are better than those of conventional networks at different SNRs.
(This article belongs to the Special Issue Mechatronic Control Engineering)
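The peak-to-valley (PV) and root-mean-square (RMS) figures quoted above are standard residual-wavefront metrics. A hedged sketch of how they are computed from Zernike coefficients, using only three low-order terms (x-tilt, y-tilt, defocus) rather than the full basis such a system would use:

```python
import numpy as np

def zernike_wavefront(coeffs, grid=64):
    """Phase on the unit disk from three low-order Zernike terms:
    phase = c0*(2x) + c1*(2y) + c2*sqrt(3)*(2r^2 - 1)."""
    y, x = np.mgrid[-1:1:grid * 1j, -1:1:grid * 1j]
    r2 = x ** 2 + y ** 2
    mask = r2 <= 1.0
    basis = np.stack([2 * x, 2 * y, np.sqrt(3) * (2 * r2 - 1)])
    return np.tensordot(coeffs, basis, axes=1), mask

def pv_and_rms(residual, mask):
    """Peak-to-valley and RMS of a residual phase over the pupil."""
    vals = residual[mask]
    return vals.max() - vals.min(), np.sqrt((vals ** 2).mean())

# Toy residual: 0.1 rad of leftover x-tilt after correction.
phase, mask = zernike_wavefront(np.array([0.1, 0.0, 0.0]))
pv, rms = pv_and_rms(phase, mask)
```

For pure x-tilt c0 the phase is 0.2·c0·x, so over the unit disk the analytic RMS is 0.1·c0/0.1 = c0 (here 0.1 rad) and the PV approaches 0.4·c0; the grid sampling reproduces these values to within discretization error.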

19 pages, 2881 KB  
Article
Multi-Scale Feature Mapping Network for Hyperspectral Image Super-Resolution
by Jing Zhang, Minhao Shao, Zekang Wan and Yunsong Li
Remote Sens. 2021, 13(20), 4180; https://doi.org/10.3390/rs13204180 - 19 Oct 2021
Cited by 18 | Viewed by 3137
Abstract
A Hyperspectral Image (HSI) can continuously cover tens or even hundreds of spectral segments for each spatial pixel. Limited by the cost and commercialization requirements of remote sensing satellites, HSIs often lose considerable information due to insufficient spatial resolution. Owing to the high-dimensional nature of HSIs and the correlation between spectra, existing Super-Resolution (SR) methods for HSIs suffer from excessive parameter counts and insufficient information complementarity between spectra. This paper proposes a Multi-Scale Feature Mapping Network (MSFMNet) based on cascaded residual learning to adaptively learn the prior information of HSIs. MSFMNet simplifies each part of the network into a few simple yet effective modules. To learn the spatial-spectral characteristics among different spectral segments, a Multi-Scale Feature Mapping Block (MSFMB) based on wavelet transform and a spatial attention mechanism is designed for multi-scale feature generation and fusion. To effectively improve the multiplexing rate of multi-level spectral features, a Multi-Level Feature Fusion Block (MLFFB) is designed to fuse them. In the image reconstruction stage, an optimized sub-pixel convolution module is used for up-sampling the different spectral segments. Extensive verification on three general hyperspectral datasets demonstrates the superiority of this method over existing hyperspectral SR methods: in both subjective and objective experiments, its performance is better than that of its competitors.
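Sub-pixel convolution, used here for up-sampling, rearranges channels into space ("pixel shuffle"): a convolution first produces C·r² channels, which are then reshaped to multiply the spatial resolution by r. A minimal NumPy sketch of the rearrangement step (the preceding convolution is omitted; channel ordering follows the usual depth-to-space convention):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space upsampling: rearrange a (C*r*r, H, W) tensor into
    (C, H*r, W*r), so each group of r*r channels fills an r x r patch."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    out = x.reshape(c, r, r, h, w)      # (C, r, r, H, W)
    out = out.transpose(0, 3, 1, 4, 2)  # (C, H, r, W, r)
    return out.reshape(c, h * r, w * r)

# 4 channels at 1x1 become 1 channel at 2x2.
x = np.arange(4.0).reshape(4, 1, 1)
y = pixel_shuffle(x, 2)
```

The appeal for HSI SR is that all the heavy computation happens at low resolution; the up-sampling itself is a pure memory rearrangement applied per spectral segment.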
