Search Results (626)

Search Parameters:
Keywords = images compressed sensing

27 pages, 3492 KB  
Article
Filter-Wise Mask Pruning and FPGA Acceleration for Object Classification and Detection
by Wenjing He, Shaohui Mei, Jian Hu, Lingling Ma, Shiqi Hao and Zhihan Lv
Remote Sens. 2025, 17(21), 3582; https://doi.org/10.3390/rs17213582 - 29 Oct 2025
Viewed by 388
Abstract
Pruning and acceleration have become essential and promising techniques for convolutional neural networks (CNNs) in remote sensing image processing, especially for deployment on resource-constrained devices. However, maintaining model accuracy while achieving satisfactory acceleration remains a challenging and valuable problem. To address this limitation, we introduce a novel pruning pattern, the filter-wise mask, which enforces extra filter-wise structural constraints on pattern-based pruning and thereby combines the benefits of unstructured and structured pruning. The newly introduced filter-wise mask provides fine-grained sparsity with more hardware-friendly regularity. We further design an acceleration architecture that optimizes calculation parallelism and memory access, aiming to fully translate weight pruning into hardware performance gains. The proposed pruning method is first validated on classification networks: the pruning rate reaches 75.1% for VGG-16 and 84.6% for ResNet-50 without compromising accuracy. We then apply the method to the widely used you only look once (YOLO) object detection model. On an aerial image dataset, the pruned YOLOv5s achieves a pruning rate of 53.43% with a slight accuracy degradation of 0.6%. We also implement the acceleration architecture on a field-programmable gate array (FPGA) to evaluate its practical execution performance. The throughput reaches up to 809.46 MOPS, and the pruned networks achieve speedups of 2.23× and 4.4× at compression rates of 2.25× and 4.5×, respectively, effectively converting model compression into execution speedup. The proposed pruning and acceleration approach provides a crucial technology for applying CNNs in remote sensing, especially in scenarios such as on-board real-time processing, emergency response, and low-cost monitoring. Full article
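
The abstract above describes a filter-wise mask in which every kernel of a convolutional filter shares one spatial sparsity pattern. A minimal NumPy sketch of that masking idea follows; the magnitude-based selection rule, tensor shapes, and keep-4-of-9 budget are illustrative assumptions, not the authors' training procedure.

```python
# Hypothetical sketch of filter-wise mask pruning: every kernel inside one
# convolutional filter shares a single spatial mask, chosen here from the
# filter's aggregate weight magnitudes.
import numpy as np

def filter_wise_mask_prune(weights: np.ndarray, keep: int = 4) -> np.ndarray:
    """weights: (out_ch, in_ch, kh, kw); keep: spatial positions kept per filter."""
    out_ch, in_ch, kh, kw = weights.shape
    pruned = np.zeros_like(weights)
    for f in range(out_ch):
        # Aggregate |w| over input channels so the whole filter votes on
        # which spatial positions survive.
        saliency = np.abs(weights[f]).sum(axis=0)           # (kh, kw)
        flat_idx = np.argsort(saliency, axis=None)[-keep:]  # top-k positions
        mask = np.zeros(kh * kw, dtype=bool)
        mask[flat_idx] = True
        mask = mask.reshape(kh, kw)
        pruned[f] = weights[f] * mask                       # same mask for all kernels
    return pruned

w = np.random.randn(64, 32, 3, 3)
w_pruned = filter_wise_mask_prune(w, keep=4)
print("sparsity:", 1.0 - np.count_nonzero(w_pruned) / w_pruned.size)
```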

21 pages, 6424 KB  
Article
Coherent Dynamic Clutter Suppression in Structural Health Monitoring via the Image Plane Technique
by Mattia Giovanni Polisano, Marco Manzoni, Stefano Tebaldini, Damiano Badini and Sergi Duque
Remote Sens. 2025, 17(20), 3459; https://doi.org/10.3390/rs17203459 - 16 Oct 2025
Viewed by 319
Abstract
In this work, a radar imagery-based signal processing technique to eliminate dynamic clutter interference in Structural Health Monitoring (SHM) is proposed. This can be considered an application of a joint communication and sensing telecommunication infrastructure, leveraging a base station as a ground-based radar. The dynamic clutter is taken to be a fast-moving road user, such as a car, truck, or moped. The proposed technique targets cases in which the clutter's Doppler contribution aliases and falls onto the 0 Hz component, where a standard low-pass filter is not a viable option: an excessively shallow low-pass filter preserves the dynamic clutter contribution, while an excessively narrow one deletes the displacement information and still preserves the dynamic clutter. The proposed approach leverages Time Domain Backprojection (TDBP), a well-known technique for producing radar imagery, to transfer the dynamic clutter from the data domain to an image plane, where it is maximally compressed. Consequently, the dynamic clutter can be suppressed more effectively than in the range-Doppler domain; the cancellation itself is performed by coherent subtraction. A numerical simulation is conducted, and its results show consistency with the ground truth. A further validation is performed using real-world data acquired in the C-band by Huawei Technologies, with corner reflectors placed on an infrastructure, in particular a bridge, to perform the measurements. Two case studies are considered: a bus and a truck. The validation shows consistency with the ground truth, improving the mean error and its variance with respect to the corrupted displacement. As a by-product, the algorithm can produce high-resolution imagery of moving targets. Full article

25 pages, 15963 KB  
Article
Real-Time Lossless Compression System for Bayer Pattern Images with a Modified JPEG-LS
by Xufeng Li, Li Zhou and Yan Zhu
Mathematics 2025, 13(20), 3245; https://doi.org/10.3390/math13203245 - 10 Oct 2025
Viewed by 507
Abstract
Real-time lossless image compression based on the JPEG-LS algorithm is in high demand for critical missions such as satellite remote sensing and space exploration due to its excellent balance between complexity and compression rate. However, few researchers have made appropriate modifications to the JPEG-LS algorithm to make it more suitable for high-speed hardware implementation and application to Bayer pattern data. This paper addresses the current limitations by proposing a real-time lossless compression system specifically tailored for Bayer pattern images from spaceborne cameras. The system integrates a hybrid encoding strategy modified from JPEG-LS, combining run-length encoding, predictive encoding, and a non-encoding mode to facilitate high-speed hardware implementation. Images are processed in tiles, with each tile’s color channels processed independently to preserve individual channel characteristics. Moreover, potential error propagation is confined within a single tile. To enhance throughput, the compression algorithm operates within a 20-stage pipeline architecture. Duplication of computation units and the introduction of key-value registers and a bypass mechanism resolve structural and data dependency hazards within the pipeline. A reorder architecture prevents pipeline blocking, further optimizing system throughput. The proposed architecture is implemented on a XILINX XC7Z045-2FFG900C SoC (Xilinx, Inc., San Jose, CA, USA) and achieves a maximum throughput of up to 346.41 MPixel/s, making it the fastest architecture reported in the literature. Full article
(This article belongs to the Special Issue Complex System Dynamics and Image Processing)
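
The modified JPEG-LS pipeline above combines run-length, predictive, and non-encoding modes per tile. As a toy illustration of only the run-length component (the sample values are made up, and this is far simpler than the paper's 20-stage hardware pipeline), a short sketch:

```python
# Minimal run-length encoding sketch for one color channel of a tile;
# illustrative only, not the paper's modified JPEG-LS implementation.
import numpy as np

def rle_encode(row: np.ndarray) -> list[tuple[int, int]]:
    """Encode a 1-D array of samples as (value, run_length) pairs."""
    runs = []
    value, length = int(row[0]), 1
    for sample in row[1:]:
        if int(sample) == value:
            length += 1
        else:
            runs.append((value, length))
            value, length = int(sample), 1
    runs.append((value, length))
    return runs

tile_row = np.array([12, 12, 12, 12, 13, 13, 200, 12, 12])
print(rle_encode(tile_row))   # [(12, 4), (13, 2), (200, 1), (12, 2)]
```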

21 pages, 4796 KB  
Article
Deep Bayesian Optimization of Sparse Aperture for Compressed Sensing 3D ISAR Imaging
by Zongkai Yang, Jingcheng Zhao, Mengyu Zhang, Changyu Lou and Xin Zhao
Remote Sens. 2025, 17(19), 3380; https://doi.org/10.3390/rs17193380 - 7 Oct 2025
Viewed by 484
Abstract
High-resolution three-dimensional (3D) Inverse Synthetic Aperture Radar (ISAR) imaging is essential for the characterization of target scattering in various environments. The practical application of this technique is frequently impeded by the lengthy measurement time necessary for comprehensive data acquisition with turntable-based systems. Sub-sampling the aperture can decrease acquisition time; however, traditional reconstruction algorithms that utilize matched filtering exhibit significantly impaired imaging performance, often characterized by a high peak side-lobe ratio. A methodology is proposed that integrates compressed sensing (CS) theory with sparse-aperture optimization to achieve high-fidelity 3D imaging from sparsely sampled data. An optimized sparse sampling aperture is introduced to systematically balance the engineering requirement for efficient, continuous turntable motion with the low mutual coherence desired for the CS matrix. A deep Bayesian optimization framework was developed to automatically identify physically realizable optimal sampling trajectories, ensuring that the sensing matrix retains the necessary properties for accurate signal recovery. This method effectively addresses the high-sidelobe problem associated with traditional sparse techniques, significantly decreasing measurement duration while maintaining image quality. Quantitative experimental results indicate the method's efficacy: the optimized sparse aperture decreases the number of angular sampling points by roughly 84% compared to a full acquisition, while reconstructing images with a high correlation coefficient of 0.98 to the fully sampled reference. The methodology provides an effective solution for rapid, high-performance 3D ISAR imaging, achieving an optimal balance between data acquisition efficiency and reconstruction fidelity. Full article
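
The aperture optimization above seeks sampling trajectories whose sensing matrix has low mutual coherence. A small sketch of how that figure of merit is computed on a sub-sampled operator (the random matrix and sampling pattern are placeholders, not the paper's turntable geometry):

```python
# Sketch: mutual coherence of a sub-sampled sensing matrix, the quantity
# the aperture optimization tries to keep low (illustrative only).
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Largest normalized inner product between distinct columns of A."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    gram = np.abs(A.T @ A)
    np.fill_diagonal(gram, 0.0)
    return float(gram.max())

rng = np.random.default_rng(0)
full = rng.standard_normal((128, 64))            # hypothetical full-aperture operator
rows = rng.choice(128, size=20, replace=False)   # roughly 84% fewer samples
print("coherence (sub-sampled):", mutual_coherence(full[rows]))
```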

27 pages, 5542 KB  
Article
ILF-BDSNet: A Compressed Network for SAR-to-Optical Image Translation Based on Intermediate-Layer Features and Bio-Inspired Dynamic Search
by Yingying Kong and Cheng Xu
Remote Sens. 2025, 17(19), 3351; https://doi.org/10.3390/rs17193351 - 1 Oct 2025
Viewed by 451
Abstract
Synthetic aperture radar (SAR) exhibits all-day and all-weather capabilities, granting it significant application in remote sensing. However, interpreting SAR images requires extensive expertise, making SAR-to-optical remote sensing image translation a crucial research direction. While conditional generative adversarial networks (CGANs) have demonstrated exceptional performance in image translation tasks, their massive number of parameters poses substantial challenges. Therefore, this paper proposes ILF-BDSNet, a compressed network for SAR-to-optical image translation. Specifically, first, standard convolutions in the feature-transformation module of the teacher network are replaced with depthwise separable convolutions to construct the student network, and a dual-resolution collaborative discriminator based on PatchGAN is proposed. Next, knowledge distillation based on intermediate-layer features and channel pruning via weight sharing are designed to train the student network. Then, the bio-inspired dynamic search of channel configuration (BDSCC) algorithm is proposed to efficiently select the optimal subnet. Meanwhile, the pixel-semantic dual-domain alignment loss function is designed. The feature-matching loss within this function establishes an alignment mechanism based on intermediate-layer features from the discriminator. Extensive experiments demonstrate the superiority of ILF-BDSNet, which significantly reduces the number of parameters and the computational complexity while still generating high-quality optical images, providing an efficient solution for SAR image translation in resource-constrained environments. Full article
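
The compression step above swaps standard convolutions for depthwise separable ones in the student network. The arithmetic behind that saving, for one hypothetical 3×3 layer:

```python
# Quick parameter-count comparison: standard convolution vs. depthwise
# separable convolution (layer sizes are illustrative assumptions).
def conv_params(c_in: int, c_out: int, k: int) -> int:
    return c_in * c_out * k * k                 # standard convolution

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    return c_in * k * k + c_in * c_out          # depthwise + pointwise

c_in, c_out, k = 256, 256, 3
std, sep = conv_params(c_in, c_out, k), dw_separable_params(c_in, c_out, k)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For a 256-channel 3×3 layer this works out to roughly an 8.7× parameter reduction before distillation and channel pruning are applied on top.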

26 pages, 10719 KB  
Article
MPGH-FS: A Hybrid Feature Selection Framework for Robust Multi-Temporal OBIA Classification
by Xiangchao Xu, Huijiao Qiao, Zhenfan Xu and Shuya Hu
Sensors 2025, 25(18), 5933; https://doi.org/10.3390/s25185933 - 22 Sep 2025
Viewed by 540
Abstract
Object-Based Image Analysis (OBIA) generates high-dimensional features that frequently induce the curse of dimensionality, impairing classification efficiency and generalizability in high-resolution remote sensing images. To address these challenges while simultaneously overcoming the limitations of single-criterion feature selection and enhancing temporal adaptability, we propose a novel feature selection framework named Mutual information Pre-filtering and Genetic-Hill climbing hybrid Feature Selection (MPGH-FS), which integrates Mutual Information Correlation Coefficient (MICC) pre-filtering, Genetic Algorithm (GA) global search, and Hill Climbing (HC) local optimization. Experiments based on multi-temporal GF-2 imagery from 2018 to 2023 demonstrated that MPGH-FS could reduce the feature dimension from 232 to 9, and it achieved the highest Overall Accuracy (OA) of 85.55% and a Kappa coefficient of 0.75 in full-scene classification, with training and inference times limited to 6 s and 1 min, respectively. Cross-temporal transfer experiments further validated the method’s robustness to inter-annual variation within the same area, with classification accuracy fluctuations remaining below 4% across different years, outperforming comparative methods. These results confirm that MPGH-FS offers significant advantages in feature compression, classification performance, and temporal adaptability, providing a robust technical foundation for efficient and accurate multi-temporal remote sensing classification. Full article
(This article belongs to the Special Issue Remote Sensing Image Processing, Analysis and Application)
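
The MICC pre-filtering stage above ranks OBIA features by their dependence on the class labels before the genetic and hill-climbing searches run. A rough stand-in using scikit-learn's generic mutual information estimator (the synthetic feature matrix and the keep-10 cutoff are assumptions, not the authors' MICC criterion):

```python
# Sketch of mutual-information pre-filtering on synthetic OBIA-style features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))               # 300 image objects, 50 features
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)    # labels driven by two features

mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-10:]                      # retain the 10 most informative features
print("kept feature indices:", sorted(keep.tolist()))
```

Features 3 and 7 should rank near the top; in MPGH-FS this kind of shortlist is what the GA and HC stages then refine.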

25 pages, 7964 KB  
Article
DSCSRN: Physically Guided Symmetry-Aware Spatial-Spectral Collaborative Network for Single-Image Hyperspectral Super-Resolution
by Xueli Chang, Jintong Liu, Guotao Wen, Xiaoyu Huang and Meng Yan
Symmetry 2025, 17(9), 1520; https://doi.org/10.3390/sym17091520 - 12 Sep 2025
Viewed by 512
Abstract
Hyperspectral images (HSIs), with their rich spectral information, are widely used in remote sensing; yet the inherent trade-off between spectral and spatial resolution in imaging systems often limits spatial details. Single-image hyperspectral super-resolution (HSI-SR) seeks to recover high-resolution HSIs from a single low-resolution input, but the high dimensionality and spectral redundancy of HSIs make this task challenging. In HSIs, spectral signatures and spatial textures often exhibit intrinsic symmetries, and preserving these symmetries provides additional physical constraints that enhance reconstruction fidelity and robustness. To address these challenges, we propose the Dynamic Spectral Collaborative Super-Resolution Network (DSCSRN), an end-to-end framework that integrates physical modeling with deep learning and explicitly embeds spatial–spectral symmetry priors into the network architecture. DSCSRN processes low-resolution HSIs with a Cascaded Residual Spectral Decomposition Network (CRSDN) to compress redundant channels while preserving spatial structures, generating accurate abundance maps. These maps are refined by two Synergistic Progressive Feature Refinement Modules (SPFRMs), which progressively enhance spatial textures and spectral details via a multi-scale dual-domain collaborative attention mechanism. The Dynamic Endmember Adjustment Module (DEAM) then adaptively updates spectral endmembers according to scene context, overcoming the limitations of fixed-endmember assumptions. Grounded in the Linear Mixture Model (LMM), this unmixing–recovery–reconstruction pipeline restores subtle spectral variations alongside improved spatial resolution. Experiments on the Chikusei, Pavia Center, and CAVE datasets show that DSCSRN outperforms state-of-the-art methods in both perceptual quality and quantitative performance, achieving an average PSNR of 43.42 and a SAM of 1.75 (×4 scale) on Chikusei. The integration of symmetry principles offers a unifying perspective aligned with the intrinsic structure of HSIs, producing reconstructions that are both accurate and structurally consistent. Full article
(This article belongs to the Section Computer)
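
DSCSRN is grounded in the Linear Mixture Model, in which each pixel spectrum is a weighted combination of endmember spectra. A toy NumPy sketch of that forward model and a least-squares unmixing of it (dimensions and random spectra are illustrative, not the network's learned abundances):

```python
# Linear Mixture Model toy example: X = E @ A with abundances summing to one.
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels = 120, 5, 4096
E = np.abs(rng.standard_normal((bands, endmembers)))     # endmember spectra
A = rng.dirichlet(np.ones(endmembers), size=pixels).T    # abundances per pixel
X = E @ A                                                # (bands, pixels) mixed spectra

A_hat, *_ = np.linalg.lstsq(E, X, rcond=None)            # least-squares unmixing
print("abundance recovery error:", float(np.linalg.norm(A_hat - A)))
```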

19 pages, 2646 KB  
Article
A Comprehensive Study of MCS-TCL: Multi-Functional Sampling for Trustworthy Compressive Learning
by Fuma Kimishima, Jian Yang and Jinjia Zhou
Information 2025, 16(9), 777; https://doi.org/10.3390/info16090777 - 7 Sep 2025
Viewed by 432
Abstract
Compressive Learning (CL) is an emerging paradigm that allows machine learning models to perform inference directly from compressed measurements, significantly reducing sensing and computational costs. While existing CL approaches have achieved competitive accuracy compared to traditional image-domain methods, they typically rely on reconstruction to address information loss and often neglect uncertainty arising from ambiguous or insufficient data. In this work, we propose MCS-TCL, a novel and trustworthy CL framework based on Multi-functional Compressive Sensing Sampling. Our approach unifies sampling, compression, and feature extraction into a single operation by leveraging the compatibility between compressive sensing and convolutional feature learning. This joint design enables efficient signal acquisition while preserving discriminative information, leading to feature representations that remain robust across varying sampling ratios. To enhance the model’s reliability, we incorporate evidential deep learning (EDL) during training. EDL estimates the distribution of evidence over output classes, enabling the model to quantify predictive uncertainty and assign higher confidence to well-supported predictions. Extensive experiments on image classification tasks show that MCS-TCL outperforms existing CL methods, achieving state-of-the-art accuracy at a low sampling rate of 6%. Additionally, our framework reduces model size by 85.76% while providing meaningful uncertainty estimates, demonstrating its effectiveness in resource-constrained learning scenarios. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
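
The trustworthiness component above relies on evidential deep learning, which places a Dirichlet distribution over class probabilities. A small sketch of the standard Dirichlet bookkeeping, converting non-negative evidence into probabilities plus an uncertainty mass; the exact head and loss used in MCS-TCL may differ:

```python
# Standard evidential-deep-learning style outputs from per-class evidence.
import numpy as np

def edl_outputs(evidence: np.ndarray):
    """evidence: non-negative per-class evidence from the network head."""
    alpha = evidence + 1.0                  # Dirichlet concentration parameters
    s = alpha.sum()
    prob = alpha / s                        # expected class probabilities
    uncertainty = len(alpha) / s            # K / S, shrinks as evidence grows
    return prob, uncertainty

confident = np.array([0.1, 25.0, 0.3, 0.2])     # strong evidence for class 1
ambiguous = np.array([0.4, 0.6, 0.5, 0.3])      # weak evidence everywhere
for ev in (confident, ambiguous):
    p, u = edl_outputs(ev)
    print(np.round(p, 2), "uncertainty:", round(float(u), 2))
```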

24 pages, 2585 KB  
Article
Comprehensive Examination of Unrolled Networks for Solving Linear Inverse Problems
by Yuxi Chen, Xi Chen, Arian Maleki and Shirin Jalali
Entropy 2025, 27(9), 929; https://doi.org/10.3390/e27090929 - 3 Sep 2025
Viewed by 902
Abstract
Unrolled networks have become prevalent in various computer vision and imaging tasks. Although they have demonstrated remarkable efficacy in solving specific computer vision and computational imaging tasks, their adaptation to other applications presents considerable challenges. This is primarily due to the multitude of design decisions that practitioners working on new applications must navigate, each potentially affecting the network’s overall performance. These decisions include selecting the optimization algorithm, defining the loss function, and determining the deep architecture, among others. Compounding the issue, evaluating each design choice requires time-consuming simulations to train, fine-tune the neural network, and optimize its performance. As a result, the process of exploring multiple options and identifying the optimal configuration becomes time-consuming and computationally demanding. The main objectives of this paper are (1) to unify some ideas and methodologies used in unrolled networks to reduce the number of design choices a user has to make, and (2) to report a comprehensive ablation study to discuss the impact of each of the choices involved in designing unrolled networks and present practical recommendations based on our findings. We anticipate that this study will help scientists and engineers to design unrolled networks for their applications and diagnose problems within their networks efficiently. Full article
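
Unrolled networks fix the number of iterations of an optimization algorithm and treat each iteration as a layer. A bare-bones NumPy sketch of unrolled ISTA for a sparse linear inverse problem, with hand-set rather than learned step size and threshold, purely to illustrate the structure whose design choices the paper ablates:

```python
# Fixed-depth (unrolled) ISTA for min ||Ax - y||^2 + lam * ||x||_1.
import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(A: np.ndarray, y: np.ndarray, layers: int = 100,
                  lam: float = 0.05) -> np.ndarray:
    """Each 'layer' is one gradient step on ||Ax - y||^2 plus soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(layers):                      # fixed unrolling depth
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 1.0
x_hat = unrolled_ista(A, A @ x_true, layers=100)
print("true support:   ", sorted(np.flatnonzero(x_true).tolist()))
print("largest |x_hat|:", sorted(np.argsort(-np.abs(x_hat))[:10].tolist()))
```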

19 pages, 5242 KB  
Article
Single-Pixel Three-Dimensional Compressive Imaging System Using Volume Structured Illumination
by Yanbing Jiang and Shaoshuo Mu
Electronics 2025, 14(17), 3463; https://doi.org/10.3390/electronics14173463 - 29 Aug 2025
Viewed by 704
Abstract
Single-pixel imaging enables two-dimensional image capture through a single-pixel detector, yet extending this to three-dimensional or higher-dimensional information capture in single-pixel optical imaging systems has remained a challenging problem. In this study, we present a single-pixel camera system for three-dimensional (3D) imaging based on compressed sensing (CS) with continuous wave (CW) pseudo-random volume structured illumination. An estimated image, which incorporates both spatial and depth information of a 3D scene, is reconstructed using an L1-norm minimization reconstruction algorithm. This algorithm employs prior knowledge of non-overlapping objects as a constraint in the target space, resulting in improved noise performance in both numerical simulations and physical experiments. Our simulations and experiments demonstrate the feasibility of the proposed 3D CS framework. This approach achieves compressive sensing in a 3D information capture system with a measurement ratio of 19.53%. Additionally, we show that our CS 3D capturing system can accurately reconstruct the color of a target using color filter modulation. Full article
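
In single-pixel compressive imaging, each detector reading is the inner product of the scene with one illumination pattern. A toy 2-D sketch of that measurement model at the paper's 19.53% measurement ratio; the binary random patterns and toy scene are assumptions, and the paper's system uses volume-structured illumination with an L1-norm solver for recovery:

```python
# Single-pixel measurement model: y_m = <pattern_m, scene>.
import numpy as np

rng = np.random.default_rng(2)
scene = np.zeros((32, 32))
scene[10:14, 20:25] = 1.0                                  # toy target
n_pix, ratio = scene.size, 0.1953                          # ~19.53% measurement ratio
patterns = rng.integers(0, 2, size=(int(ratio * n_pix), n_pix)).astype(float)
y = patterns @ scene.ravel()                               # one scalar per pattern
print("measurements:", y.shape[0], "of", n_pix, "pixels")
```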

20 pages, 7901 KB  
Article
Millimeter-Wave Interferometric Synthetic Aperture Radiometer Imaging via Non-Local Similarity Learning
by Jin Yang, Zhixiang Cao, Qingbo Li and Yuehua Li
Electronics 2025, 14(17), 3452; https://doi.org/10.3390/electronics14173452 - 29 Aug 2025
Viewed by 520
Abstract
In this study, we propose a novel pixel-level non-local similarity (PNS)-based reconstruction method for millimeter-wave interferometric synthetic aperture radiometer (InSAR) imaging. Unlike traditional compressed sensing (CS) methods, which rely on predefined sparse transforms and often introduce artifacts, our approach leverages structural redundancies in InSAR images through an enhanced sparse representation model with dynamically filtered coefficients. This design simultaneously preserves fine details and suppresses noise interference. Furthermore, an iterative refinement mechanism incorporates raw sampled data fidelity constraints, enhancing reconstruction accuracy. Simulation and physical experiments demonstrate that the proposed InSAR-PNS method significantly outperforms conventional techniques: it achieves a 1.93 dB average peak signal-to-noise ratio (PSNR) improvement over CS-based reconstruction while operating at reduced sampling ratios compared to Nyquist-rate fast Fourier transform (FFT) methods. The framework provides a practical and efficient solution for high-fidelity millimeter-wave InSAR imaging under sub-Nyquist sampling conditions. Full article

23 pages, 6848 KB  
Review
The Expanding Frontier: The Role of Artificial Intelligence in Pediatric Neuroradiology
by Alessia Guarnera, Antonio Napolitano, Flavia Liporace, Fabio Marconi, Maria Camilla Rossi-Espagnet, Carlo Gandolfo, Andrea Romano, Alessandro Bozzao and Daniela Longo
Children 2025, 12(9), 1127; https://doi.org/10.3390/children12091127 - 27 Aug 2025
Viewed by 1232
Abstract
Artificial intelligence (AI) is revolutionizing the entire landscape of medicine and particularly the privileged field of radiology, since the field produces a significant amount of data, namely, images. Currently, AI implementation in radiology is continuously increasing, from automating image analysis to enhancing workflow management, and specifically, pediatric neuroradiology is emerging as an expanding frontier. Pediatric neuroradiology presents unique opportunities and challenges since neonates' and small children's brains are continuously developing, with age-specific changes in terms of anatomy, physiology, and disease presentation. By enhancing diagnostic accuracy, reducing reporting times, and enabling earlier intervention, AI has the potential to significantly impact clinical practice and patients' quality of life and outcomes. For instance, AI reduces MRI and CT scanner time by employing advanced deep learning (DL) algorithms to accelerate image acquisition through compressed sensing and undersampling, and to enhance image reconstruction by denoising and super-resolving low-quality datasets, thereby producing diagnostic-quality images with significantly fewer data points and in a shorter timeframe. Furthermore, as healthcare systems become increasingly burdened by rising demands and limited radiology workforce capacity, AI offers a practical solution to support clinical decision-making, particularly in institutions where pediatric neuroradiology expertise is limited. For example, the MELD (Multicenter Epilepsy Lesion Detection) algorithm is specifically designed to help radiologists find focal cortical dysplasias (FCDs), which are a common cause of drug-resistant epilepsy. It works by analyzing a patient's MRI scan and comparing a wide range of features, such as cortical thickness and folding patterns, to a large database of scans from both healthy individuals and epilepsy patients. By identifying subtle deviations from normal brain anatomy, the MELD graph algorithm can highlight potential lesions that are often missed by the human eye, which is a critical step in identifying patients who could benefit from life-changing epilepsy surgery. On the other hand, the integration of AI into pediatric neuroradiology faces technical and ethical challenges, such as data scarcity and ethical and legal restrictions on pediatric data sharing, that complicate the development of robust and generalizable AI models. Moreover, many radiologists remain sceptical of AI's interpretability and reliability, and there are also important medico-legal questions around responsibility and liability when AI systems are involved in clinical decision-making. Promising perspectives for overcoming these concerns include federated learning and collaborative research and AI development, which require technological innovation and multidisciplinary collaboration between neuroradiologists, data scientists, ethicists, and pediatricians. The paper aims to address: (1) current applications of AI in pediatric neuroradiology; (2) current challenges and ethical considerations related to AI implementation in pediatric neuroradiology; and (3) future opportunities in the clinical and educational pediatric neuroradiology field. AI in pediatric neuroradiology is not meant to replace neuroradiologists, but to amplify human intellect and extend our capacity to diagnose, prognosticate, and treat with unprecedented precision and speed. Full article

21 pages, 4917 KB  
Article
A High-Capacity Reversible Data Hiding Scheme for Encrypted Hyperspectral Images Using Multi-Layer MSB Block Labeling and ERLE Compression
by Yijie Lin, Chia-Chen Lin, Zhe-Min Yeh, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2025, 17(8), 378; https://doi.org/10.3390/fi17080378 - 21 Aug 2025
Cited by 1 | Viewed by 533
Abstract
In the context of secure and efficient data transmission over the future Internet, particularly for remote sensing and geospatial applications, reversible data hiding (RDH) in encrypted hyperspectral images (HSIs) has emerged as a critical technology. This paper proposes a novel RDH scheme specifically designed for encrypted HSIs, offering enhanced embedding capacity without compromising data security or reversibility. The approach introduces a multi-layer block labeling mechanism that leverages the similarity of most significant bits (MSBs) to accurately locate embeddable regions. To minimize auxiliary information overhead, we incorporate an Extended Run-Length Encoding (ERLE) algorithm for effective label map compression. The proposed method achieves embedding rates of up to 3.79 bits per pixel per band (bpppb), while ensuring high-fidelity reconstruction, as validated by strong PSNR metrics. Comprehensive security evaluations using NPCR, UACI, and entropy confirm the robustness of the encryption. Extensive experiments across six standard hyperspectral datasets demonstrate the superiority of our method over existing RDH techniques in terms of capacity, embedding rate, and reconstruction quality. These results underline the method’s potential for secure data embedding in next-generation Internet-based geospatial and remote sensing systems. Full article
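
The scheme above labels blocks by the similarity of their most significant bits. A deliberately simplified sketch of that labeling step, using one band, a single layer, and a uniform block size; the "all pixels share the same top bits" rule shown here is an assumption, not the paper's exact multi-layer criterion:

```python
# Toy MSB-similarity block labeling for one hyperspectral band.
import numpy as np

def label_blocks(band: np.ndarray, block: int = 4, n_msb: int = 3) -> np.ndarray:
    h, w = band.shape
    labels = np.zeros((h // block, w // block), dtype=bool)
    shift = band.dtype.itemsize * 8 - n_msb
    msb = band >> shift                           # keep only the top n_msb bits
    for i in range(h // block):
        for j in range(w // block):
            tile = msb[i*block:(i+1)*block, j*block:(j+1)*block]
            labels[i, j] = bool(np.all(tile == tile[0, 0]))
    return labels

band = np.tile((np.arange(16) * 1024).astype(np.uint16), (16, 1))  # smooth stand-in band
labels = label_blocks(band)
print("embeddable blocks:", int(labels.sum()), "of", labels.size)
```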

25 pages, 6031 KB  
Article
Sparse Transform and Compressed Sensing Methods to Improve Efficiency and Quality in Magnetic Resonance Medical Imaging
by Santiago Villota and Esteban Inga
Sensors 2025, 25(16), 5137; https://doi.org/10.3390/s25165137 - 19 Aug 2025
Viewed by 1153
Abstract
This paper explores the application of transform-domain sparsification and compressed sensing (CS) techniques to improve the efficiency and quality of magnetic resonance imaging (MRI). We implement and evaluate three sparsifying methods—discrete wavelet transform (DWT), fast Fourier transform (FFT), and discrete cosine transform (DCT)—which are used to simulate subsampled reconstruction via inverse transforms. Additionally, one accurate CS reconstruction algorithm, basis pursuit (BP), using the L1-MAGIC toolbox, is implemented as a benchmark based on convex optimization with L1-norm minimization. Emphasis is placed on basis pursuit (BP), which satisfies the formal requirements of CS theory, including incoherent sampling and sparse recovery via nonlinear reconstruction. Each method is assessed in MATLAB R2024b using standardized DICOM images and varying sampling rates. The evaluation metrics include peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index measure (SSIM), execution time, memory usage, and compression efficiency. The results show that although discrete cosine transform (DCT) outperforms the others under simulation in terms of PSNR and SSIM, it is inconsistent with the physics of MRI acquisition. Conversely, basis pursuit (BP) offers a theoretically grounded reconstruction approach with acceptable accuracy and clinical relevance. Despite the limitations of a controlled experimental setup, this study establishes a reproducible benchmarking framework and highlights the trade-offs between the quality of transform-based reconstruction and computational complexity. Future work will extend this study by incorporating clinically validated CS algorithms with L0 and nonconvex Lp (0 < p < 1) regularization to align with state-of-the-art MRI reconstruction practices. Full article
(This article belongs to the Section Industrial Sensors)
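
The study above simulates subsampled reconstruction by sparsifying images in a transform domain and inverting the transform. A rough Python sketch of the DCT variant of that simulation, using SciPy rather than the paper's MATLAB/L1-MAGIC setup and a random array standing in for a DICOM slice:

```python
# Keep only the largest DCT coefficients, invert, and score the result.
import numpy as np
from scipy.fft import dctn, idctn

def dct_sparsify(img: np.ndarray, keep_ratio: float = 0.10) -> np.ndarray:
    coeffs = dctn(img, norm="ortho")
    k = int(keep_ratio * coeffs.size)
    thresh = np.sort(np.abs(coeffs), axis=None)[-k]
    coeffs[np.abs(coeffs) < thresh] = 0.0          # discard small coefficients
    return idctn(coeffs, norm="ortho")

def psnr(ref: np.ndarray, rec: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.random.rand(256, 256) * 255.0             # stand-in for a DICOM slice
rec = dct_sparsify(img, keep_ratio=0.10)
print(f"PSNR at 10% coefficients: {psnr(img, rec):.2f} dB")
```

Basis pursuit, the benchmark the paper emphasizes, replaces this thresholding with L1-minimization over incoherently sampled measurements.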

32 pages, 22267 KB  
Article
HAF-YOLO: Dynamic Feature Aggregation Network for Object Detection in Remote-Sensing Images
by Pengfei Zhang, Jian Liu, Jianqiang Zhang, Yiping Liu and Jiahao Shi
Remote Sens. 2025, 17(15), 2708; https://doi.org/10.3390/rs17152708 - 5 Aug 2025
Cited by 2 | Viewed by 1182
Abstract
The growing use of remote-sensing technologies has placed greater demands on object-detection algorithms, which still face challenges. This study proposes a hierarchical adaptive feature aggregation network (HAF-YOLO) to improve detection precision in remote-sensing images. It addresses issues such as small object size, complex backgrounds, scale variation, and dense object distributions by incorporating three core modules: dynamic-cooperative multimodal fusion architecture (DyCoMF-Arch), multiscale wavelet-enhanced aggregation network (MWA-Net), and spatial-deformable dynamic enhancement module (SDDE-Module). DyCoMF-Arch builds a hierarchical feature pyramid using multistage spatial compression and expansion, with dynamic weight allocation to extract salient features. MWA-Net applies wavelet-transform-based convolution to decompose features, preserving high-frequency detail and enhancing representation of small-scale objects. SDDE-Module integrates spatial coordinate encoding and multidirectional convolution to reduce localization interference and overcome fixed sampling limitations for geometric deformations. Experiments on the NWPU VHR-10 and DIOR datasets show that HAF-YOLO achieved mAP50 scores of 85.0% and 78.1%, improving on YOLOv8 by 4.8% and 3.1%, respectively. HAF-YOLO also maintained a low computational cost of 11.8 GFLOPs, outperforming other YOLO models. Ablation studies validated the effectiveness of each module and their combined optimization. This study presents a novel approach for remote-sensing object detection, with theoretical and practical value. Full article
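
MWA-Net relies on wavelet-transform-based convolution to keep the high-frequency detail that small objects depend on. A minimal sketch of the underlying 2-D wavelet decomposition using PyWavelets (assumed available; the Haar basis and feature-map size are placeholders, not the module's actual design):

```python
# 2-D DWT of a feature map: one low-frequency approximation plus three
# high-frequency detail bands (horizontal, vertical, diagonal).
import numpy as np
import pywt

feat = np.random.rand(64, 64).astype(np.float32)       # stand-in feature map
approx, (horiz, vert, diag) = pywt.dwt2(feat, "haar")
print("approximation:", approx.shape, "detail bands:", horiz.shape)
```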
