Search Results (22)

Search Parameters:
Keywords = compressive deconvolution

22 pages, 25659 KB  
Article
High-Resolution Imaging of Multi-Beam Uniform Linear Array Sonar Based on Two-Stage Sparse Deconvolution Method
by Jian Wang, Junhong Cui, Ruo Li, Haisen Li and Jing Wang
Remote Sens. 2026, 18(3), 403; https://doi.org/10.3390/rs18030403 - 25 Jan 2026
Viewed by 227
Abstract
Classical beamforming (CBF) constrains the accuracy and quality of underwater acoustic imaging by producing wide main lobes that reduce resolution, high sidelobes that cause leakage, and point-spread functions that blur targets. Existing approaches typically address only one of these issues at a time, limiting their ability to resolve multiple, interrelated problems simultaneously. In this study, we introduce a double-compression deconvolution high-resolution beamforming method designed to enhance multi-beam sonar imaging using an underwater uniform linear array. The proposed approach formulates imaging as a sparse deconvolution problem and suppresses off-target interference through two sparse constraints, thereby improving the sonar’s resolving capability. During sparse reconstruction, an auxiliary-parameter iterative shrinkage-thresholding algorithm is employed to recover azimuthal sparse signals with higher accuracy. Simulations and controlled pool experiments demonstrate that, relative to classical beamforming, the proposed method significantly improves resolution, suppresses off-target interference, expands the imaging intensity dynamic range, and yields clearer target representations. This study provides an effective strategy to mitigate intrinsic limitations in high-resolution underwater sonar imaging. Full article
(This article belongs to the Special Issue Underwater Remote Sensing: Status, New Challenges and Opportunities)
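The auxiliary-parameter iterative shrinkage-thresholding step mentioned in the abstract builds on the standard ISTA recipe for sparse deconvolution. The paper's exact two-stage algorithm is not reproduced here; the following is only a minimal generic sketch (the beam pattern, target positions, and parameter values are all illustrative):

```python
import numpy as np

def ista_sparse_deconv(y, H, lam=0.05, n_iter=200):
    """Recover a sparse x from y = H @ x + noise via plain ISTA."""
    L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = x - H.T @ (H @ x - y) / L             # gradient step on 0.5*||y - Hx||^2
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy azimuthal scene: three point targets blurred by a Gaussian "beam pattern".
rng = np.random.default_rng(0)
n = 64
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
H = np.array([np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)]).T
x_true = np.zeros(n)
x_true[[20, 30, 45]] = [1.0, 0.8, 0.6]
y = H @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ista_sparse_deconv(y, H)
```

The soft-threshold step is what enforces the sparsity constraint; the deconvolved `x_hat` concentrates energy near the true target bearings while zeroing most other cells.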
28 pages, 126976 KB  
Article
MRLF: Multi-Resolution Layered Fusion Network for Optical and SAR Images
by Jinwei Wang, Liang Ma, Bo Zhao, Zhenguang Gou, Yingzheng Yin and Guangcai Sun
Remote Sens. 2025, 17(22), 3740; https://doi.org/10.3390/rs17223740 - 17 Nov 2025
Cited by 1 | Viewed by 718
Abstract
To enhance the comprehensive representation capability and fusion accuracy of remote sensing information, this paper proposes a multi-resolution hierarchical fusion network (MRLF) tailored to the heterogeneous characteristics of optical and synthetic aperture radar (SAR) images. By constructing a hierarchical feature decoupling mechanism, the method decomposes input images into low-resolution global structural features and high-resolution local detail features. A residual compression module is employed to preserve multi-scale information, laying a complementary feature foundation for subsequent fusion. To address cross-modal radiometric discrepancies, a pre-trained complementary feature extraction model (CFEM) is introduced. The brightness distribution differences between SAR and fusion results are quantified using the Gram matrix, and mean-variance alignment constraints are applied to eliminate radiometric discontinuities. In the feature fusion stage, a dual-attention collaborative mechanism is designed, integrating channel attention to dynamically adjust modal weights and spatial attention to focus on complementary regions. Additionally, a learnable radiometric enhancement factor is incorporated to enable efficient collaborative representation of SAR textures and optical semantics. To maintain spatial consistency, hierarchical deconvolution and skip connections are further used to reconstruct low-resolution features, gradually restoring them to the original resolution. Experimental results demonstrate that MRLF significantly outperforms mainstream methods such as DenseFuse and SwinFusion on the Dongying and Xi’an datasets. The fused images achieve an information entropy (EN) of 6.72 and a structural similarity of 1.25, while maintaining stable complementary feature retention under large-scale scenarios. By enhancing multi-scale complementary features and optimizing radiometric consistency, this method provides a highly robust multi-modal representation scheme for all-weather remote sensing monitoring and disaster emergency response. Full article

24 pages, 5039 KB  
Article
EPIIC: Edge-Preserving Method Increasing Nuclei Clarity for Compression Artifacts Removal in Whole-Slide Histopathological Images
by Julia Merta and Michal Marczyk
Appl. Sci. 2025, 15(8), 4450; https://doi.org/10.3390/app15084450 - 17 Apr 2025
Viewed by 1291
Abstract
Hematoxylin and eosin (HE) staining is widely used in medical diagnosis. Stained slides provide crucial information to diagnose or monitor the progress of many diseases. Due to the large size of scanned images of whole tissues, a JPEG algorithm is commonly used for compression. This lossy compression method introduces artifacts visible as 8 × 8 pixel blocks and reduces overall quality, which may negatively impact further analysis. We propose a fully unsupervised Edge-Preserving method Increasing nucleI Clarity (EPIIC) for removing compression artifacts from whole-slide HE-stained images. The method is introduced in two versions, EPIIC and EPIIC Sobel, composed of stain deconvolution, gradient-based edge map estimation, and weighted smoothing. The performance of the method was evaluated using two image quality measures, PSNR and SSIM, and various datasets, including BreCaHAD with HE-stained histopathological images and five other natural image datasets, and compared with other edge-preserving filtering methods and a deep learning-based solution. The impact of compression artifacts removal on the nuclei segmentation task was tested using Hover-Net and STARDIST models. The proposed methods led to improved image quality in histopathological and natural images and better segmentation of cell nuclei compared to other edge-preserving filtering methods. The biggest improvement was observed for images compressed with a low compression quality factor. Compared to the method using neural networks, the developed algorithms have slightly worse performance in image enhancement, but they are superior in nuclei segmentation. EPIIC and EPIIC Sobel can efficiently remove compression artifacts, positively impacting the segmentation results of cell nuclei and overall image quality. Full article
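EPIIC's pipeline begins with stain deconvolution. The abstract does not spell out that step, so the sketch below shows only the classic Ruifrok–Johnston optical-density separation it presumably builds on; the stain matrix is the standard textbook H&E matrix, not necessarily the one EPIIC uses:

```python
import numpy as np

# Standard H&E stain OD vectors (Ruifrok & Johnston), rows = stains, cols = R, G, B.
STAIN_HE = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual third channel
])
STAIN_HE = STAIN_HE / np.linalg.norm(STAIN_HE, axis=1, keepdims=True)

def stain_deconvolve(rgb):
    """Per-pixel stain concentrations from an RGB image with values in (0, 1]."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))        # Beer-Lambert optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAIN_HE)
    return conc.reshape(rgb.shape)

# Round-trip check on a synthetic pixel with known stain concentrations.
c_true = np.array([0.8, 0.3, 0.0])
rgb = np.exp(-(c_true @ STAIN_HE)).reshape(1, 1, 3)
c_est = stain_deconvolve(rgb)[0, 0]
```

Because transmitted light attenuates multiplicatively, the log transform makes stain contributions additive, so separation reduces to a per-pixel matrix inversion.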

21 pages, 5973 KB  
Article
Coronary Artery Disease Detection Based on a Novel Multi-Modal Deep-Coding Method Using ECG and PCG Signals
by Chengfa Sun, Changchun Liu, Xinpei Wang, Yuanyuan Liu and Shilong Zhao
Sensors 2024, 24(21), 6939; https://doi.org/10.3390/s24216939 - 29 Oct 2024
Cited by 9 | Viewed by 4511
Abstract
Coronary artery disease (CAD) is an irreversible and fatal disease. It necessitates timely and precise diagnosis to slow CAD progression. Electrocardiogram (ECG) and phonocardiogram (PCG), conveying abundant disease-related information, are prevalent clinical techniques for early CAD diagnosis. Nevertheless, most previous methods have relied on single-modal data, restricting their diagnostic precision owing to information shortages. To address this issue and capture adequate information, the development of a multi-modal method becomes imperative. In this study, a novel multi-modal learning method is proposed to integrate both ECG and PCG for CAD detection. Using a deconvolution operation, a novel ECG-PCG coupling signal is first derived to enrich the diagnostic information. After constructing a modified recurrence plot, we build a parallel CNN to encode multi-modal information, involving ECG, PCG and ECG-PCG coupling deep-coding features. To remove irrelevant information while preserving discriminative features, we add an autoencoder network to compress the feature dimension. Final CAD classification is conducted by combining a support vector machine with the optimal multi-modal features. The experiment is validated on 199 simultaneously recorded ECG and PCG signals from non-CAD and CAD subjects, and achieves high performance with accuracy, sensitivity, specificity and F1-score of 98.49%, 98.57%, 98.57% and 98.89%, respectively. The result demonstrates the superiority of the proposed multi-modal method in overcoming information shortages of single-modal signals and outperforming existing models in CAD detection. This study highlights the potential of multi-modal deep-coding information, and offers wider insight into enhancing CAD diagnosis. Full article
(This article belongs to the Section Biomedical Sensors)
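The recurrence-plot encoding used above can be illustrated with the plain (unmodified) construction; the paper's "modified" variant is not specified in the abstract, so this sketch shows only the textbook form, with an illustrative threshold:

```python
import numpy as np

def recurrence_plot(x, eps=0.15):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances between samples
    return (d < eps).astype(np.uint8)

# A periodic signal produces the characteristic diagonal-line texture.
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_plot(np.sin(t))
```

The resulting 2-D image is what a CNN can then consume in place of the raw 1-D waveform.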

28 pages, 5691 KB  
Article
Optimizing Processing Parameters for NR/EBC Thermoplastic Vulcanizates: A Comprehensive Full Factorial Design of Experiments (DOE) Strategy
by Nataphon Phupewkeaw, Pongdhorn Sae-Oui and Chakrit Sirisinha
Polymers 2024, 16(14), 1963; https://doi.org/10.3390/polym16141963 - 9 Jul 2024
Cited by 3 | Viewed by 1723
Abstract
This research explores the development of thermoplastic vulcanizate (TPV) blends derived from natural rubber (NR) and ethylene–butene copolymer (EBC) using a specific blend ratio and melt mixing technique. A comprehensive full factorial design of experiments (DOE) methodology is employed to optimize the processing parameters. TPVs are produced through dynamic vulcanization, combining rubber crosslinking and melt blending within a thermoplastic matrix under high temperatures and shear. The physico-mechanical properties of these TPVs are then analyzed. The objective is to enhance their mechanical performance by assessing the influence of blend ratio, mixing temperature, rotor speed, and mixing time on crucial properties, including tensile strength, elongation at break, compression set, tear strength, and hardness. Analysis of variance (ANOVA) identifies the optimal processing conditions that significantly improve material performance. Validation is achieved through atomic force microscopy (AFM), confirming the phase-separated structure and, thus, the success of dynamic vulcanization. Rubber process analyzer (RPA) and dynamic mechanical analyzer (DMA) assessments provide insights into the viscoelastic behavior and dynamic mechanical responses. Deconvolution analysis of temperature-dependent tan δ peaks reveals intricate microstructural interactions influencing the glass transition temperature (Tg). The optimized TPVs exhibit enhanced stiffness and effective energy dissipation capabilities across a wide temperature range, making them suitable for applications demanding thermal and mechanical load resistance. This study underscores the pivotal role of precise processing control in tailoring the properties of NR/EBC TPVs for specialized industrial uses. It highlights the indispensable contribution of the DOE methodology to TPV optimization, advancing material science and engineering, particularly for industries requiring robust and flexible materials. Full article
(This article belongs to the Section Polymer Processing and Engineering)
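A full factorial DOE simply enumerates every combination of factor levels. The factor names and levels below are illustrative placeholders, not the paper's actual processing settings:

```python
import itertools

# Hypothetical processing factors for a TPV mixing study.
factors = {
    "mix_temp_C":   [160, 180],
    "rotor_rpm":    [60, 80, 100],
    "mix_time_min": [6, 10],
}

# Full factorial design: the run list covers every level combination.
runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]
print(len(runs))  # 2 * 3 * 2 = 12 runs
```

Each run dict then becomes one row of the experiment table whose responses (tensile strength, compression set, etc.) feed the ANOVA.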

18 pages, 1507 KB  
Review
Advances in Lensless Fluorescence Microscopy Design
by Somaiyeh Khoubafarin, Edmond Kwesi Dadson and Aniruddha Ray
Photonics 2024, 11(6), 575; https://doi.org/10.3390/photonics11060575 - 19 Jun 2024
Cited by 2 | Viewed by 3312
Abstract
Lensless fluorescence microscopy (LLFM) has emerged as a promising approach for biological imaging, offering a simplified, high-throughput, portable, and cost-effective substitute for conventional microscopy techniques by removing lenses in favor of directly recording fluorescent light on a digital sensor. However, there are several obstacles that this novel approach must overcome, such as restrictions on the resolution, field-of-view (FOV), signal-to-noise ratio (SNR), and multicolor-imaging capabilities. This review looks at the most current developments aimed at addressing these challenges and enhancing the performance of LLFM systems. To address these issues, computational techniques, such as deconvolution and compressive sensing, hardware modifications and structured illumination, customized filters, and the utilization of fiber-optic plates, have been implemented. Finally, this review emphasizes the numerous applications of LLFM in tissue analysis, pathogen detection, and cellular imaging, highlighting its adaptability and potential influence in a range of biomedical research and clinical diagnostic areas. Full article
(This article belongs to the Special Issue Advanced Photonic Sensing and Measurement II)

19 pages, 7785 KB  
Article
An Image Quality Improvement Method in Side-Scan Sonar Based on Deconvolution
by Jia Liu, Yan Pang, Lengleng Yan and Hanhao Zhu
Remote Sens. 2023, 15(20), 4908; https://doi.org/10.3390/rs15204908 - 11 Oct 2023
Cited by 6 | Viewed by 3365
Abstract
Side-scan sonar (SSS) is an important underwater imaging method that has high resolution and is convenient to use. However, due to the restriction of conventional pulse compression technology, the side-scan sonar beam sidelobe in the range direction is relatively high, which affects the definition and contrast of images. When working in a shallow-water environment, image quality is especially degraded by strong bottom reverberation or other targets on the seabed. To solve this problem, a method for image-quality improvement based on deconvolution is proposed herein. In this method, to increase the range resolution and lower the sidelobe, a deconvolution algorithm is employed to improve conventional pulse compression. In our simulations, the tolerance of the algorithm to different signal-to-noise ratios (SNRs) and its ability to resolve multi-target conditions were analyzed. Furthermore, the proposed method was applied to actual underwater data. The experimental results showed that the quality of underwater acoustic imaging could be effectively improved: the SNR and contrast ratio (CR) improved by 32% and 12.5%, respectively. Target segmentation results based on this method are also shown, and the accuracy of segmentation was effectively improved. Full article
(This article belongs to the Special Issue Advanced Array Signal Processing for Target Imaging and Detection)
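Deconvolving the pulse-compression point-spread function in the range direction can be sketched with a generic frequency-domain Wiener filter; this is a stand-in for, not a reproduction of, the paper's algorithm, and the triangular lobe and SNR value are illustrative:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=1000.0):
    """Frequency-domain Wiener deconvolution of y ≈ h * x (circular model)."""
    n = len(y)
    H = np.fft.rfft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # regularized inverse filter
    return np.fft.irfft(np.fft.rfft(y, n) * G, n)

# Two close point targets blurred by a triangular (sinc^2-like) range lobe.
n = 128
x = np.zeros(n)
x[40], x[46] = 1.0, 0.8
h = np.bartlett(17)                  # wide main lobe, high shoulders
y = np.convolve(x, h)[:n]            # linear convolution fits inside n samples
x_rec = wiener_deconvolve(y, h)
```

The `1/snr` term keeps the inverse filter bounded at the spectral nulls of the lobe, which is exactly where naive inverse filtering amplifies noise.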

19 pages, 84702 KB  
Article
Outlier Denoising Using a Novel Statistics-Based Mask Strategy for Compressive Sensing
by Weiqi Wang, Jidong Yang, Jianping Huang, Zhenchun Li and Miaomiao Sun
Remote Sens. 2023, 15(2), 447; https://doi.org/10.3390/rs15020447 - 11 Jan 2023
Cited by 10 | Viewed by 2663
Abstract
Denoising is always an important step in seismic processing, in order to produce high-quality data for subsequent imaging and inversion. Different types of noise can be suppressed using targeted denoising methods. For outlier noise with singular amplitudes, many classical denoising methods suffer from signal leakage. To mitigate this issue, we developed a statistics-based mask method and incorporated it into the compressive sensing (CS) framework, in order to remove outlier noise. A statistical analysis for seismic data amplitudes was first used to identify the locations of traces containing outlier noise. Then, the outlier trace locations were compared with a mask matrix generated by jitter sampling, and we replaced the sampled traces of the jitter mask that had the outlier noise with their nearby unsampled traces. The optimized sampling matrix enabled us to effectively identify and remove outliers. This optimized mask strategy converts an outlier denoising problem into a data reconstruction problem. Finally, a sparsely constrained inverse problem was solved using a soft-threshold iteration solver to recover signals at the null locations. The feasibility and adaptability of the proposed method were demonstrated through numerical experiments for synthetic and field data. The results showed that the proposed method outperformed the conventional f-x deconvolution and median filter method, and could accurately suppress outlier noise and recover missed expected signals. Full article
(This article belongs to the Special Issue Geophysical Data Processing in Remote Sensing Imagery)
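The statistical amplitude test that locates outlier traces is not fully specified in the abstract; a median/MAD rule is one common stand-in, sketched here with illustrative sizes and threshold:

```python
import numpy as np

def outlier_traces(data, k=10.0):
    """Flag traces whose peak amplitude is anomalously large.

    data: array of shape (n_traces, n_samples).
    Uses a median + k*MAD threshold on per-trace peak amplitudes.
    """
    level = np.max(np.abs(data), axis=1)               # per-trace amplitude level
    med = np.median(level)
    mad = np.median(np.abs(level - med)) + 1e-12       # robust spread estimate
    return np.where(level > med + k * mad)[0]

rng = np.random.default_rng(1)
d = rng.standard_normal((50, 200))
d[17] *= 50.0                                          # plant one outlier trace
flagged = outlier_traces(d)
```

Once flagged, such traces can be swapped out of the sampling mask exactly as the method describes, turning denoising into a reconstruction problem over the remaining traces.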

23 pages, 9473 KB  
Article
Deep Compressed Sensing Generation Model for End-to-End Extreme Observation and Reconstruction
by Han Diao, Xiaozhu Lin and Chun Fang
Appl. Sci. 2022, 12(23), 12176; https://doi.org/10.3390/app122312176 - 28 Nov 2022
Cited by 6 | Viewed by 2225
Abstract
Data transmission and storage are inseparable from compression technology. Compressed sensing directly undersamples and reconstructs data at a much lower sampling frequency than Nyquist, which reduces redundant sampling. However, the requirement of data sparsity in compressed sensing limits its application. The combination of neural-network-based generative models and compressed sensing breaks the limitation of data sparsity. Compressed sensing with extreme observations can reduce costs, but the reconstructions produced by the above methods under extreme observations are blurry. We address this problem by proposing an end-to-end observation and reconstruction method based on a deep compressed sensing generative model. Under the restricted isometry property (RIP) and the set-restricted eigenvalue condition (S-REC), data can be observed and reconstructed end to end. In MNIST extreme observation and reconstruction, end-to-end feasibility is verified against a random-input baseline: end-to-end reconstruction accuracy improves by 5.20% over random input and SSIM by 0.2200. In Fashion-MNIST extreme observation and reconstruction, the deconvolution generative model is shown to reconstruct better than the multi-layer perceptron: its end-to-end reconstruction accuracy is 2.49% higher than that of the multi-layer perceptron generative model, and its SSIM is 0.0532 higher. Full article
(This article belongs to the Special Issue AI, Machine Learning and Deep Learning in Signal Processing)

17 pages, 9485 KB  
Article
Signal Recovery from Randomly Quantized Data Using Neural Network Approach
by Ali Al-Shaikhi
Sensors 2022, 22(22), 8712; https://doi.org/10.3390/s22228712 - 11 Nov 2022
Viewed by 1673
Abstract
We present an efficient scheme based on a long short-term memory (LSTM) autoencoder for accurate seismic deconvolution in a multichannel setup. The technique is beneficial for compressing massive amounts of seismic data. The proposed robust estimation ensures the recovery of sparse reflectivity from acquired seismic data that have been under-quantized. By adjusting the quantization error, the technique considerably improves the robustness of the data to quantization error, thereby boosting the visual saliency of the seismic data compared to other existing algorithms. The framework has been validated on both field and synthetic seismic data sets, with the assessment carried out by comparison against the steepest descent and basis pursuit methods. The findings indicate that the proposed scheme significantly outperforms the other algorithms in two ways: first, spuriously or excessively estimated impulses are strongly suppressed, and second, the proposed estimate is much more robust to changes in the quantization interval. The tests on real and synthetic data sets reveal that the proposed LSTM autoencoder-based method yields the best results in terms of both quality and computational complexity when compared with existing methods. Finally, the relative reconstruction error (RRE), signal-to-reconstruction error ratio (SRER), and power spectral density (PSD) are used to evaluate the performance of the proposed algorithm. Full article
(This article belongs to the Section Environmental Sensing)

13 pages, 1410 KB  
Article
Accelerated Inference of Face Detection under Edge-Cloud Collaboration
by Weiwei Zhang, Hongbo Zhou, Jian Mo, Chenghui Zhen and Ming Ji
Appl. Sci. 2022, 12(17), 8424; https://doi.org/10.3390/app12178424 - 24 Aug 2022
Cited by 3 | Viewed by 3541
Abstract
Model compression makes it possible to deploy face detection models on devices with limited computing resources. Edge–cloud collaborative inference, as a new paradigm of neural network inference, can significantly reduce neural network inference latency. Inspired by these two techniques, this paper adopts a two-step acceleration strategy for the CenterNet model. First, model pruning is applied to the convolutional and deconvolutional layers to obtain a preliminary acceleration. Second, the neural network is partitioned by an optimizer to make full use of the computing resources on the edge and in the cloud, further accelerating inference. With the first strategy alone, we achieve a 62.12% reduction in inference latency compared to the state-of-the-art object detection model Blazeface. With the full two-step strategy, our method's inference latency is only 26.5% of the baseline's when the bandwidth is 500 kbps. Full article

7 pages, 1619 KB  
Communication
Signal-to-Noise Ratio Improvement for Multiple-Pinhole Imaging Using Supervised Encoder–Decoder Convolutional Neural Network Architecture
by Eliezer Danan, Nadav Shabairou, Yossef Danan and Zeev Zalevsky
Photonics 2022, 9(2), 69; https://doi.org/10.3390/photonics9020069 - 27 Jan 2022
Cited by 1 | Viewed by 2636
Abstract
Digital image devices have been widely applied in many fields, such as individual recognition and remote sensing. A captured image is a degraded version of the latent observation, where the degradation is affected by factors such as lighting and noise corruption. Specifically, noise is generated during transmission and compression of the unknown latent observation. Thus, it is essential to use image denoising techniques to remove noise and recover the latent observation from the given degraded image. In this research, a supervised encoder–decoder convolutional neural network was used to fix image distortion stemming from the limited accuracy of inverse-filter methods (Wiener filter, Lucy–Richardson deconvolution, etc.). In particular, we correct image degradation that mainly stems from duplications arising from multiple-pinhole array imaging. Full article
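The Lucy–Richardson baseline named above has a compact closed-form update. A minimal 1-D sketch (the blur kernel, iteration count, and test signal are illustrative, and real pinhole imaging is of course 2-D):

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution for nonnegative 1-D signals."""
    psf = psf / psf.sum()                          # PSF must integrate to 1
    x = np.full_like(y, y.mean())                  # flat positive initial guess
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(est, 1e-12)         # data / current prediction
        x = x * np.convolve(ratio, psf[::-1], mode="same")  # multiplicative update
    return x

# Blur two point sources with a Gaussian PSF, then sharpen them back.
n = 100
truth = np.zeros(n)
truth[30], truth[60] = 1.0, 0.7
g = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
blurred = np.convolve(truth, g / g.sum(), mode="same")
restored = richardson_lucy(blurred, g)
```

The multiplicative update keeps the estimate nonnegative by construction, which is the property that makes it attractive for intensity images.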

11 pages, 19818 KB  
Article
A Novel One-Pot Synthesis and Characterization of Silk Fibroin/α-Calcium Sulfate Hemihydrate for Bone Regeneration
by Aditi Pandey, Tzu-Sen Yang, Shu-Lien Cheng, Ching-Shuan Huang, Agnese Brangule, Aivaras Kareiva and Jen-Chang Yang
Polymers 2021, 13(12), 1996; https://doi.org/10.3390/polym13121996 - 18 Jun 2021
Cited by 11 | Viewed by 3105
Abstract
This study aims to fabricate silk fibroin/calcium sulfate (SF/CS) composites by one-pot synthesis for bone regeneration applications. The SF was harvested from degummed silkworm cocoons, dissolved in a solvent system comprising calcium chloride:ethanol:water (1:2:8), and then mixed with a stoichiometric amount of sodium sulfate to prepare various SF/CS composites. The crystal pattern, glass transition temperature, and chemical composition of the SF/CS samples were analyzed by XRD, DSC, and FTIR, respectively. These characterizations revealed the successful synthesis of pure calcium sulfate dihydrate (CSD) and of calcium sulfate hemihydrate (CSH) when combined with SF. Thermal analysis through DSC indicated molecular-level interaction between the SF and CS. The FTIR deconvolution spectra demonstrated an increase in the β-sheet content with increasing CS content in the composites. Investigation of the morphology of the composites using SEM revealed the formation of plate-like dihydrate in the pure CS sample, while rod-like structures of α-CSH surrounded by SF were observed in the composites. The compressive strength of the hydrated 10% and 20% SF-incorporated CSH composites showed a statistically significant, more than twofold enhancement in comparison to that of the pure CS samples. Reduced compressive strength was observed upon further increasing the SF content, possibly due to SF agglomeration that restricted its uniform distribution. Therefore, the one-pot synthesized SF/CS composites demonstrated suitable chemical, thermal, and morphological properties. However, additional biological analysis of their potential use as bone substitutes is required. Full article
(This article belongs to the Section Polymer Chemistry)

14 pages, 6132 KB  
Article
Blind Deconvolution Based on Compressed Sensing with bi-l0-l2-norm Regularization in Light Microscopy Image
by Kyuseok Kim and Ji-Youn Kim
Int. J. Environ. Res. Public Health 2021, 18(4), 1789; https://doi.org/10.3390/ijerph18041789 - 12 Feb 2021
Cited by 6 | Viewed by 2676
Abstract
Blind deconvolution of light microscopy images could improve the ability to distinguish cell-level substances. In this study, we investigated a blind deconvolution framework for light microscope images that combines the benefits of bi-l0-l2-norm regularization with compressed sensing and conjugate gradient algorithms. Several existing regularization approaches are limited by staircase (cartoon-like) artifacts and noise amplification. Thus, we implemented our strategy to overcome these problems using the proposed bi-l0-l2-norm regularization. It was investigated through simulations and experiments using optical microscopy images including background noise. Sharpness was improved through successful image restoration while minimizing noise amplification. In addition, quantitative factors of the restored images, including the intensity profile, root-mean-square error (RMSE), edge preservation index (EPI), structural similarity index measure (SSIM), and normalized noise power spectrum, were improved compared to those of existing or comparative images. In particular, the proposed method showed RMSE, EPI, and SSIM values of approximately 0.12, 0.81, and 0.88 when compared with the reference. Moreover, the RMSE, EPI, and SSIM values of the restored image improved by about 5.97, 1.26, and 1.61 times compared with the degraded image. Consequently, the proposed method is expected to be effective for image restoration and to reduce the cost of a high-performance light microscope. Full article
(This article belongs to the Section Digital Health)

18 pages, 37749 KB  
Article
CNN-Based Suppression of False Contour and Color Distortion in Bit-Depth Enhancement
by Changmeng Peng, Luting Cai, Xiaoyang Huang, Zhizhong Fu, Jin Xu and Xiaofeng Li
Sensors 2021, 21(2), 416; https://doi.org/10.3390/s21020416 - 8 Jan 2021
Cited by 5 | Viewed by 3407
Abstract
It is a challenge to transmit and store the massive visual data generated in the Visual Internet of Things (VIoT), so the compression of the visual data is of great significance to VIoT. Compressing bit-depth of images is very cost-effective to reduce the large volume of visual data. However, compressing the bit-depth will introduce false contour, and color distortion would occur in the reconstructed image. False contour and color distortion suppression become critical issues of the bit-depth enhancement in VIoT. To solve these problems, a Bit-depth Enhancement method with AUTO-encoder-like structure (BE-AUTO) is proposed in this paper. Based on the convolution-combined-with-deconvolution codec and global skip of BE-AUTO, this method can effectively suppress false contour and color distortion, thus achieving the state-of-the-art objective metric and visual quality in the reconstructed images, making it more suitable for bit-depth enhancement in VIoT. Full article
(This article belongs to the Special Issue Neural Networks and Deep Learning in Image Sensing)
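The false contours that BE-AUTO learns to remove come from the bit-depth compression step itself, which is easy to demonstrate (this sketches only the degradation, not BE-AUTO):

```python
import numpy as np

def reduce_bit_depth(img8, bits=4):
    """Keep only the top `bits` bits of an 8-bit image."""
    return img8 >> (8 - bits)

def expand_zero_pad(img_low, bits=4):
    """Naive bit-depth expansion: shift back up, padding zeros."""
    return img_low << (8 - bits)

ramp = np.arange(256, dtype=np.uint8)   # smooth 8-bit gradient
low = reduce_bit_depth(ramp)            # 4-bit version
rec = expand_zero_pad(low)              # reconstructed 8-bit image
print(len(np.unique(rec)))              # 16 distinct levels -> visible banding
```

A smooth 256-level ramp collapses to 16 flat steps; in a natural image those steps appear as the false contours and color shifts a learned enhancer must smooth away.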
