Search Results (44)

Search Parameters:
Keywords = unfolded imaging network

35 pages, 65594 KiB  
Article
An Ambitious Itinerary: Journey Across the Medieval Buddhist World in a Book, CUL Add.1643 (1015 CE)
by Jinah Kim
Religions 2025, 16(7), 900; https://doi.org/10.3390/rel16070900 - 14 Jul 2025
Viewed by 497
Abstract
A Sanskrit manuscript of the Prajñāpāramitā or Perfection of Wisdom in eight thousand verses, now in the Cambridge University Library, Add.1643, is one of the most ambitiously designed South Asian manuscripts from the eleventh century, with the highest number of painted panels known among the dated manuscripts from medieval South Asia until 1400 CE. Thanks to the unique occurrence of a caption written next to each painted panel, it is possible to identify most images in this manuscript as representing those of famous pilgrimage sites or auspicious images of specific locales. The iconographic program transforms Add.1643 into a portable device containing famous pilgrimage sites of the Buddhist world known to the makers and users of the manuscript in eleventh-century Nepal. It is one compact colorful package of a book, which can be opened and experienced in its unfolding three-dimensional space, like a virtual or imagined pilgrimage. Building on recent research focusing on early medieval Buddhist sites across Monsoon Asia and analyzing the representational potentials and ontological values of painting, this essay demonstrates how this early eleventh-century Nepalese manuscript (Add.1643) and its visual program document and remember the knowledge of maritime travels and the transregional and intraregional activities of people and ideas moving across Monsoon Asia. Despite being made in the Kathmandu Valley with a considerable physical distance from the actual sea routes, the sites remembered in the manuscript open a possibility to connect the dots of human movement beyond the known networks and routes of “world systems”.

27 pages, 1306 KiB  
Review
Recent Advancements in Hyperspectral Image Reconstruction from a Compressive Measurement
by Xian-Hua Han, Jian Wang and Huiyan Jiang
Sensors 2025, 25(11), 3286; https://doi.org/10.3390/s25113286 - 23 May 2025
Viewed by 766
Abstract
Hyperspectral (HS) image reconstruction has become a pivotal research area in computational imaging, facilitating the recovery of high-resolution spectral information from compressive snapshot measurements. With the rapid advancement of deep neural networks, reconstruction techniques have achieved significant improvements in both accuracy and computational efficiency, enabling more precise spectral recovery across a wide range of applications. This survey presents a comprehensive overview of recent progress in HS image reconstruction, systematically categorized into three main paradigms: traditional model-based methods, deep learning-based approaches, and hybrid frameworks that integrate data-driven priors with the mathematical modeling of the degradation process. We examine the foundational principles, strengths, and limitations of each category, with particular attention to developments such as sparsity and low-rank priors in model-based methods, the evolution from convolutional neural networks to Transformer architectures in learning-based approaches, and deep unfolding strategies in hybrid models. Furthermore, we review benchmark datasets, evaluation metrics, and prevailing challenges including spectral distortion, computational cost, and generalizability across diverse conditions. Finally, we outline potential research directions to address current limitations. This survey aims to provide a valuable reference for researchers and practitioners striving to advance the field of HS image reconstruction.
(This article belongs to the Special Issue Feature Review Papers in Optical Sensors)

20 pages, 54664 KiB  
Article
Lensless Digital Holographic Reconstruction Based on the Deep Unfolding Iterative Shrinkage Thresholding Network
by Duofang Chen, Zijian Guo, Huidi Guan and Xueli Chen
Electronics 2025, 14(9), 1697; https://doi.org/10.3390/electronics14091697 - 22 Apr 2025
Viewed by 509
Abstract
Without using any optical lenses, lensless digital holography (LDH) records the hologram of a sample and numerically retrieves the amplitude and phase of the sample from the hologram. Such lensless imaging designs have enabled high-resolution and high-throughput imaging of specimens using compact, portable, and cost-effective devices to potentially address various point-of-care-, global health-, and telemedicine-related challenges. However, in lensless digital holography, the reconstruction results are severely affected by zero-order noise and twin images, as only the hologram intensity can be recorded. To mitigate such interference and enhance image quality, extensive efforts have been made. In recent years, deep learning (DL)-based approaches have made significant advancements in the field of LDH reconstruction. However, most deep learning networks are regarded as black-box models, which poses challenges in terms of interpretability. Here, we present a deep unfolding network, dubbed the ISTAHolo-Net, for LDH reconstruction. The ISTAHolo-Net replaces the traditional iterative update steps with a fixed number of sub-networks and the regularization weights with learnable parameters. Every sub-network consists of two modules: a gradient descent module (GDM) and a proximal mapping module (PMM). The ISTAHolo-Net incorporates the sparsity-constrained inverse problem model into the neural network and hence combines the interpretability of traditional iterative algorithms with the learning capabilities of neural networks. Simulation and real experiments were conducted to verify the effectiveness of the proposed reconstruction method. The performance of the proposed method was compared with the angular spectrum method (ASM), the HRNet, the Y-Net, and the DH-GAN. The results show that the DL-based reconstruction algorithms can effectively reduce the interference of twin images, thereby improving image reconstruction quality, and the proposed ISTAHolo-Net performs best on our dataset.
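The GDM/PMM pairing described in this abstract mirrors one step of the classic iterative shrinkage-thresholding algorithm (ISTA), which deep unfolding runs for a fixed number of stages. As a rough illustration (not the ISTAHolo-Net itself: a generic linear forward model and hand-set step size and thresholds stand in for the learned parameters), an unfolded ISTA loop can be sketched in NumPy:

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of theta * ||.||_1 (the PMM in one unfolded stage)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, A, num_stages=5, step=None, thetas=None):
    """Run a fixed number of ISTA stages for 0.5*||A x - y||^2 + theta*||x||_1.
    `step` and `thetas` stand in for parameters that a network such as
    ISTAHolo-Net would learn from data."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    if thetas is None:
        thetas = [0.01] * num_stages
    x = np.zeros(n)
    for theta in thetas:
        x = x - step * A.T @ (A @ x - y)         # gradient descent module
        x = soft_threshold(x, theta)             # proximal mapping module
    return x
```

In the network version, each `theta` (and often the step size) becomes a trainable parameter of its own sub-network.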

25 pages, 11034 KiB  
Article
A Novel Deep Unfolding Network for Multi-Band SAR Sparse Imaging and Autofocusing
by Xiaopeng Li, Mengyang Zhan, Yiheng Liang, Yinwei Li, Gang Xu and Bingnan Wang
Remote Sens. 2025, 17(7), 1279; https://doi.org/10.3390/rs17071279 - 3 Apr 2025
Viewed by 373
Abstract
The sparse imaging network of synthetic aperture radar (SAR) is usually designed end to end and has limited adaptability to radar systems of different bands. Meanwhile, the implementation of the sparse imaging algorithm depends on the sparsity of the target scene and usually adopts a fixed L1 regularization solution, which reconstructs complex scenes poorly. In this paper, a novel SAR imaging deep unfolding network based on approximate observation is proposed for multi-band SAR systems. Firstly, the approximate observation module is separated from the optimal solution network model and selected according to the multi-band radar echo. Secondly, to realize SAR imaging of non-sparse scenes, Lp regularization is used to constrain the uncertain transform domain of the target scene, and the Lp parameters are optimized adaptively in a data-driven manner. Furthermore, considering that phase errors may be introduced into the echo during acquisition by a real SAR system, an error estimation module is added to the network to estimate and compensate for them. Finally, results from both simulated and real data experiments demonstrate that the proposed method performs well on 0.22 THz and 9.6 GHz echo data: high-resolution focused SAR images are achieved under four different sparsity conditions of 20%, 40%, 60%, and 80%. These results validate the strong adaptability and robustness of the proposed method to diverse SAR system configurations and complex target scenarios.
(This article belongs to the Special Issue Microwave Remote Sensing for Object Detection (2nd Edition))

19 pages, 3422 KiB  
Article
Dual-Ascent-Inspired Transformer for Compressed Sensing
by Rui Lin, Yue Shen and Yu Chen
Sensors 2025, 25(7), 2157; https://doi.org/10.3390/s25072157 - 28 Mar 2025
Viewed by 425
Abstract
Deep learning has revolutionized image compressed sensing (CS) by enabling lightweight models that achieve high-quality reconstruction with low latency. However, most deep neural network-based CS models are pre-trained for specific compression ratios (CS ratios), limiting their flexibility compared to traditional iterative algorithms. To address this limitation, we propose the Dual-Ascent-Inspired Transformer (DAT), a novel architecture that maintains stable performance across different compression ratios with minimal training costs. DAT’s design incorporates the mathematical properties of the dual ascent method (DAM), leading to accelerated training convergence. The architecture features an innovative asymmetric primal–dual space at each iteration layer, enabling dimension-specific operations that balance reconstruction quality with computational efficiency. We also optimize the Cross Attention module through parameter sharing, effectively reducing its training complexity. Experimental results demonstrate DAT’s superior performance in two key aspects: First, during early-stage training (within 10 epochs), DAT consistently outperforms existing methods across multiple CS ratios (10%, 30%, and 50%). Notably, DAT achieves comparable PSNR to the ISTA-Net+ baseline within just one epoch, while competing methods require significantly more training time. Second, DAT exhibits enhanced robustness to variations in initial learning rates, as evidenced by loss function analysis during training.
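For readers unfamiliar with the dual ascent method (DAM) that inspires DAT's primal–dual iteration layers, a minimal sketch on the simplest constrained problem may help. This is textbook dual ascent, not the paper's architecture; the minimum-norm objective is chosen purely because its primal step has a closed form:

```python
import numpy as np

def dual_ascent_min_norm(A, b, alpha=None, iters=2000):
    """Dual ascent for: minimize 0.5*||x||^2 subject to A @ x = b.
    The primal step has the closed form x = -A.T @ lam; the dual variable
    then ascends along the constraint residual -- the primal/dual alternation
    that DAT's asymmetric primal-dual layers mirror."""
    m, n = A.shape
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(A @ A.T, 2)  # safe dual step size
    lam = np.zeros(m)
    for _ in range(iters):
        x = -A.T @ lam                    # primal minimization (closed form)
        lam = lam + alpha * (A @ x - b)   # dual gradient ascent
    return x, lam
```

At convergence the residual `A @ x - b` vanishes and `x` is the minimum-norm solution, matching `np.linalg.pinv(A) @ b`.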

22 pages, 5056 KiB  
Article
SAAS-Net: Self-Supervised Sparse Synthetic Aperture Radar Imaging Network with Azimuth Ambiguity Suppression
by Zhiyi Jin, Zhouhao Pan, Zhe Zhang and Xiaolan Qiu
Remote Sens. 2025, 17(6), 1069; https://doi.org/10.3390/rs17061069 - 18 Mar 2025
Viewed by 437
Abstract
Sparse Synthetic Aperture Radar (SAR) imaging has garnered significant attention due to its ability to suppress azimuth ambiguity in under-sampled conditions, making it particularly useful for high-resolution wide-swath (HRWS) SAR systems. Traditional compressed sensing-based sparse SAR imaging algorithms are hindered by range–azimuth coupling induced by range cell migration (RCM), which results in high computational cost and limits their applicability to large-scale imaging scenarios. To address this challenge, the approximated observation-based sparse SAR imaging algorithm was developed, which decouples the range and azimuth directions, significantly reducing computational and temporal complexities to match the performance of conventional matched filtering algorithms. However, this method requires iterative processing and manual adjustment of parameters. In this paper, we propose a novel deep neural network-based sparse SAR imaging method, namely the Self-supervised Azimuth Ambiguity Suppression Network (SAAS-Net). Unlike traditional iterative algorithms, SAAS-Net directly learns the parameters from data, eliminating the need for manual tuning. This approach not only improves imaging quality but also accelerates the imaging process. Additionally, SAAS-Net retains the core advantage of sparse SAR imaging—azimuth ambiguity suppression in under-sampling conditions. The method introduces self-supervision to achieve azimuth ambiguity suppression without altering the hardware architecture. Simulations and real data experiments using Gaofen-3 validate the effectiveness and superiority of the proposed approach.

17 pages, 6702 KiB  
Article
A Variational Neural Network Based on Algorithm Unfolding for Image Blind Deblurring
by Shaoqing Gong, Yeran Wang, Guangyu Yang, Weibo Wei, Junli Zhao and Zhenkuan Pan
Appl. Sci. 2024, 14(24), 11742; https://doi.org/10.3390/app142411742 - 16 Dec 2024
Cited by 1 | Viewed by 1021
Abstract
Image blind deblurring is an ill-posed inverse problem in image processing. While deep learning approaches have demonstrated effectiveness, they often lack interpretability and require extensive data. To address these limitations, we propose a novel variational neural network based on algorithm unfolding. The model is solved using the half quadratic splitting (HQS) method and proximal gradient descent. For blur kernel estimation, we introduce an L0 regularizer to constrain the gradient information and use the fast Fourier transform (FFT) to solve the iterative subproblems, thereby improving accuracy. Image restoration is initiated with Gabor filters for the convolution kernel, and the activation function is approximated using a Gaussian radial basis function (RBF). Additionally, two attention mechanisms improve feature selection. The experimental results on various datasets demonstrate that our model outperforms state-of-the-art algorithm unfolding networks and other blind deblurring models. Our approach enhances interpretability and generalization while using less data and fewer parameters.
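The HQS strategy mentioned above alternates a quadratic subproblem, solvable in closed form via the FFT under a circular-convolution assumption, with a proximal step. A simplified, non-blind sketch with a known kernel and an L1 prior (illustrative parameters, not the paper's variational model) could look like:

```python
import numpy as np

def hqs_deconv(y, k, lam=0.01, mu=0.1, iters=50):
    """Half quadratic splitting for 0.5*||k (*) x - y||^2 + lam*||x||_1,
    assuming circular convolution. Each iteration alternates an FFT-based
    quadratic solve (x-step) with a soft-threshold (z-step) -- a simplified,
    non-blind stand-in for the splitting the paper unfolds."""
    K = np.fft.fft2(k, s=y.shape)            # kernel in the Fourier domain
    Y = np.fft.fft2(y)
    z = np.zeros_like(y)
    for _ in range(iters):
        # x-step: (|K|^2 + mu) X = conj(K) Y + mu Z, solved pointwise via FFT
        X = (np.conj(K) * Y + mu * np.fft.fft2(z)) / (np.abs(K) ** 2 + mu)
        x = np.real(np.fft.ifft2(X))
        # z-step: proximal operator of (lam/mu) * ||.||_1
        z = np.sign(x) * np.maximum(np.abs(x) - lam / mu, 0.0)
    return z
```

In an unfolded network, `mu`, `lam`, and the z-step's proximal operator would be replaced by learned modules, one per stage.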

17 pages, 1961 KiB  
Article
Mask-Guided Spatial–Spectral MLP Network for High-Resolution Hyperspectral Image Reconstruction
by Xian-Hua Han, Jian Wang and Yen-Wei Chen
Sensors 2024, 24(22), 7362; https://doi.org/10.3390/s24227362 - 18 Nov 2024
Cited by 1 | Viewed by 1494
Abstract
Hyperspectral image (HSI) reconstruction is a critical and indispensable step in coded aperture snapshot spectral imaging (CASSI) systems and directly affects our ability to capture high-quality images in dynamic environments. Recent research has increasingly focused on deep unfolding frameworks for HSI reconstruction, showing notable progress. However, these approaches have to break the optimization task into two sub-problems and solve them iteratively over multiple stages, which leads to large models and high computational overheads. This study presents a simple yet effective method that passes the degradation information (sensing mask) through a deep learning network to disentangle the degradation and the latent target’s representations. Specifically, we design a lightweight MLP block to capture non-local similarities and long-range dependencies across both spatial and spectral domains, and investigate an attention-based mask modelling module to achieve the spatial–spectral-adaptive degradation representation that is fed to the MLP-based network. To enhance the information flow between MLP blocks, we introduce a multi-level fusion module and apply reconstruction heads to different MLP features for deeper supervision. Additionally, we combine the projection loss from compressive measurements with reconstruction loss to create a dual-domain loss, ensuring consistent optical detection during HS reconstruction. Experiments on benchmark HS datasets show that our method outperforms state-of-the-art approaches in terms of both reconstruction accuracy and efficiency, reducing computational and memory costs.

18 pages, 6989 KiB  
Article
A Deep Unfolding Network for Multispectral and Hyperspectral Image Fusion
by Bihui Zhang, Xiangyong Cao and Deyu Meng
Remote Sens. 2024, 16(21), 3979; https://doi.org/10.3390/rs16213979 - 26 Oct 2024
Cited by 3 | Viewed by 1594
Abstract
Multispectral and hyperspectral image fusion (MS/HS fusion) aims to generate a high-resolution hyperspectral (HRHS) image by fusing a high-resolution multispectral (HRMS) image and a low-resolution hyperspectral (LRHS) image. The deep unfolding-based MS/HS fusion method is a representative deep learning paradigm due to its excellent performance and sufficient interpretability. However, existing deep unfolding-based MS/HS fusion methods rely only on a fixed linear degradation model, which focuses on modeling the relationships between HRHS and HRMS, as well as HRHS and LRHS. In this paper, we break free from this observation model framework and propose a new observation model. Firstly, the proposed observation model is built based on the convolutional sparse coding (CSC) technique, and then a proximal gradient algorithm is designed to solve it. Secondly, we unfold the iterative algorithm into a deep network, dubbed MHF-CSCNet, where the proximal operators are learned using convolutional neural networks. Finally, all trainable parameters can be automatically learned end-to-end from the training pairs. Experimental evaluations conducted on various benchmark datasets demonstrate the superiority of our method both quantitatively and qualitatively compared to other state-of-the-art methods.

16 pages, 3480 KiB  
Article
Fucoidan–Vegetable Oil Emulsion Applied to Myosin of Silver Carp: Effect on Protein Conformation and Heat-Induced Gel Properties
by Wei Wang, Lijuan Yan and Shumin Yi
Foods 2024, 13(20), 3220; https://doi.org/10.3390/foods13203220 - 10 Oct 2024
Cited by 1 | Viewed by 1457
Abstract
How to improve the gel properties of protein has become a research focus in the field of seafood processing. In this paper, a fucoidan (FU)–vegetable oil emulsion was prepared, and the mechanism behind the effect of the emulsion on protein conformation and heat-induced gel properties was studied. The results revealed that the FU–vegetable oil complex caused the aggregation and cross-linking of myosin, as well as increased the surface hydrophobicity and total sulfhydryl content of myosin. In addition, the addition of the compound (0.3% FU and 1% vegetable oil) significantly improved the gel strength, hardness, chewiness, and water-holding capacity of the myosin gel (p < 0.05). In particular, when the addition of camellia oil was 1%, the gel strength, hardness, chewiness, and water-holding capacity had the highest values of 612.47 g·mm, 406.80 g, 252.75 g, and 53.56%, respectively. Simultaneously, the emulsion (0.3% FU–1% vegetable oil) enhanced the hydrogen bonds and hydrophobic interactions of the myosin gels. Microstructure images showed that the emulsion with 0.3% FU–1% vegetable oil improved the formation of a stable three-dimensional network structure. In summary, the FU–vegetable oil complex can promote unfolding of the protein structure and improve the gel properties of myosin, thus providing a theoretical basis for the development of functional surimi products.
(This article belongs to the Section Food Physics and (Bio)Chemistry)

21 pages, 5400 KiB  
Article
Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging
by Yangke Ying, Jin Wang, Yunhui Shi and Nam Ling
Sensors 2024, 24(19), 6184; https://doi.org/10.3390/s24196184 - 24 Sep 2024
Viewed by 1874
Abstract
Recently, deep unfolding network methods have significantly progressed in hyperspectral snapshot compressive imaging. Many approaches directly employ Transformer models to boost the feature representation capabilities of algorithms. However, they often fall short of leveraging the full potential of self-attention mechanisms. Additionally, current methods lack adequate consideration of both intra-stage and inter-stage feature fusion, which hampers their overall performance. To tackle these challenges, we introduce a novel approach that hybridizes the sparse Transformer and wavelet fusion-based deep unfolding network for hyperspectral image (HSI) reconstruction. Our method includes the development of a spatial sparse Transformer and a spectral sparse Transformer, designed to capture spatial and spectral attention of HSI data, respectively, thus enhancing the Transformer’s feature representation capabilities. Furthermore, we incorporate wavelet-based methods for both intra-stage and inter-stage feature fusion, which significantly boosts the algorithm’s reconstruction performance. Extensive experiments across various datasets confirm the superiority of our proposed approach.

18 pages, 9754 KiB  
Article
Bridge Surface Defect Localization Based on Panoramic Image Generation and Deep Learning-Assisted Detection Method
by Tao Yin, Guodong Shen, Liang Yin and Guigang Shi
Buildings 2024, 14(9), 2964; https://doi.org/10.3390/buildings14092964 - 19 Sep 2024
Cited by 4 | Viewed by 1793
Abstract
Applying unmanned aerial vehicles (UAVs) and vision-based analysis methods to detect bridge surface damage significantly improves inspection efficiency, but existing techniques have difficulty locating damage accurately, which makes it hard to use the results to assess a bridge’s degree of deterioration. Therefore, this study proposes a method to generate panoramic bridge surface images from multi-view images captured by UAVs, in order to automatically identify and locate damage. The main contributions are as follows: (1) We propose a UAV-based image-capturing method for various bridge sections to collect close-range, multi-angle, and overlapping images of the surface; (2) we propose a 3D reconstruction method based on multi-view images to reconstruct a textured bridge model, from which an ultra-high-resolution panoramic unfolded image of the bridge surface can be obtained by projecting from multiple angles; (3) we applied the Swin Transformer to optimize the YOLOv8 network and improve the detection accuracy of small-scale damage based on the established bridge damage dataset, and employed sliding-window segmentation to detect damage in the ultra-high-resolution panoramic image. The proposed method was applied to detect surface damage on a three-span concrete bridge. The results indicate that this method automatically generates panoramic images of the bridge bottom, deck, and sides with hundreds of millions of pixels and recognizes damage in the panoramas. In addition, the damage detection accuracy reached 98.7%, an improvement of 13.6% over the original network.
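The sliding-window pass over an ultra-high-resolution panorama can be illustrated with a small helper: tile the image with overlapping windows, run the detector per window, then map each box back to panorama coordinates. The window size, overlap, and box format here are assumptions for the sketch, not the paper's settings:

```python
def sliding_windows(height, width, win=640, overlap=0.2):
    """Return (top, left) offsets tiling a large panorama with overlapping
    windows, so a detector trained on small images can scan the whole surface.
    The overlap reduces the chance of a defect being cut by a window border."""
    stride = int(win * (1 - overlap))
    tops = list(range(0, max(height - win, 0) + 1, stride))
    lefts = list(range(0, max(width - win, 0) + 1, stride))
    # make sure the last row/column of windows reaches the image border
    if tops[-1] + win < height:
        tops.append(height - win)
    if lefts[-1] + win < width:
        lefts.append(width - win)
    return [(t, l) for t in tops for l in lefts]

def to_global(box, top, left):
    """Map a detection box (x1, y1, x2, y2) from window to panorama coords."""
    x1, y1, x2, y2 = box
    return (x1 + left, y1 + top, x2 + left, y2 + top)
```

A deduplication step (e.g. non-maximum suppression across windows) would normally follow, since overlapping windows can detect the same defect twice.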

16 pages, 6029 KiB  
Article
FusionOpt-Net: A Transformer-Based Compressive Sensing Reconstruction Algorithm
by Honghao Zhang, Bi Chen, Xianwei Gao, Xiang Yao and Linyu Hou
Sensors 2024, 24(18), 5976; https://doi.org/10.3390/s24185976 - 14 Sep 2024
Cited by 1 | Viewed by 1641
Abstract
Compressive sensing (CS) is a notable technique in signal processing, especially in multimedia, as it allows for simultaneous signal acquisition and dimensionality reduction. Recent advancements in deep learning (DL) have led to the creation of deep unfolding architectures, which overcome the inefficiency and subpar quality of traditional CS reconstruction methods. In this paper, we introduce a novel CS image reconstruction algorithm that leverages the strengths of the fast iterative shrinkage-thresholding algorithm (FISTA) and modern Transformer networks. To enhance computational efficiency, we employ a block-based sampling approach in the sampling module. By mapping FISTA’s iterative process onto neural networks in the reconstruction module, we address the hyperparameter challenges of traditional algorithms, thereby improving reconstruction efficiency. Moreover, the robust feature extraction capabilities of Transformer networks significantly enhance image reconstruction quality. Experimental results show that the FusionOpt-Net model surpasses other advanced methods on various public benchmark datasets.
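FISTA, which FusionOpt-Net maps onto its reconstruction stages, differs from plain ISTA by an extrapolation (momentum) sequence. A generic NumPy version on a linear forward model (textbook FISTA with hand-set parameters, not the unfolded network) is:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(y, A, lam=0.01, iters=100):
    """Plain FISTA for 0.5*||A x - y||^2 + lam*||x||_1 -- the iteration that
    an unfolded network turns into stages. The momentum sequence t_k is what
    distinguishes it from plain ISTA and gives the O(1/k^2) rate."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(iters):
        x_new = soft_threshold(z - (A.T @ (A @ z - y)) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

In the unfolded setting, the step size, thresholds, and proximal operator become stage-wise learnable modules, removing the hyperparameter tuning the abstract mentions.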
(This article belongs to the Section Intelligent Sensors)

16 pages, 3566 KiB  
Article
The Effect of Ultrasound Treatment on the Structural and Functional Properties of Tenebrio molitor Myofibrillar Protein
by Xiu Wang, Xiangxiang Ni, Chaoyi Duan, Ruixi Li, Xiao’e Jiang, Mingfeng Xu and Rongrong Yu
Foods 2024, 13(17), 2817; https://doi.org/10.3390/foods13172817 - 5 Sep 2024
Cited by 31 | Viewed by 3141
Abstract
The objective of this study was to explore the impacts of various ultrasonic powers (0, 300, 500, 700, and 900 W) on the structure and functional attributes of the myofibrillar protein (MP) of Tenebrio molitor. As the ultrasonic intensity escalated, the extraction efficiency and yield of the MP rose, while the particle size and turbidity decreased correspondingly. The reduction in sulfhydryl group content and the increase in carbonyl group content both suggested that ultrasonic treatment promoted the oxidation of the MP to a certain extent, which was conducive to the formation of a denser and more stable gel network structure. This was also affirmed by SEM images. Additionally, the findings of intrinsic fluorescence and FTIR indicated that high-intensity ultrasound significantly altered the secondary structure of the protein. The unfolding of the MP exposed more amino acid residues, the α-helix content decreased, and the β-helix content increased, thereby resulting in a looser and more flexible conformation. Along with the structural alteration, the surface hydrophobicity and emulsification properties were also significantly enhanced. In addition, SDS–PAGE demonstrated that the MP of T. molitor was primarily composed of myosin heavy chain (MHC), actin, myosin light chain (MLC), paramyosin, and tropomyosin. These results confirmed that ultrasonic treatment could, to a certain extent, enhance the structure and function of mealworm MP, thereby providing a theoretical reference for the future utilization of edible insect proteins, the deep processing of proteins produced by T. molitor, and the development of new technologies.
(This article belongs to the Special Issue Processing and Nutritional Evaluation of Animal Products)

15 pages, 7113 KiB  
Article
Hybrid Transformer and Convolution for Image Compressed Sensing
by Ruili Nan, Guiling Sun, Bowen Zheng and Pengchen Zhang
Electronics 2024, 13(17), 3496; https://doi.org/10.3390/electronics13173496 - 3 Sep 2024
Viewed by 1340
Abstract
In recent years, deep unfolding networks (DUNs) have received widespread attention in the field of compressed sensing (CS) reconstruction due to their good interpretability and strong mapping capabilities. However, existing DUNs often improve reconstruction quality at the expense of a large number of parameters, and they suffer from information loss during long-distance feature transmission. To address these problems, we propose an unfolded network architecture that mixes Transformer and large-kernel convolution to achieve sparse sampling and reconstruction of natural images, namely, a reconstruction network based on Transformer and convolution (TCR-Net). The Transformer framework has the inherent ability to capture global context through a self-attention mechanism, which can effectively address the challenge of long-range feature dependence. TCR-Net is an end-to-end two-stage architecture. First, a data-driven pre-trained encoder is used to complete the sparse representation and basic feature extraction of image information. Second, a new attention mechanism is introduced to replace the self-attention mechanism in Transformer, and an optimization-inspired hybrid Transformer and convolution module is designed. Its iterative process leads to the unfolding framework, which approximates the original image stage by stage. Experimental results show that TCR-Net outperforms existing state-of-the-art CS methods while maintaining fast computational speed. Specifically, when the CS ratio is 0.10, the average PSNR on the test set used in this paper is improved by at least 0.8%, the average SSIM is improved by at least 1.5%, and the processing speed exceeds 70 FPS. These quantitative results show that our method has high computational efficiency while ensuring high-quality image restoration.
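Block-based sampling, used in TCR-Net's sampling module and common across CS systems (FusionOpt-Net above uses it too), applies one small sensing matrix to every image block independently, keeping memory proportional to the block size rather than the image size. A hedged sketch of the measurement side (block size and sampling ratio are illustrative):

```python
import numpy as np

def block_cs_sample(img, phi, B=8):
    """Block-based compressive sampling: the same sensing matrix `phi`
    (shape m x B*B, with m < B*B) measures every B x B block of `img`
    independently. Returns one m-dimensional measurement per block."""
    H, W = img.shape
    assert H % B == 0 and W % B == 0, "sketch assumes divisible image size"
    blocks = (img.reshape(H // B, B, W // B, B)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, B * B))      # one row per flattened block
    return blocks @ phi.T                  # -> (num_blocks, m) measurements
```

In a trained DUN, `phi` itself is usually a learnable layer, and the reconstruction network inverts this mapping block by block before removing blocking artifacts globally.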
