Search Results (6)

Search Parameters:
Keywords = SAU-Net

20 pages, 5440 KB  
Article
RepSAU-Net: Semantic Segmentation of Barcodes in Complex Backgrounds via Fused Self-Attention and Reparameterization Methods
by Yanfei Sun, Junyu Wang and Rui Yin
J. Imaging 2025, 11(11), 394; https://doi.org/10.3390/jimaging11110394 - 6 Nov 2025
Viewed by 527
Abstract
In the digital era, commodity barcodes serve as a bridge between the physical and digital worlds and are widely used in retail checkout systems. To meet broader application demands for product identification, this paper proposes a method for locating and semantically segmenting barcodes in complex backgrounds, decoding their hidden information, and recovering these barcodes in wide field-of-view images. The method integrates self-attention mechanisms and reparameterization techniques to construct the RepSAU-Net model. Specifically, the paper first introduces a barcode image dataset synthesis strategy adapted for deep learning models, constructing the SBS (Screen Stego Barcodes) dataset, which comprises 30,000 images synthesized from 2000 wide field-of-view background images (Type A) and 400 information-hidden barcode images (Type B). On this basis, a network architecture (RepSAU-Net) combining a self-attention mechanism with RepVGG reparameterization was designed, with a parameter count of 32.88 M. Experimental results demonstrate that the network performs well on barcode segmentation tasks, achieving an inference speed of 4.88 frames/s, a Mean Intersection over Union (MIoU) of 98.36%, and an Accuracy (Acc) of 94.96%. This research effectively enhances global information capture and feature extraction capabilities without significantly increasing computational load, providing technical support for the application of data-embedded barcodes.
(This article belongs to the Section Image and Video Processing)
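For readers unfamiliar with the reparameterization idea behind RepSAU-Net, the sketch below illustrates the general RepVGG-style technique: a multi-branch block used during training is algebraically fused into a single 3×3 convolution for inference. This is a minimal sketch assuming plain convolutions without batch normalization; the class name RepBlock and the exact branch layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal RepVGG-style structural reparameterization sketch (assumed layout,
# no batch norm). Not the RepSAU-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Training-time block with parallel 3x3 and 1x1 branches plus an identity branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Multi-branch form used during training.
        return F.relu(self.conv3x3(x) + self.conv1x1(x) + x)

    @torch.no_grad()
    def reparameterize(self) -> nn.Conv2d:
        """Fuse the two convolutions and the identity into one 3x3 convolution."""
        fused = nn.Conv2d(self.conv3x3.in_channels, self.conv3x3.out_channels,
                          kernel_size=3, padding=1)
        # Pad the 1x1 kernel to 3x3 so it can be added to the 3x3 kernel.
        k1x1 = F.pad(self.conv1x1.weight, [1, 1, 1, 1])
        # Identity branch as a 3x3 kernel: a 1 at the centre of each channel's
        # own filter (valid because in_channels == out_channels).
        k_id = torch.zeros_like(self.conv3x3.weight)
        for c in range(k_id.shape[0]):
            k_id[c, c, 1, 1] = 1.0
        fused.weight.copy_(self.conv3x3.weight + k1x1 + k_id)
        fused.bias.copy_(self.conv3x3.bias + self.conv1x1.bias)
        return fused

# The fused convolution reproduces the pre-activation sum of the branches:
block = RepBlock(16).eval()
x = torch.randn(1, 16, 32, 32)
assert torch.allclose(block.conv3x3(x) + block.conv1x1(x) + x,
                      block.reparameterize()(x), atol=1e-4)
```

After training, the fused single-branch form keeps the representational benefit of the multi-branch block while avoiding its extra inference cost, which is the trade-off the abstract refers to when it reports strong accuracy without a significant increase in computational load.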

28 pages, 1092 KB  
Review
Examining the Interplay between CEPSA’s ESG Performance and Financial Performance: An Overview of the Energy Sector Transformation
by Yangxueyi Hu, Abeer Hassan and Sehrish Atif
Sustainability 2024, 16(7), 2772; https://doi.org/10.3390/su16072772 - 27 Mar 2024
Cited by 3 | Viewed by 5320
Abstract
This study delves into the financial performance of Compañía Española de Petróleos, S.A.U. (CEPSA) within the context of the ongoing ESG transformation in the energy sector. The primary aim of this research is to understand the critical dimensions essential for evaluating energy companies' ESG performance. The research assesses the changes in CEPSA's financial indicators over the last five years (2018–2022) and uses DuPont analysis to evaluate CEPSA's environmental and social responsibility performance. The study examines several financial performance metrics, including return on net assets, profitability, and changes in the corporate financing structure. The methodology comprehensively assesses CEPSA's sustainable development trajectory and ESG management system. The analysis reveals that CEPSA has consistently improved its sustainable development capabilities over the last five years by establishing a comprehensive ESG management system. While return on net assets and profitability indicators have shown positive trends, the financing structure has changed significantly: the proportion of debt financing has increased substantially, and there is a slight decline in the net profit margin. The formal transformation in 2020 further drove increases in CEPSA's liabilities and fixed assets. The focus on CEPSA's sustained improvements in ESG management and the associated shifts in financial metrics adds originality and offers a nuanced perspective on the evolving landscape of sustainable practices. The study reveals the financial implications of ESG transformation in the energy sector and offers valuable insights for stakeholders. Moreover, this research contributes to the existing literature by employing the DuPont analysis system to explore the intricate relationship between ESG performance and financial indicators in the energy sector.
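For context on the DuPont analysis mentioned above, the standard three-factor decomposition expresses return on equity as the product of a profitability, an efficiency, and a leverage term. The abstract does not state which DuPont variant was applied to CEPSA's statements, so this is only the textbook form.

```latex
% Standard three-factor DuPont identity (textbook form; the variant used in the
% study is not specified in the abstract).
\mathrm{ROE}
  = \underbrace{\frac{\text{Net income}}{\text{Revenue}}}_{\text{net profit margin}}
    \times \underbrace{\frac{\text{Revenue}}{\text{Total assets}}}_{\text{asset turnover}}
    \times \underbrace{\frac{\text{Total assets}}{\text{Shareholders' equity}}}_{\text{financial leverage}}
```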

26 pages, 48566 KB  
Article
SAA-UNet: Spatial Attention and Attention Gate UNet for COVID-19 Pneumonia Segmentation from Computed Tomography
by Shroog Alshomrani, Muhammad Arif and Mohammed A. Al Ghamdi
Diagnostics 2023, 13(9), 1658; https://doi.org/10.3390/diagnostics13091658 - 8 May 2023
Cited by 9 | Viewed by 3318
Abstract
The COVID-19 pandemic has claimed numerous lives and wreaked havoc across the world due to the virus's transmissible nature. One of the complications of COVID-19 is pneumonia. Different radiography methods, particularly computed tomography (CT), have shown outstanding performance in effectively diagnosing pneumonia. In this paper, we propose a spatial attention and attention gate UNet model (SAA-UNet), inspired by spatial attention UNet (SA-UNet) and attention UNet (Att-UNet), to address the problem of infection segmentation in the lungs. The proposed method was applied to the MedSeg, Radiopaedia 9P, combined MedSeg and Radiopaedia 9P, and Zenodo 20P datasets. It showed good infection segmentation results (two classes: infection and background), with average Dice similarity coefficients of 0.85, 0.94, 0.91, and 0.93 and mean intersection over union (IoU) scores of 0.78, 0.90, 0.86, and 0.87, respectively, on the four datasets. It also performed well in multi-class segmentation, with average Dice similarity coefficients of 0.693, 0.89, 0.87, and 0.93 and IoU scores of 0.68, 0.87, 0.78, and 0.89 on the four datasets, respectively. Classification accuracies of more than 97% were achieved for all four datasets. For the binary classification, the F1-scores were 0.865, 0.943, 0.917, and 0.926 for the MedSeg, Radiopaedia 9P, combined MedSeg and Radiopaedia 9P, and Zenodo 20P datasets, respectively. For multi-class classification, accuracies of more than 96% were achieved on all four datasets. The experimental results show that the proposed framework can effectively and efficiently segment COVID-19 infection on CT images with different contrast, which can aid in diagnosing and treating pneumonia caused by COVID-19.
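The two overlap metrics reported above can be summarized in a few lines. The sketch below, assuming binary NumPy masks of 0/1 values, shows how Dice and IoU are typically computed per image; thresholds, per-class handling, and dataset-level averaging follow the paper and are not reproduced here.

```python
# Minimal per-image Dice and IoU for binary masks (illustrative sketch only).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B|."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```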

24 pages, 15207 KB  
Article
Improved U-Net Remote Sensing Classification Algorithm Fusing Attention and Multiscale Features
by Xiangsuo Fan, Chuan Yan, Jinlong Fan and Nayi Wang
Remote Sens. 2022, 14(15), 3591; https://doi.org/10.3390/rs14153591 - 27 Jul 2022
Cited by 45 | Viewed by 12773
Abstract
The selection and representation of classification features in remote sensing images play crucial roles in image classification accuracy. To effectively improve feature classification accuracy, an improved U-Net remote sensing classification algorithm fusing attention and multiscale features is proposed in this paper, called the spatial attention-atrous spatial pyramid pooling U-Net (SA-UNet). This framework connects atrous spatial pyramid pooling (ASPP) with the convolutional units of the encoder of the original U-Net in the form of residuals. The ASPP module expands the receptive field, integrates multiscale features in the network, and enhances the ability to express shallow features. Through the fusion residual module, shallow and deep features are deeply fused, and their characteristics are further exploited. The spatial attention mechanism combines spatial with semantic information so that the decoder can recover more spatial information. In this study, the crop distribution in central Guangxi province was analyzed, and experiments were conducted based on Landsat 8 multispectral remote sensing images. The experimental results showed that the improved algorithm increases classification accuracy from 93.33% to 96.25%, while the segmentation accuracy of sugarcane, rice, and other land increased from 96.42%, 63.37%, and 88.43% to 98.01%, 83.21%, and 95.71%, respectively. The agricultural planting area results obtained by the proposed algorithm can be used as input data for regional ecological models, which is conducive to the development of accurate and real-time crop growth change models.
(This article belongs to the Special Issue Remote Sensing for Mapping Farmland and Agricultural Infrastructure)
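As background for the ASPP component described above, the sketch below shows a generic atrous spatial pyramid pooling module: parallel dilated 3×3 convolutions whose outputs are concatenated and projected back to a fixed channel count. The dilation rates (1, 6, 12, 18) and the omission of the image-level pooling branch and the residual link to the U-Net encoder are simplifying assumptions, not the authors' exact configuration.

```python
# Generic ASPP sketch (assumed rates and layout, not the SA-UNet configuration).
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One dilated 3x3 convolution per rate; padding equals the dilation so
        # every branch keeps the input's spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Project the concatenated branch outputs back to out_ch channels.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```

Because each branch sees a different dilation rate, the module aggregates context at several receptive-field sizes at once, which is the multiscale fusion the abstract credits with enhancing shallow-feature expression.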

16 pages, 6639 KB  
Article
State-of-the-Art Capability of Convolutional Neural Networks to Distinguish the Signal in the Ionosphere
by Yu-Chi Chang, Chia-Hsien Lin, Alexei V. Dmitriev, Mon-Chai Hsieh, Hao-Wei Hsu, Yu-Ciang Lin, Merlin M. Mendoza, Guan-Han Huang, Lung-Chih Tsai, Yung-Hui Li and Enkhtuya Tsogtbaatar
Sensors 2022, 22(7), 2758; https://doi.org/10.3390/s22072758 - 2 Apr 2022
Cited by 7 | Viewed by 3792
Abstract
Recovering and distinguishing different ionospheric layers and signals usually requires slow and complicated procedures. In this work, we construct and train five convolutional neural network (CNN) models for the recovery of ionograms: DeepLab, fully convolutional DenseNet24 (FC-DenseNet24), deep watershed transform (DWT), Mask R-CNN, and spatial attention-UNet (SA-UNet). The performance of the models is evaluated by intersection over union (IoU). We collect and manually label 6131 ionograms acquired from a low-latitude ionosonde in Taiwan. These ionograms are contaminated by strong quasi-static noise, with an average signal-to-noise ratio (SNR) equal to 1.4. Applying the five models to these noisy ionograms, we show that the models can recover useful signals with IoU > 0.6, with the highest accuracy achieved by SA-UNet. Signals that make up less than 15% of the samples in the data set can still be recovered to some degree by Mask R-CNN (IoU > 0.2). In addition to the number of samples, we identify and examine the effects of three factors on the recovery accuracy of the different models: (1) SNR, (2) signal shape, and (3) overlapping of signals. Our results indicate that FC-DenseNet24, DWT, Mask R-CNN, and SA-UNet are capable of identifying signals in very noisy ionograms (SNR < 1.4), that overlapping signals can be well identified by DWT, Mask R-CNN, and SA-UNet, and that more elongated signals are better identified by all models.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Taiwan)

22 pages, 13255 KB  
Article
Split-Attention U-Net: A Fully Convolutional Network for Robust Multi-Label Segmentation from Brain MRI
by Minho Lee, JeeYoung Kim, Regina EY Kim, Hyun Gi Kim, Se Won Oh, Min Kyoung Lee, Sheng-Min Wang, Nak-Young Kim, Dong Woo Kang, ZunHyan Rieu, Jung Hyun Yong, Donghyeon Kim and Hyun Kook Lim
Brain Sci. 2020, 10(12), 974; https://doi.org/10.3390/brainsci10120974 - 11 Dec 2020
Cited by 48 | Viewed by 7072
Abstract
Multi-label brain segmentation from brain magnetic resonance imaging (MRI) provides valuable structural information for most neurological analyses, but the complexity of brain segmentation algorithms can delay the delivery of neuroimaging findings. Therefore, we introduce Split-Attention U-Net (SAU-Net), a convolutional neural network with skip pathways and a split-attention module that segments brain MRI scans. The proposed architecture employs split-attention blocks, skip pathways with pyramid levels, and evolving normalization layers. For efficient training, we performed pre-training and fine-tuning with the original and manually modified FreeSurfer labels, respectively. This learning strategy allows heterogeneous neuroimaging data to be involved in training without the need for many manual annotations. Using nine evaluation datasets, we demonstrated that SAU-Net achieved segmentation accuracy and reliability that surpass those of state-of-the-art methods. We believe that SAU-Net has excellent potential due to its robustness to neuroanatomical variability, which would enable almost instantaneous access to accurate neuroimaging biomarkers, and its swift processing runtime compared with the other methods investigated.
(This article belongs to the Section Neurotechnology and Neuroimaging)
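For orientation, the sketch below shows a split-attention operation in the spirit of the split-attention blocks this abstract mentions: the feature map is divided into R splits, a softmax across the splits produces per-channel weights, and the weighted splits are summed. The radix, reduction ratio, and absence of cardinality grouping are assumptions made for brevity, not SAU-Net's exact design.

```python
# Simplified split-attention over R feature splits (illustrative sketch only).
import torch
import torch.nn as nn

class SplitAttention(nn.Module):
    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix = radix
        hidden = max(channels // reduction, 8)
        # Produce one attention logit per channel and per split.
        self.fc1 = nn.Conv2d(channels, hidden, 1)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(hidden, channels * radix, 1)

    def forward(self, x):
        # x: (B, radix * C, H, W) -> split into radix groups of C channels.
        b, rc, h, w = x.shape
        c = rc // self.radix
        splits = x.view(b, self.radix, c, h, w)
        # Channel descriptor from the sum of splits, pooled globally.
        gap = splits.sum(dim=1).mean(dim=(2, 3), keepdim=True)       # (B, C, 1, 1)
        attn = self.fc2(self.relu(self.fc1(gap)))                     # (B, radix*C, 1, 1)
        attn = torch.softmax(attn.view(b, self.radix, c, 1, 1), dim=1)
        # Weighted sum of the splits gives the attended feature map.
        return (splits * attn).sum(dim=1)                             # (B, C, H, W)

# Example: 2 splits of 64 channels each collapse to a 64-channel output.
sa = SplitAttention(channels=64, radix=2)
y = sa(torch.randn(2, 128, 32, 32))   # y.shape == (2, 64, 32, 32)
```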
