Search Results (3)

Search Parameters:
Keywords = deep learning-based image matting approach

22 pages, 6735 KB  
Article
SFMattingNet: A Trimap-Free Deep Image Matting Approach for Smoke and Fire Scenes
by Shihui Ma, Zhaoyang Xu and Hongping Yan
Remote Sens. 2025, 17(13), 2259; https://doi.org/10.3390/rs17132259 - 1 Jul 2025
Viewed by 2391
Abstract
Smoke and fire detection is vital for timely fire alarms, but traditional sensor-based methods are often unresponsive and costly. While deep learning-based methods offer promise using aerial images and surveillance images, the scarcity and limited diversity of smoke-and-fire-related image data hinder model accuracy and generalization. Alpha composition, blending foreground and background using per-pixel alpha values (transparency parameters stored in the alpha channel alongside RGB channels), can effectively augment smoke and fire image datasets. Since image matting algorithms compute these alpha values, the quality of the alpha composition directly depends on the performance of the smoke and fire matting methods. However, due to the lack of smoke and fire image matting datasets for model training, existing image matting methods exhibit significant errors in predicting the alpha values of smoke and fire targets, leading to unrealistic composite images. To address the above issues, the main research contributions of this paper are as follows: (1) Construction of a high-precision, large-scale smoke and fire image matting dataset, SFMatting-800. The images in this dataset are sourced from diverse real-world scenarios, and it provides precise foreground opacity values and attribute annotations. (2) Evaluation of existing image matting baseline methods. Based on the SFMatting-800 dataset, traditional, trimap-based deep learning and trimap-free deep learning matting methods are evaluated to identify their strengths and weaknesses, providing a benchmark for improving future smoke and fire matting methods. (3) Proposal of a deep learning-based trimap-free smoke and fire image matting network, SFMattingNet, which takes the original image as input without using trimaps. Taking into account the unique characteristics of smoke and fire, the network incorporates a non-rigid object feature extraction module and a spatial awareness module, achieving improved performance. Compared with the second-best approach, MODNet, our SFMattingNet method achieved an average error reduction of 12.65% in the smoke and fire matting task. Full article
(This article belongs to the Special Issue Advanced AI Technology for Remote Sensing Analysis)
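The alpha composition the abstract describes is the standard per-pixel compositing equation C = αF + (1 − α)B, where α is the opacity predicted by the matting model. A minimal NumPy sketch of how a predicted alpha matte would be used to blend a foreground onto a new background (illustrative only, not the authors' code; shapes and values are toy data):

```python
import numpy as np

def alpha_composite(fg, bg, alpha):
    """Blend foreground over background: C = alpha * F + (1 - alpha) * B.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1].
    alpha:  float array of shape (H, W), per-pixel opacity in [0, 1],
            e.g. the matte predicted by a matting network.
    """
    a = alpha[..., None]              # broadcast alpha across the RGB channels
    return a * fg + (1.0 - a) * bg

# Toy example: an opaque pixel keeps the foreground, a transparent
# pixel keeps the background, and fractional alpha blends the two.
fg = np.ones((2, 2, 3))               # white foreground
bg = np.zeros((2, 2, 3))              # black background
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.25]])
comp = alpha_composite(fg, bg, alpha)
print(comp[0, 0, 0], comp[0, 1, 0], comp[1, 0, 0])  # 1.0 0.0 0.5
```

This is exactly why matte quality drives composite realism: any error in `alpha` propagates linearly into every composited pixel.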

24 pages, 7554 KB  
Article
Comparative Evaluation of Machine Learning-Based Radiomics and Deep Learning for Breast Lesion Classification in Mammography
by Alessandro Stefano, Fabiano Bini, Eleonora Giovagnoli, Mariangela Dimarco, Nicolò Lauciello, Daniela Narbonese, Giovanni Pasini, Franco Marinozzi, Giorgio Russo and Ildebrando D’Angelo
Diagnostics 2025, 15(8), 953; https://doi.org/10.3390/diagnostics15080953 - 9 Apr 2025
Cited by 12 | Viewed by 3431
Abstract
Background: Breast cancer is the second leading cause of cancer-related mortality among women, accounting for 12% of cases. Early diagnosis, based on the identification of radiological features, such as masses and microcalcifications in mammograms, is crucial for reducing mortality rates. However, manual interpretation by radiologists is complex and subject to variability, emphasizing the need for automated diagnostic tools to enhance accuracy and efficiency. This study compares a radiomics workflow based on machine learning (ML) with a deep learning (DL) approach for classifying breast lesions as benign or malignant. Methods: matRadiomics was used to extract radiomics features from mammographic images of 1219 patients from the CBIS-DDSM public database, including 581 cases of microcalcifications and 638 of masses. Among the ML models, a linear discriminant analysis (LDA) demonstrated the best performance for both lesion types. External validation was conducted on a private dataset of 222 images to evaluate generalizability to an independent cohort. Additionally, a deep learning approach based on the EfficientNetB6 model was employed for comparison. Results: The LDA model achieved a mean validation AUC of 68.28% for microcalcifications and 61.53% for masses. In the external validation, AUC values of 66.9% and 61.5% were obtained, respectively. In contrast, the EfficientNetB6 model demonstrated superior performance, achieving an AUC of 81.52% for microcalcifications and 76.24% for masses, highlighting the potential of DL for improved diagnostic accuracy. Conclusions: This study underscores the limitations of ML-based radiomics in breast cancer diagnosis. Deep learning proves to be a more effective approach, offering enhanced accuracy and supporting clinicians in improving patient management. Full article
(This article belongs to the Special Issue Updates on Breast Cancer: Diagnosis and Management)
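The study's ML branch, fitting a linear discriminant analysis on extracted radiomics features and scoring it by validation AUC, can be sketched with scikit-learn. The feature matrix below is synthetic stand-in data (the study used matRadiomics features from CBIS-DDSM, which are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for a radiomics feature matrix: 200 lesions x 20 features,
# with class-dependent means so the toy task is learnable.
y = rng.integers(0, 2, size=200)                  # 0 = benign, 1 = malignant
X = rng.normal(size=(200, 20)) + 0.8 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

lda = LinearDiscriminantAnalysis()
lda.fit(X_tr, y_tr)

# AUC on held-out data, the same metric reported in the study.
scores = lda.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)
print(f"validation AUC: {auc:.3f}")
```

The DL branch replaces the hand-crafted feature step entirely: EfficientNetB6 learns its features from the mammogram pixels, which is where the reported AUC gains come from.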

21 pages, 34742 KB  
Article
Integrating Depth-Based and Deep Learning Techniques for Real-Time Video Matting without Green Screens
by Pin-Chen Su and Mau-Tsuen Yang
Electronics 2024, 13(16), 3182; https://doi.org/10.3390/electronics13163182 - 12 Aug 2024
Cited by 2 | Viewed by 5817
Abstract
Virtual production, a filmmaking technique that seamlessly merges virtual and real cinematography, has revolutionized the film and television industry. However, traditional virtual production requires the setup of green screens, which can be both costly and cumbersome. We have developed a green screen-free virtual production system that incorporates a 3D tracker for camera tracking, enabling the compositing of virtual and real-world images from a moving camera with varying perspectives. To address the core issue of video matting in virtual production, we introduce a novel Boundary-Selective Fusion (BSF) technique that combines the alpha mattes generated by deep learning-based and depth-based approaches, leveraging their complementary strengths. Experimental results demonstrate that this combined alpha matte is more accurate and robust than those produced by either method alone. Overall, the proposed BSF technique is competitive with state-of-the-art video matting methods, particularly in scenarios involving humans holding objects or other complex settings. The proposed system enables real-time previewing of composite footage during filmmaking, reducing the costs associated with green screen setups and simplifying the compositing process of virtual and real images. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Computer Vision)
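The abstract does not give the exact Boundary-Selective Fusion rule, but the general idea it names, trusting the deep-learning matte near object boundaries (where its soft edges excel) and the depth-based matte elsewhere (where depth is reliable), can be sketched as a region-selective blend. Everything here is an assumption for illustration, including the band-detection step via morphological dilation/erosion:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_selective_fusion(depth_matte, dl_matte, thresh=0.5, band=1):
    """Fuse two alpha mattes by region (illustrative sketch, not the paper's BSF).

    Outside a thin band around the foreground boundary, keep the depth-based
    matte; inside the band, prefer the deep-learning matte.
    """
    fg = depth_matte >= thresh
    # Boundary band: pixels that flip under a few dilation/erosion steps.
    boundary = binary_dilation(fg, iterations=band) & ~binary_erosion(fg, iterations=band)
    return np.where(boundary, dl_matte, depth_matte)

# Toy mattes: a 3x3 foreground square in a 9x9 frame.
depth = np.zeros((9, 9)); depth[3:6, 3:6] = 1.0   # hard depth-based matte
dl = np.full((9, 9), 0.3)                         # soft DL matte (constant, for clarity)
fused = boundary_selective_fusion(depth, dl)
```

In this toy case the square's center keeps the depth value (1.0), its edge pixels take the DL value (0.3), and the far background stays depth-based (0.0), which mirrors the complementary-strengths behavior the abstract describes.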
