Search Results (37)

Search Parameters:
Keywords = semi-transparent image

27 pages, 13245 KiB  
Article
LHRF-YOLO: A Lightweight Model with Hybrid Receptive Field for Forest Fire Detection
by Yifan Ma, Weifeng Shan, Yanwei Sui, Mengyu Wang and Maofa Wang
Forests 2025, 16(7), 1095; https://doi.org/10.3390/f16071095 - 2 Jul 2025
Viewed by 346
Abstract
Timely and accurate detection of forest fires is crucial for protecting forest ecosystems. However, traditional monitoring methods face significant challenges in effectively detecting forest fires, primarily due to the dynamic spread of flames and smoke, irregular morphologies, and the semi-transparent nature of smoke, which make it extremely difficult to extract key visual features. Additionally, deploying these detection systems on edge devices with limited computational resources remains challenging. To address these issues, this paper proposes a lightweight hybrid receptive field model (LHRF-YOLO), which leverages deep learning to overcome the shortcomings of traditional monitoring methods for fire detection on edge devices. First, a hybrid receptive field extraction module is designed by integrating the 2D selective scan mechanism with a residual multi-branch structure. This significantly enhances the model’s contextual understanding of the entire image scene while maintaining low computational complexity. Second, a dynamic enhanced downsampling module is proposed, which employs feature reorganization and channel-wise dynamic weighting strategies to minimize the loss of critical details, such as fine smoke textures, while reducing image resolution. Third, a scale-weighted fusion module is introduced to optimize multi-scale feature fusion through adaptive weight allocation, addressing the information dilution and imbalance caused by traditional fusion methods. Finally, the Mish activation function replaces the SiLU activation function to improve the model’s ability to capture flame edges and faint smoke textures. Experimental results on the self-constructed Fire-Smoke dataset demonstrate that LHRF-YOLO achieves significant model compression while further improving accuracy compared to the baseline model YOLOv11. The parameter count is reduced to only 2.25M (a 12.8% reduction), computational complexity to 5.4 GFLOPs (a 14.3% decrease), and mAP50 is increased to 87.6%, surpassing the baseline model. Additionally, LHRF-YOLO exhibits leading generalization performance on the cross-scenario M4SFWD dataset. The proposed method balances performance and resource efficiency, providing a feasible solution for real-time, efficient fire detection on resource-constrained edge devices. Full article
(This article belongs to the Special Issue Forest Fires Prediction and Detection—2nd Edition)
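The final design change in the abstract above, replacing SiLU with Mish, is easy to state concretely. A minimal pure-Python sketch from the standard definitions of the two activations (illustrative only, not the paper's code):

```python
import math

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); smooth and non-monotonic,
    # which helps preserve weak responses such as faint smoke textures
    return x * math.tanh(softplus(x))

def silu(x):
    # SiLU(x) = x * sigmoid(x), the default activation in the YOLO family
    return x / (1.0 + math.exp(-x))
```

Both are zero at the origin and pass large positive inputs almost unchanged; Mish lets slightly more signal through for small negative inputs, the regime where faint edges live.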

24 pages, 8390 KiB  
Article
Impact of Permanent Preservation Areas on Water Quality in a Semi-Arid Watershed
by Fernanda Helena Oliveira da Silva, Fernando Bezerra Lopes, Bruno Gabriel Monteiro da Costa Bezerra, Noely Silva Viana, Isabel Cristina da Silva Araújo, Nayara Rochelli de Sousa Luna, Michele Cunha Pontes, Raí Rebouças Cavalcante, Francisco Thiago de Alburquerque Aragão and Eunice Maia de Andrade
Environments 2025, 12(7), 220; https://doi.org/10.3390/environments12070220 - 27 Jun 2025
Viewed by 540
Abstract
Water is scarce in semi-arid regions due to environmental limitations; this situation is aggravated by changes in land use and land cover (LULC). In this respect, the basic ecological functions of Permanent Preservation Areas (PPAs) help to maintain water resources. The aim of this study was to evaluate the relationship between the LULC and water quality in PPAs in a semi-arid watershed, from 2009 to 2016. The following limnological data were analyzed: chlorophyll-a, transparency, total nitrogen and total phosphorus. The changes in LULC were obtained by classifying images from Landsat 5, 7 and 8 into three types: Open Dry Tropical Forest (ODTF), Dense Dry Tropical Forest (DDTF) and Exposed Soil (ES). Spearman correlation and principal component analysis were applied to evaluate the relationships between the parameters. There was a significant positive correlation between DDTF and the best limnological conditions. However, ES showed a significant negative relationship with transparency and a positive relationship with chlorophyll-a, indicating a greater input of sediments and nutrients into the water. The PCA corroborated the results of the correlation. It is therefore essential to prioritize the preservation and restoration of the vegetation in these sensitive areas to ensure the sustainability of water resources. Future studies should assess the impact of specific human activities, such as agriculture, deforestation and livestock farming, on water quality in the PPAs. Full article
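The correlation analysis described above can be sketched in a few lines. The implementation below computes Spearman's rho from first principles (Pearson correlation of rank vectors, with tie-averaged ranks); the sample data in the test are purely illustrative, not the study's measurements:

```python
def ranks(values):
    # average 1-based ranks, handling ties by assigning the mean rank
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation of the two rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho near -1 between exposed-soil fraction and water transparency would reproduce the qualitative finding reported in the abstract.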

13 pages, 5336 KiB  
Article
SnowMamba: Achieving More Precise Snow Removal with Mamba
by Guoqiang Wang, Yanyun Zhou, Fei Shi and Zhenhong Jia
Appl. Sci. 2025, 15(10), 5404; https://doi.org/10.3390/app15105404 - 12 May 2025
Viewed by 419
Abstract
Due to the diversity and semi-transparency of snowflakes, accurately locating and reconstructing background information during image restoration poses a significant challenge. Snowflakes obscure image details, thereby affecting downstream tasks such as object recognition and image segmentation. Although Convolutional Neural Networks (CNNs) and Transformers have achieved promising results in snow removal through local or global feature processing, residual snowflakes or shadows persist in restored images. Inspired by the recent popularity of State Space Models (SSMs), this paper proposes a Mamba-based multi-scale desnowing network (SnowMamba), which effectively models the long-range dependencies of snowflakes. This enables the precise localization and removal of snow particles, addressing the issue of residual snowflakes and shadows in images. Specifically, we design a four-stage encoder–decoder network that incorporates Snow Caption Mamba (SCM) and SE modules to extract comprehensive snowflake and background information. The extracted multi-scale snow and background features are then fed into the proposed Multi-Scale Residual Interaction Network (MRNet) to learn and reconstruct clear, snow-free background images. Extensive experiments demonstrate that the proposed method outperforms other mainstream desnowing approaches in both qualitative and quantitative evaluations on three standard image desnowing datasets. Full article

23 pages, 2831 KiB  
Article
RT-DETR-Smoke: A Real-Time Transformer for Forest Smoke Detection
by Zhong Wang, Lanfang Lei, Tong Li, Xian Zu and Peibei Shi
Fire 2025, 8(5), 170; https://doi.org/10.3390/fire8050170 - 27 Apr 2025
Cited by 2 | Viewed by 1321
Abstract
Smoke detection is crucial for early fire prevention and the protection of lives and property. Unlike generic object detection, smoke detection faces unique challenges due to smoke’s semitransparent, fluid nature, which often leads to false positives in complex backgrounds and missed detections—particularly around smoke edges and small targets. Moreover, high computational overhead further restricts real-world deployment. To tackle these issues, we propose RT-DETR-Smoke, a specialized real-time transformer-based smoke-detection framework. First, we designed a high-efficiency hybrid encoder that combines convolutional and Transformer features, thus reducing computational cost while preserving crucial smoke details. We then incorporated an uncertainty-minimization strategy to dynamically select the most confident detection queries, further improving detection accuracy in challenging scenarios. Next, to alleviate the common issue of blurred or incomplete smoke boundaries, we introduced a coordinate attention mechanism, which enhances spatial-feature fusion and refines smoke-edge localization. Finally, we propose the WShapeIoU loss function to accelerate model convergence and boost the precision of the bounding-box regression for multiscale smoke targets under diverse environmental conditions. As evaluated on our custom smoke dataset, RT-DETR-Smoke achieves a remarkable 87.75% mAP@0.5 and processes images at 445.50 FPS, significantly outperforming existing methods in both accuracy and speed. These results underscore the potential of RT-DETR-Smoke for practical deployment in early fire-warning and smoke-monitoring systems. Full article
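WShapeIoU is this paper's own loss and its exact form is not reproduced here; as background, a minimal sketch of the plain bounding-box IoU that such regression losses extend (the corner-coordinate box format is an assumption):

```python
def iou(box_a, box_b):
    # boxes as (x1, y1, x2, y2) corner coordinates;
    # returns intersection-over-union in [0, 1]
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle, clamped to zero when boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

IoU-family losses (1 - IoU plus shape or distance penalties) are built on exactly this quantity; the paper's variant adds weighting for multiscale smoke targets.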

22 pages, 1614 KiB  
Article
The Intersection of AI, Ethics, and Journalism: Greek Journalists’ and Academics’ Perspectives
by Panagiota (Naya) Kalfeli and Christina Angeli
Societies 2025, 15(2), 22; https://doi.org/10.3390/soc15020022 - 25 Jan 2025
Viewed by 3594
Abstract
This study aims to explore the perceptions of Greek journalists and academics on the use of artificial intelligence (AI) in Greek journalism, focusing on its benefits, risks, and potential ethical dilemmas. In particular, it seeks to (i) assess the extent of the use of AI tools by Greek journalists; (ii) investigate views on how AI might alter news production, work routines, and labor relations in the field; and (iii) examine perspectives on the ethical challenges of AI in journalism, particularly in regard to AI-generated images in media content. To achieve this, a series of 28 in-depth semi-structured interviews was conducted with Greek journalists and academics. A thematic analysis was employed to identify key themes and patterns. Overall, the findings suggest that AI penetration in Greek journalism is in its early stages, with no formal training, strategy, or framework in place within Greek media. Regarding ethical concerns, there is evident skepticism and caution among journalists and academics about issues such as data bias, transparency, privacy, and copyright, which are further intensified by the absence of a regulatory framework. Full article

19 pages, 2078 KiB  
Article
Enhancing Medical Image Classification with Unified Model Agnostic Computation and Explainable AI
by Elie Neghawi and Yan Liu
AI 2024, 5(4), 2260-2278; https://doi.org/10.3390/ai5040111 - 5 Nov 2024
Cited by 1 | Viewed by 2227
Abstract
Background: Advances in medical image classification have recently benefited from general augmentation techniques. However, these methods often fall short in performance and interpretability. Objective: This paper applies the Unified Model Agnostic Computation (UMAC) framework specifically to the medical domain to demonstrate its utility in this critical area. Methods: UMAC is a model-agnostic methodology designed to develop machine learning approaches that integrate seamlessly with various paradigms, including self-supervised, semi-supervised, and supervised learning. By unifying and standardizing computational models and algorithms, UMAC ensures adaptability across different data types and computational environments while incorporating state-of-the-art methodologies. In this study, we integrate UMAC as a plug-and-play module within convolutional neural networks (CNNs) and Transformer architectures, enabling the generation of high-quality representations even with minimal data. Results: Our experiments across nine diverse 2D medical image datasets show that UMAC consistently outperforms traditional data augmentation methods, achieving a 1.89% improvement in classification accuracy. Conclusions: By incorporating explainable AI (XAI) techniques, we enhance model transparency and reliability in decision-making. This study highlights UMAC’s potential as a powerful tool for improving both the performance and interpretability of medical image classification models. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)

17 pages, 5605 KiB  
Review
Imaging of Live Cells by Digital Holographic Microscopy
by Emilia Mitkova Mihaylova
Photonics 2024, 11(10), 980; https://doi.org/10.3390/photonics11100980 - 18 Oct 2024
Cited by 2 | Viewed by 3149
Abstract
Imaging of microscopic objects is of fundamental importance, especially in life sciences. Recent fast progress in electronic detection and control, numerical computation, and digital image processing, has been crucial in advancing modern microscopy. Digital holography is a new field in three-dimensional imaging. Digital reconstruction of a hologram offers the remarkable capability to refocus at different depths inside a transparent or semi-transparent object. Thus, this technique is very suitable for biological cell studies in vivo and could have many biomedical and biological applications. A comprehensive review of the research carried out in the area of digital holographic microscopy (DHM) for live-cell imaging is presented. The novel microscopic technique is non-destructive and label-free and offers unmatched imaging capabilities for biological and bio-medical applications. It is also suitable for imaging and modelling of key metabolic processes in living cells, microbial communities or multicellular plant tissues. Live-cell imaging by DHM allows investigation of the dynamic processes underlying the function and morphology of cells. Future applications of DHM can include real-time cell monitoring in response to clinically relevant compounds. The effect of drugs on migration, proliferation, and apoptosis of abnormal cells is an emerging field of this novel microscopic technique. Full article
(This article belongs to the Special Issue Technologies and Applications of Digital Holography)

14 pages, 7097 KiB  
Article
Residual Mulching Film Detection in Seed Cotton Using Line Laser Imaging
by Sanhui Wang, Mengyun Zhang, Zhiyu Wen, Zhenxuan Zhao and Ruoyu Zhang
Agronomy 2024, 14(7), 1481; https://doi.org/10.3390/agronomy14071481 - 9 Jul 2024
Cited by 2 | Viewed by 1143
Abstract
Due to the widespread use of mulching film in cotton planting in China, residual mulching film mixed with machine-picked cotton poses a significant hazard to cotton processing. Detecting residual mulching film in seed cotton has become particularly challenging due to the film’s semi-transparent nature. This study constructed an imaging system combining an area array camera and a line scan camera. A detection scheme was proposed that utilized features from both image types. To simulate online detection, samples were placed on a conveyor belt moving at 0.2 m/s, with line lasers at a wavelength of 650 nm as light sources. For area array images, feature extraction was performed to establish a partial least squares discriminant analysis (PLS-DA) model. For line scan images, texture feature analysis was used to build a support vector machine (SVM) classification model. Subsequently, image features from both cameras were merged to construct an SVM model. Experimental results indicated that detection methods based on area array and line scan images had accuracies of 75% and 79%, respectively, while the feature fusion method achieved an accuracy of 83%. This study demonstrated that the proposed method could effectively improve the accuracy of residual mulching film detection in seed cotton, providing a basis for reducing residual mulching film content during processing. Full article
(This article belongs to the Section Precision and Digital Agriculture)
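The fusion step described above, concatenating per-sample features from the area-array and line-scan cameras before classification, can be sketched as follows. A nearest-centroid classifier stands in for the paper's SVM, and all feature values and class labels are illustrative:

```python
def fuse_features(area_feats, line_feats):
    # early fusion: concatenate per-sample feature vectors from both cameras
    return [a + l for a, l in zip(area_feats, line_feats)]

class NearestCentroid:
    # stand-in for the paper's SVM: classify by distance to each class mean
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [sum(c) / len(rows) for c in zip(*rows)]
        return self

    def predict(self, X):
        def sq_dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda lab: sq_dist(x, self.centroids[lab]))
                for x in X]
```

In the paper's pipeline the fused vectors are fed to an SVM instead (e.g. scikit-learn's SVC); the concatenation step is the same either way.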

18 pages, 24356 KiB  
Article
Early Smoke Recognition Algorithm for Forest Fires
by Yue Wang, Yan Piao, Qi Wang, Haowen Wang, Nan Qi and Hao Zhang
Forests 2024, 15(7), 1082; https://doi.org/10.3390/f15071082 - 22 Jun 2024
Cited by 1 | Viewed by 1407
Abstract
Forest fires require rapid and precise early smoke detection to minimize damage. This study focuses on employing smoke recognition methods for early warning systems in forest fire detection, identifying smoke as the primary indicator. A significant hurdle lies in the absence of a large-scale dataset for real-world early forest fire smoke detection. Early smoke videos present characteristics such as smoke plumes being small, slow-moving, and/or semi-transparent in color, and include images where there is background interference, posing critical challenges for practical recognition algorithms. To address these issues, this paper introduces a real-world early smoke monitoring video dataset as a foundational resource. The proposed 4D attention-based motion target enhancement network includes an important frame sorting module which adaptively selects essential frame sequences to improve the detection of slow-moving smoke targets. Additionally, a 4D attention-based motion target enhancement module is introduced to mitigate interference from smoke-like objects and enhance recognition of light smoke during the initial stages. Moreover, a high-resolution multi-scale fusion module is presented, incorporating a small target recognition layer to enhance the network’s ability to detect small smoke targets. This research represents a significant advancement in early smoke detection for forest fire surveillance, with practical implications for enhancing fire management. Full article
(This article belongs to the Section Natural Hazards and Risk Management)

20 pages, 21097 KiB  
Article
DPDU-Net: Double Prior Deep Unrolling Network for Pansharpening
by Yingxia Chen, Yuqi Li, Tingting Wang, Yan Chen and Faming Fang
Remote Sens. 2024, 16(12), 2141; https://doi.org/10.3390/rs16122141 - 13 Jun 2024
Cited by 1 | Viewed by 1195
Abstract
The objective of the pansharpening task is to integrate multispectral (MS) images with low spatial resolution (LR) and to integrate panchromatic (PAN) images with high spatial resolution (HR) to generate HRMS images. Recently, deep learning-based pansharpening methods have been widely studied. However, traditional deep learning methods lack transparency while deep unrolling methods have limited performance when using one implicit prior for HRMS images. To address this issue, we incorporate one implicit prior with a semi-implicit prior and propose a double prior deep unrolling network (DPDU-Net) for pansharpening. Specifically, we first formulate the objective function based on observation models of PAN and LRMS images and two priors of an HRMS image. In addition to the implicit prior in the image domain, we enforce the sparsity of the HRMS image in a certain multi-scale implicit space; thereby, the feature map can obtain better sparse representation ability. We optimize the proposed objective function via alternating iteration. Then, the iterative process is unrolled into an elaborate network, with each iteration corresponding to a stage of the network. We conduct both reduced-resolution and full-resolution experiments on two satellite datasets. Both visual comparisons and metric-based evaluations consistently demonstrate the superiority of the proposed DPDU-Net. Full article

10 pages, 655 KiB  
Article
Optical Rules to Mitigate the Parallax-Related Registration Error in See-Through Head-Mounted Displays for the Guidance of Manual Tasks
by Vincenzo Ferrari, Nadia Cattari, Sara Condino and Fabrizio Cutolo
Multimodal Technol. Interact. 2024, 8(1), 4; https://doi.org/10.3390/mti8010004 - 4 Jan 2024
Cited by 2 | Viewed by 3087
Abstract
Head-mounted displays (HMDs) are hands-free devices particularly useful for guiding near-field tasks such as manual surgical procedures. See-through HMDs do not significantly alter the user’s direct view of the world, but the optical merging of real and virtual information can hinder their coherent and simultaneous perception. In particular, the coherence between the real and virtual content is affected by a viewpoint parallax-related misalignment, which is due to the inaccessibility of the user-perceived reality through the semi-transparent optical combiner of the Optical See-Through (OST) display. Recent works demonstrated that a proper selection of the collimation optics of the HMD significantly mitigates the parallax-related registration error without the need for any eye-tracking cameras or any error-prone alignment-based display calibration procedures. These solutions are either based on HMDs that project the virtual imaging plane directly at arm’s distance, or they require the integration on the HMD of additional lenses to optically move the image of the observed scene to the virtual projection plane of the HMD. This paper describes and evaluates the pros and cons of both suggested solutions by providing an analytical estimation of the residual registration error achieved with each and by discussing the perceptual issues generated by the simultaneous focalization of real and virtual information. Full article

13 pages, 4648 KiB  
Article
Monolithic Integration of Semi-Transparent and Flexible Integrated Image Sensor Array with a-IGZO Thin-Film Transistors (TFTs) and p-i-n Hydrogenated Amorphous Silicon Photodiodes
by Donghyeong Choi, Ji-Woo Seo, Jongwon Yoon, Seung Min Yu, Jung-Dae Kwon, Seoung-Ki Lee and Yonghun Kim
Nanomaterials 2023, 13(21), 2886; https://doi.org/10.3390/nano13212886 - 31 Oct 2023
Cited by 1 | Viewed by 3143
Abstract
A novel approach to fabricating a transparent and flexible one-transistor–one-diode (1T-1D) image sensor array on a flexible colorless polyimide (CPI) film substrate is successfully demonstrated with laser lift-off (LLO) techniques. Leveraging transparent indium tin oxide (ITO) electrodes and amorphous indium gallium zinc oxide (a-IGZO) channel-based thin-film transistor (TFT) backplanes, vertically stacked p-i-n hydrogenated amorphous silicon (a-Si:H) photodiodes (PDs) fabricated with a low-temperature (<90 °C) deposition process are integrated into a densely packed 14 × 14 pixel array. The low-temperature-processed a-Si:H photodiodes show reasonable performance, with a responsivity of 31.43 mA/W and a detectivity of 3.0 × 10^10 Jones (biased at −1 V) at a wavelength of 470 nm. The good mechanical durability and robustness of the flexible image sensor arrays enable them to be attached to curved surfaces with bending radii of 20, 15, 10, and 5 mm and to withstand 1000 bending cycles. These studies show the significant promise of highly flexible and rollable active-matrix technology for dynamically sensing optical signals in spatial applications. Full article

21 pages, 8435 KiB  
Article
Influence of Co-Content on the Optical and Structural Properties of TiOx Thin Films Prepared by Gas Impulse Magnetron Sputtering
by Patrycja Pokora, Damian Wojcieszak, Piotr Mazur, Małgorzata Kalisz and Malwina Sikora
Coatings 2023, 13(5), 955; https://doi.org/10.3390/coatings13050955 - 19 May 2023
Cited by 2 | Viewed by 1841
Abstract
Nonstoichiometric (Ti,Co)Ox coatings were prepared using gas-impulse magnetron sputtering (GIMS). The properties of coatings with 3 at.%, 19 at.%, 44 at.%, and 60 at.% Co content were compared to those of TiOx and CoOx films. Structural studies with the aid of GIXRD indicated the amorphous nature of (Ti,Co)Ox. The fine-columnar, homogeneous microstructure was observed on SEM images, where cracks were identified only for films with a high Co content. On the basis of XPS measurements, TiO2, CoO, and Co3O4 forms were found on their surface. Optical studies showed that these films were semi-transparent (T > 46%), and that the amount of cobalt in the film had a significant impact on the decrease in the transparency level. A shift in the absorption edge position (from 337 to 387 nm) and a decrease in their optical bandgap energy (from 3.02 eV to more than 2.60 eV) were observed. The hardness of the prepared films changed slightly (ca. 6.5 GPa), but only the CoOx film showed a slightly lower hardness value than the rest of the coatings (4.8 GPa). The described studies allowed partial classification of non-stoichiometric (Ti,Co)Ox thin-film materials according to their functionality. Full article
(This article belongs to the Special Issue Advances in Thin Film Fabrication by Magnetron Sputtering)

19 pages, 4121 KiB  
Article
Chitosan-Decorated Copper Oxide Nanocomposite: Investigation of Its Antifungal Activity against Tomato Gray Mold Caused by Botrytis cinerea
by Ahmed Mahmoud Ismail, Mohamed A. Mosa and Sherif Mohamed El-Ganainy
Polymers 2023, 15(5), 1099; https://doi.org/10.3390/polym15051099 - 22 Feb 2023
Cited by 12 | Viewed by 3027
Abstract
Owing to the remarkable antimicrobial potential of nanomaterials, research into their possible use as alternatives to fungicides in sustainable agriculture is progressing rapidly. Here, we investigated the potential antifungal properties of a chitosan-decorated copper oxide nanocomposite (CH@CuO NPs) for controlling gray mold disease of tomato caused by Botrytis cinerea through in vitro and in vivo trials. The CH@CuO NPs nanocomposite was chemically prepared, and its size and shape were determined using Transmission Electron Microscopy (TEM). The chemical functional groups responsible for the interaction of the CH NPs with the CuO NPs were detected using Fourier Transform Infrared (FTIR) spectrophotometry. The TEM images confirmed that the CH NPs have a thin, semitransparent network shape, while the CuO NPs were spherical. Furthermore, the CH@CuO NPs nanocomposite exhibited an irregular shape. The sizes of the CH NPs, CuO NPs, and CH@CuO NPs, as measured by TEM, were approximately 18.28 ± 2.4 nm, 19.34 ± 2.1 nm, and 32.74 ± 2.3 nm, respectively. The antifungal activity of CH@CuO NPs was tested at three concentrations (50, 100, and 250 mg/L), and the fungicide Teldor 50% SC was applied at the recommended dose of 1.5 mL/L. In vitro experiments revealed that CH@CuO NPs at all tested concentrations significantly inhibited the reproductive growth of B. cinerea by suppressing hyphal development, spore germination, and sclerotia formation. Interestingly, significant control efficacy of CH@CuO NPs against tomato gray mold was observed, particularly at 100 and 250 mg/L, on both detached leaves (100%) and whole tomato plants (100%), compared to the conventional chemical fungicide Teldor 50% SC (97%). In addition, the tested concentration of 100 mg/L proved sufficient to completely suppress disease severity (100%) in tomato fruits without any morphological toxicity. In comparison, tomato plants treated with the recommended dose of 1.5 mL/L of Teldor 50% SC showed disease reduction of up to 80%. In conclusion, this research advances agro-nanotechnology by showing how a nanomaterial-based fungicide can protect tomato plants from gray mold under greenhouse conditions and during the postharvest stage. Full article
(This article belongs to the Special Issue Properties and Characterization of Polymers in Nanomaterials)

12 pages, 44628 KiB  
Article
Single Remote Sensing Image Dehazing Using Robust Light-Dark Prior
by Jin Ning, Yanhong Zhou, Xiaojuan Liao and Bin Duo
Remote Sens. 2023, 15(4), 938; https://doi.org/10.3390/rs15040938 - 8 Feb 2023
Cited by 16 | Viewed by 2653
Abstract
Haze, generated by floaters (semitransparent clouds, fog, snow, etc.) in the atmosphere, can significantly degrade the utilization of remote sensing images (RSIs). However, existing techniques for single-image dehazing rarely consider that haze is a superposition of floaters and shadow, and they often aggravate haze shadows and dark regions. In this paper, a single-RSI dehazing method based on a robust light-dark prior (RLDP) is proposed, which utilizes the proposed hybrid model and is robust to outlier pixels. In the RLDP method, the haze is first removed with a robust dark channel prior (RDCP). Then, the shadow is removed with a robust light channel prior (RLCP). Further, a cube root mean enhancement (CRME)-based stable-state search criterion is proposed to solve the difficult problem of patch size setting. Experimental results on benchmark and Landsat 8 RSIs demonstrate that the RLDP method effectively removes haze. Full article
(This article belongs to the Section Remote Sensing Image Processing)
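The robust dark channel prior (RDCP) above builds on the classical dark channel computation: the per-pixel minimum over a local patch and all three colour channels, which is near zero in haze-free regions. A minimal sketch on a nested-list image with values in [0, 1] (the classical computation only, not the paper's robust variant):

```python
def dark_channel(image, patch=3):
    # image: H x W x 3 nested lists in [0, 1]; for each pixel, take the
    # minimum over a patch-sized window and all colour channels
    h, w = len(image), len(image[0])
    half = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = []
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        vals.extend(image[ii][jj])
            out[i][j] = min(vals)
    return out
```

High dark-channel values flag hazy regions; the robust variants in the paper additionally guard this estimate against outlier pixels.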
