Search Results (511)

Search Parameters:
Keywords = Deep-Blue

23 pages, 3301 KiB  
Article
An Image-Based Water Turbidity Classification Scheme Using a Convolutional Neural Network
by Itzel Luviano Soto, Yajaira Concha-Sánchez and Alfredo Raya
Computation 2025, 13(8), 178; https://doi.org/10.3390/computation13080178 - 23 Jul 2025
Abstract
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations and generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems. Full article
(This article belongs to the Section Computational Engineering)
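The study above bins continuous turbidimeter readings into five ordinal classes. A minimal sketch of such a binning — the NTU thresholds here are placeholders, since the abstract does not give the regulation-inspired ranges:

```python
import numpy as np

# Hypothetical NTU boundaries between the five classes; the study's actual
# regulation-inspired ranges are not stated in the abstract.
BOUNDS = [5.0, 40.0, 150.0, 400.0]

def turbidity_class(ntu: float) -> int:
    """Map a turbidimeter reading (NTU) to one of five ordinal classes 0..4."""
    return int(np.searchsorted(BOUNDS, ntu, side="right"))
```

A CNN trained on the RGB images then predicts these class labels directly, without needing a turbidimeter at inference time.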
32 pages, 13059 KiB  
Article
Verifying the Effects of the Grey Level Co-Occurrence Matrix and Topographic–Hydrologic Features on Automatic Gully Extraction in Dexiang Town, Bayan County, China
by Zhuo Chen and Tao Liu
Remote Sens. 2025, 17(15), 2563; https://doi.org/10.3390/rs17152563 (registering DOI) - 23 Jul 2025
Abstract
Erosion gullies can reduce arable land area and decrease agricultural machinery efficiency; therefore, automatic gully extraction on a regional scale should be one of the preconditions of gully control and land management. The purpose of this study is to compare the effects of the grey level co-occurrence matrix (GLCM) and topographic–hydrologic features on automatic gully extraction and guide future practices in adjacent regions. To accomplish this, GaoFen-2 (GF-2) satellite imagery and high-resolution digital elevation model (DEM) data were first collected. The GLCM and topographic–hydrologic features were generated, and then, a gully label dataset was built via visual interpretation. Second, the study area was divided into training, testing, and validation areas, and four practices using different feature combinations were conducted. The DeepLabV3+ and ResNet50 architectures were applied to train five models in each practice. Thirdly, the trainset gully intersection over union (IOU), test set gully IOU, receiver operating characteristic curve (ROC), area under the curve (AUC), user’s accuracy, producer’s accuracy, Kappa coefficient, and gully IOU in the validation area were used to assess the performance of the models in each practice. The results show that the validated gully IOU was 0.4299 (±0.0082) when only the red (R), green (G), blue (B), and near-infrared (NIR) bands were applied, and solely combining the topographic–hydrologic features with the RGB and NIR bands significantly improved the performance of the models, which boosted the validated gully IOU to 0.4796 (±0.0146). Nevertheless, solely combining GLCM features with RGB and NIR bands decreased the accuracy, which resulted in the lowest validated gully IOU of 0.3755 (±0.0229). 
Finally, by employing the full set of RGB and NIR bands, the GLCM and topographic–hydrologic features obtained a validated gully IOU of 0.4762 (±0.0163) and tended to show an equivalent improvement with the combination of topographic–hydrologic features and RGB and NIR bands. A preliminary explanation is that the GLCM captures the local textures of gullies and their backgrounds, and thus introduces ambiguity and noise into the convolutional neural network (CNN). Therefore, the GLCM tends to provide no benefit to automatic gully extraction with CNN-type algorithms, while topographic–hydrologic features, which are also original drivers of gullies, help determine the possible presence of water-origin gullies when optical bands fail to tell the difference between a gully and its confusing background. Full article
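For readers unfamiliar with the GLCM features compared above: a grey level co-occurrence matrix counts how often pairs of quantised grey levels appear at a fixed pixel offset, and texture statistics such as contrast are moments of that matrix. A minimal NumPy sketch (the study works on GF-2 imagery with standard GLCM implementations; this is only illustrative):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey level co-occurrence matrix for one offset, normalised to sum to 1.
    Assumes a non-negative image with img.max() > 0."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantise
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

A flat patch has zero contrast, while a checkerboard maximises it — which is exactly the kind of local texture the abstract argues can introduce ambiguity for a CNN.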
20 pages, 3982 KiB  
Article
Enhanced Rapid Mangrove Habitat Mapping Approach to Setting Protected Areas Using Satellite Indices and Deep Learning: A Case Study of the Solomon Islands
by Hyeon Kwon Ahn, Soohyun Kwon, Cholho Song and Chul-Hee Lim
Remote Sens. 2025, 17(14), 2512; https://doi.org/10.3390/rs17142512 - 18 Jul 2025
Abstract
Mangroves, as a key component of the blue-carbon ecosystem, have exceptional carbon sequestration capacity and are mainly distributed in tropical coastal regions. In the Solomon Islands, ongoing degradation of mangrove forests, primarily due to land conversion and timber exploitation, highlights an urgent need for high-resolution spatial data to inform effective conservation strategies. The present study introduces an efficient and accurate methodology for mapping mangrove habitats and prioritizing protection areas utilizing open-source satellite imagery and datasets available through the Google Earth Engine platform in conjunction with a U-Net deep learning algorithm. The model demonstrates high performance, achieving an F1-score of 0.834 and an overall accuracy of 0.96, in identifying mangrove distributions. The total mangrove area in the Solomon Islands is estimated to be approximately 71,348.27 hectares, accounting for about 2.47% of the national territory. Furthermore, based on the mapped mangrove habitats, an optimized hotspot analysis is performed to identify regions characterized by high-density mangrove distribution. By incorporating spatial variables such as distance from roads and urban centers, along with mangrove area, this study proposes priority mangrove protection areas. These results underscore the potential for using openly accessible satellite data to enhance the precision of mangrove conservation strategies in data-limited settings. This approach can effectively support coastal resource management and contribute to broader climate change mitigation strategies. Full article
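The F1-score (0.834) and overall accuracy (0.96) reported above combine in standard ways from confusion counts; a small sketch for the binary mangrove / non-mangrove case:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from binary confusion counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

def overall_accuracy(tp, tn, fp, fn):
    """Fraction of all pixels classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)
```

Note that with a rare class like mangrove (about 2.47% of territory), overall accuracy can be high while F1 stays much lower, which is why both are reported.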
17 pages, 11610 KiB  
Article
Exploring the Impact of Species Participation Levels on the Performance of Dominant Plant Identification Models in the Sericite–Artemisia Desert Grassland by Using Deep Learning
by Wenhao Liu, Guili Jin, Wanqiang Han, Mengtian Chen, Wenxiong Li, Chao Li and Wenlin Du
Agriculture 2025, 15(14), 1547; https://doi.org/10.3390/agriculture15141547 - 18 Jul 2025
Abstract
Accurate plant species identification in desert grasslands using hyperspectral data is a critical prerequisite for large-scale, high-precision grassland monitoring and management. However, due to prolonged overgrazing and the inherent ecological vulnerability of the environment, sericite–Artemisia desert grassland has experienced significant ecological degradation. Therefore, in this study, we obtained spectral images of the grassland in April 2022 using a Soc710 VP imaging spectrometer (Surface Optics Corporation, San Diego, CA, USA), which were classified into three levels (low, medium, and high) based on the level of participation of Seriphidium transiliense (Poljakov) Poljakov and Ceratocarpus arenarius L. in the community. The optimal index factor (OIF) was employed to synthesize feature band images, which were subsequently used as input for the DeepLabv3p, PSPNet, and UNet deep learning models in order to assess the influence of species participation on classification accuracy. The results indicated that species participation significantly impacted spectral information extraction and model classification performance. Higher participation enhanced the scattering of reflectivity in the canopy structure of S. transiliense, while the light saturation effect of C. arenarius was induced by its short stature. Band combinations—such as Blue, Red Edge, and NIR (BREN) and Red, Red Edge, and NIR (RREN)—exhibited strong capabilities in capturing structural vegetation information. Model performance was best at high levels of S. transiliense participation, with DeepLabv3p, PSPNet, and UNet achieving overall accuracies (OA) of 97.86%, 96.51%, and 98.20%, respectively. Among the tested models, UNet exhibited the highest classification accuracy and robustness with small sample datasets, effectively differentiating between S. transiliense, C. arenarius, and bare ground. However, when C. arenarius was the primary target species, the model's performance declined as its participation levels increased, exhibiting significant omission errors for S. transiliense, whose producer's accuracy (PA) decreased by 45.91%. The findings of this study provide effective technical means and theoretical support for the identification of plant species and ecological monitoring in sericite–Artemisia desert grasslands. Full article
(This article belongs to the Section Digital Agriculture)
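Overall accuracy, producer's accuracy (the per-class recall cited above), user's accuracy, and the Kappa coefficient all derive from the confusion matrix; a compact sketch:

```python
import numpy as np

def class_accuracies(cm):
    """Producer's (recall) and user's (precision) accuracy per class,
    from a confusion matrix cm[true, pred]."""
    cm = np.asarray(cm, float)
    producers = np.diag(cm) / cm.sum(axis=1)
    users = np.diag(cm) / cm.sum(axis=0)
    return producers, users

def kappa(cm):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1 - pe)
```

The 45.91% drop in producer's accuracy reported above corresponds to a growing share of true S. transiliense pixels falling off the confusion-matrix diagonal (omission errors).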
50 pages, 9734 KiB  
Article
Efficient Hotspot Detection in Solar Panels via Computer Vision and Machine Learning
by Nayomi Fernando, Lasantha Seneviratne, Nisal Weerasinghe, Namal Rathnayake and Yukinobu Hoshino
Information 2025, 16(7), 608; https://doi.org/10.3390/info16070608 - 15 Jul 2025
Abstract
Solar power generation is rapidly emerging within renewable energy due to its cost-effectiveness and ease of deployment. However, improper inspection and maintenance lead to significant damage from unnoticed solar hotspots. Even with inspections, factors like shadows, dust, and shading cause localized heat, mimicking hotspot behavior. This study emphasizes interpretability and efficiency, identifying key predictive features through feature-level and What-if Analysis. It evaluates model training and inference times to assess effectiveness in resource-limited environments, aiming to balance accuracy, generalization, and efficiency. Using Unmanned Aerial Vehicle (UAV)-acquired thermal images from five datasets, the study compares five Machine Learning (ML) models and five Deep Learning (DL) models. Explainable AI (XAI) techniques guide the analysis, with a particular focus on MPEG (Moving Picture Experts Group)-7 features for hotspot discrimination, supported by statistical validation. Medium Gaussian SVM achieved the best trade-off, with 99.3% accuracy and 18 s inference time. Feature analysis revealed blue chrominance as a strong early indicator of hotspot detection. Statistical validation across datasets confirmed the discriminative strength of MPEG-7 features. This study revisits the assumption that DL models are inherently superior, presenting an interpretable alternative for hotspot detection and highlighting the potential impact of domain mismatch. Model-level insight shows that both absolute and relative temperature variations are important in solar panel inspections. The relative decrease in “blueness” provides a crucial early indication of faults, especially in low-contrast thermal images where distinguishing normal warm areas from actual hotspots is difficult. Feature-level insight highlights how subtle changes in color composition, particularly reductions in blue components, serve as early indicators of developing anomalies. Full article
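The "blue chrominance" feature highlighted above is, in video/MPEG terms, the Cb channel of the YCbCr colour space; a sketch of extracting it from an RGB array using the standard BT.601 coefficients (the study's exact MPEG-7 descriptor pipeline is not given in the abstract):

```python
import numpy as np

def blue_chrominance(rgb):
    """Cb channel (ITU-R BT.601, 8-bit offset form):
    Cb = 128 - 0.168736*R - 0.331264*G + 0.5*B.
    rgb: array of shape (..., 3) with 0-255 channel values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
```

A falling mean Cb over a panel region (a "relative decrease in blueness") is the kind of early warning signal the feature analysis points to.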
25 pages, 16927 KiB  
Article
Improving Individual Tree Crown Detection and Species Classification in a Complex Mixed Conifer–Broadleaf Forest Using Two Machine Learning Models with Different Combinations of Metrics Derived from UAV Imagery
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Geomatics 2025, 5(3), 32; https://doi.org/10.3390/geomatics5030032 - 13 Jul 2025
Abstract
Individual tree crown detection (ITCD) and tree species classification are critical for forest inventory, species-specific monitoring, and ecological studies. However, accurately detecting tree crowns and identifying species in structurally complex forests with overlapping canopies remains challenging. This study was conducted in a complex mixed conifer–broadleaf forest in northern Japan, aiming to improve ITCD and species classification by employing two machine learning models and different combinations of metrics derived from very high-resolution (2.5 cm) UAV red–green–blue (RGB) and multispectral (MS) imagery. We first enhanced ITCD by integrating different combinations of metrics into multiresolution segmentation (MRS) and DeepForest (DF) models. ITCD accuracy was evaluated across dominant forest types and tree density classes. Next, nine tree species were classified using the ITCD outputs from both MRS and DF approaches, applying Random Forest and DF models, respectively. Incorporating structural, textural, and spectral metrics improved MRS-based ITCD, achieving F-scores of 0.44–0.58. The DF model, which used only structural and spectral metrics, achieved higher F-scores of 0.62–0.79. For species classification, the Random Forest model achieved a Kappa value of 0.81, while the DF model attained a higher Kappa value of 0.91. These findings demonstrate the effectiveness of integrating UAV-derived metrics and advanced modeling approaches for accurate ITCD and species classification in heterogeneous forest environments. The proposed methodology offers a scalable and cost-efficient solution for detailed forest monitoring and species-level assessment. Full article
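The F-scores above rest on matching predicted crowns to reference crowns, typically by intersection over union (IoU) between their footprints; a mask-based IoU sketch:

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two boolean masks (e.g. rasterised crowns)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```

A predicted crown is usually counted as a true positive when its IoU with some reference crown exceeds a threshold (often 0.5), from which precision, recall, and the F-score follow.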
7 pages, 1824 KiB  
Interesting Images
Apocrine Breast Carcinoma with Thanatosomes (Hyaline Globules)
by Mitsuhiro Tachibana, Masashi Nozawa, Tadahiro Isono, Kei Tsukamoto and Kazuyasu Kamimura
Diagnostics 2025, 15(14), 1768; https://doi.org/10.3390/diagnostics15141768 - 13 Jul 2025
Abstract
Thanatosomes (hyaline globules or death bodies) are histologically observed in various non-neoplastic and neoplastic conditions. Some of these globules are associated with apoptotic cell death. Only six documented cases of thanatosomes have been reported in breast tumors. In this rare case involving a 64-year-old Japanese woman diagnosed as having rectal cancer, preoperative computed tomography scanning revealed breast cancer in her right breast. Following a right total mastectomy, a tumor characterized as apocrine carcinoma (carcinoma with apocrine differentiation) containing thanatosomes was discovered. These globules are PAS positive and diastase resistant, exhibit deep fuchsinophilic staining with Masson’s trichrome, stain dark blue with PTAH, and are negative for mucin by Alcian blue. The tumor cells tested positive for the androgen receptor, FOXA1, and GCDFP15. Human epidermal growth factor type 2 (HER2)/neu score was 3+/positive, and the Ki-67 labeling index was 60%. Thus, the tumor represented high-grade, HER2-enriched apocrine carcinoma. Thanatosomes are immunoreactive to cleaved caspase-3 and are histological markers of high cell turnover and apoptotic cell death. Therefore, in this nonspecific microscopic neoplastic condition, they are typically linked to high-grade tumors, as this case showed. This report presents a rare case of apocrine breast cancer featuring a limited number of thanatosomes. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
23 pages, 48857 KiB  
Article
A 36-Year Assessment of Mangrove Ecosystem Dynamics in China Using Kernel-Based Vegetation Index
by Yiqing Pan, Mingju Huang, Yang Chen, Baoqi Chen, Lixia Ma, Wenhui Zhao and Dongyang Fu
Forests 2025, 16(7), 1143; https://doi.org/10.3390/f16071143 - 11 Jul 2025
Abstract
Mangrove forests serve as critical ecological barriers in coastal zones and play a vital role in global blue carbon sequestration strategies. In recent decades, China’s mangrove ecosystems have experienced complex interactions between degradation and restoration under intense coastal urbanization and systematic conservation efforts. However, the long-term spatiotemporal patterns and driving mechanisms of mangrove ecosystem health changes remain insufficiently quantified. This study developed a multi-temporal analytical framework using Landsat imagery (1986–2021) to derive kernel normalized difference vegetation index (kNDVI) time series—an advanced phenological indicator with enhanced sensitivity to vegetation dynamics. We systematically characterized mangrove growth patterns along China’s southeastern coast through integrated Theil–Sen slope estimation, Mann–Kendall trend analysis, and Hurst exponent forecasting. A Deep Forest regression model was subsequently applied to quantify the relative contributions of environmental drivers (mean annual sea surface temperature, precipitation, air temperature, tropical cyclone frequency, and relative sea-level rise rate) and anthropogenic pressures (nighttime light index). The results showed the following: (1) a nationally significant improvement in mangrove vitality (p < 0.05), with mean annual kNDVI increasing by 0.0072/yr during 1986–2021; (2) spatially divergent trajectories, with 58.68% of mangroves exhibiting significant improvement (p < 0.05), which was 2.89 times higher than the proportion of degraded areas (15.10%); (3) Hurst persistence analysis (H = 0.896) indicating that 74.97% of the mangrove regions were likely to maintain their growth trends, while 15.07% of the coastal zones faced potential degradation risks; and (4) Deep Forest regression identifying the relative rate of sea-level rise (importance = 0.91) and anthropogenic pressure (nighttime light index, importance = 0.81) as dominant drivers, surpassing climatic factors.
This study provides the first national-scale, 30 m resolution assessment of mangrove growth dynamics using kNDVI, offering a scientific basis for adaptive management and blue carbon strategies in subtropical coastal ecosystems. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
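With the common choice of an RBF kernel and length-scale σ = (NIR + Red)/2, the kNDVI used above reduces to tanh(NDVI²); a sketch under that assumption:

```python
import numpy as np

def kndvi(nir, red):
    """Kernel NDVI. With an RBF kernel and sigma = (NIR + Red)/2,
    kNDVI = tanh(((NIR - Red) / (NIR + Red))**2) = tanh(NDVI**2)."""
    ndvi = (nir - red) / (nir + red)
    return np.tanh(ndvi ** 2)
```

The tanh saturation compresses extreme NDVI values, which is part of why kNDVI is reported to be more robust for dense canopies than plain NDVI.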
30 pages, 5474 KiB  
Article
WHU-RS19 ABZSL: An Attribute-Based Dataset for Remote Sensing Image Understanding
by Mattia Balestra, Marina Paolanti and Roberto Pierdicca
Remote Sens. 2025, 17(14), 2384; https://doi.org/10.3390/rs17142384 - 10 Jul 2025
Abstract
The advancement of artificial intelligence (AI) in remote sensing (RS) increasingly depends on datasets that offer rich and structured supervision beyond traditional scene-level labels. Although existing benchmarks for aerial scene classification have facilitated progress in this area, their reliance on single-class annotations restricts their application to more flexible, interpretable and generalisable learning frameworks. In this study, we introduce WHU-RS19 ABZSL: an attribute-based extension of the widely adopted WHU-RS19 dataset. This new version comprises 1005 high-resolution aerial images across 19 scene categories, each annotated with a vector of 38 features. These cover objects (e.g., roads and trees), geometric patterns (e.g., lines and curves) and dominant colours (e.g., green and blue), and are defined through expert-guided annotation protocols. To demonstrate the value of the dataset, we conduct baseline experiments using deep learning models that had been adapted for multi-label classification—ResNet18, VGG16, InceptionV3, EfficientNet and ViT-B/16—designed to capture the semantic complexity characteristic of real-world aerial scenes. The results, which are measured in terms of macro F1-score, range from 0.7385 for ResNet18 to 0.7608 for EfficientNet-B0. In particular, EfficientNet-B0 and ViT-B/16 are the top performers in terms of the overall macro F1-score and consistency across attributes, while all models show a consistent decline in performance for infrequent or visually ambiguous categories. This confirms that it is feasible to accurately predict semantic attributes in complex scenes. By enriching a standard benchmark with detailed, image-level semantic supervision, WHU-RS19 ABZSL supports a variety of downstream applications, including multi-label classification, explainable AI, semantic retrieval, and attribute-based ZSL. 
It thus provides a reusable, compact resource for advancing the semantic understanding of remote sensing and multimodal AI. Full article
(This article belongs to the Special Issue Remote Sensing Datasets and 3D Visualization of Geospatial Big Data)
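The macro F1-scores reported above average per-attribute F1 over all 38 attribute columns; a sketch for binary indicator matrices:

```python
import numpy as np

def macro_f1(y_true, y_pred):
    """Macro F1 across attribute columns of binary indicator matrices
    of shape (n_images, n_attributes)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    f1s = []
    for k in range(y_true.shape[1]):
        t, p = y_true[:, k], y_pred[:, k]
        tp = int(np.sum(t & p))
        fp = int(np.sum(~t & p))
        fn = int(np.sum(t & ~p))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 1.0)
    return float(np.mean(f1s))
```

Because each attribute contributes equally to the mean, rare or visually ambiguous attributes drag the macro score down — consistent with the per-category decline the authors observe.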
23 pages, 3645 KiB  
Article
Color-Guided Mixture-of-Experts Conditional GAN for Realistic Biomedical Image Synthesis in Data-Scarce Diagnostics
by Patrycja Kwiek, Filip Ciepiela and Małgorzata Jakubowska
Electronics 2025, 14(14), 2773; https://doi.org/10.3390/electronics14142773 - 10 Jul 2025
Abstract
Background: Limited availability of high-quality labeled biomedical image datasets presents a significant challenge for training deep learning models in medical diagnostics. This study proposes a novel image generation framework combining conditional generative adversarial networks (cGANs) with a Mixture-of-Experts (MoE) architecture and color histogram-aware loss functions to enhance synthetic blood cell image quality. Methods: RGB microscopic images from the BloodMNIST dataset (eight blood cell types, resolution 3 × 128 × 128) underwent preprocessing with k-means clustering to extract the dominant colors and UMAP for visualizing class similarity. Spearman correlation-based distance matrices were used to evaluate the discriminative power of each RGB channel. A MoE–cGAN architecture was developed with residual blocks and LeakyReLU activations. Expert generators were conditioned on cell type, and the generator’s loss was augmented with a Wasserstein distance-based term comparing red and green channel histograms, which were found most relevant for class separation. Results: The red and green channels contributed most to class discrimination; the blue channel had minimal impact. The proposed model achieved 0.97 classification accuracy on generated images (ResNet50), with 0.96 precision, 0.97 recall, and a 0.96 F1-score. The best Fréchet Inception Distance (FID) was 52.1. Misclassifications occurred mainly among visually similar cell types. Conclusions: Integrating histogram alignment into the MoE–cGAN training significantly improves the realism and class-specific variability of synthetic images, supporting robust model development under data scarcity in hematological imaging. Full article
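The histogram-aware generator loss compares red- and green-channel histograms with a Wasserstein distance. In one dimension, for histograms on a shared bin grid, this is just the L1 distance between cumulative distributions; a sketch (the study's exact loss weighting is not given in the abstract):

```python
import numpy as np

def hist_wasserstein(p, q):
    """1-D Wasserstein distance between two histograms on the same bin grid,
    in units of bins: the L1 distance between their CDFs."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())
```

Unlike a bin-wise L2 loss, this distance grows with how far mass must move between bins, so it penalises colour shifts smoothly — useful as an auxiliary term alongside the adversarial loss.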
21 pages, 5148 KiB  
Article
Research on Buckwheat Weed Recognition in Multispectral UAV Images Based on MSU-Net
by Jinlong Wu, Xin Wu and Ronghui Miao
Agriculture 2025, 15(14), 1471; https://doi.org/10.3390/agriculture15141471 - 9 Jul 2025
Abstract
Quickly and accurately identifying weed areas is of great significance for improving weeding efficiency, reducing pesticide residues, protecting the soil ecological environment, and increasing crop yield and quality. To address the low detection efficiency of weed recognition in complex agricultural environments and the inability of existing minor-grain weed recognition methods based on unmanned aerial vehicles (UAVs) to handle multispectral input, a semantic segmentation model for buckwheat weeds based on MSU-Net (multispectral U-shaped network) was proposed to explore the influence of different band optimizations on recognition accuracy. Five spectral features—red (R), blue (B), green (G), red edge (REdge), and near-infrared (NIR)—were collected in August, when the weeds were more prominent. Based on the U-Net image semantic segmentation model, the input module was improved to adaptively adjust the input bands. Because the neuron death caused by the original ReLU activation function may lead to misidentification, it was replaced by the Swish function to improve adaptability to complex inputs. Five single-band multispectral datasets and nine multi-band combinations were each input into the improved MSU-Net model to verify the performance of our method. Experimental results show that among the single-band results, the B band performs better than the other bands, with mean pixel accuracy (mPA), mean intersection over union (mIoU), Dice, and F1 values of 0.75, 0.61, 0.87, and 0.80, respectively. Among the multi-band results, the R+G+B+NIR combination performs better than the other combinations, with mPA, mIoU, Dice, and F1 values of 0.76, 0.65, 0.85, and 0.78, respectively. Compared with U-Net, DenseASPP, PSPNet, and DeepLabv3, our method achieved a preferable balance between model accuracy and resource consumption. These results indicate that our method can adapt to multispectral input bands and achieve good results in weed segmentation tasks. It can also provide reference for multispectral data analysis and semantic segmentation in the field of minor grain crops. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
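The ReLU-to-Swish swap above targets "dying" units: ReLU's output and gradient are exactly zero for negative inputs, while Swish (x · sigmoid(x)) keeps a small non-zero response there. A sketch of the two activations:

```python
import numpy as np

def relu(x):
    """ReLU: zero output (and zero gradient) for all negative inputs."""
    return np.maximum(x, 0.0)

def swish(x):
    """Swish/SiLU: x * sigmoid(x) = x / (1 + exp(-x)).
    Smooth, and non-zero for x < 0, avoiding dead units."""
    return x / (1.0 + np.exp(-x))
```

Because Swish is differentiable everywhere and non-monotonic near zero, gradients keep flowing through units that a ReLU would silence permanently.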
16 pages, 4000 KiB  
Article
Microstructure Engineered Nanoporous Copper for Enhanced Catalytic Degradation of Organic Pollutants in Wastewater
by Taskeen Zahra, Saleem Abbas, Junfei Ou, Tuti Mariana Lim and Aumber Abbas
Materials 2025, 18(13), 2929; https://doi.org/10.3390/ma18132929 - 20 Jun 2025
Abstract
Advanced oxidation processes offer great potential for eliminating organic pollutants from wastewater, where the development of efficient catalysts revolves around a deep understanding of the microstructure–property–performance relationship. In this study, we explore how microstructural engineering influences the catalytic performance of nanoporous copper (NPC) in degrading organic contaminants. By systematically tailoring the NPC microstructure, we achieve tunable three-dimensional porous architectures with nanoscale pores and macroscopic grains. This results in a homogeneous, bicontinuous pore–ligament network that is crucial for the oxidative degradation of the model pollutant methylene blue in the presence of hydrogen peroxide. The catalytic efficiency is assessed using ultraviolet–visible spectroscopy, which reveals first-order degradation kinetics with a rate constant κ = 44 × 10⁻³ min⁻¹, a 30-fold improvement over bulk copper foil and a fourfold increase over copper nanoparticles. The superior performance is attributed to the high surface area, abundant active sites, and multiscale porosity of NPC. Additionally, the high step-edge density, nanoscale curvature, and enhanced crystallinity contribute to the catalyst’s remarkable stability and reactivity. This study not only provides insights into microstructure–property–performance relationships in nanoporous catalysts but also offers an effective strategy for designing efficient and scalable materials for wastewater treatment and environmental applications. Full article
(This article belongs to the Section Porous Materials)
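First-order kinetics means ln(C/C₀) falls linearly with time at rate κ, so the rate constant can be recovered by a linear fit to the log-transformed concentration (or absorbance) series; a sketch:

```python
import numpy as np

def rate_constant(t, c):
    """Fit ln(C/C0) = -k * t by least squares and return k.
    t: times; c: concentrations (or absorbances, proportional to C)."""
    t = np.asarray(t, float)
    c = np.asarray(c, float)
    slope = np.polyfit(t, np.log(c / c[0]), 1)[0]
    return -slope
```

Applied to UV–vis absorbance of methylene blue over time, this is how a value like κ = 44 × 10⁻³ min⁻¹ is extracted.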
17 pages, 6547 KiB  
Article
Direct Estimation of Forest Aboveground Biomass from UAV LiDAR and RGB Observations in Forest Stands with Various Tree Densities
by Kangyu So, Jenny Chau, Sean Rudd, Derek T. Robinson, Jiaxin Chen, Dominic Cyr and Alemu Gonsamo
Remote Sens. 2025, 17(12), 2091; https://doi.org/10.3390/rs17122091 - 18 Jun 2025
Abstract
Canada's vast forests play a substantial role in the global carbon balance but require laborious and expensive forest inventory campaigns to monitor changes in aboveground biomass (AGB). Light detection and ranging (LiDAR) or reflectance observations from airborne or unoccupied aerial vehicles (UAVs) may address the scalability limitations of traditional forest inventory but require simple forest structures or large sets of manually delineated crowns. Here, we introduce a deep learning approach for crown delineation and AGB estimation that is reproducible for complex forest structures without relying on hand annotations for training. First, we detect treetops and delineate crowns in a LiDAR point cloud using marker-controlled watershed segmentation (MCWS). Then, we train a deep learning model on annotations derived from MCWS to predict crowns on UAV red, green, and blue (RGB) tiles. Finally, we estimate AGB metrics from tree height- and crown diameter-based allometric equations, all derived from UAV data. We validate our approach on 14 ha of mixed forest stands with various experimental tree densities in Southern Ontario, Canada. Our results show that pairing an unsupervised LiDAR-only algorithm for tree crown delineation with a self-supervised RGB deep learning model trained on LiDAR-derived annotations leads to an 18% improvement in AGB estimation accuracy. In unharvested stands, the self-supervised RGB model performs well for height (adjusted R², Rₐ² = 0.79) and AGB (Rₐ² = 0.80) estimation. In thinned stands, the performance of both unsupervised and self-supervised methods varied with stand density, crown clumping, canopy height variation, and species diversity. These findings suggest that MCWS can be supplemented with self-supervised deep learning to directly estimate biomass components in complex forest structures as well as in atypical forest conditions where stand density and spatial patterns are manipulated.
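The final step above, converting delineated crowns to biomass via height- and crown-diameter-based allometry, can be sketched as follows. The power-law form AGB = a·Dᵇ·Hᶜ and the coefficients a, b, c are illustrative placeholders, not the equations used in the article; real coefficients are species- and site-specific:

```python
def tree_agb_kg(height_m, crown_diameter_m, a=0.05, b=2.0, c=1.0):
    """Per-tree aboveground biomass from UAV-derived height and crown
    diameter via a generic power-law allometry: AGB = a * D^b * H^c.
    The coefficients here are hypothetical placeholders."""
    return a * crown_diameter_m**b * height_m**c

def stand_agb_mg_per_ha(crowns, area_ha):
    """Sum per-tree AGB (kg) over all delineated crowns in a stand
    and report the density in Mg/ha."""
    total_kg = sum(tree_agb_kg(h, d) for h, d in crowns)
    return total_kg / 1000.0 / area_ha

# (height m, crown diameter m) pairs for three delineated crowns
crowns = [(18.0, 4.5), (22.0, 5.1), (15.5, 3.8)]
print(round(stand_agb_mg_per_ha(crowns, area_ha=0.01), 2))  # 5.8
```

In practice, the heights would come from the LiDAR canopy height model and the crown diameters from the RGB crown predictions, with uncertainty propagated from both sources.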

42 pages, 3140 KiB  
Review
Face Anti-Spoofing Based on Deep Learning: A Comprehensive Survey
by Huifen Xing, Siok Yee Tan, Faizan Qamar and Yuqing Jiao
Appl. Sci. 2025, 15(12), 6891; https://doi.org/10.3390/app15126891 - 18 Jun 2025
Abstract
Face recognition has achieved tremendous success in both theory and technology. However, with increasingly realistic attacks, such as printed photos, replayed videos, and 3D masks, as well as new attack methods such as AI-generated faces or videos, face recognition systems face significant challenges and risks. Distinguishing real faces from fake ones, i.e., face anti-spoofing (FAS), is crucial to the security of face recognition systems. With the advent of large-scale academic datasets in recent years, FAS based on deep learning has achieved remarkable performance and now dominates the field. This paper systematically reviews the latest advances in deep-learning-based FAS. First, it provides an overview of the background, basic concepts, and types of FAS attacks. Then, it categorizes existing FAS methods from the perspectives of the RGB (red, green, and blue) modality and other modalities, discussing their main concepts, the attack types they can detect, and their advantages and disadvantages. Next, it introduces popular datasets used in FAS research and highlights their characteristics. Finally, it summarizes current research challenges and future directions for FAS, such as limited generalization to unknown attacks, insufficient multi-modal research, the spatiotemporal efficiency of algorithms, and unified detection of presentation attacks and deepfakes. We aim to provide a comprehensive reference for this field and to inspire progress within the FAS community, guiding researchers toward promising directions for future work.
(This article belongs to the Special Issue Deep Learning in Object Detection)

14 pages, 2310 KiB  
Article
High-Performance Electrochromic Energy Storage Devices Based on Hexagonal WO₃ and SnO₂/PB Composite Films
by Yi Wang, Zilong Zhang, Ze Wang, Yujie Yan, Tong Feng and An Xie
Materials 2025, 18(12), 2871; https://doi.org/10.3390/ma18122871 - 17 Jun 2025
Abstract
Electrochromic devices have garnered significant interest owing to their promising applications in smart multifunctional electrochromic energy storage devices (EESDs) and emerging next-generation electronic technologies. Tungsten oxide (WO₃), which possesses both electrochromic and pseudocapacitive characteristics, offers great potential for developing multifunctional devices with enhanced performance. However, an efficient and straightforward synthesis of WO₃ electrochromic films that simultaneously ensures high coloration efficiency and energy storage capability remains a significant challenge. In this work, a low-temperature hydrothermal approach is employed to grow hexagonal-phase WO₃ films directly on FTO substrates. The process uses sorbitol to promote nucleation and rubidium sulfate to regulate crystal growth, enabling a one-step in situ fabrication strategy. To complement the high-performance WO₃ cathode, a composite PB/SnO₂ film was designed as the anode, offering improved electrochromic properties and enhanced stability. The assembled EESD showed a fast bleaching/coloration response and a high coloration efficiency of 101.2 cm² C⁻¹. It also exhibited a clear, reversible optical transition from a transparent state to a deep blue color, with a transmittance modulation reaching 81.47%.
(This article belongs to the Section Thin Films and Interfaces)
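Coloration efficiency of the kind quoted above is conventionally defined as CE = ΔOD/Q, where ΔOD = log₁₀(T_bleached/T_colored) and Q is the intercalated charge density. A minimal sketch of that calculation; the transmittance and charge values below are illustrative, not figures measured in the article:

```python
import math

def coloration_efficiency(t_bleached, t_colored, charge_mC_cm2):
    """CE = ΔOD / Q, with the optical density change
    ΔOD = log10(T_bleached / T_colored) and Q the intercalated
    charge density (converted here from mC/cm^2 to C/cm^2).
    Transmittances are fractions in (0, 1]."""
    delta_od = math.log10(t_bleached / t_colored)
    return delta_od / (charge_mC_cm2 / 1000.0)  # cm^2 C^-1

# Hypothetical switch: 85% -> 10% transmittance for 9.2 mC/cm^2 of charge
print(round(coloration_efficiency(0.85, 0.10, 9.2), 1))  # 101.0
```

A higher CE means a larger optical change per unit of injected charge, which is why it serves as the figure of merit linking the electrochromic and energy storage roles of the device.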