Search Results (39)

Search Parameters:
Keywords = RGB photography

24 pages, 12286 KiB  
Article
A UAV-Based Multi-Scenario RGB-Thermal Dataset and Fusion Model for Enhanced Forest Fire Detection
by Yalin Zhang, Xue Rui and Weiguo Song
Remote Sens. 2025, 17(15), 2593; https://doi.org/10.3390/rs17152593 - 25 Jul 2025
Viewed by 437
Abstract
UAVs are essential for forest fire detection due to vast forest areas and the inaccessibility of high-risk zones, enabling rapid long-range inspection and detailed close-range surveillance. However, aerial photography faces challenges such as multi-scale target recognition and complex scenario adaptation (e.g., deformation, occlusion, and lighting variations). RGB-Thermal fusion methods effectively integrate visible-light texture and thermal infrared temperature features, but current approaches are constrained by limited datasets and insufficient exploitation of cross-modal complementary information, ignoring cross-level feature interaction. To address data scarcity in wildfire scenarios, a time-synchronized, multi-scene, multi-angle aerial RGB-Thermal dataset (RGBT-3M) was constructed, with “Smoke–Fire–Person” annotations and modal alignment via the M-RIFT method. We then propose a CP-YOLOv11-MF fusion detection model based on the advanced YOLOv11 framework, which progressively learns the complementary heterogeneous features of each modality. Experimental validation demonstrates the superiority of our method, with a precision of 92.5%, a recall of 93.5%, a mAP50 of 96.3%, and a mAP50-95 of 62.9%. The model’s RGB-Thermal fusion capability enhances early fire detection, offering a benchmark dataset and methodological advancement for intelligent forest conservation, with implications for AI-driven ecological protection. Full article
(This article belongs to the Special Issue Advances in Spectral Imagery and Methods for Fire and Smoke Detection)
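For readers unfamiliar with the detection metrics quoted above: mAP50 counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation (illustrative only, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive at mAP50 when iou(...) >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333, below the 0.5 threshold
```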

19 pages, 16547 KiB  
Article
A New Method for Camera Auto White Balance for Portrait
by Sicong Zhou, Kaida Xiao, Changjun Li, Peihua Lai, Hong Luo and Wenjun Sun
Technologies 2025, 13(6), 232; https://doi.org/10.3390/technologies13060232 - 5 Jun 2025
Viewed by 815
Abstract
Accurate skin color reproduction under varying correlated color temperatures (CCTs) remains a critical challenge in the graphic arts, impacting applications such as face recognition, portrait photography, and human–computer interaction. Traditional auto white balance (AWB) methods such as gray-world or max-RGB often rely on statistical assumptions, which limit their accuracy under complex or extreme lighting. We propose SCR-AWB, a novel algorithm that leverages real skin reflectance data to estimate the scene illuminant’s spectral power distribution (SPD) and CCT, enabling accurate skin tone reproduction. The method integrates prior knowledge of human skin reflectance, basis vectors, and camera sensitivity to perform pixel-wise spectral estimation. Experimental results on a difficult skin color reproduction task demonstrate that SCR-AWB significantly outperforms traditional AWB algorithms. It achieves lower reproduction angle errors and more accurate CCT predictions, with deviations below 300 K in most cases. These findings validate SCR-AWB as an effective and computationally efficient solution for robust skin color correction. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
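The gray-world baseline mentioned above, and the reproduction angular error used to score AWB methods, both have compact standard formulations. A sketch of each (these are the conventional definitions, not the SCR-AWB algorithm itself):

```python
import math

def gray_world_gains(pixels):
    """Per-channel gains that map the mean RGB of an image to neutral gray,
    under the gray-world assumption that the average scene reflectance is achromatic."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [gray / m for m in means]

def angular_error_deg(rgb_est, rgb_ref):
    """Reproduction angular error between estimated and reference RGB vectors, in degrees."""
    dot = sum(a * b for a, b in zip(rgb_est, rgb_ref))
    na = math.sqrt(sum(a * a for a in rgb_est))
    nb = math.sqrt(sum(b * b for b in rgb_ref))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
```

Applying the gains equalizes the channel means; a perfect illuminant estimate gives an angular error of 0 degrees.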

29 pages, 6039 KiB  
Article
Tree Species Detection and Enhancing Semantic Segmentation Using Machine Learning Models with Integrated Multispectral Channels from PlanetScope and Digital Aerial Photogrammetry in Young Boreal Forest
by Arun Gyawali, Mika Aalto and Tapio Ranta
Remote Sens. 2025, 17(11), 1811; https://doi.org/10.3390/rs17111811 - 22 May 2025
Viewed by 920
Abstract
The precise identification and classification of tree species in young forests during their early development stages are vital for forest management and silvicultural efforts that support their growth and renewal. However, achieving accurate geolocation and species classification through field-based surveys is often a labor-intensive and complicated task. Remote sensing technologies combined with machine learning techniques present an encouraging solution, offering a more efficient alternative to conventional field-based methods. This study aimed to detect and classify young forest tree species using remote sensing imagery and machine learning techniques. The study involved two objectives: first, tree species detection using the latest version of You Only Look Once (YOLOv12), and second, semantic segmentation (classification) using random forest, Categorical Boosting (CatBoost), and a Convolutional Neural Network (CNN). To the best of our knowledge, this is the first study to utilize YOLOv12 for tree species identification and the first to integrate digital aerial photogrammetry with Planet imagery for semantic segmentation in young forests. The study used two remote sensing datasets: RGB imagery from unmanned aerial vehicle (UAV) orthophotography and RGB-NIR imagery from PlanetScope. For YOLOv12-based tree species detection, only the ortho RGB imagery was used, while semantic segmentation was performed with three sets of data: (1) ortho RGB (3 bands), (2) ortho RGB + canopy height model (CHM) + Planet RGB-NIR (8 bands), and (3) ortho RGB + CHM + Planet RGB-NIR + 12 vegetation indices (20 bands). Applying the three models to these datasets, nine machine learning models were trained and tested using 57 images (1024 × 1024 pixels) and their corresponding mask tiles.
The YOLOv12 model achieved 79% overall accuracy, with Scots pine performing best (precision: 97%, recall: 92%, mAP50: 97%, mAP75: 80%) and Norway spruce showing slightly lower accuracy (precision: 94%, recall: 82%, mAP50: 90%, mAP75: 71%). For semantic segmentation, the CatBoost model with 20 bands outperformed other models, achieving 85% accuracy, 80% Kappa, and 81% MCC, with CHM, EVI, NIRPlanet, GreenPlanet, NDGI, GNDVI, and NDVI being the most influential variables. These results indicate that a simple boosting model like CatBoost can outperform more complex CNNs for semantic segmentation in young forests. Full article
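Several of the influential variables listed above are normalized-difference band ratios. A sketch of the common definitions (note that NDGI in particular has more than one formulation in the literature; the green-vs-red ratio below is one widespread convention):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green normalized difference vegetation index."""
    return (nir - green) / (nir + green)

def ndgi(green, red):
    """Normalized difference greenness index (one common green-vs-red variant)."""
    return (green - red) / (green + red)
```

Each index maps band reflectances to the range [-1, 1], with healthy vegetation typically giving high NDVI values.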

25 pages, 5627 KiB  
Article
Digital Repeat Photography Application for Flowering Stage Classification of Selected Woody Plants
by Monika A. Różańska, Kamila M. Harenda, Damian Józefczyk, Tomasz Wojciechowski and Bogdan H. Chojnicki
Sensors 2025, 25(7), 2106; https://doi.org/10.3390/s25072106 - 27 Mar 2025
Viewed by 431
Abstract
Digital repeat photography is currently applied mainly in geophysical studies of ecosystems. However, its role as a tool in conventional phenology, tracking a plant’s seasonal developmental cycle, is growing. This study’s main goal was to develop an easy-to-reproduce, single-camera-based novel approach to determine the flowering phases of 12 woody plants of various deciduous species. Field observations served as binary class calibration datasets (flowering and non-flowering stages). All the image RGB parameters, designated for each plant separately, were used as plant features for the models’ parametrization. The training data were subjected to various transformations to achieve the best classifications using the weighted k-nearest neighbors algorithm. The developed models enabled flowering classification with onset-day shifts (absolute values) of 0, 1, 2, 3, and 5 days for 2, 3, 3, 2, and 2 plants, respectively. For 9 plants, the presented method enabled estimation of the flowering duration, which is a valuable yet rarely used parameter in conventional phenological studies. We found the presented method suitable for various plants, regardless of petal color and flower size, as long as there is a considerable change in the crown color during the flowering stage. Full article
(This article belongs to the Section Environmental Sensing)
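A weighted k-nearest neighbors classifier of the kind named above can be sketched in a few lines. Inverse-distance weighting is one common scheme; the paper's exact weighting and feature transformations are not specified here, so treat this as illustrative:

```python
def predict_weighted_knn(train, query, k=3):
    """Binary weighted k-NN: each of the k nearest neighbors votes with weight
    1/distance. `train` is a list of ((r, g, b), label) pairs; labels are 0/1."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y) for x, y in train
    )
    votes = {0: 0.0, 1: 0.0}
    for d, y in dists[:k]:
        votes[y] += 1.0 / (d + 1e-9)  # small epsilon avoids division by zero
    return max(votes, key=votes.get)
```

With RGB feature vectors per image, a query is assigned the class (flowering vs. non-flowering) whose nearby calibration samples carry the most total weight.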

18 pages, 7671 KiB  
Article
Automated Gluten Detection in Bread Images Using Convolutional Neural Networks
by Aviad Elyashar, Abigail Paradise Vit, Guy Sebbag, Alex Khaytin and Avi Zakai
Appl. Sci. 2025, 15(4), 1737; https://doi.org/10.3390/app15041737 - 8 Feb 2025
Cited by 1 | Viewed by 1208
Abstract
Celiac disease and gluten sensitivity affect a significant portion of the population and require adherence to a gluten-free diet. Dining in social settings, such as family events, workplace gatherings, or restaurants, makes it difficult to ensure that certain foods are gluten-free. Despite the availability of portable gluten testing devices, these instruments are expensive, require disposable capsules, depend on user preparation and technique, and cannot analyze an entire meal or detect gluten levels below the legal thresholds, potentially leading to inaccurate results. In this study, we propose RGB (Recognition of Gluten in Bread), a novel deep learning-based method for automatically detecting gluten in bread images. RGB is a decision-support tool to help individuals with celiac disease make informed dietary choices. To develop this method, we curated and annotated three unique datasets of bread images: one collected from Pinterest, one from Instagram, and a custom dataset containing information about flour types. Fine-tuning pre-trained convolutional neural networks (CNNs) on the Pinterest dataset, our best-performing model, ResNet50V2, achieved 77% accuracy and recall. Transfer learning was subsequently applied to adapt the model to the Instagram dataset, resulting in 78% accuracy and 77% recall. Finally, further fine-tuning on a significantly different dataset, the custom bread dataset, substantially improved performance, achieving an accuracy of 86%, precision of 87%, recall of 86%, and an F1-score of 86%. Our analysis further revealed that the model performed better on gluten-free flours, achieving higher accuracy scores for these types.
This study demonstrates the feasibility of image-based gluten detection in bread and highlights its potential to provide a cost-effective non-invasive alternative to traditional testing methods by allowing individuals with celiac disease to receive immediate feedback on potential gluten content in their meals through simple food photography. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
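The accuracy, precision, recall, and F1 figures quoted above all derive from the binary confusion matrix. A minimal sketch of how they are computed (standard definitions, not the authors' evaluation pipeline):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from paired binary label lists (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```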

17 pages, 3431 KiB  
Article
Interchangeability of Cross-Platform Orthophotographic and LiDAR Data in DeepLabV3+-Based Land Cover Classification Method
by Shijun Pan, Keisuke Yoshida, Satoshi Nishiyama, Takashi Kojima and Yutaro Hashimoto
Land 2025, 14(2), 217; https://doi.org/10.3390/land14020217 - 21 Jan 2025
Viewed by 872
Abstract
Riverine environmental information is important to collect, yet its collection still relies on personnel conducting field surveys, and these on-site tasks face significant limitations (e.g., sites that are hard or dangerous to enter). In recent years, air-vehicle-based Light Detection and Ranging (LiDAR) technologies have been applied as an efficient approach to data collection in global environmental research, e.g., land cover classification (LCC) and environmental monitoring. For this study, the authors focused on seven land cover types (bamboo, tree, grass, bare ground, water, road, and clutter) that can be parameterized for flood simulation. A validated airborne LiDAR bathymetry system (ALB) and a UAV-borne green LiDAR system (GLS) were used for a cross-platform analysis of LCC. Furthermore, LiDAR data were visualized using high-contrast color scales to improve the accuracy of land cover classification through image fusion techniques; when high-resolution aerial imagery is available, it must be downscaled to match the resolution of the low-resolution point clouds. Cross-platform data interchangeability was assessed with an interchangeability measure, defined as the absolute difference in overall accuracy (OA) or macro-F1 between platforms. It is noteworthy that relying solely on aerial photographs is inadequate for precise labeling, particularly under limited sunlight, which can lead to misclassification; in such cases, LiDAR plays a crucial role in target recognition. All the approaches (low-resolution digital imagery, LiDAR-derived imagery, and image fusion) achieved over 0.65 OA and around 0.6 macro-F1. The vegetation classes (bamboo, tree, grass) and the road class performed comparatively better than the clutter and bare ground classes. The main reason is that the datasets were acquired in different years (ALB in 2017, GLS in 2020). Because the clutter class comprises everything not covered by the other classes in this research, its RGB-based features cannot easily be carried across the three-year gap, unlike those of the other classes. The bare ground class also shows a color change between ALB and GLS caused by on-site reconstruction, which further decreases interchangeability. For individual classes, regardless of season and platform, image fusion classified bamboo and trees with higher F1 scores than low-resolution digital imagery or LiDAR-derived imagery alone, demonstrating cross-platform interchangeability for the tall vegetation types. In recent years, high-resolution UAV photography, high-precision LiDAR measurement (ALB, GLS), and satellite imagery have all been used, but LiDAR measurement equipment is expensive and measurement opportunities are limited. It would therefore be desirable if ALB and GLS data could be classified continuously by artificial intelligence, and this study investigated such data interchangeability. A unique and crucial aspect of this study is its exploration of the interchangeability of land cover classification models across different LiDAR platforms. Full article

25 pages, 5085 KiB  
Article
Enhancing Underwater Images through Multi-Frequency Detail Optimization and Adaptive Color Correction
by Xiujing Gao, Junjie Jin, Fanchao Lin, Hongwu Huang, Jiawei Yang, Yongfeng Xie and Biwen Zhang
J. Mar. Sci. Eng. 2024, 12(10), 1790; https://doi.org/10.3390/jmse12101790 - 8 Oct 2024
Cited by 1 | Viewed by 3832
Abstract
This paper presents a novel underwater image enhancement method addressing the challenges of low contrast, color distortion, and detail loss prevalent in underwater photography. Unlike existing methods that may introduce color bias or blur during enhancement, our approach leverages a two-pronged strategy. First, an Efficient Fusion Edge Detection (EFED) module preserves crucial edge information, ensuring detail clarity even in challenging turbidity and illumination conditions. Second, a Multi-scale Color Parallel Frequency-division Attention (MCPFA) module integrates multi-color space data with edge information. This module dynamically weights features based on their frequency domain positions, prioritizing high-frequency details and areas affected by light attenuation. Our method further incorporates a dual multi-color space structural loss function, optimizing the performance of the network across RGB, Lab, and HSV color spaces. This approach enhances structural alignment and minimizes color distortion, edge artifacts, and detail loss often observed in existing techniques. Comprehensive quantitative and qualitative evaluations using both full-reference and no-reference image quality metrics demonstrate that our proposed method effectively suppresses scattering noise, corrects color deviations, and significantly enhances image details. In terms of objective evaluation metrics, our method achieves the best performance in the test dataset of EUVP with a PSNR of 23.45, SSIM of 0.821, and UIQM of 3.211, indicating that it outperforms state-of-the-art methods in improving image quality. Full article
(This article belongs to the Section Ocean Engineering)
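The PSNR figure quoted above (23.45 on EUVP) is a full-reference metric computed from the mean squared error against a ground-truth image. A minimal sketch of the standard definition:

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel values; higher is better."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

Identical images give infinite PSNR; a uniform error of 10 gray levels on 8-bit images gives roughly 28 dB.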

13 pages, 3326 KiB  
Article
The Nitrate Cellulose Negatives: Degradation Study via Chemometric Methods
by Anastasia Povolotckaia, Svetlana Kaputkina, Irina Grigorieva, Dmitrii Pankin, Evgenii Borisov, Anna Vasileva, Valeria Kaputkina and Maria Dynnikova
Heritage 2024, 7(9), 4712-4724; https://doi.org/10.3390/heritage7090223 - 30 Aug 2024
Cited by 2 | Viewed by 1820
Abstract
Photographic artifacts carry important historical and cultural information. Materials used in photography at the turn of the 19th and 20th centuries tend to degrade both over time and when the temperature and humidity conditions of storage are violated. This raises the question of how to assess the degree of preservation and monitor the condition of photographic materials. Close attention should be paid to photographic materials that become flammable as a result of decomposition; this class of objects includes photographic films based on cellulose nitrate. This study examined 100 negatives and stereonegatives from the collection of Karl Kosse, dating from 1902 to 1917, as typical examples of this hazard class. The degradation of individual negatives was accompanied by a significant change in color: yellowing. The base of the photographic negatives (cellulose nitrate and camphor) was determined by Raman spectroscopy, and the presence of a gelatin layer was determined by ATR-FTIR spectroscopy. An approach for determining the state of degradation is proposed, based on chemometric analysis of the RGB components of digital photographs of the negatives. The support vector machine approach yields a decision boundary that can later be used to analyze a large data array. Full article
(This article belongs to the Special Issue Spectroscopy in Archaeometry and Conservation Science)
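The decision boundary mentioned above separates degraded (yellowed) from intact negatives in an RGB-derived feature space. As a toy stand-in for the paper's SVM, the sketch below trains a linear classifier by sub-gradient descent on the hinge loss; the feature vectors (mean R, G, B per negative) and all numeric values are hypothetical:

```python
def train_linear_svm(samples, labels, lr=0.05, lam=0.001, epochs=2000):
    """Tiny linear SVM trained by sub-gradient descent on the regularized hinge loss.
    `samples` are feature vectors (e.g. mean R, G, B of a negative); labels are +/-1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # inside the margin: push towards the correct side
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # outside the margin: only apply regularization shrinkage
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def classify(w, b, x):
    """Sign of the decision function: +1 (degraded) or -1 (intact) in this toy setup."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

Yellowed negatives sit on the high-R, high-G, low-B side of the learned boundary in this toy example.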

18 pages, 11216 KiB  
Article
Remote Sensing Guides Management Strategy for Invasive Legumes on the Central Plateau, New Zealand
by Paul G. Peterson, James D. Shepherd, Richard L. Hill and Craig I. Davey
Remote Sens. 2024, 16(13), 2503; https://doi.org/10.3390/rs16132503 - 8 Jul 2024
Cited by 1 | Viewed by 1269
Abstract
Remote sensing was used to map the invasion of yellow-flowered legumes on the Central Plateau of New Zealand to inform weed management strategy. The distributions of Cytisus scoparius (broom), Ulex europaeus (gorse) and Lupinus arboreus (tree lupin) were captured with high-resolution RGB photographs of the plants while flowering. The outcomes of herbicide operations to control C. scoparius and U. europaeus over time were also assessed through repeat photography and change mapping. A grid-square sampling tool previously developed by Manaaki Whenua—Landcare Research was used to help transfer data rapidly from photography to maps using manual classification. Artificial intelligence was trialled and ruled out because the number of false positives could not be tolerated. Future actions to protect the natural values and vistas of the Central Plateau from legume invasion were identified. While previous control operations have mostly targeted large, highly visible legume patches, the importance of removing outlying plants to prevent the establishment of new seed banks and slow spread has been underestimated. Outliers not only establish new, large, long-lived seed banks in previously seed-free areas, but they also contribute more to range expansion than larger patches. Our C. scoparius and U. europaeus change mapping confirms and helps to visualise the establishment and expansion of uncontrolled outliers. The power of visualizing weed control strategies through remote sensing has supported recommendations to improve outlier control to achieve long-term, sustainable landscape-scale suppression of invasive legumes. Full article
(This article belongs to the Special Issue Remote Sensing for Management of Invasive Species)
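Mapping yellow-flowered legumes from RGB photographs amounts to flagging pixels with high red and green but low blue. The study above used manual classification with a grid-square tool (and ruled out AI due to false positives), so the sketch below is only a toy illustration of the color logic; the thresholds are invented, not from the paper:

```python
def yellow_mask(pixels, min_rg=150, max_b=110):
    """Toy yellow-flower detector on 8-bit RGB pixels: yellow has high R and G, low B.
    Thresholds are illustrative assumptions, not values from the study."""
    return [r >= min_rg and g >= min_rg and b <= max_b for r, g, b in pixels]
```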

14 pages, 5052 KiB  
Article
Non-Destructive Prediction of Anthocyanin Content of Rosa chinensis Petals Using Digital Images and Machine Learning Algorithms
by Xiu-Ying Liu, Jun-Ru Yu and Heng-Nan Deng
Horticulturae 2024, 10(5), 503; https://doi.org/10.3390/horticulturae10050503 - 13 May 2024
Cited by 2 | Viewed by 1750
Abstract
Anthocyanins are widely found in plants and have significant functions. The accurate detection and quantitative assessment of anthocyanin content are essential to assess these functions. The anthocyanin content in plant tissues is typically quantified by wet chemistry and spectroscopic techniques. However, these methods are time-consuming, labor-intensive, tedious, destructive, or require expensive equipment. Digital photography is a fast, economical, efficient, reliable, and non-invasive method for estimating plant pigment content. This study examined the anthocyanin content of Rosa chinensis petals using digital images, a back-propagation neural network (BPNN), and the random forest (RF) algorithm. The objective was to determine whether using RGB indices with BPNN and RF algorithms to accurately predict the anthocyanin content of R. chinensis petals is feasible. The anthocyanin content ranged from 0.832 to 4.549 µmol g⁻¹ for 168 samples. Most RGB indices were strongly correlated with the anthocyanin content. The coefficient of determination (R²) and the ratio of performance to deviation (RPD) of the BPNN and RF models exceeded 0.75 and 2.00, respectively, indicating the high accuracy of both models in predicting the anthocyanin content of R. chinensis petals using RGB indices. The RF model had higher R² and RPD values, and lower root mean square error (RMSE) and mean absolute error (MAE) values, than the BPNN, indicating that it outperformed the BPNN model. This study provides an alternative method for determining the anthocyanin content of flowers. Full article
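The four evaluation statistics quoted above (R², RMSE, MAE, RPD) are all simple functions of the reference and predicted values. A minimal sketch of the standard definitions, where RPD is the standard deviation of the reference values divided by the RMSE:

```python
import math

def regression_metrics(y_true, y_pred):
    """R2, RMSE, MAE, and RPD for paired reference/predicted value lists."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    sd = math.sqrt(ss_tot / (n - 1))  # sample standard deviation of reference values
    rpd = sd / rmse                   # ratio of performance to deviation
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "RPD": rpd}
```

An RPD above 2.0, as reported for both models, is conventionally taken to indicate a model usable for quantitative prediction.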

27 pages, 14613 KiB  
Article
A UAV-Based Single-Lens Stereoscopic Photography Method for Phenotyping the Architecture Traits of Orchard Trees
by Wenli Zhang, Xinyu Peng, Tingting Bai, Haozhou Wang, Daisuke Takata and Wei Guo
Remote Sens. 2024, 16(9), 1570; https://doi.org/10.3390/rs16091570 - 28 Apr 2024
Cited by 2 | Viewed by 1908
Abstract
This article addresses the challenges of measuring 3D architecture traits, such as height and volume, of fruit tree canopies; this information is essential for assessing tree growth and informing orchard management. Traditional methods are time-consuming, prompting the need for efficient alternatives. Recent advancements in unmanned aerial vehicle (UAV) technology, particularly using Light Detection and Ranging (LiDAR) and RGB cameras, have emerged as promising solutions. LiDAR offers precise 3D data but is costly and computationally intensive. RGB photogrammetry techniques such as Structure from Motion and Multi-View Stereo (SfM-MVS) can be a cost-effective alternative to LiDAR, but their computational demands remain high. This paper introduces an innovative approach using UAV-based single-lens stereoscopic photography to overcome these limitations. The method utilizes color variations in canopies and a dual-image-input network to generate a detailed canopy height map (CHM). Additionally, a block structure similarity method is presented to enhance height estimation accuracy in single-lens UAV photography. As a result, the average rates of growth in canopy height (CH), canopy volume (CV), canopy width (CW), and canopy projection area (CPA) were 3.296%, 9.067%, 2.772%, and 5.541%, respectively. The r² values of CH, CV, CW, and CPA were 0.9039, 0.9081, 0.9228, and 0.9303, respectively. In addition, compared to the commonly used SfM-MVS approach, the proposed method reduces the time cost of canopy reconstruction by 95.2% and the number of images needed for canopy reconstruction by 88.2%. This approach allows growers and researchers to utilize UAV-based approaches in actual orchard environments without incurring high computation costs. Full article

13 pages, 2472 KiB  
Technical Note
DBH Estimation for Individual Tree: Two-Dimensional Images or Three-Dimensional Point Clouds?
by Zhihui Mao, Zhuo Lu, Yanjie Wu and Lei Deng
Remote Sens. 2023, 15(16), 4116; https://doi.org/10.3390/rs15164116 - 21 Aug 2023
Cited by 7 | Viewed by 3488
Abstract
Accurate forest parameters are crucial for ecological protection, forest resource management, and sustainable development. Rapidly developing remote sensing methods can retrieve parameters such as the leaf area index, cluster index, diameter at breast height (DBH), and tree height at different scales (e.g., plots and stands). Although some LiDAR satellites, such as GEDI and ICESat-2, can measure the average tree height in a certain area, there is still a lack of effective means for obtaining individual tree parameters from high-resolution satellite data, especially DBH. The objective of this study is to explore the capability of 2D image-based features (texture and spectrum) in estimating the DBH of individual trees. First, we acquired unmanned aerial vehicle (UAV) LiDAR point cloud data and UAV RGB imagery, from which digital aerial photography (DAP) point cloud data were generated using the structure-from-motion (SfM) method. Next, we performed individual tree segmentation and extracted the individual tree crown boundaries using the DAP and LiDAR point cloud data, respectively. Subsequently, eight 2D image-based textural and spectral metrics and 3D point-cloud-based metrics (tree height and crown diameters) were extracted from the tree crown boundaries of each tree. Then, the correlation coefficients between each metric and the reference DBH were calculated. Finally, the capabilities of these metrics and different models, including multiple linear regression (MLR), random forest (RF), and support vector machine (SVM), in DBH estimation were quantitatively evaluated and compared. The results showed that: (1) the 2D image-based textural metrics had the strongest correlation with DBH; among them, the strongest correlation coefficient of −0.582 was observed between the dissimilarity and variance metrics and DBH. When using textural metrics alone, the estimated DBH accuracy was the highest, with an RMSE of only 0.032 and an RMSE% of 16.879% using the MLR model; (2) simply feeding multiple features, such as textural, spectral, and structural metrics, into the machine learning models did not lead to optimal results in individual tree DBH estimation; on the contrary, it could even reduce the accuracy. In general, this study indicates that 2D image-based textural metrics have great potential in individual tree DBH estimation, which could help improve the capability to efficiently and meticulously monitor and manage forests on a large scale. Full article
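The dissimilarity and variance metrics singled out above are standard Haralick-style statistics of the gray-level co-occurrence matrix (GLCM). A minimal sketch, with an illustrative offset and quantization (the study's actual window sizes, offsets, and gray levels are not specified here):

```python
def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to probabilities.
    `image` is a 2D list of integer gray levels in [0, levels)."""
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for i in range(rows):
        for j in range(cols):
            ni, nj = i + dy, j + dx
            if 0 <= ni < rows and 0 <= nj < cols:
                counts[image[i][j]][image[ni][nj]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def glcm_dissimilarity(p):
    """Sum of |i - j| weighted by co-occurrence probability."""
    return sum(p[i][j] * abs(i - j) for i in range(len(p)) for j in range(len(p)))

def glcm_variance(p):
    """Variance of gray levels weighted by co-occurrence probability."""
    mu = sum(p[i][j] * i for i in range(len(p)) for j in range(len(p)))
    return sum(p[i][j] * (i - mu) ** 2 for i in range(len(p)) for j in range(len(p)))
```

A uniform crown patch gives zero dissimilarity; strongly textured patches give high values, which is why these metrics can track trunk size indirectly through crown texture.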

15 pages, 1796 KiB  
Article
A 256 × 256 LiDAR Imaging System Based on a 200 mW SPAD-Based SoC with Microlens Array and Lightweight RGB-Guided Depth Completion Neural Network
by Jier Wang, Jie Li, Yifan Wu, Hengwei Yu, Lebei Cui, Miao Sun and Patrick Yin Chiang
Sensors 2023, 23(15), 6927; https://doi.org/10.3390/s23156927 - 3 Aug 2023
Cited by 4 | Viewed by 4821
Abstract
Light detection and ranging (LiDAR) technology, a cutting-edge advancement in mobile applications, presents a myriad of compelling use cases, including enhancing low-light photography, capturing and sharing 3D images of fascinating objects, and elevating the overall augmented reality (AR) experience. However, its widespread adoption has been hindered by the prohibitive costs and substantial power consumption associated with its implementation in mobile devices. To surmount these obstacles, this paper proposes a low-power, low-cost, single-photon avalanche diode (SPAD)-based system-on-chip (SoC) which packages microlens arrays (MLAs) and a lightweight RGB-guided sparse depth completion neural network for 3D LiDAR imaging. The proposed SoC integrates an 8 × 8 SPAD macropixel array with time-to-digital converters (TDCs) and a charge pump, fabricated using a 180 nm bipolar-CMOS-DMOS (BCD) process. Initially, the primary function of this SoC was limited to serving as a ranging sensor. A random MLA-based homogenizing diffuser efficiently transforms Gaussian beams into flat-topped beams with a 45° field of view (FOV), enabling flash projection at the transmitter. To further enhance resolution and broaden application possibilities, a lightweight neural network employing RGB-guided sparse depth completion is proposed, substantially expanding the image resolution from 8 × 8 to quarter video graphics array (QVGA; 256 × 256) level. Experimental results demonstrate the effectiveness and stability of the hardware, encompassing the SoC and optical system, as well as the lightweight design and accuracy of the neural network. The state-of-the-art SoC-neural network solution offers a promising and inspiring foundation for developing consumer-level 3D imaging applications on mobile devices. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
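The idea behind RGB-guided sparse depth completion can be sketched with a classical cross-bilateral fill rather than the paper's neural network: each missing depth pixel takes a weighted average of the sparse LiDAR samples, with weights combining spatial distance and RGB similarity to the guide image. All names and parameters below are illustrative assumptions:

```python
import math

def rgb_guided_fill(depth, rgb, sigma_s=4.0, sigma_c=25.0):
    """Densify a sparse depth map (0 = missing) with a cross-bilateral fill.

    Each missing pixel becomes a weighted mean of the known depth samples;
    weights fall off with spatial distance and with RGB dissimilarity,
    so depth edges tend to follow color edges in the guide image.
    """
    h, w = len(depth), len(depth[0])
    known = [(i, j, depth[i][j]) for i in range(h) for j in range(w) if depth[i][j] > 0]
    out = [row[:] for row in depth]
    for i in range(h):
        for j in range(w):
            if depth[i][j] > 0:
                continue  # keep measured samples untouched
            num = den = 0.0
            for ki, kj, d in known:
                ds = (ki - i) ** 2 + (kj - j) ** 2
                dc = sum((a - b) ** 2 for a, b in zip(rgb[i][j], rgb[ki][kj]))
                wgt = math.exp(-ds / (2 * sigma_s ** 2) - dc / (2 * sigma_c ** 2))
                num += wgt * d
                den += wgt
            out[i][j] = num / den if den > 0 else 0.0
    return out
```

A learned network replaces these hand-set Gaussian weights with trained filters, but the guidance principle is the same.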

15 pages, 875 KiB  
Article
Segmentation of Acne Vulgaris Images Techniques: A Comparative and Technical Study
by María Moncho-Santonja, Silvia Aparisi-Navarro, Beatriz Defez and Guillermo Peris-Fajarnés
Appl. Sci. 2023, 13(10), 6157; https://doi.org/10.3390/app13106157 - 17 May 2023
Cited by 1 | Viewed by 2403
Abstract
Background: Acne vulgaris is the most common dermatological pathology worldwide. The methodologies currently used to evaluate and monitor acne have been analyzed in several studies, which highlight important limitations that image processing methods can concretely address by performing segmentation on different acne vulgaris image modalities. These techniques reduce the costs of treatment and of acne severity grading, since they improve objectivity and are less time-consuming. Consequently, several studies proposing segmentation methodologies for images of acne patients have been published over the last decade. The aim of this work is to analyze the segmentation methods developed for acne vulgaris images to date, including an analysis of the processing techniques and image modalities used, as well as of the results. Results: Following the PRISMA statement and the PICO model, 27 studies were included in the systematic review and subsequently divided into two groups: those discussing methods based on classical image processing techniques, such as contrast adjustment and conversion of RGB images to other color spaces, and those discussing methods based on machine learning algorithms. Conclusions: Currently, there is no clear preference for either group of segmentation methods. Moreover, the lack of uniformity in how results are evaluated across studies makes comparing methods difficult. The preferred image modality for segmentation is conventional photography, which reveals a research gap in applying segmentation algorithms to other potentially useful acne vulgaris image modalities, such as fluorescence imaging. Full article
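The classical pipeline step mentioned in the abstract, converting RGB images to another color space and thresholding a channel, can be sketched with the standard library's `colorsys`. The channel choice and thresholds below are illustrative only, not taken from any reviewed study:

```python
import colorsys

def rgb_to_hsv_channels(pixels):
    """Convert a list of 8-bit RGB pixels to HSV tuples in [0, 1].

    Classical acne-segmentation methods often operate on one channel
    of such a converted space instead of raw RGB.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for r, g, b in pixels]

def threshold_channel(hsv_pixels, channel=1, lo=0.5, hi=1.0):
    """Binary mask: True where the chosen channel lies in [lo, hi]."""
    return [lo <= p[channel] <= hi for p in hsv_pixels]
```

Thresholding saturation, for example, separates strongly colored (e.g., erythematous) pixels from near-gray skin.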

16 pages, 5911 KiB  
Article
Acid Mine Drainage Discrimination Using Very High Resolution Imagery Obtained by Unmanned Aerial Vehicle in a Stone Coal Mining Area
by Xiaomei Kou, Dianchao Han, Yongxiang Cao, Haixing Shang, Houfeng Li, Xin Zhang and Min Yang
Water 2023, 15(8), 1613; https://doi.org/10.3390/w15081613 - 20 Apr 2023
Cited by 7 | Viewed by 3472
Abstract
Mining of mineral resources exposes various minerals to oxidizing environments, especially sulfide minerals, which are dissolved by water after oxidation and make the water in the mine area acidic. Acid mine drainage (AMD) from mining can pollute surrounding rivers and lakes, causing serious ecological problems. Compared with traditional field surveys, unmanned aerial vehicle (UAV) technology has advantages in terms of real-time imagery, safety, and image accuracy. UAV technology can compensate for the shortcomings of traditional techniques in mine environmental surveys, effectively improve the efficiency of the work, and has gradually become one of the important means of mine environmental monitoring. In this study, a UAV aerial photography system equipped with a Red, Green, Blue (RGB) camera collected very-high-resolution images of the stone coal mining area in Ziyang County, northwest China. The images were classified by support vector machine (SVM), random forest (RF), and U-Net methods to detect the distribution of five types of land cover: AMD, roof, water, vegetation, and bare land. Finally, the accuracy of the recognition results was evaluated against the land-cover map using a confusion matrix. The recognition accuracy of AMD with the U-Net method was significantly better than that of the traditional SVM and RF machine-learning methods. The results showed that a UAV aerial photography system equipped with an RGB camera can be combined with a deep neural network algorithm for the effective detection of mine environmental problems. Full article
(This article belongs to the Special Issue Mine and Water)
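The accuracy assessment described in this abstract, comparing per-pixel predictions against a reference land-cover map via a confusion matrix, can be sketched in a few lines; the class labels below are illustrative:

```python
def confusion_matrix(y_true, y_pred, classes):
    """Build a confusion matrix: rows = reference classes, columns = predicted."""
    idx = {c: k for k, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def overall_accuracy(m):
    """Fraction of samples on the matrix diagonal (correctly classified)."""
    total = sum(sum(row) for row in m)
    correct = sum(m[k][k] for k in range(len(m)))
    return correct / total
```

Per-class producer's and user's accuracies follow the same pattern, dividing each diagonal entry by its row or column sum respectively.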
