Search Results (102)

Search Parameters:
Keywords = image differencing

29 pages, 31946 KB  
Article
Hail Damage Detection: Integrating Sentinel-2 Images with Weather Radar Hail Kinetic Energy
by Adrian Ursu, Vasilică Istrate, Vasile Jitariu and Ionuț-Lucian Lazăr
Remote Sens. 2025, 17(23), 3850; https://doi.org/10.3390/rs17233850 - 27 Nov 2025
Viewed by 715
Abstract
Hailstorms represent one of the most damaging convective hazards for agriculture, yet quantifying their impacts at a landscape scale remains challenging due to their localized and short-lived nature. In this study, we combine weather radar parameters and Sentinel-2 multispectral imagery to assess vegetation damage caused by two major hail events in northeastern Romania: Rădăuți (17 July 2016) and Dolhasca (30 July 2020). Radar-derived hail kinetic energy (HKE) was used as a rapid temporal indicator of hail occurrence, with a threshold of 300 J m⁻² applied to delineate potentially affected areas. Sentinel-2 Level-1C imagery, selected under strict temporal and cloud cover criteria, was processed to generate pre- and post-event Normalized Difference Vegetation Index (NDVI) maps, from which NDVI differences (ΔNDVI) were computed. Thresholds of 0.10 and 0.20 were applied to identify moderate and severe vegetation stress, respectively. The results demonstrate strong spatial correspondence between radar-derived HKE cores and Sentinel-2 ΔNDVI reductions. In Rădăuți, where only one post-event image was available, ΔNDVI thresholds identified between 2236 and 5856 ha of affected vegetation within the HKE > 300 J m⁻² zone. In Dolhasca, where three post-event images were available (5, 8, and 15 days), the analysis revealed 6200–9100 ha affected at 5 days, decreasing to 4800–7200 ha at 8 days, and further to 3100–5600 ha at 15 days post-event. This temporal gradient highlights both the recovery of vegetation and the diminishing sensitivity of the ΔNDVI signal with increasing time elapsed since the event. Analysis by land use classes showed arable fields to be the most sensitive, followed by orchards and pastures, while forests exhibited smaller but persistent declines. This study demonstrates the robustness of integrating radar-derived hail kinetic energy with Sentinel-2 NDVI differencing for the spatiotemporal assessment of hail damage.
The approach provides both rapid detection and temporally resolved mapping of hail damage, underlining the critical role of time as a determining factor in impact assessments. These findings have strong implications for operational crop monitoring, disaster response, and risk management in hail-prone regions. Full article
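The abstract's core operation — differencing pre- and post-event NDVI and thresholding the drop at 0.10 and 0.20 — can be sketched in a few lines. This is a minimal numpy illustration, not the authors' pipeline: band handling, cloud masking, and the radar HKE mask are omitted, and the function names are ours.

```python
import numpy as np

def hail_damage_masks(ndvi_pre, ndvi_post, moderate=0.10, severe=0.20):
    """Classify vegetation stress from the pre/post NDVI drop (delta-NDVI).

    Returns two boolean masks: moderate stress (drop in [0.10, 0.20))
    and severe stress (drop >= 0.20), the thresholds used in the study.
    """
    d = ndvi_pre - ndvi_post            # positive where vegetation declined
    return (d >= moderate) & (d < severe), d >= severe

# toy 2x2 scene: one moderately and one severely damaged pixel
pre = np.array([[0.80, 0.80], [0.80, 0.80]])
post = np.array([[0.68, 0.50], [0.80, 0.79]])
mod, sev = hail_damage_masks(pre, post)
```

In a real workflow the NDVI rasters would come from the red and NIR Sentinel-2 bands and the masks would be intersected with the HKE > 300 J m⁻² zone before computing affected hectares.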

27 pages, 4269 KB  
Article
Image Processing Algorithms Analysis for Roadside Wild Animal Detection
by Mindaugas Knyva, Darius Gailius, Šarūnas Kilius, Aistė Kukanauskaitė, Pranas Kuzas, Gintautas Balčiūnas, Asta Meškuotienė and Justina Dobilienė
Sensors 2025, 25(18), 5876; https://doi.org/10.3390/s25185876 - 19 Sep 2025
Cited by 1 | Viewed by 1214
Abstract
The study presents a comparative analysis of five distinct image processing methodologies for roadside wild animal detection using thermal imagery, aiming to identify an optimal approach for embedded system implementation to mitigate wildlife–vehicle collisions. The evaluated techniques included the following: bilateral filtering followed by thresholding and SIFT feature matching; Gaussian filtering combined with Canny edge detection and contour analysis; color quantization via the nearest average algorithm followed by contour identification; motion detection based on absolute inter-frame differencing, object dilation, thresholding, and contour comparison; and animal detection based on a YOLOv8n neural network. These algorithms were applied to sequential thermal images captured by a custom roadside surveillance system incorporating a thermal camera and a Raspberry Pi processing unit. Performance evaluation utilized a dataset of consecutive frames, assessing average execution time, sensitivity, specificity, and accuracy. The results revealed performance trade-offs: the motion detection method achieved the highest sensitivity (92.31%) and overall accuracy (87.50%), critical for minimizing missed detections, despite exhibiting nearly the lowest specificity (66.67%) and a moderate execution time (0.126 s) compared to the fastest bilateral filter approach (0.093 s) and the high-specificity Canny edge method (90.00%). Consequently, considering the paramount importance of detection reliability (sensitivity and accuracy) in this application, the motion-based methodology was selected for further development and implementation within the target embedded system framework. Subsequent testing on diverse datasets validated its general robustness while highlighting potential performance variations depending on dataset characteristics, particularly the duration of animal presence within the monitored frame. Full article
(This article belongs to the Special Issue Energy Harvesting and Machine Learning in IoT Sensors)
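The motion-detection pipeline the authors selected — absolute inter-frame differencing, thresholding, and dilation — is straightforward to sketch. The version below is a plain-numpy approximation, not the paper's embedded implementation; the wrap-around dilation via `np.roll` and the threshold value are simplifications of ours.

```python
import numpy as np

def dilate(mask, r=1):
    """Binary dilation with a (2r+1)x(2r+1) square element, plain numpy.
    Note: np.roll wraps at the borders -- fine for this toy example."""
    out = mask.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def motion_mask(prev, curr, thresh=25):
    """Absolute inter-frame difference -> threshold -> dilation."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return dilate(diff > thresh)

prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2, 2] = 200                        # a warm blob appears between frames
mask = motion_mask(prev, curr)
```

The real pipeline would follow this with contour extraction and comparison against the previous frame's contours.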

25 pages, 10818 KB  
Article
From Detection to Motion-Based Classification: A Two-Stage Approach for T. cruzi Identification in Video Sequences
by Kenza Chenni, Carlos Brito-Loeza, Cefa Karabağ and Lavdie Rada
J. Imaging 2025, 11(9), 315; https://doi.org/10.3390/jimaging11090315 - 14 Sep 2025
Viewed by 1056
Abstract
Chagas disease, caused by Trypanosoma cruzi (T. cruzi), remains a significant public health challenge in Latin America. Traditional diagnostic methods relying on manual microscopy suffer from low sensitivity, subjective interpretation, and poor performance in suboptimal conditions. This study presents a novel computer vision framework integrating motion analysis with deep learning for automated T. cruzi detection in microscopic videos. Our motion-based detection pipeline leverages parasite motility as a key discriminative feature, employing frame differencing, morphological processing, and DBSCAN clustering across 23 microscopic videos. This approach effectively addresses limitations of static image analysis in challenging conditions including noisy backgrounds, uneven illumination, and low contrast. From motion-identified regions, 64×64 patches were extracted for classification. MobileNetV2 achieved superior performance with 99.63% accuracy, 100% precision, 99.12% recall, and an AUC-ROC of 1.0. Additionally, YOLOv5 and YOLOv8 models (Nano, Small, Medium variants) were trained on 43 annotated videos, with YOLOv5-Nano and YOLOv8-Nano demonstrating excellent detection capability on unseen test data. This dual-stage framework offers a practical, computationally efficient solution for automated Chagas diagnosis, particularly valuable for resource-constrained laboratories with poor imaging quality. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
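As a small illustration of the patch-extraction step ("64×64 patches were extracted" from motion-identified regions), here is a hedged numpy sketch that crops a fixed-size patch around a detected centroid, clamping at the frame border. The function name and the clamping policy are our assumptions, not the authors' code.

```python
import numpy as np

def extract_patch(frame, cy, cx, size=64):
    """Crop a size x size patch centred on (cy, cx), clamped so the
    patch always lies fully inside the frame (as a classifier input)."""
    h, w = frame.shape[:2]
    half = size // 2
    y0 = min(max(cy - half, 0), h - size)
    x0 = min(max(cx - half, 0), w - size)
    return frame[y0:y0 + size, x0:x0 + size]

frame = np.arange(128 * 128).reshape(128, 128)
patch = extract_patch(frame, 10, 120)   # centroid near the top-right corner
```

In the paper's pipeline, such patches would feed the MobileNetV2 classifier after frame differencing and DBSCAN clustering identify moving parasites.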

15 pages, 6241 KB  
Article
Enhanced Cerebrovascular Extraction Using Vessel-Specific Preprocessing of Time-Series Digital Subtraction Angiography
by Taehun Hong, Seonyoung Hong, Eonju Do, Hyewon Ko, Kyuseok Kim and Youngjin Lee
Photonics 2025, 12(9), 852; https://doi.org/10.3390/photonics12090852 - 25 Aug 2025
Viewed by 1331
Abstract
Accurate cerebral vasculature segmentation using digital subtraction angiography (DSA) is critical for diagnosing and treating cerebrovascular diseases. However, conventional single-frame analysis methods often fail to capture fine vascular structures due to background noise, overlapping anatomy, and dynamic contrast flow. In this study, we propose a novel vessel-enhancing preprocessing technique using temporal differencing of DSA sequences to improve cerebrovascular segmentation accuracy. Our method emphasizes contrast flow dynamics while suppressing static background components by computing absolute differences between sequential DSA frames. The enhanced images were input into state-of-the-art deep learning models, U-Net++ and DeepLabv3+, for vascular segmentation. Quantitative evaluation of the publicly available DIAS dataset demonstrated significant segmentation improvements across multiple metrics, including the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Vascular Connectivity (VC). Particularly, DeepLabv3+ with the proposed preprocessing achieved a DSC of 0.83 ± 0.05 and VC of 44.65 ± 0.63, outperforming conventional methods. These results suggest that leveraging temporal information via input enhancement substantially improves small and complex vascular structure extraction. Our approach is computationally efficient, model-agnostic, and clinically applicable for DSA. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Optics and Biophotonics)
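The proposed preprocessing — absolute differences between sequential DSA frames to suppress static background — can be sketched directly. Below is a minimal numpy version of that idea; collapsing the difference stack with a max projection is our illustrative choice, not necessarily the paper's exact enhancement.

```python
import numpy as np

def temporal_difference_stack(frames):
    """Absolute differences between consecutive DSA frames.
    Moving contrast shows up brightly; static anatomy cancels out."""
    frames = frames.astype(float)
    return np.abs(np.diff(frames, axis=0))

def vessel_enhanced(frames):
    """Collapse the difference stack into one image with a max projection."""
    return temporal_difference_stack(frames).max(axis=0)

# 3 frames: static background of 100, a contrast bolus moving pixel to pixel
f = np.full((3, 4, 4), 100.0)
f[1, 1, 1] = 180.0
f[2, 2, 2] = 180.0
enh = vessel_enhanced(f)
```

The enhanced image would then be fed to the segmentation network (U-Net++ or DeepLabv3+) in place of a single raw frame.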

24 pages, 4396 KB  
Article
Study of the Characteristics of a Co-Seismic Displacement Field Based on High-Resolution Stereo Imagery: A Case Study of the 2024 MS7.1 Wushi Earthquake, Xinjiang
by Chenyu Ma, Zhanyu Wei, Li Qian, Tao Li, Chenglong Li, Xi Xi, Yating Deng and Shuang Geng
Remote Sens. 2025, 17(15), 2625; https://doi.org/10.3390/rs17152625 - 29 Jul 2025
Cited by 1 | Viewed by 905
Abstract
The precise characterization of surface rupture zones and associated co-seismic displacement fields from large earthquakes provides critical insights into seismic rupture mechanisms, earthquake dynamics, and hazard assessments. Stereo-photogrammetric digital elevation models (DEMs), produced from high-resolution satellite stereo imagery, offer reliable global datasets that are suitable for the detailed extraction and quantification of vertical co-seismic displacements. In this study, we utilized pre- and post-event WorldView-2 stereo images of the 2024 Ms7.1 Wushi earthquake in Xinjiang to generate DEMs with a spatial resolution of 0.5 m and corresponding terrain point clouds with an average density of approximately 4 points/m². Subsequently, we applied the Iterative Closest Point (ICP) algorithm to perform differencing analysis on these datasets. Special care was taken to reduce influences from terrain changes such as vegetation growth and anthropogenic structures. Ultimately, by maintaining sufficient spatial detail, we obtained a three-dimensional co-seismic displacement field with a resolution of 15 m within grid cells measuring 30 m near the fault trace. The results indicate a clear vertical displacement distribution pattern along the causative sinistral–thrust fault, exhibiting alternating uplift and subsidence zones that follow a characteristic “high-in-center and low-at-ends” profile, along with localized peak displacement clusters. Vertical displacements range from approximately 0.2 to 1.4 m, with a maximum displacement of ~1.46 m located in the piedmont region north of the Qialemati River, near the transition between alluvial fan deposits and bedrock. Horizontal displacement components in the east-west and north-south directions are negligible, consistent with focal mechanism solutions and surface rupture observations from field investigations.
The successful extraction of this high-resolution vertical displacement field validates the efficacy of satellite-based high-resolution stereo-imaging methods for overcoming the limitations of GNSS and InSAR techniques in characterizing near-field surface displacements associated with earthquake ruptures. Moreover, this dataset provides robust constraints for investigating fault-slip mechanisms within near-surface geological contexts. Full article
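While the study differenced terrain point clouds with ICP, the underlying idea of a vertical co-seismic displacement field can be illustrated with plain grid differencing of co-registered DEMs plus coarse-cell aggregation. The sketch below is ours and far simpler than the ICP workflow; the block-median step merely stands in for the paper's suppression of vegetation and building noise.

```python
import numpy as np

def vertical_displacement(dem_pre, dem_post):
    """Per-cell elevation change between co-registered pre/post DEMs."""
    return dem_post.astype(float) - dem_pre.astype(float)

def block_median(d, cell=2):
    """Aggregate the difference grid into coarser cells with a median,
    damping spurious change from vegetation growth and structures."""
    h, w = d.shape
    d = d[:h - h % cell, :w - w % cell]
    blocks = d.reshape(h // cell, cell, w // cell, cell).swapaxes(1, 2)
    return np.median(blocks.reshape(h // cell, w // cell, -1), axis=-1)

pre = np.zeros((4, 4))
post = np.zeros((4, 4))
post[:2, :] = 1.0                       # 1 m of uplift across the top half
uplift = block_median(vertical_displacement(pre, post))
```

The paper's 30 m grid cells play the role of `cell` here, trading spatial detail for robustness.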

21 pages, 3293 KB  
Article
A Fusion of Entropy-Enhanced Image Processing and Improved YOLOv8 for Smoke Recognition in Mine Fires
by Xiaowei Li and Yi Liu
Entropy 2025, 27(8), 791; https://doi.org/10.3390/e27080791 - 25 Jul 2025
Cited by 1 | Viewed by 820
Abstract
Smoke appears earlier than flames, so image-based fire monitoring techniques mainly focus on the detection of smoke, which is regarded as one of the effective strategies for preventing the spread of initial fires that eventually evolve into serious fires. Smoke monitoring in mine fires faces serious challenges: the underground environment is complex, with smoke and backgrounds being highly integrated and visual features being blurred, which makes it difficult for existing image-based monitoring techniques to meet the actual needs in terms of accuracy and robustness. The conventional ground-based methods are directly used in the underground with a high rate of missed detection and false detection. Aiming at the core problems of mixed target and background information and high boundary uncertainty in smoke images, this paper, inspired by the principle of information entropy, proposes a method for recognizing smoke from mine fires by integrating entropy-enhanced image processing and improved YOLOv8. Firstly, according to the entropy change characteristics of spatio-temporal information brought by smoke diffusion movement, based on spatio-temporal entropy separation, an equidistant frame image differential fusion method is proposed, which effectively suppresses the low entropy background noise, enhances the detail clarity of the high entropy smoke region, and significantly improves the image signal-to-noise ratio. Further, in order to cope with the variable scale and complex texture (high information entropy) of the smoke target, an improvement mechanism based on entropy-constrained feature focusing is introduced on the basis of the YOLOv8m model, so as to more effectively capture and distinguish the rich detailed features and uncertain information of the smoke region, realizing the balanced and accurate detection of large and small smoke targets. 
The experiments show that the comprehensive performance of the proposed method is significantly better than the baseline model and similar algorithms, and it can meet the demands of real-time detection. Compared with YOLOv9m, YOLOv10n, and YOLOv11n, although there is a decrease in inference speed, the accuracy, recall, average detection accuracy mAP (50), and mAP (50–95) performance metrics are all substantially improved. The precision and robustness of smoke recognition in complex mine scenarios are effectively improved. Full article
(This article belongs to the Section Multidisciplinary Applications)
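The "equidistant frame image differential fusion" idea — differencing frames a fixed stride apart and fusing the results — might be sketched as follows. This numpy toy is our reading of the abstract, not the authors' implementation; the stride and the mean fusion are illustrative.

```python
import numpy as np

def equidistant_diff_fusion(frames, step=5):
    """Average the absolute differences of frame pairs a fixed stride apart.
    Slowly drifting smoke accumulates signal across pairs, while static
    (low-entropy) background cancels in every difference."""
    frames = frames.astype(float)
    diffs = [np.abs(frames[i + step] - frames[i])
             for i in range(0, len(frames) - step, step)]
    return np.mean(diffs, axis=0)

frames = np.zeros((11, 3, 3))
frames[5:, 1, 1] = 50.0                 # smoke brightens one pixel from frame 5 on
fused = equidistant_diff_fusion(frames)
```

The fused image would then go to the improved YOLOv8m detector rather than a raw frame.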

20 pages, 5975 KB  
Article
Fast Tongue Detection Based on Lightweight Model and Deep Feature Propagation
by Keju Chen, Yun Zhang, Li Zhong and Yongguo Liu
Electronics 2025, 14(7), 1457; https://doi.org/10.3390/electronics14071457 - 3 Apr 2025
Viewed by 1425
Abstract
While existing tongue detection methods have achieved good accuracy, the problems of low detection speed and excessive noise in the background area still exist. To address these problems, a fast tongue detection model based on a lightweight model and deep feature propagation (TD-DFP) is proposed. Firstly, a color channel is added to the RGB tongue image to introduce more prominent tongue features. To reduce the computational complexity, keyframes are selected through inter-frame differencing, while optical flow maps are used to achieve feature alignment between non-keyframes and keyframes. Secondly, a convolutional neural network with feature pyramid structures is designed to extract multi-scale features, and object detection heads based on depth-wise convolutions are adopted to achieve real-time tongue region detection. In addition, a knowledge distillation module is introduced to improve training performance during the training phase. TD-DFP achieved a mean average precision (mAP) of 82.8% and 61.88 frames per second (FPS) on the tongue dataset. The experimental results indicate that TD-DFP achieves efficient, accurate, and real-time tongue detection. Full article
(This article belongs to the Special Issue Mechanism and Modeling of Graph Convolutional Networks)
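Keyframe selection through inter-frame differencing, as used by TD-DFP to cut computation, can be sketched with a running mean-absolute-difference test: a frame becomes a new keyframe only when it differs enough from the last one. A minimal version, with an assumed threshold and no optical-flow alignment:

```python
import numpy as np

def select_keyframes(frames, thresh=10.0):
    """Mark a frame as a keyframe when its mean absolute difference
    from the most recent keyframe exceeds `thresh`; frame 0 always counts."""
    keys = [0]
    ref = frames[0].astype(float)
    for i in range(1, len(frames)):
        f = frames[i].astype(float)
        if np.abs(f - ref).mean() > thresh:
            keys.append(i)
            ref = f                     # new reference for later frames
    return keys

frames = np.zeros((4, 2, 2))
frames[2] += 100.0                      # big scene change at frame 2
frames[3] += 100.0                      # frame 3 is similar to the new keyframe
keys = select_keyframes(frames)
```

Non-keyframes (1 and 3 here) would reuse propagated deep features instead of a full forward pass.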

26 pages, 7687 KB  
Article
A Comparative Study Between Gaofen-1 WFV and Sentinel MSI Imagery for Fire Severity Assessment in a Karst Region, China
by Yao Liao, Yun Liu, Juan Yang, Huixuan Li, Yue Shi, Xue Li, Feng Hu, Jinlong Fan and Zhong Zheng
Forests 2025, 16(4), 597; https://doi.org/10.3390/f16040597 - 28 Mar 2025
Viewed by 696
Abstract
Wildfires frequently influence fragile karst forest ecosystems in southwestern China. We evaluated the potential of Gaofen Wide Field of View (WFV) imagery for assessing the fire severity of karst forest fires. Comparison with Sentinel Multispectral Imager (MSI) imagery was conducted using 19 spectral indices. The highest correlation for Sentinel-2 MSI is 0.634, while for Gaofen-1 WFV it is 0.583. This is not a significant difference. The burned area index, differenced burned area index, and relative differenced modified soil adjusted vegetation index were the highest-performing indices for the Gaofen-1 WFV, while the normalized burn ratio plus, differenced normalized differential vegetation index, and relative differenced normalized differential vegetation index were the best for the Sentinel MSI. The total accuracy evaluation of the fire severity assessment for Gaofen-1 WFV ranged from 40 to 44% and that for Sentinel MSI ranged from 40 to 48%. The difference in accuracy between the two satellites was less than 10%. The RMSE values for all six models were close to 0.6, ranging from 0.58 to 0.67. The fire severity maps derived from both imagery sources exhibited overall similar spatial patterns, but the Sentinel-2 MSI maps are obviously finer. These maps matched well with the unmanned aerial vehicle (UAV) images, particularly at high and unburned severity levels. The results of this study revealed that the performance of the Gaofen WFV imagery was close to that of Sentinel MSI imagery, which makes it an effective data source for fire severity assessment in this region. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
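For readers unfamiliar with the differenced indices compared here, NBR and dNBR reduce to simple band arithmetic. A hedged numpy sketch of the standard definitions (reflectance inputs assumed in [0, 1]; the study's 19 indices include many variants not shown):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    nir, swir = nir.astype(float), swir.astype(float)
    return (nir - swir) / np.maximum(nir + swir, 1e-9)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: larger positive values indicate more severe burning."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# healthy vegetation before, burned after
d = dnbr(np.array([0.5]), np.array([0.1]),
         np.array([0.2]), np.array([0.4]))
```

Severity classes are then obtained by thresholding `d`, which is where the accuracy trade-offs discussed in the abstract arise.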

20 pages, 42010 KB  
Article
Coastline and Riverbed Change Detection in the Broader Area of the City of Patras Using Very High-Resolution Multi-Temporal Imagery
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2025, 14(6), 1096; https://doi.org/10.3390/electronics14061096 - 11 Mar 2025
Viewed by 1272
Abstract
Accurate and robust information on land cover changes in urban and coastal areas is essential for effective urban land management, ecosystem monitoring, and urban planning. This paper details the methodology and results of a pixel-level classification and change detection analysis, leveraging 1945 Royal Air Force (RAF) aerial imagery and 2011 Very High-Resolution (VHR) multispectral WorldView-2 satellite imagery from the broader area of Patras, Greece. Our attention is mainly focused on the changes in the coastline from the city of Patras to the northeast direction and the two major rivers, Charadros and Selemnos. The methodology involves preprocessing steps such as registration, denoising, and resolution adjustments to ensure computational feasibility for both coastal and riverbed change detection procedures while maintaining critical spatial features. For change detection at coastal areas over time, the Normalized Difference Water Index (NDWI) was applied to the new imagery to mask out the sea from the coastline and manually archive imagery from 1945. To determine the differences in the coastline between 1945 and 2011, we perform image differencing by subtracting the 1945 image from the 2011 image. This highlights the areas where changes have occurred over time. To conduct riverbed change detection, feature extraction using the Gray-Level Co-occurrence Matrix (GLCM) was applied to capture spatial characteristics. A Support Vector Machine (SVM) classification model was trained to distinguish river pixels from non-river pixels, enabling the identification of changes in riverbeds and achieving 92.6% and 92.5% accuracy for new and old imagery, respectively. Post-classification processing included classification maps to enhance the visualization of the detected changes. This approach highlights the potential of combining historical and modern imagery with supervised machine learning methods to effectively assess coastal erosion and riverbed alterations. Full article
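The coastline workflow — NDWI to separate sea from land, then differencing the two binary land masks — can be sketched compactly. This is our illustration of the stated steps, using McFeeters' green/NIR NDWI; the band choice and the zero threshold are assumptions, and the 1945 imagery was masked manually in the paper.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: high over water, low over land."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / np.maximum(green + nir, 1e-9)

def coastline_change(land_old, land_new):
    """Differenced binary land masks: +1 accretion, -1 erosion, 0 unchanged."""
    return land_new.astype(int) - land_old.astype(int)

g = np.array([[0.3, 0.6], [0.3, 0.3]])
n = np.array([[0.6, 0.2], [0.6, 0.6]])
land = ndwi(g, n) < 0.0                 # NDWI below 0 -> treat as land
old_land = np.array([[True, True], [True, False]])
change = coastline_change(old_land, land)
```

Cells at −1 mark coastline retreat between the two dates; cells at +1 mark new land.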

28 pages, 28459 KB  
Article
Multi-Temporal Remote Sensing Satellite Data Analysis for the 2023 Devastating Flood in Derna, Northern Libya
by Roman Shults, Ashraf Farahat, Muhammad Usman and Md Masudur Rahman
Remote Sens. 2025, 17(4), 616; https://doi.org/10.3390/rs17040616 - 11 Feb 2025
Viewed by 3696
Abstract
Floods are considered to be among the most dangerous and destructive geohazards, leading to human victims and severe economic outcomes. Yearly, many regions around the world suffer from devastating floods. The estimation of flood aftermaths is one of the high priorities for the global community. One such flood took place in northern Libya in September 2023. The presented study is aimed at evaluating the flood aftermath for Derna city, Libya, using high-resolution GEOEYE-1 and Sentinel-2 satellite imagery in the Google Earth Engine environment. The primary task is obtaining and analyzing data that provide high accuracy and detail for the study region. The main objective of the study is to explore the capabilities of different algorithms and remote sensing datasets for quantitative change estimation after the flood. Different supervised classification methods were examined, including random forest, support vector machine, naïve Bayes, and classification and regression tree (CART). The various sets of hyperparameters for classification were considered. The high-resolution GEOEYE-1 images were used for precise change detection using image differencing (pixel-to-pixel comparison and geographic object-based image analysis (GEOBIA) for extracting buildings), whereas Sentinel-2 data were employed for the classification and further change detection by classified images. Object-based image analysis (OBIA) was also performed for the extraction of building footprints using very high-resolution GEOEYE images for the quantification of buildings that collapsed due to the flood. The first stage of the study was the development of a workflow for data analysis. This workflow includes three parallel processes of data analysis. High-resolution GEOEYE-1 images of Derna city were investigated for change detection algorithms.
In addition, different indices (normalized difference vegetation index (NDVI), soil adjusted vegetation index (SAVI), transformed NDVI (TNDVI), and normalized difference moisture index (NDMI)) were calculated to facilitate the recognition of damaged regions. In the final stage, the analysis results were fused to obtain the damage estimation for the studied region. As the main output, the area changes for the primary classes and the maps that portray these changes were obtained. The recommendations for data usage and further processing in Google Earth Engine were developed. Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
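The four indices named in the abstract are all simple band arithmetic; a hedged numpy sketch of their usual definitions (reflectance inputs assumed, with the conventional +0.5 shift inside TNDVI's square root — the paper's exact formulations may differ):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    return (nir - red) * (1 + L) / np.maximum(nir + red + L, 1e-9)

def tndvi(nir, red):
    """Transformed NDVI; clipping keeps the radicand non-negative."""
    return np.sqrt(np.clip(ndvi(nir, red) + 0.5, 0.0, None))

def ndmi(nir, swir):
    """Normalized Difference Moisture Index (NIR vs. SWIR)."""
    return (nir - swir) / np.maximum(nir + swir, 1e-9)

nir, red, swir = np.array([0.6]), np.array([0.2]), np.array([0.3])
```

Differencing these index maps between pre- and post-flood dates highlights vegetation and moisture change in the damaged areas.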

19 pages, 7885 KB  
Article
An Improved Method for Human Activity Detection with High-Resolution Images by Fusing Pooling Enhancement and Multi-Task Learning
by Haoji Li, Shilong Ren, Lei Fang, Jinyue Chen, Xinfeng Wang, Guoqiang Wang, Qingzhu Zhang and Qiao Wang
Remote Sens. 2025, 17(1), 159; https://doi.org/10.3390/rs17010159 - 5 Jan 2025
Viewed by 1601
Abstract
Deep learning has garnered increasing attention in human activity detection due to its advantages, such as not relying on expert knowledge and automatic feature extraction. However, the existing deep learning-based approaches are primarily confined to recognizing specific types of human activities, which hinders scientific decision-making and comprehensive environmental protection. Therefore, there is an urgent need to develop a deep learning model to address multiple-type human activity detection with finer-resolution images. In this study, we proposed a new multi-task learning model (named PE-MLNet) to simultaneously achieve change detection and land use classification in GF-6 bitemporal images. Meanwhile, we also designed a pooling enhancement module (PEM) to accurately capture multi-scale change details from the bitemporal feature maps through combining differencing and concatenating branches. An independent annotated dataset at the Yellow River Delta was used to examine the effectiveness of PE-MLNet. The results showed that PE-MLNet exhibited obvious improvements in both detection accuracy and detail handling compared with other existing methods. Further analysis uncovered that the areas of buildings, roads, and oil depots have obviously increased, while the farmland and wetland areas largely decreased over the five years, indicating an expansion of human activities and their increased impacts on natural environments. Full article
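The PEM's "combining differencing and concatenating branches" can be caricatured outside any deep learning framework: an absolute-difference branch stacked with the two raw feature maps along the channel axis. The sketch below is our schematic of that fusion only, not the module itself, which also involves pooling and learned layers.

```python
import numpy as np

def fuse_bitemporal(feat_a, feat_b):
    """Fuse bitemporal feature maps of shape (C, H, W) with a
    change-sensitive differencing branch and a concatenating branch,
    stacked along the channel axis."""
    diff = np.abs(feat_a - feat_b)      # where the two dates disagree
    return np.concatenate([diff, feat_a, feat_b], axis=0)

a = np.ones((4, 8, 8))
b = np.zeros((4, 8, 8))
fused = fuse_bitemporal(a, b)
```

In the real network, `fused` would pass through further convolutions feeding both the change-detection and land-use heads.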

18 pages, 6159 KB  
Article
Computer-Vision-Based Product Quality Inspection and Novel Counting System
by Changhyun Lee, Yunsik Kim and Hunkee Kim
Appl. Syst. Innov. 2024, 7(6), 127; https://doi.org/10.3390/asi7060127 - 18 Dec 2024
Cited by 2 | Viewed by 7480
Abstract
In this study, we aimed to enhance the accuracy of product quality inspection and counting in the manufacturing process by integrating image processing and human body detection algorithms. We employed the SIFT algorithm combined with traditional image comparison metrics such as SSIM, PSNR, and MSE to develop a defect detection system that is robust against variations in rotation and scale. Additionally, the YOLOv8 Pose algorithm was used to detect and correct errors in product counting caused by human interference on the load cell in real time. By applying the image differencing technique, we accurately calculated the unit weight of products and determined their total count. In our experiments conducted on products weighing over 1 kg, we achieved a high accuracy of 99.268%. The integration of our algorithms with the load-cell-based counting system demonstrates reliable real-time quality inspection and automated counting in manufacturing environments. Full article
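Two of the traditional comparison metrics mentioned, MSE and PSNR, are one-liners; a self-contained numpy sketch (SSIM is more involved and omitted, and this is not the authors' inspection code):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(a, b)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 116                       # one pixel off by 16 grey levels
```

In a defect-detection setting, a candidate product image scoring a low PSNR against a reference (after SIFT-based alignment) would be flagged for inspection.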

18 pages, 25764 KB  
Article
Evaluating Landsat- and Sentinel-2-Derived Burn Indices to Map Burn Scars in Chyulu Hills, Kenya
by Mary C. Henry and John K. Maingi
Fire 2024, 7(12), 472; https://doi.org/10.3390/fire7120472 - 11 Dec 2024
Cited by 6 | Viewed by 2924
Abstract
Chyulu Hills, Kenya, serves as one of the region’s water towers by supplying groundwater to surrounding streams and springs in southern Kenya. In a semiarid region, this water is crucial to the survival of local people, farms, and wildlife. The Chyulu Hills is also very prone to fires, and large areas of the range burn each year during the dry season. Currently, there are no detailed fire records or burn scar maps to track the burn history. Mapping burn scars using remote sensing is a cost-effective approach to monitor fire activity over time. However, it is not clear whether spectral burn indices developed elsewhere can be directly applied here when Chyulu Hills contains mostly grassland and bushland vegetation. Additionally, burn scars are usually no longer detectable after an intervening rainy season. In this study, we calculated the Differenced Normalized Burn Ratio (dNBR) and two versions of the Relative Differenced Normalized Burn Ratio (RdNBR) using Landsat Operational Land Imager (OLI) and Sentinel-2 MultiSpectral Instrument (MSI) data to determine which index, threshold values, instrument, and Sentinel near-infrared (NIR) band work best to map burn scars in Chyulu Hills, Kenya. The results indicate that the Relative Differenced Normalized Burn Ratio from Landsat OLI had the highest accuracy for mapping burn scars while also minimizing false positives (commission error). While mapping burn scars, it became clear that adjusting the threshold value for an index resulted in tradeoffs between false positives and false negatives. While none were perfect, this is an important consideration going forward. Given the length of the Landsat archive, there is potential to expand this work to additional years. Full article
(This article belongs to the Special Issue Fire in Savanna Landscapes, Volume II)
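The dNBR and RdNBR indices compared in the abstract above reduce to a few array operations on the NIR and SWIR bands. The following sketch (using NumPy, with RdNBR following the commonly cited Miller–Thode scaling; the paper's two RdNBR variants may differ in detail) illustrates the arithmetic:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

def dnbr(nbr_pre, nbr_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR.
    Positive values indicate a loss of healthy vegetation signal."""
    return np.asarray(nbr_pre, dtype=float) - np.asarray(nbr_post, dtype=float)

def rdnbr(nbr_pre, nbr_post):
    """Relative dNBR: dNBR scaled by sqrt(|pre-fire NBR|), which
    reduces the dependence of severity estimates on pre-fire biomass."""
    return dnbr(nbr_pre, nbr_post) / np.sqrt(np.abs(np.asarray(nbr_pre, dtype=float)))
```

For Landsat OLI the inputs would typically be bands 5 (NIR) and 7 (SWIR2); for Sentinel-2 MSI, band 8 or 8A (NIR) and band 12 (SWIR2) — the choice between bands 8 and 8A is exactly the Sentinel NIR-band question the study examines.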

29 pages, 4900 KB  
Article
Forest Fire Severity and Koala Habitat Recovery Assessment Using Pre- and Post-Burn Multitemporal Sentinel-2 MSI Data
by Derek Campbell Johnson, Sanjeev Kumar Srivastava and Alison Shapcott
Forests 2024, 15(11), 1991; https://doi.org/10.3390/f15111991 - 11 Nov 2024
Viewed by 2643
Abstract
Habitat loss due to wildfire is an increasing problem internationally for threatened animal species, particularly tree-dependent and arboreal animals. The koala (Phascolarctos cinereus) is endangered in most of its range, and large areas of forest were burnt by widespread wildfires in Australia in 2019/2020, mostly areas dominated by eucalypts, which provide koala habitats. We studied the impact of fire and three subsequent years of recovery on a property in South-East Queensland, Australia. A classified Differenced Normalised Burn Ratio (dNBR) calculated from pre- and post-burn Sentinel-2 scenes encompassing the local study area was used to assess regional impact of fire on koala-habitat forest types. The geometrically structured composite burn index (GeoCBI), a field-based assessment, was used to classify fire severity impact. To detect lower levels of forest recovery, a manual classification of the multitemporal dNBR was used, enabling the direct comparison of images between recovery years. In our regional study area, the most suitable koala habitat occupied only about 2%, and about 10% of that was burnt by wildfire. Of the five koala habitat forest types studied, one upland type was burnt more severely and extensively than the others but recovered vigorously after the first year, reaching the same extent of recovery as the other forest types. The two alluvial forest types showed a negligible fire impact, likely due to their sheltered locations. In the second year, all the impacted forest types studied showed further, almost equal, recovery. In the third year of recovery, there was almost no detectable change and therefore no more notable vegetative growth. Our field data revealed that the dNBR can probably only measure the general vegetation present and not tree recovery via epicormic shooting and coppicing. 
Eucalypt foliage growth is a critical resource for the koala, so field verification seems necessary unless more-accurate remote sensing methods such as hyperspectral imagery can be implemented. Full article
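The "classified dNBR" in the abstract above means binning continuous dNBR values into discrete severity classes. A minimal sketch of such a classification is shown below; the breakpoints are the widely used illustrative USGS/FIREMON ranges, not necessarily the manual class boundaries the authors chose for their multitemporal comparison:

```python
import numpy as np

# Illustrative dNBR severity breakpoints (USGS/FIREMON-style);
# a study-specific manual classification would use its own values.
BREAKS = [-0.1, 0.1, 0.27, 0.44, 0.66]
LABELS = ["enhanced regrowth", "unburned", "low severity",
          "moderate-low severity", "moderate-high severity", "high severity"]

def classify_dnbr(dnbr_values):
    """Map each dNBR value to a severity class index (0..5)
    according to the BREAKS thresholds."""
    return np.digitize(np.asarray(dnbr_values, dtype=float), BREAKS)
```

Applying the same class scheme to dNBR layers from successive recovery years makes the per-class areas directly comparable, which is the basis of the year-on-year recovery comparison described above.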

23 pages, 8080 KB  
Article
Forest Fire Burn Scar Mapping Based on Modified Image Super-Resolution Reconstruction via Sparse Representation
by Juan Zhang, Gui Zhang, Haizhou Xu, Rong Chu, Yongke Yang and Saizhuan Wang
Forests 2024, 15(11), 1959; https://doi.org/10.3390/f15111959 - 7 Nov 2024
Viewed by 2347
Abstract
It is of great significance to map forest fire burn scars for post-disaster management and assessment of forest fires. Satellites can be utilized to acquire imagery even in primitive forests with steep mountainous terrain. However, forest fire burn scar mapping extracted by the Burned Area Index (BAI), differenced Normalized Burn Ratio (dNBR), and Feature Extraction Rule-Based (FERB) approaches directly at pixel level is limited by the satellite imagery spatial resolution. To further improve the spatial resolution of forest fire burn scar mapping, we improved the image super-resolution reconstruction via sparse representation (SCSR) and named it modified image super-resolution reconstruction via sparse representation (MSCSR). It was compared with the Burned Area Subpixel Mapping–Feature Extraction Rule-Based (BASM-FERB) method to screen a better approach. Based on the Sentinel-2 satellite imagery, the MSCSR and BASM-FERB approaches were used to map forest fire burn scars at the subpixel level, and the extraction result was validated using actual forest fire data. The results show that forest fire burn scar mapping at the subpixel level obtained by the MSCSR and BASM-FERB approaches has a higher spatial resolution; in particular, the MSCSR approach can more effectively reduce the noise effect on forest fire burn scar mapping at the subpixel level. Five accuracy indexes, the Overall Accuracy (OA), User’s Accuracy (UA), Producer’s Accuracy (PA), Intersection over Union (IoU), and Kappa Coefficient (Kappa), are used to assess the accuracy of forest fire burn scar mapping at the pixel/subpixel level based on the BAI, dNBR, FERB, MSCSR and BASM-FERB approaches. The average accuracy values of the OA, UA, PA, IoU, and Kappa of the forest fire burn scar mapping results at the subpixel level extracted by the MSCSR and BASM-FERB approaches are superior compared to the forest fire burn scar mapping results at the pixel level extracted by the BAI, dNBR and FERB approaches. 
In particular, the average accuracy values of the OA, UA, PA, IoU, and Kappa of the forest fire burn scar mapping at the subpixel level detected by the MSCSR approach are 98.49%, 99.13%, 92.31%, 95.83%, and 92.81%, respectively, which are 1.48%, 10.93%, 2.47%, 15.55%, and 5.90%, respectively, higher than the accuracy of that extracted by the BASM-FERB approach. It is concluded that the MSCSR approach extracts forest fire burn scar mapping at the subpixel level with higher accuracy and spatial resolution for post-disaster management and assessment of forest fires. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
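The five accuracy indexes reported in the abstract above (OA, UA, PA, IoU, Kappa) all derive from the confusion matrix of a binary burned/unburned map against reference data. A self-contained sketch of their standard definitions (assuming burned = 1; the authors' exact evaluation protocol may differ):

```python
import numpy as np

def burn_map_accuracy(pred, ref):
    """Accuracy metrics for a binary burned(1)/unburned(0) map.
    Returns OA, UA (precision), PA (recall), IoU, and Cohen's kappa."""
    pred = np.asarray(pred).ravel().astype(bool)
    ref = np.asarray(ref).ravel().astype(bool)
    tp = np.sum(pred & ref)      # correctly mapped burned pixels
    fp = np.sum(pred & ~ref)     # commission errors
    fn = np.sum(~pred & ref)     # omission errors
    tn = np.sum(~pred & ~ref)    # correctly mapped unburned pixels
    n = tp + fp + fn + tn
    oa = (tp + tn) / n
    ua = tp / (tp + fp)          # user's accuracy = precision
    pa = tp / (tp + fn)          # producer's accuracy = recall
    iou = tp / (tp + fp + fn)
    # Cohen's kappa: agreement beyond chance expectation
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe)
    return {"OA": oa, "UA": ua, "PA": pa, "IoU": iou, "Kappa": kappa}
```

Note that OA can look high on imagery dominated by unburned pixels, which is why class-sensitive measures such as IoU and Kappa show the larger gaps between methods in the abstract's reported figures.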
