Search Results (22)

Search Parameters:
Authors = Mukhriddin Mukhiddinov; ORCID = 0000-0002-1424-0799

17 pages, 3823 KiB  
Article
Lightweight UAV-Based System for Early Fire-Risk Identification in Wild Forests
by Akmalbek Abdusalomov, Sabina Umirzakova, Alpamis Kutlimuratov, Dilshod Mirzaev, Adilbek Dauletov, Tulkin Botirov, Madina Zakirova, Mukhriddin Mukhiddinov and Young Im Cho
Fire 2025, 8(8), 288; https://doi.org/10.3390/fire8080288 - 23 Jul 2025
Viewed by 400
Abstract
The escalating frequency and impacts of wildfires threaten the public, economies, and global ecosystems. Physiologically declining or dead trees account for a large share of ignitions because they have lower moisture content and ignite more readily. Preventing wildfires therefore requires identifying such hazardous vegetation early and removing it. This work proposes a real-time framework for detecting fire-risk trees in UAV images, based on lightweight object detection. The model pairs a MobileNetV3-Small backbone, optimized for edge deployment, with an SSD head, yielding a highly optimized and fast UAV-based inference pipeline. The dataset used in this study comprises over 3000 annotated RGB UAV images of trees in healthy, partially dead, and fully dead conditions, collected from mixed real-world forest scenes and public drone imagery repositories. Thorough evaluation shows that the proposed model outperforms conventional SSD and recent YOLO variants in Precision (94.1%), Recall (93.7%), mAP (90.7%), and F1 score (91.0%), while remaining lightweight (8.7 MB) and fast (62.5 FPS on a Jetson Xavier NX). These findings strongly support the model's effectiveness for large-scale continuous forest monitoring to detect health degradation and proactively mitigate wildfire risk. The framework differentiates itself from other UAV-based environmental monitoring systems by treating the balance between detection accuracy, speed, and resource efficiency as a fundamental design principle.
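As a rough illustration of the architecture this abstract describes, the sketch below builds an SSD detector on a MobileNetV3 backbone with torchvision. Note the assumptions: torchvision ships an SSDLite head on the MobileNetV3-Large backbone rather than the paper's MobileNetV3-Small, and the class labels and confidence threshold are illustrative placeholders, so this only approximates the published model.

```python
# Minimal MobileNetV3 + SSD sketch (SSDLite-Large as a stand-in for the
# paper's MobileNetV3-Small + SSD pairing; labels are hypothetical).
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

# Three tree-condition classes plus background (illustrative labels).
CLASSES = ["background", "healthy", "partially_dead", "fully_dead"]

model = ssdlite320_mobilenet_v3_large(weights=None, num_classes=len(CLASSES))
model.eval()

# A dummy 320x320 RGB frame standing in for one UAV image.
frame = [torch.rand(3, 320, 320)]
with torch.no_grad():
    detections = model(frame)[0]  # dict with 'boxes', 'labels', 'scores'

# Keep confident detections only; the 0.5 threshold is an assumption.
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])
```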

24 pages, 3395 KiB  
Article
Drone-Based Wildfire Detection with Multi-Sensor Integration
by Akmalbek Abdusalomov, Sabina Umirzakova, Makhkamov Bakhtiyor Shukhratovich, Mukhriddin Mukhiddinov, Azamat Kakhorov, Abror Buriboev and Heung Seok Jeon
Remote Sens. 2024, 16(24), 4651; https://doi.org/10.3390/rs16244651 - 12 Dec 2024
Cited by 10 | Viewed by 3939
Abstract
Wildfires pose a severe threat to ecological systems, human life, and infrastructure, making early detection critical for timely intervention. Traditional fire detection systems rely heavily on single-sensor approaches and are often hindered by environmental conditions such as smoke, fog, or nighttime scenarios. This paper proposes Adaptive Multi-Sensor Oriented Object Detection with Space–Frequency Selective Convolution (AMSO-SFS), a novel deep learning-based model optimized for drone-based wildfire and smoke detection. AMSO-SFS combines optical, infrared, and Synthetic Aperture Radar (SAR) data to detect fire and smoke under varied visibility conditions. The model introduces a Space–Frequency Selective Convolution (SFS-Conv) module to enhance the discriminative capacity of features in both spatial and frequency domains. Furthermore, AMSO-SFS utilizes weakly supervised learning and adaptive scale and angle detection to identify fire and smoke regions with minimal labeled data. Extensive experiments show that the proposed model outperforms current state-of-the-art (SoTA) models, achieving robust detection performance while maintaining computational efficiency, making it suitable for real-time drone deployment.
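The abstract's Space–Frequency Selective Convolution is not specified here, so the following is only a toy sketch of the general idea: one branch filters features in the frequency domain via the FFT, another convolves them spatially, and a learned gate blends the two. The module name, the low-pass rule, and the gating design are all assumptions, not the paper's SFS-Conv.

```python
import torch
import torch.nn as nn

class SpaceFrequencySelect(nn.Module):
    """Toy space-frequency selective block (not the paper's SFS-Conv):
    a spatial conv branch and an FFT-filtered branch are blended by a
    learned per-channel gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        # Per-channel gate deciding how much frequency content to keep.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.spatial(x)
        # Frequency branch: low-pass the feature map by zeroing the
        # highest-frequency half of the spectrum, then invert the FFT.
        f = torch.fft.rfft2(x, norm="ortho")
        f[..., f.shape[-1] // 2:] = 0
        fr = torch.fft.irfft2(f, s=x.shape[-2:], norm="ortho")
        g = self.gate(x)  # in [0, 1], shape (N, C, 1, 1)
        return g * fr + (1 - g) * s

x = torch.rand(1, 16, 64, 64)
print(SpaceFrequencySelect(16)(x).shape)  # torch.Size([1, 16, 64, 64])
```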

22 pages, 5877 KiB  
Article
ERIRMS: Evaluation of the Reliability of IoT-Aided Remote Monitoring Systems of Low-Voltage Overhead Transmission Lines
by Halimjon Khujamatov, Dilmurod Davronbekov, Alisher Khayrullaev, Mirjamol Abdullaev, Mukhriddin Mukhiddinov and Jinsoo Cho
Sensors 2024, 24(18), 5970; https://doi.org/10.3390/s24185970 - 14 Sep 2024
Cited by 2 | Viewed by 1833
Abstract
Researchers have studied technical failures of power lines, the significant rise in energy losses on the line connecting the distribution transformer to consumer meters, and the inability to control unauthorized line connections. New, innovative, and scientific approaches are required to address these issues while enhancing the reliability and efficiency of electricity supply. This study evaluates the reliability of Internet of Things (IoT)-aided remote monitoring systems specifically designed for a low-voltage overhead transmission line. Many methods of analysis and comparison have been employed to examine the reliability of wireless sensor devices used in real-time remote monitoring. A reliability model was developed to evaluate the reliability of the monitoring system in various situations. Based on the developed models, the reliability of the proposed monitoring system was found to be 98% over a one-month period. In addition, it was shown that system reliability remains high even when a non-critical sensor in the network fails. This study investigates various IoT technologies, their integration into monitoring systems, and their effectiveness in enhancing the reliability and efficiency of electrical transmission infrastructure. The analysis includes data from field deployments, case studies, and simulations to assess performance metrics, such as accuracy, latency, and fault detection capabilities.
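For readers unfamiliar with how such reliability figures are derived, here is a back-of-the-envelope sketch assuming exponentially distributed component lifetimes, with series and parallel (redundant) compositions. The failure rates and topology are invented for illustration; only the one-month horizon and the roughly 98% target come from the abstract.

```python
import math

HOURS_PER_MONTH = 30 * 24

def r(failure_rate_per_hour: float, t_hours: float) -> float:
    """Reliability of one component over t hours: R(t) = exp(-lambda*t)."""
    return math.exp(-failure_rate_per_hour * t_hours)

def series(*rs: float) -> float:
    """All components must survive (e.g., gateway + power + uplink)."""
    out = 1.0
    for x in rs:
        out *= x
    return out

def parallel(*rs: float) -> float:
    """System survives if at least one redundant sensor survives."""
    out = 1.0
    for x in rs:
        out *= (1 - x)
    return 1 - out

t = HOURS_PER_MONTH
sensor = r(5e-5, t)   # hypothetical line-mounted sensor failure rate
gateway = r(1e-5, t)  # hypothetical IoT gateway failure rate
# Two redundant sensors in parallel, in series with the gateway:
system = series(parallel(sensor, sensor), gateway)
print(f"one-month system reliability: {system:.4f}")
```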

6 pages, 1017 KiB  
Correction
Correction: Yuldashev et al. Parking Lot Occupancy Detection with Improved MobileNetV3. Sensors 2023, 23, 7642
by Yusufbek Yuldashev, Mukhriddin Mukhiddinov, Akmalbek Bobomirzaevich Abdusalomov, Rashid Nasimov and Jinsoo Cho
Sensors 2024, 24(16), 5236; https://doi.org/10.3390/s24165236 - 13 Aug 2024
Viewed by 1134
(This article belongs to the Special Issue Computer Vision for Smart Cities)

21 pages, 4178 KiB  
Article
Data Anomaly Detection for Structural Health Monitoring Based on a Convolutional Neural Network
by Soon-Young Kim and Mukhriddin Mukhiddinov
Sensors 2023, 23(20), 8525; https://doi.org/10.3390/s23208525 - 17 Oct 2023
Cited by 16 | Viewed by 4952
Abstract
Structural health monitoring (SHM) has been extensively utilized in civil infrastructures for several decades. The status of civil constructions is monitored in real time using a wide variety of sensors; however, determining the true state of a structure can be difficult due to the presence of abnormalities in the acquired data. Extreme weather, faulty sensors, and structural damage are common causes of these abnormalities. For civil structure monitoring to be successful, abnormalities must be detected quickly. In addition, one form of abnormality generally predominates in SHM data, and this class imbalance severely hampers current anomaly detection methods: even cutting-edge damage diagnostic methods are useless without proper data-cleansing processes. To solve this problem, this study proposes a hyper-parameter-tuned convolutional neural network (CNN) for multiclass imbalanced anomaly detection. A multiclass time series of anomaly data from a real-world cable-stayed bridge is used to test the 1D CNN model, with the dataset balanced by supplementing the data as necessary. By balancing the dataset through data augmentation, the model achieved an overall accuracy of 97.6%.
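A minimal sketch of a 1D CNN of the kind the abstract describes, plus a noise-jitter augmentation step standing in for the dataset-balancing augmentation. All layer sizes, the window length, and the six-class assumption are illustrative; the paper's tuned hyper-parameters are not reproduced.

```python
import torch
import torch.nn as nn

class AnomalyCNN(nn.Module):
    """Illustrative 1D CNN for multiclass SHM anomaly classification."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling -> fixed size
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 1, window_length)
        return self.head(self.features(x).squeeze(-1))

def augment(window: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Gaussian jitter: one assumed way to enlarge rare classes."""
    return window + sigma * torch.randn_like(window)

model = AnomalyCNN()
logits = model(augment(torch.randn(8, 1, 1000)))
print(logits.shape)  # torch.Size([8, 6])
```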

24 pages, 3227 KiB  
Article
An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images
by Saydirasulov Norkobil Saydirasulovich, Mukhriddin Mukhiddinov, Oybek Djuraev, Akmalbek Abdusalomov and Young-Im Cho
Sensors 2023, 23(20), 8374; https://doi.org/10.3390/s23208374 - 10 Oct 2023
Cited by 73 | Viewed by 9571
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally. Identifying the smoke generated by forest fires is pivotal in facilitating the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke face persistent issues, including slow identification, suboptimal detection accuracy, and difficulty in distinguishing smoke from small sources. This study presents an enhanced YOLOv8 model tailored to unmanned aerial vehicle (UAV) images to address these challenges and improve detection accuracy. Firstly, the research incorporates Wise-IoU (WIoU) v3 as the bounding-box regression loss, supplemented by a gradient allocation strategy that prioritizes samples of common quality; this enhances the model's localization precision. Secondly, the conventional convolution in the intermediate neck layer is replaced with a Ghost Shuffle Convolution mechanism, which reduces model parameters and speeds up convergence. Thirdly, because salient features of forest fire smoke are difficult to capture in intricate wooded settings, this study introduces the BiFormer attention mechanism, which directs the model's attention toward the features of forest fire smoke while suppressing irrelevant, non-target background information. The experimental findings demonstrate the enhanced YOLOv8 model's effectiveness in smoke detection, with an average precision (AP) of 79.4%, a notable 3.3% improvement over the baseline. The model also achieves robust average precision for small objects (APS, 71.3%) and large objects (APL, 92.6%).
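Of the three modifications, the Ghost Shuffle Convolution in the neck is the most self-contained to sketch. The block below follows the commonly published GSConv recipe (dense conv for half the channels, cheap depthwise conv for the rest, then a channel shuffle); it is an approximation, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Ghost-Shuffle-style convolution sketch: half the output channels
    come from a dense conv, the other half from a cheap depthwise conv,
    and a channel shuffle mixes the two groups."""
    def __init__(self, c_in: int, c_out: int, stride: int = 1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, 3, stride, 1, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU(),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU(),
        )

    def forward(self, x):
        a = self.dense(x)
        b = self.cheap(a)
        y = torch.cat([a, b], dim=1)  # (N, c_out, H, W)
        n, c, h, w = y.shape
        # Channel shuffle: interleave the dense and cheap groups.
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

print(GSConv(64, 128)(torch.rand(1, 64, 40, 40)).shape)
```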

26 pages, 3790 KiB  
Article
Parking Lot Occupancy Detection with Improved MobileNetV3
by Yusufbek Yuldashev, Mukhriddin Mukhiddinov, Akmalbek Bobomirzaevich Abdusalomov, Rashid Nasimov and Jinsoo Cho
Sensors 2023, 23(17), 7642; https://doi.org/10.3390/s23177642 - 3 Sep 2023
Cited by 15 | Viewed by 6042 | Correction
Abstract
In recent years, parking lot management systems have garnered significant research attention, particularly concerning the application of deep learning techniques. Numerous approaches have emerged for tackling parking lot occupancy challenges using deep learning models. This study addresses a critical aspect of parking lot management systems: accurately determining vehicle occupancy in specific parking spaces. We propose an optimized MobileNetV3 model with custom architectural enhancements, trained on the CNRPark-EXT and PKLot datasets. The model processes individual parking space patches from real-time video feeds and classifies each patch as occupied or available. Our architectural modifications include a convolutional block attention module in place of the native attention module and blueprint separable convolutions instead of the traditional depth-wise separable convolutions. In terms of performance, the proposed model outperforms state-of-the-art methods, achieving an area under the ROC curve (AUC) of 0.99 in most experiments on the PKLot dataset and demonstrating strong discriminatory power in binary classification. Benchmarked against the CarNet and mAlexNet models, representative of previous state-of-the-art solutions, the proposed model attains an average accuracy of 98.01% on the combined CNRPark-EXT and PKLot datasets, while CarNet achieves 97.03%. Beyond accuracy and precision comparable to previous models, the proposed model shows promise for real-time applications. This work advances parking lot occupancy detection by offering a robust and efficient solution with implications for urban mobility enhancement and resource optimization.
(This article belongs to the Special Issue Computer Vision for Smart Cities)
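A sketch of the attention swap the abstract describes: a Convolutional Block Attention Module (CBAM) of the standard design, which the authors use in place of MobileNetV3's native squeeze-and-excitation attention. The reduction ratio and kernel size below are conventional defaults, not confirmed values from the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Standard CBAM sketch: channel attention from pooled descriptors,
    then spatial attention from channel-pooled maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        n, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled vectors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)
        # Spatial attention: conv over channel-wise avg and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(64)(torch.rand(2, 64, 16, 16)).shape)
```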

29 pages, 4688 KiB  
Article
Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging
by Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov and Taeg Keun Whangbo
Cancers 2023, 15(16), 4172; https://doi.org/10.3390/cancers15164172 - 18 Aug 2023
Cited by 211 | Viewed by 14397
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor is a major health risk for adults, since it can cause severe impairment of organ function and even death. These tumors come in a wide variety of sizes, textures, and locations. Magnetic resonance imaging (MRI) is a crucial tool for locating cancerous tumors, but detecting brain tumors manually is a difficult and time-consuming activity prone to inaccuracies. To address this, we provide a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved brain tumor detection system. The visual quality of the MRI scans is enhanced by image enhancement methods that apply different filters to the original pictures, and data augmentation techniques are applied to the openly accessible brain tumor dataset to improve training. The curated data include a wide variety of cases: 2548 images of gliomas, 2658 images of pituitary tumors, 2582 images of meningiomas, and 2500 images of non-tumors. We incorporated the Convolutional Block Attention Module (CBAM) into YOLOv7 to enhance its feature extraction, allowing better emphasis on salient regions linked with brain malignancies, and added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core to improve sensitivity. The model also uses decoupled heads, allowing it to efficiently glean useful information from a wide variety of data, and a Bi-directional Feature Pyramid Network (BiFPN) to speed up multi-scale feature fusion and better collect tumor-associated features. The outcomes verify the efficiency of the suggested method, which achieves higher overall tumor detection accuracy than previous state-of-the-art models. This framework therefore has strong potential as a decision-support tool for experts diagnosing brain tumors.
(This article belongs to the Special Issue Brain Tumor: Recent Advances and Challenges)
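Among the listed changes, the SPPF layer is the easiest to illustrate. The sketch below is the standard Spatial Pyramid Pooling - Fast block that the paper's SPPF+ extends: chained max-pools emulate parallel 5/9/13 pooling at lower cost. The "+" modifications themselves are not public here and are not shown.

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    """Standard SPPF sketch: three chained max-pools whose outputs are
    concatenated with the input and fused by a 1x1 conv."""
    def __init__(self, c_in: int, c_out: int, k: int = 5):
        super().__init__()
        c_mid = c_in // 2
        self.reduce = nn.Conv2d(c_in, c_mid, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.fuse = nn.Conv2d(c_mid * 4, c_out, 1)

    def forward(self, x):
        x = self.reduce(x)
        p1 = self.pool(x)   # effective receptive field ~5
        p2 = self.pool(p1)  # ~9
        p3 = self.pool(p2)  # ~13
        return self.fuse(torch.cat([x, p1, p2, p3], dim=1))

print(SPPF(512, 512)(torch.rand(1, 512, 20, 20)).shape)
```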

19 pages, 14299 KiB  
Article
An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach
by Akmalbek Bobomirzaevich Abdusalomov, Bappy MD Siful Islam, Rashid Nasimov, Mukhriddin Mukhiddinov and Taeg Keun Whangbo
Sensors 2023, 23(3), 1512; https://doi.org/10.3390/s23031512 - 29 Jan 2023
Cited by 103 | Viewed by 9998
Abstract
With increases in both global warming and the human population, forest fires have become a major global concern, contributing to climatic shifts and the greenhouse effect, among other adverse outcomes. Notably, human activities cause a disproportionate number of forest fires, and fast, accurate detection is the key to controlling such events. To address this, we propose an improved forest fire detection method that classifies fires using the Detectron2 platform (a ground-up rewrite of the Detectron library) and deep learning approaches. A custom dataset of 5200 images was created and labeled for model training, and the improved Detectron2 model achieved higher precision than competing models across various experimental scenarios. The proposed model can detect small fires over long distances during the day and night; long-distance detection of the object of interest is a key advantage of the Detectron2-based approach. The experimental results show that the proposed forest fire detection method successfully detects fires with an improved precision of 99.3%.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
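A minimal sketch of what training a fire detector on Detectron2 with a custom COCO-format dataset typically looks like. The base config, dataset paths, class count, and solver settings are assumptions for illustration; the paper does not state which Detectron2 base model or hyper-parameters it used.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations and image folder.
register_coco_instances("fire_train", {}, "annotations/train.json",
                        "images/train")

cfg = get_cfg()
# Faster R-CNN base is an assumed choice, not confirmed by the paper.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("fire_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # single "fire" class
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000  # illustrative schedule

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```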

22 pages, 5689 KiB  
Article
Development of Language Models for Continuous Uzbek Speech Recognition System
by Abdinabi Mukhamadiyev, Mukhriddin Mukhiddinov, Ilyos Khujayarov, Mannon Ochilov and Jinsoo Cho
Sensors 2023, 23(3), 1145; https://doi.org/10.3390/s23031145 - 19 Jan 2023
Cited by 23 | Viewed by 6781
Abstract
Automatic speech recognition systems with a large vocabulary, like other natural language processing applications, cannot operate without a language model. Most studies on pre-trained language models have focused on popular languages such as English, Chinese, and various European languages, and there is no publicly available Uzbek speech dataset; language models for low-resource languages therefore need to be studied and created. This study addresses this limitation by developing a low-resource language model for Uzbek and examining its linguistic behavior. We propose an Uzbek language model, UzLM, built by examining the performance of statistical and neural-network-based language models that account for the unique features of the Uzbek language. Our Uzbek-specific linguistic representation allows us to construct a more robust UzLM from 80 million words drawn from various sources, using the same number of or fewer training words than previous studies. Roughly sixty-eight thousand distinct words and 15 million sentences were collected to create this corpus. Experiments on continuous Uzbek speech recognition show that, compared with manual encoding, neural-network-based language models reduced the character error rate to 5.26%.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
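The character error rate (CER) quoted above is a standard metric; as a reference, here is a minimal sketch of its computation as character-level Levenshtein distance divided by reference length. The example strings are invented and not from the UzLM corpus.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance / reference length."""
    r, h = list(reference), list(hypothesis)
    # dp[i][j] = edits needed to turn r[:i] into h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(r), 1)

# Illustrative Uzbek (Latin-script) example, not from the corpus:
print(cer("salom dunyo", "salom dunye"))  # 1 edit / 11 chars ~ 0.0909
```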

23 pages, 5670 KiB  
Article
Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
by Mukhriddin Mukhiddinov, Oybek Djuraev, Farkhod Akhmedov, Abdinabi Mukhamadiyev and Jinsoo Cho
Sensors 2023, 23(3), 1080; https://doi.org/10.3390/s23031080 - 17 Jan 2023
Cited by 68 | Viewed by 12500
Abstract
Current artificial intelligence systems for determining a person's emotions rely heavily on lip and mouth movement and other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically misclassified because of the dark regions around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and analyzes the upper features of the face with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. Initially, the lower part of the input face image is covered by a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper features of the face. Secondly, we adopt a feature extraction strategy based on facial landmark detection, using the features of the partially covered masked face. Finally, the extracted features, the coordinates of the detected landmarks, and histograms of oriented gradients are fed into a convolutional neural network for classification. An experimental evaluation shows that the proposed method surpasses others, achieving an accuracy of 69.3% on the AffectNet dataset.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
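One concrete stage of this pipeline, HOG feature extraction over the unmasked upper face, can be sketched with scikit-image. The crop boundary and HOG parameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog

face = np.random.rand(128, 128)  # stand-in grayscale face crop
upper = face[: 128 // 2, :]      # keep eyes/brows/forehead only

features = hog(
    upper,
    orientations=9,              # gradient direction bins
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
)
# These features would be concatenated with landmark coordinates and
# fed to the CNN classifier described in the abstract.
print(features.shape)
```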

16 pages, 6361 KiB  
Article
Improved Face Detection Method via Learning Small Faces on Hard Images Based on a Deep Learning Approach
by Dilnoza Mamieva, Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov and Taeg Keun Whangbo
Sensors 2023, 23(1), 502; https://doi.org/10.3390/s23010502 - 2 Jan 2023
Cited by 56 | Viewed by 8983
Abstract
Most facial recognition and face analysis systems start with face detection. Early techniques, such as Haar cascades and histograms of oriented gradients, rely mainly on features manually designed from particular images, and they handle images taken in unconstrained conditions poorly. The rapid development of deep learning in computer vision has produced a number of deep learning-based face detection frameworks, many of which have significantly improved accuracy in recent years. Nevertheless, detecting faces that are small, variably scaled or positioned, occluded, blurred, or partially visible in uncontrolled conditions remains a long-studied and still unresolved problem. In this paper, we propose a RetinaNet-based single-stage face detector to handle this challenging problem, with network improvements that boost detection speed and accuracy. In our experiments we used two popular datasets, WIDER FACE and FDDB. On the WIDER FACE benchmark, the proposed method achieves an AP of 41.0 at 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, competitive results among one-stage detectors. The model, implemented and trained with the PyTorch framework, achieved an accuracy of 95.6% on successfully detected faces. The experimental results show that the proposed model delivers strong detection and recognition performance on standard evaluation metrics.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
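As a rough stand-in for the described detector, the sketch below instantiates torchvision's reference RetinaNet for a single face class. The paper's speed and accuracy modifications are not reproduced, and the score threshold is an arbitrary choice.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# One foreground class ("face"); torchvision's count includes background.
model = retinanet_resnet50_fpn(weights=None, num_classes=2)
model.eval()

image = [torch.rand(3, 640, 640)]  # stand-in WIDER-FACE-like input
with torch.no_grad():
    out = model(image)[0]          # dict: 'boxes', 'scores', 'labels'

faces = out["boxes"][out["scores"] > 0.5]
print(f"{len(faces)} face boxes above threshold")
```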

14 pages, 2480 KiB  
Article
Enhanced Classification of Dog Activities with Quaternion-Based Fusion Approach on High-Dimensional Raw Data from Wearable Sensors
by Azamjon Muminov, Mukhriddin Mukhiddinov and Jinsoo Cho
Sensors 2022, 22(23), 9471; https://doi.org/10.3390/s22239471 - 4 Dec 2022
Cited by 10 | Viewed by 3212
Abstract
Applying machine learning algorithms to data from wearable movement sensors is one of the most common ways to detect pets' behaviors and monitor their well-being. However, defining features that lead to highly accurate behavior classification is quite challenging. To address this problem, this study classifies six main dog activities (standing, walking, running, sitting, lying down, and resting) using high-dimensional raw sensor data. Data were received from the accelerometer and gyroscope sensors attached to the dog's smart costume. Once data are received, the module computes a quaternion value for each data point, providing useful features for classification. We then performed the classification with several supervised machine learning algorithms: Gaussian naïve Bayes (GNB), Decision Tree (DT), K-nearest neighbor (KNN), and support vector machine (SVM). To evaluate performance, we compared the F-scores of the proposed approach with those of the classic approach, in which raw sensor data are fed directly to the model without computing quaternion values. Overall, 18 dogs equipped with harnesses participated in the experiment. The results show significantly better classification with the proposed approach. Among all the classifiers, the GNB model achieved the highest accuracy for dog behavior, classifying the six behaviors with F-scores of 0.94, 0.86, 0.94, 0.89, 0.95, and 1, respectively. Moreover, the GNB classifier averaged 93% accuracy on the dataset of quaternion values, versus only 88% on raw sensor data.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
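A sketch of the classification stage, assuming the quaternion per sample has already been computed by the wearable's module as the abstract states. Random arrays stand in for the real recordings, so the printed scores are meaningless; the point is the shape of the pipeline.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

ACTIVITIES = ["standing", "walking", "running",
              "sitting", "lying", "resting"]

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))  # (qw, qx, qy, qz) per sample, synthetic
y = rng.integers(0, len(ACTIVITIES), size=3000)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)

# Per-class F-scores, mirroring how the paper reports its results.
scores = f1_score(y_te, clf.predict(X_te), average=None)
print(dict(zip(ACTIVITIES, np.round(scores, 2))))
```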

25 pages, 3483 KiB  
Article
A Wildfire Smoke Detection System Using Unmanned Aerial Vehicle Images Based on the Optimized YOLOv5
by Mukhriddin Mukhiddinov, Akmalbek Bobomirzaevich Abdusalomov and Jinsoo Cho
Sensors 2022, 22(23), 9384; https://doi.org/10.3390/s22239384 - 1 Dec 2022
Cited by 68 | Viewed by 10600
Abstract
Wildfires are among the most significant dangers and the most serious natural catastrophes, endangering forest resources, animal life, and the human economy. Recent years have witnessed a rise in wildfire incidents, driven mainly by persistent human interference with the natural environment and by global warming. Early detection of fire ignition from initial smoke can help firefighters react before blazes become difficult to handle. Previous deep-learning approaches for wildfire smoke detection have been hampered by small or untrustworthy datasets, making it challenging to extrapolate their performance to real-world scenarios. In this study, we propose an early wildfire smoke detection system using unmanned aerial vehicle (UAV) images based on an improved YOLOv5. First, we curated a 6000-image wildfire dataset from existing UAV images. Second, we optimized anchor box clustering using the K-means++ technique to reduce classification errors, and improved the network's backbone with a spatial pyramid pooling fast-plus layer to concentrate on small wildfire smoke regions. Third, a bidirectional feature pyramid network was applied for more accessible and faster multi-scale feature fusion. Finally, network pruning and transfer learning approaches were implemented to refine the network architecture and detection speed, and to correctly identify small-scale wildfire smoke areas. The experimental results show that the proposed method achieved an average precision of 73.6% and outperformed other one- and two-stage object detectors on a custom image dataset.
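The anchor-optimization step is straightforward to sketch: cluster annotated box sizes with K-means++ initialization and sort the centers into YOLOv5's nine anchors. The box data below are random stand-ins for the wildfire-smoke annotations.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# (width, height) of annotated smoke boxes in pixels (illustrative).
wh = rng.uniform(low=8, high=256, size=(6000, 2))

# YOLOv5 uses 9 anchors split across 3 detection scales.
km = KMeans(n_clusters=9, init="k-means++", n_init=10,
            random_state=0).fit(wh)
anchors = km.cluster_centers_[
    np.argsort(km.cluster_centers_.prod(axis=1))]
print(np.round(anchors))  # sorted small -> large, 3 per scale
```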

18 pages, 6580 KiB  
Article
Development of Real-Time Landmark-Based Emotion Recognition CNN for Masked Faces
by Akhmedov Farkhod, Akmalbek Bobomirzaevich Abdusalomov, Mukhriddin Mukhiddinov and Young-Im Cho
Sensors 2022, 22(22), 8704; https://doi.org/10.3390/s22228704 - 11 Nov 2022
Cited by 45 | Viewed by 7501
Abstract
Owing to the wide range of emotion recognition applications in our lives, such as mental status assessment, the demand for high-performance emotion recognition approaches remains high. Moreover, the wearing of facial masks became indispensable during the COVID-19 pandemic. In this study, we propose a graph-based emotion recognition method that adopts landmarks on the upper part of the face. Several pre-processing steps are applied, after which facial expression features are extracted from facial key points. The main steps of emotion recognition on masked faces are face detection using a Haar cascade, landmark extraction with a MediaPipe face mesh model, and model training on seven emotional classes. The FER-2013 dataset was used for model training. An emotion detection model was first developed for non-masked faces; landmarks were then applied to the upper part of the face. After faces were detected and landmark locations extracted, we captured the coordinates of the landmarks for each emotional class and exported them to a comma-separated values (CSV) file. The model weights were then transferred to the landmark-based emotional classes. Finally, the landmark-based emotion recognition model for the upper facial parts was tested both on images and in real time using a web camera application. The proposed model achieved an overall accuracy of 91.2% for seven emotional classes on images; image-based detection accuracy was somewhat higher than real-time detection accuracy.
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
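A sketch of the detection and landmark-export stage the abstract outlines: Haar-cascade face detection followed by MediaPipe face-mesh landmarks, keeping upper-face points and writing them to CSV. The file paths and the normalized-y cutoff for "upper face" are illustrative assumptions.

```python
import csv
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")  # hypothetical input image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                       max_num_faces=1)
with open("landmarks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for (x, y, w, h) in faces:
        crop = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        result = mesh.process(crop)
        if not result.multi_face_landmarks:
            continue
        for i, lm in enumerate(result.multi_face_landmarks[0].landmark):
            # Keep only landmarks in the upper half of the crop; the
            # 0.5 normalized-y cutoff is a simplifying assumption.
            if lm.y < 0.5:
                writer.writerow([i, lm.x, lm.y])
```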
