Search Results (9)

Search Parameters:
Keywords = rescue action classification

23 pages, 13739 KB  
Article
Traffic Accident Rescue Action Recognition Method Based on Real-Time UAV Video
by Bo Yang, Jianan Lu, Tao Liu, Bixing Zhang, Chen Geng, Yan Tian and Siyu Zhang
Drones 2025, 9(8), 519; https://doi.org/10.3390/drones9080519 - 24 Jul 2025
Viewed by 854
Abstract
Low-altitude drones, which are unimpeded by traffic congestion or urban terrain, have become a critical asset in emergency rescue missions. To address the current lack of emergency rescue data, UAV aerial videos were collected to create an experimental dataset for action classification and localization annotation. A total of 5082 keyframes were labeled with 1–5 targets each, yielding 14,412 annotated instances (with flight altitude and camera angle recorded) for action classification and position annotation. To mitigate the challenges posed by high-resolution drone footage with excessive redundant information, we propose the SlowFast-Traffic (SF-T) framework, a spatio-temporal sequence-based algorithm for recognizing traffic accident rescue actions. For more efficient extraction of target–background correlation features, we introduce the Actor-Centric Relation Network (ACRN) module, which employs temporal max pooling to enhance the time-dimensional features of static backgrounds, significantly reducing redundancy-induced interference. Additionally, smaller ROI feature map outputs are adopted to boost computational speed. To tackle class imbalance in incident samples, we integrate a Class-Balanced Focal Loss (CB-Focal Loss) function, effectively resolving rare-action recognition in specific rescue scenarios. We also replace the original Faster R-CNN detector with YOLOX-s to improve the target detection rate. On our proposed dataset, the SF-T model achieves a mean average precision (mAP) of 83.9%, 8.5% higher than the standard SlowFast architecture, while maintaining a processing speed of 34.9 tasks/s. Both accuracy and computational efficiency are substantially improved, and the proposed method demonstrates strong robustness and real-time analysis capability for modern traffic rescue action recognition.
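As a concrete reference for the loss named above, the following is a minimal PyTorch sketch of a Class-Balanced Focal Loss in the style of Cui et al. (2019), which CB-Focal Loss builds on: per-class weights (1 − β)/(1 − β^n_c) combined with the focal modulation (1 − p_t)^γ. The β, γ, and class counts shown are illustrative assumptions, not the SF-T settings.

```python
# Hedged sketch of a Class-Balanced Focal Loss; beta, gamma, and the
# per-class sample counts below are illustrative assumptions.
import torch
import torch.nn.functional as F

def cb_focal_loss(logits, targets, samples_per_class, beta=0.9999, gamma=2.0):
    """logits: (N, C) action scores; targets: (N,) class indices."""
    counts = torch.as_tensor(samples_per_class, dtype=torch.float,
                             device=logits.device)
    # Class-balanced weights: (1 - beta) / (1 - beta^n_c), rescaled to sum to C.
    weights = (1.0 - beta) / (1.0 - torch.pow(beta, counts))
    weights = weights / weights.sum() * len(counts)

    ce = F.cross_entropy(logits, targets, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)                                     # prob. of true class
    focal = (1.0 - p_t) ** gamma * ce                        # down-weight easy samples
    return (weights[targets] * focal).mean()

# Hypothetical long-tailed distribution over 14 rescue-action classes.
logits = torch.randn(8, 14)
targets = torch.randint(0, 14, (8,))
loss = cb_focal_loss(logits, targets, [5000, 2000] + [100] * 12)
```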

23 pages, 9340 KB  
Article
A Multidimensional Study of the 2023 Beijing Extreme Rainfall: Theme, Location, and Sentiment Based on Social Media Data
by Xun Zhang, Xin Zhang, Yingchun Zhang, Ying Liu, Rui Zhou, Abdureyim Raxidin and Min Li
ISPRS Int. J. Geo-Inf. 2025, 14(4), 136; https://doi.org/10.3390/ijgi14040136 - 24 Mar 2025
Viewed by 1144
Abstract
Extreme rainfall events are significant manifestations of climate change, causing substantial impacts on urban infrastructure and public life. This study takes the 2023 extreme rainfall event in Beijing as its background and uses data from Sina Weibo. Disaster information is extracted with large language models and prompt engineering, and a multi-factor coupled, multi-sentiment disaster classification model, Bert-BiLSTM, is designed. A disaster analysis framework covering three dimensions, theme, location, and sentiment, is constructed. The results indicate that during the pre-disaster stage, themes concentrate on warnings and prevention; during the disaster they shift to specific events and rescue actions; and post-disaster they express gratitude to rescue personnel and highlight social cohesion. Spatially, the disaster shows significant clustering, predominantly in Mentougou and Fangshan. There is a clear difference in emotional expression between official media and the public: official media focuses on neutral reporting and fact dissemination, while public sentiment is richer and more varied. Sentiment expression also varies across the affected regions. By revealing the evolution of disaster themes, the spatial distribution of disasters, and the temporal and spatial changes in sentiment, this study provides new perspectives and methods for analyzing extreme rainfall events on social media. These insights can support risk assessment, resource allocation, and public opinion guidance in disaster emergency management, thereby enhancing the precision and effectiveness of disaster response strategies.
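As an illustration of the model family named above, here is a minimal sketch of a BERT encoder feeding a bidirectional LSTM and a linear sentiment head. The checkpoint name, hidden size, and three-way sentiment label set are assumptions for illustration; the paper's exact architecture and multi-factor coupling are not reproduced.

```python
# Hedged sketch of a Bert-BiLSTM sentiment classifier; checkpoint, hidden
# size, and number of sentiment classes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTM(nn.Module):
    def __init__(self, checkpoint="bert-base-chinese", hidden=256, n_classes=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(checkpoint)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        # Per-token BERT features -> BiLSTM -> classify from the two final states.
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        _, (h, _) = self.lstm(tokens)
        return self.head(torch.cat([h[-2], h[-1]], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
batch = tok(["Heavy flooding on the road, please stay safe"],
            return_tensors="pt", padding=True)
logits = BertBiLSTM()(batch["input_ids"], batch["attention_mask"])
```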

17 pages, 1413 KB  
Article
A Retrospective Analysis of Admission Trends and Outcomes in a Wildlife Rescue and Rehabilitation Center in Costa Rica
by Maria Miguel Costa, Nazaré Pinto da Cunha, Isabel Hagnauer and Marta Venegas
Animals 2024, 14(1), 51; https://doi.org/10.3390/ani14010051 - 22 Dec 2023
Cited by 4 | Viewed by 3216
Abstract
The evaluation of data on rehabilitation practices provides reference values that allow different rehabilitation centers to compare themselves, critically review their protocols, and improve efficiently. The aims of the present work were to present the main causes of admission to Rescate Wildlife Rescue Center for each taxonomic group and to determine the admission factors that influenced, and predicted, release and mortality. To this end, a retrospective study was carried out on 5785 admissions registered in the database of Rescate Wildlife Rescue Center in Costa Rica in 2020 and 2021. Statistical analysis consisted of sample characterization via several categorical variables (species, order, class, age group, cause of admission, outcome, clinical classification, and days in the hospital) and their association with the mortality or release rate. Most of the rescued animals were birds (59.3%), followed by mammals (20.7%), reptiles (17.4%), and 'others' (2.6%). The main causes of admission were 'captivity' (34.9%), 'found' (23.3%), and 'trauma' (19.3%). Animals rescued from 'captivity' and the classes 'birds' and 'reptiles' had the highest release rates; the admission causes 'trauma' and 'orphanhood' and the class 'birds' had the highest mortality rates. In general, a greater number of days in the hospital, the class 'reptiles', the age group 'juvenile', and the clinical classifications 'basic care' and 'clinically healthy' were predictors of survival, whereas the age groups 'infant' and 'nestling' were predictors of mortality. These results demonstrate the value of maintaining, improving, and studying databases from wildlife rehabilitation centers: they can provide information useful for allocating economic resources, improving treatment methods, disease surveillance, public education, and regulatory decision-making, leading to a better understanding of threats to wildlife and the subsequent implementation of conservation actions.
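To make the predictive-factor analysis concrete, the sketch below fits a logistic regression of release outcome on categorical admission variables, one plausible way to estimate such predictors. The column names and example rows are hypothetical; the study's actual statistical methodology is not reproduced here.

```python
# Hedged sketch: logistic regression of release vs. mortality on categorical
# admission variables. Column names and rows are hypothetical examples.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

records = pd.DataFrame({
    "taxon_class": ["birds", "mammals", "reptiles", "birds"],
    "age_group": ["adult", "infant", "juvenile", "nestling"],
    "cause": ["captivity", "trauma", "found", "orphanhood"],
    "days_in_hospital": [12, 3, 30, 2],
    "released": [1, 0, 1, 0],        # outcome: 1 = released, 0 = died
})

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["taxon_class", "age_group", "cause"])],
        remainder="passthrough")),   # keep days_in_hospital as numeric
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(records.drop(columns="released"), records["released"])
```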

17 pages, 7331 KB  
Article
A Multitask Network for People Counting, Motion Recognition, and Localization Using Through-Wall Radar
by Junyu Lin, Jun Hu, Zhiyuan Xie, Yulan Zhang, Guangjia Huang and Zengping Chen
Sensors 2023, 23(19), 8147; https://doi.org/10.3390/s23198147 - 28 Sep 2023
Cited by 8 | Viewed by 2226
Abstract
Due to the outstanding penetrating detection performance of low-frequency electromagnetic waves, through-wall radar (TWR) has gained widespread application in fields including public safety, counterterrorism operations, and disaster rescue. In practice, TWR must accomplish several tasks at once, such as people detection, people counting, and positioning, yet most current research focuses on only one or two of them. In this paper, we propose a multitask network that simultaneously performs people counting, motion recognition, and localization. We take the range–time–Doppler (RTD) spectra obtained from one-dimensional (1D) radar signals as the dataset and convert the information on the number, motion, and location of people into confidence matrices used as labels. Convolutional layers and novel attention modules automatically extract deep features from the data and output the people count, motion category, and localization results. We define the total loss as the sum of the individual task losses. Through this loss function, we cast the positioning problem as a multilabel classification problem in which each position in the distance confidence matrix corresponds to a label. On a test set of 10,032 samples from through-wall scenarios with a 24 cm thick brick wall, people counting reaches 96.94% accuracy and motion recognition 96.03%, with an average distance error of 0.12 m.
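The additive objective described above is simple to express in code. The sketch below assumes hypothetical tensor shapes and casts localization as multilabel classification over distance bins, mirroring the confidence-matrix labels; it is an illustration, not the paper's implementation.

```python
# Hedged sketch of a summed multitask loss; shapes and bin counts are assumptions.
import torch
import torch.nn.functional as F

def total_loss(count_logits, count_y,    # (N, max_people + 1), (N,)
               motion_logits, motion_y,  # (N, n_motions), (N,)
               loc_logits, loc_y):       # (N, n_bins), (N, n_bins) in {0, 1}
    l_count = F.cross_entropy(count_logits, count_y)
    l_motion = F.cross_entropy(motion_logits, motion_y)
    # Each distance bin is an independent label, as in the confidence matrix.
    l_loc = F.binary_cross_entropy_with_logits(loc_logits, loc_y.float())
    return l_count + l_motion + l_loc
```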
(This article belongs to the Section Sensor Networks)

20 pages, 5552 KB  
Article
TwSense: Highly Robust Through-the-Wall Human Detection Method Based on COTS Wi-Fi Device
by Zinan Zhang, Zhanjun Hao, Xiaochao Dang and Kaikai Han
Appl. Sci. 2023, 13(17), 9668; https://doi.org/10.3390/app13179668 - 26 Aug 2023
Cited by 2 | Viewed by 2531
Abstract
With the popularization of Wi-Fi router devices, device-free sensing has garnered significant attention for its potential to make our lives more convenient. Wi-Fi signal-based through-the-wall human detection offers practical applications such as emergency rescue and elderly monitoring. However, detection accuracy is hindered by signal attenuation from wall materials and by multipath interference, making through-the-wall human detection a substantial challenge. In this paper, we propose TwSense, a highly robust through-the-wall human detection method based on a commercial off-the-shelf (COTS) Wi-Fi device. To mitigate interference from wall materials and other environmental factors, we employ the online robust principal component analysis (OR-PCA) method to extract the target signal from the Channel State Information (CSI). We then segment the action-induced Doppler shift feature image using K-means clustering and extract image features with the Histogram of Oriented Gradients (HOG) algorithm. Finally, these features are fed into an SVM classifier optimized by a grid search algorithm (G-SVM) for action classification and recognition, thereby enhancing human detection accuracy. We evaluated the robustness of the entire system; the experimental results show that TwSense achieves an accuracy of up to 96%.
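The classification stage (HOG features into a grid-searched SVM) can be sketched with scikit-image and scikit-learn as below. The image size, parameter grid, and labels are assumptions, and the upstream OR-PCA and K-means stages are omitted.

```python
# Hedged sketch of the HOG + grid-searched SVM stage; data and grid values
# are placeholders, and the upstream CSI processing is omitted.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def hog_features(images):
    return np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for im in images])

images = np.random.rand(40, 64, 64)    # placeholder Doppler-shift feature images
labels = np.random.randint(0, 4, 40)   # placeholder action classes

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"],
                            "gamma": ["scale", 0.01]}, cv=3)
grid.fit(hog_features(images), labels)
print(grid.best_params_, grid.best_score_)
```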

24 pages, 10518 KB  
Article
Worker Abnormal Behavior Recognition Based on Spatio-Temporal Graph Convolution and Attention Model
by Zhiwei Li, Anyu Zhang, Fangfang Han, Junchao Zhu and Yawen Wang
Electronics 2023, 12(13), 2915; https://doi.org/10.3390/electronics12132915 - 3 Jul 2023
Cited by 6 | Viewed by 2211
Abstract
Many existing models consider only the temporal information between consecutive skeleton sequences and lack the ability to model spatial information. In response, this study proposes a model for recognizing workers' fall and lie-down abnormal behaviors based on human skeletal key points and a spatio-temporal graph convolutional network (ST-GCN). Skeletons are extracted from video sequences using Alphapose. To address graph convolutional networks' limited effectiveness at aggregating skeletal key-point features, we propose an NAM-STGCN model that incorporates a normalization-based attention mechanism. By using the PReLU activation function to optimize the model structure, the improved ST-GCN model can more effectively extract key-point action features in the spatio-temporal dimension for abnormal behavior recognition. The experimental results show that the optimized model achieves 96.72% recognition accuracy on our self-built dataset, 4.92% better than the original model, with the loss converging below 0.2. Tests on the KTH and Le2i datasets also outperform typical classification networks. The model can precisely identify abnormal human behaviors, facilitating timely anomaly detection and rescue, and offers new ideas for smart site construction.
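A normalization-based attention module of the kind NAM-STGCN incorporates can be sketched as below: BatchNorm scale factors are reused as channel-importance weights. This follows the published NAM idea in simplified form and is an assumption about the paper's exact module.

```python
# Hedged sketch of normalization-based channel attention for skeleton
# features shaped (N, C, T, V); the exact NAM-STGCN module may differ.
import torch
import torch.nn as nn

class ChannelNAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        normed = self.bn(x)
        # Reuse the BN scale factors (gamma) as channel importance weights.
        w = self.bn.weight.abs() / self.bn.weight.abs().sum()
        return x * torch.sigmoid(normed * w.view(1, -1, 1, 1))
```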
(This article belongs to the Special Issue AI Technologies and Smart City)

17 pages, 1653 KB  
Article
Analysis of the Performance of Machine Learning Models in Predicting the Severity Level of Large-Truck Crashes
by Jinli Liu, Yi Qi, Jueqiang Tao and Tao Tao
Future Transp. 2022, 2(4), 939-955; https://doi.org/10.3390/futuretransp2040052 - 16 Nov 2022
Cited by 4 | Viewed by 2166
Abstract
Large-truck crashes often result in substantial economic and social costs. Accurately predicting the severity level of a reported truck crash can help rescue teams and emergency medical services take the right actions and provide proper medical care, thereby reducing those costs. This study investigates the modeling issues in using machine learning to predict the severity level of large-truck crashes. Six representative machine learning (ML) methods were selected: four tree-based classification models, the Extreme Gradient Boosting tree (XGBoost), the Adaptive Boosting tree (AdaBoost), Random Forest (RF), and the Gradient Boost Decision Tree (GBDT), and two non-tree-based models, Support Vector Machines (SVM) and k-Nearest Neighbors (k-NN). The accuracy of the six methods was compared, and the effect of data balancing on prediction performance was tested with three resampling techniques: undersampling, oversampling, and mixed sampling. The results indicated that better predictions were obtained using a dataset whose distribution resembles the original sample population rather than datasets with a balanced sample population. The tree-based ML models outperformed the non-tree-based models, and the GBDT model performed best among all six.
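The resampling comparison is easy to reproduce in outline. The sketch below trains a GBDT on the original distribution and on under- and oversampled training sets, then compares test accuracy; the synthetic data stands in for the crash dataset, which is not reproduced here.

```python
# Hedged sketch of comparing resampling strategies with a GBDT; synthetic
# data stands in for the large-truck crash dataset.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("original", None),
                      ("undersample", RandomUnderSampler(random_state=0)),
                      ("oversample", RandomOverSampler(random_state=0))]:
    Xs, ys = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    acc = GradientBoostingClassifier().fit(Xs, ys).score(X_te, y_te)
    print(f"{name}: test accuracy = {acc:.3f}")
```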

14 pages, 7590 KB  
Article
Detecting Human Actions in Drone Images Using YoloV5 and Stochastic Gradient Boosting
by Tasweer Ahmad, Marc Cavazza, Yutaka Matsuo and Helmut Prendinger
Sensors 2022, 22(18), 7020; https://doi.org/10.3390/s22187020 - 16 Sep 2022
Cited by 33 | Viewed by 7410
Abstract
Human action recognition and detection from unmanned aerial vehicles (UAVs), or drones, has emerged as a popular technical challenge in recent years, since it is relevant to many use cases from environmental monitoring to search and rescue. It faces a number of difficulties stemming mainly from image acquisition, image content, and processing constraints. Because drones' flying conditions constrain image acquisition, human subjects may appear at variable scales and orientations and under occlusion, which makes action recognition more difficult. We explore low-resource methods for ML (machine learning)-based action recognition using a previously collected real-world dataset (the "Okutama-Action" dataset), which contains representative situations for action recognition while controlling image acquisition parameters such as camera angle and flight altitude. We investigate a combination of object recognition and classifier techniques to support single-image action identification. Our architecture integrates YoloV5 with a gradient boosting classifier; the rationale is to couple a scalable and efficient object recognition system with a classifier able to incorporate samples of variable difficulty. In an ablation study, we test different YoloV5 architectures and evaluate the performance of our method on the Okutama-Action dataset. Our approach outperformed previous architectures applied to the Okutama dataset, which differed in their object identification and classification pipelines; we hypothesize that this reflects both YoloV5's performance and the overall adequacy of our pipeline to the specificities of the Okutama dataset in terms of the bias–variance tradeoff.
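The two-stage pipeline (detector, then boosted classifier over per-box crops) can be sketched as follows. The raw-pixel crop features, checkpoint, and label handling are deliberate simplifications and assumptions; the paper's actual feature extraction is not reproduced.

```python
# Hedged sketch of a YoloV5 + gradient boosting pipeline; the crop features
# (resized raw pixels) and checkpoint are illustrative assumptions.
import cv2
import numpy as np
import torch
from sklearn.ensemble import GradientBoostingClassifier

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")  # COCO-pretrained

def person_crop_features(image, size=(64, 64)):
    dets = detector(image).xyxy[0].cpu().numpy()  # rows: x1, y1, x2, y2, conf, cls
    crops = [cv2.resize(image[int(y1):int(y2), int(x1):int(x2)], size).ravel()
             for x1, y1, x2, y2, conf, cls in dets if int(cls) == 0]  # 0 = person
    return np.array(crops)

# Training pairs such crop features with per-box action labels (hypothetical):
# clf = GradientBoostingClassifier().fit(train_features, train_labels)
# actions = clf.predict(person_crop_features(frame))
```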
(This article belongs to the Section Sensing and Imaging)

20 pages, 13742 KB  
Article
Sensor Fusion-Based Cooperative Trail Following for Autonomous Multi-Robot System
by Mingyang Geng, Shuqi Liu and Zhaoxia Wu
Sensors 2019, 19(4), 823; https://doi.org/10.3390/s19040823 - 17 Feb 2019
Cited by 4 | Viewed by 3816
Abstract
Autonomously following a man-made trail in the wild is a challenging problem for robotic systems. Recently, deep learning-based approaches have cast trail following as an image classification task and achieved great success on the vision-based trail-following problem. However, existing research focuses only on single-robot systems, whereas many real-world robotic tasks, such as search and rescue, are conducted by groups of robots. When robots move as a group in the wild, they can cooperate for more robust performance: each robot can periodically exchange vision data with the others and make decisions based on both its local view and the information received. This paper proposes a sensor fusion-based cooperative trail-following method that enables a group of robots to perform the trail-following task by fusing the sensor data of each robot. Our method has the robots face the same direction from different altitudes, fuses the vision features at the collective level, and then lets each robot act individually. In addition, to respect the quality-of-service requirements of the robotic software, our method gates the sensor data fusion process with a "threshold" mechanism. Qualitative and quantitative experiments on a real-world dataset show that our method significantly improves recognition accuracy and yields more robust performance than a single-robot system.
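The "threshold" gating can be illustrated with a short sketch: a robot acts on its local prediction when confident and triggers collective feature fusion only otherwise. The fusion rule (mean of peer feature vectors) and the threshold value are assumptions, not the paper's specification.

```python
# Hedged sketch of threshold-gated cooperative fusion; the fusion rule and
# threshold are illustrative assumptions.
import numpy as np

def decide(local_probs, peer_features, classify, threshold=0.8):
    """local_probs: this robot's class probabilities (e.g., over
    {left, straight, right}); peer_features: feature vectors shared by peers;
    classify: maps a fused feature vector to class probabilities."""
    if local_probs.max() >= threshold:
        return int(local_probs.argmax())      # confident: act on the local view
    fused = np.mean(peer_features, axis=0)    # otherwise fuse at collective level
    return int(classify(fused).argmax())
```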
(This article belongs to the Special Issue Deep Learning-Based Image Sensors)
