Search Results (5)

Search Parameters:
Keywords = UCF YouTube Action dataset

14 pages, 1214 KiB  
Article
Deep-Learning-Based Sequence Causal Long-Term Recurrent Convolutional Network for Data Fusion Using Video Data
by DaeHyeon Jeon and Min-Suk Kim
Electronics 2023, 12(5), 1115; https://doi.org/10.3390/electronics12051115 - 24 Feb 2023
Cited by 4 | Viewed by 2915
Abstract
The purpose of AI-based schemes in intelligent systems is to advance and optimize system performance. Most intelligent systems adopt sequential data types derived from such systems. Real-time video data, for example, are continuously updated as a sequence to make the predictions necessary for efficient system performance. Deep-learning-based network architectures for sequence data fusion, such as long short-term memory (LSTM), data fusion, two-stream, and temporal convolutional networks (TCN), are generally used to enhance robust system efficiency. In this paper, we propose a deep-learning-based neural network architecture for non-fixed data that uses both a causal convolutional neural network (CNN) and a long-term recurrent convolutional network (LRCN). Causal CNNs and LRCNs incorporate convolutional layers for feature extraction, so both architectures can process sequential data such as time series or video data used in a variety of applications. Both architectures extract features from the input sequence data to reduce its dimensionality, capture the important information, and learn hierarchical representations for effective sequence-processing tasks. We also adopt the concept of a series compact convolutional recurrent neural network (SCCRNN), a type of neural network architecture that compactly combines convolutional and recurrent layers for processing sequential data, reducing the number of parameters and memory usage while maintaining high accuracy. The architecture is suitable for continuously incoming sequential video data and combines the advantages of both LSTM-based and CNN-based networks. To verify this method, we evaluated it through a sequence learning model, with the network parameters and memory required in real environments, on the UCF-101 dataset, an action recognition dataset of realistic action videos collected from YouTube with 101 action categories. The results show that the proposed sequence causal long-term recurrent convolutional network (SCLRCN) provides a performance improvement of approximately 12% or more compared with the existing models (LRCN and TCN).
(This article belongs to the Special Issue Artificial Intelligence (AI) for Image Processing)
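
As a rough illustration of the idea behind combining causal convolutions with an LSTM-style recurrent head over per-frame features, the PyTorch sketch below left-pads a 1D convolution so each output step depends only on current and past frames before feeding an LSTM. The layer sizes and the per-frame feature input are hypothetical placeholders; this is a minimal sketch of the general technique, not the authors' SCLRCN implementation.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1D convolution that only sees current and past time steps."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-padding only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))    # pad the past, not the future
        return self.conv(x)

class CausalLRCNSketch(nn.Module):
    """Toy causal-CNN + LSTM head over per-frame feature vectors."""
    def __init__(self, feat_dim=512, hidden=256, num_classes=101):
        super().__init__()
        self.causal = CausalConv1d(feat_dim, hidden, kernel_size=3)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, frame_feats):                # (batch, time, feat_dim)
        x = self.causal(frame_feats.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(x)                      # (batch, time, hidden)
        return self.fc(out[:, -1])                 # classify from the last step

# Smoke test with random per-frame features for two 16-frame clips.
logits = CausalLRCNSketch()(torch.randn(2, 16, 512))
print(logits.shape)   # torch.Size([2, 101])
```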

16 pages, 2341 KiB  
Article
Motion Video Recognition in Speeded-Up Robust Features Tracking
by Jianguang Zhang, Yongxia Li, An Tai, Xianbin Wen and Jianmin Jiang
Electronics 2022, 11(18), 2959; https://doi.org/10.3390/electronics11182959 - 18 Sep 2022
Cited by 7 | Viewed by 2022
Abstract
Motion video recognition has been well explored in applications of computer vision. In this paper, we propose a novel video representation that enhances motion recognition in videos based on SURF (Speeded-Up Robust Features) and two filters. Firstly, the detector scheme of SURF is used to detect candidate points in the video because it is an efficient and fast local feature detector. Secondly, by using the optical flow field and trajectories, the feature points can be filtered from the candidate points, which enables robust and efficient extraction of motion feature points. Additionally, we introduce a descriptor, called MoSURF (Motion Speeded-Up Robust Features), based on SURF, HOG (Histogram of Oriented Gradients), HOF (Histograms of Optical Flow), MBH (Motion Boundary Histograms), and trajectory information, which effectively describe motion information and are complementary to each other. We evaluate our video representation under action classification on three motion video datasets, namely KTH, YouTube, and UCF50. Compared with state-of-the-art methods, the proposed method achieves superior results on all datasets.
(This article belongs to the Special Issue Multimedia Information Retrieval: From Theory to Applications)
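
To illustrate the general idea of filtering candidate feature points by motion, the sketch below detects SURF keypoints (available via opencv-contrib) and keeps only those lying on pixels whose dense Farneback optical-flow magnitude exceeds a threshold. The threshold, the ORB fallback, and the synthetic frames are assumptions for the example; this is not the paper's MoSURF pipeline.

```python
import cv2
import numpy as np

def motion_keypoints(prev_gray, curr_gray, flow_thresh=1.0, hessian=400):
    """Detect SURF keypoints on the current frame and keep only those
    lying on pixels with significant optical-flow magnitude."""
    try:
        # SURF lives in the opencv-contrib xfeatures2d module.
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    except (AttributeError, cv2.error):
        # Fallback for builds without the non-free SURF implementation.
        detector = cv2.ORB_create()

    keypoints = detector.detect(curr_gray, None)

    # Dense Farneback optical flow between the two consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    h, w = curr_gray.shape
    moving = []
    for kp in keypoints:
        x = min(int(kp.pt[0]), w - 1)
        y = min(int(kp.pt[1]), h - 1)
        if magnitude[y, x] > flow_thresh:   # keep only moving candidates
            moving.append(kp)
    return moving

# Toy example with two synthetic frames (real use: consecutive video frames).
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
cv2.circle(curr, (160, 120), 20, 255, -1)   # a bright "moving" blob
print(len(motion_keypoints(prev, curr)))
```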

17 pages, 1411 KiB  
Article
Human Activity Classification Using the 3DCNN Architecture
by Roberta Vrskova, Robert Hudec, Patrik Kamencay and Peter Sykora
Appl. Sci. 2022, 12(2), 931; https://doi.org/10.3390/app12020931 - 17 Jan 2022
Cited by 88 | Viewed by 10513
Abstract
Interest in utilizing neural networks in a variety of scientific and academic studies and in industrial applications is increasing. In addition to the growing interest in neural networks, there is also rising interest in video classification. Object detection from an image is used as a tool for various applications and is the basis for video classification. Identifying objects in videos is more difficult than in single images, as the information in videos has a time-continuity constraint. Common neural networks such as ConvLSTM (Convolutional Long Short-Term Memory) and 3DCNN (3D Convolutional Neural Network), as well as many others, have been used to detect objects from video. Here, we propose a 3DCNN for the detection of human activity from video data. The experimental results show that the optimized proposed 3DCNN provides better results than neural network architectures based on motion, static and hybrid features. The proposed 3DCNN obtains the highest recognition precision of the methods considered, 87.4%, whereas the architectures based on motion, static and hybrid features achieve precisions of 65.4%, 63.1% and 71.2%, respectively. We also compare our results with previous research: an earlier 3DCNN architecture achieved only 29% on the UCF YouTube Action dataset, worse than the architecture proposed in this article. The experimental results on the UCF YouTube Action dataset demonstrate the effectiveness of the proposed 3DCNN for recognition of human activity. For a more comprehensive comparison, the proposed neural network was also evaluated on the modified UCF101 dataset, the full UCF50 dataset and the full UCF101 dataset. An overall precision of 82.7% was obtained on the modified UCF101 dataset, while the precision on the full UCF50 and full UCF101 datasets was 80.6% and 78.5%, respectively.
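
A minimal PyTorch sketch of how a 3D convolutional classifier slides over both the spatial and temporal dimensions of a clip is shown below. The layer sizes and clip shape are placeholder assumptions, and the class count of 11 only reflects the commonly cited size of the UCF YouTube Action dataset; this is not the optimized architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Toy 3D CNN: convolutions slide over (time, height, width)."""
    def __init__(self, num_classes=11):            # e.g. 11 UCF YouTube classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),    # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),            # pool time and space
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                        # (batch, 3, frames, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

# Smoke test on two random 16-frame 112x112 RGB clips.
model = Small3DCNN()
print(model(torch.randn(2, 3, 16, 112, 112)).shape)   # torch.Size([2, 11])
```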

18 pages, 2079 KiB  
Article
EduNet: A New Video Dataset for Understanding Human Activity in the Classroom Environment
by Vijeta Sharma, Manjari Gupta, Ajai Kumar and Deepti Mishra
Sensors 2021, 21(17), 5699; https://doi.org/10.3390/s21175699 - 24 Aug 2021
Cited by 25 | Viewed by 10277
Abstract
Human action recognition in videos has become a popular research area in artificial intelligence (AI) technology. In the past few years, this research has accelerated in areas such as sports, daily activities and kitchen activities, due to developments in the benchmark datasets proposed for human action recognition in these areas. However, there is little research into benchmark datasets for human activity recognition in educational environments. Therefore, we developed a dataset of teacher and student activities to expand research in the education domain. This paper proposes a new dataset, called EduNet, as a novel approach towards developing human action recognition datasets in classroom environments. EduNet has 20 action classes, containing around 7851 manually annotated clips extracted from YouTube videos and recorded in an actual classroom environment. Each action category has a minimum of 200 clips, and the total duration is approximately 12 h. To the best of our knowledge, EduNet is the first dataset specially prepared for classroom monitoring of both teacher and student activities. It is also a challenging action dataset because of its large number of clips and their unconstrained nature. We compared the performance of the EduNet dataset with the benchmark video datasets UCF101 and HMDB51 on a standard I3D-ResNet-50 model, which resulted in 72.3% accuracy. The development of a new benchmark dataset for the education domain will benefit future research concerning classroom monitoring systems. The EduNet dataset is a collection of classroom activities from schools covering standards 1 to 12.
(This article belongs to the Special Issue Human Activity Detection and Recognition)
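
As a hedged sketch of how a folder-per-class clip collection such as EduNet might be indexed for training, the snippet below builds (clip path, label) pairs from a hypothetical root/<action>/<clip>.mp4 layout. The directory structure and file extension are assumptions for illustration, not the published EduNet format.

```python
from pathlib import Path

def index_clip_dataset(root):
    """Index a folder-per-class clip layout: root/<action>/<clip>.mp4.
    Returns (clip_path, label_id) pairs plus the ordered class-name list."""
    root = Path(root)
    classes = sorted(d.name for d in root.iterdir() if d.is_dir())
    label_of = {name: i for i, name in enumerate(classes)}
    samples = [(str(clip), label_of[name])
               for name in classes
               for clip in sorted((root / name).glob("*.mp4"))]
    return samples, classes

# Hypothetical usage with a local copy laid out as EduNet/<action>/<clip>.mp4:
# samples, classes = index_clip_dataset("EduNet")
# print(len(classes), len(samples))   # e.g. 20 classes, ~7851 clips
```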

17 pages, 8954 KiB  
Article
Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness
by Amir Nadeem, Ahmad Jalal and Kibum Kim
Symmetry 2020, 12(11), 1766; https://doi.org/10.3390/sym12111766 - 24 Oct 2020
Cited by 82 | Viewed by 3540
Abstract
Recent developments in sensor technologies enable physical activity recognition (PAR) as an essential tool for smart health monitoring and fitness exercises. For efficient PAR, model representation and training are significant factors contributing to the ultimate success of recognition systems, because body parts and physical activities cannot be accurately detected and distinguished if the system is not well trained. This paper provides a unified framework that explores multidimensional features via a fusion of body part models and quadratic discriminant analysis, which uses these features for markerless human pose estimation. Multilevel features are extracted as displacement parameters that act as spatiotemporal properties; these properties represent the respective positions of the body parts with respect to time. Finally, these features are processed by a maximum entropy Markov model as a recognition engine based on transition and emission probability values. Experimental results demonstrate that the proposed model produces more accurate results than state-of-the-art methods for both body part detection and physical activity recognition. The accuracy of the proposed method for body part detection is 90.91% on the University of Central Florida (UCF) sports action dataset, and for activity recognition on the UCF YouTube Action dataset and the IM-DailyRGBEvents dataset the accuracy is 89.09% and 88.26%, respectively.
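
The recognition engine described above relies on transition and emission probabilities. The sketch below shows generic Viterbi decoding over log-probability matrices to recover the most likely activity sequence; the random placeholder probabilities and the plain Markov formulation are assumptions for illustration, not the paper's learned maximum entropy Markov model.

```python
import numpy as np

def viterbi_decode(log_emission, log_transition, log_prior):
    """Most likely state sequence for a Markov chain of activities.

    log_emission:   (T, S) log P(observation_t | state)
    log_transition: (S, S) log P(state_t | state_{t-1})
    log_prior:      (S,)   log P(state_0)
    """
    T, S = log_emission.shape
    score = log_prior + log_emission[0]
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_transition      # (prev_state, curr_state)
        backptr[t] = cand.argmax(axis=0)            # best predecessor per state
        score = cand.max(axis=0) + log_emission[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                   # backtrace from the end
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy example: 3 activities over 5 time steps with random probabilities.
rng = np.random.default_rng(0)
em = np.log(rng.dirichlet(np.ones(3), size=5))
tr = np.log(rng.dirichlet(np.ones(3), size=3))
prior = np.log(np.full(3, 1 / 3))
print(viterbi_decode(em, tr, prior))
```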
