Advanced Techniques in Computer Vision and Multimedia II

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (10 April 2024)

Special Issue Editor


Prof. Dr. Yang Wang
Guest Editor
School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601, China
Interests: pattern recognition; machine learning; multimedia computing

Special Issue Information

Dear Colleagues,

With the popularization of artificial intelligence (AI) technology, computer vision has advanced significantly and achieved great success in areas of direct societal relevance, e.g., autonomous driving, virtual reality, mixed reality, and healthcare. As a research topic, computer vision aims to enable computer systems to automatically see, recognize, and understand the visual world by simulating the mechanisms of human vision.

Multimedia has also changed our lifestyles and is becoming an indispensable part of our daily lives. This research field mainly concerns emerging computing methods for handling the various media (pictures, text, audio, video, etc.) generated by ubiquitous multimedia sensors and infrastructures, including the retrieval of multimedia data, the analysis of multimedia content, deep-learning-based methodology, and practical multimedia applications.

Many researchers have devoted themselves to exploring emerging fields at the intersection of computer vision and multimedia, e.g., adversarial learning for multimedia, multimodal sentiment analysis, and explainable AI, and numerous advanced technologies in these areas continue to emerge. This Special Issue provides a timely collection of research updates and will benefit researchers and practitioners engaged in computer vision, media computing, machine learning, and related fields.

Prof. Dr. Yang Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • motion and tracking
  • image and video retrieval
  • detection and localization
  • scene analysis and understanding
  • multimedia systems
  • multimedia for society and health
  • multimedia application and services
  • multimedia security and content protection
  • multimedia communications, networking, and mobility

Published Papers (1 paper)


Research

14 pages, 5787 KiB  
Article
Object and Event Detection Pipeline for Rink Hockey Games
by Jorge Miguel Lopes, Luis Paulo Mota, Samuel Marques Mota, José Manuel Torres, Rui Silva Moreira, Christophe Soares, Ivo Pereira, Feliz Ribeiro Gouveia and Pedro Sobral
Future Internet 2024, 16(6), 179; https://doi.org/10.3390/fi16060179 - 21 May 2024
Abstract
All types of sports are potential application scenarios for automatic, real-time visual object and event detection. In rink hockey, the popular roller-skate variant of team hockey, it is of great interest to automatically track player movements, positions, and sticks, and also to make other judgments, such as locating the ball. In this work, we present a real-time pipeline consisting of an object detection model specifically designed for rink hockey games, followed by a knowledge-based event detection module. Even in the presence of occlusions and fast movements, our deep learning object detection model effectively identifies and tracks important visual elements in real time, such as the ball, players, sticks, referees, crowd, goalkeeper, and goal. Using a curated dataset of rink hockey videos containing 2525 annotated frames, we trained the model, evaluated its performance, and compared it to state-of-the-art object detection techniques. Our object detection model, based on YOLOv7, achieves a global accuracy of 80% and, according to our results, performs well in terms of both accuracy and speed, making it a good choice for rink hockey applications. In our initial tests, the event detection module successfully detected an important event type in rink hockey games, namely, the occurrence of penalties.
(This article belongs to the Special Issue Advanced Techniques in Computer Vision and Multimedia II)
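The two-stage pipeline described in the abstract, per-frame object detections feeding a knowledge-based event module, can be sketched as follows. This is a minimal illustration only: the class labels match those listed in the abstract, but the penalty rule and confidence threshold below are hypothetical assumptions, not the rules used in the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """One object found by the detector in a single frame."""
    label: str                               # e.g. "ball", "referee", "goal"
    conf: float                              # detector confidence in [0, 1]
    box: tuple[float, float, float, float]   # (x1, y1, x2, y2) bounding box

def detect_event(frame_detections: list[Detection],
                 conf_threshold: float = 0.5) -> Optional[str]:
    """Knowledge-based rule applied to one frame's detections.

    Illustrative rule only: flag a candidate penalty when a referee,
    the ball, and the goal are all confidently detected together.
    The paper's actual event rules are not reproduced here.
    """
    labels = {d.label for d in frame_detections if d.conf >= conf_threshold}
    if {"referee", "ball", "goal"} <= labels:
        return "penalty_candidate"
    return None
```

In a full system, `frame_detections` would come from the YOLOv7 detector's per-frame output, and a temporal smoothing step (e.g., requiring the rule to fire over several consecutive frames) would reduce spurious single-frame triggers.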