Computational Sport Science and Sport Analytics

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (6 March 2020) | Viewed by 9877

Special Issue Editors


Dr. Boris Bačić
Guest Editor
School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1142, New Zealand
Interests: artificial intelligence; image processing; computer vision; human movement; sports sciences

Dr. Nabin Sharma
Guest Editor
School of Software, Faculty of Engineering & IT, University of Technology Sydney, Broadway, NSW 2007, Australia
Interests: artificial intelligence (AI); deep learning; human action/activity recognition; multi-modal video analysis; aerial image processing; object detection and recognition; crowd behavior analysis

Dr. Peter Lamb
Guest Editor
School of Physical Education, Sport and Exercise Sciences, University of Otago, Dunedin 9054, New Zealand
Interests: multi-dimensional coordination and data visualization techniques; self-organizing maps (SOMs); sports biomechanics; sports performance analysis; player positional data

Special Issue Information

Dear Colleagues,

Applying AI to rehabilitation, healthcare, and sport science means advancing and augmenting the ways in which movement activities and sport are experienced, coached, played, promoted, broadcast, and commercialized. As an addition to the nascent area of Sport Analytics, Computational Sport Science focuses on data-driven machine-learning approaches and human motion modelling and analysis (HMMA). Motion data can be obtained from mobile apps, action and depth cameras, deep learning-based computer vision systems, 3D motion capture systems, sport gadgets, inertial and exergame sensors, rehabilitation assistive technologies, and other wearable computing devices.

The focus of this Special Issue “Computational Sport Science and Sport Analytics” is on topics such as:

  • Acquiring and processing movement information from various sources;
  • Augmenting feedback and intervention (near real-time or post activity);
  • Providing diagnostic capability and insights from data;
  • Finding patterns in specific human activity contexts;
  • Generating knowledge from data;
  • Validating experts’ tacit and common-sense rules;
  • Supporting or offloading decision-making;
  • Assisting or offloading cognitive functions associated with human activities.

Research and development in these topics can also contribute to next-generation augmented coaching systems and technology (ACST), targeted at improving rehabilitation, quality of life associated with our ability to move, and related aspects such as performance, safety, response times, consistency, energy efficiency, motor skills, and sport-specific technique.

Prior to the submission deadline, authors are invited to attend a workshop on “Computational Sport Science: Human Motion Modelling and Analysis”, held in conjunction with the 2019 International Joint Conference on Neural Networks (IJCNN), 14–19 July 2019. The workshop will provide an opportunity for attendees to engage in discussions on topics such as bridging the gap between biomechanics and expert feedback, and to receive feedback on their research. It will also offer opportunities and insights for attendees to engage in research aimed at creating strategic differences in elite sports and at developing sports gadgets, exergames, and rehabilitation technologies. In addition to calling for submissions to the Special Issue of Information titled “Computational Sport Science and Sport Analytics”, we also invite authors interested in extending their IJCNN conference or workshop proceedings papers for the journal, provided they contain at least 50% new content.

Dr. Boris Bačić
Dr. Nabin Sharma
Dr. Peter Lamb
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human motion modelling and analysis (HMMA)
  • augmented coaching systems and technology (ACST)
  • video and sensor signal processing
  • sport and rehabilitation technology
  • motion and human performance data acquisition systems

Published Papers (2 papers)


Research

14 pages, 5322 KiB  
Article
Task-Oriented Muscle Synergy Extraction Using An Autoencoder-Based Neural Model
by Domenico Buongiorno, Giacomo Donato Cascarano, Cristian Camardella, Irio De Feudis, Antonio Frisoli and Vitoantonio Bevilacqua
Information 2020, 11(4), 219; https://doi.org/10.3390/info11040219 - 17 Apr 2020
Cited by 9 | Viewed by 4211
Abstract
The growing interest in wearable robots opens the challenge of developing intuitive and natural control strategies. Among several human–machine interaction approaches, myoelectric control consists of decoding the motor intention from muscular activity (or EMG signals) with the aim of driving prosthetic or assistive robotic devices accordingly, thus establishing an intimate human–machine connection. In this scenario, bio-inspired approaches, e.g., synergy-based controllers, have been shown to be the most robust. However, the synergy-based myo-controllers already proposed in the literature rely on muscle patterns computed using only the total variance reconstruction rate of the EMG signals, without taking into account the performance of the controller in the task (or application) space. In this work, extending a previous study, the authors present an autoencoder-based neural model able to extract muscle synergies for motion intention detection while optimizing the task performance in terms of force/moment reconstruction. The proposed neural topology has been validated with EMG signals acquired from the main upper limb muscles during planar isometric reaching tasks performed in a virtual environment while wearing an exoskeleton. The presented model has been compared with the non-negative matrix factorization algorithm (i.e., the most used approach in the literature) in terms of muscle synergy extraction quality, and with three techniques already presented in the literature in terms of the goodness of the predicted shoulder and elbow moments. The results of the experimental comparisons showed that the proposed model outperforms the state-of-the-art synergy-based joint moment estimators at the expense of the quality of the EMG signal reconstruction. These findings demonstrate that a trade-off can be achieved between the capability of the extracted muscle synergies to better describe the variability of the EMG signals and the task performance in terms of force reconstruction. The results of this study might open new horizons for synergy extraction methodologies and optimized synergy-based myo-controllers and, perhaps, reveal useful hints about their origin.
(This article belongs to the Special Issue Computational Sport Science and Sport Analytics)
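To illustrate the kind of task-oriented synergy extraction described in this abstract, the sketch below shows a minimal autoencoder in PyTorch whose bottleneck activations play the role of synergy activations: the decoder reconstructs the EMG envelopes while an auxiliary linear head predicts joint moments, so the learned synergies are shaped by task performance as well as by EMG reconstruction. This is a hedged reading of the abstract, not the authors' implementation; the layer sizes, the trade-off weight alpha, and all variable names are assumptions.

```python
# Illustrative sketch only: a task-oriented autoencoder for muscle synergy
# extraction. The bottleneck acts as synergy activations, the decoder
# reconstructs EMG envelopes, and a linear head estimates joint moments.
import torch
import torch.nn as nn

N_MUSCLES = 8      # e.g., main upper-limb muscles (assumption)
N_SYNERGIES = 4    # number of extracted synergies (assumption)
N_MOMENTS = 2      # e.g., shoulder and elbow moments (assumption)

class TaskOrientedSynergyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_MUSCLES, N_SYNERGIES), nn.ReLU())
        self.decoder = nn.Linear(N_SYNERGIES, N_MUSCLES)    # EMG reconstruction
        self.task_head = nn.Linear(N_SYNERGIES, N_MOMENTS)  # joint-moment estimate

    def forward(self, emg):
        h = self.encoder(emg)                  # synergy activations
        return self.decoder(h), self.task_head(h), h

model = TaskOrientedSynergyAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
alpha = 0.5  # trade-off between EMG reconstruction and task (moment) accuracy

def train_step(emg_batch, moment_batch):
    """One optimization step on a batch of EMG envelopes and measured moments."""
    opt.zero_grad()
    emg_hat, moments_hat, _ = model(emg_batch)
    loss = (1 - alpha) * mse(emg_hat, emg_batch) + alpha * mse(moments_hat, moment_batch)
    loss.backward()
    opt.step()
    return loss.item()
```

Setting alpha to zero reduces the sketch to a reconstruction-only autoencoder (closer in spirit to NMF-style extraction), which mirrors the trade-off between EMG reconstruction quality and task performance that the abstract describes.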

17 pages, 1366 KiB  
Article
Automatic Sorting of Dwarf Minke Whale Underwater Images
by Dmitry A. Konovalov, Natalie Swinhoe, Dina B. Efremova, R. Alastair Birtles, Martha Kusetic, Suzanne Hillcoat, Matthew I. Curnock, Genevieve Williams and Marcus Sheaves
Information 2020, 11(4), 200; https://doi.org/10.3390/info11040200 - 9 Apr 2020
Cited by 4 | Viewed by 4743
Abstract
A predictable aggregation of dwarf minke whales (a subspecies of Balaenoptera acutorostrata) occurs annually in the Australian waters of the northern Great Barrier Reef in June–July and has been the subject of a long-term photo-identification study. Researchers from the Minke Whale Project (MWP) at James Cook University collect large volumes of underwater digital imagery each season (e.g., 1.8 TB in 2018), much of which is contributed by citizen scientists. Manual processing and analysis of this quantity of data had become infeasible, and Convolutional Neural Networks (CNNs) offered a potential solution. Our study sought to design and train a CNN that could detect whales in video footage of complex near-surface underwater surroundings and differentiate the whales from people, boats and recreational gear. We modified known classification CNNs to localise whales in video frames and digital still images. The required high classification accuracy was achieved by discovering an effective negative-labelling training technique. This resulted in a false-positive classification rate of less than 1% and a false-negative rate below 0.1%. The final operational version of the CNN pipeline processed all videos (sampling every 10th frame) in approximately four days running on two GPUs, delivering 1.95 million sorted images.
(This article belongs to the Special Issue Computational Sport Science and Sport Analytics)
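As a rough illustration of the negative-labelling idea mentioned in the abstract, the sketch below builds a binary whale/background classifier from a standard backbone and adds a mining step that collects frames the current model confidently calls "whale", so that candidates rejected on review can be fed back as explicit negative examples in the next training round. The backbone choice (ResNet-18), the confidence threshold, and the helper names in the outline are assumptions for illustration only, not the authors' pipeline.

```python
# Illustrative sketch of a negative-labelling training loop for sorting
# underwater frames into whale vs. background classes.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier():
    net = models.resnet18()                    # any known classification CNN
    net.fc = nn.Linear(net.fc.in_features, 2)  # classes: 0 = background, 1 = whale
    return net

def mine_negatives(model, frames, threshold=0.9):
    """Collect preprocessed frames (tensors of shape [3, H, W]) that the current
    model confidently labels 'whale'; rejected candidates become new negatives."""
    model.eval()
    candidates = []
    with torch.no_grad():
        for frame in frames:
            prob = torch.softmax(model(frame.unsqueeze(0)), dim=1)[0, 1].item()
            if prob > threshold:
                candidates.append(frame)
    return candidates

# Training outline with hypothetical helpers (train_epochs, review_by_expert):
# model = build_classifier()
# for _ in range(n_rounds):
#     train_epochs(model, positives + negatives)
#     hard = mine_negatives(model, unlabelled_frames)
#     negatives += [f for f in hard if not review_by_expert(f)]  # negative labelling
```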
