Search Results (144)

Search Parameters:
Keywords = video games information

22 pages, 7735 KiB  
Article
Visual Perception of Peripheral Screen Elements: The Impact of Text and Background Colors
by Snježana Ivančić Valenko, Marko Čačić, Ivana Žiljak Stanimirović and Anja Zorko
Appl. Sci. 2025, 15(14), 7636; https://doi.org/10.3390/app15147636 - 8 Jul 2025
Viewed by 383
Abstract
Visual perception of screen elements depends on their color, font, and position in the user interface design. Objects in the central part of the screen are perceived more easily than those in the peripheral areas. However, the peripheral space is valuable for applications like advertising and promotion and should not be overlooked. Optimizing the design of elements in this area can improve user attention to peripheral visual stimuli during focused tasks. This study aims to evaluate how different combinations of text and background color affect the visibility of moving textual stimuli in the peripheral areas of the screen, while attention is focused on a central task. This study investigates how background color, combined with white or black text, affects the attention of participants. It also identifies which background color makes a specific word most noticeable in the peripheral part of the screen. We designed quizzes to present stimuli with black or white text on various background colors in the peripheral regions of the screen. The background colors tested were blue, red, yellow, green, white, and black. While saturation and brightness were kept constant, the color tone was varied. Among ten combinations of background and text color, we aimed to determine the most noticeable combination in the peripheral part of the screen. The combination of white text on a blue background resulted in the shortest detection time (1.376 s), while black text on a white background achieved the highest accuracy rate at 79%. The results offer valuable insights for improving peripheral text visibility in user interfaces across various visual communication domains such as video games, television content, and websites, where peripheral information must remain noticeable despite centrally focused user attention and complex viewing conditions. Full article

29 pages, 366 KiB  
Article
Video-Driven Artificial Intelligence for Predictive Modelling of Antimicrobial Peptide Generation: Literature Review on Advances and Challenges
by Jielu Yan, Zhengli Chen, Jianxiu Cai, Weizhi Xian, Xuekai Wei, Yi Qin and Yifan Li
Appl. Sci. 2025, 15(13), 7363; https://doi.org/10.3390/app15137363 - 30 Jun 2025
Viewed by 588
Abstract
How video-based methodologies and advanced computer vision algorithms can facilitate the development of antimicrobial peptide (AMP) generation models should be further reviewed, structural and functional patterns should be elucidated, and the generative power of in silico pipelines should be enhanced. AMPs have drawn significant interest as promising therapeutic agents because of their broad-spectrum efficacy, low resistance profile, and membrane-disrupting mechanisms. However, traditional discovery methods are hindered by high costs, lengthy synthesis processes, and difficulty in accessing the extensive chemical space involved in AMP research. Recent advances in artificial intelligence—especially machine learning (ML), deep learning (DL), and pattern recognition—offer game-changing opportunities to accelerate AMP design and validation. By integrating video analysis with computational modelling, researchers can visualise and quantify AMP–microbe interactions at unprecedented levels of detail, thereby informing both experimental design and the refinement of predictive algorithms. This review provides a comprehensive overview of these emerging techniques, highlights major breakthroughs, addresses critical challenges, and ultimately emphasises the powerful synergy between video-driven pattern recognition, AI-based modelling, and experimental validation in the pursuit of next-generation antimicrobial strategies. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
18 pages, 302 KiB  
Article
A Convergent Mixed-Methods Evaluation of a Co-Designed Evidence-Based Practice Module Underpinned by Universal Design for Learning Pedagogy
by Stephanie Craig, Hannah McConnell, Patrick Stark, Nuala Devlin, Claire McKeaveney and Gary Mitchell
Nurs. Rep. 2025, 15(7), 236; https://doi.org/10.3390/nursrep15070236 - 27 Jun 2025
Viewed by 442
Abstract
Background: The concept of evidence-based practice (EBP) is globally relevant in current healthcare climates. However, nursing students and teachers often struggle with integrating EBP effectively into a curriculum. This has implications for the way students learn to use evidence for their nursing practice. A new undergraduate EBP module was co-designed with current nursing students and university staff throughout 2023. Underpinning the module was a UDL (universal design for learning) pedagogy consisting of flexible approaches to learning for nursing students, which included co-developed videos, co-developed audio podcasts, and co-developed serious games to complement traditional flipped-classroom learning. The module commenced in September 2023, running in Year 1 of a 3-year undergraduate nursing program, and was co-taught by staff and senior students. Methods: A pre/post-test design was used to collect data on student attitude, knowledge, and utilization of EBP. A total of 430 students completed two validated questionnaires, the EBP Beliefs Scale© and EBP Implementation Scale©, before and after the module. Following the post-test, six focus group interviews were also conducted with 58 students to explore how the module informed student nursing practice whilst attending clinical placement during Year 1. A convergent mixed-methods design was employed. Sample attrition occurred (~25%). Effect sizes and 95% confidence intervals were calculated for primary outcomes. Results: Quantitative data were analyzed using paired t-tests, which highlighted statistically significant improvements in attitude, knowledge, and utilization of evidence-based practice after learning (p < 0.001). Qualitative data were transcribed verbatim and thematically analyzed, highlighting three main findings: EBP is my business, EBP positively influenced the care of my patients, and EBP has positively impacted my professional development.
Conclusions: Partnership with current nursing students in the co-design and implementation of a module about EBP was associated with improvements in student knowledge, attitude and utilization of evidence in practice. These factors are likely to also improve professional competence and ultimately patient care. Full article

23 pages, 12598 KiB  
Article
Integrating Augmented Reality and Geolocation for Outdoor Interactive Educational Experiences
by Christos Mourelatos and Michalis Vrigkas
Virtual Worlds 2025, 4(2), 18; https://doi.org/10.3390/virtualworlds4020018 - 7 May 2025
Viewed by 719
Abstract
This paper presents an augmented reality (AR) mobile application developed for Android devices, which brings five bust sculptures of historical personalities of the city of Komotini, Greece, to ‘life’ using the Unity engine. These busts narrate their achievements in two languages, Greek and English, to educate visitors on historical and cultural heritage and provide a comprehensive glimpse into the area’s past using 3D models, textures, and animations tailored to the educational content. Based on the users’ location, the application provides an interactive educational experience, allowing the users to explore the history and characteristics of the busts in an innovative way. The users may interact with the busts using markerless AR, discover information and historical facts about them, and stimulate their understanding of the busts’ significance in the context of local history and culture. Interactive elements, such as videos and 3D animations, are incorporated to enrich the learning experience. A location-based knowledge quiz game was also developed for this purpose. The application was evaluated by statistical analysis to measure the effect of using the application on the involvement of users in the educational process and to study the users’ satisfaction and experience. This approach revealed that the proposed AR app is effective in providing educational content, promotes active user participation, and provides a high level of user satisfaction. Full article

38 pages, 4448 KiB  
Article
Persistence and Evolution Within Interactive Design: An Integrated Approach to ICT Innovations in Emergent Game Narratives
by Mengfan Zou, Yuan Meng, Sara Cortés Gómez and Julia Sabina Gutierrez Sánchez
Technologies 2025, 13(5), 179; https://doi.org/10.3390/technologies13050179 - 1 May 2025
Viewed by 917
Abstract
Video games, as interactive artifacts within the continuum of information and communication technology (ICT), encapsulate an ontological inquiry: which mechanism maintains user engagement while evolving with ICT-driven innovations? How is this mechanism structured within video games in the competitive industry? This study analyzes the emergent narrative of the Animal Crossing franchise, focusing on the interplay between persistence and evolution, aligning with our inquiry by examining how technological integration, interactive design, and player agency co-construct narrative adaptations across generations. Employing an integrated approach, we introduced the ENSF framework to analyze emergent narrative mechanisms. On this basis, the qualitative walkthrough method and quantitative unsupervised learning methods—principal component analysis and VADER techniques—were used to examine narrative flow, linguistic metrics, and sentiment tendencies across four game generations and official materials (N = 37). This study contributes to (a) establishing the structural emergent narrative simulation framework (ENSF) delineating the narrative techniques’ interrelations—simulation, orientations, story events, resolutions, evaluations, and characters; and (b) interpreting how narrative mechanisms within interactive design balance persistence with evolution, proving that ICT innovations comply with player agency reinforcement. These discoveries establish a hermeneutic proposal identifying the socio-technological characteristics of interactive communications in video game design, emphasizing the dynamic balance within innovative gaming environments. Full article
(This article belongs to the Section Information and Communication Technologies)

29 pages, 2763 KiB  
Review
A Review of Computer Vision Technology for Football Videos
by Fucheng Zheng, Duaa Zuhair Al-Hamid, Peter Han Joo Chong, Cheng Yang and Xue Jun Li
Information 2025, 16(5), 355; https://doi.org/10.3390/info16050355 - 28 Apr 2025
Viewed by 1521
Abstract
In the era of digital advancement, the integration of Deep Learning (DL) algorithms is revolutionizing performance monitoring in football. Due to restrictions on monitoring devices during games to prevent unfair advantages, coaches are tasked to analyze players’ movements and performance visually. As a result, Computer Vision (CV) technology has emerged as a vital non-contact tool for performance analysis, offering numerous opportunities to enhance the clarity, accuracy, and intelligence of sports event observations. However, existing CV studies in football face critical challenges, including low-resolution imagery of distant players and balls, severe occlusion in crowded scenes, motion blur during rapid movements, and the lack of large-scale annotated datasets tailored for dynamic football scenarios. This review paper fills this gap by comprehensively analyzing advancements in CV, particularly in four key areas: player/ball detection and tracking, motion prediction, tactical analysis, and event detection in football. By exploring these areas, this review offers valuable insights for future research on using CV technology to improve sports performance. Future directions should prioritize super-resolution techniques to enhance video quality and improve small-object detection performance, collaborative efforts to build diverse and richly annotated datasets, and the integration of contextual game information (e.g., score differentials and time remaining) to improve predictive models. The in-depth analysis of current State-Of-The-Art (SOTA) CV techniques provides researchers with a detailed reference to further develop robust and intelligent CV systems in football. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

17 pages, 3021 KiB  
Article
Perceptions of Parents and Children About Videogame Use
by Michela Franzò, Gaia Maria Olivieri, Anna Salerni and Marco Iosa
Multimodal Technol. Interact. 2025, 9(3), 21; https://doi.org/10.3390/mti9030021 - 28 Feb 2025
Viewed by 1670
Abstract
This study aims to investigate the gap in perceptions of parents and children on the use of videogames in childhood. Methods: A survey was conducted with 75 pairs formed by a son or daughter and one parent. The data collected contradict the prejudice that playing video games reduces study time and leads to lower grades at school (R < 0.13). Our results support the idea that playing together fosters bonding and facilitates conversation. The impact of videogames on mood showed the most substantial differences in perception, with parents mainly reporting negative mood changes, while children reported similar frequencies of negative, neutral, and positive ones. In relation to the educational and informative potential of videogames, children had slightly more positive opinions than their parents (p < 0.001). Finally, more than half of the participants potentially agreed with the possibility of using videogames as academic tools. In conclusion, there is a gap between parents’ and children’s perceptions about videogaming, especially concerning their effects on children’s mood. Playing together and developing deeper knowledge about videogames could enhance positive effects on children’s development as well as their relationships with peers, parents, and at school. Full article

21 pages, 3599 KiB  
Article
Using Deep Learning to Identify Deepfakes Created Using Generative Adversarial Networks
by Jhanvi Jheelan and Sameerchand Pudaruth
Computers 2025, 14(2), 60; https://doi.org/10.3390/computers14020060 - 10 Feb 2025
Cited by 4 | Viewed by 2264
Abstract
Generative adversarial networks (GANs) have revolutionised various fields by creating highly realistic images, videos, and audio, thus enhancing applications such as video game development and data augmentation. However, this technology has also given rise to deepfakes, which pose serious challenges due to their potential to create deceptive content. Thousands of media reports have informed us of such occurrences, highlighting the urgent need for reliable detection methods. This study addresses the issue by developing a deep learning (DL) model capable of distinguishing between real and fake face images generated by StyleGAN. Using a subset of the 140K real and fake face dataset, we explored five different models: a custom CNN, ResNet50, DenseNet121, MobileNet, and InceptionV3. We leveraged the pre-trained models to utilise their robust feature extraction and computational efficiency, which are essential for distinguishing between real and fake features. Through extensive experimentation with various dataset sizes, preprocessing techniques, and split ratios, we identified the optimal ones. The 20k_gan_8_1_1 dataset produced the best results, with MobileNet achieving a test accuracy of 98.5%, followed by InceptionV3 at 98.0%, DenseNet121 at 97.3%, ResNet50 at 96.1%, and the custom CNN at 86.2%. All of these models were trained on only 16,000 images and validated and tested on 2000 images each. The custom CNN model was built with a simpler architecture of two convolutional layers and, hence, lagged in accuracy due to its limited feature extraction capabilities compared with deeper networks. This research work also included the development of a user-friendly web interface that allows deepfake detection by uploading images. The web interface backend was developed using Flask, enabling real-time deepfake detection, allowing users to upload images for analysis and demonstrating a practical use for platforms in need of quick, user-friendly verification. 
This application demonstrates significant potential for practical applications, such as on social media platforms, where the model can help prevent the spread of fake content by flagging suspicious images for review. This study makes important contributions by comparing different deep learning models, including a custom CNN, to understand the balance between model complexity and accuracy in deepfake detection. It also identifies the best dataset setup that improves detection while keeping computational costs low. Additionally, it introduces a user-friendly web tool that allows real-time deepfake detection, making the research useful for social media moderation, security, and content verification. Nevertheless, identifying specific features of GAN-generated deepfakes remains challenging due to their high realism. Future works will aim to expand the dataset by using all 140,000 images, refine the custom CNN model to increase its accuracy, and incorporate more advanced techniques, such as Vision Transformers and diffusion models. The outcomes of this study contribute to the ongoing efforts to counteract the negative impacts of GAN-generated images. Full article
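The 20k_gan_8_1_1 setup above corresponds to an 8:1:1 train/validation/test split (16,000/2,000/2,000 images). A minimal sketch with scikit-learn, assuming a two-stage stratified split; this is an illustration, not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_8_1_1(images, labels, seed=42):
    """Split a dataset 8:1:1 into train/validation/test partitions,
    stratified on the real/fake label to preserve class balance."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        images, labels, test_size=0.2, stratify=labels, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# 20,000 dummy samples, half real (0) and half fake (1)
X = np.arange(20_000).reshape(-1, 1)
y = np.repeat([0, 1], 10_000)
train, val, test = split_8_1_1(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # 16000 2000 2000
```

Stratifying on the label keeps the real/fake ratio identical across all three partitions, so validation and test accuracies remain comparable.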

38 pages, 11460 KiB  
Article
Simulation-Based Optimization of Crane Lifting Position and Capacity Using a Construction Digital Twin for Prefabricated Bridge Deck Assembly
by Dae-Ho Jang, Gi-Tae Roh, Chi-Ho Jeon and Chang-Su Shim
Buildings 2025, 15(3), 475; https://doi.org/10.3390/buildings15030475 - 3 Feb 2025
Cited by 2 | Viewed by 1849
Abstract
The growing adoption of off-site construction methods has increased the critical role of mobile cranes within the construction sector. This study develops a Construction Digital Twin (CDT) framework to optimize crane lifting positions and capacities for the installation of prefabricated bridge decks. By integrating 3D site modeling, Building Information Modeling (BIM), and crane simulations within the Unity game engine, the CDT overcomes the limitations of conventional 2D-based planning by providing a three-dimensional representation of site conditions. An exhaustive search method identifies optimal crane configurations, enhancing precision and efficiency. Simulation calibration using video analysis of real bridge deck installations aligns crane speed and cycle times with actual operations, improving reliability. Case studies demonstrate the CDT’s ability to reduce crane operation costs by 27% when employing a smaller capacity crane while maintaining operational efficiency. Additional DFA-focused simulations with varying deck dimensions revealed a potential 10% cost reduction by optimizing crane operations and deck design strategies. The CDT framework supports early-stage planning, reduces operational risks, and contributes to cost-effective and safer construction practices, offering a scalable solution adaptable to various construction scenarios. Full article
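The exhaustive search over lifting positions and crane capacities can be illustrated with a toy model. The crane names, capacities, costs, and the linear capacity-derating rule below are invented placeholders, not data or logic from the study:

```python
import numpy as np
from itertools import product

# Hypothetical crane options: (name, capacity in tonnes, hourly cost)
cranes = [("CraneA", 100, 450), ("CraneB", 250, 900)]
# Hypothetical deck pick points on a 2D site plan (metres)
decks = np.array([(30.0, 10.0), (35.0, 12.0), (40.0, 8.0)])

def feasible(pos, capacity, deck, load_t=25.0):
    # Simplified check: rated capacity derates linearly with lifting radius
    radius = np.linalg.norm(deck - pos)
    return load_t <= capacity * max(0.0, 1.0 - radius / 60.0)

def best_plan(candidate_positions):
    """Exhaustive search over crane model x lifting position, keeping the
    cheapest configuration that can place every deck segment."""
    plans = []
    for (name, cap, cost), pos in product(cranes, candidate_positions):
        if all(feasible(np.array(pos), cap, d) for d in decks):
            plans.append((cost, name, pos))
    return min(plans) if plans else None

grid = [(x, 0.0) for x in range(0, 60, 5)]
plan = best_plan(grid)
print(plan)
```

As in the case study, the search prefers the smaller-capacity crane whenever a position exists from which it can complete all lifts, since that minimizes operating cost.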
(This article belongs to the Section Construction Management, and Computers & Digitization)

20 pages, 6888 KiB  
Article
Stroke Classification in Table Tennis as a Multi-Label Classification Task with Two Labels Per Stroke
by Yuta Fujihara, Tomoyasu Shimada, Xiangbo Kong, Ami Tanaka, Hiroki Nishikawa and Hiroyuki Tomiyama
Sensors 2025, 25(3), 834; https://doi.org/10.3390/s25030834 - 30 Jan 2025
Viewed by 1227
Abstract
In table tennis, there are various movements involved in hitting the ball, known as strokes, and these are an important factor in determining the course of a game. Therefore, research has been conducted to classify stroke types using gameplay video data or inertial sensor information. However, classifying strokes from actual table tennis videos is more difficult than general action recognition tasks because many strokes display strong similarity. Therefore, this study proposes a multi-label stroke classification method, assigning multiple classes per stroke. Specifically, multi-labeling is performed by assigning two types of labels—namely the player’s posture and the rotation and velocity of the ball—to one stroke. By changing the head of the action recognition model to adopt multiple outputs for stroke classification, the difficulty of each classification task is reduced and the accuracy is improved. As a result, when performing multi-label classification with a conventional action recognition model, the accuracy on the validation data improved by up to 8.6%, and the accuracy on the test data improved by up to 18.1%. In addition, when two types of input—namely video and 3D joint coordinates—were compared, the accuracy on the validation and test data was 17.1% and 5.4% higher, respectively, for 3D joint coordinates, confirming that 3D joint coordinates are effective. Full article
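A minimal sketch of the two-labels-per-stroke idea: one shared feature vector with one classifier head per label. Scikit-learn's `MultiOutputClassifier` stands in for the modified action recognition head, and the feature and label definitions below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)

# Synthetic per-stroke features (e.g. pooled 3D joint coordinates): 200 strokes x 12 dims
X = rng.normal(size=(200, 12))

# Two hypothetical label sets per stroke:
# posture (0 = forehand, 1 = backhand) and ball state (0/1/2 = spin/velocity class)
y_posture = (X[:, 0] > 0).astype(int)
y_ball = np.minimum((np.abs(X[:, 1]) * 2).astype(int), 2)
Y = np.column_stack([y_posture, y_ball])

# One classifier head per label over the shared features
clf = MultiOutputClassifier(RandomForestClassifier(random_state=0)).fit(X, Y)
pred = clf.predict(X[:5])
print(pred.shape)  # one posture label and one ball label per stroke
```

Splitting one hard many-way classification into two easier sub-problems is exactly the mechanism the abstract credits for the accuracy gains.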
(This article belongs to the Section Biomedical Sensors)

21 pages, 4884 KiB  
Article
Evaluation of Machine Learning Algorithms for Classification of Visual Stimulation-Induced EEG Signals in 2D and 3D VR Videos
by Mingliang Zuo, Xiaoyu Chen and Li Sui
Brain Sci. 2025, 15(1), 75; https://doi.org/10.3390/brainsci15010075 - 16 Jan 2025
Cited by 3 | Viewed by 1557
Abstract
Background: Virtual reality (VR) has become a transformative technology with applications in gaming, education, healthcare, and psychotherapy. The subjective experiences in VR vary based on the virtual environment’s characteristics, and electroencephalography (EEG) is instrumental in assessing these differences. By analyzing EEG signals, researchers can explore the neural mechanisms underlying cognitive and emotional responses to VR stimuli. However, distinguishing EEG signals recorded in two-dimensional (2D) versus three-dimensional (3D) VR environments remains underexplored. Current research primarily utilizes power spectral density (PSD) features to differentiate between 2D and 3D VR conditions, but the potential of other feature parameters for enhanced discrimination is unclear. Additionally, the use of machine learning techniques to classify EEG signals from 2D and 3D VR using alternative features has not been thoroughly investigated, highlighting the need for further research to identify robust EEG features and effective classification methods. Methods: This study recorded EEG signals from participants exposed to 2D and 3D VR video stimuli to investigate the neural differences between these conditions. Key features extracted from the EEG data included PSD and common spatial patterns (CSPs), which capture frequency-domain and spatial-domain information, respectively. To evaluate classification performance, several classical machine learning algorithms were employed: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), naive Bayes, decision tree, AdaBoost, and a voting classifier. The study systematically compared the classification performance of PSD and CSP features across these algorithms, providing a comprehensive analysis of their effectiveness in distinguishing EEG signals in response to 2D and 3D VR stimuli.
Results: The study demonstrated that machine learning algorithms can effectively classify EEG signals recorded while watching 2D and 3D VR videos. CSP features outperformed PSD features in classification accuracy, indicating their superior ability to capture EEG signal differences between the VR conditions. Among the machine learning algorithms, the random forest (RF) classifier achieved the highest accuracy at 95.02%, followed by KNN with 93.16% and SVM with 91.39%. The combination of CSP features with RF, KNN, and SVM consistently showed superior performance compared to other feature-algorithm combinations, underscoring the effectiveness of CSP and these algorithms in distinguishing EEG responses to different VR experiences. Conclusions: This study demonstrates that EEG signals recorded while watching 2D and 3D VR videos can be effectively classified using machine learning algorithms with extracted feature parameters. The findings highlight the superiority of CSP features over PSD in distinguishing EEG signals under different VR conditions, emphasizing CSP’s value in VR-induced EEG analysis. These results expand the application of feature-based machine learning methods in EEG studies and provide a foundation for future research into cortical brain activity during VR experiences, supporting the broader use of machine learning in EEG-based analyses. Full article
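A rough sketch of a CSP-plus-random-forest pipeline on synthetic epochs. The filter construction below is the textbook CSP recipe (generalized eigendecomposition of class covariance matrices, log-variance features), not the study's exact implementation, and the data are simulated:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.ensemble import RandomForestClassifier

def csp_filters(epochs_a, epochs_b, n_components=4):
    """Common spatial pattern filters for two classes of EEG epochs,
    each of shape (n_epochs, n_channels, n_samples)."""
    cov = lambda ep: np.mean([x @ x.T / np.trace(x @ x.T) for x in ep], axis=0)
    Ra, Rb = cov(epochs_a), cov(epochs_b)
    vals, vecs = eigh(Ra, Ra + Rb)  # generalized eigendecomposition
    order = np.argsort(vals)
    keep = np.r_[order[:n_components // 2], order[-(n_components // 2):]]
    return vecs[:, keep]

def csp_features(epochs, W):
    # Log-variance of each spatially filtered component
    return np.log(np.var(np.einsum('ck,ecs->eks', W, epochs), axis=2))

# Synthetic "2D VR" vs "3D VR" epochs with different spatial variance profiles
rng = np.random.default_rng(1)
scale = lambda s: np.asarray(s, float).reshape(1, 8, 1)
ep_2d = rng.normal(size=(40, 8, 128)) * scale([3, 1, 1, 1, 1, 1, 1, 1])
ep_3d = rng.normal(size=(40, 8, 128)) * scale([1, 3, 1, 1, 1, 1, 1, 1])

W = csp_filters(ep_2d[:30], ep_3d[:30])
X_train = np.vstack([csp_features(ep_2d[:30], W), csp_features(ep_3d[:30], W)])
y_train = np.repeat([0, 1], 30)
X_test = np.vstack([csp_features(ep_2d[30:], W), csp_features(ep_3d[30:], W)])
y_test = np.repeat([0, 1], 10)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"CSP + RF test accuracy: {acc:.2f}")
```

Because CSP filters maximize the variance ratio between the two conditions, the resulting log-variance features are spatially discriminative in a way per-channel PSD features need not be, which is consistent with the reported CSP advantage.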

13 pages, 403 KiB  
Article
Fire Video Recognition Based on Local and Global Adaptive Enhancement
by Jian Ding, Yun Yi, Tinghua Wang and Tao Tian
Algorithms 2025, 18(1), 8; https://doi.org/10.3390/a18010008 - 1 Jan 2025
Viewed by 758
Abstract
Fires pose an enormous risk to human life and property. In the domain of fire warning, earlier approaches leveraging computer vision have achieved significant progress. However, these methods ignore the local and global motion characteristics of flames. To address this issue, a Local and Global Adaptive Enhancement (LGAE) network is proposed, which mainly includes the backbone block, the Local Adaptive Motion Enhancement (LAME) block, and the Global Adaptive Motion Enhancement (GAME) block. Specifically, the LAME block is designed to capture information about local motion, and the GAME block is devised to enhance information about global motion. Through the utilization of these two blocks, the fire recognition ability of LGAE is improved. To facilitate the research and development in the domain of fire recognition, we constructed a Large-scale Fire Video Recognition (LFVR) dataset, which includes 11,560 video clips. Extensive experiments were carried out on the LFVR and FireNet datasets. The F1 scores of LGAE on LFVR and FireNet were 88.93% and 93.18%, respectively. The experimental outcomes indicate that LGAE performs better than other methods on both LFVR and FireNet. Full article
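The F1 scores reported for LGAE combine precision and recall on clip-level fire/no-fire decisions. A quick illustration on hypothetical predictions:

```python
from sklearn.metrics import f1_score

# Hypothetical ground truth and predictions for eight video clips (1 = fire)
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# F1 = 2 * precision * recall / (precision + recall);
# here precision = recall = 3/4, so F1 = 0.75
print(f"F1 = {f1_score(y_true, y_pred):.4f}")  # F1 = 0.7500
```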
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

18 pages, 1415 KiB  
Article
A Mathematical Model to Study Defensive Metrics in Football: Individual, Collective and Game Pressures
by Jose M. Calabuig, César Catalán, Luis M. García-Raffi and Enrique A. Sánchez-Pérez
Mathematics 2024, 12(23), 3854; https://doi.org/10.3390/math12233854 - 7 Dec 2024
Viewed by 2023
Abstract
Performance analysis, utilizing video technology and recent technological advancements in soccer stadiums, provides a wealth of data, including player trajectories and real-time game statistics, which are crucial for tactical evaluation and decision-making by coaches and players. These data allow for the definition of metrics that not only enrich the experience for soccer fans through enhanced visual displays but also empower coaching staff and managers to make informed, real-time decisions that directly impact match outcomes. Ultimately, these data serve as a pivotal tool for improving team strategy based on comprehensive post-match data analysis. In this article, we present a mathematical model to study the concept of pressure between players and, subsequently, between teams. We first explore the concept in a fixed frame of a match, determining what we call influence areas between players. We introduce the unit pressure function and analyze the total number of pressure interactions. Then, we apply these concepts to football matches, considering various factors such as players and the radius of the area of influence and examining pressure efficiency through mean unitary pressure. Lastly, a real case study is presented, showcasing visualizations like a heatmap matrix displaying individual and collective pressure, as well as the team pressure balance. Full article
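The abstract does not give the unit pressure function's exact form, so the sketch below assumes a simple linear decay from 1 at zero distance to 0 at the edge of a defender's area of influence, with collective (team) pressure as the sum over all defender-attacker pairs; both choices are illustrative assumptions:

```python
import numpy as np

def unit_pressure(defender, attacker, radius=5.0):
    """Hypothetical unit pressure: 1 when the defender is on top of the
    attacker, decaying linearly to 0 at the influence radius (metres)."""
    d = np.linalg.norm(np.asarray(defender, float) - np.asarray(attacker, float))
    return max(0.0, 1.0 - d / radius)

def team_pressure(defenders, attackers, radius=5.0):
    # Collective pressure: sum of unit pressures over defender-attacker pairs
    return sum(unit_pressure(p, q, radius) for p in defenders for q in attackers)

defenders = [(0, 0), (4, 0)]
attackers = [(3, 0), (40, 20)]
print(round(team_pressure(defenders, attackers), 2))  # 1.2
```

Averaging this quantity over the pressing events in a frame would give a mean unitary pressure of the kind the article uses to examine pressure efficiency.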
(This article belongs to the Section E2: Control Theory and Mechanics)
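The listing names a unit pressure function defined over influence areas but does not give its functional form. A minimal sketch of the idea, assuming (as an illustration, not the paper's actual model) that pressure equals 1 at zero distance and decays linearly to 0 at the edge of a defender's circular influence area:

```python
import math

def unit_pressure(defender, attacker, radius=5.0):
    """Hypothetical unit pressure a defender exerts on an attacker.

    Assumes pressure is 1 at zero distance and decays linearly to 0 at
    the boundary of the defender's influence area of radius `radius`.
    Positions are (x, y) tuples; the radius value is an assumption.
    """
    d = math.hypot(defender[0] - attacker[0], defender[1] - attacker[1])
    return max(0.0, 1.0 - d / radius)

def collective_pressure(defenders, attacker, radius=5.0):
    """Sum of unit pressures from all defenders on a single attacker."""
    return sum(unit_pressure(p, attacker, radius) for p in defenders)

# Example frame: two nearby defenders press the attacker; a third is too far.
defenders = [(0.0, 0.0), (3.0, 4.0), (20.0, 20.0)]
attacker = (0.0, 3.0)
total = collective_pressure(defenders, attacker)
```

Aggregating such per-frame values over a match would yield quantities in the spirit of the paper's mean unitary pressure, though the actual definitions may differ.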
17 pages, 3152 KiB  
Article
Connectivity in the Dorsal Visual Stream Is Enhanced in Action Video Game Players
by Kyle Cahill, Timothy Jordan and Mukesh Dhamala
Brain Sci. 2024, 14(12), 1206; https://doi.org/10.3390/brainsci14121206 - 28 Nov 2024
Viewed by 3074
Abstract
Background/Objectives: Action video games foster competitive environments that demand rapid spatial navigation and decision-making, and action video gamers often exhibit faster response times and slightly improved accuracy in vision-based sensorimotor tasks. However, the underlying functional and structural changes in the brain's two visual streams that may contribute to these cognitive improvements have remained unclear. Methods: Using functional and diffusion MRI data, this study investigated differences in dorsal and ventral visual stream connectivity between action video gamers and nongamers. Results: We found that action video gamers have enhanced functional and structural connectivity, especially in the dorsal visual stream. Specifically, there is heightened functional connectivity, both undirected and directed, between the left superior occipital gyrus and the left superior parietal lobule during a moving-dot discrimination decision-making task. This increased connectivity correlates with response time in gamers. The structural connectivity in the dorsal stream, as quantified by diffusion fractional anisotropy and quantitative anisotropy measures of the axonal fiber pathways, was also enhanced in gamers compared to nongamers. Conclusions: These findings provide valuable insights into how action video gaming can induce targeted improvements in structural and functional connectivity between specific brain regions in the visual processing pathways. These connectivity changes in the dorsal visual stream underpin the superior performance of action video gamers compared to nongamers in tasks requiring rapid and accurate vision-based decision-making. Full article
36 pages, 7794 KiB  
Article
Video Games in Civic Engagement in Urban Planning, a Methodology for Effective and Informed Selection of Games for Specific Needs
by Jan Szot
Sustainability 2024, 16(23), 10411; https://doi.org/10.3390/su162310411 - 27 Nov 2024
Cited by 1 | Viewed by 1582
Abstract
Video games are recognized as significant tools and mediums for civic participation in spatial planning and for fostering local communities. Although the phenomenon is widely documented in papers presenting individual case studies and broader analyses, how to select serious games with particular characteristics remains unclear. An informed process of choosing games with specific properties regarding genesis, graphic style, genre, and complexity, in response to specified needs and process assumptions, helps prevent unnecessary costs and data overproduction; such avoidance is an important part of sustainable digital transformation. There is therefore a need for a more conscious process of selecting video games for use in participatory processes. This paper proposes a numerical basis for a decision-support instrument for specifying the characteristics of games to be used in participation. A multicriteria analysis of documented cases of implementing video games in civic engagement was performed, yielding a set of numeric indicators that help determine the properties of games most appropriate for given process assumptions. Such a tool can prevent data overproduction on the one hand and may promote wider adoption of the presented approach to the participation process on the other. Full article
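The abstract describes numeric indicators for matching games to process assumptions without giving the scoring scheme. One common form such an instrument could take is a weighted-sum multicriteria score; the sketch below uses illustrative criteria names and weights that are assumptions, not the paper's actual indicator set:

```python
def score_game(indicators, weights):
    """Weighted-sum multicriteria score for a candidate game.

    `indicators` maps criterion name -> normalized value in [0, 1];
    `weights` maps criterion name -> importance for the given process
    assumptions. Missing indicators count as 0. Criteria names are
    illustrative, not the paper's actual set.
    """
    total_weight = sum(weights.values())
    return sum(weights[c] * indicators.get(c, 0.0) for c in weights) / total_weight

# Hypothetical candidate games with normalized indicator values.
games = {
    "CityBuilderX": {"complexity": 0.8, "realism": 0.9, "accessibility": 0.4},
    "BlockPlanner": {"complexity": 0.3, "realism": 0.5, "accessibility": 0.9},
}
# A process that prioritizes accessibility over realism and complexity.
weights = {"complexity": 1.0, "realism": 2.0, "accessibility": 3.0}

ranked = sorted(games, key=lambda g: score_game(games[g], weights), reverse=True)
```

Here `ranked` orders candidates for the specified process; changing the weights to reflect different participation goals re-ranks the same pool of games.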