Search Results (735)

Search Parameters:
Keywords = action videos

13 pages, 3044 KiB  
Article
Improving Event Data in Football Matches: A Case Study Model for Synchronizing Passing Events with Positional Data
by Alberto Cortez, Bruno Gonçalves, João Brito and Hugo Folgado
Appl. Sci. 2025, 15(15), 8694; https://doi.org/10.3390/app15158694 (registering DOI) - 6 Aug 2025
Abstract
In football, accurately pinpointing key events such as passes is vital for analyzing player and team performance. Despite continuous technological advancements, existing tracking systems still struggle to synchronize event and positional data accurately. This case study proposes a new method to synchronize event and positional data collected during football matches. Three datasets were used: a dataset created by applying a custom algorithm that synchronizes positional and event data, referred to as the optimized synchronization dataset (OSD); a simple temporal alignment between positional and event data, referred to as the raw synchronization dataset (RSD); and manual notational data (MND) from the match video footage, treated as the ground-truth observations. The timestamp of each pass in both synchronized datasets was compared to the ground truth (MND). Spatial differences in the OSD were also compared to the RSD and to the original data from the provider. Root mean square error (RMSE) and mean absolute error (MAE) were used to assess the accuracy of both procedures. The optimized dataset was more accurate, with RMSE values of 75.16 ms (milliseconds) for RSD versus 72.7 ms for OSD, and MAE values of 60.50 ms for RSD versus 59.73 ms for OSD. Spatial accuracy also improved, with OSD showing a smaller deviation from RSD than the original event data: the mean positional deviation from RSD was reduced from 1.59 ± 0.82 m for the original event data to 0.41 ± 0.75 m. In conclusion, the model offers a more accurate method for synchronizing independent event and positional datasets, which is particularly beneficial for applications where the precise timing and spatial location of actions are critical. In contrast to previous synchronization methods, this approach simplifies the process with an automated technique based on patterns of ball velocity. This streamlines synchronization across datasets, reduces the need for manual intervention, and makes the method practical for routine use in applied settings. Full article
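The RMSE and MAE figures above are standard summaries of timestamp misalignment. As a minimal sketch (with hypothetical timestamps, not the study's data), both can be computed over paired pass timestamps:

```python
import math

def rmse(pred_ms, truth_ms):
    # root mean square error over paired timestamps (milliseconds)
    errs = [(p - t) ** 2 for p, t in zip(pred_ms, truth_ms)]
    return math.sqrt(sum(errs) / len(errs))

def mae(pred_ms, truth_ms):
    # mean absolute error over paired timestamps (milliseconds)
    return sum(abs(p - t) for p, t in zip(pred_ms, truth_ms)) / len(pred_ms)

# hypothetical pass timestamps: synchronized dataset vs. ground-truth annotation
synced = [1020, 2540, 4110]
truth = [1000, 2600, 4100]
print(rmse(synced, truth), mae(synced, truth))
```

Reporting both is informative because RMSE penalizes large individual misalignments more heavily than MAE.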

16 pages, 532 KiB  
Article
A Play-Responsive Approach to Teaching Mathematics in Preschool, with a Focus on Representations
by Maria Lundvin and Hanna Palmér
Educ. Sci. 2025, 15(8), 999; https://doi.org/10.3390/educsci15080999 (registering DOI) - 5 Aug 2025
Abstract
This article reports on a Swedish study investigating how children aged 2–3 years experience mathematical concepts through representations in play-responsive teaching. Drawing on the semiotic–cultural theory of learning, this study examines how representations, such as spoken language, bodily action, and artifacts, are mediated. Video-recorded teaching sessions are analyzed to identify semiotic means of objectification and semiotic nodes at which these representations converge. The analysis distinguishes between children encountering concepts expressed by others and expressing concepts themselves. The results indicate that play-responsive teaching creates varied opportunities for experiencing mathematical concepts, with distinct modes of sensuous cognition linked to whether a concept is encountered or expressed. This study underscores the role of teachers’ choices in shaping these experiences and highlights bodily action as a significant form of representation. These findings aim to inform the use of representations in teaching mathematics to the youngest children in preschool. Full article

10 pages, 426 KiB  
Proceeding Paper
Guiding or Misleading: Challenges of Artificial Intelligence-Generated Content in Heuristic Teaching: ChatGPT
by Ping-Kuo A. Chen
Eng. Proc. 2025, 103(1), 1; https://doi.org/10.3390/engproc2025103001 - 4 Aug 2025
Viewed by 3
Abstract
Artificial intelligence (AI)-generated content (AIGC) is an innovative technology that uses machine learning, AI models, reward modeling, and natural language processing (NLP) to create diverse digital content such as videos, images, and text. It has the potential to support a wide range of human activities, with significant implications for teaching and learning, and can facilitate heuristic teaching for educators. Using AIGC, teachers can create extensive knowledge content and design instructional strategies that guide students in line with heuristic teaching. However, incorporating AIGC into heuristic teaching raises controversies and concerns, because generated content can mislead learning outcomes. Even so, AIGC can greatly help teachers enhance heuristic teaching, provided its challenges and risks are acknowledged and addressed. These include the need for users to possess sufficient knowledge to identify incorrect information and content generated by AIGC, the importance of avoiding excessive reliance on AIGC, the need for users to remain in control of their actions rather than being driven by AIGC, and the necessity of scrutinizing and verifying the accuracy of AIGC-generated information and knowledge to preserve its effectiveness. Full article

16 pages, 612 KiB  
Article
Examination of Step Kinematics Between Children with Different Acceleration Patterns in Short-Sprint Dash
by Ilias Keskinis, Vassilios Panoutsakopoulos, Evangelia Merkou, Savvas Lazaridis and Eleni Bassa
Biomechanics 2025, 5(3), 60; https://doi.org/10.3390/biomechanics5030060 - 4 Aug 2025
Viewed by 81
Abstract
Background/Objectives: Sprinting is a fundamental locomotor skill and a key indicator of lower limb strength and anaerobic power in early childhood. The aim of this study was to examine possible differences in step kinematic parameters, and in their contribution to sprint speed, between children with different patterns of speed development. Methods: Sixty-five prepubescent track athletes (33 males and 32 females; 6.9 ± 0.8 years old) performed a maximal 15 m short-sprint running test, with photocells timing each 5 m segment. In the last 5 m segment, step length, step frequency, and step velocity were evaluated via video analysis, and the symmetry angle was calculated for the examined step kinematic parameters. Results: Based on the speed at the final 5 m segment of the test, two groups were identified: a maximum sprint phase (MAX) group and an acceleration phase (ACC) group. Speed in the final 5 m segment was significantly (p < 0.05) higher in ACC, and there was a significant (p < 0.05) interrelationship between step length and step frequency in ACC but not in MAX. No other differences were observed. Conclusions: The difference in the interrelationship between speed and step kinematic parameters between ACC and MAX highlights the importance of identifying each child's speed development pattern, so that individualized training stimuli can be applied to optimize conditioning and wellbeing in children involved in sports requiring short-sprint actions. Full article
(This article belongs to the Collection Locomotion Biomechanics and Motor Control)
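The symmetry angle mentioned in the abstract is commonly computed with Zifchock's formulation, where 0% indicates perfect left-right symmetry. The sketch below assumes that variant for positive-valued gait parameters (the abstract does not state which formulation was used):

```python
import math

def symmetry_angle(x_left, x_right):
    # Zifchock-style symmetry angle for positive-valued gait parameters:
    # 0% = perfect left-right symmetry; larger magnitude = greater asymmetry
    return (45.0 - math.degrees(math.atan(x_left / x_right))) / 90.0 * 100.0

# identical left/right step lengths (in metres) give 0% asymmetry
print(symmetry_angle(1.10, 1.10))
```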

26 pages, 18583 KiB  
Article
Transforming Pedagogical Practices and Teacher Identity Through Multimodal (Inter)action Analysis: A Case Study of Novice EFL Teachers in China
by Jing Zhou, Chengfei Li and Yan Cheng
Behav. Sci. 2025, 15(8), 1050; https://doi.org/10.3390/bs15081050 - 3 Aug 2025
Viewed by 188
Abstract
This study investigates the evolving pedagogical strategies and professional identity development of two novice college English teachers in China through a semester-long classroom-based inquiry. Drawing on Norris’s Multimodal (Inter)action Analysis (MIA), it analyzes 270 min of video-recorded lessons across three instructional stages, supported by visual transcripts and pitch-intensity spectrograms. The analysis reveals each teacher’s transformation from textbook-reliant instruction to student-centered pedagogy, facilitated by multimodal strategies such as gaze, vocal pitch, gesture, and head movement. These shifts unfold across the following three evolving identity configurations: compliance, experimentation, and dialogic enactment. Rather than following a linear path, identity development is shown as a negotiated process shaped by institutional demands and classroom interactional realities. By foregrounding the multimodal enactment of self in a non-Western educational context, this study offers insights into how novice EFL teachers navigate tensions between traditional discourse norms and reform-driven pedagogical expectations, contributing to broader understandings of identity formation in global higher education. Full article

21 pages, 6892 KiB  
Article
Enhanced Temporal Action Localization with Separated Bidirectional Mamba and Boundary Correction Strategy
by Xiangbin Liu and Qian Peng
Mathematics 2025, 13(15), 2458; https://doi.org/10.3390/math13152458 - 30 Jul 2025
Viewed by 242
Abstract
Temporal action localization (TAL) is a research hotspot in video understanding that aims to locate and classify actions in videos. However, existing methods focus on local temporal information and therefore have difficulty capturing long-term actions, which leads to poor performance when localizing long temporal sequences. In addition, most methods ignore the importance of boundaries for action instances, resulting in inaccurately localized boundaries. To address these issues, this paper proposes a state space model for temporal action localization, called Separated Bidirectional Mamba (SBM), which innovatively understands frame changes from the perspective of state transformation. It adapts to different sequence lengths and incorporates forward and backward state information for each frame through a forward Mamba and a backward Mamba, obtaining more comprehensive action representations and enhancing the modeling of long temporal sequences. Moreover, this paper designs a Boundary Correction Strategy (BCS): it calculates the contribution of each frame to action instances based on the pre-localized results, then adjusts the weights of frames in boundary regression so that boundaries are shifted towards frames with higher contributions, yielding more accurate boundaries. To demonstrate the effectiveness of the proposed method, this paper reports mean Average Precision (mAP) under temporal Intersection over Union (tIoU) thresholds on four challenging benchmarks: THUMOS14, ActivityNet-1.3, HACS, and FineAction, where the proposed method achieves mAPs of 73.7%, 42.0%, 45.2%, and 29.1%, respectively, surpassing state-of-the-art approaches. Full article
(This article belongs to the Special Issue Advances in Applied Mathematics in Computer Vision)
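The tIoU thresholds behind these mAP figures measure the overlap between a predicted temporal segment and a ground-truth one; a minimal sketch:

```python
def t_iou(pred, truth):
    # temporal IoU of two (start, end) segments, in seconds or frames
    inter = max(0.0, min(pred[1], truth[1]) - max(pred[0], truth[0]))
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
    return inter / union if union > 0 else 0.0

# a prediction half-overlapping a 10 s ground-truth action
print(t_iou((5.0, 15.0), (0.0, 10.0)))
```

A detection typically counts as a true positive only when its tIoU with a still-unmatched ground-truth instance meets the threshold; mAP then averages precision over classes and thresholds.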

23 pages, 13739 KiB  
Article
Traffic Accident Rescue Action Recognition Method Based on Real-Time UAV Video
by Bo Yang, Jianan Lu, Tao Liu, Bixing Zhang, Chen Geng, Yan Tian and Siyu Zhang
Drones 2025, 9(8), 519; https://doi.org/10.3390/drones9080519 - 24 Jul 2025
Viewed by 420
Abstract
Low-altitude drones, which are unimpeded by traffic congestion or urban terrain, have become a critical asset in emergency rescue missions. To address the current lack of emergency rescue data, UAV aerial videos were collected to create an experimental dataset for action classification and localization annotation. A total of 5082 keyframes were labeled with 1–5 targets each, and 14,412 instances of data were prepared (including flight altitude and camera angles) for action classification and position annotation. To mitigate the challenges posed by high-resolution drone footage with excessive redundant information, we propose the SlowFast-Traffic (SF-T) framework, a spatio-temporal sequence-based algorithm for recognizing traffic accident rescue actions. For more efficient extraction of target–background correlation features, we introduce the Actor-Centric Relation Network (ACRN) module, which employs temporal max pooling to enhance the time-dimensional features of static backgrounds, significantly reducing redundancy-induced interference. Additionally, smaller ROI feature map outputs are adopted to boost computational speed. To tackle class imbalance in incident samples, we integrate a Class-Balanced Focal Loss (CB-Focal Loss) function, effectively resolving rare-action recognition in specific rescue scenarios. We replace the original Faster R-CNN with YOLOX-s to improve the target detection rate. On our proposed dataset, the SF-T model achieves a mean average precision (mAP) of 83.9%, which is 8.5% higher than that of the standard SlowFast architecture while maintaining a processing speed of 34.9 tasks/s. Both accuracy-related metrics and computational efficiency are substantially improved. The proposed method demonstrates strong robustness and real-time analysis capabilities for modern traffic rescue action recognition. Full article
(This article belongs to the Special Issue Cooperative Perception for Modern Transportation)
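The CB-Focal Loss named above presumably combines class-balanced re-weighting by the effective number of samples (Cui et al.) with the focal down-weighting of easy examples; the per-sample sketch below is a hedged illustration under that assumption, not the paper's implementation:

```python
import math

def cb_focal_loss(p_true, n_samples, beta=0.9999, gamma=2.0):
    # class-balanced weight (1 - beta) / (1 - beta^n), where n is the number of
    # training samples in the true class: rare classes get larger weights
    weight = (1.0 - beta) / (1.0 - beta ** n_samples)
    # focal term (1 - p)^gamma shrinks the loss for well-classified samples
    return -weight * (1.0 - p_true) ** gamma * math.log(p_true)

# a rare rescue action (50 samples) vs. a frequent one (10,000), same confidence:
# the rare class contributes a larger loss
print(cb_focal_loss(0.6, 50), cb_focal_loss(0.6, 10_000))
```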

21 pages, 1231 KiB  
Article
Emotional Responses to Bed Bug Encounters: Effects of Sex, Proximity, and Educational Intervention on Fear and Disgust Perceptions
by Corraine A. McNeill and Rose H. Danek
Insects 2025, 16(8), 759; https://doi.org/10.3390/insects16080759 - 24 Jul 2025
Viewed by 497
Abstract
This study investigated individuals’ emotional responses to bed bugs and how these were influenced by sex, proximity, and educational intervention. Using a pre-post experimental design, participants (n = 157) completed emotional assessments before and after viewing an educational video about bed bugs. Contrary to our initial hypothesis that only fear and disgust would be observed, participants also exhibited high levels of anxiety and anger. Following the educational intervention, disgust, fear, and anger toward bed bugs increased significantly. Participants experienced greater disgust and fear when imagining encounters with bed bugs in closer proximity, with home infestations giving stronger responses than workplace scenarios. The educational video reduced disgust toward bed bugs in the home but increased fear of them in public spaces, potentially promoting vigilance that could limit bed bug spread. Females reported higher levels of disgust and fear than males across all proximity conditions, supporting evolutionary theories regarding sex-specific disgust sensitivity. The educational video successfully increased participants’ knowledge about bed bugs while simultaneously shifting emotional responses from contamination-based disgust to threat-specific fear. These findings suggest that educational interventions can effectively modify emotional responses to bed bugs, potentially leading to more rational management behaviors by transforming vague anxiety into actionable awareness of specific threats. Full article
(This article belongs to the Collection Cultural Entomology: Our Love-hate Relationship with Insects)

30 pages, 2282 KiB  
Article
User Experience of Navigating Work Zones with Automated Vehicles: Insights from YouTube on Challenges and Strengths
by Melika Ansarinejad, Kian Ansarinejad, Pan Lu and Ying Huang
Smart Cities 2025, 8(4), 120; https://doi.org/10.3390/smartcities8040120 - 19 Jul 2025
Viewed by 418
Abstract
Understanding automated vehicle (AV) behavior in complex road environments and user attitudes in such contexts is critical for their safe and effective integration into smart cities. Despite growing deployment, limited public data exist on AV performance in construction zones: highly dynamic settings marked by irregular lane markings, shifting detours, and unpredictable human presence. This study investigates AV behavior in these conditions through qualitative, video-based analysis of user-documented experiences on YouTube, focusing on Tesla’s supervised Full Self-Driving (FSD) and Waymo systems. Spoken narration, captions, and subtitles were examined to evaluate AV perception, decision-making, control, and interaction with humans. Findings reveal that while AVs excel in structured tasks such as obstacle detection, lane tracking, and cautious speed control, they face challenges in interpreting temporary infrastructure, responding to unpredictable human actions, and navigating low-visibility environments. These limitations not only impact performance but also influence user trust and acceptance. The study underscores the need for continued technological refinement, improved infrastructure design, and user-informed deployment strategies. By addressing current shortcomings, this research offers critical insights into AV readiness for real-world conditions and contributes to safer, more adaptive urban mobility systems. Full article

11 pages, 203 KiB  
Article
A Technical–Tactical Analysis of Medal Matches in Wrestling: Results from the 2024 European Senior Championships
by Mujde Atici, Abdullah Demirli, Bugrahan Cesur, Ozkan Isik, Laurentiu-Gabriel Talaghir, Marius Dumitru Cosoreanu, Viorel Dorgan and Adriana Neofit
Appl. Sci. 2025, 15(14), 7673; https://doi.org/10.3390/app15147673 - 9 Jul 2025
Viewed by 392
Abstract
Background and Objective: Match analysis plays a vital role in forming the scientific foundation of training and guiding strategic decision-making in wrestling. By objectively evaluating athletes’ technical and tactical performances, coaches and athletes can optimize preparation and in-match strategies. This study aimed to analyze the technical and tactical characteristics of medal matches in Greco-Roman (GR), Freestyle (FS), and Women’s Wrestling (WW) at the 2024 European Wrestling Championships. Methods: A total of 54 elite-level matches (18 from each style), held in Bucharest between 12 and 18 February 2024, were retrospectively analyzed. Three expert observers evaluated the matches using video footage from the United World Wrestling (UWW) archive. Descriptive statistics were performed using SPSS 25.0. Results: Across 301 recorded actions, 2-point techniques (52.16%) and 1-point techniques (43.85%) were dominant; only 3.99% were 4-point actions. GR primarily utilized the body lock and gut wrench; FS favored single-leg attacks and the leg lace. In WW, a high proportion of scores (60.8%) came from techniques applied in the par terre position. Most victories in all styles occurred by points rather than by technical superiority or falls. Conclusion: The findings reveal a strategic preference for low-risk, controlled techniques in high-level matches. These insights can inform evidence-based training and match preparation for future championships. Full article
(This article belongs to the Special Issue Innovative Approaches in Sports Science and Sports Training)
21 pages, 1709 KiB  
Article
Decoding Humor-Induced Amusement via Facial Expression Analysis: Toward Emotion-Aware Applications
by Gabrielle Toupin, Arthur Dehgan, Marie Buffo, Clément Feyt, Golnoush Alamian, Karim Jerbi and Anne-Lise Saive
Appl. Sci. 2025, 15(13), 7499; https://doi.org/10.3390/app15137499 - 3 Jul 2025
Viewed by 276
Abstract
Humor is widely recognized for its positive effects on well-being, including stress reduction, mood enhancement, and cognitive benefits. Yet, the lack of reliable tools to objectively quantify amusement—particularly its temporal dynamics—has limited progress in this area. Existing measures often rely on self-report or coarse summary ratings, providing little insight into how amusement unfolds over time. To address this gap, we developed a Random Forest model to predict the intensity of amusement evoked by humorous video clips, based on participants’ facial expressions—particularly the co-activation of Facial Action Units 6 and 12 (“% Smile”)—and video features such as motion, saliency, and topic. Our results show that exposure to humorous content significantly increases “% Smile”, with amusement peaking toward the end of videos. Importantly, we observed emotional carry-over effects, suggesting that consecutive humorous stimuli can sustain or amplify positive emotional responses. Even when trained solely on humorous content, the model reliably predicted amusement intensity, underscoring the robustness of our approach. Overall, this study provides a novel, objective method to track amusement on a fine temporal scale, advancing the measurement of nonverbal emotional expression. These findings may inform the design of emotion-aware applications and humor-based therapeutic interventions to promote well-being and emotional health. Full article
(This article belongs to the Special Issue Emerging Research in Behavioral Neuroscience and in Rehabilitation)
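The "% Smile" feature above is described as the co-activation of Facial Action Units 6 and 12. One plausible operationalization (the exact thresholding is an assumption, not taken from the paper) is the share of video frames in which both units are simultaneously active:

```python
def percent_smile(au6, au12, threshold=0.5):
    # share of frames (0-100%) where AU6 (cheek raiser) and AU12 (lip-corner
    # puller) are both active, i.e. both per-frame intensities reach the threshold
    co_active = [a >= threshold and b >= threshold for a, b in zip(au6, au12)]
    return 100.0 * sum(co_active) / len(co_active)

# hypothetical per-frame AU intensities over four frames
print(percent_smile([0.9, 0.2, 0.8, 0.7], [0.8, 0.9, 0.1, 0.6]))
```

A per-second or sliding-window version of the same quantity would give the fine temporal scale the abstract emphasizes.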

21 pages, 2869 KiB  
Article
Multimodal Feature-Guided Audio-Driven Emotional Talking Face Generation
by Xueping Wang, Yuemeng Huo, Yanan Liu, Xueni Guo, Feihu Yan and Guangzhe Zhao
Electronics 2025, 14(13), 2684; https://doi.org/10.3390/electronics14132684 - 2 Jul 2025
Viewed by 623
Abstract
Audio-driven emotional talking face generation aims to generate talking face videos with rich facial expressions and temporal coherence. Current diffusion model-based approaches predominantly depend on either single-label emotion annotations or external video references, which often struggle to capture the complex relationships between modalities, resulting in less natural emotional expressions. To address these issues, we propose MF-ETalk, a multimodal feature-guided method for emotional talking face generation. Specifically, we design an emotion-aware multimodal feature disentanglement and fusion framework that leverages Action Units (AUs) to disentangle facial expressions and models the nonlinear relationships among AU features using a residual encoder. Furthermore, we introduce a hierarchical multimodal feature fusion module that enables dynamic interactions among audio, visual cues, AUs, and motion dynamics. This module is optimized through global motion modeling, lip synchronization, and expression subspace learning, enabling full-face dynamic generation. Finally, an emotion-consistency constraint module is employed to refine the generated results and ensure the naturalness of expressions. Extensive experiments on the MEAD and HDTF datasets demonstrate that MF-ETalk outperforms state-of-the-art methods in both expression naturalness and lip-sync accuracy. For example, it achieves an FID of 43.052 and E-FID of 2.403 on MEAD, along with strong synchronization performance (LSE-C of 6.781, LSE-D of 7.962), confirming the effectiveness of our approach in producing realistic and emotionally expressive talking face videos. Full article

23 pages, 3743 KiB  
Article
Playful Computational Thinking Learning in and Beyond Early Childhood Classrooms: Insights from Collaborative Action Research of Two Teacher-Researchers
by Grace Yaxin Xing, Alice Grace Cady and X. Christine Wang
Educ. Sci. 2025, 15(7), 840; https://doi.org/10.3390/educsci15070840 - 2 Jul 2025
Viewed by 1418
Abstract
Blending child-led exploration with purposeful teacher guidance and clearly defined learning goals, playful learning has been promoted as a promising approach for introducing computational thinking (CT) in early childhood education (ECE). However, there is a lack of practical guidance for teachers on how to design and implement playful CT learning effectively. To address this gap, we conducted a collaborative action research project guided by two questions: (1) How can teachers effectively prepare and design a playful CT learning program using tangible CT toys? (2) How do teachers facilitate playful learning in the CT program? Through iterative cycles of planning, acting, observing, and reflecting, the first and second authors (teacher-researchers) designed and implemented their CT programs in a preschool classroom and an afterschool program, respectively, and collected data including video recordings of sessions, participant-generated artifacts, program documentation, and anecdotal reflection notes. Based on our thematic analysis of the data, we identified practical principles for selecting CT toys; three key themes for CT program design and preparation (interest, ownership, and application); and two forms of teacher scaffolding during implementation: an embodied approach, and storytelling as scaffolding and assessment. The findings highlight practical ways that teachers can enhance children’s engagement, problem-solving skills, and conceptual understanding of CT, while also promoting autonomy and creativity through coding and storytelling. Full article

13 pages, 1932 KiB  
Article
Evaluation of the Quality and Educational Value of YouTube Videos on Class IV Resin Composite Restorations
by Rashed A. AlSahafi, Hesham A. Alhazmi, Israa Alkhalifah, Danah Albuhmdouh, Malik J. Farraj, Abdullah Alhussein and Abdulrahman A. Balhaddad
Dent. J. 2025, 13(7), 298; https://doi.org/10.3390/dj13070298 - 30 Jun 2025
Viewed by 298
Abstract
Objectives: The increasing reliance on online platforms for dental education necessitates an assessment of the quality and reliability of available resources. This study aimed to evaluate YouTube videos as educational tools for Class IV resin composite restorations. Methods: The first 100 YouTube videos were screened, and 73 met the inclusion criteria. The videos were evaluated using the Video Information and Quality Index (VIQI) and specific content criteria derived from the dental literature. Videos with a score below the mean were identified as low-content videos. Results: No significant differences were noted between high- and low-content videos when examining the number of views, number of likes, duration, days since upload, viewing rate, interaction index, and number of subscribers (p > 0.05). The high-content videos demonstrated higher mean values compared with the low-content videos in flow (4.11 vs. 3.21; p < 0.0001), accuracy (4.07 vs. 3.07; p < 0.0001), quality (4 vs. 2.66; p < 0.0001), and precision (4.16 vs. 2.86; p < 0.0001). The overall VIQI score was significantly higher (p < 0.0001) for high-content videos (Mean 16.34; SD 2.46) compared with low-content videos (Mean 11.79; SD 2.96). For content score, high-content videos (Mean 9.36; SD 1.33) had a higher score (p < 0.0001) than low-content videos (Mean 4.90; SD 2.04). The key areas lacking sufficient coverage included occlusion, shade selection, and light curing techniques. Conclusions: While a significant portion of YouTube videos provided high-quality educational content, notable deficiencies were identified. This analysis serves as a call to action for both content creators and educational institutions to prioritize the accuracy and completeness of online dental education. Full article
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)
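The VIQI scores reported above are consistent with a total formed by summing four 5-point subscales: the high-content subscale means 4.11 + 4.07 + 4.00 + 4.16 add up to the reported overall mean of 16.34. A minimal sketch of that scoring (the subscale range is an assumption based on the reported values):

```python
def viqi_total(flow, accuracy, quality, precision):
    # Video Information and Quality Index: four subscales, each rated 1-5,
    # summed to a total between 4 and 20
    scores = (flow, accuracy, quality, precision)
    assert all(1 <= s <= 5 for s in scores), "each subscale is rated 1-5"
    return sum(scores)

# mean subscale scores of the high-content group reported in the abstract
print(viqi_total(4.11, 4.07, 4.00, 4.16))
```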

18 pages, 9529 KiB  
Article
Adaptive Temporal Action Localization in Video
by Zhiyu Xu, Zhuqiang Lu, Yong Ding, Liwei Tian and Suping Liu
Electronics 2025, 14(13), 2645; https://doi.org/10.3390/electronics14132645 - 30 Jun 2025
Viewed by 322
Abstract
Temporal action localization aims to identify the boundaries of the action of interest in a video. Most existing methods take a two-stage approach: first, identify a set of action proposals; then, based on this set, determine the accurate temporal locations of the action of interest. However, the diversely distributed semantics of a video over time have not been well considered, which can compromise localization performance, especially for ubiquitous short actions or events (e.g., a fall in healthcare or a traffic violation in surveillance). To address this problem, we propose a novel deep learning architecture, an adaptive template-guided self-attention network, to characterize proposals adaptively with their relevant frames. An input video is segmented into temporal frames, within which spatio-temporal patterns are formulated by a global-local Transformer-based encoder. Each frame serves as the starting frame for a number of proposals of different lengths. Learnable templates for proposals of different lengths are introduced, and each template guides the sampling for proposals of a specific length: it formulates the probabilities for a proposal to form the representation of certain spatio-temporal patterns from its relevant temporal frames. Therefore, the semantics of a proposal can be formulated adaptively, and a feature map of all proposals can be appropriately characterized. To estimate the IoU of these proposals with ground-truth actions, a two-level scheme is introduced, and a shortcut connection refines the predictions by convolving the feature map from coarse to fine. Comprehensive experiments on two benchmark datasets demonstrate the state-of-the-art performance of our proposed method: 32.6% mAP@IoU 0.7 on THUMOS-14 and 9.35% mAP@IoU 0.95 on ActivityNet-1.3. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Image and Video Processing)
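The mAP@IoU figures quoted above average, over action classes, an average-precision score computed at a fixed temporal-IoU threshold. Below is a simplified single-class sketch using greedy highest-score-first matching, one of several common AP variants (not necessarily the exact evaluation protocol used here):

```python
def t_iou(a, b):
    # temporal IoU of two (start, end) segments
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(proposals, truths, thr=0.7):
    # proposals: (start, end, score) triples; truths: (start, end) pairs
    matched, tp, fp, precisions = set(), 0, 0, []
    for start, end, _ in sorted(proposals, key=lambda p: -p[2]):
        # best still-unmatched ground-truth segment for this proposal
        ious = {i: t_iou((start, end), g)
                for i, g in enumerate(truths) if i not in matched}
        best = max(ious, key=ious.get, default=None)
        if best is not None and ious[best] >= thr:
            matched.add(best)
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at this recall point
        else:
            fp += 1
    return sum(precisions) / len(truths) if truths else 0.0
```

mAP then averages this AP over action classes, and often over a range of tIoU thresholds as well.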
