Search Results (1,051)

Search Parameters:
Keywords = video platforms

20 pages, 495 KB  
Article
The Exposed Childhood: An Examination of Chinese Parents’ Online Sharing of Children’s Photos and Videos—An Analysis Based on Douyin Network Data
by Yaping Yue, Yuang Guo and Haojie Yuan
Behav. Sci. 2026, 16(4), 499; https://doi.org/10.3390/bs16040499 - 27 Mar 2026
Abstract
Amid the prevailing trend of “pan-entertainment” in cyberspace, adults increasingly interpret children’s lives through utilitarian, adult-centric, and entertainment-focused perspectives, leading to the alienation of children’s online images. This study examines child influencer accounts on Douyin—typically managed by parents—and conducts content and discourse analysis on them. Drawing on critical theories by Douglas Kellner, we employed Scrapy and NVivo to analyze 30 popular children’s videos and 15,000 user comments posted beneath them. The analysis identifies five key characteristics in the construction of such images: spectacular visual mechanisms, younger-age production trends, covert commercial penetration, homogenized spectacle types, and adult-centric implicit influence. The study underscores the urgency of strengthening protective mechanisms to counteract platform capitalism’s intrusion into childhood and to uphold children’s digital privacy and agency.

17 pages, 335 KB  
Review
The Role of the Cardiothoracic Surgeon in the Age of AI—Are the Robots Going to Take Our Jobs?
by Caius-Glad Streian, Vlad-Alexandru Meche, Horea Bogdan Feier, Dragos Cozma, Ciprian Nicușor Dima, Constantin Tudor Luca and Sergiu-Ciprian Matei
Med. Sci. 2026, 14(2), 164; https://doi.org/10.3390/medsci14020164 (registering DOI) - 25 Mar 2026
Abstract
Introduction: Artificial intelligence (AI) and robot-assisted platforms are increasingly influencing cardiothoracic surgery. AI enhances risk prediction, imaging interpretation, and early complication detection, while robotics improves visualization, dexterity, and minimally invasive access. This systematic review evaluates the current evidence supporting these technologies and their implications for clinical practice. Methods: A systematic literature search was conducted across PubMed, Embase, Scopus, Web of Science, and Google Scholar (January 2000–May 2025) following PRISMA 2020 guidelines. After screening and eligibility assessment, 67 studies met predefined inclusion criteria and were incorporated into the qualitative synthesis. Additional high-impact reviews and consensus documents were consulted for contextual interpretation. Results: Machine learning models demonstrated modest but consistent improvements in predictive performance compared with EuroSCORE II and STS scores, particularly in high-risk cohorts. Robot-assisted mitral and coronary procedures showed reduced postoperative pain, blood loss, ICU stay, and recovery time in experienced centers, though early learning phases were associated with longer operative, cross-clamp, and bypass times. AI-enabled intraoperative tools, such as video analysis, workflow recognition, and real-time anatomical segmentation, emerged as promising adjuncts for surgical precision. Structured robotic training programs, especially simulation-based and dual-console pathways, accelerated proficiency acquisition. Conclusions: AI and robotic systems act as augmentative technologies that enhance rather than replace the surgeon’s role. Their safe and effective adoption requires standardized training, transparent AI decision pathways, and clear ethical and medico-legal governance.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Cardiovascular Medicine)

13 pages, 1473 KB  
Article
Enhancing Ophthalmologists’ Accuracy in Detecting Convergence Insufficiency Using AI-Derived Graphical Outputs
by Ahmad Khatib, Haneen Jabaly-Habib, Shmuel Raz and Ilan Shimshoni
J. Clin. Transl. Ophthalmol. 2026, 4(2), 9; https://doi.org/10.3390/jcto4020009 - 24 Mar 2026
Abstract
Background: Accurate evaluation of the Near Point of Convergence (NPC) is essential for diagnosing and managing convergence insufficiency (CI). Conventional assessment relies on the patient’s verbal feedback and the examiner’s visual observation, making it subjective and examiner-dependent. The AI-based MobileS platform, previously validated for both diagnosis and home-based therapy of CI, enables smartphone-based measurement and visualisation of NPC through eye tracking, without the need for verbal responses or additional equipment. This study, the third stage of our research programme, examined how ophthalmologists interpret NPC data when presented as videos versus AI-derived graphs. Methods: Twenty-two ophthalmologists completed an online questionnaire with 20 NPC test cases from the validated MobileS database, presented as both silent videos and AI-derived graphs. Accuracy was analysed using mixed-effects logistic regression, and continuous error was assessed using clustered bootstrap. Results: Graph-based interpretation showed higher odds of accurate NPC identification than video-based interpretation at the primary ±5 mm threshold (OR = 19.7, 95% CI: 13.50–28.74; p < 0.0001). Absolute error was lower for graphs than videos (Graphs − Videos: −22.73 mm; 95% CI: −26.88 to −18.59; p < 0.0001). “Uncertain” responses occurred in 28.2% of video-based assessments and 0% of graph-based assessments. Off-target errors decreased from 50.2% (videos) to 3.6% (graphs). Conclusions: AI-derived graphs of eye-movement data were associated with improved NPC estimation, suggesting a potential role in supporting clinical and tele-ophthalmology workflows.
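The odds ratios and confidence intervals above come from mixed-effects logistic regression. As a generic illustration of the underlying arithmetic (not the authors' pipeline; the coefficient and standard error below are made-up numbers chosen only for illustration), a fitted log-odds coefficient converts to an odds ratio with a Wald 95% CI as follows:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient (log-odds) and its
    standard error into an odds ratio with a Wald confidence interval."""
    or_ = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return or_, lo, hi

# Illustrative inputs (not values taken from the study's model output):
or_, lo, hi = odds_ratio_ci(beta=2.98, se=0.19)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Exponentiating the interval endpoints rather than the OR itself is what makes the Wald CI asymmetric around the odds ratio.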

21 pages, 2227 KB  
Article
Emotion and Context-Aware Artificial Intelligence Recommendation for Urban Tourism
by Mashael Aldayel, Abeer Al-Nafjan, Reman Alwadiee, Sarah Altammami, Abeer Alnafaei and Leena Alzahrani
J. Theor. Appl. Electron. Commer. Res. 2026, 21(3), 95; https://doi.org/10.3390/jtaer21030095 - 23 Mar 2026
Abstract
The rapid growth of digital tourism platforms has intensified information overload and decision complexity for both locals and travelers, while operators struggle to differentiate their offerings and sustain profitable, data-driven e-commerce models. This paper presents Doroob, a big data and artificial intelligence (AI)-driven, context-aware recommendation system that integrates traditional recommender techniques with real-time facial emotion recognition (FER) to enable intelligent tourism commerce. Doroob combines three AI-based recommendation strategies: smart adaptive recommendation (SAR) collaborative filtering, a Vowpal Wabbit-based context-aware model, and a LightFM hybrid model. The models are trained on datasets built from the Google Places API and enriched with ratings adapted from MovieLens. FER, implemented with DeepFace and OpenCV, analyzes short video segments as users browse destination details, converts emotion scores into 1–5 satisfaction ratings, and stores this implicit feedback alongside explicit ratings to support adaptive, emotion-aware personalization. Experimental results show that the context-aware model achieves the strongest top-K ranking performance, the hybrid LightFM model yields the highest AUC of 0.95, and the SAR model provides the most accurate rating predictions, demonstrating that combining contextual modeling and FER-based implicit feedback can enhance personalization, mitigate cold-start, and support data-driven promotion of local tourist services in intelligent e-commerce ecosystems.
(This article belongs to the Special Issue Human–Technology Synergies in AI-Driven E-Commerce Environments)
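The Doroob abstract describes collapsing facial-emotion scores into 1–5 satisfaction ratings. A minimal sketch of one way to do that, assuming a DeepFace-style dictionary of per-emotion percentages; the valence weights and the linear rescaling below are purely illustrative assumptions, not the paper's actual scheme:

```python
# Hypothetical valence weights per emotion label; these numbers are
# assumptions for illustration, not the mapping used in the paper.
VALENCE = {
    "happy": 1.0, "surprise": 0.5, "neutral": 0.0,
    "sad": -0.6, "fear": -0.8, "disgust": -0.9, "angry": -1.0,
}

def emotion_to_rating(scores) -> int:
    """Collapse per-emotion percentages into a 1-5 rating.

    A weighted valence in [-1, 1] is rescaled linearly onto [1, 5]
    and rounded to the nearest integer."""
    total = sum(scores.values()) or 1.0
    valence = sum(VALENCE.get(k, 0.0) * v for k, v in scores.items()) / total
    rating = round(1 + (valence + 1) * 2)  # -1 -> 1, +1 -> 5
    return max(1, min(5, rating))

# A mostly-happy frame maps to a high implicit rating:
print(emotion_to_rating({"happy": 80.0, "neutral": 15.0, "sad": 5.0}))  # -> 5
```

In a real pipeline the `scores` dictionary would come from a frame-level emotion classifier and ratings would be aggregated over the video segment.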

16 pages, 957 KB  
Article
Effects of a Video-Guided Active Break Programme on the Self-Esteem and Socio-Emotional Well-Being of Schoolchildren with Special Educational Needs: Active Classes Project
by Alejandra Robles-Campos, Yasna Chávez-Castillo, Isidora Zañartu, Ana María Arias, Carolina Muñoz, José Guzmán, Daniel Reyes-Molina, Igor Cigarroa, Maria Antonia Parra-Rizo, Juan de Dios Benítez-Sillero, Jose Manuel Armada-Crespo, Javier Murillo-Moraño and Rafael Zapata-Lamana
Behav. Sci. 2026, 16(3), 459; https://doi.org/10.3390/bs16030459 - 19 Mar 2026
Abstract
Serving students with special educational needs (SENs) involves recognising that their learning is closely linked to their emotional needs. Self-esteem and socio-emotional well-being play a key role in their motivation and adaptation to school. In this context, physical activity-based interventions at school emerge as a possible way to strengthen their self-esteem and socio-emotional well-being. The aim of this study was to analyse the effects of a web-based active break programme on self-esteem in students aged 6 to 10 years with SENs and on socio-emotional well-being in the subgroup of first–second-grade students. A pre-specified sub-analysis was conducted of a multicentre randomised controlled trial with a sample of 161 students with special educational needs (7.8 ± 1.1 years, 32% girls), divided into a control group (85 students) and an experimental group (76 students). A programme of video-guided active breaks was implemented in the classroom, applied twice a day, five days a week for 12 weeks, via a web platform. Self-esteem was assessed using the School Self-Esteem Test (SSET), and socio-emotional well-being was assessed using the Self-Report of Socio-Emotional Well-Being (SRSEWB). A significant Time × Group interaction was observed for self-esteem, F(1, 157) = 5.43, p = 0.021, η2p = 0.033, but no statistically significant effects were detected for socio-emotional well-being. These findings suggest that active break interventions may help strengthen self-esteem in students with SENs. Future research should examine the temporal stability of these improvements, determine the optimal intervention duration required to generate sustained changes, and evaluate longer-term socio-emotional outcomes.
(This article belongs to the Section Health Psychology)

29 pages, 2282 KB  
Article
A Multimodal Deep Learning Approach for Analyzing Content Preferences on TikTok Across European Technical Universities Using Media Information Processing System
by Dragoş-Florin Sburlan and Marian Bucos
Electronics 2026, 15(6), 1288; https://doi.org/10.3390/electronics15061288 - 19 Mar 2026
Abstract
Social media platforms have become primary communication channels for technical European universities. However, the extent to which global platform algorithms homogenize individual preferences across cultures remains underexplored. Although the current literature offers insights into the topic, none of the works consider the cross-national and multimodal nature of the phenomenon. In the current paper, we introduce the Media Information Processing System (MIPS), a privacy-preserving multimodal deep learning (DL) framework that incorporates large language models (LLMs), computer vision (CV), and knowledge graphs. We analyze data from 15,520 public videos shared by 2359 followers of six top technical universities from Romania, Germany, Italy, and Russia. The results of the study suggest that the degree of homogeneity of the followers’ interest profiles is markedly high. Statistical profiling of the data indicates that the interest profiles of the followers from different countries are positively correlated with a high degree of strength (mean Pearson r = 0.96; p > 0.90). Consensus clustering of the data reveals the existence of stable clusters of themes with high stability scores (>0.75), such as “Human Interaction Dynamics”. The results of the study contradict the traditional theory of regional cultural differentiation. Instead, the results suggest the existence of a new “digital student persona” that is characteristic of the academic lifestyle of students from different countries.
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 3rd Edition)
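The MIPS study summarises cross-country homogeneity as a mean pairwise Pearson r across interest profiles. A minimal sketch of how such a statistic is computed, using made-up profile data (the paper's actual countries, themes, and values are not reproduced here):

```python
import numpy as np

# Rows = countries, columns = share of each content theme in that
# country's follower interest profile. All values below are invented.
profiles = np.array([
    [0.30, 0.25, 0.20, 0.15, 0.10],   # country A
    [0.28, 0.27, 0.19, 0.16, 0.10],   # country B
    [0.32, 0.24, 0.21, 0.13, 0.10],   # country C
    [0.29, 0.26, 0.18, 0.17, 0.10],   # country D
])

r = np.corrcoef(profiles)           # 4x4 correlation matrix between rows
iu = np.triu_indices_from(r, k=1)   # upper triangle, excluding the diagonal
mean_r = r[iu].mean()               # average over the 6 country pairs
print(f"mean pairwise Pearson r = {mean_r:.2f}")
```

Averaging only the upper triangle avoids double-counting each country pair and excludes the trivial self-correlations on the diagonal.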

37 pages, 2981 KB  
Article
Signs, Shapes, and Spaces: A CAMIL-Informed Qualitative Study of Metaverse Geometry Learning for Deaf and Hard-of-Hearing Students
by Ai Peng Chong, Kung-Teck Wong, Kong Liang Soon Vestly and Kuppusamy Suresh Kumar
Soc. Sci. 2026, 15(3), 191; https://doi.org/10.3390/socsci15030191 - 16 Mar 2026
Abstract
Deaf and Hard-of-Hearing (DHH) students face persistent barriers in geometry education due to instructional approaches that inadequately support visual communication and embodied learning. This study examined DHH students’ experiences with GeoMETriA, a metaverse-based geometry learning platform integrating sign language instruction, three-dimensional visualization, and avatar-mediated interaction. Guided by the Cognitive Affective Model of Immersive Learning (CAMIL), a multi-phase qualitative design was employed, including pre-workshop interviews with four special education teachers and post-workshop focus group discussions with seven DHH secondary students following a four-session learning workshop. The findings indicate that gamified activities and peer collaboration enhanced interest and sustained engagement, while avatar customization supported embodiment and a sense of presence. Students described progression from initial uncertainty to greater confidence through practice and scaffolded support. However, cognitive and usability challenges emerged, particularly concerning sign language video pacing, navigation complexity, and limited instructional scaffolding. The study contributes theoretically by extending CAMIL-informed interpretations to sign-supported metaverse learning, empirically by documenting how engagement, embodiment, and self-efficacy develop during immersive geometry learning, and practically by offering design implications including adjustable sign language delivery, structured scaffolding, and culturally responsive avatar options. These findings suggest that metaverse-based platforms hold promise for supporting DHH learners when accessibility and learner-centered principles are embedded as foundational design considerations.
(This article belongs to the Special Issue Belt and Road Together Special Education 2025)

24 pages, 905 KB  
Article
Neural Encoding Strategies for Neuromorphic Computing
by Michael Liu, Honghao Zheng and Yang Yi
Electronics 2026, 15(6), 1221; https://doi.org/10.3390/electronics15061221 - 14 Mar 2026
Abstract
Neuromorphic computing seeks to mimic the structure and function of biological neural systems to enable energy-efficient, adaptive information processing. A critical component of this paradigm is neural encoding—the translation of analog or digital input data into spike-based representations suitable for spiking neural networks (SNNs). This paper provides a comprehensive overview of major neural encoding schemes used in neuromorphic systems, including rate and temporal encoding, as well as latency, interspike interval, phase, and multiplexed encoding. The purpose of this paper is to explore the use of encoding techniques for deep learning applications. We discuss the underlying principles of spike encoding approaches, their biological inspiration, computational efficiency, power consumption, integrated circuit design and implementation, and suitability for various neuromorphic applications. We also present our research on a hardware-and-software co-design platform for different encoding schemes and demonstrate their performance. By comparing their strengths, limitations, and implementation challenges, we aim to provide insights that will guide the development of more efficient and application-specific neuromorphic systems. We also perform an encoder performance analysis via Python 3.12 simulations to compare classification accuracies across these spike encoders on three popular image and video datasets. The performance of neural encoders working with both deep neural networks (DNNs) and SNNs is analyzed. Our performance data are largely consistent with the benchmark data on image classification from other papers, while limited performance data on the University of Central Florida’s 101 (UCF-101) video dataset were found in comparable studies on spike encoders. Based on our encoder performance data, the Interspike Interval (ISI) encoder performs well across all three datasets, preserving continuous, detailed spike timing and richer temporal information for standard classification tasks. Further, for image classification, multiplexing encoders outperform other spike encoders, as they simplify timing patterns by enforcing phase locking and improve stability and robustness to noise. Within the SNN testbenches, the ISI-Phase encoder achieved the highest accuracy on the Modified National Institute of Standards and Technology (MNIST) dataset, surpassing the Time-To-First-Spike (TTFS) encoder by 1.9%. On the Canadian Institute For Advanced Research (CIFAR-10) dataset, the ISI encoder achieved the highest accuracy, 22.7% higher than the TTFS encoder. The ISI encoder also performed best on the UCF-101 dataset, achieving 12.7% better performance than the TTFS encoder.
(This article belongs to the Section Artificial Intelligence)
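Among the encoding schemes this survey covers, time-to-first-spike (latency) encoding is the easiest to sketch: stronger inputs fire earlier within a fixed time window. The mapping below is a generic textbook formulation under that assumption, not the paper's specific encoder or hardware implementation:

```python
import numpy as np

def ttfs_encode(intensities: np.ndarray, t_window: float = 100.0) -> np.ndarray:
    """Map normalized intensities in [0, 1] to first-spike times.

    Intensity 1.0 spikes at t = 0; intensity 0.0 spikes at t_window,
    i.e. effectively not at all within the encoding window."""
    x = np.clip(intensities, 0.0, 1.0)
    return (1.0 - x) * t_window

# Three pixels: bright, mid-grey, dark -> early, middle, late spike times.
pixels = np.array([1.0, 0.5, 0.0])
print(ttfs_encode(pixels))  # [  0.  50. 100.]
```

Because each input produces at most one spike, TTFS codes are sparse and fast to decode, which is part of why the abstract contrasts them against richer but costlier interspike-interval codes.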

32 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture, NeuCube, that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also yields higher or competitive accuracy, even though this was not the main goal here. This is demonstrated through experiments on benchmark datasets, achieving classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games. The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research marks a step toward building brain-inspired AI systems.

15 pages, 2174 KB  
Article
Information Quality and Audience Engagement of Cesarean Section-Related Videos on YouTube and Bilibili: A Cross-Platform Analysis
by Gongxin Shen, Lingxuan Wei, Hanliang Tao, Lexuan Chen, Peng Chen, Yuxin Li, Siyuan Chang and Dapeng Shen
Information 2026, 17(3), 273; https://doi.org/10.3390/info17030273 - 10 Mar 2026
Abstract
(1) Background: C-section-related health information is increasingly disseminated through short video platforms such as YouTube and Bilibili, yet the quality and audience engagement of this content remain insufficiently understood. (2) Methods: A cross-sectional study analyzed the top 90 C-section-related videos from each platform (180 total). C-section video characteristics and engagement metrics were collected. Information quality and reliability were assessed using GQS, DISCERN, and JAMA benchmarks. Associations between quality scores and engagement indicators were examined using Spearman’s correlation analysis. (3) Results: YouTube videos were longer and more frequently produced by medical professionals. Although GQS scores were comparable, YouTube content demonstrated higher reliability, with significantly higher DISCERN and JAMA scores (p < 0.001). C-section engagement metrics were strongly intercorrelated but showed weak associations with objective quality measures. (4) Conclusions: Significant cross-platform disparities exist in C-section information quality, with a pronounced dissociation between clinical reliability and audience engagement. This “quality-popularity paradox” underscores a critical mismatch between evidence-based rigor and digital dissemination. Our findings necessitate multi-sectoral interventions, including standardized creator credentialing and algorithm recalibration, to align high-quality obstetric knowledge with public attention.
(This article belongs to the Special Issue Data Mining and Healthcare Informatics)
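The quality–engagement associations above are measured with Spearman's rank correlation. A self-contained sketch of the computation (Pearson correlation of the ranks, assuming no tied values; a production analysis would use `scipy.stats.spearmanr`, which handles ties with average ranks). The DISCERN scores and view counts below are invented for illustration:

```python
import numpy as np

def spearman(x, y):
    """Spearman rho as the Pearson correlation of the ranks (no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each x value
    ry = np.argsort(np.argsort(y)).astype(float)  # rank of each y value
    return float(np.corrcoef(rx, ry)[0, 1])

discern = [45, 52, 61, 38, 70]           # illustrative quality scores
views   = [12000, 400, 900, 35000, 150]  # illustrative view counts

rho = spearman(discern, views)
print(f"Spearman rho = {rho:.2f}")  # -0.90 for this invented sample
```

In this invented sample, higher-quality videos happen to attract fewer views, which is the kind of quality–popularity dissociation a rank correlation near zero or below would surface.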

27 pages, 2940 KB  
Article
A Unified Framework for Vehicle Detection, Tracking, and Counting Across Ground and Aerial Views Using Knowledge Distillation with YOLOv10-S
by Md Rezaul Karim Khan and Naphtali Rishe
Remote Sens. 2026, 18(5), 842; https://doi.org/10.3390/rs18050842 - 9 Mar 2026
Abstract
Accurate and reliable vehicle detection, tracking, and counting across different surveillance platforms are fundamental requirements for developing smart Traffic Management Systems (TMS) and promoting sustainable urban mobility. Recent advances in both ground-level surveillance and remote sensing using deep learning have opened new opportunities for extracting detailed vehicular information from high-resolution aerial and surveillance video data. This study presents a unified, real-time vehicle analysis framework that integrates lightweight deep learning–based detection, robust multi-object tracking, and trajectory-driven counting within a single modular pipeline. The proposed framework employs the “You Only Look Once” detector YOLOv10-S as the detection backbone and enhances its robustness through supervision-level knowledge distillation without introducing any architectural modifications. Temporal consistency is enforced using an observation-centric multi-object tracking algorithm (OC-SORT), enabling stable identity preservation under camera motion and dense traffic conditions. Vehicle counting is performed using a trajectory-based virtual gate strategy, reducing duplicate counts and improving counting reliability. Comprehensive experiments conducted on the UA-DETRAC and VisDrone benchmarks show that the proposed framework effectively balances detection performance, tracking robustness, counting accuracy, and real-time efficiency in both ground-based and aerial surveillance settings. Furthermore, cross-dataset evaluations under direct train–test transfer highlight the inherent challenges of domain shift while showing that knowledge distillation consistently improves robustness in detection, tracking identity consistency, and vehicle counting. Overall, this framework enables effective real-world traffic monitoring by adopting a scalable and practical system design, where reliability is prioritized over architectural complexity.
(This article belongs to the Section Urban Remote Sensing)
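The trajectory-based virtual gate idea can be sketched as a line-crossing test on consecutive track points: a vehicle is counted once when its trajectory changes sides of the gate line. This is a generic geometric illustration (the gate coordinates and the track are invented), not the paper's implementation, and for simplicity it tests the infinite gate line rather than clipping to the segment endpoints:

```python
def side(p, a, b):
    """Signed side of point p relative to line a->b (z of the cross product)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossings(track, gate_a, gate_b):
    """Count sign changes of consecutive track points across the gate line."""
    n = 0
    for p, q in zip(track, track[1:]):
        s1, s2 = side(p, gate_a, gate_b), side(q, gate_a, gate_b)
        if s1 * s2 < 0:  # strict sign change -> this step crossed the line
            n += 1
    return n

gate = ((0, 5), (10, 5))                  # horizontal gate at y = 5
track = [(2, 1), (3, 3), (4, 6), (6, 8)]  # one trajectory passing upward
print(crossings(track, *gate))  # -> 1
```

Counting sign changes over the whole trajectory, rather than per-frame proximity to the gate, is what suppresses duplicate counts when a detection jitters near the line.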

41 pages, 19770 KB  
Article
Vision-Based Dual-Mode Collision Risk-Warning for Aircraft Apron Monitoring
by Emre Can Bingol, Hamed Al-Raweshidy and Konstantinos Banitsas
Drones 2026, 10(3), 173; https://doi.org/10.3390/drones10030173 - 2 Mar 2026
Abstract
Ground incidents on airport aprons can cause substantial operational disruption and economic loss, while conventional surveillance (e.g., Surface Movement Radar (SMR), Closed-Circuit Television (CCTV)) often lacks the resolution and proactive decision support required for close-proximity operations. This study proposes a UAV-deployable, camera-agnostic Computer Vision (CV) framework for collision-risk warning from elevated viewpoints. An optimised YOLOv8-Seg backbone performs multi-class aircraft segmentation (airplane, wing, nose, tail, and fuselage) and is integrated with four MOT algorithms under identical evaluation settings. For quantitative tracker benchmarking, DeepSORT provides the strongest overall performance on the airplane-only MOTChallenge-format ground truth (MOTA 92.77%, recall 93.27%). To mitigate the scarcity of annotated apron-incident data, a labelled 997-frame MOT dataset is created via an MSFS simulation-based reenactment inspired by the 2018 Asiana–Turkish Airlines wing-to-tail event at Istanbul Ataturk Airport. The framework further introduces a dual-module warning mechanism that can operate independently: (i) a reactive module using image-plane proximity derived from segmentation masks, and (ii) a proactive module that predicts short-horizon conflicts via trajectory extrapolation and IoU-based future overlap analysis. The approach is evaluated on multiple simulated incident scenarios and assessed on a real apron video from Hong Kong International Airport; additionally, laboratory-scale UAV experiments using diecast aircraft models provide end-to-end feasibility evidence on unmanned-platform imagery. Overall, the results indicate timely warnings and practical feasibility for low-overhead UAV-enabled apron monitoring.
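The proactive module's core idea—trajectory extrapolation plus IoU-based future overlap—can be sketched generically: extrapolate each bounding box a few frames ahead at its current velocity and warn if the predicted boxes overlap beyond a threshold. The boxes, per-frame velocities, horizon, and threshold below are illustrative assumptions, not values from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def extrapolate(box, vel, steps):
    """Shift a box by its per-frame velocity for `steps` frames."""
    dx, dy = vel[0] * steps, vel[1] * steps
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def predicted_conflict(box_a, vel_a, box_b, vel_b, steps=10, thresh=0.1):
    """Warn when the extrapolated boxes' IoU exceeds the threshold."""
    return iou(extrapolate(box_a, vel_a, steps),
               extrapolate(box_b, vel_b, steps)) > thresh

# Two aircraft converging head-on (invented image-plane coordinates):
a, va = (0, 0, 10, 10), (2, 0)     # moving right
b, vb = (40, 0, 50, 10), (-2, 0)   # moving left
print(predicted_conflict(a, va, b, vb, steps=10))  # -> True
```

The boxes do not overlap in the current frame, so a purely reactive proximity check stays silent; the ten-frame extrapolation is what raises the warning early.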

13 pages, 1706 KB  
Article
Empowering Women in Pharmacy History Through Digital Heritage: ICT-Based Teaching Innovation and Social Engagement at the Museum of History of Pharmacy of Seville (Spain)
by Antonio Ramos Carrillo and Rocío Ruiz Altaba
Heritage 2026, 9(3), 98; https://doi.org/10.3390/heritage9030098 - 28 Feb 2026
Abstract
This study analyses the educational and social impact of a series of innovative teaching projects developed at the Museum of the History of Pharmacy of the University of Seville. The initiatives—including historical video documentaries, the “student guides” programme, and the digital outreach project “Voices that Empower”—explore the pedagogical potential of scientific heritage as a learning tool and as a medium for public communication. Through experiential and service-learning methodologies, these projects have enhanced students’ communication skills, critical thinking, and awareness of cultural and gender dimensions within pharmaceutical studies. The results demonstrate that the integration of audiovisual production, museum-based learning, and digital storytelling fosters meaningful engagement between the university and society, while also revitalising the historical and humanistic dimensions of pharmacy. Furthermore, the inclusion of a gender perspective in the “Voices that Empower” initiative contributes to the visibility of women in STEM and highlights the museum as a space for empowerment and social transformation. This work concludes that university museums can act as strategic platforms for innovation in higher education, combining heritage preservation, teaching excellence, and civic outreach to promote a more inclusive and sustainable scientific culture. Full article
(This article belongs to the Section Cultural Heritage)
27 pages, 1058 KB  
Article
An AI-Driven Multimodal Sensor Fusion Framework for Fraud Perception in Short-Video and Live-Streaming Platforms
by Ruixiang Zhao, Xuanhao Zhang, Jinfan Yang, Haofei Li, Zhengjia Lu, Wenrui Xu and Manzhou Li
Sensors 2026, 26(5), 1525; https://doi.org/10.3390/s26051525 - 28 Feb 2026
Abstract
With the rapid proliferation of short-video platforms and live-streaming commerce ecosystems, marketing activities are increasingly manifested through complex multimodal sensing signals. These heterogeneous sensor data streams exhibit strong temporal dependency, high cross-modal coupling, and progressive evolutionary characteristics, making early-stage fraud perception particularly challenging for conventional unimodal or static analytical paradigms. Existing approaches often fail to effectively capture weak anomalous cues emerging across multimodal channels during the initial stages of fraudulent campaigns. To address these limitations, an artificial intelligence-driven multimodal sensor perception framework is proposed for temporal fraud detection in short-video environments. A multimodal temporal alignment module is first designed to synchronize heterogeneous sensor signals with inconsistent sampling granularities. Subsequently, a shared temporal encoding network is constructed to learn evolution-aware representations across multimodal sensor sequences. On this basis, a cross-modal temporal attention fusion mechanism is introduced to dynamically weight sensor contributions at different behavioral stages. Finally, a fraud evolution modeling and early risk prediction module is developed to characterize the progressive intensification of fraudulent activities and to enable risk assessment under incomplete temporal observations. Extensive experiments conducted on real-world datasets collected from multiple mainstream short-video platforms demonstrate the effectiveness of the proposed AI-driven sensing framework. The model achieves an overall accuracy of 0.941, precision of 0.865, recall of 0.812, and F1 score of 0.838, with the AUC further reaching 0.956, significantly outperforming text-based, vision-based, temporal, and conventional multimodal baselines. In early-stage detection scenarios utilizing only the first 30% of video content, the framework maintains stable performance advantages, achieving a precision of 0.812, recall of 0.704, and F1 score of 0.754, validating its capability for proactive fraud warning. Full article
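The cross-modal temporal attention fusion step described in this abstract can be illustrated with a minimal sketch: per-modality feature sequences are scored against a shared query at each time step, and a softmax over modalities yields time-varying fusion weights. The shapes, the query mechanism, and the scaling are assumptions for illustration, not the authors' architecture:

```python
import numpy as np

# Toy sketch (an assumption, not the paper's code) of cross-modal temporal
# attention fusion: given one (T, d) feature sequence per modality, score each
# modality at each time step with a scaled dot product against a shared query,
# softmax the scores per step, and combine modalities with those weights so the
# fused sequence re-weights modalities dynamically over time.

def cross_modal_attention_fusion(modalities, query):
    """modalities: list of (T, d) arrays, one per modality; query: (d,) vector.
    Returns the fused (T, d) sequence and the (T, M) attention weights."""
    stacked = np.stack(modalities, axis=1)           # (T, M, d)
    scores = stacked @ query / np.sqrt(query.size)   # (T, M) scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)      # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over modalities per step
    fused = (weights[..., None] * stacked).sum(axis=1)  # (T, d) weighted combination
    return fused, weights
```

The design choice worth noting is that the softmax runs over the modality axis at each time step, so a modality carrying the strongest anomalous cue at a given behavioral stage dominates the fused representation only at that stage.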
(This article belongs to the Special Issue Artificial Intelligence-Driven Sensing)
Show Figures

Figure 1

28 pages, 493 KB  
Study Protocol
Psychoeducational Intervention for Sedentary Overweight Adults Who Are Fans of a Football Club: Protocol for a Pragmatic Trial
by José A. Jiménez-Chaires, Jeanette M. López-Walle, Abril Cantú-Berrueto, José Tristán and Alejandro García-Mas
Healthcare 2026, 14(5), 612; https://doi.org/10.3390/healthcare14050612 - 28 Feb 2026
Abstract
Background: Sedentary behavior and being overweight represent major public health issues associated with both physical and psychological risks. Based on self-determination theory (SDT), the psychoeducational intervention PsicoFIT—a component of the TIGREFIT program—aims to foster motivation toward physical activity, to promote healthy habits, and to reduce psychological ill-being in sedentary adults who are overweight and are fans of a football club. Methods: This protocol corresponds to a longitudinal comparative pragmatic clinical trial, designed in accordance with the recommendations of the SPIRIT Statement. The intervention, preceded by a training program for the coaches involved, will comprise 12 weekly modules delivered in two modalities: (1) face-to-face, through group sessions, and (2) semi-face-to-face, through short video capsules hosted on a digital platform. Changes associated with the intervention will be evaluated using hierarchical multiple regression and pre-post comparisons, assessing baseline and post-intervention data within and between the intervention modalities. Primary outcomes will include changes in healthy lifestyle and burnout as indicators of well-being and ill-being, respectively. Secondary outcomes will assess basic psychological need satisfaction and autonomous motivation as potential mediators of these effects, as well as the coach’s controlling interpersonal style as a possible contextual predictor. The modality of participation will be analyzed as a potential moderator of the observed changes. Finally, the acceptability and perceived contribution of the intervention will be explored through a focus group. Discussion: PsicoFIT will provide a methodological framework for designing interventions within multicomponent programs aimed at promoting healthy lifestyles and psychological well-being in sedentary adults who are overweight, considering the social context of football fandom and allowing for an exploration of the impact of the face-to-face and semi-face-to-face modalities. Future empirical application of the protocol will help verify its effectiveness, guide adaptations across contexts, and contribute to the development of evidence-based interventions. Conclusions: The implementation of PsicoFIT will allow for the evaluation of its effectiveness, psychological mechanisms, and delivery modalities, thus guiding future evidence-based interventions in sport. Full article
(This article belongs to the Special Issue Innovative and Multidisciplinary Approaches to Healthcare)