Search Results (2,434)

Search Parameters:
Keywords = video analysis

19 pages, 12956 KiB  
Article
Driver Identification and Detection of Drowsiness while Driving
by Sonia Díaz-Santos, Óscar Cigala-Álvarez, Ester Gonzalez-Sosa, Pino Caballero-Gil and Cándido Caballero-Gil
Appl. Sci. 2024, 14(6), 2603; https://doi.org/10.3390/app14062603 - 20 Mar 2024
Viewed by 71
Abstract
This paper introduces a cutting-edge approach that combines facial recognition and drowsiness detection technologies with Internet of Things capabilities, including 5G/6G connectivity, aimed at bolstering vehicle security and driver safety. The delineated two-phase project is tailored to strengthen security measures and address accidents stemming from driver distraction and fatigue. The initial phase is centered on facial recognition for driver authentication before vehicle initiation. Following successful authentication, the subsequent phase harnesses continuous eye monitoring features, leveraging edge computing for real-time processing to identify signs of drowsiness during the journey. Emphasis is placed on video-based identification and analysis to ensure robust drowsiness detection. Finally, the study highlights the potential of these innovations to revolutionize automotive security and accident prevention within the context of intelligent transport systems. Full article
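The abstract does not detail how the continuous eye monitoring is implemented; one conventional approach for video-based drowsiness detection is the eye aspect ratio (EAR), which collapses toward zero when the eye closes, so a low EAR sustained over many consecutive frames suggests drowsiness. The landmark layout, the 0.2 threshold, and the frame count below are common conventions for this technique, not details from the paper.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6 with p1/p4
    at the horizontal corners and (p2, p6), (p3, p5) as vertical pairs."""
    d = math.dist
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def is_drowsy(ear_series, thresh=0.2, min_frames=15):
    """Flag drowsiness when EAR stays below `thresh` for `min_frames` consecutive frames."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < thresh else 0
        if run >= min_frames:
            return True
    return False
```

Per-frame landmarks would come from a face-landmark detector; the thresholding itself is cheap enough for the edge-computing deployment the paper describes.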

11 pages, 1354 KiB  
Article
Exploring Indicators for Training Load Control in Young Swimmers: The Role of Inspiratory Spirometry Outcomes
by Adrián Feria-Madueño, Nuno Batalha, Germán Monterrubio-Fernández and Jose A. Parraca
J. Funct. Morphol. Kinesiol. 2024, 9(1), 53; https://doi.org/10.3390/jfmk9010053 - 18 Mar 2024
Viewed by 173
Abstract
One of the most important applications of technology in swimming is the control of training loads. Lactate monitoring, video analysis of technique, and the assessment of specific actions, e.g., the vertical jump, have provided load adaptation indicators in swimmers in recent decades. However, these indicators require a long application time, owing to their indirect procedures and the need to analyze each variable. The aim of this study was to analyze whether inspiratory spirometry values can serve as a training load control tool in swimmers. Countermovement jump (CMJ), Inspiratory Force Index (S-INDEX) and Peak Inspiratory Flow (PIF) were evaluated with a load of 3 cm H2O before, during and after a swimming performance test (critical speed test: specific warm-up, 400 m and 100 m freestyle). Positive correlations were found between the S-INDEX and jump height after warm-up, after 400 m and at the end of 100 m (Spearman = 0.470, R2 = 0.280; Spearman = 0.508, R2 = 0.392; Spearman = 0.458, R2 = 0.359, p < 0.05, respectively). Positive correlations were also found between PIF and jump height at the same time points (Spearman = 0.461, R2 = 0.305; Spearman = 0.493, R2 = 0.386; Spearman = 0.454, R2 = 0.374, p < 0.05). Both the S-INDEX and the PIF could serve as useful tools for swimmer load control, allowing coaches to make more immediate decisions. Full article
(This article belongs to the Special Issue Health and Performance through Sports at All Ages 3.0)

20 pages, 4100 KiB  
Protocol
Automated Analysis Pipeline for Extracting Saccade, Pupil, and Blink Parameters Using Video-Based Eye Tracking
by Brian C. Coe, Jeff Huang, Donald C. Brien, Brian J. White, Rachel Yep and Douglas P. Munoz
Vision 2024, 8(1), 14; https://doi.org/10.3390/vision8010014 - 18 Mar 2024
Viewed by 237
Abstract
The tremendous increase in the use of video-based eye tracking has made it possible to collect eye tracking data from thousands of participants. The traditional procedures for the manual detection and classification of saccades and for trial categorization (e.g., correct vs. incorrect) are not viable for the large datasets being collected. Additionally, video-based eye trackers allow for the analysis of pupil responses and blink behaviors. Here, we present a detailed description of our pipeline for collecting, storing, and cleaning data, as well as for organizing participant codes; these steps are fairly lab-specific but are nonetheless important precursors to establishing standardized pipelines. More importantly, we also include descriptions of the automated detection and classification of saccades, blinks, “blincades” (blinks occurring during saccades), and boomerang saccades (two nearly simultaneous saccades in opposite directions that speed-based algorithms fail to split). This stage is almost entirely task-agnostic and can be used on a wide variety of data. We additionally describe novel findings regarding post-saccadic oscillations and provide a method to achieve more accurate estimates of saccade end points. Lastly, we describe the automated behavior classification for the interleaved pro/anti-saccade task (IPAST), a task that probes voluntary and inhibitory control. This pipeline was evaluated using data collected from 592 human participants between 5 and 93 years of age, making it robust enough to handle large clinical patient datasets. In summary, this pipeline has been optimized to consistently handle large datasets obtained from diverse study cohorts (i.e., developmental, aging, clinical) and collected across multiple laboratory sites. Full article
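As a hedged sketch of the speed-based classification step such pipelines automate: a saccade can be marked wherever instantaneous gaze velocity exceeds a threshold. The 1 kHz sampling rate and 30 deg/s threshold below are illustrative assumptions, not the authors' parameters.

```python
def detect_saccades(gaze_x, fs=1000.0, vel_thresh=30.0):
    """Return (start, end) sample indices of saccades in a 1-D gaze trace (degrees)."""
    # central-difference velocity estimate in deg/s
    vel = [0.0] * len(gaze_x)
    for i in range(1, len(gaze_x) - 1):
        vel[i] = abs(gaze_x[i + 1] - gaze_x[i - 1]) * fs / 2.0
    saccades, start = [], None
    for i, v in enumerate(vel):
        if v > vel_thresh and start is None:
            start = i                      # velocity crossed the threshold: saccade onset
        elif v <= vel_thresh and start is not None:
            saccades.append((start, i))    # velocity dropped back: saccade offset
            start = None
    return saccades
```

Real pipelines add refinements this sketch omits, such as the blink and "blincade" handling and the improved end-point estimation the abstract describes.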

22 pages, 15944 KiB  
Article
Leveraging the Sensitivity of Plants with Deep Learning to Recognize Human Emotions
by Jakob Adrian Kruse, Leon Ciechanowski, Ambre Dupuis, Ignacio Vazquez and Peter A. Gloor
Sensors 2024, 24(6), 1917; https://doi.org/10.3390/s24061917 - 16 Mar 2024
Viewed by 388
Abstract
Recent advances in artificial intelligence combined with behavioral sciences have led to the development of cutting-edge tools for recognizing human emotions based on text, video, audio, and physiological data. However, these data sources are expensive, intrusive, and regulated, unlike plants, which have been shown to be sensitive to human steps and sounds. A methodology to use plants as human emotion detectors is proposed. Electrical signals from plants were tracked and labeled based on video data. The labeled data were then used for classification, and the MLP, biLSTM, MFCC-CNN, MFCC-ResNet, Random Forest, 1-Dimensional CNN, and biLSTM (without windowing) models were tuned using a grid search algorithm with cross-validation. Finally, the best-parameterized models were trained and used on the test set for classification. The performance of this methodology was measured via a case study with 54 participants who were watching an emotionally charged video; as ground truth, their facial emotions were simultaneously measured using facial emotion analysis. The Random Forest model shows the best performance, particularly in recognizing high-arousal emotions, achieving an overall weighted accuracy of 55.2% and demonstrating high weighted recall in emotions such as fear (61.0%) and happiness (60.4%). The MFCC-ResNet model offers decently balanced results, with an accuracy of 0.318 and a recall of 0.324; with this model, fear and anger were recognized with 75% and 50% recall, respectively. Thus, using plants as an emotion recognition tool seems worth investigating, addressing both cost and privacy concerns. Full article
(This article belongs to the Section Intelligent Sensors)

15 pages, 899 KiB  
Article
Advancing Temporal Action Localization with a Boundary Awareness Network
by Jialiang Gu, Yang Yi and Min Wang
Electronics 2024, 13(6), 1099; https://doi.org/10.3390/electronics13061099 - 16 Mar 2024
Viewed by 209
Abstract
Temporal action localization (TAL) is crucial in video analysis, yet presents notable challenges. This process focuses on the precise identification and categorization of action instances within lengthy, raw videos. A key difficulty in TAL lies in determining the exact start and end points of actions, owing to the often unclear boundaries of these actions in real-world footage. Existing methods tend to take insufficient account of changes in action boundary features. To tackle these issues, we propose a boundary awareness network (BAN) for TAL. Specifically, the BAN mainly consists of a feature encoding network, coarse pyramidal detection to obtain preliminary proposals and action categories, and fine-grained detection with a Gaussian boundary module (GBM) to get more valuable boundary information. The GBM contains a novel Gaussian boundary pooling, which serves to aggregate the relevant features of the action boundaries and to capture discriminative boundary and actionness features. Furthermore, we introduce a novel approach named Boundary Differentiated Learning (BDL) to ensure our model’s capability in accurately identifying action boundaries across diverse proposals. Comprehensive experiments on both the THUMOS14 and ActivityNet v1.3 datasets, where our BAN model achieved an increase in mean Average Precision (mAP) by 1.6% and 0.2%, respectively, over existing state-of-the-art methods, illustrate that our approach not only improves upon the current state of the art but also achieves outstanding performance. Full article
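The Gaussian boundary pooling idea can be illustrated in a few lines: features near a candidate boundary are aggregated with weights drawn from a Gaussian centred on it, so the frames closest to the boundary dominate the pooled descriptor. The flat per-frame feature layout and the sigma value below are assumptions for illustration; the paper's GBM operates on learned temporal feature maps inside the network.

```python
import math

def gaussian_boundary_pool(features, center, sigma=2.0):
    """Weighted average of per-frame feature vectors around index `center`.

    features: list of equal-length feature vectors, one per frame."""
    dim = len(features[0])
    pooled = [0.0] * dim
    total = 0.0
    for t, feat in enumerate(features):
        w = math.exp(-((t - center) ** 2) / (2.0 * sigma ** 2))  # Gaussian weight
        total += w
        for d in range(dim):
            pooled[d] += w * feat[d]
    return [v / total for v in pooled]
```

A smaller sigma concentrates the pooling on the boundary frame itself, which is the knob that trades boundary sharpness against context.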

16 pages, 1791 KiB  
Article
Comparing Video Analysis to Computerized Detection of Limb Position for the Diagnosis of Movement Control during Back Squat Exercise with Overload
by André B. Peres, Andrei Sancassani, Eliane A. Castro, Tiago A. F. Almeida, Danilo A. Massini, Anderson G. Macedo, Mário C. Espada, Víctor Hernández-Beltrán, José M. Gamonales and Dalton M. Pessôa Filho
Sensors 2024, 24(6), 1910; https://doi.org/10.3390/s24061910 - 16 Mar 2024
Viewed by 236
Abstract
Incorrect limb position while lifting heavy weights might compromise athlete success during weightlifting performance, similar to the way that it increases the risk of muscle injuries during resistance exercises, regardless of the individual’s level of experience. However, practitioners might not have the necessary background knowledge for self-supervision of limb position and adjustment of the lifting position when improper movement occurs. Therefore, the computerized analysis of movement patterns might assist people in detecting changes in limb position during exercises with different loads or enhance the analysis of an observer with expertise in weightlifting exercises. In this study, hidden Markov models (HMMs) were employed to automate the detection of joint position and barbell trajectory during back squat exercises. Ten volunteers performed three lift movements each with a 0, 50, and 75% load based on body weight. A smartphone was used to record the movements in the sagittal plane, providing information for the analysis of variance and identifying significant position changes by video analysis (p < 0.05). Data from individuals performing the same movements with no added weight load were used to train the HMMs to identify changes in the pattern. A comparison of HMMs and human experts revealed between 40% and 90% agreement, indicating the reliability of HMMs for identifying changes in the control of movements with added weight load. In addition, the results highlighted that HMMs can detect changes imperceptible to the human visual analysis. Full article
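A minimal sketch of how an HMM trained on unloaded movements can flag pattern changes: score an observation sequence with the forward algorithm, and treat a low log-likelihood under load as a deviation from the learned pattern. The two-state toy parameters below are illustrative, not the fitted models from the study.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Scaled forward algorithm: log P(obs) under a discrete HMM.

    obs: sequence of symbol indices; start_p[s], trans_p[s][s2], emit_p[s][o]
    are the usual initial, transition, and emission probabilities."""
    states = range(len(start_p))
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    ll = 0.0
    for o in obs[1:]:
        norm = sum(alpha)              # rescale to avoid numerical underflow
        ll += math.log(norm)
        alpha = [a / norm for a in alpha]
        alpha = [sum(alpha[s] * trans_p[s][s2] for s in states) * emit_p[s2][o]
                 for s2 in states]
    return ll + math.log(sum(alpha))
```

A sequence resembling the training pattern scores a higher log-likelihood than one that deviates from it, which is the kind of signal the study compares against human expert judgments.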

29 pages, 16373 KiB  
Article
A Crowd Movement Analysis Method Based on Radar Particle Flow
by Li Zhang, Lin Cao, Zongmin Zhao, Dongfeng Wang and Chong Fu
Sensors 2024, 24(6), 1899; https://doi.org/10.3390/s24061899 - 15 Mar 2024
Viewed by 268
Abstract
Crowd movement analysis (CMA) is a key technology in the field of public safety. It provides a reference for identifying potential hazards in public places by analyzing crowd aggregation and dispersion behavior. Traditional video processing techniques are susceptible to factors such as environmental lighting and depth of field when analyzing crowd movements, and so cannot accurately locate the source of events. Radar, on the other hand, offers all-weather distance and angle measurements, effectively compensating for the shortcomings of video surveillance. This paper proposes a crowd motion analysis method based on radar particle flow (RPF). First, radar particle flow is extracted from adjacent frames of millimeter-wave radar point sets using the optical flow method. Then, a new concept, the micro-source, is defined to describe whether any two RPF vectors originate from or reach the same location. Finally, in each local area, the internal micro-sources are counted to form a local diffusion potential, which characterizes the movement state of the crowd. The proposed algorithm is validated in real scenarios. By analyzing and processing radar data on aggregation, dispersion, and normal movements, the algorithm is able to effectively identify these movements with an accuracy rate of no less than 88%. Full article
(This article belongs to the Section Radar Sensors)
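The first step of the method can be sketched with a simple matching rule: estimate a motion vector for each radar point by pairing it with the nearest point in the next frame. The paper derives the particle flow with an optical-flow formulation on point sets; plain nearest-neighbour matching here is a deliberate simplification for illustration.

```python
def particle_flow(frame_a, frame_b):
    """For each (x, y) point in frame_a, return its displacement vector
    to the closest point in frame_b."""
    flows = []
    for ax, ay in frame_a:
        # nearest neighbour in the next frame by squared Euclidean distance
        bx, by = min(frame_b, key=lambda p: (p[0] - ax) ** 2 + (p[1] - ay) ** 2)
        flows.append((bx - ax, by - ay))
    return flows
```

Vectors that converge on (or diverge from) the same location would then be counted as micro-sources to build the local diffusion potential described in the abstract.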

11 pages, 254 KiB  
Review
When Video Improves Learning in Higher Education
by Sven Trenholm and Fernando Marmolejo-Ramos
Educ. Sci. 2024, 14(3), 311; https://doi.org/10.3390/educsci14030311 - 15 Mar 2024
Viewed by 412
Abstract
The use of video in education has become ubiquitous as technological developments have markedly improved the ability and facility to create, deliver, and view videos. The concomitant pedagogical transformation has created a sense of urgency regarding how video may be used to advance learning. Initial reviews have suggested only limited potential for the use of video in higher education. More recently, a systematic review of studies on the effect of video use on learning in higher education, published in the journal Review of Educational Research, found, overall, effects to be positive. In the present paper, we critique this study. We reveal significant gaps in the study methodology and write-up and use a cognitive processing lens to critically assess and re-analyse study data. We found the results of this study to be only applicable to learning requiring lower-level cognitive processing and conclude, consistent with prior research, that claims of a universal benefit are not yet warranted. Full article
(This article belongs to the Special Issue Current Challenges in Digital Higher Education)
6 pages, 243 KiB  
Opinion
BeeOpen—An Open Data Sharing Ecosystem for Apiculture
by Shreyas M. Guruprasad and Benjamin Leiding
Agriculture 2024, 14(3), 470; https://doi.org/10.3390/agriculture14030470 - 14 Mar 2024
Viewed by 323
Abstract
The digital transformation of apiculture initially encompasses Internet of Things (IoT) systems, incorporating sensor technologies to capture and transmit bee-centric data. Subsequently, data analysis assumes a vital role by establishing correlations between the collected data and the biological conditions of beehives, often leveraging artificial intelligence (AI) approaches. The field of precision bee monitoring has witnessed a surge in the collection of large volumes of diverse data, ranging from the hive weight and temperature to health status, queen bee presence, pests, and overall hive activity. Further, these datasets’ heterogeneous nature and lack of standardization present challenges in applying machine learning techniques directly to extract valuable insights. To address this issue, the envisioned ecosystem serves as an open and collaborative information platform, facilitating the exchange and utilization of bee monitoring datasets. The data storage architecture can process a large variety of data at high frequency, e.g., images, videos, audio, and time series data. The platform serves as a repository, providing crucial information about the condition of beehives, health assessments, pest attacks, swarming patterns, and other relevant data. Notably, this information portal is managed through a citizen scientist initiative. By consolidating data from various sources, including beekeepers, researchers, and monitoring systems, the platform offers a holistic view of the bee population’s status in any given area. Full article
(This article belongs to the Special Issue Sensor-Based Precision Agriculture)

14 pages, 262 KiB  
Article
Health-Related Quality of Life in Relation to Health Behaviour Patterns among Canadian Children
by Xiuyun Wu, Arto Ohinmaa, Paul J. Veugelers and Katerina Maximova
Children 2024, 11(3), 346; https://doi.org/10.3390/children11030346 - 14 Mar 2024
Viewed by 270
Abstract
Poor health behaviours in childhood, including sedentary behaviour, low physical activity levels, inadequate sleep, and unhealthy diet, are established risk factors for both chronic diseases and mental illness. Scant studies have examined the importance of such health behaviour patterns for health-related quality of life (HRQoL). This study aimed to examine the association of health behaviour patterns with HRQoL among Canadian children. Data from 2866 grade five students were collected through a provincially representative school-based survey of the 2014 Raising Healthy Eating and Active Living Kids in Alberta study. Latent class analysis was used to identify health behaviour patterns based on 11 lifestyle behaviours: sedentary behaviour (using a computer, playing video games, watching TV), physical activity (with and without a coach), sleep (bedtime on weekdays and weekends), and diet (fruit and vegetables intake, grain products, milk and alternatives, meat and alternatives). Multivariable multilevel logistic regression was applied to examine the associations of health behaviour patterns with HRQoL. Three groupings with distinct health behaviour patterns were identified: the first grouping (55%) is characterized by relatively healthy levels of sedentary behaviour, physical activity, and sleep, but a less healthy diet (“activity-focused” group). The second grouping (24%) is characterized by a relatively healthy diet, but moderately healthy levels of sedentary behaviour, physical activity, and sleep (“diet-focused” group). The third grouping (21%) is characterized by mostly unhealthy behaviours (“not health-focused” group). Students in the third and second groupings (“not health-focused” and “diet-focused”) were more likely to report lower HRQoL relative to students in the first grouping (“activity-focused”). 
The findings suggest that health promotion strategies may be more effective when considering the patterns of health behaviours as distinct targets in the efforts to improve HRQoL. Future research should include prospective observational and intervention studies to further elucidate the relationship between health behaviour patterns and HRQoL among children. Full article
(This article belongs to the Section Global and Public Health)
19 pages, 954 KiB  
Article
Enhanced Seamless Video Fusion: A Convolutional Pyramid-Based 3D Integration Algorithm
by Yueheng Zhang, Jing Yuan and Changxiang Yan
Sensors 2024, 24(6), 1852; https://doi.org/10.3390/s24061852 - 14 Mar 2024
Viewed by 287
Abstract
Video fusion aims to synthesize video footage from different sources into a unified, coherent output. It plays a key role in areas such as video editing and special effects production. The challenge is to ensure the quality and naturalness of synthetic video, especially when dealing with footage of different sources and qualities. Researchers continue to strive to optimize algorithms to adapt to a variety of complex application scenarios and improve the effectiveness and applicability of video fusion. We introduce an algorithm based on a convolution pyramid and propose a 3D video fusion algorithm that looks for the potential function closest to the gradient field in the least square sense. The 3D Poisson equation is solved to realize seamless video editing. This algorithm uses a multi-scale method and wavelet transform to approximate linear time. Through numerical optimization, a small core is designed to deal with large target filters, and multi-scale transformation analysis and synthesis are realized. In terms of seamless video fusion, it shows better performance than existing algorithms. Compared with editing multiple 2D images into video after Poisson fusion, the video quality produced by this method is very close, and the computing speed of the video fusion is improved to a certain extent. Full article
(This article belongs to the Section Optical Sensors)
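The core least-squares idea can be shown in one dimension: find the signal whose finite-difference gradient is closest to a target gradient field, subject to fixed boundary values. This reduces to a tridiagonal (1-D Poisson) system, solved directly below with the Thomas algorithm; the paper approximates the analogous 3-D solve with convolution pyramids for speed. The sketch is a toy under those stated simplifications.

```python
def poisson_1d(grad, left, right):
    """Least-squares fit of u (with u[0] = left, u[-1] = right) to the
    target gradients grad[i] ~= u[i+1] - u[i].

    The normal equations form the tridiagonal system
    -u[k-1] + 2*u[k] - u[k+1] = grad[k-1] - grad[k]."""
    n = len(grad)                      # unknowns are u[1..n-1]
    m = n - 1
    b = [grad[k - 1] - grad[k] for k in range(1, n)]
    b[0] += left                       # fold the fixed boundaries into the RHS
    b[-1] += right
    # Thomas algorithm for the (-1, 2, -1) tridiagonal system
    c = [0.0] * m                      # modified super-diagonal
    d = [0.0] * m                      # modified right-hand side
    c[0], d[0] = -0.5, b[0] / 2.0
    for i in range(1, m):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (b[i] + d[i - 1]) / denom
    u = [0.0] * m
    u[-1] = d[-1]
    for i in range(m - 2, -1, -1):     # back-substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return [left] + u + [right]
```

When the target gradients are consistent with the boundary values the solve reproduces them exactly; when they are not (as at a fusion seam), the error is spread smoothly across the signal, which is what makes the seam invisible.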

17 pages, 687 KiB  
Article
Enhancing Embedded Object Tracking: A Hardware Acceleration Approach for Real-Time Predictability
by Mingyang Zhang, Kristof Van Beeck and Toon Goedemé
J. Imaging 2024, 10(3), 70; https://doi.org/10.3390/jimaging10030070 - 13 Mar 2024
Viewed by 363
Abstract
While Siamese object tracking has witnessed significant advancements, its hard real-time behaviour on embedded devices remains inadequately addressed. In many application cases, an embedded implementation should not only have a minimal execution latency, but this latency should ideally also have zero variance, i.e., be predictable. This study aims to address this issue by meticulously analysing real-time predictability across different components of a deep-learning-based video object tracking system. Our detailed experiments not only indicate the superiority of Field-Programmable Gate Array (FPGA) implementations in terms of hard real-time behaviour but also unveil important time predictability bottlenecks. We introduce dedicated hardware accelerators for key processes, focusing on depth-wise cross-correlation and padding operations, utilizing high-level synthesis (HLS). Implemented on a KV260 board, our enhanced tracker exhibits not only a speed up, with a factor of 6.6, in mean execution time but also significant improvements in hard real-time predictability by yielding 11 times less latency variation as compared to our baseline. A subsequent analysis of power consumption reveals our approach’s contribution to enhanced power efficiency. These advancements underscore the crucial role of hardware acceleration in realizing time-predictable object tracking on embedded systems, setting new standards for future hardware–software co-design endeavours in this domain. Full article
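Depth-wise cross-correlation, one of the operations the accelerators target, is easy to state in scalar code: each channel of the search-region feature map is correlated with the matching channel of the template, with no mixing across channels. Shapes here are illustrative; this is the naive loop nest that the FPGA implementation parallelizes in hardware.

```python
def depthwise_xcorr(search, template):
    """search: [C][H][W], template: [C][h][w] -> output: [C][H-h+1][W-w+1].

    Each output value is the sliding dot product of one template channel
    over the corresponding search channel."""
    C = len(search)
    h, w = len(template[0]), len(template[0][0])
    H, W = len(search[0]), len(search[0][0])
    out = []
    for c in range(C):
        plane = []
        for i in range(H - h + 1):
            row = []
            for j in range(W - w + 1):
                row.append(sum(search[c][i + di][j + dj] * template[c][di][dj]
                               for di in range(h) for dj in range(w)))
            plane.append(row)
        out.append(plane)
    return out
```

Because the channels are independent, the C outer iterations can all run in parallel, which is exactly the structure hardware acceleration exploits.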

24 pages, 7818 KiB  
Article
Assessment of Factors Affecting Pavement Rutting in Pakistan Using Finite Element Method and Machine Learning Models
by Xiao Hu, Azher Ishaq, Afaq Khattak and Feng Chen
Sustainability 2024, 16(6), 2362; https://doi.org/10.3390/su16062362 - 13 Mar 2024
Viewed by 347
Abstract
This study examines the effects of environmental factors, vehicle dynamics, and loading conditions on pavement structures, aiming to understand and predict their impact. The susceptibility of asphalt pavement to temperature variations, vehicle speed, and loading cycles is explored, with a particular focus on the lateral distribution of wheel tracks in driving and passing lanes. Utilizing video analysis and finite element modelling (FEM) through ABAQUS 2022 software, multiple input factors, such as speed (60, 80 and 100 km/h), loading cycles (100,000 to 500,000), and temperature range (0 °C to 50 °C), are applied to observe the maximum rutting (17.89 mm to 24.7 mm). The rut depth is directly proportional to the loading cycles and temperature, but inversely related to vehicle speed. Moreover, interpretable machine learning models, particularly the Bayesian-optimized light gradient boosting machine (LGBM) model, demonstrate superior predictive performance for rut depth. Insights from SHAP interpretation highlight the significant roles of temperature and loading frequency in pavement deformation. This study concludes with a comprehensive understanding of how these factors impact road structures in Pakistan. Its implications extend to valuable insights for optimizing road design, offering a significant contribution to enhancing the durability and sustainability of road infrastructure in the region. Full article
(This article belongs to the Section Sustainable Transportation)

27 pages, 10017 KiB  
Article
A Self-Adaptive Automatic Incident Detection System for Road Surveillance Based on Deep Learning
by César Bartolomé-Hornillos, Luis M. San-José-Revuelta, Javier M. Aguiar-Pérez, Carlos García-Serrada, Eduardo Vara-Pazos and Pablo Casaseca-de-la-Higuera
Sensors 2024, 24(6), 1822; https://doi.org/10.3390/s24061822 - 12 Mar 2024
Viewed by 361
Abstract
We present an automatic road incident detector characterised by a low computational complexity for easy implementation in affordable devices, automatic adaptability to changes in scenery and road conditions, and automatic detection of the most common incidents (vehicles with abnormal speed, pedestrians or objects falling on the road, vehicles stopped on the shoulder, and detection of kamikaze vehicles). To achieve these goals, different tasks have been addressed: lane segmentation, identification of traffic directions, and elimination of unnecessary objects in the foreground. The proposed system has been tested on a collection of videos recorded in real scenarios with real traffic, including areas with different lighting. Self-adaptability (plug and play) to different scenarios has been tested using videos with significant scene changes. The achieved system can process a minimum of 80 video frames within the camera’s field of view, covering a distance of 400 m, all within a span of 12 s. This capability ensures that vehicles travelling at speeds of 120 km/h are seamlessly detected with more than enough margin. Additionally, our analysis has revealed a substantial improvement in incident detection with respect to previous approaches. Specifically, an increase in accuracy of 2–5% in automatic mode and 2–7% in semi-automatic mode. The proposed classifier module only needs 2.3 MBytes of GPU to carry out the inference, thus allowing implementation in low-cost devices. Full article
(This article belongs to the Section Vehicular Sensing)
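The real-time margin quoted in the abstract follows from simple kinematics: at 120 km/h a vehicle crosses the 400 m field of view in 12 s, so processing at least 80 frames in that window corresponds to roughly 6.7 frames per second.

```python
# Verify the abstract's timing claim from the stated numbers.
speed_ms = 120 / 3.6           # 120 km/h expressed in m/s (~33.3 m/s)
transit_s = 400 / speed_ms     # seconds to cross the 400 m field of view
min_fps = 80 / transit_s       # frame rate implied by 80 frames per transit
```

Any detector sustaining this frame rate therefore sees a 120 km/h vehicle across the full field of view, which is the "more than enough margin" the authors claim.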

11 pages, 225 KiB  
Article
Understanding Experiences of Youth with Long COVID: A Qualitative Approach
by Chelsea Torres, Kensei Maeda, Madeline Johnson and Leonard A. Jason
Children 2024, 11(3), 335; https://doi.org/10.3390/children11030335 - 12 Mar 2024
Viewed by 2417
Abstract
There is limited information on the specific impacts of Long COVID in youth. Long COVID presents as persisting or new symptoms following initial COVID-19 infection. The aim of this study was to better understand how children and their families describe their experiences seeking diagnosis and support following the onset of symptoms of Long COVID. Six children and five caregivers located in the United States participated in this study. Study procedures included an online video interview with caregiver–child dyads. Interview transcriptions were then analyzed using a conventional approach to content analysis, with two independent coders generating themes. Eight themes emerged from this analysis including the severity of illness and symptomatology, difficulty surrounding the diagnostic process and not being believed, the impact on family and social connections, poor school functioning, positive coping, subsequent positive medical experiences, mental health, and knowledge of the medical field and healthcare experience. Themes revealed difficulty for youth and families in navigating the medical system and functioning in areas of daily life as well as areas of positive experiences related to coping and medical involvement. These findings also highlighted areas of needed improvement for the medical community and for research on Long COVID in youth. Full article
