Search Results (13)

Search Parameters:
Keywords = blind video quality assessment

10 pages, 413 KiB  
Protocol
V-CARE (Virtual Care After REsuscitation): Protocol for a Randomized Feasibility Study of a Virtual Psychoeducational Intervention After Cardiac Arrest—A STEPCARE Sub-Study
by Marco Mion, Gisela Lilja, Mattias Bohm, Erik Blennow Nordström, Dorit Töniste, Katarina Heimburg, Paul Swindell, Josef Dankiewicz, Markus B. Skrifvars, Niklas Nielsen, Janus C. Jakobsen, Judith White, Matt P. Wise, Nikos Gorgoraptis, Meadbh Keenan, Philip Hopkins, Nilesh Pareek, Maria Maccaroni and Thomas R. Keeble
J. Clin. Med. 2025, 14(13), 4429; https://doi.org/10.3390/jcm14134429 - 22 Jun 2025
Viewed by 495
Abstract
Background: Out-of-hospital cardiac arrest (OHCA) survivors and their relatives may face challenges following hospital discharge, relating to mood, cognition, and returning to normal day-to-day activities. Identified research gaps include a lack of knowledge around what type of intervention is needed to best navigate recovery. In this study, we investigate the feasibility and patient acceptability of a new virtual psychoeducational group intervention for OHCA survivors and their relatives and compare it to a control group receiving a digital information booklet. Methods: V-CARE is a comparative, single-blind randomized pilot trial including participants at selected sites of the STEPCARE trial, in the United Kingdom and Sweden. Inclusion criteria are a modified Rankin Scale (mRS) ≤ 3 at 30-day follow-up; no diagnosis of dementia; and not experiencing an acute psychiatric episode. One caregiver per patient is invited to participate optionally. The intervention group in V-CARE receives four semi-structured, one-hour-long, psychoeducational sessions delivered remotely via video call by a trained clinician once a week, 2–3 months after hospital discharge. The sessions cover understanding cardiac arrest; coping with fatigue and memory problems; managing low mood and anxiety; and returning to daily life. The control group receives an information booklet focused on fatigue, memory/cognitive problems, mental health, and practical coping strategies. Results: Primary: feasibility (number of patients consented) and acceptability (retention rate); secondary: satisfaction with care (Client Satisfaction Questionnaire 8 item), self-management skills (Self-Management Assessment Scale) and, where available, health-related outcomes assessed in the STEPCARE Extended Follow-up sub-study including cognition, fatigue, mood, quality of life, and return to work. Conclusions: If preliminary insights from the V-CARE trial suggest the intervention to be feasible and acceptable, the results will be used to design a larger trial aimed at informing future interventions to support OHCA recovery. Full article

24 pages, 8143 KiB  
Article
PIFall: A Pressure Insole-Based Fall Detection System for the Elderly Using ResNet3D
by Wei Guo, Xiaoyang Liu, Chenghong Lu and Lei Jing
Electronics 2024, 13(6), 1066; https://doi.org/10.3390/electronics13061066 - 13 Mar 2024
Cited by 2 | Viewed by 2997
Abstract
Falls among the elderly are a significant public health issue, resulting in about 684,000 deaths annually. Such incidents often lead to severe consequences including fractures, contusions, and cranial injuries, immensely affecting the quality of life and independence of the elderly. Existing fall detection methods using cameras and wearable sensors face challenges such as privacy concerns, blind spots in vision and being troublesome to wear. In this paper, we propose PIFall, a Pressure Insole-Based Fall Detection System for the Elderly, utilizing the ResNet3D algorithm. Initially, we design and fabricate a pair of insoles equipped with low-cost resistive films to measure plantar pressure, arranging 5×9 pressure sensors on each insole. Furthermore, we present a fall detection method that combines ResNet(2+1)D with an insole-based sensor matrix, utilizing time-series ‘stress videos’ derived from pressure map data as input. Lastly, we collect data on 12 different actions from five subjects, including fall risk activities specifically designed to be easily confused with actual falls. The system achieves an overall accuracy of 91% in detecting falls and 94% in identifying specific fall actions. Additionally, feedback is gathered from eight elderly individuals using a structured questionnaire to assess user experience and satisfaction with the pressure insoles. Full article
(This article belongs to the Special Issue Wearable Sensing Devices and Technology)
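The abstract above describes converting sequences of 5×9 insole pressure maps into time-series "stress videos" for an R(2+1)D classifier over 12 actions. The following is a minimal sketch of that data path, not the authors' implementation: the spatial upsampling size, channel replication, and use of torchvision's stock r2plus1d_18 backbone are assumptions made only to illustrate the idea.

```python
# Minimal sketch (not the authors' code): turning a sequence of 5x9 insole
# pressure frames into a clip tensor for an R(2+1)D classifier, assuming
# PyTorch/torchvision and the 12-action setup described in the abstract.
import torch
import torch.nn.functional as F
from torchvision.models.video import r2plus1d_18

def pressure_clip_to_tensor(frames: torch.Tensor, size: int = 112) -> torch.Tensor:
    """frames: (T, 5, 9) raw pressure maps -> (1, 3, T, size, size) clip.

    The upsampling size and 3-channel replication are assumptions made so the
    tiny sensor grid fits a stock video backbone.
    """
    t = frames.unsqueeze(1).float()                      # (T, 1, 5, 9)
    t = F.interpolate(t, size=(size, size), mode="bilinear", align_corners=False)
    t = t.repeat(1, 3, 1, 1)                             # replicate to 3 channels
    return t.permute(1, 0, 2, 3).unsqueeze(0)            # (1, 3, T, H, W)

model = r2plus1d_18(num_classes=12)                      # 12 actions per the abstract
clip = pressure_clip_to_tensor(torch.rand(16, 5, 9))     # 16-frame dummy clip
logits = model(clip)                                     # (1, 12)
print(logits.argmax(dim=1))
```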

17 pages, 5156 KiB  
Article
Blind Video Quality Assessment for Ultra-High-Definition Video Based on Super-Resolution and Deep Reinforcement Learning
by Zefeng Ying, Da Pan and Ping Shi
Sensors 2023, 23(3), 1511; https://doi.org/10.3390/s23031511 - 29 Jan 2023
Cited by 6 | Viewed by 3306
Abstract
Ultra-high-definition (UHD) video has brought new challenges to objective video quality assessment (VQA) due to its high resolution and high frame rate. Most existing VQA methods are designed for non-UHD videos—when they are employed to deal with UHD videos, the processing speed will be slow and the global spatial features cannot be fully extracted. In addition, these VQA methods usually segment the video into multiple segments, predict the quality score of each segment, and then average the quality score of each segment to obtain the quality score of the whole video. This breaks the temporal correlation of the video sequences and is inconsistent with the characteristics of human visual perception. In this paper, we present a no-reference VQA method, aiming to effectively and efficiently predict quality scores for UHD videos. First, we construct a spatial distortion feature network based on a super-resolution model (SR-SDFNet), which can quickly extract the global spatial distortion features of UHD videos. Then, to aggregate the spatial distortion features of each UHD frame, we propose a time fusion network based on a reinforcement learning model (RL-TFNet), in which the actor network continuously combines multiple frame features extracted by SR-SDFNet and outputs an action to adjust the current quality score to approximate the subjective score, and the critic network outputs action values to optimize the quality perception of the actor network. Finally, we conduct large-scale experiments on UHD VQA databases and the results reveal that, compared to other state-of-the-art VQA methods, our method achieves competitive quality prediction performance with a shorter runtime and fewer model parameters. Full article
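The temporal fusion step above has the actor network repeatedly nudge a running quality score toward the subjective score, one frame at a time. The sketch below illustrates that actor-style incremental update only; the feature dimension, layer widths, and tanh-bounded step size are assumptions, and the critic, training loop, and SR-SDFNet feature extractor are omitted.

```python
# Illustrative sketch of the actor-style temporal fusion described above: the
# actor sees one frame's spatial-distortion features plus the running score
# and emits a bounded adjustment. Sizes are assumptions, not the paper's spec.
import torch
import torch.nn as nn

class ScoreActor(nn.Module):
    def __init__(self, feat_dim: int = 256, max_step: float = 0.1):
        super().__init__()
        self.max_step = max_step
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),
        )

    def forward(self, frame_feat: torch.Tensor, score: torch.Tensor) -> torch.Tensor:
        # Bounded adjustment of the running quality score for one frame.
        delta = self.max_step * self.net(torch.cat([frame_feat, score], dim=-1))
        return (score + delta).clamp(0.0, 1.0)

actor = ScoreActor()
score = torch.full((1, 1), 0.5)            # start from a neutral quality estimate
for frame_feat in torch.rand(30, 1, 256):  # 30 frames of dummy per-frame features
    score = actor(frame_feat, score)
print(score)                               # final video-level quality in [0, 1]
```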

11 pages, 2095 KiB  
Article
Smartphone Slit Lamp Imaging—Usability and Quality Assessment
by Daniel Rudolf Muth, Frank Blaser, Nastasia Foa, Pauline Scherm, Wolfgang Johann Mayer, Daniel Barthelmes and Sandrine Anne Zweifel
Diagnostics 2023, 13(3), 423; https://doi.org/10.3390/diagnostics13030423 - 24 Jan 2023
Cited by 6 | Viewed by 3230
Abstract
Purpose: To assess the usability and image quality of a smartphone adapter for direct slit lamp imaging. Methods: A single-center, prospective, clinical study conducted in the Department of Ophthalmology at the University Hospital Zurich, Switzerland. The smartphone group consisted of 26 medical staff (consultants, residents, and students). The control group consisted of one ophthalmic photographer. Both groups took images of the anterior and the posterior eye segment of the same proband. The control group used professional photography equipment. The participant group used an Apple iPhone 11 mounted on a slit lamp via a removable SlitREC smartphone adapter (Custom Surgical GmbH, Munich, Germany). The image quality was graded independently by two blinded ophthalmologists on a scale from 0 (low) to 10 (high quality). Images with a score ≥ 7.0/10 were considered as good as the reference images. The acquisition time was measured. A questionnaire on usability and experience in smartphone and slit lamp use was taken by all of the participants. Results: Each participant had three attempts at the same task. The overall smartphone quality was 7.2/10 for the anterior and 6.4/10 for the posterior segment. The subjectively perceived difficulty decreased significantly over the course of three attempts (Kendall’s W). Image quality increased as well but did not improve significantly from take 1 to take 3. However, the image quality of the posterior segment was significantly, positively correlated (Spearman’s Rho) with work experience. The mean acquisition time for anterior segment imaging was faster in the smartphone group compared to the control group (156 vs. 206 s). It was vice versa for the posterior segment (180 vs. 151 s). Conclusion: Slit lamp imaging with the presented smartphone adapter provides high-quality imaging of the anterior segment. Posterior segment imaging remains challenging in terms of image quality. The adapter constitutes a cost-effective, portable, easy-to-use solution for recording ophthalmic photos and videos. It can facilitate clinical documentation and communication among colleagues and with the patient especially outside normal consultation hours. Direct slit lamp imaging allows for time to be saved and increases the independence of ophthalmologists in terms of patient mobility and the availability of photographic staff. Full article

10 pages, 1984 KiB  
Article
Blind Image Quality Assessment with Deep Learning: A Replicability Study and Its Reproducibility in Lifelogging
by Ricardo Ribeiro, Alina Trifan and António J. R. Neves
Appl. Sci. 2023, 13(1), 59; https://doi.org/10.3390/app13010059 - 21 Dec 2022
Cited by 4 | Viewed by 2155
Abstract
The wide availability and small size of different types of sensors have allowed for the acquisition of a huge amount of data about a person’s life in real time. With these data, usually denoted as lifelog data, we can analyze and understand personal experiences and behaviors. Most of the lifelog research has explored the use of visual data. However, a considerable number of these images or videos are affected by different types of degradation or noise due to the non-controlled acquisition process. Image Quality Assessment can play an essential role in lifelog research to deal with these data. We present in this paper a twofold study on the topic of blind image quality assessment. On the one hand, we explore the replication of the training process of a state-of-the-art deep learning model for blind image quality assessment in the wild. On the other hand, we present evidence that blind image quality assessment is an important pre-processing step to be further explored in the context of information retrieval in lifelogging applications. We consider that our efforts have been successful in the replication of the model training process, achieving inference results similar to those of the original version, while acknowledging a fair number of assumptions that we had to consider. Moreover, these assumptions motivated an extensive additional analysis that led to significant insights into the influence of both batch size and loss functions when training deep learning models in this context. We include preliminary results of the replicated model on a lifelogging dataset, as a potential reproducibility aspect to be considered. Full article
(This article belongs to the Special Issue Computer Vision-Based Intelligent Systems: Challenges and Approaches)
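The replication study above singles out batch size and the regression loss as the training choices that most influenced its results. The sketch below only marks where those two knobs enter a standard training step; the tiny backbone, dummy data, and plain MSE loss are placeholders, not the replicated model or dataset.

```python
# Minimal sketch of the two knobs the study found influential when replicating
# BIQA training: batch size and the regression loss. The backbone, data, and
# the specific loss (plain MSE here) are placeholders, not the original setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

images = torch.rand(256, 3, 224, 224)              # dummy images
mos = torch.rand(256, 1)                           # dummy mean-opinion scores
loader = DataLoader(TensorDataset(images, mos),
                    batch_size=16, shuffle=True)   # knob #1: batch size

model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
criterion = nn.MSELoss()                           # knob #2: loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for batch, target in loader:
    optimizer.zero_grad()
    loss = criterion(model(batch), target)
    loss.backward()
    optimizer.step()
```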

12 pages, 445 KiB  
Article
Nintendo Switch Joy-Cons’ Infrared Motion Camera Sensor for Training Manual Dexterity in People with Multiple Sclerosis: A Randomized Controlled Trial
by Alicia Cuesta-Gómez, Paloma Martín-Díaz, Patricia Sánchez-Herrera Baeza, Alicia Martínez-Medina, Carmen Ortiz-Comino and Roberto Cano-de-la-Cuerda
J. Clin. Med. 2022, 11(12), 3261; https://doi.org/10.3390/jcm11123261 - 7 Jun 2022
Cited by 19 | Viewed by 4501
Abstract
Background: The Nintendo Switch® (NS) is the ninth video game console developed by Nintendo®. Joy-Cons® are the primary game controllers for the NS® video game console, and they have an infrared motion camera sensor that allows capturing the patient’s hands without the need to place sensors or devices on the body. The primary aim of the present study was to evaluate the effects of the NS®, combined with a conventional intervention, for improving upper limb (UL) grip muscle strength, coordination, speed of movements, fine and gross dexterity, functionality, quality of life, and executive function in multiple sclerosis (MS) patients. Furthermore, we sought to assess satisfaction and compliance levels. Methods: A single-blinded, randomized clinical trial was conducted. The sample was randomized into two groups: an experimental group who received treatment based on Dr Kawashima’s Brain Training® for the NS® (20 min) plus conventional rehabilitation (40 min), and a control group who received the same conventional rehabilitation (60 min) for the ULs. Both groups received two 60 min sessions per week over an eight-week period. Grip strength, the Box and Blocks Test (BBT), the Nine Hole Peg Test (NHPT), the QuickDASH, the Multiple Sclerosis Impact Scale (MSIS-29), the Trail Making Test (TMT), and the Stroop Color and Word Test (SCWT) were used pre- and post-treatment. Side effects and attendance rates were also recorded. Results: Intragroup analysis showed significant improvements for the experimental group in the post-treatment assessments for grip strength in the more affected side (p = 0.033), the BBT for the more (p = 0.030) and the less affected side (p = 0.022), the TMT (A section) (p = 0.012), and the QuickDASH (p = 0.017). No differences were observed for the control group in intragroup analysis, but they were observed in the NHPT for the more affected side (p = 0.012). The intergroup analysis did not show differences between both groups. Conclusions: Our results show that an eight-week experimental protocol, after using Dr Kawashima’s Brain Training® and the right-side Joy-Con controller for the NS®, combined with a conventional intervention, showed improvements in grip strength, coordination, fine and gross motor function, executive functions, and upper limb functionality in the experimental group. However, no differences were observed when both groups were compared in the intergroup analysis. The addition of Brain Training® for the NS® for the upper limb rehabilitation did not show side effects and was rated with a high satisfaction and excellent compliance in people with MS. Trial registration: This randomized controlled trial has been registered at ClinicalTrials Identifier: NCT04171908, November 2019. Full article
(This article belongs to the Special Issue Clinical Application of Physical Therapy in Neurorehabilitation)

18 pages, 758 KiB  
Article
Development of a Composite Pain Scale in Foals: A Pilot Study
by Aliai Lanci, Beatrice Benedetti, Francesca Freccero, Carolina Castagnetti, Jole Mariella, Johannes P. A. M. van Loon and Barbara Padalino
Animals 2022, 12(4), 439; https://doi.org/10.3390/ani12040439 - 11 Feb 2022
Cited by 8 | Viewed by 3171
Abstract
Prompt pain management is crucial in horses; however, tools to assess pain are limited. This study aimed to develop and pilot a composite scale for pain estimation in foals. The “Foal Composite Pain Scale” (FCPS) was developed based on literature and authors’ expertise. The FCPS consisted of 11 facial expressions, 4 behavioural items, and 5 physical items. Thirty-five pain-free foals (Control Group) and 15 foals experiencing pain (Pain Group) were used. Foals were video-recorded at different time points: the Control Group only at inclusion (C), while the Pain Group at inclusion (T1), after an analgesic treatment (T2), and at recovery (T3). Physical items were also recorded at the same time points. Videos were scored twice by five trained observers, blinded to group and time points, to calculate inter- and intra-observer reliability of each scale item. Fleiss’ kappa values ranged from moderate to almost perfect for the majority of the items, while the intraclass correlation coefficient was excellent (ICC = 0.923). The consistency of FCPS was also excellent (Cronbach’s alpha = 0.842). A cut-off ≥ 7 indicated the presence of pain. The Pain Group scores were significantly higher (p < 0.001) than the Control Group and decreased over time (T1, T2 > T3; p = 0.001). Overall, FCPS seems clinically applicable to quantify pain and improve the judgment of the quality of life in foals, but it needs modifications based on these preliminary findings. Consequently, further studies on a larger sample size are needed to test the feasibility and validity of the refined FCPS. Full article
(This article belongs to the Special Issue Advancing Equestrian Practice to Improve Equine Quality of Life)
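The FCPS abstract reports internal consistency as Cronbach's alpha (0.842). As a generic illustration of how that statistic is computed from a subjects-by-items score matrix (not the authors' analysis; the demo data are random), consider:

```python
# Generic illustration of Cronbach's alpha, the internal-consistency statistic
# reported for the FCPS, computed from a subjects-by-items score matrix.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_subjects, n_items) matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(0, 3, size=(20, 20)).astype(float)  # 20 foals x 20 FCPS items
print(round(cronbach_alpha(demo), 3))
```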

16 pages, 3136 KiB  
Article
A New Blind Video Quality Metric for Assessing Different Turbulence Mitigation Algorithms
by Chiman Kwan and Bence Budavari
Electronics 2021, 10(18), 2277; https://doi.org/10.3390/electronics10182277 - 16 Sep 2021
Cited by 2 | Viewed by 2225
Abstract
Although many algorithms have been proposed to mitigate air turbulence in optical videos, there do not seem to be consistent blind video quality assessment metrics that can reliably assess different approaches. Blind video quality assessment metrics are necessary because many videos containing air turbulence do not have ground truth. In this paper, a simple and intuitive blind video quality assessment metric is proposed. This metric can reliably and consistently assess various turbulence mitigation algorithms for optical videos. Experimental results using more than 10 videos in the literature show that the proposed metric correlates well with human subjective evaluations. Compared with an existing blind video metric and two other blind image quality metrics, the proposed metric performed consistently better. Full article
(This article belongs to the Special Issue Theory and Applications in Digital Signal Processing)

11 pages, 1135 KiB  
Review
Interventions to Improve the Cast Removal Experience for Children and Their Families: A Scoping Review
by Pramila Maharjan, Dustin Murdock, Nicholas Tielemans, Nancy Goodall, Beverley Temple, Nicole Askin and Kristy Wittmeier
Children 2021, 8(2), 130; https://doi.org/10.3390/children8020130 - 10 Feb 2021
Cited by 4 | Viewed by 2955
Abstract
Background: Cast removal can be a distressing experience for a child. This scoping review aims to provide a comprehensive review of interventions designed to reduce anxiety and improve the child’s and family’s experience of pediatric cast removal. Methods: A scoping review was conducted (Medline, Embase, PsycINFO, CINAHL, Scopus, grey literature sources). Inclusion criteria: studies published January 1975–October 2019 with a primary focus on pediatric patients undergoing cast removal/cast room procedures. Screening, full text review, data extraction, and quality appraisal were conducted in duplicate. Results: 974 unique articles and 1 video were screened. Nine articles (eight unique studies) with a total of 763 participants were included. Interventions included the following, alone or in combination: noise reduction, electronic device use, preparatory information, music therapy, play therapy, and child life specialist-directed intervention. Heart rate was used as a primary (88%) or secondary (12%) outcome measure across studies. Each study reported some positive effect of the intervention, however effects varied by age, outcome measure, and measurement timing. Studies scored low on outcome measure validity and blinding as assessed by the Joanna Briggs Institute Critical Appraisal Checklist for Randomized Controlled Trials. Conclusion: Various methods have been tested to improve the pediatric cast removal experience. Results are promising, however the variation in observed effectiveness suggests a need for the use of consistent and valid outcome measures. In addition, future research and quality improvement projects should evaluate interventions that are tailored to a child’s age and child/family preference. Full article

10 pages, 993 KiB  
Article
Objective Assessment of Acute Pain in Foals Using a Facial Expression-Based Pain Scale
by Johannes van Loon, Nicole Verhaar, Els van den Berg, Sarah Ross and Janny de Grauw
Animals 2020, 10(9), 1610; https://doi.org/10.3390/ani10091610 - 10 Sep 2020
Cited by 8 | Viewed by 5646
Abstract
Pain assessment is very important for monitoring welfare and quality of life in horses. To date, no studies have described pain scales for objective assessment of pain in foals. Studies in other species have shown that facial expression can be used in neonatal animals for objective assessment of acute pain. The aim of the current study was to adapt a facial expression-based pain scale for assessment of acute pain in mature horses for valid pain assessment in foals. The scale was applied to fifty-nine foals (20 patients and 39 healthy controls); animals were assessed from video recordings (30–60 s) by 3 observers, who were blinded for the condition of the animals. Patients were diagnosed with acute health problems by means of clinical examination and additional diagnostic procedures. EQUUS-FAP FOAL (Equine Utrecht University Scale for Facial Assessment of Pain in Foals) showed good inter- and intra-observer reliability (Cronbach’s alpha = 0.95 and 0.98, p < 0.001). Patients had significantly higher pain scores compared to controls (p < 0.001) and the pain scores decreased after treatment with NSAIDs (meloxicam or flunixin meglumine IV) (p < 0.05). Our results indicate that a facial expression-based pain scale could be useful for the assessment of acute pain in foals. Further studies are needed to validate this pain scale. Full article
(This article belongs to the Special Issue Towards a better assessment of acute pain in equines)

16 pages, 1804 KiB  
Protocol
Protocol: Using Single-Case Experimental Design to Evaluate Whole-Body Dynamic Seating on Activity, Participation, and Quality of Life in Dystonic Cerebral Palsy
by Hortensia Gimeno and Tim Adlam
Healthcare 2020, 8(1), 11; https://doi.org/10.3390/healthcare8010011 - 31 Dec 2019
Cited by 4 | Viewed by 4822
Abstract
Introduction: People with hyperkinetic movement disorders, including dystonia, experience often painful, involuntary movements affecting functioning. Seating comfort is a key unmet need identified by families. This paper reports a protocol to assess the feasibility and preliminary evidence for the efficacy of dynamic seating to improve functional outcomes for young children with dystonic cerebral palsy (DCP). Design: A series of single-case experimental design N-of-1 trials, with replications across participants, with a random baseline interval, and one treatment period (n = 6). Methods: Inclusion criteria: DCP; 21.5 cm < popliteal fossa to posterior sacrum < 35 cm; Gross Motor Function Classification System level IV–V; mini-Manual Ability Classification System level IV–V; difficulties with seating. Intervention: Trial of the seat (8 weeks), with multiple baseline before, during and after intervention and 2 month follow up. The baseline duration will be randomised per child (2–7 weeks). Primary outcomes: Performance Quality Rating Scale; Canadian Occupational Performance Measure; seating tolerance. The statistician will create the randomization, with allocation concealment by registration of participants prior to sending the allocation arm to the principal investigator. Primary outcomes will be assessed from video by an assessor blind to allocation. Analysis: Participant outcome data will be plotted over time, with parametric and non-parametric analysis including estimated size effect for N-of-1 trials. Full article
(This article belongs to the Special Issue N-of-1 Trials in Healthcare)

9 pages, 1368 KiB  
Article
Feasibility Evaluation of Commercially Available Video Conferencing Devices to Technically Direct Untrained Nonmedical Personnel to Perform a Rapid Trauma Ultrasound Examination
by Davinder Ramsingh, Michael Ma, Danny Quy Le, Warren Davis, Mark Ringer, Briahnna Austin and Cameron Ricks
Diagnostics 2019, 9(4), 188; https://doi.org/10.3390/diagnostics9040188 - 14 Nov 2019
Cited by 13 | Viewed by 3766
Abstract
Introduction: Point-of-care ultrasound (POCUS) is a rapidly expanding discipline that has proven to be a valuable modality in the hospital setting. Recent evidence has demonstrated the utility of commercially available video conferencing technologies, namely, FaceTime (Apple Inc, Cupertino, CA, USA) and Google Glass (Google Inc, Mountain View, CA, USA), to allow an expert POCUS examiner to remotely guide a novice medical professional. However, few studies have evaluated the ability to use these teleultrasound technologies to guide a nonmedical novice to perform an acute care POCUS examination for cardiac, pulmonary, and abdominal assessments. Additionally, few studies have shown the ability of a POCUS-trained cardiac anesthesiologist to perform the role of an expert instructor. This study sought to evaluate the ability of a POCUS-trained anesthesiologist to remotely guide a nonmedically trained participant to perform an acute care POCUS examination. Methods: A total of 21 nonmedically trained undergraduate students who had no prior ultrasound experience were recruited to perform a three-part ultrasound examination on a standardized patient with the guidance of a remote expert who was a POCUS-trained cardiac anesthesiologist. The examination included the following acute care POCUS topics: (1) cardiac function via parasternal long/short axis views, (2) pneumothorax assessment via pleural sliding exam via anterior lung views, and (3) abdominal free fluid exam via right upper quadrant abdominal view. Each examiner was given a handout with static images of probe placement and actual ultrasound images for the three views. After a brief 8 min tutorial on the teleultrasound technologies, a connection was established with the expert, and they were guided through the acute care POCUS exam. Each view was deemed to be complete when the expert sonographer was satisfied with the obtained image or if the expert sonographer determined that the image could not be obtained after 5 min. Image quality was scored on a previously validated 0 to 4 grading scale. The entire session was recorded, and the image quality was scored during the exam by the remote expert instructor as well as by a separate POCUS-trained, blinded expert anesthesiologist. Results: A total of 21 subjects completed the study. The average total time for the exam was 8.5 min (standard deviation = 4.6). A comparison between the live expert examiner and the blinded postexam reviewer showed a 100% agreement between image interpretations. A review of the exams rated as three or higher demonstrated that 87% of abdominal, 90% of cardiac, and 95% of pulmonary exams achieved this level of image quality. A satisfaction survey of the novice users demonstrated higher ease of following commands for the cardiac and pulmonary exams compared to the abdominal exam. Conclusions: The results from this pilot study demonstrate that nonmedically trained individuals can be guided to complete a relevant ultrasound examination within a short period. Further evaluation of using telemedicine technologies to promote POCUS should be evaluated. Full article

13 pages, 7772 KiB  
Article
Deep Activation Pooling for Blind Image Quality Assessment
by Zhong Zhang, Hong Wang, Shuang Liu and Tariq S. Durrani
Appl. Sci. 2018, 8(4), 478; https://doi.org/10.3390/app8040478 - 21 Mar 2018
Cited by 10 | Viewed by 4243
Abstract
Driven by the rapid development of digital imaging and network technologies, the opinion-unaware blind image quality assessment (BIQA) method has become an important yet very challenging task. In this paper, we design an effective novel scheme for opinion-unaware BIQA. We first utilize the convolutional maps to select high-contrast patches, and then we utilize these selected patches of pristine images to train a pristine multivariate Gaussian (PMVG) model. In the test stage, each high-contrast patch is fitted by a test MVG (TMVG) model, and the local quality score is obtained by comparing with the PMVG. Finally, we propose the deep activation pooling (DAP) to automatically emphasize the more important scores and suppress the less important ones so as to obtain the overall image quality score. We verify the proposed method on two widely used databases, that is, the computational and subjective image quality (CSIQ) and the laboratory for image and video engineering (LIVE) databases, and the experimental results demonstrate that the proposed method achieves better results than the state-of-the-art methods. Full article
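The abstract above scores each high-contrast test patch by comparing a per-patch Gaussian (TMVG) against a pristine Gaussian (PMVG) and then pooling the patch scores. The sketch below illustrates the comparison step with a NIQE-style distance between two multivariate Gaussians and reduces the pooling to a plain mean; the convolutional patch selection and the learned deep-activation weighting are not reproduced, and all feature data here are synthetic.

```python
# Hedged sketch of the PMVG-vs-TMVG comparison step described above, using a
# NIQE-style distance between two multivariate Gaussians. Deep-activation
# pooling is reduced to a plain mean; patch selection is not reproduced.
import numpy as np

def fit_mvg(feats: np.ndarray):
    """feats: (n_samples, d) feature vectors -> (mean, covariance)."""
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

def mvg_distance(mu1, cov1, mu2, cov2) -> float:
    diff = mu1 - mu2
    pooled = np.linalg.pinv((cov1 + cov2) / 2.0)
    return float(np.sqrt(diff @ pooled @ diff))

rng = np.random.default_rng(1)
pristine = rng.normal(size=(5000, 16))                # features from pristine patches
mu_p, cov_p = fit_mvg(pristine)                       # the "PMVG" model

patch_scores = []
for _ in range(50):                                   # 50 high-contrast test patches
    test_feats = rng.normal(loc=0.3, size=(64, 16))   # features within one patch
    mu_t, cov_t = fit_mvg(test_feats)                 # the per-patch "TMVG"
    patch_scores.append(mvg_distance(mu_p, cov_p, mu_t, cov_t))

print(np.mean(patch_scores))                          # simplified pooled quality score
```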
