Search Results (56)

Search Parameters:
Keywords = video probe

20 pages, 7704 KB  
Article
Seamless User-Generated Content Processing for Smart Media: Delivering QoE-Aware Live Media with YOLO-Based Bib Number Recognition
by Alberto del Rio, Álvaro Llorente, Sofia Ortiz-Arce, Maria Belesioti, George Pappas, Alejandro Muñiz, Luis M. Contreras and Dimitris Christopoulos
Electronics 2025, 14(20), 4115; https://doi.org/10.3390/electronics14204115 - 21 Oct 2025
Viewed by 892
Abstract
The increasing availability of User-Generated Content during large-scale events is transforming spectators into active co-creators of live narratives while simultaneously introducing challenges in managing heterogeneous sources, ensuring content quality, and orchestrating distributed infrastructures. A trial was conducted to evaluate automated orchestration, media enrichment, and real-time quality assessment in a live sporting scenario. A key innovation of this work is the use of a cloud-native architecture based on Kubernetes, enabling dynamic and scalable integration of smartphone streams and remote production tools into a unified workflow. The system also included advanced cognitive services, such as a Video Quality Probe for estimating perceived visual quality and an AI Engine based on YOLO models for the detection and recognition of runners and bib numbers. Together, these components enable a fully automated live-production workflow that combines real-time analysis and quality monitoring, capabilities that previously required manual or offline processing. The results showed Mean Opinion Score (MOS) values above 3 for 72.92% of the time, confirming acceptable perceived quality under real network conditions, while the AI Engine achieved strong performance with a Precision of 93.6% and a Recall of 80.4%.
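
The bib-recognition step can be illustrated with a short sketch. The code below is not the project's implementation; it assumes the ultralytics YOLO API, and the weights file, stream URL, and second-stage digit recognizer are hypothetical placeholders.

```python
# Illustrative sketch only: run a custom YOLO detector over a live stream and crop
# each detected bib region for a downstream digit recognizer.
import cv2
from ultralytics import YOLO

detector = YOLO("bib_detector.pt")  # hypothetical weights trained on runner bibs
cap = cv2.VideoCapture("rtmp://example.org/live/ugc_stream")  # hypothetical UGC stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = detector(frame, verbose=False)[0]
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        bib_crop = frame[y1:y2, x1:x2]
        # recognize_digits(bib_crop)  # second-stage model, omitted here
cap.release()
```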

29 pages, 5509 KB  
Article
Image-Analysis-Based Validation of the Mathematical Framework for the Representation of the Travel of an Accelerometer-Based Texture Testing Device
by Harald Paulsen, Margit Gföhler, Johannes Peter Schramel and Christian Peham
Sensors 2025, 25(20), 6307; https://doi.org/10.3390/s25206307 - 12 Oct 2025
Viewed by 673
Abstract
Texture testing is applied in various industries. Recently, a simple, accelerometer-equipped texture testing device (Surface Tester of Food Resilience; STFR) has been developed, and we elaborated formulae describing the movement of its probe. In this paper, we describe the validation of these formulae, relying on video image analysis of the travel of the spherical probe. This allowed us to select the best-fit mathematical models. We elaborated formulae for accurate calculation of specimen surface characteristics and present an application integrating these formulae into the test procedure. Correct height adjustment and specimen height were found to be critical for the reproducibility of measurements and therefore need attention. These findings form the basis for future comparative studies with established texture analyzers.
(This article belongs to the Section Sensing and Imaging)
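
As a simple illustration of video-based probe tracking (not the authors' analysis pipeline), the sketch below locates the spherical probe in each frame with an OpenCV Hough circle transform so that its travel can be compared with the model-predicted trajectory; the video filename and detector parameters are assumptions.

```python
# Illustrative sketch only: track the vertical position of a spherical probe
# across video frames using a Hough circle detector.
import cv2
import numpy as np

cap = cv2.VideoCapture("probe_travel.mp4")  # hypothetical recording
centres_y = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is not None:
        x, y, r = circles[0][0]   # strongest candidate circle
        centres_y.append(float(y))
cap.release()
# The per-frame centre heights (in pixels) can then be compared with the travel
# predicted by the mathematical model of the probe.
print(np.array(centres_y))
```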

36 pages, 8597 KB  
Review
Microrheology: From Video Microscopy to Optical Tweezers
by Andrea Jannina Fernandez, Graham M. Gibson, Anna Rył and Manlio Tassieri
Micromachines 2025, 16(8), 918; https://doi.org/10.3390/mi16080918 - 8 Aug 2025
Viewed by 3158
Abstract
Microrheology, a branch of rheology, focuses on studying the flow and deformation of matter at micron length scales, enabling the characterization of materials using minute sample volumes. This review article explores the principles and advancements of microrheology, covering a range of techniques that infer the viscoelastic properties of soft materials from the motion of embedded tracer particles. Special emphasis is placed on methods employing optical tweezers, which have emerged as a powerful tool in both passive and active microrheology thanks to their exceptional force sensitivity and spatiotemporal resolution. The review also highlights complementary techniques such as video particle tracking, magnetic tweezers, dynamic light scattering, and atomic force microscopy. Applications across biology, materials science, and soft matter research are discussed, emphasizing the growing relevance of particle tracking microrheology and optical tweezers in probing microscale mechanics.
(This article belongs to the Special Issue Microrheology with Optical Tweezers)
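
For reference, passive particle-tracking microrheology of the kind surveyed here typically rests on the generalized Stokes-Einstein relation; the form below is the standard statement of that relation (given here for context, not quoted from the review):

$$ \tilde{G}(s) = \frac{k_{B}T}{\pi a \, s \, \langle \Delta \tilde{r}^{2}(s) \rangle} $$

where k_B T is the thermal energy, a the tracer radius, s the Laplace frequency, and ⟨Δr̃²(s)⟩ the Laplace transform of the tracer's three-dimensional mean squared displacement.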

19 pages, 3862 KB  
Article
Estimation of Total Hemoglobin (SpHb) from Facial Videos Using 3D Convolutional Neural Network-Based Regression
by Ufuk Bal, Faruk Enes Oguz, Kubilay Muhammed Sunnetci, Ahmet Alkan, Alkan Bal, Ebubekir Akkuş, Halil Erol and Ahmet Çağdaş Seçkin
Biosensors 2025, 15(8), 485; https://doi.org/10.3390/bios15080485 - 25 Jul 2025
Viewed by 1701
Abstract
Hemoglobin plays a critical role in diagnosing various medical conditions, including infections, trauma, hemolytic disorders, and Mediterranean anemia, which is particularly prevalent in Mediterranean populations. Conventional measurement methods require blood sampling and laboratory analysis, which are often time-consuming and impractical in emergency situations with limited medical infrastructure. Although portable oximeters enable non-invasive hemoglobin estimation, they still require physical contact, posing limitations for individuals with circulatory or dermatological conditions. Additionally, reliance on disposable probes increases operational costs. This study presents a non-contact, automated approach for estimating total hemoglobin levels from facial video data using three-dimensional regression models. A dataset was compiled from 279 volunteers, with synchronized acquisition of facial video and hemoglobin values from a commercial pulse oximeter. After preprocessing, the dataset was divided into training, validation, and test subsets. Three 3D convolutional regression models were trained: a plain 3D CNN, a channel-attention-enhanced 3D CNN, and a residual 3D CNN; the most successful model was implemented in a graphical interface. The residual model achieved the best performance on the test set, yielding an RMSE of 1.06, an MAE of 0.85, and a Pearson correlation coefficient of 0.73. This study offers a novel contribution by enabling contactless hemoglobin estimation from facial video using 3D CNN-based regression techniques.
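
A minimal residual 3D CNN regressor is sketched below for illustration; it is not the authors' architecture, and the layer sizes and clip dimensions are assumptions.

```python
# Illustrative sketch only: a small residual 3D CNN that regresses one scalar
# (e.g., hemoglobin in g/dL) from a short facial video clip (PyTorch).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)  # residual connection

class Residual3DRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResBlock3D(16), ResBlock3D(16))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, clip):  # clip: (batch, 3, frames, height, width)
        return self.head(self.blocks(self.stem(clip))).squeeze(-1)

model = Residual3DRegressor()
dummy = torch.randn(2, 3, 16, 112, 112)  # two 16-frame RGB clips
print(model(dummy).shape)                # torch.Size([2])
```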

22 pages, 1759 KB  
Article
Discriminating Children with Speech Sound Disorders from Children with Typically Developing Speech Using the Motor Speech Hierarchy Probe Words: A Preliminary Analysis of Mandibular Control
by Linda Orton, Richard Palmer, Roslyn Ward, Petra Helmholz, Geoffrey R. Strauss, Paul Davey and Neville W. Hennessey
Diagnostics 2025, 15(14), 1793; https://doi.org/10.3390/diagnostics15141793 - 16 Jul 2025
Viewed by 1811
Abstract
Background/Objectives: The Motor Speech Hierarchy (MSH) Probe Words (PWs) have yet to be validated as effective in discriminating between children with impaired speech motor control and children with typically developing speech motor control. This preliminary study first examined the effectiveness of the mandibular control subtest of the MSH-PWs in distinguishing between typically developing (TD) and speech sound-disordered (SSD) children aged between 3 years 0 months and 3 years 6 months. Secondly, we compared automatically derived kinematic measures of jaw range and control with MSH-PW consensus scoring to assist in identifying deficits in mandibular control. Methods: Forty-one children with TD speech and 13 with SSD produced the 10 words of the mandibular stage of the MSH-PWs. A consensus team of speech pathologists observed video recordings of the words to score motor speech control and phonetic accuracy, as detailed in the MSH-PW scoring criteria. Specific measures of jaw and lip movements during speech were also extracted automatically, and agreement between the perceptual and objective measures of jaw range and jaw control was evaluated. Results: A significant difference between the TD and SSD groups was found for jaw range (p = 0.006), voicing transitions (p = 0.004) and total mandibular scores (p = 0.015). SSD and TD group discrimination was significant (at alpha = 0.01) with a balanced classification accuracy of 0.79. Initial analysis indicates that objective kinematic measures based on facial tracking show good agreement with perceptual judgements of jaw range and jaw control. Conclusions: The preliminary data indicate that the MSH-PWs can discriminate TD speech from SSD at the level of mandibular control and can be used by clinicians to assess motor speech control. Further investigation of objective measures to support perceptual scoring is indicated.

21 pages, 1639 KB  
Article
Effectiveness of Video Self-Modeling in Teaching Unplugged Coding Skills to Children with Autism Spectrum Disorders
by Erkan Kurnaz
Behav. Sci. 2025, 15(3), 272; https://doi.org/10.3390/bs15030272 - 26 Feb 2025
Cited by 2 | Viewed by 3360
Abstract
This study examined the effectiveness of video self-modeling in teaching unplugged coding skills to children with autism spectrum disorder (ASD). The participants, one female and three male children with ASD aged 10 to 12, were studied using a multiple-probe design across subjects. The findings demonstrated that video self-modeling successfully facilitated the acquisition of unplugged coding skills for all four students. Additionally, all participants could generalize these skills to a new setting, and for those assessed, the skills were maintained for up to 12 weeks after the intervention. Social validity data collected from participants and their parents indicated positive perceptions of the approach. The results highlight implications for instructional practices and future research.
(This article belongs to the Section Educational Psychology)

18 pages, 1139 KB  
Article
Facial Movements Extracted from Video for the Kinematic Classification of Speech
by Richard Palmer, Roslyn Ward, Petra Helmholz, Geoffrey R. Strauss, Paul Davey, Neville Hennessey, Linda Orton and Aravind Namasivayam
Sensors 2024, 24(22), 7235; https://doi.org/10.3390/s24227235 - 12 Nov 2024
Cited by 3 | Viewed by 2560
Abstract
Speech Sound Disorders (SSDs) are prevalent communication problems in children that pose significant barriers to academic success and social participation. Accurate diagnosis is key to mitigating life-long impacts. We are developing a novel software solution, the Speech Movement and Acoustic Analysis Tracking (SMAAT) system, to facilitate rapid and objective assessment of the motor speech control issues underlying SSD. This study evaluates the feasibility of using automatically extracted three-dimensional (3D) facial measurements from a single two-dimensional (2D) front-facing video camera for classifying speech movements. Videos were recorded of 51 adults and 77 children between 3 and 4 years of age (all typically developed for age) saying 20 words from the mandibular and labial-facial levels of the Motor-Speech Hierarchy Probe Wordlist (MSH-PW). Measurements around the jaw and lips were automatically extracted from the 2D video frames using a state-of-the-art facial mesh detection and tracking algorithm, and each individual measurement was tested in a Leave-One-Out Cross-Validation (LOOCV) framework for its word classification performance. Statistics were evaluated at the α = 0.05 significance level, and several measurements were found to exhibit significant classification performance in both the adult and child cohorts. Importantly, measurements of depth indirectly inferred from the 2D video frames were among those found to be significant. The significant measurements were shown to match expectations of facial movements across the 20 words, demonstrating their potential applicability in supporting clinical evaluations of speech production.
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)
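
The per-measurement evaluation can be pictured with a small Leave-One-Out Cross-Validation loop; the sketch below uses scikit-learn with synthetic data and an arbitrary classifier, and is not the SMAAT code.

```python
# Illustrative sketch only: LOOCV word-classification accuracy for a single
# kinematic measurement (synthetic data).
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 1))       # one measurement (e.g., jaw opening) per utterance
y = rng.integers(0, 20, size=128)   # word label, 20 target words

clf = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")
```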

20 pages, 4100 KB  
Protocol
Automated Analysis Pipeline for Extracting Saccade, Pupil, and Blink Parameters Using Video-Based Eye Tracking
by Brian C. Coe, Jeff Huang, Donald C. Brien, Brian J. White, Rachel Yep and Douglas P. Munoz
Vision 2024, 8(1), 14; https://doi.org/10.3390/vision8010014 - 18 Mar 2024
Cited by 11 | Viewed by 5252
Abstract
The tremendous increase in the use of video-based eye tracking has made it possible to collect eye tracking data from thousands of participants. The traditional procedures for the manual detection and classification of saccades and for trial categorization (e.g., correct vs. incorrect) are not viable for the large datasets being collected. Additionally, video-based eye trackers allow for the analysis of pupil responses and blink behaviors. Here, we present a detailed description of our pipeline for collecting, storing, and cleaning data, as well as for organizing participant codes; these steps are fairly lab-specific but are nonetheless important precursors to establishing standardized pipelines. More importantly, we also describe the automated detection and classification of saccades, blinks, “blincades” (blinks occurring during saccades), and boomerang saccades (two nearly simultaneous saccades in opposite directions that speed-based algorithms fail to split). This part of the pipeline is almost entirely task-agnostic and can be used on a wide variety of data. We additionally describe novel findings regarding post-saccadic oscillations and provide a method to achieve more accurate estimates of saccade end points. Lastly, we describe the automated behavior classification for the interleaved pro/anti-saccade task (IPAST), a task that probes voluntary and inhibitory control. The pipeline was evaluated using data collected from 592 human participants between 5 and 93 years of age, making it robust enough to handle large clinical patient datasets. In summary, this pipeline has been optimized to consistently handle large datasets obtained from diverse study cohorts (i.e., developmental, aging, clinical) and collected across multiple laboratory sites.
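
As a pointer to how speed-based saccade detection works in such pipelines, here is a minimal velocity-threshold detector; it is not the authors' algorithm, and the sampling rate, threshold, and synthetic gaze trace are assumptions.

```python
# Illustrative sketch only: flag samples whose gaze speed exceeds a velocity
# threshold and return the resulting saccade onset/offset indices.
import numpy as np

def detect_saccades(x, y, fs, vel_thresh=30.0):
    """x, y in degrees; fs in Hz; returns (onset, offset) sample index pairs."""
    vx = np.gradient(x) * fs          # horizontal velocity, deg/s
    vy = np.gradient(y) * fs          # vertical velocity, deg/s
    fast = np.hypot(vx, vy) > vel_thresh
    edges = np.diff(fast.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    return list(zip(onsets, offsets))

fs = 500.0                            # assumed sampling rate
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
x = np.where(t > 0.5, 10.0, 0.0) + 0.01 * rng.standard_normal(t.size)  # 10 deg step
y = np.zeros_like(x)
print(detect_saccades(x, y, fs))
```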

10 pages, 2561 KB  
Project Report
Planetary Health Initiatives in Rural Education at a Riverside School in Southern Amazonas, Brazil
by Paula Regina Humbelino de Melo, Péricles Vale Alves and Tatiana Souza de Camargo
Challenges 2023, 14(4), 50; https://doi.org/10.3390/challe14040050 - 7 Dec 2023
Cited by 1 | Viewed by 2840
Abstract
Planetary Health is an expanding scientific field around the world, and actions in different areas are essential to minimize the environmental damage that compromises the future of humanity. This project report describes the development of Planetary Health actions in a rural school in the Brazilian Amazon, with the aim of understanding and raising awareness of themes related to Planetary Health. To implement the educational activities, a booklet entitled “Planetary Health: Guide for Rural Education” was created. Subsequently, didactic sequences were applied to 37 ninth-grade students in the first semester of 2023. The activities were diversified, including: (1) investigative activities (pre-tests, interviews with family members, an ecological footprint adapted to the Amazonian riverside context); (2) interpretative activities (image reading, identification and problem-solving of Planetary Health stories in the Amazon, educational cartoons, and graphs of the most polluting sectors in Brazil and of diseases associated with climate change); (3) audiovisual activities (educational videos); (4) playful activities (educational games); (5) practical and field activities (forest tracking, planting seedlings, a sanitation trail, construction of a school garden, preparation of a healthy school snack, a greenhouse-effect simulation, and basic analysis of lake water with a probe). The educational actions gave students new experiences with Planetary Health themes and helped them understand the centrality of the Amazon for the planet and how the environmental impacts in this biome are compromising the future of humanity. The experiences during the educational actions showed that young riverside residents are concerned about the future of the Amazon, especially given the environmental destruction that is frequently evident, such as deforestation, fires, illegal mining, and land grabbing. Inserting these themes into riverside education makes it possible to look at the Amazon in a resilient, responsible way and to discuss scientific and local knowledge so that students can develop initiatives to face environmental challenges in their community. We conclude that Planetary Health education needs to be an effective part of the school curriculum, which requires revising the documents that guide education so that transdisciplinary actions with children and young people, the voices of the future and future leaders in emerging causes, are prioritized. Educational actions in Planetary Health in the Amazon region are an example that can inspire actions in other places with similar characteristics.

16 pages, 2388 KB  
Article
No Evidence for Cross-Modal fMRI Adaptation in Macaque Parieto-Premotor Mirror Neuron Regions
by Saloni Sharma and Koen Nelissen
Brain Sci. 2023, 13(10), 1466; https://doi.org/10.3390/brainsci13101466 - 17 Oct 2023
Viewed by 1999
Abstract
To probe the presence of mirror neurons in the human brain, cross-modal fMRI adaptation has been suggested as a suitable technique. The rationale behind this suggestion is that the technique allows more accurate inferences to be made about the neural response properties underlying fMRI voxel activations, beyond merely showing shared voxels that are active during both action observation and execution. However, the validity of using cross-modal fMRI adaptation to demonstrate the presence of mirror neurons in parietal and premotor brain regions has been questioned, given the inconsistent and weak results obtained in human studies. A better understanding of cross-modal fMRI adaptation effects in the macaque brain is required, as the rationale for using this approach rests on several assumptions about macaque mirror neuron response properties that still need validation. Here, we conducted a cross-modal fMRI adaptation study in macaque monkeys, using the same action execution and action observation tasks that successfully yielded cross-modal action decoding in mirror neuron regions in a previous monkey MVPA study. We scanned two male rhesus monkeys while they first executed a sequence of either reach-and-grasp or reach-and-touch hand actions and then observed a video of a human actor performing these motor acts. Both whole-brain and region-of-interest analyses failed to demonstrate cross-modal fMRI adaptation effects in parietal and premotor mirror neuron regions. Our results, in line with previous findings in non-human primates, show that cross-modal motor-to-visual fMRI adaptation is not easily detected in monkey brain regions known to house mirror neurons. Thus, our results advocate caution in using cross-modal fMRI adaptation as a method to infer whether mirror neurons are present in the primate brain.
(This article belongs to the Section Sensory and Motor Neuroscience)

23 pages, 6340 KB  
Article
Automated Stabilization, Enhancement and Capillaries Segmentation in Videocapillaroscopy
by Vincenzo Taormina, Giuseppe Raso, Vito Gentile, Leonardo Abbene, Antonino Buttacavoli, Gaetano Bonsignore, Cesare Valenti, Pietro Messina, Giuseppe Alessandro Scardina and Donato Cascio
Sensors 2023, 23(18), 7674; https://doi.org/10.3390/s23187674 - 5 Sep 2023
Cited by 10 | Viewed by 2761
Abstract
Oral capillaroscopy is a critical, non-invasive technique used to evaluate microcirculation. Its ability to observe small vessels in vivo has generated significant interest in the field. Capillaroscopy serves as an essential tool for diagnosing and prognosing various pathologies, with anatomic–pathological lesions playing a crucial role in their progression. Despite its importance, the use of videocapillaroscopy in the oral cavity is limited by the acquisition setup, encompassing the spatial and temporal resolutions of the video camera, the objective magnification, and the physical probe dimensions. Moreover, the operator’s influence during the acquisition process, particularly how the probe is maneuvered, further affects its effectiveness. This study aims to address these challenges and improve data reliability by developing a computerized support system for microcirculation analysis. The designed system performs stabilization, enhancement and automatic segmentation of capillaries in oral mucosal video sequences. Stabilization is based on the coupling of seed points in a classification process, and enhancement is based on temporal analysis of the capillaroscopic frames. Finally, automatic segmentation of the capillaries was implemented, with the additional objective of quantitatively assessing the signal improvement achieved by the developed techniques; for this purpose, transfer learning of the well-known U-net deep network was applied. The proposed method was tested on a database with ground truth obtained from expert manual segmentation. The results show a Jaccard index of 90.1% and an accuracy of 96.2%, highlighting the effectiveness of the developed techniques in oral capillaroscopy. These promising outcomes encourage the use of this method to assist in the diagnosis and monitoring of conditions that affect microcirculation, such as rheumatologic or cardiovascular disorders.
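
The reported segmentation metrics are straightforward to compute; the sketch below (not the authors' code) evaluates a binary capillary mask against expert ground truth with the Jaccard index and pixel accuracy, using made-up masks.

```python
# Illustrative sketch only: Jaccard index and accuracy for a binary segmentation.
import numpy as np

def jaccard_and_accuracy(pred, truth):
    """pred, truth: boolean arrays of the same shape (True = capillary pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = intersection / union if union else 1.0
    accuracy = (pred == truth).mean()
    return jaccard, accuracy

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 10:30] = True
pred = np.zeros_like(truth);            pred[22:40, 12:30] = True
print(jaccard_and_accuracy(pred, truth))
```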

12 pages, 1458 KB  
Article
Diagnostic Performance of Multispectral SWIR Transillumination and Reflectance Imaging for Caries Detection
by Yihua Zhu, Chung Ng, Oanh Le, Yi-Ching Ho and Daniel Fried
Diagnostics 2023, 13(17), 2824; https://doi.org/10.3390/diagnostics13172824 - 31 Aug 2023
Cited by 5 | Viewed by 2555
Abstract
The aim of this clinical study was to compare the diagnostic performance of dual short wavelength infrared (SWIR) occlusal transillumination and reflectance multispectral imaging with conventional visual assessment and radiography for caries detection on premolars scheduled for extraction for orthodontic reasons. Polarized light microscopy (PLM) and micro-computed tomography (microCT) performed after tooth extraction were used as gold standards. The custom-fabricated imaging probe was 3D-printed, and the imaging system employed a SWIR camera and fiber-optic light sources emitting at 1300 nm for occlusal transillumination and at 1600 nm for reflectance measurements. Teeth (n = 135) from 40 test subjects were imaged in vivo using the SWIR imaging prototype and were extracted after imaging. Our study demonstrates for the first time that near-simultaneous real-time transillumination and reflectance video can be successfully acquired for caries detection. Both SWIR imaging modalities had markedly higher sensitivity for lesions on proximal and occlusal surfaces than conventional methods (visual and radiographic). Reflectance imaging at 1600 nm had higher sensitivity and specificity than transillumination at 1300 nm. Combining the two SWIR methods yielded higher specificity, but the combined sensitivity was lower than that of each individual method.
(This article belongs to the Special Issue Advances in Dental Imaging)
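
For readers unfamiliar with the diagnostic metrics compared here, the sketch below computes sensitivity and specificity from a lesion-level confusion matrix; the counts are made up and are not study data.

```python
# Illustrative sketch only: sensitivity and specificity from confusion-matrix counts.
def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of true lesions detected
    specificity = tn / (tn + fp)  # proportion of sound surfaces correctly called sound
    return sensitivity, specificity

print(sensitivity_specificity(tp=45, fn=10, tn=70, fp=10))
```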

11 pages, 2086 KB  
Article
Intraoperative Contrast-Enhanced Ultrasonography (Io-CEUS) in Minimally Invasive Thoracic Surgery for Characterization of Pulmonary Tumours: A Clinical Feasibility Study
by Martin Ignaz Schauer, Ernst-Michael Jung, Natascha Platz Batista da Silva, Michael Akers, Elena Loch, Till Markowiak, Tomas Piler, Christopher Larisch, Reiner Neu, Christian Stroszczynski, Hans-Stefan Hofmann and Michael Ried
Cancers 2023, 15(15), 3854; https://doi.org/10.3390/cancers15153854 - 29 Jul 2023
Cited by 9 | Viewed by 1915
Abstract
Background: The intraoperative detection of solitary pulmonary nodules (SPNs) continues to be a major challenge, especially in minimally invasive video-assisted thoracic surgery (VATS). The location, size, and intraoperative frozen section result of SPNs are decisive for the extent of lung resection. This feasibility study investigates the technical applicability of intraoperative contrast-enhanced ultrasonography (Io-CEUS) in minimally invasive thoracic surgery. Methods: In this prospective, monocentric clinical feasibility study, n = 30 patients underwent Io-CEUS during elective minimally invasive lung resection for SPNs between October 2021 and February 2023. The primary endpoint was the technical feasibility of Io-CEUS during VATS. Secondary endpoints were defined as the detection and characterization of SPNs. Results: In all patients (female, n = 13; mean age, 63 ± 8.6 years), Io-CEUS could be performed without problems during VATS. All SPNs were detected by Io-CEUS (100%). SPNs had a mean size of 2.2 cm (0.5–4.5 cm) and a mean distance to the lung surface of 2.0 cm (0–6.4 cm). B-mode, colour-coded Doppler sonography, and contrast-enhanced ultrasound were used to characterize all tumours intraoperatively. Significant differences were found, especially in vascularization and contrast agent behaviour, depending on the tumour entity. After successful lung resection, pathologic examination confirmed lung carcinomas (n = 17), lung metastases (n = 10), and benign lung tumours (n = 3). Conclusions: The technical feasibility of Io-CEUS for the detection of suspicious SPNs was confirmed in VATS before resection. In particular, Doppler sonography and contrast agent kinetics revealed entity-specific intraoperative characteristics. Further studies on Io-CEUS and the application of an endoscopic probe for VATS will follow.

17 pages, 5679 KB  
Article
Velocity Control of a Multi-Motion Mode Spherical Probe Robot Based on Reinforcement Learning
by Wenke Ma, Bingyang Li, Yuxue Cao, Pengfei Wang, Mengyue Liu, Chenyang Chang and Shigang Peng
Appl. Sci. 2023, 13(14), 8218; https://doi.org/10.3390/app13148218 - 15 Jul 2023
Cited by 3 | Viewed by 2033
Abstract
As deep space exploration tasks become increasingly complex, the mobility and adaptability of traditional wheeled or tracked probe robots with high functional density are constrained in harsh, dangerous, or unknown environments. A practical solution to these challenges is to design a probe robot for preliminary exploration of unknown areas that combines robust adaptability, a simple structure, light weight, and minimal volume. Compared to traditional deep space probe robots, a spherical robot with a geometric, symmetrical structure adapts better to complex ground environments. Considering the uncertain detection environment, the spherical robot should brake rapidly after jumping to avoid re-entering obstacles. Moreover, since it is equipped with optical modules for deep space exploration missions, the spherical robot must maintain motion stability while rolling to ensure the quality of the photos and videos captured. However, due to the nonlinear coupling and parameter uncertainty of the spherical robot, adjusting controller parameters is tedious, and the adaptability of controllers with fixed parameters is limited. This paper proposes an adaptive proportion–integration–differentiation (PID) control method based on reinforcement learning for the multi-motion mode spherical probe robot (MMSPR) with rolling and jumping capabilities. The method uses the soft actor–critic (SAC) algorithm to adjust the parameters of the PID controller and introduces a switching control strategy to reduce static error. Simulation results show that the method enables the MMSPR to converge within 0.02 s in terms of motion stability. In addition, for braking, it enables an MMSPR with a random initial speed to brake within a convergence time of 0.045 s and a displacement of 0.0013 m. Compared with a PID controller with fixed parameters, the braking displacement of the MMSPR is reduced by about 38% and the convergence time by about 20%, showing better universality and adaptability.
(This article belongs to the Section Robotics and Automation)
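
The control scheme can be pictured as a PID loop whose gains are rewritten each step by a tuning policy (SAC in the paper). The sketch below is not the authors' implementation; the toy first-order plant, gains, and time step are assumptions.

```python
# Illustrative sketch only: a discrete PID velocity controller whose gains can be
# overwritten each control step by an external tuning policy.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_meas = 0.0

    def set_gains(self, kp, ki, kd):
        # An adaptive scheme (e.g., a learned policy) would call this every step.
        self.kp, self.ki, self.kd = kp, ki, kd

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # Differentiating the measurement (not the error) avoids the derivative kick.
        derivative = -(measurement - self.prev_meas) / self.dt
        self.prev_meas = measurement
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy rolling-velocity plant: dv/dt = (u - v) / tau
dt, tau, v = 0.01, 0.1, 0.0
pid = PID(kp=5.0, ki=2.0, kd=0.02, dt=dt)
for _ in range(200):
    u = pid.step(setpoint=1.0, measurement=v)
    v += (u - v) / tau * dt
print(f"velocity after 2 s: {v:.3f}")
```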

19 pages, 3480 KB  
Hypothesis
On the Fence: The Impact of Education on Support for Electric Fencing to Prevent Conflict between Humans and Baboons in Kommetjie, South Africa
by Debbie Walsh, M. Justin O’Riain, Nicoli Nattrass and David Gaynor
Animals 2023, 13(13), 2125; https://doi.org/10.3390/ani13132125 - 27 Jun 2023
Viewed by 3502
Abstract
Few studies test whether education can help increase support for wildlife management interventions. This mixed-methods study tested the importance of educating a community on the use of a baboon-proof electric fence to mitigate negative interactions between humans and Chacma baboons (Papio ursinus) in a residential suburb of the City of Cape Town, South Africa. An educational video on the welfare, conservation and lifestyle benefits of a baboon-proof electric fence was included in a short online survey. The position of the video within the survey was randomised to fall either before or after the questions probing the level of support for an electric fence. The results showed that watching the video before most survey questions increased the average marginal probability of supporting an electric fence by 15 percentage points. The study also explored whether the educational video could change people’s minds. Those who saw the video towards the end of the survey were questioned again about the electric fence. Many changed their minds after watching the video, with support for the fence increasing from 36% to 50%. Among these respondents, being female raised the average marginal probability of changing one’s mind in favour of supporting the fence by 19%. Qualitative analysis revealed that support for or against the fence was multi-layered and that costs and concern for baboons were not the only factors influencing people’s choices. Conservation often needs to change people’s behaviours, and we need to know which interventions are effective. We show, in a real-world setting, that an educational video can be effective, can moderately change people’s opinions, and that women are more likely than men to change their position in light of the facts. This study contributes to the emerging literature on the importance of education in managing conservation conflicts and the need for evidence-based interventions.
