Search Results (68)

Search Parameters:
Keywords = gaze motions

25 pages, 9742 KiB  
Article
Autism Spectrum Disorder Detection Using Skeleton-Based Body Movement Analysis via Dual-Stream Deep Learning
by Jungpil Shin, Abu Saleh Musa Miah, Manato Kakizaki, Najmul Hassan and Yoichi Tomioka
Electronics 2025, 14(11), 2231; https://doi.org/10.3390/electronics14112231 - 30 May 2025
Viewed by 638
Abstract
Autism Spectrum Disorder (ASD) poses significant challenges in diagnosis due to its diverse symptomatology and the complexity of early detection. Atypical gait and gesture patterns, prominent behavioural markers of ASD, hold immense potential for facilitating early intervention and optimising treatment outcomes. These patterns can be efficiently and non-intrusively captured using modern computational techniques, making them valuable for ASD recognition. Prior deep learning research on ASD detection includes facial feature analysis, eye gaze analysis, and movement and gesture analysis. In this study, we optimise a dual-stream architecture that combines image classification and skeleton recognition models to analyse video data for body motion analysis. The first stream processes Skepxels—spatial representations derived from skeleton data—using ConvNeXt-Base, a robust image recognition model that efficiently captures aggregated spatial embeddings. The second stream encodes angular features, embedding relative joint angles into the skeleton sequence and extracting spatiotemporal dynamics using a Multi-Scale Graph 3D Convolutional Network (MSG3D), a combination of Graph Convolutional Networks (GCNs) and Temporal Convolutional Networks (TCNs). We replace the ViT model from the original architecture with ConvNeXt-Base to evaluate the efficacy of CNN-based models in capturing gesture-related features for ASD detection. Additionally, we experimented with a Stack Transformer in the second stream instead of MSG3D but found that it resulted in lower accuracy, highlighting the importance of GCN-based models for motion analysis. The integration of these two streams ensures comprehensive feature extraction, capturing both global and detailed motion patterns. A pairwise Euclidean distance loss is employed during training to enhance the consistency and robustness of feature representations. Our experiments demonstrate that the two-stream approach, combining ConvNeXt-Base and MSG3D, offers a promising method for effective autism detection. This approach not only enhances accuracy but also contributes valuable insights into optimising deep learning models for gesture-based recognition. By integrating image classification and skeleton recognition, we can better capture both global and detailed motion patterns, which are crucial for improving early ASD diagnosis and intervention strategies.
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)

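The pairwise Euclidean distance loss mentioned in the abstract can be pictured as a consistency term that pulls the two streams' embeddings together. Below is a minimal PyTorch sketch of that idea; the `alpha` weighting and the way the term is combined with the classification loss are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def pairwise_euclidean_loss(feat_img: torch.Tensor, feat_skel: torch.Tensor) -> torch.Tensor:
    """Mean Euclidean distance between per-sample embeddings of the two streams.

    feat_img:  (batch, dim) features from the image stream (e.g., ConvNeXt-Base on Skepxels)
    feat_skel: (batch, dim) features from the skeleton stream (e.g., MSG3D on joint angles)
    """
    return F.pairwise_distance(feat_img, feat_skel).mean()

# Hypothetical training objective: classification loss plus a weighted
# consistency term (alpha is an assumed hyperparameter, not from the paper).
def total_loss(logits, labels, feat_img, feat_skel, alpha=0.1):
    ce = F.cross_entropy(logits, labels)
    return ce + alpha * pairwise_euclidean_loss(feat_img, feat_skel)
```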
17 pages, 18945 KiB  
Article
Collaborative Robot Control Based on Human Gaze Tracking
by Francesco Di Stefano, Alice Giambertone, Laura Salamina, Matteo Melchiorre and Stefano Mauro
Sensors 2025, 25(10), 3103; https://doi.org/10.3390/s25103103 - 14 May 2025
Viewed by 595
Abstract
Gaze tracking is gaining relevance in collaborative robotics as a means to enhance human–machine interaction by enabling intuitive and non-verbal communication. This study explores the integration of human gaze into collaborative robotics by demonstrating the possibility of controlling a robotic manipulator with a practical and non-intrusive setup made up of a vision system and gaze-tracking software. After presenting a comparison between the major available systems on the market, OpenFace 2.0 was selected as the primary gaze-tracking software and integrated with a UR5 collaborative robot through a MATLAB-based control framework. Validation was conducted through real-world experiments, analyzing the effects of raw and filtered gaze data on system accuracy and responsiveness. The results indicate that gaze tracking can effectively guide robot motion, though signal processing significantly impacts responsiveness and control precision. This work establishes a foundation for future research on gaze-assisted robotic control, highlighting its potential benefits and challenges in enhancing human–robot collaboration.
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)

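The reported trade-off between raw and filtered gaze data can be reproduced with even the simplest smoothing filter. The sketch below is plain Python, not the authors' OpenFace/MATLAB pipeline: a first-order exponential moving average, where lowering `alpha` suppresses gaze jitter but introduces the lag that degrades responsiveness.

```python
class EMAFilter:
    """First-order low-pass (exponential moving average) for noisy gaze samples.

    Smaller alpha -> smoother but laggier signal: the accuracy/responsiveness
    trade-off discussed in the study.
    """
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self.state = None

    def update(self, sample: float) -> float:
        if self.state is None:
            self.state = sample          # initialize on the first sample
        else:
            self.state = self.alpha * sample + (1.0 - self.alpha) * self.state
        return self.state

# Hypothetical gaze yaw readings (radians) with jitter:
raw_yaw = [0.10, 0.40, 0.12, 0.35, 0.15]
f = EMAFilter(alpha=0.2)
smoothed = [f.update(y) for y in raw_yaw]
```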
20 pages, 1942 KiB  
Article
Operator Expertise in Bilateral Teleoperation: Performance, Manipulation, and Gaze Metrics
by Harun Tugal, Ihsan Tugal, Fumiaki Abe, Masaki Sakamoto, Shu Shirai, Ipek Caliskanelli and Robert Skilton
Electronics 2025, 14(10), 1923; https://doi.org/10.3390/electronics14101923 - 9 May 2025
Cited by 2 | Viewed by 802
Abstract
This paper presents a comprehensive user study aimed at assessing and differentiating operator expertise within bilateral teleoperation systems. The primary objective is to identify key performance metrics that effectively distinguish novice from expert users. Unlike prior approaches that focus primarily on psychological evaluations, this study emphasizes direct performance analysis across a range of telerobotic tasks. Ten participants (six novices and four experts) were assessed based on task completion time and difficulty, error rates, manipulator motion characteristics, gaze behaviour, and subjective feedback via questionnaires. The results show that experienced operators outperformed novices by completing tasks faster, making fewer errors, and demonstrating smoother manipulator control, as reflected by reduced jerk and higher spatial precision. Experts also maintained consistent performance even as task complexity increased, whereas novices experienced a sharp decline, particularly at higher difficulty levels. Questionnaire responses further revealed that novices experienced higher mental and physical demands, especially in unfamiliar tasks, while experts demonstrated higher concentration and arousal levels. Additionally, the study introduces gaze transition entropy (GTE) and stationary gaze entropy (SGE) metrics to quantify visual attention strategies, with experts exhibiting more focused, goal-oriented gaze patterns, while novices showed more erratic and inefficient behaviour. These findings highlight both quantitative and qualitative measures as critical for evaluating operator performance and informing future teleoperation training programs.
(This article belongs to the Special Issue Haptic Systems and the Tactile Internet: Design and Applications)

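Gaze transition entropy and stationary gaze entropy have standard Shannon-entropy definitions over areas of interest (AOIs). A minimal sketch, assuming fixations have already been mapped to AOI labels; the empirical fixation proportions stand in for the stationary distribution of the transition matrix, a common practical approximation.

```python
import numpy as np
from collections import Counter

def gaze_entropies(aoi_sequence):
    """SGE and GTE from a sequence of fixated AOIs.

    SGE = -sum_i p_i log2 p_i              (dispersion of attention)
    GTE = -sum_i p_i sum_j p_ij log2 p_ij  (predictability of transitions)
    """
    aois = sorted(set(aoi_sequence))
    counts = Counter(aoi_sequence)
    n = len(aoi_sequence)
    p = {a: counts[a] / n for a in aois}
    sge = -sum(p[a] * np.log2(p[a]) for a in aois if p[a] > 0)

    # First-order transition counts between consecutive fixations.
    trans = Counter(zip(aoi_sequence[:-1], aoi_sequence[1:]))
    gte = 0.0
    for i in aois:
        row_total = sum(trans[(i, j)] for j in aois)
        if row_total == 0:
            continue
        for j in aois:
            pij = trans[(i, j)] / row_total
            if pij > 0:
                gte -= p[i] * pij * np.log2(pij)
    return sge, gte

# Hypothetical fixation stream over three AOIs:
print(gaze_entropies(["screen", "master_arm", "screen", "task", "screen"]))
```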
15 pages, 3391 KiB  
Article
OKN and Pupillary Response Modulation by Gaze and Attention Shifts
by Kei Kanari and Moe Kikuchi
J. Eye Mov. Res. 2025, 18(2), 11; https://doi.org/10.3390/jemr18020011 - 7 Apr 2025
Viewed by 393
Abstract
Pupil responses and optokinetic nystagmus (OKN) are known to vary with the brightness and direction of motion of attended stimuli, as well as gaze position. However, whether these processes are controlled by a common mechanism remains unclear. In this study, we investigated how OKN latency relates to pupil response latency under two conditions: gaze shifts (eye movement) and attention shifts (covert attention without eye movement). OKN showed consistent temporal changes across both gaze and attention conditions, whereas pupillary responses exhibited distinct patterns. Moreover, the results revealed no significant correlation between pupil latency and OKN latency in either condition. These findings suggest that, although OKN and pupillary responses are influenced by similar attentional processes, their underlying mechanisms may differ.

27 pages, 5537 KiB  
Article
Real-Time Gaze Estimation Using Webcam-Based CNN Models for Human–Computer Interactions
by Visal Vidhya and Diego Resende Faria
Computers 2025, 14(2), 57; https://doi.org/10.3390/computers14020057 - 10 Feb 2025
Viewed by 3175
Abstract
Gaze tracking and estimation are essential for understanding human behavior and enhancing human–computer interactions. This study introduces an innovative, cost-effective solution for real-time gaze tracking using a standard webcam, providing a practical alternative to conventional methods that rely on expensive infrared (IR) cameras. Traditional approaches, such as Pupil Center Corneal Reflection (PCCR), require IR cameras to capture corneal reflections and iris glints, demanding high-resolution images and controlled environments. In contrast, the proposed method utilizes a convolutional neural network (CNN) trained on webcam-captured images to achieve precise gaze estimation. The developed deep learning model achieves a mean squared error (MSE) of 0.0112 and an accuracy of 90.98% under a novel trajectory-based accuracy evaluation system. This system involves an animation of a ball moving across the screen, with the user’s gaze following the ball’s motion. Accuracy is determined by calculating the proportion of gaze points falling within a predefined threshold based on the ball’s radius, ensuring a comprehensive evaluation of the system’s performance across all screen regions. Data collection is both simplified and effective, capturing images of the user’s right eye while they focus on the screen. Additionally, the system includes advanced gaze analysis tools, such as heat maps, gaze fixation tracking, and blink rate monitoring, all integrated into an intuitive user interface. The robustness of this approach is further enhanced by incorporating Google’s MediaPipe model for facial landmark detection, improving accuracy and reliability. The evaluation results demonstrate that the proposed method delivers high-accuracy gaze prediction without the need for expensive equipment, making it a practical and accessible solution for diverse applications in human–computer interactions and behavioral research.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)

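The trajectory-based accuracy evaluation reduces to a simple geometric test: what fraction of gaze samples land within a radius-based threshold of the moving ball. A sketch under that reading; the exact threshold rule (`scale * ball_radius`) is an assumption.

```python
import numpy as np

def trajectory_accuracy(gaze_xy, ball_xy, ball_radius, scale=1.0):
    """Share of gaze samples within `scale * ball_radius` of the moving target.

    gaze_xy, ball_xy: (n, 2) arrays of screen coordinates in pixels,
    sampled at matching times while the ball animates across the screen.
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    ball_xy = np.asarray(ball_xy, dtype=float)
    dist = np.linalg.norm(gaze_xy - ball_xy, axis=1)   # per-sample gaze error
    return float(np.mean(dist <= scale * ball_radius))

# Hypothetical samples: three gaze points tracking a ball of radius 30 px.
print(trajectory_accuracy([[100, 100], [150, 120], [400, 300]],
                          [[105, 102], [160, 118], [210, 260]], ball_radius=30))
```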
24 pages, 1861 KiB  
Review
Impact of Virtual Reality on Brain–Computer Interface Performance in IoT Control—Review of Current State of Knowledge
by Adrianna Piszcz, Izabela Rojek and Dariusz Mikołajewski
Appl. Sci. 2024, 14(22), 10541; https://doi.org/10.3390/app142210541 - 15 Nov 2024
Cited by 4 | Viewed by 7853
Abstract
This article examines state-of-the-art research into the impact of virtual reality (VR) on brain–computer interface (BCI) performance: how VR can affect brain activity and neural plasticity in ways that improve BCI performance in IoT control, e.g., for smart home purposes. Integrating BCI with VR improves performance by providing immersive, adaptive training environments that increase signal accuracy and user control. VR offers real-time feedback and simulations that help users refine their interactions with smart home systems, making the interface more intuitive and responsive. This combination ultimately leads to greater independence, efficiency, and ease of use, especially for users with mobility issues, in managing IoT-connected devices. The integration of BCI and VR shows great potential for transformative applications ranging from neurorehabilitation and human–computer interaction to cognitive assessment and personalized therapeutic interventions for a variety of neurological and cognitive disorders. The literature review highlights the significant advances and multifaceted challenges in this rapidly evolving field. Particularly noteworthy is the emphasis on the importance of adaptive signal processing techniques, which are key to enhancing the overall control and immersion experienced by individuals in virtual environments. The value of multimodal integration, in which BCI technology is combined with complementary biosensors such as gaze tracking and motion capture, is also highlighted. The incorporation of advanced artificial intelligence (AI) techniques promises to change the way we approach the diagnosis and treatment of neurodegenerative conditions.
(This article belongs to the Special Issue IoT in Smart Cities and Homes, 2nd Edition)

30 pages, 2719 KiB  
Article
Predicting Shot Accuracy in Badminton Using Quiet Eye Metrics and Neural Networks
by Samson Tan and Teik Toe Teoh
Appl. Sci. 2024, 14(21), 9906; https://doi.org/10.3390/app14219906 - 29 Oct 2024
Cited by 3 | Viewed by 2756
Abstract
This paper presents a novel approach to predicting shot accuracy in badminton by analyzing Quiet Eye (QE) metrics such as QE duration, fixation points, and gaze dynamics. We develop a neural network model that combines visual data from eye-tracking devices with biomechanical data such as body posture and shuttlecock trajectory. Our model is designed to predict shot accuracy, providing insights into the role of QE in performance. The study involved 30 badminton players of varying skill levels from the Chinese Swimming Club in Singapore. Using a combination of eye-tracking technology and motion capture systems, we collected data on QE metrics and biomechanical factors during a series of badminton shots, 750 in total. Key results include: (1) The neural network model achieved 85% accuracy in predicting shot outcomes, demonstrating the potential of integrating QE metrics with biomechanical data. (2) QE duration and onset were identified as the most significant predictors of shot accuracy, followed by racket speed and wrist angle at impact. (3) Elite players exhibited significantly longer QE durations (M = 289.5 ms) compared to intermediate (M = 213.7 ms) and novice players (M = 168.3 ms). (4) A strong positive correlation (r = 0.72) was found between QE duration and shot accuracy across all skill levels. These findings have important implications for badminton training and performance evaluation. The study suggests that QE-based training programs could significantly enhance players’ shot accuracy. Furthermore, the predictive model developed in this study offers a framework for real-time performance analysis and personalized training regimens in badminton. By bridging cognitive neuroscience and sports performance through advanced data analytics, this research paves the way for more sophisticated, individualized training approaches in badminton and potentially other fast-paced sports. Future research directions include exploring the temporal dynamics of QE during matches and developing real-time feedback systems based on QE metrics.

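The headline r = 0.72 between QE duration and shot accuracy is a plain Pearson correlation, easy to reproduce on per-shot records. The values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical per-shot records: QE duration (ms) and hit/miss outcome.
qe_ms = np.array([295.0, 170.5, 220.3, 310.8, 150.2, 265.4])
hit = np.array([1, 0, 1, 1, 0, 1], dtype=float)

# Pearson correlation coefficient, the statistic behind the reported r = 0.72.
r = np.corrcoef(qe_ms, hit)[0, 1]
print(f"r = {r:.2f}")
```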
18 pages, 8219 KiB  
Article
Evolution of the “4-D Approach” to Dynamic Vision for Vehicles
by Ernst Dieter Dickmanns
Electronics 2024, 13(20), 4133; https://doi.org/10.3390/electronics13204133 - 21 Oct 2024
Viewed by 1281
Abstract
Spatiotemporal models for the 3-D shape and motion of objects allowed large progress in the 1980s in the visual perception of moving objects observed from a moving platform. Despite the successes demonstrated with several vehicles, the “4-D approach” has not been accepted generally. Its advantage is that only the last image of the sequence needs to be analyzed in detail to allow the full state vectors of moving objects, including their velocity components, to be reconstructed by the feedback of prediction errors. The vehicle carrying the cameras can thus, together with conventional measurements, directly create a visualization of the situation encountered. In 1994, at the final demonstration of the project PROMETHEUS, two sedan vehicles using this approach were the only ones worldwide capable of driving autonomously in standard heavy traffic on three-lane Autoroutes near Paris at speeds up to 130 km/h (convoy driving, lane changes, passing). Up to ten vehicles nearby could be perceived. In this paper, the three-layer architecture of the perception system is reviewed. At the end of the 1990s, the system evolved from mere recognition of objects in motion to understanding complex dynamic scenes by developing behavioral capabilities, like fast saccadic changes in the gaze direction for flexible concentration on objects of interest. By analyzing the motion of objects over time, the situation for decision making was assessed. In the third-generation system, “EMS-vision”, the behavioral capabilities of agents were represented on an abstract level for characterizing their potential behaviors. These maneuvers form an additional knowledge base. The system has proven capable of driving in networks of minor roads, including off-road sections, with avoidance of negative obstacles (ditches). Results are shown for road vehicle guidance. Potential transitions to a robot mind and to the now-favored CNNs are touched on.
(This article belongs to the Special Issue Advancement on Smart Vehicles and Smart Travel)

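The core of the 4-D approach is recursive state estimation: a spatiotemporal model predicts the next observation, and the prediction error (innovation) feeds back to refine the full state vector, velocities included, from only the newest image. A minimal constant-velocity Kalman filter conveys the idea; the matrices below are illustrative assumptions, not the vehicle models from the paper.

```python
import numpy as np

dt = 0.04                                   # e.g., 25 Hz image sequence
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition model
H = np.array([[1.0, 0.0]])                  # only position is measured
Q = np.diag([1e-4, 1e-2])                   # process noise (assumed)
R = np.array([[0.25]])                      # measurement noise (assumed)

x = np.array([[0.0], [0.0]])                # state: [position, velocity]
P = np.eye(2)                               # state covariance

for z in [0.10, 0.22, 0.29, 0.41]:          # hypothetical image measurements
    # Predict with the spatiotemporal model.
    x, P = F @ x, F @ P @ F.T + Q
    # Feed back the prediction error (innovation).
    innovation = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ innovation
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())                            # estimated position and velocity
```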
20 pages, 330 KiB  
Article
An Eye-Tracking Study on Six Early Social-Emotional Abilities in Children Aged 1 to 3 Years
by Thalia Cavadini, Elliot Riviere and Edouard Gentaz
Children 2024, 11(8), 1031; https://doi.org/10.3390/children11081031 - 22 Aug 2024
Cited by 1 | Viewed by 2011
Abstract
Background: The experimental evaluation of young children’s socio-emotional abilities is limited by the lack of existing specific measures to assess this population and by the relative difficulty for researchers to adapt measures designed for the general population. Methods: This study examined six early social-emotional abilities in 86 typically developing children aged 1 to 3 years using an eye-tracking-based experimental paradigm that combined visual preference tasks adapted from pre-existing infant studies. Objectives: The aim of this study is to obtain developmental norms for six early social-emotional abilities in typical children aged 1 to 3 years that would be promising for an understanding of disorders of mental development. These developmental standards are essential to enable comparative assessments with children with atypical development, such as children with Profound Intellectual and Multiple Disabilities (PIMD). Results: The participants had greater spontaneous visual preferences for biological (vs. non-biological) motion, socially salient (vs. non-social) stimuli, the eye (vs. mouth) area of emotional expressions, angry (vs. happy) faces, and objects of joint attention (vs. non-looked-at ones). Interestingly, although the prosocial (vs. antisocial) scene of the socio-moral task was preferred, both the helper and hinderer characters were gazed at equally. Finally, correlational analyses revealed that performance was related neither to participants’ age nor to each other (dismissing the hypothesis of a common underpinning process). Conclusion: Our revised experimental paradigm is feasible with children aged 1 to 3 years and thus provides additional scientific evidence on the direct assessment of these six socio-emotional abilities in this population.
(This article belongs to the Section Pediatric Mental Health)
14 pages, 1508 KiB  
Article
A Mouth and Tongue Interactive Device to Control Wearable Robotic Limbs in Tasks where Human Limbs Are Occupied
by Hongwei Jing, Tianjiao Zheng, Qinghua Zhang, Benshan Liu, Kerui Sun, Lele Li, Jie Zhao and Yanhe Zhu
Biosensors 2024, 14(5), 213; https://doi.org/10.3390/bios14050213 - 24 Apr 2024
Cited by 1 | Viewed by 2657
Abstract
The Wearable Robotic Limb (WRL) is a type of robotic arm worn on the human body, aiming to enhance the wearer’s operational capabilities. However, providing additional methods to control and perceive the WRL when human limbs are heavily occupied with primary tasks presents a challenge. Existing interactive methods, such as voice, gaze, and electromyography (EMG), have limitations in control precision and convenience. To address this, we have developed an interactive device that utilizes the mouth and tongue. This device is lightweight and compact, allowing wearers to achieve continuous motion and contact force control of the WRL. By using a tongue controller and mouth gas pressure sensor, wearers can control the WRL while also receiving sensitive contact feedback through changes in mouth pressure. To facilitate bidirectional interaction between the wearer and the WRL, we have devised an algorithm that divides WRL control into motion and force-position hybrid modes. To evaluate the performance of the device, we conducted an experiment with ten participants tasked with completing a pin-hole assembly task with the assistance of the WRL system. The results show that the device enables continuous control of the position and contact force of the WRL, with users perceiving feedback through mouth airflow resistance. However, the experiment also revealed some shortcomings of the device, including user fatigue and its impact on breathing. Experimental investigation showed that fatigue levels can decrease with training. Furthermore, the limitations of the device have shown potential for improvement through structural enhancements. Overall, our mouth and tongue interactive device shows promising potential in controlling the WRL during tasks where human limbs are occupied.
(This article belongs to the Special Issue Devices and Wearable Devices toward Innovative Applications)

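The division of WRL control into a motion mode and a force-position hybrid mode can be sketched as a simple contact-triggered switch. This is a toy illustration under assumed parameters (`force_ref`, `contact_thresh`, and `kf` are hypothetical), not the paper's algorithm.

```python
def wrl_command(target_pos, contact_force, force_ref=2.0,
                contact_thresh=0.5, kf=0.002):
    """Toy mode switch for a wearable robotic limb controller.

    Below the contact threshold the limb follows pure position commands;
    once contact is detected, the position setpoint is corrected to
    regulate contact force toward force_ref (force-position hybrid mode).
    Units and gains are illustrative only.
    """
    if contact_force < contact_thresh:
        return {"mode": "motion", "position": target_pos}
    correction = kf * (force_ref - contact_force)   # push/yield along contact
    return {"mode": "force-position", "position": target_pos + correction}

print(wrl_command(target_pos=0.30, contact_force=0.1))   # free space
print(wrl_command(target_pos=0.30, contact_force=3.5))   # in contact
```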
11 pages, 2625 KiB  
Article
Properties of Gaze Strategies Based on Eye–Head Coordination in a Ball-Catching Task
by Seiji Ono, Yusei Yoshimura, Ryosuke Shinkai and Tomohiro Kizuka
Vision 2024, 8(2), 20; https://doi.org/10.3390/vision8020020 - 15 Apr 2024
Cited by 1 | Viewed by 2255
Abstract
Visual motion information plays an important role in the control of movements in sports. Skilled ball players are thought to acquire accurate visual information by using an effective visual search strategy with eye and head movements. However, differences in catching ability and gaze movements due to sports experience and expertise have not been clarified. Therefore, the purpose of this study was to determine the characteristics of gaze strategies based on eye and head movements during a ball-catching task in athlete and novice groups. Participants were softball and tennis players and college students with no experience in ball sports (novices). They performed a one-handed catching task using a tennis ball-shooting machine placed 9 m in front of them, with two conditions set depending on the height of the ball trajectory (high and low conditions). Head and eye velocities were detected during the task using a gyroscope and electrooculography (EOG). Our results showed that the upward head velocity and the downward eye velocity were lower in the softball group than in the tennis and novice groups. When the head pitched upward, a downward eye velocity was induced by the vestibulo-ocular reflex (VOR) during ball catching. Therefore, it is suggested that skilled ball players have relatively stable head and eye movements, which may lead to an effective gaze strategy. An advantage of the more stationary gaze in the softball group could be the ability to acquire visual information about the surroundings beyond the ball itself.
(This article belongs to the Special Issue Eye and Head Movements in Visuomotor Tasks)

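The compensatory relationship described above, upward head pitch inducing downward eye velocity via the VOR, is conventionally quantified as a gain near 1 when gaze is stable. A small sketch with hypothetical velocity traces:

```python
import numpy as np

def vor_gain(eye_vel, head_vel):
    """Vestibulo-ocular reflex gain from matched velocity samples (deg/s).

    Eye velocity is roughly equal and opposite to head velocity when gaze
    is stabilized, so -eye/head is near 1 for a stable gaze.
    """
    eye_vel = np.asarray(eye_vel, dtype=float)
    head_vel = np.asarray(head_vel, dtype=float)
    mask = np.abs(head_vel) > 1.0   # skip samples with near-zero head motion
    return float(np.mean(-eye_vel[mask] / head_vel[mask]))

# Hypothetical traces: upward head pitch with compensatory downward eye motion.
print(vor_gain(eye_vel=[-9.5, -10.2, -8.8], head_vel=[10.0, 10.5, 9.0]))
```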
16 pages, 6278 KiB  
Article
Competing Visual Cues Revealed by Electroencephalography: Sensitivity to Motion Speed and Direction
by Rassam Rassam, Qi Chen and Yan Gai
Brain Sci. 2024, 14(2), 160; https://doi.org/10.3390/brainsci14020160 - 4 Feb 2024
Cited by 2 | Viewed by 1799
Abstract
Motion speed and direction are two fundamental cues for the mammalian visual system. Neurons in various areas of the neocortex show tuning properties in terms of firing frequency to both speed and direction. The present study applied a 32-channel electroencephalograph (EEG) system to 13 human subjects while they observed a single object moving at different speeds in various directions from the center of view to the periphery on a computer monitor. Depending on the experimental condition, the subjects were either required to fix their gaze at the center of the monitor while the object was moving or to track the movement with their gaze; eye-tracking glasses were used to ensure that they followed instructions. In each trial, motion speed and direction varied randomly and independently, forming two competing visual features. EEG signal classification was performed for each cue separately (e.g., 11 speed values or 11 directions), regardless of variations in the other cue. Under the eye-fixed condition, multiple subjects showed distinct preferences for motion direction over speed; however, two outliers showed superb sensitivity to speed. Under the eye-tracking condition, in which the EEG signals presumably contained ocular movement signals, all subjects showed predominantly better classification for motion direction. There was a trend that speed and direction were encoded by different electrode sites. Since EEG is a noninvasive and portable approach suitable for brain–computer interfaces (BCIs), this study provides insights into fundamental knowledge of the visual system as well as BCI applications based on visual stimulation.

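Classifying each cue separately, regardless of variation in the other, amounts to training one decoder per label set on the same epochs. A scikit-learn sketch on random stand-in data; the feature shape and the LDA classifier are assumptions, and the paper's actual decoder may differ.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical epoched EEG: n_trials x (n_channels * n_times) features,
# with independent speed and direction labels per trial (11 classes each).
rng = np.random.default_rng(0)
X = rng.standard_normal((220, 32 * 64))
direction = rng.integers(0, 11, size=220)
speed = rng.integers(0, 11, size=220)

# One decoder per cue, ignoring the other cue's variation, mirroring the
# per-cue classification described in the abstract.
for name, y in [("direction", direction), ("speed", speed)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(name, f"{acc:.2f}")   # ~chance (1/11) on this random stand-in data
```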
23 pages, 14614 KiB  
Article
The Design and Control of a Biomimetic Binocular Cooperative Perception System Inspired by the Eye Gaze Mechanism
by Xufang Qin, Xiaohua Xia, Zhaokai Ge, Yanhao Liu and Pengju Yue
Biomimetics 2024, 9(2), 69; https://doi.org/10.3390/biomimetics9020069 - 24 Jan 2024
Cited by 1 | Viewed by 1738
Abstract
Research on systems that imitate the gaze function of human eyes is valuable for the development of humanoid eye intelligent perception. However, existing systems have some limitations, including redundant servo motors, a lack of camera position adjustment components, and the absence of interest-point-driven binocular cooperative motion-control strategies. In response to these challenges, a novel biomimetic binocular cooperative perception system (BBCPS) was designed and its control was realized. Inspired by the gaze mechanism of human eyes, we designed a simple and flexible biomimetic binocular cooperative perception device (BBCPD). Based on a dynamic analysis, the BBCPD was assembled according to the principle of symmetrical distribution around the center, which enhances braking performance and reduces operating energy consumption, as evidenced by the simulation results. Moreover, we crafted an initial position calibration technique that allows for the calibration and adjustment of the camera pose and servo motor zero-position, to ensure that the state of the BBCPD matches the subsequent control method. Following this, a control method for the BBCPS was developed, combining interest-point detection with a motion-control strategy. Specifically, we propose a binocular interest-point extraction method based on frequency-tuned and template-matching algorithms for perceiving interest points. To move an interest point to the principal point, we present a binocular cooperative motion-control strategy: the rotation angles of the servo motors are calculated from the pixel difference between the principal point and the interest point, and the PID-controlled servo motors are driven in parallel. Finally, real experiments validated the control performance of the BBCPS, demonstrating that the gaze error was less than three pixels.
(This article belongs to the Special Issue Bioinspired Engineering and the Design of Biomimetic Structures)

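The final control step, converting the pixel offset between the interest point and the principal point into servo rotations under PID control, can be sketched as follows. The gains and the pinhole small-angle conversion are illustrative assumptions, not the paper's tuning.

```python
import math

class PID:
    """Textbook PID controller; gains are illustrative."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def pixels_to_angle(pixel_err, focal_px):
    """Angular error (rad) that would bring the interest point onto the
    principal point, given the focal length in pixels (pinhole model)."""
    return math.atan2(pixel_err, focal_px)

# Drive pan/tilt motors in parallel from the pixel offset between the
# detected interest point and the camera's principal point.
pan = PID(kp=0.8, ki=0.05, kd=0.01, dt=0.02)
angle_cmd = pan.update(pixels_to_angle(pixel_err=24.0, focal_px=800.0))
print(angle_cmd)
```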
16 pages, 2756 KiB  
Protocol
Methodology and Experimental Protocol for Studying Learning and Motor Control in Neuromuscular Structures in Pilates
by Mário José Pereira, Alexandra André, Mário Monteiro, Maria António Castro, Rui Mendes, Fernando Martins, Ricardo Gomes, Vasco Vaz and Gonçalo Dias
Healthcare 2024, 12(2), 229; https://doi.org/10.3390/healthcare12020229 - 17 Jan 2024
Cited by 2 | Viewed by 2864
Abstract
The benefits of Pilates have been extensively researched for their impact on muscular, psychological, and cardiac health, as well as body composition, among other aspects. This study aims to investigate the influence of the Pilates method on the learning process, motor control, and neuromuscular trunk stabilization in both experienced and inexperienced practitioners. This semi-randomized controlled trial compares the level of experience among 36 Pilates practitioners in terms of motor control and learning of two Pilates-based skills: the standing plank and the side crisscross. Data will be collected using various assessment methods, including abdominal wall muscle ultrasound (AWMUS), shear wave elastography (SWE), gaze behavior (GA) assessment, electroencephalography (EEG), and video motion. Significant intra- and inter-individual variations are expected due to the diverse morphological and psychomotor profiles in the sample. The adoption of both linear and non-linear analyses will provide a comprehensive evaluation of how neuromuscular structures evolve over time and space, offering both quantitative and qualitative insights. Non-linear analysis is expected to reveal higher entropy in the expert group compared to non-experts, signifying greater complexity in their motor control. In terms of stability, experts are likely to exhibit higher Lyapunov exponent values, indicating enhanced stability and coordination, along with lower Hurst exponent values. In elastography, experienced practitioners are expected to display higher transversus abdominis (TrA) muscle elasticity, due to their proficiency. Concerning GA, non-experts are expected to demonstrate more saccades, attention to more Areas of Interest (AOIs), and shorter fixation times, as experts are presumed to have more efficient gaze control. In EEG, we anticipate higher theta wave values in the non-expert group compared to the expert group. These expectations draw from similar studies in elastography and correlated research in eye tracking and EEG, and they are consistent with the principles of the Pilates method and other scientific knowledge in related techniques.

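Of the non-linear measures listed, the Hurst exponent is the easiest to make concrete: a rescaled-range (R/S) estimate on a postural or kinematic time series. A rough illustrative estimator, not the protocol's exact analysis pipeline:

```python
import numpy as np

def hurst_rs(x, min_win=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent.

    H ~ 0.5 for uncorrelated noise; lower values indicate anti-persistent
    (more tightly regulated) fluctuations, as expected for the experts here.
    Fits log(R/S) against log(window size) over dyadic windows.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    exponents = np.arange(0, int(np.log2(n / min_win)) + 1)
    sizes = np.unique((n // 2 ** exponents).astype(int))
    log_rs, log_n = [], []
    for w in sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation profile
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # scale
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_rs.append(np.log(np.mean(rs_vals)))
            log_n.append(np.log(w))
    return float(np.polyfit(log_n, log_rs, 1)[0])

# Hypothetical center-of-pressure-like signal; ~0.5 expected for white noise.
rng = np.random.default_rng(1)
print(hurst_rs(rng.standard_normal(1024)))
```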
16 pages, 1438 KiB  
Systematic Review
Localization of Vestibular Cortex Using Electrical Cortical Stimulation: A Systematic Literature Review
by Christina K. Arvaniti, Alexandros G. Brotis, Thanasis Paschalis, Eftychia Z. Kapsalaki and Kostas N. Fountas
Brain Sci. 2024, 14(1), 75; https://doi.org/10.3390/brainsci14010075 - 11 Jan 2024
Cited by 3 | Viewed by 2739
Abstract
The vestibular system plays a fundamental role in body orientation, posture control, and spatial and body motion perception, as well as in gaze and eye movements. We aimed to review the current knowledge regarding the location of the cortical and subcortical areas implicated in the processing of vestibular stimuli. The search was performed in PubMed and Scopus, focusing on studies reporting vestibular manifestations after electrical cortical stimulation. A total of 16 studies were finally included. Two main types of vestibular responses were elicited: vertigo and the perception of body movement, the latter being either rotatory or translational. Electrical stimulation of the temporal structures elicited mainly vertigo, while stimulation of the parietal lobe was associated with perceptions of body movement. Stimulation of the occipital lobe produced vertigo with visual manifestations. There was evidence that the vestibular responses became more robust with increasing current intensity. Low-frequency stimulation proved more effective than high-frequency stimulation in eliciting vestibular responses. Numerous non-vestibular responses were recorded after stimulation of the vestibular cortex, including somatosensory, viscero-sensory, and emotional manifestations. Newer imaging modalities such as functional MRI (fMRI), positron emission tomography (PET), SPECT, and near-infrared spectroscopy (NIRS) can provide useful information regarding the localization of the vestibular cortex.
(This article belongs to the Section Sensory and Motor Neuroscience)