Search Results (17)

Search Parameters:
Keywords = tactile attention task

18 pages, 3721 KiB  
Article
Haptic–Vision Fusion for Accurate Position Identification in Robotic Multiple Peg-in-Hole Assembly
by Jinlong Chen, Deming Luo, Zhigang Xiao, Minghao Yang, Xingguo Qin and Yongsong Zhan
Electronics 2025, 14(11), 2163; https://doi.org/10.3390/electronics14112163 - 26 May 2025
Viewed by 490
Abstract
Multi-peg-hole assembly is a fundamental process in robotic manufacturing, particularly for circular aviation electrical connectors (CAECs) that require precise axial alignment. However, CAEC assembly poses significant challenges due to small apertures, posture disturbances, and the need for high error tolerance. This paper proposes a dual-stream Siamese network (DSSN) framework that fuses visual and tactile modalities to achieve accurate position identification in six-degree-of-freedom robotic connector assembly tasks. The DSSN employs ConvNeXt for visual feature extraction and SE-ResNet-50 with integrated attention mechanisms for tactile feature extraction, while a gated attention module adaptively fuses multimodal features. A bidirectional long short-term memory (Bi-LSTM) recurrent neural network is introduced to jointly model spatiotemporal deviations in position and orientation. Compared with state-of-the-art methods, the proposed DSSN achieves improvements of approximately 7.4%, 5.7%, and 5.4% in assembly success rates after 1, 5, and 10 buckling iterations, respectively. Experimental results validate that the integration of multimodal adaptive fusion and sequential spatiotemporal learning enables robust and precise robotic connector assembly under high-tolerance conditions. Full article
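
The abstract describes a gated attention module that adaptively weighs visual and tactile features before a Bi-LSTM models the assembly sequence. As a rough illustration of that general idea only (not the authors' DSSN code), the following PyTorch sketch shows one common way such a gate can be written; the class name and all feature dimensions are placeholder assumptions.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gated fusion of visual and tactile feature vectors.

    A learned gate weighs each modality before the fused vector is passed to a
    downstream sequence model (e.g., a Bi-LSTM over assembly steps). All
    dimensions are placeholders, not those of the published DSSN.
    """
    def __init__(self, visual_dim=1024, tactile_dim=2048, fused_dim=512):
        super().__init__()
        self.proj_v = nn.Linear(visual_dim, fused_dim)
        self.proj_t = nn.Linear(tactile_dim, fused_dim)
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.Sigmoid())

    def forward(self, visual_feat, tactile_feat):
        v = torch.tanh(self.proj_v(visual_feat))
        t = torch.tanh(self.proj_t(tactile_feat))
        g = self.gate(torch.cat([v, t], dim=-1))  # per-dimension modality weight
        return g * v + (1.0 - g) * t              # adaptively fused feature

fusion = GatedFusion()
fused = fusion(torch.randn(4, 1024), torch.randn(4, 2048))
print(fused.shape)  # torch.Size([4, 512])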

29 pages, 2112 KiB  
Review
From Sensors to Care: How Robotic Skin Is Transforming Modern Healthcare—A Mini Review
by Yuting Zhu, Wendy Moyle, Min Hong and Kean Aw
Sensors 2025, 25(9), 2895; https://doi.org/10.3390/s25092895 - 3 May 2025
Cited by 1 | Viewed by 1415
Abstract
In recent years, robotics has made notable progress, becoming an essential component of daily life by facilitating complex tasks and enhancing human experiences. While most robots have traditionally featured hard surfaces, the growing demand for more comfortable and safer human–robot interactions has driven the development of soft robots. One type of soft robot, which incorporates innovative skin materials, transforms rigid structures into more pliable and adaptive forms, making them better suited for interacting with humans. Especially in healthcare and rehabilitation, robotic skin technology has gained substantial attention, offering transformative solutions for improving the functionality of prosthetics, exoskeletons, and companion robots. Although replicating the complex sensory functions of human skin remains a challenge, ongoing research in soft robotics focuses on developing sensors that mimic the softness and tactile sensitivity necessary for effective interaction. This review provides a narrative analysis of current trends in robotic skin development, specifically tailored for healthcare and rehabilitation applications, covering robotic skin types, sensor technologies, materials, challenges, and future research directions in this rapidly developing field. Full article
(This article belongs to the Special Issue Advanced Sensors Technologies for Soft Robotic System)

15 pages, 1937 KiB  
Article
Improving the Performance of Electrotactile Brain–Computer Interface Using Machine Learning Methods on Multi-Channel Features of Somatosensory Event-Related Potentials
by Marija Novičić, Olivera Djordjević, Vera Miler-Jerković, Ljubica Konstantinović and Andrej M. Savić
Sensors 2024, 24(24), 8048; https://doi.org/10.3390/s24248048 - 17 Dec 2024
Viewed by 1033
Abstract
Traditional tactile brain–computer interfaces (BCIs), particularly those based on steady-state somatosensory–evoked potentials, face challenges such as lower accuracy, reduced bit rates, and the need for spatially distant stimulation points. In contrast, using transient electrical stimuli offers a promising alternative for generating tactile BCI control signals: somatosensory event-related potentials (sERPs). This study aimed to optimize the performance of a novel electrotactile BCI by employing advanced feature extraction and machine learning techniques on sERP signals for the classification of users’ selective tactile attention. The experimental protocol involved ten healthy subjects performing a tactile attention task, with EEG signals recorded from five EEG channels over the sensory–motor cortex. We employed sequential forward selection (SFS) of features from temporal sERP waveforms of all EEG channels. We systematically tested classification performance using machine learning algorithms, including logistic regression, k-nearest neighbors, support vector machines, random forests, and artificial neural networks. We explored the effects of the number of stimuli required to obtain sERP features for classification and their influence on accuracy and information transfer rate. Our approach indicated significant improvements in classification accuracy compared to previous studies. We demonstrated that the number of stimuli for sERP generation can be reduced while increasing the information transfer rate without a statistically significant decrease in classification accuracy. In the case of the support vector machine classifier, we achieved a mean accuracy over 90% for 10 electrical stimuli, while for 6 stimuli, the accuracy decreased by less than 7%, and the information transfer rate increased by 60%. This research advances methods for tactile BCI control based on event-related potentials. This work is significant since tactile stimulation is an understudied modality for BCI control, and electrically induced sERPs are the least studied control signals in reactive BCIs. Exploring and optimizing the parameters of sERP elicitation, as well as feature extraction and classification methods, is crucial for addressing the accuracy versus speed trade-off in various assistive BCI applications where the tactile modality may have added value. Full article
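
The reported trade-off between the number of stimuli, accuracy, and information transfer rate can be made concrete with the standard Wolpaw ITR formula, which is widely used for BCIs of this kind. The sketch below is purely illustrative; the trial durations plugged in are assumptions, not values taken from the study.

import math

def wolpaw_itr(accuracy, n_classes, trial_seconds):
    """Standard Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits_per_trial = math.log2(n)
    else:
        bits_per_trial = (math.log2(n) + p * math.log2(p)
                          + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_trial * 60.0 / trial_seconds

# Illustrative numbers only: fewer stimuli per decision shorten the trial,
# so a modest accuracy drop can still raise the transfer rate.
print(wolpaw_itr(0.90, 2, 10.0))  # e.g., 10 stimuli per selection
print(wolpaw_itr(0.84, 2, 6.0))   # e.g., 6 stimuli per selection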

16 pages, 2612 KiB  
Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Viewed by 1220
Abstract
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can draw participants’ attention toward peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. In addition, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Moreover, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task. Full article
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)

17 pages, 1670 KiB  
Article
Electrotactile BCI for Top-Down Somatosensory Training: Clinical Feasibility Trial of Online BCI Control in Subacute Stroke Patients
by Andrej M. Savić, Marija Novičić, Vera Miler-Jerković, Olivera Djordjević and Ljubica Konstantinović
Biosensors 2024, 14(8), 368; https://doi.org/10.3390/bios14080368 - 28 Jul 2024
Cited by 4 | Viewed by 4605
Abstract
This study investigates the feasibility of a novel brain–computer interface (BCI) device designed for sensory training following stroke. The BCI system administers electrotactile stimuli to the user’s forearm, mirroring classical sensory training interventions. Concurrently, selective attention tasks are employed to modulate electrophysiological brain responses (somatosensory event-related potentials—sERPs), reflecting cortical excitability in related sensorimotor areas. The BCI identifies attention-induced changes in the brain’s reactions to stimulation in an online manner. The study protocol assesses the feasibility of online binary classification of selective attention focus in ten subacute stroke patients. Each experimental session includes a BCI training phase for data collection and classifier training, followed by a BCI test phase to evaluate online classification of selective tactile attention based on sERP. During online classification tests, patients complete 20 repetitions of selective attention tasks with feedback on attention focus recognition. Using a single electroencephalographic channel, attention classification accuracy ranges from 70% to 100% across all patients. The significance of this novel BCI paradigm lies in its ability to quantitatively measure selective tactile attention resources throughout the therapy session, introducing a top-down approach to classical sensory training interventions based on repeated neuromuscular electrical stimulation. Full article
(This article belongs to the Section Biosensors and Healthcare)
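
To make the online test phase more concrete, the sketch below shows one plausible way a single-channel sERP attention classifier could be organized: average blocks of post-stimulus epochs and feed a windowed, decimated waveform to a linear classifier. It is a toy illustration under an assumed sampling rate, window, and averaging scheme, not the published pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 500                                          # sampling rate in Hz (assumed)
WINDOW = slice(int(0.05 * FS), int(0.50 * FS))    # 50-500 ms post-stimulus (assumed)

def serp_features(epochs, n_avg=10, step=25):
    """Average groups of n_avg single-trial epochs, keep a decimated ERP window."""
    n_groups = len(epochs) // n_avg
    grouped = epochs[:n_groups * n_avg].reshape(n_groups, n_avg, -1).mean(axis=1)
    return grouped[:, WINDOW][:, ::step]

# Training phase: toy single-channel epochs for attended / unattended stimuli.
rng = np.random.default_rng(0)
attended = rng.normal(0.0, 1.0, (200, FS))
attended[:, WINDOW] += 0.5                        # simulated attention effect
unattended = rng.normal(0.0, 1.0, (200, FS))

X = np.vstack([serp_features(attended), serp_features(unattended)])
y = np.array([1] * 20 + [0] * 20)
clf = LinearDiscriminantAnalysis().fit(X, y)

# Test phase: each new block of averaged epochs yields one feedback decision.
print(clf.predict(serp_features(attended[:10])))  # likely [1] for this toy signal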

15 pages, 2621 KiB  
Article
SlowR50-SA: A Self-Attention Enhanced Dynamic Facial Expression Recognition Model for Tactile Internet Applications
by Nikolay Neshov, Nicole Christoff, Teodora Sechkova, Krasimir Tonchev and Agata Manolova
Electronics 2024, 13(9), 1606; https://doi.org/10.3390/electronics13091606 - 23 Apr 2024
Cited by 2 | Viewed by 1400
Abstract
Emotion recognition from facial expressions is a challenging task due to the subtle and nuanced nature of facial expressions. Within the framework of Tactile Internet (TI), the integration of this technology has the capacity to completely transform real-time user interactions, by delivering customized emotional input. The influence of this technology is far-reaching, as it may be used in immersive virtual reality interactions and remote tele-care applications to identify emotional states in patients. In this paper, a novel emotion recognition algorithm is presented that integrates a Self-Attention (SA) module into the SlowR50 backbone (SlowR50-SA). The experiments on the DFEW and FERV39K datasets demonstrate that the proposed model achieves good performance in terms of both Unweighted Average Recall (UAR) and Weighted Average Recall (WAR) metrics, achieving a UAR (WAR) of 57.09% (69.87%) on the DFEW dataset, and UAR (WAR) of 39.48% (49.34%) on the FERV39K dataset. Notably, SlowR50-SA operates with only eight frames of input at low temporal resolution, highlighting its efficiency. Furthermore, the algorithm has the potential to be integrated into Tactile Internet applications, where it can be used to enhance the user experience by providing real-time emotion feedback. SlowR50-SA can also be used to enhance virtual reality experiences by providing personalized haptic feedback based on the user’s emotional state. It can also be used in remote tele-care applications to detect signs of stress, anxiety, or depression in patients. Full article
(This article belongs to the Section Electronic Multimedia)
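
The two metrics reported above map directly onto standard scikit-learn scores: WAR is overall accuracy (weighted by class frequency) and UAR is the unweighted mean of per-class recalls, i.e., balanced accuracy. A minimal check on made-up labels:

import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Toy labels for a 7-class emotion problem (as in DFEW/FERV39K); values are made up.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 3, 4, 5, 6, 5])

war = accuracy_score(y_true, y_pred)             # WAR: overall accuracy
uar = balanced_accuracy_score(y_true, y_pred)    # UAR: mean per-class recall
print(f"WAR = {war:.4f}, UAR = {uar:.4f}")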

15 pages, 992 KiB  
Article
Mindfulness Affects the Boundaries of Bodily Self-Representation: The Effect of Focused-Attention Meditation in Fading the Boundary of Peripersonal Space
by Salvatore Gaetano Chiarella, Riccardo De Pastina, Antonino Raffone and Luca Simione
Behav. Sci. 2024, 14(4), 306; https://doi.org/10.3390/bs14040306 - 9 Apr 2024
Cited by 2 | Viewed by 2928
Abstract
Peripersonal space (PPS) is a dynamic multisensory representation of the space around the body, influenced by internal and external sensory information. The malleability of PPS boundaries, as evidenced by their expansion after tool use or modulation through social interactions, positions PPS as a crucial element in understanding the subjective experiences of self and otherness. Building on the existing literature highlighting both the cognitive and bodily effects of mindfulness meditation, this study proposes a novel approach by employing focused-attention meditation (FAM) and a multisensory audio–tactile task to assess PPS in both the extension and sharpness of its boundaries. The research hypothesis posits that FAM, which emphasizes heightened attention to bodily sensations and interoception, may reduce the extension of PPS and make its boundaries less sharp. We enrolled 26 non-meditators who underwent a repeated measure design in which they completed the PPS task before and after a 15-min FAM induction. We found a significant reduction in the sharpness of PPS boundaries but no significant reduction in PPS extension. These results provide novel insights into the immediate effects of FAM on PPS, potentially shedding light on the modulation of self–other representations in both cognitive and bodily domains. Indeed, our findings could have implications for understanding the intricate relationship between mindfulness practices and the subjective experience of self within spatial contexts. Full article
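
The abstract quantifies PPS by the extension and sharpness of its boundary. A common way to obtain such measures, shown here only as an assumed analysis sketch rather than the authors' method, is to fit a sigmoid to tactile reaction times as a function of the distance of an approaching sound, taking the inflection point as the boundary location and the slope as its sharpness; the distances and reaction times below are made up.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, rt_min, rt_max, center, slope):
    """Reaction time as a sigmoidal function of sound distance."""
    return rt_min + (rt_max - rt_min) / (1.0 + np.exp(-slope * (d - center)))

distances = np.array([10, 25, 40, 55, 70, 85, 100.0])  # cm, assumed sampling points
rts = np.array([350, 355, 370, 400, 425, 430, 432.0])  # ms, made-up mean tactile RTs

params, _ = curve_fit(sigmoid, distances, rts, p0=[350.0, 430.0, 55.0, 0.1])
rt_min, rt_max, center, slope = params
print(f"PPS extension (boundary center) ~ {center:.1f} cm")
print(f"Boundary sharpness (sigmoid slope) ~ {slope:.3f} per cm")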

13 pages, 3764 KiB  
Case Report
Vibrotactile Feedback for a Person with Transradial Amputation and Visual Loss: A Case Report
by Gerfried Peternell, Harald Penasso, Henriette Luttenberger, Hildegard Ronacher, Roman Schlintner, Kara Ashcraft, Alexander Gardetto, Jennifer Ernst and Ursula Kropiunig
Medicina 2023, 59(10), 1710; https://doi.org/10.3390/medicina59101710 - 25 Sep 2023
Cited by 3 | Viewed by 2148
Abstract
Background and Objectives: After major upper-limb amputation, people face challenges due to losing tactile information and gripping function in their hands. While vision can confirm the success of an action, relying on it diverts attention from other sensations and tasks. This case report presents a 30-year-old man with traumatic, complete vision loss and transradial left forearm amputation. It emphasizes the importance of restoring tactile abilities when visual compensation is impossible. Materials and Methods: A prototype tactile feedback add-on system was developed, consisting of a sensor glove and upper arm cuff with related vibration actuators. Results: We found a 66% improvement in the Box and Blocks test and an overall functional score increase from 30% to 43% in the Southampton Hand Assessment Procedure with feedback. Qualitative improvements in bimanual activities, ergonomics, and reduced reliance on the unaffected hand were observed. Incorporating the tactile feedback system improved the precision of grasping and the utility of the myoelectric hand prosthesis, freeing the unaffected hand for other tasks. Conclusions: This case demonstrated improvements in prosthetic hand utility achieved by restoring peripheral sensitivity while excluding the possibility of visual compensation. Restoring tactile information from the hand and fingers could benefit individuals with impaired vision and somatosensation, improving acceptance, embodiment, social integration, and pain management. Full article
(This article belongs to the Special Issue Innovations in Amputation Care)

39 pages, 8579 KiB  
Review
Bioinspired Perception and Navigation of Service Robots in Indoor Environments: A Review
by Jianguo Wang, Shiwei Lin and Ang Liu
Biomimetics 2023, 8(4), 350; https://doi.org/10.3390/biomimetics8040350 - 7 Aug 2023
Cited by 11 | Viewed by 4343
Abstract
Biological principles attract attention in service robotics because robots and animals rely on similar concepts when performing their tasks. Bioinspired perception, modeled on animals’ awareness of their environment, is significant for robotic perception. This paper reviews the bioinspired perception and navigation of service robots in indoor environments, which are popular applications of civilian robotics. The navigation approaches are classified by perception type, including vision-based, remote sensing, tactile sensor, olfactory, sound-based, inertial, and multimodal navigation. The trend of state-of-the-art techniques is moving towards multimodal navigation to combine several approaches. The challenges in indoor navigation center on precise localization and on dynamic, complex environments with moving objects and people. Full article
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot)

17 pages, 2040 KiB  
Article
Behavioural Models of Risk-Taking in Human–Robot Tactile Interactions
by Qiaoqiao Ren, Yuanbo Hou, Dick Botteldooren and Tony Belpaeme
Sensors 2023, 23(10), 4786; https://doi.org/10.3390/s23104786 - 16 May 2023
Cited by 3 | Viewed by 2149
Abstract
Touch can have a strong effect on interactions between people, and as such, it is expected to be important to the interactions people have with robots. In an earlier work, we showed that the intensity of tactile interaction with a robot can change how much people are willing to take risks. This study further develops our understanding of the relationship between human risk-taking behaviour, the physiological responses by the user, and the intensity of the tactile interaction with a social robot. We used data collected with physiological sensors during the playing of a risk-taking game (the Balloon Analogue Risk Task, or BART). The results of a mixed-effects model were used as a baseline to predict risk-taking propensity from physiological measures, and these results were further improved through the use of two machine learning techniques, support vector regression (SVR) and multi-input convolutional multihead attention (MCMA), to achieve low-latency risk-taking behaviour prediction during human–robot tactile interaction. The performance of the models was evaluated using mean absolute error (MAE), root mean squared error (RMSE), and R-squared score (R2); MCMA yielded the best result, with an MAE of 3.17, an RMSE of 4.38, and an R2 of 0.93, compared with the baseline’s MAE of 10.97, RMSE of 14.73, and R2 of 0.30. The results of this study offer new insights into the interplay between physiological data and the intensity of risk-taking behaviour in predicting human risk-taking behaviour during human–robot tactile interactions. This work illustrates that physiological activation and the intensity of tactile interaction play a prominent role in risk processing during human–robot tactile interaction and demonstrates that it is feasible to use human physiological data and behavioural data to predict risk-taking behaviour in human–robot tactile interaction. Full article
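
The evaluation metrics quoted above (MAE, RMSE, R2) can be reproduced with standard scikit-learn utilities. The sketch below uses synthetic features and an SVR baseline purely for illustration; it does not use the study's physiological data or the MCMA model.

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Synthetic stand-ins: 8 "physiological" features and a continuous risk score.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=200)

X_train, X_test, y_train, y_test = X[:160], X[160:], y[:160], y[160:]
pred = SVR(kernel="rbf", C=10.0).fit(X_train, y_train).predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
r2 = r2_score(y_test, pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.2f}")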

16 pages, 1467 KiB  
Article
Somatosensory Event-Related Potential as an Electrophysiological Correlate of Endogenous Spatial Tactile Attention: Prospects for Electrotactile Brain-Computer Interface for Sensory Training
by Marija Novičić and Andrej M. Savić
Brain Sci. 2023, 13(5), 766; https://doi.org/10.3390/brainsci13050766 - 5 May 2023
Cited by 12 | Viewed by 3104
Abstract
Tactile attention tasks are used in the diagnosis and treatment of neurological and sensory processing disorders, while somatosensory event-related potentials (ERP) measured by electroencephalography (EEG) are used as neural correlates of attention processes. Brain-computer interface (BCI) technology provides an opportunity for the training of mental task execution by providing online feedback based on ERP measures. Our recent work introduced a novel electrotactile BCI for sensory training, based on somatosensory ERP; however, no previous studies have addressed specific somatosensory ERP morphological features as measures of sustained endogenous spatial tactile attention in the context of BCI control. Here we show the morphology of somatosensory ERP responses induced by a novel task introduced within our electrotactile BCI platform, i.e., the sustained endogenous spatial electrotactile attention task. By applying pulsed electrical stimuli to the two proximal stimulation hotspots at the user’s forearm, stimulating sequentially the mixed branches of radial and median nerves with equal probability of stimuli occurrence, we successfully recorded somatosensory ERPs for both stimulation locations, in the attended and unattended conditions. Waveforms of somatosensory ERP responses for both mixed nerve branches showed similar morphology in line with previous reports on somatosensory ERP components obtained by stimulation of exclusively sensory nerves. Moreover, we found statistically significant increases in ERP amplitude on several components, at both stimulation hotspots, while the sustained endogenous spatial electrotactile attention task is performed. Our results revealed the existence of general ERP windows of interest and signal features that can be used to detect sustained endogenous tactile attention and classify between spatial attention locations in 11 healthy subjects. The current results show that features of N140, P3a and P3b somatosensory ERP components are the most prominent global markers of sustained spatial electrotactile attention, over all subjects, within our novel electrotactile BCI task/paradigm, and this work proposes the features of those components as markers of sustained endogenous spatial tactile attention in online BCI control. Immediate implications of this work are the possible improvement of online BCI control within our novel electrotactile BCI system, and these findings can be used for other tactile BCI applications in the diagnosis and treatment of neurological disorders by employing mixed nerve somatosensory ERPs and the sustained endogenous electrotactile attention task as control paradigms. Full article
(This article belongs to the Special Issue Emerging Topics in Brain-Computer Interface)
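
As an illustration of how "windows of interest" over N140, P3a, and P3b components might translate into classifier features, the sketch below averages single-channel ERP amplitudes within assumed latency windows; the window limits and sampling rate are not taken from the paper.

import numpy as np

FS = 500  # sampling rate in Hz (assumed)
WINDOWS_MS = {"N140": (120, 180), "P3a": (250, 350), "P3b": (350, 500)}  # assumed limits

def erp_window_means(erp, fs=FS, windows=WINDOWS_MS):
    """Mean amplitude of an averaged single-channel ERP in each window of interest."""
    feats = {}
    for name, (start_ms, stop_ms) in windows.items():
        start, stop = int(start_ms * fs / 1000), int(stop_ms * fs / 1000)
        feats[name] = float(erp[start:stop].mean())
    return feats

# Toy averaged ERP over a 1 s epoch; attended trials would be expected to show
# larger window means than unattended ones, which is what a classifier exploits.
erp = np.random.default_rng(1).normal(0.0, 0.5, FS)
print(erp_window_means(erp))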

14 pages, 576 KiB  
Article
Equivalent Behavioral Facilitation to Tactile Cues in Children with Autism Spectrum Disorder
by Girija Kadlaskar, Sophia Bergmann, Rebecca McNally Keehn, Amanda Seidl and Brandon Keehn
Brain Sci. 2021, 11(5), 625; https://doi.org/10.3390/brainsci11050625 - 13 May 2021
Cited by 2 | Viewed by 3153
Abstract
The alerting network, a subcomponent of attention, enables humans to respond to novel information. Children with ASD have shown equivalent alerting in response to visual and/or auditory stimuli compared to typically developing (TD) children. However, it is unclear whether children with ASD and TD show equivalent alerting to tactile stimuli. We examined (1) whether tactile cues affect accuracy and reaction times in children with ASD and TD, (2) whether the duration between touch-cues and auditory targets impacts performance, and (3) whether behavioral responses in the tactile cueing task are associated with ASD symptomatology. Six- to 12-year-olds with ASD and TD participated in a tactile-cueing task and were instructed to respond with a button press to a target sound /a/. Tactile cues were presented at 200, 400, and 800 ms (25% each) prior to the auditory target. The remaining trials (25%) were presented without tactile cues. Findings suggested that both groups showed equivalent alerting responses to tactile cues. Additionally, all children were faster to respond to auditory targets at longer cue–target intervals. Finally, there was an association between rate of facilitation and RRB scores in all children, suggesting that patterns of responding to transient phasic cues may be related to ASD symptomatology. Full article
(This article belongs to the Special Issue Understanding Autism Spectrum Disorder)

15 pages, 1476 KiB  
Article
The Effect of Tactile Training on Sustained Attention in Young Adults
by Yu Luo and Jicong Zhang
Brain Sci. 2020, 10(10), 695; https://doi.org/10.3390/brainsci10100695 - 30 Sep 2020
Cited by 8 | Viewed by 4907
Abstract
Sustained attention is crucial for higher-order cognition and real-world activities. The idea that tactile training improves sustained attention is appealing and has clinical significance. The aim of this study was to explore whether tactile training could improve visual sustained attention. Using 128-channel electroencephalography (EEG), we found that participants with tactile training outperformed non-trainees in the accuracy and calculation efficiency measured by the Math task. Furthermore, trainees demonstrated significantly decreased omission error measured by the sustained attention to response task (SART). We also found that the improvements in behavioral performance were associated with parietal P300 amplitude enhancements. EEG source imaging analyses revealed stronger brain activation among the trainees in the prefrontal and sensorimotor regions at P300. These results suggest that the tactile training can improve sustained attention in young adults, and the improved sustained attention following training may be due to more effective attentional resources allocation. Our findings also indicate the use of a noninvasive tactile training paradigm to improve cognitive functions (e.g., sustained attention) in young adults, potentially leading to new training and rehabilitative protocols. Full article
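
For readers unfamiliar with the SART, omission errors (missed responses on frequent go trials) and commission errors (failures to withhold on rare no-go trials) are simple trial-level rates. The sketch below shows the computation on made-up data and is not tied to the study's scoring procedure.

import numpy as np

def sart_errors(is_nogo, responded):
    """Omission and commission error rates from trial-level SART data."""
    is_nogo = np.asarray(is_nogo, dtype=bool)
    responded = np.asarray(responded, dtype=bool)
    go = ~is_nogo
    omission = np.mean(~responded[go])         # missed responses on go trials
    commission = np.mean(responded[is_nogo])   # failures to withhold on no-go trials
    return omission, commission

is_nogo =   [0, 0, 1, 0, 0, 0, 1, 0, 0, 1]     # made-up trial coding
responded = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0]
om, com = sart_errors(is_nogo, responded)
print(f"omission rate = {om:.2f}, commission rate = {com:.2f}")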

9 pages, 1303 KiB  
Communication
Does Periodontal Tactile Input Uniquely Increase Cerebral Blood Flow in the Prefrontal Cortex?
by Takaharu Goto, Nobuaki Higaki, Takahiro Kishimoto, Yoritoki Tomotake and Tetsuo Ichikawa
Brain Sci. 2020, 10(8), 482; https://doi.org/10.3390/brainsci10080482 - 26 Jul 2020
Cited by 1 | Viewed by 2625
Abstract
We previously studied the effect of peripheral sensory information from sensory periodontal ligament receptors on prefrontal cortex (PFC) activity. In the dental field, an alternative dental implant without periodontal sensation can be applied for missing teeth. In this study, we examine whether periodontal tactile input could increase cerebral blood flow (CBF) in the PFC in three groups: elderly patients with dental implants lacking periodontal tactile sensation (implant group), elderly individuals with natural teeth (elderly group), and young individuals with natural teeth (young group). The experimental task of maintaining occlusal force as closed-loop stimulation was performed. Compared with the young group, the elderly group showed significantly lower CBF. Similarly, compared with the young group, the implant group showed significantly lower CBF. There were no significant differences between the elderly and implant groups. Regarding mean occlusal force, the implant group showed a numerically, but not significantly, larger force exceeding the directed range than the young and elderly groups. In conclusion, periodontal tactile input does not uniquely increase PFC activity. However, increased CBF in the PFC due to periodontal tactile input in the posterior region requires intact attention-related function in the PFC. Full article
(This article belongs to the Special Issue Prefrontal Cortex and Cognitive-Emotional Functions)

7 pages, 538 KiB  
Communication
Could the Visual Differential Attention Be a Referential Gesture? A Study on Horses (Equus caballus) on the Impossible Task Paradigm
by Alessandra Alterisio, Paolo Baragli, Massimo Aria, Biagio D’Aniello and Anna Scandurra
Animals 2018, 8(7), 120; https://doi.org/10.3390/ani8070120 - 17 Jul 2018
Cited by 17 | Viewed by 5658
Abstract
In order to explore the decision-making processes of horses, we designed an impossible task paradigm aimed at causing an expectancy violation in horses. Our goals were to verify whether this paradigm is effective in horses by analyzing their motivation in trying to solve the task and the mode of the potential helping request in such a context. In the first experiment, 30 horses were subjected to three consecutive conditions: no food condition where two persons were positioned at either side of a table in front of the stall, solvable condition when a researcher placed a reachable reward on the table, and the impossible condition when the food was placed farther away and was unreachable by the horse. Eighteen horses were used in the second experiment with similar solvable and impossible conditions but in the absence of people. We measured the direction of the horse’s ear cup as an indicator of its visual attention in terms of visual selective attention (VSA) when both ears were directed at the same target and the visual differential attention (VDA) when the ears were directed differentially to the persons and to the table. We also included tactile interaction toward table and people, the olfactory exploration of the table, and the frustration behaviors in the ethogram. In the first experiment, the VDA was the most frequent behavior following the expectancy violation. In the second experiment, horses showed the VDA behavior mostly when people and the unreachable resource were present at the same time. We speculate that the VDA could be a referential gesture aimed to link the solution of the task to the people, as a request for help. Full article
(This article belongs to the Section Companion Animals)
