Search Results (447)

Search Parameters:
Keywords = sensory input

26 pages, 2009 KB  
Article
Tool Wear Prediction Using Machine-Learning Models for Bone Drilling in Robotic Surgery
by Shilpa Pusuluri, Hemanth Satya Veer Damineni and Poolan Vivekananda Shanmuganathan
Automation 2025, 6(4), 59; https://doi.org/10.3390/automation6040059 - 16 Oct 2025
Viewed by 283
Abstract
Bone drilling is a widely encountered process in orthopedic surgeries and keyhole neurosurgeries. We are developing a sensor-integrated smart end-effector for drilling in robotic surgical applications. In manual surgeries, surgeons assess tool wear based on experience and force perception. In this work, we propose a machine-learning (ML)-based tool condition monitoring system that uses multi-sensor data to preempt excessive tool wear during drilling in robotic surgery. Real-time data are acquired from the six-component force sensor of a collaborative arm, along with data from the temperature and multi-axis vibration sensors mounted on the bone specimen being drilled. Raw sensor data may contain noise and outliers; signal processing in the time and frequency domains is used for denoising as well as to derive additional features from the raw sensory data. While dozens of candidate features and innumerable machine learning and deep learning models are available, this paper addresses the challenging problem of selecting the most relevant features, the most suitable AI models, and the optimal hyperparameters so as to provide accurate predictions of the tool condition. A unique framework is proposed for classifying tool wear that combines machine-learning-based modeling with multi-sensor data. From the raw sensory data, which contain only a handful of features, additional features are derived using frequency-domain techniques and statistical measures; feature engineering yielded a total of 60 features from time-domain, frequency-domain, and interaction-based metrics. Such additional features improve the model's predictive capability but make training and prediction complicated and time-consuming.
Using a sequence of techniques, namely variance thresholding, correlation filtering, an ANOVA F-test, and SHAP analysis, the number of features was reduced from 60 to the 4 most effective for real-time tool condition prediction. In contrast to previous studies that examine only a small number of machine learning models, our approach systematically evaluates a wide range of machine learning and deep learning architectures: the performances of 47 classical ML models and 6 deep learning (DL) architectures were analyzed using the four selected features. The Extra Trees Classifier (an ML model) and the one-dimensional Convolutional Neural Network (1D CNN) exhibited the best prediction accuracy among the models studied. Using real-time data, these models monitored the drilling tool condition to classify tool wear into three categories: slight, moderate, and severe. Full article
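The staged feature reduction described in this abstract can be sketched in a few lines. Everything here is a generic illustration, not the paper's implementation: the thresholds, the toy data, and the omission of the final SHAP-ranking stage are assumptions.

```python
import numpy as np

def select_features(X, y, var_thresh=1e-3, corr_thresh=0.95, k=4):
    """Three-stage reduction: variance thresholding, correlation filtering,
    then a one-way ANOVA F-test keeping the top-k features.
    (The paper's final SHAP-analysis stage is omitted in this sketch.)"""
    # Stage 1: drop near-constant features.
    keep = np.where(X.var(axis=0) > var_thresh)[0]
    X = X[:, keep]
    # Stage 2: for each highly correlated pair, drop the second member.
    corr = np.corrcoef(X, rowvar=False)
    drop = set()
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[0]):
            if j not in drop and abs(corr[i, j]) > corr_thresh:
                drop.add(j)
    X = X[:, [i for i in range(X.shape[1]) if i not in drop]]
    # Stage 3: one-way ANOVA F-score of each feature against the class labels.
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ss_between = sum(len(X[y == c]) * (X[y == c].mean(axis=0) - grand) ** 2
                     for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    f = (ss_between / (len(classes) - 1)) / (ss_within / (len(X) - len(classes)))
    return X[:, np.argsort(f)[::-1][:k]]
```

With 60 synthetic samples in three wear classes, a zero-variance column and a duplicated column are filtered out before the F-test picks the final four features.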

28 pages, 2172 KB  
Article
Bioinspired Stimulus Selection Under Multisensory Overload in Social Robots Using Reinforcement Learning
by Jesús García-Martínez, Marcos Maroto-Gómez, Arecia Segura-Bencomo, Álvaro Castro-González and José Carlos Castillo
Sensors 2025, 25(19), 6152; https://doi.org/10.3390/s25196152 - 4 Oct 2025
Viewed by 392
Abstract
Autonomous social robots aim to reduce human supervision by performing various tasks. To achieve this, they are equipped with multiple perceptual channels to interpret and respond to environmental cues in real time. However, multimodal perception often leads to sensory overload, as robots may receive numerous simultaneous stimuli with varying durations or persistent activations across different sensory modalities. Sensor overstimulation and false positives can compromise a robot’s ability to prioritise relevant inputs, sometimes resulting in repeated or inaccurate behavioural responses that reduce the quality and coherence of the interaction. This paper presents a Bioinspired Attentional System that uses Reinforcement Learning to manage stimulus prioritisation in real time. The system draws inspiration from the following two neurocognitive mechanisms: Inhibition of Return, which progressively reduces the importance of previously attended stimuli that remain active over time, and Attentional Fatigue, which penalises stimuli of the same perception modality when they appear repeatedly or simultaneously. These mechanisms define the algorithm’s reward function to dynamically adjust the weights assigned to each stimulus, enabling the system to select the most relevant one at each moment. The system has been integrated into a social robot and tested in three representative case studies that show how it modulates sensory signals, reduces the impact of redundant inputs, and improves stimulus selection in overstimulating scenarios. Additionally, we compare the proposed method with a baseline where the robot executes expressions as soon as it receives them using a queue. The results show the system’s significant improvement in expression management, reducing the number of expressions in the queue and the delay in performing them. Full article
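A minimal sketch of how the two neurocognitive mechanisms named above could shape per-stimulus weights. The function names, decay constants, and data structures are hypothetical illustrations, not taken from the paper, whose reward function is learned via reinforcement learning rather than hand-coded.

```python
def update_weights(weights, attended, active, modality,
                   ior_decay=0.8, fatigue_penalty=0.2):
    """Return new stimulus weights after one attention step.

    weights: dict stimulus -> priority weight
    attended: the stimulus just selected
    active: set of stimuli still active this step
    modality: dict stimulus -> perception modality name
    """
    new = dict(weights)
    # Inhibition of Return: a stimulus that was just attended and remains
    # active loses priority, so persistent signals do not monopolise attention.
    if attended in active:
        new[attended] *= ior_decay
    # Attentional Fatigue: other active stimuli sharing the attended
    # stimulus's modality are penalised when they keep arriving.
    for s in active:
        if s != attended and modality[s] == modality[attended]:
            new[s] = max(0.0, new[s] - fatigue_penalty)
    return new

def select(weights, active):
    """Pick the currently most relevant stimulus."""
    return max(active, key=lambda s: weights[s])
```

Repeated selection of the same stimulus decays its weight, so a persistently active signal eventually yields to a fresher one.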

18 pages, 3941 KB  
Article
Cerebellar Contributions to Spatial Learning and Memory: Effects of Discrete Immunotoxic Lesions
by Martina Harley Leanza, Elisa Storelli, David D’Arco, Gioacchino de Leo, Giulio Kleiner, Luciano Arancio, Giuseppe Capodieci, Rosario Gulino, Antonio Bava and Giampiero Leanza
Int. J. Mol. Sci. 2025, 26(19), 9553; https://doi.org/10.3390/ijms26199553 - 30 Sep 2025
Viewed by 392
Abstract
Evidence of possible cerebellar involvement in spatial processing, place learning and other types of higher order functions comes mainly from clinical observations, as well as from mutant mice and lesion studies. The latter, in particular, have reported deficits in spatial learning and memory following surgical or neurotoxic cerebellar ablation. However, the low specificity of such manipulations has often made it difficult to precisely dissect the cognitive components of the observed behaviors. Likewise, due to conflicting data coming from lesion studies, it has not been possible so far to conclusively address whether a cerebellar dysfunction is sufficient per se to induce learning deficits, or whether concurrent damage to other regulatory structure(s) is necessary to significantly interfere with cognitive processing. In the present study, the immunotoxin 192 IgG-saporin, selectively targeting cholinergic neurons in the basal forebrain and a subpopulation of cerebellar Purkinje cells, was administered to adult rats bilaterally into the basal forebrain nuclei, the cerebellar cortices or both areas combined. Additional animals underwent injections of the toxin into the lateral ventricles. Starting from two–three weeks post-lesion, the animals were tested on paradigms of motor ability as well as spatial learning and memory and then sacrificed for post-mortem morphological analyses. All lesioned rats showed no signs of ataxia and no motor deficits that could impair their performance in the water maze task. The rats with discrete cerebellar lesions exhibited fairly normal performance and did not differ from controls in any aspect of the task. By contrast, animals with double lesions, as well as those with 192 IgG-saporin given intraventricularly did manifest severe impairments in both reference and working memory. 
Histo- and immunohistochemical analyses confirmed the effects of the toxin conjugate on target neurons, with fairly similar patterns of Purkinje cell loss in the animals with cerebellar lesions only, basal forebrain-cerebellar double lesions, and bilateral intraventricular injections of the toxin. By contrast, no such loss was seen in the basal forebrain-lesioned animals, whose Purkinje cells were largely spared and exhibited a normal distribution pattern. The results suggest important functional interactions between the ascending regulatory inputs from the cerebellum and those arising in the basal forebrain nuclei, which would act together to modulate the complex sensory-motor and cognitive processes required to control whole-body movement in space. Full article
(This article belongs to the Section Molecular Neurobiology)

14 pages, 590 KB  
Article
Predicting Temporal Liking of Food Pairings from Temporal Dominance of Sensations Data via Reservoir Computing on Crackers and Spreads
by Hiroharu Natsume and Shogo Okamoto
Foods 2025, 14(19), 3373; https://doi.org/10.3390/foods14193373 - 29 Sep 2025
Viewed by 352
Abstract
The temporal dominance of sensations (TDS) and temporal liking (TL) methods offer complementary insights into the evolution of sensory and hedonic responses during food consumption. This study investigates the feasibility of predicting TL curves for food pairings from their TDS profiles using reservoir computing, a type of recurrent neural network. Participants evaluated eight samples—two crackers (plain, sesame), two spreads (peanut butter, strawberry jam), and their four binary combinations—performing both TDS and TL evaluations. This process yielded paired time-series data of TDS and TL curves. We trained various reservoir models under different conditions, including varying reservoir sizes (64, 128, 192, or 256 neurons) and the inclusion of auxiliary input dimensions, such as flags indicating the types of foods tasted. Our results show that models with minimal auxiliary inputs achieved the lowest root mean squared errors (RMSEs), with the best performance being an RMSE of 0.44 points on a 9-point liking scale between the observed and predicted TL curves. The ability to predict TL curves for food pairings holds some promise for reducing the need for extensive sensory evaluation, especially when a large number of food combinations are targeted. Full article
(This article belongs to the Section Food Systems)
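The reservoir-computing approach described above can be illustrated with a tiny echo state network: a fixed random recurrent layer whose states are read out by ridge regression. The initialisation scheme, spectral radius, and ridge strength below are generic textbook choices, not the authors' configuration; the paper's reservoirs range from 64 to 256 neurons.

```python
import numpy as np

class Reservoir:
    """Minimal echo state network sketch: fixed random reservoir,
    linear ridge-regression readout trained on paired time series
    (e.g. TDS dominance rates in, temporal-liking curve out)."""
    def __init__(self, n_in, n_res=64, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals spectral_radius.
        self.W = W * (spectral_radius / max(abs(np.linalg.eigvals(W))))
        self.W_out = None

    def _states(self, U):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in U:                      # U: (T, n_in) input time series
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x)
        return np.array(states)

    def fit(self, U, y, ridge=1e-6):
        S = self._states(U)
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                                     S.T @ y)

    def predict(self, U):
        return self._states(U) @ self.W_out
```

Only the readout weights are trained, which keeps fitting cheap even when many food-pairing curves must be modelled.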

12 pages, 4847 KB  
Article
Surformer v1: Transformer-Based Surface Classification Using Tactile and Vision Features
by Manish Kansana, Elias Hossain, Shahram Rahimi and Noorbakhsh Amiri Golilarz
Information 2025, 16(10), 839; https://doi.org/10.3390/info16100839 - 27 Sep 2025
Viewed by 361
Abstract
Surface material recognition is a key component in robotic perception and physical interaction, particularly when leveraging both tactile and visual sensory inputs. In this work, we propose Surformer v1, a transformer-based architecture designed for surface classification using structured tactile features and Principal Component Analysis (PCA)-reduced visual embeddings extracted via ResNet-50. The model integrates modality-specific encoders with cross-modal attention layers, enabling rich interactions between vision and touch. State-of-the-art deep learning models for vision tasks currently achieve remarkable performance. With this in mind, our first set of experiments focused exclusively on tactile-only surface classification. Using feature engineering, we trained and evaluated multiple machine learning models, assessing their accuracy and inference time. We then implemented an encoder-only Transformer model tailored to tactile features. This model not only achieved the highest accuracy but also demonstrated significantly faster inference than the other evaluated models, highlighting its potential for real-time applications. To extend this investigation, we introduced a multimodal fusion setup combining vision and tactile inputs, training both Surformer v1 (using structured features) and a Multimodal CNN (using raw images) to examine the impact of feature-based versus image-based multimodal learning on classification accuracy and computational efficiency. The results showed that Surformer v1 achieved 99.4% accuracy with an inference time of 0.7271 ms, while the Multimodal CNN achieved slightly higher accuracy but required significantly more inference time. These findings suggest that Surformer v1 offers a compelling balance between accuracy, efficiency, and computational cost for surface material recognition. The results also underscore the effectiveness of integrating feature learning, cross-modal attention and transformer-based fusion in capturing the complementary strengths of tactile and visual modalities. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
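The cross-modal attention this abstract credits can be illustrated with single-head scaled dot-product attention in which tactile tokens query visual embeddings. The random projection matrices stand in for learned weights, and the dimensions are arbitrary; none of this reproduces the actual Surformer v1 implementation.

```python
import numpy as np

def cross_modal_attention(tactile, visual, d_k=16, seed=0):
    """Sketch of one cross-attention head: tactile tokens (queries) attend
    over visual tokens (keys/values), e.g. PCA-reduced ResNet embeddings.
    Random matrices play the role of learned projections."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(tactile.shape[-1], d_k))
    Wk = rng.normal(size=(visual.shape[-1], d_k))
    Wv = rng.normal(size=(visual.shape[-1], d_k))
    Q, K, V = tactile @ Wq, visual @ Wk, visual @ Wv
    scores = Q @ K.T / np.sqrt(d_k)            # (n_tactile, n_visual)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
    return attn @ V                            # tactile tokens enriched with visual context
```

Each tactile token becomes a weighted mixture of visual values, which is how the two modalities interact before fusion.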

19 pages, 5781 KB  
Article
Transcriptome Analysis and Identification of Chemosensory Genes in the Galleria mellonella Larvae
by Jiaoxin Xie, Huiman Zhang, Chenyang Li, Lele Sun, Peng Wang and Yuan Guo
Insects 2025, 16(10), 1004; https://doi.org/10.3390/insects16101004 - 27 Sep 2025
Viewed by 440
Abstract
The greater wax moth Galleria mellonella (Lepidoptera: Galleriinae) represents a ubiquitous apicultural pest that poses significant threats to global beekeeping industries. The larvae damage honeybee colonies by consuming wax combs and tunneling through brood frames, consequently destroying critical hive infrastructure including brood-rearing areas, honey storage cells, and pollen reserves. Larval feeding behavior is critically dependent on chemosensory input for host recognition and food selection. In this study, we conducted a transcriptome analysis of larval heads and bodies in G. mellonella. We identified a total of 25 chemosensory genes: 9 odorant binding proteins (OBPs), 1 chemosensory protein (CSP), 5 odorant receptors (ORs), 4 gustatory receptors (GRs), 4 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs). Transcripts-per-million (TPM) normalization was employed to assess differential expression patterns of chemosensory genes between heads and bodies. Nine putative chemosensory genes were detected as differentially expressed, suggesting their potential functional roles. Subsequently, we quantified expression dynamics via reverse transcription quantitative PCR in major chemosensory tissues (larval heads, adult male and female antennae), revealing adult antennal-biased expression for most chemosensory genes in G. mellonella. Notably, two novel candidates (GmelOBP22 and GmelSNMP3) exhibited particularly high expression in larval heads, suggesting crucial functional roles in larval development and survival. These findings enhance our understanding of the chemosensory mechanisms in G. mellonella larvae and establish a critical foundation for future functional investigations into its olfactory mechanisms. Full article
(This article belongs to the Special Issue Insect Transcriptomics)
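TPM normalization, used above to compare expression between heads and bodies, has a standard two-step definition that can be sketched directly; the toy counts in the usage example are illustrative, not data from the study.

```python
import numpy as np

def tpm(counts, lengths_kb):
    """Transcripts Per Million: divide raw read counts by transcript
    length in kilobases to get per-kilobase rates, then scale each
    sample (column) so its rates sum to one million.

    counts: (genes, samples) raw read counts
    lengths_kb: (genes,) transcript lengths in kb
    """
    rate = counts / lengths_kb[:, None]    # length-normalised rates
    return rate / rate.sum(axis=0) * 1e6   # per-sample scaling to 1e6
```

Because every column sums to one million, TPM values are directly comparable across tissues even when sequencing depth differs.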

22 pages, 3823 KB  
Article
Beyond Sight: The Influence of Opaque Glasses on Wine Sensory Perception
by George Ștefan Coman, Camelia Elena Luchian, Elena Cristina Scutarașu and Valeriu V. Cotea
Foods 2025, 14(18), 3231; https://doi.org/10.3390/foods14183231 - 17 Sep 2025
Viewed by 519
Abstract
International standards for wines with Protected Designation of Origin (PDO) require characterisation through both analytical and sensory criteria, although sensory evaluation remains inherently subjective, especially regarding organoleptic properties. This study examined paired Blanc de noir and red wines made from identical grape varieties to determine whether varietal traits remain perceptible regardless of the vinification method while also assessing the role of visual stimuli in influencing olfactory and gustatory perception. Controlled tastings were conducted using both transparent and opaque glassware, with experienced panellists recording sensory descriptors. Physicochemical parameters were measured using a Lyza 5000 analyser to confirm compliance with quality standards, while statistical analyses of sensory data were conducted using XLSTAT Basic (student edition) software. Results showed that the absence of visual cues did not mislead tasters in recognising core attributes; however, the winemaking method significantly affected descriptors linked to maceration, including flavour intensity, astringency, and red/dark fruit notes. Panellists distinguished between white and red wines at statistically significant levels, even without visual input, suggesting that vinification-related chemical composition primarily guided their perception. Direct correlations were observed between red winemaking descriptors and parameters such as pH, lactic acid, glycerol, and volatile acidity, while indirect correlations were found with malic acid and titratable acidity. The results highlight how winemaking methods, chemical composition, and sensory perception interact in defining varietal characteristics. Full article
(This article belongs to the Special Issue The Role of Taste, Smell or Color on Food Intake and Food Choice)

31 pages, 1863 KB  
Article
Human Activity Recognition with Noise-Injected Time-Distributed AlexNet
by Sanjay Dutta, Tossapon Boongoen and Reyer Zwiggelaar
Biomimetics 2025, 10(9), 613; https://doi.org/10.3390/biomimetics10090613 - 11 Sep 2025
Cited by 1 | Viewed by 586
Abstract
This study investigates the integration of biologically inspired noise injection with a time-distributed adaptation of the AlexNet architecture to enhance the performance and robustness of human activity recognition (HAR) systems. HAR is a critical field in computer vision that involves identifying and interpreting human actions from video sequences, with applications in healthcare, security and smart environments. The proposed model is based on an adaptation of AlexNet, which was originally developed for static image classification and is not inherently suited to modelling temporal sequences for video action classification. While our time-distributed AlexNet efficiently captures spatial and temporal features and is suitable for video classification, its performance can be limited by overfitting and poor generalisation to unseen scenarios. To address these challenges, Gaussian noise was introduced at the input level during training, inspired by neural mechanisms observed in biological sensory processing for handling variability and uncertainty. Experiments were conducted on the EduNet, UCF50 and UCF101 datasets; the EduNet dataset was specifically designed for educational environments. We evaluate the impact of noise injection on model accuracy, stability and overall performance. The proposed bio-inspired noise-injected time-distributed AlexNet achieved an overall accuracy of 91.40% and an F1 score of 92.77%, outperforming other state-of-the-art models. Hyperparameter tuning, particularly optimising the learning rate, further enhanced model stability, reflected in lower standard deviation values across multiple experimental runs. These findings demonstrate that the strategic combination of noise injection with time-distributed architectures improves generalisation and robustness in HAR, paving the way for resource-efficient and real-world-deployable deep learning systems. Full article
(This article belongs to the Section Bioinspired Sensorics, Information Processing and Control)
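Input-level Gaussian noise injection, the augmentation at the core of this study, reduces to a few lines. The sigma value and the [0, 1] clipping range below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def inject_noise(frames, sigma=0.05, rng=None):
    """Training-time augmentation sketch: add zero-mean Gaussian noise
    to a clip of video frames with pixel values in [0, 1], mimicking the
    variability that biological sensory systems tolerate.

    frames: array of shape (time, height, width, channels)
    sigma: noise standard deviation (illustrative value)
    """
    rng = rng or np.random.default_rng()
    noisy = frames + rng.normal(0.0, sigma, frames.shape)
    return np.clip(noisy, 0.0, 1.0)   # keep pixels in valid range
```

Applied only during training, the perturbation discourages the network from memorising exact pixel patterns, which is the regularising effect the study measures.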

19 pages, 549 KB  
Article
The Attachment Process of the Mothers of Children with Autism Spectrum Disorders in the Pre-School Years: A Mixed Methods Study
by Miran Jung and Kuem Sun Han
Children 2025, 12(9), 1169; https://doi.org/10.3390/children12091169 - 2 Sep 2025
Viewed by 753
Abstract
Background/Objectives: Autism spectrum disorder (ASD) is characterized by qualitative difficulties in interaction and communication, as well as hyper- or hypo-responsivity to sensory input, which can substantially challenge the formation of mother-child attachment. This study aimed to identify attachment levels among mothers of preschool-aged children with ASD and to delineate the attachment processes associated with those levels, with the goal of developing a grounded theory explaining these processes. Methods: A two-step study using methodological triangulation was conducted. In the first, quantitative step, the attachment levels of 64 mothers of children with ASD under the age of 7 years in Korea were measured, and 12 of these mothers were then selected for a second study using the grounded theory method of Strauss & Corbin. Results: A significant difference in attachment (t = 4.39, p < 0.001) was found according to pregnancy plan. The core attachment category in mothers of pre-school children with ASD was identified as "keep on going with closing the distance". Eight stages and four types were found in their attachment process. Conclusions: These results suggest that it is necessary to develop personalized intervention strategies and to provide proper nursing by considering the attachment process and type of mothers of children with ASD. Full article
(This article belongs to the Section Pediatric Neurology & Neurodevelopmental Disorders)

10 pages, 2087 KB  
Case Report
Enhancing Quality of Life After Partial Brachial Plexus Injury Combining Targeted Sensory Reinnervation and AI-Controlled User-Centered Prosthesis: A Case Study
by Alexander Gardetto, Diane J. Atkins, Giulia Cannoletta, Giovanni Antonio Zappatore and Angelo Carrabba
Prosthesis 2025, 7(5), 111; https://doi.org/10.3390/prosthesis7050111 - 1 Sep 2025
Viewed by 2379
Abstract
Background/Objectives: Upper limb amputation presents considerable physical and psychological challenges, especially in young, active individuals. This case study outlines the rehabilitation journey of a 33-year-old patient, an Italian national Paralympic snowboard cross athlete, who underwent elective transradial amputation followed by advanced surgical and prosthetic interventions. The objective was to assess the combined impact of upper limb Targeted Sensory Reinnervation (ulTSR) and the Adam's Hand prosthetic system on functional recovery and user satisfaction. Methods: After a partial brachial plexus injury caused complete paralysis of his right hand, the patient opted for transradial amputation. He subsequently underwent ulTSR, performed by plastic surgeon Alexander Gardetto, MD, which involved rerouting sensory nerves to defined regions of the residual limb in order to reestablish a phantom limb map. This reinnervation was designed to facilitate improved prosthetic integration. The Adam's Hand, a myoelectric prosthesis with AI-based pattern recognition, was selected for its compatibility with TSR and intuitive control. Outcomes were evaluated using the OPUS questionnaire, the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire, and patient feedback. Results: ulTSR successfully restored meaningful sensory input, allowing intuitive and precise control of the prosthesis with minimal cognitive and muscular effort. The patient regained the ability to perform numerous activities of daily living, such as dressing, eating, lifting, and fine motor tasks, which had been impossible for over 15 years. OPUS results demonstrated significant improvements in both function and satisfaction. Conclusions: This case highlights the synergistic benefits of combining ulTSR with user-centered prosthetic technology. Surgical neurorehabilitation, paired with advanced prosthetic design, led to marked improvements in autonomy, performance, and quality of life in a high-performance amputee athlete. Full article

19 pages, 2318 KB  
Article
Modulating Multisensory Processing: Interactions Between Semantic Congruence and Temporal Synchrony
by Susan Geffen, Taylor Beck and Christopher W. Robinson
Vision 2025, 9(3), 74; https://doi.org/10.3390/vision9030074 - 1 Sep 2025
Viewed by 894
Abstract
Presenting information to multiple sensory modalities often facilitates or interferes with processing, yet the mechanisms remain unclear. Using a Stroop-like task, the two reported experiments examined how semantic congruency and incongruency in one sensory modality affect processing and responding in a different modality. Participants were presented with pictures and sounds simultaneously (Experiment 1) or asynchronously (Experiment 2) and had to respond whether the visual or auditory stimulus was an animal or vehicle, while ignoring the other modality. Semantic congruency and incongruency in the unattended modality both affected responses in the attended modality, with visual stimuli having larger effects on auditory processing than the reverse (Experiment 1). Effects of visual input on auditory processing decreased at longer stimulus onset asynchronies (SOAs), while effects of auditory input on visual processing increased with SOA and were correlated with relative processing speed (Experiment 2). These results suggest that congruence and modality both impact multisensory processing. Full article

16 pages, 1311 KB  
Article
Four Trials Is Not Enough: The Amount of Prior Audio–Visual Exposure Determines the Strength of Audio–Tactile Crossmodal Correspondence Early in Development
by Shibo Cao, Rong Tan and Vivian M. Ciaramitaro
Behav. Sci. 2025, 15(9), 1184; https://doi.org/10.3390/bs15091184 - 30 Aug 2025
Viewed by 679
Abstract
Successfully navigating the world involves integrating sensory inputs and selecting appropriate motor actions. Yet, what information belongs together? In addition to spatial and temporal factors, correspondence across sensory features also matters. In the Bouba–Kiki (BK) effect, spiky shapes are associated with sounds like "kiki", and round shapes are associated with "bouba". Such associations exist between auditory and visual (AV) and auditory and tactile (AT) stimuli, where objects are explored only via touch. Visual experience influences AT associations, which are weak in early blind adults and in fully sighted 6- to 8-year-olds, who have a more naïve visual experience. It has been found that prior AV exposure in children enhances AT associations. Here, we consider how the amount of prior AV exposure strengthens AT associations. Sixty-one 6- to 8-year-olds completed four or eight AV trials, in which they saw a round and a spiky shape and indicated which shape best matched a sound. Children then completed 16 AT trials, in which they felt a round and a spiky shape; the shapes were hidden from view, and children had to indicate which of the two best matched a sound. We found that eight, but not four, trials of prior AV exposure enhanced AT associations. Our findings suggest that the amount, not just the type, of prior exposure is important in the development of audio–tactile associations. Full article
(This article belongs to the Special Issue The Role of Early Sensorimotor Experiences in Cognitive Development)

26 pages, 3346 KB  
Article
Virtual Reality as a Stress Measurement Platform: Real-Time Behavioral Analysis with Minimal Hardware
by Audrey Rah and Yuhua Chen
Sensors 2025, 25(17), 5323; https://doi.org/10.3390/s25175323 - 27 Aug 2025
Viewed by 1218
Abstract
With the growing use of digital technologies and interactive games, there is rising interest in how people respond to challenges, stress, and decision-making in virtual environments. Studying human behavior in such settings helps to improve design, training, and user experience. Instead of relying on complex devices, Virtual Reality (VR) creates new ways to observe and understand these responses in a simple and engaging format. This study introduces a lightweight method for monitoring stress levels that uses VR as the primary sensing platform. Detection relies on behavioral signals captured in VR, supported by a single minimal sensor: Galvanic Skin Response (GSR), which measures skin conductance as an indicator of physiological arousal. The proposed Sensor-Assisted Unity Architecture focuses on analyzing the user’s behavior inside the virtual environment alongside this physiological measurement. Most existing systems rely on physiological wearables, which add both cost and complexity. The Sensor-Assisted Unity Architecture shifts the focus to behavioral analysis in VR supplemented by minimal physiological input. Behavioral cues captured within the VR environment are analyzed in real time by an embedded processor, which then triggers simple physical feedback. Results show that combining VR behavioral data with a minimal sensor can improve detection in cases where behavioral or physiological signals alone may be insufficient. While this study does not quantitatively compare the Sensor-Assisted Unity Architecture to multi-sensor setups, it highlights VR as the main platform, with sensor input offering targeted enhancements without significantly increasing system complexity.
(This article belongs to the Special Issue Virtual Reality and Sensing Techniques for Human)
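The behavioral-plus-minimal-sensor fusion described in the abstract above can be illustrated with a small sketch. This is not the paper’s implementation: the feature names, normalization ranges, weights, and threshold below are all invented for illustration; only the general idea (behavioral cues dominating, GSR supplementing) comes from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    head_motion_var: float   # hypothetical behavioral cue from VR head tracking
    task_error_rate: float   # hypothetical behavioral cue: error rate in the VR task
    gsr_microsiemens: float  # minimal physiological input (skin conductance)

def stress_score(s: Sample, w_behavior: float = 0.7, w_gsr: float = 0.3) -> float:
    # Normalize each cue to [0, 1] using fixed, assumed value ranges.
    motion = min(s.head_motion_var / 5.0, 1.0)
    errors = min(s.task_error_rate, 1.0)
    gsr = min(max(s.gsr_microsiemens - 1.0, 0.0) / 10.0, 1.0)
    behavior = 0.5 * (motion + errors)
    # Behavioral signals carry most of the weight; GSR breaks ties
    # where behavior alone is ambiguous.
    return w_behavior * behavior + w_gsr * gsr

def is_stressed(s: Sample, threshold: float = 0.5) -> bool:
    return stress_score(s) >= threshold
```

In a real system the weighting would be learned rather than fixed, but the sketch shows why a single cheap sensor can help: it contributes a second, independent channel without changing the behavioral pipeline.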

25 pages, 1701 KB  
Review
Deciphering the Fasciola hepatica Glycocode and Its Involvement in Host–Parasite Interactions
by Jaclyn Swan, Timothy C. Cameron, Terry W. Spithill and Travis Beddoe
Biomolecules 2025, 15(9), 1235; https://doi.org/10.3390/biom15091235 - 26 Aug 2025
Abstract
The zoonotic disease fasciolosis poses a significant global threat to both humans and livestock. Its causative agent is Fasciola hepatica, commonly referred to as liver fluke. The emergence of drug resistance has underscored the urgent need for new therapeutics against F. hepatica. The tegument surface of F. hepatica is characterized by a dynamic syncytial layer surrounded by a glycocalyx, which serves as a crucial interface in host–parasite interactions, facilitating functions such as nutrient absorption, sensory input, and defense against the host immune response. Despite its pivotal role, only recently has research delved deeper into the glycans at the host–parasite interface and the glycosylation of hidden antigens. These glycan antigens have shown promise for vaccine development, or as targets for drug manipulation, across various pathogenic species. This review consolidates current knowledge on the glycosylation of F. hepatica, exploring glycan motifs identified through generic lectin probing and mass spectrometry. Additionally, it examines the interaction of glycoconjugates with lectins from the innate immune systems of both ruminant and human hosts. An enhanced understanding of the role of glycans in F. hepatica biology and their critical involvement in host–parasite interactions will be instrumental in developing novel strategies to combat these parasites effectively. In the future, a more comprehensive approach may be adopted in selecting and designing potential vaccine targets, integrating insights from glycosylation studies to improve efficacy.
(This article belongs to the Section Biomacromolecules: Proteins, Nucleic Acids and Carbohydrates)

30 pages, 1831 KB  
Article
Integrating Cacao Physicochemical-Sensory Profiles via Gaussian Processes Crowd Learning and Localized Annotator Trustworthiness
by Juan Camilo Lugo-Rojas, Maria José Chica-Morales, Sergio Leonardo Florez-González, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Foods 2025, 14(17), 2961; https://doi.org/10.3390/foods14172961 - 25 Aug 2025
Abstract
Understanding the intricate relationship between sensory perception and physicochemical properties of cacao-based products is crucial for advancing quality control and driving product innovation. However, effectively integrating these heterogeneous data sources poses a significant challenge, particularly when sensory evaluations are derived from low-quality, subjective, and often inconsistent annotations provided by multiple experts. We propose a comprehensive framework that leverages a correlated chained Gaussian processes model for learning from crowds, termed MAR-CCGP, specifically designed for a customized Casa Luker database that integrates sensory and physicochemical data on cacao-based products. By formulating sensory evaluations as regression tasks, our approach enables the estimation of continuous perceptual scores from physicochemical inputs, while concurrently inferring the latent, input-dependent reliability of each annotator. To address the inherent noise, subjectivity, and non-stationarity in expert-generated sensory data, we introduce a three-stage methodology: (i) construction of an integrated database that unifies physicochemical parameters with corresponding sensory descriptors; (ii) application of a MAR-CCGP model to infer the underlying ground truth from noisy, crowd-sourced, and non-stationary sensory annotations; and (iii) development of a novel localized expert trustworthiness approach, also based on MAR-CCGP, which dynamically adjusts for variations in annotator consistency across the input space. Our approach provides a robust, interpretable, and scalable solution for learning from heterogeneous and noisy sensory data, establishing a principled foundation for advancing data-driven sensory analysis and product optimization in the food science domain. We validate the effectiveness of our method through a series of experiments on both semi-synthetic data and a novel real-world dataset developed in collaboration with Casa Luker, which integrates sensory evaluations with detailed physicochemical profiles of cacao-based products. Compared to state-of-the-art learning-from-crowds baselines, our framework consistently achieves superior predictive performance and more precise annotator reliability estimation, demonstrating its efficacy in multi-annotator regression settings. Of note, our unique combination of a novel database, robust noisy-data regression, and input-dependent trust scoring sets MAR-CCGP apart from existing approaches.
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)
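The core idea behind learning from crowds, weighting each annotator by an inferred reliability when estimating the ground truth, can be sketched in a much-simplified form. This is not the MAR-CCGP model: it uses a single global noise variance per annotator rather than input-dependent, Gaussian-process-based reliability, and the function and variable names are hypothetical.

```python
import numpy as np

def fuse_annotations(Y, n_iter=20):
    """Estimate a consensus score from noisy multi-annotator ratings.

    Y: array of shape (n_samples, n_annotators), one sensory score per
    annotator per sample. Returns (consensus, per-annotator noise variance).
    Alternates between a precision-weighted consensus and re-estimating
    each annotator's noise level (an EM-style scheme).
    """
    n_samples, n_annotators = Y.shape
    var = np.ones(n_annotators)          # start: all annotators equally reliable
    for _ in range(n_iter):
        w = 1.0 / var                    # precision weights: reliable raters count more
        consensus = (Y * w).sum(axis=1) / w.sum()
        resid = Y - consensus[:, None]
        # Re-estimate each annotator's noise variance from their residuals.
        var = np.maximum(resid.var(axis=0), 1e-6)
    return consensus, var
```

The localized trustworthiness of the paper generalizes the scalar `var` into a function of the physicochemical input, so an expert can be reliable in one region of the flavor space and unreliable in another.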
