Search Results (88)

Search Parameters:
Keywords = tactile map

12 pages, 39404 KB  
Article
Soft Shear Sensing of Robotic Twisting Tasks Using Reduced-Order Conductivity Modeling
by Dhruv Trehan, David Hardman and Fumiya Iida
Sensors 2025, 25(16), 5159; https://doi.org/10.3390/s25165159 - 19 Aug 2025
Abstract
Much as the information generated by our fingertips is used for fine-scale grasping and manipulation, closed-loop dexterous robotic manipulation requires rich tactile information to be generated by artificial fingertip sensors. In particular, fingertip shear sensing dominates modalities such as twisting, dragging, and slipping, but there is limited research exploring soft shear predictions from an increasingly popular single-material tactile technology: electrical impedance tomography (EIT). Here, we focus on the twisting of a screwdriver as a representative shear-based task in which the signals generated by EIT hardware can be analyzed. Since EIT’s analytical reconstructions are based upon conductivity distributions, we propose and investigate five reduced-order models which relate shear-based screwdriver twisting to the conductivity maps of a robot’s single-material sensorized fingertips. We show how the physical basis of our reduced-order approach means that insights can be deduced from noisy signals during the twisting tasks, with respective torque and diameter correlations of 0.96 and 0.97 to our reduced-order parameters. Additionally, unlike traditional reconstruction techniques, all necessary FEM model signals can be precalculated with our approach, promising a route towards future high-speed closed-loop implementations.
(This article belongs to the Section Sensors and Robotics)
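The precalculation idea in the abstract above can be illustrated with a toy sketch: signal vectors are computed offline for a grid of candidate shear parameters, and each measured frame is matched against that dictionary at runtime. This is a hypothetical illustration, not the authors' FEM pipeline; the linear forward model and the parameter grid are invented.

```python
import numpy as np

def build_dictionary(param_grid, forward_model):
    """Precompute one signal vector per candidate parameter value."""
    return {p: forward_model(p) for p in param_grid}

def match_frame(measured, dictionary):
    """Return the candidate parameter whose precomputed signal is
    closest (least squares) to the measured frame."""
    return min(dictionary, key=lambda p: np.sum((dictionary[p] - measured) ** 2))

# Toy forward model standing in for the FEM solve (assumption):
# signal amplitude scales linearly with the shear parameter.
basis = np.linspace(0.0, 1.0, 16)
forward = lambda p: p * basis

grid = [round(x, 2) for x in np.arange(0.0, 2.01, 0.05)]
D = build_dictionary(grid, forward)

# A noisy frame generated at parameter 1.2 should match the 1.2 entry.
noisy = forward(1.2) + np.random.default_rng(0).normal(0, 0.01, 16)
estimate = match_frame(noisy, D)
```

Because every dictionary entry is precomputed, the per-frame cost at runtime is only the distance evaluations, which is the property the abstract highlights for high-speed closed-loop use.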

24 pages, 2716 KB  
Article
Interactive Indoor Audio-Map as a Digital Equivalent of the Tactile Map
by Dariusz Gotlib, Krzysztof Lipka and Hubert Świech
Appl. Sci. 2025, 15(16), 8975; https://doi.org/10.3390/app15168975 - 14 Aug 2025
Abstract
There are still relatively few applications that serve the function of a traditional tactile map, allowing visually impaired individuals to explore a digital map by sliding their fingers across it. Moreover, existing technological solutions either lack a spatial learning mode or provide only limited functionality, focusing primarily on navigating to a selected destination. To address these gaps, the authors have proposed an original concept for an indoor mobile application that enables map exploration by sliding a finger across the smartphone screen, using audio spatial descriptions as the primary medium for conveying information. The spatial descriptions are hierarchical and contextual, anchored in space with a defined extent of influence. GIS technology forms the basis for data management and analysis. The application is designed to support spatial orientation during user interaction with the digital map. The research emphasis was on creating an effective cartographic communication message, utilizing voice-based delivery of spatial information stored in a virtual building model (within a database) and tags placed in real-world buildings. Techniques such as Text-to-Speech, TalkBack, and QR codes were employed to achieve this. Preliminary tests conducted with both blind and sighted people demonstrated the usefulness of the proposed concept. Although designed to support people with disabilities, the proposed solution can also be useful and attractive to all users of navigation applications and may influence the development of such applications.
(This article belongs to the Section Earth Sciences)
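The idea of spatial descriptions anchored in space with an extent of influence can be sketched as a lookup: a finger position resolves to the most specific zone containing it. The zone names, coordinates, and descriptions below are invented for illustration; they are not from the paper's building model.

```python
# Hypothetical sketch: each audio description is anchored to a
# rectangular zone of influence; a touch position resolves to the
# smallest (most specific) zone that contains it, giving the
# hierarchical, contextual behavior described in the abstract.

ZONES = [
    # (name, spoken description, x0, y0, x1, y1) in screen coordinates
    ("floor", "Second floor, main corridor", 0, 0, 100, 100),
    ("room_204", "Room 204, lecture hall, door on the left", 10, 10, 40, 40),
    ("stairs", "Staircase down to the lobby", 70, 60, 95, 95),
]

def describe(x, y):
    hits = [z for z in ZONES if z[2] <= x <= z[4] and z[3] <= y <= z[5]]
    if not hits:
        return "Outside the mapped area"
    # Most specific zone = smallest area among the containing zones.
    best = min(hits, key=lambda z: (z[4] - z[2]) * (z[5] - z[3]))
    return best[1]
```

A touch at (20, 20) falls inside both the whole-floor zone and the room zone, and the room description wins because its zone is smaller; the returned string would then be handed to a Text-to-Speech engine.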

14 pages, 2032 KB  
Article
Surface Reading Model via Haptic Device: An Application Based on Internet of Things and Cloud Environment
by Andreas P. Plageras, Christos L. Stergiou, Vasileios A. Memos, George Kokkonis, Yutaka Ishibashi and Konstantinos E. Psannis
Electronics 2025, 14(16), 3185; https://doi.org/10.3390/electronics14163185 - 11 Aug 2025
Abstract
In this research paper, we have implemented a computer program, written in XML, that senses differences in image color depth through haptic/tactile devices. With the use of “Bump Map” and tools such as “Autodesk’s 3D Studio Max”, “Adobe Photoshop”, and “Adobe Illustrator”, we were able to obtain the desired results. The haptic devices used for the experiments were the “PHANTOM Touch” and the “PHANTOM Omni R” of “3D Systems”. The programs installed and configured to model the surfaces and run the experiments were “H3D Api”, “Geomagic_OpenHaptics”, and “OpenHaptics_Developer_Edition”. The purpose of this project was to feel different textures, shapes, and objects in images by using a haptic device. The primary objective was to create a system from the ground up to render visuals on the screen and facilitate interaction with them via the haptic device. The main focus of this work is to propose a novel pattern of images that can be classified as different textures, so that they can be identified by people with reduced vision.
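Bump mapping, mentioned in the abstract above, derives per-pixel surface normals from a grayscale height map; those normals are what a haptic renderer pushes back against the stylus. A minimal, generic sketch of the technique (not the authors' H3D setup):

```python
import numpy as np

def bump_normals(height):
    """Derive unit surface normals from a grayscale height map.
    Brighter pixels are treated as higher; the normal at each pixel
    tilts against the local height gradient."""
    gy, gx = np.gradient(height.astype(float))
    n = np.dstack([-gx, -gy, np.ones_like(height, dtype=float)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A flat image yields straight-up normals; a ramp tilts them,
# which a haptic device would render as a slanted surface.
flat = np.zeros((4, 4))
ramp = np.tile(np.arange(4.0), (4, 1))
flat_n = bump_normals(flat)
ramp_n = bump_normals(ramp)
```

The same normal field drives both the visual shading and the force direction returned to the stylus, which is why a single bump map can serve both senses.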

19 pages, 7780 KB  
Article
Posture Estimation from Tactile Signals Using a Masked Forward Diffusion Model
by Sanket Kachole, Bhagyashri Nayak, James Brouner, Ying Liu, Liucheng Guo and Dimitrios Makris
Sensors 2025, 25(16), 4926; https://doi.org/10.3390/s25164926 - 9 Aug 2025
Abstract
Utilizing tactile sensors embedded in intelligent mats is an attractive non-intrusive approach for human motion analysis. Interpreting tactile pressure 2D maps for accurate posture estimation poses significant challenges, such as dealing with data sparsity, noise interference, and the complexity of mapping pressure signals. Our approach introduces a novel dual-diffusion signal enhancement (DDSE) architecture that leverages tactile pressure measurements from an intelligent pressure mat for precise prediction of 3D body joint positions, using a diffusion model to enhance pressure data quality and a convolutional-transformer neural network architecture for accurate pose estimation. Additionally, we collected the pressure-to-posture inference technology (PPIT) dataset that relates pressure signals organized as a 2D array to Motion Capture data, and our proposed method has been rigorously evaluated on it, demonstrating superior accuracy in comparison to state-of-the-art methods.
(This article belongs to the Section Physical Sensors)
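The forward (noising) half of a diffusion model, which the masked forward diffusion above builds on, has the standard closed form x_t = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps. A generic sketch with an invented pressure frame; the paper's masking mechanism is omitted:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Standard forward-diffusion sample at step t:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise,
    where a_bar_t is the cumulative product of (1 - beta)."""
    a_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.normal(size=x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear schedule
pressure_map = rng.random((32, 32))     # stand-in for a tactile frame
xt = forward_diffuse(pressure_map, 999, betas, rng)
```

At the final step the signal term is negligible and x_t is essentially unit Gaussian noise; a denoising network trained to invert these steps is what enhances the sparse, noisy pressure frames.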

30 pages, 10586 KB  
Article
Autonomous UAV-Based System for Scalable Tactile Paving Inspection
by Tong Wang, Hao Wu, Abner Asignacion, Zhengran Zhou, Wei Wang and Satoshi Suzuki
Drones 2025, 9(8), 554; https://doi.org/10.3390/drones9080554 - 7 Aug 2025
Abstract
Tactile pavings (Tenji Blocks) are prone to wear, obstruction, and improper installation, posing significant safety risks for visually impaired pedestrians. This paper presents an autonomous UAV-based inspection system that incorporates a lightweight YOLOv8 (You Only Look Once version 8) model for real-time detection, using a fisheye camera to maximize field-of-view coverage, which is highly advantageous for low-altitude UAV navigation in complex urban settings. To enable lightweight deployment, a novel Lightweight Shared Detail Enhanced Oriented Bounding Box (LSDE-OBB) head module is proposed. The design of LSDE-OBB leverages the consistent structural patterns of tactile pavements, making parameter sharing within the detection head an effective optimization strategy without significant loss of accuracy. The feature extraction module is further optimized using StarBlock to reduce computational complexity and model size. Integrated Contextual Anchor Attention (CAA) captures long-range spatial dependencies and refines critical feature representations, achieving an optimal speed–precision balance. The framework demonstrates a 25.13% parameter reduction (2.308 M vs. 3.083 M) and 46.29% lower GFLOPs, and achieves 11.97% mAP50:95 on tactile paving datasets, enabling real-time edge deployment. Validated on public and custom datasets and in actual UAV flights, the system realizes robust tactile paving detection and stable navigation in complex urban environments via hierarchical control algorithms for dynamic trajectory planning and obstacle avoidance, providing an efficient and scalable platform for automated infrastructure inspection.
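The reported parameter reduction can be checked directly from the counts quoted in the abstract:

```python
# Verifying the quoted reduction from the stated parameter counts.
params_baseline = 3.083e6   # baseline detection head (from the abstract)
params_lsde_obb = 2.308e6   # LSDE-OBB head (from the abstract)

reduction = (params_baseline - params_lsde_obb) / params_baseline
# reduction is about 0.2514, agreeing with the abstract's 25.13%
# to within rounding.
```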

14 pages, 7196 KB  
Article
Touch to Speak: Real-Time Tactile Pronunciation Feedback for Individuals with Speech and Hearing Impairments
by Anat Sharon, Roi Yozevitch and Eldad Holdengreber
Technologies 2025, 13(8), 345; https://doi.org/10.3390/technologies13080345 - 7 Aug 2025
Abstract
This study presents a wearable haptic feedback system designed to support speech training for individuals with speech and hearing impairments. The system provides real-time tactile cues based on detected phonemes, helping users correct their pronunciation independently. Unlike prior approaches focused on passive reception or therapist-led instruction, our method enables active, phoneme-level feedback using a multimodal interface combining audio input, visual reference, and spatially mapped vibrotactile output. We validated the system through three user studies measuring pronunciation accuracy, phoneme discrimination, and learning over time. The results show a significant improvement in word articulation accuracy and user engagement. These findings highlight the potential of real-time haptic pronunciation tools as accessible, scalable aids for speech rehabilitation and second-language learning.
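Spatially mapped vibrotactile output of the kind described amounts to a table from phonemes to motor sites and pulse patterns. The phoneme set, motor layout, and correction rule below are invented for illustration; they are not the paper's mapping.

```python
# Hypothetical phoneme-level feedback plan: when the detected phoneme
# differs from the target, cue the motor assigned to the target
# phoneme so the user feels which sound to produce.

PHONEME_TO_MOTOR = {
    # phoneme: (motor index on the wearable, pulse pattern)
    "s": (0, "short"),
    "sh": (1, "short"),
    "b": (2, "long"),
    "p": (2, "short"),   # voiced/voiceless pairs share a site (assumption)
}

def feedback_plan(detected, target):
    """Return the tactile cue for a mispronunciation, or None when the
    detected phoneme already matches the target."""
    if detected == target:
        return None
    return PHONEME_TO_MOTOR.get(target)
```

Saying "s" where "sh" was expected would, under this toy mapping, trigger a short pulse on motor 1, the site assigned to the target phoneme.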

22 pages, 1787 KB  
Article
Active Touch Sensing for Robust Hole Detection in Assembly Tasks
by Bojan Nemec, Mihael Simonič and Aleš Ude
Sensors 2025, 25(15), 4567; https://doi.org/10.3390/s25154567 - 23 Jul 2025
Abstract
In this paper, we propose an active touch sensing algorithm designed for robust hole localization in 3D objects, specifically aimed at assembly tasks such as peg-in-hole operations. Unlike general object detection algorithms, our solution is tailored for precise localization of features like hole openings using sparse tactile feedback. The method builds on a prior 3D map of the object and employs a series of iterative search algorithms to refine localization by aligning tactile sensing data with the object’s shape. It is specifically designed for objects composed of multiple parallel surfaces located at distinct heights, a common characteristic in many assembly tasks. In addition to the deterministic approach, we introduce a probabilistic version of the algorithm, which effectively compensates for sensor noise and inaccuracies in the 3D map. This probabilistic framework significantly improves the algorithm’s resilience in real-world environments, ensuring reliable performance even under imperfect conditions. We validate the method’s effectiveness on several assembly tasks, such as inserting a plug into a socket, demonstrating its speed and accuracy. The proposed algorithm outperforms traditional search strategies, offering a robust solution for assembly operations in industrial and domestic applications with limited sensory input.
(This article belongs to the Collection Tactile Sensors, Sensing and Systems)
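A probabilistic treatment of touch-based localization can be sketched as a Bayesian grid filter: keep a belief over candidate hole positions and update it after each noisy probe. The 1-D grid, flat-plate surface model, and noise level below are invented for illustration; they are not the paper's algorithm.

```python
import numpy as np

heights = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # true surface; 0.0 is the hole

def update(belief, j, measured, sigma=0.1):
    """Bayes update after probing grid cell j and reading `measured`.
    Under hypothesis 'hole at i', the expected height at j is 0 when
    i == j and 1 otherwise (a flat plate with a single opening)."""
    expected = np.ones_like(belief)
    expected[j] = 0.0
    likelihood = np.exp(-0.5 * ((measured - expected) / sigma) ** 2)
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                        # uniform prior over 5 cells
belief = update(belief, 2, heights[2] + 0.03)   # probe cell 2, noisy reading
```

A single probe that reads near zero collapses almost all probability mass onto the correct cell, while probes on the surrounding surface would instead rule hypotheses out gradually; this graceful handling of noisy readings is what the probabilistic version buys over a deterministic search.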

24 pages, 1076 KB  
Article
Visual–Tactile Fusion and SAC-Based Learning for Robot Peg-in-Hole Assembly in Uncertain Environments
by Jiaxian Tang, Xiaogang Yuan and Shaodong Li
Machines 2025, 13(7), 605; https://doi.org/10.3390/machines13070605 - 14 Jul 2025
Abstract
Robotic assembly, particularly peg-in-hole tasks, presents significant challenges in uncertain environments, where pose deviations, varying peg shapes, and environmental noise can undermine performance. To address these issues, this paper proposes a novel approach combining visual–tactile fusion with reinforcement learning. By integrating multimodal data (RGB images, depth maps, tactile force information, and robot body pose data) via an autoencoder-based fusion network, we provide the robot with a more comprehensive perception of its environment. Furthermore, we enhance the robot’s assembly skills by using the Soft Actor–Critic (SAC) reinforcement learning algorithm, which allows the robot to adapt its actions to dynamic environments. We evaluated our method through experiments, which showed clear improvements in three key aspects: higher assembly success rates, reduced task completion times, and better generalization across diverse peg shapes and environmental conditions. The results suggest that the combination of visual and tactile feedback with SAC-based learning provides a viable and robust solution for robotic assembly in uncertain environments, paving the way for scalable and adaptable industrial robots.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

21 pages, 24372 KB  
Article
Streamlining Haptic Design with Micro-Collision Haptic Map Generated by Stable Diffusion
by Hongyu Liu and Zhenyu Gu
Appl. Sci. 2025, 15(13), 7174; https://doi.org/10.3390/app15137174 - 26 Jun 2025
Abstract
Rendering surface materials to provide realistic tactile sensations is a key focus in haptic interaction research. However, generating texture maps and designing corresponding haptic feedback often require expert knowledge and significant effort. To simplify the workflow, we developed a micro-collision-based tactile texture dataset for several common materials and fine-tuned the VAE model of Stable Diffusion. Our approach allows designers to generate matching visual and haptic textures from natural language prompts and enables users to receive real-time, realistic haptic feedback when interacting with virtual surfaces. We evaluated our method through a haptic design task: professional and non-haptic designers each created one haptic design using traditional tools and another using our approach, and participants then evaluated the four resulting designs. The results showed that our method produced haptic feedback comparable to that of professionals, though slightly lower in overall and consistency scores. Importantly, professional designers using our method required less time and fewer expert resources, and non-haptic designers also achieved better outcomes with our tool. Our generative method streamlines the haptic design workflow, lowering the expertise threshold and increasing efficiency. It has the potential to support broader adoption of haptic design in interactive media and to enhance multisensory experiences.

23 pages, 4228 KB  
Article
Evaluation on AI-Generative Emotional Design Approach for Urban Vitality Spaces: A LoRA-Driven Framework and Empirical Research
by Ruoshi Zhang, Xiaoqing Tang, Lifang Wu, Yuchen Wang, Xiaojing He and Mengjie Liu
Land 2025, 14(6), 1300; https://doi.org/10.3390/land14061300 - 18 Jun 2025
Abstract
Recent advancements in urban vitality space design reflect increasing academic attention to emotional experience dimensions, paralleled by the emergence of AI-based generative technology as a transformative tool for systematically exploring the emotional attachment potential in preliminary designs. To effectively utilize AI-generative design results for spatial vitality creation and evaluation, exploring whether generated spaces respond to people’s emotional demands is necessary. This study establishes a comparative framework analyzing emotional attachment characteristics between LoRA-generated spatial designs and the real urban vitality space, using the representative case of THE BOX in Chaoyang, Beijing. Empirical data were collected through structured on-site surveys with 115 validated participants, enabling a comprehensive emotional attachment evaluation. SPSS 26.0 was employed for multi-dimensional analyses, encompassing aggregate attachment intensity, dimensional differentiation, and correlation mapping. Key findings reveal that while both generative and original spatial representations elicit measurable positive responses, AI-generated designs demonstrate a limited capacity to replicate the authentic three-dimensional experiential qualities inherent to physical environments, particularly regarding structural articulation and material tactility. Furthermore, significant deficiencies persist in the generative design’s cultural semiotic expression and visual-interactive spatial legibility, resulting in diminished user satisfaction. The analysis reveals that LoRA-generated spatial solutions require strategic enhancements in dynamic visual hierarchy, interactive integration, chromatic optimization, and material fidelity to bridge this experiential gap. These insights suggest viable pathways for integrating generative AI methodologies with conventional urban design practices, potentially enabling more sophisticated hybrid approaches that synergize digital innovation with built environment realities to cultivate enriched multisensory spatial experiences.
(This article belongs to the Section Land Planning and Landscape Architecture)

22 pages, 8008 KB  
Article
Real-Time Detection and Localization of Force on a Capacitive Elastomeric Sensor Array Using Image Processing and Machine Learning
by Peter Werner Egger, Gidugu Lakshmi Srinivas and Mathias Brandstötter
Sensors 2025, 25(10), 3011; https://doi.org/10.3390/s25103011 - 10 May 2025
Abstract
Soft and flexible capacitive tactile sensors are vital in prosthetics, wearable health monitoring, and soft robotics applications. However, achieving accurate real-time force detection and spatial localization remains a significant challenge, especially in dynamic, non-rigid environments like prosthetic liners. This study presents a real-time force point detection and tracking system using a custom-fabricated soft elastomeric capacitive sensor array in conjunction with image processing and machine learning techniques. The system integrates Otsu’s thresholding, Connected Component Labeling, and a tailored cluster-tracking algorithm for anomaly detection, enabling real-time localization within 1 ms. A 6×6 Dragon Skin-based sensor array was fabricated with embedded copper yarn electrodes and evaluated using a UR3e robotic arm and a Schunk force–torque sensor to generate controlled stimuli. The fabricated tactile sensor measures applied forces from 1 to 3 N. Sensor output was captured via a MUCA breakout board and an Arduino Nano 33 IoT, which transmitted the ratio-of-mutual-capacitance data for further analysis. A Python-based processing pipeline filters and visualizes the data with real-time clustering and adaptive thresholding. Machine learning models such as linear regression, Support Vector Machine, decision tree, and Gaussian Process Regression were evaluated to correlate force with capacitance values. Decision Tree Regression achieved the highest performance (R² = 0.9996, RMSE = 0.0446), providing an effective correlation factor of 51.76 for force estimation. The system offers robust performance in complex interactions and a scalable solution for soft robotics and prosthetic force mapping, supporting health monitoring, safe automation, and medical diagnostics.
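The thresholding-and-labeling stage named above can be illustrated on a single frame. Otsu's method and 4-connected component labeling are written out directly to keep the example dependency-free; the 6×6 geometry follows the abstract, but the frame values are made up.

```python
import numpy as np

def otsu_threshold(img, bins=64):
    """Pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def label(mask):
    """4-connected component labeling via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = n
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels, n

frame = np.zeros((6, 6))                 # invented capacitance frame
frame[1, 1] = frame[1, 2] = 0.9          # one contact spanning two taxels
frame[4, 4] = 0.8                        # a second, separate contact
mask = frame > otsu_threshold(frame)
labels, n = label(mask)                  # n distinct force points
```

On this frame the Otsu threshold falls between the background and the two contacts, and labeling yields two components, one per force point; the centroid of each component would then feed the cluster-tracking stage.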

24 pages, 20196 KB  
Article
Inclusive Museum Engagement: Multisensory Storytelling of Cagli Warriors’ Journey and the Via Flamina Landscape Through Interactive Tactile Experiences and Digital Replicas
by Paolo Clini, Romina Nespeca, Umberto Ferretti, Federica Galazzi and Monica Bernacchia
Heritage 2025, 8(2), 61; https://doi.org/10.3390/heritage8020061 - 6 Feb 2025
Abstract
This paper presents a case study from the Archaeological and Via Flaminia Museum in Cagli (Italy), developed within the ERASMUS+ Next-Museum project, which explores inclusive approaches through the digital transformation of small museums and their connection to the surrounding territory. A key goal was to “return” bronze statuettes to the museum, symbolically compensating the community for their absence. The initiative integrates accessibility and multisensory storytelling following “Design for All” principles. Three installations were implemented: tactile replicas of the statuettes produced through 3D printing, a sensorized table for interactive storytelling, and a story map displayed on a touchscreen for exploring local archaeological heritage. The design prioritized inclusivity, particularly for visitors with visual impairments, while addressing practical constraints such as the need for a mobile and flexible setup within a limited budget. Verification and validation tests were conducted with visually impaired participants during the pre-opening phase, and the installations were later evaluated using the User Experience Questionnaire, complemented by qualitative feedback. These evaluations highlight the potential of phygital experiences to foster engagement with cultural heritage while addressing technological and design challenges.

14 pages, 7240 KB  
Article
Restoration of Genuine Sensation and Proprioception of Individual Fingers Following Transradial Amputation with Targeted Sensory Reinnervation as a Mechanoneural Interface
by Alexander Gardetto, Gernot R. Müller-Putz, Kyle R. Eberlin, Franco Bassetto, Diane J. Atkins, Mara Turri, Gerfried Peternell, Ortrun Neuper and Jennifer Ernst
J. Clin. Med. 2025, 14(2), 417; https://doi.org/10.3390/jcm14020417 - 10 Jan 2025
Abstract
Background/Objectives: Tactile gnosis derives from the interplay between the hand’s tactile input and the memory systems of the brain. It is the prerequisite for complex hand functions. Impaired sensation leads to profound disability. Various invasive and non-invasive sensory substitution strategies for providing feedback from prostheses have been unsuccessful when translated to clinical practice, since they fail to match the feeling to genuine sensation of the somatosensory cortex. Methods: Herein, we describe a novel surgical technique for upper-limb-targeted sensory reinnervation (ulTSR) and report how single digital nerves selectively reinnervate the forearm skin and restore the spatial sensory capacity of single digits of the amputated hand in a case series of seven patients. We explore the interplay of the redirected residual digital nerves and the interpretation of sensory perception after reinnervation of the forearm skin in the somatosensory cortex by evaluating sensory nerve action potentials (SNAPs), somatosensory evoked potentials (SEPs), and amputation-associated pain qualities. Results: Digital nerves were rerouted and reliably reinnervated the forearm skin after hand amputation, leading to somatotopy and limb maps of the thumb and four individual fingers. SNAPs were obtained from the donor digital nerves after stimulating the recipient sensory nerves of the forearm. Matching SEPs were obtained after electrocutaneous stimulation of the reinnervated skin areas of the forearm where the thumb, index, and little fingers are perceived. Pain incidence was significantly reduced or even fully resolved. Conclusions: We propose that ulTSR can lead to higher acceptance of prosthetic hands and substantially reduce the incidence of phantom limb and neuroma pain. In addition, the spatial restoration of lost-hand sensing and the somatotopic reinnervation of the forearm skin may serve as a machine interface, allowing for genuine sensation and embodiment of the prosthetic hand without the need for complex neural coding adjustments.

25 pages, 19201 KB  
Article
Efficient Cow Body Condition Scoring Using BCS-YOLO: A Lightweight, Knowledge Distillation-Based Method
by Zhiqiang Zheng, Zhuangzhuang Wang and Zhi Weng
Animals 2024, 14(24), 3668; https://doi.org/10.3390/ani14243668 - 19 Dec 2024
Abstract
Monitoring the body condition of dairy cows is essential for ensuring their health and productivity, but traditional BCS methods—relying on visual or tactile assessments by skilled personnel—are subjective, labor-intensive, and impractical for large-scale farms. To overcome these limitations, we present BCS-YOLO, a lightweight and automated BCS framework built on YOLOv8, which enables consistent, accurate scoring under complex conditions with minimal computational resources. BCS-YOLO integrates the Star-EMA module and the Star Shared Lightweight Detection Head (SSLDH) to enhance the detection accuracy and reduce model complexity. The Star-EMA module employs multi-scale attention mechanisms that balance spatial and semantic features, optimizing feature representation for cow hindquarters in cluttered farm environments. SSLDH further simplifies the detection head, making BCS-YOLO viable for deployment in resource-limited scenarios. Additionally, channel-based knowledge distillation generates soft probability maps focusing on key body regions, facilitating effective knowledge transfer and enhancing performance. The results on a public cow image dataset show that BCS-YOLO reduces the model size by 33% and improves the mean average precision (mAP) by 9.4%. These advances make BCS-YOLO a robust, non-invasive tool for consistent and accurate BCS in large-scale farming, supporting sustainable livestock management, reducing labor costs, enhancing animal welfare, and boosting productivity.
(This article belongs to the Section Animal System and Management)
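Channel-based knowledge distillation of the kind mentioned above is commonly formulated by turning teacher and student activation maps into per-channel spatial probability maps with a softmax and matching them via KL divergence. A generic numpy sketch of that formulation (the temperature, shapes, and data are illustrative, not the paper's settings):

```python
import numpy as np

def channel_kd_loss(student, teacher, tau=2.0):
    """Channel-wise distillation loss for (C, H, W) activation maps:
    softmax over spatial locations within each channel yields soft
    probability maps, compared with KL(teacher || student)."""
    c = student.shape[0]
    s = student.reshape(c, -1) / tau
    t = teacher.reshape(c, -1) / tau
    ps = np.exp(s - s.max(axis=1, keepdims=True))
    ps /= ps.sum(axis=1, keepdims=True)
    pt = np.exp(t - t.max(axis=1, keepdims=True))
    pt /= pt.sum(axis=1, keepdims=True)
    return float(np.mean(np.sum(pt * (np.log(pt) - np.log(ps)), axis=1)))

rng = np.random.default_rng(0)
teacher_maps = rng.normal(size=(8, 4, 4))
student_maps = rng.normal(size=(8, 4, 4))
loss = channel_kd_loss(student_maps, teacher_maps)
```

The loss is zero when the student reproduces the teacher's maps exactly and positive otherwise, so minimizing it steers the compact student toward the teacher's attention over key body regions.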

16 pages, 2612 KB  
Article
Influencing Mechanism of Signal Design Elements in Complex Human–Machine System: Evidence from Eye Movement Data
by Siu Shing Man, Wenbo Hu, Hanxing Zhou, Tingru Zhang and Alan Hoi Shou Chan
Informatics 2024, 11(4), 88; https://doi.org/10.3390/informatics11040088 - 21 Nov 2024
Abstract
In today’s rapidly evolving technological landscape, human–machine interaction has become an issue that should be systematically explored. This research aimed to examine the impact of different pre-cue modes (visual, auditory, and tactile), stimulus modes (visual, auditory, and tactile), compatible mapping modes (both compatible (BC), transverse compatible (TC), longitudinal compatible (LC), and both incompatible (BI)), and stimulus onset asynchrony (200 ms/600 ms) on the performance of participants in complex human–machine systems. Eye movement data and a dual-task paradigm involving stimulus–response and manual tracking were utilized for this study. The findings reveal that visual pre-cues can capture participants’ attention towards peripheral regions, a phenomenon not observed when visual stimuli are presented in isolation. Furthermore, when confronted with visual stimuli, participants predominantly prioritize continuous manual tracking tasks, utilizing focal vision, while concurrently executing stimulus–response compatibility tasks with peripheral vision. In addition, the average pupil diameter tends to diminish with the use of visual pre-cues or visual stimuli but expands during auditory or tactile stimuli or pre-cue modes. These findings contribute to the existing literature on the theoretical design of complex human–machine interfaces and offer practical implications for the design of human–machine system interfaces. Moreover, this paper underscores the significance of considering the optimal combination of stimulus modes, pre-cue modes, and stimulus onset asynchrony, tailored to the characteristics of the human–machine interaction task.
(This article belongs to the Topic Theories and Applications of Human-Computer Interaction)
