Search Results (122)

Search Parameters:
Keywords = tactile images

24 pages, 1076 KiB  
Article
Visual–Tactile Fusion and SAC-Based Learning for Robot Peg-in-Hole Assembly in Uncertain Environments
by Jiaxian Tang, Xiaogang Yuan and Shaodong Li
Machines 2025, 13(7), 605; https://doi.org/10.3390/machines13070605 - 14 Jul 2025
Viewed by 362
Abstract
Robotic assembly, particularly peg-in-hole tasks, presents significant challenges in uncertain environments where pose deviations, varying peg shapes, and environmental noise can undermine performance. To address these issues, this paper proposes a novel approach combining visual–tactile fusion with reinforcement learning. By integrating multimodal data (RGB image, depth map, tactile force information, and robot body pose data) via a fusion network based on an autoencoder, we provide the robot with a more comprehensive perception of its environment. Furthermore, we enhance the robot’s assembly skill by using the Soft Actor–Critic (SAC) reinforcement learning algorithm, which allows the robot to adapt its actions to dynamic environments. We evaluate our method through experiments, which showed clear improvements in three key aspects: higher assembly success rates, reduced task completion times, and better generalization across diverse peg shapes and environmental conditions. The results suggest that the combination of visual and tactile feedback with SAC-based learning provides a viable and robust solution for robotic assembly in uncertain environments, paving the way for scalable and adaptable industrial robots.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
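As a rough illustration of the fusion step described in the abstract, the sketch below concatenates per-modality feature vectors and passes them through a linear autoencoder to obtain a fused latent that a downstream SAC policy could consume. All dimensions, the random weights, and the linear architecture are illustrative assumptions, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature sizes per modality (assumptions, not the paper's values).
DIMS = {"rgb": 64, "depth": 32, "tactile": 6, "pose": 7}
LATENT = 16

in_dim = sum(DIMS.values())
# Randomly initialized weights of a linear autoencoder (untrained stand-in).
W_enc = rng.normal(0, 0.1, (LATENT, in_dim))
W_dec = rng.normal(0, 0.1, (in_dim, LATENT))

def fuse(features: dict) -> np.ndarray:
    """Concatenate modality features and encode them into one fused latent."""
    x = np.concatenate([features[k] for k in DIMS])  # (in_dim,)
    return np.tanh(W_enc @ x)                        # fused latent state

def reconstruct(z: np.ndarray) -> np.ndarray:
    """Decoder used during autoencoder pre-training (reconstruction loss)."""
    return W_dec @ z

obs = {k: rng.normal(size=d) for k, d in DIMS.items()}
z = fuse(obs)          # this latent would be the SAC policy's observation
x_hat = reconstruct(z)
print(z.shape, x_hat.shape)
```

In a full system the encoder would be trained to minimize the reconstruction error before (or while) the SAC agent learns on the latent observations.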

31 pages, 6682 KiB  
Review
Research Progress on Non-Destructive Testing Technology and Equipment for Poultry Eggshell Quality
by Qiaohua Wang, Zheng Yang, Chengkang Liu, Rongqian Sun and Shuai Yue
Foods 2025, 14(13), 2223; https://doi.org/10.3390/foods14132223 - 24 Jun 2025
Viewed by 510
Abstract
Eggshell quality inspection plays a pivotal role in enhancing the commercial value of poultry eggs and ensuring their safety. It effectively enables the screening of high-quality eggs to meet consumer demand for premium egg products. This paper analyzes the surface characteristics, ultrastructure, and mechanical properties of poultry eggshells. It systematically reviews current advances in eggshell quality inspection technologies and compares the suitability and performance of techniques for key indicators, including shell strength, thickness, spots, color, and cracks. Furthermore, the paper discusses challenges in non-destructive testing, including individual egg variations, species differences, hardware precision limitations, and inherent methodological constraints. It summarizes commercially available portable and online non-destructive testing equipment, analyzing core challenges: the cost–accessibility paradox, speed–accuracy trade-off, algorithm interference impacts, and the technology–practice gap. Additionally, the paper explores the potential application of several emerging technologies—such as tactile sensing, X-ray imaging, laser-induced breakdown spectroscopy, and fluorescence spectroscopy—in eggshell quality inspection. Finally, it provides a comprehensive outlook on future research directions, offering constructive guidance for subsequent studies and practical applications in production.

18 pages, 602 KiB  
Review
Innovations in Robot-Assisted Surgery for Genitourinary Cancers: Emerging Technologies and Clinical Applications
by Stamatios Katsimperis, Lazaros Tzelves, Georgios Feretzakis, Themistoklis Bellos, Ioannis Tsikopoulos, Nikolaos Kostakopoulos and Andreas Skolarikos
Appl. Sci. 2025, 15(11), 6118; https://doi.org/10.3390/app15116118 - 29 May 2025
Viewed by 780
Abstract
Robot-assisted surgery has transformed the landscape of genitourinary cancer treatment, offering enhanced precision, reduced morbidity, and improved recovery compared to open or conventional laparoscopic approaches. As the field matures, a new generation of technological innovations is redefining the boundaries of what robotic systems can achieve. This narrative review explores the integration of artificial intelligence, advanced imaging modalities, augmented reality, and connectivity in robotic urologic oncology. The applications of machine learning in surgical skill evaluation and postoperative outcome predictions are discussed, along with AI-enhanced haptic feedback systems that compensate for the lack of tactile sensation. The role of 3D virtual modeling, intraoperative augmented reality, and fluorescence-guided surgery in improving surgical planning and precision is examined for both kidney and prostate procedures. Emerging tools for real-time tissue recognition, including confocal microscopy and Raman spectroscopy, are evaluated for their potential to optimize margin assessment. This review also addresses the shift toward single-port systems and the rise of telesurgery enabled by 5G connectivity, highlighting global efforts to expand expert surgical care across geographic barriers. Collectively, these innovations represent a paradigm shift in robot-assisted urologic oncology, with the potential to enhance functional outcomes, surgical safety, and access to high-quality care.
(This article belongs to the Special Issue New Trends in Robot-Assisted Surgery)

21 pages, 5680 KiB  
Review
Endoscopic Dilation for Fibrostenotic Complications in Eosinophilic Esophagitis—A Narrative Review
by Marco Michelon, Edoardo Vincenzo Savarino, Michele Montori, Maria Eva Argenziano, Pieter Jan Poortmans, Pierfrancesco Visaggi, Roberto Penagini, David J. Tate, Marina Coletta and Andrea Sorge
Allergies 2025, 5(2), 17; https://doi.org/10.3390/allergies5020017 - 26 May 2025
Viewed by 1398
Abstract
Esophageal fibrotic remodeling is a major complication of chronic inflammation in eosinophilic esophagitis (EoE) and represents one of the main determinants of symptoms in adult patients with EoE, with a remarkable impact on patients’ quality of life and the healthcare system. Esophageal fibrotic remodeling is diagnosed through upper gastrointestinal endoscopy, radiological studies, and a functional luminal imaging probe. However, diagnostic underestimation of esophageal strictures and suboptimal adherence to EoE guidelines still represent limitations of current clinical practice. Combined with medical therapy and/or elimination diets, endoscopic dilation remains the cornerstone treatment for esophageal strictures and rings, offering a safe and effective option for managing obstructive symptoms. Different modalities are available for esophageal endoscopic dilation of EoE, including mechanical and balloon dilators. Mechanical dilators provide tactile feedback during the procedure and exert longitudinal and radial forces. In contrast, balloon dilators apply a purely radial force and enable direct visualization of the esophageal mucosa during the procedure. Both mechanical and balloon dilators are safe and effective, with no single modality demonstrating clear superiority. Consequently, the choice of dilation technique is guided by stricture characteristics, the expertise of the endoscopist, and considerations related to the financial and environmental sustainability of the devices. This review aims to summarize the most relevant evidence on the endoscopic evaluation and dilation of fibrostenotic complications in EoE, also providing practical guidance for clinicians to optimize the endoscopic management of these patients.
(This article belongs to the Section Diagnosis and Therapeutics)

22 pages, 8008 KiB  
Article
Real-Time Detection and Localization of Force on a Capacitive Elastomeric Sensor Array Using Image Processing and Machine Learning
by Peter Werner Egger, Gidugu Lakshmi Srinivas and Mathias Brandstötter
Sensors 2025, 25(10), 3011; https://doi.org/10.3390/s25103011 - 10 May 2025
Viewed by 707
Abstract
Soft and flexible capacitive tactile sensors are vital in prosthetics, wearable health monitoring, and soft robotics applications. However, achieving accurate real-time force detection and spatial localization remains a significant challenge, especially in dynamic, non-rigid environments like prosthetic liners. This study presents a real-time force point detection and tracking system using a custom-fabricated soft elastomeric capacitive sensor array in conjunction with image processing and machine learning techniques. The system integrates Otsu’s thresholding, Connected Component Labeling, and a tailored cluster-tracking algorithm for anomaly detection, enabling real-time localization within 1 ms. A 6×6 Dragon Skin-based sensor array was fabricated, embedded with copper yarn electrodes, and evaluated using a UR3e robotic arm and a Schunk force-torque sensor to generate controlled stimuli. The fabricated tactile sensor measures the applied force from 1 to 3 N. Sensor output was captured via a MUCA breakout board and Arduino Nano 33 IoT, transmitting the Ratio of Mutual Capacitance data for further analysis. A Python-based processing pipeline filters and visualizes the data with real-time clustering and adaptive thresholding. Machine learning models such as linear regression, Support Vector Machine, decision tree, and Gaussian Process Regression were evaluated to correlate force with capacitance values. Decision Tree Regression achieved the highest performance (R² = 0.9996, RMSE = 0.0446), providing an effective correlation factor of 51.76 for force estimation. The system offers robust performance in complex interactions and a scalable solution for soft robotics and prosthetic force mapping, supporting health monitoring, safe automation, and medical diagnostics.
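The localization stage named in the abstract (Otsu's thresholding followed by Connected Component Labeling) can be sketched on a synthetic 6×6 capacitance frame. The frame values, histogram bin count, and 4-connectivity choice below are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 64) -> float:
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                              # class-0 weight per split
    w1 = 1 - w0
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[int(np.argmax(between))]

def label_components(mask: np.ndarray) -> np.ndarray:
    """4-connected component labeling via flood fill (0 = background)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels

# Synthetic 6x6 mutual-capacitance ratio frame with one pressed 2x2 region.
frame = np.full((6, 6), 0.05)
frame[1:3, 1:3] = 0.9                        # contact cluster
t = otsu_threshold(frame.ravel())
labels = label_components(frame > t)
ys, xs = np.nonzero(labels == 1)
print("centroid:", ys.mean(), xs.mean())     # force-point estimate
```

The cluster centroid is the force-point estimate that a tracker could then follow frame to frame.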

14 pages, 636 KiB  
Review
Technical Innovations and Complex Cases in Robotic Surgery for Lung Cancer: A Narrative Review
by Giacomo Cusumano, Giuseppe Calabrese, Filippo Tommaso Gallina, Francesco Facciolo, Pierluigi Novellis, Giulia Veronesi, Stefano Viscardi, Filippo Lococo, Elisa Meacci, Alberto Terminella, Gaetano Romano, Cristina Zirafa, Franca Melfi, Stefano Margaritora and Marco Chiappetta
Curr. Oncol. 2025, 32(5), 244; https://doi.org/10.3390/curroncol32050244 - 22 Apr 2025
Viewed by 1020
Abstract
For over two decades, robotic-assisted thoracic surgery (RATS) has revolutionized thoracic oncology. With enhanced visualization, dexterity, and precision, RATS has reduced blood loss, shortened hospital stays, and sped up recovery compared to traditional surgery or video-assisted thoracoscopic surgery (VATS). The use of 3D high-definition imaging and articulated instruments allows for complex resections and advanced lymph node assessment. RATS delivers oncological outcomes similar to open surgery and VATS, with high rates of complete (R0) resections and acceptable complication rates. Its minimally invasive nature promotes quicker recovery. Advances in imaging software and augmented reality further enhance surgical accuracy and reduce intraoperative risks. However, RATS has some limitations, including high costs and a lack of tactile feedback, and certain complex procedures, such as extended resections and intrapericardial interventions, remain challenging. With growing experience and technological advances, RATS shows promise in reducing morbidity, improving quality of life, and expanding access to advanced oncologic care. This article reviews the evolution, benefits, and limitations of RATS in non-small cell lung cancer (NSCLC) treatment, highlighting its emerging role in managing complex cases.
(This article belongs to the Section Thoracic Oncology)

21 pages, 2667 KiB  
Article
Synthetic Tactile Sensor for Macroscopic Roughness Estimation Based on Spatial-Coding Contact Processing
by Muhammad Irwan Yanwari and Shogo Okamoto
Sensors 2025, 25(8), 2598; https://doi.org/10.3390/s25082598 - 20 Apr 2025
Viewed by 564
Abstract
Traditional tactile sensors primarily measure macroscopic surface features but do not directly estimate how humans perceive such surface roughness. Sensors that mimic human tactile processing could bridge this gap. This study proposes a method for predicting macroscopic roughness perception based on a sensing principle that closely resembles human tactile information processing. Humans are believed to assess macroscopic roughness based on the spatial distribution of subcutaneous deformation and resultant neural activities when touching a textured surface. To replicate this spatial-coding mechanism, we captured distributed contact information using a camera through a flexible, transparent material with fingerprint-like surface structures, simulating finger skin. Images were recorded under varying contact forces ranging from 1 N to 3 N. The spatial frequency components in the range of 0.1–1.0 mm⁻¹ were extracted from these contact images, and a linear combination of these components was used to approximate human roughness perception recorded via the magnitude estimation method. The results indicate that for roughness specimens with rectangular or circular protrusions of surface wavelengths between 2 and 5 mm, the estimated roughness values achieved an average error comparable to the standard deviation of participants’ roughness ratings. These findings demonstrate the potential of macroscopic roughness estimation based on human-like tactile information processing and highlight the viability of vision-based sensing in replicating human roughness perception.
(This article belongs to the Special Issue Recent Development of Flexible Tactile Sensors and Their Applications)
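The band-limited spatial-frequency feature extraction described above can be illustrated on a one-dimensional contact profile. The sampling resolution, the synthetic 3 mm-wavelength profile, and the unit weights (standing in for the paper's fitted linear combination) are all assumptions:

```python
import numpy as np

MM_PER_SAMPLE = 0.1        # assumed spatial resolution of the contact image
N = 512

x_mm = np.arange(N) * MM_PER_SAMPLE
# Synthetic profile: 3 mm-wavelength ridges (within the 2-5 mm macro range).
profile = np.sin(2 * np.pi * x_mm / 3.0)

spectrum = np.abs(np.fft.rfft(profile)) / N
freqs = np.fft.rfftfreq(N, d=MM_PER_SAMPLE)      # cycles per mm

band = (freqs >= 0.1) & (freqs <= 1.0)           # keep 0.1-1.0 mm^-1 components
features = spectrum[band]

# Hypothetical equal weights in place of the fitted linear combination.
weights = np.ones_like(features)
roughness_estimate = float(weights @ features)
print(round(roughness_estimate, 3))
```

In the actual method the weights would be regressed against human magnitude-estimation ratings; here they merely show where the band-limited features enter the linear model.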

16 pages, 7388 KiB  
Article
Identification of Brain Activation Areas in Response to Active Tactile Stimulation by Gripping a Stress Ball
by Kei Sasaki, Noriko Sakurai, Nobukiyo Yoshida, Misuzu Oishi, Satoshi Kasai and Naoki Kodama
Brain Sci. 2025, 15(3), 264; https://doi.org/10.3390/brainsci15030264 - 28 Feb 2025
Viewed by 1276
Abstract
Background/Objectives: Research on pleasant tactile perception has primarily focused on C-tactile fibers found in hairy skin, with the forearm and face as common study sites. Recent findings of these fibers in hairless skin, such as the palms, have sparked interest in tactile stimulation on the hands. While studies have examined comfort and brain activity in passive touch, active touch remains underexplored. This study aimed to investigate differences in pleasant sensation and brain activity during active touch with stress balls of varying hardness. Methods: Forty healthy women participated. Using functional magnetic resonance imaging (fMRI), brain activity was measured as participants alternated between gripping stress balls of three hardness levels (soft, medium, and hard) and resting without a ball. Participants rated hardness and comfort on a 9-point scale. Results: Soft stress balls were perceived as soft and comfortable, activating the thalamus and left insular cortex while reducing activity in the right insular cortex. Medium stress balls elicited similar perceptions and thalamic activation but with reduced right insular cortex activity. Hard stress balls caused discomfort, activating the insular cortex, thalamus, and amygdala while reducing anterior cingulate cortex activity. Conclusions: Soft stress balls may reduce aversive stimuli through perceived comfort, while hard stress balls may induce discomfort and are unlikely to alleviate stress.
(This article belongs to the Section Neuropsychology)

13 pages, 35894 KiB  
Article
An Artificial Intelligence Approach to the Craniofacial Recapitulation of Crisponi/Cold-Induced Sweating Syndrome 1 (CISS1/CISS) from Newborns to Adolescent Patients
by Giulia Pascolini, Dario Didona and Luigi Tarani
Diagnostics 2025, 15(5), 521; https://doi.org/10.3390/diagnostics15050521 - 21 Feb 2025
Viewed by 904
Abstract
Background/Objectives: Crisponi/cold-induced sweating syndrome 1 (CISS1/CISS, MIM#272430) is a genetic disorder due to biallelic variants in CRLF1 (MIM*604237). The related phenotype is mainly characterized by abnormal thermoregulation and sweating, facial muscle contractions in response to tactile and crying-inducing stimuli at an early age, skeletal anomalies (camptodactyly of the hands, scoliosis), and craniofacial dysmorphisms, comprising full cheeks, micrognathia, high and narrow palate, low-set ears, and a depressed nasal bridge. The condition is associated with high lethality during the neonatal period and can benefit from timely symptomatic therapy. Methods: We collected frontal images of all patients with CISS1/CISS published to date, which were analyzed with Face2Gene (F2G), a machine-learning technology for the facial diagnosis of syndromic phenotypes. In total, 75 portraits were subdivided into three cohorts, based on age (Cohorts 1 and 2) and the presence of the typical facial trismus (Cohort 3). These portraits were uploaded to F2G to test their suitability for facial analysis and to verify the capacity of the AI tool to correctly recognize the syndrome based on the facial features only. The photos which passed this phase (62 images) were fed to three different AI algorithms—DeepGestalt, Facial D-Score, and GestaltMatcher. Results: The DeepGestalt algorithm results, including the correct diagnosis using a frontal portrait, suggested a similar facial phenotype in the first two cohorts. Cohort 3 seemed to be highly differentiable. The results were expressed in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and p-value. The Facial D-Score values indicated the presence of a consistent degree of dysmorphic signs in the three cohorts, which was also confirmed by the GestaltMatcher algorithm. Interestingly, the latter allowed us to identify overlapping genetic disorders.
Conclusions: This is the first AI-powered image analysis in defining the craniofacial contour of CISS1/CISS and in determining the feasibility of training the tool used in its clinical recognition. The obtained results showed that the use of F2G can provide valid support in the diagnostic process of CISS1/CISS, especially in more severe phenotypes, manifesting with facial contractions and potentially lethal consequences.

17 pages, 8641 KiB  
Article
Image-Based Tactile Deformation Simulation and Pose Estimation for Robot Skill Learning
by Chenfeng Fu, Longnan Li, Yuan Gao, Weiwei Wan, Kensuke Harada, Zhenyu Lu and Chenguang Yang
Appl. Sci. 2025, 15(3), 1099; https://doi.org/10.3390/app15031099 - 22 Jan 2025
Viewed by 1351
Abstract
The TacTip is a cost-effective, 3D-printed optical tactile sensor commonly used in deep learning and reinforcement learning for robotic manipulation. However, its specialized structure, which combines soft materials of varying hardnesses, makes it challenging to simulate the distribution of numerous printed markers on pins. This paper aims to create an interpretable, AI-applicable simulation of the deformation of TacTip under varying pressures and interactions with different objects, addressing the black-box nature of learning and simulation in haptic manipulation. The research focuses on simulating the TacTip sensor’s shape using a fully tunable, chain-based mathematical model, refined through comparisons with real-world measurements. We integrated the WRS system with our theoretical model to evaluate its effectiveness in object pose estimation. The results demonstrated that the prediction accuracy for all markers across a variety of contact scenarios exceeded 92%.
(This article belongs to the Special Issue Recent Advances in Autonomous Systems and Robotics, 2nd Edition)
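A toy stand-in for marker-displacement simulation (not the paper's chain-based model) can illustrate the basic idea of predicting printed-marker positions under load: markers are pushed radially away from the contact point with a magnitude that grows with force and decays with distance. The Gaussian displacement law, stiffness, and spread values below are invented for illustration:

```python
import numpy as np

# 2D marker grid standing in for TacTip's printed pins (toy stand-in).
xs, ys = np.meshgrid(np.linspace(-5, 5, 7), np.linspace(-5, 5, 7))
markers = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (49, 2) rest positions, mm

def deform(markers, contact_xy, force_n, stiffness=2.0, spread=3.0):
    """Push markers radially away from the contact point; displacement grows
    with force and decays as a Gaussian of distance (assumed toy law)."""
    d = markers - contact_xy
    r = np.linalg.norm(d, axis=1, keepdims=True)
    r = np.where(r == 0, 1e-9, r)                 # avoid division by zero
    mag = (force_n / stiffness) * np.exp(-(r / spread) ** 2)
    return markers + mag * d / r

moved = deform(markers, np.array([0.0, 0.0]), force_n=3.0)
disp = np.linalg.norm(moved - markers, axis=1)
print("max displacement (mm):", disp.max().round(3))
```

A learned or chain-based model would replace `deform` with physically grounded kinematics, but the interface (rest positions in, displaced marker positions out) is the same one a pose-estimation pipeline consumes.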

18 pages, 2572 KiB  
Review
Deep Learning Approaches for 3D Model Generation from 2D Artworks to Aid Blind People with Tactile Exploration
by Rocco Furferi
Heritage 2025, 8(1), 12; https://doi.org/10.3390/heritage8010012 - 28 Dec 2024
Viewed by 2518
Abstract
An effective method to enable the enjoyment of works of art by the blind is to reproduce tactile copies of the work, to facilitate tactile exploration. This is even more important when it comes to paintings, which are inherently not accessible to the blind unless they are transformed into 3D models. Today, artificial intelligence techniques are rapidly growing and represent a paramount method for solving a variety of previously hard-to-solve tasks. It is, therefore, presumable that the translation from 2D images to 3D models using such methods will also continue to develop. Unfortunately, reconstructing a 3D model from a single image, especially when it comes to painting-based images, is an ill-posed problem due to the depth ambiguity and the lack of a ground truth for the 3D model. To confront this issue, this paper provides an overview of artificial intelligence-based methods for reconstructing 3D geometry from a single image. The survey explores the potentiality of Convolutional Neural Networks, Generative Adversarial Networks, Variational Autoencoders, and zero-shot methods. Through a small set of case studies, the capabilities and limitations of CNNs in creating a 3D-scene model from artworks are also examined. The findings suggest that, while deep learning models demonstrate that they are effective for 3D retrieval from paintings, they also call for post-processing and user interaction to improve the accuracy of the 3D models.
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)

15 pages, 12297 KiB  
Article
Enhancing Accessibility: Automated Tactile Graphics Generation for Individuals with Visual Impairments
by Yehor Dzhurynskyi, Volodymyr Mayik and Lyudmyla Mayik
Computation 2024, 12(12), 251; https://doi.org/10.3390/computation12120251 - 23 Dec 2024
Cited by 1 | Viewed by 1186
Abstract
This study addresses the accessibility challenges faced by individuals with visual impairments due to limited access to graphic information, which significantly impacts their educational and social integration. Traditional methods for producing tactile graphics are labor-intensive and require specialized expertise, limiting their availability. Recent advancements in generative models, such as GANs, diffusion models, and VAEs, offer potential solutions to automate the creation of tactile images. In this work, we propose a novel generative model conditioned on text prompts, integrating a Bidirectional and Auto-Regressive Transformer (BART) and Vector Quantized Variational Auto-Encoder (VQ-VAE). This model transforms textual descriptions into tactile graphics, addressing key requirements for legibility and accessibility. The model’s performance was evaluated using cross-entropy, perplexity, mean square error, and CLIP Score metrics, demonstrating its ability to generate high-quality, customizable tactile images. Testing with educational and rehabilitation institutions confirmed the practicality and efficiency of the system, which significantly reduces production time and requires minimal operator expertise. The proposed approach enhances the production of inclusive educational materials, enabling improved access to quality education and fostering greater independence for individuals with visual impairments. Future research will focus on expanding the training dataset and refining the model for complex scenarios.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health)
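The VQ-VAE bottleneck at the heart of such a text-to-tactile pipeline can be sketched as a nearest-neighbor lookup into a learned codebook: the encoder's continuous outputs become discrete tokens that a BART-style sequence model can predict from text. Codebook size, code dimension, and the random inputs here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Codebook of the vector quantizer (sizes are illustrative assumptions).
K, D = 8, 4                     # number of codes, code dimension
codebook = rng.normal(size=(K, D))

def quantize(z_e: np.ndarray):
    """Map each encoder output vector to its nearest codebook entry,
    returning discrete indices and quantized vectors (VQ-VAE bottleneck)."""
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

z_e = rng.normal(size=(5, D))   # pretend encoder outputs for 5 image patches
idx, z_q = quantize(z_e)
print(idx)                      # discrete tokens a text-conditioned prior models
```

At generation time the flow runs in reverse: the text model emits token indices, the codebook maps them to vectors, and the VQ-VAE decoder renders the tactile image.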

25 pages, 19201 KiB  
Article
Efficient Cow Body Condition Scoring Using BCS-YOLO: A Lightweight, Knowledge Distillation-Based Method
by Zhiqiang Zheng, Zhuangzhuang Wang and Zhi Weng
Animals 2024, 14(24), 3668; https://doi.org/10.3390/ani14243668 - 19 Dec 2024
Viewed by 1604
Abstract
Monitoring the body condition of dairy cows is essential for ensuring their health and productivity, but traditional body condition scoring (BCS) methods—relying on visual or tactile assessments by skilled personnel—are subjective, labor-intensive, and impractical for large-scale farms. To overcome these limitations, we present BCS-YOLO, a lightweight and automated BCS framework built on YOLOv8, which enables consistent, accurate scoring under complex conditions with minimal computational resources. BCS-YOLO integrates the Star-EMA module and the Star Shared Lightweight Detection Head (SSLDH) to enhance the detection accuracy and reduce model complexity. The Star-EMA module employs multi-scale attention mechanisms that balance spatial and semantic features, optimizing feature representation for cow hindquarters in cluttered farm environments. SSLDH further simplifies the detection head, making BCS-YOLO viable for deployment in resource-limited scenarios. Additionally, channel-based knowledge distillation generates soft probability maps focusing on key body regions, facilitating effective knowledge transfer and enhancing performance. The results on a public cow image dataset show that BCS-YOLO reduces the model size by 33% and improves the mean average precision (mAP) by 9.4%. These advances make BCS-YOLO a robust, non-invasive tool for consistent and accurate BCS in large-scale farming, supporting sustainable livestock management, reducing labor costs, enhancing animal welfare, and boosting productivity.
(This article belongs to the Section Animal System and Management)
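Channel-based distillation with soft probability maps, as described above, is commonly implemented as a per-channel spatial softmax followed by a KL divergence between teacher and student maps. The sketch below shows that generic formulation; the shapes and temperature are chosen arbitrarily and are not BCS-YOLO's actual configuration:

```python
import numpy as np

def channel_soft_map(feat: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Per-channel spatial softmax: each channel becomes a probability map
    highlighting where that channel responds."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w) / tau
    flat = flat - flat.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(flat)
    return p / p.sum(axis=1, keepdims=True)

def distill_loss(teacher: np.ndarray, student: np.ndarray, tau: float = 1.0) -> float:
    """KL divergence between teacher and student soft maps, summed over channels."""
    pt = channel_soft_map(teacher, tau)
    ps = channel_soft_map(student, tau)
    return float((pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum())

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 8, 8))   # teacher feature maps (C, H, W)
s = rng.normal(size=(4, 8, 8))   # student feature maps
print(distill_loss(t, s) > 0, distill_loss(t, t) < 1e-9)
```

The loss is zero when the student's spatial response distribution matches the teacher's, which is what pushes the lightweight student to attend to the same body regions.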

12 pages, 4114 KiB  
Review
Painful Legs and Moving Toes Syndrome: Case Report and Review
by Mihael Tsalta-Mladenov, Vladina Dimitrova and Silva Andonova
Neurol. Int. 2024, 16(6), 1343-1354; https://doi.org/10.3390/neurolint16060102 - 4 Nov 2024
Viewed by 2731
Abstract
Introduction: Painful legs and moving toes (PLMT) syndrome is a rare movement disorder characterized by diffuse lower limb neuropathic pain and spontaneous abnormal, involuntary toe movements. Objective: The objective was to present a rare case of PLMT syndrome with a triggering area in an adult patient due to multilevel discogenic pathology, to make a thorough review of this disorder and to provide a practical approach to its management. Case presentation: A 59-year-old male was admitted to the neurology ward with symptoms of diffuse pain in the lower back and the right leg accompanied by involuntary movements of the right toes intensified by tactile stimulation in the right upper thigh. Magnetic resonance imaging (MRI) revealed a multilevel discogenic pathology of the lumbar and cervical spine, with myelopathy at the C5–C7 level. Treatment with Pregabalin 300 mg/daily significantly improved both the abnormal toe movements and the leg pain. The clinical effect was constant during the 90-day follow-up without any adverse effects. Conclusion: Painful legs and moving toes (PLMT) is a condition that greatly affects the quality of life of patients, but which still remains less known by clinicians. Spontaneous resolution is rare, and oral medications are the first-line treatment. Pregabalin is a safe and effective treatment option for PLMT that should be considered early for the management of this condition. Other medication interventions, such as botulinum toxin injections, spinal blockade, or non-pharmacological treatment options like spinal cord stimulation, and surgical decompressions, are also recommended when the conservative treatment is ineffective in well-selected patients.
(This article belongs to the Special Issue New Insights into Movement Disorders)

17 pages, 6147 KiB  
Article
Tactile Simultaneous Localization and Mapping Using Low-Cost, Wearable LiDAR
by John LaRocco, Qudsia Tahmina, John Simonis, Taylor Liang and Yiyao Zhang
Hardware 2024, 2(4), 256-272; https://doi.org/10.3390/hardware2040012 - 29 Sep 2024
Viewed by 1768
Abstract
Tactile maps are widely recognized as useful tools for mobility training and the rehabilitation of visually impaired individuals. However, current tactile maps lack real-time versatility and are limited because of high manufacturing and design costs. In this study, we introduce a device (i.e., ClaySight) that enables automatic tactile map generation, as well as a model for wearable devices that use low-cost laser imaging, detection, and ranging (LiDAR), to improve the immediate spatial knowledge of visually impaired individuals. Our system uses LiDAR sensors to (1) produce affordable, low-latency tactile maps, (2) function as a day-to-day wayfinding aid, and (3) provide interactivity using a wearable device. The system comprises a dynamic mapping and scanning algorithm and an interactive handheld 3D-printed device that houses the hardware. Our algorithm accommodates user specifications to dynamically interact with objects in the surrounding area and create map models that can be represented with haptic feedback or alternative tactile systems. Using economical components and open-source software, the ClaySight system has significant potential to enhance independence and quality of life for the visually impaired.
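One way to picture the scan-to-tactile-map step is to rasterize a polar LiDAR sweep into a coarse occupancy grid whose occupied cells would be rendered as raised dots. The grid size, cell scale, and the synthetic sweep below are invented for illustration and are not ClaySight's actual parameters:

```python
import math

GRID = 9            # 9x9 tactile cells
CELL_M = 0.5        # each cell covers 0.5 m (assumed tactile-map scale)

def ranges_to_grid(ranges_m, max_range=4.0):
    """Convert one polar LiDAR sweep (index -> angle, value -> range in m)
    into a coarse occupancy grid; occupied cells become tactile bumps."""
    grid = [[0] * GRID for _ in range(GRID)]
    cx = cy = GRID // 2                      # sensor at the grid center
    n = len(ranges_m)
    for i, r in enumerate(ranges_m):
        if r is None or r > max_range:
            continue                         # no return within range
        theta = 2 * math.pi * i / n
        x = cx + int(round(r * math.cos(theta) / CELL_M))
        y = cy + int(round(r * math.sin(theta) / CELL_M))
        if 0 <= x < GRID and 0 <= y < GRID:
            grid[y][x] = 1                   # obstacle -> raised dot
    return grid

# Wall 1 m ahead across a narrow arc, nothing elsewhere.
sweep = [1.0 if i < 5 else None for i in range(72)]
g = ranges_to_grid(sweep)
print(sum(map(sum, g)), "raised cells")
```

A real pipeline would accumulate sweeps over time and apply the user's scale preferences before driving the haptic display, but the polar-to-grid rasterization is the core mapping step.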
