Search Results (6,597)

Search Parameters:
Keywords = camera system

17 pages, 1473 KB  
Article
AI-Driven Firmness Prediction of Kiwifruit Using Image-Based Vibration Response Analysis
by Seyedeh Fatemeh Nouri, Saman Abdanan Mehdizadeh and Yiannis Ampatzidis
Sensors 2025, 25(17), 5279; https://doi.org/10.3390/s25175279 - 25 Aug 2025
Abstract
Accurate and non-destructive assessment of fruit firmness is critical for evaluating quality and ripeness, particularly in postharvest handling and supply chain management. This study presents the development of an image-based vibration analysis system for evaluating the firmness of kiwifruit using computer vision and machine learning. In the proposed setup, 120 kiwifruits were subjected to controlled excitation in the frequency range of 200–300 Hz using a vibration motor. A digital camera captured surface displacement over time (for 20 s), enabling the extraction of key dynamic features, namely the damping coefficient (a measure of the material's ability to dissipate energy) and the natural frequency (the first peak in the frequency spectrum), through image processing techniques. Results showed that firmer fruits exhibited higher natural frequencies and lower damping, while softer, riper fruits showed the opposite trend. These vibration-based features were then used as inputs to a feed-forward backpropagation neural network to predict fruit firmness. The network consisted of an input layer with two neurons (damping coefficient and natural frequency), a hidden layer with ten neurons, and an output layer representing firmness. The model demonstrated strong predictive performance, with a coefficient of determination (R²) of 0.9951 and a root mean square error (RMSE) of 0.0185. This study confirms the feasibility of combining vibration-induced image data with machine learning for non-destructive firmness evaluation. The proposed method provides a reliable and efficient alternative to traditional firmness testing techniques and offers potential for real-time implementation in automated grading and quality-control systems for kiwifruit and other fruit types.
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture: 2nd Edition)
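A minimal sketch of the described network topology, assuming scikit-learn's MLPRegressor as the feed-forward backpropagation learner with two inputs (damping coefficient, natural frequency), ten hidden neurons, and one firmness output; the feature values and target below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical features for 120 fruits: [damping coefficient, natural frequency (Hz)]
X = rng.uniform([0.01, 200.0], [0.20, 300.0], size=(120, 2))
# Hypothetical firmness: rises with natural frequency, falls with damping
y = 0.5 * (X[:, 1] - 200.0) / 100.0 - 2.0 * X[:, 0] + rng.normal(0.0, 0.01, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```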
22 pages, 780 KB  
Systematic Review
Non-Invasive Human-Free Diagnosis Methods for Assessing Pig Welfare at Abattoirs: A Systematic Review
by Maria Francisca Ferreira, Márcia Nunes and Madalena Vieira-Pinto
Animals 2025, 15(17), 2500; https://doi.org/10.3390/ani15172500 - 25 Aug 2025
Abstract
The assessment of pig welfare and health at abattoirs is crucial for ensuring both animal well-being and food safety. Traditional assessment methods often rely on human observation, which is time-consuming, subjective, and difficult to scale in high-throughput facilities. This systematic review addresses a critical gap by identifying and evaluating non-invasive human-free diagnostic methods applicable in commercial settings. Following PRISMA guidelines, a total of 102 articles met the inclusion criteria. Thirteen distinct methods were identified and classified into three categories: biological sample analysis (5 methods; n = 80 articles), imaging and computer vision systems (4 methods; n = 19), and physiological and other sensors (4 methods; n = 24). Some articles assessed more than one method and are therefore counted in multiple categories. While no method achieved both high implementation and practicality, blood analysis for glucose and lactate, convolutional neural networks for lesion detection, and automated camera-based systems emerged as the most promising for practical integration into the slaughter flowline. Most techniques still face challenges related to automation, operator independence, and standardisation. Overall, this review highlights the growing potential of non-invasive methods in pig welfare evaluation and underscores the need for continued development and validation to facilitate their adoption into routine abattoir practices.

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems.
(This article belongs to the Section Computer Vision and Pattern Recognition)
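As a rough illustration of such a pipeline, the sketch below fine-tunes and runs a YOLO detector with the ultralytics package; the checkpoint name, dataset YAML, and confidence threshold are assumptions rather than the paper's released artifacts.

```python
from ultralytics import YOLO

# Start from a pretrained checkpoint (assumed name) and fine-tune on a dataset
# whose YAML lists the five classes: Non-Defective, Dust, Defective,
# Physical Damage, Snow.
model = YOLO("yolo11n.pt")
model.train(data="solar_panels.yaml", epochs=100, imgsz=640)

# Inference on a single frame from a camera stream (hypothetical file)
for result in model.predict("panel_frame.jpg", conf=0.25):
    for box in result.boxes:
        print(result.names[int(box.cls)], float(box.conf))
```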

20 pages, 3402 KB  
Article
Real-Time Monitoring of 3D Printing Process by Endoscopic Vision System Integrated in Printer Head
by Martin Kondrat, Anastasiia Nazim, Kamil Zidek, Jan Pitel, Peter Lazorík and Michal Duhancik
Appl. Sci. 2025, 15(17), 9286; https://doi.org/10.3390/app15179286 - 24 Aug 2025
Abstract
This study investigates the real-time monitoring of 3D printing using an endoscopic camera system integrated directly into the print head. The embedded endoscope enables continuous observation of the area surrounding the extruder, facilitating real-time inspection of the currently printed layers. A convolutional neural network (CNN) is employed to analyse captured images in the direction of print progression, enabling the detection of common defects such as stringing, layer shifting, and inadequate first-layer adhesion. The primary innovation of this work lies in its capacity for online quality assessment and immediate classification of print integrity within predefined thresholds. This system allows for the prompt termination of printing in the case of critical faults or dynamic adjustment of printing parameters in response to minor anomalies. The proposed solution offers a novel pathway for optimising additive manufacturing through real-time feedback on layer formation.
(This article belongs to the Special Issue Real-Time Detection in Additive Manufacturing)
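A hedged sketch of what a frame-level defect classifier for this task could look like in PyTorch; the layer sizes and class list are illustrative, since the paper's exact CNN is not reproduced here.

```python
import torch
import torch.nn as nn

CLASSES = ["ok", "stringing", "layer_shift", "poor_first_layer_adhesion"]

# Small illustrative CNN: two conv blocks, global pooling, linear classifier
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

frame = torch.randn(1, 3, 224, 224)   # dummy RGB endoscope frame
probs = model(frame).softmax(dim=1)
print("predicted class:", CLASSES[int(probs.argmax())])
```

In deployment, a critical class above a confidence threshold would trigger print termination, while minor anomalies would feed parameter adjustment.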

17 pages, 16817 KB  
Article
Design and Implementation of an Autonomous Mobile Robot for Object Delivery via Homography-Based Visual Servoing
by Jung-Shan Lin, Yen-Che Hsiao and Jeih-Weih Hung
Future Internet 2025, 17(9), 379; https://doi.org/10.3390/fi17090379 - 24 Aug 2025
Abstract
This paper presents the design and implementation of an autonomous mobile robot system able to deliver objects from one location to another with minimal hardware requirements. Unlike most existing systems, our robot uses only a single camera—mounted on its robotic arm—to guide both its movements and the pick-and-place process. The robot detects target signs and objects, automatically navigates to desired locations, and accurately grasps and delivers items without the need for complex sensor arrays or multiple cameras. The main innovation of this work is a unified visual control strategy that coordinates both the vehicle and the robotic arm through homography-based visual servoing. Our experimental results demonstrate that the system can reliably locate, pick up, and place objects, achieving a high success rate in real-world tests. This approach offers a simple yet effective solution for object delivery tasks and lays the groundwork for practical, cost-efficient mobile robots in automation and logistics.
(This article belongs to the Special Issue Mobile Robotics and Autonomous System)
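A minimal sketch of the homography machinery behind such servoing, using OpenCV: estimate H from matched points between a reference view and the current view, then decompose it into candidate camera motions. The correspondences and intrinsics below are placeholders, not the paper's calibration.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical matched feature points (reference view -> current view)
pts_ref = np.float32([[100, 100], [400, 110], [390, 380], [120, 370]])
pts_cur = np.float32([[130, 120], [420, 115], [410, 400], [140, 395]])

H, _ = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC)
n, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(f"{n} candidate (R, t) motions")   # ambiguity resolved with extra cues
```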

18 pages, 15231 KB  
Article
Stereo Vision-Based Underground Muck Pile Detection for Autonomous LHD Bucket Loading
by Emilia Hennen, Adam Pekarski, Violetta Storoschewich and Elisabeth Clausen
Sensors 2025, 25(17), 5241; https://doi.org/10.3390/s25175241 - 23 Aug 2025
Abstract
To increase the safety and efficiency of underground mining processes, it is important to advance automation. An important part of that is achieving autonomous material loading using load–haul–dump (LHD) machines. To autonomously load material from a muck pile, it is crucial to first detect and characterize the pile in terms of spatial configuration and geometry. Currently, the technologies available on the market that do not require an operator at the stope are only applicable in specific mine layouts, or they rely on 2D camera images of the surroundings observed from a control room for teleoperation; because these images lack depth information, estimating distances is difficult. This work presents a novel approach to muck pile detection developed as part of the EU-funded Next Generation Carbon Neutral Pilots for Smart Intelligent Mining Systems (NEXGEN SIMS) project. It uses a stereo camera mounted on an LHD to gather three-dimensional data of the surroundings. By applying a topological algorithm, a muck pile can be located and its overall shape determined. The system can detect and segment muck piles while the machine drives towards them at full speed. The detected position and shape of the muck pile can then be used to determine an optimal attack point for the machine. This sensor solution was integrated into a complete system for autonomous loading with an LHD and tested in two different underground mines, where the machines reliably loaded material without human intervention.
(This article belongs to the Section Sensing and Imaging)
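A compact sketch of the stereo-depth step such a system depends on, assuming a rectified image pair and illustrative calibration values; OpenCV's semi-global block matcher stands in for whatever matcher the authors used.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

fx, baseline_m = 700.0, 0.30           # assumed focal length (px) and baseline (m)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = fx * baseline_m / disparity[valid]   # depth from disparity
print("median scene depth [m]:", float(np.median(depth_m[valid])))
```

From such a depth map, the pile's 3D points can be segmented and its surface shape estimated when choosing an attack point.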

20 pages, 4993 KB  
Article
Automated IoT-Based Monitoring of Industrial Hemp in Greenhouses Using Open-Source Systems and Computer Vision
by Carmen Rocamora-Osorio, Fernando Aragon-Rodriguez, Ana María Codes-Alcaraz and Francisco-Javier Ferrández-Pastor
AgriEngineering 2025, 7(9), 272; https://doi.org/10.3390/agriengineering7090272 - 22 Aug 2025
Abstract
Monitoring the development of greenhouse crops is essential for optimising yield and ensuring the efficient use of resources. A system for monitoring hemp (Cannabis sativa L.) cultivation under greenhouse conditions using computer vision has been developed. This system is based on open-source automation software installed on a single-board computer. It integrates various temperature and humidity sensors and surveillance cameras, automating image capture. Hemp seeds of the Tiborszallasi variety were sown. After germination, plants were transplanted into pots, and five specimens were selected for growth monitoring by image analysis. A surveillance camera was placed in front of each plant. Different approaches were applied to analyse growth during the early stages: two traditional computer vision techniques and a deep learning algorithm. An average growth rate of 2.9 cm/day was determined, corresponding to 1.43 mm/°C·day. A mean absolute error (MAE) of 1.36 cm was obtained, and the results of the three approaches were very similar. After the first growth stage, the plants were subjected to water stress. An algorithm successfully identified healthy and stressed plants and also detected different stress levels, with an accuracy of 97%. These results demonstrate the system's potential to provide objective and quantitative information on plant growth and physiological status.
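As one plausible version of the traditional computer-vision route to plant height, the snippet below segments green foliage by color and converts the mask's vertical pixel extent to centimetres; the HSV thresholds and cm-per-pixel scale are assumptions, not the study's calibration.

```python
import cv2
import numpy as np

CM_PER_PIXEL = 0.12                        # assumed, from a calibration target

frame = cv2.imread("plant_cam1.jpg")       # hypothetical surveillance frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # assumed green range

rows = np.where(mask.any(axis=1))[0]       # image rows containing plant pixels
height_cm = (rows.max() - rows.min()) * CM_PER_PIXEL
print(f"estimated plant height: {height_cm:.1f} cm")
```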
16 pages, 3884 KB  
Article
Toward an Augmented Reality Representation of Collision Risks in Harbors
by Mario Miličević, Igor Vujović, Miro Petković and Ana Kuzmanić Skelin
Appl. Sci. 2025, 15(17), 9260; https://doi.org/10.3390/app15179260 - 22 Aug 2025
Abstract
In ports with a significant density of non-AIS vessels, there is an increased risk of collisions: physical limitations restrict the maneuverability of AIS vessels, while small vessels without AIS are unpredictable. To help prevent collisions, we propose an augmented reality system that detects vessels in a video stream and estimates their speed with a single sideways-mounted camera, with the goal of visualizing a risk-assessment cone. Speed is estimated from geometric relations between the camera and the ship, which give the distances between points over a known time interval. The most important part of the proposal is vessel speed estimation with a monocular camera, validated against laser speed measurements. This will help port authorities manage risks. The system differs from similar trials in that it uses a single stationary camera linked to the port authorities rather than to the bridge crew.
(This article belongs to the Section Marine Science and Engineering)
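A small sketch of the underlying geometry: with a calibrated image-to-sea-plane homography, two detections of the same vessel a known time apart give a travelled distance and hence a speed. The homography and pixel positions below are invented for illustration.

```python
import numpy as np

H_plane = np.array([[0.05, 0.0, -10.0],    # assumed image -> sea-plane homography
                    [0.0, 0.08, -20.0],
                    [0.0, 0.001, 1.0]])

def to_world(px):
    """Project a pixel onto the water plane (metres)."""
    p = H_plane @ np.append(px, 1.0)
    return p[:2] / p[2]

p0 = to_world(np.array([420.0, 310.0]))    # vessel position at t0
p1 = to_world(np.array([480.0, 305.0]))    # same vessel 2.0 s later
speed = np.linalg.norm(p1 - p0) / 2.0
print(f"estimated speed: {speed:.2f} m/s ({speed * 1.94384:.2f} kn)")
```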

29 pages, 2872 KB  
Article
Hybrid FEM-AI Approach for Thermographic Monitoring of Biomedical Electronic Devices
by Danilo Pratticò, Domenico De Carlo, Gaetano Silipo and Filippo Laganà
Computers 2025, 14(9), 344; https://doi.org/10.3390/computers14090344 - 22 Aug 2025
Abstract
Prolonged operation of biomedical devices may compromise electronic component integrity due to cyclic thermal stress, thereby impacting both functionality and safety. Regulatory standards require regular inspections, particularly for surgical applications, highlighting the need for efficient and non-invasive diagnostic tools. This study introduces an integrated system that combines finite element models, infrared thermographic analysis, and artificial intelligence to monitor thermal stress in printed circuit boards (PCBs) within biomedical devices. A dynamic thermal model, implemented in COMSOL Multiphysics® (version 6.2), identifies regions at high risk of thermal overload. Infrared measurements acquired through a FLIR P660 thermal camera provided experimental validation and a dataset for training a hybrid artificial intelligence system, which integrates a deep learning-based U-Net architecture for thermal anomaly segmentation with machine learning classification of heat diffusion patterns. The proposed system achieved an F1-score of 0.970 for hotspot segmentation using the U-Net architecture and an F1-score of 0.933 for the classification of heat propagation modes via a Multi-Layer Perceptron. This study contributes to the development of intelligent diagnostic tools for biomedical electronics by integrating physics-based simulation and AI-driven thermographic analysis, supporting automatic classification and localisation of thermal anomalies, real-time fault detection, and predictive maintenance strategies.
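For reference, the reported segmentation F1-score is equivalent to the Dice overlap between predicted and annotated hotspot masks; a toy computation with dummy masks:

```python
import numpy as np

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True   # predicted hotspot
gt = np.zeros((64, 64), bool);   gt[22:42, 22:42] = True     # annotated hotspot

tp = np.logical_and(pred, gt).sum()          # true-positive pixels
f1 = 2 * tp / (pred.sum() + gt.sum())        # Dice == F1 for binary masks
print(f"segmentation F1 (Dice): {f1:.3f}")
```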

23 pages, 28830 KB  
Article
Micro-Expression-Based Facial Analysis for Automated Pain Recognition in Dairy Cattle: An Early-Stage Evaluation
by Shuqiang Zhang, Kashfia Sailunaz and Suresh Neethirajan
AI 2025, 6(9), 199; https://doi.org/10.3390/ai6090199 - 22 Aug 2025
Abstract
Timely, objective pain recognition in dairy cattle is essential for welfare assurance, productivity, and ethical husbandry, yet it remains elusive because evolutionary pressure renders bovine distress signals brief and inconspicuous. Without verbal self-reporting, cows suppress overt cues, so automated vision is indispensable for on-farm triage. Although earlier systems tracked whole-body posture or static grimace scales, frame-level detection of facial micro-expressions has not been fully explored in livestock. We translate micro-expression analytics from automotive driver monitoring to the barn, linking modern computer vision with veterinary ethology. Our two-stage pipeline first detects faces and 30 landmarks using a custom You Only Look Once (YOLO) version 8-Pose network, achieving a 96.9% mean average precision (mAP) at an Intersection over Union (IoU) threshold of 0.50 for detection and 83.8% Object Keypoint Similarity (OKS) for keypoint placement. Cropped eye, ear, and muzzle patches are encoded using a pretrained MobileNetV2, generating 3840-dimensional descriptors that capture millisecond muscle twitches. Sequences of five consecutive frames are fed into a 128-unit Long Short-Term Memory (LSTM) classifier that outputs pain probabilities. On a held-out validation set of 1700 frames, the system records 99.65% accuracy and an F1-score of 0.997, with only three false positives and three false negatives. Tested on 14 unseen barn videos, it attains 64.3% clip-level accuracy (i.e., overall accuracy for the whole video clip) and 83% precision for the pain class, using a hybrid aggregation rule that combines a 30% mean probability threshold with micro-burst counting to temper false alarms. As an early exploration from our proof-of-concept study on a subset of our custom dairy farm datasets, these results show that micro-expression mining can deliver scalable, non-invasive pain surveillance across variations in illumination, camera angle, background, and individual morphology. Future work will explore attention-based temporal pooling, curriculum learning for variable window lengths, domain-adaptive fine-tuning, and multimodal fusion with accelerometry on the complete datasets to elevate performance toward clinical deployment.
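A minimal PyTorch sketch of the temporal head as described (sequences of five 3840-dimensional frame descriptors into a 128-unit LSTM with a sigmoid pain output); the weights and input are dummies, and the detector and MobileNetV2 encoder stages are omitted.

```python
import torch
import torch.nn as nn

class PainHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3840, hidden_size=128, batch_first=True)
        self.fc = nn.Linear(128, 1)

    def forward(self, x):                     # x: (batch, 5 frames, 3840 features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.fc(h[-1]))  # pain probability per clip window

clip = torch.randn(1, 5, 3840)                # dummy five-frame descriptor sequence
print("pain probability:", float(PainHead()(clip)))
```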

15 pages, 2389 KB  
Article
Development of Marker-Based Motion Capture Using RGB Cameras: A Neural Network Approach for Spherical Marker Detection
by Yuji Ohshima
Sensors 2025, 25(17), 5228; https://doi.org/10.3390/s25175228 - 22 Aug 2025
Abstract
Marker-based motion capture systems using infrared cameras (IR MoCaps) are commonly employed in biomechanical research. However, their high costs pose challenges for many institutions seeking to implement such systems. This study aims to develop a neural network (NN) model to estimate the digitized coordinates of spherical markers and to establish a lower-cost marker-based motion capture system using RGB cameras. Thirteen participants were instructed to walk at self-selected speeds while their movements were recorded with eight RGB cameras. Each participant undertook trials with 24 mm spherical markers attached to 25 body landmarks (marker trials), as well as trials without markers (non-marker trials). To generate training data, virtual markers mimicking spherical markers were randomly inserted into images from the non-marker trials. These images were then used to fine-tune a pre-trained model, resulting in an NN model capable of detecting spherical markers. The digitized coordinates inferred by the NN model were employed to reconstruct the three-dimensional coordinates of the spherical markers, which were subsequently compared with the gold standard. The mean resultant error was determined to be 2.2 mm. These results suggest that the proposed method enables fully automatic marker reconstruction comparable to that of IR MoCap, highlighting its potential for application in motion analysis.
(This article belongs to the Section Physical Sensors)
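A hedged sketch of the reconstruction step for a single marker from two calibrated views (the study used eight; a real system would solve a least-squares triangulation over all visible cameras). The projection matrices and pixel coordinates are illustrative.

```python
import cv2
import numpy as np

# Assumed 3x4 projection matrices P = K [R | t] for two cameras
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

uv1 = np.array([[320.0], [240.0]])   # marker centre in camera 1 (pixels)
uv2 = np.array([[300.0], [240.0]])   # marker centre in camera 2 (pixels)

X_h = cv2.triangulatePoints(P1, P2, uv1, uv2)   # homogeneous 4x1 point
X = (X_h[:3] / X_h[3]).ravel()
print("reconstructed marker position:", X)
```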

13 pages, 3172 KB  
Article
A Simulation Framework for Zoom-Aided Coverage Path Planning with UAV-Mounted PTZ Cameras
by Natalia Chacon Rios, Sabyasachi Mondal and Antonios Tsourdos
Sensors 2025, 25(17), 5220; https://doi.org/10.3390/s25175220 - 22 Aug 2025
Abstract
Achieving energy-efficient aerial coverage remains a significant challenge for UAV-based missions, especially over hilly terrain where consistent ground resolution is needed. Traditional solutions use changes in altitude to compensate for elevation changes, which requires a significant amount of energy. This paper presents a new coverage path planning (CPP) approach that uses real-time zoom control of a pan–tilt–zoom (PTZ) camera to keep the ground sampling distance (GSD)—the distance between two consecutive pixel centers projected onto the ground—constant without changing the UAV's altitude. The proposed algorithm adjusts the camera's focal length based on the height of the terrain, changing the altitude only when the zoom limits are reached. Simulation results on a variety of terrain profiles show that the zoom-based CPP substantially reduces flight duration and path length compared to traditional altitude-based strategies. The framework can also be used with low-cost camera systems with limited zoom capability, thereby improving operational feasibility. These findings establish a basis for further development and field validation in upcoming research phases.
(This article belongs to the Special Issue Unmanned Aerial Systems in Precision Agriculture)
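The zoom law follows directly from the GSD definition: with pixel pitch p, height above ground h, and focal length f, GSD = p * h / f, so holding GSD constant means f = p * h / GSD, clamped to the lens's zoom range. A short sketch with illustrative parameters:

```python
PIXEL_PITCH_M = 3.45e-6                 # assumed sensor pixel pitch (m)
F_MIN_M, F_MAX_M = 4.4e-3, 88.0e-3      # assumed PTZ focal-length range (m)

def focal_for_gsd(agl_m, gsd_m):
    """Focal length holding the target GSD at a given height above ground."""
    f = PIXEL_PITCH_M * agl_m / gsd_m
    return min(max(f, F_MIN_M), F_MAX_M), F_MIN_M <= f <= F_MAX_M

# Terrain rises under a fixed-altitude flight: AGL shrinks, so zoom out
for agl in (120.0, 90.0, 60.0):
    f, within_limits = focal_for_gsd(agl, gsd_m=0.02)   # 2 cm/pixel target
    print(f"AGL {agl:5.1f} m -> f = {f * 1000:.1f} mm (within limits: {within_limits})")
```

Only when within_limits turns false would the planner fall back to an altitude change, which is the energy-saving idea of the approach.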

22 pages, 3688 KB  
Article
Assessing Birds of Prey as Biological Pest Control: A Comparative Study with Hunting Perches and Rodenticides on Rodent Activity and Crop Health
by Naama Ronen, Anna Brook and Motti Charter
Biology 2025, 14(9), 1108; https://doi.org/10.3390/biology14091108 - 22 Aug 2025
Abstract
Rodent damage significantly affects agriculture around the world. Rodenticides can sometimes control pests, but they are costly, may cause secondary poisoning of nontarget wildlife, and can become less efficient over time due to bait shyness and resistance. Using wildlife as biological pest control agents, particularly barn owls (Tyto spp.), has been suggested as an alternative. Barn owl nest boxes and hunting perches have been added to increase predator pressure, yet few studies have examined their effectiveness. We conducted a field study in forty-five 10 × 10 m plots to compare three treatments (biological pest control by adding hunting perches, 1080 rodenticide, and control) on rodent (vole) activity and crop health (alfalfa, Medicago sativa) using unmanned aerial system (UAS) remote sensing and ground surveys. Additionally, we used 24/7 video cameras and a machine learning (YOLOv5) object detection algorithm to determine whether hunting perches increase the presence of diurnal and nocturnal raptors. Rodent activity increased during the study and did not differ among the three treatments, indicating that neither the biological pest control nor the rodenticides prevented the rodent population from increasing. Moreover, the vegetation indices clearly showed that the alfalfa became increasingly damaged over time as rodent damage rose. There were significantly more raptors in plots with hunting perches than in control plots and those treated with rodenticides. Specifically, barn owls and diurnal raptors (mainly black-shouldered kites) spent 97.92% more time on hunting perch plots than on rodenticide plots and 97.61% more time on hunting perch plots than on control plots. The number of barn owls was positively related to vole activity, indicating a bottom-up process, while the number of black-shouldered kites was unrelated to vole activity. Even though hunting perches effectively increased the presence and activity of diurnal and nocturnal raptors, rodent populations still increased. Future research should investigate whether hunting perches can increase raptor populations and improve crop health in crops beyond alfalfa, which is known to be particularly challenging for vole control.
(This article belongs to the Section Conservation Biology and Biodiversity)

15 pages, 3926 KB  
Article
Robotic Removal and Collection of Screws in Collaborative Disassembly of End-of-Life Electric Vehicle Batteries
by Muyao Tan, Jun Huang, Xingqiang Jiang, Yilin Fang, Quan Liu and Duc Pham
Biomimetics 2025, 10(8), 553; https://doi.org/10.3390/biomimetics10080553 - 21 Aug 2025
Abstract
The recycling and remanufacturing of end-of-life (EoL) electric vehicle (EV) batteries are urgent challenges for a circular economy. Disassembly is crucial for handling EoL EV batteries due to their inherent uncertainties and instability. The human–robot collaborative disassembly of EV batteries has been investigated and implemented as a semi-automated approach to increase flexibility and productivity. Unscrewing is one of the primary operations in EV battery disassembly. This paper presents a new method for robotically unfastening and collecting screws, increasing disassembly efficiency and freeing human operators from dangerous, tedious, and repetitive work. The design was inspired by how human operators unfasten and grasp screws with an electric tool when disassembling objects, together with the fusion of multimodal perception such as vision and touch. A robotic screw-disassembly system is introduced, comprising a collaborative robot, an electric spindle, a screw collection device, a 3D camera, a six-axis force/torque sensor, and other components. A process for robotic unfastening and collection of screws using position and force control is proposed. Experiments were carried out to validate the proposed method. The results demonstrate that screws in EV batteries can be automatically identified, located, unfastened, and removed, indicating the method's potential for the disassembly of EoL EV batteries.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
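A speculative sketch of the engage-and-unfasten logic the abstract outlines (position control for the approach, force/torque feedback for contact and release detection); the robot and spindle interfaces and every threshold here are hypothetical stand-ins, not the paper's implementation.

```python
ENGAGE_FORCE_N = 8.0      # assumed axial force indicating bit-screw contact
FREE_TORQUE_NM = 0.05     # assumed torque below which the screw is free

def unfasten_screw(robot, spindle, screw_xyz):
    """Hypothetical position/force-controlled unscrewing sequence."""
    robot.move_above(screw_xyz, clearance_mm=20)     # pose from the 3D camera
    while robot.axial_force() < ENGAGE_FORCE_N:      # descend until contact
        robot.step_down(mm=0.5)
    spindle.run(rpm=-300)                            # counter-clockwise
    while robot.axial_torque() > FREE_TORQUE_NM:     # torque drops when free
        robot.feed_along_tool_axis(mm_per_rev=1.0)   # follow the thread pitch
    spindle.stop()
    robot.retract(mm=30)                             # lift screw to the collector
```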

20 pages, 3686 KB  
Article
Comparative Analysis of Correction Methods for Multi-Camera 3D Image Processing System and Its Application Design in Safety Improvement on Hot-Working Production Line
by Joanna Gąbka
Appl. Sci. 2025, 15(16), 9136; https://doi.org/10.3390/app15169136 - 19 Aug 2025
Abstract
The paper presents the results of research focused on configuring a system for stereoscopic view capturing and processing. The system is being developed for use in staff training scenarios based on Virtual Reality (VR), where high-quality, distortion-free imagery is essential. This research addresses key challenges in image distortion, including the fish-eye effect and other aberrations. In addition, it considers the computational and bandwidth efficiency required for effective and economical streaming and real-time display of recorded content. Measurements and calculations were performed using a selected set of cameras, adapters, and lenses, chosen based on predefined criteria. A comparative analysis was conducted between the nearest-neighbour linear interpolation method and a third-order polynomial interpolation (ABCD polynomial). These methods were tested and evaluated using three different computational approaches, each aimed at optimizing data processing efficiency critical for real-time image correction. Images captured during real-time video transmission—processed using the developed correction techniques—are presented. In the final sections, the paper describes the configuration of an innovative VR-based training system incorporating an edge computing device. A case study involving a factory producing wheel rims is also presented to demonstrate the practical application of the system.
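To illustrate the resampling trade-off studied, the sketch below undistorts one frame twice with OpenCV, once with nearest-neighbour and once with bicubic (third-order polynomial) interpolation; the intrinsics and distortion coefficients are assumed, and OpenCV's bicubic kernel is only a stand-in for the paper's ABCD polynomial method.

```python
import cv2
import numpy as np

img = cv2.imread("stereo_left.png")                  # hypothetical captured frame
h, w = img.shape[:2]
K = np.array([[900.0, 0.0, w / 2], [0.0, 900.0, h / 2], [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])        # assumed fish-eye-like radial terms

# One undistortion map, two resampling strategies
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, (w, h), cv2.CV_32FC1)
fast = cv2.remap(img, map1, map2, interpolation=cv2.INTER_NEAREST)   # cheapest
smooth = cv2.remap(img, map1, map2, interpolation=cv2.INTER_CUBIC)   # 3rd-order

diff = np.abs(fast.astype(int) - smooth.astype(int)).mean()
print("mean per-pixel difference between methods:", float(diff))
```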