Search Results (38)

Search Parameters:
Keywords = omnidirectional vision

23 pages, 3907 KiB  
Article
Woodot: An AI-Driven Mobile Robotic System for Sustainable Defect Repair in Custom Glulam Beams
by Pierpaolo Ruttico, Federico Bordoni and Matteo Deval
Sustainability 2025, 17(12), 5574; https://doi.org/10.3390/su17125574 - 17 Jun 2025
Viewed by 395
Abstract
Defect repair on custom-curved glulam beams is still performed manually because knots are irregular, numerous, and located on elements that cannot pass through linear production lines, limiting the scalability of timber-based architecture. This study presents Woodot, an autonomous mobile robotic platform that combines an omnidirectional rover, a six-DOF collaborative arm, and a fine-tuned Segment Anything computer vision pipeline to identify, mill, and plug surface knots on geometrically variable beams. The perception model was trained on a purpose-built micro-dataset and reached an F1 score of 0.69 on independent test images, while the integrated system located defects with a 4.3 mm mean positional error. Full repair cycles averaged 74 s per knot, reducing processing time by more than 60% compared with skilled manual operations, and achieved flush plug placement in 87% of trials. These outcomes demonstrate that a lightweight AI model coupled with mobile manipulation can deliver reliable shop-floor automation for low-volume, high-variation timber production. By shortening cycle times and lowering worker exposure to repetitive tasks, Woodot offers a viable pathway to enhance the environmental, economic, and social sustainability of digital timber construction. Nevertheless, some limitations remain, such as the dependency on stable lighting conditions for optimal vision performance and the need for tool calibration checks.
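
For illustration, an F1 score like the 0.69 reported above is the harmonic mean of detection precision and recall. A minimal sketch of how such a score is commonly computed, assuming greedy one-to-one IoU matching at a 0.5 threshold (the paper's exact evaluation protocol may differ):

```python
def iou(a, b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def detection_f1(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predicted to ground-truth knot boxes."""
    matched, tp = set(), 0
    for p in preds:
        scores = [(iou(p, g), j) for j, g in enumerate(gts) if j not in matched]
        if scores:
            best, j = max(scores)
            if best >= thr:
                matched.add(j)
                tp += 1
    prec = tp / len(preds) if preds else 0.0   # unmatched predictions are FPs
    rec = tp / len(gts) if gts else 0.0        # unmatched ground truths are FNs
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```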

22 pages, 5968 KiB  
Article
The Optimization of PID Controller and Color Filter Parameters with a Genetic Algorithm for Pineapple Tracking Using an ROS2 and MicroROS-Based Robotic Head
by Carolina Maldonado-Mendez, Sergio Fabian Ruiz-Paz, Isaac Machorro-Cano, Antonio Marin-Hernandez and Sergio Hernandez-Mendez
Computation 2025, 13(3), 69; https://doi.org/10.3390/computation13030069 - 7 Mar 2025
Viewed by 838
Abstract
This work proposes a vision system mounted on the head of an omnidirectional robot to track pineapples and maintain them at the center of its field of view. The robot head is equipped with a pan–tilt unit that facilitates dynamic adjustments. The system architecture, implemented in Robot Operating System 2 (ROS2), performs the following tasks: it captures images from a webcam embedded in the robot head, segments the object of interest based on color, and computes its centroid. If the centroid deviates from the center of the image plane, a proportional–integral–derivative (PID) controller adjusts the pan–tilt unit to reposition the object at the center, enabling continuous tracking. A multivariate Gaussian function is employed to segment objects with complex color patterns, such as the body of a pineapple. The parameters of both the PID controller and the multivariate Gaussian filter are optimized using a genetic algorithm. The PID controller receives as input the (x, y) positions of the pan–tilt unit, obtained via an embedded board and MicroROS, and generates control signals for the servomotors that drive the pan–tilt mechanism. The experimental results demonstrate that the robot successfully tracks a moving pineapple. Additionally, the color segmentation filter can be further optimized to detect other textured fruits, such as soursop and melon. This research contributes to the advancement of smart agriculture, particularly for fruit crops with rough textures and complex color patterns.
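
The control loop described above is compact enough to sketch. A minimal version, assuming a precomputed Gaussian color model (mu, cov_inv, thresh stand in for the GA-optimized values) and leaving the MicroROS servo interface as hypothetical commands:

```python
import numpy as np

class PID:
    """Textbook PID; the paper tunes the gains with a genetic algorithm."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0
    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def segment_centroid(img, mu, cov_inv, thresh):
    """Centroid of pixels matching a multivariate Gaussian color model."""
    diff = img.reshape(-1, 3).astype(np.float64) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis
    ys, xs = np.nonzero((d2 < thresh).reshape(img.shape[:2]))
    return (xs.mean(), ys.mean()) if xs.size else None

pan_pid, tilt_pid = PID(0.8, 0.05, 0.1), PID(0.8, 0.05, 0.1)  # illustrative gains
# Per frame: drive the pan-tilt servos so the centroid moves to image center.
# cx, cy = img.shape[1] / 2, img.shape[0] / 2
# pan_cmd  = pan_pid.step(cx - centroid[0], dt)   # hypothetical servo command
# tilt_cmd = tilt_pid.step(cy - centroid[1], dt)
```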

14 pages, 6996 KiB  
Article
A Multilayer Perceptron-Based Spherical Visual Compass Using Global Features
by Yao Du, Carlos Mateo and Omar Tahri
Sensors 2024, 24(7), 2246; https://doi.org/10.3390/s24072246 - 31 Mar 2024
Cited by 1 | Viewed by 1381
Abstract
This paper presents a visual compass method utilizing global features, specifically spherical moments. One of the primary challenges faced by photometric methods employing global features is the variation in the image caused by the appearance and disappearance of regions within the camera’s field of view as it moves. Additionally, modeling the impact of translational motion on the values of global features poses a significant challenge, as it is dependent on scene depths, particularly for non-planar scenes. To address these issues, this paper combines the utilization of image masks to mitigate abrupt changes in global feature values and the application of neural networks to tackle the modeling challenge posed by translational motion. By employing masks at various locations within the image, multiple estimations of rotation corresponding to the motion of each selected region can be obtained. Our contribution lies in offering a rapid method for implementing numerous masks on the image with real-time inference speed, rendering it suitable for embedded robot applications. Extensive experiments have been conducted on both real-world and synthetic datasets generated using Blender. The results obtained validate the accuracy, robustness, and real-time performance of the proposed method compared to a state-of-the-art method.
(This article belongs to the Section Sensors and Robotics)
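
For intuition, spherical moments are ordinary image moments taken over viewing directions on the unit sphere. A minimal sketch for an equirectangular frame with an optional region mask, assuming intensity-weighted direction sums with a sin(θ) solid-angle correction (the paper's exact moment definitions may differ):

```python
import numpy as np

def spherical_moments(img, mask=None):
    """First-order intensity-weighted moments of viewing directions on S^2.

    img: (H, W) grayscale equirectangular image; mask: optional boolean (H, W).
    Returns the 3-vector m = sum_s I(s) * s * dOmega over unit directions s.
    """
    h, w = img.shape
    theta = (np.arange(h) + 0.5) / h * np.pi        # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi      # azimuth per column
    st, ct = np.sin(theta)[:, None], np.cos(theta)[:, None]
    s = np.stack([st * np.cos(phi)[None, :],
                  st * np.sin(phi)[None, :],
                  np.broadcast_to(ct, (h, w))], axis=-1)  # directions (H, W, 3)
    dOmega = st * (np.pi / h) * (2 * np.pi / w)           # per-pixel solid angle
    wgt = img * dOmega
    if mask is not None:
        wgt = wgt * mask     # one mask per region gives one rotation estimate
    return np.tensordot(wgt, s, axes=([0, 1], [0, 1]))
```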

25 pages, 21439 KiB  
Article
Accuracy vs. Energy: An Assessment of Bee Object Inference in Videos from On-Hive Video Loggers with YOLOv3, YOLOv4-Tiny, and YOLOv7-Tiny
by Vladimir A. Kulyukin and Aleksey V. Kulyukin
Sensors 2023, 23(15), 6791; https://doi.org/10.3390/s23156791 - 29 Jul 2023
Cited by 11 | Viewed by 2473
Abstract
A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive’s entrance. Since traffic at the hive’s entrance is a contributing factor to the hive’s productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 ranked first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints.
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture)
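
The energy efficacy metric above is a ratio of performance units to energy units. A minimal sketch with illustrative numbers only (not the paper's measurements), assuming mAP as the performance unit and kWh per monitoring period as the energy unit:

```python
def energy_efficacy(performance, energy_kwh):
    """Performance units delivered per kWh consumed by the deployed model."""
    return performance / energy_kwh

# Illustrative values: an accurate but heavy model vs. a lighter one.
print(energy_efficacy(0.80, 12.0))  # ~0.067 performance units per kWh
print(energy_efficacy(0.72, 3.0))   # 0.24: higher efficacy despite lower accuracy
```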

12 pages, 2820 KiB  
Perspective
Design and Fabrication of Broadband InGaAs Detectors Integrated with Nanostructures
by Bo Yang, Yizhen Yu, Guixue Zhang, Xiumei Shao and Xue Li
Sensors 2023, 23(14), 6556; https://doi.org/10.3390/s23146556 - 20 Jul 2023
Cited by 8 | Viewed by 3569
Abstract
A visible–extended shortwave infrared indium gallium arsenide (InGaAs) focal plane array (FPA) detector is the ideal choice for reducing the size, weight and power (SWaP) of infrared imaging systems, especially in low-light night vision and other fields that require simultaneous visible and near-infrared light detection. However, the lower quantum efficiency in the visible band has limited the extensive application of the visible–extended InGaAs FPA. Recently, optical metasurfaces have been considered a solution for high-performance semiconductor photoelectric devices due to their highly controllable manipulation of electromagnetic waves. Broadband Mie resonator arrays, such as nanocones and nanopillars designed with FDTD methods, were integrated on a back-illuminated InGaAs FPA as an AR metasurface. The visible–extended InGaAs detector was fabricated using substrate removal technology. The nanostructures integrated into the Vis-SWIR InGaAs detectors could realize a 10–20% enhancement in quantum efficiency and an 18.8% higher FPA response throughout the wavelength range of 500–1700 nm. Compared with traditional AR coatings, nanostructure integration offers advantages such as broadband high responsivity and omnidirectional antireflection, making it a promising route for future Vis-SWIR InGaAs detectors with higher image quality.
(This article belongs to the Special Issue Semiconductor Sensors towards Optoelectronic Device Applications)
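
The motivation for AR nanostructures can be seen from normal-incidence Fresnel reflection at a bare high-index surface. A back-of-the-envelope sketch assuming a representative refractive index of about 3.5 for the detector surface (the paper's actual optical design is done with FDTD):

```python
def fresnel_normal(n1, n2):
    """Normal-incidence power reflectance at an n1 -> n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

print(fresnel_normal(1.0, 3.5))  # ~0.31: roughly 31% lost at a bare surface
# A nanocone array grades the index gradually from ~1.0 to the substrate value,
# suppressing this reflection over a broad band and wide range of angles.
```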

25 pages, 6279 KiB  
Article
Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices
by Marta Rostkowska and Piotr Skrzypczyński
Sensors 2023, 23(14), 6485; https://doi.org/10.3390/s23146485 - 18 Jul 2023
Cited by 4 | Viewed by 1594
Abstract
This paper considers the task of appearance-based localization: visual place recognition from omnidirectional images obtained from catadioptric cameras. The focus is on designing an efficient neural network architecture that accurately and reliably recognizes indoor scenes on distorted images from a catadioptric camera, even in self-similar environments with few discernible features. As the target application is the global localization of a low-cost service mobile robot, the proposed solutions are optimized toward being small-footprint models that provide real-time inference on edge devices, such as Nvidia Jetson. We compare several design choices for the neural network-based architecture of the localization system and then demonstrate that the best results are achieved with embeddings (global descriptors) yielded by exploiting transfer learning and fine-tuning on a limited number of catadioptric images. We test our solutions on two small-scale datasets collected using different catadioptric cameras in the same office building. Next, we compare the performance of our system to state-of-the-art visual place recognition systems on the publicly available COLD Freiburg and Saarbrücken datasets that contain images collected under different lighting conditions. Our system compares favourably to the competitors both in terms of the accuracy of place recognition and the inference time, providing a cost- and energy-efficient means of appearance-based localization for an indoor service robot.
(This article belongs to the Special Issue Sensors for Robots II)
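
The embedding-based localization described above reduces to nearest-neighbour lookup over global descriptors. A minimal sketch using torchvision's pretrained ResNet-18 as a stand-in backbone (the paper fine-tunes its own small-footprint models on catadioptric images):

```python
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d global descriptor
backbone.eval()

@torch.no_grad()
def embed(batch):
    """L2-normalized embeddings for (N, 3, 224, 224) ImageNet-normalized images."""
    return F.normalize(backbone(batch), dim=1)

def localize(query_emb, db_embs, db_places):
    """Nearest-neighbour place lookup by cosine similarity."""
    sims = db_embs @ query_emb      # (M,) similarities against the place database
    return db_places[int(sims.argmax())]
```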

21 pages, 5824 KiB  
Article
Model-Predictive Control for Omnidirectional Mobile Robots in Logistic Environments Based on Object Detection Using CNNs
by Stefan-Daniel Achirei, Razvan Mocanu, Alexandru-Tudor Popovici and Constantin-Catalin Dosoftei
Sensors 2023, 23(11), 4992; https://doi.org/10.3390/s23114992 - 23 May 2023
Cited by 19 | Viewed by 4260
Abstract
Object detection is an essential component of autonomous mobile robotic systems, enabling robots to understand and interact with the environment. Object detection and recognition have made significant progress using convolutional neural networks (CNNs). Widely used in autonomous mobile robot applications, CNNs can quickly identify complicated image patterns, such as objects in a logistic environment. The integration of environment perception and motion control algorithms is a topic of significant research. On the one hand, this paper presents an object detector, trained on a newly acquired dataset, to better understand the robot’s environment. The model was optimized to run on the mobile platform already on the robot. On the other hand, the paper introduces a model-based predictive controller to guide an omnidirectional robot to a particular position in a logistic environment based on an object map obtained from a custom-trained CNN detector and LIDAR data. Object detection contributes to a safe, optimal, and efficient path for the omnidirectional mobile robot. In a practical scenario, we deploy a custom-trained and optimized CNN model to detect specific objects in the warehouse environment. We then evaluate, through simulation, a predictive control approach based on the objects detected by the CNN. Results are presented for object detection using a custom-trained CNN with an in-house dataset on a mobile platform and for the optimal control of the omnidirectional mobile robot.
(This article belongs to the Special Issue Vehicular Sensing for Improved Urban Mobility)
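
An omnidirectional base is holonomic, so a simple discrete model x_{k+1} = x_k + Δt·u_k already captures the essence of the planning problem. A minimal unconstrained receding-horizon sketch solved as regularized least squares (the paper's controller additionally handles constraints and the CNN/LIDAR object map):

```python
import numpy as np

def mpc_step(x0, x_goal, horizon=10, dt=0.1, u_weight=0.1):
    """One receding-horizon step for a holonomic robot x_{k+1} = x_k + dt*u_k.

    Minimizes sum_k ||x_k - x_goal||^2 + u_weight*||u_k||^2 (unconstrained)
    and returns only the first velocity command (vx, vy).
    """
    n = 2                                   # planar position state
    # Stacked dynamics: x_{k+1} - x0 = dt * sum_{j<=k} u_j, linear in U.
    A = np.zeros((horizon * n, horizon * n))
    for k in range(horizon):
        for j in range(k + 1):
            A[k*n:(k+1)*n, j*n:(j+1)*n] = dt * np.eye(n)
    b = np.tile(x_goal - x0, horizon)
    # Regularized normal equations: (A^T A + w I) U = A^T b.
    U = np.linalg.solve(A.T @ A + u_weight * np.eye(horizon * n), A.T @ b)
    return U[:n]

print(mpc_step(np.zeros(2), np.array([1.0, 0.5])))  # first (vx, vy) command
```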

16 pages, 19173 KiB  
Article
OMNI-CONV: Generalization of the Omnidirectional Distortion-Aware Convolutions
by Charles-Olivier Artizzu, Guillaume Allibert and Cédric Demonceaux
J. Imaging 2023, 9(2), 29; https://doi.org/10.3390/jimaging9020029 - 28 Jan 2023
Cited by 1 | Viewed by 2369
Abstract
Omnidirectional images have drawn great research attention recently thanks to their great potential and performance in various computer vision tasks. However, processing this type of image requires adaptations that account for spherical distortions. It is therefore not trivial to directly apply conventional convolutional neural networks to omnidirectional images, because CNNs were initially developed for perspective images. In this paper, we present a general method to adapt perspective convolutional networks to equirectangular images, forming a novel distortion-aware convolution. Our proposed solution can be regarded as a drop-in replacement for the existing convolutional network without requiring any additional training cost. To verify the generalization of our method, we conduct an analysis on three basic vision tasks, i.e., semantic segmentation, optical flow, and monocular depth. The experiments on both virtual and real outdoor scenarios show that our adapted spherical models consistently outperform their counterparts.
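
Equirectangular rows are stretched horizontally by 1/cos(latitude). A toy sketch of the idea behind distortion-aware sampling, widening the horizontal kernel footprint with latitude and wrapping across the 360° seam (the paper's convolution adapts the full 2-D kernel via spherical geometry):

```python
import numpy as np

def distortion_aware_row_taps(img, row, ksize=3):
    """Columns sampled by a horizontal kernel on one equirectangular row.

    The footprint is widened by 1/cos(latitude) to compensate for the
    horizontal stretch of the projection, and wraps across the 360° seam.
    """
    h, w = img.shape[:2]
    lat = (0.5 - (row + 0.5) / h) * np.pi      # latitude in (-pi/2, pi/2)
    stretch = 1.0 / max(np.cos(lat), 1e-3)     # capped near the poles
    offsets = np.arange(ksize) - ksize // 2    # e.g. [-1, 0, 1]
    taps = np.round(offsets * stretch).astype(int)
    cols = (np.arange(w)[:, None] + taps[None, :]) % w
    return img[row, cols]                      # shape (W, ksize)
```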

15 pages, 4667 KiB  
Article
Classification and Object Detection of 360° Omnidirectional Images Based on Continuity-Distortion Processing and Attention Mechanism
by Xin Zhang, Degang Yang, Tingting Song, Yichen Ye, Jie Zhou and Yingze Song
Appl. Sci. 2022, 12(23), 12398; https://doi.org/10.3390/app122312398 - 4 Dec 2022
Cited by 3 | Viewed by 3553
Abstract
360° omnidirectional images are widely used in areas where comprehensive visual information is required, owing to their large field-of-view coverage. However, many extant convolutional neural networks based on 360° omnidirectional images have not performed well in computer vision tasks. This occurs because 360° omnidirectional images are processed into plane images by equirectangular projection, which generates discontinuities at the edges and can result in serious distortion. At present, most methods to alleviate these problems are based on multi-projection and resampling, which can result in huge computational overhead. Therefore, a novel edge continuity distortion-aware block (ECDAB) for 360° omnidirectional images is proposed here, which prevents edge discontinuity and distortion by recombining and segmenting features. To further improve the performance of the network, a novel convolutional row-column attention block (CRCAB) is also proposed. CRCAB captures row-to-row and column-to-column dependencies to aggregate global information, enabling a stronger representation of the extracted features. Moreover, to reduce the memory overhead of CRCAB, we propose an improved convolutional row-column attention block (ICRCAB), which can adjust the number of vectors in the row-column direction. Finally, to verify the effectiveness of the proposed networks, we conducted experiments on both traditional images and 360° omnidirectional image datasets. The experimental results demonstrated that networks using ECDAB or CRCAB outperform the baseline model.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition Based on Deep Learning)
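
The edge-continuity problem has a simple core: the left and right borders of an equirectangular image are physically adjacent, so padding should wrap rather than zero-fill. A minimal PyTorch sketch of that continuity idea (ECDAB itself recombines and segments features more elaborately):

```python
import torch
import torch.nn.functional as F

def erp_pad(x, pad=1):
    """Wrap horizontally (the 360° seam is continuous), replicate at the poles."""
    x = F.pad(x, (pad, pad, 0, 0), mode='circular')    # left/right wrap
    x = F.pad(x, (0, 0, pad, pad), mode='replicate')   # top/bottom
    return x

x = torch.randn(1, 8, 64, 128)                         # (N, C, H, W) feature map
y = F.conv2d(erp_pad(x), torch.randn(16, 8, 3, 3))     # no seam artifacts
print(y.shape)                                         # torch.Size([1, 16, 64, 128])
```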

17 pages, 69854 KiB  
Article
Omni-Directional Semi-Global Stereo Matching with Reliable Information Propagation
by Yueyang Ma, Ailing Tian, Penghui Bu, Bingcai Liu and Zixin Zhao
Appl. Sci. 2022, 12(23), 11934; https://doi.org/10.3390/app122311934 - 23 Nov 2022
Cited by 4 | Viewed by 2567
Abstract
The high efficiency and accuracy of semi-global matching (SGM) make it widely used in many stereo vision applications. However, SGM not only struggles with pixels in homogeneous areas but also suffers from streak artifacts. In this paper, we propose a novel omni-directional SGM (OmniSGM) with a cost volume update scheme to aggregate costs from paths along all directions and to encourage reliable information to propagate across the entire image. Specifically, we perform SGM along four tree structures, namely trees to the left, right, top, and bottom of the root node, and then fuse the outputs to obtain the final result. The contributions of pixels on each tree can be recursively computed from the leaf nodes to the root node, ensuring that our method has linear time complexity. Moreover, an iterative cost volume update scheme is proposed that uses the aggregated cost from the previous pass to enhance the robustness of the initial matching cost. Thus, useful information is more likely to propagate over long distances to handle ambiguities in low-texture areas. Finally, we present an efficient strategy that propagates the disparities of stable pixels along the minimum spanning tree (MST) for disparity refinement. Extensive experiments in stereo matching on the Middlebury and KITTI datasets demonstrate that our method outperforms typical traditional SGM-based cost aggregation methods.
(This article belongs to the Special Issue Application of Computer Science in Mobile Robots)
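
The heart of any SGM variant is the per-path cost recursion. A minimal left-to-right aggregation pass in NumPy over a precomputed cost volume (OmniSGM generalizes this propagation to tree structures spanning the whole image):

```python
import numpy as np

def sgm_pass_lr(cost, p1=10.0, p2=120.0):
    """One left-to-right SGM aggregation pass over a (H, W, D) cost volume.

    L(p, d) = C(p, d) + min(L(q, d), L(q, d±1) + P1, min_k L(q, k) + P2)
              - min_k L(q, k),   with q the previous pixel on the path.
    """
    h, w, d = cost.shape
    L = cost.copy()
    for x in range(1, w):
        prev = L[:, x - 1, :]                              # (H, D)
        m = prev.min(axis=1, keepdims=True)                # min_k L(q, k)
        up = np.pad(prev, ((0, 0), (1, 0)), constant_values=np.inf)[:, :-1]
        dn = np.pad(prev, ((0, 0), (0, 1)), constant_values=np.inf)[:, 1:]
        best = np.minimum.reduce([prev, up + p1, dn + p1, m + p2])
        L[:, x, :] = cost[:, x, :] + best - m
    return L

disp = sgm_pass_lr(np.random.rand(4, 8, 16)).argmin(axis=2)  # WTA disparities
```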

27 pages, 3157 KiB  
Article
Client-Oriented Blind Quality Metric for High Dynamic Range Stereoscopic Omnidirectional Vision Systems
by Liuyan Cao, Jihao You, Yang Song, Haiyong Xu, Zhidi Jiang and Gangyi Jiang
Sensors 2022, 22(21), 8513; https://doi.org/10.3390/s22218513 - 4 Nov 2022
Cited by 2 | Viewed by 1869
Abstract
A high dynamic range (HDR) stereoscopic omnidirectional vision system can provide users with more realistic binocular and immersive perception, but the HDR stereoscopic omnidirectional image (HSOI) suffers distortions during its encoding and visualization, making its quality evaluation more challenging. To solve this problem, this paper proposes a client-oriented blind HSOI quality metric based on visual perception. The proposed metric mainly consists of a monocular perception module (MPM) and a binocular perception module (BPM), which combine monocular/binocular, omnidirectional and HDR/tone-mapping perception. The MPM extracts features from three aspects: global color distortion, symmetric/asymmetric distortion and scene distortion. In the BPM, the binocular fusion map and binocular difference map are generated by joint image filtering. Then, brightness segmentation is performed on the binocular fusion image, and distinctive features are extracted from the segmented high/low/middle brightness regions. For the binocular difference map, natural scene statistical features are extracted by multi-coefficient derivative maps. Finally, feature screening is used to remove the redundancy between the extracted features. Experimental results on the HSOID database show that the proposed metric generally outperforms representative quality metrics and is more consistent with subjective perception.
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
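
The "natural scene statistical features" mentioned above are typically built on mean-subtracted, contrast-normalized (MSCN) coefficients. A minimal sketch of that standard preprocessing step, assuming a Gaussian local window (the paper's multi-coefficient derivative maps extend such statistics):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)                       # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu      # local variance
    return (img - mu) / (np.sqrt(np.maximum(var, 0)) + c)

# Simple NSS features: variance and kurtosis of the MSCN map.
coeffs = mscn(np.random.rand(64, 64) * 255)
print(coeffs.var(), ((coeffs - coeffs.mean())**4).mean() / coeffs.var()**2)
```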

24 pages, 11057 KiB  
Article
GNSS Urban Positioning with Vision-Aided NLOS Identification
by Hexiong Yao, Zhiqiang Dai, Weixiang Chen, Ting Xie and Xiangwei Zhu
Remote Sens. 2022, 14(21), 5493; https://doi.org/10.3390/rs14215493 - 31 Oct 2022
Cited by 7 | Viewed by 3527
Abstract
The global navigation satellite system (GNSS) has played an important role in a broad range of consumer and industrial applications. In particular, cities have become major GNSS application scenarios; however, GNSS signals suffer from blocking, reflection and attenuation in harsh urban environments, resulting in diverse received signals, e.g., non-line-of-sight (NLOS) and multipath signals. NLOS signals often cause severe deterioration in positioning, navigation, and timing (PNT) solutions, and should be identified and excluded. In this paper, we propose a vision-aided NLOS identification method to augment GNSS urban positioning. A skyward omnidirectional camera is installed on a GNSS antenna to collect omnidirectional images of the sky region. After being rectified, these images are processed for sky region segmentation, which is improved by leveraging gradient information and energy function optimization. Image morphology processing is further employed to smooth slender boundaries. After sky region segmentation, the satellites are projected onto the omnidirectional image, from which NLOS satellites are identified. Finally, the identified NLOS satellites are excluded from the GNSS PNT estimation, improving accuracy and stability. Practical test results show that the proposed sky region segmentation module achieves over 96% accuracy, and that completely accurate NLOS identification is achieved for the experimental images. We validate the performance of our method on public datasets. Compared with the raw measurements without screening, the vision-aided NLOS identification method enables improvements of 60.3%, 12.4% and 63.3% in the E, N, and U directions, respectively, as well as an improvement of 58.5% in 3D accuracy.
(This article belongs to the Special Issue Remote Sensing in Navigation: State-of-the-Art)
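
Once the sky region is segmented, NLOS identification reduces to projecting each satellite's azimuth/elevation into the image and testing the sky mask. A minimal sketch assuming an idealized equidistant fisheye model aligned with the zenith (the paper rectifies and calibrates the real camera):

```python
import numpy as np

def is_nlos(az_deg, el_deg, sky_mask, cx, cy, f):
    """True if the satellite falls outside the segmented sky region.

    Equidistant fisheye: radius is proportional to zenith angle (90° - elevation).
    sky_mask: boolean (H, W), True where open sky is visible.
    """
    zen = np.deg2rad(90.0 - el_deg)
    az = np.deg2rad(az_deg)
    u = int(round(cx + f * zen * np.sin(az)))   # east along image x
    v = int(round(cy - f * zen * np.cos(az)))   # north along image -y
    h, w = sky_mask.shape
    if not (0 <= v < h and 0 <= u < w):
        return True                              # outside the FOV: treat as NLOS
    return not sky_mask[v, u]

# Satellites flagged as NLOS are then excluded from the PNT solution.
```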

30 pages, 36753 KiB  
Article
Collision Detection and Avoidance for Underwater Vehicles Using Omnidirectional Vision
by Eduardo Ochoa, Nuno Gracias, Klemen Istenič, Josep Bosch, Patryk Cieślak and Rafael García
Sensors 2022, 22(14), 5354; https://doi.org/10.3390/s22145354 - 18 Jul 2022
Cited by 7 | Viewed by 4361
Abstract
Exploration of marine habitats is one of the key pillars of underwater science, which often involves collecting images at close range. As acquiring imagery close to the seabed involves multiple hazards, the safety of underwater vehicles, such as remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs), is often compromised. Obstacle avoidance in underwater environments is commonly handled with acoustic sensors, which cannot be used reliably at very short distances, requiring a high level of attention from the operator to avoid damaging the robot. Therefore, developing capabilities such as advanced assisted mapping, spatial awareness and safety, and user immersion in confined environments is an important research area for human-operated underwater robotics. In this paper, we present a novel approach that provides an ROV with capabilities for navigation in complex environments. By leveraging the ability of omnidirectional multi-camera systems to provide a comprehensive view of the environment, we create a 360° real-time point cloud of nearby objects or structures within a visual SLAM framework. We also develop a strategy to assess the risk of obstacles in the vicinity. We show that the system can use the risk information to generate warnings that the robot can use to perform evasive maneuvers when approaching dangerous obstacles in real-world scenarios. This system is a first step towards a comprehensive pilot assistance system that will enable inexperienced pilots to operate vehicles in complex and cluttered environments.
(This article belongs to the Special Issue Underwater Robotics in 2022-2023)
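
Risk assessment over the 360° point cloud can be as simple as thresholding the nearest obstacle distance per bearing sector. A minimal sketch of such a warning policy, with thresholds chosen purely for illustration:

```python
import numpy as np

def obstacle_warnings(points, warn_m=1.5, stop_m=0.5, sectors=8):
    """Nearest-obstacle distance per horizontal sector of a 360° point cloud.

    points: (N, 3) array in the vehicle frame (x forward, y left, z up).
    Returns a list of (sector_index, distance, level) warnings.
    """
    rng = np.linalg.norm(points[:, :2], axis=1)
    bearing = np.arctan2(points[:, 1], points[:, 0])            # (-pi, pi]
    sector = ((bearing + np.pi) / (2 * np.pi) * sectors).astype(int) % sectors
    out = []
    for s in range(sectors):
        d = rng[sector == s]
        if d.size and d.min() < warn_m:
            level = 'STOP' if d.min() < stop_m else 'WARN'
            out.append((s, float(d.min()), level))
    return out
```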

13 pages, 6301 KiB  
Article
Full Soft Capacitive Omnidirectional Tactile Sensor Based on Micro-Spines Electrode and Hemispheric Dielectric Structure
by Baochun Xu, Yu Wang, Haoao Cui, Haoran Niu, Yijian Liu, Zhongli Li and Da Chen
Biosensors 2022, 12(7), 506; https://doi.org/10.3390/bios12070506 - 10 Jul 2022
Cited by 9 | Viewed by 3333
Abstract
Intelligent electronics has flourished in recent years and is pursued in many fields, including bio-symbiotic devices, human physiology regulation, robot operation, and human–computer interaction. To support this appealing vision, human-like tactile perception is urgently needed for dexterous object manipulation. In particular, real-time force perception that captures strength and orientation simultaneously is critical for intelligent electronic skin. However, it remains very challenging to achieve directional tactile sensing that performs well and can also be scaled up. Here, a fully soft capacitive omnidirectional tactile (ODT) sensor was developed based on MWCNT-coated stripe electrodes and an Ecoflex hemisphere-array dielectric. A theoretical analysis of this structure for omnidirectional force detection was conducted by finite element simulation. Combining the micro-spines with the hemispheric dielectric structure, the sensor achieves omnidirectional detection with high sensitivity (0.306 ± 0.001 kPa−1 under 10 kPa) and a wide response range (2.55 Pa to 160 kPa). Moreover, to overcome the inherent unit-to-unit variability of flexible sensors built from nanomaterials and polymers, machine learning approaches were introduced to recognize various loading angles, ultimately achieving more than 99% recognition accuracy. The practical validity of the design was demonstrated by detecting human motion, physiological activities, and the gripping of a cup, showing great potential as a tactile e-skin for digital medicine and soft robotics.
(This article belongs to the Special Issue Flexible Biosensors for Health Monitoring)
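
The machine-learning step maps multi-electrode capacitance readings to a loading-angle class. A minimal scikit-learn sketch with synthetic data standing in for real sensor readings (the paper's model and its >99% accuracy are its own):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, channels, angles = 1200, 8, 4                    # synthetic stand-in data
y = rng.integers(0, angles, n)                      # loading-angle class labels
offsets = rng.normal(0.0, 2.0, (angles, channels))  # distinct class signatures
X = rng.normal(size=(n, channels)) + offsets[y]     # noisy capacitance features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"loading-angle accuracy: {clf.score(X_te, y_te):.2f}")
```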

23 pages, 8892 KiB  
Article
A Mobile Robot with Omnidirectional Tracks—Design and Experimental Research
by Mateusz Fiedeń and Jacek Bałchanowski
Appl. Sci. 2021, 11(24), 11778; https://doi.org/10.3390/app112411778 - 11 Dec 2021
Cited by 19 | Viewed by 7951
Abstract
This article deals with the design and testing of mobile robots equipped with drive systems based on omnidirectional tracks. These are new mobile systems that combine the advantages of a typical track drive with the advantages of systems equipped with omnidirectional Mecanum wheels. The omnidirectional tracks allow the robot to move in any direction without having to change the orientation of its body. The mobile robot market (automated construction machinery, mobile handling robots, mobile platforms, etc.) constantly calls for improvements in the manoeuvrability of vehicles. Omnidirectional drive technology can meet such requirements. The main aim of the work is to create a mobile robot that is capable of omnidirectional movement over different terrains, and also to conduct an experimental study of the robot’s operation. The paper presents the construction and principles of operation of a small robot equipped with omnidirectional tracks. The robot’s construction and control system, and also a prototype made with FDM technology, are described. The trajectory parameters of the robot’s operation along the main and transverse axes were measured on a test stand equipped with a vision-based measurement system. The results of the experimental research became the basis for the development and experimental verification of a static method of correcting deviations in movement trajectory.
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
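
Omnidirectional tracks realize the same holonomic kinematics as Mecanum wheels. A minimal inverse-kinematics sketch for a four-unit Mecanum-style layout (the robot's actual track geometry may map differently):

```python
import numpy as np

def inverse_kinematics(vx, vy, wz, r=0.05, L=0.20, W=0.15):
    """Drive speeds (rad/s) for a Mecanum-style layout: FL, FR, RL, RR.

    vx: forward and vy: left body velocity (m/s); wz: yaw rate (rad/s);
    r: roller/track radius; L, W: half length/width between drive units.
    """
    k = L + W
    return np.array([vx - vy - k * wz,    # front-left
                     vx + vy + k * wz,    # front-right
                     vx + vy - k * wz,    # rear-left
                     vx - vy + k * wz]) / r

print(inverse_kinematics(0.0, 0.3, 0.0))  # pure sideways translation
```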
