Search Results (22)

Search Parameters:
Keywords = direct visual servo

19 pages, 10733 KB  
Article
Image-Based Auto-Focus Microscope System with Visual Servo Control for Micro-Stereolithography
by Yijie Liu, Xuexuan Li, Pengfei Jiang, Ziyue Wang, Jichang Guo, Chao Luo, Yaozhong Wei, Zhiliang Chen, Chang Liu, Wang Ren, Wei Zhang, Juntian Qu and Zhen Zhang
Micromachines 2024, 15(10), 1250; https://doi.org/10.3390/mi15101250 - 11 Oct 2024
Cited by 1 | Viewed by 2708
Abstract
Micro-stereolithography (μSL) is an advanced additive manufacturing technique that enables the fabrication of highly precise microstructures with fine feature resolution. One of the primary challenges in μSL is achieving and maintaining precise focus throughout the fabrication process. For the successful application of μSL, it is essential to maintain the sample surface within a focal depth of several microns. Despite the growing interest in auto-focus devices, limited attention has been directed towards auto-focus systems in image-based auto-focus microscope systems for precision μSL. To address this challenge, we propose an image-based auto-focus microscope system incorporating visual servo control. In the optical design, a transflective beam splitter is employed, allowing the laser beam to pass through for fabrication while reflecting the focused beam on the sample surface to the microscope and camera. Utilizing captured spot images and the Foucault knife-edge test, a deep learning-based laser spot image processing algorithm is developed to determine the focus position based on spot size and the number of spot pixels on both sides. Experimental results demonstrate that the proposed auto-focus system effectively determines the relative position of the focal point using the laser spot image and achieves auto-focusing through visual servo control. Full article
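The focus cue described above (spot asymmetry from the Foucault knife-edge test) can be sketched without the deep-learning stage: with a knife edge in the beam, the bright pixels pile up on one side of the spot centroid when the sample is above focus and on the other side when it is below. The following is a minimal illustration, not the authors' algorithm; the threshold and gain are assumptions.

```python
import numpy as np

def focus_error_from_spot(spot: np.ndarray, thresh: float = 0.5) -> float:
    """Signed defocus cue from a knife-edge laser spot image.

    Binarizes the spot and compares the pixel counts on either side of the
    centroid column; the normalized imbalance is a crude focus-error signal.
    (Illustrative only; the paper uses a learned spot-image algorithm.)
    """
    mask = spot > thresh * spot.max()          # binarize the spot
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0
    cx = xs.mean()                             # spot centroid column
    left = np.count_nonzero(xs < cx)
    right = np.count_nonzero(xs >= cx)
    return (right - left) / (right + left)     # in [-1, 1]

def autofocus_step(spot: np.ndarray, gain: float = 10.0) -> float:
    """Visual-servo focusing: step the stage against the sign of the error.
    The gain and stage units (microns) are hypothetical."""
    return -gain * focus_error_from_spot(spot)
```

In a servo loop this value would drive the Z stage each camera frame until the imbalance vanishes at best focus.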
Show Figures

Figure 1

20 pages, 12301 KB  
Article
High-Precision Drilling by Anchor-Drilling Robot Based on Hybrid Visual Servo Control in Coal Mine
by Mengyu Lei, Xuhui Zhang, Wenjuan Yang, Jicheng Wan, Zheng Dong, Chao Zhang and Guangming Zhang
Mathematics 2024, 12(13), 2059; https://doi.org/10.3390/math12132059 - 1 Jul 2024
Cited by 1 | Viewed by 2128
Abstract
Rock bolting is a commonly used method for stabilizing the surrounding rock in coal-mine roadways. It involves installing rock bolts after drilling, which penetrate unstable rock layers, binding loose rocks together, enhancing the stability of the surrounding rock, and controlling its deformation. Although recent progress in drilling and anchoring equipment has significantly enhanced the efficiency of roof support in coal mines and improved safety measures, how to deal with drilling rigs’ misalignment with the through-hole center remains a big issue, which may potentially compromise the quality of drilling and consequently affect the effectiveness of bolt support or even result in failure. To address this challenge, this article presents a robotic teleoperation system alongside a hybrid visual servo control strategy. Addressing the demand for high precision and efficiency in aligning the drilling rigs with the center of the drilling hole, a hybrid control strategy is introduced combining position-based and image-based visual servo control. The former facilitates an effective approach to the target area, while the latter ensures high-precision alignment with the center of the drilling hole. The robot teleoperation system employs the binocular vision measurement system to accurately determine the position and orientation of the drilling-hole center, which serves as the designated target position for the drilling rig. Leveraging the displacement and angle sensor information installed on each joint of the manipulator, the system utilizes the kinematic model of the manipulator to compute the spatial position of the end-effector. It dynamically adjusts the spatial pose of the end-effector in real time, aligning it with the target position relative to its current location. Additionally, it utilizes monocular vision information to fine-tune the movement speed and direction of the end-effector, ensuring rapid and precise alignment with the target drilling-hole center. 
Experimental results demonstrate that this method can control the maximum alignment error within 7 mm, significantly enhancing the alignment accuracy compared to manual control. Compared with the manual control method, the average error of this method is reduced by 41.2%, and the average duration is reduced by 4.3 s. This study paves a new path for high-precision drilling and anchoring of tunnel roofs, thereby improving the quality and efficiency of roof support while mitigating the challenges associated with significant errors and compromised safety during manual control processes. Full article
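The hybrid strategy above (position-based servoing for the coarse approach, then image-based servoing for fine alignment with the hole center) can be sketched as a single control step. The switch radius, gains, and pixel-to-Cartesian mapping below are hypothetical; a full IBVS stage would use the interaction matrix rather than a fixed mapping.

```python
import numpy as np

SWITCH_RADIUS = 0.05  # m; assumed hand-off distance between the two modes

def hybrid_servo_step(ee_pos, target_pos, feat_px, feat_goal_px,
                      kp_pbvs=0.8, kp_ibvs=0.002):
    """One cycle of a PBVS/IBVS hybrid controller (a sketch of the strategy
    in the abstract, not the paper's controller).

    ee_pos, target_pos: end-effector and hole-center positions from the
    stereo measurement (m).  feat_px, feat_goal_px: monocular image features
    (pixels).  Returns a Cartesian velocity command.
    """
    err_cart = target_pos - ee_pos
    if np.linalg.norm(err_cart) > SWITCH_RADIUS:
        # Position-based stage: drive the rig toward the measured hole pose.
        return kp_pbvs * err_cart
    # Image-based stage: proportional control on the pixel error, mapped to
    # an in-plane Cartesian nudge for fine alignment.
    err_px = feat_goal_px - feat_px
    return kp_ibvs * np.array([err_px[0], err_px[1], 0.0])
```

The coarse stage tolerates calibration error because the fine stage closes the loop directly on what the camera sees.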

17 pages, 3296 KB  
Article
High-Precision Visual Servoing for the Neutron Diffractometer STRESS-SPEC at MLZ
by Martin Landesberger, Oguz Kedilioglu, Lijiu Wang, Weimin Gan, Joana Rebelo Kornmeier, Sebastian Reitelshöfer, Jörg Franke and Michael Hofmann
Sensors 2024, 24(9), 2703; https://doi.org/10.3390/s24092703 - 24 Apr 2024
Viewed by 1809
Abstract
With neutron diffraction, the local stress and texture of metallic components can be analyzed non-destructively. For both, highly accurate positioning of the sample is essential, requiring the measurement at the same sample location from different directions. Current sample-positioning systems in neutron diffraction instruments combine XYZ tables and Eulerian cradles to enable the accurate six-degree-of-freedom (6DoF) handling of samples. However, these systems are not flexible enough. The choice of the rotation center and their range of motion are limited. Industrial six-axis robots have the necessary flexibility, but they lack the required absolute accuracy. This paper proposes a visual servoing system consisting of an industrial six-axis robot enhanced with a high-precision multi-camera tracking system. Its goal is to achieve an absolute positioning accuracy of better than 50 μm. A digital twin integrates various data sources from the instrument and the sample in order to enable a fully automatic measurement procedure. This system is also highly relevant for other kinds of processes that require the accurate and flexible handling of objects and tools, e.g., robotic surgery or industrial printing on 3D surfaces. Full article
(This article belongs to the Section Sensors and Robotics)

16 pages, 2675 KB  
Article
An Innovative Collision-Free Image-Based Visual Servoing Method for Mobile Robot Navigation Based on the Path Planning in the Image Plan
by Mohammed Albekairi, Hassen Mekki, Khaled Kaaniche and Amr Yousef
Sensors 2023, 23(24), 9667; https://doi.org/10.3390/s23249667 - 7 Dec 2023
Cited by 7 | Viewed by 2973
Abstract
In this article, we present an innovative approach to 2D visual servoing (IBVS), aiming to guide an object to its destination while avoiding collisions with obstacles and keeping the target within the camera’s field of view. A single monocular sensor’s sole visual data serves as the basis for our method. The fundamental idea is to manage and control the dynamics associated with any trajectory generated in the image plane. We show that the differential flatness of the system’s dynamics can be used to limit arbitrary paths based on the number of points on the object that need to be reached in the image plane. This creates a link between the current configuration and the desired configuration. The number of required points depends on the number of control inputs of the robot used and determines the dimension of the flat output of the system. For a two-wheeled mobile robot, for instance, the coordinates of a single point on the object in the image plane are sufficient, whereas, for a quadcopter with four rotating motors, the trajectory needs to be defined by the coordinates of two points in the image plane. By guaranteeing precise tracking of the chosen trajectory in the image plane, we ensure that problems of collision with obstacles and leaving the camera’s field of view are avoided. Our approach is based on the principle of the inverse problem, meaning that when any point on the object is selected in the image plane, it will not be occluded by obstacles or leave the camera’s field of view during movement. It is true that proposing any trajectory in the image plane can lead to non-intuitive movements (back and forth) in the Cartesian plane. In the case of backward motion, the robot may collide with obstacles as it navigates without direct vision. Therefore, it is essential to perform optimal trajectory planning that avoids backward movements. 
To assess the effectiveness of our method, our study focuses exclusively on the challenge of implementing the generated trajectory in the image plane within the specific context of a two-wheeled mobile robot. We use numerical simulations to illustrate the performance of the control strategy we have developed. Full article
(This article belongs to the Special Issue Mobile Robots for Navigation)

17 pages, 6235 KB  
Article
Two-Step Adaptive Control for Planar Type Docking of Autonomous Underwater Vehicle
by Tianlei Wang, Zhenxing Sun, Yun Ke, Chaochao Li and Jiwei Hu
Mathematics 2023, 11(16), 3467; https://doi.org/10.3390/math11163467 - 10 Aug 2023
Cited by 7 | Viewed by 1545
Abstract
Planar type docking enables a convenient underwater energy supply for irregularly shaped autonomous underwater vehicles (AUVs), but the corresponding control method is still challenging. Conventional control methods for torpedo-shaped AUVs are not suitable for planar type docking due to the significant differences in system structures and motion characteristics. This paper proposes a two-step adaptive control method to solve the planar type docking problem. The method makes a seamless combination of horizontal dynamic positioning and visual servo docking considering ocean current disturbance. The current disturbance is estimated and canceled in the pre-docking step using a current observer, and the positioning error is further compensated for by the vertical visual servo technique in the docking step. Reduced order dynamic models are distinctively established for different docking steps according to the motion characteristics, based on which the dynamic controllers are designed considering the model parameter uncertainties. Simulation is conducted with an initial distance of 10 m in the horizontal direction and 3 m in depth. Stable and accurate dynamic positioning under up to 0.4 m/s of current disturbances with different directions is validated. A 0.5 m lateral positioning error is successfully compensated for by the visual servo docking step. The proposed control method provides a valuable reference for similar types of docking application. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)

20 pages, 7829 KB  
Article
2D Magnetic Manipulation of a Micro-Robot in Glycerin Using Six Pairs of Magnetic Coils
by Qigao Fan, Jiawei Lu, Jie Jia and Juntian Qu
Micromachines 2022, 13(12), 2144; https://doi.org/10.3390/mi13122144 - 4 Dec 2022
Cited by 7 | Viewed by 3822
Abstract
This paper demonstrates the control system of a single magnetic micro-robot driven by combined coils. The combined coils consist of three pairs of Helmholtz coils and three pairs of Maxwell coils. The rotating magnetic field, gradient magnetic field, and combined magnetic field model of the combined coils were analyzed. To make the output magnetic field quickly converge to the reference point without steady-state error, the discrete-time optimal controller was designed based on the auto disturbance rejection technology. We have designed a closed-loop controller based on a position servo. The control system includes the position control and direction control of the micro-robot. To address problems with slow sampling frequency in visual feedback and inability to feed real-time position back to the control system, a Kalman filter algorithm was used to predict the position of the micro-robot in two-dimensional space. Simulations and experiments were carried out based on the proposed structure of combined coils and control scheme. The experimental results demonstrated the uniformity and excellent dynamic performance of the generated magnetic field. Full article
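The role the Kalman filter plays here, bridging slow visual feedback with a faster control loop, is that of a standard constant-velocity predictor. Below is a generic sketch with a 2-D state of position and velocity; the noise covariances are placeholders, not the paper's tuning.

```python
import numpy as np

class ConstantVelocityKF:
    """2-D constant-velocity Kalman filter for predicting a tracked robot's
    position between camera frames (a generic textbook filter, not the
    paper's implementation).  State vector: [x, y, vx, vy]."""

    def __init__(self, dt: float, q: float = 1e-3, r: float = 1e-2):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4)                     # constant-velocity dynamics
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))              # camera measures position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                 # process noise (placeholder)
        self.R = r * np.eye(2)                 # measurement noise (placeholder)

    def predict(self):
        """Time update; returns the predicted position for the control loop."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Measurement update when a new camera position arrives."""
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Between camera samples the controller can call `predict` repeatedly to obtain a real-time position estimate.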
(This article belongs to the Special Issue Micro- and Nano-Systems for Manipulation, Actuation and Sensing)

17 pages, 22561 KB  
Article
Autonomous Visual Navigation for a Flower Pollination Drone
by Dries Hulens, Wiebe Van Ranst, Ying Cao and Toon Goedemé
Machines 2022, 10(5), 364; https://doi.org/10.3390/machines10050364 - 10 May 2022
Cited by 26 | Viewed by 6003
Abstract
In this paper, we present the development of a visual navigation capability for a small drone enabling it to autonomously approach flowers. This is a very important step towards the development of a fully autonomous flower pollinating nanodrone. The drone we developed is totally autonomous and relies for its navigation on a small on-board color camera, complemented with one simple ToF distance sensor, to detect and approach the flower. The proposed solution uses a DJI Tello drone carrying a Maix Bit processing board capable of running all deep-learning-based image processing and navigation algorithms on-board. We developed a two-stage visual servoing algorithm that first uses a highly optimized object detection CNN to localize the flowers and fly towards it. The second phase, approaching the flower, is implemented by a direct visual steering CNN. This enables the drone to detect any flower in the neighborhood, steer the drone towards the flower and make the drone’s pollinating rod touch the flower. We trained all deep learning models based on an artificial dataset with a mix of images of real flowers, artificial (synthetic) flowers and virtually rendered flowers. Our experiments demonstrate that the approach is technically feasible. The drone is able to detect, approach and touch the flowers totally autonomously. Our 10 cm sized prototype is trained on sunflowers, but the methodology presented in this paper can be retrained for any flower type. Full article
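The two-stage pipeline described above (a detection CNN to find and fly toward the flower, then direct visual steering until the pollinating rod makes contact) implies a simple phase machine. Here is a sketch under assumed names and a hypothetical ToF threshold; the real system's switching logic is not specified in the abstract.

```python
from enum import Enum, auto

class Phase(Enum):
    SEARCH = auto()     # run the object-detection CNN, look for a flower
    APPROACH = auto()   # steer toward the detected flower (steering CNN)
    TOUCH = auto()      # ToF range small enough for rod contact

def next_phase(phase: Phase, detection, distance_m, touch_dist: float = 0.05) -> Phase:
    """Advance the (hypothetical) flight phase machine.

    detection: detector output, or None when no flower is found.
    distance_m: ToF range to the flower, or None when unavailable.
    """
    if phase is Phase.SEARCH and detection is not None:
        return Phase.APPROACH
    if phase is Phase.APPROACH and distance_m is not None and distance_m < touch_dist:
        return Phase.TOUCH
    return phase
```

Keeping the phases explicit makes it easy to fall back to `SEARCH` if the flower is lost, a case the sketch omits.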

25 pages, 1495 KB  
Article
Design of a Gough–Stewart Platform Based on Visual Servoing Controller
by Minglei Zhu, Cong Huang, Shijie Song and Dawei Gong
Sensors 2022, 22(7), 2523; https://doi.org/10.3390/s22072523 - 25 Mar 2022
Cited by 14 | Viewed by 5416
Abstract
Designing a robot with the best accuracy is always an attractive research direction in the robotics community. In order to create a Gough–Stewart platform with guaranteed accuracy performance for a dedicated controller, this paper describes a novel advanced optimal design methodology: control-based design methodology. This advanced optimal design method considers the controller positioning accuracy in the design process for getting the optimal geometric parameters of the robot. In this paper, three types of visual servoing controllers are applied to control the motions of the Gough–Stewart platform: leg-direction-based visual servoing, line-based visual servoing, and image moment visual servoing. Depending on these controllers, the positioning error models considering the camera observation error together with the controller singularities are analyzed. In the next step, the optimization problems are formulated in order to get the optimal geometric parameters of the robot and the placement of the camera for the Gough–Stewart platform for each type of controller. Then, we perform co-simulations on the three optimized Gough–Stewart platforms in order to test the positioning accuracy and the robustness with respect to the manufacturing errors. It turns out that the optimal control-based design methodology helps get both the optimum design parameters of the robot and the performance of the controller {robot + dedicated controller}. Full article

27 pages, 7850 KB  
Article
Mix Frame Visual Servo Control Framework for Autonomous Assistive Robotic Arms
by Zubair Arif and Yili Fu
Sensors 2022, 22(2), 642; https://doi.org/10.3390/s22020642 - 14 Jan 2022
Cited by 5 | Viewed by 5498
Abstract
Assistive robotic arms (ARAs) that provide care to the elderly and people with disabilities are a significant part of Human-Robot Interaction (HRI). Presently available ARAs provide non-intuitive interfaces such as joysticks for control and thus lack the autonomy to perform daily activities. This study proposes that, for inducing autonomous behavior in ARAs, visual sensor integration is vital, and visual servoing in the direct Cartesian control mode is the preferred method. Generally, ARAs are designed in a configuration where the end-effector's position is defined in the fixed base frame while orientation is expressed in the end-effector frame. We denote this configuration as 'mixed frame robotic arms'. Consequently, conventional visual servo controllers, which operate in a single frame of reference, are incompatible with mixed frame ARAs. Therefore, we propose a mixed-frame visual servo control framework for ARAs. Moreover, we derive the task-space kinematics of mixed frame ARAs, which leads to a novel "mixed frame Jacobian matrix". The proposed framework was validated on a mixed frame JACO-2 7-DoF ARA using an adaptive proportional-derivative controller for image-based visual servoing (IBVS), which showed a significant 31% increase in convergence rate, outperforming conventional IBVS joint controllers, especially in outstretched arm positions and near the base frame. Our results demonstrate the need for the mixed frame controller when deploying visual servo control on modern ARAs, as it can inherently cater to the robotic arm's joint limits, singularities, and self-collision problems. Full article
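For context, the conventional single-frame IBVS law that the mixed-frame framework generalizes is the classical velocity command v = -λ L⁺ (s - s*), using the standard point-feature interaction matrix. This is textbook background, not the authors' mixed-frame controller.

```python
import numpy as np

def ibvs_velocity(s, s_star, Z, lam: float = 0.5) -> np.ndarray:
    """Classical image-based visual servoing: v = -lam * pinv(L) @ (s - s*).

    s, s_star: current and desired features as stacked normalized image
    coordinates (x1, y1, x2, y2, ...).  Z: one depth estimate per point.
    Returns a 6-DoF camera twist (vx, vy, vz, wx, wy, wz).
    """
    L = []
    for i, z in enumerate(Z):
        x, y = s[2 * i], s[2 * i + 1]
        # Standard interaction matrix rows for a point feature at depth z.
        L.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        L.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return -lam * np.linalg.pinv(np.array(L)) @ e
```

The mixed-frame problem the paper addresses arises because this twist is expressed in a single frame, whereas the JACO-2 style arm expects position and orientation commands in different frames.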

25 pages, 2477 KB  
Article
Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics
by Timotei Lala, Darius-Pavel Chirla and Mircea-Bogdan Radac
Energies 2022, 15(1), 267; https://doi.org/10.3390/en15010267 - 31 Dec 2021
Cited by 14 | Viewed by 3010
Abstract
This paper focuses on validating a model-free Value Iteration Reinforcement Learning (MFVI-RL) control solution on a visual servo tracking system in a comprehensive manner starting from theoretical convergence analysis to detailed hardware and software implementation. Learning is based on a virtual state representation reconstructed from input-output (I/O) system samples under nonlinear observability and unknown dynamics assumptions, while the goal is to ensure linear output reference model (ORM) tracking. Secondary, a competitive model-free Virtual State-Feedback Reference Tuning (VSFRT) is learned from the same I/O data using the same virtual state representation, demonstrating the framework’s learning capability. A model-based two degrees-of-freedom (2DOF) output feedback controller serving as a comparisons baseline is designed and tuned using an identified system model. With similar complexity and linear controller structure, MFVI-RL is shown to be superior, confirming that the model-based design issue of poor identified system model and control performance degradation can be solved in a direct data-driven style. Apart from establishing a formal connection between output feedback control, state feedback control and also between classical control and artificial intelligence methods, the results also point out several practical trade-offs, such as I/O data exploration quality and control performance leverage with data volume, control goal and controller complexity. Full article
(This article belongs to the Special Issue Intelligent Control for Future Systems)

13 pages, 659 KB  
Article
A Switched Approach to Image-Based Stabilization for Nonholonomic Mobile Robots with Field-of-View Constraints
by Yao Huang
Appl. Sci. 2021, 11(22), 10895; https://doi.org/10.3390/app112210895 - 18 Nov 2021
Cited by 4 | Viewed by 2152
Abstract
This paper presents a switched visual servoing strategy for maneuvering a nonholonomic mobile robot to the desired configuration while keeping the tracked image points within the camera's vision. Firstly, a pure backward motion and a pure rotational motion are applied to the mobile robot in succession. Thus, the principal point and the scaled focal length in the x direction of the camera are identified through visual feedback from a fixed onboard camera. Secondly, the identified parameters are used to build the system model in polar-coordinate representation. An adaptive non-smooth controller is then designed to maneuver the mobile robot to the desired configuration under the nonholonomic constraint, and a switched strategy consisting of two image-based controllers is utilized to keep the features in the field of view. Simulation results are presented to validate the effectiveness of the proposed approach. Full article

13 pages, 2887 KB  
Article
Position Control for Soft Actuators, Next Steps toward Inherently Safe Interaction
by Dongshuo Li, Vaishnavi Dornadula, Kengyu Lin and Michael Wehner
Electronics 2021, 10(9), 1116; https://doi.org/10.3390/electronics10091116 - 9 May 2021
Cited by 11 | Viewed by 3970
Abstract
Soft robots present an avenue toward unprecedented societal acceptance, utility in populated environments, and direct interaction with humans. However, the compliance that makes them attractive also makes soft robots difficult to control. We present two low-cost approaches to control the motion of soft actuators in applications common in human-interaction tasks. First, we present a passive impedance approach, which employs restriction to pneumatic channels to regulate the inflation/deflation rate of a pneumatic actuator and eliminate the overshoot/oscillation seen in many underdamped silicone-based soft actuators. Second, we present a visual servoing feedback control approach. We present an elastomeric pneumatic finger as an example system on which both methods are evaluated and compared to an uncontrolled underdamped actuator. We perturb the actuator and demonstrate its ability to increase distal curvature around the obstacle and maintain the desired end position. In this approach, we use the continuum deformation characteristic of soft actuators as an advantage for control rather than a problem to be minimized. With their low cost and complexity, these techniques present great opportunity for soft robots to improve human–robot interaction. Full article
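In its simplest form, the visual servoing feedback approach for a pneumatic actuator reduces to a camera-in-the-loop proportional pressure law: nudge the supply pressure in proportion to the tip's pixel error, saturating at the actuator limits. A minimal sketch; the gain, pressure units (kPa), and limits are assumptions, not the paper's values.

```python
def pressure_command(tip_y_px: float, goal_y_px: float, p_now: float,
                     kp: float = 0.5, p_min: float = 0.0,
                     p_max: float = 100.0) -> float:
    """One cycle of visual-servoed pressure control for a pneumatic finger.

    tip_y_px: tracked fingertip position in the image (pixels).
    goal_y_px: desired fingertip position (pixels).
    p_now: current supply pressure (hypothetical kPa).
    Returns the new pressure command, clamped to the actuator's safe range.
    """
    err = goal_y_px - tip_y_px               # image-space position error
    return min(p_max, max(p_min, p_now + kp * err))
```

Because the camera observes the actual (possibly perturbed) tip position, the loop exploits the actuator's continuum deformation instead of fighting it, as the abstract notes.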
(This article belongs to the Special Issue Human Computer Interaction and Its Future)

25 pages, 2837 KB  
Article
Ball-Catching System Using Image Processing and an Omni-Directional Wheeled Mobile Robot
by Sho-Tsung Kao and Ming-Tzu Ho
Sensors 2021, 21(9), 3208; https://doi.org/10.3390/s21093208 - 5 May 2021
Cited by 16 | Viewed by 6408
Abstract
The ball-catching system examined in this research, which was composed of an omni-directional wheeled mobile robot and an image processing system that included a dynamic stereo vision camera and a static camera, was used to capture a thrown ball. The thrown ball was tracked by the dynamic stereo vision camera, and the omni-directional wheeled mobile robot was navigated through the static camera. A Kalman filter with deep learning was used to decrease the visual measurement noise and to estimate the ball's position and velocity. The ball's future trajectory and landing point were predicted from the estimated position and velocity. Feedback linearization was used to linearize the omni-directional wheeled mobile robot model and was then combined with a proportional-integral-derivative (PID) controller. The visual tracking algorithm was initially simulated numerically, and then the performance of the designed system was verified experimentally. We verified that the designed system was able to precisely catch a thrown ball. Full article
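Given a filtered position and velocity estimate, the landing-point prediction is, in the drag-free case, a ballistic calculation: solve the vertical motion for the impact time, then extrapolate horizontally. A simplified sketch (no air drag, flat ground, illustrative symbols), not the paper's predictor.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(p, v, ground_z: float = 0.0):
    """Ballistic landing prediction from an estimated state.

    p, v: 3-D position (m) and velocity (m/s), z up, from the filter.
    Returns the (x, y) landing point and the time to impact.
    """
    pz, vz = p[2], v[2]
    # Solve pz + vz*t - 0.5*G*t^2 = ground_z for the positive root.
    disc = vz**2 + 2.0 * G * (pz - ground_z)
    t = (vz + np.sqrt(disc)) / G
    return np.array([p[0] + v[0] * t, p[1] + v[1] * t]), t
```

In the real system this point would be re-predicted every frame as the state estimate improves, and the robot driven toward the latest prediction.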
(This article belongs to the Special Issue Perceptual Deep Learning in Image Processing and Computer Vision)

24 pages, 9556 KB  
Article
Development of Multi-Axis Crank Linkage Motion System for Synchronized Flight Simulation with VR Immersion
by Cheng-Tang Pan, Pei-Yuan Sun, Hao-Jan Li, Cheng-Hsuan Hsieh, Zheng-Yu Hoe and Yow-Ling Shiue
Appl. Sci. 2021, 11(8), 3596; https://doi.org/10.3390/app11083596 - 16 Apr 2021
Cited by 11 | Viewed by 4696
Abstract
This paper developed a rotatable multi-axis motion platform combined with virtual reality (VR) immersion for flight simulation purposes. The system could simulate the state of the flight operation. The platform was mainly comprised of three crank linkage mechanisms to replace an expensive six degrees of freedom (DoF) Stewart platform. Then, an independent subsystem which could rotate ±180° was installed at the center of the platform. Therefore, this platform exhibited 4-DoF movement, such as heave, roll, pitch, and yaw. In the servo motor control unit, Visual Studio C# was applied as the software to establish a motion control system to interact with the motion controller and four sets of servo motors. Ethernet Control Automation Technology (EtherCAT) was utilized to communicate the commands and orders between a PC and each servo motor. The optimum controller parameters of this system were obtained using Simulink simulation and verified by experiment. The multiple sets of servo motors and crank linkage mechanisms were synchronized with flight VR imagery. For VR imagery, the software Unity was used to design the flying digital content. The controller was used to transmit the platform’s spatial information to meet the direction of the pilot commands and to compensate the direction of the deviation in spatial coordinates. To achieve synchronized response and motion with respect to the three crank linkage mechanism platform and VR imagery on the tester’s goggle view, the relation of the spatial coordinate of VR imagery and three crank linkage mechanisms was transformed to angular displacement, speed and acceleration which were used to command the motor drive system. As soon as the position of the VR imagery changed, the computer instantly synchronized the VR imagery information to the multi-axis platform and performed multi-axis dynamic motion synchronously according to its commanded information. 
The testers can thus immerse in the VR image environment by watching the VR content, and obtain a flying experience. Full article

14 pages, 3775 KB  
Article
Autonomous Microrobotic Manipulation Using Visual Servo Control
by Matthew Feemster, Jenelle A. Piepmeier, Harrison Biggs, Steven Yee, Hatem ElBidweihy and Samara L. Firebaugh
Micromachines 2020, 11(2), 132; https://doi.org/10.3390/mi11020132 - 24 Jan 2020
Cited by 12 | Viewed by 3768
Abstract
This paper describes the application of a visual servo control method to the microrobotic manipulation of polymer beads on a two-dimensional fluid interface. A microrobot, actuated through magnetic fields, is utilized to manipulate a non-magnetic polymer bead into a desired position. The controller utilizes multiple modes of robot actuation to address the different stages of the task. A filtering strategy employed in separation mode allows the robot to spiral away from the manipuland in a fashion that promotes the manipulation positioning objective. Experiments demonstrate that our multiphase controller can be used to direct a microrobot to position a manipuland to within an average positional error of approximately 8 pixels (64 µm) over numerous trials. Full article
