Search Results (32)

Search Parameters:
Keywords = field of view (FoV) constraints

17 pages, 1564 KiB  
Review
Capsule Endoscopy: Current Trends, Technological Advancements, and Future Perspectives in Gastrointestinal Diagnostics
by Chang-Chao Su, Chu-Kuang Chou, Arvind Mukundan, Riya Karmakar, Binusha Fathima Sanbatcha, Chien-Wei Huang, Wei-Chun Weng and Hsiang-Chen Wang
Bioengineering 2025, 12(6), 613; https://doi.org/10.3390/bioengineering12060613 - 4 Jun 2025
Viewed by 3910
Abstract
Capsule endoscopy (CE) has revolutionized gastrointestinal (GI) diagnostics by providing a non-invasive, patient-centered approach to observing the digestive tract. Conceived in 2000 by Gavriel Iddan, CE employs a diminutive, ingestible capsule containing a high-resolution camera, LED lighting, and a power supply. It specializes in visualizing the small intestine, a region frequently unreachable by conventional endoscopy. CE helps detect and monitor disorders such as unexplained gastrointestinal bleeding, Crohn’s disease, and cancer, while presenting a lower procedural risk than conventional endoscopy. Unlike conventional techniques that necessitate anesthesia, CE reduces patient discomfort and complications. Nonetheless, its constraints, specifically the incapacity to conduct biopsies or therapeutic procedures, have spurred technical advancements. Five primary types of capsule endoscopes have emerged: steerable, magnetic, robotic, tethered, and hybrid. Their performance varies substantially. For example, the image sizes vary from 256 × 256 to 640 × 480 pixels, the fields of view (FOV) range from 140° to 360°, the battery life is between 8 and 15 h, and the frame rates range from 2 to 35 frames per second, contingent upon motion-adaptive capture. This study addresses a significant gap by methodically evaluating CE platforms, outlining their clinical preparedness, and examining the underexploited potential of artificial intelligence in improving diagnostic precision. Through the examination of technical requirements and clinical integration, we highlight the progress made in overcoming existing CE constraints and outline prospective developments for next-generation GI diagnostics. Full article
(This article belongs to the Special Issue Novel, Low Cost Technologies for Cancer Diagnostics and Therapeutics)

20 pages, 3530 KiB  
Article
Avalanche Photodiode-Based Deep Space Optical Uplink Communication in the Presence of Channel Impairments
by Wenjng Guo, Xiaowei Wu and Lei Yang
Photonics 2025, 12(6), 562; https://doi.org/10.3390/photonics12060562 - 3 Jun 2025
Viewed by 382
Abstract
Optical communication is a critical technology for future deep space exploration, offering substantial advantages in transmission capacity and spectrum utilization. This paper establishes a comprehensive theoretical framework for avalanche photodiode (APD)-based deep space optical uplink communication under combined channel impairments, including beam scintillation induced by atmospheric and coronal turbulence, pointing errors, angle-of-arrival (AOA) fluctuations, link attenuation, and background noise. A closed-form analytical channel model unifying these effects is derived and validated through Monte Carlo simulations. Webb and Gaussian approximations are employed to characterize APD output statistics, and theoretical symbol error rate (SER) expressions for pulse position modulation (PPM) are derived under diverse impairment scenarios. Numerical results demonstrate that the Webb model achieves higher accuracy by capturing APD gain dynamics, while the Gaussian approximation remains viable when the APD gain exceeds a channel fading-dependent threshold. Key system parameters, such as APD gain and field-of-view (FOV) angle, are analyzed: selecting the optimal APD gain is essential to achieving the best SER performance, and the FOV angle design balances tolerance to AOA fluctuations against noise suppression. These findings enable hardware optimization under size, weight, power, and cost (SWaP-C) constraints without compromising performance. Our work provides critical guidelines for designing robust APD-based deep space optical uplink communication systems. Full article
(This article belongs to the Special Issue Advanced Technologies in Optical Wireless Communications)
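For a feel of how the Gaussian approximation enters the SER analysis, here is a minimal Monte Carlo sketch for M-ary PPM detection with Gaussian-approximated APD output statistics. The parameter values, the absence of turbulence fading, and the thermal-noise term are illustrative assumptions, not the paper's settings, and the more accurate Webb model the paper favors is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def ppm_ser_gaussian(M=16, G=100.0, k_ion=0.02, n_s=50.0, n_b=5.0,
                     sigma_th=200.0, n_sym=200_000):
    """Monte Carlo SER of M-ary PPM with the APD output per slot modeled as
    Gaussian (mean/variance from mean gain G and McIntyre excess noise F)."""
    F = k_ion * G + (2.0 - 1.0 / G) * (1.0 - k_ion)      # excess noise factor
    mu1 = G * (n_s + n_b)                                # pulsed-slot mean
    var1 = G**2 * F * (n_s + n_b) + sigma_th**2
    mu0 = G * n_b                                        # empty-slot mean
    var0 = G**2 * F * n_b + sigma_th**2
    sig = rng.normal(mu1, np.sqrt(var1), n_sym)
    noise = rng.normal(mu0, np.sqrt(var0), (n_sym, M - 1))
    return np.mean(noise.max(axis=1) >= sig)             # max-slot detection

for G in (20, 50, 100, 200):
    print(f"G={G:4d}  SER~{ppm_ser_gaussian(G=G):.4f}")
```

Sweeping G reproduces the qualitative point of the abstract: excess noise grows with gain, so SER is minimized at an intermediate, channel-dependent APD gain.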

24 pages, 1196 KiB  
Article
Integrated Guidance and Control for Strap-Down Flight Vehicle: A Deep Reinforcement Learning Approach
by Qinglong Zhang, Bin Zhao, Yifu Jiang, Jingyan Zhang and Jiale Zhang
Aerospace 2025, 12(5), 400; https://doi.org/10.3390/aerospace12050400 - 1 May 2025
Viewed by 341
Abstract
This paper proposes a three-dimensional (3D) deep reinforcement learning-based integrated guidance and control (DRLIGC) method for strap-down flight vehicles subject to the narrow field-of-view (FOV) constraint of the strap-down seeker. By leveraging the data-driven nature of the deep reinforcement learning (DRL) algorithm, this method mitigates the challenges in integrated guidance and control (IGC) design that arise from model dependencies, thereby addressing the inherent complexity of the IGC model. Firstly, according to their different states and actions, the pitch and yaw channels of the six-degree-of-freedom (6-DOF) IGC model are modeled as Markov decision processes (MDPs). Secondly, a channel-by-channel progressive training method based on the twin delayed deep deterministic policy gradient (TD3) algorithm is proposed: the pitch- and yaw-channel agents are trained independently with TD3, which substantially reduces the complexity of the training process, while the roll channel is stabilized through the back-stepping method. Thirdly, a comprehensive reward function is designed to simultaneously enforce the narrow FOV constraint and enhance the target engagement capability; this function also mitigates the sparse-reward issue to some extent. Monte Carlo (MC) and comparative simulations show that the proposed DRLIGC method can effectively approach the target while maintaining the narrow FOV constraint and exhibits good robustness. Full article
(This article belongs to the Special Issue Integrated Guidance and Control for Aerospace Vehicles)
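The "comprehensive reward" idea can be sketched as a sum of dense distance shaping, a hard penalty on violating the seeker's FOV, and a terminal engagement bonus. The weights, thresholds, and additive structure below are illustrative assumptions, not the paper's actual reward:

```python
import numpy as np

def compound_reward(miss_dist, look_angle, fov_limit=np.radians(30),
                    hit_radius=5.0, w_fov=2.0, w_shape=0.01):
    """One-step reward sketch: dense shaping toward the target mitigates sparse
    rewards, the FOV term enforces the strap-down seeker constraint, and the
    terminal bonus rewards a successful engagement (hypothetical values)."""
    r = -w_shape * miss_dist            # dense shaping: close the distance
    if abs(look_angle) > fov_limit:     # seeker would lose the target
        r -= w_fov
    if miss_dist < hit_radius:          # terminal engagement bonus
        r += 100.0
    return r
```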

20 pages, 4551 KiB  
Article
A Full-Profile Measurement Method for an Inner Wall with Narrow-Aperture and Large-Cavity Parts Based on Line-Structured Light Rotary Scanning
by Zhengwen Li, Changshuai Fang and Xiaodong Zhang
Sensors 2025, 25(9), 2843; https://doi.org/10.3390/s25092843 - 30 Apr 2025
Viewed by 419
Abstract
As a special class of component, inner-wall-shaped parts with a narrow aperture and large cavity play an important role in industrial manufacturing, and accurately measuring the full profile of their inner surfaces is of great significance. Line-structured light scanning is a widely used method for inner-wall 3D measurement, usually applied as a linear scan of the inner wall of pipe-shaped parts. In view of the structural characteristics of narrow-aperture and large-cavity parts, this article establishes a multi-sensor scanning measurement system based on the line-structured light principle that adopts rotary scanning instead of the traditional linear scanning method. Additionally, a calibration method is introduced to resolve the challenges associated with calibrating the rotation axis parameters. Considering the structural constraints in measuring narrow-aperture and large-cavity parts, a structural optimization algorithm is designed to give the sensor a high theoretical measurement resolution while satisfying the geometric constraints of the measured parts. To minimize the size of the sensor, adjacent sub-sensors in the system are arranged with low overlapping fields of view (FOV). To solve the problem of multi-sensor registration under low FOV overlap, a calibration method based on the structural characteristics of the measurement system itself is proposed, which realizes low-cost and high-precision calibration of the multi-sensor system. In repeatability experiments on spherical cavity parts, the average deviation of the measured cavity radius was 6 μm with a standard deviation of 11.4 μm, verifying the feasibility of the proposed measurement system. Compared with existing system calibration methods, the proposed method improves the measurement accuracy by approximately 80%, demonstrating its effectiveness. Full article
(This article belongs to the Section Physical Sensors)
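The geometric core of rotary scanning is sweeping the line sensor's measured profile about a calibrated rotation axis. A sketch follows; the frame names are illustrative, and the paper's actual contribution (calibrating `axis_dir`/`axis_pt` and registering sub-sensors with low FOV overlap) is taken as given here:

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix about a unit axis by angle theta (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def profile_point_to_world(p_sensor, R_s2r, t_s2r, axis_dir, axis_pt, theta):
    """Map a point measured by the line-structured light sensor into the rig
    frame, then sweep it about the rotation axis at scan angle theta."""
    p_rig = R_s2r @ p_sensor + t_s2r               # sensor -> rig frame
    R = rodrigues(axis_dir, theta)                 # rotation at this scan angle
    return R @ (p_rig - axis_pt) + axis_pt         # rotate about the axis line
```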

20 pages, 5059 KiB  
Article
Optimized Multiple Unmanned Surface Vehicle Strategies for Target Tracking with Field of View Constraints
by Kai Xue, Zeyu Xu, Ping Wang, Qiuhong Li, Zhiqin Huang and Decheng Kong
J. Mar. Sci. Eng. 2025, 13(5), 889; https://doi.org/10.3390/jmse13050889 - 30 Apr 2025
Viewed by 501
Abstract
Unmanned surface vehicles (USVs) have garnered significant interest due to their potential in various maritime applications, particularly target tracking. However, when USVs perform rotational motion around a target, their operations are often hindered by limited fields of view (FOVs) within formations. In this paper, we establish a constraint propagation model to integrate formation constraints, motion constraints, and FOV constraints. Then, we propose four strategies to enforce FOV constraints. The proposed strategies are divided into two categories: those that adjust formation radius and those that adjust rotational velocity. The advantages and disadvantages of each approach are systematically analyzed, highlighting their suitability for various operational scenarios. The effectiveness and robustness of these strategies are validated through simulations. Full article
(This article belongs to the Section Ocean Engineering)
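A minimal sketch of the FOV bookkeeping underlying such strategies: check the target's bearing against the camera half-FOV and, for the velocity-adjustment family, throttle the rotational rate as the target drifts toward the FOV edge. The scaling rule is an illustrative assumption; the radius-adjustment family works analogously by enlarging the orbit to slow the bearing rate:

```python
import numpy as np

def bearing_error(pos, heading, target):
    """Signed angle from the USV heading to the line of sight (2D sketch)."""
    los = np.arctan2(target[1] - pos[1], target[0] - pos[0])
    return (los - heading + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)

def capped_rotation_speed(omega_des, bearing_err, half_fov, k=2.0):
    """Scale the formation's rotational velocity down near the FOV edge so
    heading control can recover the view (illustrative law)."""
    margin = max(half_fov - abs(bearing_err), 0.0) / half_fov  # 1 center, 0 edge
    return omega_des * min(1.0, k * margin)
```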

23 pages, 6376 KiB  
Article
Deep Reinforcement Learning-Based Uncalibrated Visual Servoing Control of Manipulators with FOV Constraints
by Xungao Zhong, Qiao Zhou, Yuan Sun, Shaobo Kang and Huosheng Hu
Appl. Sci. 2025, 15(8), 4447; https://doi.org/10.3390/app15084447 - 17 Apr 2025
Cited by 1 | Viewed by 789
Abstract
In this article, we put forward a novel uncalibrated image-based visual servoing (IBVS) method, designed for monocular hand–eye manipulators with Field-of-View (FOV) feature constraints and built on a deep reinforcement learning (DRL) approach. First, IBVS and its feature-loss problems are introduced. Then, an uncalibrated IBVS method is presented that addresses the feature-loss issue and improves servo efficiency with DRL. Specifically, the uncalibrated IBVS is integrated into a deep Q-network (DQN) control framework to ensure analytical stability. Additionally, a feature-constrained Q-network based on offline camera FOV environment feature mapping is designed and trained to adaptively output compensation for the IBVS controller, which helps keep the features within the camera’s FOV and improves servo performance. Finally, to demonstrate the effectiveness and practicality of the proposed DQN-based uncalibrated IBVS method, experiments are conducted on a 6-DOF manipulator, and the results validate the proposed approach. Full article
(This article belongs to the Special Issue Robotics and Intelligent Systems: Technologies and Applications)
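A hedged sketch of the control structure described: a classical IBVS velocity command driven by an estimated (hence uncalibrated) interaction matrix, an additive learned compensation term, and an online Broyden refinement of the estimate. The additive form and the Broyden update are common uncalibrated-VS choices assumed here for illustration, not confirmed details of the paper:

```python
import numpy as np

def ibvs_step(J_hat, e, q_comp, lam=0.5):
    """Classical IBVS, v = -lam * pinv(J_hat) @ e, plus a DQN-selected
    compensation q_comp intended to keep features inside the FOV."""
    return -lam * np.linalg.pinv(J_hat) @ e + q_comp

def broyden_update(J_hat, de, dq, alpha=0.1):
    """Rank-1 refinement of the interaction-matrix estimate from observed
    feature motion de under camera/robot motion dq (assumed technique)."""
    denom = float(dq @ dq)
    if denom > 1e-9:
        J_hat = J_hat + alpha * np.outer(de - J_hat @ dq, dq) / denom
    return J_hat
```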

23 pages, 7999 KiB  
Article
Adaptive Impact-Time-Control Cooperative Guidance Law for UAVs Under Time-Varying Velocity Based on Reinforcement Learning
by Zhenyu Liu, Gang Lei, Yong Xian, Leliang Ren, Shaopeng Li and Daqiao Zhang
Drones 2025, 9(4), 262; https://doi.org/10.3390/drones9040262 - 29 Mar 2025
Cited by 1 | Viewed by 531
Abstract
In this study, an adaptive impact-time-control cooperative guidance law based on deep reinforcement learning and accounting for field-of-view (FOV) constraints is proposed for high-speed UAVs with time-varying velocity. Firstly, a reinforcement learning framework for the high-speed UAV guidance problem is established. The optimization objective is to maximize the impact velocity, while the constraints on impact time, dive attack, and FOV are considered simultaneously. The time-to-go estimation method is improved so that it applies to high-speed UAVs with time-varying velocity. Then, to improve the applicability and robustness of the agent, environmental uncertainties, including aerodynamic parameter errors, observation noise, and random target maneuvers, are incorporated into the training process. Furthermore, inspired by the RL2 algorithm, a recurrent layer is introduced into both the policy and value networks, so the agent can automatically adapt to different mission scenarios by updating the hidden states of the recurrent layer. In addition, a compound reward function is designed to train the agent to satisfy the impact-time-control and dive-attack requirements simultaneously. Finally, the effectiveness and robustness of the proposed guidance law are validated through numerical simulations across a wide range of scenarios. Full article
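The usual starting point for impact-time control is the proportional-navigation time-to-go estimate; the kind of velocity-aware correction the abstract alludes to can be sketched by swapping the constant speed for a predicted average speed. The prediction model, which is the paper's actual contribution, is only stubbed here:

```python
def t_go_estimate(R, v_now, lead_angle, N=3.0, v_avg_pred=None):
    """Classic PN-based estimate t_go ~ (R/V)*(1 + lead_angle^2/(2*(2N-1))),
    with the constant speed V replaced by a predicted average speed over the
    remaining flight when available (illustrative stand-in for the paper's
    estimator)."""
    V = v_avg_pred if v_avg_pred is not None else v_now
    return (R / V) * (1.0 + lead_angle**2 / (2.0 * (2.0 * N - 1.0)))
```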

15 pages, 18745 KiB  
Article
Robust Adaptive Robotic Visual Servo Grasping with Guaranteed Field of View Constraints
by Liang Li, Junqi Luo, Peitao Hong, Wenhao Bai, Zhenyu Zhang and Liucun Zhu
Actuators 2024, 13(11), 457; https://doi.org/10.3390/act13110457 - 14 Nov 2024
Viewed by 1250
Abstract
Visual servo grasping technology has garnered significant attention in intelligent manufacturing for its potential to enhance both the flexibility and precision of robotic operations. However, traditional approaches frequently encounter challenges such as task failure when visual features move outside the camera’s field of view (FoV) and system instability due to interaction matrix singularities, limiting the technology’s effectiveness in complex environments. This study introduces a novel control strategy that leverages an asymmetric time-varying performance function to address the issue of visual feature escape. By strictly limiting the range of feature error, our approach ensures that visual features consistently remain within the camera’s FoV, thereby enhancing both transient and steady-state system performance. Furthermore, we have developed an adaptive damped least squares controller that dynamically adjusts the damping term to mitigate numerical instability resulting from interaction matrix singularities. The effectiveness of our method has been validated through grasping experiments involving significant rotations around the camera’s optical axis and other complex movements. Full article
(This article belongs to the Section Actuators for Robotics)
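Two standard building blocks behind the described approach, sketched under illustrative parameters (the paper's asymmetric envelope and exact damping-adaptation law are not reproduced):

```python
import numpy as np

def perf_bound(t, rho0=0.5, rho_inf=0.02, a=1.0):
    """Prescribed-performance envelope rho(t) = (rho0 - rho_inf)*exp(-a*t)
    + rho_inf; confining |feature error| below it keeps features in the FoV."""
    return (rho0 - rho_inf) * np.exp(-a * t) + rho_inf

def damped_ls_velocity(J, e, lam=0.8, mu0=1e-3, w_min=0.01):
    """Damped least squares v = -lam * J.T @ inv(J@J.T + mu*I) @ e; raising mu
    when the manipulability w drops is one common adaptation rule (assumed)."""
    JJt = J @ J.T
    w = np.sqrt(max(np.linalg.det(JJt), 0.0))        # manipulability measure
    mu = mu0 if w > w_min else mu0 * (1.0 + (w_min - w) / w_min)
    return -lam * J.T @ np.linalg.solve(JJt + mu * np.eye(JJt.shape[0]), e)
```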

17 pages, 4978 KiB  
Article
Illumination Model of Lunar Terrain around Lunar Laser Retroreflectors Based on LOLA Data
by Yuzuo Li, Xida Han, Xianlin Wu, Xudong Lin, Qianji Zhao, Chengkai Zhou and Yong Yan
Remote Sens. 2024, 16(17), 3195; https://doi.org/10.3390/rs16173195 - 29 Aug 2024
Viewed by 1277
Abstract
Lunar laser retroreflectors are a crucial target for lunar laser ranging (LLR). During LLR operations at all lunar laser ranging stations, the considerable distance between the Earth and the Moon makes it impossible to visually identify the position of the lunar laser retroreflectors within the camera’s field of view (FOV). Typically, the camera is used to observe lunar surface features illuminated by sunlight, and the telescope is then aligned with the corresponding area for accurate positioning. Based on lunar digital elevation model (DEM) data and the INPOP19a planetary ephemeris, our research presents an illumination model of the lunar terrain around the lunar laser retroreflectors, suited to the 1' camera FOV constraint at the TianQin laser ranging station. The illumination model is linked to Coordinated Universal Time (UTC) and DEM data: given a UTC time, the illumination conditions of the terrain surrounding the five lunar laser retroreflectors can be determined for that instant. The terrain surrounding the Apollo 15 and Lunokhod 2 retroreflectors exhibits distinctive features under illumination, making these retroreflectors easier to locate than the others. This illumination model offers a better solution to the problem of aligning laser ranging stations with lunar laser retroreflectors. Full article
(This article belongs to the Section Remote Sensing Image Processing)
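The geometric core of such an illumination model is a sun-visibility test on the DEM; a brute-force sketch follows. The grid conventions, radian azimuth/elevation inputs, and one-cell ray march are assumptions, and the UTC-to-Sun-geometry step via the INPOP19a ephemeris is taken as given:

```python
import numpy as np

def lit_mask(dem, cell_m, sun_az, sun_el, max_steps=500):
    """A DEM cell is sunlit if no terrain along the horizontal sun direction
    rises above the ray toward the Sun (slow O(h*w*steps) sketch; fine for
    small patches around a retroreflector)."""
    h, w = dem.shape
    dx, dy = np.sin(sun_az), np.cos(sun_az)        # east, north unit components
    lit = np.ones(dem.shape, dtype=bool)
    for i in range(h):
        for j in range(w):
            z0 = dem[i, j]
            for s in range(1, max_steps):
                x, y = j + s * dx, i - s * dy      # march one cell toward the Sun
                if not (0 <= int(y) < h and 0 <= int(x) < w):
                    break                          # left the DEM: cell is sunlit
                if dem[int(y), int(x)] > z0 + s * cell_m * np.tan(sun_el):
                    lit[i, j] = False              # terrain blocks the Sun
                    break
    return lit
```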

19 pages, 21135 KiB  
Article
Rendezvous of Nonholonomic Unmanned Ground Vehicles with Ultra-Wide-Angle Cameras
by Lijun Li, Yuanda Wang, Chao Xiong and Wei Shang
World Electr. Veh. J. 2024, 15(8), 370; https://doi.org/10.3390/wevj15080370 - 16 Aug 2024
Cited by 2 | Viewed by 1177
Abstract
In this paper, a time-varying delay output feedback control method based on the potential barrier function is proposed to address the communication delay and field-of-view (FOV) constraints that arise when Unmanned Ground Vehicle (UGV) clusters communicate using ultra-wide-angle cameras. First, a second-order oscillator and an output feedback controller are utilized to feed back the positions and directions of neighboring vehicles by exchanging control quantities and to handle the time-varying delay in the position computation of the ultra-wide-angle camera. Because the target radiation range perceived by the camera is limited, an FOV-constrained potential function is adopted to optimize the design of the sliding mode surface. The stability of the closed-loop control system is analyzed by applying the Lyapunov method. Finally, simulation experiments verify the effectiveness of the consensus scheme in addressing the communication delay and FOV constraint problem under two different initial conditions. Full article
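A minimal sketch of an FOV barrier potential of the kind described: finite near the optical axis and unbounded as a neighbor's bearing approaches the camera half-FOV, so its gradient pushes the formation to keep neighbors visible. The rational form below is an illustrative assumption, not the paper's function:

```python
def fov_barrier(b, half_fov, k=1.0, eps=1e-9):
    """V(b) = k*b**2 / (h**2 - b**2): zero at the optical axis (b = 0) and
    diverging as |b| approaches half_fov = h."""
    return k * b**2 / max(half_fov**2 - b**2, eps)

def fov_barrier_grad(b, half_fov, k=1.0, eps=1e-9):
    """dV/db = 2*k*b*h**2 / (h**2 - b**2)**2, used as the repulsive term."""
    g = max(half_fov**2 - b**2, eps)
    return 2.0 * k * b * half_fov**2 / g**2
```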

18 pages, 12761 KiB  
Article
Robot-Assisted Augmented Reality (AR)-Guided Surgical Navigation for Periacetabular Osteotomy
by Haoyan Ding, Wenyuan Sun and Guoyan Zheng
Sensors 2024, 24(14), 4754; https://doi.org/10.3390/s24144754 - 22 Jul 2024
Cited by 2 | Viewed by 2576
Abstract
Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of developmental dysplasia of the hip (DDH). However, due to the complex anatomical structure around the hip joint and the limited field of view (FoV) during the surgery, it is challenging for surgeons to perform a PAO surgery. To address this challenge, we propose a robot-assisted, augmented reality (AR)-guided surgical navigation system for PAO. The system mainly consists of a robot arm, an optical tracker, and a Microsoft HoloLens 2 headset, which is a state-of-the-art (SOTA) optical see-through (OST) head-mounted display (HMD). For AR guidance, we propose an optical marker-based AR registration method to estimate a transformation from the optical tracker coordinate system (COS) to the virtual space COS such that the virtual models can be superimposed on the corresponding physical counterparts. Furthermore, to guide the osteotomy, the developed system automatically aligns a bone saw with osteotomy planes planned in preoperative images. Then, it provides surgeons with not only virtual constraints to restrict movement of the bone saw but also AR guidance for visual feedback without sight diversion, leading to higher surgical accuracy and improved surgical safety. Comprehensive experiments were conducted to evaluate both the AR registration accuracy and osteotomy accuracy of the developed navigation system. The proposed AR registration method achieved an average mean absolute distance error (mADE) of 1.96 ± 0.43 mm. The robotic system achieved an average center translation error of 0.96 ± 0.23 mm, an average maximum distance of 1.31 ± 0.20 mm, and an average angular deviation of 3.77 ± 0.85°. These results demonstrate both the AR registration accuracy and the osteotomy accuracy of the developed system. Full article
(This article belongs to the Special Issue Augmented Reality-Based Navigation System for Healthcare)
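The registration problem stated above (optical-tracker COS to virtual-space COS) is classically solved by a least-squares rigid transform over matched marker points. A sketch via the Kabsch/Umeyama construction follows; the paper's marker layout and any refinement steps are not reproduced:

```python
import numpy as np

def rigid_register(P_tracker, P_virtual):
    """Rigid (R, t) minimizing sum ||R p_i + t - q_i||^2 over matched 3D
    points given as (n, 3) arrays; returns p_virtual ~ R @ p_tracker + t."""
    cp, cq = P_tracker.mean(axis=0), P_virtual.mean(axis=0)
    H = (P_tracker - cp).T @ (P_virtual - cq)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```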

25 pages, 6757 KiB  
Article
Simulation-Based Optimization of Path Planning for Camera-Equipped UAVs That Considers the Location and Time of Construction Activities
by Yusheng Huang and Amin Hammad
Remote Sens. 2024, 16(13), 2445; https://doi.org/10.3390/rs16132445 - 3 Jul 2024
Viewed by 1657
Abstract
Automated progress monitoring of construction sites using cameras has been proposed in recent years. Although previous studies have tried to identify the most informative camera views according to 4D BIM to optimize installation plans, video collection using fixed or pan-tilt-zoom cameras is still limited by their inability to adapt to the dynamic construction environment. Therefore, considerable attention has been paid to using camera-equipped unmanned aerial vehicles (CE-UAVs), which provide mobility for the camera, allowing it to fit its field of view automatically to the important parts of the construction site while avoiding occlusions. However, previous studies on optimizing video collection with CE-UAVs are limited to the scanning of static objects on construction sites. Given the growing interest in monitoring construction activities, the existing methods are inadequate to meet the requirements for the collection of high-quality videos. In this study, the following requirements for and constraints on collecting construction-activity videos have been identified: (1) the FOV should be optimized to cover the areas of interest with the minimum possible occlusion; (2) the path of the UAV should be optimized to allow efficient data collection on multiple construction activities over a large construction site, considering the locations of activities at specific times; and (3) the data collection should consider the requirements of computer vision (CV) processes. Aiming to address these requirements and constraints, a method has been proposed to perform simulation-based optimization of path planning for CE-UAVs to allow automated and effective collection of videos of construction activities based on a detailed 4D simulation that includes a micro-schedule and the corresponding workspaces. This method can identify the most informative views of the workspaces and the optimal path for data capture. A case study was developed to demonstrate the feasibility of the proposed method. Full article
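A drastically reduced sketch of the planning idea: per scheduled activity, pick the best-scoring viewpoint and visit the picks in time order. The data layout and greedy policy are illustrative assumptions; the paper's simulation-based optimizer, occlusion handling, and CV requirements are folded into the `score` and `travel_time` callables:

```python
def plan_capture(activities, viewpoints, score, travel_time):
    """activities: dicts with "id", "start", "end"; viewpoints: candidate
    poses; score(act, vp) and travel_time(a, b) stand in for the paper's
    4D-simulation-based checks (occlusion, workspace coverage, CV needs)."""
    plan, t, pos = [], 0.0, None
    for act in sorted(activities, key=lambda a: a["start"]):
        best = max(viewpoints, key=lambda v: score(act, v))
        if pos is not None:
            t += travel_time(pos, best)            # fly to the next viewpoint
        t = max(t, act["start"])                   # wait for the activity start
        if t <= act["end"]:                        # capture only if ongoing
            plan.append((act["id"], best, t))
            pos = best
    return plan
```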

18 pages, 6642 KiB  
Article
Enlarged Eye-Box Accommodation-Capable Augmented Reality with Hologram Replicas
by Woonchan Moon and Joonku Hahn
Sensors 2024, 24(12), 3930; https://doi.org/10.3390/s24123930 - 17 Jun 2024
Cited by 1 | Viewed by 1911
Abstract
Augmented reality (AR) technology has been widely applied across a variety of fields, with head-up displays (HUDs) being one of its prominent uses, offering immersive three-dimensional (3D) experiences and interaction with digital content and the real world. AR-HUDs face challenges such as a limited field of view (FOV), a small eye-box, a bulky form factor, and the absence of accommodation cues, often forcing trade-offs between these factors. Recently, optical waveguides based on a pupil-replication process have attracted increasing attention as optical elements for their compact form factor and exit-pupil expansion. Despite these advantages, current waveguide displays struggle to integrate visual information with real scenes because they do not produce accommodation-capable virtual content. In this paper, we introduce a lensless accommodation-capable holographic system based on a waveguide. Our system aims to expand the eye-box at the optimal viewing distance that provides the maximum FOV. We devised a formalized computer-generated hologram (CGH) algorithm based on a bold assumption and two constraints and successfully performed numerical observation simulation. In optical experiments, accommodation-capable images with a maximum horizontal FOV of 7.0 degrees were successfully observed within an expanded eye-box of 9.18 mm at an optimal observation distance of 112 mm. Full article
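As a rough trigonometric cross-check on the reported numbers (not a computation from the paper), the lateral extent subtended by the quoted horizontal FOV at the optimal viewing distance is:

```python
import math

d_view_mm, fov_h_deg, eyebox_mm = 112.0, 7.0, 9.18   # values from the abstract

extent_mm = 2 * d_view_mm * math.tan(math.radians(fov_h_deg / 2))
print(f"FOV-subtended extent at {d_view_mm} mm: {extent_mm:.1f} mm")  # ~13.7 mm
# The pupil may translate across the 9.18 mm eye-box while this region stays
# visible; this is the FOV/eye-box trade the hologram-replica scheme negotiates.
```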

14 pages, 4223 KiB  
Article
Deep Reinforcement Learning for the Visual Servoing Control of UAVs with FOV Constraint
by Gui Fu, Hongyu Chu, Liwen Liu, Linyi Fang and Xinyu Zhu
Drones 2023, 7(6), 375; https://doi.org/10.3390/drones7060375 - 3 Jun 2023
Cited by 7 | Viewed by 3405
Abstract
Visual servoing is a control method that utilizes image feedback to control robot motion, and it has been widely applied in unmanned aerial vehicle (UAV) motion control. However, due to field-of-view (FOV) constraints, visual servoing still faces challenges such as easy target loss and low control efficiency. To address these issues, a visual servoing control method for UAVs based on deep reinforcement learning (DRL) is proposed, which dynamically adjusts the servo gain in real time to avoid target loss and improve control efficiency. Firstly, a Markov model of visual servoing control for a UAV under field-of-view constraints is established, which consists of a quintuple and accounts for the improvement of control efficiency. Secondly, an improved deep Q-network (DQN) algorithm with a target network and experience replay is designed to solve the Markov model. In addition, two independent agents are designed to adjust the linear and angular velocity servo gains, respectively, in order to enhance control performance. In the simulation environment, the effectiveness of the proposed method was verified using a monocular camera. Full article
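A hedged sketch of the two-agent gain-scheduling idea: each DQN picks a discrete gain for its channel at every control step. The action set, its range, and the pseudo-inverse servo form are illustrative assumptions:

```python
import numpy as np

GAIN_CHOICES = np.linspace(0.1, 2.0, 8)    # hypothetical discrete DQN actions

def servo_command(Jp_e_lin, Jp_e_ang, a_lin, a_ang):
    """Apply the gains chosen by the linear- and angular-channel agents to the
    precomputed IBVS terms pinv(J) @ e of each channel."""
    v = -GAIN_CHOICES[a_lin] * Jp_e_lin    # linear velocity command
    w = -GAIN_CHOICES[a_ang] * Jp_e_ang    # angular velocity command
    return v, w
```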

12 pages, 2057 KiB  
Article
Communication, Computing, and Caching Trade-Off in VR Networks
by Yuqing Feng, Dongyu Wang and Yanzhao Hou
Electronics 2023, 12(7), 1577; https://doi.org/10.3390/electronics12071577 - 27 Mar 2023
Cited by 2 | Viewed by 1925
Abstract
As technology continues to advance, virtual reality (VR) video services are able to provide an increasingly realistic video experience. VR applications remain limited, however, since creating an immersive experience requires processing and delivering enormous amounts of data. Mobile edge computing (MEC) is a potential technique to decrease the operation time for VR as well as its energy use. In this study, we develop a VR network in which several MEC servers can supply field-of-view (FOV) files to a VR device in order to satisfy the transmission requirements of VR video service and improve the quality of experience. In this architecture, both the projection from 2D FOV to 3D FOV and the caching of data can take place on an MEC server or on the VR device. A cooperative computational offloading and caching strategy is developed as a decision matrix to reduce transmission requirements under the service-time constraint. The VR video service mechanism is examined through the decision matrix, and the trade-off between communication, caching, and computation (3C trade-off) is implemented by means of a closed-form expression for the decision matrix. Simulation results show that the proposed technique performs close to optimally compared with competing methods. Full article
(This article belongs to the Section Networks)
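A toy version of the per-request decision the abstract describes: compare cache-hit and recompute latencies at the MEC server versus the VR device under the service-time cap. The cost model (one FOV size for both 2D and 3D forms, no queueing) is an illustrative simplification:

```python
def best_option(fov_bits, cached_at_mec, cached_at_dev, rate_bps,
                f_mec_hz, f_dev_hz, cycles_per_bit, t_max_s):
    """Return the lowest-latency feasible way to serve one FOV request, or
    None if no option meets the service-time constraint."""
    opts = {}
    if cached_at_mec:                       # 3D FOV cached at the edge
        opts["mec_cache"] = fov_bits / rate_bps
    else:                                   # project at the edge, then transmit
        opts["mec_compute"] = (fov_bits * cycles_per_bit / f_mec_hz
                               + fov_bits / rate_bps)
    if cached_at_dev:                       # already on the headset
        opts["dev_cache"] = 0.0
    else:                                   # transmit 2D FOV, project locally
        opts["dev_compute"] = (fov_bits / rate_bps
                               + fov_bits * cycles_per_bit / f_dev_hz)
    feasible = {k: v for k, v in opts.items() if v <= t_max_s}
    return min(feasible, key=feasible.get) if feasible else None
```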
