Search Results (5)

Search Parameters:
Keywords = CSRT tracker

16 pages, 4972 KB  
Article
Real-Time Object Localization Using a Fuzzy Controller for a Vision-Based Drone
by Ping-Sheng Wang, Chien-Hung Lin and Cheng-Ta Chuang
Inventions 2024, 9(1), 14; https://doi.org/10.3390/inventions9010014 - 12 Jan 2024
Cited by 5 | Viewed by 2706
Abstract
This study proposes a drone system with visual identification and tracking capabilities to address the issue of limited communication bandwidth for drones. This system can lock onto a target during flight and transmit its simple features to the ground station, thereby reducing communication bandwidth demands. RealFlight is used as the simulation environment to validate the proposed drone algorithm. The core components of the system include DeepSORT and MobileNet lightweight models for target tracking. The designed fuzzy controller enables the system to adjust the drone's motors, gradually moving the locked target to the center of the frame and maintaining continuous tracking. Additionally, this study introduces switching from multi-object tracking to single-object tracking with the channel and spatial reliability tracker (CSRT), together with multithreading, to enhance the system's execution speed. The experimental results demonstrate that the system can accurately adjust the target to the frame's center within approximately 1.5 s, maintaining precision within ±0.5 degrees. On the Jetson Xavier NX embedded platform, the average frame rate for the multi-object tracker was only 1.37 FPS, with a standard deviation of 1.05. In contrast, the single-object CSRT tracker exhibited a significant improvement, achieving an average of 9.77 FPS with a standard deviation of 1.86. This study provides an effective solution for visual tracking in drone systems that is efficient and conserves communication bandwidth. The validation on the embedded platform highlights its practicality and performance.
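As a rough illustration of the single-object stage described above (not the authors' code), the sketch below uses OpenCV's CSRT implementation to follow a locked target and computes the normalized offset of the box centre from the frame centre, the kind of error signal a fuzzy or PID controller could drive to zero; the function and variable names are placeholders.

```python
# Minimal sketch (not the authors' implementation): lock onto a box with
# OpenCV's CSRT tracker and yield the normalized offset of the target from
# the frame centre, which a fuzzy (or PID) controller could consume.
import cv2

def track_and_center(video_path, init_box):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read video")

    # Depending on the OpenCV build, the factory may live under cv2.legacy.
    create_csrt = getattr(cv2, "TrackerCSRT_create", None) or cv2.legacy.TrackerCSRT_create
    tracker = create_csrt()
    tracker.init(frame, init_box)  # init_box = (x, y, w, h) of the locked target

    h, w = frame.shape[:2]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, (x, y, bw, bh) = tracker.update(frame)
        if not ok:
            continue  # target lost this frame; a real system would re-detect
        # Offset of the box centre from the frame centre, normalized to [-1, 1].
        err_x = ((x + bw / 2) - w / 2) / (w / 2)
        err_y = ((y + bh / 2) - h / 2) / (h / 2)
        yield err_x, err_y  # controller input: steer the drone to drive these to 0
    cap.release()
```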

37 pages, 23114 KB  
Article
SPT: Single Pedestrian Tracking Framework with Re-Identification-Based Learning Using the Siamese Model
by Sumaira Manzoor, Ye-Chan An, Gun-Gyo In, Yueyuan Zhang, Sangmin Kim and Tae-Yong Kuc
Sensors 2023, 23(10), 4906; https://doi.org/10.3390/s23104906 - 19 May 2023
Cited by 8 | Viewed by 3291
Abstract
Pedestrian tracking is a challenging task in the area of visual object tracking research, and it is a vital component of various vision-based applications such as surveillance systems, human-following robots, and autonomous vehicles. In this paper, we propose a single pedestrian tracking (SPT) framework for identifying each instance of a person across all video frames through a tracking-by-detection paradigm that combines deep learning and metric learning-based approaches. The SPT framework comprises three main modules: detection, re-identification, and tracking. Our contribution is a significant improvement in the results by designing two compact metric learning-based models using Siamese architecture in the pedestrian re-identification module and combining one of the most robust re-identification models for data association with the pedestrian detector in the tracking module. We carried out several analyses to evaluate the performance of our SPT framework for single pedestrian tracking in the videos. The results of the re-identification module validate that our two proposed re-identification models surpass existing state-of-the-art models with increased accuracies of 79.2% and 83.9% on the large dataset and 92% and 96% on the small dataset. Moreover, the proposed SPT tracker, along with six state-of-the-art (SOTA) tracking models, has been tested on various indoor and outdoor video sequences. A qualitative analysis considering six major environmental factors verifies the effectiveness of our SPT tracker under illumination changes, appearance variations due to pose changes, changes in target position, and partial occlusions. In addition, quantitative analysis based on experimental results also demonstrates that our proposed SPT tracker outperforms the GOTURN, CSRT, KCF, and SiamFC trackers with a success rate of 79.7%, while beating the DiamSiamRPN, SiamFC, CSRT, GOTURN, and SiamMask trackers with an average of 18 tracking frames per second.
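For readers unfamiliar with Siamese re-identification, the following minimal PyTorch sketch (not the paper's SPT models, whose architecture and training are more elaborate) shows the basic idea: a shared embedding network maps two pedestrian crops to normalized feature vectors whose cosine similarity scores whether they depict the same person. All layer sizes and the threshold are illustrative assumptions.

```python
# Minimal sketch of Siamese-style re-identification scoring (not the paper's
# exact SPT models): a shared embedding network maps two pedestrian crops to
# feature vectors, and their cosine similarity decides whether they match.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-length embedding

net = EmbeddingNet()
crop_a = torch.randn(1, 3, 128, 64)   # query pedestrian crop
crop_b = torch.randn(1, 3, 128, 64)   # candidate detection crop
similarity = (net(crop_a) * net(crop_b)).sum(dim=1)  # cosine similarity in [-1, 1]
same_person = similarity.item() > 0.7  # threshold is illustrative only
```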
(This article belongs to the Special Issue Sensors for Object Detection, Classification and Tracking II)

16 pages, 4421 KB  
Article
Next in Surgical Data Science: Autonomous Non-Technical Skill Assessment in Minimally Invasive Surgery Training
by Renáta Nagyné Elek and Tamás Haidegger
J. Clin. Med. 2022, 11(24), 7533; https://doi.org/10.3390/jcm11247533 - 19 Dec 2022
Cited by 11 | Viewed by 3517
Abstract
Background: It is well understood that surgical skills largely define patient outcomes both in Minimally Invasive Surgery (MIS) and Robot-Assisted MIS (RAMIS). Non-technical surgical skills, including stress and distraction resilience, decision-making and situation awareness, also contribute significantly. Autonomous, technologically supported objective skill assessment can be an efficient tool to improve patient outcomes without the need to involve expert surgeon reviewers. However, autonomous non-technical skill assessment is unstandardized and open to further research. Recently, Surgical Data Science (SDS) has become able to improve the quality of interventional healthcare with big data and data processing techniques (capture, organization, analysis and modeling of data). SDS techniques can also help to achieve autonomous non-technical surgical skill assessment. Methods: An MIS training experiment is introduced to autonomously assess non-technical skills and to analyse the workload based on sensory data (video image and force) and a self-rating questionnaire (SURG-TLX). A sensorized surgical skill training phantom and an adjacent training workflow were designed to simulate a complicated Laparoscopic Cholecystectomy task: the dissection of the cholecyst's peritoneal layer and the safe clip application on the cystic artery in an uncomfortable environment. A total of 20 training sessions were recorded from 7 subjects (3 non-medical participants, 2 residents, 1 expert surgeon and 1 expert MIS surgeon). Workload and learning curves were studied via SURG-TLX. For autonomous non-technical skill assessment, video image data with instruments tracked by the Channel and Spatial Reliability Tracker (CSRT), together with force data, were utilized. Autonomous time series classification was achieved by a Fully Convolutional Neural Network (FCN), where the class labels were provided by SURG-TLX. Results: With unpaired t-tests, significant differences were found between the two groups (medical professionals and control) in certain workload components (mental demands, physical demands, and situational stress, p<0.0001, 95% confidence interval; p<0.05 for task complexity). With paired t-tests, the learning curves of the trials were also studied; task complexity showed a significant difference between the first and the second trials. Autonomous non-technical skill classification was based on the FCN, applying the tool trajectories and force data as input. This resulted in high accuracy (85%) for temporal demands classification based on the z component of the applied forces and 75% accuracy for classifying mental demands/situational stress with the x component of the applied forces, validated with Leave-One-Out Cross-Validation. Conclusions: Non-technical skills and workload components can be classified autonomously based on measured training data. SDS can be effective via automated non-technical skill assessment.
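The FCN mentioned here follows the common fully convolutional design for time series classification; the sketch below is a generic PyTorch version of that idea (not the study's exact network or hyperparameters), taking a one-channel force signal and producing per-class workload scores.

```python
# Minimal sketch of a Fully Convolutional Network for time-series
# classification (not the study's exact model): Conv1d blocks over the
# force/trajectory signal, global average pooling, and a class head whose
# labels would come from SURG-TLX workload ratings.
import torch
import torch.nn as nn

class FCNClassifier(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):             # x: (batch, channels, time)
        z = self.body(x).mean(dim=2)  # global average pooling over time
        return self.head(z)

model = FCNClassifier(in_channels=1, n_classes=2)
force_z = torch.randn(4, 1, 500)      # e.g., z-component of force, 500 samples
logits = model(force_z)               # per-class scores for workload level
```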
(This article belongs to the Special Issue Advances in Robot-Assisted Minimally Invasive Surgery)

17 pages, 6408 KB  
Article
ABOships—An Inshore and Offshore Maritime Vessel Detection Dataset with Precise Annotations
by Bogdan Iancu, Valentin Soloviev, Luca Zelioli and Johan Lilius
Remote Sens. 2021, 13(5), 988; https://doi.org/10.3390/rs13050988 - 5 Mar 2021
Cited by 56 | Viewed by 6698
Abstract
The availability of domain-specific datasets is a key challenge in object detection. Datasets of inshore and offshore maritime vessels are no exception, with a limited number of studies addressing maritime vessel detection on such datasets. For that reason, we collected a dataset consisting of images of maritime vessels taking into account different factors: background variation, atmospheric conditions, illumination, visible proportion, occlusion and scale variation. Vessel instances (including nine types of vessels), seamarks and miscellaneous floaters were precisely annotated: we employed a first round of labelling and subsequently used the CSRT tracker to trace inconsistencies and relabel inadequate instances. Moreover, we evaluated the out-of-the-box performance of four prevalent object detection algorithms (Faster R-CNN, R-FCN, SSD and EfficientDet). The algorithms were previously trained on the Microsoft COCO dataset. We compared their accuracy based on feature extractor and object size. Our experiments showed that Faster R-CNN with Inception-Resnet v2 outperforms the other algorithms, except in the large object category, where EfficientDet surpasses it.
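The relabelling workflow can be pictured roughly as follows; this sketch is an assumed reconstruction rather than the authors' tooling: a first-round annotation is propagated to the next frame with OpenCV's CSRT tracker, and the existing label is flagged for review when its IoU with the tracker's prediction is low. The threshold and helper names are placeholders.

```python
# Illustrative sketch (assumed workflow, not the authors' exact tooling):
# propagate a first-round annotation to the next frame with CSRT and flag
# the label as inconsistent when its IoU with the tracker prediction is low.
import cv2

def iou(a, b):
    # boxes are (x, y, w, h)
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def flag_inconsistent(prev_frame, next_frame, prev_box, next_box, thr=0.5):
    # The factory may live under cv2.legacy in newer OpenCV builds.
    create_csrt = getattr(cv2, "TrackerCSRT_create", None) or cv2.legacy.TrackerCSRT_create
    tracker = create_csrt()
    tracker.init(prev_frame, prev_box)          # first-round label in frame t
    ok, predicted = tracker.update(next_frame)  # CSRT prediction in frame t+1
    if not ok:
        return True                             # no prediction: review manually
    return iou(predicted, next_box) < thr       # low overlap: candidate for relabelling
```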

23 pages, 9900 KB  
Article
Detection-Based Object Tracking Applied to Remote Ship Inspection
by Jing Xie, Erik Stensrud and Torbjørn Skramstad
Sensors 2021, 21(3), 761; https://doi.org/10.3390/s21030761 - 23 Jan 2021
Cited by 19 | Viewed by 7147
Abstract
We propose a detection-based tracking system for automatically processing maritime ship inspection videos and predicting suspicious areas where cracks may exist. This system consists of two stages. Stage one uses a state-of-the-art object detection model, i.e., RetinaNet, which is customized with certain modifications and the optimal anchor setting for detecting cracks in the ship inspection images/videos. Stage two is an enhanced tracking system including two key components. The first component is a state-of-the-art tracker, namely, the Channel and Spatial Reliability Tracker (CSRT), with improvements to handle model drift in a simple manner. The second component is a tailored data association algorithm which creates tracking trajectories for the cracks being tracked. This algorithm is based not only on the intersection over union (IoU) of the detections and tracking updates but also on their respective areas when associating detections to the existing trackers. Consequently, the tracking results compensate for the detection jitters which could otherwise lead to both tracking jitter and the creation of redundant trackers. Our study shows that the proposed detection-based tracking system achieves reasonable performance on automatically analyzing ship inspection videos. It has proven the feasibility of applying deep neural network-based computer vision technologies to automating remote ship inspection. The proposed system is being matured and will be integrated into a digital infrastructure which will facilitate the whole ship inspection process.
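The association rule can be sketched as follows; the details (greedy matching, thresholds, area-ratio definition) are assumptions for illustration and not the authors' exact algorithm: a detection is matched to a tracker box only when both their IoU and the ratio of their areas are high enough, so jittery detections are less likely to spawn redundant trackers.

```python
# Illustrative sketch of the kind of association rule the abstract describes
# (assumed details): a detection is matched to an existing tracker only when
# both the IoU and the area ratio of the two boxes are high enough, which
# keeps jittery detections from spawning redundant trackers.
def iou(a, b):
    # boxes are (x, y, w, h)
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def associate(detections, tracker_boxes, iou_thr=0.3, area_thr=0.5):
    """Greedily pair each detection with at most one tracker box."""
    matches, unmatched, used = [], [], set()
    for d_idx, det in enumerate(detections):
        best, best_score = None, 0.0
        for t_idx, trk in enumerate(tracker_boxes):
            if t_idx in used:
                continue
            det_area, trk_area = det[2] * det[3], trk[2] * trk[3]
            area_ratio = min(det_area, trk_area) / max(det_area, trk_area)
            score = iou(det, trk)
            if score >= iou_thr and area_ratio >= area_thr and score > best_score:
                best, best_score = t_idx, score
        if best is None:
            unmatched.append(d_idx)       # would spawn a new tracker
        else:
            used.add(best)
            matches.append((d_idx, best))
    return matches, unmatched
```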
(This article belongs to the Special Issue Visual Sensors for Object Tracking and Recognition)
