Search Results (34)

Search Parameters:
Keywords = video stitching

19 pages, 3395 KiB  
Article
End-to-End Online Video Stitching and Stabilization Method Based on Unsupervised Deep Learning
by Pengyuan Wang, Pinle Qin, Rui Chai, Jianchao Zeng, Pengcheng Zhao, Zuojun Chen and Bingjie Han
Appl. Sci. 2025, 15(11), 5987; https://doi.org/10.3390/app15115987 - 26 May 2025
Viewed by 689
Abstract
The limited field of view, cumulative inter-frame jitter, and dynamic parallax interference in handheld video stitching often lead to misalignment and distortion. In this paper, we propose an end-to-end, unsupervised deep-learning framework that jointly performs real-time video stabilization and stitching. First, a collaborative optimization architecture allows the stabilization and stitching modules to share parameters and propagate errors through a fully differentiable network, ensuring consistent image alignment. Second, a Markov trajectory smoothing strategy in relative coordinates models inter-frame motion as incremental relationships, effectively reducing cumulative errors. Third, a dynamic attention mask generates spatiotemporal weight maps based on foreground motion prediction, suppressing misalignment caused by dynamic objects. Experimental evaluation on diverse handheld sequences shows that our method achieves higher stitching quality, lower geometric distortion rates, and improved video stability compared to state-of-the-art baselines, while maintaining real-time processing. Ablation studies validate that relative trajectory modeling substantially mitigates long-term jitter and that the dynamic attention mask enhances stitching accuracy in dynamic scenes. These results demonstrate that the proposed framework provides a robust solution for high-quality, real-time handheld video stitching.
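As a loose illustration of the relative-coordinate idea, the sketch below smooths a shaky 2D camera path by filtering inter-frame increments rather than absolute positions, so smoothing errors do not accumulate over long sequences. The translation-only trajectory, window size, and moving-average filter are assumptions for illustration; the paper's method operates on learned motion inside a differentiable network.

```python
import numpy as np

def smooth_relative_trajectory(positions: np.ndarray, window: int = 9) -> np.ndarray:
    """Smooth a camera path by filtering inter-frame increments, not poses."""
    increments = np.diff(positions, axis=0)        # x_t - x_{t-1}: a Markov view
    kernel = np.ones(window) / window              # moving-average filter
    smoothed = np.stack([np.convolve(increments[:, d], kernel, mode="same")
                         for d in range(increments.shape[1])], axis=1)
    # re-integrate the smoothed increments from the first pose
    return np.vstack([positions[:1],
                      positions[0] + np.cumsum(smoothed, axis=0)])

rng = np.random.default_rng(0)
shaky = np.cumsum(np.ones((100, 2)) + rng.normal(0, 0.8, (100, 2)), axis=0)
stable = smooth_relative_trajectory(shaky)   # same shape, far less jitter
```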
(This article belongs to the Collection Trends and Prospects in Multimedia)

24 pages, 6895 KiB  
Article
Panoramic Video Synopsis on Constrained Devices for Security Surveillance
by Palash Yuvraj Ingle and Young-Gab Kim
Systems 2025, 13(2), 110; https://doi.org/10.3390/systems13020110 - 11 Feb 2025
Cited by 1 | Viewed by 1025
Abstract
As the global demand for surveillance cameras increases, the volume of digital footage grows with it. Analyzing and extracting meaningful content from footage is a resource-depleting and laborious effort. The traditional video synopsis technique constructs a short video by relocating objects in the time and space domains. However, it is computationally expensive, and the resulting synopsis suffers from jitter artifacts; thus, it cannot be hosted on a resource-constrained device. In this research, we propose a panoramic video synopsis framework to enable the efficient analysis of objects for better governance and storage. The surveillance system has multiple cameras sharing a common homography, which the proposed method leverages. The proposed method constructs a panorama by resolving the broad viewpoints with significant deviations, collisions, and overlaps among the images. We embed the synopsis framework on the end device to reduce storage, networking, and computational costs. A neural network-based model stitches multiple camera feeds into a panoramic structure, from which only tubes with abnormal behavior are extracted and relocated in the space and time domains to construct a shorter video. Comparatively, the proposed model achieved a superior accuracy matching rate of 98.7% when stitching the images. The feature enhancement model also achieves better peak signal-to-noise ratio values, facilitating smooth synopsis construction.
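The tube-relocation step can be pictured with a toy greedy scheduler: each object tube (a per-object span of frames) is shifted to the earliest output slot that keeps the number of simultaneously visible tubes below a cap. Tube lengths and the concurrency cap are assumptions; real synopsis systems also optimize collision and appearance costs.

```python
import itertools

def schedule_tubes(lengths: list[int], max_concurrent: int = 3) -> list[int]:
    """Greedily assign each tube the earliest start that respects the cap."""
    occupancy: list[int] = []   # number of tubes active at each output frame
    starts = []
    for length in lengths:
        for t in itertools.count():
            while len(occupancy) < t + length:   # grow the timeline lazily
                occupancy.append(0)
            if all(c < max_concurrent for c in occupancy[t:t + length]):
                for i in range(t, t + length):
                    occupancy[i] += 1
                starts.append(t)
                break
    return starts

print(schedule_tubes([50, 40, 60, 30, 20]))  # -> [0, 0, 0, 40, 50]
```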
(This article belongs to the Special Issue Digital Solutions for Participatory Governance in Smart Cities)

16 pages, 8072 KiB  
Article
Research on a Panoramic Image Stitching Method for Images of Corn Ears, Based on Video Streaming
by Yi Huangfu, Hongming Chen, Zhonghao Huang, Wenfeng Li, Jie Shi and Linlin Yang
Agronomy 2024, 14(12), 2884; https://doi.org/10.3390/agronomy14122884 - 3 Dec 2024
Cited by 1 | Viewed by 1210
Abstract
Background: Corn is the main grain crop grown in China, and the ear shape index of corn is an important parameter for breeding new varieties, including ear length, ear diameter, kernel row number, grains per row, and so on. Objective: To solve the problem of the limited field of view associated with computer detection of the corn ear shape index against a complex background, this paper proposes a panoramic stitching method for corn ears against a complex background, which can stitch panoramic images of 10 corn ears at the same time, improving information collection efficiency, displaying comprehensive information, and supporting data analysis, so as to realize automatic corn seed examination. Methods: The corn ear panoramic stitching method under complex backgrounds is summarized as follows: 1. a perceptual hash algorithm and histogram equalization were used to extract video frames; 2. a U-Net image segmentation model based on transfer learning was used to predict corn labels; 3. a mask preprocessing algorithm was designed; 4. a corn ear stitching positioning algorithm was designed; 5. an algorithm for irregular surface expansion was designed; 6. an image stitching method based on template matching was adopted to assemble the video frames. Results: The experimental results showed that the proposed method effectively solves the problems of false stitching, obvious stitching seams, and excessive similarity between multiple images. The stitching success rate was 100%, and stitching a single corn ear panorama took about 9.4 s, indicating important reference value for corn breeding and pest and disease detection. Discussion: Although the experimental results demonstrated the significant advantages of the proposed panoramic stitching method in improving information collection efficiency and automating corn assessment, the method still faces certain challenges. Future research will focus on the following points: 1. addressing environmental interference caused by diseases, pests, and plant nutritional status on the measurement of corn ear parameters, in order to enhance the stability and accuracy of the algorithm; 2. expanding the U-Net training dataset to include a wider range of corn ears with complex backgrounds, different growth stages, and various environmental conditions, to improve the model's segmentation recognition rate and precision. Recently, our panoramic stitching algorithm has been deployed in practical applications with satisfactory results. We plan to continue optimizing the algorithm and promote its use more broadly in fields such as corn breeding and pest and disease detection, in an effort to advance the development of agricultural automation technology.
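Step 1 of the pipeline can be sketched with a simple average-hash variant of perceptual hashing: a frame is kept only when its fingerprint differs enough from the last kept frame's. The file name, hash size, and Hamming-distance threshold are assumptions for illustration, not the paper's settings.

```python
import cv2
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()   # 64-bit boolean fingerprint

def extract_keyframes(path: str, threshold: int = 12) -> list:
    cap = cv2.VideoCapture(path)
    kept, last_hash = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h = average_hash(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        # Hamming distance between fingerprints; keep frame on a scene change
        if last_hash is None or np.count_nonzero(h != last_hash) > threshold:
            kept.append(frame)
            last_hash = h
    cap.release()
    return kept

frames = extract_keyframes("corn_ears.mp4")   # hypothetical input video
```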
(This article belongs to the Section Precision and Digital Agriculture)

22 pages, 4866 KiB  
Article
TCEDN: A Lightweight Time-Context Enhanced Depression Detection Network
by Keshan Yan, Shengfa Miao, Xin Jin, Yongkang Mu, Hongfeng Zheng, Yuling Tian, Puming Wang, Qian Yu and Da Hu
Life 2024, 14(10), 1313; https://doi.org/10.3390/life14101313 - 16 Oct 2024
Cited by 1 | Viewed by 1089
Abstract
The automatic video recognition of depression is becoming increasingly important in clinical applications. However, traditional depression recognition models still face challenges in practical applications, such as high computational costs, the poor effectiveness of facial movement features, and spatial feature degradation due to model stitching. To overcome these challenges, this work proposes a lightweight Time-Context Enhanced Depression Detection Network (TCEDN). We first use attention-weighted blocks to aggregate and enhance video frame-level features, easing the model's computational workload. Next, by integrating the temporal and spatial changes of raw video features and facial movement features with self-learned weights, we improve the precision of depression detection. Finally, a fusion network combining a 3-Dimensional Convolutional Neural Network (3D-CNN) and a Convolutional Long Short-Term Memory network (ConvLSTM) is constructed to minimize spatial feature loss by avoiding feature flattening and to predict depression scores. Tests on the AVEC2013 and AVEC2014 datasets reveal that our approach yields results on par with state-of-the-art techniques for detecting depression from video. Additionally, our method has significantly lower computational complexity than mainstream methods.
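A minimal stand-in for the fusion idea, assuming PyTorch: a 3D-CNN front end feeds a hand-rolled ConvLSTM cell without flattening, so spatial structure survives into the temporal model. Channel counts, clip shape, and the final pooling are assumptions; this is not the published TCEDN architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        # one convolution produces all four LSTM gates at once
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

frontend = nn.Conv3d(3, 16, kernel_size=3, padding=1)  # input: (B, C, T, H, W)
cell = ConvLSTMCell(16, 32)
video = torch.randn(2, 3, 8, 64, 64)                   # two 8-frame RGB clips
feats = frontend(video)
h = torch.zeros(2, 32, 64, 64)
c = torch.zeros_like(h)
for t in range(feats.shape[2]):                        # recur over time steps
    h, c = cell(feats[:, :, t], (h, c))
score = h.mean(dim=(1, 2, 3))                          # toy per-clip score
print(score.shape)  # torch.Size([2])
```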

25 pages, 6820 KiB  
Article
SASFF: A Video Synthesis Algorithm for Unstructured Array Cameras Based on Symmetric Auto-Encoding and Scale Feature Fusion
by Linliang Zhang, Lianshan Yan, Shuo Li and Saifei Li
Sensors 2024, 24(1), 5; https://doi.org/10.3390/s24010005 - 19 Dec 2023
Cited by 2 | Viewed by 1494
Abstract
To synthesize ultra-large-scene, ultra-high-resolution videos, high-quality video stitching and fusion are achieved through multi-scale unstructured array cameras. This paper proposes a network-model image feature point extraction algorithm based on symmetric auto-encoding and scale feature fusion. Using the principle of symmetric auto-encoding, the hierarchical restoration of image feature location information is incorporated into the corresponding scale feature, along with depthwise separable convolution for image feature extraction, which not only improves the performance of feature point detection but also significantly reduces the computational complexity of the network model. Based on the computed high-precision feature point pairings, a new image localization method is proposed based on area ratio and homography matrix scaling, which improves the speed and accuracy of array camera image scale alignment and positioning, realizes high-definition perception of local details in large scenes, and obtains clearer large-scene synthesis and higher-quality stitched images. The proposed feature point extraction algorithm was compared with four typical algorithms on the HPatches dataset: feature point detection performance improved by an average of 4.9%, homography estimation performance improved by an average of 2.5%, computation was reduced by 18%, and the number of network model parameters was reduced by 47%, while the synthesis of billion-pixel videos was achieved, demonstrating practicality and robustness.
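The depthwise separable convolution credited here for much of the parameter reduction factors a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution. A minimal PyTorch sketch with illustrative channel counts:

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # spatial filtering applied independently per channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        # 1x1 convolution mixes channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 128, 128)
print(SeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```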
(This article belongs to the Section Sensing and Imaging)

30 pages, 11533 KiB  
Article
Application of UAVs and Image Processing for Riverbank Inspection
by Chang-Hsun Chiang and Jih-Gau Juang
Machines 2023, 11(9), 876; https://doi.org/10.3390/machines11090876 - 1 Sep 2023
Cited by 7 | Viewed by 2081
Abstract
Many rivers are polluted by trash and garbage that can affect the environment. Riverbank inspection usually relies on workers from the environmental protection office, but some places are unreachable. This study applies unmanned aerial vehicles (UAVs) to perform the inspection task, significantly reducing manual work. Two UAVs are used to cover a wide area of riverside and capture riverbank images. The images from the different UAVs are stitched using the scale-invariant feature transform (SIFT) algorithm; both static and dynamic image stitching are tested. Different you-only-look-once (YOLO) algorithms are applied to identify riverbank garbage. Modified YOLO algorithms improve the accuracy of riverine waste identification, while the SIFT algorithm stitches the images obtained from the UAV cameras. The stitching results and garbage data are then sent to a video streaming server, allowing government officials to check waste information from the real-time multi-camera stitched images. The UAVs use 4G communication to transmit the video stream to the server. The transmission distance is long enough for this study, and reliability is excellent in test fields covered by the 4G network. In the automatic reconnection mechanism, we set the timeout to 1.8 s; the UAVs automatically reconnect to the video streaming server if the disconnection time exceeds the timeout. Based on the energy provided by the onboard battery, each UAV can operate for 20 min per mission. The UAV inspection distance along a preplanned path is about 1 km at a speed of 1 m/s. The proposed UAV system can replace inspection labor, successfully identify riverside garbage, and transmit the related information and locations on the map to the ground control center in real time.
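The SIFT stitching step for a pair of frames from the two UAVs can be reproduced with stock OpenCV: ratio-test matching, a RANSAC homography, and a perspective warp. File names and thresholds are assumptions; the study's YOLO detection and streaming stages are omitted.

```python
import cv2
import numpy as np

left = cv2.imread("uav_left.jpg")    # hypothetical frame from UAV 1
right = cv2.imread("uav_right.jpg")  # hypothetical frame from UAV 2

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)

# Lowe ratio-test matching, then a RANSAC homography mapping right onto left
good = [m for m, n in cv2.BFMatcher().knnMatch(des2, des1, k=2)
        if m.distance < 0.75 * n.distance]
src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# warp the right frame into the left frame's plane, then paste the left frame
canvas = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
canvas[:left.shape[0], :left.shape[1]] = left
cv2.imwrite("stitched.jpg", canvas)
```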
(This article belongs to the Special Issue Advanced Control of Unmanned Aerial Vehicles (UAV))

24 pages, 24015 KiB  
Article
Surveying of Nearshore Bathymetry Using UAVs Video Stitching
by Jinchang Fan, Hailong Pei and Zengjie Lian
J. Mar. Sci. Eng. 2023, 11(4), 770; https://doi.org/10.3390/jmse11040770 - 31 Mar 2023
Cited by 4 | Viewed by 2150
Abstract
In this paper, we extend video stitching to nearshore bathymetry for videos captured over the same coastal field simultaneously by two unmanned aerial vehicles (UAVs). In practice, a video captured by a single UAV often shows a limited coastal zone and lacks a wide field of view. To solve this problem, we propose a framework in which video stitching and bathymetric mapping are performed in sequence. Specifically, our method specifies the video acquisition strategy and takes two overlapping videos captured by two UAVs as inputs. We then adopt a unified video stitching and stabilization optimization to compute the stitching and the stabilization of one of the videos separately, yielding the best stitching result. At the same time, identification of background feature points on the shore plays the role of short-time visual odometry. From the panoramic video obtained in Shuang Yue Bay, China, we used temporal cross-correlation analysis based on the linear dispersion relationship to estimate the water depth. We selected a region of interest (ROI) from the panoramic video, performed an orthorectification transformation, and extracted time-stack images from it. The wave celerity was then estimated from the correlation of the signal through filtering processes. Finally, the bathymetry results were compared with cBathy. By applying this method with two UAVs, a wider field of view was created and the surveyed area was expanded, providing effective input data for the bathymetry algorithms.
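The depth inversion rests on the linear dispersion relation ω² = g·k·tanh(k·h): with the wave period and the celerity estimated from the time-stack cross-correlation, the depth follows in closed form. A small sketch with assumed sample values for period and celerity:

```python
import numpy as np

def depth_from_celerity(c: float, T: float, g: float = 9.81) -> float:
    omega = 2 * np.pi / T      # angular frequency from the wave period
    k = omega / c              # wavenumber, since c = omega / k
    ratio = c ** 2 * k / g     # equals tanh(k * h) under linear theory
    if ratio >= 1.0:
        return float("inf")    # deep-water limit: depth not resolvable
    return float(np.arctanh(ratio) / k)

# e.g. a 6 m/s celerity at an 8 s period implies roughly 4 m of water
print(depth_from_celerity(c=6.0, T=8.0))
```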
(This article belongs to the Special Issue Coastal Engineering: Sustainability and New Technologies)

22 pages, 19188 KiB  
Article
Damage Segmentation on High-Resolution Coating Images Using a Novel Two-Stage Network Pipeline
by Kolja Hedrich, Lennart Hinz and Eduard Reithmeier
Aerospace 2023, 10(3), 245; https://doi.org/10.3390/aerospace10030245 - 2 Mar 2023
Cited by 2 | Viewed by 1992
Abstract
The automation of inspections in aircraft engines is an ever-growing field of research. In particular, the inspection and quantification of coating damage in confined spaces, usually performed manually with handheld endoscopes, are challenging to automate. In this study, 2D RGB video data provided by commercial instruments are further analyzed in the form of a segmentation of damage areas. For this purpose, large overview images showing the whole coating area, stitched from the video frames, are analyzed with convolutional neural networks (CNNs). However, these overview images need to be divided into smaller image patches to keep the CNN architecture at a functional and fixed size, which leads to a significantly reduced field of view (FOV) and therefore a loss of information and reduced network accuracy. A possible solution is downsampling the overview image to decrease the number of patches and increase the FOV of each patch. While an increased FOV with downsampling and a small FOV without resampling both lack information, the two approaches capture partly different information and abstractions that can be used complementarily. Based on this hypothesis, we propose a two-stage segmentation pipeline that processes image patches with different FOVs and downsampling factors to increase the overall segmentation accuracy for large images. This includes a novel method to optimize the position of image patches, which leads to a further improvement in accuracy. After validating the described hypothesis, we evaluate and compare the proposed pipeline and methods against the single-network application to demonstrate the accuracy improvements.
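The two complementary patching strategies can be pictured as follows: native-resolution tiles give fine detail over a narrow FOV, while downsampled tiles trade detail for a wider FOV per patch. Patch size and downsampling factor are assumptions; the patch-position optimization and the CNNs themselves are omitted.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 512, downsample: int = 1):
    """Yield (row, col, tile) triples after optional decimation."""
    if downsample > 1:
        image = image[::downsample, ::downsample]  # crude downsampling
    h, w = image.shape[:2]
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            yield r, c, image[r:r + patch, c:c + patch]

overview = np.zeros((4096, 8192, 3), dtype=np.uint8)   # stand-in overview image
fine = list(extract_patches(overview, patch=512, downsample=1))    # narrow FOV
coarse = list(extract_patches(overview, patch=512, downsample=4))  # 4x wider FOV
print(len(fine), len(coarse))  # 128 8
```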
(This article belongs to the Special Issue Recent Advances in Technologies for Aerospace Maintenance)

26 pages, 3749 KiB  
Review
Video Synopsis Algorithms and Framework: A Survey and Comparative Evaluation
by Palash Yuvraj Ingle and Young-Gab Kim
Systems 2023, 11(2), 108; https://doi.org/10.3390/systems11020108 - 17 Feb 2023
Cited by 8 | Viewed by 4541
Abstract
With the increase in video surveillance data, techniques such as video synopsis are being used to construct short videos for analysis, thereby saving storage resources. The video synopsis framework applies in real-time environments, allowing for the creation of synopses from multiple- and single-view cameras; the same framework encompasses optimization, extraction, and object detection algorithms. Contemporary state-of-the-art synopsis frameworks are suitable only for particular scenarios. This paper aims to review the traditional state-of-the-art video synopsis techniques and understand the different methods incorporated in their methodologies. A comprehensive review provides analysis of the varying video synopsis frameworks and their components, along with insightful evidence for classifying these techniques. We primarily investigate studies based on single-view and multiview cameras, providing a synopsis and taxonomy based on their characteristics, then identifying and briefly discussing the most commonly used datasets and evaluation metrics. At each stage of the synopsis framework, we present new trends and open challenges based on the obtained insights. Finally, we evaluate different components such as object detection, tracking, optimization, and stitching techniques on a publicly available dataset and identify the gaps among the different algorithms based on experimental results.

19 pages, 8819 KiB  
Article
Implementation Method of Automotive Video SAR (ViSAR) Based on Sub-Aperture Spectrum Fusion
by Ping Guo, Fuen Wu, Shiyang Tang, Chenghao Jiang and Changjie Liu
Remote Sens. 2023, 15(2), 476; https://doi.org/10.3390/rs15020476 - 13 Jan 2023
Cited by 8 | Viewed by 3134
Abstract
Automotive synthetic aperture radar (SAR) can obtain two-dimensional (2-D) high-resolution images and has good robustness compared with other sensors. Generally, 2-D high resolution conflicts with the real-time requirement in conventional SAR imaging. This article proposes an automotive video SAR (ViSAR) imaging technique based on sub-aperture spectrum fusion to address this issue. Firstly, the scene space-variation problem caused by the close observation distance in automotive SAR is analyzed. The sub-aperture implementation method, frame rate, and resolution of automotive ViSAR are also introduced. Then, an improved Range-Doppler algorithm (RDA) is used to focus the sub-aperture data. Finally, a sub-aperture stitching strategy is proposed to obtain a high-resolution frame image. Compared with the available ViSAR imaging methods, the proposed method is more efficient, performs better, and is more appropriate for automotive ViSAR. Simulation results and real automotive SAR data validate the effectiveness of the proposed method.
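The frame-generation idea can be sketched as slicing the slow-time (azimuth) pulse history into overlapping sub-apertures, each of which would be focused into one video frame; the overlap trades frame rate against per-frame azimuth resolution. Array sizes, sub-aperture length, and step are assumptions, and the focusing itself is omitted.

```python
import numpy as np

def subapertures(pulses: np.ndarray, length: int, step: int):
    """Yield overlapping blocks of slow-time pulses, one per video frame."""
    for start in range(0, pulses.shape[0] - length + 1, step):
        yield pulses[start:start + length]

echo = np.zeros((2048, 1024), dtype=np.complex64)  # slow-time x fast-time stand-in
frames = list(subapertures(echo, length=512, step=128))
print(len(frames))  # 13 overlapping sub-apertures from one pass
```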

13 pages, 1249 KiB  
Article
A Dual-Path Cross-Modal Network for Video-Music Retrieval
by Xin Gu, Yinghua Shen and Chaohui Lv
Sensors 2023, 23(2), 805; https://doi.org/10.3390/s23020805 - 10 Jan 2023
Cited by 5 | Viewed by 2902
Abstract
In recent years, with the development of the internet, video has become more and more widely used in everyday life, and adding harmonious music to a video is gradually becoming an artistic task in its own right. However, manually adding music takes a lot of time and effort, so we propose a method to recommend background music for videos. The emotional message of music is rarely taken into account in current work, but it is crucial for video-music retrieval. To achieve this, we design two paths to process content information and emotional information across modalities. Based on the characteristics of video and music, we design various feature extraction schemes and common representation spaces. In the content path, pre-trained networks are used as the feature extraction networks. As these features contain some redundant information, we use an encoder-decoder structure for dimensionality reduction, with encoder weights shared to obtain content-sharing features for video and music. In the emotion path, an emotion key-frame scheme is used for video and a channel attention mechanism is used for music to capture the emotional information effectively, and an emotion-distinguishing loss is added to ensure that the network acquires it. More importantly, we propose a way to combine content information with emotional information: content features are first concatenated with emotion features and then passed through a fused shared space structured as an MLP to obtain more effective fused shared features. In addition, a polarity penalty factor is added to the classical metric loss function to make it more suitable for this task. Experiments show that this dual-path video-music retrieval network can effectively merge information; compared with existing methods, it increases Recall@1 by 3.94.
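The fused shared space can be pictured as a small MLP over concatenated content and emotion features, assuming PyTorch; all dimensions below are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class FusedSharedSpace(nn.Module):
    def __init__(self, content_dim: int = 256, emotion_dim: int = 64,
                 shared_dim: int = 128):
        super().__init__()
        # concatenated features projected into a shared retrieval space
        self.mlp = nn.Sequential(
            nn.Linear(content_dim + emotion_dim, 256), nn.ReLU(),
            nn.Linear(256, shared_dim))

    def forward(self, content, emotion):
        return self.mlp(torch.cat([content, emotion], dim=-1))

fused = FusedSharedSpace()(torch.randn(8, 256), torch.randn(8, 64))
print(fused.shape)  # torch.Size([8, 128])
```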

13 pages, 2990 KiB  
Article
Geological Borehole Video Image Stitching Method Based on Local Homography Matrix Offset Optimization
by Zhaopeng Deng, Shengzhi Song, Shuangyang Han, Zeqi Liu, Qiang Wang and Liuyang Jiang
Sensors 2023, 23(2), 632; https://doi.org/10.3390/s23020632 - 5 Jan 2023
Cited by 3 | Viewed by 2421
Abstract
Due to the influence of the shooting environment and inherent image characteristics, there is a large amount of interference in the process of stitching images from a geological borehole video. To accurately match the image sequences acquired from the inner part of a borehole, this paper presents a new method for stitching an unfolded borehole image, which uses the images generated from the video to construct a large-scale panorama. Firstly, the speeded-up robust features (SURF) algorithm is used to extract image feature points and complete rough matching. Then, the M-estimator sample consensus (MSAC) algorithm is introduced to remove mismatched point pairs and obtain the homography matrix. Subsequently, we propose a local homography matrix offset optimization (LHOO) algorithm to obtain the optimal offset. Finally, the above process is cycled frame by frame, and the image sequence is continuously stitched to construct a cylindrical borehole panorama. The experimental results show that, compared with the SIFT, Harris, ORB, and SURF algorithms, the matching accuracy of our algorithm is greatly improved. The final test was carried out on 225 consecutive video frames; the panorama has a good visual effect, and the average time per frame is 100 ms, which basically meets the requirements of the project.
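The registration core (SURF plus a robust homography) can be approximated with OpenCV, with two caveats: SURF requires an opencv-contrib build with non-free modules enabled, and cv2.RANSAC stands in here for the paper's MSAC estimator, which OpenCV does not expose directly. The LHOO offset-optimization step is omitted.

```python
import cv2
import numpy as np

def register(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Homography mapping curr_frame into prev_frame's coordinates."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(prev_frame, None)
    kp2, des2 = surf.detectAndCompute(curr_frame, None)
    # ratio-test matching, then robust estimation to reject mismatched pairs
    good = [m for m, n in cv2.BFMatcher().knnMatch(des2, des1, k=2)
            if m.distance < 0.7 * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```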
(This article belongs to the Section Sensing and Imaging)

31 pages, 89613 KiB  
Article
An Automatic Defect Detection System for Petrochemical Pipeline Based on Cycle-GAN and YOLO v5
by Kun Chen, Hongtao Li, Chunshu Li, Xinyue Zhao, Shujie Wu, Yuxiao Duan and Jinshen Wang
Sensors 2022, 22(20), 7907; https://doi.org/10.3390/s22207907 - 17 Oct 2022
Cited by 51 | Viewed by 6842
Abstract
Defect detection in petrochemical pipelines is an important task for industrial production safety. At present, pipeline defect detection mainly relies on closed-circuit television (CCTV) to record video of the pipeline inner wall, with defective areas then detected manually, so the process is very time-consuming and has a high rate of false and missed detections. To solve these issues, we propose an automatic defect detection system for petrochemical pipelines based on Cycle-GAN and an improved YOLO v5. Firstly, to create the pipeline defect dataset, the original pipeline videos are pre-processed by frame extraction, unfolding, illumination balancing, and image stitching to create coherent, tiled images of the pipeline inner wall. Secondly, to address the small number of samples and the imbalance between defect and non-defect classes, a sample enhancement strategy based on Cycle-GAN is proposed to generate defect images and expand the dataset. Finally, to detect defective areas on the pipeline and improve detection accuracy, a robust defect detection model based on improved YOLO v5 with a Transformer attention mechanism is proposed, achieving an average precision of 93.10%, a recall of 90.96%, and an F1-score of 0.920 on the test set. The proposed system can serve as a reference for operators in pipeline health inspection, improving the efficiency and accuracy of detection.
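The "unfolding" pre-processing step can be sketched with OpenCV's polar unwrap, which flattens the circular pipe-wall view in a frame into a strip. The input file, assumed pipe center, and output resolution are placeholders that would normally come from per-video calibration.

```python
import cv2

frame = cv2.imread("pipe_frame.jpg")         # hypothetical CCTV frame
h, w = frame.shape[:2]
center = (w // 2, h // 2)                    # assumed pipe axis in the image
radius = min(center)
# unwrap the circular wall view into a strip: 360 angle rows x radius columns
strip = cv2.warpPolar(frame, (radius, 360), center, radius, cv2.WARP_POLAR_LINEAR)
cv2.imwrite("unrolled_strip.jpg", strip)
```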
(This article belongs to the Section Fault Diagnosis & Sensors)

25 pages, 2816 KiB  
Article
Dissecting Latency in 360° Video Camera Sensing Systems
by Zhisheng Yan and Jun Yi
Sensors 2022, 22(16), 6001; https://doi.org/10.3390/s22166001 - 11 Aug 2022
Cited by 6 | Viewed by 2437
Abstract
360° video camera sensing is an increasingly popular technology. Compared with traditional 2D video systems, it is challenging to ensure the viewing experience in 360° video camera sensing because the massive omnidirectional data introduce adverse effects on start-up delay, event-to-eye delay, and frame rate. Therefore, understanding the time consumption of computing tasks in 360° video camera sensing is a prerequisite for improving the system's delay performance and viewing experience. Despite prior measurement studies on 360° video systems, none of them delves into the system pipeline and dissects the latency at the task level. In this paper, we perform the first in-depth measurement study of task-level time consumption for 360° video camera sensing. We start by identifying the subtle relationship between the three delay metrics and the time-consumption breakdown across the system's computing tasks. Next, we develop an open research prototype, Zeus, to characterize this relationship in various realistic usage scenarios. Our measurement of task-level time consumption demonstrates the importance of the camera CPU-GPU transfer and the server initialization, as well as the negligible effect of 360° video stitching on the delay metrics. Finally, we compare Zeus with a commercial system to validate that our results are representative and can be used to improve today's 360° video camera sensing systems.
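The task-level breakdown idea can be reproduced in miniature with a timing context manager wrapped around each pipeline stage. The stage names and sleeps below are placeholders, not Zeus's actual tasks.

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def task(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        # accumulate wall-clock time per named task
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

with task("capture"):
    time.sleep(0.01)   # stand-in for camera readout
with task("stitch"):
    time.sleep(0.02)   # stand-in for 360° stitching

for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {seconds * 1000:.1f} ms")
```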
(This article belongs to the Special Issue Frontiers in Mobile Multimedia Communications)

11 pages, 2500 KiB  
Article
Measurement of the Heat Transfer Properties of Carbon Fabrics via Infrared Thermal Mapping
by Phillip Kearney, Constantina Lekakou and Stephen Belcher
J. Compos. Sci. 2022, 6(6), 155; https://doi.org/10.3390/jcs6060155 - 25 May 2022
Cited by 3 | Viewed by 2860
Abstract
The aim of this paper is to determine the heat transfer properties of biaxial carbon fabrics of different architectures, including non-crimp stitch-bonded fabrics and plain, twill, and satin woven fabrics. The specific heat capacity was determined via differential scanning calorimetry (DSC). A novel method of numerical analysis of temperature maps from video captured by a high-resolution thermal camera is investigated for measuring the in-plane and transverse thermal diffusivity and conductivity. The determined thermal conductivity parallel to the fibers of a non-crimp stitch-bonded fabric agrees well with the theoretical value calculated using the rule of mixtures. The presence of voids at the yarn crossover regions in woven fabrics leads to a reduced transverse thermal conductivity, especially in the single-ply measurements of this study.
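The rule-of-mixtures check is a one-liner: conductivity parallel to the fibers is the volume-weighted average of the fiber and matrix (here, air) conductivities. The numeric values below are illustrative assumptions, not the paper's measurements.

```python
def k_parallel(v_fiber: float, k_fiber: float, k_air: float = 0.026) -> float:
    """Rule of mixtures: volume-weighted average along the fiber direction."""
    return v_fiber * k_fiber + (1.0 - v_fiber) * k_air

# assumed 60% fiber fraction and ~9 W/(m K) axial fiber conductivity
print(k_parallel(v_fiber=0.6, k_fiber=9.0))  # ~5.41 W/(m K)
```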
(This article belongs to the Special Issue Feature Papers in Journal of Composites Science in 2022)
