Search Results (21)

Search Parameters:
Keywords = affine motion estimation

17 pages, 9440 KiB  
Article
RACFME: Object Tracking in Satellite Videos by Rotation Adaptive Correlation Filters with Motion Estimations
by Xiongzhi Wu, Haifeng Zhang, Chao Mei, Jiaxin Wu and Han Ai
Symmetry 2025, 17(4), 608; https://doi.org/10.3390/sym17040608 - 16 Apr 2025
Viewed by 346
Abstract
Video satellites provide high-temporal-resolution remote sensing images that enable continuous monitoring of the ground for applications such as target tracking and airport traffic detection. In this paper, we address the problems of object occlusion and the tracking of rotating objects in satellite videos by introducing a rotation-adaptive tracking algorithm for correlation filters with motion estimation (RACFME). Our algorithm proposes the following improvements over the KCF method: (a) A rotation-adaptive feature enhancement module (RA) is proposed to obtain the rotated image block by affine transformation combined with the target rotation direction prior, which overcomes the disadvantage of HOG features lacking rotation adaptability, improves tracking accuracy while ensuring real-time performance, and solves the problem of tracking failure due to insufficient valid positive samples when tracking rotating targets. (b) Based on the correlation between peak response and occlusion, an occlusion detection method for vehicles and ships in satellite video is proposed. (c) Motion estimations are achieved by combining Kalman filtering with motion trajectory averaging, which solves the problem of tracking failure in the case of object occlusion. The experimental results show that the proposed RACFME algorithm can track a moving target with a 95% success score, and the RA module and ME both play an effective role.
(This article belongs to the Special Issue Advances in Image Processing with Symmetry/Asymmetry)
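The occlusion handling described in this abstract, where Kalman filtering carries the target through frames with no reliable detection, can be illustrated with a minimal constant-velocity Kalman filter. This is a generic sketch, not the authors' RACFME implementation; the function names and noise settings are illustrative.

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    # Constant-velocity model: state is [x, y, vx, vy]
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)    # we only observe position
    Q = q * np.eye(4)                      # process noise
    R = r * np.eye(2)                      # measurement noise
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                      # update only when the target is visible
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

During occlusion, calling `kf_step` with `z=None` skips the measurement update, so the state coasts on the estimated velocity; this is the behavior that keeps a tracker locked until the target reappears.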

24 pages, 16730 KiB  
Article
LV-FeatEx: Large Viewpoint-Image Feature Extraction
by Yukai Wang, Yinghui Wang, Wenzhuo Li, Yanxing Liang, Liangyi Huang and Xiaojuan Ning
Mathematics 2025, 13(7), 1111; https://doi.org/10.3390/math13071111 - 27 Mar 2025
Viewed by 524
Abstract
Maintaining stable image feature extraction under viewpoint changes is challenging, particularly when the angle between the camera’s reverse direction and the object’s surface normal exceeds 40 degrees. Such conditions can result in unreliable feature detection. Consequently, this hinders the performance of vision-based systems. To address this, we propose a feature point extraction method named Large Viewpoint Feature Extraction (LV-FeatEx). Firstly, the method uses a dual-threshold approach based on image grayscale histograms and Kapur’s maximum entropy to constrain the AGAST (Adaptive and Generic Accelerated Segment Test) feature detector. Combined with the FREAK (Fast Retina Keypoint) descriptor, the method enables more effective estimation of camera motion parameters. Next, we design a longitude sampling strategy to create a sparser affine simulation model. Meanwhile, images undergo perspective transformation based on the camera motion parameters. This improves operational efficiency and aligns perspective distortions between two images, enhancing feature point extraction accuracy under large viewpoints. Finally, we verify the stability of the extracted feature points through feature point matching. Comprehensive experimental results show that, under large viewpoint changes, our method outperforms popular classical and deep learning feature extraction methods. The correct rate of feature point matching improves by an average of 40.1 percent, and speed increases by an average of 6.67 times simultaneously.
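The thresholding step mentioned above relies on Kapur's maximum-entropy criterion, which picks the gray level that maximizes the summed entropies of the foreground and background histogram classes. A minimal sketch of standard Kapur thresholding, not the paper's exact dual-threshold variant:

```python
import numpy as np

def kapur_threshold(hist):
    """Kapur's maximum-entropy threshold from a 256-bin grayscale histogram."""
    p = hist.astype(float) / hist.sum()
    c = np.cumsum(p)                      # cumulative probability up to bin t
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = c[t], 1.0 - c[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0              # normalized background distribution
        p1 = p[t + 1 :] / w1              # normalized foreground distribution
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal histogram the criterion lands in the valley between the two modes, which is what makes it usable for constraining a corner detector to well-contrasted regions.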

22 pages, 3259 KiB  
Article
Advanced Patch-Based Affine Motion Estimation for Dynamic Point Cloud Geometry Compression
by Yiting Shao, Wei Gao, Shan Liu and Ge Li
Sensors 2024, 24(10), 3142; https://doi.org/10.3390/s24103142 - 15 May 2024
Cited by 2 | Viewed by 1496
Abstract
The substantial data volume within dynamic point clouds representing three-dimensional moving entities necessitates advancements in compression techniques. Motion estimation (ME) is crucial for reducing point cloud temporal redundancy. Standard block-based ME schemes, which typically utilize the previously decoded point clouds as inter-reference frames, often yield inaccurate and translation-only estimates for dynamic point clouds. To overcome this limitation, we propose an advanced patch-based affine ME scheme for dynamic point cloud geometry compression. Our approach employs a forward-backward jointing ME strategy, generating affine motion-compensated frames for improved inter-geometry references. Before the forward ME process, point cloud motion analysis is conducted on previous frames to perceive motion characteristics. Then, a point cloud is segmented into deformable patches based on geometry correlation and motion coherence. During the forward ME process, affine motion models are introduced to depict the deformable patch motions from the reference to the current frame. Later, affine motion-compensated frames are exploited in the backward ME process to obtain refined motions for better coding performance. Experimental results demonstrate the superiority of our proposed scheme, achieving an average 6.28% geometry bitrate gain over the inter codec anchor. Additional results also validate the effectiveness of key modules within the proposed ME scheme.
(This article belongs to the Section Sensing and Imaging)
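The affine motion model at the core of such a scheme maps each patch point p to Ap + t. Given matched point pairs between frames, the parameters can be recovered by linear least squares; a sketch of that idea only (the paper's estimator additionally handles patch segmentation and forward-backward refinement):

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares affine motion (A, t) such that dst ≈ src @ A.T + t."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # augment points as [p | 1]
    # Solve X @ M = dst for M = [A.T ; t] in one linear system
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t
```

With four or more non-coplanar correspondences the system is determined, and with noiseless matches the recovery is exact, which is why a per-patch affine model can capture rotation and deformation that a translation-only block match misses.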

19 pages, 6090 KiB  
Article
Video Global Motion Compensation Based on Affine Inverse Transform Model
by Nan Zhang, Weifeng Liu and Xingyu Xia
Sensors 2023, 23(18), 7750; https://doi.org/10.3390/s23187750 - 8 Sep 2023
Cited by 1 | Viewed by 1770
Abstract
Global motion greatly increases the number of false alarms for object detection in video sequences against dynamic backgrounds. Therefore, before detecting targets against a dynamic background, it is necessary to estimate and compensate for the global motion to eliminate its influence. In this paper, we use the SURF (speeded up robust features) algorithm combined with the MSAC (M-estimate sample consensus) algorithm to process the video. The global motion of a video sequence under a dynamic background is estimated from the feature point matching pairs of adjacent frames, yielding the global motion parameters. On this basis, we propose an inverse transformation model of the affine transformation, which acts on each pair of adjacent frames of the video sequence in turn. The model compensates for the global motion and outputs a video sequence after global motion compensation from a specific view for object detection. Experimental results show that the proposed algorithm can accurately perform motion compensation on video sequences containing complex global motion, and the compensated video sequences achieve a higher peak signal-to-noise ratio and better visual effects.
(This article belongs to the Special Issue Applications of Video Processing and Computer Vision Sensor II)
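The inverse transformation model rests on a standard identity: if a frame is warped by x' = Ax + b, the compensating warp is x = A^(-1)x' - A^(-1)b. A minimal sketch using OpenCV-style 2x3 affine matrices (a generic construction, not the paper's full pipeline):

```python
import numpy as np

def invert_affine(M):
    """Invert a 2x3 affine warp M = [A | b], which maps x' = A x + b."""
    A, b = M[:, :2], M[:, 2]
    Ainv = np.linalg.inv(A)
    # The inverse warp is x = Ainv x' - Ainv b, again packed as 2x3
    return np.hstack([Ainv, (-Ainv @ b)[:, None]])
```

Applying the inverse warp of the estimated inter-frame motion to each frame in turn cancels the camera-induced motion, so that only the genuinely moving objects remain for the detector.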

15 pages, 679 KiB  
Article
A Fast Gradient Iterative Affine Motion Estimation Algorithm Based on Edge Detection for Versatile Video Coding
by Jingping Hong, Zhihong Dong, Xue Zhang, Nannan Song and Peng Cao
Electronics 2023, 12(16), 3414; https://doi.org/10.3390/electronics12163414 - 11 Aug 2023
Cited by 4 | Viewed by 1739
Abstract
In the Versatile Video Coding (VVC) standard, affine motion models have been applied to enhance the resolution of complex motion patterns. However, due to the high computational complexity involved in affine motion estimation, real-time video processing applications face significant challenges. This paper focuses on optimizing affine motion estimation algorithms in the VVC environment and proposes a fast gradient iterative algorithm based on edge detection for efficient computation. Firstly, we establish judging conditions during the construction of affine motion candidate lists to streamline the redundant judging process. Secondly, we employ the Canny edge detection method for gradient assessment in the affine motion estimation process, thereby enhancing the iteration speed of affine motion vectors. The experimental results show that the encoding time of the affine motion estimation algorithm is about 15–35% lower than the overall encoding time of the anchor algorithm encoder, the average encoding time of the affine motion estimation part of inter-frame prediction is reduced by 24.79%, and the peak signal-to-noise ratio (PSNR) is reduced by only 0.04 dB.
(This article belongs to the Special Issue Image and Video Quality and Compression)
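The edge-gating idea, restricting the gradient iteration to pixels that lie on strong edges, can be sketched with a plain central-difference gradient magnitude standing in for the Canny detector the paper uses; everything here is an illustrative stand-in, not the paper's method:

```python
import numpy as np

def gradient_edge_mask(img, thresh):
    """Keep only pixels whose gradient magnitude exceeds thresh.
    A simple central-difference proxy for the Canny gating step."""
    gx = np.zeros_like(img, float)
    gy = np.zeros_like(img, float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # horizontal gradient
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # vertical gradient
    mag = np.hypot(gx, gy)
    return mag > thresh, gx, gy
```

Only the masked pixels would then feed the gradient accumulation of the affine parameter iteration, which is where the encoding-time saving comes from: flat regions contribute little to the fit but dominate the pixel count.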

19 pages, 5666 KiB  
Article
A Novel Moving Object Detection Algorithm Based on Robust Image Feature Threshold Segmentation with Improved Optical Flow Estimation
by Jing Ding, Zhen Zhang, Xuexiang Yu, Xingwang Zhao and Zhigang Yan
Appl. Sci. 2023, 13(8), 4854; https://doi.org/10.3390/app13084854 - 12 Apr 2023
Cited by 8 | Viewed by 2858
Abstract
The detection of moving objects in images is a crucial research objective; however, several challenges, such as low accuracy, fixed or moving backgrounds, ‘ghost’ issues, and warping, exist in its execution. The majority of approaches operate with a fixed camera. This study proposes a robust feature threshold moving object identification and segmentation method with enhanced optical flow estimation to overcome these challenges. Unlike most optical flow Otsu segmentation methods for fixed cameras, a background feature threshold segmentation technique based on a combination of the Horn–Schunck (HS) and Lucas–Kanade (LK) optical flow methods is presented in this paper. This approach aims to obtain the segmentation of moving objects. First, the HS and LK optical flows with the image pyramid are integrated to establish a high-precision and anti-interference optical flow estimation equation. Next, Delaunay triangulation is used to solve the motion occlusion problem. Finally, the proposed robust feature threshold segmentation method is applied to the optical flow field to extract the moving object, using the Harris features and the image background affine transformation model. The technique uses morphological image processing to create the final moving target foreground area. Experimental results verified that this method successfully detected and segmented objects with high accuracy when the camera was either fixed or moving.
(This article belongs to the Section Computing and Artificial Intelligence)
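The LK half of the combined estimator solves, over each window, the overdetermined system [Ix Iy] v = -It for the displacement v. A minimal sketch assuming the spatial and temporal gradients of the window are already computed (generic LK, not the paper's pyramid-integrated variant):

```python
import numpy as np

def lk_flow(Ix, Iy, It):
    """Lucas-Kanade: least-squares solution of [Ix Iy] v = -It over a window."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one row per pixel
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                          # (u, v) for the window
```

The HS method supplies a dense, globally smoothed field while this per-window solve is accurate where texture is strong; combining the two is what gives the paper's estimator its robustness.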

13 pages, 758 KiB  
Article
Neural Network-Based Reference Block Quality Enhancement for Motion Compensation Prediction
by Yanhan Chu, Hui Yuan, Shiqi Jiang and Congrui Fu
Appl. Sci. 2023, 13(5), 2795; https://doi.org/10.3390/app13052795 - 22 Feb 2023
Cited by 4 | Viewed by 1999
Abstract
Inter prediction is a crucial part of hybrid video coding frameworks, and it is used to eliminate redundancy in adjacent frames and improve coding performance. During inter prediction, motion estimation is used to find the reference block that is most similar to the current block, and the following motion compensation is used to shift the reference block fractionally to obtain the prediction block. The closer the reference block is to the original block, the higher the coding efficiency is. To improve the quality of reference blocks, a quality enhancement network (RBENN) that is dedicated to reference blocks is proposed. The main body of the network consists of 10 residual modules, with two convolution layers for preprocessing and feature extraction. Each residual module consists of two convolutional layers, one ReLU activation, and a shortcut. The network uses the luma reference block as input before motion compensation, and the enhanced reference block is then filtered by the default fractional interpolation. Moreover, the proposed method can be used for both conventional motion compensation and affine motion compensation. Experimental results showed that RBENN could achieve a −1.35% BD rate on average under the low-delay P (LDP) configuration compared with the latest H.266/VVC.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition Based on Deep Learning)

30 pages, 4004 KiB  
Article
Backstepping- and Sliding Mode-Based Automatic Carrier Landing System with Deck Motion Estimation and Compensation
by Mihai Lungu, Mou Chen and Dana-Aurelia Vîlcică (Dinu)
Aerospace 2022, 9(11), 644; https://doi.org/10.3390/aerospace9110644 - 24 Oct 2022
Cited by 20 | Viewed by 2774
Abstract
This paper addresses the automatic carrier landing problem in the presence of deck motion, carrier airwake disturbance, wind shears, wind gusts, and atmospheric turbulence. By transforming the 6-DOF aircraft model into an affine dynamic model with the angle of attack controlled by thrust, the equations associated with the resultant disturbances are deduced; then, a deck motion prediction block (based on a recursive least-squares algorithm) and a tracking differentiator-based deck motion compensation block are designed. After obtaining the aircraft reference trajectory, the backstepping control method is employed to design a novel automatic carrier landing system with three functional parts: a guidance control system, an attitude control system, and an approach power compensation system. The design of the attitude subsystem involves the flight path control, the control of the attitude angles, and the control of the angular rates. To obtain convergence performance for the closed-loop system, the backstepping technique is combined with sliding mode-based command differentiators for the computation of the virtual commands and extended state observers for the estimation of the disturbances. The global stability of the closed-loop architecture is analyzed by using Lyapunov theory. Finally, simulation results verify the effectiveness of the proposed carrier landing system, with the aircraft reference trajectory being accurately tracked.
(This article belongs to the Special Issue Flight Control)

13 pages, 1086 KiB  
Article
An Adjacency Encoding Information-Based Fast Affine Motion Estimation Method for Versatile Video Coding
by Ximei Li, Jun He, Qi Li and Xingru Chen
Electronics 2022, 11(21), 3429; https://doi.org/10.3390/electronics11213429 - 23 Oct 2022
Cited by 5 | Viewed by 2064
Abstract
Versatile video coding (VVC), a new-generation video coding standard, achieves significant improvements over high efficiency video coding (HEVC) due to its added advanced coding tools. Although affine motion estimation as adopted in VVC takes into account the translational, rotational, and scaling motions of the object to improve the accuracy of interprediction, the technique adds high computational complexity, making VVC unsuitable for use in real-time applications. To address this issue, an adjacency encoding information-based fast affine motion estimation method for VVC is proposed in this paper. First, we measure the probability of the affine mode being used in interprediction. Then we analyze the trade-off between computational complexity and performance improvement based on this statistical information. Finally, by exploiting the mutual exclusivity between the skip and affine modes, an enhanced method is proposed to reduce interprediction complexity. Experimental results show that, compared with VVC, the proposed low-complexity method achieves a 10.11% total encoding time reduction and a 40.85% time saving in affine motion estimation with a 0.16% Bjøntegaard delta bitrate (BDBR) increase.

16 pages, 4020 KiB  
Article
All-Atom Molecular Dynamics Investigations on the Interactions between D2 Subunit Dopamine Receptors and Three 11C-Labeled Radiopharmaceutical Ligands
by Sanda Nastasia Moldovean, Diana-Gabriela Timaru and Vasile Chiş
Int. J. Mol. Sci. 2022, 23(4), 2005; https://doi.org/10.3390/ijms23042005 - 11 Feb 2022
Cited by 3 | Viewed by 2867
Abstract
The D2 subunit dopamine receptor represents a key factor in modulating dopamine release. Moreover, the investigated radiopharmaceutical ligands used in positron emission tomography imaging techniques are known to bind D2 receptors, allowing for dopaminergic pathways quantification in the living human brain. Thus, the biophysical characterization of these radioligands is expected to provide additional insights into the interaction mechanisms between the vehicle molecules and their targets. Using molecular dynamics simulations and QM calculations, the present study aimed to investigate the potential positions in which the D2 dopamine receptor would most likely interact with the three distinctive synthetic 11C-labeled compounds (raclopride (3,5-dichloro-N-[[(2S)-1-ethylpyrrolidin-2-yl]methyl]-2-hydroxy-6-methoxybenzamide)—RACL, FLB457 (5-bromo-N-[[(2S)-1-ethylpyrrolidin-2-yl]methyl]-2,3-dimethoxybenzamide)—FLB457, and SCH23390 (R(+)-7-Chloro-8-hydroxy-3-methyl-1-phenyl-2,3,4,5-tetrahydro-1H-3-benzazepine)—SCH), as well as to estimate the binding affinities of the ligand-receptor complexes. A docking study was performed prior to multiple 50 ns molecular dynamics productions for the ligands situated at the top and bottom interacting pockets of the receptor. The most prominent motions for the RACL ligand were described by the high fluctuations of the peripheral aliphatic -CH3 groups and by its C-Cl aromatic ring groups. In good agreement with the experimental data, the D2 dopamine receptor–RACL complex showed the highest interacting patterns for ligands docked at the receptor’s top position.
(This article belongs to the Collection Feature Papers in Molecular Pharmacology)

13 pages, 7420 KiB  
Article
Low-Dose PET Imaging of Tumors in Lung and Liver Regions Using Internal Motion Estimation
by Sang-Keun Woo, Byung-Chul Kim, Eun Kyoung Ryu, In Ok Ko and Yong Jin Lee
Diagnostics 2021, 11(11), 2138; https://doi.org/10.3390/diagnostics11112138 - 18 Nov 2021
Viewed by 1929
Abstract
Motion estimation and compensation are necessary for improvement of tumor quantification analysis in positron emission tomography (PET) images. The aim of this study was to propose adaptive PET imaging with internal motion estimation and correction using regional artificial evaluation of tumors injected with low-dose and high-dose radiopharmaceuticals. In order to assess internal motion, molecular sieves imitating tumors were loaded with 18F and inserted into the lung and liver regions in rats. All models were classified into two groups, based on the injected radiopharmaceutical activity, to compare the effect of tumor intensity. The PET study was performed with injection of F-18 fluorodeoxyglucose (18F-FDG). Respiratory gating was carried out by an external trigger device. Count, signal-to-noise ratio (SNR), contrast, and full width at half maximum (FWHM) were measured in artificial tumors in gated images. Motion correction was executed by affine transformation with the estimated internal motion data. The externally monitored data differed from the estimated motion. Contrast in the low-activity group was 3.57, 4.08 and 6.19, while in the high-activity group it was 10.01, 8.36 and 6.97 for static, 4 bin and 8 bin images, respectively. The results of the lung target in 4 bin and the liver target in 8 bin showed improvement in FWHM and contrast with sufficient SNR. After motion correction, FWHM was improved in both regions (lung: 24.56%, liver: 10.77%). Moreover, with the low dose of radiopharmaceuticals the PET images visualized specific accumulated radiopharmaceutical areas in the liver. Therefore, low-activity PET images should undergo motion correction before quantification analysis using PET data. We could improve quantitative tumor evaluation by considering organ region and tumor intensity.
(This article belongs to the Special Issue The Use of Motion Analysis for Diagnostics)

12 pages, 2994 KiB  
Article
Context-Based Inter Mode Decision Method for Fast Affine Prediction in Versatile Video Coding
by Seongwon Jung and Dongsan Jun
Electronics 2021, 10(11), 1243; https://doi.org/10.3390/electronics10111243 - 24 May 2021
Cited by 20 | Viewed by 3056
Abstract
Versatile Video Coding (VVC) is the most recent video coding standard developed by the Joint Video Experts Team (JVET), and it achieves a bit-rate reduction of 50% with perceptually similar quality compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC delivers this significant coding performance, it greatly increases the computational complexity of the encoder. In particular, VVC has newly adopted an affine motion estimation (AME) method to overcome the limitations of the translational motion model at the expense of higher encoding complexity. In this paper, we propose a context-based inter mode decision method for fast affine prediction that determines whether AME is performed in the rate-distortion (RD) optimization process for the optimal CU-mode decision. Experimental results showed that the proposed method reduced the encoding complexity of AME by up to 33% with unnoticeable coding loss compared to the VVC Test Model (VTM).

13 pages, 3355 KiB  
Article
Surface-Enhanced Raman Spectroscopy for Bisphenols Detection: Toward a Better Understanding of the Analyte–Nanosystem Interactions
by Eleonora Roschi, Cristina Gellini, Marilena Ricci, Santiago Sanchez-Cortes, Claudia Focardi, Bruno Neri, Juan Carlos Otero, Isabel López-Tocón, Giulietta Smulevich and Maurizio Becucci
Nanomaterials 2021, 11(4), 881; https://doi.org/10.3390/nano11040881 - 30 Mar 2021
Cited by 17 | Viewed by 3730
Abstract
Silver nanoparticles functionalized with thiolated β-cyclodextrin (CD-SH) were employed for the detection of bisphenols (BPs) A, B, and S by means of surface-enhanced Raman spectroscopy (SERS). The functionalization of Ag nanoparticles with CD-SH leads to an improvement in the sensitivity of the implemented SERS nanosensor. Using a multivariate analysis of the SERS data, the limit of detection of these compounds was estimated at about 10−7 M, in the range of tens of ppb. Structural analysis of the CD-SH/BP complex was performed by density functional theory (DFT) calculations. Theoretical results allowed the assignment of key structural vibrational bands related to ring breathing motions and inter-ring vibrations, and pointed out an external interaction due to four hydrogen bonds between the hydroxyl groups of BP and CD located at the external top of the CD cone. DFT calculations also allowed checking the interaction energies of the different molecular species on the Ag surface and testing the effect of the presence of CD-SH on the BPs’ affinity. These findings agree with the experimental evidence that there is no actual inclusion of BP inside the CD cavity. The proposed SERS sensor and the partial least squares regression-based data analysis procedure were tested on a real sample, the detection of BPs in milk extracts, to validate the detection performance of the SERS sensor.
(This article belongs to the Special Issue Nanomaterials in Surface-Enhanced Raman Spectroscopy)

17 pages, 3790 KiB  
Article
ISAR Image Matching and Three-Dimensional Scattering Imaging Based on Extracted Dominant Scatterers
by Dan Xu, Bowen Bie, Guang-Cai Sun, Mengdao Xing and Vito Pascazio
Remote Sens. 2020, 12(17), 2699; https://doi.org/10.3390/rs12172699 - 20 Aug 2020
Cited by 8 | Viewed by 3613
Abstract
This paper studies inverse synthetic aperture radar (ISAR) image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers. Under the condition of a long baseline between two radars, obvious rotation, scaling, distortion, and shift easily occur between two-dimensional (2D) radar images. These problems make radar-image matching difficult, and they cannot be resolved by motion compensation and cross-correlation. Moreover, due to anisotropy, existing image-matching algorithms, such as the scale-invariant feature transform (SIFT), do not adapt to ISAR images very well. In addition, the angle between the target rotation axis and the radar line of sight (LOS) cannot be neglected; otherwise, the calibration result will be smaller than the real projection size. Furthermore, this angle cannot be estimated by monostatic radar. Therefore, instead of matching image by image, this paper proposes a novel ISAR image matching and 3D imaging method based on extracted scatterers to deal with these issues. First, taking advantage of ISAR image sparsity, radar images are converted into scattering point sets. Then, coarse scatterer matching based on the random sample consensus algorithm (RANSAC) is performed. The scatterer height and accurate affine transformation parameters are estimated iteratively. Based on the matched scatterers, information such as the angle and the 3D image can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.
(This article belongs to the Special Issue 3D Modelling from Point Cloud: Algorithms and Methods)
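The coarse scatterer matching step uses RANSAC to fit an affine map between the two extracted 2D scatterer sets while rejecting spurious matches. A generic sketch of that step; the parameters, structure, and refit policy are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Fit a 2D affine map dst ≈ [src | 1] @ M on random 3-point samples,
    keeping the model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])
    best_inl = np.zeros(n, bool)
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)
        try:
            M = np.linalg.solve(X[idx], dst[idx])   # exact fit on the sample
        except np.linalg.LinAlgError:
            continue                                # degenerate (collinear) sample
        err = np.linalg.norm(X @ M - dst, axis=1)
        inl = err < tol
        if inl.sum() > best_inl.sum():
            best_inl = inl
    M, *_ = np.linalg.lstsq(X[best_inl], dst[best_inl], rcond=None)
    return M, best_inl
```

Because a 2D affine map has six parameters, three non-collinear correspondences determine it exactly; sampling triples and voting by inlier count is what lets the matcher survive the anisotropic, outlier-heavy correspondences that defeat SIFT-style matching here.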

16 pages, 2946 KiB  
Article
An Improved Fast Affine Motion Estimation Based on Edge Detection Algorithm for VVC
by Weizheng Ren, Wei He and Yansong Cui
Symmetry 2020, 12(7), 1143; https://doi.org/10.3390/sym12071143 - 8 Jul 2020
Cited by 17 | Viewed by 3959
Abstract
As a newly proposed video coding standard, Versatile Video Coding (VVC) has adopted some revolutionary techniques compared to High Efficiency Video Coding (HEVC). The multiple-mode affine motion compensation (MM-AMC) adopted by VVC saves approximately 15–25% Bjøntegaard Delta Bitrate (BD-BR), with an inevitable increase in encoding time. This paper gives an overview of both the 4-parameter and the 6-parameter affine motion models, analyzes their performance, and proposes improved algorithms that exploit the symmetry of iterative gradient descent for fast affine motion estimation. Finally, the proposed algorithms are compared with the symmetric MM-AMC framework of VTM-7.0. The results show that the proposed algorithms save 6.65% of total encoding time on average, corresponding to approximately 30% of the encoding time of affine motion compensation.
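For reference, the 4-parameter model discussed above derives each sample's motion vector from two control-point motion vectors, v0 at the block's top-left corner and v1 at its top-right corner, for a block of width w. The sketch below follows the standard VVC formulation of that derivation, written from the specification rather than from this paper:

```python
import numpy as np

def affine_mv_4param(v0, v1, w, x, y):
    """Per-sample motion vector of the 4-parameter affine model.
    v0, v1 are the (mvx, mvy) control-point MVs at the top-left and
    top-right corners of a block of width w; (x, y) is the sample
    position relative to the top-left corner."""
    a = (v1[0] - v0[0]) / w        # combined rotation/zoom coefficient
    b = (v1[1] - v0[1]) / w
    return np.array([a * x - b * y + v0[0],
                     b * x + a * y + v0[1]])
```

When the two control-point MVs are equal the model degenerates to pure translation, and the 6-parameter variant adds a third control point at the bottom-left corner so that horizontal and vertical scaling can differ.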
