Search Results (4)

Search Parameters:
Authors = Greg Maguire

24 pages, 1556 KiB  
Review
Audio-Driven Facial Animation with Deep Learning: A Survey
by Diqiong Jiang, Jian Chang, Lihua You, Shaojun Bian, Robert Kosk and Greg Maguire
Information 2024, 15(11), 675; https://doi.org/10.3390/info15110675 - 28 Oct 2024
Cited by 1 | Viewed by 7076
Abstract
Audio-driven facial animation is a rapidly evolving field that aims to generate realistic facial expressions and lip movements synchronized with a given audio input. This survey provides a comprehensive review of deep learning techniques applied to audio-driven facial animation, with a focus on both audio-driven facial image animation and audio-driven facial mesh animation. These approaches employ deep learning to map audio inputs directly onto 3D facial meshes or 2D images, enabling the creation of highly realistic and synchronized animations. This survey also explores evaluation metrics, available datasets, and the challenges that remain, such as disentangling lip synchronization and emotions, generalization across speakers, and dataset limitations. Lastly, we discuss future directions, including multi-modal integration, personalized models, and facial attribute modification in animations, all of which are critical for the continued development and application of this technology.
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
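The direct audio-to-animation mapping the survey describes can be caricatured in a few lines. Everything below is an illustrative assumption (the feature and blendshape dimensions, the synthetic data, and the plain least-squares regressor standing in for a deep network); it is not a method from any surveyed paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 13 audio features (e.g. MFCCs) per frame,
# 32 facial blendshape weights per frame.
N_FRAMES, N_AUDIO, N_BLEND = 200, 13, 32

# Synthetic training data standing in for (audio feature, blendshape) pairs.
X = rng.normal(size=(N_FRAMES, N_AUDIO))        # audio features per frame
W_true = rng.normal(size=(N_AUDIO, N_BLEND))    # hidden ground-truth mapping
Y = X @ W_true + 0.01 * rng.normal(size=(N_FRAMES, N_BLEND))

# Least-squares fit: the simplest possible "audio -> face" regressor.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict blendshape weights for a new audio frame.
frame = rng.normal(size=(1, N_AUDIO))
weights = frame @ W
print(weights.shape)  # (1, 32)
```

Real systems replace the linear map with deep sequence models and drive either 2D images or 3D mesh vertices, but the input/output contract is the same: per-frame audio features in, per-frame facial parameters out.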

26 pages, 49715 KiB  
Article
Deep Spectral Meshes: Multi-Frequency Facial Mesh Processing with Graph Neural Networks
by Robert Kosk, Richard Southern, Lihua You, Shaojun Bian, Willem Kokke and Greg Maguire
Electronics 2024, 13(4), 720; https://doi.org/10.3390/electronics13040720 - 9 Feb 2024
Cited by 1 | Viewed by 1954
Abstract
With the rising popularity of virtual worlds, the importance of data-driven parametric models of 3D meshes has grown rapidly. Numerous applications, such as computer vision, procedural generation, and mesh editing, rely heavily on these models. However, current approaches do not allow for independent editing of deformations at different frequency levels. They also do not benefit from representing deformations at different frequencies with dedicated representations, which would better expose their properties and improve the generated meshes’ geometric and perceptual quality. In this work, spectral meshes are introduced as a method to decompose mesh deformations into low-frequency and high-frequency deformations. These low- and high-frequency deformation features are used for representation learning with graph convolutional networks. A parametric model for 3D facial mesh synthesis is built upon the proposed framework, exposing user parameters that control disentangled high- and low-frequency deformations. Independent control of deformations at different frequencies and generation of plausible synthetic examples are competing objectives; a Conditioning Factor is introduced to balance them. Our model takes further advantage of spectral partitioning by representing different frequency levels with disparate, more suitable representations. Low frequencies are represented with standardised Euclidean coordinates, and high frequencies with a normalised deformation representation (DR). This paper investigates applications of the proposed approach in mesh reconstruction, mesh interpolation, and multi-frequency editing. It is demonstrated that our method improves the overall quality of generated meshes on most datasets when considering both the L1 norm and the perceptual Dihedral Angle Mesh Error (DAME) metric.
(This article belongs to the Special Issue Selected Papers from Young Researchers in AI for Computer Vision)
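The low-/high-frequency split underlying spectral meshes can be sketched with a plain graph-Laplacian eigendecomposition. A toy cycle graph stands in for a facial mesh here, and the paper's learned components (graph convolutions, the DR representation, the Conditioning Factor) are deliberately omitted:

```python
import numpy as np

# Tiny "mesh": a cycle graph of 8 vertices with 2D coordinates.
n = 8
coords = np.stack([np.cos(2 * np.pi * np.arange(n) / n),
                   np.sin(2 * np.pi * np.arange(n) / n)], axis=1)

# Graph Laplacian L = D - A for the cycle.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors of L form a "mesh Fourier" basis; small eigenvalues
# correspond to smooth (low-frequency) variation over the graph.
eigvals, eigvecs = np.linalg.eigh(L)

k = 3                                   # keep the k smoothest basis vectors
low_basis = eigvecs[:, :k]
low = low_basis @ (low_basis.T @ coords)  # low-frequency part of the geometry
high = coords - low                       # high-frequency residual
# The two bands sum back to the original geometry.
print(np.allclose(low + high, coords))  # True
```

Editing `low` and `high` independently, then summing, is the "independent control of deformations at different frequency levels" the abstract refers to, here in its simplest linear form.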

17 pages, 4318 KiB  
Article
Efficient C2 Continuous Surface Creation Technique Based on Ordinary Differential Equation
by Shaojun Bian, Greg Maguire, Willem Kokke, Lihua You and Jian J. Zhang
Symmetry 2020, 12(1), 38; https://doi.org/10.3390/sym12010038 - 23 Dec 2019
Cited by 3 | Viewed by 3503
Abstract
In order to reduce the data size and simplify the process of creating characters’ 3D models, a new and interactive ordinary differential equation (ODE)-based C2 continuous surface creation algorithm is introduced in this paper. With this approach, the creation of a three-dimensional surface is transformed into generating two boundary curves plus four control curves and solving a vector-valued sixth-order ordinary differential equation subject to boundary constraints consisting of the boundary curves and the first and second partial derivatives at those curves. Unlike existing patch modeling approaches, which require tedious and time-consuming manual operations to stitch two separate patches together to achieve continuity between them, the proposed technique maintains C2 continuity between adjacent surface patches naturally, avoiding manual stitching operations. In addition, compared with polygon surface modeling, our ODE-based C2 surface creation method can significantly reduce and compress the data size and deform the surface easily by simply changing the first and second partial derivatives and shape control parameters, instead of manipulating large numbers of polygon points.
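A minimal one-dimensional sketch of the idea, under the simplifying assumption that the ODE reduces to the homogeneous case x⁽⁶⁾(u) = 0 (the boundary values below are made up for illustration): the six boundary constraints, position plus first and second derivatives at each end, determine a unique quintic, and sharing those boundary values with a neighbouring patch is exactly what yields C2 continuity across the joint.

```python
import numpy as np

# Boundary constraints at u=0 and u=1: position, first and second derivative.
# Sharing these values with an adjacent patch gives C2 continuity for free.
p0, d0, s0 = 1.0, 0.5, -2.0   # x(0), x'(0), x''(0)
p1, d1, s1 = 3.0, -1.0, 4.0   # x(1), x'(1), x''(1)

# For the homogeneous ODE d^6 x / du^6 = 0 the solution is a quintic
# x(u) = sum_k c_k u^k; six constraints determine the six coefficients.
def rows(u):
    pos = [u**k for k in range(6)]
    der = [k * u**(k - 1) if k >= 1 else 0.0 for k in range(6)]
    sec = [k * (k - 1) * u**(k - 2) if k >= 2 else 0.0 for k in range(6)]
    return [pos, der, sec]

M = np.array(rows(0.0) + rows(1.0))            # 6x6 constraint matrix
b = np.array([p0, d0, s0, p1, d1, s1])
c = np.linalg.solve(M, b)                      # quintic coefficients

def x(u):
    return sum(ck * u**k for k, ck in enumerate(c))

print(round(x(0.0), 6), round(x(1.0), 6))  # 1.0 3.0
```

The paper's actual formulation is vector-valued (one such solve per coordinate) with a non-trivial right-hand side shaped by the four control curves; the closed-form quintic above is only the homogeneous skeleton of that construction.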

10 pages, 9647 KiB  
Article
Fully Automatic Facial Deformation Transfer
by Shaojun Bian, Anzong Zheng, Lin Gao, Greg Maguire, Willem Kokke, Jon Macey, Lihua You and Jian J. Zhang
Symmetry 2020, 12(1), 27; https://doi.org/10.3390/sym12010027 - 21 Dec 2019
Cited by 7 | Viewed by 7344
Abstract
Facial animation is a serious and ongoing challenge for the computer graphics industry. Because diverse and complex emotions need to be expressed through different facial deformations and animations, copying facial deformations from an existing character to another is widely needed in both industry and academia, to reduce the time-consuming and repetitive manual modeling work of creating 3D shape sequences for every new character. However, transferring realistic facial animations between two 3D models remains limited and inconvenient for general use. Most modern deformation transfer methods require correspondence mappings, which are tedious to obtain. In this paper, we present a fast and automatic approach to transferring deformations between facial mesh models by obtaining 3D point-wise correspondences in an automatic manner. The key idea is that correspondences between different facial meshes can be estimated with a robust facial landmark detection method by projecting the 3D model onto a 2D image. Experiments show that, without any manual labelling effort, our method detects reliable correspondences faster and more simply than the state-of-the-art automatic deformation transfer method on facial models.
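The correspondence-by-projection idea can be sketched as follows. The toy data, the orthographic projection, and the nearest-neighbour matching are all simplifying assumptions made here; the paper itself runs a robust 2D facial landmark detector on a rendered view of each mesh:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a facial mesh's vertices, and 2D landmarks "detected" on a
# rendered image of it (a real pipeline would use an actual landmark detector).
verts = rng.normal(size=(500, 3))

# Orthographic projection to the image plane: drop the depth (z) axis.
proj = verts[:, :2]

# Pretend the detector returned these landmarks: slightly perturbed
# projections of a few known vertices.
true_idx = np.array([10, 120, 305, 444])
landmarks = proj[true_idx] + 0.001 * rng.normal(size=(4, 2))

# Point-wise correspondence: nearest projected vertex to each 2D landmark.
d = np.linalg.norm(proj[None, :, :] - landmarks[:, None, :], axis=2)
matched_idx = d.argmin(axis=1)        # one mesh vertex index per landmark
```

Running the same matching on source and target meshes yields paired vertex indices, which then anchor the deformation transfer without any manual correspondence labelling.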
