Review

AI and Generative Models in 360-Degree Video Creation: Building the Future of Virtual Realities

by Nicolay Anderson Christian 1, Jason Turuwhenua 2 and Mohammad Norouzifard 1,2,*
1 School of Technology, Yoobee College of Creative Innovation, Auckland 1010, New Zealand
2 Auckland Bioengineering Institute, University of Auckland, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9292; https://doi.org/10.3390/app15179292
Submission received: 24 June 2025 / Revised: 21 August 2025 / Accepted: 22 August 2025 / Published: 24 August 2025

Abstract

The generation of 360° video is gaining prominence in immersive media, virtual reality (VR), gaming, and the emerging metaverse. Traditional methods for panoramic content creation often rely on specialized hardware and dense video capture, which limits scalability and accessibility. This review examines recent advances in generative artificial intelligence, particularly diffusion models and neural radiance fields (NeRFs), for their potential to generate immersive panoramic video from minimal input, such as a sparse set of narrow-field-of-view (NFoV) images. To that end, we conducted a structured literature review of over 70 recent papers on panoramic image and video generation. We analyze key contributions from models such as 360DVD, Imagine360, and PanoDiff, focusing on their approaches to motion continuity, spatial realism, and conditional control. Our analysis identifies seamless motion continuity as the primary challenge: most current models struggle with temporal consistency when generating long sequences. Based on these findings, we propose a research direction that aims to generate 360° video from as few as 8–10 static NFoV inputs, drawing on techniques from image stitching, scene completion, and view bridging. The review also underscores the potential for scalable, data-efficient, and near-real-time panoramic video synthesis, while emphasizing the critical need to address temporal consistency for practical deployment.
Keywords: 360° video generation; generative AI; sparse-input synthesis; panoramic video; diffusion models; neural radiance fields


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
