Article

Democratization of Virtual Production: Usability Analysis of Three Solutions with Different Levels of Complexity: Professional, Educational and Cloud-Based

by Roi Méndez-Fernández *, Rocío del Pilar Sosa-Fernández, Fátima Fernández-Ledo and Enrique Castelló-Mayo
Department of Communication Sciences, Faculty of Communication Sciences, North Campus, University of Santiago de Compostela, 15782 Santiago de Compostela, Spain
* Author to whom correspondence should be addressed.
Informatics 2025, 12(4), 104; https://doi.org/10.3390/informatics12040104
Submission received: 24 July 2025 / Revised: 18 September 2025 / Accepted: 22 September 2025 / Published: 30 September 2025
(This article belongs to the Section Human-Computer Interaction)

Abstract

The technical and technological advances of recent years in the field of real-time photorealistic rendering have enabled enormous development in virtual production. However, the democratization of this technology faces two main obstacles: the high economic cost of implementation and the high complexity of the necessary software. This paper studies three virtual production software solutions that represent different stages in the democratization process of the technology, ranging from the professional software InfinitySet, to the more generalist and educational environment Edison, and to the cloud version of this same software, Edison OnCloud. To this end, an analysis of their functionalities and interfaces is conducted, the SUS questionnaire is applied, and the three systems are evaluated from the perspective of Nielsen’s usability principles. These tests demonstrate the complexity of the professional software InfinitySet, making it unapproachable for non-expert users without extensive previous training. On the other hand, both Edison and Edison OnCloud show significant usability improvements through limiting and reducing functionalities, which also results in a reduction in implementation costs, making the use of the technology feasible in non-professional environments, such as in education or for streamers.

1. Introduction

Virtual production has undergone a significant evolution in recent years, extending from the most traditional settings (such as entertainment programs, news broadcasts, or election nights on television) to the most advanced television series and films [1]. This technology has progressed from virtual TV sets with chroma backgrounds, which began appearing commercially in the 1990s [2], to today’s major film studios that use screens as synthetic backgrounds rendered in real time [3].
The main advantage of this technology lies in its ability to reduce post-production times [4] by incorporating a large part of the VFX in real time [5]. This results in a significant reduction in post-production costs and, more importantly, allows for real-time visualization of what is being filmed [6], without limiting the narrative and creative possibilities inherent to the technology [7]. Virtual production can utilize real-time chromakeying (offering an exact cut-out of real elements and allowing for a 100% virtual environment) or LED screens as backgrounds (limiting integration possibilities but solving certain problems of lighting consistency between real and virtual worlds, even creating reflections of the virtual elements in the physical environment [8]).
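As an illustration of the chroma keying principle mentioned above, the following minimal sketch (Python with NumPy) computes an alpha matte from the distance of each pixel to a key colour and composites the keyed foreground over a virtual background. It is a deliberately simplified, hypothetical example of the general technique, not the algorithm implemented by any of the systems analyzed later; real-time keyers add spill suppression, edge refinement, and GPU acceleration.

```python
import numpy as np

def chroma_matte(frame_rgb, key_rgb=(0, 177, 64), tolerance=80.0, softness=40.0):
    """Compute a per-pixel alpha matte for a green-screen frame.

    frame_rgb: H x W x 3 uint8 image; key_rgb: the chroma colour to remove.
    Pixels close to the key colour become transparent (alpha near 0), pixels
    far from it stay opaque (alpha near 1), with a soft transition in between.
    """
    frame = frame_rgb.astype(np.float32)
    distance = np.linalg.norm(frame - np.array(key_rgb, dtype=np.float32), axis=-1)
    return np.clip((distance - tolerance) / softness, 0.0, 1.0)

def composite(foreground_rgb, background_rgb, alpha):
    """Blend the keyed foreground over a (virtual) background using the matte."""
    a = alpha[..., None]
    return (a * foreground_rgb + (1.0 - a) * background_rgb).astype(np.uint8)

# Hypothetical usage with random 1080p frames standing in for the camera and the render output.
cam = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
virtual_set = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
out = composite(cam, virtual_set, chroma_matte(cam))
```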
From a technological perspective, advances in recent years have increased the realism of the virtual sets to such a level that they are now almost indistinguishable from real spaces. In this regard, developments in render engines, especially those related to game engines such as Unreal Engine (UE) [9], have enabled the real-time execution of global illumination algorithms with solutions like Lumen [10], or the processing of millions of polygons with developments such as Nanite [11]. This has made such software the foundation on which major virtual productions are built. On the hardware side, developments like Nvidia RTX, which implements specific RayTracing processors at the graphics card level, allowing the algorithm to run in real time [12], or DLSS (Deep Learning Super Sampling), which enables high-resolution images to be obtained from low-resolution images using artificial intelligence [13], have brought virtual production to a sweet spot where it is finally a viable solution already being used in major productions. Other alternatives, such as integrating real-world scenarios into the virtual world, are also being explored [14], which would make it possible to repeatedly shoot in the “real world” without the inconveniences of weather, daylight duration, required administrative permits, etc.
Despite the reduction in production times and costs implied by virtual production, the implementation of these technologies remains expensive since, as previously mentioned, it requires state-of-the-art computer equipment as well as software capable of controlling both real-time rendering and the involved effects and animations, while also managing real-time camera positioning [15]. This entire set of sensors, cameras, and computers means that virtual production is only accessible to a few production companies that can afford the initial investment.
Beyond the economic cost of these systems, their complexity must be taken into account. A professional virtual set consists of numerous hardware and software devices that must function synergistically and offer a multitude of technical and creative possibilities. This variety of possibilities also leads to significant system complexity. Among other things, the software must control real-time rendering, the positioning of real cameras, talent tracking (if necessary), the calibration and setup of real cameras and video inputs and outputs, actions and effects triggered in real time, etc. All these elements are routine for an expert, but they make the learning curve for operating these environments nearly insurmountable for a novice user.
Despite these limitations, recent years have seen the emergence of various solutions aimed at bringing the world of virtual production closer to other types of users, such as small production companies, streamers, and the educational sector [16]. These systems tend to limit the capabilities of more complex environments by employing virtual cameras decoupled from the real ones, simplifying work environments, and restricting the technical features of the systems. Through these strategies, it is possible to significantly reduce both hardware and software costs, bringing the implementation expenses down from hundreds of thousands, or even millions of euros, to just a few thousand. This simplification strategy restricts the technical capabilities of these environments, but in many cases, such limitations can be overcome through the creativity enabled by the technology itself, yielding professional-quality results with much more affordable systems. The next, still emerging, step in the democratization of this technology is to take it to the cloud, removing the need for high-performance graphics computers for running the software and making it accessible from any device with a browser, camera, and internet connection. In this way, hardware costs would be minimized, and prospective clients would pay only for the hours they use the systems in the cloud.
In summary, once the limitations of economic costs and hardware complexity are overcome, the main barrier to the adoption of virtual production systems lies in their operational complexity. Therefore, it becomes necessary to assess whether the simplified alternatives being developed enable non-specialist users to operate the systems, or if, conversely, specialists with years of experience are still required even for simpler productions. To address this issue, this article presents a comparative analysis of three solutions offered by one of the leading global virtual production companies, Brainstorm Multimedia (Valencia, Spain). The first is InfinitySet, a professional software used in virtual television studios. The second is Edison, a simplified version of the system designed with a strong focus on integration in the educational sector, but now also being used in areas like advertising and corporate presentations. Lastly, Edison OnCloud is considered: the cloud-based version of Edison, designed to be the most accessible alternative for all users. The reduction in complexity and costs previously described is perfectly reflected in these three systems. As an example, the implementation cost of the studio used in this research (using the InfinitySet software) exceeds EUR 500,000, while the equipment required to run Edison, including the software license, amounts to approximately EUR 10,000. For Edison OnCloud, three subscription plans are planned to be offered, with the most expensive priced at around EUR 100 per month, which includes unlimited runtime and up to 100 users under the same license. This makes it ideal for use in university settings (one user per faculty), secondary schools (one user per teacher), or similar institutions. Of course, it should be noted that costs may vary over time, and this initial estimate may increase. Similarly, both software licenses and certain technical equipment tend to rise in price with each new iteration or improvement. The economic advantage of the cloud-based alternative, subject to its performance and reliability, appears evident for non-professional use. These three alternatives are compared through a detailed analysis of their interfaces and capabilities, the application of the SUS (System Usability Scale) questionnaire [17], and an evaluation based on Jakob Nielsen’s usability heuristics [18]. This approach aims to provide an initial comprehensive overview that determines the level of training, knowledge, and skills necessary to access this technology, thereby establishing basic criteria for a future study with a larger and more representative sample of the general public. Subsequently, the article offers a discussion on the prospects for democratizing this technology, both currently and in the coming years, focusing on the observed accessibility, potential, and the challenges anticipated for the future.
As this is a field that has been mainly focused on a professional environment, there are, to the authors’ knowledge, no previous usability studies on specific software solutions for virtual television studios. Related articles have focused more on defining the pipeline design for independent studios [1] than on the usability and accessibility of the industry solutions. It is also common to find literature that analyzes the advantages of using programs related in some way to virtual production, such as Unity [19,20], but always from the point of view of professional developers.
On the other hand, some studies focus on the usability of applications developed with these tools [21]. These studies do not focus on the usability of the platform itself but rather on the products created within it, or even on developing new pipelines for virtual production. This stems from the very nature of the software, which is designed for professionals, resulting in its usability being secondary to its efficiency or the functionalities it offers. For this same reason, we believe it is important to pave the way as new solutions emerge that aim to break down this barrier between professional and amateur use, bringing the concepts and possibilities of virtual television studios to a much broader audience. This research aims to take those initial steps and contribute to defining the appropriate strategies to enhance accessibility for non-expert users while minimizing the loss of technical and creative possibilities.

2. Materials and Methods

The principal objective of this research is to determine whether virtual production technology has reached a level of maturity and development sufficient to be considered accessible to a general audience, moving beyond the highly specialized environment in which it has historically evolved. To address this goal, the study examines the usability of three next-generation solutions, comparing their evolution in terms of viability for non-expert users. Building on the results, the research discusses the integration potential of these solutions in small production studios, streaming environments, and educational settings. Finally, it identifies current strengths and limitations that must be addressed to achieve true democratization of virtual production.
Although the study is carried out on specific, concrete solutions, the aim is to perform a theoretical analysis, derived from the empirical analysis presented, that goes beyond the implementation of a particular program. In this sense, both the usability improvements detected and their underlying causes are examined, proposing lines of work toward the democratization of the technology. Thus, if the outcome of the research were that these solutions did not achieve greater usability for non-expert users, the result could not be generalized, since alternative approaches may exist that could succeed in doing so. However, those strategies implemented in these solutions that did achieve the intended improvement in usability may indeed serve as useful guidelines to deepen their application or to generalize them in pursuit of the democratization of virtual television studios envisioned in this article. On the other hand, in the search for these improvement strategies, the decision was made to use programs from the same family, given that they share a common foundation, which results in similar routines and design patterns throughout the entire experience. To the best of the authors’ knowledge, there is no solution currently available on the market that seeks this democratization of technology by implementing the three distinct solutions explored in this research. Furthermore, existing solutions are mostly oriented toward professional use and are therefore complex and not easily accessible to the general public. Solutions such as Aximmetry (Budapest, Hungary), Pixotope (Oslo, Norway) or Zero Density (Amstelveen, Netherlands) focus on professional markets and do not offer a range of products geared toward more independent users or those with fewer economic and human resources. Vizrt (Bergen, Norway) offers an alternative comparable to Edison (Viz Virtual Studio Go), but the workflow they propose is more complex and expensive, as it requires more advanced cameras and external controllers, and the company does not offer a full cloud virtual production solution. Brainstorm, however, does offer the three solutions: InfinitySet (comparable to those offered by Aximmetry, Pixotope, Vizrt or Zero Density), Edison (designed for educators and content creators and comparable to Viz Virtual Studio Go), and Edison OnCloud, which takes this concept a step further by making use of cloud computing and has no equivalent in the market.
In methodological terms, a qualitative strategy was used in three steps: a visual and functional analysis of the programs, the application of the SUS usability questionnaire, and an analysis based on Jakob Nielsen’s ten usability principles. As this study represents an initial approach, both the questionnaire and the analyses were carried out by the authors. All the authors are experts in virtual reality from a creative and theoretical perspective, but from a practical standpoint, they have varying levels of knowledge. Author 1 has a background in computer science and experience with virtual production software, having previously trained in InfinitySet (80 h training) and Edison (10 h training). Author 2 attended the same training as Author 1 but has never used InfinitySet independently. She has, however, used Edison frequently in her research. Author 3 has received 10 h of training in the use of both InfinitySet and Edison. She frequently uses Edison in her research and teaching but had never used InfinitySet before this experience. Author 4 has also received 10 h of training in the use of both InfinitySet and Edison but had never used any of the applications before. None of the authors had prior contact with Edison OnCloud before this research, and all three received 2 h of training in its use. When interacting with the program, users were tasked with integrating into the three systems a predefined set offered by Brainstorm Multimedia, incorporating a real presenter through chroma keying and including camera movements. This set is the most basic one and represents the simplest workflow to begin a virtual production. Users were provided with technical assistance throughout the process to prevent potential deadlocks that could make it impossible to test the various necessary functionalities of the programs, thereby enabling a more comprehensive analysis of them. Although the authors’ prior knowledge of some of the technologies and programs used may introduce a bias in the results with respect to novice users, it also allows them to explore certain aspects of the design and the strategies adopted at a deeper level. Conversely, in this initial approach, it was impossible to include a larger number of external users due to the complexity of the training required for operating InfinitySet, which would have been unmanageable for a broad group of participants with no prior experience. Furthermore, it was necessary to include the InfinitySet software in this first study, as it is the most advanced of the three and provides a benchmark for the functionalities that are reduced or removed in the other two in the interest of simplification and usability enhancement. The results presented in the analysis of Nielsen’s heuristics are based on a joint discussion and debate among the authors following meetings held after the testing. For future work, a more extensive and diversified analysis is planned in terms of users, seeking the participation of both experts and non-experts in the field (after prior training). The specificity of this software and the inherent complexity of the studied technology make this prior training essential for tackling this future, broader test. Therefore, this study aims to approximate the type of training that would be necessary for a user’s first contact with the programs and to assess the feasibility of conducting that broader study with all three alternatives, or whether it will be impossible to include InfinitySet due to its inherent complexity.
Lastly, regarding materials, both the facilities used for the tests and the software employed must be considered. As for the software, as previously mentioned, three different programs were used in this comparison. InfinitySet 6.1 is a virtual production software that offers great flexibility in the use of sensor systems by including trackfree technology, which allows the use of sensorized cameras, robotic cameras, unsensorized cameras, and videos with integrated tracking. This makes it possible for the system to create virtual reality, augmented reality, and extended reality environments while at the same time feeding display systems. It also offers a software-based chromakeying system, allowing its use without external chromakeyers, although this is also possible. Moreover, InfinitySet 6.1 provides its own render engine while also integrating with Unreal Engine 5.4. The second program used is Edison 5.1, a simpler virtual production software focused on educational applications. Built on the foundation of InfinitySet, Edison offers a simplified version not only in terms of technical possibilities but also in its operation, aiming to make it accessible to non-expert users. This software was developed by Brainstorm and enhanced through collaboration between the company itself, the Open University of the Netherlands, the University of Alicante, and the University of Santiago de Compostela (USC) via the European project “CloudClass: Low Cost, Mobile, Cloud & Template Based Augmented Reality Studio for Education (Eurostars-E115354)”, which resulted in the foundation of what is now Edison OnCloud. The third software used in this study is Edison OnCloud 1, which works with Edison 5.1 on the remote desktop. This system is still in the Beta phase and is expected to reach the market soon under a subscription fee model. It takes Edison’s simplification one step further by making it unnecessary to acquire powerful computer equipment for graphic tasks, as these are carried out in the cloud, specifically on Amazon Web Services. It is necessary to note that both InfinitySet and Edison operate exclusively in Windows environments (Windows 11 in this case) and require a dedicated Nvidia graphics card for their operation. Edison OnCloud, by contrast, does not have any of these requirements, as it runs in the cloud through a web browser.
Regarding the hardware, the virtual studio of USC, located in the Faculty of Communication Sciences, was used. It is equipped with a chroma stage of approximately five by seven meters, two Panasonic AW-UE150KEJ PTZ robotic cameras (Panasonic, Kadoma, Japan), a Panasonic AK-HC3900 studio camera (Panasonic, Kadoma, Japan) with a Fujinon ZA12x4.5BRD-S6 lens (Fujifilm, Tokyo, Japan) and an inside-out optical tracking system, the Stype Redspy (Stype, Helsinki, Finland), six HP Z4 computers (Hewlett-Packard, Palo Alto, United States of America) (three for rendering, one for graphics, one for editing, and one for control) equipped with Brainstorm Suite version 6.1 (InfinitySet, OnDemand and Aston), a teleprompter, a video player-recorder, four Sennheiser wireless microphones (Sennheiser, Wedemark-Wennebostel, Germany), a Yamaha TF1 sound console (Yamaha, Hamamatsu, Japan), and an LED lighting grid with 18 spotlights. This equipment was installed at the Faculty of Communication Sciences of the University of Santiago de Compostela in 2024 with funding from the National Recovery, Transformation, and Resilience Plan of the State Research Agency and the Ministry of Science and Innovation, through Next Generation funding from the European Union (ref. EQC2021-007535-P). It serves as the foundation for the common research service VIRTUS, Virtual Research Platform of the USC, which provides support to university research groups as well as the regional audiovisual sector. For Edison workflows, a DELL G167330 laptop and a Logitech Brio Ultra HD Pro Business camera were used. For the tests with Edison OnCloud, an ASUS ROG Zephyrus GA503Q (2021) laptop and the same Logitech Brio Ultra HD Pro Business camera were used.

3. Results

This section presents the results of the three analyses conducted on InfinitySet, Edison, and Edison OnCloud. First, the analysis of the interfaces and functionalities of the three systems is provided. Next, the results of the SUS questionnaire are presented. Finally, the software solutions are evaluated based on Nielsen’s heuristic principles.

3.1. Software Analysis

This section presents an analysis of the functionalities and characteristics of the software studied in this article. For each of the three programs, the functionalities, interface, and workflow on the equipment used in this research will be described.

3.1.1. InfinitySet

InfinitySet is the most powerful solution in the Brainstorm Multimedia suite, being the program used in television studios with the most cutting-edge technology. In this sense, the software is capable of integrating virtual reality, augmented reality, and extended reality environments, as well as feeding LED screens. It offers integration with Unreal Engine, allowing this render engine to be used for displaying synthetic backgrounds with more realistic global illumination. It also provides its own render engine, which allows direct integration of 3D models but delivers lower image quality than UE.
It allows the configuration of up to 12 Xpoints or operating environments. Each Xpoint can be of type VR (when a fully virtual background is integrated with the image of actors and a few real elements), AR (when real image with tracking is integrated with 3D elements), XR (when screens and graphics are included in the real set), stack (when an image for a large screen is to be generated), or trackfree (when the presenter becomes a quadrilateral onto which the image is projected with chromakeying and their camera is fixed but free movement of the virtual camera is allowed [22]). The system also allows for up to 12 actors (in addition to the real image integration included in the selected Xpoint if it is AR, VR, or XR), 24 different types of Inputs, 12 playlists, 512 action presets, and 512 camera presets. The system’s complexity is enormous due to the multitude of options it provides.
Next, the interface of InfinitySet (Figure 1) is described:
  • On the left side of the interface, there is a panel that allows different basic elements to be added, such as text, basic 3D shapes, lights and particles, videos, etc.
  • At the bottom there is a browser with seven tabs that allow users to navigate available resources, those in use, and configure entry, exit, or custom animations.
  • The central section, called viewport, displays the set and the available video inputs.
  • Above the viewport, options are presented for mouse use, 3D model editing, showing or hiding the information overlay, render tool control, and operating modes.
  • On the right, the main options panel can be found. Inside it, three main tabs are presented:
    1. 
    Vset: Controls all the variables related to inputs, Xpoints, actors, playlists, actions, and others. Each of these options has a tab with all available configuration settings.
    2. 
    Object: Shows all 3D objects present in the set and their properties, allowing access to material, texture, position, scale, and other properties.
    3. 
    Unreal: This tab is dedicated to the Unreal Engine link; here, the UE project to be loaded is specified along with its properties within the environment, and the render engine initialization button used to start the render is located.
  • Lastly, on the far right section of the interface, there are a series of icons that allow new panels to be opened for specific elements such as timers, font types, animations, binds, lists, etc.
As can be seen, it is an extremely complex interface that offers a multitude of options and is highly oriented toward professional use. Achieving proficiency with it requires intensive and lengthy training, as well as frequent use to maintain the acquired skills.
As an example, to load a UE scenario in the test set, the following steps must be followed:
1. 
Configure three Xpoints as VR, one for each of the cameras in the studio. Each of these Xpoints will have a different input configured. Each of these three inputs will correspond to the input slot on the video card of each render computer for the input signal from its corresponding camera. In this case, for all three, it will be input 1. Along with the video information, tracking data arrives thanks to a complex camera configuration and calibration performed beforehand.
2. 
For each Xpoint, activate the Game Engine option to enable the use of UE.
3. 
Go to the Unreal tab and indicate which UE project the user wants to load as the background.
4. 
Launch the project.
5. 
Configure chromakeying for each camera.
6. 
Add a video delay for each input so that real and virtual camera movements occur synchronously (otherwise, the real camera would start moving before the virtual one due to the time needed to render the background).
7. 
Configure a mask to hide the parts of the set that do not have chroma, so that the camera can move freely without unwanted real elements appearing.
If a scenario is to be loaded into Brainstorm Multimedia’s proprietary system, the user would simply need to drag the object into the viewport area, and it would be immediately loaded as the background. In this case, it would not be necessary to activate the game engine. As can be seen, it is a complex process with numerous steps that need to be remembered and require knowledge of both software and hardware. At the same time, it is also important to consider that the audio, which arrives through an audio mixing console, must also have a delay added to ensure it is synchronized with the video.

3.1.2. Edison

Edison, despite sharing the underlying programming with InfinitySet, is a much more simplified version that limits certain aspects and possibilities to facilitate interaction for non-expert users and reduce the learning curve needed to approach virtual production. To achieve this, it operates with four main elements. The actors are quadrilaterals placed in the 3D world, onto which the image from a camera is projected after the chromakeying process. This makes it appear that the presenter is truly on set (even though a camera without tracking is used) and enables virtual camera movements even when the real camera is fixed. To disguise the flat nature of the image during camera rotations, the billboard effect is used (the quadrilateral always faces the virtual camera), simulating the presenter turning to follow the camera. The second element is the presentation. Since it is designed for teaching, Edison incorporates a slide presentation that can include PDFs, PowerPoint files, 3D models, video, audio, etc. The third element is the background. This is the 3D set in which the scene takes place and is unchangeable during production time. It does not allow individual elements to be modified and is simply a static background that enables camera movements. Finally, stands are also available, which make it possible to conceal the parts of the presenter’s body that are not captured on camera. As it is focused on teaching, it is common for teachers to record themselves behind a desk or for the camera not to capture their full body. Placing a table or stand in front of the teacher conceals the lower part of the body that does not appear in the shot, giving the impression of a complete framing.
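The billboard behaviour described above can be summarized with a short, engine-agnostic sketch (a hypothetical illustration, not Brainstorm’s implementation): each frame, the actor quad is rotated around the vertical axis so that it keeps facing the virtual camera, which hides the flatness of the keyed image during virtual camera moves while keeping the presenter upright.

```python
import math

def billboard_yaw(actor_pos, camera_pos):
    """Return the yaw angle (in degrees) that makes an upright quad placed at
    actor_pos face a virtual camera placed at camera_pos.

    Only the rotation about the vertical (Y) axis is applied, so the presenter
    stays upright while always facing the camera; this is what disguises the
    flatness of the keyed image during virtual camera movements.
    """
    dx = camera_pos[0] - actor_pos[0]   # horizontal offset (X)
    dz = camera_pos[2] - actor_pos[2]   # depth offset (Z)
    return math.degrees(math.atan2(dx, dz))

# Hypothetical example: the virtual camera orbits to the right of the presenter.
print(billboard_yaw((0.0, 0.0, 0.0), (2.0, 1.6, 3.0)))  # ~33.7 degrees
```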
The interface of Edison (Figure 2) is a simplified version of InfinitySet’s. As can be seen, the icons, colors, and style are the same, but the available options are much fewer. The interface is described below:
  • On the left side there is a resource browser, where the user can access the scenarios, audiences, pointers, displays, slides, and stands available on the computer. To load any of these elements, the user needs to enter the corresponding folder and double-click on the desired item. The system, knowing the source folder, will interpret the selected item accordingly. For example, if a 3D model is inside the backgrounds folder, it will be interpreted as a background, but the same 3D model inside the slides folder will be interpreted as a slide (a small illustrative sketch of this folder-based interpretation is given after this list).
  • At the bottom, camera presets are presented, allowing the user to define eight virtual positions for the camera and the flight time between each one.
  • The viewport, where the result of the integration between real and virtual worlds is displayed, is located in the center of the interface.
  • Above the viewport there are buttons which allow the user to define the active panels within the interface, select the application’s working modes, and control the presentation (switch slides, bring a slide to the foreground, record, etc.).
  • To the right of the viewport, the slide list is presented. This area displays the message “drop files here”, indicating that new slides are loaded by dragging them onto it. By doing so, the first slide will immediately appear on the display.
  • All the way to the right of the interface there is a work area, where all the application’s configurations can be set through four tabs:
    1. 
    Xpoint: The Xpoint type is always trackfree. In this tab certain properties of the background, stand, display, and slides can be modified.
    2. 
    Actors: The software allows up to six actors to be defined, each linked to one input. In this tab, the specific configurations of each actor (position, size, etc.) can be performed.
    3. 
    Inputs: The maximum number of inputs is also six. An input defines the type of signal that will be displayed within an actor (video, webcam, image, etc.).
    4. 
    Settings: This tab allows the integration with Unreal Engine and includes all the configurations which are required.
  • Finally, at the top of this panel, the “effects” button is located, enabling more advanced effects than those available by default. For example, by default, only one actor is available. To work with all six, the user needs to activate the corresponding effect. This keeps the interface simpler for the most common tasks and users, but gives access to more complex alternatives for users who are not necessarily experts in virtual production but do have some advanced training in the use of the software, thus covering the entire target audience for this application.
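The folder-based interpretation of resources mentioned in the description of the resource browser can be illustrated with the following sketch. The folder names and file paths are hypothetical placeholders; the real application resolves resource types internally rather than through a script like this.

```python
from pathlib import Path

# Hypothetical mapping from resource-browser folder names to how the
# application interprets a file double-clicked inside them.
ASSET_FOLDERS = {
    "backgrounds": "background",
    "slides": "slide",
    "stands": "stand",
    "pointers": "pointer",
    "displays": "display",
}

def classify_asset(path: str) -> str:
    """Infer the role of an asset from the folder it lives in."""
    return ASSET_FOLDERS.get(Path(path).parent.name.lower(), "unknown")

# The same 3D model is treated differently depending on its folder.
print(classify_asset("resources/backgrounds/newsroom.fbx"))  # background
print(classify_asset("resources/slides/newsroom.fbx"))       # slide
```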
The steps to configure the application to load a default scenario would be as follows:
1. 
Check that the input assigned to the actor is correct. Edison assigns the actor to the system’s default camera. It is necessary to verify that the correct camera (generally connected via USB) has been selected and that the resolution is as desired.
2. 
Select the desired background, which must be located in the backgrounds folder, and double click on it.
3. 
Define the actor’s position and size within the scene.
4. 
Define the desired camera preset positions by moving the virtual camera and pressing the corresponding preset buttons.

3.1.3. Edison OnCloud

Edison OnCloud, as its name suggests, is an Edison environment that runs in the cloud. It is an application that is not yet available to the public but takes Edison’s accessibility philosophy a step further. It is hosted on Amazon Web Services and, therefore, can be used from a browser on devices that do not have the necessary graphics power for the traditional local version. Edison requires an NVIDIA graphics card and sufficient computing power to handle the heavy 3D environments intended for integration. Using the cloud means these requirements are handled by the web service, allowing any user to access them from simpler devices. On the other hand, both InfinitySet and Edison licenses are for a single installation and are tied either to a specific device or to a hardware key. Thus, a new license must be purchased for each device where they are to be installed. In the case of Edison OnCloud, it is not necessary to buy a license, as it works on a subscription basis, allowing several users to share a subscription and a common storage space. This makes the technology more accessible for institutions such as universities, high schools, etc., since within a single plan, multiple users can access the software. For example, a university could have one user per faculty, enabling the creation of virtual production content for teaching and research dissemination.
Regarding the software itself, the system offers two operating modes. The first involves having an instance of Edison running in the cloud and controlling it in the same way as locally. In this case, the interface of Edison is exactly the same, but an additional layer of complexity is added for the user due to working in the cloud (Figure 3). The camera and microphone must be previously configured in the browser, file transfer does not work simply by dragging but instead requires the use of the provided tools, window switching is less agile than on a traditional computer, etc. All these tools are displayed in the top region of the remote desktop browser interface (Figure 3). Therefore, it is the same interface but with an added layer of complexity resulting from remote operation.
The second operating mode is designed so that teachers can directly access templates preconfigured by technical specialists. In this way, these technicians would define the background, camera presets, and other stage settings. Therefore, the user would only need to choose the template, decide whether the presenter will be standing, behind a stand, behind a table, or on a screen (if a chroma set is not available), load the presentation slides, and conduct the production in real time without ever interacting with the actual Edison interface, using a simplified control panel (Figure 4) instead. The interface of this control panel allows the user to manage the presentation slides and camera movements (top left), adjust chromakeying (top right), and modify in real time the available values for the background, slide display, stand, actor, and slides (bottom, from left to right). In this context, the process for preparing a recording would be as follows:
1. 
Choose the stage template to be used.
2. 
Define the type of actor to be used (with stand, with table, on screen, or standing).
3. 
Load the presentation to be used. As in Edison, this can be a PDF, PowerPoint, 3D models, video, etc., or combinations of these.
4. 
Start the Amazon Web Services service and carry out the production using the control panel.
The following table summarizes the differences in functionalities among the various software solutions that have been analyzed (Table 1).

3.2. System Usability Scale

For the comparative analysis, a common objective was set for all three systems: integrating a predefined 3D scenario generated by Brainstorm Multimedia, incorporating a real person through chromakeying, and including camera movements. For Edison OnCloud, two questionnaires were administered, one for the basic version (which does not involve direct interaction with the original Edison interface) and another for the advanced version (handling the traditional Edison interface remotely). The average of the responses and their standard deviation can be seen in Table 2. The overall value of the results is calculated following the procedure established in the SUS methodology (subtracting the sum of the scores of the even-numbered questions from 25, adding to the result the sum of the odd-numbered questions minus 5, and multiplying the overall result by 2.5). The individual answers of each of the authors can be found in Appendix A, Table A1, Table A2, Table A3 and Table A4.
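For clarity, the scoring procedure described above can be expressed as a short script. The ratings in the example are hypothetical placeholders, not the authors’ actual responses; the formula itself follows the standard SUS computation stated in the text.

```python
def sus_score(ratings):
    """Compute the overall SUS value from ten 1-5 Likert ratings.

    Odd-numbered items (1, 3, 5, 7, 9) contribute (rating - 1) each and
    even-numbered items (2, 4, 6, 8, 10) contribute (5 - rating) each;
    the sum of contributions is multiplied by 2.5, giving a 0-100 score.
    This is equivalent to 2.5 * ((sum_odd - 5) + (25 - sum_even)).
    """
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS expects ten ratings on a 1-5 scale")
    sum_odd = sum(ratings[i] for i in range(0, 10, 2))    # items 1, 3, 5, 7, 9
    sum_even = sum(ratings[i] for i in range(1, 10, 2))   # items 2, 4, 6, 8, 10
    return 2.5 * ((sum_odd - 5) + (25 - sum_even))

# Hypothetical respondent: 4 on every odd-numbered item, 2 on every even-numbered one.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```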
It is necessary to stress once again that the sample is very small, and its results cannot be treated as those of a larger and more representative sample of a general population. The objective of this research is to delve as deeply as possible into the analysis of the programs rather than to obtain a more superficial impression from a larger sample. This must be taken into account when analyzing and assessing the results obtained.
As can be observed, usability according to the SUS scale is almost acceptable for Edison OnCloud Advanced, being very close to 70, which marks the border of acceptable usability. Edison falls in the acceptable area, proving its usability and the possibility for non-expert users to learn to use it in a reasonable time. Edison OnCloud Basic scores over 90 points, making it the most usable software in these tests and an outstanding result. On the other hand, InfinitySet reaches a very low score due to its nature as a software intended for professional users and a highly complex environment, falling in the non-acceptable area (Figure 5).
Observing the standard deviation, it can be seen that the highest level of consensus was reached in the analysis of the Edison OnCloud Basic option, in which the three authors assigned the same score to items 2 (I found the system unnecessarily complex, with a value of 1), 3 (I thought the system was easy to use, with a value of 5), and 7 (I would imagine that most people would learn to use this system very quickly, with a value of 5). This fact reinforces the strong outcome of this option as well as the authors’ perception of the high usability of the program. Similarly, there is consensus regarding statement 7 (I would imagine that most people would learn to use this system very quickly) in reference to InfinitySet, assigning it a score of 1, which highlights the complexity of this system, its limited adaptability for general users, and the requirement for extensive training to master it. On the other hand, even the results with the greatest variability show a standard deviation of only 1.29, which indicates that the four authors evaluated the different aspects in a similar manner, thereby reinforcing the shared view of the usability of the four alternatives, as represented by the overall SUS values.
It is necessary to point out that only Author 1, as the most experienced user, was able to complete the entire configuration and startup process of InfinitySet without external assistance. The numerous steps required, along with the dispersion of these elements across different parts of an interface saturated with options, caused users to forget some of the configurations or to have difficulties remembering where the suitable options were located. The lack of automation in some of these complex configurations, thus requiring manual execution, makes it a many-step process (see Section 3.1.1) which is not intuitive at all for a non-expert user. An example of this is the introduction of a delay in the real image to eliminate the lag caused by the rendering time. This is a complex process due to the difficulty of locating where this configuration is performed (within the inputs tab, for each input it is necessary to find the “Video delay override” option, which is not visible on the screen without scrolling, activate it, and configure the delay). These issues disappeared in Edison and Edison OnCloud, as many of these processes are transparent to the user, who has far fewer options to search through in the interface. In the case of video delay, input recognition is automatic, and it is not necessary to enter the value manually since the lag is eliminated automatically due to the nature of the application.
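The video-delay compensation discussed above (and in step 6 of Section 3.1.1) can be illustrated with a minimal sketch: live camera frames are held in a fixed-length buffer so that each real frame is composited with the virtual background rendered for the same instant. This is a hypothetical model of the idea, not InfinitySet’s implementation (which exposes the delay through the “Video delay override” option); the same reasoning applies to the audio delay mentioned in Section 3.1.1.

```python
from collections import deque

class VideoDelay:
    """Delay a live camera feed by a fixed number of frames so that it can be
    composited with a virtual background whose rendering lags behind reality."""

    def __init__(self, delay_frames):
        # Holds the newest frame plus the delay_frames frames before it.
        self.buffer = deque(maxlen=delay_frames + 1)

    def push(self, frame):
        """Store the newest camera frame and return the delayed one, or None
        while the buffer is still filling up at the start of the production."""
        self.buffer.append(frame)
        return self.buffer[0] if len(self.buffer) == self.buffer.maxlen else None

# Hypothetical example: a 3-frame render latency at 50 fps (60 ms of delay).
delay = VideoDelay(delay_frames=3)
for real_frame in ["f0", "f1", "f2", "f3", "f4"]:
    print(real_frame, "->", delay.push(real_frame))  # delayed output starts at f3
```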

3.3. System Analysis Following Nielsen’s Heuristic Principles

Regarding the analysis carried out based on Nielsen’s usability principles, we have the following results:
1. 
Visibility of system status: All three systems have difficulties properly informing the user about the system’s status. This problem is especially noticeable when large 3D scenarios are being loaded, a process that generally takes a long time, during which the interface appears frozen as if the system had crashed. When any of the programs is opened, a Windows console window is always present. In it, the user can observe information about the actions being performed by the software, but this information is neither complete nor accessible to all types of users (it is sometimes too technical).
2. 
Match between the system and the real world: As this is professional software focused on a very narrow market niche such as virtual production, InfinitySet includes a series of concepts that deviate from those handled in the daily life of an average user such as Xpoint, First Tracking offsets, 3D Matte, Despill, etc. However, this has been partially solved in Edison and Edison OnCloud since, although terms such as Xpoint or Despill still persist, elements like background, floor, display, slides, etc., have been defined, which are much more common in an academic or generalist environment.
3. 
User control and freedom: In all three solutions, it is possible to undo actions, and the user is always in control of the program. However, when a serious problem occurs, it is generally necessary to close and restart the program, as in many cases it is not possible to regain control. Since this is a complex technology, sometimes problems inherent to the 3D models used arise which are not easily solved within the program itself and must be fixed in third-party software (such as Blender, Maya, Unreal Engine) to achieve the expected result.
4. 
Consistency and standards: The systems are consistent, as the operation across the different interfaces is always the same. The configuration and presentation of functionalities are always carried out in the same way, following the most common standards. For example, the color picker for chromakeying is an eyedropper like the ones found in image processing applications, and the color systems used are the most common in the industry (RGB and HSV). Consistency is maintained both aesthetically and functionally across all three programs, with the most innovative, modern, and distinct component being the control panel of Edison OnCloud, which is mainly intended for users who do not have direct contact with the traditional interface.
5. 
Error prevention: The inherent complexity of the systems makes it easy to make configuration mistakes. However, these errors are not usually fatal and typically result in black screens in the viewport due to not activating the game engine or not properly setting up a video input. It is rare to receive an error message, and the errors that can be made are not easily preventable by the system. Additionally, these are more often due to incorrect configuration by the user than to a software bug. In any case, when such issues occur, they are not easy to identify since there can be multiple elements failing, and the system does not provide sufficiently specific feedback.
6. 
Recognition rather than recall: In many cases, the icons could be significantly improved, as it is difficult to determine their function without relying on the textual description that appears when hovering the mouse over each icon (Figure 6). Nevertheless, this iconography is consistent across the different programs, and the same icon is always used for the same action, even if it appears in different tabs, maintaining consistency.
7. 
Flexibility and efficiency of use: InfinitySet is not designed to be an accessible program but rather to offer substantial flexibility for professional users. In this sense, a wide range of different actions can be performed, and it is possible to achieve effects or arrive at the same outcome using different strategies within the same program. Edison is an example of good adaptation, flexibility, and efficiency of use. By default, it displays an interface with minimal functionality. However, by activating panels and effects, more advanced functionality can be achieved for more experienced users. By default, the program only allows the use of one actor. However, through the effects panel, it is possible to access the control of up to six actors within the same scene.
8. 
Aesthetic and minimalist design: Aesthetics is a relative concept, but none of the three solutions is visually attractive. The interfaces appear outdated, and the iconography, button colors, shapes, menus, etc., remind the user of designs from many years ago. This connects to what was mentioned in principle number 6, as the iconography is, in many cases, confusing and outdated. The complexity of the tasks to be performed means that none of them is minimalist. Edison improves the impression given by InfinitySet (where hundreds of buttons, checkboxes, sliders, etc., make it almost unmanageable without intensive prior training) by cutting back on functional elements. From the programs analyzed, the only part that can be considered visually attractive and minimalist is the controller of Edison OnCloud, which incorporates brighter colors, rounded designs, and generally a more modern aesthetic. However, the functionalities offered by this alternative are very limited in comparison with the others.
9. 
Help users recognize, diagnose, and recover from errors: When a configuration error occurs, none of the three systems provides assistance for recovery or even for detecting why the error is happening. Beyond an occasional error message, the only information about the system’s status is provided through the parallel Windows console window, which is always available. This does not offer diagnostic strategies for non-advanced users or guidance on how to recover the system from the error. In general, when a serious error occurs, the way to resolve it is to close and restart the program, losing all unsaved work. Moreover, it should be noted that, in general, the console window is not visible to the user while working (since the program’s interface occupies the entire screen). Therefore, the user must actively consult it to determine whether an error has occurred or if the action has been successfully executed. In certain cases, an error is repeatedly generated across all the rendered frames, making the console output illegible due to the continuous repetition of the same error.
10. 
Help and documentation: All three programs offer comprehensive documentation that describes, step by step, the tasks that can be performed and the functionality of each element. This documentation is available in both web and PDF versions. Detailed documentation is provided for the program itself and for its integration with UE. In the case of Edison and Edison OnCloud, three-to-five-minute mini-tutorials are also offered, which briefly but thoroughly explain how to carry out the different tasks which are possible with the software. The company also provides forums and a support system that responds within 48 h. For InfinitySet, support can even be provided through a remote connection to the system.

4. Discussion

Starting the discussion of the results with the InfinitySet software, a clear trend towards high complexity and low usability is observed in all three analyses performed. On one hand, the analysis of its functionalities and interface demonstrates that the multitude of options and the specificity of its application field greatly hinder the use of the tool. Its clearly professional focus means it is not an environment accessible to a standard user. This can be seen in the complexity of camera calibration, video configuration, or in the necessary steps to start a production from scratch. Likewise, the interface favors providing the user with a vast amount of options, where tabs, menus, and submenus follow one another so that an expert can always find the functionality sought in a relatively agile manner. This makes it efficient in professional and highly trained environments, but it directly contradicts Nielsen’s Principles 6 (recognition rather than recall) and 8 (aesthetic and minimalist design). Due to this, for a first-time user, this advantage for experts becomes a virtually insurmountable barrier. The combination of complex hardware and the software functionalities required for its management make professional virtual production software not easily accessible to an average user, even if that user has theoretical knowledge of such applications and has used simpler Brainstorm suite programs. It should be noted that this professional scenario requires using a complete studio, with multiple cameras and all the equipment inherent to such setups, making hardware management much more complex than in the other scenarios studied.
This complexity of use is also reflected in the SUS questionnaire, in which InfinitySet scored only 37.5 points, far from what the authors would consider satisfactory usability. Statements such as “I thought the system was easy to use” (average rating of 2) and “I felt very confident using the system” (average rating of 1.75) clearly show that the software, despite some prior training, is complex for a novice user. Additionally, the statement “I would imagine that most people would learn to use this system very quickly” receives a score of 1 (the minimum on the Likert scale) from all authors, indicating the perception that long and intensive theoretical and practical training is necessary to approach the software with any confidence. This is supported by the average scores of “I think that I would need the support of a technical person to be able to use this system” (4) and “I needed to learn a lot of things before I could get going with this system” (4.5), showing the need for a lengthy learning period to use the program confidently. These responses contrast with the average score of 4.25 for “I found the various functions in this system were well integrated,” indicating that the main problem is not the integration of features but their sheer number and inherent complexity. The purpose of including this software in the study was to determine if its use by non-expert users was viable and to provide a fair comparison with the other two alternatives, allowing a proper assessment of virtual production usability by non-expert users or those with more modest equipment, considering the features that must be forsaken to achieve simplification. The results seem to demonstrate that it is very difficult for a non-expert user to operate a system based on InfinitySet, both due to its cost and its complexity.
Regarding the Edison software, run on a local computer, the developers’ effort to simplify and limit functions to suit its target audience (mainly small production companies and educational centers) stands out. Edison greatly simplifies the preliminary steps required to start recording or broadcasting. It is intended for use with a webcam or equivalent connected via USB to the computer. This allows the program itself to identify connected devices and compatible resolutions without complex video input calibration or configuration. Additionally, by using a trackfree environment, the camera has no sensors as it always remains in the same position, and movements are simulated using virtual cameras. These configuration limitations are complemented by environmental ones. There is a background, up to six actors (compared to twelve in InfinitySet), a stand, and a presentation. The background must be static, and the user cannot interact with its elements unless it is a template prepared so that certain elements, like textures or text, can be modified. Two slides or 3D objects cannot be shown at the same time, as the presentation is unique and each slide can only display one 3D object. Animations of environmental objects and the events that can be triggered are completely limited due to the lack of access to specific 3D objects in the scene. In a streaming or small production environment, all these limitations are not a major hindrance, as most can be overcome with creativity if necessary. For example, a multi-camera setup can be somehow simulated through the use of different virtual camera presets and camera cuts. To keep the interface simple and accessible, many of the application’s features are blocked by default. To avoid overwhelming the user, as happens with InfinitySet, developers have hidden the more specific software features (considered less commonly used) in the Effects menu. This means that a novice user can encounter a very simple interface, with very limited options, and increase complexity and functionality as needed. All this makes starting a recording with a template background and an existing slideshow a very simple and fast process. This approach is directly related to a good implementation of Nielsen’s principles 3 (user control and freedom) and 7 (flexibility and efficiency in use).
As a result, Edison’s SUS questionnaire yields an excellent result, with an average score of 74.4 points. Improvements are observed in the usability assessment across all areas compared to InfinitySet. Especially notable are “I would imagine that most people would learn to use this system very quickly” (scored 3.25 versus 1 for InfinitySet), “I felt very confident using the system” (3.75 versus 1.75), or “I think that I would need the support of a technical person to be able to use this system” (1.5 versus 4). Users clearly demonstrate more confidence in the feasibility of autonomous software use and the need for simpler and more moderate training to reach the required skill level to approach the program for the first time. These results show that it is possible to achieve a virtual production environment accessible to new audiences, that does not require major professional infrastructure for operation, and still offers professional production capabilities (Edison, for example, supports Unreal Engine integration for backgrounds). Of course, this new production environment is more limited technically and technologically than the traditional professional alternative, but it can be an entry point for small companies that find the initial investment for a traditional virtual studio as well as the necessary equipment maintenance unfeasible.
Lastly, Edison OnCloud was analyzed. This alternative presents two use modes that were studied separately. The first consists of the direct execution and management of an Edison instance on Amazon Web Services. This scenario is called Edison OnCloud Advanced, as it offers the greatest configuration possibilities and the most flexible environment. In this test environment, Edison OnCloud is managed in exactly the same way as the desktop alternative. The interface, therefore, is the same, but it adds a layer of complexity arising from remote use. Within the browser window, there are tools to switch windows in the remote environment, change settings, upload files, copy from the local to the remote clipboard, and configure the camera and microphone. This makes certain tasks, such as dragging files or copying and pasting text, more complicated. Additionally, camera and microphone configuration depends on the browser, with better compatibility observed in Microsoft Edge and Google Chrome. To access the camera in Edison OnCloud, it must therefore first be configured in the browser, granting the page access to the specific device. During testing, some camera issues were observed, as it was not possible to configure the maximum resolution in Edison (Figure 3), resulting in a lower-quality image compared to desktop Edison (Figure 2). This may be because the software is still in its beta development phase prior to commercialization, and these issues should be resolved before its commercial use begins.
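As context for the camera behavior described above, the sketch below uses the standard browser MediaDevices API, which any in-browser capture path ultimately relies on; it is illustrative only and is not code from Edison OnCloud. It shows how a page requests a camera at an "ideal" resolution and how the browser may negotiate a lower one, which is consistent with the resolution limitation observed during testing.

```typescript
// Minimal sketch of browser-side camera acquisition using the standard
// MediaDevices API. Illustrative only; not taken from Edison OnCloud.

async function openCamera(): Promise<MediaStream> {
  // The user must grant camera/microphone permission for this origin;
  // the browser shows its own permission prompt on the first call.
  // "ideal" constraints are a request, not a guarantee: the browser may
  // negotiate a lower resolution than the one asked for, which would
  // explain a softer image in the cloud client than on the desktop.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 1920 }, height: { ideal: 1080 } },
    audio: true,
  });

  // Inspect what the browser actually delivered.
  const settings = stream.getVideoTracks()[0].getSettings();
  console.log(`Camera delivering ${settings.width}x${settings.height}`);
  return stream;
}

openCamera().catch((err) => console.error("Camera setup failed:", err));
```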
As it is the same software, the SUS results for Edison and Edison OnCloud Advanced are very similar, with the latter scoring 69.4, a drop of five points compared to the desktop version. This drop in the usability perceived by the authors is most likely due to the added complexity of the remote desktop layer, which sometimes makes certain settings difficult or breaks the usual workflow in this new paradigm. However, this small drop is offset by what this alternative offers: it allows the software to be used on devices with very limited graphics power, on different operating systems (the desktop version of Edison only runs on Windows), and without the need to invest in high-end hardware to run virtual scenarios. As long as subscription costs remain moderate, the possibility of generating virtual production content in the cloud offers a new scenario for small or occasional users, who pay by subscription and do not need to buy a license or specific hardware beyond a good-quality webcam.
The final scenario evaluated was Edison OnCloud Basic, which refers to using Edison OnCloud with templates predefined in the system and the control panel provided by the application. In this mode, the user never has to interact with the original Edison interface, as all settings are accessed through simple visual menus that allow the scenario, presentation files, and actor positioning to be predefined. Once the template is configured, the service can be launched remotely and controlled through a simple, minimalist interface that allows moving the camera among the template's predefined positions, controlling the slideshow, adjusting the chroma key and other actor parameters, showing or hiding the display, adjusting the background and stand if available, and managing the presentation. All this greatly limits system flexibility, since it forces the user to work with a predefined template with default display and camera positions. This extreme simplification of functionalities, combined with a design that is much more aesthetic and minimalist than the other options, relates directly to Nielsen's principles 5 (error prevention, as the user's flexibility is so limited that it is difficult to make mistakes), 6 (recognition rather than recall, since the interface and the few concepts it presents are much more understandable to a general audience), and 8 (aesthetic and minimalist design, as the aesthetics are modernized compared to the other three programs, making it more attractive). These limitations on functionality could, however, be considered contrary to principle 7 (flexibility and efficiency of use), since there is only one way to perform each task and no option to vary it through the program's interface. Even so, for an end user, this approach makes configuring the environment and starting to create template-based virtual production content trivial. The main potential here is that an expert user can create different templates for various programs or classes, and teachers or small production teams can then use the simplified interface to deliver their programs in real time. Combined with cloud execution, this enables any user to create content within a virtual environment.
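To make the template/control-panel split concrete, the following TypeScript sketch shows one hypothetical way such a separation could be modeled: an expert prepares the template, and the end user only triggers the small set of actions exposed by the panel. All names and fields are assumptions made for illustration and are not Edison OnCloud's actual configuration format.

```typescript
// Hypothetical sketch of the template/control-panel split described above.
// Names and fields are invented for this article.

interface Template {
  background: string;                 // pre-built virtual set
  cameraPositions: string[];          // predefined virtual camera presets
  presentationFile: string;           // slideshow prepared in advance
  actor: { chromaKeyTolerance: number; position: [number, number] };
}

type PanelAction =
  | { kind: "cutToCamera"; index: number }
  | { kind: "nextSlide" }
  | { kind: "previousSlide" }
  | { kind: "toggleDisplay"; visible: boolean };

const classTemplate: Template = {
  background: "lecture-hall-template",
  cameraPositions: ["Wide", "Presenter close-up", "Display focus"],
  presentationFile: "week3-slides.pdf",
  actor: { chromaKeyTolerance: 0.35, position: [0.5, 0.0] },
};

// The control panel reduces the whole production to a handful of actions,
// which is what makes mistakes hard to make (Nielsen's error prevention).
function handle(action: PanelAction, template: Template): void {
  switch (action.kind) {
    case "cutToCamera":
      console.log(`Cut to ${template.cameraPositions[action.index]}`);
      break;
    case "nextSlide":
    case "previousSlide":
      console.log(`${action.kind} in ${template.presentationFile}`);
      break;
    case "toggleDisplay":
      console.log(`Display ${action.visible ? "shown" : "hidden"}`);
      break;
  }
}

handle({ kind: "cutToCamera", index: 1 }, classTemplate);
handle({ kind: "nextSlide" }, classTemplate);
```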
This perception is supported by the SUS results, as the Edison OnCloud Basic option scores 91.9, indicating an excellent level of perceived usability. Notable scores include "I found the system unnecessarily complex" (average of 1), "I thought the system was easy to use" (average of 5), and "I think that I would need the support of a technical person to be able to use this system" (average of 1.25). This demonstrates that users found the software very easy to use and believe they could operate it autonomously without extensive training.
Regarding Nielsen's heuristics, it should be noted that all three systems derive from the same software, eStudio, and so many strengths and weaknesses are shared. Two issues stand out. The first relates to interface design. The design of buttons, checkboxes, menus, and similar controls has become dated and is unattractive to users. Moreover, the icons used are, in many cases, not self-explanatory, so it is necessary to remember the function of each button or, failing that, hover over it to read a description of its function. Additionally, the naming of some elements is unique to the Brainstorm Multimedia ecosystem, which provides unity across all the programs but also presents an adoption barrier for users entering this universe for the first time. All these facts contradict principles 2 (match between the system and the real world), 6 (recognition rather than recall), and 8 (aesthetic and minimalist design), while maintaining consistency (principle 4) across all the applications and within each program itself. The mode of operation, iconography, and processes are consistent across all the tested programs, making it easy to move from one to another and even allowing for a progressive learning process from the simplest solution to the most complex. In this case, such consistency causes the issues present in the original program to extend to the derived ones, as they are not resolved by the proposed limitation of functionalities. Despite this, the simplification of functionalities does have a positive effect regarding the use of more common terms such as stand, display, slide, or background (in line with Nielsen's second principle, match between the system and the real world). It also has the counterpart of reduced freedom and control for the user (principle 3) by limiting what the user can do and the different ways to achieve an effect.
The second problem is the frequent lack of information about the errors that occur. Users may often feel lost when they do not see the expected image in the viewport and are left waiting for an event that may never happen. For example, while loading a large 3D model, the interface tends to freeze with no indication of whether the model is loading correctly or an error has occurred (principle 1: visibility of system status). In some cases, errors are reported as error codes in the console that opens alongside the program, but this information is generally not useful to an average user, contradicting principle 9 (help users recognize, diagnose, and recover from errors).
Among the positive aspects, the freedom and flexibility provided by all three programs should be highlighted (principle 3): users can achieve an effect by following several different paths and can configure the program for complex environments with multiple cameras, sensors, and active options, as well as for simplified environments with one camera, no sensors, and a drastically reduced list of options, while still allowing for the recording of a real-time virtual production. Finally, the help features offered by the programs also deserve recognition (principle 10: help and documentation). All three offer comprehensive manuals detailing all functions, and the company provides forums and a user assistance system with responses within 48 h. For Edison and Edison OnCloud, short tutorials (about 3 to 5 min) explain how to perform the main tasks of a virtual production with these programs.
When implementing these technologies in a real environment, it is also necessary to take into account certain aspects that were not considered in this study because they are not directly related to usability. First, there are the scalability limitations that may arise from the use of the Edison OnCloud system. Virtual production, as previously noted, is a highly computationally demanding process, requiring very powerful equipment to ensure the real-time execution of programs. In the cloud, with a controlled number of users, this does not represent a problem, since computing capacity is more than sufficient. The problem could arise, however, if this technology were to become widely adopted and begin to be used, for example, across all schools, secondary institutions, and universities in Europe. In such a scenario, the number of users (and concurrent users) would scale to figures that would be extremely difficult for any service provider to manage. In parallel, this could also have a significant environmental impact, given the continuous execution of such computationally intensive tasks on servers (similar to what is currently occurring with artificial intelligence). Nevertheless, this technology is designed to serve as a support tool in content creation and not to become the cornerstone of the educational system; therefore, at least in the coming years, this appears to be a rather unlikely scenario.
A second aspect that could be problematic when bringing these technologies to a general audience is the generation of backgrounds and 3D elements. In this research, templates and preconfigured backgrounds were used, allowing the user to focus exclusively on configuring the environment and its audiovisual components and leaving the creation of these spaces to experts. Although the software permits the direct integration of 3D models in standard formats such as .obj or .fbx, the existence of predefined scenarios, tested by experts and specifically configured for the programs, greatly simplifies the user's task. The software includes a series of such scenarios, but it would be desirable to have a marketplace where professionals could upload their templates for users to purchase and use directly. If the technology becomes widespread, this is a strategy that development companies will likely adopt, as it eases the user experience while simultaneously generating revenue for them.
Similarly, the integration with Unreal Engine could be a problem for non-expert users, as it introduces an additional layer of complexity compared to the direct use of 3D models. This added complexity derives from the dependence on the Unreal Engine version compatible with the virtual production program in use, as well as the need to adopt appropriate strategies to trigger animations and effects inside Unreal Engine from the third-party software. In this case, the solution could be similar to the previous one: providing the user with a library of preconfigured scenarios featuring different possibilities and levels of integration with the platform. If such a library existed, the user's task would be limited to loading a scenario, configuring the audiovisual elements (cameras, actors, effects, etc.), and starting the recording or broadcast.
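As a rough, hedged illustration of what triggering an effect inside Unreal Engine from third-party software can involve, the sketch below calls Unreal Engine's Remote Control HTTP API from TypeScript. It assumes the Remote Control plugin is enabled and listening on its default port (30010 in recent versions); the object path and function name are placeholders, and this does not describe how InfinitySet or Edison actually communicate with Unreal Engine.

```typescript
// Hedged sketch: triggering a Blueprint function on an Unreal Engine actor
// from external software via the Remote Control HTTP API. Assumes the
// Remote Control (Web) plugin is enabled; the object path and function
// name below are placeholders invented for this example.

async function triggerUnrealEvent(): Promise<void> {
  const response = await fetch("http://localhost:30010/remote/object/call", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      objectPath:
        "/Game/VirtualSet/VirtualSetLevel.VirtualSetLevel:PersistentLevel.DoorActor_1",
      functionName: "OpenDoor", // a Blueprint event exposed on the actor
      parameters: {},
    }),
  });

  if (!response.ok) {
    throw new Error(`Remote Control call failed: ${response.status}`);
  }
  console.log("Animation triggered in Unreal Engine");
}

triggerUnrealEvent().catch((err) => console.error(err));
```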
Furthermore, when discussing the democratization of a technology, which implies moving it from a closed field (in our case, large audiovisual production companies) to a general audience, it is necessary to consider society as a whole. When analyzing the usability of a system or implementing improvements to make it more accessible, it is therefore crucial to take diverse users into account. Given that the software is highly specialized and is only beginning to move towards broader usage, we observe that none of the three analyzed programs has adopted strategies to include people with disabilities among its potential users. This must change in the future, as a crucial aspect of a program's usability is its compliance with accessibility standards that truly democratize the technology for the entire population.
Finally, it must be considered that working in the cloud requires even greater care in handling data privacy, particularly since one of the primary directions this technology is taking is toward education, a field that involves daily work with minors. Given that images of the individuals using the software are processed, data privacy must be guaranteed at all times by implementing the highest security standards. This responsibility must be assumed by the service providers managing both the remote cloud environment and the streaming of images sent to the cloud from the recording space, as well as those received back as the result of the virtual production.
In summary, the inherent complexity of virtual production makes a fully professional solution too costly and complex for non-expert users or for adoption by small studios and educational centers. However, by limiting certain functionalities and reducing the hardware requirements, this technology can be brought closer to the general public thanks to a drastic reduction in implementation and maintenance costs alongside improved system usability, making it feasible for non-experts to manage. At the same time, a series of concerns arise when considering this potential future and must be taken into account when addressing the democratization of the technology: accessibility, the management and availability of 3D materials, the scalability of cloud-based solutions, and the privacy of user data.

5. Conclusions

First, it is necessary to emphasize that this study employs three usability analysis techniques to compare a professional virtual production solution, one oriented towards educators, and the adaptation of the latter for cloud-based execution. To this end, a functionality and interface analysis was conducted, the SUS questionnaire was administered, and, finally, a comparative evaluation was performed following Jakob Nielsen's ten heuristic principles.
The tests have shown that using a professional virtual production system is not viable in a non-professional environment such as education, or in a small independent production company lacking the economic muscle to implement and maintain such a complex hardware–software infrastructure. The software has proven to be extremely complex, to the point where users do not feel capable of using it autonomously. Both price and complexity make the use of professional virtual production systems unfeasible in non-professional environments.
On the other hand, it has been shown that alternatives can be developed to simplify virtual production and make it accessible to new audiences. The approach explored in the case of Edison involves limiting certain advanced features and simplifying the hardware. As a result, the software is much more accessible to users, who feel capable of learning to use it relatively easily and perceive that they would be able to use it independently. This approach somewhat limits creative possibilities, since it does not offer real multi-camera production (camera movements are often simulated with virtual cameras instead). However, Edison still allows professional-quality virtual productions to be carried out in a simple, fast, and effective way, as evidenced by its integration with Unreal Engine.
Taking usability and democratization a step further, Edison OnCloud not only simplifies the software component (offering the same functionality as Edison and pushing the simplification of processes even further) but also decouples virtual production from virtually all hardware requirements, needing only a computer with internet access, a webcam, and a green-screen setup. This also means that the quality of the internet connection becomes crucial to the final experience of using the software. This solution is in its final steps before reaching the market, but it presents itself as a very interesting alternative for small businesses and for environments not typically associated with virtual production, such as education, scientific dissemination, or content creation for the internet, as it has proven to offer the best usability and accessibility for non-experts.
The simplification of functions has proven to be a good approach for reaching a broader group of users. However, excessive simplification can make the solution overly rigid, focusing it on a prescribed use and restricting one of the greatest advantages of virtual television studios: creative freedom and the pursuit of new audiovisual frontiers. It is therefore essential to maintain a balance between the simplification or limitation of functions and room for maneuver for non-expert users who wish to improve and use more advanced features.
In addition, there are hardware and technical configuration requirements from which the user should be shielded as much as possible. A novice user will find it very cumbersome to have to perform a calibration, or even a specific video configuration, every time a new camera is connected to the system. Whenever possible, this process should be transparent to the user.
It is imperative that the program’s interface be user-friendly and display just enough options to avoid overwhelming a novice user. The software should allow the user to start with a simple template-based configuration but have sufficient flexibility to enable more advanced and customized productions.
Therefore, it has been demonstrated that the democratization of virtual production is possible and is underway. It is probably a long road that is just beginning to be traveled today, but one that will undoubtedly continue, making it so that not only large production companies can enjoy the creative and technical possibilities offered by the integration of virtual and real worlds.
Looking to the future, within the framework of the projects that encompass this research, an improvement and extension of this study is proposed for the use of Edison and Edison OnCloud, incorporating university professors from different disciplines to conduct a comparative analysis of both environments for the creation of educational materials, going beyond the initial empirical approach carried out by the authors. InfinitySet is excluded due to the intrinsic complexity of its operation, its hardware requirements, and the hours of training necessary for an average user to start using the software independently. The research presented in this article has served to establish this cutoff and the impossibility of including this software in a broader study, as well as to suggest that the other two alternatives may indeed be accessible to a more general audience. The findings of this research will also be presented to Brainstorm for the improvement of their programs and are made available to the industry through the publication of this article, so that they can be taken into account in future developments aimed at continuing this path of technological democratization.

Author Contributions

Conceptualization, R.M.-F. and R.d.P.S.-F.; methodology, R.M.-F., R.d.P.S.-F. and F.F.-L.; software, R.M.-F.; validation, R.d.P.S.-F., F.F.-L. and E.C.-M.; formal analysis, R.M.-F., R.d.P.S.-F., F.F.-L. and E.C.-M.; investigation, R.M.-F., R.d.P.S.-F., F.F.-L. and E.C.-M.; resources, R.M.-F. and E.C.-M.; data curation, R.d.P.S.-F. and F.F.-L.; writing—original draft preparation, R.M.-F.; writing—review and editing, R.d.P.S.-F., F.F.-L. and E.C.-M.; visualization, R.M.-F. and R.d.P.S.-F.; supervision, R.M.-F. and E.C.-M.; project administration, E.C.-M.; funding acquisition, R.M.-F. and E.C.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the EUREKA member countries and the European Union Horizon 2020 Framework Programme through the project "CloudClass: Low Cost, Mobile, Cloud & Template Based Augmented Reality Studio for Education" (grant number Eurostars-E115354), and by the National Recovery, Transformation, and Resilience Plan of the State Research Agency and the Ministry of Science and Innovation of Spain, through Next Generation funding from the European Union (ref. EQC2021-007535-P).

Institutional Review Board Statement

Ethical review and approval were waived for this study because the SUS questionnaire does not involve personal data or any ethical implications, as it only gathers impressions of software use, and because the only participants were the authors themselves, who answered the questionnaire as part of the research methodology rather than as study subjects. Thus, in accordance with the University of Santiago de Compostela regulations, approval from the bioethics committee is not required for the present research.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All the data collected in a structured manner for this research is available in the article and in Appendix A.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
UE: Unreal Engine
RTX: Ray Tracing Texel Extreme
DLSS: Deep Learning Super Sampling
SUS: System Usability Scale

Appendix A

Table A1. Author 1 responses to the SUS questionnaire and total score.

| Author 1 | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| I think that I would like to use this system frequently | 4 | 5 | 5 | 5 |
| I found the system unnecessarily complex | 4 | 2 | 1 | 3 |
| I thought the system was easy to use | 1 | 4 | 5 | 3 |
| I think that I would need the support of a technical person to be able to use this system | 4 | 2 | 2 | 2 |
| I found the various functions in this system were well integrated | 4 | 5 | 5 | 4 |
| I thought there was too much inconsistency in this system | 2 | 2 | 1 | 2 |
| I would imagine that most people would learn to use this system very quickly | 1 | 3 | 5 | 2 |
| I found the system very cumbersome to use | 4 | 2 | 2 | 2 |
| I felt very confident using the system | 2 | 4 | 5 | 3 |
| I needed to learn a lot of things before I could get going with this system | 5 | 3 | 1 | 3 |
| Total SUS score | 32.5 | 75 | 95 | 62.5 |
Table A2. Author 2 responses to the SUS questionnaire and total score.

| Author 2 | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| I think that I would like to use this system frequently | 4 | 5 | 4 | 4 |
| I found the system unnecessarily complex | 2 | 1 | 1 | 1 |
| I thought the system was easy to use | 3 | 4 | 5 | 4 |
| I think that I would need the support of a technical person to be able to use this system | 3 | 1 | 1 | 1 |
| I found the various functions in this system were well integrated | 5 | 4 | 4 | 4 |
| I thought there was too much inconsistency in this system | 1 | 1 | 1 | 1 |
| I would imagine that most people would learn to use this system very quickly | 1 | 3 | 5 | 3 |
| I found the system very cumbersome to use | 3 | 1 | 1 | 1 |
| I felt very confident using the system | 2 | 4 | 5 | 4 |
| I needed to learn a lot of things before I could get going with this system | 3 | 1 | 1 | 1 |
| Total SUS score | 57.5 | 87.5 | 95 | 85 |
Table A3. Author 3 responses to the SUS questionnaire and total score.

| Author 3 | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| I think that I would like to use this system frequently | 3 | 3 | 3 | 3 |
| I found the system unnecessarily complex | 3 | 2 | 1 | 2 |
| I thought the system was easy to use | 2 | 3 | 5 | 3 |
| I think that I would need the support of a technical person to be able to use this system | 4 | 1 | 1 | 1 |
| I found the various functions in this system were well integrated | 4 | 4 | 4 | 4 |
| I thought there was too much inconsistency in this system | 3 | 2 | 2 | 2 |
| I would imagine that most people would learn to use this system very quickly | 1 | 3 | 5 | 3 |
| I found the system very cumbersome to use | 3 | 2 | 2 | 2 |
| I felt very confident using the system | 1 | 2 | 3 | 2 |
| I needed to learn a lot of things before I could get going with this system | 5 | 4 | 2 | 4 |
| Total SUS score | 32.5 | 60 | 80 | 60 |
Table A4. Author 4 responses to the SUS questionnaire and total score.

| Author 4 | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| I think that I would like to use this system frequently | 4 | 5 | 5 | 5 |
| I found the system unnecessarily complex | 5 | 3 | 1 | 4 |
| I thought the system was easy to use | 2 | 3 | 5 | 3 |
| I think that I would need the support of a technical person to be able to use this system | 5 | 2 | 1 | 2 |
| I found the various functions in this system were well integrated | 4 | 5 | 5 | 5 |
| I thought there was too much inconsistency in this system | 4 | 2 | 1 | 3 |
| I would imagine that most people would learn to use this system very quickly | 1 | 4 | 5 | 4 |
| I found the system very cumbersome to use | 5 | 2 | 2 | 3 |
| I felt very confident using the system | 2 | 5 | 5 | 5 |
| I needed to learn a lot of things before I could get going with this system | 5 | 3 | 1 | 2 |
| Total SUS score | 22.5 | 75 | 97.5 | 70 |

References

  1. Silva Jasaui, D.; Martí-Testón, A.; Muñoz, A.; Moriniello, F.; Solanes, J.E.; Gracia, L. Virtual Production: Real-Time Rendering Pipelines for Indie Studios and the Potential in Different Scenarios. Appl. Sci. 2024, 14, 2530. [Google Scholar] [CrossRef]
  2. Gibbs, S.; Arapis, C.; Breiteneder, C.; Lalioti, V.; Mostafawy, S.; Speier, J. Virtual studios: An overview. IEEE Multimed. 1998, 5, 18–35. [Google Scholar] [CrossRef]
  3. Mitchell, S.; Perry, C.; Redmond, S.; Torre, L. The Screens of Virtual Production: What Is Real? Taylor and Francis: Abingdon, UK; pp. 1–310.
  4. Swords, J.; Willment, N. ‘It used to be fix-it in post production! now it’s fix-it in pre-production’: How virtual production is changing production networks in film and television. Creat. Ind. J. 2024, 1–17. [Google Scholar] [CrossRef]
  5. Livingstone, T. Game engines: Optimising VFX, reshaping visual media. Necsus Eur. J. Media Stud. 2024, 13, 180–201. [Google Scholar] [CrossRef]
  6. Chanpum, P. Virtual Production: Interactive and real-time technology for filmmakers. Humanit. Arts Soc. Sci. Stud. 2023, 23, 9–17. [Google Scholar] [CrossRef]
  7. Swords, J.; Willment, N. The emergence of virtual production—A research agenda. Convergence 2024, 30, 1557–1574. [Google Scholar] [CrossRef]
  8. Comparison of the Characteristics of Green Screen and LED Wall in Virtual Production System. Int. J. Adv. Smart Converg. 2022, 11, 64–70. [CrossRef]
  9. An, D. Produção virtual orientada para a tecnologia: As vantagens e as novas aplicações dos motores de jogo na indústria cinematográfica. Rev. Famecos 2022, 29, e43370. [Google Scholar] [CrossRef]
  10. Tan, T.W. Mastering Lumen Global Illumination in Unreal Engine 5. In Game Development with Unreal Engine 5 Volume 1: Design Phase; Apress: Berkeley, CA, USA, 2024; pp. 223–275. [Google Scholar] [CrossRef]
  11. Lu, W. Unreal engine nanite foliage shadow imposter. In Proceedings of the Second International Conference on Applied Statistics, Computational Mathematics, and Software Engineering (ASCMSE 2023), Kaifeng, China, 26–28 May 2023; Zhang, Y., Batista, P., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2023; Volume 12784, p. 127842E. [Google Scholar] [CrossRef]
  12. Oakden, T.; Kavakli, M. Graphics Processing in Virtual Production. In Proceedings of the 2022 14th International Conference on Computer and Automation Engineering (ICCAE), Brisbane, Australia, 25–27 March 2022; pp. 61–64. [Google Scholar] [CrossRef]
  13. Sen, S.; Bhushan, B. Image Quality Comparison Between Nvidia’s Deep Learning Super Sampling and AMD’s FidelityFX Super Resolution. In Proceedings of the 2024 International BIT Conference (BITCON), Jharkhand, India, 7–8 December 2024; pp. 1–6. [Google Scholar] [CrossRef]
  14. Jiang, J.; Lin, J.; Su, Y.; Fang, L.; Ye, L. Multiple HD Screen-Based Virtual Studio System with Learned Mask-Free Portrait Harmonization. Wirel. Commun. Mob. Comput. 2022, 2022, 6014795. [Google Scholar] [CrossRef]
  15. Cremona, C.; Kavakli, M. The Evolution of the Virtual Production Studio as a Game Changer in Filmmaking. In Creating Digitally: Shifting Boundaries: Arts and Technologies—Contemporary Applications and Concepts; Brooks, A.L., Ed.; Springer International Publishing: Cham, Switzerland, 2023; pp. 403–429. [Google Scholar] [CrossRef]
  16. Nebeling, M.; Rajaram, S.; Wu, L.; Cheng, Y.; Herskovitz, J. XRStudio: A Virtual Production and Live Streaming System for Immersive Instructional Experiences. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2021. CHI 21. [Google Scholar] [CrossRef]
  17. Brooke, J. SUS—A quick and dirty usability scale. In Usability Evaluation in Industry; Redhatch Consulting Ltd.: Earley, UK, 1996; Volume 189, pp. 4–7. [Google Scholar]
  18. Nielsen, J. Enhancing the explanatory power of usability heuristics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 24–28 April 1994; CHI 94. pp. 152–158. [Google Scholar] [CrossRef]
  19. Kim, S.L.; Suk, H.J.; Kang, J.H.; Jung, J.M.; Laine, T.H.; Westlin, J. Using Unity 3D to facilitate mobile augmented reality game development. In Proceedings of the 2014 IEEE World Forum on Internet of Things, WF-IoT 2014, Seoul, Republic of Korea, 6–8 March 2014; pp. 21–26. [Google Scholar] [CrossRef]
  20. Mercan, S.; Durdu, P.O. Evaluating the usability of unity game engine from developers’ perspective. In Proceedings of the 11th IEEE International Conference on Application of Information and Communication Technologies, Moscow, Russia, 20–22 September 2017. AICT 2017-Proceedings. [Google Scholar] [CrossRef]
  21. Sapio, F.; Ratini, R. Developing and Testing a New Reinforcement Learning Toolkit with Unreal Engine. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022; Volume 13336, pp. 317–334. [Google Scholar] [CrossRef]
  22. Bédard, P. Virtual Production and the Transformation of Cameras Mechanical, Virtual, and Actual. Animation 2022, 17, 226–243. [Google Scholar] [CrossRef]
Figure 1. Screenshot of the InfinitySet software during the recording of a virtual production.
Figure 2. Screenshot of the Edison software during the recording of a virtual production.
Figure 3. Screenshot of the Edison OnCloud software during the recording of a virtual production.
Figure 4. Screenshot of the Edison OnCloud control panel during the recording of a virtual production.
Figure 5. System Usability Scale results.
Figure 6. Example of icons used in InfinitySet which are also used in Edison.
Table 1. Software functionality comparison.

| | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| Multicamera | Yes | No | No | No |
| VR mode | Yes | No | No | No |
| AR mode | Yes | No | No | No |
| Freetrack | Yes | Yes | Yes | Yes |
| Virtual Focus | Yes | No | No | No |
| Editable background | Yes | Partially (if developed with Aston) | Partially (if developed with Aston) | Partially (if developed with Aston) |
| Max. number of actors | 12 | 6 | 6 | 6 |
| Max. number of camera presets | 512 | 8 | 8 | 8 |
| Max. number of actions | 512 | 512 | 512 | 512 |
| Max. number of playlists | 12 | 12 | 12 | 12 |
| Max. number of inputs | 24 | 6 | 6 | 6 |
| Max. number of Xpoints | 12 | 6 | 6 | 6 |
Table 2. Global SUS results presenting the average score and standard deviation (average ± standard deviation) for each item and system.

| | InfinitySet | Edison | Edison OnCloud Basic | Edison OnCloud Advanced |
|---|---|---|---|---|
| I think that I would like to use this system frequently | 3.75 ± 0.50 | 4.50 ± 1.00 | 4.25 ± 0.96 | 4.25 ± 0.96 |
| I found the system unnecessarily complex | 3.50 ± 1.29 | 2.00 ± 0.82 | 1.00 ± 0.00 | 2.50 ± 1.29 |
| I thought the system was easy to use | 2.00 ± 0.82 | 3.50 ± 0.58 | 5.00 ± 0.00 | 3.25 ± 0.50 |
| I think that I would need the support of a technical person to be able to use this system | 4.00 ± 0.82 | 1.50 ± 0.58 | 1.25 ± 0.50 | 1.50 ± 0.58 |
| I found the various functions in this system were well integrated | 4.25 ± 0.50 | 4.50 ± 0.58 | 4.50 ± 0.58 | 4.25 ± 0.50 |
| I thought there was too much inconsistency in this system | 2.50 ± 1.29 | 1.75 ± 0.50 | 1.25 ± 0.50 | 2.00 ± 0.82 |
| I would imagine that most people would learn to use this system very quickly | 1.00 ± 0.00 | 3.25 ± 0.50 | 5.00 ± 0.00 | 3.00 ± 0.82 |
| I found the system very cumbersome to use | 3.75 ± 0.96 | 1.75 ± 0.50 | 1.75 ± 0.50 | 2.00 ± 0.82 |
| I felt very confident using the system | 1.75 ± 0.50 | 3.75 ± 1.26 | 4.50 ± 1.00 | 3.50 ± 1.29 |
| I needed to learn a lot of things before I could get going with this system | 4.50 ± 1.00 | 2.75 ± 1.26 | 1.25 ± 0.50 | 2.50 ± 1.29 |
| Total SUS score | 36.25 | 74.4 | 91.9 | 69.4 |
