A Study on Design Requirement Development and Satisfaction for Future Virtual World Systems

Abstract: Virtual worlds have become global platforms connecting millions of people and containing various technologies. For example, No Man’s Sky (nomanssky.com), a cross-platform virtual world, can dynamically and automatically generate content as user adventures progress. AltspaceVR (altvr.com) is a social virtual reality platform supporting motion capture through Microsoft’s Kinect, eye tracking, and mixed reality extension. The changes in industrial investment, market revenue, user population, and consumption drive the evolution of virtual-world-related technologies (e.g., computing infrastructure and interaction devices), which turn into new design requirements and thus result in the requirement satisfaction problem in virtual world system architecture design. In this paper, we first study the new or evolving features of virtual worlds and emerging requirements of system development through market/industry trend analysis, including infrastructure mobility, content diversity, function interconnectivity, immersive environment, and intelligent agents. Based on the trend analysis, we propose a new design requirement space. We then discuss the requirement satisfaction of existing system architectures and highlight their limitations through a literature review. The feature-based requirement satisfaction comparison of existing system architectures sheds some light on future virtual world system development to match the changing trends of the user market. At the end of this study, a new architecture from ongoing research, called Virtual Net, is discussed, which can provide higher resource sufficiency, computing reliability, content persistency, and service credibility.


Introduction
Virtual worlds, including game worlds and social worlds, are synthetic, persistent, and immersive environments, facilitated by computer networks connecting multiple users, in the form of avatars, who interact in (near) real-time [1,2]. Virtual worlds have been evolving from text-based games (e.g., early multiuser dungeon games which only support text-based user interaction) to highly immersive three-dimensional graphic environments [3]. In the evolution of their forms and contents, virtual worlds have developed some important features, including interactivity, immersion, presence, persistency, embodiment, shared temporality, and spatiality [2]. Supported by these features, the growth of virtual worlds has brought the participation of millions of people and innovations in many fields, including education [4], medicine [5], tourism [6], commerce [7], and entertainment [8]. To understand these trends, we searched Statista (a statistics portal aggregating data from more than 22,500 trusted partners, such as Experian Simmons and Euromonitor; it maintains source transparency and adherence to academic standards: https://www.statista.com/sources/1/) with the keyword "Virtual World", which returned 821 results, including statistics, forecasts, and studies (the search result can be downloaded from https://sunniel.github.io/VirtualNet/post/trendanalysis/Statista.xlsx). The search results were then examined by relevance and filtered by the year range from 2015 to 2035. Here, we manually identified the relevance by inspecting the titles of results. For example, "Estimated virtual reality (VR)/augmented reality (AR) headset unit shipments by type" is relevant to virtual world system design, while "Top earners in virtual worlds of Warcraft" is not. This reduced the results to 538. We then tagged them with relevant topics and showed them in a bar chart in Figure 1.
With the obtained knowledge, we further looked into each scope and investigated the trends of related technologies, including mobile computing, social networks, virtual/augmented reality, virtual world applications, internet of things (IoT), wearable devices, game intelligence, multisensory multimedia, and computer graphics. The trends were analyzed along five directions: mobility, diversity, interconnectivity, immersion, and intelligence. Instead of exploring the details of specific technologies, the trends are mainly studied from the perspective of revenue, investment, population, and consumption (the representative forms of market/industrial evidence on Statista, though not all are available in each direction) to show industrial, market, and user acceptance and adoption, as virtual world applications have to be built on top of these technologies. They are elaborated in the subsections below, followed by the implications for future virtual world design.

Mobility
The wide usage of smart devices (e.g., smartphones and tablets) and the pervasiveness of mobile networks (e.g., the 5th generation cellular mobile network) promote the concept of accessing virtual worlds everywhere and facilitate the development of pervasive gaming. The market/industrial evidence in mobility is illustrated in Figure 2, which shows the trend of the mobile virtual world since 2014. By the end of 2017, the number of mobile gamers was more than 1.4 times that of 2014 (Figure 2a, data source: [18]). Figure 2b (data source: [19]) shows that worldwide mobile social gaming revenue increased by 2.82 billion dollars from 2010 to 2014. Interestingly, the trend of the gamer population by device shows that smartphones gain larger traction than tablets. It is likely that with the increase of smartphone size and capacity, the distinction between smartphones and tablets is diminishing, which reduces the demand for tablets. Moreover, investigations in 2017 and 2018 show that smartphone-based applications gain the largest market share both in video games (63%) [20] and VR/AR (77%) [21], compared with personal computer (PC), laptop, game console, and other standalone game device counterparts. In summary, the trends in mobility show that the user population and consumption on mobile virtual world platforms are growing.

Diversity
Virtual worlds have been designed to serve different purposes of use, as shown in Figure 3a (data source: [22]). Along this direction, the market/industrial evidence shows an increasing range of investment, as illustrated in Figure 3b. A shift of investment from traditional sectors (gaming, marketing, and military simulation) to new sectors (retail and manufacturing) can also be observed in Figure 3b (data source: [23]). To better serve diverse business purposes, some virtual worlds now allow users to create their own virtual objects with primitive building blocks to facilitate user innovation and collaboration [24], which has already been implemented in Second Life (secondlife.com) and Minecraft (minecraft.net). There are some typical examples of content creation and diverse usage of virtual worlds. In education, for example, immersive learning environments (i.e., virtual learning environments which employ VR, AR, or mixed reality (MR) to enhance community, presence, and authentic situated learning experiences [25]) provide simulated and controlled situations to improve students' performance and motivation in studies [25]. Additionally, dynamic content generation can decrease the development effort for teachers to customize learning games [26]. In medicine, the realistic models created in virtual environments can facilitate surgical training [5]. In tourism, virtual world technologies can convert heritage relics to digital assets, which can avoid erosion in long-term preservation [27]. In e-commerce, user involvement in product cocreation [28] can improve product satisfaction.

Interconnectivity
Virtual worlds can connect different people and devices to support various interactions and communication. Multiple access devices lead to various user data inputs to virtual worlds. One emerging category is wearable devices. Figure 4a,b (data source: [29,30]) shows that the market value of wearable devices is growing in all categories, from both revenue and consumption perspectives. Wearable devices can collect user data by tracing various user interaction patterns. For instance, Cyberith Virtualizer (cyberith.com) can track user posture in walking, running, standing, crouching, and sitting. Data are also collected from IoT devices, creating a mirrored world that telepresents real-world data in a virtual environment. For example, Eye Create Worlds [31] visualizes, in a virtual world, the data from sensors placed over a city to monitor and optimize the performance of a rail transportation network. Besides connecting to devices, people in virtual worlds are increasingly interconnected. Figure 4c (data source: [32]) shows that the revenue of social gaming almost quadrupled from 2010 to 2014, and Figure 4d (data source: [33]) shows that the market value of social worlds in 2020 is expected to be twice the value in 2011. Moreover, a game industry survey in 2018 shows that 54% of gamers feel video games help them connect with friends and spend time with family [34]. In summary, the above observations show the interconnectivity trend from both hardware peripherals and social connections, leading to the growth of in-world user data collection.

Immersion
The user's feeling of immersion is an important trend to study, for it generates a sense of presence [13]. By observing the evolution of immersive technologies and cross-sensory technologies, two trends can be found. First, the market evidence in consumption (Figure 5a, data source: [35]) and revenue (Figure 5b, data source: [36]) shows the increasing trend of immersive hardware adoption, as VR, AR, and MR are three typical immersive peripheral technologies. Second, the recent innovation in multiple sensorial media (mulsemedia) [37] provides users with sensorial experiences beyond visual and audio delight, leading to a deeper experience of immersion. Figure 5c (data source: Supplementary Materials) shows the trend of new mulsemedia device releases in the haptic rendering, olfactory rendering, and gustatory rendering markets. Overall, the figures for all device categories are growing in the long run, especially since 2012. Haptic devices gain the largest traction in growth, since they cover a much wider market with many types, including haptic gloves, chairs, vests, etc., which is consistent with the haptic device market value growth (Figure 5d, data source: [38]). Some VR headsets integrating multisensory functions can provide multiple sensorial experiences (besides audio and video). For example, OhRoma (camsoda.com/products/ohroma/) provides different smells during video or audio playing. Feelreal (feelreal.com) can simulate both smell and wind by changing airflow. These trends show the increasing adoption of immersive rendering techniques, which enriches the user experience.

Intelligence
Game intelligence (i.e., computational intelligence in games) applies a set of computational approaches to address complex problems in games [39]. Many AI techniques have been widely applied in game worlds (called game AI) for creating a sense of challenge, enriching human-computer interaction, providing better content, and bringing additional experience to users [40]. A previous study (found in the Supplementary Materials) has rendered the milestones of game AI innovation in nonplayer character (NPC) behavior, procedural content generation (PCG), and narrativity. The study shows that game AI in NPC behavior can endow NPCs with believable or human-like characteristics in interaction. Game AI in PCG can automatically generate a complex world system, many 3D models, and a full scenario with random quests. It also shows that game AI can even develop game plots from interactive storytelling to augment users' dynamic and immersive gaming experience. To further improve game AI, machine learning techniques (e.g., reinforcement learning) have been applied to games, in addition to planning and tree search. They have become either the dominant or the secondary method in NPC behavior learning, player modeling, PCG, computational narrative, believable agents, AI-assisted game design, and general game AI [40]. Summerville et al. [41] surveyed the cutting-edge machine learning approaches applied in PCG. Some artificial intelligence techniques, such as genetic algorithms, have also been applied in story generation [42].
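As a toy illustration of the PCG techniques surveyed above, the classic midpoint-displacement algorithm can generate a random 1D terrain profile in a few lines. This is a minimal sketch of the general idea; the function name and parameters are our own, not drawn from any cited system:

```python
import random

def midpoint_displacement(n_levels: int, roughness: float = 0.5, seed: int = 42) -> list[float]:
    """Generate a 1D heightmap with 2**n_levels + 1 points by midpoint displacement."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]          # endpoints of the terrain
    spread = 1.0                  # maximum random offset at the current level
    for _ in range(n_levels):
        # Insert a displaced midpoint between every adjacent pair of points.
        refined = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2 + rng.uniform(-spread, spread)
            refined.extend([left, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= roughness       # smaller offsets at finer scales
    return heights

terrain = midpoint_displacement(4)
print(len(terrain))  # 17 points: 2**4 + 1
```

Seeding the generator makes the content reproducible, which is why the same technique can regenerate an identical world on every client from a shared seed rather than shipping the full terrain data.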
Outside of games, AI can simulate virtual humans, virtual nonhumans, crowds, and other representations in virtual environments for scientific simulation or other nongaming purposes [43]. Human-level AI agents are important ingredients in social virtual worlds with autonomous functions (e.g., navigation guidance) [44]. An artificial life environment can be created through individual or group learning to generate virtual creatures (e.g., dolphins) with autonomous behaviors to gain more knowledge of natural life [45]. Crowd agents in an immersive environment can be applied to simulate emergency responsiveness [46]. Realistic virtual terrains can be generated with conditional generative adversarial networks [47], which can facilitate virtual world authoring at low cost. Moreover, AI has also been used for human-machine interaction, such as voice control in mixed reality [48]. Through natural language processing, a new environment for speaking and listening skill improvement can be created [49]. The above trends show the increasing adoption and improvement of intelligent techniques in virtual worlds.

Implications
The changes in investment, value, population, and consumption show the industrial/market adoption of each virtual-world-related technology. We believe user and market expectations towards future virtual worlds will change along with their underlying technologies, resulting in new system design requirements. Thus, the implications of these trends for virtual world system development are studied, mainly from two perspectives: computing and content.
From the computing perspective, the mobility trend implies the capability limitation of client-end devices for running virtual world applications. Compared with desktop computers and game consoles, mobile devices are not configured with equally high computing power (e.g., graphic processing power), storage, and bandwidth resources. In terms of floating-point arithmetic computation, for example, the performance of Nintendo Switch (2017, portable mode) and Samsung Galaxy S10 (2019) is 157 GFLOPS and 899 GFLOPS, respectively, while Sony PlayStation 4 already reached 4197.8 GFLOPS in 2014. On the other hand, the immersion and intelligence trends imply the increase of rendering complexity and computing tasks to run a high-fidelity virtual world, such as World of Warcraft, with massive copresent users and software agents to promote users' sense of social presence [50] and satisfaction [51]. The conflict between the limited client-end resources and the rapid increase of computational complexity implies that mobile clients may have limited capability to provide large-scale immersive rendering and intelligent computing functions [52]. Thus, computing resource sufficiency is the first emerging requirement.
Moreover, a single mobile device is more subject to failure than a desktop computer for two reasons [52,53]. First, a connection to remote services through a wireless network is less stable than through a wired network, as the former relies on signal quality. Second, mobile devices have limited battery life, and batteries deplete quickly for resource-hungry applications such as virtual worlds, leading to device failure. A poorly connected or highly congested network will even cause extra power consumption, due to increased communication overhead [54]. Since unstable connections will corrupt users' gaming experience, and device failure may cause game state loss, computing reliability is another new requirement. The above issues bring mobile cloud gaming [55] to virtual world development.
From the content perspective of virtual world development, diversity implies that more content will be generated by users, including user-created virtual objects and virtual wealth generated in gaming [56]. Meanwhile, interconnectivity implies that more user information will be stored in virtual worlds, including the user data collected from multiple peripheral (i.e., wearable and IoT) devices and the relationships built from users' online social networks. For simplicity, the content, including user-generated objects, virtual wealth, user data, and social connections, which virtually belongs to a user, is uniformly called user content. The growth and importance of user content raise two questions. First, intuitively, will users suffer content loss in the case of virtual world application failures, such as infrastructure failure or service discontinuation? To avoid content loss, this question brings the persistency requirement to system design.
Besides content storage, users may also care more than ever about how a system will treat their content. With the increase of personal data and social connections, data security is gradually becoming a great concern to users, especially the security of privacy-sensitive information, including personal identification, personal (i.e., religion, race, health, sexual orientation, financial, biometric, etc.) information, collected data, and device-traceable information. With the increase of virtual properties (i.e., objects and wealth), legal protection has become another concern, especially for the properties having real-world economic value (e.g., the Linden Dollar in Second Life). In the case of service termination, for example, users may worry about the access to or even the economic benefit from them [57]. Moreover, users may share their content with others, leading to another level of security and legal concern. All the above concerns call for a trusted environment of content storage. We call it the credibility requirement.
Bringing together the above implications, Figure 6 summarizes the relations between the emerging technology trends and requirements of virtual world development, including sufficiency, reliability, persistency, and credibility. They will be elaborated in the next sections.

New Requirement Space Development
The trend analysis suggests new requirements of virtual world system design, which are complementary to the existing requirements, namely consistency, responsiveness, scalability, and security. We have observed that responsiveness is a common criterion in sufficiency, persistency, consistency, and scalability problems. Hence, it is more appropriate to express it as a design criterion towards other requirements. Figure 7 shows the taxonomy of design requirements and issues. The design requirement space can be regarded as a guideline to the specific design requirements that are essential for virtual world system development. For example, reliability satisfaction requires a multiuser virtual world system to address the requirements of concurrent access, real-time user interaction, etc. While we believe that specific design requirements are crucial for system development, we focus on higher-level design requirements in this paper. Thus, an in-depth discussion of the transition from general requirements to specific requirements is left to future work.


Brief Introduction on Existing Requirements
Before elaborating on the new requirements, the existing requirements are briefly introduced for completeness. The consistency models originate from distributed computing [58]. For virtual world systems, consistency can be studied in two domains [59]. In the discrete domain, it requires that two users can see the same set of events handled in causal order, called causal preservation. In the continuous domain, object state also changes over time, following some internal rules, e.g., physical laws. Consistency then requires that two users can see the same state changes if they have received the same set of events, called perceptive consistency [60]. In practice, consistency error can be compensated by latency error to achieve overall fairness, which is called playability [61].
Scalability is "a desirable ... ability of a system to accommodate an increasing number of elements or objects, to process growing volumes of work gracefully" [62]. In virtual worlds, scalability issues can be classified into world scalability and hotspot scalability. The challenge of world scalability comes from two sources [16]: large-scale real-time simulations and simultaneous visualization for many users, caused by user population increase. Hotspot scalability comes from another source: multiple heterogeneous actors with different operating characteristics. When many users gather at a small place, a hotspot is formed [63]. The scalability issues aim at minimizing system performance reduction with the increase of users and user interactions, either globally or regionally.
Virtual world security issues can be classified as Internet security, social misconduct, and cheating. First, as virtual worlds are Internet applications, they inherit general Internet security concerns, including authentication and identity theft, payment and transaction integrity, malware and computer viruses, etc. [64]. From a social perspective, misconduct may happen in virtual worlds, including vandalism, harassment and stalking, defamation and disparagement, etc. [64]. They can only be prevented by rules and moral codes. Finally, cheating is a technical issue specific to game worlds, which is ubiquitous in gaming communities [65], caused by the combination of malicious attempts and system faults. The authors of [66] classified cheating into three types of security breach and further divided them into 13 categories. They concluded that cheating in virtual worlds needs either to be prevented or detected to provide a fair playground.
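The causal-preservation notion above is commonly implemented with vector clocks. The following minimal sketch is our own illustration, not drawn from the cited works; it shows how two replicas can decide whether one event causally precedes another:

```python
def happened_before(a: dict, b: dict) -> bool:
    """True if the event with clock `a` causally precedes the event with clock `b`."""
    leq = all(a.get(k, 0) <= b.get(k, 0) for k in set(a) | set(b))
    return leq and a != b

class Replica:
    """Minimal vector-clock bookkeeping for one virtual-world node."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.clock: dict[str, int] = {}

    def local_event(self) -> dict:
        # Tick our own component and return a snapshot to stamp the event.
        self.clock[self.node_id] = self.clock.get(self.node_id, 0) + 1
        return dict(self.clock)

    def receive(self, remote_clock: dict) -> None:
        # Merge: element-wise maximum of both clocks, then tick our own component.
        for k, v in remote_clock.items():
            self.clock[k] = max(self.clock.get(k, 0), v)
        self.clock[self.node_id] = self.clock.get(self.node_id, 0) + 1

a, b = Replica("A"), Replica("B")
e1 = a.local_event()            # A acts first ...
b.receive(e1)
e2 = b.local_event()            # ... then B acts after seeing A's event
print(happened_before(e1, e2))  # True: e1 causally precedes e2
```

Events whose clocks are incomparable in both directions are concurrent; a causal-order event handler delivers an event only after all events that happened before it.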

Sufficiency
Sufficiency is the first emerging requirement such that users can access highly immersive environments from any device. It has become prominent with the growth of mobile devices and computational complexity. Partially/completely offloading computing tasks to remote sites to meet the resource requirement is a common choice in mobile cloud computing [52,55]. Based on computation offloading, two concerns can be derived: energy efficiency and quality of service (QoS)/quality of experience (QoE). Energy efficiency concerns the device energy consumption for local computing or communication to remote sites [67]. The authors of [68] provided an energy consumption model for smartphone task offloading in WLAN and mobile networks (3G/4G). QoS/QoE concerns the cost and benefit of offloading [69]. The cost includes communication overhead and response latency. The benefit includes graphic (i.e., image or video) quality improvement, quantified by bitrate or frame rate. Moreover, both energy efficiency and QoS/QoE are sensitive to network conditions. If network availability is low, code offloading will increase both energy consumption and latency [54].
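The cost/benefit trade-off above can be captured by a simplified offloading decision in the spirit of the model in [68]. This is a sketch under our own simplifying assumptions (constant radio power during transfer, and a device that idles while the server computes); the parameter names are illustrative:

```python
def should_offload(cycles: float, f_local_hz: float, p_compute_w: float,
                   data_bits: float, bandwidth_bps: float, p_radio_w: float,
                   f_remote_hz: float) -> bool:
    """Offload only if it saves both device energy and response latency."""
    # Local execution: time and energy spent computing on the device.
    t_local = cycles / f_local_hz
    e_local = p_compute_w * t_local
    # Offloaded execution: transfer time (radio on) plus remote compute time.
    t_tx = data_bits / bandwidth_bps
    t_offload = t_tx + cycles / f_remote_hz
    e_offload = p_radio_w * t_tx   # assumption: device idles during remote compute
    return e_offload < e_local and t_offload < t_local

# A heavy rendering task over a fast link: offloading wins on both metrics.
print(should_offload(cycles=5e9, f_local_hz=1e9, p_compute_w=2.0,
                     data_bits=8e6, bandwidth_bps=50e6, p_radio_w=1.0,
                     f_remote_hz=20e9))  # True
```

The same inequality flips under poor network availability: as `bandwidth_bps` shrinks, both transfer time and radio energy grow, reproducing the observation from [54] that offloading over a weak link increases energy consumption and latency.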

Reliability
Reliability also comes from the mobility trend, as both mobile devices and mobile networks are subject to failure. Redundancy can add reliability to a system by replicating data and programs to multiple sites so that a device can both tolerate connection failure (i.e., failure tolerance) and recover application state after client failure (i.e., failure recovery) [70]. Redundancy brings the consistency problem (called replica consistency to distinguish it from the consistency requirement), since all replicas must maintain the same application state for state integrity. Consistency is a hard problem in distributed computing, due to the notorious consistency, availability, and partition tolerance (CAP) theorem [71]. It states that if a design cannot tolerate network partitions, availability and consistency cannot be simultaneously achieved, because a network failure may prevent a replica from being synchronized [72]. On the other hand, if a design can solve the network partition problem, consistency and responsiveness become conflicting requirements, since consistency control protocols normally add additional communication steps [73]. Thus, the reliability property seeks a computing redundancy design that can achieve high availability, consistency, and responsiveness while avoiding or tolerating network partitions.
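The tension between consistency and responsiveness can be made concrete with a minimal quorum-replication sketch: with N replicas, requiring W write acknowledgments and R read responses such that R + W > N guarantees that every read quorum intersects every write quorum, at the price of extra communication. This is a generic textbook technique, not a design from the cited works; the deterministic choice of the "first" W and R replicas is a simplification for the example.

```python
# Minimal quorum-replicated key-value store (illustrative only).
class QuorumStore:
    def __init__(self, n_replicas, w, r):
        assert w + r > n_replicas, "read and write quorums must intersect"
        self.replicas = [dict() for _ in range(n_replicas)]
        self.w, self.r = w, r
        self.version = 0                  # monotonically increasing version

    def write(self, key, value):
        self.version += 1
        for rep in self.replicas[:self.w]:        # wait for W acknowledgments
            rep[key] = (self.version, value)

    def read(self, key):
        # Query R replicas and return the highest-versioned value seen.
        candidates = [rep[key] for rep in self.replicas[:self.r] if key in rep]
        return max(candidates)[1] if candidates else None
```

Raising W improves durability but slows writes; raising R slows reads: exactly the availability/responsiveness trade-off discussed above.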
Additionally, reliability is an important requirement for device interoperability [74] in pervasive games. In the "Hot Potato" game [75], for example, which is played by a group of users at a specific place, users' mobile devices are locally connected through a wireless sensor network (WSN) and P2P neighbor discovery. They also need to connect to a backbone network for coordinating heterogeneous devices and providing global interaction and storage for all game instances.

Persistency
The persistency requirement includes state persistency and content persistency. State persistency requires that when a user leaves an application, his/her application state is well kept for retrieval in the next session [76]. State data normally have a small size but may be frequently updated. Thus, data read/write efficiency is the main concern. The authors of [77] found that some states only require approximate consistency between write and read (e.g., avatar position), while others require exact consistency (e.g., user inventory). Thus, different persistency strategies can be applied for different consistency requirements to minimize system overhead and response delay. Game state storage also needs to be reliable and robust to any failure. If redundancy is applied for fault tolerance, then the replica consistency, responsiveness, and load balance issues also have to be studied [78].
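The differentiated strategy suggested by [77] can be sketched as follows: exact-consistency state (e.g., inventory) is written through to durable storage immediately, while approximate state (e.g., avatar position) is buffered and flushed periodically. The classification set and the storage interface are illustrative assumptions, not part of the cited design.

```python
# Illustrative per-state-type persistency: write-through for exact state,
# lazy periodic flush for approximate state.
class StateStore:
    EXACT = {"inventory", "trade"}       # hypothetical exact-consistency keys

    def __init__(self):
        self.durable = {}                # stands in for persistent storage
        self.dirty = {}                  # buffered approximate updates
        self.writes = 0                  # count of persistent writes

    def update(self, key, value):
        if key in self.EXACT:
            self.durable[key] = value    # write-through: durable immediately
            self.writes += 1
        else:
            self.dirty[key] = value      # buffered: only last value survives

    def flush(self):
        """Periodic flush of buffered approximate state."""
        self.durable.update(self.dirty)
        self.writes += len(self.dirty)
        self.dirty.clear()
```

Buffering collapses many rapid position updates into one persistent write per flush interval, which is precisely the overhead reduction the text describes.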
On the other hand, user content persistency in virtual worlds is still short of attention. With the growth of user content, however, this aspect is becoming increasingly important. Content persistency requires that a user's data and content be permanently preserved as long as the user has not departed from the virtual world. Compared to state data, user content files normally have a larger size (e.g., multimedia files) but are less frequently updated. Thus, the main concerns are different [79]: storage efficiency (i.e., the reliability achieved per unit of storage space), bandwidth cost, data access latency, and content integrity. Furthermore, if user content can be shared with other users, view consistency is needed, which requires that two users see the same set of objects if both are interested in them.

Credibility
Credibility (or trust) comes from the increase and importance of user content in virtual worlds. It is more complex and thus deserves some detailed explanation. In the context of information systems, trust is a subjective belief that a system or a system component will behave as expected [80], which is established based on direct evidence, recommendation, and reputation [81]. In a virtual world, users can trust the underlying system with content management only if their content is securely preserved and legally protected. Thus, the credibility requirement has two concerns: user data security and content legal protection.
Data security (so named to distinguish it from the security requirement), though new to virtual worlds, has been widely studied in cloud services; it includes privacy and confidentiality. Both require that users' sensitive content be kept confidential from unauthorized access, while data privacy places more stress on the interest of users in controlling the data about themselves [82]. The authors of [83] divide the top threats into five categories: data breaches, account or service traffic hijacking, API or shared technology vulnerabilities, malicious insiders, and insufficient due diligence. These categories imply that data security is not only a technical issue but also a regulatory one. From the technical perspective, secure data access needs to be achieved by mitigating the above threats through privacy-enhancing technologies, security mechanisms, data obfuscation, or anonymization. From the regulatory perspective, data security involves procedural solutions for committing to legal and contractual requirements in a verifiable way. For verifying requirement commitment, accountability [84] and compliance are the key issues.
Legal protection concerns the legitimate interests of the involved parties, mainly virtual world users and platforms. The first issue is content ownership. If users only have control of their created content but not its ownership, such separation becomes a barrier to user innovation due to the additional cost [85]. Secondly, if users have content ownership, they may need to control the distribution of their content to others through digital rights management (DRM), which not only allows users to represent and constrain content usage but also provides traceability of service violations. Moreover, user-created content may also bring legal risks, including plagiarism, offensive content, spam, soft hacking, etiquette breach, personal exposure, etc. [86]. These risks push content moderation techniques to a new frontier.

Virtual World Architecture Inspection
Existing virtual world system architectures can be classified into client/server (C/S) architectures, peer-to-peer (P2P) architectures, and hybrid architectures. Figure 8 illustrates their conceptual models. This section introduces the existing virtual world architectures and conducts a qualitative study on their design requirement satisfaction and limitations. Table 1 lists representative solutions to the design issues. In this paper, only the issues related to emerging requirements are elaborated.

Client/Server
In the C/S architecture (Figure 8a), simulation logic is processed at the server end, while clients only render updates, which are sent from servers to users. A C/S virtual world has a definitive platform owner (e.g., Mozilla owning Mozilla Hubs (hubs.mozilla.com), a VR-friendly social virtual world platform running in browsers). The online services (e.g., authentication, inventory management, and region management in Second Life) and application data (e.g., user items, user inventory, and virtual property in Second Life) belong to the platform owner. The C/S architecture has dominated the virtual world industry (see the list of sampled virtual worlds in the Supplementary Materials).

Requirement Satisfaction
The C/S architecture adopts a centralized structure, which easily satisfies sufficiency and reliability. For sufficiency, code running on the client can be partially or completely offloaded to dedicated servers through function invocation [109]. For code offloading, the authors of [67] provided an energy trade-off model to maximize energy efficiency in terms of the ratio of energy cost to device-specific cost in local computation and additional communication. For QoS/QoE, MAUI [87] has shown an improved video refresh rate from 6 to 13 frames per second (FPS) with low energy consumption, latency, and data transfer overhead. Cloud gaming [69] can also address the sufficiency issue through interactive remote rendering [110]. Accordingly, energy consumption and QoS/QoE remain the main focuses of cloud gaming.
C/S virtual worlds can resolve the reliability problem with replication. The authors of [88] applied a database replication approach to multiplayer online games to provide fault tolerance. Virtual world operations are handled with a Read-One-Write-All-Available approach. That work provides four alternative synchronization approaches for propagating updates to all replicas to maintain consistency. It also shows fast responsiveness (specifically, ≤20 ms for read-only transactions and ≤100 ms for write transactions). Database replication can also improve reliability in game state storage. Yet, to the best of our knowledge, existing C/S virtual world systems have not fully addressed the connection failure issue. Directing communications to nearby fog facilities might be a solution [111], since a shorter distance between fog facilities and mobile clients can reduce the chance of connection failure.
For data security, policy-based access control [93], together with secure hardware [112] or cryptographic measures [113], can be applied in data sharing [113], keyword search [114], and public audit [115]. Fog computing can also mitigate insider data theft [116]. Moreover, security policies can be enforced through data protection compliance [95] and accountability checks [84]. For legal protection, Kim et al. [97] proposed a user-centric rights representation model for the DRM of user-generated content by employing a separate license server.
Moreover, the content moderation issue has been discussed in [86], which proposed a risk management framework for user-generated content with seven moderation techniques, respectively, employed by platform owner, system, trusted curator, and the public. Notably, the solution to this issue is applicable to all architectures.

Limitations
The centralized ownership creates a single point of failure, leading to two issues. The first issue is persistency. When the entity owning a virtual world dissolves, goes bankrupt, or withdraws its operations, the affected virtual worlds collapse together with the loss of their user-generated content (among the 126 virtual world applications based on OpenSimulator (see the inactive virtual world list in http://opensimulator.org/wiki/Grid_List), 28 are no longer in operation). As a result, user content becomes nonpersistent. Temporary infrastructure failure may also cause user content loss. For example, the World of Warcraft game server crash on 21 Jan 2006 led to the inventory damage of thousands of players [88].
The second issue is credibility, which involves information and legal protection. User data of a virtual world may include sensitive information, such as location information in context-aware games [117]. One of the key privacy compliance issues is to provide a transparent and controlled environment to data owners [118], which is, however, at the cost of the platform owner, and subject to many external factors (e.g., global societal trends), firm-specific factors (e.g., industry best practice and corporate culture), and the selection of an ethical frame (e.g., rule utilitarianism or moral relativism) [119]. There are also some trust-level issues related to transparency, including requirement propagation along the subcontract chain and malicious insiders [118]. Thus, a gap remains between the platform owner's claims and the user's trust in the protection of their sensitive information.
Moreover, storing user-created content on a C/S platform separates content ownership and control in social worlds, creating a legal dilemma. Users may own their content but may not have control over the actual data, while a platform can unilaterally terminate their accounts and confiscate their virtual assets, as in the Bragg case [120] and the Evans case [121]. The separation of content ownership and control may lead to ownership tensions and inhibit user innovation [96]. The authors of [57] have shown that platforms tend to maintain such separation either for competition purposes or for a liability safe harbor, which reveals the second problem: storing user content imposes legal pressure on platform owners for copyright infringement or offensive content [122]. Though content moderation can mitigate such risks [86], it largely increases the cost paid by platform owners, and the risks still exist.

Peer-To-Peer
The P2P virtual world aims at resolving the scalability issue of the classic C/S architecture [123] and has been jointly promoted by P2P computing and virtual world research. In the P2P architecture (Figure 8b), services are collectively run by user devices that play the roles of both server and client. All user devices are connected to a P2P overlay network [124]. When a user is accessing a virtual world application, he/she is also providing services to other users of the same application through his/her accessing device. Such a virtual world is supported by a reciprocal economy. A P2P virtual world can have an application provider who develops and distributes the application software to run on a P2P overlay network. Currently, P2P virtual worlds are still far from wide industry acceptance, partly due to the higher difficulty of system maintenance for developers [66].

Requirement Satisfaction
P2P virtual worlds, first proposed by Knutsson et al. [123], remove the single point of failure of C/S virtual worlds. By running applications on user devices with open protocols, application data are no longer owned or controlled by any single entity. Thus, they are promising for addressing persistency issues. The authors of [78] provided a comprehensive survey of P2P game state persistency. Later, the authors of [89] proposed a two-tier architecture: overlay storage employs a distributed hash table (DHT) and super-peers to provide high reliability and fault tolerance, while group storage employs a distance-based grouping approach to improve availability and responsiveness.
For content persistency, P2P virtual worlds can also employ a P2P file storage protocol to store user content. Existing P2P file storage systems, such as BitTorrent (bittorrent.com), have shown their persistency property without central storage. Varvello et al. [92] designed a communication infrastructure for distributing and storing virtual objects on Kad, a global P2P storage network. Total Recall [125] provides optimized storage efficiency with high responsiveness and low bandwidth cost. The authors of [126] proposed a content retrieval approach based on Total Recall to provide an efficient content integrity check. These are exactly the properties demanded by the content persistency requirement. Moreover, the authors of [127] proposed a P2P 3D object streaming scheme from nearby peers, which can be used in virtual world content sharing, and the authors of [128] improved its efficiency.
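DHT-based placement of the kind used by such storage networks can be sketched as follows: a content identifier is hashed onto a ring, and the content is replicated on the k nodes that succeed it. This is a generic consistent-hashing sketch under assumed parameters (16-bit identifier space, SHA-1 hashing), not the actual placement rule of Kad or Total Recall.

```python
import hashlib

# Illustrative DHT-style content placement with k-successor replication.
def node_id(name, space=2**16):
    """Hash a node or content name into the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % space

def successors(key, nodes, k, space=2**16):
    """Return the k nodes whose identifiers follow the key on the ring."""
    ring = sorted(nodes, key=node_id)
    kid = node_id(key, space)
    idx = next((i for i, n in enumerate(ring) if node_id(n) >= kid), 0)
    return [ring[(idx + j) % len(ring)] for j in range(k)]
```

Because placement depends only on the hash, any peer can locate a file's replicas without a central index, which is what lets content outlive any individual node.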
For data security, P2P virtual worlds can leverage Blockchain technology, combined with cryptographic measures, to achieve user-centric secure data access [129]. Blockchain-based privacy-preserving access control has been widely discussed in many fields, including personal identity management [130], healthcare record sharing [94], collected data access [131], etc. Blockchain technology has several advantages. First, no third-party service is needed to steward user data, in which users would otherwise have to place their trust. Second, security policies added to a Blockchain [132] are transparent to all users, and data compliance can be achieved through the consensus mechanism. Moreover, the Blockchain data structure is tamper-proof, providing additional merits such as integrity and nonrepudiation.
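The tamper-evidence property rests on hash chaining, which can be shown in a few lines: each entry commits to its predecessor's hash, so altering any past record invalidates every later hash. This is only the data-structure core; a real Blockchain additionally needs the consensus mechanism mentioned above, which is omitted here.

```python
import hashlib
import json

# Minimal tamper-evident log (the hash-chain core of a Blockchain record).
def append(chain, record):
    """Append a record, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any tampered entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if (entry["prev"] != prev
                or hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev = entry["hash"]
    return True
```

Access-control decisions logged this way are auditable by every peer, which is the transparency argument made in the text.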
Blockchain technology can also be used in digital rights management [98]. Its nonrepudiation property can enable the conditional traceability of license violations. Moreover, a P2P virtual world does not have a definitive platform owner, owing to its decentralized structure. This removes the ownership inconsistency issue: users truly own their virtual assets as well as the underlying data. Thus, a P2P virtual world can achieve higher credibility.

Limitations
Sufficiency and reliability are the two weaknesses of the P2P architecture, and they are becoming prominent with the increase of mobile clients. In pervasive games or augmented reality, user clients could be wearable devices, IoT devices, and custom-built devices. They have nonstandard interfaces and heterogeneous capabilities [74,117], which expose several limitations of the P2P design. First, resource-limited wearable devices may not have enough computing resources or standard interfaces to play the role of a server running full game logic. Moreover, the gaming experience on mobile devices can be unsatisfying because of limited processing capacity. For example, Final Fantasy XV Pocket Edition provided lower graphics [133] and AI quality [134] than the desktop and console editions.
Moreover, P2P virtual worlds may also suffer from heterogeneous peer resources. Mobile devices and wearable devices can run fewer services than desktop devices, and the departure of users providing major resources can unduly stress the other peers [135].
Mobility also greatly increases the chance of client failure due to connection loss [136] or battery depletion [137]. Client failure may cause the loss of unsaved state. Additionally, reliability is important to the gaming experience. Pervasive games require high connectivity to game masters (i.e., specially selected players) for content distribution, diegetic communication, and game progress tracking [74]. Although Pithos [89] adds reliability through two-tier game state replication, the design does not address the replica synchronization issue, which may lead to an inconsistent game state.

Hybrid-I
The C/S and P2P architectures can be combined into a hybrid architecture to overcome the weakness of one architecture with the strength of the other. According to the way of combination, hybrid architectures can be divided into two classes. The first one, denoted by Hybrid-I (Figure 8c), applies P2P computing techniques to the C/S architecture for computation offloading, since P2P resources can easily scale with the user population. In Hybrid-I virtual worlds, e.g., [91], clients disseminate updates to each other through P2P communication to save a server's outgoing bandwidth and thus its operating cost. Cloud gaming can also exploit P2P techniques to reduce game latency [69] and service operating costs [100,138].

Requirement Satisfaction and Limitations
Hybrid-I architectures differ in requirement satisfaction, depending on how clients participate in a simulation. This section introduces some typical examples. First, in [139], the cloud servers form a P2P publish-subscribe overlay for the communication between game servers; clients are not involved in any simulation. Then, in [91], a central server is employed to control and store the game state, while clients are only in charge of message dissemination to save the server's outgoing bandwidth. Their requirement satisfaction is similar to that of the C/S architecture, except that the latter design provides a more cost-efficient scalability solution, because the efficiency of message dissemination scales with the user population. They also share the same single point of failure as the C/S architecture due to the existence of a central point for content storage, including the limitations in content persistency, data security, and legal protection.
In [106], a central server stores application states and relays client messages for security checks, while clients compute and disseminate the game state with P2P techniques. The requirement satisfaction of this approach lies between the C/S architecture and the P2P architecture: the centrally controlled game state gives it requirement satisfaction and limitations similar to those of the C/S architecture in persistency, credibility, and security, while the P2P clients give it requirement satisfaction and limitations similar to those of the P2P architecture with respect to the remaining issues.
Moreover, in [140], P2P clients manage application states for each other, while a central server only keeps the sensitive data (e.g., user profiles) and provides utility functions (e.g., authentication). This design is similar to the P2P architecture, except that the satisfaction of the data security requirement can approach that of the C/S architecture because of the centralized storage of sensitive data.

Hybrid-II
The second hybrid class, denoted by Hybrid-II (Figure 8d), introduces cloud resources into the P2P architecture to improve the overall performance. In [141], for instance, special server nodes play the role of zone masters; they do not retain the game state but only help clients in intra/interzone communication. The Hybrid-II architecture was initially proposed by the authors of [135] after identifying the heterogeneous peer resource issue. In their design, cloud resources are introduced to complement peer resources, and both are virtualized into nodes (called virtual nodes, VNs). The VNs construct two P2P overlays providing the common services: the state action manager (SAM) and the positional action manager (PAM). The SAM nodes, organized in a structured overlay, manage the state of virtual objects [90]. The PAM nodes, organized in an unstructured overlay, manage user positions for neighbor and object discovery [142]. In an application of the Hybrid-II architecture, a client first retrieves the nearby users and objects by querying the PAM service; it can then interact with them through the SAM service for state update and state synchronization.
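The two-step interaction can be sketched with stubbed PAM and SAM services: the client asks PAM for entities near its position, then queries SAM for their state. The class shapes, the Manhattan-distance neighbor test, and the dictionary-backed stores are illustrative assumptions, not the protocol of [135].

```python
# Hypothetical sketch of the PAM-then-SAM client interaction in Hybrid-II.
class PAM:
    """Positional action manager stub: tracks user positions."""
    def __init__(self):
        self.positions = {}

    def publish(self, uid, pos):
        self.positions[uid] = pos

    def neighbors(self, pos, radius):
        # Manhattan distance stands in for real interest management.
        return [u for u, p in self.positions.items()
                if abs(p[0] - pos[0]) + abs(p[1] - pos[1]) <= radius]

class SAM:
    """State action manager stub: holds virtual object state."""
    def __init__(self):
        self.state = {}

    def update(self, oid, state):
        self.state[oid] = state

    def query(self, oid):
        return self.state.get(oid)
```

A client at position (0, 0) would call `pam.neighbors((0, 0), r)` for discovery and then `sam.query(...)` on each returned identifier for state synchronization.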

Requirement Satisfaction
The Hybrid-II architecture shares some similarities with the P2P architecture, but there are some differences between them. First, the Hybrid-II design enhances scalability by resolving the resource heterogeneity issue: reliable cloud resources cover the resource deficiency caused by user departure. The problem of optimal cloud resource assignment with respect to system load has been studied in [143] to minimize the economic cost of cloud resource utilization. Resource virtualization can also provide load balancing to further improve scalability. In [90], load imbalance is evaluated with the Gini coefficient; if a device is overloaded, it migrates some of its VNs to other devices with minimized visual inconsistency.
Cloud resources also contribute to state persistency [135]. The SAM service provides two layers of replication. First, each VN assigned to a user device (called a uVN) is backed up with a cloud node (called a bVN). A uVN periodically synchronizes its node state to the bVN; this fault-tolerance mechanism is called coarse-grained data replication. Moreover, a VN in SAM also dynamically replicates its objects to a VN (i.e., an overlay replica) in the neighboring address space [144], which is called fine-grained data replication. In the case of uVN failure, the SAM service forwards requests to the overlay replica; if an overlay replica becomes overloaded, it informs its clients to forward their requests to the bVN. In such a design, coarse-grained data replication reinforces storage reliability, and fine-grained replication improves service responsiveness through neighboring address access [144].
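The two-layer scheme just described can be sketched as follows: fine-grained replication pushes every object update to the overlay replica immediately, while the coarse-grained backup to the bVN happens only at snapshot time. The class and method names are assumptions made for illustration; the cited design is not specified at this level of detail.

```python
# Illustrative two-layer SAM replication: per-update fine-grained replica,
# periodic coarse-grained snapshot to the cloud backup (bVN).
class UVN:
    def __init__(self, bvn, overlay_replica):
        self.objects = {}
        self.bvn = bvn                      # cloud backup store
        self.overlay_replica = overlay_replica  # neighbor's store

    def update(self, oid, state):
        self.objects[oid] = state
        self.overlay_replica[oid] = state   # fine-grained: every update

    def snapshot(self):
        """Coarse-grained periodic backup of the full node state."""
        self.bvn.clear()
        self.bvn.update(self.objects)
```

Note that between snapshots the bVN lags behind the uVN, which makes concrete the synchronization window criticized in the limitations below.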
Though not mentioned by the authors, we believe that content persistency can be achieved with the same approach as that in the P2P architecture, since the overall Hybrid-II architecture is decentralized. Likewise, the Hybrid-II architecture can achieve the same credibility requirement satisfaction as the P2P architecture, due to the lack of a centralized control entity.

Limitations
Similar to the P2P architecture, the Hybrid-II architecture is limited in resource provisioning for resource-limited devices. In the SAM service, the objects stored on a VN have identifiers close to the VN's address [90]. That is to say, objects are grouped not by their users but by mapping identifiers onto the address space of the DHT overlay [144]. Thus, a SAM node may not have all the content of a user needed to run complete simulation logic. Moreover, even though cloud resources have been introduced, which may have higher computation capabilities, they are partitioned into many VNs for service and data storage backup. Thus, the cloud gaming functions (in C/S and Hybrid-I) cannot be run on the Hybrid-II architecture for code offloading [109] or interactive remote rendering [69].
For reliability, though the two-layer data replication mechanism can double the reliability of the SAM service, the state synchronization issue between replicas is, as in the P2P architecture, not addressed by the authors. When a uVN fails, it may fail to synchronize the latest state to its bVN or overlay neighbor in time, leading to state loss or inconsistency. The extent of inconsistency between replicas depends on the length of the synchronization interval. However, state consistency can be critical to the integrity of a game, such as in-game trade and user inventory [77].

Architecture Comparison
In this section, we compare the virtual world architectures in requirement satisfaction. The architecture classes introduced include C/S, P2P, Hybrid-I, Hybrid-II, and Virtual Net. In the discussion, they are further classified into centralized architectures (C/S and Hybrid-I) and decentralized architectures (P2P and Hybrid-II). Table 2 shows the comparison results. Sufficiency can be satisfied with the separation of client and service. In centralized architectures, most computationally complex tasks, including simulation logic, physics computation, and graphics computation, are moved to the service end, so a lightweight client only needs to render the results generated by servers. In contrast, decentralized architectures require client devices to play the roles of both server and client, imposing large computation loads on them. A lightweight client device may not have the capability to render a highly immersive environment. Moreover, compared to the other architectures, P2P virtual worlds may suffer from heterogeneous peer resources: some peers may be located on low-performance or unreliable devices, which provide fewer services to other peers and can even slow down the entire system.
Reliability can be satisfied with replication and synchronization. In centralized architectures, each state update is synchronized from client to server. Since the centralized architectures preserve a copy of the user state at the server end, they can tolerate client failure: a recovered client can catch up with the latest state by retrieving the data from the server. Moreover, database synchronization approaches can maintain replica consistency between replicated databases. However, centralized architectures maintain only one connection between a client and a server, which does not tolerate connection failure. The decentralized architectures also provide replication for backing up the client state, so the client state can be recovered after client failure. However, they neither provide a replica consistency guarantee nor tolerate connection failure.
Content persistency can be satisfied with decentralization. The centralized architectures contain a central entity stewarding user content, creating a single point of failure. As shown above, even if advanced fault tolerance techniques can minimize system failure, the failure of the control entity will cause services to discontinue. Decentralized architectures, on the other hand, are immune to system-level failure, since such a system is not controlled by a single entity. Moreover, the underlying P2P storage techniques [125] ensure that the failure of a node will not spread over the entire system, and local failures can be recovered from data replicas.
Data security can also be satisfied by decentralization. The security compliance of the centralized architectures is normally nontransparent to external parties and is subject to many firm-specific factors. Users place only limited trust in the protection of their data and information, even for cloud-based systems; an industry observation has shown that security and compliance were still significant challenges to cloud computing in 2019 [145]. The decentralized architectures, on the other hand, do not have a single stakeholder. Every user is a stakeholder of the system, and P2P-based security mechanisms can be applied effectively as long as most users obey the system rules [146]. Thus, users need to trust no single entity but only the system per se, which is also the argument for why Blockchain-based solutions can achieve higher security in data access [94]. Thus, decentralized architectures can provide users with more trust in data security.
Legal protection can be satisfied with unitary ownership. In the existing virtual worlds, which are centralized, control entities own all digital assets (i.e., files and data), while users own their virtual assets, leading to ownership inconsistency and disputes [96]. The decentralized architectures, on the other hand, do not have a control entity: both virtual objects and digital assets belong to users. Thus, users' virtual property can be legally protected by property or copyright law. Moreover, application providers do not need to worry about their liability for, e.g., plagiarism or offensive content, since they do not own the content. Thus, decentralized architectures can offer higher legal protection to both users and application providers.

Table 2 also shows the architectural comparison for existing requirements, including scalability and security. Scalability cost refers to the economic cost imposed on whoever provides the virtual world application for scaling up the system to accommodate the growth of the user population. In the C/S architecture, the cost is solely paid by the platform owner, who pays for the entire infrastructure. In the Hybrid-I architecture, as some functions (e.g., state distribution) are distributed to user clients, the platform owner's cost can be largely decreased. In the Hybrid-II architecture, the cost depends on who provides the cloud resources. Since a Hybrid-II virtual world is collectively run by cloud resources and user devices, the cost of the Hybrid-II architecture will not be larger than that of the Hybrid-I architecture. In the P2P architecture, the infrastructure cost is fully shared by all users. Thus, application providers barely need to pay for the infrastructure.
Game cheating is the main factor leading to the low acceptance of decentralized architectures in the industry [66]. Due to the lack of a central arbitrator, decentralized structures are more vulnerable to player escaping, network flooding, the suppress-correct cheat, etc. than their C/S counterparts, while time cheats, blind opponent, and consistency cheats are only possible in P2P games [66]. In P2P architectures, though a few cheats (e.g., code tampering and event suppressing) can be either prevented or detected through cryptographic measures [104] or mutual verification [107], a more general approach needs either a central server for rule enforcement [106,147] or lockstep message checks, which largely increase communication overhead. In the Hybrid-II architecture, a referee anti-cheating scheme is applied for detecting illegal messages in communication [108]. Nevertheless, a malicious user can still tamper with the code of VNs and change the simulation logic in their favor without being detected by the scheme. Moreover, the deterministic mapping from object identifiers to the VN address space [144] enables an attacker to guess the content on a controlled device, increasing the attack success rate.
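The commit-reveal round that underlies lockstep checking can be shown in a few lines: each player first sends a hash of its move, and only reveals the move once all commitments have arrived, so no player can adapt a move to the opponent's. This is the standard commitment technique; the string encoding and function names are assumptions made for the example.

```python
import hashlib

# Commit-reveal round of a lockstep protocol (illustrative).
def commit(move, nonce):
    """Commitment sent in round 1: hash of the move plus a secret nonce."""
    return hashlib.sha256((move + ":" + nonce).encode()).hexdigest()

def verify_reveal(commitment, move, nonce):
    """Round-2 check: the revealed move must match the earlier commitment."""
    return commit(move, nonce) == commitment
```

The nonce prevents an opponent from brute-forcing a small move space from the hash alone; the extra round is exactly the communication overhead the text attributes to lockstep checks.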

A Possible Avenue Towards the Future
As no existing architecture can fully satisfy all new requirements, is it possible to leverage their advantages to create a new one? To answer this question, a new architecture called Virtual Net is under development to satisfy more requirements [126]. The central idea of Virtual Net is that nobody owns a virtual world; rather, all users collectively create a self-organized one. Thus, Virtual Net belongs to the decentralized architectures. In Virtual Net, users contribute a part of their computing resources, which is virtualized into one or multiple virtual nodes. All virtual nodes have the same computing resources and are managed in a node pool. Users of Virtual Net can store their content or deploy their applications on the nodes without a central server. Thus, it inherits the advantages of the decentralized design paradigm in persistency, data security, and legal protection.
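The node-pool idea can be sketched as follows: contributed resources are cut into uniform virtual nodes that the pool later allocates on demand. The class names, the `NODE_QUOTA` constant, and the allocation interface are hypothetical illustrations, not the actual Virtual Net design.

```python
import uuid
from dataclasses import dataclass, field

NODE_QUOTA = 100  # uniform resource units per virtual node (illustrative value)

@dataclass
class VirtualNode:
    node_id: str
    owner: str            # the contributing user
    capacity: int = NODE_QUOTA

@dataclass
class NodePool:
    free: list = field(default_factory=list)

    def contribute(self, owner: str, units: int) -> None:
        """Split a user's contributed resources into uniform virtual nodes."""
        for _ in range(units // NODE_QUOTA):
            self.free.append(VirtualNode(uuid.uuid4().hex, owner))

    def allocate(self, n: int) -> list:
        """Hand out n virtual nodes for hosting content or applications."""
        taken, self.free = self.free[:n], self.free[n:]
        return taken

pool = NodePool()
pool.contribute("alice", 350)   # yields 3 uniform nodes; the remainder is unused
mesh_nodes = pool.allocate(3)
```

Making every node the same size is what lets the pool treat contributed hardware interchangeably, regardless of which user supplied it.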
In the Virtual Net architecture, a Mesh is a set of replicated virtual nodes. One Mesh is assigned to each user for running virtual world applications. The replicas in a Mesh apply a replica synchronization protocol to maintain replica consistency. Thus, externally, each Mesh can be treated as a reliable peer, and inter-Mesh interaction is equivalent to inter-peer interaction in the P2P architecture. In a Mesh, a client is a special node that provides the user interface for receiving user operations and rendering updated states to users. A Mesh can receive operations from, or send updates to, a client. This structure offloads computationally complex tasks from the client to the remote service end, so a lightweight client device only needs to handle limited computing tasks. Moreover, a client can simultaneously communicate with all the replicas of a Mesh to tolerate connection failures. Thus, the overall reliability of Virtual Net is higher than that of the existing architectures. Notably, a special Mesh without a client node is assigned to each NPC for running its AI program. Virtual Net also contains a P2P cloud that provides the common services needed by all Meshes, such as object request routing [148].
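The client's fan-out to all replicas can be sketched as below. The sketch assumes the synchronization protocol keeps replicas identical, so any single reply is valid; the class and method names are hypothetical, and real replicas would of course be remote nodes rather than in-process objects.

```python
class Replica:
    """One virtual node holding a copy of the user's application state."""
    def __init__(self):
        self.state = {}

    def apply(self, op):
        key, value = op
        self.state[key] = value
        return self.state[key]

class Mesh:
    """A set of replicated virtual nodes that externally acts as one reliable peer."""
    def __init__(self, replicas):
        self.replicas = replicas

    def submit(self, op):
        # The client sends the operation to every replica simultaneously;
        # as long as one replica answers, connection failures of the others
        # are tolerated and the Mesh remains available.
        replies = []
        for r in self.replicas:
            try:
                replies.append(r.apply(op))
            except ConnectionError:
                continue
        if not replies:
            raise RuntimeError("all replicas unreachable")
        return replies[0]

mesh = Mesh([Replica() for _ in range(3)])
mesh.submit(("hp", 10))
```

Because every replica applies the same operation, the client may accept whichever reply arrives first, which is the source of the reliability gain.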
A full comparison of Virtual Net with the other architectures is shown in Table 2. In addition to the above features, Virtual Net also has advantages in scalability and cheating prevention. The scalability cost of Virtual Net depends on whether an application provider contributes any virtual nodes to the node pool, making it equivalent to the cost of the Hybrid-II model. With respect to cheating prevention, Virtual Net can mitigate client-side code tampering by detaching the service from the client device. Though an attacker can still break the program running on his/her own device, random node allocation makes it unlikely that that device's resources are assigned to his/her own Mesh, which minimizes the value of and incentive for such an attack.
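The effect of random node allocation can be quantified with a simple hypergeometric calculation: under uniform random allocation (an assumption for this sketch, with illustrative numbers), the chance that any of an attacker's own nodes ends up serving his/her own Mesh is small.

```python
from math import comb

def self_assignment_probability(total_nodes: int, owned: int, mesh_size: int) -> float:
    """Probability that at least one of a user's own virtual nodes is among
    the mesh_size nodes randomly allocated to that user's Mesh."""
    # 1 - P(none of the mesh_size drawn nodes is owned by the attacker)
    return 1 - comb(total_nodes - owned, mesh_size) / comb(total_nodes, mesh_size)

# With 10,000 pooled nodes, an attacker contributing 5 nodes, and a Mesh of
# 3 replicas, the chance of controlling one's own replica is well under 1%.
p = self_assignment_probability(10_000, 5, 3)
```

The probability grows only linearly in the number of contributed nodes, so controlling one's own Mesh with high likelihood would require contributing a large fraction of the whole pool.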
Though promising, a full-fledged Virtual Net requires resolving various challenges, including virtual resource management, rapid replica synchronization, and the composition of event handling and object simulation. The Virtual Net research is currently making progress [149], and we can anticipate a better future for virtual worlds.

Conclusions
We have studied the trends of virtual-world-related technologies and their implications for future virtual worlds, mainly from the perspectives of revenue, investment, population, and consumption across platforms, applications, peripherals and social connection, immersive technologies, and AI. The importance of the trend analysis lies in the proposal of new requirements that have not been commonly recognized in the virtual world community, for two reasons. First, our trend analysis and requirement development cover a wide range of virtual-world-related technologies, while existing virtual world communities focus only on a subset of them, e.g., platforms, immersive techniques, or game AI, depending on the discipline of the community. Second, these technologies are still evolving, which changes the design requirements derived from them. Therefore, to the best of our knowledge, there has been no comprehensive and timely requirement study for virtual world system design. Based on the trends and implications, we have discussed the emerging requirements of virtual world system design in detail. These requirements, including the design issues and criteria, provide a complete requirement space for design reference. We have also thoroughly examined the existing virtual world architectures and discussed the extent to which they satisfy the new requirements, along with their limitations. A complete list of requirement satisfaction examples has been provided for function implementation reference. The comparison between architectures shows that none of the existing architectures can fully satisfy all requirements. The detailed comparison results and a new architecture design provide new avenues for virtual world system development.
The main contributions of this paper include the development of new virtual world system design requirements, the requirement satisfaction analysis of existing virtual world architectures, and a possible approach to satisfying the new design requirements. We hope that the results of this study, the surveyed content, and the analysis can lay a solid foundation and open research avenues for future virtual world development.
For future work, we intend to build a prototype virtual world based on the complete Virtual Net design to validate requirement satisfaction with a real system. Based on the prototype, the sufficiency and reliability features can be straightforwardly evaluated with mobile clients, such as smartphones. Persistency and credibility, however, cannot be evaluated with a closed system. They, especially credibility, depend more on users' subjective perceptions, such as perceived security and trust in content storage. Thus, they must be studied by opening the system to human users and collecting their perceptions through surveys or questionnaires.