Quality of Experience in 6G Networks: Outlook and Challenges

In this paper, we discuss the critical characteristics of user experience in sixth generation (6G) cellular networks. We first describe cellular networks' evolution through 5G and then discuss the enabling technologies and projected services in 6G networks. We note that these networks are markedly centered around expanded intelligence, end-to-end resource and topology synchronization, and intrinsic support for low-latency, high-bandwidth communication. These capabilities make context-rich, cyberphysical user experiences viable. It thereby becomes necessary to define and identify the role of quality of experience in 6G networks, especially when it comes to network management. We elaborate on these expected challenges and allude to viable opportunities in emerging technologies.


Introduction
According to 5G Americas, there are 135 live deployments worldwide for the 5G network, with the bulk of networks (~60%) deployed in Western Europe and the Asia-Pacific region at the time of writing the report [1]. Comparatively, the current numbers of worldwide deployments for LTE (long term evolution) and LTE-Advanced are 680 and 338, respectively. The current number of 5G deployments is expected to increase, as these deployments are in response to the first relevant 3GPP release (Release 15) made in 2018 Q2 and the Phase 2 release (Release 16), just recently made in 2020 Q3 [2]. Release 17 currently has its timeline under review, with a decision on the freeze to be made in December 2020. The release aims to offer further features in 5G networks [3]. Different projections suggest that 5G networks are expected to account for 20% of mobile connectivity in 2025 [4]; considerable investments are thus naturally underway.
However, while 5G deployment is still ongoing, and even with ambivalent consumer reception to deployments that have already been made, substantial discussions are already underway regarding technologies and services in 6G networks [5]. Operators such as NTT Docomo [6] and vendors such as Samsung [7] and Ericsson [8] have readily released white papers detailing possible visions. Meanwhile, initiatives such as the University of Oulu's "6G Flagship" have sought input from leading researchers and practitioners, who have offered insights on various considerations of 6G networks through detailed positions and vision papers [9]. A basic Google Scholar search for "6G (cellular OR wireless)" limited to "since 2019" yields over 17,000 hits today. There is also a plethora of journal and magazine special issues and dedicated workshops already receiving submissions.
There is substantial consensus on the awaited enabling technologies and the distinguishing (or "killer") services as we look beyond 5G networks [10,11]. Some reports suggest that the earliest 6G deployments may occur during 2028-2029, with projections of the downlink data rate reaching ambitious levels at terabits per second (Tbps) [12]. However, the expected focal characteristic of 6G networks is autonomous intelligence. This expectation is not limited to the user's view but also extends to that of the network operator. Indeed, where 5G has been marked by a strong emphasis on thorough softwarization and resource virtualization [13], the general expectation is that 6G will capitalize on advances in artificial intelligence and inference, with applications of these advances considered at all levels of network operation.
The general interest in shifting towards autonomous intelligence has long been anticipated, especially to overcome decision complexity in different network elements. For example, the complexity of managing the radio resource has already overwhelmed traditional solution approaches since Release 12 [14][15][16]. This complexity is particularly apparent in matching the best resource management option to the individual user's needs or circumstances and in making this assignment within the set scheduling constraints. Addressing this rising concern in radio resource management (RRM), various AI-based architectures have been proposed (e.g., [17,18]). Deliberations on these solutions have ultimately resulted in 5G's NWDAF (network data analytics function) as part of the core network [19]. NWDAF interacts with the virtualized network instances (slices) to provide the operator with an analytics-based view of the network state. The introduction of other AI-based network instances can also be traced, and these are expected to be further materialized as the development of 6G specifications proceeds.
The complexity in network management came as a result of the increasing demand. It has been observed that the growth in the numbers of mobile subscribers has plateaued in recent years. The GSMA expectation for subscriber growth (in terms of compound annual growth rate, or CAGR) during 2019-2025, for example, is at 1.9% [4]. However, this figure translates to 5.8 billion subscribers in 2025. Meanwhile, connectivity challenges will be more prominent in the context of the Internet of Things (IoT), where 24.6 billion "things" (e.g., devices) are expected to be connected in 2025. In addition to these increasing numbers, connectivity requirements are also becoming more demanding. For instance, there is a rising emphasis on time-sensitive communications in contexts ranging from tactile Internet, to intelligent transport systems (ITS) and to industrial IoT (or Industry 4.0). In 5G networks, accommodations have been made through massive machine type communications (mMTC) or massive IoT (mIoT), and ultra-reliable and low-latency communications (URLLC). Therefore, the expectation is that 6G networks will offer solutions capable of handling these increasingly complex demands and capabilities [20].
Visions on 6G networks have yielded detailed descriptions touching on various aspects [6,13], but there is a lack of discussions on managing user experience in the literature. 6G will deliver enhanced mobile experiences (through more capable "devices") and individualized mobile experiences. Realizing these objectives entails harmonizing network resources in an end-to-end and cross-layer fashion to match individual user requirements and expectations. A harmonized and individualized delivery is another definite by-product of capitalizing on AI-based network management. Nevertheless, specific considerations for user quality of experience (QoE) are yet to be addressed in the context of 6G networks.
The notion, practicalities, and applications of QoE have considerably evolved since its inception in the telecommunication context [21]. Defined at the time as "the overall acceptability of an application or service, as perceived subjectively by the end-user", QoE was understood to include "end-to-end system effects" and to be something that "may be influenced by user expectations and context". The evolution of QoE inference and use has included improvements on the manually aggregated scores, e.g., the mean opinion score (MOS); standardizing QoE mappings to network measurements of service delivery; association/correlation studies with user-end observations and responses; and more [22]. This evolution came in tandem with enhanced network delivery performance, especially after the introduction of IP-based services and smartphones in 2007.
In light of the ongoing deliberations on 6G networks, QoE aspects take on new dimensions that have not been previously accounted for, especially with the projected cyberphysical enrichment of the user context through augmented reality (AR) and virtual reality (VR), in addition to tactile experiences. Streamlining QoE investigation towards relevant and usable techniques thus becomes necessary.
The objective of this work is to address this void. Specifically, we aim to identify the critical characteristics of user experience in 6G networks. We also highlight how the evolutions in the relevant areas will enhance the understanding and use of QoE inferences. Finally, we elaborate on the challenges expected in QoE and allude to viable opportunities in both emerging and expected technologies.
The remainder of this paper is organized as follows. In Section 2, we discuss the evolution of cellular networks through 5G. We then discuss the enabling technologies and projected services of 6G networks in Sections 3 and 4, respectively. Next, we expand on QoE aspects in 6G networks in Section 5 and discuss the outlook, challenges, and opportunities. Finally, we conclude the work in Section 6.

Quality of Experience in the Cellular Evolution
In what follows, we elaborate on the evolution of cellular communication networks and highlight major intersections with QoE developments. The first-generation (1G) cellular networks' essential goal was to enable wireless and mobile voice calls, i.e., establishing a wireless interface to the standard public switched telephone networks. This involved introducing the cellular concept and maintaining an analog basis of over-the-air signals [23]. Digitized cellular communication for voice connection was introduced in second-generation (2G) networks. In the backend, this involved migrating away from a connection-oriented, public-switched telephone network to a packet-oriented network. Other functionalities allowed for structuring the network into a hierarchy that enabled mobility through handovers and roaming. In addition, 2G introduced the first digital services, including the short messaging service, multimedia messaging service, and wireless application protocol. In combination, these allowed for short text messages, picture-based messages, and accessing the web through mobile interfaces.
The evolution from the earliest instances of 2G networks, i.e., the global system for mobile communications (GSM), to the later evolutions, i.e., the general packet radio service (GPRS) and then enhanced data for global evolution (EDGE), exhibited changes in the radio interface and the core. For the radio interface, a greater emphasis was placed on more capable spectrum techniques. This allowed for an increase in data rates from 64 kbps to 2 Mbps (in small cell deployments). Meanwhile, remnants of connection-oriented communications at the core gave way to a more packet-oriented operation.
With voice being the predominant service in the first two generations, capturing the user's impression of the service relied on the mean opinion score (MOS), a subjective test extensively used in regular phone networks. In 1996, the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) standardized the MOS scoring scheme through the recommendations in [24]. These include standardizing aspects such as the room characteristics and the eligibility of the two subjects. A score is then assigned from 1 (lowest, or bad) to 5 (highest, or excellent). The standard also advises on how the aggregate score can be calculated for an individual service. Quality models derived from human-based MOS estimates have also been generated to automate MOS assessment, as seen in [25]. Similar standards, both manual and automated, are set by the ITU for other multimedia communications.
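The aggregation step can be pictured with a minimal sketch, assuming simple averaging of per-subject scores with a normal-approximation confidence interval (the recommendation itself details the full procedure and conditions):

```python
import statistics

def aggregate_mos(ratings, z=1.96):
    """Aggregate per-subject MOS ratings (1 = bad .. 5 = excellent)
    into a mean score with a normal-approximation 95% confidence interval."""
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("MOS ratings must lie on the 1..5 scale")
    mean = statistics.fmean(ratings)
    # Standard error of the mean; interval half-width via the z quantile.
    sem = statistics.stdev(ratings) / len(ratings) ** 0.5
    return mean, (mean - z * sem, mean + z * sem)

mos, ci = aggregate_mos([4, 5, 3, 4, 4, 5, 3, 4])
print(f"MOS = {mos:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The interval conveys how much the subjective scores disagree, which is as informative as the mean itself when comparing two service configurations.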

QoE in 3G Networks
The earlier instances of 3G deployments (around 2001) coincided with a boom in Internet usage. Personal Digital Assistants (PDAs) and phones with QWERTY interfaces were already introduced at the time. However, smartphone-like devices were introduced only around 2005, together with the earlier deployments of the first evolution to 3G, namely high-speed downlink packet access (HSDPA). Remarkably, all 3G evolutions were primarily software upgrades to the initial Universal Mobile Telecommunications System (UMTS) infrastructure [26].
The last (pre-LTE) evolution in 3G networks saw downlink data rates up to 56 Mbps. Simultaneously, the network core underwent a change towards the IP multimedia subsystem (IMS), whereby a view of packet-switched multimedia communications was introduced based on the session initiation protocol (SIP). This transformation facilitated a stronger coupling between cellular network cores and the Internet at large. It also allowed for portable mobility between different radio interfaces (e.g., cellular to WiFi).
3G networks exhibited a shift in terms of involving the user's impression of the service in network management. Previously, network management relied on a quality of service (QoS)-based view: estimates were made of the network performance in terms of delay, jitter, call drop ratio, etc., and the network operation was amended or provisioned to enhance its performance. However, complementing voice with multimedia in 3G resulted in expanding consideration to a QoE-based view [27]. A vital motivator of this shift was a twofold realization emerging at the time: that network enhancements cannot be made without understanding their impact on user perception, and that factors other than network performance may affect user experience.
Specifically, QoE is concerned with the user perception of the delivered service. The work in [28] takes a long view of QoE's development over the years and how it migrated from its inception in the context of TV communications to more generalized aspects. The authors converge on a definition that QoE is "the degree of delight or annoyance of a person whose experiencing involves an application, service, or system. It results from the person's evaluation of the fulfillment of his or her expectations and needs with respect to the utility and/or enjoyment in the light of the person's context, personality, and current state". The 2017 updated ITU-T definition is similar in stating that QoE is "the degree of delight or annoyance of the user of an application or service", with an additional note that while "recognizing ongoing research on this topic, this is a working definition which is expected to evolve for some time" [29].
The natural subjectivity of these QoE measures is generally not overlooked, whether in terms of the "evaluation of the fulfillment", the "expectations and needs", or the "person's context, personality, and current state". Various efforts have been made over the years to objectify (or standardize) the QoE measure to address this. These include the objective MOS techniques noted above. They also include efforts to identify how values on a particular QoE scale can translate into specific network performance (e.g., [30] or [31]). A more comprehensive review of the matter can be found in [32].
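Such translations between QoE scales and network performance are often parametric. One widely cited form is an exponential relationship between QoE and a single impairment metric (the IQX hypothesis); the sketch below uses illustrative, uncalibrated coefficients rather than values from any particular study:

```python
import math

def iqx_qoe(impairment, alpha=3.0, beta=0.5, gamma=1.5):
    """IQX-style exponential mapping from a network impairment
    (e.g., packet loss in %) to a MOS-like QoE value.
    The coefficients alpha, beta, and gamma are illustrative only."""
    return alpha * math.exp(-beta * impairment) + gamma

# QoE degrades steeply at low impairment levels, then flattens out.
for loss in (0.0, 1.0, 2.0, 5.0):
    print(f"loss={loss:.1f}% -> QoE={iqx_qoe(loss):.2f}")
```

The shape captures a common empirical observation: users are most sensitive to the first increments of degradation, while an already-poor service degrades more gracefully.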

Fourth Generation Networks and QoS Legacy
Before 4G, a migratory transition called long term evolution (LTE) was introduced with critical changes at the access-level interface and architecture and at the core [33]. These changes allowed for a fully packet-switched core and up to 300 Mbps in the downlink (using 4 × 4 Multiple-Input Multiple-Output (MIMO) communications at a 20 MHz allocation). The evolved packet core (EPC) enabled a fully IP-based cellular network with mobility and billing management across multiple radio interfaces.
True 4G, called LTE-Advanced, is currently at 338 deployments worldwide and continues to be rolled out [1]. LTE-Advanced saw significant improvements over LTE in terms of access layer technology that significantly improved downlink rate (up to 1 Gbps).
Transitioning to 5G, 3GPP Releases 13 and 14 introduced what is called LTE-Advanced Pro. These releases also allowed for a more capable carrier aggregation, yielding a possible 3 Gbps data rate in the downlink. Meanwhile, the releases allowed for the use of an unlicensed spectrum and provided the means for supporting IoT access to the cellular network. Further details on these releases can be found in [34,35].
A key observation to be made from the evolution discussed above is a focus on improving mobile user experience through enhancing data rates, which meant subsuming QoE under a QoS-based view in managing the network, i.e., managing in isolation from user perception.
The 4G enhancements in data rates came with supporting services such as multiradio access and offloading, location-based services, etc. While 5G networks will continue to observe order-of-magnitude-increase-per-generation growth, a strong emphasis has been made on supporting operational flexibility and new communication modes. These and other aspects of 5G networks are discussed next. Further details on cellular networks' evolution and expanded descriptions of 4G networks can be found in [33,36].

The Fifth Generation: The Software Generation
ITU's IMT-2020 specifications set the minimum downlink for 5G networks to be 20 Gbps. In response, various technologies have been introduced in Release 15 and beyond to make this requirement viable. For example, 5G New Radio (5G NR) allows access to the mmWave spectrum (24-100 GHz range) in addition to the original sub-6 GHz spectrum that is commonly shared with 4G networks [37]. This dual access allows for both range (for the sub-6 GHz spectrum) as well as speed and frequency reuse (for the mmWave spectrum). Evolutions of NR are discussed in Releases 16 and 17 (respectively in refs. [2,3]).
The 20 Gbps requirement will be a significant step towards an enhanced mobile broadband service (eMBB). This is one of three significant services expected in 2020 and beyond, with the other two being mMTC and URLLC. The expectation is that the eMBB will be introduced with the initial rollouts of 5G (i.e., based on Release 15), with the other two introduced in the later releases (i.e., Releases 16 and 17).
The emergence of mMTC came as a response to the increasing dependence on IoT applications and the IMT-2020 requirements on dense connectivity and low power consumption. Earlier instances (e.g., NB-IoT and LTE-M) have been defined in earlier releases (i.e., Releases 13 and 14), with NB-IoT addressing bursty IoT communications with low-to-no delay requirements and LTE-M focusing on delay-sensitive applications. With the introduction of 5G NR, substantial support will be made for further deployment models and service requirements. Coexistence measures (both among NB-IoT, LTE-M, and mMTC and between mMTC and eMBB) are made in Release 16.
The emphasis in URLLC is on supporting both high mobility (i.e., up to 500 km/h) and low latency. A central aim of URLLC is support for vehicle-to-everything (V2X) communications and intelligent transportation systems, including railway communications. However, URLLC is also directed at services such as AR/VR, remote monitoring, tactile Internet, smart grids, Industry 4.0, and Industrial IoT (IIoT).
Through a service view, the combined offering of eMBB, mMTC, and URLLC stresses the need for great flexibility on the part of the network, especially at the core. This need resulted in two critical actions. The first is a strong dependence on softwarization in 5G core; the second is introducing an analytics-based element in the network.
In 5G networks, softwarization essentially allows for easily instantiating services across the network in a manner that harmonizes resource allocation while maintaining stability [38]. To support this, a dependence on a two-tier cloud computing hierarchy (i.e., cloud-edge) allows for slicing the physical network resource towards different services, e.g., eMBB, mMTC, and URLLC. Variants on these services can also be realized-even ones that are geographically localized. This results in "campus solutions" (i.e., a localized 5G instance in an office, factory, residential, stadium, etc.) that renders a different service offering depending on the localized requirements [39]. In addition to this, softwarization will make it easier to satisfy IMT-2020 latency requirements through its reliance on edge servers.
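As a toy sketch of how a sliced physical resource might be apportioned across such services, the following uses a simple proportional policy; the weights and the policy itself are illustrative assumptions, not a 3GPP mechanism:

```python
def slice_bandwidth(total_mhz, demands):
    """Proportionally partition a physical bandwidth across slices.
    `demands` maps slice name -> requested share weight (illustrative)."""
    total_weight = sum(demands.values())
    return {name: total_mhz * w / total_weight for name, w in demands.items()}

# A 100 MHz physical allocation split across three service slices.
allocation = slice_bandwidth(100, {"eMBB": 6, "URLLC": 3, "mMTC": 1})
print(allocation)  # {'eMBB': 60.0, 'URLLC': 30.0, 'mMTC': 10.0}
```

A campus deployment could simply call the same partitioning with different weights, which is the essence of rendering a different service offering per localized requirement.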
The 5G core will include a network data analytics function (NWDAF) [19] to complement the increased dependence on resource slicing. This function will help manage the increasing network complexity, especially towards realizing autonomous policy-based (or intent-based) management.
5G networks will be rolled out in phases, with the initial phase exhibiting a change in the radio interface. This entails connecting 5G NR to the 4G network's EPC, and this initial stage is called the nonstandalone phase. In the standalone phase, 5G NR will be connected to the 5G core, and a dual-interface phone would be able to perform handover (or roaming) between the two networks. The rollout of the first phase (based on Release 15) is already taking place. Meanwhile, especially with the postponed timeline for Releases 17 and 18, certain aspects of 5G and its evolution continue to be under development.
Efforts on QoE in 5G networks continue to emerge and focus on extending the work on standardizing QoE mappings to network performance indicators. This standardization, however, can be problematic for two reasons: legacy and applicability.
The legacy aspect concerns transitioning from a QoS point of view in network management to a QoE one. The QoS view limited the basis for network management to performance criteria (e.g., flow or packet priority, delay, jitter, bandwidth, etc.). This view materialized in the DiffServ/IntServ evolution and other instances, e.g., IEEE 802.11e. However, these performance aspects were optimized in isolation from the eventual user perception of the service delivery. This isolation became further detrimental as multimedia began to dominate Internet access. Expanding the basis of management to include a QoE view was therefore necessary. This expansion was made long ago, but the tendency to handle QoE parameters in the same manner as QoS parameters remains.
The applicability aspect follows. For example, the work presented in [40] investigates the use of drones in cellular networks with the aim of "enhancing QoE". The work, however, focuses on enhancing video streaming performance. This objective is a fair QoS objective on its own, just not one that necessarily and sufficiently leads to enhancing QoE. The mixed QoS/QoE view persists in other areas, e.g., associating higher bps with increased QoE, hence automatically expecting better QoE as cellular networks evolve.

Expectations for QoE in 6G Networks
As aforementioned, considerable deliberations are already underway for the 6G network while the first phase of nonstandalone 5G is rolled out. In March 2020, for example, the ITU signaled that it had commenced research work on 6G networks and beyond, with the expectation that the earliest visions (or requirements) be specified in 2023. To be sure, an ITU focus group called "Focus Group on Technologies for Network 2030" commenced in mid-2018, and it takes a holistic view of telecommunications in 2030 and beyond, i.e., a view that is not limited to cellular environments.
Some have voiced concerns that it is too early (and too risky) to contemplate 6G [13], on the sound premise that the full potential of the extensive softwarization in 5G networks and other technologies is yet to be realized. There is also the fact that the industry's struggle with the initial deployments in 2020 might require time for recovery. Despite these concerns, contemplations on 6G are already arriving in an avalanche, and while a specific timeline for the development requirements and specification of 6G networks is yet to be formalized, a review of the industrial and research views of 6G networks reveals substantial commonalities.
However, before discussing QoE in 6G networks, a review of the enabling technologies and the expected services is needed. This review is provided in the next two sections. Meanwhile, further details on 5G networks can be found in [41,42].

Enabling Technologies of Sixth Generation Networks
In this section, we take a bottom-up approach in describing the enabling technologies of 6G services.

Physical Layer
In general, 6G networks are expected to deliver above ~100 Gbps over the air. Achieving this will require exceeding the capabilities of mmWave and venturing into the terahertz band, i.e., 0.1-10 THz [43]. In this range, the network will rely on wide-band allocations. Such access is readily considered for backhauling 6G networks. However, expecting terahertz communication to the handset will require overcoming considerable challenges that include channel integrity and antenna design.
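The push towards wide-band terahertz allocations can be motivated from the Shannon capacity, C = B log2(1 + SNR): bandwidth scales capacity linearly while SNR only contributes logarithmically. The bandwidths and SNR values below are hypothetical, chosen only to illustrate the orders of magnitude involved:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)

# A sub-6 GHz-style 100 MHz channel vs. hypothetical THz-band allocations.
print(f"0.1 GHz @ 20 dB: {shannon_capacity_gbps(0.1, 20):.2f} Gbps")
print(f"10 GHz  @ 10 dB: {shannon_capacity_gbps(10, 10):.1f} Gbps")
print(f"10 GHz  @ 30 dB: {shannon_capacity_gbps(10, 30):.0f} Gbps")
```

Even a generous 100 MHz channel falls far short of the ~100 Gbps target, whereas a 10 GHz terahertz allocation approaches it at realistic SNR levels, which is why the wide-band allocations above are considered essential.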
Massive and dense MIMO have a crucial role in 6G networks, especially if THz is enabled. As noted in [7], the use of specific technologies such as metamaterial-based antennae, reconfigurable intelligent surfaces (RISs), and orbital angular momentum communication would all allow for holistic resolution to the physical layer limitations and may act as the cornerstone technologies for the 6G radio interface.
RISs (also called intelligent reflective surfaces, IRSs) comprise the evolution of amplify-and-forward relays [44]. They introduce an agent of change into reflective surfaces to control the signal's quality at the receiver. Ideally, these surfaces (or lenses) have high resolution and accommodate prompt and optimal control. However, both resolution and reconfiguration parameters have upper limits, whether in fabrication or operation. It is expected that these surfaces will nevertheless allow for the gains needed for 6G's THz communication.
To realize further gains, the 6G radio interface is expected to allow for true full-duplex wireless [45]. Given recent advances in self-interference cancellation, full-duplex wireless is set to offer substantial gains in terms of spectral efficiency. It will also provide further advantages when combined with the recently emerging nonorthogonal multiple access [46]. However, cross-layer interactions must be made to realize the full potential of these technologies.

Infrastructure and Topology
Support for drone-based communications (unmanned aerial systems/vehicles) has already been introduced in 5G networks and its evolutions, i.e., Releases 15 through 17 (https://www.3gpp.org/uas-uav). This support allows for added flexibility in infrastructure and a possible means of enhanced spectral efficiency. In 6G networks, it appears viable that drones will further contribute to the computing and softwarization infrastructure [47].
As established earlier (in Release 14), satellite communications requirements have been set in response to the increasingly visible potential regarding expanded coverage, especially in mission-critical applications and specific industrial deployments. These requirements span outer space vehicles, i.e., LEO, MEO, and GEO satellites, and nonterrestrial networks. These last networks are typical cellular networks that use either spaceborne or airborne elements such as high-altitude platforms (HAPs) and unmanned aerial systems (UAS). The support for nonterrestrial elements is studied in Release 15 [48], while the study of satellite use is part of Release 16 [49].
A direct advantage of these additions is that they can expand network coverage to maritime networks reliably and sustainably. This expansion is particularly advantageous for the shipping and oil industries, where the cost of constant monitoring can be substantially reduced. Meanwhile, the notion of controlled variable-topology in the network dramatically benefits network operators. For example, the network structure can efficiently and effectively adapt to varying demands, be it planned (as in rallies, sports events, etc.) or emerging (accidents, natural phenomena, etc.). More critically, these connectivity options will enable providers to address the digital divide, allowing for more effective Internet access to areas that were either previously neglected or unreachable [50].
The adaptive infrastructure of 6G promises adaptive computing capabilities. Specifically, in addition to managing logical computing power in the network, the network can now redirect physical resources to areas where computing is geographically needed. In establishing this "edge-to-cloud continuum", the network's computing resources can be realigned (both logically and physically) depending on the existing demand [51]. This includes engaging computing resources in the user's context, e.g., in the VR visor, as part of the end-to-end resource alignment for applications.

Networking
Networking at both the link and networking layers in 6G is expected to exhibit great flexibility. As noted above, ITU-T expectations for 2030 go beyond a wider adoption of IPv6 (and relevant ossifications), and they expand to a more dynamic and adaptive communication [52]. The control-data split introduced by the softwarization in the Internet is undoubtedly a positive step in this direction.
There are other aspects to consider. For example, the advances in the physical layer and the topology entail more sophisticated link-layer management modes. On one end, there is the complexity of the connection possibilities at the physical layer that needs to be "structured" and scheduled. On the other, there is the need to interact with upper layers more closely (i.e., a stronger cross-layer coupling) to attain higher performance levels. For both, there is also the consideration of increased connectivity for IoT/MTC.
Similarly, changes are needed in layer 3 (L3) to handle the increased connectivity from both physical and logical entities, i.e., virtualized instances. Meanwhile, emphasis on reliability will have to extend to "high-precision networking", whereby low-to-zero tolerance is exercised in supporting service delivery requirements [53]. These extensions are inevitable if proper support is to be provided for URLLC services and their evolutions. There is also a need to go beyond what can be offered by efforts such as time-sensitive networking, especially as they were designed for traditional and wired operations.
Another inevitable expectation is complementing the control-data split with a knowledge plane [54,55]. This plane would include functionalities and procedures through which the Internet gathers its understanding and reasoning and shares it across the different domains. Realizing the knowledge plane requires efficiency and robustness in knowledge representation and sharing. The evolution of content distribution networks through information-centric networks allows for multimodal access to knowledge, namely through traditional IP addressing, geographical addressing, and content labeling, among others [56]. A relevant view of 6G networks is that of "semantic communications" [57].

Intelligence and Autonomy
The softwarization of 5G, coupled with the introduction of analytics (through NWDAF), points to a network where intelligence is more finely fused with all aspects of network operation. This has been previously envisioned (e.g., in [17,18]), whereby a hierarchical structure of intelligence that can address different operational tasks is introduced.
In 6G networks, distributed forms of learning, e.g., federated learning, seem to hold strong potential. This learning distributes the processing aspects of learning in a networked structure, i.e., parallelizing the computational requirement of learning across multiple nodes. The neural or Bayesian network would be mapped over the physical processors, either IoT devices or edge-computing cores [58,59].
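A minimal sketch of one such scheme, federated averaging, is given below for a toy scalar regression task; the model, the synthetic per-node data, and the learning rate are illustrative assumptions, not a proposal from the cited works:

```python
import random

def local_update(weight, data, lr=0.1):
    """One step of local training on a node's private data
    (here: gradient descent on a squared-error fit to scalar pairs)."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(global_w, node_datasets):
    """One FedAvg round: each node trains locally on its own data,
    then the coordinator averages the models (weighted by data size)."""
    updates = [(local_update(global_w, d), len(d)) for d in node_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three nodes, each holding private samples of the same line y = 2x.
random.seed(0)
nodes = [[(x, 2 * x) for x in (random.random() for _ in range(20))]
         for _ in range(3)]
w = 0.0
for _ in range(200):
    w = federated_round(w, nodes)
print(f"learned weight ~ {w:.2f}")  # converges towards 2.0
```

The key property is that raw samples never leave their node; only model parameters are exchanged, which is what makes the scheme attractive for IoT devices and edge-computing cores.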
The coupling between computing and intelligence here allows for another perspective. As will be discussed in the next section, certain services will rely on localized computing. This "split" or "fog" computing can be exploited to accelerate specific learning and decisions at the network peripherals [7]. Computing assignments at different levels of the intelligence hierarchy can then be made depending on the nature of the problem. For example, long-term, "deep-thought" processing can be made in the cloud, and real-time, reflex-like processing can be allocated to the fog tiers.

Network Management Consideration
From an operational perspective, one key advantage of combining network-wide softwarization and intelligence is the possibility of realizing intent-based networking (IBN). As the complexity of network management increases, scalable reconfiguration also increases in difficulty. Meanwhile, the relationship between configurations and outcomes becomes further obscured given the intricacy or interrelationship of various "degrees-of-freedom" or "operational choices" in the network. Operational points that are advantageous to the operator might then be overshadowed, which may be critical. IBN aims to overcome the complexity of managing these intricacies [60].
Specifically, an end-to-end intelligence framework with access to sufficient computing power can help explore and/or understand the configuration-to-outcome relationship and the intent-to-outcome pathway of actions. Based on these findings, an operator can specify the desired outcome from the network operation (e.g., in terms of KPIs), whether it is network-wide or specific to a campus deployment. In turn, the network would autonomously take care of verifying the intent's safety and the configuration's viability.
Here, "configuration" refers to more than updating OpenFlow tables. The expectation is that IBN extends its impact to various network aspects, ranging from radio types to topology and topology elements, and reaching service offerings and content. This overhauling view is also called "mandate-driven networking" [61] or "goal-driven communications" [62].
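The intent-to-configuration pipeline can be sketched as follows: the operator states an outcome as KPI targets, and the network checks whether a candidate configuration is predicted to satisfy the intent before applying it. The KPI names, the toy predictor, and the configuration parameters are all illustrative assumptions standing in for a learned configuration-to-outcome model.

```python
def predict_kpis(config):
    """Toy configuration-to-outcome model (stand-in for a learned model)."""
    return {
        "latency_ms": 50 / config["edge_nodes"],
        "throughput_mbps": 100 * config["bandwidth_ghz"],
    }

def satisfies_intent(intent, config):
    """Verify that predicted outcomes meet every target in the intent."""
    kpis = predict_kpis(config)
    return (kpis["latency_ms"] <= intent["max_latency_ms"]
            and kpis["throughput_mbps"] >= intent["min_throughput_mbps"])

# Operator-level intent: outcomes only, no configuration details
intent = {"max_latency_ms": 10, "min_throughput_mbps": 500}
candidates = [
    {"edge_nodes": 2, "bandwidth_ghz": 3},
    {"edge_nodes": 8, "bandwidth_ghz": 6},
]
viable = [c for c in candidates if satisfies_intent(intent, c)]
print(viable)  # only the second candidate meets both targets
```

The point of the sketch is the separation of concerns: the operator expresses *what* is wanted, while configuration search and verification remain inside the network.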

Other Considerations
Whether 6G will achieve true cognition and/or self-awareness at the network level remains an open question. It is more reasonable to expect considerable advances in network operation autonomy to be deployed by 2030. There are also aspects of intelligence and learning that continue to merit scrutiny, such as replicability and explainability. However, given the steadfast progress in realizing autonomous systems, advances in these areas can be projected over the next decade.
The notion of "self-aware networking" has been put forth as part of the more sophisticated expectations from IBN [60]. Realizing self-awareness will depend on two key achievements: a substantial simplification of autonomic operation requirements and a closer understanding of the achievable gains from self-awareness.
Finally, a crucial aspect not discussed in the development above is security. Given the expansive reach of 5G networks and their evolutions, numerous significant security enhancements have been made, as presented in ref. [62]. Towards 6G, it is expected that blockchain deployments will become more common, especially in services. Meanwhile, continuous advances in quantum computing indicate the viability of industrialized instances by 2030, especially in encryption. Notwithstanding this, the increasingly individualized nature of 6G services necessitates the realization of effective security and privacy measures.

Sixth Generation Services and Features
There are indicators that 6G will allow for service delivery in a manner that capitalizes on abundant bandwidth, responsive communication, and the capability to promptly process and respond to a rich and dynamic context [6-8]. Relative to current offerings, or even those of 5G networks, the extent of these features will be far-reaching. For example, augmented and virtual reality (AR and VR) services are already taking hold today on different gaming consoles. As these services evolve, a richer and more responsive context will be delivered to users, even with multiple simultaneous inputs being made.
Digital twinning, a core feature of Industry 4.0, must also be considered: a digitized replica of an actual environment, e.g., a factory or office space, is fully simulated for the user. The realization of this twinning has so far been impeded by two factors: computation and communication. With 6G networks availing a combined breakthrough in both, digital twinning now appears more viable.
Another expected service in 6G networks is tactile Internet, which will further the user experience with aspects such as weight, temperature, or speed. Such aspects are crucial in various contexts where the need for prompt remote control is present. These contexts include remote surgery, manufacturing, or navigation. This is an example of service enrichments that motivate the use of high-precision networking.
Extended reality (XR) and mixed reality (MR) are other forms indicative of service expectations. Through these, the user experience becomes dependent on personal interaction and interaction with the user's surroundings. This is particularly the case in MR, where physical objects can both obscure and impact virtualized objects. Cost-effective, high-resolution, and responsive holographs may be available by 2030, adding a further dimension to user-context enrichment. Support for V2X communication in 5G will play a significant role in the evolution of autonomous vehicles [63]. Moreover, 6G networks will further enable the rise of autonomy in other applications, especially in manufacturing and remote applications. These applications will depend heavily on computing and communication advances, especially in terms of distributed intelligence.
In addition, 6G networks may finally deliver on the notion of AI or machine learning (ML) as a service (AI/MLaaS). This entails the availability of platforms whereby intelligence tasks can be requested or exchanged. In turn, AI/ML platforms can utilize these various "experiences" and derive valuable insights relevant or applicable across different verticals.
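The request-and-exchange notion can be sketched as a minimal registry where providers publish models by capability and consumers dispatch tasks against them. The class, capability names, and toy model are hypothetical illustrations, not an AI/MLaaS standard.

```python
class MLaaSRegistry:
    """Toy marketplace: providers publish models, consumers request tasks."""

    def __init__(self):
        self._models = {}  # capability -> list of (provider, model)

    def publish(self, provider, capability, model):
        self._models.setdefault(capability, []).append((provider, model))

    def request(self, capability, payload):
        """Dispatch to the first registered model for a capability."""
        if capability not in self._models:
            raise LookupError(f"no model offers {capability!r}")
        provider, model = self._models[capability][0]
        return provider, model(payload)

registry = MLaaSRegistry()
# An illustrative provider offering a trivial anomaly-scoring model
registry.publish("vendor-a", "anomaly-score", lambda x: float(abs(x) > 3))
provider, score = registry.request("anomaly-score", 5.0)
print(provider, score)
```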
The mobile user experience in 6G networks will thus exhibit significant enhancements. In particular, (hyper-)individualized mobile experiences become viable. As discussed above, network operators will be able to direct network operation through intent-based or mandate-based networking [60]. The premise of campus deployments in 5G was tailoring network operations to the needs of enterprise consumers. In 6G, it can be envisioned that personalization and tailoring will occur at the individual consumer level. This individualization will be made possible through a ubiquitous intelligence that can promptly and effectively understand user needs and, more importantly, the user's impression of network delivery.

Quality of Experience in 6G Networks
6G services are, in essence, cyber-physical, immersive, and personalized. These characteristics stand in stark contrast to what is possible in either 4G or 5G networks.
5G networks are capable of a high level of synchrony, one where geographically separated musicians can seamlessly jam together [64]. As previously noted, more mission-critical low-latency applications are already supported, including communications in vehicular networks. Similarly, delay and synchrony requirements in IIoT and remote surgery applications allow for little to no tolerance [65].
6G networks also realize services where the user context is densely, reliably, and seamlessly connected in a real, virtual, or mixed environment. These networks will continue to allow for immediate or near-immediate interaction and will substantially surpass distinguishable performance; specifically, 6G may offer network performance beyond what can be appreciated given the limitations of human senses. Put differently, 5G and its evolution have made substantial gains in addressing QoS concerns [66].
QoE cannot be disassociated from QoS measures, and the state of the network does impact the user experience. However, what needs to be overcome is viewing the network as the main contributing factor in evaluating QoE. Innovations such as [67] improve the modeling of the QoE-QoS interactions but continue to limit QoE consideration to what can be measured and managed through the network. Studies such as the one offered in [68], while certainly more comprehensive, remain speculative and static.
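The kind of QoE-QoS modeling referred to here can be illustrated with an exponential mapping in the spirit of the IQX hypothesis, which relates a QoS impairment to QoE. The coefficients below are arbitrary assumptions for illustration, not fitted values; the point is precisely the limitation noted above, namely that such models only capture what the network can measure.

```python
import math

def qoe_from_loss(loss_pct, alpha=3.0, beta=0.7, gamma=1.5):
    """Map packet loss (%) to a MOS-like score on a 1-5 scale
    via an exponential (IQX-style) QoS-to-QoE relationship."""
    return gamma + alpha * math.exp(-beta * loss_pct)

for loss in (0.0, 1.0, 5.0):
    print(f"{loss:>4.1f}% loss -> MOS {qoe_from_loss(loss):.2f}")
```

Note that human and context impact factors (discussed later) appear nowhere in this mapping, which is exactly the gap a 6G QoE definition must close.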
The objective, therefore, is realizing a more prompt and comprehensive capture of QoE. Concurrently, it is necessary that QoE captures can be easily related to their impact factor(s) so that the network can respond effectively. In what follows, we elaborate on potential solutions for both these aspects.

Capturing QoE through Affect
A human's affect is more commonly referred to as the user's emotional state, i.e., an expression of the human's feelings or impressions. In day-to-day interactions, a human may discern others' affective states through various "tells" or social cues. These include facial expression, tone, language, gesture, and gait. These indicators may also extend to more subtle aspects, such as tolerance to sound, light, or action.
Recently, there has been a rise in the use of affective sensing or emotional AI in various contexts. In communications, instances such as the one in [69] exploit emotional sensing to enhance the mobile computing experience. Meanwhile, the authors in [70] utilized a proprietary library to estimate the user's MOS and to match the estimates with QoS measurement to enhance QoE estimates.
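The matching approach described above can be reduced to a sketch that fuses an affect-derived MOS estimate with a QoS-derived one. The fusion weight and the example values are illustrative assumptions; a deployed system would learn such weights per user and context.

```python
def fuse_mos(affect_mos, qos_mos, affect_weight=0.6):
    """Weighted fusion of two MOS estimates, clamped to the 1-5 scale."""
    fused = affect_weight * affect_mos + (1 - affect_weight) * qos_mos
    return max(1.0, min(5.0, fused))

# e.g., facial-expression analysis suggests mild annoyance (MOS ~2.5)
# while network-side QoS measurements alone would predict MOS ~4.0
print(fuse_mos(2.5, 4.0))
```

The discrepancy between the two inputs in the example is the interesting case: the network alone would report a good experience, while the affect channel reveals otherwise.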
These tools hold great promise in terms of automating the capture of the user's affective state. However, there are definite challenges that need to be addressed. For example, the notion of affect and affect discretization remains problematic and has largely relied on empirical consideration [71]. Standardization and cross-culture/context appropriation are similarly problematic [72]. There are also considerable ethical concerns in using these technologies that require careful consideration [73].
Despite these challenges, and considering the rising demand for explainable intelligence, it is expected that these challenges will be addressed over the next decade. This will avail capable affective sensing technologies for 6G networks.

Causal Inference
How is it possible to infer the cause of a change in the user's QoE? Moreover, what would be the best operational actions in response? As discussed above, user experience in 6G networks is predominantly a cyberphysical, multisensory experience. This means that the network usually delivers all or most of the user context. At the same time, many elements in the end-to-end view, possibly over many pathways, will come together in 6G to deliver the service.
Generally speaking, there are three classes of impact factors (IFs) that affect the QoE of service delivery [28,68,74]: human, context, and system. The first relates to what can be called the demographic composite that makes up the user, including age, gender, culture, personality, mood, and interest. Naturally, these are not static attributes, and they are bound to change over time or space. Context augments human IFs with an understanding of the user's location, surroundings, time, activity, space co-occupants, technical affluence, and economic capability. Finally, system IFs are concerned with the content, media configuration, network status, and device (or gear).
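The three IF classes can be organized into a simple data model for a QoE sample, as sketched below. The specific fields follow the text, but their types and the `QoESample` structure are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class HumanIF:
    age: int
    culture: str
    mood: str  # dynamic: bound to change over time or space

@dataclass
class ContextIF:
    location: str
    activity: str
    co_occupants: int

@dataclass
class SystemIF:
    content: str
    network_status: dict = field(default_factory=dict)
    device: str = "unknown"

@dataclass
class QoESample:
    """One QoE observation tied to its human, context, and system IFs."""
    mos: float
    human: HumanIF
    context: ContextIF
    system: SystemIF

sample = QoESample(
    mos=3.8,
    human=HumanIF(age=29, culture="n/a", mood="relaxed"),
    context=ContextIF(location="office", activity="XR meeting", co_occupants=2),
    system=SystemIF(content="volumetric video", network_status={"rtt_ms": 8}),
)
print(sample.system.network_status["rtt_ms"])
```

Keeping all three classes attached to every observation is what later makes cause attribution possible at all.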
Considering the projected intricacy of 6G networks and the vast array of service alternatives that can be delivered (be it at the user or the enterprise level), the need to examine the QoE's definition and usability becomes more apparent.
Consider a crudely reduced model of the types of components that are involved in delivering a user experience, namely human, context, network, service, content, and equipment (or gear). The 6G network operator is concerned with understanding the network's influence on the change in the user's QoE. Alternatively, the interest may be in the service or the content for the service provider. If the QoE measures can pinpoint the network as the reason, further exploration would be needed into the network elements.
A decomposition of the network may involve access, core, computing, intelligence, and security. Within each of these elements lies a possible cause: within computing, the cloud or the edge; within intelligence, the data, error modeling, or optimization, etc. The cause is not necessarily singular, and the interaction of several factors may have produced the observed QoE.
Addressing this complexity calls for causal inference engines. The reference here is not to correlation engines, which, with human support, can help identify cause and effect. Instead, it is to autonomous or algorithmized causal inference engines as described in [75]. These engines can identify the reason behind the user's QoE and trace the cause to manageable impact factors.
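The tracing idea, though not the inference itself, can be sketched as a walk over a hand-crafted causal graph that follows the decomposition above, returning the deepest anomalous elements behind a QoE change. The graph and the set of flagged anomalies are illustrative assumptions; a real engine would learn the graph and quantify effects, as in the algorithmized approaches of [75].

```python
# Hand-crafted causal structure following the decomposition in the text
CAUSAL_PARENTS = {
    "qoe": ["network", "service", "content"],
    "network": ["access", "core", "computing"],
    "computing": ["cloud", "edge"],
}

def trace_causes(node, anomalous):
    """Depth-first walk: return the deepest anomalous ancestors of `node`."""
    causes = []
    for parent in CAUSAL_PARENTS.get(node, []):
        deeper = trace_causes(parent, anomalous)
        if deeper:
            causes.extend(deeper)
        elif parent in anomalous:
            causes.append(parent)
    return causes

# Anomalies flagged by monitoring (illustrative): the QoE drop traces past
# "network" and "computing" down to the edge tier.
anomalous = {"network", "computing", "edge"}
print(trace_causes("qoe", anomalous))
```

As the text notes, the cause is not necessarily singular; with several disjoint anomalies flagged, the walk would return multiple manageable impact factors.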
The research on these engines is still in its early stages. Still, considering the expected computing and intelligence capabilities in 6G, it can be predicted that they will play a significant role in network management.

Relevant Considerations
In the context of networks, security is typically discussed with respect to the data communicated and stored. In several projected 6G services, mostly where the experience depends on a form of sensory immersion, the network's and service's access to user data is considerably expanded. Measures for the secure and private handling of this data will be necessary and will need to be implemented in a way that does not impact the user experience.
The immersion aspect of the services further entails the consideration of user safety, both physically and mentally. Cyberphysical or tactile interaction implies physical involvement in the experience, and the notion of safe robotics or safe AI would need to have matured by then. Meanwhile, consideration for the mental state of the users becomes an added requirement.
Specific steps are already being taken towards these objectives. The IEEE P7000 effort, for example, seeks to investigate the means of ethical design, especially where AI is considered. Other examples include the subtle interfaces in Instagram's "You're all caught up" function and Apple's "Screen Time", both of which show an increasing awareness of technology's addictive effect.

Conclusions
Considerable deliberations are already underway regarding the technologies and services of 6G networks. Some have even deliberated what beyond-6G networks will look like, contemplating the realization and application of technologies such as brain-type communications and nano- and bio-nano interactions. This is occurring at a time when 5G networks are only beginning to be deployed. Despite the quality and volume of this deliberation, QoE aspects have yet to receive due consideration. This work's objective was to allude to QoE's subtle aspects in 6G networks and the limitations of current definitions. It also aims at offering a new basis for a more useful definition in the networks to come. Future work will expand on the emerging resolutions to the noted challenges, especially within causal inference.