Search Results (60)

Search Parameters:
Keywords = QoE metrics

19 pages, 1109 KiB  
Article
User Preference-Based Dynamic Optimization of Quality of Experience for Adaptive Video Streaming
by Zixuan Feng, Yazhi Liu and Hao Zhang
Electronics 2025, 14(15), 3103; https://doi.org/10.3390/electronics14153103 - 4 Aug 2025
Abstract
With the rapid development of video streaming services, adaptive bitrate (ABR) algorithms have become a core technology for ensuring optimal viewing experiences. Traditional ABR strategies, predominantly rule-based or reinforcement learning-driven, typically employ uniform quality assessment metrics that overlook users’ subjective preference differences regarding factors such as video quality and stalling. To address this limitation, this paper proposes an adaptive video bitrate selection system that integrates preference modeling with reinforcement learning. By incorporating a preference learning module, the system models and scores user viewing trajectories, using these scores to replace conventional rewards and guide the training of the Proximal Policy Optimization (PPO) algorithm, thereby achieving policy optimization that better aligns with users’ perceived experiences. Simulation results on DASH network bandwidth traces demonstrate that the proposed optimization method improves overall Quality of Experience (QoE) by over 9% compared to other mainstream algorithms. Full article
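For readers who want to see the core idea in code, the sketch below shows how a learned preference score can stand in for a hand-crafted QoE reward at the end of a viewing trajectory. It is a minimal Python illustration under assumed feature names (bitrate, rebuffer, switch) and fixed placeholder weights; it is not the authors' preference module or PPO implementation.

```python
import numpy as np

# Hypothetical per-chunk features of a viewing trajectory; the names are
# illustrative, not taken from the paper.
def trajectory_features(chunks):
    """chunks: list of dicts with 'bitrate', 'rebuffer', 'switch' entries."""
    return np.array([
        np.mean([c["bitrate"] for c in chunks]),     # average quality
        np.sum([c["rebuffer"] for c in chunks]),      # total stalling
        np.sum([abs(c["switch"]) for c in chunks]),   # quality instability
    ])

class PreferenceModel:
    """Scores a trajectory according to per-user weights. A real preference
    module would be trained, e.g. from pairwise comparisons; here the
    weights are fixed placeholders."""
    def __init__(self, weights):
        self.w = np.asarray(weights, dtype=float)

    def score(self, chunks):
        return float(self.w @ trajectory_features(chunks))

def preference_reward(pref_model, chunks):
    # The preference score replaces the hand-crafted QoE reward that a
    # PPO agent would otherwise receive for the trajectory.
    return pref_model.score(chunks)

if __name__ == "__main__":
    user_prefers_smoothness = PreferenceModel([0.5, -4.0, -2.0])
    episode = [
        {"bitrate": 3.0, "rebuffer": 0.0, "switch": 0},
        {"bitrate": 1.5, "rebuffer": 0.8, "switch": -1},
    ]
    print("reward fed to the policy update:",
          preference_reward(user_prefers_smoothness, episode))
```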

18 pages, 7391 KiB  
Article
Reliable QoE Prediction in IMVCAs Using an LMM-Based Agent
by Michael Sidorov, Tamir Berger, Jonathan Sterenson, Raz Birman and Ofer Hadar
Sensors 2025, 25(14), 4450; https://doi.org/10.3390/s25144450 - 17 Jul 2025
Viewed by 282
Abstract
Face-to-face interaction is one of the most natural forms of human communication. Unsurprisingly, Video Conferencing (VC) Applications have experienced a significant rise in demand over the past decade. With the widespread availability of cellular devices equipped with high-resolution cameras, Instant Messaging Video Call Applications (IMVCAs) now constitute a substantial portion of VC communications. Given the multitude of IMVCA options, maintaining a high Quality of Experience (QoE) is critical. While content providers can measure QoE directly through end-to-end connections, Internet Service Providers (ISPs) must infer QoE indirectly from network traffic—a non-trivial task, especially when most traffic is encrypted. In this paper, we analyze a large dataset collected from WhatsApp IMVCA, comprising over 25,000 s of VC sessions. We apply four Machine Learning (ML) algorithms and a Large Multimodal Model (LMM)-based agent, achieving mean errors of 4.61%, 5.36%, and 13.24% for three popular QoE metrics: BRISQUE, PIQE, and FPS, respectively. Full article
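The sketch below illustrates the general recipe of predicting a QoE metric from flow-level traffic features with an off-the-shelf regressor. The feature names, the synthetic data, and the choice of a random forest are assumptions for illustration; the paper's actual feature set, ML models, and LMM-based agent are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Synthetic stand-in for per-second features extracted from an encrypted
# video-call flow (throughput, packet rate, mean packet size, jitter).
# Real features would come from packet captures.
n = 5000
X = np.column_stack([
    rng.uniform(0.1, 4.0, n),    # throughput (Mbps)
    rng.integers(50, 600, n),    # packets per second
    rng.uniform(200, 1300, n),   # mean packet size (bytes)
    rng.uniform(0, 30, n),       # inter-arrival jitter (ms)
])
# Synthetic FPS label loosely tied to throughput, only to make the demo run.
y = np.clip(6 * X[:, 0] + rng.normal(0, 2, n), 5, 30)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("FPS mean absolute percentage error:",
      round(100 * mean_absolute_percentage_error(y_te, model.predict(X_te)), 2), "%")
```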

24 pages, 19576 KiB  
Article
Evaluating HAS and Low-Latency Streaming Algorithms for Enhanced QoE
by Syed Uddin, Michał Grega, Mikołaj Leszczuk and Waqas ur Rahman
Electronics 2025, 14(13), 2587; https://doi.org/10.3390/electronics14132587 - 26 Jun 2025
Viewed by 1037
Abstract
The demand for multimedia traffic over the Internet is growing exponentially. HTTP adaptive streaming (HAS) is the leading system for delivering high-quality video to the end user. The adaptive bitrate (ABR) algorithms running on the HTTP client select the highest feasible video quality by adjusting the quality according to the fluctuating network conditions. Recently, low-latency ABR algorithms have been introduced to reduce the end-to-end latency commonly experienced in HAS. However, a comprehensive study of the low-latency algorithms remains limited. This paper investigates the effectiveness of low-latency streaming algorithms in maintaining a high quality of experience (QoE) while minimizing playback delay. We evaluate these algorithms in the context of both Dynamic Adaptive Streaming over HTTP (DASH) and the Common Media Application Format (CMAF), with a particular focus on the impact of chunked encoding and transfer mechanisms on the QoE. We perform both objective and subjective evaluations of low-latency algorithms and compare their performance with traditional DASH-based ABR algorithms across multiple QoE metrics, various network conditions, and diverse content types. The results demonstrate that low-latency algorithms consistently deliver high video quality across various content types and network conditions, whereas traditional ABR algorithms exhibit performance variability under fluctuating network conditions and diverse content characteristics. Although traditional ABR algorithms download higher-quality segments in stable network environments, their effectiveness declines significantly under unstable conditions. Furthermore, the low-latency algorithms maintained a high user experience regardless of segment duration. In contrast, the performance of traditional algorithms varied significantly with changes in segment duration. In summary, the results underscore that no single algorithm consistently achieves optimal performance across all experimental conditions. Performance varies depending on network stability, content characteristics, and segment duration, highlighting the need for adaptive strategies that can dynamically respond to varying streaming environments. Full article
(This article belongs to the Special Issue Video Streaming Service Solutions)
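As a concrete reference point, the function below implements a commonly used additive QoE model for comparing ABR algorithms: per-segment bitrate utility minus penalties for rebuffering and quality switching. The weights mu and tau are illustrative defaults, and the exact QoE metrics used in the paper may differ.

```python
def session_qoe(bitrates, rebuffer_times, mu=4.3, tau=1.0):
    """Additive QoE commonly used to compare ABR algorithms: the sum of
    per-segment bitrate utility, minus a rebuffering penalty (weight mu)
    and a quality-switching penalty (weight tau).
    bitrates: per-segment bitrates in Mbps; rebuffer_times: seconds per segment."""
    utility = sum(bitrates)
    rebuffer_penalty = mu * sum(rebuffer_times)
    smoothness_penalty = tau * sum(
        abs(b2 - b1) for b1, b2 in zip(bitrates, bitrates[1:]))
    return utility - rebuffer_penalty - smoothness_penalty

# Example: a policy that avoids stalls can score higher even at a lower
# average bitrate.
print(session_qoe([3.0, 3.0, 1.5, 3.0], [0.0, 0.0, 0.0, 0.0]))
print(session_qoe([4.5, 4.5, 4.5, 4.5], [0.0, 3.0, 0.0, 0.0]))
```

With these illustrative weights, the three-second stall outweighs the higher average bitrate of the second session.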

27 pages, 10314 KiB  
Article
Immersive Teleoperation via Collaborative Device-Agnostic Interfaces for Smart Haptics: A Study on Operational Efficiency and Cognitive Overflow for Industrial Assistive Applications
by Fernando Hernandez-Gobertti, Ivan D. Kudyk, Raul Lozano, Giang T. Nguyen and David Gomez-Barquero
Sensors 2025, 25(13), 3993; https://doi.org/10.3390/s25133993 - 26 Jun 2025
Viewed by 482
Abstract
This study presents a novel investigation into immersive teleoperation systems using collaborative, device-agnostic interfaces for advancing smart haptics in industrial assistive applications. The research focuses on evaluating the quality of experience (QoE) of users interacting with a teleoperation system comprising a local robotic arm, a robot gripper, and heterogeneous remote tracking and haptic feedback devices. By employing a modular device-agnostic framework, the system supports flexible configurations, including one-user-one-equipment (1U-1E), one-user-multiple-equipment (1U-ME), and multiple-users-multiple-equipment (MU-ME) scenarios. The experimental set-up involves participants manipulating predefined objects and placing them into designated baskets by following specified 3D trajectories. Performance is measured using objective QoE metrics, including temporal efficiency (time required to complete the task) and spatial accuracy (trajectory similarity to the predefined path). In addition, subjective QoE metrics are assessed through detailed surveys, capturing user perceptions of presence, engagement, control, sensory integration, and cognitive load. To ensure flexibility and scalability, the system integrates various haptic configurations, including (1) a Touch kinaesthetic device for precision tracking and grounded haptic feedback, (2) a DualSense tactile joystick as both a tracker and mobile haptic device, (3) a bHaptics DK2 vibrotactile glove with a camera tracker, and (4) a SenseGlove Nova force-feedback glove with VIVE trackers. The modular approach enables comparative analysis of how different device configurations influence user performance and experience. The results indicate that the objective QoE metrics varied significantly across device configurations, with the Touch and SenseGlove Nova set-ups providing the highest trajectory similarity and temporal efficiency. Subjective assessments revealed a strong correlation between presence and sensory integration, with users reporting higher engagement and control in scenarios utilizing force feedback mechanisms. Cognitive load varied across the set-ups, with more complex configurations (e.g., 1U-ME) requiring longer adaptation periods. This study contributes to the field by demonstrating the feasibility of a device-agnostic teleoperation framework for immersive industrial applications. It underscores the critical interplay between objective task performance and subjective user experience, providing actionable insights into the design of next-generation teleoperation systems. Full article
(This article belongs to the Special Issue Recent Development of Flexible Tactile Sensors and Their Applications)
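A minimal sketch of the two objective QoE metrics described above, computed under simple assumed definitions (spatial accuracy as mean distance to the reference path, temporal efficiency as a ratio of nominal to actual task time); the authors' exact formulations may differ.

```python
import numpy as np

def trajectory_similarity(executed, reference):
    """Mean distance from each executed point to its nearest reference
    point, in the reference trajectory's units (lower is better)."""
    executed = np.asarray(executed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    dists = np.linalg.norm(executed[:, None, :] - reference[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

def temporal_efficiency(completion_time_s, nominal_time_s):
    """Ratio of the nominal task time to the actual completion time
    (1.0 means the task finished in the expected time)."""
    return nominal_time_s / completion_time_s

reference_path = [[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0.05]]
executed_path = [[0, 0.01, 0], [0.11, 0.02, 0], [0.21, 0, 0.04]]
print(trajectory_similarity(executed_path, reference_path))
print(temporal_efficiency(completion_time_s=48.0, nominal_time_s=40.0))
```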

17 pages, 474 KiB  
Article
User Experience-Oriented Content Caching for Low Earth Orbit Satellite-Enabled Mobile Edge Computing Networks
by Jianhua He, Youhan Zhao, Yonghua Ma and Qiang Wang
Electronics 2025, 14(12), 2413; https://doi.org/10.3390/electronics14122413 - 13 Jun 2025
Viewed by 290
Abstract
In this paper, we investigate a low Earth orbit (LEO) satellite-enabled mobile edge computing (MEC) network, where multiple cache-enabled LEO satellites are deployed to address heterogeneous content requests from ground users. To evaluate the network's capability to meet user demands, we adopt the average quality of experience (QoE) of the users as the performance metric, defined based on the effective transmission rate under communication interference. Our analysis reveals that the average QoE is determined by the content caching decisions at the satellites, thereby allowing us to formulate an average QoE maximization problem, subject to practical constraints on the satellite caching capacity. To tackle this NP-hard problem, we design a two-stage content caching algorithm that combines divide-and-conquer and greedy policies for an efficient solution. The numerical results validate the effectiveness of the proposed approach. Compared with several benchmark schemes, our algorithm achieves notable improvements in terms of the average QoE while significantly reducing caching costs, particularly under resource-constrained satellite settings. Full article
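The greedy stage of such a caching scheme can be sketched compactly: repeatedly place the content with the largest remaining QoE gain on a satellite that still has cache capacity. This is an illustrative simplification; the paper's divide-and-conquer stage and its interference-aware QoE model are not reproduced here.

```python
def greedy_caching(gain, capacity):
    """gain[s][c]: precomputed QoE gain if satellite s caches content c;
    capacity[s]: number of items satellite s can hold.
    Returns a dict mapping each satellite to its set of cached contents."""
    placement = {s: set() for s in gain}
    candidates = [(g, s, c) for s, row in gain.items() for c, g in row.items()]
    candidates.sort(reverse=True)                      # largest gain first
    for g, s, c in candidates:
        if len(placement[s]) < capacity[s] and c not in placement[s]:
            placement[s].add(c)
    return placement

# Two satellites, three contents; the gains are illustrative numbers.
gain = {
    "sat0": {"A": 5.0, "B": 2.0, "C": 1.0},
    "sat1": {"A": 1.5, "B": 4.0, "C": 3.5},
}
print(greedy_caching(gain, capacity={"sat0": 1, "sat1": 2}))
```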

17 pages, 1444 KiB  
Article
Adaptive Slotframe Allocation with QoS and Energy Optimization in 6TiSCH for Industrial IoT Applications
by Nilam Pradhan, Bharat S. Chaudhari and Prasad D. Khandekar
Telecom 2025, 6(2), 41; https://doi.org/10.3390/telecom6020041 - 10 Jun 2025
Viewed by 520
Abstract
Industry 4.0 has transformed manufacturing and automation by integrating cyber–physical systems with the Industrial Internet of Things (IIoT) for real-time monitoring, intelligent control, and data-driven decision making. The IIoT increasingly relies on IEEE 802.15.4e Time-Slotted Channel Hopping (TSCH) to achieve reliable, low-latency, and energy-efficient industrial communications. The 6TiSCH protocol stack integrates scheduling and routing to optimize transmissions for resource-constrained devices, enhancing Quality of Service (QoS) in IIoT deployments. This paper proposes an innovative adaptive and cross-layer slotframe allocation technique for 6TiSCH networks, dynamically scheduling cells based on node hop distance, queue backlog, predicted traffic load, and link quality metrics. By dynamically adapting to these parameters, the proposed method significantly improves key QoS metrics, including end-to-end latency, packet delivery ratio, and network lifetime. The mechanism integrates real-time queue backlog monitoring, link performance analysis, and energy harvesting awareness to optimize cell scheduling decisions proactively. The results demonstrate that the proposed strategy reduces end-to-end latency by up to 32%, enhances PDR by up to 27%, and extends network lifetime by up to 10% compared to state-of-the-art adaptive scheduling solutions. Full article
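To make the adaptive-allocation idea concrete, the sketch below derives a node's per-slotframe cell budget from hop distance, queue backlog, predicted traffic, and link quality. The functional form and coefficients are illustrative assumptions, not the scheduling rule proposed in the paper.

```python
import math

def cells_for_node(hops, queue_backlog, predicted_pkts, link_prr,
                   min_cells=1, max_cells=16):
    """Estimate how many TX cells to allocate to a node in the next
    slotframe. link_prr is the measured packet reception ratio (0..1];
    dividing by it over-provisions for expected retransmissions."""
    demand = queue_backlog + predicted_pkts
    # Nodes closer to the root forward more traffic, so scale demand up.
    forwarding_factor = 1.0 + 1.0 / max(hops, 1)
    needed = math.ceil(demand * forwarding_factor / max(link_prr, 0.1))
    return max(min_cells, min(max_cells, needed))

print(cells_for_node(hops=1, queue_backlog=3, predicted_pkts=4, link_prr=0.8))
print(cells_for_node(hops=4, queue_backlog=1, predicted_pkts=1, link_prr=0.95))
```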

35 pages, 5812 KiB  
Article
A Chemistry-Based Optimization Algorithm for Quality of Service-Aware Multi-Cloud Service Compositions
by Mona Aldakheel and Heba Kurdi
Mathematics 2025, 13(8), 1351; https://doi.org/10.3390/math13081351 - 21 Apr 2025
Cited by 1 | Viewed by 472
Abstract
The increasing complexity of cloud service composition demands innovative approaches that can efficiently optimize both functional requirements and quality of service (QoS) parameters. While several methods exist, they struggle to simultaneously minimize the number of combined clouds, examined services, and execution time while maintaining a high QoS. The novelty of this paper is the chemistry-based approach (CA), which draws inspiration from the periodic table's organizational principles and electron shell theory to systematically reduce the complexity associated with service composition. Just as chemical elements are organized in the periodic table and electrons occupy atomic shells based on energy levels, the proposed approach organizes cloud services in hierarchical structures based on their cloud number, composition frequencies, cloud quality, and QoS levels. By mapping chemical principles to cloud service attributes (where service quality levels correspond to electron shells and service combinations mirror molecular bonds), an efficient framework for service composition is created that simultaneously addresses multiple objectives in QoS, NC, NEC, NES, and execution time. The experimental results demonstrated significant improvements over existing methods, such as Genetic Algorithms (GAs), Simulated Annealing (SA), and Tabu Search (TS), across multiple performance metrics: reductions of 14–33% in combined clouds, 20–85% in examined clouds, 74–98% in examined services, and 10–99% in execution time, while fitness levels improved by 1–14% compared to the benchmarks. These results validate the proposed approach's effectiveness in optimizing service composition while minimizing computational overhead in multi-cloud environments. Full article
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications, 2nd Edition)

21 pages, 783 KiB  
Article
Robust Beamfocusing for Secure NFC with Imperfect CSI
by Weijian Chen, Zhiqiang Wei and Zai Yang
Sensors 2025, 25(4), 1240; https://doi.org/10.3390/s25041240 - 18 Feb 2025
Viewed by 780
Abstract
In this paper, we consider the physical layer security (PLS) problem between two nodes, i.e., a transmitter (Alice) and a receiver (Bob), in the presence of an eavesdropper (Eve) in a near-field communication (NFC) system. Notably, massive multiple-input multiple-output (MIMO) arrays significantly increase the array aperture, thereby rendering eavesdroppers more inclined to lurk near the transmission end. This situation necessitates using near-field channel models to more accurately describe channel characteristics. We consider two schemes with imperfect channel state information (CSI). The first scheme involves a conventional multiple-input multiple-output multiple-antenna eavesdropper (MIMOME) setup, where Alice simultaneously transmits the information signal and artificial noise (AN). In the second scheme, Bob operates in a full-duplex (FD) mode, with Alice transmitting the information signal while Bob emits AN. We then jointly design the beamforming and AN vectors to degrade the received signal quality at Eve, based on the signal-to-interference-plus-noise ratio (SINR) of each node. To tackle the power minimization problem, we propose an iterative algorithm that includes an additional constraint to ensure adherence to specified quality-of-service (QoS) metrics. Additionally, we decompose the robust optimization problem of the two schemes into two sub-problems, one of which can be solved using generalized Rayleigh quotient methods, while the other can be addressed through semi-definite programming (SDP). Finally, our simulation results confirm the viability of the proposed approach and demonstrate the effectiveness of the protection zone for NFC systems operating with imperfect CSI. Full article
(This article belongs to the Special Issue Secure Communication for Next-Generation Wireless Networks)
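A generic statement of the robust power-minimization problem described above is sketched below in standard notation (w: information beamforming vector, v: AN vector, gamma_B and gamma_E: QoS/SINR thresholds, epsilon: CSI error bound). The symbols and constraint form are illustrative rather than the paper's exact formulation.

```latex
\begin{align}
\min_{\mathbf{w},\,\mathbf{v}} \quad & \|\mathbf{w}\|^{2} + \|\mathbf{v}\|^{2} \\
\text{s.t.} \quad & \mathrm{SINR}_{\mathrm{Bob}}(\mathbf{w},\mathbf{v}) \ge \gamma_{\mathrm{B}}, \quad
\mathrm{SINR}_{\mathrm{Eve}}(\mathbf{w},\mathbf{v}) \le \gamma_{\mathrm{E}} \\
& \text{for all CSI errors } \|\Delta\mathbf{h}\| \le \epsilon .
\end{align}
```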

18 pages, 4103 KiB  
Article
Content-Adaptive Bitrate Ladder Estimation in High-Efficiency Video Coding Utilizing Spatiotemporal Resolutions
by Jelena Šuljug and Snježana Rimac-Drlje
Electronics 2024, 13(20), 4049; https://doi.org/10.3390/electronics13204049 - 15 Oct 2024
Viewed by 1251
Abstract
The constant increase in multimedia Internet traffic in the form of video streaming requires new solutions for efficient video coding to save bandwidth and network resources. HTTP adaptive streaming (HAS), the most widely used solution for video streaming, allows the client to adaptively select the bitrate according to the transmission conditions. For this purpose, multiple representations of the same video content are generated on the video server, containing video sequences encoded at different bitrates, with the resolution adjusted to achieve the best Quality of Experience (QoE). This set of bitrate–resolution pairs is called a bitrate ladder. In addition to the traditional one-size-fits-all scheme for the bitrate ladder, context-aware solutions have recently been proposed that enable optimum bitrate–resolution pairs for video sequences of different complexity. However, these solutions use only spatial resolution for optimization, while the selection of the optimal combination of spatial and temporal resolution for a given bitrate has not been sufficiently investigated. This paper proposes bitrate ladder optimization that considers the spatiotemporal features of video sequences and the use of the optimal spatial and temporal resolution for a given video content complexity. Optimization along two dimensions of resolution significantly increases the complexity of the problem, and exhaustively encoding each video sequence at all spatial and temporal resolutions over a wide range of bitrates is not feasible in real time. To reduce this complexity, we propose data augmentation using a neural network (NN)-based model. To train the NN model, we used seven video sequences of different content complexity, encoded with the HEVC encoder at five different spatial resolutions (SR) up to 4K. All video sequences were also encoded at four frame rates up to 120 fps, representing different temporal resolutions (TR). The Structural Similarity Index Measure (SSIM) is used as an objective video quality metric. After data augmentation, we propose NN models that estimate the optimal TR and bitrate values as switching points to a higher SR. These results can be further used as input parameters for constructing a bitrate ladder for video sequences of a given complexity. Full article
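Once quality measurements (or NN-estimated values) are available for candidate spatial and temporal resolutions, one rung of a bitrate ladder can be chosen by picking the pair with the best quality at the target bitrate. The sketch below assumes a small table of SSIM measurements with illustrative numbers; it does not reproduce the paper's NN-based estimation.

```python
def best_rung(measurements, bitrate_kbps):
    """measurements: list of dicts with 'sr' (e.g. '1080p'), 'tr' (fps),
    'bitrate' (kbps), and 'ssim' for pre-encoded test points.
    Returns the (sr, tr) pair with the highest SSIM among entries whose
    bitrate does not exceed the target, i.e. one rung of a bitrate ladder."""
    feasible = [m for m in measurements if m["bitrate"] <= bitrate_kbps]
    best = max(feasible, key=lambda m: m["ssim"])
    return best["sr"], best["tr"]

points = [
    {"sr": "720p",  "tr": 30, "bitrate": 1500, "ssim": 0.955},
    {"sr": "1080p", "tr": 30, "bitrate": 3000, "ssim": 0.962},
    {"sr": "1080p", "tr": 60, "bitrate": 3200, "ssim": 0.958},
    {"sr": "2160p", "tr": 60, "bitrate": 8000, "ssim": 0.975},
]
print(best_rung(points, 3500))   # where switching to 1080p pays off
```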

24 pages, 918 KiB  
Article
Quality of Service-Aware Multi-Objective Enhanced Differential Evolution Optimization for Time Slotted Channel Hopping Scheduling in Heterogeneous Internet of Things Sensor Networks
by Aida Vatankhah and Ramiro Liscano
Sensors 2024, 24(18), 5987; https://doi.org/10.3390/s24185987 - 15 Sep 2024
Viewed by 1081
Abstract
The emergence of the Internet of Things (IoT) has attracted significant attention in industrial environments. These applications necessitate meeting stringent latency and reliability standards. To address this, the IEEE 802.15.4e standard introduces a novel Medium Access Control (MAC) protocol called Time Slotted Channel Hopping (TSCH). Designing a centralized scheduling system that simultaneously achieves the required Quality of Service (QoS) is challenging due to the multi-objective optimization nature of the problem. This paper introduces a novel optimization algorithm, QoS-aware Multi-objective enhanced Differential Evolution optimization (QMDE), designed to handle QoS metrics such as delay and packet loss across multiple services in heterogeneous networks while also achieving the anticipated service throughput. Through co-simulation between TSCH-SIM and Matlab R2023a, we conducted multiple simulations across diverse sensor network topologies and industrial QoS scenarios. The evaluation results illustrate that an optimal schedule generated by QMDE can effectively fulfill the QoS requirements of closed-loop supervisory control and condition monitoring industrial services in sensor networks of 16 to 100 nodes. Through extensive simulations and comparative evaluations against the Traffic-Aware Scheduling Algorithm (TASA), this study reveals the superior performance of QMDE, achieving significant enhancements in both Packet Delivery Ratio (PDR) and delay metrics. Full article
(This article belongs to the Special Issue Advanced Applications of WSNs and the IoT)

19 pages, 14210 KiB  
Article
Video Quality Modelling—Comparison of the Classical and Machine Learning Techniques
by Janusz Klink, Michał Łuczyński and Stefan Brachmański
Appl. Sci. 2024, 14(16), 7029; https://doi.org/10.3390/app14167029 - 10 Aug 2024
Viewed by 1241
Abstract
The classical objective methods of video quality assessment used so far have advantages, such as low cost, but also notable disadvantages. The need to eliminate these shortcomings drives the search for ever better solutions. This article proposes a video quality assessment method based on machine learning using a linear regression model. A set of objective quality assessment metrics was used to train the model. The results show that video quality prediction based on a machine learning model gives better results than objective assessment based on individual metrics. The proposed model showed a strong correlation with the subjective user assessments, as well as a good fit of the regression function to the empirical data. It extends and improves the efficiency of the classical objective quality assessment methods used so far. The solution presented here will allow for a more accurate prediction of the video quality perceived by viewers, based on an assessment carried out using a much cheaper, objective method. Full article
(This article belongs to the Special Issue AI for Multimedia Information Processing)
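A minimal sketch of this modelling approach: fit a linear regression that maps several objective metrics to a subjective score. The metric names, synthetic data, and MOS scale are assumptions for illustration only, not the paper's dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Illustrative objective metrics per test sequence (PSNR, SSIM, a VMAF-like
# score) and a synthetic subjective MOS that they jointly explain.
n = 200
X = np.column_stack([
    rng.uniform(28, 45, n),      # PSNR (dB)
    rng.uniform(0.85, 0.99, n),  # SSIM
    rng.uniform(40, 95, n),      # VMAF-like score
])
mos = 1 + 0.03 * (X[:, 0] - 28) + 2.0 * (X[:, 1] - 0.85) \
        + 0.03 * (X[:, 2] - 40) + rng.normal(0, 0.2, n)
mos = np.clip(mos, 1, 5)

X_tr, X_te, y_tr, y_te = train_test_split(X, mos, test_size=0.25, random_state=1)
model = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out sequences:", round(model.score(X_te, y_te), 3))
print("learned weights:", np.round(model.coef_, 3))
```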

18 pages, 3164 KiB  
Article
PixRevive: Latent Feature Diffusion Model for Compressed Video Quality Enhancement
by Weiran Wang, Minge Jing, Yibo Fan and Wei Weng
Sensors 2024, 24(6), 1907; https://doi.org/10.3390/s24061907 - 16 Mar 2024
Cited by 2 | Viewed by 2674
Abstract
In recent years, the rapid prevalence of high-definition video in Internet of Things (IoT) systems has been directly facilitated by advances in imaging sensor technology. To adapt to limited uplink bandwidth, most media platforms opt to compress videos into bitrate-constrained streams for transmission. However, this compression often leads to significant texture loss and artifacts, which severely degrade the Quality of Experience (QoE). We propose a latent feature diffusion model (LFDM) for compressed video quality enhancement, which comprises a compact edge latent feature prior network (ELPN) and a conditional noise prediction network (CNPN). Specifically, we first pre-train the ELPN to construct a latent feature space that captures rich detail information for representing sharpness latent variables. Second, we incorporate these latent variables into the prediction network to iteratively guide the generation direction, thus resolving the problem that directly applying diffusion models to temporal prediction disrupts inter-frame dependencies, and thereby completing the modeling of temporal correlations. Lastly, we develop a Grouped Domain Fusion module that effectively addresses the diffusion distortion caused by naive cross-domain information fusion. Comparative experiments on the MFQEv2 benchmark validate our algorithm's superior performance in terms of both objective and subjective metrics. By integrating with codecs and image sensors, our method can provide higher video quality. Full article
(This article belongs to the Section Sensing and Imaging)

26 pages, 6137 KiB  
Article
Impact of Latency on QoE, Performance, and Collaboration in Interactive Multi-User Virtual Reality
by Sam Van Damme, Javad Sameri, Susanna Schwarzmann, Qing Wei, Riccardo Trivisonno, Filip De Turck and Maria Torres Vega
Appl. Sci. 2024, 14(6), 2290; https://doi.org/10.3390/app14062290 - 8 Mar 2024
Cited by 17 | Viewed by 4846
Abstract
Interactive, multi-user experiences are meant to define the present and future of Virtual Reality (VR). Such immersive experiences will typically consist of remote collaborations where content is streamed and/or synchronized over a network connection. Thus, real-time collaboration will be key. In this light, the responsiveness of the system and the network will define the overall experience. As such, understanding the effect of network distortions, especially related to time delay, on end-user’s perception (in terms of Quality-of-Experience (QoE)), performance, and collaboration becomes crucial. The existing literature, however, has mostly focused on network requirements from a system point-of-view, where the key performance parameters are only provided in the form of Quality-of-Service (QoS) parameters (such as end-to-end latency). However, the translation of these network impairments to the end-user experience is often omitted. The purpose of this paper is to fill the gap by providing a thorough investigation of the impact of latency on the perception of users while performing collaborative tasks in multi-user VR. To this end, an experimental framework was designed, developed, and tested. It is based on a multi-device synchronizing architecture, enabling two simultaneous users to work together in a gamified virtual environment. The developed test environment also allows for the identification of the most prominent network requirements and objective analysis for each traffic link. To experimentally investigate the impact of latency on user perception, a user study was conducted. Participants were paired and asked to perform the collaborative task under different latency-prone scenarios. The results show that users are able to easily distinguish between distorted and non-distorted network configurations. However, making a distinction between different manifestations of latency is much less straightforward. Moreover, factors such as the user’s role in the experience and the required task, and the level of interactivity and movement have an important influence on the subjective level of perception, the strength of the user’s preferences, and the occurrence of cybersickness. In contrast, no significant differences in objective metrics, such as system performance and user completion time were observed. These results can support the creation of collective QoE metrics that model the group as a whole rather than each individual separately. As such, this work provides an important step to dynamically counteract any drops in group dynamics and performance by means of smart interventions in the transmission system and/or virtual environment. Full article
(This article belongs to the Special Issue Virtual Reality and Human-Computer Interaction)

35 pages, 741 KiB  
Review
Survey on Quality of Experience Evaluation for Cloud-Based Interactive Applications
by Jesus Arellano-Uson, Eduardo Magaña, Daniel Morato and Mikel Izal
Appl. Sci. 2024, 14(5), 1987; https://doi.org/10.3390/app14051987 - 28 Feb 2024
Cited by 3 | Viewed by 2117
Abstract
A cloud-based interactive application (CIA) is an application running in the cloud with stringent interactivity requirements, such as remote desktop and cloud gaming. These services have experienced a surge in usage, primarily due to the adoption of new remote work practices during the pandemic and the emergence of entertainment schemes similar to cloud gaming platforms. Evaluating the quality of experience (QoE) in these applications requires specific metrics, including interactivity time, responsiveness, and the assessment of video- and audio-quality degradation. Despite existing studies that evaluate QoE and compare features of general cloud applications, systematic research into QoE for CIAs is lacking. Previous surveys often narrow their focus, overlooking a comprehensive assessment. They touch on QoE in broader contexts but fall short in detailed metric analysis. Some emphasise areas like mobile cloud computing, omitting CIA-specific nuances. This paper offers a comprehensive survey of QoE measurement techniques in CIAs, providing a taxonomy of input metrics, strategies, and evaluation architectures. State-of-the-art proposals are assessed, enabling a comparative analysis of their strengths and weaknesses and identifying future research directions. Full article
(This article belongs to the Special Issue Cloud Computing: Challenges, Application and Prospects)

25 pages, 13921 KiB  
Article
Mobile Network Operators’ Assessment Based on Drive-Test Campaign in Urban Area for iPerf Scenario
by Dariusz Zmysłowski and Jan M. Kelner
Appl. Sci. 2024, 14(3), 1268; https://doi.org/10.3390/app14031268 - 3 Feb 2024
Cited by 1 | Viewed by 2475
Abstract
The development of new telecommunication services requires the implementation of advanced technologies and next-generation networks. Currently, Long-Term Evolution (LTE) is a widely used standard. On the other hand, more and more mobile network operators (MNOs) are implementing the fifth-generation (5G) New Radio standard in their networks. It allows for increased throughput, spectral efficiency, and energy efficiency, as well as maximized coverage and reduced latency. The effectiveness of the introduced changes is measured by assessing the quality of service (QoS) in mobile networks. The paper presents an evaluation of the results of a QoS measurement campaign carried out using the drive-test method in an urban area for four MNOs. We analyze the data transmission scenario, which is the basis of most modern telecommunications services. The comparison of results provides an assessment of how far the MNOs have advanced in implementing 5G services. In this analysis, we consider many QoS metrics (e.g., session time, throughput, and round-trip time) and parameters defining the radio signal quality (i.e., reference signal received power and signal-to-interference-plus-noise ratio). Our work also included searching for relationships between these parameters using correlation analysis. This allows for the selection of uncorrelated parameters to assess network quality, i.e., to evaluate MNOs in terms of the QoS they provide. Full article
(This article belongs to the Special Issue 5G/6G Mechanisms, Services, and Applications)
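The correlation step can be sketched as follows: compute pairwise Pearson correlations between radio parameters and QoS metrics, then keep a subset whose mutual correlations stay below a threshold. The column names, synthetic samples, and the 0.7 threshold are illustrative assumptions, not the campaign's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Illustrative drive-test samples: radio parameters and QoS metrics.
n = 300
rsrp = rng.uniform(-110, -70, n)                    # dBm
sinr = 0.4 * (rsrp + 110) + rng.normal(0, 3, n)     # dB-like
throughput = 2.5 * sinr + rng.normal(0, 15, n)      # Mbps-like
rtt = 80 - 0.8 * sinr + rng.normal(0, 5, n)         # ms

df = pd.DataFrame({"RSRP": rsrp, "SINR": sinr,
                   "throughput": throughput, "RTT": rtt})
corr = df.corr(method="pearson")
print(corr.round(2))

# Keep only parameters whose pairwise |correlation| stays below a threshold,
# so the final MNO comparison is not dominated by redundant metrics.
threshold = 0.7
selected = []
for col in corr.columns:
    if all(abs(corr.loc[col, s]) < threshold for s in selected):
        selected.append(col)
print("weakly correlated subset:", selected)
```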