
Special Issue "Human-Computer Interaction in Pervasive Computing Environments"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 October 2021.

Special Issue Editors

Dr. Alicia García-Holgado
Guest Editor
Department of Computer Science, University of Salamanca, 37008 Salamanca, Spain
Interests: social responsibility and inclusion; gender in STEM; gender and ICT; technological ecosystems; knowledge management; human–computer interaction
Dr. Brij B. Gupta
Guest Editor
National Institute of Technology Kurukshetra, India
Interests: artificial intelligence; information security; cyber security; intrusion detection; cloud security; mobile security; web security; big data analytics; botnet detection; phishing; DDoS attacks; network performance evaluation

Special Issue Information

Dear Colleagues,

Sensors are widely used in everyday life today. They are present in a wide variety of areas, offering an excellent opportunity to face challenges related to medicine and healthcare, smart cities, smart homes, smart learning, and entertainment, among others. Sensors bring technology closer to humans in an increasingly transparent and natural way, building genuine technological ecosystems in which human–computer interaction plays a key role.

Despite the widespread adoption of sensors, however, their design, implementation, and use must continue to improve in order to enhance usability, accessibility, and user experience in smart environments.

The aim of this Special Issue is to highlight recent advances and trends in human–computer interaction in pervasive computing environments. It will address a broad range of topics related to smart environments, including (but not limited to) the following:

  • Usability, accessibility, and sustainability;
  • User experience;
  • Natural user interfaces;
  • Sensor networks;
  • Haptic computing;
  • Ambient-assisted living;
  • Healthcare environments;
  • Smart cities design;
  • Multimodal systems and interfaces;
  • IoT dashboards and platforms;
  • Technological ecosystems for smart environments;
  • Service-oriented information visualization;
  • Smart interfaces for learning;
  • Ambient and pervasive interactions.

Dr. Alicia García-Holgado
Dr. Brij B. Gupta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Smart environment
  • Smart spaces
  • Smart cities
  • Smart interfaces for learning
  • Human–computer interaction
  • Usability, accessibility, and sustainability
  • User experience
  • Natural user interfaces
  • Sensor networks
  • Haptic computing
  • Ambient-assisted living
  • Healthcare environments
  • Multimodal systems
  • Multimodal interfaces
  • IoT dashboards
  • IoT platforms
  • Technological ecosystems
  • Service-oriented information visualization
  • Ambient and pervasive interactions

Published Papers (9 papers)


Research


Article
Power and Radio Resource Management in Femtocell Networks for Interference Mitigation
Sensors 2021, 21(14), 4843; https://doi.org/10.3390/s21144843 - 15 Jul 2021
Abstract
Mobile traffic volume has exploded because of the rapid improvement of mobile devices and their applications. Heterogeneous networks (HetNets) are an attractive way to accommodate the exponential growth of wireless data, and femtocell networks fall within the HetNet concept. The implementation of femtocell networks is considered an innovative approach that can improve the network’s capacity. However, dense deployment and installation of femtocells introduce interference, which reduces the network’s performance; interference occurs when two adjacent femtocells operate on the same radio resources. In this work, a two-stage scheme is proposed. The first stage distributes radio resources among femtocells so that each femtocell can identify the source of the interference. A table is constructed to measure the level of interference for each femtocell, so the level of interference on each sub-channel can be recognized by all femtocells. The second stage includes a mechanism that helps femtocell base stations adjust their transmission power autonomously to alleviate the interference. It enforces a cost function, realized by each femtocell and calculated from the undesirable interference that the femtocell introduces. Hence, the transmission power is adjusted autonomously, and undesirable interference can be monitored and alleviated. The proposed scheme is evaluated through a MATLAB simulation and compared with other approaches. The simulation results show an improvement in the network’s capacity; furthermore, the unfavorable impact of the interference can be managed and alleviated.
(This article belongs to the Special Issue Human-Computer Interaction in Pervasive Computing Environments)
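The two-stage idea described in the abstract can be sketched in a few lines of code. This is an illustrative stand-in only, not the authors' implementation: the table layout, the linear cost model, and the threshold and step values are all assumptions.

```python
# Stage 1: each femtocell builds a table of interference per sub-channel.
# Stage 2: it lowers transmit power on sub-channels where the cost (the
# interference it inflicts) exceeds a threshold. All names and numbers
# here are hypothetical; the paper's actual cost function may differ.

def build_interference_table(measurements):
    """measurements: {femtocell_id: {sub_channel: interference_mw}}.
    Returns {sub_channel: {femtocell_id: interference_mw}} so every
    femtocell can see who interferes on which sub-channel."""
    table = {}
    for cell, per_channel in measurements.items():
        for channel, power in per_channel.items():
            table.setdefault(channel, {})[cell] = power
    return table

def adjust_power(tx_power_mw, table, cell, cost_threshold=1.0, step=0.5):
    """Autonomously reduce this cell's power on sub-channels where its
    contribution to the interference cost exceeds the threshold."""
    adjusted = dict(tx_power_mw)
    for channel, contributions in table.items():
        cost = contributions.get(cell, 0.0)  # interference this cell causes
        if cost > cost_threshold:
            adjusted[channel] = max(adjusted[channel] - step, 0.0)
    return adjusted
```

For example, a femtocell `f1` that causes 1.5 mW of interference on sub-channel 0 would step its power down on that sub-channel while leaving quieter sub-channels untouched.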

Article
A Qualitative Approach to Help Adjust the Design of Management Subjects in ICT Engineering Undergraduate Programs through User Experience in a Smart Classroom Context
Sensors 2021, 21(14), 4762; https://doi.org/10.3390/s21144762 - 12 Jul 2021
Abstract
Qualitative research activities, including first-day-of-class surveys and user-experience interviews on completion of a subject, were carried out to obtain students’ feedback in order to improve the design of the subject ‘Information Systems’, as part of a general initiative to enhance ICT (Information and Communication Technologies) engineering programs. Due to the COVID-19 (coronavirus disease 2019) pandemic, La Salle URL adopted an Emergency Remote Teaching tactical solution in the second semester of the 2019–2020 academic year, just before implementing a strategic learning approach based on a new Smart Classroom (SC) system deployed in the campus facilities. The latter solution was developed to ensure that both on-campus and off-campus students could effectively follow the course syllabus through new technological devices introduced in classrooms and laboratories, reducing the inherent difficulties of online learning. Our findings show that: (1) students identified no major concerns about the subject; (2) interaction and class dynamics were the main issues identified by students, while saving commuting time when learning from home and access to recorded class sessions were the aspects students considered most advantageous about the SC.

Article
Gamification and Hazard Communication in Virtual Reality: A Qualitative Study
Sensors 2021, 21(14), 4663; https://doi.org/10.3390/s21144663 - 07 Jul 2021
Abstract
An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as training, evaluation, and improvement tools. Immersive virtual reality (VR) facilitates training without risk to participants, allows evaluation of the impact of an incorrect action or decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual environment of risks using the HTC Vive head-mounted display. The game was developed in the Unreal game engine and consisted of a walk-through maze containing evident dangers and different signaling variables, while user action data were recorded. To determine which aspects provide better interaction, experience, perception, and memory, three warning configurations (dynamic, static, and smart) and two levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey about personality and knowledge before and after using the game. We then followed a qualitative approach, using questions in a bipolar laddering assessment that was compared with the data recorded during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintain safety. The results also reveal that textual signal variables are not read when users face the stress factor of time. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in virtual environments of high-risk surroundings.

Article
Measuring User Experience, Usability and Interactivity of a Personalized Mobile Augmented Reality Training System
Sensors 2021, 21(11), 3888; https://doi.org/10.3390/s21113888 - 04 Jun 2021
Abstract
Innovative technology has been an important part of firefighting, as it advances firefighters’ safety and effectiveness. Prior research has examined training systems using augmented reality (AR) in other domains, such as welding, aviation, the military, and mathematics, offering significant pedagogical affordances. Nevertheless, firefighting training systems using AR remain an under-researched area. The increasing penetration of AR for training is the driving force behind this study, whose scope is to analyze the main aspects affecting the acceptance of AR by firefighters. The research uses a technology acceptance model (TAM), extended by the external constructs of perceived interactivity and perceived personalization, to consider both the system and the individual level. The proposed model was evaluated by a sample of 200 users, and the results show that both external variables are prerequisite factors in extending the TAM. The findings reveal that usability is the strongest predictor of firefighters’ behavioral intentions to use the AR system, followed by ease of use, with smaller, yet meaningful, direct and indirect effects on firefighters’ intentions. The identified acceptance factors help AR developers enhance the firefighters’ experience in training operations.

Article
Assembly Assistance System with Decision Trees and Ensemble Learning
Sensors 2021, 21(11), 3580; https://doi.org/10.3390/s21113580 - 21 May 2021
Abstract
This paper presents prediction methods based on decision trees and ensemble learning to suggest possible next assembly steps. The predictor is designed as a component of a sensor-based assembly assistance system whose goal is to provide support via adaptive instructions, considering the assembly progress and, in the future, the estimation of user emotions during training. The assembly assistance station supports inexperienced manufacturing workers, but it can be useful in assisting experienced workers, too. The proposed predictors are evaluated on data collected in experiments involving both trainees and manufacturing workers, as well as on a mixed dataset, and are compared with existing predictors. The novelty of the paper is the decision-tree-based prediction of assembly states, in contrast with previous algorithms, which are stochastic or neural. The results show that ensemble learning with decision tree components is best suited for adaptive assembly support systems.
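The core mechanism, an ensemble of decision trees voting on the next assembly step, can be illustrated with one-level trees (stumps). This is a minimal sketch under assumed feature encodings and step names; it is not the paper's predictor.

```python
# Illustrative ensemble of decision stumps (one-level decision trees) that
# vote on the next assembly step given the current state. The feature
# encoding, step names, and majority-vote rule are all assumptions.
from collections import Counter

class Stump:
    """One-level decision tree: tests a single feature against a value."""
    def __init__(self, feature, value, match_label, else_label):
        self.feature, self.value = feature, value
        self.match_label, self.else_label = match_label, else_label

    def predict(self, x):
        return self.match_label if x[self.feature] == self.value else self.else_label

def ensemble_predict(stumps, x):
    """Majority vote over the stumps' predicted next steps."""
    votes = Counter(stump.predict(x) for stump in stumps)
    return votes.most_common(1)[0][0]
```

A real system would learn the splits from the collected assembly-state data; here the stumps are hand-written purely to show how disagreeing trees are resolved by voting.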

Article
Visual Echolocation Concept for the Colorophone Sensory Substitution Device Using Virtual Reality
Sensors 2021, 21(1), 237; https://doi.org/10.3390/s21010237 - 01 Jan 2021
Abstract
Detecting characteristics of 3D scenes is considered one of the biggest challenges for visually impaired people, yet this ability is crucial for orientation and navigation in the natural environment. Although there are several Electronic Travel Aids aimed at enhancing orientation and mobility for the blind, only a few of them convey both 2D and 3D information, including colour. Moreover, existing devices either focus on a small part of an image or allow interpretation of a mere few points in the field of view. Here, we propose a concept of visual echolocation with integrated colour sonification as an extension of Colorophone, an assistive device for visually impaired people. The concept aims at mimicking the process of echolocation and thus provides 2D, 3D, and colour information about the whole scene. Even though the final implementation will use a 3D camera, the concept is first simulated, as a proof of concept, using VIRCO, a Virtual Reality training and evaluation system for Colorophone. The first experiments showed that it is possible to sonify the colour and distance of the whole scene, which opens up the possibility of implementing the developed algorithm on a hardware-based stereo camera platform. An introductory user evaluation of the system was conducted to assess the effectiveness of the proposed solution for perceiving the distance, position, and colour of objects placed in Virtual Reality.

Article
Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks
Sensors 2020, 20(17), 4935; https://doi.org/10.3390/s20174935 - 31 Aug 2020
Abstract
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user’s intent to ensure successful and natural interaction with the exhibition. For predicting user intent, gaze estimation has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face-orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep-learning-based approach to achieve successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach uses a generative adversarial network (GAN) to enhance users’ eye images captured under low-light conditions, thereby restoring the information missing for gaze estimation. The GAN-recovered images are then fed into a convolutional neural network to estimate the direction of the user’s gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53–8.9% under low and dark light conditions, a promising step toward further research.
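The two-stage pipeline, enhance the dark eye image, then estimate gaze from the result, can be sketched without any deep-learning library. Here min-max contrast stretching stands in for the GAN stage and a dummy function stands in for the CNN; both are illustrative assumptions, not the paper's models.

```python
# Hypothetical sketch of the enhance-then-estimate pipeline. The GAN stage
# is replaced by simple contrast stretching and the CNN gaze estimator by a
# placeholder, purely to show the data flow between the two stages.

def enhance_low_light(pixels):
    """Stretch 0-255 grayscale intensities to the full range: a crude
    stand-in for GAN-based low-light recovery."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)  # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def estimate_gaze(pixels):
    """Placeholder for the CNN gaze estimator: derives a dummy
    (yaw, pitch) pair from the image mean, for illustration only."""
    mean = sum(pixels) / len(pixels)
    return (mean / 255 - 0.5, 0.0)

def gaze_pipeline(dark_pixels):
    """Stage 1 (enhancement) feeds stage 2 (gaze estimation)."""
    return estimate_gaze(enhance_low_light(dark_pixels))
```

In the actual system both stages would be trained networks; the point of the sketch is only that enhancement restores dynamic range before estimation is attempted.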

Review


Review
Business Simulation Games Analysis Supported by Human-Computer Interfaces: A Systematic Review
Sensors 2021, 21(14), 4810; https://doi.org/10.3390/s21144810 - 14 Jul 2021
Abstract
This article performs a systematic review of studies to answer the question: what research relates to the learning process with (serious) business games that use data collection with electroencephalogram or eye-tracking signals? The PRISMA declaration method was used to guide the search for and inclusion of works in this study. The 19 references resulting from the critical evaluation initially point to a gap in investigations into the use of these devices to monitor serious games for learning in organizational environments. A comparison with equivalent sensing studies in serious games for building skills and competencies indicates that continuous monitoring measures, such as mental state and eye fixation, effectively identify players’ attention levels. These studies also showed effectiveness in measuring flow at different moments of the task, motivating and justifying their replication as a source of insights for the optimized design of business learning tools. This study is the first systematic review to consolidate the existing literature on user-experience analysis of business simulation games supported by human–computer interfaces.

Other


Systematic Review
User Experience in Social Robots
Sensors 2021, 21(15), 5052; https://doi.org/10.3390/s21155052 - 26 Jul 2021
Abstract
Social robots are increasingly penetrating our daily lives. They are used in various domains, such as healthcare, education, business, industry, and culture. However, introducing this technology into conventional environments is not trivial. For users to accept social robots, a positive user experience (UX) is vital, and it should be considered a critical part of the robots’ development process; this may lead to more extensive use of social robots and strengthen their diffusion in society. The goal of this study is to summarize the extant literature on user experience in social robots and to identify the challenges and benefits of UX evaluation in social robots. To achieve this goal, the authors carried out a systematic literature review following PRISMA guidelines. Our findings revealed that the most common methods to evaluate UX in social robots are questionnaires and interviews. UX evaluations were found to be beneficial in providing early feedback and consequently in handling errors at an early stage. However, despite the importance of UX in social robots, robot developers often neglect to set UX goals due to lack of knowledge or time. This study emphasizes the need for robot developers to acquire the theoretical and practical knowledge required to perform a successful UX evaluation.
