This Special Issue presents four papers that tackle the above issues through different approaches. The first one compares a mobile device used as a controller with gesture-based control tracked by Microsoft Kinect. The second manuscript addresses the problem of managing annotations in a manufacturing context, whereas the third paper presents a prototype solution based on a head-mounted projector (to display the interface) and a depth camera (to enable interaction with the interface). The last contribution is a survey paper that analyzes the available virtual and augmented reality solutions for shoulder rehabilitation. In the rest of this editorial, the four contributions are presented in more detail.
2. On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations
Recent technological improvements have allowed interactive digital installations to spread in public spaces. These systems usually provide the capability of interacting with 3D virtual assets, such as digital characters or large virtual environments. Interaction with virtual content is a complex challenge. Several approaches exist, such as using desktop interfaces or exploiting mid-air interactions through ad-hoc devices, such as Microsoft Kinect. Although interactions based on mid-air gestures have proven to be motivating and playful, users may feel embarrassed to assume awkward positions in public. Moreover, the sensors used to capture the gestures suffer from interference caused by unexpected obstacles. The study presented in “On the Use of Mobile Devices as Controllers for First-Person Navigation in Public Installations” compares two interaction modalities for navigating large virtual environments: the first exploits a mobile device as a controller, whereas the second relies on Microsoft Kinect. Three navigation modes have been investigated, namely walk/turn, walk/sideways, and look around. The system is composed of a PC and a projector, used to display the virtual environment. In addition, a smartphone app has been developed to acquire the user’s input. The results suggest that although the two interaction modalities performed similarly, users preferred the mobile phone navigation, rating it as more intuitive and comfortable. The orientation of the mobile controller yielded interesting results. Experienced game users tended to hold the phone in landscape orientation, keeping their thumbs on the screen and using it as a console game controller. On the other hand, inexperienced users preferred to employ motion gestures to navigate the virtual environments, proposing new intuitive solutions. 
Finally, users were asked to indicate new gestures to improve the mobile phone interaction. The results indicate a preference for surface over motion gestures.
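The motion-gesture navigation described above, where device tilt drives first-person movement, can be sketched as follows. This is a minimal illustration, not the paper's actual control scheme; the axis mapping, dead zone, and scaling factors are all assumptions.

```python
def tilt_to_navigation(pitch_deg, roll_deg, dead_zone=10.0):
    """Map device tilt to walk/turn commands in [-1, 1].

    pitch_deg: forward/backward tilt; roll_deg: left/right tilt.
    The dead zone suppresses jitter around the neutral pose.
    All thresholds are illustrative, not taken from the paper.
    """
    command = {"walk": 0.0, "turn": 0.0}
    if abs(pitch_deg) > dead_zone:
        # Tilting the phone forward walks forward; speed scales with angle.
        command["walk"] = max(-1.0, min(1.0, -pitch_deg / 45.0))
    if abs(roll_deg) > dead_zone:
        # Tilting sideways turns; clamped to full turn speed at 45 degrees.
        command["turn"] = max(-1.0, min(1.0, roll_deg / 45.0))
    return command
```

A surface-gesture variant, which the study's participants preferred, would instead map touchscreen drags to the same walk/turn commands.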
3. A Mobile Solution for Augmenting a Manufacturing Environment with User-Generated Annotations
Industry 4.0 is changing manufacturing industries. Factories must face an increasingly competitive market that demands high production levels without lowering the overall quality. Operators are expected to work effectively and efficiently, being able to rapidly access data and information. However, instructions and annotations are still usually paper-based. Text-based instructions and manuals can be hard for novice and unskilled workers to comprehend. Augmented reality (AR) has been widely proven to be a technology capable of improving manufacturing productivity. Its intrinsic capability of adding virtual data to the real space allows information to be acquired faster and more effectively than with paper-based annotations. The work presented in “A Mobile Solution for Augmenting a Manufacturing Environment with User-Generated Annotations” proposes a mobile markerless AR interface for interacting with virtual annotations. The system is composed of a physical smart environment, a mobile phone, and a database. An inexperienced user can easily add virtual annotations directly onto real objects by using the mobile phone. First, the user is required to scan the objects by moving the mobile phone. A visual-inertial odometry method, which combines acceleration measurements with camera movements between frames, is employed to reconstruct the 2D features in 3D space. The features are then stored in the database, and the user links a virtual representation of the scanned object to the corresponding features. The AR interface is composed of four buttons that allow the user to create different types of annotations. Short instructions can be displayed directly close to the referenced object, whereas long annotations can be stored as speech and played back as needed. Moreover, the data acquired by sensors positioned in the real environment can also be visualized to monitor the machine status. 
Future developments will consider complex multiuser interactions, allowing different users to interact with the same virtual annotations.
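The annotation workflow above, where notes of different types are anchored to scanned objects, can be illustrated with a simple data model. The class and field names here are hypothetical; the paper does not specify its database schema.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A user-generated note anchored to a point on a scanned object.

    Field names are illustrative assumptions, not the paper's schema.
    """
    kind: str       # e.g. "text" (short instruction), "speech", or "sensor"
    payload: str    # instruction text, audio file path, or sensor identifier
    anchor: tuple   # (x, y, z) position in the reconstructed 3D space

@dataclass
class ScannedObject:
    name: str
    feature_ids: list = field(default_factory=list)   # features stored in the database
    annotations: list = field(default_factory=list)

    def add_annotation(self, kind, payload, anchor):
        self.annotations.append(Annotation(kind, payload, anchor))

# Example: attach a short instruction to a scanned machine (hypothetical data)
machine = ScannedObject("milling_machine_01")
machine.add_annotation("text", "Check coolant level before start", (0.2, 1.1, 0.4))
```

In such a model, the "sensor" kind would reference a live data stream from the smart environment, matching the machine-status visualization the paper describes.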
4. A Large Effective Touchscreen Using a Head-Mounted Projector
The diffusion of portable devices with high computational power, such as smartphones and tablets, offers the advantage of portability with the limitation of small operating areas. Studies on projected augmented reality user interfaces address this limitation by displaying the screen on a large area. However, to effectively provide a wearable, projectable user interface, other problems have to be addressed: the projector should be wearable, the display should always be visible to the user, and the display area should be extendible. In “A Large Effective Touchscreen Using a Head-Mounted Projector”, the authors propose an experimental evaluation of a custom wearable projected augmented reality (AR) system. The proposed system consists of a head-mounted device with a projector and a depth camera. The projector displays the user interface on a flat surface in front of the user, placing the image on the estimated plane, performing a perspective projection to obtain a 2D image, and changing the displayed information according to the direction of the user’s head. The depth camera enables the user to interact with the displayed information, thus providing a virtual touch interface. The system estimates the user’s head position using the plane information computed by the depth camera. The aim of the evaluation is to estimate the interface accuracy and the system delay. The accuracy is computed by measuring the positional errors between the real world and the AR interface after the user moves and rotates his/her head. The experiments estimated the mean translation error at 10 mm, whereas the delay was approximately 200 ms. 
The evaluation revealed that both the system delay and the weight of the head-mounted device are limitations that should be addressed in future work to obtain a practical device, by trying different hardware solutions to achieve the best tradeoff between these two characteristics. The final step would be to assess the proposed system through usability tests.
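The perspective-projection step described above, which maps a 3D point on the estimated plane into the 2D projected image from the current head pose, can be sketched with a basic pinhole model. This is a simplified illustration under assumed parameters; the paper's actual calibration and projection pipeline may differ.

```python
import numpy as np

def project_to_image(point_world, head_R, head_t, focal_px, principal_point):
    """Project a 3D world point into the 2D projector image (pinhole model).

    head_R / head_t: rotation and translation of the user's head, which the
    paper estimates from the depth camera's plane information; the pinhole
    parameters (focal length, principal point) are illustrative assumptions.
    """
    # Transform the world point into the head (projector) coordinate frame.
    p = head_R @ (np.asarray(point_world, dtype=float) - np.asarray(head_t, dtype=float))
    if p[2] <= 0:
        return None  # point lies behind the projector
    # Standard perspective division onto the image plane.
    u = focal_px * p[0] / p[2] + principal_point[0]
    v = focal_px * p[1] / p[2] + principal_point[1]
    return (u, v)
```

The positional error reported in the evaluation (about 10 mm mean translation error) corresponds to the offset between a real-world target and where this kind of projection places it after head motion.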
5. Review of the Augmented Reality Systems for Shoulder Rehabilitation
Shoulder disorders affect a significant portion of the population and can heavily impact a patient’s quality of life. The literature shows a growing interest in the application of virtual and augmented reality (AR) in the rehabilitation field, because of the possibility of enhancing the user experience. In “Review of the Augmented Reality Systems for Shoulder Rehabilitation”, the authors investigate the extent to which AR applications are used in shoulder rehabilitation, and they provide evidence supporting AR effectiveness. Although the use of AR for rehabilitation is still at an exploratory stage, nine AR systems were identified and analyzed. For each system, the authors evaluated the tracking paradigms, visualization technologies, integrated feedback, rehabilitation setting, and clinical evaluation. The research shows that the adopted display method is either screen-based or projection-based, whereas the tracking system is mostly marker-based. Visual feedback is always provided to the user, whereas audio feedback is provided only sometimes; in some cases, biofeedback or haptic feedback is also integrated. Most systems are designed to also support home rehabilitation without the direct supervision of a therapist. Out of the nine systems, four have been evaluated in clinical trials, showing that AR applications can provide clear benefits over traditional rehabilitation methods in terms of usability, enjoyability, motivation, and performance outcomes. Even though head-mounted displays (HMDs) have not been used yet, probably as a result of the technical limitations of older models, they deserve future attention because of their ability to deliver an immersive experience and an egocentric point of view. Moreover, more recent HMDs, such as Microsoft HoloLens, integrate tracking capabilities and provide multi-modal interaction interfaces, thus offering users a more intuitive and enjoyable AR experience. 
Overall, the review shows that AR for shoulder rehabilitation is a promising research topic, but the low number of clinical studies does not yet allow for a general assessment of the effectiveness of such applications.