1. Introduction
Human–Computer Interaction (HCI) is an interdisciplinary field that examines how people use and understand computing systems. Its aim is to create technologies that are functional, usable, accessible, and enjoyable. Drawing from computer science, psychology, design, sociology, and other disciplines, HCI develops interfaces that meet human needs [1,2]. It addresses user interface design, usability, accessibility, and user experience, while also engaging with challenges posed by technologies such as artificial intelligence, the Internet of Things, and virtual/augmented reality [1].
Haptic Human–Computer Interaction (HHCI), a subfield of HCI, uses the sense of touch for more natural communication with computing systems [3]. Real-time haptic interactions can be active or passive [4]. Through force, vibration, and pressure feedback, users can “feel” virtual objects, enriching experiences beyond audiovisual channels. HHCI applications span the fields of medical training, entertainment, virtual/augmented reality, and e-commerce, while also improving accessibility for people with disabilities [5,6]. Research emphasizes the realistic reproduction of textures and forces, cost reduction, and personalized experiences.
Recent haptic technologies employ touch as a main interaction channel. For visually impaired users, tools like dynamic Braille displays, smart wearables with haptic signals, and vibration-based mobile apps enhance daily life and digital access. More advanced feedback systems, such as gloves simulating texture, pressure, or temperature, and force-feedback devices, enable realistic interaction with virtual objects [7,8,9,10]. Overall, haptic interaction strengthens accessibility, offers immersive experiences, and opens new opportunities across diverse fields.
Touch is a fundamental human sense, distinct from vision and hearing as it requires active engagement to perceive texture, temperature, or shape. Research by Roberta L. Klatzky et al. highlights how the finger processes haptic patterns through friction [11]. The skin, acting as a complex sensory organ, transmits information via specialized receptors: mechanoreceptors, thermoreceptors, and nociceptors. Merkel disks detect fine details, Meissner’s corpuscles light touch, Pacinian corpuscles high-frequency vibrations, and Ruffini endings stretch and motion. This variety enables precise, multidimensional perception. Sensitivity is higher in the fingertips due to greater receptor density, while receptors may be rapidly or slowly adapting, contributing to the perception of transient vibrations or sustained pressure [7,12,13]. Unlike sight and hearing, limited to specific organs, touch spans the whole body, supporting direct, continuous interaction with the environment essential for safety, movement, and daily adaptation.
Over the past decade, virtual reality (VR) has advanced mainly in visual immersion, yet the absence of touch limits realism, as perception is inherently multisensory. Multimodal VR, combining visual and haptic stimuli, is expected to enhance applications in e-commerce, education, and entertainment by offering more authentic experiences [14,15]. Accurate reproduction of texture and vibrations is central to haptic perception, with digital materials contributing to product design and sales. Research has emphasized the haptic rendering of textiles [16]. Physically based approaches offer accuracy but require high computational resources, whereas data-driven methods are more flexible but less scalable. Despite advances in techniques such as vibrotactile feedback, electrostatic methods, and ultrasonic systems, realistic rendering of complex textures remains challenging. The integration of artificial intelligence and deep learning algorithms opens new opportunities, enabling more personalized and adaptive experiences.
This work presents the development and evaluation of a haptic simulation environment created in the Unity3D platform (version 2021.1.18f1), which includes six different virtual textures (brick, metal, wood, sand, ice, and concrete) enhanced with haptic properties for increased realism. Using the Touch haptic device (3D Systems, Rock Hill, SC, USA), participants interacted with the surfaces through adjustable properties (friction, hardness, and relief) to assess perceived realism. Two experimental phases were conducted with a total of 47 participants: in the first phase, the textures were evaluated both visually and haptically, with the ability to adjust parameters, while in the second phase, textures were recognized exclusively through touch in a randomized order. The results showed an overall improvement in recognition accuracy between the two experiments, with significant gains for metal and wood textures. Statistical analysis revealed a reduction in the standard error, indicating more reliable recognition estimates across textures, while confusion matrices highlighted specific textures (concrete, brick) that require further optimization. These findings contribute to the advancement of haptic simulation methods for realistic virtual environments. Despite the encouraging results, the present study has limitations stemming from the use of a limited number of haptic patterns and a relatively small participant sample. Future research is expected to expand the range of haptic stimuli and apply the experimental procedures to larger user populations, aiming to enhance the generalizability of the results and provide a more rigorous assessment of the system’s effectiveness.
Despite the growing interest in haptic interaction, many existing approaches face limitations. Several studies focus on the design and implementation of complex geometric patterns for each texture, which increases development time and requirements. Others develop sophisticated machine learning models, which demand specialized expertise, significant training time, and computational resources. The present study introduces a methodology based on the direct parameterization of three fundamental properties (friction, stiffness, relief), providing control without the need for training neural networks or generating complex geometric models for each texture. This approach is implemented through affordable hardware (the Touch device) and software (Unity, Blender), making it more flexible and less costly compared to solutions relying on specialized wearable systems. An additional distinction lies in the systematic process of optimizing texture realism through two successive experimental phases. The use of human feedback to refine parameters, combined with the statistical analysis of recognition rates, temporal variations, and confusions, provides both quantitative and qualitative evidence for the effectiveness of the method. Such an approach, which aims to enhance recognizability through touch alone, serves as a complement to machine learning-based methodologies.
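To illustrate the direct-parameterization idea, a minimal sketch (in Python, with hypothetical parameter values rather than the calibrated values used in the experiments) shows how each texture reduces to a small record of friction, stiffness, and relief:

```python
from dataclasses import dataclass

@dataclass
class HapticTexture:
    """Haptic rendering parameters for one virtual surface (values normalized to 0-1)."""
    friction: float   # resistance felt when sliding the stylus across the surface
    stiffness: float  # how hard the surface feels on contact
    relief: float     # amplitude of the surface micro-geometry (texture)

# Hypothetical registry; the actual values were tuned from participant feedback.
TEXTURES = {
    "ice":      HapticTexture(friction=0.05, stiffness=0.80, relief=0.10),
    "metal":    HapticTexture(friction=0.20, stiffness=0.95, relief=0.15),
    "wood":     HapticTexture(friction=0.55, stiffness=0.70, relief=0.50),
    "sand":     HapticTexture(friction=0.60, stiffness=0.20, relief=0.40),
    "brick":    HapticTexture(friction=0.75, stiffness=0.85, relief=0.70),
    "concrete": HapticTexture(friction=0.70, stiffness=0.90, relief=0.60),
}
```

Representing each material as such a record is what keeps the approach lightweight: refining a texture amounts to editing three numbers rather than retraining a model or remeshing geometry.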
2. Related Works
The study by Hodaya Dahan et al. [17] aimed to develop and evaluate a haptic virtual reality system for distinguishing hardness and texture, in comparison with established clinical tests (Shape/Texture Identification Test and Moberg Pick-Up Test). A total of 44 healthy adults participated in the procedure, interacting via the 3D Systems Touch haptic device with pairs of virtual surfaces differing in friction or hardness and selecting, each time, the smoother or harder surface. The difficulty was dynamically adjusted using a “staircase” method. The results demonstrated very high agreement with traditional measurements, proving that the proposed VR-based test is reliable and can be used as an alternative or complementary tool in clinical settings.
Existing VR applications rarely provide the sense of touch experienced in the real world, particularly the perception of texture when moving across a surface. Current methods typically require a separate model for each texture, which limits scalability. Negin Heravi et al. [18,19] present the development and evaluation of a machine learning model for rendering realistic haptic textures in real time within virtual reality (VR) environments. The model is a neural network that takes as input an image from the GelSight visuo-haptic sensor along with the user’s motion and pressure data. Its output is a spectrum of accelerations (DFT magnitude), which is converted into vibrotactile feedback via a high-frequency vibrotactile transducer connected to the 3D Systems Touch device. The method was tested in an experimental study with 25 participants across four phases: similarity comparison, forced-choice trials, evaluation (e.g., rough–smooth), and generalization to novel, unseen textures. Results demonstrated that the model is real-time implementable, easily scalable to multiple textures, and capable of producing realistic texture vibrations even for surfaces not included in the training set, thereby enhancing the haptic experience in VR.
Alongside the aforementioned studies, recent advanced methods have explored the use of deep learning. In particular, the work of Joolee and Jeon [20] presented a comprehensive data-driven framework (Deep Spatio-Temporal Networks) for the realistic real-time synthesis of acceleration signals. Although this approach achieves remarkable fidelity for both isotropic and anisotropic textures, its reliance on large datasets and the complex process of model training limits its immediate applicability in rapid prototyping scenarios. From a different perspective, the study by Lee et al. [21] focused on the design of a lightweight and wearable haptic device for rendering stiffness in Augmented Reality (AR) environments. Their device, based on a passive tendon-locking mechanism, highlights stiffness as a critical parameter for realistic interaction, one that our work also incorporates and systematically investigates. However, while their solution is optimized for AR interaction using a wearable device, our method aims at the flexibility of direct parameterization of multiple properties for Virtual Reality (VR) applications using a stylus device. Furthermore, the work of Rouhafzay and Cretu [22] developed a visuo-haptic framework for object recognition, guided by a visual attention model. The integration of multiple information sources (cutaneous and kinesthetic stimuli) aligns with the spirit of our multiparametric approach. Nevertheless, the present study differentiates its methodology by focusing exclusively on haptic-based texture recognition, systematically isolating the tactile channel in order to clearly assess its contribution without visual guidance or the complexity of video integration, thus complementing this broader spectrum of approaches.
The research by Tzimos N. et al. [23] focuses on the development and evaluation of geometric haptic patterns for three-dimensional virtual objects, aiming to enhance the tactile interaction experience. Nine different geometric patterns were created and tested in a virtual environment using the 3D Systems Touch (Phantom Omni) haptic device and H3DAPI software. In the experimental procedure, both blind and sighted users were asked to identify the textures without visual cues. Participants were able to distinguish different textures but faced difficulties with precise recognition.
In a related study, Papadopoulos K. et al. [24] examined similar geometric haptic patterns, focusing on roughness as a tactile variable. The study investigated users’ ability to recognize friction and hardness parameters via a haptic device (Geomagic Touch) in both visually impaired and sighted participants. It was found that accurate haptic discrimination requires noticeably distinct surfaces. Additionally, visually impaired users needed more time and effort to perceive these properties, highlighting the need for specialized design of haptic interface content with an emphasis on accessibility.
The study by Ruiz et al. [25] proposes a method for training models that recognize objects using both vision and touch. They employ synthetic data from 3D simulations, as real haptic samples are scarce. This approach is cost-effective and easily scalable. The results demonstrate that using both virtual and real samples significantly improves recognition accuracy, highlighting the value of multimodal methods for robotic systems. In contrast, the study by Cao et al. [26] presents an approach to haptic rendering that leverages visual information, allowing texture representation without direct physical contact. The work emphasizes the potential of integrating visual and haptic signals to produce realistic tactile sensations.
The study by Tena-Sánchez et al. [27] presents a virtual training environment based on the digital reconstruction of real vegetation, aiming to simulate the haptic experience. This approach combines three-dimensional modeling with haptic rendering methods to realistically reproduce the textures and tactile characteristics of plant elements. The work highlights the potential of haptic simulations in training scenarios, strengthening the connection between physical and virtual environments. The study by Bazelle et al. [28] investigates a cost-effective and lightweight approach to texture simulation in virtual reality through the use of vibrotactile haptic gloves. The system employs simple vibrators on the fingers to reproduce the roughness of different materials, based on heightmap data and hand tracking. The experimental results indicate that users can distinguish between different textures with reasonable accuracy, demonstrating the potential to deliver a convincing haptic experience at low cost.
4. Results
The aim of the present study was to investigate how users perceive haptic technology during their interaction with virtual objects that represent corresponding physical materials in three-dimensional environments. The results of the interaction with the selected haptic surfaces of real-world materials (Figure 2), combined with the data collected through observation and the questionnaire, are analyzed in the following section. The research primarily focused on identifying the optimal haptic parameters that should be assigned to the textures of virtual materials in order to more faithfully approximate their counterparts in the physical environment. In addition, the study examined users’ ability to recognize different texture patterns in three-dimensional scenes through the use of haptic devices. In the first phase of the study, which involved sixteen (16) participants, the initial values of the haptic properties were used for the textures under examination. The experimental data yielded both the average recognition rates of the six patterns and the interaction times required to achieve recognition. An important finding was the determination of new haptic property values by the participants, which were subsequently adopted as the basis for the second phase of the experiment, involving thirty-one (31) new users.
In a recognition experiment with binary responses (“Yes”/“No”), the recognition rate for each object, although intuitive, provides an incomplete picture of reality. It represents an average that may arise from entirely different response patterns. The Standard Deviation (SD) and Standard Error (SE) are introduced precisely to uncover this hidden information, namely, the variability and reliability of our measurements [38]. The SD reveals the degree of agreement within our sample and serves as an indicator of the internal consistency of the data for each object individually. The SE quantifies the uncertainty of the estimated recognition rate [39,40]. In summary, the calculation of SD answers the question, “To what extent do users disagree with each other regarding the recognition of each material?”, whereas the SE addresses, “If the interaction were extended to a new group of users, how different would the result be?” For binary variables (correct/incorrect recognition), the standard deviation (SD) and standard error (SE) were calculated using the following equations:
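$$SD = \sqrt{p\,(1-p)}, \qquad SE = \frac{SD}{\sqrt{n}} = \sqrt{\frac{p\,(1-p)}{n}}$$

where p is the proportion of correct recognitions for a given material and n is the number of participants in the corresponding phase. (These are the standard formulas for a binary variable and reproduce the SD and SE values reported below.)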
Between the first and second recognition phases, notable differences were observed in the values of the standard deviation (SD) and the standard error (SE), indicating how stable and reliable the results are. In the first phase, which included 16 participants, the SD ranged approximately from 0.4 to 0.5, while the SE was relatively high (10–12%). This suggests greater uncertainty in the recognition rates due to the small sample size: the fewer the participants, the less stable the results tend to be. In the second phase, with 31 participants, the SD remained nearly the same (0.39–0.50), indicating that the variability in participants’ performance did not change substantially. However, the SE decreased (to 7–9%), meaning that the recognition rates became more reliable and robust. In simple terms, although individual differences remained approximately constant, the inclusion of more participants made the overall results more stable, accurate, and valid for material comparison.
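As a quick numerical check, a minimal Python sketch of these formulas (using the Brick phase 1 figures from Table 6) reproduces the reported values up to rounding:

```python
import math

def binary_sd_se(p: float, n: int) -> tuple[float, float]:
    """SD and SE of a binary (correct/incorrect) recognition variable."""
    sd = math.sqrt(p * (1 - p))
    return sd, sd / math.sqrt(n)

# Brick, phase 1: 68.75% recognition among 16 participants.
sd, se = binary_sd_se(0.6875, 16)
print(f"SD = {sd:.4f}, SE = {se:.1%}")  # SD = 0.4635 (0.463 in Table 6), SE = 11.6%
```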
The aggregate results of the two phases are presented in Table 4 and Table 5. The differences in recognition rates and interaction times between the two phases of the experiment for each material are also presented in the following diagrams (Figure 4 and Figure 5).
To further assess the reliability of the observed improvements, the effect size Cohen’s d was calculated for recognition accuracy between the two experimental phases (Table 6). The Cohen’s d index is a measure of effect size used to estimate the magnitude of the difference between two groups, expressed in units of standard deviation. While statistical significance tests (e.g., t-tests, ANOVA) determine whether a difference exists and the probability that it occurred by chance, effect size measures such as Cohen’s d address the question of “how large is the difference.” In the present experiment, where the two phases (before and after optimization) involved different groups of participants (n1 = 16, n2 = 31) and SD values had already been computed, the calculation of Cohen’s d is methodologically appropriate for estimating the magnitude of improvement in recognition accuracy [41].
$$d = \frac{M_2 - M_1}{SD_p}, \qquad SD_p = \sqrt{\frac{(n_1 - 1)\,SD_1^2 + (n_2 - 1)\,SD_2^2}{n_1 + n_2 - 2}}$$

where:
d: Cohen’s d
M1: average recognition rate (%) for phase 1
SD1: recognition SD for phase 1
M2: average recognition rate (%) for phase 2
SD2: recognition SD for phase 2
SDp: the pooled standard deviation of the two groups
Table 6. Cohen’s d index between the two experimental phases.
| Material | Phase 1 (M1, SD1) | Phase 2 (M2, SD2) | Cohen’s d | Interpretation |
|---|---|---|---|---|
| Brick | 68.75%, 0.463 | 61.29%, 0.487 | −0.16 | Slight decrease in recognition |
| Metal | 50%, 0.500 | 80.62%, 0.395 | +0.67 | Significant improvement |
| Wood | 50%, 0.500 | 77.42%, 0.418 | +0.60 | Significant improvement |
| Sand | 75%, 0.433 | 77.42%, 0.418 | +0.06 | Almost no change |
| Ice | 81.25%, 0.390 | 74.19%, 0.438 | −0.17 | Slight decrease in recognition |
| Concrete | 56.25%, 0.496 | 54.84%, 0.498 | −0.03 | Almost no change |
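For reproducibility, a minimal Python sketch of the pooled-SD computation is given below; it reproduces, for example, the Brick row of Table 6 (values for other materials may differ slightly depending on rounding of the tabulated means and SDs and on the exact pooling convention):

```python
import math

def cohens_d(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m2 - m1) / sd_pooled

# Brick: phase 1 (68.75%, SD 0.463, n=16) vs. phase 2 (61.29%, SD 0.487, n=31).
print(round(cohens_d(0.6875, 0.463, 16, 0.6129, 0.487, 31), 2))  # -0.16
```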
During the interactions, certain misassociations were identified among the six materials under examination. The summary table (Table 7) and the heatmaps (Figure 6 and Figure 7) quantitatively present the observed instances of confusion, enabling analysis of recognition patterns and the difficulty of distinguishing each material. The color intensity (ranging from light yellow to dark green) corresponds to the magnitude of the percentage: the darker the green, the more frequently the two materials were confused.
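A minimal sketch of how such a confusion heatmap can be built, assuming the trials are stored as (presented, answered) material pairs (the response list below is placeholder data, not the study’s raw responses):

```python
import numpy as np
import matplotlib.pyplot as plt

MATERIALS = ["brick", "metal", "wood", "sand", "ice", "concrete"]
IDX = {m: i for i, m in enumerate(MATERIALS)}

# Placeholder trials: (material presented, material answered).
responses = [
    ("brick", "concrete"), ("brick", "brick"),
    ("metal", "metal"),
    ("wood", "ice"), ("wood", "wood"),
    ("sand", "sand"),
    ("ice", "wood"),
    ("concrete", "metal"),
]

# Tally answers into a presented-by-answered count matrix.
counts = np.zeros((len(MATERIALS), len(MATERIALS)))
for presented, answered in responses:
    counts[IDX[presented], IDX[answered]] += 1

# Row-normalize to the percentage of presentations of each material.
percent = 100 * counts / counts.sum(axis=1, keepdims=True)

fig, ax = plt.subplots()
im = ax.imshow(percent, cmap="YlGn")  # light yellow -> dark green, as in Figures 6 and 7
ax.set_xticks(range(len(MATERIALS)))
ax.set_xticklabels(MATERIALS, rotation=45, ha="right")
ax.set_yticks(range(len(MATERIALS)))
ax.set_yticklabels(MATERIALS)
ax.set_xlabel("Material answered")
ax.set_ylabel("Material presented")
fig.colorbar(im, ax=ax, label="% of trials")
plt.tight_layout()
plt.show()
```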
Upon completion of the experiment, 41 out of the total 47 participants responded to a set of 8 questions based on the UEQ (Appendix A). The User Experience Questionnaire (UEQ), consisting of eight items rated on a 1–5 scale, provides a multifaceted analysis of UX, covering both classical dimensions (e.g., attractiveness, efficiency) and more specialized ones (e.g., content quality, overall experience). The UEQ proved to be a suitable tool for measuring user experience in novel HCI applications, as it can guide application design improvements by revealing both strengths and weaknesses [42]. The findings of this questionnaire are presented in Table 8.
The phases of the experimental procedures are presented in the diagram below (Figure 8).
5. Discussion
Observation of participant interactions revealed a high degree of acceptance and enthusiasm, given that for most individuals this was their first encounter with a haptic device. Familiarization with the environment and the Touch device was immediate, with only minor difficulties in determining the appropriate stylus pressure, which were quickly overcome. Overall, the process was evaluated positively, as users remained focused and engaged throughout the experimental procedure, with no reports of fatigue or negative emotions.
The new haptic property values determined by participants in the first phase formed the basis of the second phase, enhancing the fidelity of the simulation. The most significant parameter modifications (friction, stiffness, texture, and viscosity in the case of sand) corresponded primarily to materials that exhibited higher recognition error rates. Brick showed an increase in friction, stiffness, and texture, making it more stable and rough. Similarly, Concrete became more frictional, stiffer, and more textured, reflecting a strengthening of its core mechanical characteristics. Metal exhibited a reduction in friction, while its stiffness and texture increased, suggesting that despite its slipperiness, it became sturdier and more pronounced in surface detail. Wood displayed increases in both friction and stiffness while maintaining its already high texture, making it more resistant while preserving its characteristic feel. Sand showed a slight increase in friction but reduced stiffness and texture, resulting in a softer and less rough tactile sensation. Finally, Ice remained frictionless but became slightly softer with an increase in texture, indicating minor changes in surface perception without altering its inherent slipperiness. Overall, user interactions appeared to reinforce the characteristics of hard and frictional materials, whereas softer and more slippery materials demonstrated smaller variations.
With regard to texture recognition, overall performance improved in the second phase, indicating that the modified values contributed to greater clarity and realism. In the first phase, with 16 participants, recognition rates and interaction times varied across materials. Ice achieved the highest recognition rate (81.25%) and the shortest interaction time (9.31 s), suggesting that users could identify it quickly and clearly. In contrast, Brick showed moderate recognition (68.75%) but required the longest interaction time (18.63 s), indicating that its texture was more complex or demanded more exploration. Metal and Wood both had recognition rates of 50%, with interaction times of 22.25 and 20.65 s, respectively, implying that users found these textures more difficult to distinguish and required prolonged contact. Sand, despite a high recognition rate (75%), had a relatively short interaction time (14.37 s), while Concrete exhibited moderate recognition (56.25%) with a time of 12.76 s. Overall, these results suggest that more distinctive textures, such as Ice and Sand, are recognized more quickly, whereas less pronounced or more homogeneous textures, such as Metal and Wood, require longer exploration to be accurately identified.
Following the modification of haptic property values, the recognition results and interaction times of the 31 participants in the second phase revealed significant differences compared to the first phase. Metal achieved the highest recognition rate (80.62%) with an interaction time of 17.85 s, indicating that the adjustments made its texture more easily distinguishable and faster to identify. Wood and Sand both had recognition rates of 77.42%, though Wood required more time (24.76 s) than Sand (17.09 s), suggesting that the enhanced intensity of Wood’s texture demanded more exploration. Ice remained relatively easy to recognize at 74.19%, with a reduced interaction time of 15.85 s, indicating faster recognition compared to the previous phase. Brick showed a decrease in recognition (61.29%) and an increase in interaction time (23.33 s), suggesting that the modified properties hindered its quick identification. Finally, Concrete displayed moderate recognition (54.84%) but the longest interaction time (24.95 s), implying that users required more time to identify this texture. Overall, the changes to the haptic properties resulted in improved recognizability for materials such as Metal and Wood, while other textures, such as Brick, became more difficult to identify and required longer interaction.
In parallel, the confusions recorded during recognition highlighted specific pairs of textures with overlapping characteristics (e.g., roughness or hardness), which made them more difficult to distinguish. The systematic documentation of these confusions provides useful insights for improving future parameter settings and avoiding overlaps between similar haptic surfaces. Notably, frequent confusions were observed between Metal and Concrete, as well as between Wood and Ice, indicating that certain material pairs exhibit overlapping haptic properties.
In the first phase, with 16 participants, using the initial property values, the frequency of material confusions varied considerably. Brick was primarily confused with Concrete (18.8%) and secondarily with Metal (12.5%). Metal was often confused with Brick (25%) and Concrete (18.8%), and to a lesser extent with Sand (6.3%). Wood was frequently mistaken for Ice (31.3%), with lower confusion rates for Concrete (12.5%) and Sand (6.3%). Sand was mainly confused with Wood (12.5%) and less often with Metal and Ice (6.3% each). Ice was confused primarily with Sand (12.5%) and secondarily with Metal (6.3%). Finally, Concrete was strongly confused with Metal (31.3%) and less so with Brick and Sand (6.3% each). Overall, materials with more similar haptic properties, such as Brick–Concrete and Metal–Concrete, showed higher confusion rates, whereas more distinct materials, such as Ice and Sand, exhibited lower confusion.
In the second phase, with 31 participants and the adjusted haptic property values, a shift was observed in the confusion patterns between materials. Brick was still mainly confused with Concrete (29.0%) and to a lesser extent with Metal (9.7%), indicating that distinguishing it from Concrete remained challenging. Metal exhibited markedly reduced confusion with all other materials, being confused primarily with Concrete (9.7%) and only minimally with Brick, Sand, and Ice (3.2% each), suggesting that the adjustments made it more easily recognizable. Wood continued to be confused with Ice (16.1%), with lower confusion rates for Metal and Sand (3.2% each). Sand was confused mainly with Ice (16.1%) and to a lesser extent with Wood (6.5%). Ice showed confusion mainly with Wood (16.1%) and Sand (9.7%). Finally, Concrete displayed high confusion with Metal (29.0%) and lower confusion with Brick (12.9%). Overall, the modifications to the haptic properties reduced overall confusion for materials such as Metal and Brick, while more similar or softer textures, such as Wood, Sand, and Ice, maintained certain levels of confusion.
In conclusion, the comparison between the two phases demonstrates that the revision of the haptic property values had a clear impact on both the recognition and discrimination of materials. In the first phase, with the initial values, materials such as Metal and Wood exhibited high confusion rates and required longer interaction times for recognition, whereas Ice and Sand were identified quickly and clearly. Following the adjustments to the properties in the second phase, recognition improved significantly for materials such as Metal and Wood, interaction times decreased for more distinctive textures, and overall confusion rates were reduced, particularly among closely related materials. Nevertheless, certain materials with more homogeneous or softer textures, such as Concrete, Sand, and Ice, continued to show moderate levels of confusion. Overall, the revision of the haptic properties proved effective, making most materials more distinguishable and recognizable, thereby enhancing the participants’ interaction experience.
The comparison of the two experimental phases using the Cohen’s d index quantitatively confirmed the variations observed in the recognition rates of the six materials, providing an estimate of the magnitude of the effect of the haptic parameter adjustments. The results showed that the largest positive effects were recorded for the materials Metal (d = 0.67) and Wood (d = 0.60), corresponding to a medium-to-large effect size according to Cohen’s criteria. These increases indicate that the adjustments in friction, stiffness, and relief values substantially enhanced the distinctiveness and realism of these materials, making them more easily recognizable by participants. Conversely, for Brick (d = −0.16), Ice (d = −0.17), and Concrete (d = −0.03), the index values suggest very small or negative effects, implying that the parameter adjustments may have slightly reduced recognition accuracy or had no significant impact. Finally, Sand (d = 0.06) exhibited an almost negligible change, consistent with the overall stability of performance for this material. Overall, the Cohen’s d values support the conclusion that the adjustments of the haptic properties were most effective for textures with initially lower recognizability (e.g., Metal and Wood), confirming the positive contribution of the optimization process to the realism and haptic discriminability of the virtual surfaces.
The observed confusions in texture recognition are not random but can be interpreted based on the physiological characteristics of the human haptic system and the technical specifications of the Touch device. The human cutaneous sensory system comprises specialized mechanoreceptors with distinct response profiles. Although the 3D Systems Touch device offers good spatial resolution, it exhibits technical limitations that may affect the fidelity of haptic rendering. Its maximum output force (3.3 N) is significantly lower than human capability, resulting in limited realism when rendering the stiffness of materials such as metal or concrete. Moreover, the accurate representation of fine textures depends on the haptic rendering algorithm, which often struggles to convey subtle differences in vibration frequencies. Finally, the inertia and friction of the mechanism reduce sensitivity at higher frequencies, introducing “noise” that constrains the discrimination of fine differences between materials.
The pronounced confusion between Concrete and Metal can be explained by the device’s limitations in generating high frequencies, which are essential for stimulating the Pacinian corpuscles, rendering the two textures difficult to distinguish. Concrete, characterized by random low and mid-frequency reliefs, and Metal, with its smooth yet rigid surface, tend to be conflated due to the device’s low maximum output force (3.3 N). Similarly, the bidirectional confusion between Wood and Ice highlights the device’s challenges in simulating differences in friction and stiffness. While Merkel and Meissner receptors contribute to discrimination through static pressure and micro-slips, the enhancement of ice’s texture appears to have diminished the perceptual cues related to its low friction, thereby hindering accurate differentiation between wood and ice.
The analysis of the UEQ results highlights an overall exceptionally positive user experience, with all dimensions recording scores above 87%. The highest values were observed in the dimensions Leading edge (97.56%), Interesting (97.07%), and Exciting (96.1%), indicating that participants perceived the interaction environment as particularly innovative, attractive, and capable of maintaining their engagement. Similarly high evaluations were reported for the dimensions Inventive (94.63%) and Clear (93.17%), reflecting users’ perceptions of innovation and clarity in use. The indicators related to the system’s practical usability, namely Easy (90.24%), Efficient (89.76%), and Supportive (87.8%), although slightly lower, still remain at very high levels. The internal consistency of the User Experience Questionnaire (UEQ) was assessed using Cronbach’s Alpha [43]. The obtained value (α = 0.54) indicates low internal reliability, suggesting that participants’ evaluations varied across the different UEQ items. This result may reflect the multidimensional nature of the questionnaire, which captures diverse aspects of user experience (e.g., efficiency, attractiveness, novelty), rather than a single homogeneous construct. Moreover, the relatively small sample size (N = 41) may have contributed to this lower reliability estimate. Nonetheless, the consistently high mean scores across all items (above 87%) confirm a generally positive perception of the system’s usability and innovation.
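A minimal Python sketch of the Cronbach’s Alpha computation on a participants-by-items rating matrix (the array below is placeholder data, not the study’s UEQ responses):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of ratings."""
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder: 5 participants x 8 UEQ items, each rated on the 1-5 scale.
ratings = np.array([
    [5, 4, 5, 4, 5, 5, 4, 5],
    [4, 4, 5, 5, 4, 5, 5, 4],
    [5, 5, 4, 4, 5, 4, 5, 5],
    [3, 4, 4, 5, 4, 5, 4, 4],
    [5, 5, 5, 4, 5, 5, 5, 5],
])
print(round(cronbach_alpha(ratings), 2))
```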