Article

Surface Reading Model via Haptic Device: An Application Based on Internet of Things and Cloud Environment

by Andreas P. Plageras 1, Christos L. Stergiou 1, Vasileios A. Memos 1, George Kokkonis 2, Yutaka Ishibashi 3 and Konstantinos E. Psannis 1,*
1 Department of Applied Informatics, University of Macedonia, 54636 Thessaloniki, Greece
2 Department of Information and Electronic Engineering, International Hellenic University, 57001 Thessaloniki, Greece
3 Department of Business Management, Aichi Sangyo University, Okazaki 444-0005, Japan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(16), 3185; https://doi.org/10.3390/electronics14163185
Submission received: 9 July 2025 / Revised: 27 July 2025 / Accepted: 29 July 2025 / Published: 11 August 2025

Abstract

In this research paper, we implemented a computer program, written in XML, that makes it possible to sense differences in image color depth through haptic/tactile devices. With the use of bump maps and tools such as Autodesk 3D Studio Max, Adobe Photoshop, and Adobe Illustrator, we were able to obtain the desired results. The haptic devices used for the experiments were the PHANTOM Touch and the PHANTOM Omni of 3D Systems. The programs that were installed and configured to model the surfaces, run the experiments, and achieve the desired goal are the “H3D API”, “Geomagic OpenHaptics”, and the “OpenHaptics Developer Edition”. The purpose of this project was to allow different textures, shapes, and objects in images to be felt through a haptic device. The primary objective was to create a system from the ground up that renders visuals on the screen and facilitates interaction with them via the haptic device. The main contribution of this work is a novel pattern of images that can be classified as different textures so that they can be identified by people with reduced vision.

Graphical Abstract

1. Introduction

For those who are visually impaired, access to written information such as words, photographs, and graphics is still severely restricted. Haptics, the representation of the sense of touch, has therefore been explored as an alternative channel. Haptics provides the user with an immersive and stimulating experience; touch is one of the primary human senses that aid in understanding and perceiving the environment. With this technology, people who are blind or visually disabled can examine an object’s form, texture, geometry, mass, elasticity, and dimensions with their hands [1]. Owing to the development of mobile networks, scientists are utilizing 5G network technology to create assistive devices for individuals with visual impairments, especially by integrating vibro-tactile stimulation into haptic interfaces.
The term “Internet of Things” (IoT) refers to the growing network of physically connected objects, including appliances, structures, cars, and other things, that embed software, electronic hardware, sensors, and connectivity capabilities, allowing them to collect and share information over the Internet.
New technologies can be combined with the Internet of Things (IoT) to enhance the quality of experience (QoE) and add new features. It must be mentioned that the assessment of QoE in haptic applications with force feedback over the Internet is still at an early stage, and the resolution of this open problem is being researched and identified as a great challenge [2]. Emerging technologies comprise various techniques and practices, including haptic technologies and artificial intelligence, that can be embedded in machines, computers, devices, and other objects that individuals may use without necessarily understanding their management and operation in detail.
There are two categories of haptic perception, tactile and kinaesthetic, depending on the kind of feedback we receive. Kinaesthetic information concerns the position and movement of our muscles and joints, whereas tactile information entails sensations like pressure, temperature, and touch [3]. Because haptic technology allows for the development of precisely controlled haptic virtual objects, it has made it possible to study how the human sense of touch operates. Haptic devices are used in most areas, such as manufacturing and education. The Tactile Internet (TI) is officially defined by the IEEE P1918.1 Tactile Internet standards group as “A network, or network of networks, for remotely accessing, perceiving, manipulating or controlling real or virtual objects or processes, in perceived real time by humans or machines” [4].
The scientific community also wants to enable internet protocols on the upcoming generation of smart devices so that end-to-end transparency and convergence can be achieved with the new IPv6 protocol. One of the main goals is to use cloudlet-based services to accelerate the convergence of cloud computing and mobile computing for a new class of haptic applications [4].
The IoT can certainly leverage TI operations in many respects [5]. TI is predicted to be the future of the Internet of Things (IoT), merging how humans and machines communicate (H2M) and how machines communicate with one another (M2M). TI is a network with extremely low latency and extremely high availability, reliability, and security. The International Telecommunication Union (ITU) thinks it may be viewed as a “revolutionary level of development for society, economics, and culture”.
The TI is envisioned to be able to control the IoT in real time. This means that it will add a new dimension to human-to-machine (H2M) interaction by making tactile and haptic sensations possible, while simultaneously and fundamentally changing machine-to-machine interaction [2,6]. The TI revolution is expected to solve challenges in many areas, such as education, healthcare, energy, smart cities, and culture [7]. Better communication and more realistic social interaction in a variety of contexts will result from the TI. In particular, the TI will make it possible to conduct remote learning and training, drive remotely, monitor and perform surgery remotely, service and decommission industrial facilities remotely, manage wireless exoskeletons, and synchronize suppliers in smart grids, among many other application areas [8].
The TI will assist virtual and augmented reality by ensuring rapid communication when numerous internet users are connected via a VR or AR simulation to collaborate. For instance, a team of artists from various countries developing a 3D computer model of a sculpture no longer needs to collaborate by sharing photographs, designs, and pictures online. They can use a robot engine, haptic feedback, and tactile sensors to cooperate on the creation of a specific sculpture in real time, despite being in different places; this is made possible by the TI [7].
We have seen a dramatic shift in the form factor of audio and vision technology over the past ten years, moving from bulky, anchored instruments to lightweight, ergonomically designed gadgets that suit our body like second nature. Despite this, wearability has only lately started to be considered in the design of haptic devices. Wearability opens new possibilities for human–machine cooperation, communication, and integration. This occurs because robotic gadgets and wearable haptic interfaces may naturally and privately communicate with their human wearers while they engage with their surroundings [9].
The era of the TI became practically feasible with the release of 5G technology in 2020. It should be noted that, in order to realize the TI goal, numerous firms are putting 5G standards into practice and creating new technologies, infrastructures, and solutions that allow for extremely low-latency end-to-end communications [10].
A new generation of services called cloud computing (CC) seeks to make data accessible at any time and from anywhere. When employing this technology, users face no constraints or need for dedicated hardware equipment. In recent years, cloud computing services have emerged as one of the most competitive markets for leading software and IT firms [11,12].
Taking this into account, cloud computing may become the foundational technology for other technologies, including big data, and subsequently achieve cloud and big data integration [13,14]. In addition, cloud computing has traditionally been a foundational technology for other innovative state-of-the-art technologies due to the services it provides [15,16].
Big Data (BD) is one of those technologies, as previously mentioned. The term “big data” describes the surprisingly rapid increase in the volume of both structured and unstructured data. Data sets that are too large or complex for conventional data processing software are generally referred to by this term. Furthermore, the phrase “Big Data” is commonly used to refer to the use of predictive analytics or other innovative methods to extract value from data; sometimes it also describes the size of a data set [17,18]. Big data accuracy can result in more assured decision-making, and wiser choices can reduce risk, boost productivity, and save costs [17]. This breadth shows that big data is becoming equally crucial for the internet and business, because more data leads to more accurate analyses [15]. The crux of the issue lies not in the sheer volume of data gathered, but in its potential usefulness. Assuming that companies are able to gather pertinent data, access information from any source, and analyze it to rapidly arrive at answers, we anticipate the following benefits: (1) cutting expenses, (2) saving time, (3) creating new goods and enhancing existing ones, and (4) making better choices [19,20].
Having presented the fundamental theoretical background of our research field, we can now summarize the primary contributions of this work:
  • We propose a novel pattern of images that can be classified as different textures so that they can be identified by people with reduced vision.
  • We propose instances of different surface models.
  • We propose a system that can serve visually impaired people remotely with the help of IoT haptic devices and a cloud server.
The remainder of the paper is organized as follows. Section 2 reviews the background studies on the subject, covering IoT haptic models and the problems they address, the most common use-case scenarios already implemented, and the scientific gaps we investigate in this work. Section 3 presents the installation of the proposed system and device and gives a brief operation tutorial. Section 4 presents the proposed model and the investigation scenario of our work. Section 5 presents the experimental results of our proposal and the major findings of our model. Section 6 presents the conclusions of this paper, together with fresh ideas for future research directions.

2. Literature Review

There are several research studies and IoT-based implementations for people with impaired vision. The following research works mainly use haptic feedback, IoT platforms, and cloud applications to assist visually impaired people in avoiding various obstacles during their movement.
M. Jiménez et al. [21] proposed an assistive movement device that uses haptic technology to properly guide visually impaired people. In this project, haptic technology provides meaningful feedback, essentially intuitive information about the route to be followed, through a smart walker device.
A corresponding smart navigation system for visually impaired people was proposed by A. Mueen et al. [22] involving multiple processes to achieve this. The proposed system, entitled “Fog Assisted Smart Navigation System” (FASNS), was implemented in Network Simulator 3, simulating indoor and outdoor environments for static and dynamic obstacle detection and task completion time. The system uses a fog-connected IoT-cloud environment to collect and analyze the route data and thus make the proper decision, improving the navigation accuracy.
K. S. Abburu et al. [23] proposed an IoT-based haptic display unit that can be integrated into smartphones to improve the quality of life of visually impaired citizens. The haptic screen can render a text message in the form of refreshable Braille, while the same procedure can also be extended to incorporate touch screens into smart gadgets, providing useful information.
A remote navigation assistive system for visually impaired and blind people was proposed by B. Chaudary et al. [24]. This system offers haptic feedback from one or two installed relevant vibrating actuators and cutting-edge technologies that enable a high-quality experience for people with vision disabilities.
J. Ganesan et al. [25] presented a deep learning reader for visually impaired persons using a Convolutional Neural Network (CNN) combined with a Long Short-Term Memory (LSTM) network. The CNN performs feature detection on the printed images and their captions, while the LSTM network provides captioning capabilities, explaining the text detected in the images. The recognized texts and captions are converted into voice messages for the visually impaired users via a suitable API with a high level of accuracy.
Y. Bouteraa [26] proposed a smart wearable navigation device for visually impaired and blind people using a fuzzy decision support system. The system is based on two detection units for obstacles and an embedded controller. Data related to the acquisition and obstacle avoidance from several nodes are managed by a suitable Robot Operating System (ROS) architecture to be delivered as mixed haptic–voice messages for guidance of the impaired and blind people.
M. Poggi and S. Mattoccia [27] proposed an accurate, lightweight, and efficient wearable device for visually impaired people that uses 3D computer vision and machine learning methods to offer obstacle detection and categorization. A CNN is used to categorize the obstacles, while a bespoke RGB-D camera for dense depth mapping and an embedded CPU board provide the basis for obstacle detection.
S. Rao and V. M. Singh [28] proposed a novel assistive system architecture in which a shoe with embedded sensors and actuators connects to a specialized smartphone application to improve mobility for visually impaired people. Moreover, with the use of computer vision, the system can efficiently detect potential obstacles and alert the user to them.
In addition to the above research studies, M. S. Farooq et al. [29] implemented a smart IoT-based stick for visually impaired people to detect and identify potential obstacles, water, etc. The proposed model can also inform the user via audio messages and/or haptic feedback about obstacles so that they can avoid them, and it can share the user’s location globally through an IoT platform in case the user gets lost.
Finally, a similar application that makes use of haptic feedback to detect various obstacles in multiple areas was proposed by A. R. See et al. [30]. The proposed application is especially designed for visually impaired and blind people and provides many benefits. The model uses an optimized obstacle detection algorithm to distinguish noise and detect even smaller obstacles at the same time, making the detection more effective.
Table 1 summarizes the aforementioned research papers, presenting the advantages and disadvantages of each. Our proposed model can be integrated into such system architectures so that the obstacles captured by the integrated sensors are displayed on the screen at higher quality, and visually impaired people can interact with them through a haptic device in order to avoid them.

3. Device Installation

The first step is installing “OpenHaptics_Developer_Edition_v3.4.0” and the “H3D API” (H3DApi-Full-2.3.0). The “Geomagic_Touch_Device_Driver_Win_2015.5.26” driver for the haptic device must then be installed. Verify that all of the programs are installed on drive C.

3.1. Installation Step-by-Step

After installing the device and applications, you must configure the haptic device using the Diagnostic Tool. Double-click Geomagic Touch Setup and then follow the steps below:
  • To pair the device, click the Pairing button.
  • A time bar (a green, initially empty bar) appears immediately; press the Pairing button on the back of the device right after that. Verify that, when you press the device button, the time bar does not remain empty.
  • Lastly, you will receive a notification that the device has been properly paired. Select Apply and then OK. As shown below, the indicator turns green once the calibration is finished.
  • In this step, press the two buttons on the haptic device and, without moving them, click Next. Figure 1 displays the outcome of pressing the haptic device’s two buttons.
  • All you need to do in this step is click Next when both indicators turn green.
  • In this phase, you can regulate the forces applied to the haptic device. When you are done, select Next.
  • In this step, you can test the device by pointing the stylus in all directions. Then select Next.
  • In the last step, you can obtain some haptic device metrics. Click X to exit the Diagnostic Tool when you are done.

3.2. How to Execute the Code

Go to the Windows Start menu (any version of Windows) and type “cmd” to launch the Command Prompt. Run it and then enter the following command:
C:\H3D\bin64\H3DLoad.exe "C:\H3D\HapticsProject\depthMap.x3d"
Ensure that the H3D folder, which is automatically generated while installing the H3D API, contains a HapticsProject folder on drive C. The files containing the code must be placed inside this folder. To run the various examples and tests, all you need to do is swap out the images: in both ImageTexture nodes, change url="Surface1.tif" each time to the name of the image you want to render, together with its suffix, which specifies the image type.
The code lines for the two images that you should alter one at a time are shown in Table 2. One serves as the bump map (also known as the image depth map) and the other as the material.
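For convenience, this image substitution and program launch can also be scripted. The following is a minimal sketch, not part of the original tool chain: it assumes that the two ImageTexture nodes of Table 2 are hosted inside a single Appearance by H3D’s DepthMapSurface node (the exact node and field names may differ between H3D versions), that the install paths match those given above, and the helper name run_test is hypothetical.
# generate_scene.py - minimal sketch (assumed layout): write an X3D scene for a
# chosen image and open it with H3DLoad. The DepthMapSurface node and the
# "depthMap" containerField follow the H3D API conventions suggested by Table 2,
# but node/field names may differ between H3D versions.
import subprocess
from pathlib import Path

H3DLOAD = r"C:\H3D\bin64\H3DLoad.exe"
PROJECT = Path(r"C:\H3D\HapticsProject")

SCENE_TEMPLATE = """<Scene>
  <Shape>
    <Appearance>
      <Material/>
      <ImageTexture url="{image}" DEF="IMT" repeatS="false" repeatT="false"/>
      <DepthMapSurface>
        <ImageTexture containerField="depthMap" url="{image}" repeatS="false" repeatT="false"/>
      </DepthMapSurface>
    </Appearance>
    <Box DEF="FLOOR" size="0.95 0.49 0"/>
  </Shape>
</Scene>
"""

def run_test(image_name: str) -> None:
    # Write a scene that uses image_name for both the material and the depth map,
    # then load it with H3DLoad so the haptic device can render it.
    scene_file = PROJECT / "depthMap.x3d"
    scene_file.write_text(SCENE_TEMPLATE.format(image=image_name))
    subprocess.run([H3DLOAD, str(scene_file)], check=True)

if __name__ == "__main__":
    run_test("Surface1.tif")  # swap in any other test texture, e.g., "Surface2.tif"
Each test then reduces to a single call with a different image file, instead of editing the two url attributes by hand.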
To enhance the robustness of the tactile system under real-world usage, real-time monitoring of physiological signals (e.g., SAR, skin temperature) could be integrated. This allows adaptive control of force feedback based on user-specific and environmental variations. Future versions could incorporate fuzzy logic or other soft computing approaches to resolve uncertainty in user perception and ambiguous tactile inputs.

4. Proposed Method

In this section, we present our proposed scenario as an application that has been produced and tested for evaluation. Additionally, the source code of our evaluation scenario is presented in Figure 2 along with the images we used to produce our model.

4.1. Problem Evaluation

This study aimed to enable those with visual impairments to feel and identify various object textures using haptic devices. The initial objective was to create the code from scratch to interact with the haptic device and display visuals on the screen.
The first idea was to use a bump map over the images so that the different shapes form a relief that visually impaired people can recognize. Bump mapping, as used here, is based on white and black, black being the absence of any color or light. The values 0 and 1 were assigned to separate the two colors and give a sense of depth in the image: where the bump map is 0 the surface is raised, and where it is 1 a sense of cavity is produced. The same can also be performed the other way around.
When visually impaired people started to recognize the different textures of the images, a concern arose: the basic shapes were easy to recognize, but what if a texture has more details? The need for more detail in textures led to the idea that every color contains a percentage of white, except black, which is characterized as the absence of color or light. So, if every color has white in it, then a number can be assigned to each color on the bump map. With that number, every color acquires a different height/depth in the image, and as a result, the different textures can be represented in more detail.
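As a minimal illustrative sketch of this idea (not the code used in our experiments), the following script converts a color image into a grayscale bump/depth map in which each pixel’s height is proportional to its percentage of white; the file names, and the use of relative luminance to approximate the “percentage of white”, are assumptions.
# make_bump_map.py - illustrative sketch: map every color to a height between
# 0 (black, the absence of color/light) and 1 (white), producing a grayscale
# depth map that a haptic renderer can use. Requires Pillow and NumPy;
# file names are placeholders.
from PIL import Image
import numpy as np

def color_to_bump_map(src_path: str, dst_path: str) -> None:
    rgb = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float32) / 255.0
    # "Percentage of white" per pixel, approximated here by relative luminance.
    whiteness = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Every color thus receives its own height/depth value in [0, 1];
    # use (1.0 - whiteness) instead if cavities rather than reliefs are desired.
    Image.fromarray((whiteness * 255).astype(np.uint8), mode="L").save(dst_path)

if __name__ == "__main__":
    color_to_bump_map("Surface1.tif", "Surface1_bump.tif")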

4.2. Algorithm Approach

The project’s goal was to develop the ability to recognize various objects using a haptic device. The initial objective involved creating code from scratch to interface with the haptic device and display visuals on the screen. The following code performs this task (Algorithm 1):
Algorithm 1—XML Source Code
<Scene>
  <Shape>
    <Appearance>
      <Material/>
      <ImageTexture url="Surface1.tif" DEF="IMT" repeatS="false" repeatT="false"/>
    </Appearance>
    <Box DEF="FLOOR" size="0.95 0.49 0"/>
  </Shape>
</Scene>
Integration with AI-based spatiotemporal analysis techniques would enable better classification of bump map patterns and depth cues. Moreover, the system could be calibrated for individual users, adjusting tactile resolution based on hand size, touch sensitivity, and interaction style. The architecture can potentially be extended with photon-based image decoding or light-based depth sensing to enrich the detail and accuracy of surface texture rendering. Incorporating functional energy analysis would allow modeling of complex textured surfaces beyond geometry, using biomechanical interaction energy estimation. Surface electromyographic (sEMG) data can be collected and used for real-time adaptation of tactile feedback, reflecting user intent and interaction force. The system is modular and could be scaled for integration into wearable haptic assistive devices, offering tactile assistance in mobile or home-based applications. Monitoring device fatigue using fuzzy divergence or mechanical stress inference could prevent long-term damage and user fatigue during extended use [31].

4.3. Image Patterns

Below, the image patterns that have been built and tested with the haptic device and the proposed algorithm to identify the difference in colors can be observed. The two colors that have been used in this research paper are black and white. In the future, more experiments will be carried out to identify and sense the difference between each color depending on the percentage of white that every color has.
Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 depict the different textures of images depending on black and white patterns. Specifically, Figure 3 and Figure 4 show and represent the different geometrical shapes; Figure 5, Figure 6 and Figure 7 represent lines and other similar shapes.

5. Experimental Results

In this section, the results of the experimental implementation of the images, combined with the algorithmic intervention, are reported. To evaluate our research, a questionnaire (Table 3) was created and given to those who voluntarily participated in the practical implementation of our research. The participants were recruited from the authors’ personal circle so that a first test/impression of the research proposal could be obtained. All the participants were visually impaired. The questionnaire evaluates, on a scale from 1 (not at all) to 5 (very much), how understandable the texture of each image was through the haptic device we used.
As can be seen from the responses recorded in Table 4, some images (textures) are more easily recognizable than others. More specifically, the three most recognizable textures, based on their overall score, are texture 11, which 8 out of 10 participants found very recognizable, texture 9, which 7 out of 10 participants found very recognizable, and texture 2, which 5 out of 10 participants found very recognizable.
On the other hand, the three least recognizable textures, based on their overall score, are texture 12, which 3 out of 10 participants rated as not at all recognizable (a score of 1), texture 10, which 3 out of 10 participants rated as not at all recognizable, and texture 8, which 2 out of 10 participants rated as only slightly recognizable (a score of 2). Based on the findings of this practical evaluation, we can build on the patterns that capture specific colorations best and create a standardization for them based on how specific textures feel to the touch. We should also continue to improve all textures so that they reach the same levels of recognition by people with impaired vision, thereby extending the list of colors that we will be able to standardize.
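As a quick, reproducible check of this ranking (an illustrative aid, not part of the experimental tool chain), the ratings of Table 4 can be tallied per texture with a few lines of Python:
# tally_results.py - rank the textures of Table 4 by total score and count
# how many of the 10 participants gave the top rating (5).
SCORES = {
    "texture 1":  [5, 4, 4, 4, 5, 4, 4, 5, 5, 4],
    "texture 2":  [4, 4, 5, 4, 5, 5, 4, 5, 4, 5],
    "texture 3":  [5, 5, 3, 4, 4, 3, 5, 4, 5, 5],
    "texture 4":  [4, 3, 2, 3, 4, 3, 3, 3, 4, 3],
    "texture 5":  [5, 5, 3, 3, 3, 3, 4, 4, 4, 4],
    "texture 6":  [3, 3, 4, 4, 4, 4, 4, 5, 4, 5],
    "texture 7":  [3, 2, 5, 4, 3, 4, 5, 5, 5, 5],
    "texture 8":  [5, 3, 4, 2, 3, 2, 3, 3, 3, 3],
    "texture 9":  [5, 4, 5, 5, 4, 5, 5, 5, 4, 5],
    "texture 10": [3, 4, 1, 1, 2, 1, 3, 3, 2, 2],
    "texture 11": [5, 5, 5, 5, 4, 5, 5, 5, 4, 5],
    "texture 12": [3, 1, 1, 1, 2, 2, 2, 3, 3, 2],
}

for name, ratings in sorted(SCORES.items(), key=lambda kv: sum(kv[1]), reverse=True):
    print(f"{name:<11} total={sum(ratings):2d}  rated 5 by {ratings.count(5)}/10 participants")
# The totals confirm the ranking: textures 11 (48), 9 (47), and 2 (45) score highest,
# while textures 8 (31), 10 (22), and 12 (20) score lowest.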

6. Conclusions

It is commonly recognized that, while technologies such as cloud computing, big data, and the IoT present numerous opportunities for assistive applications, they also have several drawbacks, particularly in terms of latency, management, and security; the Tactile Internet is expected to mitigate many of these by enabling real-time haptic interaction over the network.
In this study, we implemented and evaluated a surface reading model in which image textures, encoded as bump maps, are rendered through a haptic device so that people with reduced vision can feel and identify them. The proposed XML-based program, built on the H3D API and OpenHaptics and tested with the PHANTOM Touch and PHANTOM Omni devices, displays the visuals on the screen and allows interaction with them through the haptic device. Because the model can be served through an IoT and cloud environment, it can be integrated into existing assistive architectures to support visually impaired users remotely.
The experimental evaluation with visually impaired participants showed that some of the proposed textures are considerably more recognizable than others, providing a first basis for standardizing specific colorations as distinct tactile patterns. Future work will focus on improving the less recognizable textures, extending the experiments beyond black and white by exploiting each color’s percentage of white, and integrating the model into cloud-connected IoT haptic systems, while addressing the security and management challenges associated with data stored and streamed in the cloud.

Author Contributions

Conceptualization, A.P.P., C.L.S. and V.A.M.; methodology, A.P.P., C.L.S., V.A.M. and G.K.; software, A.P.P., C.L.S. and V.A.M.; validation, A.P.P., C.L.S., V.A.M. and G.K.; formal analysis, A.P.P., C.L.S., V.A.M. and G.K.; investigation, A.P.P., C.L.S. and V.A.M.; resources, A.P.P., C.L.S. and V.A.M.; data curation, A.P.P., C.L.S. and V.A.M.; writing—original draft preparation, A.P.P., C.L.S. and V.A.M.; writing—review and editing, A.P.P., C.L.S. and V.A.M.; visualization, A.P.P., C.L.S. and V.A.M.; supervision, K.E.P. and Y.I.; project administration, K.E.P. and G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and feedback, which were extremely helpful in improving the quality of the paper. Also, the whole work is a part of the PhD project conducted by A. P. Plageras.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Emami, M.; Bayat, A.; Tafazolli, R.; Quddus, A. A Survey on Haptics: Communication, Sensing and Feedback. IEEE Commun. Surv. Tutor. 2025, 27, 2006–2050. [Google Scholar] [CrossRef]
  2. Antonakoglou, K.; Xu, X.; Steinbach, E.; Mahmoodi, T.; Dohler, M. Toward Haptic Communications Over the 5G Tactile Internet. IEEE Commun. Surv. Tutor. 2018, 20, 3034–3059. [Google Scholar] [CrossRef]
  3. Minopoulos, G.; Kokkonis, G.; Psannis, K.E.; Ishibashi, Y. A Survey on Haptic Data Over 5G Networks. Int. J. Future Gener. Commun. Netw. 2019, 12, 37–54. [Google Scholar] [CrossRef]
  4. IEEE P1918 Tactile Internet Emerging Technologies Subcommittee. Available online: http://ti.committees.comsoc.org (accessed on 22 February 2025).
  5. Oteafy, S.M.A.; Hassanein, H.S. Leveraging Tactile Internet Cognizance and Operation via IoT and Edge Technologies. Proc. IEEE 2019, 107, 364–375. [Google Scholar] [CrossRef]
  6. Aijaz, A.; Dohler, M.; Aghvami, A.H.; Friderikos, V.; Frodigh, M. Realizing the Tactile Internet: Haptic Communications over Next Generation 5G Cellular Networks. IEEE Wirel. Commun. 2017, 24, 82–89. [Google Scholar] [CrossRef]
  7. Cao, H. Technical Report: What is the next innovation after the Internet of Things? People in Motion Lab (Cisco Big Data Analytics), Department of Geodesy and Geomatics Engineering, University of New Brunswick. arXiv 2017, arXiv:1708.07160. [Google Scholar]
  8. Fettweis, G. The Tactile Internet: Applications and Challenges. IEEE Veh. Technol. Mag. 2014, 9, 64–70. [Google Scholar] [CrossRef]
  9. Pacchierotti, C.; Sinclair, S.; Solazzi, M.; Frisoli, A.; Hayward, V.; Prattichizzo, D. Wearable Haptic Systems for the Fingertip and the Hand: Taxonomy, Review, and Perspectives. IEEE Trans. Haptics 2017, 10, 580–600. [Google Scholar] [CrossRef]
  10. Simsek, M.; Aijaz, A.; Dohler, M.; Sachs, J.; Fettweis, G. The 5G-Enabled Tactile Internet: Applications, requirements, and architecture. In Proceedings of the 2016 IEEE Wireless Communications and Networking Conference, Doha, Qatar, 1 April 2016; pp. 1–6. [Google Scholar] [CrossRef]
  11. The NIST Definition of Cloud Computing; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011. [CrossRef]
  12. Skourletopoulos, G.; Mavromoustakis, C.X.; Mastorakis, G.; Batalla, J.M.; Sahalos, J.N. An Evaluation of Cloud-Based Mobile Services with Limited Capacity: A Linear Approach. Soft Comput. J. 2016, 21, 4523–4530. [Google Scholar] [CrossRef]
  13. Garg, S.K.; Versteeg, S.; Buyya, R. A framework for ranking of cloud computing services. Future Gener. Comput. Syst. 2013, 29, 1012–1023. [Google Scholar] [CrossRef]
  14. Haghighat, M.; Zonouz, S.; Abdel-Mottaleb, M. CloudID: Trustworthy cloud-based and cross-enterprise biometric identification. Expert Syst. Appl. 2015, 11, 7905–7916. [Google Scholar] [CrossRef]
  15. Kumar, J. Integration of Artificial Intelligence, Big Data, and Cloud Computing with Internet of Things. In Convergence of Cloud with AI for Big Data Analytics: Foundations and Innovation; Wiley: Hoboken, NJ, USA, 2023; pp. 1–12. [Google Scholar] [CrossRef]
  16. Stergiou, C.; Psannis, K.E.; Gupta, B.; Ishibashi, Y. Security, Privacy & Efficiency of Sustainable Cloud Computing for Big Data & IoT. Sustain. Comput. Inform. Syst. 2018, 19, 174–184. [Google Scholar]
  17. Hilbert, M.; López, P. The World’s Technological Capacity to Store, Communicate, and Compute Information. Science 2011, 332, 60–65. [Google Scholar] [CrossRef]
  18. Fu, Z.; Ren, K.; Shu, J.; Sun, X.; Huang, F. Enabling Personalized Search over Encrypted Outsourced Data with Efficiency Improvement. IEEE Trans. Parallel Distrib. Syst. 2015, 27, 2546–2559. [Google Scholar] [CrossRef]
  19. Stergiou, C.; Psannis, K.E. Efficient and Secure Big Data delivery in Cloud Computing. Multimed. Tools Appl. 2017, 76, 22803–22822. [Google Scholar] [CrossRef]
  20. Galego, N.M.C.; Martinho, D.S.; Duarte, N.M. Cloud computing for big data analytics: How cloud computing can handle processing large amounts of data and improve real-time data analytics. Procedia Comput. Sci. 2024, 237, 297–304. [Google Scholar] [CrossRef]
  21. Jiménez, M.F.; Mello, R.C.; Bastos, T.; Frizera, A. Assistive locomotion device with haptic feedback for guiding visually impaired people. Med. Eng. Phys. 2020, 80, 18–25. [Google Scholar] [CrossRef]
  22. Mueen, A.; Awedh, M.; Zafar, B. Multi-obstacle aware smart navigation system for visually impaired people in fog connected IoT-cloud environment. Health Inform. J. 2022, 28, 146045822211126. [Google Scholar] [CrossRef]
  23. Abburu, K.S.; Rout, K.K.; Mishra, S. Haptic Display Unit: IoT For Visually Impaired. In Proceedings of the 2018 International Conference on Information Technology (ICIT), Bhubaneswar, India, 19–21 December 2018; pp. 244–247. [Google Scholar] [CrossRef]
  24. Chaudary, B.; Pohjolainen, S.; Aziz, S.; Arhippainen, L.; Pulli, P. Teleguidance-based remote navigation assistance for visually impaired and blind people—Usability and user experience. Virtual Real. 2023, 27, 141–158. [Google Scholar] [CrossRef]
  25. Ganesan, J.; Azar, A.T.; Alsenan, S.; Kamal, N.A.; Qureshi, B.; Hassanien, A.E. Deep Learning Reader for Visually Impaired. Electronics 2022, 11, 3335. [Google Scholar] [CrossRef]
  26. Bouteraa, Y. Design and Development of a Wearable Assistive Device Integrating a Fuzzy Decision Support System for Blind and Visually Impaired People. Micromachines 2021, 12, 1082. [Google Scholar] [CrossRef] [PubMed]
  27. Poggi, M.; Mattoccia, S. A wearable mobility aid for the visually impaired based on embedded 3D vision and deep learning. In Proceedings of the 2016 IEEE Symposium on Computers and Communication (ISCC), Messina, Italy, 27–30 June 2016; pp. 208–213. [Google Scholar] [CrossRef]
  28. Rao, S.; Singh, V.M. Computer Vision and IoT Based Smart System for Visually Impaired People. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 552–556. [Google Scholar] [CrossRef]
  29. Farooq, M.S.; Shafi, I.; Khan, H.; Díez, I.D.; Breñosa, J.; Espinosa, J.C.; Ashraf, I. IoT Enabled Intelligent Stick for Visually Impaired People for Obstacle Recognition. Sensors 2022, 22, 8914. [Google Scholar] [CrossRef]
  30. See, A.R.; Costillas, L.V.M.; Advincula, W.D.C.; Bugtai, N.T. Haptic Feedback to Detect Obstacles in Multiple Regions for Visually Impaired and Blind People. Sens. Mater. 2021, 33, 1. [Google Scholar] [CrossRef]
  31. Pratticò, D.; Laganà, F.; Oliva, G.; Fiorillo, A.S.; Pullano, S.A.; Calcagno, S.; De Carlo, D.; La Foresta, F. Integration of LSTM and U-Net Models for Monitoring Electrical Absorption With a System of Sensors and Electronic Circuits. IEEE Trans. Instrum. Meas. 2025, 74, 2533311. [Google Scholar] [CrossRef]
Figure 1. Device calibration.
Figure 2. XML code and textures used for the tests.
Figure 3. Texture with shapes and lines. (a): Black squares, (b): Black circles, (c): Open circles, (d): Cross shapes, (e): Open squares, (f): Vertical lines, (g): Triangle shapes, (h): Curved lines, (i): Diagonal lines, and (j): Straight lines.
Figure 4. Texture with different line directions and distances. (a): Dense horizontal, (b): Wide horizontal, (c): Dense diagonal, (d): Wide diagonal, (e): Dense vertical, and (f): Crossed diagonals.
Figure 5. Texture with different shapes. (a): Checkered squares, (b): Open squares, (c): Triangle rows, and (d): Open ovals.
Figure 6. Texture with different shapes and vertical and horizontal lines. (a): Large X shapes, (b): Vertical lines, (c): Horizontal lines, and (d): Black circles.
Figure 7. Texture with different line directions. (a): Horizontal lines, (b): Right-leaning diagonals, (c): Vertical lines, and (d): V-pattern lines.
Table 1. Comparative table of assistive technologies for the visually impaired.
Ref. | Authors | Technology Used | Advantages | Disadvantages
[21] | M. Jiménez et al. | Haptic feedback via smart walker | Intuitive real-time guidance; natural feedback system | Limited on uneven terrain; may be bulky to carry
[22] | A. Mueen et al. | Fog-assisted IoT with Cloud and NS-3 simulation | Smart decision-making; handles multiple obstacle types; accurate navigation | Requires fog + cloud infrastructure; not easily deployable without a network
[23] | K. S. Abburu et al. | IoT-based haptic display (refreshable Braille) | Converts messages to Braille; integrates with smartphones | Requires user training; not focused on navigation or obstacle detection
[24] | B. Chaudary et al. | Teleguidance with haptic actuators | Innovative vibration feedback; high user experience quality | Dependent on the remote guide; possible communication delays
[25] | J. Ganesan et al. | CNN + LSTM for text/image captioning | High accuracy; converts printed text/images to voice | Not a navigation tool; limited to reading functionality
[26] | Y. Bouteraa | Wearable device with Fuzzy Logic + ROS | Combines voice and haptic guidance; decision-making using a fuzzy system; modular and scalable via ROS | Complex system; requires careful calibration
[27] | M. Poggi & S. Mattoccia | 3D vision + deep learning with CNN | Accurate depth analysis; obstacle classification | Needs a special RGB-D camera; high energy consumption
[28] | S. Rao & V. M. Singh | Smart shoe with sensors + smartphone app | Discreet integration (in footwear); real-time obstacle alerts | Limited vertical obstacle detection; may miss upper-body-level hazards
[29] | M. S. Farooq et al. | Smart IoT-based stick with audio/haptic + GPS | Portable and low-cost; detects water and obstacles; shares the user's location if lost | Limited detection angle; requires the user to carry the stick
[30] | A. R. See et al. | Haptic feedback + optimized obstacle detection | Sensitive to small objects; good noise filtering | May be too sensitive; calibration is crucial
Table 2. Source code to be changed.
Source Code
<ImageTexture url="Surface1.tif" DEF="IMT" repeatS="false" repeatT="false"/>
<ImageTexture containerField="depthMap" url="Surface1.tif" repeatS="false" repeatT="false"/>
Table 3. Questionnaire given to the participants: each texture is rated on a scale from 1 (not at all) to 5 (very much).
Texture | 1 | 2 | 3 | 4 | 5
texture 1 (Figure 5a)
texture 2 (Figure 5b)
texture 3 (Figure 5c)
texture 4 (Figure 5d)
texture 5 (Figure 6a)
texture 6 (Figure 6b)
texture 7 (Figure 6c)
texture 8 (Figure 6d)
texture 9 (Figure 7a)
texture 10 (Figure 7b)
texture 11 (Figure 7c)
texture 12 (Figure 7d)
Table 4. Experimental results.
Texture | Candidate 1 | Candidate 2 | Candidate 3 | Candidate 4 | Candidate 5 | Candidate 6 | Candidate 7 | Candidate 8 | Candidate 9 | Candidate 10
texture 1 (Figure 5a) | 5 | 4 | 4 | 4 | 5 | 4 | 4 | 5 | 5 | 4
texture 2 (Figure 5b) | 4 | 4 | 5 | 4 | 5 | 5 | 4 | 5 | 4 | 5
texture 3 (Figure 5c) | 5 | 5 | 3 | 4 | 4 | 3 | 5 | 4 | 5 | 5
texture 4 (Figure 5d) | 4 | 3 | 2 | 3 | 4 | 3 | 3 | 3 | 4 | 3
texture 5 (Figure 6a) | 5 | 5 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4
texture 6 (Figure 6b) | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 5 | 4 | 5
texture 7 (Figure 6c) | 3 | 2 | 5 | 4 | 3 | 4 | 5 | 5 | 5 | 5
texture 8 (Figure 6d) | 5 | 3 | 4 | 2 | 3 | 2 | 3 | 3 | 3 | 3
texture 9 (Figure 7a) | 5 | 4 | 5 | 5 | 4 | 5 | 5 | 5 | 4 | 5
texture 10 (Figure 7b) | 3 | 4 | 1 | 1 | 2 | 1 | 3 | 3 | 2 | 2
texture 11 (Figure 7c) | 5 | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 4 | 5
texture 12 (Figure 7d) | 3 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 3 | 2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
