Sustainability
  • Article
  • Open Access

15 September 2022

Image Recognition-Based Architecture to Enhance Inclusive Mobility of Visually Impaired People in Smart and Urban Environments

1 ADiT-LAB, Instituto Politécnico de Viana do Castelo, 4900-367 Viana do Castelo, Portugal
2 Departamento de Engenharia Mecânica, ISEP Politécnico do Porto, 4249-015 Porto, Portugal
3 INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
* Author to whom correspondence should be addressed.

Abstract

The demographic growth witnessed in recent years, and expected to continue in the years to come, raises emerging challenges worldwide regarding urban mobility, both in transport and in pedestrian movement. The sustainable development of cities is also intrinsically linked to urban planning and mobility strategies. Navigation and orientation in cities are tasks we perform with great frequency today, especially in unfamiliar cities and places. Precision remains a big challenge for current navigation solutions, especially between buildings in city centers. In this paper, we focus on visually impaired people and how they can obtain information about where they are when, for some reason, they have lost their orientation. The challenges in this situation, and for this population segment, are naturally much greater. GPS, the technique most widely used for navigation in outdoor environments, offers neither the precision needed nor the most useful type of content: the information a visually impaired person needs when lost is not the name of the street or the coordinates but a reference point. Therefore, this paper proposes a conceptual architecture for the outdoor positioning of visually impaired people using the Landmark Positioning approach.

1. Introduction

In recent decades, we have witnessed exponential demographic growth in cities, which substantially increases the difficulty of urban planning in terms of mobility and presents an emerging challenge in all countries [1]. Whether in transport or in pedestrian navigation, mobility has taken on a new dimension and represents a problem that many researchers address with a focus on the sustainable development of cities. Infrastructures and city design are constantly changing to respond to the daily displacement of millions of people from the outskirts to city centers [2]. Sustainable development will be difficult to achieve with a large number of private vehicles and the current public transport network [3]. The mobility as a service (MaaS) paradigm emerges as an innovative concept and a strong ally towards sustainability: it is user-centric and combines different means of transport to minimize CO2 emissions while maximizing the use of sustainable transportation and/or active mobility.
Navigation within cities is a daily activity inherent to all citizens, performed several times throughout the day for commuting to work, school, shopping, and leisure, among other purposes. For anyone in a familiar city, navigation turns out to be intuitive, something simple that poses no significant difficulty. However, when we are in an unknown city, orientation and mobility become more relevant. The orientation task in a city involves several elements, such as positioning, navigation, and direction. Urban spatial orientation is, in turn, intrinsically linked to mobility, which can be described as someone's ability to move in an environment, known or not, possibly containing several obstacles, and which depends on how citizens are informed about their position or how to navigate to the desired location [4]. While urban mobility in general already faces several challenges, such as precision, additional questions arise when we talk about visually impaired people (VIP), immediately noticeable in the different ways VIP use their senses to orient themselves [5]. The World Health Organization (WHO), in its report on “Visual Impairment 2010”, refers to approximately 320 thousand people per million with some visual impairment worldwide, and about 47 thousand people per million considered blind [6]. In Portugal, the 2001 reports indicated 160 thousand visually impaired individuals [7], while the 2011 reports point to 900 thousand visually impaired individuals, 28 thousand of whom are blind [8]. Although the mobility challenges for this segment are varied, we will focus, throughout this paper, on the problem of positioning when a VIP loses track of where they are, which represents one of the main challenges they experience in their daily routines.
Receiving information that they are close to a particular store, near a given known intersection, etc., can be decisive and make all the difference for a VIP’s orientation [9,10].
One technique that could be used is GPS combined with the information on Google Maps. Still, the information usually available does not have the necessary refinement and detail, because the points of interest (POIs) inserted are typically conceived more globally and not with all the detail VIP need. This paper proposes a conceptual framework for an outdoor positioning system for VIP that uses Landmark Positioning, an approach that fits within the broad concept of Visual Positioning. The framework assumes a VIP with a smartphone camera that captures images of the surroundings as the VIP walks, and a backend server that processes those images to determine the user's position, which is then communicated to the VIP. The approach mainly uses images of landmarks that represent locations in cities and, for that reason, constitute helpful information for a VIP [11].
The paper is organized as follows. Section 1 describes the problem context and its main implications for defining a proper solution. Section 2 presents a literature review on smart mobility and its main challenges, trends, and inclusiveness. Section 3 presents related work regarding VIP. Section 4 presents the proposed architecture for the Outdoor Positioning for Visually Impaired People using Landmarks (OPIL) framework and model. Finally, Section 5 addresses the conclusions and future work.

4. Proposed OPIL Framework

4.1. Overview and Architecture

The architecture proposed for Outdoor Positioning for Visually Impaired People using Landmarks (OPIL), presented in Figure 7, aims to allow a VIP to obtain information on the location where they are positioned, one of the scenarios that represents a challenge for this segment of people when moving autonomously in a city. The architecture comprises two main components: a mobile application that the VIP must use and a backend server that performs the image recognition processing and produces a helpful description that can be transmitted to the VIP so they can orient themselves.
Figure 7. Core components of the proposed framework.
In the architecture proposed for OPIL, it is assumed that each VIP has a mobile phone and uses an app that is one of the components of the system. This app allows capturing an image through the mobile phone's camera, which helps collect information about where the user is. The app must take into account battery consumption and the processing power it demands from the mobile phone. The user must keep the mobile phone steady for some time and may only move it to another position once they receive the result sent by the backend after it processes their positioning (in some cases, the location may not be identified). Whenever the backend receives an image, it is compared with all the images in a prepopulated, georeferenced image database. If the image is successfully identified, the algorithm returns a description phrased so that it is helpful to the VIP.
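The matching step described above can be sketched as follows. This is a minimal, hypothetical illustration in Python: it compares a query image's normalized grey-level histogram against a prepopulated, georeferenced database using a plain L1 distance, whereas the framework itself envisions CNN-based recognition; the class, threshold, and descriptions are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LandmarkImage:
    """One entry in the prepopulated, georeferenced image database."""
    histogram: list   # normalized grey-level histogram of the landmark photo
    lat: float
    lon: float
    description: str  # VIP-friendly description, e.g. "in front of the city hall"

def match_landmark(query_hist, database, threshold=0.25):
    """Return the description of the closest database image, or None.

    Similarity here is a plain L1 distance between normalized histograms;
    a real system would compare CNN embeddings instead.
    """
    best, best_dist = None, float("inf")
    for entry in database:
        dist = sum(abs(q - h) for q, h in zip(query_hist, entry.histogram))
        if dist < best_dist:
            best, best_dist = entry, dist
    # Reject matches that are too far from anything in the database.
    return best.description if best is not None and best_dist <= threshold else None
```

When no database entry is close enough, the function returns `None`, which corresponds to the "location may not be identified" case reported back to the VIP.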

4.2. Components

4.2.1. Mobile Application

The mobile application represents the process's starting and ending point, as seen in Figure 8. The VIP triggers the process by opening the app and pointing the camera at their surroundings, allowing the image to be collected. The image capture module (ICM) is responsible for presenting the UI that captures the image, collecting it, and sending it to the backend server communication module (BSCM), which communicates over the Internet with the backend server that will process the image (this process is described in more detail in the next section). As soon as there is a response from the backend server, the callback method module (CMM) is triggered automatically to process the response and send it to the notification module (NM), which is responsible for interacting with the VIP to inform them of the result about their positioning.
Figure 8. The architecture of the mobile app component.
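The module pipeline above (ICM, BSCM, CMM, NM) can be sketched as plain callables. This Python sketch assumes a synchronous round trip and injects the camera, the network transport, and the speech output as functions, since a real app would instead use the platform's camera API and an asynchronous HTTP client; all function names are illustrative.

```python
def capture_image(camera):
    """ICM: grab the current camera frame (bytes)."""
    return camera()

def send_to_backend(image, transport):
    """BSCM: ship the image to the backend server.

    In a real app this would be an asynchronous HTTP POST over the Internet.
    """
    return transport(image)

def on_backend_response(response):
    """CMM: callback fired when the backend reply arrives."""
    return response.get("description") or "Location could not be identified"

def notify_user(message, speak):
    """NM: voice/vibration feedback to the VIP."""
    speak(message)
    return message

def locate(camera, transport, speak):
    """End-to-end flow: capture -> send -> callback -> notify."""
    image = capture_image(camera)
    response = send_to_backend(image, transport)
    return notify_user(on_backend_response(response), speak)
```

An empty backend response falls through to the "not identified" message, matching the unidentified-location case described in Section 4.1.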

4.2.2. Backend Server and Proposed Algorithm

The proposed algorithm for image recognition is divided into three phases (Figure 9). In the initial phase, datasets are obtained where all points of interest necessary for the project (from different angles, light levels, weather conditions, etc.) are photographed and identified. The second phase trains the model using a convolutional neural network. In this phase, the identification attributes are the images of the photographed objects, and the target attribute is the identification of these objects. The final phase corresponds to the trained model’s use in identifying new images of the objects.
Figure 9. The 3 phases of the algorithm for the recognition of points of interest.
The first phase corresponds to data collection, where images are gathered; the identification model is later trained using those same pictures. The collection must contain an equal number of images with and without points of interest so that the model's training is not biased, and the images without points of interest should be similar to those with them. All images must be resized to the same dimensions. The image distribution should be about 80% training and 20% test images. Changing image properties, such as opacity, converting them to black and white, or reducing the resolution, may be necessary to reduce model training time at the expense of accuracy. An image annotation tool may be required to identify and select points of interest in the photos manually. Some examples of these tools are LabelImg (https://github.com/tzutalin/labelImg (accessed on 1 July 2022)) and OpenLabeling (https://github.com/Cartucho/OpenLabeling (accessed on 1 July 2022)).
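The balanced 80/20 split described above can be sketched with a small helper; `balanced_split` is a hypothetical name, and the helper truncates both classes to the same size and shuffles deterministically before splitting so that training is not biased towards either class.

```python
import random

def balanced_split(with_poi, without_poi, train_frac=0.8, seed=42):
    """Split a balanced dataset into ~80% training and ~20% test images.

    Both lists of image paths are truncated to the same length so the classes
    stay balanced (equal numbers of images with and without points of interest).
    Returns two lists of (path, label) pairs.
    """
    n = min(len(with_poi), len(without_poi))
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    train, test = [], []
    for label, paths in (("poi", with_poi[:n]), ("no_poi", without_poi[:n])):
        shuffled = paths[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train += [(p, label) for p in shuffled[:cut]]
        test += [(p, label) for p in shuffled[cut:]]
    return train, test
```

Splitting each class separately (a stratified split) guarantees the 80/20 ratio holds within both classes, not just overall.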
The second phase corresponds to the training of the convolutional neural network. In this phase, the model is trained to differentiate the landmarks based on their geometric-functional typologies, such as punctual elements (e.g., monuments, stores, schools, city hall), linear elements (e.g., rivers, roads, sidewalks), or areal elements (e.g., squares, green parks). When creating the model, there are some general guidelines to follow. Convolution and pooling should be alternated, with convolution used first and pooling at the end before connecting to the neural network (e.g., convolution–pooling–convolution–pooling–neural network–output). During convolution, there is little need to use a filter larger than a three-by-three matrix. It is advisable to use max pooling throughout and average pooling at the end, before connecting to the neural network layer. Generally, the more layers a neural network has, the greater its accuracy and its execution time. Preprocessing of the images should be done only if necessary to increase the accuracy or speed of the model. Creating new pictures from existing images (data augmentation) almost always helps improve a model's accuracy. The number of nodes in the first intermediate layer of the neural network should be half of the nodes in the input layer, and the second layer should have half of those in the first layer. The number of nodes in the neural network's output layer must equal the number of classes identified by the network. The number of nodes in the middle layers of a neural network should follow a geometric progression (2, 4, 8, 16, 32, …).
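The sizing guidelines above can be condensed into a small helper that assembles an abstract layer specification: alternating 3×3 convolutions and pooling (max pooling in the body, average pooling at the end), followed by dense layers that halve in size until the class count is reached. The tuple layout and function name are illustrative; a real implementation would map each entry to, for instance, Keras layers.

```python
def build_layer_spec(input_nodes, num_classes):
    """Assemble an abstract CNN layer specification following the guidelines:
    3x3 convolutions alternated with pooling, average pooling last, then dense
    layers halving geometrically down to one output node per class."""
    spec = [
        ("conv", 3, 3), ("maxpool", 2),   # 3x3 filters; max pooling in the body
        ("conv", 3, 3), ("avgpool", 2),   # average pooling before the dense part
        ("flatten",),
    ]
    nodes = input_nodes // 2              # first hidden layer: half the input nodes
    while nodes > num_classes:
        spec.append(("dense", nodes))
        nodes //= 2                       # each hidden layer halves the previous one
    spec.append(("dense", num_classes))   # output layer: one node per class
    return spec
```

For a 1024-node input and 8 landmark classes, this yields dense layers of 512, 256, 128, 64, 32, and 16 nodes before the 8-node output, matching the halving rule in the text.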
The third phase corresponds to the use of the model. A smartphone application must be created to obtain images from the camera and send the data to an external API. If access to the Internet via mobile data is impossible, then the pretrained model must be integrated into the application. To use the model on Android, the pretrained model must be in the “.pb” format (TensorFlow) and must be converted to this format if it is in the “.h5” format (Keras). If the solution is used in more than one geographical area, it is advisable to add the GPS coordinates to the feature matrix of the images that serve as input data to the convolutional neural network; in that case, the smartphone application must also record the GPS position when capturing images, and this position must be used when predicting the point of interest.
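Appending GPS coordinates to the image feature matrix, as suggested above, can be sketched as follows. The helper name and the min-max normalization of latitude/longitude to [0, 1] over their valid ranges are assumptions made for illustration.

```python
def features_with_gps(pixel_features, lat, lon):
    """Append normalized GPS coordinates to an image's flattened feature vector,
    so the network can disambiguate similar-looking landmarks across areas.

    Latitude is scaled from [-90, 90] and longitude from [-180, 180] to [0, 1],
    keeping the extra features in the same range as normalized pixel values.
    """
    return list(pixel_features) + [(lat + 90.0) / 180.0, (lon + 180.0) / 360.0]
```

The same transformation must be applied both when building the training feature matrix and when the smartphone queries the model, so that train-time and inference-time inputs stay consistent.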
The model's usefulness will depend on the imageability of the urban environment, since, according to Lynch, an imageable city has, in principle, a better degree of legibility [71]. Lynch also states, in his 1960 work [71], that imageability is the “quality in a physical object which gives it a high probability of evoking a strong image in any given observer. It is that shape, color, or arrangement which facilitates the making of vividly identified, powerfully structured, highly useful mental images of the environment”.

4.3. Framework Security Aspects

The proposed framework relies on mobile devices and other digital technologies, namely cloud computing, for its operation. The technological development and sophistication of such devices usually reveal unexpected and underestimated security vulnerabilities. As reliance on technology increases, it is also necessary to increase the efforts to protect technological resources and infrastructure, ensuring that they operate safely and without interruption.
As the amount of data collected grows, new systems and applications are integrated, and the number of users increases, many of whom lack computer skills, the potential occurrence of information security incidents, breaches, and threats also increases. Thus, it is almost mandatory to have adequate security services, systems, and good practices to manage all the interactions within the framework; it is fundamental to have mechanisms that ensure the confidentiality, integrity, and availability (CIA) properties of the framework's operation and information [72].
As the proposed framework relies on a smartphone for capturing landmarks and surroundings, on the Internet for bidirectional communication, and on cloud computing for the backend server operation and image comparison, the CIA properties must also be guaranteed. To guarantee these properties in the proposed framework, a cybersecurity layer in the form of a mobile threat defense (MTD) system must be considered. MTD can be viewed as sophisticated, dynamic protection against cyber threats targeted at mobile devices, with protection applied to devices, networks, and applications [73]. This will maintain the security of the data generated and received by the smartphone (or another mobile device), protecting against malicious applications, network attacks, and device vulnerabilities [74].
For the backend server, a cloud service system can be used. The information and data that the server manages and stores are sensitive: if the CIA properties are not met, there can be a high-level impact on the users and organizations that use the proposed framework, possibly causing a severe or catastrophic adverse effect on organizational operations, organizational assets, and individual users. It is suggested to use security as a service (SecaaS) provided by the cloud provider. This is a package of security services offered by a service provider that offloads much of the security responsibility from an enterprise to the security service provider. The services typically provided include authentication, antivirus, antimalware/antispyware, intrusion detection, and security event management [75]. SecaaS is considered a segment of the software as a service (SaaS) offering of a cloud provider. More straightforwardly, the Cloud Security Alliance defines SecaaS as the provision of security applications and services via the cloud, either to cloud-based infrastructure and software or from the cloud to the customers' on-premises systems [76]. Among the SecaaS categories of service identified by the Cloud Security Alliance are identity and access management and data loss prevention. The combination of the systems and services referred to above will improve the level of confidence in using the framework and can be viewed as an adequate strategy to secure both the end user, who relies on mobile devices, and the server-side data and information (at rest and in use).
Cybersecurity is critical nowadays because safety, security, and trust are inseparable. The relative insecurity of almost every connected thing is top of mind due to the heightened awareness of the impact of cyberattacks and breaches, highlighted in the media and in popular-culture depictions of nefarious hacking exploits. This makes achieving security from the outset extremely important, to avoid becoming one of the computing services and applications highlighted by such media attention [77].

5. Conclusions and Future Work

Urban mobility is currently in a period of massive evolution due to the digitalization of society, the effects of the COVID-19 pandemic, global economic dynamics, and the ongoing concentration of people in urban areas. Mobility challenges are among the most visible and notable elements that profoundly affect urban metabolism and its resulting effects. The solutions being developed to ensure more inclusive and universal access to the mobility system are essential to guarantee that visually impaired people can benefit from this reality and substantially improve their daily commuting activities and overall life in the urban ecosystem.
The presented solution addresses a specific problem: the difficulty of orientation experienced by a blind or visually impaired person who loses their direction in an urban environment. Leveraging the ubiquity of mobile phones, the solution assumes the existence of one of these devices to capture an image of the environment surrounding the user and, through image recognition techniques, obtain the user's location by comparison against a georeferenced image database and a trained model.
Some factors still need to be addressed, such as reducing the time to acquire data and maintaining performance as the number of users employing the solution simultaneously in a city increases. These factors are decisive for the success of the solution.
This solution aims to contribute to more inclusive mobility in urban environments. Its main limitation lies in scalability to large areas, although the potential for historic centers or smaller cities is promising. In the future, we intend to create a functional prototype that allows tests to be run in a controlled environment. Evolving towards an approach that contemplates the cooperation of citizens, through crowdsourcing, is a possible way to overcome the scalability problems the solution may currently have.

Author Contributions

Formal analysis, A.A.; Investigation, S.P., A.A., J.G., R.L. and L.B.; Methodology, A.A.; Supervision, S.P.; Writing—original draft, S.P., R.L. and L.B.; Writing—review & editing, S.P. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the European Regional Development Fund (ERDF) through the Regional Operational Program North 2020, within the scope of Project TECH—Technology, Environment, Creativity and Health, Norte-01-0145-FEDER-000043.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Giduthuri, V.K. Sustainable Urban Mobility: Challenges, Initiatives and Planning. Curr. Urban Stud. 2015, 3, 261–265. [Google Scholar] [CrossRef][Green Version]
  2. Bezbradica, M.; Ruskin, H. Understanding Urban Mobility and Pedestrian Movement. In Smart Urban Development; IntechOpen: London, UK, 2019. [Google Scholar] [CrossRef]
  3. Esztergár-Kiss, D.; Mátrai, T.; Aba, A. MaaS framework realization as a pilot demonstration in Budapest. In Proceedings of the 2021 7th International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS), Heraklion, Greece, 16–17 June 2021; pp. 1–5. [Google Scholar] [CrossRef]
  4. Riazi, A.; Riazi, F.; Yoosfi, R.; Bahmeei, F. Outdoor difficulties experienced by a group of visually impaired Iranian people. J. Curr. Ophthalmol. 2016, 28, 85–90. [Google Scholar] [CrossRef] [PubMed]
  5. Lakde, C.K.; Prasad, D.P.S. Review Paper on Navigation System for Visually Impaired People. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4, 166–168. [Google Scholar] [CrossRef]
  6. WHO. Blindness and Visual Impairment. Available online: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed on 29 July 2022).
  7. ANACOM. ACAPO—Associação dos Cegos e Amblíopes de Portugal. Available online: https://www.anacom.pt/render.jsp?categoryId=36666 (accessed on 29 July 2022).
  8. ACAPO. Deficiência Visual. Available online: http://www.acapo.pt/deficiencia-visual/perguntas-e-respostas/deficiencia-visual#quantas-pessoas-com-deficiencia-visual-existem-em-portugal-202 (accessed on 29 July 2022).
  9. Brito, D.; Viana, T.; Sousa, D.; Lourenço, A.; Paiva, S. A mobile solution to help visually impaired people in public transports and in pedestrian walks. Int. J. Sustain. Dev. Plan. 2018, 13, 281–293. [Google Scholar] [CrossRef]
  10. Paiva, S.; Gupta, N. Technologies and Systems to Improve Mobility of Visually Impaired People: A State of the Art. In EAI/Springer Innovations in Communication and Computing; Springer: Berlin/Heidelberg, Germany, 2020; pp. 105–123. [Google Scholar] [CrossRef]
  11. Heiniz, P.; Krempels, K.H.; Terwelp, C.; Wuller, S. Landmark-based navigation in complex buildings. In Proceedings of the 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, NSW, Australia, 13–15 November 2012. [Google Scholar] [CrossRef]
  12. Giffinger, R.; Fertner, C.; Kramar, H.; Pichler-Milanovic, N.Y.; Meijers, E. Smart Cities Ranking of European Medium-Sized Cities; Centre of Regional Science, Universidad Tecnológica de Viena: Vienna, Austria, 2007. [Google Scholar]
  13. Arce-Ruiz, R.; Baucells, N.; Moreno Alonso, C. Smart Mobility in Smart Cities. In Proceedings of the XII Congreso de Ingeniería del Transporte (CIT 2016), Valencia, Spain, 7–9 June 2016. [Google Scholar] [CrossRef]
  14. Van Audenhove, F.; Dauby, L.; Korniichuk, O.; Poubaix, J. Future of Urban Mobility 2.0. Arthur D. Little; Future Lab. International Association for Public Transport (UITP): Brussels, Belgium, 2014. [Google Scholar]
  15. Neirotti, P. Current trends in Smart City initiatives: Some stylised facts. Cities 2012, 38, 25–36. [Google Scholar] [CrossRef]
  16. Manville, C.; Cochrane, G.; Cave, J.; Millard, J.; Pederson, J.; Thaarup, R.; Liebe, A.; Wissner, W.M.; Massink, W.R.; Kotterink, B. Mapping Smart Cities in the EU; Department of Economic and Scientific Policy: Luxembourg, 2014. [Google Scholar]
  17. Carneiro, D.; Amaral, A.; Carvalho, M.; Barreto, L. An Anthropocentric and Enhanced Predictive Approach to Smart City Management. Smart Cities 2021, 4, 1366–1390. [Google Scholar] [CrossRef]
  18. Groth, S. Multimodal divide: Reproduction of transport poverty in smart mobility trends. Transp. Res. Part A Policy Pract. 2019, 125, 56–71. [Google Scholar] [CrossRef]
  19. Paiva, S.; Ahad, M.A.; Tripathi, G.; Feroz, N.; Casalino, G. Enabling Technologies for Urban Smart Mobility: Recent Trends, Opportunities and Challenges. Sensors 2021, 21, 2143. [Google Scholar] [CrossRef]
  20. Barreto, L.; Amaral, A.; Baltazar, S. Mobility in the Era of Digitalization: Thinking Mobility as a Service (MaaS). In Intelligent Systems: Theory, Research and Innovation in Applications; Springer: Cham, Switzerland, 2021; pp. 275–293. [Google Scholar]
  21. Turoń, K.; Czech, P. The Concept of Rules and Recommendations for Riding Shared and Private E-Scooters in the Road Network in the Light of Global Problems. In Modern Traffic Engineering in the System Approach to the Development of Traffic Networks; Macioszek, E., Sierpiński, G., Eds.; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; Volume 1083. [Google Scholar] [CrossRef]
  22. Talebkhah, M.; Sali, A.; Marjani, M.; Gordan, M.; Hashim, S.J.; Rokhani, F.Z. IoT and Big Data Applications in Smart Cities: Recent Advances, Challenges, and Critical Issues. IEEE Access 2021, 9, 55465–55484. [Google Scholar] [CrossRef]
  23. Oliveira, T.A.; Oliver, M.; Ramalhinho, H. Challenges for Connecting Citizens and Smart Cities: ICT, E-Governance and Blockchain. Sustainability 2020, 12, 2926. [Google Scholar] [CrossRef]
  24. Muller, M.; Park, S.; Lee, R.; Fusco, B.; Correia, G.H.d.A. Review of Whole System Simulation Methodologies for Assessing Mobility as a Service (MaaS) as an Enabler for Sustainable Urban Mobility. Sustainability 2021, 13, 5591. [Google Scholar] [CrossRef]
  25. Gonçalves, L.; Silva, J.P.; Baltazar, S.; Barreto, L.; Amaral, A. Challenges and implications of Mobility as a Service (MaaS). In Implications of Mobility as a Service (MaaS) in Urban and Rural Environments: Emerging Research and Opportunities; IGI Global: Hershey, PA, USA, 2020; pp. 1–20. [Google Scholar]
  26. Amaral, A.; Barreto, L.; Baltazar, S.; Pereira, T. Mobility as a Service (MaaS): Past and Present Challenges and Future Opportunities. In Conference on Sustainable Urban Mobility; Springer: Cham, Switzerland, 2020; pp. 220–229. [Google Scholar]
  27. Barreto, L.; Amaral, A.; Baltazar, S. Urban mobility digitalization: Towards mobility as a service (MaaS). In Proceedings of the 2018 International Conference on Intelligent Systems (IS), Funchal, Portugal, 25–27 September 2018; pp. 850–855. [Google Scholar]
  28. Al-Rahamneh, A.; Javier Astrain, J.; Villadangos, J.; Klaina, H.; Picallo Guembe, I.; Lopez-Iturri, P.; Falcone, F. Enabling Customizable Services for Multimodal Smart Mobility With City-Platforms. IEEE Access 2021, 9, 41628–41646. [Google Scholar] [CrossRef]
  29. Turoń, K. Open Innovation Business Model as an Opportunity to Enhance the Development of Sustainable Shared Mobility Industry. J. Open Innov. Technol. Mark. Complex. 2022, 8, 37. [Google Scholar] [CrossRef]
  30. Abbasi, S.; Ko, J.; Min, J. Measuring destination-based segregation through mobility patterns: Application of transport card data. J. Transp. Geogr. 2021, 92, 103025. [Google Scholar] [CrossRef]
  31. Khan, S.; Nazir, S.; Khan, H.U. Analysis of Navigation Assistants for Blind and Visually Impaired People: A Systematic Review. IEEE Access 2021, 9, 26712–26734. [Google Scholar] [CrossRef]
  32. Chang, W.-J.; Chen, L.-B.; Chen, M.C.; Su, J.P.; Sie, C.Y.; Yang, C.H. Design and Implementation of an Intelligent Assistive System for Visually Impaired People for Aerial Obstacle Avoidance and Fall Detection. IEEE Sens. J. 2020, 20, 10199–10210. [Google Scholar] [CrossRef]
  33. El-taher, F.E.; Taha, A.; Courtney, J.; Mckeever, S. A Systematic Review of Urban Navigation Systems for Visually Impaired People. Sensors 2021, 21, 3103. [Google Scholar] [CrossRef]
  34. Dakopoulos, D.; Bourbakis, N.G. Wearable obstacle avoidance electronic travel aids for blind: A survey. IEEE Trans. Syst. Man Cybern. Part C 2010, 40, 25–35. [Google Scholar] [CrossRef]
  35. Renier, L.; De Volder, A.G. Vision substitution and depth perception: Early blind subjects experience visual perspective through their ears. Disabil. Rehabil. Assist. Technol. 2010, 5, 175–183. [Google Scholar] [CrossRef]
  36. Tapu, R.; Mocanu, B.; Tapu, E. A survey on wearable devices used to assist the visual impaired user navigation in outdoor environments. In Proceedings of the 2014 11th International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 14–15 November 2014; pp. 1–4. [Google Scholar]
  37. Jacobson, D.; Kitchin, R.; Golledge, R.; Blades, M. Learning a complex urban route without sight: Comparing naturalistic versus laboratory measures. In Proceedings of the Mind III Annual Conference of the Cognitive Science Society, Madison, WI, USA, 1–4 August 1998; pp. 1–20. [Google Scholar]
  38. Kaminski, L.; Kowalik, R.; Lubniewski, Z.; Stepnowski, A. ‘VOICE MAPS’—Portable, dedicated GIS for supporting the street navigation and self-dependent movement of the blind. In Proceedings of the 2010 2nd International Conference on Information Technology (ICIT 2010), Gdansk, Poland, 28–30 June 2010; pp. 153–156. [Google Scholar]
  39. Ueda, T.A.; De Araújo, L.V. Virtual Walking Stick: Mobile Application to Assist Visually Impaired People to Walking Safely; Lecture Notes in Computer Science (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2014; Volume 8515 LNCS, pp. 803–813. [Google Scholar] [CrossRef]
  40. Minhas, R.A.; Javed, A. X-EYE: A Bio-smart Secure Navigation Framework for Visually Impaired People. In Proceedings of the 2018 International Conference on Signal Processing and Information Security (ICSPIS), Dubai, United Arab Emirates, 7–8 November 2018; pp. 2018–2021. [Google Scholar] [CrossRef]
  41. Kaiser, E.B.; Lawo, M. Wearable navigation system for the visually impaired and blind people. In Proceedings of the 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, Shanghai, China, 30 May–1 June 2012; pp. 230–233. [Google Scholar] [CrossRef]
  42. Zeb, A.; Ullah, S.; Rabbi, I. Indoor vision-based auditory assistance for blind people in semi controlled environments. In Proceedings of the 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 14–17 October 2014; pp. 1–6. [Google Scholar] [CrossRef]
  43. Fukasawa, A.J.; Magatani, K. A navigation system for the visually impaired an intelligent white cane. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4760–4763. [Google Scholar] [CrossRef]
  44. Bhardwaj, P.; Singh, J. Design and Development of Secure Navigation System for Visually Impaired People. Int. J. Comput. Sci. Inf. Technol. 2013, 5, 159–164. [Google Scholar] [CrossRef]
  45. Treuillet, S.; Royer, E. Outdoor/indoor vision-based localization for blind pedestrian navigation assistance. Int. J. Image Graph. 2010, 10, 481–496. [Google Scholar] [CrossRef]
  46. Serrão, M.; Rodrigues, J.M.F.; Rodrigues, J.I.; Du Buf, J.M.H. Indoor localization and navigation for blind persons using visual landmarks and a GIS. Procedia Comput. Sci. 2012, 14, 65–73. [Google Scholar] [CrossRef]
  47. Shadi, S.; Hadi, S.; Nazari, M.A.; Hardt, W. Outdoor navigation for visually impaired based on deep learning. CEUR Workshop Proc. 2019, 2514, 397–406. [Google Scholar]
  48. Chen, H.E.; Lin, Y.Y.; Chen, C.H.; Wang, I.F. BlindNavi: A Navigation App for the Visually Impaired Smartphone User. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 19–24. [Google Scholar] [CrossRef]
  49. Idrees, A.; Iqbal, Z.; Ishfaq, M. An Efficient Indoor Navigation Technique to Find Optimal Route for Blinds Using QR Codes. In Proceedings of the 2015 IEEE 10th Conference on Industrial Electronics and Applications (ICIEA), Auckland, New Zealand, 15–17 June 2015; pp. 690–695. [Google Scholar] [CrossRef]
  50. Zhang, X.; Li, B.; Joseph, S.L.; Muñoz, J.P.; Yi, C. A SLAM based Semantic Indoor Navigation System for Visually Impaired Users. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 1458–1463. [Google Scholar] [CrossRef]
  51. Yelamarthi, K.; Haas, D.; Nielsen, D.; Mothersell, S. RFID and GPS integrated navigation system for the visually impaired. In Proceedings of the 2010 53rd IEEE International Midwest Symposium on Circuits and Systems, Seattle, WA, USA, 1–4 August 2010; pp. 1149–1152. [Google Scholar] [CrossRef]
  52. Tanpure, R.A. Advanced Voice Based Blind Stick with Voice Announcement of Obstacle Distance. Int. J. Innov. Res. Sci. Technol. 2018, 4, 85–87. [Google Scholar]
  53. Salahuddin, M.A.; Al-Fuqaha, A.; Gavirangaswamy, V.B.; Ljucovic, M.; Anan, M. An efficient artificial landmark-based system for indoor and outdoor identification and localization. In Proceedings of the 2011 7th International Wireless Communications and Mobile Computing Conference, Istanbul, Turkey, 4–8 July 2011; pp. 583–588. [Google Scholar] [CrossRef]
  54. Basiri, A.; Amirian, P.; Winstanley, A. The use of quick response (QR) codes in landmark-based pedestrian navigation. Int. J. Navig. Obs. 2014, 2014, 897103. [Google Scholar] [CrossRef]
  55. Nilwong, S.; Hossain, D.; Shin-Ichiro, K.; Capi, G. Outdoor Landmark Detection for Real-World Localization using Faster R-CNN. In Proceedings of the 6th International Conference on Control, Mechatronics and Automation, Tokyo, Japan, 12–14 October 2018; pp. 165–169. [Google Scholar] [CrossRef]
  56. Maeyama, S.; Ohya, A.; Yuta, S. Outdoor Navigation of a Mobile Robot Using Natural Landmarks. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovative Robotics for Real-World Applications. IROS ’97, Grenoble, France, 11 September 1997; Volume 3, pp. V17–V18. [Google Scholar] [CrossRef]
  57. Raubal, M.; Winter, S. Enriching Wayfinding Instructions with Local Landmarks; Lecture Notes in Computer Science (Including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2002; Volume 2478, pp. 243–259. [Google Scholar] [CrossRef]
  58. Yeh, T.; Tollmar, K.; Darrell, T. Searching the web with mobile images for location recognition. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 2, p. II. [Google Scholar]
  59. Li, F.; Kosecka, J. Probabilistic location recognition using reduced feature set. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation ICRA 2006, Orlando, FL, USA, 15–19 May 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 3405–3410. [Google Scholar]
  60. Schindler, G.; Brown, M.; Szeliski, R. City-scale location recognition. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–7. [Google Scholar]
  61. Gallagher, A.; Joshi, D.; Yu, J.; Luo, J. Geo-location inference from image content and user tags. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 55–62. [Google Scholar]
  62. Schroth, G.; Huitl, R.; Chen, D.; Abu-Alqumsan, M.; Al-Nuaimi, A.; Steinbach, E. Mobile visual location recognition. IEEE Signal Process. Mag. 2011, 28, 77–89. [Google Scholar] [CrossRef]
  63. Cao, S.; Snavely, N. Graph-based discriminative learning for location recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 700–707. [Google Scholar]
  64. Wan, J.; Wang, D.; Hoi, S.C.H.; Wu, P.; Zhu, J.; Zhang, Y.; Li, J. Deep learning for content-based image retrieval: A comprehensive study. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 157–166. [Google Scholar]
  65. Liu, P.; Yang, P.; Wang, C.; Huang, K.; Tan, T. A semi-supervised method for surveillance-based visual location recognition. IEEE Trans. Cybern. 2016, 47, 3719–3732. [Google Scholar] [CrossRef]
  66. Zhou, W.; Li, H.; Tian, Q. Recent advance in content-based image retrieval: A literature survey. arXiv 2017, arXiv:1706.06064. [Google Scholar]
  67. Tzelepi, M.; Tefas, A. Deep convolutional learning for content based image retrieval. Neurocomputing 2018, 275, 2467–2478. [Google Scholar] [CrossRef]
  68. Saritha, R.R.; Paul, V.; Kumar, P.G. Content based image retrieval using deep learning process. Clust. Comput. 2019, 22, 4187–4200. [Google Scholar] [CrossRef]
  69. Bhagwat, R.; Abdolahnejad, M.; Moocarme, M. Applied Deep Learning with Keras: Solve Complex Real-Life Problems with the Simplicity of Keras; Packt Publishing Ltd.: Birmingham, UK, 2019. [Google Scholar]
  70. Lima, R.; da Cruz, A.M.R.; Ribeiro, J. Artificial Intelligence Applied to Software Testing: A literature review. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Seville, Spain, 24–27 June 2020. [Google Scholar]
  71. Lynch, K. The Image of the Environment. In The Image of the City; MIT Press: Cambridge, MA, USA, 1960; pp. 1–13. [Google Scholar]
  72. Pfleeger, C.P.; Pfleeger, S.L. Security in Computing, 4th ed.; Prentice Hall PTR: Hoboken, NJ, USA, 2007. [Google Scholar]
  73. Khan, J.; Abbas, H.; Al-Muhtadi, J. Survey on Mobile User’s Data Privacy Threats and Defense Mechanisms. Procedia Comput. Sci. 2015, 56, 376–383. [Google Scholar] [CrossRef]
  74. Becher, M.; Freiling, F.C.; Hoffmann, J.; Holz, T.; Uellenbeck, S.; Wolf, C. Mobile Security Catching Up? Revealing the Nuts and Bolts of the Security of Mobile Devices. In Proceedings of the 2011 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 22–25 May 2011; pp. 96–111. [Google Scholar] [CrossRef]
  75. Stallings, W. Cryptography and Network Security: Principles and Practice, 7th ed.; Pearson: London, UK, 2020. [Google Scholar]
  76. Varadharajan, V.; Tupakula, U. Security as a Service Model for Cloud Environment. IEEE Trans. Netw. Serv. Manag. 2014, 11, 60–75. [Google Scholar] [CrossRef]
  77. Paiva, S.; Ahad, M.A.; Zafar, S.; Tripathi, G.; Khalique, A.; Hussain, I. Privacy and security challenges in smart and sustainable mobility. SN Appl. Sci. 2020, 2, 1175. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
