A Sightseeing Support System Using Augmented Reality and Pictograms within Urban Tourist Areas in Japan

Abstract: Though tourists can search for necessary information on the internet while sightseeing, it takes effort and is inconvenient to obtain available information related to specific sightseeing spots among the copious amount of information online. Targeting urban tourist areas in Japan, the present study aims to develop a system that can provide guidance and information concerning sightseeing spots by integrating location-based augmented reality (AR) and object-recognition AR and by using pictograms. The system enables users to efficiently obtain directions to sightseeing spots and nearby facilities within urban tourist areas, as well as sightseeing spot information. Additionally, the city of Chofu in the metropolis of Tokyo was selected as the operation target area. The operation of the system was conducted for 1 month, targeting those inside and outside the operation target area, and a web questionnaire survey was conducted with a total of 50 users. From the evaluation results of the web questionnaire survey, the usefulness of the original functions integrating location-based AR and object-recognition AR and using pictograms, as well as of the entire system, was analyzed. From the results of the access analysis of users' log data, it is expected that users will further utilize each function. Additionally, it is evident that location-based AR was used more often than object-recognition AR.


Introduction
Due to the spread of mobile information terminals, such as smartphones and tablet PCs, in the current advanced information and communications society, anyone can easily access the internet and transmit and obtain information. However, as a plethora of information is available on the internet, each user must efficiently search for the information they seek by themselves. Additionally, because a wide variety and a large amount of information are available, it may take time to search for and obtain the necessary information. Therefore, information literacy is required for users to appropriately determine what information is necessary and how to effectively use it. The above can also be applied to sightseeing. Although tourists can search for the necessary information on the internet while sightseeing, it takes effort and is inconvenient to obtain the available information related to specific sightseeing spots among the copious amount of information online. Therefore, it is necessary to develop a method to efficiently and appropriately present sightseeing information to tourists. In response to this necessity, Dinis et al. (2019) [1] proposed a methodology that uses big data from the Google Trends (GT) tool to develop composite indicators measuring public online search interest in tourist destinations. Additionally, Wise et al. (2019) [2] described the significance of the Internet of Things (IoT) and its potential for smart cities and provided practical foundations for destination organizers. Amirian et al. (2016) [12] developed a landmark-based pedestrian navigation system using machine learning in addition to AR. Gerstweiler et al. (2016) [13] presented a hybrid tracking system specifically designed for complex indoor spaces that runs on mobile information terminals.
In (2), Mata et al. (2011) [14] developed an experimental virtual museum using paintings. Mulloni et al. (2011) [15], Okada et al. (2011) [16], Möller et al. (2012) [17], and Neges et al. (2017) [18] proposed a system that points an arrow in the direction of the destination by recognizing the user's present location using images. Kurihara et al. (2014) [19] developed a marker-based indoor navigation system that allows location information to be shared. Wang et al. (2018) [20] proposed a mobile navigation system, adopting a multitarget AR recognition mechanism and polygon approximation-based data acquisition. Zhou et al. (2019) [21] designed a tourist attraction guide system combining image-recognition technology and AR technology.
In (3), Fukada et al. (2011) [22] developed a system that provides sightseeing information using image-recognition AR, while Han et al. (2014) [23] developed a mobile sightseeing application that provides information related to urban heritage using location-based AR. Komoda et al. (2013) [24] developed a system that provides information related to regional movies using images. Jung et al. (2015) [25] used a quality model to test user satisfaction and intention to recommend marker-based augmented reality applications in the case of a theme park. Ma et al. (2018) [26] developed a smart sightseeing application system combining AR technology and a paper sightseeing map for emergency response. Blanco-Ponsa et al. (2019) [27] presented an application that provides information related to cultural heritage. Makino et al. (2019) [28] developed a system that visualizes spatiotemporal information in both real and virtual spaces, integrating social networking services (SNS), Web GIS, mixed reality (MR) and the original gallery system, as well as Wikitude and connecting external social media. Both virtual reality (VR) and AR are integrated into MR that is used in the system for area-based learning and sightseeing.
In (4), Kusano et al. (2013, 2015) [29,30] developed a disaster information sharing system, while Yamamoto (2018) [7] and Abe et al. (2019) [8] developed sightseeing support systems using nonlinguistic information, including pictograms, as mentioned in the previous section. Sándor (2017) [31] examined the impact of weather-related warning message signs on traffic in adverse weather circumstances on motorways. Hayashi et al. (2019) [32] proposed an information-providing method to express train service situations in central urban areas by combining multiple pictograms, such as sign-logo images. Kołodziejczak (2019) [33] analyzed the information systems on internet tourist portals and suggested the uses of various symbols for accessible sightseeing for people with disabilities.
From the results of (1) and (2), i.e., the studies related to navigation using location-based AR and image-recognition AR in smart cities, it is evident that AR is a powerful smart tourism tool for appropriately navigating the general public to their destinations. However, the AR smart glasses used in Fujita et al. (2016) [5], Zhou et al. (2016) [6], and Rehman et al. (2017) [9] have not become widespread worldwide. Moreover, considering the limitations of Kurihara et al. (2014) [19] and Zhou et al. (2019) [21], in the case of outdoor navigation, object-recognition AR is more suitable than image-recognition AR for showing information related to specific spots. The results of (3), i.e., the studies related to information services using AR, show that the general public, in addition to scientists and technicians, can efficiently obtain various information by means of AR technologies. Additionally, the results of (4), i.e., the studies related to information services using pictograms, indicate that it is possible to develop a user-friendly system of universal design for both Japanese and foreigners by adopting nonlinguistic information.
In reference to the aim of the present study, as mentioned in Section 1, it is essential to integrate location-based AR and object-recognition AR and use pictograms in a unique system, despite these methods being separately adopted in the above 4 study fields. Therefore, in comparison with the abovementioned studies, the present study demonstrates originality by developing a unique system that can provide guidance and information concerning sightseeing spots by integrating location-based AR and object-recognition AR as smart tourism tools and by using pictograms as nonlinguistic information.
Specifically, considering the findings and limitations based on the results of the above preceding studies, the present study demonstrates originality by navigating users to sightseeing spots, as well as by providing sightseeing spot information, using both location-based AR and object-recognition AR, in addition to visualizing and providing information about facilities near sightseeing spots on mobile information terminals using location-based AR and pictograms. Therefore, by means of the system, it is expected that even those who are not good at reading maps in either the digital or paper format or at using Google Maps and who have limited knowledge or sense of locality would be able to fully enjoy sightseeing.

System Characteristics
As shown in Figure 1, the system in the present study is composed of the website and the original mobile application, and the latter is uniquely developed by integrating location-based AR and object-recognition AR and by using pictograms. To use the system, users must first access the website on their mobile information terminals, such as smartphones and tablet PCs, and install the original mobile application that has been uploaded onto the online storage. Then, users can start using the system by activating the application and registering their information, including email addresses and passwords.
Using images, such as pictograms, that are provided by location-based AR, the system can display the directions to sightseeing spots and nearby facilities on the screens of mobile information terminals and provide the navigation to each destination. Furthermore, as the system displays images on mobile information terminals in a way that overlaps with the actual world using location-based AR, users do not need to walk around with pamphlets or guidebooks. Additionally, as users can easily obtain the relevant information by pointing the mobile information terminal in the direction of the sightseeing spot and using object-recognition AR, the system contributes to raising efficiency while sightseeing. Therefore, with a system that makes obtaining sightseeing information efficient, tourists who are not familiar with the local area can enjoy sightseeing without relying heavily on information found on the internet.

Usefulness of the System
The following three points explain the usefulness of the system.
(1) Guidance and information services using AR
The system displays images for sightseeing spots, as well as pictograms for facilities near sightseeing spots, which enables users to know the directions of their destinations. As the images are displayed using AR, those who are not skilled at reading maps can also grasp the directions through the screens of their mobile information terminals. Additionally, by pointing their mobile information terminals at sightseeing spots or their images, users can obtain the related information without having to go online to research the sightseeing spots they wish to visit.
(2) Information services using pictograms
The system uses nine types of pictograms (i.e., stations, convenience stores, public restrooms, lodging facilities, restaurants, parking lots, police stations, bus terminals, and rental cars) provided by the Foundation for Promoting Personal Mobility and Ecological Transportation [34]. Therefore, users can easily understand what facilities are indicated by the pictograms displayed on the screens of their mobile information terminals.
(3) Dynamic and real-time services
The distances from users' present locations to all sightseeing spots are constantly updated in real time. The users' location information is updated by GPS, and the distance to each sightseeing spot is provided to them on a real-time basis. Additionally, the nearest pictograms of the above nine types are displayed each time the distance information is updated, which helps users to see what facilities are available nearby.
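The real-time distance update described above can be sketched as follows. This is a minimal illustration only, assuming a standard haversine great-circle formula; the function and field names are hypothetical and are not taken from the actual implementation.

```javascript
// Hypothetical sketch of the real-time distance update; names are
// illustrative, not the system's actual identifiers.
const EARTH_RADIUS_M = 6371000;

function toRadians(deg) {
  return (deg * Math.PI) / 180;
}

// Haversine great-circle distance in meters between two lat/lon points.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const dLat = toRadians(lat2 - lat1);
  const dLon = toRadians(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Called on every GPS fix: recompute the distance to each sightseeing spot.
function updateDistances(user, spots) {
  return spots.map((s) => ({
    name: s.name,
    distance: distanceMeters(user.lat, user.lon, s.lat, s.lon),
  }));
}
```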

Target Information Terminals and Operating Environment
The system is meant to be used mainly on mobile information terminals both indoors and outdoors. As the mobile application used on mobile information terminals is an Android application, the OS is required to be Android (5.0 or higher). Due to the system requirements of the Wikitude SDK used in application development, mobile information terminals must have a compass, network positioning, an accelerometer, a high-resolution display, a high-performance CPU, and OpenGL (2.0 or higher), in addition to a camera and a GPS function.

Overview of the System Design
As mentioned in Section 3.1, the system in the present study is composed of the website and the original mobile application, and the latter is uniquely developed by integrating location-based AR and object-recognition AR and by using pictograms. The application can be downloaded from the website. Additionally, the images used for object-recognition AR and the user's manual can be viewed there. The following provides an explanation of the application, as well as of the location-based AR and object-recognition AR that make up the application.

Mobile Application
In the system, the mobile application is uniquely developed as an Android application by integrating location-based AR and object-recognition AR and by using pictograms. In developing the application, Android Studio, which is an integrated development environment for mobile applications, as well as the Wikitude SDK, which is an SDK for AR application development that can be installed into Android Studio, were used. Android Studio was used not only to develop the application layout but also to confirm the camera and GPS permissions on Android devices and to load the location information of the mobile information terminals. Since the Wikitude SDK uses web technologies, such as HTML and JavaScript, the screen layout for the application was developed using XML files for Android Studio, HTML files, and CSS files.
For Android Studio, Java was used to set the camera and location information of the mobile information terminals, while JavaScript was used in Wikitude SDK to develop location-based AR and object-recognition AR. The screen layout was designed using HTML files and CSS files, while the user interface was designed using jQuery Mobile.

Location-Based AR
As the basic function of location-based AR, in addition to the marker images indicating the directions of sightseeing spots, pictograms indicating facilities, such as stations and public restrooms, are displayed on the screens of mobile information terminals, and directions are provided to sightseeing spots and nearby facilities. Furthermore, users can tap an image on the screen to see the name of each sightseeing spot and the distance to it, and tap a marker to view the information on that sightseeing spot.
When starting the mobile application, the location-based AR screen will always appear. When this screen comes up, the user's present location will be determined using GPS, and the images of sightseeing spots and nearby facilities will be allocated in each specified part of the screen. Then, the distance from the user's present location to each sightseeing spot will be calculated. While the location-based AR screen is up, the present location will be constantly updated on a real-time basis using GPS, and the distance to sightseeing spots and nearby facilities displayed as images on the screen will be recalculated. Therefore, the system can provide users with real-time distances to each sightseeing spot.
As the displayed image is allocated based on the location information (latitude and longitude) of sightseeing spots and nearby facilities, the location information of such places is researched using Google Maps. The file, along with the names of such places, was added into the application in the JSON format. Regarding pictograms, the location information and numbers according to the type of facility were added, and the pictogram that corresponds to each number was set to be displayed. The process mentioned above is shown in Figure 2.
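As an illustration of this JSON data and the facility-number mapping, a minimal sketch follows. The field names, the facility-number scheme, the place names, and the coordinates are all assumptions for illustration, not the application's actual data file.

```javascript
// Hypothetical sketch of the bundled JSON data; field names, the
// facility-number scheme, place names, and coordinates are illustrative.
const placesJson = `{
  "spots": [
    { "name": "Sample Temple", "lat": 35.6669, "lon": 139.5516 }
  ],
  "facilities": [
    { "name": "Sample Station", "lat": 35.6517, "lon": 139.5444, "type": 1 },
    { "name": "Public Restroom", "lat": 35.6520, "lon": 139.5450, "type": 3 }
  ]
}`;

// Facility-type numbers mapped to pictogram image files (illustrative).
const PICTOGRAMS = { 1: "station.png", 2: "convenience_store.png", 3: "restroom.png" };

// Attach the pictogram that corresponds to each facility's number.
function loadFacilities(json) {
  return JSON.parse(json).facilities.map((f) => ({ ...f, pictogram: PICTOGRAMS[f.type] }));
}
```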

Object-Recognition AR
As the basic function of object-recognition AR, sightseeing spots can be recognized by pointing a mobile information terminal at actual sightseeing spots or the images of such places, and the related information can be displayed on the screen. Users can switch between the screens of object-recognition AR and location-based AR.
Object-recognition AR uses a web tool called Wikitude Target Manager, provided by Wikitude, to register 20 to 30 images of each sightseeing spot to be recognized. After the characteristics of the objects used in the recognition of the registered images were extracted, files containing the characteristic amount for every sightseeing spot were downloaded and installed into the mobile application. Additionally, the images displayed for sightseeing spots were also installed into the application, and the location settings of each image were determined. The process mentioned above is shown in Figure 3.

The Front End of the System
The system implements unique functions for users, which are mentioned below, in response to the aim of the present study, as mentioned in Section 1. To implement these functions, the system is composed of the website and the original mobile application, and the latter was uniquely developed by integrating location-based AR and object-recognition AR and by using pictograms. The basic functions of these two types of AR are described in Sections 3.4.2 and 3.4.3. In addition to the basic function of location-based AR, three functions are available: switching the displayed markers for each sightseeing course, changing the displayed pictograms, and indicating the display range of images. Additionally, for object-recognition AR, a function is available to change the recognition target.

User Registration Function for the System
The user registration function in the system uses Firebase Authentication. After installing the mobile application on a mobile information terminal and starting it for the first time, the registration screen for user information comes up. Users can create their own accounts using either their email addresses and passwords or their Google accounts. After the system confirms that the account has not already been registered, location-based AR is automatically activated.
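A client-side check of the credentials before they are handed to Firebase Authentication might look like the following sketch. The simple email pattern and the 6-character password minimum (Firebase Authentication's default requirement) are stated here as assumptions for illustration; the actual application's validation logic is not described in the source.

```javascript
// Hypothetical client-side validation before calling Firebase
// Authentication; the email pattern and 6-character password minimum
// are illustrative assumptions.
function validateRegistration(email, password) {
  const errors = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push("invalid email address");
  }
  if (typeof password !== "string" || password.length < 6) {
    errors.push("password must be at least 6 characters");
  }
  return { ok: errors.length === 0, errors };
}
```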

Function to Switch the Displayed Markers for Each Sightseeing Course When Using Location-Based AR
Regarding the operation target area of the system, areas offering multiple sightseeing routes are assumed. The system enables the display of markers that indicate the directions of sightseeing spots within the course. In this way, users can change the markers for each sightseeing course by tapping on the sightseeing course they wish to display on the screens.

Function to Change Displayed Pictograms When Using Location-Based AR
In the system, every time the distance information is updated, the 10 nearest pictograms to the users' present locations are displayed on the screens of their mobile information terminals, which helps them to know what facilities are available nearby. Additionally, using the checkbox next to the category name of each facility, users can see the 10 nearest pictograms from the selected facility categories. For example, if users wish to see the directions only for the nearest restaurants, the restaurant category can be selected, and pictograms for restaurants only will be displayed on the screens of their mobile information terminals.
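The selection of the 10 nearest pictograms, optionally restricted to checked categories, can be sketched as follows. This is an illustrative sketch only; the names are hypothetical, and the behavior of an empty selection (assumed here to show all categories) is an assumption.

```javascript
// Hypothetical sketch: choosing the 10 pictograms nearest to the user,
// optionally restricted to the facility categories the user has checked.
// An empty selection is assumed to mean "show all categories".
function nearestPictograms(facilities, checkedCategories, limit = 10) {
  return facilities
    .filter((f) => checkedCategories.size === 0 || checkedCategories.has(f.category))
    .sort((a, b) => a.distance - b.distance) // distances precomputed from GPS
    .slice(0, limit);
}
```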

Function to Indicate the Display Range of Images When Using Location-Based AR

When using the system, users may want to limit the displayed images of sightseeing spots and nearby facilities to within a specific area. In this case, the displayed range can be changed by moving the slider on the screen sideways. Every time the displayed range is changed, the maximum distance and the total number of sightseeing spots within the selected area are displayed on the screen. The process of selecting the displayed range is shown in Figure 4.
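The effect of the display-range slider can be sketched as below: only images within the selected radius are kept, and the on-screen count is recomputed each time the slider moves. The function and field names are illustrative assumptions.

```javascript
// Hypothetical sketch of the display-range slider: only images within the
// selected radius (in meters) are kept, and the count shown on screen is
// recomputed each time the slider moves. Names are illustrative.
function applyDisplayRange(spots, maxDistance) {
  const visible = spots.filter((s) => s.distance <= maxDistance);
  return { visible, count: visible.length, maxDistance };
}
```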

Optional Function to Switch between Location-Based AR and Object-Recognition AR
When visiting sightseeing spots on the sightseeing course, the system enables users to easily switch between the location-based AR and the object-recognition AR screens to obtain the relevant information by pointing their mobile information terminals at the sightseeing spot. While users are on the location-based AR screen, users can go to the menu, select "Object-recognition AR", and move to the object-recognition AR screen.

Function to Change Recognition Target When Using Object-Recognition AR
When changing the sightseeing spot that is the recognition target for object-recognition AR, users can select a favorite sightseeing spot from the recognition target list of the menu.

Update of the Distance Information and Location Information Using GPS
In the system, the users' present locations are determined using the GPS installed in their mobile information terminals. When the users' present locations are updated, the distances to all sightseeing spots displayed as images on the screens are also updated. As the updated distance information is reflected on a real-time basis, users can know the distance to each sightseeing spot by simply tapping on the image displayed on the screens. Figure 5 shows the real-time updates of distance information.
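The update loop above can be sketched as follows. In the deployed application, Android and the Wikitude SDK deliver the location callback; here a browser-style `watchPosition` interface stands in for it, and the planar distance approximation is an illustrative simplification, not the system's actual calculation.

```javascript
// Hypothetical sketch of the GPS update loop; the watchPosition-style
// interface and the planar distance approximation are illustrative.
function approxDistanceMeters(lat1, lon1, lat2, lon2) {
  const mPerDegLat = 111320; // meters per degree of latitude (approx.)
  const dy = (lat2 - lat1) * mPerDegLat;
  const dx = (lon2 - lon1) * mPerDegLat * Math.cos((lat1 * Math.PI) / 180);
  return Math.hypot(dx, dy);
}

// On every GPS fix, recompute and re-render the distance on each image.
function startTracking(geolocation, spots, render) {
  return geolocation.watchPosition((pos) => {
    const { latitude, longitude } = pos.coords;
    render(
      spots.map((s) => ({
        name: s.name,
        distance: approxDistanceMeters(latitude, longitude, s.lat, s.lon),
      }))
    );
  });
}
```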

Changing the Marker Height Level and Displayed Pictograms According to the Updated Distance Information
After the distances from the users' present locations to all sightseeing spots are calculated, markers that are farther away are displayed higher on the screen, while closer ones are displayed lower. This prevents the markers from becoming invisible or impossible to tap due to overlapping and helps users to intuitively understand which facilities are available nearby. The pictograms are arranged from the shortest to the longest distance from the users' present locations, and the 10 nearest pictograms are displayed. In this way, the system can prevent overcrowding of the screen by a large number of pictograms and clearly show the directions to facilities near users. Figure 6 shows how the marker height differs according to the distance information.
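The distance-to-height mapping can be sketched as below. The step size and the ceiling value are illustrative assumptions, not the values used in the system; the only property taken from the text is that farther markers are drawn higher, up to some maximum, so that markers do not overlap.

```javascript
// Hypothetical sketch: farther markers are drawn higher on the screen so
// that nearby and distant markers do not overlap. The step sizes and the
// 300-unit ceiling are illustrative assumptions, not the system's values.
function markerHeight(distance, metersPerLevel = 100, stepPx = 20, maxPx = 300) {
  const level = Math.floor(distance / metersPerLevel);
  return Math.min(level * stepPx, maxPx);
}
```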


Users' Information Management Using Firebase
Administrators manage registered users' information, including email addresses and passwords, using the Firebase console. Users can register their email addresses and passwords on the registration screen that is displayed right after starting the mobile application for the first time, which enables the information to be automatically reflected to the console. Administrators can operate the console to delete or invalidate the account information registered by users.

Data Management Using Android Studio
Regarding location-based AR, the files containing the locations and names of sightseeing spots and nearby facilities are loaded, and images are allocated to the appropriate places on the screen. Administrators manage the names of sightseeing spots, as well as where to display images, within Android Studio. Additionally, the images used for both location-based AR and object-recognition AR and the HTML files that make up the interface are managed within Android Studio. When administrators update the information, it is edited within Android Studio, the APK files are output, and they are uploaded to online storage for download. Afterward, users download the APK files and update the mobile application. In this way, administrators are able to update the data within the application.

Recognition Target Management by Means of Wikitude Target Manager
As mentioned in Section 3.4, object-recognition AR can be used by registering recognition target images with Wikitude Target Manager. Administrators create a project in Wikitude Target Manager and register images of sightseeing spots there. The feature points of the recognition targets are then automatically extracted, and administrators can confirm them. After feature extraction, administrators can click "Download WTO" within the project, download the wto files that contain the extracted features for each sightseeing spot, and install these files into the mobile application. Then, by loading the installed wto files, sightseeing spots can be recognized.

Interface of Location-Based AR Screen
The interface when using location-based AR is shown in Figure 7. The functions introduced in Section 4.1 can be used by going to the menu button on the top left of the screen. Because the main functions used in the system can be selected from the menu and users can freely select any functions they wish to use, the interface can be easily operated even by those who are not familiar with mobile applications that use AR. If the image display is not working properly, users can use the reload button on the top right-hand corner of the screen to reload the image and recalculate the distance.

Interface of Object-Recognition AR Screen
The interface when using object-recognition AR is shown in Figure 8. Users can change the recognition target and move to the location-based AR screen by clicking on the menu on the bottom right-hand corner of the screen. The images of sightseeing spots that can be recognized by mobile information terminals will be displayed on the screen and automatically updated when the recognition target is changed. Therefore, users can know the directions to currently recognized sightseeing spots.

Website Interface
The website has an installation link for the mobile application, as well as a user's manual. Because the website is expected to be accessed not only from PCs but also from mobile information terminals, such as smartphones and tablet PCs, it was designed so that the display remains easy to view on the screens of the various terminals used to access it. Additionally, when the website is accessed from a PC, a quick response (QR) code linking to the application download is displayed, so that the application can be installed onto a mobile information terminal even when a PC is used to access the site. The website interface is shown in Figure 9.
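The PC-versus-mobile branching for the QR code can be illustrated with a simple user-agent check. This is a hedged sketch of one plausible approach; the token list and function name are assumptions for illustration and are not taken from the paper.

```python
# Substrings that commonly appear in mobile browser user-agent headers
# (an assumed, non-exhaustive list for illustration).
MOBILE_TOKENS = ("Android", "iPhone", "iPad", "Mobile")

def show_qr_code(user_agent: str) -> bool:
    """Return True when the visitor appears to be on a PC, in which
    case the page would display a QR code linking to the application
    download so the app can still be installed on a mobile terminal."""
    return not any(tok in user_agent for tok in MOBILE_TOKENS)
```

A production site would more likely branch on CSS media queries or a maintained device-detection library, but the decision being made is the same.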


Selection of the Operation Target Area
The city of Chofu in the metropolis of Tokyo was selected as the operation target area for the system. The reason behind this selection was that (1) there are multiple sightseeing routes, (2) famous sightseeing spots are scattered throughout the entire city, and (3) there has been an increase in foreign as well as Japanese tourists. The sightseeing routes within Chofu city include the "Route of Kondo Isami (a famous samurai who was born around present-day Chofu city at the end of the Edo era) and green space", "Route of Gegege no Kitaro (a Japanese cartoon set in Chofu city) and the Jindai-ji Temple", "Route of City of the Movie, Chofu, and Tama River", and the "Route of art and culture in the Sengawa District". The target sightseeing spots for object-recognition AR are the "Monument for the marathon halfway point at the time of the Tokyo Olympic Games in 1964", "Monument for movie actors", "Sengawa Ichirizuka (milestone)", and "Jindai-ji Castle ruins".

Operation Period
The system was operated for 1 month (23 November-28 December 2018) with participants both inside and outside the operation target area. In both cases, the operation of the system was advertised using the website of the authors' laboratory, as well as Twitter and Facebook. Additionally, the tourism department of Chofu city supported the present study by distributing pamphlets and user's manuals. Users were able to register their information by creating their own accounts using either their email addresses and passwords or their Google accounts. After registration, users were automatically moved to the location-based AR screen, where they could use the various functions of the system. Table 1 presents an overview of the system's users. There were a total of 50 users: 29 male and 21 female. Regarding age, users in their 20s were the most numerous for both males and females, making up 38% of the total; 24% were in their 50s, and 16% were in each of the teens and 40s categories. These statistics show that the system was used not only by younger generations but also by various age groups.

Evaluation
After the end of the operation, a web questionnaire survey and an access analysis of the users' log data were conducted to evaluate the system developed in the present study. In line with the aim of the present study, the web questionnaire survey covered (1) an evaluation concerning compatibility with the information obtainment method, (2) an evaluation concerning the usage of the system, and (3) an evaluation concerning the functions of the system. The web questionnaire survey was conducted for 1 week after the start of the operation. The response rate was 100%, as all 50 users responded, as shown in Table 1.
Furthermore, regarding the frequency of visits to Chofu city, 36% answered "a few times a year", 12% answered "never", 34% answered "every day", and 14% answered "a few times a week". Therefore, approximately half of the users do not often visit Chofu city, and it is evident that they were not familiar with this area.

Evaluation Concerning the Compatibility with the Information Obtainment Method
Regarding the use of AR applications, 24% of the users used them regularly, while 76% did not. However, 92% of the users had obtained information related to sightseeing spots using the internet. Therefore, although most users were not familiar with AR applications, the need for a system that effectively provides sightseeing information via the internet was made clear.

Evaluation Concerning the Usage of the System
(1) Evaluations concerning the use of pictograms
Of the nine types of pictograms mentioned in Section 3.2, the pictograms that were especially useful were "convenience stores" (76%), "stations" (71%), "restaurants" (63%), and "public restrooms" (59%). The reason for this outcome is that these facilities are frequently used by users on a regular basis. As these facilities are frequently used not only while sightseeing but also in everyday life, users tend to look for information concerning them. On the other hand, the pictograms that were not as useful were "rental cars" (10%), "police station" (20%), and "lodging facilities" (20%). This is because the opportunities and situations in which such facilities are used are limited.
(2) Evaluations concerning the usage condition of the system
Figure 10 shows the evaluation results for the usage condition of the system. Regarding incorrect directions of markers and pictograms, 70% answered "didn't notice it" or "hardly noticed it", while 25% answered "neither". Regarding the blocking of the view of users, 78% answered "didn't notice it" or "hardly noticed it", while 14% answered "neither". As mentioned in Section 6.1.1, this result is because approximately half of the users were not familiar with Chofu city and could not determine whether there were incorrect directions or vision obstructions. Regarding the sensing of any danger while using the system, 84% answered "didn't feel any danger" or "hardly felt any danger", which indicates that almost all users felt barely any type of danger while using the system.
Evaluation Concerning the Functions of the System
(1) Evaluation concerning the usefulness of the original functions
Figure 11 shows the evaluation results concerning the usefulness of the original functions. Regarding the navigation system's use of location-based AR, 84% answered "useful" or "somewhat useful", and 16% answered "neither". As mentioned in the previous section, this result is because half of the users were not familiar with Chofu city and could not determine whether the navigation was correct. Regarding the information provision using object-recognition AR, 41% answered "useful", 47% answered "somewhat useful", and 10% answered "neither". While information can be obtained by pointing a mobile information terminal at sightseeing spots or their images, target recognition may fail depending on the camera angle of the mobile information terminal or the effect of sunlight, and object-recognition AR can only be used at four sightseeing spots, which likely explains these results.
Therefore, as noted in Section 6.1.2, although most users were not familiar with the applications with AR, many of them used and gave a high rating to the functions that used both location-based AR and object-recognition AR. Regarding the information provision using pictograms, 49% answered "useful", and 47% answered "somewhat useful". Therefore, almost all users were satisfied with the information provided using pictograms, indicating that pictograms were an effective method for providing information.
(2) Evaluation concerning the usefulness of the entire system
Figure 12 shows the evaluation results concerning the entire system.
Regarding the ease of use of the system, 92% answered "I think so" or "I somewhat think so", indicating that the system developed in the present study was easy to use. The sightseeing support function provided by the system was highly rated, as 30% answered "I think so" and 70% answered "I somewhat think so". Therefore, the present study was able to develop a system that effectively provides sightseeing support. Regarding the wish to continue using the system in the future, 52% answered "I think so" and 34% "I somewhat think so", indicating that the system can be used in the long term. However, 14% answered "neither" because half of the users do not have opportunities to visit Chofu city, as mentioned in Section 6.1.1.

Evaluation Based on the Mobile Application Analysis
In the present study, an access analysis was conducted using the users' log data during the operation period. This analysis was conducted using Google Analytics for Firebase, one of the Firebase functions, which is a mobile backend service. Firebase is a service provided by Google, and log data analysis can be conducted by installing Firebase into the mobile application. Figure 13 shows the daily transition of active users during the operation of the system. From the transition of active users, it became clear that the system developed in the present study was continuously used by users. Furthermore, from the results of the user engagement rate, the usage time was 67% for the location-based AR screen and 29% for the object-recognition AR screen. This result is because the former can be used within Chofu city or in nearby areas, while the latter can only be used at four specific sightseeing spots.
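The engagement-rate breakdown reported above can, in principle, be reproduced by summing per-screen usage time from exported log events. The event format, screen names, and function name below are assumptions for illustration, since the actual figures came from Google Analytics for Firebase rather than a custom script.

```python
from collections import defaultdict

def engagement_share(events):
    """Aggregate per-screen usage time from log events of the form
    (screen_name, seconds) and return each screen's share of the
    total as a rounded percentage, mirroring the engagement-rate
    breakdown reported in the access analysis."""
    totals = defaultdict(float)
    for screen, seconds in events:
        totals[screen] += seconds
    grand = sum(totals.values()) or 1.0  # guard against empty input
    return {screen: round(100 * t / grand) for screen, t in totals.items()}
```

With hypothetical per-screen totals in the same ratio as the paper's results, the function yields 67% for the location-based AR screen and 29% for the object-recognition AR screen.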


Extraction of Improvement Measures
The issues concerning the system were extracted based on the results of the web questionnaire survey, as well as the access analysis of users' log data, and are summarized below.
(1) Route guiding display using location-based AR
The system was designed to display sightseeing spots and nearby facilities on specified parts of the screens for the three functions introduced in Section 4.1 that use location-based AR when the location information, such as longitude and latitude, was obtained. Therefore, by combining the system with remote location tracking GPS, it is possible to implement a new function to display route guidance to sightseeing spots outside of the present operation target area.
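A route-guidance overlay of the kind proposed would need the heading from the user's position to each spot. A standard initial-bearing computation is sketched below as one plausible building block; the function name and its role in the proposed feature are assumptions, not the authors' design.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from north,
    from the user's position (lat1, lon1) to a sightseeing spot
    (lat2, lon2). A route-guidance overlay could orient an on-screen
    arrow along this heading relative to the device compass."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```

For turn-by-turn guidance beyond a straight-line arrow, the bearing would be recomputed against successive waypoints returned by a routing service rather than against the destination alone.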
(2) Improvement of the recognition rate of sightseeing spots with object-recognition AR
Object-recognition AR can be used by registering the images of sightseeing spots with the Wikitude Target Manager. Therefore, the recognition rate can be improved by reviewing the registered images, replacing them with clearer images of sightseeing spots, and increasing the number of registered images. Additionally, images other than those of the four target sightseeing spots can be gathered and registered to improve the usability of the function introduced in Section 4.1 of changing the recognition target when using object-recognition AR.

Conclusions
The present study designed and developed a system (Sections 3 and 4), conducted a test of the system operation (Section 5), and evaluated and extracted improvement measures (Section 6). The present study can be summarized in the following three points:
(1) To provide guidance to sightseeing spots and nearby facilities using location-based AR and pictograms and to provide information on the sightseeing spots using object-recognition AR, a system composed of the website and an original mobile application was designed and developed. The application was uniquely developed by integrating location-based AR and object-recognition AR and by using pictograms. As the system enables users to efficiently obtain the directions to sightseeing spots and nearby facilities within urban tourist areas, as well as sightseeing spot information, it is a method that can help users who are not good at reading maps or searching for necessary information to enjoy sightseeing. Additionally, the city of Chofu in the metropolis of Tokyo was selected as the operation target area for the system, and an evaluation of the operation of the system was conducted.
(2) The operation of the system was conducted over a one-month period, targeting those inside and outside the operation target area, and a web questionnaire survey was conducted with a total of 50 users. Based on the results of the web questionnaire survey, the usefulness of the original functions of integrating location-based AR and object-recognition AR and of using pictograms, as well as of the entire system, was highly rated, indicating that efficient sightseeing support for users can be expected. Although most users were not familiar with the application that used AR, they highly rated the functions using location-based AR and object-recognition AR.
(3) From the results of the access analysis of users' log data, the transition of the number of active users revealed that the system was continuously used during the operation. By continuously using the system, it is expected that users will further utilize each function. Based on the results of the user engagement rate, the usage time was 67% for location-based AR and 29% for object-recognition AR, indicating that the former was used more often because it is easier for users to use.
As future work, the system can be improved based on the results in Section 6.3, and the significance of using the system can be enhanced by gathering more data from other urban tourist areas inside and outside Japan. Additionally, it is desirable to develop a unique mobile application in accordance with the improved system.

Author Contributions: Ryo Sasaki designed, developed, and operated the sightseeing support system using augmented reality and pictograms in the present study. He also drafted the initial manuscript. Kayoko Yamamoto carried out the background work and evaluated the system. Both authors contributed to the writing and review and approved the final manuscript.
Funding: This research received no external funding.