Autonomous Indoor Scanning System Collecting Spatial and Environmental Data for Efficient Indoor Monitoring and Control

Abstract: As various activities related to entertainment, business, shopping, and conventions are done increasingly indoors, the demand for indoor spatial information and indoor environmental data is growing. Unlike the case of outdoor environments, obtaining spatial information in indoor environments is difficult. Given the absence of GNSS (Global Navigation Satellite System) signals, various technologies for indoor positioning, mapping and modeling have been proposed. Related business models for indoor space services, safety, convenience, facility management, and disaster response, moreover, have been suggested. An autonomous scanning system for collection of indoor spatial and environmental data is proposed in this paper. The proposed system can be utilized to collect spatial dimensions suitable for extraction of a two-dimensional indoor drawing and obtainment of spatial imaging, as well as indoor environmental data on temperature, humidity and particulate matter. For these operations, the system has two modes, manual and autonomous. The system's main function is the autonomous mode; the manual mode is implemented additionally. It can be applied in facilities without infrastructure for indoor data collection, such as for routine indoor data collection purposes, and it can also be used for immediate indoor data collection in cases of emergency (e.g., accidents, disasters). An autonomous scanning experiment was performed.


Introduction
People are known to spend more than 90% of their entire lives indoors. As this time increases, indoor spaces' various elements and environments will inevitably impact human life. Additionally, the expression and understanding of indoor spaces have grown more complicated as commercial buildings and large-scale facilities have increased in number. Accordingly, the demand for location-based services that can process space and location information is increasing. In order to realize such a service, basic indoor spatial information must be provided. This includes detailed indoor maps and models that can be used for route planning, navigation guidance, and many other applications, as well as location information for pedestrians and goods in indoor spaces.
Unlike in outdoor environments, obtaining spatial information in indoor environments is difficult. In the absence of GNSS (Global Navigation Satellite System) signals, various technologies for indoor positioning, mapping and modeling have been proposed. An indoor positioning technology is one that determines an object's or person's location and then tracks its movement to the next location. Indoor mapping and modeling entail the generation of spatial information based on data that already exists or is obtained by surveying, followed by detection of changes and updating accordingly. Related business models for indoor space services, convenience, facility management, safety, and disaster response have been suggested.
To meet the market demand for indoor applications, techniques for deriving indoor spatial information, along with performance analyses of the related systems, are presented in [1,2]. Usually, they suggest schemes for deriving indoor maps and models based on collected point cloud data. In addition, smart home or building infrastructures integrating IoT technology, based on sensors and actuators for monitoring and control of indoor spaces such as homes or buildings, have been proposed [3,4]. Therefore, we consider a system that can collect indoor data autonomously, for real-time monitoring and control of indoor spaces where there is no infrastructure for the purpose. The data include indoor spatial data as well as environmental data.
In this paper, an autonomous scanning system for collection of indoor spatial and environmental data, and thereby for efficient monitoring and control of indoor spaces, is proposed. The system can be utilized to collect indoor environmental data on temperature, humidity and particulate matter, as well as spatial dimensions suitable for extraction of a two-dimensional indoor drawing and obtainment of spatial imaging. For these operations, the system has two modes, manual and autonomous. It comprises three main components: a user terminal, a gateway, and an autonomous scanner. The user terminal controls the autonomous scanner component and immediately checks scanned information that comes through the gateway. The gateway receives control commands from the user terminal and sends monitoring information to the user. Finally, the autonomous scanner comprises a mobile robot, a lidar sensor, a camera sensor, a temperature/humidity sensor, and a particulate matter sensor. The mobile robot is used to overcome the distance limitation of the lidar sensor in collecting indoor spatial information and to perform autonomous scanning. The proposed system can be applied in facilities without infrastructure for indoor data collection, such as for routine indoor data collection purposes, and it can also be used for immediate indoor data collection in cases of emergency (e.g., accidents, disasters). The remainder of this paper is organized as follows. In Section 2, the background and motivation of the present study are discussed. Section 3 introduces the autonomous scanning system's design and prototype implementation. Section 4 presents the system's experimental results. Section 5 concludes the paper.

Background and Motivation
As an indoor space is a deliberately designed space, it can have many characteristics according to its purpose. Therefore, various related information is available. In this section, we survey the research on indoor mapping and modeling, indoor positioning and localization, and application services utilizing indoor spatial and environmental information. In addition, we present the motivation for the present research.

Indoor Mapping and Modeling
Various techniques have been developed for efficient construction of indoor spatial information, expressing and visualizing data in the form of indoor maps or models, for example. Among those techniques are automatic or semi-automatic extraction from architectural blueprints, Building Information Modeling (BIM) systems that use specialized software, and scanning-based indoor map construction.
Autonomous mobile robots are widely employed in many areas. A mobile robot with a Laser Range Finder (LRF) developed for environment-scanning purposes is presented in [5,6]. In that study, the mobile robot was operated in a known environment; as such, path following was the easiest navigation mode to adopt. Path following allows a robot to move and navigate according to a preprogrammed path. When the robot reaches each of the black points, it stops moving in order to scan the environment; then, after the scanned data has been completely saved in the computer, the robot continues on its path to the end point [6]. Adán et al. present a critical review of current mobile scanning systems in the field of automatic 3D building modeling in [7]. Díaz-Vilariño et al. propose a method for the 3D modeling of indoor spaces from point cloud data in [8]. Some work has evaluated and analyzed the performance of conventional indoor scanning and mapping systems [9,10] that utilize hand-held, backpack or trolley devices. A simultaneous localization and mapping approach based on data captured by a 2D laser scanner and a monocular camera is introduced in [11].

Indoor Positioning and Localization
Indoor positioning technology identifies and determines the location of an object or person and tracks its movement to a new location. It is a key element in indoor spatial information utilization services.
In outdoor scenarios, the mobile terminal position can be obtained with a high degree of accuracy, thanks to GNSS. However, GNSS encounters problems in indoor environments and scenarios involving deep shadowing effects. Various indoor and outdoor technologies and methodologies, including Time of Arrival (ToA), Time Difference of Arrival (TDoA), Received Signal Strength (RSS)-based fingerprinting as well as hybrid techniques are surveyed in [12], which emphasizes indoor methodologies and concepts. Additionally, it reviews various localization-based applications to which location-estimation information is critical.
Recently, a number of new techniques have been introduced, as well as wireless technologies and mechanisms that leverage the Internet of Things (IoT), and ubiquitous connectivity that provides indoor localization services to users. Other indoor localization techniques, such as Angle of Arrival (AoA), Time of Flight (ToF), and Return Time of Flight (RTOF), as well as the above-noted RSS, are analyzed in [13][14][15][16], all of which are based on WiFi, Radio Frequency Identification Device (RFID), Bluetooth, Ultra-Wideband (UWB), and other technologies proposed in the literature.
For indoor positioning, various algorithms utilizing WiFi signals are investigated in [17][18][19][20][21][22] to improve localization accuracy and performance. With regard to UWB ranging systems, an improved target detection and tracking scheme for moving objects is discussed in [23], and a novel approach to precise TOA estimation and ranging error mitigation is presented in [24].
Thanks to the widespread adoption of WiFi and the support of the IEEE 802.11 standard by the majority of mobile devices, most proposed indoor localization systems are based on WiFi technologies. However, the inherent noise and instability of wireless signals usually degrade accuracy and robustness in a dynamically changing environment. A recent novel approach is the use of deep learning-based frameworks to improve localization accuracy. Abbas et al. propose an accurate and robust WiFi fingerprinting localization technique based on a deep neural network in [25]. A deep learning framework for the joint activity recognition and indoor localization task using WiFi channel state information (CSI) fingerprints is presented in [26]. Hoang et al. describe recurrent neural networks (RNNs) for WiFi fingerprinting indoor localization, focusing on trajectory positioning, in [27].

Applications Utilizing Indoor Spatial and Environmental Information
Applications utilizing indoor spatial information facilitate indoor living and can be defined as services provided through wired/wireless terminals by acquiring and managing information directly or indirectly related to the indoor space. Services using indoor spatial information can be categorized into those enhancing people's security and convenience, facility management and disaster response, and marketing and other businesses, all built on basic services that locate their client occupants.
Security services in high-rise buildings and large-sized buildings rely on the effective utilization of indoor spatial information. These services include software and smart equipment to maintain the security of occupants in the building and to assist in firefighting and rescue activities.
Convenience enhancement services include user location-based road guidance services that guide visitors to complex buildings, such as large-scale buildings, underground facilities, and complex space facilities.
Facility management and disaster response services can be classified into space and asset management, BEMS (Building Energy Management System), and PSIM (Physical Security Information Management). BEMS is an integrated building energy control and management system that enables facility managers to take advantage of rational energy consumption by using ICT (Information and Communication Technology) technology to efficiently maintain a user's pleasant and functional work environment. PSIM is a service for responding to disaster situations by identifying such situations and assisting evacuees [28].
As one research field for efficient energy control of buildings using ICT, video/image processing technologies have been explored to monitor human thermal-comfort status in a contactless way, providing feedback signals for BEMS to create energy-efficient, pleasant and functional work environments [29][30][31].
Martinez-Sala et al. describe an indoor navigation system for visually impaired people in [32]. Plagerasa et al. propose a system for collecting and managing sensor data in a smart building operating in an IoT environment in [33]. The importance of indoor environmental monitoring for human safety and health is emphasized in [34]. Mujan et al. review the state-of-the-art literature and establish a connection between the factors that influence health and productivity in any given indoor environment, whether residential or commercial, in [35]. A systematic review of the relevance of indoor sensors to managing optimal energy saving, thermal comfort, visual comfort, and indoor air quality in the built environment is presented in [36]. Therefore, indoor environment scanning can be useful for investigation, monitoring and research related to indoor applications.
This paper proposes an autonomous scanning system for acquisition of indoor environmental data, such as temperature, humidity, particulate matter and indoor imagery, as well as spatial data, for awareness of the indoor environment status and for indoor visualization. The proposed system can be applied in facilities without infrastructure for indoor data collection, such as for routine indoor data collection purposes, and it can also be used for immediate indoor data collection in cases of emergency (e.g., accidents, disasters).

Autonomous Scanning System
This section presents the architecture, functional design and prototype implementation of the proposed autonomous scanning system.

Architecture of Autonomous Scanning System
The conceptual architecture of the autonomous scanning system is defined as shown in Figure 1. It is composed of three main parts: a user terminal, a gateway, and an autonomous scanner.
The user terminal is configured to receive and confirm the results of the indoor spatial data and environmental data, and to input control commands. The gateway is configured to relay control commands input from the user terminal and the spatial and environmental data collected by the autonomous scanner. The autonomous scanner is configured to facilitate indoor data collection and is designed with autonomous driving capability. In addition, various sensor equipment is considered for indoor environmental data collection.

Functional Design of Autonomous Scanning System
In the design of the autonomous scanning system, we utilized prototyping platforms because, for the purpose of detailed function definition, it was necessary to first confirm the components to be used. Figure 2 shows the proposed system's components. A smartphone (Galaxy J6) is used for user-terminal monitoring and control. A Raspberry Pi 3 Model B, a single-board computer, is used as the gateway. It has a 64-bit quad-core processor, on-board WiFi, Bluetooth and USB boot capabilities. The Raspberry Pi's various communication and interface support and computing capabilities make it suitable for rapid gateway prototyping and experimentation. The autonomous scanner component is composed of a mobile robot, a lidar sensor, a camera sensor, a temperature/humidity sensor and a particulate matter sensor. An iRobot Create 2, based on the Roomba 600, is used as the mobile robot. This is a programmable robot platform designed to enable the user to set various functions such as motion, sound and light. As such, it is equipped with LEDs and various sensors and can be programmed for specific sounds, behavior patterns, LED lights, and so on [37].
A Raspberry Pi Camera V2 is used as the camera sensor; it is attached to the Raspberry Pi 3 via a Camera Serial Interface (CSI). A DHT22 is used as a low-cost digital temperature/humidity sensor. It uses a capacitive humidity sensor and a thermistor to measure the surrounding air and outputs a digital signal on the data pin. A Plantower PMS7003 is used as the particulate matter sensor. It is a digital universal particle-concentration sensor that measures the number of suspended particles in the air, i.e., the concentration of particles, and outputs the readings over a digital interface. The main components of the output are the mass concentration and the number of particles of different sizes per unit volume, where the particle-count unit volume is 0.1 L and the unit of mass concentration is µg/m³. Lidar is an acronym for light detection and ranging; it is a surveying method that measures distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. An RPLidar A2, used as the lidar sensor, is a 360-degree 2D laser scanner that can take up to 4000 samples of laser-ranging data per second; its scanning range is 12 m [38].
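As a concrete illustration of how such a particle sensor is read, the following sketch parses one 32-byte PMS7003 output frame. It is based on the sensor's publicly documented frame layout (0x42 0x4D header, big-endian 16-bit fields, trailing byte-sum checksum), not on code from the proposed system, and the returned field names are our own.

```python
import struct

def parse_pms7003(frame: bytes) -> dict:
    """Parse one 32-byte PMS7003 output frame into a dictionary.

    Layout (per the sensor's datasheet): 2 header bytes (0x42 0x4D),
    a 16-bit payload length, 13 big-endian 16-bit data words, and a
    16-bit checksum equal to the byte sum of the first 30 bytes.
    """
    if len(frame) != 32 or frame[0:2] != b'\x42\x4d':
        raise ValueError('not a PMS7003 frame')
    fields = struct.unpack('>15H', frame[2:])  # length, 13 data words, checksum
    if sum(frame[:30]) & 0xFFFF != fields[-1]:
        raise ValueError('bad checksum')
    data = fields[1:14]
    return {
        'pm1_0': data[3],               # atmospheric-corrected µg/m³
        'pm2_5': data[4],
        'pm10': data[5],
        'counts_per_0_1L': data[6:12],  # particles >0.3 ... >10 µm per 0.1 L
    }
```

On the gateway, such a parser would be fed the byte stream read from the sensor's USB serial interface.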

Function Definitions
The user terminal has two modes for indoor data collection, either manual or autonomous. In the manual mode, the movement of the mobile robot, as well as the lidar and camera sensors, is manually controlled. When the autonomous mode is selected, the mobile robot performs autonomous driving, avoiding obstacles to collect and provide sensed data periodically.
The gateway receives and interprets a user's command and executes a program that drives the autonomous scanner accordingly. It stores the sensed data collected by the autonomous scanner: it generates spatial data by converting the data collected from the lidar sensor into a two-dimensional drawing, and stores the temperature/humidity and particulate matter data that are measured periodically. These data are transmitted to the user in response to the user's request. Additionally, a program that controls the movement of the robot is executed according to the driving mode of the mobile robot: the robot drives forward and backward and makes left and right turns according to the user's movement commands. The gateway defines and operates the robot's autonomous driving for autonomous scanning.
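The conversion from lidar readings to a two-dimensional drawing is a polar-to-Cartesian transformation. The following is a minimal sketch of that step; the axis convention (angle measured from the x-axis) is an assumption, not taken from the paper.

```python
import math

def scan_to_points(scan):
    """Convert one lidar sweep of (angle_deg, distance_mm) readings
    into 2D Cartesian points (x, y) in millimetres, relative to the
    robot's current position. The resulting point list can then be
    plotted as the scanned image sent to the user terminal.
    """
    points = []
    for angle_deg, dist_mm in scan:
        theta = math.radians(angle_deg)
        points.append((dist_mm * math.cos(theta), dist_mm * math.sin(theta)))
    return points
```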
An autonomous scanner is defined as a driving vehicle that receives commands from the user through the gateway. The mobile robot equipped with sensors moves in the indoor space according to the user's command, collects spatial data and environmental data, and transmits the data to the gateway. The lidar sensor collects indoor spatial data and transmits it to the gateway. The temperature/humidity sensor and particulate matter sensor collect the corresponding data and transmit them to the gateway. The mobile robot operates in the manual mode or the autonomous mode according to the command received through the gateway. The robot can move forward, backward and turn left and right.

Autonomous Scanning Algorithm
For the autonomous scanning operation, it is necessary to define the autonomous driving technique of the mobile robot. For the autonomous driving of the robot, we utilize the spatial data collected by the lidar sensor. The scanned data of the lidar sensor consist of distance and angle measurements from the current position of the mobile robot to surrounding obstacles. From the collected data, we extract the direction and distance to the robot's next destination. The detailed algorithm is as follows.
We rearrange the scanned data in descending order of distance, extract a fixed amount of the highest-ranked data, and average their angles. The amount of extracted data is decided in the implementation by considering the sampling rate and scanning frequency of the lidar sensor. The average of the angles is obtained as the mean of circular quantities:

s = (1/n) Σⱼ sin θⱼ, c = (1/n) Σⱼ cos θⱼ, θ̄ = atan2(s, c) (1)

In Equation (1), s is the mean sine of the angles, c is the mean cosine of the angles, and θ̄ is the mean angle.
The angle thus obtained becomes the direction of movement toward the next destination. The moving distance should be set in consideration of the distance value used in the calculation and the measurement range of the lidar sensor. Once the initial direction is defined, the back part of the angle corresponding to the rotation angle from the scan is excluded to prevent the robot from going back.
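The direction-extraction step described above can be sketched as follows. The number of retained readings (`top_k`) and the width of the excluded backward sector (`exclude_back_deg`) are tuning parameters we assume here for illustration; the paper only states that they are set from the lidar's sampling rate and scanning frequency.

```python
import math

def next_direction(scan, top_k=50, exclude_back_deg=90.0,
                   heading_deg=0.0, first_step=False):
    """Pick the next movement direction from one lidar sweep.

    scan: list of (angle_deg, distance_mm) pairs. After the first step,
    angles within +/- exclude_back_deg/2 of the backward direction
    (heading + 180 degrees) are discarded so the robot does not go back.
    """
    if not first_step:
        back = (heading_deg + 180.0) % 360.0

        def is_backward(a):
            # Smallest angular difference between a and the backward direction.
            diff = abs((a - back + 180.0) % 360.0 - 180.0)
            return diff <= exclude_back_deg / 2.0

        scan = [(a, d) for a, d in scan if not is_backward(a)]
    # Keep the top_k farthest readings: open space lies in that direction.
    farthest = sorted(scan, key=lambda p: p[1], reverse=True)[:top_k]
    # Circular mean of the selected angles, as in Equation (1).
    s = sum(math.sin(math.radians(a)) for a, _ in farthest) / len(farthest)
    c = sum(math.cos(math.radians(a)) for a, _ in farthest) / len(farthest)
    return math.degrees(math.atan2(s, c)) % 360.0
```

Averaging angles through their sines and cosines avoids the wrap-around problem of naive averaging (e.g., the mean of 350° and 10° correctly comes out near 0°, not 180°).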
The detailed operations of the autonomous driving algorithm are presented in Figures 3-5.
Additionally, as shown in Figure 4, the algorithm works so that the proper movement direction can be extracted even if the mobile robot's position is not centered.
Once the initial direction is defined, to prevent the robot from going back, the back part of the angle corresponding to the rotation angle from the scan is excluded. The next direction is calculated by excluding the backward angle based on the current direction, as shown in Figure 5 (left, middle).
Even when the robot approaches the wall, the next direction can be derived according to the proposed autonomous driving algorithm, as presented in Figure 5 (right).

Prototype Implementation
In this section, we present the implementation of the autonomous scanning system. The prototype architecture of the autonomous scanning system is defined as shown in Figure 6. It is composed of three main parts: a user terminal, a gateway, and an autonomous scanner.

The user terminal controls the mobile robot as well as the lidar and camera sensors. It monitors data in the forms of scanned images from the lidar sensor, captured images from the camera sensor, and readings from the temperature/humidity and particulate matter sensors, and it exchanges this data with the gateway via TCP/IP communication. The gateway receives control commands from the user terminal and sends monitoring data back to the user terminal via TCP/IP communication. The autonomous scanner components are connected to the gateway directly: the mobile robot, particulate matter sensor and lidar sensor communicate via a USB serial interface; the camera sensor is attached via a CSI (Camera Serial Interface), and the temperature/humidity sensor is connected via a GPIO (General Purpose Input and Output) port. The gateway controls the autonomous scanner according to user commands and obtains the scanned data.
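The command exchange between the user terminal and the gateway can be sketched as a small TCP server loop. The line-based protocol and the command names (`SCAN`, `FORWARD`, etc.) are assumptions for illustration; the paper only states that control commands and monitoring data are exchanged over TCP/IP.

```python
import socket
import threading

def handle_command(cmd: str) -> str:
    # Dispatch table standing in for the real robot/sensor control code
    # on the gateway; the command set here is hypothetical.
    known = {'FORWARD', 'BACKWARD', 'LEFT', 'RIGHT', 'SCAN', 'PHOTO'}
    return 'OK ' + cmd if cmd in known else 'ERR unknown command'

def serve_once(host='127.0.0.1', port=0):
    """Accept a single connection, answer one command, and return
    the port the server is listening on (port=0 picks a free port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def worker():
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(1024).decode().strip()
            conn.sendall((handle_command(cmd) + '\n').encode())
        srv.close()

    threading.Thread(target=worker, daemon=True).start()
    return srv.getsockname()[1]
```

A terminal-side client would connect with `socket.create_connection(...)`, send a command line, and read back the status line.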

Mobile Application for User Terminal
For the purposes of the user terminal, we developed a mobile application to monitor and control the system. Some screenshots of the mobile application are presented in Figure 7.

In the right of the figure, the control plane is shown. When the user touches the 'click to release' menu, temperature, humidity and particulate matter data are updated periodically. The operation mode of the autonomous scanner can be selected between the autonomous mode and the manual mode. The system's main function is the autonomous mode; the manual mode is implemented additionally. When the autonomous mode is set, the mobile robot moves according to the autonomous scanning algorithm and collects indoor spatial data, image data, and temperature, humidity and particulate matter data. The initial state of the operation mode is manual, meaning that the mobile robot is controlled by the user using the four directional buttons below: the upward button is for forward movement, the downward button for backward movement, the leftward button for left turns, and the rightward button for right turns.
If the user touches the camera icon, a picture-taking command is transmitted to the gateway, and the gateway controls the camera sensor and stores the captured image. If the user touches the gallery icon, the gateway transmits the captured image to the user terminal. The scan icon controls the lidar sensor. When the gateway receives a scan command, it controls the lidar sensor and obtains and draws 2D point coordinates as a scanned image. If the user touches the map icon, the gateway transmits the scanned image to the user.
The main difference between the autonomous mode and the manual mode is the control of the mobile robot. In the manual mode, the user must control the movement of the mobile robot, and in the autonomous mode, the mobile robot is driven to avoid obstacles according to the autonomous driving algorithm. In the autonomous mode, indoor scanning and camera control are automatically performed at every movement step in the collection of sensor data, whereas in the manual mode, they must be performed manually by the user. Even so, both temperature and humidity data are collected periodically and sent to the user terminal in the manual mode.
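The behavioral difference between the two modes can be sketched as a mode-dependent control loop. This is an illustrative reconstruction, not the authors' actual implementation; the `Scanner` class, `step` method and action names are our own:

```python
AUTONOMOUS, MANUAL = "autonomous", "manual"

class Scanner:
    def __init__(self):
        self.mode = MANUAL  # initial state is manual, per the paper
        self.log = []

    def step(self, user_command=None):
        # Temperature/humidity data are collected periodically in BOTH modes.
        self.log.append("read_temp_humidity")
        if self.mode == AUTONOMOUS:
            # Scanning, image capture and movement happen automatically.
            self.log.extend(["lidar_scan", "capture_image", "move_auto"])
        elif user_command:
            # In manual mode the user triggers movement/scans explicitly.
            self.log.append(user_command)

s = Scanner()
s.step("forward")      # manual: only the user's command is executed
s.mode = AUTONOMOUS
s.step()               # autonomous: scan/capture/move run every step
```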

Gateway
The gateway operates in direct connection with the mobile robot and sensors. According to the user's commands, it runs the associated control programs and stores the various data collected by the autonomous scanner.
On the Raspberry Pi, three main Python scripts are implemented. The first script processes the user's commands and operates the iRobot in the manual or autonomous mode. The second script controls the RPLidar A2 and the Pi Camera V2. The third script controls the temperature/humidity and particulate matter sensors. Data collected from each sensor are stored as files on the Raspberry Pi. The format of a temperature/humidity sensor data file is as follows. The format of a particulate matter sensor data file is as follows. For the purposes of tracing and analysis, the measured date and time are added to each sensed data record.
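As a sketch of how such timestamped records might be written, the following appends date/time-prefixed lines to per-sensor files. The file names and field layout here are assumptions for illustration, not the paper's exact formats:

```python
from datetime import datetime

def append_reading(path, sensor, values):
    # Prefix each record with the measured date/time for tracing/analysis.
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    line = f"{stamp},{sensor}," + ",".join(str(v) for v in values)
    with open(path, "a") as f:
        f.write(line + "\n")
    return line

r1 = append_reading("dht22.txt", "temp_humidity", [23.5, 41.2])
r2 = append_reading("pms7003.txt", "pm", [8, 12, 15])  # PM1.0/PM2.5/PM10
```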
Table 1 shows sample angle and distance data obtained from the RPLidar A2 lidar sensor. The lidar data is provided to the user by converting it to a scanned image in the form of a two-dimensional drawing, and is also used to calculate the movement direction and distance of the iRobot when it is operating in the autonomous mode. For an autonomous scan operation, autonomous driving of the iRobot is implemented as follows. First, the gateway collects angle and distance data from the current position of the iRobot to surrounding obstacles using the RPLidar A2 and sorts the data in descending order of distance. Then, it extracts the highest-ranked 550~1100 samples and averages their angles using the mean-of-circular-quantities formula. To prevent the robot from turning back, readings in the rear sector behind the robot are excluded from the scan. The angle of the robot's current heading is always 0°; therefore, the excluded rear sector was set from 120° to 260° in this implementation.
The angle thus obtained becomes the direction of movement toward the next destination. The moving distance is set in consideration of the distance values used in the calculation and the measurement range of the lidar sensor. In this implementation, we chose the top distance value used in the calculation, divided it by 160 (a divisor determined heuristically), and set the result as the movement distance.
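The direction and distance selection described above can be sketched as follows. The thresholds (the top-ranked samples, the 120°–260° rear exclusion, and the divisor 160) follow the paper, while the function name and data layout are our own illustrative assumptions:

```python
import math

def next_move(samples, top_n=550, excl=(120, 260), divisor=160):
    """samples: list of (angle_deg, distance_mm) lidar readings."""
    # Drop readings in the rear sector so the robot does not turn back.
    fwd = [(a, d) for a, d in samples if not (excl[0] <= a <= excl[1])]
    # Sort by distance, descending, and keep the farthest readings.
    fwd.sort(key=lambda s: s[1], reverse=True)
    top = fwd[:top_n]
    # Mean of circular quantities: average unit vectors, then atan2.
    sx = sum(math.cos(math.radians(a)) for a, _ in top)
    sy = sum(math.sin(math.radians(a)) for a, _ in top)
    heading = math.degrees(math.atan2(sy, sx)) % 360
    # Movement distance: largest distance scaled by the heuristic divisor.
    distance = top[0][1] / divisor
    return heading, distance

# Open space straight ahead (0°, 350°, 10°), wall behind (180°, excluded).
samples = [(0, 3200), (10, 3000), (350, 3100), (180, 500)]
heading, distance = next_move(samples, top_n=3)
```

With these sample readings the circular mean of 0°, 350° and 10° lies straight ahead, and the movement distance is 3200/160 = 20.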

Autonomous Scanner
The autonomous scanner is the driving vehicle that actually operates according to the commands issued by the user via the gateway. The mobile robot, equipped with sensors, moves through the indoor space, collects spatial and environmental data, and transmits the data to the gateway. The movement of the iRobot and the control of the sensors are implemented in Python scripts on the Raspberry Pi, as mentioned in Section 3.3.2. Figure 8 shows the iRobot sensors that are mainly related to driving. The wall sensor allows the robot to identify the right-hand wall and continue to move along it. The light bumper sensor detects an obstacle ahead while the robot is driving so that the robot can avoid it. The bumper sensor detects collisions with obstacles that the robot has not detected beforehand and enables the robot to continue moving after handling the collision. In this implementation of the autonomous scanning algorithm, activation of the bumper sensor is judged as a collision with an unexpected obstacle; the robot stops and moves backward a pre-defined distance, and the next destination is then recalculated according to the autonomous scanning algorithm.

Experiments
In this section, we present the experimental results for the proposed autonomous scanning system. In order to verify the operation of the proposed system, we selected some indoor spaces and performed experiments. The experiments were performed in two parts, manual scanning and autonomous scanning.

Manual Scanning Experiment
The manual scanning experiment was carried out in the lobby shown in Figure 9. The initial state of the mobile robot was the manual mode, and the user controlled the lidar and camera sensors.

Figure 10 presents the environmental data and captured image for the indoor lobby along with the scanned result. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data were also maintained in the gateway. In the second experiment, the user controlled the mobile robot's movement in the manual mode; Figure 11 shows the results of this movement.

Autonomous Scanning Experiment
The autonomous scanning experiment was carried out in the corridor. Figure 13 shows an example of the autonomous-mode operation of the proposed system: the user activates the autonomous mode via smartphone, the autonomous scanner then operates in that mode, and the user obtains the scanned result. The first autonomous scanning experiment was carried out in the corridor shown in Figure 14. Figure 15 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner; each number represents a point moved to by autonomous driving from the starting position of the autonomous scanner.
In the process of integrating the scan data, we found that a calibration scheme is needed; for example, there is some noise between numbers 5 and 6 in the scanned result. This issue will be held over for future research. In the autonomous operation mode, the mobile robot moves according to the autonomous scanning algorithm and collects indoor spatial data, image data, temperature, humidity and particulate matter data at each moving position. The gateway stores the data collected from each sensor.
The initial position in the indoor space (corridor) for the first autonomous scanning experiment is shown in Figure 16. Figure 17 presents the environmental data and captured image for the indoor corridor along with the scanned result at the initial position. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data were also maintained in the gateway.
Figure 18 shows the position (marked 4 in Figure 15) in the corridor space where the first autonomous scanning experiment was performed, and Figure 19 presents the environmental data (left), captured image (middle) and scanned result (right) at that position.
Figure 20 shows the corridor space at the position (marked 7 in Figure 15), and Figure 21 presents the environmental data, captured image and scanned result for that position.
The second and the third autonomous scanning experiments were carried out in the corridor shown in Figure 22. Figure 23 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner in the second experiment. Each number represents the point moved to by autonomous driving from the starting position of the autonomous scanner.
The initial position in the indoor space for the second autonomous scanning experiment is shown in Figure 24. Figure 25 presents the environmental data and captured image for the indoor corridor along with the scanned result at the initial position in the second autonomous scanning experiment. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data were also maintained in the gateway.
Figure 26 shows the corridor space at the position (marked 6 in Figure 23) where the second autonomous scanning experiment was performed, and Figure 27 presents the environmental data, captured image and scanned result at that position. Figure 28 shows the corridor space and scanned result at the position (marked 12 in Figure 23).
The third autonomous scanning experiment was carried out in the corridor shown in Figure 22 again. Figure 29 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner: the numbers indicate the positions after movement from the starting position of the autonomous scanner (left), and the trajectory of the autonomous scanner is also indicated (right). As can be seen in the results, the movement path of the robot differs even though the second and the third experiments were performed in the same indoor space, because the direction toward the next destination is derived dynamically from the scanned data in the proposed autonomous driving algorithm.
Figure 30 shows the position (marked 2 in Figure 29) in the corridor space where the third autonomous scanning experiment was performed. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone, and the indoor spatial and environmental data were also maintained in the gateway. Figure 32 shows the position (marked 5 in Figure 29) in the corridor space where the third autonomous scanning experiment was performed.
An additional autonomous scanning experiment was carried out in the corridor with an obstacle; the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner is shown in Figure 34. Each number represents the point moved to by autonomous driving from the starting position of the autonomous scanner. As described in the algorithm, the direction and distance toward the next destination were extracted in consideration of not only the wall but also other obstacles.

Results and Discussion
In the above experiments, we selected indoor spaces and performed manual scanning and autonomous scanning in order to verify the operation of the proposed system. Table 2 shows portions of the temperature, humidity and particulate matter data collected in the autonomous scanning experiments. Table 3 shows partial angle and distance data obtained from the RPLidar A2 lidar sensor. Such data is provided to the user by converting it to a scanned image in the form of a two-dimensional drawing, and is also used to calculate the movement direction and distance of the iRobot when it is operating in the autonomous mode.
Figure 35 shows a Python script for generation of a two-dimensional drawing using the angle and distance data obtained from the RPLidar A2 lidar sensor. The data was divided into four cases according to the angle value; each case converts the angle to radians and generates the X and Y coordinates from the distance and the radian value. The two-dimensional drawing was generated by plotting those X and Y coordinates.
As interest in data on indoor spaces has increased, research on scanning and positioning methods for indoor space information collection and location recognition has been actively carried out, and relevant systems have been proposed. Such systems incorporate indoor mapping and modeling, positioning, and platform-related technologies. In addition, smart home or building infrastructures integrating IoT technology based on sensors and actuators for monitoring and control of indoor spaces have been proposed. In this paper, we consider a system that can collect indoor data autonomously for real-time monitoring and control of indoor spaces where there is no infrastructure for the purpose. The data include indoor spatial data as well as environmental data.
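The four-case polar-to-Cartesian conversion described for Figure 35 can be sketched as below. This is a hedged reconstruction, not the authors' exact script; a single cos/sin formula over the full angle would be mathematically equivalent:

```python
import math

def to_xy(angle_deg, distance):
    # Split into four cases by quadrant, as the Figure 35 script does;
    # each case converts the in-quadrant angle to radians and derives X/Y.
    a = angle_deg % 360
    rad = math.radians(a % 90)
    if a < 90:    # quadrant I
        return distance * math.cos(rad), distance * math.sin(rad)
    elif a < 180: # quadrant II
        return -distance * math.sin(rad), distance * math.cos(rad)
    elif a < 270: # quadrant III
        return -distance * math.cos(rad), -distance * math.sin(rad)
    else:         # quadrant IV
        return distance * math.sin(rad), -distance * math.cos(rad)

# One lidar sample per quadrant-boundary/interior case.
points = [to_xy(a, d) for a, d in [(0, 100), (90, 100), (210, 200)]]
```

Plotting such X/Y points (e.g., with matplotlib) yields the two-dimensional drawing delivered to the user.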
In the design and implementation of the proposed system, we utilized prototyping platforms and performed experiments in order to verify the operation of the system. As shown in the results, the proposed system functions properly: it provides temperature, humidity, particulate matter and image data as well as spatial data to the user in real time, and it can drive in an indoor space while avoiding obstacles in the autonomous mode.
The mobile robot and each sensor are connected directly to the gateway. The user terminal exchanges control commands and sensor data with the gateway via TCP/IP communication. In the proposed system, the data from each sensor are stored in the gateway; therefore, in cases where the communication environment is inadequate, the stored data can be checked in addition to real-time checking.
The iRobot Create 2 used as a prototyping tool has a battery that runs for 2 h when fully charged, and the system has limitations in operating in more complex environments such as disaster areas. It is expected that if the proposed prototype's robot and sensor functions and performance are improved, it will find use in various applications.

Conclusions
People spend most of their time indoors, and the indoor environment influences their health, wellbeing, and productivity. Accordingly, the demand is increasing both for location-based services that can process space and location information and for smart indoor spaces that emphasize safe, healthy, comfortable, affordable, and sustainable living environments.
In order to meet the demand for indoor applications, techniques for deriving indoor spatial information must first be provided. Usually, schemes for deriving indoor maps and models are based on collected point cloud data. In addition, keeping indoor data up to date and checking it is problematic, although doing so is critical for supporting rapid intervention in emergency situations. Information about building layouts and indoor object occupancy is a critical factor for efficient and safer emergency response in disaster management.
Second, the convergence of various new technologies such as sensing and actuation, advanced control, and data analytics has been proposed for smart indoors. Therefore, we consider a system that can collect indoor data autonomously for monitoring and control of indoor spaces in real time where there is no infrastructure for the purpose. The data include indoor spatial data as well as environmental data.
We proposed herein an autonomous indoor scanning system that can be employed for acquisition of indoor environmental data including temperature, humidity and particulate matter information along with indoor imaging and spatial data for the purposes of indoor environment status awareness and indoor visualization.
The system collects indoor data autonomously, monitoring and controlling indoor spaces in real time where there is no infrastructure for such purposes. For the design and implementation of the proposed system, we utilized prototyping platforms including iRobot Create 2 and Raspberry Pi 3, along with the RPLidar A2 lidar sensor, the Raspberry Pi Camera V2, the DHT22 temperature/humidity sensor, and the Plantower PMS7003 particulate matter sensor. A smartphone was used as the user terminal for monitoring and control of the system, to which end we implemented a mobile application.
The results of our implementation and experimentation indicated proper functioning of the proposed system. It provides temperature, humidity, particulate matter and image data as well as spatial data to the user in real time, and it can drive in indoor spaces while avoiding obstacles in the autonomous mode. In addition, the data from each sensor are stored in the gateway; therefore, in cases where the communication environment is inadequate, the stored data can be checked in addition to real-time checking. In the process of integrating the scan data, however, we found that a calibration scheme is needed; this issue will be held over for future research.
The proposed system can be applied in facilities without infrastructure for indoor data collection, such as for routine indoor data collection purposes, and it can also be used for immediate indoor data collection in cases of emergency. As currently prototyped, the system has limitations in operating in more complex environments such as disaster areas. It is expected that if the prototype's robot and sensor functions and performance are improved, the system will find use in various applications.
In future work, we will improve system performance by means of an advanced lidar sensor and mobile robot, and we will improve system functionality by installation of additional sensors. Additionally, a correction method for improved accuracy of scanned data and an autonomous-driving algorithm upgrade are required.
Author Contributions: S.H. prepared the paper structure, surveyed the related research, designed the proposed idea and research, and wrote the paper; D.P. implemented the prototype of the proposed system and performed the experiments. Both authors have read and agreed to the published version of the manuscript.