2.1. IoT and User Behavior Value
IoT is a system in which intelligent objects are connected in a physical or virtual space, and a network is formed between people and objects, or between objects and objects [9
]. IoT can also be defined as a global infrastructure that provides intelligent services by combining knowledge based on context awareness. Implementing IoT requires an embedded system represented by things, a bi-directional communication environment (including the Internet), and commercial software to process the data.
IoT was initiated with the ability to remotely control lighting, thermostats, and security devices in everyday life [10
]. This ability can be viewed as a function that satisfies a user’s behavioral values (UBVs) of management, promptness, and information [11
]. Since then, IoT has evolved into a means of exchanging information between objects, and the “If This Then That” (IFTTT) concept has become universal, satisfying the values of scalability and automation. IFTTT is a service that links various programs and applications on the Internet with a computer through a command “recipe” [12
]. In recent years, IoT in daily life has shown a tendency to expand its service centering on home IoT, which is fused with an artificial intelligence (AI) client. This enables users to manage multiple Internet devices more conveniently with voice commands.
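The IFTTT “recipe” concept described above can be sketched as a trigger–action pair. The following is a minimal illustration only; the device names, event format, and helper names are invented and do not reflect the real IFTTT service API.

```python
# Minimal sketch of an IFTTT-style "recipe": an "If This" trigger predicate
# paired with a "Then That" action. All identifiers here are illustrative.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Recipe:
    trigger: Callable[[Dict[str, Any]], bool]  # "If This": predicate over an event
    action: Callable[[Dict[str, Any]], str]    # "Then That": response command

def run_recipes(event: Dict[str, Any], recipes: List[Recipe]) -> List[str]:
    """Fire the action of every recipe whose trigger matches the event."""
    return [r.action(event) for r in recipes if r.trigger(event)]

# Example recipe: when the front door unlocks, turn on the hall light.
recipes = [
    Recipe(
        trigger=lambda e: e["device"] == "front_door" and e["state"] == "unlocked",
        action=lambda e: "hall_light:on",
    )
]

print(run_recipes({"device": "front_door", "state": "unlocked"}, recipes))
# ['hall_light:on']
```

A voice-driven AI client, as described above, would simply translate spoken commands into such events before dispatching them to the recipe engine.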
In particular, one report summarizes existing high-level techniques from gas sensing and IoT-related papers published within the last five years. The approach was tested in a kitchen environment containing several objects monitored by different sensors [13
]. The authors of the report introduced a representation and reasoning model for the interpretation of a gas sensor situated in the sensor network. The interpretation process includes inferring high-level explanations for changes detected in the gas signals. Inspired by the Semantic Sensor Network (SSN), the ontology used in this work provides an adaptive way of modeling the domain-related knowledge. Furthermore, exploiting Answer Set Programming (ASP) enables a declarative and automatic way of defining rules. By converting the ontology concepts and relations into ASP logic programs, the interpretation process defines a logic program whose answer sets are treated as candidate explanations for the detected changes in the gas sensor signals [14].
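The rule-based interpretation step described above can be illustrated, outside of ASP, with a small forward-chaining sketch. The facts, rules, and explanation labels below are invented for demonstration; the cited work would instead run an ASP solver over the ontology-derived logic program.

```python
# Hedged sketch (not the authors' implementation): rule-based interpretation
# of gas-sensor changes, loosely mirroring how ontology relations can be
# recast as logic rules whose models serve as explanations.

def explain(facts, rules):
    """Forward-chain rules of the form (body_set, head) to a fixed point,
    then return only the newly inferred explanation atoms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived - set(facts)

# Observed change in the gas signal plus context from the sensor network.
facts = {"co2_rising", "stove_on", "kitchen_occupied"}
rules = [
    ({"co2_rising", "kitchen_occupied"}, "explanation:human_presence"),
    ({"co2_rising", "stove_on"}, "explanation:combustion"),
]
print(sorted(explain(facts, rules)))
# ['explanation:combustion', 'explanation:human_presence']
```

Unlike this naive sketch, ASP additionally supports negation as failure and multiple answer sets, which is what makes it suitable for enumerating alternative explanations.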
As home IoT has become more convenient, it has become more widely used in everyday life. With this greater prevalence, however, users have become increasingly concerned about the privacy, security, and safety issues of home IoT devices, particularly with respect to the numerous sensors and communication devices involved. From the UBV perspective, the value users place on safety has recently increased alongside universal UBVs such as manageability, speed, and scalability [11
]. We derived 28 UBV items based on the previous six years of IoT-related studies and on the theory of change. We redefine the three UBVs, as shown in Table 2
, by incorporating the overlapping or similar concepts.
The theory of change emerged from the field of program theory and program evaluation in the mid-1990s as a new means of analyzing theories motivating programs and initiatives toward social and political change [19
]. The theory of change generates knowledge about whether a program is effective, while explaining what methods the program can employ to be effective. In the early days of the theory of change, Kubisch established three quality control criteria to combine theory with traditional manufacturing, environmental psychology, organizational psychology, sociology, and political science [20
]. The three criteria are plausibility, feasibility, and testability. Because these criteria have gradually been extended to research on the theoretical background of system maintenance and software upgrades in information and communication technology, they have appeared under various terms and as different values [21].
First, plausibility refers to the “logic of outcomes” pathway; in terms of UBV, it is the user’s expectation of, or satisfaction with, the accuracy and logic of a new technology. Later studies have replaced plausibility with notions of relationship, sociality, and convenience. We redefine plausibility as interactivity, capturing the accuracy of the technique and the resulting user satisfaction. Second, feasibility refers to whether an initiative can realistically achieve its long-term outcomes and impacts. Research has treated feasibility in terms of the manageability of technologies, addressing the psychological reticence users feel toward a given technology.
Thus, we contend that people using home IoT products or services can overcome their technical reticence and gain psychological flexibility through certain values. We redefine all of these values as stability. Finally, testability refers chiefly to indicators that measure the importance of users’ behavioral values; that is, it is a type of instrumental utility that quantitatively measures the flow and change of thought. Recent information and communication technology (ICT) research has replaced testability with a kind of functionality. In this study, we redefine it to comprehensively cover UBVs such as scalability, compatibility, and promptness.
2.2. Studies on Improvement of Indoor Air Quality
A pleasant indoor environment is determined by the comprehensive action of various indoor environmental factors. In recent years, there has been a growing interest in indoor environmental factors that directly affect the degree of comfort for people who reside indoors, including temperature and humidity. In addition, there is a continuing need to manage indoor air quality factors, such as fine dust and carbon dioxide, which are closely related to human health [22].
According to US Environmental Protection Agency research, indoor hazardous substances include carbon dioxide (CO2), nitrogen dioxide (NO2), sulfur dioxide (SO2), ozone (O3), fine dust, heavy metals, asbestos, volatile organic compounds (VOCs), formaldehyde (HCHO), microbial substances, and radon (Rn). Various gas measurement sensors for indoor air pollution sources have been developed and employed. Moreover, studies and development are currently underway on technologies that quickly detect flammable or toxic gases and respond accordingly [23].
Research on indoor air quality sensing has been conducted for various public places of everyday life, such as subways, schools, department stores, and offices. Paulos et al. [24
] developed a system for measuring and monitoring office air quality through research on the office indoor air environment and work efficiency. As a result of controlling the system through a wireless sensor network linked to mobile devices, the overall work efficiency of the employees increased. Kanjo [25
], Lohani and Acharya [26
] developed their own environmental information monitoring systems that apply precautions, such as indoor fine dust reduction, using a mobile wireless LAN. The authors showed that employee satisfaction with the work environment increased. Hwang and Yoe [27
] monitored and analyzed indoor environment information through closed-circuit television (CCTV) and public environment information using an application programming interface (API). In addition, they developed an indoor environmental control system based on automatic situation recognition. Wang et al. [28
] and Pötsch et al. [29
] developed a wireless-sensor-based indoor environmental monitoring system for green buildings and the LoRaWAN stack, respectively. The system visualizes collected indoor environment data and measurement position data, and it distributes the temperature sensor to various locations in the target space. Moreover, it communicates the temperature in each space using a step color chart. Specifically, the authors calculated the distance from a window and installed sensors at three levels above the horizontal point. Their system visualizes the collected data as a three-dimensional space chart according to the spatial distribution.
In a study on an indoor air quality monitoring system, researchers divided the floor plan of a space into multiple sub-spaces and expressed the measured fine dust concentrations in two or three dimensions [30
]. The system has a simple structure for intuitively grasping the indoor environmental condition, thus enabling a comparison of the dust concentration according to the space. Meanwhile, the studies of Salamone et al. [31
] utilized simpler self-developed experimental tools: they installed an open-source Smart Lamp in a real office environment and tested the reliability of the IoT equipment. Salamone et al. [32
] conducted a ventilation efficiency evaluation according to the ventilation method of an indoor space using a computational fluid dynamics (CFD) technique. To this end, they developed a system for measuring toluene concentrations and visualizing them in three-dimensional (3D) charts, which were applied to the field and contributed significantly to lowering the average toluene concentration.
Moreover, another paper provides an important reference on how to sense different kinds of gases, describing the sensing methods in detail. According to this study, sensitivity (the minimum target gas volume concentration at which the gas can be detected) and selectivity (the ability of a gas sensor to identify a specific gas within a mixture) are regarded as key measures for evaluating stability in gas sensing. In addition, response time (the period from when the gas concentration reaches a specific value to when the sensor generates a warning signal), energy consumption, reversibility (whether the sensing materials can return to their original state after detection), adsorptive capacity, and fabrication cost are important factors.
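One of the measures discussed above, response time, can be illustrated with a short sketch: the delay between the gas concentration crossing a target level and the sensor output crossing its warning level. The signal values and thresholds below are invented for demonstration.

```python
# Illustrative sketch (invented data): response time as the lag between the
# true gas concentration reaching a target value and the sensor output
# reaching its warning value.

def response_time(times, gas, signal, gas_threshold, warn_threshold):
    """Return the lag (in the units of `times`) between the two crossings."""
    t_gas = next(t for t, g in zip(times, gas) if g >= gas_threshold)
    t_warn = next(t for t, s in zip(times, signal) if s >= warn_threshold)
    return t_warn - t_gas

times  = [0, 1, 2, 3, 4, 5]          # seconds
gas    = [0, 40, 80, 120, 160, 200]  # true concentration (ppm)
signal = [0, 10, 30, 60, 110, 170]   # sensor output lagging behind the gas

print(response_time(times, gas, signal, gas_threshold=100, warn_threshold=100))
# 1  (gas crosses 100 ppm at t=3; the sensor warns at t=4)
```

Sensitivity and selectivity, by contrast, are properties of the sensing material itself and would be characterized against calibrated gas mixtures rather than computed from a single trace.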
As shown by the above examples, most studies on indoor air quality improvement developed a system suited to a specific environment. Such an approach is difficult to apply across all environments of a given workplace with a standardized sensor device, and it cannot by itself achieve the result that actual users in the workplace need: the reduction of harmful indoor components. Recent trends in prior research suggest that an effective methodology is to construct a planned system environment and to create data through the distribution of sample data. In other words, just as many experimental studies create environments in which the variables can be controlled, we can not only set up specific situations but also carry out all experimental steps scientifically: system design, instrument connection, data communication control, sample data distribution analysis, and function estimation and verification. Many customized studies have been conducted through such system-building processes [25
], and the results have been reflected very successfully in practice. From the researcher’s viewpoint, it is more effective to develop a system suited to the environment and apply it to identifying problems and finding solutions. Following these trends, we intend to develop an air quality improvement system that can be applied to apartments, the most common Korean housing type.
2.3. Technique of Random Data Generation
There are several ways to amplify data within a given error term. Previous studies on random number generation have largely followed three trends. First, in the information technology (IT) field, random number generation and its statistical evaluation have mainly appeared in research on cryptography and system security. Second, research on random number generation in the financial sector has predominantly focused on predicting how stock and bond values change in response to interest rates and other macroeconomic variables. Finally, random number generation has traditionally been used with statistical tests to generate test data in areas where mathematical proofs are required.
Xiao et al. [33
] argued that the most important point in generating test data is finding an efficient optimization algorithm. They generated test data using a genetic algorithm (GA), simulated annealing (SA), and genetic simulated annealing (GSA), and they concluded that GA is the best optimization algorithm for generating test data. Several studies were conducted to improve the efficiency of test data generation by improving existing optimization algorithms. Alba and Chicano [34
] applied parallel GA to test data generation, and Mousa et al. [35
] suggested application of a memetic algorithm that combines GA and local optimization algorithms. Watkins and Hufnagel [36
] compared the fitness evaluation functions used to generate the test data. The results showed that the most efficient fitness evaluation functions for generating test data are BP1, BP2, and IPP.
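As a concrete illustration of the GA-based test data generation discussed above, the following minimal sketch evolves integer test inputs toward an assumed boundary value. The fitness function, encoding, and parameters are invented for illustration and are not those of the cited studies.

```python
# Hedged sketch of a genetic algorithm for test data generation.
import random

def fitness(x):
    # Toy objective: test inputs closest to an assumed boundary value of 42
    # score highest (fitness 0 is optimal).
    return -abs(x - 42)

def ga(pop_size=20, generations=60, lo=0, hi=1000, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                  # crossover: midpoint of two parents
            if rng.random() < 0.3:                # mutation: small random perturbation
                child = max(lo, min(hi, child + rng.randint(-5, 5)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga())  # a generated test input at or near the boundary value 42
```

A memetic variant, as in Mousa et al., would add a local search step after mutation; simulated annealing would instead perturb a single candidate under a cooling schedule.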
Monte Carlo simulation (MCS) has been considered the most effective technique for random number generation for complex financial products. MCS is a common method that performs numerical integration based on random sampling. However, since random sampling is inherently a brute-force method (BFM), many trials are required to maintain high accuracy and a minimal error rate, which is also time-consuming. To solve this problem, Mallat [37
] used the random number generation scheme (RNGS) to investigate bond values. This method stratifies the sampling of interest rate data through a uniform distribution, applies an inverse-transform technique, and then obtains a random variable from the inverse function. Research on random sampling in the financial sector has centered on the interest rate structure; however, it has been supplemented by various alternative financial models, such as the standard Wiener process (SWP) [38
]. In other words, a cumulative (or spectral or density) distribution function of the actual sampled data was converted into rich interest rate data, and the distribution function of the random data was eventually generated through natural cubic spline (NCS) interpolation.
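The sampling pipeline described above, building an empirical distribution from observed data and drawing variates by the inverse-transform technique, can be sketched as follows. For brevity, the inverse CDF is interpolated linearly between data points here, whereas the cited approach uses natural cubic spline (NCS) interpolation; the example data are invented.

```python
# Hedged sketch of inverse-transform sampling from an empirical CDF.
import random

def inverse_transform_sampler(data, n, seed=0):
    """Draw n samples whose distribution approximates that of `data`."""
    xs = sorted(data)
    m = len(xs)
    cdf = [(i + 1) / m for i in range(m)]        # empirical CDF at each data point
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random()                          # uniform variate in (0, 1)
        # Invert the CDF: locate u and interpolate between neighboring points
        # (an NCS fit of the CDF would be inverted here instead).
        for i, p in enumerate(cdf):
            if u <= p:
                if i == 0:
                    out.append(xs[0])
                else:
                    frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
                    out.append(xs[i - 1] + frac * (xs[i] - xs[i - 1]))
                break
    return out

observed = [400, 420, 450, 500, 600, 800]        # e.g. CO2 readings (ppm)
samples = inverse_transform_sampler(observed, 1000)
print(min(samples) >= 400 and max(samples) <= 800)  # True: samples stay in range
```

Replacing the linear interpolation with a smooth NCS fit of the CDF yields a continuous density and avoids the piecewise kinks that linear inversion introduces.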
In this study, we employ Gerald and Wheatley’s random number generation method. We create a density distribution function based on the actual home IoT data, such as the indoor air quality concentration from apartment complexes and the API data provided by a meteorological office, and we extract the basic data. Based on these data, we generate a random number function for the last year of data through NCS interpolation.