Article

On the Users’ Acceptance of IoT Systems: A Theoretical Approach

Institute of Cognitive Science and Technology—National Research Council of Italy (ISTC-CNR), 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Information 2018, 9(3), 53; https://doi.org/10.3390/info9030053
Submission received: 10 January 2018 / Revised: 20 February 2018 / Accepted: 27 February 2018 / Published: 1 March 2018
(This article belongs to the Special Issue Security in the Internet of Things)

Abstract

In the near future, IoT systems will introduce extraordinary changes into our daily life. We will communicate with our domestic appliances to inform them about our preferences and goals, and they will develop the initiative and autonomy needed to be put at our service. But are we sure that we can afford all the automation they could offer? Are we able to manage it? Is it compatible with our cognitive attitudes and our actual and real goals? In this paper, we address the question of the IoT from the point of view of the user. We start by analyzing the reasons that undermine the acceptance of IoT systems, and then we propose a possible solution. The first contribution of this work is a characterization of the levels of autonomy a user can grant to an IoT device. The second contribution is a theoretical model to deal with users and to stimulate users' acceptance. By means of simulation, we show how the model works and we prove that it leads the system to an optimal solution.

1. Introduction

In the near future, every object we own will be addressable via IP, which will allow these objects to access the Internet and exchange information with the outside world. It is expected that by 2020 there will be as many as 26 billion units connected to the Internet [1]. The novelty is that this technology is not limited to classic computing devices, but also makes smart the objects that are not.
The scenarios we are going to face are unimaginable: health, education, transport, every aspect of our lives will undergo radical changes; even our own homes will become smart [2]. The fridge could tell us that it is better to throw the eggs away because they are no longer fresh, the washing machine could propose a more efficient way to wash clothes, and entire buildings could work together to save energy or other resources.
It is this very principle of “connection between things” which is the basis of the Internet of Things [3].
However, while on the one hand it is true that we have a multitude of extremely useful scenarios, there are also considerable security and privacy issues [4].
Certainly, we are not talking about an apocalyptic perspective, but if in everyday life a hacker is able to block our computer, just think of the damage they could do if they decided to block our home's doors. This problem is further amplified by the heterogeneity of the devices, which makes it more difficult to control and detect security flaws.
If it is already difficult to accept that an object possesses intelligence and can interface with us, the thought that it can turn against us, causing substantial damage, could make the spread of IoT systems even more difficult.
We then argue that a good way to address this problem is through the concept of trust [5]. The key point is in fact that users do not trust these systems; they do not know them or what they can do. The concept of trust comes spontaneously into play.
With this work, we propose a general IoT system able to adapt to the specific user and his disposition towards this technology, with the aim of (1) identifying the user's acceptance limit and (2) pushing the user to accept this technology further.
After introducing a theoretical model, we present a possible implementation in a simulative context, with the aim of showing how it works.
Being a general model, it can be applied to any IoT system.
The rest of the paper is organized as follows: Section 2 analyzes the state of the art, pointing out that a user-centric design is necessary for IoT systems; Section 3 provides a theoretical framework for trust, control, and feedback, also showing the computational model we used; Section 4 describes how we implemented the model in the simulation of Section 5; Section 6 comments on the results of the simulation; Section 7 concludes the work, summarizing all the key points.

2. Distrust in the IoT

How would we like an IoT system to be? What features should it have, and which ones should be elicited? IoT systems represent a wide variety of technologies, so it is not easy to identify their common characteristics in detail. However, at a high level, some key aspects recurrently come into play.
When we talk about the characteristics of the IoT, the main topic is that of security [6,7,8,9], in the computer-science sense. A device must be secure, and the reason is clear: if we give the green light to such a pervasive technology, able to enter deeply into every aspect of our life, it is fundamental that there are no security breaches. Are we sure that the toaster is not going to steal the money from our bank account? Can a hacker take control of my television? Security mainly relies on encryption to solve its problems.
Then privacy comes into play. In this new world, having objects exchanging impressive amounts of information, more than can concretely be processed [10], it is not clear which information will be shared and with whom [11]. We need a new way to deal with privacy, since the classical approach of authentication [12] and policies cannot work properly in such a huge and heterogeneous environment. Facing privacy is necessary, but still not enough.
A third element is trust. Usually it is applied to solve the problem of identifying trustworthy devices in a network, separating them from the malicious ones [13]. Moreover, the authors of [14] provide a review of the use of trust in IoT systems. The same authors point out that trust “helps people overcome perceptions of uncertainty and risk and engages in user acceptance”. In fact, when we have autonomous tools able to make many different decisions, and these decisions involve our own goals and results, we have to worry not just about their correct functioning (for each of these decisions) but also about their autonomous behavior and the role it plays for our purposes.
All these components are valid and fundamental for an IoT system. However, a further point should be emphasized. Although an IoT device, to be defined as such, requires only connection and interfacing with the outside world, and therefore the possibility of being addressed and of exchanging information with it, these devices are not independent systems; on the contrary, they continually interact with users and relate to them in a very strong way: the user is at the center of everything. In fact, reasoning about the goals that these objects possess, their common purpose is to make life better for users, be they the inhabitants of a house, a city, patients/doctors in a hospital, or the workers of a facility.
The user thus becomes the fundamental point in all of this. A technology can be potentially perfect and very useful but, if people do not accept it, every effort is useless and it falls out of use. It is necessary to keep in mind how much the user is willing to accept an IoT system and to what extent he wants to interface with it.
We would like to focus on this last point, the concept of “user acceptance”.
As Ghazizadeh [15] says “technology fundamentally changes a person’s role, making the system performance progressively dependent on the integrity of this relationship. … In fact, automation will not achieve its potential if not properly adopted by users and seamlessly integrated into a new task structure”.
Furthermore, Miranda et al. [16] talk about the Internet of People (IoP). In fact, they reiterate that technology must be integrated into the daily lives of users, who are right at the center of the system. They focus on the fact that IoT systems must be able to adapt to the user, taking people's context into account and avoiding user intervention as much as possible. Similarly, Ashraf [17] talks about autonomy in the Internet of Things, pointing out that in this context it is necessary to minimize user intervention.
Thus, the acceptance of a new technology seems to be the key point, which is not always obvious. It is not easy for users to understand how a complex technology like this reasons and works. Often it is not clear what it is able to do and how it does it.
So it is true that security, privacy, and trust work together to increase reliance on IoT systems. However, it is necessary to keep users at the center of this discussion.
The reasons why the user may not grant high trust levels are the fears that (a) the task is not carried out in the expected way; (b) it is not completed at all; or (c) damage is even produced. All this becomes more and more complicated if we consider that these devices can operate in a network with a theoretically infinite number of nodes! Who do they cooperate with? What are the purposes of these additional devices? We end up with a very complex system, difficult to understand and manage.
In short, the overall picture of the functions that they perform is going to become much more complicated. As a whole, the devices have great computational power and a huge amount of data available; they could be able to identify solutions that we had not even imagined. How can we be sure, in such a scenario, that these systems will realize the state of the world that coincides with what we would like? What if our computer decides to shut down because we have worked too much? What if the home heating does not turn on because the whole city is consuming too much energy? Surely, we are talking about tasks that have their usefulness, but it is not guaranteed that the concept of utility the devices possess coincides with ours.
Which goals are we interested in delegating to these systems? Are we sure that they can understand them as we do?
To this purpose Kranz [18] studies a series of use cases in order to provide some guidelines for embedding interfaces into people’s daily lives.
Economides [19] identifies a series of characteristics that an IoT system must possess in order to be accepted by users. However, he does not provide a clear methodology about how these characteristics should be estimated and computed.
What we would like is, on the one hand, a system that adapts to the user, comparing its estimates with the user's expectations. On the other hand, we would like the user to adapt to the system, which tries to make him accept increasing levels of autonomy.
We therefore propose, first of all, a division of the devices' tasks into increasing levels of autonomy. In order to operate, the devices must continuously estimate the level of autonomy that the user grants them.
In this way, the relationship between an IoT device and the user starts at a level of complexity that the user knows well and can manage, eventually moving to higher levels if the user allows it, i.e., if the trust he has towards the device is sufficient.
In all this it becomes fundamental to identify the levels of user trust. Trust therefore becomes a key concept.

3. Trust, Control, and Feedback

Consider a situation in which a given agent X needs a second agent Y to perform a task for him and must decide whether or not to rely on him. In this case, X assumes the role of trustor and Y that of trustee. The reasons why X would like to delegate that task can be different; in general, X believes that delegating the task could have some utility.
Cognitive agents in fact decide whether or not to rely on others to carry out their tasks on the basis of the expected utility and of the trust they have in those who will perform those tasks. As for the utility, it must be more convenient for the trustor that someone else carries out the task, otherwise he will do it by himself (in cases where he can do it). Here we are not interested in dwelling on this point, and for simplicity we assume that it is always convenient to rely on others, that is, the expected utility when X asks an agent Y to perform a task is always higher than if X had done it alone.
The fundamental point is that when an agent Y, cognitive or not, performs a task for me, if Y is to some extent an autonomous agent, I do not know how Y intends to complete the task, nor whether he will actually manage to do it.
In this context, the concepts of trust and control intertwine in a very special way. In fact, the more control there is, the less trust we have. Vice versa, when we trust, we need less control and we can allow greater autonomy. So although control is an antagonist of trust, somehow it helps trust formation [20].
When, in fact, the level of trust is not enough for the trustor to delegate a task to the trustee, control helps to bridge this gap, so that a lower level of trust is required. The more I trust an agent Y, the more autonomy I will grant him to carry out actions. But if I do not trust him enough, I need to exercise control mechanisms over his actions.
For instance, it was shown in [21] that when the users' experience with autonomous systems involves completely losing control of the decisions, the trust they have in these systems decreases. It is then necessary to lead the user to gradually accept ever-greater levels of autonomy.
Control can be exercised in two different ways. It is in fact possible to:
  • Ensure that the task is successfully accomplished and that the desired goal is achieved. This can be done by asking the trustee for feedback on its work. The feedback must be provided before the operation ends, in order to be able to modify that work.
  • Actively handle the possible unforeseen event (intervention).
In this sense, feedback is a lighter (less invasive) form of control, which may or may not result in the active involvement of the trustor. It has a fundamental role in overcoming the borderline cases in which the trust level would not be enough to delegate a task, but the trustor delegates it anyway thanks to this form of control. In the end, it can result in the definitive acceptance of the task (or in its rejection, which then results in trustor intervention and a consequent trust decrement).

3.1. Trust: A Multilayered Concept

Different cognitive ingredients can be used to produce a trust evaluation. The first one is direct experience, in which the trustor X evaluates the trustee Y exploiting the past interactions he had with Y. The clear advantage of this approach is that it uses direct information; there is no intermediary, so the evaluation could be more reliable (we are supposing that X is able to evaluate Y's performance better than others could). However, it requires a certain number of interactions to produce a good evaluation, and initially X would have to trust Y without any clues for doing so (the cold-start problem). Please consider that this evaluation could depend on many different factors, and that X is able to perceive their different contributions.
It is also possible to rely on second-hand information, exploiting recommendations [22] or reputation [23]. In this case, there is the advantage of having a ready-to-use evaluation, provided that a third agent Z in X's social network knows Y and interacted with Y in the past. The disadvantage is that this evaluation introduces uncertainty due to Z's ability and benevolence: how much can I trust Z as an evaluator?
Lastly, it is possible to use some mechanisms of knowledge generalization, such as the categories of belonging [24]. A category is a general set of agents (doctors, thieves, dogs, and so on) whose members have common characteristics, determining their behavior or their ability/willingness. If I am able to associate Y with a category and I know the average performance of the members of that category concerning the specific task of interest to me, I can exploit this evaluation to decide whether to trust Y. The advantage is that I can evaluate every node of my network, even if no one knows it. The disadvantage is that the level of uncertainty due to this method can be high, depending on the variability inside the category and on its granularity. A practical example in the context of the IoT is that I could believe that the devices produced by a given manufacturer are better than the others, and I could then choose to delegate my task to them.
Since in this work we are not strictly interested in how to produce trust evaluations but in their practical applications, we rely just on direct experience. This avoids introducing further uncertainty caused by the evaluation method.
In this paper, trust is taken into account under two aspects. The first is that of autonomy. Similarly to [25] (in the cited work the authors use a wheelchair, which is not an IoT device but an autonomous system endowed with smart functionalities and different autonomy levels), tasks are grouped/categorized into several autonomy levels. A user, based on his personal availability, will assign a certain initial level of autonomy to a device. This level can change positively or negatively over time, depending on the interactions that the user has.
It becomes therefore essential to understand what a device can do, based on its current level of autonomy. The first contribution of this work is thus the identification and classification of the autonomy levels at which an IoT device can operate. Applying the concepts of trust and control defined by Castelfranchi and Falcone [20], we defined five levels, numbered from 0 to 4:
  • Level 0: to operate according to its basic function; for example, a fridge will just keep things cool. This implies that it is not going to communicate with other devices and it is not going beyond its basic task. Proceeding in the metaphor of the fridge, it cannot notice that something is missing; it does not even know what it contains.
  • Level 1: to communicate with other agents inside and outside the environment, but just in a passive way (i.e., giving information about the current temperature).
  • Level 2: to autonomously carry out tasks, but without cooperating with other devices; again, thinking of a fridge, if a product needs a temperature below 5 degrees and another one above 7, it can autonomously decide which temperature to set, always keeping in mind that the main goal is to maximize the user’s utility.
  • Level 3: to autonomously carry out tasks cooperating with other devices. The cooperation is actually a critical element, as it involves problems like the partners’ choice, as well as recognition of merit and guilt. Although we are not going to cover this part, focusing just on the device which started the interaction, it is necessary to point it out. Again, thinking of the fridge, if it is not able to go below a certain temperature because it is hot in the house, it can ask the heating system to lower the temperature of the house. This needs a complex negotiation between two autonomous systems. They need to understand what the user priority is; this is not so easy to solve. Furthermore, it must also be considered that the systems in question must be able to communicate, using common protocols. This can happen if the devices use a standard of communication, enabling interoperability. Smart houses are valid examples of communication between different devices (differently from smart houses, in this work there is no centralized entity. We deal with an open system, in which the intelligence is distributed on the individual devices.).
  • Level 4: this level is called over-help [26], which stands for the possibility of going beyond the user’s requests, proposing solutions that he could not even imagine: the same fridge could notice from our temperature that we have the fever, proceeding then to cancel the dinner with our friends and booking a medical examination. This type of interaction may be too pervasive.
It is easy to understand that these kinds of tasks require an increasing level of autonomy. Level 0 is the starting level: basically, the device limits itself to the elementary functions it is supposed to perform, so there is no issue for the user. But it is not certain that the user will accept the next levels.
A trust value is associated with each level i, with i going from 0 to 4, representing the user's disposition towards the tasks of that level. The trust values for autonomy are defined as real numbers in the range [0, 1].
These trust values are related to each other: the higher level i+1 always has a trust value equal to or less than that of the previous level i. Moreover, we suppose that there is influence between them, so that when a device selects a task belonging to level i and this is accepted, both the trust value on level i and the one on the next level i+1 will increase, according to Formulas (1) and (2). Here the new trust value at level i, newAutonomyTrust_{L_i}, is computed as the sum of the old trust value plus the constant increment. Similarly, the new trust value on level i+1, newAutonomyTrust_{L_{i+1}}, is computed as the sum of the old trust value plus half of the constant increment.
Note that level i+1 exists only if i is smaller than 4; when i is equal to 4, Formula (2) is not taken into consideration.
newAutonomyTrust_{L_i} = autonomyTrust_{L_i} + increment    (1)
newAutonomyTrust_{L_{i+1}} = autonomyTrust_{L_{i+1}} + increment/2    (2)
When instead there is a trust decrease, because the task is interrupted, the trust in the following levels is also decremented. Formula (3) describes what happens to the autonomy trust value of level i, while Formula (4) shows what happens to the higher levels:
newAutonomyTrust_{L_i} = autonomyTrust_{L_i} - penalty    (3)
∀ n, i < n ≤ ML:    newAutonomyTrust_{L_n} = autonomyTrust_{L_n} - penalty / 2^(n-i)    (4)
In Formula (4), ML is the index of the maximal level defined in the system; here, in particular, it is equal to 4.
The two variables increment and penalty are real values in the range [0, 1]. According to [27], we chose to give a higher weight to negative outcomes than to positive ones, as trust is harder to gain than to lose.
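To make the update rules concrete, the following is a minimal Python sketch of Formulas (1)–(4). The paper's simulations were written in NetLogo, so this is only an illustration: the function names, the explicit clamping to [0, 1], and the use of the experiments' constant values are our own assumptions.

```python
# Illustrative sketch of the autonomy-trust updates (Formulas (1)-(4)).
# Names, the clamping to [0, 1], and the constants are assumptions of ours;
# the increment/penalty values are the ones used later in the experiments.

MAX_LEVEL = 4      # autonomy levels are numbered 0..4
INCREMENT = 0.05
PENALTY = 0.1      # negative outcomes weigh more than positive ones

def clamp(x: float) -> float:
    """Keep a trust value inside the [0, 1] range."""
    return max(0.0, min(1.0, x))

def accept_autonomy(trust: list, i: int, increment: float = INCREMENT) -> None:
    """Task of level i accepted: Formula (1) on level i, Formula (2) on level i+1."""
    trust[i] = clamp(trust[i] + increment)
    if i < MAX_LEVEL:                      # level i+1 exists only if i < 4
        trust[i + 1] = clamp(trust[i + 1] + increment / 2)

def interrupt_autonomy(trust: list, i: int, penalty: float = PENALTY) -> None:
    """Task of level i interrupted: Formula (3) on level i, Formula (4) on the higher levels."""
    trust[i] = clamp(trust[i] - penalty)
    for n in range(i + 1, MAX_LEVEL + 1):  # penalty attenuated by 2^(n - i)
        trust[n] = clamp(trust[n] - penalty / 2 ** (n - i))
```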
What has been said so far concerns the aspect of autonomy. However, it is necessary to take into consideration that a device can fail when performing a task. Failures are due to multiple causes, both internal and external to the device itself. A device can fail because a sensor made a wrong measurement, because it did not manage to perform the requested action in time, because it did something differently from what the user expected, or because a partner device was wrong. All of this is modeled through the dimension called efficiency.
What matters to us in this case is that each device has a certain error probability at each level. Although these values are expected to grow as the level increases, this is not necessarily so; there may be mistakes that affect lower-level tasks but not upper-level tasks.
It is therefore necessary to have a mechanism able to identify which levels create problems, without necessarily blocking the subsequent levels.
Depending on the device's performance, the trust values concerning efficiency, defined as real numbers in the range [0, 1], are updated in a similar way to autonomy. Given that we are still dealing with trust, and that both efficiency and autonomy are modeled in the same way, for the sake of simplicity we used the same parameters as for autonomy: with a positive interaction, the new trust value newEfficiencyTrust_{L_i} is computed as the sum of the old trust value efficiencyTrust_{L_i} and increment while, in case of failure, it is decreased by penalty. Formulas (5) and (6) describe this behavior:
newEfficiencyTrust_{L_i} = efficiencyTrust_{L_i} + increment    (5)
newEfficiencyTrust_{L_i} = efficiencyTrust_{L_i} - penalty    (6)
Differently from autonomy, for efficiency we change just the trust value of the considered level.
The described trust model works in a similar way for the user. The only difference is that the user has his own constants to update trust: user-increment and user-penalty, defined as real numbers in the range [0, 1]. Thus, to obtain the user's model, it is just necessary to replace increment with user-increment and penalty with user-penalty in Formulas (1)–(6).
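The efficiency updates of Formulas (5) and (6), and the user-side variant that only swaps the constants, can be sketched in the same style (again, the names and the clamping to [0, 1] are our own assumptions):

```python
# Illustrative sketch of the efficiency-trust updates (Formulas (5)-(6)).
# Only the trust value of the considered level i changes.

def update_efficiency(trust: list, i: int, success: bool,
                      increment: float = 0.05, penalty: float = 0.1) -> None:
    delta = increment if success else -penalty
    trust[i] = max(0.0, min(1.0, trust[i] + delta))

# The user's own model applies the same formulas with his personal constants
# (in the experiments of Section 5 they coincide with the devices' constants):
USER_INCREMENT = 0.05
USER_PENALTY = 0.1
```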

4. The Model

In the realized model, a single user U is located in a given environment and interacts with a predefined set of IoT devices, which can perform different kinds of actions. The basic idea is that the devices will exploit their interactions with U in order to increase the autonomy U grants them.
The simulation is organized in rounds, called ticks, and on each tick U interacts with all of these devices.
The user U has a certain trust threshold for the various autonomy levels. First of all, the devices need to identify this limit value and operate within its range, periodically trying to raise it so as to gain ever-increasing autonomy.
When U has a positive experience with a device on a given autonomy level it can access, the trust U has in that level increases. We argue that even the trust in the very next level will increase. When this trust value overcomes a threshold, the devices may attempt to perform tasks belonging to that level. In this case the user, given his trust value on that level, can:
  • Accept the task: if the trust value is enough, it simply accepts the task.
  • Ask for feedback: if the trust value is within a given range of uncertainty, and the user is not sure whether to accept the task or not, it then asks for feedback. This feedback will be accepted with a given probability.
  • Refuse the task: if the trust value is too low, the task is blocked.
This is what happens for autonomy. The efficiency dimension has a similar behavior, with the difference that if the trust on a given level increases, it will not affect the higher levels; it is not guaranteed that if a device performs properly on a set of tasks, it will do the same on the higher level, nor is it true that if it performs badly on a level, it will do the same on the higher one. Each level is completely independent of the others. Again, given the specific trust value on that level, the user can:
  • Accept the task.
  • Ask for a feedback.
  • Refuse the task.

4.1. The User

In the simulative environment, we have a single user U dealing with a number of IoT devices. He uses them to pursue his own purposes, but grants them just a given trust level, which limits their autonomy.
While dealing with a device D, U will update his trust values concerning D on each task level, both for efficiency and for autonomy. His decisions to accept, ask for feedback, or refuse a task depend on two internal thresholds, th-min and Th-max (equal for all the agents). In particular, when he asks for feedback, it will be accepted with a given acceptance probability, a specific value characterizing the individual user.
The trust values will be updated, increasing them with the constant user-increment, or decreasing them with user-penalty.
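As a reference for the sketches that follow, the user described in this subsection can be summarized in a small container. The field names and default values are ours (the defaults mirror the experimental settings of Section 5); the original simulations were implemented in NetLogo.

```python
from dataclasses import dataclass

# Hypothetical container for the user U of Section 4.1; field names are ours.
@dataclass
class UserState:
    autonomy_trust: list        # one trust value in [0, 1] per autonomy level (0..4)
    efficiency_trust: list      # one trust value in [0, 1] per level
    th_min: float = 0.3         # lower acceptance threshold (th-min)
    th_max: float = 0.6         # upper acceptance threshold (Th-max)
    p: float = 0.5              # probability of accepting a feedback on autonomy
    user_increment: float = 0.05
    user_penalty: float = 0.1
```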

4.2. The Devices

There can be a variable number of devices in the world. All of them have two purposes. The first one is to pursue the user's tasks, satisfying his needs (even if he has not explicitly requested them). The second one consists of trying to increase the user's trust values, so that they can operate with a higher level of autonomy, performing increasingly smart and complex functions for the user.
First of all, in order to understand at which levels it can work, each device needs to estimate the user's trust values. On each turn, the device identifies which tasks it is allowed to perform; then it selects a task belonging to a specific level, with a probability proportional to the estimated trust: the more trust there is on a level, the more likely it is that a task of that level will be selected. Then it tries to perform that task. Now the user can interact or not with the device. If the device D selected a task belonging to a sufficiently trusted level, the task will be accepted; if the level is not trusted enough, it will be rejected.
But there is an intermediate interval, halfway between acceptance and rejection. In this interval, if U is not sure what to do, then it will ask the device for feedback, which will explain what it is doing. The feedback will result in acceptance of the task or in its rejection (see Section 4.4 below).
If the task is accepted, then U also checks D's performance, which can be positive or negative. Each device has in fact a given error probability linked to the specific levels. This probability generally increases with the level, as tasks with greater autonomy usually imply greater complexity, and so it is more difficult to achieve the result. But this is not always true: for example, some errors may occur at a specific level but not at others.
Summing up, the device is characterized by the following elements (a minimal container sketch follows this list):
  • The user’s trust estimation on the various levels;
  • Its efficiency estimation;
  • Error percentage on each level: this is an intrinsic characteristic of the device; neither the device itself nor the user can directly access it, they can only try to estimate it.
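This characterization can likewise be captured by a small container, used in the later sketches. Again, the names are ours; the error probabilities are expressed here as values in [0, 1], whereas the experiments list them as percentages.

```python
from dataclasses import dataclass

# Hypothetical container for a device of Section 4.2; field names are ours.
@dataclass
class DeviceState:
    autonomy_est: list     # estimate of the user's autonomy-trust values, per level
    efficiency_est: list   # estimate of the user's efficiency-trust values, per level
    error_prob: list       # intrinsic per-level error probability in [0, 1];
                           # not directly observable, it can only be estimated from outcomes
```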

4.3. Task Selection

Once a precise task classification has been provided, it is necessary to identify a methodology for correctly selecting a task. It is fundamental that the devices select tasks (a) to which the user is well disposed, therefore with a degree of autonomy that falls within the allowed limits; and (b) for which they can guarantee a certain level of performance.
To take both these constraints into account, the devices compute what we call the global trust vector, computing level by level the average of the trust values of autonomy and efficiency. In order for a task to be selected, the corresponding trust value must be above a certain threshold. Generally, this threshold is equal to 0.5, but when a device is interrupted due to insufficient autonomy, the threshold is raised to 0.75 for a certain period.
The tasks presented to the device are multiple and of various natures; it is not the same task performed with different autonomy. So it can happen that tasks of different levels are needed. In general, however, the devices try to perform sparingly the tasks that they are not certain will be accepted by the user. The selection of the task level takes place in a probabilistic manner, with probability proportional to the overall trust estimated at that level.
Let us give an example to clarify this point. Suppose that the device D estimates that the global trust values are 1 for level 0, 0.7 for level 1, and 0 for levels 2, 3, and 4. Given that only levels 0 and 1 exceed the threshold of 0.5, D can only select a task belonging to these two levels. In particular, proportionally to the trust values, there is a 59% probability that it will select a task belonging to level 0, and a 41% probability that it will select a task belonging to level 1.
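A minimal sketch of this selection rule, under the assumption that "proportional" means sampling the eligible levels with weights equal to their global trust (the function name is ours):

```python
import random

def select_task_level(autonomy_est: list, efficiency_est: list, threshold: float = 0.5):
    """Pick a task level with probability proportional to the estimated global trust,
    restricted to the levels whose global trust reaches the threshold
    (0.5 normally, temporarily 0.75 after an interruption due to insufficient autonomy)."""
    global_trust = [(a + e) / 2 for a, e in zip(autonomy_est, efficiency_est)]
    eligible = [(lvl, t) for lvl, t in enumerate(global_trust) if t >= threshold]
    if not eligible:
        return None                       # no level is trusted enough: stay idle
    levels, weights = zip(*eligible)
    return random.choices(levels, weights=weights, k=1)[0]

# Worked example from the text: global trust [1, 0.7, 0, 0, 0] with threshold 0.5 gives
# level 0 with probability 1/1.7 (about 59%) and level 1 with probability 0.7/1.7 (about 41%).
```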

4.4. Acceptance, Interruption, and Feedback

Here we analyze the user’s actions in response to the device’s task choice. As already mentioned, the user has trust values in the autonomy levels of the IoT devices, but he must also take into account the efficiency aspect.
The user will check the two trust values and compare them with the thresholds. If the specific value is lower than the first acceptance threshold (th-min), the task is interrupted. If it is greater than the second acceptance threshold (Th-max), the task is accepted. Between the two thresholds, however, a situation of uncertainty arises: the user U does not know whether to accept the task or not. At this point, U asks the device for feedback, and he will decide based on the outcome of this feedback:
  • If feedback on the autonomy is requested, the device will explain what it is doing;
  • If feedback on the efficiency is requested, the device will explain what the final result of the action it is performing will be.
The feedback is a fundamental element of this complex system. Thanks to it, it is possible to overcome the limit situations that the devices need to face.
Feedback will be accepted with a certain probability. In the case of autonomy, this probability p is an intrinsic characteristic of the user; it represents his willingness to accept a new task with greater autonomy. Regarding the feedback on efficiency, it depends on the level of trust that the user has in the efficiency of the device. In particular, the probability c of accepting the feedback increases linearly from 0% at th-min to 100% at Th-max.
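A hedged sketch of this three-way decision rule follows; the threshold values are the ones used in the experiments, and the function names are ours.

```python
import random

TH_MIN, TH_MAX = 0.3, 0.6   # th-min and Th-max as set in the experiments

def efficiency_feedback_prob(trust: float, th_min: float = TH_MIN, th_max: float = TH_MAX) -> float:
    """Probability c of accepting an efficiency feedback: 0 at th-min, 1 at Th-max, linear in between."""
    return min(1.0, max(0.0, (trust - th_min) / (th_max - th_min)))

def decide(trust: float, feedback_accept_prob: float,
           th_min: float = TH_MIN, th_max: float = TH_MAX) -> str:
    """Three-way decision for one trust dimension: refuse, accept, or resolve via feedback."""
    if trust < th_min:
        return "refuse"
    if trust > th_max:
        return "accept"
    # Uncertainty zone: ask for feedback; the explanation is accepted with the given probability
    return "accept" if random.random() < feedback_accept_prob else "refuse"

# For autonomy, feedback_accept_prob is the user's intrinsic probability p;
# for efficiency, it is efficiency_feedback_prob(trust).
```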

4.5. The User-Device Interaction

In this section we focus on how users and devices interact, analyzing their behavior and the actions they can perform.
Figure 1 represents the actions that the user takes (nodes) according to the corresponding triggering conditions (edges). In order to simplify the reading, some edges and their corresponding descriptions have been colored in blue. The green nodes represent the positive outcomes of the system (the task acceptance or the successful end of the task), the red ones represent the negative situations (the task rejection or the task failure), while the yellow nodes stand for feedback requests.
Starting from the idle state, when a device performs a task τ the user checks its internal state, that is, its trust values for the autonomy ta and for the efficiency te, concerning the level of the task τ. These values trigger the different actions described in Section 4.4: to accept the task; to refuse the task; to ask for feedback for the autonomy; to ask for feedback for the efficiency.
Concerning the feedback, it will involve the acceptance or the refusal of the task with a probability equal to p for the autonomy and c for the efficiency. Both these probabilities are described in Section 4.4.
Similarly to Figure 1, the graph in Figure 2 represents the behavior of the devices. Starting from the idle state, the device selects a task according to the user model UM, which is the estimation of the user's internal state in terms of the trust values characterizing autonomy and efficiency. Once a task is selected, it starts executing it. If the user does not interfere, the task is completed. Otherwise it can be blocked, or there can be a feedback request, which will result in the acceptance of the task or in its rejection. Notice that when the user stops a device, the device does not explicitly know whether this is due to autonomy or efficiency, but it can deduce it, since it has an estimate of the user's trust values.
The trust update both for the user and the device is done according to the principles and formulas of Section 3.1.
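Putting the previous sketches together, one tick of the user–device interaction could look roughly as follows. This reuses UserState, DeviceState, and the helper functions sketched above; the exact ordering of the autonomy and efficiency checks and the omission of the device-side mirror updates of its estimates are simplifications of ours, not details taken from the paper.

```python
import random

def interaction_tick(user: "UserState", device: "DeviceState") -> None:
    """One simplified interaction round between the user and a single device."""
    level = select_task_level(device.autonomy_est, device.efficiency_est)
    if level is None:
        return                                              # the device stays idle
    # Autonomy check: refuse, accept, or feedback accepted with probability p
    if decide(user.autonomy_trust[level], user.p, user.th_min, user.th_max) == "refuse":
        interrupt_autonomy(user.autonomy_trust, level, user.user_penalty)
        return                                              # task blocked
    accept_autonomy(user.autonomy_trust, level, user.user_increment)
    # Efficiency check, with feedback accepted with the linearly growing probability c
    c = efficiency_feedback_prob(user.efficiency_trust[level], user.th_min, user.th_max)
    if decide(user.efficiency_trust[level], c, user.th_min, user.th_max) == "refuse":
        return                                              # task blocked; any penalty is omitted here
    # Task executed: success depends on the device's intrinsic error probability
    success = random.random() >= device.error_prob[level]
    update_efficiency(user.efficiency_trust, level, success,
                      user.user_increment, user.user_penalty)
```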

5. Simulations

The simulations were realized using NetLogo [28], an agent-based framework. Given the model described above, we analyzed four different scenarios. The aim was to understand whether the described algorithm works and, interacting with a given user having a specific initial disposition towards the devices, actually leads to the user's acceptance of new autonomy levels.
Therefore, we investigate a series of concrete scenarios that can happen while interacting with IoT systems, observing their evolution and the final level of autonomy achieved.
In the first of them, we check what happens when there is no error, that is, we assume that the devices are able to obtain the expected result, producing the state of the world the user wants. Here the efficiency of the devices will always be maximal, therefore the relevant aspect will be that of autonomy.
In later experiments, we considered that the execution of a task can be affected by errors: a sensor reporting wrong information, a partner device making a mistake, a different way to get the same result, or even a delay in getting the result can be considered by the user as a mistake. Here we focus on the relationship between autonomy and efficiency.
For convenience, in the following experiments we will indicate the values of trust or error at the various levels in the form [x_0 x_1 x_2 x_3 x_4], in which the subscript stands for the level.
As we want to understand what the final result of the model is, we need to grant the system enough time to reach it; we are not interested in the intermediate states. To do so, the experiments' duration is 1000 runs, and we show the final trust values after that period. Moreover, in order to eliminate the small differences randomly introduced in the individual experiments, we show below the average results over 100 simulations with the same setting. In particular, we analyze the aggregate trust values that the user has (the values estimated for each device are aggregated into a single value) in autonomy and efficiency.
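The experimental protocol can be sketched as a thin wrapper around the interaction tick above. The paper's experiments were run in NetLogo, so this Python outline, with a hypothetical make_world factory producing a fresh user and devices for a chosen setting, is only meant to make the 1000-tick and 100-repetition averaging explicit.

```python
def run_experiment(make_world, ticks: int = 1000, repetitions: int = 100) -> list:
    """Average the user's final autonomy-trust vector over repeated runs with the same setting."""
    totals = [0.0] * 5
    for _ in range(repetitions):
        user, devices = make_world()              # fresh user and devices for the chosen setting
        for _ in range(ticks):
            for device in devices:                # on each tick U interacts with all the devices
                interaction_tick(user, device)
        totals = [t + v for t, v in zip(totals, user.autonomy_trust)]
    return [t / repetitions for t in totals]
```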

5.1. First Experiment

The first experiment analyzes the case in which the devices make no mistake. In this situation, we just focus on the aspect of autonomy, while the efficiency plays a secondary role.
Experimental setting:
  • Number of devices: 10
  • Error probability: [0 0 0 0 0]
  • Penalty = user-penalty = 0.1
  • Increment = user-increment = 0.05
  • User profile = (cautious, normal, open-minded)
  • Feedback acceptance probability: 0%, 25%, 50%, 75%, 100%
  • Duration: 1000 time units
  • th-min = 0.3
  • Th-max = 0.6
  • Initial trust values for efficiency: [0.5 0.5 0.5 0.5 0.5]
Before discussing the experiment, let us comment on the choice of the simulation parameters, especially for the user.
We did not investigate different values of penalty and increment (and the corresponding user-penalty and user-increment), but we made a few considerations for determining their values. First, they need to be sufficiently small to provide a stable trust evaluation, as high values would lead to an unstable evaluation, too dependent on the last experience. Second, since humans are more influenced by negative outcomes than positive outcomes [29], penalty and user-penalty should be respectively greater than increment and user-increment. Third, as the devices need to estimate the user’s trust values, it is very useful that their parameters coincide. A more complete solution would require that the devices estimate the user’s values at runtime. However, this is beyond the aims of the experiment.
As for the user profiles, these affect the initial levels of confidence in the autonomy of the devices (the corresponding initial vectors are also collected in the sketch after this list):
  • The cautious user is the most restrictive; its initial values are [1 0.75 0.5 0.25 0]. This means that at the beginning only the first two task levels can be executed.
  • The normal user has slightly higher values: [1 1 0.75 0.5 0.25]. With this user it is possible to perform the first 3 task levels.
  • The last type of user is the open-minded: [1 1 1 0.75 0.5]. Since this user is the most open towards the devices, it will be possible to immediately execute the first 4 levels of the task, but not the last one.
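For reference, the three profiles correspond to the following initial autonomy-trust vectors; the values are taken from the list above, while the dictionary form is ours.

```python
# Initial autonomy-trust vectors per user profile (levels 0..4), as listed above.
USER_PROFILES = {
    "cautious":    [1.0, 0.75, 0.5, 0.25, 0.0],
    "normal":      [1.0, 1.0, 0.75, 0.5, 0.25],
    "open-minded": [1.0, 1.0, 1.0, 0.75, 0.5],
}
```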
In the following experiment, we are going to show what happens to the cautious user, as it is the most restrictive. Then, if necessary, we will show the differences for the other users.
We chose to set the efficiency trust values to 0.5, which represents an intermediate condition. The user does not possess any clues nor has an internal predisposition that could lead him to trust more or less a specific device on a specific level. Therefore, he needs to build experience to calibrate these values.
Concerning the choice of th-min and Th-max, there is only the constraint that the first should be smaller than the second. We chose 0.3 and 0.6, respectively, in order to divide the trust degree in three intervals of similar size.
In Tables 1 and 2, we can see what happens to the user after the interaction with the devices. Each row represents the trust values that a user with a given percentage of feedback acceptance has on the five task levels.
As we can see from the values of autonomy and efficiency (Table 1 and Table 2, respectively), in this situation the designed algorithm allows the optimal trust levels to be reached.
This is just the ideal case, but it is also proof that the whole mechanism works. The devices can estimate the user's trust values and first try to adapt to them. After that, there is a continuous phase of adaptation, both for the devices and for the user: the devices continuously try to modify the user's trust values. In the end, it will be possible to execute tasks belonging to any level.
It is worth noting that the final results are independent of the percentage of feedback acceptance and of the user profile. These parameters do not influence the final value, but only the time needed to reach it. Those shown are in fact the final results, after 1000 runs; we did not analyze the way the trust levels change during this time window. The feedback acceptance probability for autonomy influences the speed at which these values are reached, so that a "more willing to innovate" user will reach those values first. For instance, Table 3 shows what happens in the first experiment after only 250 runs. Here we can see significant differences, due precisely to the fact that users with a lower feedback acceptance probability need more time to reach the final values.
After a sufficiently long time, they all will converge, reaching the same final value; the ending point is always the same.

5.2. Second Experiment

In this second experiment, the perfect behavior of the previous case changes in the presence of errors. In this specific experiment we made the assumption that the error probability increases as the task level increases: starting with 0% at the initial level, as the device is supposed to perform its basic functions correctly, it rises up to a maximum of 20% at the last level. This makes sense because the device is going to perform increasingly complex tasks, sometimes requiring the collaboration of other devices; however, it is not guaranteed that it always works this way, and other types of error may occur.
The experimental setting is the same as before; we just changed the error probability:
  • Number of devices: 10
  • Error probability: [0 5 10 15 20]
  • Penalty = user-penalty = 0.1
  • Increment = user-increment = 0.05
  • User profile = (cautious, normal, open-minded)
  • Feedback acceptance probability: 0%, 25%, 50%, 75%, 100%
  • Duration: 1000 time units
  • th-min = 0.3
  • Th-max = 0.6
  • Initial trust values for efficiency: [0.5 0.5 0.5 0.5 0.5]
The consequence of introducing errors is that the trust in the devices' efficiency decreases as the error increases, as shown in Table 4. As far as autonomy is concerned (Table 5), we would have expected it to reach maximum values, but it does not. Sometimes, in fact, a device makes mistakes repeatedly on level 4. If this occurs so many times as to reduce the confidence in its efficiency below the th-min threshold, the user will block all future execution attempts of that task level for the specific device. As the level is no longer performed, its trust in autonomy will also remain low.
Concerning the user profiles, they influence the final trust value in autonomy. Since they start from slightly higher values, at the end of the simulation they will also reach higher values. For example, Table 6 shows the autonomy values when the user is open-minded.

5.3. Third Experiment

As already mentioned above, it is not necessarily the case that the error increases with the levels; it could be independent of them. Different kinds of errors affect the device performance differently.
In this third experiment, we simulate a malfunctioning sensor, compromising low-level functionalities. Specifically, we introduce a very high error probability at level 0 and we check what happens. Concerning the experimental setting, again, all we change is the error probability:
  • Number of devices: 10
  • Error probability: [50 5 10 15 20]
  • Penalty = user-penalty = 0.1
  • Increment = user-increment = 0.05
  • User profile = (cautious, normal, open-minded)
  • Feedback acceptance probability: 0%, 25%, 50%, 75%, 100%
  • Duration: 1000 time units
  • th-min = 0.3
  • Th-max = 0.6
  • Initial trust values for efficiency: [0.5 0.5 0.5 0.5 0.5]
In this experiment we want to understand whether a low level error somehow affects the upper levels.
Table 7 shows that the user lowers the trust values in the devices for level 0, while the other levels are not affected by this decrease in performance. Therefore, the values in the other levels remain the same as the previous experiment. Table 8 shows that the values of autonomy are not affected by the error.
Even in this case, as in the previous experiment, higher initial user trust values imply higher final trust values for the autonomy.

5.4. Fourth Experiment

This time we are in the presence of a high error rate, but at a higher level. This kind of error could be due to a wrong choice in selecting a partner for a collaborative task.
We expected this negative performance to heavily influence the autonomy.
Here is the experimental setting:
  • Number of devices: 10
  • Error probability: [0 5 10 50 20]
  • Penalty = user-penalty = 0.1
  • Increment = user-increment = 0.05
  • User profile = (cautious, normal, open-minded)
  • Feedback acceptance probability: 0%, 25%, 50%, 75%, 100%
  • Duration: 1000 time units
  • th-min = 0.3
  • Th-max = 0.6
  • Initial trust values for efficiency: [0.5 0.5 0.5 0.5 0.5]
Let us start with the case in which the user is cautious. Here it happens that the high error probability on level 3 almost immediately blocks the execution of the tasks of level 4 (Table 9). Not being able to experience this level, the user’s trust in the efficiency (Table 10) remains steady at 0.5, while that in autonomy slightly increases due to the execution of level 3 (even if it is executed just a few times).
With regard to the other user profiles, there is no variation in the efficiency, while the autonomy tends to assume higher values, since it starts from higher values. For instance, Table 11 shows the case in which the user is open-minded.

6. Discussion

The experiments we proposed analyze four situations of particular interest, with the aim of verifying the behavior of the theorized model. The first experiment proves that in the absence of errors, and therefore in ideal conditions, it is possible to reach the maximum levels of autonomy and efficiency. This depends on the fact that in the model we assumed that users have no constraint on their confidence towards the devices, as long as the devices are shown to perform correctly. In other words, there is no implicit limitation impeding the increase of trust when the devices perform well; this is clearly expressed by Formulas (1)–(6) in Section 3.1, regulating the dynamics of trust. Of course, this model could be further extended, making it more realistic, by considering that some users could have intrinsic limitations against a too-strong autonomy of the devices.
Then we analyzed the factors affecting the system, trying to understand what effect they have and if they represent a constraint for autonomy.
The first factor is efficiency. It has a very strong effect: in the presence of a high error rate, some tasks are no longer performed. In the case of low-level tasks, there is no influence on the next levels. However, if the error concerns the highest levels, this can also prevent the highest levels of autonomy from being reached.
Another interesting factor is the initial user profile. Its relevance is due to the fact that, in the presence of error, a more open profile makes it possible to reach slightly higher levels of autonomy, precisely because these values are higher at the beginning. It is important to underline that there could be many more structural differences between the typologies of users than the ones we chose; these differences could be integrated as cognitive variables that influence the outcome, reducing, with respect to the results shown, the acceptance of the system. Given the absence of real data, in this work we decided to model the different user profiles based only on their initial availability. However, we plan to integrate this aspect in future works.
The last factor is the feedback acceptance probability for autonomy, a characteristic of the specific user. From the final results, it might seem that this factor has no relevance, but that is not the case. As we have shown (Table 3), the feedback acceptance probability for autonomy influences the speed at which the corresponding final trust values are reached, so that a "more willing to innovate" user will reach those values first.

7. Conclusions

In this work, we propose a model for the users' acceptance of IoT systems. In fact, while the current literature focuses on security and privacy aspects, very little effort has been devoted to the user's point of view and to his/her interaction with IoT systems. This is actually a key topic, as even the most sophisticated technology needs to be accepted by users, otherwise it simply will not be used.
The model we proposed uses the concepts of trust and control as a starting point, with particular reference to the feedback.
The first contribution of this work is a precise classification of the tasks an IoT device can do according to the autonomy the user grants it. We defined five levels of autonomy, depending on the functionalities a device has; the execution of a task belonging to a certain level assumes that it is also possible to execute (at least as far as autonomy is concerned) the tasks of the previous levels.
Based on this classification, we provided a theoretical framework for the device–user relationship, formalizing their interaction. It is in fact a complex interaction: on the one hand, the device must adapt to the user; on the other hand, it must ensure that the user adapts to it. The realized model responds to these needs.
We proved this by means of simulation, implementing the proposed model and showing that it works and that it enhances the user's trust in the devices and, consequently, the autonomy the devices have.
In a further step, we tested the model in the presence of various kinds of errors, due to different causes. In particular, we considered three kinds of error: an incremental one, in which the error probability increases simply because the complexity of the task increases; a hardware error (for instance a non-functioning sensor or actuator), which in our case influences low-level tasks; and an error due to the cooperation with other devices (wrong partner choice, wrong coordination, etc.), affecting the penultimate level. Even in these cases the model works; however, in the last case some limits can arise on the execution of high-level tasks.
The entire work provides some hints and interesting considerations about the user's acceptance of IoT systems. Designers should keep this analysis in mind in the design phase. It is worth noting that these results have been obtained by focusing not on the specific characteristics of the device, intrinsic to its nature and bound to a specific domain, but on what it is authorized to do based on the autonomy granted to it. This means that these results are applicable to IoT systems in general, regardless of the domain.

Acknowledgments

This work is partially supported by the project CLARA-CLoud plAtform and smart underground imaging for natural Risk Assessment, funded by the Italian Ministry of Education, University and Research (MIUR-PON).

Author Contributions

Given that two authors contributed to this work, it is difficult to provide a clear distinction of their individual contributions. For this research article, Rino Falcone mainly worked on the theoretical model, and Alessandro Sapienza focused on the computational model and on the simulations. Both contributed to the analysis of the results and to the drafting of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Internet of Things Installed Base Will Grow to 26 Billion Units by 2020. Gartner Press Release. 2013. Available online: www.gartner.com/newsroom/id/2636073 (accessed on 1 December 2017).
  2. Lin, H.; Bergmann, N.W. IoT privacy and security challenges for smart home environments. Information 2016, 7, 44. [Google Scholar] [CrossRef]
  3. Atzori, L.; Iera, A.; Morabito, G. The internet of things: A survey. Comput. Netw. 2010, 54, 2787–2805. [Google Scholar] [CrossRef]
  4. Medaglia, C.M.; Serbanati, A. An overview of privacy and security issues in the internet of things. In The Internet of Things; Springer: New York, NY, USA, 2010; pp. 389–395. [Google Scholar]
  5. Castelfranchi, C.; Falcone, R. Trust Theory: A Socio-Cognitive and Computational Model; John Wiley and Sons: Chichester, UK, 2010. [Google Scholar]
  6. Suo, H.; Wan, J.; Zou, C.; Liu, J. Security in the internet of things: A review. In Proceedings of the 2012 International Conference on Computer Science and Electronics Engineering (ICCSEE), Hangzhou, China, 23–25 March 2012; IEEE: Los Alamitos, CA, USA, 2012; Volume 3, pp. 648–651. [Google Scholar]
  7. Jing, Q.; Vasilakos, A.V.; Wan, J.; Lu, J.; Qiu, D. Security of the internet of things: Perspectives and challenges. Wirel. Netw. 2014, 20, 2481–2501. [Google Scholar] [CrossRef]
  8. Roman, R.; Najera, P.; Lopez, J. Securing the internet of things. Computer 2011, 44, 51–58. [Google Scholar] [CrossRef]
  9. Pecorella, T.; Brilli, L.; Mucchi, L. The Role of Physical Layer Security in IoT: A Novel Perspective. Information 2016, 7, 49. [Google Scholar] [CrossRef]
  10. Sheth, A. Internet of things to smart iot through semantic, cognitive, and perceptual computing. IEEE Intell. Syst. 2016, 31, 108–112. [Google Scholar] [CrossRef]
  11. Kokciyan, N.; Yolum, P. Context-Based Reasoning on Privacy in Internet of Things. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, AI and Autonomy Track, Melbourne, Australia, 19–25 August 2017; pp. 4738–4744. [Google Scholar] [CrossRef]
  12. Maurya, A.K.; Sastry, V.N. Fuzzy Extractor and Elliptic Curve Based Efficient User Authentication Protocol for Wireless Sensor Networks and Internet of Things. Information 2017, 8, 136. [Google Scholar] [CrossRef]
  13. Asiri, S.; Miri, A. An IoT trust and reputation model based on recommender systems. In Proceedings of the 2016 14th Annual Conference on Privacy, Security and Trust (PST), Auckland, New Zealand, 12–14 December 2016; pp. 561–568. [Google Scholar]
  14. Yan, Z.; Zhang, P.; Vasilakos, A.V. A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134. [Google Scholar] [CrossRef]
  15. Ghazizadeh, M.; Lee, J.D.; Boyle, L.N. Extending the Technology Acceptance Model to assess automation. Cogn. Technol. Work 2012, 14, 39–49. [Google Scholar] [CrossRef]
  16. Miranda, J.; Mäkitalo, N.; Garcia-Alonso, J.; Berrocal, J.; Mikkonen, T.; Canal, C.; Murillo, J.M. From the Internet of Things to the Internet of People. IEEE Int. Comput. 2015, 19, 40–47. [Google Scholar] [CrossRef]
  17. Ashraf, Q.M.; Habaebi, M.H. Introducing autonomy in internet of things. In Proceedings of the 14th International Conference on Applied Computer and Applied Computational Science (ACACOS’15), Kuala Lumpur, Malaysia, 23–25 April 2015; pp. 215–221. [Google Scholar]
  18. Kranz, M.; Holleis, P.; Schmidt, A. Embedded interaction: Interacting with the internet of things. IEEE Int. Comput. 2010, 14, 46–53. [Google Scholar] [CrossRef]
  19. Economides, A.A. User Perceptions of Internet of Things (IoT) Systems. In International Conference on E-Business and Telecommunications; Springer: Cham, Switzerland, 2016; pp. 3–20. [Google Scholar]
  20. Castelfranchi, C.; Falcone, R. Trust and Control: A Dialectic Link. In Applied Artificial Intelligence Journal; Special Issue on “Trust in Agents” Part 1; Castelfranchi, C., Falcone, R., Firozabadi, B., Tan, Y., Eds.; Taylor and Francis: Abingdon, UK, 2000; Volume 14, pp. 799–823. [Google Scholar]
  21. Bekier, M.; Molesworth, B.R.C. Altering user’ acceptance of automation through prior automation exposure. Ergonomics 2017, 60, 745–753. [Google Scholar] [CrossRef] [PubMed]
  22. Falcone, R.; Sapienza, A.; Castelfranchi, C. Recommendation of categories in an agents world: The role of (not) local communicative environments. In Proceedings of the 2015 13th Annual Conference on Privacy, Security and Trust (PST), Izmir, Turkey, 21–23 July 2015; pp. 7–13. [Google Scholar]
  23. Conte, R.; Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order; Kluwer Academic Publishers: Boston, MA, USA, 2002. [Google Scholar]
  24. Falcone, R.; Sapienza, A.; Castelfranchi, C. The relevance of categories for trusting information sources. ACM Trans. Int. Technol. (TOIT) 2015, 15, 13. [Google Scholar] [CrossRef]
  25. Jipp, M. Levels of automation: Effects of individual differences on wheelchair control performance and user acceptance. Theor. Issues Ergon. Sci. 2014, 15, 479–504. [Google Scholar] [CrossRef]
  26. Falcone, R.; Castelfranchi, C. The Human in the Loop of a Delegated Agent: The Theory of Adjustable Social Autonomy. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2001, 31, 406–418. [Google Scholar] [CrossRef]
  27. Urbano, J.; Rocha, A.P.; Oliveira, E. Computing Confidence Values: Does Trust Dynamics Matter? In Proceedings of the 14th Portuguese Conference on Artificial Intelligence (EPIA 2009), Aveiro, Portugal, 12–15 October 2009; Lopes, L.S., Lau, N., Mariano, P., Rocha, L.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 520–531. [Google Scholar]
  28. Wilensky, U. NetLogo; Center for Connected Learning and Computer-Based Modeling, Northwestern University: Evanston, IL, USA, 1999; Available online: http://ccl.northwestern.edu/netlogo/ (accessed on 1 December 2017).
  29. Urbano, J.; Rocha, A.P.; Oliveira, E. Computing confidence values: Does trust dynamics matter? In Portuguese Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2009; pp. 520–531. [Google Scholar]
Figure 1. Graph representing the user behavior according to its internal state.
Figure 2. Graph representing the device behavior according to the estimation of the user state.
Table 1. User trust levels concerning autonomy when the devices do not make mistakes.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 1 | 1 | 1
25% | 1 | 1 | 1 | 1 | 1
50% | 1 | 1 | 1 | 1 | 1
75% | 1 | 1 | 1 | 1 | 1
100% | 1 | 1 | 1 | 1 | 1
Table 2. User trust levels concerning efficiency when the devices do not make mistakes.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 1 | 1 | 1
25% | 1 | 1 | 1 | 1 | 1
50% | 1 | 1 | 1 | 1 | 1
75% | 1 | 1 | 1 | 1 | 1
100% | 1 | 1 | 1 | 1 | 1
Table 3. User trust levels concerning autonomy after 250 runs, when there is no error and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 1 | 0.98702 | 0.57474
25% | 1 | 1 | 1 | 0.9985 | 0.78973
50% | 1 | 1 | 1 | 1 | 0.91405
75% | 1 | 1 | 1 | 1 | 0.97754
100% | 1 | 1 | 1 | 1 | 0.99842
Table 4. User trust levels concerning efficiency when the devices’ error increases with the task level and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 0.98975 | 0.9602 | 0.9141 | 0.77785
25% | 1 | 0.99 | 0.95875 | 0.9046 | 0.78685
50% | 1 | 0.9903 | 0.96225 | 0.897 | 0.7832
75% | 1 | 0.98625 | 0.9642 | 0.90895 | 0.79155
100% | 1 | 0.98655 | 0.9653 | 0.91125 | 0.7914
Table 5. User trust levels concerning autonomy when the devices’ error increases with the task level and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 0.9995 | 0.98307 | 0.93767
25% | 1 | 1 | 0.99895 | 0.98046 | 0.92686
50% | 1 | 1 | 0.99945 | 0.98472 | 0.91789
75% | 1 | 1 | 0.99945 | 0.98665 | 0.93079
100% | 1 | 1 | 0.99865 | 0.98689 | 0.93795
Table 6. User trust levels concerning autonomy when the devices’ error increases with the task level and the user is open-minded.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 1 | 0.9976 | 0.97174
25% | 1 | 1 | 1 | 0.99683 | 0.97156
50% | 1 | 1 | 1 | 0.99683 | 0.96926
75% | 1 | 1 | 1 | 0.99737 | 0.97315
100% | 1 | 1 | 1 | 0.99677 | 0.97491
Table 7. User trust levels concerning efficiency when there is a high error rate at the lower level and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 0.28365 | 0.9885 | 0.96235 | 0.90355 | 0.786
25% | 0.2836 | 0.98535 | 0.9672 | 0.90975 | 0.8014
50% | 0.28385 | 0.98985 | 0.96345 | 0.90645 | 0.7912
75% | 0.28475 | 0.9872 | 0.96285 | 0.90055 | 0.80735
100% | 0.28475 | 0.98495 | 0.9643 | 0.90935 | 0.7808
Table 8. User trust levels concerning autonomy when there is a high error rate at the lower level and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 0.9999 | 0.99723 | 0.98275 | 0.92214
25% | 1 | 0.99974 | 0.9957 | 0.98524 | 0.93762
50% | 1 | 0.99997 | 0.9994 | 0.98476 | 0.92801
75% | 1 | 0.99977 | 0.99775 | 0.98433 | 0.92609
100% | 1 | 0.99997 | 0.99875 | 0.98743 | 0.93165
Table 9. User trust levels concerning autonomy when there is a high error rate at level 3 and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 0.99725 | 0.98253 | 0.19055
25% | 1 | 1 | 0.99685 | 0.98298 | 0.19787
50% | 1 | 1 | 0.99865 | 0.98548 | 0.20721
75% | 1 | 1 | 0.9996 | 0.98887 | 0.21525
100% | 1 | 1 | 0.99955 | 0.98282 | 0.22047
Table 10. User trust levels concerning efficiency when there is a high error rate at level 3 and the user is cautious.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 0.9886 | 0.9649 | 0.28845 | 0.5054
25% | 1 | 0.9862 | 0.9624 | 0.2893 | 0.51165
50% | 1 | 0.98845 | 0.964 | 0.28945 | 0.51165
75% | 1 | 0.99 | 0.9659 | 0.288 | 0.51925
100% | 1 | 0.9878 | 0.9606 | 0.2882 | 0.51885
Table 11. User trust levels concerning autonomy when there is a high error rate at level 3 and the user is open-minded.
Percentage of Feedback Acceptance | Level 0 | Level 1 | Level 2 | Level 3 | Level 4
0% | 1 | 1 | 1 | 0.99757 | 0.68704
25% | 1 | 1 | 1 | 0.99717 | 0.68581
50% | 1 | 1 | 1 | 0.99809 | 0.68762
75% | 1 | 1 | 1 | 0.99719 | 0.68738
100% | 1 | 1 | 1 | 0.99925 | 0.69082
