Modelling the Interaction Levels in HCI Using an Intelligent Hybrid System with Interactive Agents: A Case Study of an Interactive Museum Exhibition Module in Mexico

Abstract: Technology has become a necessity in our everyday lives and is essential for completing activities we typically take for granted; technologies can assist us by completing set tasks or achieving desired goals with optimal effect and in the most efficient way, thereby improving our interactive experiences. This paper presents research that explores the representation of user interaction levels using an intelligent hybrid system approach with agents. We evaluate interaction levels in Human-Computer Interaction (HCI) with the aim of enhancing user experiences. We describe interaction levels using an intelligent hybrid system that provides a decision-making system to an agent evaluating interaction levels during the use of interactive modules of a museum exhibition. The agents represent a high-level abstraction of the system, where communication takes place between the user, the exhibition and the environment. In this paper, we provide a means to measure the interaction levels and natural behaviour of users, based on museum user-exhibition interaction. We consider that, by analysing user interaction in a museum, we can help to design better ways to interact with exhibition modules according to the properties and behaviour of the users. An interaction-evaluator agent is proposed to achieve the most suitable representation of the interaction levels, with the aim of improving user interactions by offering the most appropriate directions, services, content and information, thereby improving the quality of interaction experienced between the user-agent and exhibition-agent.


Introduction
Since the dawn of the 21st century, technology has immersed itself in our everyday lives and become a necessary facilitator of daily activities; some technological devices assist or support us by completing set tasks or achieving desired goals with optimal effect and in the most efficient way, thereby improving our interactive experiences. However, what happens when a user interacts without technology? Is the interaction experienced better or worse? What are the interaction levels when using or not using technology, and how do they change? Can we measure user interaction levels without metric variables, relying solely on body language and using linguistic variables? Which influencing factors increase or decrease our levels of interaction? What is the quality of the interaction? What is the interaction time? Which factors influence abandonment rates during interaction? This paper aims to address these questions by evaluating interaction levels in HCI and thereby improving user experiences. We describe interaction levels using an intelligent hybrid approach that provides a decision-making system to an agent that evaluates interaction in interactive modules in a museum exhibition. The agents represent a high-level abstraction of the system, where communication takes place between the user, the exhibition and the environment.
In our research, we analyse evaluations made by an on-site observer of a sample of 500 users who visited "El Trompo" Museo Interactivo Tijuana in Mexico to set up a Fuzzy Inference System (FIS) [1] using three hybrid techniques: (1) Empirical FIS (EF) [2,3], (2) a Fuzzy C-Means data mining method named Data Mined Type-1 (DMT1F) [4,5] and (3) a Neuro-Fuzzy System (NFS) [6,7]. The different user action inputs were represented to classify interaction levels using a FIS, in order to improve the provision of content with the purpose of increasing the interaction levels experienced in the museum. The involved actors included the user and the exhibition module, which were represented by agents as a high-level abstraction of the system. We represented the native user by a User-Agent, the exhibition module by an Exhibition-Agent (GUI) and the interaction evaluation system by an InteractionEvaluator-Agent (Interaction Evaluator).
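As an illustration of the clustering step behind the DMT1F technique, the following is a minimal, generic Fuzzy C-Means sketch; the function name and default parameters are our own, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centres and the membership matrix U.

    X: (n_samples, n_features) data; c: number of clusters; m: fuzzifier (> 1).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances from every sample to every centre (epsilon avoids 0-division)
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centres, U
```

Unlike hard k-means, each sample receives a graded membership in every cluster, which is what makes the result directly usable for building fuzzy membership functions.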

Interaction Levels
In this research, based on Gayesky and Williams' Interaction Levels Theory [8], we set defined parameters for analysis, such as presence, interactivity, control, feedback, creativity, productivity, communication and adaptation, to identify the interaction level of users using a FIS. The interaction between user and exhibition is important to evaluate, including its related factors (interaction time, type of interaction, etc.). A secondary motivation for this research is that researchers typically evaluate user interaction levels using quantitative methods rather than qualitative metrics. Moreover, it is important to understand each interaction (or lack of interaction) with the user in order to develop dependable user-exhibition interactions and to show users how much they are truly valued; interactions are 'moments of truth', i.e., we can learn user preferences and guide users in their subsequent choices. This approach creates a new opportunity for developing improved interactive experiences. For museums, it is crucial that exhibitions can self-learn and adapt to user interactions at every stage of the interaction, based on user actions.

Museum User-Exhibition Interaction
This research provides a means to measure interaction levels based on the natural behaviour of users formed by their interactions with museum exhibitions. We consider that, by analysing user interactions in a museum, we can help to design better methods to interact with exhibition modules according to the preferences, characteristics and behaviours of its users. Evaluating this interaction requires the identification of objective criteria based on qualitative aspects of the users' behaviour. We consider this to be a complex task requiring specific considerations not only about the performance and/or interactions of users, but also about the uncertainty involved in evaluating user perceptions, which makes it difficult to assess results and draw conclusions.

Related Work
Emerging social phenomena are difficult to explain since traditional methods do not naturally identify them. Agent-based methodologies allow for the identification and explanation of the causes of agent interactions involved in the phenomena, providing a greater understanding of the context. Rosenfeld et al. [9] proposed a methodology for using automated agents in two collaborative-task scenarios: real-world and human-multi-robot. The agents in this research are able to learn from past interactions, creating policies that develop deeper planning capabilities. Rosenfeld and Kraus [10] proposed a methodology for developing agents to support people in argumentative discussions, including the ability to propose arguments. This research is based on Conversational Agents (CAs), which can converse with humans and provide information and assistance [11,12]. The CA framework enables HCI because a human interacts directly with the CA, starting a conversation, while the CA works to understand the user's goals. The research analysed human argumentative behaviour and demonstrated that predictions of argumentative behaviour are considerably improved when merging four methods: Argumentative Theory, Relevance Heuristics, Machine Learning (ML) and Transfer Learning (TL), creating the ability to build intelligent argument agents. They concluded that ML techniques were the best option for predicting human argumentative behaviour, and that this behaviour can involve structured, semi-structured and free-form argumentation, provided that training data on the desired topic is available, allowing argumentation in the real world.
Garruzzo and Rosaci [13] argued that semantic negotiation is the key to agent clustering. They proposed forming 'groups of agents' based on similarities between their ontologies. They also stated that it is important to consider the context where the agent ontology is used, and proposed a novel clustering technique called HIerarchical SEmantic NEgotiation (HISENE), which considers the structural semantic components. Their research also proposes an algorithm to compute ontology-based similarity. They built a system of 200 software agents with communication skills that use the semantic negotiation protocol. The research addresses the problem of developing a Multi-Agent System (MAS) when it is necessary to form groups of agents. Here we face several challenges relating to cooperation and teamwork. Sometimes, different groups of agents must collaborate to achieve common goals or solve complex problems; if the agents belong to the same group, collaboration is easy because they share the same ontologies, but when agents belong to different groups, communication is complicated because the ontologies differ. This research responds to these issues by exploiting the capabilities of HISENE.
Complex problems demand novel proposals to find responses, and human capabilities alone are sometimes insufficient for this; technology is therefore needed on behalf of humans to find responses. One option is intelligent agents. Rosaci [14] proposed building agents that create ontologies from an internal representation of the behaviour and interests of their owner; these ontologies are needed to help agents create knowledge-sharing inter-relationships. The paper proposes constructing semi-automated ontologies based on observed behaviours. A MAS named Connectionist Inductive Learning and Inter-Ontology Similarities (CILIOS) is proposed for recommending information agents. The agents are viewed as user models, applying to ontologies the same similarity-checking process applied to humans. In this paper, the term ontology denotes the agent's knowledge kernel. CILIOS is composed of different levels of agents with different topologies. Level one is called 'Main'; the Main contains three agents: (1) the Agent Management System (AMS), (2) the Directory Facilitator (DF) and (3) the Agent User Interface (GUI). This level is characterised by the essential agents of the JADE platform. The second level, called IACOM (connectionist), contains inductive agents; these agents are related to humans, based on behaviour and interests. The IACOM (symbolic) level is based on the neural-symbolic network. The third level, called IAOM, is connected to the IACOM level and contains an ontology translator. The fourth level, Ontologies Similarities Managers (OSMs), is related to IAOM and computes similarities between agents based on the IAOM ontology. The underlying ontology enriches the capability to select adequate agents for cooperation; the neural-symbolic network makes the inductive mechanisms more efficient, improving the planning tasks of the agents based on run-time learning.
On the other hand, the use of agents can help us in our social lives by understanding our personality and creating relationships with other people with similar traits. Cerekovic et al. [15] used the rapport measure of [16] with virtual agents, linking social cues and self-reported human personality to collected judgements of the rapport of human-agent interaction, and studying what kinds of social cues influence those judgements.
The Human Interaction (HINT) methodology, proposed by Sanchez-Cortes et al. [17], considered HCI using 1-min interaction videos in which rapport was judged. The rapport judgements were collected to help extract social cues from audio-visual data; the social cue extraction was derived from HCI. The researchers used agents known as Sensitive Artificial Listeners (SALs). Schroder et al. [18] stated that SALs are key to social cue extraction because they evaluate interactions by measuring facial expressions and extracting social cues, composed of verbal cues (language style) and non-verbal cues (auditory and visual cues).
The Multi-Agent System Handling User and Device Adaptivity of Web Sites (MASHA), presented by Rosaci and Sarne [19], mediates between a website and its user with the help of different agents: a Client Agent, a Server Agent and an Adapter Agent. The Client Agent creates a user profile, considering the user's interests, behaviours and desires. MASHA uses the Client Agent to monitor user navigation, and this profile grows with interactions. The Client Agent is then helped by the Server Agent, which gathers additional relevant information about the Client Agent; these agents can also collaborate autonomously to improve their knowledge about navigation and the user profile. The Adapter Agent analyses the gathered information and generates recommendations based on user preferences; MASHA thus delivers effective content-based filtering recommendations. As the user navigates more, the profile grows and the HCI improves, based on the user's own predilections. MASHA supports the construction of communities of agents; these communities are composed of two categories: C1, which links to the human user, and C2, which links to the website. MASHA can deliver a novel tool for website visitors and provide useful suggestions, considering the growing variety of devices, and can increase user satisfaction during web navigation.
Analysing the related research, we can see that these works propose novel solutions to improve the HCI experience and provide different answers for greater adaptability supported by intelligent agents. The research combines different techniques and technologies, such as Argumentative Theory, Relevance Heuristics, Machine Learning and Transfer Learning; all these options allow us to create powerful, intelligent agents with the abilities and functions to act as intermediaries in interactions with humans, including with hardware such as tracking systems. The creation of intelligent agents lets systems be ready to work and respond in different contexts, from emergency scenarios to predictions of human behaviour. The advantage of agent-based approaches is that they allow the representation of any number of users, from one to thousands. They also grant a broader perspective on the creation of models and applications based on agents. This allows our proposed research to adequately model HCI within the context of the user exhibition in an interactive museum. Likewise, it provides a representation of our research through the agents and users involved in this context. We can create several simulations of different possible real-world scenarios. On the other hand, it also allows for the creation of agents with personalised features based on levels of user interaction. The creation of these agents will allow them greater autonomy, reactivity and adaptability, based on emerging changes in the context of user-exhibition interaction. However, we must bear in mind that the proposed research focuses only on the context of an interactive museum and has not been tested in emergency contexts.

Methodology
This research proposes a model that allows for the representation of levels of interaction using a FIS to evaluate, in a qualitative and subjective manner, the values of the levels of interaction between a user and a museum exhibition module, bearing in mind the uncertain results of exchanges of messages [1]. This approach helps to handle imperfect information when trying to provide the services, information and content that the user requires, based on their interaction level, while seeking to supply a satisfactory interactive experience. To model the interaction levels in HCI using an intelligent hybrid system with interactive agents, we developed the following strategies.
Firstly, we approached the user-exhibition interaction module following some of the recommendations of Gaia, a methodology for agent-oriented analysis and design, to identify the roles and interactions in the referent/target system. Figure 1 illustrates the process and models proposed by the agent-oriented modelling method. We designed agents, relationships and services to build a prototype for experimentation. A full description of Gaia can be found in [20]. Secondly, we approached the interaction levels in HCI using an intelligent hybrid system following a general methodology of computational modelling. This is an iterative process that begins with a referent system in the real world; abstraction, formalisation, programming and appropriate data are then used to develop a viable computational model. Figure 2 shows the steps proposed by the modelling method applied to the case study. A full description of computational modelling can be found in [21,22]. Finally, we used an interactive museum exhibition module case study to validate the proposed model. The interactions reported in this study were simulated, performed, observed and analysed in an interactive museum in Tijuana, Mexico. We consider it appropriate to base our study on the approach addressed in [23]; the types of interaction that occur in this kind of environment are suitable for the proposed research. It is believed that measuring levels of interaction strengthens the knowledge needed to determine the services or information that users require, based on their predilections in conjunction with their level of interaction. After producing several computational models of the interaction levels by applying different methods, we used confusion matrices to show the results of the computational models' prediction tests. In machine learning, a confusion matrix is an error table that allows the performance of the data mining (or training) algorithm and of the produced inference system's responses to be examined. Each row of the array represents the instances in a predicted class, while each column represents the cases in an actual class (or vice versa).
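As a minimal illustration of such an error table, a confusion matrix can be built directly from paired label lists; the level labels used here are illustrative:

```python
from collections import Counter

def confusion_matrix(predicted, actual, labels):
    """Error table: each row is a predicted class, each column an actual class."""
    counts = Counter(zip(predicted, actual))
    return [[counts[(p, a)] for a in labels] for p in labels]

levels = ["LI", "MI", "HI"]
pred = ["LI", "MI", "HI", "MI"]
true = ["LI", "MI", "HI", "HI"]
# diagonal entries count correct predictions; off-diagonal entries count confusions
matrix = confusion_matrix(pred, true, levels)
```

Here the single misclassification (an actual "HI" predicted as "MI") appears as an off-diagonal entry in the "MI" row.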

Room and Exhibition Module Selection
To analyse the user-exhibition interaction, we studied both physical and theoretical aspects of the museum rooms, including their themes, objectives, methods of interaction and their location in the museum. Additionally, we observed the methods of interaction found in each room in order to select a suitable room that allowed us to analyse the behaviour, actions, performance, interruption factors and interaction levels of users, as well as the interactive content type, information and/or services that the exhibitions provided. We also considered whether the content was adequate for users, suitable in relation to the kinds of interaction of users and adequate in maintaining the attention of the user. We further examined whether the content encouraged interaction, analysed the objective of the exhibition to determine whether it was adequate in encouraging a good interaction for the user, and assessed the media interfaces of the exhibition modules to determine whether they supported suitable interaction.
After analysis of the different exhibition modules, an interesting interactive module was chosen with features that allowed us to obtain the majority of the parameters we wished to analyse in our research. The name of the exhibition module was "Move Domain". This educational exhibition involved users interacting and playing with one of four objects (a car, plane, bike or balloon), which were displayed simultaneously on four separate screens, demonstrating four different methods of moving through the simulated virtual world. Users were able to experience all four means of transportation; they were able to interact in the virtual world and see how other users travelled and interacted around it. The exhibition's objective was to allow users to develop hand-eye coordination skills and spatial orientation using its technology. The content was based on eye coordination and interaction with electronic games, with the exhibition's message being "I can learn about virtual reality through playing". The suggested number of simultaneous users was four.

Exhibition Module Interface
The module interface consisted of four sub-modules attached by connectors. Each module included a cover stand for the 32-inch screen, software that simulated the virtual world and a cabinet to protect the computer. The exhibition module was equipped with a joystick to handle the plane, a steering wheel and pedals to drive the car, handlebars to ride the bike and a rope to fly the balloon. This interactive exhibition module, which is one of the most visited in the museum, allowed us to obtain important data for the analysis, processing and validation of the proposed model. Figure 3 depicts the analysed exhibition module.

Study Subjects
As subjects for the study, users were randomly selected from the children and adults who participated in supervised tours as part of a permanent collaboration program between local schools and the museum. The institution and schools involved have the necessary agreements in place to conduct non-invasive interactive module evaluations to improve their design. We evaluated user interaction behaviour by performing ethnographic research (note-taking style), observing in a non-invasive manner. Personal data was not required in our data collection; therefore, information was produced directly in the museum room through real-time observation, in line with the institution committee's recommendations to guarantee the anonymity of users.

Evaluation Interaction Parameters
We analysed and studied parameters such as Presence (Do users have a constant presence? Do they have an intermittent presence?), Interactivity (Do users interact directly or indirectly with the museum exhibition? Do users share interactivity with the exhibition?), Control (Do users have full control over the exhibition?), Feedback (Do users receive some sort of feedback about the content viewed?), Creativity (Do users change the way they interact with the exhibition according to their creativity?), Productivity (Do users propose something that changes their interaction?), Communication (Do users receive communication directly from the exhibition?) and Adaptation (Do users adapt their actions according to the interactive content type delivered by the exhibition?).
All collected data on user interaction behaviour was compiled through ethnographic research that observed the user's interaction in a non-invasive manner. We obtained parameter values based on human expert evaluations with implicit uncertainty, assigning values for every user based on expert judgement.
Figure 4 depicts the average parameter results for the 500 users analysed. It shows the interaction parameters necessary to develop an adequate FIS to obtain the interaction level.

The Model
To support user interactions, HCI operates as a background process, using invisible sensing computational entities to interact with users. These entities are simulated by the User-Agent and Exhibition-Agent. The collaboration of these entities during HCI delivers customised interactive content to users in a non-invasive, context-aware manner. The relationships between Users (museum users), represented by the "User-Agent", and the Computer (museum exhibition), represented by the "Exhibition-Agent", must be systematically modelled and represented to be ready for the emergent context; for this reason, we represent them using user-exhibition relationships.
In our research, we represent HCI simulated on museum modelling with embedded agents (User-Agent, Exhibition-Agent, InteractionEvaluator-Agent) that allow the user-exhibition interaction to be supported. Our proposed modelling provides dynamic support for interactions and is aware not only of the user's physical context, but also of their social context, i.e., when a user interacts with another user. Our model considers contextual attributes, such as the location of the user and what they are actively doing during interaction.
The handling of uncertainty in information and service exchange environments presents numerous challenges in terms of the imperfect information involved in user-exhibition interaction; these interactions can be neither fully predicted nor restricted in real-world applications, such as the simulation of the exchanges that occur during interaction between user and exhibition. This research seeks to advance the following: to propose a model for representing interaction levels using a fuzzy inference system that helps to measure the level of interaction in order to identify the performance, actions and behaviour of users, so as to offer adequate information or services, based on the theory of Gayesky and Williams [8].
Established models exist which have been developed to process information based on classical logic, where propositions are either true or false. However, no model currently exists that addresses the uncertainty generated in environments of information exchange and in the imprecise services involved in user-exhibition interactions. We experiment with the proposed model, using the FIS to address the uncertainties involved in the process of information and service exchange, to learn levels of interaction between the user and exhibition in an interactive museum context. We require a fuzzification mechanism for input variables that is suitable for the environment; this is required to fuzzify perceptions and to define a fuzzy evaluation module that evaluates the values generated during user-exhibition interactions. This evaluation module, or fuzzy perception mechanism, must be adapted to the Mamdani Fuzzy Inference method [24], which enables the inference of the level of interaction.
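A Mamdani-style evaluation of this kind can be sketched as follows; the triangular membership functions and the two rules shown are illustrative assumptions, not the system's actual rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

y = np.linspace(0.0, 1.0, 501)          # output universe: interaction level in [0, 1]

def interaction_level(presence, interactivity):
    """Mamdani inference: min-AND, max-OR, clipped consequents, centroid defuzzification."""
    # Rule 1: IF presence is HIGH AND interactivity is HIGH THEN level is HIGH
    w1 = min(tri(presence, 0.0, 1.0, 2.0), tri(interactivity, 0.0, 1.0, 2.0))
    # Rule 2: IF presence is LOW OR interactivity is LOW THEN level is LOW
    w2 = max(tri(presence, -1.0, 0.0, 1.0), tri(interactivity, -1.0, 0.0, 1.0))
    agg = np.maximum(np.minimum(w1, tri(y, 0.0, 1.0, 2.0)),
                     np.minimum(w2, tri(y, -1.0, 0.0, 1.0)))
    return float((y * agg).sum() / agg.sum())   # centroid of the aggregated output
```

The crisp output rises towards 1 as both inputs approach 1 and falls towards 0 as they approach 0, which is the behaviour the rule base encodes.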

Modelling User-Exhibition Elements
In this research, all interactions occurred on independent exhibitions providing different content which allowed for interactivity. We represented this using agent modelling. A user, represented by a User-Agent (UA), has complete freedom and unlimited time to encounter different interactions (individual, group, accompanied, etc.). In this sense, agents let us map the proposed model more efficiently. First, we analysed a native user (User-Agent) in the environment in order to analyse the performance of the user and obtain inputs for the FIS. We then analysed the identified exhibition, represented by the Exhibition-Agent (GUI), and the InteractionEvaluator-Agent (Interaction Evaluator), with the activities and content offered, to measure the level of interaction, identifying available user activities, such as when the user-exhibition interaction arises. The InteractionEvaluator-Agent mediates between the stakeholders (user and exhibition), offering a status of the current state so that both can interact without problems. The InteractionEvaluator-Agent plays a "consultant" role, linking the user and exhibition in order to provide enhanced integration. In this context, our reactive environment is ready at all times to obtain information. Figure 5 summarises the agent system prototype of the interactive museum exhibition module in a software agent platform [25].

Representing Interaction Levels Using a Fuzzy Inference System
The Interaction Levels Scale proposed by Gayesky and Williams [8] has been used as a basis for our proposed Interaction Levels Scale, but how do we represent these interaction levels using a FIS? First, we defined our own interaction levels scale. The scale was defined with six levels: (1) Extremely Low Interaction (ELI), (2) Very Low Interaction (VLI), (3) Low Interaction (LI), (4) Medium Interaction (MI), (5) High Interaction (HI) and (6) Extremely High Interaction (EHI). For each level of the scale, we defined its key features and assigned a linguistic value in order to represent these levels in a FIS as output variables. The following is a summary of the proposed scale.
Level 0. The user is present in the exhibition module area and is shown a welcome message and related content. The user does not respond; only presence is confirmed. No interaction exists, only the act of being present. Key Features: null interactivity; no significant movements, only presence. Linguistic Value: Extremely Low Interaction (ELI).
Level 1. The user hears or sees the content, but no meaningful action is perceived. The exhibition module only provides general content (a welcome message or basic exhibition content and information). The user receives information but does not control the interaction. Key Features: very low interactivity; few movements. Linguistic Value: Very Low Interaction (VLI).
Level 2. The user mentally reasons about the content provided by the exhibition, while the exhibition can analyse interactions, raise questions, encourage feedback and summarise fundamental ideas or relevant passages. Approaches arise from responses to user questions. Key Features: low interactivity; few movements, comment stimulation, mental analysis. Linguistic Value: Low Interaction (LI).
Level 3. The user reasons about the content offered by the exhibition, while the exhibition indicates pauses in which the user develops different types of activities, including oral queries, complementing support material, etc., allowing them to control the sequence of the activity, its flow and its continuity. Key Features: medium interactivity; indicated pauses, oral activities, queries. Linguistic Value: Medium Interaction (MI).
Level 4. At this level, there is greater control between user and exhibition. The user can alter the message they receive by means of feedback, i.e., they can select the desired information to receive. The user has the option to decide how, when and what part of the activity they want to develop. Key Features: high interactivity; control, feedback, desired data selection. Linguistic Value: High Interaction (HI).
Level 5. The user has the ability to give feedback on, control, create, communicate, adapt and produce the information provided by the exhibition. This level represents all the qualities of interactivity. During parts of the interaction, "talk" can occur between the user and exhibition (a "talk" using different means of interaction). Key Features: extremely high interactivity; control, feedback, creativity, adaptation, productivity and desired data selection. Linguistic Value: Extremely High Interaction (EHI).
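The six-level scale can be encoded directly in software; the uniform binning of a crisp FIS output onto levels shown below is our illustrative assumption, not the paper's defuzzification scheme:

```python
# Proposed six-level scale: (level, linguistic value, abbreviation)
INTERACTION_SCALE = [
    (0, "Extremely Low Interaction", "ELI"),
    (1, "Very Low Interaction", "VLI"),
    (2, "Low Interaction", "LI"),
    (3, "Medium Interaction", "MI"),
    (4, "High Interaction", "HI"),
    (5, "Extremely High Interaction", "EHI"),
]

def linguistic_value(score):
    """Map a crisp FIS output in [0, 1] onto one of the six levels (uniform bins)."""
    idx = min(int(score * len(INTERACTION_SCALE)), len(INTERACTION_SCALE) - 1)
    return INTERACTION_SCALE[idx][2]
```

For example, a crisp output of 0.5 falls in the fourth bin, corresponding to Medium Interaction (MI).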
The proposed scale was used as a reference to create a suitable model to measure the level of interaction. The ability to measure levels of interaction is essential to provide the services or information that the user really needs; this can be developed only by understanding the user. Nevertheless, how can we measure and represent this level of interaction using a FIS? It is desirable to have propositions describing the level of interaction reached by the user for specific interactive activities.
Computational solutions involved in the development of real-time interaction can be implemented to help in this process. In our propositions, we represent the level of interaction that is assumed, where the user has a specific evaluated level. Within this context, we consider it relevant to integrate fuzzy logic modelling to formalise the representation of levels. In this case, the level of interaction is not the result of a binary interaction/non-interaction distinction; instead, it results from all the elements that complement the interaction (user profile, preferences, actions, behaviour, performance, etc.). Fuzzy logic maintains its knowledge base using rules, making the implementation process more appropriate for exhibition reasoning in order to measure the level of user interaction. This proposed format makes the rules easier to maintain and the knowledge base easier to update.
In this sense, this research analyses the data obtained directly from the user, in the context of interaction between user and exhibition, using fuzzy logic to infer relevant information on the level of user interaction in relation to the activities conducted. This information is obtained through fuzzy inputs that feed the FIS; these inputs are: Presence (Pre), Interactivity (Int), Control (Ctl), Feedback (Fbk), Creativity (Cty), Productivity (Pdt), Communication (Com) and Adaptation (Ada). Each input value was collected on a scale from 0 (minimal value) to 1 (maximum value), derived from the user interaction behaviour. By performing ethnographic research in a non-invasive way, we obtained the values based on human expert evaluations with inherent uncertainty.
This research is developed in such a way that the model can be applied to different environments and scenarios. The integration of the proposed scale and the FIS helps measure the level of interaction performed by users during user-exhibition interaction. To recognise the level of interaction, the variables are defined as resources generated by user performance, considering both data analysis simulations and real-time monitoring. The obtained variables are evaluated to identify their level of interaction, analysing the proposed scale by applying the FIS. We analysed the information available about the exhibition, including its features, media communication, content, etc., and studied the performance data of users, considering their individual actions. The changes that occur are of great importance to the interaction, as they are fed back into the model. The user has a set level of interaction during a given period. This measurement can handle uncertainty, as determined by a set of membership functions. The states caused by the user are those that can be induced. We define plans, each specified by different membership functions, through linguistic variables that receive the level of user interaction during the interaction process. Through these, the model can determine the most appropriate content, service or information to be shown to the user.
The measurement of the level of interaction is composed of the defined input variables. Each variable has different membership values according to the actions of users. When each user interaction begins, different membership values are created that can vary the result of the actions. The interaction may have different levels (ELI, VLI, LI, MI, HI and EHI), used to make a decision in order to provide the services or information that the user really needs. The values of these levels can be interpreted in the calculations when monitoring and analysing the interaction between user and exhibition; they are also used to determine subsequent interaction.

Implementing the Fuzzy Inference System
Unlike other models that use different paradigms through heuristics, our proposed model adopts fuzzy set theory to build knowledge provided by users. We present the uncertainty of the information in a better and more appropriate manner. We represented, in accordance with the environmental inputs, the variables (Pre, Int, Ctl, Fbk, Cty, Pdt, Com and Ada) with their respective membership functions, which define the output (interaction level). The result of this implementation is expressed as a fuzzy value and, in this case, is given a linguistic value. The update process is dynamic and is altered according to a user's performance during interaction.
Membership functions were modelled by considering an initial user profile, ensuring a more accurate result for assessing the level of user interaction. Different activities were created, intended to vary from low to high uncertainty, to assess the level of interaction of each user. The effective implementation of a FIS requires programs that directly apply fuzzy logic functions. Some utility programs have specific modules to facilitate this task, as is the case of the Fuzzy Logic Toolbox of MATLAB (MATLAB vR2017B, The MathWorks Inc., Natick, MA, USA, 2017) [26], which contains a library based on the C language, providing the necessary tools to conduct effective fuzzification. JT2FIS (JT2FIS v1.0, Universidad Autónoma del Estado de Baja California, Mexicali, BC, Mexico, 2016) [27], a tool-kit for interval Type-2 fuzzy inference systems, can be used to build intelligent object-oriented applications and provides effective fuzzification methods and tools; this utility was used to implement the proposed FIS (see Figure 5).
The model inputs are the variables that can be perceived by the exhibition; these are the performance data of the user interaction. The input variables (Pre, Int, Ctl, Fbk, Cty, Pdt, Com and Ada) and the output variable (interaction level) of the FIS are each associated with a set of membership functions. The output comprises six linguistic variables (ELI, VLI, LI, MI, HI and EHI). Gaussian functions were used, as this type of membership function has a soft, non-abrupt decay. The FIS was implemented by building inference rules covering all the linguistic variables, with the rule AND computed by the minimum method and rule aggregation performed by the maximum method. Table 1 depicts these base rules; thus, this is identified as the knowledge-base representation.
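The inference scheme just described (Gaussian memberships, minimum for the rule AND, maximum for aggregation) can be sketched as a minimal Mamdani FIS in Python. This is an illustrative reconstruction, not the actual JT2FIS configuration: the six term centres, the sigma values and the centroid defuzzification step are assumptions for demonstration only.

```python
import numpy as np

def gauss(x, mean, sigma):
    """Gaussian membership function with a soft, non-abrupt decay."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Six illustrative linguistic terms per input, evenly spaced on [0, 1].
TERM_MEANS = np.linspace(0.0, 1.0, 6)
SIGMA = 0.12  # assumed input spread

def interaction_level(inputs, resolution=601):
    """Mamdani sketch following the Table 1 pattern: rule k fires when
    every one of the eight inputs matches term k (AND = min); fired
    rules are aggregated with max and the output (levels 0-5 on a
    continuous axis) is defuzzified by centroid."""
    inputs = np.asarray(inputs)              # eight values in [0, 1]
    z = np.linspace(0.0, 5.0, resolution)    # output universe
    aggregated = np.zeros_like(z)
    for k, mean in enumerate(TERM_MEANS):
        firing = gauss(inputs, mean, SIGMA).min()   # AND via minimum
        consequent = gauss(z, k, 0.5)               # output level k
        aggregated = np.maximum(aggregated, np.minimum(firing, consequent))
    return (z * aggregated).sum() / aggregated.sum()  # centroid
```

With all eight inputs near 1.0 only the highest rule fires strongly, so the defuzzified output sits near level 5; mid-range inputs fire the middle rules and yield a mid-range level.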
At this stage, to enable the FIS, the fuzzy toolbox of MATLAB [26] and the JT2FIS [27] tool-kit were used, simulating and entering the 500 users to be analysed; each user generates a set of input values exemplifying the performance of their interaction. These values are submitted to the FIS, which returns the output variable (interaction level).
The proposed fuzzy model provides a universe of six levels of interaction. These levels were defined with different values for the parameters of the membership functions. This makes it possible to develop a knowledge base that allows a set of membership functions that vary according to interaction and user performance; this is because the membership functions are altered to represent states with different degrees of uncertainty. One example is to build an initial function that is more flexible and categorises users within the sets (ELI, VLI, LI, MI, HI and EHI). Figure 6 depicts the variations of uncertainty from the first to the last level of our fuzzy universe.
To verify the user's corresponding level of interaction, according to their inputs, we evaluate the defuzzified output of the resulting level. Thus, a user moves from one level to another when the output value is closer to the neighbouring integer; e.g., if the output is 0.2, the user remains at interaction level 0 but, if it is 0.9, the level of interaction moves to level 1; the system then updates the knowledge base for the next interaction. Another example is interaction at level 5, the highest level of interaction, which presents all input variables near or at the maximum; this value represents less uncertainty in measuring user performance. The value therefore forms the basis of the user's behaviour in the environment; it can change dynamically, and the membership functions can be modified to characterise anything from greater to lesser uncertainty about the user's performance. Once the level of interaction is identified, information or services are sent according to the interaction-level result.
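The level-switching rule described above (0.2 stays at level 0, 0.9 moves to level 1) amounts to rounding the defuzzified output to the nearest integer level, clamped to the 0-5 universe; a one-function sketch:

```python
def crisp_level(defuzzified: float) -> int:
    # Round the defuzzified FIS output to the nearest integer level,
    # clamped to the 0-5 universe: 0.2 stays at level 0, 0.9 moves to 1.
    return max(0, min(5, round(defuzzified)))
```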
Figure 6. The first and last levels of our fuzzy universe (membership function plots).

Validating the Fuzzy Inference System
The research results obtained can be considered closer to human intelligence, as we consider linguistic variables from the users. We evaluated the input fuzzy set according to our knowledge base, founded on the if-then rules of the FIS. As a result, the optimum outputs obtained were much closer to the target outputs. Building the optimum results for the system depends on the experience of the experts. If results similar to the user's performance are obtained, data or services can then be delivered according to user preference. The same data, obtained and analysed under the same conditions for the 500 users, was applied using the proposed Empirical FIS.
To validate each approach proposed in this article, we used a confusion matrix. A confusion matrix (error matrix) is a tool that objectively measures the performance of a classification algorithm. Each column represents the number of predictions of each class, while each row shows the instances of the true classes. The diagonal elements represent the number of points for which the predicted label equals the true label, while off-diagonal elements are those mislabelled by the classifier. The higher the diagonal values of the confusion matrix, the better the result, indicating many correct predictions. To measure the performance of the Empirical FIS, we compared the results obtained by our proposed FIS with the results of the expert. Figure 7 shows the results of this classification. The bottom-right cell indicates the overall accuracy, the column on the far right of the plot illustrates the accuracy for each predicted class and the row at the bottom of the plot shows the accuracy for each true class. In Section 5, we evaluate the interaction of users using alternative approaches, validating each with a confusion matrix. To appropriately evaluate the interactions, we first separated the data into two sets, one for training the model and the second for testing it. In all cases, 70% of the data is used for training and the rest for testing. Each set was made by random selection of the data.
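A confusion matrix with this orientation (rows as true classes, columns as predictions) and the derived accuracies can be computed with a few lines of NumPy; this is a generic sketch, not the toolbox code used to produce Figure 7:

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes=6):
    """Rows hold true classes, columns hold predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    # Correct predictions sit on the diagonal.
    return cm.trace() / cm.sum()

def per_class_accuracy(cm):
    # Accuracy (recall) for each true class; assumes every class
    # appears at least once, otherwise a row sum is zero.
    return cm.diagonal() / cm.sum(axis=1)
```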
We described the empirical FIS configuration in the supplementary material, where Table S1 shows the input configuration, Table S2 the output configuration and Table S3 the fuzzy inference rules of the empirical FIS.

The Intelligent Hybrid System Approach
In this section, the results obtained from the sample of 500 users who visited the 'El Trompo' interactive museum in Tijuana, Mexico, are presented and analysed. Users were evaluated and processed using an empirical FIS, a Decision Tree, a fuzzy c-means data-mining method [4] named Data Mined Type-1 [23] and a Neuro-Fuzzy System [7].

The Decision Tree Approach
First, the collected data was processed using a decision tree. We used the fitctree function of MATLAB; this function returns a fitted binary classification decision tree based on the input variables (also known as predictors, features or attributes) and the output (response or labels). For this case study, the inputs selected were presence, interactivity, control, feedback, creativity, productivity, communication and adaptation, while the output is the level of interaction (Levels 0-5). We selected 70% of the data to fit the decision tree, while the remaining data was used to predict the level of interaction of users at the museum. Figure 8 shows the results of this classification. The bottom-right cell shows the overall accuracy, the column on the far right of the plot shows the accuracy for each predicted class and the row at the bottom shows the accuracy for each true class.
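The paper uses MATLAB's fitctree; an equivalent sketch with scikit-learn's DecisionTreeClassifier and the same 70/30 split is shown below, run on synthetic stand-in data (the labelling rule is hypothetical, since the museum dataset is not public):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the museum data: 500 users, eight behaviour
# attributes in [0, 1] (Pre, Int, Ctl, Fbk, Cty, Pdt, Com, Ada).
X = rng.random((500, 8))
# Hypothetical labelling rule mapping mean attribute value to levels 0-5.
y = np.minimum((X.mean(axis=1) * 6).astype(int), 5)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)   # 70% train / 30% test
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)      # overall test-set accuracy
```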

The Data Mined Type-1 FIS Approach
A key aim of this research was to obtain more detailed and specific values, according to the performance and behaviour of the user, taking uncertainty into account. For this reason, we used the Data Mined Type-1 approach, aided by the JT2FIS tool-kit [27]. We selected 70% of the user sample and applied a fuzzy c-means clustering algorithm for data mining [4]; once all the data was mined, we obtained the configuration parameters of the FIS. In this case, the FIS inputs were Presence, Interactivity, Control, Feedback, Creativity, Productivity, Communication and Adaptation, while the output was the level of interaction (Levels 0-5). Following this, we added six rules, which enabled us to obtain a FIS with a higher level of accuracy in the resulting configuration. Consistent and accurate interaction levels were obtained that adhere, as much as possible, to the performance and behaviour of the user, in order to offer the services, data and content ultimately required by the user. Once the Data Mined Type-1 FIS was configured, we evaluated the remaining 30% of users, with their information used as input to determine the level of interaction. Figure 9 shows the results of this approach; as seen, we can observe an improvement in classification with respect to the previous methods. The bottom-right cell shows the overall accuracy, the column on the far right of the plot shows the accuracy for each predicted class and the row at the bottom shows the accuracy for each true class. We described the Data Mined Type-1 configuration in the supplementary material, where Table S4 shows the input configuration, Table S5 the output configuration and Table S6 the fuzzy inference rules of the Data Mined Type-1 FIS.
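The fuzzy c-means step can be sketched in plain NumPy. This is a generic implementation of the clustering algorithm [4], not the JT2FIS routine; the fuzzifier m = 2 and the stopping tolerance are conventional defaults we assume here:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix U (n_samples x n_clusters), with each row summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # normalise memberships
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                  # avoid division by zero
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        converged = np.abs(U_new - U).max() < tol
        U = U_new
        if converged:
            break
    return centres, U
```

In the Data Mined Type-1 pipeline, the centres and spreads recovered from clusters like these would parameterise the Gaussian membership functions of the generated FIS.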

Neuro-Fuzzy System Approach
To improve the accuracy of the classification of interaction levels, we decided to generate a FIS using a Neuro-Fuzzy method. Neuro-Fuzzy systems encompass a set of techniques that share robustness in handling the imprecise and uncertain information found in real-world problems, e.g., pattern recognition, classification, decision making, etc. The main advantage of Neuro-Fuzzy systems is that they combine the learning capacity of neural networks with the linguistic interpretability of a FIS, allowing the extraction of knowledge for a fuzzy rule base from a set of data. In this case study, we generated a FIS combining fuzzy c-means clustering and the Least-Squares Estimate (LSE) algorithm. This method was proposed by Castro et al. [7].
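The LSE stage of such a hybrid can be sketched as a least-squares fit of first-order Sugeno rule consequents, assuming the normalised firing strengths (e.g., derived from fuzzy c-means memberships) are already available; this is a generic illustration, not Castro et al.'s exact formulation [7]:

```python
import numpy as np

def lse_consequents(X, y, W):
    """Fit first-order Sugeno consequents y_k = a_k . [x, 1] by least
    squares. W (n_samples x n_rules) holds normalised firing strengths.
    Returns the consequent parameter matrix (n_rules x (n_features+1))."""
    n, d = X.shape
    r = W.shape[1]
    Xe = np.hstack([X, np.ones((n, 1))])       # append bias term
    # Design matrix: each rule contributes its firing-weighted regressors.
    A = (W[:, :, None] * Xe[:, None, :]).reshape(n, r * (d + 1))
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params.reshape(r, d + 1)

def predict(X, W, params):
    """Weighted sum of per-rule linear outputs."""
    Xe = np.hstack([X, np.ones((len(X), 1))])
    return np.einsum('nr,nd,rd->n', W, Xe, params)
```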
In this approach, we selected 70% of the data to fit the model. The rest of the data was used to predict the level of interaction users had at the museum. Figure 10 shows the results of this classification. The bottom-right cell shows the overall accuracy, the column on the far right of the plot shows the accuracy for each predicted class and the row at the bottom shows the accuracy for each true class. We described the Neuro-Fuzzy FIS configuration in the supplementary material, where Table S7 shows the input configuration, Table S8 the output configuration and Table S9 the fuzzy inference rules of the Neuro-Fuzzy FIS.

Empirical FIS Approach Versus Hybrid FIS Approach
The use of artificial intelligence has been widely applied in most computational fields. The main feature of this concept is its ability to self-learn and to predict desired outputs. This autonomous learning may be achieved in a supervised or unsupervised manner. The interaction-level prediction of user data has been applied and processed using different approaches, including the Empirical FIS, Decision Tree, Data Mined Type-1 FIS and Neuro-Fuzzy System (NFS).
Table 2 shows the accuracy of each of these approaches, giving the precision of each approach at each level of interaction. The Neuro-Fuzzy System is identified as the one with the best results.

Discussion
In recent times, museum halls have experienced overcrowding, with most users (students) having only a few opportunities to interact with the museum's exhibitions. The museum used to compensate for this limited experience by providing guided tours led by instructors, who accompanied groups of children and maintained interest in the demonstrations. In some ways, the museum achieved its goals using this strategy, but the infrastructure could be considered under-utilised. A solution to this overcrowding is to make museums smart spaces with multi-user adaptive interactive exhibitions. Some museums in Mexico have based their interactive experience design on instructional activities and underlying technology (generally including touch screens with information and choices, where the user plays by selecting options or answering questions about a subject). However, the exhibitions often only offer the experience on an individual basis and do not allow interaction by multiple users at the same time. If museums in Mexico could expand interactive modules into multi-user experiences, then guided tours by instructors may no longer be necessary.

'El Trompo' as a Complex Sociotechnical System
The 'El Trompo' Interactive Museum has introduced interactive exhibitions using newly available technologies. As an organisation, the museum recognises that the interaction between its users and technology is crucial. The development of this educational institution has gone beyond its technological structure, with the aim of extending its systems to end users and expanding the scope of its core business. In this sense, the interaction between complex infrastructures and human behaviour becomes paramount. We therefore consider the museum and most of its substructures as complex socio-technical systems. We further acknowledge social behaviour, spontaneous collaboration, feedback and adaptation among users and technology as a complex system. The museum should be considered as a set of many interacting elements, where modelling user behaviour is challenging due to the dependencies, relationships and interactions between users, or between the user and their environment. As a result, we discuss the different impressions encountered during the museum case study, with the aim of exploring the current state of agent and multi-agent systems technology and its application to the complex socio-technical system domain.

Human-Agent Interaction
Firstly, in terms of the user and their environment, we considered that HCI examines the intention behind and the usage of computer technologies, centred on the interfaces between users and devices. Thus, the behavioural sciences, media studies, sensor networks and other fields of study could help us to observe how humans interact with computers and enable us to design technologies that let users interact with exhibitions in innovative ways. Interaction is the central subject of HCI, so we consider this to be crucial. Distinguishing levels of interaction is the first step in managing the interaction of a user and their technological environment and turning it into a socio-technical system. This can be a challenging task due to the subjective conceptual meaning of interaction.
The interaction levels proposed by Gayesky and Williams [8] offer the advantage of being easy to interpret and implement in a software system, but the inputs could be difficult to obtain, as they require a set of qualitative measurements that may be hard to observe through sensors, for example. As the level rises in the assessment, observations of user behaviour are often harder to obtain. For instance, at Level 0, we only need to sense whether the user is present in the exhibition but, at Level 5, we not only need to identify whether feedback, control, creativity, communication, adaptability and productivity are taking place, but also the quality of these attributes. Of course, this is a challenge worth facing in order to create a more human validation of the user's competences. Measuring interaction levels can provide simple communication among the elements involved. User preferences can be predicted to offer adequate information or services to complete their goals; this can increase the knowledge and productivity of the user, satisfying their needs. Understanding human interaction allows for the development of an interactive system, which should provide the ability to choose and act, anticipating the possible actions of the user and coding them into the program, allowing the user to continue interacting.

The Intelligent Interactive-Exhibit System
Secondly, software systems that generate inferences from knowledge can be used to develop interactive displays in a museum that predict user performance. Reasoning systems play a significant role in the implementation of intelligent, knowledge-based interactive-exhibit systems. Thus, machine learning methods unfold user behaviour over time, based on activity in the exhibit room, particularly with the interactive displays. A learning process that searches for generalised rules or functions produced by users, in line with observations of their actions, can be incorporated into the environment and used to manage predicted user behaviour. In our case, we built a FIS to represent the interaction levels based on the observation rules proposed by Gayesky and Williams [8].
Further, considering that a Hybrid Intelligent System is a knowledge-based inference system that can combine data mining and knowledge discovery methods to produce an inference system, we used it to build a FIS from real evaluator outcome data. We applied a neuro-fuzzy technique to produce an inference system in a state-of-the-art fashion. The advantage of a neuro-fuzzy system is that it incorporates the neural network training process widely used in machine learning, which is otherwise hard to interpret; using a fuzzy inference system makes it easy to see what has happened inside the box. A fuzzy inference system can be built by the designer, if necessary, or by using a machine learning process. For this study, we used a dataset from an in situ observer, with real museum visitors interacting with an actual exhibition display, to validate the hand-crafted FIS [8]. We then compared it to a FIS discovered from the dataset through a neuro-fuzzy method. One vulnerability of this approach is that we assume that other sensor systems provide the inputs as the fuzzy inference system expects them. A gap of this first approach is that we assume the input data are correct and that context-awareness systems are capable of providing them in the absence of human expert evaluations.

Knowledge-based Agent and Agent Architecture
Thirdly, incorporating the above into software agents, an intelligent or knowledge-based agent could perceive, through sensors, the motions and actions taken in an environment. In the museum case study, the strategy is to direct user activity towards achieving instructional goals. The intelligent agents may further learn from the user and use the discovered knowledge to meet their aims. Furthermore, multiple interacting intelligent agents can be used to address problems that are difficult or impossible for an individual agent to solve within a museum environment. In our case, we approached the museum as a complex socio-technical system using knowledge-based agents. With this strategy, we started with agent-based modelling of some components, then went through knowledge-based agent and agent-based architecture design, to finally build an agent-based computational system. In this approach, all the museum components are considered agents that interact in complex ways, where the user is another agent and part of the community.
At this point in our study, we consider the inputs of user-module interaction in the agent architecture (presence, interactivity, control, feedback, creativity, productivity, communication and adaptation) as a simple behaviour evaluation performed by an observer. These attributes could be more intricate than they first appear, and we could conduct a further in-depth study. For example, "Communication" could involve not only user-module interaction but also the talk between users; "Adaptation" could imply the negotiation of results in user collaboration processes to achieve common goals; and "Presence" could be determined by ubiquity in an infrastructure system and vicinity in a social network. In other words, we may evolve the exercise into a multi-agent system and consider further analysis techniques to approach complexity.

Agent-Oriented Software Engineering
Finally, as Agent-Oriented Software Engineering (AOSE) starts to support best practices in the development of complex Multi-Agent Systems (MAS), we can now focus on the use of agents, and the combination of agents, as the intermediate generalisation of socio-technical systems at a museum, in an agent-oriented analysis, design and programming fashion. In the architecture, the modeller represented the intercommunication observer as an intelligent (knowledge-based) agent that evaluated the interaction behaviour of the user in the exhibition module. This agent is agent-oriented software that infers the interaction level from environmental observations in order to send feedback to the user to support the experience. In our case study, the interaction-evaluator agent is a Java software agent capable of qualifying the interaction level in real time, and its prediction performance was tested and validated. Based on this practice, we believe that this type of knowledge-based engineering, hybridised with agent architectures, could form part of the AOSE.

Conclusions and Future Work
This paper has explored the evaluation of interaction in HCI using Gayesky and Williams' Interaction Levels Theory [8] to improve the user experience when interacting with museum exhibition modules. It has taken into account user behaviour based on presence, interactivity, control, feedback, creativity, productivity, communication and adaptation. In our experience, the Gayesky and Williams interaction levels [8] were simple to understand and use.
Firstly, we modelled the interaction levels using an Intelligent Hybrid System to provide a classifier that evaluated user performance in interactive modules in HCI. We applied machine-learning techniques to set up, or automatically discover, knowledge from a real observation dataset. The generated model was a FIS that described the interaction levels according to the Gayesky and Williams user-behaviour attributes. The Gayesky and Williams interaction levels were simple to model with an inference system, and we provided the obtained FIS configuration in all cases. We then used an empirical design from expert experience and an automated method based on data mining of on-site observations to generate the corresponding FIS. The prediction accuracy was then validated and compared against the evaluators to recommend the best approach. We provided a confusion analysis and a comparative summary to highlight the advantages and disadvantages of each approach. We recommend the Neuro-Fuzzy System as the best method.
To show the applicability of the proposed model, we built software agents that represented a high-level abstraction of a gallery, specifically an interactive exhibition module at the 'El Trompo' museum in Tijuana, Mexico. In the agent architecture, the FIS performed as a decision-making system that helped the Interaction Evaluator-Agent to identify the interaction level from sensors in the environment and to feed back to the Exhibition-Agent to improve the user experience. We discussed different impressions from approaching the museum case study, with the aim of showing the current state of agent and multi-agent system technology and its application to the complex socio-technical system domain. We found that using Agent-Based Models with Intelligent Hybrid Systems (as an agent decision-making system) to approach complex socio-technical systems was beneficial.
Finally, the benefits of the proposed model help HCI agent-based systems to evaluate user interaction at a high level of abstraction. Accurate feedback enhanced the user experience.
For future work, we must consider that the Gayesky and Williams user-interaction attributes (presence, interactivity, control, feedback, creativity, productivity, communication and adaptation) should be further developed to add a new level of description. Each attribute presents new challenges to characterise and implement. The proposed architecture allowed us to add new evaluation fuzzy inference systems in cascade on each input, to scale the model and consequently improve the Interaction Evaluator-Agent. To approach complexity, we will also evolve the model into a multi-agent system. From this perspective, the interaction between user-agents, coordinating and collaborating to achieve common goals, and the description of the relationships between them, is an essential improvement to enhance HCI. Social and network theory will contribute new epistemological approaches to user interaction modelling, as users' social nature involves them in complex social systems.

Supplementary Materials:
The following are available online at www.mdpi.com/2076-3417/8/3/446/s1. Table S1: Inputs configuration of the empirical FIS (s = standard deviation, m = average); Table S2: Outputs configuration of the empirical FIS (s = standard deviation, m = average); Table S3: Inference Fuzzy Rules of the Empirical FIS; Table S4: Inputs configuration of the Data Mined Type-1 FIS (s = standard deviation, m = average); Table S5: Outputs configuration of the Data Mined Type-1 FIS (s = standard deviation, m = average); Table S6: Rules configuration of the Data Mined Type-1 FIS; Table S7: Inputs configuration of the Neuro-Fuzzy FIS (s = standard deviation, m = average); Table S8: Outputs configuration of the Neuro-Fuzzy FIS (s = standard deviation, m = average); Table S9: Rules configuration of the Neuro-Fuzzy FIS.

Figure 1 .
Figure 1. Process and models of Gaia, a methodology for agent-oriented analysis and design, applied to the interactive museum exhibition module case study.

Figure 2 .
Figure 2. General computational modelling methodology applied to the interactive museum exhibition module case study.

Figure 5 .
Figure 5. The agent system of the interactive museum exhibition module.

Figure 7 .
Figure 7. Confusion Matrix of the Empirical Fuzzy Inference System (FIS) Approach.

Figure 8 .
Figure 8. Confusion Matrix of the Decision Tree Approach.

Figure 10 .
Figure 10. Confusion Matrix of the Neuro-Fuzzy System Approach.

Table 1 .
Inference Fuzzy Rules of the Empirical FIS.
4. If (Presence is Good) and (Interactivity is Good) and (Control is Good) and (FeedBack is Good) and (Creativity is Good) and (Productivity is Good) and (Communication is Good) and (Adaptation is Good) then (Level 0 is Low) (Level 1 is Low) (Level 2 is Low) (Level 3 is High) (Level 4 is Low) (Level 5 is Low).
5. If (Presence is Very Good) and (Interactivity is Very Good) and (Control is Very Good) and (FeedBack is Very Good) and (Creativity is Very Good) and (Productivity is Very Good) and (Communication is Very Good) and (Adaptation is Very Good) then (Level 0 is Low) (Level 1 is Low) (Level 2 is Low) (Level 3 is Low) (Level 4 is High) (Level 5 is Low).
6. If (Presence is Excellent) and (Interactivity is Excellent) and (Control is Excellent) and (FeedBack is Excellent) and (Creativity is Excellent) and (Productivity is Excellent) and (Communication is Excellent) and (Adaptation is Excellent) then (Level 0 is Low) (Level 1 is Low) (Level 2 is Low) (Level 3 is Low) (Level 4 is Low) (Level 5 is High).

Table 2 .
Accuracy Percent/Error Percent for Each Predicted Class for Each Method.