A Multi-Agent System for Data Fusion Techniques Applied to the Internet of Things Enabling Physical Rehabilitation Monitoring

Abstract: There are more than 800 million people in the world with chronic diseases. Many of these people do not have easy access to healthcare facilities for recovery. Telerehabilitation seeks to provide a solution to this problem. Researchers have approached the topic as a form of medical assistance, combining technologies such as the Internet of Things and virtual reality. The main objective of this work is to design a distributed platform to monitor the patient’s movements and status during rehabilitation exercises. Later, this information can be processed and analyzed remotely by the doctor assigned to the patient. In this way, the doctor can follow the patient’s progress, enhancing the improvement and recovery process. To achieve this, a case study has been carried out using a PANGEA-based multi-agent system that coordinates the different parts of the architecture using ubiquitous computing techniques. In addition, the system gives the patient real-time feedback. This feedback makes patients aware of their errors so that they can improve their performance in later executions. An evaluation was carried out with real patients, achieving promising results.


Introduction
There are more than 800 million people in the world with chronic diseases [1]. Many of them do not have easy access to healthcare facilities for their recovery. According to the authors of [2,3], more than 50% of these people could benefit from integrating rehabilitation services into their homes and everyday devices, e.g., smartphones, computers, and tablets. The reasons that account for this range from disabilities to travel-related issues. The concept of telemedicine tries to fill this gap by offering remote access to healthcare. Telemedicine [4] refers to the ability to perform medical diagnosis and treatment remotely. Thus, it uses Information and Communication Technologies (ICT) for its implementation. There are several works that fall under this concept [5][6][7].
The concept of telerehabilitation is found within the field of telemedicine. It refers to the use of ICTs to carry out a rehabilitation service remotely [8]. Many studies have been carried out around this concept using different technological paradigms. Some use the Internet of Things (IoT), which allows a greater amount of information to be collected about users. Such information enables healthcare professionals to monitor them remotely and offer assistance accordingly.

Background
The last decade has seen a slow increase in telemedicine applications [16]. Within this field, several works have been carried out in areas such as patient activity monitoring systems [17] or systems for the recovery of chronic disease patients [18]. In works that deal with rehabilitating patients remotely, different kinds of data are recorded to monitor the patients; in remote rehabilitation projects, these data are usually recorded following an IoT paradigm [19,20]. Among the works that make use of this paradigm, Gaddam et al. [21] analyzed user gait and mobility to give users certain strategies when performing outdoor exercise. The system makes use of Near Field Communication (NFC) technology to obtain the information that is later analyzed. On the other hand, Celesti et al. [22] carried out an analysis of different NoSQL databases to determine which is better suited to IoT-based telemedicine systems. In this case, they highlighted the performance of the MongoDB database in the Cloud for handling this information.
Among IoT systems, there are those dedicated to monitoring movements made by different parts of the body to check that movements are carried out properly. These works are focused on the development and use of the so-called exoskeletons [23,24]. Erdogan et al. [25] developed a system based on an exoskeleton for the rehabilitation of the ankle. To this end, they constructed an exoskeleton to treat the multiple phases of treatment and shorten the patient's recovery time. On the other hand, Wang et al. [26] developed an ankle exoskeleton to direct three rotation movements through a simple and reliable structure, managing to implement a control system capable of accurately positioning the ankle according to the exercise being performed.
From another point of view, there are solutions capable of monitoring whole-body movements of the users. These solutions make use of Deep Learning algorithms for the detection of postures through image processing techniques [27][28][29]. Hernandez et al. [30] developed a motion capture system capable of measuring the kinematics of spacesuits in an underwater test environment. However, these solutions are not 100% effective, which can result in erroneous data that can lead to errors in reviewing patient progress. On the other hand, the development and usage of motion capture suits to capture this information is an active area of research. Kim [31] used the Rokoko motion-capture suit to identify different strategies that could be used to capture the movements of dancers. These movements are later used in film production, editing, and special effects.
Another important point of these telerehabilitation systems is user feedback. This feedback makes users feel more integrated and motivated while showing them their actions. Accessibility, ease of use, and human-computer interaction are also very important aspects. To achieve this, most of the systems developed make use of VR technology. Some studies demonstrate the advantages of using this technology. Dif [32] showed that users performing motor function-related tasks improve their performance after training when they obtain visual feedback. The research group of the authors of this study, the ESALab research group, has previous experience with VR and rehabilitation systems. de la Iglesia et al. [33] proposed an immersive VR rehabilitation system based on the repetition of certain exercises with the help of an exoskeleton. Postolache et al. [34] included VR technology to allow a patient with motor difficulties to perform exercises in a very interactive and non-intrusive way, using a set of wearable devices, thus contributing to his or her motivational rehabilitation process.
In telerehabilitation dedicated projects, the communication and coordination between patients and healthcare members are crucial. Healthcare professionals have to plan exercises and, at the same time, check progress during the rehabilitation process. Meanwhile, users should be able to receive feedback from medical professionals and updates on the exercises. Some studies (e.g., [35]) seek to solve this problem by using multiagent systems that allow one to create a dynamic, scalable, and decentralized system. Calvaresi et al. [36] carried out an analysis of solutions for telerehabilitation and highlighted that multi-agent systems are useful. They allow contextualizing scenarios in a simple way in situations where planning and problem solving uncertainties are intertwined with distributed information source coordination and sophisticated concurrency controls. Calvaresi and Calbimonte [37] presented a model that represents sensors as autonomous agents capable of programming tasks and performing interactions and negotiations that comply with strict time constraints.
In this context, the proposed work seeks to build a novel IoT system capable of monitoring whole-body movements using the techniques described throughout this section. The objective is to allow the assigned doctor to remotely monitor the patient in real time. The user should be aware of their recovery process and the effectiveness of the exercises carried out. The errors performed should be presented to the user to allow the patient to correct them. This closer involvement with the patient in the rehabilitation process will influence their motivation and, therefore, their state of mind. In addition, the use of a multi-agent system will make it possible to coordinate the different assets, resulting in real-time and dynamic monitoring. In the following section, the system developed from these specifications is shown.

Proposed System and Architecture
This section presents the proposed system. First, we present the proposed architecture based on the PANGEA multi-agent architecture, describing each agent that is part of the system and its functionality. The main objective of this proposal is to create a system that is accessible to everyone. Moreover, since the development of the system is modular, it can be used through different devices, either in its entirety or only with some of them. In this case, we integrate these device connections with the system in an abstract way so that they can be replaced by other device models suited to the users' circumstances.
The main characteristic of the proposed architecture is its capacity to include new functions and to adapt to new environments in the future in a simple way. To solve the problem, the architecture must contain a series of well-defined characteristics to achieve correct operation. The objective of this research work is the creation of a platform that allows the rehabilitation of people who need to do physical exercise independently while being accompanied remotely by a specialist who indicates the exercises to be done. To do this, the user or patient of the application can use various devices, such as a motion capture suit or a heart rate monitor, that allow their progress to be monitored, as well as various terminal environments that guide them and show the results of their progress. The architecture must be able to incorporate new functionalities, so it must be adaptive, scalable, and distributed and must support the different communication protocols needed to integrate the different parts of the system. To this end, we propose the use of a multi-agent architecture that offers the functionalities described above. In a multi-agent-based architecture, each agent must have a well-defined functionality and task so that it can coordinate and interact with the other agents. For the construction of multi-agent systems, several solutions are currently available to speed up the process, ranging from simple libraries, such as the Python libraries SPADE and osBrain, to complete, complex systems, such as JADE and PANGEA. Figure 1 shows the proposed architecture using the PANGEA MAS with its virtual organizations and the main agents that are part of the designed architecture.
The architecture proposed for this research work uses PANGEA as a starting point. This is because PANGEA is based on the theory of organizations, which allows it to be applied to most systems, and it allows for the modelling of human interaction with the system. The main advantage of PANGEA over other multi-agent systems is its internal rule engine, which allows the computational load to be distributed among the different agents.
The designed architecture is divided into different parts. There are two well-defined parts: The upper part of the image shows the minimum agents required for the operation of the PANGEA multi-agent system. The lower part shows the virtual organizations of agents belonging to the case study.
Capture System Organization: This organization acts as a bridge between the system and the elements of the sensorization system to monitor the execution of the exercises. There are two main agents in this organization. The first agent is Node Position, which connects to the monitoring suits, in this specific case via Bluetooth Low Energy (BLE). It has the ability to detect changes in movement and publish them so that they can be used by the system. The second agent is the Pulsometer, which, similar to the agent described above, connects via BLE to the heart rate monitors to capture the heart rate at each instant while the system is in use. This agent allows for recommendations to increase or decrease the rate of exercise performance.
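As a rough illustration of the publishing behavior described above, the sketch below models a Node Position agent that filters out unchanged samples before forwarding them to the rest of the system. The class, the change threshold, and the queue-based message bus are illustrative assumptions; the real system uses PANGEA agents communicating over BLE.

```python
import queue

class NodePositionAgent:
    """Sketch of the Node Position agent: receives orientation samples from
    the suit (a plain callback here stands in for the BLE link) and publishes
    only meaningful changes to the rest of the system."""

    def __init__(self, bus, threshold=0.01):
        self.bus = bus            # shared queue acting as the message bus
        self.threshold = threshold
        self._last = None

    def on_sample(self, sample):
        # Publish only if the orientation changed noticeably since last time.
        if self._last is None or max(
            abs(a - b) for a, b in zip(sample, self._last)
        ) > self.threshold:
            self.bus.put(("node_position", sample))
            self._last = sample

bus = queue.Queue()
agent = NodePositionAgent(bus)
for s in [(1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0), (0.99, 0.1, 0.0, 0.0)]:
    agent.on_sample(s)
# Only the first and third samples are published; the duplicate is filtered.
```

Filtering at the source keeps the bus traffic proportional to actual movement, which matters when five IMU nodes report at 125 Hz.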
Monitoring Organization: This organization aims to carry out the tasks of history storage, progress analysis, patient profile control, and report generation. This organization is detailed in Table 1.

Historical
This agent is responsible for generating and obtaining historical data on the execution of exercises so that they can be reviewed and evaluated by an expert in the future.

Evolution
This agent is in charge of calculating the evolution from the percentage of completion of the exercises.

Medical Record
The function of this agent is to simulate integration with a health system where patients' medical conditions can be obtained and reported.

Profile Data
This agent is responsible for making the user's profile available to the entire system with the user's parameters. This agent is also responsible for updating the profile values as they are updated with progress.
Report Generator
This agent is in charge of generating reports either for each exercise or for the rehabilitation progress.
Simulation Organization: This organization is in charge of the classification and validation of the poses made by the user as well as the execution of actions. It has an important role within the whole system, since it is in charge of checking that the exercises are carried out and that they are done correctly. To this end, its main agent is described below: • Pose Estimation: This agent can recreate the human pose from the RAW data coming from the Node Position agent. It can filter out and discard invalid poses caused by a temporary error in a sensor or by the system being started up while the user is putting on the suit.
Application Interface Organization: This organization is in charge of adapting the information generated by the system to the application layer. It is used as an interface so that the applications can interact directly with the system, converting the raw information of the system into information that can be easily interpreted by humans. In this case, the information displayed acts as an interface for the VR applications, the mobile applications, and the expert monitoring application.
PANGEA Multi-Agent System Organization: This organization is composed of the minimum agents necessary for the operation of PANGEA. It aims to manage the virtual organizations and coordinate the agents within each of them. The agents of this organization are the following:
• Database Agent: This is the only agent with database access permissions. It stores the information present within the organization, such as records, histories, and tasks performed by each of the system agents.
• Information Agent: This agent keeps track of the services offered by all of the agents, which can be requested by other agents of the system. When a new agent wants to join the system, it must indicate its services to the Information Agent so that the Information Agent can announce which services are available to the other agents.
• Normative Agent: This agent is responsible for imposing the rules and ensuring that they are complied with in the communication established between the agents.
• Service Agent: This agent arranges and distributes the system's functionality through web services. Its role can be considered Gateway-style; it allows external services to communicate with the virtual organization of agents. This allows agents to be easily integrated and built in any programming language.
• Manager Agent: This agent is important within the whole system, as it is in charge of periodically checking the status of the system. It detects load in parts of the system, overloaded functionalities, and possible failures in agents from different organizations.
• Organization Agent: This agent is responsible for verifying all operations of the virtual organizations, checking security and load balancing, and offering encryption for communication between agents.
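The service-registration flow handled by the Information Agent can be sketched as follows. This is a minimal Python illustration under our own assumptions, not PANGEA's actual implementation, and the agent and service names are hypothetical.

```python
class InformationAgent:
    """Sketch of an Information Agent: a newly joined agent registers the
    services it offers, and other agents query the registry to discover
    which agents provide a given service."""

    def __init__(self):
        self._services = {}  # agent name -> list of offered services

    def register(self, agent_name, services):
        """Called by a new agent when it joins the system."""
        self._services[agent_name] = list(services)

    def lookup(self, service):
        """Return the names of all agents offering the requested service."""
        return [name for name, offered in self._services.items()
                if service in offered]

# Example registration mirroring the organizations described above.
info = InformationAgent()
info.register("PoseEstimation", ["estimate_pose", "filter_invalid_poses"])
info.register("Pulsometer", ["heart_rate"])
```

A registry of this shape is what lets the Manager and Service agents route requests without hard-coding which agent implements which capability.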
Within the system, it is worth noting that two databases with well-defined functionalities are used. The PANGEA database is only used by the PANGEA virtual organization and is intended to store the information on the organizations and the services available to each of their agents. The APP Database Storage contains information specific to the use case. In this particular case, patient profiles, information on the experts responsible for the rehabilitation, exercises to be performed, exercise and monitoring histories, alerts, and incidents are stored.
The proposed architecture indicates the existence of external agents. These represent the two available roles within the application that interact directly with it. The first user of the application is the patient, who carries out the rehabilitation tasks. The second user is the expert or doctor, who is in charge of setting up the user profile, proposing the exercises, and checking the progress of the recovery, which must be validated manually by an expert.
The modules and agents of the architecture are specialized in a specific objective or task. The advantage of this architecture is that it allows the replacement of either one agent or a set of agents with similar characteristics without affecting the rest of the system. An example of this would be the use of a different motion capture suit: we would only have to replace the Node Position agent with one able to communicate with the suit to be incorporated.
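One way to realize this swap-friendly design is to hide the suit behind a small driver interface, so that only the driver changes when the hardware does. The sketch below is an assumption about how this could look; the paper does not specify such an interface, and `MotionSuitDriver`, `read_orientations`, and the node names are illustrative.

```python
from abc import ABC, abstractmethod

class MotionSuitDriver(ABC):
    """Abstract capture-suit driver. The Node Position agent would depend
    only on this interface, so supporting a different suit means writing a
    new driver rather than modifying the rest of the system."""

    @abstractmethod
    def connect(self) -> None:
        """Open the link to the suit (e.g., over Bluetooth LE)."""

    @abstractmethod
    def read_orientations(self) -> dict:
        """Return the latest orientation quaternion (w, x, y, z) per node."""

class EnfluxDriver(MotionSuitDriver):
    """Stand-in driver; a real one would talk to the suit over BLE."""

    def connect(self) -> None:
        self.connected = True  # placeholder for opening the BLE connection

    def read_orientations(self) -> dict:
        # Placeholder sample: identity quaternion for the chest node.
        return {"chest": (1.0, 0.0, 0.0, 0.0)}
```

Swapping suits then amounts to registering a different `MotionSuitDriver` subclass with the same agent.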

Materials and Methods
In this section, we describe the devices and the methods used to collect and unify the data. These data represent the user's movements in the platform. In addition, the processing carried out to classify the movements made by the patient is described. Finally, the tools and devices used to visualize the patient's evolution during the rehabilitation process are outlined.

Pose Detection Devices
The pose detection system aggregated to the architecture is used to collect the movements made by the user. With this information, the system can estimate the user's pose. The key to this process is the use of clothing with IMU sensors to track the movement of the user's body. In this work, we used the Enflux Suit [38], composed of five motion sensors with an accuracy of ±2 degrees, working in a three-dimensional coordinate system. It uses Bluetooth LE 4.0 technology, connecting to the central module to receive and send data. It has an internal refresh rate of 125 Hz and uses a 32 MHz Arm M4 microprocessor. Figure 2 shows the location of the sensors in the suit and their characteristics. The use of this suit is ideal because of its low cost and the IMU sensors used. These sensors are electronic devices that provide measurement information about the speed, orientation, and gravitational forces of a device using a combination of accelerometers, gyroscopes, and magnetometers. Each sensor contributes a distinct measurement: the gyroscope measures the turns made, the accelerometer measures the linear acceleration, and the magnetometer obtains information about the direction of the Earth's magnetic field. This device connects through Bluetooth to the user's devices, which transmit the information to the server.
On the other hand, a wearable device is used, in this case a Garmin Forerunner 245 Music [39] smartwatch. This device is connected to the device used by the patient through Bluetooth, allowing for the collection of information regarding the user's heart rate and blood oxygen level to monitor the patient's condition and be able to track them. The data collected from the above-mentioned devices are sent to the server for processing, so that the capture system agent collects this information and sends it to the simulation node for processing.

Exercise Estimation
After the data collection process, the data are received by the simulation agent. This agent collects and processes the information from the IMU sensors and the smartwatch, as shown in Figure 3. The information generated by the IMU sensors is collected as quaternions [40]. Quaternions are an extension of the real numbers, similar to the complex numbers, generated by adding the imaginary units $i$, $j$, and $k$ to the real numbers such that

$$i^2 = j^2 = k^2 = ijk = -1.$$

The set of all quaternions can be expressed as follows:

$$\mathbb{H} = \{ q_0 + q_1 i + q_2 j + q_3 k \mid q_0, q_1, q_2, q_3 \in \mathbb{R} \},$$

where, writing $q = (q_0, \vec{q})$, $q_0$ is the real part and $\vec{q} = (q_1 i + q_2 j + q_3 k)^T$ is the imaginary part of the quaternion. In addition, Hamilton's product between two quaternions fulfills the following properties:

$$ij = k, \quad jk = i, \quad ki = j, \quad ji = -k, \quad kj = -i, \quad ik = -j.$$

The use of quaternions to collect the information produced by the IMU sensors allows us to process this information in the Unity3D [41] environment using 3D points in space that have rotations.
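For reference, Hamilton's product can be implemented directly from its definition. The sketch below checks the defining identities and the non-commutativity of the product; representing quaternions as `(w, x, y, z)` tuples is an illustrative convention, not one taken from the paper.

```python
def hamilton(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,  # real part
        aw*bx + ax*bw + ay*bz - az*by,  # i component
        aw*by - ax*bz + ay*bw + az*bx,  # j component
        aw*bz + ax*by - ay*bx + az*bw,  # k component
    )

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
k = (0.0, 0.0, 0.0, 1.0)

# Defining identities: i^2 = (ij)k = -1
assert hamilton(i, i) == (-1.0, 0.0, 0.0, 0.0)
assert hamilton(hamilton(i, j), k) == (-1.0, 0.0, 0.0, 0.0)
# Non-commutativity: ij = k but ji = -k
assert hamilton(i, j) == k
assert hamilton(j, i) == (0.0, 0.0, 0.0, -1.0)
```

The same product is what composes successive sensor rotations before they are handed to the Unity3D scene.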
After this transformation, the information is processed to determine whether the patient is doing the exercise properly as well as his or her condition.
To determine whether the patient has performed the exercise correctly, a skilled user (who may be the doctor or someone with knowledge of the sport or motor system) must enter the exercise into the system. To do this, the process to be carried out is similar to Figure 3. The expert user must indicate to the system the introduction of a new exercise, performing the relevant movements relating to the exercise. The system collects the information generated, transforms it, and stores it for later use.
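A minimal sketch of how such a reference exercise might be captured and serialized is shown below. The frame format, the node names, and the function itself are illustrative assumptions, since the paper does not specify the storage schema.

```python
import json

def record_reference_exercise(name, frames):
    """Package an expert-performed exercise as timestamped per-node
    quaternion frames so it can be stored and later compared against
    patient attempts. (Hypothetical schema for illustration only.)"""
    return {
        "exercise": name,
        "frames": [
            {"t": t, "nodes": {node: list(q) for node, q in nodes.items()}}
            for t, nodes in frames
        ],
    }

# Two frames of a hypothetical arm-raise exercise for a single sensor node,
# each quaternion given as (w, x, y, z).
frames = [
    (0.00, {"left_arm": (1.0, 0.0, 0.0, 0.0)}),
    (0.25, {"left_arm": (0.97, 0.26, 0.0, 0.0)}),
]
reference = record_reference_exercise("arm_raise", frames)
serialized = json.dumps(reference)  # e.g., handed to the storage layer
```

Keeping the reference as raw quaternion frames means the later comparison can use rotation angles rather than absolute positions, which is what makes it body-size independent.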
The processing of the data allows for the generation of information that is used for the following things: • Visualization of movements: The data received from the Enflux suit are transformed into quaternions for storage. The position of each of the nodes, which are equivalent to the IMU sensors, can be represented within the Unity3D environment, which allows us to carry out a representation of the patient's movements within the system. • Information on errors made: Once the quaternions have been obtained, they are used to detect movements that have been made incorrectly. This is discussed below. • Data for patient monitoring: The data generated by the Garmin watch and the Enflux Suit, once transformed, are processed to obtain statistics and information that can be displayed to know the status of the user and the progress of their rehabilitation.
As for error detection, the angular distance between quaternions is used: the angle between two rotations $a$ and $b$ is taken as the angle of the relative rotation that carries $a$ onto $b$. The equation for obtaining this angle is as follows:

$$\theta = 2 \arccos(x),$$

where $x$ is the real part of $b \cdot a^{-1}$. Bearing this in mind, it is possible to identify correctly executed movements by knowing the angles formed between the various IMU sensors at every instant of time, so that, knowing the movements and values of the exercise performed by the expert as well as by the patient, this information can be processed to determine the errors made. The use of angles makes it possible to generalize this processing independently of the size and body of the person, since the angle formed during the execution of a movement is independent of the positions of these points in space.
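The angular-distance check can be sketched as follows. Two details are our own assumptions rather than statements from the paper: the absolute value of the real part is taken to handle the quaternion double cover ($q$ and $-q$ encode the same rotation), and the 15-degree tolerance used to flag an error is purely illustrative.

```python
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def hamilton(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def angular_distance(a, b):
    """Angle in radians between two unit quaternions a and b."""
    # For unit quaternions the inverse equals the conjugate.
    rel = hamilton(b, quat_conjugate(a))
    # |real part| handles the double cover; clamping guards against
    # floating-point drift just outside [-1, 1].
    x = max(-1.0, min(1.0, abs(rel[0])))
    return 2.0 * math.acos(x)

# A 90-degree rotation about the z axis versus the identity orientation:
identity = (1.0, 0.0, 0.0, 0.0)
about_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
deviation = angular_distance(identity, about_z)  # ~pi/2 radians

# A hypothetical per-joint tolerance for flagging a movement as an error:
TOLERANCE = math.radians(15)
is_error = deviation > TOLERANCE
```

Because the comparison is an angle between orientations, the same threshold works regardless of the patient's limb lengths, which is exactly the body-size independence noted above.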
When this information processing is finished, the system stores the information that has been generated. In addition, the system returns feedback to the patient who performed the exercise, presenting the results obtained from the analysis of the movements so that the patient can observe the errors made, become aware of them, and try to correct them. Finally, the system can represent the movements visually so that the user can compare them with the example exercises, know whether they are doing the exercise well or badly, and try to adjust to the appropriate movements in real time.

Experimental Results and Contributions
This section shows the final system developed from the proposed architecture. In addition, to validate the developed system, the test that it was subjected to and the results obtained from it are shown.

Monitoring and Information Display System
In this section, the final result of the developed system is presented. For this purpose, each of the subsystems was analyzed individually to verify its implementation. For each implementation, the methodology followed and the final result achieved are presented.

Virtual Reality System
The virtual reality system, as well as the 3D model representation system used in the proposed system, is based on the Unity3D development environment. The integration of VR technology into the system is aimed at achieving the following objectives: • offer more advanced rehabilitation methods as an alternative to traditional therapy, thus maximizing the effect of the rehabilitation measures; • allow patients to perform actions that they are not able to do in real life due to their disabilities; • provide individualized treatment plans developed based on careful assessment and following the treatment goals of each case; • increase patient commitment and motivation with virtual environments where the tasks to be performed are simulated; • provide immediate and illustrative feedback; • improve the results by measuring and analyzing different data related to the user's actions; and • provide a controlled environment through a dynamic environment that can be managed according to certain conditions as well as actions taken by the user.
However, when designing a VR environment dedicated to rehabilitation, there are several aspects [42] to be considered before the tool is developed to make it suitable for users; in particular, the similarity of the scenes in these environments to those in video games stands out. The aspects to be taken into account are the following: • Award: According to the body of research on the neuroscience of reward and motivation, the limbic system, in particular the Nucleus Accumbens (NA), is critical for learning new behaviors, especially those associated with reward-seeking, pleasure, and addiction. Activity in the NA has been shown to scale linearly with the likelihood of receiving a reward, and differences in NA activity correlate with individual differences in sensation-seeking. • Difficulty: It is important to consider difficulty as an interaction of individual and environmental limitations to understand how difficulties might arise directly from an injury/illness or from the changes that accompany it. Skill transfer is particularly important for rehabilitation, where skills acquired in play are expected to be transferred to activities of daily living. • Feedback: Feedback can be used to achieve better long-term retention of the developed skill. However, positive feedback must be given more often to efficiently influence the developed skill. • Interaction: The user's interaction with the system increases the user's connection with the virtual environment. The exploration of new stimuli and new environments is strongly associated with physiological rewards. • Clear objectives and mechanics: Goal-directed tasks lead to a higher probability of acceptance of assistive devices. A lack of objectives and instructions can have a significant negative impact on patient motivation. Therefore, therapeutic goals that are not clear to patients can compromise the recovery process.
Patients with high motivation in a rehabilitation setting reported communicating more actively with their therapist, and being clear and consistent in the instructions given to the patient created a sense of comfort in knowing that they were progressing towards their therapy goals. Unclear instructions caused patients to become confused and frustrated and eventually lowered their motivation.
This VR system is integrated through the Oculus Quest device [43] to allow interaction with different VR scenarios where the user performs different exercises. Figure 4 shows the main components related to the representation of the VR scenario together with the information exchange. The system implemented allows the patient to receive instructions for each of the exercises, both visually and acoustically, and to observe which movements must be performed to achieve the objective of the exercise. During the exercise, the user sees two avatars, as shown in Figure 5. The avatar on the left represents the user, while the avatar on the right represents the expert user who entered the exercise into the system. During the exercise, the patient is able to observe both their own movements and the movements demonstrated by the expert user. In this way, the patient knows the movements to be made and is able to check whether these movements are being made correctly. Figure 5 shows a counter with the number of repetitions made as well as the repetitions that still need to be made. After finishing the exercise, the user is able to see the mistakes made, per repetition and for each part of the body, as can be seen in Figure 6. The data generated during the execution of the exercise provide information, such as the level of performance of an exercise, and allow the patient to be aware of the progress in the rehabilitation process. All this is controlled by the doctor, who is able to consult this information remotely so as to know the patient's progress, adapting the patient's rehabilitation process according to this progress and ensuring that it is carried out correctly.

Mobile/Web System
This system is similar to the one described in the previous section; however, the patient's VR immersion is sacrificed in exchange for greater accessibility through a web tool or mobile application, making the system accessible for everyday use from any device with an operating system, a visual interface, and Internet access, such as mobile phones and personal computers. These platforms can achieve the telerehabilitation that this work seeks to provide. The use of Unity3D is key for the objectives mentioned in the previous section. Furthermore, this development environment allows for the export of the 3D scenario to different platforms, in this case, web and mobile. The mobile development is more oriented towards patient use, allowing patients to carry out their rehabilitation exercises and obtain information on their progress during the rehabilitation process. One of the objectives of the mobile application is to make it intuitive and understandable for the user so that it is easy to use. Figure 7a shows the access screen of the application. Figure 7b shows a screenshot of a list of exercises to be performed by the patient.
The web application is more oriented towards health professionals, who can monitor the progress of a patient in their rehabilitation process in a more precise way and with more information, as represented in Figure 8. By means of this platform and this information, the doctor assigned to the patient can modify the exercises to be performed according to this progress. In addition, on this platform, patients are allowed to carry out consultations with their doctor through a chat so as to increase communication between patient and doctor. The aim of monitoring the patient through the platform is to provide the health specialist with important information so as to go deeper into the rehabilitation process and improve it by adapting it to the progress of the patient. To achieve this, different visual representations of the information are shown, as can be seen in Figure 9. These representations show information to the user so that they know in detail what they need to check more precisely. The doctor assigned to the patient can replay the exercises in the embedded virtual reality player, as shown in Figure 9. In addition, the doctor is allowed to send the patient a note about the exercise performed.

Real Environment Validation
To validate the system in a real environment, a test was carried out with five rehabilitation patients who used the system over the course of a month. The study was conducted in collaboration with a clinic specialized in physical rehabilitation. All the patients agreed voluntarily to take part and were duly informed. The involvement of human subjects in the tests complied with the Declaration of Helsinki of 1964, Ethical Principles for Medical Research Involving Human Subjects.
The volunteers (two men between 23 and 52 years old and three women between 26 and 47 years old) were undergoing rehabilitation due to muscular pain when performing certain daily movements. The experiment lasted one month, with an initial session and four control sessions, one per week.
The exercises were designed to eliminate certain causes of pain by preventing specific muscles from atrophying. They were included in the system by a sports expert under the supervision of a specialized doctor, who confirmed that the exercises were appropriate and would help the patients.
Throughout the month, users performed the different exercises established by the doctor to improve their condition. In each session, the different parameters collected by the system were analyzed, such as the angle of the movements, the pulse (beats per minute), and the blood oxygen level. Figure 10 shows a patient wearing the VR equipment.
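The paper does not detail the algorithm used to flag posture errors from these parameters. As a minimal illustrative sketch (not the authors' actual implementation), an error can be flagged whenever a measured joint angle deviates from the prescribed reference trajectory by more than a tolerance, with a simple bounds check on the vital signs; the tolerance value and function names here are assumptions:

```python
# Illustrative sketch only: flag a posture error when the measured joint
# angle strays from the exercise's reference angle by more than a
# tolerance, and run a rough safety check on the wearable's vital signs.

ANGLE_TOLERANCE_DEG = 15.0  # assumed per-exercise tolerance


def posture_errors(reference_angles, measured_angles, tol=ANGLE_TOLERANCE_DEG):
    """Count samples where the joint angle deviates beyond the tolerance."""
    return sum(
        1 for ref, meas in zip(reference_angles, measured_angles)
        if abs(ref - meas) > tol
    )


def vitals_alert(pulse_bpm, spo2_pct):
    """Flag readings outside assumed safe ranges for pulse and SpO2."""
    return pulse_bpm > 150 or spo2_pct < 90


# Example: one repetition of an elbow-flexion exercise (made-up samples)
ref = [0, 30, 60, 90, 60, 30, 0]
meas = [2, 28, 40, 88, 62, 55, 1]  # two samples drift past the tolerance
print(posture_errors(ref, meas))                  # -> 2
print(vitals_alert(pulse_bpm=110, spo2_pct=97))   # -> False
```

A real system would compare full joint trajectories (e.g., from the IMU suit's orientation data) rather than scalar angle samples, but the thresholding idea is the same.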
In this case, the VR system was used to carry out the experiments so that its operation and the patients' level of satisfaction could be monitored. Figure 11 shows a patient performing an exercise with the VR system. To verify that the system correctly detected the errors made by the patient, a face-to-face check was carried out in the first session. During this check, each patient was asked to perform each of the established exercises with the system components on, while being recorded from different angles. The sports expert was then asked to visually identify the number of errors made by the patient. Once the exercise was completed, the number of errors detected by the system was collected and compared against the expert's count. The results obtained are shown in Table 2, which indicates that the system was able to detect more errors in real time than the expert. To confirm that these additional errors were real, the expert reviewed the images captured during the exercise, verifying that the system's count was correct and not due to a malfunction. In this way, the effectiveness of the system in detecting bad posture was confirmed.
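The first-session check above amounts to comparing two error counts per exercise and sending any discrepancies back to the expert for video review. A minimal sketch of that bookkeeping (all exercise names and counts below are made up for illustration, not taken from Table 2):

```python
# Illustrative sketch: compare the system's per-exercise error counts
# against the expert's visual counts, and list the exercises where the
# system found extra errors for the expert to re-check on video.

system_errors = {"squat": 5, "arm_raise": 3, "lunge": 4}  # made-up counts
expert_errors = {"squat": 4, "arm_raise": 3, "lunge": 2}  # made-up counts


def needs_video_review(system, expert):
    """Exercises where the system detected more errors than the expert."""
    return [ex for ex in system if system[ex] > expert.get(ex, 0)]


print(needs_video_review(system_errors, expert_errors))  # -> ['squat', 'lunge']
```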
Subsequently, during the four remaining sessions, information was gathered to verify that the patients were able to improve their execution of the exercises by reducing the number of errors made. These sessions were carried out on Days 7, 14, 21, and 28 of the treatment. The information collected corresponds to the average number of errors made by users during these control sessions and is shown in Figure 12. The figure shows that the patients reduced the errors committed during the exercises following a roughly logarithmic decay: the rapid learning and assimilation process during the first 14 days greatly reduced the errors committed, and after Day 14 users maintained low average error values, meaning that they committed few errors in the exercises, supporting their rehabilitation. The same evolution is observed when the errors of all patients are broken down by exercise for each of the monitored body parts, as shown in Figure 13. These plots show, for each control session, the errors made by the patients in each exercise for the body parts monitored.
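The aggregation behind Figure 12 is a per-session average over the five patients. A small sketch of that computation, using invented error counts purely to illustrate the decreasing trend described in the text (the real values are only shown graphically in the paper):

```python
# Illustrative aggregation: average errors per control session across the
# five patients. The error counts below are made up to mimic the steep
# drop over the first 14 days described in the text.

session_days = [7, 14, 21, 28]
# errors[day] -> per-patient error counts in that control session
errors = {
    7:  [9, 11, 8, 10, 12],
    14: [4, 5, 3, 4, 6],
    21: [2, 3, 2, 2, 3],
    28: [2, 2, 1, 2, 2],
}

averages = {day: sum(vals) / len(vals) for day, vals in errors.items()}
for day in session_days:
    print(f"day {day:2d}: avg errors = {averages[day]:.1f}")
```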
The same trend can be observed in these plots: the errors made during the first 14 days decrease greatly and remain at low levels over the last 14 days, and this reduction occurs in a general way across the patient's whole body.

Conclusions
This article presents the development and implementation of a rehabilitation training system based on a suit with IMU sensors, integrated with a platform that can be used via web, mobile, and VR. The system is built on the PANGEA multi-agent architecture, which provides a decentralized design with dynamic configuration, allowing different kinds of information to be handled and the patient to be monitored so that the evolution of the rehabilitation process can be observed. The system processes the data collected while monitoring the user, evaluates the movements made, and informs patients of the errors committed so that they can improve their performance, thereby improving the rehabilitation process. In addition, the system allows the patient's evolution to be monitored and the exercises to be adapted to what the patient needs in order to advance. The availability of different forms of access (web, mobile, and VR) allows the system to be used by a wide variety of people, improving its accessibility by adapting to each person's context. The use of VR technology engages the patient more interactively during exercise, positively influencing their motivation and progress.
If we compare the developed system with other similar rehabilitation systems, the contributions of this project become clear. As far as data collection is concerned, the developed system uses a suit with IMU sensors covering the entire body, monitoring the movements of both the lower and upper parts of the patient's body, except for the head. In addition, the incorporation of the portable device makes it possible to monitor the patient's vital signs, which indicates whether the exercise is being performed correctly and whether the effort required of the patient is high. With this, each of the patient's movements is captured with precision, resulting in a more accurate follow-up of the patient. This information is also processed to make users aware of their evolution and of the effectiveness of their movements, which can improve the performance of the exercises planned for recovery. The latter has a positive influence on the recovery process, as shown by the results obtained during the use case.
Furthermore, through the use of the multi-agent system together with the Unity3D engine, exercises can be added to the system remotely, accommodating different rehabilitation methods. This allows the system to adapt the activities to the patient's progress and performance.
In future work, we intend to improve the system by using artificial intelligence techniques to automatically and efficiently identify the errors made by the user according to the exercises performed. In addition, a study will be carried out with a greater number of patients with a variety of chronic illnesses, making it possible to compare their evolution according to the diagnosed pathology.

Conflicts of Interest:
The authors declare no conflict of interest.