An Introduction to Patterns for the Internet of Robotic Things in the Ambient Assisted Living Scenario

The Internet of Things paradigm envisions the interoperation among objects, people, and their surrounding environment. In the last decade, the spread of IoT-based solutions has been supported in various domains and scenarios by academia, industry, and standards-setting organizations. The wide variety of applications and the need for a higher level of autonomy and interaction with the environment have recently led to the rise of the Internet of Robotic Things (IoRT), where smart objects become autonomous robotic systems. As mentioned in the recent literature, many of the proposed solutions in the IoT field have to tackle similar challenges regarding the management of resources, interoperation among objects, and interaction with users and the environment. Given that, the concept of the IoT pattern has recently been introduced. In software engineering, a pattern is defined as a general solution that can be applied to a class of common problems. It is a template suggesting a solution for the same problem occurring in different contexts. Similarly, an IoT pattern provides a guide to design an IoT solution with the difference that the software is not the only element involved. Starting from this idea, we propose the novel concept of the IoRT pattern. To the authors' knowledge, this is the first attempt at pattern authoring in the Internet of Robotic Things context. We focus on pattern identification by abstracting examples also in the Ambient Assisted Living (AAL) scenario. A case study providing an implementation of the proposed patterns in the AAL context is also presented and discussed.


Introduction
Over the last decade, advances in technology and manufacturing processes have enabled the miniaturization of components to the point that, as noted by Biderman in an interview in [1], "Computers are becoming so small that they are vanishing into things". This miniaturization, having an impact on the availability of communication-enabled devices, has allowed the spread of the Internet of Things (IoT) paradigm. The term Internet of Things was introduced in 1999 by Kevin Ashton [2] while proposing the use of RFID technology in supply chain management at Procter & Gamble. From that moment on, an increasingly rapid spread of IoT solutions has occurred. At first, the effort was focused on tracking objects, then on making them interconnected, and, further, on guaranteeing their interoperability. In fact, one of the main problems encountered in the application of the IoT paradigm concerns interoperability. While in an initial phase the lack of standards designed for the IoT caused the proliferation of a myriad of applications based on proprietary technologies, incompatible with one another, now the presence of so many different standards creates the opposite problem: deciding which standard should be supported in which context.
The development of a great variety of strategies in the plethora of IoT scenarios and domains has demonstrated the existence of common/recurring problems that can be managed using very similar solutions. Sometimes, a solution is so reliable and well performing that it is widely adopted. A solution that has proven to be a best practice to tackle a specific problem is usually defined as a pattern. The adoption of patterns is widespread in several fields ranging from architecture and engineering, for the design and construction of the means of transport, buildings, infrastructures, and urban nuclei [3], to narration and music composition [4,5], to behavioral and social skills [6], to information exchange [7]. A relatively common field of application regards software engineering, where design patterns are widely adopted to develop more reusable and reliable software solutions [8].
Depending on the area of application, a pattern may be composed of different parts. For example, in the case of IoT patterns [9], both the software and the hardware involved in the logical and physical interaction among objects or between objects and the surrounding environment should be described. It is important to note that a pattern does not represent a final implementation, but is a template that suggests a solution for the same problem occurring in different contexts. Similarly, an IoT pattern provides a guide to design an IoT system: it helps the designer to manage constraints regarding logical aspects such as the protocol to exchange information or the role and relations between objects, but also the position of objects, the number of features per object, etc.
When moving from the Internet of Things to the Internet of Robotic Things (IoRT) domain [10][11][12], the interaction and intervention of the "things" with the environment represent a relevant part of their activity.
The IoRT is built on robotic IoT-based systems, merging the digital and the physical worlds. The concept of robots seen as components ("things") of the Internet of Things was introduced for the first time in [13]. The cloud-based framework behind these systems allows them to share and collect information among both humans and machines [14]. Although introduced in the context of Industry 4.0, the IoRT has recently also been employed in many other scenarios, such as museums, entertainment, and Ambient Assisted Living (AAL) [11,15], where it is crucial to consider both the presence and the intervention of humans.
According to the definitions given in [16][17][18], AAL refers to the ecosystem of assistive products, solutions, and services, aimed at realizing intelligent environments in order to support elderly and/or disabled people in their daily activities. The ultimate goal is to enhance both the health and the quality of life of the elderly by providing a safe and protected living environment.
Especially when dealing with the assisted living scenario, a set of further constraints and considerations must be taken into account while designing IoRT solutions. In fact, although IoT solutions have already been proposed in the literature for the AAL scenario [16,18,19], the presence of humans and the interaction with them become much more relevant in the IoRT case. This leads to the need for identifying and defining patterns that can support designers. Starting from the definition of IoT patterns, we propose the introduction of IoRT patterns, which is the main contribution of this paper. In fact, the concept of patterns in the Internet of Robotic Things context has never been adopted in the related literature, although we believe they can considerably help the designers in the development and implementation of IoRT solutions. A case study from the AAL scenario will also be presented, where the authors have been involved in the framework of the SUMMIT project for the development of multi-platform IoT applications.
The remainder of this paper is structured as follows. Section 2 reports a summary of the concept of patterns in the IoT context, concluding with the pattern description format adopted in this paper. Sections 3-5 present the three patterns proposed, respectively obstacle avoidance, indoor localization and inertial monitoring. These sections include a related work subsection, reporting the works from which the patterns were identified and presented in the subsequent pattern definition subsection. A case study taken from the AAL scenario is presented in Section 6, including a real-world implementation of the three proposed patterns. In Section 7, a critical discussion on the limits of the design pattern approach and the challenges related to their use is reported, whereas final conclusions are drawn in the last section.

Related Work
As mentioned in the Introduction, patterns have been adopted for a long time in a wide range of fields. Recently, the fast spread of IoT-based applications has led to the definition of IoT patterns [9,[20][21][22], with a special emphasis on the industrial scenario as well [23,24]. In all these works, the authors identified existing products and technologies in the IoT field to abstract patterns and, in some cases, divided template solutions and best practices into two categories, namely patterns and pattern candidates (not fully formulated templates that can be promoted to patterns in the future). Furthermore, in robotics, solutions to common problems have often been addressed through modularity [25], which can be somehow seen as the hardware counterpart of the software pattern. However, in the context of system-level hardware design, the problem of identifying hardware design patterns was explicitly addressed in [26], where the authors aimed at reinterpreting the original idea of software design patterns for the hardware case. A similar approach was also adopted in [27] for cloud computing and in [28] for so-called smart contracts in financial services.
In their extensive work [9,29,30], the authors went through major works regarding the identification, definition, and application of patterns [31][32][33][34] to select a set of elements, some of them required and others optional, useful to describe IoT patterns. In particular, there are the name and aliases to identify the pattern, the icon to associate a graphical element to the pattern in a diagram, a problem section to state the issue for which the pattern is designed, the context to describe the situation in which the problem occurs and any constraints, and the forces represented by possibly contradictory considerations to bear in mind while choosing a solution. Then, there is the solution section that describes steps to solve the problem and a result section that describes the context after applying the pattern. Any similar solution that does not deserve a separate pattern description is listed in the variants section. There is also a list of related patterns, i.e., patterns that can be applied together or that exclude each other. Finally, an example section completes the pattern description. To keep pattern descriptions general, in accordance with [33], we describe three patterns, namely obstacle avoidance, indoor localization, and inertial monitoring, through the following elements: name, context, problem, forces, solution, and consequences (the last generically adopted instead of results, but with a very similar meaning). We further extend pattern descriptions by including variants and providing a graphical representation of the pattern components and their interactions. In particular, light grey and dark grey boxes are respectively software and hardware components, while dashed boxes are those components that can be either local (i.e., within the device) or remote (i.e., in a companion device or following the principles of edge, fog, and cloud computing) in the IoRT network.

A Pattern for Obstacle Avoidance
In the following, the first pattern related to the obstacle avoidance function for a mobile robotic platform within an IoRT context is introduced. After a brief review of the well-established approaches to this problem, we proceed with the description of the pattern. It is worth underlining that most of the works presented in the following section are not designed for an IoRT network. However, the seed of the IoT interoperability and data exchange can often be found in the distributed systems proposed. Furthermore, some parts of the presented solutions do not strictly require the presence of a network, while some other functions can be generalized to network-based or even cloud-based solutions, as better explained in the following.

Related Work
The obstacle avoidance problem is part of the wider navigation problem for a robotic vehicle. It consists of computing a motion control that avoids collisions with the obstacles detected by the sensors while the robot is driven to reach the target location by a higher level planner. The result of applying this control action at each time is a sequence of motions that drive the vehicle free of collisions towards the target [35].
A classification of obstacle avoidance algorithms can start from the so-called one-step methods, which directly transform the sensor information into motion control. These methods assimilate obstacle avoidance to a known physical problem, as in the Potential Field Method (PFM) [36,37]. The PFM adopts an analogy in which the robot is a particle that is repelled by the obstacles due to repulsive forces F_rep. Moreover, this algorithm is also capable of handling a target location, which exerts an attractive force F_att on the particle. In this case, the most promising motion direction is computed to follow the direction of the artificial force induced by the sum of both potentials. On the other hand, methods with more than one step compute some intermediate information, which is then further processed to obtain the motion. In particular, Reference [38] proposed the Vector Field Histogram (VFH), which solves the problem in two steps, while [39], with the Obstacle Restriction Method (ORM), solved the problem in three steps, where the result of the first two steps is a set of candidate motion directions. Finally, there are methods that compute a subset of velocity control commands, such as the Dynamic Window Approach (DWA) described in [40], which provides a controller for driving a mobile base in the plane. Some of the algorithms listed above can be more suited than others to a specific navigation context, depending on environment uncertainty, speed of motion, etc.
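The artificial-potential analogy described above can be sketched in a few lines of code. The function below is a minimal, illustrative implementation of one PFM step: the gains, the influence radius, and the point-obstacle model are simplifying assumptions for the sketch, not values taken from the cited works.

```python
import math

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=100.0, influence=2.0):
    """Compute the next motion direction (radians) for a point robot.

    pos, goal: (x, y) tuples; obstacles: list of (x, y) tuples.
    k_att, k_rep, and influence are illustrative gains, not tuned values.
    """
    # Attractive force F_att pulls the robot towards the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Each obstacle inside the influence radius exerts a repulsive force F_rep.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    # The most promising motion direction follows the resultant force.
    return math.atan2(fy, fx)
```

With no obstacles nearby, the returned direction points straight at the goal; a close obstacle bends it away, which is the essence of the one-step approach.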
Besides the algorithms, sensing and actuation play a crucial role in obstacle avoidance as well. In the last few years, the spread of the Robot Operating System (ROS) framework [41] has allowed the development of a common approach in tackling the general problem of robot navigation by bridging the gap between the hardware interface and software development. Such an approach is included in the so-called navigation stack (http://wiki.ros.org/navigation (accessed on 2 April 2021)), which exploits range-sensing devices and obstacle maps. In particular, the obstacle avoidance part is performed by a local path planner module, implementing several types of control algorithms [42]. A large variety of robots (https://robots.ros.org/ (accessed on 2 April 2021)) adopt the ROS navigation stack to perform both local and global path planning, thus representing a noticeable example of a modular and generally applicable solution to the recurring problem of navigation.
In the AAL scenario as well, several examples of mobile robotic platforms can be found, which are designed for the mobility support of the elderly and people with impairments. Most of the time, they are referred to as smart walkers [43] since they provide high-level functions such as monitoring of the user's status, localization, and obstacle avoidance. In relation to their propulsion, smart walkers can be classified into two broad categories: passive and active walkers. In the former, the user is responsible for the motion, while in the latter, the wheels are actuated by electric motors. We will now focus on the obstacle avoidance function of the smart walkers. For both walker categories, the related literature presents a considerable number of works, which in any case show similar features and share the same grounding idea.
In [44], a smart walker prototype was developed on top of an omnidirectional drive robot platform. To detect obstacles at various heights, the robot is equipped with arrays of ultrasonic transducers and infrared near-range sensors, while a laser range finder is used for mapping and localization. Force sensors are embedded in the handlebars for the acquisition of the force exerted by the user. Furthermore, a map of the environment is provided to consider those obstacles that the sensors could not detect, e.g., staircases. The collision avoidance module directly controls the velocity and the heading of the robot to avoid obstacles while the platform is driving to the target location. Actual robot motion commands through waypoints are given by the local motion planner [45].
The walker presented by [46] consists of a custom frame, two caster wheels, two servo-braked wheels for steering, a laser range finder, and a controller. In this work, the obstacle avoidance was performed by an adaptive motion control algorithm incorporating environmental information, which was based on the artificial potential field method [36]. In [47], a passive smart walker was proposed, with obstacle avoidance and navigation assistance functions. It is equipped with a laser range finder (used for obstacle detection), frontal and side sonar sensors (to detect transparent objects and objects outside the laser scanning plane), two encoders for odometric localization, and a potentiometer on the steering wheel for heading acquisition. A geometry-based obstacle avoidance algorithm [48] is used, which searches for clear paths in the area in front of the walker while still reacting to user inputs. Similarly, References [46,49] presented a passive walker with a 2D laser scanner and wheel brakes actuated by servos, thus ensuring passive guidance. In this case as well, force sensing on the two handles, for the recognition of user intention, was included.
In [50], a passive smart walker was developed, by enhancing a commercial rollator with three low-cost Time-of-Flight (ToF) laser-ranging sensors (one frontal and two on the sides), a magnetometer for the heading acquisition, force sensors on the handlebars, brake actuators, and a companion computer, used for the wireless connection to a custom IoT platform. The obstacle avoidance solution proposed can handle both static and dynamic obstacles. Static obstacles are included in a map. The 2D localization inside the map is given by an indoor localization system, whose data can be accessed through the IoT platform. The braking system is operated in such a way to guide the elderly user's walking to avoid a collision every time a close obstacle is localized from the map or through the ranging sensors for dynamic obstacle detection.

Pattern Definition
Context: The real world where a mobile robotic platform has to move is characterized by the presence of "obstacles", which may hinder or even damage the vehicle during its motion. We assume to have a map of the environment (including static obstacles) and a localization system.
Problem: A mobile vehicle must be capable of avoiding obstacles in the considered environment (while trying to reach the target position given by the high-level planning). If no target is given, a mobile vehicle must be capable of wandering in its operating environment without hitting obstacles.
Forces:
• The possibility to consider any type of obstacle (both static and dynamic obstacles).
• The definition of a proper control strategy.
• The adoption of suitable sensors.
Solution: Provide the mobile robot with range sensing, a steering system, and a processing unit. Define a control algorithm to process the range and the localization data, and act on the navigation system to divert the path of the mobile robot from the obstacles. The map of the environment is used for static obstacles and those obstacles undetectable by the range sensors. A representation of this IoRT pattern is given in Figure 1.
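The solution above can be illustrated with a minimal reactive sketch that merges the sensed front range with the static obstacles known from the map, made possible by the localization system assumed in the context. The single frontal sector, the field-of-view cone, and the thresholds are illustrative simplifications (bearing wrap-around is not handled), not a prescribed implementation.

```python
import math

def effective_front_range(pos, heading, sensed_front, static_obstacles,
                          max_range=4.0, half_fov=math.radians(15)):
    """Merge the sensed front range with map knowledge (static obstacles).

    pos: (x, y) robot position from the localization system;
    heading: robot orientation in radians; sensed_front: range reading (m);
    static_obstacles: list of (x, y) points from the environment map.
    """
    d_map = max_range
    for ox, oy in static_obstacles:
        dx, dy = ox - pos[0], oy - pos[1]
        d = math.hypot(dx, dy)
        bearing = abs(math.atan2(dy, dx) - heading)
        if d < d_map and bearing < half_fov:
            d_map = d  # map obstacle lies inside the frontal cone
    # The controller acts on the worst case between sensing and map.
    return min(sensed_front, d_map)

def steer(front_range, safe_dist=0.6):
    """Minimal reactive rule: go forward while the front is clear."""
    return 'forward' if front_range > safe_dist else 'turn'
```

In this way, obstacles undetectable by the range sensors (but present in the map) still trigger the avoidance behavior, as required by the solution.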
Consequences:
Benefits:
• The guidance of the mobile robot in such a way as to keep it safely away from known or sudden obstacles.
• The avoidance of potentially destructive impacts.
• Extended time of exploration of the environment.
Liabilities:
• Sensors' and actuators' integration on the platform.
• Increased weight and, possibly, size of the platform.
• Potential presence of blind spots, depending on the adopted solution.
Variants: local re-planning, motion control, and traction control. Related patterns: patterns for environment mapping, device localization solutions, and high-level path planning.

A Pattern for Indoor Localization
Similar to what was done in the previous section, the pattern related to indoor localization is introduced below. After a brief overview of the state-of-the-art regarding the main approaches to this problem, we proceed with the description of the pattern.

Related Work
Indoor localization is an across-the-board research topic, and in recent years, some of the most prominent solutions introduced at the academic level have reached the market. The variety of technological solutions proposed in the literature is usually strongly dependent on the specific application domain, and it is common to apply data fusion algorithms to obtain better results. Applications typically fall within the scope of indoor localization, indoor navigation, and Location-Based Services (LBS). Among the systems described in the literature, the most common are the following.
Dead reckoning systems: They use sensors such as accelerometers, magnetometers, and gyroscopes to provide a quick estimate of the user's current position starting from his/her (known) initial position. Although dead reckoning-based systems can theoretically be accurate, in practice, they are subject to strong drift errors introduced, for example, by the quality of the sensors adopted. It is also important to manage paths that are "non-continuous" due to the presence of elevators or escalators. For these reasons, a periodic re-calibration is performed to reset the cumulative drift error, such as in [51,52]. In general, dead reckoning systems are used in combination with other indoor localization technologies to improve the accuracy of the entire system.
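A single dead-reckoning update is simple to state in code. In this illustrative pedestrian sketch, the step length would come from accelerometer-based step detection and the heading from the magnetometer/gyroscope; the landmark-based reset stands in for the periodic re-calibration mentioned above. All names and values are assumptions of the sketch.

```python
import math

def dead_reckoning_step(x, y, step_length, heading_rad):
    """One pedestrian dead-reckoning update: advance the last known
    position by a detected step along the measured heading.
    step_length (m) and heading_rad would come from the inertial sensors."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

def recalibrate(_x, _y, landmark):
    """Reset the accumulated drift when a known landmark is recognized."""
    return landmark
```

Between recalibrations, each step adds its own sensor error, which is why the drift grows over time and a complementary localization technology is usually paired with dead reckoning.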
Systems based on interaction with wireless communication technologies: Considering the widespread use of mobile devices with wireless connectivity, the localization systems based on WLAN and Bluetooth (in both standard and Low Energy versions) are among the most common [53]. Although these systems may also be used outdoors, the existence of other more common and accurate solutions for open-air environments (such as GPS) makes them attractive mainly for indoor use.
Magnetic field-based systems: These systems exploit the magnetic fields that characterize an environment due to the presence of elevators, escalators, doors, pillars, and other ferromagnetic structures [54,55]. In these systems, magnetic signatures are collected on a map (off-line phase) and can be subsequently exploited by the user, through his/her smartphone, to know his/her position (on-line phase). The interest around this approach has increased in recent years, and there are some commercial solutions such as Indoor Atlas [56].
Audio-based systems: Audio-based systems exploit uncontrolled environmental sounds (e.g., noise from the background) or controlled sounds (such as those generated from loudspeakers in shopping malls, shops, and museums) [57,58]. Depending on the particular approach adopted, they can be used both to locate the user precisely and as an add-on for increasing the accuracy of other technologies.
Localization systems based on ultrasonic transducers: These systems are generally composed of two elements: the transmitting node (the node from which the ultrasonic signal starts, in general, fixed to the ceiling) and the receiving node, i.e., the node whose position we want to determine (the target) [59][60][61].
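Ranging systems of this kind typically convert a measured time-of-flight into a distance and then intersect distances from several fixed anchors. The sketch below shows one-way ultrasonic ranging plus a linearized 2D trilateration; the anchor layout and the assumption of non-collinear anchors are illustrative, not taken from the cited systems.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def tof_distance(tof_s):
    """One-way ultrasonic time-of-flight (s) to distance (m)."""
    return SPEED_OF_SOUND * tof_s

def trilaterate(p1, d1, p2, d2, p3, d3):
    """2D position from three anchor positions and measured distances.

    Subtracting the circle equations pairwise yields a linear system
    A*x + B*y = C and D*x + E*y = F, solved here by Cramer's rule.
    The three anchors must not be collinear (den != 0)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    A = 2 * (bx - ax); B = 2 * (by - ay)
    C = d1**2 - d2**2 - ax**2 + bx**2 - ay**2 + by**2
    D = 2 * (cx - bx); E = 2 * (cy - by)
    F = d2**2 - d3**2 - bx**2 + cx**2 - by**2 + cy**2
    den = A * E - B * D
    return (C * E - B * F) / den, (A * F - C * D) / den
```

With noisy real measurements, more than three anchors and a least-squares fit would normally replace this exact intersection.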
RFID-based systems: A typical system based on RFID is composed of tags, readers, and back-end electronics for data processing. The performance of these systems is highly dependent on the type of tag (active or passive) and on the specific frequency band [62,63].
Systems based on Visible Light Communication (VLC): These exploit the susceptibility of LEDs to amplitude modulation at high frequencies to transmit information in the environment [64,65]. If the frequency exceeds a "flicker fusion" threshold, the lighting functionality is kept intact since the modulation is not perceived by the human eye, while it is possible to exploit the information generated through modulation to locate the user. This information, for example, can be associated with a point on a map. Thanks to the fact that the sensors inside smartphone cameras are composed of photodiode arrays, it is possible to use a smartphone as the device for indoor localization based on VLC. The use of light sources such as those used in the VLC framework allows exploiting algorithms valid for electromagnetic waves, such as Angle of Arrival (AoA) and Time of Arrival (ToA), also employed in the systems based on interaction with wireless communication technologies.
Systems based on computer vision algorithms: These systems exploit the recognition of particular tags (marker-based systems) or of characteristic objects/points (marker-less systems) associated with specific positions within an environment [66][67][68][69]. Marker-based approaches require that a set of 2D markers be placed in the environment in which the localization mechanism has to be enabled. Each ID and therefore each marker is associated during the off-line setup phase with a known position on a map. On the other hand, marker-less approaches are based on the analysis of the characteristics of perceived images: the user's position is obtained from what the smartphone sees in the environment.
Hybrid systems: As noted above, usually to increase accuracy while reducing the cost/performance ratio of an indoor localization system, it is preferable to use multiple solutions at the same time, as presented in [70][71][72]. This choice is particularly interesting for the systems associated with the use of smartphones, thanks to the fact that this category of devices features different types of sensors (inertial, magnetic, and microphone), at least one camera, and antennas to manage wireless connectivity (WiFi, BT/BLE).

Pattern Definition
Context: It is necessary to access localization information in environments where outdoor positioning systems (such as GPS, GLONASS, and GALILEO) do not directly work. Localization can be performed with different accuracy levels depending on the application.
Problem: A user needs to know his/her own position in an indoor environment, or there is a need to know the position of a user/object.
Forces:
• The possibility to intervene in the environment, e.g., place tags, lights, beacons, and base stations (in case they are not already present).
• Setup time and costs: localization could be necessary for temporary events or to monitor a single person, as may happen in AAL scenarios.
• Required accuracy level.
Solution: As stated in the problem paragraph, there are two indoor localization cases: (a) a user tracks his/her position; (b) someone else tracks the position of a user/object. In the first case (a), using a handheld device to know the current position is a common habit indoors. Given that, solutions require a map of the environment that can contain additional information such as the position of tags (e.g., visual, textual, and graphic), the distribution of magnetic or radio fingerprints, or even the position of landmarks (used in dead reckoning systems to reduce drift). When approaches requiring trilateration are applied, users receive their position from external parties (e.g., base stations or servers that compute the position). In the second case (b), external entities monitor the user/object position, so that the device held by the user or attached to the object (e.g., a smart walker) can be very simple. A graphic representation of this IoRT pattern is depicted in Figure 2, where, as already said above, dashed boxes represent those components that can be either on-board the device (or worn by the user) or remote, i.e., other nodes within the IoRT network.
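For case (a), a common map-based approach is fingerprint matching. The sketch below returns the surveyed position whose stored RSSI fingerprint is closest, in Euclidean distance, to the one currently observed by the device; the beacon identifiers, RSSI values, and grid of surveyed points are all illustrative assumptions of the sketch.

```python
import math

def locate_by_fingerprint(observed, fingerprint_map):
    """Match an observed RSSI vector against a pre-surveyed fingerprint map.

    observed: dict beacon_id -> RSSI (dBm);
    fingerprint_map: dict (x, y) -> dict beacon_id -> RSSI.
    Returns the surveyed (x, y) position with the closest fingerprint."""
    def distance(fp):
        common = set(observed) & set(fp)
        return math.sqrt(sum((observed[b] - fp[b]) ** 2 for b in common))
    return min(fingerprint_map, key=lambda pos: distance(fingerprint_map[pos]))
```

The off-line survey that builds `fingerprint_map` is exactly the setup effort listed among the liabilities below; a k-nearest-neighbors average over several surveyed points is a common refinement.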
Consequences:
Benefits:
• Monitor the position of objects and people.
• Enable services based on position (LBS).
• Find people in cases of emergency.
• Help people move inside buildings, airports, and malls.
Liabilities:
• Need to calibrate the system over time.
• Effort to map/set up an environment (e.g., to build a database of "fingerprints" or to place beacons and tags).
Variants: indoor navigation. Related patterns: patterns for environment mapping. Aliases: indoor positioning and indoor tracking.

A Pattern for Inertial Monitoring
Following the structure already defined in the previous sections, a pattern for inertial monitoring is now introduced and described. As will be clarified in Section 5.1, this type of pattern is crucial in AAL, industrial, and building maintenance scenarios.

Related Work
Inertial sensors have been widely adopted in a huge number of applications in recent years: production of smartphones, consoles, avionics, and robotic systems. Nevertheless, inertial systems can also be found in applications concerning seismology and structural vibration control, terrain roughness classification, fall detectors, and so on. Although the applications can be different, the way the data are processed and the hardware architectures often share common properties. In the following, a brief overview of the state-of-the-art in the area of inertial-based systems is given, and common properties, which will be used to define the pattern, are highlighted.
An area that has attracted a considerable amount of applications is related to health care and AAL. As an example, in [73], a human activity recognition system with multiple combined features to monitor human physical movements from continuous sequences via tri-axial inertial sensors was proposed. The authors used a decision tree classifier optimized by the binary grey wolf optimization algorithm, and the experimental results showed that the proposed system achieved an accuracy of 96.83% on the proposed human-machine dataset.
In the same context, Andò et al. presented in [74] the development of a fall and Activity of Daily Living (ADL) detector using a fully embedded inertial system. The classification strategy was mainly based on the correlation computed between run-time acquired accelerations and signatures (a model of the dynamic evolution of an inertial event). In particular, the maximum among the correlation coefficients was used as a feature to be passed to the classifier.
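The correlation-based strategy of [74] can be illustrated with a minimal sketch: the run-time acceleration window is correlated with a set of per-class signature templates, and the class achieving the maximum correlation coefficient wins. The templates and labels below are illustrative placeholders, not the actual signatures of the cited work.

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def classify_event(window, signatures):
    """Label an acceleration window with the signature it correlates best with.

    signatures maps labels (e.g. 'fall', 'sit') to template waveforms;
    templates must have the same length as the window and non-zero variance."""
    scores = {label: pearson(window, tpl) for label, tpl in signatures.items()}
    return max(scores, key=scores.get)
```

In a complete detector, the maximum correlation coefficient would also be thresholded before accepting the label, so that windows matching no signature are rejected.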
In the case of robotic applications, and especially in the context of autonomous ground vehicles, the assessment of the traversability level of the terrain is a fundamental task for safe and efficient navigation [75]. Oliveira et al. proposed in [76] the use of inertial sensors to estimate the roughness properties of the terrain and to autonomously regulate the speed according to the terrain identified by the robot in order to avoid abrupt movements over terrain changes. The classification was performed by employing a classifier capable of clustering different terrains based only on acceleration data provided by the inertial system.
In the area of personal service robots, a novel idea was proposed by [77]. Due to the growing interest in the use of robots in everyday life situations, and especially their interaction with the users, the article proposed the use of wearable inertial systems to improve the interaction between humans and robots while walking together. The inertial system was used to compute real-time features from the user's gait, which were consequently used to act on the control system of the robot.
In the context of gesture recognition for robot control, an interesting study was presented by [78]. With the development of smart sensing and the trend of aging in society, telerobotics is playing an increasingly important role in the field of home care service. In this domain, the authors presented a human-robot interface in order to help control the telerobot remotely by means of an instrumented wearable sensor glove based on Inertial Measurement Units (IMUs) with nine axes. The developed motion capture system architecture, along with a simple but effective fusion algorithm, was proven to meet the requirements of speed and accuracy. Furthermore, the architecture can also be extended to up to 128 nodes to capture the motion of the whole body.
A similar study was also presented in [79]. Building on the previous one, in this study, the authors addressed the definition of a proper open-source framework for gesture-based robot control using wearable devices. A set of gestures was introduced and tested both with respect to performance aspects and considering qualitative human evaluation (with 27 untrained volunteers). Each gesture was described in terms of the three-dimensional acceleration pattern generated by the movement, and the architecture was developed in such a way as to have a dedicated module to analyze online the acceleration data provided by the inertial sensor and a modeling module to create a representation of the gesture that maximized the accuracy of the recognition. The gesture classification was based on the Mahalanobis distance for the recognition step and on Gaussian mixture models and Gaussian mixture regression for the modeling step.
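A minimal sketch of the Mahalanobis-based recognition step might look as follows, with each gesture modeled by a single Gaussian rather than the mixture models used in [79]; the gesture names and synthetic data are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_model(samples):
    """Model a gesture class by the mean and inverse covariance of its
    3D acceleration features (a single-Gaussian simplification)."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mean, np.linalg.inv(cov)

def mahalanobis(x, mean, inv_cov):
    d = x - mean
    return float(np.sqrt(d @ inv_cov @ d))

# Two illustrative gesture classes in a 3D acceleration-feature space.
wave = rng.normal([1.0, 0.0, 0.0], 0.1, size=(200, 3))
push = rng.normal([0.0, 1.0, 0.0], 0.1, size=(200, 3))
models = {"wave": fit_model(wave), "push": fit_model(push)}

def recognize(x):
    """Assign the sample to the gesture with the smallest distance."""
    return min(models, key=lambda g: mahalanobis(x, *models[g]))

gesture = recognize(np.array([0.95, 0.05, 0.02]))
```

Using the Mahalanobis distance instead of the Euclidean one lets the recognizer account for how much each gesture naturally varies along each axis.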
Another common research topic in the area of robotics is fault detection and diagnosis. As an application example, Zhang et al. presented in [80] a method for the fault detection of damaged UAV rotor blades. Fault detection is a relevant topic, especially when dealing with UAVs, since any defect in the propulsion system of the aerial vehicle will lead to a loss of thrust and, as a result, to a disturbance of the thrust balance, higher power consumption, and the potential crashing of the vehicle. The presented fault detection methodology was based on signal processing of the vehicle acceleration and a Long Short-Term Memory (LSTM) network model. The vehicle accelerations, caused by unbalanced rotating parts, were retrieved from the onboard inertial measurement unit. Features based on wavelet packet decomposition were then extracted from the acceleration signal, and the standard deviations of the wavelet packet coefficients were employed to form the feature vector. The LSTM classifier was finally used to determine the occurrence of a rotor fault.
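The feature-extraction step can be sketched as follows, here with a hand-rolled Haar wavelet packet decomposition standing in for the wavelet family used in [80]. The signals and the vibration frequency are invented for illustration, and a real system would feed these feature vectors to an LSTM classifier rather than compare their norms:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: approximation and detail coefficients."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def wavelet_packet_features(signal, levels=3):
    """Std of every wavelet-packet node at the deepest level, mirroring
    the feature vector described in [80] (with a plain Haar wavelet)."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nodes = [c for node in nodes for c in haar_step(node)]
    return np.array([node.std() for node in nodes])

# Illustrative signals: a healthy rotor versus one with an assumed
# unbalance-induced vibration component at 60 Hz.
t = np.linspace(0, 1, 512, endpoint=False)
healthy = 0.05 * np.random.default_rng(1).normal(size=512)
faulty = healthy + 0.5 * np.sin(2 * np.pi * 60 * t)

f_h = wavelet_packet_features(healthy)   # 2**3 = 8 features
f_f = wavelet_packet_features(faulty)
```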

Pattern Definition
Context: The need for detection and classification of inertial phenomena in order to act on the system/device under control.
Problem: Detect and classify either environmental conditions or potentially dangerous inertial events for the activation of suitable control actions.
Forces:
• The definition of a suitable classification strategy.
• The definition of the required inertial sensor type and quantity for the task accomplishment.
• The available computing power.
Solution: Provide the entity to be monitored with a system including inertial sensors (such as accelerometers and gyroscopes) and a processing unit (microcontroller or microprocessor). Define a classification methodology that, by processing the acquired inertial quantities, can associate a given unknown event with a given class of events. Define the classification-dependent control actions. A representation of this IoRT pattern is given in Figure 3.
Variants: gesture recognition and terrain classifier.
Related patterns: patterns for inertial signature extraction.
Aliases: inertial event classification.
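A minimal code skeleton of the pattern, with purely illustrative names and thresholds, could separate the acquire/classify/act responsibilities as follows:

```python
from abc import ABC, abstractmethod

class InertialMonitor(ABC):
    """Skeleton of the inertial monitoring pattern: concrete systems plug
    in their own sensors, classification strategy, and control actions."""

    @abstractmethod
    def acquire(self) -> list[float]:
        """Read a window of inertial samples (accelerometer/gyroscope)."""

    @abstractmethod
    def classify(self, window: list[float]) -> str:
        """Associate the window with a class of inertial events."""

    @abstractmethod
    def act(self, event_class: str) -> None:
        """Trigger the classification-dependent control action."""

    def step(self) -> str:
        """One acquire -> classify -> act cycle of the pattern."""
        window = self.acquire()
        event = self.classify(window)
        self.act(event)
        return event

class ThresholdFallMonitor(InertialMonitor):
    """Toy concretization: flags a 'fall' when acceleration exceeds 2 g."""

    def __init__(self, samples):
        self.samples = samples
        self.alerts = []

    def acquire(self):
        return self.samples

    def classify(self, window):
        return "fall" if max(window) > 2.0 else "adl"

    def act(self, event_class):
        if event_class == "fall":
            self.alerts.append(event_class)

monitor = ThresholdFallMonitor([0.9, 1.1, 2.7, 1.0])
event = monitor.step()
```

Keeping the three responsibilities behind an abstract interface is what allows the forces above (sensor choice, classification strategy, computing power) to vary independently across concrete implementations.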

Case Study
In this section, a real-world case study including the implementation of the three patterns proposed, namely obstacle avoidance, indoor localization, and inertial monitoring, is reported and described. The solution was developed in the framework of the SUMMIT project, which aimed at realizing an Internet of Things platform for three application scenarios: smart health, smart cities, and smart industry.
In the context of smart health and smart assisted living, a smart walker [50] was developed as a smart "robotic thing" of the platform. As partially described in the previous sections, the walker is able to:
• localize itself through a multi-sensor system, combining trilateration data acquired by a network of ultrasound sensors and odometry data acquired by an inertial measurement unit [81];
• avoid obstacles by means of time-of-flight laser sensors, a map of known obstacles, and actuators in the brakes of the rear wheels [50].
Moreover, inertial monitoring of the walker user was performed by classifying accelerometer, gyroscope, and compass data acquired by a wearable sensing system, through an event-correlated approach, in order to detect falls and, more generally, activities of daily living. Furthermore, statistics on walker usage by the elderly person, as well as changes in the user's gait over time, were assessed. All these tasks were carried out within a domestic environment, and all the data (e.g., ADL monitoring data, walker position, etc.) were shared through the IoT cloud, both among the modules of the solution and with other external components of the IoT network. A picture of the final setup of the smart walker is shown in Figure 4, whereas a block diagram showing the interconnection of the three patterns is depicted in Figure 5. Clearly, the testing environment was equipped with the necessary infrastructure for wireless communication and the indoor localization system [81].
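The multi-sensor localization idea can be sketched as a simple complementary filter in which absolute (but sparse) trilateration fixes correct the drift of dead-reckoned odometry; the gain, odometry bias, and update rate below are assumed tuning values, not those of [81]:

```python
import numpy as np

ALPHA = 0.2  # weight given to each trilateration fix (assumed gain)

def fuse(position, odometry_delta, trilateration_fix=None):
    """Propagate the estimate with odometry and, when a trilateration
    measurement is available, pull the estimate toward it."""
    position = position + odometry_delta
    if trilateration_fix is not None:
        position = (1 - ALPHA) * position + ALPHA * trilateration_fix
    return position

# Walker moving along x at 0.1 m/step; odometry overestimates by 10%.
true_path = np.array([[0.1 * k, 0.0] for k in range(1, 51)])
est = np.zeros(2)
drift_only = np.zeros(2)
for k in range(50):
    delta = np.array([0.11, 0.0])               # biased odometry increment
    fix = true_path[k] if k % 5 == 0 else None  # ultrasound fix every 5 steps
    est = fuse(est, delta, fix)
    drift_only = drift_only + delta             # pure dead reckoning

fused_err = np.linalg.norm(est - true_path[-1])
drift_err = np.linalg.norm(drift_only - true_path[-1])
```

Even this crude blending keeps the fused estimate bounded, whereas the odometry-only estimate accumulates its bias without limit.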

[Figure 4 callouts: ToF sensors, force sensors, actuated brakes, localization node, IMU, and onboard PC.]
A preliminary testing phase was performed in simulation in order to properly tune the adopted obstacle avoidance approach, as shown in Figure 6. Once the resulting behavior of the walker was acceptable, tests on the real setup were performed in order to assess the proper interconnection and communication among the modules of the solution through the network. A set of different operating conditions of the walker, depending on the presence and position of the obstacle, is shown in Figure 7.

Discussion
The adoption of patterns during the design and development of the IoRT solution described in the previous section allowed us to clearly identify the boundaries of each part of the whole robotic system and to treat it as a separate element. The modularity of this approach is particularly suitable for the continually changing nature of a network of "things", in terms of both continuous integration and types of interacting devices.
However, this comes at the expense of a considerable overhead in the data flow. For instance, in the proposed case study, the walker position is first pushed to the cloud by the walker and then retrieved again by the walker itself for obstacle avoidance (taking into account the obstacle map). Although this approach might look convoluted, as localization data produced on board the smart walker might be directly "consumed" by the walker itself, it allows the smooth replacement of the walker with any other device that may need localization.
Clearly, this implies the need for stable and reliable connectivity among all the network components, especially in those cases where demanding real-time applications are involved. Such a requirement introduces another criticality of the IoRT approach; however, this issue is becoming minor thanks to the current availability of well-performing networks, even in domestic environments (as in the AAL scenario). The issue remains more relevant in both the smart industry and smart city scenarios, as in the former, electromagnetic disturbances may compromise the quality of the connection, whereas, in the latter, typically, very large spaces have to be covered.

Conclusions
After the establishment of the IoT, the novel paradigm of the Internet of Robotic Things is now growing in the research community, as the crossover of the IoT and robotics. The integration of robotic systems into the wider context of the IoT provides such platforms with a higher level of situational awareness [82], which can be fruitfully exploited. The possibility to access a map, to localize within an indoor environment, or even to exploit remote computational resources opens up a wider variety of application scenarios and opportunities that could not be fully explored before.
As very similar approaches to basic functions, such as obstacle avoidance, indoor localization, and inertial monitoring, have been identified in the literature, in this paper, we proposed to abstract such approaches and to rearrange them by leveraging the concept of patterns. In this manner, three design patterns were presented, with the aim of simplifying the work of IoRT designers by allowing them to quickly determine whether a solution fits the problem at hand and to identify the core elements for the subsequent implementation.
We hope this first attempt at pattern definition in the IoRT context may encourage the identification and definition of many more patterns, with the final goal of arriving at a large and thorough pattern language for the Internet of Robotic Things. As a future development, we plan to define and extract IoRT patterns from application contexts other than AAL, such as agricultural robotics, navigation in unstructured environments, and robot cooperation.