Unmanned Ground Vehicle Modelling in Gazebo/ROS-Based Environments

The fusion of different technologies is the basis of the fourth industrial revolution. Companies are encouraged to integrate new tools into their production processes in order to improve working conditions and increase productivity and production quality. The integration of information and communication technologies with industrial automation can create highly flexible production models for products and services that can be customized through real-time interactions between consumer, production, and machinery throughout the production process. The future of production therefore depends on increasingly intelligent machinery through the use of digital systems. The key elements for future integrated devices are intelligent systems and machines, based on human–machine interaction and information sharing. To this end, the implementation of shared languages that allow different systems to communicate in a simple way is necessary. In this perspective, the use of advanced prototyping tools such as open-source programming systems, the development of more detailed multibody models through CAD software, and the use of self-learning techniques will allow the development of a new class of machines capable of revolutionizing our companies. The purpose of this paper is to present a waypoint navigation activity of a custom Wheeled Mobile Robot (WMR) in an available simulated 3D indoor environment using the Gazebo simulator. Gazebo was developed in 2002 at the University of Southern California, with the idea of creating a high-fidelity simulator able to simulate robots in outdoor environments under various conditions. In particular, we wanted to test the high-performance physics engine Open Dynamics Engine (ODE) and the sensor features present in Gazebo for prototype development activities. This choice was made for the possibility of emulating not only the system under analysis, but also the world in which the robot will operate.
Furthermore, the integration tools available for Solidworks and Matlab-Simulink, well-known commercial platforms for modelling and robotics control, respectively, are also explored.


Introduction
The term "Wheeled Mobile Robot" (WMR) underlines the ability of a vehicle to operate without a human on board or remotely controlling it, in order to navigate the environments for which it is designed (ground, air, water). The form and degree of control of the vehicle determine the type and level of autonomy, ranging from the absence of automation to full automation, generally through a robotic detection system that uses model-based and learning approaches to increase the level of driving autonomy. A definition of a mobile robot is presented in [1], which identifies their features and purpose: "A mobile intelligent robot is a machine capable of extracting information about its environment and using the knowledge of its world to move in a meaningful and safe way". Unmanned mobile robots are used for tasks that are difficult, dangerous and/or highly unpleasant for human beings, when the costs of accessibility, safety, and survival are high, or when fatigue, time, or unpleasantness are increased [2,3]. Therefore, today's missions for ground, aerial and underwater unmanned vehicles fulfil tasks such as monitoring infrastructures like bridges, canals, and offshore oil and gas installations. Furthermore, mobile robots are generally integrated into exploration, intervention, and surveillance activities and, increasingly, are taking root in new fields such as agriculture and the health sector [4][5][6]. A more generic distinction allows us to understand the degrees of development reached, which we will describe later: "mobile robotics", which involves all unmanned vehicles in all the environments described previously, and "intelligent vehicles", which commonly deals with the mobility of people and goods on regular surfaces. The basic components of mobile robots include at least one controller, a power source, a software or control algorithm, some sensors, and actuators [7][8][9][10]. The areas of knowledge usually involved in the field of mobile robotics are:
mechanical engineering, responsible for the design of vehicles and, in particular, the mechanisms of locomotion; computer science, responsible for visualization, simulation, and control, with algorithms for detection, planning, navigation, control, etc.; electrical engineering, capable of integrating systems, sensors, and communications; cognitive psychology, perception, and neuroscience, which study biological organisms to understand how they analyse information and how they solve problems of interaction with the environment; and, finally, mechatronics, which is the combination of mechanical engineering with computing, computer engineering and/or electrical engineering [11][12][13][14][15]. Therefore, there is important "knowledge" and "know-how" from conception to experiments and implementation. The degree of development of mobile robots varies greatly according to the time and the impetus with which the research topics, technologies, and initiatives are approached. The academic community generally classifies vehicles travelling on land, which would be the "intelligent vehicles", as Autonomous Ground Vehicles (AGV), Autonomous Land Vehicles (ALV) or mobile robots; Unmanned Aerial Vehicles (UAV), generally classified into fixed-wing and rotary-wing types; and Autonomous Submarine Vehicles (ASV) or Unmanned Surface Vehicles (USV) for those travelling below and above the surface of the water, respectively [16][17][18]. Unmanned Ground Vehicles (UGV) are vehicles that operate while in contact with the ground and without a human presence on board. The development of such systems began as an application domain for Artificial Intelligence research at the end of the 1960s. The initial purpose was to recognize, monitor and acquire objects in military environments. The "Elmer and Elsie" tortoises, composite acronyms for the electromechanical and photosensitive robots, respectively, were the first automatic vehicles, invented by the neurophysiologist William Gray Walter at the Burden Neurological
Institute in Bristol between 1947 and 1950, and are considered among the first achievements of cybernetic science and the ancestors of ground robots and "intelligent" weapons [19,20]. These turtles identified sources of dim light and approached them, so they had locomotion, detection, and obstacle-evasion capabilities. The most notable advance of these battery-powered turtles was their ability to make intelligent associations such as "soft light", "intense light", or even "light equal to a type of sound", so they were able to react to these stimuli as a "conditioned reflex"; thus, they are recognized as pioneers of Artificial Intelligence (AI). Another important step in the development of mobile robots was the development of Shakey in 1966-1967, which was able to navigate from one room to another and even transport an object. Shakey, meaning "the one who trembles", was capable of moving by itself because it could "feel" its surroundings, although each movement required a good hour of calculation. Its morphology could be described as a large camera as a head, which could rotate and lean, while its body was the large computer that rested on a platform with three wheels that were its means of locomotion. The robot used electrical energy and carried several sensors: a camera, a distance-measuring device and tactile sensors to perceive obstacles, with stepper motors as actuators. It served as a test-bed for AI work funded by DARPA at the Stanford Research Institute [21][22][23]. The Shakey system is the pioneer of WMRs; it established the functional and performance baselines for mobile robots, identified the necessary technologies and, together with the Bristol turtles, helped define the research agenda of Artificial Intelligence (AI) in areas such as planning, vision, conditioned-reflex processing and natural language [24]. The Shakey system had a new development momentum at the end of the 1980s, with the implementation of an eight-wheel all-terrain vehicle with standard
hydrostatic steering, able to move on roads and in rough terrain; this vehicle, converted into an unmanned ground vehicle, had all the electronics and software for target search and navigation [25]. The result, in 1987, was a vision-guided road trip of 0.6 km at speeds of up to 3 km/h on rough terrain, avoiding ditches, rocks, trees, and other small obstacles [26,27]. Another type of WMR, called rovers, serves to explore, analyse and photograph the surface of the planet Mars; the first was named Sojourner, meaning "traveller", launched in 1997 and active from 6 July to 27 September of that year. Its mission was to explore Mars within a radius of action of 20 m around the landing platform, called Pathfinder, which served as the communication link with Earth. Later, Spirit and Opportunity, two twin rovers, were launched in January 2004, again to explore Mars over a wider area. Then Curiosity was sent in August 2012 in the Mars Science Laboratory mission, and it is expected that in 2021 the rover called Mars 2020 will be sent on its mission. The development of increasingly performant robots, capable of carrying out missions effectively, goes hand in hand with the development of robust computational platforms and tools to be used for rapid prototyping, evaluating robot designs, simulating virtual models and sensors, and providing and evaluating models and controllers, capable of satisfying the expanding demand and the renewed interest in robotics [28][29][30]. It is also important for developers and implementers to be aware of the available platforms, methods, algorithms, and hardware components that are most used, as well as their underlying physical and numerical paradigms, advantages, and disadvantages [31,32]. Those are the reasons why in this paper we present robotics frameworks from a broader perspective, with an emphasis on open-source ones. The main characteristics and components are discussed and compared. In 2012, a paper presenting the MIRA middleware did a comparative benchmarking of the
robotics platforms available at that time. Updated general information on these platforms and their basic characteristics is summarized in Table 1. The benchmarking can help to identify which platform fits better with specific robotics needs and purposes; it is even possible to combine them for large robotics environments or projects. Robotic software must cover a broad range of topics and expertise, from low-level embedded systems for controlling the physical robot actuators all the way up to high-level tasks such as collaboration and reasoning [33][34][35]. The many layers of computation must seamlessly communicate and integrate with each other for a robotic system to function successfully. Additionally, several tasks, such as mapping and navigation, are common to many robotic applications. Hence, a layer of software above the operating system but below the application program appears in the market to wrap this complexity and to provide a common programming abstraction across a distributed system [36].
Table 2 summarizes the advantages of these platforms. The primary research areas are Control, Locomotion, Machine Learning, Human-Robot Interaction (HRI), Planning, Mechanical Design, Cognitive Robotics and Mathematical Modeling. The primary application fields instead are Humanoid Robotics, Mobile Robotics, Multi-legged Robotics, Service Robotics, Industrial Robotics and Numerical Simulation of Physical Systems [37,38]. Gazebo-ROS is a powerful combination used by the robotics community in general for its ability to simulate environments with credibility and for the flexibility, robustness, and availability of common features for robots, capable of supporting complex distributed environments with multiple robots performing tasks in a coordinated manner [39]. That being said, it is easy to understand that this management of complexity, which allows both flexibility and integration with other robotic environments, requires some solid knowledge to exploit the full potential of the Gazebo-ROS platform [40,41]. The authors have for some time been using 3D design software to develop detailed models for dynamic simulations. In particular, the use of the SimMechanics Link tool allows the creation of rigid multibody models in a multi-domain simulation environment, in which it is possible to build multi-domain models, as in the case of mechanical parts driven by electric actuators. These tools therefore make it possible to obtain a good model with regard to geometry and mass distribution. However, the force fields that surround the system under analysis are left to the designer, who has the task of modelling the forces present at the interface between the system and the environment. The Gazebo simulation environment instead takes into account system-environment interactions, allowing a more truthful study of the dynamic behaviour of the system under analysis. In the literature, it is easy to find examples of robotic simulations in Gazebo that use multibody models
supplied directly by the manufacturers. It is instead more difficult to find applications with custom robots. For this reason, we decided to test this platform by recreating a mobile robot that we have in our laboratory, in order to assess the strengths and weaknesses of this prototyping environment. The paper is organised as follows. In Section 2, we describe the Gazebo robot simulator and its potential; moreover, the design process for creating the multibody model is shown, together with the modelling of the contact forces between wheels and floor. Section 3 describes the co-simulation activity conducted using the Gazebo 10.0 and Matlab 2018b software. Finally, in the last sections, we present our considerations.

Portability
Common programming model across language and/or platform boundaries, as well as across distributed end systems

Reliability
Can be reused and optimized with confidence over many applications

Managing complexity
Low-level programming abstractions can be made more accessible through suitable (possibly object-oriented) libraries. However, programming combinations of these abstractions can be excessively tedious and error-prone. Programming within the context of pattern-aware middleware can drastically reduce both the chances of introducing errors into the code and the amount of pain that the programmer must endure when implementing the system (Schmidt et al., 2000).

Materials and Methods
Gazebo development started as part of a Ph.D. research project. After 2009, it was integrated with ROS on a PR2 robot at the Willow Garage company (Menlo Park, CA, USA), which became its most important financial supporter from 2011. Now, version 10 is under development and is expected to be launched by January 2019; version 11 is already describing new functionalities and is expected to be released by January 2020. Therefore, keeping its promise, Gazebo launches a new major version once a year, with a useful life of two and five years for even and odd versions, respectively [42]. Gazebo is a powerful 3D simulator, capable of being integrated into various robotic platforms; however, it is the natural complement of ROS. One of its recognized strengths is the ability to incorporate different physics engines, each of which has its own level of development, and some have a marked orientation towards the simulation of certain types of robots, as in the case of Simbody for humanoids [43]. To select among the various engines, it is enough to invoke them at launch time, in the scripts that invoke the packages to be executed, with their respective parameters. Another important component during robotics simulation is rendering, that is, the management of the appearance of the moving image. Both the rendering and the physics engine make the simulation plausible; even so, there is a compromise to be achieved between accuracy of response to the physical phenomena of the modelled environment and the capacity to respond in computational terms [44]. A simulator must synthetically reproduce the environment it represents from both a visual and a physical (laws of physics) perspective. Gazebo calls this graphical environment the "world", where the various static and/or dynamic objects must be represented; it identifies each of them as a "model" that, according to the mission to be developed, will be rich in terms of set design and/or dynamism. Both the "world" and the "model" have a series
of configuration parameters that can be accessed from the Gazebo graphical environment, through plugins (executables with specific functionalities) or through control platforms such as ROS [45,46]. Described in this way, everything seems very simple, but working in a virtual environment simulating the basic capabilities of a mobile robot is not as simple as it seems. These tools are powerful because they hide all this complexity behind the functionalities they offer and allow us to interact with them through configurable parameters, which in the case of the physical environment are interrelated and, in many cases, have immediate implications on each other. It is therefore important to know the basic concepts and laws that govern them, to understand how they have been incorporated into these tools and what they really mean in each environment [47]. Thus, Gazebo is a three-dimensional open-source dynamics simulator for single- and multi-robot mechanisms, for indoor and outdoor environments. Despite the fact that it was created to close the gap of realistic robot simulation in outdoor environments, users mostly use it for indoor simulations. Gazebo attempts to create realistic worlds for the robots, relying heavily on physics-based characteristics; this means that, when a model is pushed, pulled, knocked over, or carried, it "reflects the physics" [48].
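As a minimal sketch of how such physical parameters are exposed (the engine choice and numeric values here are illustrative, not those used in the paper), a Gazebo world file might select the physics engine and its step size as follows:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="example_world">
    <!-- Engine type may be ode, bullet, simbody or dart -->
    <physics type="ode">
      <max_step_size>0.001</max_step_size>
      <real_time_factor>1.0</real_time_factor>
    </physics>
    <include><uri>model://ground_plane</uri></include>
    <include><uri>model://sun</uri></include>
  </world>
</sdf>
```

Smaller step sizes improve physical accuracy at the cost of computation time, which is precisely the compromise between fidelity and computational capacity discussed above.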
The general structure (see Figure 1) remains simple and almost unchanged since it was presented in 2004, because it relies on third-party software packages: ODE for articulated rigid-body dynamics and kinematics, and a system-independent visualization toolkit, called GLUT, for the creation of 2D and 3D interactive applications (over the standard OpenGL library). This characteristic makes Gazebo an independent platform, which permits, for example, adding dynamics engines such as Bullet, Simbody, and DART for different versions and platforms; however, the original ODE remains the default engine [49]. Gazebo's major feature is thus the creation and addition of models, which are virtual dynamic objects such as robots, actuators, sensors, ground surfaces, buildings or other stationary objects, relying on ODE dynamics by means of Newton-Euler equations and a first-order time integrator for motion, frictionless joints for constraints, perfectly inelastic collisions, and a friction pyramid for contacts. The World represents these models and environmental factors such as gravity, lighting, and so on. Finally, user interfaces are used by client programs to communicate with and control models [50]. As reported in the Gazebo architecture scheme, there is a division between server and client, provided by two executable programs: "gzserver", which simulates the physics, rendering, and sensors, and "gzclient", which provides a graphical interface to visualize and interact with the simulation. The client and server communicate using the Gazebo communication library, based on Google Protobuf and boost::ASIO for message serialization and transport, respectively [51]. The Gazebo features are described on the official website, where there are also tutorials and models that help in understanding such a powerful environment. For modelling, Gazebo uses SDF (Simulation Description Format), an XML format to describe objects and environments, capable of describing all properties of a robot
model (links, joints, sensors, plugins), static and dynamic objects, lighting, terrain, and even physics. Links are described by Inertial (mass, moment of inertia), Collision and Visual (geometry) elements, which are used by the physics, collision and render engines, respectively. Joints connect two links and are used to constrain their movement, limiting the DOF (degrees of freedom) according to their type (revolute, prismatic, revolute2, universal, ball, screw) [52]. Because of the above, the framework, conceived as a robotic platform and a work environment, has been developed under the guidelines and scope of the chosen base robotic middleware, called Gazebo-ROS, and the integration capabilities that expand its usage possibilities [53]. It is also known in the robotics community that several difficulties are faced while developing robotics applications due to the heterogeneity of the concepts in the field. In fact, in the case of mobile robotics, developers must master the details related to the locomotion medium of the robots, their morphology, and their sensors. In addition, the variability of the hardware makes robotic applications fragile, which means that a hardware change in an already developed application would imply rewriting the code [54]. To respond to the problem of hardware variability, some robotic middleware, such as ROS, MIRO, and PyRO, proposed abstractions of the hardware components with respect to their technical details; it is thus possible to encapsulate specific data and instead provide higher-level data. However, these abstractions are still at a low level and do not allow the isolation of some hardware components' changes [55]. After evaluating the different alternatives reported in Table 1, we chose Gazebo-ROS as the base middleware environment because it supports the simulation and control of complex robotic missions, thanks to its easy integration with tools such as Simulink, MATLAB, and Solidworks.
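As an illustration of the SDF elements described above, a minimal model might declare one link with its Inertial, Collision and Visual elements and one revolute joint as follows (names and numeric values are purely illustrative, not taken from our prototype):

```xml
<model name="simple_rover">
  <link name="chassis">
    <inertial>
      <mass>2.0</mass>
      <inertia>
        <ixx>0.01</ixx><iyy>0.01</iyy><izz>0.01</izz>
        <ixy>0</ixy><ixz>0</ixz><iyz>0</iyz>
      </inertia>
    </inertial>
    <collision name="chassis_collision">
      <geometry><box><size>0.3 0.2 0.05</size></box></geometry>
    </collision>
    <visual name="chassis_visual">
      <geometry><box><size>0.3 0.2 0.05</size></box></geometry>
    </visual>
  </link>
  <link name="left_wheel">
    <!-- inertial, collision and visual elements analogous to the chassis -->
  </link>
  <joint name="left_wheel_joint" type="revolute">
    <parent>chassis</parent>
    <child>left_wheel</child>
    <axis><xyz>0 1 0</xyz></axis>
  </joint>
</model>
```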

Wheeled Robot Modelling in a Gazebo-Based Environment
The mathematical modelling of mobile robots is generally carried out in order to understand their behaviour in an established operating environment. This mathematical formulation describes the kinematic and dynamic models that serve as the basis for the design and control of robots in general [48,56]. When designing mobile robots, the aim is to achieve models with the levels of reliability and manoeuvrability necessary to fulfil the desired functionalities, such as precision and speed, while having stable mechanical structures [57]. Furthermore, such analysis, depending on the morphology, allows studying the best arrangement of components like sensors and actuators, in order to fulfil the purpose of the robot. Kinematic and dynamic characteristics are expressed in mathematical formulations that may or may not consider the geometry of the robot [58,59]. It is also possible to develop several mathematical models to represent the same mobile robot, each of them having a different utility depending on the functionality that we want to achieve, observe or analyse. The WMR reported in Figure 2 is a prototype wheeled mobile robot designed and assembled in the Applied Mechanics laboratory of the Department of Industrial Engineering of the University of Salerno. It has a conventional geometry that facilitates the study of control design applications, identification techniques and autonomous navigation algorithms. Furthermore, it is normally used for educational activities in order to provide a first approach to robotics for our students [60,61]. The chassis of the three-wheeled mobile robot is made of metal and acrylic and combines several sensors for performing different tasks. For obstacle avoidance, SRF05 and SRF06 ultrasonic sensors are installed on the vehicle, together with three other sensors on the front dedicated to object-recognition activities. Such ultrasonic sensors have claimed detection ranges from 2 cm to 450 cm and from 2 cm to
510 cm, respectively, with an accuracy of 2 mm. The two fixed-axle wheels are driven by electric DC gearmotors with digital incremental encoders. The system is controlled, depending on the experimental activities, by an Arduino Galileo, a board based on the Intel Quark SoC X1000 application processor, or an Arduino Mega2560. The sensors, actuators, and microcontrollers are powered by a 12 V battery (see Figure 2). On the rear, a castor wheel gives stability to the rover and allows the chassis to remain horizontal. The traction-steering system of our robot allows the linear and angular speeds to be managed independently; the advantages derived from the mechanical structure and the control electronics make this configuration a simple solution that can be subjected to various laboratory tests [62,63]. These advantages can be summarized as follows:
• simple mechanical structure that facilitates kinematic modelling;
• low manufacturing costs;
• it facilitates calculations of safe space (free of obstacles) by using the biggest dimension of the rigid platform as the "robot radius".
It also facilitates the calibration of various components that tend to present systematic errors, such as unequal wheel diameters, wheel misalignment, effective contact points of the wheel with the floor, and loss of efficiency of the encoders [64]. However, the disadvantages are:
• difficulty moving on uneven surfaces;
• the loss of contact of one of the active wheels with the ground can change the orientation sharply;
• sensitivity to wheel slip, due to slippery floors or external or internal forces.
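The independent management of the linear and angular speeds mentioned above follows from the differential-drive kinematics. A minimal sketch, using illustrative wheel-radius and half-axle values rather than the actual dimensions of our prototype, is:

```python
def body_twist(omega_l, omega_r, r=0.03, half_axle=0.075):
    """Differential-drive forward kinematics.
    omega_l, omega_r: left/right wheel angular speeds [rad/s];
    r: wheel radius [m]; half_axle: half the distance between the wheels [m].
    Returns the chassis linear speed v [m/s] and yaw rate w [rad/s]."""
    v = r * (omega_r + omega_l) / 2.0                # mean rim speed
    w = r * (omega_r - omega_l) / (2.0 * half_axle)  # rim-speed difference over the track
    return v, w

# Equal wheel speeds give straight motion (w = 0); equal but opposite
# speeds give an in-place rotation about the axle midpoint (v = 0).
```

This is exactly why the configuration lends itself to simple laboratory tests: each motion primitive corresponds to an elementary choice of the two wheel speeds.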
In order to create a good model capable of reproducing the dynamic behaviour of the WMR, we decided to build a multibody model of the rover starting from a CAD assembly file from the Solidworks software [65].
Such CAD files, through the use of the Blender software, allowed us to obtain a multibody model of the rover for the Gazebo environment (see Figure 3). In the Gazebo-ROS platform, it is also common to describe a model with the ".urdf" or ".urdf.xacro" file types, which are used in RViz, a visualization tool heavily employed for testing and debugging, as reported in Figure 4. Once the system had been modelled, we concentrated on the force fields that surround the robot in the environment in which it is located. In general, the kinematic modelling of a rover depends on the physical characteristics of the robot and its components [13][14][15]. These characteristics make the robot suitable for a certain task and, vice versa, the task itself determines, in a first stage, the structural particularities of the vehicle. The design must consider the mobility required to carry out the assigned task, energy efficiency, weight/load ratio, dimensions, and manoeuvrability, as well as the ground operating environment [66,67]. Ground mobile robots distribute their traction and steering systems on the axes of their wheels according to the demands of speed, manoeuvrability, and the characteristics of the terrain in which they must perform. The capacities required by the missions determine the most convenient type of wheels, their number and arrangement, as well as the traction and steering system and, finally, the physical form of the robot. Therefore, several mathematical models can be used to represent the kinematic characteristics of the robot, by incorporating the various properties that are of interest for achieving or observing a particular behaviour.
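As an example of such a kinematic model, the pose of the rover in the plane can be propagated from the body velocities with a simple unicycle model (a sketch under the usual flat-floor, no-slip assumptions):

```python
import math

def integrate_pose(x, y, theta, v, w, dt):
    """Propagate the planar pose (x, y, theta) of a unicycle-type robot,
    given linear speed v [m/s] and yaw rate w [rad/s], over one step dt [s],
    using explicit Euler integration."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta
```

Richer models replace the Euler step with exact arc integration about the instantaneous center of curvature when the yaw rate is non-zero.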

Figure III.1 Unisa-UGV 3D prototype (Source: DIIN -UNISA Department of Industrial Engineering)
Thus, the structure of our UGV is formed by a rigid platform equipped with two conventional fixed front wheels and a rear castor wheel that gives stability, and it moves in a horizontal plane. During the movement, the plane of each wheel remains vertical and the front wheels rotate on the same horizontal axis. In addition, the contact between the wheels and the ground is ideally reduced to three individual points, as can be seen in the figure above. Based on these models, the robot determines the different positions in which it is located and the speeds at which it moves. For our rover, we have chosen standard wheels that meet the three conditions defined for this design:

• The front wheels are equidistant on the common axis of rotation, without lateral variations, while the castor, located at the rear, provides a pure rolling contact with the ground without causing slip in the vehicle when moving.

• The mechanical design of the two front wheels as "fixed" imposes a speed restriction in the driving direction (only forward and backward), while the castor wheel has free movement. The two front wheels are controlled by the two actuators, while the idler wheel is passively driven, meaning that it is influenced by the general movement of the chassis and does not impose an additional speed restriction on the movement of our robot [68]. For our vehicle, the origin of the reference frame has been located at the midpoint of the line that joins the two fixed wheels, with one axis coinciding with this line to form the orthogonal frame [69]. Three possible movements for vehicles that use this technique are reported in the literature: the first, a straight-line movement, when the speeds of both front wheels are equal; the second, a rotation about the central axis (the midpoint of the common axle), when the wheel speeds are equal but in opposite directions; and the third, a rotation around one of the wheels, when one of them has zero speed. There is no possibility of lateral movement; this restriction is called a singularity. The other singularities, related to errors in the relative speeds of the wheels or small variations in the level of the ground, are mitigated by the castor wheel [70]. When the front wheels act independently, i.e., by varying their speeds, the mobile robot moves along linear trajectories or turns, to the right or to the left depending on which wheel has the lower speed value. All movements are expressed with respect to the frame of the vehicle. If we want the vehicle to move in a circle, the robot must turn around a point that lies along the common axis of the right and left wheels. The point about which the robot rotates is known as the instantaneous center of curvature [71]. Mobile robots need locomotion mechanisms that allow them to move through the environment to carry out the assigned mission; these mechanisms of locomotion on land
can be different and, depending on the choice made and implemented, the robots can walk, jump, run, slide, crawl or roll. The preferred ground locomotion mechanism, chosen by researchers and the robotics industry alike, has been by far the wheel, being mechanically simpler and more efficient, especially on flat surfaces [72]. The key components that influence the total kinematics of the WMR are undoubtedly the wheels, so their selection and arrangement on the vehicle are important. There are four types of wheels commonly used, each with advantages and disadvantages and very diverse kinematics:

• Standard wheel: two degrees of freedom, rotation around the wheel axle (usually motorized) and around the point of contact;
• Rotating (castor) wheel: two degrees of freedom, rotation around an offset steerable joint;
• Swedish wheel: three degrees of freedom, rotation around the wheel axle (usually motorized), around the rollers, and around the point of contact;
• Ball or spherical wheel: omnidirectional, but technically difficult to realize.
According to the selection and arrangement of these wheel types, the WMR will have different degrees of freedom, which characterize its manoeuvrability, i.e., how easily it rolls in a straight line or performs turning motions. Omnidirectional robots have the maximum manoeuvrability in the plane. Such behaviour is granted by the Swedish wheels; this freedom of movement can also be achieved in the plane with centred steerable wheels whose traction and steering motors control each wheel independently and synchronously, through mechanical belt systems or electronic means.
The last option has the degree of mechanical and electronic complexity needed to achieve good coordination between the wheels, and it also requires a complex control algorithm, so its use is very limited [73]. The efficiency of wheeled robots depends to a large extent on the quality of the terrain, particularly its smoothness or hardness, the type of surface (flat or non-flat), and the number of obstacles (free or dense). Thus, conventional wheeled vehicles usually move on sufficiently regular and hard terrain, while, for irregular terrain, tracked wheels with gears and adapted diameters are required. For example, a robotic vehicle for floor-cleaning missions will have a configuration appropriate for moving on polished and/or carpeted floors, while ground mobile robots that must attend monitoring requests in devastated places will have a different configuration, able to adapt to irregular terrain, debris and other conditions that limit their displacement. In Gazebo, when two objects collide, such as wheels rolling on a surface, a friction force is generated; those forces are managed by the physics engine defined in the simulator software. Gazebo supports different physics engines, including ODE, Bullet, Simbody, and DART, and it is possible to choose one through the <physics> element in a .world file. These physics parameter configurations make it possible to personalize the characteristics and values needed in the simulation environment through profiles, available via the C++ API or the gz command line tool [74]. The performance, accuracy, and general behaviour of the physics simulation depend to a great degree on the physics engine and on how its main parameters are defined. Some of these parameters are shared between the different physics engines supported by Gazebo, like the maximum step size and the target real-time factor. To manage them, Gazebo has a physics preset manager interface that offers a way to easily switch between sets of physics
parameters and save them into Gazebo's .sdf robot configuration. This physics configuration can be recalled and redefined at any plugin creation by calling world → GetPresetManager() in C++ programs. The default physics engine in Gazebo is ODE. In this engine, the main friction elements are composed of two parameters, "mu" and "mu2", the friction coefficients along the contact surface:
• "mu" is the Coulomb friction coefficient µ for the principal contact direction, i.e., the first friction direction;
• "mu2" is the friction coefficient for the second friction direction (perpendicular to the first).
Other important, closely related elements are:
• the contact stiffness k_p and damping k_d for rigid body contacts, defined as the Gazebo link parameters "kp" and "kd";
• the joint stop constraint force mixing (cfm) and error reduction parameter (erp), used to simulate damping.
In simulation, the dynamics world (dWorld) stores bodies (dBody) and is responsible for computing where they are at any given time. Thus, the "state" of all the rigid bodies is recalculated at every "step time", meaning that, at a selected period of time, a new position vector (x, y, z) and linear velocity (vx, vy, vz) of the body's point of reference, which usually corresponds to the body's centre of mass, are calculated, in addition to its orientation, represented by a quaternion (qs, qx, qy, qz) or a 3 × 3 rotation matrix, and its angular velocity vector (wx, wy, wz) [75]. When two bodies are close enough, the simulator calls a function to determine which bodies or "geometries" are potentially intersecting/colliding. It creates a collision space (dSpace) to store the geometries corresponding to bodies (dGeom), flags those bodies, and specifies the maximum number of contact joints to create, so that the dynamics world can adjust its velocities accordingly. Each body has its own collision space, which is first tested for collisions against the others, and then the dGeoms inside them are tested in case they are nested. ODE has its own built-in collision detection with collision spaces and dGeoms that returns a number of "contact joints" defining where the bodies are in contact. All contact joints that ODE finds are placed in the "contact" array. The functions called for space creation and flag control look like:
void dSpaceCollide(dSpaceID space, void *data, dNearCallback *callback);
int dCollide(dGeomID o1, dGeomID o2, int flags, dContactGeom *contact, int skip);
The CFM (Constraint Force Mixing) and ERP (Error Reduction Parameter) can be set independently in many joints; both are floating point values between 0.0 and 1.0. The first permits a degree of joint constraint violation, so collisions have a "spongy" look; the second defines how much of the "joint error" is fixed in each time step. The Gazebo tutorials suggest default values of 0.2 and 0.8, respectively [76]. ERP and CFM can be selected to have the same effect as any desired spring and damper constants. If you have a spring constant k_p and a damping constant k_d, then the corresponding ODE constants are

ERP = h k_p / (h k_p + k_d),   CFM = 1 / (h k_p + k_d),

where "h" is the step size. Between step times, the user can call functions to apply forces to the rigid body. These forces are added to "force accumulators" in the rigid body object, which means that, when the next step time happens, the dynamics step starts with the sum of all the applied forces, used to push the body around; then, collision detection is called and, if ODE finds a possible collision, it creates a "contact joint". Gazebo has several friction models: cone friction, pyramid friction, and box friction. Thus, depending on the "friction_model" variable setting, specific pieces of code set the parameters and call the functions needed to treat the collisions. If it is not specified, dxConeFrictionModel is used. The Coulomb friction model establishes a simple relationship between the normal and tangential forces present at a contact point [77]. The rule is

|F_T| ≤ µ |F_N|,

where F_T is the tangential force vector, F_N is the normal force vector and µ is the friction coefficient.
In ODE's friction cone model (see Figure 5), to achieve "adhesion mode", the total friction force vector must lie inside the cone, and the friction force must be enough to prevent the contact surfaces from moving with respect to each other. If this vector lies on the surface of the cone, then the contact is in "sliding mode", and the friction force is usually not large enough to prevent the surfaces in contact from sliding. The parameter "mu" represents the maximum ratio of tangential force to normal force. In this model, there are currently two approximations to choose from:
• "mu" is a force limit, to be chosen appropriately for the simulation, i.e., the maximum friction (tangential) force that can be present at a contact, in either of the tangential friction directions. This is rather non-physical, because it is independent of the normal force, but it is the computationally cheapest option;
• the friction cone is approximated by a friction pyramid aligned with the first and second friction directions. First, ODE computes the normal forces assuming that all the contacts are frictionless; then, it computes the maximum limits F_m for the friction (tangential) forces from |F_m| ≤ µ|F_N| and proceeds to solve the entire system with these fixed limits.
This differs from a true friction pyramid in that the "effective" mu is not quite fixed and can be set to a constant value around 1.0 without regard for the specific simulation.
ODE will automatically compute the first and second friction directions; however, it is possible to manually specify the first friction direction in the model description file ".sdf", or in ".urdf" if a Gazebo-ROS environment is used. The two objects in collision each specify their own "mu" and "mu2"; Gazebo will choose the smallest "mu" and "mu2" from the two colliding objects. The valid range of values for "mu" and "mu2" is any non-negative number, where 0 equates to a frictionless contact and a large value approximates a surface with infinite friction. Tables of friction coefficient values for a variety of materials can be found in engineering handbooks or online toolboxes.
The cone friction model algorithm computes the corresponding hi_act and lo_act limits for the friction constraints. For each contact there are three multipliers, lambda_n, lambda_f1, and lambda_f2; the algorithm couples the two friction multipliers so that the tangential velocity at the contact frame satisfies the Coulomb cone condition.
For our wheel-terrain interaction model, we consider non-deformable wheels, because our real WMR is made of solid plastic materials and the load carried during operation, in both simulation and real experiments, would not be large enough to change their rigidity. The indoor environments selected have solid pavement; therefore, the wheel-terrain interaction can be reasonably approximated as a point contact, which permits the use of classical Coulomb friction to describe the bounds on the available tractive and lateral forces as a function of the load (basically the power source, motors, sensors, actuators, and other components carried on board), with a friction coefficient in Gazebo-ROS.
In our case, based on our 3D model characteristics, the load carried on board, and the wheel-terrain interaction hypothesized, we configured "mu" and "mu2" with the same value of 0.8, both for the two frontal wheels and for the back caster one, in the WMR .urdf configuration file. We configured <kp>100000000.0</kp> and <kd>10.0</kd> for all three wheels. In addition, in the .world file, we set the element <constraints> for our simulated environment with the values <cfm>0.00001</cfm> and <erp>0.2</erp>.
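Put together, the configuration above can be sketched as follows; the link name "wheel_left_link" is a hypothetical placeholder, and the exact tag names (<mu1>/<mu2> in the URDF Gazebo extension versus <mu>/<mu2> in raw SDF) depend on the file type used.

```xml
<!-- In the robot's .urdf: per-wheel contact parameters (hedged sketch) -->
<gazebo reference="wheel_left_link">
  <mu1>0.8</mu1>          <!-- first friction direction -->
  <mu2>0.8</mu2>          <!-- second friction direction -->
  <kp>100000000.0</kp>    <!-- contact stiffness -->
  <kd>10.0</kd>           <!-- contact damping -->
</gazebo>

<!-- In the .world file: ODE solver constraints -->
<physics type="ode">
  <ode>
    <constraints>
      <cfm>0.00001</cfm>
      <erp>0.2</erp>
    </constraints>
  </ode>
</physics>
```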

Results
The described capabilities required by autonomous vehicles, such as mobile robots, can incorporate techniques that allow them to estimate their position, build or use a map, navigate with or without prior knowledge of the environment, identify landmarks, and plan routes and follow them [78]. To do so, it is necessary to identify the mobile robot in an abstract way, as a point (x, y) in a continuous or delimited space of two or three dimensions, generally a Cartesian plane, to describe its state, also called pose (position and orientation). When the mobile robot moves, it changes its position and orientation, but it must do so through free spaces; each free space, called C_free, could house the robot along its path.
It is also possible to represent the mobile robot as a set of rigid bodies; for example, the WMR has the chassis, the wheels, the actuators, the on-board sensors, and the communication equipment. With these three fundamental abstractions, "space", "pose" and "free space", we can create techniques for path planning, localization, perception or sensing, mapping and SLAM (simultaneous localization and mapping) to give the mobile robot a safe journey. An example of such techniques is reported in Figure 6. It is possible to observe the rover in the Gazebo environment called "empty world" on the left side of the screenshot; in the middle, the plot of the executions aligned with the first three points toured with precision by our WMR. The first three points correspond to a simple square that we prepared as a mission to fulfil during WMR navigation; finally, on the right side of the screen capture, it is possible to see the main components of the Matlab-Simulink control program for this waypoint navigation activity in Gazebo. The virtual model of our unmanned vehicle was made using XML-like languages (Xacro, URDF, and SDF), compatible with the Gazebo simulator environment. Despite the primitive shapes of our rover, it was not easy to construct the model directly in Gazebo's building editor, due to its limited building tools. Thus, we used two modelling strategies; in both, it was necessary to pay special attention to the geometry and relationships of all the components. In addition, we adhered, as much as possible, to the three most important ROS standards: one related to Standard Units of Measure (REP-103), another to Coordinate Conventions (REP-105; in robotics, the orthogonal coordinate systems are commonly called frames) and finally ROS Package Naming (REP-144) [79]. The first strategy was to download and modify a xacro file from another simple wheeled mobile robot provided by the ROS community on the GitHub platform; in our case, it was a four-wheeled vehicle. The second
strategy was to design our WMR model in SolidWorks and import it as a .dae file through a plugin that converts a 3D model into URDF. Both roads yield a virtual 3D model, and the time spent creating a model from scratch is comparable to the time needed to export an adequate model from a CAD tool; both are time-consuming. The second strategy, however, allows the model generated in SolidWorks to be exported with the links' inertial information calculated directly from the model, as well as with the sensors and their characteristics. In this way, the model is completely ready to use; nevertheless, using the plugin to convert to a urdf or sdf file was not a straightforward endeavour: many workarounds were needed in order to align the reference frames and the poses. The key process in taking a CAD model from SolidWorks through the export process into Gazebo for simulation requires a complete description of the robot and its components. In a Gazebo-ROS platform, there are two kinds of files, the Universal Robot Description Format (URDF) and the Simulation Description Format (SDF). The URDF file type is used heavily in ROS for visualization and control, while SDF files are what Gazebo uses when performing the simulation. Before importing the CAD model into Gazebo, it is advisable to simplify the assembly as much as possible so that there are no errors during the export process. Thus, bodies are assembled together if they will not act independently in any way; the parts of the body that will participate in the motion are considered rigid bodies and must be identified correctly. To export from SolidWorks to Gazebo, it is necessary to generate the meshes for the main body and all the components. Then, it is necessary to check that the various parts are positioned correctly by opening the robot in Gazebo. If the collision models are not in the same place as the visual model, there may be an error with the origins in the
SolidWorks model. To make this easier, place the origins of the visual and collision models in the same location. It is also preferable to check the meshes in software such as Blender and move the part origin to the exact position if necessary. The flowchart reported in Figure 7 shows the procedures to follow in order to obtain a multibody model for the Gazebo environment. The first process uses a plugin in SolidWorks, called SW2URDF, to get a kind of ROS package to be used with Gazebo-ROS; once compiled (catkin process), the .urdf file is available for use in any ROS application. The second process, called Gazebo Exporter, gives .sdf files with the robot description and all the files needed to launch it in the Gazebo simulator. There is a small application, a plugin in SolidWorks, which helps with the task of getting the files needed for Gazebo-ROS. To work with it, the sw2urdf plugin must be installed and configured; immediately after the exporter button is activated, "Export to URDF" is available from the file menu. Gazebo Exporter generates an SDF file, and a graphical scheme helps to choose the components of the main body. The first step is to choose the model name (without spaces) and to select the base plane and the axis direction. Another important thing to do, on the first screen, is to attach the various links to the base they belong to. For each link, it is necessary to specify the name, the collision and visual components (selecting the component itself from the SolidWorks model), the mass and the inertia matrix. It is also possible to add sensors, cameras or motors to the links. The values of the physical properties can be set up at this stage for each rigid body component; they can also be managed later in a global configuration. The inertia values are generated by SolidWorks, which helps a lot when simulating in Gazebo-ROS. ROS uses forward and inverse kinematics through the RobotModel and RobotState core classes, which come with the MoveIt! ROS-based
package. Invoking their functions and variables, it is possible to retrieve and set the values and limits of all the model frames and their states, individually or grouped (for example, related frames put together and defined as the left arm of a humanoid). This package uses the function "setToRandomPositions" to get information about an end-effector or any particular link, joint, or group of them. The MoveIt! package is mostly used to control robotic arms. To control a WMR, it is possible to use one of the differential drive plugins available in every Gazebo-ROS implementation; it is a model plugin that provides a basic controller for differential drive robots in Gazebo. There is another ROS-based package, called Navigation, which also has a differential drive algorithm but, to use it, a planar laser must be mounted somewhere on the mobile base, sending an appropriate ROS message, together with the tf ROS-based package, in order to keep track of multiple coordinate frames over time, related to each other in a tree fashion. Navigation takes information from odometry and sensor streams and outputs velocity commands to send to a mobile base [80]. It is possible to install one of the packages made available by the ROS community on the GitHub site; there are versions written in C++ and Python, which differ in performance, restrictions, and code weight. Once downloaded and made usable by the catkin compilation process, they become available like the other ROS-based packages. Finally, customized packages can be created by using the available classes in C++ programs, which need to be compiled into an ROS package. For all ROS-based packages, it is usual to invoke the functionalities through launch files, script files used to spawn 3D models in a simulator or to launch ROS nodes with the desired characteristics through parameter configurations. The kinematic and dynamic declarative definitions of transmissions are contained in the robot's ".urdf" or ".yaml" files. In general, the control flow in an ROS-based
framework starts when the joint state data and the input set point are taken as input from the robot's actuator encoders; then, a generic control loop feedback mechanism, typically a PID controller, controls the output, typically an effort or velocity, which is sent to the robot's actuators. The ros_control framework, Navigation, MoveIt! and other ROS-based packages and plugins, such as the differential drivers for the different kinds of steering mechanisms, include Kalman filtering and Bayesian-based approaches in their robot motion algorithms to deal with positioning and sensor uncertainty. To interact with the environment, there is a need for sensing and actuating instruments; the robot's capabilities and its mission (indoor or outdoor navigation, mapping, grasping, face recognition) or the expected goal achievement (real-time behaviour or accuracy) will guide the selection of the number and precision of the sensors and actuators to put on the WMR. Gazebo and ROS have plugins for the most used sensors and actuators; if a different kind or brand of sensor, motor or other component is needed, it is possible to adapt any available plugin, changing the particular features that are usually described in the component data sheet. Those definitions are usually implemented in the calling ".launch" or ".world" files, or in ".yaml" configuration files, as parameters to be configured. We used a laser distance sensor, a Lidar 360 LDS-01, and an IMU with three axes for the gyroscope, accelerometer, and magnetometer, as can be seen in Figure 3. We defined their geometries and placement through link and joint characteristics for both of them in the ".urdf.xacro" and ".gazebo.xacro" files, associated with the libgazebo_ros_laser.so and libgazebo_ros_imu.so plugins, respectively. Another add-on used is libgazebo_ros_diff_drive.so, a differential drive plugin already described in the control section above. Some robot motions could require a fine-grained control of velocities in-between
trajectory points, for example, if they need to manipulate or carry fragile or dangerous objects. To send velocity commands, there are different options; a WMR usually uses a keyboard or joystick for teleoperation, and algorithms built on programmatically controlled sensors and actuators for unmanned operation. In addition, Gazebo-ROS offers ready-to-use alternative packages: for keyboard teleoperation, the one called teleop_twist_keyboard has been available for installation from the package manager since the Indigo distribution, and other implementations are available, such as PAL Robotics' key_teleop. For joysticks and gamepads, such as the PS3 and Xbox360 ones, teleop_twist_joy is available for the Indigo and Kinetic ROS distributions, using parameters to scale the inputs for the command velocity outputs [81]. There are two kinds of files for performing simulation: the Universal Robot Description Format (URDF) file type, used heavily in ROS for simulation and testing, and the Simulation Description Format (SDF) files used by Gazebo. In the Gazebo-ROS platform, it is common to have a model in ".urdf" or ".urdf.xacro" file types, which are used in RVIZ, a visualization tool heavily used for testing and debugging. The robot description in .urdf is converted automatically into ".sdf", on the fly, when it is launched into the Gazebo simulator. Our WMR could be seen simultaneously in Gazebo and RVIZ; the first is the simulator environment, where it is possible to use the sensor plugins to see and control the robot behaviour in a world configured with simulated physics characteristics, while, in the visualization environment, it is possible to see built-in display types such as the RobotModel and its axes of reference, grid cells, laser scan data, point clouds, pose arrays, the map and its navigation path, display markers, the TF transform hierarchy, and ranges drawn as cones representing the measurements from sonar or infrared sensors. To show the integration capabilities of the
Gazebo-ROS platform, we used the Matlab-Simulink software, connected as an ROS node, to control our WMR. We developed a Simulink controller to guide the simulated WMR to follow consecutive waypoints. In order to complete the mission in a smooth and appropriate way, we took care of some considerations, such as getting to within 0.1 m of each waypoint, and set the maximum forward velocity to 0.5 m/s and the maximum angular velocity to 1.0 rad/s. In addition, we paced the update rate of Simulink at 20 Hz [82]. The controller developed permitted having an unmanned vehicle that carries out a selected mission, controlling the velocities required to follow each prefixed waypoint. It included a PI (Proportional-Integral) and a PID (Proportional-Integral-Derivative) controller, which adjusts the control force based on the error signal at time step t, called e(t), between the desired value of the system (the setpoint) and the measured output value. The resulting behaviour can be observed in Figure 6, already described above; in Figure 8, instead, the waypoint navigation conducted in co-simulation between the Gazebo simulator and the controller designed in Simulink is reported. To test the dynamic behaviour of the WMR in simulation, various tests were performed, experimenting with various plugins for odometry, mapping of three-dimensional environments and autonomous navigation. Among the various simulations, it was decided
to show these results to appreciate the role of the castor wheel. The goal was to let the WMR traverse a trajectory by rotating the rover in place. In Figure 8a, the influence of the wheel self-alignment can be seen after the trajectory change. This behaviour can be strongly mitigated by making the control system robust, as shown in Figure 8b.

Discussion
The goal of this work was to test new tools for creating multibody models and new simulation environments. The advent of this fourth industrial revolution, based on the fusion of different technologies, is the new goal for companies. The authors have long been engaged in the search for new tools and methods to model and simulate machines and systems. In particular, the use of open-source software, thanks to the formation of user communities, allows a dynamism in the development of new tools that is not comparable with the evolution of proprietary software. There are many software programs that allow the modelling of complex systems starting from three-dimensional geometries; in fact, the creation of detailed models relies mainly on a detailed geometry and mass distribution. Among the most used simulation environments there is undoubtedly SimScape, a multi-domain environment from MathWorks that allows the integration of mechanics, electronics, hydraulics, etc. in a single simulation environment. The software chosen for this work is instead the open-source simulation software Gazebo, which allows, in a single environment, the simulation not only of the system as a whole, but also of the environment with which it must interact. Furthermore, the Gazebo-ROS framework allows a quick simulation and control of robots, since it allows the reuse of large open-source components whose use can be customized through configurable parameters, depending on the robot, the desired simulation environment and the degree of reliability and precision of the expected physics, even though this will be a trade-off between the "degree of reality" and the available computing capacity. The rigid body dynamics already integrated into the ODE physics engine, the default in Gazebo, uses solutions of ordinary differential equations, or spring-damper models for the physical properties of contacts between rigid bodies or soft bodies, which can co-exist in the simulation environment, in addition to algorithms of
constraint methods or penalty systems that manage the impacts of collisions between bodies, depending on the forces entered at each unit of time during the simulation [83]. After testing various techniques for modelling the rover in Gazebo, the authors used the SolidWorks 3D design software to import the geometry of the rover into the simulation environment. The next step concerned the definition of the forces present at the interface between the wheels and the floor. The plugins provided by the software allow, in a simple and intuitive way, the calculation of the normal reaction between wheel and plane and, consequently, of the traction force due to the presence of friction. In our case, the simulation of the rigid bodies required the configuration of the parameters of the objects in simulation; that is to say, the parameters for each component of our WMR, such as the friction coefficients mu and mu2 of the wheels and the contact stiffness and damping, kp and kd respectively, to characterize the materials in contact with the ground on which they roll, and the environment parameters, like cfm and erp, as constraints [84].

Conclusions
The potential of these plugins, beyond allowing the vehicle to advance, is demonstrated by the excellent dynamic behaviour of the tilting wheel, present on the rear of the rover, recorded during the simulations. Another contribution of this work is the demonstration of the seamless integration of this software with other programs, including the Matlab calculation software. To test the potential of this integration, a waypoint navigation activity was simulated by designing the control system in Simulink. Given the potential of such a modelling and development method, the response of a rover modelled in the Gazebo environment will be compared with the response of the real rover in a future paper.

Figure 3. Wheeled mobile robot 3D multibody model in the Gazebo environment.

Figure 4. The wheeled mobile robot in the Gazebo environment.

Figure 7. Flow chart for modelling a generic system from SolidWorks to Gazebo.

Figure 8. Waypoint navigation tests for the optimization of the control system developed in a Matlab-Simulink environment.