The Integration of Collaborative Robot Systems and Their Environmental Impacts

Today, industrial robots are used in dangerous environments in all sectors, including the sustainable energy sector. Sensors and processors collect and transmit information and data as a result of the application of robot control systems and sensory feedback. This paper proposes that the performance of a collaborative robot system can be estimated by evaluating the mobility of its robots. Scenarios have been determined in which an autonomous system is used for intervention in crisis situations caused by fire. The experimental model consists of three autonomous vehicles, two of which are ground vehicles and the other an aerial vehicle. The conclusion of the research described in this paper highlights the fact that the integration of robotic systems made up of autonomous vehicles working in unstructured environments is difficult and that, at present, there is no unitary analytical model.


Introduction
The strategic concept of using robots has spread throughout the world and has stimulated government programs in the United States, China, Japan, South Korea, India, and many developing countries, taking into account specific national outlooks [1][2][3][4]. The academic community is also increasingly involved in stimulating studies and research on the implementation of digital technologies in economic sectors [5][6][7].
On the other hand, disasters strike anywhere and cause numerous losses to life and property. The statistics presented in the 2016 World Disaster Report confirm the need to implement strategies to reduce the loss and impact on people's daily lives and socio-economic development. Fortunately, making emergency decisions using robots can be an optimal alternative to respond to or control these situations in order to protect both life and property. For example, unmanned ground vehicles and unmanned aerial vehicles can be used where the risk of disaster in response teams is high. Due to its important role in reducing losses and the impact of emergencies, robot collaboration has become an active research area in recent years [8].
Moreover, unmanned autonomous systems (UAS) represent one of the research challenges in the robotics and artificial intelligence domains. Autonomous path planning algorithms can be useful and effective using artificial intelligence, but challenges arise due to the randomness of the environment. This paper aims to highlight the results obtained through the collaboration of two types of autonomous vehicle, namely two unmanned ground vehicles (UGVs) [9] and an unmanned aerial vehicle, operating in an environment with random obstacles. Research work that combines other collaboration tasks and approaches is discussed in Section 4. Finally, this paper is concluded in Section 5.

Methods
Existing studies [8][9][10][11] have neglected the fact that, in decision-making for interventions, the decision maker must treat each situation differently, using measures based on concrete information that mobile robots can make available.
The realization of a collaborative robot family involves developing a complex and integrated model based on the following:

•	Modularity through integration, coordination, evaluation, and optimization of heterogeneous subsystems (hardware, mobile platforms, actuators/grippers/tools, sustainable energy systems), real-time test/evaluation models, and algorithms for subsystems;
•	Cooperative navigation based on the evaluation and optimization of robot movements across the three work environments;
•	Coordination and synchronization of end-effector movement while performing system tests (as a whole), streamlining work and navigation paths via the structural integration of components, effectors, and analytical models;
•	Engaging robotized subsystems (terrestrial/air/underwater robots) in a collaborative/collective/cooperative way, intuitively and safely, in the sense of adapting to the obstacles and orientation uncertainties introduced by the initial algorithms;
•	Assimilation of instructions and updating of working states by interpreting data from sensors and comparing the sensor data with the data stored in the assigned software libraries;
•	Intuitive plug-and-play systems that account for the heterogeneity of the subsystems that form the assembly;
•	Open-source software.

UGV, the Terrestrial Component of a UAS
Path programming is done by introducing an algorithm with both an ideal path and a real path, the latter taking into account continuously measured values [9]. In the unstructured (terrestrial) environment, the analytical approach is based on the Bekker equations [20], according to the Coulomb-Mohr soil failure criterion. It is aimed at determining the following:

•	The soil friction coefficient µ_terrain, based on the tractive force F, the vehicle weight G_a, the ground pressure p_av, the constant specific value of the soil K, the soil shear coefficient δ, and the length of the contact patch L_a;
•	The unitary shear stress τ, where s is the slip coefficient.
Methods for determining friction coefficients and unitary shear stresses have also been developed by other authors [4,21].
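As a sketch of how these quantities are conventionally connected (the standard Bekker/Janosi-Hanamoto forms, not necessarily the exact expressions used by the authors; c is the soil cohesion, φ the internal friction angle, b the contact-patch width, and j = s·x the shear displacement accumulated along the patch, none of which are defined above):

```latex
\mu_{\mathrm{terrain}} = \frac{F}{G_a}, \qquad
\tau(j) = \left(c + p_{av}\tan\varphi\right)\left(1 - e^{-j/K}\right), \qquad
F = b \int_{0}^{L_a} \tau(s\,x)\,\mathrm{d}x .
```

The exponential term expresses that shear stress builds up with accumulated slip, saturating at the Coulomb-Mohr limit c + p_av·tan φ.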
The complexity of the phenomena occurring at the terrain-vehicle interface has driven the development of empirical methods for evaluating vehicle mobility. For example, the NATO Reference Mobility Model (NRMM) software can determine the performance of a terrestrial robot on the move on any type of land in a global sense [9]. Through this performance measurement method, a very good prediction of the maximum permissible speed of terrestrial robots can be made for any specified geographic region in any environmental condition (humid season, precipitation, snow).
Each analytical model separately studies different characteristics of the UGV propulsion performance. Achieving a balance between the mobile robot's capabilities to orient itself, plan, and estimate obstacle positions and the factors limiting cross-country ability [22][23][24] requires accounting for the fact that both the speeds in rectilinear or turning motion and the driving autonomy are influenced by the unstructured character of the terrain, which could be sand, grass, concrete, etc. The conclusion of these studies is that an approach that corrects the coefficients following real-time measurements is much closer to reality. Thus, a predictive control model (PCM) can generate better planning for the paths to be followed.
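As an illustration of this coefficient-correction idea (a minimal sketch, not the authors' PCM; the friction values, smoothing factor, and turn radius are hypothetical), a planner can blend each new terrain measurement into its friction estimate and re-bound the safe speed at every step:

```python
import math

def update_mu(mu_est: float, mu_measured: float, alpha: float = 0.3) -> float:
    """Exponentially weighted correction of the friction coefficient from a new measurement."""
    return (1 - alpha) * mu_est + alpha * mu_measured

def max_safe_speed(mu: float, turn_radius_m: float, g: float = 9.81) -> float:
    """Upper bound on cornering speed so lateral acceleration stays below mu * g."""
    return math.sqrt(mu * g * turn_radius_m)

# Toy receding-horizon loop: re-plan speed as terrain measurements arrive.
mu = 0.8                                 # nominal estimate (e.g., dry concrete)
measurements = [0.75, 0.5, 0.35, 0.3]    # e.g., transition from concrete to wet grass
for m in measurements:
    mu = update_mu(mu, m)
    v = max_safe_speed(mu, turn_radius_m=5.0)
    print(f"mu={mu:.3f}  v_max={v:.2f} m/s")
```

The speed command shrinks as measured friction drops, which is exactly why a measurement-corrected model tracks reality more closely than a fixed-coefficient one.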
The sensor system of the firefighting robot FFR-1-UTM (Firefighting Robot 1 from Titu Maiorescu University) consists of temperature, ultrasonic, proximity, infrared distance measurement, triaxial accelerometer, GPS, gyroscope, air quality, alcohol gas, liquefied petroleum gas (LPG), CO, CO2, CH4, hydrogen gas (H2), and weather station sensors (Figure 1). Predictions regarding mobility over large areas require a stochastic approach, as the terrain profile, terrain-robot interaction, sensors [25,26], altitude, remote sensing, physical properties of the terrain, slope of the land, internal friction coefficients for soft soils, and friction angles for hard soils introduce several variables that generate uncertainties that can be seen during analysis with either analytical or numerical models [27].
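A stochastic treatment of this kind can be sketched as a Monte Carlo simulation: sample the uncertain soil friction coefficient from an assumed distribution and propagate it to a speed prediction (all distribution parameters here are illustrative, not measured values):

```python
import math
import random
import statistics

def speed_from_friction(mu: float, turn_radius_m: float = 5.0, g: float = 9.81) -> float:
    """Map a sampled friction coefficient to a permissible cornering speed."""
    return math.sqrt(mu * g * turn_radius_m)

random.seed(0)
# Uncertain soil friction: assumed Gaussian, clipped to stay physically meaningful.
samples = [max(0.05, random.gauss(0.4, 0.1)) for _ in range(10_000)]
speeds = [speed_from_friction(mu) for mu in samples]

deciles = statistics.quantiles(speeds, n=10)   # 10th ... 90th percentile cut points
p10, p50, p90 = deciles[0], deciles[4], deciles[8]
print(f"speed prediction: p10={p10:.2f}  median={p50:.2f}  p90={p90:.2f} m/s")
```

Instead of a single speed number, the analysis yields a distribution, which is what a mobility prediction over uncertain terrain actually supports.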
Here, the integration environment is represented by an integration tool for open-architecture modeling processes. This AI model makes and implements decisions and collaborates, distributing standard algorithms and simulation codes with non-preferential interfaces, GIS tools, and Python, in line with the objectives of the research task group (RTG) and NATO's research and technology organization (RTO) [26].
Planning deals with finding a sequence of actions to reach a target, starting from an initial state. Planning uses a tree structure to search the state space by generating successor states, and may even skip over intermediate states [28,29].
The determination of the runway requires, first and foremost, solving the uncertainties. For this, we use the Kriging estimation method, based on geostatistical information [29][30][31][32].
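A minimal sketch of how Kriging interpolates a soil property from scattered geostatistical samples (simple Kriging with a known mean and an assumed exponential covariance model; the coordinates and soil-strength values are invented for illustration):

```python
import numpy as np

def exp_cov(h, sill: float = 1.0, rng: float = 10.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng)

def simple_krige(xy, z, target, mean, sill: float = 1.0, rng: float = 10.0):
    """Estimate the field at `target` from samples z at coordinates xy."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
    C = exp_cov(d, sill, rng)                                     # sample covariance matrix
    c = exp_cov(np.linalg.norm(xy - target, axis=-1), sill, rng)  # sample-to-target covariances
    w = np.linalg.solve(C, c)        # kriging weights
    est = mean + w @ (z - mean)      # best linear unbiased estimate at the target
    var = sill - w @ c               # kriging variance (the uncertainty the text refers to)
    return est, var

# Hypothetical soil-strength samples (e.g., RCI-like values) at known coordinates.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([50.0, 55.0, 45.0, 60.0])
est, var = simple_krige(xy, z, target=np.array([5.0, 5.0]), mean=52.5)
print(f"estimate at centre: {est:.2f}, kriging variance: {var:.3f}")
```

The kriging variance is the key output for this application: it quantifies where the terrain map is uncertain and therefore where additional measurements (or caution in path planning) are needed.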
The Bekker-Wong model, a model of uncertainty calculation, uses the statistical data obtained by measuring the soil density rated cone index (RCI) [9]. This method is preferred because it allows the determination of the actual runway profile for all four wheels (each wheel crosses a structurally modified terrain profile following the passing of the front wheels) [33,34].
Programming collaborative terrestrial robots has an exploratory behavior [35,36]. To avoid local minima, this requires multiple approaches, including geographic information systems (GISs), soil geometry, the kinematics and dynamics of the robot, power management, the global system for mobile communications (GSM), and electromagnetic protection.


UAVs, the Aerial Component of a UAS
The use of UAVs for the development of collaborative robot systems has become a necessity due to the complexity and diversification of fire hazards.
Terrestrial robots need to be guided to intervene autonomously in spaces about which they do not have sufficient information. This can be effectively accomplished by combining their own information with that of UAVs. Also, the implementation of autonomous/semi-autonomous dual systems allows specialized intervention team operators to take control of UASs.
For flight planning and control, the UAV's on-board controller uses on-board sensors to estimate the position and orientation, along with setting payload parameters. For the experimental model HEXA-01-UTM (Hexacopter Robot 01 from Titu Maiorescu University), shown in Figure 2, the open-source Arduino IDE (integrated development environment) and Raspbian software were implemented.
The two software packages control motor drivers, encoders, the GPS, GSM, payload, radio transponder, accelerometer, gyroscope, etc. The algorithms for flight planning convert the mission objectives and act on acceleration, kinematics, and aerodynamics to obtain the command set for the trajectory using the feedback kinematic state.
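The conversion of mission objectives into a trajectory command set can be illustrated with a simplified kinematic sketch (not the actual HEXA-01-UTM flight code; the waypoints and cruise speed are invented): the planned route is decomposed into points, and each segment is turned into a heading, climb angle, and duration:

```python
import math

def plan_segments(waypoints, speed: float = 4.0):
    """Convert an ordered list of 3D waypoints into (heading, climb, duration)
    commands, assuming a constant cruise speed between points."""
    cmds = []
    for (x0, y0, z0), (x1, y1, z1) in zip(waypoints, waypoints[1:]):
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        heading = math.degrees(math.atan2(dy, dx))               # yaw command
        climb = math.degrees(math.asin(dz / dist)) if dist else 0.0
        cmds.append({"heading_deg": heading, "climb_deg": climb,
                     "duration_s": dist / speed})
    return cmds

# Hypothetical route: take-off, transit, descend over the target.
route = [(0, 0, 0), (0, 0, 20), (30, 40, 20), (30, 40, 0)]
for c in plan_segments(route):
    print(c)
```

A real flight controller would close the loop on the kinematic state feedback instead of flying these segments open-loop, but the decomposition of the route into per-segment commands is the same idea.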
The quantification of UAV mission accomplishment [37] allows the evaluation of robot performance, either separately or within the UAS to which it belongs. According to AGARD-AR-343 (Advisory Group for Aerospace Research and Development) [38], this can be done via analysis of the payload function and the command and control function.
As energy autonomy is essential to deliver missions (2700 mAh with 22.2 V), the HIRRUS V1 payload model implemented on the HEXA-01 UTM chassis was the best solution.
This payload has built-in electro-optical/infrared (EO/IR) cameras, video tracking, a passive ultra-high frequency radio-frequency identification (UHF-RFID) low-cost transponder (868 or 915 MHz), a stabilized gimbal, a transmitter for HD cameras, a GoPro camera using coded orthogonal frequency-division multiplexing (COFDM) for transmission, and a roadrunner on-orbit processing experiment (ROPE) system for processing and compressing JPEG data and transmitting them in real time to the GCS.
According to [39], the measured parameters influence the flight and, implicitly, their values help to identify obstacles and characterize the environment.
The methodology considers autonomy levels (AL) and technology readiness levels (TRL) defined by the National Institute of Standards and Technology (NIST). The GPS sensors, the accelerometer, and the gyroscope allow navigation by decomposing the planned route into points. HEXA-01 UTM, developed as an experimental model, in terms of system integration and operational reliability, reaches AL 4 and TRL 5.
To determine the UAV precision, both static and dynamic data are analyzed. Combining the two types of data improves the optimization. An analytical model [40], which determines the position of the UAV as well as those of objects or obstacles, uses the following equations, specific to the accelerometer, gyroscope, and magnetometer:

y_a = S_a N_a s_a + b_a + ε_a

where y_a is the sensor-measured acceleration, S_a is a linear scale factor, N_a is the non-orthogonal axis matrix, s_a is the corrected real acceleration, b_a is the sensor bias, and ε_a is the accelerometer noise.
y_ω = S_ω N_ω R_ω s_ω + b_ω + G_ω s_a + ε_ω

where y_ω is the angular speed rate, S_ω is a scale factor, N_ω is the non-orthogonal axis matrix, R_ω is the error due to the geometric position of the three sensors, s_ω is the corrected real angular rate, b_ω is the sensor error, s_a is the real acceleration, G_ω is the sensor's sensitivity to the gravitational acceleration, and ε_ω is the gyroscope noise.
y_m = D_m s_m + o_m + ε_m

where y_m is the measured magnetic field, D_m is the soft-iron distortion [41] that depends on the orientation of the material towards the sensor and the magnetic field, s_m is the real magnetic field, o_m is the hard-iron distortion, ε_m is the magnetometer noise, and D_m and o_m include manufacturing defects, scaling factors, non-orthogonal axes, and the relative positions of the three sensors.
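These three models share the same structure: a raw reading is a scaled and skewed version of the true quantity plus a bias and noise. A minimal sketch of the correction step for the accelerometer case (all calibration values here are invented for illustration; the zero-mean noise term is neglected) simply inverts the linear model:

```python
import numpy as np

# Illustrative (invented) calibration values for the model y_a = S_a N_a s_a + b_a + eps_a.
S_a = np.diag([1.02, 0.98, 1.01])                  # linear scale factors
N_a = np.array([[1.0, 0.010, -0.020],
                [0.0, 1.000,  0.015],
                [0.0, 0.000,  1.000]])             # non-orthogonality of the axes
b_a = np.array([0.05, -0.03, 0.10])                # sensor bias

def correct_accel(y_a: np.ndarray) -> np.ndarray:
    """Recover the corrected real acceleration s_a from a raw reading,
    treating the zero-mean noise eps_a as negligible."""
    return np.linalg.solve(S_a @ N_a, y_a - b_a)

# Noiseless synthetic reading for gravity along the z axis.
s_true = np.array([0.0, 0.0, 9.81])
y_meas = S_a @ N_a @ s_true + b_a
s_hat = correct_accel(y_meas)
```

The gyroscope correction works the same way but must additionally subtract the g-sensitivity term G_ω s_a before solving, and the magnetometer correction inverts D_m after removing the hard-iron offset o_m.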


Results
The payload shown in Figure 3 is retractable, i.e., the EO/IR camera can be retracted. Also, three brushless motors help to change the camera's target according to the mission. Atmospheric instabilities, and those due to flight adjustment, require a correction system for image capture and the registration of detected obstacles. The existing AI techniques for autonomous robots, analyzed in [37], require data fusion techniques and data extraction procedures to perform data interpretation and diagnostics. The payload functional architecture must also deal with internal and external thermomechanical influences.
The retraction subsystem works in a closed loop. The coefficients of its differential equations are time-varying, so the analytical and numerical model is extremely complex, because the relationships between the flux, induced voltage, and currents change continuously while the electrical circuit is in relative motion. The movement is highlighted by a simulation of its own vibration modes [42] in Figure 4.
For fire identification and extinction, the autonomous collaborative robot system will operate according to the following algorithm:

•	The hexacopter rises and performs a survey flight to detect and locate the fire;
•	The mini rover receives the information (wirelessly) and moves to the coordinates (x_i, y_i, z_i) where the fire was found;
•	The predefined route is a 3D map (Figure 5) of a randomly chosen location;
•	Movement commences toward the defined target point, with the path continuously calculated and the position indicated with respect to the reference system defined as the origin;
•	Unknown objects are scanned ultrasonically and via the IR camera;
•	At the target, the GCS processes data in real time and generates a new map with an optimized route;
•	From this moment, FFR-1-UTM moves to the fire that has been identified using its orthogonal coordinates;
•	For feedback, FFR-1-UTM is equipped with a video camera.
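The hand-off at the heart of this algorithm can be sketched as a minimal coordination loop between the two vehicle roles (the class and method names, and the detected coordinates, are hypothetical, not the actual FFR-1-UTM/HEXA-01-UTM software):

```python
from dataclasses import dataclass

@dataclass
class FireReport:
    """Fire location reported by the aerial survey."""
    x: float
    y: float
    z: float

class Hexacopter:
    def survey(self) -> FireReport:
        # Hypothetical detection result from the aerial tour.
        return FireReport(x=12.0, y=7.5, z=0.0)

class MiniRover:
    def __init__(self):
        self.pos = (0.0, 0.0)
        self.log = []

    def goto(self, x: float, y: float) -> None:
        self.log.append(("move", x, y))   # path planning/obstacle scanning elided
        self.pos = (x, y)

    def extinguish(self) -> None:
        self.log.append(("extinguish",))

def mission(uav: Hexacopter, ugv: MiniRover):
    report = uav.survey()             # 1. hexacopter detects and locates the fire
    ugv.goto(report.x, report.y)      # 2. rover receives coordinates and moves
    ugv.extinguish()                  # 3. at the target, intervention begins
    return ugv.log

log = mission(Hexacopter(), MiniRover())
```

The sketch makes the division of labor explicit: the UAV supplies global information the UGV cannot sense, and the UGV carries out the intervention at the reported coordinates.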
The conclusions following the simulations and software testing are as follows: the command processes are distributed and imply the transformation from three phases into two [35,36] and the conversion of stator values to a rotor reference frame. As with three-phase asynchronous machines, these processes are described by voltage and current equations.
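The three-phase-to-two-phase transformation and the change to a rotor reference frame mentioned here are conventionally the Clarke and Park transforms; a compact sketch in the amplitude-invariant form:

```python
import math

def clarke(ia: float, ib: float, ic: float):
    """abc -> alpha-beta (three phases to two), amplitude-invariant form."""
    alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2 / 3) * (math.sqrt(3) / 2) * (ib - ic)
    return alpha, beta

def park(alpha: float, beta: float, theta: float):
    """alpha-beta (stator frame) -> d-q (rotor reference frame at angle theta)."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced three-phase currents sampled at rotor angle theta.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2 * math.pi / 3)
ic = math.cos(theta + 2 * math.pi / 3)
d, q = park(*clarke(ia, ib, ic), theta)
```

For a balanced set aligned with the rotor angle, the d-q components become constants (d = 1, q = 0), which is precisely why the conversion simplifies the voltage and current equations of the motor-control loop.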

Discussion
In the case of emergencies, the location of access points has become an important research issue, given the impact of using different measures due to limited resources and their dynamic evolution [8,[43][44][45].
The collaboration between autonomous mobile robots, conducted at the experimental model level, ended with the extinguishment of a small fire (a simulated fire with paper fuel), shown in Figure 6. The UAS presented here is a limited one, so there remain several issues to be investigated further in the future. We will seek to include artificial intelligence techniques in the functional domain of robots that move in two different unstructured environments.
This research proposes an important application which proves that mobile robots can collaborate effectively in real-world emergencies. Additionally, by locating the access point in a timely manner, robots can help alleviate and reduce various losses and damage (e.g., to life, property, and the environment) caused by fires.
Until now, the solutions proposed in this area have been based more on theoretical hypotheses or computer simulations to demonstrate the effectiveness of a collaborative robot system [44][45][46][47]. On top of that, unmanned ground vehicles (UGVs) can be deployed to accurately locate ground targets and detect humans, fires, gases, etc., but they have the disadvantage of not being able to move rapidly or see through such obstacles as buildings or fences [19].

Conclusions
In recent years, UAVs have provided additional degrees of freedom for UGVs, enabling them to negotiate obstacles by simply being lifted across them. Missions involving intelligence, surveillance, and reconnaissance are among the most investigated and applied types of UAV-UGV collaboration system. Researchers can use virtual reality programs to develop and design UAV-UGV collaborative systems, including multi-robot communication and artificial intelligence systems.
In this study, we intended to evaluate the performance of a collaborative robot system and its use to identify and extinguish fires. According to the general control scheme, this system was also equipped with sensors for route planning and obstacle avoidance. This has enabled the location of the intervention to be identified in an efficient, rapid, and accurate manner. Point setting mechanisms have been experimentally tested with two UGVs and one UAV. This has made it possible to estimate the extent of the missions.
Extending these ideas, the equipment used to develop the experimental models allowed the AL 4 and TRL 5 levels to be reached, although some problems arose in obtaining the minimum data needed to initiate a simulation with NRMM I/II.
In our research, the UAV design requires structural changes: according to simulations, an octocopter is more stable and offers a better payload-to-energy-reserve ratio, by approximately 20%. Equipped with return-to-base functionality, a parachute, and a flight termination system, an octocopter UAV is extremely safe, especially as it can continue flying with up to two motors out of use (provided they are not on the same arm). It can be flown in automatic, GPS, or manual modes, with the pilot able to intervene at any time if necessary.
These changes lead us to believe that delays in decision-making will be diminished, such that the collaborative work of robots will be effective. Most solutions on the market consist of a single robot; our model, composed of three autonomous vehicles, can help greatly in saving the lives of people, as well as those of the defense personnel and rescue teams involved in these missions, while also saving considerable time.