Self-Tuning Method for Increasing Reliability in Obstacle Detection based on Internet-of-Things LiDAR Sensor Models

Nowadays, the research and development of on-chip LiDAR sensors for vehicle collision avoidance is growing fast. Therefore, assessing the reliability of obstacle detection based on the information provided by LiDAR sensors has become a key issue for the scientific community. This paper presents the design and implementation of a self-tuning method that maximizes the reliability of an Internet-of-Things sensor network while minimizing the number of sensors required to localize obstacles with the accuracy demanded by a detection threshold. To achieve this goal, models that predict the accuracy (i.e., the prediction error) of object localization using data collected by LiDAR sensors are designed and implemented in the Webots Automobile 3D simulation tool. The approach combines different techniques. Firstly, a point-cloud clustering technique and an error-prediction model library composed of a multilayer perceptron neural network with backpropagation, k-nearest neighbors and linear regression are explored. Secondly, these modeling techniques are combined with Q-learning, a reinforcement machine learning technique, in order to minimize the detection threshold. In addition, an IoT driving-assistance simulated scenario with a LiDAR sensor network is designed in order to validate the prediction model and the optimal configuration of the sensor network that guarantees reliability in obstacle localization. The results demonstrate that the self-tuning method is appropriate for increasing the reliability of the sensor network while minimizing the detection threshold.


Introduction
Nowadays, Internet of Things (IoT) applications are present in many sectors, from industrial environments (e.g., manufacturing, energy, etc.) to our personal lives (e.g., health, society, mobility, etc.). IoT technologies are strategic for automotive applications, with a fresh push and investment in recent years to develop and bring to market smart mobility ecosystems with an autonomous level of interaction between vehicles and infrastructures. Nevertheless, car manufacturers, automotive OEMs, researchers and engineers are constantly introducing new technological contributions, and new challenges must be addressed in the short term [1,2].
One particular challenge aimed at autonomous driving is the estimation of the accuracy and reliability of vision devices such as Light Detection and Ranging (LiDAR) sensors and stereo cameras integrated into automotive driving assistance systems for pattern recognition and obstacle detection tasks [3]. In many scenarios, it is very difficult to certify, with a low uncertainty level, the real topology and distance of objects, in most cases due to phenomena such as dead zones, object transparency, light reflection, weather conditions and sensor failures [4]. Furthermore, traditional networking devices are not designed to be used in the unpredictable, varying and dynamic environments of IoT transportation ecosystems, making it necessary to develop new methodologies to characterize and estimate sensor reliability [5]. On the other hand, sensor fusion is commonly applied to combine different sensors for road detection, mainly cameras and LiDARs. Nevertheless, current sensor fusion methods take advantage of both sensors (cameras and LiDARs) together, rather than exploiting the advantages of each sensor in isolation [6]. Furthermore, the parallel processing of frames (from the camera) and scans (from the LiDAR) implies a high computational cost, which is unnecessary in many scenarios if a method of error prediction based on a sensor model for assessing reliability at runtime is developed [7].
Another important issue is the increase of computing power and wireless communication capabilities to expand the role of sensors from mere data collection to more demanding tasks such as sensor fusion, classification and collaborative target tracking. Fault tolerance and reliability play a key role for embedded systems, such as obscured wireless sensors, which are deployed in applications where it is difficult to access them physically [8]. Reliable monitoring of a phenomenon (or event detection) depends on the set of data provided by the cluster of sensors, and does not rely solely on any individual node. The failure of one or more nodes may not cause the disconnection of operational data sources from the data sinks (command nodes or end-user stations). However, it may increase the number of hops a data message has to go through before reaching its destination (and subsequently increase the message delay), giving an estimation of the failure probabilities of the sensors as well as of the intermediate nodes (nodes used to relay messages between data sources and data sinks) [9].
Several reconstruction methods are reported in the literature to create specific geometry models of existing objects from scanned point clouds based on information obtained from LiDARs [10]. The progress in modelling techniques for simulating complex driving environments provides a realistic representation of the relations between multiple input/output variables, making it possible to determine which factors are most influential in degrading reliability and to rank-order them, as well as to detect pedestrians, obstacles and vehicles in real-time driving scenarios [11]. Clustering techniques are widely used in exploratory data mining, statistical analysis, pattern recognition, image analysis, information retrieval, bioinformatics, data compression and computer graphics [12]. Among clustering algorithms, one of the most used is the k-nearest neighbors (k-NN) algorithm, because of the simplicity of obtaining the nearest neighbors of a query in the training dataset and then predicting the query with the majority class of those neighbors [13]. Another technique widely applied in industrial applications is reinforcement learning [14]. A good example is the results achieved by the Q-learning algorithm in generating artificial intelligence and self-learning strategies for complex processes, providing self-tuning capability to obtain the optimal configuration based on the rewards or penalties learned in previous states (iterative knowledge generation) [15].
To the best of the authors' knowledge, the main contribution of this work is the design and implementation of a four-step method to maximize the reliability of an IoT LiDAR sensor network and to minimize the detection threshold (the number of LiDAR sensors required to detect one obstacle). The method includes point cloud grouping, modeling, learning and self-tuning (knowledge-based learning algorithm) tasks, combining supervised and reinforcement machine learning techniques and clustering. Furthermore, an IoT driving assistance scenario with a sensor network is created using the Webots simulation tool to generate a LiDAR scan benchmark. Finally, the method is validated on a dynamic obstacle detection scenario in order to obtain the best prediction model and the optimal number of LiDAR sensors needed to guarantee reliability in obstacle localization using these sensors.
The paper consists of five sections. Following this introduction, the second section shows the design and implementation of the several modules introduced in the self-tuning reliability methodology. Subsequently, a driving-assistance case study scenario for obstacle detection based on information from IoT LiDAR sensor models is developed in Section 3, where the proposed methodology is also validated based on the minimal number of sensors demanded to ensure LiDAR sensor reliability for each scan. Finally, the conclusions and future research steps are presented.

Self-tuning Method for Reliability in a LiDAR Sensor Network
The self-tuning method for reliability in a LiDAR sensor network mainly consists of a computer-aided system that enables an efficient data interchange between the data provided by IoT sensor networks, managed by a control node network, and external modules devoted to evaluating the behaviour of these networks. The component in charge of generating sensory data through the simulation of sensor models is called the Supervisor Control Node (SCN), while the interface with the external modules is called the IoT assessment framework. Different simulation tools can be used for this purpose: on the one hand, a 3D model simulator for automotive applications and, on the other hand, an external programming software with a set of toolkits to manage point clouds, clustering methods, pattern recognition algorithms and modelling strategies based on Artificial Intelligence techniques, among others.

Conceptual design
The conceptual and architectural design of the proposed method is presented in this section. Figure 1 shows the data interchange between sensing and actuation. The data interchange component operates as a data-sharing broker: the Supervisor Control Node (SCN) and the IoT assessment framework bring information to and collect information from it. The SCN is composed of different local control nodes containing Internet-of-Things (IoT) sensor network models, distributed according to their functions. The distributed IoT sensors are in charge of capturing sensory data and interchanging these data with the SCN in order to share them with other external modules. It is important to highlight that, to send and receive information between the different nodes of the IoT sensor network and external modules, these data must necessarily pass through the supervisor. However, this obligation does not apply when the data transfer is between the different IoT sensors that make up the sensor network.
The IoT assessment framework is responsible for receiving/sending data from/to the SCN. The first key component is a model developed for a tailored function in direct link with the local control nodes and the IoT sensor network. The training procedure for this model is carried out using computational intelligence techniques such as k-Nearest Neighbour, Multi-Layer Perceptron, Support Vector Machine, Self-Organizing Map, Bayesian Network, etc. For example, one model can be in charge of predicting the error in the localization of an object from the cloud of data points given by a LiDAR sensor. The second key component is a group of tasks for self-tuning (a knowledge-based learning algorithm). This module consists of a computational intelligence (CI) model library, containing other models with similar functions, and a learning strategy (i.e., Q-Learning) that computes at runtime the actual threshold value with the aim of performing corrective actions. Both components can also be enriched at runtime from data received by nodes of the IoT sensor network.
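The data-sharing broker between the SCN and the IoT assessment framework can be illustrated with a minimal publish/subscribe sketch. This is a hypothetical illustration: the class name, topic strings and callback convention are assumptions, not part of the original system.

```python
from collections import defaultdict


class DataBroker:
    """Minimal data-sharing broker sketch: components subscribe to topics
    and receive every payload later published on those topics."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback (e.g., an assessment-framework model) for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload (e.g., a LiDAR scan from the SCN) to all subscribers."""
        for callback in self._subscribers[topic]:
            callback(payload)
```

In this scheme, the SCN would publish sensory data on a topic such as `"lidar/scan"`, while the error-prediction model of the assessment framework subscribes to that topic; neither side needs a direct reference to the other, which matches the broker role described above.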

Implementation
The SCN, the local control nodes and the IoT sensor network are designed and implemented using Webots for automobiles R2018a [16], a simulation tool for 3D models. In addition to its high potential for simulating sensors for driving assistance, Webots is able to interact with other external software or programming languages, such as MATLAB, Python, Java and Visual C#/C++, among others. It should be noted that, for the modelling and simulation of sensors, any other simulation tool for 3D sensor models can be selected. Likewise, the IoT assessment framework can be implemented using any of the previously mentioned software packages or external programming languages. MATLAB 2017b, a programming environment with an extensive set of libraries, was selected here for developing the self-tuning procedure. These tasks are carried out by two parallel execution threads, one of them at a local level, with direct data transfer with the IoT sensor network, and the other at a global level. The local thread (parallel execution 1) executes the current error prediction model on sensory data provided by the IoT sensor network. Subsequently, depending on the value of a certain threshold that is calculated at runtime through a learning process, a set of corrective actions is performed. This procedure is described in later sections.
On the other hand, the global thread (parallel execution 2) contains the CI model library with other error prediction models with different performance indices. Later on, the library can also be enriched from the process simulation. During the simulation, new sensory data can be generated, providing new environment information in each iteration. Based on this continuous information flow and the previous knowledge-based learning algorithm, the library executes a parallel learning procedure for all error prediction models to obtain a personalized setting for each particular critical situation. Finally, once a new best configuration is obtained, the corresponding model in the IoT sensor network is updated.
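The local execution thread can be sketched as a queue-based consumer of scans; this is a minimal illustration only, with `predict_error` standing in for the current error-prediction model (the global model-library thread would be launched the same way):

```python
import threading
import queue

scan_queue = queue.Queue()  # scans arriving from the IoT sensor network
processed = []              # results produced by the local thread

def predict_error(scan):
    """Placeholder for the current error-prediction model (assumption)."""
    return sum(scan)  # stand-in computation, not the real model

def local_worker():
    """Parallel execution 1: consume scans until the sentinel None arrives."""
    while True:
        scan = scan_queue.get()
        if scan is None:
            break
        processed.append(predict_error(scan))

worker = threading.Thread(target=local_worker)
worker.start()
for scan in ([1, 2], [3, 4]):  # two toy "scans"
    scan_queue.put(scan)
scan_queue.put(None)           # sentinel: end of simulation
worker.join()
```

The sentinel-based shutdown keeps the sketch deterministic; in the described system, the thread would instead run for the whole simulation and trigger corrective actions whenever the learned threshold is exceeded.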

Supervisor Control Node
This controller is in charge of managing the scenario at runtime and interchanging data between local control nodes or IoT sensors and other external modules. The overall operation of the 3D scenario is managed by the SCN, in this case within Webots. Webots has its roots in robot simulation software, extended to automobile simulations in a virtual environment. A set of computational procedures is in charge of adapting and transferring sensory information. The data transfer is carried out by means of different functionalities available in Webots. For example, some of the available functions serve to create sensor models: LiDARs, stereo vision cameras, radar, inertial, magnetic, gyroscope and GPS sensors can be emulated with this software. In addition, many obstacles and objects can be added to the scenario, such as simple vehicles, road segments, traffic signals and lights, buildings, etc. Therefore, a 3D traffic scenario can be created in order to simulate the behaviour of IoT sensor networks incorporated in each local control node (i.e., a fully automated vehicle) for driving-assistance scenarios.
Threshold detector and Q-learning procedure
The Q-learning algorithm is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process [17]. It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter (off-policy learning). The algorithm is based on a simple value iteration update: it takes the old value and makes a correction based on the new information [18]:

Q(st, at) ← Q(st, at) + α · [rt+1 + γ · max_a Q(st+1, a) − Q(st, at)]

where rt+1 is the reward observed after performing at in st, α is the learning rate (0 < α ≤ 1) and γ is the discount factor.
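Assuming a tabular Q-function over (state, action) pairs, where an action is the number of LiDARs to activate, the update rule above can be sketched as follows. The hyperparameter values and the action set are illustrative assumptions, not values taken from the paper:

```python
import random

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed hyperparameters
ACTIONS = [1, 3, 5]                    # candidate numbers of active LiDARs

Q = {}  # Q-table: (state, action) -> expected utility, default 0.0

def update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_action(state):
    """Epsilon-greedy selection of how many LiDARs to activate."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
```

In the self-tuning loop described below, the state would encode the detection-threshold range observed for the current scan and the reward would come from the reward matrix of Table 1.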
The Q-learning algorithm is introduced in the closed-loop cycle (self-tuning) in order to minimize the number of LiDAR sensors needed to guarantee a good accuracy in the localization of a detected obstacle with the minimum computational cost, as illustrated in Figure 2. Table 1 shows the different rewards assigned for each detection threshold. These ranges of values are defined as a function of the obstacle prediction error calculated during the classification step (see Figure 2). During the learning process, the algorithm recommends the optimal number of sensors to achieve a reliable obstacle detection, based on the previous knowledge generated by the rewards obtained from similar situations learnt in the past. This self-tuning of the threshold reduces the computational cost and accelerates the prediction time to detect an obstacle in driving assistance environments.

IoT LiDAR Sensor Models for Obstacle Detection. Case Study
A particular driving assistance scenario is defined in order to evaluate and validate the proposed self-tuning methodology. In this use case, the methodology was applied to a LiDAR sensor model to assess the reliability related to the accuracy error in the location of an object. In this section, a model of error prediction from the data of a single LiDAR sensor model is generated. This same model will then be used in a later section to evaluate and establish the reliability of an IoT sensor network. The way to generate a dataset for training and validating the error prediction models is described in the following sections.

Training dataset from 3D scenario simulation
In order to generate a virtual driving traffic scenario, the Webots automobile simulation tool was used, also creating a 4-layer LiDAR sensor model. The scenario emulates the real setup available at the Centre for Automation and Robotics (CAR) in Ctra. Campo Real Km 0.2, Arganda del Rey (Madrid, Spain), composed of a test track (a roundabout, traffic lights at the central crossing and additional curves on the main straight) that simulates an urban environment with pedestrians, a fleet of six fully-automated vehicles in movement and a communications tower [19]. Figure 3 illustrates the aerial view of some of these 3D scenarios in Webots Automobile for driving assistance. A vehicle model (Toyota Prius), a camera image with recognized objects and the LiDAR point cloud, implemented in this simulation tool, are also illustrated in Figure 3. The fully sensorized vehicle model (Toyota Prius) incorporates two on-board sensors, one LiDAR sensor and a 3D stereo vision camera (see Figure 3b). Both sensors are located inside the vehicle, the LiDAR on the bottom front and the camera on the upper front. Table 2 shows the specifications and localization of these sensor models in the vehicle. On the one hand, the camera benchmark contains 1031 images and their corresponding annotation files, obtained from the object recognition algorithm of the camera sensor model, with the following information: the localization (width × height, in pixels) of each object in the image and the size (width × height) of each object in the image. This sensor model is a stereo vision camera whose specifications are: 0.8 MP; a resolution of 1032 × 776; color; and 20 FPS.
On the other hand, the LiDAR benchmark contains 1031 scans, each of which contains a 3-D point cloud. This small benchmark set is useful for exploring the localization accuracy of an obstacle in the scene. In each scan, the first, second and third columns contain the X, Y, Z position of each point relative to the localization of the LiDAR in the scene (X0 = 0, Y0 = 0 and Z0 = 0), and the fourth column contains the number of the corresponding layer.
In addition, the raw data obtained from the LiDAR need to be filtered and pre-processed in order to facilitate the determination of the error in the location of an object. Firstly, the points corresponding to the ground plane that make up the road asphalt and the vegetation are eliminated. Principally, the deleted points are those located less than 20 cm above the ground plane.
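The ground-filtering step can be sketched as follows, assuming each scan is an (N, 4) array with columns X, Y, Z, layer and Z measured as height above the ground plane (the 20 cm margin is the value stated in the text; the array layout follows the benchmark format described above):

```python
import numpy as np

GROUND_MARGIN = 0.20  # metres above the ground plane (value from the text)

def remove_ground(points):
    """Drop ground and low-vegetation returns from a scan.

    `points` is assumed to be an (N, 4) array with columns X, Y, Z, layer;
    only points strictly higher than the margin are kept.
    """
    return points[points[:, 2] > GROUND_MARGIN]
```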
Secondly, in order to process the sensory data, fast indexing and search capabilities are required. For this, the point-cloud data are internally organized using a k-d tree structure [20]. The next data-processing step consists of extracting the points that correspond to nearby obstacles in a specific point-cloud sequence. For this segmentation, density-based spatial clustering of applications with noise (DBSCAN) was applied [21], which is able to segment the point cloud for each obstacle available in the scene. Since the algorithm returns the clustered points for each axis, the centroid (X0, Y0, Z0) of each segmented point cloud, which corresponds to one obstacle, is calculated as the arithmetic mean of the coordinates of its points on each axis:

X0 = (1/m) Σ xi,  Y0 = (1/m) Σ yi,  Z0 = (1/m) Σ zi,  i = 1, …, m

where m is the number of points in the cluster. Finally, the last step is to compare each centroid calculated from the LiDAR data with the actual location obtained by the object recognition algorithm for each obstacle, in order to obtain the accuracy error.
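A possible implementation of the segmentation and centroid step uses the DBSCAN implementation from scikit-learn, which internally relies on tree-based neighbor indexing. The `eps` and `min_samples` values here are assumptions for illustration, not parameters taken from the paper:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def obstacle_centroids(points, eps=0.5, min_samples=5):
    """Segment a filtered (N, 3) point cloud into obstacles with DBSCAN and
    return the centroid (X0, Y0, Z0) of each cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    centroids = []
    for label in sorted(set(labels) - {-1}):     # label -1 marks noise points
        cluster = points[labels == label]
        centroids.append(cluster.mean(axis=0))   # arithmetic mean per axis
    return centroids
```

Each returned centroid would then be compared against the ground-truth location from the object recognition algorithm to obtain the accuracy error.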
Once the benchmark dataset is created, the next step is to create the training dataset itself for the generation of the error prediction models. Next, the spatial statistics of the point cloud used as inputs to the error prediction model are described. Subsequently, the two errors that have been taken into account as outputs of the model are also explained.

Model Inputs
A group of spatial statistics was implemented in order to standardize the model inputs independently of the distribution of the point cloud [22]. Based on these spatial point pattern methods, the centrographic and directional distribution of the cloud of points is obtained. Common centrographic statistics for a point pattern are the mean center, the median center, the standard deviational circle and the standard deviational ellipse [23]. The mean center MC is characterized by geographic coordinates {X, Y, Z} equal to the arithmetic means of the x-, y- and z-coordinates of all the N points in a pattern:

X = (1/N) Σ xi,  Y = (1/N) Σ yi,  Z = (1/N) Σ zi

An alternative unique measure of the central spatial tendency of a point pattern is the center of minimum distance (often referred to as the median center), which is robust in the presence of spatial outliers. Unlike the mean center, defining the median center MedC requires a much more computationally complex iterative process to find the location that minimizes the Euclidean distance d to all the points in a point pattern [24]:

MedC = {x_t, y_t, z_t} that minimizes Σ_i d(i, {x_t, y_t, z_t})

where i defines each point in a point pattern, t is the iteration number and {x_t, y_t, z_t} is the location of an iterative candidate median center. An important property of a point pattern is the degree of its spatial spread. It can be characterized by the standard distance SD, estimated as:

SD = sqrt( (1/N) Σ_i [ (xi − X)² + (yi − Y)² + (zi − Z)² ] )

where xi, yi and zi are the coordinates of point i {xi, yi, zi}, N is the total number of points, and X, Y and Z are the coordinates of the mean center MC {X, Y, Z}.
Finally, the last spatial statistic used as input in this work is the third central moment (3rdCM) [25]. This value represents the mean cubed deviation of each point with respect to the mean center (MC) on each axis. The 3rdCM value is calculated as follows:

3rdCM_x = (1/N) Σ_i (xi − X)³,  3rdCM_y = (1/N) Σ_i (yi − Y)³,  3rdCM_z = (1/N) Σ_i (zi − Z)³

where xi, yi and zi are the coordinates of point i {xi, yi, zi}, N is the total number of points and MC is the mean center with geographic coordinates {X, Y, Z}, calculated in equation (3).
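The spatial statistics used as model inputs can be sketched as follows. For the median center, Weiszfeld's iterative scheme is one common way to implement the minimum-distance search; the paper does not specify the exact iteration, so that part is an assumption:

```python
import numpy as np

def mean_center(P):
    """MC: arithmetic mean of the x-, y- and z-coordinates of an (N, 3) pattern."""
    return P.mean(axis=0)

def median_center(P, iters=100, tol=1e-6):
    """MedC: point minimising the sum of Euclidean distances to all points,
    found here with Weiszfeld's iteration (assumed scheme)."""
    c = P.mean(axis=0)                          # start from the mean center
    for _ in range(iters):
        d = np.linalg.norm(P - c, axis=1)
        d = np.where(d < tol, tol, d)           # avoid division by zero
        new_c = (P / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(new_c - c) < tol:
            break
        c = new_c
    return c

def standard_distance(P):
    """SD: spatial spread of the pattern around its mean center."""
    mc = mean_center(P)
    return np.sqrt(((P - mc) ** 2).sum(axis=1).mean())

def third_central_moment(P):
    """3rdCM: mean cubed deviation from the mean center, per axis."""
    return ((P - mean_center(P)) ** 3).mean(axis=0)
```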

Model output
The outputs of these models are two figures of merit of the accuracy: the distance root mean squared (DRMS) and the mean radial spherical error (MRSE). The first is a measure of the data tracked in the x-y plane (2D) and the second is a measure of the data tracked in x-y-z space (3D) [26]. The DRMS and MRSE values are calculated as follows:

DRMS = sqrt( (1/n) Σ_i [ (x_ti − xActual,ti)² + (y_ti − yActual,ti)² ] )

MRSE = sqrt( (1/n) Σ_i [ (x_ti − xActual,ti)² + (y_ti − yActual,ti)² + (z_ti − zActual,ti)² ] )

where n is the number of readings for a dynamic tag during the time it is tracked, (x_ti, y_ti, z_ti) are the estimated coordinates of the tag at time ti, and (xActual,ti, yActual,ti, zActual,ti) are the actual coordinates of the tag at time ti.
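Both figures of merit can be implemented directly, assuming the estimated and actual positions are given as (n, 3) arrays of readings:

```python
import numpy as np

def drms(est, actual):
    """Distance root mean squared: 2-D (x-y plane) accuracy figure of merit."""
    diff = est[:, :2] - actual[:, :2]           # ignore the z-axis
    return np.sqrt((diff ** 2).sum(axis=1).mean())

def mrse(est, actual):
    """Mean radial spherical error: 3-D (x-y-z) accuracy figure of merit."""
    diff = est - actual
    return np.sqrt((diff ** 2).sum(axis=1).mean())
```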

Model training and initial validation
In order to estimate the values of the figures of merit in terms of error (DRMS and MRSE) as a function of the parameters extracted from the cloud of points generated by the LiDAR sensors, the error-based prediction model library for localization is defined. In this approach, three models were considered. The first is a multilayer perceptron neural network with backpropagation (MLP) composed of two hidden layers, with 5 neurons in each hidden layer, sigmoid activation functions, 1·10⁴ epochs, an initial training value of μ = 10⁻³, a decrease factor of 0.1, an increase factor of 10, a maximum value of μ of 10¹⁰ and a minimum performance gradient of 10⁻⁷. The second modeling technique is k-Nearest Neighbors (k-NN) with 2 neighbors. Finally, a linear regression is also obtained by minimizing the sum of squared differences between the predicted and observed values. 1031 scans were extracted from the Webots simulator to generate the training and validation datasets. Subsequently, the scans were randomly divided into two datasets: 765 samples for the training dataset (representing 74% of the total) and 255 samples for the validation dataset (representing the remaining 26%). The model correlation coefficients (R²) were estimated for all the models implemented in the modelling library. Table 3 shows the values obtained for each model based on the plane (DRMS) and space (MRSE) figures of merit described before. As can be appreciated, in both cases the k-NN algorithm provides the best fit, with a 93% correlation between the x, y, z point-cloud coordinates and the localization of each detected obstacle. Finally, the distance root mean squared tendency for all the models is shown in Figure 4, validating the best fit obtained between the observed values and the predictions of the k-NN algorithm. Nevertheless, in most cases, the MLP model presents behavior very similar to that of the k-NN model, and not so far from the linear regression model.
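The modelling library can be reproduced in outline with scikit-learn. This is a sketch only: the data here are synthetic stand-ins (the real training set comes from the Webots scans), and scikit-learn's MLPRegressor does not expose the exact μ-based training parameters listed above, so only the topology (two hidden layers of 5 sigmoid neurons) is mirrored:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the 1031-scan dataset: 3 input features playing the
# role of the spatial statistics, one output playing the role of DRMS.
rng = np.random.default_rng(0)
X = rng.normal(size=(1031, 3))
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=1031)

# Random split with a 255-sample validation set, as in the paper.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=255, random_state=0)

library = {
    "MLP": MLPRegressor(hidden_layer_sizes=(5, 5), activation="logistic",
                        max_iter=10000, random_state=0),
    "k-NN": KNeighborsRegressor(n_neighbors=2),
    "linear": LinearRegression(),
}
# R^2 of each library model on the validation set.
scores = {name: r2_score(y_va, model.fit(X_tr, y_tr).predict(X_va))
          for name, model in library.items()}
```

On this toy target, the nonlinear models (MLP, k-NN) should fit the quadratic term that the linear regression cannot, which mirrors the ranking the paper reports in Table 3.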

Experimental results
Additional experimental tests, evaluating the IoT on-chip LiDAR sensor network and the performance of the self-tuning methodology on a dynamic obstacle detection scenario, were also conducted in order to define the best prediction model and the optimal number of LiDAR sensors required to ensure the reliability of this sensor network. Some critical conditions were taken into account in the conducted study.
In this case, the simulation time (43 seconds) for each traffic scenario is somewhat smaller due to the computational overload generated by the processing and storage of all the data provided by 5 LiDARs plus 2 high-resolution cameras. In addition, the global scenario is the same as the one generated in Section 3. However, there are more devices and objects in each scene, with a different distribution. Some of these dynamic objects in the scenario are 4 buildings, 50 trees, 20 pedestrians, 10 small and medium vehicles, and 1 truck. Therefore, the main difference lies in the distribution of the IoT sensory system in the fully automated vehicle (Toyota Prius) modelled in Webots (see Figure 5). On the bottom front of the vehicle, three equidistant LiDAR models (LiDAR 0/1/2) are placed in order to expand the horizontal field of view. In the upper front, just on either side of the stereoscopic camera model, there are two LiDAR device models (LiDAR 3/4) whose purpose is to expand the field of view, in this case, vertically. The aim of this IoT evaluation is to demonstrate how the IoT sensory system incorporates a series of extended capabilities in terms of measurement precision compared with the behavior of a single sensor with better specifications than each isolated node of the sensory network. The setup of the IoT sensory system is summarized in Table 4.
The next step was to associate the same model from the error prediction model library to each LiDAR sensor. The conceptual diagram of the self-tuning method is shown in Figure 6. During the simulations of each scenario, the information provided by each sensor is collected, filtered and processed (as discussed in Section 3.2.1), and the spatial statistics are calculated (as described in Section 3.2.2) and applied as inputs to each model in the library of error prediction models. These models are able to estimate the accuracy of the localization of an object using a LiDAR sensor, based on the DRMS and MRSE figures of merit. Table 5 lists the correlation value (R²) of each type of model (ANN, k-NN and regression models) according to the number of LiDARs (1, 3 or 5) used at each instant with the objective of expanding the field of view, both vertical and horizontal, of the IoT LiDAR network in different critical situations. From this table, the type of model within the library with the better correlation for both the 2D and 3D spatial error outputs with respect to the different IoT LiDAR system configurations can be extracted. With one LiDAR, the correlation values for both outputs are similar to those obtained in Section 3.2.3. On the other hand, for a configuration of 3 sensors, the value of this performance index improves notably in all models, highlighting k-NN, since it is very close to 100%. Although, in theory, increasing the number of sensors widens the field of view, it turns out that the correlation decreases with 5 sensors in all models. One of the causes of this is the duplication of information provided by too many sensors. This problem could be solved by using an optimized mesh distribution that avoids the duplication of the space covered by each LiDAR.
Finally, during the simulation of this dynamic obstacle localization scenario, the learning and self-tuning (knowledge-based learning algorithm) tasks were also validated to automatically set the best prediction model and the optimal number of LiDAR sensors needed to ensure reliability. The Q-learning classification error matrix is shown in Figure 7. As shown, 67% of the scenarios can be appropriately addressed with a threshold between 0 and 1 (in other words, with more than 99% detection accuracy), for which only one LiDAR is demanded. In total, 87% of the scenarios can be solved using only one LiDAR, but it is recommended to use 3 LiDARs if the threshold is bigger than 10 (with less than 90% obstacle localization accuracy), considerably increasing the reliability of the multi-sensor-based system. Furthermore, it is worth clarifying that only in 2% of the cases are 5 LiDAR sensors needed, which is evidence of the suitability of the self-tuning method for minimizing the number of sensors required to achieve a higher obstacle localization reliability in driving-assistance environments.

Conclusions
The design and implementation of a self-tuning method based on simple soft-computing methods for automatically selecting the LiDAR sensors in IoT multisensory driving-assistance scenarios, in order to increase the obstacle localization reliability, is presented in this paper. The proposed method includes four main tasks combining point cloud grouping, clustering, and supervised and reinforcement learning algorithms. Three simple techniques, suitable from the perspective of industrial informatics, have been considered to implement the modelling library: a linear regression to corroborate the direct correlation between the extracted point clouds and the obstacle localization; secondly, the well-known multi-layer perceptron, which once again corroborated its suitability for modelling the main process characteristics; and a k-Nearest Neighbors model, showing the suitability of clustering techniques to establish correlations based on point dispersions. All the selected models have accurately reflected the behavior of the selected variables, and the statistical tests have confirmed the goodness-of-fit, highlighting the correlation coefficients above 90% obtained for the k-NN algorithm in almost all the scenarios.
On the other hand, a Q-learning algorithm was also introduced in order to minimize the number of LiDAR sensors needed in each obstacle localization scenario to ensure sensor reliability based on global IoT sensor network information. The self-tuning procedure is based on a reinforcement learning algorithm that explores, for each particular scenario, how many sensors are required to detect the number of obstacles present in one scan. Based on that, the proposed method fulfils two main criteria: the best model-based fitting and the self-tuning management of the computational resources (the smallest number of real LiDARs required for each particular situation) necessary to improve the obstacle localization reliability of IoT LiDAR sensor networks. The accuracy and generalization of the proposed method were validated in a virtual driving traffic scenario developed with the Webots Automobile simulation tool, solving 67% of the scenarios considered using one LiDAR with more than 99% obstacle localization accuracy. Finally, the proposed self-tuning methodology will be embedded and validated in real driving environments as part of the contributions to the IoSENSE project.

Figure 1 .
Figure 1. Conceptual design of the self-tuning method. Interaction between the IoT Assessment Framework and the Supervisor Control Node.

Figure 3 .
Figure 3. Simulated 3D scenario in Webots for driving assistance. (a) Aerial view of the simulation scenario, (b) vehicle model with sensors incorporated, (c) camera obstacle recognition procedure and (d) point cloud of objects in the scenario.

Figure 4 .
Figure 4. Prediction error behaviour of the model library in the localization of obstacles by LiDAR point clouds.

Figure 5 .
Figure 5. Side (a), front (b), plan (c) and rear view (d) of on-board IoT sensory system setup into vehicle model.

Figure 5 represents the configuration of the IoT sensory system mounted in a fully automated vehicle modelled in Webots automobile.

Figure 6 .
Figure 6. Flow diagram of the self-tuning method for the IoT sensors dynamic obstacle detection scenario.

Table 1 .
Q-learning reward matrix for detection threshold.

Table 2 .
Specifications and localization into the vehicle of both sensor models.

preprints.org) | NOT PEER-REVIEWED | Posted: 28 February 2018 doi:10.20944/preprints201802.0192.v1
During 2 minutes and 56 seconds of simulation, a data collection provided by the LiDAR sensor model and a set of images captured by the 3D stereo vision camera model were obtained. In total, a benchmark of 1031 scenes is available, with the same number of LiDAR scans, captured images and annotation files containing the localization of each recognized obstacle.

Table 3 .
Model correlation coefficients based on plane & space figures of merits

Table 4 .
Localization of each sensor which makes up the IoT sensory system.

Table 5 .
Behaviour of the correlation (R²) of each type of model according to the number of LiDARs used at each moment.