Article

Online Outdoor Terrain Classification Algorithm for Wheeled Mobile Robots Equipped with Inertial and Magnetic Sensors

1 Department of Mechatronics and Automation, Faculty of Engineering, University of Szeged, 6725 Szeged, Hungary
2 Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, 1034 Budapest, Hungary
3 Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia
4 Institute of Informatics, University of Dunaújváros, 2400 Dunaújváros, Hungary
5 Symbolic Methods in Material Analysis and Tomography Research Group, Faculty of Engineering and Information Technology, University of Pecs, 7624 Pecs, Hungary
6 John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
7 Facultad de Ingeniería, Universidad Autónoma de Queretaro, Santiago de Querétaro 76010, Mexico
* Author to whom correspondence should be addressed.
Electronics 2023, 12(15), 3238; https://doi.org/10.3390/electronics12153238
Submission received: 10 July 2023 / Revised: 22 July 2023 / Accepted: 25 July 2023 / Published: 26 July 2023

Abstract

Terrain classification provides valuable information for both the control and navigation algorithms of wheeled mobile robots. In this paper, a novel online outdoor terrain classification algorithm is proposed for wheeled mobile robots. The algorithm is based solely on time-domain features with both low computational and low memory requirements, which are extracted from the inertial and magnetic sensor signals. Multilayer perceptron (MLP) neural networks are applied as classifiers. The algorithm is tested on a measurement database collected using a prototype measurement system for various outdoor terrain types. Different datasets were constructed based on various setups of processing window sizes, used sensor types, and robot speeds. To examine the capabilities of the three applied sensor types in the application, the features extracted from the measurement data of the different sensors were tested alone, in pairs, and fused together. The algorithm is suitable for online operation on the embedded system of the mobile robot. The achieved results show that, using the applied time-domain feature set, classification efficiencies above 98% can be achieved on unknown data. It is also shown that the gyroscope provides higher classification rates than the widely used accelerometer. The magnetic sensor alone cannot be used effectively, but fusing its data with the data of the inertial sensors can improve the performance.

1. Introduction

Terrain classification plays an important role in reliable mobile robot navigation [1,2], since the parasitic accelerations generated by different terrains inherently influence the state estimation performance, as well as the path planning and control algorithms that rely on its results. For example, a flat surface generates far fewer parasitic accelerations, so pose estimation can rely on the integration of external accelerations. On the other hand, bumpy terrains generate significant vibrations, which are superimposed on the measurements of the accelerometer. In these cases, the separation of reference and observation vectors is difficult to execute reliably, so the pose estimation becomes uncertain. The output of a terrain classification algorithm can help characterize the measurements with proper certainty measures, which contributes to both effective and reliable state estimation. This enables algorithms that adaptively vary their parameters based on the identified environment of the robot.
Terrain classification systems apply onboard sensors. Various technologies have been applied in relevant studies to identify the terrain type using mobile robots. Methods based on LiDAR [3] and camera [4,5,6,7,8] data are widely used, but these systems require embedded systems with high computational capacity and also have high costs. Besides the large amount of data that has to be processed, complex classification algorithms must be used, such as convolutional neural networks [8]. RGB-D [9] and sound-based [10,11] systems were also proposed by researchers.
Accelerometers, which measure linear acceleration in one or more axes, offer a low-cost alternative for terrain classification in the case of mobile robots, and were widely used in such applications [12,13,14,15,16,17,18,19,20,21,22].
The fusion of various technologies was also reported in relevant works, e.g., LiDAR and camera [23,24], sound and vibration [25], sound and camera [26,27,28], vibration and camera [29,30], camera, spectroscopy and nine degree-of-freedom (9DOF) inertial measurement unit (IMU) [31]. 9DOF IMUs consist of three tri-axial sensors, an accelerometer, a gyroscope, and a magnetometer.
Gyroscopes, which measure angular velocity around one or more axes, are widely used in pattern recognition applications, such as human movement or activity recognition [32,33]. In terrain classification tasks, these sensors were mainly tested together with accelerometers [31,34,35,36,37,38]. In a previous study, the authors of this paper showed that gyroscopes can provide significantly higher classification efficiencies than accelerometers using a frequency domain-based feature set [39].
Magnetometers are passive sensors that measure magnetic fields. Vector magnetometers measure the flux density value in a specific direction in three-dimensional space [40]. These devices are mainly used as compasses, since they can estimate the heading direction based on the Earth’s magnetic field. Magnetic sensors have also been applied in pattern recognition-based applications, such as movement classification [32,33,41] and vehicle classification [42,43], which utilize the sensor measurements in different ways. In movement or activity classification systems, the methods utilize the changes in the orientation of the geomagnetic field vector relative to the sensor frame and are usually used together with inertial sensors. In vehicle detection/classification systems, stationary sensors are applied and the distortions in the magnetic field caused by the metallic parts of the vehicles are utilized. Although most outdoor mobile robots are equipped with magnetometers due to their ability to serve as a compass, to the best knowledge of the authors, the raw measurements of these sensors have not been utilized in earlier terrain classification methods.
Inertial sensor-based terrain classification methods most often rely on features extracted using the sensor signals, which are then forwarded to an appropriate classifier to determine the class. The base of the feature extraction is to extract information about the changes in the signals, which occur due to the movement of the sensors. Other solutions also exist, e.g., Reina et al. developed a model-based observer that estimates terrain parameters for vehicles based on two acceleration signals using a Kalman filter [12]. In [13], recurrent neural networks were applied using a three-axis sensor without extracting features on a dataset consisting of 14 terrain classes. In [34], acceleration, angular rate, and roll-pitch-yaw (RPY) data, which were provided by the 9DOF IMU sensor, were applied to form the feature vector. An ensemble classifier was applied to classify measurements into five classes: brick, concrete, grass, sand, and rock. In [31], angular acceleration, linear acceleration, and linear jerk were extracted using the signals of a 9DOF IMU. The IMU, the camera, and the spectroscopy-based classifiers were fused to classify terrains into 11 types.
Various feature extraction techniques were tested in related studies. Oliveira et al. utilized only the Z-axis of the accelerometer, which points to the ground, and used the root mean square feature to classify five pavement types [14]. In [15], only the Z-axis was used as well, and a Laplacian support vector machine (SVM) was applied with various time-domain features (TDFs) and frequency-domain features (FDFs) to classify six outdoor terrain types: natural grass, asphalt road, cobble path, artificial grass, sand beach, and plastic track. Weiss et al. applied signals of all three sensor axes and evaluated their applicability in the given task [16]. The components of the amplitude spectrum, which were computed using the fast Fourier transform (FFT), were utilized as features. An SVM-based classifier was applied in two experiments, i.e., a 3-class and a 7-class experiment. Magnitudes computed using the FFT from vibration signals were also the base of the method proposed in [17], where terrains were classified into clay, grass, sand, and gravel. In [18], 523 power spectrum density (PSD)-based features and 11 different statistical features were used to classify four indoor surface types. In [19], features were extracted in the time, frequency, and time-frequency domains to classify the terrain into four types (hard ground, grass, small gravel, and large gravel). Bai et al. defined five outdoor terrain classes, concrete, grass, sand, gravel, and grass&stone, and applied spectral features from acceleration data to classify samples using artificial neural networks (ANN) [20]. In [21], Bai et al. proposed a deep neural network-based solution using a similar feature extraction concept. Mei et al. composed three feature sets using TDFs, FDFs, and PSD-based features, and made a comparative study using different classifiers to differentiate eight outdoor terrain types: asphalt, cobble, concrete, artificial grass, natural grass, gravel, plastic, and tile [22]. DuPont et al. used magnitudes computed using the FFT for the acceleration in the Z-axis and the angular velocity around the X and Y axes [35]. The terrains were classified using probabilistic neural networks into six classes: asphalt, packed gravel, loose gravel, tall grass, sparse grass, and sand. In [36], more than 800 features were extracted from the inertial sensor signals for indoor terrain classification with a linear Bayes normal classifier. Hasan et al. applied altogether 60 different temporal, statistical, and spectral features using accelerometer and gyroscope data together to classify nine indoor surface types [37]. In [38], the components of the amplitude spectrum computed for the six channels of the IMU sensor were used together as inputs of the ANN to classify terrains into five classes, i.e., indoor floor, asphalt, grass, soil, and loose gravel.
This study deals with outdoor terrain classification using inertial and magnetic sensors in the case of wheeled mobile robots. The contributions of this work, which is the extension of an initial investigation presented by the authors in [44], can be summarized as follows:
  • A novel online terrain classification algorithm is proposed, which applies only TDFs extracted from the raw accelerometer, gyroscope, and magnetometer signals. The chosen feature set has both low computational and low memory costs, which enables easy implementation. Classification is realized using multilayer perceptron (MLP) neural networks. The proposed algorithm is suitable for online, real-time operation on the embedded system of the mobile robot, or it can be used on a separate intelligent sensor that provides the classification results to the main control unit of the robot.
  • The proposed algorithm is validated using a measurement database collected using a prototype measurement system. Tests are performed utilizing different processing window sizes and multiple robot speeds to examine their effect on recognition efficiency.
  • Due to the previous considerations, it was reasonable to test the applicability of raw magnetometer data and the impact of the three sensor types in such an application. Thus, the features extracted using signals of the three sensor types are tested alone, in pairs, and together using the proposed algorithm.
  • In the evaluation process, achieved results using different setups are compared with results obtained using a set of spectral features and with results obtained using the two feature sets together.
The rest of the paper is organized as follows. Section 2 presents the proposed terrain classification algorithm, while Section 3 describes the used measurement database. The experimental results achieved using different setups are discussed in Section 4, while Section 5 summarizes the results of the paper and outlines potential future work.

2. Classification Algorithm

The proposed classification algorithm, which can be seen in Figure 1, consists of two main parts. In the first part, windowing and feature extraction are performed on raw sensor signals. The extracted features are then forwarded to the second part, where an MLP-based classifier is utilized to determine the class.
The classifier needs to be trained offline using training data, after which the trained MLP can be implemented and used online.

2.1. Windowing and Feature Extraction

Feature extraction is performed in fixed-size processing windows, which are shifted by a constant step. The shift size determines the overlap between windows and the frequency with which the algorithm updates the terrain class; it does not affect the amount of data in the window.
One of the advantages of the applied features is that they do not require a transformation from the time domain to the frequency domain, which, besides performing the fast Fourier transform (FFT), requires storing the measurement values of the processing window.
Many popular TDFs require storing the measurement vector of the processing window and can be calculated only before the classification process. For example, in the case of features such as standard deviation, skewness, and kurtosis, the mean value must be subtracted from each of the measurement values in the processing window. The mean value can only be computed at the end of the processing window, so all measurements in the window must be stored. In the proposed algorithm, all selected feature types can be updated after every measurement, and they do not need the storage of the measurement vector in the processing window. At most two previous measurement values are required to update the features. This enables easier implementation and real-time online operation on the embedded system of the robot.
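The incremental update scheme can be sketched as follows; this is an illustrative Python sketch, not the authors' implementation, and the class name and structure are assumptions. Only the two previous samples are kept, never the whole window, as shown here for a subset of the features (WL, NSSC, MAX/MIN/PTP):

```python
# Illustrative sketch of online time-domain feature updates: each feature
# needs at most the two previous samples, so the window's measurement
# vector never has to be stored (names and structure are assumptions).
class OnlineTDF:
    def __init__(self, th: float):
        self.th = th          # threshold, defined from the peak-to-peak noise level
        self.reset()

    def reset(self):
        """Called at the start of each processing window."""
        self.prev = None      # x_{i-1}
        self.prev2 = None     # x_{i-2}
        self.wl = 0.0         # waveform length
        self.nssc = 0         # number of slope sign changes
        self.mx = float("-inf")
        self.mn = float("inf")

    def update(self, x: float):
        """Incorporate one new measurement into the running features."""
        self.mx = max(self.mx, x)
        self.mn = min(self.mn, x)
        if self.prev is not None:
            self.wl += abs(x - self.prev)
            if self.prev2 is not None:
                # slope sign change around the middle sample x_{i-1}
                if (self.prev - self.prev2) * (self.prev - x) >= self.th:
                    self.nssc += 1
        self.prev2, self.prev = self.prev, x

    def features(self):
        return {"WL": self.wl, "NSSC": self.nssc,
                "MAX": self.mx, "MIN": self.mn, "PTP": self.mx - self.mn}
```

At the end of the window the accumulated values are read out with `features()` and the accumulators are reset, so the memory cost is constant regardless of the window size.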
The selected TDFs are discussed as follows:
  • Mean absolute value (MAV):
$$\mathrm{MAV} = \frac{1}{N} \sum_{i=1}^{N} |x_i|,$$
where $x_i$ is the $i$th measurement value and $N$ is the number of measurements in the processing window.
  • Number of zero crossings (NZC):
$$\mathrm{NZC} = \sum_{i=1}^{N-1} \left[ \operatorname{sgn}(x_i \cdot x_{i+1}) \wedge \left( |x_i - x_{i+1}| \ge th \right) \right], \qquad \operatorname{sgn}(x) = \begin{cases} 1, & \text{if } x \le 0 \\ 0, & \text{otherwise,} \end{cases}$$
where $th$ is the threshold, which is defined using the peak-to-peak noise level.
  • Number of slope sign changes (NSSC):
$$\mathrm{NSSC} = \sum_{i=2}^{N-1} f\big[ (x_i - x_{i-1}) \cdot (x_i - x_{i+1}) \big], \qquad f(x) = \begin{cases} 1, & \text{if } x \ge th \\ 0, & \text{otherwise.} \end{cases}$$
  • Root mean square (RMS):
$$\mathrm{RMS} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} x_i^2 },$$
  • Waveform length (WL):
$$\mathrm{WL} = \sum_{i=1}^{N-1} |x_{i+1} - x_i|,$$
  • Willison amplitude (WAMP):
$$\mathrm{WAMP} = \sum_{i=1}^{N-1} f\big( |x_i - x_{i+1}| \big), \qquad f(x) = \begin{cases} 1, & \text{if } x \ge th \\ 0, & \text{otherwise,} \end{cases}$$
  • Maximal value (MAX):
$$\mathrm{MAX} = \max_i (x_i),$$
  • Minimal value (MIN):
$$\mathrm{MIN} = \min_i (x_i),$$
  • Peak-to-peak (PTP):
$$\mathrm{PTP} = \mathrm{MAX} - \mathrm{MIN}.$$
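For offline dataset construction, where whole windows are available, the full feature set can also be computed in a single pass over a stored window. Below is a minimal Python sketch following the definitions above; the function name and the returned dictionary layout are assumptions, and `th` denotes the noise threshold:

```python
# Batch computation of the selected time-domain features for one stored
# processing window (offline sketch; th is the peak-to-peak noise threshold).
import math

def tdf_vector(x, th):
    N = len(x)
    mav = sum(abs(v) for v in x) / N
    rms = math.sqrt(sum(v * v for v in x) / N)
    # zero crossings: sign change with an amplitude step above the threshold
    nzc = sum(1 for i in range(N - 1)
              if x[i] * x[i + 1] <= 0 and abs(x[i] - x[i + 1]) >= th)
    # slope sign changes around each interior sample
    nssc = sum(1 for i in range(1, N - 1)
               if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) >= th)
    wl = sum(abs(x[i + 1] - x[i]) for i in range(N - 1))
    wamp = sum(1 for i in range(N - 1) if abs(x[i] - x[i + 1]) >= th)
    mx, mn = max(x), min(x)
    return {"MAV": mav, "NZC": nzc, "NSSC": nssc, "RMS": rms,
            "WL": wl, "WAMP": wamp, "MAX": mx, "MIN": mn, "PTP": mx - mn}
```

Calling `tdf_vector` once per axis and concatenating the results yields the feature vector for one window and one sensor.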
Some features were not applied for all three sensor types. Raw accelerometer measurements are affected by the gravitational acceleration, which can only be removed using complex pose estimation methods that require additional computation. Due to this effect, the MAV, RMS, MAX, and MIN features would differ greatly depending on whether the robot moves parallel to the Earth’s surface or uphill. Based on these considerations, these features were not utilized in the case of the accelerometer. In the case of the magnetometer, the changes in the orientation of the magnetic field vector are utilized in the classification process. Since the components of the vector act as an unknown bias in the measurements, the same features, i.e., MAV, RMS, MAX, and MIN, were also not utilized with this sensor type.

2.2. Classifier

A three-layer Multi-Layer Perceptron (MLP) neural network was chosen as the classifier in the proposed algorithm, since it proved to be an optimal solution for similar online pattern recognition tasks based on classification efficiency and implementability [32].
MLPs are feedforward neural networks, where neurons are organized into an input, an output, and one or more hidden layers. All layers are fully connected to the following one through weighted connections. A neuron has an activation function that maps the sum of its weighted inputs to the output. These ANNs are usually trained using the backpropagation algorithm.
In the proposed algorithm, a three-layer neural network is used with one hidden layer. A feature vector is composed of the TDFs computed in the feature extraction stage of the algorithm. The feature values are computed separately for the three axes of each applied sensor type. In setups utilizing multiple sensors, the data of the different sensors are fused by using the extracted features together in the feature vector. The computed feature vector forms the input of the MLP; thus, the number of inputs is equal to the number of features in the given setup. In the output layer, a neuron is assigned to each class. In the hidden layer, the hyperbolic tangent sigmoid activation function is applied, while the linear transfer function is utilized in the output layer. The neuron with the highest output value in the output layer is declared as the class for the given input vector. The optimal number of neurons in the hidden layer must be found by testing different configurations.
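A forward pass of the network described above can be sketched as follows. This is an illustrative Python sketch with hand-supplied weight matrices; in practice the weights come from offline backpropagation training, and the argument names are assumptions:

```python
# Forward pass of a three-layer MLP with a tanh hidden layer and a linear
# output layer; the class with the highest output wins (illustrative sketch).
import math

def mlp_predict(features, W_h, b_h, W_o, b_o):
    # hidden layer: hyperbolic tangent sigmoid activation
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W_h, b_h)]
    # output layer: linear transfer function, one neuron per terrain class
    out = [sum(w * h for w, h in zip(row, hidden)) + b
           for row, b in zip(W_o, b_o)]
    return out.index(max(out))   # index of the declared class
```

The memory and runtime cost of this pass is determined entirely by the sizes of `W_h` and `W_o`, which is why both the number of inputs and the number of hidden neurons matter for the embedded implementation.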

3. Applied Measurement Data

3.1. Prototype Measurement System

To obtain measurement data on which the proposed terrain classification algorithm can be tested, a mobile robot was constructed with appropriate sensors. The constructed wheeled mobile robot can be seen in Figure 2. The size of the robot was 158 × 255 × 45 mm, while its mass was 0.595 kg. The robot had two driven and two additional wheels. The wheelbase was 116 mm, the track width was 132 mm, and the wheel radius was 65 mm.
A 9DOF sensor board was also installed on the constructed robot, which consists of a tri-axial magnetometer, a tri-axial accelerometer, and a tri-axial gyroscope. The main characteristics of the sensors can be seen in Table 1.
An ESP32 microcontroller unit, manufactured by Espressif Systems (Shanghai, China), was used for motor control via H-bridges, which vary the speed of the motors based on the pulse-width modulation (PWM) duty cycle. The ESP32 was also responsible for reading the measurements of the inertial sensors and storing the data.

3.2. Data Acquisition

Measurement data acquisition was done for six different outdoor terrain types, which can be seen in Figure 3. These terrain classes are the following:
  • Concrete
  • Grass
  • Pebbles
  • Sand
  • Paving stone
  • Synthetic running track
Measurements were collected in sessions, each 4.5 s long; the applied sampling rate was 400 Hz for the two inertial sensors and 100 Hz in the case of the magnetometer. For all six classes, data were recorded in seven sessions, which resulted in 31.5 s of measurement data for each terrain type. The seven sessions within a class were chosen to be at different locations with terrains as diverse as possible. All measurements were performed with two different motor speeds, set via the PWM duty cycle, to explore the effect of speed on the algorithm. The PWM duty cycles were constant during the measurements, since the goal of the algorithm is to provide information to a sensor fusion framework that can estimate the true speed of the mobile robot. The speed was not measured during data acquisition, and it probably varied due to slips of the mobile robot. The algorithm should provide reliable data without knowledge of the PWM duty cycle or encoder measurements; it relies only on raw inertial and magnetic sensor data. The two used PWM duty cycles were 86% and 100%.
Figure 4 shows parts of the signals for the three sensor types from the measurement data collected for two classes, i.e., grass and paving stone.

4. Experimental Results

4.1. Datasets and Test Setups

Altogether 63 different datasets were tested and evaluated with the proposed algorithm based on different setups.
Three processing window sizes were tested: 0.32 s, 0.64 s, and 1.28 s. The applied window shift size was 0.05 s for all three segment sizes. The reason for setting a small shift size was to generate as many training samples as possible from the available data. The applied shift size in a real application should be set based on the requirements.
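The window segmentation used to build the datasets can be sketched as follows (an assumed helper, not the authors' code). For instance, at 400 Hz a 0.32 s window holds 128 samples and a 0.05 s shift advances 20 samples, so consecutive windows overlap heavily and many training samples are produced from limited data:

```python
# Sliding-window segmentation sketch: fixed window size, constant shift.
def sliding_windows(signal, fs, win_s, shift_s):
    win = int(win_s * fs)        # window length in samples
    shift = int(shift_s * fs)    # shift (hop) in samples
    for start in range(0, len(signal) - win + 1, shift):
        yield signal[start:start + win]
```

Each yielded window is then passed to the feature extraction stage; the shift only controls how often a new window (and thus a new classification) is produced.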
Different datasets were constructed based on the used sensor configurations. The three sensor types were tested alone, in pairs, and together, to examine their possibilities in the application. The applied feature types were extracted for all sensor axes separately.
To explore the effect of different speeds on the proposed algorithm, the measurement data using the applied two speeds were tested separately and together.
In the case of all setups, the training datasets were formed using measurements from four of seven sessions, while the remaining three sessions formed the validation datasets. Validation datasets were not applied during the training of the MLPs and were used as unknown inputs to test the performance of the trained classifiers.
The hyperparameters of the MLP training process can be found in Table 2. During the training of the MLPs, 70% of the training data were used as training inputs and the remaining 30% as validation inputs. The training was tested with 1–40 hidden layer neurons for all setups, and each configuration was tested four times, since the achievable performance of the ANNs largely depends on the initial random weights. During performance evaluation, the results of the configuration that achieved the highest recognition efficiency on validation data were used. The tested numbers of hidden layer neurons proved to be sufficient, since convergence could be noticed in the recognition rate on unknown samples for all setups.
Classification efficiencies (E) were used as the performance metric, which can be calculated using the following equation:
$$E(\%) = \frac{N_C}{N_S} \cdot 100,$$
where $N_C$ is the number of correctly classified samples and $N_S$ is the total number of samples.
In the evaluation process, achieved results using the TDFs were compared with recognition efficiencies provided by FDFs for all setups. The applied spectral feature set was the same as in [39], which consists of the following features: spectral energy, median frequency, mean frequency, mean power, peak magnitude, peak frequency, and variance of the central frequency. To explore if the FDFs carry any further information compared to the proposed time domain-based feature set, the two feature sets were also tested together.

4.2. Performance Evaluation

The achieved highest classification efficiencies on training and validation data using the proposed algorithm can be seen in Table 3. The results are given based on different used sensor and speed combinations for the three processing window sizes. The used abbreviations are the following: L—lower speed, H—higher speed.
Analyzing the classification efficiencies achieved with the proposed TDF-based algorithm using different processing window widths, it can be seen that the recognition rate on both training and unknown data rises as the segment size increases. On validation data, above 94% efficiency can be achieved even with the smallest window size, which is 0.32 s. The highest classification efficiencies for the two larger segment sizes were above 98%. In the case of training data, the classification efficiencies were above 95% for most of the setups even with the smallest window size; thus, the increases were smaller. The largest effect of increasing the processing window size can be noticed for the setups where the accelerometer or the magnetometer data were utilized alone. In the case of the accelerometer, the increase was around 5% for both jumps in segment size. Using the data of the magnetic sensor, increases even above 10% were noticed, especially in the case of the training data.
Based on the results obtained using different robot speeds, a difference can be noticed in some setups between the lower and the higher speed. The gyroscope yields higher recognition rates at the lower speed on unknown data, where the difference can be above 5% with the two smaller segment sizes. This can also be noticed in the setups where the gyroscope data were fused with other sensors, but with smaller differences. By combining the two speeds in the datasets, the classification efficiency decreases in most setups compared to the results achieved using single speeds. For other datasets, the recognition rates are between the results obtained using the two speeds separately. It can be noticed that using multiple speeds together has a larger effect on setups where accelerometer data are applied, since in these cases the decrease in efficiency is larger.
The results based on different sensor combinations where the two speeds were used together show that the highest recognition rates were achieved in the setups where gyroscope data were present. The highest obtained classification efficiencies were above 91%, 96%, and 98% for the three tested segment sizes, respectively. Utilizing data of only a single sensor type, significantly higher results were obtained using gyroscope data than with accelerometer data, which are widely used in terrain classification applications. The difference can be more than 10%, depending on the window size. Using the smallest window size, the accelerometer- and gyroscope-based results were 76.51% and 90.16%, respectively. With the largest segment size, the recognition rates increased to 90.41% and 97.05%, respectively. The magnetometer data alone cannot provide acceptable classification efficiencies, since even with the largest processing window size the result was only 62.85%. Fusing the data of this sensor with the inertial sensors can improve the recognition rates, especially in the case of smaller window sizes. For example, using the accelerometer and magnetic sensor data together, 80.42% was obtained, which is almost a 4% improvement compared to the accelerometer-based result.
Table 4 presents the misclassification rates on validation data when the data of the three sensors for both speeds were utilized together and the features were extracted in the smallest processing window. The overall classification efficiency for this setup was 91.03%. It can be observed from the results that miss rates above 10% occur for three classes, i.e., concrete, grass, and paving stone. Grass was recognized with the lowest efficiency, since the misclassification rate for this class was 21.49%; within this class, 16.47% of the samples were classified as sand. Misclassification rates above 7% can be noticed between concrete and paving stone.
To explore the performance of the proposed TDF-based feature set, the results were compared with results obtained using an FDF-based feature set [39] and the two feature sets together. The obtained classification efficiencies on validation data using the different feature extraction techniques are summarized in Figure 5. Recognition rates are given based on different sensor combinations using data collected for both speeds together. The used abbreviations are the following: A—accelerometer, G—gyroscope, M—magnetometer.
It can be observed from the obtained results that the proposed feature set outperforms the FDF-based feature set in most setups, except where the accelerometer or magnetometer data were utilized alone. A significant difference of more than 10% can only be noticed using the magnetometer data, where almost 75% efficiency can be obtained using the FDFs. In the case of the accelerometer, the differences are smaller, around 2–3%. In the other setups, the proposed TDF set mostly provides 3–5% better performance compared to the FDF-based set, but the difference can reach above 10% for some datasets. Using the two feature sets together, disregarding the setups where the FDFs provide higher efficiencies than the TDFs, can increase the classification efficiencies. The difference is not significant, mainly 1–2%, which shows that the FDFs do not carry much further information compared to the information extracted using the proposed feature set.
It is also very important to explore the performance that can be achieved when the classifiers are trained and tested using measurements recorded at different speeds. To evaluate the results from this perspective, the MLPs trained using the L speed measurements were tested with the features extracted from the H speed measurements, and vice versa. Table 5, Table 6 and Table 7 show the obtained results for the three tested processing window sizes, i.e., 0.32 s, 0.64 s, and 1.28 s, respectively. The classification efficiencies in the tables are given for validation and test data. Validation datasets were formed from data not used during training but from sessions of the same speed as used for training, while test datasets were formed using data of the other speed. The FDF-based results are also included in the tables for comparison. It can be observed from the obtained results that such classifiers must be trained using a wide range of speeds, since the classification efficiencies significantly decrease when different speeds are used for training and testing. Comparing different sensor combinations, the gyroscope data proved to be more universal across speeds than the accelerometer data. The magnetometer data-based recognition rates drastically decrease when the speed is different, especially with FDFs, where the classification efficiencies were below 30%. It can also be noticed from the results that the proposed TDF-based feature set provides significantly better results than the FDF-based one when data of multiple sensors are used together.

4.3. Implementation

The implementation of the proposed method requires multiple steps. Different classifiers should be developed for different types of mobile robots, since many properties of the mobile robot (such as size, mass, wheelbase, track width, etc.) affect the classification algorithm. The first step is to collect a measurement database for the defined terrain classes using the applied robot over a wide range of speeds. The feature extraction and the training of the MLP classifiers must be performed offline. Many options must be considered to find the optimal setup. Both the memory required for the implementation and the processing time of the MLP depend on the number of inputs and the number of hidden layer neurons, so it is important to minimize both while maximizing the classification efficiency [32]. The number of inputs is defined by the number of used features, which depends on the number of used sensors. Based on the results achieved in this study, it is reasonable to test various sensor combinations with the required processing window size. The MLP to be implemented on the embedded system should be chosen based on the hardware limitations of the used embedded system and the achievable classification efficiencies of different setups. The size of the window shift, which defines the period with which the algorithm updates the terrain class, should be chosen based on the requirements of the application in which the method is used.
The proposed method uses the highest output value of the MLP as the predicted class, which assumes that all possible terrain types are known in advance. This can lead to uncertain predictions in applications where the mobile robot may encounter unknown terrain types. To handle such situations, possible solutions include adding an “unknown” class to the outputs of the MLP or applying a probability threshold to reject uncertain predictions.
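The threshold-based rejection idea can be sketched as follows. The 0.6 threshold and the example output vectors are illustrative assumptions, not values from the paper.

```python
import numpy as np

# The six terrain classes from the measurement campaign.
TERRAINS = ["concrete", "grass", "pebbles", "sand",
            "paving stone", "synthetic running track"]

def predict_with_rejection(outputs, threshold=0.6):
    """If the highest MLP output is below the threshold, reject the
    window as 'unknown' instead of forcing one of the trained classes."""
    outputs = np.asarray(outputs)
    best = int(np.argmax(outputs))
    if outputs[best] < threshold:
        return "unknown"
    return TERRAINS[best]

print(predict_with_rejection([0.05, 0.82, 0.04, 0.03, 0.04, 0.02]))  # grass
print(predict_with_rejection([0.25, 0.30, 0.15, 0.10, 0.12, 0.08]))  # unknown
```

The threshold trades off coverage against reliability: a higher value rejects more windows but makes the accepted predictions more trustworthy.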

5. Conclusions

In this paper, a novel terrain classification algorithm was proposed that can run on the embedded system of a mobile robot with low memory and computational requirements. The algorithm applies only time-domain analysis in the feature extraction process, and an MLP is used as the classifier.
The algorithm was tested on measurements collected for six different outdoor terrain types with a prototype measurement system. Various setups were evaluated, varying the used sensor data, the processing window size, and the robot speed. The results were compared with those obtained using a previously proposed feature set consisting of only FDFs, as well as with the two feature sets combined.
The achieved classification efficiencies are significant, as values above 98% were reached in some setups. It was also shown that the gyroscope is well suited to terrain classification systems and provides much higher recognition rates than the accelerometer. The magnetometer data alone cannot be used effectively in this application, but they can improve the performance of the inertial sensors. The proposed algorithm also outperforms the FDF-based algorithm in most setups.
The main future goal is to integrate the proposed terrain classification method into a novel sensor fusion framework that can utilize the provided information to improve the pose estimation of the mobile robot. Other future plans include testing the algorithm on measurements recorded over a wider range of speeds and for various motion types, such as cornering. Selecting the features with the highest effect on recognition accuracy using an appropriate feature selection method would also be reasonable, as this would further decrease the required computation. A pose estimator could also be implemented to compensate for the effect of gravitational acceleration, since this would enable the usage of further features in the case of the accelerometer.

Author Contributions

Conceptualization, P.S.; methodology, P.S.; software, P.S., D.C., R.P., S.S., V.T. and A.O.; validation, P.S. and A.O.; formal analysis, P.S., S.S., J.R.-R., J.S. and A.O.; investigation, P.S., D.C., R.P. and A.O.; resources, P.S., D.C., J.S. and A.O.; data curation, P.S., D.C. and R.P.; writing—original draft preparation, P.S. and A.O.; writing—review and editing, S.T., J.R.-R. and J.S.; visualization, P.S., D.C., V.T. and A.O.; supervision, P.S., J.S. and A.O.; project administration, P.S., J.S. and A.O.; funding acquisition, P.S. and A.O. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the National Research, Development, and Innovation Fund of Hungary through project no. 142790 under the FK_22 funding scheme.

Data Availability Statement

The data presented in this study are openly available at: https://github.com/petersarcevic/outdoor_terrain_classification_database (accessed on 24 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Sebastian, B.; Ben-Tzvi, P. Support vector machine based real-time terrain estimation for tracked robots. Mechatronics 2019, 62, 102260.
2. Guastella, D.C.; Muscato, G. Learning-Based Methods of Perception and Navigation for Ground Vehicles in Unstructured Environments: A Review. Sensors 2021, 21, 73.
3. Suger, B.; Steder, B.; Burgard, W. Traversability analysis for mobile robots in outdoor environments: A semi-supervised learning approach based on 3D-lidar data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015.
4. Khan, Y.N.; Komma, P.; Zell, A. High resolution visual terrain classification for outdoor robots. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011.
5. Kingry, N.; Jung, M.; Derse, E.; Dai, R. Vision-Based Terrain Classification and Solar Irradiance Mapping for Solar-Powered Robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
6. Zhou, L.; Liu, Z.; Wang, W. Terrain Classification Algorithm for Lunar Rover Using a Deep Ensemble Network with High-Resolution Features and Interdependencies between Channels. Wirel. Commun. Mob. Comput. 2020, 2020, 8842227.
7. Chen, Y.; Rastogi, C.; Norris, W.R. A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification. IEEE Robot. Autom. Lett. 2021, 6, 7965–7972.
8. Wang, W.; Zhang, B.; Wu, K.; Chepinskiy, S.A.; Zhlenkov, A.A.; Chernyi, S.; Krasnov, A.Y. A visual terrain classification method for mobile robots’ navigation based on convolutional neural network and support vector machine. Trans. Inst. Meas. Control 2021, 44, 744–753.
9. Bellone, M.; Reina, G.; Giannoccaro, N.I.; Spedicato, L. Unevenness Point Descriptor for Terrain Analysis in Mobile Robot Applications. Int. J. Adv. Robot. Syst. 2013, 10, 1–10.
10. Libby, J.; Stentz, A.J. Using sound to classify vehicle-terrain interactions in outdoor environments. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
11. Valada, A.; Spinello, L.; Burgard, W. Deep Feature Learning for Acoustics-Based Terrain Classification. In Robotics Research; Bicchi, A., Burgard, W., Eds.; Springer Proceedings in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2018; Volume 3, pp. 21–37.
12. Reina, G.; Leanza, A.; Messina, A. Terrain estimation via vehicle vibration measurement and cubature Kalman filtering. J. Vib. Control 2020, 26, 885–898.
13. Otte, S.; Weiss, C.; Scherer, T.; Zell, A. Recurrent Neural Networks for fast and robust vibration-based ground classification on mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016.
14. Oliveira, F.G.; Santos, E.R.S.; Neto, A.A.; Campos, M.F.M.; Macharet, D.G. Speed-invariant terrain roughness classification and control based on inertial sensors. In Proceedings of the Latin American Robotics Symposium (LARS) and Brazilian Symposium on Robotics (SBR), Curitiba, Brazil, 8–11 November 2017.
15. Shi, W.; Li, Z.; Lv, W.; Wu, Y.; Chang, J.; Li, X. Laplacian Support Vector Machine for Vibration-Based Robotic Terrain Classification. Electronics 2020, 9, 513.
16. Weiss, C.; Stark, M.; Zell, A. SVMs for Vibration-Based Terrain Classification. In Autonome Mobile Systeme 2007; Berns, K., Luksch, T., Eds.; Informatik Aktuell; Springer: Berlin, Germany, 2007; pp. 1–7.
17. Liu, S.; Wu, Y.; Lv, W.; Chang, J.; Li, Z.; Zhang, W. Broad Feature Alignment for Robotic Ground Classification in Dynamic Environment. IEEE Trans. Ind. Electron. 2022, 69, 2697–2707.
18. Vicente, A.; Liu, J.; Yang, G.-Z. Surface classification based on vibration on omni-wheel mobile base. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015.
19. Wang, M.; Ye, L.; Sun, X. Adaptive online terrain classification method for mobile robot based on vibration signals. Int. J. Adv. Robot. Syst. 2021, 18, 1–10.
20. Bai, C.; Guo, J.; Zheng, H. Three-Dimensional Vibration-Based Terrain Classification for Mobile Robots. IEEE Access 2019, 7, 63485–63492.
21. Bai, C.; Guo, J.; Guo, L.; Song, J. Deep Multi-Layer Perception Based Terrain Classification for Planetary Exploration Rovers. Sensors 2019, 19, 3102.
22. Mei, M.; Chang, J.; Li, Y.; Li, Z.; Li, X.; Lv, W. Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers. Sensors 2019, 19, 1137.
23. Häselich, M.; Arends, M.; Wojke, N.; Neuhaus, F.; Paulus, D. Probabilistic terrain classification in unstructured environments. Robot. Auton. Syst. 2013, 61, 1051–1059.
24. Schilling, F.; Chen, X.; Folkesson, J.; Jensfelt, P. Geometric and visual terrain classification for autonomous mobile navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017.
25. Libby, J.; Stentz, A. Multiclass Terrain Classification using Sound and Vibration from Mobile Robot Terrain Interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021.
26. Zürn, J.; Burgard, W.; Valada, A. Self-Supervised Visual Terrain Classification From Unsupervised Acoustic Feature Learning. IEEE Trans. Robot. 2021, 37, 466–481.
27. Kurobe, A.; Nakajima, Y.; Kitani, K.; Saito, H. Audio-Visual Self-Supervised Terrain Type Recognition for Ground Mobile Platforms. IEEE Access 2021, 9, 29970–29979.
28. Ishikawa, R.; Hachimura, R.; Saito, H. Self-Supervised Audio-Visual Feature Learning for Single-Modal Incremental Terrain Type Clustering. IEEE Access 2021, 9, 64346–64357.
29. Weiss, C.; Tamimi, H.; Zell, A. A combination of vision- and vibration-based terrain classification. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22–26 September 2008.
30. Bekhti, M.A.; Kobayashi, Y.; Matsumura, K. Terrain traversability analysis using multi-sensor data correlation by a mobile robot. In Proceedings of the IEEE/SICE International Symposium on System Integration, Tokyo, Japan, 13–15 December 2014.
31. Hanson, N.; Shaham, M.; Erdoğmuş, D.; Padir, T. VAST: Visual and Spectral Terrain Classification in Unstructured Multi-Class Environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022.
32. Sarcevic, P.; Kincses, Z.; Pletl, S. Online human movement classification using wrist-worn wireless sensors. J. Ambient Intell. Humaniz. Comput. 2019, 10, 89–106.
33. Altun, K.; Barshan, B.; Tunçel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 2010, 43, 3605–3620.
34. Dutta, A.; Dasgupta, P. Ensemble Learning With Weak Classifiers for Fast and Reliable Unknown Terrain Classification Using Mobile Robots. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2933–2944.
35. DuPont, E.M.; Roberts, R.G.; Selekwa, M.F.; Moore, C.A.; Collins, E.G. Online Terrain Classification for Mobile Robots. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Orlando, FL, USA, 5–11 November 2005.
36. Tick, D.; Rahman, T.; Busso, C.; Gans, N. Indoor robotic terrain classification via angular velocity based hierarchical classifier selection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
37. Hasan, M.A.M.; Abir, F.A.; Shin, J. Surface Type Classification for Autonomous Robots Using Temporal, Statistical and Spectral Feature Extraction and Selection. In Proceedings of the IEEE International Symposium on Embedded Multicore/Many-Core Systems-on-Chip (MCSoC), Singapore, 20–23 December 2021.
38. Sanusi, M.A.; Dewantara, B.S.B.; Setiawardhana; Sigit, R. Online Terrain Classification Using Neural Network for Disaster Robot Application. Indones. J. Comput. Sci. 2023, 12, 48–62.
39. Csík, D.; Odry, Á.; Sárosi, J.; Sarcevic, P. Inertial sensor-based outdoor terrain classification for wheeled mobile robots. In Proceedings of the IEEE International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 16–18 September 2021.
40. Hadjigeorgiou, N.; Asimakopoulos, K.; Papafotis, K.; Sotiriadis, P.P. Vector Magnetic Field Sensors: Operating Principles, Calibration, and Applications. IEEE Sens. J. 2021, 21, 12531–12544.
41. Maekawa, T.; Kishino, Y.; Sakurai, Y.; Suyama, T. Activity recognition with hand-worn magnetic sensors. Pers. Ubiquitous Comput. 2013, 17, 1085–1094.
42. Sarcevic, P.; Pletl, S.; Odry, A. Real-Time Vehicle Classification System Using a Single Magnetometer. Sensors 2022, 22, 9299.
43. Zhang, X.; Huang, H. Vehicle classification based on feature selection with anisotropic magnetoresistive sensor. IEEE Sens. J. 2019, 19, 9976–9982.
44. Sarcevic, P.; Csík, D.; Sárosi, J.; Odry, Á. Novel online terrain classification algorithm for mobile robots. Late Breaking Results poster presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022.
Figure 1. Parts of the proposed classification algorithm.
Figure 2. The constructed wheeled mobile robot used as the prototype measurement system for data acquisition.
Figure 3. The tested outdoor terrain types: (a) concrete; (b) grass; (c) pebbles; (d) sand; (e) paving stone; (f) synthetic running track.
Figure 4. Parts of the signals measured using the accelerometer, gyroscope, and magnetometer for grass (a,c,e) and paving stone (b,d,f), respectively.
Figure 5. Achieved classification efficiencies on validation data using different feature extraction techniques for different sensor combinations.
Table 1. Main characteristics of the applied sensors.
| Characteristic | Accelerometer | Gyroscope | Magnetometer |
| --- | --- | --- | --- |
| type | ADXL345 | ITG3200 | HMC5883L |
| technology | microelectromechanical system (MEMS) | MEMS | anisotropic magnetoresistive (AMR) |
| measurement range | ±16 g | ±2000 deg/s | ±810 µT |
| resolution | 13-bit | 16-bit | 12-bit |
| highest sampling frequency | 3.2 kHz | 8 kHz | 160 Hz |
Table 2. Hyperparameters of the MLP training process.
| Hyperparameter | Value |
| --- | --- |
| training function | scaled conjugate gradient backpropagation |
| performance function | mean squared error (MSE) |
| maximum number of epochs to train | 4000 |
| performance goal | 0 |
| maximum validation failures | 20 |
| minimum performance gradient | 10⁻⁷ |
| maximum time to train in seconds | inf |
Table 3. Achieved classification efficiencies (%) on training and validation data based on different sensor and speed combinations for the three processing window sizes.
| Sensor | Speed | 0.32 s Training | 0.32 s Validation | 0.64 s Training | 0.64 s Validation | 1.28 s Training | 1.28 s Validation |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Accelerometer | L | 89.26 | 81.59 | 94.81 | 87.66 | 100.00 | 92.45 |
| Accelerometer | H | 94.43 | 82.26 | 99.51 | 86.72 | 99.94 | 91.49 |
| Accelerometer | L and H | 86.44 | 76.51 | 95.13 | 82.14 | 98.96 | 90.41 |
| Gyroscope | L | 97.99 | 94.24 | 99.62 | 96.25 | 100.00 | 99.22 |
| Gyroscope | H | 96.59 | 87.55 | 99.51 | 91.63 | 100.00 | 97.66 |
| Gyroscope | L and H | 94.60 | 90.16 | 97.40 | 94.99 | 100.00 | 97.05 |
| Magnetometer | L | 66.87 | 53.95 | 75.92 | 60.02 | 88.02 | 66.58 |
| Magnetometer | H | 58.94 | 58.90 | 75.54 | 63.28 | 75.13 | 63.72 |
| Magnetometer | L and H | 59.24 | 55.02 | 68.91 | 59.09 | 84.41 | 62.85 |
| Accelerometer, gyroscope | L | 99.30 | 94.18 | 99.89 | 96.32 | 100.00 | 99.83 |
| Accelerometer, gyroscope | H | 99.35 | 91.43 | 100.00 | 92.71 | 100.00 | 96.70 |
| Accelerometer, gyroscope | L and H | 97.42 | 91.73 | 100.00 | 95.85 | 100.00 | 98.18 |
| Accelerometer, magnetometer | L | 93.17 | 86.01 | 97.08 | 91.49 | 99.87 | 94.79 |
| Accelerometer, magnetometer | H | 97.14 | 86.08 | 99.89 | 89.11 | 100.00 | 93.58 |
| Accelerometer, magnetometer | L and H | 91.27 | 80.42 | 98.38 | 85.46 | 100.00 | 90.80 |
| Gyroscope, magnetometer | L | 97.79 | 93.91 | 100.00 | 97.62 | 100.00 | 99.74 |
| Gyroscope, magnetometer | H | 98.24 | 92.30 | 100.00 | 96.18 | 100.00 | 98.52 |
| Gyroscope, magnetometer | L and H | 96.34 | 92.24 | 100.00 | 96.28 | 100.00 | 98.87 |
| Accelerometer, gyroscope, magnetometer | L | 98.59 | 92.64 | 100.00 | 98.05 | 100.00 | 99.65 |
| Accelerometer, gyroscope, magnetometer | H | 98.85 | 92.70 | 100.00 | 95.96 | 100.00 | 97.31 |
| Accelerometer, gyroscope, magnetometer | L and H | 98.27 | 91.03 | 100.00 | 96.43 | 100.00 | 97.79 |
Table 4. Misclassification rates (%) on validation data when the data of the three sensors for both speeds were utilized together and the features were extracted in the smallest processing window.
Target class 1: 3.41, 7.23 (sum: 10.64)
Target class 2: 3.82, 16.47, 1.20 (sum: 21.49)
Target class 3: 3.61, 0.20 (sum: 3.81)
Target class 4: 2.61 (sum: 2.61)
Target class 5: 7.63, 1.61, 3.41 (sum: 12.65)
Target class 6: 1.41, 0.20, 1.00 (sum: 2.61)
Table 5. Achieved classification efficiencies (%) on validation and test data using the 0.32 s processing window size when the MLPs were trained and tested using measurements recorded with different speeds.
| Sensor | Training/Validation Speed | Test Speed | TDF Validation | TDF Test | FDF Validation | FDF Test |
| --- | --- | --- | --- | --- | --- | --- |
| Accelerometer | L | H | 81.59 | 56.00 | 84.61 | 65.00 |
| Accelerometer | H | L | 82.26 | 53.61 | 88.15 | 63.02 |
| Gyroscope | L | H | 94.24 | 73.26 | 90.96 | 76.85 |
| Gyroscope | H | L | 87.55 | 75.42 | 86.21 | 72.63 |
| Magnetometer | L | H | 53.95 | 47.73 | 85.54 | 21.00 |
| Magnetometer | H | L | 58.90 | 41.25 | 71.62 | 16.50 |
| Accelerometer, gyroscope | L | H | 94.18 | 74.30 | 91.90 | 74.27 |
| Accelerometer, gyroscope | H | L | 91.43 | 73.58 | 89.83 | 73.95 |
| Accelerometer, magnetometer | L | H | 86.01 | 62.28 | 91.10 | 44.23 |
| Accelerometer, magnetometer | H | L | 86.08 | 58.49 | 82.33 | 43.72 |
| Gyroscope, magnetometer | L | H | 93.91 | 68.42 | 90.70 | 50.43 |
| Gyroscope, magnetometer | H | L | 92.30 | 71.00 | 84.34 | 61.65 |
| Accelerometer, gyroscope, magnetometer | L | H | 92.64 | 77.42 | 94.11 | 45.12 |
| Accelerometer, gyroscope, magnetometer | H | L | 92.70 | 72.09 | 86.21 | 64.57 |
Table 6. Achieved classification efficiencies (%) on validation and test data using the 0.64 s processing window size when the MLPs were trained and tested using measurements recorded with different speeds.
| Sensor | Training/Validation Speed | Test Speed | TDF Validation | TDF Test | FDF Validation | FDF Test |
| --- | --- | --- | --- | --- | --- | --- |
| Accelerometer | L | H | 87.66 | 63.88 | 89.61 | 65.62 |
| Accelerometer | H | L | 86.72 | 57.05 | 90.77 | 62.93 |
| Gyroscope | L | H | 96.25 | 69.88 | 94.16 | 78.70 |
| Gyroscope | H | L | 91.63 | 70.63 | 91.99 | 76.72 |
| Magnetometer | L | H | 60.02 | 51.39 | 82.40 | 24.00 |
| Magnetometer | H | L | 63.28 | 42.58 | 76.98 | 20.66 |
| Accelerometer, gyroscope | L | H | 96.32 | 77.06 | 96.75 | 74.21 |
| Accelerometer, gyroscope | H | L | 92.71 | 77.55 | 93.72 | 75.97 |
| Accelerometer, magnetometer | L | H | 91.49 | 66.45 | 92.79 | 43.51 |
| Accelerometer, magnetometer | H | L | 89.11 | 50.16 | 83.91 | 41.28 |
| Gyroscope, magnetometer | L | H | 97.62 | 71.09 | 98.34 | 49.54 |
| Gyroscope, magnetometer | H | L | 96.18 | 73.75 | 86.44 | 51.52 |
| Accelerometer, gyroscope, magnetometer | L | H | 98.05 | 80.09 | 97.84 | 53.46 |
| Accelerometer, gyroscope, magnetometer | H | L | 95.96 | 69.88 | 86.58 | 64.35 |
Table 7. Achieved classification efficiencies (%) on validation and test data using the 1.28 s processing window size when the MLPs were trained and tested using measurements recorded with different speeds.
| Sensor | Training/Validation Speed | Test Speed | TDF Validation | TDF Test | FDF Validation | FDF Test |
| --- | --- | --- | --- | --- | --- | --- |
| Accelerometer | L | H | 92.45 | 66.07 | 92.97 | 63.65 |
| Accelerometer | H | L | 91.49 | 53.35 | 94.79 | 66.67 |
| Gyroscope | L | H | 99.22 | 78.57 | 98.61 | 79.02 |
| Gyroscope | H | L | 97.66 | 64.36 | 96.35 | 76.30 |
| Magnetometer | L | H | 66.58 | 50.41 | 79.25 | 25.93 |
| Magnetometer | H | L | 63.72 | 47.47 | 77.95 | 18.71 |
| Accelerometer, gyroscope | L | H | 99.83 | 80.17 | 98.87 | 77.72 |
| Accelerometer, gyroscope | H | L | 96.70 | 74.37 | 96.53 | 78.94 |
| Accelerometer, magnetometer | L | H | 94.79 | 65.37 | 97.05 | 42.45 |
| Accelerometer, magnetometer | H | L | 93.58 | 63.73 | 92.27 | 47.92 |
| Gyroscope, magnetometer | L | H | 99.74 | 66.22 | 99.74 | 54.43 |
| Gyroscope, magnetometer | H | L | 98.52 | 74.89 | 89.67 | 68.82 |
| Accelerometer, gyroscope, magnetometer | L | H | 99.65 | 74.55 | 99.83 | 53.61 |
| Accelerometer, gyroscope, magnetometer | H | L | 97.31 | 64.51 | 98.18 | 75.41 |

Share and Cite

MDPI and ACS Style

Sarcevic, P.; Csík, D.; Pesti, R.; Stančin, S.; Tomažič, S.; Tadic, V.; Rodriguez-Resendiz, J.; Sárosi, J.; Odry, A. Online Outdoor Terrain Classification Algorithm for Wheeled Mobile Robots Equipped with Inertial and Magnetic Sensors. Electronics 2023, 12, 3238. https://doi.org/10.3390/electronics12153238

