Article

Energy-Based Surface Classification for Mobile Robots in Known and Unexplored Terrains

by Alexander Belyaev * and Oleg Kushnarev
Department of Automation and Robotics, National Research Tomsk Polytechnic University, 634050 Tomsk, Russia
* Author to whom correspondence should be addressed.
Robotics 2025, 14(9), 130; https://doi.org/10.3390/robotics14090130
Submission received: 18 July 2025 / Revised: 11 September 2025 / Accepted: 19 September 2025 / Published: 21 September 2025

Abstract

Mobile robot navigation in diverse environments is challenging due to varying terrain properties. Underlying surface classification improves robot control and navigation in such conditions. This paper presents an adaptive surface classification system using proprioceptive energy consumption data. We introduce an energy coefficient, calculated from motor current and velocity, to quantify motion effort. This coefficient’s dependency on motion direction is modeled for known surface types using discrete cosine transform. A probabilistic classifier, enhanced with memory, compares real-time coefficient values against these models to identify known surfaces. A neural network-based detector identifies encounters with previously unknown terrains by recognizing significant deviations from known models. Upon detection, a least squares method identifies the new surface’s model parameters using data gathered from specific motion directions. Experimental results validate the approach, demonstrating high classification accuracy for known surfaces (91%) and robust detection (96.2%) and identification (MAPE < 3%) of unknown surfaces.

1. Introduction

Navigation and localization, based on readings from the onboard sensor system, remain critical components for outdoor mobile ground robots. The influence of underlying surfaces on robot motion adds complexity to these tasks. One approach to handling navigation in such environments is surface classification, which allows the use of additional class-specific information, such as refined localization models or numerical parameters of the environment. Existing methods are most often divided into exteroceptive [1,2] and proprioceptive [3,4] ones; a similar division is used in other tasks related to outdoor robots, such as slip detection [5]. Exteroceptive methods classify surfaces without interacting with them, while proprioceptive methods use information about the robot's contact with the surface.
We propose to distinguish methods by the type of information used to classify surfaces and define four groups: visual (exteroceptive) [2,6], topography or vibration (proprioceptive, exteroceptive) [2,7,8], contact or force (proprioceptive) [5,9,10,11], and energy (proprioceptive) [12].
The first group uses visual sensors (camera, stereo camera, lidar) for exteroceptive classification [13,14,15,16]; an extensive review of visual methods in terrain classification is given in [17]. The advantage of such methods is that the robot can recognize complex or dangerous areas in advance and avoid them. The disadvantage is the inability to estimate surface parameters directly, so the parameters must be studied in advance.
The second group uses accelerometers, gyroscopes, vibration sensors, and other devices [18,19,20,21,22] to characterize surfaces by their topography. The topography can also be classified remotely, for example, using sonars or lidars [23]. This approach has the advantage of describing the surface directly.
The third group is based on the identification of surface parameters using special strain gauges mounted on a wheel [4] or a leg [11]. These gauges measure the contact force between the propulsor and the terrain, which allows the estimation of surface properties such as softness.
The fourth group of methods is the least studied. In [24,25], the authors used information from the current sensors of robot motors to classify surfaces. At the same time, several works demonstrate the effectiveness of using motor current/torque information to plan motion trajectories [26,27,28] and to detect indications of surface effects on robot motion, such as slip. In other words, this group of methods explores the impact of terrain on robot motion rather than the direct characteristics of the surface. Furthermore, since most robots are autonomous in terms of energy supply, they must execute tasks while taking their residual energy into consideration. This requires monitoring energy consumption and integrating current, voltage, or torque sensors. Consequently, a significant proportion of robots are already equipped with the sensors needed for the fourth group of methods.
One of the primary challenges for most of these methods is the dependency of classification on various parameters, such as motion speed, surface slope, and robot kinematics. Additionally, surface classification depends on the labeled classes: almost all classifiers are functional only on the surfaces involved in the labeling. In [18], a classification algorithm is proposed that automatically labels new surfaces and incorporates them into subsequent classification. The approach uses a specific spring sensor and can be referred to as a vibration-based method.
We propose to investigate a classification method based on evaluating an energy consumption coefficient during robot motion. This method extends classification capability and provides additional information about the influence of surfaces on robot motion. To expand the capabilities of the algorithm further, we employ a method for detecting unknown surfaces, followed by identification of their parameters. We build on our previous work [25], where we achieved high-quality classification of test surfaces. The objective of this paper is to improve that approach by considering the robot's direction and speed. We also derive an energy-based surface model and investigate an algorithm for the detection and identification of unexplored surfaces.
Section 2 describes the research setup, data collection, and the resulting models of three test surface types. Section 3 describes the overall system structure and the principles of the classifier, unexplored surface detection, and identification algorithms. Section 4 summarizes the experimental results of surface type classification, validation on test trajectories, and validation on an unexplored surface type. Section 5 concludes the paper, shows a comparison with known works, and describes perspectives on further research.

2. Method

2.1. Energy Consumption Coefficient, Models of Surfaces

In the course of our research, we explored a range of methodologies for estimating energy consumption related to movement. In this study, we introduced an energy consumption coefficient:

$$K_e = \frac{P_{\text{real}}}{P_{\text{no-load}}} = \frac{\sum_{i=1}^{m} U_i I_i}{\sum_{i=1}^{m} U_i I_n}, \qquad (1)$$

where Preal is the instantaneous actual power consumption at a given direction of motion, Pno-load is the instantaneous power consumption of motion at no-load currents, m is the number of motors, Ui and Ii are the motors' voltages and currents, and In is the no-load current. We calculate the no-load power with Ui to compensate for the influence of the built-in speed regulator.
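As an illustration, a minimal sketch of computing Ke from one sample of per-motor measurements per Equation (1); it assumes NumPy arrays of motor voltages and currents and a single no-load current value, and all variable names are ours:

```python
import numpy as np

def energy_coefficient(U, I, I_n):
    """Energy consumption coefficient K_e per Equation (1).

    U, I -- arrays of shape (m,): instantaneous voltage and current of each motor
    I_n  -- no-load current (assumed identical for all motors)
    """
    P_real = np.sum(U * I)        # actual power drawn by all m motors
    P_no_load = np.sum(U * I_n)   # no-load power at the same voltages
    return P_real / P_no_load

# Example: three motors of an omni-wheel platform, hypothetical readings
U = np.array([6.2, 5.9, 6.1])     # volts
I = np.array([0.41, 0.38, 0.40])  # amps
K_e = energy_coefficient(U, I, I_n=0.12)
```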
The coefficient depends on the motor voltages. These, in turn, depend on the direction of the velocity vector (α) and on the robot's kinematics. Therefore, we build a surface model as a function of the direction of motion (2). The shape of the model will differ for different robot kinematics.

$$\alpha = \tan^{-1}\frac{v_y}{v_x}, \qquad (2)$$

where $v_y$, $v_x$ are the set velocities of the robot.
To derive analytical models, we use polynomial approximation (3) and the discrete cosine transform (DCT-II) (4):

$$f_{\text{type}}(\alpha) = \sum_{i=0}^{n} A_i \alpha^i, \qquad (3)$$

$$f_{\text{type}}(\alpha) = \sum_{n=0}^{N-1} A_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\alpha\right]. \qquad (4)$$
The core component of the system is a surface classifier based on energy consumption. During robot motion, Ke values will deviate from the model values of known surfaces due to surface heterogeneity and measurement and process noise. Such deviations are accounted for in the standard deviation of the models, and we use a probabilistic approach in the classifier implementation to handle them.
However, large deviations of Ke from the model boundaries may indicate the appearance of an unexplored surface under the robot. In order to successfully understand which deviations should be considered a sign of a new surface and which ones should not, we need to develop rules for identifying a previously unknown surface. To recognize motion over unexplored surfaces, we use algorithms based on machine learning techniques. This way we can avoid subjective selection of classification parameters and formulation of distinguishing criteria by hand. Once the robot determines that the underlying surface is unexplored, it starts to identify its model parameters.
The system structure is illustrated in Figure 1. The data from the robot is filtered and then the energy consumption coefficient is calculated. The coefficient values are used by the surface type classifier and the unexplored surface detector. The classifier feeds the surface type to the system output. If the detector spots an unknown surface, an identification algorithm is run in conjunction with a trajectory generator that moves the robot along the desired directions. The parameters of the unexplored surface are recorded in the surface models database as a new class.

2.2. Data Filtering

Initially, the sensor data collected from the robot is characterized by substantial noise; in our case, this applies mainly to the motor current measurements. The wheel speed measurement is not affected owing to the use of optical encoders. We use a Kalman filter [29] to avoid the lag associated with the filtering process. It also allows us to filter the current and wheel speed values simultaneously using the robot motor model:
$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t), \qquad (5)$$

where $x = [I, \omega, M_{\text{load}}]^T$ and $u(t) = [\omega_{\text{set}}]$. Matrices A, B, C are the state matrices of the system:

$$A = \begin{bmatrix} -R_a/L_a & -(k_p + C_e)/L_a & 0 \\ k_t/J & -b/J & -1/J \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} k_p/L_a \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad (6)$$
where kp is the P coefficient of the speed controller, kt is the motor torque constant, J is the moment of inertia (7.1·10⁻⁶), b is the viscous friction coefficient (10⁻³), La is the motor inductance (8.9·10⁻³), Ra is the motor winding resistance (5.3351 Ohm), and Ce is the motor constant (0.0501). This model is transformed to a discrete form.
Each robot motor uses its own Kalman filter. The values of the process noise covariance Q and the observation noise covariance R are drawn from experimental data.
In our case, the Kalman filter achieves a filtering quality similar to that of a moving average with a window width of 0.5 s, both in median values and in standard deviation.
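A per-motor filter along these lines might look as follows. This is a sketch, not the authors' implementation: the controller gain kp, the torque constant kt, and the covariances Q and R are not given in the text, so the values below are placeholders, and the continuous model is discretized with a zero-order hold via SciPy:

```python
import numpy as np
from scipy.signal import cont2discrete

# Motor parameters from Section 2.2; k_p is a hypothetical gain and k_t is
# assumed numerically equal to C_e (its value is not stated in the paper)
R_a, L_a, C_e, k_p = 5.3351, 8.9e-3, 0.0501, 1.0
J, b, k_t = 7.1e-6, 1e-3, 0.0501
dt = 1.0 / 30.0  # 30 Hz sampling rate

A = np.array([[-R_a / L_a, -(k_p + C_e) / L_a,  0.0],
              [ k_t / J,   -b / J,             -1.0 / J],
              [ 0.0,        0.0,                0.0]])
B = np.array([[k_p / L_a], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
D = np.zeros((2, 1))

# Exact zero-order-hold discretization (forward Euler would be unstable here,
# since the electrical time constant is much shorter than the sampling period)
F, G, C_d, _, _ = cont2discrete((A, B, C, D), dt)

Q = np.diag([1e-4, 1e-3, 1e-6])  # process noise covariance (placeholder values)
R = np.diag([1e-2, 1e-3])        # observation noise covariance (placeholder values)

def kalman_step(x, P, omega_set, z):
    """One predict/update cycle for a single motor; z = [I_meas, omega_meas]."""
    # Predict
    x = F @ x + G @ np.atleast_1d(omega_set)
    P = F @ P @ F.T + Q
    # Update
    S = C_d @ P @ C_d.T + R
    K = P @ C_d.T @ np.linalg.inv(S)
    x = x + K @ (z - C_d @ x)
    P = (np.eye(3) - K @ C_d) @ P
    return x, P
```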

2.3. Probabilistic Surface Classifier with Memory

The classifier’s structure is shown in Figure 2. During movement along a specified direction, the robot yields the energy coefficient value. For that direction, the model coefficient values are retrieved from the surface model database. The current coefficient value is subtracted from the model values, and the resulting differences are fed into normal distribution functions to calculate the membership degree for each surface. The standard deviation of each model, averaged over motion directions, is used as the standard deviation of the corresponding normal distribution. The final surface type is defined as the argmax of the membership degrees. This classifier can be easily extended by adding new surface models.
We use a memory-based approach to store the previous classification result in order to improve the accuracy. The memory is implemented similarly to the exponential moving average:
$$\hat{p}_{\text{surf}}^{\,t} = a\, p_{\text{surf}}^{\,t} + (1 - a)\, \hat{p}_{\text{surf}}^{\,t-1}, \qquad (7)$$

where $p_{\text{surf}}$ is the raw membership degree, $\hat{p}_{\text{surf}}$ is the processed membership degree, and $a$ is the memory coefficient.
After applying (7), the classifier predictions are normalized to sum to 1.
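A compact sketch of one classification step, combining normal-density membership degrees with the memory rule (7); the memory coefficient value and the model container format are our assumptions:

```python
import numpy as np

def classify_step(K_e, alpha, models, p_prev, a=0.3):
    """One step of the probabilistic classifier with memory (Section 2.3).

    models -- dict: surface name -> (f_type, sigma), where f_type(alpha) returns
              the model Ke for a direction and sigma is the model's standard
              deviation averaged over motion directions (assumed format)
    p_prev -- processed membership degrees from the previous step
    a      -- memory coefficient (placeholder value)
    """
    p_hat = {}
    for name, (f_type, sigma) in models.items():
        diff = K_e - f_type(alpha)
        # membership degree from a zero-mean normal distribution
        p = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        # exponential-moving-average memory, Equation (7)
        p_hat[name] = a * p + (1.0 - a) * p_prev.get(name, p)
    total = sum(p_hat.values())
    p_hat = {name: p / total for name, p in p_hat.items()}  # normalize to sum to 1
    return max(p_hat, key=p_hat.get), p_hat
```

Each call returns the predicted surface type and the processed membership degrees, which are fed back as p_prev on the next step.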

2.4. Detection of Unexplored Surfaces

The detection algorithm works as a binary classification: the positive class means a surface is previously unknown; the negative class means a surface is known. The input data includes the difference between the actual energy coefficient and the model value, the direction of motion, and the standard deviation of the model. The algorithm averages the feature values over a 2 s window instead of taking them instantaneously. If the direction of motion varies within the window, its mean value is used; our preliminary studies have shown that this does not significantly affect detection accuracy. Early in training, we added a feature that measures how far the actual energy coefficient deviates from the model's estimate, expressed in standard deviations. This feature improved the accuracy of the algorithms in complex cases.
The described approach determines whether a surface is unexplored with respect to a single surface model. If there are several models, the detection algorithm is applied to each of them, and a logical AND is then applied to the detection results: if at least one model considers the surface to be known, the final result remains 0; only when detection against all surface models returns 1 does the system mark the surface as unexplored (see the sketch below). It is worth noting that it is not necessary to train separate machine learning models for each surface model; a single algorithm trained on examples with different surface models suffices. Such an algorithm allows us to use new surface models without additional training.
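A minimal sketch of this AND rule, assuming a scikit-learn style binary detector whose predict() returns 1 for "unknown surface"; the feature layout is our assumption:

```python
def surface_is_unexplored(features_per_model, detector):
    """Logical-AND detection against every known surface model (Section 2.4).

    features_per_model -- one feature vector per surface model, averaged over a
                          2 s window: (mean Ke deviation from that model, mean
                          motion direction, model sigma, deviation in sigmas)
    detector           -- trained binary classifier with a predict() method
    """
    # the surface is marked new only if it looks new against ALL known models
    return all(detector.predict([list(f)])[0] == 1 for f in features_per_model)
```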
The training dataset for the machine learning models is generated from 2 s windows of motion; the specific number of entries is calculated from the 30 Hz data sampling rate. Each row contains the direction of motion and the value of the energy coefficient. Between 1 and 10 motion directions are picked for each window, each receiving a random row count of at least 1, while the combined row count stays within the window width limit.
The data is taken from experiments with linear motion on the gray, green, and table surfaces. At each iteration, one of the surfaces is assigned as the baseline, and two surfaces are then selected for comparison: the baseline itself and one of the remaining surfaces chosen at random. This forms two rows in the sample, one per class, so the dataset is balanced. The average direction of motion and the average deviation of Ke values from the baseline surface model are calculated over the time window and recorded in a dataset line together with the standard deviation of the baseline surface model. The classification label is 0 if the surface is the same as the baseline and 1 if the surface is new.
It should be noted that most of the parameters (the number of directions in the window, the number of lines per direction, and the lines taken from the experiments) are sampled randomly, as illustrated in the sketch below. Iterating over all parameter combinations would require substantial computational resources and produce a huge dataset, complicating the training of machine learning models.
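A sketch of how such a window could be composed; the log structure and function name are hypothetical, and a multinomial split is used to guarantee at least one row per direction:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_window(logs, window_rows=60):
    """Compose one 2 s training window (60 rows at 30 Hz) from linear-motion logs.

    logs -- dict mapping a motion direction (deg) to an array of Ke samples
            recorded on one surface (assumed structure)
    Returns a list of (direction, Ke) rows for a randomly composed window.
    """
    n_dirs = int(rng.integers(1, 11))  # 1 to 10 directions per window
    dirs = rng.choice(list(logs.keys()), size=n_dirs, replace=False)
    # every direction gets at least one row; the total stays within the window
    counts = rng.multinomial(window_rows - n_dirs, np.ones(n_dirs) / n_dirs) + 1
    rows = []
    for d, c in zip(dirs, counts):
        for ke in rng.choice(logs[d], size=int(c)):  # random rows per direction
            rows.append((float(d), float(ke)))
    return rows
```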

2.5. Surface Parameters Identification

We use the least squares method to identify the surface parameters, with Equations (3) and (4) as the model. The identification task is to obtain the vector of coefficients Anew = {Ai}. To identify the three model parameters, data on the energy coefficient must be collected for at least three directions of motion. The motion generator steers the robot along the desired directions. The standard deviation is approximated as the average of the standard deviations along the selected directions.
Specific directions of motion for identification are chosen so that the difference between the identified surface model and the reference one, measured by the mean absolute percentage error (MAPE) (8), is minimized. The metric is averaged over all surfaces and over different identification durations. In this way, we aim to obtain a set of identification directions with higher generalization ability.
$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{K_e^{\,i} - \hat{K}_e}{K_e^{\,i}} \right| \qquad (8)$$
The directions are chosen from the range 0° to 180° because the models are symmetric about 180°. The first three directions are found by exhaustively iterating over all combinations of three directions. The set is then expanded by one direction per iteration, picking the additional direction that gives the minimum identification error among all candidate sets (sketched below).
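The selection procedure can be sketched as follows, assuming a callback eval_error() that identifies a model from a given set of directions and returns the MAPE averaged over surfaces and identification durations (the callback and names are ours):

```python
from itertools import combinations

def select_directions(all_dirs, n_total, eval_error):
    """Greedy selection of identification directions (Section 2.5).

    all_dirs   -- candidate directions, e.g. range(0, 180, 5)
    eval_error -- hypothetical callback: sequence of directions -> averaged MAPE
    """
    # exhaustive search over all triplets for the first three directions
    dirs = list(min(combinations(all_dirs, 3), key=eval_error))
    # then grow the set by the single best additional direction per iteration
    while len(dirs) < n_total:
        candidates = (d for d in all_dirs if d not in dirs)
        dirs.append(min(candidates, key=lambda d: eval_error(dirs + [d])))
    return dirs
```

With all_dirs = range(0, 180, 5) and n_total = 10, this produces a direction set of the kind reported in Section 3.4.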

3. Results

3.1. Research Setup and Data Gathering

The research setup used in the experiments is original and was built at Tomsk Polytechnic University. As shown in Figure 3, the stand includes the following:
  • Mobile robot with 3 omni-wheels. Such kinematics allows the robot to move in holonomic mode.
  • Computer vision system. It reads the coordinates and orientation of the mobile robot on the experimental site. It is stationary, fixed above the site, and is not involved in the navigation process.
  • Interchangeable underlying surfaces of different types. To collect training data, three test types are used in this work: soft smooth, rubber (gray); hard rough, plastic (green); hard smooth, laminated chipboard (table). The influence of such types on robot motion is shown in [30,31].
An example of an experiment and the robot's appearance are shown in Figure 4. During the experiments, the robot may follow a linear path on one terrain or a complex path, such as a circle or square, with transitions between two terrains.
There are 504 experiments in which the robot moves in a given direction with the same velocity amplitude. The motion directions are in the range from 0° to 355° with a step of 5°. Motion velocities: 0.1, 0.2, 0.3 m/s. Duration of experiments: 18 s for 0.1 m/s, 10 s for 0.2 m/s, 6 s for 0.3 m/s. During each run, the following readings are taken from the robot and the computer vision system at 30 Hz: rotation speed of each wheel (ωi), motor currents (Ii), and robot coordinates (x, y, φ). Synchronization between the computer vision readings and the robot sensors is implemented in software. The motor voltage (Ui) is calculated indirectly by the following formula [23]:
$$U_i = I_i R_a + C_e \omega_i. \qquad (9)$$
We analyze the data obtained for the three test surfaces and construct analytical surface models using several approximation methods. The MAPE is used to estimate the deviation of the model from the actual coefficient values. As shown in Figure 5, we compare polynomial approximation with the discrete cosine transform, where model complexity is the number of coefficients in the equation. The DCT performs best. The original data and models are shown in Figure 6. We use only the first three transform coefficients due to the high approximation accuracy (less than 2% MAPE) and the generalizability of the results.
The final analytical forms describing the relationship between the energy coefficient and the angle of motion demonstrate identical frequencies and phases in the cosine transform. The analytical representation of a surface model is as follows:

$$f_{\text{type}}(\alpha) = A_0 + A_1 \cos\left[\frac{\pi}{180}(2\alpha + 5)\right] + A_2 \cos\left[\frac{\pi}{180}(6\alpha + 15)\right], \qquad (10)$$

where the coefficients A0, A1, A2 are DCT parameters. All angles are in degrees.
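For illustration, a sketch of evaluating model (10) and fitting its three coefficients by least squares, as used for identification in Section 2.5; the function names are ours:

```python
import numpy as np

def basis(alpha_deg):
    """Design-matrix columns with the fixed frequencies/phases of Equation (10)."""
    a = np.asarray(alpha_deg, dtype=float)
    return np.column_stack([np.ones_like(a),
                            np.cos(np.pi / 180.0 * (2 * a + 5)),
                            np.cos(np.pi / 180.0 * (6 * a + 15))])

def f_type(alpha_deg, A):
    """Evaluate the surface model (10) for directions given in degrees."""
    return basis(alpha_deg) @ A

def identify_surface(alpha_deg, K_e):
    """Least-squares fit of (A0, A1, A2) from measured energy coefficients."""
    A, *_ = np.linalg.lstsq(basis(alpha_deg), np.asarray(K_e, float), rcond=None)
    return A
```

Calling identify_surface with data from the nine directions chosen in Section 3.4 and a 0.5 s window per direction mirrors the identification setup validated on the red surface.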

3.2. Surface Classifier

To evaluate the classifier, we performed two additional test experiments on circular and square trajectories without coordinate control. While following the trajectory, the robot moved from one surface to another; we used the two most similar surfaces, green and gray. At the moment of transition, it is impossible to determine unambiguously which surface the robot is on, so transitions between surfaces are not included in the accuracy evaluation.
The Accuracy metric is used. Table 1 shows the evaluation of the classifier with and without memory. Using memory improved the classification accuracy by at least 5%.

3.3. Detector of Unexplored Surfaces

To verify the accuracy of the unexplored surface detection algorithm, we conducted additional experiments. We took a new surface (red), which differed from the others in structure and impact: it had medium stiffness, less pronounced roughness, and was made of honeycomb-shaped EVA. Data from this surface was not involved in the training process.
We selected five machine learning models that demonstrate high accuracy on the binary classification problem:
  • Decision Tree: max depth = 10, min samples leaf = 10, min samples split = 100.
  • Random Forest: max depth = 20, min samples leaf = 20, estimators = 10.
  • Neural Network: hidden layers = 2, neurons per layer = 32, dropout = 0.05, activation = ReLU, fully connected.
  • Logistic Regression: standard parameters.
  • CatBoost: standard parameters.
The final Accuracy metric and the additional F-score are summarized in Table 2. Note that only red was considered a new class in this sample; data from experiments on the other surfaces is treated as previously known.
The best results for both the Accuracy metric and F-Score are demonstrated by the fully connected neural network.
We also analyzed the detection accuracy depending on the direction of motion. As a demonstration, we chose the gray surface, where errors are most pronounced. Detection accuracies by motion direction are shown in Figure 7. It is worth noting that detection errors for different algorithms occur in the same motion directions; a similar situation is present on the other surfaces. We hypothesize that the training dataset lacks examples for these directions.
The best algorithm, the neural network, is further trained on the extended dataset. A comparison of the obtained accuracies is given in Table 3.
Separately, we evaluated the algorithm on continuous trajectories. While moving from one surface to another, the robot has different wheels on different surfaces at the same time. Because of this, it is not clear how to interpret such readings: as a new surface or as a known one. Figure 8 reflects the accuracy difference in both cases.
The neural network showed the best accuracy results for the square and circle motion (99.9%, 87.6%) when the transition was not a new surface and (88.1%, 86.2%) when the transition between surfaces was considered a new surface.
Thus, using a neural network as a classification algorithm achieves an average accuracy of at least 87.6% with improvement to 90.63% after retraining.

3.4. Surface Parameters Identification

Surface model identification requires collecting energy consumption coefficient readings from at least three directions of motion. Table 4 shows the deviations of the identified models from the reference models of Section 2 as a function of the number of directions and the width of the data collection window per direction of motion.
Figure 9 summarizes the MAPE against the total identification time for different (window width, number of directions) pairs. The error is averaged over three attempts to identify the gray, green, and table surfaces. The data for each attempt is taken from different parts of the original dataset in order to estimate the error in a more generalized way.
Both factors, the number of directions and the window width, improve the identification accuracy, although increasing the number of directions has the larger impact. A small window of 0.5 s and nine directions achieve a MAPE of about 3%. Significantly higher accuracy requires both many directions and a large data collection window within each direction; however, this option is not always achievable in real conditions, as it may severely prolong the identification process.
The directions selected in the optimization process are (30, 75, 105, 110, 5, 70, 155, 125, 80, 85). We chose a combination of nine motion directions and 0.5 s window width to achieve a balance between accuracy and identification time. Red surface data is used to validate the identification algorithm, as it is excluded from analysis and training. Identified models are shown in Figure 10.
The boxplots in Figure 9 reflect the distribution properties of the energy coefficient values gathered from a 0.5 s window for the specified directions.

3.5. Classification with Identified Models

The gray, green, and table surface models were obtained from all available experiments. However, we are interested in the classification accuracy that would result if all models were initially unknown, that is, if all surface models were identified using nine directions of 0.5 s each. The approach described in Section 3.2 was used for classification.
The results are shown in Table 5. The average accuracy of the models decreased by only 1.5%. However, the accuracy on the complex gray surface dropped by almost 13%. This occurred due to both an increase in the value of the model coefficient A0 and a larger oscillation amplitude caused by coefficients A1 and A2. As a result, the data shifts toward the green surface region and the accuracy of gray surface detection drops significantly.

4. Discussion

Comparison with the classifier accuracies of other works can be rather controversial. This is due to the small number of works that use energy or motor current data for classification, as well as to differing experimental setups, i.e., different robots and underlying surfaces. Table 6 compares the surface classification accuracy of our work with that of other authors across the four proposed groups of methods.
Our proposed approach trails visual and IMU-based methods by no more than 5–6%. However, most of those methods rely on computationally expensive processing techniques, such as CNNs or LSTMs.
Compared to [24], the proposed method showed higher accuracy. Compared to our previous work [25], the new classifier demonstrates a 5.8% improvement in accuracy, primarily due to the incorporation of memory and a continuous dependence on the direction of movement. At the same time, the structure of the new classifier does not depend on the motion direction and velocity.
Our approach allows new surfaces to be added easily, both manually and automatically. A separate comparison with the algorithm presented in [20] on detecting unexplored surfaces gives 84.85% for [20] vs. 96.2% for our work.
In addition, similarly to [20], the proposed probabilistic classifier can characterize the output surface type in terms of percentages of known surface types. This approach will be considered in future studies to describe unexplored surfaces.
In real-world scenarios, data from one type of sensor is often insufficient to accurately estimate the impact of the surface on the robot. Therefore, data from multiple sensors is used to obtain more accurate features. Based on the comparison, we believe that the proposed coefficient can be one of the possible proprioceptive components in solving the classification problem, along with IMU, vibration, pressure sensors, or exteroceptive methods.

5. Conclusions

In this paper, a novel approach for underlying surface classification is proposed, based on the energy consumption coefficient of mobile robot motion. The classification principle is the comparison of data received from the robot with models of known surfaces. To extend the functionality, a new neural network-based method is proposed for detecting unexplored surfaces and subsequently identifying their model parameters. The surface classifier has shown a high overall accuracy of 92.6% in classifying four types of surfaces and an accuracy of 96.2% in detecting the unexplored surface. We have shared the source files for this article at https://github.com/okushnarev/energy-based-surface-classification (accessed on 20 September 2025).
Despite the use of simple algorithms, the data processing method presented in this paper demonstrates high accuracy. Although our dataset is small, it contains two overlapping surfaces: gray and green. We hypothesize that the greater the similarities among the surfaces in the data, the lower the accuracy of any method, including ours, will be. To verify this hypothesis, we plan to collect a larger dataset, train modern artificial intelligence models on the same source features, and compare the models’ accuracy with that of our method.
All experiments performed thus far have been conducted using a robot of a specific design—a platform with three omni-wheels. We hypothesize that the form of Ke models is determined by the robot kinematics. We believe these analytical surface models implicitly describe the features of specific kinematics. In the future, we plan to verify this by applying our method to robots with different designs.
Moreover, we want to improve the motion synthesis algorithm for identification of new surfaces. The motion generator should modify the robot’s motion trajectory so that it contains the specified directions, rather than interrupting the main motion for the sake of identification.
Our research has determined that a particular set of movement directions yields optimal accuracy for the identification process. However, when robots are controlled by coordinates, their movements undergo slight alterations around the selected direction, which may result in changes in the nature of the data for identification. On the one hand, such effects can be mitigated through the implementation of filtration processes. On the other hand, the paper [32] demonstrates the efficacy of employing chaos to regulate a robot’s movement across diverse surfaces. In subsequent studies, we will examine the impact of chaos on the efficacy of our algorithm.
Also, an additional direction of development could be the study of the relationship between model decomposition coefficients and physical characteristics of surfaces. This is important for our method, as it is based on the analysis of the robot’s movement across the surface. We believe that such research will allow us to identify surface characteristics that directly impact the robot’s movement. This will help us to significantly improve the navigation accuracy of mobile platforms.

Author Contributions

Conceptualization, A.B. and O.K.; methodology, A.B.; software, O.K.; validation, O.K.; formal analysis, A.B. and O.K.; investigation, A.B. and O.K.; resources, A.B. and O.K.; data curation, O.K.; writing—original draft preparation, A.B. and O.K.; writing—review and editing, A.B. and O.K.; visualization, O.K.; supervision, A.B.; project administration, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zurn, J.; Burgard, W.; Valada, A. Self-Supervised Visual Terrain Classification From Unsupervised Acoustic Feature Learning. IEEE Trans. Robot. 2021, 37, 466–481. [Google Scholar] [CrossRef]
  2. Kurobe, A.; Nakajima, Y.; Kitani, K.; Saito, H. Audio-Visual Self-Supervised Terrain Type Recognition for Ground Mobile Platforms. IEEE Access 2021, 9, 29970–29979. [Google Scholar] [CrossRef]
  3. Satsevich, S.; Savotin, Y.; Belov, D.; Pestova, E.; Erhov, A.; Khabibullin, B.; Bazhenov, A.; Kovalev, V.; Fedoseev, A.; Tsetserukou, D. HyperSurf: Quadruped Robot Leg Capable of Surface Recognition with GRU and Real-to-Sim Transferring. In Proceedings of the 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Kuching, Malaysia, 6–10 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 2625–2630. [Google Scholar] [CrossRef]
  4. Yuan, Y.; Yang, H.; Yang, C.; Ding, L.; Gao, H.; Li, N. Multi-Slip Conditions Acquisition of Planetary Rovers with Application to Terrain Parameter Identification. In Proceedings of the 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Shanghai, China, 26–28 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 60–65. [Google Scholar] [CrossRef]
  5. Gonzalez, R.; Iagnemma, K. Slippage estimation and compensation for planetary exploration rovers. State of the art and future challenges. J. Field Robot. 2018, 35, 564–577. [Google Scholar] [CrossRef]
  6. Mohammad, F.; Gao, Y.; Kay, S.; Field, R.; De Benedetti, M.; Ntagiou, E.V. Deep Learning based Semantic Segmentation for Mars Rover Terrain Classification. In Proceedings of the 2024 International Conference on Space Robotics (iSpaRo), Luxembourg, 24–27 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 292–298. [Google Scholar] [CrossRef]
  7. DuPont, E.M.; Moore, C.A.; Collins, E.G.; Coyle, E. Frequency response method for terrain classification in autonomous ground vehicles. Auton. Robot. 2008, 24, 337–347. [Google Scholar] [CrossRef]
  8. Li, X.; Wu, J.; Li, Z.; Zuo, J.; Wang, P. Robot Ground Classification and Recognition Based on CNN-LSTM Model. In Proceedings of the 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), Nanchang, China, 26–28 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1110–1113. [Google Scholar] [CrossRef]
  9. Liu, X.; Chen, H.; Chen, H. Contrastive Learning-Based Attribute Extraction Method for Enhanced Terrain Classification. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 5644–5650. [Google Scholar] [CrossRef]
  10. Wu, X.A.; Huh, T.M.; Sabin, A.; Suresh, S.A.; Cutkosky, M.R. Tactile Sensing and Terrain-Based Gait Control for Small Legged Robots. IEEE Trans. Robot. 2020, 36, 15–27. [Google Scholar] [CrossRef]
  11. Bednarek, M.; Lysakowski, M.; Bednarek, J.; Nowicki, M.R.; Walas, K. Fast Haptic Terrain Classification for Legged Robots Using Transformer. In Proceedings of the 2021 European Conference on Mobile Robots (ECMR), Bonn, Germany, 31 August–3 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar] [CrossRef]
  12. Odedra, S. Using unmanned ground vehicle performance measurements as a unique method of terrain classification. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 286–291. [Google Scholar] [CrossRef]
  13. Dutta, A.; Dasgupta, P. Ensemble Learning With Weak Classifiers for Fast and Reliable Unknown Terrain Classification Using Mobile Robots. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2933–2944. [Google Scholar] [CrossRef]
  14. Iwashita, Y.; Nakashima, K.; Gatto, J.; Higa, S.; Stoica, A.; Khoo, N.; Kurazume, R. Virtual IR Sensing for Planetary Rovers: Improved Terrain Classification and Thermal Inertia Estimation. IEEE Robot. Autom. Lett. 2020, 5, 6302–6309. [Google Scholar] [CrossRef]
  15. Lee, K.; Lee, K. Terrain-aware path planning via semantic segmentation and uncertainty rejection filter with adversarial noise for mobile robots. J. Field Robot. 2025, 42, 287–301. [Google Scholar] [CrossRef]
  16. Alcayaga, J.M.; Menéndez, O.A.; Torres-Torriti, M.A.; Vásconez, J.P.; Arévalo-Ramirez, T.; Romo, A.J.P. LSTM-Enhanced Deep Reinforcement Learning for Robust Trajectory Tracking Control of Skid-Steer Mobile Robots Under Terra-Mechanical Constraints. Robotics 2025, 14, 74. [Google Scholar] [CrossRef]
  17. Arafin, T.; Hosen, A.; Najdovski, Z.; Wei, L.; Rokonuzzaman, M.; Johnstone, M. Advances and Trends in Terrain Classification Methods for Off-Road Perception. J. Field Robot. 2025. [Google Scholar] [CrossRef]
  18. Li, Q.; Cicirelli, F.; Vinci, A.; Guerrieri, A.; Qi, W.; Fortino, G. Quadruped Robots: Bridging Mechanical Design, Control, and Applications. Robotics 2025, 14, 57. [Google Scholar] [CrossRef]
  19. Sarcevic, P.; Csík, D.; Pesti, R.; Stančin, S.; Tomažič, S.; Tadic, V.; Rodriguez-Resendiz, J.; Sárosi, J.; Odry, A. Online Outdoor Terrain Classification Algorithm for Wheeled Mobile Robots Equipped with Inertial and Magnetic Sensors. Electronics 2023, 12, 3238. [Google Scholar] [CrossRef]
  20. Yu, Z.; Sadati, S.M.H.; Hauser, H.; Childs, P.R.N.; Nanayakkara, T. A Semi-Supervised Reservoir Computing System Based on Tapered Whisker for Mobile Robot Terrain Identification and Roughness Estimation. IEEE Robot. Autom. Lett. 2022, 7, 5655–5662. [Google Scholar] [CrossRef]
  21. Bai, C.; Guo, J.; Guo, L.; Song, J. Deep Multi-Layer Perception Based Terrain Classification for Planetary Exploration Rovers. Sensors 2019, 19, 3102. [Google Scholar] [CrossRef]
  22. Hu, Y. Robotic terrain classification based on convolutional and long short-term memory neural networks. Cogn. Robot. 2025, 5, 166–175. [Google Scholar] [CrossRef]
  23. Sheppard, A.; Brown, J.; Renno, N.; Skinner, K.A. Learning Surface Terrain Classifications from Ground Penetrating Radar. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–18 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 3047–3055. [Google Scholar] [CrossRef]
  24. Ojeda, L.; Borenstein, J.; Witus, G.; Karlsen, R. Terrain characterization and classification with a mobile robot. J. Field Robot. 2006, 23, 103–122. [Google Scholar] [CrossRef]
  25. Belyaev, A.S.; Kushnarev, O.Y.; Brylev, O.A. Synthesis of a hybrid underlying surface classifier based on fuzzy logic using current consumption of mobile robot motion. Inf. Control. Syst. 2024, 1, 31–43. [Google Scholar]
  26. Zhu, B.; He, J.; Yuan, Z.; Gao, F. Probabilistic Path Planning for Wheel-Legged Rover in Dense Environment Based on Extended MDP and Configuration Topology Analysis. IEEE Trans. Robot. 2025, 41, 2512–2532. [Google Scholar] [CrossRef]
  27. Bai, X.; Li, C.; Zhang, B.; Wu, Z.; Ge, S.S. Efficient Performance Impact Algorithms for Multirobot Task Assignment With Deadlines. IEEE Trans. Ind. Electron. 2024, 71, 14373–14382. [Google Scholar] [CrossRef]
  28. Bai, X.; Cao, M.; Yan, W. Event- and time-triggered dynamic task assignments for multiple vehicles. Auton. Robot. 2020, 44, 877–888. [Google Scholar] [CrossRef]
  29. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  30. Andrakhanov, A.; Belyaev, A. GMDH-Based Learning System for Mobile Robot Navigation in Heterogeneous Environment. In Advances in Intelligent Systems and Computing II; Springer: Cham, Switzerland, 2018; pp. 1–20. [Google Scholar] [CrossRef]
  31. Belyaev, A.S.; Brylev, O.A.; Ivanov, E.A. Slip Detection and Compensation System for Mobile Robot in Heterogeneous Environment. IFAC-PapersOnLine 2021, 54, 339–344. [Google Scholar] [CrossRef]
  32. Buscarino, A.; Fortuna, L.; Frasca, M.; Muscato, G. Chaos does help motion control. Int. J. Bifurc. Chaos 2007, 17, 3577–3581. [Google Scholar] [CrossRef]
Figure 1. System structure.
Figure 2. Classifier structure.
Figure 3. Research setup used for data gathering during experiments.
Figure 4. On the left: view of the experimental site with gray and green surfaces. The robot moves along a complex path with transitions between surfaces. True surface labels are shown at each step. On the right: the appearance of the omnidirectional robot used in the experiments.
Figure 5. Approximation accuracy for DCT and polynomial approximation methods.
Figure 6. Energy consumption coefficient vs. the direction of motion. DCT models approximating the values of the coefficient.
Figure 7. Unexplored surface detection accuracy while driving on the gray surface, evaluated across motion directions and machine learning models.
Figure 8. Unexplored surface detection accuracy for square and circle trajectories, under different interpretations of the transition between surfaces.
Figure 9. Mean absolute percentage error (MAPE) of model identification vs. total data collection time.
Figure 10. Graphical representation of surface models obtained from identification by 9 directions with 0.5 s window width.
Table 1. Classification accuracies of the probabilistic algorithm with and without memory on different data samples.

| Dataset | Linear | Circle | Square |
|---|---|---|---|
| Original | 85.9% | 65.1% | 70.2% |
| With memory | 91.0% | 83.7% | 76.0% |
| Improvement | +5.1% | +18.5% | +5.8% |
Table 2. Detection accuracies of different machine learning methods.

| Model | Gray | Green | Table | Red | F-Score |
|---|---|---|---|---|---|
| Neural Network | 98.1% | 99.9% | 90.5% | 95.9% | 88.81% |
| CatBoost | 94.8% | 99.6% | 90.3% | 95.8% | 84.95% |
| Logistic Regression | 92.0% | 99.1% | 90.2% | 95.9% | 81.63% |
| Random Forest | 89.1% | 98.4% | 90.1% | 95.9% | 78.34% |
| Decision Tree | 89.0% | 97.3% | 90.3% | 95.7% | 77.25% |
Table 3. Accuracy of the neural network with and without retraining on different surfaces and trajectories.

| Metric | Original | Retrain | Change |
|---|---|---|---|
| F-score | 88.81% | 89.67% | +0.87% |
| Accuracy: Gray | 98.08% | 98.80% | +0.72% |
| Accuracy: Green | 99.89% | 99.96% | +0.07% |
| Accuracy: Table | 90.46% | 90.63% | +0.16% |
| Accuracy: Red | 95.87% | 95.62% | −0.25% |
| Accuracy: Square | 99.91% | 99.91% | 0.00% |
| Accuracy: Circle | 87.64% | 92.55% | +4.91% |
Table 4. MAPE of model parameter identification as a function of the number of directions and the window width per direction.

| Surface | 3 directions | 4 directions | 5 directions | 1 s window | 1.5 s window | 2 s window |
|---|---|---|---|---|---|---|
| Gray | 6.80% | 6.77% | 1.92% | 6.80% | 7.67% | 7.08% |
| Green | 5.59% | 2.96% | 3.99% | 5.59% | 5.25% | 1.71% |
| Table | 13.70% | 3.13% | 1.98% | 13.70% | 13.53% | 8.73% |
Table 5. Accuracy of surface classification with different sources of surface models.

| Surface | Whole Dataset | Identification |
|---|---|---|
| Gray | 86.2% | 73.4% |
| Green | 88.3% | 93.8% |
| Table | 98.4% | 98.4% |
| Red | 97.5% | 97.3% |
| Mean | 92.6% | 90.7% |
Table 6. Surface classification accuracy comparison for different methods.

| Group | Article, Year | Data Type | Surfaces | Accuracy |
|---|---|---|---|---|
| Mixed | Ojeda et al. [24], 2006 | Velocity | Gravel, Grass, Sand, Pavement, Dirt | 53.20% |
| Mixed | Ojeda et al. [24], 2006 | IMU | Gravel, Grass, Sand, Pavement, Dirt | 78.40% |
| Mixed | Ojeda et al. [24], 2006 | Current | Gravel, Grass, Sand, Pavement, Dirt | 56.90% |
| Mixed | Andrakhanov et al. [30], 2017 | Velocity, Current, IMU | Rubber, Plastic, Wood, Carpet, Foam Rubber | 90.52% |
| (1) Visual | Iwashita et al. [14], 2020 | RGB and IR camera | Rocks, Bedrock, Compacted Sand, Compacted Sand with Gravel, Loose Sand with Gravel | 95.10% |
| (1) Visual | Mohammad et al. [6], 2024 | Visual | AI4Mars Dataset | 99% |
| (1) Visual | Sheppard et al. [23], 2024 | Radar | Asphalt, Grass, Sand, Sidewalk | 80–98.5% |
| (1) Visual | Lee et al. [15], 2025 | RGB-D | RUGD Dataset + Hard Ground, Dirt, Gravel, Grass, Sand, Mulch, Bush, Water, Background | 74.80% |
| (2) Audio | Zurn et al. [1], 2021 | Audio | Asphalt, Parking Lot, Grass, Gravel, Cobblestone | 93.10% |
| (2) Audio | Kurobe et al. [2], 2021 | Audio, Video | Carpet, Concrete Flooring, Tile, Linoleum, Rough Concrete, Asphalt, Grass, Pavement, Wood Deck, Mulch | 85% |
| (2) IMU | DuPont et al. [7], 2008 | IMU | Packed Gravel, Loose Gravel, Sparse Grass, Tall Grass, Asphalt, Sand | 70–100% |
| (2) IMU | Dutta et al. [13], 2017 | IMU, Acceleration, RPY | Grass, Rock, Concrete, Sand, Brick | 63% |
| (2) IMU | Bai et al. [21], 2019 | IMU | Brick, Sand, Flat, Cement, Soil | 75–98% |
| (2) IMU | Li et al. [8], 2021 | IMU | Hard Tiles with Large Space, Hard Tiles, Soft Tiles, Fine Concrete, Concrete, Soft Polyvinyl Chloride, Tiles, Wood, Carpet | 68.54 ± 3.71% |
| (2) IMU | Sarcevic et al. [19], 2023 | IMU, Magnetometer | Concrete, Grass, Pebbles, Sand, Paving Stone, Synthetic Running Track | 75–98% |
| (2) IMU | Satsevich et al. [3], 2024 | IMU | Carpet, Rubber, Tile, Rough Tile | 98% |
| (2) IMU | Hu et al. [22], 2025 | IMU | Large Space, Hard Tiles, Soft Tiles, Fine Concrete, Concrete, Soft Polyvinyl Chloride (PVC), Tiles, Wood, Carpet | 81% |
| (2) Vibration | Yu et al. [20], 2022 | Tapered whisker tactile sensor | Hard Rough Cobblestones, Hard Roughish Brick, Soft Rough Grass, Soft Roughish Sand, Hard Smooth Flat, Soft Smooth Carpet | 84.12% |
| (3) Contact force | Wu et al. [10], 2021 | Leg tactile | Concrete, Waxed Tile, Laminate Wood, Medium-Density Grass, Wood Chips/Mulch, Gravel, Rubble, Sand | 82.50% |
| (3) Contact force | Bednarek et al. [11], 2021 | Leg tactile | Carpet, Artificial Grass, Rubber, Sand, Foam, Rocks, Ceramic Tiles, PVC | 83.3–91.7% |
| (3) Contact force | Liu et al. [9], 2024 | Leg tactile | Asphalt, Stone Brick, Soil, Sidewalk, Grass, Dry Grass, Tall Grass, Track | 96.95% |
| (4) Energy | Belyaev et al. [25], 2024 | Current | Soft Smooth, Hard Rough, Hard Smooth | 85.2–91.1% |
| (4) Energy | Ours | Current | Rubber, Plastic, Laminated Chipboard, EVA Honeycomb | 92.60% |