Design and Implementation of an Autonomous Electric Vehicle for Self-Driving Control under GNSS-Denied Environments

In this study, the hardware and software design and implementation of an autonomous electric vehicle are addressed. We aimed to develop an autonomous electric vehicle for path tracking. Control and navigation algorithms are developed and implemented. The vehicle is able to perform path-tracking maneuvers in environments in which the positioning signals from the Global Navigation Satellite System (GNSS) are not accessible. The proposed control approach uses a modified constrained input-output nonlinear model predictive controller (NMPC) for path-tracking control. The proposed localization algorithm provides accurate position estimation in GNSS-denied environments. We discuss the procedure for designing the vehicle hardware, electronic drivers, communication architecture, localization algorithm, and controller architecture. The system's full state is estimated by fusing visual-inertial odometry (VIO) measurements with wheel odometry data using an extended Kalman filter (EKF). Simulation and real-time experiments were performed. The obtained results demonstrate that the designed autonomous vehicle is capable of performing path-tracking maneuvers without using GNSS positioning data. The designed vehicle can perform challenging path-tracking maneuvers at speeds of up to 1 m/s.


Introduction
The development of autonomous electric cars has recently been in the spotlight. Path tracking is the most important maneuver that an autonomous vehicle must be able to perform, and to do so the vehicle must be equipped with several essential elements.
The controller is the first crucial element of the autonomous vehicle. In order to perform challenging path-tracking maneuvers, an autonomous ground vehicle needs a robust, fast, and stable controller. However, designing controllers for stabilizing such vehicles is a challenging task due to the presence of non-holonomic constraints [1].
Secondly, the vehicle should be able to determine its position and orientation in the environment with high accuracy. Sensors and state estimation algorithms must be able to accurately estimate the position and orientation of the vehicle in different environments, which may present a variety of weather conditions. However, localization of ground vehicles has always been a challenging process. The majority of existing vehicle localization systems are equipped with receivers that receive positioning and timing signals from at least one of the constellations in the Global Navigation Satellite System (GNSS). Contrary to aerial vehicles, which usually have a clear view of the sky and can receive positioning signals from the GNSS satellites easily, ground vehicles pass through a variety of environments, including roads, tunnels, urban canyons, and forest areas, in which GNSS signals are degraded or unavailable.
The prediction accuracy of ordinary MPC is limited by the validity of the linear model [17]. As ordinary MPC cannot take a nonlinear dynamic model as its prediction model, researchers have introduced an optimal model predictive control approach called nonlinear model predictive control (NMPC), in which a nonlinear model of the plant is used to predict the forward evolution of the system over the ensuing states (see Section 4.2).
Most studies have used a dynamic bicycle model combined with a linear tire model as the vehicle model for MPC [18]. This approach, however, comes with two main drawbacks: it is computationally heavy, and the tire model approaches its singularity points at low vehicle velocities. Tire models generally contain a sideslip-angle term with the vehicle speed in the denominator, which degrades control performance in stop-and-go maneuvers, an essential capability for driving in urban environments [19]. Another disadvantage of the dynamic bicycle model is the difficulty of system identification, because several parameters must be measured with high accuracy. Contrary to the dynamic bicycle model, kinematic bicycle models do not rely on tire models and are by nature more suitable for stop-and-go driving control, in which the velocity approaches zero. In addition, system identification is far easier for the kinematic bicycle model than for its dynamic counterpart because fewer parameters need to be measured. These advantages support using the kinematic bicycle model as the prediction model for the NMPC. In this study, a nonlinear kinematic bicycle model is employed as the prediction model for the controller (see Section 5.1).
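The low-speed singularity mentioned above can be seen directly: the tire slip-angle estimate divides by the longitudinal velocity, so it blows up as the vehicle approaches a stop. A minimal numeric illustration (the geometry value `l_f` and all inputs are illustrative placeholders, not parameters of the vehicle in this study):

```python
import math

def front_slip_angle(delta, v_x, v_y, yaw_rate, l_f=0.8):
    """Front-tire slip angle as used by linear tire models; note the
    division by v_x, which is singular at standstill. l_f is a
    placeholder distance from the center of gravity to the front axle."""
    return delta - math.atan((v_y + l_f * yaw_rate) / v_x)

for v_x in (5.0, 1.0, 0.1, 0.01):
    alpha = front_slip_angle(delta=0.05, v_x=v_x, v_y=0.02, yaw_rate=0.1)
    print(f"v_x = {v_x:5.2f} m/s -> slip angle = {alpha: .3f} rad")
```

As the speed drops from 5 m/s toward zero, the slip-angle estimate diverges, which is exactly why the kinematic model is preferred for stop-and-go control.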

Vehicle Localization
Vehicle localization is the second challenge when implementing an autonomous vehicle. Although the controller performs the main task of driving the vehicle, it heavily relies on the vehicle position and orientation data fed back to it, and therefore the performance of the controller depends on the accuracy of the data provided by the localization algorithm. Hence, obtaining the position and orientation of the vehicle with high accuracy plays a vital role in the desirable performance of an autonomous vehicle. A variety of methods are used to find the position and orientation of ground vehicles.
The most widely used positioning system in vehicles is the GNSS-based positioning approach. GNSS stands for Global Navigation Satellite System, which includes all global satellite positioning systems providing position and timing signals for navigation. The Navstar Global Positioning System (GPS) is among the oldest components of GNSS used in vehicle positioning. The positioning accuracy of GPS, however, is limited to roughly 8 m (excluding survey-grade receivers). Furthermore, the position updates are not fast enough (~10 Hz). In addition, a GNSS receiver needs line-of-sight signals from multiple satellites to compute its position, and therefore the system needs a clear view of the sky. This hinders positioning in tunnels and urban canyons. Multi-path interference is another problem in GPS-based positioning systems that reduces the accuracy of the data.
Another method used for finding the heading and position of a ground vehicle is wheel odometry. Wheel odometry uses signals from encoders coupled to the wheels to count the revolutions each wheel has made. A kinematic model, along with the data from the wheel encoders, is used to calculate the vehicle's present position; the algorithm can also find the current orientation of the vehicle relative to the starting point. The reported position, however, is susceptible to drift, in which the position error accumulates over time. The problem worsens when the vehicle moves on a non-smooth surface, and the reported odometry data become unreliable when there is slippage between the surface and the tire, which usually happens during maneuvers on uneven terrain or slippery surfaces.
Visual odometry (VO) is another method used in vehicle position estimation. The method uses a sequence of camera images to estimate the amount of vehicle movement.
The majority of studies on VO have employed three well-known approaches: the feature-based method [20][21][22], the direct method [23,24], and the hybrid approach, which combines the benefits of the feature-based and direct methods [25,26]. Direct methods work on the assumption that the projection of a point in two consecutive frames has the same intensity. This assumption often fails due to lighting changes, sensor noise, pose errors, and dynamic objects [27]. Hence, direct methods require a high frame rate, which minimizes the intensity changes between frames. Another issue with the direct method is its high computational cost due to the use of all the pixels over all frames. Generally, when there is a smooth, low-textured surface or bad lighting conditions, the odometry data from VO are susceptible to error. Moreover, VO is liable to error when there is a sudden camera movement [28]. In order to alleviate this error, inertial measurement units (IMUs) are usually employed along with VO.
Most positioning systems today contain an element called an inertial measurement unit (IMU). Inertial measurement units are considered the cornerstone of an inertial navigation system (INS). The INS uses data from the IMU to find attitude, acceleration, angular velocity, linear velocity, and position relative to the world frame [29]. In an INS, acceleration is integrated with respect to time to find velocity and position. The integration process, however, accumulates errors over time, which brings about a drift in the reported position. An error in acceleration generates a linear error in velocity and a quadratic error in position. Similarly, an error in the gyroscope data causes a quadratic error in velocity and a cubic error in position [30].
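The quadratic growth of position error from a constant accelerometer bias can be verified with a short numerical double integration (a minimal sketch; the bias value and time step are illustrative):

```python
# Double-integrate a constant accelerometer bias and compare the
# resulting position drift with the analytic value 0.5 * b * t^2.
def position_drift(bias, t_end, dt):
    v_err, p_err, t = 0.0, 0.0, 0.0
    while t < t_end:
        v_err += bias * dt   # bias integrates linearly into velocity
        p_err += v_err * dt  # and quadratically into position
        t += dt
    return p_err

b = 0.01  # 0.01 m/s^2 accelerometer bias (illustrative)
drift = position_drift(b, t_end=60.0, dt=0.001)
print(f"drift after 60 s: {drift:.2f} m (analytic: {0.5 * b * 60.0**2:.2f} m)")
```

Even this tiny bias produces roughly 18 m of position drift after one minute, which is why unaided inertial navigation degrades quickly and must be corrected with exteroceptive measurements.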
The advantage of an IMU is that it can provide odometry data at a fast update rate when there are large, sudden movements across a short time interval. This motivates designers to use it along with VO to form visual-inertial odometry (VIO). Data from the IMU and VO can be fused either loosely or tightly. A loosely coupled approach for visual-inertial systems keeps the visual and inertial frameworks as independent entities [31], while a tightly coupled approach combines the visual and inertial parameters in a single optimization problem, in which their states are jointly estimated [32]. Contemporary tightly coupled methods for visual-inertial odometry fall into two categories, namely optimization-based methods [33][34][35] and filter-based algorithms [36][37][38]. In an optimization-based approach, an optimal estimate is calculated using an optimization problem solver that minimizes the photometric error in order to extract more information from the images. Although optimization-based methods provide high accuracy, they impose high computational costs on the system. Filter-based approaches, by contrast, are built around the Kalman filter [39]. Filter-based methods show acceptable efficiency, with accuracy comparable to that of optimization-based methods. In this study, we use the filter-based stereo visual-inertial odometry introduced in [40], which employs a multi-state constraint Kalman filter (MSCKF) and the FAST corner detector [41,42] for feature detection.

Research Contribution
The main contribution of this study is the design of an autonomous electric ground vehicle for the path-tracking maneuver. We propose a localization architecture that fuses data from a hybrid visual-inertial odometry with data from a wheel odometry algorithm; the proposed localization system is able to estimate the position and orientation of the vehicle with high accuracy. The proposed control approach uses a modified nonlinear model predictive controller (NMPC) with constrained inputs and outputs for path tracking.
In this paper, we also present the process of hardware and software design and implementation of the vehicle. The optimal design of hardware and software is conducted such that the designed autonomous vehicle can run the proposed control and localization algorithms with high accuracy.
The next sections of the paper are organized as follows: In Section 3, we introduce the overall structure of the proposed system; in Section 4, we introduce the localization algorithm, system state estimator algorithm structure, and sensors; in Section 5, we introduce the proposed NMPC that is employed for the ground vehicle control; in Section 6, we discuss the hardware structure and implementation; in Section 7, we describe the simulation and real-time experimental results of the employed localization algorithm; in Section 8, we depict simulation and evaluation of the employed control algorithm; in Section 9, we discuss real-time electric vehicle experimental results; and in Section 10, we provide our conclusions.

Proposed Approach
The proposed approach consists of a path-tracking module based on NMPC (see Section 5.2), which predicts the future states and control inputs 10 steps ahead. The full system architecture is presented in Figure 1. For accurate state estimation, we use an extended Kalman filter (EKF) to fuse the odometry data of multiple algorithms. The visual-inertial odometry (VIO) algorithm fuses data from the visual odometry (VO) algorithm with data from the inertial measurement unit (IMU); the output of the VIO is the three-dimensional (3D) position and orientation of the vehicle. Simultaneously, the two-dimensional (2D) position and orientation of the vehicle are generated by a wheel odometry processing unit that consists of rotary encoders and a data processor. In the next step, the data provided by the two sources are sent to the final position and orientation estimator, which is an extended Kalman filter in the robot_localization package (see Section 4.4). This last localization unit performs both the frame transformation and the sensor fusion, and provides the position and orientation of the ground vehicle with high accuracy. The generated data are used as input signals for the NMPC algorithm. The controller generates control commands that are sent to the low-level control in the electronic control unit (ECU) of the vehicle. In the next three sections, we discuss the theory, design, and implementation of each element in the introduced system architecture.

The Proposed Algorithms for Vehicle State Estimation, Localization, and Sensors
The localization method used in this study relies on the fusion of visual-inertial odometry and wheel odometry data. Data from the aforementioned sources are sent to an estimator that uses an extended Kalman filter to provide an accurate estimate of the vehicle location and heading with reference to the initial point of the car in the odometry frame.

Visual-Inertial Odometry Algorithm
In this study, a Kalman-filter-based stereo visual-inertial odometry is used, with data from the IMU and a stereo camera. The IMU state can be written as:

$$x_I = \left[\, {}^I_G q^\top \;\; b_g^\top \;\; {}^G v_I^\top \;\; b_a^\top \;\; {}^G p_I^\top \;\; {}^I_C q^\top \;\; {}^I p_C^\top \,\right]^\top \quad (1)$$

where the unit quaternion ${}^I_G q$ (q stands for quaternion) provides the rotation from the inertial frame $\{G\}$ to the vehicle frame $\{I\}$; ${}^G v_I$ and ${}^G p_I$ define the linear velocity and location of the vehicle frame mapped into the inertial frame, respectively; $b_g$ and $b_a$ are the measurement biases of the gyroscope and accelerometer of the inertial measurement unit, respectively; and ${}^I_C q$ and ${}^I p_C$ provide the transformation between the body and camera frames. In order to avoid singularities in the covariance matrices, the IMU error state corresponding to Equation (1) is defined as:

$$\tilde{x}_I = \left[\, \tilde{\theta}_I^\top \;\; \tilde{b}_g^\top \;\; \tilde{v}^\top \;\; \tilde{b}_a^\top \;\; \tilde{p}_I^\top \;\; \tilde{\theta}_C^\top \;\; \tilde{p}_C^\top \,\right]^\top \quad (2)$$

In this relation, the standard additive error is used for position, velocity, and biases (e.g., $\tilde{p} = p - \hat{p}$). The quaternion error, $\delta q = q \otimes \hat{q}^{-1}$, has a close relation to the attitude error:

$$\delta q \approx \left[\, \tfrac{1}{2}\tilde{\theta}^\top \;\; 1 \,\right]^\top \quad (3)$$

where $\tilde{\theta}$ is a representation of a small-angle rotation. As a result, the ultimate state error, including the $N$ camera states, can be written as:

$$\tilde{x} = \left[\, \tilde{x}_I^\top \;\; \tilde{x}_{C_1}^\top \;\cdots\; \tilde{x}_{C_N}^\top \,\right]^\top \quad (4)$$

where each camera state error can be described as $\tilde{x}_{C_i} = [\, \tilde{\theta}_{C_i}^\top \;\; \tilde{p}_{C_i}^\top \,]^\top$ (5). In order to obtain a process model, the continuous-time dynamics of the estimated inertial measurement unit states can be considered as:

$${}^I_G\dot{\hat{q}} = \tfrac{1}{2}\,\Omega(\hat{\omega})\,{}^I_G\hat{q}, \quad \dot{\hat{b}}_g = 0, \quad {}^G\dot{\hat{v}} = C({}^I_G\hat{q})^\top \hat{a} + {}^G g, \quad \dot{\hat{b}}_a = 0, \quad {}^G\dot{\hat{p}} = {}^G\hat{v} \quad (6)$$

where $\hat{\omega}$ and $\hat{a}$ are extracted from the IMU measurements of angular velocity and acceleration with the biases removed:

$$\hat{\omega} = \omega_m - \hat{b}_g, \qquad \hat{a} = a_m - \hat{b}_a \quad (7,\,8)$$

and

$$\Omega(\hat{\omega}) = \begin{bmatrix} -\lfloor \hat{\omega} \times \rfloor & \hat{\omega} \\ -\hat{\omega}^\top & 0 \end{bmatrix} \quad (9)$$

where $\lfloor \hat{\omega} \times \rfloor$ is the antisymmetric (skew-symmetric) matrix of $\hat{\omega}$, and $C(\cdot)$ in Equation (6) plays the role of a quaternion-to-rotation-matrix converter. According to Equation (6), the linearized continuous-time dynamics of the IMU state error can be written as:

$$\dot{\tilde{x}}_I = F\,\tilde{x}_I + G\,n_I \quad (10)$$

Here, $n_I = [\, n_g^\top \;\; n_{wg}^\top \;\; n_a^\top \;\; n_{wa}^\top \,]^\top$, where $n_g$ and $n_a$ are representations of the Gaussian measurement noise of the gyroscope and accelerometer, and $n_{wg}$ and $n_{wa}$ represent the random-walk rates of the gyroscope and accelerometer measurement biases.

To propagate the state uncertainty, a discrete-time state-transition matrix extracted from Equation (10) and a discrete-time noise covariance matrix must first be calculated:

$$\Phi_k = \Phi(t_{k+1}, t_k) = \exp\!\left(\int_{t_k}^{t_{k+1}} F(\tau)\, d\tau\right), \qquad Q_k = \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)\, G\, Q\, G^\top\, \Phi(t_{k+1}, \tau)^\top\, d\tau \quad (11,\,12)$$

where $Q = E[\, n_I n_I^\top \,]$ is defined as the covariance of the continuous-time noise of the system. As a result, the propagated covariance of the inertial measurement unit states can be written as:

$$P_{II,k+1|k} = \Phi_k\, P_{II,k|k}\, \Phi_k^\top + Q_k \quad (13)$$

After partitioning the covariance of the overall state as:

$$P_{k|k} = \begin{bmatrix} P_{II,k|k} & P_{IC,k|k} \\ P_{IC,k|k}^\top & P_{CC,k|k} \end{bmatrix} \quad (14)$$

the propagation of the uncertainty can be written as:

$$P_{k+1|k} = \begin{bmatrix} P_{II,k+1|k} & \Phi_k\, P_{IC,k|k} \\ P_{IC,k|k}^\top\, \Phi_k^\top & P_{CC,k|k} \end{bmatrix} \quad (15)$$

When a new image arrives, the state must be augmented with a new camera state computed from the newest IMU state:

$${}^{C}_{G}\hat{q} = {}^I_C q \otimes {}^I_G\hat{q}, \qquad {}^G\hat{p}_C = {}^G\hat{p}_I + C({}^I_G\hat{q})^\top\, {}^I p_C \quad (16,\,17)$$

Moreover, the augmented covariance matrix is:

$$P_{k|k} \leftarrow \begin{bmatrix} I \\ J \end{bmatrix} P_{k|k} \begin{bmatrix} I \\ J \end{bmatrix}^\top \quad (18)$$

where $J$ is the Jacobian of the new camera state with respect to the current state. Consider a scenario where a feature $f_j$ is observed by the stereo camera; the stereo camera package has two single camera cells, whose frames are denoted $\{C_1\}$ and $\{C_2\}$ for the left-side and right-side cells, respectively. The stereo measurement, $z_i^j$, is represented as:

$$z_i^j = \begin{bmatrix} u_{i,1} \\ v_{i,1} \\ u_{i,2} \\ v_{i,2} \end{bmatrix} = \begin{bmatrix} X_{j,1}/Z_{j,1} \\ Y_{j,1}/Z_{j,1} \\ X_{j,2}/Z_{j,2} \\ Y_{j,2}/Z_{j,2} \end{bmatrix} \quad (19)$$

In Equation (19), $(X_{j,k}, Y_{j,k}, Z_{j,k})$, $k \in \{1, 2\}$, is the location of the feature $f_j$ in the left-side and right-side sub-camera frames, related to the camera pose as $^{C_k}p_j = C({}^{C_k}_G q)\,({}^G p_j - {}^G p_{C_k})$. Linearizing the measurement about the current estimates gives the residual:

$$r_i^j = z_i^j - \hat{z}_i^j \approx H_{x,i}^j\, \tilde{x} + H_{f,i}^j\, {}^G\tilde{p}_j + n_i^j \quad (20)$$

where $n_i^j$ is the measurement noise, and $H_{x,i}^j$ and $H_{f,i}^j$ are the measurement Jacobians. After collecting multiple sampled observations of the same feature $f_j$, we have the stacked residual:

$$r^j \approx H_x^j\, \tilde{x} + H_f^j\, {}^G\tilde{p}_j + n^j \quad (21)$$

In order to ensure that the uncertainty of ${}^G\tilde{p}_j$ does not have any effect on the residual, the residual in Equation (21) is projected onto the left null space, $V$, of $H_f^j$:

$$r_o^j = V^\top r^j \approx V^\top H_x^j\, \tilde{x} + V^\top n^j = H_o^j\, \tilde{x} + n_o^j \quad (22)$$

Taking into account Equation (22), the update step of the EKF can be computed. A naive implementation of EKF-based VIO produces inconsistent heading information. This problem originates from the difference between the linearization points of the process and measurement steps at the same time instant. In order to maintain the consistency of the filter, a variety of different methods have been used in previous studies.
Some of these methods have been presented in FEJ-EKF [43], OC-EKF [44], and a robocentric mapping filter [45]. In this study, we employed OC-EKF.
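The null-space projection of Equation (22) can be sketched numerically: projecting the stacked residual onto the left null space of the feature Jacobian removes the feature-position error from the update. A minimal sketch with toy dimensions (the matrices below are random illustrations, not values from the filter):

```python
import numpy as np

def project_to_nullspace(r, H_x, H_f):
    """Project the residual onto the left null space of the feature
    Jacobian H_f, so the feature-position error drops out (cf. Eq. (22))."""
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    V = U[:, rank:]              # columns span the left null space of H_f
    return V.T @ r, V.T @ H_x    # reduced residual and state Jacobian

# Toy example: 4 stereo measurement rows, 6-dim state error, 3-dim feature error.
rng = np.random.default_rng(0)
H_x = rng.standard_normal((4, 6))
H_f = rng.standard_normal((4, 3))
x_err = np.ones(6)
r = H_x @ x_err + H_f @ np.array([0.3, -0.2, 0.1])  # feature error present
r_o, H_o = project_to_nullspace(r, H_x, H_f)
# After projection the feature-error contribution is gone:
print(np.allclose(r_o, H_o @ x_err))  # True
```

Because $V^\top H_f = 0$, the reduced residual depends only on the state error, which is what allows the MSCKF to update without keeping feature positions in the state vector.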

Wheel Odometry Algorithm
The wheel odometry works based on data from encoders coupled to the rear wheels. Each encoder generates 100 sets of pulses per tire revolution (the encoder resolution), and one revolution of the tire makes one revolution of the encoder (one-to-one coupling). Each incremental rotary encoder provides at least two output signals (usually A and B) in the form of digital square waves. The frequency of the signal represents the shaft rotation speed, while the pulse count gives the covered distance. The encoder output signals are sent to a processor board, which samples the encoder signals every 5 milliseconds. The vehicle kinematic state is defined by the vehicle position (X, Y) in the world coordinate frame (with an index point) and the vehicle heading Ψ. Whenever the vehicle turns, it follows a circular path (see Figure 2); the integration interval is short enough that the curvature of the path can be considered constant. In Figure 2, the initial and final wheel positions are marked. The lengths of the arcs traveled by the right wheel, $\Delta s_r$, and the left wheel, $\Delta s_l$, are calculated from the encoder pulse increments $(\Delta N_r, \Delta N_l)$, the radii of the wheels $(WR_r, WR_l)$, and the resolution of the encoder, $E_{res}$, as follows:

$$\Delta s_r = \frac{2\pi\, WR_r\, \Delta N_r}{E_{res}}, \qquad \Delta s_l = \frac{2\pi\, WR_l\, \Delta N_l}{E_{res}}$$

Considering $b$ as the distance between the wheels, the radii of curvature for each wheel and for the center can be calculated using the following relations:

$$r_l = \frac{b\,\Delta s_l}{\Delta s_r - \Delta s_l}, \qquad r_r = r_l + b, \qquad r_c = r_l + \frac{b}{2}$$

Using the above parameters, the change of the heading angle, $\Delta\Psi$, and the position increments, $\Delta X$ and $\Delta Y$, can be calculated as:

$$\Delta\Psi = \frac{\Delta s_r - \Delta s_l}{b}, \qquad \Delta X = \frac{\Delta s_r + \Delta s_l}{2}\cos\!\left(\Psi + \frac{\Delta\Psi}{2}\right), \qquad \Delta Y = \frac{\Delta s_r + \Delta s_l}{2}\sin\!\left(\Psi + \frac{\Delta\Psi}{2}\right)$$

Finally, the position and orientation of the vehicle are updated as follows:

$$X \leftarrow X + \Delta X, \qquad Y \leftarrow Y + \Delta Y, \qquad \Psi \leftarrow \Psi + \Delta\Psi$$

Table 1 depicts the vehicle parameters (of the vehicle designed in this study) used in the wheel odometry model.
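The incremental pose update described above can be sketched in a few lines (the wheel radius, track width, and encoder resolution below are illustrative placeholders, not the values from Table 1):

```python
import math

def wheel_odometry_step(x, y, psi, dn_r, dn_l, wheel_radius=0.25,
                        track=1.1, ticks_per_rev=100):
    """One odometry update from encoder tick increments dn_r, dn_l.
    Parameter values are illustrative placeholders, not those of Table 1."""
    ds_r = 2.0 * math.pi * wheel_radius * dn_r / ticks_per_rev  # right arc
    ds_l = 2.0 * math.pi * wheel_radius * dn_l / ticks_per_rev  # left arc
    ds = 0.5 * (ds_r + ds_l)      # arc length of the vehicle center
    dpsi = (ds_r - ds_l) / track  # heading change
    # Integrate along the (assumed constant-curvature) arc.
    x += ds * math.cos(psi + 0.5 * dpsi)
    y += ds * math.sin(psi + 0.5 * dpsi)
    return x, y, psi + dpsi

# Straight-line check: equal tick counts leave the heading unchanged.
x, y, psi = wheel_odometry_step(0.0, 0.0, 0.0, dn_r=100, dn_l=100)
print(x, y, psi)  # one wheel revolution forward: (2*pi*0.25, 0.0, 0.0)
```

Calling this function every 5 ms with the latest tick increments reproduces the dead-reckoned pose that is later fused with the VIO output.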

Sensor Fusion
In order to fuse the output data from the wheel encoder units and the visual-inertial odometry algorithm, the robot_localization package [46] in the Robot Operating System (ROS) is employed. The package contains two types of estimators, namely an extended Kalman filter (EKF) and an unscented Kalman filter (UKF). The extended Kalman filter in the robot_localization package is provided as a node named ekf_localization_node. The node implements an extended Kalman filter with an internal omnidirectional motion model, which is used to project the state forward in time and to correct the projected estimate using the data from the visual-inertial odometry and the wheel odometry together. The EKF imposes lower computational costs on the processor than the UKF. The sensor fusion node estimates the six-degree-of-freedom position and velocity of the vehicle.
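The predict-correct cycle that ekf_localization_node performs can be illustrated with a deliberately simplified scalar Kalman update that blends a predicted position with measurements from two sources of different quality (the noise variances are illustrative, not tuned values from this study; the real node fuses a full 6-DOF state):

```python
def fuse(estimate, variance, measurement, meas_variance):
    """Scalar Kalman update: blend a prediction with one measurement."""
    k = variance / (variance + meas_variance)  # Kalman gain
    new_est = estimate + k * (measurement - estimate)
    new_var = (1.0 - k) * variance
    return new_est, new_var

# Predicted x-position, then sequential updates from the two sources.
x, p = 10.0, 4.0
x, p = fuse(x, p, 10.6, 1.0)  # VIO position, lower noise
x, p = fuse(x, p, 9.8, 2.0)   # wheel-odometry position, higher noise
print(round(x, 3), round(p, 3))
```

The estimate is pulled most strongly toward the lower-noise source, and the variance shrinks with every fused measurement, which is the essential behavior the EKF node provides at full state dimension.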

Kinematic Bicycle Model
The vehicle bicycle model, which consists of a rigid body and non-deforming wheels, is shown in Figure 3. The vehicle is assumed to move on the surface without slipping, with pure rolling contact between the tires and the surface [47]. This model is employed in the NMPC to predict the future state of the system.
The center of the rear axle is chosen as the reference point. Considering the instantaneous center of rotation (ICR), the kinematic bicycle model states are formulated as follows:

$$\dot{X} = v\cos\Psi \quad (37)$$
$$\dot{Y} = v\sin\Psi \quad (38)$$
$$\dot{\Psi} = \frac{v}{L}\tan\delta \quad (39)$$

Here, the system states (position and heading with respect to the inertial frame {O, X, Y}) are represented as $z = [X, Y, \Psi]^\top$, and the vector of control inputs is defined as $u = [v, \delta]^\top$, where $v$ and $\delta$ are the linear velocity and steering angle, respectively, and $L$ is the wheelbase. Since the NMPC algorithm employed in this study is not computed in continuous time, the kinematic model must first be discretized. Considering the sampling period $dt$, a data-sampling instant $t$, and using the Euler approximation on (37)-(39), the discrete-time model can be formulated as follows:

$$z(t+1) = z(t) + dt \cdot f\big(z(t), u(t)\big) \quad (40)$$

where:

$$f\big(z(t), u(t)\big) = \begin{bmatrix} v(t)\cos\Psi(t) \\ v(t)\sin\Psi(t) \\ \dfrac{v(t)}{L}\tan\delta(t) \end{bmatrix}$$
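The Euler-discretized model above can be sketched directly (the wheelbase and sampling period are illustrative placeholders, not measured parameters of the designed vehicle):

```python
import math

def bicycle_step(state, v, delta, dt=0.05, wheelbase=1.6):
    """Euler-discretized kinematic bicycle model, rear-axle reference point.
    The wheelbase and dt values are illustrative, not measured parameters."""
    x, y, psi = state
    x += dt * v * math.cos(psi)
    y += dt * v * math.sin(psi)
    psi += dt * v * math.tan(delta) / wheelbase
    return (x, y, psi)

# Constant steering at 1 m/s traces an arc of radius L / tan(delta).
state = (0.0, 0.0, 0.0)
for _ in range(200):
    state = bicycle_step(state, v=1.0, delta=0.2)
print(tuple(round(s, 3) for s in state))
```

Note that nothing in the model divides by the velocity, which is why it remains well-behaved in stop-and-go operation, unlike the dynamic model with a linear tire model.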

The NMPC Algorithm
Model predictive control (MPC) is an optimal control algorithm that uses a model of the plant to find a series of optimal control signals by minimizing a cost function. At each sampling iteration, the plant model is used to predict the future behavior of the system over the prediction horizon. Taking these predictions into account, an objective function is minimized with respect to the future sequence of inputs. A quadratic function of states and control inputs can be used to define the cost function in (44) as:

$$J(X, U) = \sum_{m=1}^{N_p} \left\| x_{n+m|n} - x_{ref,\,n+m} \right\|_Q^2 + \sum_{m=0}^{N_p-1} \left\| u_{n+m|n} \right\|_R^2 \quad (44)$$

where $x_{m|n}$ denotes the value of $x$ at the time instant $m$ as predicted at the time instant $n$. The vectors of system states and control inputs are denoted $X$ and $U$, respectively. The weighting matrices $Q$ and $R$ penalize the state error and the control effort, respectively. The prediction horizon is represented as $N_p$, which is an important factor in defining the prediction horizon duration $T$:

$$T = N_p \cdot dt \quad (45)$$

The second part of the cost function (44) minimizes the control effort. The optimization problem is defined so as to find a proper series of control inputs and states:

$$\min_{X,\,U}\; J(X, U) \quad (46)$$
$$\text{s.t.}\quad x_{n|n} = X_0 \quad (47)$$
$$x_{k+1|n} = f\big(x_{k|n}, u_{k|n}\big) \quad (48)$$

where the initial value of the states, $X_0$, corresponds to the numeric value of the states measured at the current time instant, and the prediction model used in the optimization is defined in (48). In addition, bounds can be imposed on the magnitudes of the control variables and states:

$$u_{min} \le u_{k|n} \le u_{max} \quad (49)$$
$$x_{min} \le x_{k|n} \le x_{max} \quad (50)$$

It has been shown that only the first few control predictions are effective in stabilizing the system. Hence, in the majority of cases, another parameter called the control horizon, $N_c$, is defined, which is the number of optimized control moves at each control interval. It falls between one and the prediction horizon.
The final goal of optimizing the objective function is to reduce the error while the states approach the desired point. Therefore, (51) substitutes for (44) as follows:

$$J = \sum_{i=1}^{N_p} \left\| x_{i|n} - r_i \right\|_Q^2 + \sum_{i=0}^{N_c-1} \left\| u_{i|n} \right\|_R^2 + \sum_{i=0}^{N_c-1} \left\| \Delta u_{i|n} \right\|_S^2 \quad (51)$$

where the predicted state vector is represented by $x_{i|n}$ and $r_i$ is the desired set-point vector. In addition, the weighting matrices $Q$, $R$, and $S$ are employed to penalize the state tracking error, the control effort, and the rate of change of the control signal, respectively.
In the modified cost function, the third summation reduces stress on the actuators by limiting the rate of change of the inputs. Constraints (52)-(54) are imposed on the control inputs and states as follows:

$$v_{min} \le v_{k|n} \le v_{max} \quad (52)$$
$$\delta_{min} \le \delta_{k|n} \le \delta_{max} \quad (53)$$
$$\left| \dot{\delta}_{k|n} \right| \le \dot{\delta}_{max} \quad (54)$$

where $\dot{\delta}$ represents the rate of change of the steering angle. The simplified differentially flat bicycle model is discretized using the direct multiple-shooting method and is used as the prediction plant to decrease the computational cost of the nonlinear model predictive control. This approach exploits the long prediction horizon of NMPC, which allows safe path tracking while approaching a user-specified goal destination. The task is accomplished by controlling the longitudinal velocity and the steering angle.
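The receding-horizon structure can be sketched with a deliberately crude stand-in for the NLP solver: enumerate a coarse grid of input sequences, evaluate the quadratic cost over the horizon with the kinematic model, and apply only the first move of the best sequence. This is only an illustration of the principle; the horizon, grid, weights, and model parameters are placeholders, and a real implementation would use a proper NLP solver such as IPOPT rather than grid search:

```python
import math
from itertools import product

def step_model(x, y, psi, v, delta, dt=0.1, L=1.6):
    """Discrete kinematic bicycle prediction step (placeholder parameters)."""
    return (x + dt * v * math.cos(psi),
            y + dt * v * math.sin(psi),
            psi + dt * v * math.tan(delta) / L)

def nmpc_step(state, goal, horizon=5):
    """Pick the input sequence minimizing a quadratic tracking cost over a
    coarse input grid, then apply only its first move (receding horizon)."""
    speeds = [0.0, 0.5, 1.0]   # input constraint: 0 <= v <= 1 m/s
    steers = [-0.3, 0.0, 0.3]  # input constraint: |delta| <= 0.3 rad
    best_cost, best_first = float("inf"), None
    for seq in product(product(speeds, steers), repeat=horizon):
        x, y, psi = state
        cost = 0.0
        for v, delta in seq:
            x, y, psi = step_model(x, y, psi, v, delta)
            cost += (x - goal[0]) ** 2 + (y - goal[1]) ** 2  # Q-weighted error
            cost += 0.01 * (v ** 2 + delta ** 2)             # R-weighted effort
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

print(nmpc_step((0.0, 0.0, 0.0), goal=(2.0, 0.0)))  # drives straight: (1.0, 0.0)
```

At the next sampling instant the measured state replaces the prediction and the optimization is repeated, which is the closed-loop feedback mechanism that distinguishes MPC from open-loop trajectory optimization.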

Hardware Architecture and Interfaces
The system architecture and interfaces are shown in Figure 4. The system comprises two main parts, namely the high-level part and the low-level part (containing a low-level controller). The main processor is programmed to run the high-level controller algorithm, the visual-inertial odometry, the sensor fusion, and the serial-port communication with the low-level controller. The control commands are sent to the low-level controller, which controls the actuators, including the speed controller, brushless DC (BLDC) motor, brake motor, and electronic power steering (EPS). The main processor in this study is a Jetson AGX Xavier (NVIDIA Corporation, Santa Clara County, CA, USA), an embedded system-on-module (SOM) from the NVIDIA AGX Systems family. It is equipped with an octa-core NVIDIA Carmel ARMv8.2 CPU, 16 GB of 256-bit LPDDR4x memory with 137 GB/s of bandwidth, and dedicated hardware for parallel processing, machine learning, and image processing. The processor runs the ROS, on which the control algorithm, visual-inertial odometry algorithm, and sensor fusion algorithm are launched. Table 2 shows the specifications of the main processor. The main processor and its connection structure with the other parts are shown in Figure 5. The incremental rotary encoders coupled with the wheel shafts provide data in the form of two square waves.
These raw data (from the encoders) are sent to an Arduino Uno embedded processor board (the Arduino Uno is an open-source microcontroller board built around the 8-bit AVR Microchip ATmega328P and developed by Arduino). The processor board processes the square-wave signals and generates the position and heading of the ground vehicle. An ELLIPSE2-N from SBG Systems is employed as the IMU. It contains an accelerometer with a velocity random walk of 100 μg/√Hz on the x and y axes and 150 μg/√Hz on the z axis; the accelerometer bandwidth is 250 Hz, while its sampling rate is 3 kHz. The sensor also contains a gyroscope with an angular random walk of 0.16°/√h, a bandwidth of 133 Hz, and a sampling rate of up to 10 kHz. It should be noted that the device is equipped with other aiding sensors, but in this study we do not use them. The device publishes inertial data at an update rate of up to 200 Hz. The IMU specifications are shown in Table 3. The inertial measurement data are published via a topic in the ROS with the update rate set to 100 Hz. A ZED stereo camera (StereoLabs, San Francisco, CA, USA) with a resolution of 2 × (1920 × 1080) at 30 fps is employed to capture the image stream. Its maximum field of view is 90° horizontal and 60° vertical. The camera image stream is received through its dedicated package in the ROS. Table 4 shows the specifications of the stereo camera. The low-level controller comprises two processor boards that are programmed to receive commands and control the actuators. The actuators include the BLDC motor (with a coupled gearbox), the brake motor (responsible for controlling the flow of hydraulic oil from the master brake pistons to the caliper pistons), and the electronic power steering (responsible for changing the steering angle). Figure 6 shows the contents of the low-level control box. The platform control unit (PCU) is responsible for receiving control commands from the high-level controller via the RS232 serial communication protocol.
It also receives commands from a remote transceiver. The processor module establishes a Controller Area Network bus (CAN bus) by which it communicates with the automatic speed control module (ASM), the brake motor driver, and the electronic power steering (EPS). The BLDC motor is controlled by the ASM and the PCU, while the brake and EPS are controlled through the PCU module. In Figure 6, RT stands for the remote transceiver. Solid-state relays (SSRs) play the role of electronically controlled switches. The final system configuration for the electric vehicle is depicted in Figure 7. The electric vehicle employs a 3 kW, 3000 r/min BLDC motor that generates the needed torque. The rotational force is transferred to the wheels via a gearbox and the vehicle differential. The EPS communicates with the low-level controller through the CAN bus and is responsible for changing the steering angle. The brake motor driver also receives commands via the CAN bus. The power supply box is equipped with several DC-to-DC converters that provide the proper voltage and current for each device. Figure 8 depicts the designed autonomous electric vehicle prototype.

Simulation and Real-Time Experimental Results from the Employed Localization Algorithm
In order to evaluate the performance of the VIO algorithm, the Malaga dataset [48] was used. In the second step, a real-time experiment with our designed electric vehicle prototype was performed to evaluate the performance of the integrated localization algorithm (VIO plus wheel odometry). The Malaga dataset was collected in different urban scenarios with a car equipped with a variety of sensors, including a stereo camera, an IMU, and laser sensors, and it provides different driving scenarios. In this study, the scenario named "short avenue loop closure" was used. Figure 9a shows the employed driving scenario. The initial scale of the estimated trajectory was intentionally chosen to be larger than that of the true trajectory, because we wanted to evaluate the ability of the filter to estimate scale; in this test, the ratio was set to three. Figure 9b shows the performance of the employed visual-inertial odometry in simulation compared with the ground truth from the GPS in the dataset. As can be seen in Figure 9, although the scale of the VO trajectory was set to be three times larger than the ground-truth trajectory, the EKF was able to estimate the scale and provided a correct estimate for the VIO. In order to evaluate the performance of the proposed localization algorithm, an experiment using our designed electric vehicle was conducted. Figure 10 shows the result of the experiment, which was conducted on the Kunsan National University campus. From the result of the real-time experiment, it can be seen that the combination of the VIO and wheel odometry provides a far better estimate than either the VO or the VIO alone.

Simulation and Evaluation of the Employed Control Algorithm
The performance of the controller in the trajectory-tracking maneuver was evaluated by defining a reference trajectory through Equations (55)-(57), with scaling factors of 10 and 15 on the trajectory coordinates (55, 56) and a heading at the starting point of 1.36 rad (57). The vectors of x and y coordinates in the trajectory plane are denoted X and Y. Simulations were conducted using the MATLAB and GAZEBO simulators. The CasADi package [49] was employed to solve the optimization problem. The package provides an open-source software framework for numerical optimization and is a general-purpose tool for modeling and solving optimization problems. Its core is written in C++, but the creators provide interfaces for Python, MATLAB, and Octave. The package is widely used for academic purposes as well as in industrial applications in fields including optimal control, robotics control, and the aerospace industry. In this study, the optimizer used the IPOPT class of the package; a primal-dual interior-point method was applied, which uses a line search based on the filter method [50]. The software simulation architecture for evaluating the controller is shown in Figure 11. The trajectory-tracking results are shown in Figure 12a,b. The control inputs, including the velocity and steering angle, are shown in Figure 13a,b, respectively. The lateral and longitudinal trajectory-tracking performance (with respect to time) is shown in Figure 14. The simulation results in Figure 12 show that the NMPC is able to control the car properly while tracking the trajectory. The simulation results also show that the NMPC optimized the control effort during the simulation. In addition, the controller managed to steer the car toward the desired heading at the destination.

Trajectory Tracking
The proposed control and state estimation (vehicle position and heading angle) algorithms were implemented on our designed electric ground vehicle. A sinusoidal trajectory was defined by Equations (58)-(60), in which the longitudinal coordinate advances in increments of 1/10 (58), the initial heading is 0 (59), and the lateral coordinate follows a sine term with a frequency coefficient of 0.1 (60). We conducted a real-time trajectory-tracking maneuver using our designed electric vehicle and the proposed system structure. Figure 15a,b show the trajectory tracking of the vehicle and the vehicle heading angle, respectively. Figure 16a shows the vehicle linear velocity during the maneuver, while Figure 16b depicts the vehicle trajectory tracking in the X coordinate (with respect to time). It must be noted that the time shift in Figure 16b is the effect of the response delay of the system.

Path Following
The performance of the vehicle in path tracking was evaluated by conducting an experiment on the Kunsan National University campus. Figure 17 shows the experimental result. The controller is fed with waypoint data during the driving maneuver. The control inputs are depicted in Figure 18a, and Figure 18b shows the vehicle's deviation from the desired path. The controller's brake commands and the changes in the vehicle steering angle are shown in Figure 19a,b, respectively. The vehicle velocity and heading angle are shown in Figure 20.

Conclusions
In this study, we aimed to develop an autonomous electric vehicle for path tracking. We discussed both the hardware and software design and implementation. Control and navigation algorithms were developed and implemented. The vehicle was able to perform path-tracking maneuvers in environments in which the positioning signals from the Global Navigation Satellite System (GNSS) are not accessible. The proposed approach used a constrained input-output nonlinear model predictive controller (NMPC) for path tracking. The implemented localization algorithm provided accurate position estimation in GNSS-denied environments.
The performance of the algorithms was evaluated using MATLAB and GAZEBO as simulators. In addition, the capability of the system was evaluated in real time by performing experiments with the designed vehicle. The simulation results and the real-time experiments confirm the capability of the designed vehicle to perform challenging path tracking under GNSS-denied environments.