Homogeneous Sensor Fusion Optimization for Low-Cost Inertial Sensors

This article deals with sensor fusion and real-time calibration in a homogeneous inertial sensor array. The proposed method both estimates the sensors' calibration constants (i.e., gain and bias) in real time and automatically suppresses degraded sensors while preserving the overall precision of the estimate. Each sensor's weight is adaptively adjusted according to its RMSE with respect to the weighted average of all sensors. The estimated angular velocity was compared with a reference (ground-truth) value obtained using a tactical-grade fiber-optic gyroscope. We experimented with low-cost MEMS gyroscopes, but the proposed method can be applied to virtually any sensor array.


Introduction
MEMS inertial sensors (i.e., gyroscopes and accelerometers) enable the development of low-cost robotic and other mobile applications where high-end fiber-optic gyroscopes (FOGs) are not affordable. The greatest concerns are the bias stability and the noise of MEMS sensors, which are far more significant than those of a FOG. Several approaches have been developed to compensate for the miscalibration and noise of MEMS sensors. We may divide them into two categories: calibration and redundancy.

Calibration
The aim is to calculate the transformation function between the sensor output (raw data) and the best estimate of the measured variable (e.g., angular velocity, in the gyroscope case). The calibration is performed sample by sample and may consider the sensor's gain, bias, misalignment, and optionally higher-order nonlinearity. Especially in the case of MEMS sensors, the calibration parameters change after the sensor restarts [1] and also depend significantly on temperature [2]. There are two types of calibration applicable to MEMS gyroscopes:
• Offline-the sensor is rotated at several predefined angular velocities, and the measured points are then fitted with a calibration curve. All angular velocities must be measured at different temperatures to compensate for the thermal drift. The most basic version is the start-up bias calibration, which computes the gyroscope's bias (offset) as the mean output in the steady state during the start-up phase, neglecting the Earth's rotation. The authors of [3] applied a convolutional neural network (CNN) to obtain the calibration model. When a simple linear calibration model is used, its coefficients can be calculated using the least-squares method. The external stimuli may, in exceptional cases, be replaced with internal ones: e.g., for honeycomb disk resonator gyroscopes (HDRG), the authors of [4] analyzed the third-order harmonic component of the sensor signal to estimate the scale factor of the closed-loop sensor, as well as to compensate for its thermal drift. The authors of [5] improved the precision of the calibration procedure by detecting outliers in the measured calibration data using the random sample consensus algorithm (RANSAC), the Mahalanobis distance, and the median absolute deviation. A significant source of systematic errors is the misalignment of the sensor; the authors of [6] applied correlation analysis and the Kalman filter to estimate the installation errors.
• Online-the sensor readings are compared with the readings of other sensors (e.g., in a sensor array [7][8][9]) and/or with previous readings of the same sensor (see, e.g., [10]). Such methods overcome the static nature of offline calibration, compensating for both long-term and short-term drift in the parameters using the Kalman filter or an algebraic estimator combined with a finite impulse response (FIR) filter [11][12][13]. An essential advantage of such methods is that they avoid the need for multipoint offline thermal calibration, since they adjust the calibration constants in real time.
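As a concrete illustration of the offline case, a linear calibration reduces to an ordinary least-squares line fit between the turntable's reference angular velocities and the raw sensor readings. The sketch below is illustrative (the gain, bias, and noise values are assumed, not taken from any cited work):

```python
import numpy as np

def fit_linear_calibration(omega_ref, omega_raw):
    """Least-squares fit of gain G and bias B in omega_raw = G*omega_ref + B.

    Returns the inverse mapping (1/G, -B/G) used to calibrate new raw samples.
    """
    G, B = np.polyfit(omega_ref, omega_raw, deg=1)  # slope = gain, intercept = bias
    return 1.0 / G, -B / G                          # omega_cal = omega_raw/G - B/G

# synthetic example: assumed true gain 1.02, bias 0.5 deg/s, small measurement noise
rng = np.random.default_rng(0)
ref = np.linspace(-100.0, 100.0, 21)                # turntable set points, deg/s
raw = 1.02 * ref + 0.5 + rng.normal(0, 0.05, ref.size)
inv_gain, offset = fit_linear_calibration(ref, raw)
calibrated = raw * inv_gain + offset                # should closely match ref
```

In a full offline procedure, the same fit would be repeated at several temperatures to build the thermal compensation table mentioned above.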

Redundancy
The combination (fusion) of information obtained from multiple sensors (possibly of different types) has the potential to be more precise, robust, and reliable than a single sensor [14]. In a standard configuration, MEMS gyroscopes are coupled with MEMS accelerometers (often in the same chip). The accelerometer estimates the vertical direction by measuring the gravitational acceleration. The accelerometer is also inherently affected by the linear acceleration of the object itself relative to the inertial frame of reference. Still, in many applications, the long-term mean of the linear acceleration can be considered negligible relative to the gravitational acceleration. The estimated vertical direction from the accelerometer can be combined with the gyroscope readings using various sensor fusion methods, improving the precision of the estimated angular velocity and Euler angles (see, e.g., [8,9,15-19]). Using multiple sensors also improves the safety of the whole system (see, e.g., [20,21]). To estimate the position of the object, the accelerometer and gyroscope readings can be combined with odometric data [22], a microwave range finder [23], or a laser scanner [24]. A variant of the nonlinear Kalman filter (e.g., the extended Kalman filter, EKF) is usually applied as the maximum likelihood estimator. An additional advantage of sensor fusion is its ability to estimate some parameters of the environment. Researchers in [25] used multiple independent models to enhance the robustness of an INS-based navigation system combined with a Doppler velocity log (DVL). The EKF requires preset covariance matrices for the measurement. In real-life scenarios, the noise characteristics of the sensors are not constant; hence, they need to be estimated in real time. One commonly used technique is applying an LSTM neural network that learns to estimate those changing parameters from past sensor readings.
For example, the authors of [11] developed a self-learning square-root cubature Kalman filter based on LSTM networks for GPS/INS navigation systems. However, estimators based on neural networks are, in general, poorly explainable and hence cannot be used in safety-related applications. The fusion of a homogeneous sensor array can be implemented as a linear minimum variance (LVM) estimation (see, e.g., [26]). Such an approach requires one to know or precalculate the error covariance matrix, which is the same drawback the EKF has.

Calibration in the Redundant Sensor Array
This article focuses on combining the approaches mentioned above, especially in cases where multiple inertial sensors of the same type are used. Theoretically, when the systematic errors of the individual sensors are eliminated, the only remaining source of error is random (unpredictable) noise. Researchers in [8] investigated the precision of a MEMS array comprising 16 IMUs when the systematic errors were suppressed by offline calibration. It is more desirable to calculate each sensor's deviation from the overall estimate of the measured variable. The authors of [27] proposed an online calibration method for an array of 32 IMUs (later reduced to 8 IMUs due to the low communication bandwidth available) using a maximum likelihood estimator, compensating for bias, gain, and misalignment errors. The reference value of the angular velocity was estimated from the rotation of the measured gravitational acceleration vector. Such a method is applicable only when vibrations and linear acceleration are absent (e.g., when rotating the IMU array by hand, as suggested by the authors).
An essential part of evaluating the algorithm's precision is the reference measurement system. The standard approach is to use a turntable. Researchers in [15] used a military-grade AHRS (Attitude and Heading Reference System), the STIM300, to provide the reference value of the angular velocity, compared with the values measured by a redundant array of 6 IMU units in a cubic constellation. Our first experiments used a rotational platform driven by a stepper motor, but it caused irreducible rotational-mode vibrations, rendering the reference value unusable. Later, we used the navigation-grade IMU SPAN-CPT to measure the ground-truth angular velocity.

Random Errors
Each sensor generates noisy readings. In the ideal case, we may assume that the noise is superposed on the true value. For the gyroscope, the relation is:

ω = ω_true + ν, (1)

where ω_true is the true angular velocity, ω is the raw analog value at the output of the MEMS sensor unit, and ν is noise from a normal (Gaussian) distribution:

ν ∼ N(0, σ²). (2)

With only one sensor, mechanical vibrations of a rotational character may also be considered noise.

Systematic Errors
The raw value of the gyroscope is truncated to the sensor's full-scale range ±ω_max and quantized to obtain the digital value. The quantization rounds the analog value towards the nearest digital level and is performed by an internal A/D converter built into many commercially available MEMS sensors. The digital raw value converted to SI units (International System of Units) can be modeled as follows:

ω_raw = ∆ω · round(ω_truncated / ∆ω), (3)

where ∆ω is the quantization step (assuming uniform quantization), ω_truncated is the analog raw value (true angular velocity + random noise) truncated to the sensor's full-scale range, and round(x) is the rounding function returning the nearest integer to x, further denoted ⌊x⌉. All variables in Equation (3) are in rad·s⁻¹. The quantization step is:

∆ω = 2ω_max / 2^r, (4)

where r is the sensor's resolution in bits (e.g., r = 16 bits for the MPU9250 sensor). The histogram of the real gyroscope noise is shown in Figure 1.

Since the gyroscope is a directional sensor (it measures angular velocity around one or multiple principal perpendicular axes), the misalignment of the sensor is a source of systematic error:

ω_measured = ω_true · cos ε, (5)

where ε is the misalignment angle. In the case of three-dimensional movements, the misalignment causes cross-talk between the axes, which is expressed by the rotation matrix R_align (Equation (6)). Another source of systematic errors is the miscalibration of the sensor, i.e., the deviation of its gain and/or offset:

ω = G·ω_true + B, (7)

where G is the gain and B is the bias. The gain and bias are generally unpredictable and vary with time and temperature.

The MEMS gyroscope measures the Coriolis force acting on a periodically oscillating mass. Vibrations of the measured object perpendicular to the sensing axis of the gyroscope may cause additional periodic components in the measured signal due to resonance. Such a disturbance can be modeled using a discrete Fourier series: a sum of harmonic (sine) signals with frequencies from 0 to the Nyquist frequency F_s/2 = 1/(2T_s). The smallest frequency change detectable by a sensor with a constant sampling frequency F_s is F_s/N, where N is the count of samples (window size). The model of the periodic noise is then:

ω_per[k] = Σ_{m=1}^{N/2} A_m · sin(2π·m·(F_s/N)·k·T_s + ϕ_m), (8)

where A_m is the amplitude of the measured oscillations (not to be confused with the amplitude of the vibrations themselves), T_s is the sampling period of the sensor, and ϕ_m is the phase of the measured oscillations. Altogether, the measured value at the output of the single-axis gyroscope is:

ω[k] = G·cos ε·ω_true[k] + B + ν[k] + Σ_{m=1}^{N/2} A_m · sin(2π·m·(F_s/N)·k·T_s + ϕ_m). (9)

If we use a sensor with a sufficient full-scale range ±ω_max and a very small quantization step ∆ω, the model of the sensor can be considered linear.
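The complete single-axis model (quantization, full-scale truncation, misalignment, gain/bias miscalibration, periodic disturbance, and white noise) can be sketched numerically. All parameter values below are illustrative assumptions, not the characteristics of any particular sensor:

```python
import numpy as np

def sensor_model(omega_true, G=1.03, B=0.2, eps=np.deg2rad(1.0),
                 sigma=0.1, omega_max=250.0, r=16,
                 A=(0.5,), f=(10.0,), phi=(0.0,), Fs=100.0):
    """Simulate one MEMS gyro channel; omega_true and output are in deg/s."""
    k = np.arange(omega_true.size)
    # periodic disturbance: sum of harmonics with amplitudes A, frequencies f, phases phi
    periodic = sum(a * np.sin(2*np.pi*fm*k/Fs + p) for a, fm, p in zip(A, f, phi))
    analog = G*np.cos(eps)*omega_true + B + periodic \
             + np.random.default_rng(1).normal(0, sigma, omega_true.size)
    truncated = np.clip(analog, -omega_max, omega_max)   # full-scale limit
    dq = 2*omega_max / 2**r                              # quantization step (Eq. (4))
    return dq * np.round(truncated / dq)                 # A/D quantization (Eq. (3))

omega_true = 50.0 * np.sin(2*np.pi*0.5*np.arange(1000)/100.0)
omega_meas = sensor_model(omega_true)
```

Every output sample lies on the quantization grid, and the mean measurement error over full signal periods approaches the bias B, as the model predicts.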
The mean square error of the estimated angular velocity is:

MSE(ω) = E[((γ − 1)·ω_true[k] + B + ν[k] + Σ_m A_m·sin(2π·m·(F_s/N)·k·T_s + ϕ_m))²], (10)

where γ = G·cos ε is the directional gain of the sensor. Since we assume that the internal noise component ν[k] is independent of the true value ω_true[k], the mean of their product is negligible. At zero rate (during static calibration, ω_true[k] = 0), the bias B can be observed at the output of the gyroscope as the mean value, and the MSE is:

MSE₀(ω) = B² + σ² + (1/2)·Σ_m A_m². (11)

The same principles and formulas are valid for accelerometers or any sensors sensitive to environmental noise.
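At zero rate, the bias and the zero-rate MSE can be estimated directly from a static recording, as a quick numeric check of the relations above (the bias and noise values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
B, sigma = 0.3, 0.05                       # hypothetical bias and noise std, deg/s
static = B + rng.normal(0, sigma, 5000)    # omega_true = 0 during static calibration

bias_est = static.mean()                   # B observed as the mean output
mse_zero = np.mean(static**2)              # zero-rate MSE ~ B**2 + sigma**2
```

With no vibration term, the estimated zero-rate MSE converges to B² + σ², the first two terms of Equation (11).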

Synchronization Errors
Within a multi-sensor environment, we need to consider the synchronization of the individual sensors. Some intelligent sensors provide a trigger input, which allows the master controller (e.g., a microprocessor) to send a synchronization pulse, starting the measurement of all sensors at the same time. To quantify the impact of incorrect synchronization, we may assume two identical sensors with the same sampling period T_s but with different sampling phases. Qualitatively, the effect of invalid synchronization increases when the measured variable changes rapidly. If we model the dynamic input signal (the true measured angular velocity) as a sine wave with frequency f_sig, the readings of the two sensors are:

ω₁[k] = A·sin(2π·f_sig·k·T_s) + ν₁[k], (12)
ω₂[k] = A·sin(2π·f_sig·(k·T_s − τ)) + ν₂[k], (13)

where A is the signal amplitude, ν₁ and ν₂ are white Gaussian noise superposed independently on the two sensors, and τ is the synchronization delay of the second sensor. The naïve sensor fusion is the average of the two sensors:

Ω[k] = (ω₁[k] + ω₂[k]) / 2. (14)

Figure 2 shows the RMSE of the value estimated using Equation (14) with respect to the relative sampling frequency and the synchronization delay. The RMSE of each sensor's noise was set to 1% of the signal amplitude for the simulation. A sampling frequency significantly larger than the signal frequency (20 times and more) suppresses the adverse effects of invalid sensor synchronization (note the logarithmic vertical scale).
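The effect of the synchronization delay on the naïve two-sensor average can be reproduced with a short simulation. The noise RMSE is set to 1% of the signal amplitude as in the text; the remaining parameters are illustrative:

```python
import numpy as np

def fusion_rmse(fs_over_fsig, tau_frac, n=20000, A=1.0, noise=0.01, seed=3):
    """RMSE of the naive average of two sensors sampling a sine wave.

    fs_over_fsig : sampling frequency relative to the signal frequency
    tau_frac     : synchronization delay of sensor 2 as a fraction of Ts
    """
    rng = np.random.default_rng(seed)
    Ts = 1.0 / fs_over_fsig                     # time step, in signal periods
    k = np.arange(n)
    true = A * np.sin(2*np.pi*k*Ts)
    s1 = true + rng.normal(0, noise*A, n)
    s2 = A * np.sin(2*np.pi*(k*Ts - tau_frac*Ts)) + rng.normal(0, noise*A, n)
    return np.sqrt(np.mean(((s1 + s2)/2 - true)**2))

slow = fusion_rmse(fs_over_fsig=4.0,  tau_frac=0.5)   # coarse sampling, half-sample lag
fast = fusion_rmse(fs_over_fsig=40.0, tau_frac=0.5)   # 40x oversampling, same lag
```

With the same half-sample lag, the heavily oversampled case yields an RMSE roughly an order of magnitude smaller, matching the trend in Figure 2.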

Figure 2. RMSE of the sensor fusion with respect to the sampling frequency and synchronization delay between two identical sensors.

Homogeneous Sensor Fusion
We may now analyze the situation when multiple almost-identical sensors sense the same variable. While the environmental noise (e.g., vibrations) is common to all sensors, the internal sensor noise ν is purely random. The homogeneous sensor fusion can be expressed as a weighted average of the sensors' readings:

Ω[k] = Σ_j q_j·ω_j[k],  Σ_j q_j = 1, (15)

where q_j is the weight of the j-th sensor. If we apply Equations (9)-(15), neglecting the quantization and limits of the sensor, we obtain:

Ω[k] = Σ_j q_j·(γ_j·ω_true[k] + B_j + ν_j[k] + Σ_m A_mj·sin(2π·m·(F_s/N)·k·T_s + ϕ_mj)). (16)

The goal of the sensor fusion is to minimize the MSE of the output (the least-squares method). Theoretically, if the gain and bias errors are completely compensated by calibration, the sensor is perfectly aligned, and no vibrations are present, the only source of error is the internal noise of the sensor, and the result of the sensor fusion is:

Ω_ideal[k] = ω_true[k] + Σ_j q_j·ν_j[k]. (17)

The MSE of the sensor fusion is then:

MSE(Ω_ideal) = Σ_j q_j²·σ_j². (18)

When each ideal sensor has a weight coefficient proportional to q_j ∼ 1/MSE(ω_j),

q_j = (1/MSE(ω_j)) / Σ_i (1/MSE(ω_i)), (19)

the MSE of the ideal sensor fusion result MSE(Ω_ideal) is minimal (see, e.g., [28]).
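A quick numeric check of the weighting rule q_j ∼ 1/MSE(ω_j) for ideal, calibrated sensors (the per-sensor noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
sigmas = np.array([0.05, 0.05, 0.20, 0.50])        # per-sensor noise std, deg/s
true = 10.0 * np.sin(2*np.pi*0.5*np.arange(5000)/100.0)
readings = true + rng.normal(0, 1, (4, 5000)) * sigmas[:, None]

q = 1.0 / sigmas**2                                # q_j proportional to 1/MSE(omega_j)
q /= q.sum()                                       # normalize so the weights sum to 1
fused = q @ readings                               # weighted average (Eq. (15))
naive = readings.mean(axis=0)                      # uniform-weight baseline

rmse = lambda x: np.sqrt(np.mean((x - true)**2))
```

The inverse-MSE weighting lets the two quiet sensors dominate, so the fused RMSE falls well below the naïve equal-weight average, consistent with Equation (18).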
The precision of the real fusion result depends on a precise estimation of the MSE. If we use the zero-rate estimate of the MSE (Equation (11)), which is an overestimate, the MSE of the sensor fusion result will be worse. We know neither the harmonic components of the measured vibrations (parameters A_m, ϕ_m) nor the values of the calibration parameters after a longer run. The values of the calibration parameters for a group of sensors are considered randomly distributed around their ideal values. When the homogeneous sensor fusion (weighted average) provides a reasonable estimate of the true value, it is possible to estimate the values of the calibration parameters from the last N samples. The simple calibration model is:

ω_j(cal)[k] = (1 + c_1j)·ω_j(raw)[k] + c_2j, (20)

where ω_j(raw)[k] is the k-th raw digital sample of the j-th sensor converted to SI units, ω_j(cal)[k] is the corresponding calibrated value, and c_1j, c_2j are the calibration parameters. Equation (20) represents a first-order (linear) calibration model. The sensor gain should ideally be equal to one; the c_1 parameter represents the gain deviation and is usually significantly lower than one. This form of the calibration model was chosen to improve numerical precision when floating-point numbers are used. The calibration parameters c_1 and c_2 may be considered quasi-constant, since they may vary slightly with elapsed time and temperature. In the ideal case, both calibration parameters are zero. If we invert (20) and compare it with (7), the gain and bias of the j-th sensor, G_j and B_j from (7), can be rewritten in terms of c_1j, c_2j:

G_j = 1/(1 + c_1j),  B_j = −c_2j/(1 + c_1j). (21)

If the true value were known and the noise were Gaussian, the calibration parameters could be estimated using the least-squares method:

[c_1j, c_2j] = argmin Σ_k (ω_true[k] − (1 + c_1j)·ω_j(raw)[k] − c_2j)². (22)
In a real application, the true value is unknown. In the case of multiple sensors, the a priori estimate of the true value is the mean of all calibrated sensors' measurements at the given time (all q_j are equal):

Ω_est[k] = (1/M)·Σ_j ω_j(cal)[k], (23)

where M is the count of sensors. Initially, the bias and gain deviation may be set to zero; hence, ω_j(cal)[k] = ω_j(raw)[k].
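With Ω_est standing in for the unknown true value, the parameters c_1j and c_2j of Equation (20) can be estimated by an ordinary least-squares fit, sketched below on a synthetic sensor (the gain deviation, offset, and noise values are assumptions):

```python
import numpy as np

def estimate_calibration(omega_raw_j, omega_est):
    """Fit omega_est ~ (1 + c1)*omega_raw_j + c2 over the window (Eq. (20))."""
    slope, intercept = np.polyfit(omega_raw_j, omega_est, deg=1)
    return slope - 1.0, intercept            # c1, c2

# synthetic sensor with gain deviation c1 = 0.05 and offset c2 = -0.4 (illustrative)
rng = np.random.default_rng(5)
true = 30.0 * np.sin(2*np.pi*0.3*np.arange(2000)/100.0)
raw = (true + 0.4) / 1.05 + rng.normal(0, 0.05, true.size)
omega_est = true + rng.normal(0, 0.02, true.size)   # fused estimate of the true value
c1, c2 = estimate_calibration(raw, omega_est)
```

The recovered c1 and c2 match the values used to distort the raw signal, which is exactly what the in-array calibration step relies on.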

Algorithm 1 explains the homogeneous sensor fusion with the calibration estimation:
Algorithm 1 Homogeneous sensor fusion with calibration
set c_1j ← 0, c_2j ← 0, q_j ← 1/M
for j = 1 to M
    get ω_j(raw)
end for
initialize the estimate of the true value: Ω_est ← (1/M)·Σ_j ω_j(raw)
compute the sensor deviations y_j ← Ω_est − ω_j(raw)
compute the calibration parameters c_1j, c_2j using (24)
for j = 1 to M
    compute the calibrated values ω_j(cal) ← (1 + c_1j)·ω_j(raw) + c_2j
end for
for r = 1 to iterations
    re-compute the estimate of the true value: Ω_est ← Σ_j q_j·ω_j(cal)
    for j = 1 to M
        compute the MSE of each sensor: MSE(ω_j) ← Var(Ω_est − ω_j(cal))
        compute the weight of the sensor: q_j ← 1/MSE(ω_j)
    end for
    compute the maximal weight: q_max ← (µ/M)·Σ_j q_j, with truncation factor µ = 3 (empirical)
    for j = 1 to M
        truncate: q_j ← min(q_j, q_max)
    end for
    normalize: q_j ← q_j / Σ_k q_k
end for
In the real-time version of the above algorithm, we process a fixed-size window of historical raw readings.
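A compact sketch of Algorithm 1 for one window of samples is given below. The window length, iteration count, and sensor parameters are illustrative, and a per-sensor least-squares line fit stands in for Equation (24):

```python
import numpy as np

def fuse_and_calibrate(raw, iterations=3, mu=3.0):
    """raw: (M, N) window of raw samples from M sensors.

    Returns the fused estimate (N,), weights (M,), and c1, c2 per sensor.
    """
    M, N = raw.shape
    q = np.full(M, 1.0 / M)                       # initial uniform weights
    omega_est = raw.mean(axis=0)                  # initial estimate of the true value
    c1, c2 = np.empty(M), np.empty(M)
    for j in range(M):                            # per-sensor linear calibration
        slope, intercept = np.polyfit(raw[j], omega_est, deg=1)
        c1[j], c2[j] = slope - 1.0, intercept
    cal = (1.0 + c1)[:, None] * raw + c2[:, None]
    for _ in range(iterations):
        omega_est = q @ cal                       # re-estimate the true value
        mse = np.var(cal - omega_est, axis=1)     # per-sensor MSE vs. the estimate
        q = 1.0 / np.maximum(mse, 1e-12)
        q_max = (mu / M) * q.sum()                # truncation: no "dictator" sensor
        q = np.minimum(q, q_max)
        q /= q.sum()                              # normalize the weights
    return omega_est, q, c1, c2

# demo: 8 sensors, one degraded (10x noise), one miscalibrated (gain/bias error)
rng = np.random.default_rng(6)
true = 20.0 * np.sin(2*np.pi*0.4*np.arange(1000)/100.0)
raw = true + rng.normal(0, 0.1, (8, 1000))
raw[3] += rng.normal(0, 1.0, 1000)                      # degraded sensor
raw[5] = 1.04 * true + 0.5 + rng.normal(0, 0.1, 1000)   # gain/bias error

est, q, c1, c2 = fuse_and_calibrate(raw)
```

The degraded sensor receives a near-zero weight, the miscalibrated sensor's gain deviation is recovered, and the fused RMSE is lower than that of the naïve average.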

Truncation Factor
The proposed algorithm has an intrinsic instability. When the MSE of one sensor is significantly underestimated compared to the other sensors, its weight q_j becomes significantly higher than the weights of the other sensors. Such a sensor becomes a "dictator": in the next iteration, the estimated weighted average Ω_est is pulled towards the readings of the dictator sensor and therefore away from the readings of the other sensors. The algorithm then overestimates the MSE of the other sensors, resulting in a further decrease in their weights. After multiple iterations, the normalized weight of the dictator sensor converges to one, and all other sensors are suppressed. To avoid such behavior, the truncation factor µ was introduced. It ensures that the weight of the best sensor never exceeds µ times the average weight of all sensors. If we assume that the RMS of a single sensor is a gamma-distributed random variable, the probability of false truncation (underestimation of the best sensor) is shown in Figure 3. The shape θ = 23 ± 8 (std. deviation) and scale β = 0.03 ± 0.01 parameters of the gamma distribution were roughly estimated by fitting the histogram of the RMS values obtained from a sample of 16 sensors.
Figure 4 shows the RMSE of the estimated angular velocity with respect to the truncation factor (see the next section for further details about the simulation parameters). According to the simulation results, the optimal truncation factor is approximately 3, which is the value used in the further experiments. The false sensor-weight truncation probability for µ = 3 is around 1%.
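The false-truncation probability can be checked with a Monte-Carlo draw from the fitted gamma distribution (shape and scale as estimated above; the trial count is arbitrary). A healthy sensor is falsely truncated when its weight 1/RMS² exceeds µ times the average weight:

```python
import numpy as np

def false_truncation_prob(mu, M=16, shape=23.0, scale=0.03, trials=20000, seed=7):
    """P(the best healthy sensor's weight exceeds mu times the average weight)."""
    rng = np.random.default_rng(seed)
    rms = rng.gamma(shape, scale, (trials, M))   # per-sensor RMS draws
    q = 1.0 / rms**2                             # weight proportional to 1/MSE
    q_max = (mu / M) * q.sum(axis=1)             # truncation threshold per trial
    return float(np.mean(q.max(axis=1) > q_max))

p2 = false_truncation_prob(mu=2.0)
p3 = false_truncation_prob(mu=3.0)
```

As expected, raising µ monotonically lowers the chance that a genuinely good sensor is clipped, at the cost of weaker protection against a real dictator sensor.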

Simulation
We have used both simulation and real experiments to test the proposed method's performance. The simulated raw data were computed in the MATLAB R2021a environment with the following parameters:
• simulated (true) angular velocity: a pseudo-sinusoidal signal with a fluctuating frequency;
• the RMS of each of the 16 sensors is drawn from a gamma distribution with shape α = 5 and scale β = 0.02;
• the bias of each of the 16 sensors is drawn from a zero-centered Gaussian distribution with standard deviation σ = 30 deg·s⁻¹;
• the gain of each of the 16 sensors is one-centered with standard deviation σ = 0.04.
The above parameters roughly correspond to the noise parameters of commercially available low-cost MEMS gyroscopes. A pseudo-sinusoidal signal with a fluctuating frequency simulates the continuous angular velocity of a physical object with a non-zero moment of inertia. An example of the simulated signal is shown in Figure 5. The average gain across all sensors equals one, and the average bias is zero. Such normalization is necessary because the algorithm cannot determine the common-cause bias of all sensors without any a priori information (e.g., from different types of sensors). Put simply, when all sensors deviate in the same direction, the result will also deviate in that direction.
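The pseudo-sinusoidal test signal with a fluctuating frequency can be generated by integrating a slowly varying instantaneous frequency. The sketch below is one possible construction; the frequency band and amplitude are assumptions:

```python
import numpy as np

def pseudo_sine(n=10000, fs=100.0, amp=100.0, f0=0.3, f_dev=0.2, seed=8):
    """Pseudo-sinusoidal angular velocity with a slowly fluctuating frequency."""
    rng = np.random.default_rng(seed)
    # instantaneous frequency: slow random walk around f0, kept positive
    f_inst = f0 + f_dev * np.cumsum(rng.normal(0.0, 1.0, n)) / np.sqrt(n)
    f_inst = np.clip(f_inst, 0.05, 2.0)
    phase = 2.0*np.pi*np.cumsum(f_inst) / fs     # integrate frequency into phase
    return amp * np.sin(phase)

omega_true = pseudo_sine()
```

Because the phase is obtained by integration, the signal stays continuous, mimicking an object whose angular velocity cannot jump due to its non-zero moment of inertia.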

Simulation Results
In the simulation mode, each sensor's gain, bias, and MSE are known. Therefore, we may compute the optimal weights and compare the estimated parameters with the known values (see Table 1). The algorithm converges under the above-mentioned ideal conditions (zero-centered biases, one-centered gains). The shown values were obtained with three iterations of the RMS estimation. Note that the algorithm, as described, computes the MSE (mean square error) instead of the RMS to avoid the unnecessary computation of the square root.

Experimental Setup
Real-world experiments were conducted using a matrix of 16 MPU9250 MEMS sensors from TDK InvenSense (Shenzhen, China) (see Figure 6). All sensors communicate with an STM32F446 microcontroller from STMicroelectronics (Geneva, Switzerland) via two independent SPI channels. The microcontroller then sends all sensor readings to the computer via a serial connection. To synchronize the sensor readings, all sensors provide a synchronization trigger input. The synchronized readings are stored in internal FIFO buffers within each sensor and later retrieved via SPI. The typical noise characteristics of the used gyroscope sensor, obtained from the datasheet, are given in [29].
To evaluate the proposed method, it is necessary to measure the ground-truth value of the angular velocity. Our first attempts utilized a stepper motor driving a rotational platform. However, the MEMS gyroscopes mentioned above captured the high-frequency changes in the motor's rotation speed caused by the steps. The frequency of those steps is higher than the Nyquist frequency of the sensors, thus causing aliasing. To circumvent that, we attached the sensor matrix to a commercially available SPAN-CPT inertial navigation unit from NovAtel Inc. (Calgary, AB, Canada), which contains three DSP-3000 single-axis fiber-optic gyroscopes (FOGs) manufactured by KVH Industries (Middletown, CT, USA). The characteristics of the FOG sensors are given in [30]. The FOG has a 20 times lower RMS, so it may be used as the source of the ground-truth value. An initial calibration allows us to compensate for the bias of the FOG sensor. The matrix of MEMS sensors attached to the SPAN-CPT unit is shown in Figure 7. Both sensors were rotated randomly by hand. The time series of the measured angular velocity from the MEMS array and the FOG were roughly synchronized using a timestamp and fine-aligned by hand to suppress the delays caused by the communication interfaces in the PC.
The manual alignment of the MEMS and FOG signals is needed only for validation purposes; in real applications, the FOG is not present. The measured angular velocity is shown in Figure 8. The estimated gain, bias, and weight of all sensors are shown in Figure 9, which presents a comparison of the sensor calibration parameters. Blue circles are the "true" values obtained from the FOG data. The results were obtained using a moving window with a size of 1000 samples (processing the last 10 s). The shown outliers represent short-term calibration changes.
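The manual fine alignment of the MEMS and FOG time series could alternatively be automated by maximizing the cross-correlation between the two signals. This is a hedged sketch of such an alternative, not the procedure used in the experiments:

```python
import numpy as np

def find_lag(ref, sig, max_lag=200):
    """Return the integer sample lag of `sig` relative to `ref` that maximizes
    the cross-correlation (both signals assumed to share one sampling rate)."""
    ref = ref - ref.mean()
    sig = sig - sig.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(ref[max(0, -l):len(ref) - max(0, l)],
                   sig[max(0, l):len(sig) - max(0, -l)]) for l in lags]
    return int(lags[int(np.argmax(corr))])

# synthetic check: delay a smooth signal by 17 samples and recover the lag
rng = np.random.default_rng(9)
fog = np.convolve(rng.normal(0, 1, 3100), np.ones(30)/30, mode="valid")[:3000]
mems = np.roll(fog, 17) + rng.normal(0, 0.05, 3000)
lag = find_lag(fog, mems)
```

Once the lag is found, shifting one series by that many samples reproduces the fine alignment performed by hand in the experiments.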
As seen in Table 2 and Figure 9, the bias and gain of the sensors were estimated correctly. The true gain and bias were obtained by comparing the results of the MEMS sensors with the readings from the FOG. The reference values lie within the bounding boxes of the estimated gain and bias. The mean absolute error of the gain estimation was 0.46%, and the mean absolute error of the bias estimation was 0.04 deg·s⁻¹ (0.016% of the gyroscope full scale). The weights of the sensors are estimated only roughly, with a mean absolute error of 41%, but they reflect the qualitative differences between the sensors (e.g., sensors j = 3, 4, 14, 15 have low weights). Table 3 compares two scenarios: the first with the real measured data and the second with artificial white Gaussian noise added to one of the sensors, emulating degradation.

Conclusions
This article deals with real-time calibration and sensor fusion in a homogeneous sensor array of MEMS gyroscopes measuring angular velocity. The proposed method has been validated by both simulation and real-world experiments. The experimental results show that the algorithm improves the precision of the estimated angular velocity by approximately 5% compared to a naïve average when all sensors are working. The main advantage of the algorithm is its intrinsic ability to compensate for the degradation of some sensors using an adaptive weighted average, thus improving the reliability of the sensor array. The homogeneous sensor fusion can estimate the calibration parameters (gain, bias) of individual sensors within the sensor array without the need for precise offline calibration equipment. Compared to LVM (linear minimum variance) methods, our method does not require a priori information about the error covariance matrices of the individual sensors and can capture changes in the error characteristics in real time. The proposed method applies to all types of sensors (not exclusively MEMS gyroscopes) because it does not require any sensor-specific information. One of the critical challenges in such a configuration is keeping the sensors in sync, which can be achieved by a global synchronization signal routed to all sensors. This drawback can be neglected when the sampling period is significantly smaller than the time constant of the system dynamics.