Article

Data-Efficient Training of Gaussian Process Regression Models for Indoor Visible Light Positioning

South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(24), 8027; https://doi.org/10.3390/s24248027
Submission received: 7 November 2024 / Revised: 7 December 2024 / Accepted: 14 December 2024 / Published: 16 December 2024

Abstract

A data-efficient training method, namely Q-AL-GPR, is proposed for visible light positioning (VLP) systems with Gaussian process regression (GPR). The proposed method employs the methodology of active learning (AL) to progressively update the effective training dataset with data of low similarity to the existing one. A detailed explanation of the principle of the proposed method is given. An experimental study is carried out in a three-dimensional GPR-VLP system. The results show the superiority of the proposed method over both the conventional training method based on random draw and a previously proposed line-based AL training method. The impacts of the parameters of active learning on the performance of GPR-VLP are also presented via experimental investigation, which shows that (1) the proposed training method outperforms the conventional one regardless of the number of final effective training data (E), especially for a small/moderate effective training dataset, (2) a moderate step size (k) should be chosen for updating the effective training dataset to balance the positioning accuracy and computational complexity, and (3) due to the interplay of the reliability of the initialized GPR model and the flexibility in reshaping such a model via active learning, the number of initial effective training data (m) should be optimized. In terms of data efficiency in training, the required number of training data can be reduced by ~27.8% by Q-AL-GPR for a mean positioning accuracy of 3 cm when compared with GPR. The CDF analysis shows that with the proposed training method, the 97th percentile positioning error of GPR-VLP with 300 training data is reduced from 11.8 cm to 7.5 cm, which corresponds to a ~36.4% improvement in positioning accuracy.

1. Introduction

Accurate knowledge of the user's location is a prerequisite for high-quality location-based services (LBSs), which enable a variety of important functions in the era of the Internet of Things, such as asset monitoring, automatic scheduling, product adaptation, and service recommendation. To achieve accurate localization in indoor environments, where satellite-based positioning cannot work properly due to signal blockage, visible light positioning (VLP) based on visible light communication technology, which can leverage the ubiquitous indoor lighting infrastructure, has been proposed and intensively studied [1]. Compared with radio-frequency-based indoor positioning technologies, VLP based on an optical carrier with a much shorter wavelength is more robust against multi-path fading and electromagnetic interference. Previous work has shown that centimeter-level accuracy can be achieved by VLP [1], which outperforms conventional radio-frequency-based counterparts (e.g., Wi-Fi and Bluetooth) by a large margin [2,3].
To achieve high positioning accuracy with VLP, a carefully designed positioning algorithm is indispensable. Recent studies show that VLP algorithms based on supervised machine learning (SL) (e.g., artificial neural network (ANN) [4], polynomial regression (PR) [5], k-nearest neighbors (KNN) [6], decision tree (DT), random forest (RF) [7], support vector machine (SVM) [8], and Gaussian process regression (GPR) [9,10,11,12,13]), which have a powerful nonlinear data processing capability, offer superior performance over conventional positioning algorithms based on trilateration/triangulation using the Lambertian radiation model [14]. For example, the study in [10] shows that, in a received signal strength (RSS)-based VLP system, the GPR positioning algorithm significantly outperforms the conventional propagation-model-based approach, which suffers from model mismatch in practical applications. Similar findings have been reported in [11], which shows that the data-driven GPR algorithm offers more robust and accurate positioning results than the conventional analytical angle-of-arrival multilateration-based algorithm. In SL-VLP, a customized model is created via training on a labeled dataset (i.e., training data collected at known positions) and used to estimate the position of a user at an unknown location via inference. For some SL algorithms (e.g., ANN and PR), the training and inference processes are decoupled, as they are conducted at the offline and online stages, respectively. In such a case, the computational complexity of position estimation remains unchanged even if a larger dataset is used in training to enhance the accuracy of the SL model. However, for many other SL algorithms, mostly non-parametric ones (e.g., RF, SVM, KNN, and GPR), the training and inference processes are coupled, which means that the complexities of both processes increase with a larger training dataset. This leads to a dilemma in these coupled SL-VLP methods: a large training dataset is favored for creating an accurate data-driven model, yet it increases the computational burden of position estimation at every unknown location. It should be noted that efforts have been made to reduce the complexity of training SL-VLP systems [5,15,16,17]. Nevertheless, these works cannot solve the above dilemma because either only the decoupled SL-VLP method is considered [5,15] or the number of training data in the coupled SL-VLP method is not reduced [16,17].
Since different coupled SL methods may be developed based on quite different principles/mechanisms, a solution to this dilemma should, in general, be devised specifically for a given SL positioning algorithm. In this paper, we target the GPR-based VLP system and propose a data-efficient training method to reduce the number of training data without compromising the accuracy of the GPR model in positioning. The reason for choosing GPR is that it is a probabilistic model that can simultaneously estimate the position and report the corresponding confidence of the estimation [18], which is not possible for other SL methods based on deterministic models. The unique probabilistic nature of the GPR model also makes it a valid tool for data augmentation in SL-based indoor positioning algorithms (e.g., NN in [15] and fingerprinting in [12]). Previous studies have shown that GPR outperforms the conventional propagation-model-based method [10,11] and some other SL methods (e.g., multi-layer perceptrons [13]) in terms of positioning accuracy. However, the computational resources and time required by GPR increase significantly when a large training dataset is employed, which negatively affects the practicality of positioning [12]. In other words, the data efficiency in the training of the GPR model needs to be improved to resolve the conflict between positioning accuracy and computational complexity. In this regard, we propose a data-efficient training dataset construction method for GPR-based VLP systems. The proposed method incorporates the idea of active learning [16,19,20] to progressively refine the data in a collected training dataset according to the data similarity reflected in the "confidence" output of the GPR model. A quadrant-based active learning method is further devised to enhance the data efficiency in training, which in turn reduces the complexity of positioning performed at every location. An experiment on a three-dimensional VLP system is carried out to evaluate the performance of the proposed method. Optimization of the parameters of active learning is also carried out experimentally. The results show that the number of training data can be effectively reduced by the new training method, which outperforms both the conventional training method and the line-based training method (which also employs active learning) under the same number of training data.
The contribution of our work is summarized as follows:
(a)
A novel training dataset construction method based on active learning is proposed to reduce the size of the training dataset without compromising the positioning accuracy for VLP with GPR. The higher data efficiency in training offered by the proposed method can reduce the computational complexity of the positioning stage with GPR;
(b)
The effectiveness of the proposed method in improving the data efficiency in training is proved experimentally in a 3D VLP system for both scenarios with and without the receiver tilt;
(c)
The impacts of parameters of active learning on the performance of the proposed method have been studied via comprehensive experimental investigations, which gives insights into the optimization of the proposed method.

2. Principle of Data-Efficient Training in GPR-VLP

In GPR-VLP, the GPR model takes in the features of the received signal (s) and outputs the estimate of the position (p) corresponding to the measurement. Without loss of generality, in this study, the received signal strength (RSS) of four carriers of different frequencies (i.e., s = (s_1, s_2, s_3, s_4)) and the three-dimensional coordinates (i.e., p = (l_X, l_Y, l_Z)) are used as the model's input and output, respectively. To generate an accurate GPR model, a set of data is used for training, which is denoted as follows:
$$D = \{(\mathbf{s}_i, \mathbf{p}_i)\}_{i=1,\dots,N} \tag{1}$$
where s_i and p_i are the RSS and label (i.e., position) of the i-th training sample, respectively. As GPR is a non-parametric model, its training is not conducted explicitly as an independent procedure before the inference process. Instead, training and inference are coupled through a multivariate Gaussian process. Specifically, the position p_T at an unknown location can be expressed as a Gaussian distribution conditioned on the known position vector P_D as follows:
$$p(\mathbf{p}_T \mid P_D) \sim \mathcal{N}(\mu, \sigma^2) \tag{2}$$
The mean μ and variance σ² of this distribution are as follows:
$$\mu = \hat{\mathbf{p}}_T = K(\mathbf{s}_T, S_D)\,K(S_D, S_D)^{-1} P_D \tag{3}$$
$$\sigma^2 = K(\mathbf{s}_T, \mathbf{s}_T) - K(\mathbf{s}_T, S_D)\,K(S_D, S_D)^{-1} K(S_D, \mathbf{s}_T) \tag{4}$$
where s_T is the RSS measured at the unknown location p_T, and S_D = (s_1, s_2, …, s_N) and P_D = (p_1, p_2, …, p_N) are the RSS and position vectors of the N training data, respectively. The μ and σ² can be interpreted as the estimated position p̂_T and the confidence of that estimation by the GPR model. The K(X, Y) in (3) and (4) is a matrix/vector with a dimension of length(X) (i.e., the length of vector X) by length(Y) (i.e., the length of vector Y). The element on the i-th row and j-th column of K(X, Y) depends only on the i-th element of X (i.e., x_i) and the j-th element of Y (i.e., y_j) according to a certain kernel function k(x_i, y_j). The isotropic radial basis function k(x_i, y_j) = exp(−‖x_i − y_j‖² / (2l²)) with a positive hyper-parameter l is used as the kernel function here, as we find it offers good performance in a simple form.
It is obvious that the position estimation at every unknown location requires the evaluation of (3) with a measurement s_T to obtain p̂_T. The computational complexity of the matrix inversion to obtain K(S_D, S_D)⁻¹ is O(N³). Even if K(S_D, S_D)⁻¹ is calculated offline, the computational complexity of (3) still scales proportionally with N. For the estimation confidence in (4), the computational complexity scales proportionally with N² after excluding the matrix inversion. Therefore, the computational burden at every test location increases if a larger training dataset is employed in GPR to enhance the positioning accuracy. To reduce the complexity of positioning without compromising the positioning accuracy, a data-efficient training method is needed.
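To make the inference step concrete, the following minimal NumPy sketch evaluates (3) and (4) for a single query. The variable names (S_D, P_D, s_T) simply mirror the notation above, the explicit matrix inverse is kept to match the complexity discussion, and the small jitter added to the diagonal for numerical stability is an assumption of this sketch rather than part of the model described here.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    """Isotropic RBF kernel k(x, y) = exp(-||x - y||^2 / (2 l^2)), applied row-wise."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gpr_predict(s_T, S_D, P_D, length_scale=1.0, jitter=1e-6):
    """Evaluate Eqs. (3) and (4): estimated position (mean) and its confidence (variance).

    s_T: (4,) RSS measured at the unknown location; S_D: (N, 4) training RSS; P_D: (N, 3) training positions.
    """
    K_DD = rbf_kernel(S_D, S_D, length_scale) + jitter * np.eye(len(S_D))  # jitter for numerical stability
    K_inv = np.linalg.inv(K_DD)                       # O(N^3); can be computed once offline
    K_TD = rbf_kernel(s_T[None, :], S_D, length_scale)  # 1 x N
    mu = (K_TD @ K_inv @ P_D).ravel()                 # Eq. (3): estimated 3D position
    var = 1.0 - K_TD @ K_inv @ K_TD.T                 # Eq. (4): k(s_T, s_T) = 1 for the RBF kernel
    return mu, float(var[0, 0])
```

In practice the inverse (or a Cholesky factor) of K(S_D, S_D), and even the product K(S_D, S_D)⁻¹ P_D, would be computed once offline and reused for every query, so that only the terms involving s_T are evaluated online.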
Algorithm 1 shows the pseudo-code of our proposed data-efficient training method based on active learning for GPR-VLP systems, which is denoted as "AL-GPR" in the rest of this paper. The first step is to divide the training dataset D into two sets: an initial effective training dataset D_e with m randomly drawn data and the complement dataset D*. Then, the "similarity" between each sample in D* and the data contained in D_e is evaluated based on (4) using D_e as the effective training dataset. After that, the k data in D* that correspond to the k lowest confidence outputs (i.e., the k largest variances σ² from (4)) are added to the effective training dataset (i.e., updating D_e), since these data have the lowest level of similarity to those already included in the current GPR model. The effective training dataset D_e is updated iteratively following the above process until the amount of data in D_e reaches a preset value E. Finally, we use D_e as the training dataset and perform positioning with the measurement s_T at any test location using (3) and (4).
For the initialization step in active learning, it is preferred that the data in the initialized D_e have low similarity to each other to enhance the generality of the initialized GPR model. However, the random drawing method in Algorithm 1 does not naturally guarantee that, especially when D is not sampled uniformly over the whole test space or the initial size is small. To further improve the data efficiency in the training of the GPR model, a quadrant-based active learning method is proposed by modifying the initialization step in Algorithm 1. The test space is divided into four quadrants of equal size, and ⌊m/4⌋ data are randomly drawn from each quadrant according to their labels, where ⌊x⌋ denotes the largest integer less than or equal to x. In this way, the data in the initialized D_e are evenly distributed over the test space from a quadrant point of view. The pseudo-code of the modified algorithm (namely, Q-AL-GPR) is summarized in Algorithm 2. (Note that for both algorithms the AL process is decoupled from the positioning stage and only needs to be conducted once before the positioning stage; the operator |·| denotes the number of samples in a dataset.)
Algorithm 1 GPR-VLP with Data-Efficient Training Based on Active Learning (AL-GPR)
Input: training dataset D, the numbers of initial and final effective training data (m and E, respectively), kernel function k, and the measurement s_T at an unknown location.
Output: estimated position p̂_T and its confidence σ_T².
Active Learning:
  Initialization: divide D into an initial effective training dataset D_e(0) (|D_e(0)| = m) with m data randomly drawn from D, and the corresponding complement dataset D*(0) (|D*(0)| = N − m).
  While the number of data in D_e(i) after the i-th iteration is smaller than E (i.e., |D_e(i)| < E) do
    1. Evaluate the similarity of the data in D*(i) to D_e(i) using (4) with D_e(i) as the training dataset. Specifically, for each sample d_x = (s_x, p_x) ∈ D*(i), calculate σ_x² = K(s_x, s_x) − K(s_x, S_e) K(S_e, S_e)⁻¹ K(S_e, s_x), where S_e denotes the RSS vector of the samples in D_e(i).
    2. Update D_e(i) and D*(i) by moving the k data with the lowest similarity levels (i.e., the k largest σ_x²) from D*(i) to D_e(i). Mathematically, D_e(i+1) = D_e(i) ∪ {d_x ∈ D*(i) with the k largest σ_x²} and D*(i+1) = {d_x ∈ D*(i) with the (|D*(i)| − k) smallest σ_x²}. The sizes of the two datasets after the update are |D_e(i+1)| = |D_e(i)| + k and |D*(i+1)| = |D*(i)| − k.
3. i = i + 1.
End
  AL Output: the final effective training dataset D_e from the final iteration. Note that |D_e| = E.
Positioning Stage: calculate the estimated position p̂_T and the confidence of estimation σ_T² for the measured RSS s_T at an unknown location using GPR based on (3) and (4), with D_e as the training dataset.
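A compact sketch of the AL stage of Algorithm 1 is given below. It assumes that, in each iteration, the candidates with the largest predictive variance from (4) (i.e., the lowest similarity to the current D_e) are moved into the effective set; the function name al_gpr_training_set and the use of NumPy are illustrative choices, not the paper's implementation, and the RBF helper is the one defined in the earlier sketch, repeated here for self-containment.

```python
import numpy as np

def _rbf(X, Y, length_scale=1.0):
    """Isotropic RBF kernel, as in Section 2."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def al_gpr_training_set(S, m, E, k, length_scale=1.0, jitter=1e-6, seed=0):
    """Sketch of the AL stage of Algorithm 1: return the indices of the E selected samples.

    S: (N, 4) RSS features of the collected dataset D. Only the features are needed here;
    the corresponding position labels are used later at the positioning stage.
    """
    rng = np.random.default_rng(seed)
    N = len(S)
    eff = list(rng.choice(N, size=m, replace=False))      # initial D_e: m random samples
    rest = [i for i in range(N) if i not in set(eff)]     # complement D*

    while len(eff) < E:
        K_ee = _rbf(S[eff], S[eff], length_scale) + jitter * np.eye(len(eff))
        K_inv = np.linalg.inv(K_ee)
        K_xe = _rbf(S[rest], S[eff], length_scale)
        # Eq. (4) for every remaining candidate; k(x, x) = 1 for the RBF kernel
        var = 1.0 - np.sum((K_xe @ K_inv) * K_xe, axis=1)
        step = min(k, E - len(eff))
        picked = np.argsort(var)[-step:]                  # largest variance = lowest similarity
        eff += [rest[j] for j in picked]
        rest = [r for j, r in enumerate(rest) if j not in set(picked.tolist())]
    return eff
```

The returned indices define D_e; positioning then proceeds exactly as in the sketch of (3) and (4) above, with D_e in place of the full dataset.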
Algorithm 2 GPR-VLP under Data-Efficient Training with Quadrant-Based Active Learning (Q-AL-GPR)
  Input/output and all steps are the same as Algorithm 1 except for the initialization step in Active Learning.
Active Learning:
  Initialization: divide D into an initial effective training dataset D_e(0) (|D_e(0)| = m) and the corresponding complement dataset D*(0) (|D*(0)| = N − m). For each of the four quadrants of the test space, ⌊m/4⌋ data are randomly drawn according to their labels to build D_e(0).
  Other steps: The same as Algorithm 1.
Positioning Stage: The same as Algorithm 1.
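The only change in Q-AL-GPR is the initialization, which can be sketched as follows. The split of the test area into four equal quadrants at the midpoints of its X and Y extents is an assumption of this sketch (the paper only states that the space is divided into four quadrants of equal size), and the helper name quadrant_init is illustrative.

```python
import numpy as np

def quadrant_init(P, m, seed=0):
    """Draw floor(m/4) samples from each of four equal X-Y quadrants of the test space.

    P: (N, 3) position labels of the collected dataset D. Returns the indices forming
    the initial effective training dataset D_e(0).
    """
    rng = np.random.default_rng(seed)
    x_mid = (P[:, 0].min() + P[:, 0].max()) / 2.0   # assumed quadrant boundaries
    y_mid = (P[:, 1].min() + P[:, 1].max()) / 2.0
    masks = [
        (P[:, 0] <  x_mid) & (P[:, 1] <  y_mid),
        (P[:, 0] <  x_mid) & (P[:, 1] >= y_mid),
        (P[:, 0] >= x_mid) & (P[:, 1] <  y_mid),
        (P[:, 0] >= x_mid) & (P[:, 1] >= y_mid),
    ]
    init = []
    for mask in masks:
        idx = np.flatnonzero(mask)
        init += list(rng.choice(idx, size=min(m // 4, len(idx)), replace=False))
    return init
```

The indices returned here simply replace the random initialization of D_e(0) in the AL-GPR sketch; all subsequent iterations are unchanged.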
The AL-GPR and Q-AL-GPR use AL to construct a training dataset whose samples have low mutual similarity, thereby improving the data efficiency. The AL introduces additional complexity when compared with the random sampling used in the conventional GPR. However, the construction of the training dataset is a one-time effort, which means that the AL only needs to be executed once for all future positioning tasks rather than on a per-task basis. Therefore, the overhead induced by AL becomes negligible from a long-term perspective as more positioning tasks share it. On the other hand, the complexity of the positioning stage scales proportionally with the size of the training dataset for both the conventional GPR and AL-GPR/Q-AL-GPR, as they use the same positioning procedures. As we will show in the next section, a smaller number of training data can be used by AL-GPR/Q-AL-GPR to achieve the same level of positioning accuracy, so the computational complexity of the positioning stage of AL-GPR/Q-AL-GPR is lower than that of the conventional GPR. Unlike the construction of the training dataset, which is realized only once for all positioning tasks, the positioning stage needs to be executed on a per-task basis. The advantage of AL-GPR/Q-AL-GPR in terms of the computational complexity of the positioning stage therefore becomes even more appealing from a long-term perspective, as the savings in complexity accumulate when more tasks are performed.

3. Experiments and Results

To investigate the performance of the proposed data-efficient training methods in GPR-VLP systems, a three-dimensional VLP experiment is carried out. The number of LEDs is chosen to be four, which is larger than the minimum number required for 3D VLP (i.e., three), to provide appropriate coverage of lighting and positioning service simultaneously. More LEDs with an appropriately designed distribution can be employed to enhance the system's robustness against shadowing. The performance of the conventional GPR without AL in training is also evaluated for comparison. In the case of conventional GPR, the training dataset is constructed by sampling at random locations according to a uniform distribution over the positioning space. For conciseness, the conventional GPR is denoted as "GPR" in the rest of this paper. We would like to emphasize that GPR, AL-GPR, and Q-AL-GPR differ only in the way the training dataset is constructed (see Algorithms 1 and 2) but use the same estimation procedures according to (3) and (4). Figure 1a shows a picture of the VLP testbed with dimensions of 150 cm × 240 cm × 270.6 cm. Four light-emitting diodes (LEDs), which serve as both the light sources for illumination and the beacons for VLP, are installed on the ceiling at coordinates (in cm) [48, 61, 270.6], [48, 137.5, 270.6], [95.5, 61, 270.6], and [95.5, 137.5, 270.6], respectively. Note that the asymmetrical arrangement of LEDs is caused by the layout of the room, where a door/corridor is on the left side. The impact of different arrangements of LEDs on the performance of GPR-based VLP is not the focus of this work and is left for future study. To illustrate the physical layer of the VLP system, Figure 1b shows the schematic diagrams of the transmitter and receiver. At the transmitter side, each LED is driven by the output of a bias-tee, which combines a sinusoidal signal of a unique frequency (400/500/600/700 kHz) from an electrical signal generator with the signal from a direct-current source. The modulated optical signals from the four LEDs are received by a photodiode (PD) at the receiver. The electrical signal from the PD is the superposition of four sinusoidal signals of different magnitudes and phase delays (see the inset of Figure 1b). An analog-to-digital converter (ADC) is used after the PD to sample and digitize the received signal. The RSS is measured at the aforementioned frequencies after a fast Fourier transform of the time-domain signal using a field-programmable gate array (FPGA). RSS data are sampled at 1600 different locations evenly distributed on four planes (i.e., on a grid with a spacing d of 10 cm) at different heights (0/23/43/63 cm). Of the collected data, 70% (i.e., N = 1120) are used as D, and the remaining 30% (i.e., 480) are used as test data to evaluate the positioning error. The positioning error ε is defined as follows:
$$\varepsilon = \lVert \mathbf{p} - \hat{\mathbf{p}} \rVert = \sqrt{(l_X - \hat{l}_X)^2 + (l_Y - \hat{l}_Y)^2 + (l_Z - \hat{l}_Z)^2} \tag{5}$$
where p = (l_X, l_Y, l_Z) and p̂ = (l̂_X, l̂_Y, l̂_Z) are the true and estimated coordinates of the test location, respectively.
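As a quick illustration of (5), the error is simply the Euclidean distance between the true and estimated coordinates; the coordinates in the example below are made up purely for illustration.

```python
import numpy as np

def positioning_error(p_true, p_est):
    """Euclidean positioning error of Eq. (5), in the same units as the coordinates (cm)."""
    return float(np.linalg.norm(np.asarray(p_true) - np.asarray(p_est)))

# Illustration only: a 2 cm error in X and Y plus 1 cm in Z gives sqrt(4 + 4 + 1) = 3 cm.
print(positioning_error([10.0, 20.0, 5.0], [12.0, 22.0, 6.0]))  # -> 3.0
```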
Figure 2 shows the statistics of the average positioning error ε under different training methods after 1000 runs. The mean value together with the 5–95% and 25–75% intervals of ε are shown for the three training methods. In each run, the collected data are randomly split into training and test datasets with a fixed ratio of 7:3, and the average positioning error ε over 480 random test locations is calculated. For both AL-GPR and Q-AL-GPR, the initial number (m) and final number (E) of effective training data are set to 160 and 300, respectively. The step size (k) for the update of the effective training dataset is set to 28, which corresponds to five iterations in AL. Note that the conventional GPR uses all data in D for training. The mean values of the empirical distributions for the three training methods are shown by three vertical dashed lines, respectively. Compared with GPR, the average positioning error is significantly reduced by the two data-efficient training methods. When AL-GPR (Q-AL-GPR) is employed instead of GPR, the mean value of ε is reduced from 3.46 cm to 2.80 cm (2.76 cm). The widths of the 5–95% and 25–75% intervals shown in Figure 2 imply that ε is less dispersed when active learning is introduced. This is consistent with the measured variances of ε, which are 8.77 mm², 2.04 mm², and 1.83 mm² for GPR, AL-GPR, and Q-AL-GPR, respectively.
Table 1 shows the mean and variance of the empirical distribution for the two data-efficient training methods under different numbers of initial data (m). It is obvious that the Q-AL-GPR slightly outperforms the AL-GPR regardless of the value of m, which shows the effectiveness of the improved initialization method. The advantage of Q-AL-GPR over AL-GPR is more obvious for a smaller m since the accuracy of the initialized model is more sensitive to the choice of training data when its size is small.
As shown by the results of AL-GPR and Q-AL-GPR, which differ only in the initialization step, the choice of the initial effective training dataset indeed affects the performance of the GPR model. To give a more comprehensive analysis of this impact, we have conducted further studies on the performance of GPR with AL under more choices of initial effective training data. Specifically, besides the two existing cases (i.e., random draw for AL-GPR in Algorithm 1 and quadrant-based uniform draw for Q-AL-GPR in Algorithm 2), two additional cases are considered in which the initial effective training data are selected from the central locations (denoted as "center") and the corner locations (denoted as "corner"), respectively. For a fair comparison, all parameters in active learning are the same as for Q-AL-GPR in Table 1.
Compared with the AL-GPR and Q-AL-GPR methods, which use data from the whole positioning space to construct the initial GPR model, the two new cases show worse performance due to the smaller coverage of the initial effective training dataset. This can be attributed to the heterogeneous illuminance pattern of each light source and the non-symmetric arrangement of the light sources. In such a heterogeneous environment, an initial GPR model that covers a larger area has better generality, which leads to better positioning accuracy. An interesting observation is that the performance of the "corner", AL-GPR, and Q-AL-GPR cases improves when more initial data are employed, while the performance of the "center" case becomes worse. This can be explained by the fact that the central area has a higher SNR than the corner area. Therefore, the initial GPR model built on the low-SNR corner samples is less accurate than the one built on the center samples. For the three cases that involve the low-SNR samples from the corner area (i.e., AL-GPR, Q-AL-GPR, and "corner"), more initial data help to mitigate the impact of noise. For the "center" case with high-SNR samples, overfitting becomes the bottleneck due to the limited coverage of the initial data. As Q-AL-GPR offers the best performance among all methods with the same computational complexity, we focus on Q-AL-GPR in the following to gain further insight into the data-efficient training.
As stated in Section 2, there are three adjustable parameters in the active learning for data-efficient training: the numbers of initial (m) and final (E) effective training data and the number of new data (k) in each update of D_e. To properly set these parameters for optimized performance, an extensive investigation has been carried out. We first test the system's performance under different values of k. Figure 3 compares the mean positioning error and the computation time for training under different dataset update strategies (i.e., k = 1/10/28/70/140, which corresponds to 140/14/5/2/1 iterations in the update of D_e) after 1000 runs. The other settings remain the same as those for Figure 2.
In general, when the dataset D_e is updated iteratively with a smaller step (i.e., a smaller k), the positioning error gradually decreases whereas the computation time for AL increases significantly. The positioning accuracy is improved by 0.23 cm (0.45/0.59/0.7 cm) in terms of the mean positioning error when the number of iterations increases from 1 to 2 (5/14/140) for a fixed number of initial/final effective training data. This is attributed to the fact that more iterations allow for a finer process of selecting the data with lower similarity. Nevertheless, the computation time increases for more iterations, as the calculation of (4) is required in each iteration. The major complexity is attributed to the matrix inversion, which has to be recomputed in each iteration because D_e changes. The measured computing times for active learning are 77.1/188.9/522.1/1382.2/14,314.9 ms for the five cases with 1/2/5/14/140 iterations, respectively, which is consistent with the complexity analysis in Section 2. To balance the computational complexity and positioning accuracy, five iterations are employed in the remaining tests.
Next, the impact of the number of final effective training data (E) on the performance of GPR-VLP is evaluated. Figure 4a,b show the mean and variance of the empirical distribution of the average positioning error ε of each run, respectively, when E increases from 180 to 800 in steps of 20. The number of initial effective training data is fixed at 160. The performance of the conventional GPR based on a randomly drawn training dataset of size E is also shown for comparison. The performance of both methods improves when a larger training dataset is used. Thanks to the active learning process, Q-AL-GPR outperforms GPR regardless of the value of E, and the advantage is more pronounced when a small/moderate number of effective training data is used. As the number of effective data used for training increases, the performance gap between the two methods narrows. For example, the mean (variance) of the empirical distribution is reduced by about 1 cm (15.3 mm²) for E = 220 when one uses Q-AL-GPR instead of GPR, whereas the gain is 0.3 cm (1.3 mm²) for E = 550. This is because both methods draw training data from the same fixed set of candidates, so the intersection of the training datasets of the two methods becomes larger as E approaches its limit (e.g., N = 1120 in our test). The same trend is observed from the perspective of the variance of the empirical distribution. The superiority of Q-AL-GPR over GPR in positioning performance leads to a significant enhancement of data efficiency in training at a given target level of positioning accuracy. For example, the numbers of final effective training data needed to achieve a mean error of 3 cm are about 260 and 360 for Q-AL-GPR and GPR, respectively, which corresponds to an improvement of ~27.8% in data efficiency by active learning.
The last parameter in active learning to be investigated is the number of initial effective training data (m). Figure 5 shows the mean positioning accuracy of Q-AL-GPR versus different values of m after 1000 runs. The number of final effective training data in active learning is set to 300. The performance of GPR with the same number of effective training data is shown by the dotted line for comparison.
The parabolic shape of the curve for Q-AL-GPR clearly shows that there exists an optimized value of m that achieves the highest mean positioning accuracy. When m is too small, the initialized GPR model built with only a few randomly drawn data has very poor generality and cannot reliably evaluate the similarity of the data in D*, which in turn disturbs the subsequent progressive selection of the most valuable training data in active learning. On the other hand, although the generality of the initialized GPR model is enhanced when m becomes larger, the number of additional training data that can be used to update the initialized GPR model is reduced as m approaches its limit E (e.g., 300 for Figure 5), which in turn negatively affects the performance of active learning. Therefore, the optimized value of m is determined by the interplay of the above two factors. As shown in Figure 5, the best performance of Q-AL-GPR is obtained when m is set to 200, which corresponds to an improvement of 33% in mean positioning accuracy over GPR (i.e., 2.73 cm versus 3.46 cm).
To give a more comprehensive comparison of Q-AL-GPR and GPR, Figure 6 shows the empirical cumulative distribution function (CDF) of positioning error ε of the two methods with 300 effective training data after 1000 runs. The other parameters for Q-AL-GPR are m = 200 and k = 20. As shown in Figure 6, the curve of Q-AL-GPR is on the left side of that of GPR, which clearly shows the superiority of the active learning-based data-efficient training method over the conventional training method. For example, the positioning error of 97% of all tests is below 7.5 cm with Q-AL-GPR, while the value is 11.8 cm for GPR, which corresponds to an improvement of 36.4% in positioning accuracy.
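The percentile figures read from the CDF can be reproduced directly from the pooled per-test errors; in the short sketch below, errors_gpr and errors_qalgpr are placeholders for the error samples collected over all runs, not data provided here.

```python
import numpy as np

def error_percentile(errors, q=97.0):
    """q-th percentile of the pooled per-test positioning errors (same units as the errors, cm)."""
    return float(np.percentile(np.asarray(errors), q))

# e.g., error_percentile(errors_qalgpr) and error_percentile(errors_gpr) would give the
# 7.5 cm and 11.8 cm values quoted above when fed the corresponding error samples.
```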
Receiver tilting is an important practical factor that affects the performance of RSS-based VLP systems. In the context of VLP with GPR, we have investigated two scenarios concerning the impact of receiver tilting: (A) the receiver is tilted when collecting both the training data and the test data, and (B) the receiver is tilted when collecting the test data but not when collecting the training data. The receiver is tilted in the X-Z plane, where the angle between the normal vector of the photodiode and the Z-axis is varied from 0 to 15 degrees. The mean values of the average positioning error of 480 random locations after 1000 runs under different tilt angles are shown in Table 2. The parameters of the positioning algorithms remain the same as in Figure 2 (m = 160, k = 28, and E = 300). To provide more information about the statistics of the error, Figure 7 shows the empirical CDF of the average positioning error ε for the tests in Table 2.
As shown in Table 2, for both GPR and Q-AL-GPR, the impact of receiver tilt is small in scenario A but much more obvious in scenario B, because in the latter the trained model is mismatched with the test environment. Nevertheless, regardless of the angle of receiver tilt, Q-AL-GPR always outperforms GPR in both scenarios, which shows that the effectiveness of the proposed method still holds when the receiver is tilted. The CDF curves in Figure 7 further show that the advantage of Q-AL-GPR over GPR becomes more obvious as the tilt angle increases.
Finally, we compare the performance of Q-AL-GPR with a line-based training method that also uses active learning. Figure 8 shows the mean positioning error of the two training methods versus different numbers of final effective training data (E) after 1000 runs. For the line-based AL method, interpolation is used to sample data as in the previous study [16], and the sampling step along the line is set to 10 cm. The settings of Q-AL-GPR are the same as those in Figure 4. Our data-efficient method outperforms the line-based AL significantly over the whole range of the test (i.e., 240 ≤ E ≤ 500). Moreover, when one compares the red curve in Figure 4 (i.e., GPR without AL) and the pink curve in Figure 8, the data efficiency of the line-based AL method is even worse than that of the conventional training method with random sampling. The reason is that the line-based AL method is designed to improve the efficiency of collecting the training data (i.e., to find a physical path as short as possible along which all training data needed for a certain level of positioning accuracy can be collected) rather than the data efficiency (i.e., to find a training dataset as small as possible that achieves a certain level of positioning accuracy). From the perspective of data efficiency, the line-based AL method performs poorly because a large amount of data along the AL-predicted lines is collected and added to the training dataset regardless of its similarity to the existing training data.

4. Conclusions

A data-efficient training method (namely Q-AL-GPR) for GPR-VLP systems is proposed and experimentally demonstrated. The proposed method uses the active learning methodology, which gradually updates the effective training dataset with data of a low level of similarity. The experimental results of a three-dimensional VLP system verify that the performance of GPR-VLP can be improved by the proposed method when compared with the conventional training method based on a randomly drawn training dataset of the same size. Q-AL-GPR also outperforms AL-GPR with the same computational complexity thanks to the improved initialization process with a more uniform distribution of the initialized training data. The impact of the parameters of active learning on the performance of Q-AL-GPR VLP is investigated, which reveals that (1) although a performance gain is obtained by active learning regardless of the number of final effective training data, it is more pronounced for a small/moderate effective training dataset, (2) a moderate step size should be chosen for updating the effective training dataset to balance the conflicting requirements of positioning performance and computational complexity, and (3) there exists an optimized value for the number of initial effective training data due to the interplay of the reliability of the initialized GPR model and the flexibility in reshaping such a model via active learning. In terms of data efficiency in training, the required number of training data can be reduced by ~27.8% by Q-AL-GPR for a mean positioning accuracy of 3 cm when compared with GPR. The CDF analysis shows that the 97th-percentile positioning error can be reduced from 11.8 cm (GPR) to 7.5 cm (Q-AL-GPR) with 300 effective training data, which corresponds to a ~36.4% improvement in positioning accuracy. The results also show that the effectiveness of active learning still holds when the receiver is tilted. Our study further indicates that, under the same size of the training dataset, Q-AL-GPR is superior to the previous line-based AL training method, which adopts active learning to improve the data collection efficiency rather than the data efficiency in training.

Author Contributions

Conceptualization, X.H.; methodology, R.X. and X.H.; formal analysis, J.W.; data curation, R.H.; writing—original draft preparation, J.W. and R.X.; writing—review and editing, J.W. and X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bastiaens, S.; Alijani, M.; Joseph, W.; Plets, D. Visible light positioning as a next-generation indoor positioning technology: A tutorial. IEEE Commun. Surv. Tutor. 2024, 26, 2867–2913. [Google Scholar] [CrossRef]
  2. Mainetti, L.; Patrono, L.; Sergi, I. A survey on indoor positioning systems. In Proceedings of the 2014 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split, Croatia, 17 September 2014. [Google Scholar]
  3. Rahman, A.M.; Li, T.; Wang, Y. Recent advances in indoor localization via visible lights: A survey. Sensors 2020, 20, 1382. [Google Scholar] [CrossRef]
  4. Zhang, S.; Du, P.; Chen, C.; Zhong, W.D. 3D indoor visible light positioning system using RSS ratio with neural network. In Proceedings of the 2018 23rd Opto-Electronics and Communications Conference (OECC), Jeju, Republic of Korea, 2 July 2018. [Google Scholar]
  5. Wu, Y.C.; Hsu, K.L.; Liu, Y.; Hong, C.Y.; Chow, C.W.; Yeh, C.H.; Liao, X.L.; Lin, K.H.; Chen, Y.Y. Using linear interpolation to reduce the training samples for regression based visible light positioning system. IEEE Photon. J. 2020, 12, 1–5. [Google Scholar] [CrossRef]
  6. Xu, M.; Jia, W.; Jia, Z.; Zhu, Y.; Shen, L. A VLC-based 3-D indoor positioning system using fingerprinting and K-nearest neighbor. In Proceedings of the 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, 4 June 2017. [Google Scholar]
  7. Irshad, M.; Liu, W.; Wang, L.; Khalil, M.U. Cogent machine learning algorithm for indoor and underwater localization using visible light spectrum. Wireless Pers. Commun. 2021, 116, 993–1008. [Google Scholar] [CrossRef]
  8. Tran, H.Q.; Ha, C. Improved visible light-based indoor positioning system using machine learning classification and regression. Appl. Sci. 2019, 9, 1048. [Google Scholar] [CrossRef]
  9. Knudde, N.; Raes, W.; De Bruycker, J.; Dhaene, T.; Stevens, N. Data-efficient Gaussian process regression for accurate visible light positioning. IEEE Commun. Lett. 2020, 24, 1705–1709. [Google Scholar] [CrossRef]
  10. Raes, W.; Dhaene, T.; Stevens, N. On the usage of Gaussian processes for visible light positioning with real radiation patterns. In Proceedings of the 2021 17th International Symposium on Wireless Communication Systems (ISWCS), Berlin, Germany, 6 September 2021. [Google Scholar]
  11. Aparicio-Esteve, E.; Raes, W.; Stevens, N.; Ureña, J.; Hernández, Á. Experimental evaluation of a machine learning-based RSS localization method using Gaussian processes and a quadrant photodiode. J. Lightwave Technol. 2022, 40, 6388–6396. [Google Scholar] [CrossRef]
  12. Sun, W.; Xue, M.; Yu, H.; Tang, H.; Lin, A. Augmentation of fingerprints for indoor WiFi localization based on Gaussian process regression. IEEE Trans. Veh. Technol. 2018, 67, 10896–10905. [Google Scholar] [CrossRef]
  13. Wu, F.; Stevens, N.; Strycker, L.D.; Rottenberg, F. Comparative study of Gaussian processes, multi layer perceptrons, and deep kernel learning for indoor visible light positioning systems. In Proceedings of the 2023 13th International Conference on Indoor Positioning and Indoor Navigation (IPIN), Nuremberg, Germany, 25–28 September 2023. [Google Scholar]
  14. Tran, H.Q.; Ha, C. Machine learning in indoor visible light positioning systems: A review. Neurocomputing 2022, 491, 117–131. [Google Scholar] [CrossRef]
  15. Zeng, W.; Chen, H.; Chen, J.; Hong, X. Data-efficient artificial neural networks with Gaussian process regression for 3D visible light positioning. In Proceedings of the 2021 Optical Fiber Communications Conference and Exhibition (OFC), San Francisco, CA, USA, 6 June 2021. [Google Scholar]
  16. Garbuglia, F.; Raes, W.; De Bruycker, J.; Stevens, N.; Deschrijver, D.; Dhaene, T. Bayesian active learning for received signal strength-based visible light positioning. IEEE Photon. J. 2022, 14, 1–8. [Google Scholar] [CrossRef]
  17. Tran, H.Q.; Ha, C. High precision weighted optimum K-nearest neighbors algorithm for indoor visible light positioning applications. IEEE Access 2020, 8, 114597–114607. [Google Scholar] [CrossRef]
  18. Qiu, K.; Zhang, F.; Liu, M. Visible light communication-based indoor localization using Gaussian process. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September 2015. [Google Scholar]
  19. Azzimonti, D.; Rottondi, C.; Tornatore, M. Reducing probes for quality of transmission estimation in optical networks with active learning. J. Opt. Commun. Netw. 2020, 12, A38–A48. [Google Scholar] [CrossRef]
  20. Settles, B. Active Learning Literature Survey; University of Wisconsin: Madison, WI, USA, 2009. [Google Scholar]
Figure 1. (a) A picture of the three-dimensional VLP testbed. A total of 1600 locations evenly distributed on four planes of different heights are used for data collection in the test. The dotted circles and solid dots in the rightmost figure show the projections of the four LEDs and the sampling locations on one of the four planes, respectively. The inner and outer areas divided by the dashed line correspond to the "center" and "corner" cases, respectively. (b) Schematic diagrams of the 3D VLP system. Note that the training dataset only needs to be constructed once for all positioning tasks at unknown locations in the future.
Figure 2. Statistics of the average positioning error ε of 480 random test locations under different training methods after 1000 runs.
Figure 3. Mean positioning error and computing time for AL under different dataset update strategies (i.e., different k values).
Figure 4. Empirical (a) mean and (b) variance of the average positioning error ε of each run under different sizes (E) of the finalized effective training dataset D_e.
Figure 5. Mean positioning accuracy of Q-AL-GPR with different numbers of initial effective training data (m). The result of GPR with the same number of effective training data (E = 300) is shown by the dotted line for comparison.
Figure 6. Empirical cumulative distribution function (CDF) of positioning error ε for Q-AL-GPR and GPR under 300 effective training data after 1000 runs.
Figure 7. The empirical CDF of the average positioning error ε for GPR and Q-AL-GPR when the training data are collected (a) with or (b) without tilt. The test data are collected with a certain angle of receiver tilt in both scenarios.
Figure 8. Mean positioning error versus different numbers of final effective training data (E) for the two training methods based on AL (i.e., Q-AL-GPR and line-based AL).
Table 1. The empirical mean/variance of ε for AL-GPR, Q-AL-GPR, and cases of “center” and “corner”.
# of Initial Data (m)            120    140    160    180    200
Mean (cm)       AL-GPR           2.88   2.84   2.80   2.77   2.76
                Q-AL-GPR         2.85   2.79   2.76   2.74   2.73
                center           3.29   3.38   3.51   3.70   3.96
                corner           3.51   3.46   3.43   3.45   3.45
Variance (mm²)  AL-GPR           2.32   2.08   2.04   1.90   1.76
                Q-AL-GPR         2.17   1.87   1.83   1.75   1.67
                center           4.18   4.45   4.98   6.16   8.42
                corner           6.07   5.48   5.71   5.61   5.07
Table 2. The empirical mean/variance of ε for GPR and Q-AL-GPR under different angles of receiver tilt using 300 effective training data.
Scenario A *    Tilt angle (degree)   0      2.5    5      10     15
Mean (cm)       GPR                   3.46   3.44   3.42   3.42   3.44
                Q-AL-GPR              2.76   2.75   2.76   2.76   2.78
Variance (mm²)  GPR                   8.77   9.46   9.40   9.42   10.21
                Q-AL-GPR              1.83   1.67   1.73   1.93   1.72

Scenario B *    Tilt angle (degree)   0      2.5    5      10     15
Mean (cm)       GPR                   3.46   5.28   8.68   16.59  25.59
                Q-AL-GPR              2.76   4.65   8.06   15.70  24.11
Variance (mm²)  GPR                   8.77   10.12  14.32  35.60  115.21
                Q-AL-GPR              1.83   2.21   3.29   6.87   25.81

* Scenario A (B) refers to the case where the training data are collected with (without) receiver tilt. The receiver is tilted by a certain angle when collecting the test data in both scenarios.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
