Article
Peer-Review Record

Robust-Extended Kalman Filter and Long Short-Term Memory Combination to Enhance the Quality of Single Point Positioning

Appl. Sci. 2020, 10(12), 4335; https://doi.org/10.3390/app10124335
by Truong-Ngoc Tan 1,*, Ali Khenchaf 1, Fabrice Comblet 1, Pierre Franck 2, Jean-Marc Champeyroux 2 and Olivier Reichert 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 April 2020 / Revised: 8 June 2020 / Accepted: 16 June 2020 / Published: 24 June 2020

Round 1

Reviewer 1 Report

The paper presents two methods to enhance the quality of single point position estimation combining several constellations. Essentially, the method consists of a Kalman filter combined with an iteratively reweighted least squares algorithm, and finally a RAIM algorithm is applied. To further enhance the results, the authors apply a long short-term memory process. This yields quite impressive results.
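For context, the iteratively reweighted least squares building block can be sketched generically as follows (an illustration with a Huber-type weight and a MAD scale estimate, not the authors' implementation; G, W, y follow the notation used in the comments below):

```python
import numpy as np

def irls(G, y, weight_fn, n_iter=10, tol=1e-8):
    """Iteratively reweighted least squares for y = G x + e:
    alternate a residual-based reweighting with a weighted LS solve."""
    x = np.linalg.lstsq(G, y, rcond=None)[0]          # ordinary LS start
    for _ in range(n_iter):
        r = y - G @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        w = weight_fn(r / max(s, 1e-12))              # downweight large residuals
        W = np.diag(w)
        x_new = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: robust line fit with two gross outliers.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
G = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(50)
y[:2] += 10.0                                         # contaminate two points
huber = lambda u: np.minimum(1.0, 1.345 / np.maximum(np.abs(u), 1e-12))
x = irls(G, y, huber)                                 # close to (2, 3)
```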

 

While the content is interesting, the paper is very difficult to read for two reasons:
- at many places, the English is so poor that one has to resort to guessing what the authors actually meant. I would strongly suggest the authors have the paper carefully re-read by a native speaker;
- the notations are inconsistent (see examples below).

The net result is that at some places, one has to resort to guessing what the authors have actually implemented to obtain the results they present.

 

Regarding the methodology, while it is globally sound, there are a few shortcomings and I'd offer these two comments:
- the presented algorithm should be compared to one of those presented in [7], [8] or [10]. At the very least, the estimation method considered for the "non-robust estimation" should be mentioned.
- one would argue that since the learning set for the LSTM step is obtained from a non-moving (or very slowly moving) receiver, additional information is provided regarding the velocity. This is possibly a major help in achieving the excellent performance observed. At the very least, the authors should acknowledge this. Generalizing to a moving receiver may prove much more difficult, as obtaining training data will be difficult (how will the "true" position be obtained?). If data from a non-moving platform is used as training data, performance will likely strongly degrade in the presence of movement. This should also be acknowledged by the authors.

Specific comments:

Missing references:
- can you provide a reference to the "SIGMA model" (67) and justify where (68) comes from?
- in (70), why is the variance of the pseudorange rate equal to the variance of the pseudorange multiplied by 0.01?

Consistency of the notations:

Be sure that one symbol (one letter) ALWAYS refers to EXACTLY the same concept:
- the letter "e" appears three times in (47) but likely refers to two different concepts;
- (27) states y = GX + e while (43) states y = GX + \hat{e};
- y also appears in (47) but it is likely a totally different concept (upting) than in (27) and (43);
- in (32), W refers to a (scalar) weighting function, while in (41) and further, W is suddenly a matrix;
- e is first defined in (27) and it is re-defined in (48) in an inconsistent way;
- if Zk is as defined in (59) (a vector of length 6), then (61) and (62) cannot have M lines, and (63) does not hold;
- (72): Ak does not depend on tau. Please clarify;
- (73): why is the PSD of the speed noise used where one would expect the PSD of the position noise?
- (81): is Wf a vector? If yes, then [h, x] should be written as [h, x]^T and Wf as Wf^T;
- Section 5.1: while in the previous section the discrete time instants were denoted by k, they are now denoted by "t". May I suggest the authors remain consistent.

 

Clarity in the definition of the processing performed:
- in Section 3.3, a matrix Wj is estimated. In Section 3.4, a matrix W is used but not defined. Is W equal to the last Wj obtained in Section 3.3?
- in Section 3.3, it is stated that the outliers are 'eliminated'. My understanding is that this is only the case if the weighting function does so, hence only for the bi-Tukey and the bi-square weighting functions. Granted, the authors select the bi-Tukey in (36) and (40). A word on the reason might be useful.
- (35) refers to a "median", which means that there is a 'population' to compute the median of. Clearly, that 'population' is provided by the different pseudoranges obtained from the different satellites. This is, however, not explicit in the text, and the notation is unclear.
- Figure 6: it is not clear what the red line refers to. Similarly, why are there 4 LSTM blocks? I would expect one per element of X.
- Figure 7: there are two orange lines leaving the EKF. What does the non-labeled orange line contain? Make that clear from the drawing.
- Figure 8: there is something fundamentally wrong with the diagram. The short, straight red arrows refer to the (learned) weight Wf, and that should be indicated on the graph. The long, bent red arrow contains the training data used in the training step. Where is the input data (in actual usage, not while demonstrating the algorithm)? I would suggest making one diagram for the training step where the estimated position is compared to the ground truth, another diagram for the "validation", and a third diagram for the usage of the LSTM with data for which the ground truth is not available.
- Section 5.2 states that 20 hidden layers are used. Where are those hidden layers in Fig. 5?
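On the 'eliminated' outliers and the median of (35): in their standard textbook form (shown here as a generic sketch, not the paper's exact equations), the median-based scale and the bisquare (bi-Tukey) weight look as follows; the population for the median is the set of residuals of all satellites in view:

```python
import numpy as np

def mad_scale(residuals):
    """Robust scale estimate: median absolute deviation of the residuals
    (here, one residual per satellite in view); the factor 1.4826 makes
    it consistent with the standard deviation for Gaussian noise."""
    r = np.asarray(residuals, dtype=float)
    return 1.4826 * np.median(np.abs(r - np.median(r)))

def bisquare_weight(r, s, c=4.685):
    """Tukey bisquare weight: decreases smoothly and is EXACTLY zero for
    |r| > c*s, which is what actually eliminates an outlier."""
    u = np.asarray(r, dtype=float) / (c * s)
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)

r = np.array([0.0, 0.1, -0.1, 0.05, 10.0])   # last pseudorange is an outlier
s = mad_scale(r)
w = bisquare_weight(r, s)                    # w[-1] == 0: outlier rejected
```

By contrast, a Huber-type weight never reaches zero, so with it outliers are only downweighted, never eliminated; that is the distinction the comment above asks the authors to spell out.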

 

Other inconsistencies:
- while the end result is (almost) correct, (42) is probably incorrect: consider (41); (42) then becomes X - X^- = X + Ue + X^- (U well known from equation (42)), which is inconsistent. The error comes from having replaced X by (G^T W G)^{-1} G^T W y, which is incorrect. Actually, (G^T W G)^{-1} G^T W y is an estimate of X.
- the inconsistency between (27) and (43) was already mentioned.
- (43) is also wrong: GX is not equal to (G^T W G)^{-1} G^T W y, and (44) is likely also wrong.
- S^T S = S used in (50) does not hold, likely due to the mistake in (43) and (44).
- Si as defined in (67) cannot be a covariance matrix as written in the text below (67).
- (67): Si increases as CNO decreases; this is not correct.

 

Stylistic:
- avoid references to text that is yet to come, e.g. the text between (46) and (47) refers to (55) while that is already defined in (10).
- in the introduction of Section 4: can the authors rephrase "... with the RAIM algorithm to check the accurate zone of the user"?
- in Section 4.1: "the user state of three systems". What are those three systems?
- (64) to (66) are already present in (19)-(21); refer to them.

Author Response

We would like to thank the reviewer for his questions, comments and suggestions. Thank you for consulting our answers and comments given in the attached file.

Author Response File: Author Response.docx

Reviewer 2 Report

1. Contemporary and interesting issue.

2. Conceptually, rEKF is a combination of EKF + RAIM. But in RAIM there are many methods to reject outliers, including HPL. It may be better to explain why HPL is chosen and other methods are not considered.

3. To process real measurements, there are many pre-processing steps, such as cut-off angles, iono/tropo modeling, multipath reduction and so on. These should be mentioned in the paper to correctly understand the results of the experiments. The number of satellites, sky plot, and DOP information are also required.

4. In Fig. 2, why is the performance of GLONASS so bad? This needs an explanation. If you know the performance of GLONASS is bad, you can reduce the effect of GLONASS by properly choosing the covariance in eq. (70).

5. Weighted Least Squares (WLS) with proper weighting is the most popular GPS/Galileo/GLONASS combining method. The results of WLS and rEKF should be compared to give a fair conclusion.

6. LSTM is used to de-noise GPS/Galileo/GLONASS data. How about applying it to GLONASS only, since GPS and Galileo already give reliable results? Furthermore, how about applying it to GPS only? I expect it may give a performance improvement.

7. In practical terms, who needs this technique? Users who need high performance already utilize carrier-phase measurements and obtain mm-level (RTK) or dm-level (PPP) accuracy.
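As a point of reference for comment 5, a single-epoch weighted least squares solve can be sketched generically as follows (geometry, satellite count, and weights are illustrative assumptions, not taken from the paper; a noisy constellation is de-emphasized simply by assigning it larger sigmas):

```python
import numpy as np

def wls_epoch(H, dz, sigma):
    """One weighted least-squares update: H is the linearized geometry
    matrix (unit line-of-sight columns plus a receiver-clock column),
    dz the pseudorange residuals, sigma the per-satellite standard
    deviations (e.g. elevation- or C/N0-dependent)."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    N = H.T @ W @ H
    dx = np.linalg.solve(N, H.T @ W @ dz)
    cov = np.linalg.inv(N)            # formal covariance (DOP-type information)
    return dx, cov

# Hypothetical 8-satellite geometry (unit line-of-sight vectors).
los = np.array([
    [ 0.6,  0.8,  0.0], [ 0.8,  0.0,  0.6], [ 0.0,  0.6,  0.8],
    [-0.6,  0.8,  0.0], [ 0.8, -0.6,  0.0], [ 0.0, -0.8,  0.6],
    [ 0.577, 0.577, 0.577], [-0.577, 0.577, 0.577],
])
H = np.column_stack([los, np.ones(8)])           # last column: clock bias
dx_true = np.array([1.0, 2.0, 3.0, 0.5])
dz = H @ dx_true
dz[0] += 5.0                                     # gross error on one satellite
sigma = np.array([100.0, 1, 1, 1, 1, 1, 1, 1])   # its large sigma downweights it
dx, cov = wls_epoch(H, dz, sigma)                # dx stays close to dx_true
```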

 

 

Author Response

We would like to thank the reviewer for his questions, comments and suggestions. Thank you for consulting our answers and comments given in the attached file.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper presents two methods to enhance the quality of single point position estimation combining several constellations. Essentially, the method consists of a Kalman filter combined with an iteratively reweighted least squares algorithm, and finally a RAIM algorithm is applied. To further enhance the results, the authors apply a long short-term memory process. This yields quite impressive results.

The quality of the English has vastly improved, and as a consequence so has the clarity of the paper. There are, however, still some inconsistencies in the notations. The results should be compared to other "standard" methods, WLS or others.

Notations: I repeat here my previous comment: a SAME symbol ALWAYS has the SAME meaning. I provide here additional comments (in chronological order) regarding notation issues:
- at several places, a * appears where one would expect a product (e.g. (25), (82), (80), line 374). Please be consistent or define what * means.
- Zk is described in (12) but further refined in (59). It would be helpful if (59) were moved closer to (12).
- one has G_k (p. 6) and then G_i (p. 7). If these denote the same concept, why is there no X_i in (28) as there is an X_k in (23)? Can the authors clarify this in the text?
- W in (32) is actually not defined. It could be a scalar, while in their comment the authors state it is a vector of size (1, n) (a horizontal vector). This is not obvious, as all other vectors are column vectors, and should be made precise in the paper. The authors also need to comment on the size of that vector: (31) is clearly a vector equation with 12 elements (same size as X). (32) might be the same size (W(rsi) is then a scalar) or a scalar (W(rsi) is then a 12x1 vector), but I fail to see how it can be a 1xn vector as stated by the authors.
- line 205: from (30), phi is rather the derivative of mu(lambda) with respect to lambda, and not the derivative of mu((y-GX)/s) with respect to X. Notice that since lambda is a vector, phi(lambda) has the same size as lambda.
- what do the arrows in Table 1 denote? If arrows are used to denote vectors, use them everywhere.
- it may be useful to specify that W and W0 are diagonal.
- G_k: in (34) and (37), the index of G denotes the iteration number, while in (29)-(32) it denotes the index of the "observation". I guess y_i does not have the same size as y, but that is not specified anywhere.
- n: the definition of n seems to be different in (30) and (36). From the size of W in (36), and from (37), n is the size of y; but then why sum on n in (29)-(32)?
- it might help to relate n ((28) and (36)) to n_s (eq. (66)) and Ngps, Ngap, Nglo.
- line 265: to avoid confusion with other e and other n, I'd suggest using (E, N, U) as in (47).
- line 270: I'd suggest \tilde{G}_0 instead of speaking of a "new G0". Same for \delta X_0 and M0.
- line 277: the authors need to define \delta X_EN.
- line 278: S^T S = S does not hold with S defined in (44).
- regardless of the comment above, can the authors clarify where beta^2 in (50) comes from?
- (70) states Sigma as a function of \tau, but that dependence does not exist in (71).
- line 481: the authors state the data is normalized. How are the variance and mean obtained?
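On the last point (line 481), the usual convention, stated here as a general remark rather than a claim about the paper, is to estimate the mean and variance on the training set only and to reuse those statistics on validation and test data:

```python
import numpy as np

def fit_normalizer(X_train):
    """Per-feature z-score statistics, estimated on the TRAINING set only."""
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0)
    sd[sd == 0.0] = 1.0                          # guard constant features
    return mu, sd

def apply_normalizer(X, mu, sd):
    """Reuse the training statistics on any later data to avoid leakage."""
    return (X - mu) / sd

rng = np.random.default_rng(1)
X_train = rng.normal(3.0, 2.0, size=(100, 12))   # 12 features, as in Fig. 6
mu, sd = fit_normalizer(X_train)
Xn = apply_normalizer(X_train, mu, sd)           # zero mean, unit variance
```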


I'd offer the following comments and suggestions:
- line 19: there is a "which" that is odd, or a piece of the sentence is missing.
- ABMF
- p. 2, line 46: Weight -> Weighted.
- p. 6, line 183 refers to \hat{X}_k in (23). There is no \hat{X}_k in (23).
- around lines 183-185 and 213 there are several forward references. I would advise rewording the text to avoid forward references.
- I'd suggest moving the comment of line 226 about the use of the bi-Tukey window to the end of Section 2.3.
- line 251: the definition of HPL is incomplete: add "... and is guaranteed to contain the true position".
- (66): I would suggest a sum on k to avoid a clash on the index i.
- Fig. 6: I understand the figure comes from Matlab; however, I'd suggest being specific about the sizes of the different vectors: H = dimension of hidden state, N = number of features = 12, S = number of timesteps.
- Fig. 6: the authors should specify how they go from h to y (the prediction).
- the paragraph in lines 549-554 provides some perspective. In the conclusion, the authors should state that the fact that the training data and the test data share a (near) zero velocity (likely dramatically) helps improve the performance of the LSTM stage, as stated in my initial comments.
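For reference on the Fig. 6 comments, the standard LSTM cell (with H the hidden-state dimension, N the number of features, and weights such as Wf acting on the concatenation [h, x]) computes one time step as follows; the dimensions and random weights are illustrative only, not the authors' trained network:

```python
import numpy as np

def lstm_step(x, h, c, Wf, Wi, Wc, Wo, bf, bi, bc, bo):
    """One LSTM time step. Each W* has shape (H, H + N) and multiplies the
    concatenated vector [h; x] treated as a column, which is the transpose
    convention questioned for eq. (81)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    hx = np.concatenate([h, x])        # [h; x], shape (H + N,)
    f = sig(Wf @ hx + bf)              # forget gate
    i = sig(Wi @ hx + bi)              # input gate
    g = np.tanh(Wc @ hx + bc)          # candidate cell state
    o = sig(Wo @ hx + bo)              # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

hidden, feats = 4, 12                  # H and N as discussed for Fig. 6
rng = np.random.default_rng(0)
Wf, Wi, Wc, Wo = (0.1 * rng.standard_normal((hidden, hidden + feats))
                  for _ in range(4))
bf, bi, bc, bo = (np.zeros(hidden) for _ in range(4))
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(rng.standard_normal(feats), h, c,
                 Wf, Wi, Wc, Wo, bf, bi, bc, bo)
```

Since the output gate lies in (0, 1) and tanh in (-1, 1), every component of h stays strictly inside (-1, 1); a further (e.g. fully connected) layer is still needed to map h to the prediction y, which is the step the Fig. 6 comment asks the authors to specify.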

Author Response

Thank you very much for the time you have spent evaluating our article. Thank you for your relevant recommendations, which made the quality of the article significantly better. You will find attached all of our responses (in red) as well as a new version of the paper.

Author Response File: Author Response.pdf

Reviewer 2 Report

- The comments are well reflected.

Author Response

Thank you very much for the time you have spent evaluating our article. Thank you for your relevant recommendations, which made the quality of the article significantly better. You will find attached all of our responses (in red) as well as a new version of the paper.

Author Response File: Author Response.pdf
