An Improved Approach for RSSI-Based-Only, Calibration-Free Real-Time Indoor Localization on IEEE 802.11 and 802.15.4 Wireless Networks

Assuming that a reliable and responsive spatial contextualization service is a must-have in IEEE 802.11 and 802.15.4 wireless networks, a suitable approach consists of implementing localization capabilities as an additional application layer on top of the communication protocol stack. Considering application scenarios where satellite-based positioning is denied, such as indoor environments, and excluding packet time-of-arrival measurements due to their insufficient time resolution, the received signal strength indicator (RSSI) measurements, obtained according to the IEEE 802.11 and 802.15.4 data access technologies, are the only data source suitable for indoor geo-referencing using COTS devices. In the existing literature, many RSSI-based localization systems are introduced and experimentally validated; nevertheless, they require periodic calibrations and significant information fusion from different sensors, which dramatically decrease the overall system reliability and its effective availability. This motivates the work presented in this paper, which introduces an approach for RSSI-based, calibration-free and real-time indoor localization. While switched-beam array-based hardware (compliant with IEEE 802.15.4 router functionality) has already been presented by the author, the focus of this paper is the creation of an algorithmic layer, for use with the pre-existing hardware, capable of enabling full localization and data contextualization over a standard 802.15.4 wireless sensor network using only RSSI information, without the need for a lengthy offline calibration phase. System validation reports the localization results in a typical indoor site, where the system has shown high accuracy, leading to a sub-metrical overall mean error and an almost 100% site coverage within a 1 m localization error.


Introduction
Indoor localization is one of the most challenging goals for mobile device application development, as evidenced by the growing interest resulting in the birth of different consortia (e.g., i-Locate [1]) and coarse wireless devices (e.g., Apple iBeacon [2], NexTOme [3]) with simple software development kits (SDKs). In [4], some achievements resulting from the worldwide Microsoft Indoor Localization Competition are outlined. Note that all the proposed systems were developed with certain constraints, including cost-effectiveness, configuration speed and transparency.
Particular attention has been given to infrastructure-free systems [4], i.e., systems usable by the most widespread commercial off-the-shelf (COTS) devices (e.g., smartphones, tablets, etc.), which only implement standard communication protocols and achieve localization services starting from coarse, protocol-estimated parameters (e.g., RSSI, LQI). In fact, localization systems compatible with typical user devices are considered the only answer for the development of friendly, cost-effective and simple localization [5]. Therefore, improving localization accuracy is achievable by refining the localization algorithms.
In terms of IEEE 802.11 and 802.15.4 compliant systems, the physical parameters directly available from packet transmission are the time of arrival (ToA) and the received signal strength indicator (RSSI). Time difference of arrival (TDoA) techniques can produce interesting results [6], and they may be the solution; however, as shown in [6,7], the lack of sufficient timing resolution at the protocol data level forces the COTS transceiver architecture to be opened up in order to achieve more accurate time estimations (likely using higher frequency ADCs) at a lower protocol stack layer. If such modifications were considered, the hardware adaptation would require higher frequency ADCs and DACs, entailing all the related mixed-signal hardware changes and increasing costs. Given the constraint of using COTS transceivers, the system design can take advantage of the RSSI parameter estimation available in all IEEE 802.11 and 802.15.4 implementations. The direct RSSI evaluation appears to be an unreliable measure [8] for achieving a sufficiently accurate localization in indoor environments. For this reason, some RSSI-based solutions based on distributed networks of routers have been proposed in the past, applying fingerprint-like methods [4,9,10] or trilateration by range estimation [11,12]: in all of these solutions, coarse errors arise due to the unpredictability of RSSI estimations in complex environments. Localization accuracy is typically improved by obtaining additional information from the user devices' inertial sensors and by applying Kalman filtering [13][14][15], but information fusion requires high computational power and accuracy, both directly influencing the overall localization performance.
The best localization accuracy is achieved through fingerprinting methods, but a complex off-line calibration phase must be introduced to make the system operational: such a calibration strictly depends on the particular environment characteristics (e.g., router distribution, furnishing distribution, etc.), making the overall localization solution very complex to install and manage.
This work proposes an IEEE 802.11/802.15.4 network compliant indoor localization system capable of achieving sub-metrical accuracy without any kind of off-line calibration phase. The proposed approach is based on a network of anchor nodes (i.e., typical routers) built on a particular SBA (Switched Beam Antenna) structure [16,17], which is capable of SDMA (Space Division Multiple Access).
In Section 2, we show that such an anchor node is able to provide a more predictable radiation pattern distribution across the area and that, through SDMA exploiting several co-operative anchor nodes, the resulting constellation is able to subdivide the overall area into small cells, thus enabling a coarse metrical space subdivision. In this refined space domain, the proposed localization algorithm effectively estimates the target position. By virtue of the pattern predictability and the space cell subdivision, through an RSSI-based fingerprint-like localization algorithm built on a purely ideal "reference map", the proposed system achieves sub-metrical localization accuracy for both static and mobile target nodes. Because the proposed localization algorithm is based only on RSSI estimations, no more than a typical 802.11/802.15.4 transceiver is required, while the RSSI values are obtained in a fully transparent way during standard packet network communication.

Proposed Hardware Infrastructure
In [17,18], COTS-only hardware for transparent indoor localization was proposed for use in a distributed network of IEEE 802.15.4 anchor nodes hanging from the ceiling, which offer Ethernet-to-ZigBee connectivity. Every anchor node is capable of transferring packets between any LAN host and each ZigBee node (Figure 1), while the LAN host collects all the localization-related data.
A reference table for complete anchor networks is built in the LAN host, containing the position of each anchor node. As shown in [18,19], an ideal two-dimensional map of the expected RSSI is collected for each antenna and anchor (considering the user node as the transmitter), directly projecting each oriented polar ideal antenna pattern to the plane, with the height being the mean height of the TAG nodes (i.e., a typical height of 1.10 m from the floor is considered).
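The projection step described above can be sketched as follows. The cos^n gain model, the path-loss constant and all parameter values below are illustrative assumptions, not the actual patch patterns or link budget of the paper; only the overall structure (project each oriented antenna pattern onto the floor plane at the tag height) follows the text.

```python
import numpy as np

def project_pattern(anchor_xy, anchor_z, boresight_deg, grid_x, grid_y,
                    tag_z=1.10, tx_dbm=0.0, n_exp=2.0):
    """Project an idealized directive antenna gain onto the floor plane,
    producing the expected RSSI (dBm) at tag height for each grid point.
    The cos^n azimuth pattern and the path-loss offset are illustrative."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    dx, dy, dz = gx - anchor_xy[0], gy - anchor_xy[1], tag_z - anchor_z
    dist = np.sqrt(dx**2 + dy**2 + dz**2)
    az = np.arctan2(dy, dx)                                # azimuth to each grid point
    off = az - np.radians(boresight_deg)                   # angle off boresight
    gain_db = 10 * n_exp * np.log10(np.clip(np.cos(off), 1e-3, None))
    path_db = 20 * np.log10(dist) + 40.0                   # generic path-loss offset
    return tx_dbm + gain_db - path_db                      # expected RSSI map (dBm)
```

Repeating this projection for every antenna of every anchor yields the ideal two-dimensional reference maps collected by the LAN host.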
Each anchor node is built on the concept of a switched-beam array [16,20,21]. In Figure 2, a brief hardware description is shown, including both the antenna array structure and the anchor node block diagram; functionally, the concept is to place a µC-controlled RF switch on a standard ZigBee transceiver (e.g., in [19], a COTS Texas Instruments CC2430 transceiver was used), which connects the RF channel to each antenna of the array. The antennas are implemented as printed patch antennas, radiating a characteristic and regular far-field pattern (Figure 3, [22][23][24]). The antennas operate in circular polarization, permitting a reliable link regardless of the relative orientation of the tag [19,25,26]. Furthermore, circular polarization is a strong aid in contrasting multipath impairment [27][28][29]. A set of patterns is projected over different spatial areas [19], and for each tag data packet received by an anchor, a complete vector of per-antenna RSSI values, called the steering vector, is obtained.

Proposed Localization Method
A steering vector defined through RSSI values contains information only about the received packet signal power, thus phase information is totally unavailable. Despite this, an accurate design of the array structure (Figure 2, [16,30]) can provide an excellent pattern differentiation throughout each single anchor domain, thus lowering the Cramer-Rao lower bound for localization accuracy [31] and enabling the single anchor node to perform direction-of-arrival (DoA) estimation. In [32], a specific implementation of the MuSiC DoA algorithm [33] was proposed for phase-less RSSI steering vectors. Such an algorithm was successfully implemented in the 802.15.4 COTS-based SBA designed in [16] and shown in Figure 3, proving that a completely phase-less, RSSI-only architecture can perform DoA localization [17].
The DoA estimations of different anchors could be used to perform a three-dimensional localization over a site, but RSSI DoA estimation is far from being accurate enough to achieve a limited dilution of precision for localization in large areas. To improve the overall accuracy, some enhanced triangulation algorithms exist [34], but the actual problem is that each DoA estimation is affected by an angle estimation error, while such estimations are applied as arguments of strictly non-linear trigonometric functions to perform the final (x, y) estimation [35].
When dealing with a network of N distributed anchors, performing the final localization using N different and independent DoA estimations, without considering any relationship between them, does not exploit the entire available information. A stronger control over estimation error propagation can be achieved by applying an estimation algorithm over the entire set of RSSI data given by the entire set of installed anchors: the effective increase of the information available to the final localization algorithm allows the anchors' observations to complement each other, thus reducing the overall estimation error.
The error propagation scheme is conceptually depicted in Figures 4 and 5. Note that while the typical triangulation approach takes the different DoA estimations from each anchor node without performing any reciprocity check, when dealing with the overall RSSI information (thus processing a "global steering vector" given by the entire network of anchors), a single conceptual block can access the entire information batch, allowing smarter localization algorithms to be implemented.
Note that the error propagation model of Figure 4 is still valid for trilateration approaches [36], as the Friis formula inversion is required to estimate the distance between the anchor and each node; in this case, the RSSI measurement errors can have an even bigger impact. Figure 4 reveals how the main source of weakness is a pair of non-linear transformations applied to the collected steering vector, which is affected by RSSI measurement errors modelled as a Gaussian noise distribution [32]. Note that the non-linearity of the localization error depends on the non-linear functions g(∆S_i) applied as arguments of the trigonometric function t(ϕ, θ).
To achieve a higher control over error propagation, a one-step localization algorithm is highly preferable. In Figure 5, note that the localization error function becomes directly dependent on the RSSI measurement errors, thus the final localization error can be better controlled by refining the direct localization estimator function. In one-step localization, error propagation does not depend on trigonometric functions, and the overall information comes from a distributed set of directive antennas (grouped by anchor nodes), building a more descriptive and fully exploitable data set.
The proposed approach analyses the RSSI values collected from all the anchor-node antennas. With such a highly spread antenna distribution, extensive information about the environment is available, reducing the need for information fusion with additional sensor data; the need for an off-line calibration phase is removed thanks to the increased predictability over the observation area. The additional information given by a "global steering vector" can be exploited through different processing blocks (accurately described in Section 3) lying within the "One-Step" localization algorithm (Figure 5), which are able to feed each other to refine the final estimation results.
In summary, while the hardware architecture shown in [17,19] is reused, the algorithmic layer is deeply different from a simple implementation of the RSSI DoA algorithm on every anchor, with the final spatial localization obtained through a simple triangulation algorithm as in [35]. In [19], each M-dimensional steering vector collected for each of the N = 4 anchor nodes (each composed of M = 7 antenna elements) was used to compose a single (M × N)-dimensional global steering vector, which becomes the input of the direct localization algorithm running on the server (Figure 1).

One-Step Localization Algorithm
In Section 2.1, the localization method was introduced. As depicted in Figure 5, the core of the actual localization approach lies within the "One-Step algorithm" block: this paragraph describes the block implementation in depth, evaluating the algorithm's improvements with respect to RSSI measurement noise.
In the "one-step" approach, the effective steering vector contains every RSSI value collected from each antenna of every anchor node, thus becoming a global steering vector. A global steering vector corresponds to the concatenation of all the different steering vectors collected from each anchor node, placed in a known order (e.g., lexicographic order based on the name associated with each anchor). For each packet transfer, the host obtains a global steering vector, as shown in Equation (1).
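The concatenation of Equation (1) can be sketched as below; the anchor names and RSSI readings are hypothetical placeholders, and only the fixed lexicographic ordering follows the text.

```python
import numpy as np

# Hypothetical per-anchor RSSI readings (dBm), keyed by anchor name.
readings = {
    "anchor_b": [-55.0, -61.2, -58.9, -70.1, -66.3, -59.7, -62.0],
    "anchor_a": [-48.3, -52.1, -60.4, -57.8, -63.2, -51.0, -49.9],
}

def global_steering_vector(per_anchor_rssi):
    """Concatenate per-anchor steering vectors in lexicographic anchor order,
    producing the global steering vector of Equation (1)."""
    return np.concatenate([np.asarray(per_anchor_rssi[name], dtype=float)
                           for name in sorted(per_anchor_rssi)])

S = global_steering_vector(readings)   # length M * N = 7 * 2 here
```

Sorting by anchor name guarantees that every packet yields elements in the same known order, so the host can compare the vector against the reference map element-wise.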
Next to the steering vector, a reference map is defined as the whole set of expected global steering vectors for each position in the localization space (Equation (2)).
A generic maximum-likelihood fingerprinting algorithm is based on finding the solution to the problem in Equation (3) [19,31,37]. The C(x, y) function is called the pseudospectrum function, or the estimator, and is defined as an R² → R function in the localization domain.
Maximum likelihood (ML) algorithms differ depending on the estimator. Among the ML estimators, a subclass with reduced computational cost can be defined; its generic form is shown in Equation (4).
This paper will cover only reduced computational cost estimators, in order to meet the imposed real-time constraint. In Section 5, the localization results will be compared between the proposed estimators and the state-of-the-art, computationally complex MUSIC estimator [32,33]. The least squares estimator is the simplest ML estimator and is the one referred to in [19].
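As an illustration, the least squares estimator of Equation (5) and the argmin search of Equation (3) can be sketched as follows, assuming the reference map is stored as a grid of expected global steering vectors; the array shapes and coordinate vectors are illustrative conventions, not the paper's data layout.

```python
import numpy as np

def lse_pseudospectrum(S, M_map):
    """Equation (5): C(x, y) = ||S - M(x, y)||^2, evaluated over the whole grid.
    M_map has shape (nx, ny, M*N): one expected global steering vector per point."""
    return np.sum((M_map - S) ** 2, axis=-1)

def localize(S, M_map, xs, ys):
    """Equation (3): the estimate is the grid point minimizing the pseudospectrum."""
    C = lse_pseudospectrum(S, M_map)
    i, j = np.unravel_index(np.argmin(C), C.shape)
    return xs[i], ys[j]
```

The whole grid is scored with a single vectorized pass, which is what keeps this estimator subclass compatible with the real-time constraint.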
Following a classic fingerprinting approach [10,38], in [19] the concept of predicted fingerprinting was introduced to achieve an acceptable localization accuracy in the site shown in Figure 1. Whereas in classic fingerprinting the reference map is compounded from steering vectors collected at each position, in predicted fingerprinting the software projects every antenna pattern of each anchor onto the observation floor space. This step was the novelty of [19]: the extensive offline calibration phase prior to the system's effective utilization is no longer needed, being replaced by an a posteriori tuning/optimization of the antenna projection map parameters [38] with respect to a small set of training observations.
Note that, as shown in [19], antenna projections are built considering a reference height of 1.10 m, the typical height for mobile phone use when the user is standing.

Area Preselection
The reliability of the estimator values is directly related to the reliability of the reference map M(x, y), and the function trends can be dramatically altered by RSSI measurement noise; argument biases can produce spurious relative minima, which can become new absolute minima that alter the final localization estimation. One way of keeping this source of estimation bias under control is to exploit the capability of each anchor to make a coarse spatial subdivision of its area of competence [19,39] (Figure 6).
By subdividing the localization space of multiple anchors into smaller sub-zones (or cells), each cell is uniquely linked to a list of maximal antennas (the strongest antenna for each anchor) plus the absolute maximal antenna (Figure 7). Therefore, every steering vector directly links to a subselection of the localization domain (i.e., a sub-cell), reducing both the computational cost and the maximal localization error.
Defining a preselection steering vector, with the related reference map of preselection steering vectors, as in Equation (6), the preselection algorithm is expressed by Equation (7):

P = [p_i], with p_i = maximal antenna id on the i-th anchor; P_map(x, y) = P_expected ∀ (x, y) in the localization domain (6)

The preselection algorithm removes from the reference map each point that does not satisfy the subselection condition expressed by the masking function W(x, y): a point of the subdomain reference map is retained according to the cell-identifier distance function d(P, P_map(x, y)) defined in Equation (8).
The preselection algorithm relies on removing unrelated sub-domains (i.e., unrelated to the selected cells), namely those whose cell identifiers are too different, or "far", from the one extracted from the measured data.
Rough area preselection can lead to localization mistakes due to preselection uncertainty for areas lying near cell boundaries. The feature is associated with an index called the maximum antenna distance (d_max), which permits choosing the strictness of the cell selection. For example, looking at Figure 7b, if the node position lies near the red-coloured border, a cell selection mistake could lead to increased localization errors. An effective workaround is to incorporate the cells belonging to adjacent antennas in the domain selection. Imposing a higher d_max reduces the domain selection selectivity, so a complete localization routine could employ an adaptive algorithm that increments d_max only when needed.
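The cell preselection of Equations (6)-(8) can be sketched as follows. The "cell-identifier distance" is simplified here to the maximum difference between measured and expected strongest-antenna indices, which is an assumption about the exact form of Equation (8); the grid shapes are illustrative.

```python
import numpy as np

def preselect_mask(P, P_map, d_max=0):
    """Masking function W(x, y) in the spirit of Equation (7).

    P:      measured preselection vector (strongest-antenna index per anchor)
    P_map:  expected preselection vectors over the grid, shape (nx, ny, N)
    A grid point survives when its cell-identifier distance (here, the max
    per-anchor index difference - a simplified Equation (8)) is <= d_max."""
    d = np.max(np.abs(P_map - P), axis=-1)
    return d <= d_max
```

Raising d_max widens the mask to neighbouring cells, trading selectivity for robustness near cell borders, exactly the adaptive knob discussed above.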

Adaptive Masking
If the localization problem is stated as in Equation (3), the main condition for the estimator function is to be convex inside the current localization domain. Hence, a possibly correct estimation is identified only if an absolute minimum is present in the observed domain.
The absolute minimum of an R² → R function is defined as the point (x̂, ŷ) corresponding to the minimum value, in which the gradient and the Hessian matrix are defined in the neighborhood and the conditions in Equation (9) are verified.
The approximate solution of the statement in Equation (3) is computed by looking for the indices matching the minimum value of the numerically computed pseudospectrum function; therefore, a direct check of the conditions in Equation (9) is unnecessary. Nevertheless, if the selected subdomain ends before the pseudospectrum reaches its absolute minimum, the preselection feature can lead to serious estimation errors (Figure 8).
Making the subdomain selection algorithm adaptive and dependent on each observation could be a valid workaround. Since no a priori knowledge about the minimum position exists, a one-step definition of the subdomain mask is unreliable. The idea is to apply an iterative algorithm that, at each execution, verifies whether the minimum lies in an area that is safe, i.e., far enough from any subdomain edge. If not, the subdomain boundary is extended to enlarge the selection area, increasing the d_max parameter shown in Equation (7) and reapplying the mask. An example pseudocode implementation is shown below. Figure 8 shows the progressive building of the pseudospectrum using the adaptive masking algorithm.
In Algorithm 1, the argmin operator is numerically computed, so the explicit computation of Equation (9) is unnecessary; the "CheckReliabilityMin" function must only check that the computed minimum does not lie on the masked domain edges. If this condition is true, Equation (9) is implicitly verified thanks to the pseudospectrum continuity over the domain (Equation (5)) [19].

Algorithm 1: Adaptive masking localization
1: function AdaptiveLocalization(S, M, P_map, P)
2:   d_max ← 0; reliableMin ← false
3:   while not (reliableMin) do
4:     M_sel ← GetCellDomain(M, P_map, P, d_max)    // apply the map masking M · W (Equation (7))
5:     C ← Estimator(S, M_sel)                      // compute the pseudospectrum
6:     (î, ĵ) ← argmin(C)
7:     reliableMin ← CheckReliabilityMin(C, î, ĵ)   // verify Equation (9)
8:     if not (reliableMin) then
9:       d_max ← d_max + 1                          // increase the max. acceptable cell ID distance (Equation (8))
10:    end if
11:  end while
12:  (x̂, ŷ) ← (M_sel.xVector[î], M_sel.yVector[ĵ]) // extract (x̂, ŷ) from the indexed domain
13:  return (x̂, ŷ)
14: end function

To minimize the computational cost, it is best to ignore all the pseudospectrum points that lie outside the masked domain. This is done by marking all the points where the masking function is null (W[i, j] = 0, Equation (7)) as NaNs (i.e., MATLAB's Not-a-Number marker). Marking a matrix value as NaN makes it a non-existent value, so any further processing propagates the non-existence condition.
An implementation of the "CheckReliabilityMin" function is shown in Algorithm 2. The simplest way to check a condition in a point neighborhood is a nested cycle where, for each column of the matrix, all the rows are checked, so the neighbor zone is square-shaped.

Algorithm 2: CheckReliabilityMin
1: function CheckReliabilityMin(C, î, ĵ, radius)
2:   for i = î − radius to î + radius do
3:     for j = ĵ − radius to ĵ + radius do
4:       if C[i, j] is NaN then       // the neighborhood touches the masked domain
5:         return false
6:       end if
7:     end for
8:   end for
9:   return true
10: end function

An effective neighborhood domain would be rectangular, since the reference map is built with different sizes for each spatial dimension. Nevertheless, by Equation (10), it is clear that an R² neighborhood with an area greater than the standard one always satisfies the check (Equation (11)).
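Algorithms 1 and 2 can be sketched together in runnable form. The LSE pseudospectrum, the simplified mask (maximum strongest-antenna index difference), and the inline neighbourhood check are reconstructions under stated assumptions, not the paper's exact implementation.

```python
import numpy as np

def adaptive_localize(S, M_map, P, P_map, xs, ys, radius=1, d_max=0):
    """Sketch of Algorithms 1-2: grow d_max until the pseudospectrum minimum
    lies safely inside the masked subdomain (no NaN in its neighborhood)."""
    while True:
        W = np.max(np.abs(P_map - P), axis=-1) <= d_max   # simplified Equation (7) mask
        if not W.any():                                   # empty subdomain: enlarge it
            d_max += 1
            continue
        C = np.sum((M_map - S) ** 2, axis=-1)             # LSE pseudospectrum (Equation (5))
        C = np.where(W, C, np.nan)                        # NaN-mark points outside the mask
        i, j = np.unravel_index(np.nanargmin(C), C.shape)
        # "CheckReliabilityMin": the square neighborhood of the minimum must be
        # entirely inside the selected (non-NaN) domain, cf. Algorithm 2.
        i0, i1 = max(i - radius, 0), min(i + radius + 1, C.shape[0])
        j0, j1 = max(j - radius, 0), min(j + radius + 1, C.shape[1])
        if not np.isnan(C[i0:i1, j0:j1]).any():
            return xs[i], ys[j]
        d_max += 1                                        # relax the cell-ID distance
```

NaN-marking the masked points mirrors the MATLAB convention described above: `np.nanargmin` simply skips the non-existent values.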

Antenna Weighting
RSSI measurements are not direct physical estimations of the signal state. An RSSI value is obtained after signal decoding through different correlation processes [40,41].
The decoding process reduces the data transfer error rate by introducing a high process gain. RSSI measurements can therefore show high biases at lower signal powers, for which a smart decoding process can yield a better RSSI than the effective signal power would suggest. In such cases, at particularly unlucky (x, y) points, the reference map projections can produce RSSI values much lower than the obtained ones.
The bias caused by the demodulation process gain is expected to grow in the presence of low signal powers, so an external correction gain can be applied in the estimator computation to try to remove this effect and reduce the weight of a weak antenna inside the overall estimator computation. For the reduced computational cost algorithms (Equation (4)), the function F(x) can be modified by associating a weight w_i with the i-th steering vector element; since a direct RSSI antenna value can be used to estimate the RSSI estimation reliability, antenna weighting is applied by placing the weights as shown in Equation (12).
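Antenna weighting can be sketched as below. Since the exact weight law of Equation (12) is not reproduced here, the linear-power weight w_i = 10^(RSSI_i/10) is an illustrative assumption that de-emphasises weak antennas, in line with the correction described above.

```python
import numpy as np

def weighted_lse(S, M_map):
    """Weighted reduced-cost estimator in the form of Equation (4).
    The weight law below (linear power, normalised) is an illustrative
    assumption standing in for Equation (12)."""
    S = np.asarray(S, dtype=float)
    w = 10.0 ** (S / 10.0)          # weak antennas (low RSSI) get small weights
    w = w / w.sum()                 # normalise (illustrative choice)
    return np.sum(w * (M_map - S) ** 2, axis=-1)
```

With this choice, an antenna whose reading is biased by the demodulator process gain contributes little to the pseudospectrum, limiting its ability to shift the minimum.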

Minimum Variance (minVAR) Estimator
In the localization problem stated in Equation (3), the effective kernel of the localization algorithm is the estimator, or pseudospectrum, function. An ideal estimator should give a single minimum point (i.e., an absolute minimum) over the whole localization domain, and it must coincide with the right node position. If ideal hardware is used and localization is performed in a perfect environment, in which radio propagation behaves exactly as modelled in the reference map (Equation (2)), a basic estimator function can be used (Equation (5) [19]).
In [18,19], the maximum accuracy limit was stated by performing an analytical Cramer-Rao bound (CRB) computation [37], as presented in [16]; however, CRB analysis places only an ideal accuracy limit given by the geometrical distribution of antenna gains over the space. CRB analysis evaluates the minimum achievable error over the localization space, supposing that the only source of non-ideality is the RSSI AWGN added to the obtained steering vector (modelled by the equivalent σ noise parameter as in [19]).
Rather than the measurement AWGN, the main source of estimator bias in real applications is the effective inconsistency between the reference map and the effective RSSI distribution over the space. As an example, Figure 9 shows the measured RSSI distributions of two different antennas in the experiment site of Figure 1; next to them, the expected ideal distributions are plotted for two different expected elevations (z_ref0 = 1.1 m and z_ref1 = 0 m). In real environments, producing a reliable reference map as proposed in [19,42] is unthinkable without a scenery-related calibration session, as occurs in the fingerprinting approach. To deal with this, the effects of the reference map misalignment on the localization estimation (causing estimator bias) must be reduced, and a new estimator function (respecting the definition of Equation (4)) is proposed.
In Equation (13), the minVAR estimator is shown; it evaluates the variance associated with the difference vector built from the measured steering vector and the reference map vector (while the standard least squares estimator calculates its norm, Equation (5)). Assuming a perfect reference map, both the LSE and minVAR functions act as unbiased estimators; thus, the CRB results of the localization network, which give the expected accuracy under typical AWG-noised measurements, are the same as in [19].
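The two estimator kernels can be sketched side by side; the RSSI values in the test below are illustrative.

```python
import numpy as np

def lse(S, M_xy):
    """Equation (5): squared norm of the difference vector S - M(x, y)."""
    return np.sum((S - M_xy) ** 2)

def minvar(S, M_xy):
    """Equation (13): variance of the difference vector. A constant offset
    common to all elements (e.g., a uniform map bias) cancels out, since
    the variance is computed around the difference vector's own mean."""
    return np.var(S - M_xy)
```

Compared with the LSE, minVAR scores a candidate point by how uneven the element-wise mismatch is, rather than by its total magnitude.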

Fading and Multipath Immunity
A straight evaluation of the RSSI parameter, intended as an estimation of the physical RF power received by the anchor node, leads to huge localization estimation errors due to effective RF received power fluctuations caused by fading and multipath effects. In standard, crowded environments, such behavior could make the proposed system unusable; however, the RSSI defined in the IEEE 802.11/802.15.4 network protocols is strictly related to the effective data packet information and is uniquely linked to each different data frame. Data packets are coded through direct sequence spread spectrum techniques, thus effective data retrieval shows improved immunity towards fading and multipath (representable as delayed received signal replicas) [43][44][45][46].
As depicted in Figure 10, the RSSI estimation is averaged over the whole preamble sequence window only after recognizing the packet "Start of Frame Delimiter" through spread spectrum decoding, while spread spectrum correlation techniques ensure that only the first arriving packet is evaluated, thus ignoring any delayed echo replica. Furthermore, averaging the RSSI over the entire preamble sequence window reduces highly variable fading effects on the RSSI estimations, while highly destructive effects lead to a packet loss, which prevents wrong RSSI values from producing wrong localization estimations. For slow fading issues due to a constant and directive interference, the antenna multiplicity helps to mitigate the phenomena; two different cases can occur:
• omni-directional interference (i.e., "diffuse scattering"): every steering vector RSSI value is uniformly altered, so the steering vector mean value µ is altered, linearly multiplying the vector by a constant coefficient, but the overall linear vector direction remains the same;
• highly directional interference: some vector terms are dramatically altered, but the overall direction of the linear-valued steering vector is maintained (only a few vector term ratios change).
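The omni-directional case can be verified numerically: a uniform dB offset multiplies the linear-power steering vector by a constant, so its direction, which carries the positioning information, is preserved. The RSSI values below are illustrative.

```python
import numpy as np

rssi_dbm = np.array([-50.0, -55.0, -61.0, -47.0])      # illustrative steering vector
offset_db = 6.0                                        # uniform "diffuse" interference

lin = 10.0 ** (rssi_dbm / 10.0)                        # linear-power vector
lin_shift = 10.0 ** ((rssi_dbm + offset_db) / 10.0)    # after uniform alteration

unit = lin / np.linalg.norm(lin)
unit_shift = lin_shift / np.linalg.norm(lin_shift)
assert np.allclose(unit, unit_shift)                   # direction unchanged
```

The magnitude changes (by the constant factor 10^(offset_db/10)) while the unit vector does not, which is exactly why direction-matching estimators survive diffuse scattering.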
As highlighted in Section 3, and further in Section 4 and in [17,19,32], the steering vector keeps its DoA/positioning information in the differences between single RSSI terms (or the ratios between linear terms), i.e., in the effective steering vector direction in the C^M/R^M vectorial space.
Having as reference map the ensemble of physically acceptable steering vectors for a given array structure, a good ML algorithm implementation will be able to identify the most similar reliable map vector, rejecting/ignoring the extra-domain vectorial components.
ML algorithms based on vectorial subspace decomposition [32,33] evaluate the direction match between the obtained steering vector and the reference vector, completely ignoring any constant-term fluctuation while rejecting singular term-ratio mismatches. Note that for the reduced computational cost subclass of algorithms (Equation (4)), such capability depends directly on the estimator function.
Accordingly, Section 4 will describe how the minVAR estimator, with respect to the LSE, is better able to ignore constant-term mismatches, while singular term mismatch effects are minimized by increasing the number of distributed antennas, as will be shown by the estimator bias coefficient ratios in Equations (33)-(35).

Estimator Function Improvements Assessment
The effective core of the "One-Step" algorithm block relies on the new minVAR estimator function, introduced in Section 3.4. Having to process long vectors of RSSI values describing a set of distributed antenna gains, maximum-likelihood algorithms based on vectorial subspace decomposition (like MuSiC [32,33] or Esprit [47]) become unfeasible, due to the increased problem complexity order and the lack of orthogonality conditions between steering vectors collected from different (x, y) points. Consequently, the effective estimator function improvements should be evaluated with respect to the standard LSE estimator implementation, as proposed in Equation (5) [31,35,38], which is, to the best of the author's knowledge, the only suitable alternative estimator.
To assess the estimator bias immunity, a propagation error model for the reference map errors, in terms of both LSE and minVAR, follows.
Considering the reference map bias, the localization problem statement (Equation (3)) becomes as shown in Equation (14). The equation defines the reference map bias vector as the RSSI difference between the ideal projected gain maps and the physically obtained ones (Equation (15)).
Following Equation (3), a formally faultless evaluation of the estimator bias should determine how much the reference map bias vector argument can alter the conditions in Equation (9) and shift the position of the pseudospectrum minimum point. However, any consideration about the unknown physical gain map projection trends is totally unfeasible; therefore, its derivatives are undefinable.
The analysis can be simplified by a comparative evaluation, between the different estimators, of the effects of the bias vector directly on the function image. Without evaluating the M_∆(x, y) trend, if an estimator shows a reduced variability with respect to the m_∆i(x, y) subfunctions, it will be more robust against differences between the ideal and physical reference maps. Therefore, a qualitative comparison can be made between the estimator function differences and the m_∆i terms.
The estimator function is directly definable in the R^(M×N) reference map vectorial space. Hence, the estimator gradient can be defined as in Equation (16).
Thus, by defining the gradient vector, each point of the estimator function can be written as in Equation (17), which separates the influence of the reference map bias vector.
In Equation (17), the first-order term represents the estimator value alteration due to the map bias, while the remaining terms are of higher order. From Equation (17), a brief evaluation of the estimator bias is given by the estimator bias gain (Equation (18)), which conceptually approximates the map-bias-to-estimator-bias gain.
Since the estimator bias gain calculated at a localization point (x̂, ŷ) indicates how susceptible the specific estimator is to variations at that point due to map biases, comparing the bias gains of different estimators over the whole localization area can identify the more reliable estimator.
However, both the LSE (Equation (5)) and minVAR (Equation (13)) estimators are strictly non-linear in their vectorial arguments, so it is necessary to verify whether a first-order approximation (Equation (17)) is reliable enough.
As already noted in Section 3.1, map bias can introduce new relative minima at different (x, y) points, and this can happen in the map vector domain as well. Assuming that the map bias is restrained enough to alter the estimator function trend only in the neighborhood of the effective estimation point (x̃, ỹ), a corollary condition is that the M∆(x, y) term must not alter the estimator function convexity, so that the relative minimum condition (Equation (9)) still holds.
Convexity should be verified through the Hessian matrix (as in Equation (9)), but this method is unfeasible when handling a high-dimensional R M×N → R function. An alternative way to assess convexity is to evaluate the influence of the higher-order terms: all non-linear map bias dependency is captured by the residual term in Equation (19).
By quantifying the estimator non-linearity, it becomes possible to foresee the reliability of the estimator bias prediction through the estimator bias gain of Equation (18). To identify the more reliable of two estimators A and B, both conditions in Equation (20) must be verified: the first establishes which estimator is the more stable, while the second establishes how reliable the first condition is. Note that each condition must be verified at (x̂, ŷ), while the estimator equations depend directly on the collected steering vector S and on the reference map vector M(x̂, ŷ) (Equation (3)). Following the reduced computational cost estimator class definition (Equation (4)), the correct and biased localization estimations are defined below.
The estimated localization should be equal to the real position. Therefore, using Equations (21) and (22), each function can be evaluated at the (x̂, ŷ) point by writing its dependency on the map bias terms directly, as shown in Equation (23).

Algorithms Based on Vectorial Subspace Decomposition
Estimators based on vectorial subspace projections (e.g., MUSIC [32,33], ESPRIT [47]) can provide very high map bias and noise rejection, relying on the orthogonality between the error vectors and the expected vectorial subspaces. The bias factor for vectorial subspace algorithms is ideally zero, because any M∆(x, y) vector has a null projection over the map vector subspace [32].
Unfortunately, applying vectorial subspace decomposition algorithms within a one-step algorithm is unfeasible due to the high computational complexity. A fundamental constraint for indoor localization is real-time tracking, but the high dimensionality of the global steering vector (Equation (1)) prevents singular value decomposition and reference map vector projection [32,47] over the subspaces within reasonable timeframes. The localization results of the RSSI MUSIC implementation (presented in [32]) will be reported to show the mean execution time and speed ratio of each estimator.

LSE Estimator Bias
Applying the condition in Equation (21) to calculate the LSE estimator gradients yields Equation (24). Following Equation (23), the estimator bias factors for the LSE estimator are given below.

minVAR Estimator Bias
By Equation (13), the minVAR estimator is written as follows. Each term of the minVAR estimator gradient is shown below. Expanding Equation (27) yields the following equation. Following Equation (23), the estimator bias factors for the minVAR estimator are written as follows:
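The behaviour of the two estimators can be sketched numerically. Equation (13) is not reproduced here; the sketch below assumes a residual-variance form for minVAR, which is consistent with its stated immunity to constant offsets such as P TX (Section 4.4), and compares it to the Equation (5)-style LSE squared-residual form. The reference map, grid, and gain values are synthetic illustrations, not the paper's measured maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference map: K antennas over a coarse (x, y) grid.
K, NX, NY = 28, 20, 20          # 4 anchors x 7 antennas, as in the experiment
xs, ys = np.linspace(0, 7, NX), np.linspace(0, 5, NY)
ref_map = rng.normal(-60, 6, size=(NX, NY, K))  # dBm-like synthetic gains

def lse(s, m):
    """LSE pseudospectrum term: squared residual norm (Equation (5) style)."""
    return np.sum((s - m) ** 2)

def minvar(s, m):
    """minVAR-style term: variance of the residual vector. Any constant part
    of the residual (e.g., an unknown P_TX offset) cancels by construction;
    an assumed form consistent with Equation (13)'s offset immunity."""
    d = s - m
    return np.sum((d - d.mean()) ** 2) / d.size

def localize(s, estimator):
    """Exhaustive grid search for the pseudospectrum minimum."""
    cost = np.array([[estimator(s, ref_map[i, j]) for j in range(NY)]
                     for i in range(NX)])
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return xs[i], ys[j]

# Steering vector taken at a true grid point, plus an unknown P_TX offset.
i0, j0 = 12, 7
s = ref_map[i0, j0] + 9.0        # +9 dB common offset on every antenna

print(localize(s, minvar))       # recovers the true point despite the offset
print(localize(s, lse))          # may be pulled away by the common offset
```

The grid search mirrors the reduced-cost estimator class of Equation (4): the estimate is the map point minimizing the pseudospectrum.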

Estimator Bias Immunity Comparison
In Equation (30), a generic bias model is described: the map bias is given as a Gaussian noise vector distributed over the M × N steering vector terms; any P TX term is embedded in the mean value of the Gaussian noise, because it is common to every steering vector component.
Applying Equation (25), the LSE bias factor is calculated below. Applying Equation (29), the minVAR bias factor is calculated below. To evaluate the minVAR improvement over the LSE, the Equation (20) ratio is then evaluated by substituting these bias factors. The final conditions in Equation (35) summarize the complete reliability comparison between the minVAR and LSE estimators: when those conditions are verified, the superior reliability of minVAR over LSE is proven.
In Figure 11, the estimator bias gain factors and estimator factor ratios are shown for the actual experimental configuration (Figure 1, with 4 anchors of 7 antennas each) with respect to different (µ, σ) parameters of the Gaussian map bias model. The estimator factor ratios are always much greater than one, making the minVAR estimator more reliable than the LSE.
By Equation (35), a further noticeable improvement is highlighted: the minVAR estimator is fully independent of (i.e., unbiased with respect to) the user node's transmitted power term P TX, or more generally of any steady µ term (such as a ∆P TX due to path loss). In particular, if the reference map deviation consists only of a constant term, then C BminVAR (x̂, ŷ) = 0. This improvement is not trivial: every fingerprinting method depends on the overall received power value, and this dependency causes localization estimation bias under transmitted and received power fluctuations, even when a complete and error-free measured data set is available [19,42].
Note that the minVAR estimator bias gain decreases as the overall number of antennas increases (Equation (33)), while the estimator non-linearity factor does not change at all (Equation (34)). Therefore, it is proven that the minVAR estimator always enhances its reliability as the number of antennas over the localization space increases, as stated by the CRB analysis [19,37]; the LSE estimator instead shows the reverse trend, worsening its bias as each antenna adds its RSSI measurement noise to the steering vector. Two main cases of map bias distribution can be evaluated: the first considers a highly stable map bias |µ| >> σ (e.g., for high |P TX | terms due to path loss), and the second a highly variable map bias (e.g., due to coarse unexpected map model errors).

Simulated Localization Estimation Results and Comparison
To make an effective comparison, Figures 12 and 13 show the simulated localization error results for the reference scenery (Figure 1), computing a set of 10 localizations at each (x, y) point. For each localization, the steering vector was the corresponding reference map vector M(x, y) biased with an AWGN map bias defined as in Equation (30).
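A simulation of this kind can be sketched as follows. The reference map is a synthetic stand-in (illustrative values, not the Figure 1 maps), minVAR is assumed to be a residual-variance form consistent with its stated offset immunity, and each trial biases the true map vector with Equation (30)-style AWGN before localizing by grid search.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the reference map (illustrative values and grid).
K, NX, NY = 28, 15, 15
xs, ys = np.linspace(0, 7, NX), np.linspace(0, 5, NY)
ref_map = rng.normal(-60, 6, size=(NX, NY, K))

def localize(s, kind):
    d = ref_map - s                      # residuals against every map point
    if kind == "lse":
        cost = (d ** 2).sum(axis=2)      # Equation (5)-style squared norm
    else:                                # "minvar": residual variance (assumed form)
        cost = d.var(axis=2)
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return i, j

def mean_error(kind, mu, sigma, n=10):
    """Mean Euclidean error with an Equation (30)-style AWGN map bias:
    each trial biases the true map vector with N(mu, sigma^2) noise."""
    errs = []
    for i in range(NX):
        for j in range(NY):
            for _ in range(n):
                s = ref_map[i, j] + rng.normal(mu, sigma, K)
                ie, je = localize(s, kind)
                errs.append(np.hypot(xs[ie] - xs[i], ys[je] - ys[j]))
    return float(np.mean(errs))

# A steady bias (|mu| >> sigma) barely affects minVAR, as predicted in Section 4.
print(mean_error("minvar", mu=6.0, sigma=0.5))
print(mean_error("lse", mu=6.0, sigma=0.5))
```

With a steady bias dominating the dispersion, the variance-based cost at the true point stays near zero, so the minVAR mean error remains negligible while the LSE one does not have to.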
It is remarkable that the plots in Figure 12 follow the predicted trends of Figure 11, further highlighting the validity of the proposed model. For comparison, Figure 12 and Table 1 show the localization error results of the State-of-Art MUSIC localization algorithm, as presented in [32]. The better localization estimations came at the cost of a 21× slowdown in the localization execution time.
Table 1. Simulated localization estimations for all the cases (N = 10 localizations for each (x, y) point).

Experimental Results
The minVAR estimator can achieve results better than the LSE and directly comparable to the well-known, high-accuracy MUSIC estimator [33,48]. The overall one-step localization algorithm described in Section 3 can be implemented on a known indoor WiFi COTS localization infrastructure. As described in [19] and briefly in Section 1, a network of four IEEE 802.15.4 (as well as IEEE WiFi) compliant anchor nodes (routers) based on SBA technology [31] (Figure 3) was installed on the ceiling of the office area site, as shown in Figure 1.
The goal is to provide reliable and sufficiently accurate localization without any kind of offline calibration phase, which has been mandatory in the past [1,5,38]. The novelty in [19] was the replacement of the offline calibration phase with a computed predictive fingerprinting map, but an additional map parameter tuning phase was needed to achieve acceptable results with an LSE-based localization algorithm. In the present experiment, instead, the reference map was the straight planar projection of the direct angular patch antenna models, with the parameters presented in [17] and shown again in Figure 3: no map tuning phase was performed, and the straight ideal map was used.
This work aims to propose an effective implementation of a real-time and calibration-free indoor localization system. To provide a complete characterization of system effectiveness, different kinds of experimental campaigns were performed:
• static localizations: effective localization accuracy is estimated through an extensive campaign of localizations throughout the site domain, placing a typical user node on an extensive set of possible positions. Considering a widespread, uniform set of positions over the whole observation area, a highly descriptive error distribution function can be defined, thus characterizing the system's capability to interact with a non-ideal environment that would normally require specific calibration phases;
• static localizations with strong scenario changes: while localizing in a real environment without a calibration phase shows how well the localization algorithm overcomes "minimal" reference map misalignments, true Plug-and-Play capability is achieved only if system functionality is proven while also relaxing, as far as possible, any requirement on user behaviour (i.e., enabling users to carry their devices at different heights and with different orientations). Just as a map misalignment arises when a user node is placed at a height different from the reference map z TAG height, using reference maps computed for a z REF different from the effective user node height z TAG characterizes the localization system's capability to minimize bias under strong utilization scenario changes;
• dynamic localizations: the "real-time" localization capability is a specification more linked to the system implementation level than to the localization methodology itself. Nevertheless, this work aims to show a possible implementation of an effective consumer-level Plug-N-Play system, so a lifelike example of an actual implementation scenario is given. Different measurement campaigns were performed by having a walking user cross the entire site, behaving as a typical standing user holding a standard mobile device;
• dynamic localizations with strong echo interferer: to give an actual demonstration of the multipath immunity obtained by exploiting RSSI measurements through the minVAR estimator (as described in Section 3.5), some additional real-time tracking experiments were performed in the presence of a strong echo interferer, providing an example of the system's capability to overcome strong multipath effects.

Static Localizations
A CP antenna equipped TAG node was placed on a grid of known positions (Figure 14) at an equivalent height of about 1.10 m above the floor, with the anchors pinned on the ceiling at 2.8 m (Figure 1).
In Figure 15, the distribution over (x, y) of the mean Euclidean localization error and of its standard deviation over 100 localization trials per point is shown. These results were computed using an ideal reference map with pattern projections computed as shown in [19] for a height of 1.10 m from the floor (i.e., the exact height of the TAG positions). Figure 15 highlights the accuracy improvement obtained through a raw minVAR estimator implementation, without enabling the extended features of the preselected area (Section 3.1), adaptive masking (Section 3.2) and antenna weighting (Section 3.3). Figure 15b shows that a raw minVAR implementation achieves a straight 65% area coverage with sub-metric localization error. The typical LSE algorithm (Figure 15a), implemented without reference map calibration, shows a halved sub-metric error coverage (30%), while the MUSIC implementation depicts a rather unpredictable error distribution (Figure 15c) due to a wrong gain vectorial space definition.
Note that only 36% of all the experiment points lie inside the mesh area (shown as a bounding box in Figure 14). The mesh was built with limited coverage of the overall site area to demonstrate the capability of the localization system to work where a high dilution of precision is expected. While Figure 15 offers an introductory performance comparison, more detailed comparisons follow. The overall results are evaluated both for the mesh area only and for the complete site area, with a direct comparison against the results of [19] and an indirect comparison with other State-of-Art indoor positioning systems in the existing literature.
In Figure 16, the cumulative distribution functions (CDFs) of the localization error are shown for the entire "One-Step" algorithm block implementing the LSE and minVAR algorithms. The final "One-Step" implementation corresponds to the "minVAR+features" trace, for which all the proposed extended features are enabled (namely, the preselected area (Section 3.1), adaptive masking (Section 3.2) and antenna weighting (Section 3.3)). An important achievement of this work is the capability of obtaining sub-metric localization accuracy without the need for a calibration phase: the dashed lines in Figure 16 describe the localization error results obtained in [19] by applying a raw LSE algorithm with a calibrated reference map.
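The two summary statistics used throughout this section, the empirical error CDF and the sub-metric coverage, can be sketched as follows; the error samples are illustrative placeholders, not the measured data behind Figure 16.

```python
import numpy as np

def error_cdf(errors, grid=None):
    """Empirical CDF of localization errors, as plotted in Figure 16-style
    comparisons. Returns (thresholds, fraction of trials with error <= t)."""
    errors = np.sort(np.asarray(errors, dtype=float))
    if grid is None:
        grid = errors
    frac = np.searchsorted(errors, grid, side="right") / errors.size
    return grid, frac

def submetric_coverage(errors):
    """Coverage as defined in Section 5: percentage of points whose
    localization error stays below 1 m."""
    errors = np.asarray(errors, dtype=float)
    return 100.0 * np.mean(errors < 1.0)

# Illustrative error samples (metres), not the paper's measured data.
errs = [0.3, 0.55, 0.7, 0.95, 1.2, 1.8]
print(submetric_coverage(errs))  # roughly 66.7 for this toy sample
```

Reading the CDF at the 1 m abscissa gives exactly the coverage figure reported in Tables 2 and 3.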
In [19], the reference map was given as a parametric function over (x, y) with a set of angular parameters for each antenna of the SBAs (i.e., the HPBW angles, the expected front-to-back ratio and the ratios between the different antenna gains), plus an α parameter corresponding to the η free-space path loss exponent of the Friis transmission equation [19]. The localization results of [19] were obtained through a complex parametric tuning of the parameters described above, applying some manual corrections after observation of the overall RSSI measurements.
From Figure 16, it is evident that the results obtained through the minVAR estimator using a totally uncalibrated (i.e., purely ideal) reference map are directly comparable with the results of the raw LSE implementation using the carefully calibrated map. With the extended features enabled, the actual results are far better than the calibrated LSE ones. Note that in the actual implementation the calibration routine is still applicable, but it is interesting to observe that the effective reference map error reduction achieved through calibration (the "minVAR+calibration" trace in Figure 16) is almost useless thanks to the minVAR bias immunity improvement.

Static Localizations with Strong Scenario Changes
A trivial method to verify the immunity of the localization estimation to map bias is to perform the localization estimations using a reference map computed for a TAG height different from the effective one. Figure 17 shows how the z REF height projection parameter alters the map projection considerably. Looking at the projection operation shown in [19], it is clear that, for a fixed point (x̃, ỹ), TAG height variations lead to highly different global steering vectors due to the different distance variations between each antenna and the TAG node.
Note that, by Equations (2) and (14), a global steering vector bias can be considered equivalent to a specific map bias vector added to the reference map at the (x̃, ỹ) point. Therefore, computing localizations using reference maps computed for different heights is equivalent to obtaining a global steering vector bias due to height variation. Different localization sessions were conducted for the entire set of experiment points using reference maps calculated for various heights (0 m, 0.55 m, 1.9 m and 2.8 m; Figure 17), all different from the effective TAG height. Following the map bias effect reciprocity stated above, the computed localization results show the estimator stability with respect to vertical TAG motion, given that the global steering vector variation is due only to geometrical projection differences. In Figures 18 and 19, the resulting CDFs for the same experimental data set are given, one for each reference map. The estimation stability with respect to height variations is perceivable through the CDF dispersion: a lower bias immunity corresponds to a higher variance of the mean error and of the error parameters.
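The sensitivity of the projected map to z REF can be illustrated with a deliberately simplified model: one ceiling anchor, free-space Friis path loss and an idealized cos^n boresight pattern. The geometry, constants and pattern are assumptions for illustration; the paper's actual projection follows the model in [19].

```python
import numpy as np

# Hypothetical anchor geometry: one ceiling antenna at 2.8 m, pointing down.
ANCHOR = np.array([3.5, 2.5, 2.8])   # (x, y, z) in metres
P0, N_EXP = -30.0, 2                 # 1 m reference power [dBm], cos^n exponent

def projected_rssi(x, y, z_ref):
    """Planar projection of one antenna gain at height z_ref: free-space
    Friis loss plus an idealized cos^n pattern (assumed model, not [19]'s)."""
    v = np.array([x, y, z_ref]) - ANCHOR
    d = np.linalg.norm(v)
    cos_theta = max(-v[2] / d, 1e-6)         # angle from the vertical boresight
    gain_db = 10 * N_EXP * np.log10(cos_theta)
    return P0 - 20 * np.log10(d) + gain_db

# Same (x, y), different z_REF values as in Figure 17: the projected RSSI
# (one component of the global steering vector) changes considerably.
for z in (0.0, 0.55, 1.10, 1.9):
    print(z, round(projected_rssi(3.0, 2.5, z), 2))
```

Even this toy model shows several dB of variation in a single steering vector component across the Figure 17 heights, which is exactly the equivalent map bias exploited in this test.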
The dashed lines show the effect of an overfitted map parameter set, as in [19]. It is worthwhile to highlight that, while parameter calibration helps the standard LSE algorithm perform better when the reference map height equals the TAG height, the difference between the CDF traces for different z REF parameters worsens.
When the considered localization area is limited to the mesh area, the calibrated reference maps are able to keep the accuracy variance below that of the uncalibrated LSE execution; the uncalibrated minVAR with features gives directly comparable results. Extending the localization area to the overall site, the calibration shows a remarkable accuracy gain with respect to the LSE algorithm, while the uncalibrated minVAR shows a comparable overall accuracy with much less variation over ∆z REF. Tables 2 and 3 summarize the overall localization estimation results. Defining the coverage as the percentage of points of the area of interest with sub-metric localization error, the coverage variance column briefly describes the overall dispersion between the CDFs. The overall localization error parameters are evaluated for the estimations computed using the correct z REF = z TAG reference map. As depicted in Tables 2 and 3, the proposed approach is capable of ensuring remarkable localization accuracy, compared to the LSE best-fit calibration, without any calibration at all. Obviously, as can be seen in Figure 16, the calibration routines remain applicable to further improve the localization accuracy, without the high sensitivity to experiment parameter variations that can worsen the standard algorithms.

Dynamic Localizations
The static localization results show an acceptable, systematic sub-metric accuracy over the entire 35 m² office area. The localization error distribution shown in Figure 15 appears quite regular, and estimations computed with reference maps calculated for different heights have been demonstrated to maintain the same error distribution when applying the minVAR estimator function (Figures 18 and 19).
Fading and multipath effects can appear heavily reduced when localization tests are performed only on a set of static positions. Within a standard static environment, multipath paths can be quite static and predictable throughout the site area, so a static test is not enough to argue that the actual system is sufficiently immune to fading effects (i.e., the higher localization errors in the Figure 15 distribution could depend on static fading interferences); in addition, fast fading effects are absent altogether if the site is clear and the testing user device is stationary.
To assess effective fading immunity, some tracking tests were performed. Such tests involve the observation of one or more user nodes kept in motion by their users walking within the area: by analogy with a standard WiFi PIFA installed on a typical smartphone (Figure 20a), the testing 802.15.4 user nodes are equipped with an antenna of similar pattern oriented towards the ceiling.
Tracking tests were performed with a user walking (at about 5 km/h) throughout the office site following the paths highlighted in Figure 20b, holding the device as in Figure 20a.
The tracking sampling rate is related only to the coarse communication time required: for a single user node, a valid global steering vector is obtained when an estimated RSSI is available for each antenna of the SBAs of the global anchor network. The proposed communication scheme assumes a network-user node communication with only one transmitting anchor per communication, while the others can sniff all network packets (because all the anchors act as a single distributed 802.15.4/802.11 router): in the proposed system, each anchor obtains its part of the global steering vector when it completes an RSSI collection cycle on its own antennas.
The final localization sampling time must include the algorithm computing time. Following the results in Table 1 for the actual sub-optimal MATLAB implementation, the global sampling time results in
T obs = T s + T alg = 53.2 ms + 10 ms ≈ 63.2 ms (38)
Considering a typical walking speed of about 5 km/h, the effective spatial sampling interval for a walking user becomes
∆s = T obs · 5 · 10/36 m/s ≈ 8.8 cm
thus indicating that the proposed system is feasible for real-time tracking. The tracking paths were designed to put the user in "close" areas surrounded by furnishings such as various metallic desks (Figure 20b, centre and right) and some wooden shelves (Figure 20b, left). The device was held at the typical 1.10 m height of a device carried by a loosely walking user (Figure 20a): for each chosen path, different trials were performed using different kinds of antennas on the user device (Figure 21).
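The Equation (38) timing budget and the resulting spatial sampling interval can be checked with a few lines of arithmetic (values taken from Table 1 as reported above):

```python
# Timing budget of Equation (38) and the spatial sampling interval it implies.
t_s = 53.2e-3          # steering vector collection time [s]
t_alg = 10e-3          # "One-Step" algorithm computing time [s]
t_obs = t_s + t_alg    # global sampling time [s]

speed = 5 * 10 / 36    # 5 km/h expressed in m/s (about 1.39 m/s)
delta_s = t_obs * speed

print(round(t_obs * 1e3, 1))        # 63.2 (ms)
print(round(delta_s * 100, 1))      # 8.8 (cm)
```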
The best match for the "smartphone" use case of Figure 20a is given by the CP antenna configuration; however, to maximize fading observation and to highlight the system's strength across different user device antenna types, tracking tests were also performed for two generic 2.45 GHz dipole LP antenna cases (Figure 21b,c). Using LP antennas, the multipath rejection aid given by CP polarization is absent in all cases.
A detailed tracking error analysis is given by the overall error cumulative distribution functions proposed below for each tracking test, both for the preceding tuned LSE implementation of [19] (Figure 22a) and for the actual proposed one (Figure 22b). Tables 4 and 5 summarize the overall results. Figure 23 depicts the tracking results with respect to the proposed reference paths. The CP antenna case is the one which better models a typical smartphone PIFA device; nevertheless, the LP cases allow estimating the system behaviour in a fading-impaired environment.
The overall results clearly show an accuracy worsening for the vertically oriented LP dipole case (Figure 21c). Such behaviour is explained by the radiation pattern inversion of an exactly vertical dipole, for which the LP dipole shows an equivalent isotropic pattern over the XY plane (thus confusing the mesh anchor proximity estimation), while in close proximity the effective received power is highly reduced.
Nevertheless, the "One-Step" solution still performs far better than the previous calibrated LSE implementation [19]. Regarding fading issues, the X-path tracking trials are the most representative due to the furnishing arrangement (Figure 20): for the worst cases (LP cases on the X-path), the "One-Step" implementation clearly shows a high accuracy improvement.

Dynamic Localizations with Strong Echo Interferer
As suggested in Section 3.5, localization estimations based on power-only measurements can be highly affected by fading and multipath phenomena. Actual IEEE 802.15.4/802.11 stacks implement advanced protocol techniques to overcome such effects, reducing their disruptive action on packet data transmission [43–46]; in the same way, antenna input power measurements are post-processed before obtaining the effective RSSI values.
As briefly described in Section 3.5, a coarse RSSI processing is able to overcome estimation fluctuations due to highly variable effects ("fast fading") by averaging the power measurement over a window. By this, the final linear RSSI estimation shows a value shift related to the "slow fading" effect only, which can be described in terms of physical spreading over the entire set of distributed antennas.
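The effect of window averaging on the power trace can be sketched on synthetic data. The trace below mixes an illustrative slow-fading drift with fast-fading fluctuations; the sample rate and fading magnitudes are assumptions, while the 128-sample window echoes the 128 µs RSSI averaging length of 802.15.4 mentioned in Section 5.3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative received power trace: steady level + slow-fading drift
# + fast-fading fluctuations. Values and sample rate are assumptions.
fs = 1e6                          # 1 Msample/s, hypothetical
n = 4096
t = np.arange(n) / fs
p_lin = 1e-6 * (1 + 0.3 * np.sin(2 * np.pi * 2e2 * t))   # slow fading [W]
fast = rng.normal(0, 0.2e-6, n)                           # fast fading term
rx = np.clip(p_lin + fast, 1e-12, None)

def rssi_windowed(p, win):
    """Average the linear power over `win` samples, then convert to dBm:
    the fast fluctuations are smoothed out while the slow shift survives."""
    m = p[: len(p) // win * win].reshape(-1, win).mean(axis=1)
    return 10 * np.log10(m / 1e-3)

rssi = rssi_windowed(rx, win=128)         # 128-sample (~128 us) windows
raw_db = 10 * np.log10(rx / 1e-3)
print(rssi.std(), raw_db.std())           # averaged trace is much smoother
```

The surviving slow shift is exactly the component that the Section 4 map bias model (Equation (30)) absorbs into its mean term.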
Considering the possible interferer spreading behaviour classified in Section 3.5, a rough model of the slow fading effect on the steering vector observation can be built using the generic reference map bias model described in Equation (30). Applying a first-order approximation, the slow fading effect can be brought back to an equivalent map bias vector featuring an unknown mean value and a low probabilistic dispersion across the different antennas (i.e., a low σ parameter in Equations (30)–(35)).
Theoretically, the minVAR estimator is able to overcome such kinds of map biases, so the actual localization should be highly immune to multipath effects.
Some experiments were performed to validate such models. A possible way to impose a bad multipath condition is to place within the site a strong interferer, distributed as expected by a coarse first-order RF reflection model. Following such criteria, some tracking tests were performed using the experimental TAG node (Figures 21 and 24), connecting to the antenna RF connector a Wilkinson power splitter feeding both a linear 2.45 GHz dipole for the intentional TAG transmission and an additional 2.45 GHz dipole, through an RF cable, acting as a strong interferer (Figure 24). Dealing with 2.45 GHz signals, the effective signal delays are totally negligible with respect to the RSSI window averaging length (128 µs for 802.15.4, Figure 10); thus, the effective interferer immunity can be attributed to the algorithmic improvements.
The interferer was placed adherent to the left wall of the site (at x I = 0.30 m, y I = 3.5 m; Figure 24), thus enforcing the interfering action by exploiting the equivalent RF signal reflection given by the wall itself. Because a symmetric power divider is used, the effective transmitted power at the input of the mobile antenna (the 2.45 GHz linear dipole) and at the input of the interferer path (composed of a "long" RF cable connected to a second, identical 2.45 GHz dipole) is exactly the same. The effective ratio between the desired transmitted signal and the interferer (an equivalent SIR, Signal-to-Interferer Ratio) is therefore given only by the RF cable losses: Table 6 lists the actual SIRs for both RF cables applied. A set of 50 localizations was computed connecting a 50 Ohm termination to the mobile antenna connector to validate the interferer operation: Table 7 shows the overall mean localization error and standard deviation for each case. The direct interferer antenna radiation pattern is dramatically altered by the wall (acting as an adherent ground plane), so the interferer-only localizations are quite noisy, but the localization error parameters stay under 1.5 m (i.e., still of "metric" grade). The results in Table 7 demonstrate that the interfering antenna is an effective signal source. The average RSSIs collected are fully within the typical range of RSSIs for a node within the site area (as can be seen in Figure 9), so connecting an identical linear dipole to the mobile antenna connector clearly produces a situation where two identical and identifiable signals are present.
Two columns are highlighted in Table 7: they list the equivalent parameters which directly feed the observation/reference map bias model proposed in Equations (30)–(35) of Section 4.4. The proposed values are extracted by evaluating the statistical distribution of the RSSI values within every single collected steering vector; the parameters obtained for each steering vector are then averaged over the entire set of observations. It is worthwhile to point out how the interferer-only observations feature a low σ/µ ratio, due to the particular signal spreading case following a "diffuse scattering" propagation model.
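The extraction procedure just described can be sketched directly: per-vector statistics first, then an average over the observation set. The sample observations below are invented placeholders, not the Table 7 data.

```python
import numpy as np

def bias_model_params(steering_vectors):
    """Per-vector (mu, sigma) statistics averaged over all observations,
    mirroring the Table 7 extraction: evaluate the RSSI distribution within
    each collected steering vector, then average the obtained parameters
    over the whole observation set."""
    sv = np.asarray(steering_vectors, dtype=float)   # shape: (n_obs, K)
    mus = sv.mean(axis=1)
    sigmas = sv.std(axis=1, ddof=1)
    return float(mus.mean()), float(sigmas.mean())

# Illustrative observations (dBm); a diffuse-scattering interferer tends to
# produce a low sigma/|mu| ratio, i.e., a nearly common shift on all antennas.
obs = [[-62.1, -61.8, -62.4, -61.9],
       [-63.0, -62.7, -63.2, -62.9]]
mu, sigma = bias_model_params(obs)
print(abs(sigma / mu))   # small ratio, "diffuse scattering"-like spreading
```

The resulting (µ, σ) pair is exactly what Equations (30)–(35) take as input when predicting which estimator is the more reliable.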
Following the RSSI estimation path depicted in Section 3.5, it has been shown that any diffuse scattering phenomenon can be brought back to a highly spread RSSI value alteration on the collected steering vectors. Following the estimator function theory depicted in Section 4, the minVAR estimator should be more capable than the LSE of overcoming such an observation bias, and the tracking experiments should prove this.
The overall tracking error was evaluated by moving the node along the "Y-path" already described in Section 5.2 and shown in Figure 24. The choice of the "Y-path" is merely due to scenario constraints, because the effective tracking experiments were performed by moving the mobile node while also carrying the RF cable feeding the interferer antenna (Figure 25). The overall error distribution functions are plotted in Figure 26, while Table 8 summarizes the most significant results. Table 8 clearly depicts how the uncalibrated One-Step applying the minVAR estimator is better able to overcome the steering vector alterations introduced by the interferer, while the results for both LSE and minVAR show that, by applying an advanced protocol scheme, the incidence of fading problems is widely reduced. In fact, the localization results obtained using the old calibrated LSE [19] do not show a dramatic impact of the interferer, but the minVAR estimator function results to be more reliable, as predicted by the model described in Section 4 applying the phenomena parameters identified in Table 7.

System Validation and "State-of-Art" Comparison
The reference tuned map model projection applied in [19] gives a first-order approximation of the RSSI environment distribution without evaluating any advanced effects, such as fading or scattering [50] or measurement device deviation [51,52], which dramatically alter the effective power distribution [8]. Due to the unpredictability of such phenomena, minimizing the effects of the non-idealities is much more reliable than performing any kind of calibration, given the unknown dependencies and environment variables. Some solutions have been proposed to reduce fingerprint map errors [53], but the resulting improvement is far from achieving acceptable accuracy results. The depicted analysis and experimental results show the improved robustness of the proposed system, which allows the implementation of host-side software-based solutions. Table 9 lists a comparison with the equivalent State-of-Art indoor localization systems in the literature, considering standard COTS systems based on IEEE 802.11- and 802.15.4-compliant networks only, as the main goal is to produce a new direct-to-use sub-metric localization system that can be considered transparent and cost-free in terms of a standard WiFi network configuration [5]. To provide a comparison with systems based on more refined CSI (Channel State Information) evaluations, and ignoring the loss of portability due to the unavailability of CSI detection on widespread IEEE WiFi protocols (it is available only for OFDM WiFi modulation schemes [54]), the results given by the state-of-art CSI-based localization system presented in [54] are also cited.
In the literature, some accurate indoor localization systems based on the TDoA method over IEEE 802.11 networks have been presented [6,7], but such a method is currently unfeasible using COTS devices, due to the high-accuracy measurement devices required or the need for low-level communication stack access.
Explicitly smartphone-related algorithms (i.e., all the odometry-based algorithms of Table 9) improve the localization accuracy by complementing RSSI fingerprint localization with the odometric information provided by smartphone sensors. Some papers [15] present approaches not based on fingerprinting techniques; nevertheless, all of them rely on an offline training phase. Although the accuracy can be good, the training phase always requires an uncomfortable calibration session.
Table 9. Actual State-of-Art Localization System Comparison (inertial measurement units = IMUs).

Conclusions
The advantage given by the proposed solution is not trivial at all. In bigger environments, the system accuracy can be achieved using fewer anchor nodes in LOS sites (e.g., a single room), while for non-LOS sites (e.g., across different rooms) the number of anchors may have to be increased, so a direct comparison of the anchor node density (specifically against [15,58]) risks being misleading. Good wireless network distribution planning tends to maximize the number of anchors inside the more complex non-LOS areas to maximize the connected users' quality of service; therefore, the conditions necessary for the proposed localization solution to reach good accuracy are naturally met. The localization service can be provided both through a centralized server and through the user host, because the steering vectors can be obtained directly from the network of anchors without any network overhead at all. The overall accuracy does not depend on the odometric information accuracy, which varies with each user node's hardware, and there is no longer any need for tracking filtering.
The overall presented localization solution is conceived as a plug-and-play system that achieves an acceptable localization accuracy without an offline calibration phase. Furthermore, the system is self-consistent and does not require any kind of data fusion with additional sensor information (differently from the typical fused RSSI and odometric indoor localization with tracking approaches [15,56–58]). Because the hardware is based on standard 802.11 and 802.15.4 anchor nodes, with the only hardware improvement being the use of a switched-beam array as the antenna, the localization service is provided as an additional and transparent one, using standard network operations; thus, the network administrator is given the ability to provide indoor localization and data contextualization directly to users using only standard network configuration procedures.
Such a solution can be considered "infrastructure-free" with respect to the user experience, because no modification of the user device nodes is necessary: as a matter of fact, the system totally shares the existing topology of typical 802.15.4/802.11 LAN networks, and the anchor node substitution does not change the pre-existing wireless network functionality. Obviously, an effective ad hoc infrastructure must be installed, but its installation concerns only the initial wireless network planning and setup, thus assuring every user straightaway access to the localization service directly through the application layer.
Author Contributions: Eng. Marco Passafiume wrote this paper and developed the algorithm, the analytical demonstrations and the effective software implementation of the system. Eng. Stefano Maddio and Alessandro Cidronali developed the hardware system and conceived and designed the experiments. Alessandro Cidronali was the project supervisor.

Conflicts of Interest:
The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this manuscript: