Passive Polarized Vision for Autonomous Vehicles: A Review

This review article aims to address common research questions in passive polarized vision for robotics. What kind of polarization sensing can we embed into robots? Can we find our geolocation and true north heading by detecting light scattering from the sky, as animals do? How should polarization images be related to the physical properties of reflecting surfaces in the context of scene understanding? This review article is divided into three main sections to address these questions, as well as to assist roboticists in identifying future directions in passive polarized vision for robotics. After an introduction, three key interconnected areas will be covered in the following sections: embedded polarization imaging; polarized vision for robotics navigation; and polarized vision for scene understanding. We will then discuss how polarized vision, a type of vision commonly used in the animal kingdom but not yet exploited in service robotics, should be implemented in robots. Passive polarized vision could be a supplemental perceptive modality of localization techniques, complementing and reinforcing more conventional ones.


Introduction
Navigating in Global Navigation Satellite System-denied or unmapped environments will, over the coming decade, become one of the 10 biggest challenges in robotics [1]. Currently, autonomous robots rely on Global Navigation Satellite Systems (GNSS), Inertial Navigation Systems (INS), ground-based antennas used to triangulate or correct GNSS signals (5G networks or Real-Time Kinematic (RTK) networks), astronomical navigation, gyrocompass navigation, and vision-based or lidar-based SLAM (SLAM stands for Simultaneous Localization And Mapping). Surprisingly, passive polarized vision has not yet become standard in robotics to improve, for instance, the SLAM technology brick, geolocation, or true north heading detection. In contrast, animals are able to navigate or migrate over extremely long distances without using any of the localization techniques developed by humans [2]. Migratory birds deserve particular mention: they are known for their astonishing navigation capabilities. Studies have shown that some of them, such as Savannah sparrows [3] or Catharus thrushes [4], can navigate by means of a Polarization-based Compass (PbC), which they use to calibrate their magnetic compass. However, the precise mechanism involved in this calibration remains unclear [5]. On the other hand, how insects use sky polarization to navigate is better understood. For instance, desert ants use a powerful navigational tool termed path integration to locate their nest. When returning, desert ants follow the shortest possible route, a straight line, even in featureless or unfamiliar terrain. By integrating a directional compass and distance information from their vision, desert ants compute a homing vector from their visual inputs, and this leads them home [6]. Since the pioneering behavioral experiments on desert ants by Piéron (1904) [7], who manipulated the ants' positions in order to observe their behavior, and Santschi (1911) [8], who manipulated the light perceived by ants by using a mirror, it has taken about a century to understand how desert ants exploit sunlight for navigation purposes [6,9]. The ant-inspired path integrator has recently been implemented on board fully autonomous robots: first on a wheeled robot called Sahabot in 2000 [10], then on a legged robot called AntBot in 2019 [11,12] (see Section 3.3 for further information).
Understanding how roboticists can exploit polarized sunlight or polarized light reflection is at an early stage (Figure 1), but it is extremely relevant, because it could pave the way for the development of GNSS-free geolocation for autonomous outdoor navigation, as it works in the animal kingdom [13]. It should be noted that long before the modern polarization sensing systems described below, between the 10th and 13th centuries, Viking navigators used skylight polarization to navigate to Greenland and then North America without a magnetic compass, instead using a sunstone working as an optical compass (see [14-17] and Section 3.1 for further details). Autonomous robots working in urban environments, e.g., for last-mile delivery services, will have to locate and position themselves with a spatial accuracy better than 5 cm and an angular accuracy better than 0.2 degrees by 2030. Mobile robots navigating through public environments (in urban areas or on a campus, for instance; see Figure 1a) must meet the most stringent safety requirements. They must comply with the Machinery Directive (ISO 3691-4 [18]), as well as with autonomous vehicle standards such as ISO 26262 [19] (functional safety, FuSa) and ISO 21448 [20] (safety of the intended functionality, SOTIF). Using and fusing the polarized sensors' outputs with an INS could provide a supplemental perceptive modality that would help reach the level of performance required by these ISO standards, complementing and reinforcing the conventional localization techniques (3D LiDAR-based SLAM, GNSS, and visual-inertial odometry).
Robots will therefore use all the available visual information, including that coming either from the light scattered by the sky or from the light reflected by the surrounding environment. Even if the sun is hidden or the sky is overcast, scattered light remains available, and it carries relevant and robust information for robot navigation (Figure 1b). Moreover, using polarized light reflection will be useful to improve visual contrast in order to better understand the visual scene through superior object detection.
To help researchers find relevant directions over the next decade in the field of passive polarized vision for autonomous vehicles, we have divided this review into three main sections. Following this logic, the three main sections are as follows: Section 2, "Embedded polarization sensing", focuses on polarimetric sensors which can be embedded on board robots; Section 3, "Polarized vision for navigation", emphasizes how polarized light scattering can be used for navigation purposes; and lastly, Section 4, "Polarized vision for scene understanding", suggests how polarized light reflection can be used to better understand a visual scene. Figure 1 illustrates the links between these three sections as they relate to the main research questions above. To address each of these questions, they must be divided into more specific research questions about how knowledge derived from the physical properties of polarized light can be transferred to sensors and then used for navigation purposes.
Previous review articles have focused on the progress of bio-inspired polarized sensors, comprising an exhaustive overview of polarized sensors manufactured by nanotechnology [21], polarization-based orientation estimation algorithms and the combination of polarized sensors with INS, GNSS, SLAM, and other localization systems [22,23], or polarization-based geolocation [23,24]. None of them, however, focused on passive polarized vision in autonomous vehicles, for which dynamic accuracy is more relevant than static accuracy. Despite the growing interest in bio-inspired polarized sensors for navigation purposes, few of them have been implemented on board mobile robots.
In Section 2, we will introduce the various technological solutions to embed polarization imaging into robots. Current state-of-the-art polarization acquisition techniques will be introduced; only technologies relevant to the framework of mobile robotics, comprising passive and linear Stokes polarimetry, will be presented.

In Section 3, we will introduce the various implementations of polarization-based navigation systems on board autonomous robots. Starting with a brief historical overview of polarization navigation, we will then describe the skylight polarization pattern, from the simplest model to the most advanced ones. Next, we will present an exhaustive overview of polarized sensors implemented on board vehicles or autonomous robots for heading or attitude estimation. Lastly, recent developments in polarization-based geolocation will be presented.

In Section 4, we will introduce the use of polarized vision for scene understanding. Polarization has been widely used in the classification of materials [25] or the reconstruction of object shape [26]; this section will focus only on applications that can be directly extended to autonomous vehicles. After recalling and explaining the mathematical formulas linking the polarization parameters to the surface normal orientation, object detection will be described. Then, shape from polarization, which exploits most of the physical information, and the latest techniques using polarization imaging to improve depth estimation and facilitate pose estimation in robots will be presented.

In Section 5, we will deal with the lessons learned from this review and provide new lines of research and future directions in the sensing of polarized light in robotics for the next 10 years.

Embedded Polarization Imaging
We present here the state of the art of polarimetric techniques that allow for the capture of the polarization characteristics of an unknown beam of light. We will first consider the Stokes formalism. We focus on technologies that seem appropriate for on-board acquisition systems in mobile robotics: passive and linear Stokes imaging polarimeters. Non-imaging sensors used for navigation are detailed in Section 3.3. Further information about point-source sensors can be found in [21].

Stokes Formalism
The linear polarization state of light depends on the material properties of the objects in the scene, but also on the geometry of the incident light beam (angle of incidence and angle of reflection) and on the state of polarization of the incident light. The whole polarization information about the scene is contained in the four-component Stokes vector $S = [s_0\ s_1\ s_2\ s_3]^T$, sometimes referred to as $S = [I\ Q\ U\ V]^T$ [27,28]. The first component is related to the total energy in the scene (polarized or not), the second and third components are related to linear polarization, and the fourth component is related to circular polarization. A convenient representation of the Stokes vector is the Poincaré sphere [29], as described in Figure 2. In the equatorial plane (in pink) lie the purely linear polarization states considered in our review. Adapted from original material under CC-BY license [30].
The effect of any optical element transforming an input Stokes vector $S_{in}$ into an output vector $S_{out}$ can be described by a 4 × 4 Mueller matrix $M$ such that [31,32]:

$$S_{out} = M \, S_{in}$$

Mueller matrix estimation can be used to study and classify materials [33-37] or for biomedical applications [38,39]; however, in the following, we will restrict ourselves to Stokes estimation.
In outdoor robotic conditions, i.e., in environments with passive illumination, beams with significant elliptical polarization are rarely encountered, since single scattering only produces linear polarization [40,41]. For practical purposes, we will therefore limit ourselves to linear polarization in the description of the Stokes formalism and thus consider that $s_3 = 0$, which should be confirmed for each application.
Estimating the linear Stokes vector can be performed through the measurement of four elementary intensities through a linear polarizer oriented at 0°, 45°, 90°, and −45°, therefore named $I_0$, $I_{45}$, $I_{90}$, and $I_{-45}$:

$$s_0 = \frac{1}{2}\left(I_0 + I_{45} + I_{90} + I_{-45}\right), \qquad s_1 = I_0 - I_{90}, \qquad s_2 = I_{45} - I_{-45}$$

This is probably the most popular method for capturing the linear Stokes parameters. However, reduced schemes using only three measurements also exist [42]. It should be noted that the choice of configuration affects the system condition number, which impacts performance metrics such as the signal-to-noise ratio (SNR) [43]; this was studied specifically and thoroughly for a four-polarizer filter array sensor in Ref. [44]. Moreover, it has been demonstrated that the polarization angles used for the polarization state analyzer (PSA) that minimize the influence of noise form a regular polyhedron in the Poincaré space (which is a unit disk for linear polarization or a unit sphere in the general case) [42].
It should be noted that $S$ is not an algebraic vector (it has no additive inverse, for instance) and that not every vector in $\mathbb{R}^4$ is a Stokes vector. The Stokes vector components must fulfill:

$$s_0 \geq \sqrt{s_1^2 + s_2^2 + s_3^2}$$

A generalized measurement framework can be derived for any polarimeter using N polarization channels. The principle is to perform intensity measurements in N different configurations of the PSA after proper calibration, meaning proper determination of the N analyzer vectors. Since in our case $s_3 = 0$, we can write the following:

$$I = W S$$

where $I = [I_1 \dots I_N]^T$ gathers the N intensity measurements and $W$ is the polarimetric measurement matrix formed by the analyzer vectors. Provided the N configurations are properly chosen, the Stokes vector is estimated by using $\hat{S} = W^+ I$, where $W^+$ is often called the Data Reduction Matrix (DRM) or the analysis matrix, computed using the pseudo-inverse method [45,46]. One can derive polarization metrics from the linear Stokes vector, for instance the Degree of Linear Polarization (DoLP):

$$\mathrm{DoLP} = \frac{\sqrt{s_1^2 + s_2^2}}{s_0}$$

and the Angle of Linear Polarization (AoLP):

$$\mathrm{AoLP} = \frac{1}{2}\arctan\left(\frac{s_2}{s_1}\right)$$

Both DoLP and AoLP are useful for skylight navigation, as detailed in Section 3.2. In many cases, authors mention using the Degree of Polarization (DoP):

$$\mathrm{DoP} = \frac{\sqrt{s_1^2 + s_2^2 + s_3^2}}{s_0}$$

In our specific case, with no circular polarization, evaluating the DoP comes down to evaluating the DoLP.
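As a minimal, self-contained sketch of this pipeline (in Python with NumPy; the array names and the four-angle configuration are ours, chosen to match the equations above, not taken from any cited toolkit), the following computes the linear Stokes parameters, DoLP, and AoLP, and builds the DRM for the generalized N-channel case:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four intensity images taken behind
    ideal linear polarizers at 0, 45, 90 and -45 (i.e., 135) degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # redundant estimate of total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-9):
    """Degree and angle of linear polarization (AoLP in [-pi/2, pi/2])."""
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Generalized N-channel estimation with the Data Reduction Matrix (DRM):
# an ideal analyzer at angle theta measures 0.5*(s0 + s1*cos2t + s2*sin2t).
angles = np.deg2rad([0.0, 45.0, 90.0, -45.0])
W = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)   # (N, 3) measurement matrix
drm = np.linalg.pinv(W)                            # (3, N) analysis matrix
# For a stack I of shape (N, H, W): S = np.einsum('kn,nij->kij', drm, I)
```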

State-of-the-Art Polarization State Analyzers (PSA)
Two main categories of Polarization State Analyzers (PSAs) exist, both of them providing Stokes information: scanning and snapshot systems. All Division-of-Time (DoT) polarimeters belong to the class of scanning instruments, i.e., several sequential acquisitions are needed to obtain the polarimetric information, but some of these are fast enough to be compatible with robotic applications. Thus, we first present the DoT techniques, and then present the snapshot techniques, namely Replication of Aperture (RoAp), Division of Amplitude (DoAmp), Division of Aperture (DoAp), and Division of Focal Plane (DoFP).
Table 1, inspired by that of Tyo et al. [47], lists the various polarimetric imaging techniques detailed below. It is worth noting that we only include the technologies that permit the capture of two to four Stokes parameters in an efficient way. For two configurations (RoAp and DoAp, labeled with an '*'), only the acquisition of 3 Stokes parameters is reported, but obtaining 4 seems reasonably straightforward. For the Division of Focal Plane, most systems provide 3 Stokes parameters; obtaining the fourth parameter is not straightforward but has been demonstrated in prototype polarimeters.

PSA Using a Rotating Polarization Element (DoT)

Conceptually, this is the simplest kind of PSA. A single rotating element (a polarizer or a waveplate in front of a fixed polarizer) is located in the optical path between the object and the sensor. N azimuth angles are considered for the rotating element, as in Equation (5). The maximum rank of the analysis matrix W is 3, even if we multiply the measurements in various angular configurations; only a partial Stokes vector can therefore be analyzed. To analyze all the Stokes information, it is necessary to add a waveplate whose retardance is not an integer multiple of λ/2; typically, a quarter-wave plate is used. Tyo [43] showed that a waveplate with a retardance of 0.3661λ optimizes the system SNR (provided that optimum angles are chosen). This type of assembly, whether the waveplates are motorized or not, remains slow, but potentially presents the best optical quality. However, it may be necessary to register the different images before calculating the Stokes parameters (see Section 2.3.3). Such techniques have been successfully used for skylight polarization estimation [48-50].

PSA Using Liquid Crystal Cells
It is advantageous to replace the mechanically rotating polarization element with an electrically controlled liquid crystal cell. An overview of the physics and use of these liquid crystals can be found in [71]. This idea, due to Wolff [51], gave rise to many works in the 2000s. In this preliminary implementation, polarization is rotated thanks to two twisted nematic liquid crystal cells controlled in a binary manner; therefore, a maximum of four directions of analysis is obtained. It is reasonable to estimate that a rate of 25 frames per second (fps) can be reached. Gandorfer accelerated this setup thanks to ferroelectric liquid crystal cells (smectic C) [53], thus achieving 250 fps. Some experimental works improved this principle using a single tunable ferroelectric liquid crystal cell, which allows a continuous adjustment of the polarization rotation [56,72] and yields three Stokes parameters. Optimized use of modulator control and chromatism also gives access to the fourth Stokes parameter, but at the cost of higher noise [73].
Commercial liquid crystal polarimetric cameras were marketed as early as the 2000s by BossaNova [57,58]. Today, cameras based solely on liquid crystal modulators are no longer an optimum solution and have mainly been replaced by Polarimetric Filter Array (PFA) cameras (see Section 2.2.5) when only the linear polarization is of interest. However, liquid crystal modulators remain interesting for analyzing circular polarization [74] (impossible for commercial PFA cameras) and also for generating polarization states in Mueller imaging, as Polarization State Generators.
PSA Using Replication of Aperture (RoAp)

These systems are conceptually very straightforward: as many cameras and optics as needed are placed next to each other. Systems dedicated to three Stokes parameters, measuring along three or four polarization directions, have been reported [60-62]. These systems are rather expensive and require both a calibration of the different subsystems and a registration of the different polarization images before estimating the Stokes parameters.

PSA Using Division of Amplitude (DoAmp)

Use of Beam Splitters
This type of assembly is conceptually quite simple, since it consists of dividing the beam into as many sub-beams as there are measurements to be performed. In practice, if one wants to access all the Stokes information, this leads to relatively bulky setups, since four analysis arms are required. The optical elements must be of high quality, and the images must be mechanically or digitally registered. An apparently high-performance compact version has been proposed [63]; it gives access to all the linear polarization information. A simplified version consists of adopting a monostatic configuration and analyzing only two crossed polarization components by dividing the beam using a Wollaston prism. The latter makes it possible to shift the two polarization components and thus juxtapose them on the detector [64]. It should be noted that a version providing full Stokes information has already been implemented in the infrared [75].
A noteworthy approach combining DoT and DoAmp has been suggested [65]. It forms Stokes components calculated from images acquired simultaneously, therefore with a reduced shift between the different images. In this case, only the first three Stokes components are considered, and two measurement arms are used, preceded by a ferroelectric liquid crystal modulator acting as a rotator. Thus, (I0, I90) are acquired simultaneously, followed by (I45, I−45). An approach combining DoAmp and DoFP is also possible [76].

Use of PGA
The measurement of the different polarization components can also be performed via Polarization Gratings Arrays (PGAs) [77,78]. They are composed of anisotropic diffraction optical elements that spatially separate the polarization information. PGAs have the property of producing chromatic dispersion proportional to the polarization state of the light, generating a pattern that can be focused and captured on a focal plane. This technique has the advantage of capturing polarization information over a spectral band with a spectral resolution down to 1 nm [79] and allows spectropolarimetric imaging with a simple and compact design.

PSA Using Division of Aperture (DoAp)
This technique is rather similar to the division-of-amplitude method, but the system is more compact, since it uses only one camera [66], at the expense of a loss of definition in the polarization images. The optical system is also more complex. It was implemented in the mid-wave infrared but could be considered in the visible range [66].

PSA Using Division of Focal Plane (DoFP)
The idea reported in Figure 3 takes up that already proposed by Bayer for RGB cameras [80]: the pixels do not all capture the same state of polarization. An array of microfilters (aluminum nanowires), often referred to as a Polarizer Filter Array (PFA), composed of a pattern of four pixelated polarizers repeated many times over the grid, is placed in front of the sensor. These four polarizers capture the vertical, horizontal, 45°, and −45° linear polarizations. This idea, proposed by [67] and implemented in particular by Gruev et al. [68], has been commercially developed by 4D Technology [69] and especially by Sony Semiconductors, which provides sensors to camera integrators [70]. Tremendous progress has been made with this technology over the past ten years. Whereas 4D Technology puts the PFA on top of the microlens array, Sony puts the PFA between the microlens array and the sensor itself, which greatly reduces polarization crosstalk, as described in Figure 4.

Figure 4 (adapted from [81], made available under CC-BY-SA license [82]; reproduced with permission from Yilbert Gimenez [83]). Subfigure (i).a highlights a rhabdom (in pink), which can be seen as a waveguide; the cornea acts as a sensor. A section of the rhabdom is shown in Subfigure (i).b, with retinular cells made of microvilli stacks (here colored in red and blue), as described in Subfigures (i).c and (i).d. These microvilli act as polarizers. Since each rhabdom contains microvilli in crossed directions, each rhabdom allows the selection of two crossed polarizations. Since rhabdoms are shifted by 45° between the ventral and dorsal hemispheres, as depicted in Subfigures (i).e and (i).f, the eye can actually sense 4 equally spaced directions of polarization. In Subfigure (i).e, the polarization direction (red arrow) is aligned with a set of microvilli in the dorsal hemisphere, so the polarization direction is easily detected. In Subfigure (i).f, the eye has rotated by 22.5°; the polarization direction (red arrow) is aligned with none of the sets of microvilli in the dorsal or ventral hemispheres, so the eye cannot detect the polarization direction. Subfigure (ii) describes a modern polarization-sensitive camera sensor, such as the Sony Polarsens IMX264MZR, which mimics the mantis shrimp eye, with micropolarizers of different orientations placed side by side in front of the photosensitive sensor. In both schemes, most rays (depicted as green arrows) hit the right pixel. For the on-glass scheme, some oblique rays (red arrows) may hit the wrong pixel, which is not possible with the on-chip scheme. Therefore, the on-chip scheme used in Sony PolarSens sensors, with the PFA between the microlenses and the sensor, greatly reduces polarimetric crosstalk.
Commercially available cameras have resolutions of 5 to 12 Mpixels and provide 12-bit information with moderate noise for less than USD 1500. Depending on their communication interface, they can be operated at up to 90 fps. As far as the acquisition of linear polarization is concerned, they have supplanted the DoT and DoAmp technologies. Since the operating rate only depends on the sensor technology, PFA cameras able to operate at up to 7000 fps have been reported [97,98]. To acquire the complete Stokes vector, one can combine two PFAs (one of which is equipped with a retarder waveplate) [76] in a hybrid DoAmp-DoFP architecture, or place a liquid crystal modulator in front of the camera (DoT-DoFP architecture) [99]. A laboratory device acquiring the full Stokes information using a single PFA has been proposed [100].
In DoFP PFA systems, the most common polarization arrangement is a 2 × 2 repeating pattern of analyzers, introduced by Chun et al. [67]. Other spatial arrangement patterns for micropolarizers have been found to be less sensitive to visual artifacts in the reconstructed images [101-103], but none have been implemented in a camera to our knowledge.

Calibration and Preprocessing Operations
The aforementioned hardware systems, whatever their characteristics, provide raw data that must be preprocessed before being used for navigation operations. Such raw data, without any corrections or preprocessing, usually result in polarimetric data full of artifacts.

PSA Calibration
To precisely estimate the Stokes vector from intensity measurements, W must be estimated very accurately. This compensates for the imperfect polarization optics, i.e., the transmission, diattenuation, and polarization angle characteristics. A first solution consists of a component-wise calibration using a reference metrology polarimeter [104]. Another popular solution proves to be a block calibration of the whole system from the camera responses. It consists of generating a set of M well-known reference polarization states using an 'ideal' polarizer, the Stokes vectors of which are gathered in a matrix named $S_m$, and taking a set of N measurements with the PSA for each of the reference states, gathered in a matrix named $I_m$. Therefore, we can write the following:

$$I_m = W S_m$$

An estimate of W can then be computed using the pseudo-inverse method [45,46] as $\hat{W} = I_m S_m^+$, and thus the Stokes vector is estimated by using $\hat{S} = \hat{W}^+ I$, where $\hat{W}^+$ is often called the Data Reduction Matrix (DRM). Alternative estimators have also been considered, like the Singular Value Decomposition (SVD) [105] or the Eigenvalue Calibration Method (ECM) [106]. Some works assume that the polarization measurement is mainly affected by signal-dependent Poisson shot noise and Gaussian noise. These considerations have been used to select the optimal reference polarization states taking into account both Poisson and Gaussian noises [85,107,108].
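As a minimal illustration of this block calibration (a sketch under the assumption of noiseless reference measurements; the matrix names follow the notation above):

```python
import numpy as np

def calibrate_psa(I_m, S_m):
    """Block calibration of a PSA from reference states.
    I_m: (N, M) intensities measured on N channels for M reference states.
    S_m: (3, M) known linear Stokes vectors of the reference states.
    Returns the estimated measurement matrix W (N, 3) and the DRM (3, N)."""
    W_hat = I_m @ np.linalg.pinv(S_m)   # W^ = I_m S_m^+
    drm = np.linalg.pinv(W_hat)         # W^+, the Data Reduction Matrix
    return W_hat, drm

# Usage: once calibrated, an unknown Stokes vector is recovered from its
# N measured intensities I (shape (N,)) as: s_hat = drm @ I
```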
Depending on the type of PSA, calibration may be simplified; for instance, a CCD camera equipped with a rotating polarizer may not require a pixel-to-pixel characterization. A polarimeter including a liquid crystal (LC) cell will require a careful characterization of the LC cell (whose behavior may depend on wavelength and temperature). A PFA CMOS camera may require a full characterization, since both the CMOS sensor and the PFA exhibit pixel-to-pixel variations [109].
Calibration methods have been specifically designed for DoFP PFA polarimeters, like the super-pixel method [84,87,110], which jointly calibrates a group of 2 × 2 pixels instead of calibrating each pixel independently. A recent study evaluated the efficiency of this method for extreme camera lens configurations (focal lengths and apertures) [89]. Other advanced calibration methods have been studied which do not need precise and cumbersome instruments [111,112] or spatially uniform illumination [85]. In some cases, for instance if the camera is used in an 8-bit mode, especially when it is based on a Sony PolarSens sensor, a single overall calibration may be sufficient. In this case, only the average figures of the transmission ratio and the orientation angle are considered over the whole image [113].

Spatial Reconstruction of DoFP Images
In the case of DoFP polarimeters, there is a spatial sampling of the analyzers, i.e., the focal plane array is spatially modulated. Thus, each pixel senses only a specific polarization state of a specific point in the scene. A very easy solution consists of subsampling the original raw image into four linear polarization direction images, but this operation produces Instantaneous Field of View (IFoV) errors, resulting in strong artifacts in the DoLP and AoLP images. A registration seems mandatory [114,115]. An alternative consists of reconstructing the directional images produced by PFA cameras to their full resolution, in order to avoid possible interpretation errors by computer vision algorithms. This aims to estimate the values of each of the missing polarization channels at each pixel location. The operation is called demosaicing and has been extensively studied in the literature, especially for RGB Bayer microfilter patterns.
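As an illustration, the sketch below (in Python; the assumed 2 × 2 layout must be checked against the actual sensor documentation) shows the naive subsampling of a PFA raw frame and a basic bilinear demosaicing of one channel; production pipelines use the more advanced methods cited above:

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed 2x2 micropolarizer layout (check your sensor's datasheet):
# [[ 90, 45],
#  [135,  0]]
OFFSETS = {90: (0, 0), 45: (0, 1), 135: (1, 0), 0: (1, 1)}

def subsample(raw):
    """Split a raw PFA mosaic into four half-resolution channel images.
    Fast, but introduces the IFoV errors discussed above."""
    return {a: raw[r::2, c::2] for a, (r, c) in OFFSETS.items()}

def demosaic_bilinear(raw, angle):
    """Full-resolution estimate of one channel: zero-fill its samples,
    then interpolate the missing pixels bilinearly by convolution."""
    r, c = OFFSETS[angle]
    sparse = np.zeros_like(raw, dtype=float)
    sparse[r::2, c::2] = raw[r::2, c::2]
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    return convolve(sparse, kernel, mode='mirror')
```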

Spatial Registration of Elementary Polarization Images
Many polarimeters exhibit a spatial shift between the various polarization channels. It can be due to an imperfect alignment (Division-of-Amplitude polarimeters), to imperfect components, for instance a wedge effect (Division-of-Time polarimeters), or to a rigid relative motion between the camera and the scene (Division-of-Time polarimeters). This shift, even if it is a subpixel shift, is likely to produce strong artifacts, for instance in DoLP images [126]. Efficient solutions can be applied to circumvent this phenomenon [127,128].
Images produced by PFA DoFP cameras also exhibit a shift when subsampled without demosaicing (see Section 2.3.2) and should also be registered [129].
Images produced by scanning polarimeters exhibit polarization artifacts at the edges of moving objects. This can be solved by optical flow techniques [130], but this solution remains rather computationally intensive [131]. For this very reason, DoFP PFA polarimeters are now much more popular than DoT polarimeters.

Denoising Polarization Images
As with any imaging system, an imaging polarimetric system is likely to be affected by noise. The literature dealing with this topic is abundant; a starting point could be Refs. [43,132-134]. This noise can be due to physics (Poisson noise) or to imperfect or miscalibrated instruments (Gaussian noise, shift between polarimetric bands, etc.). Modern PSAs, provided they are carefully used and carefully calibrated, can produce intensity images with a reduced noise level, but the polarimetric pipeline leading to DoLP and AoLP estimation may amplify this noise [135]. Recently, a metric named Accuracy of Polarization Measurement Redundancy (APMR) was proposed to quantify this polarimetric noise [136] with regard to intensity images, but it is not necessarily correlated with the noise affecting the Stokes parameters.
To reduce noise, temporal averaging of intensity images is an obvious solution, although it is not always practicable. Filtering solutions originally dedicated to luminance or intensity imaging, such as BM3D [137], can be successfully adapted to polarization imaging after Stokes estimation [138]. Another popular solution consists of using more than the minimum of three configurations of the PSA necessary to reconstruct the first three Stokes parameters, which is naturally possible with commercial PFA DoFP architectures, which include four polarization directions by construction. It can be combined with other solutions consisting of calibrating the PSA (see Section 2.3.1) and, if necessary, compensating for IFoV errors (see Section 2.3.2).

Extension to Multispectral Polarimetric Sensing
The principles described earlier are compatible with broadband imaging, since linear polarizers usually exhibit a rather flat response over the visible range (and beyond). Nevertheless, when using waveplates or liquid crystal devices, a narrow spectral filter may be required, since the retardance of such components depends on the wavelength, even for achromatic waveplates. Not taking this phenomenon into account results in errors in the estimation of the DoLP [139].
Multispectral sensing can be considered, generally at the expense of a further division of space or time [140]. The first implementations consisted of a rotating wheel equipped with spectral filters [141,142]. As a snapshot alternative, in the past few years [143], color polarization filter arrays, referred to as CPFAs, have been commercially released [70]: they mix two principles, the CFA (Color Filter Array) and the PFA, as described in Figure 5. With such devices, we obtain 12-channel mosaiced images; the information is then rather sparse for each channel. Efficient demosaicing algorithms are required to prevent color and polarization reconstruction artifacts [125,144,145]. Alternative geometries combining CFA and PFA have been proposed in order to maximize the signal-to-noise ratio or minimize the reconstruction artifacts [145-147].

Summary and Future Directions in Embedded Polarization Sensing
Efficiently capturing linear polarization in 2D in the field of autonomous navigation could benefit from several recent technological developments.
First, most snapshot polarimeters capture the filtered intensities in only one spectral band and in the visible part of the spectrum. This is no longer a real limit with color PFAs, at the expense of a loss in spatial resolution and an attenuation due to the use of spectral filters. This latter point has recently been overcome with a very promising solution consisting of using metasurfaces as routers [148]. The loss of spatial resolution could also be solved through the use of vertically stacked detectors, mimicking the mantis shrimp's eye, as suggested and implemented by Garcia et al. [149,150] and Altaqui et al. [151]. This could enable the snapshot capture of spectropolarimetric information [152], which could be relevant to computer vision algorithms such as, for example, visibility restoration [153].
Second, robotic navigation using polarization sensing only considers linear polarization, since skylight contains mainly linear polarization information. Using circular polarization may help, especially when navigation safety is concerned, as suggested by Geng et al. [154]. This will require that full-Stokes snapshot PSAs become available. Several solutions could be considered, for instance using two PFA cameras [76], but promising solutions using only one camera also exist [155].
Finally, it seems that imaging sensors have already reached a level of maturity that has enlarged the audience of polarimetry. Open-source software toolkits such as Polanalyzer [156] and Pola4all [157] will be useful in the near future to help end users, in robotics applications and beyond, implement efficient solutions.

Historical Overview of Polarization Navigation
Historically, Viking navigators are assumed to have been the first to exploit polarized light for navigation and exploration purposes. Viking navigators ruled the North Atlantic Ocean for about three centuries, between about AD 900 and 1200. Their main sailing route followed the 60°21′55″ N latitude between Norway and Greenland. They used a sun compass to determine geographical north instead of a magnetic compass. It has been hypothesized that when the sun was invisible or below the horizon, Viking navigators determined the direction of polarization of skylight with sunstones (dichroic/birefringent crystals acting as polarizers) and then estimated geographical north using the sunstone as a sun compass [14-17]. These studies suggest that sky-polarimetric navigation is surprisingly effective on the days of both the spring equinox and the summer solstice, even under cloudy or foggy conditions. This sunstone-based compass would explain how Viking navigators could reach North America without a magnetic compass.
The United States was, 750 years later, the first to exploit this type of navigation for military purposes. In 1949, the US Army purchased four Pfund sky compasses to equip its Air Force [158]. During the Cold War, monitoring Alaska and crossing the North Pole to target the Union of Soviet Socialist Republics (USSR) was militarily vital, and magnetic compasses were not able to indicate a course there. Lieutenant Commander Alton B. Moody reported that the Pfund sky compass had a heading accuracy of 1°. It was an optical instrument used manually by rotating a half-wave plate against a linear polarizing filter. In 1954, Scandinavian Airlines (SAS) launched the first scheduled passenger flight between Copenhagen (Denmark) and Los Angeles (USA), the world's first polar shortcut, using the Pfund sky compass principle. This new route reduced the travel time between California and Scandinavia from 36 to 22 h. SAS made further improvements and used the sky compass for many years on its polar flights. Since then, navigation based on polarimetric information has fallen by the wayside due to high-precision inertial navigation and GNSS navigation.

The Skylight Polarization Pattern
The most common way to describe atmospheric scattering near the visible spectrum is the Rayleigh model (1871, [184,185]), which describes all the electromagnetic field properties by considering single elastic scattering by particles much smaller than the wavelength [186-188]. In the Rayleigh scattering model, there are two points featuring null DoLP: the sun direction and the anti-sun direction. However, atmospheric turbidity and multiple scattering cause differences between the Rayleigh scattering model and the actual skylight polarization, which limits its trustworthiness and its use in robotics for determining an accurate celestial heading [189].
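For reference, the single-scattering Rayleigh DoLP only depends on the angular distance γ between the viewing direction and the sun: DoLP = DoLP_max · sin²γ / (1 + cos²γ). A minimal simulation sketch follows (in Python; the dolp_max factor, crudely accounting for atmospheric depolarization, is our assumption, not part of the ideal model):

```python
import numpy as np

def unit(az, el):
    """Unit vector from azimuth/elevation (radians), z pointing up."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def rayleigh_dolp(view_az, view_el, sun_az, sun_el, dolp_max=0.8):
    """Single-scattering Rayleigh DoLP for one sky viewing direction,
    given the sun position (all angles in radians). The DoLP is null
    toward the sun and the anti-sun (gamma = 0 or pi)."""
    cos_gamma = np.clip(unit(view_az, view_el) @ unit(sun_az, sun_el), -1.0, 1.0)
    sin2_gamma = 1.0 - cos_gamma**2
    return dolp_max * sin2_gamma / (1.0 + cos_gamma**2)
```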
Whatever the position of the sun in the sky or below the horizon, the pattern of angles or degrees of polarization is symmetrical with respect to the solar and anti-solar meridians [190], which are formed by the semicircles passing through the zenith and the sun (Figure 6). It is precisely these symmetries that have led navigating insects to use polarization as a form of heading information [13]. The skylight polarization pattern can be simulated either by using the Berry model [191-193] or by a more complex approach based on Mie scattering and Monte Carlo simulations [194]. The limitation of the Rayleigh model is due to atmospheric turbidity and multiple scattering. Indeed, the global polarization pattern does not correspond to the Rayleigh model, and we can actually observe four points featuring null DoLP [195-198], which are called neutral points. As shown in Figure 7, these four neutral points are named Brewster (below the sun), Babinet (above the sun), Arago (above the anti-sun), and the Fourth (below the anti-sun). They are located either on the solar meridian or on the antisolar meridian (Figure 7); however, for a fixed sun position, their respective elevations vary with the level of atmospheric turbidity and the wavelength of light [196,197].
The Berry model, more accurate than the Rayleigh model, can be useful for better heading measurements [199,200] relying on neutral point detection [201], but it cannot be fully used as a pattern prediction model, because the positions of the neutral points are strongly modified by air pollution, clouds, and debris from large volcanic eruptions [187,202], and because it lacks accuracy near the sun and the horizon. A recent study [203] considered the influence of the solar altitude angle on the neutral point position variations to model the pattern of polarized skylight. This model greatly improved the similarity between simulation and measurement data, but it was only developed for clear-sky conditions and short periods of time (a couple of hours) [203].

Figure 7. The Berry model extends the Rayleigh model (Figure 6) by introducing four neutral points, named Brewster (below the sun), Babinet (above the sun), Arago (above the anti-sun), and the Fourth (below the anti-sun). However, the solar-antisolar meridian symmetry remains in the polarization pattern. The points (O) and (Z) represent, respectively, an observer and the zenith.

Polarization-Based Sensors Dedicated to Navigation
There are three main families of "polarimetric" heading detection techniques: the imaging (Stokes) method (conventional, see Figure 8a); the imaging method by optical transformation by means of a waveplate (S-waveplate or linear waveplate, Figure 8b); and the non-imaging method, or biomimetic approach, by means of a set of photoreceptors, each one covered by a polarizing filter (Figure 8c).

Terrestrial robots
Insects possess photoreceptors in the dorsal region of their eyes (the Dorsal Rim Area, DRA) that are specialized in detecting the pattern of polarized skylight [204]. The first robotic application of the desert ants' DRA was implemented on board a mobile robot called Sahabot, in 1997 [205] and again in 2000 [10] (Figure 9a). The first robust ant-inspired celestial compass, composed of only two photodiodes covered with rotating polarized filters (Figure 8c), was implemented on board a hexapod robot called AntBot [11,12,206] (Figure 9b). The median AoLP error of the AntBot method was 0.4° under a clear sky and 0.6° under overcast weather [206].

Aerial robots
A pair of polarized-light sensors based on a group of six photoreceptors, each photoreceptor being covered by a piece of polarizing filter, was tested under a clear sky (Figure 9c) and led to a heading accuracy of 0.2° [207]. A single unit, shown in Figure 9c, was mounted on board a quadrotor (Figure 9d), providing an outdoor heading accuracy better than 2° with an output refresh rate of 10 Hz [208].
Yang et al. took inspiration from the insects' DRA to design and manufacture a POL unit (a pair of photosensors covered with a pair of orthogonal polarized filters, see Section 3.3.3 for further information) based on a polarizing beam splitter (PBS) in order to avoid the quadrature error of polarizing filters [170]. Each POL unit is 5.5 × 5.5 × 6.5 cm in size, weighs 50 g, and has a heading accuracy of 0.12° with an output refresh rate of 10 Hz [170]. Three POL units were mounted on board a 6 kg six-rotor Unmanned Aerial Vehicle (UAV) [209].
In static experiments, they reached a three-dimensional accuracy of less than 0.2°; in dynamic experiments, they reached a three-dimensional accuracy of 2.9° in pitch and 1.9° in yaw and roll [209]. Adding a Polarization-based Compass (PbC) to an integrated navigation system is relevant to better estimate the attitude in flight. In dynamic experiments on board a six-rotor UAV, Qiu et al. demonstrated that the estimation error of the integrated navigation system could reach a value as low as 0.3° around each axis [210]. A polarized camera (based on the Sony sensor IMX250MZR, 2448 × 2048 pixels, 24 fps, as mentioned in Section 2.2.5) was mounted on board a 15 kg six-rotor UAV [211]. The polarimetric images were processed by a gated recurrent unit (GRU) neural network, generating an output refresh rate of 10 Hz with a heading accuracy of 0.5°. The dynamic experiment was performed at an altitude of 310 m over a flight distance of 500 m [211].
Non-imaging and imaging-based PbCs are therefore relevant for both heading and attitude measurements in outdoor navigation [212]. PbCs are useful in GNSS-denied or magnetically disturbed environments in which the sky dome is still visible. PbCs can therefore significantly improve both heading and attitude measurements when fused with inertial sensors [211].
Field experiments with a polarimetric camera (BFS-U3-51S5P-C, FLIR Systems Inc., Wilsonville, OR, USA; based on the Sony sensor IMX250MZR) equipped with a 185° fisheye lens (FE185C57HA-1, Fujinon Corporation, Saitama, Japan) achieved real-time, robust, and accurate performance under different weather conditions, with a Root Mean Squared Error (RMSE) of 0.1° under a clear sky, 0.18° under an overcast sky with a thin layer of clouds, and 0.32° under an isolated thick cloud cover [213]. This level of accuracy is relevant for military applications, where the bearing of true north must be detected with an accuracy better than 0.1°.

Automotive applications
Celestial compasses have also been embedded on top of automobiles to estimate their heading in dynamic experiments, with an RMSE of 0.81° around a park [214], an RMSE of 0.55° along a straight boulevard [164], and an RMSE of 1.86° in an urban environment [165]. Of course, dynamic experiments are more relevant for robotic applications than static experiments, and the heading accuracy is strongly affected by movement in dynamic experiments (e.g., an RMSE of 0.28° in static experiments versus an RMSE of 0.81° in dynamic experiments in [214]).

Ant-inspired path integration
Several robots have been fitted with a PbC in order to implement ant-inspired path integration [215]. Originally, the mobile robot Sahabot 2 (Figure 9a) achieved a homing error as small as 0.2% (ratio of homing error to traveled distance) along a traveled distance of 70 m in the desert [10]; the hexapod robot AntBot (Figure 9b) had an error as small as 0.7% along a traveled distance of 7 m over flat terrain; and the mobile robot Turtlebot2 had an error of around 1.1% along a 45 m road [216]. Zhou et al. also tested their PbC/INS along a traveled distance of 125 m comprising 14 checkpoints and reached a positioning error along the trajectory of approximately 0.5% [216], which represents the longest traveled distance of all ant-inspired robots, as far as we know. All these results need to be confirmed over longer distances and in various environments, but adding a PbC to an integrated navigation system is always beneficial and can improve the heading and trajectory accuracy by about 40% [22].
The low level of positioning error is certainly correlated with the output refresh rate of the PbC, which was 1 Hz on the Turtlebot2 [216], barely 0.05 Hz on board AntBot [11,12,206] (Figure 9b), and an analog output on the Sahabot 2 (no information found in [10]). However, non-imaging and imaging-based PbCs can now reach 10 Hz [170,209,211], which could make it possible in the next decade to evaluate ant-inspired path integration over distances of several hundred meters and observe whether the positioning error remains bounded in the 0.2-0.5% range.
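A minimal sketch of such an ant-inspired path integrator is given below (ours, not the Sahabot or AntBot implementation): a home vector is accumulated from PbC headings and odometric step lengths, and homing then consists of steering along the reversed vector.

```python
import numpy as np

class PathIntegrator:
    """Minimal ant-inspired path integrator: accumulate a 2-D home vector
    from a compass heading (e.g., from a PbC) and odometric step lengths."""
    def __init__(self):
        self.home_vector = np.zeros(2)

    def step(self, heading_rad, distance):
        # Integrate the displacement of one stride / odometry interval.
        self.home_vector += distance * np.array([np.cos(heading_rad),
                                                 np.sin(heading_rad)])

    def homing_command(self):
        # Bearing and distance to steer straight back to the start point.
        d = np.linalg.norm(self.home_vector)
        bearing = np.arctan2(-self.home_vector[1], -self.home_vector[0])
        return bearing, d
```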

Celestial Compasses Based on Stokes Methods
When designing a pixelated polarized-light compass based on the Stokes formalism (see Figure 8a and Section 2.1), it is crucial to take into account the various errors of the sensors, such as biases, gains, and mechanical errors (also called installation errors) due to alignment errors between the CCD pixel array and the micropolarizer array [217]. These errors are also considered for non-imaging sensors such as a compound-eye polarization compass [218]. As a consequence, efficient calibration methods have been developed to compensate for the various errors [84,89,111,112,166,219,220]. The orientation of the camera in the inertial frame, also called the camera model, can be determined by means of an inertial measurement unit or from solar observation for a camera equipped with a fisheye lens [221-223]. A standard method used to estimate the heading relies on the fact that E-vectors are perpendicular to the sun vector, leading to the calculation of eigenvalues [217,224]. Other studies exploit the symmetry pattern of the AoLP [172], also called the ∞ characteristic model [225]. Similarly, a few studies have applied the Hough transform to the AoLP pattern [200,226].
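A sketch of this eigenvalue method follows (in Python; it assumes the per-pixel 3-D E-vectors have already been computed from the AoLP image and the camera model, and it resolves the remaining sun/anti-sun ambiguity crudely by picking the upper hemisphere):

```python
import numpy as np

def sun_vector_from_evectors(E):
    """Estimate the sun direction from an (N, 3) array of measured 3-D
    E-vectors, one per sky pixel. Under the single-scattering assumption
    every E-vector is perpendicular to the sun vector, so the sun
    direction is the eigenvector of sum(e e^T) with the smallest
    eigenvalue (least-squares minimizer of sum((e_i . s)^2))."""
    A = E.T @ E                           # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues sorted ascending
    s = eigvecs[:, 0]                     # smallest-eigenvalue eigenvector
    return s if s[2] >= 0 else -s         # pick the upper-hemisphere sign
```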

Celestial Compasses Based on Imaging Methods by Optical Transformation
In 2016, Zhang et al. proposed a previously unexplored method based on a photosensor coupled to a radial polarizer [227]. This method can be classified as an aperture-coded light field capture method. As shown in Figure 8b, the grid of linear polarizing filters is replaced here by an "S-waveplate" and a linear polarizing filter acting as a radial polarization converter. However, the choice of a Raytrix-R29 light field camera limited the maximum frame rate to 1 Hz, which is not suitable for mobile robots with fast dynamics, even though it reached a high accuracy (<0.2°) with a 300 × 300 pixel image resolution [227].
The PbC referred to in [227] uses a variant of an optical component called an S-waveplate, which has spatially variable properties: depending on the zone where the light ray penetrates, the birefringence and the slow/fast axes will not necessarily be identical. Such a waveplate has the property of transforming a beam with a homogeneous distribution of polarization states into another polarization state (in particular, it transforms a spatially homogeneous, linearly polarized beam into a radially polarized beam). The PbC designed by Zhang et al. [227] is therefore not based on the variation in incidence of the rays through a spatially homogeneous component, but on the variation in the spatial properties of the component itself, called here an "S-waveplate". A conventional waveplate (also known as a retarder plate) consists of a spatially homogeneous optical material with a certain amount of birefringence, which affects the state of the incident polarization in the same way for two parallel rays passing through the plate along different paths. In 1944, Bernard Lyot theorized the dependence of retardance on the incidence of rays in the case of a homogeneous birefringent material, in order to develop his own polarization-based wavelength filter [228]. Zhang et al. [227] only tested their S-waveplate on skies with a fairly homogeneous luminosity and an overall high DoLP (basically blue skies with no sun or clouds in the field of view).
Poughon et al. proposed another heading sensor architecture based on polarization pattern estimation using a conventional waveplate [229,230] (see Figure 8b). This optical architecture, called PILONE, is based on the variations in the retardance of a waveplate as a function of the angle of incidence of the polarized light rays, resulting in the appearance in the image of iridescent colors depending on the orientation of the incident rays and their state of polarization (Figure 11). The estimation of the sun orientation from clear-sky images with the sun artificially hidden is based on a convolutional neural network [229]. Working with a Raspberry Pi color camera capable of a 30 fps frame rate, this architecture may be relevant for real-time mobile robotics, with preliminary results showing a clear-sky accuracy in the 1° range with undersampled 64 × 64 pixel images [229,230]. The PILONE PbC results in a low-cost, lightweight sensor that would cost about the same as the color camera used (here a Raspberry Pi wide-angle camera, i.e., a few dozen euros), which may be relevant for applications in both the automotive industry and robot manufacturing.

Celestial Compasses Based on Non-Imaging Methods or Biomimetic Approaches
Non-imaging methods for implementing a PbC (Figure 8c) can be sorted into two main categories. The first relies on the Malus law [231], which gives the output signal $S_i$ of a pixel combined with a polarized filter, defined as follows:

$$S_i = \frac{K I}{2}\left[1 + d\cos\left(2(\Phi - \phi_i)\right)\right] \qquad (10)$$

where $K$ is the pixel gain, $I$ is the incident light intensity, $\Phi$ is the polarization azimuth seen by the compass, $\phi_i$ is the theoretical orientation of the polarized filter of the i-th channel, and $d$ is the ratio of the intensity of fully polarized light to the total light intensity.
Equation (10) can be written in a matrix form to estimate the various parameters K, d, I, and Φ from the measurements $S_i$ by applying a standard nonlinear least-squares method (a linearized sketch is given after Figure 12). The second category concerns methods based on the so-called Labhart model of the POL neuron in crickets, the firing frequency of which is a sinusoidal function of the e-vector orientation [204,232-234]. As depicted in [204], a POL unit implements the log ratio of two photosensors S with two orthogonal polarized filters (here $\phi_1 = 0°$ and $\phi_2 = 90°$):

$$p_1 = \log\left(\frac{S_1}{S_2}\right) \qquad (11)$$

As depicted in Ref. [235], the AoLP and DoLP can be calculated from Equation (11) by means of an analog logarithmic amplifier [171] and by orienting several POL units (Figure 12) along various orientations. The logarithmic amplification gives the ability to deal with a large range of lighting conditions spanning several decades. Thus, it becomes possible to make an array of POL units distributed on a planar surface [236], along a circular shape [167,237,238], or even along a spherical shape mimicking a compound eye [239-241]. Moreover, a spherical sensor composed of several POL units was used to compensate for the alignment errors of an inertial measurement unit on the basis of sun and star vectors [241].
Figure 12. Model of a POL unit that accounts for the e-vector response of two cricket photoreceptors endowed with orthogonal polarized filters (noted here as 1 and 2). The POL neuron (output signal $p_1$, see Equation (11)) computes the log ratio of the two photoreceptors' output signals ($S_1$ and $S_2$). Adapted with permission from the Journal of Experimental Biology [234].
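The following sketch estimates Φ and d from Equation (10) via a standard linearization (our alternative to the nonlinear fit mentioned above; variable names are ours): writing $S_i = a_0 + a_1\cos 2\phi_i + a_2\sin 2\phi_i$ with $a_0 = KI/2$, $a_1 = a_0\, d\cos 2\Phi$, and $a_2 = a_0\, d\sin 2\Phi$, the problem becomes an ordinary linear least-squares problem.

```python
import numpy as np

def fit_malus(phi, S):
    """Estimate the polarization azimuth Phi and the degree d from
    POL-unit readings. phi: (N,) filter orientations in radians;
    S: (N,) measured intensities, N >= 3."""
    A = np.stack([np.ones_like(phi), np.cos(2 * phi), np.sin(2 * phi)], axis=1)
    a0, a1, a2 = np.linalg.lstsq(A, S, rcond=None)[0]
    Phi = 0.5 * np.arctan2(a2, a1)   # polarization azimuth (AoLP)
    d = np.hypot(a1, a2) / a0        # degree of linear polarization
    return Phi, d
```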

Polarization-Based Geolocalization Using Solar Ephemeris
Yang et al. [180] and Zhang et al. [182] proposed a polarization-based geolocation method relying on the maximum degree of polarization to find the sun position with an artificial compound eye comprising 54 photodetectors, yielding a coarse geographic position (latitude error: 0.11°, longitude error: 0.08°, spatial error: hundreds of kilometers). Powell et al. [242] proposed an underwater polarization-based geolocation method using an imaging method and reached a spatial error of about 100 km.
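To illustrate the principle (not the algorithms of [180,182,242]), the sketch below recovers a coarse position by grid search, comparing sun elevations and azimuths derived from polarization measurements against a deliberately simplified solar ephemeris (no equation of time, hence only degree-level accuracy):

```python
import numpy as np

def sun_dir(lat, lon, doy, utc_h):
    """Very simplified solar ephemeris. lat/lon in degrees (arrays allowed),
    doy = day of year, utc_h = UTC hours. Returns (elevation, azimuth from
    north, eastward) in radians."""
    decl = np.radians(23.44) * np.sin(2 * np.pi * (284 + doy) / 365.0)
    h = np.radians(15.0 * (utc_h - 12.0) + lon)   # hour angle
    phi = np.radians(lat)
    el = np.arcsin(np.sin(phi) * np.sin(decl)
                   + np.cos(phi) * np.cos(decl) * np.cos(h))
    az = np.arctan2(-np.cos(decl) * np.sin(h),
                    np.cos(phi) * np.sin(decl)
                    - np.sin(phi) * np.cos(decl) * np.cos(h))
    return el, az

def geolocate(sun_obs, doy, grid=1.0):
    """Coarse grid search for (lat, lon) given a list of observations
    (utc_h, measured elevation, measured azimuth), e.g., derived from
    the sky polarization pattern."""
    lats, lons = np.meshgrid(np.arange(-65, 65, grid),
                             np.arange(-180, 180, grid), indexing='ij')
    cost = np.zeros_like(lats)
    for utc_h, el_m, az_m in sun_obs:
        el, az = sun_dir(lats, lons, doy, utc_h)
        cost += (el - el_m) ** 2 + np.angle(np.exp(1j * (az - az_m))) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return lats[i, j], lons[i, j]
```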

Polarization-Based Geolocation Using the North Celestial Pole (SkyPole Algorithm)
The use of solar ephemerides combined with an estimation of the sun's position through the polarization pattern enables direct geographical positioning [180]. Nevertheless, animals do not have access to these ephemerides (Figure 13b), and the utilization of the polarization pattern (Figure 13a) as a reference for their navigation remains poorly understood. In 2023, an alternative method inspired by migratory birds was proposed [254]. Migratory birds calibrate their magnetic compass through the celestial rotation of night stars or the daytime polarization pattern [3,255]. Similar to Brines [256], the temporal properties of the sky's polarization pattern were considered as relevant navigation information. For this purpose, a bio-inspired method to find geographical north and the observer's latitude was developed [254], requiring only skylight polarization observations, provided here by a commercial polarimetric camera. Skylight is mostly linearly polarized and characterized by two parameters: the AoLP and the DoLP. This method consists of processing only skylight DoLP images taken at different instants in order to find the north celestial pole (NCP) from temporal invariances of the DoLP pattern (Figure 13c,d). Then, the geographical north bearing (true north) and the observer's latitude Φ (Figure 13b) can be deduced from the NCP's coordinates.

Figure 13 (from [254] under CC-BY-SA-ND license, 2023). Thresholding is applied to the DoLP images (third row); finally, the binary images are overlaid, and the NCP is located at the intersection of the radial invariance axes.
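A crude sketch of the temporal-invariance idea follows (ours, not the full intersection-of-axes algorithm of [254]): stack registered DoLP images taken at different instants, keep the most temporally stable pixels, and take their centroid as a rough NCP estimate.

```python
import numpy as np

def find_ncp(dolp_stack, quantile=0.01):
    """Locate the north celestial pole (NCP) as the most temporally stable
    region of the DoLP sky pattern. dolp_stack: (T, H, W) DoLP images taken
    at different instants, registered in the same (fisheye) sky frame."""
    temporal_std = dolp_stack.std(axis=0)
    # The rotation of the pattern about the NCP leaves the region around
    # it nearly unchanged, so keep the most invariant pixels.
    mask = temporal_std <= np.quantile(temporal_std, quantile)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()   # centroid as a crude NCP estimate
```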
To experimentally validate the NCP approach, a polarimetric camera (PHX050S-QC from Lucid Vision Labs, sensor ref. Sony IMX250MYR) equipped with a 185° fisheye lens (FE185C57HA-1, Fujinon) was used, situated on the roof of the Institut de Neurosciences de la Timone (INT), Marseille, France (43.2870135° N, 5.4034385° E). The study yielded a Mean Absolute Error (MAE) of 2.6° in azimuth and 3.8° in latitude [254].

Can the Underwater Sky Polarization Be Useful for Navigation Purposes?
In 1954, Waterman demonstrated that polarized light from the sky was accessible under clear water at depths of up to several hundred meters, on the basis of many behavioral observations of underwater animals in connection with their migration mechanisms [257,258]. When observed from below a calm water surface looking upward, the view above the water's surface is condensed into a cone with an angle of 97.5° due to refraction. This underwater field of view is commonly referred to as Snell's window [259]. The underwater model of polarization patterns available in calm seas is well established; however, few studies have modeled them in the presence of waves [260]. Although the ability of certain animals to use underwater polarization as a compass for navigation is still under debate, it could be worth studying the properties of underwater polarization. It has been clearly shown that the degree of polarization is stable and consistent with the sun's location at depths of 2 and 5 m only in clear waters [261]. However, the influence of water turbidity on the refraction-polarization pattern can probably be ignored within the topmost thin surface layer of seawater, where the polarization vision of aquatic animals operates in the UV range [262]. In their review, Cronin and Marshall recalled that the polarization pattern is strongly affected by the depth at which the pattern is measured [263]. These vertical variations depend on the amount and quality of suspended material in the water. As depth increases, multi-path scattering destroys the pattern coming from the sky, and only in-water scattering, produced near the observer, remains. At very low depths, the influence of wavy water surfaces on the polarization pattern has been simulated and measured [264]; it was revealed that the wind speed also has an influence on the pattern. Powell et al. proposed estimating the position of an observer by processing underwater polarization patterns with a custom-made polarimetric camera [242]. Geolocalization was achieved here by means of accurate knowledge of the time and date. The accuracy obtained was a 6 m error for every 1 km traveled, at depths ranging from 2 m to 20 m. A recent study based on deep learning reached geolocation accuracies of 55 km at a depth of 8 m and 255 km at a depth of 50 m, even in low-visibility waters [265]. Accurate heading estimation of an autonomous underwater vehicle was recently obtained by merging inertial and polarization information [266,267]. A standard deviation (SD) error of 0.83° was reached at a depth of 2 m in real oceans in calm seas [267]. Cheng et al. confirmed the deterioration of the heading measurement as a function of depth, from 0.93° at a depth of 1 m to 4.07° at a depth of 5 m [266].

Summary and Future Directions in Polarized Vision for Robotics Navigation
Combining Strapdown Inertial Navigation Systems (SINSs) or Inertial Measurement Units (IMUs) with polarized sensors is of great interest for improving the dynamics and accuracy of the estimated variables of interest (pitch, roll, yaw, position, etc.). For example, heading estimation has been considerably improved by merging an SINS with a spherical polarized sensor [240]. In addition, it has been shown that a method based on a spherical non-imaging polarimetric sensor composed of nine POL units was able to estimate the static position of the sensor with a positioning error as small as 0.07° in latitude and 0.012° in longitude [239]. Finally, an autonomous robot was able to home with half the position error of a method based on an IMU alone [216].
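As an illustration of why such fusion helps, here is a minimal complementary-filter sketch in Python (our simplified example, not the SINS/Kalman schemes of [239,240]): the gyro supplies high-rate but drifting heading increments, while the polarization compass supplies absolute but noisier heading fixes.

```python
import numpy as np

def fuse_heading(gyro_rates, pol_headings, dt=0.01, gain=0.02):
    """Complementary filter: integrate the gyro yaw rate (rad/s) at high
    rate, and gently pull the estimate toward the absolute heading (rad)
    measured by the polarization compass."""
    heading = pol_headings[0]
    fused = []
    for rate, pol in zip(gyro_rates, pol_headings):
        heading += rate * dt                            # gyro integration (drifts)
        error = np.angle(np.exp(1j * (pol - heading)))  # innovation, wrapped to [-pi, pi]
        heading += gain * error                         # correction by the absolute fix
        fused.append(heading)
    return np.array(fused)
```

The gyro keeps the short-term dynamics, while the driftless polarization fix bounds the long-term error, which is the core benefit reported for SINS-polarization fusion.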
Challenges remain. By and large, skylight polarization in the UV range has seldom been exploited for robotic applications; none of the commercial or experimental devices detects the surrounding light in a full panoramic view as insects do; and none of them can work in cloudy or extreme weather conditions [268].
Most available polarized sensors are megapixel cameras, which are bulky and expensive for applications in automotive or service robotics, in which low cost will be a prerequisite (<USD 1000). In terms of learning from insects, the desert ant Cataglyphis detects its celestial heading using only 100 ommatidia in the DRA of each compound eye (out of 1300 ommatidia per eye), each DRA ommatidium comprising six UV-sensitive photoreceptors. Detecting the celestial heading in the same manner as desert ants [2,269] therefore requires 2 × 100 × 6 = 1200 UV-sensitive photoreceptors. This number of photoreceptors is equivalent to a 34 × 34 pixel thumbnail image, which corresponds to an intermediate resolution between non-imaging and imaging sensors. Thumbnail images from 22 × 22 pixels to 64 × 64 pixels have already been used to train and validate neural networks [229,230]. Neural networks are promising solutions in robotics for processing thumbnail images in real time, but they will require the design and manufacture of dedicated artificial retinas comprising approximately one thousand pixels instead of millions of pixels.
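As an illustration of the scale involved, the following PyTorch sketch shows a compact network regressing heading from a 34 × 34 polarization thumbnail; the architecture is hypothetical (the networks in [229,230] may differ), and predicting (cos h, sin h) is one common way to avoid the 0°/360° discontinuity.

```python
import torch
import torch.nn as nn

class HeadingNet(nn.Module):
    """Hypothetical compact network: 34x34 single-channel polarization
    thumbnail in, celestial heading (radians) out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 34 -> 17
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 17 -> 8
        )
        # Regress (cos h, sin h) instead of h to keep the loss continuous.
        self.head = nn.Linear(16 * 8 * 8, 2)

    def forward(self, x):
        z = self.features(x).flatten(1)
        cs = self.head(z)
        return torch.atan2(cs[:, 1], cs[:, 0])   # heading angle in radians
```

A network of this size has only a few thousand parameters, consistent with the idea of pairing a thousand-pixel artificial retina with a small dedicated processing unit.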
Generating neural networks to detect attributes of the polarization pattern, or to denoise or interpolate polarization patterns, will require image databases, either generated by simulation or acquired by polarimetric cameras [26,270]. These image databases are now available and will be used either to train or to validate neural networks [223,271], which will be relevant in robotics for processing polarimetric images in real time. Optical transform-based imaging methods will be inexpensive but will require a bank of images for calibration.
Non-imaging methods require a grid of integrated polarizing micro filters, which will become cheaper as production methods focused on heading detection improve.
The development of miniaturized and all-day sensors will be compatible in terms of both size and cost with service robotics. These sensors' outputs will be merged into an integrated navigation system as a supplemental perceptive modality of localization to complement and reinforce the conventional techniques. Polarization patterns can also be used at night under moonlight, in the same manner as ants [272]. It has already been proven that a heading accuracy of 2.45° can be reached at night with polarized light alone; under the same conditions, combining all the moon's light pattern properties improves the accuracy to 0.5° [273]. All these experiments were carried out during full moons in favorable environments; conducting such experiments in unfavorable conditions remains a big challenge. The moon's polarization pattern can also be used for positioning at nighttime: Chen et al. obtained a positioning error within tens of kilometers, yielding a latitude accuracy of 0.62° (1σ) and a longitude accuracy of 0.02° (1σ) [274]. Yet, how all the light pattern properties (polarization and intensity information) of the sun or the moon could be combined to obtain the best level of performance remains an open question.

Polarized Vision for Scene Understanding
In nature, light polarization occurs mainly due to two physical phenomena: light scattering and light reflection [275]. As an illustration of the latter, many animal species such as water fleas and butterflies are sensitive to the polarization of light and exploit this ability to detect water [276]. This section focuses on how robots may take advantage of sensing polarized light to understand scenes through object detection, estimation of 3D shapes, and depth and pose estimation. Ref. [277] may be the first work in the computer vision literature to emphasize how the polarization parameters of light are related to the estimation of object normals. In this section, after recalling and deriving the mathematical formulae linking the polarization parameters to the normal orientation, direct applications, i.e., object detection and discrimination, will be described. Shape-from-polarization techniques, which exploit most of the physical information, will then be presented, and the section will end with the latest techniques using polarization imaging to improve depth estimation and/or facilitate pose estimation in robots.

Polarization and Reflection
The reflection model employed here is a simplified one, providing a first approximation of the use of polarimetric imaging for the detection and 3D reconstruction of objects. In practice, reflected light is a combination of two reflection components: diffuse and specular. Reaching an interface between two media with different properties, light becomes partially reflected and partially transmitted. Considering a beam traveling through the first medium (characterized by a refractive index $n_1$) then reaching the interface with a second medium (characterized by a refractive index $n_2$), the directions of the reflected and transmitted beams are defined by Snell's law:

$$n_1 \sin\theta_i = n_2 \sin\theta_t, \qquad \theta_r = \theta_i, \tag{12}$$

where $\theta_i$, $\theta_t$, and $\theta_r$ are the angles of incidence, transmission, and reflection, respectively. In addition, the incident, transmitted, and reflected beams lie in the same plane, which contains the normal of the surface and is called the plane of incidence.
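As a quick numerical check of Equation (12) (our example): for light entering glass ($n_2 = 1.5$) from air ($n_1 = 1$) at $\theta_i = 45°$,

```latex
\theta_t = \arcsin\!\left(\frac{n_1 \sin\theta_i}{n_2}\right)
         = \arcsin\!\left(\frac{\sin 45^\circ}{1.5}\right) \approx 28.1^\circ,
\qquad \theta_r = \theta_i = 45^\circ .
```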

Recall of Fresnel Formulae
The Fresnel formulae can be determined by solving Maxwell's equations while respecting the continuity conditions imposed at the interface on the electric and magnetic fields. Letting $r$ and $t$ denote the reflectivity and transmissivity ratios, i.e., the ratios of the complex amplitudes of the reflected and transmitted fields to the amplitude of the electric vector of the incident field, we have the following (see [28]):

$$r_{\parallel} = \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)}, \quad r_{\perp} = -\frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)}, \quad t_{\parallel} = \frac{2\sin\theta_t \cos\theta_i}{\sin(\theta_i + \theta_t)\cos(\theta_i - \theta_t)}, \quad t_{\perp} = \frac{2\sin\theta_t \cos\theta_i}{\sin(\theta_i + \theta_t)}, \tag{13}$$

where $\parallel$ (resp. $\perp$) denotes the component in the plane of incidence (resp. normal to the plane of incidence).

Partial Polarizer
The Mueller matrices of reflection and transmission are directly related to the Mueller matrix of a partial polarizer, defined as follows:

$$M_{pp} = \frac{1}{2}\begin{pmatrix} a_{\perp}^2 + a_{\parallel}^2 & a_{\perp}^2 - a_{\parallel}^2 & 0 & 0 \\ a_{\perp}^2 - a_{\parallel}^2 & a_{\perp}^2 + a_{\parallel}^2 & 0 & 0 \\ 0 & 0 & 2a_{\perp}a_{\parallel} & 0 \\ 0 & 0 & 0 & 2a_{\perp}a_{\parallel} \end{pmatrix}, \tag{14}$$

where $a_{\perp}$ and $a_{\parallel}$ are the amplitude ratio coefficients perpendicular and parallel to the incidence plane, respectively. Therefore, assuming the incoming light is unpolarized, the outgoing light is partially linearly polarized with a degree of polarization (DoP) equal to the following:

$$\mathrm{DoP} = \frac{a_{\perp}^2 - a_{\parallel}^2}{a_{\perp}^2 + a_{\parallel}^2}. \tag{15}$$

In addition, it can be deduced that the polarized vibrations are orthogonal to the plane of incidence when $a_{\perp} > a_{\parallel}$, and parallel to it otherwise.
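The following short Python sketch (with helper names of our choosing) builds the partial-polarizer Mueller matrix of Equation (14) and applies it to unpolarized light, numerically recovering the DoP of Equation (15):

```python
import numpy as np

def mueller_partial_polarizer(a_perp, a_par):
    """Generic partial-polarizer Mueller matrix (Equation (14)); a_perp and
    a_par are the amplitude ratios perpendicular/parallel to the incidence
    plane (Fresnel r or t coefficients, depending on the case studied)."""
    p, q = a_perp**2, a_par**2
    return 0.5 * np.array([
        [p + q, p - q, 0, 0],
        [p - q, p + q, 0, 0],
        [0, 0, 2 * a_perp * a_par, 0],
        [0, 0, 0, 2 * a_perp * a_par],
    ])

# Unpolarized incident light S = (1, 0, 0, 0): the outgoing DoP matches
# Equation (15), i.e., (a_perp^2 - a_par^2) / (a_perp^2 + a_par^2).
S_out = mueller_partial_polarizer(0.9, 0.3) @ np.array([1.0, 0, 0, 0])
dop = np.hypot(S_out[1], S_out[2]) / S_out[0]   # = (0.81 - 0.09) / (0.81 + 0.09) = 0.8
```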

Specular Reflections
To study the polarization properties of specularly reflected light, Equation (15) can be used by replacing $a_{\perp}$ and $a_{\parallel}$ with the Fresnel reflection ratios $r_{\perp}$ and $r_{\parallel}$ given in Equation (13). If we denote by $\theta$ the angle of reflection and by $n$ the real refractive index of the medium on which the beam is reflected, and assume that the refractive index of air is equal to 1, $\theta_i$ and $\theta_t$ can be rewritten as

$$\theta_i = \theta, \qquad \theta_t = \arcsin\!\left(\frac{\sin\theta}{n}\right).$$

Consequently, Equation (15) for the degree of polarization can be rewritten as a function of the angle of reflection $\theta$ and the refractive index $n$ [278]:

$$\mathrm{DoP}_{s}(\theta, n) = \frac{2\sin^2\theta\,\cos\theta\,\sqrt{n^2 - \sin^2\theta}}{n^2 - \sin^2\theta - n^2\sin^2\theta + 2\sin^4\theta}.$$

Figure 14a shows the plot of this equation with a refractive index $n$ set to 1.5. As highlighted there, an ambiguity occurs when trying to determine the angle $\theta$ from the DoP. This formulation of the DoP is only valid for dielectric materials. To derive a formula for a metallic object, the complex refractive index of the medium, $\hat{n} = n(1 + i\kappa)$, where $\kappa$ is the attenuation index, must be taken into account [279]. Applying the same considerations as for dielectric objects, and using an approximation valid in the visible region of the spectrum [231], Equation (15) for the degree of polarization can be rewritten as

$$\mathrm{DoP}_{m}(\theta) \approx \frac{2\,n\tan\theta\,\sin\theta}{\tan^2\theta\,\sin^2\theta + |\hat{n}|^2}, \qquad |\hat{n}|^2 = n^2(1 + \kappa^2).$$

The plot presented in Figure 14b assumes a metallic medium and again reveals an ambiguity in the determination of the angle $\theta$ from the measured DoP. Nevertheless, contrary to a dielectric object, the maximum occurs at a high value of the angle $\theta$, around 80°, so the shape of smoothly curved metallic objects can be reconstructed without resolving this ambiguity. In addition, as can be seen in Figure 15, the orthogonal Fresnel ratio $|r_{\perp}|^2$ is always greater than the parallel one $|r_{\parallel}|^2$ for both dielectric and metallic media. Therefore, specularly reflected light is polarized orthogonally to the plane of incidence. As a result, since diffusely reflected light is polarized parallel to this plane (see below), the polarization contrast measured for materials exhibiting both types of reflection tends to be reduced. Active polarization imaging, which is outside the scope of this review article, could be used to improve the contrast of such objects.
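For reference, both DoP curves of Figure 14a,b can be reproduced with a few lines of Python (a sketch under the assumptions stated above; the metallic index values are those quoted for Figure 15b):

```python
import numpy as np

def dop_specular_dielectric(theta, n=1.5):
    """DoP of specular reflection vs. reflection angle theta (radians)
    for a dielectric of real refractive index n (curve of Figure 14a)."""
    s2 = np.sin(theta) ** 2
    return (2 * s2 * np.cos(theta) * np.sqrt(n**2 - s2)
            / (n**2 - s2 - n**2 * s2 + 2 * s2**2))

def dop_specular_metal(theta, n=0.82, k=5.99):
    """Approximate DoP for a metal of complex index n + jk (visible-range
    approximation; index values quoted for Figure 15b)."""
    n_hat_sq = n**2 + k**2          # |n(1 + i*kappa)|^2 with kappa = k/n
    t, s = np.tan(theta), np.sin(theta)
    return 2 * n * t * s / (t**2 * s**2 + n_hat_sq)

# The dielectric curve peaks at DoP = 1 (Brewster angle), creating the
# two-fold ambiguity; the metallic curve peaks near 80 degrees.
theta = np.linspace(0.0, np.radians(89.0), 500)
dop_dielectric, dop_metal = dop_specular_dielectric(theta), dop_specular_metal(theta)
```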

Diffuse Reflections
Diffuse reflections that produce polarized light are generally considered as the result of light that first penetrates the surface, becoming partially polarized by refraction. Within the medium, the light is then randomly scattered and becomes depolarized. Part of this light is finally refracted back into the air and becomes partially polarized again.
To obtain an expression of the degree of polarization according to the angle of diffuse reflection, Equation (15) can be used by replacing $a_{\perp}$ and $a_{\parallel}$ with the Fresnel transmission ratios $t_{\perp}$ and $t_{\parallel}$ given in Equation (13). With $\theta$ denoting the angle of diffuse reflection and $n$ the refractive index of the medium from which the beam is refracted, and assuming that the refractive index of air is equal to 1, $\theta_i$ and $\theta_t$ can be rewritten as

$$\theta_t = \theta, \qquad \theta_i = \arcsin\!\left(\frac{\sin\theta}{n}\right).$$

The degree of polarization of the light in the case of diffuse reflections from dielectric objects can then be rewritten as

$$\mathrm{DoP}_{d}(\theta, n) = \frac{\left(n - \frac{1}{n}\right)^2 \sin^2\theta}{2 + 2n^2 - \left(n + \frac{1}{n}\right)^2 \sin^2\theta + 4\cos\theta\,\sqrt{n^2 - \sin^2\theta}}.$$

Figure 14c shows the plot of the function linking the DoP to the angle $\theta$. As can be seen, the DoP is lower in the case of a diffuse reflection than in that of a specular reflection. Nevertheless, the angle $\theta$ can be determined from the DoP without any ambiguity, provided the refractive index $n$ of the medium is known.
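Because the diffuse curve of Figure 14c is monotonic, the zenith angle can be recovered by a simple numerical inversion; a minimal sketch (helper names of our choosing) follows:

```python
import numpy as np

def dop_diffuse(theta, n=1.5):
    """DoP of diffuse reflection vs. angle theta (radians) for a dielectric
    of refractive index n (curve of Figure 14c); monotonic in theta."""
    s2 = np.sin(theta) ** 2
    num = (n - 1 / n) ** 2 * s2
    den = (2 + 2 * n**2 - (n + 1 / n) ** 2 * s2
           + 4 * np.cos(theta) * np.sqrt(n**2 - s2))
    return num / den

def zenith_from_dop(dop, n=1.5):
    """Invert dop_diffuse on a lookup grid: no ambiguity if n is known."""
    grid = np.linspace(0.0, np.pi / 2 - 1e-3, 2048)
    return grid[np.argmin(np.abs(dop_diffuse(grid, n) - dop))]
```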
Also, contrary to specular reflections, as illustrated in Figure 16, the orthogonal Fresnel ratio is lower than the parallel one, which leads to the conclusion that light obtained by diffuse reflection is always polarized parallel to the plane of incidence.

Detection and Classification
Before finding applications in robotics, the detection and segmentation of objects based on polarimetric imaging were initially developed in the field of computer vision [277]. In Ref. [280], the physical basis was developed and detailed to highlight the capability of polarimetric imaging to distinguish metallic materials from dielectric materials. More advanced classification techniques can be found in [25]. Subsequently, the benefits of this modality for enhancing the perception of transparent objects were revealed [281]. This task is essential in robotic gripping systems for manipulating transparent objects with ease [282], and improvements are continuously being made [283].
Autonomous robots are often based on bio-inspired systems for the perception task. Polarimetric cues are used by many water beetles and insects to search for bodies of water [276,284]. For instance, in ground robotics this modality has been exploited to detect water hazards or mud in conjunction with 3D sensing techniques such as LIDAR [285,286], stereo-vision [287,288], and mono-depth [289]. Figure 17 shows that the light reflected by water comprises a component linked to specular reflection as well as a component linked to refraction, as described in the previous subsection. As shown in Figure 17, the refracted component is a combination of light scattered by particles in the water and light reflected by the ground. The Mueller matrix that models this phenomenon can be written as follows:

$$M_{water} = M_{refl} + \mu_{absorption}\, M_{refr}\, M_{dep}\, M_{refr},$$

where $M_{refl}$ and $M_{refr}$ are the Mueller matrices of reflection and refraction, respectively, $M_{dep}$ is the Mueller matrix of a depolarizer, and $\mu_{absorption}$ is the absorption coefficient accounting for both the particles in the water and the ground. $M_{refl}$ and $M_{refr}$ can both be computed using the generic Mueller matrix $M_{pp}$ defined in Equation (14), replacing $a_{\perp}$, $a_{\parallel}$ with the appropriate Fresnel coefficients defined in Equation (13): $r_{\perp}$, $r_{\parallel}$ for the reflection case and $t_{\perp}$, $t_{\parallel}$ for refraction. Using Equation (12) then enables us to write the Mueller matrices as functions of the angle of reflection $\theta$ and the refractive index of water $n$.

Glass or transparent object segmentation remains a major issue for mobile robotics in urban environments, to prevent collisions or misunderstanding of the scene. For instance, a learning-based method proposed in Ref. [290] that handles both polarization parameters and colorimetric information tends to outperform standard methods. More generally, the benefits of polarimetric imaging in urban scenes keep growing, since it drastically improves segmentation tasks; among these, we can cite road classification [94,291-293] and semantic segmentation [294]. Advanced classification tasks can also be performed, such as land mine detection [295,296] and astronomical solid body identification [297-299]. To increase segmentation quality, the polarization modality can be used advantageously with infrared imaging [293,300,301] or multispectral imaging [296]. Reflection removal [302] can also be seen as a direct application of the polarization properties of transparent surfaces, providing high-quality images for navigation tasks.

Shape from Polarization
In most robotics tasks, the perception of three-dimensional objects, the estimation of depth, and 3D reconstruction are all essential. As presented in Section 4.1, the polarization parameters of the light reflected or refracted by an object are directly related to the normals of its surfaces. Historically introduced by Wolff and Boult [303], the determination of surface normals from the measured polarization parameters led to a specific field of computer vision named "Shape from Polarization". Assuming as a first approach that an orthographic lens is placed in front of the polarimetric sensor, all light rays are parallel to the optical axis $\vec{z}$ of the camera, as illustrated in Figure 18. In this frame, the normal can be written as follows:

$$\vec{n} = \begin{pmatrix} \sin\theta\cos\phi \\ \sin\theta\sin\phi \\ \cos\theta \end{pmatrix},$$

where $\theta$ and $\phi$ are the zenith and azimuth angles, respectively.
Finally, the shape of the object is obtained by integrating the normal field. The two angles $\theta$ and $\phi$ are related to the degree and the angle of polarization, respectively. Depending on the nature of the surface, highly reflective or diffuse, some ambiguities appear in the determination of the normals (see the sketch after this paragraph):

• Diffuse reflection: as long as the refractive index is known, there is no ambiguity in determining the zenith angle $\theta$ from the DoP. The main drawback is that the DoP is lower for diffuse reflection. An ambiguity remains regarding the azimuth angle $\phi$, which is equal to the AoLP or the AoLP + π, since the light is polarized in the plane defined by the normal and the reflected ray (Figure 18).
• Specular reflection: even assuming the refractive index is known, as shown in Figure 14, an ambiguity appears in the determination of the zenith angle $\theta$ from the DoP. Likewise, there is an ambiguity in the determination of the azimuth angle $\phi$, which is equal to the AoLP ± π/2, since the light is polarized orthogonally to the plane of incidence (Figure 18).

Shape from polarization started with the 3D reconstruction of objects with some priors on their shape to facilitate the disambiguation process [278,304-306]. Active lighting sources [279,307], multi-spectral imaging [308-310], multi-view imaging [278], and Shape-from-Shading techniques [311-314] were also combined with shape from polarization to extract the most out of the technique. Under flash illumination and using deep learning, Deschaintre et al. [315] captured the shapes of objects, including their bidirectional reflectance distribution function, by using polarization considerations. It is important to point out that multi-spectral imaging [309,310] in conjunction with polarization imaging enables the estimation of both the refractive index and the normals. Smith et al. [313] started with objects providing both specular and diffuse reflections under controlled illumination, and later conducted experiments under unknown lighting [316]. To manage the ambiguity between diffuse- and specular-dominated reflection, Ref. [317] was the first to introduce deep learning and to provide a lighting-invariant algorithm based on shape from shading. Yang et al. [318] succeeded in using deep learning to reconstruct an object from shape-from-polarization information only. Knowing the polarization pattern of the blue sky [319] can also help determine an object's shape, but this method is not suitable for real time. The evolution of shape from polarization is summarized in Table 2.
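Putting the pieces together, here is a minimal sketch of the diffuse shape-from-polarization pipeline under an orthographic camera (names of our choosing; it reuses zenith_from_dop from the earlier sketch and leaves the π ambiguity on the azimuth unresolved):

```python
import numpy as np

def normals_from_polarization(dolp, aolp, n=1.5):
    """Per-pixel normals from DoLP/AoLP images, diffuse-reflection
    assumption: zenith from the DoP (unambiguous), azimuth from the AoLP
    up to the pi ambiguity discussed above (phi or phi + pi)."""
    theta = np.vectorize(zenith_from_dop)(dolp, n)  # zenith angle per pixel
    phi = aolp                                      # or aolp + np.pi (ambiguous)
    # n = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta)) in the camera frame.
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
```

The normal field would then be disambiguated (e.g., with shape priors or a coarse depth map, as discussed below) and integrated to recover the surface.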

Three-Dimensional Depth with Polarization Cues
Thanks to its ability to estimate normals, polarization imaging is increasingly involved in 3D depth estimation. To improve the 3D reconstruction of objects, Kadambi et al. [321] combined polarization imaging with an aligned depth map obtained from a Kinect. The depth map initiates the disambiguation, and the reconstruction is then refined by integrating the normals estimated from polarization.

Stereo-Vision Systems
Also, in some stereo-vision systems, polarization imaging can solve the reconstruction of specular or transparent surfaces: Berger et al. [288] used a pair of polarimetric cameras to estimate the depth of a scene including water areas. Instead of using the polarization parameters to simplify the matching process, Fukao et al. [322] integrated all the measured parameters into a cost-volume construction from which the surface normals are then estimated. In a study carried out by Zhu and Smith [323], the pair comprises one RGB camera and one camera equipped with a linear polarizer (which could be replaced by a polarimetric camera). Even if restricted to controlled lighting, high-quality 3D reconstruction can be obtained by combining polarization imaging and a binocular stereo-vision system thanks to the fusion scheme proposed by Tian et al. [324]. Cui et al. [325] proposed a multi-view acquisition system using polarization imaging that enables dense 3D reconstruction suited to texture-less regions and non-Lambertian objects.

Pose Estimation and SLAM
Pose estimation and SLAM (Simultaneous Localization and Mapping) are also of major importance in the field of robotics, particularly for navigation tasks and scene analysis. Yang et al. [170] were the first authors to propose a polarimetric dense monocular SLAM system that can reconstruct 3D scenes in real time and provides improved results compared with conventional techniques when some regions are specular or texture-less.
Cui et al. [326] developed a relative pose estimation algorithm from polarimetric images that reduces the required point correspondences to two points, but limits the analysis to diffuse reflection. Highly reflective and transparent objects are handled in Ref. [327], where a network called PPP-net (Pose Polarimetric Prediction Network) based on a two-step framework was developed. After learning, the fusion of polarization information and physical cues provides the object mask, normal map, and NOCS (Normalized Object Coordinate Space) representation required by a final regression network for monocular 6D object pose estimation [328]. Additionally, a learning-based algorithm focusing on human pose and shape estimation was recently developed by Zou et al. [329].

Summary and Future Directions in Polarized Vision for Scene Understanding
Polarization imaging is becoming an indispensable modality for robotics, both as a means of providing additional clues regarding the nature of objects and as a major contributor to 3D object recognition. Nevertheless, as presented in this section, polarization imaging cannot be a standalone system providing all the necessary information: ambiguities remain regarding the azimuth and zenith angles, and priors on the refractive index or shape are sometimes unavoidable in robotics. Methods based on deep learning seem to overcome most of these limitations. Huang et al. [330] used a combination of stereo-vision and a polarization system to recover normals and disparity through a deep learning-based algorithm. Assuming only diffuse reflection, ambiguities were solved, and in addition, the authors succeeded in overcoming the restriction of using orthographic cameras. Consequently, standard stereo-vision systems can be advantageously replaced by a pair of color-polarized cameras.
Improved perception of the real world through polarization extends applications to more advanced systems such as event cameras [331,332] or iToF (indirect Time-of-Flight) cameras [333]. One solution to the major challenges in scene understanding and 3D estimation could be to fuse polarization cues across various wavelengths, in addition to a 3D sensor, to provide robust reconstruction in the presence of the specular or translucent objects found indoors and outdoors. Extending the fusion of polarization imaging and multispectral imaging for the detection task appears to be relevant for scene understanding.

Conclusions
The principles of polarization that we use today have been known since the 19th century, but due to the lack of experimental imaging systems able to operate in real time, few applications were reported until the late 1990s, whatever the field. The availability of digital cameras and liquid crystal modulators made it possible to implement systems and to consider a variety of applications, such as skylight navigation. Commercial systems emerged in the early 2010s thanks to a major increase in the availability of high-definition, low-noise commercial cameras able to sense linear polarization at 50-100 fps.
The Division-of-Focal-Plane (DoFP) camera for linear polarization image capture is one of these cameras and appears to be the best-suited solution for robotic applications. Like color filter arrays, this technology seems to have reached significant maturity in terms of performance and repeatability of measurement, such that its use could become generalized in the future. A variant of this technology also makes it possible to jointly acquire color and polarization images. In our opinion, an effort toward standardization and the definition of dedicated preprocessing pipelines remains to be made, possibly with open-source software toolkits.
There are several advantages to using sky polarization for robotic navigation: this technology is undetectable, it is immune to GNSS signal spoofing or jamming, its celestial heading estimate is driftless, and it could work at night by moonlight, making it exploitable in urban environments for civilian applications such as automated last-mile delivery services. An autonomous vehicle such as that proposed by the French company TwinswHeel (Figure 19) could use polarization for guidance as early as 2030.
The polarimetric systems described in this manuscript can estimate geolocation with sufficient precision using only skylight, and these systems are so lightweight and inexpensive that they could be embedded into terrestrial, aerial, or underwater autonomous vehicles. Improvements of the technology will enable such vehicles to operate using, for instance, the detection of multispectral polarization patterns. However, UV usage, the detection of surrounding light in a panoramic view, and operation in complex weather conditions remain challenges.

Moreover, current popular polarimetric sensors are megapixel cameras, which are too bulky and expensive for applications in automotive or service robotics. An alternative could take its inspiration from nature: some animals detect the celestial heading with a visual system equivalent to a very low-definition sensor, as corroborated by simulations of neural networks using low-definition images. Therefore, an artificial retina consisting of one thousand pixels (instead of the one million pixels of a classical camera), together with a dedicated trained processing unit, could be the first step toward a low-cost polarimetric device aimed at autonomous navigation.

In this review, we also presented the benefits of polarimetric imaging for robots, helping them better understand the world in which they will operate. The detection of transparent or potentially dangerous surfaces can be facilitated by analyzing the polarization of light reflected from surfaces. In an even more advanced way, we have seen how the 3D shape of objects can be estimated from the measurement of polarization parameters. Algorithms based on neural networks can now overcome the constraints associated with shape-from-polarization techniques, making it possible to generalize the reconstruction of objects outdoors under a variety of lighting conditions.
Autonomous robots working in urban environments, e.g., for last-mile delivery services, will have to locate and position themselves with a spatial accuracy of better than 5 cm and 0.2 degrees by 2030. Concurrently, in public areas, they must meet the most stringent safety requirements. Using and fusing the polarized sensors' outputs with an INS could be a supplemental perceptive modality of localization to reach the requested level of performance and to complement and reinforce conventional localization techniques (3D LiDAR-based SLAM, GNSS, and visual-inertial odometry; see Figure 19).

Figure 1 .
Figure 1. Illustration of the available polarized light in the environment. Picture credits: Camille Dégargin (2023). (a) Visual environment as seen by the robot with unpolarized light, i.e., light intensity. (b) Visual environment as seen by the robot with polarized light, which can be due either to light scattering from the sky or to light reflection from the surrounding environment. This review article was written to address common research questions in the field of autonomous robotics:
• What kind of polarization sensing can we embed into robots? (see Section 2)
• Can we geolocate ourselves and find the true north heading by detecting light scattering from the sky? (see Section 3)
• How do polarization images relate to the physical properties of reflecting surfaces in the context of scene understanding? (see Section 4)

Figure 2 .
Figure 2. Poincaré sphere. In the equatorial plane (in pink), we can find the purely linear polarization states that are considered in our review. Adapted from original material under CC-BY license [30].

Figure 3 .
Figure 3. (i) The mantis shrimp eye is a good example of Division of Focal Plane as far as polarization is concerned (originally published in [81] and made available under CC-BY-SA license [82]). Subfigure (i).a highlights a rhabdom (in pink), which can be seen as a waveguide. The cornea acts as a sensor. A section of the rhabdom is shown in Subfigure (i).b, with retinular cells made of microvilli stacks (here colored in red and blue), as described in Subfigures (i).c and (i).d. These microvilli act as polarizers. Since each rhabdom contains microvilli in crossed directions, each rhabdom allows the selection of two crossed polarizations. Since rhabdoms are shifted by 45° between the ventral and dorsal hemispheres, as depicted in Subfigures (i).e and (i).f, the eye can actually sense 4 equally spaced directions of polarization. In Subfigure (i).e, the polarization direction (red arrow) is aligned with a set of microvilli in the dorsal hemisphere, so the polarization direction is easily detected. In Subfigure (i).f, the eye has rotated by 22.5°; the polarization direction (red arrow) is aligned with none of the sets of microvilli in the dorsal or ventral hemispheres, so the eye cannot detect the polarization direction. Subfigure (ii) describes a modern polarization-sensitive camera sensor, such as the Sony Polarsens IMX264MZR, which mimics the mantis shrimp eye, with micropolarizers of different orientations placed side by side in front of the photosensitive sensor.

Figure 4 .
Figure 4. Two assembly schemes for PFA integration: on-glass (a) and on-chip (b) schemes. In both schemes, most rays (depicted as green arrows) hit the right pixel. In the on-glass scheme, some oblique rays (red arrows) may hit the wrong pixel, which is not possible with the on-chip scheme. Therefore, the on-chip scheme used in Sony PolarSens sensors, with the PFA between the microlenses and the sensor, greatly reduces polarimetric crosstalk. Reproduced with permission from Yilbert Gimenez [83].

Figure 5 .
Figure 5. Color polarization filter array, such as those implemented in the commercial Sony sensors IMX250MYR and IMX253MYR. An efficient demosaicing procedure is required.

Figure 6 .
Figure 6. Polarization pattern of skylight as a function of the position of the sun and relative to an observer (O). The point (Z) represents the zenith. The light green horizontal disc is considered tangent to the Earth's surface; the O-Z axis is taken as the normal to this plane. The orientation of the black dashes gives the direction of polarization, while their thickness describes the Degree of Linear Polarization (DoLP). The direction of polarization is orthogonal to the solar and anti-solar meridians (pink double arrow). Inset: photograph taken with a linear polarizing filter under a clear sky. By orienting the polarizing filter in the same direction as the solar meridian (blue line), one can see a darker bar (pink double arrow) perpendicular to the solar meridian.

Figure 7 .
Figure 7. The polarization pattern of the Berry model does not correspond to the Rayleigh model and breaks its circular symmetry (see Figure 6) by introducing four neutral points. These four neutral points are named Brewster (below the sun), Babinet (above the sun), Arago (above the anti-sun), and the Fourth (below the anti-sun). However, the solar-antisolar meridian symmetry remains in the polarization pattern. The points (O) and (Z) represent, respectively, an observer and the zenith.

Figure 8 .
Figure 8. State-of-the-art polarimetric compasses. (a) Imaging method, or Stokes (conventional) method. (b) Imaging method by optical transformation by means of a waveplate (S-waveplate or linear waveplate). (c) Non-imaging method, or biomimetic approach, by means of a set of photoreceptors, each covered by a polarizing filter.

Figure 9 .
Figure 9. (a) The Sahabot 2 robot (2000) with its ant-inspired compass, from Ref. [10] with permission of Elsevier. (b) The AntBot robot equipped with a pair of UV-polarized light sensors forming a celestial compass, from Refs. [11,12]. Photographic credits: Julien Dupeyroux, The Institute of Movement Sciences, CNRS/Aix Marseille Université, 2019. (c) Device based on two polarization sensors measuring the heading, from Ref. [207] under CC-BY license, 2015. (d) Implementation of an extended Kalman filter on board a quadrotor for incorporating the polarization sensor into a conventional attitude determination system, from Ref. [208] under CC-BY license, 2018.

Figure 10 .
Figure 10. The SkyPASS Gen3-N sensor (size: 10.4 × 9.9 × 8.1 cm, mass: 567 g, max measurement frequency: 1 Hz) employs separate optical channels to image the sun, stars, and sky polarization to provide a highly accurate heading, better than 0.1°. Tracking sky polarization improves the availability of the sensor in twilight, cloudy skies, and urban environments. Courtesy of Polaris Sensor Technologies Inc. (Huntsville, AL, USA); see https://www.polarissensor.com/skypass/ (accessed on 17 May 2024) for details.

Figure 11 .
Figure 11. PILONE view. Outdoor image acquired by a Raspberry Pi color camera under a clear sky in front of a building. Iridescent colors can be seen. From [230] under CC-BY license, 2023.

Figure 13 .
Figure 13. (a) Scattering angle γ, azimuth α_P of a point P, and solar altitude θ_S of the sun S. The parameters are depicted in the ENU (East, North, Up) coordinate system centered on the observer O. The color pattern represents the Degree of Linear Polarization (DoLP) in the sky, described by the Rayleigh single-scattering model [187]. Dark blue represents a near-zero DoLP and yellow represents maximum DoLP values. (b) Sun trajectory in the ENU coordinate system, centered on observer O, positioned at latitude ϕ. The sun moves in a plane perpendicular to the observer-NCP vector. (c) DoLP invariances on the celestial sphere. Invariance circles are computed from analytical calculus. The colored half-sphere is the simulated absolute difference of two DoLP patterns linked to the sun's positions S_1 and S_2 at two distinct times. Dark blue represents near-zero values. (d) Method for finding the NCP from the sky's DoLP pattern. The first row displays DoLP patterns taken at four different moments. The absolute differences between the DoLP patterns are then computed and shown in the second row. Thresholding is applied to these images (third row). Finally, the binary images are overlaid, and the NCP is located at the intersection of the radial invariance axes. From [254] under CC-BY-SA-ND license, 2023.

Figure 14 .
Figure 14. Relationship between the DoP (Degree of Polarization) and the reflection angle θ for (a) dielectric specular reflection, (b) metallic specular reflection, and (c) diffuse dielectric reflection.

Figure 15 .
Figure 15. Fresnel ratios for specular reflection according to the angle of reflection: (a) dielectric object with a refractive index equal to 1.33; (b) metallic object with a refractive index equal to 0.82 + 5.99j.

Figure 16 .
Figure 16. Fresnel ratios for diffuse reflection from a dielectric object according to the angle of reflection [231].

Figure 17 .
Figure 17. Reflection and refraction of light on water.

Figure 18 .
Figure 18. Illustration of the "Shape from Polarization" basis with the two types of reflection: specular and diffuse. The direction of polarization is indicated in green.

Figure 19 .
Figure 19. Logistics droid ciTHy L from TwinswHeel (payload up to 300 kg). This delivery droid is currently equipped with an Integrated Navigation System (INS) based on a triple redundancy of localization: 1st, 3D Lidar; 2nd, stereo camera; and 3rd, GNSS + IMU + 4 wheels with encoders. The optical path integrator + polarized geolocation will be the 4th localization redundancy, making the robot's geolocation more robust in all weather conditions and complex environments. The ciTHy L picture is courtesy of Vincent and Benjamin Talon, co-founders of TwinswHeel (https://www.twinswheel.fr/, accessed on 17 May 2024).

Table 1 .
Pros and cons of various imaging Stokes polarimeter architectures (inspired from

Table 2 .
Evolution of Shape from Polarization in the literature.