Article

A Semantic Segmentation-Based GNSS Signal Occlusion Detection and Optimization Method

1 Surveying and Mapping, Henan Polytechnic University, Jiaozuo 454003, China
2 Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
3 School of Geomatics, Xi’an University of Science and Technology, Xi’an 710038, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(15), 2725; https://doi.org/10.3390/rs17152725
Submission received: 30 May 2025 / Revised: 3 August 2025 / Accepted: 4 August 2025 / Published: 6 August 2025
(This article belongs to the Special Issue GNSS and Multi-Sensor Integrated Precise Positioning and Applications)

Abstract

Existing research fails to effectively address the increase in GNSS positioning errors caused by non-line-of-sight (NLOS) signals and attenuated line-of-sight (LOS) signals arising from obstructions such as buildings and trees in complex urban environments. To address this issue, we approach the problem from an environmental perception perspective and propose a semantic segmentation-based GNSS signal occlusion detection and optimization method. The approach distinguishes between building and tree occlusions and adjusts signal weights accordingly to enhance positioning accuracy. First, a fisheye camera captures environmental imagery above the vehicle, which is then processed using deep learning to segment sky, tree, and building regions. Subsequently, satellite projections are mapped onto the segmented sky image to classify signal occlusions. Then, based on the type of obstruction, a dynamic weight optimization model is constructed to adjust the contribution of each satellite in the positioning solution, thereby enhancing the positioning accuracy of vehicle navigation in urban environments. Finally, we construct a vehicle-mounted navigation system for experimentation. The experimental results demonstrate that the proposed method enhances accuracy by 16% and 10% compared to the existing GNSS/INS/Canny and GNSS/INS/Flood Fill methods, respectively, confirming its effectiveness in complex urban environments.

1. Introduction

In modern navigation systems, the Global Navigation Satellite System (GNSS) represents a widely adopted high-precision positioning technology. Capable of providing all-weather and high-precision positioning services, this system has evolved into a critical technology underpinning various domains, including transportation infrastructure and smart city development [1,2]. GNSS can deliver reliable positioning accuracy in open environments [3]. However, in complex environments such as urban canyons and tree cover, it is susceptible to non-line-of-sight (NLOS) errors due to obstructions and other disturbances, leading to a sharp drop in positioning accuracy or even loss of positioning capability [4,5].
Scholars worldwide have conducted extensive research on mitigating the impact of NLOS signals in positioning and navigation systems. In this paper, we primarily categorize the research into three approaches: receiver design, NLOS identification based on signal feature parameters, and NLOS detection assisted by external sensors or information sources. In the domain of receiver design, Suzuki [6] proposed a rotating GNSS antenna configuration to mitigate NLOS-induced errors. Jiang [7] employed a delay lock loop that estimates multipath and combined it with a vector tracking loop to mitigate NLOS-induced errors. Xu [8] developed a vector tracking loop (VTL)-based algorithm that uses metrics like equivalent noise bandwidth and multi-correlator peak time delays to detect and correct NLOS errors. However, in urban environments with dynamic positioning where obstacles may change their positions and shapes, these approaches show limited applicability. In the context of signal characteristic parameter-based approaches, Han [9] developed a weighted model incorporating azimuth and elevation angles to enhance positioning accuracy, primarily addressing positioning errors induced by multipath effects. Zidan [10] utilized pseudo-range and Doppler measurements in conjunction with a decision tree classifier to categorize satellite signals as either line-of-sight (LOS) or NLOS. These classified signals are then processed based on their respective categories. Ng [11] developed a signal weighting model aimed at reducing positioning errors, utilizing pseudo-range measurements, Signal-to-Noise Ratio (SNR), and satellite elevation angle. Nonetheless, these weight-based models largely rely on empirical tuning and statistical assumptions. When positioning conditions degrade and measurement errors become more complex, these models require recalibration and refinement [12]. Hsu [13] leveraged key features including SNR, SNR fluctuation amplitude, pseudo-range residuals, and pseudo-range fluctuation rates, achieving a signal classification accuracy of 75% through a Support Vector Machine (SVM) model. Luo [14] enhanced signal recognition accuracy to 81.98% by using a light gradient boosting machine (LGBM) model, a gradient-boosted decision tree, with SNR, pseudo-range residual, and elevation angle as inputs. Jiang [15] alleviated NLOS errors by integrating a multivariate long short-term memory fully convolutional network with a binary tree. Liu [16] developed a convolutional neural network (CNN)-based model for NLOS signal detection, utilizing six double-difference features extracted from GNSS observations. Li [17] employed machine learning techniques that leveraged pseudo-range, SNR, elevation angle and azimuth angle to detect NLOS signals. These machine learning-based approaches for classifying NLOS/LOS signals through signal feature analysis have significantly reduced reliance on human intervention and empirical knowledge. However, in complex environments, signals are prone to noise-induced distortions, which can compromise model accuracy. Particularly in complex urban settings, such interference may lead to errors in GNSS signal categorization and result in erroneous conclusions.
In terms of incorporating external sensors or information sources, Bétaille [18] distinguished between NLOS and LOS signals by integrating a 3D city model and applying pseudo-range measurement corrections to the localization solution, which requires an initial high-accuracy localization result. Luo [19] proposed an enhanced 3DMA RT algorithm to detect NLOS/LOS signals. However, in areas with incomplete 3D building models, the detection performance decreases as it relies on the accuracy and timeliness of the 3D models. Zhang and Wang [20,21] leveraged LiDAR-generated 3D point clouds to rapidly capture the surrounding environment, enabling effective detection of NLOS signals. However, LiDAR systems remain relatively costly, and the rapid processing of 3D point cloud data imposes specific computational resource requirements. With the continuous advancement of vision sensor technology and the gradual reduction in cost, fisheye cameras have become an increasingly attractive option due to their high cost-effectiveness. Marais [22] employed the Fisher clustering algorithm in combination with a fisheye camera to classify the sky region into occluded and open sky areas for NLOS/LOS signal identification. Kato [23] employed the K-Means clustering algorithm to detect and extract the sky view range. Sánchez [24] utilized the Canny edge detection method to extract sky visibility ranges. Wen [25] determined NLOS/LOS signals by analyzing the average pixel values in the neighborhood of the satellite’s projected position on the camera image. Our team extracted the sky region using image dilation, erosion, and edge detection and then integrated it with the Flood Fill algorithm to determine NLOS/LOS signals [26]. Nevertheless, when capturing a sky image using a fisheye camera, the color of buildings or their edges may resemble the sky, or the sky area may be fragmented into multiple regions. These factors can lead to deviations in sky area extraction from its actual extent. For example, if the Flood Fill algorithm’s initial position is set in an occluded region, there is a high likelihood of extraction errors. Additionally, most existing fisheye-based approaches classify regions only as sky or obstruction without further distinguishing between different types of occluders such as buildings or trees.
As reviewed above, most existing studies, after classifying GNSS signals as LOS or NLOS, apply a binary weighting strategy in the positioning solution. However, these approaches often overlook the distinct propagation effects introduced by different types of obstructions. In practice, satellite signals passing through tree canopies are mainly affected by diffraction and scattering due to vegetation, and often still retain a direct LOS component. In contrast, building-induced obstructions completely block direct signals, allowing only refracted or reflected components to reach the receiver—an inherently different propagation mechanism [27,28,29]. These two types of obstructions result in distinct GNSS error characteristics and positioning biases. Therefore, distinct optimization strategies should be applied to NLOS signals caused by buildings and LOS signals with attenuation due to tree canopies. In this paper, signals obstructed by tree canopies are not categorized as NLOS but are treated as attenuated LOS signals.
To address these issues, building on the team’s previous research, this paper proposes a semantic segmentation-based GNSS signal occlusion detection and optimization method. Leveraging the capability of deep learning to adaptively extract semantic features such as edges and textures, the proposed method effectively distinguishes between sky, vegetation-obstructed, and building-obstructed regions. A fisheye camera and vehicle-mounted sensors are used to capture sky images and heading information. The sky images are then processed using a deep learning model to segment them into sky, tree, and building regions. The received GNSS signals are projected onto the segmented sky image based on the satellite’s altitude angle, azimuth angle, and the carrier’s heading angle, which are derived from the navigation message. Subsequently, based on the pixel classification of the projection location, GNSS signals are identified as NLOS, attenuated LOS, or LOS. Building on this, a satellite signal weight updating strategy that accounts for the impact of different occlusions is further proposed. Finally, different weight optimization strategies are designed for satellite signals affected by different types of obstructions, enabling efficient utilization of various GNSS signals and enhancing positioning accuracy in urban environments. This paper is organized as described below.
Section 1 primarily reviews research conducted by scholars on NLOS signal identification and processing. Section 2 presents the methods for detecting and identifying NLOS signals, along with the weight optimization techniques used in both NLOS signal detection and localization. Section 3 provides the experimental results, analysis, and a comparison of our method with existing approaches. Finally, Section 4 offers conclusions and outlines future work.
The contributions of this paper are as follows:
  • To more accurately extract the sky region and types of occlusions from sky images, this study leverages the semantic feature learning capabilities of deep learning models, particularly their adaptability to edges and textures, for image segmentation. The extraction accuracy is compared with that of existing methods.
  • Based on satellite attitude information, the satellite projection is mapped onto the images processed by the deep learning model, enabling the classification and detection of GNSS signal types. These signals are categorized into LOS, attenuated LOS, and NLOS.
  • The degree of occlusion is quantified by calculating the shortest pixel distance from the satellite projection point to the nearest sky region. Additionally, a weight optimization scheme is developed based on different types of obstacles.
  • Different optimization strategies are applied depending on the type of GNSS signal occlusion. Comparative results demonstrate that the proposed method achieves more accurate positioning and navigation performance than existing approaches.

2. Materials and Methods

In urban environments, complex factors such as buildings, trees, and other occluding objects hinder the LOS propagation of GNSS signals, thereby posing significant challenges to vehicle positioning accuracy. Currently, although environment-aware methods have been applied, they struggle to accurately identify specific occlusions in overhead images and frequently produce NLOS signal recognition errors. In recent years, deep learning has gained widespread adoption across various fields, particularly image recognition, where its outstanding performance demonstrates significant potential. In light of this, we propose an innovative approach that leverages deep learning technology for semantic segmentation of sky images. This method enables precise identification of various obstructions, thereby effectively avoiding misclassification of NLOS signals and enhancing the accuracy and reliability of vehicle positioning systems. The primary steps of the methodology proposed in this paper are illustrated in Figure 1.
As illustrated in the overall methodological flowchart (Figure 1), sky image data and satellite navigation messages are first collected using a fisheye camera and a MEMS-based integrated navigation device. The acquired sky environment data are then processed to generate the training and validation datasets required for model training. Next, using the trained model weights, semantic segmentation is applied to the skyward environmental data collected along the vehicle’s driving path. Satellite projection is then conducted by incorporating the satellite elevation and azimuth information decoded from the navigation message. Based on the pixel classification at the projected locations, the GNSS signal class is identified, and its corresponding weight is optimized using different strategies. Finally, the combination with the IMU is used to obtain the localization and navigation results.

2.1. Semantic Segmentation of Sky-Directed Images

The fisheye camera can capture sky images synchronized with GNSS signals, effectively recording the receiver’s surroundings and reflecting the geometric distribution of obstructions such as surrounding buildings and trees. To detect and identify the GNSS signal type, it is necessary to extract and classify both the direct sky view and the sheltered areas.
In reference [26], edge detection and the Flood Fill algorithm are used for sky region extraction. However, in complex urban environments, structures such as flyovers and street signs may obscure the center of the image. When the seed points selected by the Flood Fill algorithm fall on occlusions, incorrect or invalid sky region extraction may occur. Other commonly used segmentation algorithms include K-Means clustering and Canny edge detection; however, parameter adjustment during preprocessing relies on experience, which leads to discrepancies between the extracted and actual sky regions. Moreover, these methods typically segment images into only two categories: sky and obstruction. As discussed in Section 1, tree and building occlusions have fundamentally different impacts on GNSS signal propagation. Therefore, it is necessary to further refine obstruction classification during image segmentation to identify specific types of occluders. This enables the development of targeted weighting strategies in the optimization stage based on the identified obstruction type.
To address the aforementioned issues and leverage the advantages of deep learning in image segmentation, this study employs the DeepLabv3+ algorithm (as detailed in references [30,31]) to process fisheye sky images and reliably extract sky regions, vegetation-obstructed areas, and building-obstructed areas. The process is primarily divided into the following steps.

2.1.1. Image Acquisition

In this study, the image data were collected using a camera interfaced through the Robot Operating System (ROS). The overall acquisition process is illustrated in Figure 2.
As shown in Figure 2, the collected data are stored in the ROS bag format. During the acquisition process, the status of the data stream and the integrity of the data are checked to ensure the completeness and reliability of the final dataset. Once verified, the ROS bag files are parsed. During parsing, the image data undergo necessary format conversion and resizing to ensure compatibility with subsequent processing and analysis requirements.
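For readers replicating this acquisition step, the following is a minimal sketch of how images might be extracted from a ROS bag using the ROS 1 Python API; the topic name, output directory, and target resolution are assumptions rather than the exact values used in our system.

```python
import os
import cv2
import rosbag
from cv_bridge import CvBridge

def extract_images(bag_path, topic="/camera/image_raw", out_dir="frames", size=(640, 480)):
    """Parse a ROS bag, convert each image message to OpenCV format, resize, and save it."""
    os.makedirs(out_dir, exist_ok=True)
    bridge = CvBridge()
    bag = rosbag.Bag(bag_path, "r")
    for _, msg, stamp in bag.read_messages(topics=[topic]):
        img = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        img = cv2.resize(img, size)
        # The ROS timestamp is kept in the file name so frames can be matched to GNSS epochs.
        cv2.imwrite(os.path.join(out_dir, f"{stamp.to_nsec()}.png"), img)
    bag.close()
```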

2.1.2. Image Preprocessing and Model Training

Urban sky images include objects beyond buildings and trees, such as street lights and power lines, but these elements alone are insufficient to produce an NLOS signal. Therefore, when using the Labelme tool to produce the labels required for deep learning training, objects such as street lights and antennas present in the sky region that do not produce NLOS signals are not extracted separately. Corresponding JSON files are generated during the labeling process.
After creating the labels, the corresponding 8-bit color mask images are generated for training based on these label files, as shown in Figure 3.
The preprocessed images, along with the original sky-view images captured by the vehicle, are fed into the DeepLabV3+ model for training and validation to obtain the semantic label of each pixel. The overall process is illustrated in Figure 4.
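As an illustration of this training stage, the sketch below sets up a three-class segmentation model in PyTorch. Note that torchvision ships DeepLabv3 (with a ResNet-50 backbone) rather than DeepLabv3+, so it stands in here only to show the pipeline; the class-id mapping, learning rate, and data-loader interface are assumptions.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # assumed mapping: 0 = building, 1 = sky, 2 = tree

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader, device="cuda"):
    """One pass over (image, mask) pairs; masks hold per-pixel class ids from the Labelme labels."""
    model.to(device).train()
    for images, masks in loader:                 # images: (B, 3, H, W) float, masks: (B, H, W) long
        images, masks = images.to(device), masks.to(device)
        logits = model(images)["out"]            # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, masks)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```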

2.1.3. Semantic Segmentation of Images

Based on the parameter settings during training, multiple weight models are generated. Depending on the format of the collected data (video or images), the data are processed using either video-stream processing or batch processing of image folders. To facilitate subsequent satellite projection onto the image, this paper employs the batch processing method for image folders.
As shown in Figure 5, this paper classifies the pixels within the segmented sky image into three categories: sky, tree-shaded regions, and building-shaded regions. The sky is represented in red, the tree-shaded region in green, and the building-shaded region in black.
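A minimal sketch of this batch-processing mode is given below, assuming a trained segmentation model and the class-id mapping used in the earlier sketch; the folder layout and file extension are placeholders.

```python
import glob
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()

@torch.no_grad()
def segment_folder(model, folder, device="cuda"):
    """Run the trained model over every image in a folder; returns per-image class-id maps."""
    model.to(device).eval()
    results = {}
    for path in sorted(glob.glob(f"{folder}/*.png")):
        img = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        pred = model(img)["out"].argmax(dim=1).squeeze(0).cpu().numpy()  # (H, W) class ids
        results[path] = pred
    return results
```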

2.2. Satellite Signal Projection

The method proposed in this paper projects the tracked satellites onto the semantically segmented image and distinguishes the GNSS signal categories. The GNSS signals are mapped onto the segmented image based on the satellite’s elevation and azimuth angles, derived from the navigation message received by the receiver, together with the heading angle obtained from the onboard instrument. The projection modeling framework is illustrated in Figure 6.
The fisheye camera used in this study follows the modeling approach described in reference [26], where the pixel distance from the satellite projection point to the image center is defined in (1).
$$R_l = 2 \cdot F_c \cdot \tan\!\left(\frac{\pi}{4} - \frac{E}{2}\right) \quad (1)$$
where $F_c$ represents the focal length of the camera and $E$ is the satellite elevation angle.
Thus, based on the given image center coordinates, the pixel coordinates of the satellite projection on the image are determined as shown in (2).
$$x_s = x_c + R_l \sin(H + A), \qquad y_s = y_c + R_l \cos(H + A) \quad (2)$$
where $(x_c, y_c)$ are the pixel coordinates of the image center, $(x_s, y_s)$ denote the pixel coordinates of the satellite projection on the image, $H$ is the heading angle, and $A$ is the satellite azimuth angle.
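The projection in (1) and (2) can be written compactly as follows; the focal length and image-centre values in the usage example are placeholders, not calibration results from our camera.

```python
import math

def project_satellite(elev_deg, azim_deg, heading_deg, fc_px, xc, yc):
    """Pixel coordinates (x_s, y_s) of a satellite on the fisheye image, per (1) and (2)."""
    E = math.radians(elev_deg)
    rl = 2.0 * fc_px * math.tan(math.pi / 4.0 - E / 2.0)     # Equation (1)
    ang = math.radians(heading_deg + azim_deg)                # heading angle H plus azimuth A
    return xc + rl * math.sin(ang), yc + rl * math.cos(ang)   # Equation (2)

# Example (placeholder values): a satellite at 40 deg elevation and 120 deg azimuth,
# vehicle heading 30 deg, focal length 160 px, image centre (320, 240).
x_s, y_s = project_satellite(40.0, 120.0, 30.0, fc_px=160.0, xc=320.0, yc=240.0)
```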
An example of satellite projection for one epoch is shown in Figure 7, where the green color represents an unobstructed LOS signal, with C denoting BDS, G denoting GPS, and E denoting Galileo. The satellites depicted in Figure 7 are listed in Table 1, including C4, C5, C9, G4, G5, G20, G29, G11, E23, E22 and E17. The occluded satellites in this epoch are G11 and G29, as indicated by red X marks.

2.3. GNSS Signal Weight Optimization

Following the signal classification process described in Section 2.2, and as discussed in Section 1, building and tree occlusions affect satellite signals in different ways. Building obstructions typically block the direct LOS, resulting in refracted NLOS signals. In contrast, tree canopy interference leads to diffracted and scattered signals, which are attenuated but still maintain a direct LOS component [32,33]. In light of these differences, this study does not apply a unified weighting strategy during the optimization stage. Instead, distinct weighting schemes are designed, respectively, for NLOS and attenuated LOS signals.
The degree of signal occlusion is characterized by the pixel distance from the occluded signal’s projection point to the nearest sky region, which is computed from the pixel coordinates of the satellite projected onto the image, as given in (3).
$$dist_{\min} = \min_{j=1,\dots,n} \sqrt{\left(x_{con_j} - x_s\right)^2 + \left(y_{con_j} - y_s\right)^2} \quad (3)$$
where $n$ represents the number of pixel points on the boundary of the sky region and $dist_{\min}$ denotes the shortest pixel distance from the satellite projection point to the main sky region. $x_{con_j}$ and $y_{con_j}$ correspond to the horizontal and vertical coordinates of the boundary points, while $x_s$ and $y_s$ represent the horizontal and vertical coordinates of the satellite projection point.
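A sketch of the distance computation in (3) is shown below, assuming the segmented mask encodes the sky class with id 1 (an assumption) and using the OpenCV 4 contour interface to obtain the sky-boundary pixels.

```python
import numpy as np
import cv2

def min_dist_to_sky(mask, x_s, y_s, sky_id=1):
    """Shortest pixel distance from the projection point to the sky-region boundary, per (3)."""
    sky = (mask == sky_id).astype(np.uint8)
    # OpenCV 4 returns (contours, hierarchy); contour points are (x, y) boundary pixels.
    contours, _ = cv2.findContours(sky, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.inf  # no sky region found in this frame
    boundary = np.vstack([c.reshape(-1, 2) for c in contours])   # (n, 2) as (x_con, y_con)
    d = np.hypot(boundary[:, 0] - x_s, boundary[:, 1] - y_s)
    return float(d.min())
```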
In GNSS positioning, satellite signals are typically weighted based on parameters such as elevation angle and Signal-to-Noise Ratio (SNR). A common approach involves classifying signals into LOS and NLOS categories using a fixed SNR threshold, followed by applying a uniform weighting strategy to NLOS signals. However, the selection of such a threshold is highly sensitive: a threshold that is too low may misclassify LOS signals as NLOS, while a threshold that is too high may result in NLOS signals being incorrectly treated as LOS [34]. In this paper, we optimize the weights of GNSS signals affected by various occlusions based on their types and the degree of occlusion, as characterized. The formula is presented in (4).
$$
w_n =
\begin{cases}
0, & dist_{\min} > R_{\max} \\[4pt]
\dfrac{dist_{\min}}{\gamma} \cdot \dfrac{1}{\sin^2 E} \cdot 10^{-\frac{SNR_n - a}{b}} \left[ \left( \dfrac{b}{10^{-\frac{c-a}{b}}} - 1 \right) \dfrac{SNR_n - a}{c - a} + 1 \right], & dist_{\min} \le R_{\max} \ \&\&\ I = I_{tree} \\[8pt]
\dfrac{dist_{\min}}{\beta} \cdot \dfrac{1}{\sin^2 E} \cdot 10^{-\frac{SNR_n - a}{b}} \left[ \left( \dfrac{b}{10^{-\frac{c-a}{b}}} - 1 \right) \dfrac{SNR_n - a}{c - a} + 1 \right], & dist_{\min} \le R_{\max} \ \&\&\ I = I_{other}
\end{cases} \quad (4)
$$
where $SNR_n$ denotes the SNR of the nth satellite; $a$, $b$, and $c$ are empirical values for SNR weighting [34,35]; $\gamma$ and $\beta$ are empirically determined parameters for the different types of obstructions, obtained through multiple sets of experiments; $dist_{\min}$ is the shortest pixel distance from the satellite projection point to the main sky region; $E$ is the satellite elevation angle; $R_{\max}$ is the threshold on the image-coordinate distance from the sky boundaries; $I$ denotes the occlusion type at the satellite projection point, with $I_{tree}$ indicating tree occlusion and $I_{other}$ indicating building or other occlusion; and $\&\&$ denotes logical conjunction.
Finally, the optimization method is applied to GNSS positioning calculations. The weight matrix obtained based on (4) is shown in (5). For the GNSS position calculation process, consult reference [36].
$$W = \mathrm{diag}\left(w_1, w_2, \dots, w_n\right) \quad (5)$$
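To make the weighting concrete, the following sketch evaluates (4) for a single satellite and indicates how the diagonal matrix of (5) would be assembled; the constants a, b, c, γ, β, and R_max shown are placeholders, not the empirically tuned values used in our experiments.

```python
import math

A, B, C = 30.0, 30.0, 45.0               # placeholders for the SNR-model constants a, b, c [34,35]
GAMMA, BETA, R_MAX = 10.0, 50.0, 200.0   # placeholders for the obstruction parameters and threshold

def snr_term(snr):
    """SNR-dependent factor shared by the tree and building branches of (4)."""
    return 10 ** (-(snr - A) / B) * ((B / 10 ** (-(C - A) / B) - 1) * (snr - A) / (C - A) + 1)

def signal_weight(dist_min, elev_rad, snr, occluder):
    """Weight w_n of one satellite given its occlusion distance, elevation, SNR, and occluder type."""
    if dist_min > R_MAX:
        return 0.0                                  # deeply occluded: the satellite is discarded
    scale = GAMMA if occluder == "tree" else BETA   # attenuated LOS (tree) vs. NLOS (building/other)
    return (dist_min / scale) * (1.0 / math.sin(elev_rad) ** 2) * snr_term(snr)

# The diagonal weight matrix of (5) is then assembled as W = diag(w_1, ..., w_n),
# e.g. W = np.diag([signal_weight(*sat) for sat in satellites]).
```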
The overall procedure of the proposed method is summarized in Algorithm 1.
Algorithm 1. GNSS Signal Occlusion Detection and Correction
Input: GNSS information, heading information, and sky environment information.
Output: GNSS signal categories, weights (w), and positioning results.
Steps:
for i = 1, 2, …, N
        Segment the ith sky environment image following the steps in Section 2.1.
        for j = 1, 2, …, M
               Obtain the array of sky edge pixel coordinates.
               Based on (1) and (2), construct a satellite signal projection model to project the
               jth satellite from the image’s corresponding epoch onto the segmented image.
               Based on (3), construct an occlusion-degree representation model to calculate
               the distance from the projection point to the sky edge (dist_min).
               Obtain the GNSS signal category.
               if dist_min > R_max
                       Discard the satellite.
                       Obtain w = 0.
               else if dist_min ≤ R_max && I = I_tree
                       Use the optimization method for tree occluders from (4).
                       Obtain w.
               else
                       Use the optimization method for buildings or other occluders from (4).
                       Obtain w.
               end
        end
end
Perform the positioning calculation and obtain the positioning results.

3. Vehicle-Mounted Experiment

To verify the effectiveness of the proposed method, we constructed a vehicle-mounted experimental platform and conducted validation and analysis based on measured data.

3.1. Experimental Platform

The experimental setup used in this study primarily comprises a USB fisheye camera, a high-precision MEMS-based integrated navigation system, and a high-precision fiber-optic integrated navigation system. The camera captures images at a resolution of 640 × 480 pixels and a rate of 20 frames per second. The high-precision MEMS-based integrated navigation system used to collect satellite navigation messages and IMU information is the CGI-410 (Huace, Hangzhou, China). The HY-P1750 high-precision fiber-optic integrated navigation system, comprising a KVH1750 IMU (KVH Industries, Middletown, RI, USA) and a NovAtel GNSS receiver (NovAtel, Calgary, AB, Canada), is used as the reference system. The reference positioning results are derived from post-processing calculations performed with Inertial Explorer (version 9.00, NovAtel, Canada). Figure 8 displays an enlarged view of the experimental platform and instruments used in this study, while Table 2 lists the parameters of the associated hardware. In the container shown in Figure 8, the NovAtel GNSS receiver and KVH1750 IMU are integrated alongside the display, with the CGI-410 positioned above the HY-P1750. The image data captured by the camera are stored on the NVIDIA Jetson Xavier NX motherboard (NVIDIA Corporation, Santa Clara, CA, USA) housed within the platform.
The hardware specifications of the primary experimental instruments shown in Figure 8 are presented in Table 2.
Figure 9 illustrates the driving path of the experimental vehicle, with the GNSS receiving antenna positioned at approximately 100 cm in height. The primary sources of occlusion along the vehicle’s route are trees and buildings. The presence of tall buildings captured by the camera indicates that the signal path between the satellites and the GNSS antenna is influenced by building height, which may lead to degraded GNSS performance.

3.2. Selection of γ and β

In the weight optimization model, γ and β are empirical parameters used to represent the influence of different obstructions (such as trees and buildings) on GNSS signal. To determine appropriate values for these parameters, multiple sets of experiments were conducted. Figure 9a,b show the two trajectories from replicate experiments.
For the two trajectories depicted in Figure 9, multiple combinations of parameters γ and β were tested to determine the most appropriate range. Through multiple experiments, we found that when parameter γ ranges between 4 and 23 and parameter β ranges between 10 and 70, the model demonstrates better performance with lower positioning errors. Therefore, we selected three combinations from our experiments for further analysis. Figure 10 and Figure 11 illustrate the X, Y, and Z direction errors for three different combinations of γ and β applied to Path 1 and Path 2, respectively.
From Figure 10 and Figure 11, it can be seen that in the X and Y directions, the combination of γ = 10, β = 50 performs more stably, while the other two combinations show more sudden changes and larger errors. In most cases, its error is smaller than that of the other two combinations. Table 3 presents the corresponding root-mean-square errors (RMSEs) for Path 1 and Path 2.
As shown in Table 3, the combination of γ = 10 and β = 50 yields lower RMSEs in the X and Y directions on both paths compared to the other two combinations. Specifically, compared to the γ = 20, β = 60 combination, the RMSE in the X and Y directions on Path 1 is reduced by 25% and 36%, respectively, and by 35% and 41% on Path 2. Compared to the γ = 5, β = 30 combination, the RMSE is reduced by 35% and 18% on Path 1 and by 15% and 36% on Path 2. Table 4 presents the RMSE in the X, Y, and Z directions on both paths for experiments where parameter γ is fixed at 10 and parameter β is varied. Table 5 presents the RMSE values in the X, Y, and Z directions on both paths when parameter γ is varied while parameter β is fixed at 50.
Although the lowest individual RMSE values do not occur precisely at γ = 10 and β = 50, this combination provides relatively low and balanced errors across all directions and both paths, with fewer outliers. Therefore, after comprehensive evaluation, this combination is adopted in the subsequent comparative experiments conducted on the trajectory shown in Figure 12.

3.3. Experimental Results and Analysis

To validate the effectiveness of our proposed approach, comparative experiments with multiple methods were conducted along the trajectory shown in Figure 12. The figure demonstrates that the selected scene satisfies the complexity required for our experiments, featuring a mixture of canopy and building occlusions.
In this paper, sky image segmentation results are evaluated using three different schemes and compared with the reference segmentation results. The reference segmentation results are obtained from result files generated through manual sky image segmentation using Labelme. The area of the region is calculated based on the total number of pixels it contains. This comparison aims to assess the effectiveness of semantic segmentation in extracting sky regions and occlusion types, relative to traditional image processing methods.
  • Canny Edge Detection Method: The original sky image is first converted to a grayscale image, followed by dilation and erosion operations. Then, Canny edge detection is applied. In this paper, this method is referred to as Canny.
  • Flood Fill Method: The center pixel of the image is first selected as the seed point. The algorithm then propagates to neighboring pixels, identifying all points with the same or similar color and filling them with a new color to form a connected region. In this paper, this method is referred to as Flood Fill.
  • The proposed method in this paper involves training and segmenting images using DeepLabV3+, with obstructions classified as buildings and trees. This method is referred to as Deep-Air.
The Canny and Flood Fill methods are compared and evaluated against the proposed Deep-Air method. The original images used for segmentation are shown in Figure 13a, while the segmentation results of the three methods are presented as follows: (b) shows the sky region segmented using the Flood Fill method, (c) displays the result obtained with the Canny method, and (d) presents the segmentation outcome using the proposed Deep-Air method. In the segmentation results, the sky region is marked in red, tree occlusions in green, and building occlusions in black. Table 6 presents a comparative analysis of the segmentation performance of the three methods.
Table 6 presents the sky region segmentation results for the three methods. As shown in the table, the similarity between the color of building edges and the sky region in the collected data causes the Canny edge detection method to deviate significantly from the reference value, resulting in an accuracy of only 79.1%. In contrast, the Flood Fill method achieves an accuracy of 89.0%, while the Deep-Air method attains 98.9%.
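For reference, the pixel-count comparison underlying Table 6 can be sketched as follows; the class ids follow the assumption made in the earlier sketches, and the accuracy definition shown is one plausible reading of the area-ratio comparison rather than the exact metric used here.

```python
import numpy as np

def region_areas(mask, class_ids=(0, 1, 2)):
    """Region area as a pixel count per class (assumed mapping: 0 = building, 1 = sky, 2 = tree)."""
    return {c: int(np.sum(mask == c)) for c in class_ids}

def sky_area_ratio(pred_mask, ref_mask, sky_id=1):
    """Predicted sky area relative to the Labelme reference sky area."""
    ref = max(int(np.sum(ref_mask == sky_id)), 1)
    return int(np.sum(pred_mask == sky_id)) / ref
```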
Figure 14 illustrates the sky region segmentation results for the three methods in a scenario where the obstacle occludes the sky, dividing it into multiple regions. In this case, the seed point selected by the Flood Fill method falls on the obstacle.
Since the Flood Fill method determines whether surrounding pixels belong to the sky region based on the attributes of the seed point, the seed point itself must also be located within the sky region. The Flood Fill method relies on the assumption that the area directly above the vehicle is generally unobstructed, with the center of the image chosen as the seed point. As shown in Figure 14b, when processing the original sky image in Figure 14a using the Flood Fill method, the selected seed point is the image center, which belongs to the occlusion category. This causes the sky region segmentation to fail, resulting in an empty sky region. When using the Canny method for sky region extraction, the original image must undergo edge detection preprocessing. Since the threshold in this process is set empirically, there is a high probability of under-segmentation. As shown in Figure 14c, the extracted sky region is smaller than the actual area, with a segmented area of 13,799 pixels. In contrast, the proposed method effectively extracts the initial sky region and the tree occlusion region, with areas of 16,844 and 111,154 pixels, respectively, as shown in Figure 14d. It can be seen that when the image center is occluded, the Flood Fill method fails to extract the sky region due to its algorithmic limitations. In contrast, both the Canny method and the proposed method effectively extract the sky region, with the latter additionally identifying the tree occlusion region.
Due to the movement of the experimental vehicle, a scenario arises as illustrated in Figure 15a where the sky region is primarily located in the middle and right portions of the image. Figure 15b–d, respectively, present the sky region extraction results using the three methods.
Figure 15a displays the original sky-direction image, in which the center region is classified as sky. For sky region extraction, all three methods successfully identify a portion of the sky. However, when using the Flood Fill method, only the sky region containing the seed point is extracted, resulting in an area of 4085 pixels. When using the Canny method, the original image is processed by dilation and erosion before extracting the sky regions, resulting in multiple sky regions with a total area of 9252 pixels. In contrast, when using the Deep-Air method, the sky region in the center and right portions of the image, along with the tree occlusion region, are successfully extracted. The sky region covers an area of 15,987 pixels, and the tree occlusion area spans 107,537 pixels. It can be observed that the proposed method achieves higher accuracy in extracting occlusion regions and differentiating obstruction types in multi-sky region scenarios where the middle region is identified as sky.
In complex urban environments, the carrier’s skyward image is often divided into two parts by columnar objects such as power lines. Figure 16 presents the original image and the results of the different image segmentation methods.
Figure 16b–d, respectively, show the results of the three methods for sky region extraction. Figure 16b shows that when the Flood Fill method is applied, only the sky region in the upper part of the image is extracted due to its algorithmic characteristics. In contrast, Figure 16c,d show that the Canny algorithm and the Deep-Air method successfully extract the sky region below the traffic signal pole. However, the Canny method applies a dilation and erosion operation so that the extracted beam region differs from its actual size, leading to misjudgment of the NLOS signal. In practice, factors such as the distance between the carrier and the beam and the beam’s thickness reduce the likelihood that the beam affects the NLOS signal. Therefore, it is generally considered an unobstructed region. The Deep-Air method, as shown in Figure 16d, correctly judges the beam portion as a sky region.
Figure 13, Figure 14, Figure 15 and Figure 16 demonstrate that the method proposed in this paper outperforms the other two existing methods in accurately extracting the sky region. This is particularly evident in cases involving multiple sky regions or sky regions separated by obstacles such as lines or poles, where the proposed method excels at extracting the sky regions from each section. These results confirm the practical feasibility and effectiveness of using semantic segmentation to extract sky regions and categorize obstructions.
To further verify the performance of the proposed algorithm, the positioning and navigation results are evaluated based on the outputs of three different methods along the trajectory in Figure 12 and compared with the reference trajectory.
(1) GNSS/INS/Canny Solution: The Canny method is used for sky extraction, and satellite weights are re-assigned based on the LOS/NLOS classification. The result is then loosely integrated with motion estimation from the INS and is referred to as GNSS/INS/Canny in the text.
(2) GNSS/INS/Flood Fill Solution: The Flood Fill method is used for sky extraction, and satellite weights are re-assigned based on the LOS/NLOS classification. The result is then loosely integrated with motion estimation from the INS and is referred to as GNSS/INS/Flood Fill in the text.
(3) GNSS/INS/Deep-Air Solution: The proposed method is used for sky extraction, and satellite weights are re-assigned based on the GNSS satellite signal categories. The result is then integrated with motion estimation from the INS and is referred to as GNSS/INS/Deep-Air in the text.
In this study, GNSS observations are selected from the L1 and L2 bands for GPS, the B1-2 and B2b bands for BDS, and the G1 and G2 bands for GLONASS. The road segment used for data collection and method evaluation is shown in Figure 12. This segment is selected due to its mixed shading environment, which includes trees, high-rise buildings, and other structures. A comparison of the trajectories of the different methods is shown in Figure 17.
As shown in the trajectory diagram in Figure 17, the method proposed in this paper closely aligns with the reference trajectory. However, in some cases, the number of LOS satellites received at certain epochs is insufficient to support accurate localization, as illustrated in Figure 18, which leads to significant deviations from the reference trajectory. Additionally, a low hardware installation position may cause signal interference from pedestrians or nearby vehicles. As shown in Figure 8, the GNSS receiving antenna on the experimental platform is positioned at a height of only 100 cm, lower than the fisheye camera. The selected experimental area is densely built-up, with high pedestrian and vehicle activity. This complex environment significantly degrades satellite signal quality, leading to increased positioning errors and reduced localization accuracy.
As shown in Figure 18, the dataset used for experimental acquisition contains a significant number of NLOS signals, fulfilling the data requirements for validating the performance of the proposed methods. Figure 19 illustrates the error trends along the X, Y, and Z directions for the GNSS/INS/Flood Fill, GNSS/INS/Canny, and GNSS/INS/Deep-Air methods. As shown in Figure 19, the proposed method produces less error and smoother results compared to other methods. When the number of LOS signals drops below four, as seen in the time intervals of 2300–2600 and 3000–3500, errors are more likely to occur. The method proposed in this paper employs distinct weighting optimization techniques for NLOS signals caused by building occlusions and signal attenuation due to trees. Additionally, the method is more effective at identifying signal types in road sections with special conditions. As a result, the probability of a sudden increase in error in the other two methods is significantly higher than the proposed method, in terms of both likelihood and magnitude.
As shown in Figure 19, the error trend of the GNSS/INS/Deep-Air method is smoother and closer to zero compared to the other two methods. Additionally, its error deviation is smaller at most time points. However, in the Z-axis error during epochs 4500–5000, the proposed method shows a larger deviation than the others. This may be due to misclassification of NLOS signals as attenuated LOS signals, as the projection point lies within a tree-covered area that is actually a mixed region of trees and buildings, where buildings behind the trees contribute to NLOS errors. Another possible cause is dense tree coverage with large inter-leaf gaps, which appear small in the camera image due to distance, leading to the area being extracted as tree occlusion and LOS signals being misclassified as attenuated LOS. The RMSE and mean errors (MEs) in all three directions are presented in Table 7.
Table 7 shows that the RMSE and the ME for the proposed method are smaller than those of the other two methods. In the X direction, the improvement is 20% better compared to GNSS/INS/Flood Fill and 27% better compared to GNSS/INS/Canny. In the Y direction, the improvement is 5% over GNSS/INS/Flood Fill and 8% over GNSS/INS/Canny. In the Z direction, the RMSE is improved by 18% compared to GNSS/INS/Flood Fill and by 22% compared to GNSS/INS/Canny. The reason why the GNSS/INS/Canny and GNSS/INS/Flood Fill methods’ stability and accuracy are lower than those of the proposed method is likely the presence of occluded objects, multi-sky regions, and sky regions divided by bars at the center of the image, as previously mentioned. These factors result in errors in signal type classification during the vehicle’s movement, which in turn reduces localization accuracy.
In summary, the detection and identification of NLOS/LOS signals are influenced by factors such as building color and sky region distribution, resulting in fluctuations in the stability and accuracy of the positioning process. The method proposed in this paper outperforms the others in improving both the accuracy and stability, as measured by the average error and root-mean-square error. Figure 20 presents the cumulative distribution function (CDF) graphs for the three methods across three directions.
Figure 20 illustrates the CDF of the error in the X, Y, and Z directions for each method, showing the probability distribution of error values in each direction. This representation provides insights into the likelihood of data points occurring within a specific interval, aiding in a comprehensive understanding of the error distribution characteristics. The leftward shift of the curve intuitively indicates the improvement in accuracy. The figure shows that, among the three methods, GNSS/INS/Flood Fill, GNSS/INS/Canny, and GNSS/INS/Deep-Air, the CDF curves for the GNSS/INS/Deep-Air method exhibit a pronounced leftward shift in all three dimensions, particularly along the Y-axis, where the shift is most notable. For example, when the probability is 80%, the error thresholds for the three methods are as follows. In the X-direction, they are 1.04, 1.25 and 0.74 m. In the Y-direction, they are 3.79, 3.89 and 2.73 m. In the Z-direction, they are 3.86, 3.98 and 3.28 m. This systematic advantage further confirms the accuracy and reliability of the method proposed in this paper.

4. Conclusions

To address the impact of NLOS signals induced by complex scenes on positioning and navigation, this paper proposes a GNSS signal occlusion detection and optimization method by integrating sky-semantic segmentation, and different optimization methods are chosen for signals affected by different obstructions. The method is implemented using a fisheye camera mounted on top of the test vehicle. To enhance the detection and classification of GNSS signal categories, the DeepLabv3+ model is employed to segment the skyward area into three classes: sky, trees, and buildings. Finally, optimization methods are chosen based on the obstruction’s category where the GNSS signal is located and integrated with IMU data to obtain positioning and navigation results. Compared to GNSS/INS/Canny and GNSS/INS/Flood Fill, the GNSS/INS/Deep-Air method improves X-axis accuracy by 27% and 20%, Y-axis accuracy by 5% and 8%, and Z-axis accuracy by 22% and 18%. Overall positioning accuracy is enhanced by 16% and 9% compared to GNSS/INS/Canny and GNSS/INS/Flood Fill, respectively.
Although the method proposed in this paper improves positioning accuracy, our vision system may encounter challenges such as lighting issues in scenarios with extensive obstructions like bridges. In such cases, a supplementary system that provides relative positioning can be integrated for assistance. In future work, we will incorporate additional approaches to enhance the proposed methodology and conduct a comparative analysis of the performance and results achieved by different deep learning models within the scope of our research.

Author Contributions

Conceptualization, C.S.; methodology, C.S.; software, C.T.; validation, X.Z.; formal analysis, X.Z.; investigation, C.S.; resources, Z.Y.; data curation, C.S.; writing—original draft preparation, C.S.; writing—review and editing, Z.Y.; visualization, K.L.; supervision, Y.G.; project administration, Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 42204040; and by the Young Elite Scientists Sponsorship Program by Henan province under Grant 2025HYTP039; and by the Key Science and Technology Program of Henan Province under Grant 252102241064; and by the Ph.D. Programs Foundation of Henan Polytechnic University under Grant B2021-17; and by the Young Core Faculty Development Program of Henan Polytechnic University under Grant 2024XQG-15.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, W.; Yue, Z.; Lian, Z.; Li, K.; Sun, C.; Zhang, M. An Elastic Filtering Algorithm with Visual Perception for Vehicle GNSS Navigation and Positioning. Sensors 2024, 24, 8019. [Google Scholar] [CrossRef] [PubMed]
  2. Ning, B.; Zhao, F.; Luo, H.; Luo, D.; Shao, W. Robust GNSS/INS Tightly Coupled Positioning Using Factor Graph Optimization with P-Spline and Dynamic Prediction. Remote Sens. 2025, 17, 1792. [Google Scholar] [CrossRef]
  3. Xiao, G.; Xiao, Z.; Zhou, P.; Liu, C.; Wei, H.; Li, P. Performance evaluation of Galileo high accuracy service for PPP ambiguity resolution. GPS Solut. 2025, 29, 96. [Google Scholar] [CrossRef]
  4. Li, X.; Xu, Q.; Li, X.; Xin, H.; Yuan, Y.; Shen, Z.; Zhou, Y. Improving PPP-RTK-based vehicle navigation in urban environments via multilayer perceptron-based NLOS signal detection. GPS Solut. 2023, 28, 29. [Google Scholar] [CrossRef]
  5. Lyu, Z.; Gao, Y. An efficient pixel shader-based ray-tracing method for correcting GNSS non-line-of-sight error with large-scale surfaces. GPS Solut. 2023, 27, 159. [Google Scholar] [CrossRef]
  6. Suzuki, T.; Matsuo, K.; Amano, Y. Rotating GNSS Antennas: Simultaneous LOS and NLOS Multipath Mitigation. GPS Solut. 2020, 24, 86. [Google Scholar] [CrossRef]
  7. Jiang, C.H.; Xu, B.; Hsu, L.T. Probabilistic approach to detect and correct GNSS NLOS signals using an augmented state vector in the extended Kalman filter. GPS Solut. 2021, 25, 72. [Google Scholar] [CrossRef]
  8. Xu, B.; Jia, Q.; Hsu, L.T. Vector Tracking Loop-Based GNSS NLOS Detection and Correction: Algorithm Design and Performance Analysis. IEEE Trans. Instrum. Meas. 2020, 69, 4604–4619. [Google Scholar] [CrossRef]
  9. Han, J.; Huang, G.; Zhang, Q.; Tu, R.; Du, Y.; Wang, X. A New Azimuth-Dependent Elevation Weight (ADEW) Model for Real-Time Deformation Monitoring in Complex Environment by Multi-GNSS. Sensors 2018, 18, 2473. [Google Scholar] [CrossRef]
  10. Zidan, J.; Anyaegbu, E.; Kampert, E.; Higgins, M.D.; Ford, C. Doppler and Pseudorange Measurements as Prediction Features for Multi-Constellation GNSS LoS/NLoS Signal Classification. In Proceedings of the 2023 IEEE Smart World Congress (SWC), Portsmouth, UK, 28–31 August 2023; pp. 1–8. [Google Scholar]
  11. Ng, H.F.; Zhang, G.H.; Yang, K.Y.; Yang, S.X.; Hsu, L.T. Improved weighting scheme using consumer-level GNSS L5/E5a/B2a pseudorange measurements in the urban area. Adv. Space Res. 2020, 66, 1647–1658. [Google Scholar] [CrossRef]
  12. Liu, C.; LI, F. Comparison and analysis of different GNSS weighting methods. Sci. Surv. Mapp. 2018, 43, 39–44. [Google Scholar] [CrossRef]
  13. Hsu, L.T. GNSS multipath detection using a machine learning approach. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6. [Google Scholar]
  14. Luo, H.; Liu, J.; He, D. GNSS Signal Labeling, Classification, and Positioning in Urban Scenes Based on PSO–LGBM–WLS Algorithm. IEEE Trans. Instrum. Meas. 2023, 72, 2528213. [Google Scholar] [CrossRef]
  15. Jiang, Y.; Zhou, Z.; Zhang, Y.; Yang, H.; Gao, Y. Deep-Learning-Enhanced Outlier Detection for Precise GNSS Positioning With Smartphones. IEEE Trans. Instrum. Meas. 2025, 74, 2526013. [Google Scholar] [CrossRef]
  16. Liu, Q.; Gao, C.; Shang, R.; Peng, Z.; Zhang, R.; Gan, L.; Gao, W. NLOS signal detection and correction for smartphone using convolutional neural network and variational mode decomposition in urban environment. GPS Solut. 2022, 27, 31. [Google Scholar] [CrossRef]
  17. Li, L.T.; Elhajj, M.; Feng, Y.X.; Ochieng, W.Y. Machine learning based GNSS signal classification and weighting scheme design in the built environment: A comparative experiment. Satell. Navig. 2023, 4, 12. [Google Scholar] [CrossRef]
  18. Bétaille, D.; Peyret, F.; Ortiz, M.; Miquel, S.; Fontenay, L. A New Modeling Based on Urban Trenches to Improve GNSS Positioning Quality of Service in Cities. IEEE Intel. Transp. Syst. 2013, 5, 59–70. [Google Scholar] [CrossRef]
  19. Luo, H.; Mi, X.; Yang, Y.; Chen, W.; Weng, D. Multiepoch Grid-Based 3DMA Positioning in Dense Urban Canyons by Optimizing Reflection Modeling. IEEE Trans. Instrum. Meas. 2025, 74, 8503213. [Google Scholar] [CrossRef]
  20. Wen, W.W.; Zhang, G.; Hsu, L.T. GNSS NLOS Exclusion Based on Dynamic Object Detection Using LiDAR Point Cloud. IEEE Trans. Instrum. Meas. 2021, 22, 853–862. [Google Scholar] [CrossRef]
  21. Wang, L.; Groves, P.D.; Ziebart, M.K. GNSS Shadow Matching: Improving Urban Positioning Accuracy Using a 3D City Model with Optimized Visibility Scoring Scheme. NAVIGATION 2013, 60, 195–207. [Google Scholar] [CrossRef]
  22. Marais, J.; Meurie, C.; Attia, D.; Ruichek, Y.; Flancquart, A. Toward accurate localization in guided transport: Combining GNSS data and imaging information. Transp. Res. C-Emer. 2014, 43, 188–197. [Google Scholar] [CrossRef]
  23. Kato, S.; Kitamura, M.; Suzuki, T.; Amano, Y. NLOS Satellite Detection Using a Fish-Eye Camera for Improving GNSS Positioning Accuracy in Urban Area. J. Robot. Mechatron. 2016, 28, 31–39. [Google Scholar] [CrossRef]
  24. Sanromà Sánchez, J.; Gerhmann, A.; Thevenon, P.; Brocard, P.; Ben Afia, A.; Julien, O. Use of a FishEye Camera for GNSS NLOS Exclusion and Characterization in Urban Environments. In Proceedings of the ION ITM 2016, International Technical Meeting, Monterey, CA, USA, 25 January 2016. [Google Scholar]
  25. Wen, W.S.; Bai, X.W.; Kan, Y.C.; Hsu, L.T. Tightly Coupled GNSS/INS Integration via Factor Graph and Aided by Fish-Eye Camera. IEEE T Veh. Technol. 2019, 68, 10651–10662. [Google Scholar] [CrossRef]
  26. Yue, Z.; Ma, W.Z.; Gao, Y.T.; Sun, C.C.; Zhang, M.S.; Lian, Z.Z.; Li, K.Z. Vehicle-mounted GNSS navigation and positioning algorithm considering signal obstruction and fuzzy logic in urban environment. Measurement 2025, 248, 116919. [Google Scholar] [CrossRef]
  27. Zhang, Z.; Wang, L.; Li, X. Characterization and modeling of GNSS site-specific unmodeled errors under reflection and diffraction using a data-driven approach. Satell. Navig. 2025, 6, 8. [Google Scholar] [CrossRef]
  28. Kou, R.X.; Tan, R.C.; Wang, S.Y.; Yang, B.S.; Dong, Z.; Yang, S.W.; Liang, F.X. Satellite visibility analysis considering signal attenuation by trees using airborne laser scanning point cloud. GPS Solut. 2023, 27, 64. [Google Scholar] [CrossRef]
  29. Dan, S.; Santra, A.; Mahato, S.; Dey, S.; Koley, C.; Bose, A. Multi-constellation GNSS Performance Study Under Indian Forest Canopy. In Advances in Communication, Devices and Networking; Springer: Singapore, 2022; pp. 179–186. [Google Scholar]
  30. Nishida, Y.; Li, Y.; Kamiya, T. Environment Recognition from A Spherical Camera Image Based on DeepLab v3+. In Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 12–15 October 2021; pp. 2043–2046. [Google Scholar]
  31. Qu, Z.; Wei, C. A Spatial Non-cooperative Target Image Semantic Segmentation Algorithm with Improved Deeplab V3+. In Proceedings of the 2022 IEEE 22nd International Conference on Communication Technology (ICCT), Nanjing, China, 11–14 November 2022; pp. 1633–1638. [Google Scholar]
  32. Humphrey, V.; Frankenberg, C. Continuous ground monitoring of vegetation optical depth and water content with GPS signals. Biogeosciences 2023, 20, 1789–1811. [Google Scholar] [CrossRef]
  33. Hsu, L.-T. Analysis and modeling GPS NLOS effect in highly urbanized area. GPS Solut. 2017, 22, 7. [Google Scholar] [CrossRef]
  34. Herrera, A.M.; Suhandri, H.F.; Realini, E.; Reguzzoni, M.; de Lacy, M.C. goGPS: Open-source MATLAB software. GPS Solut. 2016, 20, 595–603. [Google Scholar] [CrossRef]
  35. Wen, W.W.; Hsu, L.T. 3D LiDAR Aided GNSS NLOS Mitigation in Urban Canyons. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18224–18236. [Google Scholar] [CrossRef]
  36. Zeng, K.; Wang, Q.; Tang, J.; Li, Z.; Xie, K.; Xie, S. Mitigating NLOS Interference in GNSS Single-Point Positioning Based on Dual Self-Attention Networks. IEEE Internet Things J. 2025, 12, 4318–4330. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the Proposed Method (where red indicates the sky, green represents trees and black denotes buildings).
Figure 1. Flowchart of the Proposed Method (where red indicates the sky, green represents trees and black denotes buildings).
Remotesensing 17 02725 g001
Figure 2. Data collection.
Figure 2. Data collection.
Remotesensing 17 02725 g002
Figure 3. Labeling Example. (a) Raw data. (b) Data label. (c) Label image (where red indicates the sky, green represents trees and black denotes buildings).
Figure 3. Labeling Example. (a) Raw data. (b) Data label. (c) Label image (where red indicates the sky, green represents trees and black denotes buildings).
Remotesensing 17 02725 g003
Figure 4. Dataset Training (where red indicates the sky, green represents trees and black denotes buildings).
Figure 4. Dataset Training (where red indicates the sky, green represents trees and black denotes buildings).
Remotesensing 17 02725 g004
Figure 5. Segmentation Effect. (a) Raw data. (b) Segmentation results (where red indicates the sky, green represents trees and black denotes buildings).
Figure 5. Segmentation Effect. (a) Raw data. (b) Segmentation results (where red indicates the sky, green represents trees and black denotes buildings).
Remotesensing 17 02725 g005
Figure 6. Diagram of Satellite Pixel Coordinates.
Figure 6. Diagram of Satellite Pixel Coordinates.
Remotesensing 17 02725 g006
Figure 7. Satellite Signal Projection Effect (where red indicates the sky, green represents trees, and black denotes buildings). (a) Segmentation effect. (b) GNSS signal labeling effect.
Figure 7. Satellite Signal Projection Effect (where red indicates the sky, green represents trees, and black denotes buildings). (a) Segmentation effect. (b) GNSS signal labeling effect.
Remotesensing 17 02725 g007
Figure 8. Main Experimental Instrument.
Figure 8. Main Experimental Instrument.
Remotesensing 17 02725 g008
Figure 9. Experimental paths used for parameter selection.
Figure 10. Position errors in the X, Y, and Z directions of Path 1. (a) X-Direction Positioning Error. (b) Y-Direction Positioning Error. (c) Z-Direction Positioning Error.
Figure 11. Position errors in the X, Y, and Z directions of Path 2. (a) X-Direction Positioning Error. (b) Y-Direction Positioning Error. (c) Z-Direction Positioning Error.
Figure 12. Experimental path.
Figure 13. Deep-Air, Flood Fill, and Canny Segmentation Results. (a) Raw image. (b) Flood Fill (where white represents the sky and black indicates obstructions). (c) Canny (where white represents the obstructions and black indicates sky). (d) Deep-Air (where red indicates the sky, green represents trees and black denotes buildings).
Figure 14. Multi-Sky Area Scenario with Central Occlusion. (a) Raw image. (b) Flood Fill (where white represents the sky and black indicates obstructions). (c) Canny (where white represents the sky and black indicates obstructions). (d) Deep-Air (where red indicates the sky, green represents trees and black denotes buildings).
Figure 15. Multi-Sky Area Scenario with Central Sky Region. (a) Raw image. (b) Flood Fill (where white represents the sky and black indicates obstructions). (c) Canny (where white represents the sky and black indicates obstructions). (d) Deep-Air (where red indicates the sky, green represents trees and black denotes buildings).
Figure 16. Multi-Sky Area with Power Line Obstruction. (a) Raw image. (b) Flood Fill (where white represents the sky and black indicates obstructions). (c) Canny (where white represents the sky and black indicates obstructions). (d) Deep-Air (where red indicates the sky, green represents trees and black denotes buildings).
Figure 17. Trajectory Comparison.
Figure 18. The Number of Satellites with Epoch Changes.
Figure 19. Position errors in the X, Y, and Z directions. (a) X-Direction Positioning Error. (b) Y-Direction Positioning Error. (c) Z-Direction Positioning Error.
Figure 20. CDF of positioning errors along the X, Y, and Z axes. (a) X axis. (b) Y axis. (c) Z axis.
Table 1. Epoch Signal Recognition and Detection Results.
        LOS               Attenuation         NLOS
PRN     E12 E17 C9 G5     G11 G13 E23 G20     C4 E22 G29
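Table 1 lists, for a single epoch, which PRNs were labeled LOS, attenuated, or NLOS after projecting the satellites onto the segmented sky image. The following minimal Python sketch illustrates how such labels could be produced; it assumes an ideal equidistant fisheye projection, a class-ID mask with the hypothetical encoding 0 = sky, 1 = tree, 2 = building, and hypothetical function names — it is not the paper's implementation.

import numpy as np

# Hypothetical class IDs in the segmented sky mask: 0 = sky, 1 = tree, 2 = building.
SKY, TREE, BUILDING = 0, 1, 2
LABELS = {SKY: "LOS", TREE: "Attenuation", BUILDING: "NLOS"}

def sat_to_pixel(az_deg, el_deg, cx, cy, radius):
    # Map azimuth/elevation to pixel coordinates in an upward-looking fisheye image,
    # assuming an ideal equidistant projection (radial distance proportional to the
    # zenith angle) with north at the top of the image.
    zenith = 90.0 - el_deg
    r = radius * zenith / 90.0
    az = np.deg2rad(az_deg)
    u = cx + r * np.sin(az)
    v = cy - r * np.cos(az)
    return int(round(u)), int(round(v))

def classify_satellites(seg_mask, sats, cx, cy, radius):
    # Label each PRN by the semantic class at its projected pixel.
    h, w = seg_mask.shape
    out = {}
    for prn, (az, el) in sats.items():
        u, v = sat_to_pixel(az, el, cx, cy, radius)
        u, v = min(max(u, 0), w - 1), min(max(v, 0), h - 1)
        out[prn] = LABELS.get(int(seg_mask[v, u]), "Unknown")
    return out

# Toy usage: the right half of the mask is "building", so a low satellite projected
# there is labeled NLOS while a high satellite over open sky remains LOS.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[:, 400:] = BUILDING
print(classify_satellites(mask, {"E12": (30.0, 65.0), "C4": (100.0, 20.0)},
                          cx=320, cy=240, radius=240))

In practice the projection model would have to match the actual fisheye calibration and the camera's orientation, but the lookup step (reading the semantic class at the projected pixel) stays the same.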
Table 2. Data Acquisition Hardware Specifications.
GNSS and Camera
SPAN-SE              dual-frequency L1/L2 GPS + GLONASS + B1-2/B2b BDS
Combination          CGI-410
Focal length         2.2 mm
VFOV                 90%
Image resolution     640 × 9480
Image frame rate     20 fps
Table 3. RMSEs of Path 1 and Path 2.
                     Path 1 RMSE (m)        Path 2 RMSE (m)
                     X      Y      Z        X      Y      Z
γ = 20, β = 60       2.37   5.83   7.09     1.53   2.82   5.40
γ = 10, β = 50       1.77   5.20   4.57     0.99   1.65   4.14
γ = 5,  β = 30       2.73   6.36   5.26     1.17   2.58   3.83
Table 4. RMSE of Path 1 and Path 2 with γ = 10.
           Path 1 RMSE (m)        Path 2 RMSE (m)
           X      Y      Z        X      Y      Z
β = 10     1.83   5.20   5.84     1.55   1.84   4.43
β = 20     1.84   5.18   5.08     1.26   1.80   4.31
β = 30     1.79   5.34   4.79     1.00   1.83   4.24
β = 40     1.77   5.70   4.69     1.02   1.88   4.19
β = 50     1.77   5.20   4.57     0.99   1.65   4.14
β = 60     1.81   5.43   4.62     1.01   1.70   4.12
β = 70     1.85   5.58   4.63     1.01   1.74   4.10
Table 5. RMSE of Path 1 and Path 2 with β = 50.
           Path 1 RMSE (m)        Path 2 RMSE (m)
           X      Y      Z        X      Y      Z
γ = 5      1.80   6.07   4.76     1.06   1.68   4.30
γ = 10     1.77   5.20   4.57     0.99   1.65   4.14
γ = 15     1.77   5.20   4.63     0.98   1.94   4.08
γ = 20     1.78   5.24   4.65     0.97   1.95   4.06
γ = 25     1.79   5.11   4.66     0.97   1.96   4.03
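Tables 3–5 sweep the two weighting parameters γ and β. Given the paper's distinction between tree-attenuated and building-blocked signals, a plausible (but assumed) reading is that γ penalizes attenuated signals and β penalizes NLOS signals. The Python sketch below illustrates such a scheme on top of a standard elevation-dependent variance model; the constants and the multiplicative use of γ and β are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def elevation_variance(el_deg, a=0.3, b=0.3):
    # Baseline pseudo-range variance from a common elevation-dependent model;
    # a and b (in metres) are illustrative values, not the paper's calibration.
    return a**2 + (b / np.sin(np.deg2rad(el_deg)))**2

def occlusion_weight(el_deg, label, gamma=10.0, beta=50.0):
    # Inflate the baseline variance by gamma for tree-attenuated signals and by beta
    # for building-blocked (NLOS) signals, then use the inverse variance as the weight.
    var = elevation_variance(el_deg)
    if label == "Attenuation":
        var *= gamma
    elif label == "NLOS":
        var *= beta
    return 1.0 / var

# Example: relative weights for one LOS, one attenuated, and one NLOS satellite.
for prn, el, lab in [("E12", 65, "LOS"), ("G11", 40, "Attenuation"), ("C4", 20, "NLOS")]:
    print(prn, lab, round(occlusion_weight(el, lab), 3))

In a weighted least-squares or Kalman measurement update, such weights scale each pseudo-range's contribution to the solution; the sweeps in Tables 4 and 5 are consistent with selecting γ = 10 and β = 50, which give the lowest overall RMSE on both paths.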
Table 6. Performance Comparison of Three Sky Region Segmentation Methods.
Method        Sky Area Accuracy    Number of Pixels
Canny         79.1%                44,789
Flood Fill    89.0%                62,790
Deep-Air      98.9%                57,196
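Table 6 compares the three approaches by sky-area accuracy and by the number of pixels classified as sky. A minimal sketch of one plausible way to compute both quantities from a predicted mask and a manually labeled reference mask follows; the paper's exact metric definition may differ, so treat this only as an illustration.

import numpy as np

def sky_area_accuracy(pred_mask, ref_mask, sky_id=0):
    # Fraction of reference sky pixels that the method also labels as sky
    # (one plausible reading of "sky area accuracy").
    ref_sky = (ref_mask == sky_id)
    correct = np.logical_and(pred_mask == sky_id, ref_sky).sum()
    return correct / ref_sky.sum()

def sky_pixel_count(pred_mask, sky_id=0):
    # Number of pixels the method labels as sky (cf. the pixel counts in Table 6).
    return int((pred_mask == sky_id).sum())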
Table 7. RMSE and ME in the X, Y, and Z directions.
                        RMSE (m)                 ME (m)
                        X      Y      Z          X       Y      Z
GNSS/INS/Flood Fill     0.80   2.91   3.09       −0.35   2.21   1.65
GNSS/INS/Canny          0.88   2.99   3.25       0.37    2.40   1.79
GNSS/INS/Deep-Air       0.64   2.75   2.52       −0.05   2.25   1.14
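Table 7 and Figure 20 summarize accuracy through the per-axis RMSE, the signed mean error (ME), and the empirical error CDF. For reference, a minimal sketch of these standard computations is given below; the synthetic error series in the usage example is illustrative only and is not the experiment's data.

import numpy as np

def rmse(errors):
    # Root-mean-square error of a per-epoch error series along one axis.
    e = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

def mean_error(errors):
    # Signed mean error (ME); the sign exposes a systematic bias.
    return float(np.mean(np.asarray(errors, dtype=float)))

def empirical_cdf(errors):
    # Sorted absolute errors and cumulative probabilities, as plotted in Figure 20.
    e = np.sort(np.abs(np.asarray(errors, dtype=float)))
    p = np.arange(1, e.size + 1) / e.size
    return e, p

# Illustrative use with a synthetic error series.
errs = np.random.default_rng(0).normal(0.0, 0.6, 1000)
print(rmse(errs), mean_error(errs))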