Article

Invariant Spatial Relation-Based Road Network Graphics Retrieval for GPS Art

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
*
Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2026, 15(3), 98; https://doi.org/10.3390/ijgi15030098
Submission received: 5 January 2026 / Revised: 20 February 2026 / Accepted: 25 February 2026 / Published: 27 February 2026

Abstract

In recent years, people have increasingly sought to generate exercise trajectories that embody specific semantic shapes in order to create GPS art and share it on social platforms. This trend has created an urgent demand for navigation paths with specific semantic meanings on smartwatches and smartphones. Current methods rely mainly on manual design and lack efficient automation. This study therefore proposes a novel method for automatically obtaining navigation paths with specified shapes by retrieving graphics similar to the input shape from the road network. The method uses invariant spatial relationships, such as turning angles and length ratios, together with graph matching techniques to establish one-to-one or one-to-many correspondences between line segments in the input graphic and those in the road network, enabling the retrieval of individual graphics. Building on this, a greedy strategy-based algorithm is proposed to solve the combined graphics retrieval problem, and evaluation functions are designed to select high-quality retrieval results. The accuracy and effectiveness of the method are validated through experiments on simulated data and real road network data from five regions. Furthermore, shape-constrained graphics retrieval expands the application domain of spatial scene matching.

1. Introduction

In recent years, the growing emphasis on sports and health, coupled with the widespread use of portable devices such as smartwatches and smartphones, has led to a growing number of people using the BeiDou Navigation Satellite System (BDS) or the Global Positioning System (GPS) as a “digital brush” to create meaningful patterns on urban road network maps while exercising [1,2,3,4] (see Figure 1). This practice is known as GPS art, or GPS drawing [5]. Completed GPS art is often shared on social media platforms or given as personalized gifts for occasions such as Valentine’s Day, thereby enhancing the enjoyment of sports and promoting greater public engagement in physical activity. This trend has given rise to a novel application requirement: the automatic retrieval of feasible road paths within urban road networks that collectively form specific semantic shapes, followed by navigation guidance via smartwatches or smartphones to assist users in following preset trajectories to create GPS art.
The main challenge in creating GPS art is designing paths that conform to the target shape while adhering to the constraints of the actual road network. Currently, the dominant approach is to map the input graphic directly onto the road network and manually adjust its position, size, and orientation to obtain an optimal output path [6,7]. However, these methods are inefficient and severely limit the widespread application of GPS art. To the best of our knowledge, there is currently no automated method for generating paths with predetermined shapes. Therefore, this work aims to automatically generate paths with specific shapes within the road network from a spatial scene matching perspective. The goal is to establish appropriate correspondences, based on spatial relationships, between line segments in the input graphic and those in the road network. The input graphic may contain specific shapes such as “5”, “520”, “1314”, or other vector graphics that align with the intended textual semantics. Retrieving similar graphics within the road network then yields concrete paths with the specified shapes. It is important to note that no geographic coordinates are provided for the input graphic. Additionally, this is an enumeration problem rather than an optimization problem, meaning that users in different geographic locations can obtain nearby paths with the desired shapes.
Recent research has focused on how to match sketches without geographic coordinates to maps [8,9]. These studies employ qualitative spatial scene-matching methods to address the challenge of matching spatial scenes in the absence of geographic coordinates [10,11]. This approach extracts qualitative spatial relationships [12] among objects in the scene and represents different scenes as a qualitative constraint network. Graph matching methods are then utilized to align the two spatial scenes. However, we face several challenges: (1) In contrast to qualitative spatial scene matching, which only requires consistency in topology and directional relationships, our work aims to ensure that the output paths have consistent shape descriptions with the input graphic. To achieve this goal, we utilize invariant spatial relationships such as turning angles and length ratios. (2) Due to the lack of a fixed scale in the input graphic, there may be multiple roads in a road network corresponding to a single line segment in the input graphic. To address this problem, our work formulates a principle for approximating a polyline to a line segment in the road network and treats the polyline in subsequent retrieval processes as an actual line segment. (3) The input graphic can be either a single graphic, such as the heart-shaped graphic shown in Figure 1b, or a combination of several isolated graphics, such as “520” shown in Figure 1a. Due to uncertainty about the relative positional relationships between each sub-graphic, it is difficult to retrieve combined graphics directly from the road network. To overcome this challenge, we reformulate it as a combined graphics problem based on single-graphic retrieval.
This study proposes two algorithms for graphics retrieval and graphics combination, which are used to retrieve the individual graphic and combined graphics, respectively. The graphics retrieval algorithm extracts invariant spatial relationships between line segments in the input graphic and those in the road network, thereby transforming the input graphic into an input graph. It then constructs a dynamic graph using the line segments and the polyline that approximates a line segment in the road network. The matching between the input graph and the dynamic graph is performed using a backtracking search algorithm [13]. Based on the retrieval results obtained for each sub-graphic, the graphics combination algorithm designs a scoring function for combining adjacent sub-graphics and then uses a greedy algorithm [14] to combine all sub-graphics. In addition, two evaluation methods are developed to select high-quality retrieval results for both individual and combined graphics. Finally, the accuracy and effectiveness of the algorithms are validated on simulated and real road network data from five regions.
The remaining parts of this paper are organized as follows: Section 2 reviews related work. Section 3 introduces the proposed graphics retrieval algorithm and graphics combination algorithm. Section 4 provides the details and results of the experiments. Section 5 discusses the strengths and limitations of our work. Finally, in Section 6, we summarize our findings.

2. Related Work

2.1. Generation of Paths with a Specific Shape

Several approaches have been proposed for generating paths with specific shapes. Balduz [15] used image operators to find the optimal match for the input graphic. First, both the road network and the input graphic were rasterized into regular grids. Then, the center pixel of the input image was sequentially placed on each pixel of the road network, and the total sum of the minimum distances between pixels from both images was computed. Finally, the position with the smallest value was selected and then mapped onto the road network. However, this method has limitations, including high computational cost, the inability to derive actual paths from the acquired pixels, and the loss of road directionality and three-dimensional information (e.g., bridges and overpasses) due to rasterization.
In contrast to rasterization methods, Rosner et al. [6] mapped the input graphic onto a road network by manually determining its position, scale, and orientation. They used the mapping system built into iOS to plan a route based on the start and end points of line segments in the input graphic. However, because path-planning tasks pursue different optimization objectives, standard path-planning results can sacrifice geometric details of the input graphic and introduce arbitrarily curved roads. Therefore, Waschk and Krüger [7] proposed a novel single-source multi-objective shortest path algorithm that generates satisfactory graphics by minimizing the Riemannian distance [16]. However, this approach requires continuous manual adjustment of position, scale, and orientation to achieve improved results, thereby limiting its degree of automation.

2.2. Alignment of Sketches and Maps

The goal of aligning hand-drawn sketches and maps is to geolocate sketches without geographic coordinates onto a map [17,18,19,20]. To address this, Fogliaroni et al. [11] employed a tree-search approach with backtracking and forward-checking capabilities, adapted from subgraph-matching problems, to align sketches and maps. Chipofya et al. [10] proposed a heuristic method based on the local compatibility matrix to align sketches and maps. Lu et al. [9] developed a spatial relationship matrix that incorporates seven spatial relationships and adjacency information, along with an improved tabu search algorithm, to solve the matching problem between sketches and maps more effectively. These studies primarily focus on improving algorithmic efficiency. Furthermore, Zardiny et al. [21] considered the extreme case where only routes are present in the sketch data. They devised a fitness function utilizing geometric and topological information, such as node degree, node type, connectivity, and directionality, that is independent of geographic coordinates, and combined it with genetic algorithms to explore novel approaches for matching routes in sketches and maps. However, this method has an average accuracy of only 45.59% and is computationally intensive, resulting in slow speed. Additionally, Rapant et al. [22] obtained an understanding of a region of interest by processing narratives related to it, creating appropriate computer representations for further processing, and automatically generating sketch maps. Manivannan et al. [23] proposed an algorithm that automatically identifies specific generalization types in sketches, based on the nine generalization types used in sketch matching, thereby supporting unbiased alignment in sketch matching. Schneider et al. [24] discussed recent methods for graph-based spatial pattern matching, noting that graph representations, owing to their topological flexibility and structural fault tolerance, can effectively model irregular, incomplete, or scale-heterogeneous spatial relationships, providing computable, approximate solutions to traditionally difficult-to-formalize problems in spatial similarity determination.
In general, employing spatial relationships to qualitatively represent spatial scenes offers a viable solution that circumvents the problem of missing geographic coordinate data itself, while also providing valuable insights for this study.

3. Methods

The overall framework of the proposed method is illustrated in Figure 2. For each subgraph in the target combined graphic, “520,” invariant spatial relations, such as turning angles and length ratios, as well as the subgraph matching method, are used for retrieval in the road network. The retrieval results are then evaluated and filtered. Next, the retrieval results of the individual sub-graphics are combined using a greedy strategy-based graphics combination algorithm, followed by a second stage of evaluation and filtering. Finally, high-quality retrieval results for the combined graphic, “520”, are obtained.

3.1. Graphics Retrieval Algorithm

3.1.1. Invariant Spatial Relationships

Qualitative spatial scene matching employs qualitative representations of spatial relationships to characterize spatial scenes. For instance, directional relationships are described using terms such as east, south, west, and north for qualitative descriptions, while fuzzy words like far and near or large and small are utilized for distance-related descriptions. However, this qualitative representation of spatial relationships is inadequate in accurately describing shapes and ensuring shape consistency across different spatial scenes. To address this challenge, we propose extracting precise spatial relationships that are translation-, scale-, and rotation-invariant to represent spatial scenes. Specifically, two elements are involved:
  • Turning angle: In traverse surveying techniques, the turning angle between adjacent directed line segments ab and bc is defined as follows:
    A = α_ab − α_bc + 180°
    where α_ab is the azimuth angle of line segment ab and α_bc is the azimuth angle of line segment bc when traversing a→b→c. In this study, coordinate azimuth angles are used for the input graphic and compass azimuth angles for the road network when calculating turning angles. These turning angles determine the relative positional relationships between adjacent line segments.
  • Length ratio: This is the ratio between the lengths of adjacent line segments. The length ratio is independent of scale and can be used to constrain the length of each line segment, thereby avoiding compression or stretching effects on the retrieval results.
Consequently, we represent a spatial scene as a graph structure by associating invariant spatial relationships and entities. Each node in the graph corresponds to a line segment in the spatial scene and is labeled with its azimuth angle. The edges depict pairs of topologically adjacent line segments, incorporating length ratio information, and are annotated with their direction to facilitate turning angle calculation. By mapping the input graphic and the road network onto this graph structure, retrieving individual graphics from the road network can be viewed as a subgraph matching problem [25].
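The two invariants can be computed in a few lines of Python. This is a minimal illustration, not the paper's implementation; the coordinate convention (x east, y north, azimuths clockwise from north) and the sign of the turning angle are our assumptions:

```python
import math

def azimuth(p, q):
    """Azimuth of the directed segment p -> q, in degrees clockwise from
    north (assumed convention: x east, y north)."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0

def turning_angle(a, b, c):
    """Turning angle at b when traversing a -> b -> c, following
    A = azimuth(ab) - azimuth(bc) + 180, normalized to [-180, 180)."""
    diff = azimuth(a, b) - azimuth(b, c) + 180.0
    return (diff + 180.0) % 360.0 - 180.0

def length_ratio(a, b, c):
    """Ratio of the lengths of adjacent segments ab and bc (scale-invariant)."""
    return math.dist(a, b) / math.dist(b, c)
```

Because rotating all three points shifts both azimuths equally, their difference (and hence the turning angle) is unchanged, which is what makes these quantities usable when the input graphic carries no geographic coordinates.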

3.1.2. Dynamic Graph of Road Network

Due to the lack of a fixed scale in the input graphic, a line segment in the input graphic corresponds not only to a line segment in the road network but also to a polyline that approximates a line segment. This can lead to larger-scale results when retrieving graphics. To determine whether a polyline in the road network approximates a line segment, we apply the following principle: if the difference between the azimuth angle of each line segment within the polyline and the azimuth angle of the line segment from the start point to the end point of the polyline is within an allowable error range [−θ, θ], then it is considered an approximate line segment. The azimuth angle and length of the approximate line segment are defined as equal to the azimuth angle and the Euclidean distance of the line segment from the polyline start point to the end point, respectively.
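The approximation principle can be sketched as follows (an illustrative Python fragment; the function names and the representation of a polyline as a list of points are our assumptions):

```python
import math

def azimuth(p, q):
    """Azimuth of p -> q in degrees, clockwise from north (x east, y north)."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0

def angle_diff(a, b):
    """Smallest signed difference between two azimuths, in [-180, 180)."""
    return (a - b + 180.0) % 360.0 - 180.0

def approximates_segment(polyline, theta):
    """True if every segment of the polyline deviates from the start-to-end
    azimuth by no more than theta degrees (the [-theta, theta] tolerance)."""
    ref = azimuth(polyline[0], polyline[-1])
    return all(abs(angle_diff(azimuth(p, q), ref)) <= theta
               for p, q in zip(polyline, polyline[1:]))

def as_segment(polyline):
    """Azimuth and length assigned to the approximate line segment: those of
    the chord from the polyline's start point to its end point."""
    return azimuth(polyline[0], polyline[-1]), math.dist(polyline[0], polyline[-1])
```

For example, a gently bent polyline passes the test under a 10° tolerance but fails under a 3° one, so θ directly controls how much wiggle a road may have and still count as one segment.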
To achieve large-scale retrieval, this study abstracted both line segments and approximate line segments in the road network as nodes during graph conversion. However, the construction of static graphs using all line segments and approximate line segments has two limitations: (1) it generates many redundant edges, resulting in high computational cost [26]; (2) when changes occur in the road network, it becomes necessary to reconstruct static graphs.
Therefore, during the graphics retrieval process, dynamic graphs are constructed by dynamically establishing edges between nodes and their adjacent nodes. The main steps are:
  • Randomly selecting a line segment or approximate line segment from the road network as an initial node for dynamic graph construction.
  • Obtaining adjacent and unvisited line segments or approximate line segments that satisfy the length ratio constraint with the line segment or approximate line segment represented by the initial node, considering them as adjacent nodes of the initial node.
  • Expanding the dynamic graph by continuously adding successfully matched nodes from adjacent nodes until successful retrieval is achieved or there are no more matching nodes available among adjacent ones.
It can be concluded that using different line segments or approximate line segments as the initial node will yield different dynamic graphs. Furthermore, nodes with failed matches are filtered during the dynamic construction process, thereby reducing computational costs. The dynamic graph construction process is visualized in Figure 3.

3.1.3. Subgraph Matching

This task aims to establish the correspondence between nodes in a dynamic graph representing a road network and nodes in an input graph. To achieve this, we employ backtracking for the search. Initially, we determine the order of nodes to be matched in the input graph based on their degrees and subsequently perform node matching in the dynamic graph accordingly. Next, the dynamic graph is searched using a depth-first approach. At each step, an unvisited adjacent node of the current matched node is selected as a candidate for matching. The candidate node is then verified to correspond to its counterpart in the input graph. If a successful match is found, it is added to the set of matches and becomes the new current matched node. If a match is not found, the algorithm backtracks to the previous matching node and selects another unvisited adjacent node as a candidate for matching. This recursive process continues until all nodes in the input graph are matched, or no candidates remain during backtracking.
The core of the algorithm lies in determining the correspondence between nodes in the dynamic graph and the input graph, based on the following criteria:
  • Candidate nodes in the dynamic graph should have a degree equal to or greater than that of their corresponding nodes in the input graph to avoid ineffective searching.
  • Corresponding nodes between the dynamic and input graphs should maintain consistent topological adjacency relationships.
  • The difference in length ratio and turning angle between edges from the current matched nodes to the candidate nodes in the dynamic graph and their corresponding edges in the input graph should be within the allowable error ranges of [−ε, ε] and [−φ, φ], respectively. By relaxing the constraints within these error ranges, retrieval results can deviate slightly from the original shapes. This increases retrieval success rates while avoiding situations with too few or no results due to strict constraints.
Additionally, for rotationally symmetric graphics, starting the construction from line segments in the road network that are rotationally symmetric to one another yields isomorphic graphs, because the extracted spatial relationships are rotation-invariant. This leads to repeated retrieval of the same rotationally symmetric graphic. To ensure uniqueness, after the first such graphic is retrieved, nodes that are rotationally symmetric to the current initial node must not be selected as new initial nodes.
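The backtracking search can be sketched generically. This is not the authors' implementation: the input and dynamic graphs are reduced to plain adjacency dicts, and all geometric checks (turning-angle and length-ratio tolerances, dynamic node expansion) are folded into a user-supplied `compatible` callback.

```python
def match_subgraph(pattern, target, compatible):
    """Backtracking search for a mapping of pattern nodes onto target nodes
    that preserves adjacency. pattern and target are adjacency dicts
    {node: set_of_neighbors}; compatible(p, t, mapping) hosts any extra
    checks (degree, turning-angle and length-ratio tolerances in the paper)."""
    # match higher-degree pattern nodes first to prune the search early
    order = sorted(pattern, key=lambda n: -len(pattern[n]))

    def extend(i, mapping):
        if i == len(order):
            return dict(mapping)  # all pattern nodes matched
        p = order[i]
        # candidates must be adjacent to every already-matched neighbor
        mapped_nb = [mapping[q] for q in pattern[p] if q in mapping]
        cands = (set.intersection(*(target[t] for t in mapped_nb))
                 if mapped_nb else set(target))
        for t in cands:
            if t in mapping.values():
                continue  # keep the mapping one-to-one
            if len(target[t]) < len(pattern[p]):
                continue  # degree pruning (criterion 1)
            if not compatible(p, t, mapping):
                continue
            mapping[p] = t
            found = extend(i + 1, mapping)
            if found:
                return found
            del mapping[p]  # backtrack
        return None

    return extend(0, {})
```

A path pattern embeds into a 4-cycle target, while a triangle pattern does not, matching the intuition that adjacency must be preserved node for node.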

3.1.4. Graphics Evaluation

After obtaining the retrieval results of the input graphic, the similarity between these results and the input graphic is evaluated by calculating their shape similarity. The retrieval results are first mapped to the coordinate system of the input graphic. Then, the retrieval results are divided into matching and non-matching nodes based on whether each node corresponds to a node in the input graphic. Non-matching nodes refer to other nodes located on approximate line segments except for the start and end points. Finally, the difference score between the retrieval results and the input graphic is calculated using the following formula:
Score_S = Σ_{i=1..n} d(e_i, e_i′) + Σ_{j=1..u} D_j
where n is the number of matching nodes, u is the number of non-matching nodes, d(e_i, e_i′) is the Euclidean distance between corresponding nodes e_i and e_i′, and D_j is the perpendicular distance from non-matching node j to its corresponding line segment in the input graphic. The formula thus accounts for two factors: (1) the Euclidean distance between corresponding nodes in the retrieval result and the input graphic, and (2) the total perpendicular distance from non-matching nodes to their corresponding line segments in the input graphic. Since Score_S measures deviation, sorting the results in ascending order of score places the retrieval results with the highest shape similarity to the input graphic first.
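The scoring formula can be sketched as follows (an illustrative fragment; the data layout for matched pairs and non-matching nodes is our assumption):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (perpendicular distance, clamped
    to the segment's endpoints)."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def score_s(matched_pairs, nonmatching):
    """Score_S: node-to-node distances for matching nodes plus perpendicular
    distances from non-matching nodes to their reference segments.
    matched_pairs: list of (e_i, e_i_prime) point pairs;
    nonmatching: list of (point, segment_start, segment_end) triples."""
    return (sum(math.dist(e, ep) for e, ep in matched_pairs)
            + sum(point_segment_distance(p, a, b) for p, a, b in nonmatching))
```

A lower Score_S means the retrieved shape sits closer to the input graphic once both are in the same coordinate system.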

3.2. Graphics Combination Algorithm

3.2.1. Graphics Combination Problem

When retrieving combined graphics composed of multiple sub-graphics in a road network, the graphics retrieval algorithm produces a result set (Y_1, Y_2, Y_3, …, Y_m) for each sub-graphic. This transforms the retrieval problem for combined graphics into a combination problem over the individual sub-graphic retrieval results.
Two principles should be followed when combining graphics: (1) ensure that individual sub-graphics have similar scales and directions; (2) distribute individual sub-graphics along a straight line as much as possible. For instance, for the combined graphic “520”, Figure 4a demonstrates a preferable result, while Figure 4b represents an excluded combination. Figure 4 uses the horizontal azimuth angle δ to represent the direction of each sub-graphic and the azimuth angle β of the line connecting the center points of adjacent sub-graphics to indicate their relative positional relationship.
The primary task, given sub-graphic y_a and the retrieval result set Y_b of its adjacent sub-graphic, is to filter out the sub-graphics y_b in Y_b that satisfy the combination principle with y_a. First, candidates y_b that overlap with y_a or differ significantly from it in scale are eliminated. Then, using the direction δ_s of the first determined sub-graphic as a reference, the deviations of δ_b and β_ab from the reference direction are assessed to determine whether y_b and y_a can be combined:
Judge_A = 1, if |δ_b − δ_s| ≤ σ_1 and |β_ab − δ_s| ≤ σ_2; otherwise Judge_A = 0
where σ_1 and σ_2 are thresholds for directional change. Only when the deviations of both δ_b and β_ab from the reference direction δ_s fall within these thresholds can y_b be combined with y_a. Finally, a score is assigned to every sub-graphic in set Y_b that satisfies the combination principle:
Score_A = 1 / (|δ_b − δ_s| / σ_1 + |β_ab − δ_s| / σ_2)
A high score indicates that y_b deviates little from the reference direction, both in its own orientation and in its position relative to y_a. Adhering to this principle keeps directional variation among individual sub-graphics small and distributes them evenly along the reference direction, producing an effective combination, as shown in Figure 4a.
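Formulas (3) and (4) can be sketched as follows (an illustrative fragment; the circular angle difference and the small epsilon guarding against division by zero are our additions):

```python
def angle_dev(a, b):
    """Absolute circular deviation between two directions, in [0, 180]."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def judge_a(delta_b, beta_ab, delta_s, sigma1, sigma2):
    """Formula (3): 1 if y_b is combinable with y_a relative to the
    reference direction delta_s, else 0."""
    return int(angle_dev(delta_b, delta_s) <= sigma1
               and angle_dev(beta_ab, delta_s) <= sigma2)

def score_a(delta_b, beta_ab, delta_s, sigma1, sigma2):
    """Formula (4): higher score means smaller threshold-normalized deviation.
    The 1e-9 term is our addition to avoid division by zero."""
    return 1.0 / (angle_dev(delta_b, delta_s) / sigma1
                  + angle_dev(beta_ab, delta_s) / sigma2 + 1e-9)
```

With σ_1 = σ_2 = 60, a candidate whose direction and relative position both sit within 60° of the reference passes the judge, and its score grows as both deviations shrink.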

3.2.2. Graphics Combination Algorithm

When combining graphics, exhaustive enumeration of all possible combinations results in a highly complex problem with a time complexity of O(t_1 · t_2 ⋯ t_m), where t_j denotes the number of elements in set Y_j (1 ≤ j ≤ m). To reduce this complexity, a greedy strategy is employed that prioritizes the retrieval result set with the fewest elements when selecting the initial sub-graphic and finding adjacent sub-graphics. With this optimization, the time complexity is reduced to O(t_min), where t_min is the minimum number of retrieval results. The specific steps are as follows:
  • Obtain the retrieval result set Y s with the fewest number of elements and randomly select one from Y s as the current sub-graphic being combined.
  • Based on the arrangement order of sub-graphics in the combined graphic, obtain the retrieval result set of the sub-graphic that is adjacent to the current one and not yet visited. Iterate through all graphics in this set and determine whether each can be combined with the current sub-graphic using Formula (3). If all attempts fail, select another sub-graphic from set Y_s as the current sub-graphic and restart the combination process. Otherwise, quantitatively evaluate all graphics that satisfy the combination principle using scoring Formula (4), take a certain number of high-scoring candidates in descending order of score, and select the one closest to the current sub-graphic as the combination result, which then replaces the current sub-graphic.
  • Repeat step 2 until there are no uninvolved graphics left in set Y s .
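The greedy loop above can be sketched as follows. This is a simplified illustration, not the authors' implementation: sub-graphics are abstracted to opaque objects, the combinability test and scoring are user-supplied callbacks, and only the single best-scoring candidate is kept at each step rather than a shortlist re-ranked by proximity.

```python
def combine(result_sets, order, can_combine, score_fn):
    """Greedy combination sketch. result_sets maps each sub-graphic name to
    its retrieval results; order lists the sub-graphic names in their
    arrangement order within the combined graphic. can_combine(a, b) and
    score_fn(a, b) stand in for Formulas (3) and (4)."""
    # start from the sub-graphic with the fewest retrieval results (set Y_s)
    start = min(order, key=lambda name: len(result_sets[name]))
    i = order.index(start)
    # extend outward from the seed: first to the right, then to the left
    sides = [order[i + 1:], order[:i][::-1]]
    for seed in result_sets[start]:
        combo, current = {start: seed}, {0: seed, 1: seed}
        failed = False
        for s, names in enumerate(sides):
            for name in names:
                candidates = [g for g in result_sets[name]
                              if can_combine(current[s], g)]
                if not candidates:
                    failed = True  # restart with the next seed from Y_s
                    break
                best = max(candidates, key=lambda g: score_fn(current[s], g))
                combo[name] = best
                current[s] = best
            if failed:
                break
        if not failed:
            return [combo[name] for name in order]
    return None
```

Because only |Y_s| seeds are ever tried and each extension step scans a single neighboring result set, the loop reflects the O(t_min) behavior described above.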

3.2.3. Combined Graphics Evaluation

The combined graphics retrieval results are evaluated using the following formula:
Score_C = N(S_β) + N(S_d)
where S_β and S_d are the mean square deviations of the relative direction and distance between adjacent sub-graphics, respectively, and the function N normalizes each parameter across all retrieval results. Since Score_C measures deviation, sorting the retrieval results of combined graphics in ascending order of score identifies those with superior combination effects.
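A minimal sketch of this evaluation, assuming N is a min-max normalization across all retrieval results (the exact form of N is not specified in the text):

```python
def normalize(values):
    """Min-max normalization across all retrieval results (assumed form of N)."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def score_c(s_beta, s_d):
    """Score_C per combined result: normalized direction deviation plus
    normalized distance deviation; lower means a more regular layout."""
    return [b + d for b, d in zip(normalize(s_beta), normalize(s_d))]
```

Normalizing both terms keeps the angular and metric deviations on a comparable scale before they are summed.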

4. Results

4.1. Experimental Data

For this study, we first selected a 10 km × 10 km rectangular area within each of the Nanshan District of Shenzhen, Hongshan District of Wuhan, Dongcheng District of Beijing, Minhang District of Shanghai, and Chang’an District of Xi’an. We then used the OSMnx Python toolkit (version 1.1.1) [27] to extract pedestrian-friendly road networks from the OpenStreetMap (OSM) project [28] as test instances, as shown in Table 1. The provided input graphics are vector graphics whose shapes carry the same semantic meaning as the corresponding text. Furthermore, to validate the efficacy and accuracy of the graphics retrieval algorithm, we constructed simulated road network data containing various graphics. These consisted of alphanumeric characters such as “3”, “8”, “V”, “N”, “K”, “G”, “A”, and “M”, as well as heart shapes and pentagrams. We simulated approximate line segments by introducing breakpoints on the line segments in these graphic templates. The experiments were conducted on a Windows PC with a 3.19 GHz Intel quad-core CPU and 16 GB of RAM.

4.2. Graphics Retrieval Algorithm

4.2.1. Experiment Results in the Simulated Road Network

The experimental parameters for the simulated road network data were set to φ = 0°, θ = 0°, and ε = 0.01. Table 2 presents a comparison of the results obtained from the complete graphics retrieval algorithm with the ground truth and with variants that exclude approximate line segments or length ratios. The results show that: (1) The observed number of retrieval results perfectly matches the ground truth without any missing or duplicate cases, validating the effectiveness and accuracy of the algorithm. (2) After removing the approximate line segments, the number of retrieval results decreased, and graphics such as “A”, “G”, “V”, heart shapes, and pentagrams were not retrieved at all. This suggests that using approximate line segments improves both the success rate and the number of graphic retrievals. (3) Excluding length ratios increased the number of retrieval results for certain graphics (such as “A”, “K”, “N”, “3”, “V”, and “G”) beyond their actual values. Visual analysis shows that the retrieval results in Figure 5a contain line segments that are disproportionately short or long, while Figure 5b displays severely deformed, erroneous retrieval results. This suggests that controlling the relative lengths of line segments within a graphic ensures greater shape similarity between retrieved and input graphics, thereby filtering out low-similarity or incorrect retrieval results.

4.2.2. Experiment Results in the Real Road Network

This experiment aims to validate the effectiveness of the graphics retrieval algorithm on real road network data by analyzing the retrieval results. Compared with the simulated road network data, it is difficult to retrieve a large number of results whose geometric shape exactly matches the input graphic in a real road network. Therefore, we relaxed the geometric tolerance thresholds, while preserving reasonable shape discrimination, to increase the number of retrieval results. The parameters φ, θ, and ε jointly control the strictness of shape matching: increasing their values yields more retrievals but reduces shape similarity, whereas overly strict values easily cause missed detections. Through systematic experimental optimization, we determined the optimal default parameter configuration to be φ = 15°, θ = 15°, and ε = 0.3, balancing the number of retrieval results against shape fidelity.
As shown in Table 3, the proposed algorithm successfully retrieves results for each test instance in under 2 s. The algorithm’s runtime is positively correlated with the total number of iterations and is not directly related to the number of edges or to the retrieval results on the test instances. For example, in instance I, although graphic “E” has the longest runtime, its number of retrieval results and edge quantity are neither maximum nor minimum.
Table 4 shows that variations in the parameters φ and ε indirectly affect the number of retrieved results. Lower parameter values yield stricter constraints, resulting in fewer retrieved results. When both φ and ε are set to 0, only the graphic “E” can be retrieved. Therefore, we set φ to 15° and ε to 0.3 to relax the constraints appropriately, thereby improving the success rate and yielding more retrieval results.
To investigate the impact of approximate line segments on the size and quantity of retrieval results from real road network data, we conducted further ablation experiments on instance II. The size of each retrieval result was computed as the sum of the lengths of all its line segments. Figure 6 shows that using approximate line segments during retrieval yields more retrieval results than omitting them. Furthermore, even beyond the maximum sizes achievable without approximate line segments, a certain number of retrieval results remain. Notably, the use of approximate line segments even enables a heart shape to be retrieved. This indicates that incorporating approximate line segments into road network retrieval can significantly increase both the number and the size of retrieval results.
The evaluation method used for individual graphics is effective and reliable, as shown in Figure 7. The higher-ranked retrieval results exhibit greater shape similarity to the input graphic, whereas lower-ranked results exhibit issues such as inconsistent line lengths and significant deviations in turning angles.

4.3. Graphics Combination Algorithm

This experiment aims to evaluate the effectiveness of the graphics combination algorithm by analyzing the retrieval results of combined graphics in real road network data. The experimental parameters are set to σ_1 = 60 and σ_2 = 60. Table 5 and Figure 8 demonstrate that the proposed algorithm successfully retrieves combined graphics with special meanings, such as “520”, “1314”, “I♥y”, and “LOVE”, from real road network data, confirming its effectiveness. Retrieving the combined graphic “1314” takes the longest runtime among the test instances, reaching up to 1725.6 s, while the other combined graphics take less than one minute to retrieve.
Table 6 shows a positive correlation between the number of retrieval results for the first combined sub-graphic and the runtime of the graphics combination algorithm. Since the graphic “O” has the fewest retrieval results, it requires less runtime when chosen as the first combined sub-graphic. Conversely, selecting graphic “E” as the first combined sub-graphic results in a longer runtime.
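The observation above suggests a simple ordering heuristic. The sketch below (names hypothetical, not the authors' code) starts the combination from the sub-graphic with the fewest retrieval results, mirroring the choice of “O” in Table 6, and keeps the remaining sub-graphics in their input order:

```python
def order_subgraphics(results_by_glyph):
    """Choose the glyph with the fewest retrieval results as the first
    combined sub-graphic, keeping the others in their original order.
    results_by_glyph maps a glyph to the list of its retrieval results.
    Starting from the rarest glyph shrinks the search space before the
    greedy pairing of adjacent sub-graphics begins."""
    glyphs = list(results_by_glyph)
    first = min(glyphs, key=lambda g: len(results_by_glyph[g]))
    return [first] + [g for g in glyphs if g != first]
```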
As shown in Figure 9, the proposed method effectively evaluates the retrieval results for combined graphics. Compared with lower-scoring cases, higher-rated cases exhibit roughly equal spacing between sub-graphics arranged along a straight line, which enhances the overall aesthetics.

5. Discussion

5.1. Method Effectiveness Explanation

The research results show that the proposed method can effectively retrieve output paths in the road network that are similar in shape to the input graphic. The method comprises a graphics retrieval algorithm and a graphics combination algorithm: the former retrieves individual input graphics, while the latter retrieves combined graphics. The graphics retrieval algorithm uses invariant spatial relationships, such as turning angles and length ratios, to compensate for the missing geographic coordinates of the input graphic, ensuring shape consistency between the output paths and the input graphic. The algorithm uses not only existing line segments in the road network but also approximate line segments to construct a dynamic graph, which increases both the number and the size of output paths. Figure 6 illustrates that these paths can reach a maximum length of 8 km and are distributed across various length ranges, catering to the different path-length preferences of sports enthusiasts. The graphics combination algorithm improves efficiency by selecting the graphic with the fewest retrieval results as the first sub-graphic to be combined and by adopting a greedy strategy. Furthermore, evaluating individual and combined graphics enables the selection of high-quality retrieval results. Ultimately, the precision and effectiveness of the proposed algorithms were validated through quantitative and qualitative results on simulated data and five real road network datasets.
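To make the invariant relations concrete: for a polyline, the turning angle at each interior vertex and the length ratio of each consecutive segment pair are unchanged by translation, rotation, and uniform scaling, which is what allows an input graphic without geographic coordinates to be matched against a road network. A minimal sketch (not the authors' implementation) of extracting both descriptors:

```python
import math

def invariant_relations(points):
    """Derive coordinate-free shape descriptors from a polyline given
    as a list of (x, y) points: the signed turning angle (degrees,
    normalized to (-180, 180]) at each interior vertex, and the length
    ratio of each consecutive segment pair."""
    angles, ratios = [], []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a = math.atan2(y1 - y0, x1 - x0)   # heading of incoming segment
        b = math.atan2(y2 - y1, x2 - x1)   # heading of outgoing segment
        turn = math.degrees((b - a + math.pi) % (2 * math.pi) - math.pi)
        angles.append(turn)
        len_in = math.hypot(x1 - x0, y1 - y0)
        len_out = math.hypot(x2 - x1, y2 - y1)
        ratios.append(len_out / len_in)
    return angles, ratios
```

Rotating or uniformly scaling the input polyline leaves both lists unchanged, which is the property the retrieval algorithm exploits.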

5.2. Advantages Compared to Previous Methods

In previous studies on generating specific shape paths, Balduz [15] failed to obtain actual paths, whereas Waschk and Krüger [7] required manual intervention to adjust direction, position, and scale. In contrast, this study proposes a novel method for automatically generating specific shape paths without manual intervention. It can retrieve individual graphics on a 10 km × 10 km road network within seconds and obtain combined graphics through a combination algorithm.
Regarding research methodology, the qualitative spatial scene matching approach used in sketch map alignment tasks offers insights for spatial scene matching in the absence of geographic coordinates. However, this study focuses primarily on the shape similarity between output paths and the input graphic by introducing invariant spatial relationships, such as turning angles and length ratios, instead of qualitative spatial relationships. When designing the graphics matching method, this study aims to exhaust all output paths that meet the requirements, rather than seeking only the optimal or sub-optimal solutions pursued in sketch-map alignment tasks. Overall, this study extends the application domain of spatial scene matching by applying it to GPS art problems. Moreover, because invariant spatial relationships do not depend on geographic coordinates, this study suggests a general solution to spatial scene matching in the absence of such coordinates: represent the spatial scene as a constraint graph based on invariant spatial relationships, and then transform the matching task into a graph-matching problem.
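Under this view, the scene becomes a graph whose edge labels carry the invariant relations, and retrieval reduces to enumerating label-compatible subgraph matches. The following self-contained backtracking sketch illustrates that reduction; it is far simpler than the paper's dynamic-graph algorithm and is only meant to show the shape of the problem:

```python
def subgraph_matches(pattern_adj, target_adj, compatible):
    """Enumerate injective mappings from pattern nodes to target nodes
    such that every pattern edge maps to a target edge whose label (an
    invariant relation, e.g. a turning angle) is compatible.
    pattern_adj / target_adj: dict node -> {neighbor: edge_label}.
    compatible(pattern_label, target_label) -> bool."""
    pat_nodes = list(pattern_adj)

    def extend(mapping):
        if len(mapping) == len(pat_nodes):
            yield dict(mapping)
            return
        p = pat_nodes[len(mapping)]
        for t in target_adj:
            if t in mapping.values():
                continue  # keep the mapping injective
            # every already-mapped pattern neighbor of p must map to a
            # target neighbor of t with a compatible edge label
            ok = all(
                mapping[q] in target_adj[t]
                and compatible(lab, target_adj[t][mapping[q]])
                for q, lab in pattern_adj[p].items() if q in mapping
            )
            if ok:
                mapping[p] = t
                yield from extend(mapping)
                del mapping[p]

    yield from extend({})
```

Exhaustive enumeration, rather than a single best alignment, is exactly the behavior the paper requires of its matcher; efficient variants of this idea go back to Ullmann's subgraph isomorphism algorithm [25].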

5.3. Limitations and Future Works

The essence of the graphics retrieval algorithm lies in controlling the semantic representation by constraining the shape of the output path. For example, an input graphic whose semantics correspond to the digit 5 yields an output path with the same semantics. However, since the same semantics can be expressed by multiple graphic representations, it is difficult to retrieve paths that carry the desired semantics but differ in shape from the input. This also makes it challenging to detect complex structures such as Chinese characters and zodiac signs. Additionally, retrieving the combined graphic “1314” is time-consuming, likely because the sub-graphic “1” has an excessive number of retrieval results in the road network, leading to significant computational cost during each optimal selection of adjacent sub-graphics.
Future research could focus on retrieving output paths in a road network based on semantic similarity rather than being solely limited by shape similarity. Another consideration is preprocessing retrieval results for sub-graphics within combined graphics to reduce their number and improve efficiency in graphic combinations.

6. Conclusions

This paper proposes a graphics retrieval method that integrates invariant spatial relationships and subgraph matching, along with a greedy strategy-based graphics combination algorithm, to support the automatic retrieval of single and combined graphics in road networks. This method retrieves target paths similar to the input graphic within the road network, thereby providing customizable navigation path services for mobile devices such as smartwatches and smartphones, and expanding the potential applications of GPS art to mass fitness and creative geographic practices. The proposed graphics retrieval algorithm is a variant of spatial-scene-matching methods and has significant implications for extending the application domains of this approach. However, the current method faces limitations when dealing with graphics that have more complex shapes (such as the patterns of the twelve Chinese zodiac signs): first, the same semantic concept often corresponds to multiple graphic shapes, making it difficult for us to confirm the appropriate input graphic; second, complex graphics have stricter shape requirements for the road network, making it difficult for us to detect results with similar shapes to the input graphic. Future work will focus on developing a semantic-driven road network graphic retrieval algorithm that prioritizes semantic consistency over shape similarity, thereby relaxing strict requirements on road network shape.

Author Contributions

Conceptualization, Gang Li and Zhongliang Fu; methodology, Gang Li and Zhongliang Fu; software, Gang Li; validation, Gang Li; formal analysis, Gang Li and Zhongliang Fu; investigation, Zhongliang Fu; resources, Zhongliang Fu; data curation, Gang Li; writing—original draft preparation, Gang Li; writing—review and editing, Zhongliang Fu; visualization, Gang Li; supervision, Zhongliang Fu; project administration, Zhongliang Fu; funding acquisition, Zhongliang Fu. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Huawei Technologies, grant number TC20220614064.

Data Availability Statement

The data and codes are available at https://github.com/liganggis/run_drawing (accessed on 1 January 2026).

Acknowledgments

We thank the anonymous reviewers for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Strava. Available online: https://www.strava.com (accessed on 25 December 2025).
  2. A Creative Spin: Pedaling My Art. Available online: https://www.youtube.com/watch?v=OsMMysaZRyg (accessed on 25 December 2025).
  3. Yassan’s GPS Drawing Project. Available online: https://gpsdrawing.info/ (accessed on 25 December 2025).
  4. Joyrun. Available online: https://www.thejoyrun.com/ (accessed on 25 December 2025).
  5. Hajian, A.; Baloian, N.; Inoue, T.; Luther, W. Collaborative Technologies and Data Science in Artificial Intelligence Applications; Universität Duisburg-Essen: Essen, Germany, 2020. [Google Scholar]
  6. Rosner, D.K.; Saegusa, H.; Friedland, J.; Chambliss, A. Walking by Drawing. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 397–406. [Google Scholar]
  7. Waschk, A.; Krüger, J. Automatic route planning for GPS art generation. Comput. Vis. Media 2019, 5, 303–310. [Google Scholar] [CrossRef]
  8. Chipofya, M.; Wang, J.; Schwering, A. Towards Cognitively Plausible Spatial Representations for Sketch Map Alignment. In Proceedings of the Conference On Spatial Information Theory; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  9. Lu, Y.; Sun, Y.; Liu, S.; Li, J.; Liu, Y.; Yao, K.; Wang, Y.; Fu, Z.; Lu, S.; Shao, S. Hand-drawn sketch and vector map matching based on topological features. Front. Earth Sci. 2023, 11, 1081445. [Google Scholar] [CrossRef]
  10. Chipofya, M.C.; Schultz, C.; Schwering, A. A metaheuristic approach for efficient and effective sketch-to-metric map alignment. Int. J. Geogr. Inf. Sci. 2015, 30, 405–425. [Google Scholar] [CrossRef]
  11. Fogliaroni, P.; Weiser, P.; Hobel, H. Qualitative Spatial Configuration Search. Spat. Cogn. Comput. 2016, 16, 272–300. [Google Scholar] [CrossRef]
  12. Sioutis, M.; Wolter, D. Qualitative Spatial and Temporal Reasoning: Current Status and Future Challenges. In Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021. [Google Scholar]
  13. Çivicioglu, P. Backtracking Search Optimization Algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144. [Google Scholar] [CrossRef]
  14. DeVore, R.A.; Temlyakov, V.N. Some remarks on greedy algorithms. Adv. Comput. Math. 1996, 5, 173–187. [Google Scholar] [CrossRef]
  15. Balduz, P. Walk Line Drawing. Ph.D. Thesis, Vienna University of Technology, Vienna, Austria, 2017. [Google Scholar]
  16. Riemann, B. Ueber die Darstellbarkeit einer Function durch eine trigonometrische Reihe. In Bernard Riemann’s Gesammelte Mathematische Werke und Wissenschaftlicher Nachlass; Riemann, B., Weber, H.M., Dedekind, R., Eds.; Cambridge Library Collection—Mathematics; Cambridge University Press: Cambridge, UK, 2013; pp. 213–253. [Google Scholar]
  17. Schwering, A.; Wang, J.; Chipofya, M.; Jan, S.; Li, R.; Broelemann, K. SketchMapia: Qualitative Representations for the Alignment of Sketch and Metric Maps. Spat. Cogn. Comput. 2014, 14, 220–254. [Google Scholar] [CrossRef]
  18. Jan, S.; Schwering, A. SketchMapia: A Framework for Qualitative Alignment of Sketch Maps and Metric Maps. 2015. Available online: https://www.researchgate.net/publication/275154644 (accessed on 24 February 2026).
  19. Wang, J.; Schwering, A. Invariant spatial information in sketch maps—A study of survey sketch maps of urban areas. J. Spat. Inf. Sci. 2015, 11, 31–52. [Google Scholar] [CrossRef]
  20. Zardiny, A.Z.; Hakimpour, F.; Shahbazi, M. Sketch maps for searching in spatial data. Trans. GIS 2020, 24, 780–808. [Google Scholar] [CrossRef]
  21. Zare Zardiny, A.; Hakimpour, F. Route Matching in Sketch and Metric Maps. J. Geogr. Syst. 2021, 23, 381–405. [Google Scholar] [CrossRef]
  22. Rapant, P.; Menšík, M.; Albert, A. Automatic sketch map creation from labeled planar graph. Int. J. Geogr. Inf. Sci. 2024, 38, 981–1006. [Google Scholar] [CrossRef]
  23. Manivannan, C.; Krukar, J.; Schwering, A. An algorithmic approach to detect generalization in sketch maps from sketch map alignment. PLoS ONE 2024, 19, e0304696. [Google Scholar] [CrossRef] [PubMed]
  24. Schneider, N.R.; O’Sullivan, K.; Samet, H. The Future of Graph-based Spatial Pattern Matching (Vision Paper). In Proceedings of the 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), Utrecht, The Netherlands, 13–16 May 2024; pp. 360–364. [Google Scholar]
  25. Ullmann, J.R. An Algorithm for Subgraph Isomorphism. J. ACM (JACM) 1976, 23, 31–42. [Google Scholar] [CrossRef]
  26. Zhou, K.; Yang, C.; Liu, J.; Xu, Q. Dynamic Graph-Based Feature Learning with Few Edges Considering Noisy Samples for Rotating Machinery Fault Diagnosis. IEEE Trans. Ind. Electron. 2022, 69, 10595–10604. [Google Scholar] [CrossRef]
  27. Boeing, G. OSMnx: New methods for acquiring, constructing, analyzing, and visualizing complex street networks. Comput. Environ. Urban Syst. 2017, 65, 126–139. [Google Scholar] [CrossRef]
  28. Baader, D. Openstreetmap Using and Enhancing the Free Map of the World; UIT Cambridge: Cambridge, UK, 2016. [Google Scholar]
Figure 1. Visualization of GPS art, showing (a) GPS art represented by the combined graphics of “520”, and (b) GPS art depicted through a heart-shaped design.
Figure 2. Diagram illustrating the overall framework of the road network graphics retrieval algorithm.
Figure 3. Construction of the dynamic graph. In the road network (a), Ei (i = 1, 2, 3, 4, 5, 6) represents line segments, where E34 and E56 are approximate line segments. After identifying line segment E1 as a matching node in (b), its adjacent line segments E2, E3, and E5, along with the adjacent approximate line segments E34 and E56, are considered as its adjacent nodes. Further adjacent nodes are obtained in (c) after identifying the matching node as E5.
Figure 4. Schematic of the combined graphic “520”. Figure (a) demonstrates a more favorable combination result, while figure (b) presents a less satisfactory combination effect.
Figure 5. Visualization of retrieval results on simulated road network data. (a) The retrieval results indicate that certain line segments of graphics “A”, “K”, “N”, “3”, and “V” have either excessively short or long lengths; (b) the retrieval results reveal severe deformation in part of graphic “G”.
Figure 6. Impact of approximate line segments on the size of retrieval results. The bar chart displays the distribution of retrieval results across different length ranges, while the broken line indicates the maximum size of retrieval results under different conditions.
Figure 7. Retrieval results in ascending order according to their ratings. The red box represents the input graphic, while the blue box indicates the corresponding retrieval results. The Rank-k (k = 1, 50, 250, 1500, and 2000) indicates the ranking sequence of the retrieval results.
Figure 8. Visualization of the retrieval results for combined graphics. The input combined graphics are displayed in the red box, while the corresponding retrieval results are shown in the blue box.
Figure 9. Retrieval results for combined graphics in ascending order according to their ratings. The Rank-k (k = 1, 15, 30, 50) indicates the ranking sequence of the retrieval results.
Table 1. Test instances and their corresponding number of nodes and edges.
|       | I (Shenzhen) | II (Wuhan) | III (Beijing) | IV (Shanghai) | V (Xi’an) |
| Nodes | 20,356       | 8670       | 4555          | 6417          | 5468      |
| Edges | 92,024       | 30,890     | 18,416        | 25,018        | 18,922    |
Table 2. Number of retrieval results for simulating road network data with different details processed.
|              | “A” | “G” | “K” | “M” | “N” | “8” | “3” | “V” | Heart Shape | Pentagram |
| Ours w/o ALS | 0   | 0   | 1   | 1   | 1   | 1   | 2   | 0   | 0           | 0         |
| Ours w/o LR  | 2   | 1   | 9   | 16  | 14  | 1   | 40  | 2   | 1           | 1         |
| Ours         | 1   | 1   | 2   | 1   | 2   | 1   | 3   | 1   | 1           | 1         |
| Truth Value  | 1   | 1   | 2   | 1   | 2   | 1   | 3   | 1   | 1           | 1         |
ALS: Approximate line segment; LR: Length ratio.
Table 3. Results of five test instances, including the number of retrieval results, algorithm runtime (in milliseconds), and the total number of algorithm iterations.
|     | “E” Quantity | Time | Epoch     | “8” Quantity | Time | Epoch   | Heart Quantity | Time | Epoch     |
| I   | 9447         | 1792 | 5,274,226 | 19           | 450  | 744,495 | 57             | 311  | 890,732   |
| II  | 6349         | 103  | 278,846   | 42           | 109  | 166,621 | 212            | 74   | 214,373   |
| III | 74,940       | 298  | 801,645   | 10           | 154  | 231,829 | 167            | 179  | 450,347   |
| IV  | 207,559      | 437  | 1,175,460 | 32           | 257  | 445,519 | 1679           | 341  | 903,459   |
| V   | 216,485      | 522  | 1,295,781 | 152          | 232  | 342,107 | 3949           | 602  | 1,436,994 |
Table 4. Impact of parameters φ and ε on the number of retrieval results in test instance II.
|             | φ = 0° | φ = 5° | φ = 10° | φ = 15° | ε = 0.0 | ε = 0.1 | ε = 0.2 | ε = 0.3 |
| “E”         | 0      | 2848   | 5402    | 6349    | 3       | 274     | 1672    | 6349    |
| “8”         | 0      | 19     | 26      | 42      | 0       | 11      | 20      | 42      |
| Heart shape | 0      | 142    | 151     | 212     | 0       | 6       | 90      | 212     |
Table 5. Number of combined graphics retrieval results and the algorithm’s runtime (in seconds) in five different test instances.
|     | “520” Number | Time | “1314” Number | Time   | “I♥y” Number | Time | “LOVE” Number | Time |
| I   | 25           | 5.6  | 4129          | 236.5  | 25           | 7.6  | 25            | 7.2  |
| II  | 48           | 5.7  | 2307          | 52.7   | 165          | 7.3  | 59            | 6.9  |
| III | 12           | 6.0  | 19,217        | 1725.6 | 119          | 10.5 | 27            | 9.8  |
| IV  | 24           | 5.9  | 39,113        | 1652.4 | 1298         | 43.1 | 87            | 13.5 |
| V   | 139          | 7.4  | 12,129        | 532.1  | 1417         | 35.5 | 255           | 18.0 |
Table 6. Runtime (in seconds) of the graphics combination algorithm corresponding to the different first combined sub-graphics.
|          | “L”  | “O” | “V”    | “E”    |
| Quantity | 2179 | 397 | 11,331 | 12,926 |
| Time     | 30   | 13  | 88     | 113    |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, G.; Fu, Z. Invariant Spatial Relation-Based Road Network Graphics Retrieval for GPS Art. ISPRS Int. J. Geo-Inf. 2026, 15, 98. https://doi.org/10.3390/ijgi15030098


