Article

Multi-Scale Quantitative Direction-Relation Matrix for Cardinal Directions

1 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
2 Department of Geography and Resource Management, Wong Foo Yuan Building, The Chinese University of Hong Kong, Hong Kong SAR 999077, China
3 Institute of Space and Earth Information Science, Fok Ying Tung Remote Sensing Science Building, The Chinese University of Hong Kong, Hong Kong SAR 999077, China
4 Changjiang Wuhan Waterway Bureau, Wuhan 430010, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2026, 15(1), 11; https://doi.org/10.3390/ijgi15010011
Submission received: 6 August 2025 / Revised: 14 December 2025 / Accepted: 22 December 2025 / Published: 25 December 2025
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)

Abstract

Existing qualitative direction-relation matrix models employ rigid classification schemes, limiting their ability to differentiate directional relationships between multiple targets within the same directional tile. This paper proposes two quantitative matrix models for qualitative direction relations at differing levels of precision. Based on the directional tile partitioning derived from qualitative direction-relation models, the new models achieve quantitative expression of qualitative directionality through two distinct descriptive parameters: order and coordinate. The order matrix uses angular and displacement measurements as sequential variables, capturing the directional sequence characteristics within the same directional tile. The coordinate matrix employs direction-relation coordinates as matrix elements, integrating directional and distance relationships to identify the distribution of targets at varying distances along the same line of sight. The two models operate at distinct scales and achieve soft classification of directional relationships, substantially enhancing descriptive precision. Furthermore, they serve as quantitative foundations for the qualitative direction-relation models, establishing a bridge between quantitative and qualitative representations. Experimental assessment confirms that the new models substantially improve the precision of directional relationship description through their quantitative elements while supporting various application domains.

1. Introduction

A spatial configuration consists of an assemblage of spatial entities arranged in a specific distribution within the reference frame [1]. Spatial relations effectively encapsulate the fundamental structural composition of spatial scenes, with human cognition primarily interpreting these environments through object identification and relational analysis. Spatial relationships primarily encompass topological, directional, and distance relationships among spatial entities [2]. Directional relationships exhibit extensive applicability across multiple domains and everyday contexts [2,3,4,5,6]. Direction-relation formal models constitute the mathematical formalization of directional semantics. By formalizing the directional spatial relationships between target and reference entities through rigorous mathematical models, we enable efficient spatial retrieval and support complex geospatial applications including urban morphological analysis, hazard prediction systems, environmental surveillance protocols, and related spatial decision support frameworks [7,8,9,10]. Direction-relation models are extensively deployed across spatial similarity assessment [2,5,10,11,12,13,14,15], spatial query [9,16], spatial scene matching [17,18], remote sensing image retrieval and processing [19,20,21], spatial reasoning [22,23,24,25,26], natural language processing [27], multi-resolution cartographic integration [28], and geographic information retrieval [1,29,30]. Research on directional-relation formal models not only advances theoretical geospatial science but also bridges the gap between human spatial cognition and computational spatial representation, enabling more sophisticated machine-based spatial reasoning. Emerging technologies, especially digital twins, will heighten the need for refined directional-relation models in spatial information science.
Recent developments in artificial intelligence (AI) and large language models (LLMs) have facilitated the incorporation of spatial relationship frameworks, merging geometric, structural, and semantic comprehension to enhance geographic analysis and interaction. Ji et al. assessed the proficiency of large language models in processing topological relations [31]. The GeoLM framework utilizes natural language processing and geospatial contextual interpretation to derive topological and spatial associations between geographic entities [32]. Furthermore, the Spatial RGPT framework determines relative positioning and proximity based on user-defined regions [33]. Nevertheless, existing research predominantly emphasizes topological relationships, with insufficient investigation of directional relationships. Creating frameworks for directional spatial representation and incorporating them into multiscale data architectures represent critical pathways for advancing spatial artificial intelligence.
Direction-relation models can be categorized into two fundamental paradigms: the quantitative models, which employ precise numerical representations, and the qualitative models, which utilize symbolic abstractions of spatial configurations [34,35].
The quantitative direction-relation models designate a specific ray as the reference direction (e.g., due north) and employ angular measurement to represent directional relationships. While these models enable precise directional definition, their practical application in everyday spatial reasoning is limited by their continuous value domain spanning 0–360 degrees. Qualitative direction-relation models, conversely, partition the reference object’s spatial domain into distinct directional regions according to cardinal direction reference systems, and classify directional relationships based on the target object’s position within these regions. Human spatial cognition typically employs either absolute directional references (north-south-east-west) or relative directional references (front-back-left-right). Adopting the four cardinal directions as a standard reference system aligns naturally with the Cartesian coordinate system: north and south correspond to the positive and negative y-axis, respectively, while east and west correspond to the positive and negative x-axis. This orthogonal arrangement provides complete coverage for spatial positioning in two-dimensional coordinate space.
Human spatial cognition predominantly employs qualitative descriptors for communicating spatial relationships and concepts, rather than relying on quantitative measurements and coordinates [1,36,37]. Existing models of qualitative cardinal directions principally include cone models [38,39], projection-based models [3,11,40], and Voronoi-based models [8]. The cone-shaped model partitions space into quadrants or octants via angular division; however, its definition of cardinal directions for a reference line or polygon may not align with conventional directional perception in certain contexts. Projection-based models, which partition directional tiles according to the reference object’s four cardinal direction projection regions as previously defined, achieve completeness in two-dimensional space [41] and are aligned with the Earth’s geodetic positioning system defined by latitude and longitude [42]. The direction-relation matrix and its extensions represent prevalent projection-based directional models. The basic matrix treats the Minimum Bounding Rectangle (MBR) as a singular directional partition, failing to resolve directional variance within the MBR; its extended models address MBR region subdivision from multiple perspectives, enhancing the description of directional relationships [9,10,37]. However, each of these extended models operates at a single, fixed precision level: finer partitioning enhances descriptive accuracy at the cost of increased algorithmic complexity and computational overhead, thereby significantly constraining practical applicability. To address this issue, Tang et al. integrated direction-relation matrix models of different precision to construct a multiscale pyramid model of cardinal directions, effectively addressing the need for multiscale direction-relation representation [43].
However, these qualitative matrix models, constrained by their rigid directional partitioning, fail to discern nuanced directional relationships between distinct targets within the same directional tile. For example, the segmentation direction-relation matrix subdivides the northeast direction tile into three direction tiles, namely east-northeast, northeast, and north-northeast; yet it remains unable to distinguish between targets within the east-northeast partition. Furthermore, the qualitative models may classify targets with closely aligned directions as disparate simply because they fall into adjacent partitions. To resolve this hard classification problem of qualitative directionality, it is necessary to establish a corresponding quantitative model based on the directional tile definition framework of the qualitative models, thereby achieving soft classification of qualitative directional relationships.
In this paper, based on the qualitative direction-relation pyramid model [43], we propose two quantitative directional matrix models, employing sequential parameters and coordinate parameters as matrix elements to construct the corresponding order and coordinate matrices, respectively. The order matrix encodes the sequential characteristics within individual tiles, while the coordinate matrix uses coordinate parameters to define the directional properties of discrete objects positioned along identical ray lines. The new models not only enable multiscale soft classification of qualitative directional relationships, enhancing descriptive precision, but can also be integrated with the existing pyramid model, serving as a bridge from quantitative coordinates to qualitative directional relationships and thereby constructing a comprehensive directional relationship pyramid model.

2. Related Work

2.1. Direction-Relation Matrix and Its Extended Models

The direction-relation matrix model defines a 3 × 3 matrix [11] that captures the binary relation between a reference object and a target object. The elements of the matrix record a non-empty subset of the nine direction tiles {N, S, E, W, NE, SE, SW, NW, O}. The directional tile partitioning of the direction-relation matrix model aligns with human cognitive patterns for perceiving directions relative to extended reference objects. However, the model has been consistently criticized for its inability to distinguish directional relationship characteristics within the Minimum Bounding Rectangle (MBR), as the entire MBR is designated as a single directional tile O.
To distinguish direction relations within the MBR, the detailed direction-relation matrix model partitions the MBR using its central region as the reference origin [9]. However, since using the MBR central region as the center of both the interior and the boundary contradicts cognitive conventions, Tang et al. implemented topological partitioning within the MBR region and defined corresponding topological references for the different topological regions, thereby resolving the problem of describing directional relationships between objects with complex topological relationships [41]. The two-tuple model [10] employs the mean centroid azimuth to represent quantitative distributions across directional tiles. As shown in Figure 1, A represents the reference object and B the target object: based on the two-tuple model, B is first divided into grids, the centroid azimuth of each grid is calculated, and the mean centroid azimuth then serves as the quantitative directional value for target B. This approach presents several limitations: (1) centroid azimuths use the centroid as the reference origin, which may fall outside the reference polygon, contradicting cognitive conventions; (2) objects in all directional tiles use the centroid as their reference origin, violating the fundamental principle of the projection-based model; (3) substituting mean centroid azimuths for actual angular distribution ranges compromises expressive precision.
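The two-tuple model's quantitative value can be sketched as follows (a minimal illustration, not the authors' implementation; function and variable names are ours, and the naive arithmetic mean ignores azimuth wrap-around near north):

```python
import math

def mean_centroid_azimuth(ref_centroid, cell_centroids):
    """Sketch of the two-tuple model's quantitative direction value:
    the target is rasterised into grid cells, the azimuth of each cell
    centroid from the reference centroid is computed (clockwise from
    north), and the arithmetic mean is taken."""
    xR, yR = ref_centroid
    # atan2(dx, dy) yields the azimuth measured clockwise from north
    azimuths = [
        math.degrees(math.atan2(x - xR, y - yR)) % 360
        for x, y in cell_centroids
    ]
    return sum(azimuths) / len(azimuths)
```

Note that averaging raw azimuths is exactly the kind of simplification criticized above: a single mean value discards the actual angular distribution range of the target.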
Furthermore, the granular partitioning of the directional tile, while enhancing representational fidelity, necessarily incurs computational overhead. This presents a fundamental tension between expressive precision and algorithmic efficiency in direction-relation modeling. Practical implementation contexts necessitate the development of multi-scale directional relationship frameworks to address varying application requirements.

2.2. The Multi-Scale Pyramid Model of Cardinal Directions

The direction-relation matrix and its extensions facilitate qualitative representation of spatial orientations across multiple scales. However, these models employ inconsistent partitioning schemas, thereby precluding inter-scale conversion between different representational frameworks. Tang et al. introduced a pyramid model that addresses the multiscale properties of directional relationships across diverse application domains [43]. This hierarchical framework integrates and extends current spatial representation paradigms, establishing a multiscale direction-relation model capable of accommodating spatial entities with varying geometric typologies and precision requirements. Furthermore, it extends the direction-relation model of the reference point by increasing the division from the basic directional tiles to segmentation directional tiles, thus enhancing the accuracy of qualitative directional description.
However, both the pyramid model and conventional direction-relation matrix implement discrete categorization schemes for direction-relation distribution, failing to capture the continuous gradients of spatial transitions that occur within the same directional tile. The fundamental reason lies in the fact that such rigid classification models are unable to recognize differences in the directional distribution of different targets within the same directional tile.
The direction-relation two-tuple model combines a grid-based direction-relation matrix and a centroid-based direction-relation matrix to characterize both spatial distribution ratio variations and centroid positional shifts between spatial entities [10]. The model mitigates rigid classification by employing centroid-based angles to differentiate direction-relation variations within a tile. However, its reliance on the centroid as the reference point for angles is inadequate for concave polygons, where the centroid may not represent a meaningful spatial anchor. Furthermore, the model uniformly uses the centroid as the reference for all directional tiles, contravening the principle of distinguishing between distinct directional sectors, and it fails to resolve the ambiguity between targets located at varying distances along the same angular line.

3. Multi-Scale Quantitative Model for Cardinal Directions

3.1. Framework of the Multi-Scale Quantitative Model for Cardinal Directions

The existing direction-relation pyramid model comprises two levels: the basic matrix (Level 1 in Figure 2) and the segmentation matrix (Level 2 in Figure 2). The basic matrix records the intersection status between the target and each directional tile using binary values, whereas the segmentation matrix subdivides the directional tiles of the basic matrix and records the intersection status between the target and the subdivided tiles using powers of two. Both types of matrices can only record whether the target intersects the directional tiles, failing to reflect the continuity of the target’s directional distribution.
Since the aforementioned qualitative direction-relation matrix models suffer from a hard classification problem, we define two novel quantitative matrix models for cardinal directions across hierarchical levels, thereby facilitating multiscale soft classification: the order matrix and the coordinate matrix. The primary distinction between these two quantitative description matrix models and existing quantitative direction-relation models is that the new models are grounded in the directional tile partitioning of the direction matrix model, maintaining consistency with the aforementioned qualitative basic matrix and segmentation matrix. Unlike in traditional qualitative directional matrix models, the matrix elements in the new models do not denote discrete, rigidly classified values; rather, they represent soft-classified interval values that reflect the continuous distribution characteristics of the target objects.
The first layer of the quantitative model is the direction-relation order matrix (Level 3 in Figure 2). The order matrix encodes the order characteristics of targets within the same directional tile, with sequence parameters varying across different tiles. For reference lines or polygons, azimuths are employed to capture the sequential features within combined directional tiles, while distances to the central axis are used to represent the sequential characteristics within positive directional tiles.
The order matrix addresses the soft classification of targets with different sequential features within the same tile. However, it remains ineffective for targets at different distances along the same radial direction. From a mathematical perspective, the angular representation corresponds to polar coordinates in a two-dimensional plane, where determining a point’s position requires both angular and radial coordinates. Directional models based solely on angular values neglect the distance parameter. Consequently, regardless of how fine-grained the directional model’s partitioning becomes, whether based on angles or sequential relationships, it cannot reflect the directional relationship differences between targets along the same radial line. On the other hand, this also reflects the constraints of distance relationships on directional relationship definitions, primarily manifested in the distinction of directional relationships between different points along the same radial direction: specifically, how varying distances from the directional reference origin influence the positional directional relationships [41].
To address the aforementioned issues with the order matrix, we define the direction-relation coordinate matrix model (Level 4 in Figure 2). The coordinate matrix employs the distribution range of the coordinates of cardinal directions as its matrix elements, meticulously capturing the target’s directional characteristics within the same tile. This model facilitates the discrimination of targets at multiple radial distances within a unified directional axis and functions as a computational parameter for directional order matrices and other qualitative direction-relation matrices. Unlike the existing direction-relation coordinate matrix [44], the new coordinate matrix incorporates directional tile partitioning derived from the qualitative direction-relation matrix model, thereby establishing a quantitative formal framework for qualitative directional relationships.
The new quantitative matrix models preserve structural congruence with the multi-scale qualitative direction-relation pyramid model, specifically in terms of the directional tiles and matrix configuration. These matrices thus function as quantitative analogs to the qualitative matrices, enhancing the model’s representational fidelity. By embedding these quantitative matrix models within the existing pyramid framework, a novel integrated pyramid model is constructed. Figure 2 illustrates the framework of the new pyramid model, wherein the first and second layers correspond to the basic and segmentation matrices, while the third and fourth layers represent the order matrix and coordinate matrix models. All four layers maintain uniform matrix structures and directional tile schemes, facilitating direct and lossless transformation between representations.
Directional relations constitute the predominant method for expressing relative spatial positioning in natural language. The extraction of directional relation components and the facilitation of directional relation queries via formalized modeling represent critical research directions for endowing computational systems with imprecise spatial reasoning abilities. The innovative pyramid model of directional relations synthesizes topological and metric relationships across hierarchical levels, establishing a multi-scale framework for primary direction representation that bridges quantitative and qualitative domains. This approach effectively addresses the requirements for multi-scale directional relation modeling in diverse application contexts.

3.2. Direction-Relation Order Matrix

3.2.1. Point as Reference Object

The direction-relation order matrix for the reference point is a 3 × 3 matrix that records the azimuthal trend of the target object relative to the reference point (Figure 3a).
Definition 1.
The order matrix of the reference point is a 3 × 3 matrix that captures the partition around the reference point R and records, for each tile, whether the target object falls into it (Equation (1)). The elements $N_{RP}$, $E_{RP}$, $S_{RP}$, $W_{RP}$, and $O_{RP}$ are binary values indicating whether the target object P intersects the corresponding tiles, and the elements $O_{NE}$, $O_{SE}$, $O_{SW}$, and $O_{NW}$ (Equation (2)) record the azimuthal trend in the combined directional tiles using the ratio of the coordinate offsets.
$$O_{dir}(R,P)=\begin{pmatrix} O_{NW} & N_{RP} & O_{NE} \\ W_{RP} & O_{RP} & E_{RP} \\ O_{SW} & S_{RP} & O_{SE} \end{pmatrix}\tag{1}$$
The element values of the northeast ($O_{NE}$), southeast ($O_{SE}$), southwest ($O_{SW}$), and northwest ($O_{NW}$) tiles in the order matrix represent the angle between the initial direction and the direction of the ray from the origin to the target point. The initial direction begins from the north direction (the starting direction for the northeast tile), proceeding clockwise through the east direction (the starting direction for the southeast tile), the south direction (the starting direction for the southwest tile), and the west direction (the starting direction for the northwest tile). As shown in Figure 3b, for reference point $R(x_R, y_R)$ and target point $P(x_P, y_P)$, the elements of the order matrix are defined as:
$$O_{NE}=\arctan\left|\frac{x_P-x_R}{y_P-y_R}\right|,\qquad O_{SE}=\arctan\left|\frac{y_P-y_R}{x_P-x_R}\right|,\qquad O_{SW}=\arctan\left|\frac{x_P-x_R}{y_P-y_R}\right|,\qquad O_{NW}=\arctan\left|\frac{y_P-y_R}{x_P-x_R}\right|\tag{2}$$
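As a concrete illustration, Equations (1)-(2) can be evaluated for point geometries as follows (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import math

def point_order_matrix(R, P):
    """Order-matrix elements for a point reference R and a point target P.

    Returns the binary cardinal/origin flags together with the combined-tile
    angle (in degrees) for whichever quadrant P falls into, per Eq. (2):
    the angle is measured from the tile's starting ray, clockwise."""
    xR, yR = R
    xP, yP = P
    dx, dy = xP - xR, yP - yR
    # Binary flags for the four cardinal rays and the origin tile
    flags = {
        "N": dx == 0 and dy > 0, "E": dy == 0 and dx > 0,
        "S": dx == 0 and dy < 0, "W": dy == 0 and dx < 0,
        "O": dx == 0 and dy == 0,
    }
    angles = {}
    if dx > 0 and dy > 0:    # northeast tile: angle from the north ray
        angles["NE"] = math.degrees(math.atan(abs(dx / dy)))
    elif dx > 0 and dy < 0:  # southeast tile: angle from the east ray
        angles["SE"] = math.degrees(math.atan(abs(dy / dx)))
    elif dx < 0 and dy < 0:  # southwest tile: angle from the south ray
        angles["SW"] = math.degrees(math.atan(abs(dx / dy)))
    elif dx < 0 and dy > 0:  # northwest tile: angle from the west ray
        angles["NW"] = math.degrees(math.atan(abs(dy / dx)))
    return flags, angles
```

For example, a target at (1, 1) relative to a reference at the origin lies in the northeast tile at 45 degrees from the north ray.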
For line and polygon targets, the order matrix considers only the ordinal distribution relationships, essentially recording the fan-shaped regions of directional angle distributions within the different directional tiles (Figure 4a). We therefore first calculate the tangent values of the angles formed between each node of the line or polygon boundary and the reference point, and then sort these values to determine their range. The combined-directional-tile elements of the order matrix for a polygon are represented by the value range corresponding to this fan-shaped region (Figure 4b).
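The node-sorting step can be sketched as follows (an illustrative fragment assuming all vertices fall within the northeast tile of the reference point; names are ours):

```python
import math

def combined_tile_range(R, vertices):
    """Order interval (fan-shaped region) of a line/polygon target whose
    vertices all lie in the NE tile of reference point R: compute each
    vertex's angle from the north ray, sort, and take the extremes."""
    xR, yR = R
    angles = sorted(
        math.degrees(math.atan(abs((x - xR) / (y - yR))))
        for x, y in vertices
    )
    return angles[0], angles[-1]  # the tile's order value range
```

The returned interval is exactly the soft-classified element stored in the combined directional tile, in contrast to the single binary value of the qualitative matrix.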

3.2.2. Lines/Polygons as Reference Object

To maintain consistency with the direction-relation matrix model, the order matrix for a reference line/polygon employs the MBR to define the directional tiles and the origin point of each tile. First, centered on the reference object's MBR, the entire space is divided into eight exterior regions and the MBR itself. The MBR region is then further subdivided according to specific requirements.
1. Exterior region
As with the reference point, the four exterior cardinal directional tiles of the reference line/polygon also need to be soft-classified. In accordance with cognitive habit, the trend of the directional order for a line/polygon reference target is shown in Figure 5a; we therefore adopt distance as the soft classification parameter for the positive direction tiles, while the angle factor is used as the soft classification parameter for the combined direction tiles.
Since the definition of directional tiles adopts the MBR coordinate frame of the reference object, the origin of each tile is defined differently. For the north tile, whose rotation direction is clockwise, the upper-left corner vertex C4 of the MBR is selected as its origin; similarly, the origins for the east, south, and west tiles are C1, C2, and C3, respectively. The origin points for the four combined directional tiles are likewise the MBR vertices of their respective sections: C1, C2, C3, and C4 for the northeast, southeast, southwest, and northwest tiles, respectively.
According to Figure 5b, the four corner points of the reference target's MBR are defined as $C_1(x_1, y_1)$, $C_2(x_2, y_2)$, $C_3(x_3, y_3)$, and $C_4(x_4, y_4)$. To distinguish directional tile definitions across different topological regions, we use distinct initial letters for each region; for example, E denotes exterior. $EO_N$, $EO_{NE}$, $EO_E$, $EO_{SE}$, $EO_S$, $EO_{SW}$, $EO_W$, and $EO_{NW}$ represent the order elements for the exterior north, northeast, east, southeast, south, southwest, west, and northwest tiles. $MBR_R$ is the exterior center, i.e., the MBR of the reference polygon. The order matrix for the exterior tiles is then defined as follows:
Definition 2.
The order matrix of the reference line/polygon is a 3 × 3 matrix that captures the partition around the reference object R and records, for each tile, whether the target object falls into it (Equation (3)). The order parameters $EO_N$, $EO_E$, $EO_S$, and $EO_W$ of the cardinal directional tiles are the distances from the target to the centerlines of the directional tiles (Equation (4)), reflecting the tendency in positive and negative terms; the order parameters $EO_{NE}$, $EO_{SE}$, $EO_{SW}$, and $EO_{NW}$ of the combined directional tiles are the angles from the target point to the starting ray of the tile (Equation (5)). $MBR_{RP}$ records whether the MBR of reference object R intersects the target P.
$$O_{dir}^{E}(R,P)=\begin{pmatrix} EO_{NW} & EO_{N} & EO_{NE} \\ EO_{W} & MBR_{RP} & EO_{E} \\ EO_{SW} & EO_{S} & EO_{SE} \end{pmatrix}\tag{3}$$
$$EO_{N}=x_P-x_4-\frac{x_1-x_4}{2},\qquad EO_{E}=y_P-y_2-\frac{y_1-y_2}{2},\qquad EO_{S}=x_P-x_3-\frac{x_2-x_3}{2},\qquad EO_{W}=y_P-y_3-\frac{y_4-y_3}{2}\tag{4}$$
$$EO_{NE}=\arctan\frac{x_E}{x_N}=\arctan\left|\frac{x_P-x_1}{y_P-y_1}\right|,\quad EO_{SE}=\arctan\frac{x_S}{x_E}=\arctan\left|\frac{y_P-y_2}{x_P-x_2}\right|,\quad EO_{SW}=\arctan\frac{x_W}{x_S}=\arctan\left|\frac{x_P-x_3}{y_P-y_3}\right|,\quad EO_{NW}=\arctan\frac{x_N}{x_W}=\arctan\left|\frac{y_P-y_4}{x_P-x_4}\right|\tag{5}$$
For line or polygon targets, we use the origin of each directional tile as the reference origin point to determine the order distribution range of the tile. For combined directional tiles, this manifests as a fan-shaped region (e.g., P1 in Figure 6), while, for cardinal directional tiles, it presents as a linear interval (e.g., P2 in Figure 6).
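Equations (4)-(5) for a point target outside the MBR can be sketched as follows (a minimal illustration under the corner labeling described above — $C_1$ upper-right, $C_2$ lower-right, $C_3$ lower-left, $C_4$ upper-left; function and parameter names are ours):

```python
import math

def exterior_order_params(mbr, P):
    """Exterior order parameters for a point target P given the reference
    MBR as (xmin, ymin, xmax, ymax). Cardinal tiles get the signed distance
    to the tile centerline (Eq. (4)); a combined tile gets the angle from
    its starting ray at the corresponding corner origin (Eq. (5))."""
    xmin, ymin, xmax, ymax = mbr
    x1, y1 = xmax, ymax  # C1 upper-right
    x2, y2 = xmax, ymin  # C2 lower-right
    x3, y3 = xmin, ymin  # C3 lower-left
    x4, y4 = xmin, ymax  # C4 upper-left
    xP, yP = P
    cardinal = {
        "N": xP - x4 - (x1 - x4) / 2,
        "E": yP - y2 - (y1 - y2) / 2,
        "S": xP - x3 - (x2 - x3) / 2,
        "W": yP - y3 - (y4 - y3) / 2,
    }
    # Only the combined tile actually containing P yields an angle
    deg_atan = lambda num, den: math.degrees(math.atan(abs(num / den)))
    combined = {
        "NE": deg_atan(xP - x1, yP - y1) if xP > x1 and yP > y1 else None,
        "SE": deg_atan(yP - y2, xP - x2) if xP > x2 and yP < y2 else None,
        "SW": deg_atan(xP - x3, yP - y3) if xP < x3 and yP < y3 else None,
        "NW": deg_atan(yP - y4, xP - x4) if xP < x4 and yP > y4 else None,
    }
    return cardinal, combined
```

For a line or polygon target, the same computation is applied node by node and the results are aggregated into ranges, as described above.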
2. MBR region
Based on the framework of the multiscale qualitative pyramid model, there are two ways of dividing directional tiles and defining matrices for the MBR region. If we focus on the overall distribution of the target and ignore the topological division of the reference target, taking the center of the MBR as the reference point divides the MBR into four ray directions and four combined directional tiles, and the corresponding order matrix is similar in form to the order matrix for a reference point. If we are concerned with the local boundary distribution of the target, we must first perform topological division and then construct separate order matrices for the different topological regions.
(1) MBR overall order matrix
The overall order matrix focuses on the overall distribution range of the target relative to the reference line/polygon, ignoring the effect of topological division, and divides the MBR into four ray directions and four combined directional tiles with the MBR center $O_C$ as the reference point (Figure 7); the MBR overall order matrix is similar to the order matrix of the reference point defined in Section 3.2.1.
Definition 3.
The MBR overall order matrix of a reference line/polygon is a 3 × 3 matrix that captures the partition around the center $O_C$ of the reference line/polygon's MBR region and records, for each tile, whether the target object falls into it (Equation (6)). The elements $N_{O_C P}$, $E_{O_C P}$, $S_{O_C P}$, $W_{O_C P}$, and $O_{CP}$ record the intersection between target P and the four principal direction rays centered at $O_C$ and at $O_C$ itself, following a method similar to that of the basic matrix. The order parameters of the combined directional tiles, $MO_{NE}$, $MO_{SE}$, $MO_{SW}$, and $MO_{NW}$, are calculated with the MBR center $O_C$ as the origin (refer to Equation (2)).
$$O_{dir}^{M}(R,P)=\begin{pmatrix} MO_{NW} & N_{O_C P} & MO_{NE} \\ W_{O_C P} & O_{CP} & E_{O_C P} \\ MO_{SW} & S_{O_C P} & MO_{SE} \end{pmatrix}\tag{6}$$
The computational methodology for line or polygon targets follows the same approach as for reference points.
(2) Local order matrix
If the target is mainly distributed in the MBR region and the local boundary influence of the reference target is to be considered, then, based on the framework of the local matrix in the pyramid model, the MBR region is divided into three topological parts (the MBR exterior, the boundary, and the interior) and the corresponding local order matrices are constructed.
The directional tiles of the three regions are defined as follows:
  • MBR exterior
This region lies outside the reference polygon but within the MBR. Its direction-relation characteristics are influenced by the local boundary of the reference object, so projection distances are defined with respect to the four cardinal directions. Following the local direction matrix [43], the boundary of the reference object is divided into four segments along the cardinal directions: east, south, west, and north. The boundary segment for a cardinal direction consists of the boundary points of the reference object that are extremal in that direction. The projection distance of a target point is defined as its projection distance to the corresponding cardinal direction boundary segment (Figure 8a). As shown in Figure 8b, target P has only a north projection distance and lies north of the reference object's boundary, while target Q has both a north and an east projection distance and lies northeast of it.
Let the projection distances of the target point in the four cardinal directions be $D_{PN}$, $D_{PE}$, $D_{PS}$, and $D_{PW}$, and let R denote the reference object; the order matrix of the MBR exterior is then:
Definition 4.
The MBR exterior order matrix is a 3 × 3 matrix that captures the partition of the exterior area around the reference line/polygon and records the directional relationship between the target object and the neighboring local boundary of the reference object (Equation (7)). Each combined directional element is the ratio of two projection distances; for a cardinal direction ray region, the rule is that the element of that cardinal direction is one if its projection distance is non-zero.
$$O_{dir}^{ME}(R,P)=\begin{pmatrix} \dfrac{D_{PW}}{D_{PN}} & D_{PN} & \dfrac{D_{PN}}{D_{PE}} \\[4pt] D_{PW} & R \cap P & D_{PE} \\[4pt] \dfrac{D_{PS}}{D_{PW}} & D_{PS} & \dfrac{D_{PE}}{D_{PS}} \end{pmatrix} \tag{7}$$
For line or polygon targets, projection distances of boundary node points are first calculated point by point, followed by statistical analysis of the projection distance ranges within different directional tiles.
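As a hedged sketch of Definition 4, the matrix can be assembled once the four projection distances of a target point are available. Measuring the distances against the local boundary (Figure 8) is assumed to be done upstream; the ratio layout follows Equation (7), and the zero-division guard and function name are our additions, not the paper's implementation.

```python
# Hedged sketch of Definition 4: assemble the MBR-exterior order matrix from a target
# point's four projection distances. The ratio layout follows Equation (7); the
# zero-division guard and function name are illustrative additions.

def mbr_exterior_order_matrix(d_n, d_e, d_s, d_w, intersects=0):
    ratio = lambda a, b: a / b if a > 0 and b > 0 else 0.0   # combined-tile element
    card = lambda d: 1 if d > 0 else 0                       # cardinal element: non-zero distance?
    return [
        [ratio(d_w, d_n), card(d_n), ratio(d_n, d_e)],
        [card(d_w),       intersects, card(d_e)],
        [ratio(d_s, d_w), card(d_s), ratio(d_e, d_s)],
    ]

# A target like Q in Figure 8b has only north and east projection distances
print(mbr_exterior_order_matrix(d_n=4.0, d_e=2.0, d_s=0.0, d_w=0.0))
# → [[0.0, 1, 2.0], [0, 0, 1], [0.0, 0, 0.0]]
```

For a line or polygon target, this computation would be repeated per boundary node, as described above.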
  • Boundary
When the target object falls into the boundary or interior region of a reference concave polygon, using the centroid or the MBR center as the reference point to divide the directional tiles may place the center outside the reference object [41]. Therefore, the interior center $O_R$ [41,44], computed with the skeleton-line algorithm [45,46], is used as the reference point to define the boundary order matrix and the interior order matrix.
The boundary of a reference line consists of the line's two end nodes; the boundary of a reference polygon is its boundary line. Figure 9 shows the directional tiles of the boundary matrix for a reference polygon. The initial letter B denotes boundary, with corresponding boundary directional tiles $BN_{O_R}$ (boundary north), $BE_{O_R}$ (boundary east), $BS_{O_R}$ (boundary south), $BW_{O_R}$ (boundary west), and so on. $B_{O_R}$ is the boundary center. $BO_{NE}$, $BO_{SE}$, $BO_{SW}$, and $BO_{NW}$ represent the order directional elements for the boundary northeast, southeast, southwest, and northwest tiles.
Definition 5.
The boundary order matrix is a 3 × 3 matrix that captures the directional partition for the boundary of the reference line/polygon (Equation (8)). $B_{O_R}$ denotes the interior of the reference object R. The elements $BN_{O_R} \cap P$, $BE_{O_R} \cap P$, $BS_{O_R} \cap P$, $BW_{O_R} \cap P$, and $B_{O_R} \cap P$ are binary values indicating whether the target object P intersects the corresponding tiles, while the elements $BO_{NE}$, $BO_{SE}$, $BO_{SW}$, and $BO_{NW}$ record the sequential characteristics of the boundary tiles using the angles between the target points on the boundary and the original direction (with the interior center $O_R$ as the reference point). The order parameters are calculated in the same way as for a reference point.
$$O_{dir}^{B}(R,P)=\begin{pmatrix} BO_{NW} & BN_{O_R} \cap P & BO_{NE} \\ BW_{O_R} \cap P & B_{O_R} \cap P & BE_{O_R} \cap P \\ BO_{SW} & BS_{O_R} \cap P & BO_{SE} \end{pmatrix} \tag{8}$$
  • Interior
For line or polygon targets, the boundary and interior matrices are calculated with the interior center as the origin, employing the same computational methodology as for a reference point.
The partition of directional tiles facilitates the construction of a 3 × 3 matrix that captures the order characteristics within the reference polygon (Figure 10). The initial letter I denotes interior, with corresponding interior directional tiles $IN_{O_R}$ (interior north), $IE_{O_R}$ (interior east), $IS_{O_R}$ (interior south), $IW_{O_R}$ (interior west), and so on. $O_R$ is the interior center of the reference polygon and also the center of the interior region. $IO_{NE}$, $IO_{SE}$, $IO_{SW}$, and $IO_{NW}$ represent the order directional elements for the northeast, southeast, southwest, and northwest tiles.
Definition 6.
The interior order matrix is a 3 × 3 matrix that captures the directional partition for the interior of the reference line/polygon (Equation (9)). The elements $IN_{O_R} \cap P$, $IE_{O_R} \cap P$, $IS_{O_R} \cap P$, $IW_{O_R} \cap P$, and $O_R \cap P$ are binary values indicating whether the target P intersects the corresponding tiles, and the elements $IO_{NE}$, $IO_{SE}$, $IO_{SW}$, and $IO_{NW}$ record the azimuthal trend in the combined directional tiles using the angles between the internal target points and the original direction (with the interior center as the reference point).
$$O_{dir}^{I}(R,P)=\begin{pmatrix} IO_{NW} & IN_{O_R} \cap P & IO_{NE} \\ IW_{O_R} \cap P & O_R \cap P & IE_{O_R} \cap P \\ IO_{SW} & IS_{O_R} \cap P & IO_{SE} \end{pmatrix} \tag{9}$$

3.3. Direction-Relation Coordinate Matrix

3.3.1. Point as Reference Object

Based on the preceding analysis, the coordinate description can reflect projection values along different cardinal directions. Therefore, four cardinal directions are sufficient to comprehensively represent directional relationships in two-dimensional space, denoted as $D_4 = \{N, E, S, W\}$. The positive X-axis aligns with east and its negative counterpart with west; the positive Y-axis aligns with north and its negative counterpart with south. The four cardinal directions thus divide the two coordinate axes into two pairs of opposing directions. The directional coordinates for a single target point are defined as follows:
Definition 7.
Given a reference point  R   with two-dimensional coordinates  ( x R , y R )   and a target point P with coordinates  ( x P , y P ) , the directional coordinate of P is represented by the directional coordinate quaternion  x N , x E , x S , x W . The cardinal directional coordinates are calculated as Equation (10):
$$\begin{cases} y_P - y_R > 0: & x_N = y_P - y_R,\quad x_S = 0 \\ y_P - y_R < 0: & x_N = 0,\quad x_S = y_R - y_P \\ y_P - y_R = 0: & x_N = x_S = 0 \\ x_P - x_R > 0: & x_E = x_P - x_R,\quad x_W = 0 \\ x_P - x_R < 0: & x_E = 0,\quad x_W = x_R - x_P \\ x_P - x_R = 0: & x_E = x_W = 0 \end{cases} \tag{10}$$
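Equation (10) can be sketched directly in code; a minimal Python illustration, assuming planar map coordinates (the function name is ours, while the quaternion layout $(x_N, x_E, x_S, x_W)$ follows Definition 7):

```python
# Minimal sketch of Equation (10): directional coordinate quaternion of a target
# point relative to a reference point. All offsets are stored as positive magnitudes.

def directional_coordinates(ref, target):
    """Return (x_N, x_E, x_S, x_W) for target point w.r.t. reference point."""
    xR, yR = ref
    xP, yP = target
    dy, dx = yP - yR, xP - xR
    x_N = dy if dy > 0 else 0.0    # north offset only when the target lies north
    x_S = -dy if dy < 0 else 0.0   # south offset as a positive magnitude
    x_E = dx if dx > 0 else 0.0
    x_W = -dx if dx < 0 else 0.0
    return (x_N, x_E, x_S, x_W)

print(directional_coordinates((0, 0), (3, 5)))   # → (5, 3, 0.0, 0.0)
```

At most one of each opposing pair ($x_N$/$x_S$, $x_E$/$x_W$) is non-zero, which is exactly the soft-classification property the coordinate matrix builds on.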
For linear or polygon targets whose distribution spans multiple directional tiles, we define a coordinate matrix corresponding to the qualitative direction-relation matrix (Definition 8).
Definition 8.
The directional coordinates for line/polygon targets begin with equidistant discretization, converting the object into a set of target points. Given a target object  P   with a set of two-dimensional discrete point coordinates  { x i , y i ,   i = 1 ,   2 ,   3 , ,   n }   and a reference object with the coordinate  ( x R ,   y R ) , the directional coordinate matrix is represented by Equation (11).
$$C_{dir}(R,P)=\begin{pmatrix} C_{NW} & C_{N} & C_{NE} \\ C_{W} & R \cap P & C_{E} \\ C_{SW} & C_{S} & C_{SE} \end{pmatrix} \tag{11}$$
where the coordinate parameters for different directional tiles are calculated as follows.
The coordinates for the four cardinal direction tiles are:
$C_{N}: \{x_{Ni},\ i = 1, 2, \dots, n_{N}\}$;
$C_{E}: \{x_{Ei},\ i = 1, 2, \dots, n_{E}\}$;
$C_{S}: \{x_{Si},\ i = 1, 2, \dots, n_{S}\}$;
$C_{W}: \{x_{Wi},\ i = 1, 2, \dots, n_{W}\}$.
The four parameters n N , n E , n S , and n W are the numbers of discretized points within different directional tiles. Consequently, the elements in the coordinate matrix of linear and areal features contain directional coordinate data comprising multiple coordinate values. During spatial analysis and semantic extraction, these arrays can be directly utilized for computation, or alternatively, they can be transformed into mean values or other statistical indicators.
The coordinates for the four combined direction tiles are:
$C_{NE}: \{(x_{Ni}, x_{Ei}),\ i = 1, 2, \dots, n_{NE}\}$;
$C_{SE}: \{(x_{Si}, x_{Ei}),\ i = 1, 2, \dots, n_{SE}\}$;
$C_{SW}: \{(x_{Si}, x_{Wi}),\ i = 1, 2, \dots, n_{SW}\}$;
$C_{NW}: \{(x_{Ni}, x_{Wi}),\ i = 1, 2, \dots, n_{NW}\}$.
The four parameters $n_{NE}$, $n_{SE}$, $n_{SW}$, and $n_{NW}$ are the numbers of discretized points within the combined directional tiles. Unlike the single coordinate sets of the cardinal directional tiles, combined directional coordinates incorporate coordinate sets from two cardinal directions.
For line/polygon targets, whose coordinates are distributed across a continuous interval, the coordinates are calculated for each node individually and then merged into a single coordinate interval value.
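Definition 8 can be sketched for a discretized target as grouping each sample point's directional coordinates by tile. The tile keys (`'N'`, `'NE'`, ...) and the helper name are illustrative assumptions, not the paper's implementation:

```python
# Sketch of Definition 8: collect the directional coordinates of a discretized
# line/polygon target per directional tile. Combined tiles store coordinate pairs
# (|dy|, |dx|); cardinal tiles store a single coordinate. Names are illustrative.
from collections import defaultdict

def coordinate_matrix(ref, points):
    xR, yR = ref
    tiles = defaultdict(list)
    for x, y in points:
        dy, dx = y - yR, x - xR
        ns = 'N' if dy > 0 else ('S' if dy < 0 else '')
        ew = 'E' if dx > 0 else ('W' if dx < 0 else '')
        if ns and ew:                           # combined tile: coordinate pair
            tiles[ns + ew].append((abs(dy), abs(dx)))
        elif ns or ew:                          # cardinal tile: single coordinate
            tiles[ns + ew].append(abs(dy) if ns else abs(dx))
        else:                                   # coincident with the reference point
            tiles['O'].append(0.0)
    return dict(tiles)

m = coordinate_matrix((0, 0), [(3, 5), (0, 2), (-1, -4)])
print(m)   # → {'NE': [(5, 3)], 'N': [2], 'SW': [(4, 1)]}
```

As the text notes, the per-tile coordinate lists can then be used directly or summarized into means or other statistics.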

3.3.2. Line/Polygon as Reference Object

The topological partitioning is initially performed based on reference lines or polygons. Drawing from the qualitative pyramid model concept, the topological partitioning first divides the space into two segments: the exterior and the Minimum Bounding Rectangle (MBR). Using directional sector origins and the MBR center as reference points, coordinate matrices are constructed for both the exterior region and the complete MBR. When the target object falls within the MBR and topological references need to be considered, the MBR is further subdivided into three distinct tiles: MBR exterior, boundary, and interior. Corresponding coordinate matrices are then constructed.
1. 
Exterior region
For regions outside the Minimum Bounding Rectangle (MBR), based on the partitioning concept of the direction-relation matrix, the space is divided into nine tiles with the MBR as the center. For objects falling within each directional sector, the directional coordinates are calculated using the boundary points of each tile as reference origins (Figure 11). Using the sets of directional coordinates of objects within each sector as matrix elements, the external directional relationship coordinate matrix is defined as Definition 9.
Definition 9.
The exterior coordinate matrix is a 3 × 3 matrix (Equation (12)) that records the distribution of cardinal directional coordinates for target objects across different exterior directional tiles. The coordinate matrix elements represent the coordinate sets of the target’s discrete point set in the directional tile along two cardinal directions.
$$C_{dir}^{E}(R,P)=\begin{pmatrix} EC_{NW} & EC_{N} & EC_{NE} \\ EC_{W} & MBR(R) \cap P & EC_{E} \\ EC_{SW} & EC_{S} & EC_{SE} \end{pmatrix} \tag{12}$$
Assuming that C 1 , C 2 , C 3 , and C 4 represent the boundary points of the four combined tiles—northeast (NE), southeast (SE), southwest (SW), and northwest (NW), respectively—the coordinate parameters are calculated as follows.
(1)
The coordinates for the four cardinal direction tiles are:
$EC_{N}: \{(x_{Ni}, x_{EWi}),\ i = 1, 2, \dots, n_{N}\}$, where $x_{Ni} = y_i - y_1$, $x_{EWi} = x_i - \frac{x_1 + x_4}{2}$;
$EC_{E}: \{(x_{Ei}, x_{NSi}),\ i = 1, 2, \dots, n_{E}\}$, where $x_{Ei} = x_i - x_1$, $x_{NSi} = y_i - \frac{y_1 + y_2}{2}$;
$EC_{S}: \{(x_{Si}, x_{EWi}),\ i = 1, 2, \dots, n_{S}\}$, where $x_{Si} = y_2 - y_i$, $x_{EWi} = x_i - \frac{x_2 + x_3}{2}$;
$EC_{W}: \{(x_{Wi}, x_{NSi}),\ i = 1, 2, \dots, n_{W}\}$, where $x_{Wi} = x_3 - x_i$, $x_{NSi} = y_i - \frac{y_3 + y_4}{2}$.
(2)
The coordinates for the four combined direction tiles are:
$EC_{NE}: \{(x_{Ni}, x_{Ei}),\ i = 1, 2, \dots, n_{NE}\}$, where $x_{Ni} = y_i - y_1$, $x_{Ei} = x_i - x_1$;
$EC_{SE}: \{(x_{Si}, x_{Ei}),\ i = 1, 2, \dots, n_{SE}\}$, where $x_{Si} = y_2 - y_i$, $x_{Ei} = x_i - x_2$;
$EC_{SW}: \{(x_{Si}, x_{Wi}),\ i = 1, 2, \dots, n_{SW}\}$, where $x_{Si} = y_3 - y_i$, $x_{Wi} = x_3 - x_i$;
$EC_{NW}: \{(x_{Ni}, x_{Wi}),\ i = 1, 2, \dots, n_{NW}\}$, where $x_{Ni} = y_i - y_4$, $x_{Wi} = x_4 - x_i$.
The discretization methods for line and polygon targets are similar to the reference point, with the key distinction being the different reference origins for each directional tile.
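The exterior-tile coordinate parameters above can be sketched for a single target point, assuming an axis-aligned MBR $(x_{min}, y_{min}, x_{max}, y_{max})$ whose corners $C_1 \dots C_4$ are NE, SE, SW, and NW as in the text. The tile test and the signed midpoint offset for cardinal tiles are our reading of the formulas; names are illustrative:

```python
# Sketch of the exterior coordinate parameters (Definition 9): classify a target
# point into one of the nine tiles around an MBR and compute its coordinates with
# the tile's corner (combined tiles) or edge midpoint (cardinal tiles) as origin.

def exterior_tile_coordinates(mbr, pt):
    xmin, ymin, xmax, ymax = mbr
    x, y = pt
    if x > xmax and y > ymax:                  # NE tile, origin at corner C1
        return 'NE', (y - ymax, x - xmax)
    if x > xmax and y < ymin:                  # SE tile, origin at corner C2
        return 'SE', (ymin - y, x - xmax)
    if x < xmin and y < ymin:                  # SW tile, origin at corner C3
        return 'SW', (ymin - y, xmin - x)
    if x < xmin and y > ymax:                  # NW tile, origin at corner C4
        return 'NW', (y - ymax, xmin - x)
    if y > ymax:                               # N tile: north offset plus signed
        return 'N', (y - ymax, x - (xmin + xmax) / 2)   # E-W offset from edge midpoint
    if y < ymin:
        return 'S', (ymin - y, x - (xmin + xmax) / 2)
    if x > xmax:
        return 'E', (x - xmax, y - (ymin + ymax) / 2)
    if x < xmin:
        return 'W', (xmin - x, y - (ymin + ymax) / 2)
    return 'MBR', (0.0, 0.0)                   # inside the MBR: handled separately

print(exterior_tile_coordinates((0, 0, 4, 3), (7, 8)))   # → ('NE', (5, 3))
```

Discretized line/polygon targets would simply run this per sample point and accumulate the per-tile coordinate sets.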
2. 
MBR region
(1)
MBR overall coordinate matrix
Taking the MBR region as a unified entity, the overall directional coordinates within the MBR are calculated using its centroid as the origin point.
Definition 10.
The MBR overall coordinate matrix is a 3 × 3 matrix (Equation (13)) that records the distribution of cardinal directional coordinates for target objects across different MBR directional tiles. The coordinate matrix elements represent the coordinate sets of the target’s discrete point set in the directional tile along different cardinal directions.
$$C_{dir}^{M}(R,P)=\begin{pmatrix} MC_{NW} & MC_{N} & MC_{NE} \\ MC_{W} & O_{C} \cap P & MC_{E} \\ MC_{SW} & MC_{S} & MC_{SE} \end{pmatrix} \tag{13}$$
Similar to point reference targets, coordinate parameters in the formula are calculated using the MBR center as the origin point.
(2)
Local coordinate matrix
Based on the topological reference, the Minimum Bounding Rectangle (MBR) is partitioned into three distinct regions, namely the MBR exterior, the boundary, and the interior, and the corresponding coordinate matrices are constructed.
Since directional relationships for the MBR exterior region are defined by local boundaries of the reference object, projection distances are directly utilized as directional coordinates.
Definition 11.
The MBR exterior coordinate matrix is a 3 × 3 matrix (Equation (14)) which represents the directional coordinates within the MBR exterior region. The elements of the matrix are the projection distances of the target on the local boundary of the reference object.
$$C_{dir}^{ME}(R,P)=\begin{pmatrix} MEC_{NW} & MEC_{N} & MEC_{NE} \\ MEC_{W} & R \cap P & MEC_{E} \\ MEC_{SW} & MEC_{S} & MEC_{SE} \end{pmatrix} \tag{14}$$
Definition 12.
The boundary coordinate matrix is a 3 × 3 matrix (Equation (15)) which records the distribution of cardinal directional coordinates on the boundary of the reference object. The origin of the boundary direction tiles is the interior center $O_R$.
$$C_{dir}^{B}(R,P)=\begin{pmatrix} BC_{NW} & BC_{N} & BC_{NE} \\ BC_{W} & B_{O_R} \cap P & BC_{E} \\ BC_{SW} & BC_{S} & BC_{SE} \end{pmatrix} \tag{15}$$
Definition 13.
The interior coordinate matrix is a 3 × 3 matrix (Equation (16)) that records the distribution of cardinal directional coordinates for the target within the directional tiles inside the reference object. The origin of the interior direction tiles is the interior center $O_R$.
$$C_{dir}^{I}(R,P)=\begin{pmatrix} IC_{NW} & IC_{N} & IC_{NE} \\ IC_{W} & O_R \cap P & IC_{E} \\ IC_{SW} & IC_{S} & IC_{SE} \end{pmatrix} \tag{16}$$
Taking the interior center as the origin, the calculation of the boundary coordinate matrix and the interior coordinate matrix follows the same approach as the coordinate matrix for a reference point.

3.4. Comparison and Conversion Between Order Matrix and Coordinate Matrix

The order matrix records the directional sequence characteristics of different targets within the same tile, serving as the quantitative computational basis for both the basic matrix and the segmentation matrix. The coordinate matrix records the coordinates of the target along different cardinal directions within the same directional tile, resolving the issue that order matrices cannot distinguish targets at different distances along the same ray direction. The coordinate matrix can also serve as the computational foundation for order matrices: for instance, we may first calculate the directional coordinates for each node point of a line target and then convert these directional coordinates into order-matrix elements. The following examples demonstrate the conversion between the two models.
As shown in Figure 12, four separate targets are positioned along an identical ray direction.
The coordinate matrices C d i r E R , P corresponding to these four targets are:
$$C_{dir}^{E}(R,P_1)=\begin{pmatrix} 0 & 0 & EC_{NE}=(x_{N1},x_{E1})=(5,\,3) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$C_{dir}^{E}(R,P_2)=\begin{pmatrix} 0 & 0 & EC_{NE}=(x_{N2},x_{E2})=(10,\,6) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$C_{dir}^{E}(R,P_3)=\begin{pmatrix} 0 & 0 & EC_{NE}=(x_{N3},x_{E3})=(15,\,9) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$C_{dir}^{E}(R,P_4)=\begin{pmatrix} 0 & 0 & EC_{NE}=(x_{N4},x_{E4})=(20,\,12) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
The corresponding order matrices O d i r E R , P are:
$$O_{dir}^{E}(R,P_1)=\begin{pmatrix} 0 & 0 & EO_{NE}=\dfrac{x_{E1}}{x_{N1}}=\dfrac{3}{5} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$O_{dir}^{E}(R,P_2)=\begin{pmatrix} 0 & 0 & EO_{NE}=\dfrac{x_{E2}}{x_{N2}}=\dfrac{6}{10}=\dfrac{3}{5} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$O_{dir}^{E}(R,P_3)=\begin{pmatrix} 0 & 0 & EO_{NE}=\dfrac{x_{E3}}{x_{N3}}=\dfrac{9}{15}=\dfrac{3}{5} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$O_{dir}^{E}(R,P_4)=\begin{pmatrix} 0 & 0 & EO_{NE}=\dfrac{x_{E4}}{x_{N4}}=\dfrac{12}{20}=\dfrac{3}{5} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
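The conversion can be expressed compactly in code: a combined-tile order element is the ratio of the two directional coordinates, so the four co-linear targets of Figure 12 collapse to a single order value while their coordinate-matrix entries remain distinct. The values are taken from the example above:

```python
# The coordinate-to-order conversion of Figure 12: all four co-linear targets share
# the same order element (the coordinate ratio), which is exactly the ambiguity the
# coordinate matrix resolves.

targets = {'P1': (5, 3), 'P2': (10, 6), 'P3': (15, 9), 'P4': (20, 12)}  # (x_N, x_E)
orders = {name: x_e / x_n for name, (x_n, x_e) in targets.items()}
print(orders)   # every target maps to the same order element 0.6 = 3/5
```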

4. Experimental Evaluations and Results

4.1. Expressive Power Analysis and Evaluation of the Accuracy of the Multi-Scale Quantitative Model Description

To validate the expressive capability and completeness of the new models, two sets of simulation data precision experiments were conducted. These experiments employed target object rotation, translation, and specialized scenarios to validate and assess the precision and comprehensiveness of the proposed model relative to existing models.

4.1.1. Rotate the Target Around the Reference Polygon

To evaluate the expressive capacity of the novel model, the initial experiment employed a configuration of target points executing circular motion around a reference target. This experimental protocol encompassed a group of dynamic trajectory points (Figure 13).
Figure 13 illustrates the scenario in which dynamic point P undergoes rotational motion around the reference object, the Xing Lake, generating 25 discrete positions distributed across the three tiles. The data presented in Table 1 demonstrate that target points located within the same tile yield equivalent direction-relation matrices, whereas their order matrices and coordinate matrices display unique element values, thereby facilitating discrimination between distinct spatial configurations within the same tile. Although the segmentation matrix provides partial discrimination capability for target points residing within the same basic tile, it lacks sufficient granularity to distinguish between target points that fall within the same segmented tile. For example, scenes 1–8 exhibit completely identical segmentation matrices, as do scenes 9–15, scenes 17–23, and scenes 24–25, highlighting this limitation.
The centroid-based matrix can also differentiate target points located within identical tiles. Nevertheless, because it employs the centroid as the unique reference point, it cannot be combined with the other qualitative matrices. Conversely, both the coordinate matrix and order matrix employ the framework of the qualitative matrix, facilitating bidirectional transformation between matrices across different scales. For instance, utilizing the coordinates of the target point 16, we can initially compute its coordinate matrix as follows:
$$C_{dir}^{E}=\begin{pmatrix} 0 & 0 & (5.20,\,2.30) \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
Using the coordinate matrix, compute its corresponding order matrix as demonstrated in Equation (26).
$$O_{dir}^{E}=\begin{pmatrix} 0 & 0 & \arctan\dfrac{5.20}{2.30} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}=\begin{pmatrix} 0 & 0 & 66.14^{\circ} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \tag{26}$$
Finally, based on the order matrix, the segmentation matrix (Equation (27)) and the direction-relation matrix (Equation (28)) can be computed. This case illustrates that the new quantitative matrix models can be seamlessly integrated with qualitative matrices of varying scales, owing to the implementation of a standardized framework.
$$S_{dir}^{E}=\begin{pmatrix} 0 & 0 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \tag{27}$$
$$B_{dir}^{E}=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \tag{28}$$
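The downscaling chain of Equations (26)-(28) can be sketched as follows. The sub-sector count `n_seg` and the 1-based bin numbering are our assumptions (the paper's segmentation scheme may number sectors differently); only the angular order element and the binary basic element follow directly from the text:

```python
import math

# Hedged sketch of the coordinate -> order -> segmentation -> basic chain: a
# combined-tile coordinate pair gives an angular order element, which maps to an
# assumed equal-angle segmentation bin and a binary basic-matrix element.

def downscale(x_n, x_e, n_seg=8):
    angle = math.degrees(math.atan2(x_n, x_e))            # order element, from east axis
    seg_bin = min(int(angle // (90 / n_seg)) + 1, n_seg)  # assumed 1-based sub-sector index
    basic = 1 if (x_n > 0 or x_e > 0) else 0              # basic matrix: tile intersected?
    return angle, seg_bin, basic

angle, seg, basic = downscale(5.20, 2.30)
print(round(angle, 2), seg, basic)   # angle ≈ 66.14°, as in Equation (26)
```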
According to Table 1, the qualitative matrices (direction-relation matrix and segmentation matrix) yield identical values across all six scenes. By contrast, the quantitative matrices provide a more granular distinction, capturing subtle variations in target direction among the different scenes. Figure 14 visually illustrates the differences in discriminatory capacity and comparative performance among diverse qualitative and quantitative models. The result clearly demonstrates that the new quantitative models exhibit substantially greater discriminatory power than the qualitative models. While the centroid-based matrix demonstrates comparable discriminative performance to the novel models for this experiment, the aforementioned analysis reveals systematic deviations from the qualitative matrix framework.

4.1.2. Moving the Target Across the Reference Polygon

The order matrix emphasizes the sequential properties of directions, employing angular measurements and coordinate offsets to parameterize the sequential distinctions among targets. However, it is limited in its ability to differentiate directional variations along the same ray. By contrast, the coordinate matrix articulates directional attributes through spatial coordinates, concentrating on projection distances along various principal axes. It enables the discrimination of directional offsets along the same ray, though it does not directly capture sequential properties. From a theoretical standpoint, the coordinate matrix therefore provides greater directional descriptive accuracy than the order matrix. Likewise, the centroid matrix is incapable of differentiating between targets positioned along identical radial vectors.
The second experimental series focuses on comparative analysis of expression precision across three quantitative models along identical ray trajectories within distinct topological domains.
The target point P undergoes translation along a consistent ray trajectory from the southwest to the northeast tile (Figure 15). Throughout this process, the target point not only maintains its translation within the same directional tile but also traverses multiple topological domains. This movement enables the validation of the coordinate matrix’s soft classification performance.
As shown in Table 2, the centroid-based matrix fails to discriminate between distinct topological domains within the MBR region and cannot resolve targets positioned along identical rays within the same tile (e.g., P1–P3). While the order matrix successfully differentiates between topological domains, it exhibits similar limitations in resolving co-linear targets within the same tile. The experimental results conclusively demonstrate that the coordinate matrix resolves the directional feature representation problem for discrete targets positioned along identical rays.

4.2. Application Experiments

To further substantiate the viability of the proposed model and assess its scalability across multiple resolutions, we develop a natural language-driven fuzzy query system for directional relationships, enabling a wide range of imprecise directional relation queries.
The initial query function entails designating both the reference and target objects to compute the associated directional matrix outcomes. This functionality principally facilitates the calculation and comparative analysis of matrices across various hierarchical levels within the multi-scale model. As illustrated in Figure 15, benchmark reference objects and candidate targets are selected, and the system computes matrices across various hierarchical levels (Table 3).
The three scenes in the experiment employed the same reference object (Wuhan University), as shown in Figure 16, selecting three nearby targets (Wuhan Sports University, China University of Geosciences West Campus, and China University of Geosciences North Campus). The blue region represents reference targets, and the red region represents targets to be determined. As shown in Table 3, the basic matrix records the intersection between the target and the basic directional slice, while the segmentation matrix records the intersection between the target and the subdivided directional slice. The order matrix documents the sequential distribution range of targets across different directional tiles, while the coordinate matrix records the primary coordinate distribution range of targets within these tiles. These four hierarchical matrices achieve multi-level directional feature characterization. Compared to qualitative matrices, the two quantitative matrices precisely define the distribution range of target directional features, thereby resolving the hard classification bottleneck inherent in qualitative directional matrices.
The second function entails assessing the validity of user-submitted directional relationship assertions. For instance, when a user states, “Wuhan University is located to the southeast outside of East Lake,” the system computes directional attributes based on the specified reference and target entities to verify the statement’s accuracy. The final evaluation confirms the statement’s correctness—Wuhan University is unequivocally situated to the southeast of East Lake (Figure 17).
The third function involves the system identifying all target features that meet the specified directional relationship conditions. For example, the user selects East Lake as the reference object and sets the directional condition as “east”, and the query results are displayed in Figure 18.
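The third query function can be sketched minimally, assuming point targets and a reference object reduced to its MBR: a target satisfies "east" when it lies strictly east of the MBR within the east tile's north-south band. The data and function name are illustrative only, not the system's implementation:

```python
# Minimal sketch of the third query function: retrieve all candidate point targets
# that fall in the east directional tile of a reference MBR (xmin, ymin, xmax, ymax).

def query_direction_east(mbr, candidates):
    xmin, ymin, xmax, ymax = mbr
    return [name for name, (x, y) in candidates.items()
            if x > xmax and ymin <= y <= ymax]   # east of the MBR, within its N-S band

stations = {'A': (9, 1), 'B': (9, 7), 'C': (2, 1), 'D': (12, 2)}
print(query_direction_east((0, 0, 5, 4), stations))   # → ['A', 'D']
```

A strict tile test of this kind is what keeps the matrix-based results confined to "east" in the comparison below, whereas candidates like B, which lie northeast, are excluded.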
To evaluate the performance differences between LLMs and our models, the final experimental phase involved direct comparative analysis using both our geospatial query framework and established LLMs (Gemini3 & ChatGPT5.2). GeoJSON datasets containing target point coordinates and reference polygon geometries were input into the LLM architectures to identify all charging station locations positioned eastward of the reference polygon (Shahu Lake). Identical spatial query parameters were executed within our system, with comparative results visualized in Figure 19.
The experimental findings reveal pronounced disparities between our direction-relations matrix model (denoted by red markers) and LLM (ChatGPT/Gemini, represented by blue markers) in the eastward charging station retrieval task centered around Shahu Lake. Our model yields search outcomes that are densely clustered and strictly confined to the area directly east of Shahu Lake, thereby affirming its capacity to accurately capture spatial directional relationships and rigorously uphold the geodetic definition of “east.” Conversely, the outputs generated by the ChatGPT/Gemini model display a wider spatial distribution, extending into neighboring directions such as northeast and southeast, which highlights the inherent semantic generalization and directional ambiguity of large language models when tasked with precise orientation-based queries. These results indicate that the directional relationship matrix model offers a substantial advantage in scenarios necessitating exact directional searches, while LLMs may introduce extraneous results owing to their more flexible approach to spatial semantics.

5. Conclusions and Discussion

In this paper, we introduce two novel quantitative matrix models at different scales for characterizing cardinal directions, enabling soft classification of qualitative direction-relations at both order and coordinate levels. Unlike traditional qualitative matrix models that rely on hard classification, these quantitative approaches fundamentally resolve the issue of directional variability within the same directional tile, allowing for a more precise representation of cardinal directions. The order matrix model emphasizes the sequential characteristics of directional relations, capturing the order-based variation in targets across different directional segments, and its outputs can be efficiently transformed into various qualitative directional matrices. The coordinate matrix model, on the other hand, addresses the spatial distribution of targets aligned along the same line, an aspect not distinguishable by the order matrix, by employing directional coordinates for accurate depiction. As the most refined quantitative coordinate-based model, it not only differentiates targets sharing the same line of sight but also serves as a parameter source for order matrix calculations. Notably, in contrast to conventional angular quantitative methods for describing directional relationships, both models remain within the qualitative direction-relation framework, defined by four cardinal directions that encapsulate the spatial distribution characteristics of targets. These models provide a quantitative basis for qualitative direction-relation models and, when integrated with existing multi-scale qualitative directional pyramid models, facilitate the development of a new multi-scale qualitative directional pyramid model that bridges quantitative and qualitative descriptions.
The introduction of these two quantitative models significantly improves the fidelity of qualitative direction-relation descriptions and establishes a robust conversion mechanism from a quantitative vector to qualitative direction-relations, thereby addressing the computational challenges inherent in qualitative directional matrix analysis.
Based on the preceding discussion and analysis of experimental results, the primary advantages of the quantitative models proposed in this paper can be summarized as follows:
  • By integrating both order and coordinate quantitative parameters, the proposed models facilitate the soft classifications of qualitative directional relationships, effectively addressing the limitations of hard classification within the same directional tile. This approach achieves a significantly higher degree of accuracy compared to traditional qualitative description matrices.
  • The quantitative models not only enable highly accurate characterization of qualitative directional relationships but also serve as the computational parameters for other qualitative direction-relation matrices, thereby establishing a bridge from precise quantitative coordinate descriptions to qualitative directional semantics.
  • By integrating these two quantitative descriptive matrix models with the original multi-scale qualitative direction-relation pyramid model, we build a comprehensive directional relationship pyramid model that spans from quantitative to qualitative analysis, transitioning from precise coordinate-based descriptions to nuanced, fuzzy directional relationship semantics. This establishes a robust framework for the transformation of qualitative directional relationship semantics.
One major drawback of the two novel quantitative models is the complexity of discretely processing extended targets and the associated computational cost. In future work, we aim to improve both the practicality and computational efficiency of these models to better align with the demands of real-world applications. Additionally, this paper focuses on conventional formal models and their representational accuracy, without investigating integration with LLMs and other AI methods. Our prospective research involves integrating these novel frameworks with LLMs to augment their capacity for evaluating and inferring directional relations.

Author Contributions

Conceptualization, Xuehua Tang and Mei-Po Kwan; Methodology, Xuehua Tang and Yang Yu; Coding, Yang Yu and Linxuan Xie; Experiments, Xuehua Tang, Linxuan Xie, Yong Zhang and Binbin Lu; Writing—Original Draft Preparation, Xuehua Tang and Linxuan Xie; Writing—Review and Editing, Mei-Po Kwan, Kun Qin, Yong Zhang and Binbin Lu; Supervision, Mei-Po Kwan and Binbin Lu. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2023YFF0611904; the Fundamental Research Funds for the Central Universities, China, grant number 2042022dx0001 and 2042024kf0005; the National Natural Science Foundation of China, grant number 42571480.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nedas, K.A.; Egenhofer, M.J. Spatial-scene Similarity Queries. Trans. GIS 2008, 12, 661–681. [Google Scholar] [CrossRef]
  2. Bruns, H.T.; Egenhofer, M.J. Similarity of Spatial Scenes. In Proceedings of the 7th International Symposium on Spatial Data Handling, Delft, The Netherlands, 12–16 August 1996; Taylor & Francis: London, UK, 1998; pp. 31–42. [Google Scholar]
  3. Frank, A.U. Qualitative Spatial Reasoning: Cardinal Directions as an Example. Int. J. Geogr. Inf. Sci. 1996, 10, 269–290. [Google Scholar] [CrossRef]
  4. Goyal, R.K.; Egenhofer, M.J. Consistent Queries over Cardinal Directions across Different Levels of Detail. In Proceedings of the 11th International Workshop on Database and Expert Systems Applications, 4–8 September 2000; IEEE: London, UK, 2000; pp. 876–880. [Google Scholar]
  5. Li, B.; Fonseca, F. TDD: A Comprehensive Model for Qualitative Spatial Similarity Assessment. Spat. Cogn. Comput. 2006, 6, 31–62. [Google Scholar] [CrossRef]
  6. Zhu, R.; Janowicz, K.; Mai, G. Making Direction a First-class Citizen of Tobler’s First Law of Geography. Trans. GIS 2019, 23, 398–416. [Google Scholar] [CrossRef]
  7. Goyal, R.K.; Egenhofer, M.J. Similarity of Cardinal Directions. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2121, pp. 36–55. [Google Scholar]
  8. Yan, H.; Guo, R. A Formal Description Model of Directional Relationships Based on Voronoi Diagram. Geomat. Inf. Sci. Wuhan Univ. 2003, 28, 468–471. [Google Scholar]
  9. Du, S.; Wang, Q.; Yang, Y. A Qualitative Description Model of Detailed Direction Relations. J. Image Graph. 2004, 12, 1496–1503. [Google Scholar]
  10. Gong, X.; Xie, Z.; Zhou, L.; He, Z. A Two-Tuple Model Based Spatial Direction Similarity Measurement Method. Acta Geod. Cartogr. Sin. 2021, 50, 1705–1716. [Google Scholar]
  11. Goyal, R.K. Similarity Assessment for Cardinal Directions Between Extended Spatial Objects. Ph.D. Thesis, The University of Maine, Orono, ME, USA, 2000; pp. 57–61. [Google Scholar]
  12. Sun, W.; Ouyang, J.; Huo, L.; Li, S. Similarity of Direction Relations in Spatial Scenes. J. Comput. Inf. Syst. 2012, 8, 8589–8596. [Google Scholar]
  13. Chen, Z.; Zhou, L.; Gong, X.; Wu, L. A Quantitative Calculation Method of Spatial Direction Similarity Based on Direction Relation Matrix. Acta Geod. Cartogr. Sin. 2015, 44, 813. [Google Scholar]
  14. Li, P.; Liu, J.; Yan, H.; Lu, X. An Improved Model for Calculating the Similarity of Spatial Direction Based on Direction Relation Matrix. J. Geomat. Sci. Technol. 2018, 35, 216–220. [Google Scholar]
  15. Yan, H. Theoretical system and potential research issues of spatial similarity relations. Acta Geod. Cartogr. Sin. 2023, 52, 1962–1973. [Google Scholar]
  16. Egenhofer, M.J. Query Processing in Spatial-Query-by-Sketch. J. Vis. Lang. Comput. 1997, 8, 403–424. [Google Scholar] [CrossRef]
  17. Zhang, X.; Ai, T.; Stoter, J.; Zhao, X. Data Matching of Building Polygons at Multiple Map Scales Improved by Contextual Information and Relevance. ISPRS J. Photogramm. Remote Sens. 2014, 92, 147–163. [Google Scholar] [CrossRef]
  18. Chen, Z.; Zhang, D.; Xie, Z.; Wu, L. Spatial Scene Matching Based on Multiple-Level Relevance Feedback. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 1422–1428. [Google Scholar]
  19. Chen, J.; Shao, Q.; Deng, M.; Mei, X.; Hou, J. High-Resolution Remote Sensing Image Retrieval via Land-Feature Spatial Relation Matching. J. Remote Sens. 2016, 20, 397–408. [Google Scholar] [CrossRef]
  20. Chen, J.; Dai, X.; Zhou, X.; Sun, G.; Deng, M. Semantic Understanding of Geo-Objects’ Relationship in High Resolution Remote Sensing Image Driven by Dual LSTM. Natl. Remote Sens. Bull. 2021, 25, 1085–1094. [Google Scholar]
  21. Nong, Y.; Wang, J.; Zhao, X. Spatial Relationship Detection Method of Remote Sensing Objects. Acta Opt. Sin. 2021, 41, 212–217. [Google Scholar]
  22. Liu, W.; Li, S. Reasoning about Cardinal Directions between Extended Objects: The NP-Hardness Result. Artif. Intell. 2011, 175, 2155–2169. [Google Scholar] [CrossRef]
  23. Du, Y.; Liang, F.; Sun, Y. Integrating Spatial Relations into Case-Based Reasoning to Solve Geographic Problems. Knowl. Based Syst. 2012, 33, 111–123. [Google Scholar] [CrossRef]
  24. Kang, S.; Li, J.; Qu, S. A Qualitative Reasoning Method for Cardinal Directional Relations under Concave Landmark Referencing. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 24–30. [Google Scholar]
  25. Wang, M.; Wang, X.; Li, S.; Hao, Z. Reasoning with the Original Relations of the Basic 2D Rectangular Cardinal Direction Relation. J. Xi'an Jiaotong Univ. 2020, 54, 133–143. [Google Scholar]
  26. Lan, H.; Zhang, P. Question-Guided Spatial Relation Graph Reasoning Model for Visual Question Answering. J. Image Graph. 2022, 27, 2274–2286. [Google Scholar] [CrossRef]
  27. Xu, J. Formalizing Natural-language Spatial Relations between Linear Objects with Topological and Metric Properties. Int. J. Geogr. Inf. Sci. 2007, 21, 377–395. [Google Scholar] [CrossRef]
  28. Yan, H. Quantitative Relations between Spatial Similarity Degree and Map Scale Change of Individual Linear Objects in Multi-Scale Map Spaces. Geocarto Int. 2015, 30, 472–482. [Google Scholar] [CrossRef]
  29. Vasardani, M.; Winter, S.; Richter, K. Locating Place Names from Place Descriptions. Int. J. Geogr. Inf. Sci. 2013, 27, 2509–2532. [Google Scholar] [CrossRef]
  30. An, X.; Liu, P.; Jin, C.; Xu, D.; Wang, F. A Hand-drawn Map Retrieval Method Based on Open Area Spatial Direction Relation. Acta Geod. Cartogr. Sin. 2017, 46, 1899–1909. [Google Scholar]
  31. Ji, Y.; Gao, S.; Nie, Y.; Majić, I.; Janowicz, K. Foundation Models for Geospatial Reasoning: Assessing the Capabilities of Large Language Models in Understanding Geometries and Topological Spatial Relations. Int. J. Geogr. Inf. Sci. 2025, 39, 1866–1903. [Google Scholar] [CrossRef]
  32. Li, Z.; Zhou, W.; Chiang, Y.Y.; Chen, M. Geolm: Empowering Language Models for Geospatially Grounded Language Understanding. arXiv 2023, arXiv:2310.14478. [Google Scholar] [CrossRef]
  33. Cheng, A.C.; Yin, H.; Fu, Y.; Guo, Q.; Yang, R.; Kautz, J.; Wang, X.; Liu, S. Spatialrgpt: Grounded Spatial Reasoning in Vision-Language Models. Adv. Neural Inf. Process. Syst. 2024, 37, 135062–135093. [Google Scholar]
  34. Deng, M.; Li, Z. A Statistical Model for Directional Relations between Spatial Objects. Geoinformatica 2008, 12, 193–217. [Google Scholar] [CrossRef]
  35. Takemura, C.M.; Cesar, R.M., Jr.; Bloch, I. Modeling and Measuring the Spatial Relation “along”: Regions, Contours and Fuzzy Sets. Pattern Recognit. 2012, 45, 757–766. [Google Scholar] [CrossRef]
  36. Lynch, K. The Image of the City; MIT Press: Cambridge, MA, USA, 1960; p. 208. [Google Scholar]
  37. Cao, H.; Chen, J.; Du, D. Qualitative Extension Description for Cardinal Directions of Spatial Objects. Acta Geod. Cartogr. Sin. 2001, 30, 162–167. [Google Scholar]
  38. Haar, R. Computational Models of Spatial Relations; University of Maryland: College Park, MD, USA, 1976. [Google Scholar]
  39. Peuquet, D.; Zhang, C. An Algorithm to Determine the Directional Relationship between Arbitrarily-Shaped Polygons in the Plane. Pattern Recognit. 1987, 20, 65–74. [Google Scholar] [CrossRef]
  40. Papadias, D.; Theodoridis, Y. Spatial Relations, Minimum Bounding Rectangles, and Spatial Data Structures. Int. J. Geogr. Inf. Sci. 1997, 11, 111–138. [Google Scholar] [CrossRef]
  41. Tang, X.; Qin, K.; Meng, L. A Qualitative Matrix Model of Direction-Relation Based on Topological Reference. Acta Geod. Cartogr. Sin. 2014, 43, 396–403. [Google Scholar]
  42. Kulik, L.; Klippel, A. Reasoning about Cardinal Directions Using Grids as Qualitative Geographic Coordinates. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1999; Volume 1661, pp. 205–220. [Google Scholar]
  43. Tang, X.; Kwan, M.; Yu, Y.; Xie, L.; Qin, K.; Zhang, T. A Multiscale Pyramid Model of Cardinal Directions for Different Scenarios. Trans. GIS 2025, 29, e70010. [Google Scholar] [CrossRef]
  44. Tang, X.; Meng, L.; Qin, K. A Coordinate-Based Quantitative Directional Relations Model. In Proceedings of the International Symposium on Computational Intelligence and Design, Piscataway, NJ, USA, 12–14 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 483–488. [Google Scholar]
  45. Chen, T.; Ai, T. Automatic Extraction of Skeleton and Center of Area Feature. Geomat. Inf. Sci. Wuhan Univ. 2004, 29, 443–446. [Google Scholar]
  46. Lu, W.; Ai, T. Center Point Extraction of Simple Area Object Using Triangulation Skeleton Graph. Geomat. Inf. Sci. Wuhan Univ. 2020, 45, 337–343. [Google Scholar]
Figure 1. The centroid matrix model [10].
Figure 2. The framework of the new multiscale pyramid model for cardinal directions.
Figure 3. The order matrix for the reference point: (a) directional tiles of the order matrix for the reference point; (b) the element values in the order matrix.
Figure 4. The order matrix for a polygon: (a) azimuth range for the reference point; (b) the order matrix.
Figure 5. Directional tiles of the exterior order matrix for the reference polygon: (a) directional tiles and (b) origin points of the exterior order matrix.
Figure 6. The order matrix values.
Figure 7. Directional tiles of the MBR overall order matrix for the reference polygon.
Figure 8. Projection distances of the MBR exterior region for (a) cardinal and (b) combined directions.
Figure 9. Directional tiles of the boundary order matrix for the reference polygon.
Figure 10. Directional tiles of the interior order matrix for the reference polygon.
Figure 11. The origin points of the exterior tiles.
Figure 12. Targets at varying distances along the same ray.
Figure 13. The dynamic point set around the reference object. Trajectory points (dots) are connected by solid lines showing movement paths.
Figure 14. Comparison of the accuracy of the direction-relation description across different models.
Figure 15. The target object P moves across the MBR of the reference object R along the same ray direction.
Figure 16. Multiscale direction-relation matrix of three adjacent targets: (a) Wuhan Sports University; (b) China University of Geosciences West Campus; (c) China University of Geosciences North Campus.
Figure 17. Correctness judgment of directional relationship features based on natural language.
Figure 18. Directional relationship query based on natural language.
Figure 19. Comparison of spatial query results between LLM and new quantitative model.
Table 1. Comparison of formal results from different models for the synthetic point set.
Scene | Direction-Relation Matrix | Segmentation Matrix | Order Matrix | Coordinate Matrix | Centroid-Based Matrix
(Each 3 × 3 matrix is written row by row as [row 1; row 2; row 3]; "—" marks a cell merged with the row above in the original table.)
Scene 1 | [0 1 0; 0 0 0; 0 0 0] | [0 1 0; 0 0 0; 0 0 0] | [0 42.32 0; 0 0 0; 0 0 0] | [0 (42.32, 6.35) 0; 0 0 0; 0 0 0] | [0 308.90 0; 0 0 0; 0 0 0]
Scene 2 | — | — | [0 36.80 0; 0 0 0; 0 0 0] | [0 (36.80, 7.71) 0; 0 0 0; 0 0 0] | [0 313.98 0; 0 0 0; 0 0 0]
Scene 3 | — | — | [0 33.21 0; 0 0 0; 0 0 0] | [0 (33.21, 5.31) 0; 0 0 0; 0 0 0] | [0 314.91 0; 0 0 0; 0 0 0]
Scene 4 | — | — | [0 27.75 0; 0 0 0; 0 0 0] | [0 (27.75, 4.97) 0; 0 0 0; 0 0 0] | [0 319.74 0; 0 0 0; 0 0 0]
Scene 5 | — | — | [0 23.32 0; 0 0 0; 0 0 0] | [0 (23.32, 7.08) 0; 0 0 0; 0 0 0] | [0 326.23 0; 0 0 0; 0 0 0]
Scene 6 | — | — | [0 18.44 0; 0 0 0; 0 0 0] | [0 (18.44, 5.39) 0; 0 0 0; 0 0 0] | [0 330.94 0; 0 0 0; 0 0 0]
Scene 7 | — | — | [0 8.73 0; 0 0 0; 0 0 0] | [0 (8.73, 7.16) 0; 0 0 0; 0 0 0] | [0 345.97 0; 0 0 0; 0 0 0]
Scene 8 | — | — | [0 3.61 0; 0 0 0; 0 0 0] | [0 (3.61, 3.88) 0; 0 0 0; 0 0 0] | [0 353.49 0; 0 0 0; 0 0 0]
Scene 9 | — | [0 4 0; 0 0 0; 0 0 0] | [0 2.52 0; 0 0 0; 0 0 0] | [0 (2.52, 4.90) 0; 0 0 0; 0 0 0] | [0 4.40 0; 0 0 0; 0 0 0]
Scene 10 | — | — | [0 9.53 0; 0 0 0; 0 0 0] | [0 (9.53, 4.63) 0; 0 0 0; 0 0 0] | [0 16.38 0; 0 0 0; 0 0 0]
Scene 11 | — | — | [0 15.98 0; 0 0 0; 0 0 0] | [0 (15.98, 3.84) 0; 0 0 0; 0 0 0] | [0 26.81 0; 0 0 0; 0 0 0]
Scene 12 | — | — | [0 20.39 0; 0 0 0; 0 0 0] | [0 (20.39, 1.32) 0; 0 0 0; 0 0 0] | [0 35.01 0; 0 0 0; 0 0 0]
Scene 13 | — | — | [0 28.54 0; 0 0 0; 0 0 0] | [0 (28.54, 2.71) 0; 0 0 0; 0 0 0] | [0 43.10 0; 0 0 0; 0 0 0]
Scene 14 | — | — | [0 34.81 0; 0 0 0; 0 0 0] | [0 (34.81, 1.82) 0; 0 0 0; 0 0 0] | [0 49.61 0; 0 0 0; 0 0 0]
Scene 15 | — | — | [0 40.66 0; 0 0 0; 0 0 0] | [0 (40.66, 2.53) 0; 0 0 0; 0 0 0] | [0 53.29 0; 0 0 0; 0 0 0]
Scene 16 | [0 0 1; 0 0 0; 0 0 0] | [0 0 4; 0 0 0; 0 0 0] | [0 0 66.14; 0 0 0; 0 0 0] | [0 0 (5.20, 2.30); 0 0 0; 0 0 0] | [0 0 59.18; 0 0 0; 0 0 0]
Scene 17 | [0 0 0; 0 0 1; 0 0 0] | [0 0 0; 0 0 1; 0 0 0] | [0 0 0; 0 0 26.29; 0 0 0] | [0 0 0; 0 0 (26.29, 12.94); 0 0 0] | [0 0 0; 0 0 65.68; 0 0 0]
Scene 18 | — | — | [0 0 0; 0 0 18.34; 0 0 0] | [0 0 0; 0 0 (18.34, 15.18); 0 0 0] | [0 0 0; 0 0 73.11; 0 0 0]
Scene 19 | — | — | [0 0 0; 0 0 12.01; 0 0 0] | [0 0 0; 0 0 (12.01, 18.59); 0 0 0] | [0 0 0; 0 0 79.35; 0 0 0]
Scene 20 | — | — | [0 0 0; 0 0 9.53; 0 0 0] | [0 0 0; 0 0 (9.53, 26.08); 0 0 0] | [0 0 0; 0 0 82.39; 0 0 0]
Scene 21 | — | — | [0 0 0; 0 0 9.25; 0 0 0] | [0 0 0; 0 0 (9.25, 30.77); 0 0 0] | [0 0 0; 0 0 83.06; 0 0 0]
Scene 22 | — | — | [0 0 0; 0 0 3.48; 0 0 0] | [0 0 0; 0 0 (3.48, 31.21); 0 0 0] | [0 0 0; 0 0 87.39; 0 0 0]
Scene 23 | — | — | [0 0 0; 0 0 2.52; 0 0 0] | [0 0 0; 0 0 (2.52, 37.03); 0 0 0] | [0 0 0; 0 0 88.24; 0 0 0]
Scene 24 | — | [0 0 0; 0 0 4; 0 0 0] | [0 0 0; 0 0 1.98; 0 0 0] | [0 0 0; 0 0 (1.98, 39.48); 0 0 0] | [0 0 0; 0 0 91.34; 0 0 0]
Scene 25 | — | — | [0 0 0; 0 0 11.85; 0 0 0] | [0 0 0; 0 0 (11.85, 39.54); 0 0 0] | [0 0 0; 0 0 97.96; 0 0 0]
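For readers who want to reproduce the element placement in Table 1, the sketch below shows one plausible way to populate the 3 × 3 direction tiles from a reference centroid and a target point. It is an illustrative assumption rather than the paper's implementation: the helper names (`azimuth_deg`, `tile_index`, `build_matrices`) are hypothetical, and the precise definitions of the order and coordinate parameters follow the paper's formulas, not this simplification.

```python
import math

# Illustrative sketch only (hypothetical helpers, not the paper's code).
# Tile layout (row-major): NW N NE / W O E / SW S SE.

def azimuth_deg(ref, tgt):
    """Clockwise azimuth from north of tgt as seen from ref, in [0, 360)."""
    dx, dy = tgt[0] - ref[0], tgt[1] - ref[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def tile_index(az):
    """Map an azimuth to one of the eight exterior tiles (row, col)."""
    # N, NE, E, SE, S, SW, W, NW, each spanning 45 degrees.
    tiles = [(0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
    return tiles[int(((az + 22.5) % 360.0) // 45.0)]

def build_matrices(ref, tgt):
    """Place quantitative values into order/coordinate/centroid matrices."""
    az = azimuth_deg(ref, tgt)
    dist = math.hypot(tgt[0] - ref[0], tgt[1] - ref[1])
    r, c = tile_index(az)
    order = [[0.0] * 3 for _ in range(3)]
    coord = [[0.0] * 3 for _ in range(3)]
    centroid = [[0.0] * 3 for _ in range(3)]
    # Assumed order parameter: signed angular offset from the tile's axis.
    axes = {(0, 1): 0, (0, 2): 45, (1, 2): 90, (2, 2): 135,
            (2, 1): 180, (2, 0): 225, (1, 0): 270, (0, 0): 315}
    offset = (az - axes[(r, c)] + 180.0) % 360.0 - 180.0
    order[r][c] = offset          # order matrix: angular variable
    coord[r][c] = (offset, dist)  # coordinate matrix: (angle, distance) pair
    centroid[r][c] = az           # centroid-based matrix: azimuth angle
    return order, coord, centroid
```

For a reference centroid at the origin and a target at (1, 1), the azimuth is 45°, so all three matrices place their values in the NE tile, mirroring how each scene in Table 1 fills a single tile.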
Table 2. Accuracy comparison with other models along the same ray direction.
Point | Order Matrix | Coordinate Matrix | Centroid-Based Matrix
(Matrices are written row by row as [row 1; row 2; row 3]; the superscripts E, ME, B, and I on O_dir and C_dir denote the exterior, MBR-exterior, boundary, and interior levels; "—" marks a cell merged with the row above in the original table.)
P1 | O_dir^E = [0 0 0; 0 0 0; 50 0 0] | C_dir^E = [0 0 0; 0 0 0; (0.8, 1) 0 0] | [0 0 0; 0 0 0; 4.01 0 0]
P2 | — | C_dir^E = [0 0 0; 0 0 0; (0.56, 0.7) 0 0] | —
P3 | — | C_dir^E = [0 0 0; 0 0 0; (0.16, 0.2) 0 0] | —
P4 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 0 0; 0.04 0 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 0; 0 0 0; (0.1, 0.8) 0 0] | [0 0 0; 0 4.01 0; 0 0 0]
P5 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 0 0; 0 0.4 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 0; 0 0 0; 0 (0.4, 0.5, 0.1) 0] | —
P6 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 0 0; 0 0.2 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 2; 0 0 0; 0 (0.2, 0.3, 0.3) 0] | —
P7 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 1 0; 0 0 0], O_dir^B = [0 0 0; 0 0 0; 37 0 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 0; 0 1 0; 0 0 0], C_dir^B = [0 0 0; 0 0 0; (0.12, 0.15) 0 0] | —
P8 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 1 0; 0 0 0], O_dir^B = [0 0 0; 0 1 0; 0 0 0], O_dir^I = [0 0 62; 0 0 0; 0 0 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 0; 0 1 0; 0 0 0], C_dir^B = [0 0 0; 0 1 0; 0 0 0], C_dir^I = [0 0 (0.08, 0.1); 0 0 0; 0 0 0] | [0 0 0; 0 0.87 0; 0 0 0]
P9 | O_dir^E = [0 0 0; 0 1 0; 0 0 0], O_dir^ME = [0 0 0; 0 1 0; 0 0 0], O_dir^B = [0 0 53; 0 0 0; 0 0 0] | C_dir^E = [0 0 0; 0 1 0; 0 0 0], C_dir^ME = [0 0 0; 0 1 0; 0 0 0], C_dir^B = [0 0 (0.32, 0.4); 0 0 0; 0 0 0] | —
P10 | O_dir^E = [0 0 50; 0 0 0; 0 0 0] | C_dir^E = [0 0 (0.08, 0.1); 0 0 0; 0 0 0] | [0 0 0.87; 0 0 0; 0 0 0]
P11 | — | C_dir^E = [0 0 (0.48, 0.6); 0 0 0; 0 0 0] | —
P12 | — | C_dir^E = [0 0 (0.8, 1); 0 0 0; 0 0 0] | —
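The exterior-level matrices in Table 2 rest on the standard nine-tile partition induced by the reference object's minimum bounding rectangle (MBR). A minimal sketch of that tile classification (the function name `mbr_tile` is hypothetical) is:

```python
def mbr_tile(point, mbr):
    """Classify a point into one of the nine direction tiles induced by the
    reference object's MBR, given as (xmin, ymin, xmax, ymax).
    Row 0 lies north of the MBR and row 2 south of it; column 0 lies west
    and column 2 east; (1, 1) is the tile covering the MBR itself."""
    x, y = point
    xmin, ymin, xmax, ymax = mbr
    col = 0 if x < xmin else (1 if x <= xmax else 2)
    row = 0 if y > ymax else (1 if y >= ymin else 2)
    return row, col
```

With an MBR of (0, 0, 10, 10), a point at (5, 15) falls in the north tile (0, 1), while any point inside the rectangle maps to the central tile (1, 1), matching the "1" entries in the center of the E-level matrices above.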
Table 3. Multi-scale direction-relation matrix comparison.
Scene | Basic Matrix | Segmentation Matrix | Order Matrix | Coordinate Matrix
(The matrix entries for Scenes 1–3 are rendered as images in the original article and are not reproduced here.)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Tang, X.; Kwan, M.-P.; Zhang, Y.; Yu, Y.; Xie, L.; Qin, K.; Lu, B. Multi-Scale Quantitative Direction-Relation Matrix for Cardinal Directions. ISPRS Int. J. Geo-Inf. 2026, 15, 11. https://doi.org/10.3390/ijgi15010011
