Article

A Selection Method of Massive Point Cluster Using the Delaunay Triangulation to Support Real-Time Visualization

1 School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
2 College of Urban and Environmental Sciences, Central China Normal University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2025, 14(4), 143; https://doi.org/10.3390/ijgi14040143
Submission received: 16 December 2024 / Revised: 20 March 2025 / Accepted: 24 March 2025 / Published: 26 March 2025

Abstract

One of the goals of map generalization is to achieve real-time visualization of massive entities while adapting to zoom-in/zoom-out conditions. Unlike traditional map generalization, this type of scaling operation does not simplify the data to produce a final result; it only outputs temporary visualization data. To meet the current visualization scale requirements, we insert a simplification algorithm prior to visualization to process the data. Taking point simplification as an example, this study proposes a novel massive point selection method and optimizes the entire algorithmic process, enabling the method to quickly and efficiently handle point selection for datasets ranging from tens of thousands to millions of points. The method employs a geometric construction, namely the Delaunay triangulation, to discover the distribution characteristics of the point cluster with real-time efficiency. Initially, we construct the Delaunay triangulation of the point cluster. Subsequently, we calculate the mean distance of each point as the selection feature. Finally, we incorporate a ‘fixed point’ concept to rank and stabilize the points during the selection process. Experimental results indicate that our method not only achieves commendable performance in preserving spatial structure, comparable to both traditional and state-of-the-art methods, but also demonstrates significantly higher efficiency. It can handle point selection for datasets ranging from tens of thousands to millions of points in a short time, thereby greatly enhancing the practicality of the algorithm in complex point selection scenarios.

1. Introduction

In the era of big data, as the precision and volume of map data continue to grow, applications such as smart cities and navigation systems increasingly demand multi-scale map information. Map generalization technology has therefore become an important means of processing large-volume, multi-scale geospatial data. Consequently, efficiently performing generalization operations on massive, multi-source map data has become a key and challenging issue in current map generalization research. Meanwhile, there is growing attention on the runtime of algorithms in map generalization [1,2]. As a complex spatial information abstraction process during scale transformation [3,4], map generalization must also enhance the algorithmic efficiency of its operators [5,6,7], such as selection, simplification, merging, collapse, and displacement, when confronted with geospatial big data. Given that map generalization involves multiple complex processes, including modeling, feature construction, and decision-making, achieving rapid map generalization remains a significant challenge.
In the visualization of point features, such as residential areas and road survey points, real-time map generalization is manifested in the selection of a subset from massive point data to adapt to current zoom-in/zoom-out requirements. The selection process must be highly efficient in order to quickly output a generalized result. Feature selection, the process of selecting a subset of geographic objects to represent the overall feature distribution in map space during scaling transformation, is regarded as the most intricate operator in map generalization due to its requirement for global decision-making [8,9]. Cartographers must consider a range of contextual conditions to determine which objects should be selected [10,11]. In particular, a point cluster—where a group of points are selected as a unit—has received significant attention as a potential bottleneck of running time in map generalization [12,13,14]. This is because many map features, such as buildings, electrical and communication stations, and labels, are often greatly abstracted as point features during the selection process, leading to frequent use of point cluster selection during map generalization. Moreover, in practical scenarios, the number of point features can typically reach tens of thousands, and in some national-level applications, even millions or more. Such large volumes of data significantly impact the speed of map generalization and pose a major challenge for designing efficient algorithms.
Methods for achieving point cluster generalization are mostly implemented through selection operations. Research in this area generally focuses on two main issues: quantity control and selection quality. The core of the quantity control problem is determining the number of points that should be selected by the algorithm during the map scaling process from large to small scales. Traditionally, the quantitative reference standard for this problem has relied mainly on Töpfer’s Radical Law [15], shown below. In addition, selection criteria based on fractal theory are also commonly used to control the quantity [16,17]. Compared with the quantity problem, more research has focused on the issue of selection quality. This problem can be simplified to “which ones to select”—that is, which points to retain during the generalization process to achieve a satisfactory simplification result while maintaining good visual performance. Numerous studies have implemented point selection using methods based on coordinates [18], point importance [19], the Voronoi diagram structure of points [20,21,22], and machine learning [23]. These methods can preserve the original morphological characteristics of point clusters in their respective aspects after the selection process. However, it is worth noting that most of these methods have been validated on small-volume datasets, and when faced with large-scale point cluster data—ranging from tens of thousands to millions of points—the complex modeling and iterative processes lead to enormous computational overhead, making it difficult to meet the demands of real-time map generalization.
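For reference, the basic form of the Radical Law relates the number of retained objects to the change in scale denominator. With $n_a$ objects shown at the source scale denominator $M_a$, the number $n_f$ of objects to retain at the target scale denominator $M_f$ is

$$
n_f = n_a \sqrt{\frac{M_a}{M_f}},
$$

so doubling the scale denominator retains roughly 71% of the objects (the full law additionally introduces constants accounting for symbol importance and exaggeration).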
In fact, many software applications (such as QGIS and ArcGIS) still encounter significant challenges when handling massive point cluster selection (data at the level of hundreds of thousands or millions). Therefore, against the backdrop of rapidly growing geospatial big data, algorithms with high computational complexity are struggling to meet the needs for fast map generalization. To address this problem, we propose an improved method for massive point clusters based on Delaunay triangulations. By analyzing the Voronoi-diagram-area-based method proposed by Ai and Liu [20], we investigated the workflow of the algorithm, identified the components that consume significant computational resources, and optimized them using a Delaunay triangulation. In map generalization research, constructing Delaunay triangulations for geographic features is a commonly employed solution. Owing to its excellent connectivity and effective spatial partitioning properties, the Delaunay triangulation has been applied in several studies on the generalization of polyline and polygon features [24,25], and it offers advantages over other methods in eliminating shape details. Based on this, we developed a point cluster selection method capable of handling massive data.
This study contributes significantly to the existing literature in the following ways:
(1) Realizes a more efficient method for point cluster selection, achieving satisfactory results.
(2) Presents a method for selecting large-volume point clusters (ranging from tens of thousands to millions of points) for the first time.
(3) Compares our method with traditional methods, machine learning techniques, and professional software to evaluate its performance.
The remainder of this paper is organized as follows. Section 2 introduces and summarizes the current research on point cluster generalization. Section 3 outlines the method used to construct our selection method based on the Delaunay triangulation. Section 4 presents and discusses the experimental results. Finally, Section 5 concludes our work and highlights future directions.

2. Related Works

Current research on algorithms for point cluster generalization can be roughly divided into three categories. The first category mainly considers the spatial distribution characteristics of the original point clusters during the selection process, including the preservation of density features, geometric distribution, and topological characteristics. For example, Wu [18] employed multi-level nesting of the convex hulls of point clusters along with polyline simplification algorithms to effectively preserve the contour information and distribution characteristics of the original clusters; De Berg et al. [26] used an iterative algorithm to preserve, as far as possible, the cluster contours and the number of clusters in the original point cluster; and Qian et al. [27] proposed an algorithm that transforms the representation of points from coordinate space to a circle feature space, in which the centroids of point clusters are computed and clustered to preserve the distribution center, distribution range, and clustering characteristics of the original data, thus enabling point cluster selection based on circle feature transformation.
The second category of algorithms achieves automatic generalization of point clusters primarily by evaluating the importance parameters of the points, and during the generalization process, these algorithms selectively consider certain distribution features. In this category, the design of the importance parameters is the key focus of the point cluster simplification process. To this end, some scholars have proposed effective parameters to control the selection of points. For instance, Langran and Poiker [19] proposed the residential area spatial ratio algorithm, regarding each residential area as a point feature and drawing a circle with radius $R_i = C / w_i$ centered at the residential area, where $w_i$ represents the weight of point $i$ and $C$ is a constant. In the selection process, the points are first sorted in descending order according to this parameter and then selected based on certain geometric criteria, thereby effectively preserving the neighbor relationships and distribution density of the original point clusters. Similarly, gravity model algorithms and nearest-neighbor index parameters have also been proposed to achieve point cluster selection. Van Kreveld et al. [28] introduced a circle-growth algorithm, in which each point is regarded as a circle centered at the original point with radius $R_i = C \cdot w_i$. In this method, the weight of a point is proportional to the circle’s radius, and the constant $C$ is chosen to ensure that no two circles within the cluster overlap. This proportionality ensures that points with higher weights are more likely to be retained during the generalization process. In addition, the local density and relative local density parameters proposed by Sadahiro [29], the density parameters extracted via quadtree by Burghardt [30], and the Voronoi diagram area of point features proposed by Ai and Liu [20] have all proven to be effective control parameters for achieving automatic generalization of point clusters [21,22].
The third category of algorithms realizes automatic point cluster selection through methods such as genetic algorithms and machine learning. Deng [31] first divided the point clusters into smaller clusters based on density, calculated the number of points that needed to be retained in each sub-cluster, and then further simplified the points using convex hull algorithms combined with genetic algorithms. Meanwhile, Cai [23] introduced a method that employs a Kohonen network to capture and map the features of points for generalization, thereby enabling the generalized result to preserve the original clusters’ density and internal texture.
Although existing algorithms have achieved good performance on their respective issues, research on point cluster generalization has been relatively limited in recent years. Moreover, most methods are highly complex and generally only applicable to small-volume point clusters. In addition, while the emergence of deep learning has led some scholars to employ graph neural networks for simplifying larger point clusters (in the order of tens of thousands of points), these approaches still require considerable time for feature extraction and network model training, and thus do not exhibit a clear efficiency advantage [12]. Consequently, it is essential to design an efficient algorithm for point cluster generalization that can be applied to massive point datasets. This study intends to employ a method based on the Delaunay triangulation to achieve automated generalization of massive data points. In fact, owing to its excellent spatial partitioning and connectivity capabilities, the Delaunay triangulation has already been applied in many effective generalization algorithms for polyline and polygon features [32,33,34,35]. Thus, combining the Delaunay triangulation with point feature generalization emerges as a strategy with considerable feasibility.

3. Methods

Point selection attempts to output a relatively important subset of points while removing minor points according to scale conditions. The whole process actually involves two questions, namely “How many points should be selected?” and “Which points should be selected?”. The first question is related to the selection condition, which depends on the scale in the current zoom-in/out context. The latter is related to the judgment of point importance. Usually, this decision requires a geometric model that captures the point distribution characteristics, as well as consideration of semantic information.

3.1. Framework

The framework of our proposed approach for point cluster selection consists of two main parts (Figure 1). First, for the point cluster at the initial scale, it is necessary to construct the convex hull. The convex hull construction process mainly involves three steps: (1) constructing the Delaunay triangulation for the original point cluster; (2) pruning edges of the Delaunay triangulation that violate the spatial proximity relationships between points; (3) fixing the points forming the convex hull. Second, the point features within the convex hull need to be simplified. This simplification process also mainly comprises three steps: (1) calculating a feature for each point to assess its importance; (2) ranking the point cluster and making a decision (i.e., retain or delete) for each point; (3) defining a termination condition for map generalization of the point cluster in each epoch. After completing all these steps, the point cluster selection result for each simplification epoch can be obtained.

3.2. Delaunay Triangulation for Point Cluster

The Delaunay triangulation plays a crucial role in map generalization for discovering spatial patterns through its neighborhood-connection characteristics. It can be used to detect spatial conflicts, guide displacement, aggregate neighboring polygons, and support other map generalization tasks. For geographical features that need to be abstracted on a map, this process is challenging without spatial relationships as guidance. Unlike entities with physical connections, such as road and river networks, point clusters lack tangible links. Therefore, determining spatial relationships between geographical features to support the multiscale map generalization of spatial objects becomes essential. As the Delaunay triangulation can comprehensively express the topological relationships between points, we focus on establishing proximity relationships between points by constructing the Delaunay triangulation (Figure 2).
Constructing the triangulation connects points that are close to each other through triangle edges. However, in the triangulation constructed in Figure 2, some excessively long edges are unavoidable in the peripheral areas of the triangulation. This results in erroneous connections, through triangle edges, between points that are spatially distant from each other. Following the Gestalt proximity principle [36], it is appropriate for two points to be connected only if their distance is less than the visual proximity threshold. Therefore, for the triangulation constructed in Figure 2, it is necessary to remove overly elongated edges. The threshold distance d for removing edges can be pre-set based on experience. A larger d makes the polygons formed by the triangulation closer to the convex hull of the point cluster, while a smaller d leads to deeper concavities in the generated polygons. Figure 3 illustrates the Delaunay triangulation of the point cluster after pruning. Another consideration is the impact of contextual features, such as roads and rivers, which may block connections between points.
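As an illustration of this construction-and-pruning step (the authors’ implementation is in C#; the following is a minimal Python sketch using scipy.spatial.Delaunay, and the threshold value shown is an arbitrary assumption):

import numpy as np
from scipy.spatial import Delaunay

def build_pruned_edges(points, d):
    """Construct a Delaunay triangulation and drop edges longer than the threshold d.

    points: (n, 2) array of point coordinates; d: visual proximity threshold (map units).
    Returns a set of index pairs (i, j), i < j, for the retained edges.
    """
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:  # each simplex lists the 3 vertex indices of a triangle
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((int(simplex[a]), int(simplex[b])))
            edges.add((i, j))
    # Keep only edges shorter than the proximity threshold (Gestalt proximity principle)
    return {(i, j) for i, j in edges if np.linalg.norm(points[i] - points[j]) < d}

# Example usage with a synthetic cluster; d = 15.0 is chosen purely for demonstration.
rng = np.random.default_rng(0)
pts = rng.random((200, 2)) * 100.0
pruned = build_pruned_edges(pts, d=15.0)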

3.3. Selection Range Construction

The primary idea behind map generalization of point clusters involves assessing the significance or competitive strength of the visual space associated with each point. Subsequently, points with higher importance are retained, while those with lower competitive strength are eliminated. Quantitatively, this importance or strength is represented as a specific feature of each point. For instance, the Voronoi cell area [20] serves this purpose, albeit at the expense of considerable time during Voronoi diagram construction. Since the objective of this study is to simplify massive point cluster data and reduce the associated time cost, we propose adopting an approach that is as computationally simple as possible: quantifying the competitive intensity of each point in the visual space using the mean length (ML) of connected Delaunay edges. A shorter ML indicates weaker competitive strength for a given point. As illustrated in Figure 4, the ML of the target point is defined as the mean length of its connected Delaunay edges, formulated as follows:
$$
ML = \frac{\sum_{i=1}^{N} l_i}{N},
$$
where $N$ is the number of Delaunay edges connected to the point and $l_i$ is the length of the $i$th connected edge.
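Continuing the earlier sketch (the helper build_pruned_edges and the data layout are assumptions, not the authors’ code), the ML feature can be computed per point as follows:

from collections import defaultdict
import numpy as np

def mean_edge_length(points, edges):
    """Compute ML: the mean length of the Delaunay edges incident to each point.

    points: (n, 2) array of coordinates; edges: iterable of (i, j) index pairs from
    the pruned triangulation. Returns a dict mapping point index -> ML; points with
    no retained edges receive no entry.
    """
    incident = defaultdict(list)
    for i, j in edges:
        length = float(np.linalg.norm(points[i] - points[j]))
        incident[i].append(length)
        incident[j].append(length)
    return {idx: sum(lengths) / len(lengths) for idx, lengths in incident.items()}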

3.4. Point Selection Method

Upon completing the construction of the pruned Delaunay triangulation and calculating the ML feature for each point, the points forming the convex hull are fixed and the process of point cluster selection is initiated. In this study, the fundamental principle behind the selection method is to remove points located at the densest locations. For any point in a cluster, density is an important factor affecting its visual impact. Generally, the denser the neighborhood of a point in geographical space, the lower its significance is considered to be; this is quantified by the ML, which signifies the impact of a point on the visual space. Meanwhile, to ensure that the point cluster is not oversimplified within any local region during the generalization process, a strategy is needed to maintain spatial density balance within the point cluster and to avoid the continuous deletion of adjacent points. The strategy adopted in this study is that whenever the point with the minimum ML value is deleted, all points connected to it through Delaunay edges are locked. The locked points no longer participate in the ordering of ML values in the current generalization epoch and are not deleted; in other words, these fixed points remain untouched in the current iteration. Subsequently, the next unfixed point with the minimum ML is identified, and the procedure repeats until all points are either deleted or fixed, marking the completion of one epoch of selection. The completion of a generalization epoch does not mean the end of the whole generalization operation: the ML values of the simplified point cluster can be recalculated for a new epoch, and further generalization can be performed based on the new ranking. The generalization operation can continue until the number of point features meets the needs of the cartographer. The specific generalization process is shown in Algorithm 1 (the process of point cluster generalization).
Algorithm 1: Inner Point Generalization by the Delaunay Triangulation
Input: A set of points P
Output: Set of deletion points D; set of fixed points F
Step 1
1.  T ← DelaunayTriangulation(P)        // Construct the Delaunay triangulation
2.  H ← ConvexHull(P)                   // Compute the convex hull
3.  I ← P \ H                           // Extract inner points (points not on the convex hull)
4.  For each point p ∈ I do:
5.      N(p) ← Neighbors(p, T)          // Determine the Delaunay neighbors of p
6.      d_avg(p) ← (1 / |N(p)|) · Σ_{q ∈ N(p)} distance(p, q)   // Mean distance (ML) of p
7.  End For
Step 2
8.  U ← I                               // U: set of unprocessed inner points
9.  D ← ∅, F ← ∅                        // Initialize the deletion and fixed sets
10. While U ≠ ∅ do:
11.     p_min ← argmin{ d_avg(p) | p ∈ U }   // Select the point with the minimum mean distance
12.     D ← D ∪ {p_min}                      // Mark p_min as a deletion point
13.     For each neighbor q ∈ N(p_min) do:
14.         F ← F ∪ {q}                      // Mark q as a fixed point
15.         U ← U \ {q}                      // Remove q from further processing
16.     End For
17.     U ← U \ {p_min}                      // Remove the deleted point from U
18. End While
19. Return D, F
The algorithmic strategy proposed in this study is an efficient and feasible point cluster generalization scheme. The proposed ML feature avoids additional Voronoi diagram construction operations, which effectively reduces the running time. Meanwhile, the mechanism that prevents local over-simplification allows the algorithm to thin the point cluster more uniformly and with better visual effect during the generalization process. These two advantages make the algorithm an effective solution for real-time point cluster simplification; a runnable sketch of one selection epoch is given below.
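For concreteness, the following minimal Python sketch implements one selection epoch corresponding to Algorithm 1; it reuses the hypothetical helpers build_pruned_edges and mean_edge_length introduced above and is not the authors’ C# implementation.

import numpy as np
from scipy.spatial import ConvexHull

def selection_epoch(points, edges, ml):
    """Run one epoch of inner-point selection (Step 2 of Algorithm 1).

    points: (n, 2) array of coordinates; edges: pruned Delaunay edges as (i, j) pairs;
    ml: dict mapping point index -> mean edge length. Returns (deleted, fixed) index sets.
    """
    hull = {int(v) for v in ConvexHull(points).vertices}   # convex-hull points are fixed up front
    neighbors = {i: set() for i in range(len(points))}
    for i, j in edges:
        neighbors[i].add(j)
        neighbors[j].add(i)

    unprocessed = {i for i in ml if i not in hull}          # only inner points compete
    deleted, fixed = set(), set(hull)
    while unprocessed:
        p_min = min(unprocessed, key=ml.__getitem__)        # smallest ML = densest location
        deleted.add(p_min)
        for q in neighbors[p_min]:                          # lock the neighbors of the deleted point
            fixed.add(q)
            unprocessed.discard(q)
        unprocessed.discard(p_min)
    return deleted, fixed

Successive epochs would remove the deleted points, rebuild the pruned triangulation, recompute the ML values, and repeat until the remaining number of points reaches the target count, for example one derived from the Radical Law.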

4. Experiments and Discussion

The experiments on massive point cluster selection supported by the Delaunay triangulation were implemented in C# on the Visual Studio 2012 platform under Windows 10. The hardware comprises an Intel Core i7-10750H processor, a six-core, twelve-thread CPU clocked at 2.6 GHz with a maximum turbo frequency of 5.0 GHz, together with 16 GB of DDR4 RAM and an NVIDIA GeForce RTX 2070 Super graphics card.

4.1. Experiment Data and Comparing Methods

4.1.1. Experiment Data

Three point cluster datasets are employed as experimental data. The first two point clusters were abstracted from individual dwelling buildings in the Luojiang and Jingyang districts of Sichuan Province (Figure 5), encompassing 16,028 and 16,466 points, respectively. The third point cluster consists of trajectory points from the urban area of Beijing (Figure 6), totaling 1,045,236 points. Abstracted dwelling building datasets are one of the most classical types of point cluster data, and the experimental data selected are appropriate for map generalization studies in terms of both data volume and distribution. Meanwhile, the trajectory point dataset contains a massive amount of data, and its generalization can effectively validate the computational efficiency of the proposed algorithm.

4.1.2. Comparing Methods

To assess the computational efficiency of our approach, we employ a state-of-the-art machine learning technique based on Graph Convolutional Networks (GCN) and the ArcEngine software (Version 10.2) for comparative analysis. These two algorithms were selected as comparison methods because they generally perform well in generalization results and are usually superior to other generalization methods. The specifics of these comparison methods are as follows:
  • Machine Learning [12]: This method is essentially a data-driven approach based on Graph Convolutional Networks (GCN), which uncovers the implicit rules of point cluster generalization by learning from data samples at two different map scales. The machine learning approach first converts the point clusters into a graph structure, where each point corresponds to a graph node and the edges are defined based on the Delaunay triangles connecting the points. The algorithm extracts various features of the points and inputs them into the constructed graph neural network model for training. The generalization results achieved by this method perform well in most aspects.
  • ArcEngine: The ArcEngine-based method is primarily implemented by constructing a Voronoi diagram for the point clusters. The algorithm builds the Voronoi diagram structure for the point clusters on the basis of the Delaunay triangulation, and then uses the area of the Voronoi cells as a control parameter to progressively delete and simplify the point clusters. This method is considered a classic selection algorithm.
Both of these algorithms are excellent solutions in the field of point cluster generalization research, and compared to earlier generalization algorithms, they can handle slightly larger-volume point cluster data (in the order of tens of thousands). Comparing the selection algorithm proposed in this study with these two outstanding methods provides a more intuitive demonstration of the efficiency advantages of our algorithm.

4.2. Efficiency Comparison

The driving force behind this study is achieving high efficiency. To substantiate this claim, we conducted comparisons between our method and other approaches, documenting the time consumed for each epoch, as presented in Table 1, Table 2 and Table 3. The time consumed for a single epoch is measured after processing all points, whether selected or unselected. Notably, since the Graph Convolutional Network (GCN) method relies on supervised learning with samples of selection results for each epoch individually, we specifically conducted an experiment for the first epoch (denoted as ‘Epoch 1’) and recorded its corresponding time consumption.
As illustrated in Table 1 and Table 2, the time consumed by our proposed method and the ArcEngine method gradually decreased as the number of points was reduced in the Jingyang and Luojiang districts. Notably, the high efficiency of our proposed method is evident in every epoch within these two cases. For instance, during the 1st epoch in the Jingyang district, our proposed method required only 0.283 s, a significant improvement over the 157.701 s needed by the ArcEngine method. This advantage of our method also holds as the data volume decreases. Although the GCN method only incurs time consumption for a single epoch, its training and predicting times of 35.448 s and 2.753 s, respectively, are significantly higher than the time consumed by our method.
In contrast to the results obtained with ten-thousand-point clusters, the time consumption of our method increases when processing millions of trajectory points. Nevertheless, it remains lower than the processing times observed with the ArcEngine and GCN methods when handling ten thousand building points. Furthermore, both the ArcEngine and GCN methods prove ineffective in processing millions of trajectory points. This outcome underscores the significant efficiency advantage of our proposed method over other approaches.

4.3. Visualization of Selection Result

Our proposed method has demonstrated remarkable efficiency in processing datasets comprising tens of thousands, and even millions, of points (Section 4.2). Subsequently, we aim to explore its generalized results from both global and local perspectives (Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14).
As depicted in the results of multiple epochs within the Luojiang district (Figure 7 and Figure 8) and the Jingyang district (Figure 9 and Figure 10), our proposed method consistently yields a nearly identical number and spatial distribution of points compared with the ArcEngine method. For instance, in the Luojiang district during the 9th epoch (Figure 7), our method selected 980 points, closely approximating the 1023 points chosen by the ArcEngine method. Additionally, the spatial distribution of the selected points in both methods closely mirrors that of the original point clusters. Furthermore, this pattern is evident not only in the 9th epoch but is also observed when comparing the 1st epoch results among our proposed method, the ArcEngine method, and the GCN method (Figure 11, Figure 12, Figure 13 and Figure 14).
From a local perspective (Figure 13 and Figure 14), the local structure of the points generalized by our proposed method closely aligns with that of other methods and the original structure before selection. In summary, our method consistently achieves satisfactory results in point cluster selection, comparable to other methods, as evidenced by spatial visualization. This is further demonstrated in the selection of millions of points, as illustrated in Figure 15.

4.4. Evaluation of Selection Result

Using dot density maps to visually assess the results of point cluster selection is a commonly employed method. Dot density maps assign gray values based on the point density in proximity to each pixel, thereby converting point clusters into images that accurately depict their overall visual representation. Dot density maps can be generated using ArcToolbox in ArcGIS. Additionally, the Structural Similarity Index (SSIM), introduced by Wang et al. [37], serves as a crucial quantitative metric for evaluating the similarity between two dot density maps. Hence, in this study, we employ dot density maps and the SSIM metric to assess the selection results of point clusters; the SSIM value for each epoch quantifies the similarity between the original point cluster and the point cluster selected during that specific epoch.
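The paper does not state which SSIM implementation was used; as an illustration only, assuming the two dot density maps have been rasterized to grayscale arrays of equal size, the comparison could be computed with scikit-image as follows:

import numpy as np
from skimage.metrics import structural_similarity

def dot_density_ssim(original_map, selected_map):
    """Compare two dot density maps (equal-sized grayscale arrays) with SSIM.

    Values close to 1 indicate that the selected point cluster preserves the
    density distribution of the original cluster.
    """
    data_range = float(original_map.max() - original_map.min())
    return structural_similarity(original_map, selected_map, data_range=data_range)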
During the generalization process, although the number of points is gradually decreasing, the dot density maps produced by our proposed method consistently maintain a similar spatial density distribution in each simplification epoch. This pattern is also observed in the ArcEngine method. From a quantitative standpoint (Table 4), the SSIM gradually decreases in tandem with the strength of generalization. Notably, the SSIM values of our proposed method closely align with those of the ArcEngine method and, in multiple epochs, even surpass them. This outcome underscores that our proposed method can deliver satisfactory performance from a quantitative perspective.

4.5. Discussion

Our proposed method stands out primarily for its remarkable efficiency in handling point clusters at scales ranging from tens of thousands to even millions. Unlike traditional methods designed for smaller clusters, where efficiency concerns are often overlooked, our approach excels in processing large-volume data. Noteworthy traditional methods, such as convex hull merging [18], genetic algorithms [31], Circle feature transformation [27], and Kohonen networks [23], grapple with efficiency challenges due to complex or iterative operations.
For example, the convex hull merging method incurs a significant time cost when constructing convex hulls for each layer of points, and the algorithm essentially relies on the generalization of polyline features, an approach that easily overlooks the neighbor relationships between adjacent points within convex hulls at various levels. The genetic algorithm involves a binary iterative process and exhibits high time complexity during execution, resulting in poor computational efficiency. The Circle feature transformation algorithm is highly dependent on various subjectively set parameters such as cluster threshold and circle radius. The setting of threshold parameters is easily influenced by the cartographer’s subjective judgment, making it challenging to handle point clusters with large data volumes. Kohonen networks and GCN methods belong to supervised learning approaches and rely on high-quality samples obtained through repeated experiments and precise hyperparameter settings. The labeling of sample data and the extraction of point feature characteristics incur significant time costs, thereby affecting the simplification efficiency. Additionally, the ArcEngine method requires excessive time to construct the Voronoi diagram by connecting the centers of triangles, which further increases the algorithm’s complexity and overall time consumption.
In contrast, our proposed method only requires constructing the Delaunay triangulation, the fundamental operator of traditional methods, without any subsequent operators (e.g., iterative training or subjective hyperparameter setting) that entail high time costs. Consequently, our method achieves remarkable efficiency in processing point clusters at scales of tens of thousands and even millions.
Beyond its efficiency, our method demonstrates exceptional performance. By calculating the mean distance of each point and introducing the concept of a “fixed point”, inspired by [20], it effectively highlights local structures between adjacent points. These simple yet powerful operators contribute to the robustness of our method, enabling it to process numerous point clusters with ease, even at high volumes.

5. Conclusions

In this study, we introduced a point cluster generalization method based on the Delaunay triangulation that is applicable to clusters ranging from tens of thousands to millions of points or more. In the algorithm, we first construct the Delaunay triangulation for the point cluster and extract the inner points within the convex hull structure. Next, we use the mean length of the Delaunay edges connected to each point as an indicator to quantify the importance of each point within the cluster. Finally, we adopt the concept of “fixed points” by sorting the points during the algorithm’s execution and fixing the neighbors of the points targeted for deletion. This prevents local areas within the point cluster from being over-generalized during the selection process. We also provide a detailed explanation of the evaluation procedure for this method and validate its computational efficiency through comparative experiments. Experimental results on large-volume point clusters show that our method not only achieves visual and quantitative results comparable to those of the ArcEngine and GCN methods but also offers significantly higher computational efficiency. This indicates that our method can efficiently produce a simplified result for large-volume point clusters with minimal time consumption, a capability often overlooked by traditional generalization methods. As the scale and volume of map data continue to expand, the demand for efficient map generalization algorithms increases, and the proposed algorithm enables multiscale visualization of large-volume data on publicly available geographic viewers. Therefore, the improvements in algorithm efficiency and practicality underscore the significance of our proposed point cluster selection method.
Meanwhile, our proposed algorithm has certain limitations in terms of point quantity control and the integration of semantic information, both of which require further improvement in future research. First, regarding point quantity control, our method employs the concept of “fixed points”, with a selection epoch terminating only when all points are marked as deleted or fixed. However, this termination criterion is not scale-specific. To produce a generalized result for a specific scale, the number of points remaining after selection should serve as a reference threshold, which can be derived from the Radical Law based on the target scale; the final result would then be the epoch whose number of selected points is closest to this target. Second, our method does not take semantic information into account. In practical applications, point features such as landmark buildings and government institutions inherently contain important semantic information, making them less likely to be deleted during map generalization. For example, when simplifying scattered settlements, it is necessary to consider differences in the geographical significance of settlements, such as administrative levels and the distribution of relevant road networks. In future research, semantic information could be translated into weights and incorporated into the calculation of mean distances, thereby promoting the integration of geometric and semantic information.

Author Contributions

Conceptualization, Tinghua Ai; funding acquisition, Tinghua Ai and Pengcheng Liu; methodology, Chongya Gong and Tinghua Ai; software, Chongya Gong; supervision, Tianyuan Xiao, Huafei Yu and Pengcheng Liu; validation, Tianyuan Xiao; visualization, Chongya Gong and Huafei Yu; writing—original draft, Chongya Gong; writing—review and editing, Chongya Gong, Tianyuan Xiao and Huafei Yu. All authors have read and agreed to the published version of the manuscript.

Funding

This article was supported by the National Natural Science Foundation of China [grant numbers 42471486 and 42394065].

Data Availability Statement

The data that support the findings of this study are available with a DOI at https://doi.org/10.6084/m9.figshare.28034993.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Bello-Orgaz, G.; Jung, J.J.; Camacho, D. Social big data: Recent achievements and new challenges. Inf. Fusion 2016, 28, 45–59.
2. Chen, C.P.; Zhang, C.-Y. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Inf. Sci. 2014, 275, 314–347.
3. Weibel, R. Map generalization in the context of digital systems. Cartogr. Geogr. Inf. Syst. 1995, 22, 259–263.
4. Brassel, K.E.; Weibel, R. A review and conceptual framework of automated map generalization. Int. J. Geogr. Inf. Syst. 1988, 2, 229–244.
5. Touya, G.; Zhang, X.; Lokhat, I. Is deep learning the new agent for map generalization? Int. J. Cartogr. 2019, 5, 142–157.
6. Guo, X.; Liu, J.; Wu, F.; Qian, H. A Method for Intelligent Road Network Selection Based on Graph Neural Network. ISPRS Int. J. Geo-Inf. 2023, 12, 336.
7. Cheng, L.; Chen, T.; Guo, Q.; Wei, Z.; Chen, M.; Wang, X.; Zhao, J. Integrated generalization method of contours and rivers considering geographic characteristics. Geocarto Int. 2023, 38, 2207543.
8. Benz, S.A.; Weibel, R. Road network selection for medium scales using an extended stroke-mesh combination algorithm. Cartogr. Geogr. Inf. Sci. 2014, 41, 323–339.
9. Touya, G. A road network selection process based on data enrichment and structure detection. Trans. GIS 2010, 14, 595–614.
10. Yang, M. Research on Feature Selection Considering Spatial Context in Map Generalization and Its Application. Acta Geod. Cartogr. Sin. 2014, 43, 877.
11. Karsznia, I.; Wereszczyńska, K.; Weibel, R. Make It Simple: Effective Road Selection for Small-Scale Map Design Using Decision-Tree-Based Models. ISPRS Int. J. Geo-Inf. 2022, 11, 457.
12. Xiao, T.; Ai, T.; Yu, H.; Yang, M.; Liu, P. A point selection method in map generalization using graph convolutional network model. Cartogr. Geogr. Inf. Sci. 2023, 51, 20–40.
13. Burghardt, D.; Cecconi, A. Mesh simplification for building typification. Int. J. Geogr. Inf. Sci. 2007, 21, 283–298.
14. Wang, X.; Burghardt, D. A typification method for linear building groups based on stroke simplification. Geocarto Int. 2021, 36, 1732–1751.
15. Töpfer, F.; Pillewizer, W. The principles of selection. Cartogr. J. 1966, 3, 10–16.
16. Mandelbrot, B.B. The Fractal Geometry of Nature; WH Freeman: New York, NY, USA, 1982; Volume 1.
17. Qiao, W.; Hehai, W. The research on fractal method of automatic generalization of map polygons. Acta Geod. Cartogr. Sin. 1996, 21, 59–63.
18. Wu, H. Principle of convex hull and its applications in generalization of grouped point objects. Eng. Surv. Mapp. 1997, 6, 1–6.
19. Langran, G.E.; Poiker, T.K. Integration of name selection and name placement. In Proceedings of the Second International Symposium on Spatial Data Handling, Seattle, WA, USA, 5–10 July 1986; International Geographical Union and International Cartographic Association: Seattle, WA, USA, 1986; pp. 50–64.
20. Ai, T.; Liu, Y. A method of point cluster simplification with spatial distribution properties preserved. Acta Geod. Cartogr. Sin. 2002, 31, 175–181.
21. Yan, H.; Weibel, R. An algorithm for point cluster generalization based on the Voronoi diagram. Comput. Geosci. 2008, 34, 939–954.
22. Lu, X.; Yan, H.; Li, W.; Li, X.; Wu, F. An algorithm based on the weighted network Voronoi Diagram for point cluster simplification. ISPRS Int. J. Geo-Inf. 2019, 8, 105.
23. Cai, Y.; Guo, Q. Points group generalization based on Kohonen net. Geomat. Inf. Sci. Wuhan Univ. 2007, 32, 626–629.
24. Liu, P.; Xiao, J. An Evaluation Model of Level of Detail Consistency of Geographical Features on Digital Maps. ISPRS Int. J. Geo-Inf. 2020, 9, 410.
25. Chen, Z.; Lu, X.; Xu, Y. A building aggregation method based on deep clustering of graph vertices. Acta Geod. Cartogr. Sin. 2024, 53, 736–749.
26. De Berg, M.; Bose, P.; Cheong, O.; Morin, P. On simplifying dot maps. Comput. Geom. 2004, 27, 43–62.
27. Qian, H.; Meng, L.; Zhang, M. Network simplification based on the algorithm of polarization transformation. In Proceedings of the XXIII International Cartographic Conference (ICC), Moscow, Russia, 4–10 August 2007; Cartographic Generalization and Multiple Representation: Moscow, Russia, 2007.
28. Van Kreveld, M.; Van Oostrum, R.; Snoeyink, J. Efficient settlement selection for interactive display. In Proceedings of the Auto-Carto, Seattle, WA, USA, 7–10 April 1997.
29. Sadahiro, Y. Cluster perception in the distribution of point objects. Cartogr. Int. J. Geogr. Inf. Geovisualiz. 1997, 34, 49–62.
30. Burghardt, D.; Purves, R.; Edwardes, A. Techniques for on-the-fly generalisation of thematic point data using hierarchical data structures. In Proceedings of the GIS Research UK 12th Annual Conference, Norwich, UK, 28–30 April 2004.
31. Deng, H.; Wu, F.; Qian, H. A model of point cluster selection based on genetic algorithms. J. Image Graph. 2003, 8, 970–976.
32. Li, Z.; Yan, H.; Ai, T.; Chen, J. Automated building generalization based on urban morphology and Gestalt theory. Int. J. Geogr. Inf. Sci. 2004, 18, 513–534.
33. Ai, T.; Ke, S.; Yang, M.; Li, J. Envelope generation and simplification of polylines using Delaunay triangulation. Int. J. Geogr. Inf. Sci. 2017, 31, 297–319.
34. Tang, L.; Ren, C.; Liu, Z.; Li, Q. A road map refinement method using delaunay triangulation for big trace data. ISPRS Int. J. Geo-Inf. 2017, 6, 45.
35. Zheng, C.; Guo, Q.; Wang, L.; Liu, Y.; Jiang, J. Collaborative Methods of Resolving Road Graphic Conflicts Based on Cartographic Rules and Generalization Operations. ISPRS Int. J. Geo-Inf. 2024, 13, 154.
36. Wertheimer, M.; Riezler, K. Gestalt theory. Soc. Res. 1944, 11, 78–99.
37. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Framework of our method.
Figure 2. Construction of the Delaunay triangulation based on a point cluster.
Figure 3. The Delaunay triangulation after pruning.
Figure 4. ML computation, quantifying the competitive strength of target point in visual space.
Figure 5. Original point cluster data for Jingyang city and Luojiang city.
Figure 6. Trajectory points in the urban area of Beijing.
Figure 7. Generalized results for the Luojiang district using the proposed method on 9 Epochs.
Figure 8. Generalized results for the Luojiang district using the ArcEngine method on 9 Epochs.
Figure 9. Generalized results for the Jingyang district using the proposed method on 9 Epochs. (For a better visual presentation, all figures related to the Jingyang district in this study have been rotated 90 degrees counterclockwise).
Figure 10. Generalized results for the Jingyang district using the ArcEngine method on 9 Epochs.
Figure 11. Generalized results using different methods on the first epoch in the Luojiang district.
Figure 12. Generalized results using different methods on the first epoch in the Jingyang district.
Figure 13. Detailed demonstration of local area using different methods in the Luojiang district on the first epoch.
Figure 14. Detailed demonstration of local area using different methods in the Jingyang district on the first epoch.
Figure 15. Generalized results of trajectory points on different epochs through the proposed method.
Table 1. Time consumed for the Jingyang district using different methods.

Epoch | Proposed Method (s) | ArcEngine Method (s) | GCN Method (Epoch 1)
1     | 0.283               | 157.701              | Training Time: 35.448 s
2     | 0.181               | 97.908               |
3     | 0.133               | 66.489               |
4     | 0.105               | 45.533               |
5     | 0.091               | 32.979               |
6     | 0.054               | 23.020               | Predicting Time: 2.753 s
7     | 0.034               | 16.789               |
8     | 0.025               | 11.830               |
9     | 0.017               | 8.555                |
10    | 0.014               | 6.320                |
Table 2. Time consumed for the Luojiang district using different methods.

Epoch | Proposed Method (s) | ArcEngine Method (s) | GCN Method (Epoch 1)
1     | 0.260               | 147.079              | Training Time: 31.190 s
2     | 0.202               | 94.333               |
3     | 0.160               | 63.941               |
4     | 0.108               | 43.710               |
5     | 0.086               | 31.409               |
6     | 0.052               | 22.087               | Predicting Time: 2.875 s
7     | 0.038               | 16.461               |
8     | 0.027               | 11.873               |
9     | 0.022               | 8.868                |
10    | 0.012               | 6.711                |
Table 3. Time Consumed of trajectory points using the proposed method.

Epoch | Time Consumed (s) | Remained Points
1     | 24.402            | 801,171
2     | 19.734            | 613,674
3     | 15.095            | 469,414
4     | 11.059            | 357,938
5     | 8.187             | 271,965
6     | 6.060             | 205,800
7     | 4.411             | 155,198
8     | 3.197             | 116,574
9     | 2.354             | 87,270
10    | 1.668             | 65,054
11    | 1.216             | 48,299
12    | 0.847             | 35,737
13    | 0.617             | 26,419
14    | 0.449             | 19,543
15    | 0.308             | 14,386
16    | 0.254             | 10,587
17    | 0.207             | 7790
18    | 0.128             | 5722
19    | 0.104             | 4222
20    | 0.091             | 3090
Total time consumption: 100.388 s
Table 4. Comparison of SSIM values of two generalization methods.

Epoch   | Proposed Method | ArcEngine Method
Epoch 1 | 0.969           | 0.970
Epoch 2 | 0.965           | 0.964
Epoch 3 | 0.964           | 0.954
Epoch 4 | 0.949           | 0.939
Epoch 5 | 0.907           | 0.885
Epoch 6 | 0.878           | 0.872
Epoch 7 | 0.843           | 0.836
Epoch 8 | 0.801           | 0.796
Epoch 9 | 0.721           | 0.740
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
