Article

Efficient Four-Level LOD Simplification for Single- and Multi-Mesh 3D Scenes Towards Scalable BIM/GIS/Digital Twin Integration

1 China Railway Design Corporation, Tianjin 300251, China
2 Intelligent Transportation System Research Center, Southeast University, Nanjing 211189, China
3 China Construction Eighth Engineering Division Corp., Ltd., Shanghai 200120, China
4 College of Civil Engineering, Tongji University, Shanghai 200092, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2026, 15(2), 61; https://doi.org/10.3390/ijgi15020061
Submission received: 27 October 2025 / Revised: 23 January 2026 / Accepted: 26 January 2026 / Published: 30 January 2026

Abstract

Efficient level-of-detail (LOD) management is crucial for handling large-scale 3D meshes in BIM, GIS, and digital twin applications. In practice, both individual models and complex multi-mesh scenes require multi-resolution representations. Yet two practical issues persist: (i) simplification rates are often fixed a priori, lacking principled guidance and yielding suboptimal fidelity–cost trade-offs; and (ii) after a scene-level target is set, workflows commonly impose a uniform rate on all models, which is ill-suited to heterogeneous geometry and produces uneven visual quality. This paper presents an automatic approach that constructs a cumulative edge collapse loss curve using a QEM (Quadric Error Metrics)-based process. Shape analysis of this curve defines four representative LOD targets, and an automated procedure then determines their corresponding simplification rates. The method is first developed for individual meshes and then extended to multi-mesh scenes, assigning model-specific rates that satisfy a prescribed scene-level reduction while maintaining visual consistency. Experiments on complex engineering datasets show higher fidelity than uniform-rate baselines, especially at high reductions. The approach provides a practical, automated framework for object- and scene-level LOD generation.

1. Introduction

With the increasing scale and complexity of 3D models used in Building Information Modeling (BIM), Geographic Information Systems (GISs), and digital twin applications, efficient management of level of detail (LOD) [1,2,3] has become a key challenge in large-scale mesh processing and rendering [4,5,6,7]. LOD techniques enable models to be rendered or transmitted at varying resolutions depending on application context, user perspective, or computational constraints [8,9,10]. This capability is essential for optimizing performance in web-based visualization, real-time rendering, and progressive transmission of 3D scenes [11,12]. For example, in iterative modeling and coordination workflows [13,14], newly created or revised models emerge continuously; LOD supports rapid, cross-platform visualization and review by providing on-demand, resolution-appropriate representations without requiring full-resolution assets. Meanwhile, large scenes composed of many heterogeneous meshes [15] benefit from scene-level simplification that yields tiered LODs for the entire environment, improving streaming, incremental loading, and real-time rendering efficiency. These practical needs motivate automatic LOD strategies that operate at both object and scene scales, reducing manual tuning while maintaining reliable visual quality.
In the literature, existing LOD research can be broadly grouped into two lines: (1) Semantics-driven LOD, typically grounded in standards such as CityGML/CityJSON, which specifies which objects, parts, and geometric refinements should be present at different map scales or viewing distances [16,17,18,19]. This approach is well established for urban models, but it depends on mature, domain-specific ontologies [20,21]; outside urban domains (e.g., railway infrastructure), standardized LOD semantics are less complete or not yet harmonized. (2) Geometry-driven LOD via mesh simplification, which is more general: it constructs multi-resolution representations by simplifying triangle meshes and then selects the resolution at runtime based on view distance, screen-space error, or resource budgets [22,23,24,25,26]. This paper follows the second line and focuses on automatically determining the simplification rates that define four LODs; for multi-mesh scenes, it further assigns model-specific rates while meeting a prescribed scene-level reduction target.
For geometry-driven LOD, two practical issues persist: (1) Simplification rates are often fixed a priori (e.g., 30/60/90%, 25/50/75%) or tuned by ad hoc heuristics [27,28,29], which is not robust across different meshes and may lead to suboptimal or even redundant LOD levels. As Figure 1 illustrates for a steel truss, both the 30% and 60% simplified meshes remain visually close to the original. In such cases, a 30% high-precision LOD provides limited additional value over the original mesh, whereas 60% offers a more meaningful fidelity–cost trade-off. Manual per-mesh tuning can mitigate this issue but is time-consuming and subjective. On the other hand, at high reductions, fidelity becomes highly sensitive to the ratio: in Figure 2, changing the rate by only 2% (88/90/92/94%) visibly alters the web members, demanding careful selection. Overall, current practice lacks a principled link between chosen ratios and accumulated geometric error. (2) For scene-level simplification, workflows often assign a uniform rate across all meshes to meet a global budget [27,30]. This ignores heterogeneity in size and complexity, yielding uneven perceptual quality: large, simple meshes are over-simplified, while detail-rich, high-density assets remain under-simplified. For example, applying a uniform 90% simplification rate to all the models in a traction substation scene (Figure 3) markedly over-degrades the ground, base, and tower meshes (red boxes).
To address these challenges, this paper proposes an automatic method for determining four-level LOD simplification rates based on a cumulative edge collapse loss (CECL) curve. Built on the Quadric Error Metrics (QEM) edge collapse framework [31,32], the method records, at each collapse, the number of faces removed and the associated collapse loss and then integrates these into a CECL curve that reflects progressive structural degradation. By analyzing geometric features of this curve, four representative LOD targets are defined, and a data-driven strategy is developed to automatically determine the corresponding simplification rates. For multi-mesh scenes, the approach aggregates per-model loss curves to form a scene-level curve and likewise identifies four scene-level LOD targets; a subsequent allocation step assigns model-specific simplification rates such that the combined reduction satisfies the prescribed scene-level target while yielding more uniform perceptual quality across the scene.
The organization of this paper is as follows: Section 2 reviews the QEM edge collapse method. Section 3 details the proposed method, including the CECL curve formulation and the automatic LOD determination process, first for individual meshes, then in its extension to multi-mesh scenes. Section 4 presents experiments on engineering datasets and evaluates the effectiveness of the method. Finally, Section 5 concludes the paper.

2. Background: QEM Edge Collapse Method

The details of the QEM edge collapse method can be found in Garland and Heckbert’s work [32]. In simple terms, edge collapse refers to removing an edge and merging its two endpoints into a new vertex, which visually appears as folding the edge away, as shown in Figure 4. For mesh simplification methods based on edge collapse, the central challenge lies in determining which edge in the mesh should be removed and where the merged vertex should be positioned.
In practice, the former question can be reduced to the latter, i.e., identifying the optimal position of the merged vertex. Once this optimal position is determined for a given edge, the collapse error of that edge can be quantified. The simplification process can then proceed by iteratively collapsing the edge with the smallest collapse error.
An edge collapse operation is inherently local. When an edge is collapsed, its influence is limited to the set of triangles adjacent to the original edge. Consequently, after merging, only the attributes of the new vertex and its neighboring triangles require recomputation.
QEM, short for Quadric Error Metrics, is based on the principle that the merged vertex should be positioned as close as possible to the planes of the triangles adjacent to the original edge. This condition can be formulated as an optimization problem: find the point $(v_x, v_y, v_z)$ in space that minimizes the sum of squared distances $\Delta_e$ to those planes. Following this formulation, the optimal position $(v_x^*, v_y^*, v_z^*)$ of the new vertex and the corresponding collapse error $\Delta_e^*$ can be determined for any given mesh edge.
Specifically, suppose the equation of the plane containing a triangle is given by
$$a x + b y + c z + d = 0 \qquad (1)$$
where $(a, b, c)$ is the unit normal vector of the plane and $d$ is the distance from the origin to the plane. The squared distance from a point $(v_x, v_y, v_z)$ to this plane can be expressed as
$$\delta(v) = (a v_x + b v_y + c v_z + d)^2 = (\mathbf{p}^\top \bar{v})^2 \qquad (2)$$
where $\mathbf{p} = (a, b, c, d)^\top$ denotes the plane parameter vector and $\bar{v} = (v_x, v_y, v_z, 1)^\top$ represents the homogeneous coordinate vector of the point.
Accordingly, the sum of the squared distances from a vertex to all planes can be written as
$$\Delta_v(v) = \sum_{\mathbf{p} \in N(v)} \delta(v) = \sum_{\mathbf{p} \in N(v)} (\mathbf{p}^\top \bar{v})^2 = \sum_{\mathbf{p} \in N(v)} (\bar{v}^\top \mathbf{p})(\mathbf{p}^\top \bar{v}) = \bar{v}^\top \Bigl(\sum_{\mathbf{p} \in N(v)} \mathbf{p}\,\mathbf{p}^\top\Bigr) \bar{v} \qquad (3)$$
where $N(v)$ denotes the set of all planes associated with the triangles incident to vertex $v$.
Define
$$K_p = \mathbf{p}\,\mathbf{p}^\top \qquad (4)$$
and
$$Q_p = \sum_{\mathbf{p} \in N(v)} K_p \qquad (5)$$
Therefore,
$$\Delta_v(v) = \bar{v}^\top Q_p \bar{v} \qquad (6)$$
where $Q_p$ is the QEM of vertex $v$, representing the quadratic form of the sum of the squared distances from any point to the planes of the incident triangles.
When performing an edge collapse, the two endpoints of the edge are merged, and the QEM of the resulting vertex can be obtained by summing the QEMs of the two endpoints:
$$Q_e = Q_{p_1} + Q_{p_2} \qquad (7)$$
where $Q_e$ is the QEM of the edge and $Q_{p_1}$ and $Q_{p_2}$ are the QEMs of the two endpoints of the edge, respectively.
Therefore, the collapse error of the edge can be expressed as
$$\Delta_e(v) = \bar{v}^\top Q_e \bar{v} \qquad (8)$$
The optimal position of the merged vertex $v^*$ can then be determined by solving the minimization problem
$$v^* = \arg\min_{v} \; \bar{v}^\top Q_e \bar{v} \qquad (9)$$
Since the error function $\Delta_e(v)$ is quadratic in the vertex coordinates, the minimizer can be found by setting its partial derivatives with respect to $v_x$, $v_y$, and $v_z$ to zero,
$$\frac{\partial \Delta_e(v)}{\partial v_x} = 0, \quad \frac{\partial \Delta_e(v)}{\partial v_y} = 0, \quad \frac{\partial \Delta_e(v)}{\partial v_z} = 0, \qquad (10)$$
which is equivalent to solving the following linear system:
$$\begin{bmatrix} Q_{11} & Q_{12} & Q_{13} \\ Q_{21} & Q_{22} & Q_{23} \\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix} \begin{bmatrix} v_x^* \\ v_y^* \\ v_z^* \end{bmatrix} = - \begin{bmatrix} Q_{14} \\ Q_{24} \\ Q_{34} \end{bmatrix} \qquad (11)$$
If the coefficient matrix on the left-hand side of Equation (11) is singular, the optimal merging point can instead be searched for along the edge. If this still fails to produce a valid solution, a fallback strategy is to select, from the two endpoints and the midpoint of the edge, the point that yields the smallest sum of squared distances.
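To make this concrete, the following is a minimal NumPy sketch of the per-edge computation: building the plane quadric of Equation (4), summing vertex quadrics per Equation (5), and solving Equation (11) for the optimal collapse position, with the endpoint/midpoint fallback used when the system is singular. The function names and the toy two-triangle mesh are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K_p = p p^T of the plane spanned by a triangle."""
    n = np.cross(p1 - p0, p2 - p0)
    n_len = np.linalg.norm(n)
    if n_len < 1e-12:                       # degenerate triangle contributes nothing
        return np.zeros((4, 4))
    n = n / n_len                           # unit normal (a, b, c)
    d = -np.dot(n, p0)                      # offset so that a*x + b*y + c*z + d = 0
    p = np.append(n, d)
    return np.outer(p, p)

def vertex_quadric(v_idx, vertices, faces):
    """Q_p: sum of plane quadrics over all triangles incident to vertex v_idx."""
    Q = np.zeros((4, 4))
    for f in faces:
        if v_idx in f:
            Q += plane_quadric(*(vertices[i] for i in f))
    return Q

def optimal_collapse(Q_e, v1, v2):
    """Optimal merged position and collapse error for an edge with quadric Q_e."""
    A, b = Q_e[:3, :3], -Q_e[:3, 3]
    if abs(np.linalg.det(A)) > 1e-12:
        candidates = [np.linalg.solve(A, b)]          # unique minimizer, Eq. (11)
    else:                                             # singular: endpoints and midpoint
        candidates = [v1, v2, 0.5 * (v1 + v2)]
    errors = [np.append(c, 1.0) @ Q_e @ np.append(c, 1.0) for c in candidates]
    best = int(np.argmin(errors))
    return candidates[best], float(errors[best])

# toy usage: evaluate the shared edge (1, 3) of a two-triangle strip
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.2]])
faces = [(0, 1, 2), (1, 3, 2)]
Q_e = vertex_quadric(1, vertices, faces) + vertex_quadric(3, vertices, faces)
v_star, delta_e = optimal_collapse(Q_e, vertices[1], vertices[3])
```

In a full simplification loop, these per-edge errors feed the priority queue described in the procedure below.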
The basic procedure of the QEM-based edge collapse algorithm can be summarized as follows:
1. Compute the QEM for each vertex in the mesh model using Equation (5).
2. For each edge, compute the optimal collapse position and the associated collapse error using Equations (7)–(11).
3. Maintain a set of collapsible edges in a priority queue (min-heap) ordered by collapse error.
4. Iteratively extract the edge with the smallest error from the queue, perform the collapse, and update the QEMs and collapse errors of the affected vertices and edges, until the desired simplification target is reached.

3. Methods

3.1. Overview

The main idea is to instrument the QEM edge collapse process by recording, at each collapse, the current simplification rate (percentage of faces removed) and the CECL. These samples define a monotonic curve of simplification rate versus cumulative loss that captures the growth of structural (and, by proxy, visual) degradation with simplification. Analyzing the curve's shape yields three LOD targets which, together with the original mesh, constitute four LOD levels. A data-driven procedure then automatically places the operating points (simplification rates) for these targets, removing the need for manual tuning. The approach is further extended to multi-mesh scenes by aggregating per-model loss curves into a scene-level loss curve and solving an allocation problem to assign model-specific rates while satisfying the prescribed scene-level reduction for each LOD tier. This yields perceptually consistent results across heterogeneous geometry and ensures that the scene meets the desired four-level LOD specification. The overall conceptual framework of the proposed method is illustrated in Figure 5. At runtime, applications can select among the pre-generated LOD tiers according to viewing conditions (e.g., view distance). The specific mapping from viewing conditions to an LOD level is application-dependent and can follow standard policies such as distance bands, screen-space error thresholds, or resource-budget-driven selection.

3.2. Simplification Rate–Cumulative Edge Collapse Loss Curve

3.2.1. Curve Construction

To capture a complete edge collapse history for a single mesh, the simplification should proceed to a single remaining vertex (i.e., a 100% simplification rate). At each edge collapse $k$, we record (i) the number of faces removed and (ii) the collapse loss $\Delta_e$ defined in Equation (8). The simplification rate $x_k$ is computed as the cumulative number of removed faces divided by the mesh's total face count, and the cumulative edge collapse loss (CECL) $y_k$ is the cumulative sum of $\Delta_e$. Plotting the pairs $(x_k, y_k)$ yields the simplification rate–CECL curve.
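A minimal sketch of this bookkeeping is shown below, assuming the instrumented simplification loop has already produced one (faces removed, collapse loss) pair per collapse; the variable names and the toy record are illustrative.

```python
import numpy as np

def build_cecl_curve(faces_removed, collapse_losses, total_faces):
    """Simplification rate-CECL curve from per-collapse records.

    faces_removed   : faces removed at each collapse k
    collapse_losses : collapse loss (Eq. (8)) at each collapse k
    total_faces     : face count of the original mesh
    """
    x = np.cumsum(np.asarray(faces_removed, dtype=float)) / total_faces  # x_k
    y = np.cumsum(np.asarray(collapse_losses, dtype=float))              # y_k (CECL)
    return x, y

# toy record of five collapses on a nine-face mesh
x, y = build_cecl_curve([2, 2, 1, 2, 2], [1e-6, 2e-6, 5e-6, 1e-4, 3e-2], total_faces=9)
```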
Taking the steel truss model in Figure 1 as an example, Figure 6 illustrates this curve, where the horizontal axis is the simplification rate and the vertical axis is the CECL. As the rate approaches 100%, geometric structure typically degenerates rapidly, causing CECL to rise steeply; the curve thus takes on a characteristic L-shape that can obscure the loss behavior over most of the simplification process. Simply trimming the near-vertical tail may still leave an L-shaped remainder because the remaining high-rate segments may continue to grow sharply.

3.2.2. Logarithmic Transformation

To better isolate the rapid-degeneration regime and increase sensitivity to changes in fast-growing regions, a logarithmic transform is applied to the loss axis, yielding the representation in Figure 7. The steep rise on the left simply reflects the characteristic behavior of the logarithm near zero—small positive losses expand sharply under log scaling. The gently varying middle segment corresponds to the mesh’s stable simplification process, while the final steep ascent indicates the rapid collapse phase. The logarithmic transform enhances sensitivity in these regions, improving the resolution of changes in the final fast-growing intervals.

3.2.3. Removing the Rapid-Degeneration Regime

To remove the rapid-degeneration regime, manual trimming is possible; however, for automated processing, a knee-detection algorithm [33] is applied to locate the transition point from stable simplification to rapid degeneration. Prior to detection, the curve is smoothed to suppress noise and improve accuracy. Among various smoothing techniques, this work adopts the Savitzky–Golay filter [34,35]. The knee-detection search range is typically set to $[0.8, 1]$ but can be adjusted as needed. The smoothed curve and detected knee point are shown in Figure 7.
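As an illustration, the smoothing and knee detection could be implemented as in the sketch below, using SciPy's Savitzky–Golay filter and the open-source kneed package, which implements the Kneedle algorithm of [33]. The window length and polynomial order are placeholder values; the paper does not report its filter settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from kneed import KneeLocator

def detect_knee_in_log_domain(x, y, search_range=(0.8, 1.0), window=51, polyorder=3):
    """Smooth log(CECL) and locate the stable-to-rapid-degeneration transition.

    Assumes the curve has many more samples than `window` (which must be odd).
    Returns the knee as a simplification rate (or None) plus the smoothed log curve.
    """
    log_y = np.log(np.clip(y, 1e-300, None))                  # guard against log(0)
    log_y_smooth = savgol_filter(log_y, window_length=window, polyorder=polyorder)

    in_range = (x >= search_range[0]) & (x <= search_range[1])
    knee = KneeLocator(x[in_range], log_y_smooth[in_range],
                       curve="convex", direction="increasing").knee
    return knee, log_y_smooth
```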

3.2.4. Recovering Simplification Rate–Cumulative Edge Collapse Loss Curve

Figure 7 shows the smoothed logarithmic CECL curve, together with the knee point that separates the stable simplification stage from the rapid-degeneration stage. By exponentiating the portion of the curve before the knee, a filtered simplification rate–CECL curve is obtained, as illustrated in Figure 8. Specifically, let $\tilde{y}(x)$ denote the smoothed curve in the log domain (the horizontal axis $x$ is still the simplification rate, while the log is applied only to the loss axis). After knee detection, the samples to the left of the detected knee are retained and those to the right are discarded, thereby removing the rapid-degeneration segment of the edge collapse process. Finally, the retained loss values are mapped back to the original CECL scale by applying the inverse of the logarithmic transformation (i.e., exponentiation). The resulting point set $(x, y)$ forms the filtered simplification rate–CECL curve shown in Figure 8. This curve therefore reflects the actual cumulative loss evolution of the mesh during the stable simplification stage while excluding the rapid-degeneration tail.
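Continuing the sketch above (and assuming `x`, `log_y_smooth`, and the detected knee are available), the filtered curve can be recovered by trimming at the knee and exponentiating:

```python
import numpy as np

def recover_filtered_curve(x, log_y_smooth, knee_rate):
    """Keep samples up to the knee and map the loss axis back to the linear scale."""
    keep = x <= knee_rate
    return x[keep], np.exp(log_y_smooth[keep])   # inverse of the logarithmic transform
```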

3.2.5. Determining Target Simplification Rates for Different LOD Levels

In Figure 8, since the vertical axis represents cumulative loss, the curve is globally non-decreasing. Moreover, because the algorithm always collapses the edge with the smallest loss first, the curve’s second derivative is generally positive. Dynamic updates may occasionally produce an edge with a lower loss than the previous collapse, causing local variations in the second derivative. However, the overall trend remains unchanged. Given these properties, the curve’s shape must lie between an L-shaped polyline and a straight line (orange dashed line in Figure 8). Typically, it takes the form shown by the blue solid line.
Based on the shape characteristics of the cumulative loss curve, three simplified models of different precision levels are defined for LOD. Each model is associated with a distinct perceptual meaning. Together with the original model, they form four LOD levels. The three simplification targets are as follows:
  • Point A: This achieves the highest simplification rate with negligible visual error. It provides maximum compression with almost no perceptible loss, suitable for close viewing distances.
  • Point C: This marks the onset of significant, yet still acceptable, visual error and represents the maximum allowable simplification rate. It is suitable for far viewing distances.
  • Point B: Located between Points A and C, it serves as a balanced compromise between visual quality and simplification rate, suitable for medium viewing distances.
Points A, B, and C are illustrated in Figure 8. In this study, Points A, B, and C, together with the original mesh, form a four-level LOD set. These four levels cover the main phases of the CECL curve, where cost–quality behavior changes noticeably. In practice, the number of LOD levels is application-dependent and may be reduced based on user preference and perceptual needs. For example, if Point A is visually indistinguishable from the original, a user may omit the original and use only three levels (A/B/C). Conversely, more than four levels can make detail changes more gradual between adjacent levels. However, this increases preprocessing, storage, and transmission costs. Excessively fine LOD partitions may also cause frequent level switching as view conditions change, degrading system performance. Therefore, adding more LOD levels by default is not recommended. If fewer levels are needed, some tiers can simply be dropped. For more personalized LOD designs, one would need to redefine what each target level represents and develop an extraction strategy accordingly. Such an extension is beyond the scope of this paper. Nevertheless, the proposed CECL curve provides a useful basis for alternative LOD target definitions.
Regarding applicability, the proposed A/B/C target definitions can be applied to a wide range of meshes. However, for extremely simple meshes (e.g., a cube with 8 vertices and 12 faces), any edge collapse may cause noticeable deformation. Even an “A-level” simplification could be perceptually significant. In such cases, simplification is usually unnecessary. For the geometry-rich meshes that motivate this work, the proposed method can consistently extract A/B/C targets and provide reasonable simplification rates without manual tuning.
The next subsection describes how the positions of Points A, B, and C on the CECL curve are determined adaptively. This enables automatic selection of simplification rates for different LOD levels, balancing visual fidelity and rendering performance. Furthermore, the method considers variations in curve shape: if the curve is L-shaped, Points A and C tend to cluster near Point B; if the curve is approximately linear, Points A and C are positioned farther from Point B. Finally, a bounding strategy prevents the distance between Points A and C from becoming too small or too large, ensuring a reasonable distribution.

3.3. Automatic Determination of Multi-Level LOD Simplification Rates

Based on the description of Points A, B, and C, there is no single fixed position that must be regarded as the “optimal” choice. For example, a simplification rate of 78.3% and one of 78.4% may produce indistinguishable visual results, and both can satisfy the same target. In practice, the most suitable rates for different LOD levels can vary according to user preferences or application requirements. Accordingly, the method identifies reasonable default positions for these points and also provides parameters to fine-tune them, allowing the targets to be adjusted to specific needs.

3.3.1. Normalization

To eliminate the effect of scale differences between models and provide a uniform basis for subsequent calculations, both the filtered simplification rate $x$ and the CECL $y$ (as shown in Figure 8) are subjected to Min-Max normalization, producing normalized values $X$ and $Y$ ($X, Y \in [0, 1]$), as the blue line shows in Figure 9. The specific computation of $X$ and $Y$ is as follows:
$$X = \frac{x - \min(x)}{\max(x) - \min(x)} \qquad (12)$$
$$Y = \frac{y - \min(y)}{\max(y) - \min(y)} \qquad (13)$$

3.3.2. Determination of Point B

According to the requirement in Section 3.2.5 that Point B should balance visual quality and simplification rate, and given that the curve's shape always lies between an L-shaped polyline and a straight line, the curve's knee can be taken as the position of Point B. It can be obtained using a standard knee-detection algorithm (e.g., Kneedle [33]). For reproducibility, and because the same intermediate distance sequence is also used in the pipeline to determine Points A and C, the explicit procedure used to identify Point B is provided here. First, we compute the perpendicular distance from each normalized coordinate $(X, Y)$ to the main diagonal (from $(0, 0)$ to $(1, 1)$, shown as the green dash-dot line), denoted as the sequence $d = \{d_i\}$. The distance is calculated as
$$d_i = \frac{X_i - Y_i}{\sqrt{2}} \qquad (14)$$
The maximum value is selected from the distance sequence $d$, and its index $i_B$ is recorded. The corresponding $X_B = X[i_B]$ gives the simplification rate of Point B on the cumulative loss curve (Figure 9). This point can be regarded as the turning point between the "slow cumulative loss growth" and "rapid cumulative loss growth" regions.
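A compact sketch of the normalization of Equations (12) and (13) and the Point B search of Equation (14) is given below; the returned distance sequence `d` is reused for Points A and C in the next subsection. The function names are illustrative.

```python
import numpy as np

def normalize_curve(x, y):
    """Min-max normalization of the filtered curve, Eqs. (12)-(13)."""
    X = (x - x.min()) / (x.max() - x.min())
    Y = (y - y.min()) / (y.max() - y.min())
    return X, Y

def locate_point_b(X, Y):
    """Index and normalized rate of Point B: farthest point below the main diagonal."""
    d = (X - Y) / np.sqrt(2.0)      # perpendicular distance to the diagonal, Eq. (14)
    i_B = int(np.argmax(d))
    return i_B, X[i_B], d
```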

3.3.3. Determination of Points A and C

Among all index positions satisfying $d_i \geq \tau \times \max(d)$, the smallest index $i_A$ and the largest index $i_C$ are selected, yielding $X_A = X[i_A]$ and $X_C = X[i_C]$ as the simplification rates of Points A and C, respectively (see Figure 9).
In the example shown in Figure 9, $\tau = 0.81$, and the resulting Points A and C conform well to the definitions in Section 3.2.5. However, because the shapes of the simplification rate–CECL curves vary across models, a fixed $\tau$ value cannot accommodate all cases. Specifically, the following hold:
  • When the loss curve is close to an L-shape, Point B lies near $(1, 0)$. In this case, Point A should be positioned close to Point B to achieve higher compression while preserving visual quality. Likewise, Point C should also be moved toward Point B, as beyond B the curve slope approaches infinity, meaning even a slight increase in the simplification rate can cause significant visual degradation. Therefore, $\tau$ should be closer to 1.
  • When the loss curve is nearly linear, Point B lies near $(0.5, 0.5)$. To maintain precision for close-up use, Point A should be placed farther from Point B. Because the slope after Point B is lower than in the L-shaped case, Point C can also be moved farther from Point B to broaden the simplification-rate range. In this case, $\tau$ should be closer to 0.
To adaptively adjust for different curve shapes, two shape metrics are introduced:
  • $k_X = Y_B / X_B$, representing the inclination of the line from Point B to the origin with respect to the X-axis;
  • $k_Y = (1 - X_B) / (1 - Y_B)$, representing the inclination of the line from Point B to the endpoint $(1, 1)$ with respect to the Y-axis.
Both $k_X$ and $k_Y$ take values of 0 for an L-shaped curve and 1 for a linear curve, making them effective indicators of curve shape. Their average is used as the final shape factor, from which the adaptive $\tau$ value is determined:
$$\tau = 1 - \frac{1}{2}\left(\frac{Y_B}{X_B} + \frac{1 - X_B}{1 - Y_B}\right) \qquad (15)$$
In this formulation, $\tau$ approaches 1 for L-shaped curves, where Points A and C lie close to Point B, and approaches 0 for linear curves, where Points A and C are positioned farther from Point B.
In practice, upper and lower bounds, $f_{\min}$ and $f_{\max}$, are imposed to prevent extreme $\tau$ values that would place Points A and C either too close together or too far apart, thereby influencing the visual distinction between LOD levels. Specifically, $\tau$ is computed as
$$\tau = f_{\min} + \left[1 - \frac{1}{2}\left(\frac{Y_B}{X_B} + \frac{1 - X_B}{1 - Y_B}\right)\right](f_{\max} - f_{\min}) \qquad (16)$$
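The adaptive threshold of Equation (16) and the selection of Points A and C can then be sketched as follows; the default bounds mirror the values used in the single-mesh experiments of Section 4 and are otherwise user-adjustable.

```python
import numpy as np

def adaptive_tau(X_B, Y_B, f_min=0.0, f_max=0.9):
    """Shape-adaptive threshold tau bounded to [f_min, f_max], Eq. (16)."""
    k_x = Y_B / X_B                        # inclination toward the origin
    k_y = (1.0 - X_B) / (1.0 - Y_B)        # inclination toward (1, 1)
    shape = 1.0 - 0.5 * (k_x + k_y)        # 1 for L-shaped curves, 0 for linear ones
    return f_min + shape * (f_max - f_min)

def locate_points_a_c(X, d, tau):
    """Smallest and largest indices whose diagonal distance reaches tau * max(d)."""
    idx = np.flatnonzero(d >= tau * d.max())
    i_A, i_C = int(idx[0]), int(idx[-1])
    return i_A, X[i_A], i_C, X[i_C]
```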

3.3.4. Generation of LOD Models

In Section 3.3.2 and Section 3.3.3, the values $X_A$, $X_B$, and $X_C$ obtained are normalized. Using their corresponding indices $i_A$, $i_B$, and $i_C$, these can be directly mapped back to the original simplification rate data $x$, i.e., $x_A = x[i_A]$, $x_B = x[i_B]$, $x_C = x[i_C]$, to obtain the actual simplification rates of the three points in the original scale.
Based on the determined simplification rates, three LOD models are generated, which, together with the original model, form four LOD levels. This enables LOD switching during rendering, effectively improving system performance while preserving visual quality. In practical applications, only a subset of these levels may be generated as required.
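For illustration only, the recovered rates could be applied with an off-the-shelf quadric decimator such as PyVista's `decimate` [36], rather than with the instrumented QEM pipeline described above; the file name in the commented usage is hypothetical.

```python
import pyvista as pv

def generate_lod_meshes(mesh, rates):
    """Build the LOD set at the automatically determined simplification rates.

    `rates` are the fractions of faces to remove, e.g. (x_A, x_B, x_C).
    """
    lods = {"original": mesh}
    for level, rate in zip(("A", "B", "C"), rates):
        lods[level] = mesh.triangulate().decimate(target_reduction=rate)
    return lods

# e.g. with the rates found for the steel truss model in Section 4.1.1:
# truss = pv.read("steel_truss.ply")                       # hypothetical file
# lods = generate_lod_meshes(truss, (0.664, 0.863, 0.917))
```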

3.4. Multi-Mesh Scene LOD Simplification Rate Allocation

Section 3.2 and Section 3.3 together present an automatic method for setting multi-level LOD simplification rates for individual models. This approach is well suited, for example, to collaborative design workflows in which newly created models are first transmitted in low-precision form for rapid multi-platform synchronization. However, when simplifying a fully assembled scene where all models are already complete, independently assigning three simplification levels to each model often fails to meet the prescribed overall scene-level reduction target.
The following clarifies the terminology used in this section: (i) Scene-level (aggregate) simplification rate refers to the percentage of faces removed from the entire scene, computed as the total number of removed faces divided by the total face count of all models. Points A, B, and C at the scene level correspond to this aggregate rate (e.g., 92.6% at Point C for the traction substation scene in Section 4). (ii) Model-specific (allocated) simplification rate refers to the percentage of faces removed from an individual model, which may differ significantly across models depending on their geometry. Furthermore, two simplification strategies are distinguished: uniform simplification applies the same simplification rate to all models, whereas the proposed adaptive allocation assigns model-specific rates based on the scene-level CECL curve. Adaptive allocation simplifies high-density models more aggressively and low-density models less so (or not at all), achieving perceptually consistent results while satisfying the prescribed scene-level reduction target.
The most straightforward way to enforce a scene-level target is to assign the same simplification rate to every model, i.e., uniform simplification. Yet this often leads to over-simplification of some models and under-simplification of others. A patch-style workaround is to define a minimum allowable face count and halt the edge collapse process when a model’s face count drops below this threshold. This can prevent excessive degradation in simple elements such as the ground model in Figure 3, preserving its appearance. However, in the same figure, the tower model might still retain more faces than the threshold after being over-simplified, resulting in poor quality. Moreover, this method cannot address cases where highly dense meshes remain under-simplified, wasting rendering resources.
Another option is to merge all independent models in the scene into a single mesh and then apply the QEM edge collapse method, which automatically collapses the edges with the smallest loss first. Because this method evaluates collapse errors globally, it can, in theory, yield an optimal simplification result. However, it has clear limitations. First, restoring the merged mesh back into the original individual models after reduction is complex, requiring structural tracking and data mapping. Second and more importantly, merging drastically increases the face count, and the time complexity of QEM grows faster than linearly, making the processing of large face sets computationally expensive. In BIM scenes with continually growing numbers of models, this can even render the simplification task infeasible.
In contrast, performing simplification independently within each model keeps the per-operation face count within a manageable range. This not only improves the scalability and stability of the algorithm but also enables the use of parallel processing to further accelerate the overall simplification workflow.
Since the models in a multi-mesh scene are simplified independently, the simplification of one model does not affect the others. The strategy here is to simplify each model individually to a single vertex (i.e., a 100% simplification rate) while recording (i) the number of faces removed and (ii) the collapse loss $\Delta_e$ defined in Equation (8), in the same manner as described for the single-mesh case. All records from every model are then combined and sorted in ascending order of collapse loss, and the scene-level simplification rate–CECL curve is constructed by cumulatively summing the number of faces removed (divided by the total face count of the entire scene) together with the cumulative sum of the collapse losses, following the same procedure as in Section 3.2. Although, within an individual model, QEM updates during edge collapses may occasionally cause a newly computed collapse loss to be lower than in the previous step, resulting in a per-model loss sequence that is not strictly sorted, this effect is negligible at the scene scale.
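A sketch of this merge-and-accumulate step is shown below, assuming each model has been independently simplified to a single vertex and its per-collapse records are available; the data layout and function name are illustrative.

```python
import numpy as np

def build_scene_curve(model_records, total_scene_faces):
    """Scene-level simplification rate-CECL curve from per-model collapse records.

    model_records : dict mapping model_id -> list of (faces_removed, collapse_loss)
                    pairs from simplifying that model to a 100% rate.
    """
    merged = [(loss, faces, mid)
              for mid, records in model_records.items()
              for faces, loss in records]
    merged.sort(key=lambda rec: rec[0])                    # ascending collapse loss
    losses = np.array([rec[0] for rec in merged])
    faces = np.array([rec[1] for rec in merged], dtype=float)
    model_ids = np.array([rec[2] for rec in merged])
    x = np.cumsum(faces) / total_scene_faces               # scene-level rate
    y = np.cumsum(losses)                                  # scene-level CECL
    return x, y, faces, model_ids
```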
The curve obtained after filtering out the rapid-degeneration segment for the entire scene is plotted as a blue line in Figure 10, which takes the traction substation scene in Figure 3 as an example. Once this scene-level curve is obtained, the method from Section 3.2.5 can be applied to automatically determine the three target LOD simplification rates for the entire scene.
After determining the target simplification rates for the scene, the next step is to assign model-specific (allocated) simplification rates that produce perceptually consistent results while ensuring that the overall simplification rate meets the target; this constitutes the adaptive allocation strategy. To enable this, each [faces removed, collapse loss] entry in the merged dataset is tagged with its source model. Combined with the aforementioned cumulative operation, this produces arrays of the form [simplification rate, cumulative collapse loss, faces removed, model ID], where the first two elements plot the curve shown in Figure 10. Once a target simplification rate is set, the faces removed from each model up to this threshold are summed and divided by the model's original face count, yielding the model-specific simplification rate to be applied. For example, in Figure 10, the yellow and green arrays give the model ID and the number of faces removed at the corresponding collapse step for that model; the [2, 3] rising from the yellow point indicates that three faces were removed from model 2 at that step. Suppose the original face count of model 2 is 10. For the LOD simplification rate at Point A, the model-specific simplification rate for model 2 is calculated as $(3 + 2 + 2 + 1)/10 = 80\%$. For Points B and C, the calculation is $(3 + 2 + 2 + 1 + 1)/10 = 90\%$.
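The allocation step can be sketched as follows, reusing the arrays from the previous sketch; the commented example reproduces the model 2 calculation from the text.

```python
import numpy as np

def allocate_model_rates(x, faces, model_ids, original_face_counts, target_rate):
    """Model-specific rates that realize a prescribed scene-level (aggregate) rate.

    x, faces, model_ids  : outputs of build_scene_curve (same ordering)
    original_face_counts : dict mapping model_id -> original face count
    target_rate          : scene-level rate of the chosen LOD target (e.g. Point C)
    """
    cutoff = np.searchsorted(x, target_rate, side="right")   # collapses applied up to the target
    allocated = {}
    for mid, n_faces in original_face_counts.items():
        removed = faces[:cutoff][model_ids[:cutoff] == mid].sum()
        allocated[mid] = removed / n_faces
    return allocated

# For model 2 in the text (10 original faces), the collapses kept up to Point A remove
# 3 + 2 + 2 + 1 faces, so allocate_model_rates would report a rate of 0.8 for it.
```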

4. Experimental Results and Discussion

In this section, the proposed methods are validated through four experiments: two engineering models for single-mesh simplification and two engineering scenes for scene-level multi-mesh simplification. All experiments were implemented in Python 3.13. The input mesh models were obtained by extracting geometry and color information from IFC (Industry Foundation Classes) files. The experiments were conducted on a MacBook Air (M1 chip, 16 GB RAM, 256 GB SSD). For a typical single model with tens of thousands of faces, QEM-based mesh simplification takes on the order of a few seconds, while the CECL-curve-based determination of LOD simplification rates is nearly instantaneous. If the mesh simplification core is implemented in a compiled language such as C++ or Rust, the simplification time can be further reduced substantially.

4.1. Single-Mesh Simplification Experiments

4.1.1. Steel Truss Model

Figure 11a,b illustrate a steel truss model and its simplified versions generated using the automatically determined simplification rates at Points A, B, and C, showing both the overall view and local details from the same camera position to facilitate visual comparison across different simplification levels. Visualization of these models was carried out using PyVista [36].
The procedure for determining these simplification rates is as follows. First, the model is simplified to a single vertex (100% simplification rate), while recording at each edge collapse both (i) the number of faces removed and (ii) the corresponding collapse loss, thereby constructing the simplification rate–cumulative loss curve. Based on the methods described in Section 3.3.2 and Section 3.3.3 and with parameters $f_{\min} = 0$ and $f_{\max} = 0.9$, the simplification rates for Points A, B, and C were determined to be 66.4%, 86.3%, and 91.7%, respectively. The face count is reduced from 37,842 to 12,715, 5184, and 3141, respectively. Figure 12 shows both the smoothed logarithmic simplification rate–CECL curve with the knee point and the filtered curve with the positions of Points A, B, and C.
The high-precision model corresponding to Point A (66.4% simplification) shows virtually no perceivable difference from the original mesh model, demonstrating strong fidelity. The medium-precision model at 86.3% (Point B) exhibits some loss of detail, as shown in the detailed view of Figure 11b, but overall visual quality remains well preserved. At 91.7% simplification (Point C), the low-precision model exhibits substantial reductions in local details, particularly in the web members; however, because this model is only displayed at larger viewing distances, such simplification is barely noticeable to the user.
Compared with conventional approaches that apply fixed simplification ratios (e.g., 30%, 60%, 90%), the proposed method adaptively determines LOD simplification rates by analyzing the simplification rate–cumulative loss curve. This adaptivity enables higher simplification ratios to be achieved while preserving perceptual fidelity, thereby providing a model-specific and interpretable optimization that improves rendering performance without degrading visual realism.

4.1.2. Power Transformer Model

Figure 13a,b illustrate a power transformer model and its simplified versions generated using the automatically determined simplification rates at Points A, B, and C, showing both the overall view and local details from the same camera position to facilitate visual comparison across different simplification levels.
The procedure for determining the simplification rates is the same as for the steel truss model, with parameters set to $f_{\min} = 0$ and $f_{\max} = 0.9$. The resulting rates for Points A, B, and C are 63.0%, 84.6%, and 92.8%, respectively. The face count is reduced from 52,748 to 19,517, 8123, and 3798, respectively. Figure 14 presents both the smoothed logarithmic simplification rate–CECL curve with its knee point and the filtered curve showing the positions of Points A, B, and C.
Compared with the steel truss model in Figure 12, the power transformer model exhibits a maximum cumulative geometric loss (Figure 14a) that is approximately three orders of magnitude higher. Nevertheless, the proposed method is still able to effectively extract representative simplification rates at Points A, B, and C, demonstrating its robustness and broad applicability.
Moreover, because the CECL curve of the power transformer model is closer to linear in shape than that of the steel truss model, the relative spacing between Points A and C and Point B is larger. This observation is consistent with the design principle of the proposed method, which adapts the distribution of simplification rates to the characteristics of the model.

4.2. Scene-Level Multi-Mesh Simplification Experiments

4.2.1. Traction Substation Scene

Figure 15a–d illustrates a traction substation scene and its simplified versions generated using the automatically determined simplification rates at Points A, B, and C, from the same camera position to facilitate visual comparison across different simplification levels.
The procedure for determining the scene-level (aggregate) simplification rates follows the same approach as in the single-mesh experiments, with parameters set to $f_{\min} = 0$ and $f_{\max} = 1.0$. The only difference is that all records from every model are combined and sorted to form the CECL curve, as described in Section 3.4. The resulting scene-level simplification rates for Points A, B, and C are 77.8%, 88.1%, and 92.6%, respectively. The face count is reduced from 1,300,094 to 288,621, 154,711, and 96,207, respectively. Figure 16 presents the smoothed logarithmic simplification rate–CECL curve with its knee point, along with the filtered curve marking the positions of Points A, B, and C. Following Section 3.4, the corresponding model-specific simplification rates are then applied to individual meshes. The resulting face counts and simplification rates for representative models at the different LOD levels (Points A, B, and C) are summarized in Table 1, where the column ID corresponds to the numbers in the left subplot of Figure 15d.
By constructing the scene-level simplification rate–CECL curve, it becomes possible to determine, for any given scene-level (aggregate) simplification rate, how many faces should be removed from each model. This in turn yields the model-specific (allocated) simplification rates that collectively realize scene-level perceptual consistency in simplification. As shown in Table 1, the proposed adaptive allocation method assigns higher model-specific simplification rates to models with relatively small volumes but large face counts (where simplification has little visual impact), such as model 4, while assigning lower model-specific simplification rates to larger-volume models with fewer faces (where simplification would cause more noticeable visual degradation), such as models 6, 11, and 12. Compared with uniform simplification with a minimum face-count constraint (set to 50) (Figure 15e), the proposed method achieves significantly greater fidelity under the same scene-level (aggregate) simplification rate.
In Figure 16b, Points A, B, and C are closely spaced, which may give the misleading impression that only a small number of faces are removed between Points A and C. In reality, at high simplification rates even a small change in rate can correspond to a large reduction in face count. For example, as shown for model 10 in Table 1, the simplification rates at Points B and C differ by only about 5%, yet the corresponding numbers of remaining faces differ by nearly a factor of two.

4.2.2. Roadbed with Slope Scene

Figure 17a–d illustrate a roadbed with slope scene and its simplified versions generated using the automatically determined simplification rates at Points A, B, and C, from the same camera position to facilitate visual comparison across different simplification levels. Since the scene is viewed from a relatively long distance, differences from the original model are not readily visible. Therefore, Figure 18 presents local detail views, which show that for the slope model, simplification occurs primarily along its boundaries.
The procedure for determining the scene-level (aggregate) simplification rates is the same as for the traction substation scene, with parameters set to $f_{\min} = 0$ and $f_{\max} = 1.0$. The resulting scene-level simplification rates for Points A, B, and C are 82.0%, 89.7%, and 93.1%, respectively. Figure 19 presents the smoothed logarithmic simplification rate–CECL curve with its knee point, along with the filtered curve marking the positions of Points A, B, and C. The resulting face counts and simplification rates for representative models at the different LOD levels (Points A, B, and C) are summarized in Table 2, where the column ID corresponds to the numbers in the left subplot of Figure 17d.
Due to the trade-off between detail presentation and scene completeness, the scenes shown in this subsection represent only parts of the full scenes. As a result, the maximum simplification rates reported in Table 2 are lower than those for the overall scenes. Since this scene contains a larger number of high-density models, its high-precision version can still achieve a scene-level simplification rate of 82.0%. The four experiments, conducted across different scales, demonstrate the robustness and adaptability of the proposed method. Finally, similar to the previous subsection, this subsection also includes a comparison with uniform simplification with a minimum face-count constraint (set to 50) (Figure 17e). The proposed adaptive allocation method again achieves better fidelity under the same scene-level (aggregate) simplification rate.

5. Conclusions

This study addressed two practical gaps in geometry-driven LOD: the absence of principled guidance for selecting simplification rates and the inconsistency in visual quality caused by applying a uniform rate across heterogeneous models within a scene. To close these gaps, an automatic framework was presented that instruments a QEM edge collapse process to construct a simplification rate–CECL curve. Shape analysis defines three targets (Points A, B, and C), which, together with the original mesh, yield four representative LOD levels. A data-driven procedure then automatically determines the operating points. Then, the method extends to scenes by merging per-model collapse records into a scene-level CECL curve, from which model-specific (allocated) rates are derived through adaptive allocation to satisfy a prescribed global reduction while preserving perceptual consistency across models.
Experiments on two individual models (steel truss, power transformer) and two multi-mesh scenes (traction substation, roadbed with slope) demonstrated robustness across scales and curve shapes. Despite order-of-magnitude differences in cumulative loss, the procedure consistently identified representative simplification rates (Points A, B, and C) that achieved substantial reductions with minimal perceptual degradation at relevant viewing distances. Compared with uniform simplification enhanced by a minimum face-count constraint, the proposed adaptive allocation method achieved superior fidelity under the same scene-level budget and improved rendering efficiency by concentrating reductions where visual cost was lowest.

Limitations and Future Work

This work focuses on offline generation of a four-level LOD set and the corresponding simplification rates. At runtime, applications can select among the pre-generated LOD tiers according to viewing conditions (e.g., view distance). The specific mapping from viewing conditions to an LOD level is application-dependent and can follow standard policies such as distance bands, screen-space error thresholds, or resource-budget-driven selection; designing and evaluating such runtime selection policies is outside the scope of this paper.
Future work will consider integrating the proposed offline LOD generation with runtime LOD selection strategies and evaluating the end-to-end system behavior under practical constraints (e.g., real-time performance and rendering/streaming budgets).

Author Contributions

Conceptualization, Siyuan Sun, Qilin Zhang, Lin Su and Xukun Yang; methodology, Siyuan Sun and Xinyu Liu; software, Siyuan Sun; validation, Siyuan Sun, Lin Su, Xukun Yang and Chunyu Qi; formal analysis, Siyuan Sun; investigation, Siyuan Sun; resources, Lin Su, Xukun Yang and Chunyu Qi; data curation, Siyuan Sun; writing—original draft preparation, Siyuan Sun; writing—review and editing, Siyuan Sun, Lin Su, Xukun Yang and Licheng Pan; visualization, Siyuan Sun; supervision, Siyuan Sun, Lin Su, Xukun Yang, Chunyu Qi, Qilin Zhang and Licheng Pan; project administration, Siyuan Sun; funding acquisition, Siyuan Sun. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant U2268203 and the Category A Project of the China Railway Design Corporation under Grant 2024A0253802-6.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions that helped to improve the comprehensiveness and clarity of this paper.

Conflicts of Interest

Authors Siyuan Sun, Lin Su, Xukun Yang, Chunyu Qi, and Xinyu Liu were employed by the company China Railway Design Corporation. Author Licheng Pan was employed by the company China Construction Eighth Engineering Division Corp., Ltd. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LOD  level of detail
BIM  Building Information Modeling
GIS  Geographic Information System
QEM  Quadric Error Metrics
CECL  cumulative edge collapse loss

References

  1. Xia, J.C.; Varshney, A. Dynamic View-Dependent Simplification for Polygonal Models. In Proceedings of the Seventh Annual IEEE Visualization’96, San Francisco, CA, USA, 27 October–1 November 1996; pp. 327–334. [Google Scholar]
  2. Biljecki, F.; Ledoux, H.; Stoter, J. An Improved LOD Specification for 3D Building Models. Comput. Environ. Urban Syst. 2016, 59, 25–37. [Google Scholar] [CrossRef]
  3. Biljecki, F.; Ledoux, H.; Stoter, J.; Zhao, J. Formalisation of the Level of Detail in 3D City Modelling. Comput. Environ. Urban Syst. 2014, 48, 1–15. [Google Scholar] [CrossRef]
  4. Luebke, D.; Erikson, C. View-Dependent Simplification of Arbitrary Polygonal Environments. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 3–8 August 1997; pp. 199–208. [Google Scholar]
  5. Somanath, S.; Naserentin, V.; Eleftheriou, O.; Sjölie, D.; Wästberg, B.S.; Logg, A. Towards Urban Digital Twins: A Workflow for Procedural Visualization Using Geospatial Data. Remote Sens. 2024, 16, 1939. [Google Scholar] [CrossRef]
  6. Zhang, J.; Cheng, J.C.P.; Chen, W.; Chen, K. Digital Twins for Construction Sites: Concepts, LoD Definition, and Applications. J. Manag. Eng. 2022, 38, 04021094. [Google Scholar] [CrossRef]
  7. Luebke, D.; Reddy, M.; Cohen, J.D.; Varshney, A.; Watson, B.; Huebner, R. Level of Detail for 3D Graphics; Elsevier: Amsterdam, The Netherlands, 2002. [Google Scholar]
  8. Clark, J.H. Hierarchical Geometric Models for Visible Surface Algorithms. Commun. ACM 1976, 19, 547–554. [Google Scholar] [CrossRef]
  9. Funkhouser, T.A.; Séquin, C.H. Adaptive Display Algorithm for Interactive Frame Rates during Visualization of Complex Virtual Environments. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 2–6 August 1993; Siggraph ’93. pp. 247–254. [Google Scholar] [CrossRef]
  10. Buyukdemircioglu, M.; Kocaman, S. Reconstruction and Efficient Visualization of Heterogeneous 3D City Models. Remote Sens. 2020, 12, 2128. [Google Scholar] [CrossRef]
  11. Heok, T.K.; Daman, D. A Review on Level of Detail. In Proceedings of the International Conference on Computer Graphics, Imaging and Visualization, 2004, CGIV 2004, Penang, Malaysia, 2 July 2004; pp. 70–75. [Google Scholar]
  12. Zhu, J.; Wu, P. Towards Effective BIM/GIS Data Integration for Smart City by Integrating Computer Graphics Technique. Remote Sens. 2021, 13, 1889. [Google Scholar] [CrossRef]
  13. Renganathan, B.; Shanthi Priya, R.; Kumar, G.R.; Thiruvengadam, J.; Senthil, R. Intuitive and Experiential Approaches to Enhance Conceptual Design in Architecture Using Building Information Modeling and Virtual Reality. Infrastructures 2025, 10, 127. [Google Scholar] [CrossRef]
  14. Rowaizak, M.; Farhat, A.; Khalil, R. From Brain Lobes to Neurons: Navigating the Brain Using Advanced 3D Modeling and Visualization Tools. J. Imaging 2025, 11, 298. [Google Scholar] [CrossRef]
  15. Lebaku, P.K.R.; Gao, L.; Lu, P.; Sun, J. Deep Learning for Pavement Condition Evaluation Using Satellite Imagery. Infrastructures 2024, 9, 155. [Google Scholar] [CrossRef]
  16. Tang, L.; Li, L.; Ying, S.; Lei, Y. A Full Level-of-Detail Specification for 3D Building Models Combining Indoor and Outdoor Scenes. ISPRS Int. J. Geo-Inf. 2018, 7, 419. [Google Scholar] [CrossRef]
  17. Zhan, W.; Chen, Y.; Chen, J. 3D Tiles-Based High-Efficiency Visualization Method for Complex BIM Models on the Web. ISPRS Int. J. Geo-Inf. 2021, 10, 476. [Google Scholar] [CrossRef]
  18. Zhu, J.; Wu, P.; Anumba, C. A Semantics-Based Approach for Simplifying IFC Building Models to Facilitate the Use of BIM Models in GIS. Remote Sens. 2021, 13, 4727. [Google Scholar] [CrossRef]
  19. Lam, P.D.; Gu, B.H.; Lam, H.K.; Ok, S.Y.; Lee, S.H. Digital Twin Smart City: Integrating IFC and CityGML with Semantic Graph for Advanced 3D City Model Visualization. Sensors 2024, 24, 3761. [Google Scholar] [CrossRef]
  20. Löwner, M.O.; Gröger, G.; Benner, J.; Biljecki, F.; Nagel, C. Proposal for a New Lod and Multi-Representation Concept for Citygml. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, IV-2/W1, 3–12. [Google Scholar] [CrossRef]
  21. Boeters, R.; Arroyo Ohori, K.; Biljecki, F.; Zlatanova, S. Automatically Enhancing CityGML LOD2 Models with a Corresponding Indoor Geometry. Int. J. Geogr. Inf. Sci. 2015, 29, 2248–2268. [Google Scholar] [CrossRef]
  22. Xiang, H.; Huang, X.; Lan, F.; Yang, C.; Gao, Y.; Wu, W.; Zhang, F. A Shape-Preserving Simplification Method for Urban Building Models. ISPRS Int. J. Geo-Inf. 2022, 11, 562. [Google Scholar] [CrossRef]
  23. Hussain, M.; Okada, Y. LOD Modelling of Polygonal Models. Mach. Graph. Vis. 2005, 14, 325. [Google Scholar]
  24. Wang, B.; Wu, G.; Zhao, Q.; Li, Y.; Gao, Y.; She, J. A Topology-Preserving Simplification Method for 3D Building Models. ISPRS Int. J. Geo-Inf. 2021, 10, 422. [Google Scholar] [CrossRef]
  25. Sun, Y.; Ma, J.; She, J.; Zhao, Q.; He, L. View-Dependent Progressive Transmission Method for 3D Building Models. ISPRS Int. J. Geo-Inf. 2021, 10, 228. [Google Scholar] [CrossRef]
  26. Li, J.; Chen, D.; Hu, F.; Wang, Y.; Li, P.; Peethambaran, J. Shape-Preserving Mesh Decimation for 3D Building Modeling. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103623. [Google Scholar] [CrossRef]
  27. Yang, Z.; Aihemaiti, M.; Abudureheman, B.; Tao, H. High-Precision Optimization of BIM-3D GIS Models for Digital Twins: A Case Study of Santun River Basin. Sensors 2025, 25, 4630. [Google Scholar] [CrossRef]
  28. Zhao, T.; Jiang, J.; Guo, X. A Novel Quadratic Error Metric Mesh Simplification Algorithm for 3d Building Models Based on ‘Local-Vertex’ Texture Features. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLVIII-3/W2-2022, 109–115. [Google Scholar] [CrossRef]
  29. Sun, Z.; Wang, C.; Wu, J. Industry Foundation Class-Based Building Information Modeling Lightweight Visualization Method for Steel Structures. Appl. Sci. 2024, 14, 5507. [Google Scholar] [CrossRef]
  30. Liu, Z.; Zhang, C.; Cai, H.; Qv, W.; Zhang, S. A Model Simplification Algorithm for 3D Reconstruction. Remote Sens. 2022, 14, 4216. [Google Scholar] [CrossRef]
  31. Hoppe, H. Progressive Meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 4–9 August 1996; Siggraph ’96. pp. 99–108. [Google Scholar] [CrossRef]
  32. Garland, M.; Heckbert, P.S. Surface Simplification Using Quadric Error Metrics. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 3–8 August 1997; pp. 209–216. [Google Scholar]
  33. Satopaa, V.; Albrecht, J.; Irwin, D.; Raghavan, B. Finding a “Kneedle” in a Haystack: Detecting Knee Points in System Behavior. In Proceedings of the 2011 31st International Conference on Distributed Computing Systems Workshops, Minneapolis, MN, USA, 20–24 June 2011; pp. 166–171. [Google Scholar] [CrossRef]
  34. Savitzky, A.; Golay, M.J.E. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  35. Savitzky, A. A Historic Collaboration. Anal. Chem. 1989, 61, 921A–923A. [Google Scholar] [CrossRef]
  36. Sullivan, B.; Kaszynski, A. PyVista: 3D Plotting and Mesh Analysis through a Streamlined Interface for the Visualization Toolkit (VTK). J. Open Source Softw. 2019, 4, 1450. [Google Scholar] [CrossRef]
Figure 1. Steel truss mesh model. In sequence: the original mesh, after 30% simplification, and 60% simplification. Note that the 30% and 60% results remain visually close to the original, illustrating that fixed low ratios can yield redundant LOD levels.
Figure 2. Steel truss mesh model. In sequence: the original mesh after 88% simplification, 90% simplification, 92% simplification, and 94% simplification.
Figure 3. Traction substation scene with a uniform 90% simplification ratio applied to each mesh. The ground and the models within the red box are oversimplified.
Figure 4. Schematic illustration of an edge collapse, where edge $e(p, q)$ is collapsed into vertex $v$.
Figure 5. Conceptual framework diagram.
Figure 6. Simplification rate–CECL curve (blue line). The yellow points are sampling points on the curve.
Figure 7. Smoothed logarithmic simplification rate–CECL curve and knee point (the red dot).
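Figure 7 combines two off-the-shelf ingredients: Savitzky–Golay smoothing of the log-scaled CECL values and Kneedle-style knee detection. The sketch below shows one way to wire them together with SciPy and the kneed package; the log base, window length, polynomial order, and curve-shape settings are assumptions for illustration, not the parameters reported in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter   # Savitzky–Golay smoothing
from kneed import KneeLocator            # Kneedle knee-point detection

def knee_of_log_cecl(rates, cecl, window_length=21, polyorder=3):
    """Smooth the log-scaled CECL curve and return the knee-point simplification rate."""
    log_cecl = np.log10(np.asarray(cecl, dtype=float) + 1e-12)   # guard against log(0)
    smoothed = savgol_filter(log_cecl, window_length=window_length, polyorder=polyorder)
    # curve/direction describe the expected shape of the smoothed curve; swap
    # "convex" for "concave" if the log scaling flattens the tail instead.
    locator = KneeLocator(rates, smoothed, curve="convex", direction="increasing")
    return locator.knee, smoothed

# Usage with the (rates, cecl) arrays from the previous sketch:
# knee_rate, smoothed = knee_of_log_cecl(rates, cecl)
```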
Figure 8. Filtered simplification rate–CECL curve. The dashed lines show the boundary of the CECL curve.
Figure 9. Normalized simplification rate–CECL curve.
Figure 10. Filtered simplification rate–CECL curve.
Figure 11. The steel truss model and its simplified versions: (a) Overall view (37,842 faces). (b) Local details.
Figure 12. Edge collapse process data of the steel truss model: (a) Smoothed logarithmic simplification rate–CECL curve and knee point. (b) Filtered simplification rate–CECL curve.
Figure 13. The power transformer model and its simplified versions: (a) Overall view (52,748 faces). (b) Local details.
Figure 14. Edge collapse process data of the power transformer model: (a) Smoothed logarithmic simplification rate–CECL curve and knee point. (b) Filtered simplification rate–CECL curve.
Figure 15. Visual comparison of mesh simplification results for the traction substation scene. (a) The original models with numbered labels. (bd) Simplified models generated by the proposed method at scene-level aggregate simplification rates of 77.8%, 88.1%, and 92.6%, respectively. (e) The result of uniform simplification at a 92.6% rate, showing significant geometric distortion compared to the proposed method. Detailed simplification data for the numbered models marked in (a) are provided in Table 1.
Figure 16. Edge collapse process data of the traction substation scene: (a) Smoothed logarithmic simplification rate–CECL curve and knee point. (b) Filtered simplification rate–CECL curve.
Figure 17. Visual comparison of mesh simplification results for the roadbed with slope scene. (a) The original models with numbered labels. (bd) Simplified models generated by the proposed method at scene-level aggregate simplification rates of 82.0%, 89.7%, and 93.1%, respectively. (e) The result of uniform simplification at a 93.1% rate, showing significant geometric distortion compared to the proposed method. Detailed simplification data for the numbered models marked in (a) are provided in Table 2.
Figure 18. Local detail of the slope model and its simplified version (Point C).
Figure 19. Edge collapse process data: (a) Smoothed logarithmic simplification rate–CECL curve and knee point. (b) Filtered simplification rate–CECL curve.
Table 1. Model-specific (allocated) face counts and simplification rates for representative models in the traction substation scene (Figure 15) at the three LOD levels (Points A, B, and C).
ID 1    Original Faces    Faces (A) 2    Faces (B)    Faces (C)    Rate (A) 3    Rate (B)    Rate (C)
1       37,842            11,346         5276         3288         70.0%         80.1%       87.3%
3       7228              5383           2613         1393         25.5%         63.8%       80.7%
4       193,260           24,009         12,582       7694         87.6%         93.5%       96.0%
5       476               312            210          163          34.5%         55.9%       65.8%
6       1742              1586           1512         1122         9.0%          13.2%       35.6%
7       52,748            16,514         8900         4922         68.7%         83.1%       90.7%
8       4588              2134           1231         856          53.5%         73.2%       81.3%
9       14,094            3534           1756         862          74.9%         87.5%       93.9%
10      11,422            2662           1091         510          76.7%         90.4%       95.5%
11      12                12             12           12           0.0%          0.0%        0.0%
12      92                92             92           92           0.0%          0.0%        0.0%
13      136               58             50           46           57.4%         63.2%       66.2%
1 The column ID corresponds to the numbers in the left subplot of Figure 15a. 2 Faces (A) indicate the face count of the simplified model at Point A, and similarly for (B) and (C). 3 Rate (A) indicates the simplification rate at Point A, and similarly for (B) and (C).
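For orientation, the scene-level aggregate rates quoted in Figure 15 can be read as face-weighted reductions over all meshes in the scene; that interpretation is an assumption here, and the snippet below is only a reading aid. Because Table 1 lists representative models rather than the full scene, the subset total does not reproduce the reported 92.6%.

```python
def aggregate_rate(original_faces, simplified_faces):
    """Face-weighted scene-level simplification rate:
    1 - (total simplified faces) / (total original faces)."""
    return 1.0 - sum(simplified_faces) / sum(original_faces)

# Rows with ID 4 and ID 7 from Table 1 at Point C (two-model subset, for illustration only).
original = [193_260, 52_748]
point_c = [7_694, 4_922]
print(f"{aggregate_rate(original, point_c):.1%}")   # ~94.9% for this subset
```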
Table 2. Model-specific (allocated) face counts and simplification rates for representative models in the roadbed with slope scene (Figure 17) at the three LOD levels (Points A, B, and C).
ID 1    Original Faces    Faces (A) 2    Faces (B)    Faces (C)    Rate (A) 3    Rate (B)    Rate (C)
1       3506              1177           933          765          66.4%         73.4%       78.2%
2       22,068            3497           2120         1583         84.1%         90.4%       92.8%
3       36                36             36           36           27.8%         27.8%       33.3%
4       1927              602            392          258          68.8%         79.7%       86.6%
5       36                36             36           36           16.7%         27.8%       33.3%
6       1884              465            303          221          75.3%         83.9%       88.2%
7       784               191            125          94           75.6%         84.1%       88.0%
8       3224              713            479          393          77.9%         85.1%       87.8%
9       22                22             22           22           0.0%          0.0%        0.0%
10      9864              5300           2716         1802         46.3%         72.5%       81.7%
11      36                36             36           36           0.0%          0.0%        0.0%
1 The column ID corresponds to the numbers in the left subplot of Figure 17a. 2 Faces (A) indicate the face count of the simplified model at Point A, and similarly for (B) and (C). 3 Rate (A) indicates the simplification rate at Point A, and similarly for (B) and (C).