Article

Semantic-Vertex-Based Topological Detection for Automatic Dimension Generation in Building Information Modeling (BIM) with Industry Foundation Classes (IFC)

Department of Architectural Engineering, Dankook University, Yongin-si 16890, Republic of Korea
Appl. Sci. 2026, 16(1), 139; https://doi.org/10.3390/app16010139
Submission received: 6 November 2025 / Revised: 3 December 2025 / Accepted: 13 December 2025 / Published: 22 December 2025

Abstract

In this study, a topological matching algorithm is introduced for semantic vertex detection to automate dimension generation in a building information modeling (BIM) environment based on the Industry Foundation Classes (IFC) standard. Conventional IFC-based quantity take-off (QTO) methods provide only standardized attributes, such as height, length, width, and area; therefore, user-defined custom dimensions, such as net opening sizes or parametric lengths, must be calculated manually. This study proposes a method for fully automating the generation of dimensions required by users by automatically tagging and visualizing semantic vertices for geometrically identical IFC objects. These semantic vertices correspond to representative topological feature points (e.g., left–bottom–origin, left–top–front, left–bottom–back, and right–bottom–front). Based on these defined semantic vertices, the method automatically establishes vertex correspondence among objects to generate dimensions. The proposed workflow comprises four main stages: (1) geometry normalization of IFC objects, (2) semantic vertex definition, (3) automatic detection of semantic vertices, and (4) dimension generation and visualization. The experimental results demonstrate that the proposed approach successfully enables the computation of dimensions for geometrically identical objects, thereby significantly improving the efficiency of QTO processes.

1. Introduction

Building information modeling (BIM) is a core technology that enables digitally integrated management and collaboration across all stages of design, construction, and operation. Moreover, BIM represents a new paradigm in the construction process and demands new answers to questions of data sharing and the definition of intellectual property rights [1]. The core of BIM lies in seamless data exchange among CAD systems across the architecture, engineering, and construction (AEC) domain, that is, interoperability among different software systems [2]. The Industry Foundation Classes (IFC) format was designed as an object-oriented data model for civil engineering construction that is truly interoperable between different software packages [2]. IFC is an open data exchange format that is well established in the BIM field worldwide. The IFC architecture is based on the structure of the STEP (Standard for the Exchange of Product Model Data) standard [3] and has been continuously developed by buildingSMART since the release of the first data model standard, IFC 1.0, in 1996. The IFC file format (“.ifc”) enables the exchange of data associated not only with the geometric properties of components, such as walls, beams, and columns, but also with heterogeneous attributes, including mechanical and physical properties, costs, and construction work time.
Although IFC is a powerful object-oriented data standard, in practice, models created on various software platforms are implemented differently, which leads to persistent problems in the quantity take-off (QTO), structural analysis, and energy simulation of IFC objects [4]. Consequently, many studies have reported that most practical IFC models are not directly suitable for case-based automation, indicating the need to revisit the fundamental goal of interoperability in IFC models [4].
From a QTO perspective, IFC objects rely on either the QTO defined in the IFC schema or a bounding-box estimation approach. The quantity extraction of IFC objects utilizes hierarchical entities, such as IfcElementQuantity and IfcQuantityLength, which provide basic information, including area, volume, and length, according to these schema rules [5].
The most general method for quantity estimation uses the outer bounding box (IfcBoundingBox) of an IFC object to obtain its overall width, length, and height [6,7]. This approach estimates the volume as “Bounding Box Volume = Lx × Ly × Lz” from the minimum and maximum vertex coordinates. However, it can lead to inaccuracies when reference points are not explicitly defined, and it cannot measure the internal parametric dimensions of an object. The existing literature identifies the following limitations in IFC-based QTO methods.
  • Information loss during IFC data conversion: When the original BIM model is converted into IFC format, certain geometric and semantic data may be lost, which can reduce the precision and completeness of the dimensional information [8,9].
  • Platform dependency on geometric representation: IFC objects can be represented in either parametric or mesh-based geometric formats, and even for the same object, the coordinate system of vertex indices may differ across software platforms. Due to the diversity and complexity of IFC representation methods, different software may interpret or process identical data inconsistently, which affects the reliability of automation and dimensional computation [10].
  • Lack of dimensional invariance: When an IFC object is rotated or scaled, it cannot be consistently recognized as the same object [11]. Specifically, once scale or rotational transformations occur, the vertex coordinates change, making it difficult to guarantee consistent dimensional values.
  • Limited explicit dimension representation: The IFC schema provides only standardized basic dimensions of objects, whereas user-defined dimensions require manual measurement tools [12]. Although IFC viewers allow users to select surfaces or edges to measure length or area, the IFC data do not explicitly provide a logical basis for such dimensional calculations.
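Before turning to these limitations, the baseline bounding-box estimation (Bounding Box Volume = Lx × Ly × Lz) can be sketched in a few lines of Python. The function name and the wall vertex list below are illustrative, not drawn from the IFC schema itself:

```python
def bounding_box_volume(vertices):
    """Estimate object volume from the axis-aligned bounding box
    (Lx * Ly * Lz), in the manner of IfcBoundingBox-style extraction."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    zs = [v[2] for v in vertices]
    lx = max(xs) - min(xs)
    ly = max(ys) - min(ys)
    lz = max(zs) - min(zs)
    return lx * ly * lz

# A hypothetical 2.0 m x 0.3 m x 3.0 m wall, given by its 8 corner points:
wall = [(0, 0, 0), (2, 0, 0), (2, 0.3, 0), (0, 0.3, 0),
        (0, 0, 3), (2, 0, 3), (2, 0.3, 3), (0, 0.3, 3)]
print(bounding_box_volume(wall))  # ~1.8 (cubic meters)
```

As the limitations above note, such a box captures only the overall extent; any internal or parametric dimension is invisible to this computation.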
In relation to this study, the IFC itself does not explicitly provide the practical dimensions required by users—a limitation that results in significant manual effort for all dimension calculation methods. Even when the dimensions are computed from the original BIM object, their consistency cannot be guaranteed after conversion to the IFC format or when transformed into other BIM-compatible objects. Previous studies have clearly described the problems of information loss and interoperability that occur when BIM is converted to IFC. The IFC-based BIM model does not incorporate all the necessary information, and data may be lost during IFC conversion [8,9]. Some useful information regarding the QTO process may not be transferred from the original BIM model to the IFC model [13]. The IFC format has raised concerns regarding its reliability in handling building materials and quantitative information for data transmission between various software applications [14].
BIM converted from original objects may contain internal errors, and the IFC format itself does not fully include all the information required to perform QTO [15]. Dedicated BIM software features (e.g., Solid Element Operations in ArchiCAD) are not fully compatible or may cause conflicts during IFC conversion, which can result in constraints on QTO automation. The IFC format remains limited to incorporating the necessary information for the BIM-based QTO approach [16]. Although IFC has been the primary exchange format used in BIM applications in recent years, information may be lost during each exchange process, eventually leading to additional workload for manual replenishment [17].
In recent industrial practice, a growing number of attempts have been made to perform geometric computations by converting IFC objects into meshes using open-source libraries such as IfcOpenShell, xBIM, and BlenderBIM [18,19]. Furthermore, the Information Delivery Specification (IDS) standard proposed by buildingSMART has established a foundation for mechanically verifying dimensional constraints required by clients (e.g., door width ≥ 900 mm). However, an IDS primarily focuses on dimension verification, and the methodological framework for extracting and generating such dimensions remains insufficient. In a study by Ellen van den Bersselaar, approximately 66% of the dimensional requirements were verified automatically [20]. Notably, several studies have explored automatic dimension annotation in CAD/BIM environments [21]. However, these efforts have mainly focused on 2D drawing-based systems, and very few studies have addressed the automatic computation of dimensions in IFC models based on semantic vertex principles.
The dimensions of architectural objects are not merely geometric lengths but also serve as core data across multiple applications, such as design verification, construction management, process control, cost estimation, and digital-twin integration.
As discussed previously, the IFC-based digital model still faces limitations, including information loss during IFC data conversion, platform dependency on geometric representation formats, lack of dimensional invariance, and restricted explicit dimension representation. Overcoming these issues requires a fundamentally new approach beyond existing methods.
This study identifies the semantic vertex as a critical starting point for addressing these challenges. The coordinate-based semantic vertices of IFC objects must be explored via automated detection processes, rather than manual operations.
Recent open-source frameworks, such as IfcOpenShell, xBIM, and BlenderBIM, have demonstrated the feasibility of handling the geometric and topological coordinates of IFC objects. Therefore, this study aims to enhance the interoperability and practical applicability of BIM models by proposing a system that automatically detects semantic vertices from the topological coordinates of IFC objects and directly utilizes them for dimension computation.

2. Research Objectives and Methodology

This study introduces semantic-vertex-based topological detection to automate the dimension generation of IFC objects. For vertex-based IFC models converted into points, edges, and faces via 3D platforms such as Blender, this study proposes a three-stage workflow consisting of (1) semantic vertex definition, (2) automatic vertex detection, and (3) dimension generation. The primary objectives of this study are as follows:
  • Automatic detection of semantic vertices: Identifying semantically meaningful reference points for each object (e.g., left–bottom–origin (L_B_O), right–bottom–front (R_B_F), and left–top–front (L_T_F)), assigning unique indices and labels, and visualizing them in a 3D environment using a dimension helper mesh (DHM) method.
  • Dimension generation: A system that automatically generates dimensions—such as length, width, height, and diameter—by calculating the distances between corresponding semantic vertices.
  • Dimension recording and export: The generated dimensional data are stored within IFC property structures or exported into formats such as Excel or JSON to improve interoperability and workflow continuity.
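The second and third objectives can be illustrated together with a minimal sketch, assuming the semantic vertices have already been detected. The coordinates, the label names beyond those defined in this study, and the JSON layout are illustrative assumptions:

```python
import json
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical semantic-vertex table for a wall object: label -> (x, y, z),
# using the labels defined in this study (e.g., L_B_O, R_B_F).
semantic_vertices = {
    "L_B_O": (0.0, 0.0, 0.0),   # left-bottom-origin (anchor)
    "R_B_F": (4.0, 0.0, 0.0),   # right-bottom-front
    "L_B_B": (0.0, 0.2, 0.0),   # left-bottom-back
    "L_T_F": (0.0, 0.0, 2.7),   # left-top-front
}

# Dimension generation: distances between paired semantic vertices.
dimensions = {
    "length": dist(semantic_vertices["L_B_O"], semantic_vertices["R_B_F"]),
    "width":  dist(semantic_vertices["L_B_O"], semantic_vertices["L_B_B"]),
    "height": dist(semantic_vertices["L_B_O"], semantic_vertices["L_T_F"]),
}

# Dimension recording and export: serialize for downstream QTO workflows.
print(json.dumps(dimensions, indent=2))
```

The same dictionary could equally be written into an IFC property set or an Excel sheet; JSON is used here only as the simplest interoperable target.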
Ultimately, this study achieves generative dimensions for 3D objects defined by semantic vertices through geometric normalization-based conversion of IFC objects. This suggests practical implementation and expansion potential of automatic dimension generation for various BIM-based applications in the future, including digital twins, construction inspection, automated QTO, and AR-based construction management.
Accordingly, the experimental work of this study converts IFC-based BIM objects into 3D objects with topology-based semantic vertices. First, the concept of the semantic vertex was defined; second, an experimental dataset of object models was constructed; third, an algorithm for semantic vertex detection was designed; and fourth, application and verification were carried out. The methodological framework of this study is as follows:
(1) Concept definition stage: IFC objects were converted into topology-based objects, and semantic vertices were defined for 3D models.
(2) Object data construction stage: Wall and column objects were selected as IFC samples and then converted into mesh- and vertex-based geometric forms to construct an experimental dataset (IFC objects generated using BlenderBIM were converted into 3D objects with geometric topology).
(3) Algorithm design stage: Using the topological information of the objects, an automatic semantic vertex detection algorithm was designed, and a system for generating dimension helper meshes (DHMs) for detected vertices was established. Next, an automated system was designed to compute major dimensions—such as length, width, and height—based on DHM pairing relationships (implemented with Blender Python).
(4) Application and validation stage: The proposed method was applied to sample 3D objects to verify the automation, accuracy, and consistency of dimension generation. Practical applicability was also validated via case study experiments.

3. Previous Research and Status

3.1. Current Status of IFC-Based QTO in the Construction Industry

In the construction field, QTO based on IFC has been a continuous research focus aimed at enhancing the practical applicability of BIM.
Akanbi proposed an algorithm that consistently derives and standardizes QTO extraction rules using a data-driven reverse engineering approach [22]. This study originated from the recognition that, even with identical IFC models, the calculated quantities may vary depending on how the extraction algorithm is applied.
In a follow-up study, Akanbi developed a Python-based program that automatically extracts quantity information, such as area and volume, from IFC structures and applied it to a real residential development BIM project [5]. The results demonstrated accurate quantity estimation for major building elements, such as walls and slabs, proving superiority over manual methods in terms of speed and consistency.
The applicability of QTO automation in the infrastructure domain has also been explored. A Norwegian road project achieved approximately 40% automation of the cost items, whereas the remaining 60% required manual input [23]. This study utilized the basic quantity data provided in the IFC2x3 format and introduced a natural-language-based ontology classification technique.
Zhang, S. [24] proposed a semantic enrichment approach to enhance the accuracy and reliability of QTO. By targeting a pumped-storage hydropower project, this study adds semantic information, such as cost, materials, and workload, to an IFC-based BIM model to automate cost estimation and improve accuracy. The experiment confirmed that semantic-based enrichment reduces quantity estimation errors (accuracy improvement through explicit information enhancement), minimizes manual data entry via semantic queries, and demonstrates the potential for seamless integration with BIM data workflows.
A study linking Brazil’s national cost database (SINAPI: National System of Costs and Indices for Civil Construction) with IFC data proposed an automated workflow using Python scripts to map IFC elements to QTO requirements [25]. The study offers a practical framework for enhancing the reliability and consistency of cost estimations by linking national standards with IFC-based data.
Isatto (2020) explored the possibility of integrating process-based cost modeling into IFC using the IFC4x2 schema [26]. By applying the EXPRESS notation, the study demonstrated the interoperability between BIM and cost models using a Python prototype built on IfcOpenShell and suggested the potential to manage complex processes and cost estimations directly within IFC environments. This study is recognized as one of the first attempts to represent cost information interoperability in an open and standardized manner by linking BIM technology and cost modeling.
Recently, QTO research has evolved toward relationship-centered data processing and intelligent querying. Cypher4BIM converts IFC data into a graph-based model (labeled property graph) that enables complex spatial relationships and object constraints to be queried [27]. Its distinctive feature lies in a customized graph query language that can retrieve not only single objects but also complex relations, such as spatial structures, boundaries, and accessibility, enabling semantic data utilization in smart construction applications, such as digital twins.
Furthermore, Iranmanesh (2025) proposed a system that combined large language models (LLMs) with Graph-RAG techniques to allow intuitive natural language queries over IFC data [28]. This approach contextually interprets the complex hierarchical structure of IFC, enabling nontechnical users to intuitively interact with BIM data, which is a significant advancement in accessibility and usability.
In summary, existing studies demonstrate a clear progression:
  • Identifying algorithmic inconsistencies and standardization efforts [22].
  • Practical validation through residential and infrastructure projects [5,23].
  • Improving accuracy and reliability via semantic enrichment and national standard mapping [24,25].
  • Extending to process-based cost modeling [26].
  • Advancing toward graph-based and LLM-driven intelligent querying [27,28].
This trajectory demonstrates that QTO research has evolved beyond mere automation, progressing toward the integration of BIM and cost models, and the development of next-generation construction information management systems via data semanticization and intelligent interfaces.
However, despite significant progress in IFC-based QTO and semantic enrichment, existing approaches primarily address the semantic annotation of objects and attributes rather than their geometric–topological features. None of the reviewed studies provide a consistent framework for defining or detecting topology-based semantic vertices capable of enabling invariant, automated dimension generation. This unresolved issue forms the central research gap addressed in this study.

3.2. Limitations of IFC-Based QTO

Based on this gap, it is necessary to assess how current IFC-based QTO tools perform in practice. In recent years, various software solutions utilizing IFC have been increasingly adopted by the construction industry for QTO. PriMus IFC supports a fully IFC-native environment, providing automatic identification and template-based QTO functions that allow the Bill of Quantities (BoQ) to be updated automatically [29]. Additionally, PriMus TAKE-OFF supports derivative quantity calculations using both 2D CAD and raster-based data [29].
Autodesk Navisworks offers IFC-compatible quantification features and integrates 4D simulations and clash detection functions, thereby enhancing its utility in project management [30]. Kreo BIM Take-off provides AI-driven automated QTO within a cloud-based environment that operates exclusively using IFC data [31]. BEXEL Manager supports IFC-based QTO and 5D BIM, offering extended interoperability with classification systems and cost databases [32].
Furthermore, RIB CostX and Presto are widely used as hybrid QTO software that combine 2D drawings with BIM models [33]. Cadwork, which specializes in timber construction, provides an IFC-compatible platform that supports an integrated BIM workflow from 3D to 6D [34]. The open-source tool BlenderBIM enables the editing and visualization of IFC models, allowing users to freely modify and export property and geometric data [35]. In particular, BlenderBIM can convert IFC objects into points, lines, and faces and visually represent dimensions and annotations using add-ons such as MeasureIt and MeasureIt_ARCH. However, these features remain limited to visual representation because dimensional data are not recorded in the native IFC schema, thereby restricting external data export [35].
Accordingly, based on the current status of IFC-based QTO, the clear limitations can be summarized as follows.
First, there is a schema conversion loss problem. During the transformation from IFC2x3 to IFC4, quantity information (QTO data) may be omitted or misinterpreted, resulting in inconsistencies [36]. Additionally, when exporting IFC models from BIM authoring tools, such as Revit, ArchiCAD, or Tekla, quantity-related properties are often corrupted or treated as nonstandard extended attributes [37].
Second, there are clear limitations in handling nonstandard or irregular objects. For instance, complex geometries, such as freeform roofs or curved walls, are processed differently across software tools (e.g., mesh-based vs. mathematically profiled approaches). Consequently, when the same IFC model is analyzed using Navisworks and PriMus IFC, discrepancies of approximately 1–3% may occur because of variations in decimal precision or opening recognition [15].
Third, these issues ultimately lead to a decline in the reliability of QTO results. Current industrial IFC-based QTO software can achieve relatively high accuracy for standard objects with well-defined attributes; however, errors still occur depending on the modeling quality, IFC export processes, and differences in software interpretation methods [9]. Therefore, in practice, compliance with buildingSMART IDS standards and cross-validation using multiple tools are essential to ensure reliable QTO results [13].
A more fundamental limitation is that meaningful dimensional information is not explicitly recorded within IFC objects. Currently, most industrial tools only display dimensions visually on a model rather than storing them as IFC property data (property sets) that can be used for subsequent verification, automated reporting, or cost estimation [8]. This represents a clear disconnection between “visualization” and “validation-ready IFC data”.
In particular, IFC-based QTO focuses primarily on aggregate quantity calculations, such as area and volume, whereas parametric design dimensions, including door width, column diameter, and opening interval, lack an open reference implementation that can consistently extract, record, and verify such values regardless of object representation. Beyond standardized quantity definitions, additional effort is required to calculate nonstandard dimensions, which remains one of the most fundamental limitations of the current IFC framework.

Even outside the IFC ecosystem, automated dimensioning technologies have been explored using BIM authoring tools. For example, Autodesk Revit holds a patent for automatic dimensioning that provides a rule-based algorithm that recognizes the geometric features of model objects to automatically place dimension lines and annotations [38]. However, this mechanism depends on Revit’s internal coordinate system and view-dependent rules; reference adjustments are required when objects are rotated or scaled. Furthermore, this dimensioning functionality is confined to the proprietary Revit ecosystem. Although integration with buildingSMART IDS/EIR frameworks is theoretically possible, support within the broader open IFC ecosystem remains limited. Therefore, a system is required that converts IFC objects into a topological structure and utilizes it for semantic vertex detection and automatic dimension generation as a structured data framework.

4. Semantic Vertex and Dimension Generation

4.1. Semantic Vertex

In the IFC data structure, the geometric vertex of an object is represented through the IfcCartesianPoint entity. This implies that a vertex in IFC merely carries geometric coordinate information and does not include any functional, topological, or structural meaning. The representation scheme of IFC primarily focuses on geometric form, without defining semantic tags or roles, such as whether a vertex represents the “left–bottom–origin” corner, the “top–center,” or an “edge of a reference plane.” Therefore, even when vertex data are extracted from an IFC file, they exist only as coordinate arrays without providing any information regarding the functional meaning of those coordinates.
To overcome this limitation of IFC’s lack of semantic vertices, this study formally proposes the concept of semantic-vertex-based topological detection, which extends the conventional notion of a geometric vertex. A semantic vertex is defined as a specific reference point within a BIM object determined by considering its geometric extremities, topological relationships, and functional references. It expands the simple coordinate point (IfcCartesianPoint) into a semantic reference point that explicitly expresses an object’s orientation, location basis, and design intent.
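A minimal sketch of what such a semantic reference point might look like as a data record is given below. This is a hypothetical structure used for illustration, not part of the IFC schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticVertex:
    """A geometric vertex extended with a semantic role label.

    Plain IFC vertices (IfcCartesianPoint) carry only coordinates;
    this record adds the topological/functional tag proposed in the
    text (e.g., 'L_B_O' for left-bottom-origin)."""
    x: float
    y: float
    z: float
    label: str  # e.g., "L_B_O", "R_B_F", "L_T_F"

origin = SemanticVertex(0.0, 0.0, 0.0, "L_B_O")
print(origin.label)  # "L_B_O"
```

The point of the record is that the label, not the raw coordinates, is what downstream dimension generation matches against, so the reference survives translation or rotation of the object.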
In a previous study, Zhang (2015) proposed a data representation framework for Interlinking Building Geometry, Topology, and Semantics [39]. The study emphasizes that geometric elements in IFC models (e.g., points, lines, and faces) should not be treated as mere coordinate data, but rather as semantically interpretable units by assigning semantic attributes to each geometric element. Hence, Zhang’s research opened the possibility of directly linking fundamental geometric units, such as vertices, edges, and faces, to semantic labels [39].
Building on this foundation, this study introduces the concept of a user-defined semantic vertex (UDSV), which is a reference vertex systematically defined based on geometric and functional criteria, representing the feature-defining point of an object’s form. For example, most architectural components, except for purely cylindrical or spherical elements, have distinct front and back sides. In particular, they inherently include reference vertices such as left–bottom–origin (L_B_O) or right–bottom–front (R_B_F). Major architectural elements such as walls, windows, openings, beams, columns, and slabs have well-defined geometric structures according to their functional orientation (top, bottom, left, and right). If these structurally directional vertices are semantically tagged, then the object gains a semantic coordinate system that enables a functionally meaningful geometric interpretation.
Most commercial BIM software and IFC-based tools can detect vertex information; however, they do not store the vertex coordinates as part of the object’s property data. In IFC files, vertex data are embedded only within geometric representation entities and not as semantically anchored reference points. Therefore, in this study, Blender, an open-source 3D design platform, is employed to enable the creation and management of semantic vertices. Among the tools capable of directly extracting or redefining vertex coordinates from IFC objects, such as Blender (BlenderBIM), xBIM, FZK Viewer, Rhino, Unity, and Unreal Engine, Blender provides the highest degrees of freedom and extensibility, making it the most suitable environment for experimentally implementing UDSVs.
Figure 1 illustrates an example of user-defined semantic vertices applied to an object converted into a topology-based structure from the IFC format. Table 1 provides descriptions of the semantic vertices defined for the window and wall objects. These defined semantic vertices form the basis for the automatic vertex search algorithm presented in Section 5 and Section 6, which automatically identifies dimension-defining vertices (e.g., length, width, and height) to facilitate automated dimension generation. If a dimensional reference point required for object geometry is needed, a helper mesh (HM) can be generated at the detected semantic vertex coordinates. The HM acts as an auxiliary object hierarchically (as a child) linked to the main object, providing an invariant reference point for dimension measurement.

4.2. Automatic Search of Semantic Vertices and Dimension Generation

Automatic detection of semantic vertices and subsequent dimension generation constitute the core focus of this study. If the vertices of an object within a BIM model can be accurately identified, then the dimensions of all the objects can be effectively determined. Once semantic vertices are recognized, dimension generation becomes a straightforward task of connecting those points with lines, as the definition of dimensions is inherently determined at the stage of semantic vertex identification.
Conventional IFC-based QTO methods are suitable for calculating aggregated quantities such as area or volume. However, they are insufficient for parametric design dimensions, such as the position of a window, the diameter of a column with a capital, or the dimensions of a floor with an opening. In practice, subtractive areas and parametric measurements of architectural objects rely heavily on manual operations.
To overcome the limitations of standardized aggregate quantity extraction, this paper proposes a topology-based method for automatically detecting and defining semantic vertices based on the geometric and topological configurations of an object. Consequently, an object represented by semantic vertices enables consistent, platform-independent dimensional computations.
For example, a geometrically normalized column consists of a set of vertex coordinates and faces. Within this vertex set, a geometric–topological structure is established in which the left–bottom–origin (L_B_O) vertex is defined as the reference point; based on this anchor, additional vertices such as left–bottom–back (L_B_B), left–top–front (L_T_F), and right–bottom–front (R_B_F) are identified. Through this search–matching process, the primary dimensions of an object—its length, width, and height—can be automatically derived.
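For an axis-aligned, box-like object, this anchor-based matching can be sketched as follows. The vertex set, the function name, and the assumption that “back” corresponds to +Y are illustrative:

```python
def detect_box_semantic_vertices(verts, tol=1e-6):
    """Sketch: for a normalized, axis-aligned, box-like vertex set,
    anchor at the left-bottom-origin (L_B_O) vertex and match the
    neighbors that differ from it along exactly one axis."""
    # L_B_O: the lexicographically smallest (x, y, z) corner.
    lbo = min(verts)

    def match(axis):
        # Candidates share the other two coordinates with L_B_O
        # and lie strictly beyond it along the given axis.
        cands = [v for v in verts
                 if all(abs(v[a] - lbo[a]) < tol for a in range(3) if a != axis)
                 and v[axis] > lbo[axis] + tol]
        return max(cands, key=lambda v: v[axis])

    return {"L_B_O": lbo,
            "R_B_F": match(0),   # right-bottom-front (+X)
            "L_B_B": match(1),   # left-bottom-back  (+Y, assumed)
            "L_T_F": match(2)}   # left-top-front    (+Z)

# Hypothetical 0.4 x 0.4 x 3.0 column as its 8 corner vertices:
column = [(x, y, z) for x in (0.0, 0.4) for y in (0.0, 0.4) for z in (0.0, 3.0)]
sv = detect_box_semantic_vertices(column)
length = sv["R_B_F"][0] - sv["L_B_O"][0]   # 0.4
width  = sv["L_B_B"][1] - sv["L_B_O"][1]   # 0.4
height = sv["L_T_F"][2] - sv["L_B_O"][2]   # 3.0
```

Real IFC geometry is rarely this clean, which is precisely why the two detection principles below are needed for non-orthogonal and curved objects.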
The fundamental dimensions of an object are obtained by detecting the extreme vertices located along its outer boundary. In other words, the outermost bounding box of the object represents its approximate size. However, when the geometry of an object is complex, particularly when circular or curved surfaces are included, the number of vertices increases substantially, rendering automatic detection more challenging. To address this issue, this study introduces two fundamental search principles that can be applied individually or in combination, depending on the geometric characteristics of the object. These principles provide a robust foundation for the automatic discovery of semantic vertices and the subsequent generation of dimensions across diverse BIM objects.
Fundamental Principles
  • Common method: Definition of semantic vertices based on the outermost rectangular bounding box.
  • First method: Vector-based vertex search from reference points.
  • Second method: Axis-based scanning along the X-, Y-, and Z-axes using a rectangular bounding box (detecting tangential intersections between the scanning plane and the object surface and identifying the endpoints of these tangents as semantic vertices).
The first method, referred to as the vector-based search method, geometrically identifies the corresponding vertices using directional vectors originating from a predefined reference point. In contrast, the second method, the scanning-based search approach, divides the object into discrete intervals along its principal axes and generates scanning planes by referencing the vertex coordinates located within each interval. When a scanning plane intersects the surfaces of the object, the resulting vertices or tangential lines are detected. The endpoints of these vertices or tangents are then recognized as candidate semantic vertices.
Although these two methods employ different strategies for vertex detection, they share a common initial step: measuring and constructing an outermost rectangular bounding box that defines the geometric extent of an object. This bounding box provides a global coordinate framework for both approaches, ensuring consistency and robustness in the semantic vertex identification process.
The two detection methods are complementary; they can be applied selectively or jointly, depending on the geometric characteristics of the target object. The vector-based approach is more efficient for geometrically regular and orthogonal structures, whereas the scanning-based approach is more suitable for complex and curved geometries. The detailed methodologies for each approach are presented in Section 5 and Section 6.
Figure 2 presents the conceptual workflow of IFC object transformation and the complete dimension generation process.
In Figure 2, Steps 1 and 2 correspond to the import procedure through which IFC objects are transferred into the semantic vertex analysis coordinate space of the BIM software. For example, the IFC Importer in Blender interprets hierarchical IfcLocalPlacement structures—such as IfcAxis2Placement3D, IfcDirection, and IfcCartesianPoint—and reconstructs each object by combining positional, directional, and scale information. All IFC elements are restored as mesh objects with absolute coordinates within the 3D scene through this process. Boolean-based geometric definitions, such as IfcBooleanClippingResult, Subtraction, and Void, are also resolved through the CSG pipeline and converted into actual solid geometry. Blender then performs triangulation, face-normal evaluation, and vertex buffer generation and ultimately registers the reconstructed geometry as a mesh located in Global XYZ Scene Space.
In other words, Steps 1 and 2 constitute a data-loading and visualization pipeline. This process reconstructs IFC geometry for display and interaction but does not establish the analytical coordinate space required for semantic vertex detection. Step 3 addresses platform-dependent mesh representation. Since triangular mesh forms vary across 3D platforms—and vertices may be added, removed, or duplicated during mesh generation—some vertices do not contribute to the essential topology of the object. Therefore, the imported IFC object must be normalized into a minimal mesh and minimal vertex set. Step 4 marks the first stage of semantic vertexization, in which approximate dimensions are identified via bounding-box evaluation. To perform this, the IFC model must first be aligned to a user-defined world coordinate frame. Although imported IFC objects are placed according to design intent, they do not inherently provide a consistent spatial reference for analysis. The Normalized World coordinate frame is, therefore, not a software-generated coordinate system, but a user-oriented alignment space established specifically for semantic vertex detection. Following common architectural convention—length (X), width (Y), height (Z)—this coordinate realignment standardizes object orientation so that dimensions and semantic vertices can be extracted consistently, independent of the original IFC placement or rotation.
In summary, IFC import and mesh reconstruction perform the pre-processing of raw geometric data, whereas alignment into the Normalized World provides the analytical coordinate environment required for consistent semantic vertex extraction and automatic dimension generation. Steps 4 to 7 represent the process of semantic vertex detection, dimension generation, and export. Among these, Steps 4–6 constitute the core technical contribution of this study. Once an IFC object is normalized and aligned, semantic vertices can be automatically identified, and dimension generation becomes a straightforward computation of distances between those points; thus, dimension definition is inherently established at the semantic vertex detection stage.
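The normalization into a minimal mesh and minimal vertex set (Step 3) can be illustrated with a short Python sketch. The mesh representation (coordinate tuples plus index triples), the tolerance value, and the function name are illustrative assumptions, not the implementation used in this study; the sketch merely shows the idea of merging duplicated vertices and discarding degenerate faces introduced by platform-dependent triangulation.

```python
def normalize_mesh(vertices, faces, tol=1e-6):
    """Merge near-duplicate vertices (within tol) and drop degenerate faces.

    vertices: list of (x, y, z) tuples; faces: list of vertex-index tuples.
    Returns a minimal vertex list and a re-indexed face list.
    """
    key_to_new = {}          # quantized coordinate -> new vertex index
    new_vertices = []
    old_to_new = []
    for v in vertices:
        key = tuple(round(c / tol) for c in v)
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append(v)           # keep first representative
        old_to_new.append(key_to_new[key])
    new_faces = []
    for f in faces:
        g = tuple(old_to_new[i] for i in f)
        if len(set(g)) == len(g):            # drop collapsed (degenerate) faces
            new_faces.append(g)
    return new_vertices, new_faces
```

A vertex lying within the tolerance of an existing one is folded into it, so the downstream semantic vertex search operates on a stable, minimal vertex set regardless of how the importing platform triangulated the object.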

5. Vector-Rules-Based Semantic Vertex Detection

5.1. Definition of Vector-Based Exploration

The vector-rules-based exploration method is an algorithm designed to automatically identify semantically meaningful vertices of a 3D object based on its geometric features, starting from a defined reference point. This method initially calculates the axis-aligned bounding box (AABB), i.e., the outermost rectangular bounding box, from the object’s complete vertex set to determine the first-order semantic vertices. Subsequently, using one of these as a reference point, it progressively searches for second-order semantic vertices along specific vector directions. Each detected semantic vertex is assigned a unique index to ensure that it serves as an absolute reference within the object. Figure 3 illustrates a conceptual diagram of each procedure of the reference-point vector-rules-based exploration method, using a cube with a rectangular opening as the target object.

5.2. Vector Rules: Purpose and Composition Principle

The vector-rules-based semantic vertex exploration method is designed to reliably identify meaningful reference vertices in objects exhibiting regular geometric configurations, such as orthogonal or planar forms. By deriving vertices through directional vector relationships rather than manual annotation, this method enables the consistent and automated calculation of primary dimensions, including height, length, width, thickness, and opening size. This ensures reproducible and platform-independent dimensional information, regardless of object orientation or transformation within the 3D environment.
The vector-based semantic vertex exploration method comprises three main stages.
  • Primary exploration—definition of basic semantic vertices using the outer bounding box: From all vertices of the object, the minimum and maximum values along the X-, Y-, and Z-axes are calculated to generate an AABB. The eight corner points of this bounding box are used to define the first-order semantic vertices: left–bottom–origin (L_B_O), left–bottom–back (L_B_B), right–bottom–front (R_B_F), right–bottom–back (R_B_B), left–top–front (L_T_F), left–top–back (L_T_B), right–top–front (R_T_F), and right–top–back (R_T_B) (Figure 3. (2) First semantic vertex detection).
    Each semantic vertex identified through this process is assigned a unique index corresponding to its original vertex in the object, and it is visualized as a semantic vertex label and a dimension helper mesh (DHM).
    The semantic vertex label serves as a visual marker, whereas the DHM provides a functional line structure for measuring distances between vertices to generate dimension lines.
  • Secondary exploration—internal semantic vertex detection based on vector directions: Using the first-order bounding box as a reference, the algorithm searches for internal vertices along specific vector directions (e.g., ±X, ±Y, and ±Z). The search was conducted according to the following vector-based rules (Figure 3. (3) Second stage of semantic vertex detection):
    Reference point: A previously defined semantic vertex (e.g., L_B_O).
    Search axis: A specified vector direction (e.g., X-, Y-, or Z-axes).
    Primary selection condition: The scalar distance between the reference point and candidate vertices along the search axis (e.g., I_L_B_O).
    Secondary selection condition: The scalar distance between a previously found semantic vertex (e.g., I_L_B_O) and new candidate vertices along the same axis.
    Deterministic rule: When multiple candidates meet the same conditions, the vertex or edge with the smallest or largest coordinate value is selected.
    For example, the inner bottom reference point of an opening (I_L_B_O) is determined by selecting the vertex closest to L_B_O along the X-axis direction (primary selection).
    Using this principle, detailed semantic vertices required for calculating specific dimensions, such as inner width (IW) and inner height (IH), can be automatically defined.
  • Labels and DHM of semantic vertex
    Each detected semantic vertex is assigned a Unique Vertex Index (UVI) that remains invariant within the object. This index encapsulates not only the spatial coordinates of the vertex but also its semantic identity. The semantic vertex label serves as a visual indicator, while the DHM is generated at each semantic vertex as an object containing positional coordinates for measuring distances between vertices.

5.3. Logic of Vector-Rule-Based Detection

The vector-rule-based semantic vertex detection and dimension helper mesh (DHM) generation are performed through the following three stages.
  • Primary detection: AABB-based semantic vertex generation
The first-stage semantic vertex set is defined as
S_1 = {L_B_O, L_B_B, R_B_F, R_B_B, L_T_F, L_T_B, R_T_F, R_T_B}
Each semantic vertex corresponds to the AABB extremal coordinates:
L_B_O = (X_min, Y_min, Z_min),  R_B_F = (X_max, Y_min, Z_min)
L_T_F = (X_min, Y_min, Z_max),  R_T_F = (X_max, Y_min, Z_max)
L_B_B = (X_min, Y_max, Z_min),  R_B_B = (X_max, Y_max, Z_min)
L_T_B = (X_min, Y_max, Z_max),  R_T_B = (X_max, Y_max, Z_max)
Each vertex is assigned a unique semantic identifier:
UVI(S_1) = {1, 2, …, |S_1|}
A DHM anchor is instantiated at every coordinate to serve as a measurable reference:
DHM(s_i)  ∀ s_i ∈ S_1
2. Secondary detection: vector-rule semantic refinement
Detection parameters:
axis ∈ {X, Y, Z},  mode ∈ {min, max}
Directional unit vectors used for projection:
ê_X = (1, 0, 0),  ê_Y = (0, 1, 0),  ê_Z = (0, 0, 1)
A formal semantic search operator is defined as
Φ(ref, axis, mode) = arg mode_{v ∈ V} ((v − ref) · ê_axis)
Selection rules:
(1) Primary: maximize/minimize the scalar projection along the chosen axis;
(2) If multiple candidates satisfy rule (1), select the global extremum along the same axis;
(3) Guaranteed uniqueness: ∃! s = Φ(ref, axis, mode).
Example:
I_L_B_O = Φ(L_B_O, X, min)
Secondary semantic vertex set:
S_2 = {Φ(s, axis, mode) | s ∈ S_1}
Thus, the second-order refined vertices become
{I_L_B_O, I_L_B_B, I_R_B_F, I_R_B_B, I_L_T_F, I_L_T_B, I_R_T_F, I_R_T_B}
3. UVI and DHM registration
Each resolved vertex generates a DHM anchor retaining an absolute world-coordinate reference:
DHM(s_i) = {index = i, position = s_i}
The Euclidean distance between semantic vertices defines measurable BIM dimensions:
d(s_i, s_j) = ‖s_i − s_j‖_2  ⇒  automatic parametric dimension generation
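The two detection stages formalized above can be sketched in Python as follows. The mesh is assumed to be a plain list of (x, y, z) vertex tuples; for the search operator Φ, the sketch additionally assumes that candidates lie strictly beyond the reference point in the chosen axis direction (so that "min" mode returns the nearest interior vertex, as in the I_L_B_O example). Both assumptions are one plausible reading of the rules, not the exact implementation used in this study.

```python
AXIS = {"X": 0, "Y": 1, "Z": 2}

def aabb_corners(vertices):
    """First-stage semantic vertices: the eight corners of the AABB (set S_1)."""
    lo = [min(v[i] for v in vertices) for i in range(3)]
    hi = [max(v[i] for v in vertices) for i in range(3)]
    return {
        "L_B_O": (lo[0], lo[1], lo[2]), "R_B_F": (hi[0], lo[1], lo[2]),
        "L_T_F": (lo[0], lo[1], hi[2]), "R_T_F": (hi[0], lo[1], hi[2]),
        "L_B_B": (lo[0], hi[1], lo[2]), "R_B_B": (hi[0], hi[1], lo[2]),
        "L_T_B": (lo[0], hi[1], hi[2]), "R_T_B": (hi[0], hi[1], hi[2]),
    }

def phi(vertices, ref, axis, mode):
    """Search operator Φ(ref, axis, mode): extremal scalar projection of
    (v − ref) onto the chosen axis, restricted (by assumption) to vertices
    with a strictly positive projection from the reference point."""
    i = AXIS[axis]
    candidates = [v for v in vertices if v[i] - ref[i] > 0]
    pick = min if mode == "min" else max
    return pick(candidates, key=lambda v: v[i] - ref[i])
```

For a unit cube with one interior opening vertex at (0.2, 0, 0), `phi(vertices, L_B_O, "X", "min")` returns that interior vertex, mirroring the I_L_B_O example above.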

6. Scanning Rules-Based Semantic Vertex Detection

6.1. Definition of Scanning-Based Exploration

The scanning-based semantic vertex exploration method identifies semantic vertices by selecting one or more principal axes of a 3D object, dividing the chosen axis into continuous intervals, and analyzing the intersection vertices or tangency lines generated between the slicing plane and the object’s surface. This method enables the quantitative detection of geometric transition areas, inflection zones, and extremal points regardless of the object’s geometric complexity.
Figure 4 illustrates the conceptual diagram of the scanning-based semantic vertex exploration applied to a 3D cube object featuring a cylindrical opening at its center. For the cube in Figure 4, the first bounding-box-based semantic vertex detection is applied as a common procedure. In the second stage, the scanning-based exploration method is performed, where changes in the number of tangency segments generated by scanning planes are detected and evaluated. Among the three tangency candidates, the one whose Y-value is neither the maximum nor the minimum is selected as the final semantic vertex candidate. Figure 4 assumes objects in which the vertices of a circular opening are precisely aligned with the predefined scanning planes. However, when a scanning plane merely touches the circular opening tangentially—resulting in a planar contact rather than a sectional intersection—tangent-based detection becomes infeasible. In such cases, the vector-based exploration method provides a valid and effective alternative.

6.2. Scanning Rules: Purpose and Composition Principle

The purpose of this approach is to automatically detect semantic vertices based on actual internal geometric transitions rather than relying solely on boundary outlines or bounding-box information. The method defines semantic partitions of the shape and identifies meaningful vertices in real time by evaluating coordinate variations and local geometric transitions at each interval.
A key advantage of this method is its ability to extract accurate and consistent reference points even in non-uniform cross-sections, asymmetric configurations, or curved surface junctions. A scanning plane moves incrementally along the selected axis, and the intersection vertices between the plane and the mesh are computed. These intersection results yield parallel or vertical line segments, which are clustered based on spatial position, segment length, and overlap ratio.
The detected parallel or vertical segments are converged into tangency indicators, forming the stability parameter K for semantic vertex detection. K is not a fixed constant, but a dynamically determined value that varies according to the geometric discontinuities or structural inflection zones present in the object.
For instance, in a round column with a capital, the geometry consists of two distinct structural regions (capital and shaft). Therefore, an additional vertical intersection line (K + 1) is detected at their transitional boundary.
In practical implementation, one or more axes are selected as scanning directions based on the object’s bounding-box domain. The selected axis is discretized into multiple scan intervals according to a specified resolution (e.g., 1/10–1/1000 of the object length) (Figure 4. (3) Second-stage semantic vertex detection).
At each interval, minimum and maximum slicing planes are generated, and vertices located within that interval are collected as candidate subsets.
Slicing planes are then constructed for each candidate set, and tangency-based intersection counts K are measured between the slicing plane and the object surface.
Only minimum, maximum, and internal vertices are used for interval scanning in order to prevent infinite computational growth inherent in fully continuous slicing. Tangency evaluation is restricted to intersections between scanning planes and mesh faces exclusively. Intersections between scanning planes and edges or inferred boundary lines may generate duplicated tangency counts; therefore, such conditions are excluded to avoid errors originating from differing mesh-edge definitions or modeling practices.
Tangency results within each interval may yield one or multiple candidates. When the k-value remains constant across consecutive intervals, that region is interpreted as a geometrically uniform zone with no semantic transition; therefore, no semantic vertex is assigned. The following describes the eight-step procedure of the scanning-based semantic vertex exploration method.
(1) Select the scanning axis or axes of the object;
(2) Subdivide the axis into scan intervals (geometry-based resolution), with the scanning range defined slightly larger than the object;
(3) Collect minimum/maximum points and internal vertex candidates per interval;
(4) Generate slicing planes based on the candidate sets;
(5) Evaluate tangency changes and detect transition patterns;
(6) Select valid tangency indicators relevant to dimension measurement;
(7) Confirm the endpoints of the final tangency lines as semantic vertices;
(8) Generate dimension helper meshes (DHMs) at the detected semantic vertices.

6.3. Logic of Scanning-Based Detection

The logical procedure of the scanning-based semantic vertex exploration and DHM generation is defined as follows. This method detects geometric transition zones by examining changes in the number and pattern of tangency segments along a chosen scanning axis.
  • Definition of scanning variables
The scanning axis is first selected from the three principal directions of the object.
axis ∈ {X, Y, Z}
The scanning range corresponds to the bounding-box limits along the selected axis.
[axis_min, axis_max]
The axis range is then divided into N scanning intervals.
N ∈ ℕ
The interval length is computed as the scanning step size.
Δ = (axis_max − axis_min) / N
2. Interval construction
Each scan interval is defined by its start and end values.
I_i = [s_i, e_i],  s_i = axis_min + (i − 1)Δ,  e_i = axis_min + iΔ,  i = 1, …, N
Two slicing planes are generated at the beginning and end of each interval.
P_i^min: axis = s_i,  P_i^max: axis = e_i
3. Candidate vertex selection
Let V be the full vertex set of the mesh.
Vertices included in the i-th interval are selected as
V_i = {v ∈ V | s_i ≤ v_axis ≤ e_i}
Internal slicing planes are then generated using characteristic axis values from the candidate set.
P_i = {P_i^min, P_i^max} ∪ {P_{i,k} | v_axis = c_{i,k}, v ∈ V_i}
where the c_{i,k} values are unique or representative sorted vertex coordinates.
4. Tangency-segment extraction
For each slicing plane P ∈ P_i, tangency segments are computed from mesh face intersections.
L_i(P) = {l | l = P ∩ F, F ∈ 𝔽, l is a tangency segment on F}
All detected tangency segments in interval I_i are unified as
L_i = ∪_{P ∈ P_i} L_i(P)
These segments are clustered, and the number of clusters is used as a stability index.
C_i = Cluster(L_i),  K_i = |C_i|
5. Detection of transition zones
The sequence {K_i} is evaluated across all intervals.
{K_i}, i = 1, …, N
If multiple consecutive intervals share an identical stability value, the region is uniform and no semantic vertex is assigned.
K_i = K_{i+1} = … = K_j  ⇒  I_i, …, I_j is uniform
A change in stability indicates the existence of a geometric transition.
K_{i−1} ≠ K_i  ⇒  transition boundary detected
For such regions, only segments meaningful for dimension measurement are retained.
L_i* ⊆ L_i
6. Semantic vertex allocation
Semantic vertices are defined as the endpoints of the retained tangency segments.
SV_i = {p | p is an endpoint of l, l ∈ L_i*}
Each vertex is assigned a unique index.
UVI(SV_i) = {1, 2, …, |SV_i|}
7. DHM registration
Finally, a DHM instance is placed at every semantic vertex to support dimension computation.
DHM(SV_i)  ⇒  reference basis for dimensioning
A more detailed explanation of the axis scanning rules based on semantic vertex detection and the corresponding algorithm has been added to Appendix A.
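Independently of Appendix A, the slicing-and-clustering logic above can be sketched in a few stand-alone Python functions. The mesh is assumed to be a list of triangles (each a triple of (x, y, z) points), clustering uses a naive union-find over shared segment endpoints, and the function names are illustrative; edge cases such as faces lying exactly in a scanning plane are deliberately ignored, so this is a simplified sketch rather than the full algorithm.

```python
def slice_segments(triangles, c, axis=0, eps=1e-9):
    """Intersect the plane {coordinate[axis] = c} with each triangle and
    return the resulting line segments (pairs of 3D points)."""
    segs = []
    for tri in triangles:
        pts = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            da, db = a[axis] - c, b[axis] - c
            if da * db < -eps:               # edge strictly crosses the plane
                t = da / (da - db)
                pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
        if len(pts) == 2:
            segs.append((pts[0], pts[1]))
    return segs

def cluster_count(segs, tol=1e-6):
    """Stability index K: number of connected segment groups."""
    parent = list(range(len(segs)))
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    close = lambda p, q: all(abs(p[k] - q[k]) < tol for k in range(3))
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            if any(close(p, q) for p in segs[i] for q in segs[j]):
                parent[find(j)] = find(i)
    return len({find(i) for i in range(len(segs))})

def scan_transitions(triangles, axis=0, n=10):
    """Evaluate K_i on n interior planes and report where K changes."""
    lo = min(p[axis] for t in triangles for p in t)
    hi = max(p[axis] for t in triangles for p in t)
    ks = [cluster_count(slice_segments(triangles, lo + (hi - lo) * i / n, axis))
          for i in range(1, n)]
    bounds = [lo + (hi - lo) * (i + 1) / n
              for i in range(1, len(ks)) if ks[i] != ks[i - 1]]
    return ks, bounds
```

Scanning a geometry in which one part ends partway along the axis produces a drop in K at the transition boundary, which is exactly the signal used to allocate semantic vertices in steps (5)–(7) above.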

7. Fundamental Case Study of Dimension Generation

7.1. Basic Example: Wall with a Window Opening

A wall with a window opening contains an internal void. The reference point for the semantic vertex detection was set at the bottom-left corner of the wall. From this reference point, the position of the window opening can be identified through secondary vector-based exploration along the designated axis directions. The following illustrates an example of a vector-based semantic vertex detection for a wall object with a window opening.
  • Outermost rectangular bounding box
For a wall object containing a window, the object itself represents the outermost boundary. When the object is aligned along the world coordinate axes, the X, Y, and Z coordinate values of the semantic vertices are defined as follows:
2. Second-stage vector-based search using the reference point
After defining the primary semantic vertices through the first outermost bounding box, the second-stage semantic vertices are explored using the reference point “L_B_O.” The conditions for identifying each semantic vertex are as follows:
Condition for WL_B_O: Reference point: L_B_O. The search principle is defined by y = Y_0, x > X_0, z > Z_0. The candidate selection rule chooses p = (x, y, z) ∈ V that minimizes (Δx, Δz) = (x − X_0, z − Z_0), meaning both Δx and Δz are minimal and non-degenerate.
Condition for WR_B_F: Reference point: WL_B_O. The search principle is y = Y_w, z = Z_w, x > X_w (minimum x). The selection rule picks, among candidates satisfying the above, the vertex with the smallest x.
Condition for WL_T_F: Reference point: WL_B_O. The search principle is x = X_w, y > Y_w, z > Z_w (minimum z). The selection rule picks, among candidates satisfying the above, the vertex with the smallest z.
Condition for WL_B_O_BD: Reference point: WL_B_O. A virtual ray is projected in the −Z-axis direction, and the first intersection coordinate with the mesh defines WL_B_O_BD. The parametric form of the ray is r(t) = (X_w, Y_w, Z_w) + t(0, 0, −1), t > 0.
Condition for WL_B_O_LD: Reference point: WL_B_O. A virtual ray is projected in the −X-axis direction, and the first intersection coordinate with the mesh defines WL_B_O_LD. The parametric form of the ray is r(t) = (X_w, Y_w, Z_w) + t(−1, 0, 0), t > 0.
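The WL_B_O condition can be expressed as a short Python filter over the vertex set. The lexicographic interpretation of "minimal (Δx, Δz)", the tolerance handling, and the function name are illustrative assumptions for this sketch.

```python
def find_window_origin(vertices, ref, tol=1e-9):
    """Candidate for WL_B_O: vertices on the reference face (y = Y0) that lie
    strictly inside the wall boundary (x > X0, z > Z0), taking the smallest
    (Δx, Δz) offset from L_B_O (interpreted lexicographically here)."""
    X0, Y0, Z0 = ref
    cands = [v for v in vertices
             if abs(v[1] - Y0) < tol and v[0] > X0 + tol and v[2] > Z0 + tol]
    return min(cands, key=lambda v: (v[0] - X0, v[2] - Z0))
```

For a 4 m x 3 m wall face with a window opening spanning (1, 1) to (3, 2), the filter excludes the outer wall corners (which fail x > X0 or z > Z0) and any back-face vertices (which fail y = Y0), leaving the bottom-left window corner as the minimum.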
Figure 5 illustrates the common procedure of semantic vertex detection, showing the extraction of first-order semantic vertices based on the initial bounding box of a wall object containing a window, as well as an example of dimension–helper–mesh generation at those semantic vertices. Figure 6 shows the second-stage semantic vertex detection applied to a wall object containing a window, where semantic vertices related to the window are extracted, and helper meshes are generated at those detected semantic vertices. Table 2 and Table 3 present the semantic vertex labels of the wall object with a window, along with descriptions for each label.
As a fundamental case study, dimension generation is performed for wall and column objects after alignment, and the invariance of dimensions is verified following object rotation.

7.2. Dimension Generation for Wall Objects

The automatic dimension generation is verified for a single wall object aligned in the world coordinate system, parallel to the X, Y, and Z axes.
The following steps are carried out:
  • Semantic vertex detection
  • Generation of labels and dimension helper meshes
  • Creation of final dimension lines and numerical dimension values
Finally, after the completion of dimension generation, the invariance of the generated dimensions is verified by rotating or moving the objects. Figure 7 illustrates the generation and visualization of dimension lines for a wall object with a window, referencing the dimension-helper meshes defined at the semantic vertices. Figure 8 presents a case demonstrating that the measured dimensions remain invariant even after the object undergoes translation and rotation. Table 4 provides descriptions of the dimension labels for the object containing a window.
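The invariance verified in Figure 8 follows from the fact that translation and rotation are rigid transformations, which preserve the Euclidean distance between any two DHM anchors. A minimal numerical sketch (helper names are illustrative):

```python
import math

def rotate_z(p, theta):
    """Rotate a point about the global Z-axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1], p[2])

def translate(p, d):
    """Translate a point by the offset vector d."""
    return tuple(p[k] + d[k] for k in range(3))

def dist(a, b):
    """Dimension value: Euclidean distance between two semantic vertices."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in range(3)))
```

Applying the same rigid transform to both anchors of a dimension leaves `dist` unchanged to floating-point precision, which is why the generated dimensions survive object relocation.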

7.3. Example: Round Column with a Capital

A round column with a capital comprises two parts: an upper capital section and a main shaft section. When the slicing plane passes through the capital, the intersection points form lines. As the plane continues to scan from the −X axis toward the +X axis, a new vertical intersection line is detected on the main shaft, indicating a geometric transition. The endpoints of this newly detected vertical line are identified as semantic vertices, which are essential for dimension calculation. The following illustrates how the scanning-based method detects the semantic vertices for a circular column with a capital.
  • Outermost rectangular bounding box
    First, the column object is aligned along the user’s viewing reference frame, specifically the X-, Y-, and Z-axes. Next, the definition of semantic vertices begins based on the outermost rectangular bounding box. The outermost bounding box automatically defines eight semantic vertices: Left–Bottom–Origin (L_B_O), Left–Bottom–Back (L_B_B), Right–Bottom–Front (R_B_F), Right–Bottom–Back (R_B_B), Left–Top–Front (L_T_F), Left–Top–Back (L_T_B), Right–Top–Front (R_T_F), and Right–Top–Back (R_T_B). To mark the coordinates of these eight semantic vertices, each vertex is assigned a unique index, and visualized labels and dimension helper meshes are generated. The origin of each label and the origin of each helper mesh are set to exactly coincide with the corresponding semantic vertex coordinates. Figure 9 illustrates an example of first-stage semantic vertex detection and visualization for a cylindrical column object. When the column object is aligned with the world coordinate axes, the X, Y, and Z coordinate values of each semantic vertex are defined as follows.
  • Semantic vertex detection by X-axis scanning method
    The X-axis directional scanning method is applied for second-stage semantic vertex detection. By referencing the outermost bounding box of the target object, a series of virtual cross-sectional planes is sequentially moved across the object from the leftmost to the rightmost side. At each scan position, the intersection lines between the plane and the object’s mesh are calculated. Among the vertical segments generated by these intersections, filtering and clustering are performed based on their height, position, and overlap ratio to extract semantic vertex candidates. In particular, for each scan plane, a stable set of vertical intersection lines is identified where the number of intersection points converges consistently along the vertical direction. These lines are defined as vertical intersection lines, and among them, the one with the lowest Z value is used to define two semantic vertices: the upper endpoint and the lower endpoint.
    On the left side of the column, the vertex with the maximum Z value is defined as CM_L_T_P.
    On the left side, the vertex with the minimum Z value is defined as CM_L_B_P.
    On the right side of the column, the vertex with the maximum Z value is defined as CM_R_T_P.
    On the right side, the vertex with the minimum Z value is defined as CM_R_B_P.
    This axis-based scanning method allows for reliable semantic vertex detection even in objects with circular, bent, or asymmetric geometries. Furthermore, it can detect contact points or tangent lines based on actual geometric intersections, even for complex structures that are not perfectly aligned to a specific axis. Therefore, the scanning method achieves a higher detection reliability than simple outermost bounding box–based approaches. Figure 9 and Figure 10 show the results of first-stage and second-stage (scanning-based) semantic vertex detection, respectively, and Table 5 and Table 6 provide descriptions of the corresponding semantic vertex labels.

7.4. Dimension Generation for Column Objects

Automatic dimension generation is verified for three column objects of identical shape but different sizes. Initially, all objects are aligned in the world coordinate system, parallel to the X, Y, and Z axes. For each aligned object, the following steps are conducted:
  • Semantic vertex detection
  • Generation of labels and dimension helper meshes
  • Creation of dimension lines and numerical dimension values
Finally, after completing the dimension generation process, the invariance of dimensions is verified after object rotation. Figure 11 illustrates the generation and visualization of dimension lines for a cylindrical column by referencing the dimension-helper meshes generated at the semantic vertices. Figure 12 presents a case demonstrating that the measured dimensions remain invariant even after the object undergoes translation and rotation. Table 7 provides descriptions of the dimension labels for the cylindrical column.

8. Empirical Case Study

An empirical case study was conducted to verify semantic vertex detection and dimension generation using an actual project model. For the reference floor of the structure, dimension generation was visualized for slabs, columns, beams, and walls. In this experiment, no dimension helper meshes were generated. Instead, unique indices were assigned to each semantic vertex, and dimension lines were created automatically based on vertex pair relationships. Figure 13, Figure 14 and Figure 15 present the respective results of this empirical study.
In the implementation, all automatically generated dimensions and semantic vertex IDs are first stored as object-level custom properties in Blender. Although this study does not perform IFC write-back in practice, it conceptually explains that these custom properties can be mapped to a user-defined property set (Pset_SemanticDimensions) and linked to the corresponding IFC element through IfcRelDefinesByProperties. Figure 16 presents an example in which semantic vertex identifiers and dimension data are recorded through Blender custom property assignments. While Blender and IFC do not provide any native mechanism for automatic synchronization between Custom Properties and Property Sets, the mapping process is fully realizable through scripting—particularly using Python and IfcOpenShell—allowing Custom Property values to be programmatically translated into IFC Pset structures when write-back functionality is required.
While IFC Property Sets represent standardized, schema-governed metadata structures intended for interoperability within Building Information Modeling (BIM), Blender Custom Properties constitute highly flexible, user-defined metadata stored internally within Blender objects. This flexible structure is particularly well-suited for holding platform-independent semantic-vertex identifiers and dimensional information prior to any export, transformation, or potential mapping to IFC. Accordingly, the purpose of this semantic property structure is not IFC output itself, but rather to maintain organized, query-ready metadata that can be easily processed or transferred to downstream workflows such as analysis, QTO, or automation.
After generating the dimensions, the results can be retained within these custom properties or exported to external systems along with the dimension objects. Blender natively supports scripting methods that extract object-level custom information in formats such as Excel, CSV, or JSON, enabling seamless integration with diverse analytical or automation pipelines. When desired, the same scripting environment can also be extended to implement IFC write-back, thereby completing the translation of Blender metadata into formally structured IFC Property Sets.
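Outside Blender, this export path can be sketched with the Python standard library alone. The record structure and field names below are hypothetical stand-ins for the object-level custom properties; inside Blender itself, the rows would instead be read from each object's custom properties via the `bpy` API before serialization.

```python
import csv
import io
import json

# Hypothetical dimension records standing in for Blender custom properties.
records = [
    {"object": "Wall_01", "label": "IW", "vertices": "WL_B_O-WR_B_F", "value_mm": 1200.0},
    {"object": "Wall_01", "label": "IH", "vertices": "WL_B_O-WL_T_F", "value_mm": 900.0},
]

def to_json(rows):
    """Serialize dimension records for downstream QTO or analysis pipelines."""
    return json.dumps(rows, indent=2)

def to_csv(rows):
    """Flatten the same records into CSV for spreadsheet-based workflows."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Because the semantic vertex identifiers travel with each record, the exported rows remain traceable back to the DHM anchors that produced them, regardless of which downstream system consumes the file.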

9. Discussion

This study establishes a practical basis for expanding BIM from a geometry-only paradigm to a meaning-preserving framework by introducing a semantic reference layer, an element previously absent from the IFC structure. In this section, five thematic discussions are presented: (1) comparison with existing dimensioning systems, (2) interoperability considerations, (3) accuracy of semantic vertex detection, (4) the role of semantic vertices within IDS/EIR-based BIM workflows, and (5) future research directions.
  • Comparison with existing dimensioning systems
Conventional CAD environments, particularly 2D drafting systems, primarily rely on manual dimension annotation. Although endpoints or vertices can be detected or snapped automatically, these vertex detections do not carry semantic context, meaning that the system recognizes where a geometric endpoint exists, but not what that endpoint represents functionally within the building object (e.g., sill corner, jamb inner edge, structural core boundary, etc.). As a result, dimensional references in CAD are fundamentally dependent on user interpretation and repeated manual selection, which often leads to inconsistency, increased modeling time, and difficulties in maintaining dimensional integrity after geometric modification.
Similarly, advanced 3D BIM software such as Revit provides associative dimension tools and reference-plane snapping, yet the detected points remain geometric-only. Vertices are identified, but they are not assigned persistent semantic labels and, therefore, cannot act as invariant reference anchors across export formats, coordinate transformations, or platform transitions. If the model is rotated, relocated, or converted into another format, dimension information must often be regenerated because the system lacks a stable vertex-tagging mechanism that preserves object meaning beyond geometry.
In contrast, the software solution proposed in this study introduces semantic vertex tagging and dimension helper mesh generation, enabling each reference point to be defined, stored, and retrieved as a functionally meaningful anchor rather than merely a coordinate. Once a point is semantically designated, for example, as a left–bottom–origin corner, an inner jamb vertex, or a window opening reference, it remains invariant regardless of transformation, platform change, or file conversion. Dimension lines are generated automatically by referencing these persistent semantic nodes, ensuring that parametric measurements remain stable even after translation, rotation, or format export. This represents a shift from manual geometry-driven dimensioning to topology- and meaning-based automated dimensioning, thereby improving accuracy, reproducibility, and cross-platform interoperability.
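The invariance argument can be illustrated with a small, hypothetical sketch: semantic tags reference vertex indices rather than coordinates, so a rigid transformation changes the positions but not the tag-to-vertex mapping, and any dimension recomputed from the tags is unchanged.

```python
import math

def rotate_z(p, angle):
    """Rotate a 3D point about the global Z axis."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

# A 4 m x 3 m wall face; tags bind semantic names to vertex *indices*.
verts = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 0.0, 3.0), (4.0, 0.0, 3.0)]
tags = {"L_B_O": 0, "R_B_F": 1, "L_T_F": 2}

moved = [rotate_z(v, math.pi / 4) for v in verts]

# Dimensions recomputed from the same tags are unchanged by the rotation.
length = math.dist(moved[tags["L_B_O"]], moved[tags["R_B_F"]])  # still 4.0
height = math.dist(moved[tags["L_B_O"]], moved[tags["L_T_F"]])  # still 3.0
```

The vertex coordinates and label names here are illustrative; the point is only that the tag dictionary survives the transformation untouched.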
2. Interoperability
In this study, interoperability does not refer to a quantitatively measurable performance metric at the current stage, but rather to the potential scalability of the proposed framework across different platforms. The automatic dimension generation method extends beyond design-stage BIM measurement and demonstrates clear feasibility for integration into diverse environments, such as AR verification, construction-phase QTO automation, geometric compliance checking, and digital twinning. Thus, interoperability is addressed in this research as a direction for extension, not as a quantitative experiment within the present scope.
Semantic vertices function as invariant dimensional references, enabling object dimensions to remain consistent even when transferred across platforms. This implies that the semantic vertex structure generated in Blender may be exported into Unity or AR platforms to support real-time model-to-reality alignment, deviation visualization, and reduced manual inspection effort. As dimensions are stored as semantic relations rather than platform-dependent IFC attributes, the framework naturally extends toward construction progress comparison, post-build verification, and continuous digital twin updates.
Therefore, interoperability in this study is conceptualized as a future-oriented expansion capability of the semantic vertex framework, rather than a measurable output to be validated through numerical indices. Future research will introduce quantitative studies, including UVI preservation rate across platform conversions, vertex stability after mesh transformation, and measurement verification using AR-based ground truth.
3. Accuracy of semantic vertex detection
In this study, the author clarifies that the definition of accuracy does not correspond to the numerical correctness of the generated dimensions, but rather to the accuracy of semantic vertex detection itself.
Since the final dimensions are computed as a direct consequence of correctly identified semantic vertices, the verification of accuracy must focus on whether semantic vertices are detected completely and reliably, not on the dimensional values that simply derive from them. Thus, the core accuracy metric in this study is semantic vertex detection, not dimension error.
At the present research stage, however, quantitative evaluation of semantic vertex detection accuracy is not feasible. A fully annotated ground-truth dataset of IFC/mesh models is required to compute measures such as precision, recall, or F1-score. Yet, such datasets do not currently exist, and constructing them manually is a non-trivial process due to geometric variability, IFC heterogeneity, and the absence of standardized semantic labels across platforms. Without such baseline references, quantitative detection accuracy experiments cannot be performed at this time.
Nevertheless, we argue that the proposed method ensures theoretical completeness in semantic vertex detection. When the target mesh satisfies four conditions—(1) fully normalized geometry, (2) complete triangulation of all faces, (3) no orphan or unused vertices, and (4) no duplicated or overlapping vertices/edges/faces—the detection algorithm traverses the entire geometrical–topological domain without omission. Under these assumptions, the AABB-based, vector-rule-based, and scanning-based procedures detect every semantically defined extreme and intersection vertex, enabling 100% completeness of semantic vertex extraction. Consequently, dimensional correctness emerges automatically as a derivative property.
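Under those four normalization conditions, the AABB-based stage reduces to matching the mesh's extreme coordinates. The following is a minimal Python sketch of that step (label names follow Table 2; the matching tolerance `tol` is an assumed parameter introduced for illustration):

```python
def aabb_semantic_vertices(verts, tol=1e-9):
    """Detect the eight AABB-corner semantic vertices of a vertex set.

    Returns a dict mapping labels such as 'L_B_O' (X: min, Y: min, Z: min)
    to the index of the matching mesh vertex, following the naming of
    Table 2. Only corners actually present in the mesh (within `tol`)
    are reported.
    """
    xs, ys, zs = zip(*verts)
    bounds = {"min": (min(xs), min(ys), min(zs)),
              "max": (max(xs), max(ys), max(zs))}
    # Label scheme: L/R = X min/max, F/B = Y min/max, B/T = Z min/max.
    corners = {
        "L_B_O": ("min", "min", "min"), "R_B_F": ("max", "min", "min"),
        "L_B_B": ("min", "max", "min"), "R_B_B": ("max", "max", "min"),
        "L_T_F": ("min", "min", "max"), "R_T_F": ("max", "min", "max"),
        "L_T_B": ("min", "max", "max"), "R_T_B": ("max", "max", "max"),
    }
    found = {}
    for label, (kx, ky, kz) in corners.items():
        target = (bounds[kx][0], bounds[ky][1], bounds[kz][2])
        for i, v in enumerate(verts):
            if all(abs(a - b) <= tol for a, b in zip(v, target)):
                found[label] = i
                break
    return found

# Example: the eight corners of a 2 x 1 x 3 box are all detected.
box = [(x, y, z) for x in (0, 2) for y in (0, 1) for z in (0, 3)]
corners_found = aabb_semantic_vertices(box)
```

For non-box geometry some corners are simply absent from the returned dict, which is exactly the situation handled by the vector-rule and scanning procedures.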
Therefore, accuracy in this work is defined not as deviation in measurement values, but as an algorithmic guarantee of semantic vertex completeness in normalized mesh conditions. Future work will include the construction of labeled datasets, quantitative benchmark testing, and robustness evaluation under noisy, non-normalized, or non-manifold geometries, enabling experimental validation of the theoretical completeness demonstrated here.
4. Role of semantic vertices within the IDS/EIR-based BIM workflow
Semantic vertices function as a key integration layer within the BIM workflow governed by the EIR (Employer’s Information Requirements) and translated through the IDS (Information Delivery Specification). While the EIR defines dimensional and tolerance requirements, the IDS expresses them as machine-readable validation rules. However, once models are exchanged as IFC, most parametric references disappear, making quantitative verification, QTO, and rule-based checking difficult, especially for localized dimensions such as opening width, spacing intervals, or height conditions.
The proposed method fills this gap by introducing semantic vertices as a computational layer between IDS rule definition and downstream evaluation. After IFC reconstruction, semantic vertices operate as invariant geometric anchors that remain stable across scale, rotation, or coordinate transformation. As a result, IDS rules (e.g., Door_W ≥ 900 mm; opening interval ≤ 600 mm) can be evaluated via direct vertex-to-vertex measurement, enabling automated model checking without manual inspection.
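A hypothetical sketch of such a vertex-to-vertex rule check (the door labels, operator set, and 910 mm opening below are invented for illustration):

```python
import math

def check_ids_rule(verts, tags, a, b, op, limit_mm):
    """Evaluate one IDS-style dimensional rule between two semantic vertices.

    `a` and `b` are semantic labels resolved through `tags`; the rule is
    'distance(a, b) <op> limit_mm'. Returns (measured_mm, passed).
    """
    d = math.dist(verts[tags[a]], verts[tags[b]])
    ops = {">=": d >= limit_mm, "<=": d <= limit_mm}
    return d, ops[op]

# Hypothetical door opening: jamb-to-jamb width of 910 mm.
verts = [(0.0, 0.0, 0.0), (910.0, 0.0, 0.0)]
tags = {"DL_B_O": 0, "DR_B_F": 1}
width, ok = check_ids_rule(verts, tags, "DL_B_O", "DR_B_F", ">=", 900.0)
# width == 910.0 and ok is True for the Door_W >= 900 mm rule
```

In a full IDS pipeline the `(a, b, op, limit_mm)` tuple would be parsed from the machine-readable rule set rather than hard-coded.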
In effect, semantic vertices establish the following:
  • Quantitative reference points for IDS constraints;
  • A parametric basis for automated QTO and dimensional extraction;
  • An extendable interface toward AR verification workflows.
Thus, they act as a downstream bridge linking EIR → IDS → IFC → QTO/Model Checking, ensuring that requirement definitions continue into measurable validation outcomes.
5. Future work: full round-trip validation and AR/MR deployment
Future work will focus on validating a complete semantic vertex round-trip workflow that spans the entire information lifecycle. The planned pipeline consists of the following:
(1) Authoring and modification within native BIM tools;
(2) IFC export;
(3) Semantic vertex detection and dimension extraction;
(4) IFC write-back or JSON-based metadata embedding;
(5) Model re-import into an IDS-compliant viewer for rule-evaluation feedback.
This closed loop will allow not only dimension generation but also bidirectional information flow, meaning that IDS-driven requirements can be verified after conversion and, if necessary, fed back to the authoring environment for correction.
In addition, the framework will be extended toward AR/MR-based field verification, where semantic vertices serve as anchoring points to align virtual models with physical environments. This supports real-scale deviation inspection and on-site dimension overlay. However, this direction also introduces technical challenges, such as the need for stable camera tracking, spatial drift compensation, and device-specific calibration to ensure millimeter-level accuracy. Addressing these limitations will be essential for deploying the method as a practical AR/MR-integrated BIM verification system.

10. Conclusions

This study presented a semantic-vertex-based automatic dimension generation framework for BIM objects whose IFC geometry has been converted into a stabilized mesh. The approach reinterprets the mesh as a topologically and geometrically defined set of vertices, edges, and faces, detects semantic reference vertices, and generates dimensions based on these vertices without manual measurement or annotation. Although the experimental environment was centered on Blender-reconstructed IFC objects, the methodology itself applies to any 3D geometry possessing definable topological characteristics, indicating strong potential for broader BIM and computational design workflows.
Through theoretical formulation and empirical validation, the following findings were derived:
  • Outermost bounding-box detection effectively identified primary semantic vertices for structural objects.
  • Semantic vertices not detected through bounding-box evaluation alone were successfully resolved using either reference vector exploration or axis scanning, selected depending on the object geometry.
  • The scanning-based approach demonstrated stable and reliable performance for non-orthogonal, circular, and asymmetric shapes.
  • The generated dimensions remained invariant under object rotation and realignment, confirming the coordinate-independent robustness of the proposed framework.
  • Case studies of slabs, columns, beams, and walls verified that accurate dimensional outputs can be achieved using semantic vertex pairing alone, without dependence on auxiliary helper meshes.
  • Semantic vertex labels serve primarily as visual indicators, whereas dimension helper meshes function as measurable reference points during dimension calculation.
  • After generating the dimensions, the results could be stored in Blender custom properties or exported to external systems along with the dimension objects.
Collectively, the proposed method enhances automated QTO and geometric verification of BIM data. It also contributes to the broader goal of improving IFC-based interoperability. However, this work was validated at the stage of IFC → Mesh → Semantic Vertex Detection → Dimension Generation, and a full bidirectional pipeline—Revit/ArchiCAD → IFC → Mesh → Dimension → IFC Write-back → Cross-Platform Restoration—was not realized within this study. Future research will aim to implement this end-to-end workflow and evaluate it in practical industry environments, including write-back validation, multi-platform compatibility, and integration with engines such as Unity and Unreal.

11. Patents

A patent application related to this research has been filed.

Funding

The present research was supported by the research fund of Dankook University in 2025.

Data Availability Statement

All data used in this study were directly generated and modeled by the author. All geometric models, semantic vertex definitions, and experimental datasets, including the IFC objects used in the experiments, were self-created specifically for this research. As a patent application related to this research is currently under review, the data are not publicly available; however, they may be provided by the author upon reasonable request.

Acknowledgments

This research was supported by the research fund of Dankook University, and the author gratefully acknowledges this support.

Conflicts of Interest

The author declares no conflicts of interest related to this study. The funding body had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AEC: Architecture, Engineering, and Construction
AR: Augmented Reality
BIM: Building Information Modeling
DHM: Dimension Helper Mesh
IDS: Information Delivery Specification
IFC: Industry Foundation Classes
LLMs: Large Language Models
QTO: Quantity Take-Off
UDSV: User-Defined Semantic Vertex
UVI: Unique Vertex Index

Appendix A. Axis-Scanning-Based Semantic Vertex Detection Algorithm

This appendix describes the full procedure of the axis-scanning-based semantic vertex detection algorithm. The method identifies semantic vertices by discretely slicing a mesh along a selected axis and detecting pattern transitions in the resulting intersection geometry. The algorithm does not sweep continuously along the axis; instead, the scan domain is divided into N intervals, and only the vertices within each interval are processed, which greatly reduces computational cost.

Appendix A.1. Input and Output Parameters

Inputs:
  • M = (V, F): mesh vertices and faces
  • axis ∈ {X, Y, Z}: scanning direction
  • N: number of scan intervals (sampling resolution along the axis)
  • K: slicing planes per interval
  • τ_len: minimum segment length threshold
  • τ_overlap: clustering overlap threshold
  • ε: extended scan boundary margin
Outputs:
  • S_scan: detected semantic vertex set
  • DHM_scan: optional helper-mesh instances at each vertex

Appendix A.2. Pseudocode

Algorithm A1. Axis-Scanning-Based Semantic Vertex Detection
1. Compute axis bounds
   axis_min = min(v_axis), axis_max = max(v_axis)
   axis_min_ext = axis_min − ε
   axis_max_ext = axis_max + ε
   Δ = (axis_max_ext − axis_min_ext)/N
2. Initialize
   S_cand = ∅, prev_pattern = ∅
3. For i = 1 to N
   3.1. Define scan interval
        s_i = axis_min_ext + (i − 1)Δ
        e_i = axis_min_ext + iΔ
        V_i = { v ∈ V | s_i ≤ v_axis ≤ e_i }
   3.2. If V_i = ∅, set curr_pattern = EMPTY and go to Step 3.6
   3.3. Generate K slicing planes
        for k = 1 to K
            t_k = s_i + (k/(K + 1))(e_i − s_i)
            record plane(axis = t_k)
   3.4. Intersect all planes with faces
        for each plane P
            for each face f ∈ F
                seg = Intersect(f, P)
                if seg exists and length(seg) ≥ τ_len, store seg in L_i
   3.5. Cluster and build pattern
        C_i = ClusterSegments(L_i, τ_overlap)
        curr_pattern = PatternDescriptor(C_i)
   3.6. Pattern change check
        if i > 1 and PatternChanged(prev_pattern, curr_pattern),
            extract the endpoints of the changed clusters and add them to S_cand
   3.7. prev_pattern = curr_pattern
4. Merge and snap
   S_scan = SnapToNearestVertices(S_cand)
5. (Optional) Generate DHM
   DHM_scan = CreateHelperMesh(v) for each v ∈ S_scan
6. Return S_scan, DHM_scan

Appendix A.3. Key Clarification—Sampling, Not Continuous Scanning

This method does not perform infinite or continuous scanning. Instead, the axis is sampled at N discrete intervals, and only the vertices within each interval (V_i) are evaluated. This design dramatically reduces computation while still capturing all significant geometric transitions.

Appendix A.4. Termination Condition

  • Outer loop executes exactly N times (finite).
  • Each interval processes only K slicing planes.
  • No recursion is used.
  • The algorithm always terminates.

Appendix A.5. Computational Complexity

Let |F| denote the number of faces and |V| the number of vertices.
  • Time complexity: O(N · K · |F|) ≈ O(N · |F|) for constant K
  • With BVH/spatial-tree acceleration: ≈ O(N · log |F|)
  • Space complexity: O(|V| + |F| + |S_cand|)
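As a concrete illustration of the sampling strategy, the following simplified Python sketch replaces the plane–face intersection and segment clustering of Algorithm A1 with a per-interval extent descriptor computed from vertices alone; all function and variable names are illustrative, not part of the published implementation.

```python
def axis_scan_candidates(verts, axis=2, n_intervals=8, digits=6):
    """Simplified Algorithm A1: slice a vertex set into N intervals along
    `axis` and collect the vertices of intervals where the cross-section
    pattern changes. The pattern descriptor here is just the rounded
    in-plane extent of each slab (the full algorithm instead clusters
    plane/face intersection segments).
    """
    lo = min(v[axis] for v in verts)
    hi = max(v[axis] for v in verts)
    step = (hi - lo) / n_intervals or 1.0
    prev_pattern, candidates = None, []
    for i in range(n_intervals):
        s, e = lo + i * step, lo + (i + 1) * step
        slab = [v for v in verts if s <= v[axis] <= e]
        if slab:
            other_axes = [a for a in range(3) if a != axis]
            pattern = tuple(
                round(max(v[a] for v in slab) - min(v[a] for v in slab), digits)
                for a in other_axes)
        else:
            pattern = "EMPTY"
        if prev_pattern is not None and pattern != prev_pattern:
            candidates.extend(slab)  # vertices of the changed interval
        prev_pattern = pattern
    return candidates

# A square shaft (1 x 1) under a wider capital (2 x 2): vertices are
# recorded where the cross-section pattern changes, i.e., at z = 3.
shaft = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 3)]
capital = [(x, y, z) for x in (-0.5, 1.5) for y in (-0.5, 1.5) for z in (3, 4)]
candidates = axis_scan_candidates(shaft + capital, axis=2, n_intervals=8)
```

The loop runs exactly N times and touches each vertex once per interval membership test, matching the O(N · |F|)-style bound discussed above.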

Figure 1. Example of a semantic vertex definition for an object with a window.
Figure 2. The conceptual workflow of IFC object transformation and the dimension generation process.
Figure 3. Conceptual diagram of vector-rule-based semantic vertex detection.
Figure 4. Conceptual diagram of scanning-based semantic vertex detection.
Figure 5. Semantic vertex definitions for wall and window first coordinate reference points (normalized world coordinate system, left: wireframe view, right: solid mesh view).
Figure 6. Semantic vertex definitions for wall and window second coordinate reference points (normalized world coordinate system, left: wireframe view, right: solid mesh view).
Figure 7. Automatic dimension generation for a wall object (left: wireframe view, right: solid mesh view).
Figure 8. Verification of the invariance and stability of generated dimensions after wall object translation and rotation.
Figure 9. Semantic vertex definitions for a round column showing the first coordinate reference points (normalized world coordinate system, left: wireframe view, right: solid mesh view).
Figure 10. Semantic vertex definitions added for the shaft part of a round column (normalized world coordinate system, (a) wireframe view, (b) solid mesh view).
Figure 11. Automatic dimension generation for a cylindrical column object ((a) wireframe view, (b) solid mesh view).
Figure 12. Verification of the invariance and stability of generated dimensions after column object translation and rotation.
Figure 13. Example of automatic dimension generation for base-level columns in a real project based on semantic vertex definitions. (The column was imported from an external IFC file and normalized within Blender).
Figure 14. Mesh view showing automatic dimension generation for the structural frame of the base floor in a real project. (The beam was imported from an external IFC file and normalized within Blender).
Figure 15. Example of automatic dimension generation for base-level walls in a real project based on semantic vertex definitions. (The wall object was modeled in Blender and can be converted into an IFC wall for further use).
Figure 16. Example of recording semantic vertex indices and beam dimensions within Blender Custom Properties. (The beam was imported from an external IFC file and normalized within Blender).
Table 1. Semantic vertex definitions for wall and window object coordinate reference points (normalized world coordinate system).

| Category | Semantic Vertex | Coordinate Definition (X, Y, Z) and Description |
|---|---|---|
| Object | L_B_O | X: min, Y: min, Z: min — Left–bottom–origin vertex (origin corner) |
| | L_B_B | X: min, Y: max, Z: min — Left–bottom–back vertex |
| | R_B_F | X: max, Y: min, Z: min — Right–bottom–front vertex |
| | R_B_B | X: max, Y: max, Z: min — Right–bottom–back vertex |
| | L_T_F | X: min, Y: min, Z: max — Left–top–front vertex |
| | C_T_F | X: average of L_B_O and R_B_F (X-axis), Y: min, Z: max — Center–top–front vertex |
| | C_T_B | X: average of L_B_B and R_B_B (X-axis), Y: max, Z: max — Center–top–back vertex |
| Window | WL_B_O | Y: same as L_B_O, minimal vector-scalar distance — Left–bottom–origin of window opening |
| | WR_B_F | X: max, Y: same as WL_B_O, Z: same — Right–bottom–front of window opening |
| | WL_B_B | X: same as WL_B_O, Y: max, Z: same — Left–bottom–back of window opening |
| | WL_T_F | X: same as WL_B_O, Y: same, Z: max — Left–top–front of window opening |
| | WL_B_O_HD | X: same, Y: same, Z: min — First edge contact along (−) Z axis; measures vertical offset from window base to wall bottom |
| | WL_B_O_LD | X: min, Y: same, Z: same — First edge contact along (−) X axis; measures horizontal offset from window to left wall surface |
Table 2. First coordinate reference points of the wall and window (in the normalized world).

| Semantic Vertex | Coordinate Definition (X, Y, Z) | Semantic Vertex | Coordinate Definition (X, Y, Z) |
|---|---|---|---|
| L_B_O | X: min, Y: min, Z: min | R_B_F | X: max, Y: min, Z: min |
| L_B_B | X: min, Y: max, Z: min | R_B_B | X: max, Y: max, Z: min |
| L_T_F | X: min, Y: min, Z: max | R_T_F | X: max, Y: min, Z: max |
| L_T_B | X: min, Y: max, Z: max | R_T_B | X: max, Y: max, Z: max |
Table 3. Second coordinate reference points of the wall and window (in the normalized world).

| Semantic Vertex | Coordinate Definition (X, Y, Z) | Description |
|---|---|---|
| WL_B_O | X: min, Y: min, Z: min | Left–bottom–outer corner of the window opening (reference origin) |
| WR_B_F | X: max, Y: min, Z: min | Right–bottom–front corner of the window opening (width reference) |
| WL_T_F | X: min, Y: min, Z: max | Left–top–front corner of the window opening (height reference) |
| WL_B_O_BD | Same X, Y as WL_B_O; Z direction = WL_B_O − depth offset | Depth offset reference point of the window origin (derived from the WL_B_O origin) |
| WL_B_O_LD | Same Y, Z as WL_B_O; X direction = WL_B_O − width offset | Left-side offset reference point of the window origin (WL_B_O reference origin) |
Table 4. Definition of dimensions for wall and window objects.

| Dimension | Formula | Description |
|---|---|---|
| Wall_L | Wall_L = R_B_F − L_B_O | Wall length (distance between left–bottom–origin and right–bottom–front vertices) |
| Wall_H | Wall_H = L_T_F − L_B_O | Wall height (distance between bottom and top along the Z-axis) |
| Wall_W | Wall_W = L_B_B − L_B_O | Wall thickness (distance between front and back faces) |
| Win_L | Win_L = WR_B_F − WL_B_O | Window length (horizontal distance between left and right window edges) |
| Win_H | Win_H = WL_T_F − WL_B_O | Window height (vertical distance between bottom and top window edges) |
| Win_L_B_O_HD | Win_L_B_O_HD = WL_B_O − WL_B_O_BD | Distance from the window base point to the lower edge (downward from the window reference point) |
| Win_L_B_O_LD | Win_L_B_O_LD = WL_B_O − WL_B_O_LD | Distance from the window base point to the left edge (leftward from the window reference point) |
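The formulas in Table 4 can be evaluated directly once semantic vertices are bound to coordinates. A small illustrative sketch (the vertex coordinates below are hypothetical, and each vertex 'difference' is taken as a Euclidean distance):

```python
import math

def dimension(verts, tags, a, b):
    """Length of the vector between two tagged semantic vertices
    (a Table 4 'difference' formula evaluated as a Euclidean distance)."""
    return math.dist(verts[tags[a]], verts[tags[b]])

# Hypothetical normalized wall (mm): 4000 long, 2800 high, 200 thick.
verts = [(0, 0, 0), (4000, 0, 0), (0, 200, 0), (0, 0, 2800)]
tags = {"L_B_O": 0, "R_B_F": 1, "L_B_B": 2, "L_T_F": 3}

wall_l = dimension(verts, tags, "L_B_O", "R_B_F")  # Wall_L = 4000.0
wall_h = dimension(verts, tags, "L_B_O", "L_T_F")  # Wall_H = 2800.0
wall_w = dimension(verts, tags, "L_B_O", "L_B_B")  # Wall_W = 200.0
```

The window formulas (Win_L, Win_H, and the offset dimensions) evaluate identically once the W-prefixed vertices are added to `tags`.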
Table 5. Coordinate reference points of a round column including the capital (normalized world coordinate system).

| Semantic Vertex | Coordinate Definition (X, Y, Z) | Semantic Vertex | Coordinate Definition (X, Y, Z) |
|---|---|---|---|
| L_B_O | X: min, Y: min, Z: min | R_B_F | X: max, Y: min, Z: min |
| L_B_B | X: min, Y: max, Z: min | R_B_B | X: max, Y: max, Z: min |
| L_T_F | X: min, Y: min, Z: max | R_T_F | X: max, Y: min, Z: max |
| L_T_B | X: min, Y: max, Z: max | R_T_B | X: max, Y: max, Z: max |
Table 6. Coordinate reference points of the round column shaft (normalized world coordinate system).

| Semantic Vertex | Coordinate Definition (X, Y, Z) | Description |
|---|---|---|
| CM_L_T_P | Scanned from the −X direction; endpoint of the candidate intersection line detected on the surface; Z: max | Upper-left vertex of the column shaft |
| CM_L_B_P | Scanned from the −X direction; endpoint of the candidate intersection line detected on the surface; Z: min | Lower-left vertex of the column shaft |
| CM_R_T_P | Scanned from the +X direction; endpoint of the candidate intersection line detected on the surface; Z: max | Upper-right vertex of the column shaft |
| CM_R_B_P | Scanned from the +X direction; endpoint of the candidate intersection line detected on the surface; Z: min | Lower-right vertex of the column shaft |
Table 7. Definition of dimensions for round column objects.

| Dimension | Formula | Description |
|---|---|---|
| Col_T_H | Col_T_H = L_T_P − L_B_O | Total height of the cylindrical column, including the capital |
| Col_C_W | Col_C_W = R_T_F − L_T_F | Width of the column capital (front view) |
| Col_C_D | Col_C_D = L_T_F − L_T_B | Depth of the column capital (side view) |
| Col_M_H_L | Col_M_H_L = CM_L_T_P − CM_L_B_P | Height of the left side of the column body (excluding capital) |
| Col_M_H_R | Col_M_H_R = CM_R_T_P − CM_R_B_P | Height of the right side of the column body (excluding capital) |
| Col_M_DI | Col_M_DI = CM_R_B_P − CM_L_B_P | Diameter of the cylindrical column body (base view) |
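The ±X scan of Table 6 and the shaft dimensions of Table 7 can be approximated on a plain vertex list as follows. The helper name, sample points, and tolerance are hypothetical; a real scan would operate on the tessellated IFC surface geometry rather than a handful of points:

```python
import math

def shaft_silhouette_vertices(points, tol=1e-6):
    """Sketch of the Table 6 scan: among the shaft's surface vertices,
    keep those on the leftmost (-X) and rightmost (+X) silhouette lines,
    then split each line by Z into its top/bottom endpoints."""
    xmin = min(p[0] for p in points)
    xmax = max(p[0] for p in points)
    left = [p for p in points if abs(p[0] - xmin) < tol]
    right = [p for p in points if abs(p[0] - xmax) < tol]
    return {
        "CM_L_T_P": max(left, key=lambda p: p[2]),
        "CM_L_B_P": min(left, key=lambda p: p[2]),
        "CM_R_T_P": max(right, key=lambda p: p[2]),
        "CM_R_B_P": min(right, key=lambda p: p[2]),
    }

# A few hypothetical vertices on a cylinder surface (radius 0.3, height 3.0).
shaft = [(-0.3, 0.0, 0.0), (-0.3, 0.0, 3.0),
         (0.3, 0.0, 0.0), (0.3, 0.0, 3.0),
         (0.0, 0.3, 1.5)]
cm = shaft_silhouette_vertices(shaft)

# Table 7 shaft dimensions as distances between the scanned vertices:
Col_M_H_L = math.dist(cm["CM_L_T_P"], cm["CM_L_B_P"])  # left shaft height -> 3.0
Col_M_DI = math.dist(cm["CM_R_B_P"], cm["CM_L_B_P"])   # shaft diameter    -> 0.6
```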

Cho, J. Semantic-Vertex-Based Topological Detection for Automatic Dimension Generation in Building Information Modeling (BIM) with Industry Foundation Classes (IFC). Appl. Sci. 2026, 16, 139. https://doi.org/10.3390/app16010139
