Article

Symbol Detection in Mechanical Engineering Sketches: Experimental Study on Principle Sketches with Synthetic Data Generation and Deep Learning

Engineering Design, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(14), 6106; https://doi.org/10.3390/app14146106
Submission received: 8 May 2024 / Revised: 9 July 2024 / Accepted: 10 July 2024 / Published: 12 July 2024

Abstract
Digital transformation is omnipresent in daily life, and its impact is noticeable in new technologies such as smart devices, AI chatbots and the changing work environment. This digitalization also takes place in product development, where technologies such as Industry 4.0, digital twins and data-driven methods are integrated to improve the quality of new products and to save time and costs during the development process. Data-driven methods that reuse existing data therefore have great potential. However, data from product design are very diverse and depend strongly on the respective development phase. Among the first product representations are sketches and drawings, which represent the product in a simplified and condensed way. To reuse these data, the existing sketches must be found with an automated approach so that the contained information can be utilized. This paper presents one approach to this problem: the detection of principle sketches in the early phase of the development process. The aim is to recognize the symbols in these sketches automatically with object detection models. To this end, existing approaches were analyzed and a new procedure was developed that uses synthetic training data generation. A total of six data generation types were then analyzed and tested with six different one- and two-stage detection models. The entire procedure was evaluated on two unknown test datasets, one focusing on different gearbox variants and a second derived from CAD assemblies. In the final sections, the findings are discussed and a procedure with high detection accuracy is identified.

1. Introduction

Digitalization and the appearance of data-driven technologies raise a variety of challenges for the development of new products. The goal of implementing these novel methods is to reduce product development time and costs, adapt quickly to changing markets and improve the efficiency of the development process [1]. Moreover, the application of these data-driven methods for reusing existing data and information helps to exploit the potential for product development, since a wide range of virtual and physical data are available throughout the engineering process [2,3].
Typical application data sources are drawings and sketches, which are proven means of communication in various engineering disciplines for quickly exchanging specific content. The symbols act as a language of the respective discipline and are readable only by trained personnel. The examples range from technical drawings of mechanical engineering products to computer science flowcharts, circuit diagrams in electrical engineering or architectural plans for buildings, with some examples shown in Figure 1. To support a given design task with existing sketches, the recognition and processing of these images must be automated. For mechanical engineering, principle sketches are especially appropriate for this purpose, as they are established in the early phases of the product development process and, therefore, have a great influence on the costs and quality of the final product.

Principle Sketches

These sketches are usually one of the first visual representations of the future product and are derived from the principle solution. They are a rough and general representation of the functional solution and form the transition from the abstract–functional to the concrete design step [4], with one example displayed in Figure 2.
These principle sketches are represented in different ways, e.g., standardized symbolic sketches, free line sketches, 3D freehand sketches or unscaled rough drafts [5]. They provide information about the type of components or assemblies, the number of parts and their arrangement. This deliberate abstraction fosters the creation of alternative solutions and is therefore a key factor for successful product development. The exact specification of the components and their geometrical dimensions are determined in the subsequent embodiment phase.
Figure 2. Example of a principle sketch and the derived detailed part drawing according to [6].
Figure 2 depicts the difference between a principle sketch and the resulting technical part. The sketch on the left presents a solution to the problem of converting translational motion into a rotational one, while the technical drawing on the right shows the finished component, referred to as part (a) in the principle sketch.
The objective is the automated utilization of this product representation, enabling the analysis of existing data for new developments. Potential applications include finding similar sketches, synthesizing new designs or even improving existing ones. For the realization, the first and very important step is the detection of the symbols in a principle sketch, whereby the focus is on their correct classification instead of their pixel-precise position. This is related to the subsequent contextualization, where a certain deviation of the bounding boxes could be compensated.
Consequently, the next section outlines the current state of the art for object detection in engineering drawings through a literature study. Based on the findings, a new approach is presented in Section 3 and tested thoroughly on unknown sketches in Section 4.

2. Related Work

After the introduction and explanation of principle sketches, this section reviews the current state of research in this field. It is structured in two parts: first, a comprehensive literature review on the recognition of symbols in technical sketches; second, an in-depth explanation of current methods for recognizing objects in images.

2.1. Sketch Detection

The motivation for symbol and drawing recognition in engineering has existed for a long time, with the first approaches published in the early 1980s. These early methods are mostly rule-based. Recognition involves locating and classifying the various objects in a drawing, with algorithms varying widely depending on the task and scope. In the more recent publication of Moreno et al. [7], the entire digitalization process is structured into two sections: symbol detection and contextualization. Contextualization aims to automatically reuse the detected symbols, for example, to find linked documents or to check the drawing for errors.
A general overview of the developed methods is given in Figure 3. The two diagrams show the results of a literature search of over 200 publications in the field of symbol detection and contextualization in engineering, gathered from journal and conference papers from the last 40 years. All methods were classified according to their engineering domain, the use of a learning technique (Machine or Deep Learning) and the dataset type. The engineering domains describe the application scenario, whether it is located in the mechanical, chemical or architectural environment. The learning methods are differentiated according to the type of training or learning on data, whereby all supervised, semi-, weakly-supervised and unsupervised methods are classified as learning methods. The table with all references is provided in Table A4 in Appendix C, sorted by the engineering domain and year.
The first diagram (Figure 3a) displays the distribution of developed methods per engineering domain. The bar chart demonstrates that many procedures were created for mechanical, civil, electrical and chemical engineering. In computer science and for the general methods, significantly fewer contributions were found. The reason is that only general methods in the context of the engineering domain are considered, e.g., the recognition of arrows or text in symbols. General methods for digitalization of drawings such as vectorization or Optical Character Recognition (OCR) are not included.
In addition to the distribution among the disciplines, the number of publications over time is shown in Figure 3b in two-year increments. A differentiation was drawn between methods with and without a learning technique, which includes Machine and Deep Learning. For many years, most approaches applied no learning techniques and focused on conventional algorithms and heuristics. However, an increase in approaches with Machine or Deep Learning can be observed from the mid-2010s onwards. The growth in publications over time can be explained by the development of new learning procedures (e.g., CNNs or Visual Transformers) and growing computational performance.
In addition to the chronological observation, another comparison of the literature is provided in the sunburst diagram in Figure 4. This shows the number of publications for the different categories of sketch digitalization. The analysis reveals that most of the methods focus on symbol detection and less on contextualization or preprocessing. Furthermore, the proportion of procedures with and without learning methods is almost similarly distributed across the entire study.
Learning methods have the disadvantage of requiring a large amount of labelled training data, which is why Figure 4 shows a breakdown of the individual solutions in the outermost circle. A closer look at the methods with learning reveals that many rely on their own dataset. This can be provided by industry partners or is self-generated through databases or scraping methods. Afterwards, these datasets must be manually labelled to be used for training the model.
Another possibility is to utilize open-source datasets, e.g., GREC’03 [8] for electrical engineering or BRIDGE [9] and SESYD [10] for architecture. These labelled and prepared datasets help to compare the results of different approaches and trained models. For principle sketches in mechanical engineering, however, there is currently a lack of publicly accessible datasets. A small proportion of the analyzed papers employ a synthetic dataset to train the model. This approach makes it possible to control the data generation and thus adapt the model to a given task. Besides the high effort for the initial development of a synthetic dataset, the verification of the trained model is challenging: it requires a small dataset that is independent of the synthetic generation and makes it possible to verify the transferability of the trained model.
Regardless of the dataset, Deep Learning-based object detection models are currently the state of the art for symbol detection. Accordingly, the next subsection briefly discusses the existing methods.

2.2. Object Detection Models

Deep Learning models for object detection are divided into one- and two-stage models. One of the popular single-stage procedures is the YOLO (You Only Look Once) model by [11]. As the name implies, the image is viewed only once and based on that, multiple bounding boxes and classes are proposed. The whole procedure is described as a regression problem, which makes the whole approach fast to train and run.
One of the first two-stage methods was the region-based CNN (RCNN) model of [12]. This approach avoids the problem of having many possible regions for an object’s position, as with the rigid sliding window approach, by using a search algorithm to narrow down the possible bounding box regions to around 2000 suggestions. The AlexNet from [13] is applied as a CNN architecture for the feature generation of different image regions, which are subsequently classified. The disadvantage of the method is the high training effort, the slow detection speed and the missing ability to improve the search algorithm via training. Therefore, the Fast RCNN [14] and later the Faster RCNN [15] were developed to improve the disadvantages of the first model. The Faster RCNN applies a learning search algorithm, the Region Proposal Network (RPN), and applies the enhanced detector from the Fast RCNN. With the release of the MASK RCNN, object segmentation was integrated into the task spectrum [16]. This is again based on the Faster RCNN, but with the implementation of a backbone network, which is responsible for the generation of the feature map.
These object detection methods were employed for the recognition of symbols in sketches, with a variety of approaches applied in the field of civil engineering, some with YOLO [17,18,19,20], RCNN [21,22,23,24] or both models [9,25]. Both detection types were adopted for chemistry diagrams; examples of YOLO models are found in [26,27,28,29,30,31] and of the RCNN in [32,33,34]. There are fewer methods for electrical and mechanical engineering, mainly due to the lack of available datasets. Examples of RCNN-based detection in electrical engineering are presented in [35,36]. For the mechanical engineering domain, approaches exist for each detection type, with YOLO [37,38,39] and with RCNN [40].

2.3. Summary and Conclusion

In summary, symbol detection in engineering domains is a long-standing research area that continuously adopts the latest technology to accomplish the task. Currently, Deep Learning models are mainly used for recognizing symbols in technical sketches or drawings. However, the availability of data is a major challenge, as Deep Learning-based models require a lot of labelled data for training. The problem becomes even more apparent for domain-specific tasks like the detection of principle sketches, where open-source datasets are not available and scraping cannot be used. Thus, the following research question emerges: how can the detection of symbols in principle sketches from the mechanical engineering domain be achieved, and does the data generation or the model architecture have a greater influence on the detection?
In order to answer this question in detail, a general framework for the solution of symbol recognition is presented first. The individual steps are then explained in more detail. Finally, the procedure is analyzed and evaluated with unknown datasets.

3. Method

The presented framework for solving the symbol detection of principle sketches is commonly divided into two parts: data generation and models. The detailed steps are shown in Figure 5, starting with the synthetic data generation.
Two different implementation approaches are investigated, whereby both should enable the training of object detection models through the synthetic generation of training images. The transferability of the model to real sketches is especially relevant for the given application domain. The first approach for the image generation is based on the preliminary work of Bickel et al. [41,42]. The second type is newly developed and employs the SketchGraph package from Seff et al. [43] in combination with PTC Onshape [44] for the sketch generation. Subsequently, various object detection models are trained and evaluated on the different training datasets. The aim is to determine the impact of the two steps on the detection of unknown sketches. The varying types of data generation and detection models are presented in the following subsections.

3.1. Data Generation

The objective of the synthetic data generation is to reproduce the symbols of principle sketches as accurately as possible, so that they are recognized with a high degree of accuracy on real sketches. However, as there is not enough existing data for this sketch type, the symbols must be generated randomly. For this purpose, two procedures were developed with different Python packages (Pillow [45] and SketchGraph [43]), which are presented and examined in detail below. The schematic process of the image generation is given in the lower part of Figure 5.
The same constraints apply to both generation methods, starting with a randomly chosen symbol shape within defined limits. Then, the number and variation of the symbols is freely selectable. The total number of training images is also eligible and is primarily dependent on the computing power and required time.

3.1.1. Pillow

The first approach is based on the widely used Python library Pillow, which was primarily developed for image processing and manipulation. The ease of use and the ability to quickly generate images with new content were the reasons for selecting the package. In addition, the procedure is capable of multiprocessing, which allows for quick data generation. The general approach is presented in Algorithm 1 and has one main parameter, the activation of a random background. The idea is to start with the creation of a white image of the given size; then, if the option is selected, a random background consisting of rectangles, lines and ellipses may be generated. After that, a random number of symbols is drawn on the image. For each symbol, a random position is determined and checked for collision with an existing bounding box; if there is no collision, the position is passed to the next function.
Algorithm 1: Overview of the data generation algorithm with the Pillow package
The decisive function in the algorithm is “drawSymbol”, in which the selected symbol is drawn with the Pillow package according to previously defined rules. The symbol variables, such as rotation, line thickness, general size, height and width, are defined randomly for each symbol. In addition, a variation factor is defined, which generates the deviations for each symbol shape. Afterwards, depending on the symbol type, the relevant points are generated and then drawn via the “ImageDraw” class. The background generation is used as the main parameter to study the different types of data generation. Therefore, two datasets are considered for training, one with and one without the random background. In theory, a randomly generated background mimics possible connection lines and thus trains a more robust detection model. The results of the different Pillow variants are shown in Figure 6, outlined in blue.
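The collision-checked random placement at the core of Algorithm 1 can be sketched in pure Python as follows. All names and parameter values here are hypothetical simplifications; the actual drawing step via Pillow’s ImageDraw is omitted and only indicated by a comment:

```python
import random

def overlaps(box, boxes, margin=2):
    """Check whether an axis-aligned box collides with any already placed box."""
    x0, y0, x1, y1 = box
    for bx0, by0, bx1, by1 in boxes:
        if x0 - margin < bx1 and x1 + margin > bx0 and \
           y0 - margin < by1 and y1 + margin > by0:
            return True
    return False

def place_symbols(img_size=600, n_symbols=5, max_tries=50, seed=42):
    """Randomly place symbol bounding boxes without overlap."""
    rng = random.Random(seed)
    placed = []
    for _ in range(n_symbols):
        for _ in range(max_tries):
            w = rng.randint(40, 120)   # random symbol width
            h = rng.randint(40, 120)   # random symbol height
            x = rng.randint(0, img_size - w)
            y = rng.randint(0, img_size - h)
            box = (x, y, x + w, y + h)
            if not overlaps(box, placed):
                placed.append(box)     # here drawSymbol would render the symbol
                break
    return placed

boxes = place_symbols()
```

Rejection sampling with a bounded number of retries, as sketched here, keeps the generation fast even when the canvas becomes crowded, at the cost of occasionally placing fewer symbols than requested.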

3.1.2. SketchGraph

Unlike the Pillow approach, SketchGraph creates symbols in a different way, similar to a sketcher in CAD programs. At first, a command sequence is generated that contains all the relevant information for creating the sketch. SketchGraph distinguishes between node and edge operations and their associated properties. The sequence is then converted into a sketch object, which interacts with Onshape’s API. This link is necessary because Onshape works as a geometric constraint solver and generates the final sketch. The solved sketch can then be rendered with various settings, including a “hand-drawn” mode that alters the lines to mimic a hand-drawn sketch. The general program flow is listed in Algorithm 2.
Algorithm 2: Overview of the data generation algorithm with the SketchGraph package and the Onshape API
Another difference is the possibility to improve the imitation of sketches. Based on the real sketches, symbol pairs are initially formed using previously defined rules, e.g., two gear symbols are placed directly next to each other. In addition, connecting lines between similar domain symbols are also generated, for example, between a rotary joint and a fixed bearing. The selection of contact partners and associated symbols is randomized, taking into account the location of existing symbols for the connecting lines to avoid overlap. As in the previous approach, symbol selection, positioning and variation are randomized, with collision checking performed in each object class. The individual symbol classes employ general rules for the structure of the respective symbol. A special rule was added for the planetary gear, since this symbol requires a lot of space compared to the others: during the creation of the sketch, the number of planetary gears is checked and reduced to one per sketch in order to leave enough space for the other symbols. In addition to the many minor factors, two major elements can be set in the dataset generation: hand-drawn symbols and a more realistic sketch representation. All possible combinations are used for the training, which results in four different training datasets (SketchGraph 1: no additions, SketchGraph 2: hand-drawn, SketchGraph 3: realistic and SketchGraph 4: hand-drawn + realistic). The gray-framed example images in Figure 6 illustrate the results for the four combinations.
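The pair-forming rule described above can be illustrated with a minimal sketch. The rule set, names and box size are hypothetical simplifications (bounds and collision checking are omitted for brevity): a new gear symbol is placed directly beside an already placed gear of the same kind, while all other symbols receive an independent random position:

```python
import random

# Hypothetical pairing rule: gear symbols of the same kind are placed
# directly next to each other; all other symbols get a free position.
GEARS = {"straight_gear", "helical_gear", "bevel_gear"}

def layout(symbols, img_size=600, box=80, seed=0):
    rng = random.Random(seed)
    placed = []  # list of (name, x, y)
    for name in symbols:
        partner = next(((n, x, y) for n, x, y in placed
                        if n == name and name in GEARS), None)
        if partner:
            # pair rule: place the new gear directly beside its partner
            x, y = partner[1] + box, partner[2]
        else:
            x = rng.randint(0, img_size - box)
            y = rng.randint(0, img_size - box)
        placed.append((name, x, y))
    return placed
```

In the actual procedure, this placement is expressed as SketchGraph node and edge operations and solved by Onshape’s constraint solver rather than computed directly, but the randomized pairing logic is the same in spirit.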
The disadvantage of this method is the significantly longer data generation time. Although multiprocessing and multiple Onshape documents are used to generate the sketches, the required time is still high: on average, the creation of a sketch with the SketchGraph method takes about 0.7–1 s, whereas the Pillow approach needs only about 0.001 s.

4. Experiments

In order to answer the research question, a number of studies are carried out to determine whether the data or the detection model has the larger influence on the recognition accuracy for unknown sketches. For this purpose, a total of six training datasets were generated, two with Pillow and four with the SketchGraph approach. For a better comparison, one-stage and two-stage detection models are investigated; according to GitHub stars [46] and citation numbers (see Table A1 in Appendix A), popular representatives are YOLOv5, MASK RCNN and Faster RCNN.

4.1. Training Datasets and Evaluation Metrics

The procedure for creating the training datasets was presented in detail in Section 3.1. To ensure that the results are comparable, both methods generated approximately 100,000 training images per dataset with a resolution of 600 × 600 pixels. The SketchGraph method appears to be more prone to errors during image generation, with the connection to Onshape sometimes failing; this results in up to 100 fewer images per dataset. Both methods produced the same 14 symbol types: fixed bearing, floating bearing, swivel joint, ball joint, straight gear, helical gear, bevel gear, planetary gear, and bearing. Examples of all symbols are shown in Figure 7. The last six symbols are created in two variants: with and without a hollow shaft. The blueprints for the general shapes of the symbols were found in [6,47,48,49,50,51,52]. Subsequently, the trained models are tested and evaluated on two unknown test datasets, which are presented in detail in the following sections.
The COCO dataset [53] evaluation metrics, consisting of the mean average precision at an IoU of 0.50 (mAP50) and the mean average precision averaged over IoU thresholds (mAP), are applied to compare the detection capabilities. Both metrics are used for the evaluation, although the mAP50 is more relevant for the present use case: for the later contextualization, it is more important to recognize the symbol correctly than to determine its exact position.
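At the mAP50 threshold, a detection counts as a true positive when its class matches and its bounding box overlaps a ground-truth box with an intersection over union (IoU) of at least 0.50. The standard IoU computation (a generic formula, not code from this study) looks like this:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A box shifted by a quarter of its width still scores IoU 0.6,
# i.e., it counts as a hit at the 0.5 threshold:
print(iou((0, 0, 100, 100), (25, 0, 125, 100)))  # 0.6
```

This tolerance is why mAP50 fits the present use case: a roughly positioned but correctly classified symbol is still counted as recognized.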

4.2. Implementation

For the realization of the study, the following implementation of the detection models were used: MASK RCNN [54], YOLOv5 [55], Faster RCNN [56]. All of the models were trained using transfer learning with the weights of the COCO dataset as the initial values. The YOLO and Faster RCNN models were trained on a computational server with two AMD EPYC 7643 processors (Advanced Micro Devices (AMD), Santa Clara, CA, USA), 256 GB of RAM and two Nvidia A40 (46 GB), while the MASK RCNN model was trained on a workstation PC with an Intel Xeon W-2125 (Intel, Santa Clara, CA, USA), 32 GB of RAM and an Nvidia Titan V (NVIDIA, Santa Clara, CA, USA).
There are different architectures available for the YOLO model, which vary the width and depth of the model. In this study, the S, M and L models were tested to analyze the influence of the model structure in more detail. For the MASK RCNN models, the backbone networks resnet50 and resnet101 were used and for the Faster RCNN a resnet50 was used.

4.3. Results—Gear Stages Dataset

A collection of principle sketches of gearboxes was used as the first unknown dataset. A total of 30 transmission variants were drawn, which can be roughly grouped according to their gear stage. The sketches consisted of the symbols bearing, straight gear, helical gear and bevel gear as well as the housing of each gear unit, with the exact instances counted in Table A2 in Appendix B. The dataset was created and labelled manually, with the simplest drawing containing six and the most complex twenty-two symbols. Subsequently, the dataset was scaled in three steps, resulting in a total of 90 sketches: the largest sketch was defined as the maximum, and drawings scaled to 80% and 66% were generated. The symbols in the training data varied independently in size, so the training images themselves were not scaled. The different scales were intended to provide an additional challenge for the models and to test their ability to detect symbols of unknown size. An excerpt of the dataset is shown in Figure 8, with an example for each gear stage illustrated.
The results of the recognition are listed in Table 1 and indicate that a good detection of the relevant symbols is achievable. The metrics also demonstrate that different datasets lead to significantly varying results, with the lowest mAP50 at 0.042 and the highest at 0.994. In general, the models trained on the SketchGraph datasets performed better, especially when a more realistic data generation was chosen. The best results were achieved with the SketchGraph 3 and 4 datasets and the YOLOv5 M model, each with an mAP50 of 0.994. The exception is the SketchGraph 2 dataset, which did not work with any model type, neither RCNNs nor YOLO. A comparison of the models revealed a clear tendency in favor of the YOLO models, which achieved better results than those of the RCNN family; the average difference between YOLO and RCNN models was 0.302 (mAP50) and 0.171 (mAP), respectively.

4.4. Results—Unknown Assemblies

As a final test, a more challenging unknown dataset was generated for the models. This was derived from the “3D Assembly Repository” of Lupinetti et al. [57] and consisted of several STEP assemblies, sorted into different product categories. Complex drawings were derived from these assemblies, with sketches created for six assembly categories; an example sketch for each category is displayed in Figure 9. The assemblies applied were hydraulic reduction, hydraulic rotor, double rotor turbine, wind turbine, differential and landing gear. In contrast to the first unknown dataset, considerably more symbols and many different variants were included in the individual sketches. The entire dataset consisted of 24 images; a detailed list of the occurring symbols is provided in Appendix B in Table A3.
In general, the result scores, which are shown in Table 2, reveal that the recognition level drops in comparison to the first dataset. This is mainly due to the more complex drawings and the increased variety of symbols. Furthermore, finer differences between symbols must be detected in this dataset, e.g., the bevel gear with and without a hollow shaft. The findings from the first study are evident in this dataset as well: the YOLO models performed better than the RCNN architectures, and the SketchGraph 2 dataset produced the worst results for all models. The combination of the Pillow data generation with the YOLO models also achieves good mAP50 values, although the YOLOv5 L provides better recognition results than the other YOLO derivatives. The highest accuracy is provided by the YOLOv5 L with the SketchGraph 3 dataset, resulting in an mAP50 of 0.906 and an mAP of 0.562.

4.5. Discussion

The evaluation of the different models and datasets helps in answering the stated research question (see Section 2.3). The first part of the question was successfully answered by the developed procedure, especially through the synthetic generation of the training data. For the second part, the obvious answer is that the data are more essential for object detection, since without them no training would be possible. But this answer is quite simplistic and does not tell the whole story.
In general, the results show that the dataset has a higher influence on recognition accuracy than the model, because with the third and fourth SketchGraph dataset it was possible for the YOLO and RCNN models to detect the unknown sketches with high accuracy, as displayed in the top row in Figure 10. Similarly, the second SketchGraph dataset highlights that a less representative dataset leads to problems for all models, as incorrect features are learned from the synthetic sketches. A visual example of the detection results with the SketchGraph 2 dataset and the YOLOv5 S model is depicted in the lower right corner of Figure 10. The influence of the models on the detection results becomes apparent in the Pillow datasets, where the one-stage methods achieved better results than the two-stage ones. A closer look at the RCNN models indicates that the results vary greatly and, thus, no model can be clearly identified as the best for the given task. In contrast, the YOLO models show that the M and L versions are best suited for recognition, as they also delivered higher detection accuracies for the Pillow dataset than the S version.
The structure of the test dataset also influences the evaluation, particularly since the focus was more on realistic drawings and less on quantity. Consequently, symbols were included in the training dataset, but were not present in the test dataset, e.g., the ball joint. However, this problem was tolerated, because the goal of the recognition model was to be as broadly applicable as possible, rather than being a specific solution for only three or four symbols. This particular requirement also leads to detection problems. In this study, the confusion between swivel joints and general bearings occurred frequently, which is also visible in the lower left result image in Figure 10. Since the bearing symbol virtually consists of two swivel symbols, the confusion is quite understandable. One possible solution is to apply a rule-based approach and to ignore the bounding boxes of the swivel joints when they overlap with those of the general bearing. The data generation of the swivel joint could likewise be improved through the integration of connection links, to make the symbols easier to distinguish from each other.
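The rule-based suppression suggested above can be sketched as a simple post-processing filter. This is a hypothetical illustration, not the study's implementation: swivel-joint boxes whose area lies mostly inside a general-bearing box are discarded, since the bearing symbol effectively contains two swivel symbols:

```python
def contained_fraction(inner, outer):
    """Fraction of the inner box's area that lies inside the outer box."""
    ix0, iy0 = max(inner[0], outer[0]), max(inner[1], outer[1])
    ix1, iy1 = min(inner[2], outer[2]), min(inner[3], outer[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (inner[2] - inner[0]) * (inner[3] - inner[1])
    return inter / area

def suppress_swivel(detections, threshold=0.8):
    """Drop swivel-joint boxes that mostly overlap a general-bearing box.

    detections: list of (class_name, (x0, y0, x1, y1)) tuples.
    """
    bearings = [box for cls, box in detections if cls == "bearing"]
    return [(cls, box) for cls, box in detections
            if not (cls == "swivel_joint" and
                    any(contained_fraction(box, b) >= threshold
                        for b in bearings))]
```

Using the containment fraction rather than plain IoU makes the rule robust to size differences: a small swivel-joint box inside a large bearing box has a low IoU but a containment fraction near one.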
Another recognition challenge was the scaling of the symbols in relation to the drawing. It was noticeable that the RCNN model had significantly larger errors in recognition, especially for smaller symbols. An adjustment of the data generation with a larger variance of the symbol size might lead to a general improvement. For the RCNN models, a smaller anchor size could eventually result in a better detection of smaller symbols. The selection of object recognition models concentrated on the established methods, although other models could be included in the next analyses, e.g., approaches with visual transformers or graph neural networks.
In addition, the synthetic data generation approach offers advantages over generative AI approaches such as generative adversarial networks (GAN) [58] or variational autoencoders (VAE) [59,60]. These techniques cannot be used if no or very little data are available, and it may also be difficult for the automatically trained models to map small details of the symbols, such as a bearing with or without a hollow shaft. Furthermore, GAN and VAE are sometimes very expensive and difficult to train, which leads to a high dependency of the results on hyperparameters and greater training time consumption.

5. Conclusions

In summary, this paper first provided a general overview of current approaches and solutions for the detection and contextualization of principle sketches in engineering, supported by a literature review. The question was raised whether the data or the model architecture has more influence on the recognition accuracy. For this purpose, several datasets were synthetically generated and common one- and two-stage models were trained. Two unknown datasets were produced to test the recognition accuracy; in the first one, the symbols were additionally scaled in three steps. The results revealed that the data generation has the greater influence, since well-prepared data performed better overall across different models, whereas poorly created data led to almost no detection of the symbols at all.
In a next step, the hyperparameters of the models could be optimized and other architectures used for training. Recognition of hand-drawn principle sketches is an additional challenge, and the SketchGraph approach may be a good starting point.

Author Contributions

Conceptualization, S.B.; methodology, S.B.; software, S.B.; formal analysis, S.B.; investigation, S.B.; resources, S.G. and S.W.; data curation, S.B.; writing—original draft preparation, S.B. and S.G.; writing—review and editing, S.G. and S.W.; visualization, S.B.; supervision, S.G. and S.W.; project administration, S.B.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets presented in this article are not readily available because the data are part of an ongoing study. Requests to access the datasets should be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
BB: Bounding Box
CAD: Computer Aided Design
CNN: Convolutional Neural Network
COCO: Common Objects in Context
mAP: Mean Average Precision
OCR: Optical Character Recognition
RCNN: Region-based Convolutional Neural Network
RPN: Region Proposal Network
SG: SketchGraph
STEP: STandard for the Exchange of Product model data
YOLO: You Only Look Once

Appendix A. Citation Numbers—Object Detection

Table A1. Overview of citation numbers of popular object detection algorithms according to Google Scholar on 23 April 2024. Algorithm collection derived from object detection roadmap from [61].
Name | Reference | Citation Number
Faster RCNN | [15] | 73,707
YOLO: You Only Look Once | [11] | 46,768
SSD: Single Shot MultiBox Detector | [62] | 37,166
RCNN | [12] | 36,358
Mask RCNN | [16] | 34,912
Fast RCNN | [14] | 33,608
RetinaNet | [63] | 28,943
FPN: Feature Pyramid Network | [64] | 25,464
DETR | [65] | 11,003
CornerNet | [66] | 4241

Appendix B. Dataset Instance—Overview

Table A2. Overview of symbol instances in the unknown gear stages dataset without the scaling applied.
Category | Instances | Category | Instances
bevel gear | 36 | general bearing | 216
bevel gear, hollow shaft | 0 | general bearing, hollow shaft | 0
ball joint | 0 | swivel joint | 0
planetary gear | 0 | straight gear | 82
planetary gear, hollow shaft | 0 | straight gear, hollow shaft | 0
fixed bearing | 0 | helical gear | 54
floating bearing | 0 | helical gear, hollow shaft | 0
Table A3. Overview of symbol instances in the unknown assembly dataset.
Category | Instances | Category | Instances
bevel gear | 3 | general bearing | 117
bevel gear, hollow shaft | 1 | general bearing, hollow shaft | 17
ball joint | 0 | swivel joint | 21
planetary gear | 28 | straight gear | 45
planetary gear, hollow shaft | 0 | straight gear, hollow shaft | 0
fixed bearing | 5 | helical gear | 5
floating bearing | 4 | helical gear, hollow shaft | 0

Appendix C. Literature Study—Overview

Table A4. Overview of the literature study of symbol digitalization in different engineering domains, sorted by the engineering domain and year of publication.
Source | Year | Title | Engineering Domain | Process Step | Learning Technique | Dataset
[67] | 1979 | A Threshold Selection Method from Gray-Level Histograms | general | preprocessing | without Learning | own
[68] | 1990 | A system for interpretation of line drawings | general | detection | without Learning | own
[69] | 1994 | Precise line detection from an engineering drawing using a figure fitting method based on contours and skeletons | general | preprocessing | without Learning | own
[70] | 1994 | Skeleton generation of engineering drawings via contour matching | general | preprocessing | without Learning | -
[71] | 1994 | Finding arrows in utility maps using a neural network | general | detection | with Learning | own
[72] | 1996 | Automatic learning and recognition of graphical symbols in engineering drawings | general | detection | without Learning | own
[73] | 1998 | A new algorithm for line image vectorization | general | preprocessing | without Learning | -
[74] | 2000 | Adaptive document image binarization | general | preprocessing | without Learning | own
[75] | 2006 | Robust and accurate vectorization of line drawings | general | preprocessing | without Learning | own
[76] | 2012 | Multi-Level Block Information Extraction in Engineering Drawings Based on Depth-First Algorithm | general | detection + contextualization | without Learning | own
[77] | 2017 | Adaptive document image skew estimation | general | preprocessing | with Learning | own
[78] | 2019 | Anchor Point based Hough Transformation for Automatic Line Detection of Engineering Drawings | general | preprocessing | without Learning | own
[79] | 2020 | Deep Vectorization of Technical Drawings | general | preprocessing | with Learning | open source
[80] | 1993 | Management of graphical symbols in a CAD environment: A neural network approach | civil | detection | with Learning | own
[81] | 1994 | Symbol recognition in a CAD environment using a neural network | civil | detection | with Learning | own
[82] | 1997 | A system to understand hand-drawn floor plans using subgraph isomorphism and Hough transform | civil | detection | without Learning | own
[83] | 1998 | A constraint network for symbol detection in architectural drawings | civil | detection | without Learning | own
[84] | 2000 | A complete system for the analysis of architectural drawings | civil | detection + contextualization | without Learning | own
[85] | 2001 | Architectural symbol recognition using a network of constraints | civil | detection | without Learning | own
[86] | 2001 | Symbol recognition by error-tolerant subgraph matching between region adjacency graphs | civil | detection | without Learning | own
[87] | 2002 | An object-oriented progressive-simplification-based vectorization system for engineering drawings: model, algorithm, and performance | civil | preprocessing + detection | without Learning | own
[88] | 2003 | A model for image generation and symbol recognition through the deformation of lineal shapes | civil | detection | without Learning | synthetic
[89] | 2005 | Using engineering drawing interpretation for automatic detection of version information in CADD engineering drawing | civil | detection | without Learning | own
[90] | 2007 | Automatic analysis and integration of architectural drawings | civil | detection + contextualization | without Learning | own
[91] | 2008 | Knowledge Extraction from Structured Engineering Drawings | civil | detection | without Learning | own
[92] | 2009 | Symbol Detection Using Region Adjacency Graphs and Integer Linear Programming | civil | detection | without Learning | open source
[10] | 2010 | Generation of synthetic documents for performance evaluation of symbol recognition & spotting systems | civil | detection | with Learning | synthetic
[93] | 2010 | Symbol spotting in vectorized technical drawings through a lookup table of region strings | civil | detection | without Learning | own
[94] | 2011 | Statistical Grouping for Segmenting Symbols Parts from Line Drawings, with Application to Symbol Spotting | civil | detection | without Learning | open source
[95] | 2012 | Object recognition in floor plans by graphs of white connected components | civil | detection | without Learning | own + open source
[96] | 2013 | Geometric-based symbol spotting and retrieval in technical line drawings | civil | detection | with Learning | own + open source
[97] | 2013 | Building a Symbol Library from Technical Drawings by Identifying Repeating Patterns | civil | detection | with Learning | open source
[98] | 2013 | Combining geometric matching with SVM to improve symbol spotting | civil | detection | with Learning | open source
[99] | 2013 | Efficient symbol retrieval by building a symbol index from a collection of line drawings | civil | detection | with Learning | open source
[100] | 2013 | A symbol spotting approach in graphical documents by hashing serialized graphs | civil | detection | without Learning | open source
[101] | 2014 | Data Extraction from DXF File and Visual Display | civil | detection + contextualization | without Learning | own
[102] | 2016 | Automatic Hyperlinking of Engineering Drawing Documents | civil | detection | without Learning | own
[103] | 2017 | Graph-Based Deep Learning for Graphics Classification | civil | detection | with Learning | open source
[104] | 2018 | Extraction of Ancient Map Contents Using Trees of Connected Components | civil | detection | without Learning | own
[105] | 2018 | Object Detection in Floor Plan Images | civil | detection | with Learning | own
[106] | 2019 | Graph Neural Network for Symbol Detection on Document Images | civil | detection | with Learning | open source
[18] | 2019 | Symbol spotting for architectural drawings: state-of-the-art and new industry-driven developments | civil | detection | with Learning | own
[9] | 2019 | BRIDGE: Building Plan Repository for Image Description Generation, and Evaluation | civil | detection | with Learning | open source
[17] | 2020 | Symbol Spotting on Digital Architectural Floor Plans Using a Deep Learning-based Framework | civil | detection | with Learning | own + open source
[21] | 2020 | Floor Plan Recognition and Vectorization Using Combination UNet, Faster-RCNN, Statistical Component Analysis and Ramer-Douglas-Peucker | civil | detection | with Learning | own
[107] | 2021 | Knowledge-driven description synthesis for floor plan interpretation | civil | detection + contextualization | with Learning | open source
[25] | 2021 | Fine Grained Feature Representation Using Computer Vision Techniques for Understanding Indoor Space | civil | detection | with Learning | open source
[108] | 2021 | FloorPlanCAD: A Large-Scale CAD Drawing Dataset for Panoptic Symbol Spotting | civil | detection | with Learning | open source
[23] | 2021 | Towards Robust Object Detection in Floor Plan Images: A Data Augmentation Approach | civil | detection | with Learning | open source
[109] | 2021 | PU learning-based recognition of structural elements in architectural floor plans | civil | detection | with Learning | own + open source
[110] | 2021 | 3DPlanNet: Generating 3D Models from 2D Floor Plan Images Using Ensemble Methods | civil | detection + contextualization | with Learning | open source
[22] | 2022 | Automatic Detection and Classification of Symbols in Engineering Drawings | civil | detection | with Learning | own
[111] | 2022 | CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings | civil | detection | with Learning | open source
[112] | 2022 | GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings | civil | detection | with Learning | open source
[24] | 2022 | Mask-Aware Semi-Supervised Object Detection in Floor Plans | civil | detection | with Learning | open source
[113] | 2022 | Designing a Human-in-the-Loop System for Object Detection in Floor Plans | civil | detection | with Learning | synthetic
[114] | 2023 | Digitalization of 2D Bridge Drawings Using Deep Learning Models | civil | detection + contextualization | with Learning | own + synthetic
[115] | 2023 | Improving Symbol Detection on Engineering Drawings Using a Keypoint-Based Deep Learning Approach | civil | detection | with Learning | synthetic
[116] | 2023 | You Only Look for a Symbol Once: An Object Detector for Symbols and Regions in Documents | civil | detection | with Learning | open source
[20] | 2023 | Leveraging Deep Convolutional Neural Network for Point Symbol Recognition in Scanned Topographic Maps | civil | detection | with Learning | own
[19] | 2023 | Towards Automatic Digitalization of Railway Engineering Schematics | civil | detection | with Learning | own
[117] | 2024 | Deep learning-based text detection and recognition on architectural floor plans | civil | detection | with Learning | open source + synthetic
[118] | 1997 | Adaptive Vectorization of Line Drawing Images | civil + electrical | preprocessing | without Learning | own
[119] | 2005 | Symbol recognition via statistical integration of pixel-level constraint histograms: a new descriptor | civil + electrical | detection | with Learning | open source
[120] | 2006 | Symbol recognition with kernel density matching | civil + electrical | detection | with Learning | open source
[121] | 2006 | Symbol Spotting in Technical Drawings Using Vectorial Signatures | civil + electrical | detection | without Learning | open source
[122] | 2007 | A Bayesian classifier for symbol recognition | civil + electrical | detection | with Learning | open source
[123] | 2007 | A New Syntactic Approach to Graphic Symbol Recognition | civil + electrical | detection | without Learning | open source
[124] | 2008 | On the Combination of Ridgelets Descriptors for Symbol Recognition | civil + electrical | detection | with Learning | open source
[125] | 2009 | Graphic Symbol Recognition Using Graph Based Signature and Bayesian Network Classifier | civil + electrical | detection | with Learning | own
[126] | 2010 | A Bayesian network for combining descriptors: application to symbol recognition | civil + electrical | detection | with Learning | open source
[127] | 2011 | A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier | civil + electrical | detection | with Learning | open source
[128] | 2019 | GSD-Net: Compact Network for Pixel-Level Graphical Symbol Detection | civil + electrical | detection | with Learning | open source
[129] | 2002 | TIF2VEC, An Algorithm for Arc Segmentation in Engineering Drawings | civil + electrical + mechanical | preprocessing | without Learning | open source
[130] | 2014 | BoR: Bag-of-Relations for Symbol Retrieval | civil + electrical + P&ID | detection | without Learning | open source
[131] | 2013 | Img2UML: A System for Extracting UML Models from Images | computer science | detection + contextualization | without Learning | own
[132] | 2014 | Automatic Classification of UML Class Diagrams from Images | computer science | detection | with Learning | own
[133] | 2021 | Multiclass Classification of UML Diagrams from Images Using Deep Learning | computer science | detection | with Learning | own
[134] | 1982 | Automatic Interpretation of Lines and Text in Circuit Diagrams | electrical | detection + contextualization | without Learning | -
[135] | 1985 | Symbol recognition in electrical diagrams using probabilistic graph matching | electrical | detection | with Learning | own
[136] | 1988 | An automatic circuit diagram reader with loop-structure-based symbol recognition | electrical | detection | with Learning | open source
[137] | 1988 | A topology-based component extractor for understanding electronic circuit diagrams | electrical | detection | with Learning | own
[138] | 1990 | Translation-, rotation- and scale-invariant recognition of hand-drawn symbols in schematic diagrams | electrical | detection | with Learning | own
[139] | 1992 | Recognizing Hand-Drawn Electrical Circuit Symbols with Attributed Graph Matching | electrical | - | - | -
[140] | 1993 | Recognition of logic diagrams by identifying loops and rectilinear polylines | electrical | detection | without Learning | own
[141] | 1993 | A symbol recognition system | electrical | detection | with Learning | own
[142] | 1993 | A new system for the analysis of schematic diagrams | electrical | detection | without Learning | own
[143] | 1995 | Automatic understanding of symbol-connected diagrams | electrical | contextualization | without Learning | own
[144] | 2003 | Engineering drawings recognition using a case-based approach | electrical | detection | without Learning | own
[145] | 2003 | Symbol recognition in electronic diagrams using decision tree | electrical | detection | without Learning | own
[146] | 2004 | Extracting System-Level Understanding from Wiring Diagram Manuals | electrical | detection | without Learning | own
[147] | 2009 | A visual approach to sketched symbol recognition | electrical | detection | without Learning | own
[148] | 2009 | On-line hand-drawn electric circuit diagram recognition using 2D dynamic programming | electrical | detection | with Learning | own
[149] | 2011 | Recognition of electrical symbols in document images using morphology and geometric analysis | electrical | detection | without Learning | own
[150] | 2012 | Symbol recognition using spatial relations | electrical | detection | without Learning | own
[151] | 1995 | Electronic Schematic Recognition | electrical | detection + contextualization | without Learning | own
[152] | 2015 | Detection and identification of logic gates from document images using mathematical morphology | electrical | detection | with Learning | own
[153] | 2016 | Hand Drawn Optical Circuit Recognition | electrical | detection | with Learning | own
[154] | 2017 | Recognizing Electronic Circuits to Enrich Web Documents for Electronic Simulation | electrical | detection + contextualization | with Learning | own
[155] | 2018 | Analysis of methods for automated symbol recognition in technical drawings | electrical | detection | with Learning | own
[156] | 2019 | Automatic Abstraction of Combinational Logic Circuit from Scanned Document Page Images | electrical | contextualization | with Learning | own
[157] | 2020 | CIM/G graphics automatic generation in substation primary wiring diagram based on image recognition | electrical | detection | with Learning | own
[35] | 2021 | Graph-Based Object Detection Enhancement for Symbolic Engineering Drawings | electrical | contextualization | with Learning | own
[158] | 2021 | A public ground-truth dataset for handwritten circuit diagram images | electrical | detection | with Learning | open source
[159] | 2022 | Substation One-Line Diagram Automatic Generation Based On Image Recongnition | electrical | detection + contextualization | with Learning | own
[160] | 2022 | Symbol Spotting in Electronic Images Using Morphological Filters and Hough Transform | electrical | detection | without Learning | open source
[161] | 2023 | Instance segmentation based graph extraction for handwritten circuit diagram images | electrical | detection | with Learning | open source
[162] | 2023 | ElectroNet: An Enhanced Model for Small-Scale Object Detection in Electrical Schematic Diagrams | electrical | detection + contextualization | with Learning | own
[163] | 2023 | Single Line Electrical Drawings (SLED): A Multiclass Dataset Benchmarked by Deep Neural Networks | electrical | detection | with Learning | open source
[164] | 2023 | Intelligent Digitization of Substation One-Line Diagrams Based on Computer Vision | electrical | detection + contextualization | with Learning | synthetic
[36] | 2023 | Song, Aibo and Kun, Huang and Peng, Bowen and Chen, Rui and Zhao, Kun and Qiu, Jingyi and Wang, Kaixuan | electrical | detection + contextualization | with Learning | own
[165] | 2007 | An interactive example-driven approach to graphics recognition in engineering drawings | electrical + civil | detection | without Learning | own
[166] | 2008 | Spotting Symbols in Line Drawing Images Using Graph Representations | electrical + civil | detection | without Learning | own
[167] | 1994 | Isolating symbols from connection lines in a class of engineering drawings | electrical + P&ID | detection | without Learning | own
[168] | 1997 | A System for Recognizing a Large Class of Engineering Drawings | electrical + P&ID | detection | without Learning | own
[169] | 2014 | Accurate junction detection and characterization in line-drawing images | electrical + civil | detection | without Learning | open source
[170] | 1996 | Vector-based segmentation of text connected to graphics in engineering drawings | electrical + mechanical + civil | detection | without Learning | own
[171] | 1989 | Processing of engineering line drawings for automatic input to CAD | mechanical | preprocessing | without Learning | -
[172] | 1989 | Automatic Scanning and Interpretation of Engineering Drawings for CAD-Processes | mechanical | detection | without Learning | own
[173] | 1990 | Engineering drawing processing and vectorization system | mechanical | preprocessing | without Learning | -
[174] | 1990 | Interpretation of line drawings with multiple views | mechanical | detection + contextualization | without Learning | own
[175] | 1990 | Randomized Hough Transform (RHT) in Engineering Drawing Vectorization System | mechanical | detection | without Learning | own
[176] | 1991 | Detection of dashed lines in engineering drawings and maps | mechanical | detection | without Learning | own
[177] | 1992 | Celesstin: CAD conversion of mechanical drawings | mechanical | detection + contextualization | without Learning | own
[178] | 1992 | Dimensioning analysis | mechanical | detection | without Learning | own
[179] | 1992 | Knowledge-directed interpretation of mechanical engineering drawings | mechanical | detection | without Learning | own
[180] | 1993 | Recognition of dimensions in engineering drawings based on arrowhead | mechanical | detection | without Learning | own
[181] | 1994 | Detection of dimension sets in engineering drawings | mechanical | detection | without Learning | own
[182] | 1994 | Knowledge organization and interpretation process in engineering drawing interpretation | mechanical | detection | without Learning | own
[183] | 1994 | Syntactic analysis of technical drawing dimensions | mechanical | detection | without Learning | own
[184] | 1995 | Recognition of dimension sets and integration with vectorized engineering drawings | mechanical | detection + contextualization | without Learning | own
[185] | 1995 | Vector-based arc segmentation in the machine drawing understanding system environment | mechanical | detection | without Learning | own
[186] | 1996 | Functional parts detection in engineering drawings: Looking for the screws | mechanical | detection | without Learning | own
[187] | 1996 | A clustering-based approach to the separation of text strings from mixed text/graphics documents | mechanical | detection | with Learning | own
[188] | 1996 | Perfecting Vectorized Mechanical Drawings | mechanical | preprocessing | without Learning | own
[189] | 1996 | Arrowhead recognition during automated data capture | mechanical | detection | without Learning | own
[190] | 1997 | Orthogonal Zig-Zag: An algorithm for vectorizing engineering drawings compared with Hough Transform | mechanical | detection | without Learning | own
[191] | 1998 | Detection of text regions from digital engineering drawings | mechanical | detection | without Learning | own
[192] | 1998 | Generating multiple new designs from a sketch | mechanical | detection + contextualization | without Learning | own
[193] | 1998 | Segmentation and Recognition of Dimensioning Text from Engineering Drawings | mechanical | detection | without Learning | own
[194] | 1998 | A system for automatic recognition of engineering drawing entities | mechanical | detection + preprocessing | without Learning | own
[195] | 1999 | Automated CAD conversion with the Machine Drawing Understanding System: concepts, algorithms, and performance | mechanical | detection | without Learning | own
[196] | 1999 | Automatic extraction of manufacturable features from CADD models using syntactic pattern recognition techniques | mechanical | detection + contextualization | without Learning | own
[197] | 1999 | Dimension sets detection in technical drawings | mechanical | detection | without Learning | own
[198] | 1999 | A complete system for the intelligent interpretation of engineering drawings | mechanical | detection + contextualization | without Learning | own
[199] | 2000 | Symbol and character recognition: application to engineering drawings | mechanical | detection | with Learning | own
[200] | 2000 | Engineering Drawing Database Retrieval Using Statistical Pattern Spotting Techniques | mechanical | detection | with Learning | own
[201] | 2001 | Intelligent system for extraction of product data from CADD models | mechanical | detection + contextualization | with Learning | own
[202] | 2004 | Strategy for Line Drawing Understanding | mechanical | detection | without Learning | own
[203] | 2004 | A new way to detect arrows in line drawings | mechanical | detection | without Learning | own
[204] | 2009 | Information extraction from scanned engineering drawings | mechanical | detection + contextualization | without Learning | own
[205] | 2010 | An information extraction of title panel in engineering drawings and automatic generation system of three statistical tables | mechanical | detection + contextualization | without Learning | own
[206] | 2011 | From engineering diagrams to engineering models: Visual recognition and applications | mechanical | detection + contextualization | with Learning | synthetic
[207] | 2016 | Dimensional Arrow Detection from CAD Drawings | mechanical | detection | without Learning | own
[40] | 2017 | ConvNet-Based Optical Recognition for Engineering Drawings | mechanical | detection | with Learning | own
[208] | 2019 | Detection of Primitives in Engineering Drawing using Genetic Algorithm | mechanical | preprocessing | without Learning | open source
[209] | 2021 | Extraction of dimension requirements from engineering drawings for supporting quality control in production processes | mechanical | detection | with Learning | own
[210] | 2021 | An Automated Engineering Assistant: Learning Parsers for Technical Drawings | mechanical | detection + contextualization | with Learning | own
[211] | 2022 | Data Augmentation of Engineering Drawings for Data-Driven Component Segmentation | mechanical | detection | with Learning | synthetic
[37] | 2022 | AI-Based Engineering and Production Drawing Information Extraction | mechanical | detection | with Learning | synthetic
[212] | 2022 | Unsupervised and hybrid vectorization techniques for 3D reconstruction of engineering drawings | mechanical | detection + contextualization | with Learning | own
[213] | 2023 | Graph neural network-enabled manufacturing method classification from engineering drawings | mechanical | detection + contextualization | with Learning | own
[214] | 2023 | An Approach to Engineering Drawing Organization: Title Block Detection and Processing | mechanical | detection + contextualization | with Learning | own
[39] | 2023 | AI-Based Method for Frame Detection in Engineering Drawings | mechanical | detection | with Learning | own
[215] | 2023 | Component segmentation of engineering drawings using Graph Convolutional Networks | mechanical | detection + contextualization | with Learning | own
[38] | 2023 | Integration of Deep Learning for Automatic Recognition of 2D Engineering Drawings | mechanical | detection + contextualization | with Learning | own
[216] | 2024 | Tolerance Information Extraction for Mechanical Engineering Drawings–A Digital Image Processing and Deep Learning-based Model | mechanical | detection | with Learning | own
[217] | 2012 | An improved example-driven symbol recognition approach in engineering drawings | mechanical + civil | detection | without Learning | own + open source
[218] | 2018 | Hand-written and machine-printed text classification in architecture, engineering & construction documents | mechanical + civil | detection | with Learning | own
[219] | 2005 | An image-based, trainable symbol recognizer for hand-drawn sketches | mechanical + electrical | detection | with Learning | own
[220] | 2006 | An Efficient Graph-Based Symbol Recognizer | mechanical + electrical | detection | with Learning | own
[221] | 2007 | An efficient graph-based recognizer for hand-drawn symbols | mechanical + electrical | detection | with Learning | own
[206] | 2011 | Neural network-based symbol recognition using a few labeled samples | mechanical + electrical | detection | with Learning | synthetic + open source
[222] | 1994 | Graphic symbol recognition using a signature technique | P&ID | detection | without Learning | own
[223] | 1998 | Computer interpretation of process and instrumentation drawings | P&ID | detection | without Learning | own
[224] | 2006 | Using process topology in plant-wide control loop performance assessment | P&ID | contextualization | without Learning | -
[225] | 2009 | Graphic Symbol Recognition Using Auto Associative Neural Network Model | P&ID | detection | with Learning | own
[226] | 2015 | A 2D Engineering Drawing and 3D Model Matching Algorithm for Process Plant | P&ID | detection + contextualization | without Learning | own
[227] | 2016 | Automatische Analyse und Erkennung graphischer Inhalte von SVG-basierten Engineering-Dokumenten | P&ID | detection | without Learning | own
[228] | 2017 | Heuristics-Based Detection to Improve Text/Graphics Segmentation in Complex Engineering Drawings | P&ID | detection | without Learning | own (industry)
[229] | 2018 | Symbols Classification in Engineering Drawings | P&ID | detection | with Learning | own
[230] | 2019 | A Digitization and Conversion Tool for Imaged Drawings to Intelligent Piping and Instrumentation Diagrams (P&ID) | P&ID | detection + contextualization | without Learning | own
[231] | 2019 | Automatic Information Extraction from Piping and Instrumentation Diagrams | P&ID | detection | with Learning | own
[232] | 2019 | Applying graph matching techniques to enhance reuse of plant design information | P&ID | contextualization | without Learning | own
[233] | 2019 | Features Recognition from Piping and Instrumentation Diagrams in Image Format Using a Deep Learning Network | P&ID | detection | with Learning | own
[26] | 2020 | Deep learning for symbols detection and classification in engineering drawings | P&ID | detection | with Learning | own (industry)
[234] | 2020 | Symbols in Engineering Drawings (SiED): An Imbalanced Dataset Benchmarked by Convolutional Neural Networks | P&ID | detection | with Learning | open source
[235] | 2020 | Object Detection in Design Diagrams with Machine Learning | P&ID | detection | with Learning | synthetic
[32] | 2020 | Deep Neural Network for Automatic Image Recognition of Engineering Diagrams | P&ID | detection | with Learning | own
[33] | 2020 | CNN-Based Symbol Recognition in Piping Drawings | P&ID | detection | with Learning | synthetic
[236] | 2020 | Graph-Based Manipulation Rules for Piping and Instrumentation Diagrams | P&ID | contextualization | without Learning | own
[237] | 2020 | Deep Learning for Text Detection and Recognition in Complex Engineering Diagrams | P&ID | detection | with Learning | own
[238] | 2020 | Automatic Digitization of Engineering Diagrams using Deep Learning and Graph Search | P&ID | detection + contextualization | with Learning | own
[239] | 2020 | Reducing human effort in engineering drawing validation | P&ID | detection + contextualization | with Learning | own
[240] | 2020 | Integrating 2D and 3D Digital Plant Information Towards Automatic Generation of Digital Twins | P&ID | detection + contextualization | without Learning | own
[241] | 2020 | Component detection in piping and instrumentation diagrams of nuclear power plants based on neural networks | P&ID | detection + contextualization | with Learning | open source
[242] | 2021 | OSSR-PID: One-Shot Symbol Recognition in P&ID Sheets using Path Sampling and GCN | P&ID | detection | with Learning | synthetic
[243] | 2021 | Deep Learning-Based Method to Recognize Line Objects and Flow Arrows from Image-Format Piping and Instrumentation Diagrams for Digitization | P&ID | detection | with Learning | own (industry)
[244] | 2021 | Group of components detection in engineering drawings based on graph matching | P&ID | contextualization | without Learning | own
[245] | 2021 | Automatic Digitization of Engineering Diagrams using Intelligent Algorithms | P&ID | detection | without Learning | own
[246] | 2021 | Engineering Drawing Validation Based on Graph Convolutional Networks | P&ID | detection | with Learning | own
[247] | 2021 | Digitize-PID: Automatic Digitization of Piping and Instrumentation Diagrams | P&ID | detection + contextualization | with Learning | own
[248] | 2021 | Automatic digital twin data model generation of building energy systems from piping and instrumentation diagrams | P&ID | detection | with Learning | own
[30] | 2021 | Identification of Objects in Oilfield Infrastructure using Engineering Diagram and Machine Learning Methods | P&ID | detection | with Learning | own
[249] | 2021 | Object detection for P&ID images using various deep learning techniques | P&ID | detection | with Learning | own
[27] | 2022 | Pattern Recognition Method for Detecting Engineering Errors on Technical Drawings | P&ID | detection | with Learning | own
[250] | 2022 | Enhanced Symbol Recognition based on Advanced Data Augmentation for Engineering Diagrams | P&ID | detection | with Learning | own + synthetic
[251] | 2022 | Modern Deep Learning Approaches for Symbol Detection in Complex Engineering Drawings | P&ID | detection | with Learning | own
[252] | 2022 | End-to-end digitization of image format piping and instrumentation diagrams at an industrially applicable level | P&ID | detection + contextualization | with Learning | own
[253] | 2022 | Automated Valve Detection in Piping and Instrumentation (P&ID) Diagrams | P&ID | detection | with Learning | own
[254] | 2023 | A Symbol Recognition System for Single-Line Diagrams Developed Using a Deep-Learning Approach | P&ID | detection | with Learning | own + synthetic
[239] | 2023 | Reducing human effort in engineering drawing validation | P&ID | contextualization | with Learning | own
[29] | 2023 | Advancing P&ID Digitization with YOLOv5 | P&ID | detection | with Learning | own
[28] | 2023 | Improved P&ID Symbol Detection Algorithm Based on YOLOv5 Network | P&ID | detection | with Learning | own
[31] | 2023 | A Complete Piping Identification Solution for Piping and Instrumentation Diagrams | P&ID | detection | with Learning | own
[255] | 2023 | Automatic anomaly detection in engineering diagrams using machine learning | P&ID | detection + contextualization | with Learning | own
[256] | 2023 | Digitization of chemical process flow diagrams using deep convolutional neural networks | P&ID | detection | with Learning | own
[257] | 2023 | Extraction of line objects from piping and instrumentation diagrams using an improved continuous line detection algorithm | P&ID | detection + contextualization | with Learning | own
[258] | 2023 | Classification of Functional Types of Lines in P&IDs Using a Graph Neural Network | P&ID | detection | with Learning | own
[259] | 2023 | Demonstrating Automated Generation of Simulation Models from Engineering Diagrams | P&ID | detection + contextualization | with Learning | synthetic
[31] | 2023 | A Complete Piping Identification Solution for Piping and Instrumentation Diagrams | P&ID | detection + contextualization | with Learning | own
[260] | 2024 | Rule-based continuous line classification using shape and positional relationships between objects in piping and instrumentation diagram | P&ID | detection | without Learning | own
[261] | 2024 | Image format pipeline and instrument diagram recognition method based on deep learning | P&ID | detection + contextualization | with Learning | open source
[262] | 2024 | Semi-supervised symbol detection for piping and instrumentation drawings | P&ID | detection + contextualization | with Learning | own
[34] | 2024 | Auto-Routing Systems (ARSs) with 3D Piping for Sustainable Plant Projects Based on Artificial Intelligence (AI) and Digitalization of 2D Drawings and Specifications | P&ID | detection + contextualization | with Learning | own

References

  1. Vlah, D.; Kastrin, A.; Povh, J.; Vukašinović, N. Data-driven engineering design: A systematic review using scientometric approach. Adv. Eng. Inform. 2022, 54, 101774. [Google Scholar] [CrossRef]
  2. Pakkanen, J.; Huhtala, P.; Juuti, T.; Lehtonen, T. Achieving Benefits with Design Reuse in Manufacturing Industry. Procedia CIRP 2016, 50, 8–13. [Google Scholar] [CrossRef]
  3. Isaksson, O.; Hallstedt, S.I.; Rönnbäck, A.Ö. Digitalisation, sustainability and servitisation: Consequences on product development capabilities in manufacturing firms. In Proceedings of the DS 91: NordDesign 2018, Linköping, Sweden, 14–17 August 2018. [Google Scholar]
  4. Hahne, M. Systematisches Konstruieren: Praxisnah und Prägnant; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  5. Produktentwicklung und Projektmanagement. Konstruktionsmethodik—Methodisches Entwickeln von Lösungsprinzipien; VDI: Dusseldorf, Germany, 1997. [Google Scholar]
  6. Roth, K. Konstruieren mit Konstruktionskatalogen: Band 1: Konstruktionslehre, 3rd ed.; Erweitert und Neu Gestaltet; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar] [CrossRef]
  7. Moreno-García, C.F.; Elyan, E.; Jayne, C. New trends on digitisation of complex engineering drawings. Neural Comput. Appl. 2019, 31, 1695–1712. [Google Scholar] [CrossRef]
  8. Valveny, E.; Dosch, P. Symbol recognition contest: A synthesis. In Proceedings of the International Workshop on Graphics Recognition, Barcelona, Spain, 30–31 July 2003; Springer: Berlin/Heidelberg, Germany, 2003; pp. 368–385. [Google Scholar]
  9. Goyal, S.; Mistry, V.; Chattopadhyay, C.; Bhatnagar, G. BRIDGE: Building Plan Repository for Image Description Generation, and Evaluation. In Proceedings of the 2019 International Conference on Document Analysis and Recognition (ICDAR), Sydney, Australia, 20–25 September 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  10. Delalandre, M.; Valveny, E.; Pridmore, T.; Karatzas, D. Generation of synthetic documents for performance evaluation of symbol recognition & spotting systems. Int. J. Doc. Anal. Recognit. (IJDAR) 2010, 13, 187–207. [Google Scholar] [CrossRef]
  11. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
  12. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  13. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25. [Google Scholar]
  14. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  15. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015; Volume 28. [Google Scholar]
  16. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  17. Rezvanifar, A.; Cote, M.; Albu, A.B. Symbol Spotting on Digital Architectural Floor Plans Using a Deep Learning-based Framework. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; IEEE: Piscataway, NJ, USA, 2020. [Google Scholar] [CrossRef]
  18. Rezvanifar, A.; Cote, M.; Branzan Albu, A. Symbol spotting for architectural drawings: State-of-the-art and new industry-driven developments. IPSJ Trans. Comput. Vis. Appl. 2019, 11, 2. [Google Scholar] [CrossRef]
  19. Stefenon, S.F.; Cristoforetti, M.; Cimatti, A. Towards Automatic Digitalization of Railway Engineering Schematics. In Proceedings of the International Conference of the Italian Association for Artificial Intelligence, Rome, Italy, 6–9 November 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 453–466. [Google Scholar]
  20. Huang, W.; Sun, Q.; Yu, A.; Guo, W.; Xu, Q.; Wen, B.; Xu, L. Leveraging Deep Convolutional Neural Network for Point Symbol Recognition in Scanned Topographic Maps. ISPRS Int. J. Geo-Inf. 2023, 12, 128. [Google Scholar] [CrossRef]
  21. Surikov, I.Y.; Nakhatovich, M.A.; Belyaev, S.Y.; Savchuk, D.A. Floor Plan Recognition and Vectorization Using Combination UNet, Faster-RCNN, Statistical Component Analysis and Ramer-Douglas-Peucker. In Proceedings of the International Conference on Computing Science, Communication and Security, Gujarat, India, 26–27 March 2020; Springer: Singapore, 2020; pp. 16–28. [Google Scholar] [CrossRef]
  22. Sarkar, S.; Pandey, P.; Kar, S. Automatic Detection and Classification of Symbols in Engineering Drawings. arXiv 2022, arXiv:2204.13277. [Google Scholar]
  23. Mishra, S.; Hashmi, K.A.; Pagani, A.; Liwicki, M.; Stricker, D.; Afzal, M.Z. Towards Robust Object Detection in Floor Plan Images: A Data Augmentation Approach. Appl. Sci. 2021, 11, 11174. [Google Scholar] [CrossRef]
  24. Shehzadi, T.; Hashmi, K.A.; Pagani, A.; Liwicki, M.; Stricker, D.; Afzal, M.Z. Mask-Aware Semi-Supervised Object Detection in Floor Plans. Appl. Sci. 2022, 12, 9398. [Google Scholar] [CrossRef]
  25. Goyal, S. Fine Grained Feature Representation Using Computer Vision Techniques for Understanding Indoor Space. Ph.D. Thesis, Indian Institute of Technology Jodhpur, Jodhpur, India, 2021. [Google Scholar]
  26. Elyan, E.; Jamieson, L.; Ali-Gombe, A. Deep Learning for Symbols Detection and Classification in Engineering Drawings; Elsevier: Amsterdam, The Netherlands, 2020; Volume 129, pp. 91–102. [Google Scholar] [CrossRef]
  27. Dzhusupova, R.; Banotra, R.; Bosch, J.; Olsson, H.H. Pattern Recognition Method for Detecting Engineering Errors on Technical Drawings. In Proceedings of the 2022 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 6–9 June 2022; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  28. Xiao, X.; Li, Z.; Zhao, S.; Yang, L.; Zhao, F.; Ge, C. Improved P&ID Symbol Detection Algorithm Based on YOLOv5 Network. In Proceedings of the 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Honolulu, HI, USA, 1–4 October 2023; pp. 120–126. [Google Scholar] [CrossRef]
  29. Gajbhiye, S.M.; Bhamre, S.; Tadepalli, L.T.; Pillai, M.; Uplaonkar, D. Advancing P&ID Digitization with YOLOv5. In Proceedings of the 2023 International Conference on Integrated Intelligence and Communication Systems (ICIICS), Kalaburagi, India, 24–25 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  30. Ismail, M.H.A.B. Identification of Objects in Oilfield Infrastructure using Engineering Diagram and Machine Learning Methods. In Proceedings of the 2021 IEEE Symposium on Computers & Informatics (ISCI), Kuala Lumpur, Malaysia, 16 October 2021; pp. 19–24. [Google Scholar] [CrossRef]
  31. Liu, S.; Li, Z.; Zhao, S.; Yang, L.; Zhao, F.; Ge, C. A Complete Piping Identification Solution for Piping and Instrumentation Diagrams. In Proceedings of the 2023 IEEE International Conference on High Performance Computing & Communications, Data Science & Systems, Smart City & Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), Melbourne, Australia, 17–21 December 2023; pp. 9–15. [Google Scholar] [CrossRef]
  32. Yun, D.Y.; Seo, S.K.; Zahid, U.; Lee, C.J. Deep Neural Network for Automatic Image Recognition of Engineering Diagrams. Appl. Sci. 2020, 10, 4005. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Cai, J.; Cai, H. CNN-Based Symbol Recognition in Piping Drawings; American Society of Civil Engineers: Reston, VA, USA, 2020. [Google Scholar] [CrossRef]
  34. Kang, D.H.; Choi, S.W.; Lee, E.B.; Kang, S.O. Auto-Routing Systems (ARSs) with 3D Piping for Sustainable Plant Projects Based on Artificial Intelligence (AI) and Digitalization of 2D Drawings and Specifications. Sustainability 2024, 16, 2770. [Google Scholar] [CrossRef]
  35. Mizanur Rahman, S.; Bayer, J.; Dengel, A. Graph-Based Object Detection Enhancement for Symbolic Engineering Drawings. In Proceedings of the Document Analysis and Recognition—ICDAR 2021 Workshops, Lausanne, Switzerland, 5–10 September 2021; Lecture Notes in Computer Science. Barney Smith, E.H., Pal, U., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 12916, pp. 74–90. [Google Scholar] [CrossRef]
  36. Song, A.; Kun, H.; Peng, B.; Chen, R.; Zhao, K.; Qiu, J.; Wang, K. EDRS: An Automatic System to Recognize Electrical Drawings. In Proceedings of the 2021 China Automation Congress (CAC), Beijing, China, 22–24 October 2021; pp. 5438–5443. [Google Scholar] [CrossRef]
  37. Haar, C.; Kim, H.; Koberg, L. AI-Based Engineering and Production Drawing Information Extraction. In Proceedings of the International Conference on Flexible Automation and Intelligent Manufacturing, Detroit, MI, USA, 19–23 June 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 374–382. [Google Scholar]
  38. Lin, Y.H.; Ting, Y.H.; Huang, Y.C.; Cheng, K.L.; Jong, W.R. Integration of Deep Learning for Automatic Recognition of 2D Engineering Drawings. Machines 2023, 11, 802. [Google Scholar] [CrossRef]
  39. Kashevnik, A.; Ali, A.; Mayatin, A. AI-Based Method for Frame Detection in Engineering Drawings. In Proceedings of the 2023 International Russian Smart Industry Conference (SmartIndustryCon), Sochi, Russia, 27–31 March 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 225–229. [Google Scholar]
  40. Brock, A.; Lim, T.; Ritchie, J.M.; Weston, N. ConvNet-Based Optical Recognition for Engineering Drawings. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Cleveland, OH, USA, 6–9 August 2017. [Google Scholar] [CrossRef]
  41. Bickel, S.; Goetz, S.; Wartzack, S. From Sketches to Graphs: A Deep Learning Based Method for Detection and Contextualisation of Principle Sketches in the Early Phase of Product Development. Proc. Des. Soc. 2023, 3, 1975–1984. [Google Scholar] [CrossRef]
  42. Bickel, S.; Schleich, B.; Wartzack, S. Detection and classification of symbols in principle sketches using deep learning. Proc. Des. Soc. 2021, 1, 1183–1192. [Google Scholar] [CrossRef]
  43. Seff, A.; Ovadia, Y.; Zhou, W.; Adams, R.P. Sketchgraphs: A large-scale dataset for modeling relational geometry in computer-aided design. arXiv 2020, arXiv:2007.08506. [Google Scholar]
  44. Onshape, P. API-integration with Onshape. Available online: https://www.onshape.com/de/features/integrations (accessed on 8 July 2024).
  45. Clark, A. Pillow (PIL Fork) Documentation. 2015. Available online: https://buildmedia.readthedocs.org/media/pdf/pillow/latest/pillow.pdf (accessed on 23 April 2024).
  46. GitHub. Object-Detection Topic. 2024. Available online: https://github.com/topics/object-detection (accessed on 23 April 2024).
  47. Roth, K. Konstruieren mit Konstruktionskatalogen: Band 2: Kataloge; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  48. Roth, K. Konstruieren mit Konstruktionskatalogen: Band 3: Verbindungen und Verschlüsse, Lösungsfindung, 2nd ed.; Wesentlich Erweitert und neu Gestaltet; Springer eBook Collection Computer Science and Engineering; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar] [CrossRef]
  49. Labisch, S.; Weber, C. Technisches Zeichnen: Intensiv und Effektiv Lernen und üben, 2nd ed.; überarb. aufl; Studium, Vieweg: Wiesbaden, Germany, 2005. [Google Scholar] [CrossRef]
  50. DIN ISO 3952-2:1995 DE; Vereinfachte Darstellungen in der Kinematik. Standard, International Organization for Standardization: Geneva, Switzerland, 1995.
  51. List, R. CATIA V5—Grundkurs für Maschinenbauer: Bauteil- und Baugruppenkonstruktion, Zeichnungsableitung, 4th ed.; aktualisierte und erw. aufl.; Studium, Vieweg + Teubner: Wiesbaden, Germany, 2009. [Google Scholar] [CrossRef]
  52. Madsen, D.A.; Madsen, D.P.; Standiford, K.; Krulikowski, A. Engineering Drawing & Design, 6th ed.; Cengage Learning: South Melbourne, VIC, Australia, 2017. [Google Scholar]
  53. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  54. Abdulla, W. Mask R-CNN for Object Detection and Instance Segmentation on Keras and TensorFlow. 2017. Available online: https://github.com/matterport/Mask_RCNN (accessed on 23 April 2024).
  55. Jocher, G.; Stoken, A.; Chaurasia, A.; Borovec, J.; Code, N.; Xie, T.; Kwon, Y.; Michael, K.; Changyu, L.; Fang, J.; et al. ultralytics/yolov5: v6.0—YOLOv5n ’Nano’ models, Roboflow integration, TensorFlow export, OpenCV DNN support. Zenodo 2021. [Google Scholar] [CrossRef]
  56. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.Y.; Girshick, R. Detectron2. 2019. Available online: https://github.com/facebookresearch/detectron2 (accessed on 23 April 2024).
  57. Lupinetti, K.; Pernot, J.P.; Monti, M.; Giannini, F. Content-based CAD assembly model retrieval: Survey and future challenges. Comput.-Aided Des. 2019, 113, 62–81. [Google Scholar] [CrossRef]
  58. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; Volume 27. [Google Scholar]
  59. Vahdat, A.; Kautz, J. NVAE: A deep hierarchical variational autoencoder. In Proceedings of the Advances in Neural Information Processing Systems 2020, Virtual, 6–12 December 2020; Volume 33, pp. 19667–19679. [Google Scholar]
  60. Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2014, arXiv:1312.6114. [Google Scholar]
  61. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  62. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  63. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  64. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  65. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 213–229. [Google Scholar]
  66. Law, H.; Deng, J. Cornernet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750. [Google Scholar]
  67. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  68. Kasturi, R.; Bow, S.T.; El-Masri, W.; Shah, J.; Gattiker, J.R.; Mokate, U.B. A system for interpretation of line drawings. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 978–992. [Google Scholar] [CrossRef]
  69. Tanigawa, S.; Hori, O.; Shimotsuji, S. Precise line detection from an engineering drawing using a figure fitting method based on contours and skeletons. In Proceedings of the 12th IAPR International Conference on Pattern Recognition (Cat. No.94CH3440-5), Jerusalem, Israel, 9–13 October 1994. [Google Scholar] [CrossRef]
  70. Han, C.C.; Fan, K.C. Skeleton generation of engineering drawings via contour matching. Pattern Recognit. 1994, 27, 261–275. [Google Scholar] [CrossRef]
  71. den Hartog, J.E.; ten Kate, T.K. Finding arrows in utility maps using a neural network. In Proceedings of the 12th IAPR International Conference on Pattern Recognition (Cat. No.94CH3440-5), Jerusalem, Israel, 9–13 October 1994; Volume 2, pp. 190–194. [Google Scholar] [CrossRef]
  72. Messmer, B.T.; Bunke, H. Automatic learning and recognition of graphical symbols in engineering drawings. In Graphics Recognition Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1996; pp. 123–134. [Google Scholar] [CrossRef]
  73. Chiang, J.Y.; Tue, S.; Leu, Y.C. A new algorithm for line image vectorization. Pattern Recognit. 1998, 31, 1541–1549. [Google Scholar] [CrossRef]
  74. Sauvola, J.; Pietikäinen, M. Adaptive document image binarization. Pattern Recognit. 2000, 33, 225–236. [Google Scholar] [CrossRef]
  75. Hilaire, X.; Tombre, K. Robust and accurate vectorization of line drawings. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 890–904. [Google Scholar] [CrossRef] [PubMed]
  76. Zhang, J.Y.; Zhao, L.F.; Hao, Y.P. Multi-Level Block Information Extraction in Engineering Drawings Based on Depth-First Algorithm. Autom. Equip. Syst. 2012, 468–471, 2100–2103. [Google Scholar] [CrossRef]
  77. Rezaei, S.B.; Shanbehzadeh, J.; Sarrafzadeh, A. Adaptive document image skew estimation. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2017—IMECS 2017, Hong Kong, China, 15–17 March 2017; p. 97898814. [Google Scholar]
  78. Liu, T.; Hua, Q.; Yuan, S.; Yin, L.; Cheng, G. Anchor Point based Hough Transformation for Automatic Line Detection of Engineering Drawings. In Proceedings of the 2019 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 21–22 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  79. Vedaldi, A.; Bischof, H.; Brox, T.; Frahm, J.M. Deep Vectorization of Technical Drawings; Springer International Publishing: Cham, Switzerland, 2020; Volume 12358. [Google Scholar] [CrossRef]
  80. Yang, D.S.; Webster, J.L.; Rendell, L.A.; Garrett, J.H.; Shaw, D.S. Management of graphical symbols in a CAD environment: A neural network approach. In Proceedings of the 1993 IEEE Conference on Tools with AI (TAI-93), Boston, MA, USA, 8–11 November 1993; IEEE: Piscataway, NJ, USA, 1993; pp. 272–279. [Google Scholar] [CrossRef]
  81. Yang, D.; Rendell, L.A.; Webster, J.L.; Shaw, D.S.; Garrett, J.H., Jr. Symbol recognition in a CAD environment using a neural network. Int. J. Artif. Intell. Tools 1994, 3, 157–185. [Google Scholar] [CrossRef]
  82. Lladós, J.; López-Krahe, J.; Martí, E. A system to understand hand-drawn floor plans using subgraph isomorphism and Hough transform. Mach. Vis. Appl. 1997, 10, 150–158. [Google Scholar] [CrossRef]
  83. Ah-Soon, C. A constraint network for symbol detection in architectural drawings. In Graphics Recognition Algorithms and Systems; Springer: Berlin/Heidelberg, Germany, 1998; pp. 80–90. [Google Scholar] [CrossRef]
  84. Dosch, P.; Tombre, K.; Ah-Soon, C.; Masini, G. A complete system for the analysis of architectural drawings. Int. J. Doc. Anal. Recognit. 2000, 3, 102–116. [Google Scholar] [CrossRef]
  85. Ah-Soon, C.; Tombre, K. Architectural symbol recognition using a network of constraints. Pattern Recognit. Lett. 2001, 22, 231–248. [Google Scholar] [CrossRef]
  86. Lladós, J.; Martí, E.; Villanueva, J.J. Symbol recognition by error-tolerant subgraph matching between region adjacency graphs. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1137–1143. [Google Scholar] [CrossRef]
  87. Song, J.; Su, F.; Tai, C.L.; Cai, S. An object-oriented progressive-simplification-based vectorization system for engineering drawings: Model, algorithm, and performance. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1048–1060. [Google Scholar] [CrossRef]
  88. Valveny, E.; Martı, E. A model for image generation and symbol recognition through the deformation of lineal shapes. Pattern Recognit. Lett. 2003, 24, 2857–2867. [Google Scholar] [CrossRef]
  89. Cao, Y.; Li, H.; Liang, Y. Using engineering drawing interpretation for automatic detection of version information in CADD engineering drawing. Autom. Constr. 2005, 14, 361–367. [Google Scholar] [CrossRef]
  90. Lu, T.; Yang, H.; Yang, R.; Cai, S. Automatic analysis and integration of architectural drawings. Int. J. Doc. Anal. Recognit. (IJDAR) 2007, 9, 31–47. [Google Scholar] [CrossRef]
  91. Lu, T.; Yang, Y.; Yang, R.; Cai, S. Knowledge Extraction from Structured Engineering Drawings. In Proceedings of the 2008 Fifth International Conference on Fuzzy Systems and Knowledge Discovery, Jinan, China, 18–20 October 2008; IEEE: Piscataway, NJ, USA, 2008. [Google Scholar] [CrossRef]
  92. Le Bodic, P.; Locteau, H.; Adam, S.; Héroux, P.; Lecourtier, Y.; Knippel, A. Symbol Detection Using Region Adjacency Graphs and Integer Linear Programming. In Proceedings of the 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009; IEEE: Piscataway, NJ, USA, 2009. [Google Scholar] [CrossRef]
  93. Rusiñol, M.; Lladós, J.; Sánchez, G. Symbol spotting in vectorized technical drawings through a lookup table of region strings. Pattern Anal. Appl. 2010, 13, 321–331. [Google Scholar] [CrossRef]
  94. Nayef, N.; Breuel, T.M. Statistical Grouping for Segmenting Symbols Parts from Line Drawings, with Application to Symbol Spotting. In Proceedings of the 2011 International Conference on Document Analysis and Recognition, Beijing, China, 18–21 September 2011; IEEE: Piscataway, NJ, USA, 2011. [Google Scholar] [CrossRef]
  95. Barducci, A.; Marinai, S. Object recognition in floor plans by graphs of white connected components. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 298–301. [Google Scholar]
  96. Nayef, N. Geometric-based Symbol Spotting and Retrieval in Technical Line Drawings. Ph.D. Thesis, Technische Universität Kaiserslautern, Kaiserslautern, Germany, 2013. [Google Scholar]
  97. Nayef, N.; Breuel, T.M. Building a Symbol Library from Technical Drawings by Identifying Repeating Patterns. In Graphics Recognition. New Trends and Challenges; Springer: Berlin/Heidelberg, Germany, 2013; pp. 69–78. [Google Scholar] [CrossRef]
  98. Nayef, N.; Breuel, T.M. Combining geometric matching with SVM to improve symbol spotting. In Proceedings of the IS&T/SPIE Electronic Imaging Symposium, Burlingame, CA, USA, 3–7 February 2013; Volume 8658, pp. 141–149. [Google Scholar] [CrossRef]
  99. Nayef, N.; Breuel, T.M. Efficient symbol retrieval by building a symbol index from a collection of line drawings. In Proceedings of the IS&T-SPIE Electronic Imaging Symposium, Burlingame, CA, USA, 5–7 February 2013; pp. 320–331. [Google Scholar] [CrossRef]
  100. Dutta, A.; Lladós, J.; Pal, U. A symbol spotting approach in graphical documents by hashing serialized graphs. Pattern Recognit. 2013, 46, 752–768. [Google Scholar] [CrossRef]
  101. Zhang, H.; Li, X. Data Extraction from DXF File and Visual Display. In Proceedings of the HCI International 2014—Posters’ Extended Abstracts: International Conference, HCI International 2014, Heraklion, Greece, 22–27 June 2014; Proceedings, Part I 16. Springer: Cham, Switzerland, 2014; pp. 286–291. [Google Scholar] [CrossRef]
  102. Banerjee, P.; Choudhary, S.; Das, S.; Majumdar, H.; Roy, R.; Chaudhuri, B.B. Automatic Hyperlinking of Engineering Drawing Documents. In Proceedings of the 2016 12th IAPR Workshop on Document Analysis Systems (DAS), Santorini, Greece, 11–14 April 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar] [CrossRef]
  103. Riba, P.; Dutta, A.; Llados, J.; Fornes, A. Graph-Based Deep Learning for Graphics Classification. In Proceedings of the 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar] [CrossRef]
  104. Drapeau, J.; Géraud, T.; Coustaty, M.; Chazalon, J.; Burie, J.C.; Eglin, V.; Bres, S. Extraction of Ancient Map Contents Using Trees of Connected Components. In Graphics Recognition. Current Trends and Evolutions; Springer: Cham, Switzerland, 2018; pp. 115–130. [Google Scholar] [CrossRef]
  105. Ziran, Z.; Marinai, S. Object Detection in Floor Plan Images. In Proceedings of the Artificial Neural Networks in Pattern Recognition: 8th IAPR TC3 Workshop, ANNPR 2018, Siena, Italy, 19–21 September 2018; Proceedings 8. Springer: Cham, Switzerland, 2018; pp. 383–394. [Google Scholar] [CrossRef]
  106. Renton, G.; Heroux, P.; Gauzere, B.; Adam, S. Graph Neural Network for Symbol Detection on Document Images. In Proceedings of the 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), Sydney, Australia, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  107. Goyal, S.; Chattopadhyay, C.; Bhatnagar, G. Knowledge-driven description synthesis for floor plan interpretation. Int. J. Doc. Anal. Recognit. (IJDAR) 2021, 24, 19–32. [Google Scholar] [CrossRef]
  108. Fan, Z.; Zhu, L.; Li, H.; Chen, X.; Zhu, S.; Tan, P. FloorPlanCAD: A Large-Scale CAD Drawing Dataset for Panoptic Symbol Spotting. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
  109. Evangelou, I.; Savelonas, M.; Papaioannou, G. PU learning-based recognition of structural elements in architectural floor plans. Multimed. Tools Appl. 2021, 80, 13235–13252. [Google Scholar] [CrossRef]
  110. Park, S.; Kim, H. 3DPlanNet: Generating 3D Models from 2D Floor Plan Images Using Ensemble Methods. Electronics 2021, 10, 2729. [Google Scholar] [CrossRef]
  111. Fan, Z.; Chen, T.; Wang, P.; Wang, Z. CADTransformer: Panoptic Symbol Spotting Transformer for CAD Drawings. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  112. Zheng, Z.; Li, J.; Zhu, L.; Li, H.; Petzold, F.; Tan, P. GAT-CADNet: Graph Attention Network for Panoptic Symbol Spotting in CAD Drawings. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  113. Jakubik, J.; Hemmer, P.; Vössing, M.; Blumenstiel, B.; Bartos, A.; Mohr, K. Designing a Human-in-the-Loop System for Object Detection in Floor Plans. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 27 February–22 March 2022; Volume 36, pp. 12524–12530. [Google Scholar]
  114. Mafipour, M.S.; Ahmed, D.; Vilgertshofer, S.; Borrmann, A. Digitalization of 2D Bridge Drawings Using Deep Learning Models. In Proceedings of the 30th International Conference on Intelligent Computing in Engineering (EG-ICE), London, UK, 4–7 July 2023. [Google Scholar]
  115. Faltin, B.; Schönfelder, P.; König, M. Improving Symbol Detection on Engineering Drawings Using a Keypoint-Based Deep Learning Approach. In Proceedings of the 30th EG-ICE: International Conference on Intelligent Computing in Engineering, London, UK, 4–7 July 2023. [Google Scholar]
  116. Smith, W.A.; Pillatt, T. You only look for a symbol once: An object detector for symbols and regions in documents. In Proceedings of the International Conference on Document Analysis and Recognition, San José, CA, USA, 21–26 August 2023; Springer: Cham, Switzerland, 2023; pp. 227–243. [Google Scholar]
  117. Schönfelder, P.; Stebel, F.; Andreou, N.; König, M. Deep learning-based text detection and recognition on architectural floor plans. Autom. Constr. 2024, 157, 105156. [Google Scholar] [CrossRef]
  118. Janssen, R.D.; Vossepoel, A.M. Adaptive Vectorization of Line Drawing Images. Comput. Vis. Image Underst. 1997, 65, 38–56. [Google Scholar] [CrossRef]
  119. Yang, S. Symbol recognition via statistical integration of pixel-level constraint histograms: A new descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 278–281. [Google Scholar] [CrossRef]
  120. Zhang, W.; Wenyin, L.; Zhang, K. Symbol recognition with kernel density matching. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2020–2024. [Google Scholar] [CrossRef]
  121. Rusiñol, M.; Lladós, J. Symbol Spotting in Technical Drawings Using Vectorial Signatures. In Proceedings of the International Workshop on Graphics Recognition, Hong Kong, China, 25–26 August 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 35–46. [Google Scholar] [CrossRef]
  122. Barrat, S.; Tabbone, S.; Nourrissier, P. A Bayesian classifier for symbol recognition. In Proceedings of the Seventh International Workshop on Graphics Recognition-GREC’2007, Curitiba, Brazil, 20–21 September 2007. 9p. [Google Scholar]
  123. Yu, Y.; Zhang, W.; Liu, W. A New Syntactic Approach to Graphic Symbol Recognition. In Proceedings of the Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), Curitiba, Brazil, 23–26 September 2007; IEEE: Piscataway, NJ, USA, 2007. [Google Scholar] [CrossRef]
  124. Terrades, O.R.; Valveny, E.; Tabbone, S. On the Combination of Ridgelets Descriptors for Symbol Recognition. In Proceedings of the Graphics Recognition. Recent Advances and New Opportunities: 7th International Workshop, GREC 2007, Curitiba, Brazil, 20–21 September 2007; Selected Papers 7. Springer: Berlin/Heidelberg, Germany, 2008; pp. 40–50. [Google Scholar] [CrossRef]
  125. Luqman, M.M.; Brouard, T.; Ramel, J.Y. Graphic Symbol Recognition Using Graph Based Signature and Bayesian Network Classifier. In Proceedings of the 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009; IEEE: Piscataway, NJ, USA, 2009. [Google Scholar] [CrossRef]
  126. Barrat, S.; Tabbone, S. A Bayesian network for combining descriptors: Application to symbol recognition. Int. J. Doc. Anal. Recognit. (IJDAR) 2010, 13, 65–75. [Google Scholar] [CrossRef]
  127. Coustaty, M.; Bertet, K.; Visani, M.; Ogier, J. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2011, 41, 1136–1148. [Google Scholar] [CrossRef] [PubMed]
  128. Ghosh, S.; Shaw, P.; Das, N.; Santosh, K.C. GSD-Net: Compact Network for Pixel-Level Graphical Symbol Detection. In Proceedings of the 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), Sydney, Australia, 22–25 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  129. Elliman, D. TIF2VEC, An Algorithm for Arc Segmentation in Engineering Drawings. In Graphics Recognition Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2002; pp. 350–358. [Google Scholar] [CrossRef]
  130. Santosh, K.C.; Wendling, L.; Lamiroy, B. BoR: Bag-of-Relations for Symbol Retrieval. Int. J. Pattern Recognit. Artif. Intell. 2014, 28, 1450017. [Google Scholar] [CrossRef]
  131. Karasneh, B.; Chaudron, M.R. Img2UML: A System for Extracting UML Models from Images. In Proceedings of the 2013 39th Euromicro Conference on Software Engineering and Advanced Applications, Santander, Spain, 4–6 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 134–137. [Google Scholar] [CrossRef]
  132. Ho-Quang, T.; Chaudron, M.R.; Samuelsson, I.; Hjaltason, J.; Karasneh, B.; Osman, H. Automatic Classification of UML Class Diagrams from Images. In Proceedings of the 2014 21st Asia-Pacific Software Engineering Conference, Washington, DC, USA, 1–4 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 399–406. [Google Scholar] [CrossRef]
  133. Shcherban, S.; Liang, P.; Li, Z.; Yang, C. Multiclass Classification of UML Diagrams from Images Using Deep Learning. Int. J. Softw. Eng. Knowl. Eng. 2021, 31, 1683–1698. [Google Scholar] [CrossRef]
  134. Bunke, H. Automatic Interpretation of Lines and Text in Circuit Diagrams. In Pattern Recognition Theory and Applications; Springer: Dordrecht, The Netherlands, 1982; pp. 297–310. [Google Scholar] [CrossRef]
  135. Groen, F.C.; Sanderson, A.C.; Schlag, J.F. Symbol recognition in electrical diagrams using probabilistic graph matching. Pattern Recognit. Lett. 1985, 3, 343–350. [Google Scholar] [CrossRef]
  136. Okazaki, A.; Kondo, T.; Mori, K.; Tsunekawa, S.; Kawamoto, E. An automatic circuit diagram reader with loop-structure-based symbol recognition. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 331–341. [Google Scholar] [CrossRef]
  137. Fahn, C.S.; Wang, J.F.; Lee, J.Y. A topology-based component extractor for understanding electronic circuit diagrams. Comput. Vision Graph. Image Process. 1988, 43, 279. [Google Scholar] [CrossRef]
  138. Lee, S.W.; Kim, J.H.; Groen, F.C. Translation-, Rotation- and Scale-Invariant Recognition of Hand-Drawn Symbols in Schematic Diagrams. Int. J. Pattern Recognit. Artif. Intell. 1990, 4, 1–25. [Google Scholar] [CrossRef]
  139. Lee, S.W. Recognizing Hand-Drawn Electrical Circuit Symbols with Attributed Graph Matching. In Structured Document Image Analysis; Springer: Berlin/Heidelberg, Germany, 1992; pp. 340–358. [Google Scholar] [CrossRef]
  140. Kim, S.H.; Suh, J.W.; Kim, J.H. Recognition of logic diagrams by identifying loops and rectilinear polylines. In Proceedings of the 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), Tsukuba Science City, Japan, 20–22 October 1993. [Google Scholar] [CrossRef]
  141. Cheng, T.; Khan, J.; Liu, H.; Yun, D. A symbol recognition system. In Proceedings of the 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), Tsukuba Science City, Japan, 20–22 October 1993. [Google Scholar] [CrossRef]
  142. Hamada, A.H. A new system for the analysis of schematic diagrams. In Proceedings of the 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), Tsukuba Science City, Japan, 20–22 October 1993. [Google Scholar] [CrossRef]
  143. Yu, B. Automatic understanding of symbol-connected diagrams. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995. [Google Scholar] [CrossRef]
  144. Yan, L.; Wenyin, L. Engineering drawings recognition using a case-based approach. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 3–6 August 2003. [Google Scholar] [CrossRef]
  145. Zesheng, S.; Jing, Y.; Chunhong, J.; Yonggui, W. Symbol recognition in electronic diagrams using decision tree. In Proceedings of the 1994 IEEE International Conference on Industrial Technology—ICIT ’94, Guangzhou, China, 5–9 December 1994; IEEE: Piscataway, NJ, USA, 1994. [Google Scholar] [CrossRef]
  146. Baum, L.; Boose, J.; Boose, M.; Chaplin, C.; Provine, R. Extracting System-Level Understanding from Wiring Diagram Manuals. In Graphics Recognition. Recent Advances and Perspectives; Springer: Berlin/Heidelberg, Germany, 2004; pp. 100–108. [Google Scholar] [CrossRef]
  147. Ouyang, T.Y.; Davis, R. A visual approach to sketched symbol recognition. In Proceedings of the 21st International Joint Conference on Artifical Intelligence, IJCAI ’09, Pasadena, CA, USA, 11–17 July 2009; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 2009. [Google Scholar]
  148. Feng, G.; Viard-Gaudin, C.; Sun, Z. On-line hand-drawn electric circuit diagram recognition using 2D dynamic programming. Pattern Recognit. 2009, 42, 3215–3223. [Google Scholar] [CrossRef]
  149. De, P.; Mandal, S.; Bhowmick, P. Recognition of electrical symbols in document images using morphology and geometric analysis. In Proceedings of the 2011 International Conference on Image Information Processing, Shimla, India, 3–5 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–6. [Google Scholar] [CrossRef]
  150. Santosh, K.C.; Lamiroy, B.; Wendling, L. Symbol recognition using spatial relations. Pattern Recognit. Lett. 2012, 33, 331–341. [Google Scholar] [CrossRef]
  151. Bailey, D.; Norman, A.; Moretti, G. Electronic Schematic Recognition; Massey University: Wellington, New Zealand, 1995. [Google Scholar]
  152. Datta, R.; Mandal, P.D.S.; Chanda, B. Detection and identification of logic gates from document images using mathematical morphology. In Proceedings of the 2015 Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Patna, India, 16–19 December 2015; IEEE: Piscataway, NJ, USA, 2015. [Google Scholar] [CrossRef]
  153. Rabbani, M.; Khoshkangini, R.; Nagendraswamy, H.S.; Conti, M. Hand Drawn Optical Circuit Recognition. Procedia Comput. Sci. 2016, 84, 41–48. [Google Scholar] [CrossRef]
  154. Agarwal, S.; Agrawal, M.; Chaudhury, S. Recognizing Electronic Circuits to Enrich Web Documents for Electronic Simulation. In Graphic Recognition. Current Trends and Challenges; Springer International Publishing: Cham, Switzerland, 2017; pp. 60–74. [Google Scholar] [CrossRef]
  155. Stoitchkov, D. Analysis of Methods for Automated Symbol Recognition in Technical Drawings. Bachelor’s Thesis, Technical University of Munich, Munich, Germany, 2018. [Google Scholar]
  156. Datta, R.; Mandal, S.; Biswas, S. Automatic Abstraction of Combinational Logic Circuit from Scanned Document Page Images. Pattern Recognit. Image Anal. 2019, 29, 212–223. [Google Scholar] [CrossRef]
  157. Peng, Z.; Yan, G.; Zhongshan, Q.; Huiyong, L.; Mouying, L.; Shengnan, L. CIM/G graphics automatic generation in substation primary wiring diagram based on image recognition. J. Physics Conf. Ser. 2020, 1617, 012007. [Google Scholar] [CrossRef]
  158. Thoma, F.; Bayer, J.; Li, Y.; Dengel, A. A public ground-truth dataset for handwritten circuit diagram images. In Proceedings of the International Conference on Document Analysis and Recognition, Lausanne, Switzerland, 5–10 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 20–27. [Google Scholar]
  159. Shen, C.; Lv, P.; Mao, M.; Li, W.; Zhao, K.; Yan, Z. Substation One-Line Diagram Automatic Generation Based on Image Recognition. In Proceedings of the 2022 Global Conference on Robotics, Artificial Intelligence and Information Technology (GCRAIT), Chicago, IL, USA, 30–31 July 2022; pp. 247–251. [Google Scholar] [CrossRef]
  160. Ramadhan, D.S.; Al-Khaffaf, H.S.M. Symbol Spotting in Electronic Images Using Morphological Filters and Hough Transform. Sci. J. Univ. Zakho 2022, 10, 119–129. [Google Scholar]
  161. Bayer, J.; Roy, A.K.; Dengel, A. Instance segmentation based graph extraction for handwritten circuit diagram images. arXiv 2023, arXiv:2301.03155. [Google Scholar]
  162. Uzair, W.; Chai, D.; Rassau, A. ElectroNet: An Enhanced Model for Small-Scale Object Detection in Electrical Schematic Diagrams. 2023. Available online: https://www.researchgate.net/publication/372298462_ElectroNet_An_Enhanced_Model_for_Small-Scale_Object_Detection_in_Electrical_Schematic_Diagrams (accessed on 9 July 2024).
  163. Bhanbhro, H.; Hooi, Y.K.; Zakaria, M.N.B.; Hassan, Z.; Pitafi, S. Single Line Electrical Drawings (SLED): A Multiclass Dataset Benchmarked by Deep Neural Networks. In Proceedings of the 2023 IEEE 13th International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 2 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 66–71. [Google Scholar]
  164. Yang, C.; Wang, J.; Yang, L.; Shi, D.; Duan, X. Intelligent Digitization of Substation One-Line Diagrams Based on Computer Vision. IEEE Trans. Power Deliv. 2023, 38, 3912–3923. [Google Scholar] [CrossRef]
  165. Wenyin, L.; Zhang, W.; Yan, L. An interactive example-driven approach to graphics recognition in engineering drawings. Int. J. Doc. Anal. Recognit. (IJDAR) 2007, 9, 13–29. [Google Scholar] [CrossRef]
  166. Qureshi, R.J.; Ramel, J.Y.; Barret, D.; Cardot, H. Spotting Symbols in Line Drawing Images Using Graph Representations. In Proceedings of the Graphics Recognition. Recent Advances and New Opportunities: 7th International Workshop, GREC 2007, Curitiba, Brazil, 20–21 September 2007; Selected Papers 7. Springer: Berlin/Heidelberg, Germany, 2008; pp. 91–103. [Google Scholar] [CrossRef]
  167. Yu, Y.; Samal, A.; Seth, S. Isolating symbols from connection lines in a class of engineering drawings. Pattern Recognit. 1994, 27, 391–404. [Google Scholar] [CrossRef]
  168. Yu, Y.; Samal, A.; Seth, S.C. A system for recognizing a large class of engineering drawings. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 868–890. [Google Scholar] [CrossRef]
  169. Pham, T.A.; Delalandre, M.; Barrat, S.; Ramel, J.Y. Accurate junction detection and characterization in line-drawing images. Pattern Recognit. 2014, 47, 282–295. [Google Scholar] [CrossRef]
  170. Dori, D.; Wenyin, L. Vector-based segmentation of text connected to graphics in engineering drawings. In Advances in Structural and Syntactical Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 1996; pp. 322–331. [Google Scholar] [CrossRef]
  171. Joseph, S.H. Processing of engineering line drawings for automatic input to CAD. Pattern Recognit. 1989, 22, 1–11. [Google Scholar] [CrossRef]
  172. Krause, F.L.; Jansen, H.; Großmann, G.; Spur, G. Automatic Scanning and Interpretation of Engineering Drawings for CAD-Processes. CIRP Annals 1989, 38, 437–441. [Google Scholar] [CrossRef]
  173. Nagasamy, V.; Langrana, N.A. Engineering drawing processing and vectorization system. Comput. Vision Graph. Image Process. 1990, 49, 125–126. [Google Scholar] [CrossRef]
  174. Lysak, D.; Kasturi, R. Interpretation of line drawings with multiple views. In Proceedings of the 10th International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990. [Google Scholar] [CrossRef]
  175. Kultanen, P. Randomized Hough Transform (RHT) in Engineering Drawing Vectorization System. In Proceedings of the IAPR Workshop on Machine Vision Applications, Tokyo, Japan, 28–30 November 1990. [Google Scholar]
  176. Lai, C.; Kasturi, R. Detection of dashed lines in engineering drawings and maps. In Proceedings of the First International Conference on Document Analysis and Recognition, Saint-Malo, France, 30 September–2 October 1991; pp. 507–515. [Google Scholar]
  177. Vaxiviere, P.; Tombre, K. Celesstin: CAD conversion of mechanical drawings. Computer 1992, 25, 46–54. [Google Scholar] [CrossRef]
  178. Dori, D. Dimensioning analysis. Commun. ACM 1992, 35, 92–103. [Google Scholar] [CrossRef]
  179. Joseph, S.H.; Pridmore, T.P. Knowledge-directed interpretation of mechanical engineering drawings. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 928–940. [Google Scholar] [CrossRef]
  180. Min, W.; Tang, Z.; Tang, L. Recognition of dimensions in engineering drawings based on arrowhead. In Proceedings of the 2nd International Conference on Document Analysis and Recognition (ICDAR ’93), Tsukuba, Japan, 20–22 October 1993. [Google Scholar] [CrossRef]
  181. Lai, C.P.; Kasturi, R. Detection of dimension sets in engineering drawings. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 848–855. [Google Scholar] [CrossRef]
  182. Vaxiviere, P.; Tombre, K. Knowledge Organization and Interpretation Process in Engineering Drawing Interpretation; Centre de Recherche en Informatique de Nancy: Vandœuvre-lès-Nancy, France, 1994. [Google Scholar]
  183. Collin, S.; Colnet, D.D. Syntactic Analysis of Technical Drawing Dimensions. Int. J. Pattern Recognit. Artif. Intell. 1994, 8, 1131–1148. [Google Scholar] [CrossRef]
  184. Das, A.K.; Langrana, N.A. Recognition of dimension sets and integration with vectorized engineering drawings. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995. [Google Scholar] [CrossRef]
  185. Dori, D. Vector-based arc segmentation in the machine drawing understanding system environment. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 1057–1068. [Google Scholar] [CrossRef]
  186. Capellades, M.A.; Camps, O.I. Functional parts detection in engineering drawings: Looking for the screws. In Graphics Recognition Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1996; pp. 246–259. [Google Scholar] [CrossRef]
  187. He, S.; Abe, N. A clustering-based approach to the separation of text strings from mixed text/graphics documents. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; IEEE: Piscataway, NJ, USA, 1996. [Google Scholar] [CrossRef]
  188. Chen, Y.; Langrana, N.A.; Das, A.K. Perfecting Vectorized Mechanical Drawings. Comput. Vis. Image Underst. 1996, 63, 273–286. [Google Scholar] [CrossRef]
  189. Priestnall, G.; Marston, R.E.; Elliman, D.G. Arrowhead recognition during automated data capture. Pattern Recognit. Lett. 1996, 17, 277–286. [Google Scholar] [CrossRef]
  190. Dori, D. Orthogonal Zig-Zag: An algorithm for vectorizing engineering drawings compared with Hough Transform. Adv. Eng. Softw. 1997, 28, 11–24. [Google Scholar] [CrossRef]
  191. Lu, Z. Detection of text regions from digital engineering drawings. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 431–439. [Google Scholar] [CrossRef]
  192. Stahovich, T.F.; Davis, R.; Shrobe, H. Generating multiple new designs from a sketch. Artif. Intell. 1998, 104, 211–264. [Google Scholar] [CrossRef]
  193. Dori, D.; Velkovitch, Y. Segmentation and Recognition of Dimensioning Text from Engineering Drawings. Comput. Vis. Image Underst. 1998, 69, 196–201. [Google Scholar] [CrossRef]
  194. Ablameyko, S.; Bereishik, V.; Frantskevich, O.; Homenko, M.; Paramonova, N. A system for automatic recognition of engineering drawing entities. In Proceedings of the Fourteenth International Conference on Pattern Recognition (Cat. No.98EX170), Brisbane, QLD, Australia, 20 August 1998. [Google Scholar] [CrossRef]
  195. Dori, D.; Wenyin, L. Automated CAD conversion with the Machine Drawing Understanding System: Concepts, algorithms, and performance. IEEE Trans. Syst. Man Cybern. Part A Syst. Humans 1999, 29, 411–416. [Google Scholar] [CrossRef]
  196. Prabhu, B.S. Automatic extraction of manufacturable features from CADD models using syntactic pattern recognition techniques. Int. J. Prod. Res. 1999, 37, 1259–1281. [Google Scholar] [CrossRef]
  197. Habed, A.; Boufama, B. Dimension sets detection in technical drawings. In Proceedings of the IAPR Workshop on Graphics Recognition (GREC 1999), Jaipur, India, 20–22 September 1999; Volume 99, pp. 217–223. [Google Scholar]
  198. Devaux, P.M.; Lysak, D.B.; Kasturi, R. A complete system for the intelligent interpretation of engineering drawings. Int. J. Doc. Anal. Recognit. (IJDAR) 1999, 2, 120–131. [Google Scholar] [CrossRef]
  199. Adam, S.; Ogier, J.M.; Cariou, C.; Mullot, R.; Labiche, J.; Gardes, J. Symbol and character recognition: Application to engineering drawings. Int. J. Doc. Anal. Recognit. 2000, 3, 89–101. [Google Scholar] [CrossRef]
  200. Müller, S.; Rigoll, G. Engineering Drawing Database Retrieval Using Statistical Pattern Spotting Techniques. In Graphics Recognition Recent Advances; Springer: Berlin/Heidelberg, Germany, 2000; pp. 246–255. [Google Scholar] [CrossRef]
  201. Prabhu, B.S.; Biswas, S.; Pande, S. Intelligent system for extraction of product data from CADD models. Comput. Ind. 2001, 44, 79–95. [Google Scholar] [CrossRef]
  202. Ramel, J.Y.; Vincent, N. Strategy for Line Drawing Understanding. In Proceedings of the International Workshop on Graphics Recognition, Barcelona, Spain, 30–31 July 2003; Springer: Berlin/Heidelberg, Germany, 2004; pp. 1–12. [Google Scholar] [CrossRef]
  203. Wendling, L.; Tabbone, S. A new way to detect arrows in line drawings. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 935–941. [Google Scholar] [CrossRef] [PubMed]
  204. Ondrejcek, M.; Kastner, J.; Kooper, R.; Bajcsy, P. Information Extraction from Scanned Engineering Drawings; Technical Report: NCSA-ISDA09-001; National Center for Supercomputing Applications: Urbana, IL, USA, 2009. [Google Scholar]
  205. Jiang, Z.; Feng, X.; Feng, X.; Liu, Y. An information extraction of title panel in engineering drawings and automatic generation system of three statistical tables. In Proceedings of the 2010 3rd International Conference on Advanced Computer Theory and Engineering(ICACTE), Chengdu, China, 20–22 August 2010; IEEE: Piscataway, NJ, USA, 2010. [Google Scholar] [CrossRef]
  206. Fu, L.; Kara, L.B. Neural network-based symbol recognition using a few labeled samples. Comput. Graph. 2011, 35, 955–966. [Google Scholar] [CrossRef]
  207. Intwala, A.M.; Kharade, K.; Chaugule, R.; Magikar, A. Dimensional Arrow Detection from CAD Drawings. Indian J. Sci. Technol. 2016, 9, 1–7. [Google Scholar] [CrossRef]
  208. Alwan, S.; Caillec, J.M.; Meur, G. Detection of Primitives in Engineering Drawing using Genetic Algorithm. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, Prague, Czech Republic, 19–21 February 2019; SCITEPRESS—Science and Technology Publications: Setúbal, Portugal, 2019. [Google Scholar] [CrossRef]
  209. Scheibel, B.; Mangler, J.; Rinderle-Ma, S. Extraction of dimension requirements from engineering drawings for supporting quality control in production processes. Comput. Ind. 2021, 129, 103442. [Google Scholar] [CrossRef]
  210. van Daele, D.; Decleyre, N.; Dubois, H.; Meert, W. An Automated Engineering Assistant: Learning Parsers for Technical Drawings. Proc. AAAI Conf. Artif. Intell. 2021, 35, 15195–15203. [Google Scholar] [CrossRef]
  211. Zhang, W.; Chen, Q.; Koz, C.; Xie, L.; Regmi, A.; Yamakawa, S.; Furuhata, T.; Shimada, K.; Kara, L.B. Data Augmentation of Engineering Drawings for Data-Driven Component Segmentation. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, St. Louis, MO, USA, 14–17 August 2022. [Google Scholar] [CrossRef]
  212. Alwan, S. Unsupervised and Hybrid Vectorization Techniques for 3D Reconstruction of Engineering Drawings. Ph.D. Thesis, Ecole Nationale Supérieure Mines-Télécom Atlantique, Nantes, France, 2021. [Google Scholar]
  213. Xie, L.; Lu, Y.; Furuhata, T.; Yamakawa, S.; Zhang, W.; Regmi, A.; Kara, L.; Shimada, K. Graph neural network-enabled manufacturing method classification from engineering drawings. Comput. Ind. 2022, 142, 103697. [Google Scholar] [CrossRef]
  214. Kashevnik, A.; Shilov, N.; Teslya, N.; Hasan, F.; Kitenko, A.; Dukareva, V.; Abdurakhimov, M.; Zingarevich, A.; Blokhin, D. An Approach to Engineering Drawing Organization: Title Block Detection and Processing. IEEE Access 2023. [Google Scholar] [CrossRef]
  215. Zhang, W.; Joseph, J.; Yin, Y.; Xie, L.; Furuhata, T.; Yamakawa, S.; Shimada, K.; Kara, L.B. Component segmentation of engineering drawings using Graph Convolutional Networks. Comput. Ind. 2023, 147, 103885. [Google Scholar] [CrossRef]
  216. Xu, Y.; Zhang, C.; Xu, Z.; Kong, C.; Tang, D.; Deng, X.; Li, T.; Jin, J. Tolerance Information Extraction for Mechanical Engineering Drawings–A Digital Image Processing and Deep Learning-based Model. CIRP J. Manuf. Sci. Technol. 2024, 50, 55–64. [Google Scholar] [CrossRef]
  217. Guo, T.; Zhang, H.; Wen, Y. An improved example-driven symbol recognition approach in engineering drawings. Comput. Graph. 2012, 36, 835–845. [Google Scholar] [CrossRef]
  218. Das, S.; Banerjee, P.; Seraogi, B.; Majumder, H.; Mukkamala, S.; Roy, R.; Chaudhuri, B.B. Hand-Written and Machine-Printed Text Classification in Architecture, Engineering & Construction Documents. In Proceedings of the 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, USA, 5–8 August 2018; pp. 546–551. [Google Scholar] [CrossRef]
  219. Kara, L.B.; Stahovich, T.F. An image-based, trainable symbol recognizer for hand-drawn sketches. Comput. Graph. 2005, 29, 501–517. [Google Scholar] [CrossRef]
  220. Lee, W.; Kara, L.B.; Stahovich, T.F. An Efficient Graph-Based Symbol Recognizer. In Proceedings of the Eurographics Workshop on Sketch-Based Interfaces and Modeling, Vienna, Austria, 3–4 September 2006; Stahovich, T., Sousa, M.C., Eds.; The Eurographics Association: Eindhoven, The Netherlands, 2006. [Google Scholar] [CrossRef]
  221. Lee, W.; Burak Kara, L.; Stahovich, T.F. An efficient graph-based recognizer for hand-drawn symbols. Comput. Graph. 2007, 31, 554–567. [Google Scholar] [CrossRef]
  222. Della Ventura, A.; Schettini, R. Graphic symbol recognition using a signature technique. In Proceedings of the 12th IAPR International Conference on Pattern Recognition (Cat. No.94CH3440-5), Jerusalem, Israel, 9–13 October 1994; IEEE: New York, NY, USA, 1994. [Google Scholar] [CrossRef]
  223. Howie, C.; Kunz, J.; Binford, T.; Chen, T.; Law, K.H. Computer interpretation of process and instrumentation drawings. Adv. Eng. Softw. 1998, 29, 563–570. [Google Scholar] [CrossRef]
  224. Yim, S.Y.; Ananthakumar, H.G.; Benabbas, L.; Horch, A.; Drath, R.; Thornhill, N.F. Using process topology in plant-wide control loop performance assessment. Comput. Chem. Eng. 2006, 31, 86–99. [Google Scholar] [CrossRef]
  225. Gellaboina, M.K.; Venkoparao, V.G. Graphic Symbol Recognition Using Auto Associative Neural Network Model. In Proceedings of the 2009 Seventh International Conference on Advances in Pattern Recognition, Kolkata, India, 4–6 February 2009; IEEE: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  226. Wen, R.; Tang, W.; Su, Z. A 2D Engineering Drawing and 3D Model Matching Algorithm for Process Plant. In Proceedings of the 2015 International Conference on Virtual Reality and Visualization (ICVRV), Xiamen, China, 17–18 October 2015; IEEE: New York, NY, USA, 2015; pp. 154–159. [Google Scholar] [CrossRef]
  227. Hoang, X.L.; Arroyo, E.; Fay, A. Automatische Analyse und Erkennung graphischer Inhalte von SVG-basierten Engineering-Dokumenten [Automatic analysis and recognition of graphical content in SVG-based engineering documents]. Automatisierungstechnik 2016, 64, 133–146. [Google Scholar] [CrossRef]
  228. Moreno-García, C.F.; Elyan, E.; Jayne, C. Heuristics-Based Detection to Improve Text/Graphics Segmentation in Complex Engineering Drawings. In Engineering Applications of Neural Networks; Springer: Cham, Switzerland, 2017; pp. 87–98. [Google Scholar] [CrossRef]
  229. Elyan, E.; Garcia, C.M.; Jayne, C. Symbols Classification in Engineering Drawings. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  230. Kang, S.O.; Lee, E.B.; Baek, H.K. A Digitization and Conversion Tool for Imaged Drawings to Intelligent Piping and Instrumentation Diagrams (P&ID). Energies 2019, 12, 2593. [Google Scholar] [CrossRef]
  231. Rahul, R.; Paliwal, S.; Sharma, M.; Vig, L. Automatic Information Extraction from Piping and Instrumentation Diagrams. arXiv 2019, arXiv:1901.11383. [Google Scholar] [CrossRef]
  232. Rantala, M.; Niemistö, H.; Karhela, T.; Sierla, S.; Vyatkin, V. Applying graph matching techniques to enhance reuse of plant design information. Comput. Ind. 2019, 107, 81–98. [Google Scholar] [CrossRef]
  233. Yu, E.S.; Cha, J.M.; Lee, T.; Kim, J.; Mun, D. Features Recognition from Piping and Instrumentation Diagrams in Image Format Using a Deep Learning Network. Energies 2019, 12, 4425. [Google Scholar] [CrossRef]
  234. Elyan, E.; Moreno-García, C.F.; Johnston, P. Symbols in Engineering Drawings (SiED): An Imbalanced Dataset Benchmarked by Convolutional Neural Networks. In Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference, Munich, Germany, 11–15 June 2020; Springer: Cham, Switzerland, 2020; pp. 215–224. [Google Scholar] [CrossRef]
  235. Nurminen, J.K.; Rainio, K.; Numminen, J.P.; Syrjänen, T.; Paganus, N.; Honkoila, K. Object Detection in Design Diagrams with Machine Learning. In Progress in Computer Recognition Systems; Springer: Cham, Switzerland, 2020; pp. 27–36. [Google Scholar] [CrossRef]
  236. Bayer, J.; Sinha, A. Graph-Based Manipulation Rules for Piping and Instrumentation Diagrams; Center for Open Science: Charlottesville, VA, USA, 2020. [Google Scholar] [CrossRef]
  237. Jamieson, L.; Moreno-García, C.F.; Elyan, E. Deep Learning for Text Detection and Recognition in Complex Engineering Diagrams. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  238. Mani, S.; Haddad, M.A.; Constantini, D.; Douhard, W.; Li, Q.; Poirier, L. Automatic Digitization of Engineering Diagrams using Deep Learning and Graph Search. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  239. Rica, E.; Moreno-García, C.F.; Álvarez, S.; Serratosa, F. Reducing human effort in engineering drawing validation. Comput. Ind. 2020, 117, 103198. [Google Scholar] [CrossRef]
  240. Sierla, S.; Azangoo, M.; Fay, A.; Vyatkin, V.; Papakonstantinou, N. Integrating 2D and 3D Digital Plant Information Towards Automatic Generation of Digital Twins. In Proceedings of the 2020 IEEE 29th International Symposium on Industrial Electronics (ISIE), Delft, The Netherlands, 17–19 June 2020; IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  241. Gao, W.; Zhao, Y.; Smidts, C. Component detection in piping and instrumentation diagrams of nuclear power plants based on neural networks. Prog. Nucl. Energy 2020, 128, 103491. [Google Scholar] [CrossRef]
  242. Paliwal, S.; Sharma, M.; Vig, L. OSSR-PID: One-Shot Symbol Recognition in P&ID Sheets using Path Sampling and GCN. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8. [Google Scholar] [CrossRef]
  243. Moon, Y.; Lee, J.; Mun, D.; Lim, S. Deep Learning-Based Method to Recognize Line Objects and Flow Arrows from Image-Format Piping and Instrumentation Diagrams for Digitization. Appl. Sci. 2021, 11, 10054. [Google Scholar] [CrossRef]
  244. Rica, E.; Álvarez, S.; Serratosa, F. Group of components detection in engineering drawings based on graph matching. Eng. Appl. Artif. Intell. 2021, 104, 104404. [Google Scholar] [CrossRef]
  245. Ghadekar, P.; Joshi, S.; Swain, D.; Acharya, B.; Pradhan, M.R.; Patro, P. Automatic Digitization of Engineering Diagrams using Intelligent Algorithms. J. Comput. Sci. 2021, 17, 833–838. [Google Scholar] [CrossRef]
  246. Shakhshir, F.S.N. Engineering Drawing Validation Based on Graph Convolutional Networks; Universitat Rovira i Virgili: Tarragona, Spain, 2021. [Google Scholar]
  247. Paliwal, S.; Jain, A.; Sharma, M.; Vig, L. Digitize-PID: Automatic Digitization of Piping and Instrumentation Diagrams. In Trends and Applications in Knowledge Discovery and Data Mining; Springer: Cham, Switzerland, 2021; pp. 168–180. [Google Scholar] [CrossRef]
  248. Stinner, F.; Wiecek, M.; Baranski, M.; Kümpel, A.; Müller, D. Automatic digital twin data model generation of building energy systems from piping and instrumentation diagrams. arXiv 2021, arXiv:2108.13912. [Google Scholar]
  249. Gada, M. Object detection for P&ID images using various deep learning techniques. In Proceedings of the 2021 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 27–29 January 2021; IEEE: New York, NY, USA, 2021; pp. 1–5. [Google Scholar]
  250. Bin, O.K.; Hooi, Y.K.; Kadir, S.J.A.; Fujita, H.; Rosli, L.H. Enhanced Symbol Recognition based on Advanced Data Augmentation for Engineering Diagrams. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 249295170. [Google Scholar] [CrossRef]
  251. Bhanbhro, H.; Hooi, Y.K.; Hassan, Z.; Sohu, N. Modern Deep Learning Approaches for Symbol Detection in Complex Engineering Drawings. In Proceedings of the 2022 International Conference on Digital Transformation and Intelligence (ICDI), Kuching, Sarawak, Malaysia, 1–2 December 2022; IEEE: New York, NY, USA, 2022; pp. 121–126. [Google Scholar]
  252. Kim, B.C.; Kim, H.; Moon, Y.; Lee, G.; Mun, D. End-to-end digitization of image format piping and instrumentation diagrams at an industrially applicable level. J. Comput. Des. Eng. 2022, 9, 1298–1326. [Google Scholar] [CrossRef]
  253. Gupta, M.; Wei, C.; Czerniawski, T. Automated Valve Detection in Piping and Instrumentation (P&ID) Diagrams. In Proceedings of the 39th International Symposium on Automation and Robotics in Construction, Bogota, Colombia, 13–15 July 2022; pp. 12–15. [Google Scholar]
  254. Bhanbhro, H.; Hooi, Y.K.; Kusakunniran, W.; Amur, Z.H. A Symbol Recognition System for Single-Line Diagrams Developed Using a Deep-Learning Approach. Appl. Sci. 2023, 13, 8816. [Google Scholar] [CrossRef]
  255. Shin, H.J.; Lee, G.Y.; Lee, C.J. Automatic anomaly detection in engineering diagrams using machine learning. Korean J. Chem. Eng. 2023, 40, 2612–2623. [Google Scholar] [CrossRef]
  256. Theisen, M.F.; Flores, K.N.; Balhorn, L.S.; Schweidtmann, A.M. Digitization of chemical process flow diagrams using deep convolutional neural networks. Digit. Chem. Eng. 2023, 6, 100072. [Google Scholar] [CrossRef]
  257. Moon, Y.; Han, S.T.; Lee, J.; Mun, D. Extraction of line objects from piping and instrumentation diagrams using an improved continuous line detection algorithm. J. Mech. Sci. Technol. 2023, 37, 1959–1972. [Google Scholar] [CrossRef]
  258. Kim, G.; Kim, B.C. Classification of Functional Types of Lines in P&IDs Using a Graph Neural Network. IEEE Access 2023, 11, 73680–73687. [Google Scholar] [CrossRef]
  259. Stürmer, J.M.; Graumann, M.; Koch, T. Demonstrating Automated Generation of Simulation Models from Engineering Diagrams. In Proceedings of the 2023 International Conference on Machine Learning and Applications (ICMLA), Jacksonville, FL, USA, 15–17 December 2023; pp. 1156–1162. [Google Scholar] [CrossRef]
  260. Han, S.T.; Moon, Y.; Lee, H.; Mun, D. Rule-based continuous line classification using shape and positional relationships between objects in piping and instrumentation diagram. Expert Syst. Appl. 2024, 248, 123366. [Google Scholar] [CrossRef]
  261. Su, G.; Zhao, S.; Li, T.; Liu, S.; Li, Y.; Zhao, G.; Li, Z. Image format pipeline and instrument diagram recognition method based on deep learning. Biomim. Intell. Robot. 2024, 4, 100142. [Google Scholar] [CrossRef]
  262. Gupta, M.; Wei, C.; Czerniawski, T. Semi-supervised symbol detection for piping and instrumentation drawings. Autom. Constr. 2024, 159, 105260. [Google Scholar] [CrossRef]
Figure 1. Examples of different sketch types in various engineering domains.
Figure 3. Overview of the literature review results for symbol detection in engineering drawings. (a) Publication numbers sorted by engineering domain. (b) Chronological progression of publications.
Figure 4. Categorization of the methods depending on the dataset.
Figure 5. Framework for symbol detection and contextualization in engineering sketches.
Figure 6. Example plots of the Pillow and SketchGraph data generation methods.
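To illustrate the Pillow-based generation route shown in Figure 6, the following minimal sketch draws a single made-up symbol (a square with a diagonal cross, standing in for the paper's actual symbol library, which is not reproduced here) at a random position on a white canvas and returns a YOLO-style normalized label. The function name, symbol geometry, and canvas size are illustrative assumptions, not the authors' implementation.

```python
import random
from PIL import Image, ImageDraw

def draw_symbol(canvas=(640, 640), size=60, seed=None):
    """Place one illustrative symbol (square with a diagonal cross; NOT the
    paper's real symbol set) at a random position on a white canvas and return
    the image plus a YOLO-style label:
    (class_id, x_center, y_center, width, height), all coordinates normalized."""
    rng = random.Random(seed)
    img = Image.new("RGB", canvas, "white")
    draw = ImageDraw.Draw(img)
    # random top-left corner such that the symbol stays fully inside the canvas
    x = rng.randint(0, canvas[0] - size)
    y = rng.randint(0, canvas[1] - size)
    draw.rectangle([x, y, x + size, y + size], outline="black", width=3)
    draw.line([x, y, x + size, y + size], fill="black", width=3)
    draw.line([x, y + size, x + size, y], fill="black", width=3)
    label = (0,
             (x + size / 2) / canvas[0], (y + size / 2) / canvas[1],
             size / canvas[0], size / canvas[1])
    return img, label

img, label = draw_symbol(seed=42)
img.save("synthetic_sketch.png")  # in practice paired with a .txt label file
```

In a full pipeline, such a generator would be called in a loop with varied symbol types, rotations, and line styles to build a synthetic training set.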
Figure 7. Example illustration of the symbols, applied in the variants with and without a hollow shaft.
Figure 8. Example sketches for different gear stages from the first unknown test dataset.
Figure 9. Example sketches for different assembly models from the second unknown test dataset.
Figure 10. Good and bad detection examples for the trained models and different training datasets.
Table 1. Evaluation metrics for all models and training datasets on the unknown gear stage test data, with the best results per model highlighted. Each cell gives mAP50 / mAP.

| Model | SketchGraph 1 | SketchGraph 2 | SketchGraph 3 | SketchGraph 4 | Pillow 1 | Pillow 2 |
|---|---|---|---|---|---|---|
| YOLOv5 S | 0.563 / 0.303 | 0.180 / 0.073 | 0.993 / 0.694 | 0.992 / 0.684 | 0.388 / 0.138 | 0.716 / 0.245 |
| YOLOv5 M | 0.805 / 0.455 | 0.076 / 0.033 | 0.994 / 0.689 | 0.994 / 0.685 | 0.625 / 0.253 | 0.868 / 0.309 |
| YOLOv5 L | 0.742 / 0.444 | 0.084 / 0.050 | 0.993 / 0.686 | 0.994 / 0.686 | 0.817 / 0.397 | 0.832 / 0.315 |
| Mask R-CNN R50 | 0.149 / 0.056 | 0.080 / 0.031 | 0.741 / 0.389 | 0.834 / 0.440 | 0.439 / 0.268 | 0.235 / 0.154 |
| Mask R-CNN R101 | 0.105 / 0.048 | 0.135 / 0.057 | 0.949 / 0.522 | 0.785 / 0.407 | 0.127 / 0.079 | 0.351 / 0.224 |
| Faster R-CNN R50 | 0.286 / 0.117 | 0.241 / 0.110 | 0.736 / 0.526 | 0.757 / 0.535 | 0.042 / 0.013 | 0.197 / 0.105 |
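The evaluation tables report mAP50 (mean average precision at an IoU threshold of 0.5) and mAP (averaged over a range of IoU thresholds). As a rough illustration of the per-class AP underlying these numbers, the sketch below matches scored predictions to ground-truth boxes greedily and integrates the precision/recall curve with simple all-point integration; this is an assumption-laden simplification, not the exact interpolation used by COCO-style toolkits or the authors' evaluation code.

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """preds: list of (score, box); gts: list of boxes (one image, one class)."""
    preds = sorted(preds, key=lambda p: -p[0])  # highest confidence first
    matched = set()
    hits = []
    for score, box in preds:
        # greedily match against the best unmatched ground-truth box
        best, best_i = 0.0, -1
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) > best:
                best, best_i = iou(box, g), i
        if best >= iou_thr:
            matched.add(best_i)
            hits.append(1)   # true positive
        else:
            hits.append(0)   # false positive
    # integrate precision over recall (all-point integration)
    ap, tp_cum, prev_recall = 0.0, 0, 0.0
    for k, h in enumerate(hits):
        tp_cum += h
        recall = tp_cum / len(gts)
        precision = tp_cum / (k + 1)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

mAP50 would then be the mean of such per-class AP values at an IoU threshold of 0.5; mAP averages additionally over IoU thresholds (typically 0.5 to 0.95 in steps of 0.05).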
Table 2. Evaluation metrics for all models and training datasets on the unknown assembly test data, with the best results per model highlighted. Each cell gives mAP50 / mAP.

| Model | SketchGraph 1 | SketchGraph 2 | SketchGraph 3 | SketchGraph 4 | Pillow 1 | Pillow 2 |
|---|---|---|---|---|---|---|
| YOLOv5 S | 0.659 / 0.419 | 0.274 / 0.180 | 0.904 / 0.554 | 0.893 / 0.537 | 0.492 / 0.170 | 0.559 / 0.239 |
| YOLOv5 M | 0.717 / 0.432 | 0.193 / 0.149 | 0.904 / 0.567 | 0.868 / 0.537 | 0.485 / 0.188 | 0.561 / 0.248 |
| YOLOv5 L | 0.688 / 0.450 | 0.200 / 0.142 | 0.906 / 0.562 | 0.891 / 0.565 | 0.597 / 0.226 | 0.723 / 0.283 |
| Mask R-CNN R50 | 0.111 / 0.050 | 0.080 / 0.030 | 0.501 / 0.184 | 0.451 / 0.131 | 0.189 / 0.124 | 0.011 / 0.007 |
| Mask R-CNN R101 | 0.110 / 0.037 | 0.140 / 0.055 | 0.642 / 0.238 | 0.575 / 0.195 | 0.142 / 0.062 | 0.061 / 0.044 |
| Faster R-CNN R50 | 0.352 / 0.223 | 0.365 / 0.241 | 0.628 / 0.376 | 0.664 / 0.430 | 0.144 / 0.026 | 0.392 / 0.217 |

Share and Cite

MDPI and ACS Style

Bickel, S.; Goetz, S.; Wartzack, S. Symbol Detection in Mechanical Engineering Sketches: Experimental Study on Principle Sketches with Synthetic Data Generation and Deep Learning. Appl. Sci. 2024, 14, 6106. https://doi.org/10.3390/app14146106
