Search Results (4)

Search Parameters:
Keywords = semantic quality assurance and assessment

22 pages, 8923 KB  
Article
Component Recognition and Coordinate Extraction in Two-Dimensional Paper Drawings Using SegFormer
by Shengkun Gu and Dejiang Wang
Information 2024, 15(1), 17; https://doi.org/10.3390/info15010017 - 27 Dec 2023
Cited by 2 | Viewed by 2416
Abstract
Within the domain of architectural urban informatization, the automated precision recognition of two-dimensional paper schematics emerges as a pivotal technical challenge. Recognition methods traditionally employed frequently encounter limitations due to the fluctuating quality of architectural drawings and the bounds of current image processing methodologies, inhibiting the realization of high accuracy. The research delineates an innovative framework that synthesizes refined semantic segmentation algorithms with image processing techniques and precise coordinate identification methods, with the objective of enhancing the accuracy and operational efficiency in the identification of architectural elements. A meticulously curated data set, featuring 13 principal categories of building and structural components, facilitated the comprehensive training and assessment of two disparate deep learning models. The empirical findings reveal that these algorithms attained mean intersection over union (MIoU) values of 96.44% and 98.01% on the evaluation data set, marking a substantial enhancement in performance relative to traditional approaches. In conjunction, the framework’s integration of the Hough Transform with SQL Server technology has significantly reduced the coordinate detection error rates for linear and circular elements to below 0.1% and 0.15%, respectively. This investigation not only accomplishes the efficacious transition from analog two-dimensional paper drawings to their digital counterparts, but also assures the precise identification and localization of essential architectural components within the digital image coordinate framework. These developments are of considerable importance in furthering the digital transition within the construction industry and establish a robust foundation for the forthcoming extension of data collections and the refinement of algorithmic efficacy.
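The coordinate-extraction step described in the abstract can be illustrated in isolation. The sketch below uses OpenCV's standard Hough transforms to recover line endpoints and circle centres from a binarised drawing; the parameter values are placeholders, and the paper's SegFormer segmentation and SQL Server storage stages are not reproduced here.

```python
# Illustrative sketch of Hough-based coordinate extraction from a scanned
# drawing using OpenCV. Parameter values are placeholders, not the ones
# used in the paper; segmentation and database storage are omitted.
import cv2
import numpy as np

def extract_coordinates(image_path):
    """Detect line endpoints and circle centres in a scanned drawing."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform for linear elements (e.g. walls, beams).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    line_coords = [tuple(l[0]) for l in lines] if lines is not None else []

    # Hough gradient method for circular elements (e.g. columns, openings).
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=60)
    circle_coords = ([(int(x), int(y), int(r)) for x, y, r in circles[0]]
                     if circles is not None else [])

    return line_coords, circle_coords
```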

15 pages, 2369 KB  
Article
A Semantic Approach for Quality Assurance and Assessment of Volunteered Geographic Information
by Gloria Bordogna
Information 2021, 12(12), 492; https://doi.org/10.3390/info12120492 - 25 Nov 2021
Cited by 5 | Viewed by 2916
Abstract
The paper analyses the characteristics of Volunteered Geographic Information (VGI) and the need to assure and assess its quality for possible use and re-use. Ontologies and soft ontologies are presented as means to support quality assurance and assessment of VGI, and their limitations are highlighted. Finally, a possibilistic approach based on a fuzzy ontology is proposed that models both the imprecision and vagueness of domain knowledge and the epistemic uncertainty affecting observations. A case study example is illustrated.
(This article belongs to the Special Issue Semantic Web and Information Systems)
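The possibilistic machinery referred to in the abstract can be sketched with a small numeric example. The code below is a generic illustration of the possibility/necessity calculus over a discretised attribute domain, assuming a hypothetical vague concept and an imprecise volunteered observation; the membership functions and values are illustrative and are not taken from the paper.

```python
# Generic possibility/necessity calculus over a discretised domain.
# Membership functions and values are illustrative only.
import numpy as np

x = np.linspace(0.0, 100.0, 1001)  # attribute domain, e.g. distance in metres

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b] and falling on [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

# Vague domain concept from a hypothetical fuzzy ontology, e.g. "close to a river".
mu_close = trapezoid(x, a=-1.0, b=0.0, c=20.0, d=50.0)

# Imprecise volunteered observation, "roughly 25-35 m", as a possibility distribution.
pi_obs = trapezoid(x, a=20.0, b=25.0, c=35.0, d=40.0)

# Degree to which the observation possibly / necessarily satisfies the concept.
possibility = np.max(np.minimum(pi_obs, mu_close))
necessity = np.min(np.maximum(1.0 - pi_obs, mu_close))
print(f"Possibility: {possibility:.2f}, Necessity: {necessity:.2f}")
```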

30 pages, 11905 KB  
Article
Connecting Semantic Situation Descriptions with Data Quality Evaluations—Towards a Framework of Automatic Thematic Map Evaluation
by Timo Homburg
Information 2020, 11(11), 532; https://doi.org/10.3390/info11110532 - 15 Nov 2020
Cited by 3 | Viewed by 3256
Abstract
A continuing question in the geospatial community is the evaluation of the fitness for use of map data for a variety of use cases. While data quality metrics and dimensions have been discussed broadly in the geospatial community and have been modelled in semantic web vocabularies, an ontological connection between use cases and data quality expressions that would allow reasoning approaches to determine the fitness for use of semantic web map data has not yet been established. This publication introduces such an ontological model to represent situations and link them with geospatial data quality metrics in order to evaluate thematic map contents. The ontology model constitutes the data storage element of a framework for use-case-based data quality assurance, which creates suggestions for data quality evaluations that are verified and improved upon by end-users. The requirement profiles created in this way are associated with semantic web concepts and shared, and thus contribute to a pool of linked data describing situation-based data quality assessments, which may be used by a variety of applications. The framework is tested on two test scenarios, which are evaluated and discussed in a wider context.
(This article belongs to the Section Information Processes)
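As a rough illustration of the general pattern of linking a dataset, a use case and a quality measurement in RDF, the sketch below uses rdflib together with the W3C Data Quality Vocabulary (DQV). The paper defines its own ontology model, so the use-case link properties and all example IRIs here are hypothetical.

```python
# Generic sketch of linking a thematic map dataset and a use case to a data
# quality measurement in RDF, using rdflib and the W3C Data Quality
# Vocabulary (DQV). The paper defines its own ontology; IRIs are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

DQV = Namespace("http://www.w3.org/ns/dqv#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("dqv", DQV)
g.bind("ex", EX)

dataset = EX.FloodRiskMap             # hypothetical thematic map dataset
use_case = EX.EmergencyRouting        # hypothetical situation / use case
metric = EX.PositionalAccuracyMetric  # hypothetical metric definition
measurement = EX.Measurement001

g.add((metric, RDF.type, DQV.Metric))
g.add((metric, DQV.inDimension, EX.positionalAccuracy))  # hypothetical dimension IRI

g.add((measurement, RDF.type, DQV.QualityMeasurement))
g.add((measurement, DQV.isMeasurementOf, metric))
g.add((measurement, DQV.computedOn, dataset))
g.add((measurement, DQV.value, Literal(2.5, datatype=XSD.double)))

# Application-specific (non-DQV) links from the use case to its requirements.
g.add((use_case, EX.requiresMetric, metric))
g.add((use_case, EX.minimumAcceptableValue, Literal(5.0, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```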

56 pages, 4660 KB  
Article
Quality Assessment of Pre-Classification Maps Generated from Spaceborne/Airborne Multi-Spectral Images by the Satellite Image Automatic Mapper™ and Atmospheric/Topographic Correction™-Spectral Classification Software Products: Part 2 — Experimental Results
by Andrea Baraldi, Michael Humber and Luigi Boschetti
Remote Sens. 2013, 5(10), 5209-5264; https://doi.org/10.3390/rs5105209 - 18 Oct 2013
Cited by 9 | Viewed by 8113
Abstract
This paper complies with the Quality Assurance Framework for Earth Observation (QA4EO) international guidelines to provide a metrological/statistically-based quality assessment of the Spectral Classification of surface reflectance signatures (SPECL) secondary product, implemented within the popular Atmospheric/Topographic Correction (ATCOR™) commercial software suite, and of the Satellite Image Automatic Mapper™ (SIAM™) software product, proposed to the remote sensing (RS) community in recent years. The ATCOR™-SPECL and SIAM™ physical model-based expert systems are considered of potential interest to a wide RS audience: in operating mode, they require neither user-defined parameters nor training data samples to map, in near real-time, a spaceborne/airborne multi-spectral (MS) image into a discrete and finite set of (pre-attentional first-stage) spectral-based semi-concepts (e.g., “vegetation”), whose informative content is always equal to or lower than that of target (attentional second-stage) land cover (LC) concepts (e.g., “deciduous forest”). For the sake of simplicity, this paper is split into two: Part 1—Theory and Part 2—Experimental results. Part 1 provides the present Part 2 with an interdisciplinary terminology and a theoretical background. To comply with the principles of statistics and the QA4EO guidelines discussed in Part 1, the present Part 2 applies an original adaptation of a novel probability sampling protocol for thematic map quality assessment to the ATCOR™-SPECL and SIAM™ pre-classification maps, generated from three spaceborne/airborne MS test images. Collected metrological/statistically-based quality indicators (QIs) comprise: (i) an original Categorical Variable Pair Similarity Index (CVPSI), capable of estimating the degree of match between a test pre-classification map’s legend and a reference LC map’s legend that do not coincide and must be harmonized (reconciled); (ii) pixel-based Thematic (symbolic, semantic) QIs (TQIs); and (iii) polygon-based sub-symbolic (non-semantic) Spatial QIs (SQIs), where all TQIs and SQIs are provided with a degree of uncertainty in measurement. The main experimental conclusions of the present Part 2 are the following. (I) Across the three test images, the CVPSI values of the SIAM™ pre-classification maps at the intermediate and fine semantic granularities are superior to those of the ATCOR™-SPECL single-granule maps. (II) TQIs of both the ATCOR™-SPECL and the SIAM™ tend to exceed community-agreed reference standards of accuracy. (III) Across the three test images and the SIAM™’s three semantic granularities, TQIs of the SIAM™ tend to be significantly higher (in statistical terms) than the ATCOR™-SPECL’s. Stemming from the proposed experimental evidence in support of theoretical considerations, the final conclusion of this paper is that, in compliance with the QA4EO objectives, the SIAM™ software product can be considered eligible for injecting prior spectral knowledge into the pre-attentive vision first stage of a novel generation of hybrid (combined deductive and inductive) RS image understanding systems, capable of transforming large-scale multi-source multi-resolution EO image databases into operational, comprehensive and timely knowledge/information products.
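As a generic illustration of the kind of pixel-based thematic quality indicator reported together with a degree of uncertainty in measurement, the sketch below computes overall and per-class (user's) accuracy from a map-versus-reference confusion matrix along with a binomial standard error. The matrix values are invented, and the code is not an implementation of the paper's CVPSI, TQIs or SQIs.

```python
# Generic thematic accuracy indicators from a map-vs-reference confusion
# matrix, each reported with a binomial standard error as a simple measure
# of uncertainty. Matrix values are made up for illustration.
import numpy as np

# Rows: classes in the pre-classification map, columns: reference classes.
confusion = np.array([
    [120,   5,   2],
    [  8, 200,  10],
    [  3,   7,  95],
], dtype=float)

n = confusion.sum()
overall_accuracy = np.trace(confusion) / n
overall_se = np.sqrt(overall_accuracy * (1 - overall_accuracy) / n)

# Per-class user's accuracy (row-wise) with its standard error.
row_totals = confusion.sum(axis=1)
users_accuracy = np.diag(confusion) / row_totals
users_se = np.sqrt(users_accuracy * (1 - users_accuracy) / row_totals)

print(f"Overall accuracy: {overall_accuracy:.3f} ± {overall_se:.3f}")
for k, (ua, se) in enumerate(zip(users_accuracy, users_se)):
    print(f"Class {k}: user's accuracy {ua:.3f} ± {se:.3f}")
```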