Search Results (76)

Search Parameters:
Keywords = OGC standards

15 pages, 1542 KiB  
Article
The Impact of Prehabilitation on Patient Outcomes in Oesophagogastric Cancer Surgery: Combined Data from Four Prospective Clinical Trials Performed Across the UK and Ireland
by Sowrav Barman, Beth Russell, Robert C. Walker, William Knight, Cara Baker, Mark Kelly, James Gossage, Janine Zylstra, Greg Whyte, James Pate, Jesper Lagergren, Mieke Van Hemelrijck, Mike Browning, Sophie Allen, Shaun R. Preston, Javed Sultan, Pritam Singh, Timothy Rockall, William B. Robb, Roisin Tully, Lisa Loughney, Jarlath Bolger, Jan Sorensen, Chris G. Collins, Paul A. Carroll, Claire M. Timon, Mayilone Arumugasamy, Thomas Murphy, Noel McCaffrey, Mike Grocott, Sandy Jack, Denny Z. H. Levett, Tim J. Underwood, Malcolm A. West and Andrew R. Davies
Cancers 2025, 17(11), 1836; https://doi.org/10.3390/cancers17111836 - 30 May 2025
Viewed by 735
Abstract
Background: Prehabilitation is increasingly being used in patients undergoing multimodality treatment for oesophagogastric cancer (OGC). Most studies to date have been small, single-centre trials. This collaborative study sought to assess the overall impact of prehabilitation on patient outcomes following OGC surgery. Methods: Data came from four prospective prehabilitation trials conducted in the UK or Ireland in patients undergoing multimodality treatment for OGC. The studies included three randomised and one non-randomised clinical trial, each comparing a prehabilitation intervention group to controls. The prehabilitation interventions included aerobic training delivered by exercise physiologists alongside dietetic input throughout the treatment pathway. The primary outcome was survival (all-cause and disease-specific mortality). Secondary outcomes were differences in complications, cardio-respiratory fitness (changes in VO2 peak and anaerobic threshold (AT)), chemotherapy completion rates, hospital length of stay, changes in body mass index, tumour regression and complication rates of anastomotic leak and pneumonia. Cox and logistic regression analysis provided hazard ratios (HR) and odds ratios (OR), respectively, with 95% confidence intervals (CI), adjusted for confounders. Results: Among 165 patients included, 88 patients were in the prehabilitation group and 77 patients were in the control group. All-cause and disease-specific mortality were not improved by prehabilitation (HR 0.67, 95% CI 0.21–2.12 and HR 0.82, 95% CI 0.42–1.57, respectively). The prehabilitation group experienced fewer major complications (20% vs. 36%, p = 0.034; adjusted OR 0.54, 95% CI 0.26–1.13). There was a mitigated decline in VO2 peak following neo-adjuvant therapy (delta prehabilitation −1.07 mL/kg/min vs. control −2.74 mL/kg/min; p = 0.035) and chemotherapy completion rates were significantly higher following prehabilitation (90% vs. 73%; p = 0.016). Hospital length of stay (10 vs. 12 days, p = 0.402) and neoadjuvant chemotherapy response (Mandard 1–3 41% vs. 35%; p = 0.494) favoured prehabilitation, albeit not statistically significantly. Conclusion: Despite some limitations in terms of heterogeneity of study methodology, this study suggests a number of meaningful clinical benefits from prehabilitation before surgery for OGC patients. Current initiatives to agree on national standards for delivering prehabilitation and the results of ongoing trials will help to further refine this important intervention and expand the evidence base to support the widespread adoption and implementation of prehabilitation programs. Full article
(This article belongs to the Special Issue Perioperative and Surgical Management of Gastrointestinal Cancers)
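To make the survival-analysis step concrete, the following is a minimal, hypothetical sketch of fitting an adjusted Cox model of the kind described in the Methods (an adjusted hazard ratio with 95% CI for prehabilitation vs. control). The file name, column names, and covariates are illustrative stand-ins, not the trial data.

```python
# Hypothetical sketch of the adjusted Cox regression described in Methods.
# File and column names are illustrative; categorical confounders are assumed
# to be numerically encoded already.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("prehab_pooled_cohort.csv")          # hypothetical pooled dataset
cols = ["time_months", "died", "prehab", "age", "male", "clinical_stage"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_months", event_col="died")
cph.print_summary()   # exp(coef) for 'prehab' is the adjusted HR with its 95% CI
```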

40 pages, 2062 KiB  
Review
State of the Art in Internet of Things Standards and Protocols for Precision Agriculture with an Approach to Semantic Interoperability
by Eduard Roccatello, Antonino Pagano, Nicolò Levorato and Massimo Rumor
Network 2025, 5(2), 14; https://doi.org/10.3390/network5020014 - 21 Apr 2025
Viewed by 1046
Abstract
The integration of Internet of Things (IoT) technology into the agricultural sector enables the collection and analysis of large amounts of data, facilitating greater control over internal processes, resulting in cost reduction and improved quality of the final product. One of the main challenges in designing an IoT system is the need for interoperability among devices: different sensors collect information in non-homogeneous formats, which are often incompatible with each other. Therefore, the user of the system is forced to use different platforms and software to consult the data, making the analysis complex and cumbersome. The solution to this problem lies in the adoption of an IoT standard that standardizes the output of the data. This paper first provides an overview of the standards and protocols used in precision farming and then presents a system architecture designed to collect measurements from sensors and translate them into a standard. The standard is selected based on an analysis of the state of the art and tailored to meet the specific needs of precision agriculture. With the introduction of a connector device, the system can accommodate any number of different sensors while maintaining the output data in a uniform format. Each type of sensor is associated with a specific connector that intercepts the data intended for the database and translates it into the standard format before forwarding it to the central server. Finally, examples with real sensors are presented to illustrate the operation of the connectors and their role in an interoperable architecture, aiming to combine flexibility and ease of use with low implementation costs. Full article
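As an illustration of the connector idea, here is a minimal sketch that maps a vendor-specific soil-moisture payload onto an OGC SensorThings API Observation before forwarding it. The paper selects its target standard from the state-of-the-art review, so SensorThings, the endpoint URL, and the payload keys are assumptions made purely for this example.

```python
# Minimal sketch of a per-sensor "connector" that normalizes a proprietary payload
# into an OGC SensorThings API Observation before forwarding it to the central server.
# The endpoint URL, datastream id, and vendor payload keys are hypothetical.
import requests
from datetime import datetime, timezone

STA_ENDPOINT = "https://farm.example.org/FROST-Server/v1.1"  # hypothetical server

def connect_soil_sensor(raw: dict, datastream_id: int) -> dict:
    """Translate a proprietary soil-moisture reading into a SensorThings Observation."""
    return {
        "phenomenonTime": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "result": raw["moist_pct"],                  # vendor-specific key
        "Datastream": {"@iot.id": datastream_id},    # link to a pre-registered Datastream
    }

if __name__ == "__main__":
    vendor_payload = {"ts": 1714651200, "moist_pct": 23.4}   # example raw reading
    obs = connect_soil_sensor(vendor_payload, datastream_id=42)
    r = requests.post(f"{STA_ENDPOINT}/Observations", json=obs, timeout=10)
    r.raise_for_status()
```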

23 pages, 2266 KiB  
Article
Building an Indoor Digital Twin—A Use-Case for a Hospital Digital Twin to Analyze COVID-19 Transmission
by Youngin Lee, Min Hyeok Choi, Yong-Soo Song, Jun-Gi Lee, Jin Young Park and Ki-Joune Li
ISPRS Int. J. Geo-Inf. 2024, 13(12), 460; https://doi.org/10.3390/ijgi13120460 - 19 Dec 2024
Cited by 2 | Viewed by 1672
Abstract
As indoor space becomes more important in our daily life, the demand to build digital twins for indoor spaces is increasing accordingly. The properties of indoor spaces, however, differ from those of outdoor spaces, and we need to apply different approaches to build indoor digital twins. In our work, we propose a framework for building an indoor digital twin with a use case for hospitals in general and large hospitals in particular, which may be considered as one of the most complicated types of digital twin. One of our goals is to establish a framework for building indoor digital twins based on standards and our framework starts from OGC IndoorGML, which is a standard for indoor data models and encoding schemes for indoor spatial data. In this paper, each step of the framework is presented for the construction of an indoor hospital digital twin focusing on a use case of epidemic analysis of COVID-19 transmission in a hospital. The use case study covers the entire life cycle of the indoor spatial application from requirement analysis, data modeling, and building indoor spatial data to the development of a COVID-19 transmission analysis. Our work represents a use case for indoor digital twins based on the OGC IndoorGML standard and eventually may serve as a framework and reference for building indoor digital twins. As our work is mainly focused on the construction of hospital digital twins, the study on COVID-19 infection model itself is limited in this paper. Improvement of the infection models and validations will be the next step of our work. As HVAC (heat, ventilation, and air conditioning) was not fully considered in our use case, we also expect that it is possible to strengthen our use case by including HVAC for the analysis of airflow dynamics. Full article
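The cell-and-connectivity view that IndoorGML standardises can be pictured with a small graph sketch: rooms as cells, shared doors as edges, and a crude "door transitions from the index room" query. The room names and the hop-count proxy are illustrative only and are not the COVID-19 transmission model used in the paper.

```python
# A minimal sketch of the IndoorGML idea of cells (rooms) and their connectivity,
# used here to ask which spaces lie within two door transitions of a patient room.
# Room names and the hops-as-exposure proxy are illustrative assumptions.
import networkx as nx

indoor = nx.Graph()  # nodes ~ IndoorGML CellSpace, edges ~ shared doors/openings
indoor.add_edges_from([
    ("Ward_301", "Corridor_3F"),
    ("Ward_302", "Corridor_3F"),
    ("Corridor_3F", "Elevator_Hall"),
    ("Elevator_Hall", "Outpatient_Lobby"),
])

index_room = "Ward_301"
exposure = nx.single_source_shortest_path_length(indoor, index_room, cutoff=2)
print(sorted(room for room, hops in exposure.items() if hops > 0))
```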

12 pages, 245 KiB  
Article
Intensive Multiprofessional Rehabilitation Is Superior to Standard Orthogeriatric Care in Patients with Proximal Femur Fractures—A Matched Pair Study of 9580 Patients from the Registry for Geriatric Trauma (ATR-DGU)
by Ulf Bökeler, Ulrich Liener, Hannah Schmidt, Nils Vogeley, Vanessa Ketter, Steffen Ruchholtz and Bastian Pass
J. Clin. Med. 2024, 13(21), 6343; https://doi.org/10.3390/jcm13216343 - 23 Oct 2024
Viewed by 927
Abstract
Background: Orthogeriatric treatment, which involves a collaborative approach between orthopedic surgeons and geriatricians, is generally considered to be superior to standard care following hip fractures. The aim of this study was to investigate additional effects of a geriatrician-led multidisciplinary rehabilitation program. Methods: In this matched paired observational cohort study, patients aged 70 years and older with a proximal femur fracture requiring surgery were included. Between 1 January 2016 and 31 December 2022 data were recorded from hospital admission to 120-day follow-up in the Registry for Geriatric Trauma (ATR-DGU), a registry of older adults with hip fractures. Out of 60,254 patients, 9580 patients met the inclusion criteria, 4669 patients received early multiprofessional rehabilitation (EMR) and 4911 patients were treated by standard orthogeriatric co-management (OGC). Results: Compared to standard orthogeriatric treatment, multiprofessional therapy significantly lowered the 7-day mortality rate (2.89% vs. 5.11%) and had a significant impact on walking ability seven days after surgery (86.44% vs. 77.78%). Conclusions: In summary, a geriatrician-led multiprofessional rehabilitation program resulted in lower mortality and improved walking ability than standard orthogeriatric care. Full article
(This article belongs to the Special Issue Advances in Orthopedic Trauma Surgery in Geriatrics)
28 pages, 10559 KiB  
Article
Methodology of Mosaicking and Georeferencing for Multi-Sheet Early Maps with Irregular Cuts Using the Example of the Topographic Chart of the Kingdom of Poland
by Jakub Kuna, Tomasz Panecki and Mateusz Zawadzki
ISPRS Int. J. Geo-Inf. 2024, 13(7), 249; https://doi.org/10.3390/ijgi13070249 - 10 Jul 2024
Cited by 3 | Viewed by 3335
Abstract
The Topographic Chart of the Kingdom of Poland (pol. Topograficzna Karta Królestwa Polskiego, commonly referred to as ‘the Quartermaster’s Map’, hereinafter: TKKP) is the first Polish modern topographic map of Poland (1:126,000, 1843). Cartographic historians acclaim its conception by the General Quartermaster of the Polish Army, noting its editorial principles and technical execution as exemplars of the early 19th-century cartographic standards. Today, it stands as a national heritage relic, furnishing invaluable insights into the former Polish Kingdom’s topography. Although extensively utilised in geographical and historical inquiries, the TKKP has yet to undergo a comprehensive geomatic investigation and publication as spatial data services. Primarily, this delay stems from the challenges of mosaicking and georeferencing its 60 constituent sheets, owing to the uncertain mathematical framework and irregular sheet cuts. In 2023, the authors embarked on rectifying this by creating a unified TKKP mosaic and georeferencing the map to contemporary reference data benchmarks. This endeavour involved scrutinising the map’s mathematical accuracy and verifying prior findings. The resultant product is accessible via the ‘Maps with the Past’ platform, developed by the Institute of History of the Polish Academy of Sciences. The dissemination of raster data services adhering to OGC standards such as WMTS (Web Map Tile Service), ECW (Enhanced Compression Wavelet), and COG (Cloud Optimized GeoTIFF) facilitates the swift and seamless integration of the generated data into web and GIS tools. The digital edition of the TKKP emerges as a pivotal resource for investigations spanning natural and anthropogenic environmental transformations, sustainable development, and cultural heritage studies. Full article
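For readers consuming the published raster services, a Cloud Optimized GeoTIFF can be read over HTTP in windows and overviews without downloading the whole mosaic. The URL and window extent below are placeholders, not the actual 'Maps with the Past' resources.

```python
# Sketch of consuming the mosaic as a Cloud Optimized GeoTIFF with rasterio:
# windowed, overview-aware reads over HTTP. URL and extent are placeholders.
import rasterio
from rasterio.windows import from_bounds

COG_URL = "https://example.org/tkkp/quartermasters_map_cog.tif"  # hypothetical

with rasterio.open(COG_URL) as src:
    print(src.crs, src.res, src.overviews(1))          # georeferencing and pyramid levels
    # read a small window (arbitrary example extent in the mosaic CRS)
    win = from_bounds(635000, 5780000, 640000, 5785000, transform=src.transform)
    patch = src.read(1, window=win)
    print(patch.shape)
```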

24 pages, 2221 KiB  
Article
Linked Data Generation Methodology and the Geospatial Cross-Sectional Buildings Energy Benchmarking Use Case
by Edgar A. Martínez-Sarmiento, Jose Manuel Broto, Eloi Gabaldon, Jordi Cipriano, Roberto García and Stoyan Danov
Energies 2024, 17(12), 3006; https://doi.org/10.3390/en17123006 - 18 Jun 2024
Viewed by 1326
Abstract
Cross-sectional energy benchmarking in the building domain has become crucial for policymakers, energy managers and property owners as they can compare an immovable property performance against its closest peers. For this, Key Performance Indicators (KPIs) are formulated, often relying on multiple and heterogeneous data sources which, combined, can be used to set benchmarks following normalization criteria. Geographically delimited parameters are important among these criteria because they enclose entities sharing key common characteristics the geometrical boundaries represent. Linking georeferenced heterogeneous data is not trivial, for it requires geographical aggregation, which is often taken for granted or hidden within a pre-processing activity in most energy benchmarking studies. In this article, a novel approach for Linked Data (LD) generation is presented as a methodological solution for data integration together with its application in the energy benchmarking use case. The methodology consists of eight phases that follow the best principles and recommend standards including the well-known GeoSPARQL Open Geospatial Consortium (OGC) for leveraging the geographical aggregation. Its feasibility is demonstrated by the integrated exploitation of INSPIRE-formatted cadastral data and the Buildings Performance Certifications (BPCs) available for the Catalonia region in Spain. The outcomes of this research support the adoption of the proposed methodology and provide the means for generating cross-sectional building energy benchmarking histograms from any-scale geographical aggregations on the fly. Full article
(This article belongs to the Section G: Energy and Buildings)
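The geographical-aggregation step that GeoSPARQL enables can be sketched as a query selecting certified buildings whose footprints fall within a district polygon. The endpoint, the ex: vocabulary, and the polygon below are assumptions for illustration, not the project's actual model.

```python
# Sketch of geographical aggregation with GeoSPARQL: buildings whose footprint lies
# within a polygon, returning their energy-use values for benchmarking.
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX ex:   <https://example.org/energy#>

SELECT ?building ?kwhm2 WHERE {
  ?building geo:hasGeometry/geo:asWKT ?wkt ;
            ex:primaryEnergyUse ?kwhm2 .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((2.10 41.35, 2.25 41.35, 2.25 41.45, 2.10 41.45, 2.10 41.35))"^^geo:wktLiteral))
}
"""

sparql = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
rows = sparql.query().convert()["results"]["bindings"]
values = [float(r["kwhm2"]["value"]) for r in rows]   # input to the benchmarking histogram
```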

16 pages, 2260 KiB  
Article
Search Engine for Open Geospatial Consortium Web Services Improving Discoverability through Natural Language Processing-Based Processing and Ranking
by Elia Ferrari, Friedrich Striewski, Fiona Tiefenbacher, Pia Bereuter, David Oesch and Pasquale Di Donato
ISPRS Int. J. Geo-Inf. 2024, 13(4), 128; https://doi.org/10.3390/ijgi13040128 - 12 Apr 2024
Cited by 2 | Viewed by 1992
Abstract
The improvement of search engines for geospatial data on the World Wide Web has been a subject of research, particularly concerning the challenges in discovering and utilizing geospatial web services. Despite the establishment of standards by the Open Geospatial Consortium (OGC), the implementation of these services varies significantly among providers, leading to issues in dataset discoverability and usability. This paper presents a proof of concept for a search engine tailored to geospatial services in Switzerland. It addresses challenges such as scraping data from various OGC web service providers, enhancing metadata quality through Natural Language Processing, and optimizing search functionality and ranking methods. Semantic augmentation techniques are applied to enhance metadata completeness and quality, which are stored in a high-performance NoSQL database for efficient data retrieval. The results show improvements in dataset discoverability and search relevance, with NLP-extracted information contributing significantly to ranking accuracy. Overall, the GeoHarvester proof of concept demonstrates the feasibility of improving the discoverability and usability of geospatial web services through advanced search engine techniques. Full article
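The harvesting step such a search engine depends on can be sketched with OWSLib: read a provider's WMS GetCapabilities document and collect per-layer titles, abstracts, and keywords as the raw metadata that NLP later enriches and ranks. The endpoint is a placeholder, not one of the actual Swiss providers.

```python
# Sketch of harvesting layer metadata from a WMS GetCapabilities document with OWSLib.
# The service URL is hypothetical.
from owslib.wms import WebMapService

wms = WebMapService("https://wms.example.ch/ogc", version="1.3.0")  # hypothetical endpoint
records = []
for name, layer in wms.contents.items():
    records.append({
        "layer": name,
        "title": layer.title,
        "abstract": layer.abstract or "",   # often empty -> candidate for semantic augmentation
        "keywords": layer.keywords or [],
        "bbox": layer.boundingBoxWGS84,
    })
print(f"harvested {len(records)} layer descriptions")
```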

21 pages, 3671 KiB  
Article
A Novel Standardized Collaborative Online Model for Processing and Analyzing Remotely Sensed Images in Geographic Problems
by Xueshen Zhang, Qiulan Wu, Feng Zhang, Xiang Sun, Huarui Wu, Shumin Wu and Xuefei Chen
Electronics 2023, 12(21), 4394; https://doi.org/10.3390/electronics12214394 - 24 Oct 2023
Cited by 1 | Viewed by 1578
Abstract
In recent years, remote sensing image processing technology has developed rapidly, and the variety of remote sensing images has increased. Solving a geographic problem often requires multiple remote sensing images to be used together. For an image processing analyst, it is difficult to become proficient in the image processing of multiple types of remote sensing images. Therefore, it is necessary to have multiple image processing analysts collaborate to solve geographic problems. However, as a result of the naturally large volumes of data and the computer resources they consume for analysis, remote sensing images present a barrier in the collaboration of multidisciplinary remote sensing undertakings and analysts. As a result, during the development of the collaborative analysis process, it is necessary to achieve the online processing and analysis of remote sensing images, as well as to standardize the online remote sensing image collaborative analysis process. To address the above issues, a hierarchical collaborative online processing and analysis framework was developed in this paper. This framework defined a clear collaborative analysis structure, and it identifies what kinds of online image processing and analysis activities participants can engage in to successfully conduct collaborative processes. In addition, a collaborative process construction model and an online remote sensing image processing analysis model were developed to assist participants in creating a standard collaborative online image processing and analysis process. In order to demonstrate the feasibility and effectiveness of the framework and model, this paper developed a collaborative online post-disaster assessment process that utilizes radar images and optical remote sensing images for a real forest fire event. This process was based on the BPMN2.0 and OGC dual standards. Based on the results, the proposed framework provides a hierarchical collaborative remote sensing image processing and analysis process with well-defined stages and activities to guide the participants’ mutual collaboration. Additionally, the proposed model can help participants to develop a standardized collaborative online image processing process in terms of process structure and information interactions. Full article
(This article belongs to the Special Issue New Technology of Image & Video Processing)
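Although the abstract does not spell out which OGC interface carries the processing activities, a WPS-style request is one plausible binding for a BPMN 2.0 service task. The sketch below uses standard WPS 1.0.0 key-value pairs; the server URL and process identifier are hypothetical.

```python
# Sketch of driving one activity of the collaborative process through an OGC WPS interface:
# discover the service and request a process description with WPS 1.0.0 key-value pairs.
import requests

WPS = "https://example.org/wps"  # hypothetical processing server

caps = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0", "request": "GetCapabilities"}, timeout=30)
caps.raise_for_status()

desc = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0", "request": "DescribeProcess",
    "identifier": "burned_area_assessment"}, timeout=30)   # hypothetical process id
print(desc.text[:300])   # XML describing inputs (e.g. radar and optical scenes) and outputs
```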

19 pages, 2014 KiB  
Article
Assessment of Project Management Maturity Models Strengths and Weaknesses
by Valentin Nikolaenko and Anatoly Sidorov
J. Risk Financial Manag. 2023, 16(2), 121; https://doi.org/10.3390/jrfm16020121 - 14 Feb 2023
Cited by 6 | Viewed by 8279
Abstract
The purpose of this article is to analyze the most popular maturity models in order to identify their strengths and weaknesses. Research conducted by international project management communities such as Software Engineering Institute (SEI), Project Management Institute (PMI), International Project Management Association (IPMA), Office of Government Commerce (OGC) and International Organization for Standardization (ISO) showed that organizations with high managerial maturity are more likely to achieve their planned project goals than those that do not identify and standardize their best management practices. This circumstance has encouraged scientists from all over the world to start developing various models that can measure and evaluate managerial maturity in projects. Nowadays, the variety of models created has led to considerable difficulty in understanding the strengths and weaknesses of each model. To solve this problem, the article authors conducted a critical analysis to identify the strengths and weaknesses of the most popular project management maturity models. The results obtained will be of interest to project managers, members of project teams, heads of organizations, project offices and everyone involved in the development of project activities. Based on the analysis, it was found that the most developed maturity models are based on international codes of knowledge of project management. Most maturity models ignore the presence of structural and infrastructural elements, such as a workplace, the necessary equipment and software, the availability of professional standards, instructions, regulations, etc. It was also revealed that there are no processes for assessing the effectiveness and efficiency of using the best practices in the maturity models. Full article
(This article belongs to the Section Business and Entrepreneurship)

35 pages, 14182 KiB  
Article
Future Swedish 3D City Models—Specifications, Test Data, and Evaluation
by Maria Uggla, Perola Olsson, Barzan Abdi, Björn Axelsson, Matthew Calvert, Ulrika Christensen, Daniel Gardevärn, Gabriel Hirsch, Eric Jeansson, Zuhret Kadric, Jonas Lord, Axel Loreman, Andreas Persson, Ola Setterby, Maria Sjöberger, Paul Stewart, Andreas Rudenå, Andreas Ahlström, Mikael Bauner, Kendall Hartman, Karolina Pantazatou, Wenjing Liu, Hongchao Fan, Gefei Kong, Hang Li and Lars Harrie
ISPRS Int. J. Geo-Inf. 2023, 12(2), 47; https://doi.org/10.3390/ijgi12020047 - 31 Jan 2023
Cited by 16 | Viewed by 6770
Abstract
Three-dimensional city models are increasingly being used for analyses and simulations. To enable such applications, it is necessary to standardise semantically richer city models and, in some cases, to connect the models with external data sources. In this study, we describe the development of a new Swedish specification for 3D city models, denoted as 3CIM, which is a joint effort between the three largest cities in Sweden—Stockholm, Gothenburg, and Malmö. Technically, 3CIM is an extension of the OGC standard CityGML 2.0, implemented as an application domain extension (ADE). The ADE is semantically thin, mainly extending CityGML 2.0 to harmonise with national standards; in contrast, 3CIM is mainly based on linkages to external databases, registers, and operational systems for the semantic part. The current version, 3CIM 1.0, includes various themes, including Bridge, Building, Utility, City Furniture, Transportation, Tunnel, Vegetation, and Water. Three test areas were created with 3CIM data, one in each city. These data were evaluated in several use-cases, including visualisation as well as daylight, noise, and flooding simulations. The conclusion from these use-cases is that the 3CIM data, together with the linked external data sources, allow for the inclusion of the necessary information for the visualisation and simulations, but extract, transform, and load (ETL) processes are required to tailor the input data. The next step is to implement 3CIM within the three cities, which will entail several challenges, as discussed at the end of the paper. Full article
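Since 3CIM is delivered as a CityGML 2.0 application domain extension, a consumer can start from plain CityGML parsing. The sketch below reads Building features and their measured heights from a test-area export, leaving the ADE-specific elements aside; the file name is a placeholder.

```python
# Minimal sketch of reading Building features from a CityGML 2.0 file such as a
# 3CIM test-area export. ADE-specific elements are omitted; file name is hypothetical.
from lxml import etree

NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml":  "http://www.opengis.net/gml",
}

tree = etree.parse("3cim_testarea.gml")  # hypothetical export of one of the three test areas
for b in tree.iterfind(".//bldg:Building", NS):
    gml_id = b.get("{http://www.opengis.net/gml}id")
    height = b.findtext("bldg:measuredHeight", default="n/a", namespaces=NS)
    print(gml_id, height)
```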

33 pages, 30060 KiB  
Article
Proposed Methodology for Accuracy Improvement of LOD1 3D Building Models Created Based on Stereo Pléiades Satellite Imagery
by Ana-Ioana Breaban, Valeria-Ersilia Oniga, Constantin Chirila, Ana-Maria Loghin, Norbert Pfeifer, Mihaela Macovei and Alina-Mihaela Nicuta Precul
Remote Sens. 2022, 14(24), 6293; https://doi.org/10.3390/rs14246293 - 12 Dec 2022
Cited by 3 | Viewed by 2615
Abstract
Three-dimensional city models play an important role for a large number of applications in urban environments, and thus it is of high interest to create them automatically, accurately and in a cost-effective manner. This paper presents a new methodology for point cloud accuracy improvement to generate terrain topographic models and 3D building modeling with the Open Geospatial Consortium (OGC) CityGML standard, level of detail 1 (LOD1), using very high-resolution (VHR) satellite images. In that context, a number of steps are given attention (which are often (in the literature) not considered in detail), including the local geoid and the role of the digital terrain model (DTM) in the dense image matching process. The quality of the resulting models is analyzed thoroughly. For this objective, two stereo Pléiades 1 satellite images over Iasi city were acquired in September 2016, and 142 points were measured in situ by global navigation satellite system real-time kinematic positioning (GNSS-RTK) technology. First, the quasigeoid surface resulting from EGG2008 regional gravimetric model was corrected based on data from GNSS and leveling measurements using a four-parameter transformation, and the ellipsoidal heights of the 142 GNSS-RTK points were corrected based on the local quasigeoid surface. The DTM of the study area was created based on low-resolution airborne laser scanner (LR ALS) point clouds that have been filtered using the robust filter algorithm and a mask for buildings, and the ellipsoidal heights were also corrected with the local quasigeoid surface, resulting in a standard deviation of 37.3 cm for 50 levelling points and 28.1 cm for the 142 GNSS-RTK points. For the point cloud generation, two scenarios were considered: (1) no DTM and ground control points (GCPs) with uncorrected ellipsoidal heights resulting in an RMS difference (Z) for the 64 GCPs and 78 ChPs of 69.8 cm and (2) with LR ALS-DTM and GCPs with corrected ellipsoidal height values resulting in an RMS difference (Z) of 60.9 cm. The LOD1 models of 1550 buildings from the Iasi city center were created based on Pléiades-DSM point clouds (corrected and not corrected) and existing building sub-footprints, with four methods for the derivation of the building roof elevations, resulting in a standard deviation of 1.6 m against high-resolution (HR) ALS point cloud in the case of the best scenario. The proposed method for height extraction and reconstruction of the city structure performed the best compared with other studies on multiple satellite stereo imagery. Full article
(This article belongs to the Section Urban Remote Sensing)
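The quasigeoid fitting step can be illustrated with one common four-parameter corrector surface, dN = a0 + a1·cosφ·cosλ + a2·cosφ·sinλ + a3·sinφ, fitted by least squares to GNSS/levelling residuals. The paper may use a different parameterisation, and the benchmark values below are invented for the example.

```python
# Sketch of fitting a four-parameter corrector surface to residuals between the
# gravimetric quasigeoid and GNSS/levelling benchmarks. Parameterisation is a common
# textbook form; benchmark values are made up for illustration.
import numpy as np

phi = np.radians([47.10, 47.13, 47.15, 47.18, 47.20])   # benchmark latitudes
lam = np.radians([27.52, 27.55, 27.58, 27.60, 27.63])   # benchmark longitudes
dN  = np.array([0.312, 0.305, 0.318, 0.301, 0.309])     # h_GNSS - H_levelled - N_EGG2008 (m)

A = np.column_stack([np.ones_like(phi),
                     np.cos(phi) * np.cos(lam),
                     np.cos(phi) * np.sin(lam),
                     np.sin(phi)])
params, *_ = np.linalg.lstsq(A, dN, rcond=None)

def quasigeoid_correction(phi_pt, lam_pt):
    """Correction (m) to add to the EGG2008 quasigeoid height at a point (radians)."""
    a = np.array([1.0,
                  np.cos(phi_pt) * np.cos(lam_pt),
                  np.cos(phi_pt) * np.sin(lam_pt),
                  np.sin(phi_pt)])
    return float(a @ params)

print(quasigeoid_correction(np.radians(47.16), np.radians(27.57)))
```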

17 pages, 6437 KiB  
Article
Improved IDW Interpolation Application Using 3D Search Neighborhoods: Borehole Data-Based Seismic Liquefaction Hazard Assessment and Mapping
by Jongkwan Kim, Jintae Han, Kahyun Park and Sangmuk Seok
Appl. Sci. 2022, 12(22), 11652; https://doi.org/10.3390/app122211652 - 16 Nov 2022
Cited by 10 | Viewed by 3760
Abstract
Traditional inverse distance weighting (IDW) interpolation is a process employed to estimate unknown values based on neighborhoods in 2D space. Proposed in this study is an improved IDW interpolation method that uses 3D search neighborhoods for effective interpolation on vertically connected observation data, such as water level, depth, and altitude. Borehole data are the data collected by subsurface boring activities and exhibit heterogeneous spatial distribution as they are densely populated near civil engineering or construction sites. In addition, they are 3D spatial data that show different subsurface characteristics by depth. The subsurface characteristics observed as such are used as core data in spatial modeling in fields, such as geology modeling, estimation of groundwater table distribution, global warming assessment, and seismic liquefaction assessment, among others. Therefore, this study proposed a seismic liquefaction assessment and mapping workflow using an improved IDW application by combining geographic information system (GIS) (ArcGIS (Esri, Redlands, CA, USA)), NURBS-based 3D CAD system (Rhino/Grasshopper (Robert McNeel & Associates, Seattle, WA, USA)), and numerical analysis system (MATLAB (MathWorks, Natick, MA, USA)). The 3D neighborhood search was conducted by the B-rep-based 3D topology analysis, and the mapping was done under the 2.5D environment by combining the voxel layer, DEM, and aerial images. The experiment was performed by collecting data in Songpa-gu, Seoul, which has the highest population density among the OECD countries. The results of the experiment showed between 7 and 105 areas with liquefaction potentials according to the search distance and the method of the approach. Finally, this study improved users’ accessibility to interpolation results by producing a 3D web app that used REST API based on OGC I3S Standards. Such an approach can be applied effectively in spatial modeling that uses 3D observation data, and in the future, it can contribute to the expansion of 3D GIS application. Full article
(This article belongs to the Section Civil Engineering)
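The core of the improved method, inverse distance weighting with a 3D search neighbourhood, can be sketched in a few lines: neighbours are gathered inside a 3D radius around each query location so that depth matters, then weighted by inverse distance. The borehole samples, radius, and power below are illustrative and do not reproduce the study's data or its B-rep topology analysis.

```python
# Sketch of IDW with a 3D search neighbourhood: neighbours found inside a 3D radius
# (so depth is part of the distance), then weighted by inverse distance.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, -30], [1000, 1000, 0], size=(500, 3))   # x, y, depth of samples
val = rng.uniform(0.0, 1.0, size=500)                            # e.g. liquefaction index

tree = cKDTree(pts)

def idw_3d(query_xyz, radius=150.0, power=2.0):
    idx = tree.query_ball_point(query_xyz, r=radius)             # 3D search neighbourhood
    if not idx:
        return np.nan
    d = np.linalg.norm(pts[idx] - query_xyz, axis=1)
    if np.any(d < 1e-9):                                         # query coincides with a sample
        return float(val[idx][np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * val[idx]) / np.sum(w))

print(idw_3d(np.array([500.0, 500.0, -10.0])))
```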

26 pages, 7672 KiB  
Article
Open Geospatial System for LUCAS In Situ Data Harmonization and Distribution
by Martin Landa, Lukáš Brodský, Lena Halounová, Tomáš Bouček and Ondřej Pešek
ISPRS Int. J. Geo-Inf. 2022, 11(7), 361; https://doi.org/10.3390/ijgi11070361 - 23 Jun 2022
Cited by 4 | Viewed by 3612
Abstract
The use of in situ references in Earth observation monitoring is a fundamental need. LUCAS (Land Use and Coverage Area frame Survey) is an activity that has performed repeated in situ surveys over Europe every three years since 2006. The dataset is unique in many aspects; however it is currently not available through a standardized interface, machine-to-machine. Moreover, the evolution of the surveys limits the performance of change analysis using the dataset. Our objective was to develop an open-source system to fill these gaps. This paper presents a developed system solution for the LUCAS in situ data harmonization and distribution. We have designed a multi-layer client-server system that may be integrated into end-to-end workflows. It provides data through an OGC (Open Geospatial Consortium) compliant interface. Moreover, a geospatial user may integrate the data through a Python API (Application Programming Interface) to ease the use in workflows with spatial, temporal, attribute, and thematic filters. Furthermore, we have implemented a QGIS plugin to retrieve the spatial and temporal subsets of the data interactively. In addition, the Python API includes methods for managing thematic information. The system provides enhanced functionality which is demonstrated in two use cases. Full article
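A machine-to-machine request against an OGC-compliant interface of the kind the system exposes could look like the WFS 2.0 sketch below, combining a spatial and a survey-year filter. The endpoint, layer name, and attribute names are assumptions rather than the project's published API.

```python
# Sketch of retrieving a spatial/temporal subset of LUCAS points via WFS 2.0.
# Endpoint, type name, and geometry/attribute names are hypothetical; the CQL filter
# syntax shown is the GeoServer extension, not core WFS.
import requests

WFS = "https://example.org/geoserver/lucas/ows"   # hypothetical service URL

params = {
    "service": "WFS", "version": "2.0.0", "request": "GetFeature",
    "typeNames": "lucas:points",
    "cql_filter": "survey_year = 2018 AND BBOX(geom, 14.0, 49.5, 16.0, 50.5)",
    "outputFormat": "application/json",
}
resp = requests.get(WFS, params=params, timeout=60)
resp.raise_for_status()
features = resp.json()["features"]
print(len(features), "LUCAS points returned")
```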

7 pages, 1817 KiB  
Communication
Mode Coupling and Steady-State Distribution in Multimode Step-Index Organic Glass-Clad PMMA Fibers
by Svetislav Savović, Alexandar Djordjevich, Isidora Savović and Rui Min
Photonics 2022, 9(5), 297; https://doi.org/10.3390/photonics9050297 - 27 Apr 2022
Cited by 7 | Viewed by 2094
Abstract
Mode coupling and power diffusion in multimode step-index (SI) organic glass-clad (OGC) PMMA fiber is examined in this study using the power flow equation (PFE). Using our previously proposed approach we determine the coupling coefficient D for this fiber. When compared to standard multimode SI PMMA fibers, the multimode SI OGC PMMA fiber has similar mode coupling strength. As a result, the fiber length required to achieve the steady-state distribution (SSD) in SI OGC PMMA fibers is similar to that required in standard SI PMMA fibers. We have confirmed that optical fibers with a plastic core show more intense mode coupling than those with a glass core, regardless of the cladding material. These findings could be valuable in communication and sensory systems that use multimode SI OGC PMMA fiber. In this work, we have demonstrated a successful employment of our previously proposed method for determination of the coupling coefficient D in multimode SI OGC PMMA fiber. This method has already been successfully employed in the previous research of mode coupling in multimode SI glass optical fibers, SI PMMA fibers and SI plastic-clad silica optical fibers. Full article
(This article belongs to the Special Issue Application of Multimode Optical Fibers)
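For reference, the time-independent power flow equation referred to here is usually written in Gloge's form, with P(θ, z) the angular power distribution at propagation angle θ and distance z, α(θ) the modal attenuation, and D the coupling coefficient; the boundary conditions and the specific α(θ) used for the OGC PMMA fibre are not reproduced here.

```latex
\frac{\partial P(\theta, z)}{\partial z}
  = -\alpha(\theta)\,P(\theta, z)
  + \frac{D}{\theta}\,\frac{\partial}{\partial \theta}
    \!\left(\theta\,\frac{\partial P(\theta, z)}{\partial \theta}\right)
```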

26 pages, 5257 KiB  
Review
Bibliometric Analysis of OGC Specifications between 1994 and 2020 Based on Web of Science (WoS)
by Mingrui Huang, Xiangtao Fan, Hongdeng Jian, Hongyue Zhang, Liying Guo and Liping Di
ISPRS Int. J. Geo-Inf. 2022, 11(4), 251; https://doi.org/10.3390/ijgi11040251 - 11 Apr 2022
Cited by 5 | Viewed by 3665
Abstract
The Open Geospatial Consortium (OGC) is an international non-profit standards organization. Established in 1994, OGC aims to make geospatial information and services FAIR: Findable, Accessible, Interoperable, and Reusable. OGC specifications have greatly facilitated interoperability among software, hardware, data, and users in the GIS field. This study collected publications related to OGC specifications from the Web of Science (WoS) database between 1994 and 2020 and conducted a literature analysis using Derwent Data Analyzer and VOSviewer, finding that OGC specifications have been widely applied in academic fields. The most productive organizations were Wuhan University and George Mason University; the most common keywords were interoperability, data, and web service. Since 2018, the emerging keywords that have attracted much attention from researchers were 3D city models, 3D modeling, and smart cities. To make geospatial data FAIR, the OGC specifications SWE and WMS served more for "Findable", SWE contributed more to "Accessible", WPS and WCS served more for "Interoperable", and WPS, XML schemas, WFS, and WMS served more for "Reusable". OGC specifications also serve data and web services for large-scale infrastructure such as the Digital Earth Platform of the Chinese Academy of Sciences. Full article