Article

Performance Testing of istSOS under High Load Scenarios

Institute of Earth Sciences, University of Applied Sciences and Arts of Southern Switzerland, 6952 Canobbio, Switzerland
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(11), 467; https://doi.org/10.3390/ijgi8110467
Submission received: 2 September 2019 / Revised: 3 October 2019 / Accepted: 20 October 2019 / Published: 23 October 2019
(This article belongs to the Special Issue Open Science in the Geospatial Domain)

Abstract

In the last 20 years, the mainstream of Earth information and decision making has been shaped by the vision of the Digital Earth, which calls for 3D representation, interoperability and modelling. In this context, the time dimension is essential but, despite its importance, not many open standards and implementations are available. The Sensor Observation Service from the Open Geospatial Consortium is one of them and was specifically designed to collect, store and share time series of observations from sensors. To better understand the performance and limitations of one software implementation of this standard in real cases, this study executed load testing of the istSOS application under high-load conditions, characterized by a high number of concurrent users, in three cases mimicking existing monitoring networks. The results, in addition to providing reference values for future similar tests, show the general capacity of istSOS to meet the INSPIRE quality of service requirements and to offer good performance with fewer than 500 concurrent users. When the number of concurrent users increases to 1000 and 2000, only 80% of the response times are below 30 seconds, a performance that is unsatisfactory for most modern usages.

Graphical Abstract

1. Introduction

Along with the evolution of digital technologies in recent years, we have seen the rise of the Open Science concept, which emphasizes the need for different forms of openness to maximize the benefits of investments in research. Open education, open software, open standards, open data, and open access are only some of the approaches that foster the sharing, reuse and application of scientific works, which in turn aim to provide high-quality and fair access to knowledge. This paper focuses on the performance analysis of an open geospatial software that implements an open geospatial standard used to collect and share monitoring data: this permits a better understanding of the current capabilities and limitations of this solution in meeting the needs posed by the current state of the art.

1.1. Earth Information and Decision Making

In the last 20 years, the mainstream of Earth information and decision-making has been shaped by the vision of the Digital Earth proposed by Gore in 1998 [1]. Since then, many steps have been taken toward the idea of a virtual environment where the user can explore the time and space dimensions, going from coarse to granular through a multitude of layers of information, to better understand the planet and take optimal decisions. A decade later, Goodchild [2] identified the challenges that still had to be addressed: (i) visualization and ease of use, (ii) interoperability and mashups, and (iii) modelling and simulation.
In the last decade, the visualization and ease-of-use requirements have largely been met thanks to the availability of the so-called virtual globes [3]. Pioneered by Google Earth [4] and NASA World Wind [5], virtual globes are now widely available from different vendors, for example ESRI (ArcGIS Explorer), or from open source projects like Cesium [6], and often take advantage of the latest technologies, such as WebGL, that permit their integration in Web browsers. An interesting review of the existing solutions has been presented by Keysers [7]. Recently, many efforts have been directed toward the integration of these globes with civil data, combining GIS and BIM [8], to permit a very high level of detail in data representation and exploration.
Similarly to visualization, the interoperability and mashup requirements have been addressed in the last 10 years by a number of initiatives and projects worldwide; some examples are INSPIRE [9], GEOSS [10], enviroGRIDS [11], ACQWA [12], and GMES [13]. At the same time, a number of Open Standards from the Open Geospatial Consortium (OGC) have been defined and have become de facto standards for Web-based geospatial technologies [14]. OGC standards enable the interoperable delivery of maps via the Web Map Service (WMS) and the distribution of geospatial vector and raster layers through the Web Feature Service (WFS) and the Web Coverage Service (WCS), respectively. Standardized access to data processing can be achieved by means of the Web Processing Service (WPS). Data from sensors can be standardized by using the Sensor Observation Service (SOS) [15,16] or the SensorThings API [17], which define interfaces for collecting and dispatching information on sensing systems and observations. Standard metadata are defined by the International Organization for Standardization (ISO) in 19139 (metadata encoding), 19115 (resource metadata), and 19119 (service metadata), and are made searchable through the Catalogue Service for the Web (CSW). Today, some of these standards are widely adopted and/or officially endorsed (e.g., by the INSPIRE directive), while others have not yet been widely diffused.
Modelling and simulation have also evolved dramatically in recent years, following the growth in computing power and data availability. If we look at hydrological modelling, for example, there is a growing interest in hyper-resolution global models [18], which run at very high resolutions in time (from 1 day down to a few hours or less) and in space (from 0.1 to 1 km). This evolution of models requires meeting several challenges, including: (i) the need to optimize processing costs, which increase exponentially with resolution; (ii) the necessity to obtain and process updated input data; and (iii) the desire to validate the simulation results.
It is clear that georeferenced data have a central role in this digital Earth vision, thanks to their ability to be correctly visualized in space, to be spatially mashed-up and integrated, and to support accurate modelling. The ability to understand the environment in which we live and its evolution in time, which is a key factor for the sustainable management of environmental resources and the development of a safe society, depends on the timeliness, quality, completeness, and availability of these data [19,20,21,22,23].

1.2. Spatio-Temporal Observations and Sensor Observation Services

In line with the objectives of the Digital Earth, GIS science today includes the handling of the spatio-temporal dimensions [24]. Specifically, as discussed by Gong et al. [25], there is a need for both temporal GIS (T-GIS) and real-time GIS (RT-GIS), which differ in the emphasis of their final scope: recording and accessing historical data (time series) or live current data (data streams). It is clear that while the main challenge of T-GIS is big data, the focus of RT-GIS is more on time efficiency. Nevertheless, both these requirements are important aspects in both scopes.
Thanks to services like the Open Data Cube [26] and Google Earth Engine [27], satellite images, which are available from a number of freely and openly accessible repositories, are today increasingly used to address a large number of environmental analyses. Conversely, the lack of collaboration and standardization in sharing in-situ data has often prevented data processing over larger geographic domains; as a consequence, the attention to in-situ monitoring has declined [28]. However, there is a recognized scientific need to support ground-based measurements, which are capable of observations at typically higher spatial and temporal resolution and are essential for the validation and calibration of satellite data and for measuring phenomena poorly suited to remote measurement [29], for example the groundwater level or the underwater phycocyanin concentration.
A possible response to the need for data from in-situ sensors is given by the protocols defined by the Sensor Web Enablement (SWE), which have been successfully used in several projects and applications [30] ranging, for example, from NASA's Earth observation [31] to the German–Indonesian Tsunami Early Warning System [32], environmentally optimized irrigation systems [33] and integrated in-situ ocean monitoring [23]. These successful implementations, and many others, demonstrated that the usage of these standards may pave the way for the required data interoperability and reusability in different applications.
Nevertheless, at the time of writing, the diffusion of the OGC-SWE protocols is not as wide as would be desirable to make them the de facto standard for sensor-network spatio-temporal data. Possible reasons, in the opinion of the authors of this paper, are: (i) the market strategy commonly adopted by sensor vendors of binding customers by providing closed solutions for data access; (ii) the higher payload and verbosity of the SOS compared to proprietary solutions, which are often not semantically annotated; and (iii) the lack of open solutions that are simple to use and have demonstrated robustness in production environments.

1.3. Motivation of Research and Outline

Although some SOS implementations are available on the market, to the best of the authors' knowledge no studies on the performance of any SOS software are available in the literature to evaluate their applicability in real cases. Poorazizi et al. [34] evaluated the performance of three SOS software implementations in serving data under a varying number of sensors and filters on area and time interval, but only under normal conditions (a single user). A performance evaluation of SOS implementations under high loads (concurrent users) and varying data-serving scenarios mimicking existing environmental monitoring networks is currently not available in the literature.
To partially fill this gap, this study presents a performance evaluation of a specific SOS implementation. While we recognize the importance of evaluating and comparing different existing SOS solutions, for example 52°North SOS [35] and MapServer [36], this study is focused on the istSOS project only. We believe the results of this study can highlight general issues in serving spatio-temporal observations and provide reference values for future comparisons with other SOS software or similar standard implementations (e.g., the SensorThings API). Additionally, as discussed by Zhou [37], the results of a performance test permit developers to identify strengths and weaknesses of a system and consequently to improve the service capability to meet user requirements; this evaluation will therefore support the istSOS community in improving the software. In the next section, a brief introduction to the istSOS project is provided.

1.4. istSOS

In 2010, during a renewal of the hydro-meteorological monitoring network of Canton Ticino, the Institute of Earth Sciences at the University of Applied Sciences and Arts of Southern Switzerland (IST-SUPSI) started the development of an implementation of the Sensor Observation Service standard from the OGC, named istSOS [38]. As described in detail in Cannata et al. [39], the software is based on the objective of creating a simple and open source implementation of the Sensor Observation Service standard, written in the Python language. This standard met the needs of collecting observations in near-real-time from the sensors, publishing information on the monitoring system and sharing data in near-real-time in an interoperable way.
From the beginning, the istSOS software has tried to incorporate IST-SUPSI's 20 years of expertise in hydro-meteorological monitoring network management into the application. In 2010, Cannata and Antonovic identified a number of open issues of SOS version 1.0.0; for this reason, istSOS started to implement a number of "special features" with the aim of extending the standard capabilities and meeting the more practical requirements derived from best practices and guidelines. As a result, today, istSOS has a Web-based administration interface (see Figure 1) that permits access to all the service configurations and operations without the need to explicitly write XML requests, making the usage of SOS more user-friendly. In particular, using the Web interface, it is possible to create a new instance and configure it by setting the database connection, defining the service provider and identification metadata, and specifying the coordinate system used for data storage and those accepted for reprojection. It is also possible to define the maximum data-period length accepted in a single request, to create and organize the offerings (groups of sensors), to register new sensors to the network, and to define virtual procedures as on-the-fly data processes. Finally, the administrator can define dictionaries of accepted observed properties, units of measure and data quality levels. From the same interface, it is possible to access the data management section, which permits manipulating the data by applying a time-series calculator or manually adjusting the values and the data quality indexes. A data viewer interface is also available to seamlessly explore the data in space and time, analysing the different observed properties and plotting, for the selected period, one or two properties from several sensors.
From a technical point of view, istSOS follows the Web Server Gateway Interface (WSGI) specification [40], so that it can be run on any WSGI compliant web server and offers, in addition to the standard SOS service based on XML, a RESTful interface based on the JSON format.
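To make the WSGI compliance mentioned above concrete, the following minimal sketch shows the callable interface that a WSGI server such as mod_wsgi or gevent expects; it is not istSOS code, only an illustration of the contract, and the response body and port are arbitrary placeholders.

```python
# Minimal sketch of the WSGI callable interface that istSOS conforms to.
# This is not istSOS code; it only illustrates the contract a WSGI server expects.
def application(environ, start_response):
    # 'environ' carries the request (path, query string, headers, ...).
    status = "200 OK"
    body = b'{"message": "service alive"}'
    headers = [("Content-Type", "application/json"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]

if __name__ == "__main__":
    # Any WSGI-compliant server can host the callable; the reference server
    # from the Python standard library is used here for demonstration only.
    from wsgiref.simple_server import make_server
    with make_server("", 8000, application) as httpd:
        httpd.serve_forever()
```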
Some of the remarkable “special features” currently supported by istSOS are:
  • server-side data aggregation based on basic statistics (mean, sum, count, max, min);
  • time-space re-projection of data based on user-specified coordinate system and time zone;
  • observation quality management and assessment through automatic data quality checks on insertion;
  • on-the-fly data processing with virtual procedures offered as regular sensors (e.g., serving reference evapotranspiration computed from observed temperature, relative humidity, wind velocity, and solar radiation);
  • multiple data output formats like CSV, JSON and O&M;
  • RESTful Web API for service configuration and data processing in JSON format;
  • notification service to monitor and dispatch messages when specific conditions are met;
  • security, authentication and authorization system based on user roles: admin, network manager, data manager and viewer.
istSOS has been used in several research applications in recent years [37,39,41,42,43] and is currently undergoing the OSGeo incubation process, which aims to promote the highest quality open source geospatial software. This process verifies that the project has a successfully operating, open and collaborative development community, has clear Intellectual Property (IP) oversight of the code, and implements the OSGeo operating principles of having: (i) clear documentation on how the project is managed, (ii) maintained developer and user documentation, (iii) a maintained source code management system, (iv) a maintained issue tracking system, (v) maintained project mailing lists, and (vi) automated build and smoke test systems. istSOS has successfully passed the OGC compliance tests for SOS versions 1.0.0 and 2.0.0.

2. Materials and Methods

2.1. Testing Approach

The performance testing in this study was conducted using a load testing approach. Load testing is a procedure aimed at measuring the response of a service when put under stress, in order to assess the service behaviour and performance under normal and peak load conditions. As defined by Menascé [44], the test is done using software that simulates the service usage in terms of the users' (not only humans') behaviour and number. The user behaviour is detailed by specifying what requests are executed and with what frequency, while the user number is simulated by specifying the number of concurrent users. The results of load testing are metrics that describe the system resource statistics of the server and the performance of the service.
For the scope of this research, among the numerous tools available for conducting load testing, the authors selected the open source framework named Locust [45]. The selection was due to its open license, its Python language, which perfectly fits with istSOS, and its event-based (not thread-bound) architecture, which makes it possible to benchmark thousands of users from a single machine. Locust is a scalable and distributed framework developed in Python and available under the MIT license. This framework enables the set-up of a load test under different scenarios, identified by the number of concurrent users and the hatch rate, which indicates the number of users spawned per second. Without entering into the technical details of the tool's functionality, which can be found in Cannata et al. [46], it is possible to simulate the users' behaviours by coding different user types, each characterized by its relative access frequency, the set of operations it executes, and the minimum and maximum waiting time between requests.
In each set of operations, it is possible to implement the task to be performed (generally an HTTP request), to associate a relative execution frequency with the operation, and to catch responses that should be considered an operation failure (e.g., a returned error message with HTTP status code 200). By creating and combining the collection of operations and user types, it is possible to define the test in code, which provides a high degree of freedom and facilitates the definition of realistic load tests that actually simulate the desired user behaviour, as illustrated by the sketch below.
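As an illustration of how such user types and weighted operations can be expressed in Locust, the sketch below defines a data-consumer user; the endpoint path, request parameters and task weights are placeholders (they are not the actual test scripts nor the weights of Table 3), and the API shown is that of current Locust releases.

```python
from locust import HttpUser, task, between

class DataConsumer(HttpUser):
    # Simulated users wait 1-5 seconds between operations (illustrative values).
    wait_time = between(1, 5)

    # Relative task weights are illustrative; the real tests used Table 3.
    @task(10)
    def get_capabilities(self):
        # Path and KVP parameters are placeholders for an istSOS instance.
        self.client.get("/istsos/demo?service=SOS&request=GetCapabilities",
                        name="GC")

    @task(5)
    def describe_sensor(self):
        self.client.get("/istsos/demo?service=SOS&request=DescribeSensor"
                        "&procedure=T_STATION_01", name="DS")

    @task(3)
    def get_observation(self):
        # A response with HTTP status 200 but an OGC exception body can be
        # flagged as an operation failure via catch_response.
        with self.client.get("/istsos/demo?service=SOS&request=GetObservation"
                             "&offering=temporary&observedProperty=air-temperature",
                             name="GO", catch_response=True) as resp:
            if b"ExceptionReport" in resp.content:
                resp.failure("service returned an OGC exception")
```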
Locust collects information on: the number of executed requests, the number of failed requests, the average content size, the number of requests per second, and response time statistics (i.e., min, max, average, median). During the load testing, the dstat tool [47] was used to record hardware statistics of the server. This tool allows monitoring all the system resources instantly and writing date, time and metrics directly to a comma-separated values file. The combination of user and server statistics permits evaluating the whole system performance and behaviour.

2.2. Testing Environment (System Configuration)

In order to perform the tests, two different machines were used: a physical server hosting the application under test and a physical client generating the load. Virtual environments were deliberately excluded to prevent possible interferences and I/O latencies due to the virtualisation layer. The configurations of the two machines are described in Table 1.
On the client side, a Locust environment was installed and each test was configured to record the average number of requests and the response time for each request type every minute. On the server side, the dstat tool was configured to record the usage of the processors (CPU), the usage of the memory (RAM) and the disk performance (I/O write and read speed) every minute. The tests were conducted on istSOS version 2.3. It can be reasonably assumed that, during the tests, the measured response time was not affected by any congestion in the data stream, since the 1 GB/s transport rate of the network used is certainly faster than the service response.
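One possible way to combine the two per-minute record streams for the joint evaluation described above is sketched below with pandas; the file names, the number of dstat header rows and the timestamp column are assumptions depending on the actual export settings, not details taken from the test setup.

```python
import pandas as pd

# Locust statistics exported per minute (file name and column labels are assumptions).
locust = pd.read_csv("locust_stats.csv", parse_dates=["time"])

# dstat CSV output: the number of metadata rows before the header and the label
# of the timestamp column depend on the dstat version and options (assumptions).
dstat = pd.read_csv("dstat_stats.csv", skiprows=6)
dstat = dstat.rename(columns={dstat.columns[0]: "time"})
dstat["time"] = pd.to_datetime(dstat["time"])

# Align the two per-minute streams so that response times can be related to the
# CPU, RAM and disk usage recorded on the server.
merged = pd.merge_asof(locust.sort_values("time"),
                       dstat.sort_values("time"),
                       on="time",
                       tolerance=pd.Timedelta("1min"))
print(merged.head())
```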

2.3. Experiment Definition and Set-Up

Using the previously described technologies and configurations, this research tested three different monitoring networks inspired by real-case applications. The tested monitoring systems differ in terms of number of sensors, length of the stored time series, number of observed properties per sensor, and measurement frequency.
The first real case mimics a regional monitoring system: the OASI air-quality monitoring network of Canton Ticino [48], where about 20 stations measure three observed properties (PM10, O3 and NO2) every 10 minutes. Data are available for the last 25 years. This monitoring network can be considered a mid-size monitoring network with regard to the number of sensors and the number of archived observations (about 100 million).
The second case replicates a national monitoring system: SwissMetNet (Landl et al., 2009), the automatic monitoring network of MeteoSwiss, which comprises 160 stations measuring seven meteorological parameters (air temperature, precipitation, relative humidity, hours of sunshine, wind speed, global radiation, air pressure) with a resolution of 10 minutes. Digital data are available at this time resolution from the 1980s. This monitoring network can be considered large-size in terms of number of stations and number of measurements (about 1700 million).
The third mimicked case is another regional monitoring system: the Canton Ticino GESPOS monitoring network, which collects the regional springs, wells and borehole observations. It counts 5120 sensors, each often registering a single property (e.g., groundwater head, spring flow or water temperature) measured only a few times. Data are therefore sparse in time, but the observed interval is about 45 years, ranging from the 1970s to the present. This monitoring network can be considered large-size because of the number of sensors. The tested synthetic monitoring systems, inspired by the three described real-case monitoring networks, have been configured as described in Table 2, and the database has been filled with synthetic data.
During the tests, two different types of user were configured in Locust: one reflecting a data consumer and one reflecting a data producer (see Figure 2). The data producer is typically a sensor that registers once with the system (SOS registerSensor request) and then starts to send observations at its sampling frequency (SOS insertObservation requests). The data consumer is typically a human who looks at the service capabilities (SOS getCapabilities request), then identifies the sensor of interest, looks at its specifications (SOS describeSensor request) and finally downloads the data (SOS getObservation request).
Each monitoring system of Table 2 has been tested under different scenarios with an increasing number of concurrent data consumers: 100, 200, 500, 1000, and 2000. The number of data producers equals the number of sensors of the tested monitoring system. In order to assign different relative frequencies to the different SOS requests performed by data consumers, weights derived from previous experiments have been used. In particular, the weight values were derived from the results presented in Cannata and Antonovic [33], where the total number of executed SOS requests of each type over a period of two years was extracted from the analysis of the server logs of an istSOS service used in the ENORASIS project. The selected relative weights of the requests applied during the tests are reported in Table 3.
While getCapabilities and describeSensor do not have complex filter capabilities, the getObservation request allows the selection of a time period and of the observed properties. During the execution of the tests, we decided to retrieve all the observed properties along a time interval randomly shifted within the data observation period. The time interval was a fixed period proportional to the observation frequency: for systems A and B the period was set to 1 day, and for case C to 6 months. During the testing, data producers and data consumers operated simultaneously on the same SOS service.
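The window selection logic can be summarised by the sketch below, which picks an interval of fixed length randomly shifted within the archived period; the function and the example bounds are ours, not the actual test scripts.

```python
import random
from datetime import datetime, timedelta

def random_window(archive_start, archive_end, window):
    """Return a time interval of fixed length, randomly shifted within the
    archived data period (window = 1 day for cases A and B, ~6 months for C)."""
    slack = (archive_end - archive_start) - window
    offset = timedelta(seconds=random.uniform(0, slack.total_seconds()))
    begin = archive_start + offset
    return begin, begin + window

# Example: a 1-day window somewhere within 25 years of archived data (case A).
begin, end = random_window(datetime(1994, 1, 1), datetime(2019, 1, 1),
                           timedelta(days=1))
```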
In addition to the different scenarios tested for the different monitoring systems (A, B and C), the tests were conducted on two different WSGI server applications:
  • gevent
  • mod_wsgi
mod_wsgi [49] is the Apache module to run Python application services; it is the environment in which istSOS has been tested and is therefore the server suggested by the istSOS community. gevent [50] is a coroutine-based networking library that was selected as an alternative due to its promising results: in Piël [51], a benchmark test comparing several WSGI server applications showed that gevent has low server latencies, system loads, CPU usage, response times and error rates.
While the gevent configuration was limited to setting the number of processes to 12, mod_wsgi has been configured with the following Apache MPM (Multi-Processing Module) worker module settings: StartServers = 12; MinSpareThreads = 50; MaxSpareThreads = 300; ThreadLimit = 128; ThreadsPerChild = 100; MaxRequestWorkers = 300; MaxConnectionsPerChild = 0. Interested readers can find details of these parameters in the official Apache MPM worker documentation [52].
Response sizes vary across the different scenarios, while request sizes are constant within a single test since the number of sensors and the data inserted and requested are fixed during the test execution. Values are listed in Table 4.

2.4. Reference for the Evaluation of the Results

To analyse the performance of a service, which is an important aspect of the so-called Quality of Service (QoS), several metrics have been used in the literature [53,54,55], but two of them are particularly prominent: availability and response time. While the availability indicates whether a user has access to a web resource, the response time shows how fast this happens. During load testing, these two aspects help to assess how satisfied a user will be with the web service.
To quantitatively evaluate the results of the tests, there is the need to identify a reference. To this end, we selected as reference the Quality of Service (QoS) defined by INSPIRE in the requirements for network services. The QoS for INSPIRE is defined in the implementing rules of Directive 2007/2/EC (Regulation (EC) No 976/2009), which specify three criteria: performance, capacity and availability. While the service performance should guarantee the continuous serving of data within given time limits, the capacity must handle at least a defined number of simultaneous users without degrading the performance, and the availability must guarantee 99% uptime, excluding planned breaks due to service maintenance. Specific values of limits and concurrent users for performance and capacity are defined in the appropriate Technical Guidance documents for the different service types; for example: discovery services like CSW, view services like WMS or download services like WFS. Currently, there seems to be no Technical Guidance (TG) on quality of service for SOS; nevertheless, the "Technical Guidance for implementing download services using the OGC Sensor Observation Service and ISO 19143 Filter Encoding" states that the TG specification for WFS can serve as an orientation for implementing QoS for SOS. The TG for download services sets the performance limit to 10 seconds to receive the initial response in a normal situation when retrieving metadata (e.g., Describe Spatial Data Set) and 30 seconds when retrieving data (e.g., Get Spatial Data Set). After that, the downstream speed must be higher than 0.5 MB/s. The capacity criterion is that 10 requests per second are served within the performance limits.
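For clarity, the criteria used to evaluate the results in the Discussion can be condensed into the following sketch, which compares a set of measured response times (in seconds) with the limits cited above; it is only an illustration of the criteria, not part of the testing framework.

```python
def inspire_qos_summary(metadata_times, data_times, requests_per_second):
    """Compare measured response times (seconds) with the INSPIRE download
    service limits discussed above: 10 s for metadata requests, 30 s for data
    requests, and a capacity of 10 requests per second."""
    def share_within(samples, limit):
        return sum(t <= limit for t in samples) / len(samples) if samples else None

    return {
        "metadata_within_10s": share_within(metadata_times, 10.0),
        "data_within_30s": share_within(data_times, 30.0),
        "capacity_met": requests_per_second >= 10.0,
    }
```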

3. Results

The presented results of the load testing relate only to the requests that are repeated in time (therefore excluding the registerSensor request, executed only once per sensor at database initialisation). Hereinafter, we will refer to the different SOS requests with the following acronyms:
  • IO for insertObservation,
  • RS for registerSensor,
  • GO for getObservation,
  • DS for describeSensor,
  • GC for getCapabilities.
Test results related to a specific request and number of concurrent users will be referred to by the acronym of the request followed by the number of concurrent data consumers (e.g., GC500 for the test of the getCapabilities request with 500 concurrent users). Finally, specific tests conducted with a selected WSGI interface will be designated with the suffix "m" for mod_wsgi and "g" for gevent, so DS200m refers to the results of the describeSensor request with 200 concurrent data consumers using the mod_wsgi server application.

3.1. Case A: Air-Quality Monitoring Network in Ticino

Average response times for the different SOS requests are illustrated in Figure 3 on a logarithmic scale. Table 5 reports the service loads of the tests in terms of requests per second. In general, we can observe two different behaviours across all the requests: fast responses up to 500 concurrent data consumers and exponentially slower responses for 1000 and above. The IO response time up to 500 users is below 1 second (from a minimum of 0.2 to a maximum of 0.98 seconds), while with 2000 users the registered minimum is 0.2 seconds and the maximum 67.35 seconds using mod_wsgi. GO is the request that shows the worst performance: between 0.03 and 2.4 seconds up to 500 users, and a maximum of 129 seconds with 2000 users and mod_wsgi. Table 6 and Table 7 report the percentage of IO and GO requests completed within given times for the different configurations of concurrent users and WSGI server.
GC responses up to 500 users range between 0.05 and 0.98 seconds (0.5 with gevent), while with more concurrent data consumers they reach a maximum of 9.39 seconds (0.3 with gevent). DS responses are between 0.03 and 0.88 seconds up to 500 users and then increase up to 124.46 seconds in the worst case (29.31 with mod_wsgi). Table 8 and Table 9 report the percentage of GC and DS requests completed within given times for the different configurations of concurrent users and WSGI server.
To compare mod_wsgi and gevent, we do not consider the 2000m test, which produced errors that may have affected the registered response times. In case A, in terms of relative percentages, gevent outperformed mod_wsgi by 10% on average. In particular, better performance was constantly registered for GO and, mostly, for IO requests, while a dual behaviour (faster/slower) is detected for GC and DS. The GC500 request registered the largest difference in average response time, with gevent 94 times faster than mod_wsgi.

3.2. Case B: Swiss Meteorological Monitoring Network

In Figure 4 the average response times for the different requests, concurrent users and WSGI applications are represented on a logarithmic scale. The service loads of the tests, in terms of requests per second, are reported in Table 10.
Similarly to case A, an exponential decrease in performance with growing numbers of concurrent users is detected from 500 concurrent users onward. The response times to IO requests show an average of 12 seconds; for IO100 and IO200 the average is about 3 seconds and gevent performed better. GO average response times are slow, and only GO100 and GO200g are within one second. GO2000g shows the worst results, with an average response of 87 seconds and a maximum of 289.71 seconds. Table 11 and Table 12 report the percentage of IO and GO requests completed within given times for the different configurations of concurrent users and WSGI server.
GC requests are executed on average in 7.8 seconds with mod_wsgi and 11.4 seconds with gevent. The slowest recorded response is about 3 minutes, while the slowest average is for GC2000g with 29.1 seconds. Average DS responses are below 1 second for 100 and 200 users but quickly become slower for more concurrent users, generally exceeding a 10-second waiting time and reaching up to 1 minute. The worst absolute response time is registered for DS2000g and is about 2.5 minutes. Table 13 and Table 14 report the percentage of GC and DS requests completed within given times for the different configurations of concurrent users and WSGI server.
Similarly to case A, in comparing the two WSGI servers we do not consider the 1000m and 2000m tests, which presented failures that may have affected the recorded response times. Comparing the remaining tests, we see that gevent is on average 2.27 times slower than mod_wsgi. This is due to a generally worse performance in all cases, particularly the DS cases; only in a few cases (IO100, IO200 and GO500) did gevent register a faster response time.

3.3. Case C: Springs and Wells Monitoring Network in Ticino

In case C, the insertObservation operation has not been included in the test since the data registration frequency is 1 month and thus the frequency of this operation is clearly negligible with respect to the others. As a result, only the data consumer user profile has been tested in case C. In Figure 5 the average response times for the different requests, concurrent users and WSGI applications are represented on a logarithmic scale. Table 15 shows the service loads of the tests in terms of requests per second and rate of failure. gevent registers a higher rate of failure but, in the error-free cases, performed much faster (83 times on average) than mod_wsgi in terms of registered response time.
In the pipeline of the data consumer request workflow (see Figure 2), GC is a blocking request: if no answer is received, the user has no information to perform either the exploration of specific sensors or any request for data. The requests produced unacceptably long response times (see Figure 5): close to 10 min on average for 100 and 200 concurrent users and rising up to about 26 min with 500 users. For higher numbers of concurrent users, the time to obtain a response continued to increase with mod_wsgi, while gevent was not able to provide any response in the timeframe of the test (see the n.a. values in Table 15).
With both mod_wsgi and gevent, the percentage of GC response failures for 500 users is about 50% and very close to 100% for 1000 and 2000 users. These error percentages invalidate the test results for the DS and GO requests for 500, 1000 and 2000 users, which are therefore omitted in Figure 6.
In both these cases gevent registered faster responses. For example, it was 20 times faster for DS100 and DS200 (5 s vs. 100 s) and 2 times faster for GO100 and GO200. In the case of DS500, gevent outperformed mod_wsgi by 64 times (13 vs. 887 seconds), and in the case of GO500 by 30 times.
Due to the previously discussed bias introduced by the high error rate, this load testing was re-executed excluding the GC requests. In this case, the lists of procedures and observed properties were hardcoded in the testing scripts, to overcome the lack of information needed to perform the DS and GO requests. Results of this further test are reported in Table 16 and Figure 7.
The response errors drop drastically and the number of executed requests per second increases. On average, not considering the tests with failures, gevent registered responses twice as fast, a ratio that becomes more important with higher numbers of concurrent users (for example, 10 times faster in the DS500 and GO500 tests). Table 17 and Table 18 report the percentage of DS and GO requests completed within given times for the different configurations of concurrent users and WSGI server.

3.4. Hardware Performance

The system performance metrics were monitored and recorded during the test execution by means of dstat. As illustrated in Figure 8, only a minimal part of the 32 GB of available memory was used: in case A it was limited to less than 2 GB, in case B to less than 3 GB, and in case C it reached a maximum of 6.4 GB.
The usage rates of the 6 available processors during the tests are plotted in Figure 8. In case A the usage never exceeded 40%. In case B, in tests with more than 200 users, it got close to 90%: about 95% for gevent and 85% for mod_wsgi. In case C the processors are almost saturated with more than 500 users.
Across all the tests, the registered disk I/O write speed had a mean of 2,049.32 ± 1,915.18 KB/s and a maximum of 77,744.00 KB/s, while the read speed had a mean of 32,694.31 ± 11,609.67 KB/s and a maximum of 477,288.00 KB/s, in all cases below the maximum SSD speed of 520,000 KB/s. The temporal behaviour of the I/O write speed shows, in all the tests, an almost constant rate interleaved by high peaks corresponding to insertObservation requests. The I/O read speed is always regular. As a representative example of this behaviour, the test on case B with 500 users and gevent is illustrated in Figure 9.

3.5. WSGI Servers Performance

In the experiments, two different WSGI server applications were used, and their responses per test case and request are reported in the previous sections. Figure 10 represents the average response-time difference between the gevent and mod_wsgi WSGI servers registered during the load testing in cases A, B and C with 100, 200, 500 and 1000 users. Tests where failures were registered are not considered, as the failures may have affected the registered performance. In case C, getCapabilities (GC) and insertObservation (IO) were not tested.
In Figure 11, the average response times for the different tests are illustrated to provide a general overview of all the tests. In this figure, as discussed previously, the test cases that registered errors are omitted since the errors could have affected the actual response time.

4. Discussion

As described in the methodology section, in this section the results are evaluated with respect to the Quality of Service (QoS) defined by INSPIRE in the requirements for network services and the respective Technical Guidance (TG) documents.
If we compare the SOS getCapabilities and describeSensor requests with INSPIRE's Get Download Service Metadata request, we can observe that in case A (mid-size monitoring system) istSOS on average satisfies the 10-second response time limit in all the concurrency scenarios. If we look at the percentage of requests exceeding this limit, we can clearly see that this performance is guaranteed at 100% only up to 500 concurrent users, at 98% up to 1000 users, and only at 50% with 2000 users. In case B, characterized by a large number of observations, conformance with this limit is never guaranteed at 100% if gevent is used; nevertheless, with the exception of a low percentage of requests (from 10% to 1%), it is satisfied with up to 200 users using gevent and 500 users using mod_wsgi. In case C, characterized by a large number of sensors, in the test that excluded the getCapabilities request due to its high error rate, the performance limit is satisfied with up to 500 users using both WSGIs: only 1% of the responses in DS500m exceeded the limit, and only by 12%.
Similarly, if we compare the SOS getObservation request to INSPIRE's Get Spatial Data Set request, we can see that istSOS satisfies the performance limits set by INSPIRE for data retrieval in all the load scenarios of case A up to 1000 users. In the GO2000g scenario only 1% of the requests exceeded the limit, while in the GO2000m scenario only 80% of the requests were below 30 seconds. In case B, the service fulfils the INSPIRE performance requirements for the scenarios of 100 and 200 concurrent users regardless of the WSGI used; with 500 users it is mostly satisfied (up to 95% of the requests with gevent and up to 90% with mod_wsgi), while with 1000 and 2000 it is never fulfilled. In case C with GC excluded, the limit is fulfilled up to 1000 users with the exception of 1% of the requests in the GO1000m scenario. With 2000 users only gevent partially satisfied the limit, and only for 66% of the requests.
There is no equivalent in the INSPIRE services for insertObservation but, being a transactional request that dynamically adds data to the system, we could, as a first approximation, use the same 30-second limit used for data requests. Under this assumption, we can see from the load test results that istSOS satisfies the INSPIRE performance limits in all the load scenarios of case A, except for 10% of the requests in scenario IO2000m. In case B, this limit is met only up to 500 users and, in the case of IO500g, for up to 90% of the responses. With mod_wsgi and 1000 and 2000 users only 80% of the responses are below the limit, and with gevent only with 1000 users was the system able to keep 66% of the response times within the INSPIRE performance limit.
Since the tests have been performed over a short period of time, it is not possible to draw final conclusions about the robustness of istSOS, intended as the ability of the service to function correctly even in the presence of errors. However, during the entire testing period we registered no downtime despite the presence of errors. The availability during the test period was therefore 100%, since the service was always up. Nevertheless, the availability of the service as perceived by the user is strongly impacted by the rate of failure. In this regard, if we consider the errors as service unavailability, istSOS showed that, if gevent is used as WSGI server, the 99% availability is guaranteed in both cases A and B, while for case C it is guaranteed with the exception of the getCapabilities request. Results obtained with mod_wsgi are on average of worse quality; in fact, they satisfy the availability criterion only up to 1000 simultaneous users in case A and up to 500 in cases B and C.
The capacity limit set by the TG for INSPIRE download services, identifying the load scenario that the service should be able to sustain while providing a compliant performance, is 10 requests per second (req/s). The technical guidelines also allow the server not to respond in an orderly fashion to requests exceeding 50 simultaneous requests; in the current study this condition is ignored, since the scope of this work is to test the service under high peak usage conditions. From the performance results, we have seen that case A satisfies the performance criteria up to 1000 users, which means a capacity of 56.38 and 53.98 req/s with mod_wsgi and gevent respectively (see Table 5). In case B the performance limits are met at most with 200 users, which corresponds to 11.89 req/s for mod_wsgi and 15.42 req/s for gevent (see Table 10). For case C (excluding GC requests) the performance limits are respected up to 500 users, which registered 24.40 and 37.47 req/s (see Table 15).
Looking at the usage of the hardware resources during test execution, we can see that the configuration of the server was adequate to support the tested cases. In fact, with the exception of CPU usage in a few cases (with 2000 concurrent users), the physical server limits were not reached. It is particularly interesting to see how the disk I/O is stressed when the sensors register new observations (insertObservation requests): contrary to what might be expected, data retrieval uses less I/O than data insertion. To better understand this apparently strange behaviour, the database log files of case B with 500 concurrent users were analysed as a representative case. The total time of the queries for each specific SOS request has been evaluated and is reported in Table 19. From these statistics we can clearly see that the execution of an IO request is on average an order of magnitude more expensive than that of a GO request.
The main reason for the difference is the data integrity checks that the software performs before inserting new data. In fact, at each IO request, before inserting the data, istSOS:
  • authenticates the sensor through a unique identifier provided at registration;
  • verifies that the properties observed are exactly those specified at sensor registration;
  • verifies that the new data have a later observation time than the latest registered observation for the same sensor.
These three operations take, on average, 98.8% of the total duration of the request's queries, and the last check is particularly slow because it executes a "SELECT MAX()" type of query over the time series. The queries of a GO request, which has no data integrity checks but needs to compose the XML response in memory, cost on average 13.3% of the queries of an IO request.
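A simplified sketch of the third (and slowest) check is given below, assuming a PostgreSQL backend accessed through psycopg2; the table and column names are hypothetical and do not reflect the actual istSOS schema.

```python
import psycopg2  # assumes a PostgreSQL backend

def observation_time_is_monotonic(conn, procedure_id, new_time):
    """Sketch of the integrity check described above: the new observation must
    be later than the latest observation stored for the same sensor. istSOS
    performs this kind of "SELECT MAX()" query over the whole time series,
    which is why insertObservation is comparatively expensive. Table and
    column names here are hypothetical, not the real istSOS schema."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT MAX(event_time) FROM observations WHERE procedure_id = %s",
            (procedure_id,),
        )
        latest = cur.fetchone()[0]
    return latest is None or new_time > latest
```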
Since we recognize that, in general, the hardware configuration affects the results of load testing experiments, we tried to minimize this impact by avoiding the use of virtual systems, using a high-speed network and running the requests from a physically different machine. While we cannot state that the hardware did not affect the results, we can safely assume that it did not play a crucial role in these tests. In fact, the recorded hardware usage showed that the system was adequately dimensioned: the RAM was never fully used and the CPU was close to (but below) 100% only in case C with 1000 and 2000 users.
Analysing Figure 10, we can see that, in general, gevent and mod_wsgi, the two WSGI servers we tested, are equivalent at low concurrency. However, for the getObservation request gevent is faster and presents the largest performance differences with respect to mod_wsgi. On the contrary, in case B (larger number of observations) with 500 concurrent users, mod_wsgi outperformed gevent in all the other requests (DS, GC and IO). A possible explanation for this behaviour could reside in a better capacity of gevent, which is based on coroutines, to handle I/O-blocking operations like the GO request, while mod_wsgi, using a multi-process and multi-thread approach, executes faster requests like GC, IO and DS without interference from I/O-blocking requests.
The three varying parameters of the tests are: number of sensors, number of stored observations and number of concurrent users. From the experiments, it is clear that the increase in the number of observations and sensors, from case A (105 million observations and 20 sensors) to case B (1.7 billion observations and 130 sensors), produces slower responses regardless of the WSGI server used. Similarly, the increase in the number of sensors and reduction in observations, from case A (20 sensors and 105 million observations) to case C (5100 sensors and 122 thousand observations), produced slower response times. This indicates that both the increase of stored historical observations and the number of deployed sensors negatively affect the istSOS performance. While it is not possible to separate the negative contributions to performance of the two components, it is clear that the amount of stored data has a greater negative impact than the number of sensors: in fact, from case A to case C, with respect to case A to case B, sensors increase by 50 times while observations diminish by 500 times and performance degrades by 5 times.

5. Conclusions

With the presented research, the authors have conducted a quantitative test of the istSOS software solution, which implements the SOS standard from the OGC, under different conditions. The objective of the test was to understand the capabilities of the software, under an ordinary installation, to meet the requirements of interoperable temporal and real-time GIS posed by the current state of the art in environmental modelling, visualization and Earth observation. To this end, the test analysed the service behaviour under different realistic monitoring networks, numbers of concurrent users and WSGI server types. The synthetic networks used in the test were inspired by existing deployed systems and specifically mimicked a regular monitoring network with 20 sensors and 100 million observations, a monitoring network with more stations and data, composed of 130 sensors and 1.7 billion measurements, and a monitoring network with a large number of sensors (5100) and limited data (1.7 million). Scenarios with 100, 200, 500, 1000 and 2000 concurrent users were tested for each network and for two different WSGI servers (mod_wsgi and gevent). The monitored and registered service metrics for each testing scenario and the different SOS requests were: served requests per second, response times and errors. Additionally, the server resource consumption in terms of CPU, RAM and I/O was monitored and registered.
The presented results show that istSOS is able to meet the INSPIRE requirements for a download service in terms of quality of service, with some limitations in the case of a high number of concurrent users. This limit is due to the demonstrated degradation of the software performance with spatial scaling (increasing number of deployed sensors) and temporal scaling (increasing length of the time series). In particular, under spatial scaling istSOS produced a large number of errors because of the large size of the getCapabilities response document, which overloaded the service and consequently led to a large number of time-out failures. This study has also demonstrated that gevent produces fewer errors than mod_wsgi and achieves a higher throughput (executed requests per second). Specifically, gevent showed better performance in dispatching observations through getObservation requests, while mod_wsgi is better at dispatching metadata (getCapabilities and describeSensor requests).
The results also demonstrate that, with a high number of sensors and high concurrency, the getCapabilities request undermines the correct working of istSOS: its extremely high response time causes timeout errors that block the server from executing other requests.
The study shows that istSOS's strategy of implementing data integrity checks before registering a new observation has a considerable cost. This cost may represent a great obstacle for monitoring systems characterized by a high frequency or a high number of sensors and requiring very fast data insertion to satisfy the needs of real-time geospatial applications.
As a general remark, the authors underline the fact that the QoS limits set for INSPIRE's download services are designed for serving data that generally support the formulation, implementation, monitoring and evaluation of policies and activities which have a direct or indirect impact on the environment. In this regard, this research has demonstrated that istSOS is suitable for most of the scientific or operational applications where data are accessed by tens of users, with peaks of one hundred in emergency cases [14,39]. However, it also demonstrated that its capacity to scale with increasing size of the monitoring network, either in terms of number of stored observations or of sensors, is limited. The degrading performance makes istSOS, in a standalone installation, not capable of providing a sufficient quality of service for Internet of Things (IoT) applications, where thousands of concurrent users directly accessing the data and thousands of connected sensors are expected.
From the software development perspective, the test has shown the challenges to be addressed by future versions: improving the response speed and the software capacity. The first would require some code optimisation, particularly to speed up the query execution time for data integrity checks. The latter would require the implementation of a strategy to scale istSOS at the application level; this could be addressed in future istSOS versions by exploring micro-service architectures and asynchronous programming.

Author Contributions

Conceptualization, M.C. (Massimiliano Cannata); Data curation, M.A. and M.C. (Massimiliano Cannata); Formal analysis, D.S., M.C. (Mirko Cardoso) and M.C. (Massimiliano Cannata); Investigation, M.C. (Massimiliano Cannata); Methodology, M.C. (Massimiliano Cannata); Software, M.A. and M.C. (Mirko Cardoso); Supervision, M.C. (Massimiliano Cannata); Writing–original draft, M.C. (Massimiliano Cannata); Writing–review & editing, M.C. (Massimiliano Cannata) and D.S.

Funding

The 4onse project has been funded by the Swiss National Science Foundation (SNSF) within the “Swiss Programme for Research on Global Issues for Development” (r4d programme, www.r4d.ch) supported by the Swiss Agency for Development and Cooperation (SDC) with decision IZ07Z0_160906/1.

Acknowledgments

The authors thank the Canton Ticino for their long term support in developing the istSOS software and Luca Ambrosini for the exploratory work done.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gore, A. The digital earth. Aust. Surv. 1998, 43, 89–91. [Google Scholar] [CrossRef]
  2. Goodchild, M.F. The use cases of digital earth. Int. J. Digit. Earth 2008, 1, 31–42. [Google Scholar] [CrossRef]
  3. Butler, D. Virtual globes: The web-wide world. Nature 2006, 439, 776–778. [Google Scholar] [CrossRef] [PubMed]
  4. Patterson, T.C. Google Earth as a (Not Just) Geography Education Tool. J. Geogr. 2007, 106, 145–152. [Google Scholar] [CrossRef]
  5. Bell, D.G.; Kuehnel, F.; Maxwell, C.; Kim, R.; Kasraie, K.; Gaskins, T.; Hogan, P.; Coughlan, J. NASA World Wind: Opensource GIS for Mission Operations. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; pp. 1–9. [Google Scholar]
  6. CesiumJS—Geospatial 3D Mapping and Virtual Globe Platform. Available online: https://cesiumjs.org/ (accessed on 1 March 2019).
  7. Keysers, J.H. Review of Digital Globes 2015: Australia and New Zealand Cooperative Research Centre for Spatial Information. 2015, p. 24. Available online: https://www.crcsi.com.au/assets/Resources/Globe-review-paper-March-2015.pdf (accessed on 22 October 2019).
  8. Liu, X.; Wang, X.; Wright, G.; Cheng, J.C.P.; Li, X.; Liu, R. A State-of-the-Art Review on the Integration of Building Information Modeling (BIM) and Geographic Information System (GIS). ISPRS Int. J. Geo-Inf. 2017, 6, 53. [Google Scholar] [CrossRef]
  9. INSPIRE|Welcome to INSPIRE. Available online: https://inspire.ec.europa.eu/ (accessed on 1 March 2019).
  10. GEO. Available online: http://www.earthobservations.org/index.php (accessed on 1 March 2019).
  11. enviroGRIDS. Available online: http://www.envirogrids.net/ (accessed on 1 March 2019).
  12. ACQWA. Available online: http://www.acqwa.ch/ (accessed on 1 March 2019).
  13. Copernicus. Available online: https://www.copernicus.eu/en (accessed on 1 March 2019).
  14. Giuliani, G.; Lacroix, P.; Guigoz, Y.; Roncella, R.; Bigagli, L.; Santoro, M.; Mazzetti, P.; Nativi, S.; Ray, N.; Lehmann, A. Bringing GEOSS Services into Practice: A Capacity Building Resource on Spatial Data Infrastructures (SDI). Trans. GIS 2017, 21, 811–824. [Google Scholar] [CrossRef]
  15. Na, A.; Priest, M. OpenGIS Sensor Observation Service (SOS) Encoding Standard; OpenGIS Standard 06-009r6; 2007. Available online: https://www.opengeospatial.org/standards/sos (accessed on 22 October 2019).
  16. Bröring, A.; Stasch, C.; Echterhoff, J. OGC® Sensor Observation Service Interface Standard. 2012. Available online: https://www.opengeospatial.org/standards/sos (accessed on 22 October 2019).
  17. Liang, S.; Huang, C.-Y.; Khalafbeigi, T. OGC SensorThings API Part 1: Sensing. 2016. Available online: https://www.opengeospatial.org/standards/sensorthings (accessed on 22 October 2019).
  18. Bierkens, M.F.P.; Bell, V.A.; Burek, P.; Chaney, N.; Condon, L.E.; David, C.H.; de Roo, A.; Döll, P.; Drost, N.; Famiglietti, J.S.; et al. Hyper-resolution global hydrological modelling: What is next? Hydrol. Process. 2015, 29, 310–320. [Google Scholar] [CrossRef]
  19. Chatzikostas, G.; Boskidis, I.; Symeonidis, P.; Tekes, S.; Pekakis, P. Enorasis. Procedia Technol. 2013, 8, 516–519. [Google Scholar] [CrossRef] [Green Version]
  20. Rossetto, R.; De Filippis, G.; Borsi, I.; Foglia, L.; Cannata, M.; Criollo, R.; Vázquez-Suñé, E. Integrating free and open source tools and distributed modelling codes in GIS environment for data-based groundwater management. Environ. Model. Softw. 2018, 107, 210–230. [Google Scholar] [CrossRef]
  21. Godish, T.; Davis, W.T. Air Quality. Available online: https://www.crcpress.com/Air-Quality/Godish-Davis-Fu/p/book/9781466584440 (accessed on 1 March 2019).
  22. Liu, J.; Mooney, H.; Hull, V.; Davis, S.J.; Gaskell, J.; Hertel, T.; Lubchenco, J.; Seto, K.C.; Gleick, P.; Kremen, C.; et al. Systems integration for global sustainability. Science 2015, 347, 1258832. [Google Scholar] [CrossRef] [Green Version]
  23. Pearlman, J.; Jirka, S.; Rio, J.D.; Delory, E.; Frommhold, L.; Martinez, S.; O’Reilly, T. Oceans of Tomorrow sensor interoperability for in-situ ocean monitoring. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–8. [Google Scholar]
  24. Leppelt, T.; Gebbert, S. A GRASS GIS based Spatio-Temporal Algebra for Raster-, 3D Raster- and Vector Time Series Data. In Proceedings of the EGU General Assembly Conference Abstracts, Vienna, Austria, 12–17 April 2015; Volume 17, p. 11672. [Google Scholar]
  25. Gong, J.; Geng, J.; Chen, Z. Real-time GIS data model and sensor web service platform for environmental data management. Int. J. Health Geogr. 2015, 14, 2. [Google Scholar] [CrossRef] [PubMed]
  26. Roberts, D.; Dunn, B.; Mueller, N. Open Data Cube Products Using High-Dimensional Statistics of Time Series. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8647–8650. [Google Scholar]
  27. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  28. Fekete, B.M.; Robarts, R.D.; Kumagai, M.; Nachtnebel, H.-P.; Odada, E.; Zhulidov, A.V. Time for in situ renaissance. Science 2015, 349, 685–686. [Google Scholar] [CrossRef] [PubMed]
  29. Famiglietti, J.S.; Cazenave, A.; Eicker, A.; Reager, J.T.; Rodell, M.; Velicogna, I. Satellites provide the big picture. Science 2015, 349, 684–685. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Conover, H.; Berthiau, G.; Botts, M.; Goodman, H.M.; Li, X.; Lu, Y.; Maskey, M.; Regner, K.; Zavodsky, B. Using sensor web protocols for environmental data acquisition and management. Ecol. Inform. 2010, 5, 32–41. [Google Scholar] [CrossRef]
  31. Su, H.; Houser, P.R.; Tian, Y.; Geiger, J.V.; Kumar, S.V.; Belvedere, D.R. A Land Information Sensor Web (LISW) Study in Support of Land Surface Studies. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; Volume 5, p. V-144. [Google Scholar]
  32. Lauterjung, J.; Rudloff, A. GITEWS—The German-Indonesian Tsunami Early Warning System. In Proceedings of the AGU Fall Meeting Abstracts, San Francisco, CA, USA, 5–9 December 2005; Volume 13, p. U13A-06. [Google Scholar]
33. Cannata, M.; Antonovic, M. Sensor Observations Service for Environmentally Optimizing Irrigation: istSOS within the ENORASIS Example. Int. J. Geoinform. 2015, 11, 1–8. [Google Scholar]
  34. Poorazizi, M.E.; Liang, S.H.L.; Hunter, A.J.S. Testing of Sensor Observation Services: A Performance Evaluation. In Proceedings of the First ACM SIGSPATIAL Workshop on Sensor Web Enablement, Redondo Beach, CA, USA, 6 November 2012; ACM: New York, NY, USA, 2012; pp. 32–38. [Google Scholar]
  35. 52°North Sensor Observation Service (SOS) Home Page. Available online: https://52north.org/software/software-projects/sos/ (accessed on 22 October 2019).
  36. Mapserver project Home Page. Available online: https://mapserver.org/ogc/sos_server.html (accessed on 22 October 2019).
  37. Zhou, Y.; Xie, H. The integration technology of sensor network based on web crawler. In Proceedings of the 2015 23rd International Conference on Geoinformatics, Wuhan, China, 19–21 June 2015; pp. 1–7. [Google Scholar]
  38. istSOS project Home Page. Available online: http://istsos.org/ (accessed on 22 October 2019).
  39. Cannata, M.; Antonovic, M.; Molinari, M.; Pozzoni, M. istSOS, a new sensor observation management system: Software architecture and a real-case application for flood protection. Geomat. Nat. Hazards Risk 2015, 6, 635–650. [Google Scholar] [CrossRef]
  40. Eby, P.J. PEP 333—Python Web Server Gateway Interface v1.0. Available online: https://www.python.org/dev/peps/pep-0333/ (accessed on 1 March 2019).
  41. Isufi, F.; Isufi, A.; Bulliqi, S. Development of NASA world wind based application to display sensors observation according to istsos platform. In Proceedings of the 14th SGEM GeoConference on Informatics, Geoinformatics and Remote Sensing, Albena, Bulgaria, 17–26 June 2014; Volume 1, pp. 587–592. [Google Scholar]
  42. Munoz, C.A.; Brovelli, M.A.; Corti, S.; Micotti, M.; Sessa, S.; Weber, E. A FOSS approach to Integrated Water Resource Management: The case study of Red-Thai Binh rivers system in Vietnam. Geomat. Workb. 2015, 12, 471–476. [Google Scholar]
  43. Eberle, J.; Urban, M.; Homolka, A.; Hüttich, C.; Schmullius, C. Multi-Source Data Integration and Analysis for Land Monitoring in Siberia. In Novel Methods for Monitoring and Managing Land and Water Resources in Siberia; Mueller, L., Sheudshen, A.K., Eulenstein, F., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 471–487. ISBN 978-3-319-24409-9. [Google Scholar]
  44. Menasce, D.A. Load testing of Web sites. IEEE Internet Comput. 2002, 6, 70–74. [Google Scholar] [CrossRef]
45. Heyman, H. Implementing Dynamic Allocation of User Load in a Distributed Load Testing Framework. Bachelor Thesis, Uppsala University, Uppsala, Sweden, 2013. [Google Scholar]
  46. Cannata, M.; Antonovic, M.; Molinari, M.E. Load testing of HELIDEM geo-portal: An OGC open standards interoperability example integrating WMS, WFS, WCS and WPS. Int. J. Spat. Data Infrastruct. Res. 2015, 9, 107–130. [Google Scholar] [CrossRef]
  47. Dstat command manual Home Page. Available online: http://linux.die.net/man/1/dstat (accessed on 22 October 2019).
  48. Osservatorio Ambientale della Svizzera Italiana (OASI) Home Page. Available online: http://www.oasi.ti.ch/web/dati/aria.html (accessed on 22 October 2019).
  49. Nielsen, A. Python Programming—Web Serving. 2013; Available online: http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/5946/pdf/imm5946.pdf (accessed on 22 October 2019).
  50. Fonseca, A.; Rafael, J.; Cabral, B. Eve: A Parallel Event-Driven Programming Language. In Euro-Par 2014: Parallel Processing Workshops; Lopes, L., Žilinskas, J., Costan, A., Cascella, R.G., Kecskemeti, G., Jeannot, E., Cannataro, M., Ricci, L., Benkner, S., Petit, S., et al., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 170–181. [Google Scholar]
  51. Piël, N. Benchmark of Python WSGI Servers. Available online: http://www.voidcn.com/article/p-sytdrkvi-bex.html (accessed on 1 March 2019).
  52. Multi-Processing Modules (MPMs)—Apache HTTP Server Version 2.4. Available online: https://httpd.apache.org/docs/2.4/mpm.html (accessed on 19 July 2019).
  53. Han, W.; Di, L.; Yu, G.; Shao, Y.; Kang, L. Investigating metrics of geospatial web services: The case of a CEOS federated catalog service for earth observation data. Comput. Geosci. 2016, 92, 1–8. [Google Scholar] [CrossRef]
  54. Seip, C.; Bill, R. Evaluation and Monitoring of Service Quality: Discussing Ways to Meet INSPIRE Requirements. Trans. GIS 2016, 20, 163–181. [Google Scholar] [CrossRef]
  55. Giuliani, G.; Dubois, A.; Lacroix, P. Testing OGC Web Feature and Coverage Service performance: Towards efficient delivery of geospatial data. J. Spat. Inf. Sci. 2013, 2013, 1–23. [Google Scholar] [CrossRef]
Figure 1. Graphical user interface of the istSOS software. At the top left, a screenshot of the administration interface used to register a new sensor in the monitoring network: specific buttons open new interfaces for Web service settings and configuration. At the top right, the data editor interface used to manipulate and validate data. At the bottom, the data viewer interface used to explore data in space and time by observed property.
Figure 2. Users of a Sensor Observation Service and workflow of interactions with the server for information access.
Figure 3. Service response time for different Sensor Observation Service (SOS) requests in test case A (Air-quality Monitoring Network): insertObservation request in plot (a), getObservation request in plot (b), describeSensor request in plot (c) and getCapabilities request in plot (d).
Figure 4. Service response time in logarithmic scale for different SOS requests in test case B (Swiss Meteorological Monitoring Network): insertObservation request in plot (a), getObservation request in plot (b), describeSensor request in plot (c) and getCapabilities request in plot (d).
Figure 5. Service response time for the getCapabilities SOS requests in test case C (Springs and Wells Monitoring Network in Ticino).
Figure 6. Service response time for the getObservation (a) and describeSensor (b) SOS requests in test case C (Springs and Wells Monitoring Network in Ticino).
Figure 7. Service response time for the getObservation (a) and describeSensor (b) SOS requests in test case C (Springs and Wells Monitoring Network in Ticino) after excluding getCapabilities requests from the test.
Figure 8. Average RAM (a) and CPU (b) usage during the different test cases and numbers of concurrent users.
Figure 9. Disk usage in reading (a) and writing (b) during the test case B with 500 concurrent users.
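The RAM, CPU and disk figures in Figures 8 and 9 are the kind of server-side metrics that can be collected with the dstat utility [47] while a load test runs. The sketch below illustrates such a collection step from Python; the flags shown and the output file name are an assumption for illustration, not necessarily the exact invocation used by the authors.

```python
import subprocess

# Record timestamped CPU, memory and disk statistics once per second to a CSV
# file for the duration of the load test (illustrative dstat invocation).
monitor = subprocess.Popen(
    ["dstat", "--time", "--cpu", "--mem", "--disk", "--output", "server_stats.csv", "1"]
)

# ... run the load test from the client machine ...

monitor.terminate()  # stop collecting once the test is over
```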
Figure 10. Average response time difference (in seconds) between the gevent and mod_wsgi WSGI servers registered during the load testing in cases A, B and C with 100, 200, 500 and 1000 users. Positive values indicate that mod_wsgi was slower than gevent; negative values indicate that gevent was slower.
Figure 11. Average response time for different requests (IO, GC, DS, GO) and WSGI servers (ge = gevent and wg = mod_wsgi) versus different test cases characterized by different concurrent users (100, 200, 500, 1000) and monitoring networks (A = small size, B = large number of observations, C = large number of sensors).
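Figures 10 and 11 compare the two WSGI deployment options considered in the tests: Apache mod_wsgi and the coroutine-based gevent server. Purely as an illustration of the latter option, the sketch below serves a WSGI application with gevent's built-in server; the placeholder application stands in for the istSOS entry point, whose actual import path will differ in a real installation.

```python
from gevent import monkey
monkey.patch_all()  # cooperative sockets, so many concurrent requests share one process

from gevent.pywsgi import WSGIServer

# Placeholder WSGI application standing in for the istSOS entry point.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"istSOS would answer the SOS request here\n"]

# Serve on port 8080 with an event-driven (non-blocking) worker model.
WSGIServer(("0.0.0.0", 8080), application).serve_forever()
```

Under Apache, by contrast, the same application is mounted through mod_wsgi and request concurrency is governed by the Apache multi-processing modules [52].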
Table 1. Hardware configuration for the load testing experiments.
Component | Server Configuration | Client Configuration
HD | 1TB SSD (I/O 520 MB/s) | 500GB HDD (I/O 520 MB/s)
CPU | i7-5820K (6 cores / 12 threads) | i7-3770 (4 cores / 8 threads)
RAM | 32GB | 8GB
OS | Linux Ubuntu 14.04 | Linux Ubuntu 14.04
Table 2. Configuration of the tested monitoring systems inspired by real case applications.
Monitoring System A—Inspired by OASI Air-Quality
N. sensors | 20
Observed interval | 25 years
Observed properties per sensor | 4
Measurement frequency | 10 minutes

Monitoring System B—Inspired by SwissMetNet Weather
N. sensors | 130
Observed interval | 25 years
Observed properties per sensor | 10
Measurement frequency | 10 minutes

Monitoring System C—Inspired by GESPOS Springs & Wells
N. sensors | 5100
Observed interval | 2 years
Observed properties per sensor | 1
Measurement frequency | 1 month
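To give a sense of the data volumes these configurations imply, the following minimal Python sketch (a back-of-the-envelope calculation of ours, not a figure reported by the authors) estimates the number of stored measurements for each monitoring system from the values in Table 2. The contrast helps explain why case B stresses observation volume while case C, with its roughly 2.5 MB getCapabilities response (Table 4), stresses procedure metadata handling.

```python
# Rough estimate of the number of measurements implied by Table 2.
# Illustrative only: the databases used in the tests may differ
# (e.g., because of gaps or irregular sampling).

MINUTES_PER_YEAR = 365 * 24 * 60

systems = {
    # name: (sensors, observed years, properties per sensor, sampling interval in minutes)
    "A (OASI air quality)":     (20,   25, 4,  10),
    "B (SwissMetNet weather)":  (130,  25, 10, 10),
    "C (GESPOS springs/wells)": (5100, 2,  1,  60 * 24 * 30),  # ~monthly
}

for name, (sensors, years, properties, interval_min) in systems.items():
    samples_per_series = years * MINUTES_PER_YEAR / interval_min
    total = sensors * properties * samples_per_series
    print(f"{name}: ~{total:,.0f} measurements")
```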
Table 3. Relative weights of Sensor Observation Service (SOS) requests used for the data consumer user type during the test’s execution.
SOS Request | Weight
getCapabilities | 1
describeSensor | 5
getObservation | 100
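The weights in Table 3 mean that, on average, a simulated data consumer issues 5 describeSensor and 100 getObservation calls for every getCapabilities call. As an illustration only, a request mix of this kind can be expressed with the Locust load-testing framework (the distributed framework discussed in [45]); the endpoint path, service name and query parameters below are hypothetical, not the ones used by the authors.

```python
from locust import HttpUser, task, between


class DataConsumer(HttpUser):
    """Simulated SOS data consumer issuing requests with the Table 3 weights."""

    wait_time = between(1, 5)  # think time between requests (assumed, not from the paper)

    @task(1)
    def get_capabilities(self):
        # "demo" is a placeholder istSOS service name.
        self.client.get("/istsos/demo?request=GetCapabilities&service=SOS")

    @task(5)
    def describe_sensor(self):
        self.client.get(
            "/istsos/demo?request=DescribeSensor&service=SOS&version=1.0.0"
            "&procedure=T_STATION_1"
        )

    @task(100)
    def get_observation(self):
        self.client.get(
            "/istsos/demo?request=GetObservation&service=SOS&version=1.0.0"
            "&offering=temporary&procedure=T_STATION_1"
            "&observedProperty=air:temperature&responseFormat=text/xml"
        )
```

Such a file would typically be launched against the server under test with a command of the form `locust -f locustfile.py --host http://<istsos-server>` and a chosen number of simulated users.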
Table 4. Size of requests and responses (in KB) for the different test cases and SOS requests.
Request | Case A request (KB) | Case A response (KB) | Case B request (KB) | Case B response (KB) | Case C request (KB) | Case C response (KB)
GC | 0.685 | 21.3 | 0.685 | 77.2 | 0.685 | 2500
DS | 0.541 | 4.4 | 0.541 | 6.1 | 0.539 | 3.2
IO | 3.9 | 0.215 | 5.8 | 0.216 | - | -
GO | 0.925 | 58.4 | 0.925 | 102.2 | 0.923 | 4.6
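The sizes in Table 4 refer to single KVP (key-value pair) GET calls and their XML responses. A measurement of this kind could be reproduced with a short script such as the sketch below, which uses the Python requests library; the host, service name, procedure, observed property and time interval are invented for illustration, and the exact parameters accepted depend on the SOS version and configuration.

```python
import requests

# Hypothetical istSOS endpoint and parameter values, for illustration only.
url = "http://example.org/istsos/demo"
params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "temporary",
    "procedure": "T_STATION_1",
    "observedProperty": "air:temperature",
    "eventTime": "2019-01-01T00:00:00+01:00/2019-01-07T00:00:00+01:00",
    "responseFormat": "text/xml",
}

resp = requests.get(url, params=params, timeout=60)
request_kb = len(resp.request.url) / 1024   # approximate size of the KVP request
response_kb = len(resp.content) / 1024      # size of the returned observations
print(f"request ~{request_kb:.3f} KB, response ~{response_kb:.1f} KB")
```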
Table 5. Requests per second (total and per SOS request type) with different numbers of concurrent data consumers and WSGI applications, for the test case of a medium-size monitoring network (case A); row suffixes m and g denote the mod_wsgi and gevent WSGI servers, respectively. The failure rate (ρ) is reported when errors were detected.
Request | IO | GC | DS | GO | TOT
100m | 0.03 | 0.10 | 0.47 | 7.45 | 8.06
200m | 0.03 | 0.21 | 0.93 | 14.85 | 16.03
500m | 0.03 | 0.51 | 2.33 | 36.41 | 39.28
1000m | 0.03 | 0.80 | 3.44 | 52.10 | 56.38
2000m | 0.03 (ρ = 12.5%) | 1.17 (ρ = 1%) | 4.24 (ρ = 9%) | 61.49 (ρ = 11%) | 66.94 (ρ = 10%)
100g | 0.03 | 0.10 | 0.48 | 7.45 | 8.07
200g | 0.03 | 0.20 | 0.96 | 14.86 | 16.05
500g | 0.03 | 0.52 | 2.38 | 36.90 | 39.84
1000g | 0.03 | 0.77 | 3.23 | 49.95 | 53.98
2000g | 0.03 | 1.12 | 4.02 | 57.33 | 62.51
Table 6. Response times (in seconds) within which the given percentages of insertObservation requests completed, for the different configurations of concurrent users and WSGI server (case A).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
IO100g | 0.29 | 0.31 | 0.32 | 0.33 | 0.36 | 0.42 | 0.47 | 0.48 | 0.52
IO200g | 0.29 | 0.31 | 0.32 | 0.33 | 0.36 | 0.42 | 0.47 | 0.49 | 0.50
IO500g | 0.32 | 0.35 | 0.37 | 0.38 | 0.40 | 0.47 | 0.50 | 0.52 | 0.58
IO1000g | 2.20 | 4.30 | 5.50 | 6.20 | 7.70 | 8.30 | 8.70 | 9.20 | 9.67
IO2000g | 4.00 | 5.20 | 6.00 | 6.10 | 7.10 | 13.00 | 19.00 | 21.00 | 22.25
IO100m | 0.29 | 0.30 | 0.31 | 0.31 | 0.34 | 0.37 | 0.40 | 0.44 | 0.50
IO200m | 0.30 | 0.32 | 0.34 | 0.35 | 0.40 | 0.41 | 0.43 | 0.45 | 0.61
IO500m | 0.34 | 0.38 | 0.42 | 0.47 | 0.60 | 0.69 | 0.78 | 0.87 | 0.95
IO1000m | 2.90 | 4.00 | 5.20 | 6.10 | 7.50 | 11.00 | 14.00 | 14.00 | 15.61
IO2000m | 6.50 | 11.00 | 12.00 | 15.00 | 23.00 | 32.00 | 37.00 | 67.00 | 67.34
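Tables 6–9 (and the analogous tables for cases B and C) are percentile summaries of the raw response times recorded during each run. As a reminder of how such a row is obtained, the short sketch below computes the same percentiles with NumPy; the sample response times are invented.

```python
import numpy as np

# Invented response times (in seconds) standing in for one test run.
response_times = np.array([0.29, 0.31, 0.30, 0.35, 0.33, 0.42, 0.47, 0.52, 0.36, 0.31])

percentiles = [50, 66, 75, 80, 90, 95, 98, 99, 100]
row = np.percentile(response_times, percentiles)

for p, value in zip(percentiles, row):
    print(f"{p:>3}% of requests completed within {value:.2f} s")
```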
Table 7. Response times (in seconds) within which the given percentages of getObservation requests completed, for the different configurations of concurrent users and WSGI server (case A).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
GO100g | 0.11 | 0.13 | 0.14 | 0.15 | 0.18 | 0.21 | 0.26 | 0.29 | 0.56
GO200g | 0.12 | 0.14 | 0.15 | 0.16 | 0.20 | 0.23 | 0.27 | 0.31 | 0.76
GO500g | 0.14 | 0.17 | 0.20 | 0.21 | 0.27 | 0.32 | 0.38 | 0.43 | 0.94
GO1000g | 4.60 | 6.30 | 7.20 | 7.60 | 8.40 | 8.90 | 9.30 | 9.60 | 11.68
GO2000g | 14.00 | 17.00 | 19.00 | 20.00 | 21.00 | 22.00 | 23.00 | 23.00 | 32.43
GO100m | 0.13 | 0.15 | 0.16 | 0.17 | 0.21 | 0.24 | 0.28 | 0.32 | 0.72
GO200m | 0.14 | 0.17 | 0.19 | 0.21 | 0.26 | 0.31 | 0.37 | 0.43 | 1.37
GO500m | 0.27 | 0.37 | 0.44 | 0.49 | 0.65 | 0.81 | 1.00 | 1.20 | 2.46
GO1000m | 4.90 | 6.40 | 7.30 | 7.90 | 9.40 | 11.00 | 13.00 | 15.00 | 23.64
GO2000m | 11.00 | 15.00 | 18.00 | 21.00 | 32.00 | 39.00 | 62.00 | 70.00 | 129.81
Table 8. Response times (in seconds) within which the given percentages of getCapabilities requests completed, for the different configurations of concurrent users and WSGI server (case A).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
GC100g | 0.05 | 0.05 | 0.06 | 0.07 | 0.11 | 0.13 | 0.20 | 0.22 | 0.45
GC200g | 0.05 | 0.06 | 0.07 | 0.09 | 0.12 | 0.15 | 0.20 | 0.25 | 0.35
GC500g | 0.06 | 0.09 | 0.11 | 0.13 | 0.18 | 0.23 | 0.29 | 0.34 | 0.58
GC1000g | 1.20 | 4.20 | 5.80 | 6.60 | 7.90 | 8.60 | 9.20 | 9.40 | 10.54
GC2000g | 3.50 | 10.00 | 14.00 | 16.00 | 20.00 | 21.00 | 22.00 | 23.00 | 30.97
GC100m | 0.05 | 0.05 | 0.06 | 0.06 | 0.08 | 0.10 | 0.12 | 0.15 | 0.18
GC200m | 0.05 | 0.06 | 0.08 | 0.09 | 0.11 | 0.16 | 0.21 | 0.23 | 0.94
GC500m | 0.10 | 0.15 | 0.19 | 0.22 | 0.32 | 0.40 | 0.50 | 0.56 | 0.98
GC1000m | 1.60 | 2.80 | 3.60 | 4.10 | 5.40 | 6.60 | 8.20 | 11.00 | 18.44
GC2000m | 4.40 | 6.40 | 9.30 | 12.00 | 20.00 | 31.00 | 40.00 | 63.00 | 93.91
Table 9. Response times (in seconds) within which the given percentages of describeSensor requests completed, for the different configurations of concurrent users and WSGI server (case A).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
DS100g | 0.03 | 0.04 | 0.05 | 0.05 | 0.09 | 0.12 | 0.16 | 0.19 | 0.33
DS200g | 0.03 | 0.05 | 0.05 | 0.07 | 0.11 | 0.14 | 0.19 | 0.22 | 0.48
DS500g | 0.06 | 0.09 | 0.11 | 0.13 | 0.18 | 0.23 | 0.30 | 0.34 | 0.81
DS1000g | 3.00 | 5.30 | 6.40 | 7.00 | 8.10 | 8.70 | 9.20 | 9.50 | 10.35
DS2000g | 9.80 | 15.00 | 17.00 | 18.00 | 20.00 | 22.00 | 22.00 | 23.00 | 29.31
DS100m | 0.03 | 0.03 | 0.03 | 0.04 | 0.06 | 0.08 | 0.12 | 0.14 | 0.27
DS200m | 0.03 | 0.04 | 0.06 | 0.07 | 0.10 | 0.13 | 0.17 | 0.20 | 1.16
DS500m | 0.09 | 0.13 | 0.17 | 0.20 | 0.27 | 0.35 | 0.44 | 0.50 | 0.88
DS1000m | 2.10 | 3.10 | 3.70 | 4.20 | 5.40 | 6.50 | 8.40 | 11.00 | 18.48
DS2000m | 6.20 | 11.00 | 14.00 | 16.00 | 26.00 | 35.00 | 49.00 | 65.00 | 124.46
Table 10. Requests per second (total and per SOS request type) with different numbers of concurrent data consumers and WSGI applications, for the test case of a medium-size monitoring network (case B). The failure rate (ρ) is reported when errors were detected.
Request | IO | GC | DS | GO | TOT
100m | 0.22 | 0.10 | 0.46 | 7.24 | 8.02
200m | 0.22 | 0.16 | 0.71 | 10.79 | 11.89
500m | 0.22 | 0.28 | 1.07 | 15.34 | 16.90
1000m | 0.22 (ρ = 7.8%) | 0.46 (ρ = 6.7%) | 1.27 (ρ = 8.1%) | 16.75 (ρ = 9.4%) | 18.70 (ρ = 9.2%)
2000m | 0.21 (ρ = 36.8%) | 0.87 (ρ = 25.9%) | 2.14 (ρ = 33%) | 24.01 (ρ = 38.2%) | 27.23 (ρ = 37.2%)
100g | 0.22 | 0.09 | 0.43 | 6.93 | 7.67
200g | 0.22 | 0.19 | 0.89 | 14.12 | 15.42
500g | 0.22 | 0.32 | 1.22 | 17.77 | 19.53
1000g | 0.22 | 0.45 | 1.35 | 17.86 | 19.88
2000g | 0.22 | 0.73 | 1.62 | 17.75 | 20.31
Table 11. Response times (in seconds) within which the given percentages of insertObservation requests completed, for the different configurations of concurrent users and WSGI server (case B).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
IO100g | 2.20 | 3.00 | 3.70 | 4.30 | 6.80 | 9.20 | 12.00 | 14.00 | 25.29
IO200g | 2.20 | 3.50 | 4.50 | 5.30 | 8.40 | 14.00 | 20.00 | 22.00 | 26.01
IO500g | 12.00 | 17.00 | 21.00 | 23.00 | 28.00 | 31.00 | 35.00 | 38.00 | 46.47
IO1000g | 17.00 | 26.00 | 32.00 | 35.00 | 45.00 | 54.00 | 64.00 | 69.00 | 86.54
IO2000g | 39.00 | 60.00 | 75.00 | 81.00 | 100.00 | 112.00 | 123.00 | 132.00 | 179.36
IO100m | 5.60 | 10.00 | 14.00 | 15.00 | 17.00 | 23.00 | 27.00 | 27.00 | 28.77
IO200m | 3.50 | 5.50 | 6.60 | 7.60 | 9.80 | 12.00 | 12.00 | 13.00 | 14.60
IO500m | 8.10 | 9.00 | 9.70 | 10.00 | 11.00 | 13.00 | 14.00 | 16.00 | 19.32
IO1000m | 15.00 | 17.00 | 19.00 | 22.00 | 36.00 | 48.00 | 77.00 | 79.00 | 126.82
IO2000m | 15.00 | 17.00 | 21.00 | 23.00 | 42.00 | 60.00 | 78.00 | 81.00 | 127.55
Table 12. Response times (in seconds) within which the given percentages of getObservation requests completed, for the different configurations of concurrent users and WSGI server (case B).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
GO100g | 0.61 | 0.84 | 1.00 | 1.10 | 1.60 | 2.10 | 3.20 | 4.40 | 28.06
GO200g | 0.70 | 0.93 | 1.20 | 1.40 | 2.10 | 2.90 | 4.30 | 5.70 | 27.56
GO500g | 13.00 | 17.00 | 19.00 | 21.00 | 24.00 | 28.00 | 32.00 | 35.00 | 72.41
GO1000g | 35.00 | 42.00 | 49.00 | 54.00 | 61.00 | 67.00 | 75.00 | 85.00 | 156.84
GO2000g | 83.00 | 98.00 | 107.00 | 112.00 | 124.00 | 138.00 | 188.00 | 217.00 | 289.71
GO100m | 0.45 | 0.60 | 0.70 | 0.76 | 0.92 | 1.00 | 1.20 | 1.40 | 5.22
GO200m | 5.20 | 7.50 | 8.60 | 9.20 | 11.00 | 12.00 | 13.00 | 13.00 | 18.09
GO500m | 18.00 | 22.00 | 25.00 | 26.00 | 30.00 | 33.00 | 36.00 | 37.00 | 47.69
GO1000m | 31.00 | 36.00 | 40.00 | 43.00 | 54.00 | 71.00 | 90.00 | 98.00 | 155.35
GO2000m | 33.00 | 39.00 | 44.00 | 49.00 | 69.00 | 87.00 | 99.00 | 107.00 | 157.56
Table 13. Response times (in seconds) within which the given percentages of getCapabilities requests completed, for the different configurations of concurrent users and WSGI server (case B).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
GC100g | 0.08 | 0.16 | 0.29 | 0.39 | 0.89 | 1.40 | 3.20 | 5.00 | 28.39
GC200g | 0.49 | 1.30 | 2.20 | 3.20 | 8.30 | 13.00 | 22.00 | 25.00 | 28.03
GC500g | 5.40 | 11.00 | 14.00 | 16.00 | 21.00 | 24.00 | 28.00 | 32.00 | 42.22
GC1000g | 7.70 | 25.00 | 32.00 | 35.00 | 52.00 | 64.00 | 72.00 | 75.00 | 126.10
GC2000g | 8.70 | 20.00 | 49.00 | 63.00 | 92.00 | 112.00 | 128.00 | 138.00 | 192.42
GC100m | 0.08 | 0.09 | 0.10 | 0.11 | 0.16 | 0.19 | 0.23 | 0.24 | 0.35
GC200m | 0.36 | 1.90 | 3.60 | 4.50 | 5.70 | 6.50 | 7.30 | 7.60 | 8.66
GC500m | 5.50 | 7.20 | 8.20 | 8.90 | 10.00 | 11.00 | 12.00 | 13.00 | 15.18
GC1000m | 12.00 | 15.00 | 17.00 | 19.00 | 26.00 | 45.00 | 68.00 | 78.00 | 103.36
GC2000m | 13.00 | 17.00 | 19.00 | 22.00 | 41.00 | 58.00 | 77.00 | 80.00 | 127.07
Table 14. Response times (in seconds) within which the given percentages of describeSensor requests completed, for the different configurations of concurrent users and WSGI server (case B).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
DS100g | 0.05 | 0.17 | 0.33 | 0.49 | 1.00 | 1.60 | 2.90 | 4.60 | 12.68
DS200g | 0.21 | 0.58 | 0.94 | 1.20 | 2.60 | 4.70 | 8.60 | 14.00 | 27.59
DS500g | 12.00 | 17.00 | 19.00 | 20.00 | 24.00 | 27.00 | 31.00 | 33.00 | 44.73
DS1000g | 33.00 | 40.00 | 47.00 | 51.00 | 61.00 | 66.00 | 72.00 | 75.00 | 89.01
DS2000g | 69.00 | 83.00 | 94.00 | 101.00 | 114.00 | 124.00 | 132.00 | 136.00 | 150.06
DS100m | 0.03 | 0.04 | 0.05 | 0.06 | 0.10 | 0.13 | 0.17 | 0.19 | 0.45
DS200m | 0.25 | 0.56 | 0.88 | 1.10 | 1.70 | 2.20 | 2.80 | 3.40 | 5.31
DS500m | 3.00 | 3.90 | 4.50 | 4.80 | 5.90 | 6.70 | 7.80 | 8.40 | 11.13
DS1000m | 12.00 | 14.00 | 16.00 | 19.00 | 28.00 | 46.00 | 73.00 | 75.00 | 123.30
DS2000m | 12.00 | 16.00 | 21.00 | 25.00 | 44.00 | 70.00 | 76.00 | 84.00 | 128.73
Table 15. Requests per second (total and per SOS request type) with different numbers of concurrent data consumers and WSGI applications, for the test case of a large-size monitoring network (case C). The failure rate (ρ) is reported when errors were detected. Values reported as n.a. indicate that it was not possible to execute the test for that combination of request and number of users.
Request | GC | DS | GO | TOT
100m | 0.05 | 0.16 | 2.23 | 2.44
200m | 0.09 | 0.29 | 3.98 | 4.36
500m | 0.20 (ρ = 45.4%) | 0.38 (ρ = 77.0%) | 0.13 (ρ = 74.3%) | 0.71 (ρ = 74.3%)
1000m | 3.33 (ρ = 97.5%) | 0.58 (ρ = 99.1%) | n.a. | 3.92
2000m | 10.27 (ρ = 99.9%) | 0.01 (ρ = 100%) | n.a. | 10.28
100g | 0.08 (ρ = 4.8%) | 0.30 | 4.66 | 5.03
200g | 0.14 (ρ = 13.3%) | 0.47 | 7.21 | 7.82
500g | 0.21 (ρ = 64.8%) | 0.22 | 2.79 | 3.22
1000g | 2.09 (ρ = 100%) | n.a. | n.a. | 2.09
2000g | 9.67 (ρ = 100%) | n.a. | n.a. | 9.67
Table 16. Requests per second (total and per SOS request type) with different numbers of concurrent data consumers and WSGI applications, for the test case of a large-size monitoring network (case C) after excluding the getCapabilities request. The failure rate (ρ) is reported when errors were detected.
Request | DS | GO | TOT
100m | 0.39 | 7.39 | 7.78
200m | 0.78 | 14.71 | 15.49
500m | 1.32 | 23.08 | 24.40
1000m | 1.71 (ρ = 0.14%) | 29.16 (ρ = 0.22%) | 30.87
2000m | 2.40 (ρ = 5.90%) | 36.64 (ρ = 7.52%) | 39.05
100g | 0.39 | 7.40 | 7.79
200g | 0.78 | 14.76 | 15.54
500g | 1.93 | 35.55 | 37.47
1000g | 2.01 | 33.99 | 35.99
2000g | 2.50 | 38.97 | 41.47
Table 17. Response times (in seconds) within which the given percentages of describeSensor requests completed, for the different configurations of concurrent users and WSGI server (case C).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
DS100g | 0.04 | 0.05 | 0.07 | 0.10 | 0.18 | 0.23 | 0.33 | 0.36 | 0.68
DS200g | 0.05 | 0.08 | 0.11 | 0.14 | 0.20 | 0.26 | 0.35 | 0.41 | 0.83
DS500g | 0.21 | 0.34 | 0.46 | 0.56 | 0.92 | 1.50 | 2.70 | 3.80 | 6.75
DS1000g | 10.00 | 13.00 | 14.00 | 15.00 | 16.00 | 17.00 | 18.00 | 18.00 | 20.38
DS2000g | 22.00 | 28.00 | 31.00 | 32.00 | 36.00 | 39.00 | 46.00 | 50.00 | 66.44
DS100m | 0.04 | 0.04 | 0.05 | 0.05 | 0.08 | 0.11 | 0.17 | 0.21 | 0.49
DS200m | 0.04 | 0.06 | 0.09 | 0.10 | 0.16 | 0.22 | 0.30 | 0.35 | 0.67
DS500m | 4.90 | 6.00 | 6.60 | 7.00 | 8.10 | 8.80 | 9.60 | 10.00 | 11.19
DS1000m | 16.00 | 17.00 | 17.00 | 18.00 | 19.00 | 20.00 | 21.00 | 22.00 | 40.70
DS2000m | 29.00 | 33.00 | 36.00 | 38.00 | 48.00 | 53.00 | 59.00 | 63.00 | 107.97
Table 18. Response times (in seconds) within which the given percentages of getObservation requests completed, for the different configurations of concurrent users and WSGI server (case C).
Request | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
GO100g | 0.18 | 0.20 | 0.23 | 0.25 | 0.30 | 0.35 | 0.43 | 0.48 | 1.06
GO200g | 0.19 | 0.23 | 0.26 | 0.28 | 0.34 | 0.40 | 0.48 | 0.54 | 1.42
GO500g | 0.35 | 0.48 | 0.60 | 0.70 | 1.10 | 1.60 | 2.80 | 3.60 | 7.05
GO1000g | 11.00 | 13.00 | 15.00 | 15.00 | 16.00 | 17.00 | 18.00 | 18.00 | 21.98
GO2000g | 25.00 | 29.00 | 32.00 | 33.00 | 36.00 | 40.00 | 48.00 | 51.00 | 68.49
GO100m | 0.18 | 0.20 | 0.22 | 0.24 | 0.29 | 0.33 | 0.39 | 0.43 | 0.88
GO200m | 0.21 | 0.25 | 0.29 | 0.31 | 0.39 | 0.47 | 0.59 | 0.68 | 1.47
GO500m | 7.90 | 8.90 | 9.70 | 10.00 | 12.00 | 12.00 | 13.00 | 13.00 | 15.56
GO1000m | 19.00 | 20.00 | 21.00 | 21.00 | 23.00 | 23.00 | 25.00 | 25.00 | 42.26
GO2000m | 35.00 | 39.00 | 42.00 | 44.00 | 51.00 | 56.00 | 61.00 | 65.00 | 112.16
Table 19. Database query time statistics, registered in test case B with 500 concurrent users, aggregated by SOS request type (IO = insertObservation, GC = getCapabilities, DS = describeSensor, GO = getObservation).
Request | Mean (sec) | Min (sec) | Max (sec) | Total (hours)
IO | 13.46 | 0.784 | 45.022 | 2.993
GC | 0.718 | 0.017 | 9.173 | 0.110
DS | 1.044 | 0.002 | 34.570 | 0.935
GO | 1.807 | 0.416 | 46.088 | 15.751
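Table 19 aggregates the durations of individual database queries by SOS request type. Such statistics are straightforward to reproduce once the per-query durations have been logged; the sketch below shows one way to compute them from a CSV log with rows of the form request_type,duration_seconds. The file name and log format are assumptions for illustration, not the authors' actual logging setup.

```python
import csv
from collections import defaultdict

durations = defaultdict(list)

# Assumed log format: one row per database query, e.g. "GO,1.807"
with open("query_log.csv", newline="") as fh:
    for request_type, seconds in csv.reader(fh):
        durations[request_type].append(float(seconds))

for request_type, values in sorted(durations.items()):
    total_hours = sum(values) / 3600.0
    print(
        f"{request_type}: mean={sum(values) / len(values):.3f} s, "
        f"min={min(values):.3f} s, max={max(values):.3f} s, "
        f"total={total_hours:.3f} h"
    )
```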
