Article

Multivariate, Automatic Diagnostics Based on Insights into Sensor Technology

by Astrid Marie Skålvik 1,*, Ranveig N. Bjørk 2, Enoc Martínez 3, Kjell-Eivind Frøysa 4 and Camilla Saetre 1

1 Department of Physics and Technology, University of Bergen, 5020 Bergen, Norway
2 NORCE Norwegian Research Center, 5838 Bergen, Norway
3 SARTI-MAR Research Group, Electronics Department, Universitat Politècnica de Catalunya (UPC), 08800 Vilanova i la Geltrú, Spain
4 Department of Computer Science, Electrical Engineering and Mathematical Sciences, Western Norway University of Applied Sciences, 5020 Bergen, Norway
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(12), 2367; https://doi.org/10.3390/jmse12122367
Submission received: 31 October 2024 / Revised: 17 December 2024 / Accepted: 17 December 2024 / Published: 23 December 2024
(This article belongs to the Special Issue Progress in Sensor Technology for Ocean Sciences)

Abstract

With the rapid development of smart sensor technology and the Internet of Things, ensuring data accuracy and system reliability is paramount. As the number of sensors increases with the demand for high-resolution, high-quality input to decision-making systems, models and digital twins, manual quality control of sensor data is no longer an option. In this paper, we leverage insights into sensor technology, environmental dynamics and the correlation between data from different sensors for the automatic diagnostics of a sensor node. We propose a method for combining the results of automatic quality control of individual sensors with tests for detecting simultaneous anomalies across sensors. Building on both sensor and application knowledge, we develop a diagnostic logic that can automatically explain and diagnose instead of just labeling the individual sensor data as “good” or “bad”. This approach enables us to provide diagnostics that offer a deeper understanding of the data and their quality and of the health and reliability of the measurement system. Our algorithms are adapted for real-time and in situ operation on the sensor node. We demonstrate the diagnostic power of the algorithms on high-resolution measurements of temperature and conductivity from the OBSEA observatory about 50 km south of Barcelona, Spain.

1. Introduction

Automatic sensor quality control is an integral part of any system of smart sensors. Before data are used to update models, create predictions or support decisions, they must be checked to ensure they are of high enough quality.
For automatic quality control of oceanographic sensors operating underwater, the main approach has been to flag data with quality labels, such as “pass”/“good” data, “not evaluated”, “suspect”/“probably bad”, “fail”/“bad” data and similar ones [1,2,3]. While the tests for producing these labels are easy to implement, the labels are rather coarse, and it is not always stated in the metadata which thresholds have been used for the different labels. This reduces data re-usability: data that are discarded as they are of too low quality for the original application may be good enough for another, and what is considered noise for some users may be the signal of interest for other users in different domains [4]. Moreover, the established tests are mainly focused on anomalies in single variables, and even though [1] suggests combining different variables through a multivariate test, it notes that such testing is challenging and considered experimental. Multivariate tests are not included in publicly available libraries for oceanographic quality control algorithms such as [5,6,7], and we could not find any documentation that such tests are used in practice elsewhere.
The example provided in [1] considers detecting a simultaneous high rate of change in temperature and a second variable such as salinity, with thresholds for each variable needed to be set by the operator. However, to our knowledge, there is no documented literature on the application of a multivariate algorithm that not only combines insights into the interrelations between variables, but also addresses the challenge of setting comparable thresholds across different oceanic variables to produce concrete diagnoses.
If one extends the scope beyond the marine environmental monitoring domain, various methods have been proposed for the automatic quality control of sensor systems, ranging from simpler algorithms to sophisticated machine learning approaches for anomaly detection, such as [8,9,10,11], some also adapted for multi-sensor applications [12,13]. A common trait of many recent approaches is that they are data-driven, enabled by the rapid increase in computational power combined with access to large datasets.
Data-driven quality control can be efficient and powerful for cases where enough data are available. However, if few or no labeled data are available, the data-driven approaches become more complex and less powerful. The complexity increases further for applications with a high correlation between the different data sources, in addition to a high temporal correlation, and when the frequency and duration of anomalies cannot readily be estimated before the system is in operation [13]. In contrast to process-control systems where operating conditions often are close to training conditions, the conditions in environmental monitoring systems are dynamic, and both the values and statistical properties of the involved parameters may change substantially on different time-scales.
Rule-based or model-based diagnostic systems, on the other hand, are based on expert knowledge and are domain- and application-dependent [14,15]. Extracting the required information from experts is often a laborious and time-consuming process, though several attempts have been made to make this process more efficient [16]. One of the arguments for using data-driven methods for automatic sensor quality control instead of methods relying on expert knowledge is that domain-specific knowledge is not required. As the complexity of the data-driven methods increases, however, we fall into the other extreme: setting up sufficiently powerful data-driven methods requires an expert data scientist. Other important challenges with data-driven methods are that they lack transparency and explainability, although there is ongoing research to overcome this [16].
The lack of oceanographic datasets labeled with diagnoses, as opposed to simple “Good”/“Bad” labels, makes supervised machine-learning approaches inapplicable, as these methods depend on large quantities of labeled data. Unsupervised machine-learning approaches, on the other hand, would identify anomalies but not automatically produce transparent and explainable diagnostics.
In this paper, we therefore propose a method for multivariate, rule-based, expert-informed automatic sensor diagnostics, tailored for in situ operation on a sensor node. A sensor node consists of multiple sensors sharing a common preprocessing and communication unit, and is also referred to as a multiprobe sensor. We provide a framework for transferring domain-expert insights on sensor technology, environmental effects and the relations between different environmental and sensed variables into algorithms. With this method, insights into how different environmental conditions and internal errors affect the signals of different sensors, as well as insights into the relations between data recorded by the different sensors, are combined for explainable diagnostics. When an anomaly is detected in a variable measured by one sensor, we use the presence or absence of a similar anomaly in correlated sensor data to automatically propose a physics-informed diagnosis.
We demonstrate the method on a sensor node consisting of a temperature, a conductivity and a pressure sensor, deployed underwater. However, the method is applicable to any multi-sensor system with correlated data.

2. Materials and Methods

We distinguish between anomaly, event and error. Depending on the thresholds used, a detected anomaly is not necessarily a measurement error. In the dynamic environment encountered in the ocean, especially close to the coast and close to the surface, a statistical outlier in the measurement can in many cases reflect a real change in the water body. We denote such real changes as events. If test thresholds are so strict that only nonphysically rapid changes are detected, or if measurement values are outside the possible range, then the anomaly is probably an error, i.e., an erroneous measurement.
We first detail how anomalies can be detected in the data from each individual sensor (Section 2.1). We propose a combination of algorithms that is robust against missing data and out-of-range values, and that can continue to run automatically even after periods with erroneous data. In Section 2.2, we describe different ways of determining whether an anomaly detected in the data from one sensor is also present in data from a sensor measuring a correlated parameter. In Section 2.3, we show how different combinations of anomalies across the variables can be explained by different error modes or by environmental dynamics, and how this information can be used to set up robust diagnostics. The overall method is illustrated in Figure 1.
The method can be applied to any system where the data recorded by different sensors are correlated. In Section 2.4, we describe how the method can be applied to an example measurement system where temperature, conductivity and pressure data are measured, and salinity is calculated based on these measurements.
Even though most automatic quality control operations today are performed once the sensors’ data have been transferred to servers, there are a number of advantages to running such operations in situ on the sensor node, before the data are transferred. For sensor nodes relying on wireless communication, in particular acoustic communication under water, the energy cost of transferring data is high, and much of the data processing needs to be performed at the sensor node level, so that only good data, alarms or diagnoses are communicated onshore. Our algorithms are therefore tailored to work in real time and in situ, under the following restrictions:
  • Only the N last points are available for identifying anomalies, where N depends on the smart sensor node’s storing capacity and processing power.
  • Thresholds cannot be explored and tweaked until the detected anomalies correspond well with what is visually perceived as anomalies. However, if two-way communication is available for the sensors, thresholds can be adjusted after an initial period.
The R code implementing the algorithms described in this section is available as Supplementary Materials.

2.1. Detection of Anomalies in Individual Variables

Depending on the measurement technology and the error condition, different errors can produce different effects on the measurement data, such as drift, increased noise, an attenuated measurement signal, a flat line, saturation and others [17,18]. Organizations working with oceanographic measurements, such as [1,2,3], propose basic algorithms for checking whether parameters are outside a set range, for detecting single-value spikes and for detecting high rates of change.
We propose an algorithm incorporating elements from these basic tests, with logic that makes the algorithm robust against missing data and periods with spiky data. To limit ourselves to a few illustrative examples, we focus here on the detection of spikes, where one or several data values are significantly different from the adjacent values, and of data with a high rate of change, where data values change anomalously much from one timestep to the next. The algorithm is illustrated in Figure 2, sketched in code after the list below, and is structured as follows:
  1. NA test: check whether a data point $y_i$ is marked as not a number (NaN) or not available (NA).
  2. Out-of-range test: check whether $y_i$ is outside a set range, which may depend on location and depth.
  3. High-rate test: if $y_i$ and $y_{i-1}$ have passed both tests 1 and 2, and the difference between $y_i$ and $y_{i-1}$ is above a set threshold $y_{diff,max}$, then $y_i$ is marked with “high rate”.
  4. Spike test: if a minimum number $N$ of data points is available, $y_i$ is compared with the moving mean $y_{avg}$ and a multiplier $k$ of the standard deviation $\sigma_N$ of the $N$ recent data points counted as valid. In order for the algorithm to be robust and not give false detections after periods with very stable readings, and therefore low $\sigma_N$, a minimum variation should be tolerated, here denoted $\sigma_{nat}$. If one of the following conditions is met, then $y_i$ is marked with “spike”:

$$y_i < y_{avg} - \max(k \cdot \sigma_N, \sigma_{nat}) \quad \text{or} \quad y_i > y_{avg} + \max(k \cdot \sigma_N, \sigma_{nat})$$
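To make the control flow concrete, the following is a minimal R sketch of the four tests, assuming a vector y of measurements and a window filtered_y of the N most recent valid points. All function and parameter names here are ours for illustration; the authors' actual implementation is provided in the Supplementary Materials.

```r
# Minimal sketch of the per-variable tests (names and defaults are illustrative).
detect_symptom <- function(y, i, filtered_y,
                           y_range = c(-Inf, Inf),  # out-of-range limits
                           y_diff_max = 0.05,       # high-rate threshold
                           k = 4,                   # spike multiplier
                           sigma_nat = 0.001,       # minimum tolerated variation
                           N = 180) {               # running window length
  # Test 1: missing value
  if (is.na(y[i])) return("NA")
  # Test 2: out of range
  if (y[i] < y_range[1] || y[i] > y_range[2]) return("out_of_range")
  # Test 3: high rate of change (y[i-1] must itself have passed tests 1 and 2)
  if (i > 1 && !is.na(y[i - 1]) && abs(y[i] - y[i - 1]) > y_diff_max)
    return("high_rate")
  # Test 4: spike, using statistics over the N most recent valid points only
  if (length(filtered_y) >= N) {
    y_avg   <- mean(filtered_y)
    sigma_N <- sd(filtered_y)
    band    <- max(k * sigma_N, sigma_nat)
    if (y[i] < y_avg - band || y[i] > y_avg + band) return("spike")
  }
  "none"
}
```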
The algorithm relies on both static and dynamic thresholds. For the out-of-range and high-rate tests, absolute thresholds are set based on application knowledge of what is considered anomalous at the specific location, depth and time of year, taking into account the specific sensor instrumentation. As discussed further in Section 2.2, once an absolute threshold is set for the high-rate test for one variable, corresponding thresholds can be calculated for correlated variables. The dynamic thresholds for spike detection are partly based on the method described by [19]: a spike is flagged based on the running mean $y_{avg}$ and a multiple $k$ of the running standard deviation $\sigma_N$, calculated only from the $N$ last data points that have passed the tests for NA, out-of-range and high-rate. The influence of data points characterized as spikes on the moving mean and standard deviation can be adjusted using an influence parameter $w$; if $w$ is set to 0, these data points have no effect on the moving statistics, as in the sketch below.
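A possible sliding-window update with the influence parameter $w$, in the spirit of [19], could look as follows (a sketch with our own names; $w = 0$ removes flagged points from the statistics entirely):

```r
# Update the window of "valid" points after classifying y_new.
# A point flagged as a spike enters the window only with weight w.
update_filtered <- function(filtered_y, y_new, is_spike, w = 0) {
  y_eff <- if (is_spike) w * y_new + (1 - w) * tail(filtered_y, 1) else y_new
  c(filtered_y[-1], y_eff)  # drop the oldest point, append the newest
}
```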
Note that using the statistics of the $N$ previous points to evaluate whether data point $i$ is a spike is vulnerable to erroneous measurements when logging begins. The diagnosis algorithm should therefore only be started once the sensor is clean, completely deployed and stabilized in its environment.

2.2. Evaluating If an Anomaly Is Present Across Correlated Variables

When taking advantage of multivariate sensor data for diagnostics, an important part of distinguishing between sensor faults and natural events is to determine whether an anomaly is present only in the data from one sensor, or in the data from two or more sensors measuring correlated variables. One challenge in detecting whether an anomaly is present across different sensors is to set detection thresholds that are comparable across the variables. In this section, we show how the sensitivity coefficients of the correlated variables can be used to translate a threshold set for one variable into a compatible threshold for a correlated variable. We then explore the use of the co-variation between variables for indicating simultaneous events. Finally, we show how using a running window when comparing variables with slightly different time responses can make the detection of simultaneous signals across variables more robust.

2.2.1. Sensitivity Coefficients for the Correlated Variables

In order to set thresholds so that simultaneous anomalies are correctly detected across the different parameters, it is useful to know how sensitive one of the variables is to changes in the other variables.
The Guide to the Expression of Uncertainty in Measurement [20] defines the sensitivity coefficient of an output estimate $y$ with respect to an input estimate $x_i$ as $c_i \equiv \partial f / \partial x_i$, where $f$ is the functional relationship for determining the output estimate $y$. We change the notation slightly here to distinguish between different output estimates $y_j$, where $f_j$ is the functional relationship for determining the output estimate $y_j$: $c_{j,i} \equiv \partial f_j / \partial x_i$.
The use of sensitivity coefficients ensures that thresholds can be set consistently across correlated variables. If a threshold $x_{1,max}$ is set for variable $x_1$, the threshold for a correlated variable $x_2$ can be calculated as $x_{2,max} = c_{2,1} \cdot x_{1,max}$. If an anomaly has already been detected in variable $x_1$, and one wants to investigate whether there is a similar anomaly in $x_2$, a margin can be added so that the change in $x_2$ is still detected even if it is slightly attenuated by other changes in the environment. For example, this can be handled by using the lower 99% confidence interval bound for the predicted sensitivity coefficient, $c_{2,1}^{lwr}$.
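As a worked example of this translation, using sensitivity coefficients of the magnitude reported later in Section 2.4.2 (the values here illustrate the approach and are not exact):

```r
# Translate a high-rate threshold from temperature to correlated variables.
t_diff_max <- 0.05          # degC, threshold chosen for temperature
c_T        <- 0.1           # S/m per degC: sensitivity of conductivity to T
s_T        <- -0.9          # PSU per degC: sensitivity of salinity to T
c_diff_max <- abs(c_T) * t_diff_max   # 0.005 S/m
s_diff_max <- abs(s_T) * t_diff_max   # 0.045 PSU
```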

2.2.2. Running Co-Variation as an Indicator of Simultaneous Events

Both the running correlation coefficient and the running covariance between correlated variables can be used as indications of overlapping events. It is challenging to set the length of the running window so that dips in the correlation coefficient are detected in a timely manner, without being subject to too much noise. In contrast to setting a threshold for detecting a change in the running correlation, a threshold for detecting a significant covariance can be set based on a threshold chosen for one of the variables, multiplied by the derived threshold for the correlated variable, calculated using the sensitivity coefficients discussed above.
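A sketch of how a running covariance and such a covariance threshold could be computed, under the assumptions above (function and variable names are ours):

```r
# Rolling covariance between two series over a window of n points.
roll_cov <- function(x, y, n) {
  out <- rep(NA_real_, length(x))
  for (i in seq_along(x)) {
    if (i >= n) out[i] <- cov(x[(i - n + 1):i], y[(i - n + 1):i])
  }
  out
}
# Significance threshold as the product of per-variable thresholds,
# e.g. 0.05 degC for temperature and the derived 0.005 S/m for conductivity:
cov_tc_threshold <- 0.05 * 0.005
```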

2.2.3. Running Windows for Comparing Signals Across Variables

For sensor systems producing high-resolution data, it may be beneficial to add a running window over which the number of anomalous data points detected in each variable is counted. This allows for the detection of simultaneous events across variables, even if there are some time delays between the sensor responses. However, if the window lengths are too long, diagnostic information may be lost. Other key factors to consider when selecting window lengths are the sensor response times and the environmental dynamics.
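One simple way to implement this is to count flagged points over a short trailing window for each variable and require overlap; the sketch below assumes logical flag vectors and an illustrative window length:

```r
# TRUE where both variables have at least one flagged point within the
# last n samples; the first n - 1 values are NA until the window fills.
overlap_in_window <- function(flags_a, flags_b, n = 4) {
  hits_a <- stats::filter(as.numeric(flags_a), rep(1, n), sides = 1)
  hits_b <- stats::filter(as.numeric(flags_b), rep(1, n), sides = 1)
  as.vector(hits_a > 0 & hits_b > 0)
}
```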

2.3. Combining Test Results from Different Correlated Variables to Validate and Set Diagnosis

Once algorithms are set up for detecting anomalies in single-sensor data (Section 2.1) and for evaluating whether anomalies are present across sensors (Section 2.2), the detected anomalies and covariances are automatically combined through a diagnostic logic to set a specific diagnosis. Establishing the diagnostic logic is highly dependent on thorough knowledge of the sensor system. The diagnostic logic for a system with $N_x$ variables, where each can have symptoms ranging from $S_1$ to $S_{N_s}$, with diagnoses $D_1$ to $D_{N_d}$, is illustrated in Figure 3 as a flow chart, and in table form in Table 1.
The combination of anomalies from individual variables into a diagnosis will at the same time (a) assign a diagnosis that can help the user decide whether or not to include the data in their application, or help the operator determine whether any action must be taken to maintain the sensor node, and (b) distinguish between anomalies due to environmental dynamics and those due to sensor errors. If two variables $X_1$ and $X_2$ have a strong positive correlation, and a symptom $S_1$ (for example, a positive shift) is observed in $X_1$, whereas a symptom $S_2$ (for example, a negative shift) is observed in $X_2$, this would indicate an anomaly due to sensor error and be assigned a diagnosis $D_1$. On the other hand, if the same symptom $S_1$ or $S_2$ is observed in both variables, this could indicate a true change in the environment and obtain a “No detection” or similar diagnosis.
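In code, such a diagnostic logic table can be represented directly as a lookup structure. The sketch below uses the generic symbols of Table 1; the rows and the helper function are hypothetical:

```r
# Each row maps a combination of symptoms and a covariance flag to a diagnosis.
rules <- data.frame(
  X1        = c("S1", "S2"),
  X2        = c("S2", "S1"),
  cov_X1_X2 = c(1, 1),
  diagnosis = c("D1", "D1"),
  stringsAsFactors = FALSE
)
match_rule <- function(s1, s2, cov12) {
  hit <- rules$X1 == s1 & rules$X2 == s2 & rules$cov_X1_X2 == cov12
  if (any(hit)) rules$diagnosis[which(hit)[1]] else "No detection"
}
```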

2.4. Example Application: CTD Measurement Node

For illustration in this paper, we use measured temperature, conductivity and pressure data, as well as calculated salinity data, from the OBSEA observatory in Barcelona, Spain [21]. The dataset contains data from two Sea-Bird Scientific CTD sensor nodes sequentially deployed, an SBE16 and an SBE37, at a depth of approximately 20 m. When one sensor node is in operation, the other one is onshore for maintenance, and the sensor nodes are never deployed at the same time. As the main differences between the SBE16 and SBE37 sensor nodes are related to battery, data storage and communication, which do not directly influence the measurement results when integrated in a cabled observatory, we do not distinguish between the SBE16 and SBE37 in the presentation of results and in the discussion. Since OBSEA is a cabled observatory, data are streamed to the shore station in real time, where they are processed and archived. Both sensors have a sampling interval of 10 s. More details are found in [21] and on the SeaBird website [22,23]. Figure 4 shows how the sensor node is installed on the observatory, close to the sea floor. Unprocessed high-frequency data from both sensors can be found at OBSEA’s ERDDAP data service [24].
For illustration, we focus on errors that are expected to occur in conductivity sensors and in the derived salinity. However, the discussion could be extended to include errors in the temperature and pressure sensors, as well as other errors arising from environmental effects or from internal sources such as electronic drift, low battery or other internal malfunctions.
The conductivity sensor studied here has an electrode-based measurement principle. The conductivity $1/\rho$ is calculated from the measured conductance $G$ and the known cell geometry, in terms of length $l$ and cross-sectional area $A$, following the relation derived from [25] (p. 143):

$$\frac{1}{G} = \rho \frac{l}{A}$$
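Rearranging this relation gives the conductivity directly; as a one-line check (unit assumptions noted in the comment):

```r
# 1/G = rho * l / A  =>  conductivity 1/rho = G * l / A.
# G in siemens, l in metres and A in square metres gives conductivity in S/m.
conductivity_from_conductance <- function(G, l, A) G * l / A
```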
Ref. [25] (p. 145) notes that while “steady, biological growth” may result in a linear drift in the conductivity, “biological settling or an increase in biological productivity” can result in a more episodic change.
The measurement technology of the conductivity sensor studied here consists of a protected conductivity cell into which water is pumped and sampled at set intervals. The conductivity measurement therefore approximates a temporal and spatial average of the conductivity of the seawater in the proximity of the pump. Any rapid, instantaneous or local changes in conductivity are smoothed out by this measurement process.
Salinity was calculated from the measured temperature, conductivity and pressure. Refs. [25,26] (p. 145) point out that an offset between the measured temperature and the actual temperature in the conductivity cell may lead to spikes in the calculated salinities.

2.4.1. Detection of Anomalies in Individual Variables

Tests for anomalies in individual variables were carried out for both measured and derived parameters. Salinity is a derived parameter, where errors can stem both from the measured variables used to calculate it (temperature, conductivity and pressure) and from the calculation process itself, where, for example, asynchronous input data may be an issue.
The threshold for the rate of change in temperature was set to 0.05 °C and translated into corresponding thresholds for conductivity and salinity using the calculated sensitivity coefficients, as detailed in the following section. Thresholds for spike detection were calculated dynamically as described in Section 2.1, with $k = 4$ as the number of standard deviations $\sigma_N$ outside the running mean, while allowing for a minimum of natural variation.

2.4.2. Evaluating if an Anomaly Is Present Across Correlated Variables

Sensitivity coefficients: The UNESCO equations of state [27] (pp. 6–12) allow the calculation of practical salinity as a function of conductivity, temperature and pressure, and of conductivity as a function of salinity, temperature and pressure. We calculated the local sensitivity coefficients according to the UNESCO equations of state as implemented in the R package “oce” [28] (functions “swSCTp” for the salinity calculation and “swCSTp” for the conductivity calculation).
Figure 5 shows the sensitivity coefficients calculated for salinity and conductivity with respect to temperature. As a practical approach in the diagnostic algorithms, in the examples shown in this paper, we estimated the sensitivity coefficients based on linear and quadratic regression models built on a subset of recorded data from 2020, excluding obvious outliers. An increase in temperature of 1 °C from approximately 20 °C, with a constant conductivity measurement, resulted in a change in the calculated salinity of approximately −0.9 PSU (Figure 5a). An increase in temperature of 1 °C, at constant salinity, resulted in an increase in the measured conductivity of approximately 0.1 S/m (Figure 5b).
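The local coefficients can also be obtained numerically by finite differences around an operating point, for example with the UNESCO routines in “oce”. This is a sketch: the conductivity argument is given here as a ratio, and argument names may vary between oce versions.

```r
library(oce)
dT <- 0.01  # degC perturbation
# Salinity from conductivity ratio, temperature and pressure (UNESCO EOS):
S0 <- swSCTp(1.2, temperature = 20,      pressure = 2, eos = "unesco")
S1 <- swSCTp(1.2, temperature = 20 + dT, pressure = 2, eos = "unesco")
(S1 - S0) / dT  # local dS/dT at constant conductivity; cf. Figure 5a
```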
Running windows: For the 10 s resolution system studied in this paper, we chose a window length of 30 min ($N = 30 \cdot 60 / 10 = 180$) for the running statistics discussed in Section 2.1. The covariance over different window lengths is shown in Figure 6. For comparing symptoms across variables, we used a running window of 4 data points.

2.4.3. Combining Test Results for Setting a Diagnosis

Based on the insights into possible error sources and their effects on the involved sensor signals described above, we systematically mapped out combinations of anomalies with their accompanying diagnoses in Table 2. Below is an explanation of each combination, corresponding to the rows in Table 2; a code sketch of these rules follows the list.
  • Cell error: if a negative spike or negative high rate is detected only in the conductivity and salinity data and not in the temperature data, one plausible explanation is that something has entered the conductivity cell.
  • Delay between temperature and conductivity: if a spike is detected in the salinity, in a period with spikes or high rates of change in the temperature, and with a certain co-variation between temperature and conductivity data, this could indicate a delay between the temperature measurement and the actual temperature inside the conductivity cell.
  • No detection: high spikes in both temperature and conductivity, in the same direction, or any other combination of symptoms.
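A sketch of the rules above as executable logic; the symptom strings and the function are our illustration of Table 2, not the published code:

```r
diagnose_ctd <- function(sym_T, sym_C, sym_S, cov_tc_significant) {
  neg      <- c("spike_neg", "high_rate_neg")
  any_anom <- c("spike_neg", "spike_pos", "high_rate_pos", "high_rate_neg")
  # Cell error: negative anomaly in conductivity and salinity, but not in T
  if (!(sym_T %in% neg) && sym_C %in% neg &&
      sym_S %in% c("spike_neg", "high_rate", "out_of_range"))
    return("cell error")
  # Delay between T and C: anomaly in T and S with significant cov(T, C)
  if (sym_T %in% any_anom && sym_S %in% any_anom && cov_tc_significant)
    return("delay T&C")
  "no detection"
}
```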
This diagnostic logic was tailored to the specific error conditions and correlations we identified for our example system with temperature, conductivity and pressure measurements, with salinity as a derived parameter. The table could also be extended to cover error conditions for the involved pressure sensor, or other types of errors that could appear in the temperature or conductivity measurements. The diagnostic logic shown in Table 2 is not suited for diagnosing drift in sensor measurements, as there is no direct redundancy in the sensor node, only strongly correlated temperature and conductivity measurements. A drift in the conductivity measurements with no drift in the temperature measurements could be due to fouling in the conductivity cell, but it could also be explained by a real change in salinity levels. Similarly, a drift in temperature but not in conductivity measurements could be due to a drift in the temperature measurement, or to changing salinity levels combined with changing temperature, reducing the net effect on the conductivity measurements.

3. Results

The diagnostic method was applied to OBSEA CTD data for the whole year 2020, available as csv files in the Supplementary Information. The Supplementary Information also includes weekly plots for 2020. In Figure 7, we present an interesting period that illustrates the method’s performance.
Figure 7 shows the diagnostic method applied to temperature, conductivity and salinity data from 5–6 August 2020. The figure shows that the high rates of change in temperature (a) were, for much of the period, not simultaneously detected in the conductivity data (b). For periods where a high rate of change or a spike was detected in the temperature (a) as well as in the salinity (c), at the same time as the covariance between temperature and conductivity was significant, a “delay in conductivity measurement with respect to temperature measurement” was diagnosed. Around 6 August, 01:30, a positive and then a negative spike in the salinity data (e) were diagnosed as caused by such a delay. On 6 August, at approximately 04:00–07:00, a drop in conductivity (b) was not accompanied by a drop in temperature (a), and the diagnosis on the salinity plot (e) was therefore a “conductivity cell error”. The same diagnosis was given to the smaller spikes detected in salinity between 5 August 13:00 and 17:00.
The weekly plots for 2020 included as Supplementary Materials show that instances of spiky salinity data were most apparent in May, June, July and August, whereas conductivity cell errors were present sporadically throughout the year but strongly dominant in November–December. Figure 8 shows the distribution of diagnoses over the year. For some periods with high variation in both temperature and conductivity, for example around 20 June, some of the spike events in salinity were misdiagnosed as conductivity cell errors, whereas a manual evaluation would indicate spikes from delays between T and C.

4. Discussion

In this section, we first discuss the absence of gold-standard reference datasets labeled with diagnoses. We then investigate the validity of the diagnosis pointing to delays between temperature and conductivity. We then discuss how the modular structure of the diagnostics allows us to incorporate more advanced statistical or machine learning techniques where applicable. We finish by discussing test thresholds, before proposing how to improve the diagnostics by including other correlated sensor data.

4.1. Validation

Ideally, any new method for automatic quality control of measurement data should be validated against a reference dataset with known labels. Unfortunately, to the authors’ knowledge, there are no reference datasets available for multivariate oceanographic measurements, labeled with diagnoses. Datasets published in databases such as COPERNICUS and ARGO [2,3] are, as discussed in the introduction, only labeled as “Good”, “Bad”, or variants such as “Pass”, “Fail”, or “Probably Good/Bad” or “Suspicious”. Possible diagnoses can sometimes be found indirectly in some reports or journal papers, where authors may discuss reasons for observed anomalies qualitatively in the text or in figure captions. Such qualitative mentions and discussions in text format have been used in our work for identifying different errors that may occur in the temperature, conductivity and salinity measurement system, as discussed in Materials and Methods.
Another approach for validating our proposed method could be to compare its performance with state-of-the-art machine learning algorithms. However, as oceanographic datasets labeled with diagnosis are not available, it is not possible to train machine learning algorithms to perform the diagnostic tests. Another approach could be to compare our proposed method with unsupervised machine learning algorithms, but the process of setting up such algorithms would need to be heavily informed by the diagnostic logic we developed here.

4.2. Investigating if Delays Between Temperature and Conductivity Measurements May Explain the Spikes in Calculated Salinity

Figure 7 shows that both positive and negative spikes are observed in the calculated salinity in periods with rapid variations in temperature. As recommended by the sensor manufacturer [29], the time constants of the temperature and conductivity sensors should be matched in order to reduce spikes in the calculated salinities due to asynchronous temperature and conductivity measurements.
Figure 9 shows that there is a weak shift towards positive lags in the cross-correlation plot of differences in conductivity and temperature from one timestep to the next. This suggests that the correlation between temperature and conductivity would still be strong if all conductivity data were shifted one, two, three or even up to six steps backwards in time, but that the correlation would rapidly decrease if the conductivity data were shifted forward in time. This observation supports the diagnosis “delay between temperature and conductivity measurements”, which, in our example of a diagnostic logic, was found when a high rate of change was detected in temperature data but not in the conductivity data and the covariance between the data was above a certain threshold.
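For reference, a cross-correlation of this kind can be computed in R as follows; the data frame and column names are assumptions, with a tiny fabricated stand-in so the snippet runs on its own (cf. Figure 9):

```r
# ctd is assumed to hold temperature and conductivity sampled at a fixed
# interval; here we fabricate a small stand-in with conductivity lagging T.
set.seed(1)
ctd <- data.frame(temperature = cumsum(rnorm(500)))
ctd$conductivity <- 0.1 * c(rep(NA, 3), head(ctd$temperature, -3))
# Cross-correlation of the first differences of the two series:
dT <- diff(ctd$temperature)
dC <- diff(ctd$conductivity)
ccf(dC, dT, lag.max = 10, na.action = na.pass)
```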
One way to validate this diagnosis could be to adjust the time constants in the calculation of the salinity according to the sensor manufacturer’s guidance [29] and observe whether this reduces the spikes in salinity in periods with strong temperature variations. If the salinity spikes persisted, another possibility could be to add a redundant temperature sensor and verify whether it detects the rapid changes to the same degree as the temperature sensor studied here.
Note that the diagnostic method proposed here should not be used as a substitute for thoroughly setting up the system in the first place.

4.3. Enhancing Data Quality and Operational Efficiency of Autonomous Sensor Nodes

Setting up diagnostic tests intended for autonomous sensor nodes may be time-consuming at first. However, a well-designed diagnosis system will reduce the time required for manual quality control of the data in delayed mode. Another advantage may be that automatic diagnostics can be used for setting up condition-based maintenance programs for sensors.
The diagnostic process can be performed after the data are transferred to shore, or in situ on a dedicated datalogger or other component with a central processing unit, if equipped with sufficient processing power, battery capacity and memory. For in situ operation, a two-way communication capability would be useful for fine-tuning the thresholds and window lengths after an initial period of operation. This is particularly relevant when the sensor node is positioned in a location that is remote or challenging to access. In addition to alarms related to specific diagnoses, it would be useful to set up alarms for specific symptoms in individual variables, for example spikes persisting over a longer time period, that would need the attention of a domain expert to evaluate whether sensor maintenance is required.

4.4. Modular Structure Allows Incorporation of Better Performing Tests When Available

The method we propose is modular, so the straightforward methods described in Section 2.1 and Section 2.2 can easily be replaced by higher-performing algorithms when available.
More advanced statistical procedures or machine learning algorithms can be used for the detection of anomalies in individual variables, as long as they distinguish between different types of anomalies. The algorithms proposed in this paper rely on both absolute and dynamic thresholds, and the need to set these can be perceived as a drawback. Note, however, that machine learning approaches also rely on thresholds, albeit more abstract ones, such as a threshold on an anomaly score, which can be static or adaptive, for instance set at a certain percentile of the anomaly scores [30].
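For instance, an adaptive anomaly-score threshold at a fixed percentile, as mentioned in [30], amounts to the following (the scores here are hypothetical stand-ins):

```r
set.seed(42)
anomaly_scores <- abs(rnorm(1000))           # stand-in anomaly scores
threshold <- quantile(anomaly_scores, 0.99)  # adaptive 99th-percentile cut
flagged   <- anomaly_scores > threshold
```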

4.5. Relevance for Machine Learning

As discussed in the introduction, there is a lack of labeled diagnostic data on oceanographic measurement systems. This is an obstacle for supervised machine learning, as such methods depend on large quantities of representative, labeled data. The diagnostic labels produced by our physics/knowledge-driven approach can fill this gap. In [31], the method for diagnosing measurement data proposed here was used to generate labeled data for use with machine learning algorithms.

4.6. Thresholds and Other Test Parameters

As the individual tests described in Section 2.1 build on established quality control tests for range, high rate and spikes, some thresholds and lengths of running windows for statistics are required inputs. These must be set based on insights into the sensor technology and environmental dynamics at the specific location.
However, the method for translating thresholds between variables that we propose in Section 2.2 makes it necessary to set thresholds for only one of the variables. Translating thresholds between variables also ensures that results from individual tests can be compared across variables and used for sensor system diagnostics, as described in Section 2.3.
The only thresholds required by our proposed method are therefore the absolute threshold for the high-rate test for one variable, and the absolute minimum natural variation accepted by the spike test, which otherwise uses a dynamic threshold based on the standard deviation over a running window. The allowed minimum natural variation makes the spike test more robust against false alarms when a period with much natural variation follows a period with little natural variation, which would otherwise result in a too low standard-deviation-based dynamic threshold.
Both of these absolute thresholds can be set based on the inherent response time of the sensor, the expected dynamics at the specific location and the sampling interval.

4.7. Application to Other Systems

The proposed method is modular and scalable, and applicable to measurement systems with correlated sensor data where physical or chemical knowledge of the system is expected to add diagnostic power. The method is not relevant if very limited knowledge is available regarding expected ranges or variability, or if the error sources are unknown.
Including more correlated sensor data could improve the performance and robustness of the diagnostic system. To illustrate the proposed method, we focused in this paper on a limited system consisting of temperature, conductivity and calculated salinity. If more correlated sensor data are available from the same location, the diagnosis system may be extended. Some examples include the following:
  • If a measurement system consists of multiple sensors measuring the same variable at slightly different locations, other measurements of the same variable at a close enough location can be exploited for direct comparison.
  • Any direct measurements of the speed of sound (SoS) can be compared with the SoS calculated from temperature and conductivity measurements, using the equations detailed in [27] (pp. 46–50).
  • Current direction measurements can be compared with changes in temperature and conductivity.
  • If the pressure changes more than what is expected from tidal variations or from changes in density of the water column with changing temperatures, this could indicate that the sensor node has moved in the water. In a robust diagnostic system, a test for detecting changes in pressure should anyway be included at a higher level/earlier in the quality control chain.
It is also possible to use available data outside the sensor node in the diagnostic process. For the sensor node studied in this paper, it could be useful to add meteorological data, particularly on rainfall or any measurements of activity in nearby rivers.
An example application in a more complex sensor system could be a sensor array for biomarker monitoring, with measurements of chlorophyll-a, eDNA, HAB biomarkers and dissolved oxygen (DO). Here, thresholds could be set, for example, for chlorophyll-a as the primary biomarker, and thresholds for the other variables could be calculated either from sensitivity coefficients derived from published studies, such as [32], or from periods with known well-functioning sensors, taking into account nonlinearities in the relationships. Similarly, the number of timesteps for statistics in the individual tests, as well as the length of the running window for determining whether symptoms are present across variables, can be set based on knowledge of the dynamics and correlations in the system, or derived from periods with well-functioning sensors.
Individual tests can then be run on each of the variables, detecting standard symptoms such as out-of-range values, a flat line, a high rate, or negative or positive spikes; tests can also be tailored to detect typical symptoms of a specific variable, for example a “consistent negative slope” for detecting the effects of biofouling on sensors.
When setting up the diagnostic logic for such a system, it is possible not only to diagnose sensor or system malfunctioning, but also to automatically indicate specific environmental events of interest, such as algal blooms, species diversity changes or hypoxia. Examples of symptom combinations giving a specific diagnosis are as follows:
  • Extreme algal bloom or hardware limitations: a flat maximum value for extended periods for chlorophyll-a and HAB biomarkers, but no saturation symptoms for DO and eDNA.
  • Synchronization or sensor error: a high rate of change or a spike that is present for only one or two of the variables chlorophyll-a, HAB biomarkers or DO, but not all.
  • DO or Chlorophyll sensor error: a lack of strong positive correlation between DO levels and chlorophyll-a under photosynthetic conditions (during daytime).
For large, distributed sensor networks measuring the same variable at different locations, specialized approaches using sensor fusion in networks may be more applicable, as specialized expert knowledge regarding correlation between different variables is not required.

5. Conclusions

In this paper, we demonstrated how the symptoms revealed by established quality control tests for individual variables can be combined to produce a diagnosis of the measurement system. We introduced a method where sensitivity coefficients are used for translating thresholds between variables, reducing the required input when setting up a system and ensuring that comparable thresholds are set across variables. Further, we showed in practice how technical insights into sensor technology and environmental dynamics can be coded into a diagnostic logic table. The diagnosis produced using our method indicates what kind of sensor error is causing the observed symptom, in contrast to the simple Good/Bad flags predominant in the domain today. We believe this can be valuable information for both system operators and data users.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jmse1010000/s1. R code for multivariate, automatic diagnostics based on insights in sensor technology; data and results for 2020; zip file with weekly plots for 2020.

Author Contributions

Conceptualization, C.S. and K.-E.F.; methodology, A.M.S.; software, A.M.S.; formal analysis, A.M.S.; investigation, A.M.S.; data curation, E.M.; writing—original draft preparation, A.M.S.; writing—review and editing, A.M.S., C.S., R.N.B., K.-E.F. and E.M.; visualization, A.M.S.; supervision, C.S., R.N.B. and K.-E.F.; project administration, C.S. and K.-E.F.; funding acquisition, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is part of the SFI Smart Ocean (a Centre for Research-based Innovation). The Centre is funded by the partners in the Centre and the Research Council of Norway (project no. 309612).

Data Availability Statement

The original data presented in the study are openly available in the OBSEA data repository, at https://data.obsea.es/erddap/tabledap/OBSEA_CTD_full.html (accessed on 10 December 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. IOOS and QARTOD. Manual for Real-Time Quality Control of In-Situ Temperature and Salinity Data. 2020. Available online: https://ioos.noaa.gov/ioos-in-action/temperature-salinity/ (accessed on 2 April 2024).
  2. Wong, A.; Keeley, R.; Carval, T. Argo Quality Control Manual for CTD and Trajectory Data; Report; Argo Data Management Team: Brest, France, 2024. [Google Scholar] [CrossRef]
  3. EuroGOOS DATA-MEQ Working Group. Recommendations for In-Situ Data Near Real Time Quality Control. Report, 2010. Available online: https://repository.oceanbestpractices.org/handle/11329/656 (accessed on 24 October 2024).
  4. Nguyen, N.T.; Lima, K.; Skålvik, A.M.; Heldal, R.; Knauss, E.; Oyetoyan, T.D.; Pelliccione, P.; Sætre, C. Synthesized Data Quality Requirements and Roadmap for Improving Reusability of In-Situ Marine Data. In Proceedings of the 2023 IEEE 31st International Requirements Engineering Conference (RE), Hannover, Germany, 4–8 September 2023; pp. 65–76. [Google Scholar] [CrossRef]
  5. IOOS QC: QARTOD and Other Quality Control Tests Implemented in Python. 2022. Available online: https://pypi.org/project/ioos-qc/ (accessed on 24 October 2024).
  6. Python Functions defined for computational ION. 2017. Available online: https://github.com/ooici/ion-functions/tree/master/ion_functions/qc (accessed on 24 October 2024).
  7. Lookup Tables for the Automated OOI Quality Control Algorithms. 2024. Available online: https://github.com/oceanobservatories/qc-lookup (accessed on 24 October 2024).
  8. Barbariol, T.; Chiara, F.D.; Marcato, D.; Susto, G.A. A Review of Tree-Based Approaches for Anomaly Detection. In Control Charts and Machine Learning for Anomaly Detection in Manufacturing; Tran, K.P., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 149–185. [Google Scholar] [CrossRef]
  9. Teh, H.Y.; Wang, K.I.K.; Kempa-Liehr, A.W. Expect the Unexpected: Unsupervised Feature Selection for Automated Sensor Anomaly Detection. IEEE Sens. J. 2021, 21, 18033–18046. [Google Scholar] [CrossRef]
  10. Han, X.; Jiang, J.; Xu, A.; Bari, A.; Pei, C.; Sun, Y. Sensor Drift Detection Based on Discrete Wavelet Transform and Grey Models. IEEE Access 2020, 8, 204389–204399. [Google Scholar] [CrossRef]
  11. Zhu, M.; Li, J.; Wang, W.; Chen, D. Self-Detection and Self-Diagnosis Methods for Sensors in Intelligent Integrated Sensing System. IEEE Sens. J. 2021, 21, 19247–19254. [Google Scholar] [CrossRef]
  12. Yan, X.; Yan, W.J.; Xu, Y.; Yuen, K.V. Machinery multi-sensor fault diagnosis based on adaptive multivariate feature mode decomposition and multi-attention fusion residual convolutional neural network. Mech. Syst. Signal Process. 2023, 202, 110664. [Google Scholar] [CrossRef]
  13. Belay, M.A.; Blakseth, S.S.; Rasheed, A.; Salvo Rossi, P. Unsupervised Anomaly Detection for IoT-Based Multivariate Time Series: Existing Solutions, Performance Analysis and Future Directions. Sensors 2023, 23, 2844. [Google Scholar] [CrossRef] [PubMed]
  14. Angeli, C. Diagnostic Expert Systems: From Expert’s Knowledge to Real-Time Systems. In Advanced Knowledge Based Systems: Model, Applications & Research; TMRF e-Book; TMRF: Burntwood, UK, 2010; Volume 1, pp. 50–73. [Google Scholar]
  15. Gao, Z.; Cecati, C.; Ding, S.X. A Survey of Fault Diagnosis and Fault-Tolerant Techniques—Part II: Fault Diagnosis With Knowledge-Based and Hybrid/Active Approaches. IEEE Trans. Ind. Electron. 2015, 62, 3768–3774. [Google Scholar] [CrossRef]
  16. Young, A.; West, G.; Brown, B.; Stephen, B.; Duncan, A.; Michie, C.; McArthur, S.D. Parameterisation of domain knowledge for rapid and iterative prototyping of knowledge-based systems. Expert Syst. Appl. 2022, 208, 118169. [Google Scholar] [CrossRef]
  17. Skålvik, A.M.; Saetre, C.; Frøysa, K.E.; Bjørk, R.N.; Tengberg, A. Challenges, limitations, and measurement strategies to ensure data quality in deep-sea sensors. Front. Mar. Sci. 2023, 10, 1152236. [Google Scholar] [CrossRef]
  18. Jesus, G.; Casimiro, A.; Oliveira, A. A Survey on Data Quality for Dependable Monitoring in Wireless Sensor Networks. Sensors 2017, 17, 2010. [Google Scholar] [CrossRef] [PubMed]
  19. Brakel, J. Robust Peak Detection Algorithm Using Z-Scores. 2020. Available online: https://stackoverflow.com/questions/22583391/peak-signal-detection-in-realtime-timeseries-data/22640362 (accessed on 15 January 2024).
  20. BIPM; IEC; IFCC; ILAC; ISO; IUPAC; IUPAP; OIML. Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement. Joint Committee for Guides in Metrology, JCGM 100:2008. Available online: https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6 (accessed on 12 November 2023).
  21. Del-Rio, J.; Nogueras, M.; Toma, D.M.; Martínez, E.; Artero-Delgado, C.; Bghiel, I.; Martinez, M.; Cadena, J.; Garcia-Benadi, A.; Sarria, D.; et al. Obsea: A Decadal Balance for a Cabled Observatory Deployment. IEEE Access 2020, 8, 33163–33177. [Google Scholar] [CrossRef]
  22. Sea-Bird Scientific. SBE 16plus V2 SeaCAT. Available online: https://www.seabird.com/sbe-16plus-v2-seacat/product?id=60761421598 (accessed on 9 December 2024).
  23. Sea-Bird Scientific. SBE 37-SM, SMP, SMP-ODO MicroCAT. Available online: https://www.seabird.com/moored/sbe-37-sm-smp-smp-odo-microcat/family?productCategoryId=54627473786 (accessed on 9 December 2024).
  24. Martínez, E. OBSEA ERDDAP Data Service. 2024. Available online: https://data.obsea.es/erddap (accessed on 10 December 2023).
  25. Venkatesan, R.; Tandon, A.; D’Asaro, E.; Atmanand, M. Observing the Oceans in Real Time; Springer: Cham, Switzerland, 2018; pp. 144–145. [Google Scholar]
  26. Jansen, P.; Weeding, B.; Shadwick, E.H.; Trull, T.W. IMOS—Southern Ocean Time Series (SOTS)—Quality Assessment and Control Report Temperature Records; Technical Report; Commonwealth Scientific and Industrial Research Organisation (CSIRO): Tasmania, Australia, 2021. [Google Scholar] [CrossRef]
  27. Fofonoff, N.; Millard, R., Jr. Algorithms for computation of fundamental properties of seawater. Unesco Tech. Pap. Mar. Sci. 1983, 44. [Google Scholar] [CrossRef]
  28. Kelley, D.E. Oceanographic Analysis with R; Springer: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  29. Sea-Bird Scientific. Sea-Bird Scientific University Module 12: Advanced Data Processing: Dynamic Corrections for CTDs. Available online: https://www.seabird.com/training-materials-download (accessed on 10 October 2024).
  30. Komadina, A.; Martinić, M.; Groš, S.; Mihajlović, Ž. Comparing Threshold Selection Methods for Network Anomaly Detection. IEEE Access 2024, 12, 124943–124973. [Google Scholar] [CrossRef]
  31. Nguyen, N.T.; Skalvik, A.M.; Sylligardos, E.; Heldal, R.; Pelliccione, P.; Boniol, P.; Palpanas, T.; Alvsvag, S. Interpretable Multivariate Anomaly Detector Selection for Automatic Marine Data Quality Control. In Proceedings of the 41st IEEE International Conference on Data Engineering—Industrial Track (ICDEIndustrial2025), Hong Kong, 19–23 May 2025. Submitted. [Google Scholar]
  32. Zang, C.; Huang, S.; Wu, M.; Du, S.; Scholz, M.; Gao, F.; Lin, C.; Guo, Y.; Dong, Y. Comparison of Relationships Between pH, Dissolved Oxygen and Chlorophyll a for Aquaculture and Non-aquaculture Waters. Water Air Soil Pollut. 2011, 219, 157–174. [Google Scholar] [CrossRef]
Figure 1. Schematic overview of the proposed method.
Figure 2. Algorithm for detecting symptoms in individual variables. $y_i$ refers to the measurement of variable $y$ at timestep $i$. $N$ refers to the length of the running window. $k$ is a multiplier used to set dynamic thresholds based on the standard deviation. $filtered_y$ refers to the $N$ recent data points evaluated as valid and used for calculating statistics such as the mean $y_{avg}$ and the standard deviation $\sigma_N$. $y_{diff,max}$ refers to the absolute threshold set for detecting high rates of change. $\sigma_{nat}$ is the minimum standard deviation that should be accepted, due to natural variations in the environment in which the sensor is located.
Figure 3. Schematic overview of how different symptoms detected in individual variables can be combined with tests for covariance between the variables, into different diagnoses, through a diagnostic logic module.
Figure 4. Photo of the SeaBird SBE37SMP sensor node upon installation at the OBSEA observatory.
Figure 5. Absolute sensitivity coefficients for (a) salinity and (b) conductivity with respect to temperature. Calculated for every 1000th data point (approximately every 3 h) in the OBSEA CTD data for year 2020. The dashed lines indicate the 99 percent confidence levels of the predicted sensitivity coefficients using a quadratic (a) and linear (b) model built on the calculated sensitivity coefficients, excluding extreme outliers.
Figure 6. Rolling co-variation between temperature and conductivity over 15 min, 1 h and 2 h, for 5–6 August 2020.
Figure 7. Diagnostic plot for OBSEA CTD temperature, conductivity and salinity data, 5–6 August 2020. (a–c) Results of the rate-of-change and spike tests for each sensor individually. (d) The running covariance between temperature and conductivity. (e) Salinity data, diagnosed based on the combination of (a–d) and the logic described in Table 2. Threshold for a high rate of change in temperature: 0.05 °C. Thresholds for a high rate of change in conductivity and salinity: calculated from the temperature threshold and the sensitivity coefficients. Running window for statistics: 30 min. Number of standard deviations for detecting spikes: 4. The spike detection thresholds are indicated as blue-gray dotted lines in (a–c).
Figure 8. Distribution of diagnoses per month for the OBSEA CTD temperature, conductivity and salinity data for the year 2020.
Figure 9. Cross-correlation calculated for the differences in temperature and conductivity for each timestep in the year 2020. A weak shift towards positive lags is observed, suggesting a delay in conductivity compared with temperature measurement data.
Table 1. Generic diagnostic logic table illustrating symptoms ($S_j$) for different variables $X_i$, covariances between different variables, and diagnoses ($D_k$). The label “0” indicates no symptoms detected. For the covariance columns, “1” or “0” indicates whether a significant covariance is detected or not, and “-” indicates that it is not relevant for the specific diagnosis.
Variable                            | X_1 | X_2 | … | X_Nx | cov(X_1,X_2) | cov(…,…) | cov(X_i,X_Nx)
Diagnosis 1 (D_1)                   | S_1 | S_2 | … | -    | 1            | …        | 0
                                    | S_2 | S_1 | … | -    | 1            | …        | 0
Diagnosis 2 (D_2)                   | -   | S_3 | … |      | 0            | -        | 1
                                    | S_1 | -   | … | S_3  | 0            | …        | 1
Diagnosis “No Detection” (D_NoDet)  | S_1 | S_1 | … |      | 1            | …        |
                                    | S_2 | S_2 | … |      | 1            | …        |
Diagnosis N_d (D_Nd)                | …   | …   | … | …    | …            | …        | …
Table 2. Combination of detected event anomalies and corresponding diagnoses. “-” indicates that a test result is not relevant and not considered. Temperature is abbreviated as T and conductivity as C. The diagnostic logic behind the table is explained in detail in Section 2.4. Strikethrough indicates that the symptom is not present. The symbols used for marking the different diagnoses in Figure 7 and in the Supplementary Materials are shown to the right of each diagnosis, for reference.
Variable                                              | Temperature                                             | Conductivity               | Salinity                             | Cov(T,C)
Diagnosis: Cell error (symbol Jmse 12 02367 i001)     | Spike_Neg and High_Rate_Neg (strikethrough: not present) | Spike_Neg or High_Rate_Neg | Spike_Neg or High_Rate or OutOfRange | -
Diagnosis: Delay T&C (symbol Jmse 12 02367 i002)      | Spike_Neg or Spike_Pos or HighRate_Pos or HighRate_Neg  | -                          | Spike_Neg or Spike_Pos or HighRate_Pos or HighRate_Neg | 1
Diagnosis: No detection (symbol Jmse 12 02367 i003)   | Spike_Neg                                               | Spike_Neg                  | -                                    | -
                                                      | Spike_Pos                                               | Spike_Pos                  | -                                    | -
                                                      | Other                                                   | Other                      | Other                                | Other