Article

Convolutional Neural Network for Measurement of Suspended Solids and Turbidity

by Daniela Lopez-Betancur 1,2,†, Ivan Moreno 2,*,†, Carlos Guerrero-Mendez 2,*,†, Tonatiuh Saucedo-Anaya 2, Efrén González 3, Carlos Bautista-Capetillo 2 and Julián González-Trinidad 2

1 Dirección de Posgrados e Investigación, Universidad Politécnica de Aguascalientes, Calle Paseo San Gerardo #201, Fracc. San Gerardo, Aguascalientes 20342, Mexico
2 Unidad Académica de Ciencia y Tecnología de la Luz y la Materia, Universidad Autónoma de Zacatecas, Campus Siglo XXI, Zacatecas 98160, Mexico
3 Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Campus Siglo XXI, Zacatecas 98160, Mexico
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(12), 6079; https://doi.org/10.3390/app12126079
Submission received: 23 April 2022 / Revised: 8 June 2022 / Accepted: 13 June 2022 / Published: 15 June 2022
(This article belongs to the Special Issue Soft Sensors Based on Deep Neural Networks)

Featured Application

This research covers the development of a soft sensor model for dynamic processes based on convolutional neural networks for the measurement of suspended solids and turbidity.

Abstract

The great potential of convolutional neural networks (CNNs) provides novel and alternative ways to monitor important parameters with high accuracy. In this study, we developed a soft sensor model for dynamic processes based on a CNN that measures suspended solids and turbidity from a single image of the liquid sample, using a commercial smartphone camera (Android or iOS) and light-emitting diode (LED) illumination. For this, an image dataset of liquid samples illuminated with white, red, green, and blue LED light was acquired and used to train a CNN and fit a multiple linear regression (MLR); by comparing the different lighting colors, we evaluated which color gives the most accurate information about the concentration of suspended particles in the sample. We implemented a pre-trained AlexNet model and an MLR to estimate total suspended solids (TSS) and turbidity values in liquid samples containing suspended particles. The proposed technique obtained a high goodness of fit (R² = 0.99). The best performance was achieved using white light, with an accuracy of 98.24% and 97.20% for TSS and turbidity, respectively, over operational ranges of 0–800 mg L⁻¹ and 0–306 NTU. The system was designed for aquaculture environments and tested with both commercial fish feed and paprika. This motivates further research on other aquatic environments such as river water, domestic and industrial wastewater, and potable water.

1. Introduction

In aquatic environments such as intensive aquaculture systems, organic and inorganic matter, feed residues, and aquatic microorganisms accumulate [1]. This accumulation is quantified as total suspended solids (mg L⁻¹), defined as the mass of particles present in a water column. Suspended particles scatter and absorb light, causing turbidity, i.e., a loss of transparency of the water [2]. Total suspended solids (TSS) is an important parameter in determining water quality. In aquaculture, a high level of TSS reduces the vision of fish and their ability to catch their feed. A TSS value can be mathematically correlated with turbidity [3], which expresses the degree of loss of water transparency due to suspended solids. Therefore, the greater the number of suspended solids in the liquid, the greater the degree of turbidity [4].
For turbidity measurement, an established protocol is Method 180.1 of the U.S. EPA. Its measurement range is 0–40 NTU (nephelometric turbidity units); to reach higher values, the samples must be diluted in water and the measurement rescaled. Beyond this protocol, there are several turbidimeter arrangements that use different light sources and detectors. However, none of them can be used as a low-cost alternative to monitor water quality rapidly and noninvasively over a wide dynamic range. For example, some low-cost turbidimeters operate over short ranges: 2.2–54.2 [5], 0–10 [6], 0–12 [7], and 0–100 NTU [8,9,10]. Furthermore, these methods do not allow the estimation of water turbidity from a single data record. There are advanced turbidimeters with a wide operating range, but they are expensive and require two data records for turbidity estimation [6,11].
By contrast, recent advances in computer vision, software development, and related technologies are being used to overcome some measurement limitations of conventional turbidimeters (narrow operating ranges, high cost, etc.). Recent turbidimeter developments focus on image analysis. For example, Gimenez et al. developed a turbidimeter based on image degradation analysis; however, this method needs reference samples, and the samples require additional treatment (they must be sonicated) [12]. Gu et al. applied a random forest ensemble to space remote sensing data to measure river turbidity from hyperspectral images, obtaining a precision of 67% [13]. Mullins et al. carried out turbidity measurements in the range 10–250 NTU using image processing methods, processing and analyzing the images one by one during the measurement and reaching a precision of 90% under controlled environmental conditions [14]. In addition, with the advent of smartphones, new turbidimeters have been reported. Bayram et al. performed turbidity measurements with a smartphone; however, the precision between samples measured with a calibrated Hach colorimeter and their smartphone colorimeter was 48% [15]. Leeuw and Boss also measured turbidity with a smartphone by remotely sensing water reflectance under environmental conditions, reaching a precision of 74% [16].
In this study, we measure turbidity and suspended solids using a CNN, providing a low-cost alternative for monitoring water quality rapidly and noninvasively over a wide dynamic range. Our research proposes to use convolutional neural networks (CNNs), an emerging technology, together with open-source tools to develop a technique capable of measuring TSS and turbidity values simultaneously from a single image of the liquid sample. CNNs are inspired by the visual processing of the human brain and are able to analyze raw data without human intervention. They have been applied successfully in areas such as image classification [17,18], image segmentation [19], roughness measurement [20,21], and soft sensors [22,23,24]. Additionally, CNNs have gained popularity due to their ability to approximate any continuous function [25,26,27], so they could achieve better precision in the turbidity task than other machine learning methods such as extra trees, multilayer perceptrons, naive Bayes, random forests, and support vector machines [13,28].
This research describes a soft sensor [22] model for dynamic processes based on a convolutional neural network, whose highlight is measuring TSS and turbidity values from a single image. This image is captured with a conventional smartphone (Android or iOS). The measurements, acquired noninvasively, have high precision and a wide dynamic range for aquatic environments.
The rest of this paper is organized as follows. Section 2 describes the CNN architecture and the experiments; Section 3 shows the results and holds the discussion. Finally, Section 4 reports some conclusions and discusses future work.

2. Materials and Methods

2.1. Proposed Classification CNN

This paper builds on the principles of artificial intelligence, specifically on convolutional neural networks (CNNs): operating blocks and several layers of neurons that work together to mimic the functioning of the visual cortex of mammals. In image classification tasks, a CNN first performs a feature extraction step on the input image; the extracted features are then passed to a neural network, and the output is the probability that the input image belongs to each category [29].
In practice, few people train a CNN from scratch, because it is rare to have a sufficiently large dataset; it is more efficient to take advantage of transfer learning, in which a CNN model previously trained for one task is reused and retrained to learn a new task without the need for a very large database (e.g., the 1000 classes of the ImageNet dataset). Figure 1 shows the transfer learning process carried out on our CNN model, whose classification task is changed.
A simple and elementary CNN model with powerful modeling capability is AlexNet [30]. Its architecture is easy to train and optimize, and it has a proven ability to classify and recognize simple images with low visual complexity, such as low-resolution images [31]. For these reasons (simplicity and popularity), only AlexNet was trained. In this research, an AlexNet model is trained using an image dataset of liquid samples with different amounts of suspended solids. The RGB input image size for AlexNet is 224 × 224 pixels [32]. The AlexNet model used is shown in Figure 2.
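As an illustration of this transfer-learning step, the following minimal sketch (our own illustrative code, not the authors' script) loads an ImageNet-pretrained AlexNet from Torchvision and resizes its last fully connected layer (FC8) to the number of suspended-solids classes; the class count of 11 is an assumption corresponding to the paprika classes in Table 1.

```python
# Minimal transfer-learning sketch (assumed setup, not the authors' exact code).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 11  # e.g., classes 0, 1/2, 3/4, 1, ..., 8 of the paprika dataset

model = models.alexnet(pretrained=True)              # reuse ImageNet features
model.classifier[6] = nn.Linear(4096, num_classes)   # replace the 1000-class FC8 layer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```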

2.2. Proposed Estimator Based on Multiple Linear Regression (MLR)

We thus have a model capable of classifying the images in the dataset. The output vector of the trained model can be treated as a decoded version of the input image, since the model extracts implicit information from the liquid sample. Although the CNN model classifies the trained liquid samples, it cannot classify images with intermediate values of suspended solids. Nevertheless, if the feature vectors (CNN output vectors) are used to fit an MLR, we can predict the TSS and turbidity values of any liquid sample. The main requirement is to train the CNN with a set of classes that spans the target dynamic range.
A convolutional neural network has two stages: the first is feature extraction, and the second is classification, which is built from layers of neurons. A single neuron uses its inputs (features) to compute a response: a logit, the value obtained by multiplying the inputs by weights and adding a bias term (a real number from −∞ to +∞), which is then passed through an activation function to produce an output. Therefore, a logit vector can be assembled from the real numbers computed by a group of neurons. In CNN models, to obtain the class probabilities, the logit values of the last layer are fed into a SoftMax function, which generates a normalized vector of probabilities (0 to 1) with one value per predicted class.
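The following tiny sketch (illustrative values, not from the paper) shows this conversion of a logit vector into a normalized probability vector with SoftMax.

```python
# Illustrative sketch: converting logits into class probabilities with SoftMax.
import torch

logits = torch.tensor([2.1, -0.3, 0.8, 4.0])   # hypothetical outputs for 4 classes
probs = torch.softmax(logits, dim=0)           # normalized probabilities in (0, 1)
print(probs, probs.sum())                      # the probabilities sum to 1
```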
The MLR is then fitted with the logits vector obtained from the last layer, and a linear equation is created to approximate new and unknown values. Figure 3 shows how the MLR is fitted using the logit vector of the training dataset, and the general operation of the proposed method. When the CNN input is a new image (unknown sample), a new logit vector is created, and used in the MLR to obtain the estimated value. Hence, in this research, class probabilities are used to predict the TSS and turbidity values in a liquid sample.
Furthermore, the CNN and the MLR were trained separately. The CNN was trained using the cross-entropy loss, so a straightforward classification task was learned. The trained CNN was then used to obtain the class probabilities for each input image, and the fitted MLR was used to obtain the approximated value according to the reference values used during fitting. In other words, the MLR predicts the value of one variable based on the value of another. We thereby obtain the TSS and turbidity values from the features (class probabilities) found by the CNN.
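A minimal sketch of this estimation stage is shown below, under our assumptions: the CNN output vector (logits or SoftMax probabilities) of each training image is used as the feature vector of a multiple linear regression that maps it to the known TSS (or turbidity) value of the sample. The data here are random placeholders, not values from the paper.

```python
# Sketch (assumed setup): fit an MLR on CNN output vectors to estimate TSS.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X_train = rng.random((500, 11))     # placeholder: 500 images, 11-class output vectors
y_train = rng.random(500) * 0.8     # placeholder: reference TSS labels, 0-0.8 g L^-1

mlr = LinearRegression().fit(X_train, y_train)

# For an unknown sample: take the CNN output vector of its image and
# evaluate the fitted linear equation.
x_new = rng.random((1, 11))
tss_estimate = mlr.predict(x_new)[0]
print(f"estimated TSS: {tss_estimate:.3f} g/L")
```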

2.3. CNN Validation as Classifier

One of the main advantages of using a CNN is that it automatically learns the most relevant features of an input image without human supervision. By viewing the convolutional feature maps of an image, we can see which regions of the image the CNN attends to when performing the classification. In this research, several sets of feature maps were analyzed to confirm that the CNN detects image changes caused by the particles suspended in the liquid samples. This would confirm that the CNN is responding only to differences in suspended solids and ignoring other parameters such as optical aberrations, spurious radiation, mismatch-compensated pulse effects, etc. Figure 4 shows a set of convolutional feature maps extracted from the stacked results of each convolutional layer.
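One way to obtain such feature maps, sketched below under our assumptions (this is not the authors' visualization code), is to register forward hooks on the convolutional layers of AlexNet and capture their outputs for a given input image.

```python
# Illustrative sketch: capture the feature maps of each convolutional layer.
import torch
from torchvision import models

model = models.alexnet(pretrained=True).eval()
feature_maps = {}

def save_maps(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()   # store the layer's activation maps
    return hook

for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(save_maps(f"conv_{idx}"))

image = torch.rand(1, 3, 224, 224)   # placeholder for a preprocessed sample image
with torch.no_grad():
    model(image)

for name, maps in feature_maps.items():
    print(name, tuple(maps.shape))   # e.g., conv_0 (1, 64, 55, 55)
```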

2.4. Samples

Two samples of commercial interest were used: fish feed and paprika. The study was based mainly on fish feed, and paprika (with two extra classes) was tested to validate the results. Commercial fish feed was used as the suspended solid because fish feed is one of the components that accumulates the most in intensive aquaculture systems [33,34]. Additionally, fish feed for tilapia is designed to remain in suspension [35], which makes it easy to dose. Twenty liquid samples were prepared by mixing one liter of distilled water with the fish feed or paprika masses (g) shown in Table 1. The masses were weighed on a Denver Instruments PI-214 high-resolution balance (four decimal places). The sample concentrations ranged from 0 to 0.8 g L⁻¹, chosen because this is the operating range of most commercially available turbidimeters and/or solids meters, such as HANNA and Hach instruments. The concentrations (fish feed and paprika mass) used for each class are shown in Table 1.

2.5. Experimental Setup

A transparent cubic container of approximately 5 × 5 × 5 cm was filled with the liquid sample. A magnetic mixer was used to keep all the particles suspended, preventing them from settling and allowing the suspended particles to be recorded by the camera. The mixer speed was 60 rpm during image registration (images for the training and validation datasets). The experimental setup is shown in Figure 5. The illumination distance was selected to prevent shadows on the liquid sample. The camera distance was selected to avoid imaging the edges of the container, so that the model would not learn the morphology of the container during CNN training. The experiment was carried out in a dark room. The background was constant throughout the experiment; therefore, the CNN learned the information from the samples for each color and did not rely on information about the background. This is corroborated in Figure 4, where the activations of the artificial neurons are focused on the suspended solids when classifying the samples. The setup used an RGB LED lamp with red, green, blue, and white illumination. Liquid samples illuminated with the different colors are shown in Figure 6. The illumination of all colors was kept constant at an irradiance of 0.852 mW/cm². The spectral power distributions of the illumination used in this study, measured with a spectrophotometer (USB2000, Ocean Optics, Orlando, FL, USA), are shown in Figure 7. All parameters not under measurement (smartphone camera mode, LED lamp intensity, etc.) were kept constant. The neural network learned to identify the changing image characteristics; since its training included samples with only distilled water (class 0), and the only parameter that varied was the suspended solids, the effects of spurious light and other sources of noise were minimized.
To create the liquid sample dataset, the rear cameras of a Huawei Mate 20 Lite and an iPhone 6 were used. A total of 88,000 images were recorded: 39,600 images of fish feed samples with the Android smartphone and 48,400 images of paprika samples with the iOS smartphone. The recorded images were 2448 × 2448 pixels; they were center-cropped to 224 × 224 pixels to match the input layer of the CNN. The smartphone cameras were used in manual mode with a capture speed of 20 FPS (frames per second) in burst mode. The ISO level and the focus (set at the container wall) were kept constant during the experiment. Of the 88,000 recorded images, 80,000 were randomly selected to create the training dataset, and the remaining 8000 images were used to build the validation dataset. That is, 90% of the dataset was used for training and the remaining 10% for validation, and the validation images were not used in the training process.
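A minimal sketch of this image pipeline, under our assumptions (the folder name is hypothetical and the split is done here with a random split rather than the authors' exact selection), is shown below: center-crop the smartphone frames to the 224 × 224 AlexNet input and divide the dataset 90%/10% into training and validation subsets.

```python
# Sketch of dataset preparation (assumed layout, not the authors' exact code).
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.CenterCrop(224),   # crop the 2448 x 2448 frame to the CNN input size
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("liquid_samples/white", transform=preprocess)

n_train = int(0.9 * len(dataset))                     # 90% training, 10% validation
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=50, shuffle=True)
val_loader = DataLoader(val_set, batch_size=50)
```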
The training process was developed and implemented using Google Colaboratory, which is a free cloud service for machine learning education and provides a Python notebook (Jupyter) environment running in a dedicated virtual machine on an Nvidia Tesla K80 GPU with 2496 CUDA cores. The AlexNet model (CNN) was taken from the Torchvision package, which offers some popular pre-trained models and other image processing tools.
In artificial intelligence, the process of finding the “best” or “optimal” parameters for a CNN model is called optimization. The classic optimization method is stochastic gradient descent (SGD): a simple procedure that iteratively finds the parameter values that yield the lowest possible error (loss) on the training dataset. Although newer and more powerful optimization algorithms exist, SGD provides consistency in the overall training process and results.
One of the most important hyperparameters is the “learning rate”. It controls the step size with which the model follows the gradient of the loss function, i.e., how much the model changes its parameters when updating them based on the model error. A high learning rate makes the model change its parameters quickly, while a low learning rate makes it change them slowly. The best option is a learning rate that makes the error decrease steadily (not too quickly), finding the minimum error in the fewest number of epochs. An “epoch” is another relevant hyperparameter; it is the number of times the entire training dataset is passed through the CNN model. A model, however, is trained in batches. Within a single training epoch, the “batch size” is the amount of data processed by the CNN before the model parameters are updated, repeated until the epoch is complete. Larger batches allow more computational parallelism and can often lead to better performance.
However, larger batches also require more memory and can cause latency when passed to the training function. Finally, the hyperparameter “momentum” accelerates gradient descent by taking into account a fraction of the previous gradients when computing the current update.
For the CNN training, the algorithm executed a total of 75 training epochs. The epoch number was selected by analyzing the training loss in previous executions of the training process. The CNN was trained using the stochastic gradient descent algorithm with a momentum of 0.9 and a batch size of 50. The other hyperparameters used in the experiment are listed in Table 2.
The hyperparameters were selected based on the image classification examples in the PyTorch documentation. In addition, several training runs were carried out to minimize the errors, until a high classification accuracy (100%) and estimation accuracy (98.24% and 97.20% for TSS and turbidity, respectively) were reached.
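The training loop below is a minimal sketch using the hyperparameters of Table 2 (SGD, learning rate 0.0005, momentum 0.9, batch size 50, 75 epochs, cross-entropy loss); it assumes the `model`, `train_loader`, and `device` objects from the previous sketches and is not the authors' exact script.

```python
# Minimal training sketch with the Table 2 hyperparameters (assumed setup).
import torch.nn as nn
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.0005, momentum=0.9)

for epoch in range(75):
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)   # classification loss
        loss.backward()                            # gradients
        optimizer.step()                           # SGD update
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_loader):.4f}")
```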

2.6. Performance Metrics

The performance of the proposed method was evaluated in two stages.

2.6.1. Performance Metrics for Classifier Evaluation

A confusion matrix is used to analyze the performance of a classification tool [36,37,38]. It is built from four terms describing the following cases: elements predicted to belong to a class that actually belong to that class are true positives (TP); elements predicted not to belong to a class that indeed do not belong to it are true negatives (TN); elements predicted to belong to a class that do not actually belong to it are false positives (FP); and elements predicted not to belong to a class that actually do belong to it are false negatives (FN). These elements are used to calculate the following performance metrics for the evaluation of the classifier:
Accuracy is the percentage of the total number of predictions that were classified correctly and is calculated as:
Accuracy = (TP + TN) / N
where N is the total number of elements to classify.
Precision is the ability of the classifier to predict a sample according to what it really is, and is defined as:
Precision = TP / (TP + FP)
Recall is the ability of the classifier to find all the positive samples. In other words, how many examples of positive cases were correctly labeled, and can be written as:
Recall = TP / (TP + FN)
Similar to Recall, Specificity is the ability of the classifier to find all the negative samples, and is defined as:
Specificity = TN / (TN + FP)
F-Score is the harmonic mean of Precision and Recall, and provides a notion of how precise the classifier is. A high F-Score value indicates that the model performs better in positive cases. It is calculated as:
F-Score = 2 × (Precision × Recall) / (Precision + Recall)
The receiver operating characteristic (ROC) is a plot of the true positive rate (Recall) versus the false positive rate (1 − Specificity). This graph characterizes the ability of a CNN to identify positive cases as positive and negative cases as negative. The area under the ROC curve (AUC) is the probability that a randomly chosen pair of positive and negative cases will be classified correctly.
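These classification metrics can be computed directly from the predicted and true labels, as in the sketch below (the label arrays are hypothetical and not data from the paper).

```python
# Illustrative computation of the classifier metrics from predictions.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [0, 0, 1, 1, 2, 2, 2, 1]   # hypothetical reference labels
y_pred = [0, 0, 1, 2, 2, 2, 2, 1]   # hypothetical CNN predictions

print(confusion_matrix(y_true, y_pred))
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro"))
print("Recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-Score  :", f1_score(y_true, y_pred, average="macro"))
```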

2.6.2. Performance Metrics for MLR Evaluation

The following metrics were used to evaluate the performance of the MLR, whose task is to estimate the correct measured value.
The coefficient of determination (R²) is a statistical measure of the goodness of fit or reliability of the model with respect to the data; it quantifies the ability of the model to replicate the results. R² takes values between 0 and 1: zero implies that there is no linear relationship, and one means that there is a perfect linear relationship. The coefficient of determination is calculated as:
R² = Σ_{i=1}^{n} (y_predicted_i − y_mean)² / Σ_{i=1}^{n} (y_true_i − y_mean)²
where y_predicted is defined as the predicted value, y_true as the true value, and y_mean as the average of the y data.
Mean absolute error (MAE) is an evaluation metric used in regression models. It is the mean of the absolute differences between the original values (y_true) and the predicted values (y_predicted). Mathematically, it is described as:
MAE = (1/N) Σ_{i=1}^{N} |y_true_i − y_predicted_i|
Mean square error (MSE) is the mean of the squared differences between the original values and the predicted values; the higher this value, the worse the model. Mathematically, it is represented as:
MSE = (1/N) Σ_{i=1}^{N} (y_true_i − y_predicted_i)²
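All three regression metrics can be obtained with scikit-learn, as in the sketch below (hypothetical values, not data from the paper; note that scikit-learn computes R² with its own convention).

```python
# Illustrative computation of R^2, MAE, and MSE for the regression stage.
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

y_true = [0.0, 0.1, 0.2, 0.3, 0.4]        # hypothetical reference TSS values (g/L)
y_pred = [0.02, 0.11, 0.19, 0.30, 0.40]   # hypothetical CNN + MLR estimates

print("R^2:", r2_score(y_true, y_pred))
print("MAE:", mean_absolute_error(y_true, y_pred))
print("MSE:", mean_squared_error(y_true, y_pred))
```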

3. Results and Discussion

This section presents the evaluation of the CNN performance metrics. The CNN model was independently trained four times, each time with one color training dataset (white, red, green, or blue). In addition, the efficiency of the MLR in estimating the TSS and turbidity values was evaluated. The validation dataset consisted of nine classes per color with 100 images in each class. For the CNN evaluation, a confusion matrix was built in which the rows correspond to the true labels and the columns to the labels predicted by the CNN. The diagonal cells correspond to correctly classified observations; a perfect classification is reached when each diagonal element counts 100. The confusion matrix obtained at the end of the training process for the four color datasets was the same for fish feed and paprika (without the extra classes) and is shown in Figure 8. When the extra classes are included, the diagonal cells likewise each count 100. The success of this classification may be due both to the size of the dataset (which is relatively large) and to the low complexity of the classification objects. In addition, since the MLR evaluation metrics show a nonzero error, we can be confident that the CNN did not memorize the dataset and was not overfitted. The CNN trained on each color dataset reached the maximum score on its performance evaluation metrics for accuracy, precision, recall, F-score, and ROC (see Figure 9). The performance obtained for each color dataset is shown in Table 3.
Although the trained CNN achieved the same high scores on the performance metrics for each color dataset, the training time differed between colors. The best (shortest) training time was reached with the white light dataset, and the worst with the green dataset; the difference between the CNNs trained with the white and green datasets was 266% in training time. The CNN trained with the red dataset was the second-best model, reaching the maximum score in almost double the time of the white one. These differences could be attributed to the fact that the CNN creates individual feature maps for each RGB color channel [39]. Therefore, white light, which combines the three color channels, could generate more detailed feature maps, allowing the CNN to classify all classes more effectively. Training time is a parameter that indicates which color dataset is best categorized by the CNN. Once the CNN has been trained, it computes the TSS and turbidity values of a new image in fractions of a second. The accuracy and loss curves of the training process are shown in Figure 10. The blue classifier (the CNN model trained with the blue dataset) rose quickly to a high accuracy score; however, it started with a low accuracy of 42% in its first epoch and only reached 100% accuracy at the 15th epoch. The green classifier started with the lowest accuracy, 12%, reached a high accuracy value at the 12th epoch, but continued its training with some fluctuations. The red classifier started with a low accuracy of 45% in its first epoch and reached 100% accuracy at the 12th epoch. Meanwhile, the white classifier obtained an accuracy of 90% in its first epoch and reached 100% in its seventh.
The differences between RGB colors in the accuracy and loss curves can be related to the spectral characteristics of the light. In particular, the fish feed sample was brown, whose spectrum has more red-light content, so the red image may carry more information, improving the red channel analysis. Regarding the green and blue channels, the peak wavelength of the green LED illumination is displaced by about 40 nm from the peak wavelength of the green-pixel responsivity of the CMOS camera, while the peak wavelength of the blue LED lamp is displaced by about 10 nm. This spectral mismatch may reduce the information in the green channel in comparison with the blue and red channels. The situation is similar for the paprika samples, whose spectrum also has more red-light content.
The MLR performance measures were calculated, and the results for the different color datasets are shown in Table 4. The MLR for the white dataset had the highest coefficient of determination R², and hence the highest quality in replicating the results, as well as the lowest MAE and MSE values. Therefore, the best MLR performance for the estimation task was obtained with the white dataset, whereas the green-illuminated samples gave the worst MLR performance. The study focused mainly on fish feed, and paprika was tested to validate the results; for both samples, the CNN + MLR performance is listed in Table 4. The liquid samples created in this research have a TSS range of 0–800 mg L⁻¹. The TSS values estimated by the CNN + MLR are shown in Table 5 for the different color datasets.
The performance metrics in Table 4 and Table 5 indicate that, for the fish feed sample, the best illumination for the proposed method was white light: it showed an error of 2.53%, compared to 3.16% for red, 4.16% for blue, and 9.57% for green. The proposed method with the white dataset had an operational range of 0 to 0.8 g L⁻¹ and a high goodness of fit (R² = 0.99). The method with the white dataset thus obtained the lowest error of ±2.53% and a general standard deviation of ±0.018, which implies an accuracy of 97.46%. These are measurement (observational) errors, calculated using the values of the reference samples listed in Table 1 (taken as the true values). In general, the results indicate that the measuring precision is reasonably good for the fish feed sample. However, the two smallest concentrations (classes 0 and 1) had lower precision. To address this and test repeatability, we repeated and expanded the measurement with a different sample, paprika, and a smartphone with a different operating system (iPhone 6, iOS).
To improve and validate our results, we analyzed a set of liquid samples made with paprika. This additional set of samples incorporated two new classes (1/2 and 3/4) into the CNN training process, which significantly reduced the measurement error of classes 0 and 1. Table 5 also shows the TSS values for the paprika samples for each color illumination dataset. With the training of the new classes, the CNN + MLR improved its accuracy to 98.24% for the TSS values with the white dataset.
It should be highlighted that TSS and turbidity are correlated through a coefficient of proportionality that relates them linearly. This coefficient of proportionality depends on the geometric and optical properties of the suspended solids (i.e., size, shape, refractive index, mass density) [40,41,42]. In other words, fish feed and paprika samples with the same TSS concentration have different turbidity values because they have different coefficients of proportionality.
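As a worked illustration of this proportionality (our own calculation, using the fish feed reference values from Table 6 and a simple linear fit, not a result reported by the authors), the coefficient can be estimated as the slope of turbidity versus TSS:

```python
# Illustrative estimate of the TSS-turbidity proportionality coefficient
# from the fish feed reference values of Table 6.
import numpy as np

tss = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])    # g L^-1
turbidity = np.array([0, 38, 77, 95, 132, 168, 189, 245, 263])   # NTU

k, intercept = np.polyfit(tss, turbidity, 1)   # slope = proportionality coefficient
print(f"~{k:.0f} NTU per g/L of fish feed")    # paprika would give a different slope
```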
To estimate the turbidity values of the liquid samples, the system was trained and validated using reference values measured with another instrument. These turbidity reference values were measured with a HACH DR900 colorimeter within the operating range 0–263 NTU and are shown in Table 6. The reference measurements were replicated six times to obtain the standard deviation of the device. Note that for measurements near 200 NTU with the HACH DR900, the standard deviations increased, because the instrument is adjusted with a calibration curve based on the reading obtained with the 200 NTU formazin standard. Additionally, according to the user manual, the instrument error is ±21 NTU [43]. Table 6 shows the TSS and turbidity reference values for the fish feed and paprika samples. The turbidity values estimated by our method are shown in Table 7 for the different color datasets. The white dataset showed the lowest standard deviation of all color datasets, with a maximum of ±13.68 NTU for the fifth class, which is lower than the standard deviation of the reference instrument. In addition, for the fish feed samples, the white color presented an error of 9.84%, compared to 11.64% for red, 12.55% for blue, and 42.50% for green. The 0–263 NTU range is appropriate for aquatic environments, as the safe turbidity level for aquatic life should not exceed 25 NTU [44]. For the turbidity measurement, the proposed method had a standard deviation of ±6.98 NTU and an accuracy of 90.16% with the white dataset, which was the best color dataset for the fish feed samples. Table 7 also shows the turbidity values estimated by our method for the paprika samples; by training the new classes, the CNN + MLR improved its accuracy to 97.20% for turbidity values with the white dataset. To test the method further, eight new paprika samples were prepared with fractional TSS concentrations. To validate the proposed method, these samples were estimated without training the CNN or fitting the MLR for these TSS values. The TSS values estimated by the CNN + MLR are shown in Table 8, with an accuracy of 96.88% for the white dataset, and the turbidity values estimated by the proposed method are shown in Table 9, with an accuracy of 96.14% for the white dataset. The CNN + MLR were thus validated on extra concentration samples.
The results showed that the proposed method can be improved by including more classes with small concentrations in the CNN training. This can be noticed by comparing the test results for the fish feed and paprika samples: for example, with white light illumination, the measurement error is reduced from 2.53% to 1.76% for TSS estimation and from 9.84% to 2.79% for turbidity estimation. These results provide evidence for the effectiveness of the proposed method and indicate high resolution and accuracy. Nevertheless, despite its high performance, the limitations of the method should be recognized, among them the type of samples that can be measured. The proposed method was tested with commercial fish feed and paprika as suspended solids, which do not represent all types of suspended solids. In other aquatic environments, such as river water, domestic and industrial wastewater, and drinking water, the particles may be smaller than those of fish feed or paprika. However, since the CNN analyzes the images in depth, with the high-level convolutional layers capturing all the differences between the images, its potential for application to other types of suspended solids is promising: the CNN image analysis differentiates not only particle distribution but also contrast, brightness, and color. Further research should therefore consider other types of samples, such as river water, domestic and industrial wastewater, and potable water. In addition, this research is expected to lead to a prototype consisting of an enclosure with LED lighting included, in which the sample and the smartphone are placed in fixed positions to perform the measurement. In future work, it could be tested whether the LED flash (white light) of the smartphone could be used, since the white dataset presented the best performance.

4. Conclusions

In this paper, a novel method was proposed to estimate TSS and turbidity values in liquid samples using a CNN and an MLR together with a smartphone. The main conclusions of the study follow:
  • The developed CNN and MLR can estimate TSS and turbidity values using images recorded by a common smartphone (Android and iOS systems).
  • The proposed method is capable of estimating concentration values for unknown classes (validated with samples whose concentrations were unknown to the CNN).
  • The proposed method is capable of estimating TSS and turbidity in homogeneous material samples with different particle sizes, provided the particles are in motion or suspended. A mixer is used to keep all the particles suspended, preventing them from settling and allowing them to be recorded by the camera.
  • This method of measuring turbidity and TSS is inexpensive and reduces human intervention. The results show the effectiveness of the proposed method, indicating high accuracies of 98.24% and 97.20% for TSS and turbidity measurements, respectively, which compare favorably with measurements made with commercial turbidimeters and prevailing machine learning algorithms.
  • Once the CNN model is trained and the MLR is fitted, the algorithm can run on a smartphone or on other devices with a lower-cost Nvidia GPU. In this research, the TSS and turbidity estimation algorithm was also executed on a smartphone (Huawei Nova 3) using “Pydroid 3”, and the same results were obtained as with the Google Colaboratory service.
  • As further work, a range extension (adding larger TSS values) and an expansion of the training dataset (more images) could be performed to improve the method. Further research should also investigate heterogeneous material samples such as river water, domestic and industrial wastewater, potable water, and different sediments on the seabed. In addition, this research is expected to be helpful for developing a device for real applications, for example, a mobile device in which both the camera and the liquid sample are encapsulated, thus avoiding the influence of external light and background conditions on the measurements.

Author Contributions

Conceptualization, D.L.-B., I.M., and C.G.-M.; methodology, D.L.-B., I.M., and C.G.-M.; software, C.G.-M.; validation, T.S.-A. and E.G.; formal analysis, D.L.-B. and C.G.-M.; investigation, D.L.-B.; resources, C.B.-C. and J.G.-T.; data curation, D.L.-B. and C.G.-M.; writing—original draft preparation, D.L.-B. and I.M.; writing—review and editing, D.L.-B., I.M., and C.G.-M.; visualization, T.S.-A. and E.G.; supervision, I.M. and C.G.-M.; project administration, I.M., T.S.-A., and C.G.-M.; funding acquisition, C.B.-C. and J.G.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors appreciate the help given by Ana Isabel Veyna Gomez, a collaborator from the Water Geochemistry Laboratory at the Autonomous University of Zacatecas. Furthermore, the authors thank Ivanna Sofia Renteria Estrada for her editorial suggestions. Finally, the authors also thank the reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avnimelech, Y. Feeding with Microbial Flocs by Tilapia in Minimal Discharge Bio-Flocs Technology Ponds. Aquaculture 2007, 264, 140–147. [Google Scholar] [CrossRef]
  2. Qin, S.; Cai, X.; Ma, L. A Novel Light Fluctuation Spectrum Method for In-Line Particle Sizing. Front. Energy 2012, 6, 89–97. [Google Scholar] [CrossRef]
  3. Bin Omar, A.F.; Bin MatJafri, M.Z. Turbidimeter Design and Analysis: A Review on Optical Fiber Sensors for the Measurement of Water Turbidity. Sensors 2009, 9, 8311–8335. [Google Scholar] [CrossRef] [PubMed]
  4. Yang, Y.; Wang, H.; Cao, Y.; Gui, H.; Liu, J.; Lu, L.; Cao, H.; Yu, T.; You, H. The Design of Rapid Turbidity Measurement System Based on Single Photon Detection Techniques. Opt. Laser Technol. 2015, 73, 44–49. [Google Scholar] [CrossRef]
  5. Murphy, K.; Heery, B.; Sullivan, T.; Zhang, D.; Paludetti, L.; Lau, K.T.; Diamond, D.; Costa, E.; O’Connor, N.; Regan, F. A Low-Cost Autonomous Optical Sensor for Water Quality Monitoring. Talanta 2015, 132, 520–527. [Google Scholar] [CrossRef]
  6. Wang, H.; Hu, J.; Wan, W.; Gui, H.; Qin, F.; Yu, F.; Liu, J.; Lü, L. A Wide Dynamic Range and High Resolution All-Fiber-Optic Turbidity Measurement System Based on Single Photon Detection Technique. Measurement 2019, 134, 820–824. [Google Scholar] [CrossRef]
  7. Toivanen, T.; Koponen, S.; Kotovirta, V.; Molinier, M.; Chengyuan, P. Water Quality Analysis Using an Inexpensive Device and a Mobile Phone. Environ. Syst. Res. 2013, 2, 9. [Google Scholar] [CrossRef] [Green Version]
  8. Gillett, D.; Marchiori, A. A Low-Cost Continuous Turbidity Monitor. Sensors 2019, 19, 3039. [Google Scholar] [CrossRef] [Green Version]
  9. Azman, A.A.; Rahiman, M.H.F.; Taib, M.N.; Sidek, N.H.; Bakar, I.A.A.; Ali, M.F. A Low Cost Nephelometric Turbidity Sensor for Continual Domestic Water Quality Monitoring System. In Proceedings of the 2016 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Selangor, Malaysia, 22 October 2016; IEEE: Selangor, Malaysia, 2016; pp. 202–207. [Google Scholar]
  10. Godoy, A.C.; Nakano, A.Y.; Siepmann, D.A.B.; Schneider, R.; Pfrimer, F.W.D.; Santos, O.O. Snapshots Analyses for Turbidity Measurements in Water. Water Air Soil Pollut. 2018, 229, 378. [Google Scholar] [CrossRef]
  11. Zhu, Y.; Cao, P.; Liu, S.; Zheng, Y.; Huang, C. Development of a New Method for Turbidity Measurement Using Two NIR Digital Cameras. ACS Omega 2020, 5, 5421–5428. [Google Scholar] [CrossRef] [Green Version]
  12. Gimenez, A.J.; Medellin-Rodriguez, F.J.; Contreras-Martinez, L.M.; Lopez-Romero, J.M.; Sanchez, I.C.; Villasenor-Ortega, F.; Luna-Barcenas, G. Turbidimetry by Image Degradation Analysis. IEEE Trans. Instrum. Meas. 2020, 69, 7574–7579. [Google Scholar] [CrossRef]
  13. Gu, K.; Zhang, Y.; Qiao, J. Random Forest Ensemble for River Turbidity Measurement from Space Remote Sensing Data. IEEE Trans. Instrum. Meas. 2020, 69, 9028–9036. [Google Scholar] [CrossRef]
  14. Mullins, D.; Coburn, D.; Hannon, L.; Jones, E.; Clifford, E.; Glavin, M. A Novel Image Processing-Based System for Turbidity Measurement in Domestic and Industrial Wastewater. Water Sci. Technol. 2018, 77, 1469–1482. [Google Scholar] [CrossRef] [PubMed]
  15. Bayram, A.; Yalcin, E.; Demic, S.; Gunduz, O.; Solmaz, M.E. Development and Application of a Low-Cost Smartphone-Based Turbidimeter Using Scattered Light. Appl. Opt. 2018, 57, 5935–5940. [Google Scholar] [CrossRef] [PubMed]
  16. Leeuw, T.; Boss, E. The HydroColor App: Above Water Measurements of Remote Sensing Reflectance and Turbidity Using a Smartphone Camera. Sensors 2018, 18, 256. [Google Scholar] [CrossRef] [Green Version]
  17. Maeda-Gutierrez, V.; Galvan-Tejada, C.E.; Zanella-Calzada, L.A.; Celaya-Padilla, J.M.; Galván-Tejada, J.I.; Gamboa-Rosales, H.; Luna-Garcia, H.; Magallanes-Quintanar, R.; Guerrero Mendez, C.A.; Olvera-Olvera, C.A. Comparison of Convolutional Neural Network Architectures for Classification of Tomato Plant Diseases. Appl. Sci. 2020, 10, 1245. [Google Scholar] [CrossRef] [Green Version]
  18. Jafarzadeh Ghoushchi, S.; Ranjbarzadeh, R.; Najafabadi, S.A.; Osgooei, E.; Tirkolaee, E.B. An Extended Approach to the Diagnosis of Tumour Location in Breast Cancer Using Deep Learning. J. Ambient. Intell. Humaniz. Comput. 2021. [Google Scholar] [CrossRef]
  19. Romanuke, V. A Prototype Model for Semantic Segmentation of Curvilinear Meandering Regions by Deconvolutional Neural Networks. Appl. Comput. Syst. 2020, 25, 62–69. [Google Scholar] [CrossRef]
  20. Beemaraj, R.K.; Chandra Sekar, M.S.; Vijayan, V. Computer Vision Measurement and Optimization of Surface Roughness Using Soft Computing Approaches. Trans. Inst. Meas. Control 2020, 42, 2475–2481. [Google Scholar] [CrossRef]
  21. Vijayan, V.; Senthilkumar, G. Performance Analysis of Surface Roughness Modeling Using Soft Computing Approaches. Appl. Math. Inf. Sci. 2018, 12, 1209–1217. [Google Scholar] [CrossRef]
  22. Brunner, V.; Siegl, M.; Geier, D.; Becker, T. Challenges in the Development of Soft Sensors for Bioprocesses: A Critical Review. Front. Bioeng. Biotechnol. 2021, 9, 730. [Google Scholar] [CrossRef] [PubMed]
  23. Yuan, X.; Qi, S.; Shardt, Y.A.W.; Wang, Y.; Yang, C.; Gui, W. Soft Sensor Model for Dynamic Processes Based on Multichannel Convolutional Neural Network. Chemom. Intell. Lab. Syst. 2020, 203, 104050. [Google Scholar] [CrossRef]
  24. Wang, K.; Shang, C.; Liu, L.; Jiang, Y.; Huang, D.; Yang, F. Dynamic Soft Sensor Development Based on Convolutional Neural Networks. Ind. Eng. Chem. Res. 2019, 58, 11521–11531. [Google Scholar] [CrossRef]
  25. Hajian, A.; Styles, P. Artificial Neural Networks. In Application of Soft Computing and Intelligent Methods in Geophysics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–69. [Google Scholar]
  26. Lopez-Betancur, D.; Duran, R.B.; Guerrero-Mendez, C.; Rodriguez, R.Z.; Anaya, T.S. Comparison of Convolutional Neural Network Architectures for COVID-19 Diagnosis. Comput. Sist. 2021, 25, 601–615. [Google Scholar]
  27. Guerrero-Mendez, C.; Saucedo-Anaya, T.; Moreno, I.; Araiza-Esquivel, M.; Olvera-Olvera, C.; Lopez-Betancur, D. Digital Holographic Interferometry without Phase Unwrapping by a Convolutional Neural Network for Concentration Measurements in Liquid Samples. Appl. Sci. 2020, 10, 4974. [Google Scholar] [CrossRef]
  28. Batista, L.V. Turbidity Classification of the Paraopeba River Using Machine Learning and Sentinel-2 Images. IEEE Lat. Am. Trans. 2022, 20, 799–805. [Google Scholar] [CrossRef]
  29. Ferentinos, K.P. Deep Learning Models for Plant Disease Detection and Diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  30. Han, X.; Zhong, Y.; Cao, L.; Zhang, L. Pre-Trained AlexNet Architecture with Pyramid Pooling and Supervision for High Spatial Resolution Remote Sensing Image Scene Classification. Remote Sens. 2017, 9, 848. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, Z.; Ning, G.; Cen, Y.; Li, Y.; Zhao, Z.; Sun, H.; He, Z. Progressive Neural Networks for Image Classification. arXiv 2018, arXiv:1804.09803. [Google Scholar]
  32. Krizhevsky, A. One Weird Trick for Parallelizing Convolutional Neural Networks. arXiv 2014, arXiv:1404.5997. [Google Scholar]
  33. Avnimelech, Y. Biofloc Technology. In A Practical Guide Book; The World Aquaculture Society: Baton Rouge, LA, USA, 2009; Volume 182. [Google Scholar]
  34. Lopez-Betancur, D.; Moreno, I.; Guerrero-Mendez, C.; Gómez-Meléndez, D.; Macias, P.D.M.J.; Olvera-Olvera, C. Effects of Colored Light on Growth and Nutritional Composition of Tilapia, and Biofloc as a Food Source. Appl. Sci. 2020, 10, 362. [Google Scholar] [CrossRef] [Green Version]
  35. Begum, N.; Islam, M.S.; Haque, A.; Suravi, I.N. Growth and Yield of Monosex Tilapia Oreochromis Niloticus in Floating Cages Fed Commercial Diet Supplemented with Probiotics in Freshwater Pond, Sylhet. Bangladesh J. Zool. 2017, 45, 27–36. [Google Scholar] [CrossRef] [Green Version]
  36. Deng, X.; Liu, Q.; Deng, Y.; Mahadevan, S. An Improved Method to Construct Basic Probability Assignment Based on the Confusion Matrix for Classification Problem. Inf. Sci. 2016, 340–341, 250–261. [Google Scholar] [CrossRef]
  37. Qiu, S.; Xu, H.; Deng, J.; Jiang, S.; Lu, L. Transfer Convolutional Neural Network for Cross-Project Defect Prediction. Appl. Sci. 2019, 9, 2660. [Google Scholar] [CrossRef] [Green Version]
  38. Sammut, C.; Webb, G.I. Encyclopedia of Machine Learning; Springer: New York, NY, USA, 2011; ISBN 978-0-387-30768-8. [Google Scholar]
  39. Schwarz, M.; Schulz, H.; Behnke, S. RGB-D Object Recognition and Pose Estimation Based on Pre-Trained Convolutional Neural Network Features. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: Seattle, WA, USA, 2015; pp. 1329–1335. [Google Scholar]
  40. Hannouche, A.; Chebbo, G.; Ruban, G.; Tassin, B.; Lemaire, B.J.; Joannis, C. Relationship between Turbidity and Total Suspended Solids Concentration within a Combined Sewer System. Water Sci. Technol. 2011, 64, 2445–2452. [Google Scholar] [CrossRef]
  41. Rügner, H.; Schwientek, M.; Beckingham, B.; Kuch, B.; Grathwohl, P. Turbidity as a Proxy for Total Suspended Solids (TSS) and Particle Facilitated Pollutant Transport in Catchments. Environ. Earth Sci. 2013, 69, 373–380. [Google Scholar] [CrossRef]
  42. Schwarz, K.; Gocht, T.; Grathwohl, P. Transport of Polycyclic Aromatic Hydrocarbons in Highly Vulnerable Karst Systems. Environ. Pollut. 2011, 159, 133–139. [Google Scholar] [CrossRef]
  43. Hach USA DR900 Multiparameter Portable Colorimeter—Overview. Available online: https://www.hach.com/dr900-multiparameter-portable-colorimeter/product?id=15684103251 (accessed on 1 August 2019).
  44. Lloyd, D.S. Turbidity as a Water Quality Standard for Salmonid Habitats in Alaska. N. Am. J. Fish. Manag. 1987, 7, 34–45. [Google Scholar] [CrossRef]
Figure 1. Representation of the TL process in the CNN model used: (1) training dataset (ImageNet) and CNN; (2) input image to classify; (3) classification result; (4) reuse of the pre-trained model for a new task; (5) new training dataset (images of the new task) and modification of the CNN; (6) new input image to be classified; and (7) classification result.
Figure 2. Details of the trained CNN model used in this research. The layer FC8 is marked to indicate the resizing of this layer according to the number of classes in the new task.
Figure 3. Extraction of logit vectors used to fit the MLR and the complete measurement system.
Figure 4. Convolutional feature maps of a liquid sample.
Figure 5. Experimental setup: (a) scheme of the experimental setup; (b) experimental setup used.
Figure 6. Recorded images of samples with a concentration of 0.1 g L⁻¹ of the fish feed mass in distilled water, illuminated with different lighting colors: (a) white light; (b) red; (c) blue; (d) green.
Figure 7. The spectral power distributions of the RGB LED lamp used in this study: (a) blue; (b) green; (c) red; and (d) white.
Figure 8. Confusion matrix reached for all colors.
Figure 9. ROC curve obtained in the CNN performance evaluation (all colors). Note: all classes overlap in the horizontal line.
Figure 10. (a) Classifier training accuracy for each validation color dataset. (b) Loss for each validation color dataset.
Table 1. Concentration samples (fish feed and paprika mass) used for each class.
Classes            0      ½      ¾      1      2      3      4      5      6      7      8
Fish mass (g)      0.000  --     --     0.100  0.200  0.300  0.400  0.500  0.600  0.700  0.800
Paprika mass (g)   0.000  0.002  0.007  0.100  0.200  0.300  0.400  0.500  0.600  0.700  0.800
Table 2. Hyperparameters used in the training process.
Hyperparameters        Value
Algorithm optimizer    SGD
Learning rate          0.0005
Momentum               0.9
Batch size             50
Number of epochs       75
Table 3. Performance measures of the CNN as classifier for each color dataset.
Metrics                          White    Red      Blue     Green
Accuracy                         1.00     1.00     1.00     1.00
Precision                        1.00     1.00     1.00     1.00
Recall                           1.00     1.00     1.00     1.00
F-Score                          1.00     1.00     1.00     1.00
Fish feed training time (min)    65.38    102.88   156.53   174.11
Paprika training time (min)      162.49   200.58   232.62   236.08
Table 4. MLR performance metrics for each color dataset for the fish feed and paprika samples.
           Fish feed                                Paprika
Metrics    White    Red      Blue     Green        White    Red      Blue     Green
R²         0.991    0.983    0.958    0.913        0.999    0.991    0.997    0.984
MAE        0.016    0.021    0.035    0.052        0.006    0.016    0.009    0.030
MSE        0.0004   0.0009   0.0022   0.0045       0.0002   0.0009   0.0004   0.0004
Table 5. Mean values ± standard deviation of the TSS estimated by the CNN + MLR for each color dataset for the fish feed and paprika samples.
Samples     Classes   White            Red              Blue             Green
Fish feed   0         0.019 ± 0.024    0.021 ± 0.002    0.021 ± 0.035    0.083 ± 0.102
            1         0.109 ± 0.013    0.117 ± 0.026    0.118 ± 0.031    0.120 ± 0.056
            2         0.189 ± 0.014    0.203 ± 0.016    0.215 ± 0.041    0.215 ± 0.019
            3         0.296 ± 0.017    0.303 ± 0.016    0.310 ± 0.036    0.262 ± 0.020
            4         0.396 ± 0.020    0.397 ± 0.033    0.399 ± 0.061    0.319 ± 0.043
            5         0.494 ± 0.036    0.499 ± 0.052    0.496 ± 0.039    0.512 ± 0.041
            6         0.592 ± 0.015    0.592 ± 0.035    0.590 ± 0.063    0.549 ± 0.065
            7         0.705 ± 0.014    0.713 ± 0.023    0.685 ± 0.050    0.693 ± 0.054
            8         0.809 ± 0.013    0.778 ± 0.022    0.785 ± 0.069    0.848 ± 0.010
Paprika     0         0.000 ± 0.013    0.000 ± 0.019    0.000 ± 0.043    0.000 ± 0.119
            ½         0.002 ± 0.012    0.005 ± 0.030    0.004 ± 0.019    0.005 ± 0.027
            ¾         0.007 ± 0.015    0.006 ± 0.019    0.004 ± 0.019    0.015 ± 0.020
            1         0.102 ± 0.011    0.103 ± 0.025    0.091 ± 0.040    0.113 ± 0.107
            2         0.199 ± 0.021    0.202 ± 0.019    0.195 ± 0.045    0.183 ± 0.024
            3         0.291 ± 0.019    0.303 ± 0.027    0.298 ± 0.060    0.267 ± 0.027
            4         0.390 ± 0.010    0.366 ± 0.045    0.426 ± 0.041    0.324 ± 0.049
            5         0.496 ± 0.011    0.459 ± 0.032    0.494 ± 0.038    0.468 ± 0.041
            6         0.603 ± 0.015    0.600 ± 0.055    0.612 ± 0.058    0.584 ± 0.060
            7         0.689 ± 0.020    0.682 ± 0.030    0.694 ± 0.045    0.722 ± 0.049
            8         0.789 ± 0.016    0.805 ± 0.020    0.799 ± 0.058    0.776 ± 0.035
Table 6. Reference turbidity measured with the HACH DR900 for each TSS value of the fish feed and paprika samples. Turbidity values are shown as mean ± standard deviation.
            Fish feed                        Paprika
Classes     TSS (g L⁻¹)   Turbidity (NTU)    TSS (g L⁻¹)   Turbidity (NTU)
0           0.000         0 ± 0.55           0.000         0 ± 0.48
½           --            --                 0.002         2 ± 2.45
¾           --            --                 0.007         3 ± 3.69
1           0.100         38 ± 2.61          0.100         27 ± 5.21
2           0.200         77 ± 7.06          0.200         61 ± 2.87
3           0.300         95 ± 13.62         0.300         90 ± 4.95
4           0.400         132 ± 12.37        0.400         123 ± 4.75
5           0.500         168 ± 12.67        0.500         145 ± 7.71
6           0.600         189 ± 22.66        0.600         170 ± 4.92
7           0.700         245 ± 21.65        0.700         248 ± 2.39
8           0.800         263 ± 23.73        0.800         306 ± 3.87
Table 7. Turbidity measured with the proposed method. It shows mean values ± standard deviation of the turbidity estimated by the CNN + MLR for each color dataset for the fish feed and paprika samples.
Samples     Classes   White             Red               Blue              Green
Fish feed   0         7.14 ± 9.00       8.09 ± 6.08       8.05 ± 13.41      31.43 ± 38.65
            1         41.39 ± 4.94      44.70 ± 10.03     44.98 ± 11.66     45.68 ± 21.28
            2         75.05 ± 5.36      77.54 ± 6.19      79.72 ± 15.58     79.74 ± 7.11
            3         94.26 ± 6.38      96.22 ± 5.96      98.81 ± 13.76     88.21 ± 7.49
            4         130.48 ± 7.71     130.89 ± 12.65    131.78 ± 23.06    102.07 ± 16.30
            5         165.91 ± 13.68    167.89 ± 19.61    166.52 ± 14.70    170.52 ± 15.58
            6         187.24 ± 5.77     187.32 ± 0.04     187.07 ± 23.83    178.35 ± 24.55
            7         245.97 ± 5.24     247.30 ± 8.81     236.60 ± 19.04    241.14 ± 20.56
            8         266.25 ± 4.79     259.11 ± 8.39     260.34 ± 26.07    278.65 ± 3.99
Paprika     0         0.09 ± 3.56       0.36 ± 5.10       0.18 ± 11.69      0.27 ± 33.43
            ½         1.91 ± 2.00       2.85 ± 5.10       1.16 ± 11.69      2.65 ± 33.43
            ¾         2.95 ± 2.15       1.67 ± 5.40       1.16 ± 15.66      5.17 ± 9.56
            1         27.54 ± 2.29      27.92 ± 6.69      24.70 ± 10.88     31.49 ± 29.31
            2         60.55 ± 2.99      61.46 ± 5.02      59.33 ± 12.15     55.15 ± 6.48
            3         87.44 ± 5.07      91.06 ± 7.32      89.45 ± 16.23     80.52 ± 7.18
            4         119.70 ± 2.78     111.68 ± 12.23    128.61 ± 10.99    97.92 ± 13.20
            5         144.03 ± 2.89     136.05 ± 8.53     143.68 ± 10.39    137.89 ± 10.96
            6         172.10 ± 3.10     170.16 ± 14.87    179.05 ± 15.66    166.03 ± 16.15
            7         239.34 ± 2.91     234.12 ± 8.15     243.01 ± 12.15    260.59 ± 13.20
            8         299.44 ± 3.37     307.24 ± 5.40     305.88 ± 15.66    292.25 ± 9.56
Table 8. The estimated TSS of the extra concentration samples with the proposed method. It shows mean values ± standard deviation of the TSS estimated by the CNN and the MLR for each color dataset for paprika samples.
Samples    TSS (g L⁻¹)   White            Red              Blue             Green
Paprika    0.05          0.05 ± 0.011     0.02 ± 0.011     0.004 ± 0.051    0.013 ± 0.026
           0.15          0.14 ± 0.018     0.11 ± 0.015     0.08 ± 0.049     0.14 ± 0.056
           0.25          0.24 ± 0.016     0.20 ± 0.021     0.17 ± 0.049     0.16 ± 0.050
           0.35          0.33 ± 0.017     0.30 ± 0.018     0.29 ± 0.058     0.26 ± 0.049
           0.45          0.44 ± 0.011     0.36 ± 0.040     0.42 ± 0.039     0.32 ± 0.040
           0.55          0.54 ± 0.015     0.45 ± 0.030     0.49 ± 0.040     0.46 ± 0.040
           0.65          0.63 ± 0.018     0.60 ± 0.040     0.61 ± 0.050     0.58 ± 0.048
           0.75          0.72 ± 0.019     0.80 ± 0.029     0.80 ± 0.046     0.77 ± 0.040
Table 9. The estimated turbidity of the extra concentration samples with the proposed method. It shows mean values ± standard deviation of the turbidity estimated by the CNN and the MLR for each color dataset for paprika samples.
Samples    Turbidity (NTU)   White             Red               Blue              Green
Paprika    12                12.69 ± 2.97      6.31 ± 2.94       1.16 ± 13.63      3.56 ± 6.88
           42                39.90 ± 4.83      31.48 ± 3.99      21.89 ± 13.20     41.51 ± 15.12
           73                71.55 ± 4.32      61.46 ± 5.56      52.53 ± 13.25     48.35 ± 13.20
           105               99.43 ± 4.59      91.05 ± 4.88      89.44 ± 15.68     80.51 ± 13.12
           134               132.30 ± 2.91     111.68 ± 10.82    128.61 ± 10.74    97.92 ± 10.77
           156               153.95 ± 4.05     136.04 ± 8.20     143.68 ± 10.80    137.89 ± 10.69
           207               199.26 ± 4.91     170.15 ± 10.93    179.04 ± 13.47    166.02 ± 12.96
           272               254.33 ± 5.05     307.24 ± 8.04     305.88 ± 12.28    292.25 ± 10.90
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
