A Deep Learning Approach to Organic Pollutants Classification Using Voltammetry

This paper proposes a deep learning technique for accurate detection and reliable classification of organic pollutants in water. The pollutants are detected by means of cyclic voltammetry characterizations performed with low-cost disposable screen-printed electrodes. The paper demonstrates the possibility of strongly improving the detection capability of such platforms by modifying them with nanomaterials. The classification is addressed by using a deep learning approach with convolutional neural networks. To this end, the results of the voltammetry analysis are transformed into equivalent RGB images by means of Gramian angular field transformations. The proposed technique is applied to the detection and classification of hydroquinone and benzoquinone, which are particularly challenging since these two pollutants have a similar electroactivity and thus their voltammetry curves exhibit overlapping peaks. The modification of the electrodes with carbon nanotubes improves the sensitivity by a factor of about 25, whereas the convolutional neural network applied after the Gramian transformation correctly classifies 100% of the experiments.


Introduction
Fast and reliable detection and classification of organic pollutants in water is a major challenge in today's society, addressing the goals of sustainable development [1]. Classical and well-assessed techniques such as those based on chromatographic and spectrophotometric analysis are known to provide reliable detection and classification (e.g., [2][3][4]). However, these methods require in-presence sample processing and a high consumption of reagents and time. For these reasons, attention has recently been paid to electrochemical methods such as voltammetry (VA), with disposable screen-printed electrodes (SPEs) as sensing platforms. Indeed, these approaches have been shown to provide good sensing performance [5][6][7], while enabling the possibility of moving the analysis from the lab to in situ [8]. Furthermore, improved performance obtained by modifying SPEs with nanocarbon materials (e.g., carbon nanotube or graphene nanoplatelet films [9][10][11]) was recently demonstrated.
Cyclic voltammetry (CV) is undoubtedly the most widely used technique for acquiring qualitative information about the electrochemical behavior and characterization of a target analyte, and it is often the first experiment performed in an electroanalytical study. CV is a powerful tool for the rapid determination of formal potentials and for the detection of chemical reactions preceding or following the electron transfer process. An example of a CV electrochemical characterization (Figure 1) using hydroquinone as an elective compound is carried out using several dilutions (Figure 1b) of this compound to build the calibration curve (Figure 1c) and to assess the sensitivity of the SPE. The analyte solution (80 µL) is dropped onto the WE, whose electrical potential V (with respect to the reference electrode) is linearly cycled n times between a maximum and a minimum value. The resulting electrical current I, flowing through the working and counter electrodes, is measured, leading to the final I-V curves with a characteristic duck-shaped plot profile (see Figure 1b).
Studying the current (I) vs. potential (V) voltammogram allows the detection of a specific analyte in solution, since its electrochemical properties determine the specific redox potential profile of the compound, which can be used to identify it in a mixture of real samples. In Figure 1b, an example is given referring to the detection of HQ in fortified water (adding a known amount of HQ to a fixed volume of water), with a characteristic potential profile (two reduction peaks and one oxidation peak). In addition to the classification of the analyte, the I-V curves obtained at different concentration levels also allow "calibrating" the sensor, as shown in Figure 1c. Specifically, the oxidation peak current of each cycle can be related to the pollutant concentration value, giving the response in Figure 1c. In this case, the calibration curve is quite linear in the range of interest (fitting parameters are provided in the inset).
The Randles-Sevcik equation [14] can be used to extrapolate important analytical parameters from the peak current value, e.g., the electrode surface area (A) and the diffusion coefficient (D0):

I_p = 0.4463 n F A C (n F v D0 / (R T))^(1/2)

where v is the scan rate (V/s), n is the number of electrons involved in the process, C is the analyte concentration, F is the Faraday constant (C/mol), T is the temperature (K), and R is the universal gas constant (J/(K mol)). The limits of CV characterization, discussed in the introduction, can lead to poor sensitivity (especially at low concentrations) and/or unacceptable selectivity. An example is provided in Figure 2, which reports the measured peak potentials and the corresponding current values related to the peaks of the cyclic voltammograms obtained with HQ at a concentration of 5 mM with different platforms: bare SPEs (green), MWCNT SPEs (blue), and SWCNT SPEs (red). It can be highlighted that changing the platform (based on SPE) leads to a huge variation in the measured potential and current peaks.
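As an illustrative aside, the Randles-Sevcik relation can be evaluated numerically. The sketch below assumes the common electrochemical unit convention (A in cm², C in mol/cm³, D0 in cm²/s); the numerical values in the test are hypothetical and serve only as a sanity check:

```python
import math

F = 96485.332  # Faraday constant (C/mol)
R = 8.314462   # universal gas constant (J/(K mol))

def randles_sevcik_peak_current(n, area_cm2, conc_mol_cm3, d0_cm2_s,
                                scan_rate_v_s, temp_k=298.15):
    """Peak current (A) predicted by the Randles-Sevcik equation
    for a reversible electron-transfer process."""
    return (0.4463 * n * F * area_cm2 * conc_mol_cm3
            * math.sqrt(n * F * scan_rate_v_s * d0_cm2_s / (R * temp_k)))
```

Because the peak current scales with the square root of the scan rate, quadrupling v doubles the predicted I_p, which is a quick consistency check when analyzing CV data.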
In addition, even with the same platform (i.e., same colors in Figure 2), a sensible spread of the values (low reproducibility in terms of RSD) is observed from one measurement to another. Similar behavior is also obtained for the other pollutant (BQ). This high variability of the data, together with the similar behavior of the CV responses to the two pollutants, makes their resolution with these kinds of techniques extremely difficult.
A way to address the first issue is based on the modification of the SPE using carbon nanomaterials, as briefly reviewed in the following subsection. The second issue will be addressed using the machine learning approach discussed in Section 3.

Carbon Nanotube Modified Platform
To improve the CV response of the SPE, the WE was modified with CNT films. Single-walled CNTs (Heji Inc., Hong Kong) and multiwalled CNTs (Nanointegris Technology Inc., Boisbriand, QC, Canada) were used. CNT thin films were produced using the filtration method. Briefly, 0.2 mg of CNT powder was dispersed for 1 h in a 1% aqueous solution of sodium dodecyl sulfate in a 200 W ultrasonic bath at 44 kHz. To remove CNT agglomerates, the suspension was centrifuged at 8000× g for 20 min and then filtered through a cellulose-ester membrane (Millipore, 0.22 µm pore size). As a result, the SWCNTs and MWCNTs form thin films (0.2 and 1 µm thick, respectively) on top of the filter paper. To remove the surfactant, the CNT films on the filter paper were washed at 80 °C and then dried in air overnight.
To transfer them onto the SPEs, the CNT films on the filter paper were put into water. The water was substituted with acetone to dissolve the filter paper, and the acetone was then replaced with ethanol. After the ethanol was removed, the wet films were transferred onto the WE and dried, finally ready to be used.
The images obtained with scanning electron microscopy (SEM, see Figure 3) demonstrate a larger density of SWCNT than MWCNT film.

It is assumed that the large density of the SWCNT film prevented a complete removal of the surfactant from its surface. Raman spectra of the CNTs (see Figure 4) show a low (0.08) and a high (0.86) ratio of D-mode to G-mode intensities for the SWCNT and MWCNT films, respectively. This indicates a low density of defects in the crystalline structure of the SWCNTs compared to that of the MWCNTs.

Characterization of Hydroquinone and Benzoquinone and Dataset Generation
The characterization of HQ and BQ was carried out by using both bare and modified SPEs. Potassium ferri/ferrocyanide [Fe(CN)6]4−/3− (hereafter PF) was also analyzed as a reference electroactive species. All reagents from commercial sources were of analytical grade. Potassium ferri/ferrocyanide, p-benzoquinone, and hydroquinone were purchased from Sigma-Aldrich (Steinheim, Germany). The buffer solution used is a 0.05 M phosphate buffer saline (PBS) with 0.1 M KCl, pH = 7.4.
CV was performed using a PalmSens4™ portable potentiostat (PalmSens, Houten, The Netherlands) as the analytical tool and in-house produced screen-printed electrodes (SPEs) as transducers.
An experiment of simultaneous determination of the three species (BQ, HQ, and PF) by CV was carried out at a concentration of 10 mM using bare electrodes. The results are shown in Figure 5, which reports the combined cycles (Figure 5a) and the separate ones (Figure 5b-d). These results evidence the challenge of correctly classifying the compounds.
The generated dataset refers to the CV characterization of HQ, BQ, and PF in water solution, with three different platforms: bare, SWCNT, and MWCNT modified SPEs. Three cycles were measured for each combination of compounds/platform/concentration, thus obtaining a dataset with a total of 291 experimental CV curves.
Examples of CV cycles are provided in Figure 6 for HQ, characterized by bare and CNT-modified SPEs. Table 1 quantifies the main performance parameters, such as the limit of detection (LOD) and reproducibility (RSD%). These results highlight the improved performance obtained once the SPEs are modified with carbon nanotubes, as the modification results in a much lower LOD (by more than one order of magnitude for MWCNT) and better reproducibility (lowering the RSD% by a factor of two).


Gramian Angular Fields Transformations
The first step of the proposed approach consists of mapping any I-V cycle obtained by CV into an equivalent red-green-blue (RGB) image. To this end, the Gramian angular fields (GAF) transformation was used [35], which is briefly recalled here. Starting from a time series X = {x_1, x_2, ..., x_n} of n real-valued observations, the vector X is normalized in such a way that all its values fall into the interval [−1, 1]:

x̃_i = [(x_i − max(X)) + (x_i − min(X))] / (max(X) − min(X))

The re-scaled vector is represented in polar coordinates by associating each value to the angular cosine, φ_i = arccos(x̃_i), and the corresponding time instant to the radius, r_i = t_i/N. Here, t_i is the time stamp and N is a constant factor that regularizes the span of the polar coordinate system.

This transformation has two important properties: (i) it is bijective, as cos(φ) is monotonic when φ ∈ [0, π]; (ii) as opposed to Cartesian coordinates, polar coordinates preserve absolute temporal relations.
The Gramian summation angular field (GASF) and Gramian difference angular field (GADF) are defined as follows:

GASF = [cos(φ_i + φ_j)],  GADF = [sin(φ_i − φ_j)]   (4)

By using the above transformations, each I-V cycle was mapped into an image, as shown in Figure 7. Specifically, the GASF related to the potential was transformed into the red (R) color plane, and the GADF and GASF related to the current into the green (G) and blue (B) planes, respectively. Examples of generated RGB images are provided in Figure 8, referring to the detection of BQ in water solution.

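The mapping described above can be sketched in a few lines of NumPy; the 8-bit [0, 255] rescaling of the color planes is our assumption, since the paper does not state the pixel encoding:

```python
import numpy as np

def gaf(x):
    """Return (GASF, GADF) matrices for a 1D series, per Eq. (4)."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1]; clip guards against floating-point rounding.
    x_t = ((x - x.max()) + (x - x.min())) / (x.max() - x.min())
    phi = np.arccos(np.clip(x_t, -1.0, 1.0))  # polar angle of each sample
    gasf = np.cos(phi[:, None] + phi[None, :])
    gadf = np.sin(phi[:, None] - phi[None, :])
    return gasf, gadf

def cycle_to_rgb(v, i):
    """Map one I-V cycle to an RGB image:
    R = GASF(potential), G = GADF(current), B = GASF(current)."""
    gasf_v, _ = gaf(v)
    gasf_i, gadf_i = gaf(i)
    rgb = np.stack([gasf_v, gadf_i, gasf_i], axis=-1)
    # Entries lie in [-1, 1]; rescale to 8-bit pixels (assumed convention).
    return ((rgb + 1.0) * 127.5).astype(np.uint8)
```

Note that the GADF is antisymmetric with a zero diagonal, while the GASF is symmetric, so the two planes carry complementary information about the cycle.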

Deep Learning Model
Once all the 291 cycles have been mapped into 291 RGB images, a suitable deep neural network (DNN) can be trained to classify them [27].
In this paper, we adopted a convolutional neural network (CNN), a category of DNN inspired by biological processes [38,39] as the model of connectivity between neurons recalls the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a narrow visual field region known as the receptive field. The receptive fields of different neurons partially overlap to cover the entire visible area.
CNNs use relatively little preprocessing compared to other image classification algorithms: the network learns to optimize its filters (or kernels) through training, whereas in traditional algorithms these filters are designed manually. This independence from prior knowledge and human intervention in feature extraction is a great advantage of such an approach. Table 2 reports the characteristics of the relatively simple CNN designed for our purpose. The CNN contains five blocks, each with a 2D convolution (kernel size 3 × 3) and a max-pooling (stride of 2), followed by two fully connected layers: a hidden layer of size 64 with a dropout of 0.5, and an output layer of size 3. Specifically, Table 2 reports the number of parameters layer by layer, for a total of 51,799. The dropout is used to reduce overfitting on the training set.
This model was chosen after a preliminary experimental phase whose goal was to minimize the complexity of the network. The selected network has a straightforward structure with a sequence of convolutional and max-pooling layers, a flattening layer, and two dense, fully connected layers for the classification, with a dropout to limit overfitting on the training set.

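A minimal Keras sketch consistent with this description follows. The input image size and the per-block filter counts are assumptions (Table 2 is not reproduced here), so the total parameter count will not exactly match the 51,799 reported:

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 3), n_classes=3):
    """Five conv/max-pool blocks followed by two fully connected layers."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for n_filters in (8, 16, 32, 32, 64):  # assumed filter counts
        model.add(layers.Conv2D(n_filters, (3, 3),
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))   # hidden layer of size 64
    model.add(layers.Dropout(0.5))                   # limits overfitting
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With five max-pooling stages, a 128 × 128 input is reduced to a 4 × 4 feature map before the flattening layer, keeping the dense head small.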

Dataset
As described in Section 3, the dataset generated by the CV experiments contains a total of 291 experimental points, referring to the characterization of PF, HQ, and BQ in water by means of three different platforms: bare, SWCNT-, and MWCNT-modified SPEs. Table 3 provides the complete description of the dataset. The 291 available experiments were randomly assigned to the training, validation, and test sets in a ratio of 75% (227 images), 10% (29 images), and 15% (35 images), respectively.
To increase the statistical significance of the experiments, this subdivision was randomly repeated five times (5-fold cross-validation), obtaining a different train/validation/test split each time. For each of these subdivisions, a model was trained and evaluated.
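The repeated random subdivision can be sketched as follows; the per-fold seeding is an assumption, since the paper does not specify how the random splits were generated:

```python
import random

def random_splits(n_samples=291, sizes=(227, 29, 35), n_folds=5):
    """Yield (train, val, test) index lists for each of n_folds repetitions."""
    assert sum(sizes) == n_samples
    n_train, n_val, _ = sizes
    for fold in range(n_folds):
        idx = list(range(n_samples))
        random.Random(fold).shuffle(idx)  # a different shuffle per fold
        yield (idx[:n_train],
               idx[n_train:n_train + n_val],
               idx[n_train + n_val:])
```

Each fold partitions all 291 samples into three disjoint sets, so every image is used exactly once per repetition.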

Results and Discussion
The result of applying the GAF transformation to the dataset is an impressive stabilization of the pattern associated with any single test (detection of a given substance at a given concentration). This is a consequence of a dramatic reduction of the uncertainty related to using different platforms at different times for a given test. To show this, let us refer to the test case analyzed in Figure 2, i.e., the CV response associated with the detection of HQ at a concentration of 5 mM.
Once all the I-V curves related to this case are transformed into equivalent RGB images by using GAF, we can compute, for each pixel and each color, the average and standard deviation values over the experiments available for the above test case. These values are shown in Table 4 for each color: the STD/average ratio ranges from 0.23% to 10.5%. The subsequent deep learning processing was performed using an Intel Core i7-7700 CPU @ 3.60 GHz with 256 GB of RAM and a Titan Xp GPU. As a deep learning framework, we used Keras version 2.4.0 with TensorFlow version 2.4.0 as the back end [40,41].
During the training of the network, the loss was evaluated on the validation set at the end of each epoch to save the model with the best performance, avoiding overfitting. An early stopping policy with a patience of 100 epochs was implemented to stop the learning phase if the validation loss did not improve for some time. Figures 9 and 10 report examples of the accuracy and loss curves during the training phase on the first fold, evaluated on the training and validation sets at the end of each epoch.
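The checkpoint-and-early-stopping policy described above can be sketched framework-agnostically; here `evaluate_val_loss` is a hypothetical stand-in for one training epoch followed by a validation pass:

```python
def train_with_early_stopping(evaluate_val_loss, max_epochs=2000, patience=100):
    """Generic early-stopping loop: remember the epoch with the best
    validation loss and stop after `patience` epochs without improvement."""
    best_loss, best_epoch = float("inf"), -1
    for epoch in range(max_epochs):
        val_loss = evaluate_val_loss(epoch)
        if val_loss < best_loss:
            # This is where the best model would be checkpointed to disk.
            best_loss, best_epoch = val_loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop training
    return best_epoch, best_loss
```

In Keras, the same behavior is obtained by combining the `EarlyStopping` (with `patience=100`) and `ModelCheckpoint` (with `save_best_only=True`) callbacks.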
The entire training phase on a Dell laptop with a Core i7 processor, 32 GB of RAM, and an NVIDIA 3060 GPU with 6 GB of dedicated memory took around 1 h to reach convergence at epoch 1200.
Global performances were evaluated in terms of accuracy (5) and confusion matrix (CM).

Figure 10. Evolution with the epochs of the loss during the training phase on the training set (red dotted curve) and validation set (blue solid curve) for the first fold.
In these curves, it is possible to observe that the loss on the validation set is lower than the loss on the training set and, at the same time, that the accuracy on the validation set is greater than the accuracy on the training set. This can happen in the presence of a dropout layer: during training, a percentage of the features is set to zero (with 50% probability, since we adopted a dropout of 0.5), whereas during validation all features are used; consequently, the model is more robust at validation time, leading to higher accuracy.
In the loss curve (Figure 10), a plateau is reached at epoch 1100; after this epoch, the loss evaluated on the validation set does not decrease for at least 100 epochs, thus stopping the training phase. The model saved at epoch 1100 became the best model found and was then used during the test phase.

Accuracy = (Correctly Classified Samples) / (Total Samples)   (5)

A CM summarizes the results of the testing phase on the different classes. The CMij element represents the percentage of elements labeled as class i and predicted as class j. The ideal case is represented by a diagonal matrix, which means that all samples of each class were correctly classified. Furthermore, the global accuracy can be evaluated as the ratio between the CM trace (the sum of the correctly classified samples) and the sum of all CM values.
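These two definitions can be sketched directly; this is a minimal version for integer class labels:

```python
def confusion_matrix(y_true, y_pred, n_classes=3):
    """CM[i][j] counts samples labeled as class i and predicted as class j."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

def accuracy(cm):
    """Global accuracy: CM trace divided by the sum of all CM entries."""
    trace = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return trace / total
```

A fully diagonal matrix therefore corresponds to an accuracy of exactly 1.0.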
The results of the 5-fold repetitions of the CNN training are reported in Figure 11 in terms of mean values.
It can be noticed that the confusion matrix is fully diagonal, hence indicating that the proposed network achieves an accuracy of 100% in all five repetitions of the experiments. Consequently, Figure 11 does not report the standard deviation across the experiments, given that its value is equal to zero.

Figure 11. Confusion matrix evaluated on the test set.

Figure 12 reports the global overview of the system after its training: the input is the entire CV curve, this curve is transformed via GAF into an RGB image, and the trained CNN classifies the image into one of three classes (BQ, HQ, or PF).
The entire classification process on a standard machine, such as a laptop with an 11th-generation Core i7, takes less than 1 s, as reported in Figure 12. This time is adequate for the online detection of these substances.

Conclusions
The paper proposes a further step toward the realization of a low-cost AI-based embedded sensor for the detection and classification of organic pollutants in water. The proposed solution is based on suitable screen-printed electrodes connected to a measurement micro-platform capable of performing cyclic voltammetry tests and embedding an innovative deep-learning algorithm for classification and detection. In detail, the paper mainly focuses on the optimization of the classification task, which is executed with convolutional neural networks. The main novelty of the paper is the innovative use of Gramian angular field transformations to transform the data coming from voltammetry tests into suitable RGB images. To demonstrate the effectiveness of the solution, two challenging pollutants, i.e., hydroquinone and benzoquinone, which have very similar electroactivity and consequently very similar voltammetric footprints, were considered. In addition, results coming from different types of screen-printed electrodes were considered. In this way, the obtained results are not related to a specific sensor but are a feature of the proposed platform.
The obtained results show that this preliminary conditioning of the measurement information greatly improves the performance of the convolutional neural network, allowing a classification accuracy of 100% to be reached.