# Radial Wavelet Neural Network with a Novel Self-Creating Disk-Cell-Splitting Algorithm for License Plate Character Recognition


## Abstract


## 1. Introduction

## 2. Radial Wavelet Neural Network (RWNN)

where $N_w$ and $q$ are the numbers of hidden and output neurons, respectively.

## 3. Self-Creating Disk-Cell-Splitting (SCDCS) and Least Square (LS) Method Based RWNN

#### 3.1. Self-Organizing Map

- 1. Randomize the map's nodes' weight vectors **W**_{v}.
- 2. Grab an input vector **X**.
- 3. Traverse each node in the map.
  - (i) Use the Euclidean distance formula to measure the similarity between the input vector and the node's weight vector.
  - (ii) Track the node that produces the smallest distance (this node is the best matching unit, BMU).
- 4. Update the nodes in the neighborhood of the BMU by pulling them closer to the input vector:
$${\mathbf{W}}_{v}(t+1)={\mathbf{W}}_{v}(t)+\theta (v,t)\alpha (t)(\mathbf{X}(t)-{\mathbf{W}}_{v}(t))$$
- 5. Increase $t$ and repeat from Step 2 while $t<\lambda $, where λ is the limit on time iterations.
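The SOM update loop above can be sketched in a few lines. The helper name `train_som`, the Gaussian neighborhood function θ, and the linearly decaying learning rate α are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def train_som(X, n_nodes=10, dim=2, lam=1000, seed=0):
    """Minimal 1-D SOM sketch: Gaussian neighborhood, linearly decaying rate."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_nodes, dim))            # Step 1: randomize weight vectors
    for t in range(lam):                      # Step 5: iterate while t < lambda
        x = X[rng.integers(len(X))]           # Step 2: grab an input vector
        d = np.linalg.norm(W - x, axis=1)     # Step 3(i): Euclidean distances
        bmu = int(np.argmin(d))               # Step 3(ii): best matching unit
        alpha = 1.0 - t / lam                 # assumed linearly decaying rate
        idx = np.arange(n_nodes)
        # Assumed Gaussian neighborhood over node indices around the BMU
        theta = np.exp(-((idx - bmu) ** 2) / (2 * max(1.0, n_nodes / 4) ** 2))
        W += (theta * alpha)[:, None] * (x - W)   # Step 4: pull toward x
    return W
```

With inputs drawn from the unit square, the trained nodes stay inside the data's convex hull, which is the expected self-organizing behavior.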

#### 3.2. SCDCS-LS Algorithm for RWNN

- ♦ The second module: Competitive learning
- 1. Select an input **x** randomly from the input samples and find the best-matching neuron c among the neurons such that $\left|\right|{\mathbf{W}}_{c}-\mathbf{x}\left|\right|\le \left|\right|{\mathbf{W}}_{i}-\mathbf{x}\left|\right|$, $\forall i$. Increase the activation number of the winner neuron c by 1, i.e., ${\lambda}_{c}={\lambda}_{c}+1$.
- 2. Update the weight vectors of the winner neuron c and the neighboring neurons b_{j} of c by Equations (9) and (10):
$${\mathbf{W}}_{c}={\mathbf{W}}_{c}+{\alpha}_{1}(\mathbf{x}-{\mathbf{W}}_{c})$$
$${\mathbf{W}}_{{b}_{j}}={\mathbf{W}}_{{b}_{j}}+{\alpha}_{2}(\mathbf{x}-{\mathbf{W}}_{{b}_{j}}),\phantom{\rule{0.5em}{0ex}}{\alpha}_{2}<{\alpha}_{1}$$
If the number of neurons N = 2, then the neighboring neuron is the non-activated neuron, i.e., j = 1. If the number of neurons N > 2, then the neighboring neurons are the disk clockwise and counterclockwise adjacent neurons with respect to the winner neuron c, i.e., j = 1, 2.
- 3. Set the iterations $t=t+1$. If t is less than the pre-defined time value t_{max}, then go to Step 1.
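One iteration of the competitive module can be sketched as follows, using the fixed learning rates α₁ = 0.09 and α₂ = 0.0045 given in Remark A; the function names `competitive_step` and `disk_neighbors` are hypothetical helpers.

```python
import numpy as np

def disk_neighbors(c, n):
    """Neighbors of neuron c on the disk: for N = 2 the other neuron (j = 1);
    for N > 2 the clockwise and counterclockwise adjacent neurons (j = 1, 2)."""
    if n == 2:
        return [1 - c]
    return [(c - 1) % n, (c + 1) % n]

def competitive_step(W, lam, x, neighbors_of, a1=0.09, a2=0.0045):
    """Steps 1-2: find the winner c by minimum Euclidean distance, bump its
    activation count lambda_c, and pull c and its neighbors toward x."""
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # ||W_c - x|| <= ||W_i - x||
    lam[c] += 1                                        # lambda_c = lambda_c + 1
    W[c] += a1 * (x - W[c])                            # Eq. (9)
    for b in neighbors_of(c):
        W[b] += a2 * (x - W[b])                        # Eq. (10), alpha_2 < alpha_1
    return c
```

The caller runs this in a loop until $t$ reaches $t_{max}$, then hands the trained weights to the least-square module.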

- ♦ The third module: Least square method
- 4. Cluster all input data by the generated neurons and their weight vectors trained in the second module. Find all activated neurons $\left\{{c}_{k}\right\}$ and their weights ${\mathbf{W}}_{{c}_{k}}$, $k=1,2,\mathrm{...},{N}_{1}$, where ${N}_{1}$ is the number of "valid neurons" and satisfies ${N}_{1}\le N$. Determine the class radius r_{k} by the Euclidean distances between the input data and the centers (i.e., weights) of each class.
- 5. The center vectors **b**_{k} and scaling parameters d_{k} of the hidden wavelet neurons in the RWNN are determined as follows:
$${\mathbf{b}}_{k}={\mathbf{W}}_{{c}_{k}}$$
$${d}_{k}=\frac{\beta}{\sigma}{r}_{k}$$
where σ is the window radius of the wavelet function $\psi (x)$ and $\beta $ is a relaxation parameter satisfying $\beta \ge 1$.
- 6. With the determined N_{1} wavelet neurons, their center vectors ${\mathbf{b}}_{k}$, and their scaling parameters ${d}_{k}$, and according to the input and output patterns ${\mathbf{x}}^{(l)}$, ${\mathbf{f}}^{(l)}$, $l=1,2,\mathrm{...},M$, the weights ${w}_{pk}$ and bias terms ${\overline{y}}_{p}$ ($p=1,2,\mathrm{...},q$) between the hidden and output layers are solved directly by the linear optimization strategy, the LS method. Here $M$ is the number of training patterns and $q$ is the number of output neurons.
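Once the centers $\mathbf{b}_k$ and scales $d_k$ are fixed, Step 6 reduces to a linear least-squares problem. The sketch below assumes a Mexican-hat radial wavelet and hidden activations of the form $\psi(||\mathbf{x}-\mathbf{b}_k||/d_k)$; the paper's actual wavelet and argument convention may differ, and the function names are hypothetical.

```python
import numpy as np

def mexican_hat(r):
    # Assumed radial wavelet psi; the paper's choice may differ.
    return (1.0 - r ** 2) * np.exp(-r ** 2 / 2.0)

def solve_output_weights(X, F, centers, scales):
    """Step 6 sketch: fixed wavelet hidden layer, then output weights w_pk
    and biases ybar_p by linear least squares (the LS module)."""
    # Hidden activations psi(||x - b_k|| / d_k), shape (M, N1)
    R = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) / scales
    H = mexican_hat(R)
    H1 = np.hstack([H, np.ones((len(X), 1))])   # extra column for the bias term
    Wb, *_ = np.linalg.lstsq(H1, F, rcond=None)
    return Wb[:-1], Wb[-1]                      # weights w_pk, biases ybar_p
```

Because the hidden layer is fixed, no gradient iterations are needed: one `lstsq` call recovers the output layer exactly whenever the targets lie in the span of the hidden activations.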

- ♦ The fourth module: Judge the termination condition
- 7. Compute the network output ${\mathbf{y}}^{(l)}$ by (7), where ${N}_{w}={N}_{1}$, and judge the termination condition of the experiment. If it is satisfied, then stop; otherwise go to Step 8.

- ♦ The fifth module: Disk cell splitting
- 8. The neuron $\widehat{c}$ that has the largest activation number in the competitive module, i.e., (13),
$$\widehat{c}=\mathrm{arg}\mathrm{max}\left\{{\lambda}_{i}\right\}$$
splits into two new neurons. The vectors **A** and **sita**, which record the number of splitting times that each neuron went through and the argument of each neuron label point, are updated accordingly.
- 9. Initialize the weights of the newly generated neurons through the circle neighbor strategy (explained in the following "Remarks") and maintain the weights of the non-splitting neurons. Set the activation number ${\lambda}_{i}=0$ for the ith neuron, where $i=1,2,\mathrm{...},N+1$, and the iterations $t=1$.
- 10. Perform Steps 3–9 on all $N+1$ neurons with the weights initialized in Step 9.

**Remarks:**

- A. In the competitive module, ${\alpha}_{1}$ and ${\alpha}_{2}$ (${\alpha}_{2}<{\alpha}_{1}$) denote the learning rates of the winner neuron and the neighboring neurons, respectively. Either fixed values or values monotonically decreasing with the iterations are acceptable. In the experiments of this paper, we choose the fixed values ${\alpha}_{1}=0.09$ and ${\alpha}_{2}=0.0045$.
- B. In order to create an ordered topology on the unit disk, the "circle neighbor strategy" is employed to initialize the weights of new neurons. Because the neuron label points are distributed on the unit circle, the circumferential distance between two points is decided only by the angle between the corresponding two circle radii. To ensure that nearby neurons have similar weights, the strategy is executed as follows.
- ➢ Case 1: the neuron number is 2, i.e., 2 neurons split into 3 neurons. Here **R** is a random vector satisfying $\left|\right|\mathbf{R}\left|\right|\ll \left|\right|\mathbf{W}\left|\right|$.

- ➢ Case 2: the neuron number $N>2$, i.e., $N$ neurons split into $N+1$ neurons.
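The splitting step (Steps 8–9) together with the circle neighbor strategy can be sketched as follows. Placing the new label point at the mid-angle between the winner and its clockwise neighbor is an assumption, as is the helper name `split_busiest`; the new weight is the parent's weight plus a small random vector **R**, as in Case 1.

```python
import numpy as np

def split_busiest(W, lam, sita, A, rng=None, eps=1e-3):
    """Steps 8-9 sketch: the neuron with the largest activation count splits;
    the new neuron is placed next to it on the unit circle (circle neighbor
    strategy, assumed mid-angle) with the parent's weight plus a small R."""
    rng = rng or np.random.default_rng()
    c = int(np.argmax(lam))                      # Eq. (13): arg max lambda_i
    nxt = (c + 1) % len(W)
    new_angle = (sita[c] + sita[nxt]) / 2.0      # assumed mid-angle placement
    R = eps * rng.standard_normal(W.shape[1])    # ||R|| << ||W_c||
    W = np.vstack([W, W[c] + R])                 # inherit parent weight + R
    sita = np.append(sita, new_angle)
    A = np.append(A, A[c] + 1)                   # child: one more split than parent
    A[c] += 1                                    # parent also records the split
    lam = np.zeros(len(W), dtype=int)            # Step 9: reset activation numbers
    return W, lam, sita, A
```

Each call grows the network by one neuron and resets the activation counts, after which the competitive module (Steps 1–3) runs again on all $N+1$ neurons.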

## 4. License Plate Character Recognition and Results

#### 4.1. Example 1: Recognition of English Letters

When the hidden neuron number reaches ${N}_{1}$ = 26, the total success recognition rates for the training and testing sets of the SCDCS-LS based RWNN reach 99.89% and 99.76%, respectively, and beyond that point the false recognition rate no longer decreases significantly as neurons are added. By contrast, when the neuron number of the K-means-LS based RBF increases to N = 32, as shown in Figure 9, the total success recognition rates for the training and testing sets are 95.34% and 96.71%, respectively. The detailed recognition results for the testing samples by the two models, which adopt 26 and 32 hidden neurons respectively, are listed in Table 1 and Table 2.

Table 3 gives the vectors **A** and **sita** of these neurons, which record the number of splitting times that each neuron went through and the argument of each neuron label point. The comparison results of the different models are summarized in Table 4. It can be seen that the SVM algorithm achieves the highest recognition rate for the training samples but a lower recognition rate for the testing samples. The proposed SCDCS-LS based RWNN achieves higher recognition rates for both training and testing samples even though fewer hidden neurons are employed.

#### 4.2. Example 2: Recognition of Numbers or English Letters

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Bai, Y.; Hu, H.; Li, F.; Shuang, W.; Gong, L. A Recognition System of China-style License Plates Based on Mathematical Morphology and Neural Network. Int. J. Math. Models Methods Appl. Sci
**2010**, 4, 66–73. [Google Scholar] - Kim, S.K.; Kim, D.W.; Kim, H.J. A Recognition of Vehicle License Plate Using a Genetic Algorithm Based Segmentation, Proceedings of the IEEE International Conference on Image Processing, Lausanne, Switzerland, 16–19 September 1996; pp. 661–664.
- Pan, X.; Ye, X.; Zhang, S. A Hybrid Method for Robust Car Plate Character Recognition. Eng. Appl. Artif. Intell
**2005**, 18, 963–972. [Google Scholar] - Wu, C.; On, L.C.; Weng, C.H.; Kuan, T.S.; Ng, K. A Macao License Plate Recognition System, Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August 2005; pp. 4506–4510.
- Cheng, R.; Bai, Y. A Novel Approach for License Plate Slant Correction, Character Segmentation and Chinese Character Recognition. Int. J. Signal Process. Image Process. Pattern Recognit
**2014**, 7, 353–364. [Google Scholar] - Foggia, P.; Sansone, C.; Tortorella, F.; Vento, M. Character Recognition by Geometrical Moments on Structural Decompositions, Proceedings of the Fourth International Conference on Document Analysis and Recognition, Ulm, Germany, 18–20 August 1997; pp. 6–10.
- Koval, V.; Turchenko, V.; Kochan, V.; Sachenko, A.; Markowsky, G. Smart License Plate Recognition System Based on Image Processing Using Neural Network, Proceedings of IEEE the Second International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, Lviv, Ukraine, 8–10 September 2003; pp. 123–127.
- Trier, Ø.D.; Jain, A.K.; Taxt, T. Feature Extraction Methods for Character Recognition-A Survey. Pattern Recognit
**1996**, 29, 641–662. [Google Scholar] - Anagnostopoulos, C.N.E.; Anagnostopoulos, I.E.; Loumos, V.; Kayafas, E. A License Plate-Recognition Algorithm for Intelligent Transportation System Applications. IEEE Trans. Intell. Transp. Syst
**2006**, 7, 377–392. [Google Scholar] - Kocer, H.E.; Cevik, K.K. Artificial Neural Networks Based Vehicle License Plate Recognition. Procedia Comput. Sci
**2011**, 3, 1033–1037. [Google Scholar] - Frank, R.J.; Davey, N.; Hunt, S.P. Time Series Prediction and Neural Networks. J. Intell. Robot. Syst
**2001**, 31, 91–103. [Google Scholar] - Lotric, U.; Dobnikar, A. Predicting Time Series Using Neural Networks with Wavelet-Based Denoising Layers. Neural Comput. Appl
**2005**, 14, 11–17. [Google Scholar] - Setiono, R.; Liu, H. Feature Extraction via Neural Networks. In Feature Extraction, Construction and Selection: A Data Mining Perspective; The Springer International Series in Engineering and Computer Science, Volume 453; Springer: New York, NY, USA, 1998; pp. 191–204. [Google Scholar]
- Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image Processing with Neural Networks—A Review. Pattern Recognit
**2002**, 35, 2279–2301. [Google Scholar] - Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
- Balasubramanian, M.; Palanivel, S.; Ramalingam, V. Real Time Face and Mouth Recognition Using Radial Basis Function Neural Networks. Expert Syst. Appl
**2009**, 36, 6879–6888. [Google Scholar] - Zhang, G.P. Neural Networks for Classification: A Survey. IEEE Trans. Syst. Man Cybern
**2000**, 30, 451–462. [Google Scholar] - Ghate, V.N.; Dudul, S.V. Optimal MLP Neural Network Classifier for Fault Detection of three Phase Induction Motor. Expert Syst. Appl
**2010**, 37, 3468–3481. [Google Scholar] - Zhang, Q.; Benveniste, A. Wavelet Networks. IEEE Trans. Neural Netw
**1992**, 3, 889–898. [Google Scholar] - Zhang, J.; Walter, G.G.; Miao, Y.; Lee, W.N.W. Wavelet Neural Networks for Function Learning. IEEE Trans. Signal Process
**1995**, 43, 1485–1497. [Google Scholar] - Billings, S.A.; Wei, H.L. A New Class of Wavelet Networks for Nonlinear System Identification. IEEE Trans. Neural Netw
**2005**, 16, 862–874. [Google Scholar] - Zhang, Q. Using Wavelet Network in Nonparametric Estimation. Available online: https://hal.inria.fr/inria-00074353/document (accessed on 9 June 2015).
- Bodyanskiy, Y.; Vynokurova, O. Hybrid Adaptive Wavelet-Neuro-Fuzzy System for Chaotic Time Series Identification. Inf. Sci
**2013**, 220, 170–179. [Google Scholar] - Daubechies, I. Ten Lectures on Wavelets; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992. [Google Scholar]
- Mallat, S.G. A Theory for Multi-resolution Signal Decomposition: The Wavelet Representation. IEEE Trans. Pattern Anal. Mach. Intell
**1989**, 11, 674–693. [Google Scholar] - Grossmann, A.; Morlet, J. Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape. SIAM J. Math. Anal
**1984**, 15, 723–736. [Google Scholar] - Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
- Kohonen, T. Self-Organizing Maps; Springer: Berlin, Germany, 2001. [Google Scholar]
- Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Disc
**1998**, 2, 121–167. [Google Scholar] - Chang, C.C.; Lin, C.J. LIBSVM: A Library for Support Vector Machines. ACM Trans. Intell. Syst. Technol
**2011**, 2, 1–27. [Google Scholar]

**Figure 8.** (**a**) Original character figure (16×16); (**b**) the approximation components of the 2-level wavelet decomposition (4×4).
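The 16×16 → 4×4 feature extraction illustrated in Figure 8 can be approximated as below, assuming a Haar-like decomposition in which each level's approximation component is a 2×2 average; the paper's actual wavelet basis may differ, and `approx_2level` is a hypothetical helper name.

```python
import numpy as np

def approx_2level(img):
    """2-level approximation sketch: each level halves both dimensions by
    Haar-style 2x2 averaging, so a 16x16 character yields a 4x4 feature map."""
    a = np.asarray(img, dtype=float)
    for _ in range(2):
        # Average each non-overlapping 2x2 block (low-pass in both directions)
        a = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
    return a
```

The flattened 4×4 map gives the 16-dimensional input vector that matches the 16-... network structures listed in the tables below.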

**Figure 14.** The disk distribution map of the 39 neurons obtained by SCDCS-LS for number or English letter samples.

**Table 1.** Recognition results of the SCDCS-LS based RWNN for testing samples (Example 1, hidden neuron number N_{w} = 26).

| Structure | 16-26-23 | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| Character | A | B | C | D | E | F | G | H | J |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 20 | 20 | 20 | 20 | 20 | 19 | 20 | 20 | 20 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | K | L | M | N | P | Q | R | S | U |
| Number of testing samples | 20 | 20 | 19 | 16 | 13 | 20 | 20 | 20 | 12 |
| Number of success recognized samples | 20 | 20 | 19 | 16 | 13 | 20 | 20 | 20 | 12 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Character | V | W | X | Y | Z | | | | |
| Number of testing samples | 11 | 20 | 20 | 20 | 14 | Total number of testing samples | 425 | | |
| Number of success recognized samples | 11 | 20 | 20 | 20 | 14 | Total number of success recognized samples | 424 | | |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | Total success recognition rate | 99.76% | | |

**Table 2.** Recognition results of the K-means-LS based RBF for testing samples (Example 1, hidden neuron number N_{w} = 32).

| Structure | 16-32-23 | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| Character | A | B | C | D | E | F | G | H | J |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% |
| Character | K | L | M | N | P | Q | R | S | U |
| Number of testing samples | 20 | 20 | 19 | 16 | 13 | 20 | 20 | 20 | 12 |
| Number of success recognized samples | 20 | 19 | 19 | 16 | 0 | 20 | 20 | 20 | 12 |
| Success recognition rate | 100% | 95% | 100% | 100% | 0% | 100% | 100% | 100% | 100% |
| Character | V | W | X | Y | Z | | | | |
| Number of testing samples | 11 | 20 | 20 | 20 | 14 | Total number of testing samples | 425 | | |
| Number of success recognized samples | 11 | 20 | 20 | 20 | 14 | Total number of success recognized samples | 411 | | |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | Total success recognition rate | 96.71% | | |

**Table 3.** Vectors **A** and **sita** of the neurons in Figure 11, which record the number of splitting times that each neuron went through and the argument of each neuron label point (neuron number N = 27, valid neuron number N_{1} = 26, ★ denotes the invalid neuron).

| Neuron label | 1 | 2 | 3 | 4 | 5★ | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| A | 3 | 4 | 4 | 4 | 4 | 6 | 7 | 7 | 6 |
| sita | −2.945 | −2.651 | −2.454 | −2.258 | −2.062 | −1.939 | −1.902 | −1.878 | −1.841 |
| Neuron label | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
| A | 6 | 4 | 3 | 5 | 5 | 4 | 5 | 6 | 7 |
| sita | −1.792 | −1.669 | −1.374 | −1.129 | −1.031 | −0.884 | −0.736 | −0.663 | −0.626 |
| Neuron label | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 |
| A | 7 | 4 | 3 | 2 | 3 | 3 | 3 | 3 | 2 |
| sita | −0.601 | −0.491 | −0.196 | 0.393 | 0.982 | 1.374 | 1.767 | 2.160 | 2.749 |

**Table 4.** Comparison of the recognition results of different models (Example 1).

| Model | Number of hidden neurons | Success recognition rate (training) | Success recognition rate (testing) |
|---|---|---|---|
| GD based BP [1,7] | 32 | 94.57% | 95.53% |
| K-means-LS based RBF [27] | 32 | 95.34% | 96.71% |
| SVM [29,30] | – | 100% | 98.59% |
| SCDCS-LS based RWNN | 26 | 99.89% | 99.76% |

**Table 5.** Recognition results of the SCDCS-LS based RWNN for testing samples (Example 2, hidden neuron number N_{w} = 38).

| Structure | 16-38-33 | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| Character | A | B | C | D | E | F | G | H | J |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 20 | 19 | 20 | 20 | 20 | 19 | 20 | 20 | 20 |
| Success recognition rate | 100% | 95% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | K | L | M | N | P | Q | R | S | U |
| Number of testing samples | 20 | 20 | 19 | 16 | 13 | 20 | 20 | 20 | 12 |
| Number of success recognized samples | 20 | 20 | 19 | 16 | 13 | 19 | 20 | 20 | 12 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | V | W | X | Y | Z | 0 | 1 | 2 | 3 |
| Number of testing samples | 11 | 20 | 20 | 20 | 14 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 11 | 20 | 20 | 20 | 14 | 19 | 20 | 19 | 20 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 95% | 100% | 95% | 100% |
| Character | 4 | 5 | 6 | 7 | 8 | 9 | | | |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | Total number of testing samples | 625 | |
| Number of success recognized samples | 20 | 20 | 20 | 20 | 20 | 20 | Total number of success recognized samples | 620 | |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 100% | Total success recognition rate | 99.20% | |

**Table 6.** Recognition results of the K-means-LS based RBF for testing samples (Example 2, hidden neuron number N_{w} = 44).

| Structure | 16-44-33 | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| Character | A | B | C | D | E | F | G | H | J |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 20 | 0 | 20 | 20 | 20 | 19 | 20 | 20 | 20 |
| Success recognition rate | 100% | 0% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | K | L | M | N | P | Q | R | S | U |
| Number of testing samples | 20 | 20 | 19 | 16 | 13 | 20 | 20 | 20 | 12 |
| Number of success recognized samples | 20 | 19 | 19 | 16 | 13 | 19 | 20 | 20 | 12 |
| Success recognition rate | 100% | 95% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | V | W | X | Y | Z | 0 | 1 | 2 | 3 |
| Number of testing samples | 11 | 20 | 20 | 20 | 14 | 20 | 20 | 20 | 20 |
| Number of success recognized samples | 11 | 20 | 20 | 20 | 14 | 19 | 20 | 20 | 20 |
| Success recognition rate | 100% | 100% | 100% | 100% | 100% | 95% | 100% | 100% | 100% |
| Character | 4 | 5 | 6 | 7 | 8 | 9 | | | |
| Number of testing samples | 20 | 20 | 20 | 20 | 20 | 20 | Total number of testing samples | 625 | |
| Number of success recognized samples | 20 | 18 | 20 | 20 | 20 | 20 | Total number of success recognized samples | 599 | |
| Success recognition rate | 100% | 90% | 100% | 100% | 100% | 100% | Total success recognition rate | 95.84% | |

**Table 7.** Vectors **A** and **sita** of the neurons in Figure 14, which record the number of splitting times that each neuron went through and the argument of each neuron label point (neuron number N = 39, valid neuron number N_{1} = 38, ★ denotes the invalid neuron).

| Neuron label | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8★ | 9 |
|---|---|---|---|---|---|---|---|---|---|
| A | 3 | 3 | 2 | 3 | 5 | 5 | 4 | 3 | 3 |
| sita | −2.945 | −2.553 | −1.963 | −1.374 | −1.129 | −1.031 | −0.884 | −0.589 | −0.196 |
| Neuron label | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
| A | 2 | 5 | 7 | 8 | 8 | 9 | 10 | 10 | 8 |
| sita | 0.393 | 0.834 | 0.896 | 0.914 | 0.927 | 0.936 | 0.940 | 0.943 | 0.951 |
| Neuron label | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 |
| A | 8 | 9 | 10 | 13 | 13 | 12 | 12 | 13 | 13 |
| sita | 0.963 | 0.973 | 0.977 | 0.979 | 0.979 | 0.980 | 0.981 | 0.981 | 0.982 |
| Neuron label | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 |
| A | 7 | 7 | 6 | 5 | 3 | 3 | 3 | 5 | 5 |
| sita | 0.994 | 1.019 | 1.055 | 1.129 | 1.374 | 1.767 | 2.160 | 2.405 | 2.503 |
| Neuron label | 37 | 38 | 39 | | | | | | |
| A | 4 | 4 | 4 | | | | | | |
| sita | 2.651 | 2.847 | 3.043 | | | | | | |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Cheng, R.; Bai, Y.; Hu, H.; Tan, X.
Radial Wavelet Neural Network with a Novel Self-Creating Disk-Cell-Splitting Algorithm for License Plate Character Recognition. *Entropy* **2015**, *17*, 3857-3876.
https://doi.org/10.3390/e17063857
