# Taylor’s Law in Innovation Processes


## Abstract


## 1. Introduction

## 2. The Urn Model with Triggering

- if the color of the extracted ball is a new one (i.e., it appears for the first time in $\mathcal{S}$ and is thus the realization of a novelty), then we add $\tilde{\rho}$ balls of the same color plus $\nu +1$ distinct balls of new colors not yet present in the urn; note that we use the word *new* in two different senses: on the one hand, we refer to events that occur for the first time; on the other, to new colors that enter the space $\mathcal{S}$ of events;
- if the color of the extracted ball is already present in $\mathcal{S}$, we add $\rho $ balls of the same color.
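The two rules above can be sketched as a direct simulation. This is a minimal illustration, not the authors' code: the function name and signature are ours, the urn is a plain Python list of integer colors, and the ball drawn is not physically removed (returning it plus $\rho$ or $\tilde{\rho}$ copies has the same net effect).

```python
import random

def urn_with_triggering(T, rho, rho_tilde, nu, N0, seed=0):
    """Sketch of the urn model with triggering.

    Returns the trajectory D_t = number of distinct colors observed in
    the sequence S after t draws. Colors are integers 0, 1, 2, ...
    """
    rng = random.Random(seed)
    urn = list(range(N0))            # N0 initial balls, all of distinct colors
    next_color = N0
    seen = set()                     # colors already realized in S
    trajectory = []
    for _ in range(T):
        ball = rng.choice(urn)       # uniform draw = proportional to counts
        if ball not in seen:         # a novelty: first occurrence in S
            seen.add(ball)
            urn.extend([ball] * rho_tilde)                       # rho_tilde same-color balls
            urn.extend(range(next_color, next_color + nu + 1))   # nu + 1 brand-new colors
            next_color += nu + 1
        else:                        # an old color
            urn.extend([ball] * rho)
        trajectory.append(len(seen))
    return trajectory
```

Running many independent realizations of this sketch and recording the mean and standard deviation of $D_t$ is how the Taylor's-law curves discussed below can be reproduced empirically.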

#### Values of the Model Parameters

## 3. Triangular Urn Schemes and Innovation Rate

- if the color of the extracted ball is black, we replace the extracted ball with a white ball and add $\tilde{\rho}$ white balls plus $\nu +1$ black balls;
- if the color of the extracted ball is white, we return the extracted ball to the urn together with $\rho $ additional white balls.
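Since $D_t$ equals the number of black extractions up to time $t$, the two-color scheme can be simulated by tracking only the two counts. A minimal sketch under our own naming (black balls stand for not-yet-realized novelties):

```python
import random

def triangular_urn(T, rho, rho_tilde, nu, B0, seed=0):
    """Two-color (black/white) triangular urn scheme.

    Returns the trajectory of D_t, the number of black extractions
    (novelties) up to time t, starting from B0 black balls.
    """
    rng = random.Random(seed)
    black, white = B0, 0
    D = []
    for _ in range(T):
        if rng.random() < black / (black + white):
            # black drawn: replace it with a white ball (+1 white),
            # add rho_tilde white balls and nu + 1 black balls
            black += nu                  # net: -1 + (nu + 1)
            white += 1 + rho_tilde
            D.append((D[-1] if D else 0) + 1)
        else:
            # white drawn: return it with rho additional white balls
            white += rho
            D.append(D[-1] if D else 0)
    return D
```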

**(Case $\mathbf{0<\nu<\rho}$)** We have
$$t^{-\nu /\rho}\,D_{t}\stackrel{a.s.}{\longrightarrow}D,\tag{6}$$
where the limit random variable $D$ has density
$$f\left(x\right)=c\,x^{N_{0}/\nu}f_{ML}\left(x\right)\qquad \mathrm{for}\ x>0,$$
and moments
$$\mu [D^{q}]=\frac{\mathsf{\Gamma}(N_{0}/\nu +q)\,\mathsf{\Gamma}(N_{0}/\rho )}{\mathsf{\Gamma}(N_{0}/\nu )\,\mathsf{\Gamma}(N_{0}/\rho +q\nu /\rho )}.$$

**(Case $\mathbf{\nu=\rho}$)** We have
$$\frac{\ln\left(t\right)}{t}\,D_{t}\stackrel{a.s.}{\longrightarrow}\frac{\rho}{\widehat{\rho}}\tag{7}$$
and
$$\ln\left(t\right)\left(\frac{\ln\left(t\right)}{t}\,D_{t}-\frac{\rho}{\widehat{\rho}}-\frac{\rho}{\widehat{\rho}}\,\frac{\ln(\ln(t))}{\ln\left(t\right)}\right)\stackrel{d}{\longrightarrow}Z.\tag{8}$$

**(Case $\mathbf{\nu>\rho}$)** Setting $a=\nu+\widehat{\rho}-\rho$, we have
$$t^{-1}D_{t}\stackrel{a.s.}{\longrightarrow}\frac{(\nu -\rho )}{a}\tag{9}$$
and, moreover:

- for $\rho /\nu <1/2$,
$$\sqrt{t}\left(\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right)\stackrel{d}{\longrightarrow}\mathcal{N}(0,\sigma^{2});\tag{10}$$
- for $\rho /\nu =1/2$,
$$\sqrt{t/\ln\left(t\right)}\left(\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right)\stackrel{d}{\longrightarrow}\mathcal{N}(0,\sigma^{2});\tag{11}$$
- for $\rho /\nu >1/2$,
$$t^{1-\rho /\nu}\left(\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right)\stackrel{d}{\longrightarrow}Z.\tag{12}$$
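The moment formula for the case $0<\nu<\rho$ is numerically delicate if evaluated with plain Gamma functions, but stable in log-space. A small sketch (the function name is ours); a quick sanity check is $\mu[D^{0}]=1$, and Jensen's inequality requires $\mu[D^{2}]\ge\mu[D]^{2}$:

```python
from math import lgamma, exp

def moment_D(q, N0, nu, rho):
    """mu[D^q] for the limit D in the case 0 < nu < rho, computed via
    log-gamma to avoid overflow for large N0."""
    log_m = (lgamma(N0 / nu + q) + lgamma(N0 / rho)
             - lgamma(N0 / nu) - lgamma(N0 / rho + q * nu / rho))
    return exp(log_m)
```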

## 4. Taylor’s Law

**(Case $\mathbf{0<\nu<\rho}$)** From the almost sure convergence (6), we guess $\sigma [D_{t}]\propto \mu [D_{t}]$, where the constant of proportionality is $\sigma [D]/\mu [D]$.

**(Case $\mathbf{\nu=\rho}$)** Since the limit in (7) is a constant, we cannot exploit the almost sure convergence (7) to obtain a Taylor's law as done in the previous case $0<\nu <\rho $. However, from the convergence in distribution (8), we can guess
$$\frac{\ln\left(t\right)}{t}\,\mu [D_{t}]\longrightarrow\frac{\rho}{\widehat{\rho}}$$
and
$$\frac{\ln\left(t\right)^{4}}{t^{2}}\,\sigma^{2}[D_{t}]=\ln\left(t\right)^{2}\,\sigma^{2}\!\left[\frac{\ln\left(t\right)}{t}\,D_{t}-\frac{\rho}{\widehat{\rho}}-\frac{\rho}{\widehat{\rho}}\,\frac{\ln(\ln(t))}{\ln\left(t\right)}\right]\longrightarrow\sigma^{2}[Z].$$
Hence, combining the above two limit relations, we find
$$\sigma^{2}[D_{t}]\sim \sigma^{2}[Z]\,\frac{\widehat{\rho}^{2}}{\rho^{2}}\,\frac{\mu [D_{t}]^{2}}{\ln\left(t\right)^{2}},\qquad \mathrm{that\ is}\qquad \sigma [D_{t}]\propto \frac{\mu [D_{t}]}{\ln\left(t\right)}\sim \frac{\mu [D_{t}]}{\ln\left(\mu [D_{t}]\right)+\ln(\widehat{\rho}/\rho )}.$$

**(Case $\mathbf{\nu>\rho}$)** Since $D_{t}/t\in [0,1]$ for all $t$, the almost sure convergence (9) implies the convergence of the corresponding moments (see [43]). However, this is not enough to obtain a Taylor's law: we also need (10), (11) and (12). First of all, we observe that
$$t^{-2}\,\sigma^{2}[D_{t}]=\sigma^{2}\!\left[\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right]=\mu\!\left[\left(\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right)^{2}\right]-\mu\!\left[\frac{D_{t}}{t}-\frac{(\nu -\rho )}{a}\right]^{2}.$$
Hence:
- for $\rho /\nu <1/2$, we guess from (10) that the first term on the right-hand side of the above equality behaves as $\sigma^{2}/t$, while the second term is $o(1/t)$; hence we get $\sigma^{2}[D_{t}]\sim \sigma^{2}t$ and $$\sigma [D_{t}]\propto \mu [D_{t}]^{\frac{1}{2}};$$
- for $\rho /\nu =1/2$, we guess from (11) that the first term on the right-hand side of the above equality behaves as $\sigma^{2}\ln\left(t\right)/t$, while the second term is $o(\ln(t)/t)$; hence we get $\sigma^{2}[D_{t}]\sim \sigma^{2}\,t\ln\left(t\right)$ and $$\sigma [D_{t}]\propto \mu [D_{t}]^{\frac{1}{2}}\,\ln\!\left(\mu [D_{t}]\right)^{\frac{1}{2}};$$
- for $\rho /\nu >1/2$, we guess from (12) that the first and second terms on the right-hand side of the above equality behave as $\mu [Z^{2}]\,t^{2(\rho /\nu -1)}$ and $\mu [Z]^{2}\,t^{2(\rho /\nu -1)}$, respectively; hence we get $\sigma^{2}[D_{t}]\sim \sigma^{2}[Z]\,t^{2\rho /\nu}$ and $$\sigma [D_{t}]\propto \mu [D_{t}]^{\rho /\nu}.$$
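The case analysis above can be collected into a small helper that returns the predicted Taylor exponent $\beta$ in $\sigma[D_t]\propto\mu[D_t]^{\beta}$. This is our own summary function, and it deliberately ignores the logarithmic corrections that appear in the boundary cases $\nu=\rho$ and $\rho/\nu=1/2$:

```python
def taylor_exponent(nu, rho):
    """Predicted Taylor exponent beta for the urn model with triggering.

    Logarithmic corrections (cases nu == rho and rho/nu == 1/2) are
    dropped: only the leading power-law exponent is returned.
    """
    if nu < rho:          # 0 < nu < rho: sigma proportional to mu
        return 1.0
    if nu == rho:         # sigma ~ mu / ln(mu): beta = 1 up to log terms
        return 1.0
    # nu > rho: beta = 1/2 for rho/nu <= 1/2, beta = rho/nu otherwise
    return max(0.5, rho / nu)
```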

## 5. Taylor’s Law in Real World Systems

## 6. Two Mechanisms that Increase Fluctuations

#### 6.1. Random Parameters

**(Case $\mathbf{\nu>\rho}$)** As seen before, Taylor's exponent in the case $\nu >\rho $ is always smaller than 1. Suppose now that $\rho $ and $\widehat{\rho}$ are constants and that there exists a random variable $X_{0}$, with $\sigma^{2}[X_{0}]>0$, that gives the value of $\nu $. Given the value of $X_{0}$, the urn process behaves as described before. If $X_{0}$ is concentrated on $(\rho ,+\infty )$, that is $X_{0}/\rho >1$ almost surely, then, on the event $\{X_{0}=\nu \}$, the sequence $D_{t}/t$ converges almost surely to the value $(\nu -\rho )/(\nu +\widehat{\rho}-\rho )$. Therefore, since $D_{t}/t$ is bounded, we have [43]
$$\mu [D_{t}]\sim t\,\mu\!\left[\frac{(X_{0}-\rho )}{(X_{0}+\widehat{\rho}-\rho )}\right].$$
Therefore, setting $\tilde{D}=\frac{(X_{0}-\rho )}{(X_{0}+\widehat{\rho}-\rho )}=1-\frac{\widehat{\rho}}{X_{0}+\widehat{\rho}-\rho}$, we find
$$\sigma^{2}[D_{t}]\sim \frac{\sigma^{2}[\tilde{D}]}{\mu [\tilde{D}]^{2}}\,\mu [D_{t}]^{2},\qquad \mathrm{that\ is}\qquad \sigma [D_{t}]\propto \mu [D_{t}].$$
This means that, while a deterministic parameter $\nu >\rho $ gives a Taylor's exponent smaller than 1, a random parameter $\nu $, with $\nu /\rho >1$ almost surely, gives a Taylor's exponent equal to 1.

**(Case $\mathbf{\nu<\rho}$)** As seen before, Taylor's exponent in the case $\nu <\rho $ is equal to 1. Suppose now, as before, that $X_{0}$ is a random variable, with $\sigma^{2}[X_{0}]>0$, that gives the value of $\nu $, while the other parameters are constant. If $X_{0}$ is concentrated on $(0,\rho )$, that is $X_{0}/\rho <1$ almost surely, then, on the event $\{X_{0}=\nu \}$, the sequence $t^{-\nu /\rho}D_{t}$ converges almost surely to a suitable random variable $D_{\nu}$.
Moreover, from [33], we have
$$g_{q}\left(\nu \right):=\mu [D_{\nu}^{q}]=\frac{\mathsf{\Gamma}(N_{0}/\nu +q)}{\mathsf{\Gamma}(N_{0}/\nu )\,\mathsf{\Gamma}(q\nu /\rho )}\int_{0}^{+\infty}x^{q\nu /\rho -1}\left(1+x\right)^{-N_{0}/\nu -q}\,\mathrm{d}x.$$
Assuming, as in the previous section, a condition of uniform integrability, we can say that
$$\mu [D_{t}]\sim \mu [t^{X_{0}/\rho}\,g_{1}\left(X_{0}\right)],\qquad \mu [D_{t}^{2}]\sim \mu [t^{2X_{0}/\rho}\,g_{2}\left(X_{0}\right)].$$
If we neglect the terms $g_{q}\left(X_{0}\right)$ in the above mean values, we have
$$\begin{array}{cc}\hfill \mu [D_{t}]& \sim \mu [t^{X_{0}/\rho}]=\mu [e^{X_{0}\ln\left(t\right)/\rho}]={\mathcal{G}}_{X_{0}}(\ln\left(t\right)/\rho ),\hfill \\ \hfill \mu [D_{t}^{2}]& \sim \mu [t^{2X_{0}/\rho}]=\mu [e^{2X_{0}\ln\left(t\right)/\rho}]={\mathcal{G}}_{X_{0}}(2\ln\left(t\right)/\rho ),\hfill \end{array}$$
where ${\mathcal{G}}_{X_{0}}$ denotes the moment generating function of $X_{0}$. For instance, if $X_{0}$ is uniformly distributed on $(0,\rho )$, we get
$$\mu [D_{t}]\sim \frac{t-1}{\ln\left(t\right)}\sim \frac{t}{\ln\left(t\right)},\qquad \mu [D_{t}^{2}]\sim \frac{t^{2}-1}{2\ln\left(t\right)}\sim \frac{t^{2}}{2\ln\left(t\right)}.$$
Similarly, if $X_{0}$ is exponentially distributed on $(0,\rho )$, that is $f_{X_{0}}\left(x\right)=c(\rho ,\lambda )e^{-\lambda x}I_{(0,\rho )}\left(x\right)$ with $\lambda >0$ and $c(\rho ,\lambda )=\lambda /(1-e^{-\rho \lambda})$, we get
$$\begin{array}{cc}\hfill \mu [D_{t}]& \sim c(\rho ,\lambda )\,\rho\, e^{-\rho \lambda}\frac{(t-e^{\rho \lambda})}{(\ln(t)-\rho \lambda )}\sim c(\rho ,\lambda )\,\rho\, e^{-\rho \lambda}\frac{t}{\ln\left(t\right)},\hfill \\ \hfill \mu [D_{t}^{2}]& \sim c(\rho ,\lambda )\,\rho\, e^{-\rho \lambda}\frac{(t^{2}-e^{\rho \lambda})}{(2\ln(t)-\rho \lambda )}\sim c(\rho ,\lambda )\,\rho\, e^{-\rho \lambda}\frac{t^{2}}{2\ln\left(t\right)}.\hfill \end{array}$$
From Figure 5 we see that the above predictions hold asymptotically, after a long transient during which a law $\sigma [D_{t}]\propto \mu [D_{t}]^{\beta}$, with $\beta >1$, seems to hold.
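The uniform case can be checked numerically: for $X_{0}$ uniform on $(0,\rho)$, the moment generating function evaluated at $\ln(t)/\rho$ should reproduce the closed form $(t-1)/\ln(t)$. A small Monte Carlo sketch (function name and sample size are ours):

```python
from math import log, exp
import random

def mgf_uniform_mc(rho, s, n=200_000, seed=0):
    """Monte Carlo estimate of G_{X0}(s) = E[exp(s * X0)]
    for X0 ~ Uniform(0, rho)."""
    rng = random.Random(seed)
    return sum(exp(s * rng.uniform(0.0, rho)) for _ in range(n)) / n

rho, t = 1.0, 50.0
estimate = mgf_uniform_mc(rho, log(t) / rho)
closed_form = (t - 1) / log(t)   # the asymptotic prediction for mu[D_t]
```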

#### 6.2. Urn Model with Semantic Triggering

- (i) we give weight 1 to: (a) each element in $\mathcal{U}$ with the same label, say $C$, as $s_{t-1}$ (the last element added to the sequence); (b) the element that triggered the entry of $s_{t-1}$ into the urn; and (c) the elements triggered by $s_{t-1}$; a weight $\eta \le 1$ is given to every other element in $\mathcal{U}$;
- (ii) the element $s_{t}$ is chosen by drawing randomly from $\mathcal{U}$, each element with a probability proportional to its weight;
- (iii) the element $s_{t}$ is added to the sequence $\mathcal{S}$ and put back into $\mathcal{U}$ along with $\rho $ additional copies of it;
- (iv) if and only if the chosen element $s_{t}$ is new (i.e., it appears for the first time in the sequence $\mathcal{S}$), $\nu +1$ brand-new distinct elements (balls with different colors, not yet present in the urn), all with a common brand-new label, are added to $\mathcal{U}$.
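Steps (i)–(iv) can be sketched as follows. This is a minimal illustration under our own naming, not the authors' implementation: the drawn ball is never physically removed (so step (iii) reduces to appending $\rho$ copies), and the weighted draw recomputes all weights at each step, which is fine for short sequences but too slow for production runs.

```python
import random

def urn_semantic_triggering(T, rho, nu, eta, N0, seed=0):
    """Sketch of the urn model with semantic triggering.

    Returns the trajectory D_t = number of distinct elements in S.
    Elements are integers; each carries a label (semantic group) and a
    parent (the element that triggered its entry into the urn).
    """
    rng = random.Random(seed)
    label = {e: 0 for e in range(N0)}       # initial elements share label 0
    parent = {e: None for e in range(N0)}
    children = {e: [] for e in range(N0)}
    urn = list(range(N0))                   # multiset of elements
    next_elem, next_label = N0, 1
    seen, last = set(), None
    trajectory = []
    for _ in range(T):
        if last is None:
            s = rng.choice(urn)             # first draw: uniform
        else:
            # step (i): weight 1 for same label, the parent of s_{t-1},
            # and the children of s_{t-1}; weight eta for everything else
            fav = {last} | set(children[last])
            if parent[last] is not None:
                fav.add(parent[last])
            lab = label[last]
            weights = [1.0 if (label[e] == lab or e in fav) else eta
                       for e in urn]
            s = rng.choices(urn, weights=weights, k=1)[0]   # step (ii)
        urn.extend([s] * rho)               # step (iii)
        if s not in seen:                   # step (iv): a novelty
            seen.add(s)
            new = list(range(next_elem, next_elem + nu + 1))
            for e in new:
                label[e] = next_label       # common brand-new label
                parent[e] = s
                children[e] = []
            children[s].extend(new)
            urn.extend(new)
            next_elem += nu + 1
            next_label += 1
        last = s
        trajectory.append(len(seen))
    return trajectory
```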

## 7. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Zipf, G.K. Relative Frequency as a Determinant of Phonetic Change. *Harv. Stud. Class. Philol.* **1929**, *40*, 1–95.
2. Zipf, G.K. *The Psychobiology of Language*; Houghton-Mifflin: New York, NY, USA, 1935.
3. Zipf, G.K. *Human Behavior and the Principle of Least Effort*; Addison-Wesley: Reading, MA, USA, 1949.
4. Herdan, G. *Type-Token Mathematics: A Textbook of Mathematical Linguistics*; Janua Linguarum, Series Maior, No. 4; Mouton & Company: Berlin, Germany, 1960.
5. Heaps, H.S. *Information Retrieval: Computational and Theoretical Aspects*; Academic Press: Cambridge, MA, USA, 1978.
6. Taylor, L. Aggregation, Variance and the Mean. *Nature* **1961**, *189*, 732.
7. Eisler, Z.; Bartos, I.; Kertész, J. Fluctuation scaling in complex systems: Taylor's law and beyond. *Adv. Phys.* **2008**, *57*, 89–142.
8. Newman, M.E.J. Power laws, Pareto distributions and Zipf's law. *Contemp. Phys.* **2005**, *46*, 323–351.
9. Corominas-Murtra, B.; Hanel, R.; Thurner, S. Understanding scaling through history-dependent processes with collapsing sample space. *Proc. Natl. Acad. Sci. USA* **2015**, *112*, 5348–5353.
10. Cubero, R.J.; Jo, J.; Marsili, M.; Roudi, Y.; Song, J. Statistical Criticality arises in Most Informative Representations. *J. Stat. Mech.* **2019**, *1906*, 063402.
11. Lü, L.; Zhang, Z.K.; Zhou, T. Zipf's law leads to Heaps' law: Analyzing their relation in finite-size systems. *PLoS ONE* **2010**, *5*, e14139.
12. Tria, F.; Loreto, V.; Servedio, V.D.P.; Strogatz, S.H. The dynamics of correlated novelties. *Sci. Rep.* **2014**, *4*.
13. Kauffman, S.A. *Investigations*; Oxford University Press: New York, NY, USA, 2000.
14. Monechi, B.; Ruiz-Serrano, A.; Tria, F.; Loreto, V. Waves of novelties in the expansion into the adjacent possible. *PLoS ONE* **2017**, *12*, e0179303.
15. Iacopini, I.; Milojević, S.; Latora, V. Network Dynamics of Innovation Processes. *Phys. Rev. Lett.* **2018**, *120*, 048301.
16. Kilpatrick, A.M.; Ives, A.R. Species interactions can explain Taylor's power law for ecological time series. *Nature* **2003**, *422*, 65–68.
17. Ballantyne, F.; Kerkhoff, A.J. The observed range for temporal mean-variance scaling exponents can be explained by reproductive correlation. *Oikos* **2007**, *116*, 174–180.
18. Cohen, J.E.; Xu, M.; Schuster, W.S.F. Stochastic multiplicative population growth predicts and interprets Taylor's power law of fluctuation scaling. *Proc. Biol. Sci.* **2013**, *280*, 20122955.
19. Cohen, J.E. Stochastic population dynamics in a Markovian environment implies Taylor's power law of fluctuation scaling. *Theor. Popul. Biol.* **2014**, *93*, 30–37.
20. Giometto, A.; Formentin, M.; Rinaldo, A.; Cohen, J.E.; Maritan, A. Sample and population exponents of generalized Taylor's law. *Proc. Natl. Acad. Sci. USA* **2015**, *112*, 7755–7760.
21. Gerlach, M.; Altmann, E.G. Scaling laws and fluctuations in the statistics of word frequencies. *New J. Phys.* **2014**, *16*, 113010.
22. Simon, H. On a class of skew distribution functions. *Biometrika* **1955**, *42*, 425–440.
23. Zanette, D.; Montemurro, M. Dynamics of Text Generation with Realistic Zipf's Distribution. *J. Quant. Linguist.* **2005**, *12*, 29.
24. Tria, F.; Loreto, V.; Servedio, V.D.P. Zipf's, Heaps' and Taylor's Laws are Determined by the Expansion into the Adjacent Possible. *Entropy* **2018**, *20*, 752.
25. Zabell, S. Predicting the unpredictable. *Synthese* **1992**, *90*, 205–232.
26. Pitman, J. Exchangeable and partially exchangeable random partitions. *Probab. Theory Relat. Fields* **1995**, *102*, 145–158.
27. Pitman, J. *Combinatorial Stochastic Processes*; École d'Été de Probabilités de Saint-Flour XXXII; Springer: Berlin/Heidelberg, Germany, 2002.
28. Pitman, J.; Yor, M. The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. *Ann. Probab.* **1997**, *25*, 855–900.
29. Buntine, W.; Hutter, M. A Bayesian View of the Poisson–Dirichlet Process. arXiv **2010**, arXiv:1007.0296.
30. Teh, Y.W. A Hierarchical Bayesian Language Model Based on Pitman–Yor Processes. In *Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics*, Sydney, Australia, 17–21 July 2006.
31. Du, L.; Buntine, W.; Jin, H. A segmented topic model based on the two-parameter Poisson–Dirichlet process. *Mach. Learn.* **2010**, *81*, 5–19.
32. Janson, S. Functional limit theorems for multitype branching processes and generalized Pólya urns. *Stoch. Process. Appl.* **2004**, *110*, 177–245.
33. Janson, S. Limit theorems for triangular urn schemes. *Probab. Theory Relat. Fields* **2006**, *134*, 417–452.
34. Kauffman, S.A. *The Origins of Order: Self-Organization and Selection in Evolution*; Oxford University Press: New York, NY, USA, 1993.
35. Kauffman, S.A. *Investigations: The Nature of Autonomous Agents and the Worlds They Mutually Create*; SFI Working Papers; Santa Fe Institute: Santa Fe, NM, USA, 1996.
36. James, L.F. Large Sample Asymptotics for the Two-Parameter Poisson–Dirichlet Process. In *Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh*; Institute of Mathematical Statistics: Beachwood, OH, USA, 2008; pp. 187–199.
37. Hoppe, F.M. The sampling theory of neutral alleles and an urn model in population genetics. *J. Math. Biol.* **1987**, *25*, 123–159.
38. Bassetti, F.; Crimaldi, I.; Leisen, F. Conditionally identically distributed species sampling sequences. *Adv. Appl. Probab.* **2010**, *42*, 433–459.
39. Aguech, R. Limit theorems for random triangular urn schemes. *J. Appl. Probab.* **2009**, *46*, 827–843.
40. Mahmoud, H.M. *Pólya Urn Models*; Texts in Statistical Science Series; CRC Press: Boca Raton, FL, USA, 2009; p. xii+290.
41. Flajolet, P.; Dumas, P.; Puyhaubert, V. Some exactly solvable models of urn process theory. *DMTCS Proc.* **2006**, 59–118.
42. DasGupta, A. Moment Convergence and Uniform Integrability. In *Asymptotic Theory of Statistics and Probability*; Springer Texts in Statistics; Springer: New York, NY, USA, 2008.
43. Williams, D. *Probability with Martingales*; Cambridge Mathematical Textbooks; Cambridge University Press: Cambridge, UK, 1991; p. xvi+251.
44. Hart, M. Project Gutenberg. 1971. Available online: https://www.gutenberg.org/ (accessed on 18 May 2020).
45. Miller, F.; Stiksel, M.; Breidenbruecker, M.; Willomitzer, T. Last.fm. 2002. Available online: https://www.last.fm/ (accessed on 7 April 2020).
46. Schachter, J. del.icio.us. 2003. Available online: http://delicious.com/ (accessed on 8 January 2018).
47. Dorsey, J.; Glass, N.; Stone, B.; Williams, E. Twitter.com. 2006. Available online: https://twitter.com/ (accessed on 7 April 2020).

**Figure 1.**

**Taylor’s law in the urn model with triggering.** Left: Taylor’s law from 100 realizations of the stochastic process described in Section 2 (the urn model with triggering), for each of the indicated parameter values. The parameter values are chosen so as to have a representative curve for each of the analyzed regimes, i.e., $\rho <\nu /2$, $\rho =\nu /2$, $\nu /2<\rho <\nu $, $\rho =\nu $, $\rho >\nu $. Each realization is a sequence of ${10}^{6}$ elements. Right: Taylor’s law from the same sequences as in the left panel, individually reshuffled so as to lose the temporal order (refer to the parallel file random reshuffling procedure discussed in Section 5 and in Figure 2).

**Figure 2.**

**Shuffling procedures.** In this example we consider three different streams A, B, C, consisting of five tokens each. When the analysis is carried out in parallel, the streams are aligned respecting their natural order (left panel). In the parallel file random case (middle panel), each stream is reshuffled individually. Finally, in the parallel random case, the elements of all streams are shuffled together (right panel).

**Figure 3.**

**Taylor’s law in real systems and in their randomized instances.** The standard deviation $\sigma \left(N\right)$ of the number of different tokens after $N$ total tokens have appeared is plotted vs. the average number of different tokens $\mu \left(N\right)$ in four different datasets. The shuffled counterparts are also evaluated. The shuffling schemes are shown in Figure 2.

**Figure 4.**

**Stability of Taylor’s law results in the Gutenberg corpus.** Left: the analog of Figure 3 (top left) for three different sets of $M=100$ books from the Gutenberg corpus. Right: as in Figure 3 (top left), with 20 different realizations of the parallel file random reshuffling procedure. The difference between the curve corresponding to the ordered sequences and those corresponding to the reshuffled ones is much larger than the fluctuations due to different realizations of the reshuffling.

**Figure 5.**

**Taylor’s law in the urn model with triggering with quenched stochasticity of the parameters, and in the urn model with semantic triggering.** Top: Taylor’s law in the urn model with triggering, with parameters ${N}_{0}=100$, $\rho =1$, and $\nu $ randomly extracted for each simulation of the process from a uniform distribution on the interval $(0,1)$ (left) and from an exponential distribution on the interval $(0,1)$ with parameter $\lambda =1$ (right), as discussed in the main text. Center: Taylor’s law in the urn model with triggering, with parameters, respectively: (left) ${N}_{0}=100$, $\nu =2$, $\rho =3+{r}_{i}$, with ${r}_{i}$ randomly extracted for each simulation of the process from an exponential distribution with mean $\overline{{r}_{i}}=1$; (right) $\nu =2$, $\rho =3$, ${N}_{0}=1+{n}_{i}$, with ${n}_{i}$ randomly extracted for each simulation of the process from an exponential distribution with mean $\overline{{n}_{i}}={10}^{4}$. Bottom: (left) Taylor’s law in the urn model with semantic triggering, with parameters ${N}_{0}=100$, $\nu =6$, $\rho =9$, $\eta =0.6$; (right) Taylor’s law in the urn model with semantic triggering, with parameters $\nu =2$, $\rho =3$, $\eta =0.6$, ${N}_{0}=1+{n}_{i}$, with ${n}_{i}$ randomly extracted for each simulation of the process from an exponential distribution with mean $\overline{{n}_{i}}={10}^{4}$. The parameters of the simulations were chosen so as to lie in the regime $\nu <\rho $. The parameter $\eta =0.6$ used in the bottom graphs was chosen in the regime where the Heaps’ and Zipf’s laws feature exponents compatible with those observed in real systems. In all panels the Taylor’s law curves are constructed from 100 independent realizations of the process ($M=100$ in Equation (16)).

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Tria, F.; Crimaldi, I.; Aletti, G.; Servedio, V.D.P.
Taylor’s Law in Innovation Processes. *Entropy* **2020**, *22*, 573.
https://doi.org/10.3390/e22050573
