Optimal sensing for fish school identification

Fish schooling implies that swimmers are aware of their companions. In flow-mediated environments, in addition to visual cues, pressure and shear sensors on the fish body are critical for providing quantitative information about the proximity of other swimmers. Here we examine the distribution of sensors on the surface of an artificial swimmer so that it can optimally identify a leading group of swimmers. We employ Bayesian experimental design coupled with two-dimensional Navier-Stokes simulations of multiple self-propelled swimmers. The follower tracks the school using information from its own surface pressure and shear stress. We demonstrate that the optimal sensor distribution of the follower is qualitatively similar to the distribution of neuromasts on fish. Our results show that it is possible to accurately identify the center of mass, and even the number, of the leading swimmers using surface information alone.


Introduction
Fish navigate their environments by processing and adapting to cues from the surrounding flow fields. The flow environment is replete with mechanical disturbances (pressure, vorticity) that can convey information about the sources that generated them. Fish swimming in groups have been found to process such external information and balance it with social cues [1,2]. Fish may perceive mechanical disturbances in terms of surface pressure and shear stresses through neuromasts. Early studies and experiments showed that the functioning of the lateral line is crucial for several tasks [3,4]. Experiments with trout in the vicinity of objects showed its importance for Kármán gaiting and bow-wake swimming, as well as for energy-efficient station keeping [5,6]. Additionally, it was shown that flow measurements allow a fish to determine the diameter of a cylinder and the flow velocity, as well as its position relative to the generated Kármán vortex street [7,8]. A study using blind cave fish showed the importance of the lateral line for detecting the location and shape of surrounding objects and for avoiding obstacles [9][10][11][12]. In another study, the feeding behavior of blinded mottled sculpin was tested and it was found that they use their lateral line system to detect prey [13]. It was also found that blind fish manage to keep their position in schools and lose this ability when the lateral line organ is disabled [14]. Further importance of the lateral line was shown for communication behavior [15], selection of habitats [16] and rheotaxis [17].
The lateral line consists of two functional classes of receptors: mechanoreceptors and electroreceptors. In this work we mimic the mechanosensory receptors, more specifically the sub-surface 'canal' neuromasts and the superficial neuromasts [18,19]. The neuromasts on the fish skin detect shear stresses, whereas those residing in the lateral line canals detect pressure gradients [20][21][22][23][24]. Due to the filtering nature of the canals, the subsurface neuromasts are better at detecting small hydrodynamic stimuli against background noise [25].
In order to better use and understand the capabilities of artificial sensors, several studies have examined the information content of the flow and the optimal harvesting of this information. The position of a vibrating source was shown to be linearly coded in the pressure gradients measured by the subsurface neuromasts [42]. Furthermore, it was shown that the variance of the pressure gradient correlates with the presence of lateral line canals [43]. A study equipped fish robots with distributed pressure sensors for flow sensing and used Bayesian filtering to estimate the flow speed, angle of attack, and foil camber [44]. Other studies focused on dipole sources to develop methods to extract information and optimize the parameters of the sensing devices [45,46]. In a recent study, artificial neural networks were employed to classify wakes from airfoils [47]. In order to find effective sensor positions, weight analysis algorithms were employed [48].
In this study, following earlier work on the detection of flow disturbances by single obstacles [49], we examine the optimal spatial distribution of sensors on a self-propelled swimmer that infers the number and relative positions of a group of neighboring (here leading) fish. We combine numerical simulations of the two-dimensional Navier-Stokes equations with Bayesian optimal sensor placement to examine the extraction of flow information from pressure gradients and the optimal positioning of sensors. The paper is organised as follows: in Section 2.1 we describe the numerical simulations and in Section 2.2 the process of Bayesian optimal experimental design. We present our results in Section 3 and conclude in Section 4.

Flow simulations
The swimmers are modeled by slender deforming bodies of length L which are characterized by their half-width w(s) along the midline [50,51]. A sketch of the parametrization is presented in Fig. 1. Following [52], we use w_h = s_b = 0.04L, s_t = 0.95L and w_t = 0.01L. The swimmers propel themselves by performing sinusoidal undulations of their midline. This motion is described by a prescribed time-dependent parameterization of the midline curvature. Here, T_p = 1 is the tail-beat period and A is the undulation amplitude, which increases linearly from A(0) = 0.82/L to A(L) = 5.7/L to replicate the anguilliform swimming motion described by [53]. Given the curvature along s and a center of mass, the coordinates r(s, t) of the swimmer's midline can be computed by integrating the Frenet-Serret formulas [52]. In turn, the half-width w(s) and the coordinates r(s, t) characterize the swimmer's surface. The flow environment is described by numerical simulations of the two-dimensional incompressible Navier-Stokes equations (NSE) in velocity-pressure (u − p) formulation. The NSE are discretized with second-order finite differences and integrated in time with explicit Euler time stepping. The fluid-structure interaction is approximated with Brinkman penalization [54][55][56], by extending the fluid velocity u inside the swimmers' bodies and by including in the NSE a penalization term that enforces the no-slip and no-flow-through boundary conditions. Here, ν is the kinematic viscosity, λ = 1/δt is the penalization coefficient, N_s is the number of swimmers, u_{s,i}^k is the velocity field imposed by swimmer i (composed of translational, rotational and undulatory motions), and χ_i is its characteristic function, which takes the value 1 inside the body of swimmer i and the value 0 outside. The characteristic function χ_i is computed, given the distance of each grid-point from the surface of swimmer i, by a second-order accurate finite difference approximation of a Heaviside function [57].
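As an illustration, the reconstruction of the midline from a prescribed curvature can be sketched as follows. This is a toy sketch: the sinusoidal curvature law, its phase convention, and the trapezoidal quadrature are assumptions for this example, not the authors' exact parameterization.

```python
import numpy as np

# Hypothetical sketch: recover the 2D midline r(s, t) from a curvature law
# k(s, t) by integrating the planar Frenet-Serret relations
#   theta'(s) = k(s),  r'(s) = (cos theta, sin theta).
L = 1.0          # swimmer length (normalized for this example)
Tp = 1.0         # tail-beat period
ns = 200         # number of midline points
s = np.linspace(0.0, L, ns)
A = 0.82 / L + (5.7 / L - 0.82 / L) * s / L   # linearly increasing amplitude

def midline(t):
    """Midline coordinates at time t via trapezoidal integration along s."""
    k = A * np.sin(2.0 * np.pi * (t / Tp - s / L))  # assumed travelling-wave law
    ds = s[1] - s[0]
    theta = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * ds)))
    x = np.concatenate(([0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)))
    y = np.concatenate(([0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)))
    return np.stack([x, y], axis=1)

r = midline(0.25)  # midline shape a quarter period into the tail-beat cycle
```

Since the integration follows a unit tangent, the arc length of the reconstructed midline stays close to L, which is a useful sanity check for the discretization.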
The pressure field is computed by pressure-projection [56,58], where the intermediate velocity is ũ^k = u^k + δt [−(u^k · ∇)u^k + ν∆u^k]. The terms inside the summation in Eq. (4) are due to the non-divergence-free deformation of the swimmers.

Schooling formation
The tail-beating motion that propels a single swimmer forward generates a sequence of vortices in its wake. The momentum contained in the flow field induces forces on the swimmers, which fish in schooling formation must overcome to maintain their positions in the group [59]. In this study, we maintain the schooling formation for multiple swimmers by employing closed-loop parametric controllers. We increase or decrease the tail-beat period T_{p,i} of each swimmer i if it lags behind or surpasses a desired position in the direction of the school's motion. In order to straighten the school's trajectory, we impose additional curvature k_{C,i} along each swimmer's midline so as to minimize its lateral deviation ∆y_i and its angular deflection ∆θ_i. Here, ⟨·⟩ denotes an exponential moving average with weight δt/T_p, which approximates the integral term found in PI controllers. The formulation in Eq. (6) indicates that if both the lateral displacement and the angular deviation are positive (or both negative) the swimmer will gradually revert to its position in the formation. Conversely, if ∆y_i and ∆θ_i have different signs, the displacement has to be corrected by adding (or subtracting) curvature to the swimmer's midline.
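The structure of such a PI-like correction can be sketched as below. The gains, the exact control law, and the additive combination of the two errors are assumptions for illustration; only the sign logic and the exponential moving average follow the description above.

```python
import numpy as np

# Sketch of a closed-loop curvature correction for station keeping.
# kp, ki and the additive error combination are assumed values, not the
# authors' controller parameters.
def curvature_correction(dy, dtheta, dy_avg, dtheta_avg, kp=1.0, ki=0.5):
    """Extra midline curvature k_C opposing lateral offset dy and heading error dtheta.

    The running averages <dy>, <dtheta> act as the integral term of a PI
    controller: same-sign errors produce a restoring curvature of opposite sign.
    """
    return -(kp * (dy + dtheta) + ki * (dy_avg + dtheta_avg))

def update_average(avg, value, dt=0.01, Tp=1.0):
    """Exponential moving average with weight dt / Tp, as used for <.>."""
    w = dt / Tp
    return (1.0 - w) * avg + w * value
```

With this sign convention, a swimmer displaced upward and turned upward (both errors positive) receives negative curvature, steering it back toward the formation.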

Flow sensors
We distinguish two types of sensors on the swimmer body: the superficial neuromasts detect flow stresses and the subcanal neuromasts detect pressure gradients [28,60,61]. From the numerical solution of the 2D Navier-Stokes equations we obtain the flow velocity u = (u, v) and the pressure p at every point of the computational grid. The surface values of these quantities are obtained through a bi-linear interpolation from the nearest grid points. We perform offline analysis by recording the interpolated pressure p and flow velocity u in the vicinity of the body. We remark that we have neglected points near the end of the body to reduce the influence of the large flow gradients that are generated by the motion and sharp geometry of the tail. The shear stresses are computed on the body surface using the local tangential velocity in the two nearest grid points. Moreover, we compute pressure gradients along the surface by first smoothing the pressure along the surface using splines implemented in SCIPY [62,63].
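The spline-based surface-gradient computation can be sketched as follows. The synthetic pressure signal and the smoothing factor are illustrative assumptions, not simulation data or the authors' settings.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Sketch: smooth a noisy surface-pressure trace with a smoothing spline and
# evaluate its derivative along the midline coordinate.
rng = np.random.default_rng(0)
sc = np.linspace(0.0, 0.9, 128)               # midline coordinate (tail region excluded)
p = np.cos(4.0 * np.pi * sc) + 0.01 * rng.standard_normal(sc.size)  # toy pressure

# Smoothing factor chosen to match the assumed noise level (an assumption).
tck = splrep(sc, p, s=sc.size * 1e-4)
dp_ds = splev(sc, tck, der=1)                 # pressure gradient along the surface
```

Differentiating the smoothed spline instead of the raw samples suppresses the amplification of measurement noise that plain finite differences would produce.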

Optimal sensor placement based on Information gain
In the present work a swimmer is equipped with sensors that are used to identify the size and location of a nearby school. The optimal sensor locations are identified using Bayesian experimental design [64], so that the information obtained from the collected measurements is maximized. We define the information gain as the distance between the prior belief on the quantities of interest and the posterior belief after obtaining the measurements. Here, we choose the Kullback-Leibler divergence between the prior and the posterior distribution as the measure of this distance.

Bayesian Estimation of Swimmers
In the present experimental setup, we consider a group of swimmers followed by a single swimmer. The follower needs to identify (i) the relative location r of the center of mass and (ii) the population n_f of the leading group. We denote by ϑ = r or ϑ = n_f these unknown quantities and allow the follower to update its prior belief p(ϑ) about the leading group by collecting measurements with its sensors. The sensors are distributed symmetrically on both sides of the fish, so that each sensor pair can be represented by a single point on the mid-line. We denote the k-th measurement location on the upper and the lower side by x^1(s_k) and x^2(s_k), respectively (see Fig. 2 for a sketch of the setup).
We denote by F(ϑ; s) ∈ R^{2n} the flow output and include an error term ε to account for inaccuracies such as numerical errors and imperfections in the sensors. The measurements on the fish body can be expressed as y = F(ϑ; s) + ε. We model the error term by a multivariate Gaussian distribution ε ∼ N(0, Σ(s)) with zero mean and covariance matrix Σ(s) ∈ R^{2n×2n}, so that the likelihood of a measurement is expressed as p(y|ϑ, s) = N(y | F(ϑ; s), Σ(s)).

Figure 2. Simulation setup used for determining the optimal sensor distribution on a fish-like body. The follower is initially located inside the rectangular area. The number of fish in the leading group is varied between one and eight. The sensor-placement algorithm attempts to find the arrangement of sensors s that allows the follower to determine with lowest uncertainty the relative position r and the number of fish n_f in the leading group. For each sensor s_i the fish collects measurements y_i^1 and y_i^2 at locations x^1(s_i) and x^2(s_i) on the skin, respectively.
The covariance matrix depends on the sensor positions s. We assume that the prediction errors are correlated for measurements on the same side of the swimmer and uncorrelated if they originate from opposite sides. Finally, we assume that the correlation decays exponentially with the distance between the measurement locations, so that entries of the covariance matrix for sensors on the same side are proportional to σ² exp(−|s_i − s_j|/ℓ), where ℓ > 0 is the correlation length and σ is the correlation strength. For all the cases described in this work, the correlation length is set to one tenth of the fish length, ℓ = 0.1L. The correlation strength is set to two times the mean of the signal coming from our simulations. We remark that the covariance matrix must be symmetric and positive definite. To ensure positive definiteness we have to handle the case where the same sensor location is picked twice, as positive definiteness is then violated; we do so by setting the (otherwise vanishing) argument of the exponential to −10^{−7}. This form of the correlation error reduces the utility when sensors are placed too close together and prevents excessive clustering of the sensors [65,66]. We wish to identify the locations s yielding the largest information gain about the unknown parameter ϑ of the disturbance. A measure of information gain is defined through the Kullback-Leibler (KL) divergence between the prior belief of the parameter values and the posterior belief, i.e., after measuring the environment. The prior and posterior beliefs are represented through the density functions p(ϑ) and p(ϑ|y, s), respectively. We denote by T the support of p(ϑ). The two densities are connected through Bayes' theorem, p(ϑ|y, s) = p(y|ϑ, s) p(ϑ) / p(y), where p(y|ϑ, s) is the likelihood function defined in Eq. (9) and p(y) is the normalization constant.
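The covariance construction described above can be sketched as follows. The guard against duplicated sensor locations is implemented here via a tiny minimum distance, which is one way to realize the small negative exponent mentioned in the text; σ and ℓ values are illustrative.

```python
import numpy as np

# Sketch of Sigma(s): block-diagonal over the two body sides, exponentially
# decaying correlation within a side, zero correlation across sides.
def covariance(s, sigma=1.0, ell=0.1):
    """Sigma(s) for n sensor pairs; indices 0..n-1 upper side, n..2n-1 lower side."""
    s = np.asarray(s, dtype=float)
    n = s.size
    d = np.abs(s[:, None] - s[None, :])
    # Guard: duplicated off-diagonal locations would make the block singular,
    # so give them a tiny positive separation (assumed implementation detail).
    d[(d == 0.0) & ~np.eye(n, dtype=bool)] = 1e-7
    block = sigma**2 * np.exp(-d / ell)
    cov = np.zeros((2 * n, 2 * n))
    cov[:n, :n] = block      # upper side
    cov[n:, n:] = block      # lower side, uncorrelated with the upper side
    return cov

C = covariance([0.0, 0.2, 0.5])
```

Because each block is a valid exponential-decay kernel matrix and the two sides are decoupled, the assembled matrix stays symmetric positive definite even when two sensors nearly coincide.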
The utility function is defined as the Kullback-Leibler divergence between the posterior and the prior [67], u(s, y) = D_KL( p(ϑ|y, s) ∥ p(ϑ) ) = ∫_T p(ϑ|y, s) ln [ p(ϑ|y, s) / p(ϑ) ] dϑ. The expected utility is defined as the average value over all possible measurements, U(s) = ∫_Y u(s, y) p(y|s) dy, where Y is the support of the measurements. Using Eq. (12), the expected utility can be expressed in terms of the likelihood and the evidence.

Estimated expected utility for continuous random variables: school location

When ϑ = r, the random variable takes values in a continuous domain. The estimator for the expected utility in this case can be obtained by approximating the two integrals by Monte Carlo integration, using N_ϑ samples from p(ϑ) and N_y samples from p(y|ϑ, s) [68]. The resulting estimator is Û(s) = (1/(N_ϑ N_y)) ∑_{i=1}^{N_ϑ} ∑_{j=1}^{N_y} { ln p(y^{(i,j)}|ϑ^{(i)}, s) − ln [ (1/N_ϑ) ∑_{k=1}^{N_ϑ} p(y^{(i,j)}|ϑ^{(k)}, s) ] }, where ϑ^{(i)} ∼ p(ϑ) for i = 1, . . . , N_ϑ and y^{(i,j)} ∼ N(F(ϑ^{(i)}; s), Σ(s)) for j = 1, . . . , N_y. We remark that the computational complexity of this procedure is mainly determined by the number N_ϑ of Navier-Stokes simulations. There is no additional computational burden to compute the N_y samples following the measurement error model Eq. (8).
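A minimal sketch of this double-loop Monte Carlo estimator is given below, with a toy linear forward model standing in for the Navier-Stokes outputs; all numerical values (prior range, noise variance, sample counts) are illustrative assumptions.

```python
import numpy as np

# Nested Monte Carlo estimator of the expected information gain:
# U(s) ~ mean_ij [ log p(y_ij | theta_i) - log (1/N) sum_k p(y_ij | theta_k) ].
rng = np.random.default_rng(1)

def log_gauss(y, mean, var):
    """Log density of an isotropic Gaussian, evaluated row-wise."""
    return -0.5 * np.sum((y - mean) ** 2 / var + np.log(2.0 * np.pi * var), axis=-1)

def expected_utility(F, theta_samples, var, n_y, rng):
    means = np.array([F(th) for th in theta_samples])   # forward model outputs
    total = 0.0
    for mu in means:
        y = mu + np.sqrt(var) * rng.standard_normal((n_y, mu.size))
        log_like = log_gauss(y, mu, var)
        log_evid = np.array([log_gauss(y, m, var) for m in means])
        # log of the Monte Carlo evidence, stabilized with log-sum-exp
        shift = log_evid.max(0)
        log_marg = np.log(np.mean(np.exp(log_evid - shift), axis=0)) + shift
        total += np.sum(log_like - log_marg)
    return total / (len(means) * n_y)

theta = rng.uniform(-1.0, 1.0, size=20)                 # prior samples (toy)
U_informative = expected_utility(lambda th: np.array([th, 2.0 * th]), theta, 0.01, 50, rng)
U_flat = expected_utility(lambda th: np.array([0.0, 0.0]), theta, 0.01, 50, rng)
```

An informative forward model yields a utility close to the entropy of the discretized prior, while a constant forward model (sensors that see nothing) yields zero gain.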

Estimated expected utility for discrete random variables: school size
When ϑ is a discrete random variable with finite support, taking values in the set {ϑ_1, . . . , ϑ_{N_ϑ}}, the expected utility in Eq. (15) is given by a sum over the support, U(s) = ∑_{i=1}^{N_ϑ} p(ϑ_i) ∫_Y ln [ p(y|ϑ_i, s) / p(y|s) ] p(y|ϑ_i, s) dy. Here, ϑ = n_f represents the number of fish in the leading group. An estimator of this utility can be obtained by Monte Carlo integration using N_y samples from the likelihood distribution p(y|ϑ_i, s), i.e., y^{(i,j)} ∼ p(y|ϑ_i, s) for j = 1, . . . , N_y. Let ϕ be the random variable representing one of the group configurations. Each group configuration is associated with a unique number ϕ_{i,ℓ} for ℓ = 1, . . . , n_i, where n_i is the total number of configurations containing i fish. With this notation, ϕ takes values in the set {ϕ_{i,ℓ} | ℓ = 1, . . . , n_i, i = 1, . . . , 8}. For an example of different configurations see Appendix A. The likelihood can be written in terms of the variable ϕ by marginalizing over the configurations. Using the fact that, for i = 1, . . . , N_ϑ, p(y, ϕ = ϕ_{k,ℓ} | ϑ = ϑ_i, s) = 0 for k ≠ i, and p(y | ϑ = ϑ_i, ϕ = ϕ_{i,ℓ}, s) = p(y | ϕ = ϕ_{i,ℓ}, s) for ℓ = 1, . . . , n_i, together with the assumption that all configurations of a given group size are equally likely, p(ϕ = ϕ_{i,ℓ} | ϑ = ϑ_i) = 1/n_i, the likelihood in Eq. (19) becomes p(y | ϑ = ϑ_i, s) = (1/n_i) ∑_{ℓ=1}^{n_i} p(y | ϕ = ϕ_{i,ℓ}, s). Notice that the likelihood is a mixture of distributions with equal weights and that p(y | ϕ = ϕ_{i,ℓ}) = N(y | F(ϕ_{i,ℓ}; s), Σ(s)). In order to draw a sample from the likelihood, we first draw an integer ℓ* with equal probability from 1 to n_i and then draw y ∼ p(y | ϕ = ϕ_{i,ℓ*}, s). The final form of the estimator is obtained by substituting this mixture likelihood into the Monte Carlo estimator above.
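The discrete estimator with the mixture likelihood can be sketched as follows; the per-configuration signals and the noise variance are toy values, not simulation outputs.

```python
import numpy as np

# Sketch of the discrete-parameter utility estimator: the likelihood of each
# group size is an equally weighted Gaussian mixture over its configurations.
rng = np.random.default_rng(2)
var = 0.05
signals = {1: [np.array([0.1])],                                   # toy F(phi)
           2: [np.array([0.5]), np.array([0.6])],
           3: [np.array([1.0]), np.array([1.1]), np.array([0.9])]}

def log_mixture(y, confs, var):
    """log p(y | group size) = log of an equal-weight Gaussian mixture."""
    logs = np.array([-0.5 * np.sum((y - f) ** 2) / var
                     - 0.5 * y.size * np.log(2.0 * np.pi * var) for f in confs])
    return np.logaddexp.reduce(logs) - np.log(len(confs))

def utility_discrete(signals, var, n_y, rng):
    sizes = sorted(signals)
    total = 0.0
    for size in sizes:
        for _ in range(n_y):
            # sample from the mixture: pick a configuration, then add noise
            conf = signals[size][rng.integers(len(signals[size]))]
            y = conf + np.sqrt(var) * rng.standard_normal(conf.shape)
            log_like = log_mixture(y, signals[size], var)
            log_marg = np.logaddexp.reduce(
                np.array([log_mixture(y, signals[k], var) for k in sizes])) - np.log(len(sizes))
            total += log_like - log_marg
    return total / (len(sizes) * n_y)

U = utility_discrete(signals, var, 200, rng)
```

The utility is bounded above by the prior entropy ln(3) of the three toy group sizes, and it approaches that bound only when the mixtures barely overlap.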

Optimization of the expected utility function
In order to determine the optimal sensor arrangement we maximize the utility estimator Û(s) described in Eq. (16). It has been observed that the expected utility for many sensors often exhibits many local optima [65,69]. Heuristic approaches, such as the sequential sensor placement algorithm described in [70], have been demonstrated to be effective alternatives. Here, following [70], we perform the optimization iteratively, placing one sensor after the other. We start by placing one sensor s_1 by a grid search in the interval [0, 0.9]. In the next step we compute the location of the second sensor by setting s = (s_1, s) and repeating the grid search for the new optimal location s_2. This procedure is then continued by setting s = (s_1, . . . , s_{k−1}, s) when placing the k-th sensor. We note that the scalar variable s denotes the mid-line coordinate of a single sensor pair, whereas the vector s holds the coordinates of all sensor pairs. Besides the mentioned advantages, sequential placement allows us to quantify the importance of each placed sensor and provides further insight into the resulting distribution of sensors.

Figure 3. Snapshots of the pressure field in the environment of the coasting swimmer generated by one (Fig. 3a), four (Fig. 3b) and seven (Fig. 3c) schooling swimmers. The snapshots are taken at the moment the measurement was performed, for one particular location of the follower in the prior region. High pressure is shown in red and low pressure in blue.
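The greedy grid-search procedure can be sketched as follows. The surrogate utility, with its gain and clustering-penalty terms, is a stand-in assumption for the estimator Û(s), used here only to make the example self-contained.

```python
import numpy as np

# Greedy sequential placement: keep earlier sensors fixed and grid-search the
# mid-line coordinate of the next one.
def toy_utility(s):
    """Surrogate U(s): reward near-head coverage, penalize clustered sensors."""
    s = np.asarray(s)
    gain = np.sum(np.exp(-s / 0.3))
    penalty = 0.5 * np.sum(np.exp(-np.abs(s[:, None] - s[None, :]) / 0.05))
    return float(gain - penalty)

def sequential_placement(utility, n_sensors, grid=np.linspace(0.0, 0.9, 91)):
    placed = []
    for _ in range(n_sensors):
        scores = [utility(np.array(placed + [g])) for g in grid]
        placed.append(float(grid[int(np.argmax(scores))]))
    return placed

sensors = sequential_placement(toy_utility, 3)
```

As in the paper's experiments, the penalty makes each previously chosen location a poor candidate for the next sensor, so the greedy search spreads the sensors along the body instead of stacking them.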

Results
We examine the optimal arrangement of pressure gradient and shear stress sensors on the surface of a swimmer trailing a school of self-propelled swimmers. We consider two sensing objectives: (a) the size of the leading school and (b) the relative position of the school. The simulations correspond to a Reynolds number Re = L²/ν = 2000. In all experiments we use 4096 points to discretize the horizontal direction x ∈ [0, 1] and all artificial swimmers have a length of L = 0.1.
For the "size of the leading school" experiment the group size takes the values ϑ_i = 1, . . . , 8. First, we consider one configuration per group size; in this case, inferring the configuration is equivalent to inferring the number of fish in the group. To increase the difficulty, we then consider n_i different initial configurations per group size. To each configuration we assign a number ϕ_{i,ℓ} for i = 1, . . . , 8 and ℓ = 1, . . . , n_i. In total, we consider N_tot = ∑_i n_i = 61 distinct configurations, each having the same prior probability 1/N_tot. In Appendix A we present the initial conditions for all configurations. The center of mass of the school is located at x = 0.3 and, in the y-direction, at the middle of the vertical extent of the domain. We use the controllers of Section 2.1.1 to fix the streamwise and lateral spacing between neighboring fish to ∆x = ∆y = 0.15.
For the "relative position" experiment we consider three independent experiments with one, four and seven leading fish. Snapshots of the pressure field for these simulations are presented in Fig. 3. The prior probability for the position of the group is uniform in the domain [0.6, 0.8] × [0.1, 0.4]. The support of the prior probability is discretized with 21 × 31 gridpoints. Since the experiments are independent, the total expected utility function for the three cases is the sum of the expected utility of each experiment [49].
For both experiments we record the pressure gradient and shear stress on the surface of the fish using the methods discussed in Section 2.1.2. The motion of the fish introduces disturbances on its own surface. In order to distinguish the self-induced from the environmental disturbances, we freeze the movement of the following fish and set its curvature to zero. The freezing time is selected by evolving the simulation until the wakes of the leading group are sufficiently mixed and have passed the following swimmer. We found that this is the case for T = 22. The transition from swimming to coasting motion takes place during the time interval [T, T + 1]. Finally, we record the pressure gradient and the shear stress at time T + 2. The resulting sensor signal associated with the midline coordinates s for a given configuration ϑ is denoted F(ϑ; s), see Eq. (8).

Figure 4. Utility curves for the first sensor using pressure measurements. In Fig. 4a the utility estimator for the "size of the leading school" experiment is presented. Figure 4b corresponds to the utility estimator for the "relative position" experiment. We show the resulting curves for one, four and seven fish in the leading group and the total expected utility. We observe that although the form does not change drastically, the total utility increases with the size of the leading group.

Utility function for the first sensor
In this section we discuss the optimal location of a single pressure gradient sensor using the estimators in Eq. (16) and Eq. (21). Higher values of the expected utility correspond to preferable sensor locations, since the information gain at these locations is larger. The resulting utilities are plotted in Fig. 4. For all experiments we find that the tip of the head (s = 0) exhibits the largest utility, independent of the number of fish in the leading group.
At the tip of the head the two symmetrically placed sensors have the smallest distance. In Eq. (10) we have assumed that the two fish-halves are symmetric and uncorrelated. Due to the small distance of the sensors at the head, spatial correlation between the sensors across the fish-halves would decrease the utility of this location. In order to test whether the utility for sensors at the head is influenced by this symmetry assumption, we perform experiments where we place a single sensor on one side of the fish. Again in this case the location at the head is found to have the highest expected utility.
Based on the evidence that the head experiences the largest variance of pressure gradients F(ϑ; s) and that this variance is correlated with the density of sub-canal neuromasts [43], we examine the variance of the values obtained from our solution of the Navier-Stokes equations. We confirm that this observation is consistent with our simulations: independent of the number of fish, the variance in the sensor signal var_ϑ(F(ϑ; s)) is largest at s = 0.

Figure 5. Optimal sensor placement for the pressure gradient sensors and the "size of the leading school" experiment. In Fig. 5a the utility estimator for the first five sensors and in Fig. 5b the value of the utility estimator at the optimal sensor location for the first 20 sensors are presented. In Fig. 5c the distribution of the sensors on the fish surface is presented.

Sequential sensor placement
In this section we discuss the results of the sequential sensor placement described in Section 2.2.4. For the "size of the leading school" experiment we present the results in Fig. 5. In Fig. 5a the utility curve for the first five sensors is shown. We observe that the utility curve becomes flatter as the number of sensors increases. Furthermore, we observe that the location where the previous sensor was placed is a minimum of the utility for the next sensor. Figure 5b shows the utility estimator at the optimal sensor for up to 20 sensors; it is evident that the value of the expected utility reaches a plateau. In Fig. 5c the location of the sensors on the skin of the fish is presented. The numbers correspond to the iteration of the sequential procedure at which each sensor was placed. Note that the sensors are placed symmetrically.
The optimal sensor placement results for the "relative position" experiment can be found in Fig. 6. Similar to the other experiment the utility curves become flatter after every placed sensor and the location for the previous sensor is a minimum for the utility for the next sensor (see Fig. 6a). We plot the maximum of the utility for up to 20 sensors (see Fig. 6b) and observe a convergence to a constant value. In Fig. 6c the location of the first 20 sensors is presented.
For both experiments it is evident that the utility of the optimal sensor location approaches a constant value. This fact can be explained by recalling that the expected utility Eq. (15) is a measure of the averaged distance between the prior and the posterior distribution. Increasing the number of sensors leads to an increase in the number of measurements. By the Bayesian central limit theorem, increasing the number of measurements leads to convergence of the posterior to a Dirac distribution. As soon as the posterior has converged, the expected distance from the prior, and thus the expected utility, remains constant.

Figure 6. Optimal sensor placement for the pressure gradient sensors for the "relative position" experiment. In Fig. 6a the utility estimator for the first five sensors and in Fig. 6b the value of the utility estimator at the optimal sensor location for the first 20 sensors are presented. In Fig. 6c the distribution of the sensors on the fish surface is presented.

Inference of the environment
In this section we demonstrate the importance of the optimal sensor locations and examine the convergence of the posterior distribution. We compute the posterior distribution via Bayes' rule, given in Eq. (12). We set y = F(ϑ, s) and compute the posterior for different values of ϑ in the prior region. We consider measurements collected at (a) the optimal and (b) the worst sensor locations.
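This posterior evaluation on a discretized support can be sketched as below. The forward signals are illustrative toy values, not simulation outputs, and the "uninformative sensor" is modeled by a forward map that is constant over the support.

```python
import numpy as np

# Sketch: evaluate p(theta_k | y) on a discrete support, with the measurement
# taken as the noiseless forward signal of the true parameter.
def posterior(y, forward_signals, var, prior=None):
    """Discrete posterior over {theta_1, ..., theta_K} for an isotropic Gaussian likelihood."""
    G = np.asarray(forward_signals)                 # (K, dim) forward model outputs
    if prior is None:
        prior = np.full(len(G), 1.0 / len(G))       # uniform prior
    log_like = -0.5 * np.sum((y - G) ** 2, axis=1) / var
    w = prior * np.exp(log_like - log_like.max())   # stabilized, unnormalized
    return w / w.sum()

F = np.array([[0.0], [0.4], [0.8], [1.2]])          # toy signals: informative sensor
post_good = posterior(F[2], F, var=0.01)            # peaked at the true parameter
post_bad = posterior(F[2], np.zeros_like(F), var=0.01)  # sensor that sees nothing
```

As in Fig. 7, the uninformative sensor returns the (uniform) prior, while the informative one concentrates the posterior on the true parameter.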
The posterior probability for the "size of the leading school" experiment is shown in Fig. 7. We observe that the worst sensor location implies an almost uniform posterior distribution, reflecting that measurements at this sensor carry no information. On the other hand, the posterior distribution for the optimal sensor is more informative. We observe that the follower is able to identify the size of small groups with more confidence than that of larger groups. We compare the posterior for an experiment with only one configuration per group size to an experiment with multiple configurations. For multiple configurations the posterior is less informative, indicating that the second case is a more difficult inference problem. Finally, notice that the posterior for one configuration is symmetric, whereas adding multiple configurations breaks this symmetry. This fact is discussed in Appendix B.
The posterior density for the "relative position" experiment with one leading fish is presented in Fig. 8. The posteriors for the configurations with four and seven fish are similar. We compute the posterior for measurements at the best and the worst locations for one and three sensors. For the three-sensor case, the worst location was selected in each of the three steps of the sequential placement. The results for the normalized densities are shown in Fig. 8. We observe that one sensor at the optimal location already gives a very peaked posterior, and three optimal sensors infer the location with low uncertainty. This is not the case for the worst sensors, where adding more sensors does not immediately lead to a reduction of the uncertainty.

Shear stress sensors
In this section we discuss the results for the optimal positioning of shear stress sensors. We follow the same procedure as in Section 3.1 and Section 3.2. Here, we omit the presentation of the results and focus on the similarities and differences to the pressure gradient sensors.
The optimal location for a single sensor in the "size of the leading school" experiment is s* = 3.01 · 10^{−4}. For the "relative position" experiment we find the optimal location s* = 3.84 · 10^{−4}. In contrast to the optimal location for one pressure gradient sensor, the optimal shear sensor is thus not exactly at the tip of the head and lies at different positions for the two experiments. Examining the variance in the shear signal shows qualitatively the same behaviour as the utility; however, the locations of their maxima do not coincide for the shear sensors.
We perform sequential placement of 20 sensors. The resulting distribution of sensors is shown in Fig. 9. In Section 3.2 we argued, using the Bayesian central limit theorem, that the expected utility must reach a plateau when placing many sensors. For shear stress sensors we observe that the convergence is slower compared to the pressure gradient sensors. We conclude that the information gain per placed shear stress sensor is lower than for the pressure gradient sensors.
The posterior density obtained for both experiments is less informative when using the same number of shear sensors. This indicates that shear is a less informative quantity, yielding a slower convergence of the posterior.

Discussion
We present a study of the optimal sensor locations on a self-propelled swimmer for detecting the number and location of a leading group of swimmers. This optimization combines Bayesian experimental design with large-scale simulations of the two-dimensional Navier-Stokes equations. Mimicking the function of sensory organs in real fish, we used the shear stress and pressure gradient on the surface of the fish to determine the sensor feedback generated by a disturbance in the flow field.
The optimization was performed for different configurations of fish, ranging from a simple leader-follower configuration with two fish to a group of up to eight fish leading a single follower. We considered two types of information: the number of fish in the leading group and the relative location of the leading group. We find that although the general shape of the utility function varies between the two objectives, the preferred location of the first sensor, on the head of the fish, is consistent. Furthermore, we find that the utility is only weakly influenced by the number of members in the leading group.
We perform a sequential sensor placement and find that the utility converges to a constant value; thus we conclude that a few sensors suffice to infer the quantities of the surrounding flow. Indeed, we find that the optimal sensor locations correspond to a posterior distribution that is strongly peaked around the true value of the quantity of interest.
In summary, we find that changing the number of fish in the leading group does not influence the follower's ability to infer the location of the leading group, at least for groups of small size. Furthermore, we showed that by choosing the measurement locations in a systematic way we are able to infer the number of fish in the leading group and the location of our agent with high accuracy. Ongoing work examines the inclusion of further parameters, such as the motion of the swimmers, in optimally detecting disturbances from neighboring fish.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A Configurations
The configurations used for the "size of the leading school" experiment. For the configurations with three rows, the vertical extent y ∈ [0, 0.5] was discretized using 2048 gridpoints; for those with four rows it was extended to y ∈ [0, 0.75] and discretized using 3072 gridpoints.

Appendix B Posterior is not symmetric
The posterior in Fig. 7 has no mirror symmetry along the diagonal axis. This observation indicates that the posterior is not symmetric with respect to an exchange of the parameter ϑ we try to infer and the parameter ϑ_true used in the simulation. In the following we examine this observation formally. For the sake of brevity we suppress the dependence on the sensor location s. The posterior for some ϑ_i with i = 1, . . . , 8 is given by Bayes' rule, p(ϑ_i | y) ∝ p(ϑ_i) p(y | ϑ_i). (A1) In Section 2.2.3 we showed that, for multiple configurations ϕ_{i,ℓ} of a given number of fish ϑ_i, the likelihood is an equally weighted mixture of Gaussian distributions. Further assuming a uniform prior distribution, we have p(ϑ_i) p(y | ϑ_i) = (1/8)(1/n_i) ∑_{ℓ=1}^{n_i} N(y | F(ϕ_{i,ℓ}), Σ). Let us take y = F(ϕ_*) for some fixed configuration ϕ_* belonging to some number of fish ϑ_*. If we exchange ϕ_* → ϕ_{i,ℓ*} for some fixed configuration ϕ_{i,ℓ*} corresponding to ϑ_i, and ϕ_{i,ℓ} → ϕ_{*,ℓ} for the number of fish ϑ_*, we realize that unless n_i = n_* = 1 the exponents in the resulting sum of Gaussian densities are different. We remark that in the case where we have only one configuration, n_i = 1 for all i = 1, . . . , 8, the contrary is true and the posterior becomes symmetric with respect to an exchange of the parameter ϑ we try to infer and the parameter ϑ_true used in the simulation.