Multiscale compression entropy of microvascular blood flow signals: comparison of results from laser speckle contrast and laser Doppler flowmetry data in healthy subjects

Microvascular perfusion is commonly used to study the peripheral cardiovascular system. Microvascular blood flow can be continuously and non-invasively monitored with laser speckle contrast imaging (LSCI) or with laser Doppler flowmetry (LDF). These two optical-based techniques give perfusion values in arbitrary units. Our goal is to better understand the perfusion time series given by each technique. For this purpose, we propose a nonlinear complexity analysis of LSCI and LDF time series recorded simultaneously in nine healthy subjects.


Introduction
Analysis of microvascular perfusion presents a particular challenge in cardiovascular studies because, in pathologies such as diabetes or hypertension, the microcirculation is affected long before organ dysfunctions become clinically manifest (see, e.g., [1][2][3]). However, perfusion can exhibit high variability over time [4]. Therefore, continuous monitoring of microvascular perfusion has become of clinical interest. For this purpose, several optical-based techniques have been developed [5]. Among them, laser Doppler flowmetry (LDF) [6] and laser speckle contrast imaging (LSCI) [7] are commercially-available modalities that have the advantage of providing real-time perfusion data [8].
LDF was introduced in the 1970s [6] and now has a well-established theory [9]. In LDF, the tissue under study (skin, for example) is illuminated with a low-power laser light through a probe containing optical fiber light guides. One optical fiber leads the light to the tissue, and a second optical fiber collects the backscattered light, which is transmitted to a photodetector. LDF relies on the Doppler frequency shift that appears when light is scattered by moving blood cells of the microcirculation. The LDF perfusion is defined as the first moment of the power spectrum of the photocurrent fluctuations. An example of an LDF signal recorded at rest is shown in Figure 1. Initially, LDF was a single-point measurement technique. However, because of the spatially inhomogeneous structure of the microcirculation, laser Doppler imagers (scanning imagers) were introduced in the early 1990s [10,11]. Full-field laser Doppler instruments have been proposed more recently and are still the object of active research [12].
LSCI is another recent and full-field technique to monitor microvascular blood flow. LSCI relies on the wide-field illumination of a tissue with a laser light [7,13,14]. In the tissue, some of the photons scatter dynamically from moving blood cells. Because of the motion of the moving blood cells in the illuminated tissue, a dynamic speckle image is obtained on the detector, which is a camera. When the exposure time of the camera is longer than the time scale of the speckle intensity fluctuations (typically less than 1 ms for biological tissues), the camera integrates the intensity variations, which results in a blurring of the speckle pattern. The degree of blurring is quantified by the speckle contrast K, which is calculated as K = σ_N/µ_N, where σ_N and µ_N are, respectively, the standard deviation and mean of the pixel intensity in a neighborhood N around the pixel P(x, y). For a static speckle pattern and under ideal conditions (perfectly monochromatic and polarized waves and the absence of noise), the standard deviation equals the mean intensity, and therefore, the speckle contrast is equal to one. With movements in the tissue under study, the speckle pattern is blurred and the standard deviation of the intensity becomes smaller than the mean intensity, leading to a reduced speckle contrast. Perfusion is then computed from the inverse of the contrast K. An example of a perfusion image recorded at rest is shown in Figure 2. LSCI leads to images with high temporal and spatial resolution [15]. Moreover, laser speckle contrast imagers can be designed with low-cost devices [16]. In order to better understand LDF signals and to extract physiological information from the data, LDF time series have been the subject of many works, such as wavelet-based or entropy-based studies (see, e.g., [17][18][19][20][21][22]).
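As a concrete illustration, the contrast computation K = σ_N/µ_N can be sketched in a few lines. This is a minimal sketch only: the neighborhood size n is an illustrative choice, and commercial imagers use optimized implementations.

```python
import numpy as np

def speckle_contrast(img, n=7):
    """Local speckle contrast K = sigma_N / mu_N over an n x n neighborhood N
    around each pixel (valid region only, no padding). Assumes the mean
    intensity is nonzero inside every neighborhood."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    K = np.empty((h - n + 1, w - n + 1))
    for i in range(h - n + 1):
        for j in range(w - n + 1):
            patch = img[i:i + n, j:j + n]
            K[i, j] = patch.std() / patch.mean()  # sigma_N / mu_N
    return K
```

For a fully developed static speckle pattern, the intensity standard deviation equals the mean, so K is close to one; blurring caused by tissue motion lowers the standard deviation and hence K, and perfusion is then derived from the inverse of K.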
In contrast, and probably because laser speckle contrast imagers have been commercialized only recently, LSCI data have been the subject of very few analyses [23]. Moreover, as both LDF and LSCI give perfusion values in arbitrary units, works aiming at comparing the results given by the two techniques have become necessary [24][25][26][27]. Thus, it has been reported that LSCI and LDF do not probe the same volume of tissue [28]: the penetration depth of LSCI is shallower than that of LDF for similar wavelengths. Moreover, the possible linearity between the perfusion values given by the two techniques has also been studied [29]. However, other comparisons between LDF and LSCI perfusion values are needed, and works aiming at better understanding laser speckle contrast images are also necessary.
Our goal herein is two-fold: (i) to better understand LSCI through an investigation of LSCI time series complexity on multiple time scales; for this purpose, an algorithmic information theory-based concept relying on a coarse-graining procedure and a data compressor (multiscale compression entropy) is applied to LSCI time series; (ii) to compare the multiscale compression entropy of LSCI data with that of LDF signals; for this purpose, our framework is also applied to LDF signals recorded simultaneously with the laser speckle contrast images.

Measurement Procedure
The measurement procedure was performed in nine healthy subjects between 20 and 41 years old who provided written, informed consent prior to participation. The study was carried out in accordance with the Declaration of Helsinki.
For each subject, LSCI and LDF data were recorded simultaneously on the dorsal face of the right forearm. For this purpose, a PeriCam PSI System (Perimed, Sweden) with a laser wavelength of 785 nm and an exposure time of 6 ms was used to record LSCI data. The distance between the laser head and the skin was set to around 15 cm [30]. This gave images with a resolution of 0.4 mm. LSCI data were recorded with a sampling frequency of 18 Hz in arbitrary laser speckle perfusion units (APU LSCI). Moreover, a laser Doppler flowmeter with a 780-nm wavelength (PeriFlux System 5000, Perimed, Sweden) was used to acquire LDF signals. The LDF probe was positioned directly on the forearm. LDF perfusion values were assessed in arbitrary laser Doppler perfusion units (APU LDF) and recorded on a computer via an analog-to-digital converter (Biopac System) with a sampling frequency of 20 Hz. A sub-sampling to 18 Hz was then performed.
In what follows and for each subject, 1,600 laser speckle contrast images and 1,600 LDF samples recorded simultaneously are processed. For each subject, a pixel was randomly chosen in the first laser speckle contrast image of the image sequence. This pixel was followed in time to obtain a time series of 1,600 samples. Moreover, around each pixel chosen above, square regions of interest (ROI) with sizes of 3 × 3 pixels² (1.44 mm²) and 15 × 15 pixels² (36 mm²) were determined [26,31,32]. For each ROI, the mean of the pixel values (in APU LSCI) inside the ROI was computed and followed on each image of the sequence to obtain a time-evolution signal (see Figure 3). For each subject, we therefore had three time series from LSCI data: one from an ROI of size 1 × 1 pixels², another from an ROI of size 3 × 3 pixels² and a third from an ROI of size 15 × 15 pixels², as well as one time series from the LDF signal. Each of these time series was processed to compute its multiscale compression entropy.

Compression Entropy
According to algorithmic information theory, the length of the smallest algorithm that produces a string is the entropy of that string (Chaitin-Kolmogorov entropy) [33]. The development of such an algorithm is theoretically impossible, but data compression techniques can provide a good approximation. If the length of the text to be compressed is sufficiently large and if the source is an ergodic process, the ratio of compressed to original text length represents the compression entropy per symbol. Based on this, we propose to assess the entropy of LSCI and LDF time series through their compressibility [34]. For this purpose, a modified version of the original LZ77 algorithm for lossless data compression introduced by Ziv and Lempel [35] is used.
The main steps of our algorithm, when applied to a time series s = {s_1, . . ., s_L}, are the following (in what follows, a subsequence (s_m, s_m+1, . . ., s_n) of s is noted s_m→n) [34]: (1) choose a sliding window size w and a lookahead buffer size b; (2) encode the first w elements of the time series without compression; (3) set a pointer p = w + 1; (4) find the largest integer n ≤ b such that the subsequence s_p→p+n−1 matches a subsequence of the same length beginning at a position v within the preceding window s_p−w→p−1; (5) encode the integers n and v into unary-binary code and the value of s_p+n without compression; (6) set the pointer to p = p + n + 1 and iterate from Step 4 until the end of the time series. Compression entropy therefore indicates the degree to which the time series under study can be compressed through the detection of recurring sequences. The more frequent the recurring sequences (and thus, the more regular the time series), the higher the compression rate. The ratio of the lengths of the compressed and uncompressed time series is used as a complexity measure and identified as the compression entropy. Compared to Shannon entropy, compression entropy has the advantage of taking into account the order of the samples within the sequence under study: in contrast to the Shannon formula, it considers the previous values of the system output.
In our work, different values of the sliding window size w and the lookahead buffer size b have been tested: 1 ≤ w ≤ 20 and 1 ≤ b ≤ 10 [34]. Moreover, all of our time series were normalized before the compression entropy computation (subtraction of the mean and division by the standard deviation).
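The steps above can be sketched as follows for an integer-valued series. This is a minimal sketch: the bit-level accounting (Elias unary-binary codes, fixed raw symbol width) is an assumption made for illustration, and the exact costs of the implementation in [34] may differ.

```python
import math

def elias_gamma_bits(k):
    """Length in bits of the unary-binary (Elias gamma) code of a positive integer k."""
    return 2 * int(math.log2(k)) + 1

def compression_entropy(s, w=5, b=6, bits_per_symbol=None):
    """Ratio of compressed to uncompressed length of an integer sequence s,
    using an LZ77-style scheme with sliding window size w and lookahead
    buffer size b (a sketch of the algorithm described in the text)."""
    L = len(s)
    if bits_per_symbol is None:
        # bits needed to store one raw symbol of the integer alphabet
        bits_per_symbol = max(1, math.ceil(math.log2(len(set(s)) + 1)))
    compressed = w * bits_per_symbol      # steps 1-2: first w symbols verbatim
    p = w                                 # step 3: pointer (0-based here)
    while p < L:
        # step 4: longest match (length n <= b) of the lookahead buffer
        # beginning at backward offset v inside the window s[p-w:p]
        n_best, v_best = 0, 0
        for v in range(1, w + 1):
            n = 0
            # the modulo handles overlapping (periodic) LZ77 matches
            while n < b and p + n < L - 1 and s[p - v + (n % v)] == s[p + n]:
                n += 1
            if n > n_best:
                n_best, v_best = n, v
        # step 5: encode n and v with unary-binary codes, next symbol verbatim
        compressed += elias_gamma_bits(n_best + 1)
        if n_best > 0:
            compressed += elias_gamma_bits(v_best)
        compressed += bits_per_symbol
        p += n_best + 1                   # step 6: advance past match + literal
    return compressed / (L * bits_per_symbol)
```

A regular series yields long matches and a ratio well below one; an irregular series yields few matches and a ratio close to one, which is why the ratio is read as a regularity (entropy) measure.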

Multiple Time Scales Compression Entropy
In our work, we propose to compute compression entropy on multiple time scales. For this purpose, a coarse-graining procedure is used [36]: the original time series {s_1, . . ., s_i, . . ., s_L} is divided into non-overlapping windows of length τ, and the values of the data points inside each window are averaged, as y_j^(τ) = (1/τ) Σ_{i=(j−1)τ+1}^{jτ} s_i, where τ represents the scale factor and 1 ≤ j ≤ L/τ. Each coarse-grained time series is L/τ samples long. Afterwards, the coarse-grained time series were multiplied by 10 and rounded to integer values in order to reduce the alphabet in which they take their values. Then, for each coarse-grained time series, the above-mentioned compression entropy algorithm was applied. The multiscale assessment was performed over eight scales to take into account the finite length of our time series, postulating that at least 200 samples are required for a reliable entropy estimation [37].
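The coarse-graining and alphabet-reduction steps can be sketched as follows (function names are illustrative):

```python
import numpy as np

def coarse_grain(s, tau):
    """Non-overlapping averages: y_j = mean(s[(j-1)*tau : j*tau]), j = 1..L//tau."""
    s = np.asarray(s, dtype=float)
    n = len(s) // tau
    return s[:n * tau].reshape(n, tau).mean(axis=1)

def multiscale_series(s, max_scale=8):
    """Normalize the series, then coarse-grain it at scales 1..max_scale and
    map each coarse-grained series to integers (x10, rounded) to reduce the
    alphabet before the compression step."""
    s = np.asarray(s, dtype=float)
    s = (s - s.mean()) / s.std()          # normalization described above
    return [np.rint(10 * coarse_grain(s, tau)).astype(int)
            for tau in range(1, max_scale + 1)]
```

With L = 1,600 samples, the coarse-grained series at scale τ = 8 keeps 200 samples, the postulated minimum for a reliable entropy estimation.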

Results on LSCI Time Series
Our results show that, for LSCI data, the compression entropy shows a global decreasing trend with scale, whatever the ROI size chosen and whatever the window size w or the lookahead buffer size b (see Figures 4 and 5). However, this decrease is less pronounced for the largest ROI size considered (15 × 15 pixels²); see Figure 6.
Moreover, we note that, for a given sliding window size w larger than one and for all of the ROI sizes, the compression entropy values are higher for a lookahead buffer size b = 1 than for the other buffer sizes (see Figures 4 and 5).
Furthermore, for a given lookahead buffer size b and a given scale factor τ, the compression entropy values decrease when the window size w increases. This is true for the three ROI sizes analyzed (see Figure 7). All of the compression entropy values are between 0.45 and 0.96.
We also note that, for scale factors larger than three, the largest ROI size gives the highest compression entropy values and, therefore, the least pronounced decreasing trend (see Figure 6).

Results on LDF Signals
For LDF signals, our results show that the pattern of the multiscale compression entropy changes with the window size w. For a window size w between one and four, we observe an increase of the compression entropy from scale factor τ = 1 up to a distinctive scale factor (whose value depends on the window size w) and then a decrease until scale factor τ = 8 (see Figure 8). This non-monotonic pattern is the same for all of the lookahead buffer sizes b analyzed (1 ≤ b ≤ 10): the curves are close whatever the value of the lookahead buffer size (see Figure 8). For sliding windows larger than four, the increasing trend is less pronounced (see Figure 9). Moreover, we observe that the compression entropy values are lower than one for all of the scales analyzed and for all of the window sizes w and lookahead buffer sizes b studied. For a given lookahead buffer size b and a given scale factor τ, the compression entropy decreases when the sliding window size w increases; see Figure 10.
We also note that, for scale factors larger than three, the compression entropy values of LDF signals become larger than those of LSCI time series, whatever the ROI size and the sliding window size w (see Figure 11). Moreover, for scale five and beyond, the compression entropy values of LDF signals show significant statistical differences from those of Gaussian white noise (see Figure 11, lower panel).

Discussion
The measure of entropy used in the present study describes the information contained in the time series as the ratio of compressed time series length to original time series length. Our results show that the compression entropy values for LSCI and LDF time series are less than one for all of the scales analyzed, suggesting that, at all scales, there are repetitive structures within the data fluctuations.
We also report that, for LSCI time series, the pattern of the multiscale compression entropy shows a global decreasing trend with scale, especially for the smallest ROI sizes chosen on LSCI data (1 × 1 pixels² and 3 × 3 pixels²) and whatever the window size w or the lookahead buffer size b. This decrease is similar to the one observed for Gaussian white noise (see Figure 11). Indeed, for white noise, as the length of the window used for coarse-graining increases, the average value inside each window converges to a fixed value, because no new structures are revealed on larger time scales. Therefore, the standard deviation monotonically decreases with the scale factor, and the same is found for the compression entropy. The global decreasing trend observed for LSCI time series computed from the smallest ROI sizes is in accordance with other works, where it has been reported that LSCI single pixels (or small ROI sizes) followed in time have statistical properties similar to those of white noise [31,32]. It has also been reported that, for larger ROI sizes on LSCI data, the signal-to-noise ratio increases and the behavior asymptotically approaches that of LDF [31,32]. Our findings are therefore consistent with these previous results.
Moreover, for LSCI and LDF time series, we observe that the compression entropy values decrease when the window size w increases. The improved compressibility for larger windows might be due to the fact that matching patterns become more likely, which in turn could reflect physiological activities; further work is needed to better understand these variations. It should also be noted that larger values of the window size w increase the temporal duration of the dynamics that are considered. We also observe that, for scale factors larger than three, the compression entropy values of LDF signals become larger than those of LSCI time series, but are closer to those of LSCI time series computed with an ROI size of 15 × 15 pixels² than to those computed with smaller ROI sizes. This means that, for scale factors larger than three, the complexity of LDF signals becomes closer to that of LSCI time series with the largest ROI size. These results are in accordance with previous works where it has been reported that: (i) there are different dynamical patterns for LSCI and LDF data [26]; (ii) when the ROI size increases in LSCI data and when LSCI pixel values are averaged in an ROI and followed in time, the patterns of the resulting time series approach those of LDF signals [26,31,32,38].
Several studies have been carried out on compression entropy [34,37,[39][40][41][42][43][44][45][46]. Through these works, it has been shown that compression entropy is sensitive to vagal heart rate modulation [40]. Moreover, some authors reported different entropy values of heart rate variability (HRV) before the onset of ventricular tachycardia [34,42] and a decrease of the compression entropy of HRV time series of athletes after a training camp compared to the one computed before the training camp [39]. Furthermore, through compression entropy analyses, some authors have shown a significant reduction in the complexity of heart rate modulation in acute, untreated patients suffering from schizophrenia compared to control subjects [41,46]. Studies on HRV from patients with chronic heart failure and type 1 diabetes mellitus have also been conducted [43,44]. Thus, it has been shown that compression entropy is significantly lower in high-risk heart failure subjects than in low-risk heart failure subjects and that compression entropy is significantly reduced in patients with type 1 diabetes mellitus compared to healthy controls [44]. The influence of negative mood on heart rate compression entropy in healthy subjects has also been investigated [45].
However, very few studies have used compression entropy on multiple time scales [37]. Computation of compression entropy on multiple time scales was applied to RR interval time series recorded in young and aged subjects [37]. The authors reported significantly different entropy characteristics over multiple time scales for aged compared with younger subjects [37]. Reduced heart rate variability in elderly subjects is well known and has been hypothesized to be caused by reduced vagal heart rate modulation [47]. However, to the best of our knowledge, compression entropy on single or multiple time scales had never been studied on either LSCI data or LDF time series. Our work shows that LSCI and LDF signals contain repetitive structures within their fluctuations. Moreover, the compression entropy decreases when the window size w increases. Further work is needed in order to analyze the role played by physiological activities in this decrease. Compression entropy may also depend on the data acquisition frequency, because the latter may modify the alphabet in which the time series take their sample values. Our data were sampled at 18 Hz, but further investigations are needed to analyze how compression entropy values may change with the sampling rate. Furthermore, future works have to be conducted to compare multiscale compression entropy values in healthy and pathologic subjects.
In our work, the multiscale assessment was performed over eight scales, postulating that at least 200 samples are required for a reliable entropy estimation. In their initial algorithm dealing with multiscale entropy (an algorithm based on sample entropy), Costa et al. proposed that the shortest coarse-grained time series from the cardiovascular system contain 1,000 samples or even more [36,48]. The latter authors mentioned that the minimum number of data points required to apply the multiscale entropy method depends on the level of accepted uncertainty. For the multiscale compression entropy algorithm used in our study, further work is needed to determine the optimal length of the time series at the maximum scale.
LSCI and LDF techniques are, by definition, very sensitive to movements. Therefore, long recordings (several minutes or even hours) cannot be obtained without numerous movement artifacts. As a consequence, the time scales studied with multiscale compression entropy cannot reach those at which physiological activities have traditionally been studied.

Conclusions
Our findings show that LSCI and LDF signals present compression entropy values lower than one for all of the scales analyzed. This suggests that, at all scales, there are repetitive structures within the data fluctuations. Moreover, for scale factors larger than three, the compression entropy values of LDF signals become larger than those of LSCI time series, but are closer to those of LSCI time series computed with an ROI size of 15 × 15 pixels² than to those computed with smaller ROI sizes. For scale factors larger than three, the complexity of LDF signals may therefore become different from that of LSCI time series, but the larger the ROI size on LSCI data, the closer the complexity of the LSCI time series to that of the LDF signals. Finally, LDF signals at larger scales seem to have structures that are different from those of a Gaussian white noise process. This is not observed for LSCI time series when small ROI sizes are taken into account.

Figure 1. Laser Doppler flowmetry signal recorded on the forearm of a healthy subject at rest. The signal has been resampled to 18 Hz.

Figure 2. Perfusion image recorded with the laser speckle contrast imaging (LSCI) technique. The image has been recorded on the forearm of a healthy subject at rest.

Figure 3. Laser speckle contrast time series determined from a stack of laser speckle contrast images recorded on the forearm of a healthy subject at rest. The signals have a sampling frequency of 18 Hz. The time series have been computed from an ROI size of 1 × 1 pixels² (top), 3 × 3 pixels² (middle) and 15 × 15 pixels² (bottom) on the laser speckle contrast images (see the text for details).

Figure 4. Multiscale compression entropy values computed from nine LSCI time series with an ROI size of 1 × 1 pixels² (top), 3 × 3 pixels² (middle) and 15 × 15 pixels² (bottom). Data are presented as group means and standard deviations. The computations have been performed with a window size w = 5. The results for different lookahead buffer sizes b are shown. For each panel, a two-way ANOVA test shows a significant statistical effect of scale (for a given b value, we obtain significantly different compression entropy values between scales). The test also shows significant statistical differences between compression entropy values computed with b = 1 and those computed with b = 5 or 10 (p < 0.0001).

Figure 5. Multiscale compression entropy values computed from nine LSCI time series with an ROI size of 1 × 1 pixels² (top), 3 × 3 pixels² (middle) and 15 × 15 pixels² (bottom). Data are presented as group means and standard deviations. The computations have been performed with a window size w = 10. The results for different lookahead buffer sizes b are shown. For each panel, a two-way ANOVA test shows a significant statistical effect of scale (for a given b value, we obtain significantly different compression entropy values between scales). The test also shows significant statistical differences between compression entropy values computed with b = 1 and those computed with b = 5 or 10 (p < 0.0001).

Figure 6. Multiscale compression entropy values computed from nine LSCI time series with different ROI sizes. Data are presented as group means and standard deviations. The computations have been performed with a lookahead buffer size b = 6 and with a window size w = 2 (top), w = 15 (middle) and w = 20 (bottom). A two-way ANOVA test shows a significant scale effect (p < 0.0001) and an ROI size effect for the middle and bottom panels (p < 0.01).

Figure 7. Multiscale compression entropy values computed from nine LSCI time series with an ROI size of 1 × 1 pixels² (top), 3 × 3 pixels² (middle) and 15 × 15 pixels² (bottom). Data are presented as group means and standard deviations. The computations have been performed with a lookahead buffer size b = 6. The results for different window sizes w are shown. For each panel, a two-way ANOVA test shows a significant scale effect and window size effect (p < 0.0001).

Figure 8. Multiscale compression entropy values computed from nine laser Doppler flowmetry (LDF) signals. Data are presented as group means and standard deviations. The computations have been performed with a window size w = 1 (top), w = 2 (middle) and w = 3 (bottom). The results for different lookahead buffer sizes b are shown. For each panel, a two-way ANOVA test shows a significant scale effect, but no significant effect of b.

Figure 9. Multiscale compression entropy values computed from nine LDF signals. Data are presented as group means and standard deviations. The computations have been performed with a window size w = 18 (top) and w = 20 (bottom). The results for different lookahead buffer sizes b are shown. For each panel, a two-way ANOVA test shows significant statistical differences between scales and between compression entropy values computed with b = 1 and those computed with b = 5 or 10 (p < 0.0001).

Figure 10. Multiscale compression entropy values computed from nine LDF signals. Data are presented as group means and standard deviations. The computations have been performed with a lookahead buffer size b = 6. The results for different sliding window sizes w are shown. A two-way ANOVA test shows a significant scale effect and window size w effect (p < 0.0001).

Figure 11. Multiscale compression entropy values computed from a Gaussian white noise (mean: zero; variance: one) and from nine LDF signals and nine LSCI time series recorded simultaneously. Data are presented as group means and standard deviations. The computations have been performed with a window size w = 4 and a lookahead buffer size b = 2 (top) and with a window size w = 8 and a lookahead buffer size b = 5 (bottom). For each panel, a two-way ANOVA test shows a significant scale effect (p < 0.0001). In the lower panel, compression entropy values from LDF signals show significant statistical differences from those of Gaussian white noise, for scale five and beyond.