Article

Regression Based Iterative Illumination Compensation Method for Multi-Focal Whole Slide Imaging System †

1 Image Analysis Department, 3DHISTECH Ltd., 1141 Budapest, Hungary
2 Department of BioTech Research Center, Óbuda University, 1034 Budapest, Hungary
* Author to whom correspondence should be addressed.
This paper is an extended version of the conference paper: Paulik, R.; Élő, L.; Kiszler, G.; Kozlovszky, M.; Molnar, B. Compensation Methods for Inhomogeneous Illumination in Whole Slide Imaging System. In Proceedings of the 2019 IEEE International Work Conference on Bioinspired Intelligence (IWOBI), Budapest, Hungary, 3–5 July 2019.
Sensors 2021, 21(21), 7085; https://doi.org/10.3390/s21217085
Submission received: 3 October 2021 / Revised: 22 October 2021 / Accepted: 22 October 2021 / Published: 26 October 2021
(This article belongs to the Section Sensing and Imaging)

Abstract:
Image quality, resolution, and scanning time are critical in digital pathology. In order to create a high-resolution digital image, scanner systems apply stitching algorithms to the digitized images. Due to the heterogeneity of the tissue sample, the complex optical path, poor sample quality, or rapid stage movement, the intensities of the images can be uneven. These visible intensity distortions can have a negative effect on diagnosis and quantitative analysis. Utilizing the common areas of neighboring field-of-views, we can estimate compensations that eliminate the inhomogeneities. We implemented and validated five different approaches for compensating output images created with an area scanner system. The proposed methods are based on traditional techniques, such as adaptive histogram matching and regression-based corrections, as well as state-of-the-art methods like background and shading correction (BaSiC). The proposed compensation methods are suitable for both brightfield and fluorescent images, and they are robust against dust, bubbles, and optical aberrations. Using iterative approaches and multi-focal compensation, the proposed methods are able to correct not only fixed-pattern artefacts but also stochastic uneven illumination across neighboring or vertically adjacent field-of-views.

1. Introduction

The integration of digital imaging in medical diagnostics first started in radiology and, due to the advantages of digital files, such as the possibility of sharing, integration and archiving, the same demand appeared in routine histopathology. The digital revolution started with the introduction of whole slide imaging (WSI) technology in pathology. Different scanner devices creating large files have been introduced, presenting tissue structures in an adequate resolution with high color fidelity [1]. Nowadays, two main WSI technologies are available which solve the narrow field of view (FOV) problem of traditional optical microscopes [2,3,4,5]. The line scanning process scans the sample along a horizontal line; in contrast, area scanners use a stop-and-go mechanism to take an image of each microscopic field of view individually. The scanning process results in a series of images, which build up one full montaged picture of the whole tissue [6]. Finding an adequate solution for stitching these large sets of images is difficult, because both computational time and quality are critical in whole slide imaging.
The area scanning process is quite similar to the widely used panorama photography, and the basis of the applied stitching methods is common. A moving camera takes several pictures of one object following a lane path, and software composes them into one large, integrated image. These methods typically use an oversized FOV recording process, which results in an overlapping area on each fitting side of the image rectangle. In area scanners, there are four overlapping stripes along the sides of one FOV rectangle, which contain a few tens or hundreds of common pixel columns and rows. These areas are adequate for the fine alignment (stitching), and they can also be used for intensity compensation [7].
The images that come directly from the scanner camera output channel contain FOVs with heterogeneous intensity even though they were recorded with the same exposure time. This inhomogeneity originates from the structural heterogeneity of the tissue sample and from the optical features of the scanner hardware, such as rapid stage movement, the complex optical path, or the camera and illumination characteristics [8,9,10]. Therefore, next to the image stitching process, an intensity compensation is needed. However, there are some special requirements for this compensation task. Scanning time is a critical point because diagnostic time is limited in the pathology department: while the number of available expert pathologists is limited, the number of samples increases dynamically. Accordingly, both image quality and scanning time are critical. This situation requires an ultrafast scanning system that creates the digital slide without any delay and in an appropriate quality. The stitching operation should be fast, with minimal use of the available system resources.
There are existing techniques in the field of digital microscopy, and in other fields of science, for calibrating intensity inhomogeneities [11]. For instance, in astronomy, the related methods are known as flat-field correction [12], a common method to remove artifacts from images that are caused by detector noise or by distortions of the optical path. Another related area is panoramic photography, where different exposure times or lens distortions can cause intensity differences between the images [13,14,15]. Nowadays, the camera software of mobile phones can create and visualize panoramic photos in real time [16,17,18]. The problem of uneven illumination also appears with other medical devices such as endoscopes [19] or magnetic resonance imaging (MRI) [20].
In the field of digital microscopy, a few studies offer solutions for intensity inhomogeneities, such as the BaSiC tool [21], which is based on low-rank and sparse decomposition methods [22,23]. Some researchers applied compensation to fluorescent illuminated slides [10,24,25,26], while others to brightfield slides [8,9,27], to eliminate mosaicking artifacts. A few studies utilized traditional color histogram matching methods [6,28,29,30], which form the basis of the proposed method.
Similar research in the literature can be divided into two groups. Studies in the first group try to improve the boundaries of neighbouring images locally but do not consider the wider environment. This approach is useful if we only want to compensate the images with respect to their immediate environment, independently of their distant neighborhood. The other category of approaches tries to estimate background intensities and optical characteristics from a series of images and then compensate the images. The latter group lacks local compensation: it cannot compensate adjacent fields of view relative to each other. Our proposed solutions have both properties, and they are specialized to the field of digital microscopy.
In this article, we introduce five approaches which can be used as potential solutions to reduce the tile effects of multi-focal fluorescent and brightfield digital slides. After implementation, we investigate the advantages and disadvantages of each approach and attempt to find an adequate solution that meets WSI needs.

2. Materials and Methods

Routine breast cancer slides were used from the archive of the 1st Department of Pathology and Experimental Cancer Research of the Semmelweis University, Budapest, Hungary. Brightfield and fluorescent stained samples were randomly collected. The survey was performed with the permission of the Institutional Review Board and Regional Ethics Committee of the Semmelweis University (permit no. 7/2006).
Slides were digitized using Pannoramic™ P250 Flash and Pannoramic™ P1000 digital slide scanners (3DHISTECH Ltd., Budapest, Hungary) that utilize a Plan-Apochromat objective with 20× (for IHC) or 40× (for FISH) magnification (Zeiss, Jena, Germany), a CIS color camera (CIS Corporation, Tokyo, Japan) for brightfield, and a PCO.edge scientific Complementary Metal-Oxide-Semiconductor camera (PCO, Kelheim, Germany) for fluorescence image acquisition.
The sizes of the digital slides of brightfield samples ranged from 0.8 to 2.4 GB and the number of pixels from 50,000 × 50,000 to 100,000 × 170,000. The sizes of the used fluorescent slides were between 9 and 10 GB; the pixel numbers were around 85,000 × 75,000.
The proposed methods compensate the image channels independently: the red, green and blue channels in the case of brightfield samples, and N channels in the case of fluorescent slides. All slides in the validation set have a bit depth of 8 bits per channel. The methods are not sensitive to whether the background intensity is black (fluorescent) or white (brightfield).

2.1. Histogram Matching

In our first approach, we try to compensate the tiles using the classical histogram matching algorithm. Since it calculates a function for the whole intensity range, it can match nonlinear differences, for example, a power-law intensity change such as $i' = i^{0.5}$.
The histogram function for an image with N intensities:
$$h(i) = n_i, \qquad i \in \{0, \ldots, N-1\}$$
where $n_i$ is the number of pixels with intensity $i$ and $N$ is the number of intensities (e.g., 256).
The Cumulative Distribution Function (cdf) [21] gives the probability that any given pixel from the image has a value less than or equal to i:
$$\mathrm{cdf}(i) = \frac{\sum_{j=0}^{i} h(j)}{\sum_{j=0}^{N-1} h(j)}$$
For compensation, we calculate the histograms of the two overlapping areas ($h_1$ and $h_2$). Then, we calculate their cumulative distribution functions ($\mathrm{cdf}_1$ and $\mathrm{cdf}_2$). For each intensity value $i_1$ we find the $i_2$ for which $\mathrm{cdf}_1(i_1) = \mathrm{cdf}_2(i_2)$:
$$\mathrm{cdf}_1(i_1) = \mathrm{cdf}_2(i_2), \qquad i_1, i_2 \in \{0, \ldots, N-1\}$$
The result of the histogram matching function ($hm$, Figure 1):
$$hm(i_1) = i_2$$
Figure 2 shows two neighboring fields of view; the original histograms of their overlapping areas (Left and Right); and the compensated image (Output) with its histogram. The Output image is transformed from the Right image using the compensation toward the Left one. The premise of this method is that when we compensate two images by matching their histograms, the output images will look similar.
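As an illustration, the matching step can be sketched in a few lines of NumPy. This is our own minimal reconstruction for 8-bit images, not the scanner's implementation; the function name and the `searchsorted`-based cdf inversion are our choices:

```python
import numpy as np

def histogram_matching_lut(left_overlap, right_overlap, n_levels=256):
    """Build a lookup table mapping Right-overlap intensities to Left-overlap
    intensities by matching their cumulative distribution functions (cdf)."""
    h1, _ = np.histogram(left_overlap, bins=n_levels, range=(0, n_levels))
    h2, _ = np.histogram(right_overlap, bins=n_levels, range=(0, n_levels))
    cdf1 = np.cumsum(h1) / max(h1.sum(), 1)
    cdf2 = np.cumsum(h2) / max(h2.sum(), 1)
    # For each intensity i2 of the Right overlap, find i1 with cdf1(i1) ~= cdf2(i2).
    lut = np.searchsorted(cdf1, cdf2).clip(0, n_levels - 1)
    return lut.astype(np.uint8)
```

The returned table would then be applied to the whole Right image as `compensated = lut[right_image]`.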

2.2. Patch Based Histogram Matching

By executing the histogram-matching-based method, we can compensate the differences caused by the stochasticity of the illumination; these differences are homogeneous along the whole area of the field of view. However, if the necessary compensation differs across parts of the FOV, a more adaptive method is needed.
The simplest approach would be to execute the histogram matching method for each row [3]. However, in this case, there may not be enough information in each line to execute the matching: if only a few intensities are present, we cannot interpolate and extrapolate, since not every intensity has a cdf pair.
As an intermediate solution, we can divide the overlap area into patches, where the patch size is small enough to compensate the differences inside the FOVs but large enough to contain sufficient information for interpolation and extrapolation.
We extend the histogram function for patches:
$$ph_p(i) = n_{p,i}, \qquad i \in \{0, \ldots, N-1\}$$
where $n_{p,i}$ is the number of pixels with intensity $i$ in patch $p$ and $N$ is the number of intensities. After we calculate the histograms of two neighbouring patches ($ph_1$ and $ph_2$), we can calculate their cumulative distribution functions ($\mathrm{cdf}_{p1}$ and $\mathrm{cdf}_{p2}$):
$$\mathrm{cdf}_{p1}(i_1) = \mathrm{cdf}_{p2}(i_2), \qquad i_1, i_2 \in \{0, \ldots, N-1\}$$
We extend the histogram matching for patch-based usage (phm)
$$phm(p, i) = hm_p(i), \qquad i \in \{0, \ldots, N-1\}$$
where $hm_p$ is calculated from patch $p$ using the $\mathrm{cdf}_{p1}$ and $\mathrm{cdf}_{p2}$ distribution functions. Figure 3 shows a field of view, its four overlap areas (top, left, bottom, right) and the patches created from them.
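The patch-wise variant can be sketched by reusing the cdf matching from Section 2.1; this is again our own illustrative code, and the default of three patches per strip follows the choice reported in the Results:

```python
import numpy as np

def patch_histogram_luts(overlap_a, overlap_b, n_patches=3, n_levels=256):
    """Split two matching overlap strips into n_patches segments along their
    long axis and build one histogram-matching lookup table per patch pair,
    i.e. a discrete stand-in for phm(p, i)."""
    luts = []
    segs = zip(np.array_split(overlap_a, n_patches, axis=1),
               np.array_split(overlap_b, n_patches, axis=1))
    for seg_a, seg_b in segs:
        h_a, _ = np.histogram(seg_a, bins=n_levels, range=(0, n_levels))
        h_b, _ = np.histogram(seg_b, bins=n_levels, range=(0, n_levels))
        cdf_a = np.cumsum(h_a) / max(h_a.sum(), 1)
        cdf_b = np.cumsum(h_b) / max(h_b.sum(), 1)
        # map each intensity of strip b to the intensity of strip a
        # with the closest cumulative probability
        luts.append(np.searchsorted(cdf_a, cdf_b).clip(0, n_levels - 1))
    return luts
```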

2.3. Executing Polynomial Regression

Along the overlapping areas, there are usually intensities that are not contained in the overlapping pairs. However, their transformation needs to be estimated, since they can appear in the internal regions. This means we need to extrapolate and interpolate for these intensities.
Assuming the illumination inhomogeneities have the same characteristics over the whole range of intensities, we can estimate a regression from the known intensities.
In general, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y|x).
We can combine the patch-based histogram matching with polynomial regression:
$$x_i = i, \qquad y_i = phm(i), \qquad i \in \text{known intensities}$$
The polynomial regression model
$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_m x_i^m + \varepsilon_i$$
can be expressed in matrix form
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} =
\begin{bmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^m \\
1 & x_2 & x_2^2 & \cdots & x_2^m \\
1 & x_3 & x_3^2 & \cdots & x_3^m \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^m
\end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_m \end{bmatrix} +
\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \vdots \\ \varepsilon_n \end{bmatrix},$$
where $x$ is the vector of known intensities, $y$ is the vector of histogram-matching values, $\beta$ is the vector of parameters, and $\varepsilon$ is the error vector. In pure matrix notation:
$$y = X\beta + \varepsilon$$
The vector of polynomial regression coefficients ($\beta$) can be estimated using ordinary least squares estimation (reference):
$$\hat{\beta} = (X^T X)^{-1} X^T y$$
Combining the patch-based histogram matching with polynomial regression gives $pr(p, i)$:
$$pr(p, i) = \beta_0 + \beta_1 i + \beta_2 i^2 + \cdots + \beta_n i^n$$
where $\beta$ is calculated from patch $p$ and $n$ is the degree of the polynomial.
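The least squares fit can be sketched as follows. The helper names are ours, and we use `np.linalg.lstsq` rather than forming $(X^TX)^{-1}$ explicitly, which is numerically more stable but solves the same ordinary least squares problem:

```python
import numpy as np

def fit_patch_regression(known_intensities, matched_intensities, degree=2):
    """Fit pr(i) = b0 + b1*i + ... + bm*i^m to the (i, phm(i)) pairs observed
    in one patch, so that unseen intensities can be inter-/extrapolated."""
    x = np.asarray(known_intensities, dtype=float)
    y = np.asarray(matched_intensities, dtype=float)
    X = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    return beta

def eval_regression(beta, i):
    """Evaluate pr(p, i) for a fitted coefficient vector beta."""
    return float(sum(b * i**k for k, b in enumerate(beta)))
```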

2.4. Applying the Compensation

We calculated regressions for each patch along the overlapping areas of the tiles: one polynomial function per patch. In order to apply the result of the regression to the tiles, we first need to define an interpolation function between the patches.
Let $\mathrm{lerp}(a, b, t)$ be the linear interpolation (weighted by $t$) between $a$ and $b$:
$$\mathrm{lerp}(a, b, t) = a(1 - t) + bt$$
$$pri(x, i) = \mathrm{lerp}(pr(p, i),\ pr(p+1, i),\ t)$$
where
$$p = \lfloor xN/W \rfloor, \qquad t = \mathrm{mod}(xN/W,\ 1.0)$$
where $W$ is the width (or height) of the tiles and $N$ is the number of patches horizontally or vertically.
Being able to interpolate between the patches, we can now define a horizontal ($H$) and a vertical ($V$) compensation function:
$$\mathrm{comp}_H(x, i) = \mathrm{lerp}(pri_T(x, i),\ pri_B(x, i),\ t)$$
where $T$ and $B$ are the top/bottom overlapping areas of a tile. Similarly, we can define the vertical case:
$$\mathrm{comp}_V(y, i) = \mathrm{lerp}(pri_L(y, i),\ pri_R(y, i),\ t)$$
where L and R are the left/right overlapping areas of a tile.
The following formula ($iw$) gives the linear interpolation weight (Figure 4) in $[0, 1]$ for a pixel index $x, y$:
$$iw(x, y) = 0.5 - 0.5\,\frac{x}{w-1} + 0.5\,\frac{y}{h-1}$$
where $w$ and $h$ are the width and height of the tile. Using this weight, we can compose the horizontal and vertical compensation functions:
$$\mathrm{comp}(x, y, i) = \mathrm{lerp}\left(\mathrm{comp}_H(x, i),\ \mathrm{comp}_V(y, i),\ iw(x, y)\right)$$
To apply the interpolated adaptive histogram matching to the source image, we use the following calculation:
$$dst(x, y) = \frac{\mathrm{comp}(x, y, src(x, y))}{2} + \frac{src(x, y)}{2}$$
where $src(x, y)$ is the intensity of the input image at position $x, y$ and $dst(x, y)$ is the compensated intensity at this position.
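Putting the pieces together for a single pixel, a minimal sketch might look like this; note that the exact form of the weight `iw` is our reading of the formula above, and the function names are hypothetical:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b with weight t."""
    return a * (1.0 - t) + b * t

def compensate_pixel(src_i, comp_h_i, comp_v_i, x, y, w, h):
    """Blend the horizontal and vertical compensation values for intensity
    src_i at pixel (x, y) of a w-by-h tile, then average the blended value
    with the original: dst = (comp + src) / 2. The linear form of iw is our
    reconstruction of the paper's weight, clamped to [0, 1]."""
    iw = 0.5 - 0.5 * x / max(w - 1, 1) + 0.5 * y / max(h - 1, 1)
    iw = min(max(iw, 0.0), 1.0)
    comp = lerp(comp_h_i, comp_v_i, iw)
    return 0.5 * comp + 0.5 * src_i
```

Averaging the compensated value with the original halves the applied correction, which keeps a single faulty regression from overcorrecting a pixel.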
A brightfield scan usually uses three channels (red, green, blue), while in the case of fluorescent illumination the channel number can be up to ten or more. Applying the proposed compensation methods, we can compensate each channel independently [4,12,19].

2.5. Iterative Approach

After executing the compensation, the intensity differences along the borders of the tiles are eliminated; however, locally brighter and darker tiles are still perceptible.
We can improve the algorithm using an iterative approach that compensates the tiles considering their farther neighbors.
During the iteration, we compensate using the above methods, but in each iteration we apply a simulated-annealing-like scheme: we spread the compensation values to the neighbors with decreasing strength. Applying the proper number of iteration steps, the tiles are compensated not only by their adjacent pairs but also by farther tiles, reaching a more global optimum.
Iteration steps:
  1. Calculate the histogram functions for each patch of the overlap areas: ph(i)
  2. Calculate the histogram matching for the patches: phm(p, i)
  3. Estimate the polynomial regression for them: pr(p, i)
  4. Condition:
    a. Stop the loop after N iterations and continue with Step 5.
    b. Calculate the average histogram matching values along the patches of each tile:
      $$phm_{avg}(p, i) = \frac{\sum_{p'} phm(p', i)}{N}$$
      where $p'$ are the patches of the tile and $N$ is the number of patches in a tile.
    c. Spread the average histogram matching with decreasing strength ($w$):
      $$phm(p, i) = \mathrm{lerp}(phm(p, i),\ phm_{avg}(p, i),\ w)$$
    d. Continue with Step 3.
  5. Compensate the tiles using the compensation function: comp(x, y, i)
Figure 5 shows how a darker field of view is corrected during the iteration steps. The arrows only illustrate the compensations started from the middle field of view during the iterations; in fact, in each iteration, all fields of view are compensated by their neighbors. After the first iteration, the neighbors of the middle FOV become darker, but after a few iterations the darkness is eliminated: the fault of the middle FOV disappears without spoiling its neighborhood.
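The spreading loop can be sketched with scalar per-tile values standing in for the full phm(p, i) tables. The halving decay schedule is an assumption on our part, as the text only states that the strength decreases:

```python
def iterate_compensation(values, neighbours, n_iter=5, w=0.5):
    """Iteratively relax per-tile compensation values: each step pulls every
    tile's value toward the average of its own and its neighbours' values,
    with a strength that decays each iteration (the "spreading" step).
    `values` maps tile id -> a scalar stand-in for its phm(p, i);
    `neighbours` maps tile id -> list of adjacent tile ids."""
    values = dict(values)
    strength = w
    for _ in range(n_iter):
        avg = {t: (values[t] + sum(values[n] for n in neighbours[t]))
                  / (1 + len(neighbours[t]))
               for t in values}
        # lerp each tile toward its neighbourhood average, then decay
        values = {t: values[t] * (1 - strength) + avg[t] * strength
                  for t in values}
        strength *= 0.5
    return values
```

A darker outlier tile (a large value here) is gradually pulled toward its neighbourhood, while the decaying strength keeps distant, unrelated tiles from dominating.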

2.6. Multi-Focal Compensation

When we compensate the layers of a multi-focal scan independently, one by one, we get homogeneous two-dimensional layers. However, when we compare these layers with each other, we see that the intensities along the focal (Z) layers are different. It is possible to enhance the method with inter-layer compensation.
Take the case when we scan the slides in Z-X-Y order: the scanner moves to a specific X-Y position, scans each focal layer, and then moves to the next position. In this case, the overlapping areas are identical across layers, which we can exploit for compensating between the layers.
In this case, an overlapping area not only has left/right (or top/bottom) pairs but also pairs in the Z direction (Figure 6). The idea is to use the Z neighbors in the "spreading" step of the iterative approach (Step 4b):
$$phm_{avg}(p, i) = \frac{\sum_{p''} phm(p'', i)}{N'}$$
where $p''$ are the patches of the tile in the horizontal/vertical and Z directions and $N'$ is the number of these patches.

3. Validation

The validation process was performed using 10 slides from 10 clinical cases. The slide set was selected based on image quality: only slides with tile effect issues were selected, by manual investigation. A total of 1200 FOVs were used from the selected slide set, with an average FOV size of 2 megapixels. The methods were executed at native resolution.
Three different aspects were investigated in the validation protocol:
  • Average intensity error (error)
  • Heterogeneity of tile intensities (heterogeneity)
  • Intensity error between focal layers (zerror)

3.1. Average Intensity Error

The average intensity error (error) was measured for each tile; it was calculated from the average intensity differences in the overlapping areas of the tiles ($error_{a,b}$).
$error_{a,b}$ is the average intensity error of a tile pair:
$$error_{a,b} = \frac{\sum_{x,y} \left| o_{a,b}(x, y) - o_{b,a}(x, y) \right|}{n}$$
where $a$ and $b$ are two FOVs, $x$ and $y$ are the pixel coordinates in the overlap areas, $n$ is the number of pixels, and $o_{a,b}$ denotes the pixels of the overlap area of FOV $a$ with FOV $b$.
$$error = \frac{1}{n_c} \sum_c \frac{\sum_{a,b} error_{a,b}}{n_{c,a,b}}$$
where $c$ is the case index, $a,b$ is a tile pair of case $c$, $n_c$ is the number of cases, and $n_{c,a,b}$ is the number of tile pairs of case $c$.
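The two formulas above can be sketched as follows; the helper names are ours, and overlap areas are passed as NumPy arrays:

```python
import numpy as np

def tile_pair_error(o_ab, o_ba):
    """error_{a,b}: mean absolute intensity difference over the shared
    overlap area of FOV a and FOV b."""
    a = np.asarray(o_ab, dtype=float)
    b = np.asarray(o_ba, dtype=float)
    return float(np.abs(a - b).mean())

def average_intensity_error(cases):
    """Overall error: average the tile-pair errors within each case, then
    average across cases. `cases` is a list of cases, each a list of
    (o_ab, o_ba) overlap-array pairs."""
    per_case = [np.mean([tile_pair_error(a, b) for a, b in pairs])
                for pairs in cases]
    return float(np.mean(per_case))
```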

3.2. Heterogeneity of Compensation

One aim of the optimization is to compensate the tiles in a way that makes them look homogeneous. We calculated the differences over the overlapping areas (error), but we also need to measure the heterogeneity between the tile intensities.
Let $avg_a$ be the average intensity of the pixels of tile $a$:
$$avg_a = \frac{\sum_{x,y} src_a(x, y)}{n}$$
where $a$ is a tile, $x$ and $y$ are the pixel indices on tile $a$, and $n$ is the number of pixels of the tile.
Let $diff_{a,b}$ be the absolute difference of the average intensities of two neighboring tiles:
$$diff_{a,b} = \left| avg_a - avg_b \right|$$
where a and b are adjacent tiles (horizontally or vertically).
Let $diffo_{a,b}$ be the original (uncompensated) difference of tiles $a$ and $b$, and let $diffc_{a,b}$ be the compensated difference.
Let heterogeneity be the average difference ratio between the compensated ($diffc_{a,b}$) and uncompensated ($diffo_{a,b}$) tiles:
$$heterogeneity = \frac{1}{n_c} \sum_c \frac{\sum_{a,b} diffc_{a,b} / diffo_{a,b}}{n_{c,a,b}}$$
where $c$ is the case index, $a,b$ is a tile pair of case $c$, $n_c$ is the number of cases, and $n_{c,a,b}$ is the number of tile pairs of case $c$. This formula gives the relative heterogeneity of the differences: before compensation this value is 1.0, and the compensation should decrease it. A lower value means that the differences after compensation are lower.
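A minimal sketch of this metric for one case, with the per-pair absolute differences precomputed; the helper name and signature are ours:

```python
import numpy as np

def heterogeneity_one_case(orig_diffs, comp_diffs):
    """Relative inter-tile heterogeneity for one case: the average ratio of
    compensated to original absolute differences between the mean intensities
    of neighbouring tiles. 1.0 before compensation; lower is better. Pairs
    with a zero original difference are skipped to avoid division by zero."""
    ratios = [dc / do for do, dc in zip(orig_diffs, comp_diffs) if do != 0]
    return float(np.mean(ratios))
```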

3.3. Intensity Error between Focal Layers

We can extend the error function to analyze the differences between focal layers using their overlap areas:
$zerror_{i,j}$ is the average intensity error of a tile pair located one above the other:
$$zerror_{i,j} = \frac{\sum_{x,y} \left| o_{i,j}(x, y) - o_{j,i}(x, y) \right|}{n}$$
where $i$ and $j$ are two FOVs in neighboring focal layers, $x$ and $y$ are the pixel coordinates in the overlap areas, $n$ is the number of pixels, and $o_{i,j}$ denotes the pixels of the overlap area of FOV $i$ with FOV $j$.
$$zerror = \frac{1}{n_c} \sum_c \frac{\sum_{i,j} zerror_{i,j}}{n_{c,i,j}}$$
where $c$ is the case index, $i,j$ is a tile pair of case $c$, $n_c$ is the number of cases, and $n_{c,i,j}$ is the number of tile pairs of case $c$.

4. Results

Figure 7 and Figure 8 illustrate the differences between the original and iteratively compensated images along the edges of the tiles. To make the differences easier to see in these illustrations, their contrast has been increased.
Table 1 shows the validation results of the five algorithms, analyzed with three metrics. The average value and standard deviation are displayed for each result. Regarding the two-dimensional average compensation error (Figure 9), applying histogram-matching-based compensation resulted in less than half the error (2.68) of the input image (6.60). The patch-based (1.72) and regression-based (1.71, second-order polynomial) methods produce nearly identical errors. The iterative method (1.92) and the multi-focal approach (2.13) gave slightly larger errors than the regression-based method. We use second-order regression, since the samples are almost linear (first order) but contain exponential-like aberrations; we found third-order regression unnecessary. We chose three patches, since most of the unevenness is homogeneous inside the field of view, and only a small difference can be observed in the corners of the tiles, which is not larger than one third of the width of the field of view.
Analyzing the heterogeneity (Figure 10), we can see that, except for the iterative approach (0.89), the methods give almost the same results (0.9×). This is not surprising: the iterative approach was designed to eliminate the inter-tile heterogeneity. The number of iterations needed is largely determined by the characteristics of the sample: with a lower iteration count, the compensation cannot remove local unevenness, but with a larger one, the possibility of compensating unrelated tissue areas against each other increases. For the validation set, five iteration steps were enough.
Examining the errors between the focal layers (Figure 11), the input images had a value of 4.39. This is more than twice the error obtained when applying the histogram (2.26), patch-based (2.16), regression-based (2.08) or iterative (2.11) approaches. However, the multi-focal optimization gave the best result (1.14) for the z-error.

5. Discussion

The purpose of this study was to investigate how different histogram-matching-based compensation methods perform in the task of eliminating illumination differences in a whole slide imaging system. The present results suggest that the most accurate compensation can be obtained by the patch-based polynomial regression method extended with the iterative and multi-focal approaches; however, all five methods have their own benefits.
The methods are designed to reduce the perceptible intensity differences along the tile boundaries. In general, the more sophisticated the compensation algorithm, the higher the probability of causing undesirable artefacts in the middle of the tiles. The simplest histogram-matching-based method calculates the compensation factor from the four overlapping areas of the field of view, so the risk of false compensation is minimal. At the other end is the iterative patch-based approach: the more parts we divide the overlapping area into, the more we can reduce the differences at the tile boundaries, but the greater the chance that we need to compensate intensities at the tile center that are not represented in the overlapping areas. In such cases, we need to extrapolate excessively.
All five proposed methods have their own characteristics, from which we need to choose one that is flexible enough to correct the uneven intensities. For instance, when the inhomogeneities in our WSI system are caused by the stochasticity of the illumination, whose unevenness is linear and homogeneous throughout the field of view, the histogram matching method can be appropriate. However, if the strength of the inhomogeneities differs across parts of the field of view, we need to use a more adaptive approach. For some scanner configurations, it can be observed that the illumination stochasticity is not completely random; only some FOVs are darker than others. In this case, using the iterative extension, we can eliminate these darker areas, since the local differences become visible to the algorithm when it considers a wider neighborhood. The multi-focal extension can compensate the intensity differences between layers in multi-focus scanning; however, it should be noted that not all focus layers contain exactly the same tissue structure. They can be visually different, to an extent depending on the sample thickness and the distance between layers. If the layer distance is larger (e.g., more than a few microns), there is an increased chance of compensating against independent tissue structures.
The present study validates the fields of view before and after the execution of the compensation mostly by synthetic methods. There are two main reasons why we want to produce images with as little unevenness as possible: to make the doctor's job easier and to improve the reliability of automatic, algorithmic evaluation. In both cases, visible and detectable intensity differences decrease the accuracy. No matter how adaptive or intensity-independent the processing is, be it a traditional algorithm or a more sophisticated convolutional neural network, if it is unaware that there can be intensity errors at a certain point of the slide, it will detect incorrectly. The effect of compensation on human and machine measurements and processing could be examined in a future study.
Analyzing the results of the validation set, several ideas have been put forward to improve the proposed methods. One such optimization addresses the case when there is only a little information in an overlapping area, for example when it contains only background intensities. In this case, there is a greater chance that the real tissue structure in the middle of the field of view is compensated using a large-scale extrapolation. However, this can be detected, for example, by considering how much the intensities in the middle of the FOV deviate from those in the overlapping area. If the discrepancy is large, we need to reduce the strength of the local compensation and take better account of the wider neighborhood in the iterative approach.
Another possible improvement is the dynamic determination of the number of iterations, which is currently defined and fixed at the beginning of the model execution. With a larger number of iterations, there is a chance that the compensation uses a more distant area that does not belong to the same tissue structure as the reference field of view, causing incorrect compensation. Recognition of different tissue structures is a well-studied research field in the literature, with many approaches and methods already available, from traditional pattern recognition algorithms [31] to sophisticated deep-learning-based convolutional neural network methods [32,33,34].
In addition to digital microscopy, the proposed methods can be adapted to other disciplines in which images must be compensated against each other, improving the quality and consistency of images or samples. The compensated images may lie side by side, as in panoramic photography or satellite pictures; may follow one another in time, as in a motion picture or in astronomical imaging; or may even lie spatially behind each other, as in optical coherence tomography or magnetic resonance imaging. The field of application can be very diverse, and the proposed methods are flexible enough to be transferred with few modifications.

6. Conclusions

The use of tile compensation methods is critical in whole slide imaging (WSI). In this study, we collected a set of digital slides with tile effect issues and, after implementing five different compensation methods, performed a validation study to determine which model provides the best results.
According to the characteristics of the collected validation set, the polynomial regression method gave the best results regarding the average compensation error; however, considering tile homogeneity, the iterative method can provide better results. In the case of multi-layer slides, the multi-focal optimization can enhance the quality of the compensation.
Finally, we can conclude that each approach can be useful for FOV compensation in whole slide imaging systems, but in our validation study, we found the iterative approach the most promising, using the multi-focal extension for multi-layer slides.

Author Contributions

Conceptualization, R.P.; methodology, R.P.; software, R.P.; validation, R.P.; formal analysis, R.P.; writing—original draft preparation, R.P.; writing—review and editing, R.P. and M.K.; supervision, B.M. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board and Regional Ethics Committee of the Semmelweis University (permit no. 7/2006).

Informed Consent Statement

Not applicable; the dataset used for validation was anonymized and did not include patient data or personal information.

Data Availability Statement

Data used in this study can be made available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. An example of histogram matching: cdf1 and cdf2 are the cumulative distribution functions, i1 is the input intensity, and i2 is the output intensity.
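The CDF mapping illustrated in Figure 1 can be sketched in a few lines. This is a generic histogram-matching sketch under our own assumptions (8-bit intensities, NumPy, hypothetical function name), not the paper's implementation:

```python
import numpy as np

def match_histogram(src, ref, levels=256):
    """Map each input intensity i1 through cdf1 and the inverse of cdf2,
    so the output histogram of src approximates that of ref (cf. Figure 1)."""
    src = np.asarray(src)
    h1, _ = np.histogram(src, bins=levels, range=(0, levels))
    h2, _ = np.histogram(np.asarray(ref), bins=levels, range=(0, levels))
    cdf1 = np.cumsum(h1) / h1.sum()
    cdf2 = np.cumsum(h2) / h2.sum()
    # i2 = smallest level whose cdf2 value reaches cdf1(i1)
    lut = np.searchsorted(cdf2, cdf1).clip(0, levels - 1).astype(src.dtype)
    return lut[src]
```

In the stitching context, `src` and `ref` would be the overlap areas of two neighboring FOVs, and the resulting look-up table would then be applied to the whole source FOV.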
Figure 2. Compensation by histogram matching: first row: (Input) a part of the source FOVs, dashed rectangle: overlap area; (Output) result image compensated by histogram matching; second row: (Left and Right) the overlap areas of the input FOVs and their histograms; (Output) the overlap area of the compensated image and its histogram.
Figure 3. Patch-based overlap division: (a) original tile; (b) patches extracted from the overlap areas (4 × 3 patches); red lines: patch boundaries.
Figure 4. Interpolation weights: (a) horizontally; (b) vertically.
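The weights in Figure 4 fall off linearly from one overlap area toward the opposite one, so that corrections estimated on the two sides blend smoothly across the tile interior. A 1-D sketch under our assumptions (hypothetical names, linear ramp):

```python
import numpy as np

def blend_corrections(left_corr, right_corr, width):
    """Linearly interpolate between the correction estimated at the left
    overlap and the one at the right overlap, across a tile of the given
    width (width >= 2); the vertical direction is handled analogously."""
    x = np.arange(width) / (width - 1)      # 0 at the left edge, 1 at the right
    return (1.0 - x) * left_corr + x * right_corr
```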
Figure 5. Illustration of iterative steps: (a) first iteration; (b) second iteration; (c) third iteration; (d) fourth iteration; light red arrows: compensations in the step.
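The iterative scheme of Figure 5 can be illustrated on a single row of tiles. This toy version (multiplicative gains, repeated left-to-right sweeps) is our simplification of the idea, not the actual algorithm:

```python
import numpy as np

def iterative_row_compensation(tiles, overlap, n_iter=4):
    """Sweep a row of tiles left to right, scaling each tile so that its
    left overlap matches the (already compensated) right overlap of its
    left neighbor; repeat the sweep for n_iter passes."""
    tiles = [np.asarray(t, dtype=float).copy() for t in tiles]
    for _ in range(n_iter):
        for i in range(1, len(tiles)):
            ref = tiles[i - 1][:, -overlap:].mean()  # neighbor's right edge
            cur = tiles[i][:, :overlap].mean()       # this tile's left edge
            tiles[i] *= ref / cur                    # multiplicative gain
    return tiles
```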
Figure 6. Illustration of multi-focal compensation: (a) left neighbors of a FOV; (b) right neighbors of a FOV; (c) compensated result of the six adjacent images; red rectangles: overlapping areas.
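One possible reading of the multi-focal idea in Figure 6 is that the gains estimated on the individual focal layers of the same tile are combined into a single gain applied to every layer, keeping the layers mutually consistent. This sketch (geometric-mean pooling, hypothetical name) is purely illustrative:

```python
import numpy as np

def shared_multifocal_gain(per_layer_gains):
    """Combine the gains estimated on each focal layer of a tile into one
    shared gain; the geometric mean keeps multiplicative corrections
    symmetric (a layer brightened by 2x and one darkened by 2x cancel)."""
    g = np.asarray(per_layer_gains, dtype=float)
    return float(np.exp(np.log(g).mean()))
```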
Figure 7. Results of iterative approach: (a) input images; (b) compensated images.
Figure 8. Results of iterative approach: (a) input image; (b) compensated image without iterations; (c) iterative compensation.
Figure 9. The results of the intensity error measurement (average and std. dev.).
Figure 10. The results of the heterogeneity measurement (average and std. dev.).
Figure 11. The results of the focal layer error measurement (average and std. dev.).
Table 1. The compensation results of different methods.

| Method | Intensity Error (error), Avg. | Intensity Error, Std. Dev. | Heterogeneity, Avg. | Heterogeneity, Std. Dev. | Focal Error (zerror), Avg. | Focal Error, Std. Dev. |
|---|---|---|---|---|---|---|
| Input image | 6.60 | 1.34 | 1.00 | 0.000 | 4.39 | 0.12 |
| Histogram Matching | 2.68 | 0.62 | 0.97 | 0.026 | 2.26 | 0.45 |
| Patch-based Histogram Matching | 1.72 | 0.31 | 0.96 | 0.023 | 2.16 | 0.42 |
| Polynomial Regression | 1.71 | 0.29 | 0.96 | 0.024 | 2.08 | 0.41 |
| Iterative Approach | 1.92 | 0.44 | 0.89 | 0.053 | 2.11 | 0.46 |
| Multi-focal Approach | 2.13 | 0.48 | 0.91 | 0.048 | 1.14 | 0.23 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Paulik, R.; Kozlovszky, M.; Molnár, B. Regression Based Iterative Illumination Compensation Method for Multi-Focal Whole Slide Imaging System. Sensors 2021, 21, 7085. https://doi.org/10.3390/s21217085
