
Modified Maximum Entropy Method and Estimating the AIF via DCE-MRI Data Analysis

by
Zahra Amini Farsani
1,2,* and
Volker J. Schmid
2
1
Statistics Department, School of Science, Lorestan University, Khorramabad 68151-44316, Iran
2
Bayesian Imaging and Spatial Statistics Group, Institute of Statistics, Ludwig-Maximilians-Universität München, Ludwigstraße 33, 80539 Munich, Germany
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(2), 155; https://doi.org/10.3390/e24020155
Submission received: 6 December 2021 / Revised: 16 January 2022 / Accepted: 17 January 2022 / Published: 20 January 2022

Abstract:
Background: For the kinetic models used in contrast-based medical imaging, the specification of the arterial input function (AIF) is essential for estimating the physiological parameters of the tissue by solving an optimization problem. Objective: In the current study, we estimate the AIF based on a modified maximum entropy method. The effectiveness of several numerical methods for determining the kinetic parameters and the AIF is evaluated in situations where not enough information about the AIF is available. The purpose of this study is to identify an appropriate method for estimating this function. Materials and Methods: The modified algorithm combines the maximum entropy approach with an optimization method, the teaching-learning method. Here, we apply this algorithm in a Bayesian framework to estimate the kinetic parameters after specifying the unique form of the AIF by the maximum entropy method. We assessed the proficiency of the proposed method for determining the kinetic parameters in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), comparing it with several other parameter-estimation methods and a standard fixed-AIF method. A previously analyzed dataset consisting of contrast agent concentrations in tissue and plasma was used. Results and Conclusions: We compared the accuracy of the parameter estimates obtained from the MMEM with those of the empirical method, the maximum likelihood method, moment matching ("method of moments"), the least-squares method, the modified maximum likelihood approach, and our previous work. Since the current algorithm does not suffer from the starting-point problem in the parameter estimation phase, it can find the model closest to the empirical model of the data. The results indicated the Weibull distribution as an appropriate and robust AIF and illustrated the power and effectiveness of the proposed method for estimating the kinetic parameters.

1. Introduction

Determining the probability density function of a random variable based on observations is a major and long-standing issue in statistics. In recent years, various parametric and non-parametric methods have been introduced for estimating the probability density function of a random variable based on observations, but very limited work has been reported on optimization methods. The maximum entropy method (MEM) is one of the major methods for estimating and determining the probability density with a high level of accuracy and efficiency and minimum bias. It is applied to obtain the unknown density by solving an optimization problem. The principle of maximum entropy, as a method of statistical inference, is due to Jaynes [1]. His idea is that this principle leads to the selection of a probability density function that is consistent with our knowledge and introduces no unwarranted information. Any probability density function satisfying the constraints that has smaller entropy will contain more information (less uncertainty), and thus says something stronger than what we are assuming [1,2,3,4]. Entropy maximization and related concepts have been frequently utilized in the past ten years to analyze large biological datasets in various fields. These fields range from determining macromolecular interactions and structures [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20] to inferring signaling [21,22,23,24,25] and regulatory networks [26,27,28] and the coding organization in neural populations [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43] based on DNA sequence analyses (the detection of specific binding sites, for instance) [42,43,44,45,46]. MEM is a powerful vehicle for reconstructing images from various datasets. It is also commonly used in radio astronomical interferometry, which routinely deals with images with high dynamic range and up to a million pixels [47,48].
In this paper, a concise and basic introduction to entropy maximization and its applicability for deriving models from biological datasets, especially in kinetic models and image processing via DCE-MRI, is provided. DCE-MRI is a fast and noninvasive method for quantitative perfusion analysis in soft tissue, using a contrast agent (CA). It is used extensively for the analysis of microvascular blood in a variety of clinical applications such as the detection, characterization, and therapeutic monitoring of different diseases [49,50,51,52,53,54]. In DCE-MRI, quantitative analysis is often applied to a whole tumor region of interest (ROI) [55], in which the contrast agent concentration time curve for all voxels in the tumor is used to estimate a single set of kinetic parameters (such as K^trans, the volume transfer constant between blood plasma and the extracellular extravascular space per minute, and k_ep, the rate constant between the extracellular extravascular space and blood plasma per minute [56]) for each patient study.
In recent years, several approaches have been proposed to quantify the perfusion of CA into tissues and to estimate the related perfusion parameters (indices) from concentration-time or signal curves [57,58,59,60]. In DCE-MRI, quantification of the perfusion involves measuring the concentration of the CA in tissue over time. This time curve is then modeled using kinetic processes, where the kinetic parameters are of clinical interest [55]. The kinetic model of the tissue is expressed as an ordinary differential equation, which can be solved analytically, resulting in a model that is nonlinear in the contrast agent concentration [61,62,63,64,65,66,67,68,69].
One fundamental requirement for the Tofts pharmacokinetic analysis is knowledge of the arterial input function (AIF), that is, the time curve of the CA concentration in the left ventricular blood pool. Although the AIF itself is not clinically relevant, its correct determination is very important for the correct estimation of the kinetic parameters [62,70]. Since the obtained rate constants are heavily dependent on the AIF [71,72,73,74,75], an accurate and precise measurement is necessary for their absolute and reliable quantification. Alternatively, a simplified approach, such as a population-averaged AIF, can be used. However, (large) variabilities in cardiac output, between patients and within patients over time, are no longer taken into account with this approach. If this variability in cardiac output can be accounted for by precise measurement of the AIF, the accuracy and repeatability of the kinetic parameters should be superior to the use of a population-averaged AIF. Some researchers have shown that a population-averaged AIF can result in better repeatability [76,77], whereas others report the opposite [78,79]. It is possible that repeatability depends on the imaged body part and the imaging sequence parameters, but also on the choice of the artery for AIF measurement.
However, in many imaging applications, e.g., for patients with breast cancer, it is not possible to measure the AIF directly from imaging, as no large vessel is in the field of view. Thus, assumed AIFs from the literature are often used, e.g., bi-exponential functions with parameters derived by [80] or [81], or a mixture of two Gaussians with an exponential [61,62,63,64,65,66,67,68,69,76].
We have previously developed a method for estimating the AIF and the kinetic parameters in DCE-MRI [82]. This method was developed in response to a need in the medical imaging community for the objective comparison of estimations made using different statistical methods, for example, the Bayesian method and MLE. The main problem of our previous algorithm was the dependence of Newton’s method on the starting point, which was a uniform random number.
The previous algorithm combined the maximum entropy method with Newton's method to estimate the AIF from the CA time curve data in plasma, also called "blind estimation" of the AIF, and used the maximum a posteriori approach (MAP) to determine the kinetic parameters [82]. In this paper, we propose an improved algorithm for blind AIF estimation using a combination of the maximum entropy method (MEM) and teaching-learning based optimization in the step of estimating the λ's, for assessing observer performance in the classification tasks using the available information. This circumvents the issue of random starting points in the previous algorithm. The approach is based on the influence of a teacher on students. Like other nature-inspired algorithms, TLBO is a population-based method and uses a population of solutions to proceed to the global solution. The population is considered a group of learners or a class of learners. The process of TLBO has two parts: the first part is the 'Teacher Phase' and the second part is the 'Learner Phase'. 'Teacher Phase' means learning from the teacher, and 'Learner Phase' means learning through interaction between learners [83,84]. The proposed algorithm is, therefore, more robust. Among all candidate models, it ultimately proposes the Weibull distribution as a model for the AIF via the MEM approach. We performed extensive studies using empirical data to better understand the performance of our method. In addition, a comparison was conducted among four other estimation methods on the DCE-MRI dataset, and the results of the newly recommended method were compared with the previous work [55,82].
The rest of this paper is structured as follows: Section 2 describes the basic structure of the proposed method for DCE-MRI analysis: the proposed modified maximum entropy method (MMEM), the TLBO algorithm, and MAP. Section 3 covers alternative approaches for estimating the parameters. Section 4 gives an example showing the step-by-step analysis of the dataset. Section 5 contains the application of a complete DCE-MRI study using the proposed method and its evaluation. Section 6 concludes the paper.

Data Description

As an example data set, we use a previously analyzed breast cancer data set [55]. The data were provided by the Paul Strickland Scanner Center at Mount Vernon Hospital in Northwood, UK. Pre-treatment DCE-MRI scans of twelve patients were available. In each case, 46 images were recorded every 11.9 s after administration of the contrast agent Gadolinium-DTPA. For the calculation of T1 values, we used a two-point measurement with calibration curves as described in [85,86]. In DCE-MRI, T1 is the relaxation time, also known as the spin-lattice relaxation time. It is a measure of how quickly the net magnetization vector recovers its ground state, i.e., the time constant for the regrowth of longitudinal magnetization. The T1 values are computed as a ratio of a T1-weighted fast low-angle shot (FLASH) image and a proton-density-weighted FLASH image.
To measure the contrast agent concentration C_t(t), the signal intensity is converted to T1 relaxation time values using T1-weighted images, proton-density-weighted images, and data from calibration phantoms with known T1 relaxation times [87]. The Gd-DTPA concentration can then be computed via

$$ C_t(t) = \frac{1}{r_1} \left( \frac{1}{T_1(t)} - \frac{1}{T_{10}} \right), $$

where T_{10} is the T1 value without contrast, computed as the mean value of the first four images, and r_1 = 4.24 l/s/mmol is the longitudinal relaxivity of protons in vivo due to Gd-DTPA. The imaging parameters of the T1-weighted FLASH images were TR = 11 ms, TE = 4.7 ms, α = 35°; the parameters of the proton-density-weighted image were TR = 350 ms, TE = 4.7 ms, α = 6°. The field of view was the same for all scans, 260 × 260 × 8 mm per slice, so the voxel dimensions are 1.016 × 1.016 × 8 mm. A scan includes three sequential slices of 256 × 256 voxels and one slice placed in the contralateral breast as a control, which we do not use for our analysis. A dose of D = 0.1 mmol per kg body weight of Gd-DTPA was injected after the fourth scan via a power injector at 4 mL/s, with a 20 mL saline flush also at 4 mL/s.
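The T1-to-concentration conversion above can be sketched as follows (a Python sketch rather than the paper's MATLAB implementation; the T1 time series is a made-up illustrative example, not data from the study):

```python
import numpy as np

# Longitudinal relaxivity of Gd-DTPA in vivo (l/s/mmol), as given in the text
r1 = 4.24

def gd_concentration(T1, T10, r1=r1):
    """C_t(t) = (1/r1) * (1/T1(t) - 1/T10)."""
    T1 = np.asarray(T1, dtype=float)
    return (1.0 / r1) * (1.0 / T1 - 1.0 / T10)

# Hypothetical T1 time series (seconds): baseline before contrast arrival,
# then progressively shortened T1 as the contrast agent accumulates
T1_series = np.array([1.0, 1.0, 0.8, 0.6, 0.5])
T10 = 1.0  # baseline T1, here taken from the pre-contrast scans
conc = gd_concentration(T1_series, T10)
```

The concentration is zero at baseline and grows as T1 shortens, which matches the qualitative behavior of a contrast uptake curve.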
Figure 1 shows the empirical model of the data for two patients, C_tis1(t), C_tis2(t), and C_p(t), as functions of time t, using a kernel distribution, which finds an empirical density function of the sample data (see the "KernelDistribution" objects and "ksdensity" in MATLAB). In statistics, kernel density estimation is a non-parametric way to estimate the probability density function of a random variable; it uses a smoothing function that determines the shape of the curve used to generate the pdf, and a bandwidth value that controls the smoothness of the resulting density curve [88]. This is the primary model of the data C_p(t), turning the inverse problem into a forward one. It is not clear which family of distributions the sample data belongs to; this is the main issue we try to solve here by the maximum entropy method.
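The kernel density step can be sketched in Python, with scipy's `gaussian_kde` playing the role of MATLAB's `ksdensity`; the sample below is synthetic, not the study data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Stand-in for a measured concentration sample (synthetic, gamma-shaped)
sample = rng.gamma(shape=3.0, scale=0.6, size=500)

# Gaussian kernel density estimate; the bandwidth rule controls smoothness,
# analogous to the bandwidth argument of ksdensity
kde = gaussian_kde(sample, bw_method="scott")
grid = np.linspace(0.0, sample.max(), 400)
density = kde(grid)

# Sanity check: the estimated density should integrate to roughly 1
area = float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(grid)))
```

The resulting `density` curve is the empirical model of the data in the sense of Figure 1: a smooth, distribution-free estimate that later serves as the reference g(C_p) against which the maximum entropy fit is compared.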

2. Theory and Methods

In this study, we estimated different probability density functions for the AIF using our proposed algorithm. An accurately estimated model of the AIF is the cornerstone of the present work, because its correct determination is very important for the correct estimation of the kinetic parameters. In the previous study [82], we estimated gamma and exponential distributions using the maximum entropy method and Newton's approach. Both models were acceptable approximations of the AIF in comparison with models from the literature.
Here, the modified algorithm examines different moment constraints in the MEM to build the probability model that best fits the data. In addition, besides TLBO, various parameter estimation methods provide estimated parameters that help to find a more appropriate AIF. The kinetic parameters are then estimated in the next step via MAP.

2.1. Kinetic Model

The kinetic process in the tissue can be modeled using an ordinary differential equation, which can be solved analytically, resulting in a nonlinear model for the contrast agent concentration [61,62,63,64,65,66,67,68,69]. In this study, we adopted the commonly used pharmacokinetic model [89], which assumes that the CA resides in and exchanges between two compartments in the tissue: the vascular space and the EES.
Considering the kinetic properties of the contrast agent (CA) in the tissue of interest (C_tis) using DCE-MRI, we apply the following differential equation:
$$ \frac{dC_{tis}(t)}{dt} = K_1\, C_p(t) - K_2\, C_{tis}(t), \tag{1} $$
in which C_p(t) is the CA concentration in the vascular blood pool, that is, the arterial input function (AIF). K_1 and K_2 are the rate constants of the CA exchange between the extravascular-extracellular space (EES) and plasma. Subject to C_p(0) = 0, Equation (1) can be solved with the following result:
$$ C_{tis}(t) = K_1 \int_0^t C_p(u)\, e^{-K_2 (t-u)}\, du. \tag{2} $$
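The convolution in Equation (2) can be evaluated numerically. The sketch below (Python, with illustrative parameter values that are not from the paper) uses the trapezoidal rule and checks the result against the closed form for a constant AIF, for which C_tis(t) = (K1/K2)(1 − e^{−K2 t}):

```python
import numpy as np

def tofts_conv(t, Cp, K1, K2):
    """Numerically evaluate C_tis(t) = K1 * int_0^t Cp(u) exp(-K2 (t-u)) du
    with the trapezoidal rule at every time point."""
    C = np.zeros_like(t)
    for i, ti in enumerate(t):
        u = t[: i + 1]
        integrand = Cp[: i + 1] * np.exp(-K2 * (ti - u))
        C[i] = K1 * np.trapz(integrand, u)
    return C

t = np.linspace(0.0, 5.0, 501)
Cp = np.ones_like(t)          # constant AIF, chosen so the answer is known
K1, K2 = 0.6, 1.2             # hypothetical rate constants
C_num = tofts_conv(t, Cp, K1, K2)
C_exact = K1 / K2 * (1.0 - np.exp(-K2 * t))
max_err = float(np.max(np.abs(C_num - C_exact)))
```

With 501 time points, the trapezoidal evaluation agrees with the analytic solution to well below the noise level of any realistic measurement.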
This equation has been used to analyze MR data in a number of studies [81]. Murase [90] proposed a different way to solve Equation (1) using discretization:
$$ C_{tis}(t) = K_1 \int_0^t C_p(u)\, du - K_2 \int_0^t C_{tis}(u)\, du. \tag{3} $$
This can be written in matrix form as follows:
$$ \mathbf{C} = \mathbf{A} \times \mathbf{K}, \tag{4} $$
where the matrix $\mathbf{A} \in \mathbb{R}^{n \times 2}$ consists of the rows $A(I)$, $I = 1, 2, \ldots, n$:
$$ A(I) = \left( \int_0^{t_I} C_p(u)\, du,\; -\int_0^{t_I} C_{tis}(u)\, du \right), \tag{5} $$
$$ \mathbf{K} = \begin{pmatrix} K_1 \\ K_2 \end{pmatrix}, \tag{6} $$
and
$$ \mathbf{C} = \begin{pmatrix} C_{tis}(t_1) \\ C_{tis}(t_2) \\ \vdots \\ C_{tis}(t_n) \end{pmatrix}. \tag{7} $$
From a mathematical point of view, when C_tis(t_i) and C_p(t_i) are measured, it is possible to use the conventional linear least-squares (LLSQ) method to determine K, using the trapezoidal rule for the elements of A. Unfortunately, this method yields only approximate values for the kinetic parameters. A great number of image processing problems can be posed as inverse problems. Here, we consider the linear systems of equations obtained after discretization of the integral equations that arise in various tomographic image restoration and reconstruction problems. Therefore, we write Equation (2) as follows:
$$ y_{tis}(t_i) = A(i)\, \mathbf{K} + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2), \tag{8} $$
where y_tis(t_i) is the observed tissue concentration at time t_i and ε_i is the measurement uncertainty (noise), which is assumed to be additive, centered, white, Gaussian, and independent of K [91].
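The plain LLSQ route described above (which the text notes is only approximate, and which the paper replaces with a Bayesian estimator) can be sketched in Python on synthetic data with hypothetical rate constants:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting at 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

# Synthetic ground truth (illustrative values, not from the paper)
t = np.linspace(0.0, 5.0, 601)
Cp = t ** 2 * np.exp(-2.0 * t)              # smooth stand-in AIF
K1_true, K2_true = 0.8, 1.5
Ctis = K1_true * np.array(
    [np.trapz(Cp[: i + 1] * np.exp(-K2_true * (t[i] - t[: i + 1])), t[: i + 1])
     for i in range(len(t))])

# Murase linearization: C = A K with columns (int Cp, -int Ctis)
A = np.column_stack([cumtrapz(Cp, t), -cumtrapz(Ctis, t)])
K_hat, *_ = np.linalg.lstsq(A, Ctis, rcond=None)
```

On noise-free data the least-squares solve recovers K1 and K2 up to discretization error; with measurement noise, the text's point applies and the estimates degrade, motivating the Bayesian treatment that follows.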
The estimation procedure of the analysis therefore includes Bayesian methods, whose advantage is overcoming the integration process involved in estimating the model parameters. Bayesian methods can thus provide exact estimates of the model parameters rather than approximations. In Bayesian statistics, parameters are viewed as random variables. Each parameter in a Bayesian model has a distribution attached to it that expresses the uncertainty about its true value. This distribution is named the prior distribution. Priors represent the prior knowledge about the parameter of interest, which is often obtained from historical data (data-based priors) [92,93].

2.2. Maximum a Posterior Approach

Image reconstruction belongs to the class of ill-posed inverse problems of mathematical physics [94]. In 1967, the physicist V. Turchin suggested using the Bayesian method of maximum a posteriori (MAP) estimation for solving inverse ill-posed problems with stochastic data, naming this approach 'statistical regularization' [95]. Bayesian maximum a posteriori (MAP) approaches can be used to solve ill-posed problems as they arise in image reconstruction [96,97]. The MAP solution obviously depends on the prior models. The main challenge of the Bayesian method is how to determine the a priori probability distribution of the studied image and specify its parameters using its data. Here, we assume a form of a priori information named the entropy-based prior, which relies on the principle of entropy. Such approaches have been successfully used in the fields of plasma tomography, X-ray, radio, and gamma astronomy [93,96,97,98]. To estimate the kinetic parameters, we consider the general form of Equation (8) as proposed by [92,93]: estimating the positive vector x (the pixel intensities in an object) from a vector of measurements y (e.g., a degraded image or the projections of an object) and a linear transformation A which relates both vectors by
$$ \mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{b}, \tag{9} $$
where b is uncorrelated, zero-mean Gaussian noise. We consider only approximate information about the noise variance σ² and general information about the object.
To estimate the unknown vector x, we use a Bayesian approach. Given the probability density functions (pdfs) f(x), f(y|x), and f(y), we obtain the pdf of the conditional distribution of x given y using the Bayesian formula [99]:
$$ f(x \mid y) = \frac{f(y \mid x)\, f(x)}{f(y)}. \tag{10} $$
The MAP estimator x̂ maximizes the posterior pdf f(x|y) obtained by the Bayes formula. In Equation (10), f(y) is independent of x, f(y|x) relates to the noise probability distribution, and f(x) is a prior distribution on x.
If we are not able to determine f(x) and f(y|x) directly, we can apply the maximum entropy method. For the MEM, knowledge of some constraints on f(x) can be used. Among all probability distributions satisfying these constraints, we select the one with maximum entropy [92,93], see Section 2.3. To determine f(y|x), the noise pdf, we have
$$ f(y \mid x) \propto \exp[-T(x)], \qquad T(x) = [y - Ax]^t [y - Ax] / \sigma^2. $$
A possible way to select the prior distribution f(x) is to apply the MEM, where the general model takes the form of the exponential family. The advantage of applying the MEM to find a prior is that this method is the most objective and maximally noncommittal [99].

2.3. Maximum Entropy Method

The maximum entropy principle allows one to determine the least biased probability distribution function when the available information is limited to some macroscopic constraints [100]. The MEM characterizes the randomness of the primary data by the concept of information entropy, the mathematical expectation of the uncertainty associated with an outcome in terms of its occurrence probability. The most likely probability distribution is then the one that maximizes the information entropy. Maximizing Shannon's entropy is the basic idea of the MEM [4]:
$$ h(X) = -\int f(x) \log f(x)\, dx, \tag{11} $$
subject to known constraints, typically moment constraints
$$ E(\phi_k(x)) = \int \phi_k(x)\, f(x)\, dx = \mu_k. \tag{12} $$
Here, φ_0(x) = 1, and φ_k(x), k = 0, …, N, are N + 1 known functions. These could be, for example, x^n, log(x), x log(x), or trigonometric or geometric functions. The idea is to assign the appropriate known functions relating the ME distribution to the exponential family via the mentioned constraints [2,3,101]. Using the method of Lagrange multipliers, where the objective function is Shannon's entropy, Equation (11), the Lagrangian J(f) is as follows:
$$ J(f) = -\int f(x) \log f(x)\, dx + \lambda_0 \int f(x)\, dx + \sum_{k=1}^{N} \lambda_k \int f(x)\, \phi_k(x)\, dx. \tag{13} $$
To obtain f(x), we differentiate J with respect to f(x):
$$ \frac{\delta J(f)}{\delta f(x)} = -\log f(x) - 1 + \lambda_0 + \sum_{k=1}^{N} \lambda_k \phi_k(x) = 0. \tag{14} $$
Adopting Taylor's theorem (based on a Taylor series approximation), the required expected values μ_1, …, μ_m can be obtained numerically from the data set [101]. Applying an optimization method with Shannon's entropy as the objective function and setting Equation (14) equal to zero, the general form of the ME distribution is [4]:
$$ f(x) = e^{-\sum_{k=0}^{N} \lambda_k \phi_k(x)}, \quad x \in S, \tag{15} $$
where the λ_k are selected such that f(x) in Equation (15) satisfies the known constraints in Equation (12). The parameters λ = [λ_0, …, λ_N] are determined to fit an appropriate class of ME distributions. To determine the N + 1 unknown parameters, the following set of N + 1 nonlinear equations is solved (1 ≤ k ≤ m):
$$ G_k(\lambda) = \int \phi_k(x)\, e^{-\sum_{j=0}^{N} \lambda_j \phi_j(x)}\, dx = \mu_k. \tag{16} $$
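As a small illustration of solving the system (16), consider a single mean constraint E[x] = μ on [0, ∞); the maximum entropy solution is then the exponential density, so a solver should return λ_1 = 1/μ. This is a hedged Python sketch: the paper's implementation is in MATLAB, and the TLBO step is replaced here by a generic root finder:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

mu = 2.0  # prescribed first moment E[x] (illustrative value)

def moment_residuals(lams):
    """Residuals of G_0 (normalization) and G_1 (first moment) for
    f(x) = exp(-lam0 - lam1*x) on [0, inf)."""
    lam0, lam1 = lams
    if lam1 <= 0:  # density not normalizable: push the solver back
        return [1e6, 1e6]
    f = lambda x: np.exp(-lam0 - lam1 * x)
    norm, _ = quad(f, 0.0, np.inf)
    mom1, _ = quad(lambda x: x * f(x), 0.0, np.inf)
    return [norm - 1.0, mom1 - mu]

lam0, lam1 = fsolve(moment_residuals, x0=[0.5, 0.5])
# Known closed form for this constraint set: lam1 = 1/mu, lam0 = log(mu)
```

With more constraints (the log and cubic moments used later in the paper), the same residual structure applies; only the list of φ_k grows.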

2.4. Teaching-Learning Based Optimization

To solve Equation (13), we use a population-based algorithm named teaching-learning based optimization (TLBO), which resolves the problem of random starting points. Instead, it uses the mean over all candidate parameter estimates in order to fit a better model to the data. The idea of this algorithm is to mimic the relationship between a teacher and the learners in a classroom [83]. Each learner has the same probability of obtaining information from the others. TLBO finds the best solutions in two steps, named the teacher phase and the learner phase. The most important features of this algorithm are its easy implementation and rapid convergence [102].
The population is considered a group of students, and their subjects are the design variables of the problem. In TLBO, the different courses offered to the students correspond to the different variables, and their scores correspond to the objective function. It is very important for the teacher to share knowledge with the students to improve the class's level of knowledge, enabling the students to achieve acceptable scores based on their talents. The teacher is assumed to be the most knowledgeable person in the classroom and distributes her/his knowledge to the students. Therefore, the teacher is the best solution (the best individual) in the population. A student's level of knowledge depends strongly on the quality of the teacher's teaching and on the quality of the other students in the class. After the creation of the initial population and the calculation of the objective value for each individual, the two main learning steps are implemented as follows:

2.4.1. Teacher Phase

In this step, the teacher tries to improve the mean score of the students relative to her/his own level. A random procedure produces a new solution to replace the old one:
$$ X_{new,D} = X_{old,D} + r\,\left( X_{teacher,D} - T_F\, M_D \right), \tag{17} $$
where D is the number of courses; X_{old,D} (a 1 × D vector) is the old solution, when there is no interaction between the students to improve their knowledge, containing the results of each specific course; r is a random number in [0, 1]; X_{teacher,D} is the best solution of the whole population; T_F is a teaching factor that randomly takes the value 1 or 2 with equal probability; and M_D is a 1 × D vector containing the mean classroom results for each specific course. The new solution X_{new,D} is accepted if it is better than the old one [102].

2.4.2. Learner Phase

To improve the knowledge of each student through random cooperation with other students, Equation (18) is applied to all of them, so that a student can acquire new information when another learner has more knowledge than her/him:
$$ X_{new,i} = X_{old,i} + r_i\,\left( X_j - X_k \right), \tag{18} $$
where i = 1, 2, … indexes the solutions; X_{old,i} is the solution when there is no cooperation with other students; r_i is a random number in the range [0, 1]; and X_j and X_k are two randomly chosen learners with j ≠ k, where X_j has a better objective value than X_k. The solution X_{new,i} is accepted if it is better than the old solution X_{old,i}.
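The two phases above can be condensed into a minimal sketch (Python; this minimizes a generic sphere function rather than the paper's λ-estimation objective, and the population size and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def tlbo(objective, bounds, pop_size=20, iters=100):
    """Minimal TLBO sketch (teacher phase + learner phase), minimizing."""
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop_size, dim))
    F = np.array([objective(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher, Eq. (17)
        teacher = X[F.argmin()]
        mean = X.mean(axis=0)
        for i in range(pop_size):
            Tf = rng.integers(1, 3)  # teaching factor, 1 or 2
            new = X[i] + rng.random(dim) * (teacher - Tf * mean)
            fn = objective(new)
            if fn < F[i]:            # greedy acceptance
                X[i], F[i] = new, fn
        # Learner phase: pairwise interaction, Eq. (18)
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            sign = 1.0 if F[i] < F[j] else -1.0
            new = X[i] + sign * rng.random(dim) * (X[i] - X[j])
            fn = objective(new)
            if fn < F[i]:
                X[i], F[i] = new, fn
    return X[F.argmin()], float(F.min())

sphere = lambda x: float(np.sum(x ** 2))
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
best_x, best_f = tlbo(sphere, (lo, hi))
```

Note that the method needs no derivative information and no user-chosen starting point beyond the random population, which is exactly the property exploited in the λ-estimation step.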

2.5. Implementation

We have implemented the code for the proposed method in MATLAB. These are the main steps of the algorithm:
(1)
Determining the φ_k(x) and their numerical expectations from the dataset via Taylor's theorem [4],
(2)
Using TLBO (or an alternative optimization method, see below) to determine the unknown function with Shannon's entropy as the target function; the general form is given in Equation (15) (f_{C_tis}(t) and f_{C_p}(t)),
(3)
Applying the proposed method to find the λ_k for which f(x) (Equation (15)) matches the constraints in Equation (12) (f̂_{C_tis}(t) and f̂_{C_p}(t)),
(4)
Estimating the kinetic parameters K by replacing f̂_{C_tis}(t) and f̂_{C_p}(t) in Equation (8) and solving via MAP,
(5)
Using the Kullback–Leibler divergence D_KL(f||g) to check the accuracy of the estimated AIF f̂_{C_p}(t) against the empirical distribution g(C_p) of the dataset:
$$ D_{KL}(\hat{f} \,\|\, g) = \int_S \hat{f}_{C_p}(t)\, \log \frac{\hat{f}_{C_p}(t)}{g(C_p)}\, dt. \tag{19} $$
(6)
Computing validation measures from the predicted values x̂_1, …, x̂_m and the observed values x_1, …, x_m:
$$ RMSE = \left[ \frac{1}{m} \sum_{i=1}^{m} (x_i - \hat{x}_i)^2 \right]^{1/2}, \tag{20} $$
$$ \chi^2 = \frac{\sum_{i=1}^{m} (x_i - \hat{x}_i)^2}{m - n}, \tag{21} $$
$$ R^2 = 1 - \frac{\sum_{i=1}^{m} (x_i - \hat{x}_i)^2}{\sum_{i=1}^{m} (x_i - \bar{x})^2}. \tag{22} $$
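The validation measures in the last step can be sketched as follows (Python; the toy numbers are purely illustrative):

```python
import numpy as np

def validation_metrics(x, x_hat, n_params):
    """RMSE, chi-square goodness of fit, and coefficient of determination,
    following the formulas above (n_params plays the role of n)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    m = len(x)
    resid = x - x_hat
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    chi2 = float(np.sum(resid ** 2) / (m - n_params))
    r2 = float(1.0 - np.sum(resid ** 2) / np.sum((x - x.mean()) ** 2))
    return rmse, chi2, r2

# Toy observed vs. predicted values
x = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = np.array([1.1, 1.9, 3.2, 3.8])
rmse, chi2, r2 = validation_metrics(x, x_hat, n_params=2)
```

Smaller RMSE and χ² and an R² close to 1 indicate a better fit, which is how the tables later in the paper are read.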

3. Alternative Parameter Estimation

3.1. Weibull Distribution

The Weibull distribution is widely used in reliability and life data analysis due to its versatility. It has also been established as a close approximation to the probability laws of various natural phenomena. The pdf of the Weibull distribution has two parameters:
$$ f(x) = \frac{k}{c^k}\, x^{k-1}\, e^{-(x/c)^k}, \tag{23} $$
where k is the shape and c is the scale parameter [103,104].
Using the Weibull model, we can apply several different approaches to determine the parameters k and c:

3.2. Methods of Moments

The method of moments (MM) uses the first and second moments to estimate the shape and scale parameters. The estimators rely on the sample mean and variance, together with the gamma function evaluated at (1 + 1/k) [105]. The sample mean and standard deviation are
$$ \bar{x} = c\, \Gamma(1 + 1/k), \qquad \sigma = c \left( \Gamma(1 + 2/k) - \Gamma^2(1 + 1/k) \right)^{1/2}, \tag{24} $$
where $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\, dt$ is the gamma function.
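The moment inversion can be sketched as follows (Python; the round trip uses hypothetical parameters k = 3 and c = 1.85, echoing the magnitudes reported later in the paper, and a generic root finder for the coefficient of variation):

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

def weibull_moments(k, c):
    """Mean and standard deviation of a Weibull(k, c) distribution."""
    mean = c * gamma(1.0 + 1.0 / k)
    sd = c * np.sqrt(gamma(1.0 + 2.0 / k) - gamma(1.0 + 1.0 / k) ** 2)
    return mean, sd

def weibull_mom(xbar, s):
    """Method of moments: invert the coefficient of variation for k,
    then solve for c from the mean."""
    cv = s / xbar
    g = lambda k: (np.sqrt(gamma(1 + 2 / k) - gamma(1 + 1 / k) ** 2)
                   / gamma(1 + 1 / k) - cv)
    k = brentq(g, 0.1, 50.0)        # cv is monotone in k on this bracket
    c = xbar / gamma(1.0 + 1.0 / k)
    return k, c

# Round-trip check with known parameters
mean, sd = weibull_moments(3.0, 1.85)
k_hat, c_hat = weibull_mom(mean, sd)
```

Because σ/x̄ depends on k alone, the shape can be solved first and the scale then follows in closed form, which is what makes the MM estimator cheap.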

3.3. Empirical Measurement Method

The empirical measurement method is a special case of MM [105,106,107]:
$$ k = \left( \frac{\sigma}{\bar{x}} \right)^{-1.086}, \qquad c = \frac{\bar{x}}{\Gamma(1 + 1/k)}, \tag{25} $$
where σ is the sample standard deviation.

3.4. Maximum Likelihood Method

For the maximum likelihood estimator (MLE), an iterative algorithm can be used. With n the number of non-zero data points, the shape and scale parameters (k, c) are obtained by iteratively solving
$$ k = \left[ \frac{\sum_{i=1}^{n} x_i^k \ln(x_i)}{\sum_{i=1}^{n} x_i^k} - \frac{\sum_{i=1}^{n} \ln(x_i)}{n} \right]^{-1}, \tag{26} $$
and
$$ c = \left( \frac{1}{n} \sum_{i=1}^{n} x_i^k \right)^{1/k}. \tag{27} $$
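The iteration can be sketched as follows (Python; the sample is synthetic, drawn by inverse-CDF sampling from a hypothetical Weibull(k = 3, c = 1.85), and a damped update is added for numerical stability, a detail the text does not specify):

```python
import numpy as np

def weibull_mle(x, iters=500, tol=1e-10):
    """Fixed-point iteration for the Weibull MLE shape k, then the
    closed-form scale c."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k_new = 1.0 / (np.sum(xk * logx) / np.sum(xk) - np.mean(logx))
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = 0.5 * (k + k_new)  # damped update
    c = np.mean(x ** k) ** (1.0 / k)
    return float(k), float(c)

rng = np.random.default_rng(2)
# Synthetic Weibull(k=3, c=1.85) sample via inverse-CDF sampling
u = rng.random(20000)
sample = 1.85 * (-np.log(1.0 - u)) ** (1.0 / 3.0)
k_hat, c_hat = weibull_mle(sample)
```

The shape equation has no closed form, so the fixed point is iterated until convergence; the scale then follows directly from the second equation.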

3.5. Modified Maximum Likelihood Method

When the data are available in the form of a frequency distribution, we can apply the modified maximum likelihood method (MMLE):
$$ k = \left[ \frac{\sum_{i=1}^{n} x_i^k \ln(x_i)\, P(x_i)}{\sum_{i=1}^{n} x_i^k\, P(x_i)} - \frac{\sum_{i=1}^{n} \ln(x_i)\, P(x_i)}{P(x \geq 0)} \right]^{-1}, \qquad c = \left( \frac{1}{P(x \geq 0)} \sum_{i=1}^{n} x_i^k\, P(x_i) \right)^{1/k}, \tag{28} $$
where P(x_i) represents the frequency of the data point x_i, n is the number of non-zero data points, and P(x ≥ 0) is the probability of the random variable equaling or exceeding zero. In Equation (28), k is resolved iteratively, and c can then be solved explicitly [108,109].

3.6. Non-Linear Least Squares Method

For the non-linear least squares method (NLSM), the observations are sorted in ascending order and coupled to the failure probabilities given by the estimators. The Gauss–Newton algorithm is used to obtain the best-fitting curve of the Weibull model [110].
Based on individual results from the maximum likelihood, modified maximum likelihood, and least-squares regression methods, Seguro and Lambert [111] concluded that maximum likelihood and modified maximum likelihood provide more reasonable and accurate parameter estimates for the Weibull distribution than least-squares regression. This was later confirmed by Cook [112].

4. Example of Application

In this section, we show the application of the modified maximum entropy method (MMEM). Due to length limitations, this paper provides only a brief description of the formulas and fitted curves.
An additional challenge is the fact that there are usually infinitely many statistical models consistent with a given set of global properties measured from data. Therefore, one needs an additional criterion to decide which model to use. The benefit of the maximum entropy method is that it finds the simplest model with the lowest bias, which maximizes the entropy. For the same dataset, there could be many complicated models describing the data, but finding the best-fitting model with a few constraints, a few computation steps, and, finally, a known family of distributions is the advantage of this method. To this end, the maximum entropy method searches, examines, and applies different moment constraints (see Section 2.3) [3] and adopts the minimum number of them needed to form an appropriate probability density model for the sample data. There can be a large difference between the maximum entropy models estimated with two and with three constraints, but with four or five constraints, the estimated models no longer differ much. In other words, there is no guarantee of estimating a better maximum entropy model by adding more constraints.
After examining several known functions [3] and their expectations (constraints), which are computed numerically from the sample data based on Taylor's theorem, the estimated probability density function of the data fits the Weibull distribution well (Figure 2). For the data C_p(t),
$$ \int_t f_{C_p}(t)\, dt = 1, \qquad \int_t \log(C_p)\, f_{C_p}(t)\, dt = 0.4465, \qquad \int_t C_p^3\, f_{C_p}(t)\, dt = 1.0930, $$
and, using the general form of the maximum entropy distribution, Equation (15), f̂_{C_p}(t) can be written as follows:
$$ f_{C_p}(t) = e^{-\lambda_0 - \lambda_1 \log(C_p(t)) - \lambda_2 C_p^3(t)}, $$
where the final ME multipliers λ and the Weibull parameters are estimated as follows:
$$ \hat{f}_{C_p}(t) = \exp\left( 0.7466 - 1.4944\, \log(C_p(t)) - 0.1128\, C_p^3(t) \right) + 0.5, $$
and, based on the ME form in Equation (23),
f(x) = e^{\log(k/c^k) + (k-1)\log(x) - (x/c)^k},
in which
\lambda_0 = -\log(k/c^k), \qquad \lambda_1 = -(k-1), \qquad \lambda_2 = c^{-k}.
Then, according to Equations (27) and (28), the Weibull parameters are c = 1.8498 and k = 3, for which the mean absolute error, the D_KL divergence, and the entropy are 0.0470, 0.0438, and 0.2026, respectively; see Figure 2.
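Assuming the sign convention f(x) = exp(−λ₀ − λ₁ log x − λ₂ xᵏ), the correspondence between the ME multipliers and the Weibull parameters can be verified numerically. The sketch below uses the estimated c = 1.8498 and k = 3 from the text and checks that the ME exponential form reproduces the Weibull density exactly.

```python
import numpy as np
from scipy.stats import weibull_min

# Hedged sketch: the ME exponential-family form with constraint
# functions {log x, x^k} is exactly a Weibull density, assuming the
# sign convention f(x) = exp(-l0 - l1*log(x) - l2*x^k).
c, k = 1.8498, 3.0          # Weibull parameters estimated in the text

lam0 = -np.log(k / c**k)    # lambda_0 = -log(k / c^k)
lam1 = -(k - 1.0)           # lambda_1 = -(k - 1)
lam2 = c**(-k)              # lambda_2 = c^(-k)

x = np.linspace(0.1, 4.0, 200)
f_me = np.exp(-lam0 - lam1 * np.log(x) - lam2 * x**k)
f_wbl = weibull_min.pdf(x, k, scale=c)   # same curve in Weibull form
```

Expanding the exponent gives (k/cᵏ)·xᵏ⁻¹·exp(−(x/c)ᵏ), which is the two-parameter Weibull pdf with shape k and scale c.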
Table 1 lists the mean absolute error (MAE), the Kullback–Leibler distance D_KL, and the entropy of the different AIF models obtained via the MMEM described in Section 2.3, as well as of the empirical one. In each case, we applied these evaluation measures to check the validity of the estimated model. The high entropy value likewise shows the superiority of the Weibull probability density function in fitting the data. The MMEM cannot optimize over different values of k and c by itself (owing to the uniqueness of the maximum entropy distribution [4]), but it can be applied on a grid over k. Using the MAE, D_KL, and entropy, we then obtain the optimal models with the optimal values of their parameters. Additionally, Figure 3 depicts the fits of the different CDFs.
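The three criteria in Table 1 can be sketched as grid approximations; `compare_densities` below is an illustrative implementation, and the two Weibull densities are placeholders rather than the study's data.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import trapezoid

# Hedged sketch of the three comparison criteria: mean absolute error,
# Kullback-Leibler divergence D_KL(emp || fit), and differential
# entropy of the fitted model, all as grid approximations.
def compare_densities(x, p_emp, p_fit, eps=1e-12):
    mae = np.mean(np.abs(p_emp - p_fit))
    d_kl = trapezoid(p_emp * np.log((p_emp + eps) / (p_fit + eps)), x)
    entropy = -trapezoid(p_fit * np.log(p_fit + eps), x)
    return mae, d_kl, entropy

x = np.linspace(0.05, 5.0, 1000)
p1 = weibull_min.pdf(x, 3.0, scale=1.85)   # candidate model
p2 = weibull_min.pdf(x, 2.5, scale=1.85)   # alternative model
mae, d_kl, h = compare_densities(x, p1, p2)
```

The small `eps` guards the logarithm where a density is numerically zero; for identical densities the MAE and D_KL are both zero, as expected.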
Table 2 lists the Weibull parameters estimated via the different estimation methods mentioned in Section 3. All the estimated models are presented in Figure 4. Table 3 reports the evaluation measures for all the methods mentioned in Section 3, in order to investigate how the proposed method performs. Among all these models, the MMEM fits the data best.
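For reference, two of the classical estimation routes compared in Section 3, maximum likelihood and the method of moments, can be sketched as follows; the synthetic sample (true c = 1.85, k = 3) stands in for the AIF data.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma
from scipy.optimize import brentq

# Hedged sketch of two classical Weibull estimators on a synthetic sample.
rng = np.random.default_rng(0)
sample = weibull_min.rvs(3.0, scale=1.85, size=2000, random_state=rng)

# Maximum likelihood (location fixed at zero)
k_ml, _, c_ml = weibull_min.fit(sample, floc=0)

# Method of moments: match the sample mean and variance
m, v = sample.mean(), sample.var()
def cv_gap(k):
    # squared coefficient of variation of Weibull(k) minus the sample's
    g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
    return (g2 - g1**2) / g1**2 - v / m**2
k_mom = brentq(cv_gap, 0.1, 50.0)          # solve for the shape k
c_mom = m / gamma(1 + 1 / k_mom)           # scale from the mean
```

The moment-matching step works because the squared coefficient of variation of a Weibull depends on the shape k alone, so k can be bracketed and solved for first, with the scale c recovered from the mean.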
Based on the results in Table 1, Table 2 and Table 3, the MMEM achieves a much better fit to the data. Table 2 shows the parameter estimates of the Weibull distribution obtained via the different methods in comparison with those of the MMEM, indicating in which case the estimated model fits the data well. In Table 3, we examined the results via the root mean square error (RMSE), the goodness of fit (χ²), the coefficient of determination (R²), and the adjusted coefficient of determination, which again highlight the MMEM. The proposed MMEM yields the estimate with the lowest absolute error and D_KL divergence and the highest entropy.
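The Table 3 criteria can be computed as in the following sketch; the reduced-χ² form and the choice p = 2 free parameters (the two-parameter Weibull) are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of the goodness-of-fit criteria for fitted values
# y_hat against observations y, with p free parameters.
def fit_criteria(y, y_hat, p=2):
    n = len(y)
    resid = y - y_hat
    rmse = np.sqrt(np.mean(resid**2))
    chi2 = np.sum(resid**2) / (n - p)          # reduced chi-square form
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return rmse, chi2, r2, r2_adj
```

The adjusted R² penalizes the fit for the number of parameters used, which makes the comparison across models with different complexity fairer.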

5. Evaluation

To evaluate the proposed method (MMEM) further, we considered data from 12 additional patients. MMEM and MAP were used to estimate the AIF and the kinetic parameters, and the results were compared accordingly (Figure 5 and Figure 6). The difference between the empirical AIF and the AIF estimated via the MMEM is clear.
In the first two minutes, the AIF is typically estimated higher than the assumed AIF, whereas after about two and a half minutes the difference is negligible. However, correct estimation of the AIF at the onset is the most important factor for correct estimation of the kinetic parameters. The K–L divergence ranges from 0.001 to 0.0637 across all patients.
Figure 7 depicts the k_1 values estimated using MMEM/MAP and assumed AIF/ML & MEM/MAP for all 12 patients. For the MMEM, the k values (Table 4) are more similar across patients than for the MEM/MAP, which makes biological sense. Moreover, the AIF is estimated from the data, which makes the estimation of k more realistic than estimation using an assumed AIF.

6. Discussion and Conclusions

The main purpose of this study is to connect an important problem in statistics, determining the probability density function of a random variable from observations, to an important problem in image processing, determining the AIF in situations where insufficient information about the AIF is available, and thereby to estimate the kinetic parameters accurately. In recent years, various parametric and non-parametric methods have been introduced for estimating the probability density function of a random variable from observations, but very little work has been reported on optimization-based methods.
Therefore, we have introduced a new algorithm, named MMEM, which combines the MEM with TLBO. The maximum entropy method (MEM) is one of the major and most powerful methods for estimating and determining a probability density with high accuracy and efficiency and minimum bias. The core idea of this approach is to determine statistical models that agree with the data. In other words, the MEM provides a way to find the least biased model consistent with the data, i.e., the one that is maximally noncommittal with regard to missing information [1,2,3,4].
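A minimal sketch of the TLBO component (teacher phase plus learner phase, following Rao et al. [83]) is given below; the population size, iteration count, and test objective are illustrative, not the settings used in the study.

```python
import numpy as np

# Hedged sketch of teaching-learning-based optimization (TLBO),
# the optimizer combined with the MEM in the proposed MMEM.
def tlbo(f, bounds, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))   # class of learners
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution
        teacher = X[F.argmin()]
        Tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
        Xnew = X + rng.random(X.shape) * (teacher - Tf * X.mean(axis=0))
        Xnew = np.clip(Xnew, lo, hi)
        Fnew = np.array([f(x) for x in Xnew])
        better = Fnew < F                          # greedy selection
        X[better], F[better] = Xnew[better], Fnew[better]
        # Learner phase: learn pairwise from a random classmate
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fi = f(xi)
            if fi < F[i]:
                X[i], F[i] = xi, fi
    return X[F.argmin()], F.min()

best_x, best_f = tlbo(lambda x: np.sum((x - 1.0)**2), [(-5, 5), (-5, 5)])
```

The final line minimizes a simple quadratic as a smoke test; in the MMEM, the objective would instead measure the constraint mismatch of the candidate maximum entropy density.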
A number of calculations and comparisons have been conducted for the estimation of the AIF and the kinetic parameters (Figure 5, Figure 6 and Figure 7 and Table 1, Table 2, Table 3 and Table 4). The results reveal the characteristics of the empirical PDF of the AIF and show that the modified maximum entropy approach performs adequately in fitting an appropriate model, as seen by comparing the Weibull distributions to the AIF with regard to accuracy and applicability.
The aim of this work is not primarily to make decisions on the AIF or C_p itself; what matters is how accurately the kinetic parameters are determined. These parameters are significant and of clinical relevance, and clinicians can interpret them easily. Using the MMEM/MAP guarantees minimal bias in the estimation of the AIF and the kinetic parameters.
Since the AIF plays an important role in the analysis of DCE-MRI, a literature AIF is the standard technique in cases where the AIF cannot be determined from the image. The method proposed in this study offers an alternative way to assess the input function from the existing data. We have shown that the proposed method allows a good fit to the data and a good estimation of the kinetic parameters.
To evaluate this method further, we propose to examine it in terms of energy efficiency and system complexity.

Author Contributions

Supervision, V.J.S.; Writing—review & editing, Z.A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Data from this study were acquired in accordance with the recommendations given by Leach et al. (Br. J. Cancer 2005, 92, 1599–1610). Informed consent was obtained from all patients.

Data Availability Statement

Patient data are not made available.

Acknowledgments

The authors thank the Institute of Statistics, Ludwig-Maximilians-Universität (LMU) Munich, for the financial support of this publication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630. [Google Scholar] [CrossRef]
  2. Pougaza, D.B.; Djafari, A.M. Maximum Entropy Copulas. AIP Conf. Proc. 2011, 1305, 2069–2072. [Google Scholar]
  3. Ebrahimi, N.; Soofi, E.S.; Soyer, R. Multivariate maximum entropy identification, transformation, and dependence. J. Multivar. Anal. 2008, 99, 1217–1231. [Google Scholar] [CrossRef] [Green Version]
  4. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  5. Cofré, R.; Herzog, R.; Corcoran, D.; Rosas, F.E. A comparison of the maximum entropy principle across biological spatial scales. Entropy 2019, 21, 1009. [Google Scholar] [CrossRef] [Green Version]
  6. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  7. Ozer, H.G. Residue Associations in Protein Family Alignments. Ph.D. Thesis, The Ohio State University, Columbus, OH, USA, 2008. [Google Scholar]
  8. Seno, F.; Trovato, A.; Banavar, J.R.; Maritan, A. Maximum entropy approach for deducing amino acid interactions in proteins. Phys. Rev. Lett. 2008, 100, 078102. [Google Scholar] [CrossRef] [Green Version]
  9. Weigt, M.; White, R.A.; Szurmant, H.; Hoch, J.A.; Hwa, T. Identification of direct residue contacts in protein–protein interaction by message passing. Proc. Natl. Acad. Sci. USA 2009, 106, 67–72. [Google Scholar] [CrossRef] [Green Version]
  10. Pitera, J.W.; Chodera, J.D. On the use of experimental observations to bias simulated ensembles. J. Chem. Theory Comput. 2012, 8, 3445–3451. [Google Scholar] [CrossRef] [PubMed]
  11. Hopf, T.A.; Colwell, L.J.; Sheridan, R.; Rost, B.; Sander, C.; Marks, D.S. Three-dimensional structures of membrane proteins from genomic sequencing. Cell 2012, 149, 1607–1621. [Google Scholar] [CrossRef] [Green Version]
  12. Roux, B.; Weare, J. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method. J. Chem. Phys. 2013, 138, 02B616. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Cavalli, A.; Camilloni, C.; Vendruscolo, M. Molecular dynamics simulations with replica-averaged structural restraints generate structural ensembles according to the maximum entropy principle. J. Chem. Phys. 2013, 138, 03B603. [Google Scholar] [CrossRef]
  14. Jennings, R.C.; Belgio, E.; Zucchelli, G. Does maximal entropy production play a role in the evolution of biological complexity? A biological point of view. Rendiconti Lincei Scienze Fisiche e Naturali 2020, 31, 259–268. [Google Scholar] [CrossRef]
  15. Ekeberg, M.; Lövkvist, C.; Lan, Y.; Weigt, M.; Aurell, E. Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models. Phys. Rev. E 2013, 87, 012707. [Google Scholar] [CrossRef] [Green Version]
  16. Boomsma, W.; Ferkinghoff-Borg, J.; Lindorff-Larsen, K. Combining experiments and simulations using the maximum entropy principle. PLoS Comput. Biol. 2014, 10, e1003406. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Zhang, B.; Wolynes, P.G. Topology, structures, and energy landscapes of human chromosomes. Proc. Natl. Acad. Sci. USA 2015, 112, 6062–6067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Cesari, A.; Reißer, S.; Bussi, G. Using the maximum entropy principle to combine simulations and solution experiments. Computation 2018, 6, 15. [Google Scholar] [CrossRef] [Green Version]
  19. Farré, P.; Emberly, E. A maximum-entropy model for predicting chromatin contacts. PLoS Comput. Biol. 2018, 14, e1005956. [Google Scholar] [CrossRef] [Green Version]
  20. D’haeseleer, P.; Liang, S.; Somogyi, R. Genetic network inference: From co-expression clustering to reverse engineering. Bioinformatics 2000, 16, 707–726. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Lezon, T.R.; Banavar, J.R.; Cieplak, M.; Maritan, A.; Fedoroff, N.V. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns. Proc. Natl. Acad. Sci. USA 2006, 103, 19033–19038. [Google Scholar] [CrossRef] [Green Version]
  22. Dhadialla, P.S.; Ohiorhenuan, I.E.; Cohen, A.; Strickland, S. Maximum-entropy network analysis reveals a role for tumor necrosis factor in peripheral nerve development and function. Proc. Natl. Acad. Sci. USA 2009, 106, 12494–12499. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Remacle, F.; Kravchenko-Balasha, N.; Levitzki, A.; Levine, R.D. Information-theoretic analysis of phenotype changes in early stages of carcinogenesis. Proc. Natl. Acad. Sci. USA 2010, 107, 10324–10329. [Google Scholar] [CrossRef] [Green Version]
  24. Sanguinetti, G.; Huynh-Thu, V.A. Gene regulatory network inference: An introductory survey. In Gene Regulatory Networks; Springer: New York, NY, USA, 2019; pp. 1–23. [Google Scholar]
  25. Locasale, J.W.; Wolf-Yadlin, A. Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data. PLoS ONE 2009, 4, e6522. [Google Scholar] [CrossRef]
  26. Graeber, T.; Heath, J.; Skaggs, B.; Phelps, M.; Remacle, F.; Levine, R.D. Maximal entropy inference of oncogenicity from phosphorylation signaling. Proc. Natl. Acad. Sci. USA 2010, 107, 6112–6117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Sharan, R.; Karp, R.M. Reconstructing Boolean models of signaling. J. Comput. Biol. 2013, 20, 249–257. [Google Scholar] [CrossRef]
  28. Schneidman, E.; Berry, M.J.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef] [Green Version]
  29. Shlens, J.; Field, G.D.; Gauthier, J.L.; Grivich, M.I.; Petrusca, D.; Sher, A.; Litke, A.M.; Chichilnisky, E. The structure of multi-neuron firing patterns in primate retina. J. Neurosci. 2006, 26, 8254–8266. [Google Scholar] [CrossRef] [Green Version]
  30. Quadeer, A.A.; McKay, M.R.; Barton, J.P.; Louie, R.H. MPF–BML: A standalone GUI-based package for maximum entropy model inference. Bioinformatics 2020, 36, 2278–2279. [Google Scholar] [CrossRef]
  31. Tang, A.; Jackson, D.; Hobbs, J.; Chen, W.; Smith, J.L.; Patel, H.; Prieto, A.; Petrusca, D.; Grivich, M.I.; Sher, A.; et al. A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci. 2008, 28, 505–518. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Cocco, S.; Leibler, S.; Monasson, R. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc. Natl. Acad. Sci. USA 2009, 106, 14058–14062. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Roudi, Y.; Nirenberg, S.; Latham, P.E. Pairwise maximum entropy models for studying large biological systems: When they can work and when they cannot. PLoS Comput. Biol. 2009, 5, e1000380. [Google Scholar] [CrossRef]
  34. Tkačik, G.; Prentice, J.S.; Balasubramanian, V.; Schneidman, E. Optimal population coding by noisy spiking neurons. Proc. Natl. Acad. Sci. USA 2010, 107, 14419–14424. [Google Scholar] [CrossRef] [Green Version]
  35. Ohiorhenuan, I.E.; Mechler, F.; Purpura, K.P.; Schmid, A.M.; Hu, Q.; Victor, J.D. Sparse coding and high-order correlations in fine-scale cortical networks. Nature 2010, 466, 617–621. [Google Scholar] [CrossRef]
  36. Yeh, F.C.; Tang, A.; Hobbs, J.P.; Hottowy, P.; Dabrowski, W.; Sher, A.; Litke, A.; Beggs, J.M. Maximum entropy approaches to living neural networks. Entropy 2010, 12, 89–106. [Google Scholar] [CrossRef]
  37. Granot-Atedgi, E.; Tkačik, G.; Segev, R.; Schneidman, E. Stimulus-dependent maximum entropy models of neural population codes. PLoS Comput. Biol. 2013, 9, e1002922. [Google Scholar] [CrossRef] [Green Version]
  38. Tkačik, G.; Marre, O.; Mora, T.; Amodei, D.; Berry II, M.J.; Bialek, W. The simplest maximum entropy model for collective behavior in a neural network. J. Stat. Mech. Theory Exp. 2013, 2013, P03011. [Google Scholar] [CrossRef]
  39. Ferrari, U.; Obuchi, T.; Mora, T. Random versus maximum entropy models of neural population activity. Phys. Rev. E 2017, 95, 042321. [Google Scholar] [CrossRef] [Green Version]
  40. Rostami, V.; Mana, P.P.; Grün, S.; Helias, M. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models. PLoS Comput. Biol. 2017, 13, e1005762. [Google Scholar] [CrossRef] [Green Version]
  41. Nghiem, T.A.; Teleńczuk, B.; Marre, O.; Destexhe, A.; Ferrari, U. Maximum entropy models reveal the correlation structure in cortical neural activity during wakefulness and sleep. bioRxiv 2018, 243857. [Google Scholar] [CrossRef] [Green Version]
  42. Yeo, G.; Burge, C.B. Maximum entropy modeling of short sequence motifs with applications to RNA splicing signals. J. Comput. Biol. 2004, 11, 377–394. [Google Scholar] [CrossRef]
  43. Mora, T.; Walczak, A.M.; Bialek, W.; Callan, C.G. Maximum entropy models for antibody diversity. Proc. Natl. Acad. Sci. USA 2010, 107, 5405–5410. [Google Scholar] [CrossRef] [Green Version]
  44. Santolini, M.; Mora, T.; Hakim, V. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites. PLoS ONE 2014, 9, e99015. [Google Scholar] [CrossRef] [Green Version]
  45. Fariselli, P.; Taccioli, C.; Pagani, L.; Maritan, A. DNA sequence symmetries from randomness: The origin of the Chargaff’s second parity rule. Brief. Bioinform. 2020, 22, 2172–2181. [Google Scholar] [CrossRef]
  46. Fernandez-de Cossio-Diaz, J.; Mulet, R. Maximum entropy and population heterogeneity in continuous cell cultures. PLoS Comput. Biol. 2019, 15, e1006823. [Google Scholar] [CrossRef] [Green Version]
  47. Jackson, A.; Constable, C.; Gillet, N. Maximum entropy regularization of the geomagnetic core field inverse problem. Geophys. J. Int. 2007, 171, 995–1004. [Google Scholar] [CrossRef] [Green Version]
  48. De Martino, A.; De Martino, D. An introduction to the maximum entropy approach and its application to inference problems in biology. Heliyon 2018, 4, e00596. [Google Scholar] [CrossRef] [Green Version]
  49. Khalifa, F.; Soliman, A.; El-Baz, A.; Abou El-Ghar, M.; El-Diasty, T.; Gimel’farb, G.; Ouseph, R.; Dwyer, A.C. Models and methods for analyzing DCE-MRI: A review. Med. Phys. 2014, 41, 124301. [Google Scholar] [CrossRef]
  50. Fennessy, F.M.; McKay, R.R.; Beard, C.J.; Taplin, M.E.; Tempany, C.M. Dynamic contrast-enhanced magnetic resonance imaging in prostate cancer clinical trials: Potential roles and possible pitfalls. Transl. Oncol. 2014, 7, 120–129. [Google Scholar] [CrossRef] [Green Version]
  51. Huang, W.; Li, X.; Chen, Y.; Li, X.; Chang, M.C.; Oborski, M.J.; Malyarenko, D.I.; Muzi, M.; Jajamovich, G.H.; Fedorov, A.; et al. Variations of dynamic contrast-enhanced magnetic resonance imaging in evaluation of breast cancer therapy response: A multicenter data analysis challenge. Transl. Oncol. 2014, 7, 153. [Google Scholar] [CrossRef] [Green Version]
  52. Sobhani, F.; Xu, C.; Murano, E.; Pan, L.; Rastegar, N.; Kamel, I.R. Hypo-vascular liver metastases treated with transarterial chemoembolization: Assessment of early response by volumetric contrast-enhanced and diffusion-weighted magnetic resonance imaging. Transl. Oncol. 2016, 9, 287–294. [Google Scholar] [CrossRef] [Green Version]
  53. Usuda, K.; Iwai, S.; Funasaki, A.; Sekimura, A.; Motono, N.; Matoba, M.; Doai, M.; Yamada, S.; Ueda, Y.; Uramoto, H. Diffusion-weighted magnetic resonance imaging is useful for the response evaluation of chemotherapy and/or radiotherapy to recurrent lesions of lung cancer. Transl. Oncol. 2019, 12, 699–704. [Google Scholar] [CrossRef]
  54. Stoyanova, R.; Huang, K.; Sandler, K.; Cho, H.; Carlin, S.; Zanzonico, P.B.; Koutcher, J.A.; Ackerstaff, E. Mapping tumor hypoxia in vivo using pattern recognition of dynamic contrast-enhanced MRI data. Transl. Oncol. 2012, 5, 437. [Google Scholar] [CrossRef]
  55. Schmid, V.J.; Whitcher, B.; Padhani, A.R.; Taylor, N.J.; Yang, G.Z. Bayesian methods for pharmacokinetic models in dynamic contrast-enhanced magnetic resonance imaging. IEEE Trans. Med. Imaging 2006, 25, 1627–1636. [Google Scholar] [CrossRef]
  56. Tofts, P.S.; Brix, G.; Buckley, D.L.; Evelhoch, J.L.; Henderson, E.; Knopp, M.V.; Larsson, H.B.; Lee, T.Y.; Mayr, N.A.; Parker, G.J.; et al. Estimating kinetic parameters from dynamic contrast-enhanced T1-weighted MRI of a diffusable tracer: Standardized quantities and symbols. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 1999, 10, 223–232. [Google Scholar] [CrossRef]
  57. Shao, J.; Zhang, Z.; Liu, H.; Song, Y.; Yan, Z.; Wang, X.; Hou, Z. DCE-MRI pharmacokinetic parameter maps for cervical carcinoma prediction. Comput. Biol. Med. 2020, 118, 103634. [Google Scholar] [CrossRef]
  58. Lingala, S.G.; Guo, Y.; Bliesener, Y.; Zhu, Y.; Lebel, R.M.; Law, M.; Nayak, K.S. Tracer kinetic models as temporal constraints during brain tumor DCE-MRI reconstruction. Med. Phys. 2020, 47, 37–51. [Google Scholar] [CrossRef] [Green Version]
  59. Zou, J.; Balter, J.M.; Cao, Y. Estimation of pharmacokinetic parameters from DCE-MRI by extracting long and short time-dependent features using an LSTM network. Med. Phys. 2020, 47, 3447–3457. [Google Scholar] [CrossRef]
  60. Dikaios, N. Stochastic Gradient Langevin dynamics for joint parameterization of tracer kinetic models, input functions, and T1 relaxation-times from undersampled k-space DCE-MRI. Med. Image Anal. 2020, 62, 101690. [Google Scholar] [CrossRef]
  61. Tofts, P.S.; Kermode, A.G. Measurement of the blood-brain barrier permeability and leakage space using dynamic MR imaging. 1. Fundamental concepts. Magn. Reson. Med. 1991, 17, 357–367. [Google Scholar] [CrossRef]
  62. Larsson, H.B.W.; Tofts, P.S. Measurement of blood-brain barrier permeability using dynamic Gd-DTPA scanning—A comparison of methods. Magn. Reson. Med. 1992, 24, 174–176. [Google Scholar] [CrossRef]
  63. Brix, G.; Kiessling, F.; Lucht, R.; Darai, S.; Wasser, K.; Delorme, S.; Griebe, J. Microcirculation and microvasculature in breast tumors: Pharmacokinetic analysis of dynamic MR image series. Magn. Reson. Med. 2004, 52, 420–429. [Google Scholar] [CrossRef]
  64. Berg, B.; Stucht, D.; Janiga, G.; Beuing, O.; Speck, O.; Thovenin, D. Cerebral Blood Flow in a Healthy Circle of Willis and Two Intracranial Aneurysms: Computational Fluid Dynamics Versus Four-Dimensional Phase-Contrast Magnetic Resonance Imaging. ASME J. Biomech. Eng. 2014, 15, 041003. [Google Scholar] [CrossRef]
  65. Orton, M.R.; Collins, D.J.; Walker-Samuel, S.; d’Arcy, J.A.; Hawkes, D.J.; Atkinson, D.; Leach, M.O. Bayesian estimation of pharmacokinetic parameters for DCE-MRI with a robust treatment of enhancement onset time. Phys. Med. Biol. 2007, 52, 2393–2408. [Google Scholar] [CrossRef]
  66. Dikaios, N.; Arridge, S.; Hamy, V.; Punwani, S.; Atkinson, D. Direct parametric reconstruction from undersampled (k, t)-space data in dynamic contrast enhanced MRI. Med. Image Anal. 2014, 18, 989–1001. [Google Scholar] [CrossRef]
  67. Bender, R.; Heinemann, L. Fitting nonlinear regression models with correlated errors to individual pharmacodynamic data using SAS software. J. Pharmacokinet. Biopharm. 1995, 23, 87–100. [Google Scholar] [CrossRef]
  68. Cheng, H.L.M. T1 measurement of flowing blood and arterial input function determination for quantitative 3D T1-weighted DCE-MRI. J. Magn. Reson. Imaging JMRI 2007, 25, 1073–1078. [Google Scholar] [CrossRef]
  69. Gauthier, M. Impact of the arterial input function on microvascularization parameter measurements using dynamic contrast-enhanced ultrasonography. World J. Radiol. 2012, 4, 291. [Google Scholar] [CrossRef]
  70. Cheng, H.L.M. Investigation and optimization of parameter accuracy in dynamic contrast-enhanced MRI. J. Magn. Reson. Imaging 2008, 28, 736–743. [Google Scholar] [CrossRef]
  71. Lavini, C. Simulating the effect of input errors on the accuracy of Tofts’ pharmacokinetic model parameters. Magn. Reson. Imaging 2015, 33, 222–235. [Google Scholar] [CrossRef]
  72. Peled, S.; Vangel, M.; Kikinis, R.; Tempany, C.M.; Fennessy, F.M.; Fedorov, A. Selection of fitting model and arterial input function for repeatability in dynamic contrast-enhanced prostate MRI. Acad. Radiol. 2019, 26, e241–e251. [Google Scholar] [CrossRef]
  73. Huang, W.; Chen, Y.; Fedorov, A.; Li, X.; Jajamovich, G.H.; Malyarenko, D.I.; Aryal, M.P.; LaViolette, P.S.; Oborski, M.J.; O’Sullivan, F.; et al. The impact of arterial input function determination variations on prostate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic modeling: A multicenter data analysis challenge. Tomography 2016, 2, 56–66. [Google Scholar] [CrossRef]
  74. Huang, W.; Chen, Y.; Fedorov, A.; Li, X.; Jajamovich, G.H.; Malyarenko, D.I.; Aryal, M.P.; LaViolette, P.S.; Oborski, M.J.; O’Sullivan, F.; et al. The impact of arterial input function determination variations on prostate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic modeling: A multicenter data analysis challenge, part II. Tomography 2019, 5, 99–109. [Google Scholar] [CrossRef]
  75. Keil, V.C.; Mädler, B.; Gieseke, J.; Fimmers, R.; Hattingen, E.; Schild, H.H.; Hadizadeh, D.R. Effects of arterial input function selection on kinetic parameters in brain dynamic contrast-enhanced MRI. Magn. Reson. Imaging 2017, 40, 83–90. [Google Scholar] [CrossRef]
  76. Parker, G.J.; Roberts, C.; Macdonald, A.; Buonaccorsi, G.A.; Cheung, S.; Buckley, D.L.; Jackson, A.; Watson, Y.; Davies, K.; Jayson, G.C. Experimentally-derived functional form for a population-averaged high-temporal-resolution arterial input function for dynamic contrast-enhanced MRI. Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med. 2006, 56, 993–1000. [Google Scholar] [CrossRef]
  77. Rata, M.; Collins, D.J.; Darcy, J.; Messiou, C.; Tunariu, N.; Desouza, N.; Young, H.; Leach, M.O.; Orton, M.R. Assessment of repeatability and treatment response in early phase clinical trials using DCE-MRI: Comparison of parametric analysis using MR-and CT-derived arterial input functions. Eur. Radiol. 2016, 26, 1991–1998. [Google Scholar] [CrossRef] [Green Version]
  78. Rijpkema, M.; Kaanders, J.H.; Joosten, F.B.; van der Kogel, A.J.; Heerschap, A. Method for quantitative mapping of dynamic MRI contrast agent uptake in human tumors. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2001, 14, 457–463. [Google Scholar] [CrossRef] [Green Version]
  79. Ashton, E.; Raunig, D.; Ng, C.; Kelcz, F.; McShane, T.; Evelhoch, J. Scan-rescan variability in perfusion assessment of tumors in MRI using both model and data-derived arterial input functions. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2008, 28, 791–796. [Google Scholar] [CrossRef]
  80. Weinmann, H.J.; Laniado, M.; Mützel, W. Pharmokinetics of Gd-DTPA/Dimeglumine after intravenous injection into healthy volunteers. Physiol. Chem. Phys. Med. NMR 1984, 16, 167–172. [Google Scholar]
  81. Fritz-Hansen, T.; Rostrup, E.; Larsson, H.B.W.; Søndergaard, L.; Ring, P.; Henriksen, O. Measurement of the Arterial Concentration of Gd-DTPA Using MRI: A step toward Quantitative Perfusion Imaging. Magn. Reson. Med. 1996, 36, 225–231. [Google Scholar] [CrossRef]
  82. Farsani, Z.A.; Schmid, V.J. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging. Methods Inf. Med. 2017, 56, 461–468. [Google Scholar] [CrossRef]
  83. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  84. Zou, F.; Chen, D.; Xu, Q. A survey of teaching–learning-based optimization. Neurocomputing 2019, 335, 366–383. [Google Scholar] [CrossRef]
  85. Parker, G.J.; Suckling, J.; Tanner, S.F.; Padhani, A.R.; Revell, P.B.; Husband, J.E.; Leach, M.O. Probing tumor microvascularity by measurement, analysis and display of contrast agent uptake kinetics. J. Magn. Reson. Imaging 1997, 7, 564–574. [Google Scholar] [CrossRef]
  86. d’Arcy, J.A.; Collins, D.J.; Padhani, A.R.; Walker-Samuel, S.; Suckling, J.; Leach, M.O. Magnetic resonance imaging workbench: Analysis and visualization of dynamic contrast-enhanced MR imaging data. Radiographics 2006, 26, 621–632. [Google Scholar] [CrossRef] [Green Version]
  87. Buckley, D.; Parker, G. Measuring Contrast Agent Concentration in T1-Weighted Dynamic Contrast-Enhanced MRI. In Dynamic Contrast-Enhanced Magntic Resoncance Imaging in Oncology; Jackson, A., Parker, G.J.M., Buckley, D.L., Eds.; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2005; Chapter 5; pp. 69–80. [Google Scholar]
  88. Parzen, E. On estimation of a probability density function and mode. Ann. Math. Stat. 1962, 33, 1065–1076. [Google Scholar] [CrossRef]
  89. Choyke, P.; Dwyer, A.; Knopp, M. Functional tumor imaging withdynamic contrast-enhanced magnetic resonance imaging. Magn. Reson. Med. 2003, 17, 509–520. [Google Scholar]
  90. Murase, K. Efficient method for calculating kinetic parameters using T1-weighted dynamic contrast-enhanced magnetic resonance imaging. Magn. Reson. Med. 2004, 51, 858–862. [Google Scholar] [CrossRef] [PubMed]
  91. Mohammad-Djafari, A. Bayesian Image Processing. In Proceedings of the Fifth International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences (MCO 2004), Metz, France, 1–3 July 2004. [Google Scholar]
  92. Mohammad-Djafari, A.; Demoment, G. Estimating priors in maximum entropy image processing. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Albuquerque, NM, USA, 3–6 April 1990; pp. 2069–2072. [Google Scholar]
  93. Mohammad-Djafari, A. A full Bayesian approach for inverse problems. In Maximum Entropy and Bayesian Methods; Springer: Berlin/Heidelberg, Germany, 1996; pp. 135–144. [Google Scholar]
  94. Hadamard, J. Le Probleme de Cauchy et les Équations aux Dérivées Partielles Linéaires Hyperboliques; Paris Russian Translation; 1932; Volume 220. [Google Scholar]
  95. Turchin, V.F. Solution of the Fredholm equation of the first kind in a statistical ensemble of smooth functions. USSR Comput. Math. Math. Phys. 1967, 7, 79–96. [Google Scholar] [CrossRef]
  96. Denisova, N. Bayesian maximum-a posteriori approach with global and local regularization to image reconstruction problem in medical emission tomography. Entropy 2019, 21, 1108. [Google Scholar] [CrossRef] [Green Version]
  97. Sparavigna, A.C. Entropy in image analysis. Entropy 2019, 21, 502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  98. Skilling, J. The axioms of maximum entropy. In Maximum-Entropy and Bayesian Methods in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 1988; pp. 173–187. [Google Scholar]
  99. Elfving, T. An Algorithm for Maximum Entropy Image Reconstruction form Noisy Data. Mathl. Comput. Model. 1989, 12, 729–745. [Google Scholar] [CrossRef]
  100. Akpinar, S.; Akpinar, E.K. Wind energy analysis based on maximum entropy principle (MEP)-type distribution function. Energy Convers. Manag. 2007, 48, 1140–1149. [Google Scholar] [CrossRef]
  101. Casella, G.; Berger, R. Statistical Inference 2; Duxbury: Belmont, CA, USA, 2002. [Google Scholar]
  102. García, J.A.M.; Mena, A.J.G. Optimal distributed generation location and size using a modified teaching-learning based optimization algorithm. Int. J. Electr. Power Energy Syst. 2013, 50, 65–75. [Google Scholar] [CrossRef]
  103. Bain, L.J.; Antle, C.E. Estimation of parameters in the Weibull distribution. Technometrics 1967, 9, 621–627. [Google Scholar] [CrossRef]
  104. Stevens, M.; Smulders, P. The estimation of the parameters of the Weibull wind speed distribution for wind energy utilization purposes. Wind Eng. 1979, 3, 132–145. [Google Scholar]
  105. Justus, C.; Hargraves, W.; Mikhail, A.; Graber, D. Methods for estimating wind speed frequency distributions. J. Appl. Meteorol. 1978, 17, 350–353. [Google Scholar] [CrossRef]
  106. Morgan, E.C.; Lackner, M.; Vogel, R.M.; Baise, L.G. Probability distributions for offshore wind speeds. Energy Convers. Manag. 2011, 52, 15–26. [Google Scholar] [CrossRef]
  107. Akdağ, S.A.; Dinler, A. A new method to estimate Weibull parameters for wind energy applications. Energy Convers. Manag. 2009, 50, 1761–1766. [Google Scholar] [CrossRef]
  108. Werapun, W.; Tirawanichakul, Y.; Waewsak, J. Comparative study of five methods to estimate Weibull parameters for wind speed on Phangan Island, Thailand. Energy Procedia 2015, 79, 976–981. [Google Scholar] [CrossRef] [Green Version]
  109. Zhang, H.; Yu, Y.J.; Liu, Z.Y. Study on the Maximum Entropy Principle applied to the annual wind speed probability distribution: A case study for observations of intertidal zone anemometer towers of Rudong in East China Sea. Appl. Energy 2014, 114, 931–938. [Google Scholar] [CrossRef]
  110. Li, T.; Griffiths, W.; Chen, J. Weibull modulus estimated by the non-linear least squares method: A solution to deviation occurring in traditional Weibull estimation. Metall. Mater. Trans. A 2017, 48, 5516–5528. [Google Scholar] [CrossRef] [Green Version]
  111. Seguro, J.; Lambert, T. Modern estimation of the parameters of the Weibull wind speed distribution for wind energy analysis. J. Wind. Eng. Ind. Aerodyn. 2000, 85, 75–84. [Google Scholar] [CrossRef]
  112. Cook, N.J. “Discussion on modern estimation of the parameters of the Weibull wind speed distribution for wind speed energy analysis” by J.V. Seguro, T.W. Lambert. J. Wind. Eng. Ind. Aerodyn. 2001, 89, 867–869. [Google Scholar] [CrossRef]
Figure 1. Empirical PDF of the contrast agent concentration in plasma (C_p(t)), i.e., the empirical AIF, and in tissue (C_tis(t)) for two patients.
Figure 2. Maximum entropy probability density function of the AIF (f̂_Cp(t)) and the empirical AIF.
Figure 3. Maximum entropy distribution of the AIF, the empirical CDF of the data, and the gamma and exponential CDFs.
Figure 4. Modified maximum entropy PDF of the AIF, the empirical AIF, and the maximum entropy PDFs combined with parameter-estimation methods.
Figure 5. Empirical density model of the contrast agent concentration in plasma (C_p(t)), as the AIF, and in tissue (C_tis(t)) for 12 patients.
Figure 6. AIF estimated via the MMEM for 12 patients, together with the empirical AIF.
Figure 7. k_1 estimated using MMEM/MAP and the assumed-AIF/ML and MEM/MAP approaches for all 12 patients.
Table 1. Comparison of the estimated maximum entropy AIFs and the empirical AIF.
Estimated Distribution   MAE      D_KL     Entropy
Gamma                    0.0775   0.0285   0.0303
Exponential              0.0375   0.0363   0.0872
Weibull (k = 3)          0.0470   0.0438   0.2026
Weibull (k = 2.6)        0.0403   0.0389   0.1755
Weibull (k = 2)          0.0471   0.0342   0.1471
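The comparison criteria in Table 1 can be computed from any empirical and fitted PDF evaluated on a common time grid. The following is a minimal sketch, assuming MAE is the mean absolute difference between the two discretized densities, D_KL is the Kullback–Leibler divergence of the empirical PDF from the fitted one, and the Entropy column is the absolute difference of the two differential entropies; the paper's exact normalisation may differ.

```python
import math

def pdf_metrics(p, q, dx):
    """Compare an empirical PDF p with a fitted PDF q on a grid of spacing dx.

    Returns (MAE, D_KL(p||q), |H(p) - H(q)|). Definitions are assumed,
    not taken verbatim from the paper.
    """
    mae = sum(abs(pi - qi) for pi, qi in zip(p, q)) / len(p)
    # KL divergence, skipping cells where either density vanishes
    dkl = sum(pi * math.log(pi / qi) * dx
              for pi, qi in zip(p, q) if pi > 0 and qi > 0)
    # Discretized differential entropy
    def entropy(f):
        return -sum(fi * math.log(fi) * dx for fi in f if fi > 0)
    return mae, dkl, abs(entropy(p) - entropy(q))
```

With identical densities all three criteria are zero, which matches the intuition that smaller values in Table 1 indicate a better fit.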
Table 2. Weibull parameters estimated via different parameter-estimation methods.
Methods   K        C
EM        1.6469   0.7787
MOM       1.9125   0.7850
MLE       1.8005   0.7890
MMLE      2.0201   0.7758
NLSM      2.7767   0.7518
MMEM      2.6      1.7380
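As one illustration of the estimators compared in Table 2, the method of moments (MOM) fits the Weibull shape K and scale C by matching the sample mean and standard deviation; the coefficient of variation depends on K alone, so K can be found by a one-dimensional root search. This is a sketch of the generic MOM estimator, not the paper's exact numerical scheme.

```python
import math
import statistics

def weibull_mom(data):
    """Method-of-moments Weibull fit, returning (shape K, scale C).

    Uses mean = C*Gamma(1 + 1/K) and the fact that the coefficient of
    variation is a decreasing function of K, solved here by bisection.
    """
    m = statistics.mean(data)
    cv = statistics.stdev(data) / m

    def cv_of(k):
        g1 = math.gamma(1 + 1 / k)
        g2 = math.gamma(1 + 2 / k)
        return math.sqrt(g2 - g1 ** 2) / g1

    lo, hi = 0.1, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if cv_of(mid) > cv:   # CV too large -> shape must increase
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    c = m / math.gamma(1 + 1 / k)
    return k, c
```

Applied to a sample drawn from a known Weibull distribution, the estimator recovers the shape and scale to within sampling error.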
Table 3. Evaluation criteria comparing the empirical AIF with the maximum entropy PDFs of the AIF.
Methods   RMSE     Chi-Square      R²      Adjusted R²
EM        0.286    0.0755          0.631   0.622
MOM       0.255    0.0691          0.670   0.663
MLE       0.278    0.1191          0.570   0.580
MMLE      0.274    0.0771          0.636   0.628
NLSM      0.194    0.2854          0.535   0.525
MMEM      0.0320   7.5687 × 10⁻⁴   0.995   0.995
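The goodness-of-fit criteria in Table 3 follow from the residuals between the empirical AIF and each fitted PDF. Below is a minimal sketch using the standard textbook definitions (chi-square weighted by the fitted values, adjusted R² penalised by the number of fitted parameters); the paper's exact weighting may differ.

```python
def gof_metrics(obs, fit, n_params):
    """RMSE, chi-square, R² and adjusted R² between observed and fitted
    values. Standard definitions assumed; n_params is the number of
    fitted model parameters (2 for a Weibull fit).
    """
    n = len(obs)
    resid = [o - f for o, f in zip(obs, fit)]
    ss_res = sum(r * r for r in resid)
    rmse = (ss_res / n) ** 0.5
    chi2 = sum(r * r / f for r, f in zip(resid, fit) if f != 0)
    mean_obs = sum(obs) / n
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
    return rmse, chi2, r2, adj_r2
```

A perfect fit yields RMSE and chi-square of zero and R² of one, consistent with the near-zero RMSE and near-unit R² reported for the MMEM in Table 3.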
Table 4. Kinetic parameter estimates via MMEM/MAP for 12 patients.
Patient   1        2        3        4        5        6
k_1       0.1637   0.1016   0.7175   0.1650   0.5959   1.0477
k_2       0.0210   0.3688   0.1073   0.2079   0.1233   0.0072

Patient   7        8        9        10       11       12
k_1       0.6309   0.7980   0.1085   0.4327   0.544    1.0225
k_2       0.0701   0.3861   0.2377   0.0839   0.235    0.0271
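Given an estimated AIF, per-patient kinetic parameters of the kind reported in Table 4 enter a compartment model linking plasma and tissue concentrations. The sketch below assumes the standard two-parameter form dC_tis/dt = k_1·C_p(t) − k_2·C_tis(t) and a simple forward-Euler scheme; the paper's kinetic model and Bayesian MAP fitting are not reproduced here.

```python
def simulate_ctis(cp, k1, k2, dt):
    """Forward-simulate tissue concentration C_tis(t) from sampled AIF
    values cp, assuming dC_tis/dt = k1*Cp(t) - k2*C_tis(t) (assumed
    model form) with explicit Euler steps of size dt.
    """
    c = 0.0
    out = []
    for cp_t in cp:
        c += dt * (k1 * cp_t - k2 * c)
        out.append(c)
    return out
```

In a fitting loop, candidate (k_1, k_2) values would be scored by comparing the simulated curve against the measured C_tis(t); with k_2 = 0 the tissue curve reduces to the scaled cumulative sum of the AIF, a useful sanity check.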
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
