Article

Modeling and Analysis of Cardiac Hybrid Cellular Automata via GPU-Accelerated Monte Carlo Simulation

1
Institute of Computer Engineering, TU Wien, 1040 Vienna, Austria
2
Department of Engineering, University of Rome Campus Bio-Medico, 00128 Rome, Italy
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(2), 164; https://doi.org/10.3390/math9020164
Submission received: 30 November 2020 / Revised: 9 January 2021 / Accepted: 10 January 2021 / Published: 14 January 2021
(This article belongs to the Special Issue Mathematical Modeling in Biomechanics and Mechanobiology)

Abstract

The heart consists of a complex network of billions of cells. Under physiological conditions, cardiac cells propagate electrical signals in space, generating the heartbeat in a synchronous and coordinated manner. When such synchronization fails, life-threatening events can arise. The inherent complexity of the underlying nonlinear dynamics and the large number of biological components involved make the modeling and analysis of electrophysiological properties in cardiac tissue an open challenge. We consider here a Hybrid Cellular Automata (HCA) approach that models the cardiac cell-cell membrane resistance with a free variable. We show that this modeling approach can reproduce important and complex spatiotemporal properties, paving the way for promising future applications. We show how GPU-based technology can considerably accelerate the simulation and the analysis. Furthermore, we study cardiac behavior within a unidimensional domain under inhomogeneous resistance, and we perform a Monte Carlo analysis to evaluate our approach.

1. Introduction

Computational cardiology provides a safe, non-invasive, and ethical method to investigate the heart and its dysfunctionalities. Early mathematical models [1,2,3,4] extend the pioneering work of Hodgkin-Huxley [5] to describe and explain, with a set of ordinary differential equations (ODEs), the ionic mechanisms responsible for the initiation and the electrical propagation of action potentials traversing excitable cells such as cardiac myocytes and neurons [6,7]. These early studies enabled innovative in-silico research and clinically oriented applications [8]. For example, they allow us to understand a variety of nonlinear phenomena affecting cardiac dynamics [9,10,11,12,13] and unexpected, chaotic properties of the heart [14,15]. The mathematical complexity of such models has stimulated several works [16,17,18] trying to improve the analysis by simplifying the model representation.
Cellular Automata (CA) are a well-known class of discrete simplified dynamical systems widely used to reproduce pattern formation and growth [19,20]. One of the first approaches using CA and its theory was proposed by von Neumann et al. [21]. A cellular automaton consists of a regular grid of cells where a discrete state is associated to each cell. The state of each cell can evolve in time according to a set of rules operating over the states of the neighboring cells. This class of models provides a simple mathematical framework to reproduce several emergent behaviors that can be observed in excitable media such as the cardiac tissue [22] and the nervous system [23]. Other extensions of CA allow one to customize the model describing how each cell reacts to the external stimuli from the neighboring cells. For example, the electrophysiological process in a myocyte is driven by a set of voltage-dependent thresholds that control the opening/closing of ionic channels regulating the flux of ions across the cellular membrane. These thresholds generally identify specific phases of the myocyte’s behavior, such as the rapid depolarization, the initial repolarization, the plateau, and the late repolarization/resting phase, characterized by different dynamics.
A well-established approach to describe such processes is via Hybrid Automata (HA) [24,25,26], a modeling formalism that extends finite automata with a set of ODEs describing the continuous behavior of the action potential in each phase (a phase corresponds to a discrete state in the finite automaton). A set of guard conditions over voltage-dependent thresholds determines how each cell switches between the continuous dynamics modeling the inward/outward flux of ionic currents in each phase. Previous works [24,25,27] have shown that HA can accurately reproduce the action potential occurring in myocytes. HA can be simulated using numerical integration, but they are also amenable to formal analysis techniques such as model checking [18,28]. HA and CA can then be combined into a Hybrid Cellular Automaton (HCA) that models both the reaction of each cell to external stimuli and the diffusion of the ionic currents among neighboring cells.
In this work, we consider an HCA model where the single-cell behavior is described by a simplified yet accurate four-variable hybrid model proposed in [16], and the cells are coupled through a simple link (a resistance) to propagate signals along a unidirectional cable or a ring of cardiac cells. The considered phenomenological modeling approach stems from experimental works on one-dimensional rings of cardiac cells and whole-heart models studying the onset of alternans and irregular rhythms [29,30,31].
In addition to modeling, one can reduce simulation time and the associated computational costs using hardware accelerators such as Graphics Processing Units (GPUs) [32,33,34] and Field Programmable Gate Arrays (FPGAs) [35,36]. To simulate our cardiac cell model optimally, we compare common numerical methods in combination with different coding approaches: Matlab®Simulink, a sequential C implementation running on a CPU, and an NVIDIA®CUDA-based implementation running on a modern GPU. Such a comparison is a necessary step to evaluate the limitations and trade-offs of the different technologies, as well as the reliability of our modeling approach.
The literature is populated by a multitude of physiological and phenomenological mathematical models of cardiac cell electrophysiology [37]. These dynamical systems are usually analyzed in terms of local restitution features (e.g., changes of the action potential duration versus decreasing stimulation pacing periods), as well as bifurcation diagrams (when period-doubling occurs) and space-time visualization [38,39,40,41,42]. Like their biological counterparts, these models depend on a variety of additional parameters, e.g., temperature, membrane resistance, and nonlinearities within the tissue [11,43,44,45].
One-dimensional cables and rings of cardiac cells are standard setups in physiology studies of reentry circuits in cardiology [46,47]. Monolayers of cardiomyocytes [29], as well as multiorgan structures [30], introduced fine control of the system's dynamics, improving our understanding of complex neurocardiac diseases. Under certain pathological conditions, such as ischemia, current sinks might develop. This behavior can be simulated by changing the amount of heterogeneity within the tissue and the coupling between cells [48]. In this perspective, we conduct an extended analysis investigating emergent phenomena associated with alternans and conduction blocks by using bifurcation diagrams and by modeling the cell-cell coupling with a random variable. First, we investigate the propagation properties of our model and benchmark it with respect to conduction velocity (CV) and action potential duration (APD). Then we analyze the resiliency of the proposed HCA by decreasing the length (cell count) of the ring structure after a freely chosen number of time-steps. In doing so, we achieve shortening and spatiotemporal adaptation of the excitation wave without the need for electrical pacing, also observing the onset of electrical alternans.
The final objective of the paper is to assess the proposed HCA approach under different implementations. We do so by investigating the results of a Monte Carlo simulation analysis. Since we use a free resistance variable, we test the robustness of the approach using random values for this variable. We then perform a numerical analysis of the APD bifurcation as the number of cells decreases. This study allows us to draw conclusions on the usability, deficiencies, and possible improvements of HCA as a cardiac modeling technique.
The paper is structured as follows: we first present the considered computational models and analysis techniques in Section 2. In Section 3, we summarize the main findings of our computational experiments, and we compare the performance of different model implementations. We discuss our results in Section 4. We conclude in Section 5 by discussing limitations and perspectives of our work.

2. Computational Models and Methods

In this section, we present the computational models that we have considered in our analysis and different computational approaches to simulate them.

2.1. Unidirectional Hybrid Cellular Automata Model

The presented HCA model satisfies the properties of a Cellular Automaton by following simple principles: each node represents a cell, the smallest entity of an HCA, which can have different states. The state of a cell depends on the states of its neighbors in the previous step, leading to a unidirectional coupling between cardiac cells.
For the determination of the cell state, we are using the four-variable phenomenological Minimal Model [16], recapitulated in Equations (1) and (2), adopting the minimum set of ODEs to recover specific cardiac behaviors, e.g., action potential shape and duration, restitution properties, conduction velocity, and alternans dynamics [49,50]:
$$
\begin{aligned}
\partial_t u &= \nabla \cdot (\tilde{D}\,\nabla u) - (j_{fi} + j_{so} + j_{si}) + j_{ext}\\
\partial_t v &= (1 - H(u - \theta_v))\,(v_\infty - v)/\tau_v^- - H(u - \theta_v)\, v/\tau_v^+\\
\partial_t w &= (1 - H(u - \theta_w))\,(w_\infty - w)/\tau_w^- - H(u - \theta_w)\, w/\tau_w^+\\
\partial_t s &= \big( (1 + \tanh(k_s (u - u_s)))/2 - s \big)/\tau_s
\end{aligned}
\tag{1}
$$
$$
\begin{aligned}
j_{fi} &= -v\, H(u - \theta_v)\,(u - \theta_v)\,(u_u - u)/\tau_{fi}\\
j_{so} &= (u - u_o)\,(1 - H(u - \theta_w))/\tau_o + H(u - \theta_w)/\tau_{so}\\
j_{si} &= -H(u - \theta_w)\, w\, s/\tau_{si}
\end{aligned}
\tag{2}
$$
Equation (1) lists the reaction-diffusion dynamics of the four state variables, where u is the normalized transmembrane voltage and v, w, and s are normalized virtual gates, namely gating variables, ranging between 0 and 1. These gating variables are a phenomenological equivalent of the ion-channel kinetics within a cell and are used to compute the three currents listed in Equation (2): fast inward, j_fi, slow inward, j_si, and slow outward, j_so. Such a phenomenological description has been proven to recreate the spatiotemporal action potential features of cardiac cells and tissues. In particular, the diffusion term ∇·(D̃∇u) allows for cell-cell signal transmission in space, where D̃ represents the diffusion coefficient based on human ventricular tissue experiments. Finally, an external time-dependent stimulation current, j_ext, is added to the ionic currents to reproduce electrical pacing of the tissue, and H is the standard Heaviside function.
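To make the hybrid switching structure concrete, the single-cell kinetics of Equations (1) and (2) can be advanced with an explicit Euler step, as in the minimal Python sketch below. The constants here are illustrative placeholders only; the actual epicardial parameter set, including the voltage-dependent time constants and steady-state values, is given in Appendix A. The sketch therefore shows the structure of the threshold switching via the Heaviside function rather than a faithful reimplementation.

```python
import math

def H(x):
    """Standard Heaviside step function."""
    return 1.0 if x >= 0.0 else 0.0

# Illustrative constants only: the actual epicardial set (including the
# voltage-dependent time constants of Equation (A1)) is in Table A1.
P = dict(u_u=1.55, u_o=0.0, theta_v=0.3, theta_w=0.13,
         tau_v_minus=60.0, tau_v_plus=1.45, tau_w_minus=60.0,
         tau_w_plus=200.0, tau_fi=0.11, tau_o=6.0, tau_so=30.0,
         tau_si=1.89, tau_s=2.73, k_s=2.1, u_s=0.91,
         v_inf=1.0, w_inf=1.0)

def minimal_model_step(u, v, w, s, j_ext=0.0, dt=0.05, p=P):
    """One explicit-Euler step of the single-cell kinetics of
    Equations (1)-(2), with the diffusion term omitted."""
    # Ionic currents, Equation (2)
    j_fi = -v * H(u - p['theta_v']) * (u - p['theta_v']) * (p['u_u'] - u) / p['tau_fi']
    j_so = ((u - p['u_o']) * (1 - H(u - p['theta_w'])) / p['tau_o']
            + H(u - p['theta_w']) / p['tau_so'])
    j_si = -H(u - p['theta_w']) * w * s / p['tau_si']
    # State dynamics, Equation (1)
    du = -(j_fi + j_so + j_si) + j_ext
    dv = ((1 - H(u - p['theta_v'])) * (p['v_inf'] - v) / p['tau_v_minus']
          - H(u - p['theta_v']) * v / p['tau_v_plus'])
    dw = ((1 - H(u - p['theta_w'])) * (p['w_inf'] - w) / p['tau_w_minus']
          - H(u - p['theta_w']) * w / p['tau_w_plus'])
    ds = ((1 + math.tanh(p['k_s'] * (u - p['u_s']))) / 2 - s) / p['tau_s']
    return u + dt * du, v + dt * dv, w + dt * dw, s + dt * ds
```

Note how the guard conditions appear as Heaviside factors: below each threshold one branch of the dynamics is active, above it the other, which is exactly the discrete-state switching of the hybrid automaton.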
Model parameters, defining time constants and threshold values, refer to epicardial cell behavior (see Appendix A). More precisely, the constant values are listed in Table A1, the voltage-dependent definitions are given in Equation (A1), and the steady-state (infinity) values in Equation (A2). According to [16,51], the critical parameters ruling the system's features include, among others, the maximum AP upstroke velocity (via τ_fi, τ_v+, θ_v), the AP amplitude (via u_u, u_o), and the maximum and minimum APD (via τ_w+, τ_si, τ_so).
The local state of a cell is determined by the difference between its two neighboring cells. To control the CV and other expected physical properties of the HCA, we introduced a free variable for the cell membrane resistance. Accordingly, the computation for propagation reads:
$$
(u_{i-1} - u_i)/R_V
\tag{3}
$$
where u_i represents the voltage of the current cell i, and R_V the free resistance variable between cell i and cell i−1. Therefore, the resulting discretized reaction-diffusion equation (cable equation) for the voltage variable u becomes:
$$
\partial_t u = (u_{i-1} - u_i)/R_V - (j_{fi} + j_{so} + j_{si})
\tag{4}
$$
Equation (4), together with the local kinetics of Equation (1), is the mathematical foundation of the present approach, as the state of the HCA cells is represented by the four state variables of the Minimal Model. For state-change propagation within the HCA, i.e., the update function, we use Equation (3) with the voltage state of the predecessor at the previous time step. The activation wave in the HCA is created by setting an excited initial condition for the cell at index 0, with values u = 1, v = 1, w = 1, and s = 0. For all other cells, we instead set as initial condition the values u = 0, v = 1, w = 1, and s = 0, corresponding to the resting state. The state values used for computation are dimensionless, aimed at recreating phenomena rather than being scaled to any biophysical properties or ion concentrations. According to [16], u can be scaled to mV using the equation V_mV = 85.7u − 84.
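A minimal sketch of the resulting synchronous update over a unidirectional cable follows, together with the mV rescaling from [16]. For brevity, the full ionic currents of Equation (2) are replaced here by a placeholder linear repolarizing term; the actual model uses the kinetics of Equations (1) and (2).

```python
def hca_simulate(n_cells, n_steps, R_V=2.0, dt=0.05):
    """Synchronous HCA sweep over a unidirectional 1D cable (Eq. (4)).
    The ionic currents are reduced to a placeholder repolarizing term;
    the actual model evolves all four state variables per cell."""
    # Initial conditions: cell 0 excited, all others at rest.
    u = [1.0] + [0.0] * (n_cells - 1)
    for _ in range(n_steps):
        u_prev = u[:]  # states from the previous time step
        for i in range(n_cells):
            # Unidirectional coupling, Eq. (3): only the predecessor
            # contributes; cell 0 has no predecessor in the open cable.
            j_coupling = (u_prev[i - 1] - u_prev[i]) / R_V if i > 0 else 0.0
            j_ion = u_prev[i] / 10.0   # placeholder repolarizing current
            u[i] = u_prev[i] + dt * (j_coupling - j_ion)
    return u

def to_millivolts(u):
    """Rescale the dimensionless voltage to mV (from [16])."""
    return 85.7 * u - 84.0
```

Reading all coupling terms from the previous step's buffer (`u_prev`) is what makes the update synchronous: each cell sees its predecessor's state at time T when computing its own state at T+1, matching the propagation pattern of Figure 1.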
A highly simplified illustration of the propagation flow within the proposed HCA is shown in Figure 1. Each node represents a cell C_x at position x. In particular, the first cell is activated at time step 0 (T0), and the cell in the second position updates only at the next step (T1). This pattern holds for all the cells. Due to the difference of potential between neighboring cells, the action potential u propagates in time across all the cells. The restoration of resting (quiescent) conditions follows the cardiac action potential time constant, with the activation wave propagation vanishing. In Figure 1, the different shades of gray correspond to different activation levels; we assume that at time step 2 (T2) the impulse created at C_2 is too weak to activate the last cell, which therefore does not change color in Figure 1e.

2.2. Model Behavior Evaluation

The original Minimal Model by Bueno-Orovio et al. can simulate a wide range of cardiac behavioral patterns. Therefore, to evaluate our HCA model, we assess which patterns and characteristics we can recover. One of the most important benchmarks is the CV, which is approximately 0.5 m/s [52]. This benchmark asserts the usability of a free resistance variable in the model, which we use to control the CV. Further, we evaluate the AP propagation and its duration within the model to ensure correct physiological behavior. These benchmarks are also used to determine the optimal value of the resistance variable.
Additionally, we use highly simplified structures to evaluate the functionality and behavior of our HCA. We are using a unidirectional propagation in 1D cell structures, shown in Figure 2. The cells are a voltage source, propagating from previous to next (left to right). Between each cell is a resistor, resembling the computation of Equation (3). In Figure 2b the last cell propagates to the first cell, creating a closed ring. If a circular cell construct is stimulated at a single point, it propagates bidirectionally. Propagation waves, then, meet in the middle and extinguish each other (annihilate) [53]. If the stimulus is delivered with hyperpolarization on one site (i.e., one of the two concurring cells cannot be activated), the wave starts to propagate only in one direction, as the different state of the cells does not allow stimulation in the other. Due to connections between first and last cell (i.e., periodic boundary conditions), propagation of state-change runs in a circle. The wave is, therefore, "trapped" in our ring, showing self-sustained propagation, a prototype of cardiac reentry and arrhythmias. To simulate this one-directional wave, we designed the previously described 1D ring of cells in Figure 2b. With this unidirectional flow of waves, we are able to simulate a trapped circuit wave [54] and omit the necessity to establish a one-sided block in the opposite direction.
Figure 2a shows the cell cable, with no-flux boundary conditions, represented as a barrier at beginning and end. These two constructs will give us insights into the properties and recoverable characteristics of our model, as the ring represents the smallest closed circuit entity in cardiac arrhythmias [46]. The cable, on the other hand, gives us insight into the correct propagation properties and alternans dynamics of the presented approach.
Specifically, using 1D structures, we study CV and APD features. Further, by reducing the ring length during the simulation, we highlight the self-adapting properties of cardiac cells within a ring. To avoid unnecessary computation steps, we validate the sum of the voltage over the cable in each time step: if it falls below a value of 0.5, the simulation stops assuming that no action potential occurs. Additionally, we investigate the impact of the resistance value on our model in these scenarios. Each scenario is performed once in a homogeneous setup, with a constant resistance value for each cell. Then we introduce tissue heterogeneity using different resistance in each cell and time step. The adopted values follow the von Mises distribution, and a Monte Carlo study is performed as described below.

2.3. Computational Methods

High-performance CPUs and GPUs are the most common hardware technologies employed to simulate complex cardiac models [32]. Additionally, Matlab®Simulink provides a suitable graphical programming framework enabling fast model prototyping and revision.

2.3.1. Matlab®Simulink

Matlab®Simulink is a proprietary graphical programming environment designed to model, simulate and analyze dynamical systems. It provides a graphical block diagramming language and a set of block libraries that can be easily customized by the user. Each block represents a mathematical function and the graphical environment provides the user with the possibility to build complex models by just dragging-and-dropping blocks and connecting them, without the need to have much experience in programming. Moreover, the language enables one to derive new composite blocks out of other primitive/atomic or composite blocks.
We have implemented our HCA model using this framework (see the figures in Appendix E.1). In particular, we have implemented the Minimal Model as a single Simulink block representing a cell. We then compose several cell blocks into a cable by connecting the input/output of each cell block with its neighboring cells. In Figure A1, we show the first cell of the cable, where the constant mode in this block, as well as the switch, decides whether the first cell is connected to the last cell, forming a ring of cells. The computation of the input connection is performed within a separate Resistor block, shown in Figure A2. The external impulse (I_ex) in both blocks represents an external stimulation (Pulse Generator block), which can be activated in the main Model block, as shown in Figure A4.
We created a special block that groups a certain number of cell blocks together. Using these special blocks, we were able to create and simulate structures of up to 3000 cells. For example, in Figure A3, we show the smallest such entity, grouping five cells.

2.3.2. Central Processing Unit (CPU) and Graphics Processing Unit (GPU)

Despite Matlab®Simulink being suitable for fast model prototyping and revision, it is less efficient when it comes to handling the simulation of thousands or millions of cardiac cells. Thus, we first implemented in C-language a second version of our simulator that runs sequentially on a CPU. In this implementation, we represent the HCA model as an array of cells, avoiding the use of complex pointer-arithmetic; for example, the access to the neighboring cells and the decreasing of the cable size at runtime are performed using only integer indices instead of pointers.
We use this sequential implementation as a baseline to both check the correctness and compare the performance of a third parallel implementation that we developed to run on a GPU. GPUs are programmable chipsets with flexible throughput-oriented processing architecture and were originally designed to solve problems that require high-performance computing, such as 3D graphic renderings. Figure 3 illustrates how GPU architecture is organized around an array of streaming multiprocessors (SMs). Each SM includes several scalar processors (SPs), and each SP consists of an arithmetic logic unit that can perform a floating-point operation.
The GPU code development is generally simplified by a software layer that provides access to the instruction set and parallel computational elements of the GPU. In this work, we used the Compute Unified Device Architecture (CUDA) designed specifically for NVIDIA®GPU cards. The CUDA parallel model enables the execution of thousands of lightweight threads organized in thread blocks. Each thread executes in parallel the code of a function called kernel while using different parameters. The threads that are within the same thread block can cooperate using different mechanisms. For example, a synchronization point in the code of the kernel makes sure that all the threads must reach that point before the execution can continue. Threads in the same block can also exchange data among each other using on-chip shared memory. Threads that are located in different blocks cannot be synchronized during the kernel execution and operate independently. The CUDA environment provides the possibility to use C/C++ languages extended with built-in primitives enabling the programmer to launch thousands of threads and to specify parameters for the threads and their blocks.
The CUDA model refers to a GPU as the device and to a CPU as the host. A device has different types of memory, and their correct use can greatly impact the performance of an implementation. The global memory is a device memory that can be read/written by all the threads running on the device. The host can also access the global memory of the device using special CUDA primitives, which enables the exchange of data between the host and device memories. The global memory is generally available in large amounts (e.g., on the order of gigabytes), but accessing it is also very expensive. Within the SM, there are instead two very fast (but limited in size) types of memory: the registers and the shared memory. Registers have the largest bandwidth but a very small size and can only be accessed by a single thread. They are partitioned among all the threads of a thread block running on an SM. The number of available registers also limits the number of threads that can be executed in the same thread block. Shared memory can be accessed as fast as the registers, and it facilitates communication among the threads in the same thread block.
In our implementation, we developed two kernels: the first calculates the diffusion of the action potential among the cells using the values of the first neighbors of each cell; the second computes the numerical integration, for one time-step, of the piecewise nonlinear ordinary differential equations modeling the inward/outward current flows in each cardiac cell. The program starts with a main function running on the CPU. First, it allocates the necessary global memory space on the GPU device to store the cardiac cells, and then it initializes the simulation. This procedure is followed by a while loop terminating when the simulation is completed. At each loop iteration, the program first calls the kernel function computing the action potential diffusion across the cells and then calls the kernel updating the values of the ionic current flows and of the action potential for each cell. When a kernel is called, its code is executed in parallel on a grid with multiple blocks of threads running on the device, with one thread assigned to each cell. At the end of the execution of a kernel, the control flow returns to the CPU, and this provides an implicit point of synchronization, ensuring data consistency before the launch of the threads for the next kernel. When other analyses are necessary, intermediate results can be transferred from the GPU to the CPU after multiple loop iterations. However, this operation is computationally expensive and can slow down the computation.
The work in [32] shows that the use of shared memory can increase the efficiency of the simulation when it is employed to compute the diffusion for large 2D/3D cardiac domains. However, for the simulation of a one-dimensional cable or ring of cardiac cells, we have not observed a significant increase in efficiency by using shared memory.
It is worth noting that we use a pre-generated lookup table (LUT) for the resistance variables following the von Mises distribution; since we do not have enough memory available on the GPU, the table is kept accessible only on the CPU. Therefore, for the GPU implementation, at each step we randomly generate an array of indices to access the LUT and read the values into another array, which is then copied onto the device; the indices of this array correspond to the indices of the cells. To fully exploit the GPU, we use cuRAND, a built-in random number generation library. On the CPU, we use a similar approach to sample the resistance values, with the C library function rand(). These generators must be initialized with a different seed to produce a different sequence on each run; in both versions, we use the current system time to initialize the seed. For the LUT, we pre-generated several distributions with various κ and μ to obtain a larger sample pool.
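The LUT construction can be sketched with Python's standard library, whose `random.vonmisesvariate` provides von Mises sampling. The rejection step restricting samples to a stable range is our illustrative stand-in for the paper's fitting and normalization procedure, and the range bounds below are assumptions for the sketch.

```python
import random

def build_resistance_lut(size, mu, kappa, r_min=0.5, r_max=2.5):
    """Pre-generate a LUT of resistance values from a von Mises
    distribution, restricted by rejection sampling to the range in
    which a stable excitation wave was observed."""
    lut = []
    while len(lut) < size:
        r = random.vonmisesvariate(mu, kappa)  # concentrated around mu
        if r_min < r < r_max:
            lut.append(r)
    return lut

def sample_resistances(lut, n_cells):
    """Per time step: draw one random LUT index per cell (the GPU
    version does the analogous index generation with cuRAND and copies
    the resulting array onto the device)."""
    return [lut[random.randrange(len(lut))] for _ in range(n_cells)]
```

Seeding `random` with the system time (the default behavior of `random.seed()` with no argument) mirrors the seeding strategy described above for `rand()` and cuRAND.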
To obtain meaningful results, we introduced a virtual cell size, i.e., the length that a single cell is assumed to have. In all the following images and representations, each cell has a length of 100 μm. The virtual cell size is mostly used to compute the CV and APD. Unless stated otherwise, we measure the length of the structures in number of cells and the simulation time in time-steps per cell.

3. Results

In this section, we present the results for the different test scenarios that we have considered. All the experiments were performed on a workstation equipped with an NVIDIA®GeForce GTX TITAN X (with 12 GB GDDR5 of memory) and an Intel®CoreTM i7-5820K CPU (with 32 GB of RAM). We used Matlab®version 2019b, including the full toolbox set for Simulink. The sample size is determined by the number of cells in the HCA as well as the number of performed time-steps in the simulations without decreasing size.

3.1. Accuracy of GPU

We assume the calculations in the CPU-based implementation are correct. To determine the correctness and the truncation error, and hence the resulting loss of accuracy, we computed the mean squared error, E, comparing the voltage values obtained by simulating a cell cable with both the GPU-based ( ū ) and CPU-based ( û ) implementations:
$$
E = \frac{1}{N\,T} \sum_{s=0}^{N-1} \sum_{t=0}^{T-1} \left( \hat{u}_t^s - \bar{u}_t^s \right)^2,
\tag{5}
$$
where N refers to the number of cells and T to the simulation time in time-steps computed per cell. As Figure 4 shows, the error is negligible for a variety of resistance variables, numbers of cells, and simulation times (20,000 steps ≈ 1 s, as 1 step = 0.05 ms). In Figure 4a, the boxplot was computed for each R_V in Table A2 using the MSE of the 12 simulations with an increasing number of executed computation steps (N·T). The boxplot in Figure 4b was computed for each number of executed computations (N·T) in Table A2 using the MSE corresponding to the 6 different R_V values. It is worth noticing that the error is independent of the resistance variable and is, in general, very small.
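Equation (5) translates directly into code; a small Python sketch follows, with each voltage trace stored as an N×T nested list (one row per cell, one column per time step).

```python
def mean_squared_error(u_cpu, u_gpu):
    """Eq. (5): MSE over all cells s and time steps t between the
    CPU-based trace (u_hat) and the GPU-based trace (u_bar), each
    given as an N x T nested list of voltage values."""
    n = len(u_cpu)       # number of cells N
    t = len(u_cpu[0])    # number of time steps T
    total = sum((a - b) ** 2
                for row_a, row_b in zip(u_cpu, u_gpu)
                for a, b in zip(row_a, row_b))
    return total / (n * t)
```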

3.2. Acceleration using GPU

To measure the gain in execution time using GPU w.r.t. CPU, we have considered only the time spent for the simulation. Therefore, we did not consider the time for the analysis (i.e., the time necessary for computing the APD and CV), to store the results in the file system and for generating the plots.
Table A4 and Table A5 provide the execution times for the simulation of the decreasing ring structure using the CPU and the GPU, respectively. The number of cells (length), as well as the values of the resistance variable, are varied to increase the complexity of the simulation. Unlike simulations with a fixed number of time-steps, the decreasing ring simulation stops automatically when no AP of the trapped wave can be recovered; the execution time in this scenario therefore also depends on the cell resistance and varies considerably. In particular, both CPU and GPU simulations end earlier when no AP can be recovered, i.e., for a short ring with a high value of the free resistance.
Table A12, Table A13 and Table A14 list the execution times for the GPU-based simulation of a cable structure, while Table A15, Table A16 and Table A17 refer to the ring structure. The tables report 20,000, 30,000, and 40,000 time-steps, respectively. Again, in addition to the simulation time, we vary the value of R_V and the number of cells to increase the computational load. The same principle applies to the CPU times in Table A6, Table A7 and Table A8 for the cable structure and Table A9, Table A10 and Table A11 for the ring structure. Figure 5 provides a comparison of the mean execution times of the GPU-based vs. CPU-based implementations.
In the case of the cable simulation with 40,000 time-steps (2 s), Figure 5a, the GPU takes on average less than half of the execution time of the CPU. For the ring in Figure 5b, the average execution time of the GPU is nearly ten times lower. We also compared the execution times of the decreasing ring simulation, Figure 5c, where the GPU takes less than half of the CPU time.

3.3. Model Behavior

In order to evaluate the behavior of the proposed model, we considered two main aspects. First, we briefly describe whether we were able to achieve the CV we aimed for; second, we evaluate the HCA's self-adapting properties, i.e., the behavior observed when inhomogeneities in the cell resistance occur. Even with small discrepancies, the propagation of the signal should remain possible. The effect on CV and APD is also investigated through a Monte Carlo analysis, where we assign a random value to the free resistance variable.

3.3.1. Conduction Velocity

For Matlab®Simulink, we studied a maximum size of 3000 cells, recovering a conduction velocity of ∼ 400 μ m / ms .
As for the implementations on CPU and GPU, Figure 6 shows the impact of the introduced free resistance variable on the computed CV (due to the low error discussed before, we present CPU-based results). We considered a fixed voltage threshold of u = 0.5, and the computed wave-front velocity refers to the whole structure length. We observe that the CV depends both on the resistance value and on the overall length of the structure. For a length of ≥3000 cells (~30 cm) and a resistance variable R_V = 0.5, it was impossible to create a propagating wave (no CV). For all the other cable lengths, the CV depends on the value of R_V. In particular, a high resistance variable, representing a high diffusivity, leads to higher CV values. In view of physiological applications, the value of R_V should lie between 2.0 and 2.2, so as to obtain ∼400 μm/ms. A tabular recapitulation of the CV is provided in Appendix C.
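One way to compute the global CV from per-cell activation times, using the fixed threshold u = 0.5 and the virtual cell size of 100 μm, is sketched below; the exact measurement procedure used for Figure 6 may differ in detail.

```python
CELL_LENGTH_UM = 100.0   # virtual cell size (Section 2.3)
DT_MS = 0.05             # 1 time step = 0.05 ms

def conduction_velocity(activation_steps):
    """Global CV in um/ms over the whole structure, given for each cell
    the first time step at which its voltage crosses u = 0.5 (ordered
    from the first cell to the last)."""
    first, last = activation_steps[0], activation_steps[-1]
    distance_um = (len(activation_steps) - 1) * CELL_LENGTH_UM
    return distance_um / ((last - first) * DT_MS)
```

For example, five cells activating at steps 0, 5, 10, 15, 20 span 400 μm in 1 ms, i.e., the physiological ∼400 μm/ms target.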

3.3.2. Self-Adapting Properties: Monte Carlo Study

We investigated the self-adapting properties of our model by simulating a decreasing ring. In this case, the ring structure is solved for 500 ms (10,000 time-steps); then the last cell is virtually deleted, and its predecessor is connected to the first cell. This procedure provides a valid alternative to the electrical pacing protocol [11].
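The protocol can be sketched as follows. The per-step ring update is reduced here to a toy predecessor-coupling rule with a placeholder repolarizing current (the actual model evolves the full Minimal Model state per cell), and the stopping rule is the summed-voltage check described in Section 2.2.

```python
def ring_step(u, R_V=2.0, dt=0.05):
    """Toy periodic update: predecessor coupling (Eq. (3)) plus a
    placeholder repolarizing current standing in for the kinetics."""
    n = len(u)
    return [u[i] + dt * ((u[(i - 1) % n] - u[i]) / R_V - u[i] / 10.0)
            for i in range(n)]

def decreasing_ring(n_cells, steps_per_stage=10_000, stop_sum=0.5):
    """Decreasing-ring protocol: simulate the ring for 500 ms (10,000
    steps), then delete the last cell and reconnect its predecessor to
    the first cell; stop once the summed voltage drops below 0.5,
    i.e., once no action potential survives."""
    u = [1.0] + [0.0] * (n_cells - 1)   # cell 0 excited, rest at rest
    while len(u) > 1:
        for _ in range(steps_per_stage):
            u = ring_step(u)
            if sum(u) < stop_sum:
                return len(u)           # ring length when the wave died
        u.pop()                         # shorten the ring by one cell
    return len(u)
```

With the toy kinetics the wave simply decays, so the sketch only illustrates the control flow (stage loop, cell deletion, summed-voltage stop); the real dynamics sustain the trapped wave down to the minimum viable ring length.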
The Monte Carlo study was performed on this decreasing-ring scenario, with random resistance variable values following a von Mises distribution to simulate fluctuations and spatial heterogeneity of the cell-cell coupling. We conducted the simulation analysis on nine ring structures of different sizes, with initial lengths from 750 cells (7.5 cm) to 2750 cells (27.5 cm) in increments of 250 cells (2.5 cm). For every initial length, we assigned a random spatial distribution of resistance, further sampling R_V at each time step. From a preliminary analysis of the limiting values of AP wave propagation, we found a stable excitation wave for 0.5 < R_V < 3.0 using different parametrizations of the von Mises distribution, i.e., varying the concentration parameter κ and the expected value μ together with a proper normalization of the distribution. This procedure allows us to reproduce the intrinsic heterogeneity of cardiac tissue, as shown in Figure 7. Among the three distributions investigated, the case characterized by 0.5 < R_V < 5.0 yields a global CV of ∼540 μm/ms, which is not realistic for the myocardium but appropriate for the fast conduction system. To focus on a myocardial tissue analog, we do not discuss the case of Figure 7c further in this paper.
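The random resistance assignment can be sketched as below. This is an illustrative sketch with an assumed mapping, not the authors' code: it draws from a von Mises distribution (here μ = 1.25, κ = 8, one of the parametrizations mentioned in the text), rescales the angle to an R_V value, and rejects samples outside the stability range 0.5 < R_V < 3.0 found in the preliminary analysis; the linear angle-to-resistance mapping is an assumption standing in for the paper's normalization.

```python
import math
import random

def sample_resistance(mu=1.25, kappa=8.0, lo=0.5, hi=3.0):
    """Draw one R_V value centred on mu; retry until it lies in (lo, hi)."""
    while True:
        angle = random.vonmisesvariate(0.0, kappa)  # angle in [0, 2*pi)
        if angle > math.pi:                         # recentre the angle on 0
            angle -= 2.0 * math.pi
        r_v = mu + angle / math.pi                  # assumed spread of +/-1 around mu
        if lo < r_v < hi:
            return r_v
```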
To investigate the dependence of CV on R_V heterogeneity, we analyzed in detail the two cases of Figure 7a,b. In the first case, we chose μ = 1.25 and κ = 8; we then adapted the distribution to fit the range 0.5 < R_V < 2.0, obtaining an effective μ = 1.55 and a global CV of ∼320 μm/ms. In the second test, we adopted μ = 1.5 and κ = 10 and fitted this distribution to 0.5 < R_V < 2.5, yielding a shifted μ = 1.98 and a global CV of ∼380 μm/ms, the closest approximation of the deterministic homogeneous ring. In both analyses, CV fluctuates by ±10 μm/ms due to the imposed heterogeneity. Moreover, the minimum number of cells (i.e., the shortest ring) producing a stable excitation wave was 1000 cells (10 cm).
We evaluated the dynamical behavior of the system by constructing the APD bifurcation diagram, plotting pairs of consecutive APDs corresponding to sequential rotations of the excitation wave. In a stable regime (no alternans), two identical APDs are obtained. Once a critical point is reached, electrical alternans appear, and the APD starts to fluctuate (long-short). When the ring length is short enough, sustained propagation is no longer feasible, and wave annihilation (autoannihilation) occurs.
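The construction of the bifurcation diagram can be sketched as follows. This is an illustrative sketch with hypothetical helper names, not the paper's code: an APD is measured here as the time u stays above a threshold during one passage of the wave, and consecutive APDs are paired so that identical pairs indicate a stable regime while alternating long-short pairs indicate alternans.

```python
def apds(trace, threshold=0.5, dt_ms=0.05):
    """Durations (ms) of the intervals where u >= threshold in one cell's trace."""
    durations, count = [], 0
    for u in trace:
        if u >= threshold:
            count += 1
        elif count:
            durations.append(count * dt_ms)
            count = 0
    if count:  # trace ends while still depolarized
        durations.append(count * dt_ms)
    return durations

def bifurcation_pairs(trace, **kw):
    """(APD_n, APD_n+1) pairs: identical coordinates mean no alternans."""
    d = apds(trace, **kw)
    return list(zip(d, d[1:]))
```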
In Figure 8 we show the APD bifurcation diagram for four different ring locations: the first cell (Figure 8a), the first quarter (Figure 8b), the central cell (Figure 8c), and the last quarter (Figure 8d). Consistently across the four locations, alternans appear for a ring size of ∼7.1 cm, increase dramatically at ∼6.8 cm, and stop at about 6.7 cm. Notably, the last oscillation before autoannihilation is characterized by quasi-null APD values, such that a high level of spatial dispersion of repolarization appears in the tissue. In addition, small fluctuations result from the heterogeneous distribution of the resistance values over the ring.
Figure 9 shows the comparison of the APD bifurcation diagrams obtained for two rings of 17.5 cm considering two von Mises distribution parametrizations. It is worth noting that both alternans onset and autoannihilation occur at different ring lengths. In particular, a narrower distribution allows for smaller lengths and higher alternans amplitudes. This result is further confirmed by the bifurcation diagrams obtained from rings with a much larger spatial dimension. Figure 10 shows the computed APD bifurcation diagrams extracted from the four selected cells, using a von Mises distribution resulting in a faster CV. Consistently, the alternans start below 9 cm, and autoannihilation occurs when the length is about 8.4 cm.

4. Discussion

We presented a simplified phenomenological cardiac model based on a Hybrid Cellular Automata formulation, introducing the cell-cell coupling membrane resistance as a free variable. This approach enables us to thoroughly investigate CV and APD features in the presence of a distributed conductivity, features that are linked to the spatiotemporal organization of the excitation wave and the onset of alternans [55,56,57,58].
From the implementation point of view, we found Matlab® Simulink very suitable for the rapid development of model prototypes. However, it is limited when large-scale simulations must be handled. Furthermore, it was not possible to change the length of the cable/ring at runtime; thus, we were not able to test the self-adapting properties described in the previous sections. In addition, the values of the variables at a given time step are not updated using only the values of the previous step, but rather all the values already computed in the current step. Finally, we found that the computational resources required to compile large HCA models with several cells were prohibitive (we could consider only models with a length of at most 3000 cells). Due to these restrictions, we could not implement the decreasing ring and the related Monte Carlo analysis in this environment. Nevertheless, as illustrated in the space-time plot shown in Figure 11b, we were able to induce electrical alternans with a trapped wave in a ring with a small number of cells by setting the value of the resistance to 2.0.
We performed the same numerical experiments using the CPU-based and GPU-based implementations (see Figure 11d). Due to the negligible error between the results obtained using GPU and CPU, in Figure 11 we provide only the CPU result as representative. While Figure 11b,d look very similar, there is a small difference in the APD alternans profile: specifically, the APD in Figure 11d is longer than in Figure 11b.
If we observe the color representing the value of the variable u, we notice that the transition in Figure 11a is smoother and shorter than in Figure 11c. This is due to the different mechanism for updating the variables in Matlab® Simulink discussed previously, which is not suitable for implementing CA update rules that depend only on the values of the previous step.
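The distinction between the two update semantics can be made concrete with a minimal sketch (illustrative, not the authors' code; the diffusive coupling term is a generic placeholder for the HCA rule): a double-buffered update reads only the previous step's values, whereas an in-place update lets later cells see values already changed within the current step, which is the behavior that makes the Simulink transitions differ.

```python
def step_double_buffered(u, coupling):
    """CA semantics: every new value reads only the previous step's buffer."""
    n = len(u)
    return [u[i] + coupling * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

def step_in_place(u, coupling):
    """In-place update: cell i reads neighbours already modified this step."""
    n = len(u)
    u = u[:]  # copy so the caller's list is untouched
    for i in range(n):
        u[i] = u[i] + coupling * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
    return u
```

Starting from the same state, the two functions diverge after a single step, illustrating why the two environments produce slightly different transitions.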
We therefore focused on the CPU-based (sequential) and GPU-based (parallel) implementations, which instead produce results with a negligible difference (the average mean squared error is 1.3361 × 10⁻⁶). As expected, the parallel GPU-based implementation benefits from the use of many cores, considerably accelerating the simulation. In Figure 5 we report the average execution times for all the lengths and R_V values used. For simulations with a fixed number of time-steps, only the number of cells and time-steps increase the execution times.
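The error metric used for this comparison can be sketched as follows (illustrative, not the paper's code): the mean squared error between the CPU and GPU traces of u, averaged over all cells and time-steps, with the sample size being the number of cells times the number of time-steps as in Table A2.

```python
def mean_squared_error(cpu, gpu):
    """MSE between two equally sized flat sequences of u values."""
    assert len(cpu) == len(gpu)
    return sum((a - b) ** 2 for a, b in zip(cpu, gpu)) / len(cpu)
```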
The longest execution times measured were for the decreasing-ring simulation in both implementations, since we stop only when no AP can be recovered. Still, the average execution time over all lengths and R_V values for this scenario on GPU is about 1/4 of the times measured on CPU, as shown in Figure 5c. Since this kind of analysis does not depend on a fixed number of time-steps but relies on the AP dynamics, the execution time for this scenario depends strongly on the number of cells and on the chosen value of the resistance R_V, as shown in Table A4 and Table A5.
If we consider simulations with a fixed number of time-steps, the GPU implementation performs approximately 10× faster on average than the CPU-based implementation for simulations with more than 40,000 time-steps. This becomes more evident for the simulations with the highest number of required time-steps: the average measured execution time for simulating the ring with 40,000 time-steps and 2750 cells on GPU is ∼2.60 ms, while the CPU required 21.60 ms on average. For the cable with the same length and number of time-steps, the GPU implementation consistently finishes in ∼2.60 ms on average, while the CPU implementation required ∼14.86 ms (less time than in the ring setting). In all implementations, we were able to control the CV with the free resistance variable.
Although APD fluctuations can be observed at the beginning of the Monte Carlo simulations of shorter rings, the critical length for the onset of alternans is consistent across all simulated ring sizes in both the CPU and GPU implementations. The occurrence of alternans depends only on the distribution of the resistance and thus on the resulting CV.
The observed decrease in CV and the occurrence of APD fluctuations in inhomogeneous settings can be compared to the behavior observed in studies of ischemic cardiac tissue [59,60]. Even with slight inhomogeneities in the cell resistance, our analysis shows that the APD is stable until the length falls below a certain critical point, at which too many cells undergo large oscillations, thus preventing continuous propagation. This condition depends on the chosen initial number of cells and CV, as we observed an early onset with both von Mises distributions considered. For a CV of 380 μm/ms, the obtained APD is similar to the results in [11], attained at a regular body temperature of 37 °C, and to the original measurement data for epicardial cells referenced in [16].

5. Conclusions

We have presented an HCA modeling the cardiac cell-cell membrane resistance with a free variable.
We provided three different implementations, comparing pros and cons: one using the Matlab® Simulink graphical environment, a second running sequentially on a CPU, and a third running in parallel on a GPU. While Matlab® Simulink provides a suitable environment for fast prototyping of complex models, it is less efficient in handling simulation instances of large HCA models. The GPU-based implementation, instead, could handle larger instances, considerably accelerating the simulation (in our experiments we observed a 10× speed-up with respect to the CPU-based implementation) and thus the overall analysis of the model.
We then studied the cardiac behavior within a unidimensional domain with inhomogeneous resistance by performing a Monte Carlo simulation. We showed that our modeling approach can reproduce important and complex spatiotemporal properties, such as self-adapting properties and the onset of alternans, paving the ground for promising future applications.
In future work, we plan to extend our analysis from the one-dimensional case to two and three spatial dimensions, including the possibility of using different cell types. We intend to generalize the present study towards an efficient computation of ECG signals [61], simulating pathological conditions by adapting critical parameters such as the free resistance variable and the ionic time constants. Additionally, we aim to apply our HCA computational approach in a more physiological context by implementing species-specific models [62,63,64,65] and investigating the underlying genetics [66] linked to cell-to-cell communication problems [67,68,69].
As mentioned in [58,70,71], the estimation of cardiac conductivity is still under investigation; therefore, in future work we need to consider approaches to determine a more precise value for this parameter. One strategy worth investigating is to use machine learning approaches or optimization algorithms, such as the one described in [58], that map the free resistance variable to a specific conduction velocity. We also plan to improve our GPU-based implementation by combining our current approach with optimizations described in other papers, such as [34,72,73].
The use of other hardware accelerators, such as FPGAs, which are increasingly employed to handle computationally intensive problems in simulation [74,75] and graphics processing [76,77], represents a promising perspective.

Author Contributions

Conceptualization, E.B. and A.G.; methodology, E.B. and A.G.; software, L.M.T.; formal analysis L.M.T.; investigation, L.M.T.; resources, E.B. and A.G.; writing–original draft preparation L.M.T., E.B. and A.G.; writing–review and editing, L.M.T., E.B. and A.G.; visualization, L.M.T.; supervision E.B. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

A.G. acknowledges the support of the Italian National Group for Mathematical Physics (GNFM-INdAM).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
APD: Action Potential Duration
AP: Action Potential
CA: Cellular Automata
CV: Conduction Velocity
HCA: Hybrid Cellular Automata (Model)
R_V: (Value of the) Resistance Variable

Appendix A. Model Parameters

$$
\begin{aligned}
\tau_v^- &= \bigl(1 - H(u-\theta_v^-)\bigr)\,\tau_{v1}^- + H(u-\theta_v^-)\,\tau_{v2}^-\\
\tau_w^- &= \tau_{w1}^- + \bigl(\tau_{w2}^- - \tau_{w1}^-\bigr)\bigl(1 + \tanh\bigl(k_w^-(u-u_w^-)\bigr)\bigr)/2\\
\tau_{so} &= \tau_{so1} + \bigl(\tau_{so2} - \tau_{so1}\bigr)\bigl(1 + \tanh\bigl(k_{so}(u-u_{so})\bigr)\bigr)/2\\
\tau_s &= \bigl(1 - H(u-\theta_w)\bigr)\,\tau_{s1} + H(u-\theta_w)\,\tau_{s2}\\
\tau_o &= \bigl(1 - H(u-\theta_o)\bigr)\,\tau_{o1} + H(u-\theta_o)\,\tau_{o2}
\end{aligned}
$$

$$
w_\infty = \bigl(1 - H(u-\theta_w)\bigr)\bigl(1 - u/\tau_{w\infty}\bigr) + H(u-\theta_w)\,w_\infty^*,
\qquad
v_\infty = \begin{cases} 1, & u < \theta_v^-\\ 0, & u \ge \theta_v^- \end{cases}
$$
Table A1. Model Parameters.

| τ_v⁺ | τ_v1⁻ | τ_v2⁻ | τ_w⁺ | τ_w1⁻ | τ_w2⁻ | τ_s1 | τ_s2 | τ_fi | τ_o1 | τ_o2 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.4506 | 60 | 1150 | 200 | 60 | 15 | 2.7342 | 16 | 0.11 | 400 | 6 |

| θ_v⁻ | θ_w | θ_w⁻ | θ_so | θ_si | θ_o | τ_so1 | τ_so2 | τ_si | τ_w∞ | θ_v | θ_v⁻ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.006 | 0.13 | 0.006 | 0.13 | 0.13 | 0.006 | 30.0181 | 0.9957 | 1.8875 | 0.07 | 0.3 | 0.006 |

| k_w⁺ | k_w⁻ | k_s | k_so | k_si | u_w⁻ | u_s | u_o | u_u | u_so | w∞* |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.7 | 65 | 2.0994 | 2.0458 | 97.8 | 0.03 | 0.9087 | 0 | 1.55 | 0.65 | 0.94 |

Appendix B. Tabulated Mean Squared Error

Table A2. MSE of u between CPU and GPU. Sample size (number of cells × simulation time in time-steps) on the left; value of R_V on top.

| Sample Size \ R_V | 0.5 | 1.7 | 2.0 | 2.2 | 3.0 | 3.5 |
|---|---|---|---|---|---|---|
| 1000 × 20,000 | 3 × 10⁻⁶ | 3 × 10⁻⁶ | 1 × 10⁻⁶ | 0.0 | 0.0 | 0.0 |
| 1000 × 30,000 | 8 × 10⁻⁶ | 3 × 10⁻⁶ | 2 × 10⁻⁶ | 0.0 | 0.0 | 0.0 |
| 1000 × 40,000 | 1 × 10⁻⁵ | 2 × 10⁻⁶ | 5 × 10⁻⁶ | 3 × 10⁻⁶ | 0.0 | 0.0 |
| 2000 × 20,000 | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 0.0 | 0.0 | 1 × 10⁻⁶ | 0.0 |
| 2000 × 30,000 | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 2 × 10⁻⁶ | 0.0 |
| 2000 × 40,000 | 6 × 10⁻⁶ | 0.0 | 2 × 10⁻⁶ | 1 × 10⁻⁶ | 2 × 10⁻⁶ | 0.0 |
| 3000 × 20,000 | 0.0 | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 0.0 | 0.0 | 1 × 10⁻⁶ |
| 3000 × 30,000 | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 1 × 10⁻⁶ | 0.0 | 0.0 | 0.0 |
| 3000 × 40,000 | 2 × 10⁻⁶ | 3 × 10⁻⁶ | 1 × 10⁻⁶ | 0.0 | 1 × 10⁻⁶ | 0.0 |
| 4000 × 20,000 | 0.0 | 1 × 10⁻⁶ | 2 × 10⁻⁶ | 0.0 | 0.0 | 0.0 |
| 4000 × 30,000 | 0.0 | 1 × 10⁻⁶ | 7 × 10⁻⁶ | 0.0 | 0.0 | 0.0 |
| 4000 × 40,000 | 1 × 10⁻⁶ | 0.2 × 10⁻⁶ | 1 × 10⁻⁵ | 0.0 | 0.0 | 1 × 10⁻⁶ |

Appendix C. Tabulated Conduction Velocity

Table A3. CV (μm/ms) in a cell ring; length on the left, value of R_V on top. An empty entry means no propagating wave (no CV).

| Length \ R_V | 0.5 | 1.7 | 2.0 | 2.2 | 3.0 | 3.5 |
|---|---|---|---|---|---|---|
| 1000 | 139.059021 | 341.596832 | 384.452545 | 411.704071 | 516.145691 | 577.957764 |
| 2000 | 139.070541 | 341.621796 | 384.49700 | 411.740479 | 516.137329 | 578.07977 |
| 3000 |  | 341.610657 | 384.51181 | 411.752563 | 516.34583 | 578.064758 |
| 4000 |  | 341.619690 | 384.500732 | 411.758636 | 516.133179 | 578.057251 |

Appendix D. Tabulated Execution Times CPU vs. GPU

Table A4. Execution times in milliseconds for the CPU-based implementation of the decreasing ring simulation. On the left the number of cells (length), on top the value considered for R_V.

| Length \ R_V | 0.50 | 1.00 | 1.70 | 2.00 | 2.20 | 3.00 | 3.50 |
|---|---|---|---|---|---|---|---|
| 750 | 240.192 | 238.468 | 2.90253 | 2.8968 | 2.84844 | 2.78341 | 2.84889 |
| 1000 | 564.484 | 579.127 | 350.292 | 220.442 | 131.932 | 4.29828 | 4.35958 |
| 1250 | 969.275 | 1011.07 | 784.288 | 653.249 | 558.842 | 202.407 | 6.14406 |
| 1500 | 1455.63 | 1526.11 | 1319.38 | 1185.8 | 1092.33 | 718.903 | 483.882 |
| 1750 | 2018.12 | 2117.92 | 1939.51 | 1817.11 | 1725.46 | 1345.71 | 1100.71 |
| 2000 | 2660.24 | 2791.34 | 2641.93 | 2527.15 | 2439.14 | 2073.84 | 1819.97 |
| 2250 | 3383.76 | 3544.19 | 3429.59 | 3323.34 | 3244.67 | 2890.8 | 2640.81 |
| 2500 | 4183.63 | 4386.91 | 4300.8 | 4204.9 | 4133.38 | 3800.71 | 3556.48 |
| 2750 | 5062.56 | 5291.62 | 5246.47 | 5169.95 | 5096.36 | 4826.3 | 4566.59 |
Table A5. Execution times in milliseconds for the GPU-based implementation of the decreasing ring simulation. On the left the number of cells (length), on top the value considered for R_V.

| Length \ R_V | 0.50 | 1.00 | 1.70 | 2.00 | 2.20 | 3.00 | 3.50 |
|---|---|---|---|---|---|---|---|
| 750 | 41.8529 | 125.058 | 1.32457 | 0.146041 | 0.165624 | 1.26863 | 1.25893 |
| 1000 | 254.9 | 255.257 | 135.833 | 82.8866 | 48.4579 | 1.47809 | 1.4264 |
| 1250 | 464.078 | 488.732 | 344.528 | 266.477 | 226.149 | 77.6642 | 2.07172 |
| 1500 | 647.826 | 649.191 | 512.527 | 455.636 | 411.114 | 253.568 | 165.798 |
| 1750 | 818.234 | 820.921 | 684.815 | 625.691 | 580.426 | 423.808 | 336.593 |
| 2000 | 1009.94 | 1025.43 | 890.362 | 799.599 | 754.914 | 597.915 | 509.719 |
| 2250 | 1212.77 | 1205.46 | 1070.18 | 1023.48 | 967.128 | 808.924 | 721.257 |
| 2500 | 1402.38 | 1415.23 | 1282.1 | 1192.58 | 1147.75 | 986.991 | 896.333 |
| 2750 | 1616.38 | 1635.49 | 1480.98 | 1426.67 | 1349.12 | 1189.27 | 1099.99 |
Table A6. Execution times in milliseconds on CPU with cable structure for 20,000 time-steps (1.0 S); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.52.177792.918753.603914.227354.891025.484446.114416.744177.31926
1.02.460552.884223.597944.314265.054425.702776.339516.976677.6054
1.72.13952.861733.58424.282185.014755.708566.423137.086277.77064
2.02.159322.864573.654344.389995.146645.901046.648117.398588.11657
2.22.134372.848823.567444.232315.0055.693076.404777.069737.81166
3.02.128422.83673.554784.250044.953895.681696.338737.080787.79899
3.52.173632.890223.686924.432045.118495.819536.599567.442438.19544
Table A7. Execution times in milliseconds on CPU with cable structure for 30,000 time-steps (1.5 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.53.122634.156515.215246.251497.295878.219469.2259510.173411.0834
1.03.027764.131035.137796.211797.242848.276899.3119810.353811.3794
1.73.078454.115065.16056.186927.074778.234369.0883710.234611.1813
2.03.042074.158625.218186.245237.197688.423839.4889210.559511.5156
2.23.079434.106175.051796.169227.168858.221078.9706410.273511.3051
3.03.073244.097315.127666.147027.17818.200019.1295410.256411.1974
3.53.119484.188575.25856.326077.396998.458689.5356610.443111.6698
Table A8. Execution times in milliseconds on CPU with cable structure for 40,000 time-steps (2.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.54.072715.401476.747287.914069.4996610.864512.220913.542614.7493
1.04.051335.393916.710238.100649.4572710.796312.046413.451114.8274
1.74.034465.377146.70958.072199.3700910.696712.08813.42814.7969
2.04.052475.354286.775588.179059.5702710.946912.327713.713815.0427
2.24.029115.370496.680628.053329.3962410.743612.088213.415114.7604
3.04.006945.361616.701488.042189.2685810.603411.996213.359114.7013
3.54.067235.447166.832558.226219.5989110.986812.337613.689715.1424
Table A9. Execution times in milliseconds on CPU with ring structure for 20,000 time-steps (1.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.52.402323.312264.410175.238235.900226.52867.12367.752938.39601
1.02.444753.394424.402935.551356.904268.306959.3232710.008510.6383
1.72.282153.393364.462885.607016.879338.279439.8025411.553113.0987
2.02.309263.362794.556915.7517.058948.4316310.054711.766613.625
2.22.301053.311524.441785.638296.876348.288939.8191511.469713.3359
3.02.299243.173124.292495.564226.891968.317439.8252411.351813.2487
3.52.33783.26884.317925.600917.089238.563110.144111.652413.6406
Table A10. Execution times in milliseconds on CPU with ring structure for 30,000 time-steps (1.5 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.53.544594.787166.202197.851899.6871711.142711.994913.318114.2619
1.03.628554.964146.285327.779059.4282811.265113.301715.431617.8885
1.73.232754.919636.42037.936819.5306211.25413.155315.214717.4927
2.03.279144.87566.474537.984719.7256911.182913.098815.428217.7083
2.23.229434.768116.36937.920689.5935211.321513.031715.203417.3761
3.03.227244.432796.075187.849529.596211.388513.265715.225217.3078
3.53.277024.527745.891237.780629.6514211.583613.604715.61417.8365
Table A11. Execution times in milliseconds on CPU with ring structure for 40,000 time-steps (2.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.54.681566.244527.98219.8983112.078714.491917.223119.434721.376
1.04.80076.499788.2002210.002611.913214.12916.42319.049321.7869
1.74.17216.460228.3409910.174512.206714.254216.471718.82221.3789
2.04.193166.369988.4346910.411612.342614.468516.575619.060721.6732
2.24.156346.236838.1877410.29712.305614.353116.577518.922421.3154
3.04.172165.626967.8717910.138912.310814.498416.626318.959721.552
3.54.221945.787497.46719.999712.430814.721617.101519.512222.0179
Table A12. Execution times in milliseconds on GPU with cable structure for 20,000 time-steps (1.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.50.8489990.8893331.162741.161881.185381.26621.312231.311981.31173
1.00.8418140.8717421.126121.150561.212531.262071.296151.271761.29431
1.70.8218170.8658281.119411.14271.208521.246881.279631.254921.2792
2.00.813530.8643341.162211.144171.16491.247751.275781.297291.27352
2.20.8376170.8634851.160831.144431.169721.243711.274551.296261.27503
3.00.8352240.8617631.161461.142681.165471.238391.267141.289831.26722
3.50.8386920.860051.159081.138511.161791.237071.264721.286041.26323
Table A13. Execution times in milliseconds on GPU with cable structure for 30,000 time-steps (1.5 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.51.250721.309911.760311.724921.755751.896031.953131.914312.01845
1.01.246371.296811.743411.715311.744961.871231.91691.880331.98441
1.71.247011.287531.734671.714991.762221.854811.849591.865491.96853
2.01.210061.28761.736081.736411.7511.849131.898631.858361.89367
2.21.199421.290341.732571.701411.740621.849631.897081.859491.89444
3.01.198591.283341.734391.707661.73711.845591.886061.852131.93722
3.51.199921.28211.735511.703651.730561.841271.884061.851431.95201
Table A14. Execution times in milliseconds on GPU with cable structure for 40,000 time-steps (2.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.51.670111.67292.315252.375522.336792.501822.567932.523472.66213
1.01.661081.65862.320682.35992.322252.469852.512932.526082.60121
1.71.65771.653492.317792.359872.317782.461872.430512.566762.60954
2.01.658751.714812.225052.353022.343892.372362.507142.466582.61
2.21.654211.710052.221872.351252.363542.375442.507282.46062.60991
3.01.652371.694122.223132.342392.315022.447092.501812.462472.60179
3.51.654791.645812.216372.352962.309232.436732.445662.4572.59723
Table A15. Execution times in milliseconds on GPU with ring structure for 20,000 time-steps (1.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.50.8248870.8582921.177561.20211.226421.226811.263531.331391.35563
1.00.8259410.8676121.176051.195981.215881.215141.245541.312831.33772
1.70.8093180.8869471.176611.192861.186171.210871.275851.302471.32589
2.00.8373680.8642771.178221.196461.216671.206941.234231.302221.32612
2.20.806680.8596211.179541.199141.217081.209131.235551.301581.32694
3.00.803940.836841.153831.196791.219191.20731.238381.30311.32707
3.50.8020610.831341.153121.198691.21751.203651.235581.30251.32064
Table A16. Execution times in milliseconds on GPU with ring structure for 30,000 time-steps (1.5 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.51.272121.284481.673521.786681.814321.821091.94491.97571.94173
1.01.232731.278321.728461.781951.787861.813671.91681.945221.91128
1.71.204661.281491.76471.791051.749711.803541.91121.942471.91323
2.01.250881.281561.702821.791371.816181.803971.850091.938071.92427
2.21.249491.288511.699431.792391.820371.80651.916061.940381.90788
3.01.250071.242731.690351.791641.822771.806181.916351.940811.91116
3.51.246321.240531.639941.793211.820511.803421.9161.942561.91027
Table A17. Execution times in milliseconds on GPU with ring structure for 40,000 time-steps (2.0 s); R V on the Left and number of cells (length) on the Top.
R V |Length75010001250150017502000225025002750
0.51.708211.768682.248552.37732.421972.412532.556412.513972.65875
1.01.70821.767852.250072.375852.379592.406082.535772.496862.63478
1.71.661911.772.257142.382522.422652.400742.539532.495342.62592
2.01.603761.771242.314132.298252.421992.44192.509592.564672.5395
2.21.596951.777382.343872.306942.426552.403522.54532.58572.54421
3.01.629561.71262.335122.302692.428192.404532.547062.588982.54902
3.51.654741.710352.20772.357572.429432.400462.545672.557262.60797

Appendix E. Implementations Additional Material

Appendix E.1. Matlab Simulink Block Examples

Figure A1. Example cell block in Matlab Simulink.
Figure A2. Example resistor block in Matlab Simulink.
Figure A3. Five cells grouped in Matlab Simulink.
Figure A4. Model block in Matlab Simulink.

References

1. Beeler, G.W.; Reuter, H. Reconstruction of the action potential of ventricular myocardial fibres. J. Physiol. 1977, 268, 177–210.
2. Luo, C.H.; Rudy, Y. A model of the ventricular cardiac action potential. Depolarization, repolarization, and their interaction. Circ. Res. 1991, 68, 1501–1526.
3. Karma, A. Electrical alternans and spiral wave breakup in cardiac tissue. Chaos 1994, 4, 461–472.
4. Iyer, V.; Mazhari, R.; Winslow, R.L. A Computational Model of the Human Left-Ventricular Epicardial Myocyte. Biophys. J. 2004, 87, 1507–1525.
5. Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544.
6. Noble, D. A modification of the Hodgkin–Huxley equations applicable to Purkinje fibre action and pacemaker potentials. J. Physiol. 1962, 160, 317–352.
7. Noble, D. From the Hodgkin–Huxley axon to the virtual heart. J. Physiol. 2007, 580, 15–22.
8. Dierckx, H.; Fenton, F.H.; Filippi, S.; Pumir, A.; Sridhar, S. Simulating normal and arrhythmic dynamics: From sub-cellular to tissue and organ level. Front. Phys. 2019, 7, 89.
9. Bartocci, E.; Singh, R.; von Stein, F.B.; Amedome, A.; Caceres, A.J.; Castillo, J.; Closser, E.; Deards, G.; Goltsev, A.; Ines, R.S.; et al. Teaching cardiac electrophysiology modeling to undergraduate students: Laboratory exercises and GPU programming for the study of arrhythmias and spiral wave dynamics. Adv. Physiol. Educ. 2011, 35, 427–437.
10. Gizzi, A.; Loppini, A.; Ruiz-Baier, R.; Ippolito, A.; Camassa, A.; La Camera, A.; Emmi, E.; Di Perna, L.; Garofalo, V.; Cherubini, C.; et al. Nonlinear diffusion and thermo-electric coupling in a two-variable model of cardiac action potential. Chaos Interdiscip. J. Nonlinear Sci. 2017, 27, 093919.
11. Fenton, F.H.; Gizzi, A.; Cherubini, C.; Pomella, N.; Filippi, S. Role of temperature on nonlinear cardiac dynamics. Phys. Rev. E 2013, 87, 042717.
12. Ruiz Baier, R.; Gizzi, A.; Loppini, A.; Cherubini, C.; Filippi, S. Modelling Thermo-Electro-Mechanical Effects in Orthotropic Cardiac Tissue. Commun. Comput. Phys. 2019, 27.
13. Bini, D.; Cherubini, C.; Filippi, S. On vortices heating biological excitable media. Chaos Solitons Fractals 2009, 42, 2057–2066.
14. Welsh, A.J.; Delgado, C.; Lee-Trimble, C.; Kaboudian, A.; Fenton, F.H. Simulating waves, chaos and synchronization with a microcontroller. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 123104.
15. Fenton, F.; Karma, A. Vortex dynamics in three-dimensional continuous myocardium with fiber rotation: Filament instability and fibrillation. Chaos Interdiscip. J. Nonlinear Sci. 1998, 8, 20–47.
16. Bueno-Orovio, A.; Cherry, E.M.; Fenton, F.H. Minimal model for human ventricular action potentials in tissue. J. Theor. Biol. 2008, 253, 544–560.
17. Bartocci, E.; Corradini, F.; Berardini, M.R.D.; Entcheva, E.; Grosu, R.; Smolka, S.A. Spatial Networks of Hybrid I/O Automata for Modeling Excitable Tissue. Electron. Notes Theor. Comput. Sci. 2008, 194, 51–67.
18. Grosu, R.; Batt, G.; Fenton, F.H.; Glimm, J.; Guernic, C.L.; Smolka, S.A.; Bartocci, E. From Cardiac Cells to Genetic Regulatory Networks. In Proceedings of the CAV 2011: The 23rd International Conference on Computer Aided Verification, Snowbird, UT, USA, 14–20 July 2011; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6806, pp. 396–411.
19. Wolfram, S. Cellular automata as models of complexity. Nature 1984, 311, 419–424.
20. Murray, J. Mathematical Biology II: Spatial Models and Biomedical Applications; Springer Nature: Berlin/Heidelberg, Germany, 2013.
21. Neumann, J.; Burks, A.W. Theory of Self-Reproducing Automata; University of Illinois Press: Urbana, IL, USA, 1966; Volume 1102024.
22. Amorim, R.M.; Campos, R.S.; Lobosco, M.; Jacob, C.; dos Santos, R.W. An electro-mechanical cardiac simulator based on cellular automata and mass-spring models. In Proceedings of the International Conference on Cellular Automata, Santorini, Greece, 24–27 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 434–443.
23. Lehotzky, D.; Zupanc, G.K. Cellular Automata Modeling of Stem-Cell-Driven Development of Tissue in the Nervous System. Dev. Neurobiol. 2019, 79, 497–517.
24. Grosu, R.; Smolka, S.A.; Corradini, F.; Wasilewska, A.; Entcheva, E.; Bartocci, E. Learning and Detecting Emergent Behavior in Networks of Cardiac Myocytes. Commun. ACM 2009, 52, 97–105.
25. Bartocci, E.; Corradini, F.; Di Berardini, M.; Entcheva, E.; Smolka, S.; Grosu, R. Modeling and simulation of cardiac tissue using hybrid I/O automata. Theor. Comput. Sci. 2009, 410, 3149–3165.
26. Andalam, S.; Ramanna, H.; Malik, A.; Roop, P.; Patel, N.; Trew, M.L. Hybrid automata models of cardiac ventricular electrophysiology for real-time computational applications. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 5595–5598.
27. Ye, P.; Grosu, R.; Smolka, S.A.; Entcheva, E. Formal Analysis of Abnormal Excitation in Cardiac Tissue. In Proceedings of the CMSB 2008: The 6th International Conference on Computational Methods in Systems Biology, Rostock, Germany, 12–15 October 2008; Volume 5307, pp. 141–155.
28. Bartocci, E.; Lió, P. Computational Modeling, Formal Analysis, and Tools for Systems Biology. PLoS Comput. Biol. 2016, 12, e1004591.
29. Cytrynbaum, E.N.; MacKay, V.; Nahman-Lévesque, O.; Dobbs, M.; Bub, G.; Shrier, A.; Glass, L. Double-wave reentry in excitable media. Chaos Interdiscip. J. Nonlinear Sci. 2019, 29, 073103.
30. Sigalas, C.; Cremer, M.; Bose, S.; Burton, R.A. Combining tissue engineering and optical imaging approaches to explore interactions along the neuro-cardiac axis. R. Soc. Open Sci. 2020.
  31. Bub, G.; Glass, L.; Publicover, N.G.; Shrier, A. Bursting calcium rotors in cultured cardiac myocyte monolayers. Proc. Natl. Acad. Sci. USA 1998, 95, 10283–10287. [Google Scholar] [CrossRef] [Green Version]
  32. Bartocci, E.; Cherry, E.M.; Glimm, J.; Grosu, R.; Smolka, S.A.; Fenton, F.H. Toward real-time simulation of cardiac dynamics. In Proceedings of the 9th International Conference on Computational Methods in Systems Biology, Paris, France, 21–23 September 2011; ACM: New York, NY, USA, 2011; pp. 103–112. [Google Scholar] [CrossRef] [Green Version]
  33. Mena, A.; Ferrero, J.M.; Matas, J.F.R. GPU accelerated solver for nonlinear reaction–diffusion systems. Application to the electrophysiology problem. Comput. Phys. Commun. 2015, 196, 280–289. [Google Scholar] [CrossRef]
  34. Pires, C.W.S.; Vasconcellos, E.C.; Clua, E.W.G. GPU Memory Access Optimization for 2D Electrical Wave Propagation Through Cardiac Tissue and Karma Model Using Time and Space Blocking. In Proceedings of the International Conference on Computational Science and Its Applications, Cagliari, Italy, 1–4 July 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 376–390. [Google Scholar] [CrossRef]
  35. Xu, Q.; Zhu, D. FPGA-based Experimental Validations of Electrical Activities in Two Adjacent FitzHugh–Nagumo Neurons Coupled by Memristive Electromagnetic Induction. IETE Tech. Rev. 2020, 1–15. [Google Scholar] [CrossRef]
  36. Adon, N.A.; Jabbar, M.H.; Mahmud, F. FPGA implementation for cardiac excitation-conduction simulation based on FitzHugh-Nagumo model. In Proceedings of the 5th International Conference on Biomedical Engineering in Vietnam, Ho Chi Minh City, Vietnam, 16–18 June 2014; Springer: Berlin/Heidelberg, Germany, 2015; pp. 117–120. [Google Scholar] [CrossRef]
  37. Fenton, F.; Cherry, E. Models of cardiac cell. Scholarpedia 2008, 3, 1868. [Google Scholar] [CrossRef]
  38. Hastings, H.M.; Fenton, F.H.; Evans, S.J.; Hotomaroglu, O.; Geetha, J.; Gittelson, K.; Nilson, J.; Garfinkel, A. Alternans and the onset of ventricular fibrillation. Phys. Rev. E 2000, 62, 4043–4048. [Google Scholar] [CrossRef] [Green Version]
  39. Watanabe, M.A.; Fenton, F.H.; Evans, S.J.; Hastings, H.M.; Karma, A. Mechanisms for Discordant Alternans and Induction of Reentry in Simulated Cardiac Tissue. J. Cardiovasc. Electrophysiol. 2001, 12, 196–206. [Google Scholar] [CrossRef]
  40. Cherry, E.M.; Fenton, F.H. Suppression of alternans and conduction blocks despite steep APD restitution: Electrotonic, memory, and conduction velocity restitution effects. Am. J. Physiol.-Heart Circ. Physiol. 2004, 286, H2332–H2341. [Google Scholar] [CrossRef] [PubMed]
  41. Gizzi, A.; Cherry, E.; Gilmour, R.F., Jr.; Luther, S.; Filippi, S.; Fenton, F.H. Effects of pacing site and stimulation history on alternans dynamics and the development of complex spatiotemporal patterns in cardiac tissue. Front. Physiol. 2013, 4, 71. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Islam, M.A.; Cleaveland, R.; Fenton, F.H.; Grosu, R.; Jones, P.L.; Smolka, S.A. Probabilistic reachability for multi-parameter bifurcation analysis of cardiac alternans. Theor. Comput. Sci. 2019, 765, 158–169. [Google Scholar] [CrossRef]
  43. Burger, M. Inverse problems in ion channel modelling. Inverse Probl. 2011, 27, 083001. [Google Scholar] [CrossRef]
  44. Clerx, M.; Beattie, K.A.; Gavaghan, D.J.; Mirams, G.R. Four ways to fit an ion channel model. Biophys. J. 2019, 117, 2420–2437. [Google Scholar] [CrossRef]
  45. Weinberg, S. Ephaptic coupling rescues conduction failure in weakly coupled cardiac tissue with voltage-gated gap junctions. Chaos 2017, 27, 093908. [Google Scholar] [CrossRef]
  46. Courtemanche, M.; Glass, L.; Keener, J.P. Instabilities of a propagating pulse in a ring of excitable media. Phys. Rev. Lett. 1993, 70, 2182. [Google Scholar] [CrossRef]
  47. Glass, L.; Josephson, M.E. Resetting and Annihilation of Reentrant Abnormally Rapid Heartbeat. Phys. Rev. Lett. 1995, 75, 2059–2062. [Google Scholar] [CrossRef]
  48. Steinberg, B.E.; Glass, L.; Shrier, A.; Bub, G. The role of heterogeneities and intercellular coupling in wave propagation in cardiac tissue. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2006, 364, 1299–1311. [Google Scholar] [CrossRef]
  49. Fink, M.; Niederer, S.A.; Cherry, E.M.; Fenton, F.H.; Koivumäki, J.T.; Seemann, G.; Thul, R.; Zhang, H.; Sachse, F.B.; Beard, D.; et al. Cardiac cell modelling: Observations from the heart of the cardiac physiome project. Prog. Biophys. Mol. Biol. 2011, 104, 2–21. [Google Scholar] [CrossRef] [Green Version]
  50. Patel, R.B.; Ng, J.; Reddy, V.; Chokshi, M.; Parikh, K.; Subacius, H.; Alsheikh-Ali, A.A.; Nguyen, T.; Link, M.S.; Goldberger, J.J.; et al. Early Repolarization Associated with Ventricular Arrhythmias in Patients with Chronic Coronary Artery Disease. Circ. Arrhythmia Electrophysiol. 2010, 3, 489–495. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Kaboudian, A.; Cherry, E.M.; Fenton, F.H. Real-time interactive simulations of large-scale systems on personal computers and cell phones: Toward patient-specific heart modeling and other applications. Sci. Adv. 2019, 5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Klabunde, R. Cardiovascular Physiology Concepts; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2011. [Google Scholar]
  53. Mines, G.R. On circulating excitations in heart muscles and their possible relation to tachycardia and fibrillation. Trans. R. Soc. Can. 1914, 8, 43–52. [Google Scholar]
  54. Rytand, D.A. The Circus Movement (Entrapped Circuit Wave) Hypothesis and Atrial Flutter. Ann. Intern. Med. 1966, 65, 125–159. [Google Scholar] [CrossRef] [PubMed]
  55. Loppini, A.; Gizzi, A.; Ruiz-Baier, R.; Cherubini, C.; Fenton, F.H.; Filippi, S. Competing Mechanisms of Stress-Assisted Diffusivity and Stretch-Activated Currents in Cardiac Electromechanics. Front. Physiol. 2018, 9. [Google Scholar] [CrossRef]
  56. Cherubini, C.; Filippi, S.; Gizzi, A.; Ruiz-Baier, R. A note on stress-driven anisotropic diffusion and its role in active deformable media. J. Theor. Biol. 2017, 430, 221–228. [Google Scholar] [CrossRef]
  57. Loppini, A.; Gizzi, A.; Cherubini, C.; Cherry, E.M.; Fenton, F.H.; Filippi, S. Spatiotemporal correlation uncovers characteristic lengths in cardiac tissue. Phys. Rev. E 2019, 100, 020201. [Google Scholar] [CrossRef] [Green Version]
  58. Barone, A.; Gizzi, A.; Fenton, F.; Filippi, S.; Veneziani, A. Experimental validation of a variational data assimilation procedure for estimating space-dependent cardiac conductivities. Comput. Methods Appl. Mech. Eng. 2020, 358, 112615. [Google Scholar] [CrossRef]
  59. Gottwald, E.; Gottwald, M.; Dhein, S. Enhanced dispersion of epicardial activation–recovery intervals at sites of histological inhomogeneity during regional cardiac ischaemia and reperfusion. Heart 1998, 79, 474–480. [Google Scholar] [CrossRef]
  60. Taggart, P.; Sutton, P.M.; Opthof, T.; Coronel, R.; Trimlett, R.; Pugsley, W.; Kallis, P. Inhomogeneous Transmural Conduction During Early Ischaemia in Patients with Coronary Artery Disease. J. Mol. Cell. Cardiol. 2000, 32, 621–630. [Google Scholar] [CrossRef]
  61. McSharry, P.E.; Clifford, G.D.; Tarassenko, L.; Smith, L.A. A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 2003, 50, 289–294. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Hund, T.J.; Rudy, Y. Rate Dependence and Regulation of Action Potential and Calcium Transient in a Canine Cardiac Ventricular Cell Model. Circulation 2004, 110, 3168–3174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Sarai, N.; Matsuoka, S.; Kuratomi, S.; Ono, K.; Noma, A. Role of Individual Ionic Current Systems in the SA Node Hypothesized by a Model Study. Jpn. J. Physiol. 2003, 53, 125–134. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  64. Mahajan, A.; Shiferaw, Y.; Sato, D.; Baher, A.; Olcese, R.; Xie, L.H.; Yang, M.J.; Chen, P.S.; Restrepo, J.G.; Karma, A.; et al. A Rabbit Ventricular Action Potential Model Replicating Cardiac Dynamics at Rapid Heart Rates. Biophys. J. 2008, 94, 392–410. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Cherry, E.M.; Ehrlich, J.R.; Nattel, S.; Fenton, F.H. Pulmonary vein reentry—Properties and size matter: Insights from a computational analysis. Heart Rhythm 2007, 4, 1553–1562. [Google Scholar] [CrossRef] [PubMed]
  66. Rice, J.J.; Wang, F.; Bers, D.M.; De Tombe, P.P. Approximate Model of Cooperative Activation and Crossbridge Cycling in Cardiac Muscle Using Ordinary Differential Equations. Biophys. J. 2008, 95, 2368–2390. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Lenarda, P.; Gizzi, A.; Paggi, M. A modeling framework for electro-mechanical interaction between excitable deformable cells. Eur. J. Mech.-A/Solids 2018, 72, 374–392. [Google Scholar] [CrossRef] [Green Version]
  68. McCain, M.L.; Lee, H.; Aratyn-Schaus, Y.; Kléber, A.G.; Parker, K.K. Cooperative coupling of cell-matrix and cell–cell adhesions in cardiac muscle. Proc. Natl. Acad. Sci. USA 2012, 109, 9881–9886. [Google Scholar] [CrossRef] [Green Version]
  69. Rohr, S. Role of gap junctions in the propagation of the cardiac action potential. Cardiovasc. Res. 2004, 62, 309–322. [Google Scholar] [CrossRef]
  70. Barone, A.; Fenton, F.; Veneziani, A. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities. Chaos 2017, 27, 093930. [Google Scholar] [CrossRef]
  71. Barone, A.; Carlino, M.G.; Gizzi, A.; Perotto, S.; Veneziani, A. Efficient estimation of cardiac conductivities: A proper generalized decomposition approach. J. Comput. Phys. 2020, 423, 109810. [Google Scholar] [CrossRef]
  72. Campos, J.; Oliveira, R.; dos Santos, R.; Rocha, B. Lattice Boltzmann method for parallel simulations of cardiac electrophysiology using GPUs. J. Comput. Appl. Math. 2016, 295, 70–82, VIII Pan-American Workshop in Applied and Computational Mathematics. [Google Scholar] [CrossRef]
  73. Arthurs, C.J.; Bishop, M.J.; Kay, D. Efficient simulation of cardiac electrical propagation using high-order finite elements II: Adaptive p-version. J. Comput. Phys. 2013, 253, 443–470. [Google Scholar] [CrossRef]
  74. Yang, C.; Geng, T.; Wang, T.; Patel, R.; Xiong, Q.; Sanaullah, A.; Wu, C.; Sheng, J.; Lin, C.; Sachdeva, V.; et al. Fully integrated FPGA molecular dynamics simulations. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Denver, CO, USA, 17–22 November 2019; pp. 1–31. [Google Scholar] [CrossRef] [Green Version]
  75. Bakhteri, R.; Cheng, J.; Semmelhack, A. Design and Implementation of Cellular Automata on FPGA for Hardware Acceleration. Procedia Comput. Sci. 2020, 171, 1999–2007. [Google Scholar] [CrossRef]
  76. Siddiqui, F.; Amiri, S.; Minhas, U.I.; Deng, T.; Woods, R.; Rafferty, K.; Crookes, D. FPGA-Based Processor Acceleration for Image Processing Applications. J. Imaging 2019, 5, 16. [Google Scholar] [CrossRef] [Green Version]
  77. Georgis, G.; Lentaris, G.; Reisis, D. Acceleration techniques and evaluation on multi-core CPU, GPU and FPGA for image processing and super-resolution. J. Real-Time Image Process. 2019, 16, 1207–1234. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Schematic propagation in a 4-cell HCA with a vanishing activation wave.
Figure 2. Circuit schematics of 1D cell structures. Panel (a) sketches a cable and Panel (b) a ring, with 3 cells each. Cells are represented as voltage sources; the resistors in between represent the membrane resistance.
Figure 3. Schematic of GPU and CPU communication and memories. Left: memory interactions on a single execution grid created by one kernel call, with N sketched thread blocks and multiple threads within each block on the GPU. Right: CPU interface and legend.
Figure 4. Boxplots of the mean squared error E of the variable u between CPU and GPU solutions. Panel (a) groups the samples by R_V over an increasing number of executed computations (N_T), while panel (b) groups them by N_T over varying R_V.
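As an illustrative sketch (not the authors' code; the function and variable names are assumptions), the error metric of Figure 4 is a plain mean squared error between the CPU and GPU solutions of the voltage variable u:

```python
import numpy as np

# Hypothetical helper: mean squared error E between the voltage traces
# u computed on CPU and on GPU, as compared in Figure 4.
def mse(u_cpu, u_gpu):
    u_cpu = np.asarray(u_cpu, dtype=float)
    u_gpu = np.asarray(u_gpu, dtype=float)
    return float(np.mean((u_cpu - u_gpu) ** 2))

# Identical solutions yield zero error.
assert mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```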
Figure 5. Mean execution times over all cable lengths and resistance values for GPU (orange) and CPU (blue). (a) Cable, (b) ring, and (c) decreasing ring structure.
Figure 6. Conduction velocity (CV, in μm/ms) in cell rings of different sizes with varying values of R_V.
Figure 7. von Mises distribution functions for the spatial characterization of R_V.
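As a minimal sketch of the idea behind Figures 7–10 (not the paper's implementation; the resistance range and function name are assumptions), a von Mises density can characterize an inhomogeneous R_V profile along a ring of cells:

```python
import numpy as np

# Hypothetical sketch: assign a spatially varying cell-cell resistance R_V
# along a ring by evaluating an (unnormalized) von Mises density at each
# cell position, then rescaling it to an assumed range [rv_min, rv_max].
def rv_profile(n_cells, mu=1.55, kappa=8.0, rv_min=1.0, rv_max=10.0):
    # Map cell indices to angles on the ring, in [-pi, pi).
    theta = np.linspace(-np.pi, np.pi, n_cells, endpoint=False)
    # Unnormalized von Mises density: exp(kappa * cos(theta - mu)).
    density = np.exp(kappa * np.cos(theta - mu))
    # Rescale to the assumed resistance range.
    density = (density - density.min()) / (density.max() - density.min())
    return rv_min + (rv_max - rv_min) * density

rv = rv_profile(1000)  # 1000 cells ~ 10 cm, as in Figure 8
```

The parameters μ = 1.55, κ = 8 and μ = 1.98, κ = 10 are the two configurations reported in Figures 8–10; larger κ concentrates the high-resistance region.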
Figure 8. APD bifurcation diagrams of four sampled cells within the decreasing ring experiment. The original number of cells is 1000 (10 cm). von Mises distribution parameters: μ = 1.55, κ = 8.
Figure 9. APD bifurcation diagrams of two rings with 1750 cells (17.5 cm) and different von Mises distributions: (a) μ = 1.55, κ = 8; (b) μ = 1.98, κ = 10.
Figure 10. Bifurcation diagrams of four sampled cells within the decreasing ring experiment. The original number of cells is 2500 (25 cm). von Mises distribution parameters: μ = 1.98, κ = 10.
Figure 11. Space-time diagram comparison. The x-axis indicates the simulation time (in ms), the y-axis the length (in mm). Colormaps refer to the voltage variable u.
Share and Cite

MDPI and ACS Style

Treml, L.M.; Bartocci, E.; Gizzi, A. Modeling and Analysis of Cardiac Hybrid Cellular Automata via GPU-Accelerated Monte Carlo Simulation. Mathematics 2021, 9, 164. https://doi.org/10.3390/math9020164
