Brain Connectivity Dynamics and Mittag–Leffler Synchronization in Asymmetric Complex Networks for a Class of Coupled Nonlinear Fractional-Order Memristive Neural Network System with Coupling Boundary Conditions

INSA Rennes, CNRS, IRMAR-UMR 6625, F-35000 Rennes, France
Axioms 2024, 13(7), 440; https://doi.org/10.3390/axioms13070440
Submission received: 4 April 2024 / Revised: 28 May 2024 / Accepted: 24 June 2024 / Published: 28 June 2024
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)

Abstract
This paper investigates the long-time behavior of fractional-order complex memristive neural networks, in order to analyze the synchronization of both anatomical and functional brain networks, to predict therapy response, and to ensure safe diagnosis and treatment of neurological disorders (such as epilepsy, Alzheimer's disease, or Parkinson's disease). A new mathematical brain connectivity model, taking into account the memory characteristics of neurons and their past history, the heterogeneity of brain tissue, and the local anisotropy of cell diffusion, is proposed. This developed model, which depends on topology, interactions, and local dynamics, is a set of coupled nonlinear Caputo fractional reaction–diffusion equations, in the shape of a fractional-order ODE coupled with a set of time fractional-order PDEs, interacting via an asymmetric complex network. In order to introduce into the model the connection structure between neurons (or brain regions), graph theory, in which the discrete Laplacian matrix of the communication graph plays a fundamental role, is considered. The existence of an absorbing set in state spaces for the system is discussed, and then the dissipative dynamics result, with absorbing sets, is proved. Finally, some Mittag–Leffler synchronization results are established for this complex memristive neural network under certain threshold values of the coupling forces, memristive weight coefficients, and diffusion coefficients.

1. Introduction and Mathematical Setting of the Problem

Complex networks of dynamical systems appear naturally in real-world systems such as biology, physics (e.g., plasma, laser cooling), intelligent grid technologies (e.g., power grid networks, communications networks), neuromorphic engineering, social networks, as well as neuronal networks. Of particular interest are complex memristive neural networks in the brain network. The macroscopic anatomical brain network, which is composed of a large number of neurons ($\approx 10^{11}$) and their connections ($\approx 10^{15}$), is a complex large-scale network system that exhibits various subsystems at different spatial scales (micro and macro) and timescales, yet is capable of integrated real-time performance. These subsystems are huge networks of neurons, which are connected with each other and are modified based on the activation of neurons. The communication, via a combination of electrical and chemical signals between neurons, occurs at small gaps called synapses (see, e.g., [1]). This brain network structure implements mechanisms of regulation at different scales (from microscopic to macroscopic scales). On the microscopic level, neural plasticity regulates the formation and behavior of synaptic connections between individual neurons in response to new information. The association matrix of pair-wise synaptic weights between neurons will be of the order of $10^{100}$. In view of the scale of the network and the specific difficulties of accurately measuring all synaptic connections, an accurate description of a complete network diagram of brain connections is an ongoing challenge and a task of great difficulty (see, e.g., [2]).
However, the rise of functional neuroimaging and related neuroimaging techniques, such as Electroencephalography (EEG), Magnetoencephalography (MEG), functional Magnetic Resonance Imaging (fMRI), diffusion-weighted Magnetic Resonance Imaging (dMRI), and Transcranial Magnetic Stimulation (TMS), has led to good mapping and a deeper understanding of the large-scale network organization of the human brain. The fMRI technique can be used to determine which regions of the brain are in charge of crucial functions, and to assess the consequences of stroke, malignant brain tumors, abscesses, other structural alterations, or direct brain therapy. For the latter, see, e.g., [3] for the use of online sensor information to automatically adjust, in real time, brain tumor drugs through MRI-compatible catheters, via nonlinear model-based control techniques and a mathematical model describing tumor–normal cell interaction dynamics.
The EEG and MEG signals measure, respectively, electrical activity and magnetic fields induced by the electrical activity, from various brain regions with a high temporal resolution (but with limited spatial coverage). However, fMRI measures whole-brain activity indirectly (by detecting changes associated with blood flow for each network, over a specified interval of time) with a high spatial resolution (but with limited temporal resolution). Consequently, in order to improve both spatial and temporal resolution, EEG and MEG signals are often associated with fMRI (see, e.g., [4]).
This whole-brain connectomics approach, which relies on macroscopic measurements of structural and functional connectivity, has notably favored a major development in the identification and analysis of effects of brain injury or neurodegenerative and psychiatric diseases on brain systems related to cognition and behavior, for better diagnosis and treatments (see, e.g., [5,6,7]).
The dynamic interaction between neuronal networks and systems, which takes into account the dynamic flows of information that pass through different interconnected, widely distributed, and functionally specialized brain regions, is crucial for normal brain function (see, e.g., [8,9]). Measuring electrical activity (from, e.g., dMRI connectivity, magnetoencephalography or electroencephalography recordings) has allowed researchers to point out the existence of oscillations characterized by their frequency, amplitude, and phase (see, e.g., [8,10,11,12]). This phenomenon is considered to result from oscillatory neuronal (local) synchronization and long-range cortical synchronization (which is linked to many cognitive and memory functions). Moreover, several studies have established that the activity pattern of cortical neurons depends on the history of electrical activities (e.g., caused by the long-range interaction of ionic conductances), as a result of changes in synaptic strength or shape of synaptic plasticity (see, e.g., [13,14,15]). In addition, the diffusion terms play an important role in dynamics and stability of neural networks when, e.g., electrons are moving in asymmetric electromagnetic fields (see, e.g., [16,17,18]). So, diffusion phenomena cannot be neglected. Understanding mechanisms behind these synchronized oscillations and their alterations is very important for better diagnosis and treatments of neurological disorders. In particular, the relationship between stability of synchronization and graph theory is established. This relationship characterizes the impact of network topology on the disturbances. Disturbances or alteration of such synchronized networks, taking into account memory characteristics of neurons and their past history, play an important role in several brain disorders, such as neuropsychiatric diseases, epileptic seizures, Alzheimer’s disease, and Parkinson’s disease (see, e.g., [19,20,21,22,23] and references therein).
Moreover, noninvasive brain stimulation is attracting considerable attention due to its potential for safe and effective modulation of brain network dynamics. In particular, in the context of human cognition and behavior, the targeting of cortical oscillations by brain stimulation with electromagnetic waveforms has been widely used, whether it is to ensure a safe treatment, to improve quality of life, or to understand and explain the contribution of different brain regions to various human cognitive brain functions (see, e.g., [24] and references therein).
It is well known that the dynamic behavior of neurons depends on the architecture of the network and on external perturbations such as electromagnetic radiation or stimulation by an electric field. A memristor, with plasticity and bionic characteristics, is a nonvolatile electrical component (i.e., retains memory without power) that limits or regulates the flow of electrical current and is capable of describing the impacts of electromagnetic induction (radiation) in neurons, by coupling membrane potential and magnetic flux. Moreover, electromagnetic induction currents in the nervous system can be emulated by memristive autapses (an autapse is a synaptic coupling of a neuron’s axon to its own dendrite and soma), which play a critical role in regulating physiological functions (see Figure 1).
The concept of a memristor, which is a passive and nonlinear circuit element, was first introduced by Chua [25], who argued that the memristor should be considered as basic as the three classical passive electronic elements: the resistor, the capacitor, and the inductor. In the resistor, there is a relation between the instantaneous value of voltage and the instantaneous value of current. Unlike the resistance of a resistor, the memristance depends on the amount of charge that has passed through the device; consequently, the memristor can remember its past dynamic history. It is a natural nonvolatile memory.
Memristor-based neural network models can be divided into (see, e.g., [26,27]) (a) memristive synaptic (autapse or synapse) models, in which a memristor is used as a variable connection weight for neurons; (b) neural network models affected by electromagnetic radiation, in which a memristor is used to simulate the electromagnetic induction effects. The memristive synaptic model uses the bionic properties of the memristor to realistically mimic biological synaptic functions, while the neural network model under electromagnetic radiation employs a magnetic flux-controlled memristor to imitate the electromagnetic induction effect on membrane potential.
Recently, considerable efforts have been made to mimic significant neural behaviors of the human brain by means of artificial neural networks. Due to its attractive properties as a new type of information storage and processing device, the memristor can be used to implement synaptic weights in artificial neural networks. It can also be an ideal component for simulating neural synapses in brain networks due to its nano-scale size, fast switching, passive nature, and remarkable memory characteristics (see, e.g., [28,29,30] and references therein). In recent years, the dynamical characteristics of memristor-based neural networks have been extensively analyzed and, in this context, several studies have been reported as regards chaos, passivity, stability, and synchronization (see, e.g., [31,32,33,34,35,36,37] and references therein).
Currently, fractional calculus is particularly efficient, when compared to classical integer-order models, for describing the long memory and hereditary behaviors of many complex systems. Fractional-order differential systems have been studied by many researchers in recent years; the genesis of fractional-order derivatives dates back to Leibniz. Since the beginning of the 19th century, many authors have addressed this problem and have devoted their attention to developing several fractional operators to represent nonlinear and nonlocal phenomena (such as Riemann, Liouville, Hadamard, Littlewood, Hilfer, and Caputo, among others). Fractional integrals and fractional derivatives have proved to be useful in many real-world applications; in particular, they appear naturally in a wide variety of biological and physical models (see, for instance, Refs. [38,39,40,41,42,43] and references therein). The Riemann–Liouville and Caputo derivative operators are the most common and widely used ways of defining fractional calculus. When solving a fractional differential equation, we use the Caputo fractional operator [44], for which, unlike the Riemann–Liouville operator, there is no need to define fractional-order initial conditions. One of the most important characteristics of fractional operators is their nonlocal nature, which accounts for the infinite memory and hereditary properties of the underlying phenomena. Recently, in [39], we proposed and analyzed a mathematical model of the electrical activity in the heart through the torso, which takes into account cardiac memory phenomena (this phenomenon, also known as the Chatterjee phenomenon, can cause dynamical instabilities (such as alternans) and give rise to highly complex behavior including oscillations and chaos).
We have shown numerically the interest of modeling memory through fractional-order derivatives, and that, with this model, we are able to analyze the influence of memory on some electrical properties, such as the duration of action potentials (APD), action potential morphology, and spontaneous activity.
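Because the Caputo operator integrates the whole past history of the state, any discretization must store the full trajectory. As an illustrative aside (not part of the paper's analysis; the function name and grid are hypothetical), the forward Caputo derivative of order $\alpha \in\, ]0,1[$ can be approximated with the classical L1 finite-difference scheme:

```python
import math

def caputo_l1(f_vals, dt, alpha):
    """Approximate the forward Caputo derivative of order alpha in ]0,1[
    at the grid points t_n = n*dt with the classical L1 scheme:
        D^alpha f(t_n) ~ dt**(-alpha)/Gamma(2-alpha)
                         * sum_{k=0}^{n-1} b_k * (f[n-k] - f[n-k-1]),
    with weights b_k = (k+1)**(1-alpha) - k**(1-alpha)."""
    g = math.gamma(2.0 - alpha)
    out = [0.0] * len(f_vals)
    for n in range(1, len(f_vals)):
        acc = 0.0
        for k in range(n):
            b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
            acc += b_k * (f_vals[n - k] - f_vals[n - k - 1])
        out[n] = acc * dt ** (-alpha) / g
    return out

# For f(t) = t, the exact Caputo derivative is t**(1-alpha)/Gamma(2-alpha);
# the L1 scheme reproduces it exactly on piecewise-linear data.
dt, alpha = 1e-3, 0.5
ts = [i * dt for i in range(1001)]
approx = caputo_l1(ts, dt, alpha)
exact = ts[-1] ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

Note how the inner sum runs over the entire past of the trajectory: this growing memory is exactly the history dependence that the model exploits.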
On the other hand, synchronization of neural networks plays a significant role in the activities of different brain regions. In addition, compared with the concept of stability, the synchronization mechanism (with possible control), for two or more apparently independent systems, is receiving increasing attention in neuroscience research and medical science, because of its practicality. The study of dynamical systems and the synchronization of biological neural networks, with different types of coupling, have attracted a large amount of theoretical research, and consequently, the literature in this field is very extensive, especially in the context of integer-order differential systems. Since this topic has been receiving a significant amount of attention, it is not our intention to comment in detail here on all the works cited. For a general presentation of the synchronization phenomenon and its mathematical analysis, we can cite, e.g., [45,46,47,48]. Concerning problems associated with integer-order partial differential equations, and the various methods and techniques involved, we can refer to, e.g., [49,50,51,52,53,54]. Finally, for problems associated with fractional-order partial differential equations, various methods have recently been studied in the literature, such as, for example, [55], which considers the synchronization control of a neural network's equation via pinning control, Ref. [56], which investigates the dissipativity and synchronization control of memristive neural networks via a feedback controller, and [57], which explores the stability and pinning synchronization of a memristive neural network's equation.
Motivated by the above discussions, to take into account noninvasive brain stimulation and the effect of memory in the propagation of brain waves, together with other critical brain material parameters, we propose and analyze a new mathematical model of fractional-order memristor-based neural networks with multidimensional reaction–diffusion terms, by combining memristor with fractional-order neural networks. The models with time fractional-order derivatives integrate all the past activities of neurons and are capable of capturing the long-term history-dependent functional activity in a network. The diffusion can be seen as a local connection (at a lower scale), like, e.g., in [58], whereas the coupling topology relates to dynamical properties of network dynamical systems, corresponding to physical or functional connections at the upper scale.
Thus, the derived brain neural network model is precisely the system (1)–(5) (see further), which is a nonlinear coupled reaction–diffusion system in the shape of a set of fractional-order differential equations coupled with a set of fractional-order partial differential equations (interacting via a complex network).
In the present work, we are interested in the synchronization phenomenon in a whole network of diffusively nonlinear coupled systems which combines past and present interactions. First, we will impose initial data on a closed and bounded spatial domain, and analyze some complex dynamical properties of the long-time behavior of the derived fractional-order large-scale neural network model. After a rigorous investigation of dissipative dynamics, different synchronization problems of the developed complex dynamical networks are studied.

2. Formulation of Memristive-Based Neural Network Problem

We shall consider a network of $m$ coupled neurons denoted by $\mathcal{N}_G = \{N_i : i = 1, 2, \ldots, m\}$, where the network size $m \ge 2$ is a positive integer. Our model of a memristor-autapse-based neural network with external electromagnetic radiation can be depicted as in Figure 2.
In this paper, motivated by the above discussions (see Section 1), we introduce the following new fractional-order memristive neural network of coupled neurons, modeled by the following Caputo fractional system, including the magnetic flux coupling (on each neuron $N_i$ of the network, for $i = 1, \ldots, m$):
$$
\begin{aligned}
c_\alpha\,\partial^{\alpha}_{0^+}\phi_i - \operatorname{div}\!\big(K_f(x)\nabla\phi_i\big) &= F(x,\phi_i) + \sigma u_i + \frac{\xi_h}{m}\sum_{j=1}^{m} a_{ij}(x,t)\,H_j(\phi_j) + f_{ex}(x,t)\\
&\quad - \kappa\,\phi_i\,\Psi(w_i) + \frac{\xi_f}{m}\,G_i(\phi_1,\ldots,\phi_m), && \text{in } Q,\\
c_\beta\,\partial^{\alpha}_{0^+}u_i &= a - b u_i - c_2\phi_i^2 + c_1\phi_i, && \text{in } Q,\\
c_\gamma\,\partial^{\alpha}_{0^+}w_i - \operatorname{div}\!\big(K_e(x)\nabla w_i\big) &= \nu_1\phi_i - \nu_2 w_i + g_{ex}(x,t) + \frac{\xi_e}{m}\,G_i(w_1,\ldots,w_m), && \text{in } Q,
\end{aligned}
$$
where $Q = \Omega\,\times\,]0,+\infty[$ and the spatial domain $\Omega$ is a bounded open subset whose boundary, denoted by $\Gamma = \partial\Omega$, is locally Lipschitz continuous. Here, $\partial^{\alpha}_{0^+}$ denotes the forward Caputo fractional derivative, with $\alpha$ a real value in $]0,1]$. The state variables $\phi_i$, $u_i$, and $w_i$ describe, respectively, the membrane potential, the ionic variable, and the magnetic flux across the membrane of the $i$-th neuron $N_i$, for $i = 1,\ldots,m$. The term $F$ is the nonlinear activation operator. The functions $f_{ex}$ and $g_{ex}$ represent the external forcing current and the external field electromagnetic radiation, respectively. The parameters $\xi_h \ge 0$, $\xi_f \ge 0$, $\xi_e \ge 0$ are the coupling strength constants, with $\xi_e\,\xi_f \neq 0$. The values $a$ and $\sigma$ can be any nonzero constants, and all the parameters $b$, $c_1$, $c_2$, $\kappa$, $\nu_1$, and $\nu_2$ can be any positive constants.
The fractional parameter $c_\alpha$ is $c_\alpha = \kappa_s C_\alpha > 0$, where $C_\alpha$ is the membrane pseudo-capacitance per unit area and $\kappa_s$ is the surface area-to-volume ratio (homogenization parameter). The membrane is assumed to be passive, so the pseudo-capacitance $C_\alpha$ can be assumed to be constant. Moreover, since the electrical restitution curve (ERC) is affected by the action potential history through ionic memory, we have represented the memory via $u$ (respectively, via $w$) by a time fractional-order dynamic term $c_\beta\,\partial^{\alpha}_{0^+}u$ (respectively, by $c_\gamma\,\partial^{\alpha}_{0^+}w$), where the positive parameters $c_\beta$ and $c_\gamma$ are assumed to be constants. The fractional parameters $c_\alpha$, $c_\beta$, and $c_\gamma$ depend on the fractional order $\alpha$.
Remark 1.
According to the expression of the Caputo derivative, we can obtain that the unit for the dimension of $c_\alpha/C$ is $s^{\alpha-1}$, with $s$ the unit for the dimension of time and $C$ the capacitance (i.e., $c_\alpha$ for $\alpha = 1$); this result remains valid for $c_\beta$ and $c_\gamma$.
The functions $a_{ij}$, for $i,j = 1,\ldots,m$, represent the memristive synaptic connection weight coefficients (an example with three neurons is depicted in Figure 3).
The term $\sum_{j=1}^{m} a_{ij}(x,t)\,H_j(\phi_j)$ could be regarded as an emulation of a neurological disease, e.g., epileptic seizures, in which the nonlinear operators $H_j$ are some activation functions. We assume that (for $i,j = 1,\ldots,m$)
$$a_{ij} \in L^\infty\big([0,+\infty)\times\overline{\Omega}\big) \quad \text{with} \quad \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}, \ \text{a.e. in } [0,+\infty)\times\overline{\Omega},$$
where $(\overline{a}_{ij}, \underline{a}_{ij}) \in \mathbb{R}^2$, and we denote
$$a_{s_{ij}} = \max\big(|\overline{a}_{ij}|, |\underline{a}_{ij}|\big) \quad \text{and} \quad a_{d_{ij}} = \overline{a}_{ij} - \underline{a}_{ij}.$$
The operator $\Psi = q_1\Psi_1 + q_2\Psi_2$ is the memory conductance (memductance) of the flux-controlled memristor, where $\Psi_1(w) = \delta_1 + \eta_{11}w + \eta_{12}w^2$ and $\Psi_2(w) = \delta_2 + \zeta_2\tanh(w)$; all the parameters $\delta_1$, $\delta_2$, $\eta_{11}$, $\eta_{12}$, $\zeta_2$ can be any positive constants, and $q_1 + q_2 \neq 0$, with $q_k \ge 0$, for $k = 1, 2$. The magnetic flux coupling $\kappa\phi\Psi(w)$ could be regarded as an additive induction current on the membrane and represents the dynamic effect of electromagnetic induction on neurological diseases (examples with three neurons are depicted in Figure 4). For simplicity, we write $\Psi(w) = \delta + \eta_1 w + \eta_2 w^2 + \zeta\tanh(w)$, where $\delta = q_1\delta_1 + q_2\delta_2$, $\eta_1 = q_1\eta_{11}$, $\eta_2 = q_1\eta_{12}$, and $\zeta = q_2\zeta_2$.
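As a side illustration (the parameter values below are hypothetical, not taken from the paper), the memductance $\Psi$ and the uniform lower bound that is exploited later in Lemma 3(i) can be sketched as follows:

```python
import math

# Illustrative positive parameters (hypothetical values, not from the paper).
delta, eta1, eta2, zeta = 0.5, 0.4, 0.2, 0.1

def memductance(w):
    """Memductance Psi(w) = delta + eta1*w + eta2*w^2 + zeta*tanh(w) of the
    flux-controlled memristor; the induction current on the membrane is
    then i_mem = -kappa * phi * Psi(w)."""
    return delta + eta1 * w + eta2 * w ** 2 + zeta * math.tanh(w)

# Completing the square at theta0 = eta1/(2*eta2) and using |tanh(w)| <= 1
# gives the uniform bound -Psi(w) <= eta2*theta0**2 + zeta - delta.
theta0 = eta1 / (2.0 * eta2)
bound = eta2 * theta0 ** 2 + zeta - delta
```

The bound holds for every real flux value $w$, which is what makes the induction current controllable in the energy estimates.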
The operator $G_i$, which contains the information of the network topology, is defined by (for $i = 1,\ldots,m$)
$$G_i(v_1,\ldots,v_m) = \sum_{k=1,\,k\neq i}^{m} G_{ik}\,(v_k - v_i),$$
with the coupling (or connectivity) matrix $G = (G_{ij})_{1\le i,j\le m}$ ($G$ is called the Laplacian matrix of the graph), in which $G_{ij}$ are the coefficients of connection from the $i$-th to the $j$-th neuron, satisfying the assumption
(HG) 
$G_{ik} \ge 0$ for $i \neq k$ and $G_{ii} = -\sum_{k=1,\,k\neq i}^{m} G_{ik}$, i.e., the matrix $G$ has vanishing row sums and non-negative off-diagonal elements.
Then, $G_i(v_1,\ldots,v_m)$ can be written as $G_i(v_1,\ldots,v_m) = \sum_{k=1}^{m} G_{ik}\,v_k$.
In graph theory, the Laplacian matrix $G$, also called the graph Laplacian or Kirchhoff matrix, defines the graph topology, with $m$ the number of vertices/nodes $N_i$ in the graph $\mathcal{G}(\mathcal{N}_G, \mathcal{E}_G)$ (with $\mathcal{N}_G$ the set of vertices and $\mathcal{E}_G$ the set of edges/links). The diagonal matrix $D = \mathrm{diag}(-G_{11},\ldots,-G_{mm})$ is called the degree matrix of the graph, and $A := D + G$ is called the adjacency matrix of the graph.
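As an illustrative sketch (the 3-neuron weights below are hypothetical), assumption (HG), the coupling operator $G_i$, and the degree/adjacency relations can be checked numerically:

```python
# A hypothetical 3-neuron network: off-diagonal weights W[i][k] >= 0 for
# i != k; the diagonal of G is fixed so that each row sums to zero, as
# required by assumption (HG).
W = [[0.0, 1.0, 2.0],
     [0.5, 0.0, 1.5],
     [1.0, 3.0, 0.0]]
m = len(W)
G = [[W[i][k] if i != k else -sum(W[i][j] for j in range(m) if j != i)
      for k in range(m)] for i in range(m)]

def coupling(G, v):
    """Coupling operator G_i(v_1,...,v_m) = sum_k G[i][k]*v[k], which equals
    sum_{k != i} G[i][k]*(v[k] - v[i]) under assumption (HG)."""
    m = len(G)
    return [sum(G[i][k] * v[k] for k in range(m)) for i in range(m)]

# Degree matrix D = diag(-G_11, ..., -G_mm) and adjacency matrix A = D + G
# (A then has zero diagonal and non-negative entries).
D = [[-G[i][i] if i == k else 0.0 for k in range(m)] for i in range(m)]
A = [[D[i][k] + G[i][k] for k in range(m)] for i in range(m)]
```

A direct consequence of the zero-row-sum structure is that identical states are uncoupled: $G_i(v,\ldots,v) = 0$, which is exactly the invariance needed for the synchronized manifold.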
The state variable $(\phi_i, u_i, w_i)$, for $i = 1,\ldots,m$, in this network is coupled with the other neurons $\{N_j : j \neq i\}$ in the network through the state equation by $G_i$ and/or through the boundary conditions as follows (fully connected network on the boundary):
$$
\begin{aligned}
K_f\nabla\phi_i\cdot n - \frac{p_f}{m}\sum_{k=1}^{m}(\phi_k - \phi_i) &= 0, && \text{on } \Gamma,\\
K_e\nabla w_i\cdot n - \frac{p_e}{m}\sum_{k=1}^{m}(w_k - w_i) &= 0, && \text{on } \Gamma,
\end{aligned}
$$
where $n$ is the outward normal to $\Gamma$ and $p_f > 0$, $p_e > 0$ are the coupling strength constants on the boundary. The tensors $K_f$ and $K_e$ are the effective diffusion tensors that describe the heterogeneity of brain tissue and the local anisotropy of cell diffusion.
The initial conditions of (1), to be specified, will be denoted by (for $i = 1,\ldots,m$)
$$(\phi_i(0), u_i(0), w_i(0)) = (\phi_{0i}, u_{0i}, w_{0i}), \quad \text{on } \Omega.$$
The rest of the paper is organized as follows. In the next section, we give some preliminary results useful in the sequel. In Section 4, we prove the existence, stability, and uniqueness of weak solutions of the derived model, under some hypotheses on the data and some regularity of the nonlinear operators. An important feature of the uniqueness of the solution is the semiflow physical property of the system: starting the system at time $t_0$, letting it run until time $s = t_0 + \tau$, and then restarting it and letting it run from time $s$ to the final time $T_f$ amounts to running the system directly from time $t_0$ to the final time $T_f$. In Section 5, the existence of an absorbing set in state spaces for the system is discussed, an estimate of the solutions is derived when time is large enough, and then the dissipative dynamics result, with absorbing sets, is proved. In Section 6, synchronization phenomena are discussed and some Mittag–Leffler synchronization criteria for such complex dynamical networks are established in different situations. Precisely, some sufficient conditions for synchronization are obtained first for the complete (or identical) synchronization problem (which refers to the process by which two or more identical dynamical systems adjust their motion in order to converge to the same dynamical state as time approaches infinity) and then for the master–slave synchronization problem, via appropriate pinning feedback controllers and adaptive controllers. In Section 7, conclusions are discussed. In Appendix A, the full proof of well-posedness of the derived system is shown, and in Appendix B, a brief introduction to some definitions and basic results of fractional calculus in the Riemann–Liouville sense and Caputo sense is given.
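The Mittag–Leffler synchronization criteria announced above bound the synchronization error by terms of the form $E_\alpha(-\lambda t^\alpha)$, where $E_\alpha$ is the one-parameter Mittag–Leffler function. As an illustrative aside (the truncation level `n_terms` is an arbitrary choice, adequate only for moderate arguments), $E_\alpha$ can be evaluated by truncating its power series:

```python
import math

def mittag_leffler(alpha, z, n_terms=80):
    """One-parameter Mittag-Leffler function
        E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1),
    evaluated by truncating the power series (fine for moderate |z|;
    asymptotic expansions are needed for large |z|)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(n_terms))
```

For $\alpha = 1$ this recovers the exponential, $E_1(z) = e^z$, so Mittag–Leffler decay generalizes the familiar exponential convergence rate of integer-order synchronization.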

3. Assumptions, Notations, and Some Fundamental Inequalities

Some basic definitions, notations, fundamental inequalities, and preliminary lemmas are introduced and other results are developed.
Let $\mho \subset \mathbb{R}^d$, $d \ge 1$, be an open and bounded set with a smooth boundary $\partial\mho$, and let $\mho_T = \mho \times (0,T)$. We use the standard notation for Sobolev spaces (see [59]), denoting the norm of $W^{q,p}(\mho)$ ($q \in \mathbb{N}$, $p \in [1,\infty]$) by $\|\cdot\|_{W^{q,p}}$. In the special case $p = 2$, we use $H^q(\mho)$ instead of $W^{q,2}(\mho)$. The duality pairing of a Banach space $X$ with its dual space $X'$ is given by $\langle\cdot,\cdot\rangle_{X',X}$. For a Hilbert space $Y$, the inner product is denoted by $(\cdot,\cdot)_Y$, and the inner product in $L^2(\Omega)$ is denoted by $(\cdot,\cdot)$. For any pair of real numbers $r, s \ge 0$, we introduce the Sobolev space $H^{r,s}(\mho_T) = L^2(0,T;H^r(\mho)) \cap H^s(0,T;L^2(\mho))$, which is a Hilbert space normed by $\big(\|v\|^2_{L^2(0,T;H^r(\mho))} + \|v\|^2_{H^s(0,T;L^2(\mho))}\big)^{1/2}$, where $H^s(0,T;L^2(\mho))$ denotes the Sobolev space of order $s$ of functions defined on $(0,T)$ and taking values in $L^2(\mho)$, defined by interpolation as $H^s(0,T;L^2(\mho)) = [H^q(0,T;L^2(\mho)), L^2(\mho_T)]_\theta$, for $s = (1-\theta)q$ with $\theta \in (0,1)$ and $q \in \mathbb{N}$, and $H^q(0,T;L^2(\mho)) = \big\{v \in L^2(\mho_T) \ \big|\ \frac{\partial^j v}{\partial t^j} \in L^2(\mho_T), \text{ for } 1 \le j \le q\big\}$.
We now recall the following Poincaré–Steklov inequalities (see, e.g., [60]):
Lemma 1.
(Poincaré–Steklov inequality) Assume that $\mho$ is a bounded connected open subset of $\mathbb{R}^d$ with a sufficiently regular boundary $\partial\mho$ (e.g., a Lipschitz boundary). Then,
$$C^2_{PS}(\zeta)\,\|v\|^2_{L^2(\mho)} \le \ell^2\left(\|\nabla v\|^2_{L^2(\mho)} + \zeta\,\|v\|^2_{L^2(\partial\mho)}\right), \quad \forall v \in H^1(\mho),$$
 where $\ell := \mathrm{diam}(\mho) > 0$ and $C^2_{PS} > 0$ is the smallest eigenvalue of the Laplacian supplemented with the Robin boundary condition $\zeta v + n\cdot\nabla v = 0$ on $\partial\mho$ (with $\zeta > 0$ any positive constant).
Lemma 2.
(Extended Poincaré–Steklov inequality). Assume that $q \in [1,\infty)$ and that $\mho$ is a bounded connected open subset of $\mathbb{R}^d$ with a sufficiently regular boundary (e.g., a Lipschitz boundary). Let $R$ be a bounded linear form on $W^{1,q}(\mho)$ whose restriction to constant functions is not zero. Then, there exists a Poincaré–Steklov constant $\hat{C}_{PS;q} > 0$ such that $\hat{C}_{PS;q}\,\|v\|_{L^q(\mho)} \le \|\nabla v\|_{L^q(\mho)} + |R(v)|$, $\forall v \in W^{1,q}(\mho)$.
Remark 2.
Let $q$ be a nonnegative integer and $\mho$ be a bounded connected open subset of $\mathbb{R}^d$ with a sufficiently regular boundary $\partial\mho$. We have the following results (see, e.g., [59]):
(i) $H^q(\mho) \subset L^p(\mho)$, $\forall p \in \left[1, \frac{2d}{d-2q}\right]$, with continuous embedding (with the exception that if $2q = d$, then $p \in [1,+\infty[$, and if $2q > d$, then $p \in [1,+\infty]$).
(ii) (Gagliardo–Nirenberg inequalities) There exists $C_{\mho,q,\theta} > 0$ such that
$$\|v\|_{L^p(\mho)} \le C_{\mho,q,\theta}\left(\|\nabla v\|^{\theta}_{L^2(\mho)}\,\|v\|^{1-\theta}_{L^q(\mho)} + \|v\|_{L^q(\mho)}\right), \quad \forall v \in H^1(\mho) \cap L^q(\mho),$$
 where $0 \le \theta \le 1$, $q \ge 1$, and $p > 0$ are such that $\frac{d}{p} = \theta\left(\frac{d}{2} - 1\right) + (1-\theta)\frac{d}{q}$ (with the exception that if $1 - \frac{d}{2}$ is a nonnegative integer, then $0 \le \theta < 1$).
Definition 1
(see, e.g., [61]). A real-valued function $H$ defined on $D \times \mathbb{R}^q$, $q \ge 1$, is a Carathéodory function iff $H(\cdot\,; v)$ is measurable for all $v \in \mathbb{R}^q$ and $H(y;\,\cdot)$ is continuous for almost all $y \in D$.
Our study involves the following fundamental inequalities, which are repeated here for review:
  • (i) Hölder's inequality: $\displaystyle\int_D \prod_{i=1}^{k} |f_i|\,dx \le \prod_{i=1}^{k}\|f_i\|_{L^{q_i}(D)}$, where $\displaystyle\sum_{1\le i\le k}\frac{1}{q_i} = 1$.
  • (ii) Young's inequality ($a, b > 0$ and $\epsilon > 0$): $ab \le \dfrac{\epsilon}{p}a^p + \dfrac{\epsilon^{-q/p}}{q}b^q$, for $p, q \in\, ]1,+\infty[$ and $\dfrac{1}{p} + \dfrac{1}{q} = 1$.
  • (iii) Minkowski's integral inequality (for $p \in\, ]1,+\infty[$ and $t > 0$):
    $$\left(\int_\Omega \left|\int_0^t f(x,s)\,ds\right|^p dx\right)^{1/p} \le \int_0^t \left(\int_\Omega |f(x,s)|^p\,dx\right)^{1/p} ds.$$
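As a quick sanity check (purely illustrative, with random data), the $\epsilon$-form of Young's inequality stated in (ii) can be verified numerically:

```python
import random

def young_ok(a, b, p, eps):
    """Check a*b <= (eps/p)*a**p + (eps**(-q/p)/q)*b**q with 1/p + 1/q = 1,
    allowing a tiny relative tolerance for floating-point rounding."""
    q = p / (p - 1.0)
    rhs = (eps / p) * a ** p + (eps ** (-q / p) / q) * b ** q
    return a * b <= rhs * (1.0 + 1e-12) + 1e-12

random.seed(0)
checks = [young_ok(random.uniform(0.01, 10.0), random.uniform(0.01, 10.0),
                   random.uniform(1.1, 5.0), random.uniform(0.1, 10.0))
          for _ in range(1000)]
```

The free parameter $\epsilon$ is what allows the absorption arguments below: a cross term is split so that one piece is swallowed by a dissipative term and the other is pushed into a constant.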
Finally, we denote by $|\mho|$ the Lebesgue measure of $\mho$, by $\mathrm{sgn}(x)$ the sign of the scalar $x$, and by $\mathcal{L}(A;B)$ the set of linear and continuous operators from a vector space $A$ into a vector space $B$. The operator $R^*$ stands for the adjoint of a linear operator $R$ between Banach spaces.
From now on, we assume that the following assumptions hold for the nonlinear operators, the coupling matrix, and the tensor functions appearing in our model on $\Omega$.
First, we introduce the following spaces: $\mathbb{H} = L^2(\Omega)$ and $\mathbb{V} = H^1(\Omega)$ (endowed with their usual norms). We will denote by $\mathbb{V}'$ the dual of $\mathbb{V}$. We have the following continuous embeddings ($p'$ is such that $\frac{1}{p} + \frac{1}{p'} = 1$):
$$\mathbb{V} \subset \mathbb{H} \subset \mathbb{V}' \quad \text{and} \quad \mathbb{V} \subset L^p(\Omega) \subset \mathbb{H} \subset L^{p'}(\Omega) \subset \mathbb{V}',$$
and the injection $\mathbb{V} \subset \mathbb{H}$ is compact, where $p \ge 2$ if $d = 2$ and $2 \le p \le 6$ if $d = 3$.
For tensor functions, we assume that the following assumptions hold.
(H1) 
We assume that the conductivity tensor functions $K_\theta \in W^{1,\infty}(\overline{\Omega})$, $\theta \in \{e, f\}$, are symmetric, positive definite matrix functions and that they are uniformly elliptic, i.e., there exist constants $0 < d_f \le r_f$ and $0 < d_e \le r_e$ such that ($\forall v \in \mathbb{R}^n$)
$$d_f|v|^2 \le v^T K_f v \le r_f|v|^2 \quad \text{and} \quad d_e|v|^2 \le v^T K_e v \le r_e|v|^2 \quad \text{in } \overline{\Omega}.$$
The operators $F$ and $(H_i)_{i=1,\ldots,m}$, which describe the behavior of the system, are supposed to satisfy the following assumptions.
(H2) 
The operators $F$ and $(H_i)_{i=1,\ldots,m}$ are Carathéodory functions from $\Omega \times \mathbb{R}$ into $\mathbb{R}$. Furthermore, the following requirements hold.
   $(H2)_1$
The nonlinear scalar activation function $F \in C^1(\Omega \times \mathbb{R})$, which can be taken as $F = F_1 + F_2$ with $F_1$ decreasing in the second variable, satisfies ($\forall (x,v) \in \Omega \times \mathbb{R}$):
 (i) $F(x,v)\,v \le -\lambda v^4 + \rho_0(x)$;
 (ii) $|F(x,v)| \le \alpha_1 |v|^3 + \rho_1(x)$;
 (iii) $\dfrac{\partial F}{\partial v}(x,v) \le -\alpha_2 v^2 + \rho_2(x)$ and $\left|\dfrac{\partial F}{\partial v}(x,v)\right| \le \alpha_3 v^2 + \beta$;
 (iv) $|F_2(x,v)| \le \alpha_4 v^2 + \rho_3(x)$;
 (v) $E_F(x,v) \le -\alpha_5 v^4 + \rho_4(x)$ and $|E_F(x,v)| \le \alpha_6 v^4 + \alpha_7 v^2 + \alpha_8$.
Here, $\lambda$, $\beta$, and $\alpha_i$, for $i = 1,\ldots,8$, are positive constants, $\rho_i \in L^2(\Omega)$, for $i = 0,\ldots,4$, are given functions, and $E_F$ is the primitive function of $F_1$.
   $(H2)_2$
The nonlinear scalar activation functions $H_i$ are bounded with $H_i(0) = 0$ and satisfy an $l_i$-Lipschitz condition, i.e.,
 (vi) $|H_i(u)| \le M_i$ and $|H_i(u) - H_i(v)| \le l_i\,|u - v|$, $\forall (u,v) \in \mathbb{R}^2$, with $l_i > 0$ and $M_i > 0$.
For the operators $(G_i)_{i=1,\ldots,m}$, which are defined from the coupling matrix $G$, we first introduce the following notations: the matrix $S = (\epsilon_{ij})_{1\le i,j\le m}$, where $\epsilon_{ij} = \epsilon_{ji} = \frac{1}{2}(G_{ij} + G_{ji})$, and the matrix $A = (\delta_{ij})_{1\le i,j\le m}$, where $\delta_{ij} = -\delta_{ji} = \frac{1}{2}(G_{ij} - G_{ji})$, for $i \neq j$. Then, the matrix $S$ is symmetric, the matrix $A$ is antisymmetric, and $G$ can be represented uniquely as $G = S + A$. It is evident that both $S$ and $A$ have zero row sums (i.e., $\sum_{k=1}^m \epsilon_{jk} = 0$ and $\sum_{k=1}^m \delta_{jk} = 0$) and that $2\delta_{ii} = \sum_{k=1}^m G_{ki}$. Then, we can now derive the following two lemmas.
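The decomposition $G = S + A$, the zero row sums of both parts, and the identity $2\delta_{ii} = \sum_k G_{ki}$ can be checked numerically on an illustrative zero-row-sum matrix (the weights below are hypothetical):

```python
m = 3
# Illustrative asymmetric Laplacian-type matrix with zero row sums.
G = [[-3.0, 1.0, 2.0],
     [0.5, -2.0, 1.5],
     [1.0, 3.0, -4.0]]

# Off-diagonal parts: eps_ij = (G_ij + G_ji)/2 and dlt_ij = (G_ij - G_ji)/2;
# the diagonals are then fixed so that S and A both have zero row sums.
eps = [[0.5 * (G[i][j] + G[j][i]) if i != j else 0.0
        for j in range(m)] for i in range(m)]
dlt = [[0.5 * (G[i][j] - G[j][i]) if i != j else 0.0
        for j in range(m)] for i in range(m)]
for i in range(m):
    eps[i][i] = -sum(eps[i][j] for j in range(m) if j != i)
    dlt[i][i] = -sum(dlt[i][j] for j in range(m) if j != i)
```

With this convention, $\delta_{ii}$ picks up the column defect of $G$, which is precisely the quantity that measures the asymmetry of the network in the lemmas below.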
Lemma 3.
For $(\phi, w, u)$ in $L^4(\Omega)\times L^2(\Omega)\times L^2(\Omega)$ and $(f_{ex}, g_{ex}, \rho_0) \in (L^2(\Omega))^3$, we have the following inequalities:
$$\text{(i)}\quad -\kappa\int_\Omega \phi^2\big(\delta + \eta_1 w + \eta_2 w^2 + \zeta\tanh(w)\big)\,dx \le \theta_w\,\|\phi\|^2_{L^2(\Omega)},$$
$$\text{(ii)}\quad \int_\Omega F(x,\phi)\phi\,dx + \sigma\int_\Omega u\phi\,dx + \int_\Omega f_{ex}\phi\,dx \le -\frac{\lambda}{2}\int_\Omega \phi^4\,dx + \frac{\upsilon}{2}\|u\|^2_{L^2(\Omega)} + \frac{1}{2\lambda}\|f_{ex}\|^2_{L^2(\Omega)} + |\Omega|\left(\|\rho_0\|_{L^2(\Omega)} + \frac{1}{2\lambda}\left(\frac{\sigma^2}{2\upsilon} + \frac{\lambda}{2}\right)^2\right),$$
$$\text{(iii)}\quad c_1\int_\Omega \phi u\,dx - c_2\int_\Omega \phi^2 u\,dx + a\int_\Omega u\,dx - b\|u\|^2_{L^2(\Omega)} \le -\frac{5b}{8}\|u\|^2_{L^2(\Omega)} + \frac{2c_1^2}{b}\|\phi\|^2_{L^2(\Omega)} + \frac{2a^2|\Omega|}{b} + \frac{2c_2^2}{b}\|\phi\|^4_{L^4(\Omega)},$$
 where $\theta_0 = \frac{\eta_1}{2\eta_2}$, $\theta_w = \kappa(\eta_2\theta_0^2 + \zeta - \delta)$, and $\upsilon > 0$ ($\upsilon$ is any positive constant, to be chosen appropriately).
Proof. 
Since $|\tanh(w)| \le 1$, we have (according to assumption $(H2)$)
$$-\kappa\int_\Omega \phi^2\big(\delta + \eta_1 w + \eta_2 w^2 + \zeta\tanh(w)\big)\,dx \le \kappa(\zeta-\delta)\int_\Omega \phi^2\,dx - \kappa\eta_2\int_\Omega \phi^2\big((w+\theta_0)^2 - \theta_0^2\big)\,dx \le \theta_w\,\|\phi\|^2_{L^2(\Omega)},$$
$$
\begin{aligned}
\int_\Omega F(x,\phi)\phi\,dx + \sigma\int_\Omega u\phi\,dx + \int_\Omega f_{ex}\phi\,dx &\le -\lambda\int_\Omega \phi^4\,dx + \sigma\int_\Omega u\phi\,dx + \int_\Omega f_{ex}\phi\,dx + \int_\Omega \rho_0\,dx\\
&\le -\lambda\int_\Omega \phi^4\,dx + \frac{\upsilon}{2}\|u\|^2_{L^2(\Omega)} + \left(\frac{\sigma^2}{2\upsilon} + \frac{\lambda}{2}\right)\|\phi\|^2_{L^2(\Omega)} + \frac{1}{2\lambda}\|f_{ex}\|^2_{L^2(\Omega)} + |\Omega|\,\|\rho_0\|_{L^2(\Omega)}\\
&\le -\frac{\lambda}{2}\int_\Omega \phi^4\,dx + \frac{\upsilon}{2}\|u\|^2_{L^2(\Omega)} + \frac{1}{2\lambda}\|f_{ex}\|^2_{L^2(\Omega)} + |\Omega|\left(\|\rho_0\|_{L^2(\Omega)} + \frac{1}{2\lambda}\left(\frac{\sigma^2}{2\upsilon} + \frac{\lambda}{2}\right)^2\right),
\end{aligned}
$$
and
$$c_1\int_\Omega \phi u\,dx - c_2\int_\Omega \phi^2 u\,dx + a\int_\Omega u\,dx - b\|u\|^2_{L^2(\Omega)} \le -\frac{5b}{8}\|u\|^2_{L^2(\Omega)} + \frac{2c_1^2}{b}\|\phi\|^2_{L^2(\Omega)} + \frac{2a^2|\Omega|}{b} + \frac{2c_2^2}{b}\int_\Omega \phi^4\,dx.$$
This completes the proof. □
The estimates of the previous lemma are needed to derive the a priori estimates used to establish the existence result.
Lemma 4.
For all $(v_i)_{i=1,\dots,m}\in \mathbb{R}^m$, we have the following relations (with $\psi_{ij}=v_j-v_i$ for $1\le i,j\le m$):
\[
(i)\quad \sum_{1\le i\le m}\sum_{1\le k\le m} G_{ik}\,v_k\,v_i = -\frac{1}{2}\sum_{i,k=1;\,k\ne i}^m G_{ik}\,(v_k-v_i)^2 + \sum_{k=1}^m \delta_{kk}\,v_k^2,
\]
\[
(ii)\quad \sum_{1\le i,j\le m}\sum_{1\le k\le m}(G_{jk}-G_{ik})\,v_k\,\psi_{ij} = -m\sum_{1\le k,j\le m}\epsilon_{jk}\,\psi_{jk}^2 + 2\sum_{1\le i,j\le m}\delta_{jj}\,\psi_{ij}^2 = -m\sum_{1\le k,j\le m}\epsilon_{jk}\,\psi_{jk}^2 + m\sum_{1\le i,j\le m}\mu_{ij}\,\psi_{ij}^2, \quad\text{with } \mu_{ij}=\frac{\delta_{ii}+\delta_{jj}}{m}.
\]
Proof. 
Since $\sum_{k=1}^m G_{ik}=0$, we can deduce that
\[
\sum_{i,k=1;\,k\ne i}^m G_{ik}(v_k-v_i)^2 = \sum_{k=1}^m\Big(\sum_{i=1}^m G_{ik}\Big)v_k^2 - 2\sum_{i,k=1}^m G_{ik}\,v_k\,v_i = 2\sum_{k=1}^m \delta_{kk}\,v_k^2 - 2\sum_{i,k=1}^m G_{ik}\,v_k\,v_i
\]
and then the relation (i). For the relation (ii), we have (we proceed in a similar way as in [62])
\[
\begin{aligned}
\sum_{1\le i,j\le m}\sum_{1\le k\le m}(G_{jk}-G_{ik})\,v_k\,\psi_{ij} &= \sum_{1\le i,j\le m}\sum_{1\le k\le m}\big(\epsilon_{jk}\psi_{jk}+\delta_{jk}\psi_{jk}-\epsilon_{ik}\psi_{ik}-\delta_{ik}\psi_{ik}\big)\psi_{ij}\\
&= \sum_{1\le i,j\le m}\sum_{1\le k\le m}\big(\epsilon_{jk}\psi_{jk}-\epsilon_{ik}\psi_{ik}\big)\psi_{ij} + \sum_{1\le i,j\le m}\sum_{1\le k\le m}\big(\delta_{jk}\psi_{jk}-\delta_{ik}\psi_{ik}\big)\psi_{ij} = I + II.
\end{aligned}
\]
Since $\psi_{jk}=\psi_{ji}+\psi_{ik}$ and $\sum_{k=1}^m \delta_{jk}=0$, we can deduce that
\[
\begin{aligned}
II &= \sum_{1\le i,j\le m}\sum_{1\le k\le m}\big(\delta_{jk}\psi_{jk}-\delta_{ik}\psi_{ik}\big)\psi_{ij} = \sum_{k=1}^m\sum_{i=1}^m\sum_{j=1}^m \delta_{jk}\,\psi_{jk}\,\psi_{ij} - \sum_{k=1}^m\sum_{i=1}^m\sum_{j=1}^m \delta_{jk}\,\psi_{jk}\,\psi_{ji}\\
&= 2\sum_{1\le i,j\le m}\sum_{1\le k\le m}\delta_{jk}\,\psi_{jk}\,\psi_{ij} = 2\sum_{1\le i,j\le m}\delta_{jj}\,\psi_{ij}^2 + 2\sum_{i=1}^m\Big(\sum_{j,k=1;\,k\ne j}^m \psi_{ij}\,\delta_{jk}\,\psi_{ik}\Big).
\end{aligned}
\]
We can deduce that $\sum_{j,k=1;\,k\ne j}^m \psi_{ij}\,\delta_{jk}\,\psi_{ik} = \sum_{1\le j<k\le m}\psi_{ij}(\delta_{jk}+\delta_{kj})\psi_{ik} = 0$ (since $\delta_{jk}+\delta_{kj}=0$), and then $II = 2\sum_{1\le i,j\le m}\delta_{jj}\,\psi_{ij}^2 = m\sum_{1\le i,j\le m}\mu_{ij}\,\psi_{ij}^2$, with $\mu_{ij}=\frac{\delta_{ii}+\delta_{jj}}{m}$, for $1\le i,j\le m$. For the term $I$, we have that
\[
I = \sum_{1\le i,j\le m}\sum_{1\le k\le m}\big(\epsilon_{jk}\psi_{jk}-\epsilon_{ik}\psi_{ik}\big)\psi_{ij} = \sum_{1\le i,j\le m}\sum_{1\le k\le m}\epsilon_{jk}\,\psi_{jk}\,\psi_{ij} - \sum_{1\le i,j\le m}\sum_{1\le k\le m}\epsilon_{jk}\,\psi_{jk}\,\psi_{ji} = 2\sum_{1\le i,j\le m}\sum_{1\le k\le m}\epsilon_{jk}\,\psi_{jk}\,\psi_{ij}
\]
and then
\[
\begin{aligned}
I &= \sum_{1\le i,j\le m}\sum_{1\le k\le m}\epsilon_{jk}\big((\psi_{jk}+\psi_{ij})^2 - \psi_{jk}^2 - \psi_{ij}^2\big)\\
&= \sum_{1\le i,k\le m}\Big(\sum_{1\le j\le m}\epsilon_{jk}\Big)\psi_{ik}^2 - m\sum_{1\le j,k\le m}\epsilon_{jk}\,\psi_{jk}^2 - \sum_{1\le i,j\le m}\Big(\sum_{1\le k\le m}\epsilon_{jk}\Big)\psi_{ij}^2\\
&= -m\sum_{1\le j,k\le m}\epsilon_{jk}\,\psi_{jk}^2 \qquad (\text{since } S \text{ is symmetric and has zero row sums}).
\end{aligned}
\]
This completes the proof. □
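Both identities of Lemma 4 can be verified numerically. In the sketch below, the matrix G and vector v are arbitrary stand-ins; G is only assumed to have zero row sums, as in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6

# Random coupling matrix G with zero row sums (stand-in for the text's G).
G = rng.standard_normal((m, m))
np.fill_diagonal(G, 0.0)
np.fill_diagonal(G, -G.sum(axis=1))
v = rng.standard_normal(m)

eps = 0.5 * (G + G.T)             # epsilon_jk (diagonal irrelevant: psi_kk = 0)
delta_diag = 0.5 * G.sum(axis=0)  # delta_kk, from 2*delta_kk = sum_i G_ik
psi = v[None, :] - v[:, None]     # psi_ij = v_j - v_i

# Identity (i): sum_{i,k} G_ik v_k v_i
#             = -(1/2) sum_{k != i} G_ik (v_k - v_i)^2 + sum_k delta_kk v_k^2
lhs1 = v @ G @ v
quad = sum(G[i, k] * (v[k] - v[i]) ** 2
           for i in range(m) for k in range(m) if k != i)
assert np.isclose(lhs1, -0.5 * quad + delta_diag @ v**2)

# Identity (ii): sum_{i,j,k} (G_jk - G_ik) v_k psi_ij
#             = -m sum_{j,k} eps_jk psi_jk^2 + 2 sum_{i,j} delta_jj psi_ij^2
lhs2 = sum((G[j, k] - G[i, k]) * v[k] * psi[i, j]
           for i in range(m) for j in range(m) for k in range(m))
rhs2 = -m * np.sum(eps * psi**2) + 2.0 * (delta_diag @ (psi**2).sum(axis=0))
assert np.isclose(lhs2, rhs2)
```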
Remark 3.
(a) If $\sum_{k=1}^m G_{ki}=0$, then $\delta_{ii}=0$ (for all $i=1,\dots,m$) and consequently $\sum_{1\le i,j\le m}\delta_{jj}\,\psi_{ij}^2=0$ (with $\psi_{ij}=v_j-v_i$) and $\sum_{i=1}^m \delta_{ii}\,v_i^2=0$ (in Lemma 4).
(b) The matrix $(\mu_{ij})_{i,j}$ is symmetric and satisfies $\sum_{1\le i,j\le m}\mu_{ij}=0$.
Remark 4.
Let $A$ be an arbitrary antisymmetric matrix. Then, for any vector $w$, we have $w^{t}Aw=0$.
To end this section, we give the following lemmas and definitions. From, e.g., [63,64], we can deduce, respectively, the following lemmas.
Lemma 5.
(A generalized Gronwall inequality) Assume $\gamma>0$, $h$ is a nonnegative function locally integrable on $(0,T)$ (some $T\le+\infty$) and $b$ is a nonnegative, bounded, nondecreasing continuous function defined on $[0,T)$. Let $f$ be a nonnegative and locally integrable function on $(0,T)$ with, for $t\in(0,T)$, $f(t)\le h(t)+b(t)\,I^{\gamma}_{0^+}[f](t)$. Then (for $t\in(0,T)$), $f(t)\le h(t)+\int_0^t \Big[\sum_{k=1}^{\infty}\frac{(b(t))^k}{\Gamma(k\gamma)}(t-\tau)^{k\gamma-1}\Big]h(\tau)\,d\tau$. If, in addition, $h$ is a nondecreasing function on $(0,T)$, then $f(t)\le h(t)\,E_{\gamma,1}\big(b(t)\,t^{\gamma}\big)$.
Here, $E_{\theta_1,\theta_2}$ is the classical two-parameter Mittag–Leffler function (usually denoted by $E_{\theta_1}$ if $\theta_2=1$), defined by $E_{\theta_1,\theta_2}(z)=\sum_{k=0}^{\infty}\frac{z^k}{\Gamma(k\theta_1+\theta_2)}$. The function $E_{\theta_1,\theta_2}$ is an entire function of the variable $z$ for any $\theta_1,\theta_2$ with $\mathrm{Re}(\theta_1)>0$.
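For reference, a minimal truncated-series implementation of $E_{\theta_1,\theta_2}$ (adequate for the moderate arguments appearing in the estimates of this paper; dedicated algorithms are needed for large arguments):

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=80):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(k*alpha + beta).
    Adequate for moderate |z|; not suitable for large |z|."""
    return sum(z ** k / math.gamma(k * alpha + beta) for k in range(n_terms))

# Classical special cases: E_{1,1}(z) = exp(z) and E_{2,1}(-z^2) = cos(z).
assert abs(mittag_leffler(1.0, 1.0, 1.0) - math.e) < 1e-10
assert abs(mittag_leffler(2.0, 1.0, -4.0) - math.cos(2.0)) < 1e-10
# E_{alpha,beta}(0) = 1 / Gamma(beta), used below for the inverse L_{alpha,beta}.
assert abs(mittag_leffler(0.8, 0.9, 0.0) - 1.0 / math.gamma(0.9)) < 1e-14
```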
Lemma 6.
Let $\omega$ be a locally integrable nonnegative function on $[0,+\infty)$ such that $\partial^{\alpha}_{0^+}\omega(t)+\lambda\,\omega(t)\le R$. Then, we have $\omega(t)\le \omega(0)\,E_{\alpha,1}(-\lambda t^{\alpha}) + R\,t^{\alpha}E_{\alpha,\alpha+1}(-\lambda t^{\alpha}) = \omega(0)\,E_{\alpha,1}(-\lambda t^{\alpha}) + \frac{R}{\lambda}\big(1-E_{\alpha,1}(-\lambda t^{\alpha})\big)$.
The invertibility of $f(t)=E_{\alpha,\beta}(-t)$, $t>0$, follows from the complete monotonicity property of $t\mapsto E_{\alpha,\beta}(-t)$. As shown in [65], this function is completely monotone if and only if $0<\alpha\le 1$ and $\beta\ge\alpha$. Since $E_{\alpha,\beta}(0)=\Gamma(\beta)^{-1}$ and $\lim_{t\to+\infty}E_{\alpha,\beta}(-t)=0$, the inverse function $g=L_{\alpha,\beta}$ of $f$ is a function $g:(0,\Gamma(\beta)^{-1}]\to[0,\infty)$ (for $0<\alpha\le 1$ and $\beta\ge\alpha$).
Definition 2
(see, e.g., [66]). The initial-boundary value problem (1)–(6) is called dissipative in a Banach space $L(\Omega)$ if there is a bounded set $K_S=B(0,r)$ in $L(\Omega)$, for some positive constant $r>0$, such that for any bounded subset $K_I$ (of $L(\Omega)$) and all initial data $(\phi_{0i},u_{0i},w_{0i})_{i=1,m}$ belonging to $K_I$, there exists a time $T_K$, depending on $K_I$, such that the corresponding solution $(\phi_i(t),u_i(t),w_i(t))_{i=1,m}$ is contained in $K_S$ for all $t\ge T_K$. The set $K_S$ is called an absorbing set and $r$ is called a radius of dissipativity.
Remark 5.
(i) We can also express the previous definition as follows. For any bounded set K I , there exists a finite time T K such that any solution started with initial condition in K I remains within the bounded ball K S for all time t T K .
(ii) The study of dissipative dynamics opens the way to the analysis of the synchronization problem.
Definition 3
(see, e.g., [67]). The initial-boundary value problem (1)–(6) is (locally) completely synchronized if
\[
\lim_{t\to+\infty}\big(\|\phi_i-\phi_j\|_{L^2(\Omega)}+\|u_i-u_j\|_{L^2(\Omega)}+\|w_i-w_j\|_{L^2(\Omega)}\big)=0,\quad \forall i,j=1,\dots,m,
\]
 whenever the initial state belongs to an appropriate bounded open set.
Definition 4.
The initial-boundary value problem (1)–(6) is locally completely Mittag–Leffler synchronized if there exist $X>0$ and $\lambda>0$ such that, for any $t>t^{\ast}$ (and all $i,j=1,\dots,m$),
\[
\big(\|\phi_i-\phi_j\|^2_{L^2(\Omega)}+\|u_i-u_j\|^2_{L^2(\Omega)}+\|w_i-w_j\|^2_{L^2(\Omega)}\big)(t)\le X\,E_{\alpha,1}\big(-\lambda\,(t-t^{\ast})^{\alpha}\big),
\]
 whenever the initial state belongs to an appropriate bounded open set. The value λ is called Mittag–Leffler synchronization rate (or degree).

4. Well-Posedness of the System

This section concerns the existence and uniqueness of a weak solution to problem (1), under Lipschitz and boundedness assumptions on the nonlinear operators. We now define the following bilinear forms: $\mathcal{A}_f(w,v)=\int_\Omega K_f\,\nabla w\cdot\nabla v\,dx$, $\mathcal{A}_e(w,v)=\int_\Omega K_e\,\nabla w\cdot\nabla v\,dx$. Under hypothesis (8), it can easily be shown that the forms $\mathcal{A}_k$ (for $k=e,f$) are symmetric, coercive, and continuous on $V$. We can now write the weak formulation of the initial-boundary value problem (1)–(6) (for all $v$, $\vartheta$, $\rho$ in $V$ and a.e. $t\in(0,T)$, with $T>0$):
c α 0 + α ϕ i , v V , V + A f ( ϕ i , v ) = Ω ( F ( x , ϕ i ) + σ u i ) v d x + Ω f e x v d x + ξ h m j = 1 m Ω a i j ( x , t ) H j ( ϕ j ) v d x κ Ω ϕ i Ψ ( w i ) v d x + p f m j = 1 m Γ ( ϕ i N ϕ j N ) v d Γ + ξ f m Ω G i ( ϕ 1 , , ϕ m ) v d x , c β 0 + α w i , ϑ V , V + A e ( w i , ϑ ) = Ω ( ν 1 ϕ i ν 2 w i ) ϑ d x + Ω g e x ϑ d x + p e m j = 1 m Γ ( w i N w j N ) ϑ d Γ + ξ e m Ω G i ( w 1 , , w m ) ϑ d x , c γ 0 + α u i , ρ L 2 ( Ω ) = Ω ( a b u i c 2 ϕ i 2 + c 1 ϕ i ) ρ d x .
The first main theorem of this paper is then the following result.
Theorem 1.
Let assumptions (H1) and $(H_2)$ be fulfilled and $T>0$. Assume that there exists $\tilde{\nu}_2>0$ such that
\[
\tilde{\nu}_2 \le \nu_2 - \frac{\xi_e}{m}\max_{i=1,\dots,m}\delta_{ii}.
\]
 Then, for any $(\phi_{0i}, w_{0i}, u_{0i})\in V^2\times L^3(\Omega)$ (for $i=1,\dots,m$) and $f_{ex}, g_{ex}\in L^{\infty}(0,\infty;L^2(\Omega))$, there exists a unique weak solution $(\phi_i,w_i,u_i)_{i=1,m}$ to problem (1) verifying (for $i=1,\dots,m$)
\[
\phi_i\in L^{\infty}(0,T;V),\quad \partial^{\alpha}_{0^+}\phi_i\in L^2(0,T;L^2(\Omega)),\quad w_i\in L^{\infty}(0,T;V),\quad \partial^{\alpha}_{0^+}w_i\in L^2(0,T;L^2(\Omega)),\quad u_i\in L^{\infty}(0,T;L^3(\Omega)),\quad \partial^{\alpha}_{0^+}u_i\in L^2(0,T;L^2(\Omega)).
\]
 Moreover, if $\alpha>1/2$, then $(\phi_i,w_i,u_i)\in C([0,T];L^2(\Omega))$ and $(\phi_i,w_i,u_i)(0^+)=(\phi_{0i},w_{0i},u_{0i})$.
Proof. 
To establish the existence of a weak solution to system (1), we proceed as in [39] by applying the Faedo–Galerkin method, deriving a priori estimates, and then passing to the limit in the approximate solutions using compactness arguments. The uniqueness result can be established in the classical way. The full proof is given in Appendix A. □

5. Dissipative Dynamics of the Solution

In this section, we first prove that all weak solutions of the initial value problem (1) exist for all time $t\in[0,\infty)$. Then, we show that there exists an absorbing set in the space $H_m=(L^2(\Omega)\times L^2(\Omega)\times L^2(\Omega))^m$ for the solution semiflow, which is therefore dissipative in $H_m$ in the sense of Definition 2.
Theorem 2.
Under the assumptions of Theorem 1, there exists a unique global weak solution
( ϕ i ( t ) , w i ( t ) , u i ( t ) ) i = 1 , m in the space H m , for time t [ 0 , ) , of problem (1)–(6). Moreover, for any given ε 0 > 0 , there exists an absorbing set for the semiflow for problem (1)–(6) in space H m , which is a bounded ball B ( 0 , D ) , where D = ε 0 + G 1 C m i n . The value G 1 = G 0 + Ω i = 1 m ( 1 + sgn ( θ i ) ) θ i 2 2 λ 0 , with λ 0 = ϵ λ 2 2 c 2 2 b , θ i = ( ϵ ( θ w + 1 2 λ + ξ f m δ i i + ξ h 2 m j = 1 m ( a s i j + l i 2 a s j i ) ) + ν 1 2 ν ˜ 2 + 2 c 1 2 b ) , θ w = κ ( η 1 2 4 η 2 + ζ δ ) , G 0 = ϵ ( 1 2 λ f e x L ( 0 , ; L 2 ( Ω ) ) 2 + Ω ( ρ 0 L 2 ( Ω ) + 1 2 λ ( σ 2 2 υ + λ 2 ) 2 ) ) + m ν ˜ 2 g e x L ( 0 , ; L 2 ( Ω ) ) 2 . The value ϵ > 0 is chosen appropriately so that λ 0 > 0 i.e., ϵ > 4 c 2 2 b λ .
Proof. 
Taking the scalar product of the equations of system (1) by ϕ i , u i , and w i , respectively, and adding these equations for i = 1 , m , we obtain, according to Lemma 4 and assumption ( H 1 ) :
c α 2 0 + α i = 1 m ϕ i L 2 2 + d f i = 1 m ϕ i L 2 ( Ω ) 2 p f 2 m i , j = 1 m Γ ( ϕ i ϕ j ) 2 d Γ + i = 1 m Ω F ( x , ϕ i ) ϕ i d x ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i ϕ j ) 2 d x + σ i = 1 m Ω u i ϕ i d x + ξ f m i = 1 m δ i i ϕ i L 2 ( Ω ) 2 κ i = 1 m Ω ϕ i 2 Ψ ( w i ) d x + ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j ) ϕ i d x + i = 1 m Ω f e x ϕ i d x , c γ 2 0 + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 p e 2 m i , j = 1 m Γ ( w i w j ) 2 d Γ ν 2 i = 1 m w i L 2 ( Ω ) 2 ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i w j ) 2 d x + i = 1 m Ω g e x w i d x + ν 1 i = 1 m Ω w i ϕ i d x + ξ e m i = 1 m δ i i w i L 2 ( Ω ) 2 , c β 2 0 + α i = 1 m u i L 2 ( Ω ) 2 c 1 i = 1 m Ω ϕ i u i d x c 2 i = 1 m Ω ϕ i 2 u i d x + a i = 1 m Ω u i d x b i = 1 m u i L 2 ( Ω ) 2 .
By using similar arguments to derive (A9), we can deduce
0 + α ϵ c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 i = 1 m u i L 2 ( Ω ) 2 + c γ 2 i = 1 m w i L 2 ( Ω ) 2 + λ 1 ϵ c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 i = 1 m u i L 2 ( Ω ) 2 + c γ 2 i = 1 m w i L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 + ϵ p f 2 m i , j = 1 m Γ ( ϕ i ϕ j ) 2 d Γ + ϵ ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i ϕ j ) 2 d x + p e 2 m i , j = 1 m Γ ( w i w j ) 2 d Γ + ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i w j ) 2 d x G 1 ,
where G 1 = G 0 + Ω i = 1 m ( 1 + sgn ( θ i ) ) θ i 2 2 λ 0 , λ 1 = min ( 2 θ ˜ ϵ c α , ν 2 c γ , υ 0 c β ) and θ ˜ = min i = 1 , m ( θ i ) , with G 0 = ϵ ( 1 2 λ f e x L ( 0 , ; L 2 ( Ω ) ) 2 + Ω ( ρ 0 L 2 ( Ω ) + 1 2 λ ( σ 2 2 υ + λ 2 ) 2 ) ) + m ν 2 g e x L ( 0 , ; L 2 ( Ω ) ) 2 , θ i = ( ϵ ( θ w + 1 2 λ + ξ f m δ i i + ξ h 2 m j = 1 m ( a s i j + l i 2 a s j i ) ) + ν 1 2 ν ˜ 2 + 2 c 1 2 b ) , θ w = κ ( η 1 2 4 η 2 + ζ δ ) , λ 0 = ϵ λ 2 2 c 2 2 b and υ 0 = 5 b 8 ϵ υ 2 . The values ϵ > 0 and υ > 0 are chosen appropriately so that λ 0 > 0 and υ 0 > 0 , i.e., ϵ > 4 c 2 2 b λ and υ < 5 b 4 ϵ . In particular, we have
0 + α ϵ c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 i = 1 m u i L 2 ( Ω ) 2 + c γ 2 i = 1 m w i L 2 ( Ω ) 2 + λ 1 ϵ c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 i = 1 m u i L 2 ( Ω ) 2 + c γ 2 i = 1 m w i L 2 ( Ω ) 2 G 1 .
We can solve this Caputo fractional differential inequality (17) to obtain the following estimate in maximal existence time interval (from Lemma 6):
C m i n i = 1 m ϕ i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 G 1 + ( C m a x X 0 G 1 ) E α , 1 ( λ 1 t α ) ,
where X 0 = i = 1 m ϕ 0 i L 2 ( Ω ) 2 + u 0 i L 2 ( Ω ) 2 + w 0 i L 2 ( Ω ) 2 , C m i n = min ( ϵ c α 2 , c β 2 , c γ 2 ) and
C m a x = max ( ϵ c α 2 , c β 2 , c γ 2 ) .
Since the solution never blows up in finite time, the maximal existence time interval is $[0,\infty)$. Consequently, the solution of the initial-boundary value problem (1)–(6) exists in the space $H_m$ for any time $t\in[0,\infty)$. We then have the existence of a solution semiflow for (1)–(6), that is, a mapping $\Sigma_S:[t;(\phi_{0i},u_{0i},w_{0i})_{i=1,m}\in H_m]\mapsto(\phi_i(t),u_i(t),w_i(t))_{i=1,m}\in H_m$, $t\ge 0$, enjoying the semigroup property $\Sigma_S[s+t;(\phi_{0i},u_{0i},w_{0i})_{i=1,m}]=\Sigma_S[s;\Sigma_S[t;(\phi_{0i},u_{0i},w_{0i})_{i=1,m}]]$ for any $s,t\ge 0$. Moreover, from (18), we can deduce that for any $\varepsilon_0>0$ (in view of the asymptotic property of the Mittag–Leffler function):
lim sup t i = 1 m ( ϕ i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 ) < D = ε 0 + D 0 ,
where D 0 = G 1 C m i n (since lim sup t E α , 1 ( λ 1 t α ) = 0 ) and then ( ϕ i ( t ) , w i ( t ) , u i ( t ) ) i = 1 , m B ( 0 , D ) .
Afterwards, for any bounded set $B(0,\rho)$ (of $H_m$), with $\rho>0$, we have, if $(\phi_{0i},u_{0i},w_{0i})_{i=1,m}\in B(0,\rho)$:
\[
\sum_{i=1}^m \|\phi_i\|^2_{L^2(\Omega)}+\|u_i\|^2_{L^2(\Omega)}+\|w_i\|^2_{L^2(\Omega)} \le D_0 + \rho\,\frac{C_{max}}{C_{min}}\,E_{\alpha,1}(-\lambda_1 t^{\alpha})
\]
and then there exists a finite $T_\rho$, given by $T_\rho=\Big(\frac{1}{\lambda_1}L_{\alpha,1}\big(\min\big(1,\frac{\varepsilon_0 C_{min}}{\rho\,C_{max}}\big)\big)\Big)^{1/\alpha}$, such that the solution trajectories that started at initial time $t=0$ from the set $B(0,\rho)$ will permanently enter the set $B(0,D)$, for all $t\ge T_\rho$.
So, B ( 0 , D ) is an absorbing set in H m for the semiflow Σ S and this semiflow is dissipative. This completes the proof. □
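The absorbing time $T_\rho$ can be evaluated numerically by inverting $y\mapsto E_{\alpha,1}(-\cdot)$ with a bisection, which is legitimate by the complete monotonicity recalled in Section 3. In the sketch below, all parameter values ($\alpha$, $\lambda_1$, $C_{min}$, $C_{max}$, $\rho$, $\varepsilon_0$) are hypothetical stand-ins.

```python
import math

def ml(alpha, z, n_terms=120):
    # Truncated series for E_{alpha,1}(z); adequate for the moderate |z| used here.
    return sum(z ** k / math.gamma(k * alpha + 1.0) for k in range(n_terms))

def ml_inverse(alpha, y, hi=8.0, tol=1e-10):
    """L_{alpha,1}(y): the x >= 0 with E_{alpha,1}(-x) = y, for y in (E_{alpha,1}(-hi), 1].
    Bisection is justified by the complete monotonicity of x -> E_{alpha,1}(-x)."""
    lo_x, hi_x = 0.0, hi
    while hi_x - lo_x > tol:
        mid = 0.5 * (lo_x + hi_x)
        if ml(alpha, -mid) > y:   # still above the target level: root is further right
            lo_x = mid
        else:
            hi_x = mid
    return 0.5 * (lo_x + hi_x)

# Hypothetical parameter values, for illustration only.
alpha, lam1 = 0.7, 0.5                      # fractional order, decay rate lambda_1
Cmin, Cmax, rho, eps0 = 1.0, 2.0, 2.0, 2.0  # illustrative constants

# T_rho = ( L_{alpha,1}( min(1, eps0*Cmin/(rho*Cmax)) ) / lambda_1 )^(1/alpha)
y = min(1.0, eps0 * Cmin / (rho * Cmax))    # = 0.5 here
T_rho = (ml_inverse(alpha, y) / lam1) ** (1.0 / alpha)

# At t = T_rho the Mittag-Leffler factor has decayed exactly to the margin y,
# and it keeps decreasing afterwards (complete monotonicity).
assert abs(ml(alpha, -lam1 * T_rho ** alpha) - y) < 1e-6
assert ml(alpha, -lam1 * (2.0 * T_rho) ** alpha) < y
```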

6. Synchronization Phenomena

Before analyzing the synchronization problems, we will examine the uniform boundedness of the solution in K m = ( L 4 ( Ω ) × L 2 ( Ω ) × L 4 ( Ω ) ) m .

6.1. Uniform Boundedness in K m

In this section, we prove a result on the ultimate uniform boundedness of the solution $(\phi_i,u_i,w_i)_{i=1,m}$ in $K_m$.
Lemma 7.
For all $(v_i)_{i=1,\dots,m}\in\mathbb{R}^m$, we have the following results.
 (i)
\[
\sum_{i,j=1}^m \epsilon_{ij}\,v_i\,v_j^3 = -\frac{1}{2}\sum_{i,j=1}^m \epsilon_{ij}\,(v_i-v_j)^2\,(v_j^2+v_j v_i+v_i^2),
\]
 which is $\le 0$ when $\epsilon_{ij}\ge 0$ for $i\ne j$;
 (ii)
\[
\sum_{i,j=1}^m \delta_{ij}\,v_j\,v_i^3 \le \sum_{i=1}^m\Big(\sum_{j\ne i}^m |\delta_{ij}| + \delta_{ii}\Big)v_i^4.
\]
Proof. 
Since $(v_i-v_j)^2(v_j^2+v_jv_i+v_i^2) = v_j^4+v_i^4-v_j^3v_i-v_i^3v_j$, then (since $\epsilon_{ij}=\epsilon_{ji}$)
\[
\sum_{i,j=1}^m \epsilon_{ij}(v_i-v_j)^2(v_j^2+v_jv_i+v_i^2) = \sum_{j=1}^m v_j^4\Big(\sum_{i=1}^m\epsilon_{ij}\Big) + \sum_{i=1}^m v_i^4\Big(\sum_{j=1}^m\epsilon_{ij}\Big) - 2\sum_{i=1}^m\sum_{j=1}^m \epsilon_{ij}\,v_i\,v_j^3.
\]
Because $\sum_{i=1}^m\epsilon_{ij}=\sum_{j=1}^m\epsilon_{ij}=0$, we derive $\sum_{i=1}^m\sum_{j=1}^m \epsilon_{ij}\,v_i\,v_j^3 = -\frac{1}{2}\sum_{i,j=1}^m\epsilon_{ij}(v_i-v_j)^2(v_j^2+v_jv_i+v_i^2)$. For (ii), we have $\sum_{i,j=1}^m\delta_{ij}\,v_j\,v_i^3 = \sum_{i=1}^m\sum_{j\ne i}^m\delta_{ij}\,v_j\,v_i^3 + \sum_{i=1}^m\delta_{ii}\,v_i^4$. Since (from Young's inequality and the fact that $\delta_{ij}=-\delta_{ji}$, for $i\ne j$)
\[
\sum_{i=1}^m\sum_{j\ne i}^m \delta_{ij}\,v_j\,v_i^3 \le \frac{1}{4}\sum_{i=1}^m\sum_{j\ne i}^m |\delta_{ij}|\,v_j^4 + \frac{3}{4}\sum_{i=1}^m\sum_{j\ne i}^m |\delta_{ij}|\,v_i^4 \le \frac{3}{4}\sum_{i=1}^m v_i^4\Big(\sum_{j\ne i}^m|\delta_{ij}|\Big) + \frac{1}{4}\sum_{j=1}^m v_j^4\Big(\sum_{i\ne j}^m|\delta_{ji}|\Big) \le \sum_{i=1}^m v_i^4\Big(\sum_{j\ne i}^m|\delta_{ij}|\Big),
\]
we obtain that
\[
\sum_{i,j=1}^m \delta_{ij}\,v_j\,v_i^3 \le \sum_{i=1}^m v_i^4\Big(\sum_{j\ne i}^m|\delta_{ij}|\Big) + \sum_{i=1}^m\delta_{ii}\,v_i^4 = \sum_{i=1}^m\Big(\sum_{j\ne i}^m|\delta_{ij}|+\delta_{ii}\Big)v_i^4.
\]
This completes the proof. □
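Identity (i), and the sign information it carries for nonnegative off-diagonal couplings, can be checked numerically on a random symmetric zero-row-sum matrix (a sanity-check sketch; all data below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6

# Symmetric matrix with zero row sums (stand-in for S = (eps_ij) of the text).
E = rng.standard_normal((m, m))
E = 0.5 * (E + E.T)
np.fill_diagonal(E, 0.0)
np.fill_diagonal(E, -E.sum(axis=1))
v = rng.standard_normal(m)

# Identity (i): sum_{i,j} eps_ij v_i v_j^3
#             = -(1/2) sum_{i,j} eps_ij (v_i - v_j)^2 (v_j^2 + v_j v_i + v_i^2)
lhs = sum(E[i, j] * v[i] * v[j] ** 3 for i in range(m) for j in range(m))
rhs = -0.5 * sum(E[i, j] * (v[i] - v[j]) ** 2 * (v[j] ** 2 + v[j] * v[i] + v[i] ** 2)
                 for i in range(m) for j in range(m))
assert np.isclose(lhs, rhs)

# With nonnegative off-diagonal couplings the quantity is <= 0, which is the
# sign information the lemma feeds into the energy estimates.
E2 = np.abs(E)
np.fill_diagonal(E2, 0.0)
np.fill_diagonal(E2, -E2.sum(axis=1))
lhs2 = sum(E2[i, j] * v[i] * v[j] ** 3 for i in range(m) for j in range(m))
assert lhs2 <= 1e-12
```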
Theorem 3.
Assume now that $\alpha>1/2$, $g_{ex}\in L^{\infty}(0,\infty;L^4(\Omega))$, and that the assumptions of Theorem 2 hold. If there exists a constant $\tilde{\nu}_{1,2}>0$ such that
\[
\tilde{\nu}_{1,2} \le \nu_2 - \frac{\xi_e}{m}\max_{i=1,\dots,m}(\Delta_i),
\]
 where $\Delta_i=\sum_{j\ne i}|\delta_{ij}|+\delta_{ii}=\sum_{j\ne i}(|\delta_{ij}|-\delta_{ij})$ (since $\delta_{ii}=-\sum_{j\ne i}\delta_{ij}$), then (1)–(6) is dissipative in $K_m$.
Proof. 
As $g_1:y\mapsto y^3$ is an increasing function, its primitive is convex and we thus have, from [68] (since $g_1$ is independent of time), $\frac{1}{4}\,\partial^{\alpha}_{0^+}\|v\|^4_{L^4(\Omega)}\le \int_\Omega v^3\,\partial^{\alpha}_{0^+}v\,dx$. Consequently, by taking the scalar product of the first and third equations of system (1) with $\phi_i^3$ and $w_i^3$, respectively, and adding these equations for $i=1,\dots,m$, we can deduce (according to Lemma 7)
c α 4 0 + α i = 1 m ϕ i L 4 4 + 3 d f i = 1 m ϕ i ϕ i L 2 ( Ω ) 2 + p f 2 m i , j = 1 m Γ ( ϕ i ϕ j ) 2 ( ϕ i 2 + ϕ i ϕ j + ϕ j 2 ) d Γ + ξ f 2 m i , j = 1 ; j i m Ω ϵ i j ( ϕ i ϕ j ) 2 ( ϕ i 2 + ϕ i ϕ j + ϕ j 2 ) d x i = 1 m Ω F ( x , ϕ i ) ϕ i 3 d x + σ i = 1 m Ω u i ϕ i 3 d x + i = 1 m Ω f e x ϕ i 3 d x + ξ f m i m ( Δ i + δ i i ) ϕ i 4 κ i = 1 m Ω ϕ i 4 Ψ ( w i ) d x + ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j ) ϕ i 3 d x , c γ 4 0 + α i = 1 m w i L 4 4 + 3 d e i = 1 m w i w i L 2 ( Ω ) 2 + p e 2 m i , j = 1 m Γ ( w i w j ) 2 ( w i 2 + w i w j + w j 2 ) d Γ + ξ e 2 m i , j = 1 ; j i m Ω ϵ i j ( w i w j ) 2 ( w i 2 + w i w j + w j 2 ) d x ν ˜ 1 , 2 i = 1 m w i L 4 4 + ν 1 i = 1 m Ω w i 3 ϕ i d x + i = 1 m Ω g e x w i 3 d x .
where Δ i = j i m δ i j + δ i i and ν ˜ 1 , 2 ν 2 ξ e m max i = 1 , m ( Δ i ) (by hypothesis).
Since v I R , we have tanh ( v ) 1 and v 4 1 + v 6 ; then (by using Young’s inequality),
κ Ω ϕ i 4 Ψ ( w i ) d x = κ Ω ϕ i 4 ( δ + η 1 w i + η 2 w i 2 + ζ tanh ( w i ) ) d x κ ( ζ δ ) ϕ i L 4 4 κ η 2 Ω ϕ i 4 ( ( w i + θ 0 ) 2 θ 0 2 ) d x θ w ϕ i L 4 4 ( as in Lemma 3 ) , ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j ) ϕ i 3 d x ξ h m i , j = 1 m Ω a s i j ϕ j l j ϕ i 3 d x ξ h 4 m i = 1 m ( j = 1 m ( 3 a s i j + l i 4 a s j i ) ) ϕ i L 4 4 , Ω F ( x , ϕ i ) ϕ i ϕ i 2 d x + σ Ω u i ϕ i 3 d x + Ω f e x ϕ i 3 d x λ Ω ϕ i 6 d x + σ Ω u i ϕ i 3 d x + Ω ( f e x ϕ i + ρ 0 ) ϕ i 2 d x λ Ω ϕ i 6 d x + λ 6 Ω ϕ i 4 d x + λ 3 Ω ϕ i 6 d x + 3 2 λ ( σ 2 u i L 2 ( Ω ) 2 + f e x L ( 0 , ; L 2 ( Ω ) ) 2 + ρ 0 L 2 ( Ω ) 2 ) , ν ˜ 1 , 2 w i L 4 4 + ν 1 Ω w i 3 ϕ i d x + Ω g e x w i 3 d x ν ˜ 1 , 2 4 w i L 4 4 + 2 ν 1 4 ν ˜ 1 , 2 3 ϕ i L 4 4 + 2 ν ˜ 1 , 2 3 g e x L ( 0 , ; L 4 ( Ω ) ) 4 .
From (20), (22) and (23), we can deduce
c α 4 0 + α i = 1 m ϕ i L 4 ( Ω ) 4 + 3 d f i = 1 m ϕ i ϕ i L 2 ( Ω ) 2 + p f 2 m i , j = 1 m Γ ( ϕ i ϕ j ) 2 ( ϕ i 2 + ϕ i ϕ j + ϕ j 2 ) d Γ + ξ f 2 m i , j = 1 ; j i m Ω ϵ i j ( ϕ i ϕ j ) 2 ( ϕ i 2 + ϕ i ϕ j + ϕ j 2 ) d x 2 λ 3 i = 1 m Ω ϕ i 6 d x + 3 σ 2 2 λ u i L 2 ( Ω ) 2 + 3 m 2 λ ( f e x L ( 0 , ; L 2 ( Ω ) ) 2 + ρ 0 L 2 ( Ω ) 2 ) + i = 1 m λ 6 + θ w + 1 4 m ( j = 1 m ξ h ( 3 a s i j + l i 4 a s j i ) + 4 ξ f Δ i ) ϕ i L 4 ( Ω ) 4 , C 0 c γ 4 0 + α i = 1 m w i L 4 ( Ω ) 4 + C 0 p e 2 m i , j = 1 m Γ ( w i w j ) 2 ( w i 2 + w i w j + w j 2 ) d Γ + 3 d e C 0 i = 1 m w i w i L 2 ( Ω ) 2 + C 0 ξ e 2 m i , j = 1 ; j i m Ω ϵ i j ( w i w j ) 2 ( w i 2 + w i w j + w j 2 ) d x C 0 ν ˜ 1 , 2 4 i = 1 m w i L 4 ( Ω ) 4 + 2 C 0 ν 1 4 ν ˜ 1 , 2 3 i = 1 m ϕ i L 4 ( Ω ) 4 + 2 C 0 m ν ˜ 1 , 2 3 g e x L ( 0 , ; L 4 ( Ω ) ) 4 ,
with C 0 > 0 to be chosen appropriately. In particular, we can deduce
c α 4 0 + α i = 1 m ϕ i L 4 4 + C 0 c γ 4 0 + α i = 1 m w i L 4 4 i = 1 m Ω ( 2 λ 3 ϕ i 4 + θ 2 ( i ) ϕ i 2 ) ϕ i 2 d x + 3 σ 2 2 λ i = 1 m u i L 2 ( Ω ) 2 C 0 ν ˜ 1 , 2 4 i = 1 m w i L 4 4 + C 1 ,
where C 1 = 2 m C 0 ν ˜ 1 , 2 3 g e x L ( 0 , ; L 4 ( Ω ) ) 4 + 3 m 2 λ ( f e x L ( 0 , ; L 2 ( Ω ) ) 2 + ρ 0 L 2 ( Ω ) 2 ) and
θ 2 ( i ) = ( λ 6 + θ w + 1 4 m ( j = 1 m ξ h ( 3 a s i j + l i 4 a s j i ) + 4 ξ f Δ i ) + 2 C 0 ν 1 4 ν ˜ 1 , 2 3 ) , with θ 2 ( i ) 0 for an appropriate C 0 > 0 .
Using a similar argument to derive (A8), we can deduce that
c α 4 0 + α i = 1 m ϕ i L 4 ( Ω ) 4 + C 0 c γ 4 0 + α i = 1 m w i L 4 ( Ω ) 4 + i = 1 m θ 2 i ϕ i L 4 ( Ω ) 4 + C 0 ν ˜ 1 , 2 4 i = 1 m w i L 4 ( Ω ) 4 i = 1 m ( 1 + sgn ( θ 2 ( i ) ) ) 3 θ 2 ( i ) 4 λ ϕ i L 2 ( Ω ) 2 + 3 σ 2 2 λ i = 1 m u i L 2 ( Ω ) 2 + C 1 .
If ( ϕ 0 i , u 0 i , w 0 i ) i = 1 , m is in a ball B ( 0 , ρ ) of K m , then ( ϕ 0 i , u 0 i , w 0 i ) i = 1 , m is in a ball B ( 0 , ρ ˜ ) of H m and then, from (19), ( ϕ i ( t ) , u i ( t ) , w i ( t ) ) i = 1 , m is bounded by some constant D ˜ > 0 in H m . Consequently,
0 + α c α 4 i = 1 m ϕ i L 4 ( Ω ) 4 + C 0 c γ 4 i = 1 m w i L 4 ( Ω ) 4 + β 0 c α 4 i = 1 m ϕ i L 4 ( Ω ) 4 + C 0 c γ 4 i = 1 m w i L 4 ( Ω ) 4 C 1 + 3 4 λ max i = 1 , m 2 σ 2 + θ 2 ( i ) ( 1 + sgn ( θ 2 ( i ) ) ) D ˜ ,
where β 0 = min ( 4 θ ˜ 2 c α , ν ˜ 1 , 2 c γ ) , with θ ˜ 2 = min i = 1 , m θ 2 ( i ) .
Then, as in the proof of Theorem 2, we can deduce that if $(\phi_{0i},u_{0i},w_{0i})_{i=1,m}$ is in a ball $B(0,\rho)$ of $K_m$, then there exists $t^{\ast}\ge t_0$ such that, for $t\ge t^{\ast}$, $(\phi_i(t),u_i(t),w_i(t))_{i=1,m}$ is in a ball $B(0,D_\rho)$ in $K_m$. Therefore, $B(0,D_\rho)$ is an absorbing set in $K_m$ for (1)–(6) (and then (1)–(6) is a dissipative dynamical system in $K_m$). The proof is complete. □
In the sequel, we assume that $(\phi_{0i},u_{0i},w_{0i})_{i=1,m}$ is in a bounded set $B(0,\rho)$ of $K_m$; then, from Theorem 3, there exists $t^{\ast}\ge t_0$ such that, for $t\ge t^{\ast}$, $(\phi_i(t),u_i(t),w_i(t))_{i=1,m}$ is in a ball $B(0,D_\rho)$ of $K_m$, where $D_\rho>0$ depends on some given parameters of the problem.

6.2. Local Complete Synchronization

In this section, we assume that $\xi_h=0$ and we consider the local synchronization of solutions of (1), i.e., whether the synchronous state is robust to perturbations, whenever the initial conditions belong to some appropriate open and bounded set. Set $\Phi_{ij}=\phi_i-\phi_j$, $U_{ij}=u_i-u_j$ and $W_{ij}=w_i-w_j$ on $[t^{\ast},+\infty)$. Then, $(\Phi_{ij},U_{ij},W_{ij})$ is a solution of
c α t + α Φ i j d i v ( K f ( x ) Φ i j ) = ( F ( x , ϕ i ) F ( x , ϕ j ) ) + σ U i j + ξ f m 1 k m ( G i k ϕ k G j k ϕ k ) κ ( ϕ i Ψ ( w i ) ϕ j Ψ ( w j ) ) , c β t + α U i j = c 1 Φ i j b U i j c 2 ( ϕ i + ϕ j ) Φ i j , c γ t + α W i j d i v ( K e ( x ) W i j ) = ν 1 Φ i j ν 2 W i j + ξ e m 1 k m ( G i k w k G j k w k ) ,
with the boundary conditions
K f . Φ i j · n + p f Φ i j = 0 , on Γ , K e . W i j · n + p e W i j = 0 , on Γ .
Theorem 4.
Under the assumptions of Theorem 3, if there exist appropriate positive constants, indexed by $i,j=1,\dots,m$ with $i\ne j$, such that
( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 d e C P S 2 ( 4 p e d e ) 4 Ω 2 ) 1 i , j m ; i j W i j L 2 ( Ω ) 2 ξ e 1 i , j m ; i j ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 < 1 i , j m ; i j i j W i j L 2 ( Ω ) 2 and ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ϵ d f C P S 2 ( p f d f ) Ω 2 ) 1 i , j m ; i j Φ i j L 2 ( Ω ) 2 ϵ ξ f 1 i , j m ; i j ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 < 1 i , j m ; i j i j Φ i j L 2 ( Ω ) 2 ,
 where $K_w=\frac{96\,C_\Omega}{b}\Big(\frac{c_2\,\kappa\,\eta_2\,D_\rho}{\alpha_3}\Big)^2$, $\epsilon=\frac{8\,c_2^2}{\alpha_3\,b}$ and $0<\gamma_{0,\nu}<1$ (which can be appropriately and arbitrarily selected), then the response system (1) is locally completely (Mittag–Leffler) synchronized in $H_m$ at a uniform Mittag–Leffler rate.
Proof. 
Multiply (28) by the function ( Φ i j , U i j , W i j ) and integrate the resulting system over all of Ω to obtain (according to (29) and Lemma 4)
c α 2 t + α Φ i j L 2 ( Ω ) 2 + d f Φ i j L 2 ( Ω ) 2 + p f Γ Φ i j 2 d Γ ξ f m Ω 1 k m ( G i k ϕ k G j k ϕ k ) Φ i j d x + Ω ( F ( x , ϕ i ) F ( x , ϕ j ) ) Φ i j d x + σ Ω U i j Φ i j d x κ Ω ( ϕ i Ψ ( w i ) ϕ j Ψ ( w j ) ) Φ i j d x , c β 2 t + α U i j L 2 ( Ω ) 2 c 1 Ω U i j Φ i j d x c 2 Ω U i j Φ i j ( ϕ i + ϕ j ) d x b U i j L 2 ( Ω ) 2 , c γ 2 t + α W i j L 2 ( Ω ) 2 + d e W i j L 2 ( Ω ) 2 + p e Γ W i j 2 d Γ ν 2 W i j L 2 ( Ω ) 2 + ν 1 Ω W i j Φ i j d x + ξ e m Ω 1 k m ( G i k w k G j k w k ) W i j d x .
Since we have (because, for any ( y , z ) I R 2 , η 1 y + z 2 η 2 y 2 + z 2 2 η 1 2 2 η 2 + η 2 4 ( y + z ) 2 η 2 y 2 + z 2 2 η 1 2 2 η 2 , tanh ( y ) 1 and sech 2 ( y ) 1 ), according to assumption (H1):
κ ( ϕ i Ψ ( w i ) ϕ j Ψ ( w j ) ) Φ i j = κ ( η 1 + η 2 ( w i + w j ) ) ϕ i + ϕ j 2 W i j Φ i j κ ( δ + η 1 w i + w j 2 + η 2 w i 2 + w j 2 2 + ζ tanh ( w i ) ) Φ i j 2 κ ( 0 1 sech 2 ( w j + s W i j ) d s ) W i j ϕ i Φ i j κ ( ζ δ + η 1 2 2 η 2 ) Φ i j 2 + κ W i j ϕ i Φ i j κ ( η 1 + η 2 ( w i + w j ) ) ϕ i + ϕ j 2 W i j Φ i j , ( F ( x , ϕ i ) F ( x , ϕ j ) ) Φ i j = ( 0 1 F s ( x , ϕ j + s Φ i j ) d s ) Φ i j 2 ( α 3 3 ( ϕ i 2 + ϕ j 2 + ϕ i ϕ j ) + β ) Φ i j 2 ( α 3 6 ( ϕ i 2 + ϕ j 2 ) + β ) Φ i j 2 , κ ( η 1 + η 2 ( w i + w j ) ) ϕ i + ϕ j 2 W i j Φ i j 3 α 3 κ 2 η 1 2 W i j 2 + 6 α 3 κ 2 η 2 2 ( w i 2 + w j 2 ) W i j 2 + α 3 24 ( ϕ i 2 + ϕ j 2 ) Φ i j 2 .
Then (from Young’s inequality),
c α 2 t + α Φ i j L 2 ( Ω ) 2 + d f Φ i j L 2 ( Ω ) 2 + p f Γ Φ i j 2 d Γ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) Φ i j L 2 ( Ω ) 2 + ξ f m Ω 1 k m ( G i k ϕ k G j k ϕ k ) Φ i j d x + σ Ω U i j Φ i j d x + 3 α 3 κ 2 η 1 2 W i j L 2 ( Ω ) 2 + 6 α 3 κ 2 η 2 2 Ω ( w i 2 + w j 2 ) W i j 2 d x α 3 8 Ω ( ϕ i 2 + ϕ j 2 ) Φ i j 2 d x , c β 2 t + α U i j L 2 ( Ω ) 2 c 1 Ω U i j Φ i j d x + c 2 2 b Ω ( ϕ i 2 + ϕ j 2 ) Φ i j 2 d x 3 b 4 U i j L 2 ( Ω ) 2 , c γ 2 t + α W i j L 2 ( Ω ) 2 + d e W i j L 2 ( Ω ) 2 + p e Γ W i j 2 d Γ ( 1 γ 0 , ν ) ν 2 W i j L 2 ( Ω ) 2 + ξ e m Ω 1 k m ( G i k w k G j k w k ) W i j d x + ν 1 2 4 γ 0 , ν ν 2 Φ i j L 2 ( Ω ) 2 ,
where 0 < γ 0 , ν < 1 is any positive parameter (to be chosen appropriately).
Consequently (by summing the first and the second inequalities of (33)),
ϵ c α 2 t + α Φ i j L 2 ( Ω ) 2 + c β 2 t + α U i j L 2 ( Ω ) 2 + ϵ d f Φ i j L 2 ( Ω ) 2 + ϵ p f Γ Φ i j 2 d Γ ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) Φ i j L 2 ( Ω ) 2 + ϵ ξ f m Ω 1 k m ( G i k ϕ k G j k ϕ k ) Φ i j d x + ( ϵ σ + c 1 ) Ω U i j Φ i j d x + ϵ 3 α 3 κ 2 η 1 2 W i j L 2 ( Ω ) 2 + ϵ 6 α 3 κ 2 η 2 2 Ω ( w i 2 + w j 2 ) W i j 2 d x + ( ϵ α 3 8 + c 2 2 b ) Ω ( ϕ i 2 + ϕ j 2 ) Φ i j 2 d x 3 b 4 U i j L 2 ( Ω ) 2 , c γ 2 t + α W i j L 2 ( Ω ) 2 + d e W i j L 2 ( Ω ) 2 + p e Γ W i j 2 d Γ ( 1 γ 0 , ν ) W i j L 2 ( Ω ) 2 + ξ e m Ω 1 k m ( G i k w k G j k w k ) W i j d x + ν 1 2 4 γ 0 , ν ν 2 Φ i j L 2 ( Ω ) 2 .
We take ϵ such that ϵ α 3 8 + c 2 2 b = 0 , i.e., ϵ = 8 c 2 2 b α 3 . From Gagliardo–Nirenberg inequalities, we have that there exists C Ω > 0 such that ( v H 1 ( Ω ) ) v L 4 ( Ω ) 2 C Ω ( v L 2 ( Ω ) 3 / 2 v L 2 ( Ω ) 1 / 2 + v L 2 ( Ω ) 2 ) . Then,
ϵ 6 α 3 κ 2 η 2 2 Ω ( w i 2 + w j 2 ) W i j 2 d x ϵ 6 α 3 κ 2 η 2 2 ( w i L 4 ( Ω ) 2 + w j L 4 ( Ω ) 2 ) W i j L 4 ( Ω ) 2 K w W i j L 2 ( Ω ) 3 / 2 W i j L 2 ( Ω ) 1 / 2 + K w W i j L 2 ( Ω ) 2 3 d e 4 W i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 ) W i j L 2 ( Ω ) 2 ,
where K w = C Ω ϵ 12 α 3 κ 2 η 2 2 D ρ 2 (since for all k = 1 , m , w k L 4 ( Ω ) D ρ ). Thus, the previous relations with (34) yield the following inequalities:
ϵ c α 2 t + α Φ i j L 2 ( Ω ) 2 + c β 2 t + α U i j L 2 ( Ω ) 2 + ϵ d f Φ i j L 2 ( Ω ) 2 + ϵ p f Γ Φ i j 2 d Γ ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) Φ i j L 2 ( Ω ) 2 + ϵ ξ f m Ω 1 k m ( G i k ϕ k G j k ϕ k ) Φ i j d x + ( c 1 + ϵ σ ) 2 b Φ i j L 2 ( Ω ) 2 + 3 d e 4 W i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ) W i j L 2 ( Ω ) 2 b 2 U i j L 2 ( Ω ) 2 , c γ 2 t + α W i j L 2 ( Ω ) 2 + d e W i j L 2 ( Ω ) 2 + p e Γ W i j 2 d Γ ( 1 γ 0 , ν ) W i j L 2 ( Ω ) 2 + ξ e m Ω 1 k m ( G i k w k G j k w k ) W i j d x + ν 1 2 4 γ 0 , ν ν 2 Φ i j L 2 ( Ω ) 2 .
By summing (for all 1 i , j m ), we can deduce that (according to Lemma 4)
ϵ c α 2 t + α 1 i , j m Φ i j L 2 ( Ω ) 2 + c β 2 t + α 1 i , j m U i j L 2 ( Ω ) 2 + ϵ d f 1 i , j m Φ i j L 2 ( Ω ) 2 + ϵ p f 1 i , j m Φ i j L 2 ( Γ ) 2 ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b ) 1 i , j m Φ i j L 2 ( Ω ) 2 m ϵ ξ f m 1 i , j m ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 d x b 2 1 i , j m U i j L 2 ( Ω ) 2 + 3 d e 4 1 i , j m W i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ) 1 i , j m W i j L 2 ( Ω ) 2 ,
c γ 2 t + α 1 i , j m W i j L 2 ( Ω ) 2 + d e 1 i , j m W i j L 2 ( Ω ) 2 + p e Γ 1 i , j m W i j 2 d Γ ( 1 γ 0 , ν ) ν 2 1 i , j m W i j L 2 ( Ω ) 2 m ξ e m 1 i , j m ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 + ν 1 2 4 γ 0 , ν ν 2 1 i , j m Φ i j L 2 ( Ω ) 2 .
By adding the above two inequalities, we can deduce that
t + α ϵ c α 2 1 i , j m Φ i j L 2 ( Ω ) 2 + c β 2 1 i , j m U i j L 2 ( Ω ) 2 + c γ 2 1 i , j m W i j L 2 ( Ω ) 2 + ϵ d f 1 i , j m Φ i j L 2 ( Ω ) 2 + p f d f Φ i j L 2 ( Γ ) 2 + d e 4 1 i , j m W i j L 2 ( Ω ) 2 + 4 p e d e W i j L 2 ( Γ ) 2 ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) 1 i , j m Φ i j L 2 ( Ω ) 2 ϵ ξ f 1 i , j m ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 ξ e 1 i , j m ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 b 2 1 i , j m U i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) 1 i , j m W i j L 2 ( Ω ) 2 .
From the Poincaré–Steklov inequality, we can deduce that
t + α 1 i , j m ϵ c α 2 Φ i j L 2 ( Ω ) 2 + c β 2 U i j L 2 ( Ω ) 2 + c γ 2 W i j L 2 ( Ω ) 2 + ϵ d f C P S 2 ( p f d f ) Ω 2 1 i , j m Φ i j L 2 ( Ω ) 2 + d e 4 C P S 2 ( 4 p e d e ) Ω 2 1 i , j m W i j L 2 ( Ω ) 2 ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) 1 i , j m Φ i j L 2 ( Ω ) 2 ϵ ξ f 1 i , j m ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 ξ e 1 i , j m ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) 1 i , j m W i j L 2 ( Ω ) 2 b 2 1 i , j m U i j L 2 ( Ω ) 2
and then,
t + α 1 i , j m ϵ c α 2 Φ i j L 2 ( Ω ) 2 + c β 2 U i j L 2 ( Ω ) 2 + c γ 2 W i j L 2 ( Ω ) 2 ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ϵ d f C P S 2 ( p f d f ) Ω 2 ) 1 i , j m Φ i j L 2 ( Ω ) 2 ϵ ξ f 1 i , j m ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 + ( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 d e C P S 2 ( 4 p e d e ) 4 Ω 2 ) 1 i , j m W i j L 2 ( Ω ) 2 ξ e 1 i , j m ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 b 2 1 i , j m U i j L 2 ( Ω ) 2 .
Since, from (30), there exist appropriate constants i j > 0 and i j > 0 such that
( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 d e C P S 2 ( 4 p e d e ) 4 Ω 2 ) 1 i , j m W i j L 2 ( Ω ) 2 ξ e 1 i , j m ( ϵ i j μ i j ) W i j L 2 ( Ω ) 2 < 1 i , j m i j W i j L 2 ( Ω ) 2
and
( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ϵ d f C P S 2 ( p f d f ) Ω 2 ) 1 i , j m Φ i j L 2 ( Ω ) 2 ϵ ξ f 1 i , j m ( ϵ i j μ i j ) Φ i j L 2 ( Ω ) 2 < 1 i , j m i j Φ i j L 2 ( Ω ) 2 ,
it holds that (for t t )
t + α 1 i , j m ϵ c α 2 Φ i j L 2 ( Ω ) 2 + c β 2 U i j L 2 ( Ω ) 2 + c γ 2 W i j L 2 ( Ω ) 2 + λ 3 1 i , j m ϵ c α 2 Φ i j L 2 ( Ω ) 2 + c β 2 U i j L 2 ( Ω ) 2 + c γ 2 W i j L 2 ( Ω ) 2 < 0 ,
where = 2 c γ min 1 i , j m ( i j ) , = 2 ϵ c α min 1 i , j m ( i j ) and λ 3 = min ( b c β , , ) .
Consequently (from Lemma 6),
1 i , j m ϵ c α 2 Φ i j ( t ) L 2 ( Ω ) 2 + c β 2 U i j ( t ) L 2 ( Ω ) 2 + c γ 2 W i j ( t ) L 2 ( Ω ) 2 X E α , 1 ( λ 3 ( t t ) α ) .
where λ 3 is the Mittag–Leffler synchronization rate and X is given by
X = i , j = 1 m ϵ c α 2 Φ i j ( t ) L 2 ( Ω ) 2 + c β 2 U i j ( t ) L 2 ( Ω ) 2 + c γ 2 W i j ( t ) L 2 ( Ω ) 2 .
Finally, $\sum_{i,j=1}^m\big(\|\Phi_{ij}(t)\|^2_{L^2(\Omega)}+\|U_{ij}(t)\|^2_{L^2(\Omega)}+\|W_{ij}(t)\|^2_{L^2(\Omega)}\big)\to 0$ as $t\to\infty$. This completes the proof. □
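The scalar comparison inequality behind this synchronization estimate (Lemma 6 with $R=0$) can be illustrated numerically: a Grünwald–Letnikov discretization of the Caputo equation $\partial^{\alpha}_{0^+}\omega=-\lambda\omega$ reproduces the Mittag–Leffler decay $\omega(0)E_{\alpha,1}(-\lambda t^{\alpha})$. All numeric values below are illustrative stand-ins for the synchronization rate.

```python
import math

def ml(alpha, z, n_terms=120):
    # Truncated Mittag-Leffler series E_{alpha,1}(z), adequate for moderate |z|.
    return sum(z ** k / math.gamma(k * alpha + 1.0) for k in range(n_terms))

alpha, lam, w0 = 0.8, 2.0, 1.0  # hypothetical order, rate, initial energy
T, N = 1.0, 400
h = T / N

# Grunwald-Letnikov weights: c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1)/k).
c = [1.0]
for k in range(1, N + 1):
    c.append(c[-1] * (1.0 - (alpha + 1.0) / k))

# Implicit scheme for the Caputo derivative of (w - w0):
# h^{-alpha} * sum_k c_k (w_{n-k} - w0) = -lam * w_n, solved for w_n.
w = [w0]
for n in range(1, N + 1):
    hist = sum(c[k] * (w[n - k] - w0) for k in range(1, n))
    w.append((w0 - hist) / (1.0 + lam * h ** alpha))

exact = w0 * ml(alpha, -lam * T ** alpha)
assert abs(w[-1] - exact) < 0.05     # first-order scheme, coarse tolerance
assert 0.0 < w[-1] < w0              # decay, as the synchronization estimate predicts
```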
Corollary 1.
We assume that there exist appropriate positive constants, indexed by $i,j=1,\dots,m$ with $i\ne j$, such that
( K w + K w 4 4 d e 4 + ϵ 3 α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 d e C P S 2 ( 4 p e d e ) 4 Ω 2 ) ξ e ( ϵ i j μ i j ) < i j , ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 1 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ϵ d f C P S 2 ( p f d f ) Ω 2 ) ϵ ξ f ( ϵ i j μ i j ) < i j ,
 where $K_w=\frac{96\,C_\Omega}{b}\Big(\frac{c_2\,\kappa\,\eta_2\,D_\rho}{\alpha_3}\Big)^2$, $\epsilon=\frac{8\,c_2^2}{\alpha_3\,b}$ and $0<\gamma_{0,\nu}<1$ is any positive parameter (to be chosen appropriately). Then, the response system (1) is locally completely (Mittag–Leffler) synchronized in $H_m$ at a uniform Mittag–Leffler rate.

6.3. Master–Slave Synchronization via Pinning Control

The goal of pinning control is to synchronize the whole memristor-based neural network by controlling only a selected subset of its neurons. Without loss of generality, we may assume that the first q (1 < q < m) neurons are pinned. Controlled synchronization refers to the case when the synchronization phenomenon is artificially induced by a suitably designed control law. In order to explore the synchronization behavior via pinning control, we introduce the corresponding slave system to the master system (1) by (for i = 1, …, m)
c α 0 + α ϕ ˜ i d i v ( K f ( x ) ϕ ˜ i ) = F ( x , ϕ ˜ i ) + σ u ˜ i + ξ h m j = 1 m a ˜ i j ( x , t ) H j ( ϕ ˜ j ) + f e x κ ϕ ˜ i Ψ ( w ˜ i ) + ξ f m G i ( ϕ ˜ 1 , , ϕ ˜ m ) + Ξ i in Q c β 0 + α u ˜ i = a b u ˜ i c 2 ϕ ˜ i 2 + c 1 ϕ ˜ i in Q c γ 0 + α w ˜ i d i v ( K e ( x ) w ˜ i ) = ν 1 ϕ ˜ i ν 2 w ˜ i + g e x + ξ e m G i ( w ˜ 1 , , w ˜ m ) in Q
where, for i , j = 1 , m , Ξ i are reasonable controllers and a ˜ i j satisfies the assumption (2). The initial and boundary conditions of (38) are
K f . ϕ ˜ i · n p f m k = 1 , m ( ϕ ˜ k ϕ ˜ i ) = 0 , on Γ , K e . w ˜ i · n p e m k = 1 , m ( w ˜ k w ˜ i ) = 0 , on Γ , ( ϕ ˜ i , u ˜ i , w ˜ i ) ( 0 ) = ( ϕ ˜ 0 i , u ˜ 0 i , w ˜ i 0 ) , on Ω .
Introduce now the symmetric matrix $P = (p_{ij})_{1 \le i,j \le m}$ such that $p_{ij} = 1$ if $i \ne j$ and $p_{ii} = -\sum_{k=1, k \ne i}^{m} p_{ik}$. Then $\sum_{k=1}^{m} p_{ik}(v_k - v_i) = \sum_{k=1}^{m} p_{ik}\, v_k$.
Remark 6.
We can prove easily that (for V = ( v i ) 1 i m )
\[
\sum_{i=1}^{m}\sum_{k=1}^{m} p_{ik}\, v_k v_i = V^{t} P V, \qquad \sum_{i=1}^{m}\sum_{k=1}^{m} G_{ik}\, v_k v_i = V^{t} G V
\]
 and
\[
\sum_{i=1}^{m} v_i^2 = \frac{1}{2m}\left(\sum_{i=1}^{m}\sum_{k=1}^{m} (v_i - v_k)^2 + 2\Big(\sum_{i=1}^{m} v_i\Big)^{2}\right).
\]
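The row-sum property of the matrix P and the identities of Remark 6 can be checked numerically; the following is a minimal sketch (the network size m and the test vector are arbitrary, and numpy is assumed available):

```python
import numpy as np

def pinning_matrix(m):
    """Symmetric matrix P with p_ij = 1 for i != j and
    p_ii = -(m - 1), so that every row of P sums to zero."""
    P = np.ones((m, m)) - np.eye(m)
    np.fill_diagonal(P, -(m - 1.0))
    return P

m = 5
rng = np.random.default_rng(0)
v = rng.standard_normal(m)
P = pinning_matrix(m)

# sum_k p_ik (v_k - v_i) = sum_k p_ik v_k, since each row of P sums to zero
lhs = np.array([sum(P[i, k] * (v[k] - v[i]) for k in range(m)) for i in range(m)])
rhs = P @ v
assert np.allclose(lhs, rhs)

# Remark 6: sum_i v_i^2 = (1/(2m)) * ( sum_{i,k} (v_i - v_k)^2 + 2*(sum_i v_i)^2 )
pairwise = sum((v[i] - v[k]) ** 2 for i in range(m) for k in range(m))
assert np.isclose((v ** 2).sum(), (pairwise + 2.0 * v.sum() ** 2) / (2.0 * m))
```

Both checks pass for any vector, since the pairwise sum expands to $2m\sum_i v_i^2 - 2(\sum_i v_i)^2$.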
Set ϕ i = ϕ ˜ i − ϕ i , u i = u ˜ i − u i , w i = w ˜ i − w i , ϕ 0 i = ϕ ˜ 0 i − ϕ 0 i , u 0 i = u ˜ 0 i − u 0 i and w 0 i = w ˜ 0 i − w 0 i (with a slight abuse of notation, we keep the same symbols for the error functions).
Thus, from (38), (39), (1), (5), and (6), we can deduce that (for i = 1 , m )
c α 0 + α ϕ i d i v ( K f ( x ) ϕ i ) = ( F ( x , ϕ ˜ i ) F ( x , ϕ i ) ) + ξ h m j = 1 m ( a ˜ i j ( x , t ) H j ( ϕ ˜ j ) a i j ( x , t ) H j ( ϕ j ) ) + σ u i κ ( ϕ ˜ i Ψ ( w ˜ i ) ϕ i Ψ ( w i ) ) + ξ f m G i ( ϕ 1 , , ϕ m ) + Ξ i in Q , c β 0 + α u i = b u i c 2 ϕ i ( ϕ ˜ i + ϕ i ) + c 1 ϕ i in Q , c γ 0 + α w i d i v ( K e ( x ) w i ) = ν 1 ϕ i ν 2 w i + ξ e m G i ( w 1 , , w m ) in Q ,
with the initial and boundary conditions of (38) (for i = 1 , m )
K f . ϕ i · n p f m k = 1 , m ( ϕ k ϕ i ) = 0 , on Γ , K e . w i · n p e m k = 1 , m ( w k w i ) = 0 , on Γ , ( ϕ i ( 0 ) , u i ( 0 ) , w i ( 0 ) ) = ( ϕ 0 i , u 0 i , w i 0 ) , on Ω .

6.3.1. Feedback Control

In this section, we assume that the control is a feedback law.
Theorem 5.
Assume that assumptions of Theorem 3 hold. Suppose that the pinning feedback control functions Ξ i (for i = 1 , m ) satisfy
i = 1 , m Ω ϕ i Ξ i d x i = 1 , m ϖ 1 i ϕ i 2 i = 1 , m Ω ϖ 2 i ϕ i d x ,
 where ϖ 1 j (for j = 1 , , q ) and ϖ 2 i (for i = 1 , , m ) are positive constants, and ϖ 1 j 0 , for j = q + 1 , , m . Then, if there exist appropriate constants e f , i > 0 and e w , i > 0 , for i = 1 , m , such that (with Φ = ( ϕ i ) i = 1 , m , W = ( w i ) i = 1 , m )
Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + P ϖ Φ d x + Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x > i = 1 m e f , i ϕ i L 2 ( Ω ) 2 + e w , i w i L 2 ( Ω ) 2 and Ω i = 1 m ( ϖ 2 i j = 1 m ( ϵ ξ h m a d i j M j ) ) ϕ i d x 0 ,
 where 0 < γ 0 , ν < 1 is any positive parameter (to be chosen appropriately) and
P ϖ = diag ( ϖ 11 , . , ϖ 1 m ) , L f = ϵ ξ h m diag ( j = 1 m a s 1 j l j , , j = 1 m a s m j l j ) , D f = ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) I m , D w = ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) I m , I m = diag ( 1 , , 1 ) , ϵ = 8 c 1 2 α 3 b = 0 and K ^ w = C Ω 96 D ρ 2 c 1 2 b α 3 2 κ 2 η 2 2 ,
 the master–slave systems (1) and (42) can achieve, in H m , Mittag–Leffler synchronization via the pinning feedback control functions ( Ξ i ) i = 1 , m .
Proof. 
Multiply (42) by the function ( ϕ i , u i , w i ) and integrate the resulting system over all of Ω to obtain (according to (43) and Lemma 4)
c α 2 t + α i = 1 m ϕ i L 2 ( Ω ) 2 + d f i = 1 m ϕ i L 2 ( Ω ) 2 p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ + i = 1 m Ω Ξ i ϕ i d x + i = 1 m Ω ( F ( x , ϕ ˜ i ) F ( x , ϕ i ) ) ϕ i d x + ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + i = 1 m σ Ω u i ϕ i d x i = 1 m κ Ω ( ϕ ˜ i Ψ ( w ˜ i ) ϕ i Ψ ( w i ) ) ϕ i d x + ξ h m Ω 1 i , j m ( a i j ( x , t ) ( H j ( ϕ ˜ j ) H j ( ϕ j ) ) ϕ i d x + ξ h m Ω 1 i , j m ( a ˜ i j ( x , t ) a i j ( x , t ) ) H j ( ϕ ˜ j ) ϕ i d x , c β 2 t + α i = 1 m u i L 2 ( Ω ) 2 c 2 i = 1 m Ω u i ϕ i d x c 1 i = 1 m Ω u i ϕ i ( ϕ ˜ i + ϕ i ) d x b i = 1 m u i L 2 ( Ω ) 2 , c γ 2 t + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ν 2 i = 1 m w i L 2 ( Ω ) 2 + ξ e m 1 i , j m Ω G i j w i w j d x + ν 1 i = 1 m Ω w i ϕ i d x .
By using a similar argument as in (32), we can deduce that
κ ( ϕ ˜ i Ψ ( w ˜ i ) ϕ i Ψ ( w i ) ) ϕ i κ ( ζ δ + η 1 2 2 η 2 ) ϕ i 2 + κ w i ϕ i ϕ i κ ( η 1 + η 2 ( w i + w ˜ i ) ) ϕ i + ϕ ˜ i 2 w i ϕ i , ( F ( x , ϕ ˜ i ) F ( x , ϕ i ) ) ϕ i ( α 3 3 ( ϕ i 2 + ϕ ˜ i 2 + ϕ i ϕ ˜ i ) + β ) ϕ i 2 ( α 3 6 ( ϕ i 2 + ϕ ˜ i 2 ) + β ) ϕ i 2 , κ ( η 1 + η 2 ( w i + w ˜ i ) ) ϕ i + ϕ ˜ i 2 w i ϕ i 3 α 3 κ 2 η 1 2 w i 2 + α 3 24 ( ϕ i 2 + ϕ ˜ i 2 ) ϕ i 2 + 6 α 3 κ 2 η 2 2 ( w i 2 + w ˜ i 2 ) w i 2 .
Then (according to (48), (3), (2), and assumptions (H1)):
c α 2 t + α i = 1 m ϕ i L 2 ( Ω ) 2 + d f i = 1 m ϕ i L 2 ( Ω ) 2 + p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + σ i = 1 m Ω u i ϕ i d x + i = 1 m ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) ϕ i L 2 ( Ω ) 2 + i = 1 m 3 α 3 κ 2 η 1 2 w i L 2 ( Ω ) 2 + i = 1 m 6 α 3 κ 2 η 2 2 Ω ( w i 2 + w ˜ i 2 ) w i 2 d x i = 1 m α 3 8 Ω ( ϕ i 2 + ϕ ˜ i 2 ) ϕ i 2 d x + ξ h m 1 i , j m a s i j l j ϕ i L 2 ( Ω ) 2 + ξ h m Ω 1 i , j m a d i j M j ϕ i d x + i = 1 m Ω Ξ i ϕ i d x , c β 2 t + α i = 1 m u i L 2 ( Ω ) 2 i = 1 m c 2 Ω u i ϕ i d x + i = 1 m c 1 2 b Ω ( ϕ i 2 + ϕ ˜ i 2 ) ϕ i 2 d x i = 1 m 3 b 4 u i L 2 ( Ω ) 2 , c γ 2 t + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 + p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ξ e m 1 i , j m Ω G i j w i w j d x ( 1 γ 0 , ν ) ν 2 i = 1 m w i L 2 ( Ω ) 2 + ν 1 2 4 γ 0 , ν ν 2 i = 1 m ϕ i L 2 ( Ω ) 2 ,
where 0 < γ 0 , ν < 1 is any positive parameter (to be chosen appropriately).
Combining the first and second inequalities of (49), we obtain
ϵ c α 2 t + α i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 t + α i = 1 m u i L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i L 2 ( Ω ) 2 + ϵ p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ ( ϵ σ + c 2 ) i = 1 m Ω u i ϕ i d x + ϵ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) i = 1 m ϕ i L 2 ( Ω ) 2 3 b 4 i = 1 m u i L 2 ( Ω ) 2 + 3 ϵ α 3 κ 2 η 1 2 i = 1 m w i L 2 ( Ω ) 2 + 6 ϵ α 3 κ 2 η 2 2 i = 1 m Ω ( w i 2 + w ˜ i 2 ) w i 2 d x + ( ϵ α 3 8 + c 1 2 b ) i = 1 m Ω ( ϕ i 2 + ϕ ˜ i 2 ) ϕ i 2 d x + ϵ ξ h m 1 i , j m a s i j l j ϕ i L 2 ( Ω ) 2 + ϵ ξ h m Ω 1 i , j m a d i j M j ϕ i d x + i = 1 m Ω Ξ i ϕ i d x , c γ 2 t + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 + p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ξ e m 1 i , j m Ω G i j w i w j d x ( 1 γ 0 , ν ) ν 2 i = 1 m w i L 2 ( Ω ) 2 + ν 1 2 4 γ 0 , ν ν 2 i = 1 m ϕ i L 2 ( Ω ) 2 .
By taking ϵ such that ϵ α 3 8 + c 1 2 b = 0 and by using similar arguments to derive (34), we can deduce that
ϵ 6 α 3 κ 2 η 2 2 Ω ( w i 2 + w ˜ i 2 ) w i 2 d x 3 d e 4 w i L 2 ( Ω ) 2 + ( K ^ w + K ^ w 4 4 d e 4 ) w i L 2 ( Ω ) 2 ,
with K ^ w = C Ω ϵ 12 α 3 κ 2 η 2 2 D ρ 2 .
Substitute (50) into the previous inequalities
ϵ c α 2 t + α i = 1 m ϕ i L 2 ( Ω ) 2 + c β 2 t + α i = 1 m u i L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i L 2 ( Ω ) 2 + ϵ p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ ϵ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + i = 1 m Ω Ξ i ϕ i d x b 2 i = 1 m u i L 2 ( Ω ) 2 + i = 1 m ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) ϕ i L 2 ( Ω ) 2 + i = 1 m ( ϵ σ + c 2 ) 2 b ϕ i L 2 ( Ω ) 2 + 3 d e 4 i = 1 m w i L 2 ( Ω ) 2 + i = 1 m ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ) w i L 2 ( Ω ) 2 + i = 1 m ( j = 1 m ϵ ξ h m a s i j l j ) ϕ i L 2 ( Ω ) 2 + ϵ ξ h m Ω i = 1 m ( j = 1 m a d i j M j ) ϕ i d x , c γ 2 t + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 + p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ξ e m 1 i , j m Ω G i j w i w j d x ( 1 γ 0 , ν ) ν 2 i = 1 m w i L 2 ( Ω ) 2 + ν 1 2 4 γ 0 , ν ν 2 i = 1 m ϕ i L 2 ( Ω ) 2 .
Adding the above inequality, we can write
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i L 2 ( Ω ) 2 + d e 4 i = 1 m w i L 2 ( Ω ) 2 + ϵ p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ + p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ϵ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + ξ e m 1 i , j m Ω G i j w i w j d x + i = 1 m ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) ϕ i L 2 ( Ω ) 2 + i = 1 m ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) w i L 2 ( Ω ) 2 + i = 1 m ( j = 1 m ϵ ξ h m a s i j l j ) ϕ i L 2 ( Ω ) 2 + ϵ ξ h m Ω i = 1 m ( j = 1 m a d i j M j ) ϕ i d x + i = 1 m Ω Ξ i ϕ i d x .
From (41), we can deduce that
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + ϵ d f m i = 1 m ϕ i L 2 ( Ω ) 2 + d e 2 m i = 1 m w i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 + ϵ d f 2 m 1 i , j m ( ϕ i ϕ j ) L 2 ( Ω ) 2 + ϵ p f 2 m 1 i , j m Γ ( ϕ i ϕ j ) 2 d Γ + d e 4 m 1 i , j m ( w i w j ) L 2 ( Ω ) 2 + p e 2 m 1 i , j m Γ ( w i w j ) 2 d Γ ϵ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + ξ e m 1 i , j m Ω G i j w i w j d x + i = 1 m ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) ϕ i L 2 ( Ω ) 2 + i = 1 m ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) w i L 2 ( Ω ) 2 + i = 1 m ( j = 1 m ϵ ξ h m a s i j l j ) ϕ i L 2 ( Ω ) 2 + ϵ ξ h m Ω i = 1 m ( j = 1 m a d i j M j ) ϕ i d x + i = 1 m Ω Ξ i ϕ i d x .
According to the Poincaré–Steklov inequality, we obtain (using the expression of p i j )
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 + ϵ d f m i = 1 m ϕ i L 2 ( Ω ) 2 + d e 2 m i = 1 m w i L 2 ( Ω ) 2 C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 1 i , j m Ω p i j ϕ i ϕ j d x + C P S 2 min ( d e , 2 p e ) 4 m Ω 2 1 i , j m Ω p i j w i w j d x + ϵ ξ f m 1 i , j m Ω G i j ϕ i ϕ j d x + ξ e m 1 i , j m Ω G i j w i w j d x + i = 1 m ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) ϕ i L 2 ( Ω ) 2 + i = 1 m ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) w i L 2 ( Ω ) 2 + i = 1 m ( j = 1 m ϵ ξ h m a s i j l j ) ϕ i L 2 ( Ω ) 2 + ϵ ξ h m Ω i = 1 m ( j = 1 m a d i j M j ) ϕ i d x + i = 1 m Ω Ξ i ϕ i d x .
Let us put
D f = ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) I m , D w = ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) I m , L f = ϵ ξ h m diag ( j = 1 m a s 1 j l j , , j = 1 m a s m j l j ) .
Then (with Φ = ( ϕ i ) i = 1 , m , W = ( w i ) i = 1 , m )
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f Φ d x Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x + ϵ ξ h m Ω i = 1 m ( j = 1 m a d i j M j ) ϕ i d x + i = 1 m Ω Ξ i ϕ i d x .
From (44), we can deduce (with P ϖ = diag ( ϖ 11 , . , ϖ 1 m ) )
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + P ϖ Φ d x Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x Ω i = 1 m ( ϖ 2 i j = 1 m ( ϵ ξ h m a d i j M j ) ) ϕ i d x .
According to assumption (45), we can deduce that
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + Θ i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 < 0 ,
where Θ = min ( min i = 1 , m ( e f , i ) 2 ϵ c α , b c β , min i = 1 , m ( e w , i ) 2 c γ ) . Then, by Lemma 6, we can deduce that
\[
\sum_{i=1}^{m}\Big(\frac{\epsilon c_\alpha}{2}\|\phi_i\|_{L^2(\Omega)}^2 + \frac{c_\beta}{2}\|u_i\|_{L^2(\Omega)}^2 + \frac{c_\gamma}{2}\|w_i\|_{L^2(\Omega)}^2\Big) \le X\, E_{\alpha}\big(-\Theta\,(t - t_0)^{\alpha}\big),
\]
where $X = \sum_{i=1}^{m}\big(\frac{\epsilon c_\alpha}{2}\|\phi_i(t_0)\|_{L^2(\Omega)}^2 + \frac{c_\beta}{2}\|u_i(t_0)\|_{L^2(\Omega)}^2 + \frac{c_\gamma}{2}\|w_i(t_0)\|_{L^2(\Omega)}^2\big)$. Hence, the error system (42) is globally asymptotically stable, and the system (1) and the response system (38) are globally (Mittag–Leffler) synchronized in H m under the feedback controllers ( Ξ i ) i = 1 , m (at a uniform Mittag–Leffler rate). This completes the proof of the theorem. □
Corollary 2.
Assume that the assumptions of Theorem 3 hold. Suppose that the pinning feedback control functions Ξ i (for i = 1 , m ) satisfy assumption (44). Then, if ϖ 2 i j = 1 m ( ϵ ξ h m a d i j M j ) 0 , for i = 1 , m , and if there exist diagonal matrices N f and N w with strictly positive diagonal terms such that
ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + P ϖ N f , ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w N w are positive definite matrices, where
P ϖ = diag ( ϖ 11 , . , ϖ 1 m ) , D f = ( ϵ ( κ ( ζ δ + η 1 2 2 η 2 ) + β ) + ( c 2 + ϵ σ ) 2 b + ν 1 2 4 γ 0 , ν ν 2 ) I m , L f = ϵ ξ h m diag ( j = 1 m a s 1 j l j , , j = 1 m a s m j l j ) , D w = ( K ^ w + K ^ w 4 4 d e 4 + 3 ϵ α 3 κ 2 η 1 2 ( 1 γ 0 , ν ) ν 2 ) I m ,
 with ϵ = 8 c 1 2 α 3 b , K ^ w = C Ω 96 D ρ 2 c 1 2 b α 3 2 κ 2 η 2 2 and 0 < γ 0 , ν < 1 any positive parameter (to be chosen appropriately), the master–slave systems (1) and (42) can achieve Mittag–Leffler synchronization via the pinning feedback control functions ( Ξ i ) i = 1 , m .
Example 1.
The pinning feedback control functions ( Ξ i ) i = 1 , m can have, for example, the following forms.
 - First form:
Ξ i ( ϕ i ) = κ 1 i ϕ i κ 2 i sgn ( ϕ i ) , i = 1 , 2 , , q , Ξ i ( ϕ i ) = κ 2 i sgn ( ϕ i ) , i = q + 1 , 2 , , m ,
where κ 1 j (for j = 1 , … , q ) and κ 2 i (for i = 1 , … , m ) are arbitrary positive constants, and κ 1 j = 0 , for j = q + 1 , … , m . One can easily verify that Ξ i satisfies condition (44) (for i = 1 , m ).
 - Second form:
Ξ i ( ϕ i ) = κ 1 i ϕ i ( κ 3 i d q ( i = 1 m ϕ i ) 2 + κ 2 i ) sgn ( ϕ i ) , i = 1 , 2 , , q , Ξ i ( ϕ i ) = κ 2 i sgn ( ϕ i ) , i = q + 1 , 2 , , m ,
where d q = 1 i = 1 q ϕ i if i = 1 q ϕ i 0 and d q = 0 otherwise, κ 1 j , κ 3 j (for j = 1 , , q ) and κ 2 i (for i = 1 , , m ) are arbitrary positive constants, and κ 1 j = κ 3 j = 0 , for j = q + 1 , , m .
For this example, we have if i = 1 q ϕ i 0
sgn ( ϕ i ) ( j = 1 m ϕ j ) 2 d q ϕ i = ( j = 1 m ϕ j ) 2 j = 1 q ϕ j ϕ i ϕ i j = 1 q ϕ j
 and then
i = 1 q κ 3 i sgn ( ϕ i ) ( j = 1 m ϕ j ) 2 d q ϕ i = i = 1 q κ 3 i ( j = 1 m ϕ j ) 2 j = 1 q ϕ j ϕ i j = 1 q i = 1 q κ 3 i ϕ i ϕ j i = 1 q κ 3 i ϕ i 2 .
 Consequently, we obtain that
i = 1 , m Ω ϕ i Ξ i d x i = 1 , q ( κ 1 i + κ 3 i ) ϕ i 2 i = 1 , m Ω κ 2 i ϕ i d x .
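Both controller forms reduce, pointwise, to a dissipative feedback: the product $\phi_i\,\Xi_i(\phi_i)$ is bounded above by a negative-definite expression of the type required by condition (44). The following is a minimal sketch of the first form (the network size, the gains, and the sampled error values are illustrative assumptions):

```python
import numpy as np

def xi_first_form(phi, kappa1, kappa2, q):
    """First pinning feedback form: every node gets -kappa2_i * sgn(phi_i);
    the q pinned nodes additionally get the linear feedback -kappa1_i * phi_i."""
    xi = -kappa2 * np.sign(phi)
    xi[:q] = xi[:q] - kappa1[:q] * phi[:q]
    return xi

rng = np.random.default_rng(1)
m, q = 8, 3
phi = rng.standard_normal(m)            # sampled pointwise synchronization errors
kappa1 = np.abs(rng.standard_normal(m)) + 0.1
kappa2 = np.abs(rng.standard_normal(m)) + 0.1

xi = xi_first_form(phi, kappa1, kappa2, q)
# pointwise dissipativity behind condition (44):
# phi_i * Xi_i <= -kappa1_i * phi_i**2 (pinned nodes) - kappa2_i * |phi_i| (all nodes)
bound = -kappa2 * np.abs(phi)
bound[:q] = bound[:q] - kappa1[:q] * phi[:q] ** 2
assert np.all(phi * xi <= bound + 1e-12)
```

Since $\phi_i \operatorname{sgn}(\phi_i) = |\phi_i|$, the bound holds with equality here; integrating it over $\Omega$ gives the integral form of the condition.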

6.3.2. Adaptive Control

Now, we will consider the case when the pinning controllers are defined as
Ξ i ( ϕ i ) = ( κ ˜ 1 i φ 1 i ( x , t ) + ζ ˜ 1 i ) ϕ i κ ˜ 2 i φ 2 i ( x , t ) sgn ( ϕ i ) , i = 1 , 2 , , q , Ξ i ( ϕ i ) = κ ˜ 2 i φ 2 i ( x , t ) sgn ( ϕ i ) , i = q + 1 , 2 , , m ,
where φ 1 i and φ 2 j are solutions of (for i = 1 , q and j = 1 , m )
t + α φ 1 i ( x , t ) = ζ ˜ 1 i κ ˜ 1 i ϕ i 2 ( x , t ) , t + α φ 2 j ( x , t ) = ζ ˜ 2 j κ ˜ 2 j ϕ j ( x , t )
and φ 1 k ( x , t ) = 0 and κ ˜ 1 k = 0 , for k = q + 1 , … , m , where κ ˜ 1 j > 0 , κ ˜ 2 i > 0 , ζ ˜ 2 i > 0 and ζ ˜ 1 i > 0 are positive constants (for i = 1 , m and j = 1 , q ). Finally, we consider the following functions (for i = 1 , m ):
S i ( x , t ) = ( φ 1 i ( x , t ) φ ¯ 1 i ) 2 2 ζ ˜ 1 i + ( φ 2 i ( x , t ) φ ¯ 2 i ) 2 2 ζ ˜ 2 i , S ( t ) = i = 1 m Ω S i ( x , t ) d x ,
where φ ¯ 1 k = 0 , for k = q + 1 , m , φ ¯ 1 k > 0 , for k = 1 , q , and φ ¯ 2 k > 0 , for k = 1 , m .
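The adaptive gains evolve under a fractional (Caputo-type) law driven by the synchronization error. One standard way to simulate such a law is the classical L1 discretization of the Caputo derivative; the sketch below is a pointwise illustration (the step size, the parameters, and the frozen error value are illustrative assumptions, not quantities from the model):

```python
import math

def caputo_l1_step(u_hist, rhs, alpha, dt):
    """One explicit step of the standard L1 scheme for the Caputo derivative:
    D^alpha u(t_n) ~ (1/(Gamma(2-alpha)*dt**alpha)) * sum_k b_{n-1-k}*(u^{k+1}-u^k),
    with weights b_j = (j+1)**(1-alpha) - j**(1-alpha); solves D^alpha u = rhs for u^n."""
    n = len(u_hist)
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n)]
    mem = sum(b[n - 1 - k] * (u_hist[k + 1] - u_hist[k]) for k in range(n - 1))
    return u_hist[-1] - mem + math.gamma(2 - alpha) * dt ** alpha * rhs

# adaptive gain phi1 driven by D^alpha phi1 = zeta * kappa * e**2, with the
# synchronization error e frozen at a constant value for illustration
alpha, dt, zeta, kappa, e = 0.8, 0.01, 1.0, 2.0, 0.5
phi1 = [0.0]
for _ in range(100):
    phi1.append(caputo_l1_step(phi1, zeta * kappa * e ** 2, alpha, dt))
assert all(a <= b for a, b in zip(phi1, phi1[1:]))   # gain is nondecreasing
```

Since the right-hand side is nonnegative, the discrete gain grows monotonically (like $t^\alpha$ for a constant source), mirroring the intended behavior of the adaptive update.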
Theorem 6.
When the assumptions of Theorem 3 are satisfied, the master–slave system can achieve Mittag–Leffler synchronization via pinning feedback controllers (58) if there are always appropriate positive constants e f , i and e w , i (for i = 1 , m ) such that (with Φ = ( ϕ i ) i = 1 , m , W = ( w i ) i = 1 , m )
Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + R φ Φ d x + Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x > i = 1 m e f , i ϕ i L 2 ( Ω ) 2 + e w , i w i L 2 ( Ω ) 2 and Ω i = 1 m ( κ ˜ 2 i φ ¯ 2 i j = 1 m ( ϵ ξ h m a d i j M j ) ) ϕ i d x 0 ,
 where 0 < γ 0 , ν < 1 is any positive parameter (to be chosen appropriately), ϵ = 8 c 1 2 α 3 b , K ^ w = C Ω 96 D ρ 2 c 1 2 b α 3 2 κ 2 η 2 2 , the matrices D f , L f and D w , which depend on the parameter γ 0 , ν are given by (46) and the matrix R φ = diag ( κ ˜ 1 i φ ¯ 1 i , . , κ ˜ 1 m φ ¯ 1 m ) .
Proof. 
According to (58) and (59), we deduce that ∂ t + α S ( t ) ≤ I (from the expression of S ), with
I = i = 1 m Ω ( φ 1 i ( x , t ) φ ¯ 1 i ζ ˜ 1 i ) t + α φ 1 i ( x , t ) + ( φ 2 i ( x , t ) φ ¯ 2 i ζ ˜ 2 i ) t + α φ 2 i ( x , t ) d x = i = 1 q Ω κ ˜ 1 i ( φ 1 i ( x , t ) φ ¯ 1 i ) ϕ i 2 ( x , t ) d x + i = 1 m Ω κ ˜ 2 i ( φ 2 i ( x , t ) φ ¯ 2 i ) ϕ i ( x , t ) d x = i = 1 q Ω κ ˜ 1 i ( φ 1 i ( x , t ) φ ¯ 1 i ) ϕ i 2 ( x , t ) d x + i = 1 m Ω κ ˜ 2 i ( φ 2 i ( x , t ) φ ¯ 2 i ) sgn ( ϕ i ( x , t ) ) ϕ i ( x , t ) d x = i = 1 q Ω κ ˜ 1 i φ ¯ 1 i ϕ i 2 ( x , t ) d x i = 1 m Ω κ ˜ 2 i φ ¯ 2 i ϕ i ( x , t ) d x i = 1 m Ω Ξ i ( ϕ i ) ϕ i ( x , t ) d x .
Consequently, we derive (according to (55))
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + S + b 2 i = 1 m u i L 2 ( Ω ) 2 Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f Φ d x Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x Ω i = 1 m ( κ ˜ 2 i φ ¯ 2 i j = 1 m ( ϵ ξ h m M j a d i j ) ) ϕ i d x i = 1 q Ω κ ˜ 1 i φ ¯ 1 i ϕ i 2 d x
and then
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + S + b 2 i = 1 m u i L 2 ( Ω ) 2 Ω Φ t ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + R φ Φ d x Ω W t ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w W d x Ω i = 1 m ( κ ˜ 2 i φ ¯ 2 i j = 1 m ( ϵ ξ h m M j a d i j ) ) ϕ i d x ,
with R φ = diag ( κ ˜ 1 i φ ¯ 1 i , . , κ ˜ 1 m φ ¯ 1 m ) . According to assumption (61), we can deduce that
t + α i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 + S + Θ i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 0 ,
where Θ = min ( min i = 1 , m ( e f , i ) 2 ϵ c α , b c β , min i = 1 , m ( e w , i ) 2 c γ ) .
Then, by Lemma 4.3 of [33], there exists a T S > t such that (for t T S )
i = 1 m ϵ c α 2 ϕ i L 2 ( Ω ) 2 + c β 2 u i L 2 ( Ω ) 2 + c γ 2 w i L 2 ( Ω ) 2 X + S ( t ) + h E α ( Θ ( t t ) α ) ,
where X = i = 1 m ϵ c α 2 ϕ i ( t ) L 2 ( Ω ) 2 + c β 2 u i ( t ) L 2 ( Ω ) 2 + c γ 2 w i ( t ) L 2 ( Ω ) 2 and h is any positive constant. Consequently, lim t i = 1 m ϕ i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 = 0 .
Hence, the error system (42) is globally asymptotically stable and the system (1) and the response system (38) are Mittag–Leffler synchronized under the feedback controllers ( Ξ i ) i = 1 , m . This completes the proof of the theorem. □
We conclude this analysis with the following corollary of Theorem 6.
Corollary 3.
When the assumptions of Theorem 3 are satisfied, the master–slave system can achieve Mittag–Leffler synchronization via pinning feedback controllers (58) if κ ˜ 2 i φ ¯ 2 i j = 1 m ( ϵ ξ h m a d i j M j ) 0 (for all i = 1 , m ) and if there exist diagonal matrices N f and N w with strictly positive diagonal terms such that ξ e m G C P S 2 min ( d e , 2 p e ) 4 m Ω 2 P D w N w and ϵ ξ f m G C P S 2 ϵ min ( d f , p f ) 2 m Ω 2 P D f L f + R φ N f are positive definite matrices.

7. Conclusions

Recent studies in neuroscience have shown the need to develop and implement reliable mathematical models in order to understand and effectively analyze the various neurological activities and disorders in the human brain. In this article, we mainly investigate the long-time behavior of the proposed fractional-order complex memristive neural network model in asymmetrically coupled networks, and the Mittag–Leffler synchronization problems for such coupled dynamical systems when different types of interactions are simultaneously present. A new mathematical brain connectivity model, taking into account the memory characteristics of neurons and their past history, the heterogeneity of brain tissue, and the local anisotropy of cell diffusion, is developed. This model is a set of coupled nonlinear Caputo fractional reaction–diffusion equations, in the shape of a fractional-order differential equation coupled with a set of time fractional-order partial differential equations, interacting via an asymmetric complex network. The existence and uniqueness of the weak solution, as well as a regularity result, are established under assumptions on the nonlinear terms. The existence of absorbing sets for this model is established, and the dissipative dynamics of the model (with absorbing sets) are then shown. Finally, some synchronization problems are investigated, and several synchronization criteria are derived to guarantee Mittag–Leffler synchronization for such complex dynamical networks. Precisely, sufficient conditions are obtained first for the complete synchronization problem and then for the master–slave synchronization problem via appropriate pinning feedback controllers and adaptive controllers.
The analysis developed in this work can be further applied to extended impulsive models by using the approach developed in [16]. Moreover, it would be interesting to extend this study to synchronization problems for fractional-order coupled dynamical networks in the presence of disturbances and multiple time-varying delays. To predict and act on undesirable phenomena and behavior (as opposed to a synchronous state) occurring in brain network dynamics, the established synchronization results can be applied to study the robustness of uncertain fractional-order neural network models by considering the approach developed in [61].
The future objective is to simulate the developed theoretical results and validate them numerically; these studies will be the subject of a forthcoming paper. Given the stochastic nature of synapses (and hence the presence of noise), it would also be interesting to investigate the stochastic processes in the brain's neural network and their impact on the synchronization of network dynamics.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Proof of Theorem 1.
To establish the existence of a weak solution to system (1), we proceed as in [39] by applying the Faedo–Galerkin method.
Let ( ϖ k ) k 1 be a Hilbert basis of V that is orthogonal in L 2 . For all N ∈ ℕ , we denote by W N = s p a n ( ϖ 1 , , ϖ N ) the space generated by ( ϖ k ) 1 ≤ k ≤ N , and we introduce the orthogonal projector L N onto the space W N . For each N, we define the approximate solution ( ϕ i N , w i N , u i N ) i = 1 , m of problem (1). Setting
ϕ i N ( · , t ) = k = 1 N ϕ i N , k ( t ) ϖ k , w i N ( · , t ) = k = 1 N w i N , k ( t ) ϖ k , u i N ( · , t ) = k = 1 N u i N , k ( t ) ϖ k ,
where ( ϕ i N , k , w i N , k , u i N , k ) i = 1 , m ; k = 1 , N are unknown functions, and replacing ( ϕ i , w i , u i ) i = 1 , m by ( ϕ i N , w i N , u i N ) i = 1 , m in (1), we obtain, for l = 1 , … , N and a . e . t ( 0 , T ) , the following system of Galerkin equations ( i = 1 , m ):
Ω c α 0 + α ϕ i N ϖ l d x = A f ( ϕ i N , ϖ l ) + Ω ( F ( . , ϕ i N ) + σ u i N ) ϖ l d x + Ω f e x ϖ l d x + ξ e m j = 1 m Ω a i j ( . , t ) H j ( ϕ j N ) ϖ l d x κ Ω ϕ i Ψ ( w i N ) ϖ l d x + p f m i , j = 1 m Γ ( ϕ i N ϕ j N ) ϖ l d Γ + ξ f m Ω G i ( ϕ 1 N , , ϕ m N ) ϖ l d x , Ω c β 0 + α u i N ϖ l d x = Ω ( a b u i N c 2 ( ϕ i N ) 2 + c 1 ϕ i N ) ϖ l d x , Ω c γ 0 + α w i N ϖ l d x = A e ( w i N , ϖ l ) + Ω ( ν 1 ϕ i N ν 2 w i N ) ϖ l d x + Ω g e x ϖ l d x + p e m i , j = 1 m Γ ( w i N w j N ) ϖ l d Γ + ξ e m Ω G i ( w 1 , , w m ) ϖ l d x , with the initial condition ( ϕ i N , w i N , u i N ) ( t = 0 ) = ( L N ϕ 0 i , L N w 0 i , L N u 0 i ) ,
where ( L N ϕ 0 i , L N w 0 i , L N u 0 i ) satisfies (by construction)
( L N ϕ i 0 , L N w i 0 , L N u i 0 ) ⟶ ( ϕ 0 i , w 0 i , u 0 i ) strongly in V 2 × L 3 ( Ω ) , as N → ∞ .
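The projector L N above is the L 2 -orthogonal projection onto W N , and the strong convergence L N v → v can be illustrated with a concrete basis. The following is a minimal sketch (the sine basis on (0, 1) and the test datum are illustrative assumptions, not the abstract basis ( ϖ k ) of the proof):

```python
import numpy as np

def l2_projection(f, N, x):
    """Orthogonal projection L_N f onto W_N = span(w_1, ..., w_N), here taken as
    the L2(0,1)-orthonormal sine basis w_k(x) = sqrt(2)*sin(k*pi*x), with the
    coefficients (f, w_k) computed by trapezoidal quadrature on the grid x."""
    dx = x[1] - x[0]
    proj = np.zeros_like(x)
    for k in range(1, N + 1):
        w_k = np.sqrt(2.0) * np.sin(k * np.pi * x)
        g = f(x) * w_k
        c_k = np.sum((g[:-1] + g[1:]) * 0.5) * dx   # c_k = (f, w_k)_{L2}
        proj += c_k * w_k
    return proj

x = np.linspace(0.0, 1.0, 2001)
f = lambda t: t * (1.0 - t)                          # smooth datum, zero on the boundary
errs = [np.max(np.abs(f(x) - l2_projection(f, N, x))) for N in (1, 3, 9)]
assert errs[0] > errs[1] > errs[2]                   # L_N f -> f as N grows
```

The uniform error decreases as N grows, which is the finite-dimensional approximation property underlying the Galerkin construction.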
  • Step 1. We first show that, for every N, the system (A1) admits a local solution. The system (A1) is equivalent to an initial value problem for a system of nonlinear fractional differential equations for the functions ( ϕ i N , l , w i N , l , u i N , l ) i = 1 , m ; l = 1 , N , in which the nonlinear term is a Carathéodory function. The existence of a local absolutely continuous solution on an interval [ 0 , T ˜ ] , with T ˜ ∈ ] 0 , T ] , is ensured by standard FODE theory (see, e.g., [69,70]). Thus, we have a local solution of (A1) on [ 0 , T ˜ ] .
  • Step 2. We next derive a priori estimates for the functions ( ϕ i N , w i N , u i N ) i = 1 , m , which entail that T ˜ = T by applying Step 1 iteratively. For simplicity, in the next step, we omit the “ ˜ ” on T. Now, we set
    h N ( · , t ) = k = 1 N ϑ N , k ( t ) ϖ k , v N ( · , t ) = k = 1 N ζ N , k ( t ) ϖ k , e N ( · , t ) = k = 1 N π N , k ( t ) ϖ k ,
    where ϑ N , k , π N , k and ζ N , k are absolutely continuous coefficients.
Then, from (A1), the approximation solution satisfies the following weak formulation:
Ω c α 0 + α ϕ i N h N d x = A f ( ϕ i N , h N ) + Ω ( F ( . , ϕ i N ) + σ u i N ) h N d x + Ω f e x h N d x + ξ h m j = 1 m Ω a i j ( . , t ) H j ( ϕ j N ) h N d x κ Ω ϕ i Ψ ( w i N ) h N d x + p f m i , j = 1 m Γ ( ϕ i N ϕ j N ) h N d Γ + ξ f m Ω G i ( ϕ 1 N , , ϕ m N ) h N d x , Ω c β 0 + α u i N e N d x = Ω ( a b u i N c 2 ( ϕ i N ) 2 + c 1 ϕ i N ) e N d x , Ω c γ 0 + α w i N v N d x = A e ( w i N , v N ) + Ω ( ν 1 ϕ i N ν 2 w i N ) v N d x + Ω g e x v N d x + p e m i , j = 1 m Γ ( w i N w j N ) v N d Γ + ξ e m Ω G i ( w 1 , , w m ) v N d x , with the initial condition ( ϕ i N , w i N , u i N ) ( t = 0 ) = ( L N ϕ i 0 , L N w i 0 , L N u i 0 ) .
Take ( h N , v N , e N ) = ( ϕ i N , w i N , u i N ) and add these equations for i = 1 , m .
We obtain, according to Lemma 4 and assumption ( H 1 ) ,
c α 2 0 + α i = 1 m ϕ i N L 2 ( Ω ) 2 + d f i = 1 m ϕ i N L 2 ( Ω ) 2 + p f 2 m i , j = 1 m Γ ( ϕ i N ϕ j N ) 2 d Γ i = 1 m Ω F ( x , ϕ i N ) ϕ i N d x + ξ f m i = 1 m Ω δ i i ϕ i N 2 d x + i = 1 m Ω f e x ϕ i N d x ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i N ϕ j N ) 2 d x + σ i = 1 m Ω u i N ϕ i N d x κ i = 1 m Ω ϕ i N 2 Ψ ( w i N ) d x + ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j N ) ϕ i N d x , c β 2 0 + α i = 1 m u i N L 2 ( Ω ) 2 a i = 1 m Ω u i N d x b i = 1 m u i N L 2 ( Ω ) 2 + c 1 i = 1 m Ω ϕ i u i N d x c 2 i = 1 m Ω ϕ i N 2 u i d x , c γ 2 0 + α i = 1 m w i N L 2 ( Ω ) 2 + d e i = 1 m w i N L 2 ( Ω ) 2 + p e 2 m i , j = 1 m Γ ( w i N w j N ) 2 d Γ ν 2 i = 1 m w i N L 2 ( Ω ) 2 + ξ e m i = 1 m Ω δ i i w i N 2 d x + i = 1 m Ω g e x w i N d x ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i N w j N ) 2 d x + ν 1 i = 1 m Ω w i N ϕ i N d x .
From Lemma 3 and assumption ( H 2 ) , we can deduce that
κ Ω ϕ i N 2 ( δ + η 1 w i N + η 2 ( w i N ) 2 + ζ tanh ( w i N ) ) d x θ w ϕ i N L 2 ( Ω ) 2 , ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j N ) ϕ i d x ξ h m i , j = 1 m Ω a s i j ϕ j N l j ϕ i N d x ξ h 2 m i = 1 m ( j = 1 m ( a s i j + l i 2 a s j i ) ) ϕ i N L 2 ( Ω ) 2 , Ω F ( x , ϕ i N ) ϕ i N d x + σ Ω u i N ϕ i N d x + Ω f e x ϕ i N d x λ 2 Ω ϕ i N 4 d x + υ 2 u i N L 2 ( Ω ) 2 + 1 2 λ f e x L 2 ( Ω ) 2 + Ω ( ρ 0 L 2 ( Ω ) + 1 2 λ ( σ 2 2 υ + λ 2 ) 2 ) , c 1 Ω ϕ i N u i N d x c 2 Ω ϕ i N 2 u i N d x + a Ω u i N d x b u i N L 2 ( Ω ) 2 5 b 8 u i N L 2 ( Ω ) 2 + 2 c 1 2 b ϕ i N L 2 ( Ω ) 2 + 2 a 2 Ω b + 2 c 2 2 b Ω ϕ i N 4 d x ,
where θ 0 = η 1 2 η 2 , θ w = κ ( η 2 θ 0 2 + ζ δ ) and υ > 0 to be chosen appropriately.
Consequently (from the assumption (13)),
ϵ c α 2 0 + α i = 1 m ϕ i N L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i N L 2 ( Ω ) 2 + ϵ λ 2 i = 1 m Ω ϕ i N 4 d x + ϵ p f 2 m i , j = 1 m Γ ( ϕ i N ϕ j N ) 2 d Γ + ϵ ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i N ϕ j N ) 2 d x ϵ i = 1 m ( θ w + λ 2 + ξ f m δ i i + ξ h 2 m j = 1 m ( a s i j + l i 2 a s j i ) ) ϕ i N L 2 ( Ω ) 2 + ϵ m ( 1 2 λ ( σ 2 2 υ + λ 2 ) 2 Ω + 1 2 λ f e x L 2 ( Ω ) 2 + Ω ρ 0 L 2 ( Ω ) ) + ϵ υ 2 i = 1 m u i N L 2 ( Ω ) 2 , c β 2 0 + α i = 1 m u i N L 2 ( Ω ) 2 5 b 8 i = 1 m u i N L 2 ( Ω ) 2 + 2 c 1 2 b i = 1 m ϕ i N L 2 ( Ω ) 2 + 2 c 2 2 b i = 1 m Ω ϕ i N 4 d x + 2 m a 2 Ω b
and
c γ 2 0 + α i = 1 m w i N L 2 ( Ω ) 2 + d e i = 1 m w i N L 2 ( Ω ) 2 + ν ˜ 2 2 i = 1 m w i N L 2 ( Ω ) 2 + p e 2 m i , j = 1 m Γ ( w i N w j N ) 2 d Γ + ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i N w j N ) 2 d x ν 1 2 ν ˜ 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + m ν ˜ 2 g e x L 2 ( Ω ) 2 ,
with the constant ϵ > 0 chosen appropriately and ν ˜ 2 such that $\tilde{\nu}_2 \le \nu_2 - \frac{\xi_e}{m}\max_{i=1,m}\delta_{ii}$ (by hypothesis).
Choosing ϵ and υ such that $\lambda_0 = \frac{\epsilon \lambda}{4} - \frac{2 c_2^2}{b} > 0$ and $\upsilon_0 = \frac{5b}{8} - \frac{\epsilon \upsilon}{2} > 0$, we can deduce, by summing the previous three inequalities:
0 + α ϵ c α 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + c β 2 i = 1 m u i N L 2 ( Ω ) 2 + c γ 2 i = 1 m w i N L 2 ( Ω ) 2 + υ 0 i = 1 m u i N L 2 ( Ω ) 2 + ν 2 2 i = 1 m w i N L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i N L 2 ( Ω ) 2 + d e i = 1 m w i N L 2 ( Ω ) 2 + ϵ λ 4 i = 1 m Ω ϕ i N 4 d x + ϵ p f 2 m i , j = 1 m Γ ( ϕ i N ϕ j N ) 2 d Γ + ϵ ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i N ϕ j N ) 2 d x + p e 2 m i , j = 1 m Γ ( w i N w j N ) 2 d Γ + ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i N w j N ) 2 d x i = 1 m Ω ( λ 0 ϕ i N 4 + θ i ϕ i N 2 ) d x + G 0 ,
where G 0 = ϵ ( 1 2 λ f e x L ( 0 , ; L 2 ( Ω ) ) 2 + Ω ( ρ 0 L 2 ( Ω ) + 1 2 λ ( σ 2 2 υ + λ 2 ) 2 ) ) + m ν ˜ 2 g e x L ( 0 , ; L 2 ( Ω ) ) 2 and θ i = ϵ ( θ w + 1 2 λ + ξ f m δ i i + ξ h 2 m j = 1 m ( a s i j + l i 2 a s j i ) ) + ν 1 2 ν ˜ 2 + 2 c 1 2 b . Because v I R , we have
\[
-\lambda_0 v^4 + \theta_i v^2 = -\lambda_0\Big(v^2 - \frac{\theta_i}{\lambda_0}\Big)^{2} - \theta_i v^2 + \frac{\theta_i^2}{\lambda_0} \le -\theta_i v^2 + \frac{\theta_i^2}{\lambda_0}\ \text{ if } \theta_i > 0, \qquad -\lambda_0 v^4 + \theta_i v^2 \le \theta_i v^2\ \text{ if } \theta_i < 0,
\]
we can deduce that
\[
-\lambda_0 v^4 + \theta_i v^2 \le -\theta_i v^2 + \big(1 + \operatorname{sgn}(\theta_i)\big)\frac{\theta_i^2}{2\lambda_0}
\]
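This elementary quartic-absorption bound, $-\lambda_0 v^4 + \theta_i v^2 \le -\theta_i v^2 + (1+\operatorname{sgn}(\theta_i))\,\theta_i^2/(2\lambda_0)$, can be checked numerically for either sign of $\theta_i$; the following is a minimal sketch (the parameter values and the sampling grid are arbitrary):

```python
import numpy as np

lam0 = 0.7                                  # plays the role of lambda_0 (arbitrary > 0)
v = np.linspace(-10.0, 10.0, 4001)
for th in (1.3, -0.9):                      # theta_i of either sign
    lhs = -lam0 * v ** 4 + th * v ** 2
    rhs = -th * v ** 2 + (1.0 + np.sign(th)) * th ** 2 / (2.0 * lam0)
    # for th > 0, equality is attained at v**2 = th / lam0; for th < 0, the
    # left-hand side is nonpositive while the right-hand side is nonnegative
    assert np.all(lhs <= rhs + 1e-9)
```

The bound is what turns the sign-indefinite quadratic term into a dissipative contribution on the left-hand side of (A10).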
and then we have (from (A7))
0 + α ϵ c α 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + c β 2 i = 1 m u i N L 2 ( Ω ) 2 + c γ 2 i = 1 m w i N L 2 ( Ω ) 2 + λ 1 ϵ c α 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + c β 2 i = 1 m u i N L 2 ( Ω ) 2 + c γ 2 i = 1 m w i N L 2 ( Ω ) 2 + ϵ d f i = 1 m ϕ i N L 2 ( Ω ) 2 + d e i = 1 m w i N L 2 ( Ω ) 2 + ϵ λ 4 i = 1 m Ω ϕ i N 4 d x + ϵ p f 2 m i , j = 1 m Γ ( ϕ i N ϕ j N ) 2 d Γ + ϵ ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i N ϕ j N ) 2 d x + p e 2 m i , j = 1 m Γ ( w i N w j N ) 2 d Γ + ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i N w j N ) 2 d x G 1 ,
where G 1 = G 0 + Ω i = 1 m ( 1 + sgn ( θ i ) ) θ i 2 2 λ 0 , λ 1 = min ( 2 θ ˜ ϵ c α , ν 2 c γ , υ 0 c β ) and θ ˜ = min i = 1 , m ( θ i ) .
In particular, we have
0 + α ϵ c α 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + c β 2 i = 1 m u i N L 2 ( Ω ) 2 + c γ 2 i = 1 m w i N L 2 ( Ω ) 2 + λ 1 ϵ c α 2 i = 1 m ϕ i N L 2 ( Ω ) 2 + c β 2 i = 1 m u i N L 2 ( Ω ) 2 + c γ 2 i = 1 m w i N L 2 ( Ω ) 2 G 1 .
From Lemma 6 and the uniform boundedness of ( ϕ i N , u i N , w i N ) i = 1 , m ( t = 0 ) (from (A2)), we can deduce that (for all t ( 0 , T ) )
Ψ N ( t ) = i = 1 m ϕ i N L 2 ( Ω ) 2 + i = 1 m u i N L 2 ( Ω ) 2 + i = 1 m w i N L 2 ( Ω ) 2 ( t ) C
and then Ψ N ( t ) is uniformly bounded with respect to N. This ensures that, for i = 1 , m ,
the sequences ( ϕ i N , u i N , w i N ) are bounded sets of L ( 0 , T ; L 2 ( Ω ) ) .
Moreover, from the inequality of (A9) and relations (A32) and (A2), we have, for $t \in (0,T)$ (since $\int_0^t (t-\tau)^{\alpha-1}\,d\tau = \frac{t^{\alpha}}{\alpha} \le \frac{T^{\alpha}}{\alpha}$),
Γ ( α ) Ψ N ( t ) + i = 1 m 0 t ( t τ ) α 1 ( ϵ d f ϕ i N L 2 ( Ω ) 2 + d e w i N L 2 ( Ω ) 2 + ϵ λ 4 Ω ϕ i N 4 d x ) d τ C 1 .
We can deduce that
\[
\begin{aligned}
I^{\alpha}_{0^+}\big\|\nabla\phi_i^N\big\|_{L^2(\Omega)}^2(t) &= \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}\big\|\nabla\phi_i^N\big\|_{L^2(\Omega)}^2\,d\tau \le C_2,\\
I^{\alpha}_{0^+}\big\|\nabla w_i^N\big\|_{L^2(\Omega)}^2(t) &= \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}\big\|\nabla w_i^N\big\|_{L^2(\Omega)}^2\,d\tau \le C_2,\\
I^{\alpha}_{0^+}\big\|\phi_i^N\big\|_{L^4(\Omega)}^4(t) &= \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}\big\|\phi_i^N\big\|_{L^4(\Omega)}^4\,d\tau \le C_2.
\end{aligned}
\]
Hence (since $\alpha - 1 \le 0$ and $t - \tau \in (0,T)$ for $\tau \in (0,t)$, so that $(t-\tau)^{\alpha-1} \ge T^{\alpha-1}$),
the sequence ( w i N ) is in a bounded set of L 2 ( 0 , T ; H 1 ( Ω ) ) and the sequence ( ϕ i N ) is in a bounded set of L 4 ( 0 , T , L 4 ( Ω ) ) L 2 ( 0 , T ; H 1 ( Ω ) ) .
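The fractional integral bounds above involve the weakly singular kernel $(t-\tau)^{\alpha-1}$; numerically, the substitution $s = (t-\tau)^{\alpha}$ removes the singularity. The following is a minimal sketch (the quadrature resolution is an ad hoc choice), checked against the closed form $I^{\alpha}_{0^+} 1 = t^{\alpha}/\Gamma(\alpha+1)$:

```python
import math
import numpy as np

def rl_integral(f, t, alpha, n=20001):
    """Riemann-Liouville fractional integral
       I^alpha_{0+} f(t) = (1/Gamma(alpha)) * int_0^t (t - tau)**(alpha-1) f(tau) dtau.
    The substitution s = (t - tau)**alpha removes the kernel singularity:
       I^alpha f(t) = (1/(alpha*Gamma(alpha))) * int_0^{t**alpha} f(t - s**(1/alpha)) ds,
    which is then evaluated by a plain trapezoidal rule."""
    s = np.linspace(0.0, t ** alpha, n)
    vals = f(t - s ** (1.0 / alpha))
    ds = s[1] - s[0]
    return np.sum((vals[:-1] + vals[1:]) * 0.5) * ds / (alpha * math.gamma(alpha))

# sanity check against the closed form I^alpha 1 = t**alpha / Gamma(alpha + 1)
alpha, t = 0.8, 2.0
exact = t ** alpha / math.gamma(alpha + 1.0)
assert abs(rl_integral(lambda x: np.ones_like(x), t, alpha) - exact) < 1e-8
```

The identity $\alpha\,\Gamma(\alpha) = \Gamma(\alpha+1)$ makes the constant-integrand case exact up to rounding, which is why it serves as a reliable sanity check.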
Now we estimate the fractional derivative 0 + α ( ϕ i N , w i N , u i N ) .
Taking h N = 0 + α ϕ i N , v N = 0 + α w i N and e N = 0 + β u i N in (A3) and using uniform coercivity of forms A f and A e , we can deduce
c α i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + d f 2 i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + p f 2 m i , j = 1 m Γ ( ϕ i N ϕ j N ) 0 + α ( ϕ i N ϕ j N ) d Γ i = 1 m Ω F ( x , ϕ i N ) 0 + α ϕ i N d x + σ i = 1 m Ω u i N 0 + α ϕ i N d x + ξ f m i , j = 1 m Ω G i j ϕ j N 0 + α ϕ i N d x + i = 1 m Ω f e x 0 + α ϕ i N d x κ i = 1 m Ω ϕ i N Ψ ( w i N ) 0 + α ϕ i N d x + ξ h m i , j = 1 m Ω a i j ( x , t ) H j ( ϕ j N ) 0 + α ϕ i N d x , c β i = 1 m 0 + α u i N L 2 ( Ω ) 2 + b i = 1 m Ω u i N 0 + α u i N d x c 1 i = 1 m Ω ϕ i N 0 + α u i N d x c 2 i = 1 m Ω ϕ i N 2 0 + α u i N d x + a i = 1 m Ω 0 + α u i N d x , c γ i = 1 m 0 + α w i N L 2 ( Ω ) 2 + d e 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 + p e 2 m i , j = 1 m Γ ( w i N w j N ) 0 + α ( w i N w j N ) d Γ + ν 2 i = 1 m Ω w i N 0 + α w i N d x ξ e m i , j = 1 m Ω G i j w j N 0 + α w i N d x + ν 1 i = 1 m Ω ϕ i N 0 + α w i N d x + i = 1 m Ω g e x 0 + α w i N d x .
According to assumptions (H1)-(H2) and the boundedness of function tanh, we obtain from (A16) that
c α i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + d f 2 i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + p f 4 m i , j = 1 m 0 + α ( ϕ i N ϕ j N ) L 2 ( Γ ) 2 i = 1 m Ω F 1 ( x , ϕ i N ) 0 + α ϕ i N d x C 3 i = 1 m Ω ρ 3 + f e x + ϕ i N + u i N 0 + α ϕ i N d x + C 4 i = 1 m Ω ϕ i N 2 + ϕ i N w i N 2 + ϕ i N w i N 0 + α ϕ i N d x + ξ f m i , j = 1 m Ω G i j ϕ j N 0 + α ϕ i N d x , c β i = 1 m 0 + α u i N L 2 ( Ω ) 2 + b 2 i = 1 m Ω 0 + α u i N L 2 ( Ω ) 2 d x C 5 i = 1 m Ω ( 1 + ϕ i N 2 + ϕ i N ) 0 + α u i N d x , c γ i = 1 m 0 + α w i N L 2 ( Ω ) 2 + d e 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 + p e 2 m i , j = 1 m 0 + α ( w i N w j N ) L 2 ( Γ ) 2 + ν 2 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 ξ e m i , j = 1 m Ω G i j w j N 0 + α w i N d x + C 6 i = 1 m Ω ( ϕ i N + g e x ) 0 + α w i N d x .
Then, from (A12)–(A15), the regularity of ( ρ 3 , f e x , g e x ) , and the continuous embedding of H 1 in L 6 , we can derive (according to Young and Hölder inequalities)
c α 2 i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + d f 2 i = 1 m 0 + α ϕ i N L 2 ( Ω ) 2 + p f 4 m i , j = 1 m 0 + α ( ϕ i N ϕ j N ) L 2 ( Γ ) 2 i = 1 m Ω F 1 ( x , ϕ i N ) 0 + α ϕ i N d x C 7 ( 1 + ϕ i N L 4 ( Ω ) 4 + ϕ i N L 4 ( Ω ) 2 w i N L 4 ( Ω ) 2 + ϕ i N L 6 ( Ω ) 2 w i N L 6 ( Ω ) 4 ) ,
c β 2 i = 1 m 0 + α u i N L 2 ( Ω ) 2 + b 2 i = 1 m Ω 0 + α u i N L 2 ( Ω ) 2 d x C 8 ( 1 + ϕ i N L 4 ( Ω ) 4 ) , c γ 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 + d e 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 + p e 2 m i , j = 1 m 0 + α ( w i N w j N ) L 2 ( Γ ) 2 + ν 2 2 i = 1 m 0 + α w i N L 2 ( Ω ) 2 C 9 .
Moreover, since F 1 is an increasing function, then the primitive E F of F 1 is convex. Hence, from [68], we can deduce (since F 1 is independent on time)
0 + α ( Ω E F ( . , ϕ i N ) d x ) Ω F 1 ( . , ϕ i N ) 0 + α ϕ i N d x .
Now, we estimate the following term I 0 + α 0 + α ( Ω E F ( . , ϕ i N ) d x ) ( t ) = Ω E F ( . , ϕ i N ) ( t ) d x Ω E F ( . , ϕ i N ) ( 0 ) d x .
From ( H 2 ) we have E F ( . , ϕ i N ) C ( ϕ i N 2 + 1 ) and E F ( . , ϕ i N ( 0 ) ) C ( ρ 4 + ϕ i N ( 0 ) 4 ) and then (according to the regularity of ρ 4 and (A12))
I 0 + α 0 + α Ω E F ( . , ϕ i N ) d x ( t ) C 10 ( 1 + ϕ i N ( 0 ) H 1 ( Ω ) 4 ) .
Consequently,
c α 2 i = 1 m I 0 + α 0 + α ϕ i N L 2 ( Ω ) 2 ( t ) + d f 2 i = 1 m ϕ i N ( t ) L 2 ( Ω ) 2 C 11 i = 1 m ( 1 + ϕ i N ( 0 ) H 1 ( Ω ) 2 + ϕ i N ( 0 ) H 1 ( Ω ) 4 ) + C 12 I 0 + α ϕ i N L 6 ( Ω ) 2 w i N L 6 ( Ω ) 4 ( t ) + C 13 I 0 + α ϕ i N L 4 ( Ω ) 4 ( t ) + I 0 + α ϕ i N L 4 ( Ω ) 2 w i N L 4 ( Ω ) 2 ( t ) , c β 2 i = 1 m I 0 + α 0 + α u i N L 2 ( Ω ) 2 ( t ) C 14 ( 1 + u i N ( 0 ) L 2 ( Ω ) 2 + I 0 + α ϕ i N L 4 ( Ω ) 4 ( t ) ) , c γ 2 i = 1 m I 0 + α 0 + α w i N L 2 ( Ω ) 2 ( t ) + d e 2 i = 1 m w i N ( t ) L 2 ( Ω ) 2 C 15 ( 1 + w i N ( 0 ) H 1 ( Ω ) 2 ) .
According to (A12)–(A15), the continuous embedding of H 1 in L 6 , and the uniform boundedness of quantities ϕ i N ( 0 ) , w i N ( 0 ) , and w i N ( 0 ) in H 1 , we obtain from the third equation of (A19) that the sequence ( w i N ) is uniformly bounded in L ( 0 , T ; H 1 ( Ω ) ) and we can derive the following estimate (for all t ( 0 , T ) ):
c α i = 1 m I 0 + α 0 + α ϕ i N L 2 ( Ω ) 2 ( t ) + d f i = 1 m ϕ i N ( t ) L 2 ( Ω ) 2 C c β i = 1 m I 0 + α 0 + α u i N L 2 ( Ω ) 2 ( t ) C c γ i = 1 m I 0 + α 0 + α w i N L 2 ( Ω ) 2 ( t ) + d e i = 1 m w i N L 2 ( Ω ) 2 C .
Prove now that u i N is uniformly bounded in L ( 0 , T , L 3 ( Ω ) ) .
Since u i N satisfies u i N ( x , t ) = u i N ( x , 0 ) 1 c β Γ ( α ) 0 t ( t τ ) α 1 ( a b u i N c 2 ( ϕ i N ) 2 + c 1 ϕ i N ) ( x , τ ) d τ , then, we have
u i N ( x , t ) C 16 u i N ( x , 0 ) + 0 t ( t τ ) α 1 d τ + 0 t ( t τ ) α 1 u i N ( x , τ ) d τ + C 17 0 t ( t τ ) α 1 ϕ i N 2 ( x , τ ) d τ + 0 t ( t τ ) α 1 ϕ i N ( x , τ ) d τ .
This implies (since 0 t ( t τ ) α 1 d τ T α / α )
Ω u i N ( x , t ) 3 d x 1 / 3 C 18 1 + u i N ( x , 0 ) L 3 ( Ω ) + C 19 Ω 0 t ( t τ ) α 1 u i N ( x , τ ) d τ 3 d x 1 / 3 + C 20 Ω 0 t ( t τ ) α 1 ϕ i N ( x , τ ) d τ 3 d x 1 / 3 + C 21 Ω 0 t ( t τ ) α 1 ϕ i N ( x , s ) 2 d s 3 d x 1 / 3
and then (using Minkowski inequality and the continuous embedding of H 1 in L 6 )
u i N ( t ) L 3 ( Ω ) C 22 1 + u i N ( 0 ) L 3 ( Ω ) + 0 t ( t τ ) α 1 u i N ( τ ) L 3 ( Ω ) d τ + 0 t ( t τ ) α 1 ϕ i N ( τ ) L 3 ( Ω ) d τ + 0 t ( t τ ) α 1 ϕ i N ( τ ) H 1 ( Ω ) 2 d τ .
Using (A12), (A14) and uniform boundedness of quantities u i N ( 0 ) in L 3 (from (A2)), we obtain u i N ( t ) L 3 ( Ω ) C 23 1 + I 0 + α u i N L 3 ( Ω ) ( t ) and then (from Lemma 5)
u i N ( t ) L 3 ( Ω ) C .
This estimate enables us to say that
u i N is uniformly bounded in the space L ( 0 , T , L 3 ( Ω ) ) .
In order to prove that the local solution can be extended to the whole interval ( 0 ; T ) , we use the following process. We suppose that a solution of (A1) on [ 0 , T k ] has already been defined and we shall derive the local solution on [ T k , T k + 1 ] (where 0 < T k + 1 T k is small enough) by making use of the a priori estimates and fractional derivative T k + α with beginning point T k . So, by this iterative process, we can deduce that Faedo–Galerkin solutions are well defined on the interval ( 0 , T ) . So, we omit details.
  • Step 3. We can now show the existence of weak solutions to (1). From results (A12), (A15), (A21) and (A20), Theorem A1 and compactness argument, it follows that there exist ( ϕ i , w i ; u i ) and ( ϕ ˜ i , w ˜ i , u ˜ i ) such that there exists a subsequence of ( ϕ i N , w i N ; u i N ) also denoted by ( ϕ i N , w i N ; u i N ) , such that
    ( ϕ i N , w i N , u i N ) ( ϕ i , w i , u i ) weakly in L ( 0 , T ; V × V × L 3 ( Ω ) ) , ( ϕ i N , w i N , u i N ) ( ϕ i , w i , u i ) strongly in ( L 2 ( Q ) ) 3 , 0 + α ( ϕ i N , w i N ; u i N ) ( ϕ ˜ i , w ˜ i , u ˜ i ) weakly in ( L 2 ( Q ) ) 3 .
    First, we show that ( 0 + α ϕ i N , 0 + α w i N , 0 + α u i N ) exists in the weak sense and that ( 0 + α ϕ i , 0 + α w i , 0 + α u i ) = ( ϕ ˜ i , w ˜ i , u ˜ i ) . Indeed, we take ω C 0 ( 0 , T ) and v V (then ω v D = L 4 ( Q ) L 2 ( 0 , T ; V ) ).
Then, by the weak convergence and Lebesgue’s dominated convergence arguments, we have
ϕ ˜ i , ω v D , D = lim N 0 T Ω ω ( t ) 0 + α ϕ i N ( x , t ) v ( x ) d x d t = lim N Ω 0 T ω ( t ) 0 + α ϕ i N ( x , t ) d t v ( x ) d x = lim N 0 T D T α ω ( t ) Ω ϕ i N ( x , t ) v ( x ) d x d t I 0 + 1 α ω ( 0 + ) Ω ϕ i N ( x , 0 ) v ( x ) d x = 0 T D T α ω ( t ) Ω ϕ i ( x , t ) v ( x ) d x d t I 0 + 1 α ω ( 0 + ) Ω ϕ i ( x , 0 ) v ( x ) d x = 0 T Ω H ω ( t ) 0 + α ϕ i ( x , t ) v ( x ) d x d t .
Consequently, ϕ ˜ i = 0 + α ϕ i in the weak sense. In the same way, we prove, in the weak sense, that ( 0 + α w i , 0 + α u i ) = ( w ˜ i , u ˜ i ) .
Consider D ( ] 0 , T [ ) and ( h N , v N , e N ) W N 3 . According to (A1), we can deduce that
c α 0 T ( t ) 0 + α ϕ i N , h N V , V d t = 0 T ( t ) A f ( ϕ i N , h N ) d t + 0 T ( t ) Ω ( F ( . , ϕ i N ) + σ u i N ) h N d x d t + ξ h m j = 1 m 0 T ( t ) Ω a i j ( . , t ) H j ( ϕ j N ) h N d x d t + 0 T ( t ) Ω f e x h N d x d t κ 0 T ( t ) Ω ϕ i Ψ ( w i N ) h N d x d t + p f m j = 1 m 0 T ( t ) Γ ( ϕ i N ϕ j N ) h N d Γ + ξ f m 0 T ( t ) Ω G i ( ϕ 1 N , , ϕ m N ) h N d x d t , c β 0 T ( t ) Ω 0 + α u i N e N d x d t = 0 T ( t ) Ω ( a b u i N c 1 ( ϕ i N ) 2 + c 2 ϕ i N ) e N d x d t , c γ 0 T ( t ) 0 + α w i N , v N V , V d t = 0 T ( t ) A e ( w i N , v N ) d t + 0 T ( t ) Ω ( ν 1 ϕ i N ν 2 w i N ) v N d x d t + 0 T ( t ) Ω g e x v N d x d t + p e m j = 1 m 0 T ( t ) Γ ( w i N w j N ) v N d Γ + ξ e m 0 T ( t ) Ω G i ( w 1 , , w m ) v N d x d t .
According to (A14), (A23) and to density properties of the space spanned by ( ϖ l ) , and using similar arguments as used to obtain relation (A24), passing to the limit when N goes to infinity is easy for the linear terms. For passing to the limit ( N ) in nonlinear terms, which requires hypothesis ( H 2 ) , we can use the standard technique, which consists of taking the difference between the sequence and its limit in the form of the sum of two quantities such that the first uses the strong convergence property and the second uses the weak convergence property. So we omit details.
Thus, the limiting function ( ϕ i , w i , u i ) satisfies the system (for any elements h , e and v of V )
c α 0 + α ϕ i , h V , V = A f ( ϕ i , h ) + Ω ( F ( . , ϕ i ) + σ u i ) h d x + ξ h m j = 1 m Ω a i j ( . , t ) H j ( ϕ j ) h d x + Ω f e x h d x κ Ω ϕ i Ψ ( w i ) h d x + p f m j = 1 m Γ ( ϕ i ϕ j ) h d Γ + ξ f m Ω G i ( ϕ 1 , , ϕ m ) h d x , c β Ω 0 + α u i e d x = Ω ( a b u i c 2 ( ϕ i ) 2 + c 1 ϕ i ) e d x , c γ 0 + α w i , v V , V = A e ( w i , v ) + Ω ( ν 1 ϕ i ν 2 w i ) v d x + Ω g e x v d x + p e m j = 1 m Γ ( w i w j ) v d Γ + ξ e m Ω G i ( w 1 , , w m ) v d x .
In the case of α > 1 / 2 , the continuity of solution at t = 0 and the equalities ϕ i ( 0 + ) = ϕ i 0 , w i ( 0 + ) = w i 0 and u i ( 0 + ) = u i 0 is a consequence of Lemma A2. This completes the proof of the existence result. For the uniqueness result, let ( ϕ i 0 , w i 0 , u 0 i , f ex , g ex ) be given such that ( ϕ i 0 , w i 0 , u 0 i ) V 2 × L 3 ( Ω ) and ( f ex , g ex ) L ( 0 , T ; L 2 ( Ω ) ) , for i = 1 , m . Let ( ϕ i ( k ) , w i ( k ) , u i ( k ) ) i = 1 , m be the weak solutions to (1) (for k = 1 , 2 ), which corresponds to data ( ϕ i 0 , w i 0 , u i 0 ) i = 1 , m , f ex and g ex . According to Lemma 4 and assumption ( H 1 ) , we can deduce (for i = 1 , m ) for ( ϕ i , w i , u i ) = ( ϕ i ( 1 ) ϕ i ( 2 ) , w i ( 1 ) w i ( 2 ) , u i ( 1 ) u i ( 2 ) ) the following relation:
c α 2 0 + α ϕ i L 2 ( Ω ) 2 + d f ϕ i L 2 ( Ω ) 2 Ω ( F ( . , ϕ i ( 1 ) ) F ( . , ϕ i ( 2 ) ) ) ϕ i d x + σ Ω u i ϕ i d x + ξ h m j = 1 m Ω a i j ( . , t ) ( H j ( ϕ j ( 1 ) ) H j ( ϕ j ( 2 ) ) ) ϕ i d x κ Ω ( ϕ i ( 1 ) Ψ ( w i ( 1 ) ) ϕ i ( 2 ) Ψ ( w i ( 2 ) ) ) ϕ i d x + p f m j = 1 m Γ ( ϕ i ϕ j ) ϕ i d Γ + ξ f m Ω G i ( ϕ 1 , , ϕ m ) ϕ i d x , c β 2 0 + α u i L 2 ( Ω ) 2 + b u i L 2 ( Ω ) 2 Ω ( c 1 ϕ i c 2 ϕ i ( ϕ i ( 1 ) + ϕ i ( 2 ) ) ) u i d x , c γ 2 0 + α w i L 2 ( Ω ) 2 + d e w i L 2 ( Ω ) 2 + ν 2 w i L 2 ( Ω ) 2 ν 1 Ω ϕ i w i d x + p e m j = 1 m Γ ( w i w j ) w i d Γ + ξ e m Ω G i ( w 1 , , w m ) w i d x , ( ϕ i , w i , u i ) ( t = 0 ) = ( 0 , 0 , 0 ) .
Because for any y I R , tanh ( y ) 1 and sech 2 ( y ) 1 , we can deduce according to assumption (H1)–(H2) (since w i ( k ) and ϕ i ( k ) , for k = 1 , 2 , are in L ( 0 , T , H 1 ( Ω ) ) )
Ω ( F ( x , ϕ i ( 1 ) ) F ( x , ϕ i ( 2 ) ) ) ϕ i d x = Ω ( 0 1 F s ( x , ϕ i ( 2 ) + s ϕ i ) d s ) ϕ i 2 d x Ω ( α 3 3 ( ( ϕ i ( 1 ) ) 2 + ( ϕ i ( 2 ) ) 2 + ϕ i ( 1 ) ϕ i ( 2 ) ) + β ) ϕ i 2 d x β ϕ i L 2 ( Ω ) 2 , κ Ω ( ϕ i ( 1 ) Ψ ( w i ( 1 ) ) ϕ i ( 2 ) Ψ ( w i ( 2 ) ) ) ϕ i d x = κ Ω δ + η 2 ( w i ( 1 ) + η 1 2 η 2 ) 2 η 1 2 4 η 2 + ζ tanh ( w i ( 1 ) ) ϕ i 2 d x κ Ω ( 0 1 sech 2 ( w i ( 2 ) + s w i ) d s ) w i ϕ i ( 2 ) ϕ i d x κ Ω ( η 1 + η 2 ( w i ( 1 ) + w i ( 2 ) ) ) ϕ i ( 2 ) w i ϕ i d x C 1 ϕ i L 2 ( Ω ) 2 + C 2 ϕ i L 2 ( Ω ) w i L 4 ( Ω ) + C 3 ϕ i L 4 ( Ω ) w i L 4 ( Ω ) .
Then, by summing for all 1 i m in (A26), we can deduce that (using the Minkowski inequality, continuous embedding of H 1 in L 4 , and boundedness of w i ( k ) and ϕ i ( k ) in L ( 0 , T , H 1 ( Ω ) ) )
c α 2 0 + α i = 1 m ϕ i L 2 ( Ω ) 2 + d f 2 i = 1 m ϕ i L 2 ( Ω ) 2 + p f 2 m i , j = 1 m Γ ( ϕ i ϕ j ) 2 d Γ + ξ f 2 m i , j = 1 ; j i m Ω G i j ( ϕ i ϕ j ) 2 d x C 4 i = 1 m ( ϕ i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 ) + C 5 i = 1 m w i L 2 ( Ω ) 2 , c β 2 0 + α i = 1 m u i L 2 ( Ω ) 2 + b 2 i = 1 m u i L 2 ( Ω ) 2 C 6 i = 1 m ϕ i L 2 ( Ω ) 2 + C 7 i = 1 m ϕ i L 2 ( Ω ) 2 , c γ 2 0 + α i = 1 m w i L 2 ( Ω ) 2 + d e i = 1 m w i L 2 ( Ω ) 2 + ν ˜ 2 2 i = 1 m w i L 2 ( Ω ) 2 + p e 2 m i , j = 1 m Γ ( w i w j ) 2 d Γ + ξ e 2 m i , j = 1 ; j i m Ω G i j ( w i w j ) 2 d x C 8 i = 1 m ϕ i L 2 ( Ω ) 2 .
Consequently,
0 + α c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + ϵ 1 c γ 2 i = 1 m w i L 2 ( Ω ) 2 + ϵ 2 c β 2 i = 1 m u i L 2 ( Ω ) 2 + d f 4 i = 1 m ϕ i L 2 ( Ω ) 2 + ϵ 1 d e 2 i = 1 m w i L 2 ( Ω ) 2 C c α 2 i = 1 m ϕ i L 2 ( Ω ) 2 + ϵ 1 c γ 2 i = 1 m w i L 2 ( Ω ) 2 + ϵ 2 c β 2 i = 1 m u i L 2 ( Ω ) 2 ,
where ϵ 1 = 2 c 5 d e and ϵ 2 = d f 4 c 8 . So
i = 1 m ( ϕ i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 ) C i = 1 m I 0 + α ϕ i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 .
By Lemma 5, we can deduce i = 1 m ( ϕ i L 2 ( Ω ) 2 + w i L 2 ( Ω ) 2 + u i L 2 ( Ω ) 2 ) = 0 (since ( ϕ i , w i , u i ) ( t = 0 ) = ( 0 , 0 , 0 ) , for i = 1 , m ). This completes the proof. □

Appendix B

The purpose of what follows is to recall some basic definitions and results of fractional integrals and derivatives in the Riemann–Liouville sense and Caputo sense. We start from a formal level and, for a given Banach space X, we consider a sufficiently smooth function with values in X, f : t [ a , + ) f ( t ) (with < a < b + ). Each fractional-order parameter γ is assumed to be in ] 0 , 1 ] .
Definition A1.
For each fractional-order parameter γ, the forward and backward γth-order Riemann–Liouville fractional integrals are defined, respectively, by ( t [ a , + ) and b > a )
I a + γ f ( t ) = 1 Γ ( γ ) a t ( t τ ) γ 1 f ( τ ) d τ , I b γ f ( t ) = 1 Γ ( γ ) t b ( τ t ) γ 1 f ( τ ) d τ ,
 where Γ ( z ) = 0 e τ τ z 1 d τ is the Euler Γ-function.
For each fractional-order parameters γ 1 and γ 2 , the following equality for the fractional integral
I a + γ 1 I a + γ 2 f = I a + γ 1 + γ 2 f
holds for an L q -function f ( 1 q ).
Definition A2.
For each fractional-order parameter γ, the γth-order Riemann–Liouville and γth-order Caputo fractional derivatives on [ a , + ) are defined, respectively, by ( t [ a , + ) )
D a + γ f ( t ) = d d t ( I a + 1 γ f ( t ) ) = 1 Γ ( 1 γ ) d d t ( a t ( t τ ) γ f ( τ ) d τ ) , a + γ f ( t ) = I a + 1 γ d f ( t ) d t = 1 Γ ( 1 γ ) a t ( t τ ) γ d f d t ( τ ) d τ .
From (A30), we can derive the following relation
f ( t ) = f ( a ) + I a + γ a + γ f ( t ) = f ( a ) + 1 Γ ( γ ) a t ( t τ ) γ 1 a + γ f ( τ ) d τ .
Definition A3.
For each fractional-order parameter γ, the backward γth-order Riemann–Liouville and backward γth-order Caputo fractional derivatives, on [ a , + ) are defined, respectively, by ( t [ a , + ) and b > a )
D b γ f ( t ) = d d t ( I b 1 γ f ( t ) ) = 1 Γ ( 1 γ ) d d t ( t b ( τ t ) γ f ( τ ) d τ ) , b γ f ( t ) = I b 1 γ d f ( t ) d t = 1 Γ ( 1 γ ) t b ( τ t ) γ d f d t ( τ ) d τ .
Remark A1.
  • For γ 1 , the forward (respectively, backward) γth-order Riemann–Liouville and Caputo fractional derivatives of f converge to the classical derivative d f d t (respectively, to d f d t ). Moreover, the γth-order Riemann–Liouville fractional derivative of constant function t C ( t ) = k (with k a constant) is not 0, since D a + γ C ( t ) = k Γ ( 1 γ ) d d t ( a t ( t s ) γ d s ) = k ( t a ) γ Γ ( 1 γ ) .
  • We can show that the difference between Riemann–Liouville and Caputo fractional derivatives depends only on the values of f on endpoint. More precisely, for f C 1 ( [ a , + ) , X ) , we have ( t [ a , + ) and b > a )
    D a + γ f ( t ) = a + γ f ( t ) + f ( a ) ( t a ) γ Γ ( 1 γ ) , D b γ f ( t ) = b γ f ( t ) + f ( b ) ( b t ) γ Γ ( 1 γ ) .
From [71], we have the following Lemma.
Lemma A1
(Continuity properties of fractional integral in L p spaces on ( a , b ) ). The fractional integral I a + γ is a continuous operator from:
 (i)
L p ( a , b ) into L p ( a , b ) , for any p 1 ;
 (ii)
L p ( a , b ) into L r ( a , b ) , for any p ( 1 , 1 / γ ) and r [ 1 , p / ( 1 γ p ) ] ;
 (iii)
L p ( a , b ) into C 0 , γ 1 / p ( [ a , b ] ) , for any p ] 1 / γ , + [ ;
 (iii)
L 1 / γ ( a , b ) into L p ( a , b ) , for any p [ 1 , + ) ;
 (iv)
L ( a , b ) into C 0 , γ ( [ a , b ] ) .
From Lemma A1 and (A32), we can deduce the following corollary.
Lemma A2.
Let X be a Banach space and γ ] 0 , 1 ] . Suppose the Caputo derivative a + γ f L p ( a , b ; X ) and p > 1 γ , then f C 0 , γ 1 / p ( [ a , b ] ; X ) .
We also recall the fractional integration by parts in the formulas (see, e.g., [64,72]):
Lemma A3.
Let 0 < γ 1 and p , q 1 with 1 / p + 1 / q 1 + γ . Then
 (i)
if f is an L p -function on ( a , b ) with values in X and g is an L q -function on ( a , b ) with values in X, then
a b ( f ( τ ) , I a + γ g ( τ ) ) X d τ = a b ( g ( τ ) , I b γ f ( τ ) ) X d τ ,
 (ii)
if f I b γ ( L p ) and g I a + γ ( L q ) , then a b ( f ( τ ) , D a + γ g ( τ ) ) X d τ = a b ( g ( τ ) , D b γ f ( τ ) ) X d τ .
Lemma A4.
Let 0 < γ 1 , g be an L p -function on ( a , b ) with values in X (for p 1 ) and f be an absolutely continuous function on [ a , b ] with values in X. Then,
 (i)
a b ( a + γ f ( τ ) , g ( τ ) ) X d τ = a b ( D b γ g ( τ ) , f ( τ ) ) X d τ + | ( I b 1 γ g ( τ ) , f ( τ ) ) X | a b ,
 (ii)
a b ( D a + γ f ( τ ) , g ( τ ) ) X d τ = a b ( D b γ g ( τ ) , f ( τ ) ) X d τ + ( I b 1 γ g ( b ) , f ( b ) ) X ,
 (iii)
a b ( D b γ f ( τ ) , g ( τ ) ) X d τ = a b ( D a + γ g ( τ ) , f ( τ ) ) X d τ ( I a + 1 γ g ( a + ) , f ( a ) ) X .
We end this appendix by giving a compactness theorem in Hilbert spaces. Assume that Y 0 , Y 1 , and Y are Hilbert spaces with
Y 0 Y Y 1 being continuous and Y 0 Y is compact .
We define the Hilbert space W γ ( I R ; Y 0 , Y 1 ) , for a given γ > 0 , by
W γ ( I R ; Y 0 , Y 1 ) = { v L 2 ( I R , Y 0 ) | γ f L 2 ( I R , Y 1 ) } ,
endowed with the norm v W γ = v L 2 I R , Y 0 + τ γ v ^ L 2 I R , Y 1 1 / 2 . For any subset K of I R , we define the subspace W K γ of W γ by
W K γ ( I R ; Y 0 , Y 1 ) = { v W γ ( I R ; Y 0 , Y 1 ) | support of v K } .
As, e.g., in [39], we have the following compactness result.
Theorem A1.
Let  Y 0 , Y 1 , and Y  be Hilbert spaces with the injection (A35). Then, for any bounded set K and any  γ > 0 , the injection of  W K γ ( I R ; Y 0 , Y 1 )  into  L 2 ( I R ; Y )  is compact.

References

  1. Sporns, O.; Zwi, J.D. The small world of the cerebral cortex. Neuroinformatics 2004, 2, 145–162. [Google Scholar] [CrossRef] [PubMed]
  2. Barrat, A.; Barthelemy, M.; Vespignani, A. Dynamical Processes on Complex Networks; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  3. Belmiloudi, A. Mathematical modeling and optimal control problems in brain tumor targeted drug delivery strategies. Int. J. Biomath. 2017, 10, 1750056. [Google Scholar] [CrossRef]
  4. Venkadesh, S.; Horn, J.D.V. Integrative Models of Brain Structure and Dynamics: Concepts, Challenges, and Methods. Front. Neurosci. 2021, 15, 752332. [Google Scholar] [CrossRef] [PubMed]
  5. Craddock, R.C.; Jbabdi, S.; Yan, C.G.; Vogelstein, J.T.; Castellanos, F.X.; Di Martino, A.; Kelly, C.; Heberlein, K.; Colcombe, S.; Milham, M.P. Imaging human connectomes at the macroscale. Nat. Methods 2013, 10, 524–539. [Google Scholar] [CrossRef] [PubMed]
  6. Deco, G.; McIntosh, A.R.; Shen, K.; Hutchison, R.M.; Menon, R.S.; Everling, S.; Hagmann, P.; Jirsa, V.K. Identification of optimal structural connectivity using functional connectivity and neural modeling. J. Neurosci. 2014, 34, 7910–7916. [Google Scholar] [CrossRef] [PubMed]
  7. Damascelli, M.; Woodward, T.S.; Sanford, N.; Zahid, H.B.; Lim, R.; Scott, A.; Kramer, J.K. Multiple functional brain networks related to pain perception revealed by fMRI. Neuroinformatics 2022, 20, 155–172. [Google Scholar] [CrossRef]
  8. Hipp, J.F.; Engel, A.K.; Siegel, M. Oscillatory synchronization in large-scale cortical networks predicts perception. Neuron 2011, 69, 387–396. [Google Scholar] [CrossRef] [PubMed]
  9. Varela, F.; Lachaux, J.P.; Rodriguez, E.; Martinerie, J. The brainweb: Phase synchronization and large-scale integration. Nat. Rev. Neurosci. 2001, 2, 229–239. [Google Scholar] [CrossRef] [PubMed]
  10. Brookes, M.J.; Woolrich, M.; Luckhoo, H.; Price, D.; Hale, J.R.; Stephenson, M.C.; Barnes, G.R.; Smith, S.M.; Morris, P.G. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proc. Natl. Acad. Sci. USA 2011, 108, 16783–16788. [Google Scholar] [CrossRef]
  11. Das, S.; Maharatna, K. Fractional dynamical model for the generation of ECG like signals from filtered coupled Van-der Pol oscillators. Comput. Methods Programs Biomed. 2013, 122, 490–507. [Google Scholar] [CrossRef]
  12. Schirner, M.; Kong, X.; Yeo, B.T.; Deco, G.; Ritter, P. Dynamic primitives of brain network interaction. NeuroImage 2022, 250, 118928. [Google Scholar] [CrossRef] [PubMed]
  13. Axmacher, N.; Mormann, F.; Fernández, G.; Elger, C.E.; Fell, J. Memory formation by neuronal synchronization. Brain Res. Rev. 2006, 52, 170–182. [Google Scholar] [CrossRef] [PubMed]
  14. Breakspear, M. Dynamic models of large-scale brain activity. Nat. Neurosci. 2017, 20, 340–352. [Google Scholar] [CrossRef] [PubMed]
  15. Lundstrom, B.N.; Higgs, M.H.; Spain, W.J.; Fairhall, A.L. Fractional differentiation by neocortical pyramidal neurons. Nat. Neurosci. 2008, 11, 1335–1342. [Google Scholar] [CrossRef] [PubMed]
  16. Belmiloudi, A. Dynamical behavior of nonlinear impulsive abstract partial differential equations on networks with multiple time-varying delays and mixed boundary conditions involving time-varying delays. J. Dyn. Control Syst. 2015, 21, 95–146. [Google Scholar] [CrossRef]
  17. Gilding, B.H.; Kersner, R. Travelling Waves in Nonlinear Diffusion-Convection Reaction; Birkhäuser: Basel, Switzerland, 2012. [Google Scholar]
  18. Kondo, S.; Miura, T. Reaction-diffusion model as a framework for understanding biological pattern formation. Science 2010, 329, 1616–1620. [Google Scholar] [CrossRef] [PubMed]
  19. Babiloni, C.; Lizio, R.; Marzano, N.; Capotosto, P.; Soricelli, A.; Triggiani, A.I.; Cordone, S.; Gesualdo, L.; Del Percio, C. Brain neural synchronization and functional coupling in Alzheimer’s disease as revealed by resting state EEG rhythms. Int. J. Psychophysiol. 2016, 103, 88–102. [Google Scholar] [CrossRef] [PubMed]
  20. Lehnertz, K.; Bialonski, S.; Horstmann, M.T.; Krug, D.; Rothkegel, A.; Staniek, M.; Wagner, T. Synchronization phenomena in human epileptic brain networks. J. Neurosci. Methods 2009, 183, 42–48. [Google Scholar] [CrossRef]
  21. Schnitzler, A.; Gross, J. Normal and pathological oscillatory communication in the brain. Nat. Rev. Neurosci. 2005, 6, 285–296. [Google Scholar] [CrossRef]
  22. Touboul, J.D.; Piette, C.; Venance, L.; Ermentrout, G.B. Noise-induced synchronization and antiresonance in interacting excitable systems: Applications to deep brain stimulation in Parkinson’s disease. Phys. Rev. X 2020, 10, 011073. [Google Scholar] [CrossRef]
  23. Uhlhaas, P.J.; Singer, W. Neural synchrony in brain disorders: Relevance for cognitive dysfunctions and pathophysiology. Neuron 2006, 52, 155–168. [Google Scholar] [CrossRef] [PubMed]
  24. Bestmann, S. (Ed.) Computational Neurostimulation; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  25. Chua, L.O. Memristor-the missing circuit element. IEEE Trans. Circuit Theory 1971, 18, 507–519. [Google Scholar] [CrossRef]
  26. Lin, H.; Wang, C.; Tan, Y. Hidden extreme multistability with hyperchaos and transient chaos in a Hopfield neural network affected by electromagnetic radiation. Nonlinear Dyn. 2020, 99, 2369–2386. [Google Scholar] [CrossRef]
  27. Njitacke, Z.T.; Kengne, J.; Fotsin, H.B. A plethora of behaviors in a memristor based Hopfield neural networks (HNNs). Int. J. Dyn. Control 2019, 7, 36–52. [Google Scholar] [CrossRef]
  28. Farnood, M.B.; Shouraki, S.B. Memristor-based circuits for performing basic arithmetic operations. Procedia Comput. Sci. 2011, 3, 128–132. [Google Scholar]
  29. Jo, S.H.; Chang, T.; Ebong, I.; Bhadviya, B.B.; Mazumder, P.; Lu, W. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 2010, 10, 1297–1301. [Google Scholar] [CrossRef]
  30. Snider, G.S. Cortical computing with memristive nanodevices. SciDAC Rev. 2008, 10, 58–65. [Google Scholar]
  31. Anbalagan, P.; Ramachandran, R.; Alzabut, J.; Hincal, E.; Niezabitowski, M. Improved results on finite-time passivity and synchronization problem for fractional-order memristor-based competitive neural networks: Interval matrix approach. Fractal Fract. 2022, 6, 36. [Google Scholar] [CrossRef]
  32. Bao, G.; Zeng, Z.G.; Shen, Y.J. Region stability analysis and tracking control of memristive recurrent neural network. Neural Netw. 2018, 98, 51–58. [Google Scholar] [CrossRef]
  33. Chen, J.; Chen, B.; Zeng, Z. O(t)-synchronization and Mittag-Leffler synchronization for the fractional-order memristive neural networks with delays and discontinuous neuron activations. Neural Netw. 2018, 100, 10–24. [Google Scholar] [CrossRef]
  34. Rakkiyappan, R.; Sivaranjani, K.; Velmurugan, G. Passivity and passification of memristor-based complex-valued recurrent neural networks with interval time-varying delays. Neurocomputing 2014, 144, 391–407. [Google Scholar] [CrossRef]
  35. Takembo, C.N.; Mvogo, A.; Ekobena Fouda, H.P.; Kofané, T.C. Effect of electromagnetic radiation on the dynamics of spatiotemporal patterns in memristor-based neuronal network. Nonlinear Dyn. 2018, 95, 1067–1078. [Google Scholar] [CrossRef]
  36. Tu, Z.; Wang, D.; Yang, X.; Cao, J. Lagrange stability of memristive quaternion-valued neural networks with neutral items. Neurocomputing 2020, 399, 380–389. [Google Scholar] [CrossRef]
  37. Zhu, S.; Bao, H. Event-triggered synchronization of coupled memristive neural networks. Appl. Math. Comput. 2022, 415, 126715. [Google Scholar] [CrossRef]
  38. Baleanu, D.; Lopes, A.M. (Eds.) Applications in engineering, life and social sciences. In Handbook of Fractional Calculus with Applications; De Gruyter: Berlin, Germany, 2019. [Google Scholar]
  39. Belmiloudi, A. Cardiac memory phenomenon, time-fractional order nonlinear system and bidomain-torso type model in electrocardiology. AIMS Math. 2021, 6, 821–867. [Google Scholar] [CrossRef]
  40. Hilfer, R. Applications of Fractional Calculus in Physics; World Scientific: Singapore, 2000. [Google Scholar]
  41. Maheswari, M.L.; Shri, K.S.; Sajid, M. Analysis on existence of system of coupled multifractional nonlinear hybrid differential equations with coupled boundary conditions. AIMS Math. 2024, 9, 13642–13658. [Google Scholar] [CrossRef]
  42. Magin, R.L. Fractional calculus models of complex dynamics in biological tissues. Comput. Math. Appl. 2010, 59, 1586–1593. [Google Scholar] [CrossRef]
  43. West, B.J.; Turalska, M.; Grigolini, P. Networks of Echoes: Imitation, Innovation and Invisible Leaders; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  44. Caputo, M. Linear models of dissipation whose Q is almost frequency independent. II. Fract.Calc. Appl. Anal. 2008, 11, 414, Reprinted from Geophys. J. R. Astr. Soc. 1967, 13, 529–539.. [Google Scholar] [CrossRef]
  45. Ermentrout, G.B.; Terman, D.H. Mathematical Foundations of Neuroscience; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  46. Izhikevich, E.M. Dynamical Systems in Neuroscience; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  47. Osipov, G.V.; Kurths, J.; Zhou, C. Synchronization in Oscillatory Networks; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  48. Wu, C.W. Synchronization in Complex Networks of Nonlinear Dynamical Systems; World Scientific: Singapore, 2007. [Google Scholar]
  49. Ambrosio, B.; Aziz-Alaoui, M.; Phan, V.L. Large time behaviour and synchronization of complex networks of reaction–diffusion systems of FitzHugh-Nagumo type. IMA J. Appl. Math. 2019, 84, 416–443. [Google Scholar] [CrossRef]
  50. Ding, K.; Han, Q.-L. Synchronization of two coupled Hindmarsh-Rose neurons. Kybernetika 2015, 51, 784–799. [Google Scholar] [CrossRef]
  51. Huang, Y.; Hou, J.; Yang, E. Passivity and synchronization of coupled reaction-diffusion complex-valued memristive neural networks. Appl. Math. Comput. 2020, 379, 125271. [Google Scholar] [CrossRef]
  52. Miranville, A.; Cantin, G.; Aziz-Alaoui, M.A. Bifurcations and synchronization in networks of unstable reaction–diffusion system. J. Nonlinear Sci. 2021, 6, 44. [Google Scholar] [CrossRef]
  53. Yang, X.; Cao, J.; Yang, Z. Synchronization of coupled reaction-diffusion neural networks with time-varying delays via pinning-impulsive controllers. SIAM J. Contr. Optim. 2013, 51, 3486–3510. [Google Scholar] [CrossRef]
  54. You, Y. Exponential synchronization of memristive Hindmarsh–Rose neural networks. Nonlinear Anal. Real World Appl. 2023, 73, 103909. [Google Scholar] [CrossRef]
  55. Hymavathi, M.; Ibrahim, T.F.; Ali, M.S.; Stamov, G.; Stamova, I.; Younis, B.A.; Osman, K.I. Synchronization of fractional-order neural networks with time delays and reaction-diffusion Terms via Pinning Control. Mathematics 2022, 10, 3916. [Google Scholar] [CrossRef]
  56. Li, W.; Gao, X.; Li, R. Dissipativity and synchronization control of fractional-order memristive neural networks with reaction-diffusion terms. Math. Methods Appl. Sci. 2019, 42, 7494–7505. [Google Scholar] [CrossRef]
  57. Wu, X.; Liu, S.; Wang, H.; Wang, Y. Stability and pinning synchronization of delayed memristive neural networks with fractional-order and reaction–diffusion terms. ISA Trans. 2023, 136, 114–125. [Google Scholar] [CrossRef]
  58. Tonnesen, J.; Hrabetov, S.; Soria, F.N. Local diffusion in the extracellular space of the brain. Neurobiol. Dis. 2023, 177, 105981. [Google Scholar] [CrossRef]
  59. Adams, R.A. Sobolev Spaces; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  60. Ern, A.; Guermond, J.L. Finite Elements I: Approximation and Interpolation, Texts in Applied Mathematics; Springer: Cham, Switzerland, 2021. [Google Scholar]
  61. Belmiloudi, A. Stabilization, Optimal and Robust Control: Theory and Applications in Biological and Physical Sciences; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  62. Belykh, I.; Belykh, V.N.; Hasler, M. Sychronization in asymmetrically coupled networks with node balance. Chaos 2006, 16, 015102. [Google Scholar] [CrossRef]
  63. Ye, H.; Gao, J.; Ding, Y. A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 2007, 328, 1075–1081. [Google Scholar] [CrossRef]
  64. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
  65. Gorenflo, R.; Kilbas, A.A.; Mainardi, F.; Rogosin, S.V. Mittag-Leffler Functions: Related Topics and Applications; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  66. Chueshov, I.D. Introduction to the Theory of Infinite-Dimensional Dissipative Systems; ACTA Scientific Publishing House: Kharkiv, Ukraine, 2002. [Google Scholar]
  67. Bauer, F.; Atay, F.M.; Jost, J. Synchronization in time-discrete networks with general pairwise coupling. Nonlinearity 2009, 22, 2333–2351. [Google Scholar] [CrossRef]
  68. Li, L.; Liu, J.G. A generalized definition of Caputo derivatives and its application to fractional ODEs. SIAM J. Math. Anal. 2018, 50, 2867–2900. [Google Scholar] [CrossRef]
  69. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
  70. Kubica, A.; Yamamoto, M. Initial-boundary value problems for fractional diffusion equations with time-dependent coefficients. Fract. Calc. Appl. Anal. 2018, 21, 276–311. [Google Scholar] [CrossRef]
  71. Hardy, G.H.; Littlewood, J.E. Some properties of fractional integrals I. Math. Z. 1928, 27, 565–606. [Google Scholar] [CrossRef]
  72. Zhou, Y. Basic Theory of Fractional Differential Equations; World Scientific: Singapore, 2014. [Google Scholar]
Figure 1. Schematic design of a biological neuron.
Figure 1. Schematic design of a biological neuron.
Axioms 13 00440 g001
Figure 2. Concept map of the coupled neural network.
Figure 2. Concept map of the coupled neural network.
Axioms 13 00440 g002
Figure 3. Abridged general view of synaptic connection.
Figure 3. Abridged general view of synaptic connection.
Axioms 13 00440 g003
Figure 4. Two examples of connection topology of the neural network with three neurons (based on memristor–autapse and under electromagnetic radiation effects).
Figure 4. Two examples of connection topology of the neural network with three neurons (based on memristor–autapse and under electromagnetic radiation effects).
Axioms 13 00440 g004
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Belmiloudi, A. Brain Connectivity Dynamics and Mittag–Leffler Synchronization in Asymmetric Complex Networks for a Class of Coupled Nonlinear Fractional-Order Memristive Neural Network System with Coupling Boundary Conditions. Axioms 2024, 13, 440. https://doi.org/10.3390/axioms13070440

AMA Style

Belmiloudi A. Brain Connectivity Dynamics and Mittag–Leffler Synchronization in Asymmetric Complex Networks for a Class of Coupled Nonlinear Fractional-Order Memristive Neural Network System with Coupling Boundary Conditions. Axioms. 2024; 13(7):440. https://doi.org/10.3390/axioms13070440

Chicago/Turabian Style

Belmiloudi, Aziz. 2024. "Brain Connectivity Dynamics and Mittag–Leffler Synchronization in Asymmetric Complex Networks for a Class of Coupled Nonlinear Fractional-Order Memristive Neural Network System with Coupling Boundary Conditions" Axioms 13, no. 7: 440. https://doi.org/10.3390/axioms13070440

APA Style

Belmiloudi, A. (2024). Brain Connectivity Dynamics and Mittag–Leffler Synchronization in Asymmetric Complex Networks for a Class of Coupled Nonlinear Fractional-Order Memristive Neural Network System with Coupling Boundary Conditions. Axioms, 13(7), 440. https://doi.org/10.3390/axioms13070440

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop