Attainability for Markov and semi-Markov chains

Abstract: When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors n, where each component n i represents the number of entities in the state S i , can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. The attainable region, as well as the state reunion attainable region, is described as the convex hull of its respective vertices, and properties of these regions are investigated.


Introduction
The notion of attainability first took root in the context of manpower planning [1,2]. While it is possible to study attainability in any (semi-)Markov chain model that allows for inflow, outflow, and internal transitions, our discussion will conform to the established norms and use the language typically linked to the context of manpower planning. In a Markov model, the system's states represent groups characterised by intra-group homogeneity, i.e., the members of a group have similar likelihoods of transitioning from one state to another. For more context regarding population models and Markov chains, we refer the reader to [3,4].
In this domain, the system's states are often aligned with hierarchical levels in the organisation, which we call the "organisational states". Let us denote the organisational states as S = {S 1 , . . ., S l }. The distribution of personnel across these states is captured by a vector s = (s i ), which we call the personnel structure. The central question in control theory within this context is whether a desired distribution of personnel across these states can be sustained (maintainability) or attained (attainability) through strategic adjustments to manageable parameters. When investigating maintainability and attainability, one has to start by precisely defining what should be maintained or attained and how this can be accomplished. Note, however, that the concepts of maintainability and attainability can be used for every discrete-time Markov process that allows for leaving the system, for entering the system, and for making transitions within the system.
Regarding the means of control, control theory typically considers three primary types of personnel movements, as outlined in [5]: wastage, internal transitions, and recruitment. An employee in the organisation at time t is either still in one of the organisational states or has left the organisation through wastage at time t + 1. The wastage probability from state S i is denoted as w i , and the wastage probabilities are gathered in the wastage vector w. Internal transitions account for movements within the organisation, such as promotions or demotions, and are represented by a transition matrix P I . Recruitment reflects the process of recruiting new members into the organisation, with a recruitment vector r. In this paper, only systems with a finite number of states are considered.
The field of control theory is well established and has a significant history within the engineering domain, as highlighted in [6,7], among others, while the foundations for control theory in Markov models were first explored in the context of manpower planning in [5,[8][9][10][11].
While there are three main approaches to influencing the personnel structure within organisations, recruitment control is often preferred. Controlling through recruitment is viewed as a more ethical alternative compared to using wastage, which involves dismissing employees and can negatively affect morale and job satisfaction. Adjusting promotion and demotion rates can also lead to dissatisfaction, particularly among those who perceive it as a hindrance to their career advancement [12]. Such adjustments may also result in promoting underqualified individuals or demoting competent ones. Therefore, recruitment control is generally favoured, as it avoids immediate negative impacts on existing employees, as was already discussed in Bartholomew's work [5].
Attainable configurations have been investigated by a number of researchers under different Markov system conditions, including continuous-time Markov chains [13] and non-homogeneous Markov chains [14][15][16].
Variations in control methods have also been explored in the literature, including the concept of pressure in states introduced by [17] and restricted recruitment as discussed in [18]. Control theory in the context of semi-Markov processes has received limited attention. The concepts of attainability and maintainability in non-homogeneous semi-Markov chains were first examined by Vassiliou and Papadopoulou [19], who extended the concept of maintainability by imposing that the number of members is maintained for each seniority class within an organisational state. Recently, a new concept of maintainability was developed for semi-Markov chains [20], namely state reunion maintainability (SR-maintainability), where the number of members is maintained for each organisational state. Building upon this foundation, our work introduces the parallel concept of state reunion attainability, wherein we explore the possibility of reaching a specified distribution of members across organisational states.
The definition of SR-maintainability will be our starting point to discuss attainability in the setting of time-homogeneous semi-Markov chains, as maintainability and attainability go hand in hand. In practical scenarios, the goal may involve initially achieving a specific personnel structure and subsequently ensuring its sustainability over time using consistent control mechanisms. Alternatively, starting from an already maintainable personnel structure, the objective might shift towards transforming this stable configuration to achieve a different, desired personnel structure, all while employing adaptive control strategies to navigate the complexities of such a transition. These processes necessitate a thorough understanding of how control strategies can be effectively applied to first reach the desired state distribution and then to preserve it. The dual focus on attainability and maintainability underscores the importance of strategic planning in managing population dynamics, where the initial phase of reaching an optimal structure is seamlessly followed by efforts to maintain that structure through careful control and management practices.
In Section 3, we first review the concept of attainability for Markov chains and extend this work to systems with a growth factor 1 + α, where the parameter α signifies the rate of change in the size of the system over time. When α is negative, this indicates a contraction in the system size, i.e., a decline in the number of people in the system. Conversely, when α is positive, the system expands over time. Thereafter, in Section 4, we introduce and study attainability as well as state reunion attainability for semi-Markov chains, starting from the concept of SR-maintainability for semi-Markov chains. We show that a general approach to state reunion attainability, where a structure is said to be state reunion attainable if there exists an arbitrary initial structure from which it can be attained, is not appropriate, and we introduce the concept of (n-step) state reunion attainability starting from a subset S of structures. We provide a method to determine the associated region of attainable structures and illustrate these results.

Time-Homogeneous Markov Chain and Semi-Markov Chain Models
In this section, we provide fundamental concepts and notations that are common in previous studies on Markov and semi-Markov chain models [3,4].
For a Markov chain model with states S 1 , . . ., S l :
• Let P I ∈ R l×l denote the internal transition matrix, where the ijth element P I ij represents the probability of transitioning from state S i to state S j within one time unit.
• The vector w = (w i ) ∈ R l captures the wastage probabilities for each state, where w i is the probability of an entity leaving the system from state S i within one time unit.
• The recruitment vector r = (r i ) ∈ R l gathers the probabilities r i of entering the system into state S i .
Let us further introduce ∆ k−1 as the (k − 1)-probability simplex, i.e., the set of all vectors x ∈ R k where x i ≥ 0 for all i and ∑ i x i = 1. This set represents the space of all possible population structures in a k-state system. In this paper, we will be primarily interested in ∆ l−1 .
Population structures at times t and t + 1 are represented by vectors s(t) and s(t + 1), respectively, where s(t), s(t + 1) ∈ ∆ l−1 .These vectors describe the distribution of entities across l states at specific time points.
Then, the evolution of the population structure in a constant-sized Markov chain from time t to t + 1 is described by the following equation:

s(t + 1) = s(t)(P I + w′r),

where the notation w′ refers to the transpose of the row vector w.
A population structure s ∈ ∆ l−1 is said to be attainable with respect to a constant-sized Markov process defined by the internal transition matrix P I if there exists a structure y ∈ ∆ l−1 such that s can be achieved from y in one step using control by recruitment, formally stated as follows:

s = y(P I + w′r) for some recruitment vector r.

Semi-Markov chain models are extensions of Markov chain models that can take into account the duration of stay in the states. Define J n as the state following the nth transition and T n as the time at which the nth transition occurs in a semi-Markov process. The semi-Markov kernel q is then given by the following [21]:

q ij (k) = P(J n+1 = S j , T n+1 − T n = k | J n = S i ),

where q ij (k) represents the probability that the process transitions from state S i to state S j after exactly k time units. The semi-Markov kernel q can be used to obtain the sequence of transition matrices {P(k)} k in the following way:

Theorem 1 ([22]). For all k such that ∑ h∈S ∑ k−1 m=0 q ih (m) ̸= 1, we have the following:

P ij (k) = q ij (k) / (1 − ∑ h∈S ∑ k−1 m=0 q ih (m)).

Let K denote the maximum seniority level considered within the system. The sequence of matrices {P(k)} K k=0 , with each P(k) ∈ R l×l , is derived from the semi-Markov kernel q. Here, P(k) specifically represents the transition probabilities for entities with state seniority k.
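As an illustration of Theorem 1, the derivation of the matrices {P(k)} k from a semi-Markov kernel can be sketched numerically. The kernel values below are illustrative assumptions, not data from the paper; the function implements P ij (k) = q ij (k)/(1 − ∑ h ∑ m<k q ih (m)).

```python
import numpy as np

# A small illustrative semi-Markov kernel q[k][i][j] (values are assumptions):
# the probability of moving from state i to state j after exactly k time units.
q = np.zeros((3, 2, 2))          # k = 0, 1, 2; two organisational states
q[1] = [[0.2, 0.3], [0.1, 0.2]]  # transitions after exactly 1 time unit
q[2] = [[0.1, 0.1], [0.2, 0.1]]  # transitions after exactly 2 time units

def transition_matrices(q):
    """Derive {P(k)} from the kernel q via Theorem 1:
    P(k)_ij = q_ij(k) / (1 - sum_h sum_{m<k} q_ih(m))."""
    K = q.shape[0] - 1
    P = []
    for k in range(K + 1):
        denom = 1.0 - q[:k].sum(axis=(0, 2))   # per-row surviving mass
        P.append(q[k] / denom[:, None])
    return P

P = transition_matrices(q)
```

For k = 2 and state 1, for instance, the surviving mass is 1 − 0.3 = 0.7, so the second row of P(2) is q 1j (2)/0.7.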

Attainability for Constant-Sized Markov Systems
When examining attainability, one needs to clarify three things: a starting structure, the means to attain a certain structure, and an optional time limit to attain the desired structure. Regarding the means to attain a certain structure, we assume that the system is under control by recruitment. The starting point and the optional time limit will be discussed in the next sections. Bartholomew [5] defined the concept of attainability as follows:

Definition 1 ([5]). A structure s is called attainable with respect to a constant-sized Markov process defined by P I if there exists a structure y such that s is reachable from y in one step using control by recruitment.

The attainable region, which we will denote as A R M , was characterised for a constant-sized system as well [5]. We restate the theorem and formulate a slightly different proof, which will be the basis of the remainder of the results. We will write e i for the standard basis vectors in R l .

Theorem 2 ([5]). The attainable region for a constant-sized Markov system, A R M , is the convex hull of the vectors {e i P I + w i e j } i,j , i.e., the following is true:

A R M = conv{e i P I + w i e j } i,j

Proof. Suppose that a is an arbitrary attainable structure. This implies that there exist probability vectors y = ∑ l i=1 y i e i and r = ∑ l j=1 r j e j such that the following is true:

a = y(P I + w′r).

Rewriting this equation, we obtain the following equalities:

a = ∑ i y i e i P I + (y • w) ∑ j r j e j = ∑ i ∑ j y i r j (e i P I + (e i • w)e j ),

where we use e i • w as the notation for the scalar product of e i and w. We conclude that a ∈ conv{e i P I + w i e j } i,j . Note that the reverse inclusion is trivial, as every convex combination of vectors of the set {e i P I + w i e j } i,j can be rewritten in the form y(P I + w′r), where y and r are probability vectors.
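The vertex description in Theorem 2 translates directly into a short computation. The matrix P I below is an illustrative assumption; the wastage vector w is implied by the requirement that each row of P I plus its wastage probability sums to 1. The function enumerates the candidate vertices e i P I + w i e j , each of which is again a probability vector.

```python
import numpy as np

# Sketch of Theorem 2 with an assumed internal transition matrix P_I.
P_I = np.array([[0.6, 0.2],
                [0.1, 0.7]])
w = 1.0 - P_I.sum(axis=1)        # wastage makes every row sum to 1

def attainable_vertices(P_I, w):
    """Vertices {e_i P_I + w_i e_j} of the attainable region (Theorem 2)."""
    l = len(w)
    E = np.eye(l)
    return [E[i] @ P_I + w[i] * E[j] for i in range(l) for j in range(l)]

verts = attainable_vertices(P_I, w)
# each of the l^2 vertices is a probability vector (components sum to 1)
```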
This region can be useful, as we know for certain that the points that belong to the complement of A R M are definitely not attainable, no matter what the starting point y is. One can also define the concept of attainability starting from a set of structures S .

Definition 2 (n-step attainability from S ). A structure s is called n-step attainable from S with respect to a constant-sized Markov process defined by P I if there exists a structure y ∈ S such that s is reachable from y in n steps using control by recruitment.
Since Equation (1) can be rewritten as ∑ l j=1 r j (yP I + (y • w)e j ), the following lemma holds:

Lemma 1. If S = {v}, it follows that, for a constant-sized Markov process defined by P I , the one-step attainable region from S is given by the following:

A R 1,S M = conv{vP I + (v • w)e j } j

Lemma 1 can be used to determine, in a straightforward way, the attainable region for a finite set S . If the set S is a convex set, the same technique yields the following:

Lemma 2. If the starting region S is a convex set with vertices {v 1 , v 2 , . . ., v k }, it follows that, for a constant-sized Markov process defined by P I , the one-step attainable region from S is given by the following:

A R 1,S M = conv{v i P I + (v i • w)e j } i,j

To obtain A R n,S M , one could calculate A R 1,S M and use this as the new starting region to calculate A R 2,S M ; by iteratively following this procedure, one can obtain the desired A R n,S M .
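The iterative procedure just described, repeatedly applying the one-step map of Lemmas 1 and 2 to the current vertex set, can be sketched as follows. P I and the starting vertex are illustrative assumptions; redundant vertices are not pruned, though a convex-hull routine could be used to remove them.

```python
import numpy as np

# Iterating the one-step attainable region of Lemmas 1 and 2.
P_I = np.array([[0.6, 0.2],     # assumed internal transition matrix
                [0.1, 0.7]])
w = 1.0 - P_I.sum(axis=1)       # implied wastage vector

def one_step(vertices, P_I, w):
    """Map each vertex v to the candidate vertices v P_I + (v.w) e_j."""
    E = np.eye(len(w))
    return [v @ P_I + (v @ w) * E[j]
            for v in vertices for j in range(len(w))]

def n_step(vertices, P_I, w, n):
    """Vertices of the n-step attainable region from the given vertex set."""
    for _ in range(n):
        vertices = one_step(vertices, P_I, w)
    return vertices

verts2 = n_step([np.array([1.0, 0.0])], P_I, w, 2)   # S = {e_1}, n = 2
```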

Attainability for Markov Systems Subject to Growth and Contraction
In his doctoral dissertation [3], Bartholomew suggested the extension of these findings to organisations that experience growth or contraction.This research gap will be addressed in this section.
The evolution of the total size of a system that is subject to growth or contraction can be described by the following:

N(t + 1) = (1 + α)N(t),

where N(t) corresponds to the total number of people in the organisational states at time t, and the parameter α refers to the rate of change in the size of the system over time. When α is negative, this indicates a contraction in the system size; conversely, when α is positive, this signifies growth. Note that in the case of growth or contraction, starting from a personnel structure y(t) at time t, y(t + 1) is given by the following:

y(t + 1) = y(t)P I + r + (t),

where the additive recruitment vector r + (t) is chosen such that the sum of the components of y(t + 1) equals 1 + α instead of 1. So, when talking about the structures, we need to normalise with respect to the L 1 norm and consider y(t + 1)/||y(t + 1)|| 1 . Observe that the vector r + (t) should not be confused with the classical recruitment vector r(t), which is a probability vector that corresponds to r + (t)/||r + (t)|| 1 .

Now, note that the maximal amount of contraction is limited by max i w i , as being part of the wastage vector is the only way to leave the system, i.e., α ≥ − max i w i . Furthermore, note that the procedure sketched in the proof of Theorem 2 is rooted in the fact that the vectors of the form e i P I have to be supplemented to achieve the desired vector, which has to sum to 1, as a constant-sized system is considered in Theorem 2. As long as w i ≥ −α holds for all i, the same reasoning can be repeated, i.e., we need to supplement the vectors e i P I to achieve the desired vector, the elements of which have to sum to 1 + α. This immediately yields the following:

Theorem 3. The attainable region for a Markov system with growth factor 1 + α, where w i ≥ −α for all i, A R M (1 + α), is the convex hull of the vectors {e i P I + (w i + α)e j } i,j , i.e., the following is true:

A R M (1 + α) = conv{e i P I + (w i + α)e j } i,j

This result covers the case of a growing system, as α > 0 implies that w i ≥ −α. However, this result does not indicate how to compute the attainable region for contracting systems with w i < −α for some i. In the latter case, the sums of the components of some of the vectors of the form e i P I are simply too big, i.e., their L 1 norm exceeds 1 + α; therefore, they cannot be used as building blocks of the attainable region. Now, suppose that there exists just one i for which w i < −α. For all j ̸= i, we can still supplement e j P I with the vectors [(w j + α)e l ] l to take into account all of the attainable convex combinations where e j P I contributes with a non-zero coefficient. But for e i P I , it is impossible to do this, as ∥e i P I ∥ 1 > 1 + α. Simply discarding e i P I is no option either, as there might still exist convex combinations of e i P I with the e j P I that do result in attainable structures. To resolve this problem, we should simply take these convex combinations into account. If we write

{β 0 e i P I + ∑ j̸=i β j e j P I } β := {β 0 e i P I + ∑ j̸=i β j e j P I | ∑ s β s = 1; ∀s : 0 ≤ β s ≤ 1; β 0 ̸= 0; ||β 0 e i P I + ∑ j̸=i β j e j P I || 1 = 1 + α},

we can use this result to state the following theorem, which includes growth as well as contraction:

Theorem 4. The attainable region for a Markov system with growth factor 1 + α, A R M (1 + α), is the convex hull of the vectors {e i P I * } i , where the e i P I * are the supplemented vectors e i P I + (w i + α)e j for the indices i with w i ≥ −α, together with the extreme points of the sets {β 0 e i P I + ∑ j̸=i β j e j P I } β for the indices i with w i < −α.

With the use of Theorem 4, one can easily generalise Lemmas 1 and 2 to systems with growth factor 1 + α.
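A minimal sketch of Theorem 3, under its assumption w i ≥ −α for all i (the matrix and growth rate below are illustrative assumptions): each vertex e i P I + (w i + α)e j now sums to 1 + α, and normalising by the L 1 norm recovers a structure in the simplex.

```python
import numpy as np

# Vertices of the attainable region for a system with growth factor 1 + alpha.
P_I = np.array([[0.6, 0.2],     # assumed internal transition matrix
                [0.1, 0.7]])
w = 1.0 - P_I.sum(axis=1)       # implied wastage vector
alpha = 0.1                     # assumed 10% growth per time unit

def growth_vertices(P_I, w, alpha):
    """Vertices {e_i P_I + (w_i + alpha) e_j}; requires w_i >= -alpha."""
    assert np.all(w >= -alpha), "Theorem 3 requires w_i >= -alpha for all i"
    l = len(w)
    E = np.eye(l)
    return [E[i] @ P_I + (w[i] + alpha) * E[j]
            for i in range(l) for j in range(l)]

verts = growth_vertices(P_I, w, alpha)
# every vertex sums to 1 + alpha; v / v.sum() lies in the simplex
```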
Although this definition can be used for general starting regions S , we argue that it can often be useful in practice to use the maintainability region M R M as a starting region, as a maintainable structure might already be in place within the company, or a company could be actively working towards such a structure. Furthermore, the maintainable region is a priori the smallest known attainable set, regardless of the starting position. Note that in this case, A R n−1,S M ⊆ A R n,S M .

SR-Maintainability and SR-Attainability for Semi-Markov Chains

State Reunion Maintainability
In order to study the attainability of a semi-Markov chain, we need to incorporate all of the information in the matrices P(k) into one matrix P SM that characterises the semi-Markov (SM) model. This involves segregating the states S by their levels of organisational state seniority, with P SM serving as the transition matrix for these seniority-based disaggregated states. This leads to Definitions 3 and 4, which were initially developed to facilitate the study of maintainability for semi-Markov chains [20]. Suppose we have l organisational states and one state that corresponds to leaving the system, which is called the wastage state. If the sequence {P(k)} k is of length K + 1, we define the following:

Definition 3. The set of seniority-based states is given by

S SB = {S a(b) | 0 ≤ a ≤ K, 1 ≤ b ≤ l},

where the state S a(b) corresponds to the staff in organisational state b that has organisational state seniority equal to a.

Definition 4. For 0 ≤ k ≤ K, the elements of P SM are equal to the following: and, if i − 1 ≡ k (mod K + 1), then the following is true:

If we redefine the state set, we can view this matrix P SM as the transition matrix with state space S SB . In this way, all of the information regarding transitions is stored in one matrix P SM , which can be used to elegantly state the definitions of state reunion maintainability and attainability. By writing the state vector at time t as n SB (t) and the non-normalised recruitment vector, which contains the absolute recruitment counts, at time t as r + SB (t), we obtain the following equations that describe the evolution of the stock vector for a system with a growth factor 1 + α:

n SB (t + 1) = n SB (t)P SM + r + SB (t), with ||n SB (t + 1)|| 1 = (1 + α)||n SB (t)|| 1 .

The concepts of state reunion maintainability as well as state reunion attainability can be stated by the use of a reunion matrix U, which encodes the specific seniority-based states that are to be fused.

Definition 5. For a transition matrix P SM with state space S SB , a (K + 1)l × l matrix U = (U ij ) is called the reunion matrix if each of its l columns U j consists of K + 1 ones, placed in the rows that correspond to the seniority-based states S a(j) with 0 ≤ a ≤ K, and zeros elsewhere. The organisational structure s associated with a seniority-based structure s SB can then be obtained through the following:

s = s SB U.

We can now restate the concept of state reunion maintainability.
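Since the displayed formulas of Definition 4 are not reproduced above, the construction below is only one plausible reading of it, sketched with assumed matrices P(0) and P(1): an entity that stays in its organisational state moves up one seniority level (capped here at K), while a move to another organisational state resets seniority to zero; the reunion matrix U sums the K + 1 seniority copies of each organisational state.

```python
import numpy as np

# Hedged sketch of the seniority-based construction (Definitions 3-5).
# State order: S_0(1), S_1(1), S_0(2), S_1(2), i.e., grouped per state.
l, K = 2, 1                       # 2 organisational states, seniorities 0..K
P = [np.array([[0.5, 0.2],        # P(0): assumed transitions at seniority 0
               [0.1, 0.6]]),
     np.array([[0.4, 0.3],        # P(1): assumed transitions at seniority 1
               [0.2, 0.5]])]

def build_P_SM(P, l, K):
    n = (K + 1) * l
    P_SM = np.zeros((n, n))
    idx = lambda a, b: b * (K + 1) + a     # row/column of S_a(b)
    for a in range(K + 1):
        for b in range(l):
            for b2 in range(l):
                if b2 == b:                # stay: seniority rises, capped at K
                    P_SM[idx(a, b), idx(min(a + 1, K), b)] += P[a][b, b]
                else:                      # move: seniority resets to zero
                    P_SM[idx(a, b), idx(0, b2)] += P[a][b, b2]
    return P_SM

# Reunion matrix U: column b has K + 1 ones, in the rows of S_0(b), ..., S_K(b).
U = np.zeros(((K + 1) * l, l))
for b in range(l):
    for a in range(K + 1):
        U[b * (K + 1) + a, b] = 1.0

P_SM = build_P_SM(P, l, K)
# n_SB @ U reunites the seniority-based stocks into organisational stocks
```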
Definition 6 (State reunion maintainability [20]). A structure s is called state reunion maintainable (SR-maintainable) for a system with growth factor 1 + α under control by recruitment if a path of seniority-based stock vectors (n SB (t)) t and a sequence of recruitment vectors (r + SB (t)) t can be chosen such that, for every t ∈ N, the following is true:

(1 + α)n SB (t)U = n SB (t + 1)U (4)

A sequence of seniority-based stock vectors (n SB (t)) t that satisfies Equations (3)-(5) will be called a seniority-based path associated to the SR-maintainable personnel structure s.
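Equation (4) can be checked numerically on a toy system (all numbers below are assumptions): SR-maintainability only constrains the reunited stocks n SB (t)U to grow by the factor 1 + α, while the seniority-based stocks themselves may change.

```python
import numpy as np

# Toy check of Equation (4); all numbers are assumptions.  State order:
# S_0(1), S_1(1), S_0(2), S_1(2), with seniority capped at K = 1.
P_SM = np.array([[0.0, 0.6, 0.2, 0.0],
                 [0.0, 0.5, 0.3, 0.0],
                 [0.1, 0.0, 0.0, 0.7],
                 [0.2, 0.0, 0.0, 0.6]])
U = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
alpha = 0.0                                # constant-sized system
n = np.array([30., 20., 25., 25.])         # seniority-based stocks at time t

flow = n @ P_SM                            # survivors after one time unit
target = (1 + alpha) * (n @ U)             # required organisational stocks
shortfall = target - flow @ U              # mass to be filled by recruitment
r_plus = np.zeros(4)
r_plus[[0, 2]] = shortfall                 # recruit only at zero seniority
n_next = flow + r_plus

# Equation (4): (1 + alpha) n_SB(t) U = n_SB(t + 1) U
assert np.allclose((1 + alpha) * (n @ U), n_next @ U)
```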

Attainability for Semi-Markov Chains
For semi-Markov chains, we follow a similar approach as the one in Section 3. This would yield the following definition for attainability:

Definition 7. A structure s SB is called attainable with respect to a semi-Markov process defined by P SM if there exists a structure y SB such that s SB is reachable from y SB in one step using control by recruitment.

Remark 1. As recruitment is only allowed in the seniority-based states with state seniority zero, most of the components of r SB are zero.
However, in the context of real-world applications, it might often be deemed less restrictive and more efficacious to focus on preserving the proportions in the organisational states instead, as is the case for state reunion maintainability.

State Reunion Attainability
A natural way to define state reunion attainability would be the following.

Definition 8 (General state reunion attainability). A structure s = s SB U is called state reunion attainable with respect to a semi-Markov process defined by P SM if there exists a structure y SB such that s SB is reachable from y SB in one step using control by recruitment.
Yet, it turns out that this approach is not informative with regard to state reunion attainability:

Lemma 3. All structures s = s SB U ∈ ∆ l−1 are state reunion attainable for every semi-Markov process defined by P SM .
Proof. Consider a structure y SB with all of the personnel in the S K(b) states at time t. We know that none of these people will be in an internal state at time t + 1, i.e., the stock vector at time t + 1 will be completely determined by the recruitment vector r SB . This implies that, under control by recruitment, each structure s = s SB U can be attained in this way.
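The argument in the proof of Lemma 3 can be illustrated on a toy system (numbers are assumptions, and we assume, as in the proof, that entities at maximal seniority K leave the system with certainty): when everyone sits in the S K(b) states at time t, the stocks at time t + 1 are determined entirely by recruitment, so any reunited structure is reachable in one step.

```python
import numpy as np

# Toy illustration of Lemma 3.  State order: S_0(1), S_1(1), S_0(2), S_1(2),
# with K = 1 and maximal-seniority entities assumed to leave with certainty.
P_SM = np.array([[0.0, 0.7, 0.1, 0.0],
                 [0.0, 0.0, 0.0, 0.0],   # S_1(1): everyone leaves
                 [0.2, 0.0, 0.0, 0.6],
                 [0.0, 0.0, 0.0, 0.0]])  # S_1(2): everyone leaves
U = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])

y = np.array([0.0, 0.4, 0.0, 0.6])       # all personnel at maximal seniority
target = np.array([0.3, 0.7])            # any desired organisational structure
r = np.array([target[0], 0.0, target[1], 0.0])  # recruit at zero seniority
s_SB = y @ P_SM + r                      # survivors (none) plus recruits

assert np.allclose(s_SB @ U, target)     # the target structure is attained
```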
A more suitable definition of state reunion attainability would be the n-step state reunion attainability, starting from a set S .

Definition 9 (n-step state reunion attainability from S ). A structure s = s SB U is called n-step state reunion attainable from S with respect to a semi-Markov process defined by P SM if there exists a structure y SB ∈ S such that s SB is reachable from y SB in n steps using control by recruitment.
We will denote, for a system subject to a growth factor of (1 + α), the n-step attainable set starting from S and the n-step state reunion attainable set starting from S as A R n,S SM (1 + α) and A R n,S SM (1 + α) • U, respectively.
Remark 2. Definition 9 is a generalisation of Definition 8 in the sense that they coincide for n = 1 with S = ∆ l−1 .
To calculate the n-step state reunion attainable structures in practice, one could use the same technique as in the proof of Theorem 2. Furthermore, this technique also yields the following results:

Lemma 4. If S = {v}, it follows that the one-step state reunion attainable region from S for a semi-Markov process with growth factor (1 + α) defined by P SM is given by the following:

A R 1,S SM (1 + α) • U = conv{vP SM + (1 + α − ||vP SM || 1 )e j } j|S 0(b) • U

Note that the mass to be supplemented now equals 1 + α − ||vP SM || 1 and that only those e j are considered that correspond to states with zero seniority, as these are the only states where recruitment can take place. We denote this restriction on the index j as j| S 0(b) .
Lemma 4 can be used to determine the attainable region for a finite set S = {v 1 , v 2 , . . ., v k }. If the set S is a convex set, we obtain the following, where P * SM plays the role that P I * played in Theorem 4:

Lemma 5. If the starting region S is a convex set with vertices {v 1 , v 2 , . . ., v k }, it follows that the one-step state reunion attainable region from S for a semi-Markov process with growth factor (1 + α) defined by P SM is given by the convex hull of the vectors {v i P * SM } i • U.

To obtain A R n,S SM , one could calculate A R 1,S SM and use this as the new starting region to calculate A R 2,S SM . By iteratively following this procedure, one can obtain the desired A R n,S SM .

Illustrations
In this section, we illustrate our findings by constructing the attainable and state reunion attainable regions for different growth factors (1 + α).
First, consider the Markov system defined by the following, for which we determine the maintainable region for the cases α ∈ {0, 1, −0.15}:

Figure 2. M R M = S 0 , A R 1,S 0 M and A R 2,S 0 M .

Now, consider a semi-Markov system for which P(k) is given by the following: