Abstract
When studying Markov chain models and semi-Markov chain models, it is useful to know which state vectors, in which each component represents the number of entities in the corresponding state, can be maintained or attained. This question leads to the definitions of maintainability and attainability for (time-homogeneous) Markov chain models. Recently, the definition of maintainability was extended to the concept of state reunion maintainability (SR-maintainability) for semi-Markov chains. Within the framework of semi-Markov chains, the states are subdivided further into seniority-based states. State reunion maintainability assesses the maintainability of the distribution across states. Following this idea, we introduce the concept of state reunion attainability, which encompasses the potential of a system to attain a specific distribution across the states after uniting the seniority-based states into the underlying states. In this paper, we start by extending the concept of attainability for constant-sized Markov chain models to systems that are subject to growth or contraction. Afterwards, we introduce the concepts of attainability and state reunion attainability for semi-Markov chain models, using SR-maintainability as a starting point. Both the attainable region and the state reunion attainable region are described as the convex hull of their respective vertices, and properties of these regions are investigated.
Keywords:
semi-Markov model; Markov model; attainability; maintainability; state reunion; manpower planning
MSC:
60K15; 91D35; 60J20
1. Introduction
The notion of attainability first took root in the context of manpower planning [1,2]. While it is possible to study attainability in any (semi-)Markov chain model that allows for inflow, outflow, and internal transitions, our discussion will conform to the established norms and use the language typically linked to the context of manpower planning. In a Markov model, the system’s states represent homogeneous groups, i.e., groups whose members have similar likelihoods of transitioning from one state to another. For more context regarding population models and Markov chains, we refer the reader to [3,4].
In this domain, the system’s states are often aligned with hierarchical levels in the organisation, which we call the “organisational states”. Let us denote the organisational states as $1, \ldots, l$. The distribution of personnel across these states is captured by a vector $\mathbf{x}$, which we call the personnel structure. The central question in control theory within this context is whether a desired distribution of personnel across these states can be sustained (maintainability) or attained (attainability) through strategic adjustments to manageable parameters. When investigating maintainability and attainability, one has to start by precisely defining what should be maintained or attained and how this can be accomplished. Note, however, that the concepts of maintainability and attainability can be used for every discrete-time Markov process that allows for leaving the system, for entering the system, and for making transitions within the system.
Regarding the means of control, control theory typically considers three primary types of personnel movements, as outlined in [5]: wastage, internal transitions, and recruitment. An employee in the organisation at time $t$ is either still in one of the organisational states or has left the organisation through wastage at time $t+1$. The wastage probability from state $i$ is denoted by $w_i$, and the wastage probabilities are gathered in the wastage vector $\mathbf{w}$. Internal transitions account for movements within the organisation, such as promotions or demotions, and are represented by a transition matrix $\mathbf{P}$. Recruitment reflects the process of recruiting new members into the organisation, with a recruitment vector $\mathbf{r}$. In this paper, only systems with a finite number of states are considered.
The field of control theory is well established and has a significant history within the engineering domain, as highlighted in [6,7], among others, while the foundations for control theory in Markov models were first explored in the context of manpower planning in [5,8,9,10,11].
While there are three main approaches to influencing the personnel structure within organisations, recruitment control is often preferred. Controlling through recruitment is viewed as a more ethical alternative compared to using wastage, which involves dismissing employees and can negatively affect morale and job satisfaction. Adjusting promotion and demotion rates can also lead to dissatisfaction, particularly among those who perceive it as a hindrance to their career advancement [12]. Such adjustments may also result in promoting underqualified individuals or demoting competent ones. Therefore, recruitment control is generally favoured as it avoids immediate negative impacts on existing employees, as was already discussed in Bartholomew’s work [5].
Investigations into attainable configurations under different Markov system conditions, including continuous-time Markov chains [13] and non-homogeneous Markov chains [14,15,16], have been conducted by a number of researchers.
However, variations in control methods have been explored in the literature, including the concept of pressure in states introduced by [17] and restricted recruitment as discussed in [18]. Control theory in the context of semi-Markov processes has received limited attention. The concepts of attainability and maintainability in non-homogeneous semi-Markov chains were first examined by Vassiliou and Papadopoulou [19], who extended the concept of maintainability by imposing that the number of members is maintained for each seniority class within an organisational state. Recently, a new concept of maintainability was developed for semi-Markov chains [20], namely state reunion maintainability (SR-maintainability), where the number of members is maintained for each organisational state. Building upon this foundation, our work introduces the parallel concept of state reunion attainability, wherein we explore the possibility of reaching a specified distribution of members across organisational states.
The definition of SR-maintainability will be our starting point to discuss attainability in the setting of time-homogeneous semi-Markov chains, as maintainability and attainability go hand in hand. In practical scenarios, the goal may involve initially achieving a specific personnel structure and subsequently ensuring its sustainability over time using consistent control mechanisms. Alternatively, starting from an already maintainable personnel structure, the objective might shift towards transforming this stable configuration to achieve a different, desired personnel structure, all while employing adaptive control strategies to navigate the complexities of such a transition. These processes necessitate a thorough understanding of how control strategies can be effectively applied to first reach the desired state distribution and then to preserve it. The dual focus on attainability and maintainability underscores the importance of strategic planning in managing population dynamics, where the initial phase of reaching an optimal structure is seamlessly followed by efforts to maintain that structure through careful control and management practices.
In Section 3, we first review the concept of attainability for Markov chains and extend this work to systems with a growth factor $\alpha$, where the parameter $\alpha$ signifies the rate of change in the size of the system over time. When $\alpha$ is negative, this indicates a contraction in the system size, i.e., a decline in the number of people in the system. Conversely, when $\alpha$ is positive, the system expands over time. Thereafter, in Section 4, we introduce and study attainability as well as state reunion attainability for semi-Markov chains, starting from the concept of SR-maintainability for semi-Markov chains. We show that a general approach to state reunion attainability, where a structure is said to be state reunion attainable if there exists an arbitrary initial structure from which it can be attained, is not appropriate, and we introduce the concept of (n-step) state reunion attainability starting from a subset of structures. We provide a method to determine the associated region of attainable structures and illustrate these results.
2. Time-Homogeneous Markov Chain and Semi-Markov Chain Models
In this section, we provide fundamental concepts and notations that are common in previous studies on Markov and semi-Markov chain models [3,4].
For a Markov chain model with states $1, \ldots, l$:
- Let $\mathbf{P} = \left(p_{ij}\right)$ denote the internal transition matrix, where the $(i,j)$th element $p_{ij}$ represents the probability of transitioning from state $i$ to state $j$ within one time unit.
- The vector $\mathbf{w} = \left(w_1, \ldots, w_l\right)$ captures the wastage probabilities for each state, where $w_i$ is the probability of an entity leaving the system from state $i$ within one time unit.
- The recruitment vector $\mathbf{r} = \left(r_1, \ldots, r_l\right)$ gathers the probabilities $r_i$ of entering the system into state $i$.
Let us further introduce $\Delta^{k-1}$ as the $(k-1)$-probability simplex, i.e., the set of all vectors $\mathbf{x} = \left(x_1, \ldots, x_k\right)$ where $x_i \geq 0$ for all $i$, and $\sum_{i=1}^{k} x_i = 1$. This set represents the space of all possible population structures in a $k$-state system. In this paper, we will be primarily interested in $\Delta^{l-1}$.
Population structures at times $t$ and $t+1$ are represented by vectors $\mathbf{x}(t)$ and $\mathbf{x}(t+1)$, respectively, where $\mathbf{x}(t), \mathbf{x}(t+1) \in \Delta^{l-1}$. These vectors describe the distribution of entities across the $l$ states at specific time points.
Then, the evolution of the population structure in a constant-sized Markov chain from time $t$ to $t+1$ is described by the following equation:
$$\mathbf{x}'(t+1) = \mathbf{x}'(t)\,\mathbf{P} + \left(\mathbf{x}'(t)\,\mathbf{w}\right)\mathbf{r}', \qquad (1)$$
where the notation $\mathbf{x}'$ refers to the transpose of the vector $\mathbf{x}$, i.e., the corresponding row vector.
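As a quick numerical illustration of Equation (1), the sketch below performs a single step of the evolution for a small two-state system. The transition matrix, wastage vector, recruitment vector, and starting structure are hypothetical values chosen for the example, and the code is our own minimal Python/NumPy sketch, not code from the paper.

```python
import numpy as np

# Hypothetical two-state example (illustrative values only).
P = np.array([[0.7, 0.2],
              [0.1, 0.6]])      # internal transition matrix
w = 1.0 - P.sum(axis=1)         # wastage probabilities: (0.1, 0.3)
r = np.array([0.5, 0.5])        # recruitment (probability) vector
x = np.array([0.4, 0.6])        # current structure x(t)

# Equation (1): x'(t+1) = x'(t) P + (x'(t) w) r'
x_next = x @ P + (x @ w) * r
print(x_next, x_next.sum())     # (0.45, 0.55), and the total remains 1
```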
A population structure $\mathbf{x}(t+1)$ is said to be attainable with respect to a constant-sized Markov process defined by the internal transition matrix $\mathbf{P}$ if there exists a structure $\mathbf{x}(t)$ such that $\mathbf{x}(t+1)$ can be achieved from $\mathbf{x}(t)$ in one step using control by recruitment, formally stated as follows: there exists a probability vector $\mathbf{r}$ such that $\mathbf{x}'(t+1) = \mathbf{x}'(t)\,\mathbf{P} + \left(\mathbf{x}'(t)\,\mathbf{w}\right)\mathbf{r}'$.
Semi-Markov chain models are extensions of Markov chain models that can take into account the duration of stay in the states. Define $J_n$ as the state following the $n$th transition and $T_n$ as the time at which the $n$th transition occurs in a semi-Markov process. The semi-Markov kernel $\mathbf{Q}(k) = \left(q_{ij}(k)\right)$ is then given by the following [21]:
$$q_{ij}(k) = \mathbb{P}\left(J_{n+1} = j,\ T_{n+1} - T_n = k \mid J_n = i\right),$$
where $q_{ij}(k)$ represents the probability that the process transitions from state $i$ to state $j$ after exactly $k$ time units. The semi-Markov kernel can be used to obtain the sequence of transition matrices $\left(\mathbf{P}^{(k)}\right)_k$ in the following way:
Theorem 1
([22]). For all $k$ such that the corresponding conditional probabilities are well defined, we have the following:
Let $K$ denote the maximum seniority level considered within the system. The sequence of matrices $\left(\mathbf{P}^{(k)}\right)_k$, with each $\mathbf{P}^{(k)}$ an $l \times l$ matrix, is derived from the semi-Markov kernel $\mathbf{Q}$. Here, $\mathbf{P}^{(k)}$ specifically represents the transition probabilities for entities with state seniority $k$.
3. Attainability for Markov Chains
3.1. Attainability for Constant-Sized Markov Systems
When examining attainability, one needs to clarify three things: a starting structure, the means to attain a certain structure, and an optional time limit to attain the desired structure. Regarding the means to attain a certain structure, we assume that the system is under control by recruitment. The starting point and the optional time limit will be discussed in the next sections. Bartholomew [5] defined the concept of attainability as follows:
Definition 1
([5]). A structure $\mathbf{x}(t+1)$ is called attainable with respect to a constant-sized Markov process defined by $\mathbf{P}$ if there exists a structure $\mathbf{x}(t)$ such that $\mathbf{x}(t+1)$ is reachable from $\mathbf{x}(t)$ in one step using control by recruitment.
The attainable region, which we will denote as $\mathcal{A}$, was characterised for a constant-sized system as well [5]. We restate the theorem and formulate a slightly different proof, which will be the basis of the remainder of the results, where we will write $\mathbf{e}_i$ for the standard basis vectors in $\mathbb{R}^l$.
Theorem 2
([5]). The attainable region for a constant-sized Markov system, $\mathcal{A}$, is the convex hull of the vectors $\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j'$, i.e., the following is true:
$$\mathcal{A} = \operatorname{conv}\left\{\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j' \;:\; i, j \in \{1, \ldots, l\}\right\}.$$
Proof.
Suppose that $\mathbf{x}(t+1)$ is an arbitrary attainable structure. This implies that there exist probability vectors $\mathbf{x}(t)$ and $\mathbf{r}$ such that the following is true:
$$\mathbf{x}'(t+1) = \mathbf{x}'(t)\,\mathbf{P} + \left(\mathbf{x}'(t)\,\mathbf{w}\right)\mathbf{r}'.$$
Rewriting this equation, we obtain the following equalities:
$$\mathbf{x}'(t+1) = \sum_{i=1}^{l} x_i(t)\,\mathbf{e}_i'\,\mathbf{P} + \langle \mathbf{x}(t), \mathbf{w}\rangle \sum_{j=1}^{l} r_j\,\mathbf{e}_j' = \sum_{i=1}^{l}\sum_{j=1}^{l} x_i(t)\,r_j\left(\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j'\right),$$
where we use $\langle \mathbf{x}, \mathbf{w}\rangle$ as the notation for the scalar product of $\mathbf{x}$ and $\mathbf{w}$. Since the coefficients $x_i(t)\,r_j$ are non-negative and sum to one, we conclude that $\mathbf{x}(t+1) \in \operatorname{conv}\left\{\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j' \;:\; i, j \in \{1, \ldots, l\}\right\}$. Note that the reverse inclusion is trivial, as every convex combination of vectors of the set can be rewritten in the form $\mathbf{x}'\,\mathbf{P} + \left(\mathbf{x}'\,\mathbf{w}\right)\mathbf{r}'$, where $\mathbf{x}$ and $\mathbf{r}$ are probability vectors. □
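The vertex description in Theorem 2 is straightforward to enumerate in practice. The sketch below is our own illustration (not code from the paper): it lists the candidate vertices $\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j'$ for a given transition matrix, and the convex hull of these points is the attainable region $\mathcal{A}$.

```python
import numpy as np

def attainable_vertices(P):
    """Candidate vertices e_i' P + w_i e_j' of the attainable region of a
    constant-sized Markov system (Theorem 2)."""
    P = np.asarray(P, dtype=float)
    l = P.shape[0]
    w = 1.0 - P.sum(axis=1)          # wastage probabilities
    vertices = []
    for i in range(l):
        for j in range(l):
            v = P[i].copy()
            v[j] += w[i]
            vertices.append(v)
    return np.array(vertices)        # l*l points, each summing to one
```

Redundant (non-extreme) points can afterwards be pruned with a convex-hull routine such as scipy.spatial.ConvexHull.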
This region can be useful, as the points that belong to the complement of $\mathcal{A}$ are certainly not attainable, no matter what the starting point is. One can also define the concept of attainability starting from a set of structures $\mathcal{G} \subseteq \Delta^{l-1}$.
Definition 2
($n$-step attainability from $\mathcal{G}$). A structure $\mathbf{x}$ is called $n$-step attainable from $\mathcal{G}$ with respect to a constant-sized Markov process defined by $\mathbf{P}$ if there exists a structure $\mathbf{x}(t) \in \mathcal{G}$ such that $\mathbf{x}$ is reachable from $\mathbf{x}(t)$ in $n$ steps using control by recruitment.
Since Equation (1) can be rewritten as $\mathbf{x}'(t+1) = \sum_{i=1}^{l} x_i(t)\left(\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{r}'\right)$, the following lemma holds:
Lemma 1.
If $\mathcal{G} = \{\mathbf{x}(t)\}$, it follows that, for a constant-sized Markov process defined by $\mathbf{P}$, the one-step attainable region from $\mathcal{G}$ is given by the following:
$$\mathcal{A}_1\!\left(\mathcal{G}\right) = \operatorname{conv}\left\{\mathbf{x}'(t)\,\mathbf{P} + \left(\mathbf{x}'(t)\,\mathbf{w}\right)\mathbf{e}_j' \;:\; j \in \{1, \ldots, l\}\right\}.$$
Lemma 1 can be used to determine, in a straightforward way, the attainable region for a finite set $\mathcal{G}$. If the set $\mathcal{G}$ is a convex set, the same technique yields the following:
Lemma 2.
If the starting region $\mathcal{G}$ is a convex set with vertices $\mathbf{g}_1, \ldots, \mathbf{g}_m$, it follows that, for a constant-sized Markov process defined by $\mathbf{P}$, the one-step attainable region from $\mathcal{G}$ is given by the following:
$$\mathcal{A}_1\!\left(\mathcal{G}\right) = \operatorname{conv}\left\{\mathbf{g}_i'\,\mathbf{P} + \left(\mathbf{g}_i'\,\mathbf{w}\right)\mathbf{e}_j' \;:\; i \in \{1, \ldots, m\},\ j \in \{1, \ldots, l\}\right\}.$$
To obtain $\mathcal{A}_n(\mathcal{G})$, one could calculate $\mathcal{A}_1(\mathcal{G})$ and use this as the new starting region to calculate $\mathcal{A}_2(\mathcal{G})$, and by iteratively following this procedure, one can obtain the desired $\mathcal{A}_n(\mathcal{G})$.
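The iteration just described can be sketched as follows; again, this is our own illustrative Python code, not code from the paper. Starting from the vertices of a convex set $\mathcal{G}$, one application of the map produces the vertex candidates of $\mathcal{A}_1(\mathcal{G})$ (Lemma 2), and repeating it $n$ times yields candidates whose convex hull is $\mathcal{A}_n(\mathcal{G})$. Applying the map once to the standard basis vectors recovers the vertices of Theorem 2.

```python
import numpy as np

def one_step_from(vertices, P):
    """Vertex candidates g_i' P + (g_i' w) e_j' of the one-step attainable
    region from a convex set with the given vertices (Lemma 2)."""
    P = np.asarray(P, dtype=float)
    l = P.shape[0]
    w = 1.0 - P.sum(axis=1)
    out = []
    for g in np.asarray(vertices, dtype=float):
        base, wastage = g @ P, g @ w
        for j in range(l):
            v = base.copy()
            v[j] += wastage
            out.append(v)
    return np.array(out)

def n_step_from(vertices, P, n):
    """Iterate Lemma 2 n times; duplicates and non-extreme points are kept,
    so the result may be pruned with a convex-hull routine if desired."""
    for _ in range(n):
        vertices = one_step_from(vertices, P)
    return vertices
```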
3.2. Attainability for Markov Systems Subject to Growth and Contraction
In his doctoral dissertation [3], Bartholomew suggested the extension of these findings to organisations that experience growth or contraction. This research gap will be addressed in this section.
The evolution of the total size of a system that is subject to growth or contraction can be described by the following:
$$N(t+1) = (1+\alpha)\,N(t),$$
where $N(t)$ corresponds to the total number of people in the organisational states at time $t$, and the parameter $\alpha$ refers to the rate of change in the size of the system over time. When $\alpha$ is negative, this indicates a contraction in the system size; conversely, when $\alpha$ is positive, this signifies growth. Note that in the case of growth or contraction, starting from a personnel structure $\mathbf{x}(t)$ at time $t$, $\mathbf{x}(t+1)$ is given by the following
$$\mathbf{x}'(t+1) = \mathbf{x}'(t)\,\mathbf{P} + \mathbf{R}'(t+1),$$
where the additive recruitment vector $\mathbf{R}(t+1)$ is chosen such that the sum of the components of $\mathbf{x}(t+1)$ equals $1+\alpha$ instead of 1. So, when talking about the structures, we need to normalise with respect to the $\ell_1$-norm and consider $\mathbf{x}(t+1)/(1+\alpha)$. Observe that the vector $\mathbf{R}(t+1)$ should not be confused with the classical recruitment vector $\mathbf{r}$, which is a probability vector that corresponds to $\mathbf{R}(t+1)/\left\|\mathbf{R}(t+1)\right\|_1$.
Now, note that the maximal amount of contraction is limited by the wastage probabilities, as wastage is the only way to leave the system, i.e., $\alpha \geq -\max_i w_i$. Furthermore, note that the procedure sketched in the proof of Theorem 2 is rooted in the fact that the vectors of the form $\mathbf{e}_i'\,\mathbf{P}$ have to be supplemented to achieve the desired vector, which has to sum to 1, as a constant-sized system is considered in Theorem 2. As long as $w_i + \alpha \geq 0$ holds for all $i$, the same reasoning can be repeated, i.e., we need to supplement the vectors $\mathbf{e}_i'\,\mathbf{P}$ to achieve the desired vector, of which the elements have to sum to $1+\alpha$. This immediately yields the following:
Theorem 3.
The attainable region for a Markov system with growth factor $\alpha$, where $w_i + \alpha \geq 0$ for all $i$, $\mathcal{A}_\alpha$, is the convex hull of the vectors $\frac{1}{1+\alpha}\left(\mathbf{e}_i'\,\mathbf{P} + (w_i+\alpha)\,\mathbf{e}_j'\right)$, i.e., the following is true:
$$\mathcal{A}_\alpha = \operatorname{conv}\left\{\tfrac{1}{1+\alpha}\left(\mathbf{e}_i'\,\mathbf{P} + (w_i+\alpha)\,\mathbf{e}_j'\right) \;:\; i, j \in \{1, \ldots, l\}\right\}.$$
This result covers the case of a growing system, as $\alpha \geq 0$ implies that $w_i + \alpha \geq 0$ for all $i$. However, this result does not indicate how to compute the attainable region for contracting systems with $w_i + \alpha < 0$ for some $i$. In the latter case, the sums of the components of some of the vectors of the form $\mathbf{e}_i'\,\mathbf{P}$ are simply too big, i.e., their $\ell_1$-norm exceeds $1+\alpha$; therefore, they cannot be used as building blocks of the attainable region. Now, suppose that there exists just one $i$ for which $w_i + \alpha < 0$. For all $m \neq i$, we can still supplement $\mathbf{e}_m'\,\mathbf{P}$ with the vectors $(w_m+\alpha)\,\mathbf{e}_j'$ to take into account all of the attainable convex combinations where $\mathbf{e}_m'\,\mathbf{P}$ contributes with a non-zero coefficient. But, for $\mathbf{e}_i'\,\mathbf{P}$, it is impossible to do this, as $w_i + \alpha < 0$. Simply discarding $\mathbf{e}_i'\,\mathbf{P}$ is no option either, as there might still exist convex combinations of $\mathbf{e}_i'\,\mathbf{P}$ with the $\mathbf{e}_m'\,\mathbf{P}$ that do result in attainable structures. To resolve this problem, we should simply take into account these convex combinations. If we write
$$\lambda_{im} = \frac{w_m+\alpha}{w_m-w_i} \qquad \text{for } w_i + \alpha < 0 \text{ and } w_m + \alpha \geq 0,$$
we can use this result to state the following theorem, which includes growth as well as contraction:
Theorem 4.
The attainable region for a Markov system with growth factor $\alpha$, $\mathcal{A}_\alpha$, is the convex hull of the vectors $\frac{1}{1+\alpha}\,\mathbf{v}$, where
$$\mathbf{v} \in \left\{\mathbf{e}_i'\,\mathbf{P} + (w_i+\alpha)\,\mathbf{e}_j' \;:\; w_i + \alpha \geq 0,\ j \in \{1, \ldots, l\}\right\} \cup \left\{\lambda_{im}\,\mathbf{e}_i'\,\mathbf{P} + (1-\lambda_{im})\,\mathbf{e}_m'\,\mathbf{P} \;:\; w_i + \alpha < 0,\ w_m + \alpha \geq 0\right\}.$$
With the use of Theorem 4, one can easily generalise Lemmas 1 and 2 to systems with growth factor $\alpha$.
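The vertex computation of Theorem 4 can be sketched as follows. This is our own illustrative Python code rather than an algorithm given in the paper, and the formula for the boundary combinations $\lambda_{im}$ follows our reading of the theorem above; it reproduces the numerical illustration with $\alpha = -0.15$ in Section 4.4.

```python
import numpy as np

def attainable_vertices_alpha(P, alpha):
    """Candidate vertices of the attainable region A_alpha (Theorem 4),
    returned as normalised structures (components summing to one).

    For rows with w_i + alpha >= 0 the candidates are e_i' P + (w_i+alpha) e_j';
    rows with w_i + alpha < 0 only enter through the boundary combinations
    lam * e_i' P + (1-lam) * e_m' P with lam = (w_m+alpha) / (w_m - w_i)."""
    P = np.asarray(P, dtype=float)
    l = P.shape[0]
    w = 1.0 - P.sum(axis=1)
    ok = w + alpha >= 0                      # rows that can still be supplemented
    verts = []
    for i in range(l):
        if ok[i]:
            for j in range(l):
                v = P[i].copy()
                v[j] += w[i] + alpha
                verts.append(v)
        else:
            for m in range(l):
                if ok[m]:
                    lam = (w[m] + alpha) / (w[m] - w[i])
                    verts.append(lam * P[i] + (1.0 - lam) * P[m])
    return np.array(verts) / (1.0 + alpha)   # empty if alpha < -max(w)
```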
Although this definition can be used for general starting regions $\mathcal{G}$, we argue that it can often be useful in practice to use the maintainability region $\mathcal{M}$ as a starting region, as a maintainable structure might already be in place within the company, or a company could be actively working towards such a structure. Furthermore, the maintainable region is the a priori smallest known state reunion attainable set, regardless of the starting position. Note that, in this case, the maintainable region is contained in every $n$-step state reunion attainable region starting from it.
4. SR-Maintainability and SR-Attainability for Semi-Markov Chains
4.1. State Reunion Maintainability
In order to study the attainability of a semi-Markov chain, we need to incorporate all of the information in the matrices $\mathbf{P}^{(k)}$ into one matrix that characterises the semi-Markov (SM) model. This involves segregating the states by their levels of organisational state seniority, with $\tilde{\mathbf{P}}$ serving as the transition matrix for these seniority-based disaggregated states. This leads to Definitions 3 and 4, which were initially developed to facilitate the study of maintainability for semi-Markov chains [20]. Suppose we have $l$ organisational states and one state that corresponds to leaving the system, which is called the wastage state. If the sequence of matrices $\left(\mathbf{P}^{(k)}\right)_k$ is of length $K+1$, i.e., one matrix for each state seniority level $0, 1, \ldots, K$, we define the following:
Definition 3.
The set of seniority-based states is given by
$$\tilde{S} = \left\{(a, b) \;:\; a \in \{0, 1, \ldots, K\},\ b \in \{1, \ldots, l\}\right\},$$
where the state $(a, b)$ corresponds to the staff in organisational state $b$ that has organisational state seniority equal to $a$.
Definition 4.
For $0 \leq a < K$, the elements of $\tilde{\mathbf{P}}$ are equal to the following:
$$\tilde{p}_{(a,b),(a+1,b)} = p^{(a)}_{bb}, \qquad \tilde{p}_{(a,b),(0,b')} = p^{(a)}_{bb'} \quad \text{for } b' \neq b,$$
and, if $a = K$, then the following is true:
$$\tilde{p}_{(K,b),(K,b)} = p^{(K)}_{bb}, \qquad \tilde{p}_{(K,b),(0,b')} = p^{(K)}_{bb'} \quad \text{for } b' \neq b,$$
with all remaining elements of $\tilde{\mathbf{P}}$ equal to zero.
If we redefine the state set, we can view this matrix as the transition matrix with state space $\tilde{S}$. In this way, all of the information regarding transitions is stored in one matrix $\tilde{\mathbf{P}}$, which can be used to elegantly state the definitions of state reunion maintainability and attainability. By writing the seniority-based stock vector at time $t$ as $\tilde{\mathbf{n}}(t)$ and the non-normalised recruitment vector, which entails the absolute recruitment counts, at time $t$ as $\tilde{\mathbf{R}}(t)$, we obtain the following equations that describe the evolution of the stock vector for a system with a growth factor $\alpha$:
$$\tilde{\mathbf{n}}'(t+1) = \tilde{\mathbf{n}}'(t)\,\tilde{\mathbf{P}} + \tilde{\mathbf{R}}'(t+1), \qquad \left\|\tilde{\mathbf{n}}(t+1)\right\|_1 = (1+\alpha)\left\|\tilde{\mathbf{n}}(t)\right\|_1.$$
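For concreteness, one plausible way to assemble $\tilde{\mathbf{P}}$ from the sequence $\mathbf{P}^{(0)}, \ldots, \mathbf{P}^{(K)}$ is sketched below in Python. The indexing convention is an assumption on our part (entities with seniority $a$ move according to $\mathbf{P}^{(a)}$, staying in the same organisational state increases the seniority by one up to the cap $K$, and moving to a different organisational state resets the seniority to zero); the exact convention used in [20] may differ, so this is an illustrative sketch rather than the authors' construction.

```python
import numpy as np

def build_P_tilde(P_seq):
    """Assemble the seniority-based transition matrix from a list
    P_seq = [P0, ..., PK] of l x l matrices (one per seniority level).

    Assumed convention (see the lead-in above): staying in organisational
    state b raises the seniority by one (capped at K); moving to b' != b
    resets the seniority to zero."""
    P_seq = [np.asarray(P, dtype=float) for P in P_seq]
    K = len(P_seq) - 1              # maximum seniority level
    l = P_seq[0].shape[0]           # number of organisational states

    def idx(a, b):                  # row/column index of seniority-based state (a, b)
        return a * l + b

    P_tilde = np.zeros(((K + 1) * l, (K + 1) * l))
    for a in range(K + 1):
        for b in range(l):
            for b2 in range(l):
                p = P_seq[a][b, b2]
                if b2 == b:
                    P_tilde[idx(a, b), idx(min(a + 1, K), b)] += p  # stay: seniority + 1
                else:
                    P_tilde[idx(a, b), idx(0, b2)] += p             # move: seniority reset
    return P_tilde
```

Under this convention, the seniority-based wastage probabilities are obtained as $\tilde{\mathbf{w}} = \mathbf{1} - \tilde{\mathbf{P}}\,\mathbf{1}$, i.e., one minus the row sums of $\tilde{\mathbf{P}}$.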
The concepts of state reunion maintainability as well as state reunion attainability can be stated by the use of a reunion matrix $\mathbf{U}$, which encodes the specific seniority-based states that are to be fused.
Definition 5.
For a transition matrix $\tilde{\mathbf{P}}$ with state space $\tilde{S}$, a matrix $\mathbf{U}$ is called the reunion matrix if each of its $l$ columns consists of ones exactly in the rows of the seniority-based states that belong to the corresponding organisational state, i.e., the following is true:
$$U_{(a,b),c} = \begin{cases} 1 & \text{if } b = c, \\ 0 & \text{otherwise}. \end{cases}$$
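With the state ordering used in the sketch above (seniority level first, organisational state second), the reunion matrix can be written down directly; right-multiplying a seniority-based stock vector by $\mathbf{U}$ sums, per organisational state, over all seniority levels. The following short Python snippet is again our own illustration.

```python
import numpy as np

def reunion_matrix(K, l):
    """Reunion matrix U with U[(a, b), c] = 1 if b == c and 0 otherwise,
    for seniority levels a = 0..K and organisational states b, c = 0..l-1,
    using the same state ordering as build_P_tilde above."""
    U = np.zeros(((K + 1) * l, l))
    for a in range(K + 1):
        for b in range(l):
            U[a * l + b, b] = 1.0
    return U

# Example: n_tilde @ reunion_matrix(K, l) gives the stocks per organisational state.
```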
We can now restate the concept of state reunion maintainability.
Definition 6
(State reunion maintainability [20]). A structure $\mathbf{x} \in \Delta^{l-1}$ is called state reunion maintainable (SR-maintainable) for a system with growth factor $\alpha$ under control by recruitment if there exists a path of seniority-based stock vectors $\left(\tilde{\mathbf{n}}(t)\right)_{t}$ and if a sequence of recruitment vectors $\left(\tilde{\mathbf{R}}(t)\right)_{t}$ can be chosen such that, for every $t$, the following is true:
$$\tilde{\mathbf{n}}'(t+1) = \tilde{\mathbf{n}}'(t)\,\tilde{\mathbf{P}} + \tilde{\mathbf{R}}'(t+1),$$
$$\left\|\tilde{\mathbf{n}}(t+1)\right\|_1 = (1+\alpha)\left\|\tilde{\mathbf{n}}(t)\right\|_1,$$
$$\tilde{\mathbf{n}}'(t)\,\mathbf{U} = \left\|\tilde{\mathbf{n}}(t)\right\|_1\,\mathbf{x}'.$$
A sequence of seniority-based stock vectors that satisfies these conditions will be called a seniority-based path associated to the SR-maintainable personnel structure $\mathbf{x}$.
4.2. Attainability for Semi-Markov Chains
For semi-Markov chains, we follow a similar approach to the one in Section 3. This would yield the following definition for attainability:
Definition 7.
A seniority-based structure $\tilde{\mathbf{x}}(t+1)$ is called attainable with respect to a semi-Markov process defined by $\tilde{\mathbf{P}}$ if there exists a structure $\tilde{\mathbf{x}}(t)$ such that $\tilde{\mathbf{x}}(t+1)$ is reachable from $\tilde{\mathbf{x}}(t)$ in one step using control by recruitment.
Remark 1.
As recruitment is only allowed in the seniority-based states with state seniority zero, most of the components of the recruitment vector $\tilde{\mathbf{R}}(t+1)$ are zero.
However, in the context of real-world applications, it might often be deemed less restrictive and more efficacious to focus on preserving the proportions in the organisational states instead, as is the case for state reunion maintainability.
4.3. State Reunion Attainability
A natural way to define state reunion attainability would be the following.
Definition 8
(General state reunion attainability). A structure $\mathbf{x} \in \Delta^{l-1}$ is called state reunion attainable with respect to a semi-Markov process defined by $\tilde{\mathbf{P}}$ if there exists a seniority-based structure $\tilde{\mathbf{x}}(t)$ such that $\mathbf{x}$ is reachable from $\tilde{\mathbf{x}}(t)$ in one step using control by recruitment.
Yet, it turns out that this approach is not informative with regard to state reunion attainability:
Lemma 3.
All structures $\mathbf{x} \in \Delta^{l-1}$ are state reunion attainable for every semi-Markov process defined by $\tilde{\mathbf{P}}$.
Proof.
For the starting structure with all of the personnel in the wastage state at time $t$, we know that none of these people will be in an internal state at time $t+1$, i.e., the stock vector at time $t+1$ will be completely determined by the recruitment vector $\tilde{\mathbf{R}}(t+1)$, which implies that, under control by recruitment, each structure would be attainable in this way. □
A more suitable definition of state reunion attainability would be the $n$-step state reunion attainability, starting from a set $\mathcal{G}$.
Definition 9
($n$-step state reunion attainability from $\mathcal{G}$). A structure $\mathbf{x} \in \Delta^{l-1}$ is called $n$-step state reunion attainable from $\mathcal{G}$ with respect to a semi-Markov process defined by $\tilde{\mathbf{P}}$ if there exists a structure $\tilde{\mathbf{x}}(t) \in \mathcal{G}$ such that $\mathbf{x}$ is reachable from $\tilde{\mathbf{x}}(t)$ in $n$ steps using control by recruitment.
We will denote, for a system subject to a growth factor of $\alpha$, the $n$-step attainable set starting from $\mathcal{G}$ and the $n$-step state reunion attainable set starting from $\mathcal{G}$ as $\mathcal{A}_{\alpha,n}(\mathcal{G})$ and $\mathcal{SA}_{\alpha,n}(\mathcal{G})$, respectively.
Remark 2.
Definition 9 is a generalisation of Definition 8 in the sense that they coincide for $n = 1$ with $\mathcal{G}$ the set of all possible seniority-based starting structures.
To calculate the n-step state reunion attainable structures in practice, one could use the same technique as in the proof of Theorem 2. Furthermore, this technique also yields the following results:
Lemma 4.
If $\mathcal{G} = \left\{\tilde{\mathbf{x}}(t)\right\}$, it follows that the one-step state reunion attainable region from $\mathcal{G}$ for a semi-Markov process with growth factor $\alpha$ defined by $\tilde{\mathbf{P}}$ is given by the following:
$$\mathcal{SA}_{\alpha,1}\!\left(\mathcal{G}\right) = \operatorname{conv}\left\{\tfrac{1}{1+\alpha}\left(\tilde{\mathbf{x}}'(t)\,\tilde{\mathbf{P}} + \left(\tilde{\mathbf{x}}'(t)\,\tilde{\mathbf{w}} + \alpha\right)\mathbf{e}_j'\right)\mathbf{U} \;:\; j \in Z\right\}.$$
Remark 3.
Note that $\mathcal{SA}_{\alpha,1}\!\left(\mathcal{G}\right)$ is empty if $\tilde{\mathbf{x}}'(t)\,\tilde{\mathbf{w}} + \alpha < 0$ and that only the $\mathbf{e}_j$ are considered that correspond to states with zero seniority, as these are the only states where recruitment can take place. We denote this restriction on the index $j$ as $j \in Z$.
Lemma 4 can be used to determine the attainable region for a finite set $\mathcal{G}$. We will first introduce the following notation: for a seniority-based structure $\tilde{\mathbf{g}}$ and $j \in Z$, we write
$$\mathbf{v}_j\!\left(\tilde{\mathbf{g}}\right) = \frac{1}{1+\alpha}\left(\tilde{\mathbf{g}}'\,\tilde{\mathbf{P}} + \left(\tilde{\mathbf{g}}'\,\tilde{\mathbf{w}} + \alpha\right)\mathbf{e}_j'\right).$$
If the set $\mathcal{G}$ is a convex set, we obtain the following:
Lemma 5.
If the starting region $\mathcal{G}$ is a convex set with vertices $\tilde{\mathbf{g}}_1, \ldots, \tilde{\mathbf{g}}_m$, it follows that the one-step state reunion attainable region from $\mathcal{G}$ for a semi-Markov process with growth factor $\alpha$ defined by $\tilde{\mathbf{P}}$ is given by the convex hull of the vectors $\mathbf{v}_j\!\left(\tilde{\mathbf{g}}_i\right)\mathbf{U}$, where
$$i \in \{1, \ldots, m\} \quad \text{and} \quad j \in Z.$$
To obtain $\mathcal{SA}_{\alpha,n}\!\left(\mathcal{G}\right)$, one could calculate $\mathcal{SA}_{\alpha,1}\!\left(\mathcal{G}\right)$ and use this as the new starting region to calculate $\mathcal{SA}_{\alpha,2}\!\left(\mathcal{G}\right)$. By iteratively following this procedure, one can obtain the desired $\mathcal{SA}_{\alpha,n}\!\left(\mathcal{G}\right)$.
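The vertex computation suggested by Lemma 5 can be sketched in Python as follows. The function below is our own illustration, under the same state-ordering assumptions as the earlier sketches; the caller supplies the indices of the zero-seniority states (the set $Z$), as recruitment is only allowed there.

```python
import numpy as np

def sr_attainable_vertices(vertices, P_tilde, U, alpha, zero_seniority_idx):
    """Candidate vertices of the one-step state reunion attainable region
    from a convex set of seniority-based structures (cf. Lemma 5):
    ((g' P~ + (g' w~ + alpha) e_j') U) / (1 + alpha) for every starting
    vertex g and every zero-seniority index j."""
    P_tilde = np.asarray(P_tilde, dtype=float)
    U = np.asarray(U, dtype=float)
    w_tilde = 1.0 - P_tilde.sum(axis=1)      # seniority-based wastage
    out = []
    for g in np.asarray(vertices, dtype=float):
        base = g @ P_tilde
        slack = g @ w_tilde + alpha          # mass available for recruitment
        if slack < 0:                        # cf. Remark 3: no feasible recruitment
            continue
        for j in zero_seniority_idx:
            v = base.copy()
            v[j] += slack
            out.append(v @ U)
    return np.array(out) / (1.0 + alpha)
```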
4.4. Illustrations
In this section, we illustrate our findings by constructing the attainable and state reunion attainable regions for different growth factors $\alpha$.
First, consider the Markov system defined by
$$\mathbf{P} = \begin{pmatrix} 0.5 & 0.4 & 0 \\ 0 & 0.6 & 0.3 \\ 0 & 0 & 0.8 \end{pmatrix} \qquad \text{and} \qquad \mathbf{w}' = \begin{pmatrix} 0.1 & 0.1 & 0.2 \end{pmatrix},$$
for which we determine the attainable region for the cases $\alpha \in \{0, 1, -0.15\}$.
For $\alpha = 0$, Theorem 2 yields that
$$\mathcal{A}_0 = \operatorname{conv}\left\{\mathbf{e}_i'\,\mathbf{P} + w_i\,\mathbf{e}_j' \;:\; i, j \in \{1, 2, 3\}\right\},$$
so it follows that $\mathcal{A}_0$ is the convex hull of the vectors (0.6, 0.4, 0), (0.5, 0.5, 0), (0.5, 0.4, 0.1), (0.1, 0.6, 0.3), (0, 0.7, 0.3), (0, 0.6, 0.4), (0.2, 0, 0.8), (0, 0.2, 0.8), and (0, 0, 1).
For $\alpha = 1$, Theorem 3 implies that the vertices are obtained from the vectors $\mathbf{e}_i'\,\mathbf{P} + (w_i+1)\,\mathbf{e}_j'$, which implies that $\mathcal{A}_1$ is the convex hull of the following vectors after normalisation (division by $1+\alpha = 2$): (1.6, 0.4, 0), (0.5, 1.5, 0), (0.5, 0.4, 1.1), (1.1, 0.6, 0.3), (0, 1.7, 0.3), (0, 0.6, 1.4), (1.2, 0, 0.8), (0, 1.2, 0.8), and (0, 0, 2).
For $\alpha = -0.15$, Theorem 4 implies that the vertices corresponding to $i \in \{1, 2\}$, for which $w_i + \alpha < 0$, have to be replaced by the convex combinations $\lambda_{i3}\,\mathbf{e}_i'\,\mathbf{P} + (1-\lambda_{i3})\,\mathbf{e}_3'\,\mathbf{P}$ with $\lambda_{13} = \lambda_{23} = 0.5$, which implies that $\mathcal{A}_{-0.15}$ is the convex hull of the following vectors after normalisation (division by $1+\alpha = 0.85$): (0.25, 0.2, 0.4), (0, 0.3, 0.55), (0.05, 0, 0.8), (0, 0.05, 0.8), and (0, 0, 0.85).
These regions are shown in Figure 1.
Figure 1.
The attainable regions $\mathcal{A}_0$, $\mathcal{A}_1$, and $\mathcal{A}_{-0.15}$.
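The vertices listed above can be checked numerically. The short, self-contained Python script below uses the transition matrix and wastage vector of this example together with $\alpha = -0.15$ and regenerates the five points of $\mathcal{A}_{-0.15}$ before normalisation; the $\lambda$-combination step reflects our reading of Theorem 4.

```python
import numpy as np

# Transition matrix and wastage vector of the illustration above.
P = np.array([[0.5, 0.4, 0.0],
              [0.0, 0.6, 0.3],
              [0.0, 0.0, 0.8]])
w = 1.0 - P.sum(axis=1)          # (0.1, 0.1, 0.2)
alpha = -0.15

verts = []
ok = w + alpha >= 0              # only the third state can absorb the contraction
for i in range(3):
    if ok[i]:
        for j in range(3):
            v = P[i].copy()
            v[j] += w[i] + alpha
            verts.append(v)
    else:                        # combine with a row that still has slack
        for m in range(3):
            if ok[m]:
                lam = (w[m] + alpha) / (w[m] - w[i])
                verts.append(lam * P[i] + (1 - lam) * P[m])

print(np.round(verts, 3))        # the five vertices listed in the text,
                                 # before normalisation by 1 + alpha = 0.85
```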
Furthermore, we can use the maintainable region $\mathcal{M}$ for the constant-sized Markov system defined by $\mathbf{P}$ as the starting set. We know that the maintainable region is given by the following [20]:
$$\mathcal{M} = \left\{\mathbf{x} \in \Delta^{2} \;:\; \mathbf{x}'\,\mathbf{P} \leq \mathbf{x}'\right\},$$
which, for this example, is the convex hull of the vectors $(0, 0, 1)$, $(0, 0.4, 0.6)$, and $\left(\tfrac{2}{7}, \tfrac{2}{7}, \tfrac{3}{7}\right)$.
If we use this region as the set $\mathcal{G}$, following Lemma 2, we obtain the following:
So, if we define , we obtain the following:
These regions are shown in Figure 2.
Figure 2.
The maintainable region $\mathcal{M}$ and the attainable regions starting from $\mathcal{M}$.
Now, consider a semi-Markov system for which the sequence of transition matrices $\left(\mathbf{P}^{(k)}\right)_k$ is given by the following:
We obtain, following Definition 4, the corresponding seniority-based transition matrix $\tilde{\mathbf{P}}$ and the associated wastage vector $\tilde{\mathbf{w}}$.
Let $\mathcal{G}$ be the convex set with vertices given by the following:
Using Lemma 5, we obtain the vectors $\mathbf{v}_j\!\left(\tilde{\mathbf{g}}_i\right)$ by a simple calculation. Multiplying these vectors by the reunion matrix $\mathbf{U}$ yields the one-step state reunion attainable region, which is the convex hull of the following vectors:
This region is shown in Figure 3.
Figure 3.
The one-step state reunion attainable region.
5. Conclusions and Further Research Avenues
In this study, we explore the concept of control through recruitment, broadening the traditional concept of attainability for constant-sized Markov systems to systems subjected to growth and contraction. Furthermore, we generalise this concept to semi-Markov chains. Our exploration is characterised not only by its expansion of existing frameworks but also by the introduction of an innovative concept known as state reunion attainability (SR-attainability), based on the concept of state reunion maintainability [20]. This new concept allows us to gain important theoretical insights and identify the SR-attainable regions. Our work is distinguished by its novel method of broadening the scope of attainability and the introduction of SR-attainability, providing both theoretical understanding and practical algorithms, such as Theorem 4 and Lemma 5, for use in this field.
Future research could explore the broadening of reunion matrices, aiming to extend SR-attainability to include the attainability of various state combinations based on seniority, such as reclassification by overall seniority or pay scale. This opens the possibility of preserving selective subsets of seniority-based states, rather than encompassing all states, giving rise to a concept of partial SR-attainability. Consequently, this would allow for the application of other and more diverse reunion matrices $\mathbf{U}$, which encode the fusion of seniority-based states (Definition 5), thereby expanding the practical use of the SR-attainability framework.
Author Contributions
Conceptualization, B.V.; validation, B.V. and M.-A.G.; writing—original draft preparation, B.V.; writing—review and editing, B.V. and M.-A.G.; visualization, B.V.; supervision, M.-A.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data sharing is not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Amenaghawon, V.A.; Ekhosuehi, V.U.; Osagiede, A.A. Markov manpower planning models: A review. Int. J. Oper. Res. 2023, 46, 227–250.
- Ezugwu, V.; Ologun, S. Markov chain: A predictive model for manpower planning. J. Appl. Sci. Environ. Manag. 2017, 21, 557–565.
- Bartholomew, D.J. A Mathematical Analysis of Structural Control in a Graded Manpower System; University of California: Berkeley, CA, USA, 1969.
- Vassiliou, P.-C.G. Non-Homogeneous Markov Chains and Systems: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2022.
- Bartholomew, D.J. Stochastic Models for Social Processes; Wiley: Hoboken, NJ, USA, 1967.
- Azcue, P.; Frostig, E.; Muler, N. Optimal strategies in a production inventory control model. Methodol. Comput. Appl. Probab. 2023, 25, 43.
- Fernández Cara, E.; Zuazua Iriondo, E. Control theory: History, mathematical achievements and perspectives. Boletín Soc. Esp. Mat. Apl. 2003, 26, 79–140.
- Davies, G. Structural control in a graded manpower system. Manag. Sci. 1973, 20, 76–84.
- Davies, G. Control of grade sizes in a partially stochastic Markov manpower model. J. Appl. Probab. 1982, 19, 439–443.
- Tsaklidis, G.M. The evolution of the attainable structures of a homogeneous Markov system by fixed size. J. Appl. Probab. 1994, 31, 348–361.
- Vajda, S. Mathematical aspects of manpower planning. J. Oper. Res. Soc. 1975, 26, 527–542.
- Masood, H.; Podolsky, M.; Budworth, M.-H.; Karajovic, S. Uncovering the antecedents and motivational determinants of job crafting. Career Dev. Int. 2023, 28, 33–54.
- Tsaklidis, G.M. The evolution of the attainable structures of a continuous time homogeneous Markov system with fixed size. J. Appl. Probab. 1996, 33, 34–47.
- Georgiou, A.; Vassiliou, P.-C. Periodicity of asymptotically attainable structures in nonhomogeneous Markov systems. Linear Algebra Appl. 1992, 176, 137–174.
- Vassiliou, P.-C.; Georgiou, A. Asymptotically attainable structures in nonhomogeneous Markov systems. Oper. Res. 1990, 38, 537–545.
- Vassiliou, P.-C.; Tsantas, N. Stochastic control in non-homogeneous Markov systems. Int. J. Comput. Math. 1984, 16, 139–155.
- Kalamatianou, A. Attainable and maintainable structures in Markov manpower systems with pressure in the grades. J. Oper. Res. Soc. 1987, 38, 183–190.
- Ossai, E. Maintainability of manpower system with restricted recruitment. Glob. J. Math. Sci. 2013, 12, 1–4.
- Vassiliou, P.-C.; Papadopoulou, A. Non-homogeneous semi-Markov systems and maintainability of the state sizes. J. Appl. Probab. 1992, 29, 519–534.
- Verbeken, B.; Guerry, M.-A. State reunion maintainability for semi-Markov models in manpower planning. arXiv 2024, arXiv:2306.02088. Under review at Methodology and Computing in Applied Probability.
- Barbu, V.; Limnios, N. Semi-Markov Chains and Hidden Semi-Markov Models toward Applications: Their Use in Reliability and DNA Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009; Volume 191.
- Verbeken, B.; Guerry, M.-A. Discrete time hybrid semi-Markov models in manpower planning. Mathematics 2021, 9, 1681.