4.1. Risk Management Maturity Measurement Models
“Maturity” is a complicated and slippery concept, since its definition depends largely on the context in which it is applied, on how it is used and on the individual who is evaluating it. For our purposes, maturity in the SME setting means reaching a certain level of effectiveness and performance with regard to the management of OHS risks. Measurement of the level of risk management maturity is fundamental to self-evaluation and improvement. As Kaplan and Norton [
28] put it, “You can’t manage what you can’t measure”.
In order to evaluate OHS risk management maturity, many SMEs use benchmarking, which is, according to Moriarty and Smallman ([
31], p. 498), “an exemplar-driven teleological process operating within an organization with the objective of intentionally changing an existing state of affairs into a superior state of affairs.” In commerce, benchmarking is divided into two broad categories, in-house and external. The in-house variety engages branches, departments or services within the same company and usually consists of comparing these divisions in order to improve practices; this limited scope is considered a weakness. In contrast, external benchmarking relies on comparison with competitors and thus provides a way of revealing opportunities for improvement [
27]. It is used widely in research on SMEs [
36,
37]. With regard to risk management, all companies prefer external references [
27]. It has long been believed that competitive benchmarking leads to better results [
38]. This approach makes it easier for companies to identify good practices, discover innovative improvements, learn who their competitors are and what advantages they hold and find the best measurement tools and methods. In a study of eight companies focused on reducing occupational injuries, benchmarking was found to improve OHS risk management performance and to reduce the incidence of injuries by 77% on average [
39]. The importance of benchmarking to OHS had been demonstrated earlier using measurable indicators of performance and factors known to affect OHS management [
40]. Among these factors were inspections, meetings, regulatory framework, training, communication, collaboration, workplace relations and resources.
Performance evaluations should focus both on means (or processes) and on results [
41]. Performance can also be described in terms of “practices” and “quantifiable measurements.” Practices are descriptions of behaviours that account for differences in performance. These may be associated with processes, organizational structures, management systems, human factors or strategic approaches. Quantifiable measurements refer to indicators associated with company performance, technical performance (productivity) and efficiency (human input). The relevance of these measurements must be reviewed continually. Several factors that influence benchmarking in this context have been identified, including communication, training, commitment and involvement of upper management (leadership), worker participation, development of policies and effective setting of goals, a systematic approach to management and decision-making, continuous improvement, teamwork and programs of recognition and appreciation of effort [
42].
During the past decade, several models and technological tools have been developed for the measurement of “risk management maturity.” The growing interest of researchers in these models is reflected in the number of articles published, particularly during the years 2009 and 2010 [
32]. Prior to this, a “maturity model” had been defined as “a simplified representation of the evolution that an organization’s capacity for sound judgment and responsiveness may undergo in a given field of skill” ([
43], p. 49). A model of risk management maturity was later characterized as a means of helping businesses to determine their current level of maturity, to understand their strengths and weaknesses and to implement suitable measures in order to improve their level of performance [
33].
Several “maturity measurement models” appear to have originated from the quality management maturity grid, first proposed in 1979 by Crosby [
44]. This grid is used to raise awareness of the general need for a sustained, long-term quality program in all companies. It defines five progressive stages corresponding to increasing awareness of quality requirements, namely uncertainty, awakening, enlightenment, wisdom and certainty [
45]. Maturity measurement models often have similar structures [
44]. They are based on the definition of a set of process areas, whose degree of control within an organization is evaluated in terms of maturity level. Among the models discussed in the literature, one of the most relevant appears to be the capability maturity model integration (CMMI), used notably to improve commercial software development processes [
46]. Another is the organizational project management maturity model (OPM3) developed by the Project Management Institute (PMI
®) [
47]. Also well-known are the control objectives for information and related technology (COBIT) model, used to manage the security, reliability and compliance risks inherent in information systems [
48] and the risk management capability maturity model for complex product systems (CoPS).
It should be noted that the model developed at the Software Engineering Institute of Carnegie Mellon University was revised and corrected in 1997 [
44,
49]. The resulting capability maturity model (CMM) features a simple structure that facilitates evaluation of the risk management processes of a business, based on indicators (culture, behaviour, etc.) suggested by observation. These composite indicators facilitate communication of results, particularly through plain language and a common reference system. The CMM describes an organization using four levels (
Figure 3) of increasing risk management capacity (naïve, novice, normalized and natural) measured in terms of four attributes, namely culture, processes, experience and application [
50,
51]. Each level is defined and characterized clearly, thus allowing companies to self-evaluate their level of maturity using a previously established scale. Once this level of maturity has been determined, a company can set clear targets for improvement [
52].
The risk management maturity system (RMMS) was developed for application to construction projects [
33]. This model is based on a system composed of three functional elements: capability, evaluation and improvement.
Figure 4 describes the capability element: (1) managerial capability associated with the “organization,” which represents the support provided by the company with regard to safety, stakeholder management, culture and the risk management program and (2) capability associated with “processes,” which represents the identification, analysis, evaluation and handling of risks as well as the collection of related information.
In general, managerial capabilities associated with the “organization” and with “processes” are at least somewhat interconnected. In fact, the former provides the basis of risk management processes and improvement of either favours improvement of the other (a mutual relationship). The model is divided into six categories and 21 activities, including feasibility analysis of risk management, the risk identification process, structured description of risks, selection and use of risk analysis methods, risk analysis results, identification of residual risk and risk monitoring.
The risk management maturity model (RM3) was also developed for application to construction projects [
53]. Several facets of projects are analyzed, including the system, the process, human resources and culture. RM3 comprises elements such as risk management and leadership capability, risk management organizational culture, risk identification and analysis capability and aptitude for developing and applying a standardized process of risk management. It includes four levels of maturity: initial, repeatable, directed and optimized. The reliability and effectiveness of RM3 have been demonstrated in several empirical applications.
We also note a model developed initially to measure and later to promote safety culture in Brazilian petrochemical companies [
54]. In its original configuration, it included three levels (pathological, bureaucratic and generative). Two others were added, namely reactive and proactive [
55] and the term “bureaucratic” was changed to “calculating.” The different levels have been defined as follows [
56]:
Pathological: unsafe conditions are viewed as caused by workers. The principal focus of the business is productivity. Safety legislation and regulations are disregarded or deliberately circumvented.
Reactive: the organization is starting to take safety seriously. Measures and actions are undertaken as accidents occur.
Calculating: safety is guided by management systems based on data gathering. It is suggested or imposed by management more than it is desired by the workers.
Proactive: performance is improved using predictions. Growing worker involvement marks the beginning of a transition away from a purely top-down approach.
Generative: active participation is preached and practised at all levels. Safety is perceived as a central and crucial issue for the company.
Using a model proposed by Hudson [
57], Parker et al. [
58] designed a framework that could help SMEs evaluate their safety culture maturity. Following improvements by Filho et al. [
54], the model now includes the following five dimensions:
“Information” refers to the information system, that is, the manager’s evaluation of the system put in place to facilitate circulation of and access to information on workplace accidents, for the purpose of improving safety performance.
“Organizational learning” refers to information processing and analysis and to training of workers in subjects related to safety.
“Involvement” refers to that of workers in the risk management process.
“Communication” refers to horizontal as well as vertical exchanges of information within the company.
“Commitment” refers to the support provided by the company with regard to safety.
Among the other relevant research articles identified are the safety culture maturity model [59], the model to assess the maturity level of the risk management process in information security [60], the safety culture maturity and risk management maturity in industrial organizations [61] and the risk maturity model for Dutch municipalities [
34]. Underlying the development of these models is the inclusion of the more recently described notion of good risk management practices [
33,
34], which are particularly relevant to the SME problem. In summary, models of risk management maturity measurement are helpful in several ways. The literature nevertheless mentions some of their drawbacks, such as weak grounding (a high degree of subjectivity), structural complexity and overly theoretical formulations.
4.2. Indicators Suitable for Measuring Risk Management Maturity
All of the models of maturity measurement cited above are based on the use of indicators [
31,
42,
56,
62,
63]. Measurable indicators play an important role in the process of improving OHS risk management performance in businesses [
29]. They allow comparisons of different situations and provide reliable and meaningful information on relevant phenomena [
31]. They also appear indispensable for sound decision-making and for getting staff involved in and committed to implementing actions designed to improve OHS risk management [
30].
A measurable indicator of performance is defined as information that can be used to monitor the evolution of a phenomenon relative to defined objectives or a previously determined and accepted framework [
64]. Updated regularly, such indicators are useful for evaluating the effectiveness and efficiency of a system or a part thereof over time. The number of inspections completed on time divided by the total number of inspections planned for a given time interval is, for example, an indicator of the proper execution of maintenance operations.
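For illustration, a minimal Python sketch of this inspection indicator; the function and variable names are ours, not drawn from the cited sources:

```python
def on_time_inspection_rate(completed_on_time: int, planned: int) -> float:
    """Share of planned inspections completed on time over a given period."""
    if planned == 0:
        raise ValueError("no inspections were planned for this period")
    return completed_on_time / planned

# Example: 18 of the 20 inspections planned for the quarter were done on time.
print(f"{on_time_inspection_rate(18, 20):.0%}")  # 90%
```

Tracked period after period, a sustained downward trend in such a ratio may signal a weakening maintenance process before any injury occurs.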
An indicator can also be an element or group of elements of meaningful information, a representative index or a statistic representing some concern [
65]. These are derived from data collected on an observed condition, manifested phenomenon or element associated with the proper functioning of an organization. The choice of an indicator is based on a certain number of criteria that must be met. In this context, it should be kept in mind that monitoring trends over time is an important aspect of risk management and must be based on objective characteristics such as those proposed by the Institut national de l’environnement industriel et des risques (INERIS) [
64] and the Conseil du trésor du Québec [
66]:
Relevance: The concern being measured and the associated expectations have real significance in terms of the objective to be met. The result or phenomenon to be measured is shown clearly and sufficient information is obtained on the effects of the activities underway and the expected results.
Validity: The measurement provides accurate and precise evaluation of the situation of concern. It should be noted that validity is verified by cross-comparison with other indicators used to measure the same phenomenon.
Feasibility: The data associated with an indicator are accessible when needed and at an acceptable cost.
User friendliness: The indicator is simple, clear, easy to understand and to present and is interpreted the same way by all, within known limits.
Reliability: The measurements obtained correspond to reality. Values remain constant when the measurement is repeated under identical conditions. The overall reliability of the indicator depends largely on the reliability of the underlying data (the actual measurements).
Compatibility: Standard variables, calculation methods and frequencies of measurement are used, as recommended by recognized official organizations, thus lending credibility to the indicator.
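As an illustration only, these six criteria can be treated as a checklist applied to each candidate indicator. The following Python sketch (with hypothetical names of our own) retains a candidate only when every criterion is met:

```python
from dataclasses import dataclass, fields

@dataclass
class IndicatorCriteria:
    """Checklist of the six selection criteria listed above."""
    relevance: bool
    validity: bool
    feasibility: bool
    user_friendliness: bool
    reliability: bool
    compatibility: bool

    def is_acceptable(self) -> bool:
        # A candidate indicator is retained only if every criterion is met.
        return all(getattr(self, f.name) for f in fields(self))

candidate = IndicatorCriteria(True, True, True, True, True, False)
print(candidate.is_acceptable())  # False: compatibility not yet verified
```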
Although the usefulness of measurable indicators needs little further explanation, their injudicious use can be counterproductive and costly [
67]. For example, over-rating a performance indicator can lead to erroneous results and unrealistic objectives, which will likely have a negative impact on the entire risk management process and on staff motivation.
In the literature, performance indicators are divided into several categories: quantitative, qualitative, reactive, proactive, operational, strategic, in-house, external and so on. Even the names of the various types vary depending on the author and the field of research. The two main categories that emerge most consistently from the literature search are lagging indicators (reactive or retrospective) and leading indicators (proactive or predictive).
Whereas most business performance indicators focus on the maximization of results (income, sales, quality, etc.), reactive OHS performance indicators (mishap frequency, severity, frequency-severity and cost) focus on the reduction of workplace injuries [
68]. In Québec, the CNESST evaluates economic sectors and calculates dues using reactive indicators only [
69,
70]. This method has provided close-up views of the OHS situation. However, the data show only what types of injury occur and where, while providing little information about their causes. It has been shown that this way of measuring OHS performance is imprecise, inappropriate and even counterproductive [
70]. It is of little aid to businesses where accident rates are low but risk situations are numerous. In addition, reactive indicators alone provide a partial and thus incomplete view of reality, one that can and often does lead to ineffective actions. Moreover, businesses tend to under-report accidents in order to avoid increases in annual dues. Results-oriented indicators can be divided into two levels of concern [
68]: (1) injury prevention, with result categories such as number of incidents, equipment damage, rescue, first aid, temporary reassignment and indirect costs and (2) monitoring of losses, with conventional result categories including number of accidents, number of illnesses, number of workdays lost, direct costs, dues and so on. Both levels concern past and hence unchangeable occurrences. Too much emphasis on reactive indicators has been identified as a frequent major obstacle to the improvement of OHS risk management performance [
71,
72].
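For illustration, a minimal sketch of two conventional lagging indicators of this kind. The 200,000-hour normalization base (roughly 100 full-time workers over a year) is one common convention and is our assumption, not a figure taken from the article:

```python
HOURS_BASE = 200_000  # assumed normalization base (~100 full-time workers/year)

def frequency_rate(injuries: int, hours_worked: float) -> float:
    """Lagging indicator: recordable injuries per HOURS_BASE hours worked."""
    return injuries * HOURS_BASE / hours_worked

def severity_rate(days_lost: float, hours_worked: float) -> float:
    """Lagging indicator: workdays lost per HOURS_BASE hours worked."""
    return days_lost * HOURS_BASE / hours_worked

# Example: an SME with roughly 80,000 hours worked in a year,
# 3 recordable injuries and 25 workdays lost.
print(frequency_rate(3, 80_000))  # 7.5
print(severity_rate(25, 80_000))  # 62.5
```

As the surrounding discussion notes, such rates describe only past, unchangeable occurrences; they say nothing about the causes of injuries or about risks that have not yet produced one.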
Proactive indicators provide means of monitoring current OHS performance effectiveness (maturity), activities and risk management processes, as well as information on the current OHS status of the organization. In other words, they provide advance warning of potential problems and therefore the possibility of implementing preventive measures before mishaps occur. They can also be used to reveal weaknesses in risk management procedures or worker behaviours that increase the risk of injury [
72,
73]. Given the dynamic nature of the conditions under which most injuries occur, a proactive indicator will be suitable only if it can provide constant monitoring. Being proactive is beneficial only if the chosen indicator provides timely information that can be interpreted effectively and translated into concrete actions. The USA National Safety Council [
74] describes an effective proactive indicator as being actionable, feasible, meaningful, transparent, easy to communicate, valid, helpful and timely. The use of proactive indicators is generally associated with the following expectations: (1) anticipating, foreseeing or eliminating risks and losses (the possibility of taking swift action); (2) monitoring and evaluating OHS risk management efficiency; (3) promoting safe behaviours, staff involvement and continuous improvement and (4) communicating reliable findings. The USA National Safety Council [
74] suggests three broad categories of proactive indicators, namely (1) focused on operations (relevant to the functioning of SMEs); (2) focused on systems (i.e., management systems); and (3) focused on behaviours or actions of individuals or groups in the workplace, personal interactions in association with supervision and management. As shown in
Figure 5, there are also three hierarchical levels of application of indicators, namely strategic, tactical and operational [
64].
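As a sketch of how these two taxonomies can be combined, the following hypothetical tagging assigns each proactive indicator both a National Safety Council focus category and a hierarchical level of application; the example indicators are invented for illustration:

```python
from enum import Enum

class Focus(Enum):   # National Safety Council categories
    OPERATIONS = "operations"
    SYSTEMS = "systems"
    BEHAVIOURS = "behaviours"

class Level(Enum):   # hierarchical levels of application
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"

# Invented examples of proactive indicators, tagged along both dimensions:
indicators = [
    ("share of near misses investigated within 48 h",
     Focus.OPERATIONS, Level.OPERATIONAL),
    ("percentage of managers trained in risk analysis",
     Focus.SYSTEMS, Level.TACTICAL),
]
for name, focus, level in indicators:
    print(f"{name}: {focus.value} / {level.value}")
```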
A few guides and research articles discuss the use of proactive indicators of OHS status and performance, such as a publication by the Government of Alberta [
73]. Based on field investigation, this guide emphasizes that the implementation of indicators may encounter several obstacles, including the absence of standardized practices or of reference criteria in the sector of activity studied. One study concluded that benchmarking should be considered the most appropriate method of comparing proactive indicators and their impact on businesses [
31]. The proactive indicators mentioned most often in the literature are listed below in
Table 1.
4.3. Preliminary Model of OHS Risk Management Maturity Measurement
The model was developed in two steps. The first step consisted of compiling a preliminary list of maturity indicators and grouping these into families in accordance with the main objective of this study, based on relevance to (1) risk management processes, (2) organization and management, (3) individuals and (4) tasks and resources. These four families formed the basis of our preliminary model. We supported the selected indicators with concrete examples. The second step consisted of using the list of families in conjunction with relevant models found in the literature in order to propose a preliminary model better adapted to the SME context.
Our approach to developing a model of OHS management maturity in the SME context was drawn mainly from work by Jia et al. [
33] and Cienfuegos [
34]. This preliminary model is based on the notion of levels of maturity. These levels find support in organizational learning theory as described in work by Argyris and Schön [
35].
The indicators are shown grouped into the four main families in
Figure 6. It is important to specify the frequency (annually, quarterly, etc.) of evaluation.
“Process based indicators” (
Table 2) refers to quantification of OHS risk management activities, preventive actions, corrective actions, risk characterization by category and/or level (risk profiling) and finally monitoring and review. This last activity makes it possible to oversee the implementation and effective application of risk-reducing measures and to ensure that the results obtained contribute to achieving the objectives.
“Organization and management based indicators” (
Table 3) refers to quantification of the role of management, particularly in terms of commitment to the deployment of risk management at all levels of the business, communication of risks present in the work setting, OHS training for workers, recognition of good behaviour, leadership, organizational and process changes, evaluation of the relevance of proactive indicators, carrying out inspections and checking actual OHS practices, compliance with OHS regulations and prevention through design.
“Individuals based indicators” (
Table 4) refers to quantification of measures regarding information on OHS, perception of OHS risks by workers, worker involvement and participation with regard to OHS and behaviour in the workplace.
“Tasks and resources based indicators” (
Table 5) groups quantification of the danger level of the work environment and of potentially risky situations requiring permits, workload, the technological sophistication accessible to the company and its use to limit OHS risks, equipment use and preventive maintenance and finally the budget allotted to OHS.
At this point, as mentioned in the research methodology section, we used maturity levels based on organizational learning theory. According to this theory, learning is deemed to have occurred when an organization detects and corrects an error in its management of OHS risks. The first step is devoted to error discovery, while the subsequent steps correspond to its correction. As shown in
Figure 7, there are at least two ways of correcting an error: acting on the underlying behaviour, which requires single-loop learning and acting on the management system itself, which constitutes double-loop learning [
91].
These two types of learning and the definition of the five levels of maturity are listed in
Table 6. The levels included in the preliminary model are the immature stage (N1), the mature stage (N5) and three intermediate stages. In order to make the model applicable, percentage intervals for each level (0 to 100%) and scores for each element of the corresponding family (weighted 1 to 5) need to be allotted.
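Since the article leaves this allotment open, the following Python sketch illustrates one possible scoring scheme; the indicator scores, weights and the equal 20-point cut-offs between levels N1 and N5 are assumptions for illustration only:

```python
# Hypothetical scoring sketch for the preliminary model. Scores and weights
# use the 1-to-5 scales mentioned above; the equal 20-point bands mapping the
# weighted percentage to levels N1-N5 are an illustrative assumption.

def family_score(scores_and_weights: list[tuple[int, int]]) -> float:
    """Weighted score of one indicator family, normalized to 0-100%.

    Each element is (score, weight), both on a 1-to-5 scale, with 5 the
    best attainable score.
    """
    total = sum(score * weight for score, weight in scores_and_weights)
    maximum = sum(5 * weight for _, weight in scores_and_weights)
    return 100 * total / maximum

def maturity_level(overall_pct: float) -> str:
    """Map an overall percentage to a maturity level (assumed equal bands)."""
    bands = [(20, "N1"), (40, "N2"), (60, "N3"), (80, "N4"), (100, "N5")]
    return next(level for cutoff, level in bands if overall_pct <= cutoff)

# Example: one family with three indicators scored 3, 4 and 2,
# carrying weights 5, 3 and 2 respectively.
pct = family_score([(3, 5), (4, 3), (2, 2)])
print(f"{pct:.0f}% -> {maturity_level(pct)}")  # 62% -> N4
```

Whether the bands should be equal, and how family scores should be aggregated into an overall company score, are design choices that the validation step of this research would need to settle.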