Maturity of Industry 4.0: A Systematic Literature Review of Assessment Campaigns

Abstract: The Industry 4.0 paradigm represents the fourth industrial revolution, embodied by the marriage between information and communication technologies and manufacturing. Assessment campaigns are conducted to examine the status of deployment of that paradigm, mostly through self-assessment questionnaires. Each campaign is typically limited in scope, involving just a group of companies located in a few countries at most. Such limitation does not allow an overall view of Industry 4.0’s diffusion. In this paper, we offer that panoramic view through a systematic literature review. The number of papers devoted to Industry 4.0 assessment grows steadily. However, many papers do not provide essential information about the assessment campaigns they report, e.g., not detailing the number, type, or location of companies involved and the questionnaire employed. We observe a large diffusion in Europe and Asia but not in the U.S., with the Top 5 countries being Malaysia, Poland, Italy, Germany, and Slovakia. The campaigns uniformly cover small, medium, and large companies but not all industrial sectors. The choice of questionnaires is extremely varied, with no standard emerging.


Introduction
Industry 4.0 is the popular name given to a set of innovations brought about by the extensive use of smart industrial devices with built-in sensing and communication capabilities that can be easily interconnected (see [1][2][3] for a brief introduction). Ample descriptions of the genesis and development of Industry 4.0 can be found in [4][5][6]. Such a massive deployment both supports and is spurred by several technologies, including, e.g., adaptive robotics, big data analytics, cloud platforms, and advanced networking technologies. In very few words, Industry 4.0 embodies the introduction of ICT into manufacturing. Since it promises a shift of paradigm rather than just an increase in efficiency, Industry 4.0 has been dubbed the 4th Industrial Revolution, or 4IR for short.
Considerable efforts have been made to assess the level of deployment of Industry 4.0. Such an assessment is relevant since it allows companies to understand where the deployment lags behind and resources have to be spent to accelerate the transition.
A few attempts have been made to measure the state of that transition through socioeconomic and technological indicators available from official statistics [7][8][9]. In addition to using open governmental data, an original approach has been taken in [10] to also consider bibliometric and patent data as well as news stories. Due to the geographically focused nature of the supporting data, all those studies provide results about regions or nations rather than individual companies.
Instead, most authors have opted for a self-assessment approach, where each company can assess its own level of deployment by compiling a questionnaire, which goes by the name of maturity model or readiness model 1. Such tools have been extensively deployed since the 1970s, becoming an important management tool to accompany the evolution of companies in both the short and the long term [11]. Several models have been proposed in the literature, and some surveys have been devoted to listing and analysing them. The number of identified maturity models varies from 10 in [12] to 15 in [13] and 30 in [14]. Sony and Naik have tried to identify the key elements that every maturity model should incorporate [15].
Though a company could use a maturity model on its own and keep the results to itself, several authors have attempted to assess the level of deployment of Industry 4.0 in several companies at the same time by submitting the same questionnaire (model) to all of them. Here we refer to those efforts as assessment campaigns.
The diffusion of maturity models can be considered an indication of their usefulness. The need to assess the readiness of companies to embrace Industry 4.0 arises from the urgency for companies to introduce ICT into manufacturing processes and understand what still needs to be done to accomplish that task. Companies would not be willing to participate in those assessment campaigns if they did not deem them useful for their effort to deploy the Industry 4.0 paradigm. Hence, a large diffusion of maturity models (i.e., their creation and administration) implies that many companies consider them useful enough to adopt them and take the time to fill in the questionnaires researchers submit to them. We can then consider the diffusion of maturity models as a straightforward indicator of their usefulness, and we can evaluate their diffusion by examining how much and where they have been employed (as reported in the literature). Though several papers have reported the results of an assessment campaign, no overall analysis has been made of those reports. This is the research gap we intend to fill here: we wish to provide a panoramic view of the assessment campaigns described in the scientific literature. The general research question we are interested in is: What is the current status of Industry 4.0 maturity assessment campaigns? We can instantiate that general question into the following specific research questions, which address several aspects of those Industry 4.0 maturity assessment campaigns:
RQ1 What is the time trend of assessment campaigns?
RQ2 What are the preferred publication outlets?
RQ3 Which geographical areas are most concerned?
RQ4 What is the breadth of the assessment campaigns?
RQ5 What are the sizes of the companies involved?
RQ6 In which industries have assessment campaigns been conducted?
RQ7 Do assessment campaigns adopt standard questionnaires?
In this paper, we approach answering those RQs by carrying out a systematic literature review (SLR). We describe the methodology in Section 2 and report the results of the SLR in Section 3. In Section 4, we see what those results tell us about the RQs.

Method
As hinted in the Introduction, we have conducted a systematic literature review to answer our RQs. We have adopted the well known PRISMA approach [16].
Our literature search has been carried out without limitations on the year of publication, including just fully published sources (i.e., excluding those made of the abstract alone). We have limited our search to English publications, since English represents the dominant language in the technical literature and considering papers in other languages would have compelled us to properly define the terminology used in each language to describe readiness/maturity models, let alone the difficulty of reading other languages. Our bibliographic databases of choice have been Scopus and Web of Science (WoS). The two databases still represent the primary reference for bibliographic analyses, in particular for the fields of Natural Science and Engineering [17], which is where the theme of Industry 4.0 lies. Though the coverage of Web of Science is smaller than that of Scopus, their union provides a comprehensive and authoritative view of what is published on a subject. The last search date was 14 December 2021.
We have employed the query ("industry 4.0" AND ("readiness" OR "maturity")), applying it to the title, the abstract, and the keywords. The term maturity was included since it can be considered as a synonym of "readiness" for our purposes. Actually, "readiness" would be more appropriate to a stage where companies may still not be ready to introduce the Industry 4.0 paradigm. On the other hand, the term "maturity" seems more appropriate to describe situations where that paradigm has already been introduced, and we wish to examine the degree of adhesion of companies (and maybe its impact) [18]. However, the information about the introduction of Industry 4.0 and its level of implementation (i.e., whether it is at an initial or advanced stage) must be provided by companies; it is the purpose of the surveys we are reviewing here to extract that information. So, the terms readiness and maturity may be more appropriately used after surveys have been carried out and cannot be distinguished on solid grounds before the survey has started. For that reason, we considered them as interchangeable words for our analysis.
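To make the inclusion logic of the search string explicit, the following is a minimal sketch of how the query ("industry 4.0" AND ("readiness" OR "maturity")) filters records over title, abstract, and keywords. The record structure and field names are illustrative assumptions, not the actual export format of Scopus or WoS.

```python
# Hypothetical sketch: the boolean inclusion logic of our search string,
# applied to a local export of bibliographic records.
# Field names ("title", "abstract", "keywords") are assumptions.

def matches_query(record: dict) -> bool:
    """True if the record satisfies
    ("industry 4.0" AND ("readiness" OR "maturity"))
    over title, abstract, and keywords, case-insensitively."""
    text = " ".join(
        record.get(field, "") for field in ("title", "abstract", "keywords")
    ).lower()
    return "industry 4.0" in text and ("readiness" in text or "maturity" in text)

records = [
    {"title": "Industry 4.0 readiness of SMEs", "abstract": "", "keywords": ""},
    {"title": "Lean manufacturing survey", "abstract": "", "keywords": ""},
]
hits = [r for r in records if matches_query(r)]
```

The sketch only mirrors the database query semantics; the real searches were run on the Scopus and WoS engines themselves.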

Results
In this section, we report the results of our SLR by first describing the flow of information through the different phases of our SLR and then answering the RQs.

The Dataset
The systematic literature search described in Section 2 provided the results shown in Figure 1. As can be seen, the initial collection was very generous. After removing duplicates between Scopus and Web of Science, we were left with over seven hundred papers. However, the analysis of their abstracts led us to remove over six hundred. Among the reasons for removal was unavailability (Records unavailable, n=20): the final cut on the number of papers examined in their entirety was due to some papers not being available, in most cases because the contents were restricted to a closed group or behind a paywall managed by a minor publishing company outside the range of subscriptions available to us. However, among the unavailable papers, just four had any citations (seven, five, two, and one, respectively), while the remaining sixteen had none. We can safely conclude that we did not lose a significant amount of information by not considering them.
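The duplicate-removal step mentioned above can be sketched as follows. This is an illustrative assumption about the procedure, not our actual tooling: records from the two databases are merged and deduplicated, first by DOI and, for records lacking one, by a normalized title.

```python
# Illustrative sketch of Scopus/WoS deduplication (hypothetical tooling):
# match by DOI when present, otherwise by a normalized title.
import re

def norm_title(title: str) -> str:
    # Lowercase and strip non-alphanumeric characters so that minor
    # formatting differences between databases do not hide duplicates.
    return re.sub(r"[^a-z0-9]", "", title.lower())

def deduplicate(records):
    seen, unique = set(), []
    for r in records:
        key = r.get("doi") or norm_title(r["title"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

scopus = [{"doi": "10.1/x", "title": "A Maturity Model"}]
wos = [{"doi": "10.1/x", "title": "A maturity model"},
       {"doi": None, "title": "Another Paper"}]
merged = deduplicate(scopus + wos)  # the shared DOI is kept only once
```

Title-based matching without a DOI is inherently approximate; in practice such steps are usually complemented by manual screening.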

Time Trend
We classified the papers resulting from the final count reported in Section 3.1 according to the year of publication, which would allow us to answer RQ1.
In Figure 2, we see that there were no assessment campaigns reported before 2017. In addition, the trend is clearly growing. Hence, assessment campaigns have not reached their peak, and we can expect a widespread diffusion of maturity assessment efforts in the near future.

Publication Outlet
Another issue, which answers RQ2, is the publication outlet chosen to report the results of the assessment campaigns. As commonly agreed, journals and conferences exhibit complementary features, roughly summed up as permanence of record (typical of journals) vs speed of publication (typical of conferences). Recently, the picture has become more blurred, with conferences finding a stable place in bibliographic databases and journals shortening their publication times.
However, in Figure 3, we see that the preferred outlets to report the results of the readiness assessment campaigns are journals, with roughly a 60/40 split.
If we dissect that overall percentage over the years, we see in Figure 4 that journals have been gaining ground. While just one in four campaigns was reported in a journal in 2017, journals have now become the largely dominant way to disseminate the results of Industry 4.0 readiness assessment campaigns (77.4% so far in 2021). We can also identify the most influential papers by the number of citations. In Table 1, we report the Top 10 papers by citations. We see that journal papers are by far the most cited (eight out of ten). In addition, there are no recurring authors in the Top 10. However, papers concerning the Czech Republic are particularly well received, accounting for three of the ten entries.

Geographical Coverage
The size and variety of countries involved can be considered a measure of the deployment of Industry 4.0 assessment campaigns, which we assess with RQ3. Except for two papers, where the participants in the survey were selected from all over the world, all the papers included participants from a single country or, at most, a very few countries. In Figure 5, we show the number of assessment campaigns reported for each country (for clarity, we have shown just the countries with at least three campaigns). Notable absences from that list are China, which is not shown but was the subject of one assessment campaign, and Japan, for which no assessment campaigns have been reported. The Top 5 is made of four European countries and a single country from Asia, as detailed in Table 2. If we look at the distribution by geographical region, this observation is confirmed, since Europe and Asia account for 89.3% of the campaigns (see Figure 6). Europe is where Industry 4.0 is assessed most (or, at least, where assessment campaigns are reported most). The details are provided in Table 3.

Assessment Campaign Breadth
In addition to the countries touched, another facet of coverage is the number of companies involved, which we address with RQ4. We judge an assessment campaign more informative as the number of companies grows.
In Table 4, we see that a fraction of the papers (slightly over 20%) did not mention the number of companies taking the questionnaire. In addition, we see a roughly equal share of the pie represented by campaigns with little statistical value, where the number of companies involved is less than ten. A more detailed look reveals that 60% of the campaigns involving fewer than ten companies actually concern a single company. At the other end of the range, we observe a small fraction of papers (just over 5%) reporting massive assessment campaigns. The Top 10 of those massive campaigns is reported in Table 5. Though most of them focus on a single country, those papers represent the few attempts to conduct a large-scale assessment. The countries analysed are Slovakia, Poland, Malaysia, Turkey, Serbia, India, the Czech Republic, and Denmark. With the notable exception of Denmark, all the other countries appear in the Top list shown in Figure 5.

Company Size
We are also interested in examining the size of the companies involved in the assessment. We aim to see whether Industry 4.0 is to be considered an innovation devoted to large companies only or whether small- and medium-sized ones can benefit as well. This is the rationale for RQ5.
Unfortunately, a significant portion of the papers do not provide information about the size of the companies involved in the assessment: just 54.7% of them detail that size, adopting a common terminology based on four size categories: micro, small, medium, and large (small and medium are often combined under the SME label; hereafter, by SME we mean papers that cover both small and medium enterprises). However, despite the common terminology, those terms probably hide different definitions and measurement methods (e.g., by revenues or by the number of employees). The lack of a common definition is a long-standing issue, as highlighted in [102]. The European Commission released a publication defining the sizes from micro to medium in [103] by employing staff headcount, turnover, and balance sheet total.
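The EC definition referenced above can be sketched as a simple classifier. The thresholds below follow Recommendation 2003/361 (staff headcount combined with turnover or balance-sheet total); the sketch deliberately omits further rules (e.g., linked and partner enterprises), and the example figures are invented.

```python
# Simplified sketch of the European Commission size categories
# (Recommendation 2003/361): headcount plus turnover OR balance-sheet
# total, thresholds in millions of euros. Linked/partner-enterprise
# rules are not modelled here.

def ec_size_category(headcount: int, turnover_meur: float,
                     balance_meur: float) -> str:
    """Classify an enterprise as micro, small, medium, or large."""
    if headcount < 10 and (turnover_meur <= 2 or balance_meur <= 2):
        return "micro"
    if headcount < 50 and (turnover_meur <= 10 or balance_meur <= 10):
        return "small"
    if headcount < 250 and (turnover_meur <= 50 or balance_meur <= 43):
        return "medium"
    return "large"

# Illustrative examples (figures are invented):
sizes = [ec_size_category(8, 1.5, 1.0),
         ec_size_category(120, 30.0, 25.0),
         ec_size_category(600, 200.0, 180.0)]
```

A shared classifier of this kind is exactly what the reviewed papers lack: each campaign may apply different, unstated thresholds behind the same four labels.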
In Table 6, we report the percentages of papers dealing with companies of specific size categories, computed over the number of papers that do provide information about company sizes. Unfortunately, the consistency of definitions remains unaddressed. Furthermore, we use the label only to mark those papers that deal exclusively with that size category; the results not tagged by that label refer to papers that also cover other size categories.
We see that the single most present category is that of medium enterprises, which are included in nearly 90% of all campaigns (recall that the percentages refer to the number of campaigns for which company size data are reported). Small and large companies also rank well, with 80.9% and 66%, respectively. Micro companies are instead underrepresented, being present in just 17% of campaigns. Finally, just a handful of papers (four, representing 8.5%) examine companies of all sizes.

Industries
We now come to RQ6: What are the industries where assessment campaigns have been conducted?
In Figure 7, we see that just over 60% of the papers report the industry to which their survey was applied. We would have expected a much more detailed indication. In addition, the papers do not appear to follow a standardized classification. Such a lack of common terminology may cause difficulty, since the reader may not recognize that papers apparently dealing with different industries are actually dealing with the same one. Unfortunately, different industry classification schemes exist, from the North American Industry Classification System (NAICS) [104,105], mainly business-research oriented, to market-based schemes like the Global Industry Classification Standard (GICS) [106] and the Industry Classification Benchmark [107]. There are also classification schemes proposed by inter-governmental institutions, such as the Statistical Classification of Economic Activities in the European Community (NACE) by the European Union and the United Nations Standard Products and Services Code (UNSPSC). However, some standardized approach is needed to avoid being misled by the growth of a myriad of different terms for the same thing. In this paper, we have opted for the GICS since it appears closer to the terms employed in the Industry 4.0 literature. A description of GICS is available in [107,108]. The GICS adopts a hierarchical classification with four layers: Sectors (11), Industry Groups (24), Industries (69), and Sub-Industries (158). We have employed the Industry level whenever possible. In some cases, it was not possible to go deeper than the Industry Group level since the paper did not provide more detailed information. To understand the distribution of industries affected by assessment campaigns, we decided to stick with either the Industry Group or the Sector level to report results.
Our choice is due partly to the lack of specification about the industry, as mentioned above, and partly to the large number of industries that emerge from the analysis (69), which would make the analysis quite fragmented.
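The roll-up from a paper's reported industry to its GICS industry group and sector can be sketched as a simple lookup. The mapping excerpt and function below are our own illustrative assumptions, not the official GICS table, which comprises 69 industries.

```python
# Hypothetical sketch of the GICS roll-up: a reported industry is mapped
# to its (industry group, sector) pair. The mapping is a tiny invented
# excerpt for illustration, not the full GICS hierarchy.
GICS_EXCERPT = {
    "Automobiles":        ("Automobiles and Components", "Consumer Discretionary"),
    "Machinery":          ("Capital Goods", "Industrials"),
    "Textiles & Apparel": ("Consumer Durables and Apparel", "Consumer Discretionary"),
}

def roll_up(industry: str):
    """Return (industry_group, sector); when the reported label is not
    recognized, fall back to the label itself at both levels."""
    return GICS_EXCERPT.get(industry, (industry, industry))

group, sector = roll_up("Machinery")
```

Aggregating at the industry group or sector level, as we did, amounts to counting papers by the first or second element of this pair.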
Aside from two papers reporting an application of Industry 4.0 in the Public Administration, all the other papers fell within the classification proposed by the GICS. However, of the 24 industry groups, just 16 (i.e., two thirds) are involved. In addition, the distribution is quite uneven: in Figure 8, we see that six groups account for more than three quarters of the campaigns. A similar non-uniform coverage applies to sectors. However, here nine out of eleven sectors are represented, with Utilities and Real Estate being the only ones left out, as reported in Figure 9. Consumer Discretionary now has the largest share, thanks to the contribution of two industry groups (Automobiles and Components plus Consumer Durables and Apparel) among the top groups.

Questionnaires
As stated in the Introduction and Section 2, we have focused on investigations where questionnaires are adopted as the analysis tool. We wish to observe whether the authors have employed an established questionnaire or not. By established questionnaire, we mean a questionnaire that has been put forward in the literature, fully defined (i.e., including the full list of items), and given a name for future reference (which means that it is intended to gain a wide diffusion).
Unfortunately, the overwhelming majority of papers do not specify the questionnaire adopted. The exact distribution can be seen in Figure 10. Such a lack is truly undesirable, since it prevents external researchers from examining the items employed to assess the level of maturity and from employing the same questionnaire in their own analyses. There is also wide fragmentation among the few papers (just 12 out of 85) that do specify the questionnaire model. With the notable exception of just four models (IMPULS Industrie 4.0 Readiness Model, ACATECH Industrie 4.0 Maturity Index, Industry 4.0 by VDMA, and Simmy 4.0), no model is employed in more than a single paper. The full list of the models adopted is reported in Table 7.

Discussion and Conclusions
The results reported in Section 3 allow us to answer all the RQs set in the Introduction. The topic is alive and kicking, since there is a clear and steadily growing trend of publications (which answers RQ1). This trend is entirely consistent with what was earlier observed in [118] by considering Google Trends results and the analysis of the Scopus and WoS literature concerning the general themes of Industry 4.0 and the 4th Industrial Revolution. There is also a flight to archival publications, since the balance between conferences and journals is steadily shifting towards journals, which represent nearly 60% of the overall volume of papers (RQ2). The distribution of publication outlets does not appear to have been considered so far in other literature surveys on Industry 4.0 topics: it has been considered neither in [15] nor in [14,118].
There is a significant unevenness in the geographical distribution (RQ3). There are many assessment campaigns in Europe and Asia (together, they make up nearly 90% of the campaigns), with Europe a strong leader (nearly 60%). Instead, the U.S. is way behind, with just three campaigns. Of course, this may not reflect the actual diffusion of Industry 4.0 in the country, since companies could be at an advanced stage of deployment without their efforts being reported in the scientific literature. In addition, a large proportion of the papers (over 40%) is of little or no statistical value, since the sample size (i.e., the number of companies taking the questionnaire) is either not reported or too low (below ten) (RQ4). We can, however, compare the status of assessment campaigns with the origins and development of Industry 4.0 projects around the world. Industry 4.0 originated in Germany, where it was included as one of the strategic projects in the country's high-tech strategy [119]. Germany's leading role in the birth of the Industry 4.0 concept is consistent with its presence in the Top 5 list of countries where assessment campaigns have been conducted. However, if we take a broader look at Europe and the rest of the world, we find some confirmations and discrepancies between different deployment indices. In Europe, the classification proposed by Roland Berger in 2015, based on their own Industry 4.0 Readiness Index, placed six countries in the group of frontrunners [120]. Frontrunners exhibited a large industrial base and modern, forward-looking business conditions and technologies. Those countries were Switzerland, Germany, Ireland, Sweden, Finland, and Austria. Except for Germany, which we have already dealt with, none of them appeared in the Top 5 countries. However, Switzerland, Sweden, and Austria are among the countries where assessment campaigns have been reported.
Our survey results are partially consistent with the expectations set in 2015 by Roland Berger's classification. The more recent look at Europe taken in [121] shows a large number of Industry 4.0 projects across Europe, which confirms the continent's leading role emerging from our survey. When we move to Asia, the situation changes completely. Asia is the second most represented continent in assessment campaigns. Several Industry 4.0 initiatives have been launched throughout Asia, e.g., Made in China 2025 in China, Productivity 4.0 in Taiwan, I-Korea 4.0 in South Korea, and Society 5.0 in Japan, as reported in [121]. While assessment campaigns are well represented in South Korea (three are reported in our survey), Taiwan and China are marginally represented (just one campaign each). While this may be justified for Taiwan, due to its small size, the small presence of China is quite puzzling. In addition, Japan is completely absent, and, again, this observation clashes with the industrial weight of that country. We do not have a supported explanation for that. However, we can consider the difficulties that an ambitious plan like Made in China 2025 is facing. Three critical factors have been highlighted for the success of that plan in [122] that may be lacking in today's Chinese economy: manufacturing capabilities, investments in research and development, and human capital. We can also postulate a reluctance of Chinese companies to expose their level of deployment of Industry 4.0. As to Japan, again, we find critical factors in the adoption of Industry 4.0 that could explain the lack of assessment campaigns. The predominance of market-driven factors for the adoption of Industry 4.0 in Japan has been considered in [123] as pushing towards late adoption, for small and medium enterprises in particular.
Finally, when we consider the USA, the number of reported assessment campaigns is three, which may appear somewhat low for a great industrial power. The ranking of the USA in other literature surveys is undoubtedly higher than what we have observed here: it ranked second in the analysis of key ingredients to assess Industry 4.0 readiness in [15] and equally second on Scopus in the analysis of the general literature on Industry 4.0 and the 4th Industrial Revolution (but with a presence lower than 4% on WoS) in [118]. We also notice that some efforts on Industry 4.0 may be labelled differently in the USA, where the terminology Smart Manufacturing was also adopted, as reported in [13,124]. We can conclude that the general interest in Industry 4.0 in the USA finds partial confirmation in our survey, with assessment campaigns probably lagging a bit behind.
When we come to the companies' characteristics, we find that there is no significant difference between the companies' size categories. Campaigns directed at assessing the state of the Industry 4.0 paradigm involve small, medium, and large companies alike, with micro ones being the only category marginally concerned (RQ5). The slight dominance of small and medium enterprises reflects the interest given in the literature to companies of that size over micro and large ones, as shown in [13,[125][126][127][128]. We can also observe a broad coverage of industry groups and sectors, with Capital Goods being the most represented industry group (nearly a quarter of the papers considered) and Consumer Discretionary and Industrials being the two most represented sectors (with roughly one third of the papers each) (RQ6). The diffusion of Industry 4.0 among industries has also been considered in [118]. However, the classification adopted there does not follow an established standard, making the comparison difficult. Still, the distributions obtained in [118] and here appear largely consistent, with the notable exception of the Education sector, which is ranked first in [118] but largely absent from our survey. The surveys in [14,15] did not report any industry-specific analysis.
An issue of concern is the lack of an agreed standard to test the readiness/maturity of Industry 4.0 (RQ7). Most models have been adopted by their proposers only, with just four models employed more than once. Without a standard, comparing the degree of advancement of Industry 4.0 across any group (e.g., companies, industries, nations, geographical areas) is difficult or outright impossible.
Summing up, our analysis shows that the current status of Industry 4.0 assessment campaigns is not uniform, and there is room for improvement.
On the plus side, we can certainly welcome the growing diffusion of assessment campaigns, which mirrors the growing diffusion of the Industry 4.0 paradigm itself.
However, the campaigns concentrate in Europe and Asia, with the U.S.A. severely underrepresented. In addition, several industry groups are only marginally involved.
Another pro is the diffusion over all company sizes. Industry 4.0 is shown to be neither an SME-only nor a large-company-only matter, but to concern all companies irrespective of their size. However, the way assessments are conducted and reported is quite lacking. Many papers did not report crucial information (such as the number, size, and geographical location of the companies involved). In addition, no standard questionnaire emerges (most papers do not even provide information about their information-elicitation method of choice), so campaigns cannot be compared on a level ground.
Among the limitations of our work, we can mention the need to explore some discrepancies in the terminologies adopted for the same concepts in different parts of the world, which could make our analysis more comprehensive. In addition, we did not delve into the contents of the questionnaires, which have been largely found to be built ad hoc and may hide differences in analysis quality (e.g., lacking any reliability criterion at all).
As indications for the future, we can highlight the need for better (and standard) reporting practices. The use of ad hoc questionnaires rather than standard ones may make results obtained in different countries challenging to compare. In addition, an issue deserving to be investigated concerns the reasons for the minimal diffusion of Industry 4.0 outside Europe and Asia and in several industry groups. Our findings could spur researchers to cover those geographical areas where assessment campaigns are largely absent (or at least underreported), such as China, Japan, and the USA.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

1. We do not enter here into the possible distinction between the two terms. Readiness could refer to companies at an initial development stage or even prior to the evolutionary path, while maturity could refer to companies already at a possibly advanced evolution stage. In the following, we consider the terms interchangeable and typically use the term maturity model. We return to the issue in more detail in Section 2.