
Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges

Institute of Automation and Information Technologies, Satbayev University (KazNRTU), Almaty 050013, Kazakhstan
Institute of Information and Computational Technologies, Almaty 050010, Kazakhstan
Baltic International Academy, 1/4 Lomonosov Str., LV-1003 Riga, Latvia
Faculty of Management Science and Informatics, University of Zilina, 010 26 Žilina, Slovakia
Higher School of Economics and Business, Al-Farabi Kazakh National University (KazNU), Almaty 050040, Kazakhstan
Office of Academic Excellence and Methodology, Almaty Management University, Almaty 050060, Kazakhstan
International Radio Astronomy Centre, Ventspils University of Applied Sciences, Inzhenieru Str., 101, LV-3601 Ventspils, Latvia
Department of Natural Science and Computer Technologies, ISMA University of Applied Sciences, Lomonosov Str., 1, LV-1011 Riga, Latvia
School of Digital Technologies, Almaty Management University, Almaty 050060, Kazakhstan
Authors to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2552;
Submission received: 28 May 2022 / Revised: 4 July 2022 / Accepted: 20 July 2022 / Published: 22 July 2022
(This article belongs to the Special Issue Advances in Machine Learning and Applications)


Artificial intelligence (AI) is an evolving set of technologies used for solving a wide range of applied issues. The core of AI is machine learning (ML), a complex of algorithms and methods that address the problems of classification, clustering, and forecasting. The practical application of AI&ML holds promising prospects, and research in this area is therefore intensive. However, industrial applications of AI and its wider use in society are not yet widespread. The challenges of widespread AI application need to be considered from both the AI (internal problems) and the societal (external problems) perspectives. This consideration identifies the priority steps for the more intensive practical application of AI technologies and their introduction into industry and society. The article identifies and discusses the challenges of employing AI technologies in the economies and societies of resource-based countries. A systematization of AI&ML technologies is carried out based on publications in these areas. This systematization allows for the specification of the organizational, personnel, social and technological limitations. The paper outlines directions of study in AI and ML that will allow some of these limitations to be overcome and the scope of AI&ML applications to be expanded.

1. Introduction

Studies in the field of artificial intelligence (AI) are actively developed and widely discussed in the scientific community. These studies consider the theoretical aspects of AI technologies and their applications in many areas of society. Machine learning (ML) methods are a group of methods often used in AI that allow the prediction of new properties of data based on known properties discovered from the training data. One specific area of ML is deep learning (DL). In recent years, there has been increased interest in research in this area, as illustrated by the number of publications in scientific databases. The graph presented in Figure 1 shows the number of reviews on AI, ML, and DL in Scopus in the years 2000–2021.
Methods of AI have many practical applications. The applications most often discussed in scientific publications are shown in Figure 2. The dominant areas of AI, ML, and DL studies are computer science, engineering and mathematics [1,2,3].
The studies in computer science cover many subjects in this knowledge domain. They address image processing [4,5], signal analysis [3,6], natural language processing [7,8], security [9], and the development of intelligent software and hardware (for example, the brain–computer interface) [10,11]. These studies are based on classification methods [12,13], data clustering [14], visualization [15], and other approaches of AI, ML, and DL. These approaches are effectively used not only in computer science but also in other areas of knowledge. According to [4], the use of AI significantly affects commerce, logistics, automated manufacturing and banking. The authors of the study [16] also consider the application of AI in various sectors of the economy. However, most of these studies discuss only the possibility of applying AI. The practical application of AI and ML technologies, their introduction to industry and their implementation in daily use face difficulties that we would like to consider in this paper. The identification and analysis of these difficulties can be based on a systematic analysis of AI and ML. Such an analysis allows us to consider the classification of AI and ML technologies, the correlation of AI and ML methods, and opportunities for their development and application.
The difficulties in AI and ML can be well illustrated by a systematic analysis of AI and ML, exemplified by the resource-based economies. The application of state-of-the-art technologies can significantly improve the performance of the economic sectors in these countries [17,18]. The development of the national economies of countries rich in mineral resources substantially depends on the extracted minerals. However, their proven reserves are depleting. The sustainable development concept requires a transition to new methods and technologies of management and the economic exploitation of other resources of the area (agricultural products, animal husbandry, new mineral deposits, human reserves, etc.). The comparatively low rate of usage of contemporary technologies results in decreased productivity, the development of low-value-added industries and the underdevelopment of industries capable of creating high added value.
For example, the products of the processing industry of Kazakhstan (traditionally considered a resource-based economy) create a significant share of the gross national product, while 65% of enterprises operate in industries of low technological complexity. The share of innovative products is 1.6%, and only 1% of companies belong to the high-tech sector [19]. In [20], it is noted that there is a gap in labor productivity across the economic sectors of Kazakhstan. For instance, compared to countries such as Australia and Canada, the gap in agriculture reaches 12–15 times, in the mining industry 5–10 times and in the manufacturing industry 2–4 times. Productivity growth is constrained by the insufficient penetration and development of modern technologies, a high level of depreciation and a low technological level of fixed assets, as a result of gross fixed capital formation decreasing from 30% of GDP in 2007 to 23.3% in 2016. Primary goods account for 70–75% of exports. According to the index of economic complexity [21], Kazakhstan is in the 93rd position [22], while the Global Competitiveness Index ranks the country 95th out of 141 countries according to the index of innovative potential [23].
The successful and economically viable development of traditional and new industries, increased manufacturing processes and advanced labor productivity require not only new production technologies, but also the collection, processing and analysis of data accompanying these processes. Undoubtedly, artificial intelligence (AI) is one of the most promising tools in this direction. AI demonstrates a significant economic effect in healthcare [24], commerce, transport, logistics, automated manufacturing, banking, etc. [25]. Many countries are elaborating or have implemented their own strategies for the use and development of AI [26]. Kazakhstan has identified the advancement and application of AI technologies as one of its development priorities [27].
This area of technology is a source of high expectations, reflected not only in the growth in the number of scientific publications, but also publications in the media. The quantitative indicators of these changes, calculated according to the methodology applied by [28], with the use of the corpus of texts described in [29], show that the share of publications in the media related to AI is growing and currently stands at up to 5% of publications in the media of Kazakhstan and up to 1% in Russia.
There are promising prospects for the widespread use of AI, but there are also a number of obstacles, and overcoming these barriers means a new round of technological development of AI and expansion of the scope of its application.
The goal of this research is a systematization of the AI area, covering the core technologies of AI and the obstacles to AI implementation in economies, including resource-based ones.
The article consists of the following sections.
Section 2 systematizes the areas of artificial intelligence, machine learning and deep learning models. The systematization results are used to analyze the intensity of the research aimed at overcoming the limitations of AI&ML.
Section 3 discusses and systematizes the obstacles to the implementation of artificial intelligence technologies. We divide the whole complex of obstacles into internal obstacles, inherent in the system of artificial intelligence technologies, and external obstacles, caused by the preparedness or non-preparedness of organizations and people to use artificial intelligence. The research considers the limitations peculiar to AI in resource-based economies.
In Section 4, we discuss some solutions for overcoming the technological limitations of machine learning, including the need for large amounts of data, long model training times, and the difficulties end users face in understanding how the models work.
We conclude with a summary of the discussion.

2. Artificial Intelligence and Machine Learning Technologies Classification

Artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” [30]. It can also be said that AI includes software and hardware methods that imitate or reproduce human behavior and thinking. AI is divided into weak AI and strong AI (or general artificial intelligence) [31], depending on the “degree of intelligence” of the system compared to a human [32,33]. Modern practical applications employ weak or soft AI, which demonstrates the ability to solve individual problems with a level of accuracy that is satisfactory for practice. Strong or general artificial intelligence is the subject of the research in [31].
AI includes several major scientific areas, such as machine learning, natural language processing (NLP), text and speech synthesis, computer vision, robotics, planning and expert systems. The AI domains are shown in Figure 3, generated by the authors based on materials [7,34,35].
A large number of AI applications are built on the basis of machine learning methods; these methods implement the fundamental idea of AI [36]. ML is applied to achieve better results in the recognition of speech [37] and speech emotions [38]. A wide range of ML methods are used in economic planning [39] and manufacturing control [40]. Ref. [41] notes that ML is a powerful tool for data analysis, and it can be used in various expert systems [42]. Machine learning is currently one of the main areas of research in the field of robotics [10,43].
Machine learning is often used to solve scientific and applied problems. For example, the ML applicability conditions [44] and the promise of deep learning [45] are considered for solving problems in the field of chemistry. There are numerous cases of applying ML in medicine [46,47], especially for medical imaging [5], as well as in astronomy [48], computational biology [49,50], agriculture [51], municipal economy [52], industry [53], construction [54], the modeling of environmental [55] and geo-ecological processes [56], petrographic studies [12,57], exploration [58], the forecasting of mining [59], etc. ML is actively used in, and is in fact the core of, modern investigations in the field of natural language processing [8,60,61].
ML methods are divided into several classes depending on the learning method and purpose of the algorithm [62] and include the following: supervised learning (SL) [13], unsupervised learning (UL) or cluster analysis [14], dimensionality reduction, semi-supervised learning (SSL), reinforcement learning (RL) [63], and deep learning (DL) [64].
UL methods solve the task of splitting the set of unlabeled objects into isolated or intersecting groups by applying the automatic procedure based on the properties of these objects [65,66]. UL reveals the hidden patterns in data, as well as anomalies and imbalances.
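As a minimal illustration of the UL idea, the sketch below implements a plain-Python k-means that splits four unlabeled 2-D points into two groups; the toy points, the naive initialization and the fixed iteration count are our own illustrative choices, not taken from any cited study.

```python
def kmeans(points, k, iters=10):
    """Split unlabeled points into k groups based only on their properties."""
    centers = points[:k]  # naive initialization, sufficient for the sketch
    for _ in range(iters):
        # Assignment step: each point joins the group of its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        # Update step: each center moves to the mean of its group.
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return groups

# Two well-separated toy pairs; no labels are given to the algorithm.
pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(kmeans(pts, 2))
```

After a couple of iterations, the two well-separated pairs end up in separate groups, purely from the properties of the points themselves.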
SL methods solve classification or regression problems. Such problems arise when a finite group of specifically marked objects is allocated within a potentially infinite set of objects. If the objects are marked by a finite set of integers (class numbers), then a classification problem arises. The classification algorithm, using this group as an example, must mark the objects that have not yet been designated with one of the indicated numbers. If the objects are marked with real numbers, both integer and fractional, then a regression recovery problem arises. The algorithm selects a real number for unlabeled objects based on previously marked objects. In this case, problems of prediction or of filling gaps in the data are solved.
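The classification setting just described can be sketched with a 1-nearest-neighbor rule in plain Python: an unmarked object receives the class number of the closest marked object. The toy training set is our own illustrative choice.

```python
import math

def nearest_neighbor_predict(train_X, train_y, x):
    """Mark an undesignated object with the class number of its
    closest already-marked object (1-NN classification)."""
    best = min(range(len(train_X)), key=lambda i: math.dist(x, train_X[i]))
    return train_y[best]

# Toy training group: objects marked with class numbers 0 and 1.
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = [0, 0, 1, 1]

print(nearest_neighbor_predict(train_X, train_y, (0.05, 0.1)))  # -> 0
print(nearest_neighbor_predict(train_X, train_y, (0.95, 0.9)))  # -> 1
```

Replacing the integer class numbers in `train_y` with real numbers and returning, for example, an average over several neighbors would turn the same scheme into the regression recovery setting.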
DL methods solve the problem of revealing the hidden properties in data arrays by using neural networks with a large number of hidden layers and networks of a special architecture.
In the context of DL, the concept of transfer learning (TL) is frequently mentioned. TL means improving “a learner from one domain by transferring information from a related domain” [67]. ML models can be conditionally divided into classical and modern ones (see Figure 4) [59]. Without claiming to be comprehensive, the classic SL models comprise the following types: k-nearest-neighbor (k-NN) [68,69], logistic regression [70], decision tree (DT), support vector machines (SVM) [71] and feed-forward artificial neural networks (ANN) [72]. The classic UL models include the following types: k-means [73] and principal component analysis (PCA).
The contemporary UL models are as follows: isometric mapping (ISOMAP) [74], locally linear embedding (LLE) [75], t-distributed stochastic neighbor embedding (t-SNE) [15], kernel principal component analysis (KPCA) [75], and multidimensional scaling (MDS) [76]. The contemporary SL, SSL, RL and DL models include the ensemble methods (boosting [77], random forest [78], etc.) and the deep learning models: long short-term memory (LSTM) [79], deep feed-forward neural networks (DFFNN) [80], convolutional neural networks (CNN) [81], recurrent neural networks (RNN) [82], etc.
Deep learning is the fastest growing AI sub-domain [36]. DL is a set of methods that employ so-called deep neural networks, in other words, networks containing two or more hidden layers. The main advantage of the deep architectures is their ability to solve tasks using the end-to-end method. This approach reduces the requirements for preliminary data processing, since a signal or image vector is used as the input to the network, and the network independently identifies the regularities relating the input vector to the target variable. The network performs the labor-intensive and complex process of selecting the significant features, which greatly simplifies the task of the researcher. However, these advantages appear only with a sufficiently large amount of training data and a correctly chosen neural network architecture. There are three basic architecture types among the dozens of architectures [83], from which various modified models are formed (see Figure 5):
  • Standard feed-forward neural network (FFNN).
  • Recurrent neural network (RNN).
  • Convolutional neural network (CNN).
  • Hybrid architectures combining elements of the three basic architectures, for example, Siamese networks and transformers.
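To make the first of these architectures concrete, the sketch below runs a forward pass through a tiny FFNN with one hidden layer. The weights are hand-picked so that the network reproduces XOR; they are purely illustrative and not the result of training.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ffnn_forward(x):
    """Forward pass: input -> one hidden layer of two neurons -> one output."""
    # Hidden neurons act as approximate OR and AND detectors of the inputs.
    h1 = sigmoid(20 * x[0] + 20 * x[1] - 10)   # ~OR(x0, x1)
    h2 = sigmoid(20 * x[0] + 20 * x[1] - 30)   # ~AND(x0, x1)
    # Output neuron combines them: OR AND NOT(AND) = XOR.
    return sigmoid(20 * h1 - 20 * h2 - 10)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(ffnn_forward(x)))
```

The signal flows strictly forward, with no loops or shared filters; the recurrent and convolutional architectures below modify exactly this point.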
Feedforward neural networks (FFNN) are widely used in practice to solve classification problems [72,84] and the regression [59].
Recurrent neural networks (RNN) can use signal sequences as input data, including sequences of different lengths. The signal x(t) arrives at moment t and forms the internal state of the network a(t); the next signal x(t+1) is superimposed on it; thus, the final result ŷ depends on the entire sequence of signals. The result of the network operation can be either a class number or a sequence of signals ŷ(1), ŷ(2), …, ŷ(Ty).
Depending on the input and output sequences, the recurrent networks are divided into four main classes. It should be noted that FFNN can be considered as RNN of one-to-one architecture (see Figure 6).
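The recurrence described above can be sketched with scalar weights (our own illustrative values): each incoming signal is superimposed on the internal state, so the final state of a many-to-one network depends on the entire sequence, including its order.

```python
import math

def rnn_state(xs, wa=0.5, wx=1.0, b=0.0):
    """a(t) = tanh(wa*a(t-1) + wx*x(t) + b): each signal x(t) is superimposed
    on the internal state, so the final state depends on the whole sequence."""
    a = 0.0
    for x in xs:
        a = math.tanh(wa * a + wx * x + b)
    return a

# The same signals in a different order give a different final state:
print(rnn_state([1.0, 0.0, 0.0]))
print(rnn_state([0.0, 0.0, 1.0]))
```

In a many-to-one architecture, this final state is then fed to a classifier head; in many-to-many architectures, an output ŷ(t) is emitted at each step instead.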
The one-to-many architecture is used to solve the problems when a relatively small sequence of input data causes the generation of long sequences of data or signals, for example, for music generation [85] or text generation [86], when it is enough just to set the style of a piece of music or the theme of a story.
The many-to-one architecture is used for classification. The emotional coloring of the text or tonality (sentiment analysis) [87] can serve as an example. The tone of the text, which can be expressed by class assessment (neutral, negative, positive), is determined not only by specific words, but also by their combinations. Another task is the recognition of the named entities [88], such as proper names, days of the week and months, locations, dates, etc. Gene value in DNA analysis [89] is also determined by the nucleotide sequence.
The many-to-many architecture is used for machine translation [90] and speech recognition [91].
The development of the RNN idea led to the appearance of LSTM [79] and transformer models, such as BERT (bidirectional encoder representations from transformers) [92], ELMO [93], GPT (generative pre-trained transformer) and generative adversarial networks [94,95,96,97]. These models have recently become widely used, since they effectively solve the issues of natural language processing.
Convolutional neural networks (CNNs) make it possible to single out complex regularities in the presented data that are invariant with respect to their location in the input signal vector. Examples include horizontal or vertical lines, or other characteristic features in an image. The scheme in Figure 7 shows the case of a 3 × 3 filter that highlights the vertical lines in an image.
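The effect of a 3 × 3 vertical-line filter like the one in Figure 7 can be sketched as follows; the Sobel-style kernel and the toy image are our own illustrative choices, and the operation is cross-correlation, as is conventional in CNNs.

```python
def conv2d_valid(img, kernel):
    """2-D 'valid' convolution (no padding): slide the kernel over the image
    and sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# 3x3 filter responding to vertical lines (left/right intensity contrast).
vertical = [[1, 0, -1],
            [1, 0, -1],
            [1, 0, -1]]

# 5x5 toy image with a bright vertical line in the middle column.
img = [[0, 0, 9, 0, 0] for _ in range(5)]

print(conv2d_valid(img, vertical))  # strong responses flank the line
```

Wherever the filter sits, the same nine weights are applied, which is what makes the detected feature invariant to its location in the image.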
The formation of convolutional filters, or to be more precise, the adjustment of the weights of the neurons modeling such a filter, occurs in the process of network training. Due to the implementation of the convolution operation, the computational complexity of the training process remains within the reasonable limits. CNNs have shown exceptional results in image processing tasks. The development of this architecture of deep neural networks was initiated by the LeNet model [98], which for the first time used convolutional filters, pooling and a fully connected neural network (FC) for image classification. The network contains four convolutional layers and a two-layer fully connected network. The total number of network parameters is about 60,000 (see Figure 8).
AlexNet [99] is an evolution of the original architecture obtained by increasing the size of the network and using the maxpool. The network contains about 100 times more weights than LeNet. VGGnet [100] is another large-scale architecture consisting of unified elements. GoogleNet is a network that uses the so-called inception module, providing the parallel paths for convolutional filters of different sizes, which ensures the detection of rare features [101]. ResNet is a network consisting of 152 layers and employs the so-called residual modules for solving the problem of the vanishing gradient [102]. Convolutional filters turned out to be an effective tool for solving the problems of identification and recognition “in one pass” (Yolo model [103]) and for solving the problem of image segmentation (the Unet model [104]).
The latest advances in CV tasks are related to the application of the self-attention mechanism [105] (the so-called transformer architecture). Recent models using this approach that achieved state-of-the-art (SotA) results on certain object classification tasks in recent months are Florence [106], Swin Transformer V2 [107] and DINO [108].
Graph neural networks (GNNs) are one of the most rapidly developing areas of machine learning in recent years; they are a subclass of deep learning techniques specifically built to carry out inference on graph-based data. They can be considered a generalization of convolutional neural networks (which are used on two-dimensional black-and-white and three-dimensional color image data) to graph-structured data. They work best in structural scenarios, where the graph structure is explicit in the application, such as molecules, physical systems, knowledge graphs, social networks and so on. However, they can also be successfully used in non-structural scenarios, where graphs are implicit, so that the graph must first be built from the task, for example by building a fully connected word graph for a text or a scene graph for an image. The problems that a GNN can solve fall into the following three categories: node classification, link prediction and graph classification.
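A minimal sketch of the message-passing idea behind such networks, under simplifying assumptions of our own (scalar node features, mean aggregation with a self-loop, and no learned weight matrix or nonlinearity): each node's new feature is the average of its own and its neighbors' features, so information spreads one hop per layer.

```python
def gcn_layer(adj, feats):
    """One propagation step: average each node's feature with its neighbors'."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]  # neighbors + self-loop
        out.append(sum(feats[j] for j in neigh) / len(neigh))
    return out

# Path graph 0 - 1 - 2; only node 0 carries a signal initially.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [3.0, 0.0, 0.0]

h1 = gcn_layer(adj, feats)   # the signal reaches node 1
h2 = gcn_layer(adj, h1)      # ...and, after a second layer, node 2
print(h1, h2)
```

Stacking such layers, each with its own trainable transformation, yields the graph convolutional networks discussed below; node classification then reads a label off each node's final feature.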
Different variants of graph neural networks, such as the graph convolutional network (GCN [109]), graph attention network (GAT [110]) and graph recurrent network (GRN [111,112]), have demonstrated outstanding performance in many deep learning tasks (graph mining [113], physics [114], chemistry [115], text classification [116], image classification [117] and many others [118]).
The proposed classification of the AI and ML technologies allows us to outline the opportunities for AI and ML and to identify their capabilities. The methods of DL based on RNN and CNN are being intensively developed at the present time [90,101,119]. These and other technologies mentioned above are most often used to solve problems such as recognition [102,119], recommendation [88,120], natural language processing [60,61], data processing [121,122], tracking or monitoring [123,124], personalization [125,126], and learning [127,128]. One of AI’s opportunities involves applications for the analysis of uncertain data, which are often elaborated with the application of fuzzy logic [129], rough sets [130], and possibility-theory-based concepts [131]. However, the possibilities of AI development should be considered from different points of view, one of the first being the areas of use of AI [118]. Most often, AI methods are developed for medicine [119,126], agriculture [132], and education [133,134]. The use of AI methods is also expected to accelerate in social applications, the domestic environment, and art [85,128,135]. AI’s opportunities are closely related to, and driven by, the trends of Industry 4.0, in which AI technologies form the background for most applications [136,137]. This results in the intensive development and application of AI in safety [131], cyber security [9], and the IoT [138], based on uncertain data and unsupervised methods [14].

3. Limitations and Difficulties in the Application of AI and ML

There are promising prospects for the widespread use of AI, and in general, the economic prospects of AI application are highly rated. According to [24], the economic effect in the European health care system is about 200 billion euros. This effect is associated with saving time and increasing the number of saved lives. Refs. [16,17] present the results of AI implementation in various economic sectors. According to the estimates in [17,25], a significant effect from the use of AI is observed in commerce (USD 400 billion), logistics (USD 400 billion), automated production (USD 300 billion) and banking (USD 200 billion). In percentage terms, the greatest effect (up to 10% of profit growth) is observed for high-tech products. Consequently, the economic impact is higher in developed countries, which produce more high-tech products than the resource-based economies. About a quarter of the GDP of Kazakhstan is formed by the extraction, processing and transportation of resources [18]. Primary products account for 70–75% of exports. The share of innovative products in GDP is 1.6%, and only 1% of companies are high-tech [19]. Accordingly, the increase in GDP associated with the use of AI technologies can be estimated at 1.5–2%.
Nevertheless, there are a number of obstacles, and overcoming these barriers means new possibilities for AI implementation in manufacturing, and also a new round of technological development of AI.
The scientific community identifies the following types of restrictions: organizational [139]; personnel, comprising the fear of new technologies (fear of AI) and the shortage of data scientists [140]; problems with data, including data quality and the large volume of data [141]; and legal, economic, social and other issues [142]. In particular, the authors of [142] identify the following nine problems: data quality, privacy, and security; data biases and technical limitations; the “black box”, transparency, and predictability; the wealth gap and inequality; the economies of developing countries; job displacement and replacement; trust and adoption; ethical and morality issues; legal issues and regulation policy. The study [32] describes the following ten constraints in the process of solving medical problems:
  • Insufficient amount of posted data.
  • Sample variability, such as variability in tissue and organ samples.
  • Prevalence of non-binary classification problems.
  • Large image sizes (50,000 × 50,000), while the existing deep learning models operate with substantially smaller images (608 × 608 for Yolo, 224 × 224 for VGG16 [143]).
  • Turing test dilemma. The final assessment is carried out by a human, which is not always possible.
  • Orientation of weak AI to the solution of one task, which increases the complexity of training and leads to the associativity problem indicated below.
  • High computational costs mean high costs of AI-based solutions.
  • Instability of solutions of the computer vision systems and their dependence on noise in problems of medical diagnostics.
  • Lack of transparency and interpretability.
  • Difficulties in applying AI in practice. For example, the difficulties of the Watson Health project [144] are associated with the complexity of the practical application, low confidence in the results and high costs.
In terms of strong AI, the study [145] highlights the problem of associativity, by which it means the limitedness of the obtained results and the inability of modern AI systems to relate the obtained results to the real world. This problem is also related to the single-tasking nature of AI systems, which are good at solving the tasks for which they are configured but cannot independently transfer the solution to other similar tasks.
The research [146] states the need to develop new architectures of deep neural networks for the specific applications of computer vision on UAV boards. It also mentions the necessity to prepare the datasets for different applications, for example, for processing multispectral data [147].
Industrial AI also requires data of high quality, robust and validated machine learning models and cyber-infrastructure, including remote operations, cybersecurity, privacy mechanisms and 5G technology [148].
There is also a problem with the data and trust in the results of the AI operations in the financial sector [149].
By summarizing the list of problems mentioned in the literature, we can identify the external problems in relation to AI technology, and internal limitations inherent in the current state of AI technology (see Figure 9).
A detailed description of the limitations inherent in the current state of AI technologies is presented in Table 1 and Table 2. As a rule, the external problems in AI shown in Table 1 are investigated and considered by engineering and non-engineering specialists. These limitations can exist at different levels of AI implementation and application, from the government level down to the level of a single department. In addition, the decisions on these problems should be complex because the problems are closely related. For example, the social limitations, such as ethical, moral, and legal issues, and the organizational limitations, such as the lack of a strategy for AI adoption and a weak technological infrastructure, form a multifaceted, interrelated complex of problems. In many cases, these limitations should be considered at the government level as national strategies. An example of such a strategy is the Ethics Guidelines for Trustworthy AI in the EU [150], where one of the first recommendations for governments is to create an enabling environment for the effective implementation of AI development and the elaboration of national strategies for AI-based applications. An important step in the implementation of such strategies is the creation of an available infrastructure for AI. The creation of this infrastructure involves not only external limitations but also internal, technological limitations (see Table 2). First of all, it includes data collection, safety and cybersecurity [9,151]. At the same time, the problem of infrastructure development can be decided at the level of one enterprise [151] or academic organization [152]. Similarly, the personnel problems should be solved both at the government level and in each specific organization. The personnel limitations amalgamate several obstacles: a lack of expertise, a lack of management buy-in, and a society insufficiently saturated with the interests and practicalities of AI.
The training of staff in the possibilities of using AI can begin in secondary schools, which is a government task [133]. Incorporating AI elements into online training courses expands access to affordable education and improves both the quality of education and the employment situation. Nevertheless, the economic constraints are very important among all the external limitations. This is evidenced by the studies conducted on the relationship between the economic level of a country’s development and its readiness to use AI [153].
The internal limitations of AI technologies (see Table 2) are typically technical problems, which are often discussed in studies on AI, ML, and DL. Special infrastructures, such as biobanks, are elaborated for data collection in medicine [155], agriculture [132], and other areas [138]. Methods of preliminary transformation are often developed based on feature extraction and/or dimensionality reduction [156,157]. There are investigations into improving the learning process [158] and increasing the availability of results [159]. The limitations of AI technologies are also mitigated by the development of new fuzzy-based methods, which allow a decrease in the uncertainty of results [129].
To these general limitations in the AI area, the authors would like to add the specific restrictions that exist in resource-based economies.
As already mentioned, the largest share of the GDP of resource-based economies is formed by the exploitation of natural resources and the mining and processing of the excavated materials. These processes rely on technologies that are usually imported. As a result, such economies create little incentive to develop technologies domestically, since borrowing them from other countries requires substantially less effort. These countries often lack the necessary conditions (reliable ICT infrastructure, human capital, and a regulatory framework) to collect enough data to use AI algorithms for development. The existing data often go unused because they arrive too late or do not appear at all, are not available in digital format, or lack the level of detail required for decision-making and innovation. The economic advantages obtainable from the use of AI are therefore restricted. According to [17], an increase in the share of high-tech production can significantly increase the effect of using AI&ML technologies. Agriculture, healthcare, and education are often cited as the development sectors that have made the most progress in harnessing big data and the analytical power of AI; the development of these sectors should therefore be prioritized in resource-based economies [153,160].
Although there is no way to overcome these problems completely, there are approaches that partially compensate for the weaknesses in particular cases.

4. Discussion

Despite the significant expectations associated with the use of AI, in many cases the current level of technology is a significant obstacle to its implementation in the manufacturing and service sectors. Below, we consider approaches to solving some internal problems of employing AI technologies and evaluate the prospects for overcoming these limitations.
Data generation for training deep ML models. The solution of many problems in the field of deep learning depends on the volume and quality of the datasets (dataset, DS). Labeled image collections, such as ImageNet [161], Open Images [162], the COCO Dataset [163] and FaceNet, are widely used to solve computer vision problems. However, the existing widely used DS may not be sufficient for specific problems; for example, they support recognition of only a limited number of objects. The shortage of data in computer vision can therefore be addressed with synthetic DS created in 3D graphic editors [164] or game engines and environments [165,166,167,168]. Such DS are used, in particular, for training unmanned vehicles [169]. Synthetic datasets are also applied in other areas [57], and generative adversarial networks have recently been used to generate them [170]. A large-scale review of approaches to creating synthetic datasets is presented in [171].
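The idea behind synthetic dataset creation can be sketched in a few lines: programmatically render objects whose labels are known by construction. Real pipelines use 3D editors or game engines [164,165]; the toy generator below (all names and parameters are illustrative) draws noisy 2D shapes instead:

```python
import numpy as np

def make_synthetic_shapes(n, size=32, seed=0):
    """Generate a labeled synthetic image dataset: label 0 = square, 1 = disc."""
    rng = np.random.default_rng(seed)
    images = np.zeros((n, size, size), dtype=np.float32)
    labels = rng.integers(0, 2, size=n)          # ground truth is free: we chose it
    yy, xx = np.mgrid[:size, :size]
    for i, lab in enumerate(labels):
        cx, cy = rng.integers(8, size - 8, size=2)   # random position
        r = rng.integers(3, 7)                       # random scale
        if lab == 0:  # filled square
            images[i, cy - r:cy + r, cx - r:cx + r] = 1.0
        else:         # filled disc
            images[i][(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = 1.0
        images[i] += 0.1 * rng.standard_normal((size, size))  # sensor-like noise
    return images, labels

X, y = make_synthetic_shapes(100)
print(X.shape, y.shape)  # (100, 32, 32) (100,)
```

Varying position, scale, and noise plays the same role as varying camera angles and lighting in engine-rendered datasets: the model sees the object under many conditions while the label stays exact.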
Speeding up learning. When the subject area and the problem are close to available solutions, learning can be accelerated by using pretrained models according to the transfer learning scheme [67,172]. A neural network previously trained on a large dataset is supplemented with one or more new layers. These additional layers are tuned in a final training phase on a specialized dataset, which is usually small; all other layers are considered “frozen”, and their weights do not change. The assumption is that the pretrained network retains the basic patterns inherent in data of a certain type (faces, landscapes, speech, etc.), while the additional layers capture the features of the specialized dataset. Transfer learning not only speeds up the learning process but also reduces hardware requirements.
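The scheme can be sketched with a minimal numpy example. The “pretrained” weights below are random stand-ins for a network trained on a large dataset (purely illustrative); only the new head is updated, while the feature extractor stays frozen:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Pretrained" feature extractor: in practice these weights come from a network
# trained on a large dataset; here they are random stand-ins.
W_frozen = rng.normal(size=(20, 32))   # frozen layer, never updated

# Small specialized dataset for the new task
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

features = relu(X @ W_frozen)          # frozen forward pass, computed once

# Train only the new head (a logistic-regression layer) on the frozen features
w, b, lr = np.zeros(32), 0.0, 0.1
for _ in range(1000):
    p = sigmoid(features @ w + b)
    grad = p - y                       # gradient of cross-entropy w.r.t. logits
    w -= lr * features.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((sigmoid(features @ w + b) > 0.5) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because the frozen features are computed once and only the small head is optimized, both the training time and the hardware requirements drop, which is the practical appeal of the scheme described above.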
Explaining the results of machine learning models. AI has recently achieved great success due to the rapid development of machine learning technologies. Despite this, there are potential risks associated with the “black box” nature of learning. Unlike some classic machine learning methods, especially decision trees, whose results can be explained relatively simply, non-linear classifiers and especially deep learning models lack transparency, making it difficult to understand how a model reached a particular decision. This is a serious problem that hinders the widespread use of AI in healthcare [173], banking, and many other areas [174].
A complex machine learning model is a “black box” that hides the mechanism by which its results are produced. Methods that assess the influence of the input parameters on the final result are used to turn the “black box” into a “white” or “grey” one.
Existing explanation methods can be classified along the following four dimensions [175]: explanation target, explanation scope, model type, and the data type used to train the ML model (see Figure 10). It is often desirable that the interpreter not depend on the machine learning model, i.e., that it be model-agnostic, and that it provide a global interpretation along with a local one. Recent methods such as Local Interpretable Model-agnostic Explanations (LIME) [176] and SHapley Additive exPlanations (SHAP) [177] partly solve these problems. However, for complex models with significantly correlated features, their application can be difficult due to the linear nature of the interpretation (LIME) and the high computational cost (SHAP) [178].
In addition, interpreting the influence of individual parameters and their combinations is possible only if the parameters themselves are semantically clear; otherwise, such an interpretation is useless.
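LIME and SHAP are full-fledged methods; the underlying model-agnostic idea of probing a black box through input perturbations can, however, be illustrated with simple permutation importance (a sketch, not LIME or SHAP; the black-box function here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "black box": any callable mapping inputs to predictions will do.
# Here feature 0 matters a lot, feature 1 a little, feature 2 not at all.
def black_box(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(500, 3))
baseline = black_box(X)

def permutation_importance(model, X, baseline, n_repeats=10):
    """Global, model-agnostic importance: how much does shuffling
    one feature column change the model's output?"""
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += np.mean((model(Xp) - baseline) ** 2)
    return importances / n_repeats

imp = permutation_importance(black_box, X, baseline)
print(np.argsort(imp)[::-1])  # feature 0 ranked first, feature 2 last
```

Note that this only yields a global ranking; LIME and SHAP additionally provide local, per-prediction explanations, which is why they are preferred in practice despite their computational cost.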
The implementation of the AI&ML technologies in the economy requires solving the problems described above; the possible directions of further research are summarized in Table 3.
The scientific community is striving to overcome the limitations of current AI technologies, and this trend is reflected in growing publication activity. Although the total number of articles devoted to AI is declining, the number of publications in some new scientific areas is, on the contrary, growing rapidly [23] (Figure 11).
The growth and decline in the annual number of publications is estimated by calculating the growth rate (D1) and the acceleration of growth (D2) of the number of articles, according to the methodology proposed in [185] (Figure 12). Positive values of both D1 and D2 indicate rapid growth of publication activity. The other combinations of indicator values are interpreted as follows: negative D1 with positive D2 indicates a slowing decline in the number of articles; positive D1 with negative D2 indicates a slowdown in their growth; and negative values of both indicate an accelerating decline. Figure 12 also shows the normalized number of publications in the considered domains at the end of 2021; the maximum number of publications is 88,000.
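Under our reading of this methodology, with D1 taken as the year-over-year relative growth rate and D2 as its change, the indicators can be computed as follows (the publication counts below are invented for illustration):

```python
import numpy as np

# Hypothetical annual publication counts for one search query
years = np.arange(2016, 2022)
counts = np.array([120, 180, 260, 390, 560, 800], dtype=float)

# D1: relative growth rate of the number of articles, year over year
d1 = np.diff(counts) / counts[:-1]

# D2: acceleration of growth, i.e. the change of D1 from year to year
d2 = np.diff(d1)

for yr, g, a in zip(years[2:], d1[1:], d2):
    trend = ("accelerated growth" if g > 0 and a > 0
             else "slowing growth" if g > 0
             else "slowing decline" if a > 0
             else "accelerating decline")
    print(f"{yr}: D1={g:+.2f}, D2={a:+.2f} -> {trend}")
```

Each year is thus assigned to one of the four regimes described in the text, which is how the domains in Figure 12 are compared.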
The results show that explainable machine learning models exhibit positive D1 and D2 values in many industries. Industries identified by the terms storehouse, retail, fabrication, logistics, and precision agriculture, combined with the term dataset, also show high D2 values. Among large scientific domains with more than 1000 publications that include the term “deep learning”, the leaders in article growth are sectors such as precision agriculture, precision farming, supply-chain management, transport, healthcare, and manufacturing (see Figure 12).
It can be expected that new technologies based on deep learning models will be developed first in these industries.
Solving at least part of the internal problems of AI technologies will increase the economic efficiency of AI in industry. However, the economic effect of AI depends not only on overcoming the above-mentioned limitations but also on the conditions of a particular country and the resources allocated to research. It should be noted that at present, expenditures on research and development in Kazakhstan amount to 0.125% of GDP (2018), which is significantly less than in developed countries (Austria 3.172%, Germany 3.094%, Great Britain 1.724%, Russia 0.99%) [186]. The resource nature of the country’s economy limits the economic benefits obtainable from the application of AI. According to [4], an increase in the share of high-tech production can significantly increase the effect of using AI&ML technologies.

5. Conclusions

The fastest growing area of research in the AI field is deep learning. New results, as well as new applications of previously proposed networks, appear almost daily. This area includes a large family of networks dealing with text, speech, and handwriting recognition, networks performing image transformation and styling, and networks for processing time series.
Nevertheless, there are obstacles to applying AI&ML, inherent both in the technologies themselves and in a socio-economic environment that may not be ready for rapid change. This research presents an attempt to systematize the sections of AI. We also classified the limitations that make it difficult to implement AI technologies in practice, especially in resource-based economies. The employment of AI&ML methods is restricted by internal limitations inherent in the technology and by external factors, such as the organization of an enterprise’s operations and data collection, or psychological issues associated with a lack of understanding of how machine learning models operate.
The efforts of the scientific community are aimed at overcoming technological limitations such as the lack of data for deep learning models, improving AI models, and accelerating learning. The activity of researchers, reflected in the number of published articles, is increasing, and some technological problems are expected to be solved in the near future. In particular, methods for generating datasets, explaining the results of machine learning systems, and accelerating learning have already been developed and are successfully applied in some cases. At the same time, solutions for overcoming the described limitations are still needed for most AI applications. They are especially important for the industries in which AI usage may lead to a significant economic and social effect.
One of the most important directions may be the combination of remote sensing and machine learning technologies for assessing soil quality and salinity and increasing agricultural productivity. In our opinion, future research should be aimed at creating large accessible datasets, implementing existing technologies, and developing methods oriented towards solving specific tasks in mining, transport, trade, finance, healthcare, etc.

Author Contributions

Conceptualization, R.I.M., E.Z. and Y.P.; methodology, R.I.M., Y.K. and V.G.; software, A.S., R.I.M. and F.A.; validation, V.L., Y.K. and M.Y.; investigation, F.A., K.Y., A.K., E.Z. and V.G.; resources, R.I.M.; data curation, A.S., E.M. and M.Y.; writing—original draft preparation, R.I.M., Y.P. and E.Z.; writing—review and editing, Y.P., E.M., V.G. and Y.K.; visualization, R.I.M., E.Z. and E.M.; supervision, R.I.M.; project administration, Y.K. and V.G.; funding acquisition, R.I.M. and V.L. All authors have read and agreed to the published version of the manuscript.


Funding

This research has been funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan, Grant No. AP09259587, “Developing of methods and algorithms of intelligent GIS for multi-criteria analysis of healthcare data”, Grant No. BR10965172, “Space monitoring and GIS for quantitative assessment of soil salinity and degradation of agricultural land in the South of Kazakhstan”. This work was partially supported by the Ministry of Education, Science, Research, and Sport of the Slovak Republic, under the grant “Creation of methodological and learning materials for Biomedical Informatics—a new engineering program at the UNIZA” (KEGA 009ŽU-4/2020) and the Integrated Infrastructure Operational Program for the following project: Systemic Public Research Infrastructure—Biobank for Cancer and Rare diseases, ITMS: 313011AFG5, co-financed by the European Regional Development Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

The authors would like to express their sincere gratitude to the anonymous referees for their useful comments.

Conflicts of Interest

The authors declare no conflict of interest.


  1. Ghasemi, Y.; Jeong, H.; Choi, S.H.; Park, K.-B.; Lee, J.Y. Deep learning-based object detection in augmented reality: A systematic review. Comput. Ind. 2022, 139, 103661. [Google Scholar] [CrossRef]
  2. Widdows, D.; Kitto, K.; Cohen, T. Quantum mathematics in artificial intelligence. J. Artif. Intell. Res. 2021, 72, 1307–1341. [Google Scholar] [CrossRef]
  3. Panetto, H.; Iung, B.; Ivanov, D.; Weichhart, G.; Wang, X. Challenges for the cyber-physical manufacturing enterprises of the future. Annu. Rev. Control. 2019, 47, 200–213. [Google Scholar] [CrossRef]
  4. Izonin, I.; Tkachenko, R.; Peleshko, D.; Rak, T.; Batyuk, D. Learning-based image super-resolution using weight coefficients of synaptic connections. In Proceedings of the 2015 Xth International Scientific and Technical Conference “Computer Sciences and Information Technologies”(CSIT), Lviv, Ukraine, 14–17 September 2015; pp. 25–29. [Google Scholar]
  5. Shen, D.; Wu, G.; Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [Green Version]
  6. Tuncer, T.; Ertam, F.; Dogan, S.; Aydemir, E.; Pławiak, P. Ensemble residual network-based gender and activity recognition method with signals. J. Supercomput. 2020, 76, 2119–2138. [Google Scholar] [CrossRef]
  7. Barakhnin, V.; Duisenbayeva, A.; Kozhemyakina, O.Y.; Yergaliyev, Y.; Muhamedyev, R. The automatic processing of the texts in natural language. Some bibliometric indicators of the current state of this research area. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2018; p. 012001. [Google Scholar]
  8. Hirschberg, J.; Manning, C.D. Advances in natural language processing. Science 2015, 349, 261–266. [Google Scholar] [CrossRef]
  9. Abdullahi, M.; Baashar, Y.; Alhussian, H.; Alwadain, A.; Aziz, N.; Capretz, L.F.; Abdulkadir, S.J. Detecting Cybersecurity Attacks in Internet of Things Using Artificial Intelligence Methods: A Systematic Literature Review. Electronics 2022, 11, 198. [Google Scholar] [CrossRef]
  10. Kim, D.; Kim, S.-H.; Kim, T.; Kang, B.B.; Lee, M.; Park, W.; Ku, S.; Kim, D.; Kwon, J.; Lee, H. Review of machine learning methods in soft robotics. PLoS ONE 2021, 16, e0246102. [Google Scholar] [CrossRef]
  11. Torres, E.P.; Torres, E.A.; Hernández-Álvarez, M.; Yoo, S.G. EEG-based BCI emotion recognition: A survey. Sensors 2020, 20, 5083. [Google Scholar] [CrossRef]
  12. Kuchin, Y.; Mukhamediev, R.; Yakunin, K.; Grundspenkis, J.; Symagulov, A. Assessing the impact of expert labelling of training data on the quality of automatic classification of lithological groups using artificial neural networks. Appl. Comput. Syst. 2020, 25, 145–152. [Google Scholar] [CrossRef]
  13. Kotsiantis, S.B.; Zaharakis, I.; Pintelas, P. Supervised machine learning: A review of classification techniques. Emerg. Artif. Intell. Appl. Comput. Eng. 2007, 160, 3–24. [Google Scholar]
  14. Hastie, T.; Tibshirani, R.; Friedman, J. Unsupervised learning. In The Elements of Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2009; pp. 485–585. [Google Scholar]
  15. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  16. Adopting, Deploying, and Applying AI. Available online: (accessed on 21 April 2022).
  17. Zhao, H. Assessing the economic impact of artificial intelligence. ITU Trends. In Emerging Trends in ICTs; Morgan Kaufmann Publishers: Burlington, MA, USA, 2018. [Google Scholar]
  18. Financial Climate in the Republic of Kazakhstan. Available online: (accessed on 21 April 2022).
  19. Bureau of National Statistics of the Agency for Strategic Planning and Reforms of the Republic of Kazakhstan. Available online: (accessed on 21 April 2022).
  20. Strategic Development Plan of the Republic of Kazakhstan until 2025. Available online: (accessed on 21 April 2022).
  21. Hidalgo, C.A.; Hausmann, R. The building blocks of economic complexity. Proc. Natl. Acad. Sci. USA 2009, 106, 10570–10575. [Google Scholar] [CrossRef] [Green Version]
  22. List of Countries by Economic Complexity. Available online: (accessed on 21 April 2022).
  23. The Global Competitiveness Report 2019. Available online: (accessed on 21 April 2022).
  24. The Socio-Economic Impact of AI in Healthcare, October 2020. Available online: (accessed on 21 April 2022).
  25. Haseeb, M.; Mihardjo, L.W.; Gill, A.R.; Jermsittiparsert, K. Economic impact of artificial intelligence: New look for the macroeconomic assessment in Asia-Pacific region. Int. J. Comput. Intell. Syst. 2019, 12, 1295. [Google Scholar] [CrossRef] [Green Version]
  26. Van Roy, V. AI Watch-National Strategies on Artificial Intelligence: A European Perspective in 2019; Joint Research Centre: Seville, Spain, 2020. [Google Scholar]
  27. A National Artificial Intelligence Cluster Will Appear in Kazakhstan. Available online: (accessed on 21 April 2022).
  28. Mukhamediev, R.I.; Yakunin, K.; Mussabayev, R.; Buldybayev, T.; Kuchin, Y.; Murzakhmetov, S.; Yelis, M. Classification of negative information on socially significant topics in mass media. Symmetry 2020, 12, 1945. [Google Scholar] [CrossRef]
  29. Yakunin, K.; Kalimoldayev, M.; Mukhamediev, R.I.; Mussabayev, R.; Barakhnin, V.; Kuchin, Y.; Murzakhmetov, S.; Buldybayev, T.; Ospanova, U.; Yelis, M. Kaznewsdataset: Single country overall digital mass media publication corpus. Data 2021, 6, 31. [Google Scholar] [CrossRef]
  30. Artificial Intelligence. Available online: (accessed on 21 April 2022).
  31. Everitt, T.; Goertzel, B.; Potapov, A. Artificial general intelligence. In Lecture Notes in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  32. Tizhoosh, H.R.; Pantanowitz, L. Artificial intelligence and digital pathology: Challenges and opportunities. J. Pathol. Inform. 2018, 9, 38. [Google Scholar] [CrossRef]
  33. Strong, A. Applications of artificial intelligence & associated technologies. In Proceedings of the International Conference on Emerging Technologies in Engineering, Biomedical, Management and Science [ETEBMS-2016], Jodhpur, India, 5–6 March 2016. [Google Scholar]
  34. The Artificial Intelligence (AI) White Paper. Available online: (accessed on 23 February 2021).
  35. Mukhamediev, R.I.; Symagulov, A.; Kuchin, Y.; Yakunin, K.; Yelis, M. From Classical Machine Learning to Deep Neural Networks: A Simplified Scientometric Review. Appl. Sci. 2021, 11, 5541. [Google Scholar] [CrossRef]
  36. Szczepanski, M. Economic Impacts of Artificial Intelligence (AI). 2019. EPRS: European Parliamentary Research Service. Available online: (accessed on 27 May 2022).
  37. Watanabe, S.; Hori, T.; Karita, S.; Hayashi, T.; Nishitoba, J.; Unno, Y.; Soplin, N.E.Y.; Heymann, J.; Wiesner, M.; Chen, N. Espnet: End-to-end speech processing toolkit. arXiv 2018, arXiv:1804.00015. [Google Scholar]
  38. Kerkeni, L.; Serrestou, Y.; Mbarki, M.; Raoof, K.; Mahjoub, M.A.; Cleder, C. Automatic speech emotion recognition using machine learning. In Social Media and Machine Learning; IntechOpen: London, UK, 2019. [Google Scholar]
  39. An, J.; Mikhaylov, A.; Sokolinskaya, N. Machine learning in economic planning: Ensembles of algorithms. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2019; p. 012126. [Google Scholar]
  40. Usuga Cadavid, J.P.; Lamouri, S.; Grabot, B.; Pellerin, R.; Fortin, A. Machine learning applied in production planning and control: A state-of-the-art in the era of industry 4.0. J. Intell. Manuf. 2020, 31, 1531–1558. [Google Scholar] [CrossRef]
  41. Ogidan, E.T.; Dimililer, K.; Ever, Y.K. Machine learning for expert systems in data analysis. In Proceedings of the 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey, 19–21 October 2018; pp. 1–5. [Google Scholar]
  42. Prasadl, B.; Prasad, P.; Sagar, Y. An approach to develop expert systems in medical diagnosis using machine learning algorithms (asthma) and a performance study. Int. J. Soft Comput. 2011, 2, 26–33. [Google Scholar] [CrossRef]
  43. Mosavi, A.; Varkonyi, A. Learning in robotics. Int. J. Comput. Appl. 2017, 157, 8–11. [Google Scholar] [CrossRef]
  44. Artrith, N.; Butler, K.T.; Coudert, F.-X.; Han, S.; Isayev, O.; Jain, A.; Walsh, A. Best practices in machine learning for chemistry. Nat. Chem. 2021, 13, 505–508. [Google Scholar] [CrossRef]
  45. Mater, A.C.; Coote, M.L. Deep learning in chemistry. J. Chem. Inf. Modeling 2019, 59, 2545–2559. [Google Scholar] [CrossRef]
  46. Cruz, J.A.; Wishart, D.S. Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2006, 2, 59–77. [Google Scholar] [CrossRef]
  47. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Brief. Bioinform. 2018, 19, 1236–1246. [Google Scholar] [CrossRef]
  48. Ball, N.M.; Brunner, R.J. Data mining and machine learning in astronomy. Int. J. Mod. Phys. D 2010, 19, 1049–1106. [Google Scholar] [CrossRef] [Green Version]
  49. Chicco, D. Ten quick tips for machine learning in computational biology. BioData Min. 2017, 10, 1–17. [Google Scholar] [CrossRef]
  50. Zitnik, M.; Nguyen, F.; Wang, B.; Leskovec, J.; Goldenberg, A.; Hoffman, M.M. Machine learning for integrating data in biology and medicine: Principles, practice, and opportunities. Inf. Fusion 2019, 50, 71–91. [Google Scholar] [CrossRef]
  51. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [Green Version]
  52. Mahdavinejad, M.S.; Rezvan, M.; Barekatain, M.; Adibi, P.; Barnaghi, P.; Sheth, A.P. Machine learning for Internet of Things data analysis: A survey. Digit. Commun. Netw. 2018, 4, 161–175. [Google Scholar] [CrossRef]
  53. Farrar, C.R.; Worden, K. Structural Health Monitoring: A Machine Learning Perspective; John Wiley & Sons: New York, NY, USA, 2012. [Google Scholar]
  54. Lai, J.; Qiu, J.; Feng, Z.; Chen, J.; Fan, H. Prediction of soil deformation in tunnelling using artificial neural networks. Comput. Intell. Neurosci. 2016, 2016, 33. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Recknagel, F. Applications of machine learning to ecological modelling. Ecol. Model. 2001, 146, 303–310. [Google Scholar] [CrossRef]
  56. Tatarinov, V.; Manevich, A.; Losev, I. A system approach to geodynamic zoning based on artificial neural networks. Gorn. Nauk. I Tekhnologii Min. Sci. Technol. 2018, 3, 14–25. [Google Scholar] [CrossRef]
  57. Kuchin, Y.I.; Mukhamediev, R.I.; Yakunin, K.O. One method of generating synthetic data to assess the upper limit of machine learning algorithms performance. Cogent Eng. 2020, 7, 1718821. [Google Scholar] [CrossRef]
  58. Chen, Y.; Wu, W. Application of one-class support vector machine to quickly identify multivariate anomalies from geochemical exploration data. Geochem. Explor. Environ. Anal. 2017, 17, 231–238. [Google Scholar] [CrossRef]
  59. Mukhamediev, R.I.; Kuchin, Y.; Amirgaliyev, Y.; Yunicheva, N.; Muhamedijeva, E. Estimation of Filtration Properties of Host Rocks in Sandstone-Type Uranium Deposits Using Machine Learning Methods. IEEE Access 2022, 10, 18855–18872. [Google Scholar] [CrossRef]
  60. Goldberg, Y. A primer on neural network models for natural language processing. J. Artif. Intell. Res. 2016, 57, 345–420. [Google Scholar] [CrossRef] [Green Version]
  61. Sadovskaya, L.L.; Guskov, A.E.; Kosyakov, D.V.; Mukhamediev, R.I. Natural language text processing: A review of publications. Artif. Intell. Decis. Mak. 2021, 95–115. [Google Scholar] [CrossRef]
  62. Nassif, A.B.; Shahin, I.; Attili, I.; Azzeh, M.; Shaalan, K. Speech recognition using deep neural networks: A systematic review. IEEE Access 2019, 7, 19143–19165. [Google Scholar] [CrossRef]
  63. Li, Y. Deep reinforcement learning: An overview. arXiv 2017, arXiv:1701.07274. [Google Scholar]
  64. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  65. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323. [Google Scholar] [CrossRef]
  66. Barbakh, W.A.; Wu, Y.; Fyfe, C. Review of clustering algorithms. In Non-Standard Parameter Adaptation for Exploratory Data Analysis; Springer: Berlin/Heidelberg, Germany, 2009; pp. 7–28. [Google Scholar]
  67. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 1–40. [Google Scholar] [CrossRef] [Green Version]
  68. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185. [Google Scholar]
  69. Dudani, S.A. The distance-weighted k-nearest-neighbor rule. IEEE Trans. Syst. Man Cybern. 1976, 4, 325–327. [Google Scholar] [CrossRef]
  70. Yu, H.-F.; Huang, F.-L.; Lin, C.-J. Dual coordinate descent methods for logistic regression and maximum entropy models. Mach. Learn. 2011, 85, 41–75. [Google Scholar] [CrossRef] [Green Version]
  71. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  72. Zhang, G.P. Neural networks for classification: A survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2000, 30, 451–462. [Google Scholar] [CrossRef] [Green Version]
  73. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Auckland, CA, USA, 7 January 1967; pp. 281–297. [Google Scholar]
  74. Tenenbaum, J.B.; Silva, V.d.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef]
  75. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [Green Version]
  76. Borg, I.; Groenen, P.J. Modern multidimensional scaling: Theory and applications. J. Educ. Meas. 2005, 40, 277–280. [Google Scholar] [CrossRef]
  77. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  78. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  79. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  80. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [Green Version]
  81. LeCun, Y.; Kavukcuoglu, K.; Farabet, C. Convolutional networks and applications in vision. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 30 May–2 June 2010; pp. 253–256. [Google Scholar]
  82. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  83. The Neural Network Zoo. Available online: (accessed on 21 April 2022).
  84. Nguyen, G.; Dlugolinsky, S.; Bobák, M.; Tran, V.; Lopez Garcia, A.; Heredia, I.; Malík, P.; Hluchý, L. Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artif. Intell. Rev. 2019, 52, 77–124. [Google Scholar] [CrossRef] [Green Version]
  85. Nayebi, A.; Vitelli, M. Gruv: Algorithmic music generation using recurrent neural networks Course CS224D. Deep. Learn. Nat. Lang. Processing (Stanf.) 2015, 1–6. Available online: (accessed on 27 May 2022).
  86. Lu, S.; Zhu, Y.; Zhang, W.; Wang, J.; Yu, Y. Neural text generation: Past, present and beyond. arXiv 2018, arXiv:1803.07133. [Google Scholar]
  87. Zhang, L.; Wang, S.; Liu, B. Deep learning for sentiment analysis: A survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1253. [Google Scholar] [CrossRef] [Green Version]
  88. Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; Dyer, C. Neural architectures for named entity recognition. arXiv 2016, arXiv:1603.01360. [Google Scholar]
  89. Liu, X. Deep recurrent neural network for protein function prediction from sequence. arXiv 2017, arXiv:1701.08318. [Google Scholar]
  90. Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv 2016, arXiv:1609.08144. [Google Scholar]
  91. Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A. Deep speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567. [Google Scholar]
  92. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  93. Peters, M.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; Zettlemoyer, L. Deep contextualized word representations. arXiv 2018, arXiv:1802.05365. [Google Scholar]
  94. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Processing Syst. 2014, 27, 1–9. Available online: (accessed on 27 May 2022).
  95. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Processing Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef] [Green Version]
  96. Agnese, J.; Herrera, J.; Tao, H.; Zhu, X. A survey and taxonomy of adversarial neural networks for text-to-image synthesis. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1345. [Google Scholar] [CrossRef] [Green Version]
  97. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  98. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Processing Syst. 2012, 25, 1–9. Available online: (accessed on 27 May 2022). [CrossRef]
  99. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  100. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  101. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  102. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  103. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  104. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 1–11. Available online: (accessed on 27 May 2022).
  105. Yuan, L.; Chen, D.; Chen, Y.-L.; Codella, N.; Dai, X.; Gao, J.; Hu, H.; Huang, X.; Li, B.; Li, C. Florence: A new foundation model for computer vision. arXiv 2021, arXiv:2111.11432. [Google Scholar]
  106. Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 12009–12019. [Google Scholar]
  107. Zhang, H.; Li, F.; Liu, S.; Zhang, L.; Su, H.; Zhu, J.; Ni, L.M.; Shum, H.-Y. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv 2022, arXiv:2203.03605. [Google Scholar]
  108. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  109. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
  110. Gallicchio, C.; Micheli, A. Graph echo state networks. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
  111. Li, Y.; Tarlow, D.; Brockschmidt, M.; Zemel, R. Gated graph sequence neural networks. arXiv 2015, arXiv:1511.05493. [Google Scholar]
  112. Riba, P.; Fischer, A.; Lladós, J.; Fornés, A. Learning graph distances with message passing neural networks. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2239–2244. [Google Scholar]
  113. Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261. [Google Scholar]
  114. Do, K.; Tran, T.; Venkatesh, S. Graph transformation policy network for chemical reaction prediction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 750–760. [Google Scholar]
  115. Peng, H.; Li, J.; He, Y.; Liu, Y.; Bao, M.; Wang, L.; Song, Y.; Yang, Q. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; pp. 1063–1072. [Google Scholar]
  116. Garcia, V.; Bruna, J. Few-shot learning with graph neural networks. arXiv 2017, arXiv:1711.04043. [Google Scholar]
  117. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  118. Islas-Cota, E.; Gutierrez-Garcia, J.O.; Acosta, C.O.; Rodríguez, L.-F. A systematic review of intelligent assistants. Future Gener. Comput. Syst. 2022, 128, 45–62. [Google Scholar] [CrossRef]
  119. Motai, Y.; Siddique, N.A.; Yoshida, H. Heterogeneous data analysis: Online learning for medical-image-based diagnosis. Pattern Recognit. 2017, 63, 612–624. [Google Scholar] [CrossRef]
  120. Kilian, M.A.; Kattenbeck, M.; Ferstl, M.; Ludwig, B.; Alt, F. Towards task-sensitive assistance in public spaces. Aslib J. Inf. Manag. 2019, 71, 344–367. [Google Scholar] [CrossRef]
  121. Chuan, C.-H.; Morgan, S. Creating and evaluating chatbots as eligibility assistants for clinical trials: An active deep learning approach towards user-centered classification. ACM Trans. Comput. Healthc. 2020, 2, 1–19. [Google Scholar] [CrossRef]
  122. Migkotzidis, P.; Liapis, A. SuSketch: Surrogate models of gameplay as a design assistant. IEEE Trans. Games 2021, 14, 273–283. [Google Scholar] [CrossRef]
  123. Sahadat, N.; Sebkhi, N.; Ghovanloo, M. Simultaneous multimodal access to wheelchair and computer for people with tetraplegia. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, Boulder, CO, USA, 16–20 October 2018; pp. 393–399. [Google Scholar]
  124. Haescher, M.; Matthies, D.J.; Srinivasan, K.; Bieber, G. Mobile assisted living: Smartwatch-based fall risk assessment for elderly people. In Proceedings of the 5th International Workshop on Sensor-Based Activity Recognition and Interaction, Berlin, Germany, 20–21 September 2018; pp. 1–10. [Google Scholar]
  125. Kumar Shastha, T.; Kyrarini, M.; Gräser, A. Application of reinforcement learning to a robotic drinking assistant. Robotics 2019, 9, 1. [Google Scholar] [CrossRef] [Green Version]
  126. Viceconti, M.; Hunter, P.; Hose, R. Big data, big knowledge: Big data for personalized healthcare. IEEE J. Biomed. Health Inform. 2015, 19, 1209–1215. [Google Scholar] [CrossRef]
  127. Hemminghaus, J.; Kopp, S. Towards adaptive social behavior generation for assistive robots using reinforcement learning. In Proceedings of the 2017 12th ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 332–340. [Google Scholar]
  128. Duguleană, M.; Briciu, V.-A.; Duduman, I.-A.; Machidon, O.M. A virtual assistant for natural interactions in museums. Sustainability 2020, 12, 6958. [Google Scholar] [CrossRef]
  129. Mardani, A.; Nilashi, M.; Antucheviciene, J.; Tavana, M.; Bausys, R.; Ibrahim, O. Recent fuzzy generalisations of rough sets theory: A systematic review and methodological critique of the literature. Complexity 2017, 2017, 33. [Google Scholar] [CrossRef] [Green Version]
  130. Paliwal, C.; Biyani, P. To each route its own ETA: A generative modeling framework for ETA prediction. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3076–3081. [Google Scholar]
  131. Sanenga, A.; Mapunda, G.A.; Jacob, T.M.L.; Marata, L.; Basutli, B.; Chuma, J.M. An overview of key technologies in physical layer security. Entropy 2020, 22, 1261. [Google Scholar] [CrossRef] [PubMed]
  132. Blackburn, H. Biobanking genetic material for agricultural animal species. Annu. Rev. Anim. Biosci. 2018, 6, 69–82. [Google Scholar] [CrossRef] [PubMed]
  133. UNESCO, Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. 2019. Available online: (accessed on 27 May 2022).
  134. Tahiru, F. AI in education: A systematic literature review. J. Cases Inf. Technol. 2021, 23, 1–20. [Google Scholar] [CrossRef]
  135. Opportunities and Challenges of Artificial Intelligence Technologies for the Cultural and Creative Sectors. Available online: (accessed on 27 May 2022).
  136. Phua, A.; Davies, C.; Delaney, G. A digital twin hierarchy for metal additive manufacturing. Comput. Ind. 2022, 140, 103667. [Google Scholar] [CrossRef]
  137. Rocha-Jácome, C.; Carvajal, R.G.; Chavero, F.M.; Guevara-Cabezas, E.; Hidalgo Fort, E. Industry 4.0: A Proposal of Paradigm Organization Schemes from a Systematic Literature Review. Sensors 2021, 22, 66. [Google Scholar] [CrossRef]
  138. Hussein, W.N.; Kamarudin, L.; Hussain, H.N.; Zakaria, A.; Ahmed, R.B.; Zahri, N. The prospect of internet of things and big data analytics in transportation system. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2018; p. 012013. [Google Scholar]
  139. AI Adoption Advances, But Foundational Barriers Remain. Available online: (accessed on 21 April 2022).
  140. 4 Major Barriers to AI Adoption. Available online: (accessed on 21 April 2022).
  141. 3 Barriers to AI Adoption. Available online: (accessed on 21 April 2022).
  142. Wang, W.; Siau, K. Artificial intelligence, machine learning, automation, robotics, future of work and future of humanity: A review and research agenda. J. Database Manag. 2019, 30, 61–79. [Google Scholar] [CrossRef]
  143. Reddy, A.S.B.; Juliet, D.S. Transfer learning with ResNet-50 for malaria cell-image classification. In Proceedings of the 2019 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 4–6 April 2019; pp. 0945–0949. [Google Scholar]
  144. A Reality Check for IBM’s AI Ambitions. Available online: (accessed on 21 April 2022).
  145. Lu, H.; Li, Y.; Chen, M.; Kim, H.; Serikawa, S. Brain intelligence: Go beyond artificial intelligence. Mob. Netw. Appl. 2018, 23, 368–375. [Google Scholar] [CrossRef] [Green Version]
  146. Mukhamediev, R.I.; Symagulov, A.; Kuchin, Y.; Zaitseva, E.; Bekbotayeva, A.; Yakunin, K.; Assanov, I.; Levashenko, V.; Popova, Y.; Akzhalova, A. Review of Some Applications of Unmanned Aerial Vehicles Technology in the Resource-Rich Country. Appl. Sci. 2021, 11, 10171. [Google Scholar] [CrossRef]
  147. Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-borne hyperspectral with high spatial resolution (H2) benchmark datasets and classifier for precise crop identification based on deep convolutional neural network with CRF. Remote Sens. Environ. 2020, 250, 112012. [Google Scholar] [CrossRef]
  148. Peres, R.S.; Jia, X.; Lee, J.; Sun, K.; Colombo, A.W.; Barata, J. Industrial artificial intelligence in industry 4.0-systematic review, challenges and outlook. IEEE Access 2020, 8, 220121–220139. [Google Scholar] [CrossRef]
  149. Mhlanga, D. Industry 4.0 in finance: The impact of artificial intelligence (ai) on digital financial inclusion. Int. J. Financ. Stud. 2020, 8, 45. [Google Scholar] [CrossRef]
  150. European Commission. Ethics Guidelines for Trustworthy AI. 2019. Available online: (accessed on 27 May 2022).
  151. Macas, M.; Wu, C.; Fuertes, W. A survey on deep learning for cybersecurity: Progress, challenges, and opportunities. Comput. Netw. 2022, 109032. [Google Scholar] [CrossRef]
  152. Rico-Bautista, D.; Guerrero, C.D.; Collazos, C.A.; Maestre-Gongora, G.; Sánchez-Velásquez, M.C.; Medina-Cárdenas, Y.; Parra-Sánchez, D.T.; Barreto, A.G.; Swaminathan, J. Key Technology Adoption Indicators for Smart Universities: A Preliminary Proposal. In Intelligent Sustainable Systems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 651–663. [Google Scholar]
  153. Government AI Readiness Index 2020—Oxford Insights. Available online: (accessed on 27 May 2022).
  154. Machine Learning and the Five Vectors of Progress. Available online: (accessed on 21 April 2022).
  155. Rychnovská, D. Anticipatory governance in biobanking: Security and risk management in digital health. Sci. Eng. Ethics 2021, 27, 1–18. [Google Scholar] [CrossRef]
  156. Khan, S.; Khan, A.; Maqsood, M.; Aadil, F.; Ghazanfar, M.A. Optimized gabor feature extraction for mass classification using cuckoo search for big data e-healthcare. J. Grid Comput. 2019, 17, 239–254. [Google Scholar] [CrossRef]
  157. Tkachenko, R.; Izonin, I. Model and principles for the implementation of neural-like structures based on geometric data transformations. In Proceedings of the International Conference on Computer Science, Engineering and Education Applications, Kiev, Ukraine, 18–20 January 2018; pp. 578–587. [Google Scholar]
  158. Kulkarni, V.; Kulkarni, M.; Pant, A. Quantum computing methods for supervised learning. Quantum Mach. Intell. 2021, 3, 1–14. [Google Scholar] [CrossRef]
  159. Negro, P.; Pons, C.F. Artificial Intelligence techniques based on the integration of symbolic logic and deep neural networks: A systematic review of the literature. Intel. Artif. 2022, 25. [Google Scholar] [CrossRef]
  160. Verhulst, S.G.; Young, A. Open Data in Developing Economies: Toward building an Evidence Base on What Works and How; African Minds: Cape Town, South Africa, 2017. [Google Scholar]
  161. Imagenet. Available online: (accessed on 21 April 2022).
  162. Open Images Dataset M5+ Extensions. Available online: (accessed on 21 April 2022).
  163. COCO Dataset. Available online: (accessed on 21 April 2022).
  164. Wong, M.Z.; Kunii, K.; Baylis, M.; Ong, W.H.; Kroupa, P.; Koller, S. Synthetic dataset generation for object-to-model deep learning in industrial applications. PeerJ Comput. Sci. 2019, 5, e222. [Google Scholar] [CrossRef] [Green Version]
  165. Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A.M. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3234–3243. [Google Scholar]
  166. Cho, S. How to Generate Image Dataset based on 3D Model and Deep Learning Method. Int. J. Eng. Technol. 2015, 7, 221–225. [Google Scholar] [CrossRef]
  167. Müller, M.; Casser, V.; Lahoud, J.; Smith, N.; Ghanem, B. Sim4cv: A photo-realistic simulator for computer vision applications. Int. J. Comput. Vis. 2018, 126, 902–919. [Google Scholar] [CrossRef] [Green Version]
  168. Doan, A.-D.; Jawaid, A.M.; Do, T.-T.; Chin, T.-J. G2D: From GTA to Data. arXiv 2018, arXiv:1806.07381. [Google Scholar]
  169. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An open urban driving simulator. In Proceedings of the Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; pp. 1–16. [Google Scholar]
  170. Arvanitis, T.N.; White, S.; Harrison, S.; Chaplin, R.; Despotou, G. A method for machine learning generation of realistic synthetic datasets for Validating Healthcare Applications. Health Inform. J. 2022, 28, 14604582221077000. [Google Scholar] [CrossRef]
  171. Nikolenko, S.I. Synthetic Data for Deep Learning; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  172. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  173. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019, 6, 94. [Google Scholar] [CrossRef] [Green Version]
  174. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
  175. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable ai: A review of machine learning interpretability methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef]
  176. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar]
  177. Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 1–10. Available online: (accessed on 27 May 2022).
  178. Van den Broeck, G.; Lykov, A.; Schleich, M.; Suciu, D. On the tractability of SHAP explanations. In Proceedings of the 35th Conference on Artificial Intelligence (AAAI), Washington, DC, USA, 7–14 February 2021. [Google Scholar]
  179. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  180. Erol, B.A.; Majumdar, A.; Benavidez, P.; Rad, P.; Choo, K.-K.R.; Jamshidi, M. Toward artificial emotional intelligence for cooperative social human–machine interaction. IEEE Trans. Comput. Soc. Syst. 2019, 7, 234–246. [Google Scholar] [CrossRef]
  181. Schuller, D.; Schuller, B.W. The age of artificial emotional intelligence. Computer 2018, 51, 38–46. [Google Scholar] [CrossRef]
  182. Shaban-Nejad, A.; Michalowski, M.; Buckeridge, D.L. Health intelligence: How artificial intelligence transforms population and personalized health. NPJ Digit. Med. 2018, 1, 1–2. [Google Scholar] [CrossRef] [PubMed]
  183. Shaham, U.; Yamada, Y.; Negahban, S. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing 2018, 307, 195–204. [Google Scholar] [CrossRef] [Green Version]
  184. Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus. Horiz. 2018, 61, 577–586. [Google Scholar] [CrossRef]
  185. Muhamedyev, R.I.; Aliguliyev, R.M.; Shokishalov, Z.M.; Mustakayev, R.R. New bibliometric indicators for prospectivity estimation of research fields. Ann. Libr. Inf. Stud. 2018, 65, 62–69. [Google Scholar]
  186. Research and Development (% of GDP). Available online: (accessed on 21 April 2022).
Figure 1. The number of review studies devoted to artificial intelligence (AI), machine learning (ML), and deep learning (DL) indexed in Scopus.
Figure 2. Ranking of the reviews in Scopus according to the areas of application (percentage). Source: generated by the authors.
Figure 3. Subsections of artificial intelligence. Source: generated by the authors.
Figure 4. Classic and modern ML models. Source: generated by the authors.
Figure 5. Deep networks. Source: generated by the authors.
Figure 6. Recurrent neural networks. Source: generated by the authors.
Figure 7. An example of the performance of a vertical filter for a 6 × 6 pixel image. Source: generated by the authors.
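The vertical-filter operation shown in Figure 7 can be reproduced with a minimal NumPy sketch. The 6 × 6 test image, the kernel values, and the helper function below are illustrative assumptions in the spirit of the classic worked example (a bright/dark vertical edge detected by a 3 × 3 filter), not code from the reviewed works:

```python
import numpy as np

# 6x6 test image: bright left half, dark right half (a vertical edge).
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)

# 3x3 vertical-edge filter: +1 on the left column, -1 on the right.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

def convolve2d_valid(img, k):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    without padding, so a 6x6 input and a 3x3 kernel give a 4x4 output."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d_valid(image, kernel)
print(feature_map)  # each row is [0, 30, 30, 0]: large values mark the edge
```

The two middle columns of the 4 × 4 feature map light up exactly where the vertical edge lies, which is the behavior the figure illustrates.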
Figure 8. LeNet network architecture. Source: generated by the authors.
Figure 9. Constraints hindering the implementation of AI technologies in the economy. Source: generated by the authors.
Figure 10. Classification of methods for interpreting machine learning. Source: generated by the authors.
Figure 11. Annual number of publications in deep learning domains related to new neural network architectures. Source: generated by the authors.
Figure 12. Speed D1 and acceleration D2 of the increase in the number of scientific publications in the applied domains of deep learning. Source: generated by the authors.
Table 1. External AI restrictions.
Problems External to AI Technology
Organizational [139]
  • Lack of strategy for AI adoption.
  • Functional fragmentation that hinders the integrated use of AI.
  • Lack of leadership and commitment to AI development.
  • Weak technological infrastructure.
  • Difficulties in data collection and limited data usefulness [99].
Personnel [140]
  • Fear of changes and new technologies.
  • Shortage of qualified personnel.
Economic [142]
  • High cost of AI-based solutions.
  • Insufficient readiness for practical application.
  • Wealth gap and inequality.
  • Economy of developing countries.
Social [142]
  • Job displacement and replacement.
  • Trust and adoption.
  • Ethical and morality issues.
  • Legal issues and regulation policy.
Source: generated by the authors.
Table 2. Internal limitations of AI technology.
Internal Limitations of AI Technology [32,145,148,154]
Data
  • Difficulties in data collection and preliminary processing.
  • Large amounts of data are required.
  • Lack of labeled data and time-consuming data labeling.
  • Privacy, “bias”, and data security.
Learning processes
  • Slow learning process.
  • Significant computing capacities are required.
  • Lack of technologies for processing large images.
Results
  • Lack of transparency and interpretability.
  • Instability of the solutions obtained by machine learning and their sensitivity to noise.
  • Single-tasking nature of modern machine learning models and limited associativity.
Technology
  • The need to develop new machine learning models for particular application cases and specific data.
  • The need for a cyber infrastructure for industrial AI&ML applications.
Source: generated by the authors.
Table 3. Directions of research oriented towards overcoming the limitations of AI&ML usage.
Overcoming the External Limitations of AI&ML
Organizational: Unification of the dataset formation processes; development of policies and technologies for data accumulation and use.
Personnel: Training of specialists and explanatory work among applied-domain specialists.
Economic: Creation of unified solutions suitable for application in many areas; development of economic models of AI application.
Social: Social research and the empowerment of explainable machine intelligence [179]. Research in the field of artificial emotional intelligence [180,181] should improve the quality of human–machine interaction and personalized AI; for example, if AI is employed in medicine, it should identify patients’ preferences, personalize assistance to patients (and their families) participating in the care process, and personalize “general” therapy plans and the information provided to patients [182].
Overcoming the Internal Limitations of AI&ML
Data: Unification of the data collection and data markup processes; formation of datasets in different areas of AI&ML application.
Learning processes: Research in the field of transfer learning. Research in the field of increasing computing power and new technical solutions.
Results: Development of systems for interpreting the results of machine learning models and simplification of the interaction with applied-domain specialists. Existing tools of this kind are intended for specialists only.
Technologies: Research in the field of improving the stability of the results generated by machine learning models [183]. Development of models for specific application cases of machine learning (e.g., drones). Research in the field of general or strong AI. Research in the field of human–machine symbiosis oriented towards the expansion of human intelligence (“intelligence augmentation”) instead of replacing it [184].
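One direction listed in Table 3, interpretation of machine learning results, can be illustrated with a minimal, model-agnostic sketch in the spirit of the interpretability methods surveyed in [175]: permutation importance, which scores a feature by how much the model's error grows when that feature's column is shuffled. The toy data, the stand-in "black-box" model, and the helper name below are illustrative assumptions, not code from the reviewed works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: the target depends only on feature 0; feature 1 is noise.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# A "black-box" predictor: least-squares weights stand in for any trained model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(X_):
    return X_ @ w

def permutation_importance(X_, y_, n_repeats=10):
    """Score each feature by the mean-squared-error increase caused by
    shuffling that feature's column while leaving the others intact."""
    base = np.mean((predict(X_) - y_) ** 2)
    scores = []
    for j in range(X_.shape[1]):
        errs = []
        for _ in range(n_repeats):
            Xp = X_.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            errs.append(np.mean((predict(Xp) - y_) ** 2))
        scores.append(np.mean(errs) - base)
    return np.array(scores)

imp = permutation_importance(X, y)
print(imp)  # feature 0 should receive a far larger score than feature 1
```

Because the method only queries the model through `predict`, the same procedure applies unchanged to any classifier or regressor, which is what makes such post hoc explanations attractive for the non-specialist users discussed above.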
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Mukhamediev, R.I.; Popova, Y.; Kuchin, Y.; Zaitseva, E.; Kalimoldayev, A.; Symagulov, A.; Levashenko, V.; Abdoldina, F.; Gopejenko, V.; Yakunin, K.; et al. Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges. Mathematics 2022, 10, 2552.



