1.1. Anthropomorphism
Historically, artificial intelligence has been contemplated in anthropomorphic terms [
1,
2], yet the desire to make algorithms human-like prevents an adequate comprehension of ethical problems related to emerging technologies.
An artificial neural network (ANN) is an anthropomorphic computation system designed to simulate information analysis and processing by the human brain. These systems form a foundation for AI and Machine Learning (ML) technologies. A research study [
3] covers the fundamentals of artificial neural networks. ANNs are very popular in ML research, driving the rapid development of AI and ML systems for many tasks, such as text processing, speech recognition, and image processing. They are also important tools for discovering patterns that are too complex or too numerous for a human developer to extract and encode for a machine.
There is a difference between a normative and conceptual approach to AI issues [
4]. It is important to formalize psychological and neurological terms in a quantitative language and to explain their role in intellectual behavior. The importance of AI for understanding the human brain is, however, limited by the fact that AI systems and the brain are not structurally isomorphic. Notwithstanding the above, there is still potential for collaboration between AI and neurobiology. AI has benefited, and apparently will continue to benefit, from neuroscience, yet compliance with biological plausibility cannot be imposed; for AI developers, biological plausibility is a roadmap rather than a mandatory requirement. Understanding AI through human mental patterns risks reducing it to a sort of limited copy of human intelligence.
When using AI service agents, clients expect better performance from more anthropomorphic agents. At the same time, however, they feel more threatened when dealing with highly human-like agents [5] and therefore tend to prefer less human-like ones. This effect is meaningful only in social scenarios, and it offers a new understanding of previously inconsistent findings regarding the effect of anthropomorphic design on clients’ willingness to use AI service agents.
While conversational AI agents (ICAs) are becoming an increasingly popular service tool for various enterprises, their successful implementation requires a good understanding of ICA acceptance factors. A study [
6] proposes a collective model of ICA acceptance and usage, where acceptance mainly depends on the benefits of use, which, in turn, depend on agent and user characteristics. As emphasized in this study, the proposed model is context dependent because the relevant factors depend on usage parameters. Certain strategic implications for business are also mentioned, such as service design, personalization, and customer care management.
Artificial neural networks are trained by means of machine learning (ML). One approach to ML for this purpose was proposed by [
7], who combined imitation learning (IL) and several types of reinforcement learning (RL). In their study, these researchers examined the performance of a human teacher, who trains the agent to deal with environmental factors, and an agent learner, who has a specific goal.
Another study [
8] examined the integration of algorithms into the social fabric of an organization, and the interplay between humans and machines in a human-in-the-loop configuration. Over time, humans and algorithms are configured and reconfigured in multiple ways, while the organization addresses algorithmic analysis. These new configurations call for new organizational roles and the redistribution of organizational knowledge, together with efforts to improve the algorithms and the data collection architecture. This study supports the strategic importance of a human-in-the-loop pattern in organizational efforts to ensure that the algorithm’s performance meets the organization’s requirements and is responsive to environmental changes.
The concept of AI relies on four elements: data, information, knowledge, and intelligence itself [
9]. Data are raw facts, while information assigns meaning. Knowledge is an interpretation of information, and intelligence applies relevant knowledge to solve problems. It involves perception, judgment, rules, and expertise, leading to new knowledge. Developers use various knowledge types to create patterns in specific areas when developing intellectual software systems.
Another study [
10] offers a philosophical understanding of the special nature and evolution of computer modeling. Computer knowledge in artificial intelligence (AI) is an object of modeling. This study deals with the correlation between the private, subjectified, and personalized knowledge of specific individuals and the non-personalized, objectified knowledge used in computer modeling. Analysis of knowledge representation in terms of computer modeling shows the importance of the successful results obtained with current technological capabilities (i.e., computer representation of knowledge) in this type of modeling. On the other hand, the results and performance of computer modeling provide new insights into philosophical and general scientific problems, such as the knowledge representation problem, and encourage a search for new solutions.
The state of the research methodology employed by studies applying AI techniques to problem solving in engineering design, analysis, and manufacturing is poor. There may be many reasons for this, including an unfortunate legacy from AI, shortcomings of the educational system, and researchers’ sloppiness. Understanding this state is a prerequisite for improvement. The study of research methodology can promote such understanding and, most importantly, can assist in improving the situation. A study [
11] deals with general methodological foundations of studying Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM). The urgency of this problem becomes more apparent in view of a great number of articles dealing with the myths, legends, and misconceptions related to certain AI issues (such as expert systems, fuzzy logic, and ML).
Visual recognition systems are now an integral part of modern computer vision. While interactive and embodied visual AI is the natural next step beyond static visual recognition [
12], the crucial question remains as to the degree to which we can generalize models trained in simulation to real life. Creating such a generalizable ecosystem for studying simulated and real embodied AI remains challenging and costly, and entails the resolution of several issues including:
- (a) The inherently interactive nature of the problem;
- (b) The need to ensure that the simulated environment is closely aligned with the real world;
- (c) Creating conditions that allow replication and repeatability of experiments.
RoboTHOR offers researchers a framework of simulated environments that can be used to address and resolve the challenge of achieving similar performance levels in real environments.
AI is increasingly being used in real-time data distribution (Big Data) to enhance education by enabling personalized, flexible, and inclusive learning experiences [
13]. Governments, educational sectors, and organizations are exploring the implementation of AI tools and platforms to improve the efficiency and effectiveness of monitoring the educational system compared to current methods. AI is defined as the ability of digital computers or robots to perform tasks typically associated with humans. ML and data analysis techniques are receiving significant attention, as they allow for the acquisition, structuring, and analysis of large-scale datasets, enabling the identification of patterns, trends, associations, and predictions. An intelligent system is one that “learns” from data, using it to make informed decisions in specific cases, provided the data accurately represent the objects they describe.
AI clearly has a diverse, encompassing effect on education [
14] on two important levels. First, AI-based innovations are being developed to optimize and enhance existing educational systems. AI developments for the field of education range, for example, from personalized systems that use virtual assistants, to systems that track student or teacher activities. Despite the promise of these innovations, their risks for users’ privacy and well-being are not insignificant. Second, AI has created broad social changes that require reforms of traditional educational systems. AI capabilities underscore the need for educational systems to train everyday users to understand the systems that are being developed. AI also highlights the need for education in arts and humanities, and the need to train critical thinkers to be able to ask questions and evaluate answers.
AI is a well-established scientific field with remarkable achievements over the years. Alongside this, the popularity of interactive computer games and virtual environments has grown, attracting millions of users. The study [
15] provides valuable insights into AI agents in virtual worlds. Views on how to achieve human-level AI vary, and the challenges differ accordingly. Developing human-level AI is complex, as it requires integrating various fundamental human capabilities. Some researchers believe it will eventually be accomplished with new approaches; yet even if it proves to be an insurmountable challenge, studying human-level AI enhances our understanding of human intelligence and benefits various scientific disciplines.
Inference. Artificial intelligence is mainly perceived as an anthropomorphic phenomenon, in terms of the simulation of the human brain and its functions. Philosophical, methodological, and social aspects of artificial intelligence, comparative analysis of human and non-human consciousness, cognitive and psychological problems, etc., are treated accordingly. In the meantime, the anthropomorphic nature of existing systems is largely overstated, and the benefits of this approach raise reasonable doubts. When combined, such features of anthropomorphic AI as sociability and independence can both attract and discourage users. Independence creates additional social and legal tension, which is attributed, inter alia, to uncertain moral and legal evaluations of AI’s potential implications and to the resulting liability. Better performance of artificial intelligence should be ensured by incorporating realistic considerations into solving specific problems, rather than by the desire to achieve “human likeness”. This being said, a reasonable balance between independence and sociability should always be sought.
1.2. Transparency
The field of information systems (IS) is currently undergoing a significant transition from rule-based decision support systems based on deterministic algorithms [
16] to the use of probabilistic algorithms (e.g., deep learning). Based on patterns identified in data, such probabilistic algorithms draw conclusions and make predictions about other data, under some uncertainty. Despite their enormous potential, research offers numerous examples of how probabilistic algorithms may have systemic biases built into them, as a result of which their implementation leads to systemic discrimination. In one example, decision support systems for credit loan applications disproportionately denied applications from women and from individuals living in certain areas or belonging to a specific ethnic background.
Transparency is a significant concern in AI systems. While these systems enhance automated decision-making capabilities, they often lack transparency in providing explanations for their recommendations or predictions. ML processes over large datasets drive these systems, but the underlying reasoning remains hidden. Users also struggle to access and understand potential biases embedded in algorithms or obscured in training data. A study by [
17] proposes three rules for developing meaningful explanations for non-transparent “black box” AI/ML systems. These rules involve using logic, statistics, and causal interpretations, inferring local explanations for specific cases by auditing the black box near the target instance, and generalizing multiple local explanations into simple global ones. Their approach allows for diverse data sources, languages, learning problems, and auditing methods to be employed when generating explanations.
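To make the local-to-global explanation idea above concrete, the following minimal sketch (our illustration, not the actual method of [17]) audits a hypothetical black-box model around a target instance by sampling perturbations, weighting them by proximity, and fitting an interpretable linear surrogate whose coefficients serve as a local explanation; the toy black box and feature values are invented for the example.

```python
import numpy as np

def local_explanation(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Audit a black-box model near instance x and fit a local linear surrogate.

    black_box: callable mapping an (n, d) array to an (n,) array of scores.
    Returns one coefficient per feature (the local importances).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the target instance with small Gaussian noise and query the model.
    X_local = x + scale * rng.standard_normal((n_samples, d))
    y_local = black_box(X_local)
    # Weight samples by proximity to x (closer samples matter more).
    dist = np.linalg.norm(X_local - x, axis=1)
    w = np.exp(-(dist ** 2) / (2 * scale ** 2))
    # Weighted least squares fit of an interpretable linear surrogate.
    A = np.hstack([X_local, np.ones((n_samples, 1))])   # add an intercept column
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y_local, rcond=None)
    return coef[:-1]                                     # drop the intercept

# Hypothetical usage with a toy stand-in for a trained black-box model.
toy_black_box = lambda X: 3 * X[:, 0] - 2 * X[:, 1] ** 2
print(local_explanation(toy_black_box, np.array([1.0, 0.5])))
```

Repeating such an audit for many instances and summarizing the local coefficients is one way the generalization from local to global explanations can be approached.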
Another article [
18] offers a brief analytical review of the current situation in the explainability of AI, in the context of recent advances in ML and deep learning. AI and ML have demonstrated their potential to revolutionize industries, public services, and society, achieving or even surpassing human levels of performance in terms of accuracy for a range of problems, such as image and speech recognition and language translation. However, their most successful offering in terms of accuracy—deep learning (DL)—is often characterized as being “black box” and non-transparent. Using non-transparent “black box” models is especially problematic in highly sensitive areas, such as healthcare and other applications related to human life, rights, finances, and privacy. Since the applications of advanced AI and ML, including DL, are now growing rapidly, encompassing the digital health, legal, transport, finance, and defense sectors, the importance of transparency and explainability is increasingly recognized.
Technological innovations come with risks that require comprehensive management [
19]. While AI-based advances like health IT, robotics, chatbots, and social media offer benefits, users often lack an understanding of how AI systems make decisions, especially deep learning neural networks. This lack of understanding undermines trust, hides biases, and leads to discrimination. Explainable AI (XAI) systems aim to provide visibility into the decision-making process, offering explanations that enhance trust and impact user actions. XAI enables developers to improve AI models and allows for the study of user behaviors related to privacy and personal information usage. Explanations can be tailored to meet the needs of developers, users, and stakeholders, varying in transparency and depth.
Explainable AI has been an active field since the 1980s, with early applications in expert systems. While recent advancements in neural networks have led to significant success in various domains, their lack of explainability remains a challenge. AI systems need to be able to provide human-understandable explanations for their decisions, especially in domains like biology, chemistry, medicine, and drug design where data can be represented as graphs [
20]. Deep Tensor networks have been used to identify influential factors and construct interaction paths in these domains, incorporating knowledge from medical research. By annotating the interaction paths with supporting evidence from specific medical texts, the classification results can be explained to human users. Explanation is considered critical in all areas of AI, and approaches like combining logic-based systems with stochastic systems or employing transfer learning can enhance the interpretability and transferability of AI models.
The work [
21] presents a preliminary analysis of the global desirability of different forms of openness in AI development, such as open source, science, data, security practices, opportunities, and goals. Increased short-term openness is generally seen as socially beneficial. However, the strategic implications in the medium and long term are complex. Assessing long-term impacts depends on whether the goal is to benefit the current generation or promote the aggregate welfare of future generations. Openness about security measures and goals is acceptable, but other forms of openness (open source, science, and opportunity) may lead to increased competition, potentially compromising safety and efficiency. The global desirability of openness in AI development involves intricate trade-offs. One concern is that openness can worsen race dynamics, as competitors may take greater existential risks to achieve progress in developing advanced AI. Partial openness, allowing outsiders to contribute, is considered desirable.
The work [
22] builds a neural network to classify X-ray images of a hand as fractured or intact. If the network cannot give a confident answer for a case, a doctor can review it. This kind of neural network can help doctors who have to review large numbers of X-ray images.
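A minimal sketch of this deferral logic is shown below; it assumes a trained network that outputs a fracture probability per image, and the threshold values are purely illustrative.

```python
import numpy as np

def triage(fracture_probs, lower=0.2, upper=0.8):
    """Route predictions: confident cases are auto-labeled, uncertain cases
    (probability between the two thresholds) are referred to a doctor."""
    decisions = []
    for p in fracture_probs:
        if p >= upper:
            decisions.append("fractured")
        elif p <= lower:
            decisions.append("intact")
        else:
            decisions.append("refer to doctor")
    return decisions

# Hypothetical outputs of a trained network on four X-ray images.
print(triage(np.array([0.95, 0.05, 0.55, 0.30])))
```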
Inference. Transparency is a key feature of AI systems’ operation, as the lack of transparency aggravates the relevant social, legal, and technological challenges. These challenges are most pronounced in highly autonomous systems (e.g., neural networks, ML) that operate in a black box mode. Therefore, developing explainable AI (XAI) is an urgent problem. Such systems expose the chains of actions, judgements, operations, interim results, etc., behind their outputs, enabling a transition from a black box to a glass box. An alternative approach to transparency lies in the use of inherently transparent deterministic systems (based on rules, knowledge, etc.).
1.3. Determinacy
Rule-based systems are widely used to develop AI applications and systems in various domains. This is the most successful approach to artificial intelligence that paves the way to a relatively easy development of complex large-scale applications [
23]. Rule-based systems are a simplified form of artificial intelligence. This technology is based on facts and rules, yet is time consuming and difficult to implement. It ensures great performance in a specific domain, which is, however, limited by human-coded information. Rule-based systems help software developers and machines to tackle problems with multiple pattern nodes and solve tasks at a higher level of abstraction using human-like thinking and reasoning capabilities. Even though rule-based systems have several limitations, there is no doubt that with ever-evolving technology they will also evolve to be more flexible, effective, and suitable.
Software Variability Modeling (SVM) is a key challenge in software product lines, especially in configurable software product lines (CSPL), which require strict SVM. CSPL applications include dynamic SPL (DSPL), service-oriented SPL, and autonomous or comprehensive systems [
24]. Knowledge-based configuration is an accepted variability modeling method that aims to enable automatic adjustment of physical products. Conceptual clarity of modeling (e.g., availability of both taxonomic and compositional relations) can be useful. A conceptual basis to ensure multiple representations (e.g., graphical and textual) is also important. Application of the ideas and the expertise embodied in these ideas might promote the development of modeling support for software product lines.
Methods for the analysis and synthesis of integrated knowledge representations (KRs) may use atomic models of knowledge representation, including both an elementary data structure and knowledge processing rules stored in the form of machine-readable data and/or program instructions. These methods are implemented in proprietary software products [25]. One or more knowledge processing rules are used to analyze a complex input KR, decomposing its concepts and/or conceptual relationships into elementary concepts and/or conceptual relationships for inclusion in the elementary data structure. One or more knowledge processing rules may then be applied to synthesize an output KR from the stored elementary data structures according to contextual information.
An expert system is a computer program that provides expert-level solutions to important problems and is heuristic, transparent, and flexible [
26]. Rule-based systems illustrate the state of the art in building expert systems and highlight the main issues involved. In a rule-based system, much of the knowledge is represented as rules, that is, as conditional sentences relating statements of facts to one another. An expert system requires a knowledge base and an inference procedure. The knowledge base is a set of rules and facts covering specialized knowledge of the subject, as well as some general knowledge about the relevant domain. The inference procedure is a large set of functions that control the interaction and update the current state of knowledge about the case at hand.
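The interplay between a knowledge base and an inference procedure can be illustrated with a minimal forward-chaining sketch (the facts and rules below are invented for illustration and do not come from [26]).

```python
# Knowledge base: facts plus condition -> conclusion rules.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "suspected_flu"),
    ({"suspected_flu"}, "recommend_rest"),
]

# Inference procedure: keep firing rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)      # the rule fires and adds a new fact
            changed = True

print(facts)   # {'fever', 'cough', 'suspected_flu', 'recommend_rest'}
```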
The problem of choosing a knowledge representation model and processing techniques can be formulated as follows: how to represent the knowledge structure based on sources such as professional literature and the knowledge of highly skilled professionals (i.e., how to choose a knowledge representation model), so that its automated processing enables a problem in the specific area to be solved and the desired results to be achieved. Software developers often try to describe complex knowledge domains that deal with complex informative tasks using steady regular patterns that are user-friendly yet too primitive to represent the diversity of semantic aspects in each specific domain. An article [
27] examines the set of requirements for a knowledge representation model in smart systems, offers extended semantic networks, and demonstrates that the proposed model meets the above set of requirements.
Ontologies have gained popularity and recognition in the semantic web due to their widespread use in Internet applications. Ontologies are often regarded as an excellent source of semantics and interoperability in all artificial intelligence systems. The exponential growth of unstructured data on the Internet has made automatic ontology construction from unstructured text the most prominent area of research. Methodologies [28] are expected to be developed in many fields (ML, text analysis, knowledge representation and reasoning, information retrieval, and natural language processing) to automate, at least partially, the process of constructing ontologies from unstructured text.
Knowledge-based design is an advanced form of automated design that enables informed decision-making and a fully digital representation of the product lifecycle [
29]. It involves applying rules to inter-parametric relationships for configuring products at a lower, parametric level. Using indicators, rule-based configuration can construct finished products. Tools such as knowledge-based process models and product catalogs support this process. However, effective methods for routine work with computer knowledge are still lacking. Industrial applications of knowledge-based design offer various benefits.
Inductive empirical systems simulate the human brain and operate in a “black box” mode, while deductive analytical systems use transparent formalized models and algorithms to represent knowledge. Both systems solve intellectual problems, but the solutions may lead to the development of alternative artificial intelligence systems [
30]. The authors propose principles for AI system development, including the exclusion of black box technologies, the use of data conversion systems, and direct mathematical modeling. The system consists of a simulator module, an ontological module that extracts structured functional links, and interfaces for generating custom knowledge representations. This approach forms the methodological basis of an AI e-learning platform.
Inference. In contrast to widely known machine-learning neural networks, which handle complex redundant data and are easy to implement and use due to their highly autonomous nature, existing rule-based systems are methodologically simpler and more transparent, but are time-consuming and hard to implement. That being said, they are capable of addressing problems with multiple pattern nodes by using databases, knowledge bases, logical inference models, etc., and of treating such problems at a higher abstraction level by drawing on human intellect through user interaction. Despite their limitations, these systems are rapidly becoming more flexible and efficient.
1.4. Configurability
Mass customization involves providing individually designed products and services through flexible and integrated processes [
31]. It is seen as a competitive strategy adopted by many companies. This paper examines scenario-based rules and methods for supporting future-oriented system architectures in mass customization, initially developed for medical visualization equipment. These architectures must accommodate space variability (creating unique products for specific clients) and time variability (meeting new requirements). Two key considerations are predicting future client needs and ensuring efficient response to changes. Knowledge-based configuration systems, utilizing declarative knowledge representation and smart search methods, are widely used by major vendors for solving configuration problems. These systems offer benefits in terms of adaptable configuration rules and efficient problem-solving algorithms.
The development of big data and cyberphysical systems has increased the demand for product design. Digital product design [
32] incorporates advanced technologies like geometric modeling, virtual reality, and multi-object optimization. Intelligent design methods include analyzing customer requirements, product family design, modular design, and design variations. Trends in intelligent user products involve developing smart products based on big data and specialized design tools. Intelligent custom design enables dynamic response resolution and intelligent customization of user requirements. Future customized equipment design relies on a fusion-mapping model and swarm intelligence to enhance design intelligence. Knowledge-based intelligent design utilizes feedback features and scene-based design. With cloud databases and event-condition-action rules, intelligent design becomes more requirement-centered, knowledge-diversified, and efficient.
Parametric design is essentially a generative design [
33] created through the interplay of computation and mathematics. Rather than changing the shape directly, a process-oriented method is used to modify the parameters of the components that comprise the shape, resulting in a new look. As the consequences of manipulations are immediately evident, the development of product families and new product options can be rapid. With the help of parametric design, customizing products and increasing customer satisfaction become more efficient, as several successful projects in the automotive industry, aircraft manufacturing, architecture, jewellery manufacturing, and other industries have already shown. Product customization is important for higher customer satisfaction. For example, Deloitte discovered that every fifth customer is ready to pay 20% more for a unique personalized product. That is why customization has become a business strategy in many fields, from footwear to the automotive industry. Parametric or generative design (also known as algorithmic design) has great future potential, because it enables designers to offer unique personalized products to customers. The results are algorithm-generated. In contrast to conventional design methods, all design processes are controllable, with the consequences of each manipulation immediately evident. Such a new approach accelerates the release of multiple product versions.
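The following sketch illustrates the parametric idea with an invented example: a bottle profile is generated entirely from a few parameters, so changing one parameter regenerates a whole new product variant.

```python
def bottle_profile(height=200.0, base_radius=30.0, neck_radius=12.0,
                   shoulder=0.75, n_points=50):
    """Generate a 2D profile (z, r) of a bottle from a handful of parameters."""
    profile = []
    for i in range(n_points + 1):
        z = height * i / n_points
        t = z / height
        if t < shoulder:                       # cylindrical body
            r = base_radius
        else:                                  # smooth taper toward the neck
            s = (t - shoulder) / (1 - shoulder)
            r = base_radius + (neck_radius - base_radius) * (3 * s**2 - 2 * s**3)
        profile.append((z, r))
    return profile

# Two product variants produced from the same parametric model.
standard = bottle_profile()
slim = bottle_profile(base_radius=24.0, neck_radius=10.0)
```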
In the era of Big Data, mass customization (MC) systems are faced with the need to integrate mass customization and the social IoT in order to effectively connect customers with enterprises. This is necessary not only to allow customers to participate in the MC production process from beginning to end, but also to give enterprises the ability to control all communications within the information system. The paper [
34] describes the architecture of the proposed system from an organizational and technological point of view and discusses the key problems that the mass customization–social IoT system faces. These include: (1) enabling convenient information queries and a clear understanding of the user’s intentions; (2) anticipating the changing relationships between different technical fields and helping the enterprise’s scientific staff find technical knowledge; and (3) combining deep learning technology and digital redundancy technology to better maintain system health. Additional issues include data management, knowledge discovery, and human–computer interaction, such as data quality management, small data samples, lack of dynamic learning, time wasting, and task scheduling.
Knowledge-based configuration systems are industrially available [
35]. Major vendors of configuration systems rely on a certain form of declarative knowledge representation and smart search methods to solve the major configuration problem, due to the inherent benefits of this technology. On the one hand, changes in the business logic (configuration rules) are easier due to the declarative and modular nature of the knowledge base. On the other hand, highly optimized, domain-agnostic problem-solving algorithms are available to build acceptable configurations. Development is still in progress due to the ever-emerging challenges of smart system configuration in our increasingly automated and interconnected world: web configurators are becoming available to large, diverse groups of users; representations of custom products require that companies be integrated into the delivery chain; and the configuration and reconfiguration of services is becoming an increasingly serious problem.
Mass customization is designed to meet individual requirements and is therefore a method for attracting and retaining customers, which is a key issue in the design industry [
36]. The development of computer-aided design opens up new opportunities for high-speed planning of custom-made products at an efficiency comparable to mass production. Automated design is based on the reuse of product and process knowledge. Ontologies have shown themselves to be an acceptable, highly aggregated representation of knowledge in the field of engineering design. While product and process knowledge at other stages of the life cycle is covered by various approaches, the product planning process and the product tailoring process are missing, leading to interruptions or additional iterations in computer-aided design. Therefore, a suitable representation of knowledge adapted for automation is still lacking.
Many companies offer websites that enable customers to design their own customized products, which the manufacturer then produces [37]. The economic value of self-designed products developed using mass customization (MC) toolkits comes from two factors: a close fit to individual preferences and low design effort. The authors suggest a third factor, namely the creator’s involvement in the product design. Through their research, the authors obtained experimental evidence that the “I designed it myself” effect creates economic value for the consumer. Regardless of other factors, self-designed products generate significantly higher willingness to pay. This effect is mediated by the sense of accomplishment elicited by the outcome of the process, as well as by the perceived contribution of the individual to the self-design process. These findings are important for MC companies; it is not enough to simply design MC toolkits in a way that maximizes preference fit and minimizes design effort. To fully capture the value of MC, toolkits should also evoke the feeling of “I came up with it myself”.
Current product design research focuses on how to transform a predefined set of components into a valid set of product structures. As configurable products increase in scale and complexity, there is an increasing interdependence between customer requirements and product structure. As a result, existing product structures cannot meet individual customer requirements, so there is a need for product variants. The purpose of this work [38] is to build a bridge between customer requirements and product structure in order to ensure rapid planning according to demand. First, multi-hierarchical design models of the configured product are created, including the customer requirements model, the technical requirements model, and the product structure model. Then, the mapping between the multi-hierarchical models is solved using a fuzzy analytic hierarchy process (FAHP) and a multi-level matching algorithm. Finally, the optimal structure for the customer’s demands is obtained through calculations of Euclidean distance and similarity to other cases.
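As an illustration of the final matching step (a simplified sketch with invented numbers, not the data or weights of [38]), a customer requirement vector can be compared with candidate product structures using a weighted Euclidean distance, and the most similar variant selected.

```python
import numpy as np

# Customer requirements and candidate configurations on a common normalized scale.
requirement = np.array([0.8, 0.3, 0.6])      # e.g., capacity, cost, speed
weights     = np.array([0.5, 0.2, 0.3])      # FAHP-style importance weights (illustrative)
candidates = {
    "variant_A": np.array([0.7, 0.4, 0.5]),
    "variant_B": np.array([0.9, 0.6, 0.9]),
    "variant_C": np.array([0.4, 0.2, 0.6]),
}

def similarity(req, cand, w):
    """Similarity derived from a weighted Euclidean distance (1 means identical)."""
    dist = np.sqrt(np.sum(w * (req - cand) ** 2))
    return 1.0 / (1.0 + dist)

best = max(candidates, key=lambda k: similarity(requirement, candidates[k], weights))
print(best)
```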
Companies need to adapt to changing market trends by providing customers with a diverse range of product and service offerings, covering various combinations of the two [
39]. In order to achieve this, a shared knowledge model is proposed for customizing commercial offerings. This involves evaluating product and service configurations and developing a comprehensive model that encompasses the entire range of offerings. A knowledge-based model is then defined, demonstrating its relevance to use cases in the secondary and tertiary sectors.
Mass customization is a business strategy that aims to satisfy individual customers’ needs with near-mass-production efficiency. Mass customization information systems in business provide original and innovative research on IT systems for mass customization. It is a wide-ranging reference collection of chapters describing the solutions, tools, and concepts needed for successful realization of these systems. A knowledge-based configuration includes knowledge representation formalisms to capture sophisticated product models and reasoning methods in order to provide intelligent interaction with the user. Dedicated research that provides a better understanding of knowledge-based configuration offers a toolkit [
40,
41] that extends the boundaries available to configurators. State-of-the-art approaches to gaining configuration knowledge entail testing, adjustment, redundancy detection, and conflict management. Business processes, mass customized markets, product modeling, and supply chain management, required to produce customized products, are being explored. The commercial benefits of knowledge-based technologies can be applied to various business sectors, from services to industrial machines.
Some recent web application offerings are focused on providing advanced search services through virtual stores. In this context, [
42] proposes an advanced type of software application that simulates a seller–buyer dialogue to dynamically customize a product to meet specific consumer needs. The proposed approach is based on a shared knowledge model that uses artificial intelligence and knowledge-based methods to simulate the customization process.
The trend of product diversification and personalization offers companies the opportunity to increase income through higher prices and market share. However, there is a risk of the “diversity paradox”, where overwhelming customization options lead to decreased sales. A sales configurator is an application aimed at helping customers choose the best solution for their needs. Research on the characteristics of effective configurators is limited, and proposed solutions lack empirical study and psychometric measurement. The study [43] identified key capabilities that sales configurators need in order to avoid the diversity paradox, including targeted navigation, fluid navigation, simple comparison, benefit and cost information, and user-friendly product descriptions. This tool can serve as a diagnostic and benchmarking instrument for companies looking to assess and improve their sales configurators.
Inference. Configurable deterministic AI systems (functioning as a type of rule-based system) have emerged in response to the demand for mass customization, which requires high flexibility and process integration. Such systems are widely used in industrial and commercial design and have proven highly effective due, inter alia, to their use of the intellectual potential of designers and other experts and professionals. It should be noted that the potential application of configurable systems in sectors such as science and education is unfairly ignored. However, such systems could be applied efficiently, for example, in the mass development and production of training content, in experiment planning, and in the verification of results.
1.5. Modeling and Imitating
The paper [
44] examines the role of artificial intelligence in modeling and simulation (M&S). M&S involves some fundamental and highly sophisticated problems that can be addressed using AI methods and concepts. Several key problem areas (e.g., verification and validation, reuse and composability, distributed modeling and system integration, service-oriented architecture and the semantic web, ontologies, restricted natural language, and genetic algorithms) have been explored. A special-purpose methodology is proposed to enable agent involvement in modeling and simulation based on their activity monitoring. The feasibility of endomorphic modeling, an extended capability that an agent must have if it is to imitate all human abilities in M&S, has also been considered. Since this ability implies an infinite regress, with models indefinitely containing models of themselves, it is a limited capability. This being said, it can provide a critical understanding of the competitive coevolutionary behavior of humans or higher primates, motivating more intensive studies of model nesting depth, that is, the extent to which an endomorphic agent is capable of mobilizing its mental resources to create and use models of its own “brain” and of the “brains” of other agents. The mystery of such endomorphic agents is hindering further studies in AI, M&S, and related domains, such as cognitive science and philosophy.
With the rapid advancement of AI technologies, the field of modeling and simulation has also seen significant progress, particularly in the area of multi-agent modeling and simulation (MAMS). The paper [
45] provides an overview of MAMS, including its concept, technical advantages, research steps, and current status. It discusses hybrid modeling and simulation that combines multi-agent and system dynamics, modeling and simulation of multi-agent reinforcement learning, and modeling and simulation of large-scale multi-agent systems. The study highlights the benefits of multi-agent simulation technology in terms of descriptive ability, emergence analysis, organizational framework, autonomic decision-making, interaction ability, distributed simulation, and model reuse. Furthermore, it presents the applications of MAMS in social, economic, and military fields. The study identifies several challenges in multi-agent reinforcement learning, such as convergence and stability, large state space, and poor knowledge transfer. The current status of large-scale MAMS is also summarized, including the solutions of reorganizing the model at the software level and utilizing distributed parallel computing to enhance computational efficiency.
Simulation studies are increasingly used in various scientific fields, including production engineering. Implementing computer-based solutions in production processes helps reduce costs associated with planning errors and streamlines the development of manufacturing plans for new products. This is particularly important for manufacturing companies aiming to optimize inventory levels while ensuring uninterrupted production. In the work [
46], computer simulation models are proposed to study different production scenarios. The analysis reveals that increasing the batch sizes of input components leads to a decrease in production efficiency. Simulation modeling serves as a valuable tool for assessing process performance and visually representing various assumptions. The databases created for these simulation models can serve as a foundation for real process development. However, simulation tools do not replace decision-making by managers. Simulation experiments provide valuable data and information about processes, assisting in making informed decisions. In this study, computer simulation was used as a substitute for real experiments, tailored to the research requirements. The use of simulation models enables preliminary analysis of process development and validation of proposed changes under specified conditions. Simulations offer a means to explore specific scenarios and evaluate potential solutions without the risks associated with testing assumptions in real-world settings.
Process analysis provides valuable information on events stored in information systems. Analysis of event data can reveal many performance and compliance issues, as well as insights into how to improve productivity. Process analysis techniques tend to be backward-looking and do little to promote forward-looking approaches, as potential interventions are not evaluated. System dynamics complements this backward-looking analysis by identifying the relationships between different factors at a higher level of abstraction and using modeling to predict the outcomes of process improvement actions. The work [
47] proposes a new approach to the development of system dynamics models using event data. This approach extracts various performance parameters from the current state of the process using historical performance data, and provides an interactive performance modeling platform in the form of system dynamics models that are able to answer “what if” questions. Experiments with event logs that include various parameter relationships show that this approach yields robust models that capture the underlying relationships.
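A toy stock-and-flow sketch of such a “what if” question is given below; the arrival rate and capacities are invented, whereas in the approach described above they would be estimated from event data.

```python
def simulate_backlog(arrival_rate, capacity, weeks=12, backlog0=100.0):
    """Simple system dynamics stock: a backlog grows with arrivals and shrinks with capacity."""
    backlog, history = backlog0, []
    for _ in range(weeks):
        processed = min(capacity, backlog + arrival_rate)   # cannot process more than exists
        backlog = backlog + arrival_rate - processed
        history.append(backlog)
    return history

baseline = simulate_backlog(arrival_rate=50, capacity=55)
what_if  = simulate_backlog(arrival_rate=50, capacity=70)   # "what if we add capacity?"
print(baseline[-1], what_if[-1])
```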
Another study [
48] discusses the validation and verification of imitation models. It proposes four different approaches to defining whether or not a model is valid; presents two paradigms to associate validation and verification with the model development process; defines various validation techniques, discusses the validity of the conceptual model, model verification, operational validity, and data validity; provides a method to document the results; presents a recommended model verification procedure; and, finally, outlines accreditation. The study argues that despite the availability of literature on validation and verification, there is no set of special-purpose tests that might be easily applied to establish whether a model is “correct”. Moreover, there is no algorithm to define which methods or procedures should be used. Each new modeling project is a unique new challenge.
The paper [
49] provides an overview of a Simulation Model Development Environment (SMDE). Building the environment suggests that a minimal toolkit should be developed, including a premodel manager, help manager, model generator, model analyzer, model translator, and model verifier. The model generator is the most important tool. The automation-based software paradigm has been achieved largely through the development of the DOMINO-based visual simulation support environment (DOMINO is a multifaceted conceptual framework for visual imitation modeling). A comprehensive set of requirements for an SMDE is a major technological challenge in terms of independence from the modeling domain in discrete-event modeling. The visual simulation support environment prototype implements the automation-based software paradigm and also enables animation of the imitation model. The development of a conceptual basis for visual imitation modeling is one of the major challenges here.
The paper [
50] introduces a flexible simulation model generator for discrete operating systems. It introduces two concepts of discrete system modeling, the operating network and operating equations. These tools are used to describe the structure of the simulated system. The model generator uses a batch input file containing a list of working equations and other system specifications to generate a simulation code.
The paper [
51] presents the conceptual design and implementation of a knowledge-based interactive simulation model prototype. The system manages several components, including a model base, a knowledge base, and a database. Particular attention is paid to integrating the model base and the knowledge base. This combination of numerical and knowledge representation components is one of the main strengths of the system. A frame-based approach was chosen for the semantic representation of models. The system is designed to free the scientific expert from the details of computer science and to allow them to focus on real modeling tasks.
The work [
52] suggests solving differential equations on the basis of stochastic neural network models from AI theory. Multilayer neural networks with an appropriate number of layers are used to solve the differential equations.
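A common way to set up such a problem (a general trial-solution formulation, not necessarily the exact scheme of [52]) is to embed the initial condition in a trial solution and train the network on the residual of the equation at collocation points:
\[
\frac{dy}{dx} = f(x, y), \qquad y(x_0) = A, \qquad \hat{y}(x) = A + (x - x_0)\,N(x;\theta),
\]
\[
\min_{\theta} \sum_{i=1}^{m} \left( \frac{d\hat{y}}{dx}\bigg|_{x_i} - f\big(x_i, \hat{y}(x_i)\big) \right)^{2},
\]
where \(N(x;\theta)\) is a multilayer neural network and the \(x_i\) are collocation points; the trial solution satisfies the initial condition by construction, and training reduces to minimizing the residual of the differential equation.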
Inference. Mathematical modeling and simulation (M&S) and AI are closely related. On the one hand, AI system architectures and algorithms are built on the basis of various models, from the simplest computational and logic models, through multi-agent modeling and simulation (including endomorphic models), to semantic, ontological, and other models. Moreover, imitation models can be used as data sources to validate and verify algorithms and systems. On the other hand, AI serves as a basis for building automated model development and validation environments (generators), including dedicated tools such as premodel managers, help managers, model generators, model analyzers, model translators, and model verifiers.
1.6. Complexity
The paper [
53] proposes a hierarchical system framework that uses iHLBA for job scheduling in a grid environment. iHLBA assigns the most suitable resource to each specific job and compares cluster loads with an adaptive balance threshold to balance the system load. Local and global update rules are applied to obtain the updated status of resources and to define the balance threshold, thus making it possible to assign the next job to the most suitable resource. The local update rule is responsible for updating the status of the cluster and the resource that was assigned to the job; the job scheduler then uses the new status to assign the next job. The global update rule updates the status of every cluster and resource in the grid system after the resources complete their jobs. This provides the job scheduler with the most up-to-date information on all clusters and resources. Experimental results show that iHLBA is capable of balancing the total system load and improving performance by choosing the best resource for each specific job based on the updated state of the system.
Service-oriented computing has created a new method of service delivery based on pay-as-you-go models in which users consume services based on their Quality of Service (QoS) requirements. In these pay-as-you-go models, users pay for services based on usage and compliance with QoS limits; processing time and cost are two common QoS requirements. Therefore, to create effective planning maps, it is necessary to take into account the prices of services when optimizing performance indicators. The work [
54] proposed a heterogeneous budget-constrained scheduling (HBCS) algorithm that guarantees an execution cost within the budget specified by the user and minimizes the execution time of the user application. Their results show that this algorithm provides faster execution, a guaranteed application cost, and lower time complexity compared with other existing algorithms under a limited budget. The improvements are especially significant for more heterogeneous systems, where a 30% reduction in execution time was achieved without an increase in budget.
Effective application planning is critical to achieving high performance in heterogeneous computing environments. The application scheduling problem is NP-complete in the general case and also in some limited cases. This important issue has been extensively studied, and the various algorithms proposed in the literature are mainly intended for systems with homogeneous processors. Although several algorithms for heterogeneous processors are described in the literature, they usually require more planning and do not offer reductions in planning costs. The article [
55] presents two scheduling algorithms, heterogeneous earliest-finish-time (HEFT) and critical-path-on-a-processor (CPOP), for a limited number of heterogeneous processors, aiming at high performance and fast scheduling. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor that minimizes its earliest finish time using an insertion-based approach. The CPOP algorithm uses the sum of the upward and downward rank values to determine task priorities. In the processor selection phase, critical-path tasks are assigned to the processor that minimizes the overall execution time of the critical path. To provide a reliable and unbiased comparison with related work, a parametric graph generator was developed to generate weighted directed acyclic graphs with different characteristics.
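The core of HEFT can be sketched in a few lines (a simplified version without the insertion-based slot search, on an invented four-task graph with two processors):

```python
# Computation costs on processors P0/P1, successor lists, and communication costs.
comp = {"A": [14, 16], "B": [13, 19], "C": [11, 13], "D": [7, 5]}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
comm = {("A", "B"): 9, ("A", "C"): 12, ("B", "D"): 6, ("C", "D"): 7}

ranks = {}
def upward_rank(task):
    """Average cost of the task plus the most expensive path to an exit task."""
    if task not in ranks:
        avg = sum(comp[task]) / len(comp[task])
        ranks[task] = avg + max(
            (comm[(task, s)] + upward_rank(s) for s in succ[task]), default=0.0)
    return ranks[task]

order = sorted(comp, key=upward_rank, reverse=True)   # schedule by decreasing rank

proc_free = [0.0, 0.0]            # time at which each processor becomes free
finish, where = {}, {}
for t in order:
    best = None
    for p in range(2):
        # The task may start once its predecessors' results have arrived.
        ready = max((finish[d] + (comm[(d, t)] if where[d] != p else 0)
                     for d in comp if t in succ[d]), default=0.0)
        eft = max(ready, proc_free[p]) + comp[t][p]   # earliest finish time on p
        if best is None or eft < best[0]:
            best = (eft, p)
    finish[t], where[t] = best
    proc_free[best[1]] = best[0]

print(where, finish)
```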
Estimation of distribution algorithms (EDAs) are well-known stochastic optimization methods. Average time complexity is an important criterion for measuring the performance of stochastic algorithms. Various types of EDA have been proposed in recent years, but theoretical studies of the time complexity of these algorithms are relatively rare. The work [56] analyzes the time complexity of two early versions of EDA, the univariate marginal distribution algorithm (UMDA) and an incremental UMDA (IUMDA). It gives the first rigorous analysis of the mean first hitting time (FHT) of UMDA and IUMDA, covering both polynomial and exponential cases. The analysis shows that UMDA (IUMDA) exhibits polynomial-time behavior on the pseudo-modular function and that IUMDA can spend an exponential number of generations to find a global optimum.
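A minimal UMDA sketch on the OneMax problem (illustrative parameters, unrelated to the functions analyzed in [56]) shows the basic loop: sample from univariate marginals, select the best individuals, and re-estimate the marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pop_size, parents, generations = 30, 100, 20, 60
p = np.full(n, 0.5)                                    # univariate marginal probabilities

for _ in range(generations):
    pop = (rng.random((pop_size, n)) < p).astype(int)  # sample a population bitwise
    fitness = pop.sum(axis=1)                          # OneMax: count the ones
    best = pop[np.argsort(fitness)[-parents:]]         # truncation selection
    p = best.mean(axis=0)                              # re-estimate the marginals
    p = np.clip(p, 1.0 / n, 1.0 - 1.0 / n)             # keep probabilities away from 0 and 1

print(int(fitness.max()), "out of", n)
```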
The paper [
57] proposes a method for deriving conditions on goals that guarantee that a goal is sufficiently coarse-grained to justify parallel evaluation. This method is powerful enough to reason about divide-and-conquer programs and, in the case of fast sorting, for example, concludes that a fast-sorting goal has a time complexity greater than 64 resolution steps (the creation threshold) if the input list is 10 elements long or longer. This method has been proven correct, can be implemented directly, has been shown to be useful on a parallel machine, and, unlike much previous work on analyzing the time complexity of logic programs, does not require a complex solution to the problem.
In the field of optimization, the speed of convergence is a crucial measure of efficiency. Many accelerated schemes have been developed, but they often lack intuitive explanations and rely on complex arguments from areas like control theory or differential equations. However, a study [
58] offers a preliminary explanation of optimization algorithms using the theory of integration methods, providing a clear and theoretically grounded analysis. It shows that optimization schemes can be seen as special cases of integration methods applied to the gradient flow. This study explains the origin of acceleration using standard arguments. Fast methods typically require additional parameters that are difficult to estimate and are specific to a particular problem setup. The study also discusses a new approach to acceleration using general arguments from numerical analysis, where sequences are accelerated by constructing another sequence with a higher convergence rate. These methods can be combined with iterative algorithms to speed up convergence in most cases. However, extrapolation schemes are not widely adopted in practice due to the lack of theoretical guarantees and instability concerns.
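The basic correspondence can be summarized as follows: plain gradient descent is the explicit Euler discretization of the gradient flow, and, in this view, accelerated methods correspond to more sophisticated integration schemes.
\[
\dot{x}(t) = -\nabla f\big(x(t)\big)
\quad\Longrightarrow\quad
x_{k+1} = x_k - h\,\nabla f(x_k)
\]
for an explicit Euler step of size \(h\).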
Another study [
59] promotes the fast approximation of solutions to optimization problems that are constrained by iteratively solved traffic simulations. Given an objective function, a set of candidate decision variables, and a “black box” transport simulation that is solved by iteratively reaching a (deterministic or stochastic) equilibrium, the proposed method approximates the best decision variable from the set of candidates without having to run the transport simulation to convergence for each individual candidate. This method can be embedded in a broad class of optimization algorithms or search heuristics that implement the following logic: (1) generate variations of the given, currently best decision variable; (2) identify one of these variations as the new, currently best decision variable; and (3) repeat steps (1) and (2) until no further improvement is achieved. Probabilistic and asymptotic efficiency bounds are established and used to formulate efficient heuristics adapted to limited computational budgets. The effectiveness of the method was confirmed by a comprehensive simulation study of a non-trivial road pricing problem. The method is compatible with a wide range of simulators and requires minimal parameterization.
Despite advances in computer capacity, the enormous computational cost of running complex engineering simulations makes it impractical to rely exclusively on simulation for the purpose of structural health monitoring. To cut costs, surrogate models, also known as metamodels, are constructed and then used in place of the actual imitation models. In a study by [
60], structural damage was detected using 10 popular metamodeling techniques, including Back-Propagation Neural Networks (BPNN), Least Square Support Vector Machines (LS-SVMs), Adaptive Neural-Fuzzy Inference System (ANFIS), Radial Basis Function Neural Network (RBFN), Large Margin Nearest Neighbors (LMNN), Extreme Learning Machine (ELM), Gaussian Process (GP), Multivariate Adaptive Regression Spline (MARS), Random Forests, and Kriging. The results indicate that Kriging and LS-SVM models have better performance in predicting the location/severity of damage compared to other methods. A properly trained surrogate model can be used to efficiently reduce the computational cost of model updating during the optimization process.
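The surrogate idea itself can be sketched in a few lines: fit a cheap radial-basis-function model to a handful of samples of an expensive simulation and query the surrogate instead (the “expensive” function below is just a stand-in; a Kriging or LS-SVM model would play the same role in the study above).

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly structural simulation."""
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(1)
X_train = np.sort(rng.uniform(-2, 2, 12))       # a small design of experiments
y_train = expensive_simulation(X_train)

def rbf_fit(X, y, length=0.5):
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * length ** 2))
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X_new, X, coef, length=0.5):
    K = np.exp(-((X_new[:, None] - X[None, :]) ** 2) / (2 * length ** 2))
    return K @ coef

coef = rbf_fit(X_train, y_train)
print(rbf_predict(np.linspace(-2, 2, 5), X_train, coef))   # cheap surrogate predictions
```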
Surrogate-assisted evolutionary computation uses efficient computational models, often referred to as surrogates or metamodels, to approximate the fitness function in evolutionary algorithms [61]. Although many proposed surrogate-assisted evolutionary algorithms have proven to be more efficient than their non-surrogate counterparts, rigorous comparative studies of such algorithms have not been conducted. This gap can be explained by two factors. First, there is no generally accepted efficiency measure for comparing surrogate-assisted evolutionary algorithms. Second, there are no benchmark problems specifically designed for surrogate-assisted evolutionary algorithms, which mostly use standard test functions or specific applications for empirical evaluation. Expensive optimization tasks (for example, aerodynamic structure optimization) are time-consuming. In addition, simulations can be unstable, resulting in impractical isolated solutions. Finally, the design space is extremely large, and the geometric representation can be critical for efficient design optimization.
In the article [
62], global optimization problems and their numerical solutions are investigated. These problems often involve computationally intensive tasks due to the presence of multi-extremal, non-differentiable objective functions typically provided as black boxes. The study employs a deterministic algorithm specifically designed for global extremum search, distinct from iterative or nature-inspired approaches. Computational rules for one-dimensional problems and a nested optimization scheme for multidimensional problems are presented. The complexity of solving global optimization problems primarily stems from numerous local extrema. To address this, ML methods are utilized to identify areas of attraction for local minima. By employing local optimization algorithms in these selected areas, the convergence of the global search is accelerated by reducing the number of attempts near local minima. Computational experiments conducted on several hundred global optimization problems of varying dimensions confirm the accelerated convergence achieved in terms of the number of search attempts required to attain a given accuracy.
In another study [
63], a method is proposed to enhance the search process of evolutionary multi-objective optimization (EMO) through the use of an estimated point of convergence. The study presents an approach that identifies promising regions of Pareto solutions in both the objective space and the parameter space. These regions are utilized to construct a set of moving vectors, from which the non-dominated Pareto point is estimated. Various methods are employed to construct these moving vectors and facilitate the EMO search. The proposed method proves effective in improving the EMO search, particularly in cases where the Pareto improvement landscape exhibits a unimodal distribution or a randomly distributed multimodal characteristic in the parameter space. The approach not only generates a greater number of Pareto solutions compared to conventional methods such as the non-dominated sorting genetic algorithm II (NSGA-II), but also enhances the diversity of the obtained Pareto solutions.
Stochastic feedback control [
64] accelerates the convergence of the simulated annealing algorithm, run on parallel processors to solve combinatorial optimization problems, in combination with a probability measure of quality (PMQ). The PMQ is used to generate an error signal for a closed control loop. This signal controls the search process by modulating the temperature parameter. Such a scheme increases the stationary probability of globally optimal solutions. Other aspects of control theory are also described, including the system gain and its influence on system performance.
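A minimal sketch of such a closed-loop scheme is given below; the use of the acceptance rate as the quality signal is an assumption made for illustration and is not the PMQ construction of [64].

```python
import math
import random

def feedback_annealing(objective, neighbour, x0, steps=5000,
                       target_accept=0.3, gain=0.05, T=1.0):
    """Simulated annealing whose temperature is modulated by a feedback
    signal (here: deviation of the acceptance rate from a target, a
    simplified stand-in for a probability measure of quality)."""
    x, fx = x0, objective(x0)
    accepted = 0
    for k in range(1, steps + 1):
        y = neighbour(x)
        fy = objective(y)
        if fy < fx or random.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            accepted += 1
        if k % 100 == 0:                      # closed-loop temperature update
            error = accepted / 100 - target_accept
            T = max(1e-6, T * math.exp(gain * error))
            accepted = 0
    return x, fx
```

In this sketch, a too-low acceptance rate raises the temperature and a too-high rate lowers it, which is one way an error signal can modulate the annealing schedule.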
Deep learning applications require global optimization of non-convex objective functions with multiple local minima. The same problem is often encountered in physical simulations and can be addressed using Langevin dynamics with simulated annealing, a well-established approach to minimizing multi-particle potentials. This analogy provides useful insight for non-convex stochastic optimization in ML. Integrating the discretized Langevin equation yields a sequential update rule equivalent to the well-known momentum optimization algorithm. A study by [
65] shows that a gradual decrease in the momentum coefficient from an initial value close to unity, by analogy with annealing, accelerates convergence of the optimization.
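The following sketch illustrates a momentum-style update with an annealed momentum coefficient and noise term; the specific schedules and parameter values are illustrative assumptions rather than the exact rule derived in [65].

```python
import numpy as np

def langevin_momentum_descent(grad, x0, steps=2000, lr=0.01,
                              mu_start=0.99, mu_end=0.5, noise=1e-3):
    """Momentum-style update suggested by a discretized Langevin equation.
    Gradually lowering the momentum coefficient mu (and the noise) plays
    the role of an annealing schedule."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for k in range(steps):
        mu = mu_start + (mu_end - mu_start) * k / steps   # anneal momentum
        sigma = noise * (1 - k / steps)                    # anneal noise
        v = mu * v - lr * grad(x) + sigma * rng.standard_normal(x.shape)
        x = x + v
    return x
```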
Inference. The speed of the applied algorithms (coupled with the speed of the underlying hardware) is among the key performance factors of AI systems. This speed can be controlled by compute-aware balancing of system resources and/or by applying suitable acceleration techniques. Widely applied acceleration techniques include modifications of local and global optimization models, surrogate- or metamodel-assisted evolutionary and stochastic methods, and trained neural networks that control dynamic computational models.
1.7. Education and Science
In science, technology, engineering, art, and mathematics (STEAM) education, AI analytics is useful as an educational framework for developing students’ thinking skills through AI-supported, human-centered learning that builds knowledge and competence. The paper [
66] shows how STEAM students who are not computer science majors can use AI for predictive modeling. To help STEAM students understand how AI can support human-centered reasoning, two AI-based approaches are illustrated: a naive Bayesian approach for supervised ML of labeled datasets, and a semi-supervised Bayesian approach applied to the same dataset. These AI-based approaches enable controlled experiments in which selected parameters are held constant while others are modified to simulate hypothetical “what-if” scenarios. By applying AI to discursive thinking, it is possible to develop AI thinking in STEAM students, thereby increasing their AI literacy, which, in turn, allows them to ask more precise questions when solving problems.
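As a minimal illustration of such a "what-if" experiment, a supervised naive Bayesian classifier can be trained on a small dataset and queried while one parameter is held constant; the features and data below are invented for illustration and are not taken from [66].

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical labeled dataset: [study_hours, attendance_rate] -> pass/fail.
X = np.array([[2, 0.6], [8, 0.9], [1, 0.4], [6, 0.8],
              [4, 0.7], [9, 0.95], [3, 0.5], [7, 0.85]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 0 = fail, 1 = pass

model = GaussianNB().fit(X, y)

# "What-if" scenario: hold attendance constant at 0.7 and vary study hours.
what_if = np.array([[h, 0.7] for h in range(1, 10)])
print(model.predict_proba(what_if)[:, 1])   # probability of passing
```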
Another study [
67] aims to (1) develop a common framework for artificial intelligence in higher education (the AAI-HE model) and (2) assess the AAI-HE model. The research process is accordingly divided into two stages: (1) developing the AAI-HE model and (2) assessing it. The resulting system structure can be elaborated into an AAI-HE model that serves as a reference for researchers and instructors intending to explore and implement best practices as management support tools, so that managers can make plans and decisions more efficiently. The introduction of artificial intelligence has significant potential to direct higher education towards technological progress. Moreover, recent advances in AI, deep learning, and computing architectures promote the use of AI by all population groups. With regard to the underlying technologies, such as network systems and related Internet equipment, the AAI-HE model must be properly provisioned.
The monograph [
68] explores the methodological and technological issues of building the next generation of educational content by electronic means. It proposes an automated system implementing a new methodology that contains content generators and means of introducing educational materials into the educational process in the form of specialized consulting. The use of an educational content synthesis system brings electronic educational resources into the educational process, thereby significantly reducing the labor and financial costs of their development. The research aims to create a next-generation automated system for building educational content in general and engineering subjects. A center for new educational technologies was established in the form of a technological platform for producing the learning materials.
Significant changes in modern education are associated with the introduction of artificial intelligence and robotic learning [
69]. The transition from traditional knowledge bases to knowledge generators requires a methodology adequate to the content of education, based on a parametric simulation model of the educational project (e.g., a course). Random and regular parameters of the model provide a set of parametric slices, i.e., specific models of learning tasks and theorems presented in forms suitable for learning and assessment. Qualitative structuring of content by topic, complexity, and graphic and numerical configuration contributes to the personalization of learning materials, initiates collaborative activities, and stimulates competition. This knowledge generator methodology is consistent with didactic features and trends in educational systems.
Learning in the twenty-first century implies a capacity for e-learning. Training content management systems are integral components of e-education [
70]. Existing content generators transform content rather than create original content. Creating a methodology and technology for generating original content is therefore important and relevant. To develop adequate content generation techniques, primarily for mathematics and related subjects, the problem of generating a triangle, one of the simplest objects of elementary mathematics, together with its multiple attributes is considered. The authors created an imitation model that describes the properties of the object and applies modified optimization methods and relevant algorithms. The presented algorithmic scheme illustrates the performance of the developed system.
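A minimal sketch of such an imitation model is shown below: a random valid triangle is generated and its attributes computed, from which a learning task and its answer key can be assembled. The specific attributes and output format are illustrative assumptions, not the system described in [70].

```python
import math
import random

def generate_triangle_task(max_side=20):
    """Generate a random valid triangle (the 'imitation model') and compute
    the attributes a learning task or its answer key may refer to."""
    while True:
        a, b, c = (random.randint(2, max_side) for _ in range(3))
        if a + b > c and b + c > a and a + c > b:       # triangle inequality
            break
    s = (a + b + c) / 2                                  # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))    # Heron's formula
    angles = [math.degrees(math.acos((y**2 + z**2 - x**2) / (2 * y * z)))
              for x, y, z in ((a, b, c), (b, a, c), (c, a, b))]
    return {"sides": (a, b, c), "perimeter": a + b + c,
            "area": round(area, 2), "angles": [round(g, 1) for g in angles]}

task = generate_triangle_task()
print(f"Find the area of a triangle with sides {task['sides']}.")
print("Answer:", task["area"])
```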
Another study [
71] examines the impact of automated content creation on AI-based e-learning trends. Widely used content generators do not actually create new content, but modify content stored in databases. The concept of primary content generation is based on the use of simulation models of the objects being studied. The methodology of primary content generation shows the possibility of implementing AI-based content management systems in e-education.
The study [
72] proposes a universal educational platform that combines online content generation and the learning interface. It introduces a methodology using imitation models to generate educational content and a matrix interface for user actions. The system builds a reference operator base and incorporates user solutions to support student learning. A demonstrator prototype has been developed and tested, showcasing all methodological options. The platform includes management, learning, and control interfaces. The study concludes that this modeling-based approach creates an algorithmic platform for content generation and learning, providing a unique opportunity for self-learning through student interactions.
Developing effective learning systems is a well-known challenge. Paper [
73] explores the use of computational models of student learning to create expert models across various domains. The paper introduces an apprentice learner architecture that defines the components required for learning by a simulated learner, resulting in Decision Tree and TRESTLE models. These models leverage a small set of prior knowledge to develop expert models. Despite the limited prior knowledge, the paper demonstrates the creation of a new tutor for planning experiments and the learning of expert models across multiple domains (language, math, engineering, and science) and knowledge types (associations, categories, and skills). This work highlights the efficacy of simulated-learner models in creating tutors that are difficult to develop using traditional approaches, and their applicability to various subject areas with minimal prior knowledge.
A report by the US Department of Energy [
74] discusses meetings attended by scientists and engineers in 2019. They focused on the potential of AI, Big Data, and high-performance computing (HPC) in the coming decade. The report emphasizes the importance of infrastructure that allows researchers to utilize computational resources effectively, with AI playing a key role. This infrastructure would enable optimized and controlled tests using AI and detection techniques, channeling data and resources based on researchers’ needs and availability.
The rapid development of AI is changing our lives in many ways. One area of application is data science. New methods for automating the creation of AI, known as AutoAI or AutoML, aim to automate the work of data scientists [
75]. AutoAI systems can autonomously collect and preprocess data, develop new features, and build and evaluate models based on performance targets (such as accuracy or runtime efficiency). Interviews with 20 data scientists who work for a large global technology company and analyze data in various business environments revealed ambivalent attitudes: although the informants expressed concern about the trend towards automation of their work, they also strongly believed in its inevitability.
Recent advancements in neural networks have enabled the emulation of energy conservation laws in continuous-time differential equations, but discrete-time scenarios pose challenges. Previous neural network models have overlooked other physical laws. Works [
76,
77] propose a deep energy-based physical model with a specific differential geometric structure, incorporating energy and mass conservation laws. The researchers have developed AI-based models, built from observation data, for phenomena lacking clear mechanisms or formulas. This technology enables highly accurate and fast simulations by adhering to physical laws and overcomes the limitations of previous prediction techniques, which struggled with such digitized phenomena. By reproducing physics in the digital world, this technology enables simulations of complex phenomena such as wave flow, fracture mechanics, and crystalline structure growth. Sufficient observation data are required for its application.
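The energy-conserving structure can be illustrated with the sketch below, in which a symplectic (leapfrog) integrator keeps the energy of a simple Hamiltonian nearly constant; in the cited works the energy function would be a deep model learned from observation data, so the closed-form oscillator used here is only a placeholder.

```python
def hamiltonian(q, p):
    """Energy function; in the cited works this would be a neural network
    learned from data rather than a fixed formula (harmonic oscillator here)."""
    return 0.5 * p ** 2 + 0.5 * q ** 2

def grad_q(q):
    return q                                  # dH/dq for the example above

def leapfrog(q, p, dt=0.01, steps=1000):
    """Symplectic (leapfrog) integration: the geometric structure of the
    update, not the step size, keeps the energy nearly conserved."""
    energies = []
    for _ in range(steps):
        p -= 0.5 * dt * grad_q(q)
        q += dt * p
        p -= 0.5 * dt * grad_q(q)
        energies.append(hamiltonian(q, p))
    return q, p, energies

q, p, energies = leapfrog(q=1.0, p=0.0)
print(max(energies) - min(energies))          # energy drift stays small
```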
Automated scientific discovery has gained interest with the advent of artificial intelligence. The work [
78] proposes a knowledge transfer approach that leverages interdisciplinary engineering knowledge to identify unknown concepts, methods, or laws in one discipline based on counterparts in other disciplines. Through software execution, they successfully replicated three recent discoveries in mechanics, showcasing the effectiveness of their approach. Their future aim is to uncover new knowledge in mechanics, electronics, and other engineering disciplines.
Inference. Educational and research problems are addressed using two alternative approaches. One is based on imitating human intellect using autonomous neural systems capable of data analysis, and provides users with methodological tools for studying objects. The alternative approach is simulation modeling of the object under investigation. Applying models with a high degree of similarity (up to axiomatic) makes it possible to generate and solve a wide range of problems, from general theorems to simple applied tasks. This ensures great diversity and an essentially unlimited number of generated problems.
1.8. Engineering and Business
A design framework for robust manufacturing systems is presented by [
79]. It combines modeling, neural networks, and knowledge-based expert system tools. An operation/cost-oriented cell design methodology is used to consider both the physical design and the control functions of the cell. Simulations estimate performance metrics based on input parameters and cell configurations. Expert knowledge is stored in a rule-based expert system, capturing the relationship between cell control complexity, cost, performance metrics, and configuration. Neural networks, trained using forward and backward datasets, predict cell design and control complexity. This methodology has been successfully implemented, leading to the production of an automated cell in an industrial environment. It serves as an effective decision support system for cell designers and management.
The approach proposed in [
80] aims to reduce the overall training costs by a factor of 15 while maintaining the same level of quality compared to current deployment approaches. This reduction in training costs is achieved by continuously deploying the model using real-time data input along with historical data, eliminating the need for frequent retraining of the model. The approach incorporates sampling techniques to include historical data, calculates online statistics, and dynamically materializes pre-processed features, all of which contribute to reducing training and preprocessing time. The authors also provide guidance on developing and deploying two pipelines and models to process real datasets.
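A minimal sketch of two of these ingredients, online statistics and mixing sampled historical data with fresh records, is given below; the class and function names are hypothetical and do not reproduce the pipelines of [80].

```python
import random

class OnlineStats:
    """Running mean/variance (Welford's method), updated record by record
    instead of recomputing statistics over the whole history."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0

def training_batch(new_records, history, history_fraction=0.3, size=256):
    """Mix fresh records with a sample of historical ones so the model can
    be updated continuously instead of being retrained from scratch."""
    k = int(size * history_fraction)
    return random.sample(history, min(k, len(history))) + new_records[-(size - k):]
```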
The work [
81] applies AI to stabilize the flight of a drone. There is a noticeable delay in transferring information about the position and orientation of the drone to the autopilot. It has been demonstrated that stable flight at a constant height in the vertical plane can nevertheless be achieved.
ML enables computers to emulate human thinking by analyzing data and identifying patterns. Supervised ML models learn from labeled data, while unsupervised models find patterns in unlabeled data. Iterative modifications to the code and data are common in ML projects. However, accuracy is crucial, especially in domains like medicine or spam detection. Sometimes, improving the dataset can enhance the model’s performance. The study [
82] explores a data-driven approach to improve ML model performance in real-world applications.
ML is now widely used in data-driven applications across organizations. However, there is a lack of quantitative data on the complexity and challenges faced by ML production pipelines. The work [
83] analyzed 3000 ML production pipelines at Google, covering over 450,000 trained models, over a period of four months. The analysis revealed the characteristics, components, and typologies of typical industrial ML pipelines. The authors introduced a data model called model graphlets for reasoning about the re-running of components in these pipelines and identified optimization opportunities based on traditional data management ideas. By reducing unnecessary computation that does not contribute to model deployment, significant cost savings can be achieved without compromising deployment speed.
ML model deployment and MLOps pipeline implementation are challenging due to iterative development, time-consuming processes, and the need for diverse skills, experience, domain knowledge, and teamwork [
84]. However, with proper planning and experimentation, the expected results can be achieved. ML models may fail due to various reasons, including misalignment with business needs, testing and validation issues, and lack of generalization. Several techniques can contribute to the success of ML projects:
- (a) Cloud dependency: utilizing cloud-based tools enables efficient communication, teamwork, and automation of testing, training, validation, and model development;
- (b) DevOps approach: adopting a DevOps approach with continuous integration (CI) and continuous delivery (CD) enables seamless updates, changes, and iterations throughout the development process;
- (c) Monitoring and observability: continuous monitoring ensures data quality and model performance, mitigating risks in real-world ML scenarios and contributing to successful production models (a minimal monitoring sketch is given after this list).
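As an illustration of item (c), a simple drift check such as the population stability index (PSI) can be computed on live feature data and used to raise an alert; the threshold and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index: a common drift score comparing a feature's
    training-time distribution with its live distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)     # distribution seen in training
live_feature = rng.normal(0.4, 1.2, 10_000)      # shifted production traffic
if psi(train_feature, live_feature) > 0.2:       # 0.2 is a commonly used alert level
    print("Drift detected: trigger retraining / investigation")
```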
ML systems in e-commerce use AI technology to analyze user activity and provide personalized recommendations. By incorporating AI, market researchers can attract more consumers with customized offers based on visit frequency, browsing history, and previous purchases. Research on college students [
85] shows that consumers are attracted to these AI-generated offers. This technology has shifted consumer preferences towards online shopping by offering convenience, special discounts, and a wide range of products. AI technology in e-commerce involves robots, voice and image recognition, and other instant-response technologies. AI and computer technologies progress together, with ML and interactive learning being the main focus.
In a method for producing parameters for product design [
86], explicit or implicit preferences are received from customers, directly or indirectly, and constraints are received from at least one provider. The method includes mapping the preferences and constraints to a common space, in which the search for an optimum is limited by the constraints. The parameters for product design are generated according to at least one optimum found. The method may be performed by a system comprising at least one processor adapted to execute code and at least one memory storing a preference data structure designed in accordance with that space.
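A minimal sketch of mapping preferences and constraints to a common parameter space and searching for a constrained optimum is given below; the preference function, cost constraint, and parameter bounds are hypothetical and are not taken from [86].

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter product design (e.g., size, battery capacity).
def customer_preference(x):
    """Higher is better; aggregated from explicit/implicit customer signals."""
    return -((x[0] - 7.0) ** 2 + 2 * (x[1] - 5.0) ** 2)

# Provider constraint: total cost 3*size + 4*capacity must not exceed the budget.
constraints = [{"type": "ineq", "fun": lambda x: 50.0 - (3 * x[0] + 4 * x[1])}]
bounds = [(0, 10), (0, 10)]

result = minimize(lambda x: -customer_preference(x),   # maximize preference
                  x0=np.array([5.0, 5.0]),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("Suggested design parameters:", result.x)
```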
AI’s strength lies in creating personalized customer experiences in e-commerce through product personalization and virtual personal shoppers. It plays a crucial role in proactive decision-making and information delivery [
87]. AI will impact e-commerce in three key ways: visual search capabilities for finding similar products, precise personalization tools across multiple channels, and interactive shopping experiences with virtual personal shoppers.
The article [
88] introduces ADVISOR SUITE, a commercial system that creates intelligent personalized applications for sales consultants. This knowledge-based system simplifies development tasks by using conceptual models and declarative knowledge representations. It supports defining user models, recommendation rules, personalizing dialog flows, and creating user interfaces. The system includes user-friendly graphical tools that reduce development and maintenance costs.
Inference. Many types of AI systems, from autonomous networks to customizable knowledge-based systems, have become embedded in the business sphere, from initial design to the sale of finished products. As regards customizable (configurable) design systems, AI systems prove highly efficient in all production sectors. They are used as mass customization tools in engineering design, e-commerce, and marketing, and have significant potential for further development. Given the scale of implementation of neural networks, the main problem lies more in developing ML production pipelines than in improving AI models.