Artificial Intelligence (AI) in Infrastructure Projects—Gap Study

Abstract: Infrastructure projects are usually complicated, expensive, long-term mega projects; accordingly, they are the type of projects that most need optimization in the design, construction and operation stages. A great deal of earlier research was carried out to optimize the performance of infrastructure projects using traditional management techniques. Recently, artificial intelligence (AI) techniques have been implemented in infrastructure projects to improve their performance and efficiency due to their ability to deal with fuzzy, incomplete, inaccurate and distorted data. The aim of this research is to collect, classify, analyze and review the available previous research related to implementing AI techniques in infrastructure projects in order to identify the gaps in previous studies and the recent trends in this research area. A total of 159 studies, published from the beginning of the 1990s until the end of 2021, were collected. This database was classified by publishing date, infrastructure subject and the AI technique used. The results of this study show that the implementation of AI techniques in infrastructure projects is rapidly increasing. They also indicate that transportation is the infrastructure subject in which AI is used the most, and that artificial neural networks (ANN) and particle swarm optimization (PSO) are the most implemented techniques in infrastructure projects. Finally, the study presents some opportunities for further research, especially in natural gas projects.


Introduction
Infrastructure projects are usually classified as complex construction projects because of their huge number of interacting activities. Accordingly, the optimization of this type of project is a very complicated process. Choosing the optimum alternative from the many available ones, considering the effects of the many parameters involved, has a direct impact on project success in terms of cost, duration and quality.
Searching for the optimum alternative was usually carried out using traditional, manual and slow techniques to compare the strengths and weaknesses of each alternative. Since the 1990s, artificial intelligence (AI) techniques have competed with the traditional techniques in optimizing infrastructure projects. Today, AI techniques are recognized as the most efficient, flexible and user-friendly optimization techniques due to their ability to deal with multiple alternatives and with conflicting, fuzzy, incomplete and distorted data. The multiplicity of artificial intelligence techniques allows both designers and decision-makers to choose the best technique according to the project type and problem configuration. Trying different AI techniques is a good strategy to determine the most suitable one for certain applications. This review includes 159 studies published between 1990 and 2021.
Infrastructure is the general term for the basic physical systems of a business, region or nation. Examples of infrastructure include transportation systems, communication networks, and sewage, water and electric systems. These systems tend to be capital-intensive, high-cost investments and are vital to a country's economic development and prosperity. Infrastructure projects could be classified by subject (Figure 1) into transportation, water networks and stations, natural gas networks, electrical power networks and communications. Due to the lack of review papers in this research area (implementing AI in infrastructure projects), this research was carried out to build a database for the application of AI techniques in the infrastructure field and to follow up on the implementation of AI techniques in different infrastructure areas in order to determine opportunities for further research.
The methodology of this review is to collect most of the available research on the web concerning the implementation of AI techniques in infrastructure projects. About 160 studies and theses published over the last three decades were collected. The collected materials were sorted, commented on and classified with respect to subject, AI technique and publication year. Papers and theses covering multiple subjects or techniques are counted under every applicable subject or technique.
Five subjects (transportation, water, gas, electric and communication networks) and eight AI techniques (ES, FS, ANN, SVM, GA, GP, PSO and others) were considered.
This research is divided into five main sections. Section 1 includes the introduction, problem statement, research objective and a brief description of the research organization. Section 2 presents a quick description of the considered AI techniques and their concepts. Section 3 summarizes the collected previous studies, categorized by subject, sub-categorized by AI technique and finally sorted by date. Section 4 contains a discussion of the extracted results, while Section 5 includes conclusions and recommendations for further research.

Artificial Intelligence AI Techniques
AI techniques are mathematical approaches developed to search for the optimal solution to complicated problems within the available time and resources (hardware, software and databases). The more time and resources allowed, the more accurate the solution the AI can reach. There are two main types of AI approaches: one is based on mimicking the behavior of natural creatures, and the other depends on logical, mathematical or statistical approaches. Each type deals better with specific types of problems. The first type (such as ANN, GA, GP and PSO) is better for inaccurate, distorted and incomplete data, while the other (such as ES, FS and SVM) is better for problems that need proof and reasoning. Accordingly, a suitable technique for a certain problem can be selected based on problem type, data quality, data distribution and search restrictions. Figure 2 presents a classification of the considered AI techniques by approach and application.

Expert System (ES)
The expert system is considered the earliest AI technique, developed in the 1980s. This technique depends on creating a decision tree based on cumulative human experience, research and solved problems. This decision tree is called a "knowledge base"; it is developed by knowledge engineers who collect, sort, arrange and store the knowledge of human experts regarding a certain topic or subject. Access to the knowledge base requires an "inference engine" to search through the stored data. The user then receives a justified choice for his case by answering a series of questions through the "user interface". Usually, the input answers should be accurate, complete and correct [1]. Figure 3 illustrates the schematics of the ES approach.
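As a rough illustration of these components, the knowledge base can be sketched as a small decision tree and the inference engine as a traversal driven by the user's answers; the rules and questions below are invented purely for illustration:

```python
# Sketch of an expert system: the "knowledge base" is a decision tree and
# the inference engine walks it using the user's answers.
# All rules below are hypothetical, for illustration only.
knowledge_base = {
    "question": "Is the soil bearing capacity low?",
    "yes": {
        "question": "Is the span longer than 40 m?",
        "yes": "Recommend: deep piled foundation",
        "no": "Recommend: raft foundation",
    },
    "no": "Recommend: isolated footings",
}

def inference_engine(node, answers):
    """Descend the decision tree until a leaf (a justified recommendation)."""
    while isinstance(node, dict):
        node = node[answers[node["question"]]]
    return node

# The "user interface" would collect these answers as a series of questions.
choice = inference_engine(knowledge_base,
                          {"Is the soil bearing capacity low?": "no"})
```

Note that, as the text states, such a system needs accurate and complete answers: a missing or fuzzy answer has no branch to follow.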

Fuzzy System (FS)
In contrast to the expert-system technique, fuzzy systems (FS) can handle inaccurate, incomplete and incorrect information. FS uses a statistical approach based on a probability-distribution function called the "membership function". The main concept of this approach is that each parameter has a membership function, and the membership functions of the outputs can be calculated from the membership functions of the inputs using logical operators. The procedure starts by converting the inputs to "fuzzified inputs" using the "fuzzifier" module; then the "inference engine" applies the logical operators stored in the "rule base" to obtain the "fuzzified outputs"; finally, the "defuzzifier" module converts the fuzzified outputs back to normal outputs [1]. Figure 4 presents this procedure.
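A minimal sketch of this fuzzify-infer-defuzzify pipeline, assuming triangular membership functions and a hypothetical two-rule base that maps traffic density to a congestion percentage (weighted-average, Sugeno-style defuzzification):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def congestion(density):
    """Fuzzify a traffic density (veh/km), apply a two-rule base, defuzzify.
    The membership ranges and rule outputs are invented for illustration."""
    low = tri(density, -1, 0, 50)     # degree of membership in "low density"
    high = tri(density, 30, 80, 131)  # degree of membership in "high density"
    # Rule 1: low density -> 10% congestion; Rule 2: high density -> 90%.
    # Weighted-average (Sugeno-style) defuzzification:
    return (low * 10 + high * 90) / (low + high)
```

A density of 40 veh/km belongs partly to both sets, so the defuzzified output falls between the two rule outputs rather than snapping to either one; this graded behavior is what lets FS tolerate imprecise inputs.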

Artificial Neural Networks (ANN)
This is the most successful and famous AI technique. It depends on simulating the anatomy and operation of the human brain. The network consists of sets of cells (called neurons) arranged in layers. The cells of each layer are connected to the cells of the previous and the next layers. Figure 5 shows the arrangement of a typical ANN with an input layer, an output layer and only one hidden layer. In the human brain, electrical signals transfer between cells through connectors; accordingly, they are affected by the quality of the connectors. Moreover, a cell does not trigger its output signal until the summation of its input signals exceeds a certain threshold. Similarly, the inputs of an ANN transfer between neurons through connectors. The quality of the biological connector is simulated by the weight of the ANN connector, and to simulate the effect of connector quality on the electrical signal, the receiving ANN neuron is given the input value multiplied by the connector weight. Finally, the threshold of a biological cell is simulated by the "activation function" in an ANN neuron, which triggers the neuron output. Just like the human brain, the ANN gains knowledge by learning; during this process, the ANN adjusts the weight of each connector to maximize the accuracy of the outputs.
One of the advantages of the ANN is the huge number of learning techniques that can be implemented, for example, back propagation (BPANN), forward propagation (FPANN), radial basis (RBANN), Bayesian regression (BRANN), generalized regression (GRANN), differential evolution (DEANN) and Levenberg-Marquardt (LMFANN). Many different approaches have been used to optimize the architecture of the ANN, such as the number of hidden layers, the number of neurons in each layer, the activation function of each neuron and the implemented learning technique. Some of these approaches are neuro-fuzzy, recurrent fuzzy, local linear neural fuzzy, cuckoo optimization algorithm-ANN, support vector quantile regression and genetic algorithm-ANN [1][2][3].
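The weighted-connector and activation-function mechanics described above can be sketched as a single forward pass through one hidden layer; the sigmoid activation is an illustrative assumption, since any activation function could stand in for the biological threshold:

```python
import math

def sigmoid(z):
    """Activation function: the ANN analogue of the biological firing threshold."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, w_hidden, w_out):
    """One forward pass through a single hidden layer.
    Each neuron receives every input multiplied by its connector weight,
    sums them, and passes the sum through the activation function."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
```

A learning technique such as back propagation would repeatedly adjust `w_hidden` and `w_out` to reduce the output error, which is exactly the weight-tuning process described above.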

Support Vector Machine (SVM)
SVM is a classification technique with a statistical basis that is used to find the best boundaries between classes (those with the widest margin). The formulas of these boundaries can be linear or non-linear. Figure 6 describes the difference between linear and non-linear boundaries and the concept of the margin in 2D space. SVM is very efficient, especially for multiple classes in higher dimensions, where the boundaries cannot be presented graphically [1].
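A minimal sketch of a linear SVM under these ideas, trained by sub-gradient descent on the hinge loss; the toy data, learning rate and regularization constant are illustrative assumptions, not a production setup:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Sub-gradient descent on the hinge loss; labels y must be in {-1, +1}.
    Returns the weights w and bias b of the separating hyperplane."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # point inside (or on the wrong side of) the margin
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                        # correctly classified, outside the margin
                w -= lr * lam * w        # only the regularization term shrinks w
    return w, b

# Two linearly separable classes in 2D.
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
```

The regularization term `lam * w` is what pushes the solution toward the widest-margin boundary; a non-linear boundary would instead be obtained with a kernel.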

Genetic Algorithm (GA)
This optimization technique depends on mimicking the natural reproduction process of living creatures, whose children inherit the genes of their parents. Darwin's law of natural selection states that, in a group of individuals, the one best adapted to the surrounding environment is the most likely survivor. GA is based on a similar concept: the solution that best fits certain optimization conditions is the optimum one. To find the optimum solution for a certain problem with certain conditions, expressed as a "fitness function", the process begins with generating a random set of solutions and testing each solution's quality. Then, the fittest solutions (the ones with high quality) survive to form the next generation, while the others are discarded. In order to maintain the same population size, the survivors are mated with each other to generate new individuals (solutions); this process is called "crossover". Sometimes, natural errors occur while inheriting the genes; these are called "mutation" in GA. Mutation is intentionally applied at a certain rate to mimic the biological process. Figure 7 illustrates the concepts of "crossover" and "mutation". Generation after generation, the average fitness of the population increases until a certain individual (solution) reaches an accepted fitness (accuracy) [1].
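The selection-crossover-mutation loop above can be sketched as follows; the bit-string encoding and the OneMax fitness (count of 1-bits) are illustrative assumptions chosen only to keep the example self-contained:

```python
import random

def genetic_algorithm(fitness, length=12, pop_size=20, generations=60,
                      mutation_rate=0.05, seed=0):
    """Toy GA on bit-strings: selection, single-point crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # natural selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)       # mate two survivors
            cut = rng.randrange(1, length)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):                 # mutation at a small rate
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children                  # constant population size
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # OneMax: fitness = number of 1-bits
```

Because the survivors are carried into the next generation unchanged, the best fitness never decreases, matching the generation-after-generation improvement described above.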

Genetic Programming (GP)
GP is a special application of GA wherein the considered problem is to optimize a mathematical expression to fit certain observations; hence, GP is a multi-variable, free-structure regression technique. The individual solutions of GA are represented by mathematical expressions in GP, and the fitness function of GA is replaced by the sum of squared errors (SSE) in GP. Finally, to apply GA to mathematical expressions, they must first be coded in genetic form. Figure 8 presents how to translate a formula from mathematical form to tree form and then to genetic form. Today, GP is a main technique that includes many sub-techniques, such as linear genetic programming (LGP) and Cartesian genetic programming (CGP), in addition to gene expression programming (GEP) and multi-gene expression programming (MGEP) [1].
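As a sketch of the formula-to-tree coding that Figure 8 describes, a hypothetical formula such as x² + 3y can be held as a nested-tuple tree and evaluated recursively; this is only an illustrative encoding, not the paper's exact scheme:

```python
import operator

# Tree form of the hypothetical formula  x**2 + 3*y : (operator, left, right).
tree = ("+", ("**", "x", 2), ("*", 3, "y"))

OPS = {"+": operator.add, "*": operator.mul, "**": operator.pow}

def evaluate(node, env):
    """Recursively evaluate an expression tree against variable bindings."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(node, node)  # a variable name or a numeric constant

# GP would score each tree by the sum of squared errors (SSE) between
# evaluate(tree, ...) and the observations, then evolve the trees with GA.
value = evaluate(tree, {"x": 2, "y": 1})  # 2**2 + 3*1
```

Crossover on such trees swaps subtrees between two parents, and mutation replaces a random node, which is how GA operates on the genetic form of an expression.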

Particle Swarm Optimization (PSO)
This is an optimization technique that simulates the searching behavior of a group of living creatures. The main concept is to divide the search zone into sub-zones, one for each individual. Each individual searches for the best solution within its sub-zone (the best local solution), and by communicating with the other individuals, the final solution (the best global solution) can be detected.
The communication protocol between the individuals depends on the creature being mimicked; that is why PSO is a large umbrella that covers many sub-techniques, such as ant colony optimization (ACO), the grasshopper optimization algorithm (GOA), the salp swarm algorithm, artificial bee colony (ABC), the crow search algorithm (CSA), the whale optimization algorithm (WOA) and the firefly algorithm (FA). Figure 9 describes the main concept of this technique [1].
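A sketch of the local-best/global-best communication described above, minimizing a simple sphere function; the inertia and attraction coefficients are conventional but assumed values:

```python
import random

def pso(f, dim=2, n_particles=15, iters=100, seed=1):
    """Minimize f: each particle tracks its own best local solution, while the
    swarm shares a single best global solution that steers every particle."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_local = [p[:] for p in pos]
    best_global = min(pos, key=f)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rng.random() * (best_local[i][d] - p[d])  # own memory
                             + 1.5 * rng.random() * (best_global[d] - p[d]))   # swarm memory
                p[d] += vel[i][d]
            if f(p) < f(best_local[i]):
                best_local[i] = p[:]
            if f(p) < f(best_global):
                best_global = p[:]
    return best_global

sphere = lambda v: sum(x * x for x in v)
```

The sub-techniques listed above (ACO, ABC, FA, etc.) differ mainly in how this "swarm memory" term is computed, i.e., in the communication protocol.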

Hybrid Techniques
Each AI technique has its own advantages and disadvantages. To overcome the disadvantages of a technique while still benefiting from it, researchers combine two or more techniques. Sequential, auxiliary and embedded are the main types of hybrid techniques. Sequential techniques take the outputs of the first technique and feed them into the next one to obtain a better solution. Auxiliary techniques take only some information from the first technique, benefit from this information in the second one and take the output from the last technique only. Embedded techniques combine the techniques and use them in parallel. The most common hybrid techniques, which combine ANN with other techniques as previously mentioned, are efficient and fast but also very complicated.
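A minimal sketch of the sequential pattern, with two hypothetical stand-in stages: a coarse random search playing the role of a global technique such as GA, and a simple hill climber as the follow-up local technique. Both stages are invented for illustration; the point is only the chaining of outputs:

```python
import random

def stage1_global(f, rng, n=200):
    """Stage 1: coarse global search (a stand-in for, e.g., a GA run)."""
    return min(([rng.uniform(-5, 5)] for _ in range(n)), key=f)

def stage2_local(f, start, step=0.5, iters=50):
    """Stage 2: hill climbing that refines the candidate passed from stage 1."""
    x = start[:]
    for _ in range(iters):
        for cand in ([x[0] - step], [x[0] + step]):
            if f(cand) < f(x):
                x = cand
        step *= 0.9  # shrink the neighbourhood each iteration
    return x

rng = random.Random(0)
f = lambda v: (v[0] - 2.0) ** 2  # toy objective with its optimum at 2.0
best = stage2_local(f, stage1_global(f, rng))  # stage 1's output feeds stage 2
```

An auxiliary hybrid would instead pass only side information (e.g., tuned parameters) from stage 1 to stage 2, and an embedded hybrid would run both in parallel within one loop.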

Transportation Networks
AI techniques have been widely used in this subject. Using traditional techniques for transportation problems is very hard, time-consuming and expensive because of fuzzy, inaccurate data. On the other hand, AI techniques can deal with this type of data efficiently. The most addressed topics related to this subject (Figure 10) are traffic congestion, maximum and minimum car speed, intelligent transportation systems, traffic flow and risks in construction. A total of 57 studies were concerned with these topics; 15 of them used PSO, and 13 used ANN.

Using Expert System
In 2019, Ali Ahmed Mohamed wrote a review paper on the usage of ES in transportation engineering. The writer discussed the benefits of using this technique: time constraints, stability and efficiency. He concluded that ES can be used by designers because it helps them to analyze, determine and customize information [4]. A 1989 study asked whether ES could help with planning problems; by using old urban information systems to design new ones, it concluded that ES can help planners obtain solutions with high efficiency [5]. In 1993, Amin Hammad studied the planning of bridges with the ES technique. The selection of the best alignment and location of bridges is essential, and many factors should be considered in this process. He concluded that ES can choose from different alternatives with high accuracy and good results [6]. In 1998, Peter Crossley studied how to use ES to predict the number of cars and road operating systems; the research was confined to developed countries [7]. In 2000, Jonas Norrman studied slipperiness on roads, which is an important factor in design. By making many observations and examinations on different types of roads over different years to build a large database, ES can be used to choose the optimum design. The results show that 49% of the roads are considered slippery [8]. In 2005, A. Zischg studied wet snow avalanches and risk management on roads. The application that was produced depended on the data taken from the study area. The ES technique showed great results when it was combined with FS [9]. In 2014, T. Frantti studied traffic management. This field is vital in road design; the design of roads in most countries depends on this management, and it plays a great role. By using ES, problems related to congestion and flow control can be solved [10].

Using Fuzzy System
In 1994, Robert Hoyer studied urban road traffic using FS, explaining the essential ideas of the state machine and the linguistic variables of fuzzy rule design [11]. In 1996, Bernhard Krause compared the new approach with previous ones based on conventional control technology; FS proved its role in traffic management, and detecting traffic congestion, which is very important, becomes easier and more accurate with a fuzzy system [12]. In 2004, Salvatore Cafiso used FS to study road safety. He noticed that most accidents happen on two-lane roads and that most fatalities occur on curved sections. Using the FS technique with large databases ensures good results when designing new roads and redesigning old ones, and eliminating the dangers in the different categories of roads using AI will be a great step toward safer roads [13]. In 2007, Panita Pongpaibool presented how to evaluate roads using fuzzy logic and adaptive neuro-fuzzy techniques, verifying the accuracy of the AI results against those produced manually by volunteers: FS achieved 88.8% accuracy and the neuro-fuzzy technique 75.4% [14]. In 2011, Danial Moazami acknowledged that priorities in pavement rehabilitation and maintenance processes are limited by factors such as budget and inevitability. Using MATLAB, an M-file was coded and tried in a case study in Tehran; a mathematical model was selected and executed on those streets, prioritizing 131 sections [15]. Another study, published in 2012 by Matilde, addressed road safety. From this point of view, there are three subsystems, the vehicle, the driver and the trip, and for each of them the fuzzy system decided on an action to improve safety [16].
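To illustrate the kind of fuzzy inference these traffic studies rely on, the sketch below implements a two-rule Mamdani-style congestion estimator. The membership ranges, rules and output centroids are invented for illustration and are not taken from the cited papers.

```python
# Hypothetical fuzzy congestion estimator; all numbers are assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def congestion_level(density, speed):
    """Return a crisp congestion score in [0, 100] from density and speed."""
    # Fuzzify the inputs (vehicles/km, km/h); the ranges are assumed.
    dense_high = tri(density, 40, 80, 121)
    dense_low = tri(density, -1, 0, 50)
    speed_low = tri(speed, -1, 0, 40)
    speed_high = tri(speed, 30, 80, 121)
    # Rule 1: high density AND low speed -> severe congestion.
    severe = min(dense_high, speed_low)
    # Rule 2: low density AND high speed -> free flow.
    free = min(dense_low, speed_high)
    # Weighted-average defuzzification with assumed output centroids (90, 10).
    total = severe + free
    if total == 0:
        return 50.0  # no rule fires: neutral score
    return (severe * 90 + free * 10) / total
```

A dense, slow road scores near 90 while a sparse, fast one scores near 10; inputs that fire neither rule fall back to a neutral 50.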

Using Artificial Neural Networks
ANN is one of the most used techniques in the roads domain; it has many advantages when a great amount of data is available. From 2010 to 2019, 12 papers were collected in which researchers discussed traffic flow [27][28][29][30][31][32][33][34]. They studied pavement optimization, the prediction of car flow on both low-volume and high-volume roads, the prediction of traffic noise on crowded roads, automated road detection from satellite imagery, prediction of the duration needed to finish rural roads, speed prediction based on images, detection of road regions and buildings using aerial images and improving the efficiency of roads [27,28,[35][36][37].
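As a minimal illustration of the ANN approach behind these studies, the sketch below trains a tiny one-hidden-layer network by backpropagation on an assumed toy mapping from (density, time-of-day) to a congestion score; the architecture, data and learning rate are illustrative assumptions, not taken from the cited papers.

```python
# Minimal feed-forward ANN with backpropagation; toy data is assumed.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyANN:
    def __init__(self, n_in=2, n_hid=3):
        r = random.uniform
        self.w1 = [[r(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [r(-0.5, 0.5) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        # Hidden activations are cached for the backward pass.
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, y, lr=0.5):
        out = self.forward(x)
        d_out = (out - y) * out * (1 - out)  # squared-error gradient at the output
        for j, hj in enumerate(self.h):
            d_hid = d_out * self.w2[j] * hj * (1 - hj)
            self.w2[j] -= lr * d_out * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * d_hid * xi
            self.b1[j] -= lr * d_hid
        self.b2 -= lr * d_out
        return (out - y) ** 2

# Assumed samples: (density, time-of-day) -> congestion probability.
data = [([0.1, 0.2], 0.1), ([0.9, 0.8], 0.9), ([0.8, 0.2], 0.6), ([0.2, 0.9], 0.4)]
net = TinyANN()
first = sum(net.train_step(x, y) for x, y in data)
for _ in range(2000):
    last = sum(net.train_step(x, y) for x, y in data)
```

After a few thousand passes the summed squared error drops well below its initial value, which is the essential behavior the cited studies exploit at much larger scale.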

Using Particle Swarm Optimization
All of these topics work together to improve roads. PSO is one of the earliest AI techniques and has many branches; this research found that it holds the largest share of infrastructure studies, and almost every branch, such as ant colony optimization, has been used. Some of these studies, collected from 2006 to 2019, are mentioned here: they address routing problems, congestion management, road design, scheduling for vehicles using roads, the planning of gas and recharging stations, road extraction from satellite images and traffic forecasting [38][39][40][41][42][43][44][45][46][47][48][49][50][51].
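A bare-bones version of the PSO machinery shared by these studies can be sketched as follows; the quadratic objective is a stand-in for a real road-design cost surface and is purely an assumption.

```python
# Minimal particle swarm optimizer on an assumed 2-D cost surface.
import random

random.seed(1)

def cost(p):
    # Hypothetical alignment cost: squared distance from an assumed optimum (3, -1).
    x, y = p
    return (x - 3) ** 2 + (y + 1) ** 2

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's personal best
    gbest = min(pbest, key=cost)[:]          # swarm-wide best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                # Inertia + cognitive pull toward pbest + social pull toward gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest

best = pso()
```

On this convex toy surface the swarm converges tightly to the optimum; real road-design objectives are multi-modal, which is exactly where PSO's population-based search pays off.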

Using Support Vector Machine
In the roads domain, SVM has been used many times on many topics, such as predicting the remaining service time of pavement [52], new-lane detection [53], using the geometry of the land to detect pavement thickness [54], detecting the road centerline from high-resolution images [55], detecting five classes from LIDAR data [56], automatic detection of pavement [57], predicting safety risks [58] and road-surface prediction from the tire cavity [59]; finally, in 2020, Viet-Ha Nhu published a paper comparing three machine learning algorithms for checking the performance of a mountain road [60]. Mnih and Hinton (2010) discussed how to convert high-resolution images to road maps with different AI techniques [61].

Electrical Networks
There are many topics under this subject that used AI techniques, as shown in Figure 11, for example: maintenance of the electrical power network, optimal power flow, power system controller design, power loss reduction, solving economic dispatch problems and fault detection.
In general, these subjects are related to infrastructure, and countries can save a massive amount of money by developing power systems in all aspects. In this survey, there are 42 studies on these topics, as mentioned in Section 4; most of them used PSO or GA.

Using Expert System
In 1993, Kun-Long Ho used old databases and the experience of the previous five years to build an expert system. Applying it to Taiwan's power system for short-term load forecasting, he found that the error decreased from 3.86% to 2.52% using expert system techniques [62]. Also in 1993, M. Kezunovic studied the automatic detection of faults in power systems, building a system to help operators analyze disturbances and faults; the bigger the power system, the more useful this research becomes [63]. In 1994, M. M. Adibi wrote a paper, one of a series concerned with power system restoration, setting out the requirements of an expert system with a focus on initial power sources. In 1997, Ernest Vázquez studied fault detection by ES and built a decision support system to help designers save time and resources [64]. In 2008, M. Babita Jain wrote a paper discussing fault detection and the control of general power system equipment and how to deal with modern, online faults; as he noted, the data in the paper can be extended to all other equipment [65]. S. J. Kiartzis and Abdelazeem A. Abdelsalam used hybrid techniques (ES and FS) in different ways in their studies. The first, in 2000, showed how the hybrid technique was developed to forecast the daily load curve, concluding that the system could forecast future loads in power systems [66]. The other, in 2012, performed classification based on current and voltage in power systems; the newly proposed system showed high accuracy and little calculation time compared to other methods [67]. Deyin Ma also used a hybrid technique (ES and ANN), in 2012, to detect faults in the system: his approach was to divide the main network into sub-groups with short depth and then propose a novel method. Experimental results show that this system obtains better accuracy than the two commonly used methods [68].
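The rule-based core that such fault-detection expert systems share can be sketched as a small forward-chaining rule base; the symptoms, rules and diagnoses below are invented for illustration and do not come from the cited systems.

```python
# Toy rule-based expert system for power-system fault screening.
# Every symptom name and rule here is a hypothetical example.
RULES = [
    # (required symptoms, diagnosis)
    ({"breaker_tripped", "voltage_dip"}, "short-circuit fault suspected"),
    ({"overcurrent", "transformer_hot"}, "transformer overload suspected"),
    ({"frequency_drop"}, "generation deficit suspected"),
]

def diagnose(symptoms):
    """Forward-chain over the rule base; return every diagnosis whose
    required symptoms are all present in the observations."""
    observed = set(symptoms)
    return [diag for required, diag in RULES if required <= observed]
```

A real ES adds certainty factors and a much larger, expert-elicited rule base, but the match-and-fire cycle is the same.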


Using Fuzzy System
FS was used many times from 1990 to 2016 in the study of power systems. In 1990, Cheng studied a new system stabilizer based on FS [69]. In 1995, Dipti Srinivasan focused on connecting complicated, large-scale power systems using the FS technique [70]. In 1997, Yung Hua Song explained how AI and FS techniques can solve longstanding power-system problems, noting that conventional methods have trouble reaching solutions [70]. In 2000, P. K. Dash addressed power-system distribution using FS, with a small role for ES, and ran several numerical examples in EMTP programs to validate the results of the new hybrid technique against actual ones [71]. In 2016, Ahmed E. Saleh used a hybrid system of FS and ANN to predict the amount of energy generated from wind, also illustrating the advantages of wind energy reported in the literature [72]. In 2015, Sajid Hussain addressed renewable energy: using FS and GA, he generated fuzzy decision criteria to choose the most suitable power system to deliver energy to a demand point. This research obtained the optimum power flow by using a control framework to test electricity grids [73].

Using Genetic Algorithms and Genetic Programming
In 2002, J.Z. Zhu studied how to reduce and minimize power system losses; he built a refined GA and tested it on 33-bus and 16-bus distribution networks, with the results reported in the paper [74]. In 2017, Saleh Almasabi studied communication in infrastructure by improving the power system connected to transportation using the GA technique, with cost playing a central role in the study [75]. In 1997, W. B. Langdon used GP for optimization, starting from hand-coded heuristics and evolving toward the best schedule to minimize cost [76]. In 2014, T.S. Kishore wrote a state-of-the-art paper about transmission-line optimization, covering the history of transmission lines, planning and cost optimization; the optimization shown in the paper used GP [77].
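A compact real-coded GA of the kind used in such loss-minimization studies might look like the sketch below; the three-variable quadratic "loss" and the target settings are stand-ins, not the 33-bus model from the cited study.

```python
# Minimal real-coded GA minimizing an assumed quadratic loss proxy
# (e.g., control settings vs. an ideal profile). All numbers are assumptions.
import random

random.seed(2)

TARGET = [0.4, 0.7, 0.2]  # assumed ideal control settings

def loss(ind):
    return sum((g - t) ** 2 for g, t in zip(ind, TARGET))

def ga(pop_size=40, gens=100, mut=0.05):
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        new = [min(pop, key=loss)]  # elitism: carry the best forward unchanged
        while len(new) < pop_size:
            # Tournament selection, blend crossover, Gaussian mutation.
            p1 = min(random.sample(pop, 3), key=loss)
            p2 = min(random.sample(pop, 3), key=loss)
            child = [(a + b) / 2 + random.gauss(0, mut) for a, b in zip(p1, p2)]
            new.append(child)
        pop = new
    return min(pop, key=loss)

best = ga()
```

With elitism the best-so-far loss never worsens, so the population steadily contracts around the target settings.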

Using Artificial Neural Networks
In 2000, F. Zahra used a hybrid technique combining GA and ANN, showing that ANN is promising and has great potential in transmission-line protection, particularly around fault inception [78]. Fault detection, protection and location in transmission lines are covered in many research papers, especially those using ANN [79][80][81][82][83][84][85][86].

Using Particle Swarm Optimization
PSO techniques come in many kinds: some studies used PSO in general, while others used sub-techniques inspired by fish, birds and cats. Some state-of-the-art papers discussed using PSO in power systems in general [87][88][89], while other papers used a specific sub-technique of PSO; they are arranged here by publication year [90][91][92][93][94][95].

Using Support Vector Machine
The SVM technique has played a great role in power systems and power transmission lines. Over the years, new approaches have been studied for fault detection, fault classification, power quality and ice forecasting [96][97][98][99][100][101][102][103].

Water Networks
Water networks represent one of the most significant branches of infrastructure engineering, and development in this area has become necessary. AI techniques were used in the following topics, as shown in Figure 12: water management, water distribution, rainfall-runoff, water recycling and sewage collection.
More than 39 studies in this survey focused on these and related topics (Section 4), and most of them used PSO. As mentioned before, this method is very suitable for a wide range of data.

Using Expert System
In 1993, Daene C. McKinney studied water planning over a 50-year horizon using an expert geographic information system. Once the ES is built, cost and time efficiency increase, with the further benefit of developing water supplies; the most important conclusion is that the more information and alternatives given to the system, the better the results [104]. In 1997, Mohan and Arumugam recognized the complexity of irrigation systems. Given the variety of expert opinions, they argued that an expert system containing all of the varieties and alternatives would increase the efficiency of water use in irrigation; the paper focused on irrigation but also touched on managing, storing and consuming water effectively [105]. In 2007, Fuzhan Nasiri used ES in water-quality management, observing that the many uses of water call for a decision-making system. The system allows its user to set priorities that increase the efficiency of water use, transferring expert knowledge to non-experts who can easily apply it [106]. In 2013, Igor Cretescu built an ES for monitoring the water surface, studying measurements and sampling frequency; some of the data processing had practical and operating issues, and all of these data increase the benefit of the expert system. Lassi was the location of the case study, chosen by the researcher as the highest generator of pollution [107].

Using Fuzzy System
In 2002, M. Hasebe studied multipurpose dams using FS and ANN, concluding that good results came only in non-flood seasons and could help countries increase water-usage efficiency; the simulations made by the AI were very close to the actual operations [108]. An important topic in this area is water management. In 2008, Slobodan used FS to design, plan and manage water infrastructure, taking complex socio-economic conditions into consideration. Focusing on three main sets of tools (simulation, optimization and multi-objective analysis), he concluded that incorporating uncertainties yields better solutions [109]. In 2010, Y.P. Li used FS to create a fuzzy boundary that gives designers and planners alternatives to support decision-making. The more complex and detailed the system, the more accurate it will be, which helps countries manage water resources and increase the efficiency of infrastructure projects [110].


Using Genetic Algorithms and Genetic Programming
In 1997, Dragan A. Savic made a program based on genetic algorithms to produce the lowest-cost design, focusing on water distribution networks; the author showed how computational techniques could improve economic efficiency [111]. On the other side, T. Devi Prasad focused on the multi-objective use of GA, studying how to minimize cost while maximizing a reliability measure and introducing a new measure called resilience; with the new method, the financial results were better. This research was published in 2004 [112]. Five studies that used GP have been collected in this research. One of them is a state-of-the-art review published in 2018: Ali Danandeh Mehr presented the fundamentals of genetic programming and its applications in water-resource engineering, such as rainfall-runoff modeling, stream-flow forecasting, water-quality modeling, surface/subsurface water-level prediction and soil-properties modeling, and he presented many future directions for GP and its variants, such as Cartesian GP, graph-based GP and stack-based GP [113]. In 2002, Vladan Babovic studied rainfall-runoff modeling; by building these models, we can take advantage of knowledge about this deep-rooted problem [114]. In 2013, Qiang Xu used GP to devise a strategy for replacing pipelines in an optimal way, which can save a country a great deal of money and time; cost-effective pipe maintenance plans improve by using AI. A case study was done in Beijing, and the results were very accurate [115].
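In the spirit of the least-cost pipe-sizing work above, the sketch below runs a discrete GA in which each gene selects a pipe diameter from a catalogue and undersized pipes incur a constraint penalty; the catalogue, unit costs and demands are invented for illustration.

```python
# Hypothetical discrete GA for least-cost pipe sizing with a penalty
# for hydraulically undersized pipes. All data here are assumptions.
import random

random.seed(3)

DIAMETERS = [100, 150, 200, 250, 300]                        # mm, assumed catalogue
UNIT_COST = {100: 8, 150: 11, 200: 16, 250: 23, 300: 32}     # $/m, assumed
DEMAND = [200, 150, 300, 120]                                # required diameter per pipe

def fitness(design):
    cost = sum(UNIT_COST[d] for d in design)
    # Heavy penalty for any pipe below its hydraulic requirement.
    penalty = sum(100 for d, need in zip(design, DEMAND) if d < need)
    return cost + penalty

def evolve(pop_size=30, gens=60):
    pop = [[random.choice(DIAMETERS) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        new = [min(pop, key=fitness)]                        # elitism
        while len(new) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)     # tournament selection
            p2 = min(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, len(DEMAND))           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                        # mutation
                child[random.randrange(len(child))] = random.choice(DIAMETERS)
            new.append(child)
        pop = new
    return min(pop, key=fitness)

best = evolve()
```

The penalty term steers the search toward feasible designs, after which selection pressure trims the remaining cost; real studies replace the penalty with a hydraulic solver.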

Using Artificial Neural Networks
In 2009, Mahmut Firat used hybrid techniques to build accurate water-consumption models, for example, generalized regression neural networks, feed-forward neural networks and radial basis neural networks. Efficiency and correlation coefficients were calculated for all models, and the author concluded that GRNN is the best technique [116]. In 2010, H. Md. Azamathulla studied how to estimate pipeline scour depth, a process that was hard and complex in the past and lacked accurate modeling results. The author studied this topic using genetic programming, and the results were more effective than those obtained using ANN [117]. In 2016, Randal S. Olson used Python software to generate a tool that optimizes pipelines automatically, implementing an open-source program called the Tree-based Pipeline Optimization Tool (TPOT). The software showed significant machine-learning-based improvement in complex pipelines without sacrificing classification accuracy [118].

Using Particle Swarm Optimization
In 2006, C. R. Suribabu tested the PSO technique for designing a water-distribution pipeline network. The result was more efficient and accurate than other optimization methods and required fewer objective evaluations [119]. PSO is the most used technique; fifteen studies were collected in this state-of-the-art paper. In 2007, Idel Montalvo improved the method of tackling optimal design problems for water-supply systems using PSO; with two case studies, in Hanoi and New York City, the new method achieved good results and many advantages [120]. In 2008, M. H. Afshar did research on the exploration and exploitation of stormwater: exploration is the ability to predict and search for water coming from storms in hard weather, while exploitation is refining solutions to obtain one better than the previous solutions. Two options were tested for the rebirthing mechanism, either clearing or keeping the memory of the rebirthing particles; the results showed that the new mechanism improved the performance of the PSO algorithm without extra computational effort [121]. Also in 2008, Alexandre M. Baltar presented a new water-resource management application based on multi-objective PSO. He noted that the large variety of problems in water-resource management needs powerful optimization tools, and mathematical programs are among the best ways to deal with them; PSO has been found to perform very well across a wide spectrum of optimization problems. Multi-objective PSO (MOPSO) was used in three applications: a test function, multipurpose reservoir operation and selective withdrawal.
The result of this research is much simpler than the old applications [122]. In 2009, Jagdish Chand Bansal made two mathematical models based on PSO to obtain the optimal design of a water-distribution-network system (WDNS), one called a branched network and the other a serial network. PSO was used to obtain the minimum cost of the WDNS, and the author suggested that PSO could be used in generalized WDNS [123]. Sedimentation problems have recently consumed much money and effort. In 2010, Asghar Azadnia developed an application based on MOPSO to help solve these problems, building a model that optimized two objectives, demand points and sediment removal. He performed a case study in north Iran, where people suffer significant storage loss because of the high rate of sedimentation [124]. In the same year, Idel Montalvo studied multi-criteria and multi-objective optimization problems in water distribution systems, taking human interaction into consideration. He concluded that the same approach could also be applied to other optimization problems using the same technique, and he used the PSO algorithm to improve the ability to find the Pareto front [125]. In 2011, Abbas Afshar published a paper about the automatic calibration of a water-quality model; comparing the automatic model with field data showed very close results [126]. In 2012, A. Sedki noticed that designers face many non-linear optimization problems involving complicated, implicit issues that make the problem harder, and that many researchers have shifted their focus to meta-heuristic approaches instead of traditional methods. He concluded that PSO-DE (particle swarm optimization differential evolution) is promising and solves WDS problems with high efficiency [127]. In 2014, Riham Ezzeldin used integer discrete PSO to optimize the design of a water distribution network.
This research aimed to minimize the total cost. A new boundary condition called the billiard boundary was introduced and tested with different population sizes; the results show that the application is more effective than other techniques, for instance in reducing pipe cost and the number of function evaluations [128]. In 2017, Yong Peng studied and presented multi-core parallel particle swarm optimization (PPSO) based on a combination with the Fork/Join framework, concluding that the PPSO algorithm performed better than the traditional one; the water-supply system will improve with more effort and work in this area [129]. Recycling water will be one of the main aims of countries, especially water-poor ones. In 2018, Hanliang Fu discussed using PSO to help designers recycle water. Public acceptance is a huge part of any recycling project, and social opinion and public attitude are critical; the author showed how citizens' regard for recycled-water use makes it imperative to popularize this technology [130]. In 2020, Yazid Tikhamarine compared the Harris hawks optimizer and PSO. Rainwater is a great source of water in many countries, and the author used the two methods to model rainfall and runoff. Developing an accurate model was hard and problematic for engineers, with many limitations in the models affecting accuracy; PSO proved a high level of accuracy in runoff values [131].

Using Support Vector Machine
In 2009, Xiang Yunrong studied water quality with an application called LS-SVM on the Liuxi River in Guangzhou. Every country's purification infrastructure depends on the quality of its water. The author used a hybrid technique combining SVM and PSO to reach the extreme minimum value; testing the technique on a river showed high efficiency in predicting water quality [132]. In 2009, Zong Woo Geem used PSO to search for water network designs. This research focused on non-linear, constrained and multi-modal problems; ant colony and frog-leaping algorithms were also tested and applied to four benchmarks, with good results [133]. In 2014, Ch. Suryanarayana used SVM to predict the groundwater level. He said the situation was very complicated because of the factors driving the groundwater level, which he described as very non-linear data; by combining two AI techniques, SVM and ANN, he obtained better results, and the accuracy increased massively [134]. Also in 2014, Reza Mohammad used SVM to predict the water-quality index, as many serious problems arise from poor water quality. In this research, the author used 17 points, took two readings a month for 15 months and collected eleven water-quality variables. He used two ANN methods, feed-forward back propagation and radial basis function, and concluded that using SVM and FFBP gives good, accurate results for predicting water quality [135]. In 2014, Qi Feng studied rainfall forecasting; as he said, predicting the amount of rainfall is very important because it affects water-management efficiency. Using a wavelet support vector machine (WA-SVM) gave good results that can be applied successfully, providing high accuracy and reliable solutions [136]. In 2017, Seyed Amir Naghibi did research discussing the issue of water scarcity, with a study plan based on applying SVM, random forest (RF) and GA.
He produced a number of models that ran and produced groundwater-potential maps; the results showed the greater importance of altitude, slope angle and TWI in groundwater assessment [137]. In 2020, Taher Rajaee studied the difference between single and hybrid AI techniques for predicting water quality in rivers, reviewing a total of 51 journal papers published from 2000 to 2016 [138]. In the same year, Jayashree Chadalawada wrote a review paper about rainfall-runoff models [139]. Kennedy C. Onyelowe studied how to predict water quality using GP, which will help short-term agricultural reservoirs affected by rainfall-runoff [140]. Ali Danandeh, in 2020, validated models based on data from 1930 to 2017 (88 years) using GP and gene expression programming (GEP) to predict the water cycle in Turkey and the amount of water [141]. Mehdi Jamei used GP to detect dissolved solids (DS) in water [142].
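The LS-SVM idea mentioned above can be illustrated in simplified form: the sketch below fits a kernel regressor by solving the regularized linear system (K + I/γ)α = y and predicting with the kernel expansion. The toy water-quality samples, kernel width and γ are assumptions, not data from the cited studies.

```python
# Simplified LS-SVM-style kernel regression on assumed water-quality data.
import math

def rbf(a, b, width=0.7):
    """Gaussian (RBF) kernel; the width is an assumed hyperparameter."""
    return math.exp(-((a - b) ** 2) / (2 * width ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit(xs, ys, gamma=100.0):
    """Solve (K + I/gamma) alpha = y and return the kernel predictor."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

# Assumed "dissolved oxygen vs. normalized temperature" samples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [9.0, 8.2, 7.1, 6.4, 5.9]
predict = fit(xs, ys)
```

The regularization term 1/γ trades training accuracy for smoothness, which is what makes the approach robust on the noisy field measurements these studies use.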

Natural Gas Networks
Although the storage, pipelines and distribution networks of natural gas are not common infrastructure projects in many countries, natural gas is a fast-growing industry. AI techniques were used in many topics under this subject, such as the delivery, extraction and rationalization of gas consumption.
A hybrid technique of FS and PSO has been in use since 2020. Wu Liu focused on gas infrastructure using a hierarchical design, concluding that the technique could be used in this type of design and that gas companies can obtain great benefits from the research [143]. PSO has been used widely: researchers predicted promising locations for gas extraction using PSO-ELM [144]. Trying to minimize operational cost and obtain optimal power dispatch, Muhammad Yousif concluded that PSO could be used in this area with high efficiency [145]. Alaa Farah studied natural gas turbine operating costs and how to minimize them and, simultaneously, CO2 emissions [146]. Predicting production amounts with this technique has recently become popular in China for the short and medium terms [147]. S. Askari used a hybrid technique of FS and PSO in the field of gas networks; a total of 4258 models, one per customer, have been used in this area of study in the last three years [148]. Another hybrid technique combined ANN and PSO in two studies addressing benzene detection and forecasting and prediction models [149,150]. Deyan Wang studied gas consumption using a new hybrid technique so that the required amounts can be delivered to the population without delays or shortages [150].
Finally, using AI and its different techniques will help researchers and countries improve gas consumption and gas-detection efficiency. That will help them improve the industry and infrastructure networks.

Communication Networks
Communication networks have become one of the essential infrastructure projects all over the world. There are many types of communication networks: wired (land telephone networks, TV cable networks), wireless (radio/TV networks, cellular phones, telegraph, satellite communications) and fiber optic (internet, phone, TV networks). AI techniques were successfully used to develop, improve and maintain the performance of these networks (Figure 13).

Figure 13. Communication networks topics.

Using Expert System
In 2009, researchers made an expert system for detecting rigging in telecommunications; such a system can help large organizations and shows how an expert system can be built from old data collected over time. The authors concluded that the human factor is the weak link in the system and could cause problems in this sensitive field [151]. In 2011, Dmitry made a model for locating a WIFI transmitter using the expert system technique; it may soon enable designers to build network towers in places that increase site efficiency. As mentioned before, mobile communication is among the most important things in modern life, and every improvement helps make the bigger picture more efficient [152]. In 2019, Krešimir Vidović used data collected from a public mobile network to present a new way of assessing urban multimodal mobility. The paper shows how to benefit from these data; it also described the mobile network and addressed its issues and topology. The methodology for modeling is presented in detail using four urban multimodal mobility indicators (distance, number of trips, travel time and speed), and the author showed that the same expert system can be used regardless of a city's size and population [153].

Using Artificial Neural Networks and Fuzzy Systems
In 2017, Mingzhe Chen used neural networks in his research on machine learning for wireless networks. In the author's view, the next generation of wireless networks should have certain properties, such as intelligently managing low-latency communications. The paper is a comprehensive tutorial on applying machine learning, based on neural networks, to this topic; investigating its many applications, including wireless virtual reality and mobile edge caching, will be a great step toward the future [154]. In 2015, Kostiantyn Polshchykov published a paper in which he used a hybrid technique combining a neural network and a fuzzy system to predict time intervals over the telecommunications channel; he conducted experimental research to determine the weights of the neuron links [155]. In 2010, Nasrin Abazari Torghabeh used fuzzy logic in the management of network stations. According to the results, the system proved its efficiency in terms of the load distribution, lifetime and residual energy of the network, which could help designers improve network stations in the future [156].
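To make the fuzzy-logic idea concrete, the sketch below shows a toy Mamdani-style inference step that ranks a station's priority for offloading based on its load and residual energy. The membership ranges, rules and normalization are invented for illustration and are not taken from [156]:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def station_priority(load, residual_energy):
    """Both inputs normalized to [0, 1].
    Rule 1: load is high OR energy is low  -> priority high (OR -> max).
    Rule 2: load is low AND energy is high -> priority low  (AND -> min)."""
    load_high = tri(load, 0.4, 1.0, 1.6)
    energy_low = tri(residual_energy, -0.6, 0.0, 0.6)
    load_low = tri(load, -0.6, 0.0, 0.6)
    energy_high = tri(residual_energy, 0.4, 1.0, 1.6)
    high = max(load_high, energy_low)
    low = min(load_low, energy_high)
    # Weighted-average defuzzification over singleton outputs {high: 1, low: 0}.
    if high + low == 0:
        return 0.5
    return high / (high + low)
```

For example, a heavily loaded, nearly drained station scores near 1.0, while a lightly loaded, fully charged one scores near 0.0, which is the kind of ranking a load-distribution scheme could act on.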

Using Particle Swarm Optimization
In 2008, Papagianni used particle swarm optimization, one of the most common techniques, to design communication networks, and then compared the results with those of related evolutionary techniques such as genetic algorithms. He was able to successfully optimize both the layout cost and the average packet-delivery delay [157].
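A minimal PSO loop of the kind used in such design problems can be sketched as follows. The `network_cost` objective is a stand-in toy function combining a "layout cost" term and a "delay" term, not Papagianni's actual network model:

```python
import random

random.seed(0)  # reproducible run

def pso(objective, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` with a basic particle swarm: each particle is
    pulled toward its own best position (pbest) and the swarm's best (gbest)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: weighted sum of a "layout cost" term and a
# "packet-delivery-delay" term (both toy quadratics, not a real network model).
def network_cost(x):
    layout = sum(xi ** 2 for xi in x)
    delay = sum((xi - 1.0) ** 2 for xi in x)
    return 0.6 * layout + 0.4 * delay

best, best_val = pso(network_cost, dim=3, bounds=(-5.0, 5.0))
```

Swapping the velocity-update rule for crossover and mutation operators yields the genetic-algorithm baseline against which such PSO designs are typically compared.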

Using Other AI Techniques
More studies discussed using AI in communications in general. Kan Zheng, Tomoyuki Otani, Huimin Lu, Albert Banchs, Zhuang Chen and Jiayi Lu found that AI can improve communications massively; this level of performance cannot be reached without it, since AI can compensate for shortages in human resources, time and money. The new 5G networks will benefit greatly from this improvement. The authors recommend many future studies to exploit the advantages of AI and avoid its disadvantages. Mobile edge computing will take huge steps forward once researchers carry out continuous, in-depth research in this field; Zhuang Chen proved that with a case study [158–164].
All of these surveys and papers are state of the art, and most of them proved the great benefits of using AI techniques to improve the efficiency of communication networks, which in turn benefits a country's infrastructure.

Discussion
The study results could be summarized as follows:
Implementing AI techniques in infrastructure projects has rapidly increased in the last two decades, as illustrated in Figure 14.
The number of publications that use AI in infrastructure depends on the considered subject. As illustrated in Figure 15, transportation projects constituted more than one-third of the published studies, electrical power projects and water treatment and distribution projects constituted about one quarter each, and communication and natural gas projects constituted the rest.
Table 1 and Figure 16 show the distribution of the collected publications with respect to subject and publishing year. AI techniques were adopted in the different infrastructure subjects starting in different years: transportation, water and electrical projects have used AI since the early 1990s, communication projects have used it since the middle of the 2000s, and natural gas projects began using it only five years ago. Table 1 and Figure 16 also illustrate the boom in the use of AI techniques in infrastructure projects regardless of the subject.
PSO is the most used technique in infrastructure projects; it appeared in about 25% of the collected database. ANN was second, with 17%; SVM, ES and FS shared third place, with about 13% each; and GA and GP came in last place, with about 9% each (Figure 17).
Figure 18 and Table 2 present the distribution of the collected publications with respect to AI technique and publishing year. AI techniques entered infrastructure projects in different years: ES and FS were the earliest, implemented at the beginning of the 1990s; GA, GP and ANN followed about five years later, in the middle of the 1990s; and SVM and PSO were the latest, used since the middle of the 2000s. However, all AI techniques are still implemented today, even the 30-year-old ones like ES.

Conclusions
This review aimed to collect, sort, summarize and comment on all of the available publications concerning the implementation of AI techniques in infrastructure projects. A total of 159 studies published between 1989 and 2021 were collected; they were classified into five infrastructure subjects and seven AI techniques. The results could be concluded in the following points:
The implementation of AI techniques in infrastructure projects is increasing exponentially, as illustrated in Figure 14.
There are no obsolete AI techniques; even the earliest ones are still used in research. It is a matter of selecting the right technique for the considered problem.
The new generation of AI techniques consists mostly of hybrid ones made by merging two or more traditional techniques, which widens their scope and improves their potential.
The gap study shown in Table 3 and Figure 19 suggests many potential future studies. For example, there is a great shortage of studies implementing AI techniques in the natural gas industry, especially the GA, GP, SVM and ES techniques. The electrical power industry could benefit from implementing the GA and GP techniques besides the already used techniques. Finally, there are still opportunities to increase the implementation of the GA, ANN, FS and ES techniques in the water networks industry.
A similar review of the implementation of AI in civil engineering is recommended as a future study.
