The Efficacy and Superiority of Expert Systems in Reservoir Engineering Decision-Making Processes

Abstract: In the process of making a critical decision in reservoir engineering, we often find ourselves in a quandary. As in any other scientific or technical field, when we must make a critical decision at a juncture, we cannot proceed on gut feeling alone, but rather must figure out what knowledge and information is lacking. In generating the missing knowledge and understanding, the depth and the speed of the search surface as two critical parameters. In other words, a shallow search conducted in a short period of time will usually not produce the missing information and knowledge and, more often than not, may provide misguidance. When a large volume of information sources is reviewed and the missing knowledge is generated using unbiased deductive methodologies, one can make an informed decision based on facts rather than intuition. Achieving such a result requires fast algorithmic protocols, so that the breadth of the search domain need not be sacrificed and the desired solution can still be generated. In this paper, it is shown how desirable reservoir engineering decisions can be reached in a timely manner by choosing the most appealing course of action. It is true that in reservoir engineering applications the decision-making process may involve a blend of intuition and scientific, rational thinking, critical factors such as blind spots, and the use of conventional methodologies that make decision-making hard to fully operationalize or get a handle on. Fortunately, there are mathematical and computational tools to help scientists and engineers consistently make correct decisions, which include gathering as much information as possible and considering all possible alternatives (as in combinatorial analysis protocols). 
The tool (model) proposed in this paper for making critical reservoir engineering decisions is a new computational platform/protocol that exploits the advantages both of mathematically developed formulations and of models that are based on the collected data/information. It is furthermore shown that the analyses conducted and the critical decisions reached represent more thorough and far-reaching solutions that are structured using less computational overhead, thereby increasing the quality of the decision even further.


Introduction
Reservoir engineering is one of the more important branches of petroleum engineering. While reservoir engineering studies the existing drive mechanisms and their efficiencies, its overall objective is the optimization of the processes that are encountered in the extraction of hydrocarbons. Most reservoir engineering principles are bounded by the geological characteristics of the hydrocarbon reservoirs. The very first step in reservoir engineering involves estimating the location of the external boundaries, and hence the size, of the reservoir. The next step is to estimate the amounts of hydrocarbons that exist in the reservoir. The third step encompasses optimization of the recovery processes to make the entire operation economically sound and feasible. In all these steps, reservoir engineers and geoscientists utilize various numerical and analytical tools, including stochastic methodologies: first, to characterize the reservoir in terms of its boundaries and intrinsic properties such as porosity, permeability, thickness, and saturation distributions. During this first step, other macroscopic features of the reservoir are examined, including its compartmentalization characteristics, the presence of sealing or non-sealing faults, whether the reservoir is layered, and whether it is naturally fractured. Conclusions reached in this first step will often bring up further questions about the reservoir, including the principal flow directions, the micro-pore and macro-pore characteristics of dual-porosity reservoirs, and the existence or nonexistence of crossflow in layered systems. Lack of data, especially during this first step of reservoir analysis, presents some of the most critical issues that operators face. 
What makes the problem even more complex and demanding is the need for high levels of accuracy in the data structure, so that it can be used confidently to find answers to the questions that surface during the second and third steps. Most of the time, the studies in the third step are conducted in a dynamic state, whereas the studies conducted during the first and second steps are often carried out in a static state. Therefore, in the third step it is of paramount importance to have a good idea of the reservoir mechanisms/processes that are active. It should be clearly recognized that studies conducted during these three steps must be crafted in an integrated manner, so that internal checks and balances are in place to ensure that the findings of each step are in complete circular (recursive) agreement with each other. Such a recursive agreement requirement of reservoir engineering studies implies that there may be several combinations of reservoir and project design parameters indicating that the solution sought is around a local extremum. At that stage, it will be necessary to implement some Monte Carlo experiments, which belong to a broad class of computational algorithms that rely on repeated random sampling to assign uncertainty/certainty levels to numerical results. When the search domain is expanded, or the number of scenarios investigated is increased, it most often becomes necessary to conduct a prohibitively large number of simulation studies to cover the entire search domain. At this juncture, it is important to note that a fast proxy model, introduced to reduce the computational footprint, receives its training data set from a high-fidelity model during the development stage. Therefore, in order to achieve a strong coupling between a high-fidelity model and the proxy model, an effective handshaking protocol must be in place. Three examples, which highlight the efficacy of such a hybrid platform, are introduced in later sections.
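The Monte Carlo experiments mentioned above can be sketched in a few lines. The volumetric recovery formula, the assumed 70% oil saturation, and all parameter ranges below are illustrative assumptions, not values from this paper; the point is the repeated random sampling that turns parameter uncertainty into P10/P50/P90 brackets on a numerical result.

```python
import random

def recovery_estimate(porosity, thickness_m, area_km2, recovery_factor):
    """Toy volumetric estimate of recoverable oil (illustrative units)."""
    bulk_volume = area_km2 * 1e6 * thickness_m      # rock volume, m^3
    oil_in_place = bulk_volume * porosity * 0.7     # assume 70% oil saturation
    return oil_in_place * recovery_factor           # recoverable volume, m^3

random.seed(42)
samples = []
for _ in range(10_000):
    # Repeated random sampling over the uncertain reservoir parameters.
    phi = random.uniform(0.10, 0.25)    # porosity, fraction
    h = random.uniform(10.0, 30.0)      # net pay thickness, m
    a = random.uniform(5.0, 15.0)       # drainage area, km^2
    rf = random.uniform(0.15, 0.40)     # recovery factor
    samples.append(recovery_estimate(phi, h, a, rf))

# Sort the outcomes and read off uncertainty brackets.
samples.sort()
p10, p50, p90 = (samples[int(len(samples) * f)] for f in (0.10, 0.50, 0.90))
print(f"P10={p10:.3e}  P50={p50:.3e}  P90={p90:.3e} m^3")
```

The same loop structure applies regardless of how expensive the inner model is, which is exactly why a fast proxy is needed when the inner model is a full simulator.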

Decision-Making and Decision Quality
In our daily lives, we constantly make decisions. In psychology, decision-making is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options [1]. Decision-making can be regarded as a problem-solving activity yielding a solution deemed to be optimal, or at least satisfactory [2]. While some of these decisions are good ones, others may result in failures. Moreover, the stage of project implementation at which the decision is made is also critical. Sometimes, although the decision is good, its timing can be off, and the failure of the project outcome becomes inevitable. Figure 1 below shows four possible quadrants in which one can locate the outcomes of the decisions that have been made. The upper-right quadrant shows the region in which the good decisions are mapped out. While a good decision risks little for the opportunity gained, a bad decision can risk a lot for the opportunity to gain just a little. The lower-left quadrant, on the other hand, represents the decisions that take us to complete failures. When we cannot objectively assess the risks and have a poor perception of time, the project outcome is destined to fail miserably. Between these two diametrically opposite quadrants, a decision-making tool (which can also be a model) is needed to establish a balance. It is obvious that the more information the model must consider, the longer it will take to reach a decision. Again, fast algorithmic protocols such as artificial expert systems can help by, on the one hand, generating and processing a larger volume of information to improve accuracy and, on the other, because of their high speed, making decisions in a timely manner.
As a process, decision-making involves making choices by identifying a decision, collecting information, and assessing the viability of possible alternative resolutions. In searching for alternative solutions, one should use lateral thinking skills so that new paths or courses of action can be constructed. 
An informed and effective analysis of the results for different alternatives (scenarios) should provide guidance towards a solution, which is identified as the one with a higher potential for reaching the desired goal. As a final step of the decision process, it will be necessary to examine the results of the decision and reach a conclusion whether the decision has met the identified need or not. If additional questions about the validity of the results still exist, it may become necessary to reiterate some of the steps of the decision-making process to modify the previous decision or to make a completely new decision (e.g., exploring some new alternatives).
The decision quality concept defines the framework of a good decision. It is generally accepted as an extension of decision analysis to have a set of perceptions and tools that produce more clarity about the best choice in an uncertain and dynamic environment. Therefore, "decision quality" represents the quality of a decision at the moment the decision is made [3]. Decision quality also describes the process that leads to a high-quality decision [4]. It is important to reiterate that the quality of a decision depends on the quality and quantity of the information to inform the decision. Quality in information is achieved when the information is meaningful and reliable and reflects properly all uncertainties. Therefore, a properly implemented decision quality process enables the achieving of the goal under uncertain and complex scenarios. Figure 2 shows possible decision outcomes when mapped onto decision quality and onto risk and uncertainty management.

Significance of Data-Driven Decisions
It is only fair to ask why we need data-driven decisions in reservoir engineering. The most direct answer to this question is the inherently complex nature of reservoir systems. From the simplest to the most difficult, decisions made in reservoir engineering applications are thoroughly based on data. In common practice, when discussing data-driven decisions, it is most often assumed that the data is already available. Unfortunately, this is not a realistic assumption for reservoir engineering applications. For example, in terms of property distributions, one can generate voluminous data sets that will help in studying the performance of a reservoir for different scenarios. As the number of scenarios (data sets) is increased, the decision reached is expected to be more comprehensive and more instructive in terms of the key performance indicators (KPIs). It is well recognized that every single reservoir engineering problem is an ill-conditioned one that needs to be converted to a well-posed problem. In other words, in reservoir engineering analysis problems, the number of unknowns far exceeds the number of equations available. Therefore, it will always be necessary to convert this ill-conditioned problem to a well-posed problem before attempting to obtain a solution. Then, within inherently imposed specific bounds of the domain, many scenarios can be generated and viewed in a successive manner. The generated results are then sorted out using a formal analytical protocol (such as Monte Carlo simulation analysis) to quantify the uncertainties. In order to increase the precision of such an analysis, it will then be necessary to examine a sufficiently large number of scenarios. In order to satisfy this precision condition more effectively, it is desirable to use proxy (AI-based) models, so that the computational overhead can be reduced by orders of magnitude as compared to the computational overhead of the numerical models. 
It should also be clearly recognized that as the number of scenarios investigated increases, the solution generated will approach the global extremum so that the margins towards the nonunique nature of the solution will be drastically compressed.
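A minimal sketch of the proxy idea discussed in this section: a cheap surrogate is trained on a handful of runs of an expensive model and then swept over tens of thousands of scenarios. The "high-fidelity" function below is a toy stand-in, and the piecewise-linear surrogate is only one of many possible proxy choices (the proxies discussed in this paper are AI-based); the structure to note is the two-step workflow.

```python
import bisect
import random

def high_fidelity(perm_md):
    """Stand-in for an expensive simulator: recovery factor vs. permeability."""
    return 0.10 + 0.25 * perm_md / (perm_md + 50.0)

# Step 1: a small training set generated by the high-fidelity model.
xs = [1.0, 10.0, 25.0, 50.0, 100.0, 200.0, 400.0]
ys = [high_fidelity(x) for x in xs]

def proxy(x):
    """Piecewise-linear surrogate fitted to the high-fidelity runs."""
    i = bisect.bisect_left(xs, x)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Step 2: sweep many scenarios through the cheap proxy instead of the simulator.
random.seed(0)
scenarios = [random.uniform(1.0, 400.0) for _ in range(50_000)]
errors = [abs(proxy(x) - high_fidelity(x)) for x in scenarios]
print(f"max proxy error over 50,000 scenarios: {max(errors):.4f}")
```

Each proxy call costs a handful of arithmetic operations regardless of how expensive the underlying simulator is, which is what makes sweeping very large scenario counts feasible.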

Decision Making in Reservoir Engineering While Facing Uncertainties
In reservoir management, the most critical problem is geological uncertainty. In today's practices of upstream petroleum engineering technologies, reservoir engineering principles and fundamentals are instrumental in making sound decisions towards the development and production of a hydrocarbon reservoir. Most of the time, these decisions need to be made in the presence of limited data.
Fundamentally, the basic relation for reservoir evaluation for a given process using a deterministic protocol can be expressed as [5]:

q(t) [or p(t)] = Ω(D, R)  (1)

In Equation (1), the following terms are identified:
q(t): Production/injection rate at point x, y, z at time t
p(t): Pressure at point x, y, z at time t
Ω(D, R): Mathematical flow model with the relevant "built-in" physics and thermodynamics
D: Flow domain characteristics (e.g., fluid types, spatial and directional dependencies)
R: Some predefined production/injection mechanism and/or recovery process, including the project design parameters (e.g., well geometry).
Equation (1) may appear before a reservoir engineer in three different forms. If Equation (1) is used to solve for q(t) [or p(t)] using the available information on Ω(D, R) as it appears on the right-hand-side, then this solution is known as the "forward solution". The accuracy of the forward solution depends on the degree of accuracy of the representation of the flow mechanisms in terms of physics and thermodynamics, as well as on the accurate description of the spatial distribution of the domain characteristics, the design parameters imposed on the reservoir system in terms of boundary and initial conditions, and the key features of the project implemented. Equation (1) can also be rearranged to solve for D, which is known as "history matching", so that a reservoir can be characterized. This inverse form of the solution of Equation (1), as expected, inherently suffers from non-uniqueness. The second inverse solution involves the design of the project parameters. For example, if a desired q(t) is specified in Equation (1), then the same equation can be rearranged to solve for the project design parameters that will effectively generate the expected q(t). Like any inverse solution, this second inverse solution is also prone to non-uniqueness problems. In solving Equation (1), if the right-hand-side of the equation, Ω(D, R), is fully known, solving for the left-hand-side is straightforward, and such a solution protocol is known as a deterministic process. The main difficulty faced here is how certain one is of the entries on the right-hand-side, namely Ω, D, and R. If Ω, D, and R are only partially known, several scenarios are created for their unknown entries, and the problem is solved for each scenario, then the solution protocol is called stochastic. The "hybrid modeling" that will be discussed later incorporates deterministic, stochastic, and proxy modeling approaches in a synergistic manner.
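The forward solution and the first inverse solution (history matching) of a relation of the form of Equation (1) can be illustrated with a toy model. The flow model omega() below is an invented stand-in for Ω(D, R), with a single permeability value playing the role of D and a rate control playing the role of R; a real history match over many unknowns would, of course, face the non-uniqueness discussed above.

```python
import math

def omega(perm, rate_control, t):
    """Toy flow model standing in for Ω(D, R): pressure response for a given
    domain characteristic (perm) and design parameter (rate_control) at time t."""
    return rate_control / perm * math.log(1.0 + t)

# Forward solution: D and R are known, solve for p(t).
true_perm, rate = 120.0, 300.0
times = [1.0, 2.0, 5.0, 10.0, 20.0]
observed = [omega(true_perm, rate, t) for t in times]

# Inverse solution 1 (history matching): recover D from the observed p(t)
# by minimizing the mismatch over a grid of candidate permeabilities.
def mismatch(perm):
    return sum((omega(perm, rate, t) - p) ** 2 for t, p in zip(times, observed))

candidates = [k / 10.0 for k in range(500, 2001)]   # 50.0 .. 200.0 md
best = min(candidates, key=mismatch)
print(f"matched permeability: {best:.1f} md (true value: {true_perm} md)")
```

The second inverse form (project design) has the same structure with the roles swapped: q(t) is fixed as a target and the search runs over candidate values of R instead of D.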
There are four main sources of uncertainty in reservoir characterization: the quantity and quality of the existing data, geological understanding, scaling-up, and mathematical representations. Unfortunately, out of these four sources of uncertainty, only one, the uncertainty originating from mathematical representation, is quantifiable. Yet, to reduce the uncertainties, we tend to purchase more seismic lines to be more certain of the reservoir volume, collect more core samples to be more certain of the average porosity and permeability values, do more petrophysical analysis to be more certain of the saturations, buy more gamma logs to be more certain of the net-to-gross thickness ratio, and do more simulation studies to have more certainty regarding the recovery factor. Table 1 shows the data categories that one needs in order to have a good grip on the solution of Equation (1). In the next section, a general outline of a hybrid computational platform that is assembled to obtain the forward and the two inverse forms of the solutions to Equation (1) will be discussed.

A Hybrid Computational Platform
The forward and the associated two inverse forms of Equation (1) can be solved using conventional formulations and applicable solution protocols. However, these solutions typically prove to be prohibitively expensive and slow in terms of personnel requirements and computational overhead encountered. By coupling proxy-based algorithms with conventional numerical and/or analytical solution procedures on a common platform, it will be possible to generate rapid and much less expensive solutions with higher levels of accuracy towards identifying the uncertainty. Figure 3 shows a schematic representation of the computational structure of the platform that will be discussed here.
The basic tenet of the computational platform shown in Figure 3 is to capitalize on the advantages of both high-fidelity models and artificial intelligence-based models by coupling them in computations in a synchronous manner. As a result of such an effective integration, it will be possible to achieve high-accuracy, low-cost, low-energy, and high-speed solutions that are tractable. The seamless integration of these two radically different modeling technologies with the help of well-established hand-shaking protocols adds a uniquely assembled powerful tool to the capabilities of reservoir engineers and geoscientists. On the left-hand-side of Figure 3, a suite of high-fidelity reservoir models is shown. These models are developed with the following functionalities:
• Rectangular and radial-cylindrical grid systems: Both rectangular and radial-cylindrical grid systems are incorporated and supported in these models so that they become adaptable to problems with varying physical boundaries and boundary conditions.
• EOR process models: These models include thermal EOR models, chemical EOR models, and miscible gas injection models.
On the right-hand-side of Figure 3, a library of artificial-neural-network-based expert systems is shown. These modules can be considered three separate toolboxes. Figures 4-6 show the expert tools that are included in the enhanced oil recovery process, well test analysis, and general reservoir engineering toolboxes, respectively. It should be noted that the tools highlighted in each of these figures should not be considered the final catalogue of each toolbox, as more tools can be added as they become available. All the tools assembled honor the information exchange protocols of the hybrid platform discussed earlier.
Making the decision on the suitability of an EOR process to a specific field is generally accepted as the very first step of paramount importance. There are typically a multitude of options in making this important decision. The expert tools highlighted in Figure 4 are designed to help make the best decision on an EOR process; when some representative reservoir parameters are assigned, they can generate not only a checkmark, as required by a rule-of-thumb approach, but also some expected recovery performances in quantified form. Thus, during the screening stage, it will be possible to visit several EOR processes and make an informed decision about the suitability of the most promising process. Once such a decision is in place for a specific EOR methodology, then in the next step a large number (tens of thousands) of proxy model runs can be conducted by varying reservoir properties and project design parameters to generate some certainty brackets on the project performance indicators. If needed, the information on optimized project design parameters, together with the reservoir parameters used, is passed to the relevant high-fidelity model located on the left-hand-side of Figure 3, and the high-fidelity model is run to verify the results from the proxy model. As explained earlier, the coupled approach helps the EOR project design engineer by narrowing the search window extensively, so that most of the unnecessary and peripheral work is eliminated. Furthermore, by rapidly studying tens of thousands of scenarios on the implementation of the project, it will be possible to quantify the overall certainty of the results more precisely. Figure 5 shows several well test analysis expert systems assembled in a second toolbox. These tools are designed for the pressure transient analysis (PTA) of hydrocarbon reservoirs with radically different macroscopic and microscopic characteristics. 
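The proxy-screening-then-verification loop described above for EOR processes can be sketched as follows. Both recovery-factor functions are illustrative stand-ins (the real platform couples trained expert systems with numerical simulators); the structure to note is that only the top-ranked scenarios from the cheap proxy sweep are handed to the expensive model.

```python
import random

def proxy_recovery(visc_cp, porosity):
    """Fast proxy (illustrative): approximate EOR recovery factor."""
    return 0.5 * porosity / (1.0 + 0.01 * visc_cp)

def high_fidelity_recovery(visc_cp, porosity):
    """Stand-in for the expensive high-fidelity simulator; here it is the
    proxy trend plus a small perturbation, purely for illustration."""
    return 0.5 * porosity / (1.0 + 0.01 * visc_cp) * (1.0 - 0.02 * random.random())

random.seed(1)
# Step 1: screen tens of thousands of (viscosity, porosity) scenarios
# through the proxy and rank them by predicted recovery.
scenarios = [(random.uniform(1, 500), random.uniform(0.1, 0.3))
             for _ in range(20_000)]
ranked = sorted(scenarios, key=lambda s: proxy_recovery(*s), reverse=True)

# Step 2: hand only the top candidates to the high-fidelity model to verify.
top = ranked[:5]
verified = [(s, high_fidelity_recovery(*s)) for s in top]
for (visc, phi), rf in verified:
    print(f"visc={visc:6.1f} cp  phi={phi:.3f}  verified RF={rf:.3f}")
```

This is the "narrowing of the search window" in code form: 20,000 proxy evaluations buy the right to run the expensive model only five times.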
Some of these PTA expert systems are designed for complex well structures such as multilateral wells, slanted wells, wells completed with multi-stage hydraulic fractures, etc. Well test analysis is a class of systems analysis. In a systems analysis application, the system under observation is perturbed (e.g., the well is shut in or put on production). The reservoir system responds to this perturbation and the resulting signals (pressures) are measured in a time series. Then, using the existing analytical solutions in an inverse mode, some key reservoir characteristics are determined. Models that are used in pressure transient analysis are always built in the same manner. The basic models assume homogeneous and isotropic reservoir characteristics. There are also analytical models that are built for double-porosity, multilayer (with or without crossflow), and composite reservoirs. In well test analysis, perhaps the most critical step is the identification of the model that will be used in the interpretation of the well test data. The supervisory model that is placed at the top of Figure 5 reviews the time series of the pressure data and looks for specific signatures (markers) so that it can decide which expert system from the toolbox should be called in to conduct the pressure transient analysis. In generating the training data for the expert tools shown in Figure 5, either available analytical models or numerical models are used. Then, the specialized expert systems are exposed to the training data to learn and discover the existing signatures recorded in the data sets. The expert systems shown in Figure 5 have the capabilities to shed more light on the reservoir characteristics than the classical analysis models. 
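The supervisory model's dispatching role can be illustrated with a simple rule-based stand-in. In the sketch below, the late-time log-log slope of the pressure response serves as the "signature"; the thresholds and flow-regime labels are illustrative, and the actual supervisory model described above is a trained network rather than hand-written rules.

```python
import math

def late_time_slope(times, dp):
    """Log-log slope of the pressure change over the late-time data."""
    (t0, p0), (t1, p1) = (times[-10], dp[-10]), (times[-1], dp[-1])
    return (math.log10(p1) - math.log10(p0)) / (math.log10(t1) - math.log10(t0))

def dispatch(times, dp):
    """Rule-based stand-in for the supervisory model: pick an expert system
    from the late-time signature (illustrative thresholds)."""
    slope = late_time_slope(times, dp)
    if slope > 0.4:
        return "linear-flow expert (hydraulically fractured well)"
    if slope > 0.2:
        return "boundary-effects expert (sealing fault)"
    return "radial-flow expert (homogeneous reservoir)"

# Synthetic infinite-acting radial flow: delta-p grows like log(t).
times = [10 * 1.5 ** k for k in range(20)]
dp_radial = [5.0 * math.log(t) for t in times]
print(dispatch(times, dp_radial))

# Synthetic linear flow: delta-p grows like sqrt(t), log-log slope near 0.5.
dp_linear = [2.0 * math.sqrt(t) for t in times]
print(dispatch(times, dp_linear))
```

In practice the trained supervisory network would examine the full pressure-derivative curve rather than a single slope, but the dispatch structure is the same.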
For example, when the expert system analyzes a faulted reservoir system, it can analyze the sealing characteristics of the fault and directional permeability values, whereas a classical model for faulted reservoirs works only for isotropic systems with completely sealing faults. The third toolbox, shown in Figure 6, is designed for generalized reservoir and production engineering methodologies and, perhaps more importantly, for integrated engineering and geosciences studies. This toolbox displays some expert systems that can be used in integrating geological, geophysical, and production data to find the most promising infill well locations in brown and semi-brown fields, and in performing history matching studies. Furthermore, some expert systems that can be used to judge the spatial and temporal variations of reservoir properties, such as relative permeability and capillary pressure, are also included. Introducing some dynamic characterization of relative permeability and capillary pressure characteristics in conventional reservoir simulation studies will increase the accuracy of the results significantly. Again, the computational platform, as sketched in Figure 3, can incorporate expert systems in a "built-in" form into the reservoir models that are based on hard computing protocols.

Examples
In this section, some examples pertaining to different expert systems that appear in the three toolboxes (Figures 4-6) are presented. The final topology of a network is typically identified by the number of input and output neurons, the number of intermediate layers, and the number of neurons on each intermediate layer. Another important component of a network is the learning algorithm used in assembling it. Searching the combination of all possible options to find the optimal network structure is an arduous task. In overcoming this difficulty, with the purpose of optimizing the network topology, typically a parallel-processing workflow is utilized [7]. This protocol is outlined in Figure 7. The overall idea of the workflow is to randomly generate many network architectures. With the help of the power of parallel processing, a series of multiple trainings on various architectures can be processed in a synchronous manner. The use of such a workflow is attractive not only because it converges to an optimized structure but also because it decreases the computational overhead.
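The parallel topology-search workflow of Figure 7 can be sketched as follows. The scoring function stands in for an actual network training run, and the layer-count and width ranges are illustrative; a thread pool is used here for simplicity, whereas real trainings would typically be distributed across processes or machines.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def train_and_score(arch):
    """Stand-in for training a network with this topology and returning its
    validation error (illustrative: favors moderate depth and total width)."""
    depth, total_width = len(arch), sum(arch)
    return abs(depth - 3) * 0.1 + abs(total_width - 60) * 0.005

def random_architecture(rng):
    """Randomly pick the number of hidden layers and neurons per layer."""
    n_layers = rng.randint(1, 5)
    return tuple(rng.randint(4, 64) for _ in range(n_layers))

# Step 1: randomly generate many candidate network architectures.
rng = random.Random(7)
candidates = [random_architecture(rng) for _ in range(200)]

# Step 2: "train" the candidate topologies in parallel and keep the best one.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(train_and_score, candidates))
best = candidates[scores.index(min(scores))]
print(f"best topology found: {best} (score {min(scores):.3f})")
```

Because the candidate trainings are independent, the wall-clock cost of the search scales down almost linearly with the number of workers, which is the attraction noted above.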
Making the decision on the suitability of an EOR process to a specific field is generally accepted as the very first step of paramount importance. There are typically a multitude of options in making this important decision. The expert tools highlighted in Figure 4 are designed to make the best decision on an EOR process, and when some representative reservoir parameters are assigned, they can generate not only a checkmark as required by a rule of thumb approach, but also some expected recovery performances in a quantified form. Thus, during the screening stage, it will be possible to visit several EOR processes and make an informed decision about the suitability of the most promising process. Once such a decision is in place for a specific EOR methodology, then in the next step a large number (tens of thousands) of proxy model runs can be conducted by varying reservoir properties and project design parameters to generate some certainty brackets on the project performance indicators. If needed, the information on optimized project design parameters, together with reservoir parameters used, is passed to the relevant high-fidelity model located on the right-hand-side of Figure 3 and the high-fidelity model is run to verify the results from the proxy model. As explained earlier, the coupled approach helps the EOR project design engineer by narrowing the search window extensively so that most of the unnecessary and peripheral work is eliminated. Furthermore, by studying tens of thousands of scenarios rapidly on the implementation of the project, it will be possible to quantify the overall certainty of the results more precisely. Figure 5 shows several well test analysis expert systems assembled in a second toolbox. These tools are designed for the pressure transient analysis (PTA) of hydrocarbon reservoirs with radically different macroscopic and microscopic characteristics. 
Some of these PTA expert systems are designed for complex well structures such as multilateral wells, slanted wells, wells completed with multi-stage hydraulic fractures, etc. Well test analysis is a class of systems analysis. In a systems analysis application, the system under observation is perturbed (e.g., the well is shut in or put on production). The reservoir system responds to this perturbation and the resulting signals (pressures) are measured in a time series. Then, using the existing analytical solutions in an inverse mode, some key reservoir characteristics are determined. Models that are used in pressure transient analysis are always built in the same manner. The basic models assume homogeneous and isotropic reservoir characteristics. There are also analytical models that are built for double-porosity, multilayer (with or without crossflow), and composite reservoirs. In well test analysis, perhaps the most critical step is the identification of the model that will be used in the interpretation of the well test data. The supervisory model that is placed at the top of Figure 5 reviews the time series of the pressure data and looks for specific signatures (markers) so that it can decide which expert system from the toolbox should be called in to conduct the pressure transient analysis. In generating the training data for the expert tools shown in Figure 5, either available analytical models or numerical models are used. Then, the specialized expert systems are exposed to the training data to learn and discover the existing signatures recorded in the data sets. The expert systems shown in Figure 5 have the capabilities to shed more light on the reservoir characteristics than the classical analysis models. 
For example, when the expert system analyzes a faulted reservoir system, it can determine the sealing characteristics of the fault and the directional permeability values, whereas a classical model for faulted reservoirs works only for isotropic systems with completely sealing faults.
The third toolbox, shown in Figure 6, is designed for generalized reservoir and production engineering methodologies and, perhaps more importantly, for integrated engineering and geosciences studies. This toolbox displays some expert systems that can be used in integrating geological, geophysical and production data to find the most promising infill well locations in brown and semi-brown fields, and in performing history matching studies. Furthermore, some expert systems that can be used to judge the spatial and temporal variations of reservoir properties, such as relative permeability and capillary pressure, are also included. Introducing some dynamic characterization of relative permeability and capillary pressure characteristics in conventional reservoir simulation studies will increase the accuracy of the results significantly. Again, the computational platform, as sketched in Figure 3, can incorporate expert systems in a "built-in" form into the reservoir models that are based on hard computing protocols.

Examples
In this section, some examples pertaining to different expert systems that appear in the three toolboxes (Figures 4-6) are presented. The final topology of a network is typically identified by the number of input and output neurons, the number of intermediate layers, and the number of neurons on each intermediate layer. Another important component of a network is the learning algorithm used in assembling the network. Searching all possible combinations of these options to find the optimal network structure is an arduous task. To overcome this difficulty and optimize the network topology, a parallel-processing workflow is typically utilized [7]. This protocol is outlined in Figure 7. The overall idea of the workflow is to randomly generate many network architectures. With the power of parallel processing, multiple trainings on the various architectures can be processed in a synchronous manner. The use of such a workflow is attractive not only because it converges to an optimized structure but also because it decreases the computational overhead. After identifying the most suitable ANN architecture, the next step involves the training of the network using the available data. Often, the entire dataset is divided into a training set, a validation set, and a testing set in a random manner. Typically, 80% of the data is used for training and 10% is used for validation purposes, with the remaining 10% of the dataset being used for testing purposes.
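The random topology generation and the 80/10/10 split described above can be sketched as follows. The candidate parameter ranges (the "Step 0" choices) and the dataset size are illustrative assumptions:

```python
import random

random.seed(1)

# Step 0 (assumed ranges): candidate network parameters to draw from.
HIDDEN_LAYER_COUNTS = [1, 2, 3, 4]
NEURONS_PER_LAYER = range(10, 201)

def random_topology(n_inputs, n_outputs):
    """Draw one candidate architecture at random: input layer, a random
    number of hidden layers of random width, output layer."""
    hidden = [random.choice(NEURONS_PER_LAYER)
              for _ in range(random.choice(HIDDEN_LAYER_COUNTS))]
    return [n_inputs] + hidden + [n_outputs]

candidates = [random_topology(30, 1) for _ in range(50)]

# 80/10/10 random split of the dataset indices.
indices = list(range(1000))
random.shuffle(indices)
n_train = int(0.8 * len(indices))
n_val = int(0.1 * len(indices))
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]
```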


Figure 7 outlines this protocol in four steps:

STEP 1: Define the number of network topologies to be searched and generate them by combining the randomly selected network parameters specified in Step 0.

STEP 2: Construct the networks and prepare the training jobs based on the architectures generated in Step 1.

STEP 3: Send the training jobs (Training Job #1 through Training Job #N) to a local scheduler or cluster server, so that multiple training jobs can be processed synchronously.

STEP 4: Compile the testing results from all the training jobs and select the ANN architecture yielding the best testing performance as the optimized model.
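Steps 1 through 4 can be sketched with Python's standard `concurrent.futures` module. Here `train_and_test` is a placeholder for a real training job, and the simulated error values carry no physical meaning:

```python
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(2)

def train_and_test(topology):
    """Hypothetical training job: trains a network with the given topology
    and returns its internal testing error. The error here is simulated;
    a real job would run a full training/testing cycle."""
    return sum(topology) * random.random()

# Step 1: randomly generated candidate topologies (input, hidden, output).
topologies = [[30, random.randint(20, 100), 1] for _ in range(8)]

# Steps 2-3: submit all training jobs to a pool so they run concurrently
# (a cluster scheduler plays this role in the paper's workflow).
with ThreadPoolExecutor(max_workers=4) as pool:
    errors = list(pool.map(train_and_test, topologies))

# Step 4: keep the architecture with the best (lowest) testing error.
best = min(zip(errors, topologies))[1]
```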

Example 1-Design of Cyclic Steam Stimulation Process
Cyclic steam stimulation (CSS) has enjoyed great success as an enhanced oil recovery (EOR) process in its broad applications in heavy oil reservoirs. It is widely implemented for its attractive economic efficacy and rapid project response [8]. A typical CSS project consists of multiple repetitive cycles of (i) steam injection period, (ii) soaking period, and (iii) production period. During each of these three phases of the CSS process, only one well that serves as an injector and producer is utilized. During the injection stage, superheated steam is injected through the well. The injection period may last 2 to 10 days. Then, the well will be shut in (typically for 5 to 7 days). During this soaking period, the purpose is to create an opportunity for the injected steam to spread itself throughout the reservoir evenly and decrease the viscosity of the heavy and viscous oil. In the third phase of the process, the same well is put on production (typically from several weeks to several months) so that the heated oil with the reduced viscosity can flow more easily towards the wellbore. These three stages of injection, soaking, and production periods will be repeated in a cyclic manner if production rates at the end of each cycle are economical. From this short discussion, it should be clear that in the design of the process it will be important for the design engineer to plan the duration of each stage of the process and the total volume of the steam to be injected. It is not good practice to design a CSS project with fixed time periods of injection, soaking, and production stages. The disadvantage of such a pre-fixed time schedule is that the oil production rate may drop to an extremely low value when the cycle switches because the energy introduced to the reservoir is not enough to sustain such a long production period as designed, or conversely, the timing of the cycle switching could prove to be premature since the oil production rate is still high. 
This highly nonlinear behavior of the system makes it more challenging for the expert system to learn. To address this issue Sun and Ertekin suggest the development of a supervisory ANN, which can be used as a classification tool based on the number of cycles [7]. In other words, a catalogue of sub-ANN models is developed for a fixed number of cycles to predict the oil production rate profiles for projects with different numbers of CSS cycles. At the first phase of the design process, the supervisory ANN will predict the number of cycles that will be necessary for the specific reservoir being studied. Based on the predicted number of cycles, the corresponding sub-ANN will be used to study and understand the performance of the same reservoir. The general workflow of the expert system is defined in Figure 8.
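The supervisory-ANN/sub-ANN cascade described above amounts to a dispatch pattern: a classifier picks the cycle count, and a catalogue of per-cycle-count models makes the prediction. The classification rule and sub-model outputs below are invented placeholders, not the trained networks of [7]:

```python
def predict_cycles(reservoir_inputs):
    """Stand-in for the supervisory ANN: classifies the expected number
    of CSS cycles from the reservoir/design inputs (hypothetical rule)."""
    return max(1, min(10, round(reservoir_inputs["thickness_ft"] / 20)))

def make_sub_model(n_cycles):
    """Stand-in for a sub-ANN trained for a fixed number of cycles."""
    def predict_rate_profile(reservoir_inputs):
        # One placeholder rate per cycle; a real sub-ANN would return a
        # full oil-production-rate time series.
        return [100.0 / (c + 1) for c in range(n_cycles)]
    return predict_rate_profile

# Catalogue of sub-ANNs, keyed by number of cycles.
catalogue = {n: make_sub_model(n) for n in range(1, 11)}

inputs = {"thickness_ft": 85.0}
n_cycles = predict_cycles(inputs)        # phase 1: supervisory ANN
profile = catalogue[n_cycles](inputs)    # phase 2: matching sub-ANN
```

Splitting the problem this way keeps each sub-model's output dimension fixed, which sidesteps the variable-length-output difficulty noted above.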
The supervisory ANN is a fully connected model with 78, 83, and 68 neurons on the first, second, and third intermediate layers, respectively. On the input layer, with the help of 30 neurons, input parameters involving spatial properties, initial conditions, fluid properties, relative permeability coefficients, and project design parameters, including steam quality, steam injection rate, steam temperature, production well bottom-hole pressure, well drainage radius, injection duration, soaking duration, and cycle switching rate, are provided to the network. As explained above, the output layer, with one neuron, simply predicts the number of cycles. Once the number of cycles is learned from the supervisory ANN, the corresponding sub-ANN developed for that specific number of cycles predicts the oil production rate as shown in Figure 9.
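A minimal forward pass with the stated layer sizes (30 inputs, hidden layers of 78, 83, and 68 neurons, one output) might look as follows. The random weights and the tanh activations are assumptions, since the paper does not list this network's transfer functions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Layer sizes from the text: 30 inputs, three intermediate layers of
# 78, 83 and 68 neurons, one output (the predicted number of cycles).
sizes = [30, 78, 83, 68, 1]

# Random weights stand in for the trained parameters.
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(n) * 0.1 for n in sizes[1:]]

def forward(x):
    """Fully connected forward pass; tanh hidden activations are an
    assumption, the output neuron is linear."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ w + b)
    return x @ weights[-1] + biases[-1]

x = rng.standard_normal(30)            # one scaled input record
n_cycles_raw = forward(x)              # would be rounded to an integer count
```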


Figure 9. Blind testing case results from the sub-ANN trained for six cycles [7].

Example 2-Characterization of a Fault Plane from PTA Data
In this section, a proxy model developed as a powerful tool for the analysis of pressure transient data collected in an anisotropic and faulted reservoir is described [9]. The principal tasks assigned to the proxy model include determination of the permeability values in the principal flow directions, the porosity of the reservoir, the distance to the fault, the sealing characteristics of the fault, and the orientation of the fault plane with respect to the principal flow directions. The training data for the proxy model is generated using a two-dimensional, single-phase, slightly compressible numerical model. The pressure transient data sets generated for a large variety of combinations of input parameters are shown to the proxy model in a systematic manner. In generating the pressure transient data, the following assumptions are made:
• Single well producing at a constant flow rate,
• Infinitely large reservoir,
• Single-phase, slightly compressible fluid,
• Homogeneous formation thickness, porosity, and anisotropic permeability distributions,
• Fully or partially sealing fault plane (expressed in percentage),
• Infinitely long fault plane with no width.
In order to accommodate the infinitely large reservoir assumption in the numerical model, it was ensured that during the collection of the well test data the outer physical boundaries had not felt any pressure transients. Table 2 shows the principal reservoir variables placed on the input and output layers.

Table 2. Ranges of variables placed on the input (A) and output (B) layers [9].

The pressure transient data placed together with the other reservoir variables include nine pressure and time values that are chosen randomly from the pressure transient data. These nine pressure and time pairs that are exposed to the network must include pressure values recorded after the pressure transients reach the fault plane (in other words, after the presence of the fault is felt at the wellbore). Furthermore, several functional links are added to the input and output layers. The resulting optimum ANN structure is found to have six layers, including the input and output layers. The first and second middle layers have 175 and 100 neurons, respectively; the third middle layer has 60 neurons, and the fourth middle layer has 30 neurons. The sigmoid transfer functions (tansig and logsig) in the middle layers and the linear transfer function (purelin) in the output layer appeared to be the most appropriate transfer functions for this class of problems. The existing high-level nonlinearities were the main reasons for adding several functional links to the input and output layers and using four intermediate layers for deep machine learning. Figure 10 shows the results of 30 test runs (predictions) and comparisons with the actual characteristics that were used in generating the pressure transient data. In developing the network described here, it was observed that the inclusion of appropriate functional links in the input and output layers was crucial in generating a powerful network with testing results as displayed in Figure 10.
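Assembling the proxy-model input from the nine (t, p) pairs plus functional links can be sketched as below. The specific log-based links are illustrative assumptions, since the paper does not enumerate the functional links actually used:

```python
import numpy as np

def build_input_vector(times_hr, pressures_psi, reservoir_params):
    """Assemble a proxy-model input: nine (t, p) pairs plus functional
    links. The log-time and pressure-difference links are illustrative;
    the paper does not list the exact functional links used."""
    t = np.asarray(times_hr, dtype=float)
    p = np.asarray(pressures_psi, dtype=float)
    assert t.shape == (9,) and p.shape == (9,)
    links = np.concatenate([np.log(t), np.diff(p)])  # assumed functional links
    return np.concatenate([t, p, links, reservoir_params])

params = np.array([0.15, 3000.0])   # e.g., porosity and initial pressure
t9 = np.logspace(0, 2, 9)           # must span times after the fault is felt
p9 = 3000.0 - 40.0 * np.log(t9 + 1.0)
x = build_input_vector(t9, p9, params)
```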


Figure 10. Comparison of the predicted and actual characteristics of a faulted reservoir.

Example 3-Integration of Seismic, Well-Log and Production Data to Design an Infill Drilling Program
Seismic data, petrophysical data including well logs and core analysis data, and field production history are the principal sources of information used in reservoir characterization. While the use of each of these data sets individually helps in answering several questions about the characterization of the reservoir rock, the integration of these different data resources in a purposeful way opens new vistas so that effective workflows for sweet-spot locations in infill drilling operations can be adopted. This challenging problem becomes even more perplexing when it needs to be addressed in the presence of a complex reservoir architecture. In this section, we will show how this challenging problem is brought to a resolution with the aid of an integrated model built using artificial neural networks [10]. This example, once again, reiterates how an expert system can become a powerful tool in decision-making processes.
The oil field focused on in this study is in North America and is a part of the Wilcox formation. Figure 11 shows the reservoir boundaries and the locations of 39 existing wells. An extensive seismic survey was carried out throughout the reservoir. Table 3 gives a summary of the availability of the types of well logs and the number of wells with production histories. As can be seen in Table 3, a full suite of well logs was not available for each well. Therefore, it was necessary to generate synthetic well logs not only at every intersection of the seismic lines but also at some actual well locations where certain types of well logs are missing. In this work, the available production data spans a period of three years. A typical well production history is decomposed into a plateau period and a decline period. For the purpose of reducing the volume of the data, a curve fitting scheme is applied to express the production data collected during the decline period in a hyperbolic-decline form. For each well, a plateau flow rate is calculated as the arithmetic average of the flow rates, which is then used as one of the fitting parameters for the plateau region. The total duration of the plateau period is also recorded. The production rate of the decline period is fitted using a 3-parameter hyperbolic-decline form as shown in Figure 12.
Figure 12. Curve fitting scheme applied to production histories.
Following the protocol described in Figure 12, the production history of a typical well can be expressed by five parameters. These parameters consist of two parameters in the plateau region (the average flow rate during the plateau region and the duration of the plateau time) and three parameters in the decline region (the initial flow rate at the beginning of the decline period and the hyperbolic decline constants a and b). In this way, in order to predict the expected production at any location within the reservoir, it is sufficient to find the values of these five parameters.
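A minimal reconstruction of a production profile from the five fitted parameters might look as follows, assuming the common Arps-style hyperbolic form q_i/(1 + a*b*t)^(1/b) for the decline period; the paper does not spell out its exact parameterization of the constants a and b:

```python
import numpy as np

def production_profile(t_months, q_plateau, t_plateau, q_i, a, b):
    """Reconstruct a production profile from the five fitted parameters:
    plateau rate and duration, then a hyperbolic decline. The decline
    form q_i / (1 + a*b*t)**(1/b) is one common Arps parameterization
    and is an assumption here."""
    t = np.asarray(t_months, dtype=float)
    decline_t = np.clip(t - t_plateau, 0.0, None)   # time since decline onset
    q_decline = q_i / (1.0 + a * b * decline_t) ** (1.0 / b)
    return np.where(t < t_plateau, q_plateau, q_decline)

t = np.arange(0, 36)   # three years, monthly
q = production_profile(t, q_plateau=500.0, t_plateau=6.0,
                       q_i=480.0, a=0.08, b=0.7)
```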
In this work, a two-stage tool-development protocol is implemented. This protocol fully utilizes the available field data and provides fast and accurate predictions for the purposes of reservoir characterization and field development optimization. The first tool addresses the need for synthetic well logs at each intersection of seismic lines throughout the field. In other words, this tool can generate 5 different types of synthetic well logs at any location within the seismic survey boundaries. In the development stage of this tool, seismic attributes extracted from 3D seismic data, 5 different types of well logs, and well coordinates are utilized. The developed synthetic well log tool is capable of generating well logs for vertical wells at specified locations. However, the very same tool can be used in generating logs for wells with complex architecture such as slanted wells and horizontal wells when the wellbore trajectory is defined. Figure 13 shows schematically how the first tool works. The second tool is developed to predict the oil flow rates and cumulative oil production profiles as a time series at any desired well location. It should be remembered that the final goal of the second tool is to predict the five production-profile related parameters as described in Figure 12. Once these five parameters are identified, it is a straightforward procedure to construct the predicted production profiles at the desired locations. Figure 14 highlights the significant parts of the workflow established for the second stage of the analysis and construction of potential hydrocarbon productivity distribution over the domain of interest.

Figure 13. Schematic representation of the workflow for the first tool.
Figure 14. Schematic representation of the workflow for the second tool.

Figure 15 shows the predicted oil flow rates and cumulative production profiles for the average, best, and worst cases. In these comparisons, the predicted production profiles are compared against the production data from the actual wells. The average absolute error reported in the average quality case is 16.66% (Figure 15a), in the best quality case is 9.86% (Figure 15b), and in the worst case is 18.59% (Figure 15c). These testing cases represent the results for wells that were not shown to the expert system during the training phase. Promisingly high-quality matches were obtained both for the oil production rates and the cumulative oil production. It should be noted that the cumulative production time series is not obtained by integrating the area under the production rate profile but is determined directly by the expert system; this approach invokes an additional internal check on the predictions. The average absolute error for cumulative production as a function of time is found to be slightly larger than 5%.
After validating and successfully testing the tools developed in the first and second phases, the remaining objective is to establish a heat map displaying the expected productivity distributions over the entire field. By doing so, it is possible to identify previously unnoticed sweet spots for infill drilling purposes. This is schematically illustrated in the last phase of the second-stage tool development shown in Figure 14. After sweeping the entire field, the heat maps generated are shown in Figure 16. The upper left panel is a heat map showing the expected flow rates in barrels per month three years after commencing production. On this panel, in the central section, the elongated area with bright colors shows the location of the existing producing wells (compare with the panel at the bottom). Additionally, in the same panel, the sweet spots identified as a result of this study are marked with red ovals.
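Sweeping the field on a regular grid to build such a heat map can be sketched as follows; `predicted_rate` is a hypothetical stand-in for the trained productivity predictor, which in the paper chains the synthetic-log tool and the five production parameters:

```python
import numpy as np

def predicted_rate(x_km, y_km):
    """Hypothetical stand-in for the trained productivity predictor
    evaluated at a map location (a smooth bump centered at (2, 3))."""
    return np.exp(-((x_km - 2.0) ** 2 + (y_km - 3.0) ** 2))

# Sweep the field on a regular grid to build the heat map.
xs = np.linspace(0.0, 5.0, 50)
ys = np.linspace(0.0, 5.0, 50)
X, Y = np.meshgrid(xs, ys)
heat = predicted_rate(X, Y)

# Candidate sweet spots: cells in the top 1% of predicted rates.
threshold = np.quantile(heat, 0.99)
sweet_rows, sweet_cols = np.where(heat >= threshold)
```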
Furthermore, in the upper right panel, the heat map displays the expected cumulative production at the end of three years. In both heat maps, some of the permeability channels identified are also highlighted with the help of the red colored arrows.

Figure 16. The sweet spots and permeability channels identified as a result of this study. The third insert at the bottom shows the wells that were put on production during the initial development stage of the field.

The work summarized in this section offers a promising innovative protocol to characterize semi-brown to brown oil fields in terms of their expected productivities. Once again, what makes the entire protocol more attractive are the high accuracy levels in predictions that are achieved at high computational speeds.

Taking One Step Further down the Road on the Hybrid Computational Platform
Thus far, all the implementations on the hybrid computational platform discussed in this article have comprised an asynchronous external handshaking between hard-computing and soft-computing modules. The 'external-handshaking' protocol shown in Figure 3 involves a two-way information/knowledge transfer between the high-fidelity and the proxy models. In this section, how a proxy model can be built within a numerical model synchronously to accelerate the computational speed is described via some examples.

In a numerical representation of multi-phase and multi-component fluid flow dynamics in porous media, in order to capture the effects of the varying composition of phases in the reservoir and the wellbore domains accurately, it is necessary to use compositional formulations. In the execution of such formalisms, most of the computational overhead is encountered in vapor-liquid-equilibria calculations. In order to decrease the time spent on flash calculations, it is recommended to use a neuro-simulation methodology such that a capable artificial expert system can predict the pressure profile along a production tubing containing flow of pure hydrocarbon components [11]. This expert system is designed to be operational within a range of flow rates, well depths, pipe diameters, inlet fluid compositions, wellhead pressures and geothermal gradients. The data for training the expert system is generated via a numerical simulator that computes wellbore hydraulics. In this work, a two-phase drift-flux model accounts for the variations in gaseous and oleic phase velocities under various flow regimes. In solving the wellbore hydraulics equations, compositional mass balance and momentum equations are solved simultaneously using a fully implicit scheme [11].
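The drift-flux closure mentioned above relates the phase velocities to the total volumetric flux. As a minimal illustration (not the authors' implementation from [11]; the profile parameter and drift velocity below are placeholder values, since both vary with flow regime), the relation can be sketched as:

```python
# Minimal drift-flux sketch: v_g = C0 * j + v_d, where j is the total
# volumetric flux, C0 is the profile parameter, and v_d is the drift
# velocity. C0 and v_d depend on the flow regime; the defaults here
# are placeholder values for illustration only.

def gas_velocity(j_total, c0=1.2, v_drift=0.35):
    """Gaseous phase velocity (m/s) from the standard drift-flux relation."""
    return c0 * j_total + v_drift

def liquid_velocity(j_total, alpha_g, v_g):
    """Oleic phase velocity from the volumetric balance
    j = alpha_g * v_g + (1 - alpha_g) * v_l."""
    return (j_total - alpha_g * v_g) / (1.0 - alpha_g)

v_g = gas_velocity(2.0)               # gas velocity for j = 2.0 m/s
v_l = liquid_velocity(2.0, 0.3, v_g)  # liquid velocity at 30% gas fraction
```

The two relations capture the slip between the phases: the gas moves faster than the mixture, so the liquid must move slower to conserve the total flux.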
As expected, the numerical wellbore hydraulics model is computationally expensive and takes a significant time, ranging from minutes to hours to predict the pressure distribution of the stabilized wellbore system. The complexity of the computations increases when a fine grid is overlaid in the production tubing to control the numerical dispersion.
This strategy results in a much larger number of blocks, especially in the case of ultra-deep wells. The increase in the number of blocks in the wellbore hydraulics calculations necessitates more flash calculations. Therefore, a different model utilizing an artificial neural network protocol as a classification and regression tool is suggested to carry out the flash calculations in the wellbore in a much more expeditious manner. In this example, in structuring the artificial neural network, the total number of components was set to seven: methane, ethane, n-propane, iso-butane, and iso-pentane, plus two pseudo-components, C6+ (C6 through C19) and C20+ (C20 through C45). Table 4 displays a summary of the input parameters used in the ANN-based wellbore hydraulics model. The model has 43 input parameters, as shown in Table 4, and one output (flowing bottom-hole pressure). A total of almost 80,000 data sets have been generated. As observed in Table 4, input number 5 represents the depth, as a fraction of the total depth, of the point at which one intends to predict the wellbore pressure. After obtaining raw pressure data from each wellbore simulation, the pressures were interpolated at intervals representing each tenth of the total depth of a well. Figure 17 shows the architecture of the ANN-based wellbore hydraulics model.
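The shape of such a proxy can be sketched as a small feed-forward network mapping the 43 inputs (rates, depth fraction, pipe diameter, inlet composition, wellhead pressure, geothermal gradient, and so on) to the single flowing bottom-hole pressure output. The hidden-layer sizes and the tanh activation below are illustrative assumptions, not the architecture reported in [11], and the weights are random rather than trained on the ~80,000 simulator-generated samples:

```python
import numpy as np

# Sketch of an ANN-based wellbore hydraulics proxy: 43 inputs mapped
# to one output (flowing bottom-hole pressure). Layer sizes and the
# tanh activation are assumptions for illustration; a real model
# would be trained on the simulator-generated data sets.

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Random (untrained) weights and zero biases for one dense layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

W1, b1 = init_layer(43, 20)
W2, b2 = init_layer(20, 10)
W3, b3 = init_layer(10, 1)

def predict(X):
    """Forward pass; X has shape (n_samples, 43)."""
    h1 = np.tanh(X @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h2 @ W3 + b3          # shape (n_samples, 1)

batch = rng.random((91, 43))     # e.g., 91 cases evaluated at once
p_bhp = predict(batch)
```

Because the forward pass is a handful of matrix multiplications, evaluating a whole batch of cases costs essentially the same as one numerical time step, which is the source of the speedups discussed next.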
Further computational experiments with the ANN-based wellbore hydraulics model indicate that the developed proxy model is fast and robust. Table 5 compares the ANN-based wellbore hydraulics model against the numerical model, and against the field data, with respect to the level of accuracy achieved. A computational performance comparison of the full numerical model (numerical reservoir model coupled with numerical wellbore hydraulics model) against the neuro-simulation model (numerical reservoir model coupled with ANN-based wellbore hydraulics model) was also made. The full numerical wellbore model was run for 24 h with the input variables randomly chosen in a protocol like the one followed to generate data for ANN training. The numerical model was able to study 91 cases within this time span. These 91 cases were re-simulated on the same computer using the ANN model. Although the ANN can take inputs simultaneously in the form of a 43 × 91 matrix and predict the result as a 1 × 91 vector, an iterative loop was used to run the neuro-simulation model. This was done in order to account for the ANN call time, which will be a factor in each iteration of the coupled reservoir-wellbore simulation. The total time taken by the ANN toolbox to re-simulate the 91 runs was 2.13 s. Since the numerical model was run for 86,400 s, the proposed ANN model outperformed the numerical model speed-wise by more than a factor of 40,000. The ANN model did not encounter any stability issues that might lead to time-step cuts or failure of the program. This speed and robustness of the ANN model provide a significant advantage in coupled reservoir-wellbore simulation studies. The comparison was performed on a computer with a 2.20 GHz processor and 8 GB RAM [11].
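The reported factor of 40,000 follows directly from the quoted timings:

```python
# Reported timings: the full numerical model ran for 24 h to cover
# 91 cases, while the ANN proxy re-simulated the same 91 cases in
# 2.13 s on the same machine.
numerical_s = 24 * 3600            # 86,400 s
ann_s = 2.13

speedup = numerical_s / ann_s      # ≈ 40,563, i.e., more than 40,000x
per_case_numerical = numerical_s / 91   # ≈ 950 s per case
per_case_ann = ann_s / 91               # ≈ 0.023 s per case
```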
Further computational time experiments were conducted for gas-lift operations (Table 6).

Figure 17. Architecture of the ANN-based wellbore hydraulics model.
The ideas and observations presented in these experiments suggest that similar coupled protocols in other reservoir engineering applications can be attractive, not only from the perspective of gaining computational speed but also for capturing reservoir heterogeneities more accurately. Along these lines, two possible implementations immediately come to mind. One is the use of ANN-based relative permeability and capillary pressure models in conventional simulators. In other words, during the computations at any reservoir block, at any time step, at any iteration level, rather than calling the same subroutine that returns relative permeability values only as a function of saturation, it will be much more representative to receive the response from an ANN-based property model that is capable of tracing and updating the requested petrophysical property not only as a function of saturation but also as a function of other independent spatial and temporal properties such as permeability, porosity, interfacial tension, or capillary number (the latter two collectively determine the final saturation values).

As another example of the integration of ANN-based proxy models with numerical models, multi-component, multi-phase simulation of coalbed methane (CBM) reservoirs can be considered. For example, consider multi-purpose CO2 injection into a CBM reservoir for enhanced methane recovery and CO2 sequestration. In compositional modeling of such an implementation, one of the challenges that surfaces is the construction of multi-component Langmuir adsorption isotherms from individual pure-component adsorption isotherms. The thermodynamics of multi-component isotherms is analogous to vapor-liquid-equilibria calculations, as established by Myers and Prausnitz [14]. However, the use of ideal adsorbed solution (IAS) theory still requires expensive flash calculations. In deploying the IAS theory, again at any time step, in any reservoir block, at any iteration level, the computationally expensive flash calculations need to be executed to construct and reconstruct the multi-component adsorption isotherms. Accordingly, a relatively simple two-component (CH4 and CO2) ideal adsorbate construction for multi-component adsorption isotherms can be accomplished with the help of an ANN-based model.
Then, such a proxy model can construct and update the multi-component adsorption isotherms in an extremely fast manner, as requested by the transport equations. Obviously, implementation of the VLE computations at the reservoir block level with the help of the same proxy model will accelerate the overall solution even further.
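To make the multi-component construction concrete, the simplest closure (often used as the fast alternative to a full IAS calculation, and the kind of mapping an ANN proxy could learn to update per block) is the extended Langmuir isotherm. The Langmuir constants below are placeholder values, not measured coal data:

```python
# Extended Langmuir isotherm for a binary CH4/CO2 mixture: the
# adsorbed amount of component i at partial pressures p is
#   q_i = qmax_i * b_i * p_i / (1 + sum_j b_j * p_j).
# The constants below are placeholders for illustration only.

def extended_langmuir(p, qmax, b):
    """p, qmax, b: dicts keyed by component name.
    Returns adsorbed amounts per component (same units as qmax)."""
    denom = 1.0 + sum(b[c] * p[c] for c in p)
    return {c: qmax[c] * b[c] * p[c] / denom for c in p}

qmax = {"CH4": 0.015, "CO2": 0.030}   # max adsorbed amount (placeholder)
b    = {"CH4": 0.8,   "CO2": 2.0}     # Langmuir constants, 1/MPa (placeholder)

# Partial pressures in MPa: CH4-rich gas with some injected CO2.
q = extended_langmuir({"CH4": 3.0, "CO2": 1.0}, qmax, b)
```

Even with a lower partial pressure, CO2 here adsorbs more strongly than CH4 because of its larger Langmuir constant, which is the mechanism exploited in CO2-enhanced methane recovery.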


Concluding Remarks
The overall goal of the ideas presented in this paper is to achieve field-scale optimization in reservoir engineering applications. In order to achieve this goal, it will be necessary to integrate, in a continuous and seamless manner, every piece of information that is available: from field/reservoir characteristics to field production strategies, from competing completion strategies to artificial lift techniques, and from enhanced recovery techniques to infill drilling strategies. Consideration of such an integrated approach will require the deployment of several automated modules that are connected to each other in parallel or in series, and the exploration of the existing synergy that can be found in several different facets of the process, as shown in Figure 18.
The integrated model shown in Figure 18 will have its boundary conditions at the reservoir boundaries, surface facilities, and the interfaces between reservoir, wellbore, and surface facilities.
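The handshake at those interfaces can be pictured as a fixed-point exchange: at each time step the reservoir and wellbore modules trade pressures and rates until they agree. The sketch below is purely hypothetical; the function names and the stand-in linear models are illustrative placeholders, not components of the model in Figure 18:

```python
# Hypothetical sketch of the interface coupling in an integrated
# "reservoir-wellbore-surface" model: the modules iterate on the
# shared bottom-hole pressure until their answers are consistent.
# Both "models" below are stand-in linear relations, not simulators.

def reservoir_inflow(p_bhp, p_res=250.0, pi=0.8):
    """Inflow rate from a simple productivity-index model (stand-in)."""
    return pi * (p_res - p_bhp)

def wellbore_bhp(rate, p_wellhead, gradient=0.5):
    """Bottom-hole pressure from a stand-in wellbore hydraulics model."""
    return p_wellhead + gradient * rate

def couple(p_wellhead, tol=1e-6, max_iter=100):
    """Fixed-point iteration on the reservoir-wellbore interface."""
    p_bhp = p_wellhead
    for _ in range(max_iter):
        rate = reservoir_inflow(p_bhp)       # reservoir answers with a rate
        p_new = wellbore_bhp(rate, p_wellhead)  # wellbore answers with a BHP
        if abs(p_new - p_bhp) < tol:
            return rate, p_new               # interface is consistent
        p_bhp = p_new
    return rate, p_bhp

rate, p_bhp = couple(p_wellhead=50.0)
```

In a real implementation, the expensive wellbore evaluation inside this loop is exactly where an ANN proxy of the kind described earlier pays off, since it is called at every interface iteration of every time step.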
The available manpower in most of our daily operations often constrains us from making comprehensive, multi-dimensional analyses to find the optimal solution. Every day, field operations across the industry generate vast volumes of new information. This information is perhaps recorded, but unfortunately it is typically not utilized extensively. Use of the protocols proposed in this paper should enable engineers and scientists, on the one hand, to maximize their contributions by not spending time on issues that are peripheral to the problem and, on the other hand, to aid them by providing dependable tools with functionalities that consider every aspect of the problem effectively in a decision-making process. In this article, some examples of how artificial expert systems can effectively be utilized in addressing long-standing reservoir engineering problems are discussed. It is hoped that, by virtue of artificial intelligence techniques, one will be able to assemble expert tools that enable reservoir engineers to implement better controls on decision systems, optimization studies, and information management, and to make smart inferences concerning operations.
Figure 18. A fully integrated "reservoir-wellbore-surface" hybrid model.
A technological discontinuity might be defined as a "breakthrough innovation" that advances the technological state-of-the-art characterizing an industry by an order of magnitude [15]. Technological discontinuities are based on new technologies whose technical limits are inherently greater than those of the previous dominant technology along economically relevant dimensions of merit [16]. With this understanding, we can characterize a technological discontinuity as the introduction of a disruptive know-how that sweeps away the systems of habits it replaces because its attributes are recognizably superior. We should also acknowledge that, in developing and assembling tools as described in this paper, there will be opportunities to examine challenges. The activation energy barrier to implementing expert system-based solutions is typically not the cost, lack of benefits, or technical risk, but other issues such as resistance to change. It is recognized that new technologies may require changes to established workflows that practitioners may resist; however, it is safe to assume that practitioners are not resisting specific technologies but the creation of a discontinuity in technology adaptation. It should further be recognized that it is this very same discontinuity (disruption) that sets our understanding and thinking on a new trajectory of technology adaptation. Therefore, it is naturally expected that innovative protocols like the ones described in this paper will continually have immediate applications.
As a final point, it is worth reminding the readers of this forum of the importance of "Pasteur's Quadrant", as shown in Figure 19. Donald Stokes, in his 1997 book entitled Pasteur's Quadrant: Basic Science and Technological Innovation [17], states that "he was struck by how often a gifted scientist/engineer would talk about the goals of research, especially the relationship between the quest of fundamental understanding on the one hand and consideration of use on the other, in a way that seemed odd. Odd and unhelpful, since the preceptors' view of this relationship and of the relationship between the categories of basic and applied research derived from these goals kept them from seeing things they needed to see". Figure 19, adapted from Stokes, schematically shows Pasteur's Quadrant's invitation to scholars and scientists at the front of their academic and industrial pursuits to move in and pursue fundamental understanding of phenomena with the goal of tackling critically important real-world problems. Therefore, it is naturally expected that innovative protocols in reservoir modeling efforts, similar to the ones described in this paper, will continually evolve and have immediate applications, with the overall goal of responding to the calls of Pasteur's Quadrant. In other words, the implementation of the Pasteur's Quadrant concept to available reservoir modeling options inspires similar thought development processes in reservoir model development activities, as shown in Figure 20.

Figure 19. Importance of Pasteur's Quadrant (adapted from Stokes [17]). The horizontal axis represents the consideration of use (industrial interest) and the vertical axis the quest for understanding (academic interest). Purely curiosity-driven research falls in Bohr's Quadrant. Research carried out to advance fundamental knowledge while pursuing a practical result is in Pasteur's Quadrant, the desirable place to be. Research conducted to achieve a practical benefit without consideration of advancing the frontiers of knowledge is known as Edison's Quadrant. Research that can be characterized neither as inspired by the goal of fundamental understanding nor by a goal of practical use occupies the fourth quadrant, which is not really an empty quadrant.