Parallel Computing Alters Approaches, Raises Integration Challenges In Reservoir Modeling
Gautam S. Shiralkar, Richard F. Volz, Robert E. Stephenson,
Manny J. Valle, Kirk B. Hird
Amoco Exploration & Production Technology
Tulsa
Parallel computing is emerging as an important force in reservoir characterization, with the potential to alter the way we approach reservoir modeling. It is now possible to routinely simulate, in just hours, fluid flow in reservoir models 10 times larger than the largest studies previously conducted within Amoco.
Although parallel computing provides solutions to reservoir characterization problems that were not possible in the past, such state-of-the-art technology also raises several new challenges, including the need to handle and integrate large amounts of data.
A reservoir study recently conducted by Amoco provides a showcase for these emerging technologies.
Characterization philosophies
Flow simulation studies are typically performed when the resource base of the reservoir is large enough to justify the personnel and computing costs. Although flow simulation is used for many reasons, including scoping of prospects, sizing surface equipment, etc., the principal goal we consider here is that of arriving at a characterization of the reservoir system which will allow for prediction of the reservoir's production rates and reserves.
When such a study is undertaken, the usual practice is to adjust different reservoir system parameters within the flow simulator so as to match the previous fluid production and pressure history of the reservoir.
Typically, an initial "guess" is obtained from a geologist, and any available data from cores, well logs, and well pressure transient tests are incorporated. This could be the last time that the geologist sees the reservoir engineer. The reservoir flow model is then tweaked to match production history, and a single "best estimate" and, therefore, deterministic model of the reservoir is developed.
There are large differences in philosophy within the industry on exactly which reservoir parameters are the best to iterate, and which are "hands-off" parameters. These differences can exist within a single company. Often, fieldwide characteristics are matched first, followed by matching of individual well performance.
Although this traditional approach has had notable successes, it does suffer from several well-known shortcomings. The resulting history-matched reservoir model suffers from non-uniqueness: A number of significantly different reservoir descriptions can give essentially the same simulated reservoir performance over the period of historical data but differ in prediction of future performance.
Also, since single numbers are used to represent properties such as rock permeability for volumes that often exceed the size of a medium-size building, such reservoir descriptions often tend to be overly homogeneous and, therefore, unrealistic.
Significant improvements must be made if we are to improve our ability to use reservoir simulators as a basis for making informed business decisions. Among the most important needs are the following:
- To be able to represent reservoir geology more realistically.
- To bring the geologist into the iterative reservoir characterization process, rather than have the geologist hand off the baton to the engineer.
- To better assess the technical risks (and, therefore, the economic risks) associated with the reservoir description.
These views are not new, of course, and are shared by many in the industry. We feel that it is only recently that many of the different components of the technology needed to meet the stated goals have started to come together to complete the picture.
Geostatistics
Geostatistics is a promising technology that attempts to address some of these issues.
The essence of a reservoir characterization approach that includes geostatistics is to generate multiple descriptions of the reservoir, called "realizations," and then flow-simulate each such description, resulting in a range of possible reservoir performance, rather than a single answer. The different realizations are generated on a statistical basis that also accounts for the spatial variability in reservoir properties.
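The loop is simple to state even though each step is computationally heavy. The Python sketch below illustrates the structure only; the two stand-in functions are hypothetical placeholders for a geostatistical generator and a flow simulator, not any particular package.

```python
# Sketch of the multiple-realization workflow: generate many equi-probable
# descriptions, flow-simulate each, and report a range of outcomes.
import random
import statistics

def generate_realization(seed):
    """Stand-in for the geostatistical step: returns a toy reservoir description."""
    rng = random.Random(seed)
    return {"mean_perm_md": rng.lognormvariate(3.0, 0.8)}

def flow_simulate(description):
    """Stand-in for the flow simulator: maps a description to a recovery factor."""
    return min(0.45, 0.10 + 0.05 * description["mean_perm_md"] ** 0.25)

# The product is a range of possible performance, not a single answer.
recoveries = [flow_simulate(generate_realization(seed=i)) for i in range(50)]
print("recovery factor: %.3f to %.3f (mean %.3f)"
      % (min(recoveries), max(recoveries), statistics.mean(recoveries)))
```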
Different geostatistical techniques are used, depending on the geological environment. Recent work also seeks to incorporate data from geophysical sources1 and short-term well tests2 to constrain the range of possible realizations.
Much uncertainty remains in the reservoir model because the amount of available data is minuscule in comparison to the size of the reservoir. A reservoir model whose simulated performance agrees with field data has far more credibility than one that behaves differently from the actual reservoir system. The range of statistically generated (equi-probable) realizations is thus reduced through flow simulation.
Geostatistics incorporates the geological model description directly into each realization and attempts to do this in a more realistic fashion. It integrates data from a number of sources and generates many equi-probable realizations that are then tested through flow simulation.
In representing reservoir geology in a more realistic way, the spatial resolution is often increased to the extent that the resulting flow simulator model may contain several million cells. Because many such realizations have to be simulated, the computational requirements are high, which explains why few geostatistical studies have been undertaken in practice.
Often, some sort of compromise has to be made. Usually, "upscaling"3 is used to develop a different description with fewer grid cells to simulate.
The greater the degree of upscaling, of course, the greater the loss of information. If the upscaled cells are larger than an important scale of reservoir heterogeneity, then the description may be excessively compromised. Simulation of the fine-scaled reservoir model using parallel computing is now viewed as a viable alternative to upscaling in some cases and should reduce the degree of upscaling necessary in others.
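As a simple illustration of what upscaling does (and what it can lose), the sketch below coarsens a fine permeability grid by block averaging. The factor-of-2 coarsening and the geometric average are illustrative choices only, not the method of reference 3.

```python
# Simple block-average upscaling of a 3D permeability grid (illustrative only).
import numpy as np

def upscale_geometric(fine, factor=2):
    """Coarsen a 3D permeability grid with a geometric average per coarse block."""
    nx, ny, nz = fine.shape
    assert nx % factor == 0 and ny % factor == 0 and nz % factor == 0
    blocks = fine.reshape(nx // factor, factor,
                          ny // factor, factor,
                          nz // factor, factor)
    # Average log-permeability over each factor**3 block of fine cells.
    return np.exp(np.log(blocks).mean(axis=(1, 3, 5)))

fine_perm = np.random.lognormal(mean=3.0, sigma=1.0, size=(8, 8, 4))   # md
coarse_perm = upscale_geometric(fine_perm, factor=2)                   # shape (4, 4, 2)
print(fine_perm.size, "fine cells ->", coarse_perm.size, "coarse cells")
```

Heterogeneity at scales smaller than a coarse block is averaged away, which is exactly the information loss the fine-scaled parallel simulations are intended to avoid.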
In summary, there are two primary drivers behind modern techniques of reservoir characterization:
(1) the need to integrate increasing amounts of data from diverse sources and
(2) the increasing demands placed upon reservoir flow simulators by more geologically realistic descriptions generated on a stochastic basis.
Data management
Reservoir characterization creates special data handling needs in an industry already forced by advancing technologies, such as 3D seismic, to process huge and growing amounts of information.
In the not-too-distant past, geoscientists and engineers operated in relative isolation and rarely interacted; when they did, the geoscientist would simply "hand off" a description to the engineer. That situation has changed recently with the recognition of the benefits of multidisciplinary teamwork.
Within Amoco, for example, multidisciplinary "exploitation" teams focused on reservoir assets are the norm rather than the exception. Along with these changes has emerged the need for "exploitation" tools, which tie geoscience and engineering applications together into an integrated suite of applications suitable for use by teams that are no longer purely geoscience or purely engineering.
Of course, all these different applications from diverse sources (such as well logging, well test analysis, geostatistics, and reservoir engineering) have different data input and output formats, so tying applications together is a laborious task.
An emerging trend among commercial software vendors, in response to these needs, is the development of integrated systems that tie geoscience and engineering applications together through a common database. These can be based on either a proprietary data model, such as the ERC Tigress product and Petrosystems's Integralplus, or on an emerging data model, such as the standard proposed by the Petrotechnical Open Software Corp. (POSC) and the Public Petroleum Data Model (PPDM).
The advantage of tying in different applications through a common database is that the applications can then be tightly coupled; i.e., a change in a property made by any application is reflected as a change within the shared database and is immediately accessible to all other applications sharing that database.
Such integrated suites that tie together diverse applications and data from different disciplines also have the potential to afford exploitation teams a common view of the reservoir model. Ideally, the reservoir engineer could sit at the same workstation computer as the geoscientist and agree on details of the reservoir model. For this reason, the ability to render 3D graphical views of the data must also be considered an essential feature of such a tool.
The reservoir study we describe later was performed by a multidisciplinary team that included a geologist, geostatistician, and reservoir engineer. The core part of the study was finished in about a month, which would not have been possible without the different participants' working closely together.
Parallel computing
Fortunately, the enormous needs placed on reservoir flow simulation by advances in reservoir characterization (geostatistics) have been matched by advances in the computing world. Multiprocessor computers, i.e., computers equipped with more than a single processor, are rapidly emerging as a source of tremendous computing power.
Computers have internal clocks; the faster this clock ticks, the faster the computer can complete assigned tasks. Recent years have seen very rapid increases in the speed of individual processors (the central processing unit, or CPU, that sits at the heart of a computer). A common assumption in the computer industry is that the number of transistors on a chip doubles every 18 months, leading to a commensurate increase in the speed of individual processors.
Several cogent arguments, however, point to an inevitable slowing down. Among factors often cited are the increased requirements of cooling the electronics due to increased power dissipation and the increasing need to pack components very densely to avoid long wire lengths.
If clock speed limitations impose a fundamental barrier to the processing power of a single processor, an alternative route to very high performance lies in the use of multiple processors operating in parallel. With the vertical integration occurring in parts of the computer industry, parallel computing is able to leverage the advances made in single-processor technology: At the heart of several massively parallel machines lie many high-volume-market processors.
It is only recently that parallel computing has started to come of age. Pioneers in the field included Kendall Square Research (KSR) and Thinking Machines Corp. (TMC). Several other companies, notably Cray Research (CRI), Silicon Graphics, and IBM, have matured their hardware and software offerings for engineering users, so that parallel computing is now a viable tool.
While geoscientists have had a longer relationship with parallel computing, particularly in seismic processing, engineers are beginning to explore the potential of parallelism.
There appears to be a trend within the computer industry towards scalable multiprocessor systems built of commodity components. "Scalable" here means that multiprocessor systems may start from a few processors and be built up into powerful systems containing many tens or hundreds of processors, adding total computer memory along with additional processors. At the high end of the computing spectrum, we find the massively parallel processing (MPP) computers, such as CRI's T3D computer (being replaced by the T3E model) and TMC's CM5 computer.
Although several problems remain in the efficient usage of massively parallel computers, which are beyond the scope of this article, significant advances have clearly been made, particularly with regard to the maturing of much-needed system software. We feel that parallel computing is likely to increase in many areas of computation, even reaching the desktop in the very near future.
Falcon project
More than 2 years ago, Amoco initiated a strategic project called Falcon to bring the different pieces of the reservoir characterization puzzle together within a "next generation" reservoir management system. Driven by the demands of geostatistics, the project's initial focus was the development of reservoir-simulation software (Falcon) that could efficiently exploit parallel computers.
Through a cooperative research and development agreement (Crada) with Los Alamos National Laboratory (LANL) in Los Alamos, N.M., and CRI, the extended Falcon development team consists of two experienced reservoir engineers from Amoco, two computing experts from LANL, and one from CRI. The Crada calls for eventual commercialization of Falcon.
The project enabled Amoco to carry out a significant field study using a geostatistical approach. In the past, we would have had to make significant compromises to achieve these same goals. The study was designed to showcase Falcon and reservoir characterization technologies.
The study
Although Amoco has conducted field reservoir studies in the past using geostatistical techniques, these often suffered from undesirable elements. Typically, only a handful of realizations could be practically simulated, resulting in a return to manual tweaking of reservoir parameters in an effort to history-match performance data.
The Falcon field study was designed to showcase new technology that overcame several of these problems; it identified several new difficulties in the process.
The study was conducted at Amoco's Tulsa Technology Center by members of Amoco Exploration & Production Technology. The core team consisted of a geologist and an engineer working in cooperation with the study team in Houston and members of the reservoir characterization and simulation development group in Tulsa.
The showcase study had two goals:
(1) to demonstrate Falcon's capabilities in conducting field-size probabilistic studies, and
(2) to use Falcon in a real-world Amoco problem and thus demonstrate the business value.
One of the teams in Amoco's Operations Business Group offered data from a field the team was evaluating. The size of the field and the scarcity of well data made it ideal for testing the strength of combining geostatistics and Falcon technologies. In light of the scarcity of production data, the showcase study did not entail matching of performance history but was aimed at a probabilistic evaluation of the uncertainties in production forecasts.
An aggressive goal was set to generate and flow-simulate 50 different, but equally probable, reservoir descriptions for an area with estimated in-place reserves in excess of 1 billion bbl of oil. The area under study was represented within the simulator by 2.3 million grid cells. In total, 1,039 wells were simulated over 25 years of waterflood performance. This made it the largest field study undertaken within Amoco.
After the initial setup, the 50 reservoir descriptions and model runs were made in less than 1 month. This was possible only because the extremely large flow simulations could be run in about 4 hr.
Reservoir description
At this early stage of field evaluation, the parameters believed to most influence the variability in rate and reserve predictions are the distributions of permeability and porosity.
Data in our evaluation was limited to well logs and well tests from sparsely drilled exploratory wells. Considerably different interpretations of porosity and permeability trends can be made from the same raw data. A major outcome of the study was to quantify the expected variability in predicted rates and reserves.
The geological model consists of siliciclastic wedges roughly striking south-southwest/north-northeast and dipping west-northwest. These wedges are divided into three main depositional settings: the shelf, the slope, and the toe.
The shelf and toe are relatively thin, and permeability, where present, is distributed fairly isotropically. Permeability in the slope, which represents the thickest deposition, is distributed anisotropically, with the major axis trending northeast (30°) and the minor axis trending normal to the major. Production potential exists in three zones: A, B, and C from youngest to oldest. The wedges are found in zones A and B.
The geological models for permeability and porosity used in the fluid flow simulations were derived geostatistically with the commercial software package (RC)2. Fifty realizations of the reservoir area under study were generated from different combinations of correlation lengths: vertical (3 ft and 7.5 ft), horizontal isotropic (3 miles and 4 miles), and horizontal anisotropic (4 miles x 2 miles and 3.5 miles x 1 mile). A number of realizations were generated for each of the combinations.
The model area was divided horizontally into nine areas and vertically into 127 layers. This areal division, a feature provided by the (RC)2 software, permitted the use of differing permeability histograms and variograms, which allowed us to honor the permeability distributions in each of the depositional settings.
The permeability models were generated through a technique known as sequential Gaussian simulation (SGS). While SGS was not the recommended method, given the strongly bimodal distribution of pay versus non-pay, it was selected because of its speed. The porosity models were generated through use of the permeability models, a porosity vs. permeability crossplot, and the "cloud transform" technique. This technique preserves the inherent scatter in the correlation of porosity and permeability, resulting in a reservoir description that is more heterogeneous than those resulting from other techniques.
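The sketch below illustrates one simple way the cloud-transform idea can be implemented: porosity is drawn from the observed scatter in the crossplot bin that matches the simulated permeability. The binning scheme and the core data shown are hypothetical, and the code is a generic illustration rather than (RC)2's implementation.

```python
# Generic "cloud transform": sample porosity from the scatter of a
# porosity-vs.-permeability crossplot instead of a single regression line.
import bisect
import math
import random

def build_cloud(perm_data, poro_data, n_bins=10):
    """Bin the crossplot by log-permeability; keep the porosity 'cloud' per bin."""
    logk = [math.log10(k) for k in perm_data]
    lo, hi = min(logk), max(logk)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]
    bins = [[] for _ in range(n_bins)]
    for lk, phi in zip(logk, poro_data):
        bins[bisect.bisect(edges, lk)].append(phi)
    return edges, bins

def cloud_transform(perm, edges, bins, rng=random):
    """Draw a porosity from the bin matching the simulated permeability."""
    i = bisect.bisect(edges, math.log10(perm))
    while not bins[i]:        # interior bins may be empty; bin 0 never is
        i -= 1
    return rng.choice(bins[i])

# Hypothetical core data, then transform one simulated permeability (md).
core_perm = [1, 5, 20, 80, 150, 400, 900]
core_poro = [0.05, 0.08, 0.11, 0.14, 0.16, 0.19, 0.22]
edges, bins = build_cloud(core_perm, core_poro, n_bins=4)
print(cloud_transform(250.0, edges, bins))
```

Because each simulated permeability can map to a spread of porosities, the scatter of the raw data is carried into the model rather than smoothed away.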
Model setup, execution
Each of the black oil model runs simulated the performance of limited pressure depletion, followed by a pattern waterflood over a 25 year period. The producing wells were operated at 500 psia bottom hole pressure, while bottom hole injection pressure was limited to 5,200 psia. Water injection wells were typically produced for 1 year prior to being converted to injection.
In many parts of the field, the pressure drawdown was sufficient to form a flowing gas phase. A realistic drilling schedule resulted in a development period of 10 years.
The flow simulator dataset for the showcase study was not substantially different from that of other black oil simulations. The two exceptions were the manner in which the properties for the 2.3 million grid cells and the well controls for the 1,039 wells were input.
Grid properties are normally input in ASCII (text-based) formats. This is not practical for the routine handling of millions of grid cell properties: It would take upwards of 20 min for Falcon just to read in the grid data. In the showcase study, output from the geostatistics software was converted to a binary format before input to Falcon.
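The sketch below shows the kind of conversion involved. The file layout (one value per text line in, raw 32-bit floats out) is a hypothetical stand-in; the actual formats used by the geostatistics software and Falcon are not described here.

```python
# Convert an ASCII grid-property file to raw binary for fast bulk reads.
import numpy as np

def ascii_grid_to_binary(ascii_path, binary_path):
    # Parsing millions of formatted text values is the slow step the study avoided.
    values = np.loadtxt(ascii_path, dtype=np.float32)
    # Raw binary is far more compact and can be read back in a single call.
    values.tofile(binary_path)

def read_binary_grid(binary_path, shape):
    """Read the raw floats back and restore the 3D grid shape, e.g. (nx, ny, nz)."""
    return np.fromfile(binary_path, dtype=np.float32).reshape(shape)
```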
Well scheduling keeps track of items such as well locations, completion intervals, completion flow capacities, flowing or injection pressures, etc., as operating conditions change over time. Because the showcase study addressed only performance predictions, it was possible to build the entire well-scheduling input data section for the simulator with an ad hoc Fortran program that generated over 6,000 lines of simulator input. In a model study with hundreds or even thousands of wells with historical production, it will be critical to have good preprocessing tools to generate the required well control data.
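The study's generator was an ad hoc Fortran program; the Python sketch below conveys the idea with a hypothetical keyword layout that does not reproduce Falcon's actual input syntax.

```python
# Generate well-control input lines for a prediction-only run (hypothetical syntax).
def well_control_lines(wells, producer_bhp=500.0, injector_bhp=5200.0,
                       years_before_conversion=1):
    lines = []
    for w in wells:  # each well: dict with name, i, j, top/bottom completion layers
        lines.append(f"WELL {w['name']} {w['i']} {w['j']} {w['k_top']} {w['k_bot']}")
        lines.append(f"PRODUCER {w['name']} BHP {producer_bhp}")
        if w.get("convert_to_injector"):
            # Injection wells are produced first, then converted (per the study design).
            lines.append(f"CONVERT {w['name']} INJECTOR BHP {injector_bhp} "
                         f"AFTER {years_before_conversion} YEAR")
    return "\n".join(lines)

print(well_control_lines([
    {"name": "P-001", "i": 10, "j": 22, "k_top": 40, "k_bot": 55},
    {"name": "I-001", "i": 12, "j": 22, "k_top": 40, "k_bot": 55,
     "convert_to_injector": True},
]))
```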
An initial flow simulation, comprising 5 million grid cells, was successfully made on a Cray T3D massively parallel computer equipped with 256 compute nodes at LANL. The actual grind time for this simulation was about 10 hr. However, the time required to transfer massive input and output files back and forth to Tulsa over relatively slow network lines of communication resulted in a decision to perform the study on Amoco's in-house massively parallel CM5 computer.
Amoco's in-house CM5 computer is normally reserved for seismic processing. The size of the flow simulations required all of the computer's 128 processing nodes, and by agreement with the geoscience department, model runs were made at night. Since the average model run lasted about 4 hr, it was possible to make 2-3 runs/night.
Model results
The value of stochastic reservoir modeling is in assessing the economic uncertainty associated with important performance parameters.
The performance variables of interest in this study include producing rates, water injection rates, produced water:oil ratio, and oil recovery factor. The expected values of each have a direct bearing on the project costs (operating and investment), revenues, facilities design, etc. Knowing the range of uncertainty in these variables leads to an economic model that enhances our understanding of risk and return of the investment opportunity.
Peak oil rate predictions for the field varied by more than a factor of two between the high and low cases. The variation in rate and reserves is closely correlated with the initial oil in place for a given realization.
Fig. 1 shows the 25 year profile for oil production, and Fig. 2 is the accompanying histogram of recoverable reserves for the same model runs. The probability distribution function of reserves in Fig. 2 is a key element in risk-weighted economic evaluations.4
The variability in performance is attributed to the interpretation of permeability and porosity between actual data control points and to the scarcity of the control data. Once a trend is established, the relatively large correlation lengths and the lack of control between wells cause it to have a dominant influence on performance.
The permeability maps in Figs. 3a and 3b highlight the differences possible in reservoir characterization even when starting with the same raw data. These 3D maps are based on 5 million points covering the geologic area and represent the highest and lowest OOIP cases, respectively.
The figures show only 13 layers for clarity and use a 1 md cutoff. We expect the acquisition and interpretation of seismic data or additional drilling to reduce the variability.
A question asked at the beginning of this study was whether 50 realizations would be statistically significant. One way to determine the required number of realizations is to plot the running coefficient of variation (standard deviation normalized by the mean) of the results and stop when this coefficient stabilizes. For this study, the coefficients stabilized by about 25 runs, as shown in Fig. 4, plotted for original oil in place.
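A minimal sketch of that stopping criterion follows; the tolerance, window, and placeholder OOIP values are illustrative choices, not those used in the study.

```python
# Running coefficient of variation (CV) as a stopping rule for realization count.
import random
import statistics

def running_cv(values):
    """CV (std dev / mean) after 2, 3, ..., n realizations."""
    return [statistics.stdev(values[:n]) / statistics.mean(values[:n])
            for n in range(2, len(values) + 1)]

def realizations_needed(values, tol=0.01, window=5):
    """Smallest realization count after which the running CV varies by < tol over a window."""
    cv = running_cv(values)          # cv[i] is the CV of the first i + 2 realizations
    for i in range(window - 1, len(cv)):
        recent = cv[i - window + 1:i + 1]
        if max(recent) - min(recent) < tol:
            return i + 2
    return len(values)

ooip = [random.gauss(1.2e9, 1.5e8) for _ in range(50)]   # placeholder OOIP values, bbl
print(realizations_needed(ooip))
```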
Future studies will likely include the integration of seismic data into the reservoir description process, using Falcon to quantify the number of control wells needed to reduce the uncertainty to an acceptable level, optimize well spacing and waterflood pattern orientation, and aid in reservoir management and refinement of the geological model as additional data becomes available.
Challenges
The performance of Falcon using multiprocessor technology was deemed an outstanding success. Building complex 3D descriptions of the reservoir and transferring the huge amounts of associated data are problems still awaiting solutions.
To create and track 127 2D maps (files) of porosity and permeability for each of 50 model runs would have been impractical and would have greatly increased cycle time. The work flow process used by the study team is illustrated in Fig. 5. Total elapsed time required to generate a new geostatistical realization of the reservoir, run the flow simulator on the CM5, and post-process the well data was about 8 hr. Even so, cycle time can be reduced further.
The work embodied in Fig. 5 needs to be more automated, in particular the generation of the geostatistical realizations; a minimal batch-driver sketch follows the list below. Specific areas of improvement include:
- Batch-processing options for the geostatistical software.
- Ability to read and write simulator grid data in binary format.
- Pre-processors that provide statistical measures of input grid data.
- Post-processors that can extract significant field and well parameters from multiple runs and collate them.
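The following batch-driver sketch shows the kind of automation these items point toward; every command, path, and argument is a hypothetical placeholder rather than one of the study's actual tools.

```python
# Batch driver for one realize-simulate-postprocess cycle per realization.
import subprocess

DRY_RUN = True   # set False only if the placeholder commands exist on the system

def sh(cmd):
    """Run (or, in dry-run mode, just print) one step of the cycle."""
    print(" ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)

def run_cycle(realization_id, workdir="runs"):
    run_dir = f"{workdir}/real_{realization_id:03d}"
    # 1. Generate a geostatistical realization in batch mode.
    sh(["geostat", "--seed", str(realization_id), "--out", run_dir])
    # 2. Convert grid properties to binary and launch the flow simulation.
    sh(["grid2bin", f"{run_dir}/perm.txt", f"{run_dir}/perm.bin"])
    sh(["flowsim", "--grid", f"{run_dir}/perm.bin", "--wells", "wells.dat",
        "--out", f"{run_dir}/results"])
    # 3. Post-process: extract field and well summaries for collation across runs.
    sh(["postproc", f"{run_dir}/results", "--summary", f"{run_dir}/summary.csv"])

for realization in range(1, 51):
    run_cycle(realization)
```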
Perhaps the largest impact on cycle time will be the availability of integrated software environments that will allow the seamless integration and automation of the entire work process cycle.
Significant progress
We have presented one company's approach to a particular field study, the largest of its kind within Amoco. Significant progress has been made towards the development of a "next generation" reservoir management system.
It is our opinion that parallel computing is beginning to emerge as a powerful technology in high-end engineering computing. Its usage is likely to increase in the near future.
References
1. Araktingi, U.G., Bashore, W.M., Tran, T.T.B., Hewett, T.A., "Integration of Seismic and Well Log Data in Reservoir Modeling," Reservoir Characterization III, edited by B. Linville, T. Burchfield, and T. Wesson, PennWell Books, 1993, pp. 515-54.
2. Sagar, R.K., Kelkar, B.G., Thompson, L.G., "Reservoir Description by Integration of Well Test Data and Spatial Statistics," SPE paper 26420, SPE Annual Technical Conference and Exhibition, Houston, Oct. 3-6, 1993.
3. Durlofsky, L.J., Behrens, R.A., Jones, R.C., Bernath, A., "Scale Up of Heterogeneous Three Dimensional Reservoir Descriptions," SPE paper 30709, SPE Annual Technical Conference and Exhibition, Dallas, Oct. 22-25, 1995.
4. Gutleber, D.S., Heiberger, E.M., Morris, T.D., "Simulation Analysis for Integrated Evaluation of Technical and Commercial Risk," Journal of Petroleum Technology, December 1995.
The Authors
Gautam Shiralkar is a senior staff petroleum engineer with Amoco Exploration & Production Technology. He has been with Amoco for 10 years. A past editor for the Society of Petroleum Engineers, he holds a PhD in mechanical engineering from the University of California at Berkeley.
Robert Stephenson, senior petroleum engineering associate with Amoco E&P Technology, has been with Amoco for 18 years. He holds a PhD in chemical engineering from Clemson University.
Richard Volz is a senior staff petroleum engineer with Amoco E&P Technology. He joined Amoco in 1980 and holds a BS from the University of Tulsa.
Manuel "Manny" Valle is a senior petroleum geoscientist with Amoco E&P Technology. He has been with Amoco for 8 years and holds a BS in geology from the University of Southern Mississippi.
Kirk Hird has worked for Amoco 16 years, the last 6 in reservoir characterization. A senior staff petroleum engineer with Amoco E&P Technology, he holds a PhD in petroleum engineering from the University of Tulsa.
Copyright 1996 Oil & Gas Journal. All Rights Reserved.