THE QUEST: COMBINING DATA TO DESCRIBE THE RESERVOIR
Peter M. Duncan
3DX Technologies Inc.
Houston
A picture is worth 1,000 words.
Make that 10,000 words and many times that in dollars when it comes to the powerfully descriptive and revealing images of the subsurface that are being created with computer-aided exploration (CAEX) and 3D seismic data.
Meanders of ancient rivers emerge from the screen much like invisible writing when the candle is applied. Colonies of reefs are discovered growing near a 350-million-year-old beach. The geologic history of an area can be played backward and forward like some home movie.
The real power in these images, the real value to exploration, is that seismic data and their interpretation have been made more accessible than ever before to the geologist and reservoir engineer.
Visualization software lets you almost "see" the geology in the 3D data set - a treat that once was exclusive to the experienced interpreter, who had the seismic patterns ingrained deeply in mind.
The result is vastly improved opportunity for communication between geologists, geophysicists, and reservoir engineers, which in turn is changing work practices in a trend called "integration." From this improved communication comes a greater understanding of the reservoir and a concurrent lowering of risks.
NO SIMPLE FORMULA
The success ascribed to 3D and the colorful images created by the workstation have made for good copy in trade publications. The upside of this is that more and more people are appreciating what 3D can do for them.
The downside is a growing perception that 3D is a completely developed tool - a black box that simply can be buttoned onto any project for instant success.
The reality is very different: 3D may well be over 25 years old but is still very immature in its practice.
Every aspect of the technology - from positioning to data processing to data management and interpretation - lags in practice behind what theory understands.
For example, a typical survey of 25 sq miles might contain as many as 1.2 million traces and 9 gigabytes of data before stack. If this is a U.S. Gulf Coast project, the operator probably wants to do amplitude vs. offset (AVO) analysis, retaining all 1.2 million traces after a full 3D prestack migration.
In reality, there is no way to do that with today's hardware and software. In practice, a company can retain the stacked and migrated data set representing perhaps 5% of the data collected in the field and maybe try to look at the AVO effect on select gathers.
Clearly, this is a far cry from the ideal.
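The gap can be sketched with back-of-envelope arithmetic. The trace count below comes from the article; trace length, sample rate, sample size, and fold are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope 3D survey data-volume arithmetic.
# Trace count (1.2 million) is from the article; trace length, sample
# rate, byte size, and fold are assumed for illustration.

def survey_size_gb(n_traces, trace_seconds=6.0, sample_ms=4.0, bytes_per_sample=4):
    """Approximate storage for a set of traces, in gigabytes."""
    samples_per_trace = trace_seconds * 1000.0 / sample_ms
    return n_traces * samples_per_trace * bytes_per_sample / 1e9

prestack_traces = 1_200_000            # ~1.2 million field traces
fold = 30                              # assumed CMP fold
stacked_traces = prestack_traces // fold

prestack_gb = survey_size_gb(prestack_traces)
stacked_gb = survey_size_gb(stacked_traces)

print(f"prestack: {prestack_gb:.1f} GB")
print(f"stacked:  {stacked_gb:.2f} GB "
      f"({100.0 * stacked_traces / prestack_traces:.1f}% of field traces)")
```

With these assumed parameters the prestack volume lands in the same ballpark as the 9 gigabytes quoted above, and the stacked volume retains only a few percent of the field traces, consistent with the 5% figure in the text.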
THE QUEST
So what is the geophysical ideal?
Simply stated, it is to be able to quantify unambiguously a target prospect as far as trap, reservoir, and hydrocarbon content from seismic data alone - to turn the drill bit into a tool for recovery, not discovery - and do this in real time.
Will we ever achieve this ideal? No - at least not from seismic data alone.
The nature of the seismic experiment is such that we must calibrate the acoustic response with hard subsurface data. The drilling industry has nothing to fear from geophysics. Nevertheless, there are exciting developments ahead that will make all of us more efficient at what we do.
Acquiring enough field data to allow for 3D imaging at a reasonable cost has required putting more and more channels in the field. Land systems with 2,000 channel capacity are now available.
Even so, station spacing usually must be 2 to 4 times larger than what the geophysicists really would like. The movement to more channels and higher resolution will surely continue, such that in the near future the 12.5 m spacing used offshore can be duplicated on land.
Larger numbers of channels will exacerbate the positioning dilemma onshore. Next to permitting, perhaps the biggest impediment to onshore 3D surveys is getting accurate positioning data on station locations.
Traditional surveying methods are just not adequate for laying out 3D surveys. At least one vendor has shown a real time positioning system for vibrators using the differential Global Positioning System (GPS) that will display the location of the source back in the doghouse. This should help eliminate misplaced or misassigned shots.
Very shortly we should be able to produce real time fold plots in the doghouse just as can be done at sea. Such assignment of geometry in the field will greatly shorten the time to interpretation.
The trend to higher numbers of channels will soon outstrip the facilities in use today for data processing and interpretation. 3D seismic data processing largely is still done with software that has grown out of a 2D approach.
There is a need for a wholesale redesign of the management of the 3D data set that allows for true 3D processing from the outset - and a need to shorten the time to presentation of the fully migrated image.
There is once again a movement to putting processing systems in the field, both on land and at sea. These systems can barely handle even today's channel density and must improve greatly before the large data processing facility is replaced.
Still, any move toward letting the geophysicists begin to see the image as it is acquired is a move in the right direction. Ultimately, our vision should be to have the subsurface image appear on the screen as the data are collected, just as logs appear in the logging truck today. Actually achieving real time imaging will probably take until the end of the century.
TOWARD INTEGRATION
The CAEX workstation of today is pretty well developed for handling migrated stacked 3D data for structural interpretation.
Volume visualization software that allows for an incredibly elegant and powerful display of the data is quickly becoming standard.
But to reveal reservoir character we have to go well beyond the picking and displaying of seismic events.
We must be able, as a practical matter, to integrate the subsurface information into the seismic data base, isolate seismic attributes other than time of arrival, and calibrate these attributes to the lithology encountered by the drill bit. The experienced seismic interpreter has always performed this "inversion" of seismic data in his head, eventually refining the model at the expense of dry holes.
3D data volumes and the rigors of working in mature areas where 10s or 100s of wells exist make the task far too large to be a mental arithmetic problem anymore.
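One small piece of that calibration task can be sketched as a least-squares fit of a seismic attribute to a property measured at the wells. All names and values below are hypothetical, chosen only to show the shape of the computation:

```python
# Calibrating a seismic attribute to well control with a least-squares line.
# Hypothetical data: an amplitude attribute extracted at four wells versus
# log-derived porosity at the same wells.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

amplitude = [0.10, 0.20, 0.30, 0.40]   # attribute at the wells (hypothetical)
porosity  = [0.08, 0.12, 0.16, 0.20]   # measured porosity at the same wells

a, b = fit_line(amplitude, porosity)
predicted = a * 0.25 + b               # porosity predicted away from well control
print(f"porosity ~ {a:.2f} * amplitude + {b:.2f}; at amp 0.25 -> {predicted:.3f}")
```

The fitted line can then be applied everywhere in the 3D volume, turning an attribute map into a lithology estimate - exactly the calculation that becomes untenable as mental arithmetic once tens or hundreds of wells are involved.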
The workstation should allow for an efficient combining of geological, geophysical, and engineering data into a consistent 3D image.
To date, however, the parts of this problem have been conquered only in separate pieces of software.
It remains to put it all together, something that will happen perhaps within the next 2 years, at least for stacked data sets.
USING ALL THE DATA
It is important to remember that the stacked data represent as little as 1% and usually not more than 15% of the data collected in the field.
If we really want to quantify the reservoir, open the container and peer inside, it will be necessary to use all the data in the unstacked but fully migrated fashion.
Right now, we can't even produce this data set at reasonable cost, let alone manage, manipulate, and "invert" it with all the subsurface constraints. Yet this is where we must be headed if we are to continue our quest to reliably predict lithology from seismic data.
The end result of this quest will be a dynamically addressable 3D image of the subsurface in depth that pictures not only structure but lithology, reservoir distribution, quality, and fluid content as estimated from the available wells, production history, and seismic data.
The estimates will be quantitative and associated with a numerical level of uncertainty. That uncertainty will be our risk, which can be converted to a dollar number and allow for a well reasoned decision as to how to reduce the risks further - be that drilling or more seismic.
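Converting that uncertainty to a dollar number is standard decision analysis. A minimal sketch using expected monetary value (EMV) follows; every figure in it is hypothetical, chosen only to illustrate the drill-now versus shoot-more-seismic comparison:

```python
# Converting geologic risk to dollars with expected monetary value (EMV).
# A standard decision-analysis sketch; all dollar figures and probabilities
# are hypothetical.

def emv(p_success, success_npv, dry_hole_cost):
    """Expected monetary value of drilling, in the units of the inputs ($MM)."""
    return p_success * success_npv - (1.0 - p_success) * dry_hole_cost

# Drill now: a risky prospect with 20% chance of success.
drill_now = emv(p_success=0.20, success_npv=40.0, dry_hole_cost=5.0)

# Suppose a 3D survey costing $1.5MM raises confidence to 35%, then drill.
shoot_then_drill = emv(p_success=0.35, success_npv=40.0, dry_hole_cost=5.0) - 1.5

print(f"EMV, drill now:            {drill_now:.2f} $MM")
print(f"EMV, shoot 3D, then drill: {shoot_then_drill:.2f} $MM")
```

In this contrived case the seismic spend more than pays for itself in expected value, which is precisely the "well reasoned decision" the paragraph above describes.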
Copyright 1994 Oil & Gas Journal. All Rights Reserved.