PRACTICAL DEPTH CONVERSION WITH SPREADSHEET

June 25, 1990
Richard A. Box
British Gas
Houston

In the oil and gas business, seismic time to depth conversion problems are best solved, in many cases, by looking for velocity versus depth relationships on a spreadsheet program with graphics capability. Thus seismic data may be married with well data in a sensible way with minimum computation. The work is done by the interpreter as part of the mapping step, not by data processing contractors, or anyone else unfamiliar with the geology. The incremental cost is negligible.

INTRODUCTION

The most cost effective way of improving the contribution seismic data makes to an exploration or development program, in many cases, is to do detailed, geologically oriented time to depth conversions, or "depth interpretations," on a spreadsheet program to get rid of the misleading effects of seismic velocities. The dangers in ignoring this have been repeatedly stressed in the geophysical literature (Tucker and Yorston, Ambrose), but in practice, most wells are drilled without it. The problem seems to be that ancient one of practicality: Doing a good depth conversion via traditional methods is so time-consuming that it cannot be cost-effective; doing a bad depth conversion is worse than none at all; so the best plan is to not do one unless forced to because an area has a reputation as "tricky."

But the technology is available to most geophysical departments, even many one-person ones, to do the job right, routinely, at low cost. The method herein is cost-effective, more accurate than the popular constant-velocity depth interpretation method, and far more geologically satisfying than the lateral velocity gradient method.

For purposes of illustration, four methods are contrasted below.

Each does the conversion as part of the mapping step, rather than as part of seismic processing (extreme problems will still demand depth migration or other exotic processing, but most datasets can be interpreted cheaply and quickly without this). The differences between these four methods are demonstrated via reference to a fictional example abstracted from a pair of real cases from a certain intermontane basin in the U.S.

THE EXAMPLE

A faulted nose prospect has been identified, using a dense grid of excellent-quality seismic data (Fig. 1). The map is a time-structure map on the "Kxy" horizon, Cretaceous in age. This map was built in time, then very accurately migrated. Line-tie analysis, and other factors, indicate that the standard error is probably 5 ms or less.

The biggest weakness of the prospect is the possibility that the rocks juxtaposed to the Kxy formation at the fault are porous, causing the feature to leak (the fault is nearly vertical in its present orientation, so no raytrace effects need be considered). In order to examine this possibility closely, an accurate depth interpretation is required. In fact, an accuracy on the order of 60 ft is necessary to make the prospect drillable. Can this be achieved?

There are eight wells in the area, and all are located on seismic shotpoints in the grid. The depth information from these, converted to the seismic datum of 5,000 ft above mean sea level, is shown in Table 1.

It is assumed that the calculations are done on a spreadsheet, although there is nothing herein too involved for a good calculator to handle.

Most working interpreters nowadays have access to spreadsheet programs on PCs or mainframe computers, and most who use them find them very helpful.

METHOD 1 CONSTANT-VELOCITY

A graph is made of depth versus time (Fig. 2). A linear regression is used to find the best-fit straight line. In our case, the line is

Depth = 9137 * Time - 4809.

This is disturbing, both theoretically and practically. A time of zero implies a depth of -4,809 ft, which makes no sense theoretically (both are measured from the same datum, so zero time should equal zero depth). Moreover, the idea that one velocity should work over a large area with well over 12,000 ft of relief does not appeal (Gregory). Looking at the results practically, the difference between the known depth and the calculated depth is always at least 29 ft and reaches -631 ft! The standard error is 419 ft.

Obviously, if we use this method to convert the large number of seismic time values, many of the resulting depth values will contain errors on the order of 419 ft.

This fails utterly to meet the tolerance of 60 ft set out above.
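
For readers who would rather check the arithmetic outside the spreadsheet, here is a minimal sketch of Method 1 in a modern scripting language (Python). The (time, depth) arrays are hypothetical stand-ins for Table 1, which is not reproduced here; only the fitting procedure is the point.

    import numpy as np

    # Two-way time (sec) and depth below seismic datum (ft) at eight wells.
    # Hypothetical stand-ins for Table 1.
    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])

    # Method 1: least-squares straight line, Depth = slope*Time + intercept.
    slope, intercept = np.polyfit(time, depth, 1)

    # Per-well errors and the standard error of the fit
    # (the article's own data gave a standard error of 419 ft).
    errors_m1 = depth - (slope * time + intercept)
    std_err = np.sqrt(np.sum(errors_m1 ** 2) / (len(depth) - 2))
    print(slope, intercept, std_err)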

METHOD 2 VELOCITY "GRADIENT" MAPPING

Each of the depth values is divided by its corresponding time value (both measured from the same datum) to get a half-velocity value, as shown in column F of Table 2. These velocities are plotted on the map, one per well, then contoured to give a "velocity field," or "velocity gradient map." The field map and the time map are multiplied (either pointwise, or at contour intercepts) to give the depth map.
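
The arithmetic itself is one division per well; a sketch, using the same hypothetical stand-in arrays as in the Method 1 sketch above:

    import numpy as np

    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])

    # Column F of Table 2: half-velocity (ft/sec) = one-way depth over
    # two-way time. Each value is posted at its well, hand-contoured into
    # a "velocity field," and the field map is then multiplied by the
    # time map, point by point, to produce the depth map.
    half_velocity = depth / time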

This method is very dangerous. It is generally impossible to contour this map, because it has very few control points, the points are expressed in unfamiliar units, and the points are noisy (i.e., inexact).

Many interpreters accept the fact that this method is not generally applicable, but argue that it works in two common special cases, which I nickname "Napoleon" and "Dovetail."

"Napoleon" cases are when the interpreter thinks he knows everything there is to know about the area in question, so he doesn't fear the task of contouring the velocity map despite its very few points of control. This reasoning is wrong because it is circular. If he really doesn't understand the area, he can't contour the velocity map. If he does understand it, why shouldn't he just contour the well depths themselves and pitch the seismic in the garbage? After all, his vaunted intuition has been developed in working with structure maps, and should be used directly. If contouring a structure map with few control points scares you, then contouring a velocity map with the same number should terrify you!

"Dovetail" cases are those in which the velocity field contours out in such a way as to match nicely with something else. Reassuring, "plain vanilla" dip into the basin, for example. Whatever the particulars, when the velocity map fits something, it "feels right" to the interpreter. So right, in fact, that he can't detach himself enough to realize that Method 4, below, would probably do the same job more accurately, entails less work, and is easier for others to understand once it is finished.

Even in the worst cases, where lateral velocity gradients actually do exist, the data should be corrected via Method 3 (or Method 4, if applicable) before being mapped, because vertical (i.e., compactional) effects must be removed before the velocity gradient may be considered lateral. In more than a few cases, interpreters will discover that nothing lateral is left to contour.

METHOD 3 VELOCITY VERSUS DEPTH

A plot is made of velocity versus depth (Fig. 3). Another regression finds the best linear fit. In the example, we get

Velocity = 0.145 * Depth + 4671.

Now in order to get depth as a function of time, we combine this equation with the definition of velocity,

Velocity = Depth/Time

(half-velocity equals one-way depth over two-way time), to get

Depth/Time = 0.145 * Depth + 4671,

or

Depth * (1 - 0.145 * Time) = 4671 * Time,

Depth = (4671 * Time) / (1 - 0.145 * Time).

This is much better, but not quite sufficient. Rock physics theory supports the notion that seismic velocities should be a function of depth, because velocity is proportional to confining pressure, which is proportional to depth of burial (Gregory, Faust). Empirical evidence backs this up, in general (Gregory, Faust).

The errors this method yields with respect to the eight wells in the example are shown in Table 2, Col. J. These errors range from -17 to -262 ft, with a standard error of 134 ft. This is a great improvement over Method 1, but more than half the wells still exceed the tolerance of 60 ft.
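
A sketch of the whole Method 3 procedure, again with the hypothetical stand-in arrays; the comments cite the coefficients the article reports for its own data:

    import numpy as np

    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])

    # Method 3: fit half-velocity as a straight line in depth,
    # Velocity = a*Depth + b (the article's data gave a = 0.145, b = 4671).
    velocity = depth / time
    a, b = np.polyfit(depth, velocity, 1)

    # Invert Depth/Time = a*Depth + b to Depth = b*Time / (1 - a*Time).
    def depth_from_time(t, a, b):
        return b * t / (1.0 - a * t)

    # Per-well errors (the analog of Table 2, Col. J) and standard error.
    errors_m3 = depth - depth_from_time(time, a, b)
    std_err = np.sqrt(np.sum(errors_m3 ** 2) / (len(depth) - 2))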

METHOD 4 VELOCITY VERSUS DEPTH AND ITS FRIENDS

This is the preferred method; it is an outgrowth of Method 3. Study the departures from the trend on Fig. 3, and speculate about their causes. Consider the raw data, and try fitting other straight lines to some of the points. Refer to the map, and look for ideas: What else can affect the velocity? The geophysicists for this project should repeatedly ask the geologist (especially if they are not the same individual), "What makes these wells different from these?" If nothing else comes to mind, type the kelly bushing elevation (or seabottom depth) of the wells into the spreadsheet, and graph velocity versus kelly bushing. If there is any relationship, ask yourself, "Why? Is this because different formations occupy the deeper parts of the basin? Has my seismic been misdatumed?" Work back and forth between math and geology in this manner until something clicks.
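
One way to mechanize the "what makes these wells different" step is to plot the Method 3 residuals against each candidate variable in turn. A sketch follows; the kb_elev values are hypothetical well-header entries, not data from the article:

    import numpy as np
    import matplotlib.pyplot as plt

    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])
    kb_elev = np.array([5210, 5180, 5420, 5390, 5600, 5570, 5250, 5310])

    # Method 3 residuals, computed as in the previous sketch.
    a, b = np.polyfit(depth, depth / time, 1)
    errors_m3 = depth - b * time / (1.0 - a * time)

    # Any visible trend here is a cue to ask "why?" before using it.
    plt.scatter(kb_elev, errors_m3)
    plt.xlabel("Kelly bushing elevation (ft)")
    plt.ylabel("Method 3 depth error (ft)")
    plt.show()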

In the example, the points can be imagined to fit two different linear sub-trends: three points on the right, five on the left. The soul of the method is to determine whether this makes sense, by asking, "What do these three wells have as opposed to the rest?" The answer in this case is clear: they are east of the fault. Does it make geological sense that these should be different? Keep in mind that compaction with depth has already been accounted for.

Well E and well F are at about the same depth but have differing velocities; what difference does it make if E got there by folding and F got there by faulting? Shouldn't compaction be the same?

No. In the example, the fault is very old and known to have affected both sedimentation patterns and the span of time for which the rocks were exposed to compaction. Well E was compacted longer, beneath a different composition of materials, than was F. To fully understand this process would require detailed knowledge of the burial history, which in this area would include the exhumation of the basin after Laramide time.

But the point is not to derive the exact relationship or even understand it fully, but rather to convince yourself that it does exist ... that it is more than mere coincidence. We must believe one of the following three basic alternatives about what controls velocity besides depth:

  A. The fault really does control velocity. Errors are small. If we had more wells to work with, the relationship would show up more clearly.

  B. Something else controls velocity: the thickness of an overlying bed, for example. Errors are probably small. The relation of velocity to the fault is an illusion caused by the partial coincidence of overlying bed thickness with the fault; if we had more wells, this would be clearer.

  C. Nothing else controls velocity; velocity depends on depth alone. Errors are large. Apparent relations to other factors (such as fault block) are coincidences, illusions caused by having too few wells. The errors are caused by impurities in the data, and nothing will remove them short of reshooting the data, radical new processing, or correcting mispicks. If we had more wells, the coincidences would disappear, leaving us with velocity being a function of depth, plus or minus huge errors.

What to do if this three-way choice is not clear is considered in the following section. But for the sake of the present illustration, let us assume that (A) is clearly the best choice in this case. What happens if we subscribe to this concept?

Method 3 is repeated, once for each subset of wells (Fig. 4). The values are given in Table 2. The two equations are

Velocity = 0.166 * Depth + 4511

and

Velocity = 0.167 * Depth + 4294.

Notice that the slopes are essentially the same, while the intercepts differ by 217 ft/sec.

This means that, "Compaction with depth is the same on both sides of the fault, except that the downthrown side has a 217 ft headstart."

This seems entirely reasonable. Once the decision is made to utilize these relationships, each equation must be solved to give depth as a function of time, as it was in Method 3 above.

The error between this calculation and the actual well values varies from -1 ft to 60 ft; the standard error is 32 ft; none of the eight wells exceeds our stated tolerance of 60 ft.

These items are shown (Table 2, Col. N).
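
A sketch of Method 4 under choice (A): repeat the Method 3 fit once per fault block. The east mask and the arrays remain hypothetical; the comments cite the coefficients the article reports for its own data.

    import numpy as np

    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])
    # Hypothetical: three wells east of the fault, five west.
    east = np.array([0, 0, 1, 0, 1, 0, 1, 0], dtype=bool)

    velocity = depth / time

    # One velocity-versus-depth fit per fault block; the article's data
    # gave Velocity = 0.166*Depth + 4511 and Velocity = 0.167*Depth + 4294
    # (which equation belongs to which block is not shown here).
    a_e, b_e = np.polyfit(depth[east], velocity[east], 1)
    a_w, b_w = np.polyfit(depth[~east], velocity[~east], 1)

    # Each equation is inverted exactly as in Method 3.
    def depth_from_time(t, a, b):
        return b * t / (1.0 - a * t)

    pred = np.where(east, depth_from_time(time, a_e, b_e),
                          depth_from_time(time, a_w, b_w))
    errors_m4 = depth - pred   # the analog of Table 2, Col. N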

LIMITS OF METHOD 4

Should the interpreter now search for further relationships? We have found that velocity is primarily a function of burial depth, and that it also varies by fault block; what else does it depend on? Our evolving belief about velocity is following a potentially endless path: velocity as a function of depth; then of depth and fault block; then of depth, fault block, and something else; and so on.

Where will the iteration end? Won't we tend to imagine relationships forever, never having the courage to claim that we have exhausted the possibilities? If so, we would spend forever analyzing the data and never make a map, which is not likely to enhance our reputations for practicality.

This quandary is quickly remedied by a dose of common sense:

If the error remaining to be fixed is already less than the chatter in the input data, then further analysis is pointless.

In the case above, this is clearly happening after the second pass: line-tie errors, pick errors, migration errors, and well survey errors together create an error which is on the order of 5 ms (just under 40 ft), as discussed before. Our velocity analysis is accurate to 32 ft. To analyze velocity further would be to kid ourselves.

This is handy for choosing between (A) and (B), also. For example, stipulate that we picked thicknesses of a certain overlying horizon, analyzed these, and found that the "depth & thickness" method was accurate to 20 ft, which is better than the "depth & fault" method, which is accurate to 32 ft as discussed above. Which method should we use?

The quickest. If the basic data is really only accurate to 40 ft, then any improvement beyond that must be happenstance. Between two perfect methods, choose the easiest to implement. When reviewing someone else's work, you need to keep in mind that this last step is ambiguous and that what is "the easiest to implement" will vary from shop to shop depending on its available technology.

COMPARISON OF ERROR TERMS

We have defined the error as the difference between the depth value at a well and the depth-interpreted seismic point corresponding thereto. This does not imply that well data are always perfect. In fact, a major plus of this method is that it forces consideration of the geologic reasonableness of the velocity interpretation before depth conversion, thereby often prompting a rethinking of the wells as much as of the seismic. But the question of whether the source of the error lies in the well data or the seismic data is generally immaterial to the main question: how to get a good depth interpretation.

However, it is instructive to plot these errors and compare them between methods. This has been done in Fig. 5. This figure gives a good feel for how accurate a prognosis can be made from seismic data depth-converted via each method. When a curve on this type of graph tracks the zero-error line within an adequate tolerance, it is time to quit deriving better methods and use that one. If one well has an unusually high error, it will stand out on this plot, and its data and the adjacent seismic picks should be checked.
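
A sketch of a Fig. 5-style comparison, reusing the hypothetical arrays from the earlier sketches; the well labels A-H follow the article's mention of wells E and F but are otherwise assumed:

    import numpy as np
    import matplotlib.pyplot as plt

    time  = np.array([0.40, 0.85, 1.23, 1.57, 1.87, 2.09, 2.29, 2.47])
    depth = np.array([2000, 4500, 7000, 9500, 12000, 14000, 16000, 18000])
    east = np.array([0, 0, 1, 0, 1, 0, 1, 0], dtype=bool)   # hypothetical
    velocity = depth / time

    def invert(t, a, b):
        # Depth = b*Time / (1 - a*Time), from Velocity = a*Depth + b.
        return b * t / (1.0 - a * t)

    # Method 1: straight line in time.
    m, c = np.polyfit(time, depth, 1)
    errors = {"Method 1": depth - (m * time + c)}

    # Method 3: one velocity-versus-depth fit for all wells.
    a, b = np.polyfit(depth, velocity, 1)
    errors["Method 3"] = depth - invert(time, a, b)

    # Method 4: one velocity-versus-depth fit per fault block.
    a_e, b_e = np.polyfit(depth[east], velocity[east], 1)
    a_w, b_w = np.polyfit(depth[~east], velocity[~east], 1)
    errors["Method 4"] = depth - np.where(east, invert(time, a_e, b_e),
                                                invert(time, a_w, b_w))

    # One error curve per method, against the zero-error line.
    wells = list("ABCDEFGH")
    for name, err in errors.items():
        plt.plot(wells, err, marker="o", label=name)
    plt.axhline(0.0, color="black", linewidth=0.5)
    plt.ylabel("Error at well (ft)")
    plt.legend()
    plt.show()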

There is one more question worth asking at this point: Are the wells used really representative of the area to be drilled? Are they representative of the entire mapped area? Even if they are, are they such that no area-bias is induced?

Like all statistical methods, Method 4 works better with more data, so there is a tendency to input all available wells for study. This is very good for the analysis phase (i.e., everything discussed above), but it is often better to fine-tune the final go-around by weeding out wells not exactly on shotpoints, thinning out dense wells in a field, removing wells very far from the area of interest, and so on.

SUMMARY, CONCLUSIONS

  • Depth conversion often is most practically done by the interpreter, who converts a time map to depth.

  • Spreadsheets are a cost-effective way of handling the derivation of a formula to be used for depth conversion. It helps greatly if the spreadsheet program will make onscreen graphs quickly, with convenient default values for scales and limits. Printouts may be made to document the conversion.

  • Different methods of depth conversion yield very different results.

    The method chosen should be one which is correct practically and theoretically; the outcome should make geologic sense.

  • If velocity varies both horizontally and vertically, it is better to analyze it vertically first, and then analyze the residuals horizontally. The common method of horizontal analysis, called "velocity (gradient) mapping," prematurely combines the two.

  • Many times the horizontal component of the velocity gradient is found to be zero; what appeared to be a lateral gradient was in fact explainable as a compactional or geopressure phenomenon, which is proportional to depth.

  • Ignoring the fact that velocity is a function of depth is wrong theoretically and practically.

BIBLIOGRAPHY

Faust, L.Y., Seismic Velocities, Geophysics, Vol. 16, pp. 192-206.

Gregory, A.R., Rock Physics, AAPG Memoir 26, pp. 18-19.

Ambrose, R.W., Time to Depth Conversion Using a Geometrical Form of Downward Continuation, 49th Annual International SEG Meeting, 1979, Preprint No. R-39, p. 3.

Tucker, P.M., and Yorston, H.J., Pitfalls in Seismic Interpretation, SEG Monograph Series No. 2, pp. 8-25.

Copyright 1990 Oil & Gas Journal. All Rights Reserved.
