Procedure optimizes well test frequency

March 25, 2002

Based on a presentation to the ASME Engineering Technology Conference on Energy, Houston, Feb. 4-6, 2002.

Determining the optimum well testing interval, with the model discussed in this article, can balance the cost of performing periodic well tests with the benefits of finding well problems that may reduce production.

A major premise in formulating the model is that for developed fields, one can improve production by limiting the amount of deferred production.

Well test data from three Permian basin and four Gulf of Mexico wells (Table 1) show the relationship between testing frequency, well test quality, and accuracy. Potential savings can be realized by determining an optimum well test interval.

Well tests

Operators routinely test wells to evaluate well performance and to establish a lease allocation factor, which in turn determines tax and royalty payments. Most operators test wells through conventional gravity separators. These vessels separate the produced stream into oil, water, and gas components before each component's production rate is measured separately.

Also, operators now can install multiphase measurement systems that improve well test results through better accuracy and consistency and that allow more frequent well testing.

Well test measurements are taken at various time intervals, and it is usually assumed that the test data represent the well's production capability for a period before and after the test interval, or until another well test is obtained.

Produced fluids from multiple wells often flow to a central facility that processes the total flow stream and separates it into oil, water, and gas streams. The separated hydrocarbon streams are mostly sold. Well test measurements form the basis for allocating the sold oil and gas back to the producing wells.

The sum of all well test measurements is called the theoretical production from the field. This sum deviates from the volume of fluids sold.1 One accounts for the difference with an allocation factor, which also needs to take into account the following:

  • Well downtime. Random well testing or other indications may not accurately determine the total downtime.
  • Normal production decline. In mature fields most wells are on a natural decline and most tax and royalty accounting systems credit production at the last test rate. Forced allocation tends to minimize but not eliminate the discrepancy.
  • Transient production rates. Wells on artificial lift may have transient production rates that are greater than average, and the well tests may not detect these transient effects.
  • Entrained gas or oil. Most well tests use conventional gravity separators that separate the produced stream into components before measurement, but factors such as entrained gas or water in the oil stream may cause measurement errors.2-4

Because of these factors, the allocation factor may vary by 2-40%.5 A high allocation factor suggests that the well tests may not fully represent well performance.
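
As a simple illustration of the allocation step, the sketch below prorates the volume sold back to individual wells in proportion to their most recent test rates. The well names, rates, and sold volume are hypothetical, and a real allocation system must also handle the downtime, decline, and measurement effects listed above.

```python
# Hypothetical illustration of production allocation: sold volume is prorated
# back to wells by their last test rates. All names and numbers are invented.
last_test_rates = {"Well A": 250.0, "Well B": 400.0, "Well C": 150.0}  # b/d

theoretical = sum(last_test_rates.values())  # "theoretical" field production, 800 b/d
sold = 760.0                                 # volume actually sold at the facility, b/d

allocation_factor = sold / theoretical       # 0.95 here, i.e., a 5% allocation error
for well, rate in last_test_rates.items():
    print(f"{well}: {rate * allocation_factor:.1f} b/d allocated")
```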

In the past 20-30 years, the industry has improved the detection of unforeseen changes in individual wells. Automated well testing systems,5 short-duration tests, flow detection devices,6 improved oil-water measurement devices,7-8 and novel separation techniques9 have improved well testing accuracy and consistency. But many production operations still have a high allocation error.

In the past 10 years, some operators have started installing well testing systems with multiphase measurement technology.10-11 These systems provide operators with greater measurement accuracy and allow more frequent well tests, although at a higher capital cost.

Several field studies have shown that improved accuracy and more frequent tests can help an operator identify problem wells more quickly and decide when a well needs repair.5-7 This improved diagnostic capability allows the operator to implement more effective maintenance and well workover efforts that decrease deferred production.

Model development

An unplanned well shutdown can reduce overall production and thus revenue from a field. For a field with a natural decline, Fig. 1 illustrates the relationship among lost production, the time needed to detect the problem, and the result of the remedial work to correct the problem.

The lost production in Fig. 1 is defined as the production (revenue) that is not realized during the accounting period because of events not anticipated by the operator. These unplanned events can be caused by plugged chokes, tubing leaks, pump leaks, increased annulus pressure, flood response, fluid breakthrough, etc.

Well testing detects only some unplanned events.13 To detect other events, one also has to rely on information from pump-off controllers, visual inspection, etc.

Each operating area has an optimum split between problems detected by well testing and those detected by other means, and well test accuracy establishes the detection level for lost production.

The optimum period between the well tests and the test accuracy represents the best trade-off among well testing cost, percentage of unplanned events found by well tests, and recovery of lost production. Fig. 1 illustrates the relationships among these parameters.

Limiting the amount of deferred production is a major premise in developing the model for determining the optimum test interval. The two basic steps involved are:

1. Establish the accuracy and certainty with which the well testing system can detect changes in the production rate of a well.

2. Calculate the optimum test interval, at system measurement accuracy, to maximize recovery of deferred production.

Well test accuracy

In reporting field data, one often overlooks the accuracy and confidence level of well test data. The quality of these parameters can be assessed by analysis of raw data collected from well test systems. This article uses the statistical method called process capability index (PCI), described in Reference 12, for determining the accuracy (detecting changes in flow rate) and confidence interval of the well test data from three wells in the Permian basin and four wells in the Gulf of Mexico (Table 1).

The PCI method can determine the percent production change that can be recognized with a high confidence level. The process includes the following steps (a minimal sketch follows this list):

  • Identifying data that appear to be wrong.
  • Establishing measurement error levels.
  • Determining the statistical confidence at the error levels.
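
The sketch below walks through these three steps on a hypothetical set of repeated well tests. It is only a minimal illustration of this kind of screening; Reference 12 describes the actual PCI procedure.

```python
# Minimal sketch of screening repeated well tests and estimating the production
# change detectable at ~95% confidence. Hypothetical data; see Reference 12 for
# the actual PCI procedure.
import statistics

tests = [512, 498, 505, 521, 47, 509, 494, 515]  # oil rates from repeated tests, b/d

# Step 1: identify data that appear to be wrong (here, >25% from the median).
median = statistics.median(tests)
clean = [q for q in tests if abs(q - median) / median <= 0.25]

# Step 2: establish the measurement error level from the cleaned data.
mean = statistics.mean(clean)
sdev = statistics.stdev(clean)

# Step 3: determine the change detectable with ~95% confidence (about 2 sigma).
detectable_pct = 100 * 1.96 * sdev / mean
print(f"Detectable production change at 95% confidence: about {detectable_pct:.0f}%")
```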

Figs. 2a and 2b show the confidence level with which the existing well testing systems can detect a loss of production. In Fig. 2a, Well 11D has the best accuracy, detecting about a 27% production change at a 95% confidence level. Wells 1D and 5 have much lower accuracies, detecting only a 60% change at 95% confidence. Well 1 has poor-quality well tests.

Fig. 2b illustrates the data analysis for the three Permian basin wells. These tests are better than those for the Gulf of Mexico wells, but both areas show similar trends; for example, no two well testing systems have the same accuracy.

Optimum well test interval

Well testing can be performed at a random or fixed (periodic) interval. The model for determining the optimum interval uses regular, periodic sampling and assumes that well performance changes occur, on average, halfway through the interval.

To establish the optimum well test interval, one has to optimize the annual cost of well tests against the potential annual revenue from the recovery of the deferred production. The five equations, shown in the equation box, calculate the following values (a rough stand-in is sketched after this list):

  • Cost of the well tests - Equation 1.
  • Optimum well test annual cost - Equation 2.
  • Value of deferred production - Equation 3.
  • Optimum value of annual deferred production costs - Equation 4.
  • Total annual cost including test and deferred costs - Equation 5.
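
The published equations appear only as an image, so the sketch below implements one plausible form of them: an annual testing cost that falls as the interval lengthens, and a deferred-production cost that assumes a given rate of unplanned events, each detected, on average, halfway through the test interval. The functional forms and all parameter values are assumptions, not the authors' exact equations.

```python
# A rough stand-in for the equation-box model; forms and values are assumptions.
DAYS_PER_YEAR = 365.0

def annual_test_cost(interval_days, cost_per_test):
    """Eq. 1 analog: testing cost per year (negative = cost)."""
    return -(DAYS_PER_YEAR / interval_days) * cost_per_test

def annual_deferred_cost(interval_days, qo, pct_qo, price, events_per_year):
    """Eq. 3 analog: each event loses pct_qo of rate qo and is assumed to be
    detected, on average, halfway through the test interval."""
    lost_bbl = qo * pct_qo * (interval_days / 2.0)
    return -events_per_year * lost_bbl * price

def total_annual_cost(interval_days, qo, pct_qo, price, events_per_year, cost_per_test):
    """Eq. 5 analog: sum of testing and deferred-production costs."""
    return (annual_test_cost(interval_days, cost_per_test)
            + annual_deferred_cost(interval_days, qo, pct_qo, price, events_per_year))

# Hypothetical inputs: 500-b/d well, 20% detectable change, $20/bbl oil,
# 2 unplanned events/year, $500/test. Scan for the least-negative total.
optimum = max(range(1, 61),
              key=lambda t: total_annual_cost(t, 500.0, 0.20, 20.0, 2.0, 500.0))
print(f"Optimum test interval: about {optimum} days")  # ~10 days for these inputs
```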

Table 2 lists a set of sample values, and Fig. 3 shows the solution for those values and illustrates the optimum test interval.

In Fig. 3, test costs are negative numbers. These costs decrease as the test interval lengthens.

Deferred production is also a negative cost because it represents revenue lost during the year that will not be captured until later in the reservoir's life. The annual deferred production cost increases (becomes more negative) as the time to detect the reduced production increases.

One can obtain the optimum test interval time from the point at which the sum of test cost and deferred production cost is the least negative.
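
Under the simple forms assumed in the sketch above, this least-negative point has a closed form; the derivation below is illustrative only and is not the article's published Equation 2 or 4.

```latex
% Total annual cost under the assumed forms (test cost C_t, event rate \lambda,
% flow rate Q_o, detectable fraction \%Q_o, oil price p, interval T in days):
C(T) = -\frac{365\,C_t}{T} - \frac{\lambda\,Q_o\,(\%Q_o)\,p\,T}{2}
% Setting dC/dT = 0 gives the optimum interval:
\frac{dC}{dT} = \frac{365\,C_t}{T^2} - \frac{\lambda\,Q_o\,(\%Q_o)\,p}{2} = 0
\quad\Longrightarrow\quad
T^{\ast} = \sqrt{\frac{2 \times 365\,C_t}{\lambda\,Q_o\,(\%Q_o)\,p}}
```

With the hypothetical inputs used above, T* = sqrt(2 × 365 × 500/(2 × 500 × 0.20 × 20)), or about 9.6 days, consistent with the scanned optimum.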

Besides this simple model, the authors also evaluated a net present value (NPV) based model using a 10-year life, 2%/year production decline, and 3%/year cost increase (Table 2).

These calculations also demonstrated an optimum interval at a minimum negative NPV. The NPV was negative because the investment yields no profit, only lower costs.
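
A hedged sketch of this NPV variant appears below, using the 10-year life, 2%/year decline, and 3%/year cost escalation from Table 2; the discount rate and all other inputs are assumptions.

```python
# Hedged sketch of the NPV-based variant: 10-year life, 2%/yr production decline,
# 3%/yr cost escalation (per Table 2); the 10% discount rate is an assumption.
def npv_of_testing(interval_days, qo, pct_qo, price, events_per_year,
                   cost_per_test, years=10, decline=0.02, cost_esc=0.03,
                   discount=0.10):
    npv = 0.0
    for year in range(years):
        q = qo * (1.0 - decline) ** year               # declining well rate
        ct = cost_per_test * (1.0 + cost_esc) ** year  # escalating test cost
        annual = (-(365.0 / interval_days) * ct        # testing cost
                  - events_per_year * q * pct_qo
                  * (interval_days / 2.0) * price)     # deferred production
        npv += annual / (1.0 + discount) ** (year + 1)
    return npv  # negative: testing yields no profit, only smaller costs

optimum = max(range(1, 61),
              key=lambda t: npv_of_testing(t, 500.0, 0.20, 20.0, 2.0, 500.0))
print(f"NPV-based optimum interval: about {optimum} days")
```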

Both models showed the same trend for the optimum test interval; therefore, this article will only further discuss the simple model.

Fig. 3 illustrates that for a set of well flow rates, well testing uncertainty (accuracy), and operational conditions, one can establish an optimum well test frequency that will minimize the total annual costs and thus maximize revenue. Table 2 summarizes the parameters and assumptions used to develop Fig. 3.

The well flow rate (Qo) and the well test system's accuracy and consistency in detecting changes in flow rate (%Qo) are the two main factors that affect the length of the optimum test interval. Data from the three Permian basin and four Gulf of Mexico wells illustrate this point (Table 3).

For the low-producing Permian basin wells (Table 3), the actual test interval is nearly the same as the optimum test interval. Also, because the well test accuracy at the 95% confidence level is high, one would not obtain significant cost savings from changing the test interval.

This is not the case for the higher-producing, gas-lifted Gulf of Mexico wells (Table 3). The model indicates that one can potentially save $25,900/well by operating with the same average 49% well testing accuracy but changing the test interval from 25 days to the optimum 6 days.

Also, by improving the well testing accuracy to a 5% average, one can increase the test interval to 16 days, for a potential $37,000/well savings.

References

  1. Neely, A.B., Bridges, G.L., and Ganus, P.G., "Improved Well Testing in Denver Unit," SPE Paper No. 9365, 1980.
  2. Marrelli, J.D., "Duri Area 10 Expansion-Optimal Matching of Separation and Metering Facilities for Performance, Cost, and Size," Energy Sources Technology Conference & Exhibition, New Orleans, 2000.
  3. Kouba, G.E., "A New Look at Measurement Uncertainty of Multiphase Flow Meters," Transactions of the ASME, Vol. 120, March 1998, pp. 56-60.
  4. Hakimian, N.H., Jumonville, J.M., and Scott, S.L., "Experimental Investigation of the Influence of Trace Amounts of Gas on Coriolis Liquid Metering," Energy Sources Technology Conference & Exhibition, Houston, 2001.
  5. Christianson, B.A., and Burger, E.L., "San Ardo Field Production Testing System Upgrade," SPE Paper No. 21533, 1991.
  6. Womack, J.T., "Uses of Short-Term Tests in Computer-Controlled Well Testing System," SPE Paper No. 4402, 1973.
  7. Means, S.R., and Mehdizadeh, P., "New Technology Improves Well Testing Units," OGJ, Oct. 30, 2000, p. 36.
  8. Shoham, O., and Mohan, R., "Integrated Compact Separation System for the Petroleum Industry of the New Millennium," Energy Sources Technology Conference & Exhibition, Houston, 2001.
  9. Shoham, O., and Kouba, G.E., "The State-of-the-Art of Gas-Liquid Cylindrical Cyclone Compact Separation Technology," JPT, July 1998, pp. 54-61.
  10. Mehdizadeh, P., "Multiphase Measuring Advances," OGJ, July 9, 2001, p. 45.
  11. Falcone, G., Hewitt, G.F., Alimonti, C., and Harrison, B., "Multiphase Flow Metering: Current Trends and Future Developments," SPE Paper No. 71474, 2001.
  12. Christianson, B.A., "More Oil from Better Information: New Technological Application for Testing Producing Wells," SPE Paper No. 37526, 1997.
  13. Muni, H.W., and Dunn, K., "Industrial Engineering Evaluation of Petroleum Production Operations," JPT, February 1966, pp. 177-180.

The authors

Parviz Mehdizadeh is a consultant with Production Technology, Phoenix. He previously worked for Conoco Inc. and Agar Corp. Mehdizadeh holds a BS and MS in physics and a PhD in chemical engineering and materials sciences from the University of Oklahoma. He is a member of SPE and has served on various API standardization committees.

Dennis Perry, before his retirement, worked for Conoco Inc. and was involved in production allocation, well test systems, and oil field SCADA systems. He also previously worked for Honeywell Aerospace and Supreme Electronics. Perry has a BS in electrical engineering from Louisiana Tech.