Method gives realistic analysis of leak-detection systems

March 21, 2005
The pipeline and leak-detection industries do not currently have a consistent, coherent, and complete means of assessing leak-detection system (LDS) performance. Probabilistic performance maps, however, may help overcome present LDS shortcomings.

Probabilistic performance maps can be developed in an automated fashion for a pipeline LDS by imposing a large range of artificial leaks on sets of recorded realtime data and then using the modified data to drive the system in an off-line mode.

The resulting maps provide a more realistic assessment of leak-detection performance than the more commonly utilized single-parameter LDS leak-sensitivity benchmarks because they inherently address the uncertainties associated with instrument errors and noise, as well as modeling or calculation errors. This is particularly true when the LDS is tuned to sensitivity levels that are on the same order as the uncertainties inherent in the leak-detection calculations.

The baseline offline analysis with no artificial leaks imposed is also used to assess false positive or false alarm performance for the tuned LDS. The resulting probabilistic performance maps for the tuned system, combined with the corresponding false-positive rate, provide a method for benchmarking the current system's performance, as well as indicate improvements that might be obtained by tuning or other modifications to the system.

They also provide a more realistic vehicle or basis for discussions regarding LDS performance between pipeline operating companies and both LDS vendors and regulatory organizations.

The problem

Common shortcomings of current leak-detection systems include:

  • LDS performance is often quoted in terms of leak sensitivity based on leak rate or percentage of throughput. This is an inadequate measure of performance because it neglects the element of time required to detect the leak. This is critical because cleanup costs and fines are most often based on the total volume of product spilled.
  • There is a strong tendency to use configuration parameters or LDS threshold inputs as proxies for the actual system performance, rather than going through the hard work of actually evaluating the performance.

Using configuration parameters as performance proxies is also simply incorrect, because a modern LDS often utilizes a significant number of tunable parameters, such as integrating and averaging processes, statistical tests, event detectors, persistence tests, cumulative leaked-volume tests, and many other inputs, and may also utilize flexible instrumentation selection. This tends to make the LDS a black box for which testing is the only means of assessing true performance.

  • Leak-detection systems are characterized not only by their ability to detect leaks, but also by their propensity to emit false alarms. In this context, a false alarm occurs when the system declares a leak that has not actually occurred. The benefits of setting LDS sensitivity thresholds to low values must therefore be balanced against the time and costs required to diagnose the resulting false positives.

The LDS process

Consider the simplest case of a pipeline leak-detection system (Fig. 1). Oil enters a pipeline segment on the left side and exits on the right. The flow of oil is measured via flowmeters at both entrance and exit of the segment.

Additional measurements, accessed via pressure, temperature, gravity, and other transducers, provide corrections for inventory changes. These variables may also be measured at the points of fluid entrance to—or egress from—the line, as shown, but may be supplemented by other measurements at locations internal to the segment.

Pipeline leak-detection systems can operate via various principles. API 1130 identifies several variants or approaches:1

  • Line balance. Systems that compute volume loss as the simple difference between system-input and system-discharge flowmeter measurements, with no compensation for pipeline inventory changes due to variation in pressure, temperature, or composition.
  • Volume balance. These leak-detection systems provide a limited correction for pipeline inventory changes by using pressure and temperature measurements. These systems typically do not include corrections for composition changes and often use representative bulk modulus and thermal-expansion values for the pressure and temperature inventory calculations.
  • Modified volume balance. These leak-detection systems account for inventory changes resulting from composition or batch changes, often by using pressure and temperature moduli that represent the various commodities in the line on a fractional basis.
  • Real time transient model. These systems provide inventory and other corrections by utilizing a sophisticated real time transient model (RTTM) of the pipeline, including not only all fluid dynamic characteristics such as flow, pressure, and temperature effects, but also physical plant modeling, which includes pipeline length, diameter, wall thickness, expansion, pipe roughness, pump, valve, and tank effects, as well as other equipment effects.

These systems can also provide corrections for inventory changes by including state-based modeling of pressure and temperature effects based on the species present in the pipeline.

  • Acoustic or negative pressure wave. This approach provides a leak-detection calculation based on the assumption that a new leak will produce a negative pressure wave that travels away from the leak site at the sonic speed of the pipeline fluid. These systems can theoretically locate the site of the leak by correlating the arrival of the pressure wave at two or more measurement sites, as shown in the sketch following this list.

  • Statistical systems. These systems rely on any of a variety of statistical approaches. Such systems can range from the fairly simple, where variation of single measured parameters such as flow or pressure outside of their normal ranges will be sufficient to produce an alarm, to the complex and sophisticated, where statistical leak parameters may consist of correlates of multiple measured variables.

Additional statistical processing may involve averaging of either input or processed data, or other techniques designed to increase sensitivity at the expense of detection time.
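
The localization arithmetic for negative-pressure-wave systems can be illustrated with a minimal Python sketch. It assumes a single leak between two pressure sensors bounding a segment and a known, uniform wave speed; the function name and example numbers are ours, not those of any particular product.

    # Minimal sketch: locating a leak from negative-pressure-wave
    # arrival times at two bounding sensors. Illustrative only.

    def locate_leak(segment_length_m, wave_speed_mps,
                    t_upstream_s, t_downstream_s):
        """Estimated leak distance (m) from the upstream sensor.

        For a leak at distance x that starts at unknown time t0:
            t_upstream   = t0 + x / c
            t_downstream = t0 + (L - x) / c
        Subtracting the two eliminates t0 and gives
            x = (L + c * (t_upstream - t_downstream)) / 2
        """
        dt = t_upstream_s - t_downstream_s
        return 0.5 * (segment_length_m + wave_speed_mps * dt)

    # 20-km segment, 1,000 m/s wave speed, wave seen 4 s earlier at
    # the upstream sensor: leak is ~8 km from the upstream end.
    print(locate_leak(20_000.0, 1_000.0, 10.0, 14.0))  # 8000.0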

The level of sophistication that will be used for a leak-detection system is a strong function of the data available to drive it. It makes little sense to employ an RTTM-based system in a pipeline where pressure, temperature, and composition effects are important, but where there are no corresponding measurements available to support the model-based inventory calculations, unless performance requirements are not particularly demanding.

It is also important to recognize that modern leak-detection systems, especially those at the high end, are unlikely to rely exclusively on any one of the techniques outlined above, but are more likely to employ two or more approaches to garner the advantages that can accrue when the strengths of the individual methodologies supplement the weaknesses inherent in individual approaches. Thus, many current systems employ both a realtime model in conjunction with a statistical post-processing of the raw, model-based output.

Regardless of the internal details of the approach or approaches taken, the leak-detection process will follow the basic steps shown in Fig. 2. Raw, measured data will be processed using one or more of the approaches described above.

The calculation process, which is likely to employ a model or other engine of greater or lesser sophistication, will output a leak-detection signal of some kind. The signal will then be subject to a signal evaluation process, which, if a large enough flow imbalance or other evidence of a leak is detected, alerts the pipeline operator via a leak alarm. A human-machine interface (HMI) of some sort will typically be provided to view alarms and data trends as well as other results, and make LDS configuration changes.

The inputs to the LDS process are realtime field measurements. All of the input measurements are subject to errors of various sorts. The inputs are generally processed in some way by the leak-detection engine, which is also likely to be subject to its own errors.

Typical leak-detection engine processing errors in model-based systems would consist of errors associated with failure to account for important physical effects, such as temperature or composition, or with poor or simplistic modeling of such effects. In statistical systems, processing errors might be associated with the use of poor estimators, such as linear leak-detection estimators where more sophisticated nonlinear estimators would partition the decision space better, or with overfitting of limited data to estimators with too many degrees of freedom.

The measurement and processing errors will combine to produce net errors or uncertainties in the LDS output signals that are used to indicate the pipeline system-leak status. As these errors are considered to be random, they have the potential to create uncertainty in the system performance.

If the LDS is tuned to work in a leak-sensitivity range that is well above the range of the uncertainty in the output leak-detection signal, it will generally work in a deterministic fashion, so that any leak alarm produced is likely to be significant. On the other hand, any LDS that operates at leak-detection sensitivities of the same order as the measurement errors will begin to exhibit errors in its output.

Such errors can be expressed in terms of false negatives (real leaks missed and never alarmed by the system) or false positives (false alarms, or LDS system alarms that do not correspond to any real pipeline leak). An LDS operating in this range can be described as operating probabilistically. Fig. 3 shows the effect of threshold size on leak-detection performance.

In an ideal world, of course, an LDS will always operate in a fully deterministic fashion with leak-detection thresholds of zero, and all leaks would be detected with no false alarms. In the real world, however, where leak signals have associated errors, the cost of increased sensitivity as leak-detection thresholds are reduced is the increased probability that the system will generate more false positives.
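
The threshold trade-off can be made concrete with a simple, admittedly idealized, calculation. If the noise on the leak signal is assumed to be Gaussian with a known standard deviation (a convenient assumption rather than a property of any real LDS), the per-sample false-alarm probability is just the upper tail of the normal distribution above the threshold:

    import math

    def false_alarm_probability(threshold, noise_sigma):
        """Per-sample probability that Gaussian noise alone exceeds
        the alarm threshold (upper-tail normal probability).
        Real LDS noise is rarely exactly Gaussian; illustrative only."""
        z = threshold / noise_sigma
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # Halving the threshold from 3 sigma to 1.5 sigma raises the
    # per-sample false-alarm probability from ~0.13% to ~6.7%.
    for k in (3.0, 1.5):
        print(k, false_alarm_probability(k, 1.0))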

Such alarms incur a cost by requiring that operators or support personnel spend time diagnosing the causes of the alarms (Fig. 3). Conversely, any attempt to reduce false alarms by increasing leak-detection thresholds will have an increased probability that a leak will not be detected by the system, or will be detected only after a significant time lag. Costs associated with an actual leak, of course, depend on expenditures required to clean up the leak as well as any regulatory and legal consequences of the spill.

Thus, configuring and tuning a pipeline LDS always involves a conflict between those who wish to maximize the chances of finding a leak and those who would minimize operational impact by reducing false positives.

Offline LDS testing

As previously noted, a modern LDS is usually complex enough that system inputs (expressed as leak-rate or spilled-volume thresholds, or as more complex secondary configuration parameters) are often insufficient to fully describe the system performance. Consequently, testing the LDS is usually the only suitable way to evaluate system performance.

The American Petroleum Institute recommends that a pipeline LDS be periodically tested to determine its performance.1 This can be done by performing an actual, unmetered discharge of oil to a tank, tanker truck, containment vessel, or pipeline system that is external to the pipeline system that is protected by the LDS.

This type of testing carries significant cost and risk. The costs are associated with both test preparation and actual test implementation. From a risk point of view, any time oil is being transferred to temporary storage vessels, such as tanks and tanker trucks, there is a chance for a spill. This type of testing also limits the number of tests that can practically be run.

A holistic view of the leak-detection system performance is difficult to obtain without extensive testing, particularly when the LDS performance map is in its probabilistic regime. In particular, a detailed determination of performance utilizing field tests would require the gathering of so much data that development of the system performance map by this mechanism is generally infeasible. (We do note, however, that use of physical discharge tests to confirm a performance map developed by the alternate mechanism described below is not only viable, but recommended.)

An alternate mechanism permitted by the standard is offline testing of the LDS software.1 This type of system evaluation has the advantage that it can be performed in an automated or semiautomated fashion using prerecorded data, with inputs modified as required to map the full system performance.

An additional advantage is that a full range of simulated leaks can be generated, permitting a very comprehensive set of performance maps to be built. This, as previously noted, is an option not practically available for physical discharge tests.

A further advantage is that the leak data can be read in and processed as rapidly as the hardware supporting the LDS permits, allowing large numbers of simulated leak cases to be evaluated, rather than being restricted to the very limited number of physical leaks that must be processed in realtime.

A potential disadvantage to this approach is that it is sensitive to the quality of the inputs used to drive the LDS. In general, all inputs used should reflect the behavior of data that would arrive at the LDS from the supporting real-world field measurements.

In addition, the data used to drive the leak-detection system must be capable of being modified so that it realistically expresses the behavior of the pipeline field devices in the presence of leaks of varying sizes. Too often, offline testing is done with "ideal" data (i.e., no errors) or with statistically generated errors that do not adequately reflect the behavior of the actual pipeline inputs.

Example performance map development

The LDS for a large-diameter pipeline was used for the example evaluation. The leak-detection system is an RTTM-based LDS employing a methodology called transient volume balance.

In this model-based approach, flow measurements at points of flow entrance to or egress from the pipeline system or segment are used to develop a flow balance on the entire pipeline. Pressure, temperature, composition, and other measurements are used to drive the RTTM, whose outputs are used to calculate a separate packing rate for the pipeline system.

In the absence of a leak, the flow balance and packing rate should exactly balance. Thus, the difference between these two quantities is a proxy for the leak size, should one be present. The general approach of this methodology, therefore, is to alarm based on perceived leak rates.
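
In outline, the transient-volume-balance signal reduces to the difference between the measured boundary flow imbalance and the model-computed packing rate. A minimal sketch, with variable names of our own choosing:

    def leak_rate_signal(flow_in, flow_out, packing_rate):
        """Transient volume balance residual (illustrative sketch).

        flow_in, flow_out -- measured boundary flows, e.g. bbl/hr
        packing_rate      -- RTTM-computed rate of change of line
                             inventory (line pack), same units

        With no leak, (flow_in - flow_out) equals the packing rate,
        so the residual is a proxy for the leak rate.
        """
        return (flow_in - flow_out) - packing_rate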

The system is supplemented with a layer of integrating and averaging calculators designed to find small leaks over longer periods of time, as well as additional processing to prevent alarming of leaks until leak persistence and cumulative volume thresholds have been exceeded.

Additional processing is included to allow the leak calculators to look for changes in the size of the leak signal, as opposed to alarming based on the absolute magnitude of the signal. This provides better performance by eliminating the effects of long-term drift from instrument measurements.
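
A hedged sketch of how the averaging, persistence, and cumulative-volume layers described above might fit together follows; the class structure, window sizes, and thresholds are hypothetical rather than drawn from the actual system.

    from collections import deque

    class LeakAlarmFilter:
        """Illustrative moving-average, persistence, and cumulative-
        volume gates applied to a raw leak-rate signal."""

        def __init__(self, window, rate_threshold,
                     persistence, volume_threshold):
            self.samples = deque(maxlen=window)       # averaging window
            self.rate_threshold = rate_threshold      # e.g. bbl/hr
            self.persistence = persistence            # samples required
            self.volume_threshold = volume_threshold  # e.g. bbl
            self.count = 0
            self.cum_volume = 0.0

        def update(self, leak_signal, dt_hr):
            self.samples.append(leak_signal)
            avg = sum(self.samples) / len(self.samples)
            if avg > self.rate_threshold:
                self.count += 1
                self.cum_volume += avg * dt_hr
            else:
                self.count = 0
                self.cum_volume = 0.0
            # Alarm only when the averaged signal has persisted long
            # enough AND the implied spilled volume is large enough.
            return (self.count >= self.persistence
                    and self.cum_volume >= self.volume_threshold)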

Note that other functions of the LDS include filtering and processing of input data to eliminate or minimize spikes and data that may have been declared bad by the supporting supervisory control and data acquisition (SCADA) system. Other components of the LDS perform long-term archiving of input field data so that it can be played back later to reanalyze and understand leak alarms created by the system.

This archiving is performed automatically on an ongoing basis and is an important aspect of the analysis described here. All data filtering and processing, as well as most leak-detection signal processors, are configurable by the LDS user. A partial list of configurable parameters appears in the accompanying box.

As can be seen, the complexity of the LDS configuration makes it unclear how the system will perform in practice with noisy, real-world data. To provide a test-based assessment of LDS performance, an off-line performance analyzer was developed to allow a realistic evaluation of the leak-detection performance of the system.

For any specified configuration state of the LDS, the performance analyzer application runs in two modes:

  • A basic data mode in which archived input data are run unaltered through the LDS to determine the false-positive rate for the current tuning.
  • A leak-injection mode in which the archived data are perturbed to reflect the presence of a leak of specified size.

Operation in the first mode is straightforward. Data sets are selected, and the number of alarms created over the simulation period is recorded for later analysis. In the second mode, the application selects a leak rate, runs through an archived data set up to the point where the leak is applied, and then perturbs the recorded field inputs to reflect the leak, running thereafter for a prespecified simulation period.

The user specifies the range of leak sizes, the time period to be analyzed, the time allowed to detect the leak, and the frequency of the leak-detection analysis (hours between simulated leaks). The software runs through all specified leak cases, recording the results in a database. Time to detect, detected size, and detected location are recorded for each simulated leak.
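
A skeleton of the leak-injection mode might look like the following. The archive object, its with_leak method, and run_lds_offline are hypothetical stand-ins for the proprietary data store and LDS engine.

    def build_performance_database(archive, leak_rates, start_times,
                                   detect_window_hr, run_lds_offline):
        """Run all simulated leak cases and collect raw results.

        leak_rates  -- leak sizes as fractions of design throughput
        start_times -- hours into the archive at which leaks begin
        Returns (leak_rate, start_time, detect_time_or_None) tuples.
        """
        results = []
        for q_leak in leak_rates:
            for t0 in start_times:
                # Perturb the recorded field inputs from t0 onward to
                # mimic a leak of size q_leak (hypothetical API).
                perturbed = archive.with_leak(rate=q_leak, start_hr=t0)
                t_detect = run_lds_offline(perturbed, t0,
                                           detect_window_hr)
                results.append((q_leak, t0, t_detect))
        return results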

Fig. 4 shows typical results for a specified configuration. This figure shows cumulative probability of detecting a leak as a function of time out to 24 hr for a range of leak rates.

The false-positive rate for this configuration was determined to be less than one alarm/month/pipeline segment. All leaks are expressed as a fraction of pipeline design throughput.

In all cases, cumulative probability increases with time. As the leak rates become smaller, the probability of detection at any given elapsed time shrinks. Because of the design of this particular LDS, existing leaks tend to become "embedded" in the internal data model of the pipeline over time, with the consequence that if a leak is not detected within 24 hr, it becomes increasingly unlikely thereafter that the leak will be detected.
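
Curves like those of Fig. 4 can be tabulated from the recorded results by simple counting. A minimal sketch, assuming the (leak_rate, start_time, detection_time) tuples produced by the hypothetical test loop above:

    def cumulative_detection_probability(results, q_leak, times_hr):
        """Empirical fraction of simulated leaks of size q_leak that
        were detected within each elapsed time in times_hr."""
        detect_times = [t for (q, _, t) in results if q == q_leak]
        n = len(detect_times)
        return [sum(1 for t in detect_times
                    if t is not None and t <= T) / n
                for T in times_hr]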

If the probability of detection at 24 hr is replotted, we get the simplified chart in Fig. 5. If a criterion of 95% probability of detection within 24 hr is used, the LDS behaves as though it has a detection threshold of roughly 0.65% of design throughput when tuned to yield less than one false alarm/month/pipeline segment.

If we subtract the probability of detection from unity, we obtain the probability that the LDS will not detect a leak, again as a function of leak size (Fig. 5).

Not surprisingly, Fig. 5 shows that the LDS ceases to be useful for detecting very small leaks. To address this, pipeline operators might elect, or be required, to provide supplemental methods for addressing spills resulting from very small leaks. Methods might include regularly scheduled drives down the pipeline corridor, flyover inspections, or leak-detection pigs. For such periodic methods, it would be useful to determine the maximum interval between checks.

Let us assume that the company is governed by an internally or externally imposed maximum spill size that can occur up to the time it is detected and that this maximum is a specified fraction of the design throughput.

A conservative approach uses the 0.65%-of-design-throughput figure obtained from the 95% probability of detection by the LDS within 24 hr. Dividing the spill criterion (expressed as a fraction of design throughput) by this rate yields the inspection-interval vs. spill-size curve based on the 95% LDS detection-probability criterion (Fig. 7).
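
As a worked illustration of the arithmetic (the spill criterion below is a made-up number, not a regulatory figure):

    def conservative_inspection_interval_hr(max_spill_frac_hr,
                                            undetected_frac=0.0065):
        """Maximum interval between periodic spill checks (hr).

        max_spill_frac_hr -- allowable spill volume, expressed as
                             hours of flow at design throughput
        undetected_frac   -- worst-case undetected leak rate as a
                             fraction of design throughput; 0.65%
                             is the 95%-in-24-hr figure derived above
        """
        return max_spill_frac_hr / undetected_frac

    # A (hypothetical) spill limit equal to 1 hr of design throughput
    # could go undetected for ~154 hr at a 0.65% leak rate.
    print(conservative_inspection_interval_hr(1.0))  # ~153.8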

It could be argued that this approach is overly conservative because it does not properly address the fact that the probability of detection by the LDS changes with leak size. The curve in Fig. 5 can be multiplied by the corresponding leak size to obtain a plot of the most likely expected or probability adjusted LDS undetected leak rate (Fig. 6).
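
In code, this probability-adjusted curve is simply the pointwise product of each leak size and its non-detection probability; a one-function sketch using the 24-hr detection probabilities computed earlier:

    def expected_undetected_rate(leak_fracs, p_detect_24hr):
        """Most-likely undetected leak rate at each leak size:
        (1 - P(detect within 24 hr)) * leak size, both expressed
        as fractions of design throughput."""
        return [(1.0 - p) * q
                for q, p in zip(leak_fracs, p_detect_24hr)]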

Using this approach, the highest value of the "most likely" undetected leak rate would be only 0.2% of design flow. Expressed as a period between spill checks, we would get the higher curve shown in Fig. 7.

This approach addresses the fact that, although an increase in leak rate corresponds to an increase in total spilled volume when periodic flyovers or similar approaches are used to detect spills, this is offset by the fact that the LDS is more likely to detect larger spills. It seems reasonable to argue that the best inspection period for the pipeline in question lies somewhere between these two curves.

It is possible to take this analysis further if the LDS tuning configuration is allowed to vary. Any modification in LDS tuning parameters designed to reduce effective LDS thresholds (as defined by the 95% or similar likelihood of detection by the LDS) will result in either lower requirements for periodic spill inspections, or lower potential spill cleanup costs, since even small leaks will be detected quickly.

Such reduction in threshold will, however, be accompanied by an increase in false alarms, as Fig. 3 previously implied. Effective diagnosis of such false alarms can be costly. Thus, effective implementation of an LDS for a pipeline system has a minimum or optimum cost that may strongly depend on finding the best balance between low LDS thresholds (or high sensitivities) and corresponding support costs.

Minimizing implementation costs for the LDS and associated systems can have significant potential financial benefits. A modern LDS for any pipeline system of significant size can carry a very significant initial investment cost (product licenses, initial configuration, instrumentation upgrades, etc.).

Annual support costs to support the leak-detection system (diagnosing false positives, product upgrades, modifications to address pipeline configuration changes, etc.) can easily run in the ballpark of 5-10% of the initial installation cost.

If support costs are combined with periodic inspection or spill cleanup costs, total annual leak-detection support costs can vary over a similarly wide range. For hurdle rates in the area of 10%, the net present value to be obtained by optimizing these total support costs also has the potential to be significant.

Reference

1."Computational Pipeline Monitoring," American Petroleum Institute 1130, second edition, November 2002.

The authors

Philip Carpenter ([email protected]) is a consultant in Leander, Tex., with 20 years of experience in pipeline hydraulics, mechanical design, leak-detection systems, integrity analysis, systems simulation and analysis, and project and engineering management. He holds an MS in engineering science and a BS in aerospace engineering from the State University of New York at Buffalo.

Ed Nicholas is an independent consultant in Austin who specializes in pipeline leak detection. He has worked on realtime pipeline applications for 25 years. Nicholas holds a BS in physics from Houston Baptist University and an MS in applied physics from the California Institute of Technology, and has completed 3 additional years of full-time graduate work in engineering at the University of Houston.

Morgan Henrie is an independent consultant in Anchorage who has been associated with leak-detection systems maintenance, design, project implementation, SCADA security, and state-of-the-industry analysis for 20 years. Henrie, a PhD candidate, holds several degrees, including an MS in project management.