U.S. produced water discharge regulations have tough limits

July 15, 1996
Gordon H. Otto
University of Houston
Houston

Kenneth E. Arnold
Paragon Engineering Services Inc.
Houston

Based on the most recent study of 292 platforms (of a total of 950) in the Gulf of Mexico, nearly 60% of the platforms may experience difficulty meeting the new lower oil and grease concentration standards in produced water discharged during the course of a year.

Nearly 20% of all platforms may experience chronic difficulty meeting these limits, which were imposed in 1994.

This article traces the history of Gulf of Mexico statistical studies of oil and grease removal capability for produced water treating systems, from the 1974 Brown & Root Inc. study through a 1993 survey of 292 platforms. The analysis of these data helps explain the Environmental Protection Agency's (EPA) shift from a policy of regulation based on process capability to one of selective analysis.

More than 3,000 observations from the 1993 study were used to analyze the ability of existing systems to meet the 42/29 mg/l. discharge limits promulgated in 1994. In addition, the data were reviewed to determine if conclusions could be reached on actual field performance of the full range of water treating equipment and system configurations, including skimmers, hydraulic flotation units, mechanical flotation units, hydrocyclones, and filters.

The Water Quality Improvement Act of 1970 (P.L. 91-224) directed the President to regulate the discharge of oil into the territorial waters of the U.S. for the protection of the public health and welfare. Rules were promulgated by the Secretary of the Interior prohibiting the discharge into the territorial waters of the U.S. any oily waste that would be sufficient to create a film, sheen, or discoloration of the water or the adjoining shorelines, cause a sludge or emulsion to be deposited beneath the surface of the water or upon adjoining shorelines, or violate appropriate water quality standards.

Much of the interpretation and enforcement of this law passed to the EPA when it was created in 1971. Since then, there has been an increasingly deep division between the EPA and the petroleum industry over how to define the capability of the oil and grease removal system, the identification of an "exemplary" set of platforms as the reference group to be used in setting the standard for all other platforms, the laboratory testing protocol to measure oil and grease concentrations, and even the philosophy of regulation of a "pollutant" that has no verified deleterious effect on marine life at the regulatory discharge levels that have been in place since the mid-1970s nor any consistent propensity to produce a sheen on the waters around the discharge point.

Produced water contains both dissolved and dispersed oil and grease pollutants, along with other dissolved organic compounds that are reported as oil and grease in the mandated EPA Method 413.1 testing protocol. This water is treated to reduce the concentration of the dispersed droplets of oil and grease to a level low enough to satisfy the regulatory requirements and is then discharged into the ocean waters.

This article discusses studies performed by industry and the EPA and highlights the results of a 1993 survey of the most recent 12-month history of the Discharge Monitoring Reports filed by operators on 292 platforms in the Gulf of Mexico.

Early process capability studies

The Offshore Operators Committee and the American Petroleum Institute have sought to influence regulatory agencies to use process capability as the basis for setting discharge standards. These terms, however, were never explicitly used, and the statistical practices of process control charting were never seriously explored. Each of the studies used different criteria.

1974 Brown & Root study

The initial study was made by Brown & Root Inc. under contract to the Offshore Operators Committee.1 This study identified gas flotation as the best practicable control technology (BPCT) and analyzed 38 platforms with such technology installed.

The statistical treatment of the data was primitive, and data screening was done without reference to either histograms or the underlying distribution of the observations. Specifically, the following rules were employed:

  • The highest 10% were deleted.

  • All platforms averaging more than 100 mg/l. were excluded.

  • Data associated with the highest and lowest 10% of the observations within each range of 0-20 mg/l., 20-40 mg/l., etc. were deleted.

No explanation was given in the report for these actions. As a consequence, the surviving data base contained 1,218 observations from 19 platforms. These data were plotted on a cumulative probability plot. The average concentration in the sample was 33.5 mg/l., the 98th percentile occurred at 100 mg/l., and the long-term sustainable platform average (deemed to be the 75th percentile value of a single outcome) was 44 mg/l.

This study proposed the use of process capability at the 98% level as a basis for regulation. Clearly, the statistical reasoning was flawed, and the decades of quality control literature on process capability analysis were not consulted.

For example, the use of the 98th percentile value would put a randomly selected platform at a 21.5% risk of being declared out of compliance at least once during the course of a year (assuming one sample per month), an unacceptably high value. The use of a 99th percentile limit yields a risk of 11.4%, which is still high. By contrast, the convention in statistical process control is to use three-standard-deviation limits (99.865% limits), which would produce a risk of only 1.6% in this sampling regimen.
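
These risk figures follow from the binomial reasoning 1 - (1 - alpha)**12 for 12 independent monthly samples. A minimal sketch of the arithmetic:

```python
# Risk that a compliant platform is flagged at least once in a year of
# 12 monthly samples, when the limit is set at a given percentile of the
# discharge distribution. A sketch of the reasoning in the text.

def annual_violation_risk(percentile, samples_per_year=12):
    """P(at least one sample in the year exceeds a limit set at this percentile)."""
    p_single = 1.0 - percentile          # chance a single sample exceeds the limit
    return 1.0 - (1.0 - p_single) ** samples_per_year

print(f"98th percentile limit:   {annual_violation_risk(0.98):.1%}")    # 21.5%
print(f"99th percentile limit:   {annual_violation_risk(0.99):.1%}")    # 11.4%
print(f"3-sigma (99.865%) limit: {annual_violation_risk(0.99865):.1%}") # 1.6%
```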

If one assumes that the distribution of outcomes is lognormal (a property identified in subsequent studies), a mean concentration of 33.9 mg/l., a 75th percentile value of 42.5 mg/l. (not 44 mg/l.), a 98th percentile value of 100 mg/l., and a 99th percentile value of 119 mg/l. are derived from this data set.

This lognormal distribution is shown as the 1974 curve in Fig. 2.
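
The imputed percentiles can be reproduced by solving for the lognormal distribution whose mean is 33.9 mg/l. and whose 98th percentile is 100 mg/l. This is an illustrative reconstruction of the arithmetic, not the original computation:

```python
# Impute the lognormal behind the 1974 data from the reported mean
# (33.9 mg/l) and 98th percentile (100 mg/l). Illustrative reconstruction.
import math
from statistics import NormalDist

nd = NormalDist()
mean_conc, p98 = 33.9, 100.0
z98 = nd.inv_cdf(0.98)

# Lognormal identities: ln(mean) = mu + sigma**2/2 and ln(p98) = mu + z98*sigma.
# Subtracting gives a quadratic in sigma; the smaller root is the sensible one.
delta = math.log(p98) - math.log(mean_conc)
sigma = z98 - math.sqrt(z98 ** 2 - 2.0 * delta)
mu = math.log(mean_conc) - sigma ** 2 / 2.0

p75 = math.exp(mu + nd.inv_cdf(0.75) * sigma)
p99 = math.exp(mu + nd.inv_cdf(0.99) * sigma)
print(f"75th percentile: {p75:.1f} mg/l")  # 42.5, matching the text
print(f"99th percentile: {p99:.0f} mg/l")  # ~118, close to the 119 in the text
```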

1976 EPA study

The 1976 EPA study was a landmark study for numerous reasons.

  • It established the EPA's definition of process capability as the 99th percentile point of the BPCT data.

  • It demonstrated that the data outcomes from this process were lognormally distributed.

  • It used the original Brown & Root data plus a number of additional platforms, subjected the data to reasonable outliers tests, and created a data base of 2,262 observations from 28 of a candidate set of 41 platforms. All of these were gas flotation units with chemical programs in place.

  • The results of the study were used to set the oil and grease discharge limits at 72 mg/l. for a daily maximum value and 48 mg/l. for the monthly maximum of four daily samples.5

The lognormal analysis was done by plotting the data on lognormal paper and "eye-balling" a straight line through the data. The median value of the distribution (erroneously called the long-term mean) was 25 mg/l. While no variance was estimated for the logarithms of the observations, the graphical solution enables the standard deviation to be estimated as 0.4548. The plot of this distribution is shown as the 1976 curve in Fig. 2.

The report also established the limit of 48 mg/l. for the average of four independent samples. Using the parameters estimated for the distribution, the 99th percentile for the geometric mean of four observations should be 43 mg/l.

The 48 mg/l. result was obtained by randomly selecting sets of two, three, and four observations that were taken no more than 30 days apart from the same platform and computing the arithmetic average. The overall average for each sample size was plotted and a "French curve" was used to fit a line through the points.

The value of 48 mg/l. was read from the curve at a sample size of four. This result seems reasonable because the geometric mean is always less than the arithmetic mean (unless all four values are identical).

The only noteworthy statistical lapse in this study is the failure to recognize that when multiple criteria are used simultaneously, the alpha level of each test must be reduced to keep the overall risk constant at 1%. A Bonferroni-type analysis would have set each limit at the 99.5 percentile level. This would have produced limits of 81 mg/l. for the daily maximum and 45 mg/l. for the geometric mean of four.
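
The 72, 43, 81, and 45 mg/l. figures above all follow from the same lognormal parameters (median 25 mg/l., log-scale standard deviation 0.4548), using the fact that the geometric mean of n independent lognormal observations is itself lognormal with log-scale standard deviation sigma/sqrt(n). A sketch of the arithmetic:

```python
# Limits implied by the 1976 EPA lognormal fit (median 25 mg/l, log-scale
# standard deviation 0.4548). The geometric mean of n independent lognormal
# observations is lognormal with log-scale standard deviation sigma/sqrt(n).
import math
from statistics import NormalDist

nd = NormalDist()
median, sigma = 25.0, 0.4548

def limit(percentile, n=1):
    """Percentile of one observation (n=1) or of the geometric mean of n."""
    return median * math.exp(nd.inv_cdf(percentile) * sigma / math.sqrt(n))

print(f"99th pct, single sample:         {limit(0.99):.0f} mg/l")     # 72
print(f"99th pct, geometric mean of 4:   {limit(0.99, 4):.0f} mg/l")  # ~42-43
print(f"99.5th pct (Bonferroni), single: {limit(0.995):.0f} mg/l")    # 81
print(f"99.5th pct, geometric mean of 4: {limit(0.995, 4):.0f} mg/l") # 45
```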

1981 30-platform study

The 1981 EPA study was disastrous in several ways:

  • The sampling plan was flawed.

  • The statistical analysis of the data was biased towards the performance of one particular system.

  • The EPA effectively shifted its focus from process capability to best available technology and continuous improvements.

  • Consequently, the cooperative spirit between industry and the EPA was replaced with distrust and a more adversarial relationship.

The data were collected from the platforms and chemically analyzed for a wide range of organic and inorganic compounds. The results were tabulated and analyzed by Burns & Roe under contract to the EPA.2 The EPA, in turn, selected seven of the platforms that had skimmers to pretreat the influent stream and were adding chemicals; the EPA declared them to be exemplary.

One platform with the highest effluent average was dropped from the exemplary set with no reason given. The exemplary set was reduced to only 6, with a combined sample size of 12, of which 6 were taken at one platform alone. These were used as the calibration set for the entire Gulf of Mexico.

From this small, select set, the EPA recommended promulgating a new daily maximum discharge limit of 59 mg/l.6 A second study was commissioned to Burns & Roe by the EPA to analyze the Discharge Monitoring Reports (DMR) submitted monthly to the EPA to monitor compliance with the 72 mg/l. regulation.3 Reports from 1,396 platforms in the Gulf of Mexico during selected months in 1981-1983 were analyzed. The report states that less than 60% of the observations could meet the 59 mg/l. requirement.

The petroleum industry responded by preparing a critical review of the EPA approach and initiated its own analysis of the 30-platform study. This review included, for the first time, analysis of variance procedures to estimate variance components due to differences between platforms, variation within platforms over time, and sampling error.6, 8 Review of the DMR data for the exemplary 6 platforms over a 2-3 year period revealed that, for 4 of the 6 platforms, the calibration data used in the 30-platform study were significantly lower than the long-term averages for those platforms (3 were at about 50% of the average level).

When the DMR data were added to the calibration set for the six exemplary platforms and obvious outliers deleted, the 99th percentile limit was found to be 117 mg/l., a result almost identical to the 119 mg/l. imputed at the 99th percentile of the original 1974 Brown & Root study.

Fig. 1 shows the histogram of the data from the exemplary six along with the subset of data used by the EPA to make recommendations for reduced discharge limits. Fig. 1 illustrates two points:

  • The curve was fitted through data that seem to consist of two subpopulations on the same platforms, those that can meet the 72 mg/l. daily maximum easily and those that are struggling.

  • The calibration set of data used by the EPA is not a typical set of values for these six platforms.

1991 platform study

In 1989, industry made an independent study of the sources of variation in oil and grease discharge concentrations and the effects of dissolved oil on the measurement process. A series of statistically designed experiments were conducted in which three replicate samples were collected on each of three trips taken at least 1 week apart on a set of 42 platforms with gas flotation equipment.

In 1991, a composite study was conducted in which the data from several statistically designed studies were combined. Even the "exemplary seven" platforms from the 1981 study were included. The base for analysis consisted of 83 platforms. Several studies were done, each with a more restrictive filtering of data. The last study eliminated several platforms that exhibited excessive "within-platform" variability and every sample that exceeded the regulatory limit of 72 mg/l. (there was evidence of some bunching at that value).8 This left 75 platforms with a total of 471 observations, most with replicate samples.

The lognormal distribution derived from this study produced a median concentration of 21.6 mg/l. and a 99th percentile value of 93 mg/l., a significantly lower figure than either the 1974 or 1976 broad-based studies yielded. The probability that a single observation exceeds 72 mg/l. in this distribution is 2.7%, and the probability is 5.4% that it exceeds 59 mg/l. (the new standard being proposed at that time).

The 1991 data are undoubtedly the most reliable and most representative of well-regulated gas flotation units (Fig. 2).
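
The exceedance probabilities quoted for the 1991 fit can be checked from the two reported quantiles. A minimal sketch, with sigma backed out of the median and 99th percentile:

```python
# Exceedance probabilities implied by the 1991 fit (median 21.6 mg/l,
# 99th percentile 93 mg/l); sigma is backed out of the two quantiles.
import math
from statistics import NormalDist

nd = NormalDist()
median, p99 = 21.6, 93.0
sigma = math.log(p99 / median) / nd.inv_cdf(0.99)  # log-scale std deviation

def prob_exceed(limit):
    """P(a single observation exceeds the limit) under the fitted lognormal."""
    return 1.0 - nd.cdf(math.log(limit / median) / sigma)

print(f"P(> 72 mg/l) = {prob_exceed(72):.2%}")  # close to the 2.7% in the text
print(f"P(> 59 mg/l) = {prob_exceed(59):.2%}")  # close to the 5.4% in the text
```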

Dissolved organic compounds

Dissolved organic compounds cause a problem in oil and grease testing. The EPA sampling and testing protocol requires samples to be acidified to a pH of 2, refrigerated, and tested using the simplified gravimetric EPA Method 413.1 (total recoverable oil). The oil and grease removal equipment, however, can only remove dispersed oil.

To make matters worse, other nonoil compounds give a false positive reading in this test. Hence, the regulations are based on measurements that are not controllable by the operators if dissolved oil and other organic compounds are present.

Caudle, Stephenson, and Hall made an in-depth study of 10 platforms that had such dissolved organic problems in 1988.4 The study concluded that the dispersed (treatable) oil could be properly measured either by using unacidified samples or by following EPA Method 413.1 with a silica gel wash. This protocol is referred to as Method 503E. It was the preferred method because it exhibited a lower variance. This is the method used by the European Environmental Protection Conventions.

In addition, the study reported that a significant fraction of the dissolved organic compounds was carboxylic acids (not dissolved oil). Statistical analyses concluded that the ratio of dispersed oil to total "oil" differs from platform to platform, and hence no correction factor can be universally applied to convert from one oil and grease measure to the other.

In 1990, an expanded study was commissioned by the Offshore Operators Committee to visit an additional 42 platforms. Each platform was visited on three occasions, and six 1-l. bottles were collected contiguously during each trip. Three bottles were randomly selected to be tested according to Method 413.1 and the other three by Method 503E.

Statistical outliers were removed, and both distributions were found to be lognormally distributed. The total oil had a median concentration of 22 mg/l. while the dispersed oil had a median concentration of only 8 mg/l. The magnitude of the dissolved organic compounds problem can be seen in Fig. 3 [21587 bytes].

292 platform survey

In 1993, the EPA made known its intention to reduce the discharge limits from the current 72/48 mg/l. set to a drastically reduced level of 42/29 mg/l. based on its own studies and decided not to "grandfather" existing platforms.

The Offshore Operators Committee commissioned a study to survey a significant percentage of the water treating systems currently operating in the Gulf of Mexico to determine the type of equipment in use, its recent discharge history, and information about the presence of soluble organic compounds and the addition of chemicals to increase the efficiency of the equipment in use.10

The initial survey produced reports on 354 platforms, of which 292 had complete responses including the most recent 12 sequential months of effluent tests (the monthly average and monthly maximum value), the type of equipment in use, type of production, chemical program, etc.

In some cases, a full 12 sequential months of data were not available because of hurricanes, shutdowns, sale of property, and miscellaneous other reasons. Consequently, the final set of observations contained 3,218 data points, an average of 11 months per platform.

Operators were asked to code the platform based upon the type of equipment used at each stage of the treatment process. The following was the classification code from simplest to most efficient:

  • Skimmer vessel or tank

  • Plate coalescer

  • Gas flotation (the EPA best practicable control technology equipment), both hydraulic and mechanical

  • Hydrocyclone

  • Filtration.

Table 1 shows the types of equipment used for oil and gas production. The equipment is classified according to the most efficient item in the treatment process. The table shows that half of the platforms surveyed were oil and half gas, and that the equipment configuration is significantly different for each type of production.

Gas platforms tend to use skimmers and plate coalescers (and gas flotation to a lesser degree), whereas the gas flotation units are the predominant equipment on oil platforms. There were only eight hydrocyclones and one filtration unit in the survey. Gas flotation accounts for 63% of the units in the survey.

Chemical addition was made part of the EPA definition of an "exemplary" platform in its 1981 study. Table 2 shows that 87% of the oil platforms were using some type of chemical, but only 36% of the gas platforms were using chemicals. This illustrates the difficulty experienced by the oil platforms in meeting the monthly maximum regulatory limit of 72 mg/l.

In this survey, fewer than half of the platforms (109) reported dissolved organics in the discharge. Chemicals were used more often when organic compounds were present than when they were not; indeed, 71.43% of the reported chemical usage was on platforms with organics. Additionally, 70.59% of the organics reported were on oil platforms.

Treating system characteristics

Operators responding to the survey were asked to identify the first three components in the water treatment stream. Table 3 and Table 4 show the configurations reported and the median, mean, and percentage of observations greater than 42 mg/l.

An attempt was made to see if significant differences in performance could be determined for the different types of system configurations. Table 5 indicates that roughly 35% of the systems would fail to meet the 42 mg/l. monthly maximum at least once during the year unless there were changes in equipment, chemical usage, or operating procedures. Table 6 shows the median, mean, and percentages of observations greater than 42 mg/l. for systems with different numbers of units with no pretreatment and with skimmers and coalescers for pretreatment.

Because the EPA has defined an exemplary treating system to include a skimmer followed by a flotation unit, the set of platforms in the survey with this configuration was selected for more detailed study. A single skimmer was selected as a comparison set.

The data were trimmed by removing all daily maximums above 200 mg/l. and all below 1 mg/l. The logarithms of the remaining observations across all platforms in the set failed to be lognormally distributed because the lower tail was too extended, and there tended to be spikes at 1-2 mg/l. These values are abnormally low relative to the expectations of a lognormal distribution but were nevertheless not removed.

The lognormal distribution that best fit the upper tail of the distribution was computed using the median value of the data to estimate the mean of the lognormal distribution. The distance between the median and the 90th percentile of the data was used to estimate the standard deviation of the distribution (Table 8 and Table 9). The distributions implied by these estimates are shown in Fig. 4 and Fig. 5.
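
The fitting procedure just described can be sketched as follows; the median and 90th-percentile inputs below are hypothetical values for illustration, not figures from the survey tables:

```python
# The fitting procedure described above: take the sample median for the
# lognormal's mu, and use the median-to-90th-percentile distance for sigma.
# The input quantiles below are hypothetical, not values from the survey.
import math
from statistics import NormalDist

nd = NormalDist()

def fit_lognormal(median, p90):
    """Return (mu, sigma) of the lognormal matching these two quantiles."""
    mu = math.log(median)                            # median = exp(mu)
    sigma = (math.log(p90) - mu) / nd.inv_cdf(0.90)  # z(0.90) ~ 1.2816
    return mu, sigma

mu, sigma = fit_lognormal(median=15.0, p90=35.0)     # hypothetical inputs
p_over_42 = 1.0 - nd.cdf((math.log(42.0) - mu) / sigma)
print(f"sigma = {sigma:.3f}, P(> 42 mg/l) = {p_over_42:.1%}")
```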

Because all the systems were designed and operated to meet the same 48 mg/l. monthly average and 72 mg/l. monthly maximum, it is difficult to draw conclusions about the ability of the different type systems to treat produced water. Clearly, operators are employing higher levels of equipment types and more chemicals to treat more difficult streams. What is more surprising is that there is not a greater variation in median, mean, or percentage of observations (or systems) which exceed 42 mg/l.

The following qualitative observations may be drawn from the data:

  • There appear to be few oil platforms where only one treatment unit is being used, but a third of the gas platforms have only one unit. For 11 of the 15 oil platforms, the single unit is a mechanical flotation device. For 47 of the 52 gas platforms, the single unit is a skimmer (27) or a coalescer (20).

  • Table 7 indicates that flotation units tend to work better when an upstream influent treater is used. There appears to be no difference whether the pretreater is a skimmer or a coalescer.

  • Table 8, Table 9, Fig. 4, and Fig. 5 seem to indicate that mechanical flotation units perform better than hydraulic units. While hydraulic units appear to outperform mechanical units in Table 8, they appear to be operating at a much lower percent of capacity, and the shapes of the curves in the 29 mg/l. range in Fig. 4 are very similar. This can be contrasted with the data in Table 9 and Fig. 5, where the hydraulic units are operating more closely to the percent of capacity at which the mechanical units are operating.

Because the hydraulic units represent a wider range of designs and manufacturers, it is possible that some poorer designs are influencing the data. Thus, no conclusions can be reached between manufacturers' designs.

  • A skimmer/coalescer system would probably not perform as well on an oil platform as a skimmer/flotation system, but it may perform as well on a gas platform and use fewer chemicals.

  • Unfortunately, there are not sufficient systems using hydrocyclones (one gas platform and five oil platforms) to support firm conclusions. Table 3 and Table 4 may indicate better performance with pretreatment, and Table 5 may indicate that a hydrocyclone may not be demonstrably better than mechanical flotation units.

Meeting the new limits

The 1993 survey was conducted to reach conclusions about the potential effect of the (then) proposed regulatory changes to 42 mg/l. as the monthly maximum and 29 mg/l. as the monthly average, absent any changes in equipment, chemical programs, or operating procedures.

Monthly maximum

Tabulation of the monthly maximum values in the full data base revealed that 162 of the 3,218 observations (5.03%) exceeded 42 mg/l. These overages occurred on 101 different platforms, indicating that 34.6% of all platforms in the survey would have had at least one discharge limit violation in the year surveyed.

Another way of assessing the effects of the new standard is to compute the probability of a value being over 42 mg/l. from the probability distribution that describes platform behavior. The last two rows in Table 8 and Table 9 provide these assessments for the "exemplary" equipment configurations of a skimmer followed by gas flotation. Although the probabilities seem small, the values are far too large for comfortable operation. The probability of at least one violation of the 42 mg/l. maximum is computed to be 31.5% for gas flotation (hydraulic) on gas platforms and 40.2% for gas flotation (mechanical) on oil platforms.
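
These annual risks follow from a per-month exceedance probability p via 1 - (1 - p)**12; the monthly probabilities below are back-solved from the quoted annual figures, for illustration only:

```python
# The 31.5% and 40.2% annual risks follow from a per-month exceedance
# probability p via 1 - (1 - p)**12. The monthly probabilities here are
# back-solved from the quoted annual figures, for illustration only.
def annual_risk(p_monthly, months=12):
    """P(at least one monthly sample exceeds the limit during the year)."""
    return 1.0 - (1.0 - p_monthly) ** months

cases = [("gas flotation (hydraulic), gas platforms", 0.0310),
         ("gas flotation (mechanical), oil platforms", 0.0419)]
for label, p in cases:
    print(f"{label}: monthly {p:.1%} -> annual {annual_risk(p):.1%}")
```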

Monthly average

The new standard also promulgated a maximum monthly average of 29 mg/l. Tabulation of the monthly average observations revealed that 11.56% exceeded 29 mg/l. These were distributed over 168 of the 292 platforms, or 57.5% of the platforms. This signals a serious problem for the industry.

Further investigation of the frequency of occurrence per platform shows that 42.3% had only one such occurrence in the monitoring period and 28% (47 platforms) had three or more occurrences. These platforms are clearly "troubled" and constitute 16.1% of the sample of 292 in the survey.

Investigation of the platforms with at least one monthly value greater than 29 mg/l. shows that they were equally divided between oil and gas platforms. Of these, 59% of the gas platforms and 77.5% of the oil platforms reported organics. Chemicals were being added to 64% of the gas platforms and 97.7% of the oil platforms.

The oil platforms which had monthly averages exceeding 29 mg/l. were predominantly using gas flotation and were adding chemicals. This gives a good description of a troubled platform. Ironically, it also meets the 1981 EPA definition of an exemplary platform.

The last issue considered was the long-term platform average; 2.5% of the platforms had 12-month averages above 29 mg/l. Platforms averaging just below 29 mg/l. could be expected to fail to meet the standard nearly half of the time.

Indeed, platforms averaging above 20 mg/l. on a long-term basis may have problems. Analysis of platform long-term averages revealed that 19.4% were over 20 mg/l. Platforms over 20 mg/l. will be at risk of an excessive violation frequency without a change in equipment, chemical programs, or operating procedures.

Results

This study records the distributions of oil and grease effluent concentrations that have been estimated from relatively large samples on three occasions from 1974 to 1991. The most optimistic estimate was the 1976 EPA study that was used to establish the 72/48 mg/l. regulation. Every other study conducted by the petroleum industry has confirmed that these values (or higher ones) are appropriate descriptions of the treatment process capability.

In the 1993 survey, 1.34% of the monthly maximum values exceeded 72 mg/l. (this represents 12.7% of the platforms), and 2.5% of the platforms exceeded 48 mg/l. at one time or another during the 12-month survey period. These results confirm the appropriateness of the 72/48 mg/l. regulatory limits with respect to present equipment configurations and practices.

Gas platforms tended to have lower technology treating systems (skimmers and plate coalescers) and oil platforms used gas flotation units to a large extent. A higher percent of oil platforms were adding chemicals. Oil platforms also reported a higher incidence of dissolved organic compounds.

Under the assumption that this sample of 292 platforms is representative of the population in the Gulf of Mexico (about 950 platforms), then nearly 60% of the platforms (570) will experience difficulty meeting the new 42/29 mg/l. standards during the course of a year, and nearly 20% of all platforms (190) are expected to experience chronic difficulty.

Conformance to the standard will come from one or more of the following actions:

  • Add chemicals to treaters where none are now being used.

  • Change the chemical programs currently in place to be more effective.

  • Increase the frequency of maintenance of the equipment to keep it closer to its maximum performance level.

  • Add tertiary equipment where feasible.

  • Replace lower-rated equipment (tanks and coalescers) with more efficient equipment.

  • Discontinue discharge into the ocean waters.

  • Discontinue operation.

The years ahead will reveal the effects of this change enacted in 1994. In the meantime, the EPA is planning to change the testing protocol to use normal hexane rather than the Freon-based 413.1 protocol now in use. It is unclear what additional uncertainties this will create and how quickly the new test will be calibrated to field conditions.

References

1. Brown & Root Inc., "Determination of Best Practicable Control Technology Currently Available to Remove Oil from Water Produced with Oil and Gas," 1974.

2. Burns & Roe Industrial Services Corp., "Oil and Gas Extraction Industry Evaluation of Analytical Data Obtained from the Gulf of Mexico Sampling Program," Vols. 1 and 2, submitted to the EPA, 1983.

3. Burns & Roe Industrial Services Corp., "Review of U.S. Region VI Discharge Monitoring Reports Offshore Oil and Gas Industry," submitted to the EPA, 1985.

4. Caudle, D., Stephenson, M., and Hall, "The Determination of Water Soluble Organic Compounds in Produced Water," final report, Offshore Operators Committee, 1988.

5. EPA, "Development Document for Interim Final Effluent Limitations and Guidelines and Proposed New Source Performance Standards for the Oil and Gas Extraction Point Source Category," 1976.

6. EPA, "Development Document for Effluent Limitations and Guidelines and Standards for the Oil and Gas Extraction Point Source Category," 1985.

7. Otto, G.H., "An Algorithm for ANOVA-Like Tests of Hypotheses in Unbalanced Designs," presented at the Joint National ORSA/TIMS meeting, Atlanta, November 1985.

8. Otto, G.H., "Oil and Grease Discharge Characteristics of Gas Flotation Water Treaters," proceedings of the Business Statistics Section of the American Statistical Association Joint Meetings, New Orleans, August 1988.

9. Otto, G.H., "Oil and Grease Discharge Characteristics, 83-Platform Study, Method 413.1 with Refrigerated Samples," submitted to the American Petroleum Institute, July 1991.

10. Otto and Associates, "Offshore Operators Committee 1993 Survey: Oil and Grease Discharge Data Analysis of 292 Platforms," submitted to the Offshore Operators Committee, August 1993.

Based on a paper presented at the Society of Petroleum Engineers Annual Technical Conference and Exhibition, Dallas, Oct. 22-25, 1995.

The Authors

Gordon H. Otto is an associate professor in decision and information sciences at the University of Houston. He has worked for the Atlantic Refining Co., the Research Triangle Institute, and the Mitre Corp. Otto is an active member in the quality movement, teaching statistical quality control and total quality management at UH. He serves as an examiner in both the Houston Awards for Quality and the Texas Quality Awards.

Otto is certified as a quality engineer by the American Society for Quality Control. He has consulted widely with the API and the Offshore Operators Committee on environmental issues involving produced water discharge, sheen testing, drilling mud toxicity, and NORM. Otto has a BS in mechanical engineering from the University of Texas and advanced degrees in industrial engineering and experimental statistics. His doctorate is a joint degree in economics and statistics from North Carolina State University.

Kenneth E. Arnold is the founder and president of Paragon Engineering Services Inc., a Houston-based company that provides engineering, design, project management, support services, and training to the petroleum industry. He has 30 years of industry experience, including 16 years at Shell Oil Co.

Arnold was named distinguished lecturer for 1994-95 on process safety management by the Society of Petroleum Engineers. He has served as a consultant to the American Petroleum Institute and the Offshore Operators Committee. He chaired the OOC subcommittee that provided guidance for the API 14 series of recommended practices, including 14C (safety systems), 14E (piping), 14F (electrical systems), 14G (fire prevention), and 14J (hazards analysis). Arnold is a registered professional engineer in Louisiana, New Mexico, New York, Texas, and Virginia.

Copyright 1996 Oil & Gas Journal. All Rights Reserved.