Offshore platform operations benefit from shared data access

May 10, 1999
Simon Dyson
MDC Technology Ltd.
Riverside Park, Middlesbrough, U.K.
The ETAP offshore facilities include a process and drilling platform (left) and a quarters and utilities platform (right). (Photo courtesy of BP Amoco plc)
Access to valid shared data is vital for the management of an offshore production platform, where the management team is split between those offshore and onshore.

Timely and accurate recording of process data maximizes performance by allowing informed decisions to be made based on current and historic data.

One such system was employed on the Eastern Trough Area Project (ETAP) in the North Sea. The ETAP management system (EMS) provides a comprehensive set of management tools from day one, enabling accelerated start-up and effective operation at reduced manning levels.

A data historian is the core of the system because it provides essential process data to all other systems.

ETAP

ETAP is an integrated development of seven different fields. BP Exploration Operating Co. (now BP Amoco plc) operates four fields (the M fields): Marnock, Mungo, Monan, and Machar. Shell U.K. Exploration & Production is the operator of the other three (the Heron cluster): Heron, Egret, and Skua fields.

Other partners in the development are Esso Exploration & Production U.K. Ltd., Agip (U.K.) Ltd, Total Oil Marine plc, Murphy Petroleum Ltd., and MOC Exploration (U.K.) Ltd.

The ETAP development is 240 km east of Aberdeen in the Central North Sea and spreads over an area about 35 km across. Production commenced in the second half of 1998.

Each field would not have been commercially viable as a stand-alone development because of its small size; however, all fields combined have recoverable hydrocarbons of 400 million bbl of oil, 35 million bbl of natural gas liquids, and 1.1 tcf of gas.

EMS objectives

The EMS objectives were to:
  • Enable safe and effective operation of the asset
  • Reduce manning levels by providing access to information and analysis tools, for example, the detection of equipment degradation, equipment performance data, and equipment running hours
  • Provide data in a timely manner and to a consistently high level of completeness and accuracy for use by the onshore and offshore staffs
  • Secure and make accessible valuable commissioning data
  • Provide a single source of data for downstream applications to ensure a consistent reference data set, rather than incomplete and inconsistent data from several routes
  • Concentrate data from multiple sources, principally the Baileys Infi90 control system, the fiscal and allocation metering system (Daniels), and some manual input
  • Validate critical data used for statutory reporting prior to handoff to downstream applications
  • Support query and analysis functions to aid in remote troubleshooting.

Features

A number of application modules were supplied to support these goals. These included:
  • Chemical-injection package that calculates dosage rates, etc., based on motor speeds and equipment status and monitors usage and stocks of the various chemicals
  • Well-test application to enable users to graphically select a time range for a well under test and to summarize the well attributes for generating well test curves
  • Maintenance applications for measuring valve performance and for monitoring inhibition and equipment condition
  • Automatic validation of process data.
Some data used in hydrocarbon accounting had to be validated hourly. Ideally, an operator would manually inspect the results and correct any anomalies before the data were sent ashore, but to minimize the operator workload as far as possible, the validation process was automated to screen the data at the source.
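
As an illustration only (the article does not give the actual validation rules), source-side screening of this kind typically applies range and rate-of-change checks before data are sent ashore. In the minimal Python sketch below, the tag names and limits are assumptions, not values from the ETAP system.

```python
from dataclasses import dataclass

@dataclass
class Limits:
    low: float        # lowest plausible value for the tag
    high: float       # highest plausible value
    max_step: float   # largest believable change between consecutive samples

# Hypothetical tags and limits; real settings would come from the asset's tag database.
LIMITS = {
    "SEP1_PRESSURE_BARG": Limits(low=0.0, high=120.0, max_step=10.0),
    "EXPORT_FLOW_M3H":    Limits(low=0.0, high=2500.0, max_step=250.0),
}

def screen(tag: str, value: float, previous: float | None) -> bool:
    """Return True if the value passes the automated checks, False if it
    should be flagged for operator review before being sent ashore."""
    lim = LIMITS[tag]
    if not (lim.low <= value <= lim.high):
        return False                      # outside the physically plausible range
    if previous is not None and abs(value - previous) > lim.max_step:
        return False                      # implausible jump between samples
    return True

if __name__ == "__main__":
    print(screen("SEP1_PRESSURE_BARG", 85.0, 84.2))   # True: plausible reading
    print(screen("EXPORT_FLOW_M3H", 9999.0, 2100.0))  # False: out of range
```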

Requirements

One main operational requirement for the data-historian system was that it have minimal impact on the distributed control system (DCS) and no impact on control-system or network security.

It also had to provide online access to 2 years of historical data for 20,000 tags, available 24 hr/day, 7 days a week, through a fault-tolerant hardware and software architecture.
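
To give a rough sense of scale (these are not figures from the project), a back-of-envelope estimate shows why the exception reporting and compression described later matter for a 20,000-tag, 2-year archive. The event intervals and record size in this sketch are assumptions.

```python
# Rough archive-size estimate for the 2-year, 20,000-tag requirement.
# The event interval and bytes-per-event figures are assumptions, not
# values reported for ETAP.

TAGS = 20_000
YEARS = 2
SECONDS = YEARS * 365 * 24 * 3600

def archive_gb(avg_event_interval_s: float, bytes_per_event: int) -> float:
    """Archive size in gigabytes for a given average event rate and record size."""
    events = TAGS * SECONDS / avg_event_interval_s
    return events * bytes_per_event / 1e9

# One stored event per tag per second (no filtering) vs. one per minute
# after exception reporting and dead-banding.
print(f"unfiltered, 1-s events : {archive_gb(1, 16):,.0f} GB")
print(f"filtered, 60-s events  : {archive_gb(60, 16):,.0f} GB")
```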

Architecture

Fig. 1 shows the data-historian system and its relationship to downstream applications such as hydrocarbon accounting, production reporting, and optimization.

In the context of ETAP asset information management, of which the historian was one part, a supplier alliance was formed to ensure the integration of all processes, people, and systems by first oil.

The EMS team had just 21 months to define, build, and install all the information and decision-support processes for the business.

Methodologies

To ensure the fast-track project remained on course, the team adopted the DSDM (dynamic systems development method) methodology.

DSDM is a formal method based on rapid application development (RAD) and ensures that the design and development of the resulting system are user driven.

The more traditional method of using the "waterfall" model for system design and building is still valid, and indeed should be used where the application being developed is safety critical, has no visible user interface, or involves heavy real-time development.

The formality of the waterfall model requires a greater level of technical detail to be defined and is an excellent method for system level functions such as device drivers.

The waterfall model's weakness is that it is cumbersome and costly in responding to even apparently trivial changes in user preferences. In user-oriented developments, its greatest weakness is that it defines too many details, too soon.

This is compounded by the fact that the model puts less emphasis on involving the end user in the development and acceptance of a design. These steps usually are carried out on the user's behalf through a technical approval mechanism, with design authority held by someone who will never use the application.

The danger is that the end user is presented with a system which, although functionally complete, may be difficult to use.

The RAD design approach is well suited to user interface applications and conceptual designs, in which a prototype is developed rather than the user being presented with a drawing of the display layout or a data-flow diagram.

Although the prototype has limited functionality, the user can "feel" it before getting it. Cosmetic or other changes can then be made at minimal expense.

The key components that must be in place for this approach to work are:

  • Customer involvement
  • Empowerment
  • Prototype reviews
  • Joint application development (JAD) sessions or workshops.
The methodology relies on continuity, commitment, and time provided by the customer or end user. That person must also be empowered to make decisions on behalf of the purchaser, and those decisions need to be supported. This person must be someone appropriate for the job, not just whoever happens to be free that week.

Designs and user requirements are established in workshops and prototype reviews. A workshop brings together a number of people to solve a common problem; the attendees brainstorm ideas that can be used in the design process.

This environment has been found to encourage innovative thinking and is more likely to highlight problems and solutions than a traditional project meeting.

Prototype reviews give the customer an opportunity to see what he is going to get before it is delivered. A prototype may be thrown away once the design has been approved, or it may evolve into the delivered product.

MDC adopted a mixture of the DSDM and waterfall methodologies for the development of the data historian and associated applications. DSDM was used for the user-interface applications, with waterfall methods being used for the design and building of the interface to systems such as the fiscal metering system.

The customer provided a team member dedicated to the project who was empowered to make decisions to allow the project to proceed without delay.

This choice of mixed methodologies highlights a key point for successful software development, namely that the design and control mechanisms must be chosen to fit the activity. The "one size fits all" approach of using one methodology regardless of the nature of the activity results in either a higher cost, a poor quality of solution, or more commonly, higher cost and poor quality.

Technical solution

The overall architecture of the data historian is shown in Fig. 2. The components consist of hardware and software in a client-server configuration.

Hardware

A Compaq Proliant NT server supports the data-archive part of the historian, with two DEC Alpha machines, running in fail-over mode, performing the data collection from the Baileys Infi90 DCS.

All machines have an uninterruptible power supply (UPS) to protect against power failures, with the server having additional protection through the use of redundant hardware such as fans and power supplies. The data collection machines are connected to the DCS via dedicated computer gateways (CG).

A firewall was installed to prevent unauthorized access by users to the server, and further hubs prevent access to the process network.

The routes between the data collectors and the NT server are distinct, so that if one connection to the server goes down, the other continues to provide data. Although the NT server is a single point of failure, the architecture design prevents any permanent data loss.

Software

OSI Software Inc.'s PI was selected as the base software for the data-historian application because it fits the client-server model adopted for the hardware architecture: clients, such as PC users onshore and offshore or device drivers, establish a connection to the server and can then request data from, or send updates to, the data base server.

Using this model allowed full separation of data collection from data storage and data retrieval (Fig. 3).
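
The separation can be illustrated with a deliberately simplified sketch; this is not the PI programming interface, only an analogy in which a collector writes events to a server component and clients query it, with neither side depending on how the other works.

```python
from collections import defaultdict
from datetime import datetime, timezone

class HistorianServer:
    """Toy stand-in for the archive: stores (timestamp, value) events per tag."""
    def __init__(self):
        self._archive = defaultdict(list)

    def store(self, tag, timestamp, value):          # used by collectors
        self._archive[tag].append((timestamp, value))

    def query(self, tag, start, end):                # used by clients
        return [(t, v) for t, v in self._archive[tag] if start <= t <= end]

class Collector:
    """Reads from a data source (stubbed here) and forwards events to the server."""
    def __init__(self, server, read_source):
        self._server = server
        self._read = read_source

    def scan_once(self, tag):
        value = self._read(tag)
        self._server.store(tag, datetime.now(timezone.utc), value)

# A client only ever talks to the server, never to the data source directly.
server = HistorianServer()
collector = Collector(server, read_source=lambda tag: 42.0)   # stubbed DCS read
collector.scan_once("SEP1_PRESSURE_BARG")
print(server.query("SEP1_PRESSURE_BARG",
                   datetime(2000, 1, 1, tzinfo=timezone.utc),
                   datetime(2100, 1, 1, tzinfo=timezone.utc)))
```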

In this particular instance, the NT version of the data historian server was selected to run on the Compaq NT server as opposed to a VMS or UNIX platform. This allowed it to be maintained along with the other NT servers installed offshore.

As well as performing compression on the data being passed to it, the server supports other vital system functions such as:

  • Supporting an open data base connectivity (ODBC) interface to allow complex querying from systems such as Oracle (a query sketch follows this list)
  • Supporting a calculation environment in which expressions can be embedded inside tag definitions and evaluated on change or at a particular time
  • Maintaining tag definitions, security, and user accounts.
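
As an example of the kind of query the ODBC interface makes possible, the sketch below pulls a time slice of values for one tag. The data source name, table, and column names are assumptions about how a historian's ODBC driver might expose its archive, not details given here.

```python
import pyodbc

# Assumed DSN and schema; the actual data source name, table, and column
# names depend on how the historian's ODBC driver is configured.
conn = pyodbc.connect("DSN=ETAP_HISTORIAN", autocommit=True)
cursor = conn.cursor()

cursor.execute(
    """
    SELECT tag, time, value
    FROM picomp
    WHERE tag = ?
      AND time BETWEEN ? AND ?
    ORDER BY time
    """,
    ("SEP1_PRESSURE_BARG", "1999-01-01 00:00:00", "1999-01-02 00:00:00"),
)

for tag, time, value in cursor.fetchall():
    print(tag, time, value)
```
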
On each DEC Alpha machine, a standard interface to the Baileys Infi90 DCS running on OpenVMS was used. The interface required DCS vendor software (SEMapi) to connect to the CG.

VMS was chosen for the interface hosts, rather than NT, because this was the platform supported by the DCS supplier. The interface supports fail over by monitoring the station status of the CG attached to the other interface program.

If it detects a fault in this status, data collection fails over seamlessly to the backup machine. To do this, one of the interface hosts acts as the primary node and the other as the secondary node.
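
A minimal sketch of such a primary/secondary arrangement is shown below. The status check and scan functions are placeholders; the real monitoring is performed through the DCS vendor's gateway software.

```python
import time

def peer_gateway_healthy() -> bool:
    """Stand-in for reading the station status of the computer gateway (CG)
    attached to the other interface host; the real check is vendor-specific."""
    return True   # placeholder

def collect_once():
    """Stand-in for one scan of the DCS by this node."""
    pass

def run_secondary(poll_interval_s: float = 5.0):
    """Secondary node: stay idle while the primary's gateway looks healthy,
    and take over scanning as soon as a fault is detected."""
    active = False
    while True:
        if not peer_gateway_healthy():
            active = True            # primary's gateway has faulted: take over
        if active:
            collect_once()           # perform the scans the primary would have done
        time.sleep(poll_interval_s)  # assumed status-polling interval
```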

The Baileys interface talks to the DCS through a computer gateway rated at 30,000 tags. The DCS reports values by exception to the data collector interface, as opposed to the interface polling for values, thus ensuring that the traffic on the DCS network is minimized.

In turn, only significant changes in the data were sent to the data-historian server, through the application of dead bands to the tags being read. This prevented the network link to the server from being swamped.
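
A dead band of this kind can be sketched as follows. The 0.5-unit band is an arbitrary illustration; in practice each tag would have its own setting.

```python
class DeadbandFilter:
    """Forward a new value only when it differs from the last forwarded
    value by more than the tag's dead band."""
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent = None

    def significant(self, value: float) -> bool:
        if self.last_sent is None or abs(value - self.last_sent) > self.deadband:
            self.last_sent = value
            return True
        return False

f = DeadbandFilter(deadband=0.5)
for v in [100.0, 100.2, 100.4, 100.9, 101.0, 102.0]:
    if f.significant(v):
        print("send", v)   # 100.0, 100.9, and 102.0 are forwarded; the rest are suppressed
```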

As well as performing a failover function, the data-collection nodes fulfil another important function. If the link to the PI server is down, for example because of a network outage, the node will continue to scan the DCS and buffer events to disk. When the link is restored, data for the missing period are forwarded to the server.
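
A simplified store-and-forward loop of this kind might look as follows. The send function and the on-disk spool format are placeholders, not details of the actual interface.

```python
import json, os

BUFFER_FILE = "events.buffer"     # assumed local spool file on the collector

def send_to_server(event) -> bool:
    """Placeholder for the real transfer to the historian server.
    Returns False when the link is down."""
    return False   # pretend the link is down in this sketch

def record_event(event):
    """Try to send immediately; spool to disk if the server is unreachable."""
    if not send_to_server(event):
        with open(BUFFER_FILE, "a") as f:
            f.write(json.dumps(event) + "\n")

def flush_buffer():
    """Called when the link is restored: forward spooled events, then clear the spool."""
    if not os.path.exists(BUFFER_FILE):
        return
    with open(BUFFER_FILE) as f:
        pending = [json.loads(line) for line in f]
    if all(send_to_server(e) for e in pending):
        os.remove(BUFFER_FILE)

record_event({"tag": "SEP1_PRESSURE_BARG", "time": "1999-01-01T00:00:00Z", "value": 85.0})
```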

The main user interface to the system is standard application software provided as part of the base PI product. The interface supports the development of mimic-type displays that present data from the PI server in graphical form. It also provides an Excel add-in that allows data to be extracted into a spreadsheet for further analysis.

This software was supplied with the system, and users are based both offshore and onshore, the latter connecting via the satellite link.

EMS experience

A number of challenges were faced and overcome in implementing the data historian. To help other companies about to embark on similar projects, a list of things to watch for follows:
  • Good communication, both vertically and laterally, through regular reporting procedures ensured that all parties were aware of their impact on the overall project. This was particularly important in this case, where a large number of vendors were involved. For the ETAP project, the companies involved included the platform designers, the DCS manufacturer, the end customer, offshore commissioners, telecomms, and support.
  • Buy-in from the end customer, and its commitment to supplying a dedicated resource to the project, helped enormously in delivering, on time and on budget, a system the customer was happy with.
  • The risk-analysis process in the project identified potential problems with the plan early enough to prevent or minimize their impact. The risks were continually re-evaluated as part of the reporting procedure.

Benefits

The EMS system has been running since June 1998 and has been classed as a great success by BP Amoco. The benefits from the system have been:
  • The quantity and quality of process data available to onshore users is regarded by BP Amoco as unprecedented. This is primarily due to the software tools selected and the user applications supplied to view and analyze the data.
  • The ease with which data are available has given BP Amoco better reporting and forecasting than for other recent asset start-ups.
  • The project was successfully completed in a short time frame because the design methodologies were matched to the nature of specific tasks.
  • The robustness designed into the system has ensured no data loss, even during major unplanned events such as total network failures.
  • The software environment has enabled information to be shared with the other asset partners in a controlled manner via the World Wide Web.

The Author

Simon Dyson is a project manager for MDC Technology Ltd., Riverside Park, Middlesbrough, U.K. He specializes in the design and delivery of management information system projects worldwide and has 10 years' experience in delivering such systems within the oil and gas and petrochemical industries.

Copyright 1999 Oil & Gas Journal. All Rights Reserved.