New software moves distributed-temperature sensing data

Oct. 13, 2008
Shell has developed open standards software to fill the gap of moving distributed-temperature sensing (DTS) data from the wellsite to the engineer’s office desktop.

The software, known as the DTS database, translates DTS data simply and effectively into a database of temperature vs. depth.

Many different industries have used distributed temperature sensing for years. The technology uses a fiber-optic cable installed along or around the object observed. In production operations, the fiber is installed along a wellbore within a control line clamped to the tubing. A light box installed at the beginning of the fiber contains a laser that sends light pulses into the fiber; the light box then records the light back-scattered from the fiber.

Part of the back-scattered light is temperature dependent; determining the average spectrum at different depths therefore provides the temperature along the object.

The fiber provides temperature readings at about 1-m depth intervals, with sample times ranging from 30 sec to many hours. The number of light pulses used in calculating the spectrum dictates the accuracy of the measured temperature.
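The accuracy-vs.-pulse-count trade-off can be illustrated with a small simulation. This is a sketch under the assumption of independent Gaussian noise per pulse, not Shell's actual processing: averaging N pulses shrinks the noise on the temperature estimate by roughly the square root of N.

```python
import random
import statistics

def averaged_reading(true_temp_c, noise_std_c, n_pulses, seed=0):
    # Illustrative model only: real DTS derives temperature from the ratio of
    # Raman back-scatter components; here each pulse's reading is modeled as
    # the true temperature plus Gaussian noise, and n_pulses readings are
    # averaged into one temperature estimate.
    rng = random.Random(seed)
    readings = [true_temp_c + rng.gauss(0.0, noise_std_c) for _ in range(n_pulses)]
    return sum(readings) / n_pulses

def estimate_spread(n_pulses, noise_std_c=2.0, trials=500):
    # Standard deviation of the averaged estimate over repeated acquisitions:
    # a proxy for the accuracy of the reported temperature.
    estimates = [averaged_reading(80.0, noise_std_c, n_pulses, seed=t)
                 for t in range(trials)]
    return statistics.pstdev(estimates)
```

With a 2° C. single-shot noise level, averaging 100 pulses brings the spread down to roughly 0.2° C., which is why longer acquisition times trade update rate for accuracy.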


Fig. 1 illustrates a typical fiber cable installation and resulting temperature profile for a gas lifted well.

The voluminous real-time DTS data stream, transferred electronically to the engineer’s desktop, provides timely well status.

Data transfer limitations

Prior to Shell’s rollout of the DTS database architecture, existing data handling systems had limitations. The systems gathered wellsite data in a proprietary file format and periodically transferred it to a remote, foreign server.

A proprietary process digested and manually interpreted the data, and a foreign database stored it. To access the data, engineers logged into a website that graphically displayed the processed and partially interpreted data.

Engineers lacked confidence in data quality because of:

  • Inability to precheck the raw DTS data.
  • Uncertainty whether the service provider validated the raw data.
  • Uncertainty about which data the service provider processed.

Additional concerns included:

  • Data security. To access data, third parties needed to penetrate firewalls into the potentially vulnerable process-control domain.
  • Data ownership. Data stored on foreign databases and computers were out of company control and potentially noncompliant with related company standards.
  • Data formats. Dependence on proprietary data formats limited the possibility of prechecking data.
  • Data analysis. Lack of appropriate visualization tools limited data analysis.

Consequently, the operators were motivated to develop internal data acquisition, processing, visualization, and storage systems.

DTS data-handling changes

Shell defined a DTS data-handling architecture to overcome the previously mentioned problems (Fig. 2). It resolved security issues by having the software comply fully with Shell data-security standards. The new architecture and security standards led to supplier-independent data transfer and storage, decoupling the DTS hardware from data handling, visualization, and interpretation.


The net effect enables selection of “fit for purpose” elements from different suppliers without changing the associated data-handling systems. It also allows for clearly defined and managed data ownership. With this architecture, engineers have access to raw data and can develop interpretation skills to translate the data into valuable assessments.

This architecture thus incorporated two key elements:

  1. An industry-wide standard for DTS data exchange.
  2. A database for storing DTS data.

Industry-wide format

In 2004-05, several members of the petrotechnical open standards consortium (POSC) formed a work group to define a standard for DTS data exchange. POSC was an industry-wide organization that defined oil and gas industry data and data exchange standards. Prominent standard examples are the wellsite information transfer standard markup language (WITSML) used in drilling and the production markup language (PRODML) currently under development.

POSC has reorganized under the name Energistics (www.energistics.org).

In third-quarter 2005, POSC integrated the final version of the DTS exchange standard with the latest WITSML version. Shell supports compliance with this standard for all DTS systems. At the time of this article, multiple vendors comply with this new DTS data-exchange standard.

An industry common-interest group, the subsea fiber optic monitoring group (SEAFOM), established in 2006, promotes the growth of fiber optics for subsea applications (www.seafom.com). SEAFOM also supports a DTS data-handling standard approach based on POSC and WITSML.

The DTS data-handling approach described in this article fully complies with POSC-WITSML standards.

Data storage

Before Shell designed its new DTS database, a market survey found no existing generic database that could satisfactorily handle and store DTS data.

The primary reason is that DTS systems generate data vectors (temperature vs. depth vs. time) incompatible with standard industry control and data-acquisition systems, which are designed to process only individual, time-series point measurements. Hence, Shell decided to develop a dedicated database in-house for storage of distributed production data.

The DTS database is relational and leverages techniques such as processing extensible markup language (XML) files on the fly over standard hypertext transfer protocol (HTTP). It also has built-in data-integrity features, such as cyclic redundancy checking.
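A minimal sketch of such a relational store with cyclic-redundancy checks might look as follows. The table and column names are illustrative assumptions, sqlite3 stands in for the production relational database, and `zlib.crc32` demonstrates the integrity-check idea; none of this is the actual Shell schema.

```python
import sqlite3
import zlib

def make_db():
    # In-memory stand-in for the relational DTS store; names are illustrative.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE trace (
            trace_id INTEGER PRIMARY KEY,
            well     TEXT NOT NULL,
            acquired TEXT NOT NULL,     -- ISO-8601 timestamp
            crc32    INTEGER NOT NULL   -- checksum of the stored samples
        );
        CREATE TABLE sample (
            trace_id INTEGER REFERENCES trace(trace_id),
            depth_m  REAL NOT NULL,
            temp_c   REAL NOT NULL
        );
    """)
    return con

def store_trace(con, well, acquired, samples):
    # Normalize to floats so the checksum matches what the database returns.
    samples = [(float(d), float(t)) for d, t in samples]
    crc = zlib.crc32(repr(samples).encode())
    cur = con.execute("INSERT INTO trace (well, acquired, crc32) VALUES (?,?,?)",
                      (well, acquired, crc))
    tid = cur.lastrowid
    con.executemany("INSERT INTO sample VALUES (?,?,?)",
                    [(tid, d, t) for d, t in samples])
    return tid

def verify_trace(con, trace_id):
    # Recompute the checksum over the stored samples and compare.
    samples = con.execute(
        "SELECT depth_m, temp_c FROM sample WHERE trace_id=? ORDER BY rowid",
        (trace_id,)).fetchall()
    stored, = con.execute("SELECT crc32 FROM trace WHERE trace_id=?",
                          (trace_id,)).fetchone()
    return zlib.crc32(repr(samples).encode()) == stored
```

Splitting trace headers from depth samples is what lets a relational store handle the vector-shaped data that point-measurement historians cannot.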

Large data volumes

DTS measurements produce voluminous data. The database therefore required a compression algorithm that stores a new trace only if it differs substantially from the previously stored acquisition.

An important part of this compression algorithm is the ability to specify zones of interest. The database only stores measurements with substantial changes in these zones, while ignoring changes outside these zones.


For example, temperature variations near the wellhead due to surface temperature could cause enough change to trigger storage of traces. Additionally, one can configure the algorithm to store a trace at a minimum time interval, regardless of temperature change.
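The store-or-discard decision described above can be sketched as a single function. The parameter names and the 0.5° threshold are illustrative assumptions, not the actual algorithm's tuning:

```python
def should_store(new_trace, last_stored, zones, threshold_c=0.5,
                 elapsed_s=0, max_interval_s=3600):
    """Decide whether to keep a new DTS trace (illustrative sketch).

    new_trace / last_stored: dicts mapping depth (m) -> temperature (deg C).
    zones: list of (top_m, bottom_m) depth intervals of interest.
    A trace is stored when any in-zone depth changed by more than threshold_c,
    or when max_interval_s has elapsed since the last stored trace.
    """
    if last_stored is None or elapsed_s >= max_interval_s:
        return True  # minimum-interval rule: store regardless of change
    in_zone = lambda d: any(top <= d <= bottom for top, bottom in zones)
    for depth, temp in new_trace.items():
        if not in_zone(depth):
            continue  # changes outside zones of interest are ignored
        prev = last_stored.get(depth)
        if prev is not None and abs(temp - prev) > threshold_c:
            return True
    return False
```

Note how a large swing near the wellhead (outside the zones of interest) does not trigger storage, while a small in-zone change does.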

Extracting data

The database provides several ways for extracting data (Fig. 3). One way to extract data to a spreadsheet uses open database connectivity (ODBC), with features such as:

  • Plotting temperature vs. fiber length.
  • Playing temperature development in time as in a movie.
  • Plotting multiple traces in one graph.
  • Showing temperature development in time at a selected depth.
  • Zooming capabilities in fiber length and temperature.
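One of the listed views, temperature development in time at a selected depth, can be sketched as pulling the nearest-depth sample from each stored trace. This is illustrative only; the production route is ODBC extraction into a spreadsheet, and the data shapes below are assumptions:

```python
def series_at_depth(traces, target_depth_m):
    """From a sequence of (timestamp, [(depth_m, temp_c), ...]) traces,
    return the temperature nearest target_depth_m at each timestamp."""
    series = []
    for stamp, samples in traces:
        # Pick the sample whose depth is closest to the requested depth.
        depth, temp = min(samples, key=lambda s: abs(s[0] - target_depth_m))
        series.append((stamp, temp))
    return series
```

The same selection is what turns a vector-valued DTS trace into an ordinary point time series.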

Fig. 4 shows the most common means for viewing DTS data. It illustrates the temperature at different time steps for a shut-in well.

One can also view DTS data in the database with an internet browser, using standard relational-database web-server features. This allows administrators and support staff to check the status and contents of the database without installing a viewer on the end user’s personal computer. Viewers are also becoming available commercially.

Another useful data-viewing possibility is a system for selecting data at a certain depth and storing this as a point measurement in the plant historian.

Shell has implemented this link with a standard historian-to-relational-database interface.


Fig. 5 is an example of DTS data displayed in the historian. The plot shows DTS data exported to the historian at a depth of 31.85 m.

Remote expert center

A remote expert center for interpretation of DTS data required access to good quality data from numerous wells in various operating companies. Factors taken into account were:

  • Operators may have legal requirements to control access to their data and may restrict access to selected individuals.
  • Extraction of large data volumes could affect performance, especially if databases are around the globe with bandwidth limitations.

The DTS database has built-in functionality for transferring data between different databases, including flags that control data sharing and thus prevent the transfer of restricted data. This allows operating companies to control which data are shared.
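The sharing-flag idea can be sketched as a filter applied before any inter-database transfer. The flag name and record structure below are assumptions for illustration, not the DTS database's actual schema:

```python
def transferable(traces, requester):
    """Return only the traces the requesting party is allowed to receive.

    Each trace dict carries a 'share_with' flag listing the parties cleared
    to receive it; a trace with an empty flag stays in the local database.
    """
    return [t for t in traces if requester in t.get("share_with", ())]
```

Applying the filter at the sending side keeps restricted data from ever leaving the operating company's database, which is what allows a remote expert center to be fed without violating access restrictions.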


The DTS interpretation framework provides this functionality (Fig. 6).

Data security standards

The DTS database complies with Shell data security standards. Fig. 7 shows the path for the data transfer to the database.


The system has an HTTP post service installed between the process-control domain and the office domain; the service receives XML files and transfers the data into the database.
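The receiving side of such a post service could be sketched as a parser that turns a posted XML trace document into database rows. The element names below (`dtsTrace`, `well`, `sample`) are invented for illustration; a real system would follow the WITSML DTS schema:

```python
import xml.etree.ElementTree as ET

def ingest_xml(xml_bytes):
    """Parse a posted DTS trace document into (well, [(depth_m, temp_c), ...]).

    Illustrative element names only; a production service would validate the
    document against the WITSML DTS schema before loading the database.
    """
    root = ET.fromstring(xml_bytes)
    well = root.findtext("well")
    samples = [(float(s.get("depth")), float(s.get("temp")))
               for s in root.iter("sample")]
    return well, samples
```

In practice this parser would sit behind a small HTTP handler (e.g., Python's `http.server`) in the office domain, so the process-control domain only ever pushes files outward through the firewall.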

DTS database status

Six Shell locations worldwide currently use the DTS database, including an expert-center implementation for central support and remote “expert” data analysis.

The DTS database systems have operated for more than 2 years, confirming sustainable operations. Shell expects the installed base to grow, with several more installations already on the drawing board.

Shell is gathering large volumes of DTS data, and engineers’ skills in interpreting the data are progressing. As understanding of DTS data interpretation increases, Shell is developing automatic monitoring and fingerprinting/pattern-recognition capabilities so that engineers receive automatic notification of events occurring in wells.

These processes add value by supporting production optimization and surveillance such that engineers can spot potential well problems earlier, leading to improved production efficiency.

The authors


Ron Cramer (Ronald.Cramer @shell.com) is a senior advisor with Shell Global Solutions, Houston, in oil field automation and production systems. Cramer has 30 years’ experience with Shell International in upstream oil field operations and production systems. He also worked for 10 years as a chemical engineer for Union Carbide and Polysar in downstream research and process areas. Cramer has a BS in chemical engineering from the University of Strathclyde, UK, and an MSc from the University of Waterloo, Canada.


Martijn Hooimeijer is a production technologist, working for Shell for 7 years in production forecasting, monitoring, and system optimization. Prior to Shell, he lectured at the Delft University of Technology on modeling. He holds a PhD in applied mathematical modeling and an MSc in civil engineering.


Andre Franzen is a senior R&D scientist at Shell International E&P in the Netherlands. His work involves research on optical sensor deployment in the upstream industry. He previously worked at the University of Strathclyde, Corning Inc., Monash University (Malaysia), and Nottingham University (Malaysia campus). Franzen has a subsidiary degree in computer science from Bochum University, Germany, and a PhD in opto-electronics from the University of Strathclyde, UK.