The panel:
- Kenneth N. Abrahams, operations superintendent, Star Enterprise, Delaware City, Del.
- Joanne Deady, regional technical service manager, fluid cracking catalysts, Grace Davison, Baltimore
- Dale Emanuel, manager of planning and engineering, southeastern business unit, Fina Oil & Chemical Co., Dallas
- U. George Frondorf, manager, technical support, Citgo Petroleum Corp., Lake Charles, La.
- Dale Hansen, complex manager, Valero Refining & Marketing Co., Corpus Christi, Tex.
- William F. Johns, coordinator of hydrotreating and hydrocracking processes, Texaco Refining & Marketing Inc., Bakersfield, Calif.
- Theodore E. Keller, manager of project engineering, Ultramar Inc., Wilmington, Calif.
- Charles L. Morgan, manager of operations, La Gloria Oil & Gas Co., Tyler, Tex.
- Dennis Parker, team leader, catalytic cracking units, Phillips Petroleum Co., Sweeny, Tex.
- William J. Potscavage, vice-president of technology, ChemLink, Houston
- J.L. (Jay) Ross Jr., manager, FCC technology development, Stone & Webster Engineering Corp., Houston
- C.C. Shen, process section manager, Jacobs Engineering Group Inc., Houston
- Jose Joaquin Solis Muñoz, corporate director of technology, Compañia Española de Petroleos, S.A., Madrid
- C.E. (Bud) Van Iderstine, manager, operations, Consumers' Co-operative Refineries Ltd., Regina, Sask.
Moderator: Terrence L. Higgins, technical director, National Petroleum Refiners Association, Washington, D.C.
Panelists and attendees at the most recent National Petroleum Refiners Association Question and Answer Session on Refining and Petrochemical Technology discussed process control issues in detail.
Participants shared their experiences on:
- Personal computers (PCs) in process control
- Programmable logic control issues
- Neural networks
- Fieldbus technology
- Statistical analyses of refinery data.
For details on the format of this meeting, held Oct. 4-6, 1995, in San Antonio, see Part 1 of this series (OGJ, June 3, p. 49).
Use of PCs
Van Iderstine:
We do not use personal computers to control any process directly. However, we do use them as an interface to control systems for our programmable logic controllers (PLCs), which, in our plant, primarily control the emergency shutdown systems.
We also use PCs to transfer laboratory information into the operating control rooms and to store it there. Engineers use them for monitoring plant yields and operating conditions.
Abrahams:
At Star Enterprise, PCs are not part of the distributed control system (DCS). However, they are used extensively for control system configuration, troubleshooting, control application maintenance, and process monitoring.
PCs have largely replaced the larger computers that were previously used for data gathering, reporting, and information systems. We use Intel-processor-based computers for our advanced control computers.
PCs are also becoming common in shutdown logic and batch control situations. Ladder logic previously done using relays and wires is now being done with software.
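As a simple illustration of that shift, consider a classic motor start/stop seal-in rung, once built from a relay and wiring; in software it reduces to one Boolean evaluation per logic scan. The following Python sketch is hypothetical, not Star Enterprise's or any vendor's actual PLC code:

    # Hypothetical sketch: a start/stop "seal-in" rung expressed in software.
    def motor_run(start_pb, stop_pb, was_running):
        # The motor runs if the start button is pressed or it was already
        # running, provided the stop button is not pressed.
        return (start_pb or was_running) and not stop_pb

    # One logic "scan":
    # running = motor_run(start_pb=True, stop_pb=False, was_running=False)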
Johns:
Except in very limited applications, we do not use PCs for on-line control systems. They are more typically used for data acquisition applications, as configuration and maintenance interfaces to subsystems such as tank gauging or motor-operated valve controls, or as control system tools for running modeling software.
Where reliability is critical, PCs are not used, even "hardened" or redundant ones.
Keller:
Personal computers have taken on many roles in refinery control systems. None of these roles involves actual control of a system; rather, the PC assists the control system. Personal computers are used for remote indication and data acquisition purposes.
Personal computers:
- Enable the use of specialized software that can offer features not available on DCSs
- Enable the use of instrumentation with noncompatible communication protocols
- Filter and direct data to the specific groups of people who need it
- Serve as a cost-effective display and interface at a remote site, instead of a new DCS.
One important, but often overlooked, role personal computers play is that of an interfacing device to PLCs. Without the personal computer, we could not set up, monitor, or troubleshoot a PLC.
PLC/DCS interface
What instrument philosophies are used in regard to local PLC interfaces with the DCS? What is the preferred location for PLCs (outside vs. inside the control room), and what redundancy is used? Are PLCs on their way out in favor of direct tie-in of sensors to the DCS without local controllers?
Van Iderstine:
PLCs are used only in our newer upgrader units for providing the control for emergency shutdown systems. The PLCs have redundancy built in and connect into the DCS via a single nonredundant gateway.
PLCs are located in our substations, which are heated under a positive pressure. The advantage of the PLC is that it is independent from the DCS. On the other hand, the PLCs have caused numerous nuisance shutdowns. Having nuisance shutdowns in a refinery is like driving your car down a highway at 70 mph and having the driver's side air bag blow up in your face.
Parker:
Our philosophy is to share as much information between the PLC and DCS as is practical. We have used Honeywell's PLC gateway to integrate the Allen-Bradley PLC6. Thus far, we are only transferring information; no control functions are performed through the gateway.
These larger PLC systems have been confined to control rooms and blockhouses. The smaller shoebox PLCs are being located in the field.
For us, redundancy has not been an issue, as we are not using PLCs in shutdown system applications. We still use relays and hardwire.
Emanuel:
All of our PLCs are located inside the control room whenever possible, and are redundant. Less than 5% of our PLCs are nonredundant and located in areas other than the control room.
Frondorf:
We use PLCs for safety shutdown systems and keep that function separated from the DCS. Generally, the interface is a common trouble alarm. For nonsafety systems, such as water treatment regeneration systems, we prefer to use the logic capabilities of the newer generation of DCSs rather than PLCs. The preferred location, whenever possible, is the main control room or the DCS rack building. For shutdown systems, again, we prefer a PLC of the multiple-voting design type.
Assuming that the fieldbus instrumentation systems now being developed and offered by the instrumentation companies become the standard over time, we could see the DCS evolve into more of an information base for the operator and a platform for maintaining advanced control applications. We do not believe this will occur immediately; it will be an evolution over time.
Keller:
The most important philosophy regarding local PLC interface with DCSs is that the DCS is always the master and the PLC is the slave. The preferred location for a PLC is near the instrumentation/equipment for troubleshooting purposes.
The question of redundancy needs to be addressed on a case-by-case basis. If the system design requires community protection and refinery personnel protection, redundancy is required. If we are considering machinery protection and operational interlocks, redundancy is not required.
PLCs are definitely not on their way out in favor of direct tie-in of sensors to the DCS. On the contrary, PLCs are being used more because we do not want to overload our DCS, which has a limited capacity.
Don A. Husted (UOP):
It should be remembered that some standards and regulations now being proposed, such as ISA SP84, would require safety systems to be separated from control systems. In many cases, people have found that using the DCS for control and PLCs for safety is one way of doing that.
Neural networks
Process monitoring and advanced control applications frequently require current stream property data. On-line analyzers are often unreliable, hence a number of inferred property methods have been developed. Please comment on the use of neural network technology to develop or improve inferred property data.
Johns:
We are using inferred property correlations at some of our plants. At present, the neural network technology is in use at one plant. The others are using more traditional regression techniques.
Emanuel:
We have one pilot application that has been completed at our Port Arthur, Tex., refinery involving the use of neural networks to infer stream property data. We have a neural network in service that predicts the 95% distillation point of light cycle oil from the fluid catalytic cracking (FCC) unit every minute. The result deviates from the laboratory measurement by an average of about 7° F.
A neural network is more reliable than an analyzer in that it does not need to be calibrated frequently. However, the development of the neural network model is not as easy as the literature would have you believe.
A significant amount of preprocessing of the training data is necessary in order to remove upsets or other undesirable information. Once the model has been trained to an acceptable error level, the results need to be evaluated to make sure the model makes engineering sense.
We have had several occasions when a neural network decided that a process variable known to be critical from an engineering perspective was not important to the model. Clearly, an understanding of the process is very desirable.
Another important consideration in neural network maintenance is to make sure that the neural network is operating on data within the same statistical range of the data on which it was trained. If a neural network is shown data outside its training set, its answer is nothing more than a bad guess.
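A minimal sketch of such a range check follows; the Python below and its variable names are illustrative assumptions, not part of any commercial package:

    # Hypothetical sketch: flag model inputs that fall outside the range
    # of the training data, so the output is not trusted for control.
    def within_training_range(inputs, train_min, train_max):
        # inputs, train_min, train_max: equal-length sequences of floats
        return all(lo <= x <= hi
                   for x, lo, hi in zip(inputs, train_min, train_max))

    # If this returns False, report the inferred property as suspect
    # rather than feeding it to a controller.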
We will continue to develop neural network models to be used as soft analyzers where analyzers do not exist or are unreliable. The cost of developing and maintaining a neural-network-based system should be a small fraction of the cost of purchasing and maintaining a process analyzer.
Frondorf:
The promise of the neural network technology lies in its ability to model nonlinear behavior without requiring an in-depth understanding of the chemistry and thermodynamics of the process, or the physical configuration of the equipment. However, in trying to use this technology as a replacement for on-line analyzers, we have run into several obstacles.
The model requires a large set of data, which at typical sampling frequencies can mean 6 months or more of data collection. Neural networks also are very susceptible to process changes. Catalysts change, trays or packings change, product specifications change, and you must start over with your data collection and retrain the model. Depending on how the model is configured, it may not be able to compensate for unmeasured process changes such as a feed composition shift or equipment damage.
At the Corpus Christi, Tex., refinery, we have experienced these obstacles. Because of these limitations, we feel the best use of neural network technology, at this time, may be to complement, rather than replace, on-line analyzers. The model could be used to predict future readings from the on-line analyzers.
Ross:
Several successful neural network applications have been reported in continuous emissions monitoring of combustion systems, particularly for NOx content and control.
In another application, we have worked with neural networks applied to FCC operations to develop predictive algorithms to operate closer to multiple operating constraints. Multiple predictive solutions were compared to data from the unit to infer feedstock properties and crosscheck unit data for consistency. These applications were used only in a limited way for on-line control.
You should give the network a model framework to build upon, instead of letting it make up its own correlation, which would require masses of highly accurate data to back-derive basic relationships the designers already know.
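One hedged way to picture that framework: let the network learn only the residual on top of a correlation the designers already know. Both functions in this Python sketch are illustrative stand-ins, not real unit models:

    # Hypothetical sketch: known correlation plus a learned residual.
    def base_correlation(feed_api, riser_temp_f):
        # Stand-in engineering correlation (illustrative numbers only).
        return 650.0 - 2.0 * feed_api + 0.1 * (riser_temp_f - 970.0)

    def inferred_property(feed_api, riser_temp_f, residual_model):
        # residual_model: a trained network that predicts only the small
        # error left over after the base correlation (an assumption here).
        return (base_correlation(feed_api, riser_temp_f)
                + residual_model(feed_api, riser_temp_f))

    # Before training, residual_model can simply be: lambda api, t: 0.0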
Roger O. Pelham (Honeywell Profimatics Inc.):
I would like to comment on neural networks from a historical perspective.
It has been almost 30 years since I was first involved with a catalytic cracker control project. At the time, IBM was putting computers in many of our refineries. They recommended that we log all this data and regress it to produce instantaneous models which would shed light on all our problems.
Somehow, it just wasn't that simple, or successful. Issues kept cropping up regarding quality of data, range of data, process modifications since the data were taken, and cause and effect among the variables.
We are still struggling with the issue of analyzer reliability. It has forced us in the last 10 or 12 years to attempt various inferential methods because the analyzer does not work.
Neural networks are the latest, and probably the best, mathematical technique to obtain information where we cannot effectively measure it. However, the basic issues remain unchanged from 30 years ago.
You have to know your process fundamentals so that your neural network model is reasonable. Neural networks are no panacea; do not treat them as a black box. Ultimately, they are no replacement for having good, reliable analyzers in the first place. That is really the problem we need to solve.
Rick Vice (Marathon Oil Co.):
Marathon implemented four neural-network-based inferred properties on a crude unit at its Texas City, Tex., refinery at the end of 1994. The properties and their statistics are shown in Table 1.
Model bias is corrected automatically with lab data; therefore the average error should be nearly zero.
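A minimal sketch of one such bias-update scheme, assuming an exponential filter on the lab-minus-model residual (an illustration, not Marathon's actual algorithm):

    # Hypothetical sketch: blend each new lab result into a running bias
    # so the corrected prediction's average error tends toward zero.
    def update_bias(bias, lab_value, model_value, gain=0.3):
        residual = lab_value - model_value
        return bias + gain * (residual - bias)

    # Corrected prediction = raw model output + bias, updated whenever a
    # lab result arrives, e.g.: bias = update_bias(bias, lab_95pt, raw_95pt)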
The cost of a neural network inferred property can be less than $10,000 per property, which is less than other commercially offered inferred-property products and much less than an analyzer. Reliability is as good as that of the instrumentation used in the model.
Neural networks are an empirical modeling technique and have the same strengths and weaknesses as any empirical technique:
- They require a comprehensive data set from which to train
- The data must cover the entire range of expected operation (We used 2 years of data to cover both winter and summer operations.)
- If an important variable does not vary in the training data, the model will not show a relationship
- A data historian is almost required.
There must be a significant signal-to-noise ratio in the target property data. If you are controlling within, for example, 3° F. of the target, and the test method is accurate to 2° F., a model cannot be developed. We calculate an expected best correlation before attempting to develop a model.
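Using those example numbers, one hedged way to make that pre-check, assuming the test-method error is independent of the true property variation:

    # Hypothetical sketch: the best achievable R-squared is limited by
    # how much of the observed variation is just test-method noise.
    sigma_total = 3.0  # observed spread of the property, deg F (example)
    sigma_test = 2.0   # test-method repeatability, deg F (example)
    r2_max = 1.0 - (sigma_test / sigma_total) ** 2
    print(round(r2_max, 2))  # about 0.56, so little signal is left to model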
Bad data in the training data set will result in a bad model. Considerable effort must be put into cleaning up the data. Of course, the results must be reviewed to determine if the relationship makes sense.
One can easily model nonlinear systems. As with any real-time model, tracking model performance and model maintenance are ongoing issues. For inferred properties, a simple statistical process control chart is usually sufficient.
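For illustration, such a chart can be as simple as individuals-chart limits on the lab-minus-model residuals; this Python sketch is an assumption about one reasonable implementation, not a description of Marathon's tools:

    # Hypothetical sketch: 3-sigma control limits for tracking an
    # inferred property against lab results.
    import statistics

    def control_limits(residuals):
        # residuals: recent lab-minus-model differences (2 or more points)
        center = statistics.mean(residuals)
        sigma = statistics.stdev(residuals)
        return center - 3 * sigma, center + 3 * sigma

    # A residual outside these limits signals the model needs attention.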
Fieldbus
What refiners have plans to install fieldbus technology instead of, or to replace, DCS systems?
Solis:
Besides sending signals to and receiving signals from all field instruments through a single cable, rather than one cable per instrument, fieldbus also moves the elemental control functions (proportional-integral-derivative, or PID) close to the field instruments, enabling the central control house (DCS) to concentrate on more sophisticated control jobs.
In this way, the fieldbus is much more than a new technology applied to field instruments; it is a new control philosophy. In addition, bulky hardware such as input/output racks can be replaced with an extremely small number of cables.
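For reference, the "elemental control" a fieldbus device would execute locally is a discrete PID update each scan. A minimal, illustrative Python sketch follows (positional form; no real device protocol is implied):

    # Hypothetical sketch: one PID step run in the field device, leaving
    # the DCS free for higher-level control.
    def pid_step(setpoint, measurement, state, kp=1.0, ki=0.1, kd=0.0, dt=1.0):
        integral, last_error = state
        error = setpoint - measurement
        integral += error * dt
        derivative = (error - last_error) / dt
        output = kp * error + ki * integral + kd * derivative
        return output, (integral, error)

    # Each scan: output, state = pid_step(sp, pv, state)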
We are closely tracking the development of this philosophy. We believe the main constraint is the lack of agreement among vendors. Nevertheless, field tests are being conducted today, and we expect fieldbus to be on the market very soon (OGJ, May 20, p. 79).
We believe that just as we changed from pneumatics to electronics, and then to DCS, we will move to fieldbus technology as soon as it becomes a reliable and available commercial product.
Shen:
We do not yet know of anyone who has installed fieldbus technology. This technology uses the data highway concept inside the process area. It can drastically reduce the number of instrument signal cables; we have heard of ratios anywhere from 20-to-1 to 30-to-1.
For new process units, this concept can reduce costs in terms of the cables, cable installation, junction boxes, and engineering. It is probably not economical to revamp an existing unit.
To date, two areas still lack established acceptable standards. The first is the communications protocol for handling the field instrument signals. The second is the interface between the digital signals and the DCS.
A few groups of hardware manufacturers are pushing their own standards. This situation is analogous to the early years of the videocassette recorder: Beta format vs. VHS format. We understand that the Instrument Society of America is very close to having a standard established, so we expect that refineries will start using this technology very soon.
Johns:
We do not believe that fieldbus technology will replace the DCS. There is currently no standard for fieldbus technology, and we are not sure that one will ever exist.
In the event that a standard emerges, the reliability issues (such as lack of redundancy and failure modes) and the costs associated with implementing this technology seem to make it quite unattractive.
Statistical analysis
What kind of statistical software packages are being used to analyze and reconcile data at the refinery, and how are they being used? Who is using these programs?
Ross:
We are aware of at least one refiner using a commercially developed advanced control package to trend and reconcile masses of hydrocracker data to predict and control the operation of the unit. The specific application was bed temperature monitoring and control of recycle quench.
Of course, some of the majors are developing their own internal packages for data reconciliation.
Potscavage:
On a smaller scale, we use an off-line program called EON, which stands for Electronic Operator Notebook. It is user-friendly, allows operators, engineers, or other designated plant personnel to input data manually, and is capable of statistically massaging the data, as well as helping to develop correlations between various parameters.
Johns:
Datacom from Simulation Sciences Inc. is in use where rigorous on-line models (ROMs) are installed. Three of our plants currently have ROMs on various units.
Copyright 1996 Oil & Gas Journal. All Rights Reserved.