Allen Johnson
Convex Computer Corporation
Dallas
Will Morse
BHP Petroleum
Houston
The letter depicts an E&P infrastructure in which scalable parallel processing (SPP) systems transparently support desktop workstations with the processing power to handle huge numerical problems and efficiently manage terabytes of 3D seismic and well data. It suggests that as a result, E&P companies will compete with unprecedented effectiveness by better understanding and visualizing the subsurface.
November 30, 2005
Stanislaw P. Stanislaw, PhD
Bill Gates Professor of Interpretive Seismic Processing
College of Applied Sciences
P.O. Box 9999
Conroe, Texas 77999-9999
Dear Prof. Stanislaw:
My boss tells me you are on the lookout for current case histories of interest to your students. He asked me to write a description of our work leading up to last week's Wyoming lease sale. For your undergrads I've added explanatory comments to a few of the details. Your students may find this story particularly interesting because of the recently publicized finds in the Overthrust Belt.
As you may already know, I am senior project coordinator and geoscience manager for Energy Corp.'s Houston office. Wyoming gave us major headaches because we got such a late start preparing the bid. Environmental protests nearly killed the sale, and one of the protesting groups obtained a court injunction that kept our data acquisition contractor from completing the seismic shoot almost until the last minute - this despite the fact that our company, and his, comply with applicable federal and state environmental regulations.
Over mountainous terrain, the acquisition team gradually shot 25 sq miles of 3D seismic on a 41-ft grid. They delivered the 490 gigabytes (GB) of field seismic to us only about 2 weeks before the bid deadline. Our geoscience data manager - one of our few staff people - quickly loaded the pair of high-density cartridges into our robotic archive and captured the header data for cataloging. The new data set became immediately available to any authorized scientist working from any of our desktops.
By temporarily raiding two other projects I got the best available people to work on the Wyoming data. That, of course, meant scientists who have strong interdisciplinary backgrounds and inclinations. Susan, for instance, is a math whiz with a geophysics degree who does interpretive processing. She's employed by a processing service company and spends most of her time in-house with us. Many of our other people similarly come to us via outsourcing.
Susan got started immediately. She used her map-based data browser to quickly locate and retrieve all the project data. Before lunch she muted the first breaks, increasing the signal-to-noise ratio by purging the data of spurious signals caused by ground roll and direct acoustic pickup.
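For your undergrads: a first-break (top) mute is conceptually simple. The sketch below, in Python, tapers away everything earlier than a straight mute line; the array shapes, velocities, and the function name are made up for illustration and aren't taken from our production system.

```python
# Minimal sketch of a first-break (top) mute, assuming a shot gather stored as a
# NumPy array of shape (n_traces, n_samples); names and parameter values are illustrative.
import numpy as np

def top_mute(gather, offsets, dt, v_mute=1800.0, t0=0.05, taper_len=10):
    """Zero everything earlier than t0 + offset/v_mute, with a short linear taper."""
    muted = gather.copy()
    n_traces, n_samples = gather.shape
    for i, x in enumerate(offsets):
        t_mute = t0 + abs(x) / v_mute            # mute time for this trace (s)
        k = min(int(t_mute / dt), n_samples)     # corresponding sample index
        muted[i, :k] = 0.0                       # kill direct arrivals / first breaks
        end = min(k + taper_len, n_samples)      # linear taper to avoid a hard edge
        ramp = np.linspace(0.0, 1.0, end - k, endpoint=False)
        muted[i, k:end] *= ramp
    return muted

# Example with made-up numbers: 120 traces, 1,000 samples at 2 ms, offsets 0-3,000 m
gather = np.random.randn(120, 1000)
offsets = np.linspace(0, 3000, 120)
clean = top_mute(gather, offsets, dt=0.002)
```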
Susan swiftly tackled the statics and various filtering tasks such as cleaning up multiples - reflections so strong they bounce back and forth between strata, generating other spurious signals. Susan's own powerful workstation can process data with substantial speed. But even this newly purchased system would slow if it were left unassisted to filter Susan's 1:10 sample, totaling some 40 GB.
As Susan's workstation approached its numeric capacity, it transparently shifted computation load to our scalable parallel processor (SPP). Just a few months ago we added another dozen processors to this system. It was our fourth expansion in 6 years, each time adding the latest generation of processors to those already populating the machine. We also scaled up the memory and data bandwidth to keep the system architecture in balance.
Thanks to automatic load-scheduling, Susan never detected a lag. The SPP's operating system lets our users easily run both seismic and imaging jobs in either serial or parallel mode. Our people can use the SPP directly, like a workstation, but that's unusual. Most never see this machine nor think much about it. It acts as a supplemental reservoir of compute cycles that any of our workstations can seamlessly draw upon - as Susan's desktop was now doing. The SPP also serves as our data management platform.
By now it was closing time. The next day Susan would delve into her velocity model. As your students may know, the Overthrust Belt's folded layers make it extremely difficult to pick velocities and tie the seismic to known pay formations. That's why until the mid-1990s, the Overthrust Belt often posed unacceptable risk.
Sufficient compute power can solve the problem by boosting velocity-picking precision and the geologic model's accuracy. In the mid-1990s SPP technology emerged, and in the ensuing years the performance of SPP-based systems and their reduced instruction set computing (RISC) processors multiplied. This and other advances increased the abundance of compute power and reduced its cost. Companies like Energy Corp. found they could readily afford the intensive computations on 3D seismic required to achieve acceptable risk in the Overthrust Belt.
As workstations, SPP systems, application software, and related technologies evolved, the average time to solution shrank. And activity in the Overthrust Belt really took off.
Intensive processes such as prestack migration velocity analysis and multioffset ray-tracing - which can calculate each acoustic path from shot points to receivers throughout a 3D volume - are now completed in a day or less, even on 3D seismic projects the size of our Wyoming data set. A dozen years ago, performing the same job even on a smaller data set would have taken a massively parallel processing (MPP) machine months of computation.
Back then, target-oriented prestack depth migration was the rule because doing the same migration on an entire 3D volume cost far too much time and money. By now, of course, the situation has reversed: Full-volume prestack depth migration is the rule, not the exception.
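A side note your students might appreciate: one reason these runs map so well onto parallel hardware is that prestack imaging decomposes naturally by shot - each shot gather can be imaged independently and the partial images summed. The toy Python sketch below shows only that decomposition; migrate_shot is a placeholder, not a real imaging kernel, and the grid size and process count are invented.

```python
# Hedged sketch of shot-level parallelism: migrate each shot gather independently,
# then stack the partial images. migrate_shot is a stand-in for a real migration kernel.
from multiprocessing import Pool
import numpy as np

NX, NZ = 400, 300                          # illustrative image grid

def migrate_shot(shot_id):
    """Placeholder for a single-shot migration: returns a partial depth image."""
    rng = np.random.default_rng(shot_id)
    return rng.standard_normal((NZ, NX))   # real code would wavefield-extrapolate here

if __name__ == "__main__":
    shot_ids = range(500)                      # one task per shot gather
    with Pool(processes=16) as pool:           # worker processes stand in for SPP nodes
        partial_images = pool.map(migrate_shot, shot_ids)
    image = np.sum(partial_images, axis=0)     # stack partial images into one volume
```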
The next morning, while our SPP system completed its overnight filtering tasks, Susan called in Sam. He was the other member of my ad hoc Wyoming project team. A geologist by training, Sam also does geophysics and has a better grasp of earth modeling than do most other scientists. He and two others last year formed their own independent prospect-generating company; mostly they work here at Energy Corp. and partner with us. I borrowed Sam on an emergency basis to interpret our Wyoming prospect, pulling him away from his own projects. For that privilege his company will require a bit of participation in Wyoming, but it'll be worth it.
For the first couple of days Sam spent long hours roughing out a geologic model, combing through the digitized log data from the dozen wells widely scattered around the area and applying his own knowledge of the region. Now he and Susan spent most of the day picking velocity ranges. They set up a velocity run to perform prestack migration analysis on a quarter-mile grid through the area. Again the job ran overnight.
When Susan and Sam came in the next morning, they studied the coherency plots. Luck was with them: The sampling pretty much confirmed their velocity model. They integrated it with the geologic model and by end of day started the full-volume prestack depth migration.
Because depth migration is inherently parallel, and because the deadline was so short, I borrowed SPP capacity from other projects for this run. The SPP operating system lets me reconfigure capacity within seconds. My desktop displays a grid representing all my SPP processors. It also shows the current status of each: what project team it "belongs to," whether it's assigned to run in throughput or parallel mode, and how much of its capacity is currently available. To grab some extra processing capacity for Susan's run I quickly clicked on several processors and designated them for parallel mode, instantly tripling the size of the numeric server Susan's workstation can access.
Each team's workstations "see" all of the SPP but access only their assigned portion of its capacity. The assigning takes place simply by pointing to any processor or group of processors, then clicking on a project team name in a pop-up list. This creates a logically defined server, a virtual machine "seen" by that team's workstations. It happens in seconds, on the fly. I keep at least one logical server assigned to each team.
Recent operating system (OS) releases let the SPP perform automatic load scheduling. The system can intelligently juggle its own numeric capacity to optimize the time to solution of each job running at any given moment. Essentially this means it can make its own decisions about deploying processors among logical servers. I usually enable this feature because the system makes good choices and saves me the trouble of making them myself.
This means I've let my role evolve from assigning processors to specifying the project teams' relative priorities for the system to act upon. For the Wyoming project, however, I did a manual override, temporarily reassigning a number of processors to help perform Susan's full-volume prestack depth migration. To make deadline I wanted a solution ASAP.
As I grabbed processors for the migration run, I glanced at my system administration screen to make sure they were being reassigned to the Wyoming project. As I watched, the system redistributed the remaining pool of processors among the other teams to keep their projects rolling at optimum available speed. I saw processors' utilization leap up as they took on extra load. I saw the "ownership" status and operating mode of various processors change as the system automatically optimized the mix.
As a result, there wasn't much impact on my other teams - this despite the intensive computation some were doing at the time. For instance, one asset team was using a Monte Carlo technique to simulate a new reservoir. Another was processing the latest in a series of "4D seismic" surveys - actually time-lapse 3D we shoot periodically to monitor drainage during tertiary recovery in a mature field.
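For your undergrads, here is a rough Python sketch of the kind of bookkeeping I've been describing - processors that carry an owner and a mode, a manual override that pins them to a team, and a priority-based rebalance of whatever is left. The names, the proportional-share policy, and the numbers are my own illustration, not the vendor's actual interface.

```python
# Hedged sketch: processors with owners and modes, a manual override, and a trivial
# priority-based rebalance. The policy and all names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Processor:
    cpu_id: int
    owner: str = "pool"
    mode: str = "throughput"      # or "parallel"
    pinned: bool = False          # True after a manual override

def manual_override(cpus, team, count, mode="parallel"):
    """Reassign `count` unpinned processors to `team` and pin them (the override)."""
    grabbed = [c for c in cpus if not c.pinned][:count]
    for c in grabbed:
        c.owner, c.mode, c.pinned = team, mode, True
    return grabbed

def rebalance(cpus, priorities):
    """Share the remaining unpinned processors among teams in proportion to priority."""
    free = [c for c in cpus if not c.pinned]
    total = sum(priorities.values())
    i = 0
    for team, prio in priorities.items():
        share = round(len(free) * prio / total)
        for c in free[i:i + share]:
            c.owner = team
        i += share
    for c in free[i:]:            # rounding leftovers stay with the last team
        c.owner = team

cpus = [Processor(i) for i in range(48)]
manual_override(cpus, "wyoming", 24)                      # grab capacity for Susan's run
rebalance(cpus, {"reservoir_sim": 2, "4d_seismic": 1})    # redistribute the rest by priority
```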
Numeric applications like these continue to drive up demand for our computing resources, sometimes challenging our capacity. After all, the need for fast, high quality results keeps growing. That drives up the size of numeric problems and data sets, and makes large-scale applications proliferate. It's also what keeps Energy Corp. competitive.
With so many processors crunching the Wyoming project's prestack depth migration in parallel, the run took only 48 hr. From what I hear, by 2008 we'll be running the same-size job in less than a day.
In any case, the SPP system delivered a depth volume to Sam's and Susan's workstations. Together they worked through the interpretation.
Your students probably know that the Overthrust Belt's geology looks like a Chinese puzzle. And my team was working with scanty well control. Naturally, they had a hard time tying the seismic to the well control.
However, the intensive computation had done a good job of iterating on the complex velocities, providing a high degree of coherency. And the geologic model was basically sound. Susan and Sam encountered areas within the volume that weren't clearly imaged, but these places were relatively few.
To clear up the unfocused parts of the volume - especially around our prospective target - the team studied the migrated gathers. They found large residual moveouts, so they knew some of the velocities weren't right and events weren't properly migrated.
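In case your students haven't seen a flatness check, a crude version looks something like the Python below: pick the strongest sample per offset trace in a depth window and measure how far the picks spread. The picking-by-maximum and the tolerance are purely illustrative, not how our actual tools work.

```python
# Minimal sketch of a residual-moveout check on a depth-migrated common-image gather:
# a correctly migrated event sits at the same depth on every offset trace, so the
# spread of picked depths across offsets is a rough residual-moveout measure.
import numpy as np

def residual_moveout(gather, dz, window):
    """Pick the strongest sample per trace inside a depth window; return the depth spread."""
    z0, z1 = window
    k0, k1 = int(z0 / dz), int(z1 / dz)
    picks = k0 + np.argmax(np.abs(gather[:, k0:k1]), axis=1)   # one pick per offset trace
    depths = picks * dz
    return depths.max() - depths.min()                          # meters of residual moveout

gather = np.random.randn(60, 2000)          # 60 offsets, samples every dz meters
rmo = residual_moveout(gather, dz=5.0, window=(2500.0, 2700.0))
if rmo > 20.0:                              # illustrative tolerance in meters
    print(f"Residual moveout {rmo:.0f} m: velocities need updating before remigration")
```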
Using vertical updating and global tomography tools on a workstation, Susan and Sam refined their velocity model. They remigrated the volume and this time the SPP system delivered a solution much faster because it didn't have to run so many approximations to achieve coherency.
Susan and Sam examined the new image. There were substantial improvements, but the area around our prospective target still lacked clarity. The team again applied their updating and tomography tools, adjusted velocities, and submitted the full volume to a prestack depth migration. This time the volume imaged clearly throughout the critical areas. They could slice through the imaged volume in virtually any direction and see a sharp image.
The geology now looked right and the tops tied with our well control. The team had nailed it. And they'd found a significant bright spot that looked promising in a trap with closure.
As soon as this phase of work was completed, I kept my promise by quickly returning the additional processors to the teams that nominally "owned" them. Susan and Sam ran a de-tuning application on the entire data set to check the possibility that their bright spot was only a seismic artifact - and I know they breathed a sigh of relief 15 min later when they saw their target show up clearly again on the screen.
Their voxel-based 3D visualization tools provided a quick assessment of the prospective target's volume. Running a frequency analysis on the seismic showed acceptable porosity.
Sam called in Enrico, a young petrophysicist based in Buenos Aires. Enrico accessed the well data - encrypted of course - via the Internet. He studied the porosity measures, comparing them with production records and our porosity estimate derived from the seismic.
Sam and Enrico then videoconferenced from their workstations. Together they fine-tuned the petrophysical evaluation, working concurrently on the data that appeared on both their screens. Finally Enrico pronounced the prospect feasible from his point of view - another hurdle overcome.
The lease sale now loomed. Susan and Sam rushed through a writeup of their recommendations, polished their 3D image animation, and scheduled a presentation. Management bought the interpretation and, scant hours before deadline, submitted a bid electronically to the Minerals Management Service. Energy Corp.'s sharp bidding edged out the competition - I believe without leaving anything extra on the table.
Sam and Susan are now working with a development team to prioritize initial drilling sites. The team has already plunged into reservoir simulation and analysis. If the mild winter holds, we'll spud the first well by February. I'll let you know how it turns out.
If you need any further details about this case, please feel free to contact me. Best of luck with your classes.
The authors believe many readers will consider the world this letter describes to be anything but farfetched. Some may consider it overly conservative.
We can make some educated guesses regarding E&P in 2005 by extrapolating from yesterday's lessons and today's emerging trends.
Exploration teams will continue to find and characterize most of the significant oil traps the available technology permits them to discover. Development teams will aggressively drill up the most economically recoverable reserves.
On average, each year the world oil supply will consist of smaller, harder-to-find traps and less-economic reserves. Exploration will become riskier and more challenging.
Still, a generally abundant energy supply should keep oil prices in check. This will squeeze profitability and make it tougher to compete. Large oil companies will continue to "smartsize," each focusing on its core competencies and outsourcing other functions. This will create opportunities for smaller independents, service companies, and entrepreneurial geoscientists and teams.
COMPETITION AND TECHNOLOGY
These competitive pressures will drive oil companies to continue seeking the most advanced technologies for exploration and reservoir engineering. Ever more fine-grained, accurate 3D models, processed on scalable servers and visualized on desktop workstations, will dominate efforts to characterize and image the subsurface.
Higher-resolution 3D seismic surveys will proliferate, and coverage will continue to expand. Oil companies will increasingly reshoot seismic over developmental fields to monitor conditions and drainage. Well data also will proliferate and will be used more effectively. This will fuel the continuing E&P data explosion and drive the development of affordable information processing and data communication, storage, and management tools.
Each year processor technology will continue to deliver more performance while the processor unit cost remains flat or decreases. (An industry organization projects that by 2005, processors will cycle at about 500 MHz, potentially yielding a peak performance of some 2.5 gigaflops.)
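The arithmetic behind that projection is straightforward if one assumes roughly five floating-point results per clock cycle - a plausible figure for a superscalar RISC processor with several pipelined floating-point units, though the assumption is ours, not the organization's:

$$500\ \text{MHz} \times 5\ \tfrac{\text{flops}}{\text{cycle}} = 2.5 \times 10^{9}\ \tfrac{\text{flops}}{\text{s}} = 2.5\ \text{gigaflops}$$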
Software vendors will continue to gain proficiency developing parallel processing techniques while computer architectures become more efficient at processing parallel algorithms. The abundance, variety, and cost/performance of parallel applications will escalate. Run times for 3D seismic workflow will plunge.
Extremely powerful workstations and personal computers will reside on virtually all geoscience and engineering desktops. Easy-to-use application software for seismic processing, analysis, interpretation, and reservoir management and simulation will reside on the desktop. These applications will transparently share data and will form a seamless, integrated suite.
Lines between acquisition, seismic processing, and interpretation - and between geoscience disciplines - will blur. Using sophisticated desktop tools, a talented scientist or small team can make all the value judgments needed to fine-tune a processing sequence, generate a velocity model, and interpret the stacked data.
Some large E&P organizations will control costs by reducing professional staff, attracting top talent, and sharing profitability project-by-project with the most successful scientists and teams. These leading talents may operate as employees or independent contractors sharing substantial performance-based incentives.
Geoscientists and engineers will transparently access vast compute power. These professionals won't know or care whether their numeric problems are "crunched" by a local workstation or a background resource - any more than they care about where the electricity comes from when they turn on a light.
Data standards for application inter-operability and a universal data model will mature and be widely adopted. This will enable information systems to transparently translate among formats and data types. Advances in data compression, communication bandwidth, file management, and storage will combine with standards to enable transparent desktop data-browsing and retrieval.
Data storage and portable media technology also will advance, pushing data density well beyond today's 10-175 GB capacities. A half-terabyte 3D data set may travel from acquisition site to processing center on a single cartridge.
At last, professionals will spend most of their time adding value to projects rather than chasing data. Resulting productivity boosts and net cost savings will be substantial.
All these factors will heat up competition among E&P companies by slashing the typical time to solution and will reduce risks by increasing the quality of decisions. This will mean sharper bidding on leases, sharper bargaining among prospective partners, and more effective exploration decisions and asset management.
QUALITY AND SPEED
The E&P computing world of 2005 can be visualized with a fair degree of confidence based on rational extrapolation from current trends. That world will depend on powerful desktop workstations backed by a central "reservoir" of compute cycles transparently providing high capacity for numeric computation and data management.
This infrastructure will combine with a universal data model, easy-to-use application software, an operating system that readily handles very large files, robotic storage with a robust data browser, and other system features.
These features together will enable geoscientists and engineers to quickly find and use large data sets and quickly iterate models to produce very high quality recommendations on tight deadlines. This in turn will make E&P organizations more competitive than ever.
Copyright 1995 Oil & Gas Journal. All Rights Reserved.