Beyond the GPU: Rethinking Seismic Compute with Dataflow

Seismic imaging workloads are increasingly limited by GPU architecture. Discover how dataflow computing allows for higher utilization, greater efficiency, and scalable performance in advanced seismic simulations and exploration workflows.
March 11, 2026
6 min read

Key Highlights

  • Seismic workloads are memory-bound, leaving GPU compute units idle much of the time.
  • Dataflow computing enhances utilization and efficiency for seismic HPC workloads.
  • Maverick-2 accelerates seismic applications without costly code rewrites.
  • Improved performance-per-watt lowers operational cost in seismic processing centers.
  • Dataflow architectures scale better for advanced seismic modeling like FWI.

Elad Raz, Founder & CEO, NextSilicon

The global energy transition is well underway. Renewable sources are expanding, electrification is accelerating, and investment in sustainable infrastructure is reaching new heights. Yet the reality is clear. Oil and natural gas will remain essential to the global economy for decades to come. That makes efficient, responsible resource extraction more important than ever. At the heart of this effort lies high-performance computing (HPC) and the seismic simulations that guide exploration, production, and refinement decisions with multi-billion-dollar implications.

The high-performance systems running seismic simulations such as reverse time migration (RTM), full-waveform inversion (FWI), and other advanced modeling processes must evolve to support smarter extraction: reducing environmental impact, maximizing reservoir recovery, minimizing operational risk, and making every barrel count.

But the computational infrastructure supporting these workloads is reaching its limits, not because we lack processing power, but because we’re using the wrong architecture for the job. 

The Seismic Cost of Seismic Compute

Seismic imaging is among the most computationally intensive applications in scientific computing. FWI can be hundreds to thousands of times more computationally expensive than RTM. As exploration moves to deeper water, more complex geology, and unconventional reservoirs, the need for high-fidelity models drives these computational requirements to new heights.

For the past decade, the industry has turned to GPU acceleration. And for good reason. GPUs have enabled dramatically higher floating-point throughput at attractive performance-per-dollar ratios. Today, the largest seismic processing centers run on thousands of GPUs, processing petabytes of survey data to generate the subsurface images that drive exploration decisions.

The scale of this challenge is staggering. A single high-resolution 3D FWI run on a leading supercomputer required roughly 25,000 GPU-hours across hundreds of GPUs (enough energy to power one average U.S. home for 15 months). For operators running these workloads in private data centers or the cloud, that translates directly into operational cost, power consumption, and infrastructure complexity.
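
As a rough back-of-the-envelope check on that comparison (the per-GPU power draw and household figures below are assumed round numbers, not measurements from the run in question):

    25,000 GPU-hours × ~0.55 kW per GPU ≈ 13,750 kWh
    13,750 kWh ÷ ~900 kWh per month for an average U.S. home ≈ 15 months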

Yet despite massive GPU deployments, many operators report that seismic workloads achieve only 50% of theoretical GPU performance. Expensive accelerators sit partially idle while energy consumption and cooling demands continue to climb.

The problem isn’t a lack of compute. It’s an architectural mismatch.

Why GPUs Struggle with Seismic Compute

Over the past decade, GPUs have become the accelerator of choice for HPC. Their high floating-point throughput and strong performance-per-dollar made them attractive for scientific workloads.

But seismic imaging isn’t primarily compute-bound. It’s memory-bound.

Seismic wave propagation algorithms, which calculate values at each grid point based on neighboring points, repeatedly move large volumes of data between memory and processing units. Every set of calculations requires fetching data from neighboring grid points, processing it, and then repeating this cycle billions of times. The arithmetic itself is relatively simple; the bottleneck is data movement.
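
To make that access pattern concrete, here is a minimal C sketch of one acoustic wave-equation time step (second order in time and space). It is purely illustrative: the array names, grid layout, and coefficients are assumptions for the example, not taken from any production seismic code.

```c
/* Illustrative acoustic wave update:
 * p_next = 2*p_curr - p_prev + (v*dt)^2 * laplacian(p_curr).
 * All arrays are nx*ny*nz; c2dt2 holds (velocity*dt)^2 per cell; inv_h2 = 1/h^2. */
#include <stddef.h>

#define IDX(i, j, k) ((size_t)(i) * ny * nz + (size_t)(j) * nz + (size_t)(k))

void wave_step(float *p_next, const float *p_curr, const float *p_prev,
               const float *c2dt2, int nx, int ny, int nz, float inv_h2)
{
    for (int i = 1; i < nx - 1; i++)
        for (int j = 1; j < ny - 1; j++)
            for (int k = 1; k < nz - 1; k++) {
                /* Seven reads of the current wavefield for one output point. */
                float lap = (p_curr[IDX(i + 1, j, k)] + p_curr[IDX(i - 1, j, k)]
                           + p_curr[IDX(i, j + 1, k)] + p_curr[IDX(i, j - 1, k)]
                           + p_curr[IDX(i, j, k + 1)] + p_curr[IDX(i, j, k - 1)]
                           - 6.0f * p_curr[IDX(i, j, k)]) * inv_h2;
                p_next[IDX(i, j, k)] = 2.0f * p_curr[IDX(i, j, k)]
                                     - p_prev[IDX(i, j, k)]
                                     + c2dt2[IDX(i, j, k)] * lap;
            }
}
```

Each output value touches seven points of the current wavefield plus one value each from the previous wavefield and the velocity model, yet performs only about a dozen floating-point operations; the loop spends most of its time streaming data, not computing.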

GPU manufacturers optimize their architectures for workloads where data, once loaded, can be reused heavily for computation. In memory-bound seismic workloads, that reuse is limited. The result: powerful compute units spend much of their time waiting for data to arrive from memory.
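
A rough arithmetic-intensity estimate for the stencil sketched above shows the imbalance (all figures are assumed round numbers for illustration): even with good cache reuse of neighboring points, each grid point moves roughly 16–24 bytes between memory and the chip for about a dozen floating-point operations.

    ~12 FLOPs per grid point ÷ ~24 bytes moved ≈ 0.5 FLOP per byte

Modern data-center GPUs typically need an arithmetic intensity on the order of 10–30 FLOP per byte before compute, rather than memory bandwidth, becomes the limiting factor, so at roughly 0.5 FLOP per byte the arithmetic units spend most of their cycles waiting.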

At the same time, new GPU generations increasingly prioritize AI workloads, where lower-precision operations dominate. Seismic modeling, by contrast, relies heavily on high-precision FP64 performance. This creates what many operators describe as an “AI tax”: paying for silicon and power optimized for workloads that don’t align with seismic needs.

The industry has responded by scaling out: more GPUs, larger clusters, higher power draw. But simply building bigger GPUs or faster memory doesn't solve this fundamental problem; it just moves the bottleneck.

The Dataflow Approach

Dataflow computing inverts the traditional compute paradigm. Rather than fetching instructions and moving data to processing units, data itself drives computation. Operations execute as soon as their input data becomes available, with results flowing directly to downstream operations without returning to main memory.
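
A loose software analogy illustrates the inversion (this is a conceptual sketch in plain C, not a depiction of how Maverick-2 or its toolchain actually represent programs): in the staged form below, every intermediate result makes a round trip through memory; in the chained form, each value flows directly from producer to consumer.

```c
/* Conceptual contrast only: three per-sample operations applied two ways. */

/* Control-flow style: each stage writes a full intermediate array back to
 * memory before the next stage reads it again. */
void staged(const float *in, float *tmp1, float *tmp2, float *out, int n)
{
    for (int i = 0; i < n; i++) tmp1[i] = in[i] * 0.5f;       /* stage 1 */
    for (int i = 0; i < n; i++) tmp2[i] = tmp1[i] + 1.0f;     /* stage 2 */
    for (int i = 0; i < n; i++) out[i]  = tmp2[i] * tmp2[i];  /* stage 3 */
}

/* Dataflow-style chaining: each value moves straight from one operation to
 * the next and never returns to main memory between stages. */
void chained(const float *in, float *out, int n)
{
    for (int i = 0; i < n; i++) {
        float a = in[i] * 0.5f;  /* stage 1 */
        float b = a + 1.0f;      /* stage 2 */
        out[i]  = b * b;         /* stage 3 */
    }
}
```

On a dataflow fabric, that producer-to-consumer chaining is the execution model itself rather than a hand-applied optimization.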

NextSilicon built its Maverick-2 Intelligent Compute Accelerator around this dataflow principle and has deployed it at leading research institutions and energy companies worldwide.

Maverick-2 represents a fundamental rethinking of computing hardware, addressing the bottlenecks of traditional GPU architectures with a software-defined, dataflow architecture that delivers the efficient performance at scale that seismic and other HPC workloads require.

Instead of relying on fixed hardware cores optimized for general-purpose instruction handling, Maverick-2 uses a flexible compute fabric configured by software to match the workload's structure. The compiler analyzes existing C, C++, or Fortran seismic applications and maps them to an optimized dataflow execution model — without requiring application rewrites.

This is critical for seismic processing organizations that have invested decades in algorithm development. Porting and re-optimizing code for each new hardware generation consumes significant engineering effort. With Maverick-2, existing applications compile directly to optimized execution, allowing teams to focus on advancing geophysical science rather than low-level hardware tuning.

Production deployments at research institutions and national laboratories have validated the architecture’s efficiency gains, with operators reporting significantly improved throughput on memory-bound seismic workloads compared to their existing GPU infrastructure.

What This Means for Operators

For oil and gas operators, the practical benefits of a dataflow architecture are tangible:

  • Higher Effective Utilization: Architectures aligned with memory-bound workloads deliver more sustained throughput relative to peak capability. This translates into more useful work per deployed accelerator.
  • Improved Performance per Watt: Energy efficiency is increasingly critical in large-scale seismic processing environments. Reducing idle cycles and unnecessary data movement improves throughput per watt, lowering operational costs and reducing carbon intensity.
  • Reduced Optimization Burden: GPU optimization often requires extensive kernel tuning and architecture-specific engineering. A compiler-driven dataflow approach shifts the focus back to advancing geophysical science rather than managing low-level hardware complexity.
  • Scalability for Advanced Methods: As elastic FWI, anisotropic operators, and higher-resolution models become standard, computational demands will continue to grow. An architecture designed around dataflow efficiency scales more naturally with these evolving requirements.

Enabling Smarter Resource Development

The energy transition to renewables will unfold over decades, and hydrocarbon resources will remain essential throughout this period. The imperative to extract these resources responsibly, with greater precision, lower environmental impact, and better economic efficiency, only intensifies.

The computational infrastructure supporting this effort has reached an architectural limit. Traditional processors, including modern GPUs, are constrained by data movement architectures designed for general-purpose computing rather than the memory-intensive patterns of seismic simulation.

Dataflow computing offers a path forward. By inverting the relationship between data and computation, Maverick-2 delivers the performance seismic workflows actually need, without requiring organizations to abandon their algorithmic investments or accept vendor lock-in.

The wall that has constrained seismic computing isn’t insurmountable. Getting past it requires thinking beyond the GPU.

Learn more about the NextSilicon architecture unveiled at its recent tech launch.
