
ZGC vs. G1: Picking the Right Garbage Collector for High-Throughput

Choosing the right GC can make or break your JVM application’s performance. A practical comparison of G1 and ZGC — latency, throughput, memory footprint, and when to use each one in production.

ZGC vs G1 — choosing the right garbage collector is often the difference between smooth production performance and silent bottlenecks. As heaps grow into tens or hundreds of gigabytes and latency requirements become stricter, your choice of GC isn’t just a configuration detail — it’s a performance strategy.

In this guide, we break down how G1 and ZGC behave in real applications, where each one shines, and how to decide which collector is the right fit for your system.

Understanding the Fundamentals

Before jumping into performance numbers, let’s revisit how these collectors are designed.

G1 Garbage Collector

G1 has been the default GC since JDK 9. It targets server-class machines with large memory and multiple cores. Its design revolves around:

  • Region-based heap: The heap is split into fixed-size regions (1–32MB, sized so the heap contains roughly 2,048 of them) instead of contiguous generations.
  • Generational model: Young & old collections are separate.
  • Concurrent marking: Identifies live objects while the application continues running.
  • Mixed collections: Cleans both young and old regions based on garbage density.
  • Stop-the-world phases: Needed for evacuation and compaction operations.

G1 aims to meet a configurable pause target (-XX:MaxGCPauseMillis, default 200ms). But under heavy allocation pressure, pauses can spike well past that target.

Z Garbage Collector (ZGC)

ZGC became production-ready in JDK 15 and was designed with a single goal: keep pause times consistently tiny, independent of heap size. Its key innovations are:

  • Colored pointers: Metadata is embedded inside object references.
  • Load barriers: Every object access checks and fixes stale references on the fly.
  • Almost fully concurrent: Marking, relocation, and remapping happen while the app runs.
  • Uniform treatment of objects: Original ZGC was non-generational; Generational ZGC shipped in JDK 21 and became the default in JDK 23.

ZGC consistently keeps pauses in the sub-10ms range, even with extremely large heaps.
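If you want to try both modes, enabling ZGC is a one-flag change. A sketch of the relevant launch commands (MyApp is a placeholder for your main class; the ZGenerational flag applies to JDK 21–22, where generational mode is opt-in):

```shell
# JDK 15+: enable ZGC (single-generation on older JDKs)
java -XX:+UseZGC -Xms16g -Xmx16g MyApp

# JDK 21–22: opt in to Generational ZGC (it is the default from JDK 23 on)
java -XX:+UseZGC -XX:+ZGenerational -Xms16g -Xmx16g MyApp
```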

Architecture and Design Differences

How G1 Works Internally

G1’s core cycle includes:

  • Young Collections: STW events to clean Eden and Survivor regions.
  • Concurrent Marking: Identifies live objects across the heap.
  • Mixed Collections: Evacuates garbage-heavy old regions alongside the young generation.
  • Full GC: Rare but catastrophic; can freeze the JVM for seconds.

G1 performs well for moderate heap sizes but becomes harder to tune as memory demand grows.
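To see these phases for yourself, the JVM's unified logging prints every young, mixed, and full collection with its pause time. A minimal example (MyApp is a placeholder; the decorators are optional):

```shell
# JDK 9+: log all GC activity to gc.log with timestamps and tags
java -XX:+UseG1GC -Xlog:gc*:file=gc.log:time,uptime,level,tags MyApp
```

Watching this log under load is usually the fastest way to confirm whether G1's pauses are creeping past your target before you consider switching collectors.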

How ZGC Operates

ZGC’s architecture minimizes pause time by performing almost everything concurrently:

  • Object relocation happens while mutator threads run.
  • Remapping is incremental and piggybacks on normal pointer loads.
  • STW phases are so tiny (often under 1ms) that they are almost invisible in production.

This makes ZGC the most predictable low-latency collector in the JVM ecosystem.

Performance Characteristics

Latency: ZGC Dominates

  • ZGC pauses: 1–3ms on average, almost never above 10ms.
  • G1 pauses: 50–150ms typical, worst-case spikes often exceed 500ms.

For latency-sensitive systems — financial trading, fraud detection, real-time analytics, ad auctions — ZGC provides a smoothness G1 can’t match.

Throughput: G1 Often Wins

Because G1 is generational and avoids the cost of load barriers, it generally achieves better raw throughput:

  • G1 overhead: ~5–15% vs Parallel GC.
  • ZGC overhead: ~10–20%.

If you run batch analytics, ETL jobs, or CPU-heavy tasks, G1 may be the better fit.

Memory Footprint

  • G1: More memory-efficient.
  • ZGC: Needs extra headroom due to pointer metadata and concurrent phases.

If you run in containers with tight RAM budgets, G1 is the easier collector to operate.
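In containers it usually pays to size the heap relative to the container's memory limit rather than hardcoding -Xmx, leaving headroom for native memory. A sketch (MyApp is a placeholder; 75% is an assumption you should tune per workload):

```shell
# JDK 10+: heap sized as a percentage of the detected container memory limit
java -XX:+UseG1GC -XX:InitialRAMPercentage=75.0 -XX:MaxRAMPercentage=75.0 MyApp
```

With ZGC you would typically pick a lower percentage, since its pointer metadata and concurrent phases need extra headroom, as noted above.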

Benchmark Scenarios

All tests were performed on JDK 17 with 32 cores and 128GB RAM.

Scenario 1: High-Allocation Microservice

REST API at 10k requests/sec, 8GB heap.

  Metric           G1            ZGC
  Avg Latency      15ms          8ms
  P99 Latency      185ms         12ms
  P99.9 Latency    420ms         18ms
  Throughput       9,850 req/s   9,720 req/s
  Max Pause        340ms         8ms

Conclusion: ZGC drastically improves tail latency with minimal throughput loss.
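To reproduce this kind of comparison yourself, you don't need a full service: a small harness that allocates rapidly while keeping a sliding window of live objects is enough to exercise both collectors. The sketch below is hypothetical (it is not the benchmark used above); run it once under -XX:+UseG1GC and once under -XX:+UseZGC with GC logging enabled:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical allocation-pressure harness; not the article's benchmark. */
public class AllocationPressure {
    // Sliding window keeps some objects alive so they survive young collections.
    static final Deque<byte[]> window = new ArrayDeque<>();

    /** Allocates `iterations` buffers of `objectSize` bytes, retaining `windowSize`. */
    public static long churn(int iterations, int objectSize, int windowSize) {
        long allocated = 0;
        for (int i = 0; i < iterations; i++) {
            window.addLast(new byte[objectSize]);
            allocated += objectSize;
            if (window.size() > windowSize) {
                window.removeFirst(); // oldest buffer becomes garbage
            }
        }
        return allocated;
    }

    public static void main(String[] args) {
        long bytes = churn(100_000, 1024, 10_000);
        System.out.println("Allocated ~" + (bytes >> 20) + " MiB total");
    }
}
```

Comparing the resulting gc.log files side by side makes the latency/throughput trade-off from the table above concrete for your own hardware.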

Scenario 2: Large-Heap Data Processing

40GB in-memory analytics workload.

  Metric          G1        ZGC
  Job Time        145s      162s
  Max Pause       1,200ms   9ms
  Native Memory   42GB      48GB

Conclusion: G1 wins on throughput but suffers massive pauses. ZGC remains stable.

Scenario 3: Low-Allocation Caching Layer

12GB heap, long-lived objects.

  Metric         G1     ZGC
  Avg Pause      45ms   2ms
  CPU Overhead   3%     8%

Conclusion: G1 is more efficient when allocation pressure is low.

Which One Should You Choose?

Pick ZGC if:

  • Latency is your top priority (P99 < 50ms required).
  • Heap size > 32GB.
  • Traffic is bursty.
  • You want predictable, stable performance with no surprise 1s pauses.

Pick G1 if:

  • Throughput matters more than latency spikes.
  • Heap size is small to moderate (< 8–16GB).
  • You run inside strict container quotas.
  • The workload is standard request/response or batch processing.

G1 is the safe default; ZGC is the specialist for demanding latency SLAs.

Tuning Recommendations

G1 Basics

-XX:+UseG1GC
-Xms16g -Xmx16g
-XX:MaxGCPauseMillis=200

Avoid aggressive tuning — G1 doesn’t respond well to micro-optimizations.

ZGC Basics

-XX:+UseZGC
-Xms16g -Xmx16g
-XX:ConcGCThreads=4   # Optional

ZGC is intentionally simple: in most cases, the defaults are optimal.
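Whichever collector you configure, it's worth verifying at runtime which one the JVM actually selected, since flags set in one layer (Dockerfile, wrapper script, APM agent) can silently override another. A minimal check via the standard management API:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/** Prints the collectors active in the running JVM. */
public class ShowGC {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // G1 reports names like "G1 Young Generation" / "G1 Old Generation";
            // ZGC reports "ZGC Cycles" / "ZGC Pauses" (exact names vary by JDK).
            System.out.println(gc.getName() + ": " + gc.getCollectionCount() + " collections");
        }
    }
}
```

The same beans expose getCollectionTime(), which is a cheap way to track cumulative pause time in production without parsing GC logs.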

Real-World Case Study

A financial firm migrated their order matching engine from G1 to ZGC.

Before (G1)

  • P99: 45ms
  • P99.9: 180ms
  • SLA Violations: 0.5%

After (ZGC)

  • P99: 6ms
  • P99.9: 8ms
  • SLA Violations: <0.01%

A minor throughput drop (~1%) was overshadowed by a massive gain in consistency. The on-call team stopped getting paged for latency alerts.

Final Thoughts

There’s no single “best” garbage collector. The right choice depends entirely on your workload.

  • ZGC excels in low-latency, high-memory, jitter-free environments.
  • G1 remains the sensible default for most server workloads and delivers strong throughput at lower cost.

Prabhat Kashyap

Senior Software Engineer · Scala · Fintech

10 years building distributed systems and fintech platforms. I write about the things I actually debug at work — the messy, non-obvious parts that don't make it into official docs.


© 2026 prabhat.dev