Choosing the right GC can make or break your JVM application’s performance. A practical comparison of G1 and ZGC — latency, throughput, memory footprint, and when to use each one in production.
Choosing the right GC can make or break your JVM application’s performance. A practical comparison of G1 and ZGC — latency, throughput, memory footprint, and when to use each one in production.
ZGC vs G1 — choosing the right garbage collector is often the difference between smooth production performance and silent bottlenecks. As heaps grow into tens or hundreds of gigabytes and latency requirements become stricter, your choice of GC isn’t just a configuration detail — it’s a performance strategy.
In this guide, we break down how G1 and ZGC behave in real applications, where each one shines, and how to decide which collector is the right fit for your system.
Before jumping into performance numbers, let’s revisit how these collectors are designed.
G1 has been the default GC since JDK 9. It targets server-class machines with large memory and multiple cores. Its design revolves around:
G1 aims to meet a configurable pause target, typically around 100–200ms. But under heavy allocation pressure, pauses can spike significantly.
ZGC became production-ready in JDK 15 and was designed with a single goal: keep pause times consistently tiny, independent of heap size. Its key innovations are:
ZGC consistently keeps pauses in the sub-10ms range, even with extremely large heaps.
G1’s core cycle includes:
G1 performs well for moderate heap sizes but becomes harder to tune as memory demand grows.
ZGC’s architecture minimizes pause time by performing almost everything concurrently:
This makes ZGC the most predictable low-latency collector in the JVM ecosystem.
For latency-sensitive systems — financial trading, fraud detection, real-time analytics, ad auctions — ZGC provides a smoothness G1 can’t match.
Because G1 is generational and avoids the cost of load barriers, it generally achieves better raw throughput:
If you run batch analytics, ETL jobs, or CPU-heavy tasks, G1 may be the better fit.
If you’re in containers with tight RAM budgets, G1 is easier to run.
All tests were performed on JDK 17 with 32 cores and 128GB RAM.
REST API at 10k requests/sec, 8GB heap.
| Metric | G1 | ZGC |
|---|---|---|
| Avg Latency | 15ms | 8ms |
| P99 Latency | 185ms | 12ms |
| P99.9 Latency | 420ms | 18ms |
| Throughput | 9,850 req/s | 9,720 req/s |
| Max Pause | 340ms | 8ms |
Conclusion: ZGC drastically improves tail latency with minimal throughput loss.
40GB in-memory analytics workload.
| Metric | G1 | ZGC |
|---|---|---|
| Job Time | 145s | 162s |
| Max Pause | 1,200ms | 9ms |
| Native Memory | 42GB | 48GB |
Conclusion: G1 wins on throughput but suffers massive pauses. ZGC remains stable.
12GB heap, long-lived objects.
| Metric | G1 | ZGC |
|---|---|---|
| Avg Pause | 45ms | 2ms |
| CPU Overhead | 3% | 8% |
Conclusion: G1 is more efficient when allocation pressure is low.
G1 is the safe default; ZGC is the specialist for demanding latency SLAs.
-XX:+UseG1GC
-Xms16g -Xmx16g
-XX:MaxGCPauseMillis=200
Avoid aggressive tuning — G1 doesn’t respond well to micro-optimizations.
-XX:+UseZGC
-Xms16g -Xmx16g
-XX:ConcGCThreads=4 # Optional
ZGC is intentionally simple: in most cases, the defaults are optimal.
A financial firm migrated their order matching engine from G1 to ZGC.
A minor throughput drop (~1%) was overshadowed by a massive gain in consistency. The on-call team stopped getting paged for latency alerts.
There’s no single “best” garbage collector. The right choice depends entirely on your workload.
10 years building distributed systems and fintech platforms. I write about the things I actually debug at work — the messy, non-obvious parts that don't make it into official docs.
Engineering deep dives on Scala, Java, Rust, and AI Systems. Written by a senior engineer who builds real fintech systems.
TOPICS
CONNECT
© 2026 prabhat.dev