TL;DR: TLS handshake latency is the wall-clock time between a browser opening a TCP connection to your server and the encrypted tunnel being ready for use. Healthy modern setups finish in 100-300ms. Multi-second handshakes show up as cold-load slowness in Largest Contentful Paint and almost always trace back to five well-known causes, all of them fixable.

What is it?

When a browser visits https://yourdomain.com, it can’t send the HTTP request immediately. First it has to open a TCP socket, then it has to negotiate the TLS tunnel that will encrypt everything that follows. The handshake is that negotiation. It’s where the server proves its identity, where both sides agree on a fresh symmetric key, and where the cryptographic groundwork for the entire session gets done.

There’s a network budget and a CPU budget:

  • TLS 1.2 takes 2 full round trips between client and server before application data can flow. On a typical internet connection that’s roughly 100-200ms per round trip — call it 200-400ms just in network time.
  • TLS 1.3 cuts the handshake to a single round trip. Same network conditions, half the wall-clock latency. (And TLS 1.3 also supports 0-RTT resumption for repeat visitors, but that’s a separate optimization.)
  • CPU on the server. The server has to sign something with its private key during the handshake. RSA signatures are slow — hundreds of microseconds to single-digit milliseconds on commodity hardware. ECDSA signatures on the same hardware run roughly 5-10x faster.
  • OCSP fetch. If your server isn’t stapling its OCSP response (see /learn/ocsp-stapling), the browser may have to fetch revocation status from the CA mid-handshake — adding another round-trip to a server you don’t control.

A healthy modern handshake on a well-tuned origin behind a CDN finishes in 100-300ms. A slow one — distant origin, stapling off, RSA cert on a constrained box — can stretch to 1-3 seconds. That’s the gap WQI is grading.

Why does it matter?

Two reasons, both visible to real users:

Cold-load LCP. Every first-time visitor pays the handshake cost on the very first connection to your origin. That cost lands directly in the connection-setup portion of Largest Contentful Paint, the flagship Core Web Vital. A 1.5-second handshake doesn’t just feel slow — it pushes your LCP into the “needs improvement” or “poor” buckets that Google tracks for ranking and that real users feel as a slow-loading page.

Background API costs. Server-to-server clients (mobile apps, headless services, partner integrations) hit the same handshake on every cold connection. If they don’t pool or reuse connections — and many badly written clients don’t — they pay the latency on every single poll.

The good news: a slow handshake is almost always a correctable operational problem rather than a fundamental constraint of your business. Stapling off. Origin in us-east while users are in Sydney. RSA cert on a CPU-bound box. No session resumption. No HTTP/2 + ALPN. These are config fixes, not rewrites.

Real-world analogy

Handshake latency is the time between pressing a doorbell and the door actually opening. Sub-second is fine. Multi-second feels broken.

The fix is usually somewhere in the doorbell circuit (your CDN — is the bell wired to a button right at the gate, or to a dispatcher in another country?), the door’s lock mechanism (cert algorithm — RSA deadbolts are heavier to operate than ECDSA ones), or whether you’re calling the locksmith every time a guest arrives (OCSP non-stapling).

When the door opens fast, nobody notices. When it doesn’t, every guest standing on the porch starts wondering whether anyone’s home.

The 5 common slow-handshake causes

In rough order of how often they trip up the sites WQI scans:

  • No OCSP stapling. Browsers want to verify the cert hasn’t been revoked. Without stapling, that means an extra round-trip to the CA’s responder during the handshake — sometimes fast, sometimes not, and never under your control. See /learn/ocsp-stapling for the full picture and the one-line fixes for Nginx, Apache, Caddy, and Cloudflare.
  • RSA cert on a CPU-bound box. RSA signature operations are slow — they involve modular exponentiation on 2048-bit numbers. ECDSA signatures on the same hardware run roughly 5-10x faster. If your origin shares CPU with a busy app server, switching to ECDSA shaves visible time off every handshake.
  • Geographically distant origin. TLS 1.2 needs 2 round trips. If your origin lives in us-east and your visitor is in Sydney, the best-case round trip is around 280ms — so the handshake alone is 560ms+ of pure network time before any content moves. A CDN in front of the origin terminates TLS close to the user instead.
  • No session resumption. TLS 1.3 makes resumption nearly automatic via session tickets. TLS 1.2 deployments without a proper session cache hit cold every time, paying the full handshake cost even for repeat visitors within the same session.
  • No HTTP/2 + ALPN. Without ALPN (Application-Layer Protocol Negotiation), the client and server negotiate the application protocol after the TLS handshake — adding another round-trip. Modern stacks handle this automatically; if yours doesn’t, you’re on a config from 2014.

How WQI measures this

We grade handshake latency as factor WEBQ-98 in the Performance category. The WQI prober opens a fresh TLS connection to your origin, records wall-clock time from TCP open to handshake complete, and buckets the result.

Result     | Threshold | What it means
Pass (100) | ≤ 500 ms  | Healthy edge / regional CDN territory.
Warn (60)  | ≤ 1500 ms | Acceptable but room to improve.
Fail (0)   | > 1500 ms | Page load takes the hit; users feel it.

A note on absolutes: this measurement is taken from our probe location, so the raw millisecond number depends on where we are relative to your origin. What matters for the score isn’t the exact number — it’s the tier. A site that’s serving us in 200ms is serving most users on similar paths in the same ballpark. A site that’s serving us in 1.8 seconds is in trouble for everyone outside its own datacenter.

What good looks like

Pass on WEBQ-98. Sub-500ms handshakes are the norm for any modern site behind a CDN. Cloudflare, Fastly, and CloudFront edge nodes will routinely hit this without any tuning at all — they’re specifically built to terminate TLS close to the user, with well-tuned crypto and stapling on by default.

If you’re failing or warning on WEBQ-98, the move is almost always “put a CDN in front of the origin” before chasing any of the smaller optimizations.

How to fix it

Quick wins, ranked by impact:

  1. Enable OCSP stapling. Eliminates one round-trip on every cold handshake. Free. One line of config in most servers. Walk-through at /learn/ocsp-stapling.
  2. Enable HTTP/2 + ALPN. Avoids the protocol-negotiation round-trip and unlocks multiplexing for free. Default on every modern web server; if you’re not on it, you’re on an old config.
  3. Switch from RSA to ECDSA. Roughly 5-10x faster signature operations on the server side. Most CAs (Let’s Encrypt included) issue ECDSA certs at no cost: just pass the right key type when you request the cert.
  4. Put a CDN in front of your origin. Cloudflare, Fastly, CloudFront, Bunny — TLS terminates at an edge node tens of milliseconds from the user instead of hundreds of milliseconds from your back-end in us-east. This is the single biggest lever for geographically distributed audiences.
  5. Enable TLS 1.3. Cuts the handshake from 2 RTT to 1 RTT. See /learn/tls-minimum-version-supported for the protocol-version side of the story.
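For a concrete picture, fixes 1, 2, 3, and 5 can all land in a single server block. A minimal sketch for Nginx — the domain and certificate paths are placeholders, and Apache and Caddy have direct equivalents:

```nginx
server {
    listen 443 ssl http2;                  # HTTP/2 + ALPN (fix 2)
    server_name yourdomain.com;

    ssl_protocols TLSv1.2 TLSv1.3;         # TLS 1.3 on, 1.2 kept for older clients (fix 5)

    # ECDSA certificate (fix 3); paths are placeholders
    ssl_certificate     /etc/ssl/certs/yourdomain-ecdsa.fullchain.pem;
    ssl_certificate_key /etc/ssl/private/yourdomain-ecdsa.key;

    ssl_stapling on;                       # OCSP stapling (fix 1)
    ssl_stapling_verify on;

    ssl_session_tickets on;                # session resumption for repeat visitors
}
```

Reload the server after editing and re-run a cold-handshake measurement to confirm the change landed.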

A quick measurement

You can sanity-check your own handshake from any shell with curl:

curl -w "tls=%{time_appconnect}s\n" -o /dev/null -s https://yourdomain.com

The time_appconnect variable is wall-clock time from the start of the request through completion of the TLS handshake. One nuance: each standalone curl invocation opens a fresh connection, so a single run is already a cold measurement. It’s when you fetch several URLs in one invocation that curl reuses the connection, and the later measurements look artificially fast. To force a cold handshake for every request in a multi-URL run, ask the server to close the connection after each response:

curl -H "Connection: close" -w "tls=%{time_appconnect}s\n" -o /dev/null -s https://yourdomain.com https://yourdomain.com

If your cold number is in the hundreds of milliseconds, you’re in good shape. If it’s in the seconds, walk down the list above.
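To see where a slow number comes from, curl can split the timeline into stages. In this sketch (placeholder host), tcp minus dns is roughly one network round trip, and tls minus tcp is the handshake itself:

```shell
# dns = name resolution done, tcp = TCP connect done, tls = TLS handshake done.
# Each value is cumulative wall-clock seconds from the start of the request.
curl -o /dev/null -s \
  -w "dns=%{time_namelookup}s tcp=%{time_connect}s tls=%{time_appconnect}s\n" \
  https://yourdomain.com
```

If tls dwarfs tcp, the cost is in the handshake (crypto, stapling, extra round trips); if tcp itself is large, the origin is simply far away.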

Common pitfalls

  • Measuring on a warm connection. Connection reuse hides the problem entirely. Always force a cold connection: a fresh curl invocation (or a Connection: close header when fetching multiple URLs in one run), and a hard reload in a fresh browser session rather than a long-lived tab.
  • Measuring from the same datacenter as your origin. The network round-trip is the dominant cost in TLS 1.2 handshakes. If you’re testing from a machine that’s on the same backbone as your origin, you’re not seeing what your users see. Test from at least one geographically distant location, or use the WQI prober’s number.
  • Trusting browser dev-tools “TLS” timing. Browser dev tools often show only a portion of the handshake — the time from ClientHello to handshake complete — and exclude ALPN negotiation, cert chain validation, and any OCSP fetch. The WQI number includes the whole budget.
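The first pitfall is easy to demonstrate: fetch the same URL twice in a single curl invocation. The second transfer reuses the first connection, so its reported handshake time collapses toward zero, which is exactly the artificially fast number you don't want to trust (placeholder host):

```shell
# Two transfers, one invocation: curl reuses the first connection for the
# second request, so the second tls= value drops to (near) zero.
curl -s -o /dev/null -o /dev/null \
  -w "tls=%{time_appconnect}s\n" \
  https://yourdomain.com https://yourdomain.com
```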

Further reading