Web Fundamentals: HTTP Caching

Let’s walk through the mechanics of HTTP caching. HTTP caching reduces latency by serving content from caches that are closer to the client, and it saves bandwidth, since no network traffic is required to serve a locally cached resource.

There are two types of caches: private caches and public caches.

A public cache is a shared cache that usually sits between the origin server and the user agent (browser). These public caches, or HTTP proxies, are commonly found at large corporations and ISPs. By default, public caches do not store responses to requests that use HTTP Authentication. Furthermore, intermediaries cannot cache HTTPS-encrypted traffic, since they cannot read it.

A private cache is located at the client and cannot be used by other clients; it’s usually the browser’s cache. Authenticated and encrypted requests are also subject to private caching unless stated otherwise. If you don’t want sensitive information (e.g. credit card details) stored on the user’s client, caching should be disabled (see Cache-Control: no-store).

Controlling caching with HTTP headers

With the HTTP header Cache-Control you can specify caching policies for requests and responses. A caching policy for a response could tell the user agent that the response must not be cached, or that caching is allowed by private caches only.

no-cache vs. no-store

The no-store directive advises the user agent and public caches not to store the response, so no local copy of the response is kept.

The no-cache directive forces caches to revalidate with the origin server before releasing a cached copy.

Let’s look at common Cache-Control header directives to control response caching:

  • no-store – disables caching; no local copies are stored.
  • no-cache – allows caching, but a request must be sent to the server for validation before a cached copy is used.
  • public – marks authenticated responses as cacheable. By default, authenticated responses are treated as private.
  • private – allows caching for a single user, usually in the user agent’s cache.
  • must-revalidate – instructs the cache to follow the defined freshness rules without exceptions. In some circumstances caches are allowed to serve stale content; this directive prevents that.
  • max-age=<seconds> – defines how long, in seconds, the response stays fresh after it was generated.
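
To make the directive syntax concrete, here is a minimal sketch of how a cache might parse a Cache-Control header value into its directives. The function name `parse_cache_control` is my own; real caches implement stricter parsing per the HTTP specification.

```python
# Hypothetical helper: split a Cache-Control value into a dict of
# directives, so a cache can inspect flags like no-store or max-age.
def parse_cache_control(value):
    directives = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, arg = part.partition("=")
            directives[name.lower()] = arg.strip('"')
        else:
            # Valueless directives (no-store, public, ...) become flags.
            directives[part.lower()] = True
    return directives

print(parse_cache_control("public, max-age=86400, must-revalidate"))
```

A response carrying `Cache-Control: public, max-age=86400, must-revalidate` would thus be stored by shared caches, considered fresh for one day, and never served stale after that.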

How not to control caching

  • HTML meta tags – proxies and other shared caches don’t parse HTML, so they ignore them.
  • Pragma HTTP headers – Pragma is an HTTP/1.0 header and is only defined for requests, so its behavior in responses is unreliable.

Cache validation

An ETag is a fingerprint of the resource’s content. If the cached version of a response has expired, the user agent sends the cached fingerprint along with the request. The server compares the fingerprints and can skip sending the body by returning a 304 “Not Modified” response instead of the actual (unchanged and thus already cached) content.
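
How the fingerprint is derived is up to the server; one common scheme is to hash the resource body. A minimal sketch, assuming a SHA-256 digest truncated for brevity (the helper name `make_etag` is hypothetical):

```python
import hashlib

# Sketch of a strong ETag: hash the resource body and quote the digest.
# Real servers may also mix in metadata such as the modification time.
def make_etag(body):
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

etag = make_etag(b"<h1>Hello</h1>")
print(etag)
```

The same body always yields the same ETag, and any change to the body changes the fingerprint, which is exactly what validation relies on.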

Request with If-None-Match header

To make a request conditional, the client sends the If-None-Match HTTP header with the cached ETag value. The server responds with a 200 “OK” if and only if the ETag sent with the request does not match the ETag of the current version of the resource.

If the If-None-Match condition fails, meaning the resource hasn’t changed, the HTTP server responds with a 304 “Not Modified” status.
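
The server-side decision can be sketched as a simple comparison (this ignores details such as weak validators and the `*` wildcard, which a real implementation must handle):

```python
# Sketch of the validation step: compare the client's If-None-Match
# value against the resource's current ETag and pick a status code.
def validation_status(if_none_match, current_etag):
    if if_none_match is not None and if_none_match == current_etag:
        return 304  # Not Modified: client may reuse its cached copy
    return 200      # OK: the full response body follows

print(validation_status('"abc123"', '"abc123"'))  # resource unchanged
print(validation_status('"abc123"', '"def456"'))  # resource changed
```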

An interesting fact about ETags is that they can be abused for user tracking. You’ll find more details in the ETag Wikipedia article.

Invalidation and update of cached resources

Your users deserve fast loading times and thus you’re extensively using caching with long expiration times. That’s great, but how do you make sure that your users get the latest and greatest updates of your web application?

To profit from caching while still making sure that new resources get loaded, you can change the filename whenever the file’s content changes. Usually a hash of the file’s content is computed and appended to the file name. This ideally happens at build time.
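
As an illustration, a build step might rename assets like this (a minimal sketch; the function name `hashed_name` and the 8-character digest length are my own choices):

```python
import hashlib
from pathlib import Path

# Cache busting: embed a short content hash in the file name so the
# URL changes whenever the content changes.
def hashed_name(filename, content):
    digest = hashlib.sha256(content).hexdigest()[:8]
    p = Path(filename)
    return f"{p.stem}.{digest}{p.suffix}"

print(hashed_name("app.js", b"console.log('v1');"))
```

Since the hashed URL changes with every content change, the file itself can be served with a very long max-age; the updated HTML simply references the new name.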

This approach only works if the HTML document is revalidated on each request; otherwise the new URLs never become visible to the client.

HTTP/2 and caching

The major advantage of HTTP/2 is the reuse of an existing TCP connection to transfer multiple resources instead of opening one TCP connection per request.

Caching works as in HTTP/1.1 and is mainly controlled by Cache-Control headers and ETags with conditional requests. When it comes to web performance optimization, HTTP/2 introduced two new features not present in HTTP/1.1: stream prioritization lets the user agent specify the order in which it wants to receive resources, and server push sends extra resources to the user agent before it knows that they are needed.

