In the introductory post of this series we talked about the N+1 problem, and in Part II we tackled deduplication and reuse techniques. Here we'll explore the role that caching and prefetching play in optimizing your GraphQL system.

How to avoid expensive retrieval and recomputation

Caching and prefetching are a couple of well-known techniques to keep local copies of frequently demanded data and to retrieve additional data in anticipation of future needs, respectively. Let's look at what we can do with these in the context of GraphQL.

Caching

Caching keeps local copies of frequently demanded data to avoid expensive retrievals and recomputation. In a GraphQL server, we can apply caching at many different levels:

  • Backend requests: caching responses to individual backend requests, such as HTTP requests and database queries.
  • GraphQL fields: caching responses in the context of the schema. For example, the response to a selection field that brings together data from multiple backends can be cached, so the same selection does not have to be reconstructed for a future request.
  • GraphQL operations: caching the response to an entire set of selections, keyed by the operation text and variables, since applications tend to send the same set of operations.
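To make the operation-level case concrete, here is a minimal sketch of a cache keyed by the operation text plus its serialized variables. The function names (`cache_key`, `execute_cached`) and the `execute` callback are illustrative, not part of any particular GraphQL library:

```python
import hashlib
import json

# Illustrative operation-level cache: maps a key derived from the
# operation text and variables to the full response.
_operation_cache = {}

def cache_key(operation_text, variables):
    """Build a stable key from the operation text and its variables."""
    payload = operation_text + json.dumps(variables, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def execute_cached(operation_text, variables, execute):
    """Return a cached response if one exists; otherwise execute and cache."""
    key = cache_key(operation_text, variables)
    if key not in _operation_cache:
        _operation_cache[key] = execute(operation_text, variables)
    return _operation_cache[key]
```

Because the key includes the serialized variables, the same operation sent with different variables is cached as a separate entry.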

Caching reduces the load on the backend while reducing latency for the frontend. It is like reuse, except the cached data spans requests, and with that come other scenarios that must be handled, such as invalidating cached results when the source data has been modified, and evicting cached items when the local storage is full. Fortunately, caching has been around for a long time, and there are many well-known techniques employed throughout the hardware and software stack that we can leverage.
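The two cross-request concerns mentioned above, invalidation and eviction, are commonly handled with a time-to-live plus a least-recently-used policy. As a sketch (the `TTLCache` class and its parameters are illustrative, not from any specific library):

```python
import time
from collections import OrderedDict

class TTLCache:
    """Illustrative cache with time-based invalidation and
    least-recently-used eviction when storage is full."""

    def __init__(self, max_items, ttl_seconds):
        self.max_items = max_items
        self.ttl = ttl_seconds
        self._items = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._items[key]      # invalidate the stale entry
            return None
        self._items.move_to_end(key)  # mark as recently used
        return value

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = (time.monotonic() + self.ttl, value)
        if len(self._items) > self.max_items:
            self._items.popitem(last=False)  # evict least recently used
```

A TTL is the simplest invalidation strategy; a production server would more likely invalidate explicitly when the source data changes, or use cache hints from the schema.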

Prefetching fields

Another well-known technique, prefetching, retrieves additional data in anticipation of future needs. In GraphQL, we can add fields to a selection if we recognize a pattern of those fields being subsequently requested.

For example, with a query of {author { name birthplace }} the backend request can be augmented to include other fields of the Author type, such as email, birth, and death. Then when a future query requests email (or any of the other prefetched fields), the cached value can be used rather than making another backend request.

For example, in a database, instead of executing the minimal

SELECT name, birthplace 
FROM authors 
WHERE name = 'Greene'

the query can also return email, birth, and death:

SELECT id, name, birthplace, email, birth, death 
FROM authors 
WHERE name = 'Greene'
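Putting the two pieces together, a resolver can issue the wider query once and answer later field requests from the cached row. This is an illustrative sketch, assuming a `fetch_row` backend function and a simple in-memory row cache, neither of which is part of any particular GraphQL framework:

```python
# Columns to prefetch alongside any requested Author field (illustrative).
PREFETCH_FIELDS = ("id", "name", "birthplace", "email", "birth", "death")

_row_cache = {}  # author name -> full prefetched row

def resolve_author_field(author_name, field, fetch_row):
    """Return one Author field, prefetching the wider row on first access.

    fetch_row(author_name, fields) is assumed to run a query like
    SELECT <fields> FROM authors WHERE name = <author_name>.
    """
    if author_name not in _row_cache:
        # One backend call retrieves all prefetched columns at once.
        _row_cache[author_name] = fetch_row(author_name, PREFETCH_FIELDS)
    return _row_cache[author_name][field]
```

The trade-off is the classic one for prefetching: a slightly wider initial query in exchange for avoiding entire backend round trips when the predicted fields are requested later.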

Deduplication and reuse, caching, and prefetching are all techniques that are completely encapsulated by the GraphQL server and can therefore be applied to any backend without additional support. There are two more optimizations to consider that, in contrast, require additional access patterns from the backends. We will address batching and combining in our next blog.

Feedback & questions

If you jumped in here at Part III, see our introductory post on the N+1 problem and Part II on deduplication and reuse.

As you may have guessed, we love to talk about performance :-) If you have any questions or feedback, or want to discuss a performance challenge, we’d love to connect. Drop us a note via this page or join our Community Discord.