In the introductory post in this series we talked about the N+1 problem and more. Here we'll explore two optimization techniques you can employ to reduce the number of backend requests for an operation: deduplication and reuse. As with relational database optimizations, these techniques are typically used in combination, but each is described individually below.

Optimization Techniques to Reduce the Number of Backend Requests for an Operation

With a declarative approach, a GraphQL server can use its knowledge of an incoming operation, the operation's variables, the schema, and the relationship of fields to backends to analyze and optimize the request, thereby reducing operation latency for the frontend. With a full understanding of the request, techniques such as the following can be used to reduce the number of backend requests for an operation.

Deduplication

As its name implies, this technique removes duplicate requests to a backend at the GraphQL server layer. In the simplest case, consider the following repetitive operation:

{
   a1: author(id:1) { name }
   a2: author(id:2) { name }
   a3: author(id:1) { name }
}

In this case, we can eliminate the request to the backend for a3, since we will already have that data from the request for a1. While this kind of duplication would not normally occur at the topmost selection layer, it occurs frequently when the query pulls together data from multiple backends. In these cases, the response from one backend often provides the arguments needed to form the requests to another backend. Those values can contain duplicates, and we can reduce the calls to the second backend by making one request per unique value and then distributing the responses to the appropriate places in the result.
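The core of this technique can be sketched in a few lines. Here, fetch_author is a hypothetical stand-in for a real backend call, and resolve_authors mimics how a server might collapse the a1/a3 duplication above:

```python
def fetch_author(author_id):
    # Stand-in for a real backend call; counts invocations for illustration.
    fetch_author.calls += 1
    return {"name": f"Author {author_id}"}

fetch_author.calls = 0

def resolve_authors(selections):
    """selections: list of (alias, author_id) pairs from the parsed operation."""
    cache = {}
    result = {}
    for alias, author_id in selections:
        if author_id not in cache:  # only hit the backend once per unique id
            cache[author_id] = fetch_author(author_id)
        result[alias] = cache[author_id]
    return result

# The three aliased selections from the operation above:
result = resolve_authors([("a1", 1), ("a2", 2), ("a3", 1)])
print(fetch_author.calls)  # prints 2: two backend requests instead of three
```

The same dictionary that eliminates the duplicate request also guarantees that a1 and a3 are populated from identical data.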

Consider a GraphQL server that consolidates book information from a Postgres database (the books backend) with author detail information from a REST API (the authors backend), and the following query:

{
   books(topic: "cookbooks") { title author { name } }
}

To resolve the above query, the GraphQL server will first make a request to the books backend to get the title and author ID (auth_id) of every cookbook. Since a cookbook author likely writes more than one cookbook, the same ID will occur multiple times in the results of this first request. The engine must then make subsequent requests to the authors backend to get each author's name. If there are only 20 authors for 100 cookbooks, deduplication reduces the calls to the authors backend from 100 to 20 (one per unique author).
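A minimal sketch of this fan-out deduplication, with fetch_author_detail standing in for the hypothetical REST call to the authors backend and rows standing in for the Postgres result:

```python
backend_calls = []

def fetch_author_detail(auth_id):
    # Stand-in for the REST call to the authors backend.
    backend_calls.append(auth_id)
    return {"name": f"Author {auth_id}"}

def resolve_books(rows):
    # Deduplicate the author IDs coming back from the books backend...
    unique_ids = {row["auth_id"] for row in rows}
    authors = {i: fetch_author_detail(i) for i in unique_ids}
    # ...then distribute each author's detail to every book that references it.
    return [{"title": row["title"], "author": authors[row["auth_id"]]}
            for row in rows]

# 100 cookbooks written by 20 authors, as in the example above:
rows = [{"title": f"Cookbook {n}", "auth_id": n % 20} for n in range(100)]
books = resolve_books(rows)  # only 20 requests to the authors backend
```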

Reuse

Reuse avoids backend requests by reusing previous results. In this situation, we do not have the collective set of backend requests known ahead of time, but as we are making requests, we may recognize that we already have the needed data.

Consider the following query:

{
  huxley: authors(name: "Huxley") {
    books { title }
    similar { name books { title } }
  }
  orwell: authors(name: "Orwell") {
    books { title }
    similar { name books { title } }
  }
}

It is likely that Huxley and Orwell appear in each other's similar lists. If we have already retrieved Huxley's book information from the books backend, then when we encounter a request to retrieve it again as part of Orwell's similar list, we can reuse the data we already have. This technique also helps with the very deep queries that can arise from recursive schemas, because it effectively detects cycles in the data. For example, a query that asks for five degrees of similarity would not repeatedly request the same authors, but would reuse the information already retrieved when filling out the result.

{
  authors(name: "Huxley") {
    similar {
      name
      similar {
        name
        similar {
          name
          similar {
            name
            similar { name }
          }
        }
      }
    }
  }
}
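A sketch of how reuse might work on this traversal. The SIMILAR and BOOKS tables and fetch_books are hypothetical stand-ins for the backends; the cache persists for the whole operation, so when the cycle Huxley → Orwell → Huxley comes around, the repeated authors are filled from earlier results:

```python
SIMILAR = {"Huxley": ["Orwell"], "Orwell": ["Huxley"]}
BOOKS = {"Huxley": ["Brave New World"], "Orwell": ["1984"]}

books_requests = []

def fetch_books(name):
    # Stand-in for a request to the books backend.
    books_requests.append(name)
    return BOOKS[name]

def resolve_author(name, depth, cache):
    if name not in cache:  # reuse data fetched earlier in this operation
        cache[name] = {"name": name, "books": fetch_books(name)}
    node = dict(cache[name])  # copy so the cached entry stays clean
    if depth > 0:
        node["similar"] = [resolve_author(s, depth - 1, cache)
                           for s in SIMILAR[name]]
    return node

# Five degrees of similarity, but only two books-backend requests,
# because every level past the first is served from the cache.
result = resolve_author("Huxley", depth=5, cache={})
```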

While reuse and deduplication both avoid multiple requests for the same data, reuse differs from deduplication in that the duplication occurs at different levels of the tree. Reuse must discover the results of previously executed requests, whereas deduplication knows at request time that a single backend call will serve multiple parts of the result.

So, in our example, deduplication would collapse three requests for id:100 into a single backend request and use it to populate all three instances, whereas with reuse, a later request for id:100 finds the results of a previously executed request and uses them to populate its instance.

Deduplication and reuse have the added advantage of providing a consistent result. Since results for the same identifier are reused throughout the query, there is no opportunity for subsequent execution to return different data. For example, without deduplication and reuse, an author's rating could appear as 3.4 in one part of the result but 3.7 in another.

What's Next?

In our next post we'll explore caching as a way to reduce the load on the backend while reducing latency of the frontend, and prefetching to retrieve data in anticipation of future needs.

Meanwhile, if you’d like to brainstorm a use case, discuss a performance challenge, or learn about implementing GraphQL with StepZen, we’d love to connect. Drop us a note via this page or drop in to our Community Discord.