Request Batching
This page is currently under construction and expected to change. Please reach out to us directly if you run into any trouble.
Batching is the process of combining a group of GraphQL requests into a single HTTP request that carries the same operations the individual requests would have sent. This reduces the number of round trips your application makes to the server.
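To illustrate the idea, the sketch below combines two operations into one batched payload. The queries and types here are illustrative, not part of any Mesh API; a batching-aware client simply sends a JSON array of operations in a single POST body instead of one POST per operation.

```typescript
// A single GraphQL request as sent over HTTP (shape per GraphQL-over-HTTP).
type GraphQLRequest = { query: string; variables?: Record<string, unknown> }

// Instead of issuing one HTTP POST per request...
const requests: GraphQLRequest[] = [
  { query: '{ hee: __typename }' },
  { query: '{ ho: __typename }' }
]

// ...a batching client sends one POST whose body is a JSON array.
// The server executes every operation and replies with an array of
// results in the same order.
const batchedBody = JSON.stringify(requests)
```
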
The batching functionality is described in the Batching RFC.
Enable Batching
Batching is disabled by default, but you can enable it by setting the batching option to true:
import { defineConfig } from '@graphql-mesh/serve-cli'
export const serveConfig = defineConfig({
batching: true
})
curl -X POST -H 'Content-Type: application/json' http://localhost:4000/graphql \
-d '[{"query": "{ hee: __typename }"}, {"query": "{ ho: __typename }"}]'
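The server then responds with an array of results, one per operation, in the same order as the batched request. For a schema whose root type is named Query, the response would look roughly like this (the exact payload depends on your schema):

```json
[{ "data": { "hee": "Query" } }, { "data": { "ho": "Query" } }]
```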
Limit the Number of Batched Requests
By default, up to 10 GraphQL operations are allowed within a single HTTP request. If this limit is exceeded, an error is raised. You can customize the limit by passing an object to the batching configuration option:
import { defineConfig } from '@graphql-mesh/serve-cli'
export const serveConfig = defineConfig({
batching: {
limit: 2
}
})
curl -X POST -H 'Content-Type: application/json' -i http://localhost:4000/graphql \
-d '[{"query": "{ hee: __typename }"}, {"query": "{ ho: __typename }"}, {"query": "{ holla: __typename }"}]'
When the batching limit is exceeded, the server responds with HTTP status code 413 (Payload Too Large):
{
"errors": [{ "message": "Batching is limited to 2 operations per request." }]
}