The APIs provided implement a rate limiting system to manage the number of requests made by users. The rate limit differs depending on the environment in which the calls are made:
- Sandbox environment: if our delivery team has configured the test environment to support integrations, the rate limit is set to 2 requests per second.
- Production environment: the rate limit is higher, allowing 5 requests per second, in order to ensure platform stability and performance.
This rate limit is returned in the response headers as follows:
- RateLimit-Policy (e.g., "5, w=1"), where 5 indicates the maximum number of requests and "w" represents the window in seconds.
- RateLimit (e.g., "limit=5, remaining=4, reset=1"), where:
  - limit: the maximum number of allowed requests.
  - remaining: the number of requests still available.
  - reset: when the counter will be reset.
- Retry-After: in case of error, this response HTTP header indicates how long the user agent should wait before making a follow-up request.
It is mandatory for integrators to handle these rate limits.
If the rate limit is exceeded, the API responds with status code 429 Too Many Requests and the following response body:

```json
{
  "success": false,
  "message": "Too many requests, please try again later."
}
```
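As a minimal sketch of how a client might honor the 429 response together with the Retry-After header, the hypothetical helper below (the function name and default wait are illustrative, not part of the API) computes how long to pause before retrying:

```python
def retry_after_seconds(status_code, headers, default_wait=1.0):
    """Return how long to wait before retrying, or None if no retry is needed.

    Honors the Retry-After header on a 429 response, falling back to a
    default wait when the header is absent or malformed.
    """
    if status_code != 429:
        return None  # request was not rate limited
    value = headers.get("Retry-After")
    try:
        return max(float(value), 0.0)
    except (TypeError, ValueError):
        return default_wait
```

For example, `retry_after_seconds(429, {"Retry-After": "3"})` yields a 3-second pause, while any non-429 status yields `None`.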
Best Practices for Handling Rate Limits
To effectively manage rate limits, it is essential to use the information provided in the response headers and the 429 Too Many Requests status code.

Utilize Response Headers for Real-Time Rate Limit Monitoring

When making requests to the API, always check the RateLimit header in the response. It carries the current rate limit status, including the number of requests remaining and the time until the counter resets. By monitoring this header, your application can adapt dynamically to the rate limit restrictions and schedule requests accordingly.

Example
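A minimal sketch, assuming the RateLimit header format shown above ("limit=5, remaining=4, reset=1"); the helper names are illustrative:

```python
def parse_ratelimit(header_value):
    """Parse a RateLimit header such as "limit=5, remaining=4, reset=1"
    into a dict of ints, ignoring malformed fields."""
    fields = {}
    for part in header_value.split(","):
        key, _, value = part.strip().partition("=")
        if value.isdigit():
            fields[key] = int(value)
    return fields


def seconds_to_pause(header_value):
    """Return how long to wait before the next request: pause until the
    window resets when no requests remain, otherwise proceed immediately."""
    info = parse_ratelimit(header_value)
    if info.get("remaining", 1) <= 0:
        return info.get("reset", 1)
    return 0
```

After each response, the client would call `seconds_to_pause(response.headers["RateLimit"])` and sleep for the returned number of seconds before the next request.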
Exponential Backoff Strategy
When interacting with the API, requests may be temporarily rejected due to rate limiting (HTTP 429 errors).
To handle this gracefully, clients should implement an exponential backoff strategy: retry failed requests after progressively longer waiting intervals (1st retry after 1 s, 2nd after 2 s, 3rd after 4 s, 4th after 8 s, and so on), up to a defined maximum delay.
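The strategy can be sketched as follows; this is one common variant ("full jitter", where each delay is drawn uniformly between zero and the exponential cap), and the function names, retry count, and cap are illustrative assumptions:

```python
import random
import time


def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter: attempt 0 waits up to 1 s,
    attempt 1 up to 2 s, attempt 2 up to 4 s, ..., capped at `cap`."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))


def call_with_backoff(send, max_retries=5, base=1.0):
    """Call `send()` (a callable returning an HTTP status code) and
    retry with exponential backoff while it returns 429."""
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        time.sleep(backoff_delay(attempt, base=base))
    return send()  # final attempt after exhausting the retries
```

Capping the delay bounds the worst-case wait, while the randomized component spreads retries from many clients across the window instead of letting them fire in lockstep.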
Introducing random jitter into the retry interval is also recommended, to prevent synchronization issues when multiple clients are retrying at the same time. This approach helps reduce server load, avoids unnecessary retries, and increases the likelihood of successful request execution.

Why It's Important
- Real-Time Monitoring: by inspecting the response headers, your application can accurately track the current rate limit status and adjust its behavior accordingly.
- Optimized Request Scheduling: the information provided in the headers allows your application to schedule requests intelligently, minimizing the likelihood of hitting rate limits and optimizing overall performance.

By leveraging response headers for rate limit monitoring, your application can ensure efficient and responsible usage of the API resources, providing a smoother experience for both users and developers.