Rate Limits

The TempClock API enforces rate limits to ensure fair usage and system stability. Understanding these limits will help you build reliable integrations.

Default Rate Limit

Each API key is limited to 60 requests per minute. This limit is applied on a rolling window basis and is tracked independently per API key.

Per-key limits: If your account has multiple API keys, each key gets its own independent 60 req/min allowance. Creating separate keys for different integrations effectively increases your total capacity.

Rate Limit Headers

Every API response includes headers that tell you your current rate limit status. Use these headers to monitor your usage and avoid hitting the limit.

X-RateLimit-Limit
The maximum number of requests allowed per minute for this key. Example: 60

X-RateLimit-Remaining
The number of requests remaining in the current window. Example: 54

Retry-After
The number of seconds to wait before retrying. Only included on 429 responses. Example: 60

Example Response Headers

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 54

429 Too Many Requests

When you exceed the rate limit, the API returns a 429 status code with details about when you can retry.

Response Headers

HTTP/1.1 429 Too Many Requests
Content-Type: application/json; charset=utf-8
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
Retry-After: 60

Response Body

{
  "error": true,
  "message": "Rate limit exceeded. Maximum 60 requests per minute.",
  "status": 429
}

Best Practices

1. Implement exponential backoff

When you receive a 429 response, wait before retrying. Start with a short delay and increase it exponentially on consecutive failures. Always respect the Retry-After header.

// Pseudocode: exponential backoff
delay = 1  // seconds
for attempt in 1..5:
    response = make_request()
    if response.status == 429:
        wait = max(delay, response.headers["Retry-After"])
        sleep(wait)
        delay = delay * 2  // double the delay
    else:
        break
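The pseudocode above can be sketched as a runnable helper. Here `make_request` is a hypothetical callable (a stand-in for whichever HTTP client you use) that returns an object with `.status` and `.headers`; the retry cap and initial delay are assumptions, not API requirements.

```python
import time

def request_with_backoff(make_request, max_attempts=5, initial_delay=1.0):
    """Retry on 429 with exponential backoff, honouring Retry-After.

    `make_request` is a hypothetical callable returning an object with
    `.status` (int) and `.headers` (dict) -- adapt to your HTTP client.
    """
    delay = initial_delay  # seconds
    response = None
    for attempt in range(max_attempts):
        response = make_request()
        if response.status != 429:
            break  # success (or a non-rate-limit error): stop retrying
        # Always respect Retry-After when the server sends it
        retry_after = float(response.headers.get("Retry-After", 0))
        time.sleep(max(delay, retry_after))
        delay *= 2  # double the delay on consecutive 429s
    return response
```

The `max(delay, retry_after)` mirrors the pseudocode: never retry sooner than the server asks, but keep backing off further on repeated failures.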
2. Cache responses locally

Data like locations, departments, and cost codes changes infrequently. Cache these responses locally and refresh periodically (e.g. every hour) rather than fetching on every request. This dramatically reduces your API usage.
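A minimal sketch of such a cache, assuming a `fetch` callable that wraps your actual API call (the callable and the one-hour TTL are illustrative, not part of the TempClock API):

```python
import time

class TTLCache:
    """Time-based cache for slow-changing lookups (locations, etc.)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key, fetch):
        entry = self._store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]  # still fresh: no API call made
        value = fetch()  # cache miss or stale: hit the API once
        self._store[key] = (time.time() + self.ttl, value)
        return value
```

Usage might look like `cache.get("locations", lambda: api_get("/locations"))`, where `api_get` is your own request helper.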

3. Monitor the X-RateLimit-Remaining header

Check the X-RateLimit-Remaining header after each response. If it drops below a threshold (e.g. 10), slow down your request rate proactively instead of waiting for a 429.
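One way to sketch that check, assuming `headers` is the response-header dict from your HTTP client (the threshold of 10 matches the example above; the pause length is an assumption):

```python
import time

def throttle_if_low(headers, threshold=10, pause_seconds=2.0):
    """Pause proactively when the remaining allowance runs low."""
    remaining = int(headers.get("X-RateLimit-Remaining", threshold + 1))
    if remaining < threshold:
        time.sleep(pause_seconds)  # back off before the next request
        return True
    return False
```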

4. Use pagination efficiently

Use the maximum limit=100 when fetching large datasets instead of making many small requests. One request for 100 records is always better than ten requests for 10 records.
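A sketch of fetching a full dataset in maximum-size pages. `get_page` is a hypothetical callable wrapping your HTTP client, and the `limit`/`offset` parameter names are assumptions based on the text:

```python
def fetch_all(get_page, page_size=100):
    """Collect all records using the largest page size (limit=100)."""
    records = []
    offset = 0
    while True:
        page = get_page(limit=page_size, offset=offset)
        records.extend(page)
        if len(page) < page_size:
            break  # a short page means we have reached the end
        offset += page_size
    return records
```

Fetching 250 records this way costs 3 requests; at `limit=10` it would cost 25.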

5. Queue and throttle requests

If your application generates bursts of API calls (e.g. syncing all workers on startup), implement a request queue that spaces requests at least 1 second apart to stay well within the limit.
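A minimal sketch of such a queue, where `send` stands in for your actual API call and the one-second spacing matches the guidance above:

```python
import time
from collections import deque

class ThrottledQueue:
    """Drain queued requests with a minimum interval between sends."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._queue = deque()
        self._last_sent = float("-inf")

    def enqueue(self, request):
        self._queue.append(request)

    def drain(self, send):
        while self._queue:
            wait = self.min_interval - (time.monotonic() - self._last_sent)
            if wait > 0:
                time.sleep(wait)  # enforce spacing between calls
            send(self._queue.popleft())
            self._last_sent = time.monotonic()
```

At one request per second the queue stays at 60 req/min, exactly at the documented limit, so in practice you may want a slightly longer interval for headroom.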

6. Use separate API keys for separate services

If you have multiple integrations (e.g. a payroll sync and a dashboard), give each its own API key. Each key has its own independent rate limit, so they will not interfere with each other.

Need Higher Limits?

If your integration requires more than 60 requests per minute, contact us to discuss increased rate limits. Higher limits are available for accounts with specific requirements such as real-time dashboard integrations or large-scale data synchronisation.

Request a rate limit increase

Include your account name, current API key name, the specific endpoints you need higher limits for, and your expected request volume. Our team will review your requirements and get back to you within one business day.

Contact Us