
Overview

The folksbase API uses Upstash Ratelimit backed by Redis to enforce request limits. There are two limiters, each with its own scope and threshold. Both use a sliding-window algorithm, which smooths bursts at window boundaries instead of resetting counts abruptly the way fixed windows do.
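To make the sliding-window idea concrete, here is a minimal sketch of the weighted-count approximation such an algorithm uses: the previous fixed window's count is weighted by how much of it still overlaps the sliding window. This is an illustration only — the function name and parameters are hypothetical, and Upstash's real implementation runs inside Redis.

```typescript
// Hypothetical sketch of a sliding-window request count. The previous
// fixed window's count is weighted by the fraction of it that still
// overlaps the current sliding window.
function slidingWindowCount(
  prevCount: number, // requests counted in the previous fixed window
  currCount: number, // requests counted in the current fixed window
  windowMs: number,  // window length, e.g. 60_000 for the global limiter
  elapsedMs: number  // time elapsed within the current window
): number {
  const overlap = (windowMs - elapsedMs) / windowMs;
  return Math.floor(prevCount * overlap) + currCount;
}
```

Halfway through a 60-second window, half of the previous window's 80 requests still count toward the limit: `slidingWindowCount(80, 30, 60_000, 30_000)` yields `floor(80 * 0.5) + 30 = 70`.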

Limiters

Global Limiter

Applied to all /api/* routes. Identifies clients by IP address.
| Setting | Value |
| --- | --- |
| Window | 60 seconds (sliding) |
| Max requests | 100 per window |
| Identifier | Client IP (`X-Forwarded-For` or `X-Real-IP`) |
| Redis prefix | `folksbase:rl` |

Upload Limiter

Applied only to POST /api/imports (CSV file uploads). This is stricter because uploads are expensive — each one triggers blob storage, CSV parsing, AI column mapping, and background processing.
| Setting | Value |
| --- | --- |
| Window | 10 minutes (sliding) |
| Max requests | 5 per window |
| Identifier | Authenticated user ID, falling back to IP |
| Redis prefix | `folksbase:rl:upload` |
Upload requests hit both limiters. A request must pass the global limiter first, then the upload limiter.
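The check order above can be sketched as follows. `Limiter` here is a hypothetical stand-in for an Upstash Ratelimit instance, and the function and field names are illustrative, not the API's actual code:

```typescript
// Hypothetical sketch of the two-limiter check order for uploads:
// the global limiter runs first, then the upload limiter.
interface LimitResult { success: boolean; remaining: number }
interface Limiter { limit(id: string): LimitResult }

function checkUpload(
  global: Limiter,
  upload: Limiter,
  ip: string,
  userId?: string
): LimitResult {
  // Global limiter keys on IP and covers all /api/* traffic.
  const globalResult = global.limit(ip);
  if (!globalResult.success) return globalResult;
  // Upload limiter keys on the authenticated user ID, falling back to IP.
  return upload.limit(userId ?? ip);
}
```

Keying the upload limiter on the user ID (rather than IP) means one user behind a shared NAT cannot consume another user's upload quota.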

Response Headers

Every response from a rate-limited endpoint includes these headers:
| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (ms) when the window resets |
When a request is rejected, an additional header is included:
| Header | Description |
| --- | --- |
| `Retry-After` | Seconds until the client can retry |
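A client can turn these headers into a wait time. This is a hedged sketch: the helper name is hypothetical, and `nowMs` is injected rather than read from the clock so the logic is easy to test. Note that `Retry-After` is in seconds while `X-RateLimit-Reset` is a millisecond timestamp.

```typescript
// Hypothetical client-side helper: compute how long to wait before
// retrying, from the rate-limit headers described above.
function waitMsFromHeaders(
  headers: Record<string, string>,
  nowMs: number
): number {
  // Retry-After (seconds) is only present on rejected (429) responses.
  const retryAfter = headers["retry-after"];
  if (retryAfter !== undefined) return Number(retryAfter) * 1000;
  // Otherwise fall back to the reset timestamp (milliseconds).
  const reset = Number(headers["x-ratelimit-reset"] ?? nowMs);
  return Math.max(0, reset - nowMs);
}
```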

Rate Limit Exceeded

When either limiter rejects a request, the API returns HTTP 429 with a JSON body:

```json
{
  "code": "RATE_LIMITED",
  "message": "Too many requests"
}
```
For the upload limiter specifically:
```json
{
  "code": "RATE_LIMITED",
  "message": "Too many uploads. Try again later."
}
```

Why Two Limiters?

The global limiter protects the API from general abuse — bots, scrapers, or runaway scripts. 100 requests per minute is generous for normal usage. The upload limiter exists because CSV imports are resource-intensive. A single upload can trigger:
  • Blob storage write (up to 200 MB)
  • CSV parsing and validation
  • AI-powered column mapping via Anthropic
  • Background job processing with chunked database inserts
  • Email notification on completion
Without a separate limit, a client could exhaust server resources by uploading repeatedly within the global 100-request window.

Client Best Practices

  • Check X-RateLimit-Remaining before making requests in tight loops
  • When you receive a 429, wait for the duration specified in Retry-After before retrying
  • For bulk operations, use the CSV import feature instead of individual POST /api/contacts calls
  • The frontend already handles retries for transient failures (502, 503, 504) with a 1-second delay
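The retry guidance above can be sketched as a small client loop that honors `Retry-After` on 429 responses. Everything here is illustrative: `fetchFn` and `sleep` are injected stand-ins for `fetch` and a real delay, and the function name is hypothetical.

```typescript
// Hedged sketch of a client retry loop that waits out Retry-After
// on 429 responses instead of hammering the API.
interface MinimalResponse { status: number; headers: Map<string, string> }

async function requestWithRetry(
  url: string,
  fetchFn: (u: string) => Promise<MinimalResponse>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<MinimalResponse> {
  for (let attempt = 1; ; attempt++) {
    const res = await fetchFn(url);
    // Only retry on 429, and give up after maxAttempts.
    if (res.status !== 429 || attempt === maxAttempts) return res;
    const retryAfterSec = Number(res.headers.get("retry-after") ?? "1");
    await sleep(retryAfterSec * 1000);
  }
}
```

Note this only covers 429; per the last bullet, transient 5xx retries are handled separately by the frontend.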