Rate Limits
Understand rate limits for the ScamVerify™ API, including per-tier RPM limits, rate limit headers, and best practices for handling 429 responses.
The ScamVerify™ API enforces rate limits to ensure fair usage and platform stability. Limits are measured in requests per minute (RPM) and vary by plan.
RPM Limits by Plan
| Plan | Price | RPM (Standard Endpoints) | RPM (Batch Endpoints) |
|---|---|---|---|
| Free | $0/mo | 10 RPM | 5 RPM |
| Starter | $149/mo | 30 RPM | 5 RPM |
| Professional | $499/mo | 100 RPM | 5 RPM |
| Business | $1,499/mo | 300 RPM | 5 RPM |
| Scale | $2,999/mo | 600 RPM | 5 RPM |
Batch endpoints (/batch/phone and /batch/url) are fixed at 5 RPM for all tiers. Each batch request can contain up to 100 items, so even at 5 RPM you can process up to 500 lookups per minute.
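To stay within the 100-item cap, a large lookup list can be split into chunks client-side before submission. A minimal sketch (the chunking helper is our own; consult the batch endpoint reference for the exact request payload shape):

```python
def chunk(items, size=100):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 250 numbers -> 3 batch requests (100 + 100 + 50 items)
numbers = [f"+12025551{i:03d}" for i in range(250)]
batches = chunk(numbers)
print(len(batches))       # 3
print(len(batches[-1]))   # 50
```

At 5 RPM, those 3 requests complete within a single one-minute window.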
Rate Limit Headers
Every API response includes headers that tell you your current rate limit status:
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed per minute | 100 |
| X-RateLimit-Remaining | Requests remaining in the current window | 87 |
| X-RateLimit-Reset | Unix timestamp (seconds) when the window resets | 1709251200 |
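Because X-RateLimit-Reset is a Unix timestamp, a small helper can convert it into a wait duration. A sketch (the helper is our own, using the example header values from the table above):

```python
import time

def seconds_until_reset(headers, now=None):
    """Seconds until the rate-limit window resets, from response headers."""
    now = time.time() if now is None else now
    reset_at = int(headers["X-RateLimit-Reset"])
    return max(0.0, reset_at - now)

# Example headers as documented above
headers = {
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "87",
    "X-RateLimit-Reset": "1709251200",
}
print(seconds_until_reset(headers, now=1709251188))  # 12.0
```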
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1709251200
Content-Type: application/json
```

Handling 429 Responses
When you exceed your rate limit, the API returns a 429 Too Many Requests response with a Retry-After header indicating how many seconds to wait before retrying.
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Try again in 12 seconds.",
    "retry_after": 12
  }
}
```

The response also includes the Retry-After HTTP header:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 12
Content-Type: application/json
```

Retry Logic
Always respect the Retry-After header. Use exponential backoff with jitter for the best results.
```python
import time
import random
import requests

def api_request_with_retry(url, headers, payload, max_retries=5):
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code == 429:
            # Respect the Retry-After header; fall back to
            # exponential backoff (2s, 4s, 8s, ...) if it is missing
            retry_after = int(response.headers.get("Retry-After", 2 ** (attempt + 1)))
            # Add jitter to prevent thundering herd
            jitter = random.uniform(0, retry_after * 0.25)
            wait_time = retry_after + jitter
            print(f"Rate limited. Retrying in {wait_time:.1f}s (attempt {attempt + 1})")
            time.sleep(wait_time)
            continue
        return response
    raise Exception("Max retries exceeded")

# Usage
response = api_request_with_retry(
    url="https://scamverify.ai/api/v1/phone/lookup",
    headers={
        "Authorization": "Bearer sv_live_abc123...",
        "Content-Type": "application/json",
    },
    payload={"phone_number": "+12025551234"},
)
print(response.json())
```

```javascript
async function apiRequestWithRetry(url, headers, payload, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, {
      method: "POST",
      headers,
      body: JSON.stringify(payload),
    });
    if (response.status === 429) {
      // Respect the Retry-After header; fall back to
      // exponential backoff (2s, 4s, 8s, ...) if it is missing
      const retryAfter = parseInt(
        response.headers.get("Retry-After") || String(2 ** (attempt + 1)),
        10
      );
      // Add jitter to prevent thundering herd
      const jitter = Math.random() * retryAfter * 0.25;
      const waitTime = retryAfter + jitter;
      console.log(`Rate limited. Retrying in ${waitTime.toFixed(1)}s (attempt ${attempt + 1})`);
      await new Promise((resolve) => setTimeout(resolve, waitTime * 1000));
      continue;
    }
    return response;
  }
  throw new Error("Max retries exceeded");
}

// Usage
const response = await apiRequestWithRetry(
  "https://scamverify.ai/api/v1/phone/lookup",
  {
    Authorization: "Bearer sv_live_abc123...",
    "Content-Type": "application/json",
  },
  { phone_number: "+12025551234" }
);
console.log(await response.json());
```

Best Practices
- **Respect Retry-After**: Never retry before the time indicated by the header. Ignoring it may result in longer throttling periods.
- **Use exponential backoff with jitter**: If you do not receive a Retry-After header, use exponential backoff (2s, 4s, 8s, 16s) with random jitter to avoid synchronized retries from multiple clients.
- **Queue requests**: Instead of firing requests as fast as possible, spread them evenly across the minute. If your limit is 100 RPM, aim for roughly 1 request every 600ms.
- **Monitor the headers**: Check X-RateLimit-Remaining before each request. If you are running low, slow down proactively rather than waiting for a 429.
- **Use batch endpoints for bulk work**: A single batch request with 100 items counts as 1 request against your batch RPM limit, not 100 individual requests.
- **Leverage caching**: Cached lookups return instantly and do not count toward your rate limit. Check the cached field in the response to see if caching is working.
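The request-queuing advice above can be sketched as a small client-side pacer. RequestPacer is a hypothetical helper (not part of any SDK) that spaces requests evenly at 60/RPM seconds apart; the API itself only enforces the per-minute window:

```python
import time

class RequestPacer:
    """Spread requests evenly across the minute instead of bursting.

    For a limit of `rpm` requests per minute, waits roughly 60/rpm
    seconds between consecutive calls to wait().
    """

    def __init__(self, rpm):
        self.interval = 60.0 / rpm
        self._last = 0.0

    def wait(self):
        # Sleep just long enough to keep `interval` seconds between requests
        now = time.monotonic()
        delay = max(0.0, self._last + self.interval - now)
        if delay:
            time.sleep(delay)
        self._last = time.monotonic()

pacer = RequestPacer(rpm=100)
print(pacer.interval)  # 0.6 -> one request every ~600ms
```

Call pacer.wait() immediately before each API request; combined with the retry logic above, this keeps steady-state traffic below the limit so 429s become the exception rather than the norm.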