# Limitly - Full documentation for LLMs Limitly is the first and only free distributed rate limiter in the entire JavaScript ecosystem. Website: https://limitly.xyz Docs index: https://limitly.xyz/docs npm: https://www.npmjs.com/package/limitly-sdk GitHub: https://github.com/emmanueltaiwo/limitly --- ## Summary Limitly is a TypeScript-first rate limiting SDK for Node.js and browsers. Redis-backed distributed rate limiting with zero configuration. No API keys, no payments, no usage caps. Use hosted or bring your own Redis for full tenant isolation. Features: TypeScript-first, free forever, zero config, multiple algorithms (token bucket, sliding window, fixed window, leaky bucket), bring your own Redis, dynamic limits per request, optional PostHog analytics, framework agnostic (Express, Next.js, Fastify, Hono). Install: npm install limitly-sdk --- # checkRateLimit() URL: https://limitly.xyz/docs/api-reference/checkratelimit Description: Checks if a request is allowed based on configured rate limits. Returns detailed information about the rate limit status. # client.checkRateLimit(options?) Checks if a request is allowed based on the configured rate limits. This is the core method for rate limiting - it determines whether a request should be processed or rate limited. 
## Function Signature ```typescript function checkRateLimit( options?: RateLimitOptions | string ): Promise<LimitlyResponse>; ``` ## Parameters ### `options` (optional) Either a configuration object or a string identifier: ```typescript // As an object await client.checkRateLimit({ identifier: 'user-123', capacity: 100, refillRate: 10, skip: false, }); // As a string (shorthand for identifier) await client.checkRateLimit('user-123'); ``` ### RateLimitOptions ```typescript interface RateLimitOptions { identifier?: string; // User ID, IP, or other unique identifier algorithm?: | 'token-bucket' | 'sliding-window' | 'fixed-window' | 'leaky-bucket'; // Override algorithm for this request capacity?: number; // Maximum capacity (for token bucket/leaky bucket, default: 100) refillRate?: number; // Tokens refilled per second (for token bucket, default: 10) limit?: number; // Maximum requests (for sliding/fixed window, default: 100) windowSize?: number; // Window size in milliseconds (for sliding/fixed window, default: 60000) leakRate?: number; // Leak rate per second (for leaky bucket, default: 10) skip?: boolean; // Skip rate limiting (default: false) } ``` ## Returns A `Promise<LimitlyResponse>` with the following structure: ```typescript interface LimitlyResponse { allowed: boolean; // true if request is allowed, false if rate limited limit?: number; // Total request capacity remaining?: number; // Number of requests remaining reset?: number; // Unix timestamp (milliseconds) when limit resets message?: string; // Optional error message if not allowed } ``` ## Basic Usage Check rate limit with just an identifier: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-api', }); // Simple check with identifier const result = await client.checkRateLimit('user-123'); if (result.allowed) { console.log(`Request allowed. 
${result.remaining} remaining.`); } else { console.log('Rate limited!'); } ``` ## With Custom Limits Override default limits for this specific check: ```typescript // Token bucket (default) const tokenBucketResult = await client.checkRateLimit({ identifier: 'user-123', capacity: 50, // Maximum 50 requests refillRate: 5, // Refill 5 tokens per second }); // Sliding window const slidingWindowResult = await client.checkRateLimit({ identifier: 'user-123', algorithm: 'sliding-window', limit: 100, // 100 requests windowSize: 60000, // per 60 seconds }); // Fixed window const fixedWindowResult = await client.checkRateLimit({ identifier: 'user-123', algorithm: 'fixed-window', limit: 100, windowSize: 60000, }); // Leaky bucket const leakyBucketResult = await client.checkRateLimit({ identifier: 'user-123', algorithm: 'leaky-bucket', capacity: 100, leakRate: 10, // Leak 10 per second }); ``` ## Understanding the Response ### `allowed` (boolean) Indicates whether the request should be processed: ```typescript if (result.allowed) { // Process the request processRequest(); } else { // Return 429 Too Many Requests return rateLimitError(); } ``` ### `limit` (number, optional) The total capacity for this rate limit bucket: ```typescript console.log(`User can make up to ${result.limit} requests`); ``` ### `remaining` (number, optional) How many requests are still available: ```typescript if (result.remaining !== undefined) { console.log(`${result.remaining} requests remaining`); // Set HTTP header res.setHeader('X-RateLimit-Remaining', result.remaining.toString()); } ``` ### `reset` (number, optional) Unix timestamp (milliseconds) when the bucket will be full again: ```typescript if (result.reset) { const resetDate = new Date(result.reset); console.log(`Limit resets at: ${resetDate.toISOString()}`); // Calculate retry after seconds const retryAfter = Math.ceil((result.reset - Date.now()) / 1000); res.setHeader('Retry-After', retryAfter.toString()); } ``` ## Skip Rate Limiting Bypass rate limiting for specific cases (e.g., 
admins): ```typescript const result = await client.checkRateLimit({ identifier: 'user-123', skip: user.isAdmin, // Admins bypass rate limits }); // If skip is true, result.allowed will always be true ``` ## Per-Endpoint Limits Use different limits for different endpoints: ```typescript async function checkEndpointLimit(userId: string, endpoint: string) { const endpointLimits: Record< string, { capacity: number; refillRate: number } > = { '/api/login': { capacity: 5, refillRate: 0.1 }, '/api/search': { capacity: 100, refillRate: 10 }, '/api/export': { capacity: 10, refillRate: 0.5 }, }; const limits = endpointLimits[endpoint] || { capacity: 50, refillRate: 5 }; return await client.checkRateLimit({ identifier: `${userId}:${endpoint}`, ...limits, }); } ``` ## Error Handling Handle errors gracefully: ```typescript try { const result = await client.checkRateLimit({ identifier: userId, }); if (!result.allowed) { return handleRateLimit(result); } return processRequest(); } catch (error) { // Handle Redis connection errors, timeouts, etc. console.error('Rate limit check failed:', error); // Fail open - allow request if rate limiting fails return processRequest(); } ``` ## Setting HTTP Headers Include rate limit information in response headers: ```typescript const result = await client.checkRateLimit({ identifier: userId }); // Set standard rate limit headers if (result.limit) { res.setHeader('X-RateLimit-Limit', result.limit.toString()); } if (result.remaining !== undefined) { res.setHeader('X-RateLimit-Remaining', result.remaining.toString()); } if (result.reset) { res.setHeader('X-RateLimit-Reset', Math.ceil(result.reset / 1000).toString()); } if (!result.allowed) { const retryAfter = result.reset ? 
Math.ceil((result.reset - Date.now()) / 1000) : 60; res.setHeader('Retry-After', retryAfter.toString()); return res.status(429).json({ error: 'Rate limit exceeded' }); } ``` ## Performance Considerations - **Caching**: Results are cached in Redis for fast lookups - **Async**: Always use `await` - the method returns a Promise - **Batching**: Multiple checks can be done in parallel with `Promise.all()` ```typescript // Check multiple users in parallel const results = await Promise.all([ client.checkRateLimit('user-1'), client.checkRateLimit('user-2'), client.checkRateLimit('user-3'), ]); ``` --- # createClient() URL: https://limitly.xyz/docs/api-reference/createclient Description: Creates a new Limitly client instance for advanced configurations and custom rate limiting setups. # createClient(config?) Creates a new Limitly client instance with custom configuration. **For production, always provide `redisUrl` for full tenant isolation.** ## Function Signature ```typescript function createClient(config?: LimitlyConfig): LimitlyClient; ``` ## Parameters ### `config` (optional) Configuration object for the client: ```typescript interface PostHogConfig { apiKey: string; // PostHog API key host?: string; // PostHog host (default: https://app.posthog.com) } interface LimitlyConfig { redisUrl?: string; // ⭐ Recommended for production. Redis connection URL for direct Redis mode serviceId?: string; // Service identifier for isolation algorithm?: | 'token-bucket' | 'sliding-window' | 'fixed-window' | 'leaky-bucket'; // Rate limiting algorithm (default: 'token-bucket') timeout?: number; // Request timeout in milliseconds (default: 5000) baseUrl?: string; // Base URL of the Limitly API service (default: https://api.limitly.emmanueltaiwo.dev). Only used when redisUrl is not provided. enableSystemAnalytics?: boolean; // Enable system analytics tracking (default: true). All identifiers are hashed for privacy. 
posthog?: PostHogConfig; // PostHog configuration to send events to your PostHog instance } ``` **Important:** - **With `redisUrl`**: SDK connects directly to your Redis. Full tenant isolation, no collisions with other users. **Recommended for production.** - **Without `redisUrl`**: Uses HTTP API mode (hosted service). Shares Redis with other users - may collide if multiple users use the same `serviceId`. Good for development/testing. ## Returns A `LimitlyClient` instance with the following methods: - `checkRateLimit(options?)` - Check if a request is allowed - Other client methods (see API reference) ## Basic Usage **Recommended: Use your own Redis** ```typescript import { createClient } from 'limitly-sdk'; // Recommended for production (default: token-bucket algorithm) const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app', }); // Or choose a different algorithm const slidingWindowClient = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app', algorithm: 'sliding-window', // or 'fixed-window', 'leaky-bucket' }); ``` **Without Redis URL (development/testing):** ```typescript // ⚠️ Shares hosted Redis - may collide with other users const client = createClient({ serviceId: 'my-app' }); ``` ## Custom Service ID Isolate rate limits by service or application: ```typescript // Recommended: Use with your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'my-api-service', }); // All rate limits using this client will be isolated under 'my-api-service' const result = await client.checkRateLimit({ identifier: 'user-123', }); ``` This is useful when you have multiple services and want to keep their rate limits separate: ```typescript // Recommended: Use your own Redis const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379'; // API service const apiClient = createClient({ redisUrl, serviceId: 'api-service', }); // Authentication service const 
authClient = createClient({ redisUrl, serviceId: 'auth-service', }); // Background job service const jobClient = createClient({ redisUrl, serviceId: 'job-service', }); // Each service has independent rate limit buckets await apiClient.checkRateLimit({ identifier: 'user-123' }); await authClient.checkRateLimit({ identifier: 'user-123' }); await jobClient.checkRateLimit({ identifier: 'user-123' }); ``` ## Bring Your Own Redis (Recommended for Production) **Always use your own Redis URL for production deployments to ensure full tenant isolation:** ```typescript // Recommended for production const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app', }); // All rate limit data stored in your Redis - no collisions const result = await client.checkRateLimit('user-123'); ``` **Benefits of using your own Redis:** - ✅ **Full tenant isolation** - No collisions with other Limitly users - ✅ **Data privacy** - Your rate limit data stays in your Redis instance - ✅ **Better performance** - Direct Redis connection (no HTTP overhead) - ✅ **Production ready** - Recommended for all production deployments **Without `redisUrl` (HTTP API mode):** - ⚠️ Shares hosted Redis with other users - ⚠️ Potential collisions if multiple users use the same `serviceId` - ✅ Works out of the box with zero configuration - ✅ Good for development and testing ## PostHog Analytics Integration Send rate limit events directly to your PostHog instance: ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'my-app', posthog: { apiKey: process.env.POSTHOG_API_KEY!, host: 'https://app.posthog.com', // optional }, }); ``` **How it works:** - Events are sent to your PostHog with actual identifiers (serviceId, clientId) - Events are also sent to Limitly's analytics endpoint (if enabled) with hashed identifiers - Both happen asynchronously and failures don't affect rate limiting - Tracked events: `rate_limit_check`, 
`rate_limit_allowed`, `rate_limit_denied` **Benefits:** - Track your own analytics in PostHog - See actual user IDs (not hashed) - Build custom dashboards and insights - Works with direct Redis mode ## Custom Timeout Set a custom timeout for HTTP requests: ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app', timeout: 3000, // 3 seconds timeout }); ``` This is useful when: - You want faster failure detection - Your network has higher latency - You're using a remote API service ## System Analytics Limitly collects anonymous usage analytics to improve the service. Analytics are enabled by default and can be disabled: ```typescript // Disable analytics const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'my-app', enableSystemAnalytics: false, }); ``` **Privacy:** - All identifiers (service IDs, client IDs) are hashed before sending to Limitly - No sensitive data (Redis URLs, IP addresses) is collected - Analytics are sent asynchronously and failures don't affect rate limiting - Analytics only track when using direct Redis mode (with `redisUrl`) **Note:** If you provide `posthog` config, events are sent to your PostHog with actual identifiers (not hashed) for your own analytics. Limitly's system analytics (if enabled) still uses hashed identifiers. 
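For intuition about what "hashed identifiers" means here, the idea can be sketched with Node's built-in crypto module (an illustration only — the docs do not specify the SDK's actual hashing scheme, so SHA-256 and the `hashIdentifier` name are assumptions):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch: a one-way hash means the raw identifier never
// leaves your infrastructure - only an irreversible digest is sent.
function hashIdentifier(identifier: string): string {
  return createHash('sha256').update(identifier).digest('hex');
}

const digest = hashIdentifier('user-123');
console.log(digest.length); // 64 - a SHA-256 hex digest is 64 characters
console.log(digest === hashIdentifier('user-123')); // true - deterministic
console.log(digest === hashIdentifier('user-124')); // false - different input, different digest
```

Because the hash is deterministic, aggregate analytics (counts per hashed identifier) still work without revealing the original user IDs.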
**What's tracked:** - Rate limit check events (allowed/denied) - Usage patterns (limits, remaining, reset times) - SDK version - Custom configuration usage **What's NOT tracked:** - Raw identifiers or user data (in system analytics - hashed only) - Redis connection strings - IP addresses - Sensitive application data ## Environment-Based Configuration Create different clients for different environments: ```typescript // config/limitly.ts import { createClient } from 'limitly-sdk'; const isProduction = process.env.NODE_ENV === 'production'; // Production: Always use your own Redis (recommended) export const prodClient = createClient({ redisUrl: process.env.REDIS_URL!, // Required for production serviceId: process.env.SERVICE_ID || 'production', timeout: parseInt(process.env.TIMEOUT || '5000', 10), }); // Development: use local Redis export const devClient = createClient({ redisUrl: 'redis://localhost:6379', serviceId: 'dev', timeout: 5000, }); // ⚠️ Not recommended for production: HTTP API mode (shares hosted Redis) export const apiClient = createClient({ serviceId: 'my-app', // No redisUrl = uses HTTP API, may collide with other users }); ``` ## Multiple Clients Create multiple clients for different use cases: ```typescript // Recommended: Use your own Redis const redisUrl = process.env.REDIS_URL || 'redis://localhost:6379'; // Strict rate limiting for authentication const authClient = createClient({ redisUrl, serviceId: 'auth', timeout: 2000, }); // Lenient rate limiting for public APIs const publicClient = createClient({ redisUrl, serviceId: 'public-api', timeout: 5000, }); // Background job rate limiting const jobClient = createClient({ redisUrl, serviceId: 'background-jobs', timeout: 10000, }); ``` ## Error Handling Handle connection errors gracefully: ```typescript import { createClient } from 'limitly-sdk'; const client = createClient({ serviceId: 'my-app', redisUrl: process.env.REDIS_URL, }); async function checkLimitSafely(userId: string) { try { const 
result = await client.checkRateLimit({ identifier: userId }); return result; } catch (error) { // Handle Redis connection errors if (error instanceof Error) { console.error('Rate limit check failed:', error.message); } // Fail open - allow request if rate limiting fails return { allowed: true, error: 'Rate limit service unavailable', }; } } ``` ## Best Practices 1. **Use service IDs**: Always specify a `serviceId` to isolate rate limits 2. **Connection pooling**: Limitly handles connection pooling automatically 3. **Singleton pattern**: Create clients once and reuse them: ```typescript // lib/limitly.ts import { createClient } from 'limitly-sdk'; let client: ReturnType<typeof createClient> | null = null; export function getLimitlyClient() { if (!client) { // Recommended: Always provide redisUrl for production client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: process.env.SERVICE_ID || 'default', }); } return client; } ``` --- # Type Exports URL: https://limitly.xyz/docs/api-reference/types Description: All TypeScript types are exported for use in your own code. Get full type safety and IDE autocomplete. # Type Exports Limitly exports all TypeScript types for use in your own code. This provides full type safety, better IDE autocomplete, and makes it easier to build type-safe wrappers around Limitly. 
## Available Types All types can be imported from `'limitly-sdk'`: ```typescript import type { LimitlyConfig, LimitlyResponse, RateLimitOptions, LimitlyClient } from 'limitly-sdk'; ``` ## LimitlyConfig Configuration for creating a Limitly client: ```typescript interface PostHogConfig { apiKey: string; // PostHog API key host?: string; // PostHog host (default: https://app.posthog.com) } interface LimitlyConfig { serviceId?: string; // Service identifier for isolation redisUrl?: string; // Redis connection URL algorithm?: | 'token-bucket' | 'sliding-window' | 'fixed-window' | 'leaky-bucket'; // Rate limiting algorithm (default: 'token-bucket') timeout?: number; // Request timeout in milliseconds baseUrl?: string; // Base URL of the Limitly API service enableSystemAnalytics?: boolean; // Enable system analytics tracking (default: true) posthog?: PostHogConfig; // PostHog configuration to send events to your PostHog instance } ``` ### Example Usage ```typescript import type { LimitlyConfig } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; function createLimitlyClient(config: LimitlyConfig) { return createClient(config); } const client = createLimitlyClient({ serviceId: 'my-api', timeout: 5000 }); ``` ## LimitlyResponse Response object returned by rate limit checks: ```typescript interface LimitlyResponse { allowed: boolean; // Is request allowed? limit?: number; // Total request limit remaining?: number; // Requests remaining reset?: number; // Unix timestamp (ms) when limit resets message?: string; // Optional error message } ``` ### Example Usage ```typescript import type { LimitlyResponse } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function handleRequest(userId: string): Promise<LimitlyResponse> { const result: LimitlyResponse = await client.checkRateLimit(userId); if (!result.allowed) { console.log(`Rate limited. 
Reset at: ${new Date(result.reset!)}`); } return result; } ``` ## RateLimitOptions Options for checking rate limits: ```typescript interface RateLimitOptions { identifier?: string; // User ID, IP, or other identifier algorithm?: | 'token-bucket' | 'sliding-window' | 'fixed-window' | 'leaky-bucket'; // Override algorithm for this request capacity?: number; // Maximum capacity (for token bucket/leaky bucket) refillRate?: number; // Tokens refilled per second (for token bucket) limit?: number; // Maximum requests (for sliding/fixed window) windowSize?: number; // Window size in milliseconds (for sliding/fixed window) leakRate?: number; // Leak rate per second (for leaky bucket) skip?: boolean; // Skip rate limiting } ``` ### Example Usage ```typescript import type { RateLimitOptions } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); function checkCustomLimit( userId: string, options?: Partial<RateLimitOptions> ): Promise<LimitlyResponse> { return client.checkRateLimit({ identifier: userId, ...options }); } // Usage await checkCustomLimit('user-123', { capacity: 50, refillRate: 5 }); ``` ## LimitlyClient Type for the client instance: ```typescript interface LimitlyClient { checkRateLimit(options?: RateLimitOptions | string): Promise<LimitlyResponse>; // ... 
other methods } ``` ### Example Usage ```typescript import type { LimitlyClient } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; let client: LimitlyClient | null = null; export function getClient(): LimitlyClient { if (!client) { client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-api' }); } return client; } ``` ## Type Guards Create type guards for better type safety: ```typescript import type { LimitlyResponse } from 'limitly-sdk'; function isRateLimited(response: LimitlyResponse): response is LimitlyResponse & { allowed: false } { return !response.allowed; } function isAllowed(response: LimitlyResponse): response is LimitlyResponse & { allowed: true } { return response.allowed; } // Usage import { createClient } from 'limitly-sdk'; const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit('user-123'); if (isRateLimited(result)) { // TypeScript knows result.allowed is false here console.log('Rate limited:', result.message); const retryAfter = result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60; } else if (isAllowed(result)) { // TypeScript knows result.allowed is true here console.log('Allowed. Remaining:', result.remaining); } ``` ## Typed Wrappers Create type-safe wrappers around Limitly: ```typescript import type { LimitlyResponse, RateLimitOptions } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; interface ProtectedRouteOptions extends RateLimitOptions { userId: string; endpoint?: string; } async function protectedRoute( options: ProtectedRouteOptions ): Promise<LimitlyResponse> { const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'api' }); return client.checkRateLimit({ identifier: options.endpoint ? 
`${options.userId}:${options.endpoint}` : options.userId, capacity: options.capacity, refillRate: options.refillRate, skip: options.skip }); } // Usage with full type safety const result = await protectedRoute({ userId: 'user-123', endpoint: '/api/data', capacity: 100, refillRate: 10 }); ``` ## Generic Helpers Create generic helper functions with proper typing: ```typescript import type { LimitlyResponse } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; type RateLimitHandler<T> = (result: LimitlyResponse) => T; async function withRateLimit<T>( identifier: string, onAllowed: RateLimitHandler<T>, onRateLimited: RateLimitHandler<T> ): Promise<T> { const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit(identifier); return result.allowed ? onAllowed(result) : onRateLimited(result); } ``` Usage: ```typescript const response = await withRateLimit( 'user-123', (result) => ({ success: true, remaining: result.remaining }), (result) => ({ success: false, error: 'Rate limited', retryAfter: result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60 }) ); ``` ## Complete Example Putting it all together: ```typescript import type { LimitlyConfig, LimitlyResponse, RateLimitOptions, LimitlyClient } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; // Typed configuration const config: LimitlyConfig = { serviceId: 'my-api', timeout: 5000 }; // Typed client const client: LimitlyClient = createClient(config); // Typed options const options: RateLimitOptions = { identifier: 'user-123', capacity: 100, refillRate: 10 }; // Typed response const result: LimitlyResponse = await client.checkRateLimit(options); // Type-safe handling if (result.allowed && result.remaining !== undefined) { console.log(`Allowed. 
${result.remaining} remaining.`); } else if (!result.allowed) { console.log('Rate limited:', result.message); } ``` --- # Advanced Configuration URL: https://limitly.xyz/docs/examples/advanced Description: Advanced usage patterns with tiered rate limiting, dynamic limits, and complex scenarios. # Advanced Configuration Advanced patterns for complex rate limiting scenarios. ## Tiered Rate Limiting ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'tiered-api' }); // Example user shape interface User { id: string; plan: 'free' | 'pro' | 'enterprise'; isAdmin: boolean; } const tierLimits = { free: { capacity: 100, refillRate: 10 }, pro: { capacity: 1000, refillRate: 100 }, enterprise: { capacity: 10000, refillRate: 1000 } }; async function checkTieredLimit(user: User, endpoint?: string) { if (user.isAdmin) { return { allowed: true, limit: Infinity, remaining: Infinity }; } const limits = tierLimits[user.plan]; return await client.checkRateLimit({ identifier: `${user.id}:${endpoint || 'default'}`, ...limits }); } ``` ## Dynamic Rate Limiting ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'adaptive-api' }); // Simulate system load monitoring async function getSystemLoad(): Promise<number> { // In real implementation, get from your monitoring system // Returns 0-100 representing CPU/memory usage return Math.random() * 100; } async function checkAdaptiveLimit(userId: string) { const systemLoad = await getSystemLoad(); // Reduce limits when system is under heavy load const baseCapacity = 100; const baseRefillRate = 10; const capacity = systemLoad > 80 ? Math.floor(baseCapacity * 0.5) // 50% capacity under high load : systemLoad > 50 ? Math.floor(baseCapacity * 0.75) // 75% capacity under medium load : baseCapacity; // Full capacity under normal load const refillRate = systemLoad > 80 ? 
Math.floor(baseRefillRate * 0.5) : systemLoad > 50 ? Math.floor(baseRefillRate * 0.75) : baseRefillRate; return await client.checkRateLimit({ identifier: userId, capacity, refillRate }); } ``` ## Time-Based Limiting ```typescript function getTimeBasedLimits() { const hour = new Date().getHours(); return hour >= 9 && hour < 17 ? { capacity: 50, refillRate: 5 } // Peak hours : { capacity: 200, refillRate: 20 }; // Off-peak } await client.checkRateLimit({ identifier: userId, ...getTimeBasedLimits() }); ``` ## Geographic Limiting ```typescript const regionLimits = { 'US': { capacity: 100, refillRate: 10 }, 'ASIA': { capacity: 200, refillRate: 20 }, 'DEFAULT': { capacity: 50, refillRate: 5 } }; const limits = regionLimits[country] || regionLimits['DEFAULT']; await client.checkRateLimit({ identifier: `${userId}:${country}`, ...limits }); ``` ## Multiple Windows ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'multi-window' }); async function checkMultiWindowLimit(userId: string) { // Approximate multiple time windows with token buckets // (refillRate = requests per window / window length in seconds) const [minuteResult, hourResult, dayResult] = await Promise.all([ // Per-minute limit: 10 requests/minute client.checkRateLimit({ identifier: `${userId}:minute`, capacity: 10, refillRate: 10 / 60 // ~0.17 per second }), // Per-hour limit: 1000 requests/hour client.checkRateLimit({ identifier: `${userId}:hour`, capacity: 1000, refillRate: 1000 / 3600 // ~0.28 per second }), // Per-day limit: 10000 requests/day client.checkRateLimit({ identifier: `${userId}:day`, capacity: 10000, refillRate: 10000 / 86400 // ~0.12 per second }) ]); // Request is allowed only if all windows allow it const allowed = minuteResult.allowed && hourResult.allowed && dayResult.allowed; return { allowed, windows: { minute: minuteResult, hour: hourResult, day: dayResult }, // Return the most restrictive remaining count remaining: Math.min( 
minuteResult.remaining ?? Infinity, hourResult.remaining ?? Infinity, dayResult.remaining ?? Infinity ) }; } ``` ## Burst Protection Allow bursts but limit sustained traffic: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'burst-protection' }); async function checkBurstLimit(userId: string) { // Tight bucket for burst detection const burstResult = await client.checkRateLimit({ identifier: `${userId}:burst`, capacity: 20, // Allow a burst of 20 requests refillRate: 20 // Refill 20 per second }); // Larger bucket for sustained rate const sustainedResult = await client.checkRateLimit({ identifier: `${userId}:sustained`, capacity: 100, // 100 requests refillRate: 10 // Refill 10 per second }); // Require both buckets to allow (AND logic); // use || instead for a more permissive OR policy const allowed = burstResult.allowed && sustainedResult.allowed; return { allowed, burst: burstResult, sustained: sustainedResult }; } ``` ## Rate Limit with Exponential Backoff Implement exponential backoff for rate-limited users: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'backoff' }); interface RateLimitState { attempts: number; lastAttempt: number; } async function checkWithBackoff(userId: string): Promise<{ allowed: boolean; backoffSeconds?: number; result: any; }> { // Check standard rate limit const result = await client.checkRateLimit({ identifier: userId, capacity: 100, refillRate: 10 }); if (result.allowed) { return { allowed: true, result }; } // Calculate exponential backoff // Get previous attempt count from cache or database const state: RateLimitState = await getRateLimitState(userId); const attempts = state.attempts 
+ 1; // Exponential backoff: 2^attempts seconds, max 60 seconds const backoffSeconds = Math.min(Math.pow(2, attempts), 60); // Store state await setRateLimitState(userId, { attempts, lastAttempt: Date.now() }); return { allowed: false, backoffSeconds, result: { ...result, message: `Rate limited. Please retry after ${backoffSeconds} seconds.`, retryAfter: backoffSeconds } }; } // Helper functions (implement with your cache/database) async function getRateLimitState(userId: string): Promise<RateLimitState> { // Get from Redis, database, etc. return { attempts: 0, lastAttempt: 0 }; } async function setRateLimitState(userId: string, state: RateLimitState): Promise<void> { // Store in Redis, database, etc. } ``` --- # Rate Limiting Algorithms URL: https://limitly.xyz/docs/examples/algorithms Description: Learn about the different rate limiting algorithms available in Limitly and when to use each one. # Rate Limiting Algorithms Limitly supports multiple rate limiting algorithms. Choose the one that best fits your use case. ## Available Algorithms ### Token Bucket (Default) **Best for:** General purpose rate limiting, smooth traffic with burst handling The token bucket algorithm provides smooth, continuous rate limiting with burst capability. 
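To build intuition before the SDK examples, here is a standalone in-memory simulation of the algorithm (an illustration only — the SDK's real implementation runs against Redis, and the `TokenBucket` class and method names here are hypothetical):

```typescript
// Standalone token-bucket simulation. Time is passed in explicitly
// (milliseconds) so the behavior is easy to follow and to test.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,   // maximum tokens the bucket can hold
    private refillRate: number, // tokens added per second
    now: number = Date.now()
  ) {
    this.tokens = capacity; // the bucket starts full
    this.lastRefill = now;
  }

  // Returns true if a request is allowed (and consumes one token).
  tryConsume(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    // Refill continuously, but never above capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillRate
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// capacity=5, refillRate=1: a burst of 5 succeeds, the 6th is denied,
// and after 1 simulated second one more request succeeds.
const bucket = new TokenBucket(5, 1, 0);
const burst = [0, 0, 0, 0, 0].map((t) => bucket.tryConsume(t));
console.log(burst.every(Boolean)); // true - burst within capacity
console.log(bucket.tryConsume(0));    // false - bucket is empty
console.log(bucket.tryConsume(1000)); // true - 1 second refilled 1 token
```

Note how `Math.min` caps the refill at `capacity`: an idle bucket never accumulates more than `capacity` tokens, which is what bounds the maximum burst.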
```typescript import { createClient } from 'limitly-sdk'; const client = createClient({ redisUrl: process.env.REDIS_URL, algorithm: 'token-bucket', // default serviceId: 'my-app', }); // Each request consumes 1 token // Tokens refill at a constant rate const result = await client.checkRateLimit({ identifier: 'user-123', capacity: 100, // Bucket capacity refillRate: 10, // 10 tokens per second }); ``` **How it works:** - Starts with `capacity` tokens - Each request consumes 1 token - Tokens refill at `refillRate` per second - Allows bursts up to capacity - Smooth, continuous refill **Example:** `capacity=100, refillRate=10` - Initially: 100 requests allowed (the bucket starts full) - After the bucket is drained: +10 tokens per second - 10 seconds after draining: the bucket is full again (tokens never exceed `capacity`) ### Sliding Window **Best for:** Accurate limits, smooth enforcement, better UX than fixed windows The sliding window tracks requests in a rolling time window, providing more accurate rate limiting than fixed windows. ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, algorithm: 'sliding-window', serviceId: 'my-app', }); const result = await client.checkRateLimit({ identifier: 'user-123', limit: 100, // 100 requests windowSize: 60000, // per 60 seconds (rolling window) }); ``` **How it works:** - Tracks individual request timestamps - Removes requests outside the window - Allows requests if count < limit - Smooth, rolling enforcement **Example:** `limit=100, windowSize=60000` - Allows 100 requests in any 60-second window - Window slides continuously - More accurate than fixed windows ### Fixed Window **Best for:** Simple quotas, predictable reset times, API tier limits The fixed window divides time into discrete windows with predictable reset times. 
```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, algorithm: 'fixed-window', serviceId: 'my-app', }); const result = await client.checkRateLimit({ identifier: 'user-123', limit: 100, // 100 requests windowSize: 60000, // per 60-second window }); ``` **How it works:** - Divides time into fixed windows - Counts requests in current window - Resets at window boundary - Simple and predictable **Example:** `limit=100, windowSize=60000` - Window 1: 00:00-00:59 (100 requests allowed) - Window 2: 01:00-01:59 (resets, 100 requests allowed) - Predictable reset times ### Leaky Bucket **Best for:** Traffic shaping, burst smoothing, constant output rate The leaky bucket smooths traffic by leaking requests at a constant rate, preventing spikes. ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, algorithm: 'leaky-bucket', serviceId: 'my-app', }); const result = await client.checkRateLimit({ identifier: 'user-123', capacity: 100, // Bucket capacity leakRate: 10, // Leak 10 requests per second }); ``` **How it works:** - Requests fill the bucket - Bucket leaks at constant `leakRate` - Rejects if bucket would overflow - Smooths traffic spikes **Example:** `capacity=100, leakRate=10` - Burst of 50 requests: bucket fills to 50 - Leaks at 10/second: empty in 5 seconds - Prevents traffic spikes ## Choosing the Right Algorithm | Use Case | Recommended Algorithm | Why | | ------------------------- | --------------------- | ----------------------------------- | | General API rate limiting | Token Bucket | Smooth, allows bursts, good balance | | Accurate limits | Sliding Window | More accurate than fixed windows | | Simple quotas | Fixed Window | Predictable, easy to understand | | Traffic shaping | Leaky Bucket | Smooths bursts, constant output | | User tiers | Token Bucket | Flexible capacity/refill rates | | Per-minute limits | Fixed Window | Clear reset times | | Per-second limits | Sliding Window | Smooth enforcement | ## Per-Request 
Algorithm Override You can override the algorithm for individual requests: ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, algorithm: 'token-bucket', // Default serviceId: 'my-app', }); // Use default algorithm (token bucket) await client.checkRateLimit({ identifier: 'user-123' }); // Override for this request await client.checkRateLimit({ identifier: 'user-123', algorithm: 'sliding-window', limit: 50, windowSize: 30000, }); ``` ## Algorithm-Specific Parameters Each algorithm uses different parameters: **Token Bucket:** - `capacity`: Maximum tokens - `refillRate`: Tokens per second **Sliding Window:** - `limit`: Maximum requests - `windowSize`: Window size in milliseconds **Fixed Window:** - `limit`: Maximum requests - `windowSize`: Window size in milliseconds **Leaky Bucket:** - `capacity`: Bucket capacity - `leakRate`: Leak rate per second ## Examples ### Different Algorithms Per Endpoint ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'my-api', }); // Strict sliding window for auth async function checkAuth(userId: string) { return client.checkRateLimit({ identifier: userId, algorithm: 'sliding-window', limit: 5, windowSize: 60000, // 5 attempts per minute }); } // Token bucket for general API async function checkAPI(userId: string) { return client.checkRateLimit({ identifier: userId, algorithm: 'token-bucket', capacity: 100, refillRate: 10, }); } // Fixed window for exports async function checkExport(userId: string) { return client.checkRateLimit({ identifier: userId, algorithm: 'fixed-window', limit: 10, windowSize: 3600000, // 10 exports per hour }); } ``` ### User Tier with Different Algorithms ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'tiered-api', }); async function checkLimit(user: { id: string; plan: string }) { if (user.plan === 'free') { // Free users: strict fixed window return client.checkRateLimit({ identifier: user.id, algorithm: 
'fixed-window', limit: 100, windowSize: 3600000, // 100 per hour }); } else if (user.plan === 'premium') { // Premium: smooth token bucket return client.checkRateLimit({ identifier: user.id, algorithm: 'token-bucket', capacity: 1000, refillRate: 100, }); } else { // Enterprise: leaky bucket for traffic shaping return client.checkRateLimit({ identifier: user.id, algorithm: 'leaky-bucket', capacity: 10000, leakRate: 1000, }); } } ``` --- # Basic Usage URL: https://limitly.xyz/docs/examples/basic x Description: Simple examples of using Limitly in your application. Learn the fundamentals with practical code examples. # Basic Usage Simple, practical examples for common use cases. ## Simple Rate Limiting **Recommended: Use your own Redis** ```typescript import { createClient } from 'limitly-sdk'; // Recommended for production const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function handleRequest(userId: string) { const result = await client.checkRateLimit(userId); if (!result.allowed) { throw new Error('Rate limit exceeded'); } return { success: true, remaining: result.remaining }; } ``` **Without Redis URL (development/testing):** ```typescript // ⚠️ Shares hosted Redis - may collide with other users const client = createClient({ serviceId: 'my-app' }); ``` ## Response Structure ```typescript interface LimitlyResponse { allowed: boolean; limit?: number; remaining?: number; reset?: number; message?: string; } ``` ## Custom Limits ```typescript // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit({ identifier: 'user-123', capacity: 50, refillRate: 5 }); ``` ## Per-User Limiting ```typescript // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'user-api' }); async function handleUserRequest(userId: string) { 
const result = await client.checkRateLimit(userId); if (!result.allowed) { return { error: 'Rate limit exceeded' }; } return { success: true, remaining: result.remaining }; } ``` ## IP-Based Limiting ```typescript const ip = request.headers.get('x-forwarded-for')?.split(',')[0] || 'unknown'; const result = await client.checkRateLimit(ip); if (!result.allowed) { return Response.json({ error: 'Too many requests' }, { status: 429 }); } ``` ## Endpoint-Specific Limits ```typescript const endpointLimits = { '/api/login': { capacity: 5, refillRate: 0.1 }, '/api/search': { capacity: 100, refillRate: 10 } }; const result = await client.checkRateLimit({ identifier: `${userId}:${endpoint}`, ...endpointLimits[endpoint] }); ``` ## Error Handling ```typescript try { const result = await client.checkRateLimit(userId); if (!result.allowed) { return { error: 'Rate limit exceeded' }; } return { success: true }; } catch (error) { // Fail open return { success: true, rateLimitError: true }; } ``` ## HTTP Headers ```typescript const headers = new Headers(); if (result.limit) headers.set('X-RateLimit-Limit', result.limit.toString()); if (result.remaining !== undefined) { headers.set('X-RateLimit-Remaining', result.remaining.toString()); } if (!result.allowed) { return Response.json({ error: 'Rate limit exceeded' }, { status: 429, headers }); } ``` ## Service Isolation ```typescript // Recommended: Use your own Redis for each service const apiClient = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'api-service' }); const authClient = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'auth-service' }); ``` --- # Custom Rate Limit Strategies URL: https://limitly.xyz/docs/examples/custom-strategies x Description: Implement custom rate limiting strategies for different use cases. Learn patterns for per-endpoint limits, adaptive limits, and tier-based systems. 
# Custom Rate Limit Strategies Learn how to implement custom rate limiting strategies tailored to your specific use cases. These patterns can be combined and adapted to fit your application's needs. ## Strategy 1: Per-Endpoint Limits Different endpoints have different resource requirements. Apply appropriate limits: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'endpoint-based' }); // Define limits per endpoint based on resource usage const endpointLimits: Record<string, { capacity: number; refillRate: number }> = { // Authentication endpoints - very strict '/api/login': { capacity: 5, refillRate: 0.1 }, // 5 attempts, 1 per 10 seconds '/api/register': { capacity: 3, refillRate: 0.05 }, // 3 attempts, 1 per 20 seconds '/api/reset-password': { capacity: 3, refillRate: 0.05 }, // Data endpoints - moderate limits '/api/data': { capacity: 100, refillRate: 10 }, // 100 requests, 10 per second '/api/search': { capacity: 50, refillRate: 5 }, // 50 requests, 5 per second // Heavy operations - strict limits '/api/export': { capacity: 10, refillRate: 0.5 }, // 10 exports, 1 per 2 seconds '/api/generate-report': { capacity: 5, refillRate: 0.2 }, // 5 reports, 1 per 5 seconds '/api/bulk-upload': { capacity: 3, refillRate: 0.1 }, // 3 uploads, 1 per 10 seconds // Light operations - lenient limits '/api/health': { capacity: 1000, refillRate: 100 }, // 1000 requests, 100 per second '/api/ping': { capacity: 1000, refillRate: 100 } }; async function protectEndpoint(endpoint: string, userId: string) { // Get limits for this endpoint, or use defaults const limits = endpointLimits[endpoint] || { capacity: 50, refillRate: 5 }; // Use endpoint in identifier to separate limits per endpoint const result = await client.checkRateLimit({ identifier: `${userId}:${endpoint}`, ...limits }); return result; } // Usage in your API async function handleRequest(endpoint: string, userId: string) { const
result = await protectEndpoint(endpoint, userId); if (!result.allowed) { return { error: 'Rate limit exceeded for this endpoint', endpoint, retryAfter: result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60 }; } // Process request return { success: true }; } ``` ## Strategy 2: Adaptive Limits Based on Load Dynamically adjust rate limits based on system load: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'adaptive' }); // Simulate system monitoring interface SystemMetrics { cpuUsage: number; // 0-100 memoryUsage: number; // 0-100 requestQueue: number; // Number of queued requests } async function getSystemMetrics(): Promise<SystemMetrics> { // In production, get from your monitoring system // (Prometheus, CloudWatch, Datadog, etc.) return { cpuUsage: 45, memoryUsage: 60, requestQueue: 10 }; } function calculateAdaptiveLimits( baseCapacity: number, baseRefillRate: number, metrics: SystemMetrics ): { capacity: number; refillRate: number } { // Calculate overall system load (weighted average) const systemLoad = ( metrics.cpuUsage * 0.4 + metrics.memoryUsage * 0.4 + Math.min(metrics.requestQueue / 100, 1) * 100 * 0.2 ); // Reduce limits when system is under stress if (systemLoad > 80) { // Critical load - reduce to 30% of base return { capacity: Math.floor(baseCapacity * 0.3), refillRate: Math.floor(baseRefillRate * 0.3) }; } else if (systemLoad > 60) { // High load - reduce to 50% of base return { capacity: Math.floor(baseCapacity * 0.5), refillRate: Math.floor(baseRefillRate * 0.5) }; } else if (systemLoad > 40) { // Medium load - reduce to 75% of base return { capacity: Math.floor(baseCapacity * 0.75), refillRate: Math.floor(baseRefillRate * 0.75) }; } // Normal load - use base limits return { capacity: baseCapacity, refillRate: baseRefillRate }; } async function checkAdaptiveLimit(userId: string) { const metrics = await
getSystemMetrics(); const baseCapacity = 100; const baseRefillRate = 10; const limits = calculateAdaptiveLimits(baseCapacity, baseRefillRate, metrics); return await client.checkRateLimit({ identifier: userId, ...limits }); } ``` ## Strategy 3: User Tier-Based Limits Implement different limits for free, premium, and enterprise users: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'tiered' }); interface User { id: string; plan: 'free' | 'pro' | 'enterprise'; isAdmin?: boolean; customLimits?: { capacity: number; refillRate: number }; } // Base limits per tier const tierLimits = { free: { capacity: 100, refillRate: 10, endpoints: { '/api/search': { capacity: 50, refillRate: 5 }, '/api/export': { capacity: 1, refillRate: 0.1 } // 1 export per 10 seconds } }, pro: { capacity: 1000, refillRate: 100, endpoints: { '/api/search': { capacity: 500, refillRate: 50 }, '/api/export': { capacity: 10, refillRate: 1 } } }, enterprise: { capacity: 10000, refillRate: 1000, endpoints: { '/api/search': { capacity: 5000, refillRate: 500 }, '/api/export': { capacity: 100, refillRate: 10 } } } }; async function checkTierBasedLimit( user: User, endpoint?: string ) { // Admins bypass all rate limits if (user.isAdmin) { return { allowed: true, limit: Infinity, remaining: Infinity, tier: 'admin' }; } // Use custom limits if provided if (user.customLimits) { return await client.checkRateLimit({ identifier: user.id, ...user.customLimits }); } // Get tier configuration const tierConfig = tierLimits[user.plan]; // Check if endpoint has specific limits const endpointLimits = endpoint && tierConfig.endpoints[endpoint] ? 
tierConfig.endpoints[endpoint] : null; const limits = endpointLimits || { capacity: tierConfig.capacity, refillRate: tierConfig.refillRate }; const result = await client.checkRateLimit({ identifier: `${user.id}:${endpoint || 'default'}`, ...limits }); return { ...result, tier: user.plan }; } // Usage const freeUser: User = { id: 'user-1', plan: 'free' }; const proUser: User = { id: 'user-2', plan: 'pro' }; const enterpriseUser: User = { id: 'user-3', plan: 'enterprise' }; const freeResult = await checkTierBasedLimit(freeUser, '/api/export'); const proResult = await checkTierBasedLimit(proUser, '/api/search'); const enterpriseResult = await checkTierBasedLimit(enterpriseUser); ``` ## Strategy 4: Time-of-Day Based Limits Adjust limits based on time of day or day of week: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'time-based' }); interface TimeBasedConfig { peakHours: { capacity: number; refillRate: number }; offPeakHours: { capacity: number; refillRate: number }; weekend: { capacity: number; refillRate: number }; } const timeConfig: TimeBasedConfig = { peakHours: { capacity: 50, refillRate: 5 }, // Stricter during business hours offPeakHours: { capacity: 200, refillRate: 20 }, // More lenient during off-hours weekend: { capacity: 300, refillRate: 30 } // Most lenient on weekends }; function getTimeBasedLimits(): { capacity: number; refillRate: number } { const now = new Date(); const hour = now.getHours(); const day = now.getDay(); // 0 = Sunday, 6 = Saturday // Weekend if (day === 0 || day === 6) { return timeConfig.weekend; } // Peak hours: 9 AM - 5 PM (9-17) if (hour >= 9 && hour < 17) { return timeConfig.peakHours; } // Off-peak hours return timeConfig.offPeakHours; } async function checkTimeBasedLimit(userId: string) { const limits = getTimeBasedLimits(); return await client.checkRateLimit({ identifier: userId, 
...limits }); } ``` ## Strategy 5: Combined Strategies Combine multiple strategies for comprehensive rate limiting: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'combined' }); interface RequestContext { user: User; endpoint: string; ip: string; country?: string; } async function checkCombinedLimit(context: RequestContext) { // 1. Get base limits from user tier const tierConfig = tierLimits[context.user.plan]; let limits = { capacity: tierConfig.capacity, refillRate: tierConfig.refillRate }; // 2. Adjust for endpoint if (tierConfig.endpoints[context.endpoint]) { limits = tierConfig.endpoints[context.endpoint]; } // 3. Adjust for time of day const timeLimits = getTimeBasedLimits(); limits.capacity = Math.min(limits.capacity, timeLimits.capacity); limits.refillRate = Math.min(limits.refillRate, timeLimits.refillRate); // 4. Adjust for system load const metrics = await getSystemMetrics(); const adaptiveLimits = calculateAdaptiveLimits( limits.capacity, limits.refillRate, metrics ); // 5. Check rate limit const identifier = `${context.user.id}:${context.endpoint}:${context.country || 'unknown'}`; return await client.checkRateLimit({ identifier, ...adaptiveLimits, skip: context.user.isAdmin }); } ``` --- # Installation URL: https://limitly.xyz/docs/getting-started/installation x Description: Get started with Limitly in seconds. Install the SDK and you're ready to protect your APIs - no configuration needed. # Installation Install Limitly in seconds. No API keys or Redis setup required. 
## Prerequisites - Node.js 14+ - npm ## Install ```bash npm install limitly-sdk ``` ## Basic Usage **Recommended: Use your own Redis** ```typescript import { createClient } from 'limitly-sdk'; // Recommended for production: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit('user-123'); if (result.allowed) { console.log(`Allowed. ${result.remaining} remaining.`); } ``` **Without Redis URL (development/testing):** ```typescript // ⚠️ Shares hosted Redis - may collide with other users const client = createClient({ serviceId: 'my-app' }); ``` ## Configuration ### Bring Your Own Redis (Recommended) **Always use your own Redis for production to avoid tenant collisions:** ```typescript // Recommended: Full tenant isolation const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); ``` **Without Redis URL:** ```typescript // ⚠️ Shares hosted Redis - may collide with other users using same serviceId const client = createClient({ serviceId: 'my-api-service' }); ``` ## Framework Setup ### Next.js ```typescript // lib/rate-limit.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis export const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'nextjs-api' }); // app/api/route.ts export async function GET(request: Request) { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return Response.json({ error: 'Rate limit exceeded' }, { status: 429 }); } return Response.json({ success: true }); } ``` ### Express.js ```typescript // middleware/rate-limit.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'express-api' }); export async function rateLimitMiddleware(req, 
res, next) { const identifier = req.user?.id || req.ip || 'anonymous'; const result = await client.checkRateLimit(identifier); if (!result.allowed) { return res.status(429).json({ error: 'Rate limit exceeded' }); } next(); } ``` ## Troubleshooting **Connection errors**: Check internet connection and firewall settings **Timeout errors**: Increase timeout: `timeout: 10000` **Not working**: Verify identifier is passed and service ID is correct --- # Introduction URL: https://limitly.xyz/docs/getting-started/introduction x Description: Everything you need to know about Limitly. A TypeScript-first rate limiting SDK for Node.js and browsers with Redis-backed distributed rate limiting. # Introduction to Limitly Limitly is a **TypeScript-first rate limiting SDK** for Node.js and browsers. Redis-backed distributed rate limiting with zero configuration. ## What is Rate Limiting? Rate limiting controls traffic to prevent abuse, ensure fair usage, and maintain service availability. ## Key Features - **Zero Configuration**: Get started instantly with sensible defaults - **Redis-Backed**: Distributed rate limiting across multiple servers - **Bring Your Own Redis**: Recommended for production - full tenant isolation - **TypeScript First**: Full type safety and IDE autocomplete - **Flexible**: Customize limits per user, endpoint, or use case - **Production Ready**: Error handling, timeouts, and best practices built-in ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis for production const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app', }); const result = await client.checkRateLimit('user-123'); if (result.allowed) { console.log(`Remaining: ${result.remaining}`); } // Without redisUrl (development/testing only) // ⚠️ Shares hosted Redis - may collide with other users using same serviceId // const client = createClient({ serviceId: 'my-app' }); ``` ## Use Cases - **API Protection**: 
Protect REST/GraphQL APIs from abuse - **User Tiers**: Different limits for free/premium/enterprise users - **Endpoint Limits**: Custom limits per endpoint based on resource usage - **Bot Protection**: Prevent automated attacks with strict IP-based limits ## How It Works Limitly supports multiple rate limiting algorithms. Default is **Token Bucket**: **Token Bucket (Default):** - Each identifier gets a bucket with maximum capacity - Requests consume tokens; tokens refill at a constant rate - Allows smooth traffic with burst handling **Other Algorithms:** - **Sliding Window**: Smooth limits with better accuracy than fixed windows - **Fixed Window**: Simple, predictable time-based limits - **Leaky Bucket**: Traffic shaping and burst smoothing Choose the algorithm that fits your use case, or use different algorithms per endpoint. ## Getting Started 1. Install: `npm install limitly-sdk` 2. Use: `const client = createClient({ redisUrl: 'redis://localhost:6379' }); await client.checkRateLimit('user-123');` See [Installation](https://limitly.xyz/docs/getting-started/installation) for setup or [Quick Start](https://limitly.xyz/docs/getting-started/quick-start) for examples. --- # Quick Start URL: https://limitly.xyz/docs/getting-started/quick-start x Description: Get your first rate limiter up and running in just a few lines of code. Examples for Next.js, Express, and more. # Quick Start Get started with Limitly in minutes. ## Basic Example **Recommended: Use your own Redis** ```typescript import { createClient } from 'limitly-sdk'; // Recommended for production: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit('user-123'); if (result.allowed) { console.log(`Allowed. 
Remaining: ${result.remaining}`); } else { console.log('Rate limit exceeded!'); } ``` **Without Redis URL (development/testing):** ```typescript // ⚠️ Shares hosted Redis - may collide with other users const client = createClient({ serviceId: 'my-app' }); ``` ## Next.js ### App Router ```typescript // app/api/route.ts import { createClient } from 'limitly-sdk'; import { NextResponse } from 'next/server'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'nextjs-api' }); export async function GET(request: Request) { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return NextResponse.json({ error: 'Rate limit exceeded' }, { status: 429 }); } return NextResponse.json({ success: true }); } ``` ### Pages Router ```typescript // pages/api/route.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export default async function handler(req, res) { const userId = req.headers['x-user-id'] || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return res.status(429).json({ error: 'Rate limit exceeded' }); } res.status(200).json({ success: true }); } ``` ## Express.js ```typescript // middleware/rate-limit.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'express-api' }); export async function rateLimitMiddleware(req, res, next) { const identifier = req.user?.id || req.ip || 'anonymous'; const result = await client.checkRateLimit(identifier); if (!result.allowed) { return res.status(429).json({ error: 'Rate limit exceeded' }); } next(); } // app.ts app.use('/api', rateLimitMiddleware); ``` ## Other Frameworks ### 
Fastify ```typescript // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'fastify-api' }); fastify.addHook('onRequest', async (request, reply) => { const identifier = request.user?.id || request.ip || 'anonymous'; const result = await client.checkRateLimit(identifier); if (!result.allowed) { return reply.code(429).send({ error: 'Rate limit exceeded' }); } }); ``` ### Hono ```typescript // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'hono-api' }); app.use('*', async (c, next) => { const identifier = c.req.header('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(identifier); if (!result.allowed) { return c.json({ error: 'Rate limit exceeded' }, 429); } await next(); }); ``` ## Custom Limits ```typescript // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-api' }); // Per-endpoint limits const limits = { '/api/search': { capacity: 100, refillRate: 10 }, '/api/upload': { capacity: 10, refillRate: 1 } }; const result = await client.checkRateLimit({ identifier: userId, ...limits[endpoint] }); // User tier limits const tierLimits = { free: { capacity: 100, refillRate: 10 }, premium: { capacity: 1000, refillRate: 100 } }; await client.checkRateLimit({ identifier: user.id, ...tierLimits[user.plan] }); ``` ## Bring Your Own Redis (Recommended) **For production deployments, always use your own Redis URL for full tenant isolation:** ```typescript // Recommended for production const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); // All rate limit data stored in your Redis - no collisions const result = await client.checkRateLimit('user-123'); ``` **Why use your own Redis?** - ✅ **Full tenant isolation** - No collisions with other 
Limitly users - ✅ **Data privacy** - Your rate limit data stays in your Redis - ✅ **Better performance** - Direct Redis connection (no HTTP overhead) - ✅ **Production ready** - Recommended for all production deployments **Without `redisUrl`:** - ⚠️ Shares hosted Redis with other users - ⚠️ Potential collisions if multiple users use the same `serviceId` - ✅ Works out of the box (good for development/testing) --- # Error Handling URL: https://limitly.xyz/docs/guides/error-handling x Description: Gracefully handle rate limit errors, Redis connection issues, and edge cases in your application. # Error Handling Proper error handling is crucial for production applications. This guide covers how to handle rate limit errors, Redis connection issues, and edge cases gracefully. ## Basic Error Handling Handle rate limit errors in your API: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function handleRequest(userId: string) { try { const result = await client.checkRateLimit(userId); if (!result.allowed) { // Calculate retry after seconds const retryAfter = result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60; return { error: 'Rate limit exceeded', message: 'Too many requests. Please try again later.', retryAfter, resetAt: result.reset ? new Date(result.reset).toISOString() : undefined }; } // Request allowed return { success: true, rateLimit: { remaining: result.remaining, limit: result.limit, resetAt: result.reset ? new Date(result.reset).toISOString() : undefined } }; } catch (error) { // Handle Redis connection errors, timeouts, etc. 
console.error('Rate limit check failed:', error); // Fail open - allow request if rate limiting fails // In production, you might want to log and alert return { success: true, rateLimitError: true, warning: 'Rate limiting temporarily unavailable' }; } } ``` ## Fail Open vs Fail Closed ### Fail Open (Recommended) Allow requests when rate limiting fails: ```typescript async function checkLimitSafely(userId: string) { try { const result = await client.checkRateLimit(userId); return result; } catch (error) { // Log the error for monitoring console.error('Rate limit service unavailable:', error); // Fail open - allow the request return { allowed: true, error: 'Rate limit service unavailable' }; } } ``` **When to use**: Most production scenarios where availability is more important than strict rate limiting. ### Fail Closed Reject requests when rate limiting fails: ```typescript async function checkLimitStrict(userId: string) { try { const result = await client.checkRateLimit(userId); return result; } catch (error) { // Log the error console.error('Rate limit service unavailable:', error); // Fail closed - reject the request throw new Error('Rate limiting service unavailable. Please try again later.'); } } ``` **When to use**: When strict rate limiting is critical and you prefer to reject requests rather than allow them. ## Error Types ### Redis Connection Errors Handle Redis connection failures: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function handleRequest(userId: string) { try { const result = await client.checkRateLimit(userId); return result; } catch (error) { if (error instanceof Error) { // Check for connection errors if (error.message.includes('ECONNREFUSED')) { console.error('Redis connection refused. 
Is Redis running?'); // Fail open or alert } else if (error.message.includes('ETIMEDOUT')) { console.error('Redis connection timeout'); // Fail open or retry } else if (error.message.includes('NOAUTH')) { console.error('Redis authentication failed'); // Check credentials } } // Default: fail open return { allowed: true, error: 'Rate limit unavailable' }; } } ``` ### Timeout Errors Handle timeout errors: ```typescript import { createClient } from 'limitly-sdk'; // Set a shorter timeout for faster failure detection const client = createClient({ serviceId: 'my-api', timeout: 2000 // 2 seconds }); async function checkWithTimeout(userId: string) { try { const result = await client.checkRateLimit({ identifier: userId }); return result; } catch (error) { if (error instanceof Error && error.message.includes('timeout')) { console.warn('Rate limit check timed out'); // Fail open or use cached result return { allowed: true, timeout: true }; } throw error; } } ``` ## Retry Logic Implement retry logic for transient errors: ```typescript async function checkLimitWithRetry( userId: string, maxRetries: number = 3 ) { let lastError: Error | null = null; for (let attempt = 1; attempt <= maxRetries; attempt++) { try { const result = await client.checkRateLimit(userId); return result; } catch (error) { lastError = error instanceof Error ?
error : new Error(String(error)); // Don't retry on certain errors if (lastError.message.includes('NOAUTH')) { throw lastError; // Don't retry auth errors } // Exponential backoff if (attempt < maxRetries) { const delay = Math.pow(2, attempt) * 100; // 200ms, 400ms, 800ms await new Promise(resolve => setTimeout(resolve, delay)); } } } // All retries failed - fail open console.error('Rate limit check failed after retries:', lastError); return { allowed: true, error: 'Rate limit unavailable' }; } ``` ## Circuit Breaker Pattern Implement a circuit breaker to prevent cascading failures: ```typescript class RateLimitCircuitBreaker { private failures = 0; private lastFailureTime = 0; private state: 'closed' | 'open' | 'half-open' = 'closed'; private readonly threshold = 5; private readonly timeout = 60000; // 1 minute async check(userId: string): Promise { if (this.state === 'open') { if (Date.now() - this.lastFailureTime > this.timeout) { this.state = 'half-open'; } else { // Circuit is open - fail open return { allowed: true, circuitOpen: true }; } } try { const result = await client.checkRateLimit(userId); // Success - reset failures if (this.state === 'half-open') { this.state = 'closed'; this.failures = 0; } return result; } catch (error) { this.failures++; this.lastFailureTime = Date.now(); if (this.failures >= this.threshold) { this.state = 'open'; } // Fail open return { allowed: true, error: 'Rate limit unavailable' }; } } } // Usage const circuitBreaker = new RateLimitCircuitBreaker(); const result = await circuitBreaker.check('user-123'); ``` ## Logging and Monitoring Log errors for monitoring and alerting: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function checkLimitWithLogging(userId: string) { try { const result = await client.checkRateLimit(userId); // Log rate limit violations if 
(!result.allowed) { console.warn('Rate limit exceeded', { userId, limit: result.limit, remaining: result.remaining, reset: result.reset }); } return result; } catch (error) { // Log errors for monitoring console.error('Rate limit error', { userId, error: error instanceof Error ? error.message : String(error), stack: error instanceof Error ? error.stack : undefined }); // Send to monitoring service (e.g., Sentry, Datadog) // monitor.captureException(error); // Fail open return { allowed: true, error: 'Rate limit unavailable' }; } } ``` ## HTTP Error Responses Return proper HTTP error responses: ```typescript // Next.js App Router import { createClient } from 'limitly-sdk'; import { NextResponse } from 'next/server'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export async function GET(request: Request) { try { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); const headers = new Headers(); if (result.limit) headers.set('X-RateLimit-Limit', result.limit.toString()); if (result.remaining !== undefined) { headers.set('X-RateLimit-Remaining', result.remaining.toString()); } if (!result.allowed) { const retryAfter = result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60; headers.set('Retry-After', retryAfter.toString()); return NextResponse.json( { error: 'Rate limit exceeded', message: 'Too many requests. Please try again later.', retryAfter }, { status: 429, headers } ); } return NextResponse.json({ success: true }, { headers }); } catch (error) { // Log error console.error('Rate limit check failed:', error); // Return 503 Service Unavailable return NextResponse.json( { error: 'Service temporarily unavailable', message: 'Rate limiting service is currently unavailable. Please try again later.' }, { status: 503 } ); } } ``` ## Best Practices 1. 
**Always use try-catch**: Wrap rate limit checks in try-catch blocks 2. **Fail open in production**: Allow requests when rate limiting fails 3. **Log errors**: Log all errors for monitoring and debugging 4. **Set timeouts**: Use appropriate timeouts to prevent hanging requests 5. **Monitor**: Set up alerts for rate limit service failures 6. **Graceful degradation**: Have fallback behavior when rate limiting is unavailable --- # Express.js Integration URL: https://limitly.xyz/docs/guides/express x Description: Protect your Express routes with Limitly. Learn how to create middleware, handle errors, and set proper HTTP headers. # Express.js Integration ## Basic Middleware ```typescript // middleware/rate-limit.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'express-api' }); export async function rateLimitMiddleware(req, res, next) { const identifier = req.user?.id || req.ip || 'anonymous'; const result = await client.checkRateLimit(identifier); if (!result.allowed) { return res.status(429).json({ error: 'Rate limit exceeded' }); } next(); } ``` ## Usage ```typescript // app.ts import { rateLimitMiddleware } from './middleware/rate-limit'; app.use('/api', rateLimitMiddleware); // or app.get('/api/data', rateLimitMiddleware, (req, res) => { res.json({ data: 'Protected data' }); }); ``` ## Per-Route Limits ```typescript function createRateLimitMiddleware(capacity: number, refillRate: number) { const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'express-api' }); return async (req, res, next) => { const result = await client.checkRateLimit({ identifier: req.user?.id || req.ip, capacity, refillRate }); if (!result.allowed) return res.status(429).json({ error: 'Rate limit exceeded' }); next(); }; } export const strictRateLimit = createRateLimitMiddleware(10, 1); export const 
moderateRateLimit = createRateLimitMiddleware(100, 10); // Usage app.post('/api/login', strictRateLimit, handler); app.get('/api/data', moderateRateLimit, handler); ``` ## User-Based Rate Limiting Rate limit based on authenticated users: ```typescript // middleware/user-rate-limit.ts import { createClient } from 'limitly-sdk'; import type { Request, Response, NextFunction } from 'express'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'user-api' }); interface AuthenticatedRequest extends Request { user?: { id: string; plan: 'free' | 'pro' | 'enterprise'; }; } export async function userRateLimitMiddleware( req: AuthenticatedRequest, res: Response, next: NextFunction ) { // Require authentication if (!req.user) { return res.status(401).json({ error: 'Unauthorized' }); } // Different limits based on user plan const limits = { free: { capacity: 100, refillRate: 10 }, pro: { capacity: 1000, refillRate: 100 }, enterprise: { capacity: 10000, refillRate: 1000 } }; const userLimits = limits[req.user.plan] || limits.free; const result = await client.checkRateLimit({ identifier: req.user.id, ...userLimits }); // Set headers if (result.limit) res.setHeader('X-RateLimit-Limit', result.limit.toString()); if (result.remaining !== undefined) { res.setHeader('X-RateLimit-Remaining', result.remaining.toString()); } if (!result.allowed) { const retryAfter = result.reset ? 
Math.ceil((result.reset - Date.now()) / 1000) : 60; res.setHeader('Retry-After', retryAfter.toString()); return res.status(429).json({ error: 'Rate limit exceeded', message: `You have exceeded your ${req.user.plan} plan limits`, retryAfter }); } next(); } ``` ## Error Handling Handle rate limit errors gracefully: ```typescript // middleware/rate-limit.ts import { createClient } from 'limitly-sdk'; import type { Request, Response, NextFunction } from 'express'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'express-api' }); export async function rateLimitMiddleware( req: Request, res: Response, next: NextFunction ) { try { const identifier = (req as any).user?.id || req.ip || 'anonymous'; const result = await client.checkRateLimit(identifier); // Set headers if (result.limit) res.setHeader('X-RateLimit-Limit', result.limit.toString()); if (result.remaining !== undefined) { res.setHeader('X-RateLimit-Remaining', result.remaining.toString()); } if (result.reset) { res.setHeader('X-RateLimit-Reset', Math.ceil(result.reset / 1000).toString()); } if (!result.allowed) { const retryAfter = result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60; res.setHeader('Retry-After', retryAfter.toString()); return res.status(429).json({ error: 'Rate limit exceeded', message: 'Too many requests, please try again later', retryAfter, resetAt: result.reset ? new Date(result.reset).toISOString() : undefined }); } next(); } catch (error) { // Handle Redis connection errors, timeouts, etc. 
console.error('Rate limit check failed:', error); // Fail open - allow request if rate limiting fails // In production, you might want to log this and alert next(); } } ``` ## Router-Level Application Apply rate limiting to Express routers: ```typescript // routes/api.ts import { Router } from 'express'; import { rateLimitMiddleware } from '../middleware/rate-limit'; const router = Router(); // Apply to all routes in this router router.use(rateLimitMiddleware); router.get('/data', (req, res) => { res.json({ data: 'Protected data' }); }); router.post('/submit', (req, res) => { res.json({ success: true }); }); export default router; ``` ## Conditional Limiting ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'express-api' }); export async function conditionalRateLimitMiddleware(req, res, next) { if (req.user?.isAdmin) return next(); if (req.ip?.startsWith('192.168.')) return next(); const result = await client.checkRateLimit(req.user?.id || req.ip); if (!result.allowed) return res.status(429).json({ error: 'Rate limit exceeded' }); next(); } ``` ## Best Practices - Use service IDs for isolation - Handle errors gracefully (fail open) - Use authenticated user IDs when available - Apply different limits to different routes --- # Next.js Integration URL: https://limitly.xyz/docs/guides/nextjs x Description: Integrate rate limiting into your Next.js API routes. Works with both App Router and Pages Router. 
# Next.js Integration ## App Router ```typescript // app/api/route.ts import { createClient } from 'limitly-sdk'; import { NextResponse } from 'next/server'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export async function GET(request: Request) { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return NextResponse.json({ error: 'Rate limit exceeded' }, { status: 429 }); } return NextResponse.json({ success: true }); } ``` ## Reusable Helper ```typescript // lib/rate-limit.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL, serviceId: 'nextjs-api' }); export async function withRateLimit(request: Request, handler: Function) { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return NextResponse.json({ error: 'Rate limit exceeded' }, { status: 429 }); } return handler(request); } ``` Usage: ```typescript export async function GET(request: Request) { return withRateLimit(request, async () => { return NextResponse.json({ data: 'Protected data' }); }); } ``` ## Per-Route Limits ```typescript // lib/rate-limit.ts import { createClient } from 'limitly-sdk'; import { NextResponse } from 'next/server'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export async function checkRouteLimit( request: Request, route: string, capacity: number, refillRate: number ): Promise { const userId = request.headers.get('x-user-id') || request.headers.get('x-forwarded-for')?.split(',')[0] || 'anonymous'; const result = await client.checkRateLimit({ identifier: `${userId}:${route}`, capacity, refillRate }); const 
headers = new Headers(); if (result.limit) headers.set('X-RateLimit-Limit', result.limit.toString()); if (result.remaining !== undefined) { headers.set('X-RateLimit-Remaining', result.remaining.toString()); } if (!result.allowed) { const retryAfter = result.reset ? Math.ceil((result.reset - Date.now()) / 1000) : 60; headers.set('Retry-After', retryAfter.toString()); return NextResponse.json( { error: 'Rate limit exceeded', retryAfter }, { status: 429, headers } ); } return null; // Rate limit passed } ``` Use in routes: ```typescript // app/api/login/route.ts import { checkRouteLimit } from '@/lib/rate-limit'; import { NextResponse } from 'next/server'; export async function POST(request: Request) { // Strict limits for login const rateLimitResponse = await checkRouteLimit(request, '/api/login', 5, 0.1); if (rateLimitResponse) return rateLimitResponse; // Login logic return NextResponse.json({ success: true }); } ``` ## Pages Router ```typescript // pages/api/route.ts import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export default async function handler(req, res) { const userId = req.headers['x-user-id'] || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return res.status(429).json({ error: 'Rate limit exceeded' }); } res.status(200).json({ success: true }); } ``` ## Server Actions ```typescript 'use server'; import { createClient } from 'limitly-sdk'; import { headers } from 'next/headers'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'server-actions' }); export async function protectedAction(data: FormData) { const userId = headers().get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) throw new Error('Rate limit exceeded'); return { 
success: true }; } ``` ## Best Practices - Use environment variables for configuration - Handle errors gracefully - Use authenticated user IDs when available --- # TypeScript Support URL: https://limitly.xyz/docs/guides/typescript x Description: Limitly is fully typed with TypeScript. Get complete type safety, IDE autocomplete, and build type-safe wrappers. # TypeScript Support Limitly is built with TypeScript and provides complete type definitions out of the box. This means you get full type safety, excellent IDE autocomplete, and the ability to build type-safe wrappers around Limitly. ## Basic Type Safety All functions and methods are fully typed: ```typescript import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); // TypeScript knows the return type const result = await client.checkRateLimit('user-123'); // result: LimitlyResponse // Type-safe property access if (result.allowed) { console.log(result.remaining); // TypeScript knows remaining exists } ``` ## Importing Types Import types for use in your own code: ```typescript import type { LimitlyConfig, LimitlyResponse, RateLimitOptions, LimitlyClient } from 'limitly-sdk'; ``` ## Typed Configuration Create type-safe configuration: ```typescript import type { LimitlyConfig } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; // Fully typed configuration const config: LimitlyConfig = { serviceId: 'my-app', timeout: 5000 }; const client = createClient(config); ``` ## Typed Responses Work with typed responses: ```typescript import type { LimitlyResponse } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); async function handleRequest(userId: string): Promise { const result: LimitlyResponse = await 
client.checkRateLimit(userId); if (!result.allowed) { // TypeScript knows reset might exist if (result.reset) { const resetDate = new Date(result.reset); console.log(`Reset at: ${resetDate.toISOString()}`); } } return result; } ``` ## Type Guards Create type guards for better type narrowing: ```typescript import type { LimitlyResponse } from 'limitly-sdk'; function isRateLimited( response: LimitlyResponse ): response is LimitlyResponse & { allowed: false } { return !response.allowed; } function isAllowed( response: LimitlyResponse ): response is LimitlyResponse & { allowed: true; remaining: number } { return response.allowed && response.remaining !== undefined; } // Usage const result = await checkLimit('user-123'); if (isRateLimited(result)) { // TypeScript knows result.allowed is false console.log('Rate limited:', result.message); } else if (isAllowed(result)) { // TypeScript knows result.allowed is true and remaining exists console.log('Allowed. Remaining:', result.remaining); } ``` ## Typed Wrappers Build type-safe wrappers around Limitly: ```typescript import type { LimitlyResponse, RateLimitOptions } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; interface ProtectedRouteOptions extends RateLimitOptions { userId: string; endpoint?: string; } async function protectedRoute( options: ProtectedRouteOptions ): Promise { const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'api' }); return client.checkRateLimit({ identifier: options.endpoint ? 
`${options.userId}:${options.endpoint}` : options.userId, capacity: options.capacity, refillRate: options.refillRate, skip: options.skip }); } // Usage with full type safety const result = await protectedRoute({ userId: 'user-123', endpoint: '/api/data', capacity: 100, refillRate: 10 }); ``` ## Generic Helpers Create generic helper functions: ```typescript import type { LimitlyResponse } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; type RateLimitHandler = (result: LimitlyResponse) => T; async function withRateLimit( identifier: string, onAllowed: RateLimitHandler, onRateLimited: RateLimitHandler ): Promise { const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit(identifier); return result.allowed ? onAllowed(result) : onRateLimited(result); } // Usage const response = await withRateLimit( 'user-123', (result) => ({ success: true, remaining: result.remaining! }), (result) => ({ success: false, error: 'Rate limited', retryAfter: result.reset ? 
Math.ceil((result.reset - Date.now()) / 1000) : 60 }) ); ``` ## Framework Integration Types Type-safe integration with frameworks: ```typescript // Next.js App Router import type { LimitlyResponse } from 'limitly-sdk'; import { createClient } from 'limitly-sdk'; import { NextResponse } from 'next/server'; // Recommended: Use your own Redis const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'nextjs-api' }); export async function GET(request: Request): Promise> { const userId = request.headers.get('x-user-id') || 'anonymous'; const result = await client.checkRateLimit(userId); if (!result.allowed) { return NextResponse.json( { error: 'Rate limit exceeded' }, { status: 429 } ); } return NextResponse.json(result); } ``` ## Strict Type Checking Enable strict TypeScript settings for maximum type safety: ```json // tsconfig.json { "compilerOptions": { "strict": true, "noUncheckedIndexedAccess": true, "exactOptionalPropertyTypes": true } } ``` With strict mode, TypeScript will catch potential issues: ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit('user-123'); // TypeScript error if strict: true // Property 'remaining' may be undefined console.log(result.remaining.toString()); // Correct way if (result.remaining !== undefined) { console.log(result.remaining.toString()); } ``` ## Type Assertions Use type assertions when you're certain about types: ```typescript const client = createClient({ redisUrl: process.env.REDIS_URL || 'redis://localhost:6379', serviceId: 'my-app' }); const result = await client.checkRateLimit('user-123'); // Type assertion when you know remaining exists if (result.allowed && result.remaining !== undefined) { const remaining: number = result.remaining; console.log(remaining); } ``` ## Utility Types Create utility types for common patterns: ```typescript import type { 
LimitlyResponse } from 'limitly-sdk'; // Extract only the required fields type RateLimitInfo = Pick; // Make all fields required type RequiredRateLimitResponse = Required; // Create a response with guaranteed fields interface GuaranteedResponse { allowed: boolean; remaining: number; limit: number; reset: number; } function toGuaranteedResponse( response: LimitlyResponse ): GuaranteedResponse { return { allowed: response.allowed, remaining: response.remaining ?? 0, limit: response.limit ?? 100, reset: response.reset ?? Date.now() + 60000 }; } ``` --- End of Limitly documentation. All links above point to the live docs at https://limitly.xyz/docs.