Usage Rate Limits

To ensure the stability and performance of the Conecto API, rate limiting protects our services from unexpected or excessive third-party usage.

Overview

Rate limiting is a critical mechanism for maintaining API performance and availability. By setting reasonable limits on request frequency, we ensure fair resource allocation across all integrations and prevent system overload.

Why Rate Limiting?

  • Service Stability - Prevents individual integrations from overwhelming the API
  • Fair Usage - Ensures all partners have equitable access to API resources
  • Protection - Guards against accidental infinite loops or misconfigured clients
  • Performance - Maintains consistent response times for all users

The Conecto API implements rate limiting on all public endpoints with clear, predictable limits based on endpoint type and request patterns.


Rate Limiting Strategy

Rate Limit Criteria

Rate limits are applied based on three factors:

  • Client Identifier
    Determined by either:

    • Header x-access-key: <CONECTO-CLIENT-ID> for HMAC authentication
    • Authorization: Bearer <JWT-TOKEN>, which resolves to a Conecto Client ID

  • Request Method
    The HTTP method used for the request (GET, POST, PATCH, DELETE, etc.)

  • Resource Path
    The specific API endpoint path being accessed

    Examples: /pos/v2/lookup/tickets, /pos/v2/{locationId}/taxes
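
Both identification options translate into a single request header. Below is a minimal sketch, assuming a hypothetical base URL and placeholder credentials; any HMAC signature headers required alongside x-access-key authentication are omitted for brevity.

JavaScript
// Hypothetical base URL and credentials, for illustration only.
const BASE_URL = 'https://api.example.com';

// Option 1: client identified via the x-access-key header (HMAC authentication;
// the accompanying signature headers are omitted here).
async function listTaxesWithAccessKey(locationId, clientId) {
  return fetch(`${BASE_URL}/pos/v2/${locationId}/taxes`, {
    method: 'GET',
    headers: { 'x-access-key': clientId },
  });
}

// Option 2: client identified via a JWT bearer token that resolves to a Conecto Client ID.
async function listTaxesWithBearerToken(locationId, jwt) {
  return fetch(`${BASE_URL}/pos/v2/${locationId}/taxes`, {
    method: 'GET',
    headers: { Authorization: `Bearer ${jwt}` },
  });
}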

Window Period

Rate limits are calculated over rolling time windows:

  • Default window: 60 minutes (1 hour)
  • When limits are reached, you must wait until the window resets
  • Each request decrements your remaining quota
  • Quota replenishes as time moves forward in the rolling window
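
Because the window rolls rather than resetting on a fixed schedule, it can help to mirror it client-side. The sketch below is an illustrative local guard only; the window length and limit are assumptions, and the server's X-RateLimit-* headers remain the authoritative count.

JavaScript
// Illustrative client-side mirror of a 60-minute rolling window.
const WINDOW_MS = 60 * 60 * 1000;

class RollingWindowThrottle {
  constructor(limit) {
    this.limit = limit;
    this.timestamps = []; // send times of requests inside the current window
  }

  // How many requests can still be sent right now.
  remaining() {
    const cutoff = Date.now() - WINDOW_MS;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    return this.limit - this.timestamps.length;
  }

  // Record a request, or report how long to wait if the local quota is spent.
  tryAcquire() {
    if (this.remaining() <= 0) {
      const waitMs = this.timestamps[0] + WINDOW_MS - Date.now();
      return { ok: false, waitMs };
    }
    this.timestamps.push(Date.now());
    return { ok: true, waitMs: 0 };
  }
}

// Usage: guard an endpoint under the default 100-requests-per-hour limit.
const throttle = new RollingWindowThrottle(100);
const { ok, waitMs } = throttle.tryAcquire();
if (!ok) console.warn(`Local quota spent; retry in ${Math.ceil(waitMs / 1000)}s`);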

Rate Limit Rules

Endpoint-Specific Limits

Different endpoints have different rate limits based on their typical usage patterns and system impact:

  • Online Ordering Endpoints
    Limit: 500 requests per hour per resource
    Applies to: Online Ordering Order V1 and V2 endpoints
    Use case: High-frequency order processing and status updates

  • POS Checks Endpoints
    Limit: 500 requests per hour
    Applies to: /pos/v2/{locationId}/checks and related check operations
    Use case: Point-of-sale transaction lookups and updates

  • Ticket Lookup
    Limit: 500 requests per hour
    Applies to: POST /pos/v2/lookup/tickets
    Use case: Batch ticket retrieval and synchronization

  • All Other Endpoints
    Limit: 100 requests per 60-minute window
    Applies to: Any endpoint not specifically listed above
    Use case: General API operations and data retrieval

Rate Limit Headers

Every API response includes rate limit information in the headers:

  • X-RateLimit-Limit
    The maximum number of requests allowed in the current window

  • X-RateLimit-Remaining
    The number of requests remaining in the current window

  • Retry-After
    The number of seconds to wait before retrying (included when the rate limit is exceeded)
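
These headers can be read directly from a response object. A small sketch, assuming fetch and an illustrative helper name:

JavaScript
// Illustrative helper: extract the rate limit headers from a fetch Response.
function readRateLimitInfo(response) {
  const toNumber = (value) => (value === null ? null : Number(value));
  return {
    limit: toNumber(response.headers.get('X-RateLimit-Limit')),
    remaining: toNumber(response.headers.get('X-RateLimit-Remaining')),
    // Retry-After is only present once the rate limit has been exceeded.
    retryAfterSeconds: toNumber(response.headers.get('Retry-After')),
  };
}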


Exceeding Rate Limits

Response When Limited

When you exceed the rate limit, the API responds with a 429 Too Many Requests status code:

Status Code: 429 Too Many Requests

Headers:

  • X-RateLimit-Limit: Your rate limit threshold
  • X-RateLimit-Remaining: 0
  • Retry-After: Seconds until you can retry

The response body provides details about the limit that was exceeded.

Rate Limit Response

{
  "status": 429,
  "timestamp": "2023-11-22T10:34:39.646Z",
  "message": "Too Many Requests, exceeded your 500 requests per hour limit.",
  "details": []
}

Handling Rate Limits

When you receive a 429 response, your application should:

  1. Stop Making Requests - Immediately cease API calls to that endpoint
  2. Read Retry-After Header - Check how long to wait before retrying
  3. Implement Backoff - Wait the specified time before making another request
  4. Queue Requests - Buffer additional requests until rate limit resets
  5. Log the Event - Track rate limit hits for capacity planning
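
A minimal sketch combining steps 2 through 4: a serial queue that buffers requests and pauses whenever a 429 arrives. The class and its names are illustrative, not part of the Conecto API.

JavaScript
// Illustrative request queue: buffers calls and pauses when a 429 is received.
class RateLimitedQueue {
  constructor() {
    this.pending = [];   // buffered jobs: { url, options, resolve, reject }
    this.draining = false;
  }

  enqueue(url, options) {
    return new Promise((resolve, reject) => {
      this.pending.push({ url, options, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.draining) return;
    this.draining = true;

    while (this.pending.length > 0) {
      const job = this.pending[0];

      let response;
      try {
        response = await fetch(job.url, job.options);
      } catch (error) {
        this.pending.shift();
        job.reject(error);
        continue;
      }

      if (response.status === 429) {
        // Steps 2-3: read Retry-After and wait before sending anything else.
        const waitSeconds = Number(response.headers.get('Retry-After')) || 60;
        console.warn(`Rate limited; pausing queue for ${waitSeconds}s`);
        await new Promise((resolve) => setTimeout(resolve, waitSeconds * 1000));
        continue; // retry the same job once the wait is over
      }

      this.pending.shift();
      job.resolve(response);
    }

    this.draining = false;
  }
}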

Best Practices

  • Monitor Your Usage
    Check X-RateLimit-Remaining headers on every response to track your quota consumption. Alert when approaching limits.

  • Implement Exponential Backoff
    When rate limited, wait longer between retries using exponential backoff with jitter to avoid thundering herd problems.

  • Respect Retry-After
    Always honor the Retry-After header value. Ignoring it may result in longer cooldown periods or temporary blocks.

  • Batch Operations
    Where possible, batch multiple operations into single API calls to reduce request count. Check endpoint documentation for batch support.

  • Cache Responses
    Implement client-side caching for data that doesn't change frequently to reduce API call volume.

  • Request Only What You Need
    Use query parameters to filter and paginate results, reducing both response size and the need for multiple requests.

  • Distribute Load
    For high-volume integrations, distribute requests evenly throughout the rate limit window rather than bursting at the start.

  • Plan for Growth
    Monitor your rate limit usage trends. If you consistently approach limits, contact Shift4 support to discuss increased quotas.

  • Implement Circuit Breakers
    Use circuit breaker patterns to temporarily halt requests when multiple rate limit errors occur, preventing cascading failures.

  • Use Webhooks
    Where available, use webhooks/subscriptions instead of polling to reduce API call volume for real-time updates.
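
The exponential backoff recommended above can be sketched as follows; the base delay, cap, and attempt count are assumptions, and the helper still prefers the server's Retry-After value when present.

JavaScript
// Illustrative exponential backoff with full jitter.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  const exponential = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * exponential; // "full jitter": random delay up to the cap
}

async function requestWithBackoff(url, options, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Honor Retry-After when the server provides it; otherwise back off with jitter.
    const retryAfter = Number(response.headers.get('Retry-After'));
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : backoffDelayMs(attempt);

    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limit still exceeded after maximum retry attempts');
}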


Rate Limit Error Handling

Example Implementation

Rate Limit Handler

JavaScript
async function makeApiRequest(url, options) {
  try {
    const response = await fetch(url, options);

    // Check rate limit headers
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const limit = response.headers.get('X-RateLimit-Limit');

    // Warn when approaching limit
    if (remaining && parseInt(remaining) < parseInt(limit) * 0.1) {
      console.warn(`Approaching rate limit: ${remaining}/${limit} remaining`);
    }

    // Handle rate limit exceeded
    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');
      const waitSeconds = parseInt(retryAfter) || 60;

      console.error(`Rate limit exceeded. Retrying after ${waitSeconds}s`);

      // Wait and retry
      await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
      return makeApiRequest(url, options);
    }

    return response;
  } catch (error) {
    console.error('API request failed:', error);
    throw error;
  }
}

Need Higher Limits?

If your integration requires higher rate limits:

  1. Analyze Your Usage - Document your current usage patterns and projected needs
  2. Optimize First - Ensure you're using batching, caching, and webhooks effectively
  3. Contact Support - Reach out to Shift4 integration support with your use case
  4. Provide Metrics - Share data on request volumes, peak usage times, and business requirements

We work with partners to accommodate legitimate high-volume use cases while maintaining service stability for all users.
