
API Provider Overview

This feature is only available when the paywall is in client mode and has tokenization enabled. Learn more about paywall tokenization.

Learn about API providers and how they enable integration of external APIs with your tokenized paywalls.

What is an API Provider?

An API provider is a configuration that connects your paywall to an external API endpoint. When a user makes a request through the API provider, the system (illustrated in the sketch after this list):

  1. Checks authorization - Verifies the user is authenticated and authorized
  2. Checks token balance - Verifies the user has sufficient tokens
  3. Forwards request - Sends the request to the target API endpoint
  4. Deducts tokens - Charges tokens based on the configured query price level
  5. Returns response - Passes the API response back to the user
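
As a minimal sketch of this flow from the client's perspective (the provider ID and request body are placeholders, and the example assumes the user is already authenticated with the paywall, however your setup attaches credentials):

// One call to the gateway triggers all five steps: authorization check,
// token balance check, forwarding, token deduction, and response passthrough.
const response = await fetch('https://onlineapp.pro/api/v1/api-gateway/your-provider-id', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ /* request data forwarded to the target API */ })
});

if (!response.ok) {
  // e.g. the user is not authorized or the token balance is insufficient
  console.error('Gateway rejected the request:', response.status);
} else {
  const data = await response.json(); // the target API's response, passed back by the gateway
}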

Key Features

API providers allow you to:

  • Connect external APIs - Integrate third-party services into your monetization flow
  • Control access - Manage user permissions based on token availability
  • Pass dynamic parameters - Supply custom headers, body fields, and URL paths at request time
  • Bill automatically - Charge users based on API usage and query complexity

How It Works

Configuration Examples

OpenAI API Integration

Basic OpenAI API Provider:

  • Name: OpenAI GPT-4 API
  • Query Price Level: Advanced (expensive AI operations)
  • Endpoint URL: https://api.openai.com/v1/chat/completions
  • Method: POST
  • Headers:
    { "Authorization": "Bearer sk-your-openai-key", "Content-Type": "application/json" }
  • Body:
    { "model": "gpt-4", "max_tokens": 150 }
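
For illustration, here is a hedged sketch of calling this provider through the gateway; the provider ID is a placeholder, and it assumes the base body configured above (model, max_tokens) is merged with the fields the user sends, as described under Dynamic Parameters below.

// Hypothetical provider ID - use the ID from your provider settings.
// "model" and "max_tokens" come from the base body above, so the client
// only sends the fields that change per request.
const response = await fetch('https://onlineapp.pro/api/v1/api-gateway/your-openai-provider-id', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Summarize this article in one sentence.' }]
  })
});
const completion = await response.json();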

Translation Service

Google Translate API Provider:

  • Name: Google Translate API
  • Query Price Level: Standard (moderate cost operation)
  • Endpoint URL: https://translation.googleapis.com/language/translate/v2
  • Method: POST
  • Headers:
    { "Authorization": "Bearer your-google-token", "Content-Type": "application/json" }
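
A hedged sketch of a call to this provider; the provider ID is a placeholder, and the q and target fields follow the Google Translate v2 request format, which the gateway forwards in the request body.

// Hypothetical provider ID; "q" and "target" are standard Translate v2 body fields.
const response = await fetch('https://onlineapp.pro/api/v1/api-gateway/your-translate-provider-id', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ q: 'Hello, world!', target: 'es' })
});
const result = await response.json();
// Translate v2 responses expose result.data.translations[0].translatedText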

Image Processing

AI Image Enhancement API Provider:

  • Name: AI Image Enhancement
  • Query Price Level: Advanced 2 (high-cost GPU processing)
  • Endpoint URL: https://api.imageenhance.com/v1/enhance
  • Method: POST
  • Headers:
    { "API-Key": "your-api-key", "Content-Type": "multipart/form-data" }
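
A hedged sketch of an upload through this provider; it assumes the gateway forwards multipart form data to the target API unchanged, and the image field name is illustrative rather than part of any documented contract.

// Browser example: let fetch set the multipart boundary automatically.
// "image" is an illustrative field name - use whatever the target API expects.
const form = new FormData();
form.append('image', fileInput.files[0]); // fileInput is an <input type="file"> element

const response = await fetch('https://onlineapp.pro/api/v1/api-gateway/your-image-provider-id', {
  method: 'POST',
  body: form
});
const enhanced = await response.blob(); // e.g. the enhanced image bytes returned by the target API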

Dynamic Parameters

API providers support dynamic parameters that users can pass at runtime:

Request Headers

Users can add or override headers:

// Additional headers passed by the user
{
  "Custom-Header": "user-specific-value",
  "Authorization": "Bearer user-token"
}

Request Body

Users can extend or modify the request body:

// The user extends the base body configuration
{
  "messages": [{ "role": "user", "content": "Hello" }],
  "temperature": 0.7
}

URL Pathname

Users can append additional path segments to the API provider’s base URL. This allows accessing different endpoints of the same API.

How it works

When you configure an API provider, you specify a base URL. Users can then add additional paths to it when making requests.

Example API provider configuration:

  • Base URL: https://api.example.com/v1/users

How users add paths:

Simply append the additional path directly to the API provider URL. The system will automatically forward this path to the target API endpoint.

// Request to the base endpoint
fetch('https://onlineapp.pro/api/v1/api-gateway/your-provider-id', {
  method: 'POST',
  body: JSON.stringify({
    // Your request data here
  })
});
// Goes to: https://api.example.com/v1/users

// Request with an additional path - append it to the provider URL
fetch('https://onlineapp.pro/api/v1/api-gateway/your-provider-id/profile/settings', {
  method: 'POST',
  body: JSON.stringify({
    // Your request data here
  })
});
// Goes to: https://api.example.com/v1/users/profile/settings

// Another example with a user ID and orders
fetch('https://onlineapp.pro/api/v1/api-gateway/your-provider-id/123/orders', {
  method: 'GET'
});
// Goes to: https://api.example.com/v1/users/123/orders

Example: OpenAI API with different endpoints:

// Provider base URL: https://api.openai.com/v1
// Provider ID: openai-provider

// Chat with GPT-4
fetch('https://onlineapp.pro/api/v1/api-gateway/openai-provider/chat/completions', {
  method: 'POST',
  body: JSON.stringify({ model: 'gpt-4', messages: [...] })
})
// → https://api.openai.com/v1/chat/completions

// Image generation
fetch('https://onlineapp.pro/api/v1/api-gateway/openai-provider/images/generations', {
  method: 'POST',
  body: JSON.stringify({ prompt: 'A beautiful sunset', n: 1 })
})
// → https://api.openai.com/v1/images/generations

// File operations
fetch('https://onlineapp.pro/api/v1/api-gateway/openai-provider/files')
// → https://api.openai.com/v1/files

Best Practices

Security

  1. Secure API key storage - Store API keys directly in the provider settings on our servers. Never send API keys from client-side code or expose them in frontend applications. This ensures your credentials remain secure and are only used for server-to-server communication.

Token Management

  1. Choose appropriate query price levels - Match the query price level to the cost of the underlying API operation
  2. Monitor usage patterns - Track which APIs consume the most tokens
  3. Set reasonable limits - Configure appropriate token deduction rates

Performance

  1. Cache responses when possible - Reduce repeat API calls and token consumption (see the sketch after this list)
  2. Use appropriate timeouts - Prevent hanging requests
  3. Handle errors gracefully - Provide meaningful error messages to users
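
As a minimal sketch of the caching point above (the cache key scheme and TTL are illustrative assumptions, not part of the gateway), identical repeat requests can be answered locally instead of spending tokens again:

// Minimal in-memory cache keyed by provider path + request body.
// The TTL and key scheme are illustrative; adjust them to how fresh your data must be.
const cache = new Map();
const TTL_MS = 60_000;

async function cachedGatewayCall(path, body) {
  const key = `${path}:${JSON.stringify(body)}`;
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return hit.data; // served from cache: no API call, no token deduction
  }
  const response = await fetch(`https://onlineapp.pro/api/v1/api-gateway/${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });
  if (!response.ok) throw new Error(`Gateway error: ${response.status}`);
  const data = await response.json();
  cache.set(key, { data, at: Date.now() });
  return data;
}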

Next Steps
