API Quickstart Guide

Welcome to the Redbelt API! This guide will help you get started in just a few minutes. By the end of this guide, you’ll have:

  • Created your API key
  • Set up your development environment
  • Made your first API call to retrieve available LLMs

Follow these steps to create your API key:

  1. Log In to Redbelt and Go to the Dashboard

    Navigate to redbelt.ai and log in to your account. Click the Dashboard button in the left side menu.

    Redbelt Homepage

  2. Access Admin Page

    From the left side menu, click on Admin to access the administration panel.

    Admin Page

  3. Create New Secret Key

    Click the Create a new secret key button to open the API key creation modal.

    Create Secret Key Button

  4. Configure Your API Key

    In the modal that appears:

    1. Choose who owns the key: Service Account (a bot user) or You (your personal user)
    2. Enter a descriptive name for your key (e.g., “Development Key” or “Production Key”)
    3. Click the Create secret key button

    Configure API Key

  5. ⚠️ Important: Copy and Save Your New Key

    Once created, click the Copy button to copy your API key. Save your API key immediately! For security reasons, you won’t be able to see it again.

    Copy API Key

    After your new key is saved, click Done.

Create a new directory for your project and set up your Python environment.

Terminal window
uv init redbelt-quickstart
cd redbelt-quickstart
uv add requests python-dotenv

Create a .env file to store your API key securely:

Terminal window
echo "REDBELT_API_KEY=your-api-key-here" > .env

Replace your-api-key-here with the API key you generated in the previous section.
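If the variable is missing, os.getenv returns None and the request would fail later with a confusing 401. One way to fail fast is a small guard like the following sketch (the helper name require_api_key is our own, not part of any Redbelt SDK):

```python
import os


def require_api_key(var_name: str = "REDBELT_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; add it to your .env file and call "
            "load_dotenv() before making requests."
        )
    return key
```

Call load_dotenv() first, then use require_api_key() wherever you would otherwise call os.getenv directly.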

Create a new file called quickstart.py with the following code:

import os

import requests
from dotenv import load_dotenv

# Load environment variables (including REDBELT_API_KEY) from .env
load_dotenv()

api_url = "https://redbelt.ai/api"

# Reuse one session so the auth headers are sent with every request
session = requests.Session()
session.headers.update({
    "Authorization": f"Bearer {os.getenv('REDBELT_API_KEY')}",
    "Content-Type": "application/json",
})

# Fetch the list of available LLMs and print the raw JSON response
response = session.get(f"{api_url}/threads/llms")
print(response.json())

This code does the following:

  • Loads your API key from the .env file using python-dotenv
  • Creates a session with proper authentication headers
  • Makes a GET request to the /threads/llms endpoint to retrieve available LLMs
  • Prints the response showing all available language models

Execute your script:

Terminal window
uv run quickstart.py

You should see a JSON response listing all available LLMs:

{
  "message": "LLMs retrieved successfully.",
  "status_code": 200,
  "timestamp": "2025-10-01T08:10:49.447930+00:00",
  "data": [
    {
      "llm_id": 19,
      "llm_type_id": 1,
      "name": "GPT 5",
      "deployment_name": "gpt-5",
      "slug": "gpt-5",
      "is_active": true,
      "is_frontend_visible": true,
      "cost_per_1k_input_token_usd": 0.00125,
      "cost_per_1k_output_token_usd": 0.01,
      "max_input_tokens": 400000,
      "max_output_tokens": 128000,
      "is_thinking_model": true
    },
    {
      "llm_id": 22,
      "llm_type_id": 1,
      "name": "GPT 5 Fast",
      "deployment_name": "gpt-5",
      "slug": "gpt-5-fast",
      "is_active": true,
      "is_frontend_visible": true,
      "cost_per_1k_input_token_usd": 0.00125,
      "cost_per_1k_output_token_usd": 0.01,
      "max_input_tokens": 400000,
      "max_output_tokens": 128000,
      "is_thinking_model": false
    },
    {
      "llm_id": 7,
      "llm_type_id": 2,
      "name": "Gemini 2.5 Pro",
      "deployment_name": "gemini-2.5-pro",
      "slug": "gemini-2.5-pro",
      "is_active": true,
      "is_frontend_visible": true,
      "cost_per_1k_input_token_usd": 0.00125,
      "cost_per_1k_output_token_usd": 0.01,
      "max_input_tokens": 1048576,
      "max_output_tokens": 65536,
      "is_thinking_model": true
    },
    ...
  ]
}
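Once you have the response, the data list is plain Python after response.json(). A small example of working with it, using a trimmed copy of the response shown above:

```python
# A trimmed copy of the documented /threads/llms response
payload = {
    "message": "LLMs retrieved successfully.",
    "status_code": 200,
    "data": [
        {"name": "GPT 5", "slug": "gpt-5", "is_thinking_model": True},
        {"name": "GPT 5 Fast", "slug": "gpt-5-fast", "is_thinking_model": False},
        {"name": "Gemini 2.5 Pro", "slug": "gemini-2.5-pro", "is_thinking_model": True},
    ],
}

# Collect the slug of every model, and of just the "thinking" models
slugs = [llm["slug"] for llm in payload["data"]]
thinking = [llm["slug"] for llm in payload["data"] if llm["is_thinking_model"]]

print(slugs)     # → ['gpt-5', 'gpt-5-fast', 'gemini-2.5-pro']
print(thinking)  # → ['gpt-5', 'gemini-2.5-pro']
```

In your own script, replace the sample payload with response.json() and pick out whichever fields you need, such as slug when selecting a model for later calls.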

🎉 Success!

You’ve successfully:

  • ✅ Generated your API key
  • ✅ Made your first API call
  • ✅ Retrieved available LLMs