OpenAI Compatible

Nebula API

Access Nebula through a familiar OpenAI-compatible API. Drop-in replacement with enhanced security capabilities.

3-Tier Architecture

Available Models

Choose the right model for your use case. Pricing per 1M tokens.

Nebula Flash (Tier 1, default)
nebula-flash

Ultra-fast responses with intelligent routing. Default model for most tasks.

Capabilities: Chat, Routing, Classification
Input: $0.10/1M | Output: $0.40/1M | Context: 1M tokens | Speed: Ultra Fast

Nebula Edge (Tier 2)
nebula-edge

GPU-accelerated, unrestricted security analysis on dedicated infrastructure.

Capabilities: Chat, Reasoning, Tool Calling, Security Analysis
Input: $0.10/1M | Output: $0.30/1M | Context: 198K tokens | Speed: Fast

Nebula Pro (Tier 3)
nebula-pro

Premium deep analysis and reasoning for complex security assessments.

Capabilities: Chat, Function Calling, Extended Thinking, Vision
Input: $15.00/1M | Output: $75.00/1M | Context: 200K tokens | Speed: Thoughtful

Nebula AI (Tier 3)
nebula-120b

Ultimate power for complex analysis, multi-step reasoning, and research.

Capabilities: Chat, Function Calling, Extended Thinking, Vision
Input: $15.00/1M | Output: $75.00/1M | Context: 200K tokens | Speed: Deep
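The per-1M-token prices above translate directly into per-request cost. As an illustrative sketch (the PRICES table and estimate_cost helper are assumptions for this example, not part of the API):

```python
# USD per 1M tokens, taken from the pricing table above: (input, output)
PRICES = {
    "nebula-flash": (0.10, 0.40),
    "nebula-edge": (0.10, 0.30),
    "nebula-pro": (15.00, 75.00),
    "nebula-120b": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```

For example, a nebula-pro call with 1,000 input and 1,000 output tokens costs about $0.09, while the same call on nebula-flash costs well under a cent.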

Security

Authentication

Secure your API requests with API keys

Getting an API Key

  1. Log in to your BreachLine dashboard and go to Settings
  2. Navigate to the API Keys section
  3. Click "Create API Key"
  4. Select the "LLM API" scope (llm:*)
  5. Copy and securely store your key

Using Your API Key

Include your API key in the X-API-Key header:

X-API-Key: bl_live_xxxxxxxxxxxx
Security: Never expose your API key in client-side code or public repositories.
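One common pattern is to keep the key in an environment variable and read it at startup. A minimal sketch (the NEBULA_API_KEY variable name is an assumption for this example, not something the API requires):

```python
import os

def load_api_key() -> str:
    # Fail fast if the key is missing rather than sending unauthenticated
    # requests with an empty header.
    key = os.environ.get("NEBULA_API_KEY")
    if not key:
        raise RuntimeError("NEBULA_API_KEY environment variable is not set")
    return key
```

Pass the returned key as api_key= when constructing the OpenAI client, or set it in the X-API-Key header directly.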

Getting Started

Quick Start

Get started with Nebula in minutes

cURL

curl -X POST https://api.breachline.io/api/v1/llm/v1/chat/completions \
  -H "X-API-Key: bl_live_xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nebula-pro",
    "messages": [
      {"role": "system", "content": "You are a security analyst."},
      {"role": "user", "content": "Analyze this query for SQL injection: SELECT * FROM users WHERE id = 1 OR 1=1"}
    ],
    "max_tokens": 2048,
    "temperature": 0.3
  }'

Python (OpenAI SDK)

from openai import OpenAI

# Initialize client with Nebula endpoint
client = OpenAI(
    api_key="bl_live_xxxxxxxxxxxx",
    base_url="https://api.breachline.io/api/v1/llm"
)

# Chat completion
response = client.chat.completions.create(
    model="nebula-pro",
    messages=[
        {"role": "system", "content": "You are a security expert."},
        {"role": "user", "content": "Analyze this vulnerability report..."}
    ],
    max_tokens=2048,
    temperature=0.3
)

print(response.choices[0].message.content)

JavaScript / TypeScript

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'bl_live_xxxxxxxxxxxx',
  baseURL: 'https://api.breachline.io/api/v1/llm'
});

async function analyzeVulnerability(finding: string) {
  const response = await client.chat.completions.create({
    model: 'nebula-pro',
    messages: [
      { role: 'system', content: 'You are a security analyst.' },
      { role: 'user', content: `Analyze: ${finding}` }
    ]
  });

  return response.choices[0].message.content;
}

Functions

Tool Calling

Enable Nebula to execute functions and interact with external systems

Supported Models
Tool calling is available on nebula-edge (Tier 2), nebula-pro, and nebula-120b (Tier 3).

Tool Calling Example

from openai import OpenAI

client = OpenAI(
    api_key="bl_live_xxxxxxxxxxxx",
    base_url="https://api.breachline.io/api/v1/llm"
)

# Define security tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "scan_target",
            "description": "Perform a security scan on a target",
            "parameters": {
                "type": "object",
                "properties": {
                    "target": {"type": "string", "description": "URL or IP to scan"},
                    "scan_type": {"type": "string", "enum": ["quick", "full", "stealth"]}
                },
                "required": ["target"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "lookup_cve",
            "description": "Look up CVE details",
            "parameters": {
                "type": "object",
                "properties": {
                    "cve_id": {"type": "string", "description": "CVE ID (e.g., CVE-2024-1234)"}
                },
                "required": ["cve_id"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="nebula-pro",
    messages=[{"role": "user", "content": "Scan example.com for vulnerabilities"}],
    tools=tools,
    tool_choice="auto"
)

# Handle tool calls
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        print(f"Tool: {tool_call.function.name}")
        print(f"Args: {tool_call.function.arguments}")
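The example above only prints the requested tool calls. To complete the loop, run each tool locally and send its output back as a "tool" role message. A hedged sketch (run_tool and tool_result_message are hypothetical helpers, not part of the SDK):

```python
import json

def run_tool(name: str, arguments: dict) -> dict:
    # Hypothetical local dispatcher: wire this to your real scanners/lookups.
    if name == "lookup_cve":
        return {"cve_id": arguments["cve_id"], "summary": "stub result"}
    if name == "scan_target":
        return {"target": arguments["target"], "findings": []}
    return {"error": f"unknown tool: {name}"}

def tool_result_message(tool_call_id: str, name: str, arguments_json: str) -> dict:
    # Build the OpenAI-style "tool" role message carrying a tool's output.
    result = run_tool(name, json.loads(arguments_json))
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": json.dumps(result),
    }
```

Append the assistant message containing tool_calls, plus one tool message per call, to messages and call client.chat.completions.create again so the model can incorporate the results into its answer.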

Real-time

WebSocket Streaming

Get real-time responses via WebSocket connection

Streaming Example

import asyncio
import websockets
import json

async def stream_chat():
    uri = "wss://api.breachline.io/api/v1/ws"

    # Note: websockets >= 14 renamed this parameter to additional_headers
    async with websockets.connect(uri, extra_headers={
        "X-API-Key": "bl_live_xxxxxxxxxxxx"
    }) as ws:
        # Send a chat message
        await ws.send(json.dumps({
            "type": "chat",
            "model": "nebula-pro",
            "messages": [
                {"role": "user", "content": "Write a security audit report for example.com"}
            ]
        }))

        # Receive streaming chunks
        async for message in ws:
            data = json.loads(message)
            if data.get("type") == "chunk":
                print(data["content"], end="", flush=True)
            elif data.get("type") == "done":
                print("\n[Stream complete]")
                break

asyncio.run(stream_chat())

Integration

MCP Protocol

Integrate Nebula with MCP-compatible clients

What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI applications to external tools and data sources. Use Nebula with Claude Desktop, VS Code, and other MCP-compatible clients.

MCP Configuration

// MCP Configuration (claude_desktop_config.json)
{
  "mcpServers": {
    "nebula": {
      "command": "npx",
      "args": ["-y", "@breachline/mcp-server"],
      "env": {
        "NEBULA_API_KEY": "bl_live_xxxxxxxxxxxx",
        "NEBULA_BASE_URL": "https://api.breachline.io/api/v1/llm"
      }
    }
  }
}

// Use with MCP SDK
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

const client = new Client({ name: "my-app", version: "1.0.0" });
await client.connect(transport);

const result = await client.callTool({
  name: "nebula_chat",
  arguments: {
    model: "nebula-pro",
    message: "Analyze security headers for example.com"
  }
});

Usage

Rate Limits

Usage limits per API key

  - 60 requests/minute
  - 1,000 requests/hour
  - 10,000 requests/day
  - 100,000 tokens/minute

Need higher limits? Contact us for enterprise plans.
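Requests beyond these limits are rejected (typically with HTTP 429), so clients usually retry with exponential backoff plus jitter. A minimal sketch (RateLimitError is a placeholder for whatever exception your HTTP client raises on 429):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the exception your HTTP client raises on HTTP 429."""

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    # Retry fn() on rate-limit errors, doubling the delay each attempt and
    # adding jitter so concurrent clients do not retry in lockstep.
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

Wrap each API call, e.g. `with_backoff(lambda: client.chat.completions.create(...))`.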

Reference

API Endpoints

Full API reference

POST /api/v1/llm/v1/chat/completions

Create a chat completion (OpenAI compatible)

POST /api/v1/llm/v1/completions

Simple text completion endpoint

GET /api/v1/llm/v1/models

List available models and pricing

GET /api/v1/llm/v1/usage/current

Get current usage statistics
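The GET endpoints need nothing beyond the X-API-Key header. As a sketch using only the standard library (building the request is shown; sending it requires a valid key):

```python
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    # GET /api/v1/llm/v1/models: list available models and pricing.
    return urllib.request.Request(
        "https://api.breachline.io/api/v1/llm/v1/models",
        headers={"X-API-Key": api_key},
        method="GET",
    )
```

Send it with `urllib.request.urlopen(build_models_request(key))` and parse the JSON body.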

Get Started

Ready to Build?

Create an API key and start building with Nebula