Meteor AI
API Documentation

Messages API

Claude Messages API guide with multi-language examples

MeteorAI provides a conversation interface that is fully compatible with the Anthropic Claude Messages API, supporting all features of the Claude model family, including tool use, vision capabilities, and extended thinking.

Basic Information

API Endpoint

https://api.routin.ai/v1/messages

Authentication

Add your API key in the request header. Both header forms used in this guide are accepted:

Authorization: Bearer YOUR_API_KEY

x-api-key: YOUR_API_KEY

API Version

anthropic-version: 2023-06-01

MeteorAI is fully compatible with the official Anthropic SDKs. Simply change the base_url parameter to integrate seamlessly.

Request Parameters

Required Parameters

Parameter    Type      Description
model        string    Model name, e.g., claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
messages     array     Array of conversation messages
max_tokens   integer   Maximum number of tokens to generate

Optional Parameters

Parameter     Type              Default   Description
system        string or array   -         System prompt defining assistant behavior
temperature   number            1.0       Sampling temperature (0-1)
stream        boolean           false     Whether to use streaming output
tools         array             -         List of available tools
tool_choice   object            auto      Tool selection strategy
thinking      object            -         Extended thinking configuration (Claude 3.5 Sonnet only)
metadata      object            -         Additional metadata

Messages Format

{
  "model": "claude-3-5-sonnet-20241022",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "Hello, please introduce yourself"
    }
  ]
}

Supported role values:

  • user: User message
  • assistant: Assistant response

Supported content types:

  • Text: Simple string or object with type: "text"
  • Image: type: "image" (supports URL or base64)
  • Tool use: type: "tool_use"
  • Tool result: type: "tool_result"

System Prompt

The system prompt can be a simple string or a structured array:

{
  "system": "You are a helpful assistant."
}

Or use structured format for caching support:

{
  "system": [
    {
      "type": "text",
      "text": "You are a helpful assistant.",
      "cache_control": {"type": "ephemeral"}
    }
  ]
}

Response Format

Standard Response

{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! I'm Claude, an AI assistant developed by Anthropic..."
    }
  ],
  "model": "claude-3-5-sonnet-20241022",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 10,
    "output_tokens": 25
  }
}
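
A response's content field is an array of blocks, not a single string. A small helper (a sketch, not part of any SDK) can collect just the text blocks from a raw response dict:

```python
def extract_text(response: dict) -> str:
    """Concatenate the text of all type == "text" content blocks."""
    return "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "text"
    )

# Applied to the standard response above:
sample = {
    "content": [
        {"type": "text", "text": "Hello! I'm Claude, an AI assistant developed by Anthropic..."}
    ],
    "stop_reason": "end_turn",
}
print(extract_text(sample))
```

Non-text blocks (for example tool_use) are simply skipped, so the helper also works on mixed-content responses.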

Streaming Response

event: message_start
data: {"type":"message_start","message":{"id":"msg_01XFDUDYJgAACzvnptvVoYEL","type":"message","role":"assistant","content":[],"model":"claude-3-5-sonnet-20241022","stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":10,"output_tokens":0}}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"!"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"output_tokens":25}}

event: message_stop
data: {"type":"message_stop"}
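
When not using an SDK, the event stream above can be parsed by hand: each data: line carries a JSON event, and the text arrives as text_delta fragments. A minimal parsing sketch (fed here with the sample events from above rather than a live connection):

```python
import json

def collect_text(sse_lines):
    """Accumulate text_delta fragments from the 'data:' lines of a Messages SSE stream."""
    text = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip 'event:' lines and blank separators
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") == "content_block_delta" and event["delta"]["type"] == "text_delta":
            text.append(event["delta"]["text"])
    return "".join(text)

sample = [
    'event: content_block_delta',
    'data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"}}',
    'event: content_block_delta',
    'data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"!"}}',
    'event: message_stop',
    'data: {"type":"message_stop"}',
]
print(collect_text(sample))  # Hello!
```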

Code Examples

Basic Call

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, please introduce yourself"}
    ]
)

print(message.content[0].text)

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  const message = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Hello, please introduce yourself' }
    ],
  });

  console.log(message.content[0].text);
}

main();

Node.js (CommonJS):

const Anthropic = require('@anthropic-ai/sdk');

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello, please introduce yourself' }
  ],
}).then(message => {
  console.log(message.content[0].text);
});

C#:

using Anthropic.SDK;
using Anthropic.SDK.Constants;
using Anthropic.SDK.Messaging;

var client = new AnthropicClient(new APIAuthentication("YOUR_API_KEY"))
{
    BaseUrl = "https://api.routin.ai"
};

var messages = new List<Message>
{
    new Message(RoleType.User, "Hello, please introduce yourself")
};

var parameters = new MessageParameters
{
    Messages = messages,
    MaxTokens = 1024,
    Model = AnthropicModels.Claude35Sonnet,
    Stream = false,
    Temperature = 1.0m
};

var response = await client.Messages.GetClaudeMessageAsync(parameters);

Console.WriteLine(response.Message.ToString());

cURL:

curl https://api.routin.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {
        "role": "user",
        "content": "Hello, please introduce yourself"
      }
    ]
  }'

Streaming Output

Streaming output allows real-time retrieval of model-generated content for better user experience.

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

with client.messages.stream(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Tell me an interesting story"}
    ],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  const stream = await client.messages.stream({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Tell me an interesting story' }
    ],
  });

  for await (const chunk of stream) {
    if (chunk.type === 'content_block_delta' &&
        chunk.delta.type === 'text_delta') {
      process.stdout.write(chunk.delta.text);
    }
  }
}

main();

Node.js (CommonJS):

const Anthropic = require('@anthropic-ai/sdk');

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  const stream = await client.messages.stream({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Tell me an interesting story' }
    ],
  });

  for await (const chunk of stream) {
    if (chunk.type === 'content_block_delta' &&
        chunk.delta.type === 'text_delta') {
      process.stdout.write(chunk.delta.text);
    }
  }
}

main();

C#:

using Anthropic.SDK;
using Anthropic.SDK.Constants;
using Anthropic.SDK.Messaging;

var client = new AnthropicClient(new APIAuthentication("YOUR_API_KEY"))
{
    BaseUrl = "https://api.routin.ai"
};

var messages = new List<Message>
{
    new Message(RoleType.User, "Tell me an interesting story")
};

var parameters = new MessageParameters
{
    Messages = messages,
    MaxTokens = 1024,
    Model = AnthropicModels.Claude35Sonnet,
    Stream = true,
    Temperature = 1.0m
};

await foreach (var response in client.Messages.StreamClaudeMessageAsync(parameters))
{
    if (response.Delta?.Text != null)
    {
        Console.Write(response.Delta.Text);
    }
}

cURL:

curl https://api.routin.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "Tell me an interesting story"
      }
    ]
  }'

Multi-turn Conversation

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

messages = []

# First turn
messages.append({"role": "user", "content": "My name is John"})
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=messages
)
assistant_message = response.content[0].text
messages.append({"role": "assistant", "content": assistant_message})
print(f"Assistant: {assistant_message}")

# Second turn
messages.append({"role": "user", "content": "What's my name?"})
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=messages
)
assistant_message = response.content[0].text
print(f"Assistant: {assistant_message}")

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  const messages: Anthropic.MessageParam[] = [];

  // First turn
  messages.push({ role: 'user', content: 'My name is John' });
  let response = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages,
  });
  let assistantMessage = response.content[0].text;
  messages.push({ role: 'assistant', content: assistantMessage });
  console.log(`Assistant: ${assistantMessage}`);

  // Second turn
  messages.push({ role: 'user', content: "What's my name?" });
  response = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages,
  });
  assistantMessage = response.content[0].text;
  console.log(`Assistant: ${assistantMessage}`);
}

main();

C#:

using Anthropic.SDK;
using Anthropic.SDK.Constants;
using Anthropic.SDK.Messaging;

var client = new AnthropicClient(new APIAuthentication("YOUR_API_KEY"))
{
    BaseUrl = "https://api.routin.ai"
};

var messages = new List<Message>();

// First turn
messages.Add(new Message(RoleType.User, "My name is John"));
var parameters = new MessageParameters
{
    Messages = messages,
    MaxTokens = 1024,
    Model = AnthropicModels.Claude35Sonnet,
    Stream = false
};

var response = await client.Messages.GetClaudeMessageAsync(parameters);
var assistantMessage = response.Message.ToString();
messages.Add(new Message(RoleType.Assistant, assistantMessage));
Console.WriteLine($"Assistant: {assistantMessage}");

// Second turn
messages.Add(new Message(RoleType.User, "What's my name?"));
parameters.Messages = messages;
response = await client.Messages.GetClaudeMessageAsync(parameters);
assistantMessage = response.Message.ToString();
Console.WriteLine($"Assistant: {assistantMessage}");

Vision Capabilities

Claude 3 series models support image input for analyzing and understanding image content.

Python:

import anthropic
import base64

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

# Method 1: Using image URL
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "url",
                        "url": "https://example.com/image.jpg"
                    }
                },
                {
                    "type": "text",
                    "text": "What's in this image?"
                }
            ]
        }
    ]
)

# Method 2: Using base64 encoded image
with open("image.jpg", "rb") as image_file:
    image_data = base64.standard_b64encode(image_file.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "Describe this image"
                }
            ]
        }
    ]
)

print(message.content[0].text)

TypeScript:

import Anthropic from '@anthropic-ai/sdk';
import fs from 'fs';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  // Method 1: Using image URL
  const message = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'image',
            source: {
              type: 'url',
              url: 'https://example.com/image.jpg',
            },
          },
          {
            type: 'text',
            text: "What's in this image?",
          },
        ],
      },
    ],
  });

  // Method 2: Using base64 encoded image
  const imageData = fs.readFileSync('image.jpg').toString('base64');

  const message2 = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'image',
            source: {
              type: 'base64',
              media_type: 'image/jpeg',
              data: imageData,
            },
          },
          {
            type: 'text',
            text: 'Describe this image',
          },
        ],
      },
    ],
  });

  console.log(message.content[0].text);
  console.log(message2.content[0].text);
}

main();

C#:

using Anthropic.SDK;
using Anthropic.SDK.Constants;
using Anthropic.SDK.Messaging;

var client = new AnthropicClient(new APIAuthentication("YOUR_API_KEY"))
{
    BaseUrl = "https://api.routin.ai"
};

// Method 1: Using image URL
var messages = new List<Message>
{
    new Message
    {
        Role = RoleType.User,
        Content = new List<ContentBase>
        {
            new ImageContent
            {
                Source = new ImageSource
                {
                    Type = "url",
                    Url = "https://example.com/image.jpg"
                }
            },
            new TextContent
            {
                Text = "What's in this image?"
            }
        }
    }
};

var parameters = new MessageParameters
{
    Messages = messages,
    MaxTokens = 1024,
    Model = AnthropicModels.Claude35Sonnet,
    Stream = false
};

var response = await client.Messages.GetClaudeMessageAsync(parameters);
Console.WriteLine(response.Message.ToString());

// Method 2: Using base64 encoded image
var imageBytes = File.ReadAllBytes("image.jpg");
var base64Image = Convert.ToBase64String(imageBytes);

messages = new List<Message>
{
    new Message
    {
        Role = RoleType.User,
        Content = new List<ContentBase>
        {
            new ImageContent
            {
                Source = new ImageSource
                {
                    Type = "base64",
                    MediaType = "image/jpeg",
                    Data = base64Image
                }
            },
            new TextContent
            {
                Text = "Describe this image"
            }
        }
    }
};

parameters.Messages = messages;
response = await client.Messages.GetClaudeMessageAsync(parameters);
Console.WriteLine(response.Message.ToString());

Tool Use (Function Calling)

Claude supports tool calling, allowing the model to invoke external functions and APIs.

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

# Define tools
tools = [
    {
        "name": "get_weather",
        "description": "Get weather information for a specified city",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["city"]
        }
    }
]

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's the weather like in Beijing today?"}
    ]
)

# Check if tool use is required
if message.stop_reason == "tool_use":
    tool_use = next(
        block for block in message.content
        if block.type == "tool_use"
    )

    # Execute tool call
    if tool_use.name == "get_weather":
        # Call actual weather API here
        weather_result = {
            "temperature": "22",
            "condition": "sunny",
            "unit": "celsius"
        }

        # Return tool result to model
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            tools=tools,
            messages=[
                {"role": "user", "content": "What's the weather like in Beijing today?"},
                {"role": "assistant", "content": message.content},
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_use_id": tool_use.id,
                            "content": str(weather_result)
                        }
                    ]
                }
            ]
        )

        print(response.content[0].text)

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  // Define tools
  const tools: Anthropic.Tool[] = [
    {
      name: 'get_weather',
      description: 'Get weather information for a specified city',
      input_schema: {
        type: 'object',
        properties: {
          city: {
            type: 'string',
            description: 'City name',
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
            description: 'Temperature unit',
          },
        },
        required: ['city'],
      },
    },
  ];

  const message = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    tools,
    messages: [
      { role: 'user', content: "What's the weather like in Beijing today?" }
    ],
  });

  // Check if tool use is required
  if (message.stop_reason === 'tool_use') {
    const toolUse = message.content.find(
      (block): block is Anthropic.ToolUseBlock => block.type === 'tool_use'
    );

    if (toolUse && toolUse.name === 'get_weather') {
      // Execute tool call
      const weatherResult = {
        temperature: '22',
        condition: 'sunny',
        unit: 'celsius',
      };

      // Return tool result to model
      const response = await client.messages.create({
        model: 'claude-3-5-sonnet-20241022',
        max_tokens: 1024,
        tools,
        messages: [
          { role: 'user', content: "What's the weather like in Beijing today?" },
          { role: 'assistant', content: message.content },
          {
            role: 'user',
            content: [
              {
                type: 'tool_result',
                tool_use_id: toolUse.id,
                content: JSON.stringify(weatherResult),
              },
            ],
          },
        ],
      });

      console.log(response.content[0].text);
    }
  }
}

main();

Extended Thinking

Claude 3.5 Sonnet supports extended thinking, allowing the model to engage in deeper reasoning before responding.

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=4096,
    thinking={
        "type": "enabled",
        "budget_tokens": 2000
    },
    messages=[
        {
            "role": "user",
            "content": "Solve this math problem: A sequence has first three terms 2, 5, 10. Find the 10th term."
        }
    ]
)

# View thinking process
for block in message.content:
    if block.type == "thinking":
        print(f"Thinking: {block.thinking}")
    elif block.type == "text":
        print(f"Answer: {block.text}")

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  const message = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 4096,
    thinking: {
      type: 'enabled',
      budget_tokens: 2000,
    },
    messages: [
      {
        role: 'user',
        content: 'Solve this math problem: A sequence has first three terms 2, 5, 10. Find the 10th term.',
      },
    ],
  });

  // View thinking process
  for (const block of message.content) {
    if (block.type === 'thinking') {
      console.log(`Thinking: ${block.thinking}`);
    } else if (block.type === 'text') {
      console.log(`Answer: ${block.text}`);
    }
  }
}

main();

cURL:

curl https://api.routin.ai/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 4096,
    "thinking": {
      "type": "enabled",
      "budget_tokens": 2000
    },
    "messages": [
      {
        "role": "user",
        "content": "Solve this math problem: A sequence has first three terms 2, 5, 10. Find the 10th term."
      }
    ]
  }'

Prompt Caching

Claude supports prompt caching to significantly reduce costs and latency for long prompts.

Python:

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

long_code = open("code_to_review.py").read()  # the code to be reviewed (hypothetical file)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a professional code review assistant...(long system prompt)",
            "cache_control": {"type": "ephemeral"}
        }
    ],
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Review this code:\n\n" + long_code,
                    "cache_control": {"type": "ephemeral"}
                }
            ]
        }
    ]
)

# Subsequent requests will reuse cached content, reducing cost and latency

Cached content is retained for 5 minutes. Subsequent requests during this period will read cached content at a lower price.
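
Whether a request actually hit the cache can be read from the response's usage block: alongside input_tokens, the Anthropic-compatible response reports cache_creation_input_tokens (written to cache) and cache_read_input_tokens (served from cache). A small sketch that summarizes those counters from a usage dict:

```python
def cache_summary(usage: dict) -> str:
    """Describe how much of the input prompt was served from the cache."""
    read = usage.get("cache_read_input_tokens", 0)
    written = usage.get("cache_creation_input_tokens", 0)
    fresh = usage.get("input_tokens", 0)  # non-cached input tokens
    total = read + written + fresh
    pct = 100 * read / total if total else 0
    return f"{read}/{total} input tokens from cache ({pct:.0f}%)"

print(cache_summary({"input_tokens": 10, "cache_read_input_tokens": 990}))
# 990/1000 input tokens from cache (99%)
```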

Error Handling

Always add error handling in production to prevent application crashes from API failures.

Python:

import anthropic
from anthropic import APIError, RateLimitError, APIConnectionError

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.routin.ai"
)

try:
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Hello!"}
        ]
    )
    print(message.content[0].text)
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except APIConnectionError as e:
    print(f"Connection error: {e}")
except APIError as e:
    print(f"API error: {e}")
except Exception as e:
    print(f"Unknown error: {e}")

TypeScript:

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.routin.ai',
});

async function main() {
  try {
    const message = await client.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [
        { role: 'user', content: 'Hello!' }
      ],
    });
    console.log(message.content[0].text);
  } catch (error) {
    if (error instanceof Anthropic.RateLimitError) {
      // Check RateLimitError first: it extends APIError, so the reverse order would never reach this branch
      console.error('Rate limit exceeded');
    } else if (error instanceof Anthropic.APIError) {
      console.error(`API error [${error.status}]: ${error.message}`);
    } else {
      console.error('Unknown error:', error);
    }
  }
}

main();

C#:

using Anthropic.SDK;
using Anthropic.SDK.Constants;
using Anthropic.SDK.Messaging;

var client = new AnthropicClient(new APIAuthentication("YOUR_API_KEY"))
{
    BaseUrl = "https://api.routin.ai"
};

try
{
    var messages = new List<Message>
    {
        new Message(RoleType.User, "Hello!")
    };

    var parameters = new MessageParameters
    {
        Messages = messages,
        MaxTokens = 1024,
        Model = AnthropicModels.Claude35Sonnet,
        Stream = false
    };

    var response = await client.Messages.GetClaudeMessageAsync(parameters);
    Console.WriteLine(response.Message.ToString());
}
catch (HttpRequestException ex) when (ex.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
{
    Console.WriteLine($"Rate limit exceeded: {ex.Message}");
}
catch (HttpRequestException ex)
{
    Console.WriteLine($"API error [{ex.StatusCode}]: {ex.Message}");
}
catch (Exception ex)
{
    Console.WriteLine($"Unknown error: {ex.Message}");
}

Common Error Codes

Code   Description                       Solution
401    Invalid or missing API key        Check the x-api-key or Authorization header
400    Invalid request parameters        Check the model, messages, and max_tokens format
429    Rate limit exceeded               Reduce request frequency or upgrade your quota
500    Internal server error             Retry later or contact support
503    Service temporarily unavailable   Retry later

Supported Models

MeteorAI supports all Claude series models:

  • Claude 3.5 Sonnet (claude-3-5-sonnet-20241022): Latest flagship model, balancing performance and cost
  • Claude 3.5 Haiku (claude-3-5-haiku-20241022): Fast response, lower cost
  • Claude 3 Opus (claude-3-opus-20240229): Most powerful model for complex tasks
  • Claude 3 Sonnet (claude-3-sonnet-20240229): Balanced version
  • Claude 3 Haiku (claude-3-haiku-20240307): Fast and economical

Different models have different pricing and performance. Choose the appropriate model based on your use case. Claude 3.5 series supports vision capabilities and tool use.

Best Practices

  1. Use system prompts: Define assistant behavior via system parameter
  2. Set reasonable max_tokens: Set based on actual needs to avoid unnecessary costs
  3. Enable prompt caching: Use cache_control for frequently used long prompts to reduce costs
  4. Retry on transient errors: Implement an exponential backoff retry mechanism for temporary failures
  5. Streaming output: Use stream=true for long text generation for better user experience
  6. Save conversation history: Include complete conversation history in messages array for multi-turn conversations
  7. Monitor usage: Regularly check admin panel statistics to optimize API usage
  8. Use tool calling: Use tool calling feature for tasks requiring external data or computation
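
The exponential backoff recommended above can be sketched as a small wrapper; make_request stands in for any API call, and the delays are illustrative:

```python
import time

def with_backoff(make_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a callable on exceptions, doubling the delay after each failed attempt."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            sleep(base_delay * (2 ** attempt))

# Usage with a flaky stand-in for an API call (fails twice, then succeeds):
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky, sleep=lambda s: None))  # ok
```

In production you would typically retry only on 429/500/503 responses rather than every exception.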

Token Billing

  • Input Tokens: system prompt + all user and assistant messages in messages
  • Output Tokens: All content generated by the model (including tool calls)
  • Cache Read: Tokens read from cache at lower price (approximately 10% of normal price)
  • Cache Write: Writing content to the cache for the first time is priced slightly above the normal input rate
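
With the multipliers above, a rough per-request cost estimate might look like this. The base prices and the 1.25x cache-write multiplier are placeholders; substitute the actual rates for your plan (the 0.1x cache-read multiplier follows the "approximately 10%" figure above):

```python
def estimate_cost(usage, input_price, output_price,
                  cache_read_mult=0.1, cache_write_mult=1.25):
    """Estimate request cost in the same currency unit as the per-token prices."""
    return (
        usage.get("input_tokens", 0) * input_price
        + usage.get("output_tokens", 0) * output_price
        + usage.get("cache_read_input_tokens", 0) * input_price * cache_read_mult
        + usage.get("cache_creation_input_tokens", 0) * input_price * cache_write_mult
    )

# 1000 fresh input tokens at 3.0 per million, 500 output tokens at 15.0 per million:
cost = estimate_cost({"input_tokens": 1000, "output_tokens": 500},
                     input_price=3.0e-6, output_price=15.0e-6)
print(round(cost, 6))  # 0.0105
```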