ai.anthropic_generate
Generate text completions using Anthropic’s Claude models. This function supports multi-turn conversations, system prompts, tool use, and vision capabilities for sophisticated reasoning and analysis tasks.

Samples

Basic text generation

Generate a simple response:
-- The function returns JSONB; ->'content'->0->>'text' extracts the text of the first content block.
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Explain PostgreSQL in one sentence')
    )
)->'content'->0->>'text';
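
Because the call is ordinary SQL, it can also run once per row of an existing table. A minimal sketch, assuming a hypothetical support_tickets(id, body) table:
SELECT id,
       ai.anthropic_generate(
           'claude-3-5-sonnet-20241022',
           jsonb_build_array(
               jsonb_build_object('role', 'user', 'content', 'Summarize this ticket in one sentence: ' || body)
           )
       )->'content'->0->>'text' AS summary
FROM support_tickets
LIMIT 5;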

Multi-turn conversation

Continue a conversation with message history:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is PostgreSQL?'),
        jsonb_build_object('role', 'assistant', 'content', 'PostgreSQL is a powerful open-source relational database.'),
        jsonb_build_object('role', 'user', 'content', 'What makes it different from MySQL?')
    )
)->'content'->0->>'text';
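
If the conversation history is stored in a table, the messages array can be assembled with jsonb_agg. A minimal sketch, assuming a hypothetical chat_messages(conversation_id, created_at, role, content) table:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    (
        -- Aggregate the stored turns into a JSONB array, oldest first
        SELECT jsonb_agg(jsonb_build_object('role', role, 'content', content) ORDER BY created_at)
        FROM chat_messages
        WHERE conversation_id = 42
    )
)->'content'->0->>'text';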

Use a system prompt

Guide Claude’s behavior:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Explain databases')
    ),
    system_prompt => 'You are a helpful database expert. Give concise, technical answers with code examples.'
)->'content'->0->>'text';

Control creativity with temperature

Adjust the randomness of responses:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Write a creative story about databases')
    ),
    temperature => 0.9,
    max_tokens => 2000
)->'content'->0->>'text';

Use tools (function calling)

Enable Claude to call functions:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is the weather in Paris?')
    ),
    tools => '[
        {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["location"]
            }
        }
    ]'::jsonb,
    tool_choice => '{"type": "auto"}'::jsonb
);
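
When Claude decides to call a tool, the content array in the response contains a block with type "tool_use" carrying the tool name and its input. One way to pull that block out of the same call (tools definition shown condensed; field names follow Anthropic's Messages API response format):
SELECT block->>'name' AS tool_name,
       block->'input' AS tool_input
FROM jsonb_array_elements(
    ai.anthropic_generate(
        'claude-3-5-sonnet-20241022',
        jsonb_build_array(
            jsonb_build_object('role', 'user', 'content', 'What is the weather in Paris?')
        ),
        tools => '[{"name": "get_weather", "description": "Get current weather for a location", "input_schema": {"type": "object", "properties": {"location": {"type": "string", "description": "City name"}}, "required": ["location"]}}]'::jsonb,
        tool_choice => '{"type": "auto"}'::jsonb
    )->'content'
) AS block
WHERE block->>'type' = 'tool_use';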

Control stop sequences

Stop generation at specific sequences:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'List three database types')
    ),
    stop_sequences => ARRAY['4.', 'Fourth']
)->'content'->0->>'text';

Use with API key name

Reference a stored API key:
SELECT ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Hello, Claude!')
    ),
    api_key_name => 'ANTHROPIC_API_KEY'
)->'content'->0->>'text';

Arguments

| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| model | TEXT | - | ✔ | The Claude model to use (e.g., claude-3-5-sonnet-20241022) |
| messages | JSONB | - | ✔ | Array of message objects with role and content |
| max_tokens | INT | 1024 | - | Maximum tokens to generate (required by the Anthropic API) |
| api_key | TEXT | NULL | - | Anthropic API key. If not provided, uses the configured secret |
| api_key_name | TEXT | NULL | - | Name of the secret containing the API key |
| base_url | TEXT | NULL | - | Custom API base URL |
| timeout | FLOAT8 | NULL | - | Request timeout in seconds |
| max_retries | INT | NULL | - | Maximum number of retry attempts |
| system_prompt | TEXT | NULL | - | System prompt to guide model behavior |
| user_id | TEXT | NULL | - | Unique identifier for the end user |
| stop_sequences | TEXT[] | NULL | - | Sequences that stop generation |
| temperature | FLOAT8 | NULL | - | Sampling temperature (0.0 to 1.0) |
| tool_choice | JSONB | NULL | - | How the model should use tools (e.g., {"type": "auto"}) |
| tools | JSONB | NULL | - | Function definitions for tool use |
| top_k | INT | NULL | - | Only sample from the top K options |
| top_p | FLOAT8 | NULL | - | Nucleus sampling threshold (0.0 to 1.0) |
| verbose | BOOLEAN | FALSE | - | Enable verbose logging for debugging |

Returns

JSONB: The complete API response including:
  • id: Unique message identifier
  • type: Response type (always "message")
  • role: Role of the responder (always "assistant")
  • content: Array of content blocks (text, tool use, etc.)
  • model: Model used for generation
  • stop_reason: Why generation stopped (e.g., "end_turn", "max_tokens")
  • usage: Token usage statistics
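
For example, the answer text can be read alongside the stop reason and token counts from a single call. A minimal sketch, assuming the usage object follows Anthropic's input_tokens/output_tokens fields:
SELECT response->'content'->0->>'text'            AS answer,
       response->>'stop_reason'                   AS stop_reason,
       (response->'usage'->>'input_tokens')::int  AS input_tokens,
       (response->'usage'->>'output_tokens')::int AS output_tokens
FROM ai.anthropic_generate(
    'claude-3-5-sonnet-20241022',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Explain PostgreSQL in one sentence')
    )
) AS response;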