Generate chat completions using locally hosted Ollama models. This function supports multi-turn conversations, tool calling, and structured output with complete data privacy.

Samples

Basic chat completion

Have a conversation with a local model:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is PostgreSQL?')
    )
)->'message'->>'content';
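
Because this is an ordinary SQL function, it can also be applied row by row. A minimal sketch, assuming a hypothetical questions table with a question column:
-- Hypothetical table; adjust names to your schema.
SELECT
    q.question,
    ai.ollama_chat_complete(
        'llama2',
        jsonb_build_array(
            jsonb_build_object('role', 'user', 'content', q.question)
        )
    )->'message'->>'content' AS answer
FROM questions AS q;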

Multi-turn conversation

Continue a conversation with message history:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is PostgreSQL?'),
        jsonb_build_object('role', 'assistant', 'content', 'PostgreSQL is a powerful open-source database.'),
        jsonb_build_object('role', 'user', 'content', 'What makes it different from MySQL?')
    )
)->'message'->>'content';
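
For longer sessions, you can persist the turns in a table and rebuild the messages array on each call. A minimal sketch, assuming a hypothetical chat_history table:
-- Hypothetical table holding one row per conversation turn.
CREATE TABLE chat_history (
    turn    BIGSERIAL PRIMARY KEY,
    role    TEXT NOT NULL,      -- 'user' or 'assistant'
    content TEXT NOT NULL
);

-- Rebuild the full message history in order, then chat.
SELECT ai.ollama_chat_complete(
    'llama2',
    (SELECT jsonb_agg(
            jsonb_build_object('role', role, 'content', content)
            ORDER BY turn)
       FROM chat_history)
)->'message'->>'content';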

Use with specific host

Connect to a custom Ollama server:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Explain databases')
    ),
    host => 'http://ollama-server:11434'
)->'message'->>'content';

Configure chat options

Customize the chat parameters:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Write a creative story')
    ),
    chat_options => '{"temperature": 0.9, "top_p": 0.95}'::jsonb
)->'message'->>'content';
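
Other Ollama options can be passed in the same object; for example, a temperature of 0 with a fixed seed makes output close to deterministic, and num_predict caps the response length. A sketch (option support varies by model and Ollama version):
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'Summarize ACID in one sentence')
    ),
    -- temperature 0 plus a fixed seed for reproducible output;
    -- num_predict limits the number of generated tokens
    chat_options => '{"temperature": 0, "seed": 42, "num_predict": 100}'::jsonb
)->'message'->>'content';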

Structured output with JSON

Request JSON responses:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'List 3 database types')
    ),
    response_format => '{"type": "json"}'::jsonb
)->'message'->>'content';
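
Because the ->> operator returns plain text, cast the content back to jsonb to query it with the usual JSON operators. This assumes the model honored the JSON format request:
-- ->> yields text; ::jsonb re-parses it as JSON.
SELECT (ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'List 3 database types')
    ),
    response_format => '{"type": "json"}'::jsonb
)->'message'->>'content')::jsonb AS parsed;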

Use tools (function calling)

Enable the model to call tools:
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is the weather in Paris?')
    ),
    tools => '[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    }
                }
            }
        }
    ]'::jsonb
);
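
When the model decides to call a tool, the request appears in the response message rather than in content. In Ollama's chat API, tool requests are returned under message.tool_calls, so you can extract them like this (the exact layout may vary by model and Ollama version):
-- Same tool definition as above; pull out the requested tool calls.
SELECT ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is the weather in Paris?')
    ),
    tools => '[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get current weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"}
                    }
                }
            }
        }
    ]'::jsonb
)->'message'->'tool_calls';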

Arguments

| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| model | TEXT | - | ✔ | The Ollama model to use (e.g., llama2, mistral, codellama) |
| messages | JSONB | - | ✔ | Array of message objects with role and content |
| host | TEXT | NULL | ✖ | Ollama server URL (defaults to http://localhost:11434) |
| keep_alive | TEXT | NULL | ✖ | How long to keep the model loaded (e.g., 5m, 1h) |
| chat_options | JSONB | NULL | ✖ | Model-specific options such as temperature and top_p |
| tools | JSONB | NULL | ✖ | Function definitions for tool calling |
| response_format | JSONB | NULL | ✖ | Format specification (e.g., {"type": "json"}) |
| verbose | BOOLEAN | FALSE | ✖ | Enable verbose logging for debugging |

Returns

JSONB: The complete API response including:
  • model: Model used for the chat
  • message: The assistant’s response with role and content
  • created_at: Response timestamp
  • done: Whether generation is complete
  • total_duration: Total time taken (reported in nanoseconds)
  • prompt_eval_count: Number of tokens in prompt
  • eval_count: Number of tokens generated
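
A quick way to inspect the metadata alongside the reply; this assumes durations are reported in nanoseconds, as in Ollama's API:
SELECT
    response->'message'->>'content'             AS reply,
    response->>'model'                          AS model,
    (response->>'prompt_eval_count')::int       AS prompt_tokens,
    (response->>'eval_count')::int              AS completion_tokens,
    -- convert nanoseconds to seconds
    (response->>'total_duration')::bigint / 1e9 AS total_seconds
FROM ai.ollama_chat_complete(
    'llama2',
    jsonb_build_array(
        jsonb_build_object('role', 'user', 'content', 'What is PostgreSQL?')
    )
) AS response;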