Generate vector embeddings from text using locally hosted Ollama models. Embeddings are numerical representations of text that capture semantic meaning, ideal for semantic search, recommendations, and clustering without sending data to external APIs.

Samples

Generate an embedding

Create a vector embedding using a local model:
SELECT ai.ollama_embed(
    'llama2',
    'PostgreSQL is a powerful database'
);
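
To confirm the call worked, you can inspect the dimensionality of the returned vector with pgvector's vector_dims function (a quick sanity check; the exact dimension depends on the model):
```sql
-- Returns the number of dimensions in the embedding; the value depends on the model.
SELECT vector_dims(
    ai.ollama_embed('llama2', 'PostgreSQL is a powerful database')
);
```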

Specify Ollama host

Connect to a specific Ollama server:
SELECT ai.ollama_embed(
    'llama2',
    'PostgreSQL is a powerful database',
    host => 'http://ollama-server:11434'
);

Configure model options

Customize the embedding generation:
SELECT ai.ollama_embed(
    'llama2',
    'PostgreSQL is a powerful database',
    embedding_options => '{"temperature": 0.5}'::jsonb
);
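
Keep the model loaded

When embedding many rows, reloading the model for each call adds latency. A sketch using the keep_alive argument (documented under Arguments below) to keep the model resident between calls:
```sql
SELECT ai.ollama_embed(
    'llama2',
    'PostgreSQL is a powerful database',
    keep_alive => '10m'  -- keep the model in memory for 10 minutes after this call
);
```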

Store embeddings in a table

Generate and store embeddings for your data:
UPDATE documents
SET embedding = ai.ollama_embed(
    'llama2',
    content,
    host => 'http://localhost:11434'
)
WHERE embedding IS NULL;
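
Once embeddings are stored, they can be queried with pgvector's distance operators. A sketch of a cosine-distance search, assuming the documents table has id and content columns and embedding is a pgvector vector column:
```sql
-- Find the five documents closest in meaning to the query text.
-- <=> is pgvector's cosine distance operator.
SELECT id, content
FROM documents
ORDER BY embedding <=> ai.ollama_embed('llama2', 'database performance tuning')
LIMIT 5;
```
For large tables, an approximate index (e.g. HNSW or IVFFlat from pgvector) on the embedding column makes this ordering efficient.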

Arguments

| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| model | TEXT | - | ✔ | The Ollama model to use (e.g., llama2, mistral, nomic-embed-text) |
| input_text | TEXT | - | ✔ | Text input to embed |
| host | TEXT | NULL | ✖ | Ollama server URL (defaults to http://localhost:11434) |
| keep_alive | TEXT | NULL | ✖ | How long to keep the model loaded (e.g., 5m, 1h) |
| embedding_options | JSONB | NULL | ✖ | Model-specific options as JSON |
| verbose | BOOLEAN | FALSE | ✖ | Enable verbose logging for debugging |

Returns

vector: A pgvector-compatible vector containing the embedding of the input text.