Samples
Generate an embedding
Create a vector embedding using a local model:
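A minimal call might look like the following sketch. The function name `ollama_embed` is an assumption, inferred by analogy with the `ollama_generate()` and `ollama_chat_complete()` functions listed under Related functions:

```sql
-- Hypothetical function name (ollama_embed); only model and input_text
-- are required, so all other arguments fall back to their defaults.
SELECT ollama_embed(
    'nomic-embed-text',       -- model
    'The quick brown fox'     -- input_text
);
```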

Specify Ollama host
Connect to a specific Ollama server:
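A sketch of overriding the default `http://localhost:11434` host, again assuming the function is named `ollama_embed` (the name is not given in this reference) and using PostgreSQL named-argument notation:

```sql
-- Hypothetical function name; the host URL here is an example value.
SELECT ollama_embed(
    'nomic-embed-text',
    'The quick brown fox',
    host => 'http://ollama.internal:11434'  -- non-default Ollama server
);
```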

Configure model options
Customize the embedding generation:
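A sketch of passing `keep_alive` and `embedding_options` together (function name `ollama_embed` assumed; the specific option keys accepted inside `embedding_options` depend on the model):

```sql
-- Hypothetical function name; num_ctx is an example Ollama model option.
SELECT ollama_embed(
    'nomic-embed-text',
    'The quick brown fox',
    keep_alive => '10m',                          -- keep model loaded 10 minutes
    embedding_options => '{"num_ctx": 4096}'::jsonb
);
```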

Store embeddings in a table
Generate and store embeddings for your data:
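A sketch of backfilling a pgvector column, under two assumptions: the function is named `ollama_embed`, and the table layout below is illustrative (the vector dimension must match the model's output size; 768 is correct for `nomic-embed-text`):

```sql
-- Hypothetical function name and example table layout.
CREATE TABLE documents (
    id        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body      TEXT NOT NULL,
    embedding vector(768)    -- dimension must match the embedding model
);

-- Embed any rows that do not yet have an embedding.
UPDATE documents
SET embedding = ollama_embed('nomic-embed-text', body)
WHERE embedding IS NULL;
```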
Arguments
| Name | Type | Default | Required | Description |
|---|---|---|---|---|
| model | TEXT | - | ✔ | The Ollama model to use (e.g., llama2, mistral, nomic-embed-text) |
| input_text | TEXT | - | ✔ | Text input to embed |
| host | TEXT | NULL | ✖ | Ollama server URL (defaults to http://localhost:11434) |
| keep_alive | TEXT | NULL | ✖ | How long to keep the model loaded (e.g., 5m, 1h) |
| embedding_options | JSONB | NULL | ✖ | Model-specific options as JSON |
| verbose | BOOLEAN | FALSE | ✖ | Enable verbose logging for debugging |
Returns
vector: A pgvector-compatible vector containing the embedding.
Related functions
- ollama_generate(): generate text completions
- ollama_chat_complete(): chat with local models
- ollama_list_models(): see available models