## Purpose
- Define which Ollama model to use
- Specify the dimensionality of the embeddings
- Configure how the Ollama API is accessed
- Configure the model's truncation behavior and keep-alive settings
- Configure optional, model-specific parameters like temperature
## Samples
### Basic Ollama embedding
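A minimal sketch of a vectorizer that embeds with a local Ollama model. The blog source table, its content column, and the chunking helper are illustrative assumptions, not part of this reference; adapt them to your schema.

```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,  -- hypothetical source table
    embedding => ai.embedding_ollama('nomic-embed-text', 768),
    -- hypothetical chunking over a 'content' column
    chunking => ai.chunking_recursive_character_text_splitter('content')
);
```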
### With custom Ollama server
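A sketch of pointing the embedder at a non-default Ollama server via base_url. The host name is a placeholder; 11434 is Ollama's default port.

```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,  -- hypothetical source table
    embedding => ai.embedding_ollama(
        'nomic-embed-text',
        768,
        base_url => 'http://ollama.internal:11434'  -- placeholder host
    ),
    chunking => ai.chunking_recursive_character_text_splitter('content')
);
```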
### With model options and keep alive
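A sketch combining the options and keep_alive arguments. The option values here are examples only; options is passed through to Ollama as-is.

```sql
SELECT ai.create_vectorizer(
    'blog'::regclass,  -- hypothetical source table
    embedding => ai.embedding_ollama(
        'nomic-embed-text',
        768,
        -- example model parameters forwarded to Ollama
        options => '{"num_ctx": 1024, "temperature": 0.0}'::jsonb,
        -- keep the model loaded for 10 minutes after the request
        keep_alive => '10m'
    ),
    chunking => ai.chunking_recursive_character_text_splitter('content')
);
```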
## Arguments
| Name | Type | Default | Required | Description |
|---|---|---|---|---|
| model | text | - | ✔ | Name of the Ollama model to use (e.g., nomic-embed-text). The model must already be pulled on your Ollama server |
| dimensions | int | - | ✔ | Number of dimensions for the embedding vectors |
| base_url | text | - | ✖ | Base URL of the Ollama API. If not provided, uses the OLLAMA_HOST environment variable |
| truncate | bool | true | ✖ | Whether to truncate input that exceeds the model's context length. Set to false to raise an error instead |
| options | jsonb | - | ✖ | Additional model parameters such as temperature or num_ctx |
| keep_alive | text | - | ✖ | How long the model stays loaded in memory after the request (e.g., 5m, 1h) |
## Returns
A JSON configuration object for use in create_vectorizer().
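You can also call the function on its own to inspect the configuration it produces; a quick sketch (the exact keys of the returned jsonb may vary by version):

```sql
-- Returns the jsonb configuration object without creating a vectorizer
SELECT ai.embedding_ollama('nomic-embed-text', 768);
```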
## Related functions
- embedding_openai(): use OpenAI models
- embedding_litellm(): use any provider through LiteLLM
- embedding_voyageai(): use Voyage AI models