Samples
Generate a completion
Get a text completion from a local model:
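A minimal sketch; the function name ollama_generate is an assumption inferred from the ollama_* functions listed under Related functions, and the named parameters follow the Arguments table below:

```sql
-- Basic completion; ollama_generate and its parameter names are
-- assumed from the Arguments table in this document.
SELECT ollama_generate(
    model  => 'llama2',
    prompt => 'Why is the sky blue?'
) ->> 'response' AS completion;
```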
Use a system prompt
Set a system prompt to control the model’s behavior:
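A sketch of the same (assumed) ollama_generate call with the system_prompt parameter:

```sql
-- system_prompt steers the model's behavior for this generation.
SELECT ollama_generate(
    model         => 'llama2',
    prompt        => 'Describe PostgreSQL in one sentence.',
    system_prompt => 'You are a terse assistant. Answer plainly.'
) ->> 'response' AS completion;
```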
Add context for continuation
Continue a previous generation using context:
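A sketch that feeds the context array from one generation into the next; converting the JSONB context field back to INT[] is an assumption about the glue required:

```sql
WITH first AS (
    SELECT ollama_generate(
        model  => 'llama2',
        prompt => 'Tell me a short story about a lighthouse.'
    ) AS resp
)
SELECT ollama_generate(
    model   => 'llama2',
    prompt  => 'Continue the story.',
    -- rebuild INT[] from the JSONB context array of the first call
    context => ARRAY(SELECT jsonb_array_elements_text(resp -> 'context')::int)
) ->> 'response' AS continuation
FROM first;
```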
Generate with images
Analyze images with vision-capable models:
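A sketch assuming a vision-capable model such as llava, with the image loaded server-side via pg_read_binary_file() (which requires the pg_read_server_files privilege or superuser):

```sql
-- images takes a BYTEA[]; here a single file is read from the
-- server's filesystem and passed as a one-element array.
SELECT ollama_generate(
    model  => 'llava',
    prompt => 'What is in this picture?',
    images => ARRAY[pg_read_binary_file('/tmp/photo.png')]
) ->> 'response' AS description;
```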
Configure model options
Customize the generation parameters:
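A sketch passing generation parameters through the embedding_options JSONB argument; the parameter name is taken from the Arguments table, and temperature and top_p are standard Ollama options:

```sql
-- Higher temperature makes output more varied; top_p trims the
-- sampling distribution.
SELECT ollama_generate(
    model             => 'mistral',
    prompt            => 'Write a haiku about autumn.',
    embedding_options => '{"temperature": 0.9, "top_p": 0.95}'::jsonb
) ->> 'response' AS completion;
```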
Arguments
| Name | Type | Default | Required | Description |
|---|---|---|---|---|
| model | TEXT | - | ✔ | The Ollama model to use (e.g., llama2, mistral, codellama) |
| prompt | TEXT | - | ✔ | The prompt to generate a response for |
| host | TEXT | NULL | ✖ | Ollama server URL (defaults to http://localhost:11434) |
| images | BYTEA[] | NULL | ✖ | Array of images for multimodal models |
| keep_alive | TEXT | NULL | ✖ | How long to keep the model loaded (e.g., 5m, 1h) |
| embedding_options | JSONB | NULL | ✖ | Model-specific options such as temperature and top_p |
| system_prompt | TEXT | NULL | ✖ | System prompt to set model behavior |
| template | TEXT | NULL | ✖ | Custom prompt template |
| context | INT[] | NULL | ✖ | Context from a previous generation, for continuation |
| verbose | BOOLEAN | FALSE | ✖ | Enable verbose logging for debugging |
Returns
JSONB: the complete API response, including:
- model: Model used for generation
- response: The generated text
- context: Context array for continuation
- created_at: Generation timestamp
- done: Whether generation is complete
- total_duration: Total time taken
- prompt_eval_count: Number of tokens in the prompt
- eval_count: Number of tokens generated
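Since the return value is JSONB, individual fields can be pulled out with the usual operators; a sketch, again assuming the function is named ollama_generate:

```sql
-- Extract selected fields from the JSONB response.
SELECT resp ->> 'response'                 AS text,
       (resp ->> 'eval_count')::int        AS tokens_generated,
       (resp ->> 'total_duration')::bigint AS duration_ns  -- Ollama reports nanoseconds
FROM ollama_generate(model => 'llama2', prompt => 'Hello') AS resp;
```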
Related functions
- ollama_chat_complete(): multi-turn conversations
- ollama_embed(): generate embeddings
- ollama_list_models(): see available models