## Purpose
- Define the embedding model to use (from any supported provider)
- Specify the dimensionality of the embeddings
- Configure optional, provider-specific parameters
- Set the name of the environment variable that holds your API key
## Samples

### Hugging Face model

### Azure OpenAI

### AWS Bedrock
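A minimal sketch of a Hugging Face configuration. The function name `ai.embedding_litellm` is assumed to match this reference, and the model name, dimensionality, and environment-variable name are illustrative placeholders, not defaults:

```sql
-- Hypothetical example: configure embeddings via a Hugging Face model.
-- The model name and dimensions are illustrative; check the model card
-- for the correct embedding dimensionality.
SELECT ai.embedding_litellm(
    'huggingface/BAAI/bge-small-en-v1.5',  -- provider prefix + model name
    384,                                   -- must match the model's output size
    api_key_name => 'HUGGINGFACE_API_KEY'  -- env var holding your HF token
);
```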
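A sketch of an Azure OpenAI configuration, assuming the function name `ai.embedding_litellm`; the deployment name, endpoint URL, and API version below are placeholders you would replace with your own:

```sql
-- Hypothetical example: Azure OpenAI deployment.
-- The "azure/" prefix selects the Azure provider; the rest is the
-- deployment name, not the underlying model name.
SELECT ai.embedding_litellm(
    'azure/my-embedding-deployment',
    1536,                                  -- e.g. for text-embedding-3-small
    api_key_name => 'AZURE_OPENAI_API_KEY',
    extra_options => '{
        "api_base": "https://my-resource.openai.azure.com",
        "api_version": "2024-02-01"
    }'::jsonb
);
```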
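A sketch of an AWS Bedrock configuration, again assuming the function name `ai.embedding_litellm`; the model ID and region are illustrative, and Bedrock typically authenticates through AWS credentials rather than a single API key:

```sql
-- Hypothetical example: AWS Bedrock Titan embeddings.
-- Region is passed through extra_options; AWS credentials are resolved
-- from the environment by the provider.
SELECT ai.embedding_litellm(
    'bedrock/amazon.titan-embed-text-v2:0',
    1024,
    extra_options => '{"aws_region_name": "us-east-1"}'::jsonb
);
```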
## Arguments
| Name | Type | Default | Required | Description |
|------|------|---------|----------|-------------|
| model | text | - | ✔ | Name of the embedding model with optional provider prefix (e.g., `huggingface/model-name`, `azure/deployment-name`) |
| dimensions | int | - | ✔ | Number of dimensions for the embedding vectors |
| api_key_name | text | - | ✖ | Name of the environment variable containing the API key |
| extra_options | jsonb | - | ✖ | Provider-specific configuration options (API base URL, region, etc.) |
## Returns

A JSON configuration object for use in `create_vectorizer()`.
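To illustrate, the returned object is passed as the `embedding` argument of `create_vectorizer()`. This is a sketch only: the `ai.` schema prefix, source table, column name, and chunking function are assumptions for the example, not fixed names:

```sql
-- Hypothetical end-to-end sketch: wire the embedding config into a vectorizer.
SELECT ai.create_vectorizer(
    'public.documents'::regclass,          -- assumed source table
    embedding => ai.embedding_litellm(
        'huggingface/BAAI/bge-small-en-v1.5',  -- illustrative model
        384,
        api_key_name => 'HUGGINGFACE_API_KEY'
    ),
    chunking => ai.chunking_recursive_character_text_splitter('content')
);
```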
## Supported providers

LiteLLM supports 100+ providers, including:

- OpenAI and Azure OpenAI
- AWS Bedrock
- Google Vertex AI
- Hugging Face
- Cohere
- Anthropic
- And many more
## Related functions

- `embedding_openai()`: direct OpenAI integration
- `embedding_ollama()`: local Ollama models
- `embedding_voyageai()`: Voyage AI models