What is LiteLLM?
LiteLLM is a unified interface that lets you call different LLM APIs using the same format. Instead of learning each provider's API, you use one consistent interface and switch between providers by changing the model name.
Key features
- 100+ providers: Access OpenAI, Azure, AWS Bedrock, Google Vertex AI, Anthropic, Cohere, and more
- Unified API: One interface for all providers
- Easy switching: Change providers by updating the model name
- Fallback support: Automatically retry failed requests with different providers
- Cost tracking: Built-in usage and cost tracking across providers
Prerequisites
To use LiteLLM functions, you need:
- API keys for the providers you want to use
- API keys configured in your database (see configuration section below)
Quick start
Use OpenAI through LiteLLM
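A minimal call might look like the following sketch. It assumes litellm_embed is exposed as a SQL function taking a model name and the text to embed, and that your OpenAI API key is already configured (see the configuration section below); adjust the schema and parameter names to your installation.

```sql
-- Sketch: embed text with an OpenAI model via LiteLLM.
-- OpenAI model names are used directly, with no provider prefix.
SELECT litellm_embed(
    'text-embedding-ada-002',  -- OpenAI embedding model
    'Hello, world'             -- text to embed
);
```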
Use Azure OpenAI
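For Azure, the model name is the deployment name prefixed with azure/. The sketch below is illustrative: the deployment name and endpoint are placeholders, and passing api_base through extra_options is an assumption about how this installation forwards provider settings to LiteLLM.

```sql
-- Sketch: embed via an Azure OpenAI deployment.
-- The 'azure/' prefix routes the request to Azure.
SELECT litellm_embed(
    'azure/your-deployment-name',       -- placeholder deployment name
    'Hello, world',
    extra_options => '{"api_base": "https://your-resource.openai.azure.com"}'::jsonb
);
```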
Use AWS Bedrock
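For Bedrock, prefix the Bedrock model ID with bedrock/. This is a sketch: the model ID is one example Bedrock embedding model, and supplying the AWS region via extra_options (using LiteLLM's aws_region_name parameter) is an assumption; AWS credentials are assumed to be configured separately.

```sql
-- Sketch: embed via AWS Bedrock; the model ID carries the 'bedrock/' prefix.
SELECT litellm_embed(
    'bedrock/amazon.titan-embed-text-v2:0',  -- example Bedrock model ID
    'Hello, world',
    extra_options => '{"aws_region_name": "us-east-1"}'::jsonb
);
```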
Use Google Vertex AI
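For Vertex AI, prefix the model name with vertex_ai/. In this sketch, the project and location passed through extra_options use LiteLLM's vertex_project and vertex_location parameter names; the project ID is a placeholder, and Google Cloud credentials are assumed to be configured separately.

```sql
-- Sketch: embed via Google Vertex AI; the model carries the 'vertex_ai/' prefix.
SELECT litellm_embed(
    'vertex_ai/textembedding-gecko',  -- example Vertex AI embedding model
    'Hello, world',
    extra_options => '{"vertex_project": "my-project", "vertex_location": "us-central1"}'::jsonb
);
```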
Configuration
LiteLLM typically requires provider-specific configuration through environment variables or the extra_options parameter. Consult the LiteLLM documentation for provider-specific setup.
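As an illustration, per-call provider settings can be passed through extra_options. This is a sketch: the keys shown (api_base, api_version) are LiteLLM parameter names, the values are placeholders, and forwarding extra_options as JSON is an assumption about this installation.

```sql
-- Sketch: passing provider-specific options per call instead of relying
-- on environment variables. Keys are forwarded to LiteLLM as-is.
SELECT litellm_embed(
    'azure/your-deployment-name',
    'Hello, world',
    extra_options => '{
        "api_base": "https://your-resource.openai.azure.com",
        "api_version": "2024-02-01"
    }'::jsonb
);
```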
Available functions
Embeddings
litellm_embed(): generate embeddings from any supported provider
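Because litellm_embed() is an ordinary SQL function, it can be applied over a table to embed many rows at once. The sketch below assumes a hypothetical documents table with id and content columns.

```sql
-- Sketch: generate an embedding for every row of a hypothetical table.
SELECT id,
       litellm_embed('text-embedding-ada-002', content) AS embedding
FROM documents;
```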
Model naming convention
LiteLLM uses a simple naming convention:
- OpenAI models: use the model name directly (e.g., text-embedding-ada-002)
- Other providers: prefix with the provider name (e.g., azure/deployment-name, bedrock/model-id, vertex_ai/model-name)