Samples
Use OpenAI
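A minimal sketch of the OpenAI case, assuming the function is named ai.litellm_embed() (the name is not stated on this page) and that the key is stored in a secret called OPENAI_API_KEY:

```sql
-- Assumes the function is ai.litellm_embed() and that a secret named
-- 'OPENAI_API_KEY' holds the OpenAI API key.
SELECT ai.litellm_embed(
    'text-embedding-ada-002',           -- model
    'Hello world',                      -- input_text
    api_key_name => 'OPENAI_API_KEY'
);
```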
Use Azure OpenAI
Embed with Azure OpenAI deployment:
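A sketch under the same assumptions; the deployment name, endpoint, and API version are placeholders, and the api_base/api_version option keys are assumed LiteLLM-style names:

```sql
-- 'my-embedding-deployment' and the endpoint are hypothetical; the azure/
-- prefix routes the call to Azure OpenAI.
SELECT ai.litellm_embed(
    'azure/my-embedding-deployment',
    'Hello world',
    api_key_name  => 'AZURE_API_KEY',   -- assumed secret name
    extra_options => '{"api_base": "https://my-resource.openai.azure.com", "api_version": "2024-02-01"}'::jsonb
);
```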
Use AWS Bedrock
Embed with Amazon Titan:
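A sketch, again assuming ai.litellm_embed(); the Titan model ID and the aws_region_name option key should be checked against your Bedrock setup:

```sql
-- The bedrock/ prefix routes the call to AWS Bedrock; AWS credentials are
-- expected to come from the environment or stored secrets.
SELECT ai.litellm_embed(
    'bedrock/amazon.titan-embed-text-v1',
    'Hello world',
    extra_options => '{"aws_region_name": "us-east-1"}'::jsonb
);
```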
Use Google Vertex AI
Embed with Vertex AI:
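A sketch assuming ai.litellm_embed(); the model name, project, and location are placeholders, and vertex_project/vertex_location are assumed option keys:

```sql
-- The vertex_ai/ prefix routes the call to Google Vertex AI.
SELECT ai.litellm_embed(
    'vertex_ai/text-embedding-004',
    'Hello world',
    extra_options => '{"vertex_project": "my-gcp-project", "vertex_location": "us-central1"}'::jsonb
);
```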
Batch embeddings
Process multiple texts efficiently:
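Passing a text array instead of a single text returns one row per input, as described under Returns. A sketch, assuming ai.litellm_embed():

```sql
-- Batch form: input_texts is a TEXT[]; the result is
-- TABLE(index INT, embedding vector), one row per input.
SELECT "index", embedding
FROM ai.litellm_embed(
    'text-embedding-ada-002',
    ARRAY['first text', 'second text', 'third text'],
    api_key_name => 'OPENAI_API_KEY'
);
```

Store embeddings in a table

A sketch of filling a pgvector column from an existing table; the documents source table and the target schema are hypothetical, and text-embedding-ada-002 returns 1536-dimensional vectors:

```sql
-- Hypothetical target table; adjust the dimension and schema to your model and data.
CREATE TABLE IF NOT EXISTS document_embeddings (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)
);

-- 'documents' is a hypothetical source table with a 'content' column.
INSERT INTO document_embeddings (content, embedding)
SELECT d.content,
       ai.litellm_embed(
           'text-embedding-ada-002',
           d.content,
           api_key_name => 'OPENAI_API_KEY'
       )
FROM documents d;
```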
Arguments
| Name | Type | Default | Required | Description |
|---|---|---|---|---|
| model | TEXT | - | ✔ | Model identifier with optional provider prefix (e.g., text-embedding-ada-002, azure/deployment, bedrock/model-id) |
| input_text | TEXT | - | ✔ | Single text input to embed (use this OR input_texts) |
| input_texts | TEXT[] | - | ✔ | Array of text inputs to embed in a batch (use this OR input_text) |
| api_key | TEXT | NULL | ✖ | API key for the provider |
| api_key_name | TEXT | NULL | ✖ | Name of the secret containing the API key |
| extra_options | JSONB | NULL | ✖ | Provider-specific options (API base URL, region, project, etc.) |
| verbose | BOOLEAN | FALSE | ✖ | Enable verbose logging for debugging |
Returns
For single text input:
vector: A pgvector-compatible vector containing the embedding
For array input (input_texts):
TABLE(index INT, embedding vector): A table with an index and embedding for each input text
Provider-specific configuration
Different providers require different configurations through the extra_options parameter:
Azure OpenAI
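A sketch of the options Azure OpenAI typically needs, using assumed LiteLLM-style keys (api_base for the resource endpoint, api_version for the API version); the deployment name itself goes into the model argument with the azure/ prefix:

```sql
-- Assumed option keys and placeholder values; pass the result as extra_options.
SELECT jsonb_build_object(
    'api_base',    'https://my-resource.openai.azure.com',  -- Azure resource endpoint
    'api_version', '2024-02-01'                             -- API version
) AS extra_options;
```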
AWS Bedrock
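A sketch of the Bedrock options; aws_region_name is an assumed key, and AWS credentials are typically taken from the environment or stored secrets rather than passed inline:

```sql
-- Assumed option key and placeholder value; pass the result as extra_options.
SELECT jsonb_build_object(
    'aws_region_name', 'us-east-1'   -- Bedrock region
) AS extra_options;
```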
Google Vertex AI
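A sketch of the Vertex AI options; vertex_project and vertex_location are assumed keys matching the project and region mentioned above:

```sql
-- Assumed option keys and placeholder values; pass the result as extra_options.
SELECT jsonb_build_object(
    'vertex_project',  'my-gcp-project',   -- GCP project ID
    'vertex_location', 'us-central1'       -- Vertex AI region
) AS extra_options;
```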
Related functions
openai_embed(): direct OpenAI integration
cohere_embed(): direct Cohere integration
voyageai_embed(): direct Voyage AI integration