models
langroid/embedding_models/models.py
FastEmbedEmbeddingsConfig
Bases: EmbeddingModelsConfig
Config for qdrant/fastembed embeddings; see https://github.com/qdrant/fastembed
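For orientation, here is a minimal sketch of constructing this config; the `model_name` and `batch_size` field names and values are assumptions for illustration, so check the class definition for the actual fields and defaults.

```python
from langroid.embedding_models.models import FastEmbedEmbeddingsConfig

# Field names/values below are illustrative assumptions -- verify them
# against the FastEmbedEmbeddingsConfig definition before relying on them.
config = FastEmbedEmbeddingsConfig(
    model_name="BAAI/bge-small-en-v1.5",  # a model supported by fastembed
    batch_size=256,
)
```

Since this derives from `EmbeddingModelsConfig`, it can be passed anywhere an embedding config is expected.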
EmbeddingFunctionCallable(model, batch_size=512)
A callable class designed to generate embeddings for a list of texts using the OpenAI API, with automatic retries on failure.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model` | `OpenAIEmbeddings` | An instance of OpenAIEmbeddings that provides configuration and utilities for generating embeddings. |
Methods:

| Name | Description |
|---|---|
| `__call__` | `(List[str]) -> Embeddings`: Generate embeddings for a list of input texts. |
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `OpenAIEmbeddings` | An instance of OpenAIEmbeddings to use for generating embeddings. | _required_ |
| `batch_size` | `int` | Batch size. | `512` |
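A minimal usage sketch, assuming a valid `OPENAI_API_KEY` in the environment and default `OpenAIEmbeddingsConfig` settings:

```python
from langroid.embedding_models.models import (
    EmbeddingFunctionCallable,
    OpenAIEmbeddings,
    OpenAIEmbeddingsConfig,
)

# Assumes OPENAI_API_KEY is set in the environment.
embed_model = OpenAIEmbeddings(config=OpenAIEmbeddingsConfig())

# Wrap the model in the batching, auto-retrying callable.
embed_fn = EmbeddingFunctionCallable(embed_model, batch_size=512)

vectors = embed_fn(["first document", "second document"])
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimension
```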
OpenAIEmbeddings(config=OpenAIEmbeddingsConfig())
Bases: EmbeddingModel
truncate_texts(texts)
Truncate texts to the embedding model's context length. TODO: Maybe we should show a warning and consider doing T5 summarization?
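To make the intent concrete, here is a rough, illustrative sketch of token-level truncation (not necessarily how `truncate_texts` is implemented; the encoding choice and context length are assumptions):

```python
from typing import List

import tiktoken


def truncate_to_context(texts: List[str], context_length: int) -> List[str]:
    """Illustrative only: clip each text to at most `context_length` tokens."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding for OpenAI embedding models
    return [enc.decode(enc.encode(t)[:context_length]) for t in texts]


# e.g. clip to a hypothetical 8191-token embedding context window
short_texts = truncate_to_context(["a very long document ..."], 8191)
```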
embedding_model(embedding_fn_type='openai')
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `embedding_fn_type` | `str` | "openai" or "sentencetransformer" (others soon) | `'openai'` |
Returns: EmbeddingModel
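The dispatch is by string name. A minimal sketch of how such a factory typically looks and is used (illustrative, not the library's exact code; `SentenceTransformerEmbeddings` is assumed to be the sibling class behind the "sentencetransformer" option):

```python
from langroid.embedding_models.models import (
    OpenAIEmbeddings,
    SentenceTransformerEmbeddings,  # assumed to live in this module
)


def embedding_model_sketch(embedding_fn_type: str = "openai"):
    """Illustrative dispatch; see the actual embedding_model() for details."""
    if embedding_fn_type == "openai":
        return OpenAIEmbeddings
    if embedding_fn_type == "sentencetransformer":
        return SentenceTransformerEmbeddings
    raise ValueError(f"Unknown embedding_fn_type: {embedding_fn_type}")


# Usage: obtain the embedding model class for a given type name.
ModelCls = embedding_model_sketch("openai")
```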