base
langroid/language_models/base.py
LLMFunctionCall
Bases: BaseModel
Structure of an LLM response indicating it "wants" to call a function. Modeled after the OpenAI spec for the function_call field in the ChatCompletion API.
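For illustration, a minimal sketch of constructing one directly; the name and arguments fields are assumed from the OpenAI function_call structure this class mirrors:

```python
from langroid.language_models.base import LLMFunctionCall

# Sketch: fields assumed to mirror the OpenAI function_call structure
fc = LLMFunctionCall(
    name="get_weather",                        # function the LLM "wants" to call
    arguments={"city": "Tokyo", "unit": "C"},  # arguments for that function
)
print(fc.name, fc.arguments)
```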
from_dict(message)
staticmethod
Initialize from a dictionary.
Args:
- message: dictionary containing fields to initialize
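A usage sketch, assuming the input is the raw OpenAI-style dict in which arguments may arrive as a JSON string:

```python
from langroid.language_models.base import LLMFunctionCall

# Dict as it might appear in an OpenAI ChatCompletion response (illustrative)
raw = {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}
fc = LLMFunctionCall.from_dict(raw)
```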
LLMFunctionSpec
Bases: BaseModel
Description of a function available for the LLM to use. To be used when calling the LLM chat() method with the functions parameter. Modeled after the OpenAI spec for the functions field in the ChatCompletion API.
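A sketch of a spec that could be passed via the functions parameter of chat(); the parameters field is assumed to be a JSON-schema dict, as in the OpenAI spec:

```python
from langroid.language_models.base import LLMFunctionSpec

spec = LLMFunctionSpec(
    name="get_weather",
    description="Get the current weather for a city",
    parameters={  # JSON schema for the function's arguments (assumed format)
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
```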
LLMMessage
Bases: BaseModel
Class representing a message sent to, or received from, an LLM.
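Constructing messages might look like this (Role values assumed from the Role enum in this module):

```python
from langroid.language_models.base import LLMMessage, Role

messages = [
    LLMMessage(role=Role.SYSTEM, content="You are a concise assistant."),
    LLMMessage(role=Role.USER, content="What is the capital of France?"),
]
```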
api_dict()
Convert to a dictionary for an API request, dropping the tool_id field since it is only used in the Assistant API, not the completion API.
Returns:
- dict: dictionary representation of the LLM message
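A usage sketch; the returned dict follows the completion API format, with tool_id omitted:

```python
from langroid.language_models.base import LLMMessage, Role

msg = LLMMessage(role=Role.USER, content="Hello")
payload = msg.api_dict()  # plain dict suitable for the completion API; tool_id is dropped
```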
LLMResponse
Bases: BaseModel
Class representing a response from an LLM.
get_recipient_and_message()
If the message (or function_call) of an LLM response contains an explicit recipient name, return this recipient name along with the message stripped of the recipient name, if one is specified.
Two cases:
(a) message contains "TO: <name> <content>", or
(b) message is empty and function_call contains to: <name>
Returns:

| Type | Description |
|---|---|
| str | name of recipient, which may be empty string if no recipient |
| str | content of message |
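A sketch of case (a), where the message text carries an explicit recipient prefix; the exact "TO:" format and the message field name are assumed from the docstring above:

```python
from langroid.language_models.base import LLMResponse

response = LLMResponse(message="TO: Planner schedule the next step")
recipient, content = response.get_recipient_and_message()
# recipient -> "Planner"; content -> the message with the recipient prefix stripped
```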
LanguageModel(config=LLMConfig())
Bases: ABC
Abstract base class for language models.
create(config)
staticmethod
Create a language model.
Args:
- config: configuration for language model
Returns:
- instance of language model
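A sketch of creating a concrete model via this factory, assuming the usual OpenAIGPTConfig from langroid.language_models.openai_gpt (a valid API key is needed to actually call the model):

```python
from langroid.language_models.base import LanguageModel
from langroid.language_models.openai_gpt import OpenAIGPTConfig

config = OpenAIGPTConfig()          # defaults; chat model etc. can be overridden
llm = LanguageModel.create(config)  # returns a concrete LanguageModel instance
```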
user_assistant_pairs(lst)
staticmethod
Given an even-length sequence of strings, split it into a sequence of pairs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| lst | List[str] | sequence of strings | required |

Returns:

| Type | Description |
|---|---|
| List[Tuple[str, str]] | sequence of pairs of strings |
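Being a staticmethod, it can be called on the class; a quick sketch:

```python
from langroid.language_models.base import LanguageModel

turns = LanguageModel.user_assistant_pairs(
    ["hi", "hello!", "how are you?", "doing well"]
)
# -> [("hi", "hello!"), ("how are you?", "doing well")]
```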
get_chat_history_components(messages)
staticmethod
From the chat history, extract system prompt, user-assistant turns, and final user msg.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| messages | List[LLMMessage] | List of messages in the chat history | required |

Returns:

| Type | Description |
|---|---|
| Tuple[str, List[Tuple[str, str]], str] | system prompt, user-assistant turns, final user msg |
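A sketch of splitting a chat history into its components (Role values assumed from the Role enum in this module):

```python
from langroid.language_models.base import LanguageModel, LLMMessage, Role

history = [
    LLMMessage(role=Role.SYSTEM, content="You are helpful."),
    LLMMessage(role=Role.USER, content="Hi"),
    LLMMessage(role=Role.ASSISTANT, content="Hello!"),
    LLMMessage(role=Role.USER, content="Summarize our chat."),
]
system, turns, final_user = LanguageModel.get_chat_history_components(history)
# system -> "You are helpful."; turns -> [("Hi", "Hello!")]; final_user -> "Summarize our chat."
```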
set_stream(stream)
abstractmethod
get_stream()
abstractmethod
update_usage_cost(chat, prompts, completions, cost)
Update usage cost for this LLM.
Args:
- chat (bool): whether to update for chat or completion model
- prompts (int): number of tokens used for prompts
- completions (int): number of tokens used for completions
- cost (float): total token cost in USD
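A sketch of recording usage after a call; the token counts and cost below are made-up numbers:

```python
from langroid.language_models.base import LanguageModel
from langroid.language_models.openai_gpt import OpenAIGPTConfig  # assumed config class

llm = LanguageModel.create(OpenAIGPTConfig())
llm.update_usage_cost(
    chat=True,        # update the chat-model counters (vs. completion model)
    prompts=350,      # prompt tokens used (illustrative)
    completions=120,  # completion tokens used (illustrative)
    cost=0.0011,      # total cost in USD (illustrative)
)
```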
tot_tokens_cost()
classmethod
Return total tokens used and total cost across all models.
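A sketch of reading the aggregate counters, assuming a (tokens, cost) tuple is returned as the description suggests:

```python
from langroid.language_models.base import LanguageModel

total_tokens, total_cost = LanguageModel.tot_tokens_cost()
print(f"{total_tokens} tokens used, ${total_cost:.4f} total cost")
```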
followup_to_standalone(chat_history, question)
Given a chat history and a follow-up question, convert it to a standalone question.
Args:
- chat_history: list of tuples of (question, answer)
- question: follow-up question
Returns:
- standalone version of the question
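A sketch of rewriting a follow-up question into a standalone one; this makes a real LLM call, so a configured model and API key are assumed:

```python
from langroid.language_models.base import LanguageModel
from langroid.language_models.openai_gpt import OpenAIGPTConfig  # assumed config class

llm = LanguageModel.create(OpenAIGPTConfig())
history = [("Who wrote Hamlet?", "William Shakespeare.")]
standalone = llm.followup_to_standalone(history, "When was he born?")
# e.g. "When was William Shakespeare born?"
```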
get_verbatim_extract_async(question, passage)
async
Asynchronously get a verbatim extract from a passage that is relevant to a question. Async execution allows parallel calls to the LLM API.
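A sketch of fanning out several extractions with asyncio; the Document passage type (from langroid.mytypes) is an assumption based on the get_summary_answer description below:

```python
import asyncio
from langroid.language_models.base import LanguageModel
from langroid.language_models.openai_gpt import OpenAIGPTConfig  # assumed config class
from langroid.mytypes import Document, DocMetaData               # assumed passage type

llm = LanguageModel.create(OpenAIGPTConfig())
passages = [
    Document(content=text, metadata=DocMetaData(source="notes"))
    for text in ["First passage ...", "Second passage ..."]
]

async def extract_all(question: str):
    # issue the per-passage extraction calls in parallel
    return await asyncio.gather(
        *(llm.get_verbatim_extract_async(question, p) for p in passages)
    )

extracts = asyncio.run(extract_all("What year is mentioned?"))
```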
get_verbatim_extracts(question, passages)
From each passage, extract verbatim text that is relevant to a question, using concurrent API calls to the LLM.
Args:
- question: question to be answered
- passages: list of passages from which to extract relevant verbatim text
Returns:
- list of verbatim extracts from passages that are relevant to the question
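The synchronous variant fans out over the passages internally; continuing the sketch above (same assumptions about the passage type):

```python
extracts = llm.get_verbatim_extracts("What year is mentioned?", passages)
for e in extracts:
    print(e.content)  # assuming each extract comes back as a Document
```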
get_summary_answer(question, passages)
Given a question and a list of (possibly relevant) doc snippets, generate an answer if possible.
Args:
- question: question to answer
- passages: list of Document objects, each containing a possibly relevant snippet and metadata
Returns:
- a Document object containing the answer, and metadata containing source citations
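A sketch of the typical extract-then-answer flow; Document and DocMetaData are assumed from langroid.mytypes, and the layout of the citation metadata is not specified here:

```python
from langroid.language_models.base import LanguageModel
from langroid.language_models.openai_gpt import OpenAIGPTConfig  # assumed config class
from langroid.mytypes import Document, DocMetaData               # assumed types

llm = LanguageModel.create(OpenAIGPTConfig())
question = "When was the bridge completed?"
passages = [
    Document(
        content="The bridge was completed in 1937.",
        metadata=DocMetaData(source="notes.txt"),
    ),
]
extracts = llm.get_verbatim_extracts(question, passages)
answer_doc = llm.get_summary_answer(question, extracts)
print(answer_doc.content)   # the generated answer
print(answer_doc.metadata)  # metadata including source citations
```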