base
langroid/language_models/base.py
LLMConfig
¶
Bases: BaseSettings
Common configuration for all language models.
LLMFunctionCall
¶
Bases: BaseModel
Structure of LLM response indicating it "wants" to call a function.
Modeled after the OpenAI spec for the function_call field in the ChatCompletion API.
from_dict(message)
staticmethod
¶
Initialize from dictionary. Args: message: dictionary containing fields to initialize
Source code in langroid/language_models/base.py
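As a rough illustration of what `from_dict` does, here is a standalone sketch (not langroid's implementation) using a stand-in dataclass whose `name`/`arguments` fields follow the OpenAI `function_call` spec this class is modeled after:

```python
import json
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class FunctionCallSketch:
    """Stand-in for LLMFunctionCall; fields follow the OpenAI function_call spec."""
    name: str
    arguments: Optional[Dict[str, Any]] = None

    @staticmethod
    def from_dict(message: Dict[str, Any]) -> "FunctionCallSketch":
        # The OpenAI API returns `arguments` as a JSON-encoded string; decode it.
        args = message.get("arguments")
        if isinstance(args, str):
            args = json.loads(args)
        return FunctionCallSketch(name=message["name"], arguments=args)


fc = FunctionCallSketch.from_dict(
    {"name": "get_weather", "arguments": '{"city": "Paris"}'}
)
```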
LLMFunctionSpec
¶
Bases: BaseModel
Description of a function available for the LLM to use.
To be used when calling the LLM chat() method with the functions parameter.
Modeled after the OpenAI spec for the functions field in the ChatCompletion API.
OpenAIToolCall
¶
Bases: BaseModel
Represents a single tool call in a list of tool calls generated by OpenAI LLM API. See https://platform.openai.com/docs/api-reference/chat/create
Attributes:

Name | Type | Description
---|---|---
`id` | `str \| None` | The id of the tool call.
`type` | `ToolTypes` | The type of the tool call; only "function" is currently possible (7/26/24).
`function` | `LLMFunctionCall \| None` | The function call.
from_dict(message)
staticmethod
¶
Initialize from dictionary. Args: message: dictionary containing fields to initialize
Source code in langroid/language_models/base.py
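For context, this is the shape of a single entry in the `tool_calls` list returned by the OpenAI chat API, which `OpenAIToolCall` mirrors (the `id` and function name below are illustrative, not from the library):

```python
# One entry from an OpenAI-style `tool_calls` list; values are made up.
tool_call = {
    "id": "call_abc123",  # str | None
    "type": "function",   # only "function" is currently possible
    "function": {
        "name": "search_docs",
        "arguments": '{"query": "token usage"}',
    },
}

name = tool_call["function"]["name"]
```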
LLMTokenUsage
¶
Bases: BaseModel
Usage of tokens by an LLM.
Role
¶
Bases: str, Enum
Possible roles for a message in a chat.
LLMMessage
¶
Bases: BaseModel
Class representing an entry in the msg-history sent to the LLM API. It could be one of these:

- a user message
- an LLM ("Assistant") response
- a fn-call or tool-call-list from an OpenAI-compatible LLM API response
- a result or results from executing a fn or tool-call(s)
api_dict(has_system_role=True)
¶
Convert to dictionary for API request, keeping ONLY the fields that are expected in an API call! E.g., DROP the tool_id, since it is only for use in the Assistant API, not the completion API.
Parameters:

Name | Type | Description | Default
---|---|---|---
`has_system_role` | `bool` | whether the message has a system role (if not, set to "user" role) | `True`
Returns: dict: dictionary representation of LLM message
Source code in langroid/language_models/base.py
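The two behaviors described above can be sketched standalone (this is an illustration of the documented logic, not langroid's code); messages are plain dicts here rather than LLMMessage objects:

```python
from typing import Any, Dict


def api_dict_sketch(msg: Dict[str, Any], has_system_role: bool = True) -> Dict[str, Any]:
    """Keep only API-relevant fields (DROP tool_id, which is Assistant-API-only)
    and demote "system" to "user" when the backend has no system role."""
    d = {k: v for k, v in msg.items() if k != "tool_id"}
    if not has_system_role and d.get("role") == "system":
        d["role"] = "user"
    return d


out = api_dict_sketch(
    {"role": "system", "content": "Be terse.", "tool_id": "t1"},
    has_system_role=False,
)
```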
LLMResponse
¶
Bases: BaseModel
Class representing response from LLM.
to_LLMMessage()
¶
Convert LLM response to an LLMMessage, to be included in the message-list sent to the API. This is currently NOT used in any significant way in the library, and is only provided as a utility to construct a message list for the API when directly working with an LLM object.
In a ChatAgent, an LLM response is first converted to a ChatDocument, which is in turn converted to an LLMMessage via ChatDocument.to_LLMMessage().
See ChatAgent._prep_llm_messages() and ChatAgent.llm_response_messages.
Source code in langroid/language_models/base.py
get_recipient_and_message()
¶
If message or function_call of an LLM response contains an explicit recipient name, return this recipient name and the message stripped of the recipient name, if specified.
Two cases:
(a) message contains the addressing string "TO: <name> <content>", or
(b) message is empty and function_call/tool_call has an explicit recipient
Returns:

Type | Description
---|---
`str` | name of recipient, which may be empty string if no recipient
`str` | content of message
Source code in langroid/language_models/base.py
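Case (a) can be sketched as follows; note that the exact addressing syntax langroid recognizes may differ, and this only illustrates the return convention of (recipient, content) with an empty recipient string when the message is unaddressed:

```python
import re
from typing import Tuple


def split_recipient_sketch(message: str) -> Tuple[str, str]:
    """Strip a leading 'TO: <name>' address from a message, returning
    (recipient, content); recipient is "" if the message is unaddressed."""
    m = re.match(r"^TO:\s*(\S+)\s+(.*)$", message, flags=re.DOTALL)
    if m:
        return m.group(1), m.group(2)
    return "", message


recipient, content = split_recipient_sketch("TO: Searcher find recent papers")
```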
LanguageModel(config=LLMConfig())
¶
Bases: ABC
Abstract base class for language models.
Source code in langroid/language_models/base.py
create(config)
staticmethod
¶
Create a language model. Args: config: configuration for language model Returns: instance of language model
Source code in langroid/language_models/base.py
user_assistant_pairs(lst)
staticmethod
¶
Given an even-length sequence of strings, split into a sequence of pairs
Parameters:

Name | Type | Description | Default
---|---|---|---
`lst` | `List[str]` | sequence of strings | required

Returns:

Type | Description
---|---
`List[Tuple[str, str]]` | sequence of pairs of strings
Source code in langroid/language_models/base.py
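The documented pairing behavior amounts to zipping alternate elements; a minimal standalone sketch:

```python
from typing import List, Tuple


def user_assistant_pairs_sketch(lst: List[str]) -> List[Tuple[str, str]]:
    """Pair up an even-length list of strings as (user, assistant) turns."""
    assert len(lst) % 2 == 0, "expected an even-length list"
    return list(zip(lst[0::2], lst[1::2]))


pairs = user_assistant_pairs_sketch(["hi", "hello!", "2+2?", "4"])
```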
get_chat_history_components(messages)
staticmethod
¶
From the chat history, extract system prompt, user-assistant turns, and final user msg.
Parameters:

Name | Type | Description | Default
---|---|---|---
`messages` | `List[LLMMessage]` | List of messages in the chat history | required

Returns:

Type | Description
---|---
`Tuple[str, List[Tuple[str, str]], str]` | system prompt, user-assistant turns, final user msg
Source code in langroid/language_models/base.py
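A standalone sketch of the documented split, using plain (role, content) tuples instead of LLMMessage objects (the library's handling of edge cases may differ):

```python
from typing import List, Tuple


def chat_history_components_sketch(
    messages: List[Tuple[str, str]],
) -> Tuple[str, List[Tuple[str, str]], str]:
    """Split chat history into: leading system prompt, (user, assistant)
    turns, and the final (unanswered) user message."""
    system = messages[0][1] if messages and messages[0][0] == "system" else ""
    body = messages[1:] if system else list(messages)
    final_user = body[-1][1] if body and body[-1][0] == "user" else ""
    if final_user:
        body = body[:-1]
    turns = [(u[1], a[1]) for u, a in zip(body[0::2], body[1::2])]
    return system, turns, final_user


sys_msg, turns, last = chat_history_components_sketch(
    [("system", "Be helpful."), ("user", "hi"), ("assistant", "hello"), ("user", "bye?")]
)
```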
set_stream(stream)
abstractmethod
¶
get_stream()
abstractmethod
¶
chat(messages, max_tokens=200, tools=None, tool_choice='auto', functions=None, function_call='auto')
abstractmethod
¶
Get chat-completion response from LLM.
Parameters:

Name | Type | Description | Default
---|---|---|---
`messages` | `Union[str, List[LLMMessage]]` | message-history to send to the LLM | required
`max_tokens` | `int` | max tokens to generate | `200`
`tools` | `Optional[List[OpenAIToolSpec]]` | tools available for the LLM to use in its response | `None`
`tool_choice` | `ToolChoiceTypes \| Dict[str, str \| Dict[str, str]]` | tool call mode, one of "none", "auto", "required", or a dict specifying a specific tool. | `'auto'`
`functions` | `Optional[List[LLMFunctionSpec]]` | functions available for LLM to call (deprecated) | `None`
`function_call` | `str \| Dict[str, str]` | function calling mode, "auto", "none", or a specific fn (deprecated) | `'auto'`
Source code in langroid/language_models/base.py
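To make the parameter shapes concrete, here is an illustrative set of arguments in plain OpenAI-style dicts (langroid wraps the tool/function specs in OpenAIToolSpec / LLMFunctionSpec objects; the tool name and schema below are made up). Note the dict form of `tool_choice`, which forces a specific tool:

```python
# Illustrative argument shapes for a chat() call; values are hypothetical.
request = {
    "messages": [
        {"role": "system", "content": "You are concise."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "max_tokens": 200,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                },
            },
        }
    ],
    # Either a mode string ("none"/"auto"/"required") or a dict forcing one tool:
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```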
achat(messages, max_tokens=200, tools=None, tool_choice='auto', functions=None, function_call='auto')
abstractmethod
async
¶
Async version of chat(). See chat() for details.
Source code in langroid/language_models/base.py
update_usage_cost(chat, prompts, completions, cost)
¶
Update usage cost for this LLM. Args: chat (bool): whether to update for chat or completion model prompts (int): number of tokens used for prompts completions (int): number of tokens used for completions cost (float): total token cost in USD
Source code in langroid/language_models/base.py
tot_tokens_cost()
classmethod
¶
Return total tokens used and total cost across all models.
Source code in langroid/language_models/base.py
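The bookkeeping these two methods describe can be sketched with a minimal accumulator (an illustration in the spirit of update_usage_cost / tot_tokens_cost, not the library's per-model accounting):

```python
from dataclasses import dataclass


@dataclass
class UsageSketch:
    """Minimal token/cost accumulator for one model."""
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost: float = 0.0

    def update(self, prompts: int, completions: int, cost: float) -> None:
        self.prompt_tokens += prompts
        self.completion_tokens += completions
        self.cost += cost  # total token cost in USD

    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens


u = UsageSketch()
u.update(prompts=120, completions=30, cost=0.0009)
u.update(prompts=80, completions=20, cost=0.0006)
```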
followup_to_standalone(chat_history, question)
¶
Given a chat history and a question, convert it to a standalone question. Args: chat_history: list of tuples of (question, answer) question: follow-up question
Returns: standalone version of the question
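One way such a rewrite is typically prompted can be sketched as below; the actual prompt wording langroid sends to the LLM may differ:

```python
from typing import List, Tuple


def standalone_prompt_sketch(
    chat_history: List[Tuple[str, str]], question: str
) -> str:
    """Build a prompt asking the LLM to rewrite a follow-up question so it
    stands on its own, given the preceding (question, answer) history."""
    history = "\n".join(f"Q: {q}\nA: {a}" for q, a in chat_history)
    return (
        "Given the conversation below, rewrite the final question so it can "
        f"be understood on its own.\n\n{history}\n\nFollow-up question: {question}"
    )


prompt = standalone_prompt_sketch(
    [("Who wrote Dune?", "Frank Herbert")], "When was it published?"
)
```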