base

AgentConfig

Bases: BaseSettings

General config settings for an LLM agent. This config is nested, combining the configs of various components.

Agent(config=AgentConfig())

Bases: ABC

An Agent is an abstraction that typically (but not necessarily) encapsulates an LLM.

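For orientation, a minimal construction sketch. The `name` value is illustrative; other AgentConfig fields (llm, etc.) are left at their defaults, and an LLM configured via environment variables (e.g. OPENAI_API_KEY) is assumed for the later examples.

```python
# A minimal sketch: construct a bare Agent from a default config.
from langroid.agent.base import Agent, AgentConfig

config = AgentConfig(name="demo-agent")  # illustrative name
agent = Agent(config)
```
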
indent (writable property)

Indentation to print before any responses from the agent's entities.

all_llm_tools_known (property)

All known tools; this may extend self.llm_tools_known.

init_state()

entity_responders()

Sequence of (entity, response_method) pairs. This sequence is used in a Task to respond to the current pending message. See Task.step() for details.

Returns: Sequence of (entity, response_method) pairs.

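A hypothetical sketch of how a task loop might consume these pairs; `pending_message` is a stand-in for the task's current pending message, and the real logic lives in Task.step().

```python
# Try each (entity, responder) pair in order until one responds.
for entity, responder in agent.entity_responders():
    result = responder(pending_message)  # pending_message: hypothetical
    if result is not None:
        print(f"{entity} responded")
        break
```
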
entity_responders_async()

Async version of entity_responders. See there for details.

enable_message_handling(message_class=None)

Enable the agent to RESPOND to (i.e. handle) a "tool" message of a specific type from the LLM. Also "registers" (i.e. adds) the message_class to the self.llm_tools_map dict.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
message_class | Optional[Type[ToolMessage]] | The message class to enable; if None, all known message classes are enabled for handling. | None |

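A minimal sketch of the usual pattern, assuming the langroid convention that the handler method name matches the tool's `request` field. `SquareTool` and `MyAgent` are illustrative names, reused in later examples.

```python
from langroid.agent.base import Agent, AgentConfig
from langroid.agent.tool_message import ToolMessage

class SquareTool(ToolMessage):
    request: str = "square"  # handler method name, by convention
    purpose: str = "To compute the square of a <number>."
    number: float

class MyAgent(Agent):
    def square(self, msg: SquareTool) -> str:
        return str(msg.number ** 2)

agent = MyAgent(AgentConfig())
agent.enable_message_handling(SquareTool)  # adds to self.llm_tools_map
```
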
disable_message_handling(message_class=None)

Disable a message class from being handled by this Agent.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
message_class | Optional[Type[ToolMessage]] | The message class to disable. If None, all message classes are disabled. | None |

sample_multi_round_dialog()

Generate a sample multi-round dialog based on enabled message classes.

Returns: str: The sample dialog string.

create_agent_response(content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')

Template for agent_response.

render_agent_response(results)

Render the response from the agent, typically from tool-handling.

Args:
    results: results from tool-handling; may be a string, a dict of tool results, or a ChatDocument.

agent_response_async(msg=None) (async)

Async version of agent_response. See there for details.

agent_response(msg=None)

Response from the "agent itself", typically (but not only) used to handle the LLM's "tool message" or function_call (e.g. OpenAI function_call).

Args:
    msg (str|ChatDocument): the input to respond to; either a string containing a valid JSON-structured "tool message", or a ChatDocument containing a function_call.

Returns:
    Optional[ChatDocument]: the response, packaged as a ChatDocument

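A minimal sketch, reusing the hypothetical SquareTool/MyAgent from above: a string containing a valid tool-formatted JSON substring is handled by the agent entity.

```python
response = agent.agent_response('{"request": "square", "number": 5}')
if response is not None:
    print(response.content)  # expected: "25.0"
```
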
process_tool_results(results, id2result, tool_calls=None)

Process results from a response, based on whether they are results of OpenAI tool-calls from THIS agent, so that we can construct an appropriate LLMMessage that contains the tool results.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
results | str | A possible string result from handling tool(s) | required |
id2result | OrderedDict[str, str] \| None | A dict of OpenAI tool id -> result, if there are multiple tool results. | required |
tool_calls | List[OpenAIToolCall] \| None | List of OpenAI tool-calls that the results are a response to. | None |

Returns:
- str: The response string
- Dict[str,str]|None: A dict of OpenAI tool id -> result, if there are multiple tool results.
- str|None: tool_id if there was a single tool result

response_template(e, content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')

Template for response from entity e.

create_user_response(content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')

Template for user_response.

user_can_respond(msg=None)

Whether the user can respond to a message.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | str \| ChatDocument | the message to respond to. | None |

user_response_async(msg=None) (async)

Async version of user_response. See there for details.

user_response(msg=None)

Get the user's response to the current message. Allows a (human) user to intervene with an actual answer, or quit using "q" or "x".

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | str \| ChatDocument | the message to respond to. | None |

Returns:

Type | Description |
---|---|
Optional[ChatDocument] | User response, packaged as a ChatDocument |

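A minimal sketch (interactive: the human types an answer, or "q"/"x" to quit, per the docstring above).

```python
reply = agent.user_response("Do you approve this plan?")
if reply is not None:
    print(reply.content)  # whatever the human typed
```
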
llm_can_respond(message=None)

Whether the LLM can respond to a message.

Args:
    message (str|ChatDocument): message or ChatDocument object to respond to.

can_respond(message=None)

Whether the agent can respond to a message. Used in Task.py to skip a sub-task when we know it would not respond.

Args:
    message (str|ChatDocument): message or ChatDocument object to respond to.

create_llm_response(content=None, content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')

Template for llm_response.

llm_response_async(message=None) (async)

Async version of llm_response. See there for details.

llm_response(message=None)

LLM response to a prompt.

Args:
    message (str|ChatDocument): prompt string, or ChatDocument object

Returns:

Type | Description |
---|---|
Optional[ChatDocument] | Response from LLM, packaged as a ChatDocument |

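A minimal sketch, assuming an LLM is configured (e.g. via the OPENAI_API_KEY environment variable).

```python
answer = agent.llm_response("What is the capital of France?")
if answer is not None:
    print(answer.content)  # plain-text LLM reply
```
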
has_tool_message_attempt(msg)

Check whether msg contains a Tool/fn-call attempt (by the LLM).

CAUTION: This uses self.get_tool_messages(msg), which as a side-effect may update msg.tool_messages when msg is a ChatDocument, if there are any tools in msg.

has_only_unhandled_tools(msg)

Does the msg have at least one tool, and none of the tools in the msg are handleable by this agent?

get_tool_messages(msg, all_tools=False)

Get ToolMessages recognized in msg, handle-able by this agent.

NOTE: as a side-effect, this will update msg.tool_messages when msg is a ChatDocument and msg contains tool messages. The intent is that this update should happen ONLY within agent_response() or agent_response_async(); in other words, we want to persist msg.tool_messages only AFTER the agent has had a chance to handle the tools.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | str \| ChatDocument | the message to extract tools from. | required |
all_tools | bool | whether to return all recognized tools, rather than only those handle-able by this agent. | False |

Returns:

Type | Description |
---|---|
List[ToolMessage] | list of ToolMessage objects |

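A minimal sketch with the hypothetical SquareTool: extract recognized tools from a raw LLM string without handling them.

```python
tools = agent.get_tool_messages('{"request": "square", "number": 3}')
for tool in tools:
    print(type(tool).__name__, tool.number)  # expected: SquareTool 3.0
```
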
get_formatted_tool_messages(input_str, from_llm=True)

Returns ToolMessage objects (tools) corresponding to tool-formatted substrings, if any.

ASSUMPTION: these tools are either ALL JSON-based, or ALL XML-based (i.e. not a mix of both).

Terminology: a "formatted tool msg" is one that the LLM generates as part of its raw string output, rather than within a JSON object in the API response (i.e. this method does not extract tools/fns returned by OpenAI's tools/fns API or similar APIs).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_str | str | input string, typically a message sent by an LLM | required |
from_llm | bool | whether the input was generated by the LLM. If so, we track malformed tool calls. | True |

Returns:

Type | Description |
---|---|
List[ToolMessage] | list of ToolMessage objects |

get_function_call_class(msg)

From a ChatDocument (constructed from an LLM response), get the ToolMessage corresponding to the function_call, if it exists.

get_oai_tool_calls_classes(msg)

From a ChatDocument (constructed from an LLM response), get a list of ToolMessages corresponding to the tool_calls, if any.

tool_validation_error(ve)

Handle a validation error raised when parsing a tool message, when a legitimate tool name is used but it has missing/bad fields.

Args:
    ve (ValidationError): the exception raised when parsing the tool message.

Returns:
    str: the error message to send back to the LLM.

handle_message_async(msg) (async)

Async version of handle_message. See there for details.

handle_message(msg)

Handle a "tool" message: either a string containing one or more valid "tool" JSON substrings, or a ChatDocument containing a function_call attribute. Handle with the corresponding handler method, and return the results as a combined string.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | str \| ChatDocument | The string or ChatDocument to handle | required |

Returns:

None | str | OrderedDict[str, str] | ChatDocument: the result of the handler method, which can be:
- None if no tools were successfully handled, or no tools are present
- str if langroid-native JSON tools were handled and their results concatenated, OR there is a SINGLE OpenAI tool-call (we do this so the common scenario of a single tool/fn-call has a simple behavior)
- Dict[str, str] if multiple OpenAI tool-calls were handled (the dict is an id -> result map)
- ChatDocument if a handler returned a ChatDocument, intended to be the final response of the agent_response method

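A minimal sketch, again with the hypothetical SquareTool: a single native JSON tool yields a plain string result.

```python
result = agent.handle_message('{"request": "square", "number": 4}')
print(result)  # expected: "16.0"
```
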
handle_message_fallback(msg)

Fallback method for the case where the msg has no tools that can be handled by this agent. This method can be overridden by subclasses, e.g., to create a "reminder" message when a tool is expected but the LLM "forgot" to generate one.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | str \| ChatDocument | The input msg to handle | required |

Returns:
    Any: The result of the handler method

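A minimal sketch of the override suggested above, reminding the LLM to use the hypothetical square tool.

```python
class RemindingAgent(MyAgent):
    def handle_message_fallback(self, msg):
        # Nudge the LLM when it was expected to use a tool but didn't.
        return "Please respond with the `square` tool, in JSON format."
```
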
to_ChatDocument(msg, orig_tool_name=None, chat_doc=None, author_entity=Entity.AGENT)

Convert the result of a responder (agent_response or llm_response, or task.run()), or of a tool handler, or of handle_message_fallback, to a ChatDocument, to enable handling by other responders/tasks in a task loop possibly involving multiple agents.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
msg | Any | The result of a responder or tool handler or task.run() | required |
orig_tool_name | str | The original tool name that generated the response, if any. | None |
chat_doc | ChatDocument | The original ChatDocument object, if any, that the result relates to. | None |
author_entity | Entity | The intended author of the result ChatDocument | AGENT |

from_ChatDocument(msg, output_type)

Extract a desired output_type from a ChatDocument object.

We use this fallback order:
- if msg.content_any exists and matches the output_type, return it
- if msg.content exists and output_type is str, return it
- if output_type is a ToolMessage, return the first tool in msg.tool_messages
- if output_type is a list of ToolMessage, return all tools in msg.tool_messages
- search for a tool in msg.tool_messages that has a field of output_type, and if found, return that field value
- return None if all of the above fail

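A minimal sketch of the fallback order above, extracting the hypothetical SquareTool from an LLM response.

```python
chat_doc = agent.llm_response("Use the square tool on the number 7.")
if chat_doc is not None:
    tool = agent.from_ChatDocument(chat_doc, SquareTool)  # SquareTool or None
```
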
handle_tool_message_async(tool, chat_doc=None) (async)

Async version of handle_tool_message. See there for details.

handle_tool_message(tool, chat_doc=None)

Respond to a tool request from the LLM, in the form of a ToolMessage object.

Args:
    tool: ToolMessage object representing the tool request.
    chat_doc: Optional ChatDocument object containing the tool request. This is passed to the tool-handler method only if it has a chat_doc argument.

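A minimal sketch: invoke the handler directly with an already-parsed instance of the hypothetical SquareTool.

```python
result = agent.handle_tool_message(SquareTool(number=6))
print(result)  # expected: "36.0"
```
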
update_token_usage(response, prompt, stream, chat=True, print_response_stats=True)

Updates the response.usage object (token usage and cost fields) if needed. An update is needed only if:
- stream is True (i.e. streaming was enabled), and
- the response was NOT obtained from cache, and
- the API did NOT provide the usage/cost fields during streaming (as of Sep 2024, the OpenAI API provides these; for other APIs this may not necessarily be the case).

Parameters:

Name | Type | Description | Default |
---|---|---|---|
response | LLMResponse | LLMResponse object | required |
prompt | str \| List[LLMMessage] | prompt or list of LLMMessage objects | required |
stream | bool | whether to update the usage in the response object if the response is not cached. | required |
chat | bool | whether this is a chat model or a completion model | True |
print_response_stats | bool | whether to print the response stats | True |

ask_agent(agent, request, no_answer=NO_ANSWER, user_confirm=True)

Send a request to another agent, possibly after confirming with the user. This is not currently used, since we rely on the task loop and RecipientTool to address requests to other agents. It is generally best to avoid using this method.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
agent | Agent | agent to ask | required |
request | str | request to send | required |
no_answer | str | expected response when the agent does not know the answer | NO_ANSWER |
user_confirm | bool | whether to gate the request with a human confirmation | True |

Returns:

Name | Type | Description |
---|---|---|
str | Optional[str] | response from agent |

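A minimal sketch, per the caveat above that this method is best avoided; user_confirm=False skips the human confirmation gate.

```python
expert = MyAgent(AgentConfig(name="Expert"))
answer = agent.ask_agent(expert, "What is the square of 9?", user_confirm=False)
```
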