# base
## AgentConfig
Bases: BaseSettings
General config settings for an LLM agent. This is nested, combining configs of various components.
## Agent(config=AgentConfig())
Bases: ABC
An Agent is an abstraction that typically (but not necessarily) encapsulates an LLM.
Source code in langroid/agent/base.py
### indent (property, writable)
Indentation to print before any responses from the agent's entities.
### all_llm_tools_known (property)
All known tools; this may extend self.llm_tools_known.
### init_state()
### entity_responders()
Sequence of (entity, response_method) pairs. This sequence is used
in a Task to respond to the current pending message.
See Task.step() for details.
Returns:
Sequence of (entity, response_method) pairs.
Source code in langroid/agent/base.py
### entity_responders_async()
Async version of entity_responders. See there for details.
Source code in langroid/agent/base.py
### enable_message_handling(message_class=None)

Enable an agent to RESPOND to (i.e. handle) a "tool" message of a specific type from the LLM. Also "registers" (i.e. adds) the message_class to the self.llm_tools_map dict.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message_class | Optional[Type[ToolMessage]] | The message class to enable; if None, all known message classes are enabled for handling. | None |
Source code in langroid/agent/base.py
### disable_message_handling(message_class=None)
Disable a message class from being handled by this Agent.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message_class | Optional[Type[ToolMessage]] | The message class to disable. If None, all message classes are disabled. | None |
Source code in langroid/agent/base.py
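The enable/disable pair above boils down to maintaining a name -> class map of tool messages. A minimal standalone sketch of that behavior using stand-in classes (this is not langroid's actual implementation; MiniAgent, SquareTool, and CubeTool are hypothetical):

```python
# Illustrative sketch only: a minimal tool registry with the
# enable/disable semantics described above. All names are stand-ins.
from typing import Dict, Optional, Type


class ToolMessage:
    """Stand-in base class for tool messages."""
    name: str = ""


class SquareTool(ToolMessage):
    name = "square"


class CubeTool(ToolMessage):
    name = "cube"


class MiniAgent:
    def __init__(self) -> None:
        # All tool classes this agent knows about
        self.llm_tools_known: Dict[str, Type[ToolMessage]] = {
            "square": SquareTool,
            "cube": CubeTool,
        }
        # Tool classes currently enabled for handling
        self.llm_tools_map: Dict[str, Type[ToolMessage]] = {}

    def enable_message_handling(
        self, message_class: Optional[Type[ToolMessage]] = None
    ) -> None:
        if message_class is None:
            # None => enable every known tool class
            self.llm_tools_map.update(self.llm_tools_known)
        else:
            self.llm_tools_map[message_class.name] = message_class

    def disable_message_handling(
        self, message_class: Optional[Type[ToolMessage]] = None
    ) -> None:
        if message_class is None:
            self.llm_tools_map.clear()
        else:
            self.llm_tools_map.pop(message_class.name, None)


agent = MiniAgent()
agent.enable_message_handling()           # enable all known tools
agent.disable_message_handling(CubeTool)  # then disable one
print(sorted(agent.llm_tools_map))        # ['square']
```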
### sample_multi_round_dialog()

Generate a sample multi-round dialog based on enabled message classes.
Returns:
str: The sample dialog string.
Source code in langroid/agent/base.py
### create_agent_response(content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')
Template for agent_response.
Source code in langroid/agent/base.py
### render_agent_response(results)

Render the response from the agent, typically from tool-handling.
Args:
results: results from tool-handling, which may be a string, a dict of tool results, or a ChatDocument.
Source code in langroid/agent/base.py
### agent_response_async(msg=None) (async)

Async version of agent_response. See there for details.
Source code in langroid/agent/base.py
### agent_response(msg=None)
Response from the "agent itself", typically (but not only)
used to handle LLM's "tool message" or function_call
(e.g. OpenAI function_call).
Args:
msg (str|ChatDocument): the input to respond to: either a string containing a valid JSON-structured "tool message", or a ChatDocument containing a function_call.
Returns:
Optional[ChatDocument]: the response, packaged as a ChatDocument
Source code in langroid/agent/base.py
### process_tool_results(results, id2result, tool_calls=None)
Process results from a response, based on whether they are results of OpenAI tool-calls from THIS agent, so that we can construct an appropriate LLMMessage that contains tool results.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| results | str | A possible string result from handling tool(s) | required |
| id2result | OrderedDict[str, str] \| None | A dict of OpenAI tool id -> result, if there are multiple tool results. | required |
| tool_calls | List[OpenAIToolCall] \| None | List of OpenAI tool-calls that the results are a response to. | None |
Returns:
- str: The response string
- Dict[str, str] | None: A dict of OpenAI tool id -> result, if there are multiple tool results.
- str | None: tool_id if there was a single tool result
Source code in langroid/agent/base.py
### response_template(e, content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')
Template for response from entity e.
Source code in langroid/agent/base.py
### create_user_response(content=None, files=[], content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')
Template for user_response.
Source code in langroid/agent/base.py
### user_can_respond(msg=None)
Whether the user can respond to a message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | str \| ChatDocument | the string to respond to. | None |
Returns:
Source code in langroid/agent/base.py
### user_response_async(msg=None) (async)

Async version of user_response. See there for details.
Source code in langroid/agent/base.py
### user_response(msg=None)

Get user response to the current message. Could allow the (human) user to intervene with an actual answer, or quit using "q" or "x".
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | str \| ChatDocument | the string to respond to. | None |
Returns:
| Type | Description |
|---|---|
| Optional[ChatDocument] | User response, packaged as a ChatDocument |
Source code in langroid/agent/base.py
### llm_can_respond(message=None)

Whether the LLM can respond to a message.
Args:
message (str|ChatDocument): message or ChatDocument object to respond to.
Returns:
Source code in langroid/agent/base.py
### can_respond(message=None)

Whether the agent can respond to a message. Used in Task.py to skip a sub-task when we know it would not respond.
Args:
message (str|ChatDocument): message or ChatDocument object to respond to.
Source code in langroid/agent/base.py
### create_llm_response(content=None, content_any=None, tool_messages=[], oai_tool_calls=None, oai_tool_choice='auto', oai_tool_id2result=None, function_call=None, recipient='')
Template for llm_response.
Source code in langroid/agent/base.py
### llm_response_async(message=None) (async)

Async version of llm_response. See there for details.
Source code in langroid/agent/base.py
### llm_response(message=None)

LLM response to a prompt.
Args:
message (str|ChatDocument): prompt string, or ChatDocument object
Returns:
| Type | Description |
|---|---|
| Optional[ChatDocument] | Response from LLM, packaged as a ChatDocument |
Source code in langroid/agent/base.py
### has_tool_message_attempt(msg)
Check whether msg contains a Tool/fn-call attempt (by the LLM).
CAUTION: This uses self.get_tool_messages(msg) which as a side-effect may update msg.tool_messages when msg is a ChatDocument, if there are any tools in msg.
Source code in langroid/agent/base.py
### has_only_unhandled_tools(msg)
Does the msg have at least one tool, and none of the tools in the msg are handleable by this agent?
Source code in langroid/agent/base.py
### get_tool_messages(msg, all_tools=False)
Get ToolMessages recognized in msg, handle-able by this agent. NOTE: as a side-effect, this will update msg.tool_messages when msg is a ChatDocument and msg contains tool messages.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | str \| ChatDocument | the message to extract tools from. | required |
| all_tools | bool |  | False |
Returns:
| Type | Description |
|---|---|
| List[ToolMessage] | list of ToolMessage objects |
Source code in langroid/agent/base.py
### get_formatted_tool_messages(input_str, from_llm=True)

Returns ToolMessage objects (tools) corresponding to tool-formatted substrings, if any.
ASSUMPTION: these tools are either ALL JSON-based or ALL XML-based (i.e. not a mix of both).
Terminology: a "formatted tool msg" is one which the LLM generates as part of its raw string output, rather than within a JSON object in the API response (i.e. this method does not extract tools/fns returned by OpenAI's tools/fns API or similar APIs).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_str | str | input string, typically a message sent by an LLM | required |
| from_llm | bool | whether the input was generated by the LLM. If so, we track malformed tool calls. | True |
Returns:
| Type | Description |
|---|---|
| List[ToolMessage] | list of ToolMessage objects |
Source code in langroid/agent/base.py
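The extraction described above can be sketched for the JSON case with a naive standalone parser (this is not langroid's actual parser; the "request" field convention and the flat-brace regex are simplifying assumptions for illustration):

```python
# Illustrative sketch only: extract JSON-formatted "tool messages"
# embedded in an LLM's raw text output. Assumes each tool call is a
# flat JSON object with a "request" field naming the tool.
import json
import re
from typing import Any, Dict, List


def extract_json_tool_messages(input_str: str) -> List[Dict[str, Any]]:
    """Return dicts for each JSON object in input_str that has a 'request' key."""
    tools: List[Dict[str, Any]] = []
    # Naive matching: find non-nested {...} spans and try to parse them.
    for match in re.finditer(r"\{[^{}]*\}", input_str):
        try:
            obj = json.loads(match.group())
        except json.JSONDecodeError:
            continue  # malformed JSON; a real parser would track these
        if isinstance(obj, dict) and "request" in obj:
            tools.append(obj)
    return tools


llm_output = 'Sure, let me compute that. {"request": "square", "x": 5}'
print(extract_json_tool_messages(llm_output))
# [{'request': 'square', 'x': 5}]
```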
### get_function_call_class(msg)
From ChatDocument (constructed from an LLM Response), get the ToolMessage
corresponding to the function_call if it exists.
Source code in langroid/agent/base.py
### get_oai_tool_calls_classes(msg)
From ChatDocument (constructed from an LLM Response), get
a list of ToolMessages corresponding to the tool_calls, if any.
Source code in langroid/agent/base.py
### tool_validation_error(ve, tool_class=None)

Handle a validation error raised when parsing a tool message, when a legitimate tool name is used but it has missing/bad fields.
Args:
ve (ValidationError): The exception raised
tool_class (Optional[Type[ToolMessage]]): The tool class that failed validation
Returns:
| Name | Type | Description |
|---|---|---|
| str | str | The error message to send back to the LLM |
Source code in langroid/agent/base.py
### handle_message_async(msg) (async)

Async version of handle_message. See there for details.
Source code in langroid/agent/base.py
### handle_message(msg)

Handle a "tool" message: either a string containing one or more valid "tool" JSON substrings, or a ChatDocument containing a function_call attribute. Handle with the corresponding handler method, and return the results as a combined string.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | str \| ChatDocument | The string or ChatDocument to handle | required |
Returns:
None | str | OrderedDict[str, str] | ChatDocument: The result of the handler method can be:
- None if no tools were successfully handled, or no tools are present
- str if langroid-native JSON tools were handled and results concatenated, OR there is a SINGLE OpenAI tool-call (we do this so the common scenario of a single tool/fn-call has simple behavior)
- Dict[str, str] if multiple OpenAI tool-calls were handled (the dict is an id -> result map)
- ChatDocument if a handler returned a ChatDocument, intended to be the final response
Source code in langroid/agent/base.py
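The None/str/dict return convention described above can be sketched as a small standalone helper (combine_tool_results is a hypothetical name, not langroid's actual code):

```python
# Illustrative sketch only of the documented return-value convention:
# None / bare str / id->result dict, depending on how many tool results
# there are.
from collections import OrderedDict
from typing import Union


def combine_tool_results(
    id2result: "OrderedDict[str, str]",
) -> Union[None, str, "OrderedDict[str, str]"]:
    if not id2result:
        return None  # no tools handled, or none present
    if len(id2result) == 1:
        # single tool-call: return the bare string, so the common
        # single tool/fn-call scenario has simple behavior
        return next(iter(id2result.values()))
    return id2result  # multiple tool-calls: id -> result map


print(combine_tool_results(OrderedDict()))                 # None
print(combine_tool_results(OrderedDict(a="9")))            # 9
print(combine_tool_results(OrderedDict(a="9", b="27")))    # the dict itself
```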
### handle_message_fallback(msg)
Fallback method for the case where the msg has no tools that can be handled by this agent. This method can be overridden by subclasses, e.g., to create a "reminder" message when a tool is expected but the LLM "forgot" to generate one.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | str \| ChatDocument | The input msg to handle | required |
Returns:
Any: The result of the handler method
Source code in langroid/agent/base.py
### to_ChatDocument(msg, orig_tool_name=None, chat_doc=None, author_entity=Entity.AGENT)
Convert result of a responder (agent_response or llm_response, or task.run()), or tool handler, or handle_message_fallback, to a ChatDocument, to enable handling by other responders/tasks in a task loop possibly involving multiple agents.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| msg | Any | The result of a responder or tool handler or task.run() | required |
| orig_tool_name | str | The original tool name that generated the response, if any. | None |
| chat_doc | ChatDocument | The original ChatDocument object that | None |
| author_entity | Entity | The intended author of the result ChatDocument | AGENT |
Source code in langroid/agent/base.py
### from_ChatDocument(msg, output_type)
Extract a desired output_type from a ChatDocument object.
We use this fallback order:
- if msg.content_any exists and matches the output_type, return it
- if msg.content exists and output_type is str, return it
- if output_type is a ToolMessage, return the first tool in msg.tool_messages
- if output_type is a list of ToolMessage, return all tools in msg.tool_messages
- search for a tool in msg.tool_messages that has a field of output_type, and if found, return that field value
- return None if all the above fail
Source code in langroid/agent/base.py
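The fallback order above can be sketched with stand-in classes (Doc and Tool are hypothetical simplifications of ChatDocument and ToolMessage; only the first three steps are shown):

```python
# Illustrative sketch only of the documented fallback order, using
# stand-in dataclasses with the fields the docs mention
# (content, content_any, tool_messages).
from dataclasses import dataclass, field
from typing import Any, List, Optional


@dataclass
class Tool:
    """Stand-in for a ToolMessage."""
    x: int = 0


@dataclass
class Doc:
    """Stand-in for a ChatDocument."""
    content: str = ""
    content_any: Any = None
    tool_messages: List[Tool] = field(default_factory=list)


def from_chat_document(msg: Doc, output_type: type) -> Optional[Any]:
    # 1. content_any matching the requested type wins
    if msg.content_any is not None and isinstance(msg.content_any, output_type):
        return msg.content_any
    # 2. plain string content, if a str was requested
    if output_type is str and msg.content:
        return msg.content
    # 3. a ToolMessage type: return the first tool
    if issubclass(output_type, Tool) and msg.tool_messages:
        return msg.tool_messages[0]
    # (list-of-ToolMessage and field-search steps omitted for brevity)
    return None


doc = Doc(content="hello", tool_messages=[Tool(x=3)])
print(from_chat_document(doc, str))   # hello
print(from_chat_document(doc, Tool))  # Tool(x=3)
print(from_chat_document(Doc(), int)) # None
```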
### handle_tool_message_async(tool, chat_doc=None) (async)

Async version of handle_tool_message. See there for details.
Source code in langroid/agent/base.py
### handle_tool_message(tool, chat_doc=None)

Respond to a tool request from the LLM, in the form of a ToolMessage object.
Args:
tool: ToolMessage object representing the tool request.
chat_doc: Optional ChatDocument object containing the tool request.
This is passed to the tool-handler method only if it has a chat_doc
argument.
Returns:
Source code in langroid/agent/base.py
### update_token_usage(response, prompt, stream, chat=True, print_response_stats=True)
Updates response.usage obj (token usage and cost fields) if needed.
An update is needed only if:
- stream is True (i.e. streaming was enabled), and
- the response was NOT obtained from cache, and
- the API did NOT provide the usage/cost fields during streaming (as of Sep 2024 the OpenAI API started providing these; for other APIs this may not necessarily be the case).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| response | LLMResponse | LLMResponse object | required |
| prompt | str \| List[LLMMessage] | prompt or list of LLMMessage objects | required |
| stream | bool | whether to update the usage in the response object if the response is not cached. | required |
| chat | bool | whether this is a chat model or a completion model | True |
| print_response_stats | bool | whether to print the response stats | True |
Source code in langroid/agent/base.py
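The three-way condition above can be sketched as a predicate over a stand-in response object (hypothetical names; not langroid's actual code):

```python
# Illustrative sketch only: usage/cost needs recomputing only when
# streaming was on, the response was not cached, and the API did not
# already supply usage figures during streaming.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    """Stand-in for the usage/cost fields on a response."""
    total_tokens: int = 0


@dataclass
class Response:
    """Stand-in for an LLMResponse."""
    cached: bool = False
    usage: Optional[Usage] = None  # None => API gave no usage while streaming


def needs_usage_update(response: Response, stream: bool) -> bool:
    return stream and not response.cached and response.usage is None


print(needs_usage_update(Response(), stream=True))                 # True
print(needs_usage_update(Response(cached=True), stream=True))      # False
print(needs_usage_update(Response(usage=Usage(42)), stream=True))  # False
```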
### ask_agent(agent, request, no_answer=NO_ANSWER, user_confirm=True)
Send a request to another agent, possibly after confirming with the user.
This is not currently used, since we rely on the task loop and
RecipientTool to address requests to other agents. It is generally best to
avoid using this method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| agent | Agent | agent to ask | required |
| request | str | request to send | required |
| no_answer | str | expected response when agent does not know the answer | NO_ANSWER |
| user_confirm | bool | whether to gate the request with a human confirmation | True |
Returns:
| Name | Type | Description |
|---|---|---|
| str | Optional[str] | response from agent |