A quick tour of Langroid¶
This is a quick tour of some Langroid features. For a more detailed guide, see the Getting Started guide. There are many more features besides the ones shown here; to explore Langroid further, see the sections of the main docs, and a Colab notebook you can try yourself.
Chat directly with LLM¶
Imports:
Set up the LLM; note how you can specify the chat model -- if omitted, it defaults to OpenAI GPT-4o. See the guides to using Langroid with local/open LLMs and with non-OpenAI LLMs.
llm_config = lm.OpenAIGPTConfig(
    chat_model="glhf/hf:Qwen/Qwen2.5-Coder-32B-Instruct"
)
llm = lm.OpenAIGPT(llm_config)
Chat with bare LLM -- no chat accumulation, i.e. follow-up responses will not be aware of prior conversation history (you need an Agent for that, see below).
Agent¶
Make a ChatAgent, and chat with it; the conversation history now accumulates across turns:
agent = lr.ChatAgent(lr.ChatAgentConfig(llm=llm_config))
agent.llm_response("Find the next number: 1 2 4 7 11 ?")
# => responds 16
agent.llm_response("and then?")
# => answers 22
Task¶
Make a Task and create a chat loop with the user:
Tools/Functions/Structured outputs¶
Define a ToolMessage using Pydantic (v1) -- this gets transpiled into system-message instructions to the LLM, so you never have to write a JSON schema yourself. (Besides JSON-based tools, Langroid also supports XML-based tools, which are far more reliable when you want the LLM to return code in a structured output.)
from langroid.pydantic_v1 import BaseModel
from langroid.agent.tools.orchestration import ResultTool

class CityTemperature(BaseModel):
    city: str
    temp: float

class WeatherTool(lr.ToolMessage):
    request: str = "weather_tool"  # (1)!
    purpose: str = "To extract <city_temp> info from text"  # (2)!

    city_temp: CityTemperature

    # tool handler
    def handle(self) -> ResultTool:
        return ResultTool(city_temp=self.city_temp)
1. When this tool is enabled for an agent, a method named weather_tool is auto-inserted in the agent class, with the handle method as its body -- this method handles the LLM's generation of this tool.
2. The value of the purpose field is used to populate the system message to the LLM, along with the Tool's schema derived from its Pydantic-based definition.
Enable the Agent to use the ToolMessage:
Create a specialized task that returns a CityTemperature object:
task = lr.Task(agent, interactive=False)[CityTemperature]
# run task, with built-in tool-handling loop
data = task.run("It is 45 degrees F in Boston")
assert data.city == "Boston"
assert int(data.temp) == 45
Chat with a document (RAG)¶
Create a DocChatAgent:
doc_agent_config = lr.agent.special.DocChatAgentConfig(llm=llm_config)
doc_agent = lr.agent.special.DocChatAgent(doc_agent_config)
Ingest the contents of a web page into the agent (this involves chunking, indexing into a vector-database, etc.):
Ask a question:
You should see a streamed response with citations.
Two-agent interaction¶
Set up a teacher agent:
from langroid.agent.tools.orchestration import DoneTool
teacher = lr.ChatAgent(
lr.ChatAgentConfig(
llm=llm_config,
system_message=f"""
Ask a numbers-based question, and your student will answer.
You can then provide feedback or hints to the student to help them
arrive at the right answer. Once you receive the right answer,
use the `{DoneTool.name()}` tool to end the session.
"""
)
)
teacher.enable_message(DoneTool)
teacher_task = lr.Task(teacher, interactive=False)
Set up a student agent:
student = lr.ChatAgent(
lr.ChatAgentConfig(
llm=llm_config,
system_message=f"""
You will receive a numbers-related question. Answer to the best of
your ability. If your answer is wrong, you will receive feedback or hints,
and you can revise your answer, and repeat this process until you get
the right answer.
"""
)
)
student_task = lr.Task(student, interactive=False, single_round=True)
Make the student_task a subtask of the teacher_task:
Run the teacher task:
You should then see a multi-turn teacher-student exchange, ending when the teacher uses the DoneTool.