Blog

Overview of Langroid's Multi-Agent Architecture (prelim)

Agent, as an intelligent message transformer

A natural and convenient abstraction in designing a complex LLM-powered system is the notion of an agent that is instructed to be responsible for a specific aspect of the overall task. In terms of code, an Agent is essentially a class representing an intelligent entity that can respond to messages, i.e., an agent is simply a message transformer. An agent typically encapsulates an (interface to an) LLM, and may also be equipped with so-called tools and external documents/data (e.g., via a vector database), both described below. Much like a team of humans, agents interact by exchanging messages, in a manner reminiscent of the actor framework in programming languages. An orchestration mechanism is needed to manage the flow of messages between agents, to ensure that progress is made towards completion of the task, and to handle the inevitable cases where an agent deviates from its instructions. Langroid is founded on this multi-agent programming paradigm: agents are first-class citizens that act as message transformers and communicate by exchanging messages.
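
To make the "agent as message transformer" idea concrete, here is a minimal Python sketch. It is not Langroid's actual API: the `Agent` class, `respond` method, and the lambda "LLMs" are hypothetical stand-ins used only to illustrate the abstraction.

```python
# Illustrative sketch (not Langroid's API): an agent as a message transformer
# that wraps an LLM and, optionally, a set of tools.

from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class Agent:
    """An intelligent entity that transforms an incoming message into a response."""
    name: str
    llm: Callable[[str], str]                      # any callable that completes a prompt
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def respond(self, message: str) -> Optional[str]:
        # If the message invokes a registered tool, handle it directly...
        for tool_name, tool_fn in self.tools.items():
            if message.startswith(tool_name + ":"):
                return tool_fn(message[len(tool_name) + 1:].strip())
        # ...otherwise defer to the LLM: either way, the agent just maps message -> message.
        return self.llm(message)


# Two agents "collaborating": one agent's output becomes the other's input.
extractor = Agent("extractor", llm=lambda m: f"[facts extracted from] {m}")
writer = Agent("writer", llm=lambda m: f"[summary based on] {m}")
print(writer.respond(extractor.respond("quarterly report text ...")))
```

In a real system an orchestrator, rather than a hard-coded call chain, would decide which agent sees which message next, which is exactly the role of the orchestration mechanism mentioned above.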

Language Models: Completion and Chat-Completion

Transformer-based language models are fundamentally next-token predictors, so naturally all LLM APIs today provide at least a completion endpoint. If an LLM is a next-token predictor, how can it possibly generate a response to a question or instruction, or engage in a conversation with a human user? This is where the idea of "chat-completion" comes in. This post is a refresher on the distinction between completion and chat-completion, and some interesting details on how chat-completion is implemented in practice.
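
As a rough sketch of the idea, chat-completion can be layered on top of a plain completion endpoint by serializing the conversation into a single prompt and letting the next-token predictor complete the assistant's turn. The role markers, template, and stop strings below are hypothetical; real models each use their own chat template.

```python
# Illustrative only: flatten a chat history into one prompt for a completion endpoint.
from typing import Dict, List


def chat_to_prompt(messages: List[Dict[str, str]]) -> str:
    parts = [f"{m['role'].upper()}: {m['content']}" for m in messages]
    parts.append("ASSISTANT:")          # cue the model to produce the assistant's reply
    return "\n".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is chat-completion?"},
]
prompt = chat_to_prompt(messages)
# completion = llm.complete(prompt, stop=["USER:"])   # hypothetical completion call
print(prompt)
```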

Langroid: Harness LLMs with Multi-Agent Programming

The LLM Opportunity

Given the remarkable abilities of recent Large Language Models (LLMs), there is an unprecedented opportunity to build intelligent applications powered by this transformative technology. The top question for any enterprise is: how best to harness the power of LLMs for complex applications? For technical and practical reasons, building LLM-powered applications is not as simple as throwing a task at an LLM system and expecting it to get done.