batch
run_batch_task_gen(gen_task, items, input_map=lambda x: str(x), output_map=lambda x: x, sequential=True, batch_size=None, turns=-1, message=None, handle_exceptions=False, max_cost=0.0, max_tokens=0)
Generate and run copies of a task async/concurrently, one per item in the `items` list. For each item, apply `input_map` to get the initial message to process; for each result, apply `output_map` to get the final result.
Args:
    gen_task (Callable[[int], Task]): function that generates the tasks to run
    items (list[T]): list of items to process
    input_map (Callable[[T], str|ChatDocument]): function to map an item to
        the initial message to process
    output_map (Callable[[ChatDocument|str], U]): function to map a result
        to the final result
    sequential (bool): whether to run sequentially
        (e.g. some APIs such as ooba don't support concurrent requests)
    batch_size (Optional[int]): number of tasks to run at a time;
        if None, unbatched
    turns (int): number of turns to run, -1 for infinite
    message (Optional[str]): optionally overrides the console status messages
    handle_exceptions (bool): whether to replace exceptions with outputs of None
    max_cost (float): maximum cost to run the task (default 0.0 for unlimited)
    max_tokens (int): maximum token usage, in and out (default 0 for unlimited)
Returns:

Type | Description
---|---
`list[U]` | list of final results
Source code in langroid/agent/batch.py
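The per-item task generation, batching, and `handle_exceptions` semantics described above can be sketched with plain `asyncio` (a simplified illustration, not langroid's implementation; `batch_task_gen_sketch` and the toy `gen_task` are hypothetical names):

```python
import asyncio

async def batch_task_gen_sketch(
    gen_task,                      # hypothetical stand-in: index -> async callable
    items,
    input_map=lambda x: str(x),
    output_map=lambda x: x,
    batch_size=None,
    handle_exceptions=False,
):
    # Run one freshly generated task per item, in chunks of batch_size.
    async def run_one(i, item):
        task = gen_task(i)                     # fresh task for this item
        return await task(input_map(item))     # feed the mapped initial message

    results = []
    size = batch_size or len(items)
    indexed = list(enumerate(items))
    for start in range(0, len(indexed), size):
        chunk = indexed[start:start + size]
        outs = await asyncio.gather(
            *(run_one(i, it) for i, it in chunk),
            return_exceptions=handle_exceptions,  # exceptions become values
        )
        # Failed items map to None; successes go through output_map.
        results.extend(
            None if isinstance(o, BaseException) else output_map(o)
            for o in outs
        )
    return results

# Toy task: doubles the numeric value of its input string; "x" raises ValueError
def gen_task(i):
    async def t(msg):
        return int(msg) * 2
    return t

results = asyncio.run(
    batch_task_gen_sketch(gen_task, [1, "x", 3], batch_size=2, handle_exceptions=True)
)
# results == [2, None, 6]
```

Note how with `handle_exceptions=True` the failing item yields `None` rather than aborting the whole batch.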
run_batch_tasks(task, items, input_map=lambda x: str(x), output_map=lambda x: x, sequential=True, batch_size=None, turns=-1, max_cost=0.0, max_tokens=0)
Run copies of `task` async/concurrently, one per item in the `items` list. For each item, apply `input_map` to get the initial message to process; for each result, apply `output_map` to get the final result.
Args:
    task (Task): task to run
    items (list[T]): list of items to process
    input_map (Callable[[T], str|ChatDocument]): function to map an item to
        the initial message to process
    output_map (Callable[[ChatDocument|str], U]): function to map a result
        to the final result
    sequential (bool): whether to run sequentially
        (e.g. some APIs such as ooba don't support concurrent requests)
    batch_size (Optional[int]): number of tasks to run at a time;
        if None, unbatched
    turns (int): number of turns to run, -1 for infinite
    max_cost (float): maximum cost to run the task (default 0.0 for unlimited)
    max_tokens (int): maximum token usage, in and out (default 0 for unlimited)
Returns:

Type | Description
---|---
`List[U]` | list of final results
Source code in langroid/agent/batch.py
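The `sequential` switch above can be sketched in plain `asyncio` (an illustrative reduction, with a bare coroutine standing in for a copy of the task; `run_batch_sketch` is a hypothetical name):

```python
import asyncio

async def run_batch_sketch(task_fn, items, input_map=lambda x: str(x),
                           output_map=lambda x: x, sequential=True):
    # sequential: await one item at a time (for APIs that reject
    # concurrent requests); otherwise fan out with gather.
    if sequential:
        raw = [await task_fn(input_map(item)) for item in items]
    else:
        raw = await asyncio.gather(*(task_fn(input_map(it)) for it in items))
    return [output_map(r) for r in raw]

# Toy stand-in for a task: upper-cases its input message
async def shout(msg):
    return msg.upper()

out = asyncio.run(run_batch_sketch(shout, ["a", "b"], output_map=lambda r: r + "!"))
# out == ["A!", "B!"]
```

Either branch preserves input order, so `output_map` results line up with `items`.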
run_batch_agent_method(agent, method, items, input_map=lambda x: str(x), output_map=lambda x: x, sequential=True)
Run `method` on copies of `agent`, async/concurrently, one per item in the `items` list.

ASSUMPTION: `method` is an async method with signature:
`method(self, input: str|ChatDocument|None) -> ChatDocument|None`

So this would typically be used for the agent's "responder" methods, e.g. `llm_response_async` or `agent_responder_async`.

For each item, apply `input_map` to get the initial message to process; for each result, apply `output_map` to get the final result.
Parameters:

Name | Type | Description | Default
---|---|---|---
`agent` | `Agent` | agent whose method to run | required
`method` | `str` | async method to run on copies of `agent` | required
`input_map` | `Callable[[Any], str \| ChatDocument]` | function to map an item to the initial message to process | `lambda x: str(x)`
`output_map` | `Callable[[ChatDocument \| str], Any]` | function to map a result to the final result | `lambda x: x`
`sequential` | `bool` | whether to run sequentially (e.g. some APIs such as ooba don't support concurrent requests) | `True`

Returns: List[Any]: list of final results
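The copy-per-item, lookup-by-name pattern can be sketched as follows (a simplified stdlib illustration, not langroid's code; `EchoAgent` and `run_batch_agent_method_sketch` are hypothetical names):

```python
import asyncio
import copy

class EchoAgent:
    """Toy agent with one async responder method, standing in for an Agent."""
    def __init__(self, prefix):
        self.prefix = prefix

    async def llm_response_async(self, input):
        return f"{self.prefix}{input}"

async def run_batch_agent_method_sketch(agent, method, items,
                                        input_map=lambda x: str(x),
                                        output_map=lambda x: x):
    async def one(item):
        clone = copy.deepcopy(agent)     # each item gets its own agent copy
        fn = getattr(clone, method)      # look up the async method by name
        return output_map(await fn(input_map(item)))
    # Concurrent fan-out, one call per item, order preserved
    return await asyncio.gather(*(one(it) for it in items))

results = asyncio.run(
    run_batch_agent_method_sketch(EchoAgent("> "), "llm_response_async", [1, 2])
)
# results == ["> 1", "> 2"]
```

Copying the agent per item keeps per-call state (e.g. message history) isolated across the batch.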