agent

class ReActAgent(tools: List[Callable | Callable[[...], Awaitable[Any]] | FunctionTool] = [], max_steps: int = 10, add_llm_as_fallback: bool = True, examples: List[Function] | List[str] = [], *, model_client: ModelClient, model_kwargs: Dict = {}, template: str | None = None, context_variables: Dict | None = None, use_cache: bool = True, debug: bool = False)

Bases: Component

ReActAgent uses a Generator as a planner that runs multiple, sequential function-call steps to generate the final response. For each step, the planner generates a Function data class as the action, which includes a "thought" field. The execution result is stored in the "observation" field of the StepOutput data class. If the execution fails, the error message is stored in the "observation" field instead, so that the agent can be auto-optimized to correct the error.
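
Conceptually, each step is a plan -> act -> observe cycle. The sketch below is an illustrative mock in plain Python, not the library's implementation: PlannedAction and Step are stand-ins for the library's Function and StepOutput types, and the real agent delegates planning to its Generator.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PlannedAction:            # stand-in for the library's Function data class
    thought: str                # the planner's reasoning for this step
    name: str                   # tool to call; "finish" ends the loop
    kwargs: Dict[str, object]

@dataclass
class Step:                     # stand-in for the library's StepOutput data class
    action: PlannedAction
    observation: str            # tool result, or the error message on failure

def react_loop(
    planner: Callable[[List[Step]], PlannedAction],
    tools: Dict[str, Callable[..., object]],
    max_steps: int = 10,
) -> List[Step]:
    '''Run plan -> act -> observe cycles until the planner finishes or max_steps is hit.'''
    history: List[Step] = []
    for _ in range(max_steps):
        action = planner(history)                      # plan: thought + chosen tool
        if action.name == "finish":
            history.append(Step(action, observation=str(action.kwargs)))
            break
        try:
            observation = str(tools[action.name](**action.kwargs))   # act
        except Exception as exc:                       # failures are recorded, not raised,
            observation = f"Error: {exc}"              # so later steps or training can fix them
        history.append(Step(action, observation))      # observe
    return history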

The final answer can differ between training and eval mode.

Users need to set up:

- tools: a list of tools to use to complete the task. Each tool is a function or a FunctionTool.
- max_steps: the maximum number of steps the agent can take to complete the task.
- add_llm_as_fallback: a boolean deciding whether to use an additional LLM model as a fallback tool to answer the query.
- model_client: the model client used to generate the response.
- model_kwargs: the model kwargs used to generate the response.
- template: the template used to generate the prompt. Default is DEFAULT_REACT_AGENT_SYSTEM_PROMPT.
- context_variables: the context variables to use in the prompt.
- use_cache: a boolean deciding whether to cache the planner's generated responses.
- debug: a boolean deciding whether to print debug information.
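
For illustration, a constructor call that sets several of these options explicitly might look like the sketch below; the tool and model choice are placeholders, and the keyword names follow the signature shown above.

from core.openai_client import OpenAIClient
from components.agent.react import ReActAgent

def search(query: str) -> str:
    '''Placeholder tool used only for illustration.'''
    return f"results for {query}"

agent = ReActAgent(
    tools=[search],
    max_steps=5,                  # stop after at most 5 plan/act/observe steps
    add_llm_as_fallback=True,     # also expose the LLM itself as a fallback tool
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
    use_cache=True,               # cache the planner's generated responses
    debug=False,
)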

For the generator, the default arguments are: (1) default prompt: DEFAULT_REACT_AGENT_SYSTEM_PROMPT; (2) default output_processors: JsonParser.

The examples argument is optional: a list of examples to include in the prompt (see "Using examples" below).

Example:

from core.openai_client import OpenAIClient
from components.agent.react import ReActAgent
from core.func_tool import FunctionTool
# define the tools
def multiply(a: int, b: int) -> int:
    '''Multiply two numbers.'''
    return a * b
def add(a: int, b: int) -> int:
    '''Add two numbers.'''
    return a + b
agent = ReActAgent(
    tools=[multiply, add],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
)

# Using examples:

from core.types import FunctionExpression  # assumed location of FunctionExpression

call_multiply = FunctionExpression.from_function(
    thought="I want to multiply 3 and 4.",
    func=multiply, a=3, b=4)  # target function plus its arguments; argument names assumed
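
The example would then be passed to the agent through the examples argument; a minimal sketch using the keyword from the signature above:

agent = ReActAgent(
    tools=[multiply, add],
    examples=[call_multiply],     # seed the planner's prompt with this example call
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
)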

Reference: [1] ReAct: Synergizing Reasoning and Acting in Language Models, https://arxiv.org/abs/2210.03629 (first posted October 2022).

call(*args, **kwargs) → ReActOutput

The user must override this for the inference scenario if bicall is not defined.
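
A minimal eval-mode usage sketch, continuing the agent built in the example above; the ReActOutput attributes used here (step_history, answer) are assumptions and may differ by version, while observation comes from the StepOutput description above.

output = agent.call("What is 3 multiplied by 4, plus 5?")
for step in output.step_history:   # one StepOutput per planning step (attribute name assumed)
    print(step.observation)        # tool result or error message for that step
print(output.answer)               # final answer (attribute name assumed)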

forward(*args, **kwargs) → Parameter

The user must override this for the training scenario if bicall is not defined.
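
In training mode the same query returns a Parameter that wraps the output so it can take part in auto-optimization; a rough sketch, assuming the component exposes PyTorch-style train()/eval() mode switches:

agent.train()                      # switch the component to training mode (assumed API)
param = agent.forward("What is 3 multiplied by 4, plus 5?")
# param is a Parameter wrapping the agent's output for the optimizer
agent.eval()                       # switch back to inference mode (assumed API)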

bicall(input: str, prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, id: str | None = None) → Parameter | ReActOutput

prompt_kwargs: additional prompt kwargs to either replace or add to the preset prompt kwargs.
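
For example, extra template variables can be passed at call time through prompt_kwargs; the variable name below (context_str) is hypothetical and only meaningful if the configured template actually declares it:

result = agent.bicall(
    "What is 3 multiplied by 4?",
    prompt_kwargs={"context_str": "Prefer exact integer arithmetic."},  # hypothetical template variable
)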