react

Implementation and optimization of React agent.

Classes

CombineStepHistory()

ReActAgent([tools, max_steps, ...])

ReActAgent uses a generator as a planner that runs multiple sequential function-call steps to generate the final response.

ReActOutput(id, step_history, answer)

Similar to GeneratorOutput, but with additional step history and final answer.

Constants

DEFAULT_REACT_AGENT_SYSTEM_PROMPT = '<START_OF_SYSTEM_PROMPT>\n{{react_agent_task_desc}}\n- You cant use more than {{max_steps}} steps. At the {{max_steps}}th current step, must finish with answer.\n\n{# Tools #}\n{% if tools %}\n<START_OF_TOOLS>\nTools and instructions:\n{% for tool in tools %}\n{{ loop.index }}.\n{{tool}}\n------------------------\n{% endfor %}\n<END_OF_TOOLS>\n{% endif %}\n{# Context Variables #}\n{% if context_variables is not none %}\n<START_OF_CONTEXT>\nYou have access to context_variables with the following keys:\n{% for key, value in context_variables.items() %}\n{{ key }}\n------------------------\n{% endfor %}\nYou can either pass context_variables or context_variables[\'key\'] to the tools depending on the tool\'s requirements.\n<END_OF_CONTEXT>\n{% endif %}\n{# output format and examples for output format #}\n<START_OF_OUTPUT_FORMAT>\n{{output_format_str}}\n<END_OF_OUTPUT_FORMAT>\n{% if examples %}\n<START_OF_EXAMPLES>\nExamples:\n{% for example in examples %}\n{{example}}\n------------------------\n{% endfor %}\n<END_OF_EXAMPLES>\n{% endif %}\n<END_OF_SYSTEM_PROMPT>\n-----------------\n<START_OF_USER_QUERY>\nInput query:\n{{ input_str }}\n_____________________\nCurrent Step/Max Step: {{step_history|length + 1}} / {{max_steps}}\n{# Step History #}\n{% if step_history %}\n<STEPS>\nYour previous steps:\n{% for history in step_history %}\nStep {{ loop.index }}.\n{% if history.action %}\n"thought": "{{history.action.thought}}",\n"name": "{{history.action.name}},\n"kwargs": {{history.action.kwargs}}",\n{% endif %}\n"Observation": "{{history.observation}}"\n------------------------\n{% endfor %}\n</STEPS>\n{% endif %}\n<END_OF_USER_QUERY>\n'


class ReActAgent(tools: List[Callable | Callable[[...], Awaitable[Any]] | FunctionTool] = [], max_steps: int = 10, add_llm_as_fallback: bool = True, examples: List[Function] | List[str] = [], *, model_client: ModelClient, model_kwargs: Dict = {}, template: str | None = None, context_variables: Dict | None = None, use_cache: bool = True, debug: bool = False)[source]

Bases: Component

ReActAgent uses a generator as a planner that runs multiple sequential function-call steps to generate the final response. At each step, the planner generates a Function data class as the action, which includes a “thought” field. The execution result is stored in the “observation” field of the StepOutput data class. If the execution fails, the error message is stored in the “observation” field instead, so that the agent can be auto-optimized to correct the error.
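The thought → action → observation loop described above can be sketched in plain Python, independent of the library. The `Function` and `StepOutput` stand-ins below only mirror the field names mentioned in this docstring; the toy planner and tool registry are illustrative assumptions, not the real implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# Minimal stand-ins for the library's Function and StepOutput data classes.
@dataclass
class Function:
    thought: str
    name: str
    kwargs: Dict[str, Any]

@dataclass
class StepOutput:
    step: int
    action: Function
    observation: Any = None

def react_loop(planner: Callable[[List[StepOutput]], Function],
               tools: Dict[str, Callable], max_steps: int = 10) -> List[StepOutput]:
    """Run thought -> action -> observation steps until 'finish' is called."""
    history: List[StepOutput] = []
    for step in range(1, max_steps + 1):
        action = planner(history)               # planner sees all prior steps
        record = StepOutput(step=step, action=action)
        try:
            record.observation = tools[action.name](**action.kwargs)
        except Exception as e:
            record.observation = f"Error: {e}"  # kept so the error can be corrected
        history.append(record)
        if action.name == "finish":
            break
    return history

# Toy planner: multiply first, then finish with the observed result.
def planner(history: List[StepOutput]) -> Function:
    if not history:
        return Function("I want to multiply 3 and 4.", "multiply", {"a": 3, "b": 4})
    return Function("I have the answer.", "finish", {"answer": history[-1].observation})

tools = {"multiply": lambda a, b: a * b, "finish": lambda answer: answer}
steps = react_loop(planner, tools)
print(steps[-1].observation)  # -> 12
```

In the real agent, the planner role is played by the Generator, and a failed tool call does not abort the loop: the error string becomes the observation for the next step.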

The final answer can differ between training and eval mode: in training, forward() returns a Parameter, while in eval, call() returns a ReActOutput.

Users need to set up:

- tools: a list of tools used to complete the task. Each tool is a plain function or a FunctionTool.
- max_steps: the maximum number of steps the agent can take to complete the task.
- add_llm_as_fallback: whether to add an LLM model as a fallback tool to answer the query directly.
- model_client: the model client used to generate the response.
- model_kwargs: the model kwargs used to generate the response.
- template: the template used to build the prompt. Defaults to DEFAULT_REACT_AGENT_SYSTEM_PROMPT.
- context_variables: the context variables made available in the prompt.
- use_cache: whether to cache the generated responses for the planner.
- debug: whether to print debug information.

For the generator, the default arguments are: (1) default prompt: DEFAULT_REACT_AGENT_SYSTEM_PROMPT; (2) default output_processors: JsonParser.

Optionally, examples can be provided as a list of string examples to include in the prompt.

Example:

from core.openai_client import OpenAIClient
from components.agent.react import ReActAgent
from core.func_tool import FunctionTool
# define the tools
def multiply(a: int, b: int) -> int:
    '''Multiply two numbers.'''
    return a * b
def add(a: int, b: int) -> int:
    '''Add two numbers.'''
    return a + b
agent = ReActAgent(
    tools=[multiply, add],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
)

# Using examples (the from_function arguments below follow the
# multiply(a, b) tool defined above; argument names are assumed):

call_multiply = FunctionExpression.from_function(
    thought="I want to multiply 3 and 4.",
    func=multiply,
    a=3,
    b=4,
)
agent = ReActAgent(
    tools=[multiply, add],
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
    examples=[call_multiply],
)
Reference: [1] https://arxiv.org/abs/2210.03629, published March 2023.

call(*args, **kwargs) → ReActOutput[source]

User must override this for the inference scenario if bicall is not defined.

forward(*args, **kwargs) → Parameter[source]

User must override this for the training scenario if bicall is not defined.

bicall(input: str, prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, id: str | None = None) → Parameter | ReActOutput[source]

prompt_kwargs: additional prompt kwargs that either replace or add to the preset prompt kwargs.
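The “replace or add” behavior of prompt_kwargs can be illustrated with plain dict-update semantics. This is a sketch of the merge behavior described above, not the library's internal code; the key names are taken from the template variables in DEFAULT_REACT_AGENT_SYSTEM_PROMPT.

```python
# Preset prompt kwargs set at construction time (template variable names
# taken from DEFAULT_REACT_AGENT_SYSTEM_PROMPT).
preset = {"max_steps": 10, "react_agent_task_desc": "Answer the user query."}

# Extra kwargs passed at call time: "max_steps" replaces the preset value,
# "examples" is added as a new key.
extra = {"max_steps": 5, "examples": ["call_multiply"]}

merged = {**preset, **extra}
print(merged["max_steps"])  # -> 5
```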