runner

Classes

Runner(agent[, ctx, max_steps, ...])

Executes Agent instances with multi-step iterative planning and tool execution.

class Runner(agent: Agent, ctx: Dict | None = None, max_steps: int | None = None, permission_manager: PermissionManager | None = None, conversation_memory: ConversationMemory | None = None, **kwargs)[source]

Bases: Component

Executes Agent instances with multi-step iterative planning and tool execution.

The Runner orchestrates the execution of an Agent through multiple reasoning and action cycles. It manages the step-by-step execution loop where the Agent’s planner generates Function calls that get executed by the ToolManager, with results fed back into the planning context for the next iteration.

Execution Flow:
  1. Initialize step history and prompt context

  2. For each step (up to max_steps):
       a. Call Agent’s planner to get next Function
       b. Execute the Function using ToolManager
       c. Add step result to history
       d. Check if Function is “finish” to terminate

  3. Process final answer to expected output type

The Runner supports both synchronous and asynchronous execution modes, as well as streaming execution with real-time event emission. It includes comprehensive tracing and error handling throughout the execution pipeline.
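
The following is a minimal usage sketch. Only the Runner arguments follow the signature documented on this page; the import paths, the Agent constructor arguments, and the model client are assumptions that may differ in your installation.

  # Illustrative sketch: import paths and Agent constructor kwargs are assumptions;
  # only the Runner arguments follow the signature documented on this page.
  from adalflow.components.agent import Agent, Runner          # assumed import path
  from adalflow.components.model_client import OpenAIClient    # assumed import path

  def add(a: int, b: int) -> int:
      """Toy tool the planner may call during a step."""
      return a + b

  agent = Agent(                                  # assumed Agent kwargs
      name="calculator",
      tools=[add],
      model_client=OpenAIClient(),
      model_kwargs={"model": "gpt-4o-mini"},
  )

  runner = Runner(agent=agent, max_steps=5, ctx={"user_id": "demo"})

  result = runner.call(prompt_kwargs={"input_str": "What is 2 + 3?"})
  print(result.step_history)     # documented attribute: one StepOutput per step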

agent

The Agent instance to execute

Type:

Agent

max_steps

Maximum number of execution steps allowed

Type:

int

answer_data_type

Expected type for final answer processing

Type:

Type

step_history

History of all execution steps

Type:

List[StepOutput]

ctx

Additional context passed to tools

Type:

Optional[Dict]

set_permission_manager(permission_manager: PermissionManager | None) None[source]

Set or update the permission manager after runner initialization.

Parameters:

permission_manager – The permission manager instance to use for tool approval
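
Illustrative sketch; how a PermissionManager is constructed is not covered on this page and is an assumption, and only the setter call itself follows the documented signature.

  # `runner` as constructed in the sketch near the top of this page.
  pm = PermissionManager()            # assumed default construction
  runner.set_permission_manager(pm)   # documented setter

  # Per the signature, None is also accepted.
  runner.set_permission_manager(None)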

call(prompt_kwargs: Dict[str, Any], model_kwargs: Dict[str, Any] | None = None, use_cache: bool | None = None, id: str | None = None) RunnerResult[source]

Execute the planner synchronously for multiple steps with function calling support.

At the last step, the planner should set the action to “finish”, which terminates the sequence.

Parameters:
  • prompt_kwargs – Dictionary of prompt arguments for the generator

  • model_kwargs – Optional model parameters to override defaults

  • use_cache – Whether to use cached results if available

  • id – Optional unique identifier for the request

Returns:

RunnerResult containing step history and final processed output
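
A hedged usage sketch of the synchronous entry point; the prompt_kwargs key and the `answer` attribute read from the result are assumptions beyond what this page documents.

  # `runner` as constructed in the sketch near the top of this page.
  result = runner.call(
      prompt_kwargs={"input_str": "Plan a 3-step morning routine."},  # assumed key
      model_kwargs={"temperature": 0},    # optional per-call override
      use_cache=False,
      id="request-001",
  )

  for step in result.step_history:        # documented: history of StepOutput entries
      print(step)
  print(result.answer)                    # assumed field name for the final processed output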

async acall(prompt_kwargs: Dict[str, Any], model_kwargs: Dict[str, Any] | None = None, use_cache: bool | None = None, id: str | None = None) RunnerResult[source]

Execute the planner asynchronously for multiple steps with function calling support.

At the last step, the planner should set the action to “finish”, which terminates the sequence.

Parameters:
  • prompt_kwargs – Dictionary of prompt arguments for the generator

  • model_kwargs – Optional model parameters to override defaults

  • use_cache – Whether to use cached results if available

  • id – Optional unique identifier for the request

Returns:

RunnerResult containing step history and final processed output
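
A sketch of the asynchronous variant, assuming the same runner and prompt_kwargs key as in the earlier examples.

  import asyncio

  async def main() -> None:
      # `runner` as constructed in the sketch near the top of this page.
      result = await runner.acall(prompt_kwargs={"input_str": "What is 2 + 3?"})
      print(result.answer)    # assumed RunnerResult field

  asyncio.run(main())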

astream(prompt_kwargs: Dict[str, Any], model_kwargs: Dict[str, Any] | None = None, use_cache: bool | None = None, id: str | None = None) RunnerStreamingResult[source]

Execute the runner asynchronously with streaming support.

Returns:

A streaming result object with stream_events() method

Return type:

RunnerStreamingResult
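
A hedged consumption sketch. Per the signature above, astream is not itself a coroutine: it returns a RunnerStreamingResult whose stream_events() method is assumed to yield events asynchronously as the run progresses; the event types printed are assumptions.

  import asyncio

  async def consume() -> None:
      # `runner` as constructed in the sketch near the top of this page.
      streaming_result = runner.astream(prompt_kwargs={"input_str": "What is 2 + 3?"})
      async for event in streaming_result.stream_events():   # documented method
          print(type(event).__name__)                        # event types are assumptions

  asyncio.run(consume())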

async impl_astream(prompt_kwargs: Dict[str, Any], model_kwargs: Dict[str, Any] | None = None, use_cache: bool | None = None, id: str | None = None, streaming_result: RunnerStreamingResult | None = None) None[source]

Execute the planner asynchronously for multiple steps with function calling support.

At the last step, the planner should set the action to “finish”, which terminates the sequence.

Parameters:
  • prompt_kwargs – Dictionary of prompt arguments for the generator

  • model_kwargs – Optional model parameters to override defaults

  • use_cache – Whether to use cached results if available

  • id – Optional unique identifier for the request

async stream_tool_execution(function: Function, tool_call_id: str, tool_call_name: str, streaming_result: RunnerStreamingResult) tuple[Any, Any, Any][source]

Execute a tool/function call with streaming support and proper event handling.

This method handles:
  • Tool span creation for tracing
  • Async generator support for streaming results
  • Tool activity events
  • Tool completion events
  • Error handling and observation extraction

Parameters:
  • function – The Function object to execute

  • tool_call_id – Unique identifier for this tool call

  • tool_call_name – Name of the tool being called

  • streaming_result – Queue for streaming events

Returns:

(function_output, function_output_observation)

Return type:

tuple