generator

Generator is a user-facing orchestration component with a simple and unified interface for LLM prediction.

It is a pipeline consisting of three subcomponents: the prompt, the model client, and the output processors.

Functions

create_teacher_generator(student, ...[, ...])

Create a teacher generator from the student generator.

Classes

BackwardEngine(**kwargs)

A Generator with a default template for the backward pass in auto-differentiation.

BackwardPassSetup([all_pred_at_once, ...])

Configuration for the backward pass (e.g., whether to handle all predictions at once and whether to compute gradients only for errors).

Generator(*, model_client[, model_kwargs, ...])

A user-facing orchestration component for LLM prediction.

class Generator(*, model_client: ModelClient, model_kwargs: Dict[str, str | Parameter] = {}, template: str | None = None, prompt_kwargs: Dict | None = {}, output_processors: DataComponent | None = None, name: str | None = None, cache_path: str | None = None, use_cache: bool = False)[source]

Bases: GradComponent, CachedEngine, CallbackManager

A user-facing orchestration component for LLM prediction.

It is also a GradComponent that can be used for backpropagation through the LLM model.

By orchestrating the following three components along with their required arguments, it enables any LLM prediction with the required task output format:

  • Prompt

  • Model client

  • Output processors
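The three-stage orchestration above can be pictured with a minimal sketch. This is not AdalFlow's actual implementation; `fill_template`, `MockModelClient`, and the `str.format`-style template are all hypothetical stand-ins for illustration only.

```python
# Hypothetical sketch of the prompt -> model client -> output processor pipeline.
# None of these names are AdalFlow's real API.

def fill_template(template: str, **prompt_kwargs) -> str:
    """Stage 1 (Prompt): render the prompt from the template and prompt_kwargs."""
    return template.format(**prompt_kwargs)

class MockModelClient:
    """Stage 2 (Model client): stand-in for a real ModelClient wrapper."""
    def call(self, prompt: str, **model_kwargs) -> str:
        # A real client would send the prompt to an LLM API here.
        return f"echo: {prompt}"

def output_processor(completion: str) -> dict:
    """Stage 3 (Output processors): turn the raw string into structured data."""
    return {"data": completion}

template = "You are a helpful assistant. Question: {question}"
prompt = fill_template(template, question="What is 2+2?")
raw = MockModelClient().call(prompt, temperature=0)
result = output_processor(raw)
print(result["data"])
```

A real Generator performs the same three steps in order, with the result of the output processors stored in the data field of the output.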

Parameters:
  • model_client (ModelClient) – The model client to use for the generator.

  • model_kwargs (Dict[str, Any], optional) – The model kwargs to pass to the model client. Defaults to {}. Please refer to ModelClient for the details on how to set the model_kwargs for your specific model if it is from our library.

  • template (Optional[str], optional) – The template for the prompt. Defaults to DEFAULT_ADALFLOW_SYSTEM_PROMPT.

  • prompt_kwargs (Optional[Dict], optional) – The preset prompt kwargs to fill in the variables in the prompt. Defaults to None.

  • output_processors (Optional[Component], optional) – The output processors after model call. It can be a single component or a chained component via Sequential. Defaults to None.

  • trainable_params (Optional[List[str]], optional) – The list of trainable parameters. Defaults to [].

Note

1. The output_processors are applied to the string output of the model completion, and the result is stored in the data field of the output. We encourage you to use them only to parse the response into the data format you will use later.

2. For structured output, avoid streaming, as the output_processors can only run after all the data is available.
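For instance, a minimal output processor could parse a JSON completion into a dict. The function name and the simulated completion below are hypothetical; the point is that the parser needs the complete string, which is why streaming should be avoided when output processors are used.

```python
import json

def json_output_processor(completion: str) -> dict:
    """Parse the full string completion into structured data.
    This only works on the complete output, so streaming partial
    chunks into it would fail."""
    return json.loads(completion)

# Simulated raw string completion an LLM might return.
completion = '{"answer": "Paris", "confidence": 0.98}'
parsed = json_output_processor(completion)
print(parsed["answer"])  # -> Paris
```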

model_type: ModelType = 2
backward_pass_setup: BackwardPassSetup = BackwardPassSetup(all_pred_at_once=False, threshold_score_to_compute_grad_for_errors=0.9, compute_grad_for_errors_only=True)
model_client: ModelClient
update_default_backward_pass_setup(setup: BackwardPassSetup)[source]
set_cache_path(cache_path: str, model_client: object, model: str)[source]

Set the cache path for the generator.

get_cache_path() str[source]

Get the cache path for the generator.

set_mock_output(mock_output: bool = True, mock_output_data: str = 'mock data')[source]
reset_mock_output()[source]
set_parameters(prompt_kwargs: Dict[str, str | Parameter])[source]

Set a name for each parameter and set the context among them. Make all parameters attributes of the generator so that optimizers and other components can find them easily.

classmethod from_config(config: Dict[str, Any]) Generator[source]

Create a Generator instance from the config dictionary.

Example:

config = {
            "model_client": {
                "component_name": "OpenAIClient",
                "component_config": {}
            },
            "model_kwargs": {"model": "gpt-3.5-turbo", "temperature": 0}
        }
generator = Generator.from_config(config)
print_prompt(**kwargs) str[source]
get_prompt(**kwargs) str[source]
create_demo_data_instance(input_prompt_kwargs: Dict[str, Any], output: GeneratorOutput, id: str | None = None)[source]

Automatically create a demo data instance from the input and output of the generator. Used to trace the demos for the demo parameter in the prompt_kwargs. Part of few-shot learning.

set_backward_engine(backward_engine: BackwardEngine = None)[source]
set_teacher_generator(teacher: Generator = None)[source]
static find_demo_parameter(prompt_kwargs: Dict) Parameter | None[source]
forward(prompt_kwargs: Dict[str, str | Parameter] | None = {}, model_kwargs: Dict | None = {}, id: str | None = None) Parameter[source]

Customized forward pass on top of the GradComponent forward method.

backward(response: Parameter, prompt_kwargs: Dict, template: str, prompt_str: str, backward_engine: Generator | None = None, id: str | None = None, disable_backward_engine: bool = False) Parameter[source]

Backward pass of the function. By default, it passes all the scores to the predecessors.

Note: backward is mainly used internally, so it is better to allow only kwargs as input.

Subclasses should implement this method if additional backward logic is needed.

call(prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, use_cache: bool | None = None, id: str | None = None) GeneratorOutput[object][source]

Call the model_client by formatting the prompt from the prompt_kwargs and passing the combined model_kwargs to the model client.

async acall(prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, use_cache: bool | None = None, id: str | None = None) GeneratorOutput[object][source]

Async call the model with the input and model_kwargs.

Warning

Training is not supported in async call yet.

to_dict() Dict[str, Any][source]

Convert the generator to a dictionary.

static failure_message_to_backward_engine(gradient_response: GeneratorOutput) str | None[source]
class BackwardEngine(**kwargs)[source]

Bases: Generator

A Generator with a default template for the backward pass in auto-differentiation.

As a component, the forward pass is simply the same as the call method, so it will always return GeneratorOutputType instead of Parameter.

If you want to customize the template, you can create your own backward engine. However, the backward engine always keeps training mode set to False; this is achieved by making forward the same as call.

call(**kwargs) GeneratorOutput[object][source]

Catch the rate limit error and raise it.

forward(**kwargs)[source]

Forward pass for the backward engine.

static failure_message_to_optimizer(gradient_response: GeneratorOutput) str | None[source]
create_teacher_generator(student: Generator, model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None) Generator[source]

Create a teacher generator from the student generator.

Note

The teacher generator will have no parameters. If you want to keep it the same as the student, create a new one each time the student is updated. Otherwise, task.parameters would also list the teacher's parameters.

Parameters:
  • student (Generator) – The student generator.

  • model_client (ModelClient) – The model client to use for the teacher generator.

  • model_kwargs (Dict[str, Any]) – The model kwargs to pass to the model client.

  • name (str, optional) – The name of the teacher generator. Defaults to “teacher”.

Returns:

The teacher generator.

Return type:

Generator
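The student/teacher relationship can be sketched without the real library. `SimpleGenerator` and the model names below are hypothetical; the sketch only shows the idea that the teacher reuses the student's template while holding its own (typically stronger) model configuration and no trainable parameters.

```python
class SimpleGenerator:
    """Hypothetical stand-in for a Generator: just template + model config."""
    def __init__(self, model_kwargs: dict, template: str, name: str = "generator"):
        self.model_kwargs = model_kwargs
        self.template = template
        self.name = name

student = SimpleGenerator({"model": "gpt-3.5-turbo"}, "Answer: {question}")

# The teacher shares the student's template but uses its own model client
# configuration, and defaults to the name "teacher".
teacher = SimpleGenerator({"model": "gpt-4o"}, student.template, name="teacher")
print(teacher.name, teacher.model_kwargs["model"])
```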