parameter#

Parameter is used by the Optimizer, Trainer, and AdalComponent for automatic optimization.

Classes

GradientContext(variable_desc, ...)

Parameter(*, id, data, requires_opt, ...)

A data container to represent a parameter used for optimization.

class GradientContext(variable_desc: str, response_desc: str, context: str)[source]#

Bases: object

variable_desc: str#
response_desc: str#
context: str#
class Parameter(*, id: str | None = None, data: ~optim.parameter.T = None, requires_opt: bool = True, role_desc: str = '', param_type: ~adalflow.optim.types.ParameterType = <ParameterType.PROMPT: 'prompt', 'Instruction to the language model on task, data, and format.'>, name: str = None, gradient_prompt: str = None, raw_response: str = None, instruction_to_optimizer: str = None, instruction_to_backward_engine: str = None, score: float | None = None, eval_input: object = None, from_response_id: str | None = None, successor_map_fn: ~typing.Dict[str, ~typing.Callable] | None = None)[source]#

Bases: Generic[T]

A data container to represent a parameter used for optimization.

A parameter enforces a specific data type and can be updated in place. When parameters are used in a component, i.e. when they are assigned as Component attributes, they are automatically added to the component's list of parameters and will appear in its parameters() or named_parameters() methods.

Args:

End users only need to create the Parameter with four arguments and pass it to the prompt_kwargs of the Generator (a minimal sketch follows below).

  • data (str): the data (value) of the parameter.

  • requires_opt (bool, optional): whether the parameter requires optimization. Default: True.

  • role_desc (str, optional): a description of the role of this parameter. Default: ''.

  • param_type (ParameterType, optional): the type of the parameter, including ParameterType.PROMPT for instruction optimization and ParameterType.DEMOS for few-shot optimization.

  • instruction_to_optimizer (str, optional): instruction to the optimizer. Default: None.

  • instruction_to_backward_engine (str, optional): instruction to the backward engine. Default: None.

The parameter a user creates will automatically be assigned to its variable name/key in the prompt_kwargs, for easy reading and debugging in the trace_graph.
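
A minimal sketch of this workflow, assuming the import paths adalflow.optim.parameter and adalflow.optim.types shown on this page; the Generator wiring is left as a comment because its constructor is not documented here.

    from adalflow.optim.parameter import Parameter
    from adalflow.optim.types import ParameterType

    # A trainable prompt parameter built from the four user-facing arguments.
    task_desc = Parameter(
        data="You are a concise assistant. Answer in one sentence.",
        requires_opt=True,                # include this parameter in optimization
        role_desc="Task instruction to the language model",
        param_type=ParameterType.PROMPT,  # instruction (prompt) optimization
    )

    # Hypothetical wiring: the key used in prompt_kwargs ("task_desc_str" here)
    # becomes the variable name shown in the trace_graph.
    # generator = Generator(..., prompt_kwargs={"task_desc_str": task_desc})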

References:

  1. karpathy/micrograd

proposing: bool = False#
predecessors: Set[Parameter] = {}#
peers: Set[Parameter] = {}#
input_args: Dict[str, Any] = None#
full_response: object = None#
backward_engine_disabled: bool = False#
id: str = None#
role_desc: str = ''#
name: str = None#
param_type: ParameterType#
data: T = None#
gradients: List[Parameter]#
gradient_prompt: str#
gradients_context: Dict[Parameter, GradientContext]#
instruction_to_optimizer: str#
instruction_to_backward_engine: str#
eval_input: object = None#
from_response_id: str = None#
successor_map_fn: Dict[str, Callable] = None#
map_to_successor(successor: object) → T[source]#

Apply the map function registered for the given successor, looked up by the successor’s id.

add_successor_map_fn(successor: object, map_fn: Callable)[source]#

Add or update a map function for a specific successor using its id.
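
A hedged sketch of how these two methods pair up. The successor object below is a stand-in, and it is assumed here that the registered callable receives the Parameter itself and that lookup is keyed by the successor's id.

    from adalflow.optim.parameter import Parameter

    answer = Parameter(data="raw answer text", requires_opt=False, role_desc="model answer")

    class DownstreamComponent:  # hypothetical successor; any object with a stable id works
        pass

    consumer = DownstreamComponent()

    # Register how this parameter should be presented to that specific successor.
    answer.add_successor_map_fn(consumer, map_fn=lambda p: p.data.upper())

    # Retrieve the mapped view for exactly that successor.
    print(answer.map_to_successor(consumer))  # expected to print "RAW ANSWER TEXT"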

check_if_already_computed_gradient_respect_to(response_id: str) → bool[source]#
add_gradient(gradient: Parameter)[source]#
set_predecessors(predecessors: List[Parameter] = None)[source]#
set_grad_fn(grad_fn)[source]#
get_param_info()[source]#
set_peers(peers: List[Parameter] = None)[source]#
trace_forward_pass(input_args: Dict[str, Any], full_response: object)[source]#

Trace the forward pass of the parameter.

set_eval_fn_input(eval_input: object)[source]#

Set the input for the eval_fn.

set_score(score: float)[source]#
add_to_trace(trace: DataClass, is_teacher: bool = True)[source]#

Called by the generator.forward to add a trace to the parameter.

It is important to allow the trace to be updated, as this changes the sampling weight. If a sample’s score increases as training goes on, it becomes less likely to be sampled, keeping the sampled examples diverse; otherwise, the same failed examples would keep being sampled.

add_score_to_trace(trace_id: str, score: float, is_teacher: bool = True)[source]#

Called by the generator.backward to add the eval score to the trace.

propose_data(data: T, demos: List[DataClass] | None = None)[source]#

Used by the optimizer to propose new data while saving the previous data in case of a revert.

revert_data(include_demos: bool = False)[source]#

Revert the data to the previous data.

step_data(include_demos: bool = False)[source]#

Use PyTorch’s optimizer syntax to finalize the update of the data.
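
A sketch of the propose/revert/step cycle these three methods form, as an optimizer might drive it; the improvement check below is a stand-in for the validation comparison done by the training loop.

    from adalflow.optim.parameter import Parameter

    prompt = Parameter(data="Answer briefly.", requires_opt=True, role_desc="task instruction")

    prompt.propose_data("Answer briefly and cite one source.")  # stage a candidate; previous data is kept

    improved = True  # stand-in for a real validation-score comparison
    if improved:
        prompt.step_data()    # finalize the proposed data (PyTorch-style step)
    else:
        prompt.revert_data()  # fall back to the previous data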

get_grad_fn()[source]#
update_value(data: T)[source]#

Update the parameter’s value in-place, checking for type correctness.
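
A minimal sketch, assuming string data; the exact behavior on a type mismatch is inferred from the description above, not verified.

    from adalflow.optim.parameter import Parameter

    p = Parameter(data="old instruction", requires_opt=True, role_desc="instruction")
    p.update_value("new instruction")  # in-place update; the new value must match the existing data type
    # p.update_value(123) would fail the type check, since int does not match the str data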

reset_gradients()[source]#
reset_gradients_context()[source]#
get_gradients_names() → str[source]#
get_gradient_and_context_text() → str[source]#

Aggregates and returns: (1) the gradients and (2) the context text for which the gradients are computed.

get_short_value(n_words_offset: int = 10) → str[source]#

Returns a short version of the value of the variable. We sometimes use this during optimization, when we want to see the value of the variable but not the entire value. This saves tokens and avoids repeating very long values, such as code or solutions to hard problems.

Parameters:
  • n_words_offset (int) – The number of words to show from the beginning and the end of the value.
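
Illustrative only; the exact elision format of the returned string is an assumption.

    from adalflow.optim.parameter import Parameter

    long_param = Parameter(
        data=" ".join(f"word{i}" for i in range(200)),
        requires_opt=False,
        role_desc="a long value",
    )
    # Show only the first and last 5 words instead of all 200.
    print(long_param.get_short_value(n_words_offset=5))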

static trace_graph(root: Parameter) → Tuple[Set[Parameter], Set[Tuple[Parameter, Parameter]]][source]#
backward()[source]#
draw_graph(add_grads: bool = True, format: Literal['png', 'svg'] = 'png', rankdir: Literal['LR', 'TB'] = 'TB', filepath: str | None = None)[source]#

Draw the graph of the parameter and its gradients.

Parameters:
  • add_grads (bool, optional) – Whether to add gradients to the graph. Defaults to True.

  • format (str, optional) – The format of the output file. Defaults to “png”.

  • rankdir (str, optional) – The direction of the graph. Defaults to “TB”.

  • filepath (str, optional) – The path to save the graph. Defaults to None.
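
A hedged sketch of rendering the graph. In practice the root would be the output Parameter of a forward pass so that predecessors and gradients appear; graphviz is assumed to be installed, and filepath is assumed to be a path prefix for the rendered file.

    from adalflow.optim.parameter import Parameter

    root = Parameter(data="hello", requires_opt=True, role_desc="demo parameter")

    root.draw_graph(
        add_grads=True,         # include gradient text in the node labels
        format="svg",
        rankdir="LR",           # left-to-right layout instead of the default top-to-bottom
        filepath="demo_graph",  # assumed path prefix for the output file
    )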

to_dict()[source]#
classmethod from_dict(data: dict)[source]#