parameter

Parameter is used by the Optimizer, Trainer, and AdalComponent for automatic optimization.

Classes

ComponentNode(id, name[, type])

Used to represent a node in the component graph.

ComponentTrace([name, id, input_args, ...])

Records the inputs and outputs of a single component call.

OutputParameter(*, id, data, data_id, ...)

The output parameter is the most complex type of parameter in the system.

Parameter(*, id, data, data_id, ...)

A data container to represent a parameter used for optimization.

ScoreTrace([score, eval_comp_id, eval_comp_name])

Records an evaluation score together with the id and name of the evaluator component.

class Parameter(*, id: str | None = None, data: T = None, data_id: str = None, requires_opt: bool = True, role_desc: str = '', param_type: ParameterType = ParameterType.NONE, name: str = None, instruction_to_optimizer: str = None, instruction_to_backward_engine: str = None, score: float | None = None, eval_input: object = None, successor_map_fn: Dict[str, Callable] | None = None, data_in_prompt: Callable = None)[source]

Bases: Generic[T]

A data container to represent a parameter used for optimization.

A parameter enforces a specific data type and can be updated in place. When parameters are assigned as Component attributes, they are automatically added to the component's list of parameters and will appear in the parameters() or named_parameters() methods.

Args:

End users only need to create the Parameter with four arguments and pass it to the prompt_kwargs in the Generator.

  • data (str): the data of the parameter

  • requires_opt (bool, optional): whether the parameter requires optimization. Default: True

  • role_desc (str): a description of the parameter's role

  • param_type (ParameterType): the type of the parameter, including ParameterType.PROMPT for instruction optimization and ParameterType.DEMOS for few-shot optimization

  • instruction_to_optimizer (str, optional): instruction to the optimizer. Default: None

  • instruction_to_backward_engine (str, optional): instruction to the backward engine. Default: None

Parameters created by users are automatically assigned the variable name/key they hold in the prompt_kwargs, for easy reading and debugging in the trace_graph.
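A minimal sketch of this flow, assuming the top-level adalflow import, the OpenAIClient model client, and the default Generator template (which exposes a system_prompt variable); the model name and prompt text are illustrative:

    import adalflow as adal
    from adalflow.optim.types import ParameterType

    # The four arguments end users typically provide.
    system_prompt = adal.Parameter(
        data="You are a concise assistant. Answer in one sentence.",
        requires_opt=True,
        role_desc="system prompt guiding the assistant's style",
        param_type=ParameterType.PROMPT,
    )

    # The key "system_prompt" becomes the parameter's name in the trace_graph.
    generator = adal.Generator(
        model_client=adal.OpenAIClient(),  # assumes OPENAI_API_KEY is set
        model_kwargs={"model": "gpt-4o-mini"},
        prompt_kwargs={"system_prompt": system_prompt},
    )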

References:

  1. karpathy/micrograd

allowed_types = {ParameterType.NONE, ParameterType.PROMPT, ParameterType.DEMOS, ParameterType.HYPERPARAM, ParameterType.INPUT}
proposing: bool = False
predecessors: Set[Parameter] = {}
peers: Set[Parameter] = {}
tgd_optimizer_trace: TGDOptimizerTrace = None
id: str = None
data_id: str = None
role_desc: str = ''
name: str = None
param_type: ParameterType
data: T = None
gradients: Set[Gradient]
instruction_to_optimizer: str
instruction_to_backward_engine: str
score: float
eval_input: object = None
successor_map_fn: Dict[str, Callable] = None
data_in_prompt: Callable = None
gt: object = None
map_to_successor(successor: object) → T[source]

Apply the map function for the given successor, looked up by the successor's id.

add_successor_map_fn(successor: object, map_fn: Callable)[source]

Add or update the map function for a specific successor, keyed by the successor's id. This controls the value of the current parameter that the successor will see.
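For illustration, a common pattern is to expose only the raw data to a given successor (reusing the system_prompt and generator from the sketch above):

    # Keyed internally by the successor's id; the map function receives this
    # parameter and returns the value the successor should see.
    system_prompt.add_successor_map_fn(successor=generator, map_fn=lambda x: x.data)

    # The graph machinery resolves the per-successor value like this:
    value = system_prompt.map_to_successor(generator)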

check_if_already_computed_gradient_respect_to(response_id: str) → bool[source]
set_gt(gt: object)[source]
get_gt() → object[source]
add_gradient(gradient: Gradient)[source]
reset_gradients()[source]
get_gradients_names() → str[source]
get_prompt_data() → str[source]
get_gradients_str() → str[source]
get_gradient_and_context_text(skip_correct_sample: bool = False) → str[source]

Aggregates and returns:
  1. the gradients
  2. the context text for which the gradients are computed

Sorts the gradients from the lowest score to the highest and highlights the lowest-scoring gradients for the optimizer.

get_gradients_component_schema(skip_correct_sample: bool = False) → str[source]

Aggregates and returns:
  1. the gradients
  2. the context text for which the gradients are computed

Sorts the gradients from the lowest score to the highest and highlights the lowest-scoring gradients for the optimizer.

merge_gradients_for_cycle_components()[source]

Merge gradients that share the same data_id and from_response_component_id into a single gradient.

sort_gradients()[source]

Following the rules described in the Gradient class, sort the gradients by data_id, then response_component_id, then score.

set_predecessors(predecessors: List[Parameter] = None)[source]
set_grad_fn(grad_fn)[source]
get_param_info()[source]

Used to represent the parameter in the prompt.

set_peers(peers: List[Parameter] = None)[source]
trace_optimizer(api_kwargs: Dict[str, Any], response: TGDData)[source]

Trace the inputs and output of a TGD optimizer.

set_eval_fn_input(eval_input: object)[source]

Set the input for the eval_fn.

set_score(score: float)[source]

Set the score of the parameter in the backward pass. For intermediate nodes, there is only one score per eval function behind the node. For leaf nodes such as DEMO or PROMPT parameters, there will be [batch_size] scores.

This score is only used to relay the score to the demo parameter.

add_dataclass_to_trace(trace: DataClass, is_teacher: bool = True)[source]

Called by the generator.forward to add a trace to the parameter.

It is important to allow the trace to be updated, as this yields different sampling weights: if a sample's score increases as training goes on, it becomes less likely to be sampled, keeping the sampled examples diverse. Otherwise, failed examples would be sampled repeatedly.

add_score_to_trace(trace_id: str, score: float, is_teacher: bool = True)[source]

Called by the generator.backward to add the eval score to the trace.

propose_data(data: T, demos: List[DataClass] | None = None)[source]

Used by the optimizer to propose new data, saving the previous data in case of a revert.

revert_data(include_demos: bool = False)[source]

Revert the data to the previous data.

step_data(include_demos: bool = False)[source]

Use PyTorch-style optimizer syntax to finalize the update of the data.
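A sketch of the propose/revert/step cycle that an optimizer drives, reusing the system_prompt parameter from the earlier sketch; the candidate text and the evaluation outcome are placeholders:

    # Propose a candidate value; the previous value is kept for a possible revert.
    system_prompt.propose_data("You are a terse assistant. One sentence only.")

    improved = True  # placeholder for a real validation-score comparison
    if improved:
        system_prompt.step_data()    # finalize the proposed value
    else:
        system_prompt.revert_data()  # restore the previous value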

get_grad_fn()[source]
update_value(data: T)[source]

Update the parameter’s value in-place, checking for type correctness.

get_short_value(n_words_offset: int = 10) → str[source]

Return a short version of the parameter's value. This is used during optimization when we want to inspect the value without printing it in full, both to save tokens and to avoid repeating very long values such as code or solutions to hard problems.

Parameters:
  • n_words_offset (int) – The number of words to show from the beginning and the end of the value.
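For illustration, continuing the imports from the first sketch and using a hypothetical long-valued parameter:

    long_param = adal.Parameter(
        data=" ".join(f"word{i}" for i in range(100)),
        requires_opt=False,
    )
    long_param.update_value("replacement " * 100)  # in-place, type-checked update
    # Shows the first and last 5 words, with the middle elided.
    print(long_param.get_short_value(n_words_offset=5))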

reset_all_gradients()[source]

Traverse the graph and reset the gradients for all nodes.

static trace_graph(root: Parameter) → Tuple[Set[Parameter], Set[Tuple[Parameter, Parameter]]][source]
backward()[source]

Apply the backward pass to all nodes in the graph in reverse topological order.
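A sketch of how these graph utilities fit together, assuming `response` is the output parameter returned by a generator's forward call in training mode:

    from adalflow.optim.parameter import Parameter

    # Backpropagate textual gradients from the response node through the graph.
    response.backward()

    # Inspect the graph that backward() traversed.
    nodes, edges = Parameter.trace_graph(response)
    print(f"{len(nodes)} nodes, {len(edges)} edges")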

static generate_node_html(node: Parameter, output_dir='node_pages')[source]

Generate an HTML page for a specific node.

draw_interactive_html_graph(filepath: str | None = None, nodes: List[Parameter] = None, edges: List[Tuple[Parameter, Parameter]] = None) → Dict[str, Any][source]

Generate an interactive graph with pyvis and save as an HTML file.

Parameters:
  • nodes (list) – A list of Parameter objects.

  • edges (list) – A list of edges as tuples (source, target).

  • filepath (str, optional) – Path to save the graph file. Defaults to None.

Returns:

A dictionary containing the graph file path.

Return type:

dict
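For example, reusing the nodes and edges from trace_graph above (a sketch; pyvis must be installed, and the output path is hypothetical):

    result = response.draw_interactive_html_graph(
        filepath="trace_graph.html",  # hypothetical output path
        nodes=list(nodes),            # from Parameter.trace_graph(response)
        edges=list(edges),
    )
    print(result)  # a dict containing the graph file path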

static wrap_and_escape(text, width=40)[source]

Wrap text to the specified width, considering HTML breaks, and escape special characters.

draw_graph(add_grads: bool = True, full_trace: bool = False, format: Literal['png', 'svg'] = 'png', rankdir: Literal['LR', 'TB'] = 'TB', filepath: str | None = None) → Dict[str, Any][source]

Draw the graph of the parameter and its gradients.

Parameters:
  • add_grads (bool, optional) – Whether to add gradients to the graph. Defaults to True.

  • format (str, optional) – The format of the output file. Defaults to “png”.

  • rankdir (str, optional) – The direction of the graph. Defaults to “TB”.

  • filepath (str, optional) – The path to save the graph. Defaults to None.

  • full_trace (bool, optional) – Whether to include more detailed trace such as api_kwargs. Defaults to False.
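For example (a sketch; requires graphviz, and the filepath is hypothetical):

    response.draw_graph(
        add_grads=True,   # include gradient nodes in the rendering
        full_trace=False,
        format="png",
        rankdir="TB",
        filepath="parameter_trace",  # hypothetical save location
    )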

draw_output_subgraph(add_grads: bool = True, format: str = 'png', rankdir: str = 'TB', filepath: str = None) → Dict[source]

Build and visualize a subgraph containing only OUTPUT parameters.

Parameters:
  • add_grads (bool) – Whether to include gradient edges.

  • format (str) – Format for output (e.g., png, svg).

  • rankdir (str) – Graph layout direction (“LR” or “TB”).

  • filepath (str) – Path to save the graph.

draw_component_subgraph(format: str = 'png', rankdir: str = 'TB', filepath: str = None)[source]

Build and visualize a subgraph containing only the component nodes.

Parameters:
  • format (str) – Format for output (e.g., png, svg).

  • rankdir (str) – Graph layout direction (“LR” or “TB”).

  • filepath (str) – Path to save the graph.

to_dict()[source]
classmethod from_dict(data: dict)[source]
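A serialization round trip, as a sketch reusing the system_prompt parameter from the first example:

    from adalflow.optim.parameter import Parameter

    state = system_prompt.to_dict()        # snapshot the parameter as a plain dict
    restored = Parameter.from_dict(state)  # rebuild an equivalent Parameter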
class ComponentNode(id: str, name: str, type: Literal['INPUT', 'COMPONENT'] = 'COMPONENT')[source]

Bases: DataClass

Used to represent a node in the component graph.

id: str
name: str
type: Literal['INPUT', 'COMPONENT'] = 'COMPONENT'
class ComponentTrace(name: str = None, id: str = None, input_args: Dict[str, Any] = None, full_response: object = None, raw_response: str = None, api_kwargs: Dict[str, Any] = None)[source]

Bases: DataClass

name: str = None
id: str = None
input_args: Dict[str, Any] = None
full_response: object = None
raw_response: str = None
api_kwargs: Dict[str, Any] = None
to_context_str()[source]
class ScoreTrace(score: float = None, eval_comp_id: str = None, eval_comp_name: str = None)[source]

Bases: object

score: float = None
eval_comp_id: str = None
eval_comp_name: str = None