optimizer#

Base Classes for AdalFlow Optimizers, including Optimizer, TextOptimizer, and DemoOptimizer.

Classes

DemoOptimizer([weighted, dataset, ...])

Base class for all demo optimizers.

Optimizer()

Base class for all optimizers.

TextOptimizer(*args, **kwargs)

Base class for all text optimizers.

class Optimizer[source]#

Bases: object

Base class for all optimizers.

proposing: bool = False#
params: Iterable[Parameter] | Iterable[Dict[str, Any]]#
state_dict()[source]#
propose(*args, **kwargs)[source]#
step(*args, **kwargs)[source]#
revert(*args, **kwargs)[source]#
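
As a minimal sketch of the propose/step/revert lifecycle these hooks imply, a hypothetical subclass is shown below. The subclass name, the backup strategy, and the Parameter.data attribute access are illustrative assumptions rather than documented API, and params is assumed to be a list so it can be iterated more than once.

from typing import Any, Dict, List

from adalflow.optim.optimizer import Optimizer
from adalflow.optim.parameter import Parameter  # import paths are assumptions


class MyOptimizer(Optimizer):
    """Hypothetical subclass illustrating the propose/step/revert lifecycle."""

    def __init__(self, params: List[Parameter]):
        self.params = params
        self.proposing = False
        self._backup: Dict[int, Any] = {}

    def propose(self, *args: Any, **kwargs: Any):
        # Stage candidate values; keep backups so revert() can undo them.
        self._backup = {id(p): p.data for p in self.params}  # .data is assumed
        self.proposing = True

    def step(self, *args: Any, **kwargs: Any):
        # Accept the staged proposal and leave the proposing state.
        self._backup = {}
        self.proposing = False

    def revert(self, *args: Any, **kwargs: Any):
        # Discard the proposal by restoring the backed-up values.
        for p in self.params:
            p.data = self._backup.get(id(p), p.data)
        self._backup = {}
        self.proposing = False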
class TextOptimizer(*args, **kwargs)[source]#

Bases: Optimizer

Base class for all text optimizers.

Text optimization is performed via textual gradient descent, a variant of gradient descent that optimizes the text directly by generating new values for a given text prompt. This includes:

- System prompt
- Output format
- Prompt template

zero_grad()[source]#

Clear all the gradients of the parameters.
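
For illustration, a hedged sketch of how a concrete subclass might implement zero_grad(). The gradients attribute on Parameter is an assumption about its bookkeeping, not a documented contract.

from typing import Any, List

from adalflow.optim.optimizer import TextOptimizer
from adalflow.optim.parameter import Parameter  # import paths are assumptions


class MyTextOptimizer(TextOptimizer):
    """Hypothetical text optimizer; only the gradient bookkeeping is sketched."""

    def __init__(self, params: List[Parameter], *args: Any, **kwargs: Any):
        super().__init__(*args, **kwargs)
        self.params = params

    def zero_grad(self):
        # Clear accumulated textual gradients before proposing new values.
        for p in self.params:
            p.gradients = []  # assumed attribute; adapt to the real Parameter API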

class DemoOptimizer(weighted: bool = True, dataset: Sequence[DataClass] = None, exclude_input_fields_from_bootstrap_demos: bool = False, *args, **kwargs)[source]#

Bases: Optimizer

Base class for all demo optimizers.

Demo optimizers perform few-shot optimization: they sample raw examples from the training dataset or bootstrap examples from the model's output, and work with a sampler to generate new values for a given text prompt.

If bootstrapping is used, a teacher generator is required to generate the examples.

dataset: Sequence[DataClass]#
exclude_input_fields_from_bootstrap_demos: bool = False#
use_weighted_sampling(weighted: bool)[source]#
params: Iterable[Parameter] | Iterable[Dict[str, Any]]#
config_shots(*args, **kwargs)[source]#

Initialize the samples for each parameter.

set_dataset(dataset: Sequence[DataClass])[source]#

Set the dataset for the optimizer.
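
Putting the pieces together, a hedged usage sketch: FewShotOptimizer and train_dataset are placeholders for a concrete subclass and a Sequence[DataClass], and the config_shots keyword names are assumptions, since the method only documents *args and **kwargs.

# Hypothetical usage; FewShotOptimizer and train_dataset are placeholders.
optimizer = FewShotOptimizer(weighted=True)
optimizer.set_dataset(train_dataset)   # Sequence[DataClass] to sample from
optimizer.use_weighted_sampling(True)  # toggle weighted vs. uniform sampling
optimizer.config_shots(raw_shots=2, bootstrap_shots=2)  # kwarg names assumed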