adal

AdalComponent provides an interface to compose the different parts of a task pipeline (eval_fn, train_step, loss_step, optimizers, backward engine, teacher generator, etc.) so that they work with Trainer.

Classes

AdalComponent(task[, eval_fn, loss_eval_fn, ...])

Define a train, eval, and test step for a task pipeline.

class AdalComponent(task: Component, eval_fn: Callable | None = None, loss_eval_fn: Callable | None = None, loss_fn: LossComponent | None = None, backward_engine: BackwardEngine | None = None, backward_engine_model_config: Dict | None = None, teacher_model_config: Dict | None = None, text_optimizer_model_config: Dict | None = None, *args, **kwargs)[source]

Bases: Component

Define a train, eval, and test step for a task pipeline.

This serves the following purposes:

1. Organize all parts for training a task pipeline in one place.
2. Help with debugging and testing before the actual training.
3. Add multi-threading support for training and evaluation.

It has no need for call, forward, bicall, or __call__ itself, so we overwrite the base implementations.
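For orientation, a minimal subclass sketch (QATask is a hypothetical task pipeline, and the exact-match eval_fn is an assumption for illustration):

import adalflow as adal

class MyAdal(adal.AdalComponent):
    def __init__(self, model_client, model_kwargs):
        task = QATask(model_client, model_kwargs)  # hypothetical task pipeline
        eval_fn = lambda y, y_gt: float(y == y_gt)  # assumed exact-match metric
        loss_fn = adal.EvalFnToTextLoss(
            eval_fn=eval_fn,
            eval_fn_desc="1 if the prediction equals the ground truth else 0",
        )
        super().__init__(task=task, eval_fn=eval_fn, loss_fn=loss_fn)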

task: Component
eval_fn: Callable | None
loss_eval_fn: Callable | None
loss_fn: LossComponent | None
backward_engine: BackwardEngine | None
prepare_task(sample: Any, *args, **kwargs) → Tuple[Callable, Dict][source]

Tell Trainer how to call the task in both training and inference mode.

Return a task call and kwargs for one training sample.

If you just need to eval, ensure the Callable runs in inference mode. If you also need to train, ensure the Callable supports training mode, which returns a Parameter and mainly calls forward on all subcomponents within the task.

Example:

def prepare_task(self, sample: Any, *args, **kwargs) -> Tuple[Callable, Dict]:
    return self.task, {"x": sample.x}
prepare_loss(sample: Any, y_pred: Parameter, *args, **kwargs) → Tuple[Callable, Dict][source]

Tell Trainer how to calculate the loss in the training mode.

Return a loss call and kwargs for one loss sample.

Ensure y_pred is a Parameter, and that the real input used for both y_gt and y_pred is their eval_input. Make sure it is set up.

Example:

# "y" and "y_gt" are arguments needed
#by the eval_fn inside of the loss_fn if it is a EvalFnToTextLoss

def prepare_loss(self, sample: Example, pred: adal.Parameter) -> Dict:
    # prepare gt parameter
    y_gt = adal.Parameter(
        name="y_gt",
        data=sample.answer,
        eval_input=sample.answer,
        requires_opt=False,
    )

    # pred's full_response is the output of the task pipeline, which is a GeneratorOutput
    pred.eval_input = pred.full_response.data
    return self.loss_fn, {"kwargs": {"y": y_gt, "y_pred": pred}}
prepare_eval(sample: Any, y_pred: Any, *args, **kwargs) → float[source]

Tell Trainer how to eval in inference mode. Return the eval_fn and kwargs for one evaluation sample.

Ensure the eval_fn is a callable that takes the predicted output and the ground truth output. Ensure the kwargs are set up correctly.
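A minimal sketch mirroring the prepare_loss example above (the Example dataclass with an answer field is an assumption, and the task output is assumed to be a GeneratorOutput):

def prepare_eval(self, sample: Example, y_pred: adal.GeneratorOutput):
    # guard against failed generations before reading .data
    y_label = y_pred.data if y_pred and y_pred.data is not None else ""
    return self.eval_fn, {"y": y_label, "y_gt": sample.answer}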

prepare_loss_eval(sample: Any, y_pred: Any, *args, **kwargs) → float[source]

Tell Trainer how to eval in inference mode. Return the eval_fn and kwargs for one evaluation sample.

Ensure the eval_fn is a callable that takes the predicted output and the ground truth output. Ensure the kwargs are set up correctly.

configure_optimizers(*args, **text_optimizer_kwargs) → List[Optimizer][source]

Note: When you use a text optimizer, ensure you call configure_backward_engine too.
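A sketch of an override that follows this note, wiring the backward engine before creating the text optimizers (adal.OpenAIClient and the model choice are assumptions):

def configure_optimizers(self):
    # the text optimizer needs a backward engine to produce textual gradients
    self.configure_backward_engine_helper(
        model_client=adal.OpenAIClient(),  # assumed client; any ModelClient works
        model_kwargs={"model": "gpt-4o"},  # hypothetical model choice
    )
    text_optimizers = self.configure_text_optimizer_helper(
        model_client=adal.OpenAIClient(), model_kwargs={"model": "gpt-4o"}
    )
    # demo optimizers are optional; include them to also optimize few-shot demos
    return text_optimizers + self.configure_demo_optimizer_helper()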

configure_backward_engine(*args, **kwargs)[source]

Configure a backward engine for all GradComponents in the task, used to generate gradients.

disable_backward_engine()[source]

Disable the backward engine for all GradComponents in the task. No more gradients will be generated.

evaluate_samples(samples: Any, y_preds: List, metadata: Dict[str, Any] | None = None, num_workers: int = 2, use_loss_eval_fn: bool = False) → EvaluationResult[source]

Evaluate predictions against the ground truth samples. Run evaluation on samples using parallel processing. Utilizes prepare_eval defined by the user.

Metadata is used to store context that can be found in the generator input.

Parameters:
  • samples (Any) – The input samples to evaluate.

  • y_preds (List) – The predicted outputs corresponding to each sample.

  • metadata (Optional[Dict[str, Any]]) – Optional metadata dictionary.

  • num_workers (int) – Number of worker threads for parallel processing.

  • use_loss_eval_fn (bool) – Whether to use loss_eval_fn instead of eval_fn for scoring. Defaults to False.

Returns:

An object containing the average score and per-item scores.

Return type:

EvaluationResult
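Hypothetical usage, assuming predictions were collected beforehand (the result field names follow the return description above):

result = adal_component.evaluate_samples(
    samples=val_data, y_preds=val_preds, num_workers=4
)
print(result.avg_score)        # average score across the samples
print(result.per_item_scores)  # assumed field holding one score per sample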

pred_step(batch, batch_idx, num_workers: int = 2, running_eval: bool = False, min_score: float | None = None, use_loss_eval_fn: bool = False) → Tuple[List[Parameter], List, Dict[int, float]][source]

Applies only to the eval mode.

Parameters:
  • batch (Any) – The input batch to predict.

  • batch_idx (int) – The index of the batch.

  • num_workers (int) – Number of worker threads for parallel processing.

  • running_eval (bool) – Whether to compute evaluation scores while predicting. Defaults to False.

Returns:

The predicted outputs, the samples, and the scores.

Return type:

Tuple[List["Parameter"], List, Dict[int, float]]

train_step(batch, batch_idx, num_workers: int = 2) → List[source]

Run a training step and return the predicted outputs. Likely a list of Parameters.

validate_condition(steps: int, total_steps: int) → bool[source]

By default, the trainer will validate at every step.
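To validate less often, an override might look like this sketch:

def validate_condition(self, steps: int, total_steps: int) -> bool:
    # validate every 10 steps, and always on the final step
    return steps % 10 == 0 or steps == total_steps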

validation_step(batch, batch_idx, num_workers: int = 2, minimum_score: float | None = None, use_loss_eval_fn: bool = False) → EvaluationResult[source]
Parameters:
  • batch (Any) – The input batch to validate; it can be the whole validation dataset.

  • batch_idx (int) – The index of the batch, or the current step.

  • num_workers (int) – Number of worker threads for parallel processing.

  • minimum_score (Optional[float]) – The maximum potential score must be larger than this value for evaluation to continue.

Evaluate a batch, or the whole validation dataset by setting batch=val_dataset. Uses self.eval_fn to evaluate the samples. If you require self.task.eval() to be called before validation, you can override this method as:

def validation_step(self, batch, batch_idx, num_workers: int = 2) -> EvaluationResult:
    self.task.eval()
    return super().validation_step(batch, batch_idx, num_workers)
loss_step(batch, y_preds: List[Parameter], batch_idx, num_workers: int = 2) → List[Parameter][source]

Calculate the loss for the batch.

configure_teacher_generator()[source]

Configure a teacher generator for all generators in the task for bootstrapping examples.

You can call configure_teacher_generator_helper to easily configure it by passing the model_client and model_kwargs.
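A sketch of such an override (the client and teacher model are assumptions):

def configure_teacher_generator(self):
    # delegate to the helper, typically with a stronger "teacher" model
    self.configure_teacher_generator_helper(
        model_client=adal.OpenAIClient(),  # assumed client
        model_kwargs={"model": "gpt-4o"},  # hypothetical teacher model
    )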

configure_teacher_generator_helper(model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None)[source]

Configure a teacher generator for all generators in the task for bootstrapping examples.

disable_backward_engine_helper()[source]

Disable the backward engine for all GradComponents in the task.

configure_backward_engine_helper(model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None, backward_pass_setup: BackwardPassSetup | None = None)[source]

Configure a backward engine for all generators in the task, used to generate gradients.

configure_callbacks(save_dir: str | None = 'traces', *args, **kwargs) → List[str][source]

By default we configure the failure generator callback. Users can overwrite this method to add more callbacks.

run_one_task_sample(sample: Any) → Any[source]

Run one training sample. Used for debugging and testing.

run_one_loss_sample(sample: Any, y_pred: Any) → Any[source]

Run one loss sample. Used for debugging and testing.
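Together, these two methods support a quick sanity check before a full Trainer run (train_data and adal_component are assumed to exist):

sample = train_data[0]
y_pred = adal_component.run_one_task_sample(sample)
adal_component.run_one_loss_sample(sample, y_pred)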

training: bool
teacher_mode: bool
tracing: bool
configure_demo_optimizer_helper() → List[DemoOptimizer][source]

One demo optimizer can handle multiple demo parameters, but it will only have one dataset (the trainset) configured by the Trainer.

If users want to use a different trainset for each demo optimizer, they can configure it themselves.

configure_text_optimizer_helper(model_client: ModelClient, model_kwargs: Dict[str, Any], **kwargs) → List[TextOptimizer][source]

The text optimizer handles the prompt parameter type. One text optimizer can handle multiple text parameters.

bicall(*args, **kwargs)[source]

If the user provides a bicall method, then __call__ will automatically dispatch here for both training and inference scenarios. This can internally decide how to handle training vs. inference, or just produce a single unified output type.
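A minimal sketch of a unified bicall (the question/id arguments and delegation to self.task are illustrative assumptions):

def bicall(self, question: str, id: str = None):
    # self.task is assumed to branch on training vs. inference mode
    # internally, so a single body can serve both __call__ paths
    return self.task(question=question, id=id)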

call(*args, **kwargs)[source]

User must override this for the inference scenario if bicall is not defined.

forward(*args, **kwargs)[source]

User must override this for the training scenario if bicall is not defined.