trainer#
Submodules#
- adal
AdalComponent
AdalComponent.task
AdalComponent.eval_fn
AdalComponent.loss_fn
AdalComponent.backward_engine
AdalComponent.prepare_task()
AdalComponent.prepare_loss()
AdalComponent.prepare_eval()
AdalComponent.configure_optimizers()
AdalComponent.configure_backward_engine()
AdalComponent.evaluate_samples()
AdalComponent.pred_step()
AdalComponent.train_step()
AdalComponent.validate_condition()
AdalComponent.validation_step()
AdalComponent.loss_step()
AdalComponent.configure_teacher_generator()
AdalComponent.configure_teacher_generator_helper()
AdalComponent.configure_backward_engine_helper()
AdalComponent.configure_callbacks()
AdalComponent.run_one_task_sample()
AdalComponent.training
AdalComponent.run_one_loss_sample()
AdalComponent.configure_demo_optimizer_helper()
AdalComponent.configure_text_optimizer_helper()
- trainer
Trainer
Trainer.optimizer
Trainer.ckpt_file
Trainer.optimization_order
Trainer.strategy
Trainer.max_steps
Trainer.ckpt_path
Trainer.adaltask
Trainer.num_workers
Trainer.train_loader
Trainer.val_dataset
Trainer.test_dataset
Trainer.batch_val_score_threshold
Trainer.max_error_samples
Trainer.max_correct_samples
Trainer.max_proposals_per_step
Trainer.demo_optimizers
Trainer.text_optimizers
Trainer.train_batch_size
Trainer.debug
Trainer.diagnose()
Trainer.debug_report()
Trainer.fit()
Trainer.initial_validation()
Trainer.gather_trainer_states()
Trainer.prep_ckpt_file_path()
Trainer.training
- class Trainer(adaltask: AdalComponent, optimization_order: Literal['sequential', 'mix'] = 'sequential', strategy: Literal['random', 'constrained'] = 'constrained', max_steps: int = 1000, train_batch_size: int | None = 4, num_workers: int = 4, ckpt_path: str = None, batch_val_score_threshold: float | None = 1.0, max_error_samples: int | None = 4, max_correct_samples: int | None = 4, max_proposals_per_step: int = 5, train_loader: Any | None = None, train_dataset: Any | None = None, val_dataset: Any | None = None, test_dataset: Any | None = None, raw_shots: int | None = None, bootstrap_shots: int | None = None, weighted_sampling: bool = False, exclude_input_fields_from_bootstrap_demos: bool = False, debug: bool = False, save_traces: bool = False, *args, **kwargs)[source]#
Bases:
Component
Ready-to-use trainer for an LLM task pipeline, able to optimize all types of parameters.
Training set: can be used for passing the initial proposed prompt or for few-shot sampling.
Validation set: used to select the final prompt or samples.
Test set: used to evaluate the final prompt or samples.
- Parameters:
adaltask – AdalComponent: AdalComponent instance
strategy – Literal[“random”, “constrained”]: Strategy to use for the optimizer
max_steps – int: Maximum number of steps to run the optimizer
num_workers – int: Number of workers to use for parallel processing
ckpt_path – str: Path to save the checkpoint files, default to ~/.adalflow/ckpt.
batch_val_score_threshold – Optional[float]: Threshold for skipping a batch
max_error_samples – Optional[int]: Maximum number of error samples to keep
max_correct_samples – Optional[int]: Maximum number of correct samples to keep
max_proposals_per_step – int: Maximum number of proposals to generate per step
train_loader – Any: DataLoader instance for training
train_dataset – Any: Training dataset
val_dataset – Any: Validation dataset
test_dataset – Any: Test dataset
few_shots_config – Optional[FewShotConfig]: Few shot configuration
save_traces – bool: Save traces for synthetic data generation or debugging
debug – bool: Run the trainer in debug mode. If True, the text-gradient graph for prompt parameters is saved under /ckpt/YourAdalComponentName/debug_text_grads, and the graph for demo parameters is saved under /ckpt/YourAdalComponentName/debug_demos.
Note
When you are in debug mode, you can use the get_logger API to show more detailed logs.
Example:
    from adalflow.utils import get_logger
    get_logger(level="DEBUG")
- optimizer: Optimizer = None#
- ckpt_file: str | None = None#
- optimization_order: Literal['sequential', 'mix'] = 'sequential'#
- strategy: Literal['random', 'constrained']#
- max_steps: int#
- ckpt_path: str | None = None#
- adaltask: AdalComponent#
- num_workers: int = 4#
- train_loader: Any#
- val_dataset = None#
- test_dataset = None#
- batch_val_score_threshold: float | None = 1.0#
- max_error_samples: int | None = 8#
- max_correct_samples: int | None = 8#
- max_proposals_per_step: int = 5#
- demo_optimizers: List[DemoOptimizer]#
- text_optimizers: List[TextOptimizer]#
- train_batch_size: int | None = 4#
- debug: bool = False#
- diagnose(dataset: Any, split: str = 'train')[source]#
Run an evaluation on the train set to track all error responses and their raw responses, using AdalComponent's default configure_callbacks.
- Parameters:
dataset – Any: Dataset to evaluate
split – str: Split name, defaults to train; it is also used as the directory name for saving the logs
Example:
    trainset, valset, testset = load_datasets(max_samples=10)
    adaltask = TGDWithEvalFnLoss(
        task_model_config=llama3_model,
        backward_engine_model_config=llama3_model,
        optimizer_model_config=llama3_model,
    )
    trainer = Trainer(adaltask=adaltask)
    diagnose = trainer.diagnose(dataset=trainset)
    print(diagnose)
- debug_report(text_grad_debug_path: str | None = None, few_shot_demo_debug_path: str | None = None)[source]#
- fit(*, adaltask: AdalComponent | None = None, train_loader: Any | None = None, train_dataset: Any | None = None, val_dataset: Any | None = None, test_dataset: Any | None = None, debug: bool = False, save_traces: bool = False, raw_shots: int | None = None, bootstrap_shots: int | None = None, resume_from_ckpt: str | None = None)[source]#
train_loader: An iterable or collection of iterables specifying training samples.
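Example (a hedged sketch: load_datasets, TGDWithEvalFnLoss, and llama3_model are the same placeholders as in the diagnose example above):

    trainset, valset, testset = load_datasets(max_samples=10)
    adaltask = TGDWithEvalFnLoss(
        task_model_config=llama3_model,
        backward_engine_model_config=llama3_model,
        optimizer_model_config=llama3_model,
    )
    trainer = Trainer(adaltask=adaltask, max_steps=12)
    trainer.fit(
        train_dataset=trainset,
        val_dataset=valset,
        test_dataset=testset,
    )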
- prep_ckpt_file_path(trainer_state: Dict[str, Any] = None)[source]#
Prepare the checkpoint root path: ~/.adalflow/ckpt/task_name/.
It also generates a unique checkpoint file name based on the strategy, max_steps, and a unique hash key. For multiple runs with the same AdalComponent + Trainer setup, the run number is incremented.
- training: bool#
- class AdalComponent(task: Component, eval_fn: Callable | None = None, loss_fn: LossComponent | None = None, backward_engine: BackwardEngine | None = None, backward_engine_model_config: Dict | None = None, teacher_model_config: Dict | None = None, text_optimizer_model_config: Dict | None = None, *args, **kwargs)[source]#
Bases:
Component
Define a train, eval, and test step for a task pipeline.
This serves the following purposes:
1. Organize all parts for training a task pipeline in one place.
2. Help with debugging and testing before the actual training.
3. Add multi-threading support for training and evaluation.
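Example (a skeleton sketch only; MyTaskPipeline, my_eval_fn, and the sample fields are hypothetical):

    class MyAdal(AdalComponent):
        def __init__(self, model_client, model_kwargs):
            task = MyTaskPipeline(model_client, model_kwargs)  # hypothetical task Component
            super().__init__(task=task, eval_fn=my_eval_fn)    # my_eval_fn is a placeholder

        def prepare_task(self, sample, *args, **kwargs):
            # map one dataset sample to the task call; see prepare_task below
            return self.task, {"x": sample.x}

        # prepare_eval, prepare_loss, and configure_optimizers are the other
        # hooks to override; each is documented below.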
- task: Component#
- eval_fn: Callable | None#
- loss_fn: LossComponent | None#
- backward_engine: BackwardEngine | None#
- prepare_task(sample: Any, *args, **kwargs) Tuple[Callable, Dict] [source]#
Tell Trainer how to call the task in both training and inference mode.
Return a task call and kwargs for one training sample.
If you just need to eval, ensure the Callable supports inference mode. If you also need to train, ensure the Callable supports training mode, which returns a Parameter and mainly calls forward on all subcomponents within the task.
Example:
    def prepare_task(self, sample: Any, *args, **kwargs) -> Tuple[Callable, Dict]:
        return self.task, {"x": sample.x}
- prepare_loss(sample: Any, y_pred: Parameter, *args, **kwargs) Tuple[Callable, Dict] [source]#
Tell Trainer how to calculate the loss in the training mode.
Return a loss call and kwargs for one loss sample.
Ensure y_pred is a Parameter, and that the real input used for y_gt and y_pred is their eval_input. Make sure it is set up.
Example:
# "y" and "y_gt" are arguments needed #by the eval_fn inside of the loss_fn if it is a EvalFnToTextLoss def prepare_loss(self, sample: Example, pred: adal.Parameter) -> Dict: # prepare gt parameter y_gt = adal.Parameter( name="y_gt", data=sample.answer, eval_input=sample.answer, requires_opt=False, ) # pred's full_response is the output of the task pipeline which is GeneratorOutput pred.eval_input = pred.full_response.data return self.loss_fn, {"kwargs": {"y": y_gt, "y_pred": pred}}
- prepare_eval(sample: Any, y_pred: Any, *args, **kwargs) float [source]#
Tell Trainer how to eval in inference mode. Return the eval_fn and kwargs for one evaluation sample.
Ensure the eval_fn is a callable that takes the predicted output and the ground-truth output, and ensure the kwargs are set up correctly.
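Example (a hedged sketch, assuming the sample carries an answer field and y_pred is a GeneratorOutput-like object with a data attribute):

    def prepare_eval(self, sample: Any, y_pred: Any, *args, **kwargs):
        # extract the raw prediction before comparing it with the ground truth
        y_label = y_pred.data if y_pred is not None else None
        return self.eval_fn, {"y": y_label, "y_gt": sample.answer}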
- configure_optimizers(*args, **kwargs) List[Optimizer] [source]#
Note: When you use a text optimizer, ensure you call configure_backward_engine too.
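Example (a hedged sketch combining the helpers documented below; the optimizer client and kwargs are placeholders):

    def configure_optimizers(self, *args, **kwargs):
        # text optimizers need a backward engine; configure it as well
        self.configure_backward_engine()
        text_optimizers = self.configure_text_optimizer_helper(
            model_client=OpenAIClient(),       # placeholder client
            model_kwargs={"model": "gpt-4o"},  # placeholder kwargs
        )
        demo_optimizers = self.configure_demo_optimizer_helper()
        return text_optimizers + demo_optimizers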
- configure_backward_engine(*args, **kwargs)[source]#
Configure a backward engine for all generators in the task for bootstrapping examples.
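In practice this can delegate to configure_backward_engine_helper (documented below); a sketch with placeholder model settings:

    def configure_backward_engine(self, *args, **kwargs):
        self.configure_backward_engine_helper(
            model_client=OpenAIClient(),       # placeholder client
            model_kwargs={"model": "gpt-4o"},  # placeholder kwargs
        )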
- evaluate_samples(samples: Any, y_preds: List, metadata: Dict[str, Any] | None = None, num_workers: int = 2) EvaluationResult [source]#
Run evaluation on samples using parallel processing. Utilizes the user-defined prepare_eval. Metadata is used for storing context that you can find from the generator input.
- Parameters:
samples (Any) – The input samples to evaluate.
y_preds (List) – The predicted outputs corresponding to each sample.
metadata (Optional[Dict[str, Any]]) – Optional metadata dictionary.
num_workers (int) – Number of worker threads for parallel processing.
- Returns:
An object containing the average score and per-item scores.
- Return type:
EvaluationResult
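Example (a hedged usage sketch; val_samples and predictions are placeholders, and the result attribute names are assumptions based on the description above):

    result = adal_component.evaluate_samples(
        samples=val_samples,   # list of dataset samples
        y_preds=predictions,   # matching predictions, e.g. from pred_step
        num_workers=4,
    )
    print(result.avg_score)        # average score (assumed attribute name)
    print(result.per_item_scores)  # per-item scores (assumed attribute name)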
- pred_step(batch, batch_idx, num_workers: int = 2, running_eval: bool = False, min_score: float | None = None)[source]#
Applies to both train and eval mode.
If you require self.task.train() to be called before training, you can override this method as:
    def train_step(self, batch, batch_idx, num_workers: int = 2) -> List:
        self.task.train()
        return super().train_step(batch, batch_idx, num_workers)
- validate_condition(steps: int, total_steps: int) bool [source]#
By default, the trainer validates at every step.
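To validate less often, you can override it; a sketch (the interval of 4 is an arbitrary choice):

    def validate_condition(self, steps: int, total_steps: int) -> bool:
        # validate every 4 steps and on the final step, instead of every step
        return steps % 4 == 0 or steps == total_steps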
- validation_step(batch, batch_idx, num_workers: int = 2, minimum_score: float | None = None) EvaluationResult [source]#
If you require self.task.eval() to be called before validation, you can override this method as:
    def validation_step(self, batch, batch_idx, num_workers: int = 2) -> List:
        self.task.eval()
        return super().validation_step(batch, batch_idx, num_workers)
- loss_step(batch, y_preds: List[Parameter], batch_idx, num_workers: int = 2) List[Parameter] [source]#
Calculate the loss for the batch.
- configure_teacher_generator()[source]#
Configure a teacher generator for all generators in the task for bootstrapping examples.
You can call configure_teacher_generator_helper to easily configure it by passing the model_client and model_kwargs.
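Example (a sketch of that delegation; the teacher model is a placeholder, typically stronger than the task model):

    def configure_teacher_generator(self):
        self.configure_teacher_generator_helper(
            model_client=OpenAIClient(),       # placeholder client
            model_kwargs={"model": "gpt-4o"},  # placeholder teacher model
        )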
- configure_teacher_generator_helper(model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None)[source]#
Configure a teacher generator for all generators in the task for bootstrapping examples.
- configure_backward_engine_helper(model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None)[source]#
Configure a backward engine for all generators in the task for bootstrapping examples.
- configure_callbacks(save_dir: str | None = 'traces', *args, **kwargs)[source]#
By default, we configure the failure generator callback. Users can override this method to add more callbacks.
- run_one_task_sample(sample: Any) Any [source]#
Run one training sample. Used for debugging and testing.
- training: bool#
- run_one_loss_sample(sample: Any, y_pred: Any) Any [source]#
Run one loss sample. Used for debugging and testing.
- configure_demo_optimizer_helper() List[DemoOptimizer] [source]#
One demo optimizer can handle multiple demo parameters, but the demo optimizer will only have one dataset (the trainset) configured by the Trainer.
If users want to use a different trainset for different demo optimizers, they can configure it themselves.
- configure_text_optimizer_helper(model_client: ModelClient, model_kwargs: Dict[str, Any]) List[TextOptimizer] [source]#
One text optimizer can handle multiple text parameters.