text_loss_with_eval_fn#

Adapted from TextGrad's string-based loss function.

Classes

EvalFnToTextLoss(eval_fn, eval_fn_desc[, ...])

Convert an evaluation function to a text loss.

class EvalFnToTextLoss(eval_fn: Callable | BaseEvaluator, eval_fn_desc: str, backward_engine: BackwardEngine | None = None, model_client: ModelClient = None, model_kwargs: Dict[str, object] = None)[source]#

Bases: LossComponent

Convert an evaluation function to a text loss.

LossComponent takes an eval function and outputs a score (usually a float in the range [0, 1]; higher is better, unlike a loss in model training).

In math:

score/loss = eval_fn(y_pred, y_gt)

The gradient/feedback d(score)/d(y_pred) is computed using a backward engine, with the gradient context:

    Gradient_context = GradientContext(
        context=conversation_str,
        response_desc=response.role_desc,
        variable_desc=role_desc,
    )
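For example, eval_fn can be any plain callable that scores a prediction against a ground truth. A minimal sketch (this exact-match function is illustrative, not part of the API):

    def exact_match(y: str, y_gt: str) -> float:
        # Score 1.0 when the prediction matches the ground truth exactly, else 0.0.
        return 1.0 if y.strip() == y_gt.strip() else 0.0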

Parameters:
  • eval_fn – The evaluation function that takes a prediction y and a ground truth y_gt and returns a score.

  • eval_fn_desc – Description of the evaluation function.

  • backward_engine – The backward engine to use for the text prompt optimization.

  • model_client – The model client to use for the backward engine if backward_engine is not provided.

  • model_kwargs – The model kwargs to use for the backward engine if backward_engine is not provided.
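A minimal construction sketch, continuing the exact_match example above (the OpenAIClient import and the model name are assumptions; import paths may differ across AdalFlow versions):

    from adalflow.components.model_client import OpenAIClient
    from adalflow.optim.text_grad.text_loss_with_eval_fn import EvalFnToTextLoss

    loss_fn = EvalFnToTextLoss(
        eval_fn=exact_match,
        eval_fn_desc="1.0 if the prediction equals the ground truth, else 0.0",
        model_client=OpenAIClient(),            # used to build a backward engine
        model_kwargs={"model": "gpt-4o-mini"},  # assumed model name
    )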

forward(kwargs: Dict[str, Parameter], response_desc: str = None, metadata: Dict[str, str] = None) Parameter[source]#

By default, this wraps the call method: it evaluates eval_fn on the given inputs and returns the score wrapped in a Parameter.
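Inputs are passed as a dict of named Parameters so the backward engine can attribute feedback to each argument. A sketch, assuming Parameter accepts data, role_desc, and requires_opt keyword arguments:

    from adalflow.optim.parameter import Parameter

    y_pred = Parameter(
        data="Paris",
        role_desc="the model's predicted answer",
        requires_opt=True,   # this variable should receive textual gradients
    )
    y_gt = Parameter(
        data="Paris",
        role_desc="the ground-truth answer",
        requires_opt=False,
    )

    # Keys must match eval_fn's signature; returns the score wrapped in a Parameter.
    loss = loss_fn.forward(kwargs={"y": y_pred, "y_gt": y_gt})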

set_backward_engine(backward_engine: BackwardEngine = None, model_client: ModelClient = None, model_kwargs: Dict[str, object] = None)[source]#
backward(response: Parameter, eval_fn_desc: str, kwargs: Dict[str, Parameter], backward_engine: BackwardEngine | None = None, metadata: Dict[str, str] = None)[source]#

Make sure to set backward_engine for text prompt optimization. It can be None if you are only doing demo optimization; in that case no gradients are computed and only the score is backpropagated.
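Continuing the sketch: for text prompt optimization you would attach a backward engine and then backpropagate from the loss Parameter (the BackwardEngine import path and the backward() call on the returned Parameter follow typical AdalFlow usage but are assumptions here):

    from adalflow.core.generator import BackwardEngine  # assumed import path

    loss_fn.set_backward_engine(
        backward_engine=BackwardEngine(
            model_client=OpenAIClient(),
            model_kwargs={"model": "gpt-4o-mini"},
        )
    )

    # Textual feedback flows from the score back to y_pred via the backward engine.
    loss.backward()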