llm_text_loss

Implementation of TextGrad: Automatic “Differentiation” via Text. This code is currently unused: we treat the non-optimizable version of the LLM judge as a form of eval_fn and use the class EvalFnToTextLoss instead (as of 12/9/2024).

Classes

LLMAsTextLoss(prompt_kwargs, model_client, ...)

Evaluate the final RAG response using an LLM judge.

class LLMAsTextLoss(prompt_kwargs: Dict[str, str | Parameter], model_client: ModelClient, model_kwargs: Dict[str, object])

Bases: LossComponent

Evaluate the final RAG response using an LLM judge.

The LLM judge will have:

- eval_system_prompt: The system prompt used to evaluate the response.
- y_hat: The response to evaluate.
- y (optional): The correct response to compare against.

The loss will be a Parameter holding the evaluation result and can be used to compute gradients. This loss uses an LLM/Generator as the computation/transformation operator, so its gradient is found via the Generator’s backward method.
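A minimal construction sketch follows. The import paths, the OpenAIClient, and the model name are assumptions based on this docstring and the surrounding AdalFlow package layout; substitute your own client and model.

```python
from adalflow.optim.text_grad.llm_text_loss import LLMAsTextLoss  # assumed path
from adalflow.components.model_client import OpenAIClient

# Judge instructions; supplied via prompt_kwargs, so they could also be
# passed as a trainable Parameter rather than a plain string.
judge_instructions = (
    "Evaluate whether the predicted response y_hat correctly answers the "
    "question. If a reference response y is provided, compare against it."
)

loss_fn = LLMAsTextLoss(
    prompt_kwargs={"eval_system_prompt": judge_instructions},
    model_client=OpenAIClient(),            # assumed client
    model_kwargs={"model": "gpt-4o-mini"},  # assumed model name
)
```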

forward(*args, **kwargs) → Parameter

By default, this just wraps the call method.
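A hedged usage sketch, continuing the example above. Passing prompt_kwargs at call time mirrors Generator.forward, but the exact call signature is an assumption; only the keys y_hat and y come from the docstring.

```python
from adalflow.optim.parameter import Parameter

# Wrap the response to be judged as a Parameter so textual gradients can
# flow back to it (requires_opt marks it as optimizable).
y_hat = Parameter(data="Paris is the capital of France.", requires_opt=True)

# Calling the loss (which dispatches to forward) returns a Parameter whose
# value is the judge's textual evaluation.
loss = loss_fn(prompt_kwargs={"y_hat": y_hat, "y": "Paris"})
print(loss.data)   # the evaluation text
loss.backward()    # gradients are produced by the Generator's backward method
```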