llm_text_loss#
Implementation of TextGrad: Automatic “Differentiation” via Text
Classes
| Class | Summary |
| --- | --- |
| `LLMAsTextLoss` | Evaluate the final RAG response using an LLM judge. |
- class LLMAsTextLoss(prompt_kwargs: Dict[str, str | Parameter], model_client: ModelClient, model_kwargs: Dict[str, object])
Bases: LossComponent
Evaluate the final RAG response using an LLM judge.
The LLM judge will have:

- eval_system_prompt: The system prompt used to evaluate the response.
- y_hat: The response to evaluate.
- Optional y: The correct response to compare against.
The loss is a Parameter holding the evaluation result and can be used to compute gradients. This loss uses an LLM/Generator as the computation/transformation operator, so its gradient is computed via the Generator’s backward method.
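To illustrate the idea, here is a minimal conceptual sketch of an LLM-as-judge text loss. It is not the library's actual implementation: `StubModelClient` and `llm_text_loss` are hypothetical stand-ins showing how `eval_system_prompt`, `y_hat`, and the optional `y` from `prompt_kwargs` are assembled into a judge prompt whose textual verdict serves as the "loss" value.

```python
# Conceptual sketch (NOT the real implementation): an LLM judge is
# prompted with the response to evaluate (y_hat) and, optionally, the
# reference answer (y); its textual verdict is the "loss".

from typing import Dict


class StubModelClient:
    """Hypothetical stand-in for a real LLM model client."""

    def call(self, prompt: str) -> str:
        # A real client would query an LLM here; we return a canned verdict.
        return "The response matches the reference answer. Score: 1.0"


def llm_text_loss(
    prompt_kwargs: Dict[str, str],
    model_client: StubModelClient,
) -> str:
    """Fill the judge prompt from prompt_kwargs and return its evaluation."""
    prompt = (
        f"{prompt_kwargs['eval_system_prompt']}\n"
        f"Response: {prompt_kwargs['y_hat']}\n"
    )
    if "y" in prompt_kwargs:  # reference answer is optional
        prompt += f"Reference: {prompt_kwargs['y']}\n"
    return model_client.call(prompt)


loss_text = llm_text_loss(
    {
        "eval_system_prompt": "Judge whether the response answers correctly.",
        "y_hat": "Paris is the capital of France.",
        "y": "Paris",
    },
    StubModelClient(),
)
print(loss_text)
```

In the real class, the returned evaluation is wrapped in a Parameter so that the Generator's backward pass can propagate textual gradients through it.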