text_grad#

Submodules#


class LLMAsTextLoss(prompt_kwargs: Dict[str, str | Parameter], model_client: ModelClient, model_kwargs: Dict[str, object])[source]#

Bases: LossComponent

Evaluate the final RAG response using an LLM judge.

The LLM judge receives:

  • eval_system_prompt – The system prompt used to evaluate the response.

  • y_hat – The response to evaluate.

  • y (optional) – The correct response to compare against.

The loss is a Parameter holding the evaluation result and can be used to compute gradients. This loss uses an LLM/Generator as the computation/transformation operator, so its gradient is computed by the Generator’s backward method.

forward(*args, **kwargs) Parameter[source]#

By default, this simply wraps the call method.
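Example (a minimal sketch; the adalflow import paths, the OpenAIClient model client, the Parameter constructor arguments, and the call-time keyword shape are assumptions, not taken from this page):

    from adalflow.optim.parameter import Parameter
    from adalflow.optim.text_grad.llm_text_loss import LLMAsTextLoss
    from adalflow.components.model_client import OpenAIClient

    # System prompt for the LLM judge; not optimized itself.
    eval_system_prompt = Parameter(
        data="Judge whether the response fully answers the question. Give a short critique.",
        role_desc="system prompt for the LLM judge",
        requires_opt=False,
    )

    loss_fn = LLMAsTextLoss(
        prompt_kwargs={"eval_system_prompt": eval_system_prompt},
        model_client=OpenAIClient(),            # any ModelClient works here
        model_kwargs={"model": "gpt-4o-mini"},
    )

    # y_hat is typically the output Parameter of a Generator in the pipeline.
    y_hat = Parameter(data="Paris is the capital of France.", role_desc="RAG response")
    loss = loss_fn(prompt_kwargs={"y_hat": y_hat})  # returns a Parameter (keyword shape assumed)
    loss.backward()                                 # feedback flows through the Generator's backward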

class EvalFnToTextLoss(eval_fn: Callable | BaseEvaluator, eval_fn_desc: str, backward_engine: BackwardEngine | None = None, model_client: ModelClient = None, model_kwargs: Dict[str, object] = None)[source]#

Bases: LossComponent

Convert an evaluation function to a text loss.

LossComponent takes an eval function and outputs a score (usually a float in the range [0, 1], where higher is better, unlike the loss function in model training).

In math:

score/loss = eval_fn(y_pred, y_gt)

The gradient/feedback, d(score)/d(y_pred), is computed by a backward engine, which receives a GradientContext:

    gradient_context = GradientContext(
        context=conversation_str,
        response_desc=response.role_desc,
        variable_desc=role_desc,
    )

Parameters:
  • eval_fn – The evaluation function that takes a pair of y and y_gt and returns a score.

  • eval_fn_desc – Description of the evaluation function.

  • backward_engine – The backward engine to use for the text prompt optimization.

  • model_client – The model client to use for the backward engine if backward_engine is not provided.

  • model_kwargs – The model kwargs to use for the backward engine if backward_engine is not provided.

forward(kwargs: Dict[str, Parameter], response_desc: str = None, metadata: Dict[str, str] = None) Parameter[source]#

By default, this simply wraps the call method.

set_backward_engine(backward_engine: BackwardEngine = None, model_client: ModelClient = None, model_kwargs: Dict[str, object] = None)[source]#
backward(response: Parameter, eval_fn_desc: str, kwargs: Dict[str, Parameter], backward_engine: BackwardEngine | None = None, metadata: Dict[str, str] = None)[source]#

Make sure backward_engine is set for text prompt optimization. It can be None if you are only doing demo optimization; in that case there are no textual gradients and only the score is backpropagated.
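Example (a minimal sketch; the import paths and the OpenAIClient model client are assumptions, while the constructor and forward signatures follow the entries above):

    from adalflow.optim.parameter import Parameter
    from adalflow.optim.text_grad.text_loss_with_eval_fn import EvalFnToTextLoss
    from adalflow.components.model_client import OpenAIClient

    def exact_match(y: str, y_gt: str) -> float:
        # Score in [0, 1]; higher is better.
        return 1.0 if y.strip() == y_gt.strip() else 0.0

    loss_fn = EvalFnToTextLoss(
        eval_fn=exact_match,
        eval_fn_desc="1 if the prediction exactly matches the ground truth, else 0",
        model_client=OpenAIClient(),            # used to build the default backward engine
        model_kwargs={"model": "gpt-4o-mini"},
    )

    # The keys of `kwargs` are expected to match the eval_fn's argument names;
    # passing the Parameters' data values to eval_fn is assumed here.
    y_pred = Parameter(data="Paris", role_desc="model prediction", requires_opt=True)
    y_gt = Parameter(data="Paris", role_desc="ground truth", requires_opt=False)

    score = loss_fn(kwargs={"y": y_pred, "y_gt": y_gt}, response_desc="exact-match score")
    score.backward()    # textual feedback d(score)/d(y_pred) via the backward engine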

class TGDOptimizer(params: Iterable[Parameter] | Iterable[Dict[str, Any]], model_client: ModelClient, model_kwargs: Dict[str, object] = {}, constraints: List[str] = None, optimizer_system_prompt: str = '\nYou are part of an optimization system that refines existing variable values based on feedback.\n\nYour task: Propose a new variable value in response to the feedback.\n1. Address the concerns raised in the feedback while preserving positive aspects.\n2. Observe past performance patterns when provided and to keep the good quality.\n3. Consider the variable in the context of its peers if provided.\n   FYI:\n   - If a peer will be optimized itself, do not overlap with its scope.\n   - Otherwise, you can overlap if it is necessary to address the feedback.\n\nOutput:\nProvide only the new variable value between {{new_variable_start_tag}} and {{new_variable_end_tag}} tags.\n\nTips:\n1. Eliminate unnecessary words or phrases.\n2. Add new elements to address specific feedback.\n3. Be creative and present the variable differently.\n{% if instruction_to_optimizer %}\n4. {{instruction_to_optimizer}}\n{% endif %}\n', in_context_examples: List[str] = None, num_gradient_memory: int = 0, max_past_history: int = 3)[source]#

Bases: TextOptimizer

Textual Gradient Descent (LLM) optimizer for text-based variables.

proposing: bool = False#
params_history: Dict[str, List[HistoryPrompt]] = {}#
params: Iterable[Parameter] | Iterable[Dict[str, Any]]#
constraints: List[str]#
property constraint_text#

Returns a formatted string representation of the constraints.

Returns:

A string containing the constraints in the format “Constraint {index}: {constraint}”.

Return type:

str

add_score_to_params(val_score: float)[source]#
add_score_to_current_param(param_id: str, param: Parameter, score: float)[source]#
add_history(param_id: str, history: HistoryPrompt)[source]#
render_history(param_id: str) List[str][source]#
get_gradient_memory_text(param: Parameter) str[source]#
update_gradient_memory(param: Parameter)[source]#
zero_grad()[source]#

Clear all the gradients of the parameters.

propose()[source]#

Propose a new value while keeping the previous value saved on the parameter.

revert()[source]#

Revert to the previous value when the evaluation is worse.

step()[source]#

Discard the previous value and keep the proposed value.
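Example (a minimal sketch of the propose → evaluate → step/revert loop; the import paths, the OpenAIClient client, and the evaluate_pipeline helper are assumptions, while the optimizer API follows the entries above):

    from adalflow.optim.parameter import Parameter
    from adalflow.optim.text_grad.tgd_optimizer import TGDOptimizer
    from adalflow.components.model_client import OpenAIClient

    system_prompt = Parameter(
        data="You are a concise QA assistant.",
        role_desc="system prompt of the task pipeline",
        requires_opt=True,
    )

    optimizer = TGDOptimizer(
        params=[system_prompt],
        model_client=OpenAIClient(),
        model_kwargs={"model": "gpt-4o"},
        constraints=["Keep the prompt under 100 words."],
    )

    def evaluate_pipeline() -> float:
        # Hypothetical validation run; replace with your own eval loop.
        return 0.0

    best_score = evaluate_pipeline()
    for _ in range(5):
        # ... run the pipeline, build the loss Parameter, call loss.backward() ...
        optimizer.propose()                 # draft new values, previous values stay saved
        new_score = evaluate_pipeline()
        if new_score > best_score:
            optimizer.step()                # keep the proposed values
            best_score = new_score
        else:
            optimizer.revert()              # fall back to the previous values
        optimizer.zero_grad()               # clear gradients before the next pass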

sum_ops(params: List[Parameter]) Parameter[source]#

Represents a sum operation on a list of variables. In TextGrad, sum is simply concatenation of the values of the variables.

Parameters:

params (List[Parameter]) – The list of parameters to be summed (concatenated).

Returns:

A new parameter representing the sum of the input parameters.

Return type:

Parameter
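Example (a minimal sketch that reuses loss_fn and the Parameter pairs from the EvalFnToTextLoss sketch above; the import path and the batch_pairs variable are assumptions):

    from adalflow.optim.text_grad.ops import sum_ops

    # batch_pairs: hypothetical list of (prediction Parameter, ground-truth Parameter) pairs.
    per_sample_losses = [
        loss_fn(kwargs={"y": pred, "y_gt": gt}) for pred, gt in batch_pairs
    ]

    total_loss = sum_ops(per_sample_losses)   # concatenation of the individual loss values
    total_loss.backward()                     # combined feedback is added to each predecessor's grads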

class Sum(*args, **kwargs)[source]#

Bases: GradComponent

The class to define a sum operation on a list of parameters, such as losses or gradients.

name: str = 'Sum'#
forward(params: List[Parameter]) Parameter[source]#

Performs the forward pass of the sum operation. This is a simple operation that concatenates the values of the parameters.

Parameters:

params (List[Parameter]) – The list of parameters to be summed.

Return type:

Parameter

backward(summation: Parameter)[source]#

Performs the backward pass of the sum operation. This is simply an idempotent operation, where we make a gradient with the combined feedback and add it to the predecessors’ grads.

Parameters:

summation (Parameter) – The parameter representing the sum.