generator#
Generator is a user-facing orchestration component with a simple and unified interface for LLM prediction.
It is a pipeline that consists of three subcomponents.
Functions
| create_teacher_generator | Create a teacher generator from the student generator. |
Classes
| BackwardEngine | The backward engine is a Generator with a default template for the backward pass. |
| Generator | A user-facing orchestration component for LLM prediction. |
- class Generator(*, model_client: ModelClient, model_kwargs: Dict[str, str | Parameter] = {}, template: str | None = None, prompt_kwargs: Dict | None = {}, output_processors: Component | None = None, name: str | None = None, cache_path: str | None = None, use_cache: bool = False)[source]#
Bases: GradComponent, CachedEngine, CallbackManager
A user-facing orchestration component for LLM prediction.
It is also a GradComponent that can be used for backpropagation through the LLM model.
By orchestrating the following three components along with their required arguments, it enables any LLM prediction with the required task output format:
- Prompt
- Model client
- Output processors
- Parameters:
model_client (ModelClient) – The model client to use for the generator.
model_kwargs (Dict[str, Any], optional) – The model kwargs to pass to the model client. Defaults to {}. Please refer to ModelClient for details on how to set model_kwargs for your specific model if it is from our library.
template (Optional[str], optional) – The template for the prompt. Defaults to DEFAULT_LIGHTRAG_SYSTEM_PROMPT.
prompt_kwargs (Optional[Dict], optional) – The preset prompt kwargs to fill in the variables in the prompt. Defaults to None.
output_processors (Optional[Component], optional) – The output processors applied after the model call. It can be a single component or components chained via Sequential. Defaults to None.
trainable_params (Optional[List[str]], optional) – The list of trainable parameters. Defaults to [].
Note
The output_processors are applied to the string output of the model completion, and the result is stored in the data field of the output. We encourage you to use them only to parse the response into the data format you will use later.
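For orientation, here is a minimal construction-and-call sketch. The import paths and the OpenAIClient class are assumptions carried over from the from_config example below, and {{input_str}} is a template variable of our own choosing, not a required name:

    # A minimal sketch; import paths and OpenAIClient are assumptions.
    from adalflow.core import Generator
    from adalflow.components.model_client import OpenAIClient

    template = "<SYS> You are a helpful assistant. </SYS> {{input_str}}"

    generator = Generator(
        model_client=OpenAIClient(),
        model_kwargs={"model": "gpt-3.5-turbo", "temperature": 0},
        template=template,  # omit to fall back to DEFAULT_LIGHTRAG_SYSTEM_PROMPT
    )
    output = generator.call(prompt_kwargs={"input_str": "What is the capital of France?"})
    print(output.data)  # the (optionally parsed) response lives in the data field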
- model_type: ModelType = 2#
- model_client: ModelClient#
- set_cache_path(cache_path: str, model_client: object, model: str)[source]#
Set the cache path for the generator.
- set_parameters(prompt_kwargs: Dict[str, str | Parameter])[source]#
Set a name for each parameter and set their context with respect to each other. Register all parameters as attributes of the generator so that optimizers and other components can find them easily.
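As an illustration, preset prompt variables can be passed as Parameter objects so the generator registers them for optimization. The Parameter import path and constructor arguments below are assumptions, not taken from this reference:

    # Hedged sketch: Parameter's import path and signature are assumptions.
    from adalflow.optim.parameter import Parameter

    task_desc = Parameter(data="You are a concise assistant.", requires_opt=True)
    generator.set_parameters({"task_desc_str": task_desc})
    # Each Parameter now has a name and is an attribute of the generator,
    # so optimizers and other components can locate it easily.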
- classmethod from_config(config: Dict[str, Any]) Generator [source]#
Create a Generator instance from the config dictionary.
Example:

    config = {
        "model_client": {
            "component_name": "OpenAIClient",
            "component_config": {},
        },
        "model_kwargs": {"model": "gpt-3.5-turbo", "temperature": 0},
    }
    generator = Generator.from_config(config)
- create_demo_data_instance(input_prompt_kwargs: Dict[str, Any], output: GeneratorOutput, id: str | None = None)[source]#
Automatically create a demo data instance from the input and output of the generator. Used to trace the demos for the demo parameter in the prompt_kwargs. Part of few-shot learning.
- set_backward_engine(backward_engine: BackwardEngine = None)[source]#
- forward(prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, id: str | None = None) Parameter [source]#
Default forward method for training:
1. For all args and kwargs, if a value is a Parameter object, it is tracked as a predecessor.
2. Trace input_args and full_response in the parameter object.
3. Return the parameter object.
TODO: all GradComponents should not allow args but only kwargs. For now, just check if id is in kwargs.
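In training, the generator is invoked through forward rather than call, and it returns a Parameter node instead of a GeneratorOutput. A short sketch, reusing the generator built above:

    # Parameter-valued entries in prompt_kwargs are tracked as predecessors.
    response = generator.forward(
        prompt_kwargs={"input_str": "What is 2 + 2?"},
        id="train-sample-1",
    )
    # Per step 2 above, the raw model output is traced on the node as
    # full_response, alongside the input_args that produced it.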
- backward(response: Parameter, prompt_kwargs: Dict, template: str, prompt_str: str, backward_engine: Generator | None = None, id: str | None = None) Parameter [source]#
- call(prompt_kwargs: Dict | None = {}, model_kwargs: Dict | None = {}, use_cache: bool | None = None, id: str | None = None) GeneratorOutput[object] [source]#
Call the model_client by formatting the prompt from prompt_kwargs and passing the combined model_kwargs to the model client.
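A per-call sketch showing the optional arguments; the error field on GeneratorOutput is an assumption, while data is documented in the Note above:

    output = generator.call(
        prompt_kwargs={"input_str": "Summarize the text in one sentence."},
        use_cache=True,    # overrides the generator-level use_cache for this call
        id="example-001",  # optional id, useful for tracing
    )
    if output.error is None:  # error is an assumed GeneratorOutput field
        print(output.data)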
- class BackwardEngine(**kwargs)[source]#
Bases:
Generator
The backward engine is a Generator with a default template for the backward pass.
If you want to customize the template, you can create your own backward engine.
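For example, attaching a backward engine to an existing generator could look like this; since BackwardEngine subclasses Generator, we assume its **kwargs are forwarded to the Generator constructor, and OpenAIClient remains an assumption as above:

    # Sketch: build a backward engine and attach it for backpropagation.
    backward_engine = BackwardEngine(
        model_client=OpenAIClient(),       # assumed client
        model_kwargs={"model": "gpt-4o"},  # typically a stronger model
    )
    generator.set_backward_engine(backward_engine)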
- create_teacher_generator(student: Generator, model_client: ModelClient, model_kwargs: Dict[str, Any], template: str | None = None) Generator [source]#
Create a teacher generator from the student generator.
Note
The teacher generator will have no parameters. If you want to keep it identical to the student, just create a new one each time the student has been updated. Otherwise, task.parameters will list the teacher's parameters.
- Parameters:
student (Generator) – The student generator.
model_client (ModelClient) – The model client to use for the teacher generator.
model_kwargs (Dict[str, Any]) – The model kwargs to pass to the model client.
template (Optional[str], optional) – The template to use for the teacher generator. Defaults to None.
- Returns:
The teacher generator.
- Return type:
Generator
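A typical pairing is a small student model with a stronger teacher for generating demos; the sketch below only uses arguments shown in the signature above, with OpenAIClient again assumed:

    # The teacher is created fresh from the student and holds no parameters.
    teacher = create_teacher_generator(
        student=generator,                 # the student Generator
        model_client=OpenAIClient(),       # assumed client
        model_kwargs={"model": "gpt-4o"},  # typically stronger than the student's model
    )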