Use Cases#
How the different parts are used to build and auto-optimize various LLM applications.
We will build use cases end-to-end, ranging from classification (a classical NLP task) to question answering, retrieval-augmented generation (RAG), and multi-generator pipelines.
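To make the building blocks concrete, here is a minimal sketch of the simplest of these pipelines: a single-generator task. The class names (`Component`, `Generator`, `OpenAIClient`) follow common AdalFlow usage, but details such as the default template exposing an `input_str` slot should be read as assumptions rather than exact API documentation.

```python
import adalflow as adal
from adalflow.components.model_client import OpenAIClient


class QATask(adal.Component):
    """A single-generator task pipeline: one LLM call wrapped in a component."""

    def __init__(self):
        super().__init__()
        self.llm = adal.Generator(
            model_client=OpenAIClient(),              # assumption: an OpenAI backend is configured
            model_kwargs={"model": "gpt-3.5-turbo"},
        )

    def call(self, question: str):
        # Assumption: the default prompt template exposes an `input_str` slot for the query.
        return self.llm.call(prompt_kwargs={"input_str": question})


# Usage (requires an OPENAI_API_KEY in the environment):
# task = QATask()
# print(task.call("I have two apples and three oranges. How many fruits do I have?"))
```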
Optimization#
Part | Description
---|---
Question Answering | Question answering on the bbh_hard_object_count (BIG-Bench Hard object counting) dataset, including textual-gradient descent and few-shot bootstrap optimization (see the prompt-parameter sketch below the table).
Classification | Classification with gpt-3.5-turbo. The optimized task pipeline performs on par with gpt-4o.
Multi-component RAG | Unlike the previous tasks, which use only a single generator component, this use case on the HotPotQA dataset shows how to optimize a pipeline composed of multiple `GradComponent`s (Retriever & Generator) in a standard RAG architecture (see the RAG sketch below the table).
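The question-answering row optimizes the task's prompt with textual-gradient descent and few-shot bootstrap. A minimal sketch of what that requires on the pipeline side is shown below: the instruction text is wrapped in a trainable `Parameter` instead of being hard-coded, so the trainer can propose improved wording and attach bootstrapped demonstrations. The names `adal.Parameter`, `ParameterType.PROMPT`, and the template variables follow the AdalFlow tutorials but are assumptions here, not a verified API reference.

```python
import adalflow as adal
from adalflow.components.model_client import OpenAIClient
from adalflow.optim.types import ParameterType  # assumption: ParameterType lives in adalflow.optim.types

# A small template with explicit slots for the trainable instruction and the question.
TEMPLATE = r"""<SYS> {{system_prompt}} </SYS>
Question: {{input_str}}
Answer:"""


class TrainableQATask(adal.Component):
    def __init__(self):
        super().__init__()
        # Wrapping the instruction in a Parameter (requires_opt=True) is what lets the
        # optimizer apply "textual gradients": LLM-proposed edits to this text.
        self.system_prompt = adal.Parameter(
            data="You will answer a reasoning question. Think step by step and "
                 "end with 'Answer: <number>'.",
            role_desc="Task instruction for the language model",
            requires_opt=True,
            param_type=ParameterType.PROMPT,
        )
        self.llm = adal.Generator(
            model_client=OpenAIClient(),
            model_kwargs={"model": "gpt-3.5-turbo"},
            template=TEMPLATE,
            prompt_kwargs={"system_prompt": self.system_prompt},
        )

    def call(self, question: str):
        return self.llm.call(prompt_kwargs={"input_str": question})
```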
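For the multi-component RAG row, the structural point is that the pipeline composes more than one component: a retriever that selects context and a generator that answers from it, both wired inside one task component. The sketch below uses a deliberately toy keyword-overlap retriever as a hypothetical stand-in for a real retriever component (for example a FAISS- or BM25-based one from your install); only the overall wiring is the point.

```python
import adalflow as adal
from adalflow.components.model_client import OpenAIClient


class ToyRetriever(adal.Component):
    """Hypothetical stand-in retriever: ranks documents by keyword overlap with the query."""

    def __init__(self, documents: list[str], top_k: int = 2):
        super().__init__()
        self.documents = documents
        self.top_k = top_k

    def call(self, query: str) -> list[str]:
        # Score each document by the number of lowercase tokens it shares with the query.
        q_tokens = set(query.lower().split())
        ranked = sorted(
            self.documents,
            key=lambda doc: len(q_tokens & set(doc.lower().split())),
            reverse=True,
        )
        return ranked[: self.top_k]


class SimpleRAG(adal.Component):
    """Two-component pipeline: retrieve context, then generate an answer from it."""

    def __init__(self, documents: list[str]):
        super().__init__()
        self.retriever = ToyRetriever(documents)
        self.generator = adal.Generator(
            model_client=OpenAIClient(),
            model_kwargs={"model": "gpt-3.5-turbo"},
            template=(
                "Answer the question using only the context below.\n"
                "Context:\n{{context}}\n\nQuestion: {{question}}\nAnswer:"
            ),
        )

    def call(self, question: str):
        context = "\n\n".join(self.retriever.call(question))
        return self.generator.call(prompt_kwargs={"context": context, "question": question})
```

In the actual HotPotQA use case both sub-components are gradient-carrying (`GradComponent`s), which is what allows feedback observed at the final answer to be propagated back through the pipeline during optimization.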