API Reference#

Welcome to AdalFlow. The API reference is organized by subdirectories.

Core#

All base/abstract classes, core components such as the generator and embedder, and basic functions are here.

core.component

Base building block for building LLM task pipelines.
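
As a quick orientation, here is a minimal sketch of a task pipeline built on Component. It assumes the adalflow package layout and an OpenAI API key in the environment; exact keyword arguments may differ across versions.

```python
# Sketch only: a tiny QA pipeline as a Component subclass.
# Assumes the `adalflow` package layout and an OPENAI_API_KEY in the environment.
from adalflow.core import Component, Generator
from adalflow.components.model_client import OpenAIClient


class SimpleQA(Component):
    def __init__(self):
        super().__init__()
        # Generator orchestrates the model client, prompt, and output processing.
        self.generator = Generator(
            model_client=OpenAIClient(),
            model_kwargs={"model": "gpt-4o-mini"},
        )

    def call(self, query: str):
        # Fill the default prompt template's input slot with the user query.
        return self.generator.call(prompt_kwargs={"input_str": query})
```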

core.container

Container component for composing multiple components, such as Sequential.
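
A minimal sketch of Sequential chaining two toy components, assuming Sequential passes each component's output to the next in order:

```python
# Sketch only: chain two components so the output of one feeds the next.
from adalflow.core.component import Component
from adalflow.core.container import Sequential


class AddOne(Component):
    def call(self, x: int) -> int:
        return x + 1


class Double(Component):
    def call(self, x: int) -> int:
        return x * 2


pipeline = Sequential(AddOne(), Double())
print(pipeline(3))  # (3 + 1) * 2 -> 8
```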

core.base_data_class

A base class that provides an easy way for data to interact with LLMs.
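
A minimal sketch of a DataClass describing structured LLM output; the to_schema_str helper shown is assumed from the library docs and may differ by version.

```python
# Sketch only: a data class whose field descriptions can be rendered into a prompt.
from dataclasses import dataclass, field

from adalflow.core import DataClass


@dataclass
class QAOutput(DataClass):
    explanation: str = field(
        metadata={"desc": "A brief explanation of how the answer was reached."}
    )
    answer: str = field(metadata={"desc": "The short final answer."})


# Render a schema string that a Generator can include in its output-format instructions.
print(QAOutput.to_schema_str())
```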

core.default_prompt_template

The default system prompt template used in LightRAG.

core.model_client

ModelClient is the protocol and base class that all models (API-hosted or local) use to communicate with components.

core.db

LocalDB provides in-memory storage and data persistence (via pickle or the filesystem) for data models such as Document and DialogTurn.

core.functional

Functional interface.

core.generator

Generator is a user-facing orchestration component with a simple and unified interface for LLM prediction.
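
A minimal sketch of calling Generator directly with a custom jinja2 template; the model name and client are illustrative, and an API key is assumed to be set.

```python
# Sketch only: Generator with an explicit jinja2 template.
from adalflow.core import Generator
from adalflow.components.model_client import OpenAIClient

template = r"""<SYS> You are a concise assistant. </SYS>
User: {{ input_str }}
You:"""

generator = Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-4o-mini"},
    template=template,
)

output = generator(prompt_kwargs={"input_str": "Why is the sky blue?"})
print(output.data)   # the (parsed) model response
print(output.error)  # errors are captured here instead of raised
```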

core.string_parser

Extract and convert common strings to Python objects.

core.embedder

The component that orchestrates model client (Embedding models in particular) and output processors.
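
A minimal sketch of embedding a small batch of strings; the model name and dimensions are illustrative.

```python
# Sketch only: embed a batch of strings with an OpenAI embedding model.
from adalflow.core import Embedder
from adalflow.components.model_client import OpenAIClient

embedder = Embedder(
    model_client=OpenAIClient(),
    model_kwargs={"model": "text-embedding-3-small", "dimensions": 256},
)

output = embedder(input=["hello world", "AdalFlow"])
print(len(output.data))               # number of embeddings
print(len(output.data[0].embedding))  # embedding dimension
```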

core.retriever

The base class for all retrievers, which retrieve documents from a given database.

core.prompt_builder

Prompt builder class for the LightRAG system prompt.
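
A minimal sketch of rendering a jinja2 template with Prompt; the variable names are illustrative, and passing per-call variables through the call is assumed from the docs.

```python
# Sketch only: fill a jinja2 template with fixed and per-call variables.
from adalflow.core.prompt_builder import Prompt

prompt = Prompt(
    template=r"""<SYS> {{ task_desc_str }} </SYS> User: {{ input_str }}""",
    prompt_kwargs={"task_desc_str": "Answer in one short sentence."},
)

# Per-call variables are merged with the preset prompt_kwargs at render time.
print(prompt(input_str="What is AdalFlow?"))
```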

core.tokenizer

Tokenizer from tiktoken.

core.func_tool

Tools extend an LLM's capabilities and are one of the core design patterns for agents.
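
A minimal sketch of wrapping a plain Python function as a tool; the definition attribute and call convention shown are assumptions based on the docs and may differ by version.

```python
# Sketch only: expose a Python function to an agent as a tool.
from adalflow.core.func_tool import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


tool = FunctionTool(multiply)
print(tool.definition)  # schema derived from the signature and docstring
print(tool.call(3, 4))  # FunctionOutput wrapping the result (12)
```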

core.tool_manager

The ToolManager manages a list of tools, context, and all ways to execute functions.

core.types

Functional data classes to support functional components like Generator, Retriever, and Assistant.

Components#

Functional components such as model clients, retrievers, agents, local data processing, and output parsers are here.

components.agent.react

Implementation and optimization of the ReAct agent.

components.model_client.anthropic_client

Anthropic ModelClient integration.

components.model_client.cohere_client

Cohere ModelClient integration.

components.model_client.google_client

Google GenAI ModelClient integration.

components.model_client.groq_client

Groq ModelClient integration.

components.model_client.openai_client

OpenAI ModelClient integration.

components.model_client.transformers_client

Huggingface transformers ModelClient integration.

components.model_client.utils

Helpers for model clients to integrate models and parse their outputs.

components.data_process.data_components

Helper components for data transformation such as embeddings and document splitting.

components.data_process.text_splitter

Splitting texts is commonly used as a preprocessing step before embedding and retrieving texts.
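
A minimal sketch of word-based splitting over Document objects; the chunk sizes are illustrative.

```python
# Sketch only: split documents into overlapping word chunks before embedding.
from adalflow.components.data_process.text_splitter import TextSplitter
from adalflow.core.types import Document

splitter = TextSplitter(split_by="word", chunk_size=50, chunk_overlap=10)

docs = [Document(text="AdalFlow helps you build and auto-optimize LLM task pipelines. " * 20)]
chunks = splitter.call(documents=docs)
print(len(chunks), chunks[0].text[:60])
```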

components.reasoning.chain_of_thought

Chain-of-Thought (CoT) mimics a step-by-step thought process for arriving at an answer.

components.retriever.bm25_retriever

BM25 retriever implementation.
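
A minimal sketch of keyword retrieval over a list of strings; the build_index_from_documents/query interface follows the Retriever base class and may differ slightly by version.

```python
# Sketch only: build a BM25 index over strings and query it.
from adalflow.components.retriever.bm25_retriever import BM25Retriever

documents = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
]

retriever = BM25Retriever(top_k=1)
retriever.build_index_from_documents(documents=documents)
print(retriever("capital of France"))  # RetrieverOutput with document indices and scores
```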

components.retriever.faiss_retriever

Semantic search/embedding-based retriever using FAISS.

components.retriever.llm_retriever

LLM as retriever module.

components.retriever.postgres_retriever

Leverage a Postgres database to store and retrieve documents.

components.retriever.reranker_retriever

A reranking model, accessed via ModelClient, used as a retriever.

components.output_parsers.outputs

The most commonly used output parsers for the Generator.

Datasets#

Evaluation#

eval.base

Abstract base class for evaluation metrics.

eval.answer_match_acc

Answer-match accuracy metric for QA generation.
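
A minimal sketch of computing exact-match accuracy; the constructor argument and return shape are assumptions based on the docs.

```python
# Sketch only: exact-match accuracy between predicted and ground-truth answers.
from adalflow.eval.answer_match_acc import AnswerMatchAcc

acc_metric = AnswerMatchAcc(type="exact_match")
result = acc_metric.compute(["Paris", "Rome"], ["Paris", "Berlin"])
print(result)  # average score plus per-item scores (exact shape may vary by version)
```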

eval.retriever_recall

Retriever Recall@k metric.

eval.llm_as_judge

Metric that uses an LLM as a judge to evaluate the quality of predicted answers.

eval.g_eval

Implementation of G-Eval (https://arxiv.org/abs/2303.08774, nlpyang/geval). Instead of a raw 1-5 rating, AdalFlow divides by 5 (so a 1/5 rating becomes 0.2), keeping every metric in the range [0, 1].

Optimization#

optim.parameter

Parameter is used by Optimizer, Trainer, and AdalComponent for auto-optimization.
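
A minimal sketch of declaring a trainable prompt parameter; the keyword names follow the library docs and may differ by version.

```python
# Sketch only: mark part of a prompt as trainable so the optimizers can update it.
from adalflow.optim.parameter import Parameter
from adalflow.optim.types import ParameterType

task_desc = Parameter(
    data="You are a careful QA assistant. Answer concisely.",
    role_desc="Task description for the QA generator",
    requires_opt=True,
    param_type=ParameterType.PROMPT,
)
print(task_desc.data)
```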

optim.optimizer

Base Classes for AdalFlow Optimizers, including Optimizer, TextOptimizer, and DemoOptimizer.

optim.grad_component

Base class for Autograd Components that can be called and backpropagated through.

optim.types

All data types used by Parameter, Optimizer, AdalComponent, and Trainer.

optim.function

optim.few_shot.bootstrap_optimizer

Adapted and optimized bootstrap few-shot optimizer.

optim.text_grad.text_loss_with_eval_fn

Adapted from text_grad's string-based function.

optim.text_grad.tgd_optimizer

Text-grad optimizer and prompts.

optim.text_grad.llm_text_loss

Implementation of TextGrad: Automatic “Differentiation” via Text.

optim.trainer.trainer

Ready-to-use trainer for LLM task pipelines.

optim.trainer.adal

AdalComponent provides an interface for composing the different parts (eval_fn, train_step, loss_step, optimizers, backward engine, teacher generator, etc.) that work with Trainer.

Tracing#

Utils#

utils.data

Default Dataset and DataLoader, similar to torch.utils.data in PyTorch.

utils.logger

This logger file provides easy configurability of the root and named loggers, along with a color print function for console output.

utils.setup_env([dotenv_path])

Load environment variables from .env file.
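
A minimal usage sketch, assuming the default path is ".env" in the working directory:

```python
# Sketch only: load API keys (e.g. OPENAI_API_KEY) from a .env file into the environment.
from adalflow.utils import setup_env

setup_env()                              # reads ".env" in the working directory by default
# setup_env(dotenv_path="configs/.env")  # or point at a specific file
```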

utils.lazy_import

Lazy import a module and class.

utils.serialization

utils.config

Config helper functions to manage configuration and rebuild your task pipeline.

utils.registry