openai_client#

OpenAI ModelClient integration.

Functions

get_all_messages_content(completion)

When n > 1, get the content of all messages.

get_first_message_content(completion)

Get the content of the first message only.

get_probabilities(completion)

Get the probabilities of each token in the completion.

handle_streaming_response(generator)

Handle the streaming response.

parse_stream_response(completion)

Parse the response of the stream API.

Classes

OpenAIClient([api_key, ...])

A component wrapper for the OpenAI API client.

get_first_message_content(completion: ChatCompletion) str[source]#

Get the content of the first message. This is the default parser for chat completions.
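A minimal sketch of what the default parser does; the `ChatCompletion` object is mocked here with `SimpleNamespace`, since the real type comes from the `openai` package:

```python
from types import SimpleNamespace

def get_first_message_content(completion) -> str:
    # The default chat-completion parser: return only the first choice's content.
    return completion.choices[0].message.content

# A mocked ChatCompletion-shaped object, for illustration only.
completion = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="Hello!"))]
)
print(get_first_message_content(completion))  # → Hello!
```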

parse_stream_response(completion: ChatCompletionChunk) str[source]#

Parse the response of the stream API.

handle_streaming_response(generator: Stream[ChatCompletionChunk])[source]#

Handle the streaming response.
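The two streaming helpers compose: `parse_stream_response` pulls the text fragment out of a single chunk, and `handle_streaming_response` maps it over the whole stream. A sketch with mocked `ChatCompletionChunk`-shaped objects (the real types come from the `openai` package):

```python
from types import SimpleNamespace

def parse_stream_response(chunk) -> str:
    # Each streamed chunk carries a delta holding a content fragment (may be None).
    return chunk.choices[0].delta.content or ""

def handle_streaming_response(generator):
    # Yield the parsed text fragment of every chunk in the stream.
    for chunk in generator:
        yield parse_stream_response(chunk)

# A mocked stream of ChatCompletionChunk-shaped objects, for illustration only.
def mock_stream():
    for text in ("Hel", "lo", None):  # the final chunk often has no content
        yield SimpleNamespace(
            choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
        )

print("".join(handle_streaming_response(mock_stream())))  # → Hello
```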

get_all_messages_content(completion: ChatCompletion) List[str][source]#

When n > 1, get the content of all messages.
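When the request sets `n > 1`, the API returns one choice per requested completion. A sketch of the all-messages parser, again with a mocked completion object:

```python
from types import SimpleNamespace
from typing import List

def get_all_messages_content(completion) -> List[str]:
    # With n > 1 there is one choice per requested completion; collect them all.
    return [choice.message.content for choice in completion.choices]

completion = SimpleNamespace(
    choices=[
        SimpleNamespace(message=SimpleNamespace(content=text))
        for text in ("first answer", "second answer")
    ]
)
print(get_all_messages_content(completion))  # → ['first answer', 'second answer']
```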

get_probabilities(completion: ChatCompletion) List[List[TokenLogProb]][source]#

Get the probabilities of each token in the completion.
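A sketch of the log-probability parser. `TokenLogProb` is stood in for by a hypothetical `NamedTuple` here; the return shape is one inner list per choice, one entry per generated token:

```python
from types import SimpleNamespace
from typing import List, NamedTuple

class TokenLogProb(NamedTuple):
    # Hypothetical stand-in for the library's TokenLogProb type.
    token: str
    logprob: float

def get_probabilities(completion) -> List[List[TokenLogProb]]:
    # One inner list per choice, one TokenLogProb per generated token.
    return [
        [TokenLogProb(t.token, t.logprob) for t in choice.logprobs.content]
        for choice in completion.choices
    ]

# Mocked completion with logprobs enabled, for illustration only.
completion = SimpleNamespace(
    choices=[
        SimpleNamespace(
            logprobs=SimpleNamespace(
                content=[SimpleNamespace(token="Hi", logprob=-0.1)]
            )
        )
    ]
)
print(get_probabilities(completion))
```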

class OpenAIClient(api_key: str | None = None, chat_completion_parser: Callable[[Completion], Any] = None, input_type: Literal['text', 'messages'] = 'text')[source]#

Bases: ModelClient

A component wrapper for the OpenAI API client.

Supports both the embedding and chat completion APIs.

Users can (1) simply use the Embedder and Generator components by passing OpenAIClient() as the model_client, or (2) use this class as an example to create their own API client, or extend it (by copying and modifying the code) in their own project.

Note

We suggest that users do not rely on response_format to enforce the output data type, or on tools and tool_choice in model_kwargs, when calling the API. We do not know how OpenAI does the formatting or what prompts it adds. Instead, use OutputParser for response parsing and formatting.

Parameters:
  • api_key (Optional[str], optional) – OpenAI API key. Defaults to None.

  • chat_completion_parser (Callable[[Completion], Any], optional) – A function to parse the chat completion into a str. Defaults to get_first_message_content.

init_sync_client()[source]#
init_async_client()[source]#
parse_chat_completion(completion: ChatCompletion | Generator[ChatCompletionChunk, None, None]) GeneratorOutput[source]#

Parse the completion and put the result into raw_response.

track_completion_usage(completion: ChatCompletion | Generator[ChatCompletionChunk, None, None]) CompletionUsage[source]#

Track the chat completion usage, using the standard OpenAI API usage fields.

parse_embedding_response(response: CreateEmbeddingResponse) EmbedderOutput[source]#

Parse the embedding response into a structure that LightRAG components can understand.

Should be called in Embedder.

convert_inputs_to_api_kwargs(input: Any | None = None, model_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED) Dict[source]#

Specify the API input type and return the api_kwargs that will be used in the _call and _acall methods. Converts the component's standard input (and system_input for chat models) together with model_kwargs into the API-specific format.
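A sketch of what this conversion looks like for a chat model, assuming the `input_type='text'` behavior of wrapping a plain string into the messages format (the function below is a simplified stand-in, not the library's implementation):

```python
from typing import Any, Dict, Optional

def convert_inputs_to_api_kwargs(
    input: Optional[Any] = None,
    model_kwargs: Optional[Dict] = None,
    input_type: str = "text",
) -> Dict:
    # Sketch: a plain-text input is wrapped into the messages format the chat
    # API expects; with input_type="messages" the caller supplies them directly.
    api_kwargs = dict(model_kwargs or {})
    if input_type == "text":
        api_kwargs["messages"] = [{"role": "system", "content": str(input)}]
    else:
        api_kwargs["messages"] = input
    return api_kwargs

kwargs = convert_inputs_to_api_kwargs(
    "You are helpful.", {"model": "gpt-3.5-turbo"}
)
print(kwargs)
```

The resulting dict can be passed straight to the API as keyword arguments, which is why call() and acall() accept a single api_kwargs dict.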

call(api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED)[source]#

api_kwargs is the combined input and model_kwargs. Supports streaming calls.

async acall(api_kwargs: Dict = {}, model_type: ModelType = ModelType.UNDEFINED)[source]#

api_kwargs is the combined input and model_kwargs.

classmethod from_dict(data: Dict[str, Any]) T[source]#

Create an instance from data previously serialized with the to_dict() method.

to_dict() Dict[str, Any][source]#

Convert the component to a dictionary.
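The to_dict()/from_dict() pair forms a serialization round trip. A minimal sketch with a toy component (the real component serializes its full configuration, and the field names below are illustrative only):

```python
from typing import Any, Dict, Type, TypeVar

T = TypeVar("T", bound="ToyComponent")

class ToyComponent:
    # Toy stand-in showing the to_dict()/from_dict() round-trip contract.
    def __init__(self, api_key: str = None, input_type: str = "text"):
        self.api_key = api_key
        self.input_type = input_type

    def to_dict(self) -> Dict[str, Any]:
        # Serialize the component's configuration to a plain dict.
        return {"api_key": self.api_key, "input_type": self.input_type}

    @classmethod
    def from_dict(cls: Type[T], data: Dict[str, Any]) -> T:
        # Rebuild an equivalent instance from the serialized dict.
        return cls(**data)

client = ToyComponent(api_key="FAKE_KEY", input_type="messages")
restored = ToyComponent.from_dict(client.to_dict())
print(restored.input_type)  # → messages
```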