model_client
Submodules
- anthropic_client
- azureai_client
AzureAIClient: init_sync_client(), init_async_client(), parse_chat_completion(), track_completion_usage(), parse_embedding_response(), convert_inputs_to_api_kwargs(), call(), acall(), from_dict(), to_dict()
- bedrock_client
- chat_completion_to_response_converter
- cohere_client
- deepseek_client
- fireworks_client
- google_client
- groq_client
GroqAPIClient: init_sync_client(), init_async_client(), parse_chat_completion(), track_completion_usage(), convert_inputs_to_api_kwargs(), call(), acall(), from_dict(), to_dict(), list_models()
- mistral_client
- ollama_client
- openai_client
ParsedResponseContent: text, images, tool_calls, reasoning, code_outputs, raw_output
get_response_output_text(), parse_response_output(), estimate_token_count(), handle_streaming_response(), handle_streaming_response_sync()
OpenAIClient: init_sync_client(), init_async_client(), parse_chat_completion(), track_completion_usage(), parse_embedding_response(), convert_inputs_to_api_kwargs(), parse_image_generation_response(), call(), acall(), from_dict(), to_dict()
- sambanova_client
- together_client
- transformers_client
average_pool(), TransformerEmbedder, get_device(), clean_device_cache(), TransformerReranker, TransformerLLM
TransformersClient: support_models, init_sync_client(), init_reranker_client(), init_llm_client(), set_llm_client(), parse_embedding_response(), parse_chat_completion(), call(), convert_inputs_to_api_kwargs()
- utils
- xai_client
SDKs for the integrated model providers are optional dependencies: users install only the SDKs for the providers they use.
- process_images_for_response_api(images: str | Dict | List[str | Dict], encode_local_images: bool = True) → List[Dict[str, Any]]
Process and validate images for OpenAI’s responses.create API.
This function handles various image input formats and converts them to the expected format for the responses.create API.
- Parameters:
  - images – Can be:
    - A single image URL (str)
    - A single local file path (str)
    - A pre-formatted image dict with type="input_image"
    - A list containing any combination of the above
  - encode_local_images – Whether to encode local image files to base64
- Returns:
  List of formatted image dicts ready for the API, each containing:
  - type: "input_image"
  - image_url: Either a URL or a base64-encoded data URI
- Return type:
  List[Dict[str, Any]]
- Raises:
  - ValueError – If the image dict format is invalid
  - FileNotFoundError – If a local image file doesn't exist
Examples
>>> # Single URL
>>> process_images_for_response_api("https://example.com/image.jpg")
[{"type": "input_image", "image_url": "https://example.com/image.jpg"}]

>>> # Local file
>>> process_images_for_response_api("/path/to/image.jpg")
[{"type": "input_image", "image_url": "data:image/jpeg;base64,..."}]

>>> # Multiple mixed sources
>>> process_images_for_response_api([
...     "https://example.com/img.jpg",
...     "/local/img.png",
...     {"type": "input_image", "image_url": "..."}
... ])
[...]
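For orientation, the conversion rules above can be sketched as a small standalone function. This is an illustrative re-implementation, not the library's actual code: the name process_images_sketch is hypothetical, and details such as MIME-type detection are assumptions.

```python
import base64
import mimetypes
from pathlib import Path
from typing import Any, Dict, List, Union


def process_images_sketch(
    images: Union[str, Dict, List[Union[str, Dict]]],
    encode_local_images: bool = True,
) -> List[Dict[str, Any]]:
    """Illustrative sketch of the conversion rules documented above."""
    items = images if isinstance(images, list) else [images]
    formatted: List[Dict[str, Any]] = []
    for item in items:
        if isinstance(item, dict):
            # Pre-formatted dicts must already use type="input_image".
            if item.get("type") != "input_image":
                raise ValueError(f"Invalid image dict: {item}")
            formatted.append(item)
        elif item.startswith(("http://", "https://", "data:")):
            # URLs and data URIs pass through unchanged.
            formatted.append({"type": "input_image", "image_url": item})
        elif encode_local_images:
            # Local files are read and embedded as base64 data URIs.
            path = Path(item)
            if not path.exists():
                raise FileNotFoundError(f"Image file not found: {item}")
            mime = mimetypes.guess_type(item)[0] or "image/jpeg"
            data = base64.b64encode(path.read_bytes()).decode("utf-8")
            formatted.append(
                {"type": "input_image", "image_url": f"data:{mime};base64,{data}"}
            )
        else:
            formatted.append({"type": "input_image", "image_url": item})
    return formatted
```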
- format_content_for_response_api(text: str, images: str | Dict | List[str | Dict] | None = None) → List[Dict[str, Any]]
Format text and optional images into content array for responses.create API.
- Parameters:
text – The text prompt/question
images – Optional images in various formats (see process_images_for_response_api)
- Returns:
List of content items formatted for the API
Examples
>>> # Text only
>>> format_content_for_response_api("What is this?")
[{"type": "input_text", "text": "What is this?"}]

>>> # Text with image
>>> format_content_for_response_api(
...     "What's in this image?",
...     "https://example.com/img.jpg"
... )
[{"type": "input_text", "text": "What's in this image?"},
 {"type": "input_image", "image_url": "https://example.com/img.jpg"}]
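The assembly of the content array can likewise be sketched in a few lines. The name format_content_sketch is hypothetical, and the delegation to plain pass-through for dicts and strings is a simplification of the image handling described above.

```python
from typing import Any, Dict, List, Optional, Union


def format_content_sketch(
    text: str,
    images: Optional[Union[str, Dict, List[Union[str, Dict]]]] = None,
) -> List[Dict[str, Any]]:
    """Illustrative sketch: text first, then any images, as content items."""
    content: List[Dict[str, Any]] = [{"type": "input_text", "text": text}]
    if images is not None:
        items = images if isinstance(images, list) else [images]
        for item in items:
            if isinstance(item, dict):
                content.append(item)  # assume already API-formatted
            else:
                content.append({"type": "input_image", "image_url": item})
    return content
```

A list built this way would typically be supplied as the content of a user message in the input to responses.create.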