model_client


We let users conditionally install the SDKs required by our integrated model providers.

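As a rough illustration of this conditional-install pattern, the sketch below lazily imports a provider SDK and raises an informative error when it is missing. The helper name and error message are assumptions for illustration, not the package's actual implementation.

    def _require_openai_sdk():
        # Import the OpenAI SDK only when an OpenAI-backed client is used;
        # the SDK itself is an optional dependency (`pip install openai`).
        try:
            import openai
        except ImportError as exc:
            raise ImportError(
                "The OpenAI SDK is required for this model client. "
                "Install it with `pip install openai`."
            ) from exc
        return openai
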
process_images_for_response_api(images: str | Dict | List[str | Dict], encode_local_images: bool = True) → List[Dict[str, Any]][source]

Process and validate images for OpenAI’s responses.create API.

This function handles various image input formats and converts them to the expected format for the responses.create API.

Parameters:
  • images – Can be:
    - A single image URL (str)
    - A single local file path (str)
    - A pre-formatted image dict with type="input_image"
    - A list containing any combination of the above

  • encode_local_images – Whether to encode local image files to base64

Returns:

List of formatted image dicts ready for the API, each containing:

  • type: "input_image"

  • image_url: Either a URL or base64-encoded data URI

Return type:

List[Dict[str, Any]]

Raises:
  • ValueError – If image dict format is invalid

  • FileNotFoundError – If local image file doesn’t exist

Examples

>>> # Single URL
>>> process_images_for_response_api("https://example.com/image.jpg")
[{"type": "input_image", "image_url": "https://example.com/image.jpg"}]
>>> # Local file
>>> process_images_for_response_api("/path/to/image.jpg")
[{"type": "input_image", "image_url": "data:image/jpeg;base64,..."}]
>>> # Multiple mixed sources
>>> process_images_for_response_api([
...     "https://example.com/img.jpg",
...     "/local/img.png",
...     {"type": "input_image", "image_url": "..."}
... ])
[...]
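
For completeness, a short sketch of the documented error behavior when a local image file does not exist (the path below is a placeholder, not a real file):

>>> # Missing local file raises FileNotFoundError (illustrative path)
>>> try:
...     process_images_for_response_api("/path/does/not/exist.png")
... except FileNotFoundError:
...     print("image file not found")
image file not found
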
format_content_for_response_api(text: str, images: str | Dict | List[str | Dict] | None = None) → List[Dict[str, Any]][source]

Format text and optional images into a content array for the responses.create API.

Parameters:
  • text – The text prompt/question

  • images – Optional images in various formats (see process_images_for_response_api)

Returns:

List of content items formatted for the API

Examples

>>> # Text only
>>> format_content_for_response_api("What is this?")
[{"type": "input_text", "text": "What is this?"}]
>>> # Text with image
>>> format_content_for_response_api(
...     "What's in this image?",
...     "https://example.com/img.jpg"
... )
[
    {"type": "input_text", "text": "What's in this image?"},
    {"type": "input_image", "image_url": "https://example.com/img.jpg"}
]
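
As a rough sketch of how the returned content array could be passed to the OpenAI Responses API (the model name is illustrative, and this downstream call is not part of the documented function):

>>> # Pass the content array to responses.create (illustrative)
>>> from openai import OpenAI
>>> client = OpenAI()  # reads OPENAI_API_KEY from the environment
>>> content = format_content_for_response_api(
...     "What's in this image?",
...     "https://example.com/img.jpg"
... )
>>> response = client.responses.create(
...     model="gpt-4o-mini",  # illustrative model name
...     input=[{"role": "user", "content": content}]
... )
>>> print(response.output_text)
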
parse_embedding_response(api_response) → EmbedderOutput[source]

Parse the embedding model output from the API response into an EmbedderOutput.

Follows the OpenAI API response pattern.
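
A hedged usage sketch, assuming the response comes from OpenAI's embeddings endpoint; the model name and the EmbedderOutput field referenced in the final comment are assumptions:

>>> # Parse an OpenAI embeddings response (illustrative)
>>> from openai import OpenAI
>>> client = OpenAI()
>>> api_response = client.embeddings.create(
...     model="text-embedding-3-small",  # illustrative model name
...     input=["hello world", "goodbye world"]
... )
>>> output = parse_embedding_response(api_response)
>>> # output.data would hold the parsed embeddings (field name assumed)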