mlflow_integration¶
MLflow integration helpers for AdalFlow tracing.
- enable_mlflow_local(tracking_uri: str = None, experiment_name: str = 'AdalFlow-Agent-Experiment', project_name: str = 'AdalFlow-Agent-Project', port: int = 8080) → bool [source]¶
Enable MLflow local tracing without auto-starting a server.
This function sets up MLflow tracing but does NOT automatically start an MLflow server. You need to have an MLflow server already running.
To start an MLflow server:
```bash
mlflow server --host 127.0.0.1 --port 8080
```
- Parameters:
tracking_uri – MLflow tracking server URI. If None, defaults to http://localhost:{port}
experiment_name – Name of the MLflow experiment to create/use
project_name – Project name for the MLflow tracing processor
port – Port for the default tracking URI if tracking_uri is None
- Returns:
True if MLflow was successfully enabled, False otherwise
- Return type:
bool
Example
>>> from adalflow.tracing import enable_mlflow_local
>>> # Use with existing MLflow server on default port
>>> enable_mlflow_local()
>>> # Or with custom tracking URI:
>>> enable_mlflow_local(tracking_uri="http://localhost:5000")
- enable_mlflow_local_with_server(tracking_uri: str = None, experiment_name: str = 'AdalFlow-Agent-Experiment', project_name: str = 'AdalFlow-Agent-Project', use_adalflow_dir: bool = True, port: int = 8080, kill_existing_server: bool = False) → bool [source]¶
Enable MLflow local tracing with an auto-starting server.
This function sets up MLflow tracing and automatically starts an MLflow server if needed.
- Parameters:
tracking_uri – MLflow tracking server URI. If None, will auto-start server
experiment_name – Name of the MLflow experiment to create/use
project_name – Project name for the MLflow tracing processor
use_adalflow_dir – If True and server fails to start, use ~/.adalflow/mlruns as fallback
port – Port to run MLflow server on
kill_existing_server – If True, kill any existing MLflow server on the port before starting
- Returns:
True if MLflow was successfully enabled, False otherwise
- Return type:
bool
Example
>>> from adalflow.tracing import enable_mlflow_local_with_server
>>> # Auto-start server on port 8080
>>> enable_mlflow_local_with_server()
>>> # Or kill existing and start fresh:
>>> enable_mlflow_local_with_server(kill_existing_server=True)
>>> # Or use custom port:
>>> enable_mlflow_local_with_server(port=5000)
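The two helpers above compose naturally: since both return a bool, you can try an already-running server first and fall back to auto-starting one. A minimal sketch, relying only on the documented return values and guarded so it is a no-op when adalflow is not installed:

```python
# Hedged sketch: prefer an already-running MLflow server, fall back to
# auto-starting one. Relies only on the documented bool return values.
enabled = None
try:
    from adalflow.tracing import enable_mlflow_local, enable_mlflow_local_with_server

    enabled = enable_mlflow_local()  # True only if a server is already reachable
    if not enabled:
        # No server found: start one on the default port (8080).
        enabled = enable_mlflow_local_with_server()
except ImportError:
    pass  # adalflow not installed; tracing stays disabled
```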
- get_mlflow_server_command(host: str = '0.0.0.0', port: int = 8080) → str [source]¶
Get the MLflow server command with AdalFlow backend store.
- Parameters:
host – Host to bind the server to (default: “0.0.0.0”)
port – Port to run the server on (default: 8080)
- Returns:
The MLflow server command to run
- Return type:
str
Example
>>> from adalflow.tracing import get_mlflow_server_command
>>> print(get_mlflow_server_command())
mlflow server --backend-store-uri file:///Users/yourname/.adalflow/mlruns --host 0.0.0.0 --port 8080
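For illustration, the printed command can be reconstructed from its parts. This is a hypothetical helper, not adalflow's implementation; the `~/.adalflow/mlruns` backend-store path is taken from the example output above:

```python
from pathlib import Path

def build_mlflow_command(host: str = "0.0.0.0", port: int = 8080) -> str:
    """Rebuild the command string shown above (illustrative sketch only;
    the real helper is adalflow.tracing.get_mlflow_server_command)."""
    # file:///<home>/.adalflow/mlruns -- the backend store from the example output
    store_uri = (Path.home() / ".adalflow" / "mlruns").as_uri()
    return f"mlflow server --backend-store-uri {store_uri} --host {host} --port {port}"
```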
- start_mlflow_server(host: str = '0.0.0.0', port: int = 8080, wait: bool = True, kill_existing: bool = False) → bool [source]¶
Start MLflow server with AdalFlow backend store.
- Parameters:
host – Host to bind the server to (default: “0.0.0.0”)
port – Port to run the server on (default: 8080)
wait – If True, wait for server to be ready before returning
kill_existing – If True, kill any existing process on the port before starting
- Returns:
True if server started successfully, False otherwise
- Return type:
bool
Example
>>> from adalflow.tracing import start_mlflow_server
>>> start_mlflow_server()  # Starts server on http://0.0.0.0:8080
>>> # Or with custom settings:
>>> start_mlflow_server(port=5000)
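Before calling start_mlflow_server with kill_existing=True, it can help to know whether the port is actually taken. A small stdlib check (an illustrative sketch, not part of adalflow's API):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port.
    Not part of adalflow; a plain TCP connect probe."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        # connect_ex returns 0 on success, an errno otherwise
        return sock.connect_ex((host, port)) == 0
```

For example, `start_mlflow_server(kill_existing=port_in_use(8080))` only forces a restart when the port is occupied.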