LiteLLM Proxy is an OpenAI-compatible proxy server that allows you to call 100+ LLMs through a unified interface.
Using LiteLLM Proxy with jupyterlite-ai provides flexibility to switch between different AI providers (OpenAI, Anthropic, Google, Azure, local models, etc.) without changing your JupyterLite configuration. It’s particularly useful for enterprise deployments where the proxy can be hosted within private infrastructure to manage external API calls and keep API keys server-side.
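Because the proxy speaks the OpenAI API, any OpenAI-compatible client can talk to it. As a minimal sketch (assuming a proxy at `http://0.0.0.0:4000` and a model named `gpt-5` — both placeholders for your own deployment), a chat-completion request can be built with only the Python standard library:

```python
import json
import urllib.request

# Hypothetical values; replace with your proxy URL and a model name
# from your litellm_config.yaml.
PROXY_URL = "http://0.0.0.0:4000"

def build_chat_request(model, messages, api_key=None):
    """Build an OpenAI-compatible /v1/chat/completions request for the proxy."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed if the proxy enforces key auth
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{PROXY_URL}/v1/chat/completions",
        data=body,
        headers=headers,
        method="POST",
    )

req = build_chat_request("gpt-5", [{"role": "user", "content": "Hello!"}])
# urllib.request.urlopen(req) would send the request to a running proxy.
```

The same request shape works regardless of which upstream provider the proxy routes the model to, which is what lets you swap providers without touching client code.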
Setting up LiteLLM Proxy
Install LiteLLM following the instructions at https://docs.litellm.ai/docs/simple_proxy. Create a `litellm_config.yaml` file with your model configuration:
```yaml
model_list:
  - model_name: gpt-5
    litellm_params:
      model: gpt-5
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: claude-sonnet-4-5-20250929
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the proxy server, for example:

```shell
litellm --config litellm_config.yaml
```

The proxy will start on http://0.0.0.0:4000 by default.
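To check which models the proxy exposes, you can query its OpenAI-compatible `/v1/models` endpoint. A minimal sketch, assuming the proxy runs at the default `http://0.0.0.0:4000` (the actual network call is left commented out so the snippet stands on its own without a running server):

```python
import json
import urllib.request

PROXY_URL = "http://0.0.0.0:4000"  # adjust to your proxy server URL

def build_models_request(api_key=None):
    """Build a GET request for the proxy's OpenAI-compatible model list."""
    headers = {}
    if api_key:  # only needed if the proxy enforces key auth
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(f"{PROXY_URL}/v1/models", headers=headers)

req = build_models_request()
# With the proxy running, sending the request returns the configured
# model_name entries from litellm_config.yaml:
# with urllib.request.urlopen(req) as resp:
#     models = json.load(resp)["data"]
```

The ids returned here are the `model_name` values from the config above, which are also the names you enter in jupyterlite-ai.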
Configuring jupyterlite-ai to use LiteLLM Proxy
Configure the Generic provider (OpenAI-compatible) with the following settings:
- **Base URL**: `http://0.0.0.0:4000` (or your proxy server URL)
- **Model**: the model name from your `litellm_config.yaml` (e.g., `gpt-5`, `claude-sonnet`)
- **API Key** (optional): if the LiteLLM Proxy server requires an API key, provide it here