
Using LiteLLM Proxy

LiteLLM Proxy is an OpenAI-compatible proxy server that allows you to call 100+ LLMs through a unified interface.

Using LiteLLM Proxy with jupyterlite-ai provides flexibility to switch between different AI providers (OpenAI, Anthropic, Google, Azure, local models, etc.) without changing your JupyterLite configuration. It’s particularly useful for enterprise deployments where the proxy can be hosted within private infrastructure to manage external API calls and keep API keys server-side.
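The provider switch described above amounts to changing only the `model` field in an otherwise identical OpenAI-style request. A minimal sketch (the model aliases are the ones defined later in `litellm_config.yaml`; the helper name is hypothetical):

```python
import json

def chat_payload(model, prompt):
    """Build an OpenAI-style chat completion body.

    Behind a LiteLLM proxy, switching providers only changes the model
    alias; the rest of the request stays identical.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same request shape, different backend -- the proxy decides the routing
# based on the alias in its litellm_config.yaml:
openai_req = chat_payload("gpt-5", "Summarize this notebook.")
claude_req = chat_payload("claude-sonnet", "Summarize this notebook.")
print(json.dumps(openai_req, indent=2))
```

Because the request shape never changes, the client-side configuration can stay fixed while the proxy's config file decides which provider actually serves each alias.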

Setting up LiteLLM Proxy

  1. Install LiteLLM following the instructions at https://docs.litellm.ai/docs/simple_proxy.

  2. Create a litellm_config.yaml file with your model configuration:

model_list:
  - model_name: gpt-5
    litellm_params:
      model: gpt-5
      api_key: os.environ/OPENAI_API_KEY

  - model_name: claude-sonnet
    litellm_params:
      model: claude-sonnet-4-5-20250929
      api_key: os.environ/ANTHROPIC_API_KEY

  3. Start the proxy server, for example:

litellm --config litellm_config.yaml

The proxy will start on http://0.0.0.0:4000 by default.
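Once the proxy is running, any client that speaks the OpenAI wire format can call it. A sketch using only the Python standard library, assuming the default address and the `gpt-5` alias from the config above; a secured deployment would expect the proxy's own key in the `Authorization` header rather than the placeholder used here:

```python
import json
import urllib.request

PROXY_URL = "http://0.0.0.0:4000"  # default LiteLLM proxy address

def build_chat_request(model, prompt, api_key="sk-anything"):
    """Build a POST request for the proxy's OpenAI-compatible
    /chat/completions endpoint.

    api_key is whatever key the proxy itself is configured to accept
    (a placeholder here) -- provider keys never leave the server.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{PROXY_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def chat(model, prompt):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(model, prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Requires the proxy from the previous step to be running:
# chat("gpt-5", "Hello from JupyterLite!")
```

This mirrors what an OpenAI-compatible client does under the hood: only the base URL points at the proxy instead of a provider's API.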

Configuring jupyterlite-ai to use LiteLLM Proxy

Configure the Generic provider (OpenAI-compatible) with the following settings: