AI providers typically require an API key to access their models.
The process differs between providers, so refer to each provider's documentation to learn how to generate a new API key.
Using a provider with an API key
For providers like Anthropic, MistralAI, or OpenAI:
1. Open the AI settings
2. Click on "Add a new provider"
3. Enter the details for the provider
4. In the chat, select the new provider
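Before pasting a key into the settings form, it can help to confirm the key is actually available in your environment. A minimal sketch of that check, assuming the key is exported as an environment variable (the function name and the variable-name conventions below are illustrative, not part of the extension):

```python
import os

def get_api_key(env_var: str) -> str:
    """Read an API key from an environment variable, failing loudly if unset.

    Most providers document a conventional variable name, e.g.
    ANTHROPIC_API_KEY, MISTRAL_API_KEY, or OPENAI_API_KEY.
    """
    key = os.environ.get(env_var, "")
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; generate a key in the provider's console first"
        )
    return key

# Example: read the key, then paste it into the AI settings form.
# key = get_api_key("OPENAI_API_KEY")
```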

Using a generic OpenAI-compatible provider
The Generic provider allows you to connect to any OpenAI-compatible API endpoint, including local servers like Ollama and LiteLLM.
1. In JupyterLab, open the AI settings panel and go to the Providers section
2. Click on "Add a new provider"
3. Select the Generic (OpenAI-compatible) provider
4. Configure the following settings:
   - **Base URL**: the base URL of your API endpoint (suggestions are provided for common local servers)
   - **Model**: the model name to use
   - **API Key**: your API key (if required by the provider)
See the dedicated pages for specific providers:
Controlling MIME auto-rendering in chat
When the AI model uses execute_command, some commands may return rich MIME bundles
(plots, maps, HTML, etc.). You can control which commands automatically render
those bundles as chat messages:
1. Open the AI settings and go to Behavior Settings
2. In **Commands Auto-Rendering MIME Bundles**, add or remove command IDs
3. In **Trusted MIME Types for Auto-Render**, add or remove MIME types to mark as trusted when those commands are auto-rendered in chat
Default:

- Command: `jupyterlab-ai-commands:execute-in-kernel`
- Trusted MIME type: `text/html`
This helps avoid side effects where inspection commands return existing notebook outputs that you do not want replayed in chat.
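To illustrate how the trusted-MIME-type list acts as a filter on a rich output bundle, here is a small sketch of the idea (the function below is hypothetical, not the extension's actual implementation; `bundle` uses the Jupyter display-bundle shape, mapping MIME types to payloads):

```python
def filter_trusted(bundle: dict, trusted: set) -> dict:
    """Keep only the MIME types explicitly marked as trusted for auto-render.

    `bundle` maps MIME types to payloads, as in a Jupyter display bundle,
    e.g. {"text/html": "<table>...</table>", "image/png": "..."}.
    Anything not in `trusted` is dropped rather than rendered in chat.
    """
    return {mime: data for mime, data in bundle.items() if mime in trusted}

# With the default settings, only text/html would be auto-rendered:
bundle = {"text/html": "<table>...</table>", "application/json": {"a": 1}}
rendered = filter_trusted(bundle, {"text/html"})
```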