# Using LiteLLM Proxy with OpenAIGPTConfig
You can easily configure Langroid to use a LiteLLM proxy for accessing models, simply by adding the `litellm-proxy/` prefix to the `chat_model` name.
## Using the `litellm-proxy/` prefix
When you specify a model with the `litellm-proxy/` prefix, Langroid automatically uses the LiteLLM proxy configuration:
```python
from langroid.language_models.openai_gpt import OpenAIGPTConfig

config = OpenAIGPTConfig(
    chat_model="litellm-proxy/your-model-name"
)
```
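Once constructed, this config is used like any other Langroid LLM config. Below is a minimal sketch, assuming a LiteLLM proxy server is already running and its connection parameters are set as described in the next section; the `chat` call follows Langroid's `LanguageModel` interface, so treat the exact arguments as illustrative:

```python
from langroid.language_models.openai_gpt import OpenAIGPT, OpenAIGPTConfig

config = OpenAIGPTConfig(chat_model="litellm-proxy/your-model-name")
llm = OpenAIGPT(config)

# Send a single chat message through the proxy; `max_tokens` caps the reply length.
response = llm.chat("What is the capital of France?", max_tokens=50)
print(response.message)
```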
## Setting LiteLLM Proxy Parameters
When using the `litellm-proxy/` prefix, Langroid will read connection parameters from either of the following (sketched below):

- the `litellm_proxy` config object, or
- environment variables, which take precedence.
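Here is a minimal sketch of both options, assuming the proxy runs at `http://localhost:4000`. The field names `api_base`/`api_key` and the environment variable names `LITELLM_API_BASE`/`LITELLM_API_KEY` are assumptions for illustration; check the `litellm_proxy` config definition in your Langroid version for the exact names.

```python
import os

from langroid.language_models.openai_gpt import OpenAIGPTConfig

# Option 1: pass proxy parameters via the `litellm_proxy` config object.
# NOTE: `api_base` and `api_key` are assumed field names -- verify them
# against the litellm_proxy config class in your Langroid version.
config = OpenAIGPTConfig(
    chat_model="litellm-proxy/your-model-name",
    litellm_proxy=dict(
        api_base="http://localhost:4000",  # URL of the LiteLLM proxy server
        api_key="sk-your-proxy-key",       # key configured on the proxy
    ),
)

# Option 2: set environment variables, which take precedence over the
# config object. NOTE: the variable names below are also assumptions.
os.environ["LITELLM_API_BASE"] = "http://localhost:4000"
os.environ["LITELLM_API_KEY"] = "sk-your-proxy-key"
```

Because environment variables take precedence, they are convenient when the same code must run against different proxy deployments.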
This approach makes it simple to switch between a LiteLLM proxy and other model providers just by changing the model-name prefix, without needing to modify the rest of your code or tweak environment variables.
## Note: LiteLLM Proxy vs LiteLLM Library
Important distinction: using the `litellm-proxy/` prefix connects to a LiteLLM proxy server, which is different from using the `litellm/` prefix; the latter uses the LiteLLM adapter library directly, without requiring a proxy server. Both approaches are supported in Langroid, but they serve different use cases:
- Use `litellm-proxy/` when connecting to a deployed LiteLLM proxy server.
- Use `litellm/` when you want to use the LiteLLM library's routing capabilities locally.

Both forms are sketched below.
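The two approaches differ only in the `chat_model` prefix. The model names below are placeholders: with `litellm-proxy/` the name is whatever model your proxy exposes, while with `litellm/` the remainder follows LiteLLM's own `provider/model` naming scheme.

```python
from langroid.language_models.openai_gpt import OpenAIGPTConfig

# Route requests through a deployed LiteLLM proxy server
# (placeholder model name, as exposed by the proxy):
proxy_config = OpenAIGPTConfig(chat_model="litellm-proxy/gpt-4o")

# Use the LiteLLM adapter library directly, no proxy server required
# (placeholder provider/model name in LiteLLM's naming scheme):
library_config = OpenAIGPTConfig(
    chat_model="litellm/anthropic/claude-3-haiku-20240307"
)
```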
Choose the approach that best fits your infrastructure and requirements.