Drop Unsupported Params

Drop OpenAI params that aren't supported by your LLM provider.

Default Behavior

By default, LiteLLM raises an exception if you send a parameter to a model that doesn't support it.

For example, if you send temperature=0.2 to a model that doesn't support the temperature parameter, LiteLLM will raise an exception.
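
For illustration, here's a minimal sketch of that default behavior, assuming response_format is not supported by the model being called:

import litellm
import os

os.environ["COHERE_API_KEY"] = "co-.."

# drop_params is False by default, so an unsupported param raises an error
try:
    litellm.completion(
        model="command-r",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
        response_format={"key": "value"},  # assumed unsupported for this model
    )
except litellm.UnsupportedParamsError as e:
    print(e)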

When drop_params=True is set, LiteLLM will drop the unsupported parameter instead of raising an exception. This allows your code to work seamlessly across different providers without having to customize parameters for each one.

Quick Start

import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

litellm.drop_params = True # 👈 KEY CHANGE

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
)

LiteLLM maps all supported OpenAI params by provider + model (e.g. function calling is supported by Anthropic on Bedrock, but not by Titan).

See litellm.get_supported_openai_params("command-r").
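
For example (a minimal sketch; the exact list returned for command-r may differ):

import litellm

# returns the OpenAI params LiteLLM maps for this model/provider
supported_params = litellm.get_supported_openai_params("command-r")
print(supported_params)  # e.g. ["stream", "temperature", "max_tokens", ...]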

If a provider/model doesn't support a particular param, you can drop it.

OpenAI Proxy Usage

litellm_settings:
  drop_params: true
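
With this set in the proxy's config, clients can keep sending OpenAI params and the proxy drops any the underlying model doesn't support. A minimal sketch using the OpenAI Python SDK against a locally running proxy (the base_url and api_key below are placeholder assumptions):

from openai import OpenAI

# point the OpenAI client at the LiteLLM proxy
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"type": "json_object"},  # dropped by the proxy if unsupported
)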

Pass drop_params in completion(..)

Just pass drop_params when calling specific models:

import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    drop_params=True
)

Specify params to drop

To drop specific params when calling a provider (e.g. 'logit_bias' for vLLM), use additional_drop_params:

import litellm
import os

# set keys
os.environ["COHERE_API_KEY"] = "co-.."

response = litellm.completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    response_format={"key": "value"},
    additional_drop_params=["response_format"]
)

additional_drop_params: List or null - a list of OpenAI params you want to drop when making a call to the model.

Specify allowed openai params in a request

Tell LiteLLM to allow specific OpenAI params in a request. Use this if you get a litellm.UnsupportedParamsError and want to allow a param anyway. LiteLLM will pass the param as-is to the model.

In this example we pass allowed_openai_params=["tools"] to allow the tools param.

Pass allowed_openai_params to LiteLLM Python SDK

# run this inside an async function / event loop
await litellm.acompletion(
    model="azure/o_series/<my-deployment-name>",
    api_key="xxxxx",
    api_base=api_base,
    messages=[{"role": "user", "content": "Hello! return a json object"}],
    tools=[{"type": "function", "function": {"name": "get_current_time", "description": "Get the current time in a given location.", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name, e.g. San Francisco"}}, "required": ["location"]}}}],
    allowed_openai_params=["tools"],
)
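
If you're not in an async context, the same parameter can be passed to the synchronous call as well. A hedged sketch, assuming allowed_openai_params behaves identically for completion() (the deployment name, key, and base are placeholders):

import litellm

response = litellm.completion(
    model="azure/o_series/<my-deployment-name>",
    api_key="xxxxx",
    api_base="https://<your-azure-endpoint>",
    messages=[{"role": "user", "content": "Hello! return a json object"}],
    tools=[{"type": "function", "function": {"name": "get_current_time", "description": "Get the current time in a given location.", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name, e.g. San Francisco"}}, "required": ["location"]}}}],
    allowed_openai_params=["tools"],  # assumed to work the same as in acompletion
)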