LiteLLM
- pydantic model llama_index.llms.litellm.LiteLLM
JSON schema:

```json
{
  "title": "LiteLLM",
  "description": "LLM interface.",
  "type": "object",
  "properties": {
    "callback_manager": {
      "title": "Callback Manager"
    },
    "model": {
      "title": "Model",
      "description": "The LiteLLM model to use.",
      "type": "string"
    },
    "temperature": {
      "title": "Temperature",
      "description": "The temperature to use during generation.",
      "type": "number"
    },
    "max_tokens": {
      "title": "Max Tokens",
      "description": "The maximum number of tokens to generate.",
      "type": "integer"
    },
    "additional_kwargs": {
      "title": "Additional Kwargs",
      "description": "Additional kwargs for the LLM API.",
      "type": "object"
    },
    "max_retries": {
      "title": "Max Retries",
      "description": "The maximum number of API retries.",
      "type": "integer"
    },
    "class_type": {
      "title": "Class Type",
      "default": "litellm",
      "type": "string"
    }
  },
  "required": [
    "model",
    "temperature",
    "max_retries"
  ]
}
```
- Config
arbitrary_types_allowed: bool = True
- Fields
- Validators
  - _validate_callback_manager » callback_manager
- field additional_kwargs: Dict[str, Any] [Optional]
Additional kwargs for the LLM API.
- field max_retries: int [Required]
The maximum number of API retries.
- field max_tokens: Optional[int] = None
The maximum number of tokens to generate.
- field model: str [Required]
The LiteLLM model to use.
- field temperature: float [Required]
The temperature to use during generation.
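A minimal construction sketch, assuming the module path shown in the schema above; the model name, environment variable, and field values are illustrative, since LiteLLM routes each request to whichever provider the model string implies:

```python
import os

from llama_index.llms.litellm import LiteLLM

# Provider credentials come from the environment; which variable applies
# depends on the model being routed to (OPENAI_API_KEY is illustrative).
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder value

llm = LiteLLM(
    model="gpt-3.5-turbo",  # any model name LiteLLM can route (assumption)
    temperature=0.1,        # required per the schema above
    max_tokens=256,         # optional generation cap
    max_retries=3,          # required per the schema above
)
```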
- async achat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Async chat endpoint for LLM.
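A minimal async sketch; the ChatMessage import path and the response.message.content attribute are assumptions based on the library's usual conventions, since the annotated return type here is Any:

```python
import asyncio

from llama_index.llms import ChatMessage  # import path is an assumption
from llama_index.llms.litellm import LiteLLM


async def main() -> None:
    llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
    response = await llm.achat([ChatMessage(role="user", content="Hello!")])
    # .message.content is assumed from the usual ChatResponse shape.
    print(response.message.content)


asyncio.run(main())
```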
- async acomplete(*args: Any, **kwargs: Any) → Any
Async completion endpoint for LLM.
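A corresponding sketch for async completion; the .text attribute on the result is an assumption based on the library's usual CompletionResponse shape:

```python
import asyncio

from llama_index.llms.litellm import LiteLLM


async def main() -> None:
    llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
    completion = await llm.acomplete("The capital of France is")
    print(completion.text)  # .text assumed from CompletionResponse


asyncio.run(main())
```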
- async astream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Async streaming chat endpoint for LLM.
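A streaming sketch, assuming astream_chat must first be awaited to obtain an async generator and that each yielded chunk carries an incremental .delta string (both assumptions):

```python
import asyncio

from llama_index.llms import ChatMessage  # import path is an assumption
from llama_index.llms.litellm import LiteLLM


async def main() -> None:
    llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
    stream = await llm.astream_chat(
        [ChatMessage(role="user", content="Count to five.")]
    )
    async for chunk in stream:  # async generator of partial responses
        print(chunk.delta, end="", flush=True)


asyncio.run(main())
```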
- async astream_complete(*args: Any, **kwargs: Any) → Any
Async streaming completion endpoint for LLM.
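The completion analogue of the streaming sketch above, under the same assumptions about the awaited async generator and the .delta field:

```python
import asyncio

from llama_index.llms.litellm import LiteLLM


async def main() -> None:
    llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
    stream = await llm.astream_complete("Once upon a time")
    async for chunk in stream:
        print(chunk.delta, end="", flush=True)


asyncio.run(main())
```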
- chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Chat endpoint for LLM.
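A minimal synchronous chat sketch; the ChatMessage import path and the .message.content attribute follow the library's usual conventions but are assumptions here:

```python
from llama_index.llms import ChatMessage  # import path is an assumption
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
messages = [
    ChatMessage(role="system", content="You are a terse assistant."),
    ChatMessage(role="user", content="What does LiteLLM do?"),
]
response = llm.chat(messages)
print(response.message.content)
```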
- classmethod class_name() → str
Get class name.
- complete(*args: Any, **kwargs: Any) → Any
Completion endpoint for LLM.
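A one-prompt completion sketch; the .text attribute is assumed from the library's conventional CompletionResponse:

```python
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
completion = llm.complete("LiteLLM is a library that")
print(completion.text)
```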
- stream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Streaming chat endpoint for LLM.
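A sketch of synchronous streaming chat, assuming the returned generator yields chunks with an incremental .delta string:

```python
from llama_index.llms import ChatMessage  # import path is an assumption
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
for chunk in llm.stream_chat([ChatMessage(role="user", content="Tell me a joke.")]):
    print(chunk.delta, end="", flush=True)
```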
- stream_complete(*args: Any, **kwargs: Any) → Any
Streaming completion endpoint for LLM.
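The completion counterpart, under the same .delta assumption:

```python
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
for chunk in llm.stream_complete("Once upon a time"):
    print(chunk.delta, end="", flush=True)
```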
- property metadata: LLMMetadata
LLM metadata.
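A sketch of reading the metadata property; the LLMMetadata attribute names shown are the library's usual ones but are an assumption here:

```python
from llama_index.llms.litellm import LiteLLM

llm = LiteLLM(model="gpt-3.5-turbo", temperature=0.1, max_retries=3)
meta = llm.metadata
# Attribute names assumed from the library's usual LLMMetadata model.
print(meta.model_name, meta.context_window, meta.num_output)
```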