Anthropic
- pydantic model llama_index.llms.anthropic.Anthropic
JSON schema:
{
  "title": "Anthropic",
  "description": "LLM interface.",
  "type": "object",
  "properties": {
    "callback_manager": {"title": "Callback Manager"},
    "model": {"title": "Model", "description": "The Anthropic model to use.", "type": "string"},
    "temperature": {"title": "Temperature", "description": "The temperature to use for sampling.", "type": "number"},
    "max_tokens": {"title": "Max Tokens", "description": "The maximum number of tokens to generate.", "type": "integer"},
    "base_url": {"title": "Base Url", "description": "The base URL to use.", "type": "string"},
    "timeout": {"title": "Timeout", "description": "The timeout to use in seconds.", "type": "number"},
    "max_retries": {"title": "Max Retries", "description": "The maximum number of API retries.", "default": 10, "type": "integer"},
    "additional_kwargs": {"title": "Additional Kwargs", "description": "Additional kwargs for the Anthropic API.", "type": "object"}
  },
  "required": ["model", "temperature", "max_tokens"]
}
- Config
arbitrary_types_allowed: bool = True
- Fields
- Validators
_validate_callback_manager » callback_manager
- field additional_kwargs: Dict[str, Any] [Optional]
Additional kwargs for the Anthropic API.
- field base_url: Optional[str] = None
The base URL to use.
- field max_retries: int = 10
The maximum number of API retries.
- field max_tokens: int [Required]
The maximum number of tokens to generate.
- field model: str [Required]
The Anthropic model to use.
- field temperature: float [Required]
The temperature to use for sampling.
- field timeout: Optional[float] = None
The timeout to use in seconds.
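A minimal construction sketch for the fields above, assuming the module path llama_index.llms.anthropic shown on this page and that credentials are supplied via the ANTHROPIC_API_KEY environment variable (an assumption; API-key handling is not part of the documented field list). The model name is illustrative.

```python
from llama_index.llms.anthropic import Anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment; the key is not one of
# the documented fields above.
llm = Anthropic(
    model="claude-2",       # required: the Anthropic model to use (illustrative name)
    temperature=0.1,        # required: sampling temperature
    max_tokens=512,         # required: maximum number of tokens to generate
    max_retries=10,         # optional: matches the default of 10 shown above
    timeout=60.0,           # optional: request timeout in seconds
    additional_kwargs={},   # optional: extra kwargs forwarded to the Anthropic API
)
```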
- async achat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Async chat endpoint for LLM.
- async acomplete(*args: Any, **kwargs: Any) → Any
Async completion endpoint for LLM.
- async astream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Async streaming chat endpoint for LLM.
- async astream_complete(*args: Any, **kwargs: Any) → Any
Async streaming completion endpoint for LLM.
- chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Chat endpoint for LLM.
- classmethod class_name() → str
Get class name.
- complete(*args: Any, **kwargs: Any) → Any
Completion endpoint for LLM.
- stream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any
Streaming chat endpoint for LLM.
- stream_complete(*args: Any, **kwargs: Any) → Any
Streaming completion endpoint for LLM.
- property metadata: LLMMetadata
LLM metadata.
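A short usage sketch of the endpoints above. It assumes ChatMessage can be imported from llama_index.llms and that the returned objects expose the standard LlamaIndex attributes (.text on completion responses, .message.content and .delta on chat responses); the prompts and model name are illustrative.

```python
import asyncio

from llama_index.llms import ChatMessage
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(model="claude-2", temperature=0.1, max_tokens=256)

# complete(): plain prompt in, completion text out.
print(llm.complete("Paul Graham is ").text)

# chat(): takes a sequence of ChatMessage objects.
messages = [
    ChatMessage(role="system", content="You are a terse assistant."),
    ChatMessage(role="user", content="Name one use of an LLM."),
]
print(llm.chat(messages).message.content)

# stream_chat(): yields incremental deltas as they arrive.
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
print()

# Async variants (achat, acomplete, astream_chat, astream_complete) mirror
# the synchronous API.
async def main() -> None:
    response = await llm.achat(messages)
    print(response.message.content)

asyncio.run(main())

# metadata reports model properties as an LLMMetadata object.
print(llm.metadata)
```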