Anthropic

pydantic model llama_index.llms.anthropic.Anthropic

JSON schema:
{
   "title": "Anthropic",
   "description": "LLM interface.",
   "type": "object",
   "properties": {
      "callback_manager": {
         "title": "Callback Manager"
      },
      "model": {
         "title": "Model",
         "description": "The anthropic model to use.",
         "type": "string"
      },
      "temperature": {
         "title": "Temperature",
         "description": "The temperature to use for sampling.",
         "type": "number"
      },
      "max_tokens": {
         "title": "Max Tokens",
         "description": "The maximum number of tokens to generate.",
         "type": "integer"
      },
      "base_url": {
         "title": "Base Url",
         "description": "The base URL to use.",
         "type": "string"
      },
      "timeout": {
         "title": "Timeout",
         "description": "The timeout to use in seconds.",
         "type": "number"
      },
      "max_retries": {
         "title": "Max Retries",
         "description": "The maximum number of API retries.",
         "default": 10,
         "type": "integer"
      },
      "additional_kwargs": {
         "title": "Additional Kwargs",
         "description": "Additonal kwargs for the anthropic API.",
         "type": "object"
      }
   },
   "required": [
      "model",
      "temperature",
      "max_tokens"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
Validators
  • _validate_callback_manager » callback_manager

field additional_kwargs: Dict[str, Any] [Optional]

Additional kwargs for the anthropic API.

field base_url: Optional[str] = None

The base URL to use.

field max_retries: int = 10

The maximum number of API retries.

field max_tokens: int [Required]

The maximum number of tokens to generate.

field model: str [Required]

The anthropic model to use.

field temperature: float [Required]

The temperature to use for sampling.

field timeout: Optional[float] = None

The timeout to use in seconds.

async achat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Async chat endpoint for LLM.

async acomplete(*args: Any, **kwargs: Any) → Any

Async completion endpoint for LLM.

async astream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Async streaming chat endpoint for LLM.

async astream_complete(*args: Any, **kwargs: Any) → Any

Async streaming completion endpoint for LLM.

chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Chat endpoint for LLM.

classmethod class_name() → str

Get class name.

complete(*args: Any, **kwargs: Any) → Any

Completion endpoint for LLM.

stream_chat(messages: Sequence[ChatMessage], **kwargs: Any) → Any

Streaming chat endpoint for LLM.

stream_complete(*args: Any, **kwargs: Any) → Any

Streaming completion endpoint for LLM.

property metadata: LLMMetadata

LLM metadata.