Prompt Templates

These are the reference prompt templates.

We first show links to the default prompts, then document the base prompt template class and its subclasses.

Prompt Classes

pydantic model llama_index.prompts.base.BasePromptTemplate

JSON schema:
{
   "title": "BasePromptTemplate",
   "type": "object",
   "properties": {
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "template_vars": {
         "title": "Template Vars",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "kwargs": {
         "title": "Kwargs",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "output_parser": {
         "title": "Output Parser"
      }
   },
   "required": [
      "metadata",
      "template_vars",
      "kwargs"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field kwargs: Dict[str, str] [Required]
field metadata: Dict[str, Any] [Required]
field output_parser: Optional[BaseOutputParser] = None
field template_vars: List[str] [Required]
abstract format(llm: Optional[LLM] = None, **kwargs: Any) → str
abstract format_messages(llm: Optional[LLM] = None, **kwargs: Any) → List[ChatMessage]
abstract get_template(llm: Optional[LLM] = None) → str
abstract partial_format(**kwargs: Any) → BasePromptTemplate
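
All of the concrete templates below implement this same four-method interface, so downstream code can accept any BasePromptTemplate and stay agnostic about the concrete subclass. A minimal sketch (the render_prompt helper is hypothetical, assuming the legacy llama_index.prompts package layout):

from typing import Any, Optional

from llama_index.llms.base import LLM
from llama_index.prompts.base import BasePromptTemplate


def render_prompt(
    template: BasePromptTemplate, llm: Optional[LLM] = None, **kwargs: Any
) -> str:
    # Hypothetical helper: works with any subclass, since they all share
    # format() / format_messages() / get_template() / partial_format().
    # Passing the LLM lets selector-style templates pick the right variant.
    return template.format(llm=llm, **kwargs)
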
pydantic model llama_index.prompts.base.PromptTemplate

JSON schema:
{
   "title": "PromptTemplate",
   "type": "object",
   "properties": {
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "template_vars": {
         "title": "Template Vars",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "kwargs": {
         "title": "Kwargs",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "output_parser": {
         "title": "Output Parser"
      },
      "template": {
         "title": "Template",
         "type": "string"
      }
   },
   "required": [
      "metadata",
      "template_vars",
      "kwargs",
      "template"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field template: str [Required]
format(llm: Optional[LLM] = None, **kwargs: Any) → str

Format the prompt into a string.

format_messages(llm: Optional[LLM] = None, **kwargs: Any) → List[ChatMessage]

Format the prompt into a list of chat messages.

get_template(llm: Optional[LLM] = None) → str
partial_format(**kwargs: Any) → PromptTemplate

Partially format the prompt.
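
A minimal usage sketch (the template text and variable values are illustrative):

from llama_index.prompts.base import PromptTemplate

qa_template = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context, answer the question: {query_str}\n"
)

# Fill every template variable at once.
prompt_str = qa_template.format(
    context_str="The sky is blue.", query_str="What color is the sky?"
)

# Or bind some variables now and supply the rest later.
partial_template = qa_template.partial_format(context_str="The sky is blue.")
prompt_str = partial_template.format(query_str="What color is the sky?")

# For chat models, render the same template as a list of chat messages.
messages = qa_template.format_messages(
    context_str="The sky is blue.", query_str="What color is the sky?"
)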

pydantic model llama_index.prompts.base.ChatPromptTemplate

JSON schema:
{
   "title": "ChatPromptTemplate",
   "type": "object",
   "properties": {
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "template_vars": {
         "title": "Template Vars",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "kwargs": {
         "title": "Kwargs",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "output_parser": {
         "title": "Output Parser"
      },
      "message_templates": {
         "title": "Message Templates",
         "type": "array",
         "items": {
            "$ref": "#/definitions/ChatMessage"
         }
      }
   },
   "required": [
      "metadata",
      "template_vars",
      "kwargs",
      "message_templates"
   ],
   "definitions": {
      "MessageRole": {
         "title": "MessageRole",
         "description": "Message role.",
         "enum": [
            "system",
            "user",
            "assistant",
            "function"
         ],
         "type": "string"
      },
      "ChatMessage": {
         "title": "ChatMessage",
         "description": "Chat message.",
         "type": "object",
         "properties": {
            "role": {
               "default": "user",
               "allOf": [
                  {
                     "$ref": "#/definitions/MessageRole"
                  }
               ]
            },
            "content": {
               "title": "Content",
               "default": "",
               "type": "string"
            },
            "additional_kwargs": {
               "title": "Additional Kwargs",
               "type": "object"
            }
         }
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field message_templates: List[ChatMessage] [Required]
format(llm: Optional[LLM] = None, **kwargs: Any) → str
format_messages(llm: Optional[LLM] = None, **kwargs: Any) → List[ChatMessage]
get_template(llm: Optional[LLM] = None) → str
partial_format(**kwargs: Any) → ChatPromptTemplate
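
A minimal usage sketch (message contents are illustrative; ChatMessage and MessageRole are imported here from the legacy llama_index.llms.base module):

from llama_index.llms.base import ChatMessage, MessageRole
from llama_index.prompts.base import ChatPromptTemplate

chat_template = ChatPromptTemplate(
    message_templates=[
        ChatMessage(
            role=MessageRole.SYSTEM, content="You are an expert on {topic}."
        ),
        ChatMessage(role=MessageRole.USER, content="{question}"),
    ]
)

# Render as chat messages for chat models...
messages = chat_template.format_messages(
    topic="prompt templates", question="What does partial_format do?"
)

# ...or flatten to a single string for completion models.
prompt_str = chat_template.format(
    topic="prompt templates", question="What does partial_format do?"
)
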
pydantic model llama_index.prompts.base.SelectorPromptTemplate

JSON schema:
{
   "title": "SelectorPromptTemplate",
   "type": "object",
   "properties": {
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "template_vars": {
         "title": "Template Vars",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "kwargs": {
         "title": "Kwargs",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "output_parser": {
         "title": "Output Parser"
      },
      "default_template": {
         "title": "Default Template"
      },
      "conditionals": {
         "title": "Conditionals"
      }
   },
   "required": [
      "metadata",
      "template_vars",
      "kwargs"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field conditionals: Optional[List[Tuple[Callable[[LLM], bool], BasePromptTemplate]]] = None
field default_template: BasePromptTemplate [Required]
format(llm: Optional[LLM] = None, **kwargs: Any) → str

Format the prompt into a string.

format_messages(llm: Optional[LLM] = None, **kwargs: Any) → List[ChatMessage]

Format the prompt into a list of chat messages.

get_template(llm: Optional[LLM] = None) → str
partial_format(**kwargs: Any) → SelectorPromptTemplate
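
A sketch of switching between a completion-style and a chat-style prompt based on the active LLM. The predicate is written inline and assumes the LLM metadata exposes an is_chat_model flag:

from llama_index.llms.base import ChatMessage, MessageRole
from llama_index.prompts.base import (
    ChatPromptTemplate,
    PromptTemplate,
    SelectorPromptTemplate,
)

text_template = PromptTemplate("Summarize the following text:\n{text}")
chat_template = ChatPromptTemplate(
    message_templates=[
        ChatMessage(role=MessageRole.SYSTEM, content="You are a concise summarizer."),
        ChatMessage(role=MessageRole.USER, content="Summarize:\n{text}"),
    ]
)

# Use the chat variant whenever the LLM reports itself as a chat model;
# otherwise fall back to the plain text template.
selector_template = SelectorPromptTemplate(
    default_template=text_template,
    conditionals=[(lambda llm: llm.metadata.is_chat_model, chat_template)],
)

# With no LLM passed, the default (text) template is used.
prompt_str = selector_template.format(text="Prompt templates parameterize prompts.")
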
pydantic model llama_index.prompts.base.LangchainPromptTemplate

JSON schema:
{
   "title": "LangchainPromptTemplate",
   "type": "object",
   "properties": {
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "template_vars": {
         "title": "Template Vars",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "kwargs": {
         "title": "Kwargs",
         "type": "object",
         "additionalProperties": {
            "type": "string"
         }
      },
      "output_parser": {
         "title": "Output Parser"
      },
      "selector": {
         "$ref": "#/definitions/ConditionalPromptSelector"
      }
   },
   "required": [
      "metadata",
      "template_vars",
      "kwargs",
      "selector"
   ],
   "definitions": {
      "BaseOutputParser": {
         "title": "BaseOutputParser",
         "description": "Base class to parse the output of an LLM call.\n\nOutput parsers help structure language model responses.\n\nExample:\n    .. code-block:: python\n\n        class BooleanOutputParser(BaseOutputParser[bool]):\n            true_val: str = \"YES\"\n            false_val: str = \"NO\"\n\n            def parse(self, text: str) -> bool:\n                cleaned_text = text.strip().upper()\n                if cleaned_text not in (self.true_val.upper(), self.false_val.upper()):\n                    raise OutputParserException(\n                        f\"BooleanOutputParser expected output value to either be \"\n                        f\"{self.true_val} or {self.false_val} (case-insensitive). \"\n                        f\"Received {cleaned_text}.\"\n                    )\n                return cleaned_text == self.true_val.upper()\n\n                @property\n                def _type(self) -> str:\n                        return \"boolean_output_parser\"",
         "type": "object",
         "properties": {}
      },
      "BasePromptTemplate": {
         "title": "BasePromptTemplate",
         "description": "Base class for all prompt templates, returning a prompt.",
         "type": "object",
         "properties": {
            "input_variables": {
               "title": "Input Variables",
               "type": "array",
               "items": {
                  "type": "string"
               }
            },
            "output_parser": {
               "$ref": "#/definitions/BaseOutputParser"
            }
         },
         "required": [
            "input_variables"
         ]
      },
      "ConditionalPromptSelector": {
         "title": "ConditionalPromptSelector",
         "description": "Prompt collection that goes through conditionals.",
         "type": "object",
         "properties": {
            "default_prompt": {
               "$ref": "#/definitions/BasePromptTemplate"
            }
         },
         "required": [
            "default_prompt"
         ]
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field selector: ConditionalPromptSelector [Required]
format(llm: Optional[LLM] = None, **kwargs: Any) → str

Format the prompt into a string.

format_messages(llm: Optional[LLM] = None, **kwargs: Any) → List[ChatMessage]

Format the prompt into a list of chat messages.

get_template(llm: Optional[LLM] = None) → str
partial_format(**kwargs: Any) → BasePromptTemplate

Partially format the prompt.
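
A sketch of adapting a Langchain prompt to the LlamaIndex template interface (this assumes langchain is installed and that the constructor accepts a ConditionalPromptSelector directly):

from langchain.chains.prompt_selector import ConditionalPromptSelector
from langchain.prompts import PromptTemplate as LangchainTemplate

from llama_index.prompts.base import LangchainPromptTemplate

lc_template = LangchainTemplate.from_template("Tell me a fact about {topic}.")

# A selector with no conditionals always returns the default prompt.
li_template = LangchainPromptTemplate(
    selector=ConditionalPromptSelector(default_prompt=lc_template)
)

prompt_str = li_template.format(topic="prompt templates")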

Subclass Prompts (deprecated)

These prompt subclasses are deprecated, but remain available for reference at this link.