Prompt Templates

These are the reference prompt templates.

We first link to the default prompts. We then document each core prompt class, along with its required template variables. Finally, we cover the base prompt class, a wrapper around LangChain's prompt templates.

Default Prompts

The list of default prompts can be found here.

NOTE: we’ve also curated a set of refine prompts for ChatGPT use cases. The list of ChatGPT refine prompts can be found here.

Prompts

All classes below subclass the base Prompt class.

class llama_index.prompts.prompts.KeywordExtractPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Keyword extract prompt.

Prompt to extract keywords from a body of text (text), with a maximum of max_keywords keywords.

Required template variables: text, max_keywords

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
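
For example, one might construct and format a keyword-extraction prompt as follows. This is a minimal sketch: the template text is illustrative, but the placeholders (Python str.format() style) must cover the required variables text and max_keywords.

    from llama_index.prompts.prompts import KeywordExtractPrompt

    # Illustrative template; the placeholders are the required variables.
    template = (
        "Extract up to {max_keywords} keywords from the text below, "
        "as a comma-separated list.\n"
        "Text: {text}\n"
        "Keywords: "
    )
    keyword_prompt = KeywordExtractPrompt(template=template)

    # format() substitutes the variables and returns the final prompt string.
    prompt_str = keyword_prompt.format(
        text="LlamaIndex connects LLMs to external data.",
        max_keywords=3,
    )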

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.KnowledgeGraphPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Knowledge graph triplet extraction prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.PandasPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Pandas prompt. Converts a natural language query into Python code run against a pandas dataframe.

Required template variables: query_str, df_str, instruction_str.

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
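
partial_format is handy here when one variable, such as instruction_str, is fixed ahead of time while the others change per query. A minimal sketch, with an illustrative template:

    from llama_index.prompts.prompts import PandasPrompt

    template = (
        "You are working with a pandas dataframe.\n"
        "{instruction_str}\n"
        "Dataframe:\n{df_str}\n"
        "Query: {query_str}\n"
        "Code: "
    )
    pandas_prompt = PandasPrompt(template=template)

    # Pin the instructions once; partial_format returns a new prompt
    # with instruction_str already filled in.
    partial_prompt = pandas_prompt.partial_format(
        instruction_str="Respond with a single line of Python."
    )
    prompt_str = partial_prompt.format(
        df_str="a,b\n1,2\n3,4",
        query_str="What is the mean of column a?",
    )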

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.QueryKeywordExtractPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Query keyword extract prompt.

Prompt to extract keywords from a query (query_str), with a maximum of max_keywords keywords.

Required template variables: query_str, max_keywords

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.QuestionAnswerPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Question Answer prompt.

Prompt to answer a question (query_str) given a context (context_str).

Required template variables: context_str, query_str

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
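
If you already have a LangChain prompt, from_langchain_prompt wraps it directly. A minimal sketch (the template text is illustrative; input_variables must match the required context_str and query_str):

    from langchain.prompts import PromptTemplate
    from llama_index.prompts.prompts import QuestionAnswerPrompt

    lc_prompt = PromptTemplate(
        input_variables=["context_str", "query_str"],
        template=(
            "Answer the question using only the context below.\n"
            "Context: {context_str}\n"
            "Question: {query_str}\n"
            "Answer: "
        ),
    )
    qa_prompt = QuestionAnswerPrompt.from_langchain_prompt(lc_prompt)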

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.RefinePrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Refine prompt.

Prompt to refine an existing answer (existing_answer) given a context (context_msg) and a query (query_str).

Required template variables: query_str, existing_answer, context_msg

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
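
This is where the curated ChatGPT refine prompts noted earlier come in: a ConditionalPromptSelector can serve one prompt to chat models and another to completion models, and format(llm=...) then picks the right variant. A minimal sketch, with illustrative template text:

    from langchain.chains.prompt_selector import (
        ConditionalPromptSelector,
        is_chat_model,
    )
    from langchain.prompts import PromptTemplate
    from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
    from llama_index.prompts.prompts import RefinePrompt

    text_template = (
        "Query: {query_str}\n"
        "Existing answer: {existing_answer}\n"
        "New context: {context_msg}\n"
        "Refine the existing answer using the new context.\n"
    )
    default_prompt = PromptTemplate(
        input_variables=["query_str", "existing_answer", "context_msg"],
        template=text_template,
    )
    # Chat models receive the same content as a single human message.
    chat_prompt = ChatPromptTemplate.from_messages(
        [HumanMessagePromptTemplate.from_template(text_template)]
    )
    selector = ConditionalPromptSelector(
        default_prompt=default_prompt,
        conditionals=[(is_chat_model, chat_prompt)],
    )
    refine_prompt = RefinePrompt.from_langchain_prompt_selector(selector)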

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.RefineTableContextPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Refine Table context prompt.

Prompt to refine a table context given a table schema (schema), unstructured text context (context_msg), and a task (query_str). The refined context includes both a high-level description of the table and a description of each column in the table.

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.SchemaExtractPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Schema extract prompt.

Prompt to extract a schema from unstructured text (text).

Required template variables: text, schema

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.SimpleInputPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Simple Input prompt.

Required template variables: query_str.

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
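
In the simplest case the template can pass the query straight through. A minimal sketch:

    from llama_index.prompts.prompts import SimpleInputPrompt

    simple_prompt = SimpleInputPrompt(template="{query_str}")
    prompt_str = simple_prompt.format(query_str="What did the author do growing up?")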

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.SummaryPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Summary prompt.

Prompt to summarize the provided context_str.

Required template variables: context_str

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
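
get_langchain_prompt goes the other direction from from_langchain_prompt: it exposes the underlying LangChain prompt, e.g. for reuse in a LangChain chain. A minimal sketch with an illustrative template:

    from llama_index.prompts.prompts import SummaryPrompt

    summary_prompt = SummaryPrompt(
        template="Summarize the following text.\n{context_str}\nSummary: "
    )
    # Returns the wrapped LangChain BasePromptTemplate.
    lc_prompt = summary_prompt.get_langchain_prompt()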

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.TableContextPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Table context prompt.

Prompt to generate a table context given a table schema (schema), unstructured text context (context_str), and a task (query_str). The generated context includes both a high-level description of the table and a description of each column in the table.

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.TextToSQLPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Text to SQL prompt.

Prompt to translate a natural language query into SQL, in a target dialect (dialect), given a table schema (schema).

Required template variables: query_str, schema, dialect

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
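
Since the SQL dialect is typically known up front, partial_format(dialect=...) pairs naturally with this prompt. A minimal sketch, with illustrative template text:

    from llama_index.prompts.prompts import TextToSQLPrompt

    template = (
        "Given the {dialect} schema below, write a SQL query for the task.\n"
        "Schema: {schema}\n"
        "Task: {query_str}\n"
        "SQL: "
    )
    sql_prompt = TextToSQLPrompt(template=template).partial_format(dialect="sqlite")
    prompt_str = sql_prompt.format(
        schema="CREATE TABLE users (id INTEGER, name TEXT)",
        query_str="Count the number of users",
    )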

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.TreeInsertPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Tree Insert prompt.

Prompt to insert a new chunk of text (new_chunk_text) into the tree index. More specifically, this prompt has the LLM select the relevant candidate child node to continue tree traversal.

Required template variables: num_chunks, context_list, new_chunk_text

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.TreeSelectMultiplePrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Tree select multiple prompt.

Prompt to select multiple candidate child nodes out of all child nodes provided in context_list, given a query (query_str). branching_factor refers to the number of child nodes to select, and num_chunks is the number of child nodes in context_list.

Required template variables: num_chunks, context_list, query_str, branching_factor

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

class llama_index.prompts.prompts.TreeSelectPrompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Tree select prompt.

Prompt to select a candidate child node out of all child nodes provided in context_list, given a query (query_str). num_chunks is the number of child nodes in context_list.

Required template variables: num_chunks, context_list, query_str

Parameters
  • template (str) – Template for the prompt.

  • **prompt_kwargs – Keyword arguments for the prompt.
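
For instance, the formatted prompt might present a numbered list of child-node summaries. A minimal sketch, with illustrative template and context:

    from llama_index.prompts.prompts import TreeSelectPrompt

    template = (
        "Below are {num_chunks} numbered summaries.\n"
        "{context_list}\n"
        "Return the number of the summary most relevant to: {query_str}\n"
    )
    select_prompt = TreeSelectPrompt(template=template)
    prompt_str = select_prompt.format(
        num_chunks=2,
        context_list="(1) A chapter on index construction\n(2) A chapter on querying",
        query_str="How do I build a tree index?",
    )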

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.

Base Prompt Class

Prompt class.

class llama_index.prompts.Prompt(template: Optional[str] = None, langchain_prompt: Optional[BasePromptTemplate] = None, langchain_prompt_selector: Optional[ConditionalPromptSelector] = None, stop_token: Optional[str] = None, output_parser: Optional[BaseOutputParser] = None, **prompt_kwargs: Any)

Prompt class for LlamaIndex.

Wrapper around LangChain's prompt class. As illustrated in the sketch after this list, it adds the ability to:
  • enforce certain prompt types

  • partially fill values

  • define stop token
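
The sketch below exercises these features, including the from_prompt use case documented further down: a generic, partially filled Prompt is rebound to a stricter prompt class once the remaining fields match its requirements. The template text and the persona variable are illustrative.

    from llama_index.prompts import Prompt
    from llama_index.prompts.prompts import QuestionAnswerPrompt

    # A generic prompt with an extra variable and a stop token
    # marking where generation should stop.
    prompt = Prompt(
        template=(
            "You are {persona}.\n"
            "Context: {context_str}\n"
            "Question: {query_str}\n"
            "Answer: "
        ),
        stop_token="\n\n",
    )

    # Fill persona up front; the remaining fields (context_str, query_str)
    # are exactly what QuestionAnswerPrompt requires.
    partial_prompt = prompt.partial_format(persona="a careful analyst")
    qa_prompt = QuestionAnswerPrompt.from_prompt(partial_prompt)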

format(llm: Optional[BaseLanguageModel] = None, **kwargs: Any) str

Format the prompt.

classmethod from_langchain_prompt(prompt: BasePromptTemplate, **kwargs: Any) PMT

Load prompt from LangChain prompt.

classmethod from_langchain_prompt_selector(prompt_selector: ConditionalPromptSelector, **kwargs: Any) PMT

Load prompt from LangChain prompt selector.

classmethod from_prompt(prompt: Prompt, llm: Optional[BaseLanguageModel] = None) PMT

Create a prompt from an existing prompt.

Use case: If the existing prompt is already partially filled, and the remaining fields satisfy the requirements of the prompt class, then we can create a new prompt from the existing partially filled prompt.

get_langchain_prompt(llm: Optional[BaseLanguageModel] = None) BasePromptTemplate

Get langchain prompt.

partial_format(**kwargs: Any) PMT

Format the prompt partially.

Return an instance of itself.