PromptHelper

General prompt helper for working within token limitations.

The helper can split text. It can also concatenate text from Node structs while keeping token limitations in mind.

class llama_index.indices.prompt_helper.PromptHelper(max_input_size: int, num_output: int, max_chunk_overlap: int, embedding_limit: Optional[int] = None, chunk_size_limit: Optional[int] = None, tokenizer: Optional[Callable[[str], List]] = None, separator: str = ' ')

Prompt helper.

This utility helps us fill in the prompt, split the text, and fill in context information according to necessary token limitations.

Parameters
  • max_input_size (int) – Maximum input size for the LLM.

  • num_output (int) – Number of output tokens reserved for the LLM.

  • max_chunk_overlap (int) – Maximum chunk overlap for the LLM.

  • embedding_limit (Optional[int]) – Maximum number of embeddings to use.

  • chunk_size_limit (Optional[int]) – Maximum chunk size to use.

  • tokenizer (Optional[Callable[[str], List]]) – Tokenizer to use.
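
For example, a helper can be constructed directly (the limit values below are illustrative, not recommended defaults):

    from llama_index.indices.prompt_helper import PromptHelper

    # Reserve 256 tokens for the LLM response out of a 4096-token context window,
    # with at most 20 tokens of overlap between adjacent chunks.
    prompt_helper = PromptHelper(
        max_input_size=4096,
        num_output=256,
        max_chunk_overlap=20,
        chunk_size_limit=600,  # optional hard cap on each chunk
    )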

compact_text_chunks(prompt: Prompt, text_chunks: Sequence[str]) → List[str]

Compact text chunks.

This will combine text chunks into consolidated chunks that more fully “pack” the prompt template given the max_input_size.
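
A minimal sketch, assuming QuestionAnswerPrompt is importable from llama_index.prompts.prompts as in older releases and a prompt_helper built as above; the template text is illustrative:

    from llama_index.prompts.prompts import QuestionAnswerPrompt

    qa_prompt = QuestionAnswerPrompt(
        "Context information is below.\n"
        "---------------------\n"
        "{context_str}\n"
        "---------------------\n"
        "Given the context, answer the question: {query_str}\n"
    )

    # Merge many small chunks into as few chunks as will fit alongside the
    # prompt template within max_input_size (leaving room for num_output).
    compacted = prompt_helper.compact_text_chunks(
        qa_prompt, ["chunk one", "chunk two", "chunk three"]
    )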

classmethod from_llm_predictor(llm_predictor: LLMPredictor, max_chunk_overlap: Optional[int] = None, embedding_limit: Optional[int] = None, chunk_size_limit: Optional[int] = None, tokenizer: Optional[Callable[[str], List]] = None) → PromptHelper

Create from an LLM predictor.

This will autofill values like max_input_size and num_output.
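
A sketch, assuming an LLMPredictor wrapping a LangChain OpenAI LLM, as in older llama_index releases:

    from langchain.llms import OpenAI
    from llama_index import LLMPredictor
    from llama_index.indices.prompt_helper import PromptHelper

    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))

    # max_input_size and num_output are derived from the predictor's LLM metadata;
    # only the overlap (and any optional limits) are supplied here.
    prompt_helper = PromptHelper.from_llm_predictor(llm_predictor, max_chunk_overlap=20)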

get_biggest_prompt(prompts: List[Prompt]) → Prompt

Get biggest prompt.

Oftentimes we need to fetch the biggest prompt in order to be maximally conservative when chunking text. This is a helper utility for that.
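
For example, reusing the illustrative QuestionAnswerPrompt and prompt_helper from the sketches above:

    # The prompt with the longest template leaves the least room for context,
    # so chunk sizes derived from it are safe for every prompt in the list.
    short_prompt = QuestionAnswerPrompt("{context_str}\n{query_str}\n")
    long_prompt = QuestionAnswerPrompt(
        "Context information is below.\n{context_str}\nAnswer the question: {query_str}\n"
    )
    biggest = prompt_helper.get_biggest_prompt([short_prompt, long_prompt])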

get_chunk_size_given_prompt(prompt_text: str, num_chunks: int, padding: Optional[int] = 1) → int

Get the chunk size, making sure the prompt can also fit in.

Chunk size is computed based on a function of the total input size, the prompt length, the number of outputs, and the number of chunks.

If padding is specified, then we subtract that from the chunk size. By default we assume there is a padding of 1 (for the newline between chunks).

Limit by embedding_limit and chunk_size_limit if specified.
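
A sketch with an illustrative prompt string (prompt_helper built as above); the comment paraphrases the sizing logic described here:

    # Roughly: (max_input_size - prompt tokens - num_output) / num_chunks - padding,
    # then capped by embedding_limit / chunk_size_limit if those were set.
    chunk_size = prompt_helper.get_chunk_size_given_prompt(
        "Given the context below, answer the question:\n{context_str}\n{query_str}\n",
        num_chunks=2,
    )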

get_numbered_text_from_nodes(node_list: List[Node], prompt: Optional[Prompt] = None) → str

Get text from nodes in the format of a numbered list.

Used by tree-structured indices.

get_text_from_nodes(node_list: List[Node], prompt: Optional[Prompt] = None) → str

Get text from nodes. Used by tree-structured indices.
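
A sketch of both node helpers, assuming Node can be constructed with just a text field and imported from llama_index.data_structs.data_structs as in older releases (the import path differs across versions):

    from llama_index.data_structs.data_structs import Node

    nodes = [Node(text="First passage."), Node(text="Second passage.")]

    # Concatenate node text, keeping token limitations in mind when a prompt is supplied.
    text = prompt_helper.get_text_from_nodes(nodes)

    # The same content rendered as a numbered list, as used by tree-structured index prompts.
    numbered = prompt_helper.get_numbered_text_from_nodes(nodes)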

get_text_splitter_given_prompt(prompt: Prompt, num_chunks: int, padding: Optional[int] = 1) → TokenTextSplitter

Get text splitter given initial prompt.

Allows us to get a text splitter that will split up text according to the desired chunk size.
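
A sketch continuing the earlier examples (qa_prompt and prompt_helper as defined above):

    # Build a splitter whose chunk size leaves room for the prompt template,
    # num_output, and the requested number of chunks per prompt.
    text_splitter = prompt_helper.get_text_splitter_given_prompt(qa_prompt, num_chunks=2)

    long_text = "Some long document text to be split. " * 200
    text_chunks = text_splitter.split_text(long_text)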