PromptHelper#

General prompt helper that can help deal with LLM context window token limitations.

At its core, it calculates the available context size by starting with the context window size of an LLM and reserving token space for the prompt template and the output.

It provides utilities for “repacking” text chunks (retrieved from an index) to make maximal use of the available context window (thereby reducing the number of LLM calls needed), or for truncating them so that they fit in a single LLM call.

pydantic model llama_index.core.indices.prompt_helper.PromptHelper#

Prompt helper.

General prompt helper that can help deal with LLM context window token limitations.

At its core, it calculates the available context size by starting with the context window size of an LLM and reserving token space for the prompt template and the output.

It provides utilities for “repacking” text chunks (retrieved from an index) to make maximal use of the available context window (thereby reducing the number of LLM calls needed), or for truncating them so that they fit in a single LLM call.

Parameters
  • context_window (int) – Context window for the LLM.

  • num_output (int) – Number of output tokens to reserve for the LLM.

  • chunk_overlap_ratio (float) – Chunk overlap as a ratio of chunk size.

  • chunk_size_limit (Optional[int]) – Maximum chunk size to use.

  • tokenizer (Optional[Callable[[str], List]]) – Tokenizer to use.

  • separator (str) – Separator for the text splitter.
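
As a brief illustration, here is a minimal sketch of constructing the helper directly. The values shown mirror the documented defaults; the tokenizer is left unset so the global default tokenizer is used.

from llama_index.core.indices.prompt_helper import PromptHelper

# Minimal sketch: explicit token budgets (these values are the documented defaults).
prompt_helper = PromptHelper(
    context_window=3900,      # total token budget of the target LLM
    num_output=256,           # tokens reserved for the generated output
    chunk_overlap_ratio=0.1,  # overlap between adjacent chunks, as a fraction of chunk size
    chunk_size_limit=None,    # optional hard cap on chunk size
    separator=" ",            # separator used when splitting text
)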

JSON schema
{
   "title": "PromptHelper",
   "description": "Prompt helper.\n\nGeneral prompt helper that can help deal with LLM context window token limitations.\n\nAt its core, it calculates available context size by starting with the context\nwindow size of an LLM and reserve token space for the prompt template, and the\noutput.\n\nIt provides utility for \"repacking\" text chunks (retrieved from index) to maximally\nmake use of the available context window (and thereby reducing the number of LLM\ncalls needed), or truncating them so that they fit in a single LLM call.\n\nArgs:\n    context_window (int):                   Context window for the LLM.\n    num_output (int):                       Number of outputs for the LLM.\n    chunk_overlap_ratio (float):            Chunk overlap as a ratio of chunk size\n    chunk_size_limit (Optional[int]):         Maximum chunk size to use.\n    tokenizer (Optional[Callable[[str], List]]): Tokenizer to use.\n    separator (str):                        Separator for text splitter",
   "type": "object",
   "properties": {
      "context_window": {
         "title": "Context Window",
         "description": "The maximum context size that will get sent to the LLM.",
         "default": 3900,
         "type": "integer"
      },
      "num_output": {
         "title": "Num Output",
         "description": "The amount of token-space to leave in input for generation.",
         "default": 256,
         "type": "integer"
      },
      "chunk_overlap_ratio": {
         "title": "Chunk Overlap Ratio",
         "description": "The percentage token amount that each chunk should overlap.",
         "default": 0.1,
         "type": "number"
      },
      "chunk_size_limit": {
         "title": "Chunk Size Limit",
         "description": "The maximum size of a chunk.",
         "type": "integer"
      },
      "separator": {
         "title": "Separator",
         "description": "The separator when chunking tokens.",
         "default": " ",
         "type": "string"
      },
      "class_name": {
         "title": "Class Name",
         "type": "string",
         "default": "PromptHelper"
      }
   }
}

Config
  • schema_extra: function = <function BaseComponent.Config.schema_extra>

Fields
field chunk_overlap_ratio: float = 0.1#

The percentage token amount that each chunk should overlap.

field chunk_size_limit: Optional[int] = None#

The maximum size of a chunk.

field context_window: int = 3900#

The maximum context size that will get sent to the LLM.

field num_output: int = 256#

The amount of token-space to leave in input for generation.

field separator: str = ' '#

The separator when chunking tokens.

classmethod class_name() str#

Get the class name, used as a unique ID in serialization.

This provides a key that makes serialization robust against actual class name changes.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) Dict[str, Any]#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self#
classmethod from_json(data_str: str, **kwargs: Any) Self#
classmethod from_llm_metadata(llm_metadata: LLMMetadata, chunk_overlap_ratio: float = 0.1, chunk_size_limit: Optional[int] = None, tokenizer: Optional[Callable[[str], List]] = None, separator: str = ' ') PromptHelper#

Create from LLM metadata.

This will autofill values like context_window and num_output.
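
A sketch of typical usage, assuming the llama-index-llms-openai integration is installed; any LLM exposing an LLMMetadata object via .metadata works the same way.

from llama_index.core.indices.prompt_helper import PromptHelper
from llama_index.llms.openai import OpenAI  # assumption: OpenAI integration package installed

llm = OpenAI(model="gpt-3.5-turbo")

# context_window and num_output are autofilled from llm.metadata.
prompt_helper = PromptHelper.from_llm_metadata(
    llm.metadata,
    chunk_overlap_ratio=0.1,
)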

classmethod from_orm(obj: Any) Model#
get_text_splitter_given_prompt(prompt: BasePromptTemplate, num_chunks: int = 1, padding: int = 5, llm: Optional[LLM] = None) TokenTextSplitter#

Get a text splitter configured to maximally pack the available context window, taking into account the given prompt and the desired number of chunks.
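
For example (a sketch reusing the prompt_helper from the earlier sketches; the template text and num_chunks value are illustrative), the returned TokenTextSplitter can then split raw text so the pieces fit the remaining budget.

from llama_index.core import PromptTemplate

qa_prompt = PromptTemplate(
    "Context information:\n{context_str}\n\nAnswer the question: {query_str}\n"
)

# Budget the context window for two chunks per call, after accounting for the prompt tokens.
splitter = prompt_helper.get_text_splitter_given_prompt(qa_prompt, num_chunks=2)
sub_chunks = splitter.split_text("... some long retrieved text ...")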

json(**kwargs: Any) str#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod parse_obj(obj: Any) Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
repack(prompt: BasePromptTemplate, text_chunks: Sequence[str], padding: int = 5, llm: Optional[LLM] = None) List[str]#

Repack text chunks to fit the available context window.

This will combine text chunks into consolidated chunks that more fully “pack” the prompt template given the context_window.
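
A sketch of repacking, reusing the prompt_helper and qa_prompt objects from the sketches above; the chunk strings are placeholders.

text_chunks = [
    "First retrieved passage ...",
    "Second retrieved passage ...",
    "Third retrieved passage ...",
]

# Small chunks are merged into as few consolidated chunks as fit alongside the prompt.
packed_chunks = prompt_helper.repack(qa_prompt, text_chunks=text_chunks)
print(len(packed_chunks))  # typically fewer chunks than len(text_chunks)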

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode#
to_dict(**kwargs: Any) Dict[str, Any]#
to_json(**kwargs: Any) str#
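
For instance (a minimal sketch reusing prompt_helper from above), to_dict/to_json and from_dict/from_json allow the component to be round-tripped through plain dict or JSON data.

data = prompt_helper.to_dict()            # includes "class_name": "PromptHelper"
restored = PromptHelper.from_dict(data)

json_str = prompt_helper.to_json()
restored_from_json = PromptHelper.from_json(json_str)
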
truncate(prompt: BasePromptTemplate, text_chunks: Sequence[str], padding: int = 5, llm: Optional[LLM] = None) List[str]#

Truncate text chunks to fit the available context window.
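
A sketch of truncation with the same assumed objects as above; unlike repack, each input chunk is kept separate and shortened so that all of them fit in a single LLM call.

# Each chunk is individually truncated; the result has one string per input chunk.
truncated_chunks = prompt_helper.truncate(qa_prompt, text_chunks=text_chunks)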

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model#