LLM Predictors#

pydantic model llama_index.llm_predictor.LLMPredictor#

LLM predictor class.

A lightweight wrapper on top of LLMs that handles:
  • conversion of prompts to the string input format expected by LLMs

  • logging of prompts and responses to a callback manager

NOTE: Mostly keeping around for legacy reasons. A potential future path is to deprecate this class and move all functionality into the LLM class.
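
For example, a minimal construction sketch, assuming the legacy llama_index.llms module for the wrapped model (the model name is illustrative):

from llama_index.llms import OpenAI
from llama_index.llm_predictor import LLMPredictor

# Wrap a concrete LLM; the predictor converts prompts to the input format
# the LLM expects and logs prompts/responses to the callback manager.
llm_predictor = LLMPredictor(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0),
    system_prompt="You are a concise assistant.",  # optional (see Fields below)
)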

JSON schema:
{
   "title": "LLMPredictor",
   "description": "LLM predictor class.\n\nA lightweight wrapper on top of LLMs that handles:\n- conversion of prompts to the string input format expected by LLMs\n- logging of prompts and responses to a callback manager\n\nNOTE: Mostly keeping around for legacy reasons. A potential future path is to\ndeprecate this class and move all functionality into the LLM class.",
   "type": "object",
   "properties": {
      "system_prompt": {
         "title": "System Prompt",
         "type": "string"
      },
      "query_wrapper_prompt": {
         "title": "Query Wrapper Prompt"
      },
      "pydantic_program_mode": {
         "default": "default",
         "allOf": [
            {
               "$ref": "#/definitions/PydanticProgramMode"
            }
         ]
      },
      "class_name": {
         "title": "Class Name",
         "type": "string",
         "default": "LLMPredictor"
      }
   },
   "definitions": {
      "PydanticProgramMode": {
         "title": "PydanticProgramMode",
         "description": "Pydantic program mode.",
         "enum": [
            "default",
            "openai",
            "llm",
            "guidance",
            "lm-format-enforcer"
         ],
         "type": "string"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • pydantic_program_mode (llama_index.types.PydanticProgramMode)

  • query_wrapper_prompt (Optional[llama_index.prompts.base.BasePromptTemplate])

  • system_prompt (Optional[str])

field pydantic_program_mode: PydanticProgramMode = PydanticProgramMode.DEFAULT#
field query_wrapper_prompt: Optional[BasePromptTemplate] = None#
field system_prompt: Optional[str] = None#
async apredict(prompt: BasePromptTemplate, output_cls: Optional[BaseModel] = None, **prompt_args: Any) → str#

Async predict.
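
A minimal async sketch, reusing the llm_predictor instance constructed above (the prompt text is illustrative):

import asyncio

from llama_index.prompts import PromptTemplate

async def main() -> None:
    prompt = PromptTemplate("Summarize the following text:\n{text}")
    # prompt_args (here: text=...) fill the template's variables
    answer = await llm_predictor.apredict(prompt, text="LlamaIndex connects LLMs to data.")
    print(answer)

asyncio.run(main())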

async astream(prompt: BasePromptTemplate, output_cls: Optional[BaseModel] = None, **prompt_args: Any) → AsyncGenerator[str, None]#

Async stream.
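
Note that astream is itself a coroutine, so the returned async generator must be awaited before iteration; a sketch under the same assumptions as the apredict example:

async def stream_tokens() -> None:
    prompt = PromptTemplate("Tell a short story about {topic}.")
    gen = await llm_predictor.astream(prompt, topic="a lighthouse")
    async for delta in gen:  # AsyncGenerator[str, None]
        print(delta, end="", flush=True)

asyncio.run(stream_tokens())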

classmethod class_name() → str#

Get the class name, used as a unique ID in serialization.

This provides a key that makes serialization robust against actual class name changes.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance
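
For example (standard pydantic v1 behavior; the override value is illustrative):

# values passed via update are not validated, so only pass trusted data
casual = llm_predictor.copy(update={"system_prompt": "Answer playfully."})
assert casual.system_prompt == "Answer playfully."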

dict(**kwargs: Any) → Dict[str, Any]#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) → Self#
classmethod from_json(data_str: str, **kwargs: Any) → Self#
classmethod from_orm(obj: Any) → Model#
json(**kwargs: Any) → str#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model#
classmethod parse_obj(obj: Any) → Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model#
predict(prompt: BasePromptTemplate, output_cls: Optional[BaseModel] = None, **prompt_args: Any) → str#

Predict.
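
A minimal synchronous sketch (template text is illustrative); prompt_args fill the template's variables and the return value is the plain completion string:

from llama_index.prompts import PromptTemplate

qa_prompt = PromptTemplate("Answer the question: {question}")
answer = llm_predictor.predict(qa_prompt, question="What is a vector index?")
print(answer)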

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode#
stream(prompt: BasePromptTemplate, output_cls: Optional[BaseModel] = None, **prompt_args: Any) → Generator[str, None, None]#

Stream.
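
The synchronous counterpart of astream; iterating the generator yields completion deltas as they arrive (reusing qa_prompt from the predict sketch):

for delta in llm_predictor.stream(qa_prompt, question="What is a vector index?"):
    print(delta, end="", flush=True)  # Generator[str, None, None]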

to_dict(**kwargs: Any) → Dict[str, Any]#
to_json(**kwargs: Any) → str#
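
to_dict and from_dict give a serialization round trip keyed by the class_name() value described above (whether the wrapped LLM survives the round trip depends on its own serializability):

state = llm_predictor.to_dict()
assert state["class_name"] == "LLMPredictor"
restored = LLMPredictor.from_dict(state)
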
classmethod update_forward_refs(**localns: Any) → None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) → Model#
property callback_manager: CallbackManager#

Get callback manager.

property llm: LLM#

Get LLM.

property metadata: LLMMetadata#

Get LLM metadata.
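
These properties expose the wrapped objects directly, for example (LLMMetadata attribute name per the llama_index definition):

print(llm_predictor.llm)                  # the wrapped LLM instance
print(llm_predictor.metadata.model_name)  # one LLMMetadata field
print(llm_predictor.callback_manager)     # receives prompt/response events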

pydantic model llama_index.llm_predictor.StructuredLLMPredictor#

Structured LLM predictor class.

Parameters

llm_predictor (BaseLLMPredictor) – LLM Predictor to use.
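
A minimal usage sketch: when the prompt template carries an output parser, the raw completion is run through it; otherwise the plain string is returned. The template text is illustrative, and constructing with no arguments assumes a resolvable default LLM (e.g. an OpenAI key in the environment):

from llama_index.llm_predictor import StructuredLLMPredictor
from llama_index.prompts import PromptTemplate

predictor = StructuredLLMPredictor()

# Without an output_parser on the template this behaves like LLMPredictor.predict;
# attach a parser (e.g. from llama_index.output_parsers) for structured output.
prompt = PromptTemplate("Extract the main topic: {text}")
print(predictor.predict(prompt, text="LlamaIndex connects LLMs to data."))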

JSON schema:
{
   "title": "StructuredLLMPredictor",
   "description": "Structured LLM predictor class.\n\nArgs:\n    llm_predictor (BaseLLMPredictor): LLM Predictor to use.",
   "type": "object",
   "properties": {
      "system_prompt": {
         "title": "System Prompt",
         "type": "string"
      },
      "query_wrapper_prompt": {
         "title": "Query Wrapper Prompt"
      },
      "pydantic_program_mode": {
         "default": "default",
         "allOf": [
            {
               "$ref": "#/definitions/PydanticProgramMode"
            }
         ]
      },
      "class_name": {
         "title": "Class Name",
         "type": "string",
         "default": "StructuredLLMPredictor"
      }
   },
   "definitions": {
      "PydanticProgramMode": {
         "title": "PydanticProgramMode",
         "description": "Pydantic program mode.",
         "enum": [
            "default",
            "openai",
            "llm",
            "guidance",
            "lm-format-enforcer"
         ],
         "type": "string"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • pydantic_program_mode (llama_index.types.PydanticProgramMode)

  • query_wrapper_prompt (Optional[llama_index.prompts.base.BasePromptTemplate])

  • system_prompt (Optional[str])

field pydantic_program_mode: PydanticProgramMode = PydanticProgramMode.DEFAULT#
field query_wrapper_prompt: Optional[BasePromptTemplate] = None#
field system_prompt: Optional[str] = None#
async apredict(prompt: BasePromptTemplate, output_cls: Optional[Any] = None, **prompt_args: Any) → str#

Async predict the answer to a query.

Parameters

prompt (BasePromptTemplate) – BasePromptTemplate to use for prediction.

Returns

The predicted answer.

Return type

str

async astream(prompt: BasePromptTemplate, output_cls: Optional[BaseModel] = None, **prompt_args: Any) → AsyncGenerator[str, None]#

Async stream.

classmethod class_name() → str#

Get the class name, used as a unique ID in serialization.

This provides a key that makes serialization robust against actual class name changes.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(**kwargs: Any) → Dict[str, Any]#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) → Self#
classmethod from_json(data_str: str, **kwargs: Any) → Self#
classmethod from_orm(obj: Any) → Model#
json(**kwargs: Any) → str#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model#
classmethod parse_obj(obj: Any) → Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model#
predict(prompt: BasePromptTemplate, output_cls: Optional[Any] = None, **prompt_args: Any) → str#

Predict the answer to a query.

Parameters

prompt (BasePromptTemplate) – BasePromptTemplate to use for prediction.

Returns

The predicted answer.

Return type

str

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode#
stream(prompt: BasePromptTemplate, output_cls: Optional[Any] = None, **prompt_args: Any) → Generator[str, None, None]#

Stream the answer to a query.

NOTE: This is a beta feature. Better abstractions for response handling may be built or adopted in the future.

Parameters

prompt (BasePromptTemplate) – BasePromptTemplate to use for prediction.

Returns

The predicted answer.

Return type

str

to_dict(**kwargs: Any) → Dict[str, Any]#
to_json(**kwargs: Any) → str#
classmethod update_forward_refs(**localns: Any) → None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) → Model#
property callback_manager: CallbackManager#

Get callback manager.

property llm: LLM#

Get LLM.

property metadata: LLMMetadata#

Get LLM metadata.