Langchain Integrations

Agent Tools + Functions

LlamaIndex integration with Langchain agents.

pydantic model llama_index.langchain_helpers.agents.IndexToolConfig

Configuration for LlamaIndex index tool.

JSON schema:
{
   "title": "IndexToolConfig",
   "description": "Configuration for LlamaIndex index tool.",
   "type": "object",
   "properties": {
      "query_engine": {
         "title": "Query Engine"
      },
      "name": {
         "title": "Name",
         "type": "string"
      },
      "description": {
         "title": "Description",
         "type": "string"
      },
      "tool_kwargs": {
         "title": "Tool Kwargs",
         "type": "object"
      }
   },
   "required": [
      "name",
      "description"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • description (str)

  • name (str)

  • query_engine (llama_index.indices.query.base.BaseQueryEngine)

  • tool_kwargs (Dict)

field description: str [Required]
field name: str [Required]
field query_engine: BaseQueryEngine [Required]
field tool_kwargs: Dict [Optional]
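
For example, a config can be built around any existing query engine (a minimal sketch; the index variable is assumed to be a pre-built LlamaIndex index, and the name, description, and tool_kwargs values are illustrative):

from llama_index.langchain_helpers.agents import IndexToolConfig

# index is assumed to be a previously built LlamaIndex index,
# e.g. VectorStoreIndex.from_documents(documents).
query_engine = index.as_query_engine()

tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="docs_index",
    description="Useful for answering questions about the indexed documents.",
    tool_kwargs={"return_direct": False},
)
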
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
pydantic model llama_index.langchain_helpers.agents.LlamaIndexTool

Tool for querying a LlamaIndex.

JSON schema:
{
   "title": "LlamaIndexTool",
   "description": "Tool for querying a LlamaIndex.",
   "type": "object",
   "properties": {
      "name": {
         "title": "Name",
         "type": "string"
      },
      "description": {
         "title": "Description",
         "type": "string"
      },
      "args_schema": {
         "title": "Args Schema"
      },
      "return_direct": {
         "title": "Return Direct",
         "default": false,
         "type": "boolean"
      },
      "verbose": {
         "title": "Verbose",
         "default": false,
         "type": "boolean"
      },
      "callbacks": {
         "title": "Callbacks"
      },
      "callback_manager": {
         "title": "Callback Manager"
      },
      "tags": {
         "title": "Tags",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "query_engine": {
         "title": "Query Engine"
      },
      "return_sources": {
         "title": "Return Sources",
         "default": false,
         "type": "boolean"
      }
   },
   "required": [
      "name",
      "description"
   ],
   "additionalProperties": false
}

Config
  • arbitrary_types_allowed: bool = True

  • extra: Extra = Extra.forbid

Fields
  • args_schema (Optional[Type[pydantic.main.BaseModel]])

  • callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager])

  • callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]])

  • description (str)

  • handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]])

  • metadata (Optional[Dict[str, Any]])

  • name (str)

  • query_engine (llama_index.indices.query.base.BaseQueryEngine)

  • return_direct (bool)

  • return_sources (bool)

  • tags (Optional[List[str]])

  • verbose (bool)

field args_schema: Optional[Type[BaseModel]] = None

Pydantic model class to validate and parse the tool’s input arguments.

Validated by
  • raise_deprecation

field callback_manager: Optional[BaseCallbackManager] = None

Deprecated. Please use callbacks instead.

Validated by
  • raise_deprecation

field callbacks: Callbacks = None

Callbacks to be called during tool execution.

Validated by
  • raise_deprecation

field description: str [Required]

Used to tell the model how/when/why to use the tool.

You can provide few-shot examples as a part of the description.

Validated by
  • raise_deprecation

field handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False

Handle the content of the ToolException thrown.

Validated by
  • raise_deprecation

field metadata: Optional[Dict[str, Any]] = None

Optional metadata associated with the tool. Defaults to None. This metadata will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a tool with its use case.

Validated by
  • raise_deprecation

field name: str [Required]

The unique name of the tool that clearly communicates its purpose.

Validated by
  • raise_deprecation

field query_engine: BaseQueryEngine [Required]
Validated by
  • raise_deprecation

field return_direct: bool = False

Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping.

Validated by
  • raise_deprecation

field return_sources: bool = False
Validated by
  • raise_deprecation

field tags: Optional[List[str]] = None

Optional list of tags associated with the tool. Defaults to None. These tags will be associated with each call to this tool, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a tool with its use case.

Validated by
  • raise_deprecation

field verbose: bool = False

Whether to log the tool’s progress.

Validated by
  • raise_deprecation

async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation of abatch, which calls ainvoke N times. Subclasses should override this method if they can batch more efficiently.

async ainvoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) Any

Default implementation of ainvoke, which calls invoke in a thread pool. Subclasses should override this method if they can run asynchronously.

async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) Any

Run the tool asynchronously.

async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) AsyncIterator[Output]

Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.

async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) AsyncIterator[RunLogPatch]

Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc.

Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.

The jsonpatch ops can be applied in order to construct state.
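
For example, consuming the patch stream (a sketch; the query string is illustrative and an event loop is required):

import asyncio

async def main() -> None:
    # Each patch is a RunLogPatch whose jsonpatch ops, applied in order,
    # reconstruct the state of the run.
    async for patch in tool.astream_log("What did the author do growing up?"):
        print(patch.ops)

asyncio.run(main())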

async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) AsyncIterator[Output]

Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.

batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) List[Output]

Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.

bind(**kwargs: Any) Runnable[Input, Output]

Bind arguments to a Runnable, returning a new Runnable.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model
classmethod from_tool_config(tool_config: IndexToolConfig) LlamaIndexTool

Create a tool from a tool config.
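
For example (continuing the IndexToolConfig sketch above):

from llama_index.langchain_helpers.agents import LlamaIndexTool

tool = LlamaIndexTool.from_tool_config(tool_config)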

invoke(input: Union[str, Dict], config: Optional[RunnableConfig] = None, **kwargs: Any) Any
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

map() Runnable[List[Input], List[Output]]

Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
validator raise_deprecation  Β»  all fields

Raise deprecation warning if callback_manager is used.

run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, **kwargs: Any) Any

Run the tool.
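
For example, querying the index through the tool (the query string is illustrative):

response = tool.run("What did the author do growing up?")
print(response)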

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) Iterator[Output]

Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.

transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) Iterator[Output]

Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) Runnable[Input, Output]

Bind config to a Runnable, returning a new Runnable.

with_fallbacks(fallbacks: ~typing.Sequence[~langchain.schema.runnable.base.Runnable[~langchain.schema.runnable.utils.Input, ~langchain.schema.runnable.utils.Output]], *, exceptions_to_handle: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,)) RunnableWithFallbacks[Input, Output]
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) Runnable[Input, Output]
property args: dict
property is_single_input: bool

Whether the tool only accepts a single input.

pydantic model llama_index.langchain_helpers.agents.LlamaToolkit

Toolkit for interacting with Llama indices.

JSON schema:
{
   "title": "LlamaToolkit",
   "description": "Toolkit for interacting with Llama indices.",
   "type": "object",
   "properties": {
      "index_configs": {
         "title": "Index Configs"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • index_configs (List[llama_index.langchain_helpers.agents.tools.IndexToolConfig])

field index_configs: List[IndexToolConfig] [Optional]
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model
get_tools() List[BaseTool]

Get the tools in the toolkit.
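
For example, expanding a toolkit into langchain tools (a sketch reusing tool_config from the IndexToolConfig example above):

from llama_index.langchain_helpers.agents import LlamaToolkit

toolkit = LlamaToolkit(index_configs=[tool_config])
tools = toolkit.get_tools()  # a list of LlamaIndexTool instances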

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
llama_index.langchain_helpers.agents.create_llama_agent(toolkit: LlamaToolkit, llm: BaseLLM, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) AgentExecutor

Load an agent executor given a Llama Toolkit and LLM.

NOTE: this is a light wrapper around initialize_agent in langchain.

Parameters
  • toolkit – LlamaToolkit to use.

  • llm – Language model to use as the agent.

  • agent – A string that specifies the agent type to use. Valid options are:
    zero-shot-react-description, react-docstore, self-ask-with-search,
    conversational-react-description, chat-zero-shot-react-description, and
    chat-conversational-react-description. If None and agent_path is also
    None, defaults to zero-shot-react-description.

  • callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None.

  • agent_path – Path to serialized agent to use.

  • agent_kwargs – Additional keyword arguments to pass to the underlying agent.

  • **kwargs – Additional keyword arguments passed to the agent executor.

Returns

An agent executor
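
For example, building a zero-shot agent over the toolkit (a minimal sketch; assumes an OpenAI API key is configured in the environment):

from langchain.agents import AgentType
from langchain.llms import OpenAI
from llama_index.langchain_helpers.agents import create_llama_agent

llm = OpenAI(temperature=0)
agent_executor = create_llama_agent(
    toolkit,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
print(agent_executor.run("Summarize the indexed documents."))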

llama_index.langchain_helpers.agents.create_llama_chat_agent(toolkit: LlamaToolkit, llm: BaseLLM, callback_manager: Optional[BaseCallbackManager] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) AgentExecutor

Load a chat llama agent given a Llama Toolkit and LLM.

Parameters
  • toolkit – LlamaToolkit to use.

  • llm – Language model to use as the agent.

  • callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None.

  • agent_kwargs – Additional keyword arguments to pass to the underlying agent.

  • **kwargs – Additional keyword arguments passed to the agent executor.

Returns

An agent executor
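
For example, a conversational variant (a sketch; ConversationBufferMemory is standard langchain memory, and the memory keyword is forwarded to the agent executor via **kwargs):

from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from llama_index.langchain_helpers.agents import create_llama_chat_agent

memory = ConversationBufferMemory(memory_key="chat_history")
chat_agent = create_llama_chat_agent(
    toolkit,
    OpenAI(temperature=0),
    memory=memory,
)
chat_agent.run("Hi, what do the indexed documents cover?")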

Memory Module

Langchain memory wrapper (for LlamaIndex).

pydantic model llama_index.langchain_helpers.memory_wrapper.GPTIndexChatMemory

Langchain chat memory wrapper (for LlamaIndex).

Parameters
  • human_prefix (str) – Prefix for human input. Defaults to "Human".

  • ai_prefix (str) – Prefix for AI output. Defaults to "AI".

  • memory_key (str) – Key for memory. Defaults to "history".

  • index (BaseIndex) – LlamaIndex instance.

  • query_kwargs (Dict[str, Any]) – Keyword arguments for LlamaIndex query.

  • input_key (Optional[str]) – Input key. Defaults to None.

  • output_key (Optional[str]) – Output key. Defaults to None.
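
For example, backing a langchain conversational agent with index-based chat memory (a sketch following the standard initialize_agent pattern; index is assumed to be a pre-built LlamaIndex index and the tool list is intentionally empty):

from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from llama_index.langchain_helpers.memory_wrapper import GPTIndexChatMemory

memory = GPTIndexChatMemory(
    index=index,
    memory_key="chat_history",
    query_kwargs={"response_mode": "compact"},
    return_source=True,
    return_messages=True,
)
agent = initialize_agent(
    [],  # no tools; the index-backed memory carries the conversation
    OpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
)
agent.run("Hi, my name is Bob.")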

JSON schema:
{
   "title": "GPTIndexChatMemory",
   "description": "Langchain chat memory wrapper (for LlamaIndex).\n\nArgs:\n    human_prefix (str): Prefix for human input. Defaults to \"Human\".\n    ai_prefix (str): Prefix for AI output. Defaults to \"AI\".\n    memory_key (str): Key for memory. Defaults to \"history\".\n    index (BaseIndex): LlamaIndex instance.\n    query_kwargs (Dict[str, Any]): Keyword arguments for LlamaIndex query.\n    input_key (Optional[str]): Input key. Defaults to None.\n    output_key (Optional[str]): Output key. Defaults to None.",
   "type": "object",
   "properties": {
      "chat_memory": {
         "title": "Chat Memory"
      },
      "output_key": {
         "title": "Output Key",
         "type": "string"
      },
      "input_key": {
         "title": "Input Key",
         "type": "string"
      },
      "return_messages": {
         "title": "Return Messages",
         "default": false,
         "type": "boolean"
      },
      "human_prefix": {
         "title": "Human Prefix",
         "default": "Human",
         "type": "string"
      },
      "ai_prefix": {
         "title": "Ai Prefix",
         "default": "AI",
         "type": "string"
      },
      "memory_key": {
         "title": "Memory Key",
         "default": "history",
         "type": "string"
      },
      "index": {
         "title": "Index"
      },
      "query_kwargs": {
         "title": "Query Kwargs",
         "type": "object"
      },
      "return_source": {
         "title": "Return Source",
         "default": false,
         "type": "boolean"
      },
      "id_to_message": {
         "title": "Id To Message",
         "type": "object",
         "additionalProperties": {
            "$ref": "#/definitions/BaseMessage"
         }
      }
   },
   "required": [
      "index"
   ],
   "definitions": {
      "BaseMessage": {
         "title": "BaseMessage",
         "description": "The base abstract Message class.\n\nMessages are the inputs and outputs of ChatModels.",
         "type": "object",
         "properties": {
            "content": {
               "title": "Content",
               "type": "string"
            },
            "additional_kwargs": {
               "title": "Additional Kwargs",
               "type": "object"
            }
         },
         "required": [
            "content"
         ]
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field ai_prefix: str = 'AI'
field chat_memory: BaseChatMessageHistory [Optional]
field human_prefix: str = 'Human'
field id_to_message: Dict[str, BaseMessage] [Optional]
field index: BaseIndex [Required]
field input_key: Optional[str] = None
field memory_key: str = 'history'
field output_key: Optional[str] = None
field query_kwargs: Dict [Optional]
field return_messages: bool = False
field return_source: bool = False
clear() None

Clear memory contents.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model
classmethod get_lc_namespace() List[str]

Get the namespace of the langchain object.

For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].

classmethod is_lc_serializable() bool

Is this class serializable?

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

load_memory_variables(inputs: Dict[str, Any]) Dict[str, str]

Return key-value pairs given the text input to the chain.

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) None

Save the context of this model run to memory.

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
to_json() Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() SerializedNotImplemented
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property lc_attributes: Dict

Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_secrets: Dict[str, str]

Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property memory_variables: List[str]

Return memory variables.

pydantic model llama_index.langchain_helpers.memory_wrapper.GPTIndexMemory

Langchain memory wrapper (for LlamaIndex).

Parameters
  • human_prefix (str) – Prefix for human input. Defaults to "Human".

  • ai_prefix (str) – Prefix for AI output. Defaults to "AI".

  • memory_key (str) – Key for memory. Defaults to "history".

  • index (BaseIndex) – LlamaIndex instance.

  • query_kwargs (Dict[str, Any]) – Keyword arguments for LlamaIndex query.

  • input_key (Optional[str]) – Input key. Defaults to None.

  • output_key (Optional[str]) – Output key. Defaults to None.
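
For example, saving a conversational turn and retrieving it via an index query (a sketch; index is assumed to be a pre-built LlamaIndex index):

from llama_index.langchain_helpers.memory_wrapper import GPTIndexMemory

memory = GPTIndexMemory(
    index=index,
    memory_key="chat_history",
    query_kwargs={"response_mode": "compact"},
)
memory.save_context(
    {"input": "My name is Bob."},
    {"response": "Nice to meet you, Bob!"},
)
print(memory.load_memory_variables({"input": "What is my name?"}))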

JSON schema:
{
   "title": "GPTIndexMemory",
   "description": "Langchain memory wrapper (for LlamaIndex).\n\nArgs:\n    human_prefix (str): Prefix for human input. Defaults to \"Human\".\n    ai_prefix (str): Prefix for AI output. Defaults to \"AI\".\n    memory_key (str): Key for memory. Defaults to \"history\".\n    index (BaseIndex): LlamaIndex instance.\n    query_kwargs (Dict[str, Any]): Keyword arguments for LlamaIndex query.\n    input_key (Optional[str]): Input key. Defaults to None.\n    output_key (Optional[str]): Output key. Defaults to None.",
   "type": "object",
   "properties": {
      "human_prefix": {
         "title": "Human Prefix",
         "default": "Human",
         "type": "string"
      },
      "ai_prefix": {
         "title": "Ai Prefix",
         "default": "AI",
         "type": "string"
      },
      "memory_key": {
         "title": "Memory Key",
         "default": "history",
         "type": "string"
      },
      "index": {
         "title": "Index"
      },
      "query_kwargs": {
         "title": "Query Kwargs",
         "type": "object"
      },
      "output_key": {
         "title": "Output Key",
         "type": "string"
      },
      "input_key": {
         "title": "Input Key",
         "type": "string"
      }
   },
   "required": [
      "index"
   ]
}

Config
  • arbitrary_types_allowed: bool = True

Fields
field ai_prefix: str = 'AI'
field human_prefix: str = 'Human'
field index: BaseIndex [Required]
field input_key: Optional[str] = None
field memory_key: str = 'history'
field output_key: Optional[str] = None
field query_kwargs: Dict [Optional]
clear() None

Clear memory contents.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model
classmethod get_lc_namespace() List[str]

Get the namespace of the langchain object.

For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"].

classmethod is_lc_serializable() bool

Is this class serializable?

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

load_memory_variables(inputs: Dict[str, Any]) Dict[str, str]

Return key-value pairs given the text input to the chain.

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) None

Save the context of this model run to memory.

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
to_json() Union[SerializedConstructor, SerializedNotImplemented]
to_json_not_implemented() SerializedNotImplemented
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property lc_attributes: Dict

Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_secrets: Dict[str, str]

Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property memory_variables: List[str]

Return memory variables.

llama_index.langchain_helpers.memory_wrapper.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) str

Get prompt input key.

Copied over from langchain.
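
For example, the single key that is neither a memory variable nor "stop" is returned (the dictionary values are illustrative):

from llama_index.langchain_helpers.memory_wrapper import get_prompt_input_key

inputs = {"input": "What is my name?", "chat_history": "...", "stop": ["\n"]}
# "stop" and the memory variables are excluded, leaving exactly one key.
key = get_prompt_input_key(inputs, memory_variables=["chat_history"])
assert key == "input"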