Langchain Integrations
Agent Tools + Functions
Llama integration with Langchain agents.
- class llama_index.langchain_helpers.agents.IndexToolConfig(*, query_engine: BaseQueryEngine, name: str, description: str, tool_kwargs: Dict = None)
Configuration for LlamaIndex index tool.
- class Config
Configuration for this pydantic object.
- classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
Duplicate a model, optionally choosing which fields to include, exclude, and change.
- Parameters
include – fields to include in the new model
exclude – fields to exclude from the new model; as with values, this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep – set to True to make a deep copy of the model
- Returns
new model instance
- dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
Generate a JSON representation of the model; include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
- classmethod update_forward_refs(**localns: Any) → None
Try to update ForwardRefs on fields based on this Model, globalns and localns.
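The construct/copy/dict methods above follow standard pydantic (v1-style) semantics. A minimal stand-alone sketch of those semantics, using a hypothetical ToolConfig model rather than the real IndexToolConfig (which needs a live query engine):

```python
from typing import Optional
from pydantic import BaseModel

class ToolConfig(BaseModel):
    name: str
    description: str = "no description"
    tool_kwargs: Optional[dict] = None

# construct() skips validation entirely; defaults still apply.
# Suitable only for trusted or pre-validated data.
raw = ToolConfig.construct(name="search", tool_kwargs={"return_direct": True})

# copy() duplicates the model; note that `update` values are not validated.
updated = raw.copy(update={"description": "a search tool"})

# dict() serializes the model, optionally filtering fields.
payload = updated.dict(exclude={"tool_kwargs"})
```

Here `payload` is `{"name": "search", "description": "a search tool"}`; because construct() and copy(update=...) bypass validation, they should only be fed data you trust.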
- class llama_index.langchain_helpers.agents.LlamaIndexTool(*, name: str, description: str, args_schema: Optional[Type[BaseModel]] = None, return_direct: bool = False, verbose: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]] = False, query_engine: BaseQueryEngine, return_sources: bool = False)
Tool for querying a LlamaIndex.
- class Config
Configuration for this pydantic object.
- args_schema: Optional[Type[BaseModel]]
Pydantic model class to validate and parse the tool's input arguments.
- async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Any
Run the tool asynchronously.
- callback_manager: Optional[BaseCallbackManager]
Deprecated. Please use callbacks instead.
- callbacks: Callbacks
Callbacks to be called during tool execution.
- classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
Duplicate a model, optionally choosing which fields to include, exclude, and change.
- Parameters
include – fields to include in the new model
exclude – fields to exclude from the new model; as with values, this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep – set to True to make a deep copy of the model
- Returns
new model instance
- description: str
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as part of the description.
- dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- classmethod from_tool_config(tool_config: IndexToolConfig) → LlamaIndexTool
Create a tool from a tool config.
- handle_tool_error: Optional[Union[bool, str, Callable[[ToolException], str]]]
Handle the content of the ToolException thrown.
- property is_single_input: bool
Whether the tool only accepts a single input.
- json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
Generate a JSON representation of the model; include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
- name: str
The unique name of the tool that clearly communicates its purpose.
- classmethod raise_deprecation(values: Dict) → Dict
Raise a deprecation warning if callback_manager is used.
- return_direct: bool
Whether to return the tool's output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping.
- run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Any
Run the tool.
- classmethod update_forward_refs(**localns: Any) → None
Try to update ForwardRefs on fields based on this Model, globalns and localns.
- verbose: bool
Whether to log the tool's progress.
- class llama_index.langchain_helpers.agents.LlamaToolkit(*, index_configs: List[IndexToolConfig] = None)
Toolkit for interacting with Llama indices.
- class Config
Configuration for this pydantic object.
- classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
Duplicate a model, optionally choosing which fields to include, exclude, and change.
- Parameters
include – fields to include in the new model
exclude – fields to exclude from the new model; as with values, this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep – set to True to make a deep copy of the model
- Returns
new model instance
- dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- get_tools() → List[BaseTool]
Get the tools in the toolkit.
- json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
Generate a JSON representation of the model; include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
- classmethod update_forward_refs(**localns: Any) → None
Try to update ForwardRefs on fields based on this Model, globalns and localns.
- llama_index.langchain_helpers.agents.create_llama_agent(toolkit: LlamaToolkit, llm: BaseLLM, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → AgentExecutor
Load an agent executor given a Llama Toolkit and LLM.
NOTE: this is a light wrapper around initialize_agent in langchain.
- Parameters
toolkit – LlamaToolkit to use.
llm – Language model to use as the agent.
agent – A string that specifies the agent type to use. Valid options are: zero-shot-react-description, react-docstore, self-ask-with-search, conversational-react-description, chat-zero-shot-react-description, chat-conversational-react-description. If None and agent_path is also None, will default to zero-shot-react-description.
callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None.
agent_path – Path to serialized agent to use.
agent_kwargs – Additional keyword arguments to pass to the underlying agent
**kwargs – Additional keyword arguments passed to the agent executor
- Returns
An agent executor
- llama_index.langchain_helpers.agents.create_llama_chat_agent(toolkit: LlamaToolkit, llm: BaseLLM, callback_manager: Optional[BaseCallbackManager] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → AgentExecutor
Load a chat llama agent given a Llama Toolkit and LLM.
- Parameters
toolkit – LlamaToolkit to use.
llm – Language model to use as the agent.
callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None.
agent_kwargs – Additional keyword arguments to pass to the underlying agent
**kwargs – Additional keyword arguments passed to the agent executor
- Returns
An agent executor
Memory Module
Langchain memory wrapper (for LlamaIndex).
- class llama_index.langchain_helpers.memory_wrapper.GPTIndexChatMemory(*, chat_memory: BaseChatMessageHistory = None, output_key: Optional[str] = None, input_key: Optional[str] = None, return_messages: bool = False, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history', index: BaseIndex, query_kwargs: Dict = None, return_source: bool = False, id_to_message: Dict[str, BaseMessage] = None)
Langchain chat memory wrapper (for LlamaIndex).
- Parameters
human_prefix (str) – Prefix for human input. Defaults to "Human".
ai_prefix (str) – Prefix for AI output. Defaults to "AI".
memory_key (str) – Key for memory. Defaults to "history".
index (BaseIndex) – LlamaIndex instance.
query_kwargs (Dict[str, Any]) – Keyword arguments for LlamaIndex query.
input_key (Optional[str]) – Input key. Defaults to None.
output_key (Optional[str]) – Output key. Defaults to None.
- class Config
Configuration for this pydantic object.
- clear() → None
Clear memory contents.
- classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
Duplicate a model, optionally choosing which fields to include, exclude, and change.
- Parameters
include – fields to include in the new model
exclude – fields to exclude from the new model; as with values, this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep – set to True to make a deep copy of the model
- Returns
new model instance
- dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
Generate a JSON representation of the model; include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
- load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str]
Return key-value pairs given the text input to the chain.
- property memory_variables: List[str]
Return memory variables.
- save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
Save the context of this model run to memory.
- classmethod update_forward_refs(**localns: Any) → None
Try to update ForwardRefs on fields based on this Model, globalns and localns.
- class llama_index.langchain_helpers.memory_wrapper.GPTIndexMemory(*, human_prefix: str = 'Human', ai_prefix: str = 'AI', memory_key: str = 'history', index: BaseIndex, query_kwargs: Dict = None, output_key: Optional[str] = None, input_key: Optional[str] = None)
Langchain memory wrapper (for LlamaIndex).
- Parameters
human_prefix (str) – Prefix for human input. Defaults to "Human".
ai_prefix (str) – Prefix for AI output. Defaults to "AI".
memory_key (str) – Key for memory. Defaults to "history".
index (BaseIndex) – LlamaIndex instance.
query_kwargs (Dict[str, Any]) – Keyword arguments for LlamaIndex query.
input_key (Optional[str]) – Input key. Defaults to None.
output_key (Optional[str]) – Output key. Defaults to None.
- class Config
Configuration for this pydantic object.
- clear() → None
Clear memory contents.
- classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
Duplicate a model, optionally choosing which fields to include, exclude, and change.
- Parameters
include – fields to include in the new model
exclude – fields to exclude from the new model; as with values, this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep – set to True to make a deep copy of the model
- Returns
new model instance
- dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
Generate a JSON representation of the model; include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
- load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str]
Return key-value pairs given the text input to the chain.
- property memory_variables: List[str]
Return memory variables.
- save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
Save the context of this model run to memory.
- classmethod update_forward_refs(**localns: Any) → None
Try to update ForwardRefs on fields based on this Model, globalns and localns.
- llama_index.langchain_helpers.memory_wrapper.get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) → str
Get prompt input key.
Copied over from langchain.