Agents#

This doc covers our agent classes - both the high-level agents and the lower-level components they are built from.

There are also legacy classes (e.g. OldOpenAIAgent and OldReActAgent) that still work but are deprecated.

class llama_index.agent.AgentChatResponse(response: str = '', sources: ~typing.List[~llama_index.tools.types.ToolOutput] = <factory>, source_nodes: ~typing.List[~llama_index.schema.NodeWithScore] = <factory>)#

Agent chat response.

class llama_index.agent.AgentRunner(agent_worker: BaseAgentWorker, chat_history: Optional[List[ChatMessage]] = None, state: Optional[AgentState] = None, memory: Optional[BaseMemory] = None, llm: Optional[LLM] = None, callback_manager: Optional[CallbackManager] = None, init_task_state_kwargs: Optional[dict] = None, delete_task_on_finish: bool = False, default_tool_choice: str = 'auto', verbose: bool = False)#

Agent runner.

Top-level agent orchestrator that can create tasks, run each step in a task, or run a task end-to-end (see the sketch after the parameter list below). Stores state and keeps track of tasks.

Parameters
  • agent_worker (BaseAgentWorker) – step executor

  • chat_history (Optional[List[ChatMessage]], optional) – chat history. Defaults to None.

  • state (Optional[AgentState], optional) – agent state. Defaults to None.

  • memory (Optional[BaseMemory], optional) – memory. Defaults to None.

  • llm (Optional[LLM], optional) – LLM. Defaults to None.

  • callback_manager (Optional[CallbackManager], optional) – callback manager. Defaults to None.

  • init_task_state_kwargs (Optional[dict], optional) – init task state kwargs. Defaults to None.
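A minimal sketch of both usage patterns, assuming an OpenAIAgentWorker as the step executor; the multiply tool and model name are illustrative, not part of this API:

from llama_index.agent import AgentRunner, OpenAIAgentWorker
from llama_index.llms import OpenAI
from llama_index.tools import FunctionTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

worker = OpenAIAgentWorker.from_tools(
    tools=[FunctionTool.from_defaults(fn=multiply)],
    llm=OpenAI(model="gpt-3.5-turbo"),
)
agent = AgentRunner(worker)

# Run a task end-to-end through the chat interface.
print(agent.chat("What is 3 times 4?"))

# Or create a task and drive each step manually.
task = agent.create_task("What is 12 times 7?")
step_output = agent.run_step(task.task_id)
while not step_output.is_last:
    step_output = agent.run_step(task.task_id)
print(agent.finalize_response(task.task_id))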

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Async version of main chat interface.

async arun_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async).

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Async version of main chat interface.

async astream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async stream).

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

create_task(input: str, **kwargs: Any) Task#

Create task.

delete_task(task_id: str) None#

Delete task.

NOTE: this will not delete any previous executions from memory.

finalize_response(task_id: str, step_output: Optional[TaskStepOutput] = None) Union[AgentChatResponse, StreamingAgentChatResponse]#

Finalize response.

get_completed_step(task_id: str, step_id: str, **kwargs: Any) TaskStepOutput#

Get completed step.

get_completed_steps(task_id: str, **kwargs: Any) List[TaskStepOutput]#

Get completed steps.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_task(task_id: str, **kwargs: Any) Task#

Get task.

get_upcoming_steps(task_id: str, **kwargs: Any) List[TaskStep]#

Get upcoming steps.

list_tasks(**kwargs: Any) List[Task]#

List tasks.

reset() None#

Reset conversation state.

run_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Stream chat interface.

stream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (stream).

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

undo_step(task_id: str) None#

Undo previous step.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

class llama_index.agent.ContextRetrieverOpenAIAgent(tools: List[BaseTool], retriever: BaseRetriever, qa_prompt: PromptTemplate, context_separator: str, llm: OpenAI, memory: BaseMemory, prefix_messages: List[ChatMessage], verbose: bool = False, max_function_calls: int = 5, callback_manager: Optional[CallbackManager] = None)#

ContextRetriever OpenAI Agent.

This agent performs retrieval from a BaseRetriever before calling the LLM, allowing it to augment the user message with context.

NOTE: this is a beta feature, function interfaces might change.

Parameters
  • tools (List[BaseTool]) – A list of tools.

  • retriever (BaseRetriever) – A retriever.

  • qa_prompt (Optional[PromptTemplate]) – A QA prompt.

  • context_separator (str) – A context separator.

  • llm (Optional[OpenAI]) – An OpenAI LLM.

  • chat_history (Optional[List[ChatMessage]]) – A chat history.

  • prefix_messages (List[ChatMessage]) – A list of prefix messages.

  • verbose (bool) – Whether to print debug statements.

  • max_function_calls (int) – Maximum number of function calls.

  • callback_manager (Optional[CallbackManager]) – A callback manager.

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Async version of main chat interface.

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Async version of main chat interface.

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

classmethod from_tools_and_retriever(tools: ~typing.List[~llama_index.tools.types.BaseTool], retriever: ~llama_index.core.base_retriever.BaseRetriever, qa_prompt: ~typing.Optional[~llama_index.prompts.base.PromptTemplate] = None, context_separator: str = '\n', llm: ~typing.Optional[~llama_index.llms.llm.LLM] = None, chat_history: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, memory: ~typing.Optional[~llama_index.memory.types.BaseMemory] = None, memory_cls: ~typing.Type[~llama_index.memory.types.BaseMemory] = <class 'llama_index.memory.chat_memory_buffer.ChatMemoryBuffer'>, verbose: bool = False, max_function_calls: int = 5, callback_manager: ~typing.Optional[~llama_index.callbacks.base.CallbackManager] = None, system_prompt: ~typing.Optional[str] = None, prefix_messages: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None) ContextRetrieverOpenAIAgent#

Create a ContextRetrieverOpenAIAgent from a retriever.

Parameters
  • retriever (BaseRetriever) – A retriever.

  • qa_prompt (Optional[PromptTemplate]) – A QA prompt.

  • context_separator (str) – A context separator.

  • llm (Optional[OpenAI]) – An OpenAI LLM.

  • chat_history (Optional[ChatMessageHistory]) – A chat history.

  • verbose (bool) – Whether to print debug statements.

  • max_function_calls (int) – Maximum number of function calls.

  • callback_manager (Optional[CallbackManager]) – A callback manager.
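A hedged sketch of typical construction, assuming an existing index whose retriever supplies the context (the index and the add tool are placeholders):

from llama_index.agent import ContextRetrieverOpenAIAgent
from llama_index.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# `index` is assumed to be a previously built VectorStoreIndex.
retriever = index.as_retriever(similarity_top_k=2)

agent = ContextRetrieverOpenAIAgent.from_tools_and_retriever(
    tools=[FunctionTool.from_defaults(fn=add)],
    retriever=retriever,
    verbose=True,
)
response = agent.chat("What is 1 + 1?")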

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(message: str) List[BaseTool]#

Get tools.

reset() None#

Reset conversation state.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Stream chat interface.

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

pydantic model llama_index.agent.CustomSimpleAgentWorker#

Custom simple agent worker.

This is “simple” in the sense that some of the scaffolding is already set up. Assumptions:
  • the agent has tools, an LLM, a callback manager, and a tool retriever
  • it has a from_tools convenience constructor
  • the agent is sequential and does not take in any additional intermediate inputs

A minimal subclass sketch follows the parameter list below.

Parameters
  • tools (Sequence[BaseTool]) – Tools to use for reasoning

  • llm (LLM) – LLM to use

  • callback_manager (CallbackManager) – Callback manager

  • tool_retriever (Optional[ObjectRetriever[BaseTool]]) – Tool retriever

  • verbose (bool) – Whether to print out reasoning steps
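A minimal subclass sketch; the private hook names (_initialize_state, _run_step, _finalize_task) follow the library's custom-agent guide and should be treated as assumptions here:

from typing import Any, Dict, Optional, Tuple

from llama_index.agent import AgentChatResponse, CustomSimpleAgentWorker, Task

class EchoAgentWorker(CustomSimpleAgentWorker):
    """Toy worker that answers by echoing the task input."""

    def _initialize_state(self, task: Task, **kwargs: Any) -> Dict[str, Any]:
        # Per-task scratch state; stored on task.extra_state by the scaffolding.
        return {"count": 0}

    def _run_step(
        self, state: Dict[str, Any], task: Task, input: Optional[str] = None
    ) -> Tuple[AgentChatResponse, bool]:
        state["count"] += 1
        # Return the response plus an is_done flag; True ends the task.
        return AgentChatResponse(response=f"You said: {task.input}"), True

    def _finalize_task(self, state: Dict[str, Any], **kwargs: Any) -> None:
        # Clean up per-task state after the last step.
        pass

The worker can then be driven by an AgentRunner, e.g. AgentRunner(EchoAgentWorker.from_tools([])).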

Show JSON schema
{
   "title": "CustomSimpleAgentWorker",
   "description": "Custom simple agent worker.\n\nThis is \"simple\" in the sense that some of the scaffolding is setup already.\nAssumptions:\n- assumes that the agent has tools, llm, callback manager, and tool retriever\n- has a `from_tools` convenience function\n- assumes that the agent is sequential, and doesn't take in any additional\nintermediate inputs.\n\nArgs:\n    tools (Sequence[BaseTool]): Tools to use for reasoning\n    llm (LLM): LLM to use\n    callback_manager (CallbackManager): Callback manager\n    tool_retriever (Optional[ObjectRetriever[BaseTool]]): Tool retriever\n    verbose (bool): Whether to print out reasoning steps",
   "type": "object",
   "properties": {
      "tools": {
         "title": "Tools"
      },
      "llm": {
         "title": "Llm"
      },
      "callback_manager": {
         "title": "Callback Manager"
      },
      "tool_retriever": {
         "title": "Tool Retriever"
      },
      "verbose": {
         "title": "Verbose",
         "description": "Whether to print out reasoning steps",
         "default": false,
         "type": "boolean"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • callback_manager (llama_index.callbacks.base.CallbackManager)

  • llm (llama_index.llms.llm.LLM)

  • tool_retriever (Optional[llama_index.objects.base.ObjectRetriever[llama_index.tools.types.BaseTool]])

  • tools (Sequence[llama_index.tools.types.BaseTool])

  • verbose (bool)

field callback_manager: CallbackManager [Optional]#
field llm: LLM [Required]#

LLM to use

field tool_retriever: Optional[ObjectRetriever[BaseTool]] = None#

Tool retriever

field tools: Sequence[BaseTool] [Required]#

Tools to use for reasoning

field verbose: bool = False#

Whether to print out reasoning steps

async arun_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async).

async astream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async stream).

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

finalize_task(task: Task, **kwargs: Any) None#

Finalize task, after all the steps are completed.

classmethod from_orm(obj: Any) Model#
classmethod from_tools(tools: Optional[Sequence[BaseTool]] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None, llm: Optional[LLM] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, **kwargs: Any) CustomSimpleAgentWorker#

Convenience constructor method from a set of BaseTools (optional).

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(input: str) List[AsyncBaseTool]#

Get tools.

initialize_step(task: Task, **kwargs: Any) TaskStep#

Initialize step from task.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod parse_obj(obj: Any) Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
run_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step.

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode#
set_callback_manager(callback_manager: CallbackManager) None#

Set callback manager.

stream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (stream).

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

classmethod validate(value: Any) Model#
class llama_index.agent.FnRetrieverOpenAIAgent(tools: List[BaseTool], llm: OpenAI, memory: BaseMemory, prefix_messages: List[ChatMessage], verbose: bool = False, max_function_calls: int = 5, callback_manager: Optional[CallbackManager] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None)#

Function Retriever OpenAI Agent.

Uses our object retriever module to retrieve tools for the OpenAI agent.

NOTE: This is deprecated; you can just use the base OpenAIAgent class by specifying the following:

agent = OpenAIAgent.from_tools(tool_retriever=retriever, ...)

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Async version of main chat interface.

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Async version of main chat interface.

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

classmethod from_tools(tools: ~typing.Optional[~typing.List[~llama_index.tools.types.BaseTool]] = None, tool_retriever: ~typing.Optional[~llama_index.objects.base.ObjectRetriever[~llama_index.tools.types.BaseTool]] = None, llm: ~typing.Optional[~llama_index.llms.llm.LLM] = None, chat_history: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, memory: ~typing.Optional[~llama_index.memory.types.BaseMemory] = None, memory_cls: ~typing.Type[~llama_index.memory.types.BaseMemory] = <class 'llama_index.memory.chat_memory_buffer.ChatMemoryBuffer'>, verbose: bool = False, max_function_calls: int = 5, callback_manager: ~typing.Optional[~llama_index.callbacks.base.CallbackManager] = None, system_prompt: ~typing.Optional[str] = None, prefix_messages: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, **kwargs: ~typing.Any) OpenAIAgent#

Create an OpenAIAgent from a list of tools.

Similar to from_defaults in other classes, this method will infer defaults for a variety of parameters, including the LLM, if they are not specified.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(message: str) List[BaseTool]#

Get tools.

reset() None#

Reset conversation state.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Stream chat interface.

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

class llama_index.agent.MultimodalReActAgentWorker(tools: Sequence[BaseTool], multi_modal_llm: MultiModalLLM, max_iterations: int = 10, react_chat_formatter: Optional[ReActChatFormatter] = None, output_parser: Optional[ReActOutputParser] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None)#

Multimodal ReAct Agent worker.

NOTE: This is a BETA feature.
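A hedged construction sketch; the OpenAIMultiModal import path and model name are assumptions, not guaranteed by this API:

from llama_index.agent import AgentRunner, MultimodalReActAgentWorker
from llama_index.multi_modal_llms import OpenAIMultiModal

worker = MultimodalReActAgentWorker.from_tools(
    tools=[],  # add BaseTools here
    multi_modal_llm=OpenAIMultiModal(model="gpt-4-vision-preview"),
    verbose=True,
)
agent = AgentRunner(worker)  # then create and step tasks as usual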

async arun_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async).

async astream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async stream).

finalize_task(task: Task, **kwargs: Any) None#

Finalize task, after all the steps are completed.

classmethod from_tools(tools: Optional[Sequence[BaseTool]] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None, multi_modal_llm: Optional[MultiModalLLM] = None, max_iterations: int = 10, react_chat_formatter: Optional[ReActChatFormatter] = None, output_parser: Optional[ReActOutputParser] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, **kwargs: Any) MultimodalReActAgentWorker#

Convenience constructor method from a set of BaseTools (optional).

NOTE: kwargs should be exhausted by this point. In other words, the various upstream components such as BaseSynthesizer (response synthesizer) or BaseRetriever should already have consumed their respective kwargs during construction.

Returns

MultimodalReActAgentWorker

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(input: str) List[AsyncBaseTool]#

Get tools.

initialize_step(task: Task, **kwargs: Any) TaskStep#

Initialize step from task.

run_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step.

set_callback_manager(callback_manager: CallbackManager) None#

Set callback manager.

stream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (stream).

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

llama_index.agent.OldOpenAIAgent#

alias of OpenAIAgent

llama_index.agent.OldReActAgent#

alias of ReActAgent

class llama_index.agent.OpenAIAgent(tools: List[BaseTool], llm: OpenAI, memory: BaseMemory, prefix_messages: List[ChatMessage], verbose: bool = False, max_function_calls: int = 5, default_tool_choice: str = 'auto', callback_manager: Optional[CallbackManager] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None)#

OpenAI agent.

Subclasses AgentRunner with an OpenAIAgentWorker.

For the legacy implementation see:

from llama_index.agent.legacy.openai.base import OpenAIAgent

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Async version of main chat interface.

async arun_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async).

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Async version of main chat interface.

async astream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async stream).

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

create_task(input: str, **kwargs: Any) Task#

Create task.

delete_task(task_id: str) None#

Delete task.

NOTE: this will not delete any previous executions from memory.

finalize_response(task_id: str, step_output: Optional[TaskStepOutput] = None) Union[AgentChatResponse, StreamingAgentChatResponse]#

Finalize response.

classmethod from_tools(tools: ~typing.Optional[~typing.List[~llama_index.tools.types.BaseTool]] = None, tool_retriever: ~typing.Optional[~llama_index.objects.base.ObjectRetriever[~llama_index.tools.types.BaseTool]] = None, llm: ~typing.Optional[~llama_index.llms.llm.LLM] = None, chat_history: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, memory: ~typing.Optional[~llama_index.memory.types.BaseMemory] = None, memory_cls: ~typing.Type[~llama_index.memory.types.BaseMemory] = <class 'llama_index.memory.chat_memory_buffer.ChatMemoryBuffer'>, verbose: bool = False, max_function_calls: int = 5, default_tool_choice: str = 'auto', callback_manager: ~typing.Optional[~llama_index.callbacks.base.CallbackManager] = None, system_prompt: ~typing.Optional[str] = None, prefix_messages: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, **kwargs: ~typing.Any) OpenAIAgent#

Create an OpenAIAgent from a list of tools.

Similar to from_defaults in other classes, this method will infer defaults for a variety of parameters, including the LLM, if they are not specified.
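For instance, a small sketch (the word_count tool and model name are illustrative):

from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI
from llama_index.tools import FunctionTool

def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

agent = OpenAIAgent.from_tools(
    tools=[FunctionTool.from_defaults(fn=word_count)],
    llm=OpenAI(model="gpt-3.5-turbo"),
    system_prompt="You are a terse assistant.",
)
response = agent.chat("How many words are in 'the quick brown fox'?")

# Streaming variant: print tokens as they arrive.
streaming_response = agent.stream_chat("Summarize our conversation.")
streaming_response.print_response_stream()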

get_completed_step(task_id: str, step_id: str, **kwargs: Any) TaskStepOutput#

Get completed step.

get_completed_steps(task_id: str, **kwargs: Any) List[TaskStepOutput]#

Get completed steps.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_task(task_id: str, **kwargs: Any) Task#

Get task.

get_upcoming_steps(task_id: str, **kwargs: Any) List[TaskStep]#

Get upcoming steps.

list_tasks(**kwargs: Any) List[Task]#

List tasks.

reset() None#

Reset conversation state.

run_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Stream chat interface.

stream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (stream).

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

undo_step(task_id: str) None#

Undo previous step.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

class llama_index.agent.OpenAIAgentWorker(tools: List[BaseTool], llm: OpenAI, prefix_messages: List[ChatMessage], verbose: bool = False, max_function_calls: int = 5, callback_manager: Optional[CallbackManager] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None)#

OpenAI Agent worker.

async arun_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async).

async astream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async stream).

finalize_task(task: Task, **kwargs: Any) None#

Finalize task, after all the steps are completed.

classmethod from_tools(tools: Optional[List[BaseTool]] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None, llm: Optional[LLM] = None, verbose: bool = False, max_function_calls: int = 5, callback_manager: Optional[CallbackManager] = None, system_prompt: Optional[str] = None, prefix_messages: Optional[List[ChatMessage]] = None, **kwargs: Any) OpenAIAgentWorker#

Create an OpenAIAgentWorker from a list of tools.

Similar to from_defaults in other classes, this method will infer defaults for a variety of parameters, including the LLM, if they are not specified.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(input: str) List[BaseTool]#

Get tools.

initialize_step(task: Task, **kwargs: Any) TaskStep#

Initialize step from task.

run_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step.

set_callback_manager(callback_manager: CallbackManager) None#

Set callback manager.

stream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (stream).

undo_step(task: Task, **kwargs: Any) Optional[TaskStep]#

Undo step from task.

If this cannot be implemented, return None.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

class llama_index.agent.OpenAIAssistantAgent(client: Any, assistant: Any, tools: Optional[List[BaseTool]], callback_manager: Optional[CallbackManager] = None, thread_id: Optional[str] = None, instructions_prefix: Optional[str] = None, run_retrieve_sleep_time: float = 0.1, file_dict: Dict[str, str] = {}, verbose: bool = False)#

OpenAIAssistant agent.

Wrapper around the OpenAI Assistants API: https://platform.openai.com/docs/assistants/overview

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, function_call: Union[str, dict] = 'auto') AgentChatResponse#

Async version of main chat interface.

add_message(message: str, file_ids: Optional[List[str]] = None) Any#

Add message to assistant.

async arun_assistant(instructions_prefix: Optional[str] = None) Tuple[Any, Dict]#

Run assistant.

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

property assistant: Any#

Get assistant.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, function_call: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Async version of main chat interface.

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, function_call: Union[str, dict] = 'auto') AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

property client: Any#

Get client.

property files_dict: Dict[str, str]#

Get files dict.

classmethod from_existing(assistant_id: str, tools: Optional[List[BaseTool]] = None, thread_id: Optional[str] = None, instructions_prefix: Optional[str] = None, run_retrieve_sleep_time: float = 0.1, callback_manager: Optional[CallbackManager] = None, api_key: Optional[str] = None, verbose: bool = False) OpenAIAssistantAgent#

From existing assistant id.

Parameters
  • assistant_id – id of assistant

  • tools – list of BaseTools Assistant can use

  • thread_id – ID of an existing thread to run on (a new thread is created if not provided)

  • run_retrieve_sleep_time – seconds to sleep between polls while waiting for a run to finish

  • instructions_prefix – optional prefix instructions for runs

  • callback_manager – callback manager

  • api_key – OpenAI API key

  • verbose – whether to print verbose output

classmethod from_new(name: str, instructions: str, tools: Optional[List[BaseTool]] = None, openai_tools: Optional[List[Dict]] = None, thread_id: Optional[str] = None, model: str = 'gpt-4-1106-preview', instructions_prefix: Optional[str] = None, run_retrieve_sleep_time: float = 0.1, files: Optional[List[str]] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, file_ids: Optional[List[str]] = None, api_key: Optional[str] = None) OpenAIAssistantAgent#

From new assistant.

Parameters
  • name – name of assistant

  • instructions – instructions for assistant

  • tools – list of tools

  • openai_tools – list of raw OpenAI tool definitions (dicts)

  • thread_id – ID of an existing thread to run on (a new thread is created if not provided)

  • model – name of the OpenAI model to use

  • run_retrieve_sleep_time – seconds to sleep between polls while waiting for a run to finish

  • files – list of file paths to upload and attach to the assistant

  • instructions_prefix – optional prefix instructions for runs

  • callback_manager – callback manager

  • verbose – whether to print verbose output

  • file_ids – list of IDs of files already uploaded to OpenAI

  • api_key – OpenAI API key
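A hedged sketch of both constructors; the assistant name, instructions, and IDs below are placeholders (an OpenAI API key must be available, e.g. via OPENAI_API_KEY):

from llama_index.agent import OpenAIAssistantAgent

# Create a brand-new assistant...
agent = OpenAIAssistantAgent.from_new(
    name="Math Tutor",
    instructions="You are a personal math tutor. Answer concisely.",
    openai_tools=[{"type": "code_interpreter"}],
    verbose=True,
)
response = agent.chat("Solve 3x + 11 = 14.")

# ...or attach to an assistant that already exists.
existing = OpenAIAssistantAgent.from_existing(assistant_id="asst_...")  # placeholder ID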

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(message: str) List[BaseTool]#

Get tools.

property latest_message: ChatMessage#

Get latest message.

reset() None#

Delete the current thread and create a new one.

run_assistant(instructions_prefix: Optional[str] = None) Tuple[Any, Dict]#

Run assistant.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, function_call: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Stream chat interface.

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

property thread_id: str#

Get thread id.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

upload_files(files: List[str]) Dict[str, Any]#

Upload files.

class llama_index.agent.ParallelAgentRunner(agent_worker: BaseAgentWorker, chat_history: Optional[List[ChatMessage]] = None, state: Optional[DAGAgentState] = None, memory: Optional[BaseMemory] = None, llm: Optional[LLM] = None, callback_manager: Optional[CallbackManager] = None, init_task_state_kwargs: Optional[dict] = None, delete_task_on_finish: bool = False)#

Parallel agent runner.

Executes steps in queue in parallel. Requires async support.
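A rough sketch of queue-based parallel execution, assuming an OpenAIAgentWorker as the step executor (whether queued steps are truly independent depends on your worker):

import asyncio

from llama_index.agent import OpenAIAgentWorker, ParallelAgentRunner

worker = OpenAIAgentWorker.from_tools([])  # tools omitted for brevity
runner = ParallelAgentRunner(worker)
task = runner.create_task("Compare the populations of three cities.")

async def run_all() -> None:
    # Run every step currently in the queue concurrently,
    # repeating until some output is marked as the last step.
    while True:
        outputs = await runner.arun_steps_in_queue(task.task_id)
        if any(o.is_last for o in outputs):
            break
    print(runner.finalize_response(task.task_id))

asyncio.run(run_all())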

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Async version of main chat interface.

async arun_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async).

async arun_steps_in_queue(task_id: str, mode: ChatResponseMode = ChatResponseMode.WAIT, **kwargs: Any) List[TaskStepOutput]#

Execute all steps in queue.

All steps in queue are assumed to be ready.

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Async version of main chat interface.

async astream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async stream).

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

create_task(input: str, **kwargs: Any) Task#

Create task.

delete_task(task_id: str) None#

Delete task.

NOTE: this will not delete any previous executions from memory.

finalize_response(task_id: str, step_output: Optional[TaskStepOutput] = None) Union[AgentChatResponse, StreamingAgentChatResponse]#

Finalize response.

get_completed_step(task_id: str, step_id: str, **kwargs: Any) TaskStepOutput#

Get completed step.

get_completed_steps(task_id: str, **kwargs: Any) List[TaskStepOutput]#

Get completed steps.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_task(task_id: str, **kwargs: Any) Task#

Get task.

get_upcoming_steps(task_id: str, **kwargs: Any) List[TaskStep]#

Get upcoming steps.

list_tasks(**kwargs: Any) List[Task]#

List tasks.

reset() None#

Reset conversation state.

run_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step.

run_steps_in_queue(task_id: str, mode: ChatResponseMode = ChatResponseMode.WAIT, **kwargs: Any) List[TaskStepOutput]#

Execute steps in queue.

Run all steps in queue, clearing it out.

Assumes that all steps can be run in parallel.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Union[str, dict] = 'auto') StreamingAgentChatResponse#

Stream chat interface.

stream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (stream).

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

undo_step(task_id: str) None#

Undo previous step.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

pydantic model llama_index.agent.QueryPipelineAgentWorker#

Query Pipeline agent worker.

Barebones agent worker that takes in a query pipeline.

Assumes that the first component in the query pipeline is an AgentInputComponent and the last is an AgentFnComponent.

Parameters

pipeline (QueryPipeline) – Query pipeline
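A compressed sketch of the expected pipeline shape. The AgentInputComponent/AgentFnComponent import path and callback signatures follow the library's query-pipeline agent guide and should be treated as assumptions:

from typing import Any, Dict, Tuple

from llama_index.agent import AgentChatResponse, AgentRunner, QueryPipelineAgentWorker, Task
from llama_index.query_pipeline import AgentFnComponent, AgentInputComponent, QueryPipeline

def agent_input_fn(task: Task, state: Dict[str, Any]) -> Dict[str, Any]:
    # First component: unpack the task into pipeline inputs.
    return {"input": task.input}

def agent_output_fn(
    task: Task, state: Dict[str, Any], input: str
) -> Tuple[AgentChatResponse, bool]:
    # Last component: produce the response plus an is_done flag.
    return AgentChatResponse(response=f"Echo: {input}"), True

pipeline = QueryPipeline(chain=[
    AgentInputComponent(fn=agent_input_fn),
    AgentFnComponent(fn=agent_output_fn),
])
agent = AgentRunner(QueryPipelineAgentWorker(pipeline))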

Show JSON schema
{
   "title": "QueryPipelineAgentWorker",
   "description": "Query Pipeline agent worker.\n\nBarebones agent worker that takes in a query pipeline.\n\nAssumes that the first component in the query pipeline is an\n`AgentInputComponent` and last is `AgentFnComponent`.\n\nArgs:\n    pipeline (QueryPipeline): Query pipeline",
   "type": "object",
   "properties": {
      "pipeline": {
         "title": "Pipeline"
      },
      "callback_manager": {
         "title": "Callback Manager"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • callback_manager (llama_index.callbacks.base.CallbackManager)

  • pipeline (llama_index.query_pipeline.query.QueryPipeline)

field callback_manager: CallbackManager [Required]#
field pipeline: QueryPipeline [Required]#

Query pipeline

async arun_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async).

async astream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async stream).

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

finalize_task(task: Task, **kwargs: Any) None#

Finalize task, after all the steps are completed.

classmethod from_orm(obj: Any) Model#
get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

initialize_step(task: Task, **kwargs: Any) TaskStep#

Initialize step from task.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod parse_obj(obj: Any) Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
run_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step.

classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode#
set_callback_manager(callback_manager: CallbackManager) None#

Set callback manager.

stream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (stream).

classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

classmethod validate(value: Any) Model#
property agent_components: List[AgentFnComponent]#

Get agent components.

property agent_input_component: AgentInputComponent#

Get agent input component.

class llama_index.agent.ReActAgent(tools: Sequence[BaseTool], llm: LLM, memory: BaseMemory, max_iterations: int = 10, react_chat_formatter: Optional[ReActChatFormatter] = None, output_parser: Optional[ReActOutputParser] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None, context: Optional[str] = None)#

ReAct agent.

Subclasses AgentRunner with a ReActAgentWorker.

For the legacy implementation see:

from llama_index.agent.legacy.react.base import ReActAgent

async achat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Async version of main chat interface.

async arun_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async).

as_query_component(partial: Optional[Dict[str, Any]] = None, **kwargs: Any) QueryComponent#

Get query component.

async astream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Async version of main chat interface.

async astream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (async stream).

chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) AgentChatResponse#

Main chat interface.

chat_repl() None#

Enter interactive chat REPL.

create_task(input: str, **kwargs: Any) Task#

Create task.

delete_task(task_id: str) None#

Delete task.

NOTE: this will not delete any previous executions from memory.

finalize_response(task_id: str, step_output: Optional[TaskStepOutput] = None) Union[AgentChatResponse, StreamingAgentChatResponse]#

Finalize response.

classmethod from_tools(tools: ~typing.Optional[~typing.List[~llama_index.tools.types.BaseTool]] = None, tool_retriever: ~typing.Optional[~llama_index.objects.base.ObjectRetriever[~llama_index.tools.types.BaseTool]] = None, llm: ~typing.Optional[~llama_index.llms.llm.LLM] = None, chat_history: ~typing.Optional[~typing.List[~llama_index.core.llms.types.ChatMessage]] = None, memory: ~typing.Optional[~llama_index.memory.types.BaseMemory] = None, memory_cls: ~typing.Type[~llama_index.memory.types.BaseMemory] = <class 'llama_index.memory.chat_memory_buffer.ChatMemoryBuffer'>, max_iterations: int = 10, react_chat_formatter: ~typing.Optional[~llama_index.agent.react.formatter.ReActChatFormatter] = None, output_parser: ~typing.Optional[~llama_index.agent.react.output_parser.ReActOutputParser] = None, callback_manager: ~typing.Optional[~llama_index.callbacks.base.CallbackManager] = None, verbose: bool = False, context: ~typing.Optional[str] = None, **kwargs: ~typing.Any) ReActAgent#

Convenience constructor method from a set of BaseTools (optional).

NOTE: kwargs should be exhausted by this point. In other words, the various upstream components such as BaseSynthesizer (response synthesizer) or BaseRetriever should already have consumed their respective kwargs during construction.

Returns

ReActAgent
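For instance, a small sketch (the conversion tool and model name are illustrative):

from llama_index.agent import ReActAgent
from llama_index.llms import OpenAI
from llama_index.tools import FunctionTool

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

agent = ReActAgent.from_tools(
    tools=[FunctionTool.from_defaults(fn=celsius_to_fahrenheit)],
    llm=OpenAI(model="gpt-4"),
    max_iterations=10,
    verbose=True,
)
response = agent.chat("What is 100 degrees Celsius in Fahrenheit?")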

get_completed_step(task_id: str, step_id: str, **kwargs: Any) TaskStepOutput#

Get completed step.

get_completed_steps(task_id: str, **kwargs: Any) List[TaskStepOutput]#

Get completed steps.

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_task(task_id: str, **kwargs: Any) Task#

Get task.

get_upcoming_steps(task_id: str, **kwargs: Any) List[TaskStep]#

Get upcoming steps.

list_tasks(**kwargs: Any) List[Task]#

List tasks.

reset() None#

Reset conversation state.

run_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step.

stream_chat(message: str, chat_history: Optional[List[ChatMessage]] = None, tool_choice: Optional[Union[str, dict]] = None) StreamingAgentChatResponse#

Stream chat interface.

stream_step(task_id: str, input: Optional[str] = None, step: Optional[TaskStep] = None, **kwargs: Any) TaskStepOutput#

Run step (stream).

streaming_chat_repl() None#

Enter interactive chat REPL with streaming responses.

undo_step(task_id: str) None#

Undo previous step.

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

class llama_index.agent.ReActAgentWorker(tools: Sequence[BaseTool], llm: LLM, max_iterations: int = 10, react_chat_formatter: Optional[ReActChatFormatter] = None, output_parser: Optional[ReActOutputParser] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None)#

ReAct Agent worker.

async arun_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async).

async astream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (async stream).

finalize_task(task: Task, **kwargs: Any) None#

Finalize task, after all the steps are completed.

classmethod from_tools(tools: Optional[Sequence[BaseTool]] = None, tool_retriever: Optional[ObjectRetriever[BaseTool]] = None, llm: Optional[LLM] = None, max_iterations: int = 10, react_chat_formatter: Optional[ReActChatFormatter] = None, output_parser: Optional[ReActOutputParser] = None, callback_manager: Optional[CallbackManager] = None, verbose: bool = False, **kwargs: Any) ReActAgentWorker#

Convenience constructor method from a set of BaseTools (optional).

NOTE: kwargs should be exhausted by this point. In other words, the various upstream components such as BaseSynthesizer (response synthesizer) or BaseRetriever should already have consumed their respective kwargs during construction.

Returns

ReActAgentWorker

get_prompts() Dict[str, BasePromptTemplate]#

Get a dictionary of prompts.

get_tools(input: str) List[AsyncBaseTool]#

Get tools.

initialize_step(task: Task, **kwargs: Any) TaskStep#

Initialize step from task.

run_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step.

set_callback_manager(callback_manager: CallbackManager) None#

Set callback manager.

stream_step(step: TaskStep, task: Task, **kwargs: Any) TaskStepOutput#

Run step (stream).

update_prompts(prompts_dict: Dict[str, BasePromptTemplate]) None#

Update prompts.

Other prompts will remain in place.

pydantic model llama_index.agent.ReActChatFormatter#

ReAct chat formatter.

Show JSON schema
{
   "title": "ReActChatFormatter",
   "description": "ReAct chat formatter.",
   "type": "object",
   "properties": {
      "system_header": {
         "title": "System Header",
         "default": "\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\n\n## Tools\nYou have access to a wide variety of tools. You are responsible for using\nthe tools in any sequence you deem appropriate to complete the task at hand.\nThis may require breaking the task into subtasks and using different tools\nto complete each subtask.\n\nYou have access to the following tools:\n{tool_desc}\n\n## Output Format\nTo answer the question, please use the following format.\n\n```\nThought: I need to use a tool to help me answer the question.\nAction: tool name (one of {tool_names}) if using a tool.\nAction Input: the input to the tool, in a JSON format representing the kwargs (e.g. {{\"input\": \"hello world\", \"num_beams\": 5}})\n```\n\nPlease ALWAYS start with a Thought.\n\nPlease use a valid JSON format for the Action Input. Do NOT do this {{'input': 'hello world', 'num_beams': 5}}.\n\nIf this format is used, the user will respond in the following format:\n\n```\nObservation: tool response\n```\n\nYou should keep repeating the above format until you have enough information\nto answer the question without using any more tools. At that point, you MUST respond\nin the one of the following two formats:\n\n```\nThought: I can answer without using any more tools.\nAnswer: [your answer here]\n```\n\n```\nThought: I cannot answer the question with the provided tools.\nAnswer: Sorry, I cannot answer your query.\n```\n\n## Current Conversation\nBelow is the current conversation consisting of interleaving human and assistant messages.\n\n",
         "type": "string"
      },
      "context": {
         "title": "Context",
         "default": "",
         "type": "string"
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • context (str)

  • system_header (str)

field context: str = ''#
field system_header: str = '\nYou are designed to help with a variety of tasks, from answering questions     to providing summaries to other types of analyses.\n\n## Tools\nYou have access to a wide variety of tools. You are responsible for using\nthe tools in any sequence you deem appropriate to complete the task at hand.\nThis may require breaking the task into subtasks and using different tools\nto complete each subtask.\n\nYou have access to the following tools:\n{tool_desc}\n\n## Output Format\nTo answer the question, please use the following format.\n\n```\nThought: I need to use a tool to help me answer the question.\nAction: tool name (one of {tool_names}) if using a tool.\nAction Input: the input to the tool, in a JSON format representing the kwargs (e.g. {{"input": "hello world", "num_beams": 5}})\n```\n\nPlease ALWAYS start with a Thought.\n\nPlease use a valid JSON format for the Action Input. Do NOT do this {{\'input\': \'hello world\', \'num_beams\': 5}}.\n\nIf this format is used, the user will respond in the following format:\n\n```\nObservation: tool response\n```\n\nYou should keep repeating the above format until you have enough information\nto answer the question without using any more tools. At that point, you MUST respond\nin the one of the following two formats:\n\n```\nThought: I can answer without using any more tools.\nAnswer: [your answer here]\n```\n\n```\nThought: I cannot answer the question with the provided tools.\nAnswer: Sorry, I cannot answer your query.\n```\n\n## Current Conversation\nBelow is the current conversation consisting of interleaving human and assistant messages.\n\n'#
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

format(tools: Sequence[BaseTool], chat_history: List[ChatMessage], current_reasoning: Optional[List[BaseReasoningStep]] = None) List[ChatMessage]#

Format chat history into list of ChatMessage.

classmethod from_context(context: str) ReActChatFormatter#

Create ReActChatFormatter from context.

NOTE: deprecated

classmethod from_defaults(system_header: Optional[str] = None, context: Optional[str] = None) ReActChatFormatter#

Create ReActChatFormatter from defaults.
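A small sketch of customizing the formatter and handing it to a ReAct agent (the context string is illustrative):

from llama_index.agent import ReActAgent, ReActChatFormatter

formatter = ReActChatFormatter.from_defaults(
    context="The user is a beginner; explain each step simply.",
)
agent = ReActAgent.from_tools(
    tools=[],  # add BaseTools here
    react_chat_formatter=formatter,
)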

classmethod from_orm(obj: Any) Model#
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod parse_obj(obj: Any) Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode#
classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model#
llama_index.agent.RetrieverOpenAIAgent#

alias of FnRetrieverOpenAIAgent

pydantic model llama_index.agent.Task#

Agent Task.

Represents a “run” of an agent given a user input.
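Tasks are normally created through an AgentRunner rather than constructed directly; a brief sketch of inspecting one (the agent here is assumed from the AgentRunner example above):

task = agent.create_task("Draft a haiku about the sea.")
print(task.task_id, task.input)

# extra_state is free-form scratch space that steps may read and write.
task.extra_state["audience"] = "children"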

Show JSON schema
{
   "title": "Task",
   "description": "Agent Task.\n\nRepresents a \"run\" of an agent given a user input.",
   "type": "object",
   "properties": {
      "task_id": {
         "title": "Task Id",
         "description": "Task ID",
         "type": "string"
      },
      "input": {
         "title": "Input",
         "description": "User input",
         "type": "string"
      },
      "memory": {
         "title": "Memory",
         "description": "Conversational Memory. Maintains state before execution of this task.",
         "type": "<class 'llama_index.memory.types.BaseMemory'>",
         "allOf": [
            {
               "$ref": "#/definitions/BaseMemory"
            }
         ]
      },
      "callback_manager": {
         "title": "Callback Manager"
      },
      "extra_state": {
         "title": "Extra State",
         "description": "Additional user-specified state for a given task. Can be modified throughout the execution of a task.",
         "type": "object"
      }
   },
   "required": [
      "input",
      "memory"
   ],
   "definitions": {
      "BaseMemory": {
         "title": "BaseMemory",
         "description": "Base class for all memory types.\n\nNOTE: The interface for memory is not yet finalized and is subject to change.",
         "type": "object",
         "properties": {
            "class_name": {
               "title": "Class Name",
               "type": "string",
               "default": "BaseMemory"
            }
         }
      }
   }
}

Config
  • arbitrary_types_allowed: bool = True

Fields
  • callback_manager (llama_index.callbacks.base.CallbackManager)

  • extra_state (Dict[str, Any])

  • input (str)

  • memory (llama_index.memory.types.BaseMemory)

  • task_id (str)

field callback_manager: CallbackManager [Optional]#

Callback manager for the task.

field extra_state: Dict[str, Any] [Optional]#

Additional user-specified state for a given task. Can be modified throughout the execution of a task.

field input: str [Required]#

User input

field memory: BaseMemory [Required]#

Conversational Memory. Maintains state before execution of this task.

field task_id: str [Optional]#

Task ID

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from new model, as with values this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_orm(obj: Any) Model#
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod parse_obj(obj: Any) Model#
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model#
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny#
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode#
classmethod update_forward_refs(**localns: Any) None#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model#