Service Context#

The service context is a utility container for LlamaIndex index and query classes. It bundles the objects commonly used to configure every index and query, such as the LLM, the PromptHelper (for configuring input size/chunk size), the BaseEmbedding (for configuring the embedding model), and more.
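For example, a typical flow is to build a service context once and pass it to index (and, transitively, query) classes at construction time. A minimal sketch, assuming documents live under an illustrative ./data directory and that default LLM/embedding credentials are configured:

    from llama_index.core import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

    # Bundle the shared configuration in one container.
    service_context = ServiceContext.from_defaults(chunk_size=512)

    # Pass the same container to index and query classes.
    documents = SimpleDirectoryReader("./data").load_data()  # "./data" is a placeholder path
    index = VectorStoreIndex.from_documents(documents, service_context=service_context)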



class llama_index.core.indices.service_context.ServiceContext(llm_predictor: BaseLLMPredictor, prompt_helper: PromptHelper, embed_model: BaseEmbedding, transformations: List[TransformComponent], llama_logger: LlamaLogger, callback_manager: CallbackManager)#

Service Context container.

The service context container is a utility container for LlamaIndex index and query classes. It contains the following:

  • llm_predictor: BaseLLMPredictor

  • prompt_helper: PromptHelper

  • embed_model: BaseEmbedding

  • node_parser: NodeParser

  • llama_logger: LlamaLogger (deprecated)

  • callback_manager: CallbackManager

classmethod from_defaults(llm_predictor: Optional[BaseLLMPredictor] = None, llm: Optional[Union[str, LLM, BaseLanguageModel]] = 'default', prompt_helper: Optional[PromptHelper] = None, embed_model: Optional[Any] = 'default', node_parser: Optional[NodeParser] = None, text_splitter: Optional[TextSplitter] = None, transformations: Optional[List[TransformComponent]] = None, llama_logger: Optional[LlamaLogger] = None, callback_manager: Optional[CallbackManager] = None, system_prompt: Optional[str] = None, query_wrapper_prompt: Optional[BasePromptTemplate] = None, pydantic_program_mode: PydanticProgramMode = PydanticProgramMode.DEFAULT, chunk_size: Optional[int] = None, chunk_overlap: Optional[int] = None, context_window: Optional[int] = None, num_output: Optional[int] = None, chunk_size_limit: Optional[int] = None) → ServiceContext#

Create a ServiceContext from defaults. If an argument is specified, its value is used for that parameter; otherwise the default value is used.

You can change the base defaults by setting llama_index.global_service_context to a ServiceContext object with your desired settings.
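For instance, a sketch of overriding the base defaults globally, following the attribute path described above:

    import llama_index
    from llama_index.core import ServiceContext

    # ServiceContext.from_defaults() calls made after this point fall back
    # to these settings rather than the library defaults.
    llama_index.global_service_context = ServiceContext.from_defaults(chunk_size=256)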

Parameters
  • llm_predictor (Optional[BaseLLMPredictor]) – LLMPredictor

  • prompt_helper (Optional[PromptHelper]) – PromptHelper

  • embed_model (Optional[BaseEmbedding]) – BaseEmbedding or “local” (use local model)

  • node_parser (Optional[NodeParser]) – NodeParser

  • llama_logger (Optional[LlamaLogger]) – LlamaLogger (deprecated)

  • chunk_size (Optional[int]) – chunk size to use when splitting documents into nodes

  • callback_manager (Optional[CallbackManager]) – CallbackManager

  • system_prompt (Optional[str]) – System-wide prompt to be prepended to all input prompts, used to guide system “decision making”

  • query_wrapper_prompt (Optional[BasePromptTemplate]) – A prompt template used to wrap passed-in input queries.

Deprecated Args:

chunk_size_limit (Optional[int]): renamed to chunk_size
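As an illustration, a from_defaults call that overrides a handful of the parameters above while leaving the rest at their defaults (embed_model="local" assumes a local embedding model is installed):

    from llama_index.core import ServiceContext

    # Specified arguments override the defaults; unspecified ones
    # (LLM, prompt helper, callback manager, ...) keep their default values.
    service_context = ServiceContext.from_defaults(
        embed_model="local",          # per the embed_model parameter, "local" selects a local model
        chunk_size=1024,
        chunk_overlap=20,
        system_prompt="Answer concisely.",
    )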

classmethod from_service_context(service_context: ServiceContext, llm_predictor: Optional[BaseLLMPredictor] = None, llm: Optional[Union[str, LLM, BaseLanguageModel]] = 'default', prompt_helper: Optional[PromptHelper] = None, embed_model: Optional[Any] = 'default', node_parser: Optional[NodeParser] = None, text_splitter: Optional[TextSplitter] = None, transformations: Optional[List[TransformComponent]] = None, llama_logger: Optional[LlamaLogger] = None, callback_manager: Optional[CallbackManager] = None, system_prompt: Optional[str] = None, query_wrapper_prompt: Optional[BasePromptTemplate] = None, chunk_size: Optional[int] = None, chunk_overlap: Optional[int] = None, context_window: Optional[int] = None, num_output: Optional[int] = None, chunk_size_limit: Optional[int] = None) → ServiceContext#

Instantiate a new service context, using an existing service context as the source of defaults.
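For example, a sketch of deriving a variant from an existing context (base_context is assumed to be a previously constructed ServiceContext):

    from llama_index.core import ServiceContext

    # Arguments left unspecified are inherited from base_context,
    # not from the library defaults.
    small_chunk_context = ServiceContext.from_service_context(
        base_context,
        chunk_size=256,
    )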

property node_parser: NodeParser#

Get the node parser.
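This is useful for reusing the configured parser directly, e.g. to preview how documents will be chunked. A sketch, assuming service_context and documents already exist:

    # Split documents into nodes with the same parser the indices will use.
    nodes = service_context.node_parser.get_nodes_from_documents(documents)
    print(f"{len(nodes)} nodes produced")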

to_dict() → dict#

Convert service context to dict.
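A sketch of inspecting or persisting the configuration via to_dict() (JSON-serializability of every configured component is an assumption, not a guarantee):

    import json

    # Dump the container's configuration to a plain dict.
    ctx_dict = service_context.to_dict()
    print(json.dumps(ctx_dict, indent=2))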