Service Context

The service context container is a utility container for LlamaIndex index and query classes. It holds the objects commonly used to configure every index and query, such as the LLMPredictor (for configuring the LLM), the PromptHelper (for configuring input size and chunk size), the BaseEmbedding (for configuring the embedding model), and more.



class llama_index.indices.service_context.ServiceContext(llm_predictor: LLMPredictor, prompt_helper: PromptHelper, embed_model: BaseEmbedding, node_parser: NodeParser, llama_logger: LlamaLogger, callback_manager: CallbackManager, chunk_size_limit: Optional[int] = None)

Service Context container.

The service context container is a utility container for LlamaIndex index and query classes. It contains the following:

  • llm_predictor: LLMPredictor

  • prompt_helper: PromptHelper

  • embed_model: BaseEmbedding

  • node_parser: NodeParser

  • llama_logger: LlamaLogger (deprecated)

  • callback_manager: CallbackManager

  • chunk_size_limit: chunk size limit

classmethod from_defaults(llm_predictor: Optional[LLMPredictor] = None, prompt_helper: Optional[PromptHelper] = None, embed_model: Optional[BaseEmbedding] = None, node_parser: Optional[NodeParser] = None, llama_logger: Optional[LlamaLogger] = None, callback_manager: Optional[CallbackManager] = None, chunk_size_limit: Optional[int] = None) → ServiceContext

Create a ServiceContext from defaults. If an argument is specified, then use the argument value provided for that parameter. If an argument is not specified, then use the default value.

Parameters
  • llm_predictor (Optional[LLMPredictor]) – LLM predictor, for configuring the LLM

  • prompt_helper (Optional[PromptHelper]) – prompt helper, for configuring input size and chunk size

  • embed_model (Optional[BaseEmbedding]) – embedding model

  • node_parser (Optional[NodeParser]) – node parser, for parsing documents into nodes

  • llama_logger (Optional[LlamaLogger]) – logger (deprecated)

  • callback_manager (Optional[CallbackManager]) – callback manager

  • chunk_size_limit (Optional[int]) – maximum chunk size
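The fallback semantics of from_defaults — use the argument when provided, otherwise fall back to a default — can be illustrated with a small self-contained sketch. The classes below are simplified stand-ins for the real LlamaIndex components, not the library's actual API:

```python
from dataclasses import dataclass
from typing import Optional


# Illustrative stand-ins for the real LlamaIndex components.
@dataclass
class LLMPredictor:
    model_name: str = "default-llm"


@dataclass
class PromptHelper:
    max_input_size: int = 4096


@dataclass
class ServiceContextSketch:
    llm_predictor: LLMPredictor
    prompt_helper: PromptHelper
    chunk_size_limit: Optional[int] = None

    @classmethod
    def from_defaults(
        cls,
        llm_predictor: Optional[LLMPredictor] = None,
        prompt_helper: Optional[PromptHelper] = None,
        chunk_size_limit: Optional[int] = None,
    ) -> "ServiceContextSketch":
        # Each argument falls back to a default instance when not provided.
        return cls(
            llm_predictor=llm_predictor or LLMPredictor(),
            prompt_helper=prompt_helper or PromptHelper(),
            chunk_size_limit=chunk_size_limit,
        )


# Override only the LLM predictor; everything else uses defaults.
ctx = ServiceContextSketch.from_defaults(
    llm_predictor=LLMPredictor(model_name="my-llm")
)
```

In the real library, the same pattern lets you override just the components you care about (for example, only the embedding model) while the rest are constructed with sensible defaults.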