OpensearchVectorClient#

class llama_index.vector_stores.OpensearchVectorClient(endpoint: str, index: str, dim: int, embedding_field: str = 'embedding', text_field: str = 'content', method: Optional[dict] = None, max_chunk_bytes: int = 1048576, search_pipeline: Optional[str] = None, **kwargs: Any)#

Bases: object

Object encapsulating an OpenSearch index that has vector search enabled.

If the index does not yet exist, it is created during init. The underlying index is therefore assumed either 1) not to exist yet, or 2) to have been created by previous usage of this class.

Parameters
  • endpoint (str) – URL (http/https) of the OpenSearch endpoint

  • index (str) – Name of the OpenSearch index

  • dim (int) – Dimension of the vector

  • embedding_field (str) – Name of the index field in which the embedding array is stored.

  • text_field (str) – Name of the field from which to retrieve text.

  • method (Optional[dict]) – OpenSearch "method" JSON object for configuring the k-NN index. This includes the engine, metric, and other config params. Defaults to: {"name": "hnsw", "space_type": "l2", "engine": "faiss", "parameters": {"ef_construction": 256, "m": 48}}

  • max_chunk_bytes (int) – Maximum size, in bytes, of each bulk indexing request sent to OpenSearch.

  • search_pipeline (Optional[str]) – Name of an OpenSearch search pipeline to apply at query time (used for hybrid search).

  • **kwargs – Optional arguments passed to the OpenSearch client from opensearch-py.
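For reference, a minimal construction sketch. The endpoint URL, index name, and dimension below are illustrative placeholders, not defaults; dim must match the embedding model in use:

    from llama_index.vector_stores import OpensearchVectorClient

    # Placeholder endpoint and index name; dim=1536 is illustrative
    # (e.g. for OpenAI text-embedding-ada-002).
    client = OpensearchVectorClient(
        endpoint="http://localhost:9200",
        index="my_vector_index",
        dim=1536,
        embedding_field="embedding",
        text_field="content",
        # Optional: override the default k-NN method configuration.
        method={
            "name": "hnsw",
            "space_type": "l2",
            "engine": "faiss",
            "parameters": {"ef_construction": 256, "m": 48},
        },
    )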

Methods Summary

delete_doc_id(doc_id)

Delete a document.

index_results(nodes, **kwargs)

Store results in the index.

query(query_mode, query_str, query_embedding, k[, filters])

Run a query against the index.

Methods Documentation

delete_doc_id(doc_id: str) → None#

Delete a document.

Parameters

doc_id (str) – document id
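A hypothetical call, assuming client is the instance constructed earlier and the id refers to a previously indexed node:

    # "node-id-123" is a placeholder document id.
    client.delete_doc_id("node-id-123")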

index_results(nodes: List[BaseNode], **kwargs: Any) → List[str]#

Store results in the index.
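A minimal indexing sketch, assuming the client from above and the legacy llama_index import layout used on this page. The nodes and their precomputed embeddings are placeholders; each embedding's length must equal the dim the client was created with:

    from llama_index.schema import TextNode

    # Placeholder nodes with precomputed embeddings (length must match `dim`).
    nodes = [
        TextNode(text="OpenSearch supports approximate k-NN search.", embedding=[0.1] * 1536),
        TextNode(text="Vectors live in the configured embedding field.", embedding=[0.2] * 1536),
    ]
    ids = client.index_results(nodes)  # returns the ids of the stored nodes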

query(query_mode: VectorStoreQueryMode, query_str: Optional[str], query_embedding: List[float], k: int, filters: Optional[MetadataFilters] = None) → VectorStoreQueryResult#
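A query sketch under the same assumptions; the query embedding below is a placeholder and should in practice be produced by the same embedding model used at indexing time:

    from llama_index.vector_stores.types import VectorStoreQueryMode

    # Pure vector search; query_str may be None in this mode.
    result = client.query(
        VectorStoreQueryMode.DEFAULT,
        query_str=None,
        query_embedding=[0.1] * 1536,  # placeholder vector
        k=5,
    )
    for node, score in zip(result.nodes or [], result.similarities or []):
        print(score, node.get_content())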