Vector Store
Vector stores.
- class llama_index.vector_stores.ChatGPTRetrievalPluginClient(endpoint_url: str, bearer_token: Optional[str] = None, retries: Optional[Retry] = None, batch_size: int = 100, **kwargs: Any)
ChatGPT Retrieval Plugin Client.
In this client, we make use of the endpoints defined by the ChatGPT Retrieval Plugin.
- Parameters
endpoint_url (str) – URL of the ChatGPT Retrieval Plugin.
bearer_token (Optional[str]) – Bearer token for the ChatGPT Retrieval Plugin.
retries (Optional[Retry]) – Retry object for the ChatGPT Retrieval Plugin.
batch_size (int) – Batch size for the ChatGPT Retrieval Plugin.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- property client: None
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Get nodes for response.
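A minimal construction sketch; the endpoint URL and bearer token below are placeholders for a running Retrieval Plugin deployment:

```python
from llama_index.vector_stores import ChatGPTRetrievalPluginClient

# Placeholder endpoint and token for a locally running Retrieval Plugin.
vector_store = ChatGPTRetrievalPluginClient(
    endpoint_url="http://localhost:8000",
    bearer_token="my-bearer-token",
    batch_size=100,
)
```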
- class llama_index.vector_stores.ChromaVectorStore(chroma_collection: Any, **kwargs: Any)
Chroma vector store.
In this vector store, embeddings are stored within a ChromaDB collection.
During query time, the index uses ChromaDB to query for the top k most similar nodes.
- Parameters
chroma_collection (chromadb.api.models.Collection.Collection) – ChromaDB collection instance
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Return client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
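A minimal usage sketch, assuming a local chromadb install and the StorageContext/VectorStoreIndex pattern from the same library version; the collection name and ./data directory are placeholders:

```python
import chromadb
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import ChromaVectorStore

# Create an in-process ChromaDB collection to back the store.
chroma_client = chromadb.Client()
collection = chroma_client.create_collection("quickstart")

vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Embed documents from a placeholder directory and index them into Chroma.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```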
- class llama_index.vector_stores.DeepLakeVectorStore(dataset_path: str = 'llama_index', token: Optional[str] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, ingestion_num_workers: int = 4, overwrite: bool = False)
The DeepLake Vector Store.
In this vector store, we store the text, its embedding, and a few pieces of its metadata in a DeepLake dataset. This implementation allows the use of an already existing DeepLake dataset if it was created by this vector store. It also supports creating a new dataset if one doesn't exist or if overwrite is set to True.
- Parameters
dataset_path (str, optional) – Path to the DeepLake dataset where data will be stored. Defaults to "llama_index".
overwrite (bool, optional) – Whether to overwrite an existing dataset with the same name. Defaults to False.
token (str, optional) – The DeepLake token that grants access to the dataset. Defaults to None.
read_only (bool, optional) – Whether to open the dataset in read-only mode.
ingestion_batch_size (int, optional) – Batch size used when ingesting data into the DeepLake dataset. Defaults to 1024.
ingestion_num_workers (int, optional) – Number of workers to use during data ingestion. Defaults to 4.
- Raises
ImportError – Unable to import deeplake.
UserNotLoggedinException – When the user is not logged in with credentials or a token.
TokenPermissionError – When the dataset does not exist or the user doesn't have permission to modify it.
InvalidTokenException – If the specified token is invalid.
- Returns
Vector store that supports add, delete, and query.
- Return type
DeepLakeVectorStore
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add the embeddings and their nodes into DeepLake.
- Parameters
embedding_results (List[NodeWithEmbedding]) – The embeddings and their data to insert.
- Raises
UserNotLoggedinException – When the user is not logged in with credentials or a token.
TokenPermissionError – When the dataset does not exist or the user doesn't have permission to modify it.
InvalidTokenException – If the specified token is invalid.
- Returns
List of ids inserted.
- Return type
List[str]
- property client: None
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
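A minimal usage sketch with a local dataset; the dataset path and ./data directory are placeholders, and the indexing pattern is the usual StorageContext/VectorStoreIndex flow:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import DeepLakeVectorStore

# Local placeholder dataset path; overwrite=True replaces any prior run.
vector_store = DeepLakeVectorStore(dataset_path="./my_deeplake", overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```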
- class llama_index.vector_stores.DocArrayHnswVectorStore(work_dir: str, dim: int = 1536, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1)
Class representing a DocArray HNSW vector store.
This class is a lightweight Document Index implementation provided by DocArray. It stores vectors on disk with hnswlib and stores all other data in SQLite.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Adds embedding results to the vector store.
- Parameters
embedding_results (List[NodeWithEmbedding]) – List of nodes with corresponding embeddings.
- Returns
List of document IDs added to the vector store.
- Return type
List[str]
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Deletes a document from the vector store.
- Parameters
ref_doc_id (str) – Document ID to be deleted.
**delete_kwargs (Any) – Additional arguments to pass to the delete method.
- num_docs() int
Retrieves the number of documents in the index.
- Returns
The number of documents in the index.
- Return type
int
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Queries the vector store and retrieves the results.
- Parameters
query (VectorStoreQuery) – Query for the vector store.
- Returns
Result of the query from the vector store.
- Return type
VectorStoreQueryResult
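A minimal usage sketch; the work_dir and ./data directory are placeholders, and dim=1536 assumes OpenAI ada-002 embeddings:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import DocArrayHnswVectorStore

# work_dir is a placeholder; the hnswlib files and SQLite database live there.
vector_store = DocArrayHnswVectorStore(work_dir="./hnsw_index", dim=1536)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```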
- class llama_index.vector_stores.DocArrayInMemoryVectorStore(index_path: Optional[str] = None, metric: Literal['cosine_sim', 'euclidian_dist', 'sgeuclidean_dist'] = 'cosine_sim')
Class representing a DocArray in-memory vector store.
This class is a document index provided by DocArray that stores documents in memory.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Adds embedding results to the vector store.
- Parameters
embedding_results (List[NodeWithEmbedding]) – List of nodes with corresponding embeddings.
- Returns
List of document IDs added to the vector store.
- Return type
List[str]
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Deletes a document from the vector store.
- Parameters
ref_doc_id (str) – Document ID to be deleted.
**delete_kwargs (Any) – Additional arguments to pass to the delete method.
- num_docs() int
Retrieves the number of documents in the index.
- Returns
The number of documents in the index.
- Return type
int
- persist(persist_path: str, fs: Optional[AbstractFileSystem] = None) None
Persists the in-memory vector store to a file.
- Parameters
persist_path (str) – The path to persist the index to.
fs (fsspec.AbstractFileSystem, optional) – Filesystem to persist to. (doesn't apply)
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Queries the vector store and retrieves the results.
- Parameters
query (VectorStoreQuery) – Query for the vector store.
- Returns
Result of the query from the vector store.
- Return type
VectorStoreQueryResult
- class llama_index.vector_stores.FaissVectorStore(faiss_index: Any)
Faiss Vector Store.
Embeddings are stored within a Faiss index.
During query time, the index uses Faiss to query for the top k embeddings and returns the corresponding indices.
- Parameters
faiss_index (faiss.Index) – Faiss index instance
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
NOTE: in the Faiss vector store, we do not store text in Faiss.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Return the Faiss index.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- persist(persist_path: str = './storage/vector_store.json', fs: Optional[AbstractFileSystem] = None) None
Save to file.
This method saves the vector store to disk.
- Parameters
persist_path (str) – The save path of the file.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
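A minimal usage sketch; the flat L2 index and its 1536 dimensionality (matching OpenAI ada-002 embeddings) are assumptions, as is the ./data directory:

```python
import faiss
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import FaissVectorStore

# A flat (exact) L2 index sized for ada-002 embeddings.
faiss_index = faiss.IndexFlatL2(1536)
vector_store = FaissVectorStore(faiss_index=faiss_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Faiss holds only vectors, so persist the store explicitly for reuse.
vector_store.persist("./storage/vector_store.json")
```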
- class llama_index.vector_stores.LanceDBVectorStore(uri: str, table_name: str = 'vectors', nprobes: int = 20, refine_factor: Optional[int] = None, **kwargs: Any)
The LanceDB Vector Store.
Stores text and embeddings in LanceDB. The vector store will open an existing LanceDB dataset or create the dataset if it does not exist.
- Parameters
uri (str, required) – Location where LanceDB will store its files.
table_name (str, optional) – The table name where the embeddings will be stored. Defaults to "vectors".
nprobes (int, optional) – The number of probes used. A higher number makes the search more accurate but also slower. Defaults to 20.
refine_factor (int, optional) – Refine the results by reading extra elements and re-ranking them in memory. Defaults to None.
- Raises
ImportError – Unable to import lancedb.
- Returns
VectorStore that supports creating LanceDB datasets and querying them.
- Return type
LanceDBVectorStore
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the vector store.
- property client: None
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
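A minimal usage sketch; the uri and ./data directory are placeholder paths:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import LanceDBVectorStore

# Placeholder directory; LanceDB creates the dataset there if it is missing.
vector_store = LanceDBVectorStore(uri="/tmp/lancedb", table_name="vectors")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```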
- class llama_index.vector_stores.MetalVectorStore(api_key: str, client_id: str, index_id: str)
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Return Metal client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query vector store.
- class llama_index.vector_stores.MilvusVectorStore(collection_name: str = 'llamalection', index_params: Optional[dict] = None, search_params: Optional[dict] = None, dim: Optional[int] = None, host: str = 'localhost', port: int = 19530, user: str = '', password: str = '', use_secure: bool = False, overwrite: bool = False, **kwargs: Any)
The Milvus Vector Store.
In this vector store, we store the text, its embedding, and a few pieces of its metadata in a Milvus collection. This implementation allows the use of an already existing collection if it was created by this vector store. It also supports creating a new collection if one doesn't exist or if overwrite is set to True.
- Parameters
collection_name (str, optional) – The name of the collection where data will be stored. Defaults to "llamalection".
index_params (dict, optional) – The index parameters for Milvus; if none are provided, an HNSW index will be used. Defaults to None.
search_params (dict, optional) – The search parameters for a Milvus query. If none are provided, default params will be generated. Defaults to None.
dim (int, optional) – The dimension of the embeddings. If it is not provided, collection creation will be done on the first insert. Defaults to None.
host (str, optional) – The host address of Milvus. Defaults to "localhost".
port (int, optional) – The port of Milvus. Defaults to 19530.
user (str, optional) – The username for RBAC. Defaults to "".
password (str, optional) – The password for RBAC. Defaults to "".
use_secure (bool, optional) – Use HTTPS. Required for Zilliz Cloud. Defaults to False.
overwrite (bool, optional) – Whether to overwrite an existing collection with the same name. Defaults to False.
- Raises
ImportError – Unable to import pymilvus.
MilvusException – Error communicating with Milvus; more detail can be found in the logs at Debug level.
- Returns
Vector store that supports add, delete, and query.
- Return type
MilvusVectorStore
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add the embeddings and their nodes into Milvus.
- Parameters
embedding_results (List[NodeWithEmbedding]) – The embeddings and their data to insert.
- Raises
MilvusException – Failed to insert data.
- Returns
List of ids inserted.
- Return type
List[str]
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- Raises
MilvusException – Failed to delete the doc.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
doc_ids (Optional[List[str]]) – list of doc_ids to filter by
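A minimal usage sketch, assuming a Milvus server reachable at the default localhost:19530 and a placeholder ./data directory:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import MilvusVectorStore

# overwrite=True replaces any prior collection with the default name.
vector_store = MilvusVectorStore(host="localhost", port=19530, overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```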
- class llama_index.vector_stores.MyScaleVectorStore(myscale_client: Optional[Any] = None, table: str = 'llama_index', database: str = 'default', index_type: str = 'IVFFLAT', metric: str = 'cosine', batch_size: int = 32, index_params: Optional[dict] = None, search_params: Optional[dict] = None, service_context: Optional[ServiceContext] = None, **kwargs: Any)
MyScale Vector Store.
In this vector store, embeddings and docs are stored within an existing MyScale cluster.
During query time, the index uses MyScale to query for the top k most similar nodes.
- Parameters
myscale_client (httpclient) – clickhouse-connect httpclient of an existing MyScale cluster.
table (str, optional) – The name of the MyScale table where data will be stored. Defaults to "llama_index".
database (str, optional) – The name of the MyScale database where data will be stored. Defaults to "default".
index_type (str, optional) – The type of the MyScale vector index. Defaults to "IVFFLAT".
metric (str, optional) – The metric type of the MyScale vector index. Defaults to "cosine".
batch_size (int, optional) – The number of documents to insert per batch. Defaults to 32.
index_params (dict, optional) – The index parameters for MyScale. Defaults to None.
search_params (dict, optional) – The search parameters for a MyScale query. Defaults to None.
service_context (ServiceContext, optional) – Vector store service context. Defaults to None.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- drop() None
Drop the MyScale index and table.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query (VectorStoreQuery) – query
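A minimal construction sketch; the cluster host and credentials are placeholders for an existing MyScale cluster reachable via clickhouse-connect:

```python
import clickhouse_connect
from llama_index.vector_stores import MyScaleVectorStore

# Placeholder host and credentials for your MyScale cluster.
client = clickhouse_connect.get_client(
    host="your-cluster.myscale.com",
    port=8443,
    username="your-username",
    password="your-password",
)
vector_store = MyScaleVectorStore(myscale_client=client)
```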
- class llama_index.vector_stores.OpensearchVectorClient(endpoint: str, index: str, dim: int, embedding_field: str = 'embedding', text_field: str = 'content', extra_info_field: str = 'extra_info', method: Optional[dict] = None, auth: Optional[dict] = None)
Object encapsulating an OpenSearch index that has vector search enabled.
If the index does not yet exist, it is created during init. The underlying index is therefore assumed to either 1) not yet exist, or 2) have been created by previous usage of this class.
- Parameters
endpoint (str) – URL (http/https) of the OpenSearch endpoint
index (str) – Name of the OpenSearch index
dim (int) – Dimension of the vector
embedding_field (str) – Name of the field in the index to store the embedding array in.
text_field (str) – Name of the field to grab text from
method (Optional[dict]) – OpenSearch "method" JSON object for configuring the KNN index. This includes engine, metric, and other config params. Defaults to: {"name": "hnsw", "space_type": "l2", "engine": "faiss", "parameters": {"ef_construction": 256, "m": 48}}
- delete_doc_id(doc_id: str) None
Delete a document.
- Parameters
doc_id (str) – document id
- do_approx_knn(query_embedding: List[float], k: int) VectorStoreQueryResult
Do approximate kNN.
- index_results(results: List[NodeWithEmbedding]) List[str]
Store results in the index.
- class llama_index.vector_stores.OpensearchVectorStore(client: OpensearchVectorClient)
Elasticsearch/OpenSearch vector store.
- Parameters
client (OpensearchVectorClient) – Vector index client to use for data insertion/querying.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
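A minimal sketch wiring the two classes together, assuming an OpenSearch node at a placeholder local endpoint, a placeholder index name, and 1536-dimensional (ada-002) embeddings:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import OpensearchVectorClient, OpensearchVectorStore

# The client creates the KNN-enabled index on init if it does not exist.
client = OpensearchVectorClient(
    endpoint="http://localhost:9200", index="quickstart", dim=1536
)
vector_store = OpensearchVectorStore(client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```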
- class llama_index.vector_stores.PineconeVectorStore(pinecone_index: Optional[Any] = None, index_name: Optional[str] = None, environment: Optional[str] = None, namespace: Optional[str] = None, insert_kwargs: Optional[Dict] = None, add_sparse_vector: bool = False, tokenizer: Optional[Callable] = None, **kwargs: Any)
Pinecone Vector Store.
In this vector store, embeddings and docs are stored within a Pinecone index.
During query time, the index uses Pinecone to query for the top k most similar nodes.
- Parameters
pinecone_index (Optional[pinecone.Index]) – Pinecone index instance
insert_kwargs (Optional[Dict]) – insert kwargs during upsert call.
add_sparse_vector (bool) – whether to add a sparse vector to the index.
tokenizer (Optional[Callable]) – tokenizer to use to generate sparse vectors.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Return Pinecone client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query_embedding (List[float]) – query embedding
similarity_top_k (int) – top k most similar nodes
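A minimal usage sketch, assuming the v2-era pinecone-client API; the API key, environment, and index name are placeholders, and the index must already exist in Pinecone:

```python
import pinecone
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import PineconeVectorStore

# Placeholder credentials and a pre-existing Pinecone index.
pinecone.init(api_key="your-api-key", environment="us-west1-gcp")
pinecone_index = pinecone.Index("quickstart")

vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```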
- class llama_index.vector_stores.QdrantVectorStore(collection_name: str, client: Optional[Any] = None, **kwargs: Any)
Qdrant Vector Store.
In this vector store, embeddings and docs are stored within a Qdrant collection.
During query time, the index uses Qdrant to query for the top k most similar nodes.
- Parameters
collection_name (str) – name of the Qdrant collection
client (Optional[Any]) – QdrantClient instance from the qdrant-client package
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Return the Qdrant client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query (VectorStoreQuery) – query
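A minimal usage sketch using qdrant-client's in-process mode; the collection name and ./data directory are placeholders:

```python
import qdrant_client
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import QdrantVectorStore

# An in-process Qdrant instance; pass a URL instead for a real deployment.
client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(collection_name="quickstart", client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```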
- class llama_index.vector_stores.RedisVectorStore(index_name: str, index_prefix: str = 'llama_index', index_args: Optional[Dict[str, Any]] = None, metadata_fields: Optional[List[str]] = None, redis_url: str = 'redis://localhost:6379', overwrite: bool = False, **kwargs: Any)
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Parameters
embedding_results (List[NodeWithEmbedding]) – List of embedding results to add to the index.
- Returns
List of ids of the documents added to the index.
- Return type
List[str]
- Raises
ValueError – If the index already exists and overwrite is False.
- property client: RedisType
Return the Redis client instance.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- delete_index() None
Delete the index and all documents.
- persist(persist_path: str, fs: Optional[AbstractFileSystem] = None, in_background: bool = True) None
Persist the vector store to disk.
- Parameters
persist_path (str) – Path to persist the vector store to. (doesn't apply)
in_background (bool, optional) – Persist in the background. Defaults to True.
fs (fsspec.AbstractFileSystem, optional) – Filesystem to persist to. (doesn't apply)
- Raises
redis.exceptions.RedisError – If there is an error persisting the index to disk.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query the index.
- Parameters
query (VectorStoreQuery) – query object
- Returns
query result
- Return type
VectorStoreQueryResult
- Raises
ValueError – If query.query_embedding is None.
redis.exceptions.RedisError – If there is an error querying the index.
redis.exceptions.TimeoutError – If there is a timeout querying the index.
ValueError – If no documents are found when querying the index.
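A minimal usage sketch, assuming a Redis server with the search module (e.g. Redis Stack) at the default URL; the index name and ./data directory are placeholders:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import RedisVectorStore

# overwrite=True recreates the search index if it already exists.
vector_store = RedisVectorStore(
    index_name="quickstart",
    redis_url="redis://localhost:6379",
    overwrite=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```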
- class llama_index.vector_stores.SimpleVectorStore(data: Optional[SimpleVectorStoreData] = None, fs: Optional[AbstractFileSystem] = None, **kwargs: Any)
Simple Vector Store.
In this vector store, embeddings are stored within a simple, in-memory dictionary.
- Parameters
simple_vector_store_data_dict (Optional[dict]) – data dict containing the embeddings and doc_ids. See SimpleVectorStoreData for more details.
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- property client: None
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- classmethod from_persist_dir(persist_dir: str = './storage', fs: Optional[AbstractFileSystem] = None) SimpleVectorStore
Load from a persist directory.
- classmethod from_persist_path(persist_path: str, fs: Optional[AbstractFileSystem] = None) SimpleVectorStore
Create a SimpleVectorStore from a persist path.
- get(text_id: str) List[float]
Get embedding.
- persist(persist_path: str = './storage/vector_store.json', fs: Optional[AbstractFileSystem] = None) None
Persist the SimpleVectorStore to a file.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Get nodes for response.
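A minimal usage sketch showing the persist/reload round trip; the ./data and ./storage paths are placeholders:

```python
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import SimpleVectorStore

vector_store = SimpleVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Save the in-memory store to disk and reload it later.
vector_store.persist("./storage/vector_store.json")
loaded_store = SimpleVectorStore.from_persist_path("./storage/vector_store.json")
```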
- class llama_index.vector_stores.SupabaseVectorStore(postgres_connection_string: str, collection_name: str, dimension: int = 1536, **kwargs: Any)
Supabase Vector Store.
In this vector store, embeddings are stored in a Postgres table using pgvector.
During query time, the index uses pgvector/Supabase to query for the top k most similar nodes.
- Parameters
postgres_connection_string (str) – postgres connection string
collection_name (str) – name of the collection to store the embeddings in
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: None
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete a document.
- Parameters
ref_doc_id (str) – id of the document to delete
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
- Parameters
query (VectorStoreQuery) – query
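A minimal construction sketch; the connection string below is a placeholder for a Postgres database with the pgvector extension enabled:

```python
from llama_index.vector_stores import SupabaseVectorStore

# Placeholder connection string and collection name.
vector_store = SupabaseVectorStore(
    postgres_connection_string="postgresql://user:password@localhost:5432/postgres",
    collection_name="quickstart",
)
```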
- class llama_index.vector_stores.TairVectorStore(tair_url: str, index_name: str, index_type: str = 'HNSW', index_args: Optional[Dict[str, Any]] = None, overwrite: bool = False, **kwargs: Any)
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Parameters
embedding_results (List[NodeWithEmbedding]) – List of embedding results to add to the index.
- Returns
List of ids of the documents added to the index.
- Return type
List[str]
- property client: Tair
Return the Tair client instance.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete a document.
- Parameters
ref_doc_id (str) – id of the document to delete
- delete_index() None
Delete the index and all documents.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query the index.
- Parameters
query (VectorStoreQuery) – query object
- Returns
query result
- Return type
VectorStoreQueryResult
- Raises
ValueError – If query.query_embedding is None.
- class llama_index.vector_stores.WeaviateVectorStore(weaviate_client: Optional[Any] = None, class_prefix: Optional[str] = None, **kwargs: Any)
Weaviate vector store.
In this vector store, embeddings and docs are stored within a Weaviate collection.
During query time, the index uses Weaviate to query for the top k most similar nodes.
- Parameters
weaviate_client (weaviate.Client) – WeaviateClient instance from the weaviate-client package
class_prefix (Optional[str]) – prefix for Weaviate classes
- add(embedding_results: List[NodeWithEmbedding]) List[str]
Add embedding results to the index.
- Args
embedding_results: List[NodeWithEmbedding]: list of embedding results
- property client: Any
Get client.
- delete(ref_doc_id: str, **delete_kwargs: Any) None
Delete nodes using ref_doc_id.
- Parameters
ref_doc_id (str) – The doc_id of the document to delete.
- query(query: VectorStoreQuery, **kwargs: Any) VectorStoreQueryResult
Query index for top k most similar nodes.
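A minimal usage sketch, assuming the v3-era weaviate-client API and a Weaviate instance at a placeholder local URL:

```python
import weaviate
from llama_index import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores import WeaviateVectorStore

# Placeholder URL for a running Weaviate instance.
client = weaviate.Client("http://localhost:8080")
vector_store = WeaviateVectorStore(weaviate_client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```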