LlamaIndex 🦙 0.6.21

Weaviate Vector Store - Hybrid Search

import logging
import sys

# Send INFO-level log output to stdout
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

Creating a Weaviate Client

import weaviate
# Auth credentials (used when connecting to a cloud instance)
resource_owner_config = weaviate.AuthClientPassword(
    username="<username>",
    password="<password>",
)

# Connect to cloud instance
# client = weaviate.Client(
#     "https://<cluster-id>.semi.network/",
#     auth_client_secret=resource_owner_config,
# )

# Connect to local instance
client = weaviate.Client("http://localhost:8080")

Load documents

from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores import WeaviateVectorStore
from llama_index.response.notebook_utils import display_response
# load documents
documents = SimpleDirectoryReader('../paul_graham_essay/data').load_data()

Build the VectorStoreIndex with WeaviateVectorStore

from llama_index.storage.storage_context import StorageContext


vector_store = WeaviateVectorStore(weaviate_client=client)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# NOTE: you may also choose to define a class_prefix manually.
# class_prefix = "test_prefix"
# vector_store = WeaviateVectorStore(weaviate_client=client, class_prefix=class_prefix)

Query Index with Default Vector Search

# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(
    similarity_top_k=2
)
response = query_engine.query("What did the author do growing up?")
display_response(response)

Query Index with Hybrid Search

Hybrid search combines BM25 keyword search with vector search. The alpha parameter controls the weighting: alpha=0 uses pure BM25, and alpha=1 uses pure vector search.

By default, alpha=0.75 is used, which weights results heavily toward vector search.
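As a rough intuition only (Weaviate's actual score fusion is internal to the server and may differ), an alpha-weighted hybrid score can be sketched as a linear blend of the two normalized per-document scores:

```python
def hybrid_score(bm25_score: float, vector_score: float, alpha: float) -> float:
    """Blend two normalized scores: alpha=0 -> pure BM25, alpha=1 -> pure vector."""
    return (1 - alpha) * bm25_score + alpha * vector_score

# alpha=0 keeps only the BM25 score
print(hybrid_score(0.8, 0.3, alpha=0.0))  # 0.8
# alpha=1 keeps only the vector score
print(hybrid_score(0.8, 0.3, alpha=1.0))  # 0.3
# the default alpha=0.75 leans toward the vector score
print(hybrid_score(0.8, 0.3, alpha=0.75))
```

This is why the default behaves "very similarly" to plain vector search: with alpha=0.75, the vector score contributes three times as much weight as the keyword score.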

# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid", 
    similarity_top_k=2
)
response = query_engine.query(
    "What did the author do growing up?", 
)
display_response(response)

Set alpha=0 to favor BM25

# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid",
    similarity_top_k=2,
    alpha=0.0,
)
response = query_engine.query(
    "What did the author do growing up?", 
)
display_response(response)
Copyright © 2022, Jerry Liu