Pinecone Vector Store - Hybrid Search

Creating a Pinecone Index

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

import pinecone

api_key = ""  # add your Pinecone API key here
pinecone.init(api_key=api_key, environment="us-west1-gcp")

# dimensions are for text-embedding-ada-002
# NOTE: hybrid search requires the dotproduct metric
pinecone.create_index("quickstart", dimension=1536, metric="dotproduct", pod_type="p1")
pinecone.describe_index("quickstart")

pinecone_index = pinecone.Index("quickstart")
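
If you re-run this notebook, create_index will fail once a "quickstart" index already exists. A minimal sketch that guards the call, assuming the pinecone client's list_indexes() helper is available in your client version:

# only create the "quickstart" index if it does not already exist
if "quickstart" not in pinecone.list_indexes():
    pinecone.create_index(
        "quickstart", dimension=1536, metric="dotproduct", pod_type="p1"
    )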

Load documents, build the PineconeVectorStore

from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores import PineconeVectorStore
from llama_index.storage.storage_context import StorageContext
from IPython.display import Markdown, display

# load documents
documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()

# set add_sparse_vector=True to compute sparse vectors during upsert
vector_store = PineconeVectorStore(
    pinecone_index=pinecone_index,
    add_sparse_vector=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
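
Because the vectors live in Pinecone, you can keep adding documents to the same index after it is built. A minimal sketch, assuming the index.insert API and using a hypothetical extra document:

from llama_index import Document

# hypothetical extra document; insert() embeds it and upserts it into Pinecone
new_doc = Document("LlamaIndex supports hybrid search over Pinecone.")
index.insert(new_doc)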

Query Index

# set logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine(vector_store_query_mode="hybrid")
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))
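
Hybrid retrieval can be biased toward the dense or the sparse representation. A minimal sketch, assuming the alpha keyword is forwarded from as_query_engine to the underlying vector retriever (as in the Weaviate hybrid search guide):

# alpha weights dense vs. sparse scores: 1.0 = pure dense, 0.0 = pure sparse
query_engine = index.as_query_engine(vector_store_query_mode="hybrid", alpha=0.5)
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))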