
Composable Graph#

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

If you’re opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

!pip install llama-index
from llama_index import (
    VectorStoreIndex,
    SimpleKeywordTableIndex,
    SimpleDirectoryReader,
)
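
LlamaIndex uses OpenAI models for the LLM and embeddings by default, so you will likely need an API key in your environment before building any indices. A minimal sketch (skip this if you have already configured a different LLM or embedding model):

import os

# Assumes the default OpenAI backends; replace the placeholder with your own key,
# or export OPENAI_API_KEY in your shell instead of setting it here.
os.environ["OPENAI_API_KEY"] = "sk-..."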

Load Datasets#

Load both the NYC Wikipedia page and Paul Graham’s “What I Worked On” essay.

# fetch "New York City" page from Wikipedia
from pathlib import Path

import requests

response = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "format": "json",
        "titles": "New York City",
        "prop": "extracts",
        # 'exintro': True,
        "explaintext": True,
    },
).json()
page = next(iter(response["query"]["pages"].values()))
nyc_text = page["extract"]

data_path = Path("data/test_wiki")
data_path.mkdir(parents=True, exist_ok=True)

with open(data_path / "nyc_text.txt", "w") as fp:
    fp.write(nyc_text)
# load NYC dataset
nyc_documents = SimpleDirectoryReader("./data/test_wiki").load_data()
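
As an optional sanity check (just a sketch), you can confirm the page was written and loaded correctly before building an index:

# number of loaded Document objects and a short preview of the text
print(len(nyc_documents))
print(nyc_documents[0].text[:200])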

Download Paul Graham Essay data

!mkdir -p 'data/paul_graham_essay/'
!wget 'https://raw.githubusercontent.com/jerryjliu/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham_essay/paul_graham_essay.txt'
# load PG's essay
essay_documents = SimpleDirectoryReader("./data/paul_graham_essay").load_data()

Building the document indices#

Build a vector index for the NYC wiki page and PG essay.

# build NYC index
nyc_index = VectorStoreIndex.from_documents(nyc_documents)
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
> [build_index_from_nodes] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 28492 tokens
> [build_index_from_nodes] Total embedding token usage: 28492 tokens
# build essay index
essay_index = VectorStoreIndex.from_documents(essay_documents)
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
> [build_index_from_nodes] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 17617 tokens
> [build_index_from_nodes] Total embedding token usage: 17617 tokens
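
Building these indices calls the embedding API each time the notebook runs. To avoid re-embedding on subsequent runs, the indices can be persisted to disk and reloaded; a minimal sketch (the `./storage/...` directories are arbitrary choices for this example):

from llama_index import StorageContext, load_index_from_storage

# persist both indices to disk
nyc_index.storage_context.persist(persist_dir="./storage/nyc")
essay_index.storage_context.persist(persist_dir="./storage/essay")

# later, reload them without recomputing embeddings
nyc_index = load_index_from_storage(
    StorageContext.from_defaults(persist_dir="./storage/nyc")
)
essay_index = load_index_from_storage(
    StorageContext.from_defaults(persist_dir="./storage/essay")
)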

Set summaries for the indices#

Add text summaries to the indices, so that we can compose other indices on top of them.

nyc_index_summary = """
    New York, often called New York City or NYC, 
    is the most populous city in the United States. 
    With a 2020 population of 8,804,190 distributed over 300.46 square miles (778.2 km2), 
    New York City is also the most densely populated major city in the United States, 
    and is more than twice as populous as second-place Los Angeles. 
    New York City lies at the southern tip of New York State, and 
    constitutes the geographical and demographic center of both the 
    Northeast megalopolis and the New York metropolitan area, the 
    largest metropolitan area in the world by urban landmass.[8] With over 
    20.1 million people in its metropolitan statistical area and 23.5 million 
    in its combined statistical area as of 2020, New York is one of the world's 
    most populous megacities, and over 58 million people live within 250 mi (400 km) of 
    the city. New York City is a global cultural, financial, and media center with 
    a significant influence on commerce, health care and life sciences, entertainment, 
    research, technology, education, politics, tourism, dining, art, fashion, and sports. 
    Home to the headquarters of the United Nations, 
    New York is an important center for international diplomacy,
    an established safe haven for global investors, and is sometimes described as the capital of the world.
"""
essay_index_summary = """
    Author: Paul Graham. 
    The author grew up painting and writing essays. 
    He wrote a book on Lisp and did freelance Lisp hacking work to support himself. 
    He also became the de facto studio assistant for Idelle Weber, an early photorealist painter. 
    He eventually had the idea to start a company to put art galleries online, but the idea was unsuccessful. 
    He then had the idea to write software to build online stores, which became the basis for his successful company, Viaweb. 
    After Viaweb was acquired by Yahoo!, the author returned to painting and started writing essays online. 
    He wrote a book of essays, Hackers & Painters, and worked on spam filters. 
    He also bought a building in Cambridge to use as an office. 
    He then had the idea to start Y Combinator, an investment firm that would 
    make a larger number of smaller investments and help founders remain as CEO. 
    He and his partner Jessica Livingston ran Y Combinator and funded a batch of startups twice a year. 
    He also continued to write essays, cook for groups of friends, and explore the concept of invented vs discovered in software. 
"""

Build Keyword Table Index on top of vector indices!#

We set summaries for each of the NYC and essay indices, and then compose a keyword table index on top of them.

from llama_index.indices.composability import ComposableGraph
graph = ComposableGraph.from_indices(
    SimpleKeywordTableIndex,
    [nyc_index, essay_index],
    index_summaries=[nyc_index_summary, essay_index_summary],
    max_keywords_per_chunk=50,
)
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
> [build_index_from_nodes] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 0 tokens
> [build_index_from_nodes] Total embedding token usage: 0 tokens
# set Logging to DEBUG for more detailed outputs
# ask it a question about NYC
query_engine = graph.as_query_engine()
response = query_engine.query(
    "What is the climate of New York City like? How cold is it during the"
    " winter?",
)
INFO:llama_index.indices.keyword_table.retrievers:> Starting query: What is the climate of New York City like? How cold is it during the winter?
> Starting query: What is the climate of New York City like? How cold is it during the winter?
INFO:llama_index.indices.keyword_table.retrievers:query keywords: ['cold', 'new york city', 'winter', 'new', 'city', 'climate', 'york']
query keywords: ['cold', 'new york city', 'winter', 'new', 'city', 'climate', 'york']
INFO:llama_index.indices.keyword_table.retrievers:> Extracted keywords: ['new', 'city', 'york']
> Extracted keywords: ['new', 'city', 'york']
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 18 tokens
> [retrieve] Total embedding token usage: 18 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 3834 tokens
> [get_response] Total LLM token usage: 3834 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 282 tokens
> [get_response] Total LLM token usage: 282 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
print(str(response))
The climate of New York City is humid subtropical, with hot and humid summers and cold, wet winters. The average temperature in the winter is around 32°F (0°C), but temperatures can drop below freezing. Snowfall is common in the winter months, with an average of 25 inches (63 cm) of snow per year.
# Get source of response
print(response.get_formatted_sources())
> Source (Doc id: b58b74a6-c0c8-4020-8076-fdcd265dc7a3): 

The climate of New York City is humid subtropical, with hot and humid summers and cold, wet win...

> Source (Doc id: e92aafcf-08c2-4a8c-897b-930ad420179a): one of the world's highest. New York City real estate is a safe haven for global investors.


===...
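
Besides the formatted string above, the source nodes are also available programmatically on the response object, which can be handy for checking which sub-index answered a query; a small sketch:

# inspect the retrieved source nodes and their similarity scores
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_text()[:100])
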
# ask it a question about PG's essay
response = query_engine.query(
    "What did the author do growing up, before his time at Y Combinator?",
)
INFO:llama_index.indices.keyword_table.retrievers:> Starting query: What did the author do growing up, before his time at Y Combinator?
> Starting query: What did the author do growing up, before his time at Y Combinator?
INFO:llama_index.indices.keyword_table.retrievers:query keywords: ['growing up', 'y combinator', 'time', 'growing', 'author', 'combinator']
query keywords: ['growing up', 'y combinator', 'time', 'growing', 'author', 'combinator']
INFO:llama_index.indices.keyword_table.retrievers:> Extracted keywords: ['author', 'combinator']
> Extracted keywords: ['author', 'combinator']
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 17 tokens
> [retrieve] Total embedding token usage: 17 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 3947 tokens
> [get_response] Total LLM token usage: 3947 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 218 tokens
> [get_response] Total LLM token usage: 218 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
print(str(response))
The author likely grew up doing a variety of activities, such as writing essays, painting, cooking, writing software, and hosting dinners for friends. He may have also been involved in giving talks and was likely driven by the idea of working hard to set the upper bound for everyone else.
# Get source of response
print(response.get_formatted_sources())
> Source (Doc id: 92bc5ce3-3a76-4570-9726-f7e0405ec6cc): 
Before his time at Y Combinator, the author worked on building the infrastructure of the web, wr...

> Source (Doc id: ed37130a-3138-42d4-9e77-1c792fe22f4e): write something and put it on the web, anyone can read it. That may seem obvious now, but it was ...
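
As a closing note, `as_query_engine` also accepts a `custom_query_engines` mapping keyed by index ID, which lets you control how each sub-index (and the graph root) is queried. A sketch, with illustrative values for `similarity_top_k`:

# configure each sub-index's query engine explicitly instead of using the defaults
custom_query_engines = {
    nyc_index.index_id: nyc_index.as_query_engine(similarity_top_k=3),
    essay_index.index_id: essay_index.as_query_engine(similarity_top_k=3),
}
query_engine = graph.as_query_engine(custom_query_engines=custom_query_engines)
response = query_engine.query("What is the climate of New York City like?")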