Metadata Replacement + Node Sentence Window

In this notebook, we use the SentenceWindowNodeParser to parse documents into single-sentence nodes. Each node also contains a “window” of the sentences on either side of the node's sentence.

Then, during retrieval, before the retrieved nodes are passed to the LLM, the MetadataReplacementPostProcessor replaces each single sentence with its window of surrounding sentences.

This is most useful for large documents/indexes, as it helps to retrieve more fine-grained details.

By default, the sentence window is 3 sentences on either side of the original sentence.

In this case, chunk size settings are not used; the nodes follow the window settings instead.

Setup

import os
import openai

os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding, HuggingFaceEmbedding
from llama_index.node_parser import SentenceWindowNodeParser

# create the sentence window node parser w/ default settings
node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
ctx = ServiceContext.from_defaults(
    llm=llm,
    embed_model=HuggingFaceEmbedding(
        model_name="sentence-transformers/all-mpnet-base-v2"
    ),
    node_parser=node_parser,
)

# if you want to use OpenAIEmbedding, you should also increase the batch size,
# since it involves many more calls to the API
# ctx = ServiceContext.from_defaults(llm=llm, embed_model=OpenAIEmbedding(embed_batch_size=50), node_parser=node_parser)
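
To sanity-check what this parser produces, you can run it on a tiny toy document and inspect the metadata attached to each node. This is a minimal sketch (the toy text below is invented purely for illustration):

from llama_index import Document

# a made-up four-sentence document, just to show the node structure
toy_doc = Document(
    text="The ocean absorbs heat. Sea levels rise. Ecosystems shift. Coral reefs bleach."
)
toy_nodes = node_parser.get_nodes_from_documents([toy_doc])

for node in toy_nodes:
    # each node's text is a single sentence, stored under "original_text"...
    print("sentence:", node.metadata["original_text"].strip())
    # ...while the "window" metadata holds the surrounding sentences
    print("window:  ", node.metadata["window"].strip())
    print("--------")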

Build the index

Here, we build an index using Chapter 3 of the IPCC AR6 WGII climate report.

!curl https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_Chapter03.pdf --output IPCC_AR6_WGII_Chapter03.pdf
from llama_index import SimpleDirectoryReader

documents = SimpleDirectoryReader(
    input_files=["./IPCC_AR6_WGII_Chapter03.pdf"]
).load_data()
from llama_index import VectorStoreIndex

sentence_index = VectorStoreIndex.from_documents(documents, service_context=ctx)
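
Because each node holds a single sentence, the index contains many more (and much smaller) nodes than a default chunk-based index would. As a quick sketch, you can count them by running the parser directly (this simply re-parses the documents as a sanity check):

# count the sentence-level nodes produced by the parser
nodes = node_parser.get_nodes_from_documents(documents)
print(f"number of sentence-level nodes: {len(nodes)}")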

Querying

With MetadataReplacementPostProcessor

Here, we use the MetadataReplacementPostProcessor to replace the sentence in each node with its surrounding context.

from llama_index.indices.postprocessor import MetadataReplacementPostProcessor

query_engine = sentence_index.as_query_engine(
    similarity_top_k=2,
    # the target key defaults to `window` to match the node_parser's default
    node_postprocessors=[
        MetadataReplacementPostProcessor(target_metadata_key="window")
    ],
)
window_response = query_engine.query("What are the concerns surrounding the AMOC?")
print(window_response)
There is low confidence in the quantification of AMOC changes in the 20th century due to low agreement in quantitative reconstructed and simulated trends. Additionally, direct observational records since the mid-2000s remain too short to determine the relative contributions of internal variability, natural forcing, and anthropogenic forcing to AMOC change. However, it is very likely that AMOC will decline over the 21st century for all SSP scenarios, but there will not be an abrupt collapse before 2100.

We can also check the original sentence that was retrieved for each node, as well as the actual window of sentences that was sent to the LLM.

window = window_response.source_nodes[0].node.metadata["window"]
sentence = window_response.source_nodes[0].node.metadata["original_text"]

print(f"Window: {window}")
print("------------------")
print(f"Original Sentence: {sentence}")
Window: Nevertheless, projected future annual cumulative upwelling wind 
changes at most locations and seasons remain within ±10–20% of 
present-day values (medium confidence) (WGI AR6 Section  9.2.3.5; 
Fox-Kemper et al., 2021). Continuous observation of the Atlantic meridional overturning 
circulation (AMOC) has improved the understanding of its variability 
(Frajka-Williams et  al., 2019), but there is low confidence in the 
quantification of AMOC changes in the 20th century because of low 
agreement in quantitative reconstructed and simulated trends (WGI 
AR6 Sections 2.3.3, 9.2.3.1; Fox-Kemper et al., 2021; Gulev et al., 2021). Direct observational records since the mid-2000s remain too short to 
determine the relative contributions of internal variability, natural 
forcing and anthropogenic forcing to AMOC change (high confidence) 
(WGI AR6 Sections 2.3.3, 9.2.3.1; Fox-Kemper et al., 2021; Gulev et al., 
2021). Over the 21st century, AMOC will very likely decline for all SSP 
scenarios but will not involve an abrupt collapse before 2100 (WGI 
AR6 Sections 4.3.2, 9.2.3.1; Fox-Kemper et al., 2021; Lee et al., 2021). 3.2.2.4 Sea Ice Changes
Sea ice is a key driver of polar marine life, hosting unique ecosystems 
and affecting diverse marine organisms and food webs through its 
impact on light penetration and supplies of nutrients and organic 
matter (Arrigo, 2014). Since the late 1970s, Arctic sea ice area has 
decreased for all months, with an estimated decrease of 2 million km2 
(or 25%) for summer sea ice (averaged for August, September and 
October) in 2010–2019 as compared with 1979–1988 (WGI AR6 
Section 9.3.1.1; Fox-Kemper et al., 2021).
------------------
Original Sentence: Over the 21st century, AMOC will very likely decline for all SSP 
scenarios but will not involve an abrupt collapse before 2100 (WGI 
AR6 Sections 4.3.2, 9.2.3.1; Fox-Kemper et al., 2021; Lee et al., 2021).
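
The replacement can also be applied outside of a query engine, which makes it easier to see exactly what the postprocessor does. The following is a rough sketch (building a retriever directly is an assumption here, not part of the original flow):

# retrieve single-sentence nodes, then swap in the window metadata
retriever = sentence_index.as_retriever(similarity_top_k=2)
retrieved_nodes = retriever.retrieve("What are the concerns surrounding the AMOC?")

postprocessor = MetadataReplacementPostProcessor(target_metadata_key="window")
replaced_nodes = postprocessor.postprocess_nodes(retrieved_nodes)

# after postprocessing, the node content is the full window rather than a single sentence
print(replaced_nodes[0].node.get_content())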

Contrast with normal VectorStoreIndex

from llama_index import VectorStoreIndex
from llama_index import ServiceContext
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
ctx = ServiceContext.from_defaults(
    llm=llm,
    embed_model=HuggingFaceEmbedding(
        model_name="sentence-transformers/all-mpnet-base-v2"
    ),
)

vector_index = VectorStoreIndex.from_documents(documents, service_context=ctx)
query_engine = vector_index.as_query_engine(similarity_top_k=2)
vector_response = query_engine.query("What are the concerns surrounding the AMOC?")
print(vector_response)
I'm sorry, but the concerns surrounding the AMOC (Atlantic Meridional Overturning Circulation) are not mentioned in the provided context.

Well, that didn’t work. Let’s bump up the top k! This will be slower and use more tokens compared to the sentence window index.

query_engine = vector_index.as_query_engine(similarity_top_k=5)
vector_response = query_engine.query("What are the concerns surrounding the AMOC?")
print(vector_response)
The context information does not provide any specific concerns surrounding the AMOC (Atlantic Meridional Overturning Circulation).

Analysis

So the SentenceWindowNodeParser + MetadataReplacementPostProcessor combo is the clear winner here. But why?

Embeddings at the sentence level seem to capture more fine-grained details, like the specific mention of AMOC.
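
One way to see this intuitively is to compare how similar the query embedding is to a single retrieved sentence versus a whole multi-sentence chunk. The sketch below uses the same all-mpnet-base-v2 model; the exact scores are illustrative only and will vary:

import numpy as np

# same embedding model as used for both indexes above
embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-mpnet-base-v2"
)

query_emb = embed_model.get_query_embedding(
    "What are the concerns surrounding the AMOC?"
)
sentence_emb = embed_model.get_text_embedding(
    window_response.source_nodes[0].node.metadata["original_text"]
)
chunk_emb = embed_model.get_text_embedding(vector_response.source_nodes[0].node.text)

def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# the focused sentence typically scores higher against the query than the large chunk
print("query vs. sentence:", cosine(query_emb, sentence_emb))
print("query vs. large chunk:", cosine(query_emb, chunk_emb))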

We can also compare the retrieved chunks for each index!

for source_node in window_response.source_nodes:
    print(source_node.node.metadata["original_text"])
    print("--------")
Over the 21st century, AMOC will very likely decline for all SSP 
scenarios but will not involve an abrupt collapse before 2100 (WGI 
AR6 Sections 4.3.2, 9.2.3.1; Fox-Kemper et al., 2021; Lee et al., 2021).
--------
Direct observational records since the mid-2000s remain too short to 
determine the relative contributions of internal variability, natural 
forcing and anthropogenic forcing to AMOC change (high confidence) 
(WGI AR6 Sections 2.3.3, 9.2.3.1; Fox-Kemper et al., 2021; Gulev et al., 
2021).
--------

Here, we can see that the sentence window index easily retrieved two nodes that talk about AMOC. Remember, the embeddings are based purely on the original sentence, but the LLM actually ends up reading the surrounding context as well!
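
To make that concrete, the short sketch below compares, for each retrieved node, the length of the sentence that was embedded with the length of the window that is actually sent to the LLM:

for source_node in window_response.source_nodes:
    sentence_len = len(source_node.node.metadata["original_text"])
    window_len = len(source_node.node.metadata["window"])
    # the embedded text is much shorter than the context the LLM ends up reading
    print(f"embedded sentence: {sentence_len} chars, window: {window_len} chars")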

Now, let’s try to dissect why the naive vector index failed.

for node in vector_response.source_nodes:
    print("AMOC mentioned?", "AMOC" in node.node.text)
    print("--------")
AMOC mentioned? False
--------
AMOC mentioned? False
--------
AMOC mentioned? True
--------
AMOC mentioned? False
--------
AMOC mentioned? False
--------

So the source node at index [2] mentions AMOC, but what did this text actually look like?

print(vector_response.source_nodes[2].node.text)
Nevertheless, projected future annual cumulative upwelling wind 
changes at most locations and seasons remain within ±10–20% of 
present-day values (medium confidence) (WGI AR6 Section  9.2.3.5; 
Fox-Kemper et al., 2021).Continuous observation of the Atlantic meridional overturning 
circulation (AMOC) has improved the understanding of its variability 
(Frajka-Williams et  al., 2019), but there is low confidence in the 
quantification of AMOC changes in the 20th century because of low 
agreement in quantitative reconstructed and simulated trends (WGI 
AR6 Sections 2.3.3, 9.2.3.1; Fox-Kemper et al., 2021; Gulev et al., 2021).Direct observational records since the mid-2000s remain too short to 
determine the relative contributions of internal variability, natural 
forcing and anthropogenic forcing to AMOC change (high confidence) 
(WGI AR6 Sections 2.3.3, 9.2.3.1; Fox-Kemper et al., 2021; Gulev et al., 
2021).Over the 21st century, AMOC will very likely decline for all SSP 
scenarios but will not involve an abrupt collapse before 2100 (WGI 
AR6 Sections 4.3.2, 9.2.3.1; Fox-Kemper et al., 2021; Lee et al., 2021).3.2.2.4 Sea Ice Changes
Sea ice is a key driver of polar marine life, hosting unique ecosystems 
and affecting diverse marine organisms and food webs through its 
impact on light penetration and supplies of nutrients and organic 
matter (Arrigo, 2014).Since the late 1970s, Arctic sea ice area has 
decreased for all months, with an estimated decrease of 2 million km2 
(or 25%) for summer sea ice (averaged for August, September and 
October) in 2010–2019 as compared with 1979–1988 (WGI AR6 
Section 9.3.1.1; Fox-Kemper et al., 2021).For Antarctic sea ice there is 
no significant global trend in satellite-observed sea ice area from 1979 
to 2020 in either winter or summer, due to regionally opposing trends 
and large internal variability (WGI AR6 Section 9.3.2.1; Maksym, 2019; 
Fox-Kemper et al., 2021).CMIP6 simulations project that the Arctic Ocean will likely become 
practically sea ice free (area below 1 million km2) for the first time before 
2050 and in the seasonal sea ice minimum in each of the four emission 
scenarios SSP1-1.9, SSP1-2.6, SSP2-4.5 and SSP5-8.5 (Figure 3.7; WGI 
AR6 Section 9.3.2.2; Notze and SIMIP Community, 2020; Fox-Kemper 
et al., 2021).Antarctic sea ice area is also projected to decrease during 
the 21st century, but due to mismatches between model simulations 
and observations, combined with a lack of understanding of reasons 
for substantial inter-model spread, there is low confidence in model 
projections of future Antarctic sea ice changes, particularly at the 
regional level (WGI AR6 Section  9.3.2.2; Roach et  al., 2020; Fox-
Kemper et al., 2021).3.2.3 Chemical Changes
3.2.3.1  Ocean Acidification
The ocean’s uptake of anthropogenic carbon affects its chemistry 
in a process referred to as ocean acidification, which increases the 
concentrations of aqueous CO 2, bicarbonate and hydrogen ions, and 
decreases pH, carbonate ion concentrations and calcium carbonate 
mineral saturation states (Doney et  al., 2009).Ocean acidification

So AMOC is discussed, but sadly it sits in the middle chunk of the retrieved context. With LLMs, text in the middle of the retrieved context is often ignored or under-used. A recent paper, “Lost in the Middle”, discusses this phenomenon.