ChatGPT
Load documents, build the VectorStoreIndex
import logging
import sys

# log INFO-level events (such as token usage) to stdout
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    ServiceContext,
)
from llama_index.llms import OpenAI
from IPython.display import Markdown, display
# load documents
documents = SimpleDirectoryReader("../../data/paul_graham").load_data()
# set up the service context with a ChatGPT model and 512-token chunks
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
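Optionally, the built index can be persisted and reloaded later so the documents are not re-embedded on every run. A minimal sketch, assuming the StorageContext / load_index_from_storage API of this llama_index version and a hypothetical ./storage directory:
from llama_index import StorageContext, load_index_from_storage

# persist the index to a hypothetical ./storage directory
index.storage_context.persist(persist_dir="./storage")

# later: rebuild the same index from disk instead of re-embedding
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)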
Query Index
By default, with the help of LangChain's PromptSelector abstraction, we use a modified refine prompt tailored for chat models whenever a ChatGPT model is used.
query_engine = index.as_query_engine(
    service_context=service_context,
    similarity_top_k=3,
    streaming=True,
)
response = query_engine.query(
    "What did the author do growing up?",
)
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 8 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
response.print_response_stream()
Before college, the author worked on writing short stories and programming on an IBM 1401 using an early version of Fortran. They also worked on programming with microcomputers and eventually created a new dialect of Lisp called Arc. They later realized the potential of publishing essays on the web and began writing and publishing them. The author also worked on spam filters, painting, and cooking for groups.
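Instead of print_response_stream(), the token stream can also be consumed manually; with streaming=True the response exposes a generator. A minimal sketch, assuming the response_gen attribute of this llama_index version (chunks and answer are just illustrative names, and a given stream can only be consumed once):
# collect streamed tokens manually instead of printing them directly
chunks = []
for token in response.response_gen:
    chunks.append(token)
answer = "".join(chunks)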
query_engine = index.as_query_engine(
    service_context=service_context,
    similarity_top_k=5,
    streaming=True,
)
response = query_engine.query(
    "What did the author do during his time at RISD?",
)
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 12 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
response.print_response_stream()
The author attended RISD and took classes in fundamental subjects like drawing, color, and design. They also learned a lot in the color class they took, but otherwise, they were basically teaching themselves to paint. The author dropped out of RISD in 1993.
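The streamed answer is synthesized from the retrieved chunks, which can be inspected via the response's source nodes. A minimal sketch, assuming the NodeWithScore interface (.score, .node.get_text()) of this llama_index version:
# inspect which retrieved chunks (and similarity scores) fed the answer
for source_node in response.source_nodes:
    print(source_node.score)
    print(source_node.node.get_text()[:200])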
Refine Prompt
Here is the chat refine prompt:
from llama_index.prompts.chat_prompts import CHAT_REFINE_PROMPT

# inspect the underlying chat-message-based prompt
dict(CHAT_REFINE_PROMPT.prompt)
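The same chat refine prompt can also be passed to the query engine explicitly, rather than relying on the automatic selection. A minimal sketch (chat_query_engine is just an illustrative name):
# explicitly use the chat refine prompt instead of the automatic selection
chat_query_engine = index.as_query_engine(
    service_context=service_context,
    refine_template=CHAT_REFINE_PROMPT,
    similarity_top_k=5,
    streaming=True,
)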
Query Index (Using the standard Refine Prompt)
If we use the "standard" refine prompt (where the prompt is a single text template rather than multiple chat messages), we find that the results with ChatGPT are worse.
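To see the difference, the raw text template can be printed first. A minimal sketch, assuming this llama_index version exports DEFAULT_REFINE_PROMPT_TMPL from the same module:
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT_TMPL

# the standard refine prompt is a single text template, not a list of chat messages
print(DEFAULT_REFINE_PROMPT_TMPL)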
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT
query_engine = index.as_query_engine(
    service_context=service_context,
    refine_template=DEFAULT_REFINE_PROMPT,
    similarity_top_k=5,
    streaming=True,
)
response = query_engine.query(
    "What did the author do during his time at RISD?",
)
response.print_response_stream()