An Introduction to LlamaIndex Query Pipelines#

Overview#

LlamaIndex provides a declarative query API that allows you to chain together different modules in order to orchestrate simple-to-advanced workflows over your data.

This is centered around our QueryPipeline abstraction. Load in a variety of modules (from LLMs to prompts to retrievers to other pipelines), connect them all together into a sequential chain or DAG, and run it end-to-end.

NOTE: You can orchestrate all of these workflows without the declarative pipeline abstraction, by using the modules imperatively and writing your own functions (see the sketch after this list). So what are the advantages of QueryPipeline?

  • Express common workflows with fewer lines of code/boilerplate

  • Greater readability

  • Greater parity / better integration points with common low-code / no-code solutions (e.g. LangFlow)

  • [In the future] A declarative interface will allow easy serializability of pipeline components, making pipelines portable and easier to deploy to different systems.
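
For contrast, here is a minimal imperative sketch of the first workflow below (formatting a prompt by hand, then calling the LLM), assuming an OpenAI key is configured:

from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate

llm = OpenAI(model="gpt-3.5-turbo")
prompt_tmpl = PromptTemplate("Please generate related movies to {movie_name}")
# imperative version: format the prompt yourself, then call the LLM yourself
prompt_str = prompt_tmpl.format(movie_name="The Departed")
response = llm.complete(prompt_str)

With QueryPipeline, the same workflow is a two-element chain (see section 1 below), and the conversion between module outputs and inputs happens automatically.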

Cookbook#

In this cookbook we give you an introduction to our QueryPipeline interface and show you some basic workflows you can tackle.

  • Chain together prompt and LLM

  • Chain together query rewriting (prompt + LLM) with retrieval

  • Chain together a full RAG query pipeline (query rewriting, retrieval, reranking, response synthesis)

  • Setting up a custom query component

Setup#

Here we set up some data and indexes (from Paul Graham's essay) that we'll use in the rest of the cookbook.

# setup Arize Phoenix for logging/observability
import phoenix as px

px.launch_app()
import llama_index

llama_index.set_global_handler("arize_phoenix")
🌍 To view the Phoenix app in your browser, visit http://127.0.0.1:6006/
📺 To view the Phoenix app in a notebook, run `px.active_session().view()`
📖 For more information on how to use Phoenix, check out https://docs.arize.com/phoenix
from llama_index.query_pipeline.query import QueryPipeline
from llama_index.llms import OpenAI
from llama_index.prompts import PromptTemplate
from llama_index import (
    VectorStoreIndex,
    ServiceContext,
    SimpleDirectoryReader,
    load_index_from_storage,
)
reader = SimpleDirectoryReader("../data/paul_graham")
docs = reader.load_data()
import os
from llama_index.storage import StorageContext

if not os.path.exists("storage"):
    index = VectorStoreIndex.from_documents(docs)
    # save index to disk
    index.set_index_id("vector_index")
    index.storage_context.persist("./storage")
else:
    # rebuild storage context
    storage_context = StorageContext.from_defaults(persist_dir="storage")
    # load index
    index = load_index_from_storage(storage_context, index_id="vector_index")

1. Chain Together Prompt and LLM#

In this section we show a super simple workflow of chaining together a prompt with an LLM.

We simply define `chain` on initialization. This is a special case of a query pipeline where the components are purely sequential: outputs are automatically converted into the right format for the next component's inputs.

# try chaining basic prompts
prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)
llm = OpenAI(model="gpt-3.5-turbo")

p = QueryPipeline(chain=[prompt_tmpl, llm], verbose=True)
output = p.run(movie_name="The Departed")
> Running module 8d886c0b-5091-4650-af32-48dc62e6cdf1 with input: 
movie_name: The Departed

> Running module 9f5fe0b1-bb02-4b04-a11a-ad4f5eb33916 with input: 
messages: Please generate related movies to The Departed


print(str(output))
assistant: 1. Infernal Affairs (2002) - This Hong Kong crime thriller is the original film on which The Departed is based. It follows a similar storyline of undercover cops infiltrating a criminal organization.

2. Internal Affairs (1990) - This American crime thriller, starring Richard Gere and Andy Garcia, revolves around a corrupt cop and an internal affairs officer determined to expose him.

3. The Town (2010) - Directed by and starring Ben Affleck, this crime drama follows a group of bank robbers in Boston who find themselves in a dangerous situation when they take a hostage during a heist.

4. Mystic River (2003) - Directed by Clint Eastwood, this psychological crime drama features an ensemble cast including Sean Penn, Tim Robbins, and Kevin Bacon. It explores the aftermath of a childhood trauma and its impact on three friends.

5. The Wire (TV Series, 2002-2008) - Although not a movie, this critically acclaimed TV series created by David Simon delves into the interconnected lives of law enforcement, drug dealers, and residents of Baltimore. It offers a complex and realistic portrayal of crime and corruption.

6. Training Day (2001) - Starring Denzel Washington and Ethan Hawke, this crime thriller follows a rookie cop who is assigned to an experienced but corrupt detective for his first day on the job.

7. American Gangster (2007) - Based on a true story, this crime film directed by Ridley Scott stars Denzel Washington as a Harlem drug lord and Russell Crowe as the detective determined to bring him down.

8. The Departed (2006) (Remake) - Although not a separate movie, it is worth mentioning that The Departed is a remake of the original Hong Kong film Infernal Affairs. Both films have their own unique qualities and are worth watching.

9. Donnie Brasco (1997) - Starring Johnny Depp and Al Pacino, this crime drama is based on the true story of an FBI agent who infiltrates the mob and forms a close bond with a mafia hitman.

10. City of God (2002) - This Brazilian crime drama depicts the brutal and violent world of organized crime in the slums of Rio de Janeiro. It follows the lives of two young boys who take different paths, one becoming a photographer and the other a drug dealer.
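
Note that the final output above is the LLM's full chat response (hence the assistant: prefix). If you just want the text, a small sketch, assuming the final module is a chat LLM returning a ChatResponse:

# the pipeline output is a ChatResponse; grab just the text
print(output.message.content)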

Try Output Parsing#

Let’s parse the outputs into a structured Pydantic object.

from typing import List
from pydantic import BaseModel, Field
from llama_index.output_parsers import PydanticOutputParser


class Movie(BaseModel):
    """Object representing a single movie."""

    name: str = Field(..., description="Name of the movie.")
    year: int = Field(..., description="Year of the movie.")


class Movies(BaseModel):
    """Object representing a list of movies."""

    movies: List[Movie] = Field(..., description="List of movies.")


llm = OpenAI(model="gpt-3.5-turbo")
output_parser = PydanticOutputParser(Movies)
json_prompt_str = """\
Please generate related movies to {movie_name}. Output with the following JSON format: 
"""
json_prompt_str = output_parser.format(json_prompt_str)
# add JSON spec to prompt template
json_prompt_tmpl = PromptTemplate(json_prompt_str)

p = QueryPipeline(chain=[json_prompt_tmpl, llm, output_parser], verbose=True)
output = p.run(movie_name="Toy Story")
> Running module 246c804a-47a5-4cdd-a83b-90064d82740b with input: 
movie_name: Toy Story

> Running module 51d2c847-6a34-4a0d-841a-68cf009f8ded with input: 
messages: Please generate related movies to Toy Story. Output with the following JSON format: 



Here's a JSON schema to follow:
{"title": "Movies", "description": "Object representing a list of movies.", "typ...

> Running module 1589c44b-a136-4093-abd7-286cb5d19d47 with input: 
input: assistant: {
  "movies": [
    {
      "name": "Finding Nemo",
      "year": 2003
    },
    {
      "name": "Cars",
      "year": 2006
    },
    {
      "name": "Up",
      "year": 2009
    },
    {...


output
Movies(movies=[Movie(name='Finding Nemo', year=2003), Movie(name='Cars', year=2006), Movie(name='Up', year=2009), Movie(name='Inside Out', year=2015), Movie(name='Coco', year=2017)])
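
Since the parsed result is a plain Pydantic object, its fields can be accessed directly. A small usage sketch:

# iterate over the parsed movies directly
for movie in output.movies:
    print(f"{movie.name} ({movie.year})")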

Streaming Support#

Query pipelines have LLM streaming support (simply call `as_query_component(streaming=True)` on the LLM). Intermediate outputs will get autoconverted, and the final output can be a streaming output. Here are some examples.

1. Chain multiple prompts with streaming

prompt_str = "Please generate related movies to {movie_name}"
prompt_tmpl = PromptTemplate(prompt_str)
# let's add some subsequent prompts for fun
prompt_str2 = """\
Here's some text:

{text}

Can you rewrite this with a summary of each movie?
"""
prompt_tmpl2 = PromptTemplate(prompt_str2)
llm = OpenAI(model="gpt-3.5-turbo")
llm_c = llm.as_query_component(streaming=True)

p = QueryPipeline(
    chain=[prompt_tmpl, llm_c, prompt_tmpl2, llm_c], verbose=True
)
# p = QueryPipeline(chain=[prompt_tmpl, llm_c], verbose=True)
output = p.run(movie_name="The Dark Knight")
for o in output:
    print(o.delta, end="")
> Running module c6f3bd13-5f78-4d37-9a00-4b23d3172619 with input: 
movie_name: The Dark Knight

> Running module d3c6bdb9-e891-428f-ae75-ef4181ba9dbd with input: 
messages: Please generate related movies to The Dark Knight

> Running module 8c84b2be-a338-4018-9cee-cc9d94a88b79 with input: 
text: <generator object llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat.<locals>.wrapped_gen at 0x2978e4e40>

> Running module d6203b7b-57bd-499a-aad8-3dfa7de093e5 with input: 
messages: Here's some text:

1. Batman Begins (2005)
2. The Dark Knight Rises (2012)
3. Batman v Superman: Dawn of Justice (2016)
4. Man of Steel (2013)
5. The Avengers (2012)
6. Iron Man (2008)
7. Captain Amer...

1. Batman Begins (2005): A young Bruce Wayne becomes Batman to protect Gotham City from corruption and crime, facing his fears and training under the guidance of Ra's al Ghul.
2. The Dark Knight Rises (2012): Batman returns from exile to save Gotham City from the ruthless terrorist Bane, who plans to destroy it, while facing personal sacrifices and a new ally, Catwoman.
3. Batman v Superman: Dawn of Justice (2016): Batman and Superman clash as their ideologies collide, leading to an epic battle, while a new threat, Lex Luthor, manipulates them into a confrontation.
4. Man of Steel (2013): The origin story of Superman, as he embraces his powers and faces General Zod, a fellow Kryptonian, who threatens Earth's existence.
5. The Avengers (2012): Earth's mightiest heroes, including Iron Man, Captain America, Thor, and Hulk, unite to stop Loki and his alien army from conquering the world.
6. Iron Man (2008): Billionaire Tony Stark builds a high-tech suit to escape captivity and becomes the superhero Iron Man, using his technology to fight against evil forces.
7. Captain America: The Winter Soldier (2014): Captain America teams up with Black Widow and Falcon to uncover a conspiracy within S.H.I.E.L.D., facing a powerful assassin known as the Winter Soldier.
8. The Amazing Spider-Man (2012): Peter Parker, a high school student, gains spider-like abilities and becomes Spider-Man, battling against the Lizard and discovering his father's secrets.
9. Watchmen (2009): Set in an alternate reality, a group of retired vigilantes investigates the murder of one of their own, uncovering a plot that could lead to global destruction.
10. Sin City (2005): A neo-noir anthology film, showcasing various interconnected stories set in the crime-ridden and corrupt Basin City.
11. V for Vendetta (2005): In a dystopian future, a masked vigilante known as V fights against a totalitarian regime, inspiring the people to rise up against oppression.
12. Blade Runner 2049 (2017): A young blade runner uncovers a long-buried secret that leads him to seek out former blade runner Rick Deckard, while unraveling the mysteries of a future society.
13. Inception (2010): A skilled thief enters people's dreams to steal information, but is tasked with planting an idea instead, leading to a complex and mind-bending journey.
14. The Matrix (1999): A computer hacker discovers the truth about reality, joining a group of rebels to fight against sentient machines that have enslaved humanity in a simulated world.
15. The Crow (1994): A musician, resurrected by a supernatural crow, seeks revenge against those who murdered him and his fiancée, unleashing his dark and vengeful powers.

2. Feed streaming output to output parser

p = QueryPipeline(
    chain=[
        json_prompt_tmpl,
        llm.as_query_component(streaming=True),
        output_parser,
    ],
    verbose=True,
)
output = p.run(movie_name="Toy Story")
print(output)
> Running module 5cfd9352-07a6-4edd-90ac-f60cf0727a31 with input: 
movie_name: Toy Story

> Running module 1ccba87d-4d06-4bc2-bd7f-f044059a4091 with input: 
messages: Please generate related movies to Toy Story. Output with the following JSON format: 



Here's a JSON schema to follow:
{"title": "Movies", "description": "Object representing a list of movies.", "typ...

> Running module 4ea05b9b-e4e2-4831-92d0-56790038c551 with input: 
input: <generator object llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat.<locals>.wrapped_gen at 0x2978e7760>

movies=[Movie(name='Finding Nemo', year=2003), Movie(name='Cars', year=2006), Movie(name='Ratatouille', year=2007), Movie(name='WALL-E', year=2008), Movie(name='Up', year=2009), Movie(name='Inside Out', year=2015), Movie(name='Coco', year=2017), Movie(name='Incredibles 2', year=2018), Movie(name='Toy Story 4', year=2019), Movie(name='Onward', year=2020)]

Chain Together Query Rewriting Workflow (prompts + LLM) with Retrieval#

Here we try a slightly more complex workflow where we send the input through two prompts before initiating retrieval.

  1. Generate a question about the given topic.

  2. Hallucinate an answer to that question, for better retrieval.

Note that since each prompt only takes in one input, the QueryPipeline will automatically chain the LLM output into the prompt, and the prompt output into the next LLM.

You’ll see how to define links more explicitly in the next section.

from llama_index.postprocessor import CohereRerank

# generate question regarding topic
prompt_str1 = "Please generate a concise question about Paul Graham's life regarding the following topic {topic}"
prompt_tmpl1 = PromptTemplate(prompt_str1)
# use HyDE to hallucinate answer.
prompt_str2 = (
    "Please write a passage to answer the question\n"
    "Try to include as many key details as possible.\n"
    "\n"
    "\n"
    "{query_str}\n"
    "\n"
    "\n"
    'Passage:"""\n'
)
prompt_tmpl2 = PromptTemplate(prompt_str2)

llm = OpenAI(model="gpt-3.5-turbo")
retriever = index.as_retriever(similarity_top_k=5)
p = QueryPipeline(
    chain=[prompt_tmpl1, llm, prompt_tmpl2, llm, retriever], verbose=True
)
nodes = p.run(topic="college")
len(nodes)
> Running module 5b44aee5-2afe-4adb-8c56-370b047e5a8f with input: 
topic: college

> Running module 945590ad-31c1-4e26-b319-c43c887dd0c2 with input: 
messages: Please generate a concise question about Paul Graham's life regarding the following topic college

> Running module 0268b84c-3bed-46f2-9860-484763ec992c with input: 
query_str: assistant: How did Paul Graham's college experience shape his career and entrepreneurial mindset?

> Running module f3cbf237-e973-4e8a-ae8b-35fa7527c1b6 with input: 
messages: Please write a passage to answer the question
Try to include as many key details as possible.


assistant: How did Paul Graham's college experience shape his career and entrepreneurial mindset?


Pass...

> Running module da0f3129-7404-4c96-a8c8-35c01af710d8 with input: 
input: assistant: Paul Graham's college experience played a pivotal role in shaping his career and entrepreneurial mindset. As a student at Cornell University, Graham immersed himself in the world of compute...


5
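
The pipeline's final output is whatever the last module returns; here, that's the retriever's list of NodeWithScore objects. A small sketch to peek at the retrieved text and similarity scores:

# inspect the retrieved nodes and their scores
for node in nodes:
    print(node.score, node.node.get_content()[:100])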

Create a Full RAG Pipeline as a DAG#

Here we chain together a full RAG pipeline consisting of query rewriting, retrieval, reranking, and response synthesis.

Here we can’t use chain syntax because certain modules depend on multiple inputs (for instance, response synthesis expects both the retrieved nodes and the original question). Instead we’ll construct a DAG explicitly, through add_modules and then add_link.

1. RAG Pipeline with Query Rewriting#

We use an LLM to rewrite the query before passing it to our downstream modules: retrieval, reranking, and synthesis.

from llama_index.postprocessor import CohereRerank
from llama_index.response_synthesizers import TreeSummarize
from llama_index import ServiceContext

# define modules
prompt_str = "Please generate a question about Paul Graham's life regarding the following topic {topic}"
prompt_tmpl = PromptTemplate(prompt_str)
llm = OpenAI(model="gpt-3.5-turbo")
retriever = index.as_retriever(similarity_top_k=3)
reranker = CohereRerank()
summarizer = TreeSummarize(
    service_context=ServiceContext.from_defaults(llm=llm)
)
# define query pipeline
p = QueryPipeline(verbose=True)
p.add_modules(
    {
        "llm": llm,
        "prompt_tmpl": prompt_tmpl,
        "retriever": retriever,
        "summarizer": summarizer,
        "reranker": reranker,
    }
)

Next we draw links between modules with add_link. add_link takes in the source and destination module ids, and optionally a source_key and dest_key. Specify source_key or dest_key when a module has multiple outputs or inputs, respectively.

You can view the set of input/output keys for each module through module.as_query_component().input_keys and module.as_query_component().output_keys.

Here we explicitly specify dest_key for the reranker and summarizer modules because they take in two inputs (query_str and nodes).

p.add_link("prompt_tmpl", "llm")
p.add_link("llm", "retriever")
p.add_link("retriever", "reranker", dest_key="nodes")
p.add_link("llm", "reranker", dest_key="query_str")
p.add_link("reranker", "summarizer", dest_key="nodes")
p.add_link("llm", "summarizer", dest_key="query_str")

# look at summarizer input keys
print(summarizer.as_query_component().input_keys)
required_keys={'query_str', 'nodes'} optional_keys=set()
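
The same inspection works for any module. For instance, a sketch checking the reranker's expected keys:

# the reranker takes both `query_str` and `nodes` as inputs
print(reranker.as_query_component().input_keys)
print(reranker.as_query_component().output_keys)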

We use networkx to store the graph representation. This gives us an easy way to view the DAG!

## create graph
from pyvis.network import Network

net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(p.dag)
net.show("rag_dag.html")

## another option using `pygraphviz`
# from networkx.drawing.nx_agraph import to_agraph
# from IPython.display import Image
# agraph = to_agraph(p.dag)
# agraph.layout(prog="dot")
# agraph.draw('rag_dag.png')
# display(Image('rag_dag.png'))
rag_dag.html
response = p.run(topic="YC")
> Running module prompt_tmpl with input: 
topic: YC

> Running module llm with input: 
messages: Please generate a question about Paul Graham's life regarding the following topic YC

> Running module retriever with input: 
input: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?

> Running module reranker with input: 
query_str: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?
nodes: [NodeWithScore(node=TextNode(id_='543f958b-2c46-4c0f-b046-22e0a60ea950', embedding=None, metadata={'file_path': '../data/paul_graham/paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file...

> Running module summarizer with input: 
query_str: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?
nodes: [NodeWithScore(node=TextNode(id_='6b43fef2-821a-49a2-a043-9d816b70560f', embedding=None, metadata={'file_path': '../data/paul_graham/paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file...


print(str(response))
Paul Graham played a significant role in the founding and development of Y Combinator (YC). He was one of the co-founders of YC and was actively involved in its operations. He initially intended YC to be a part-time job, but as it grew, he became more excited about it and it started to take up a larger portion of his attention. Paul Graham worked on various aspects of YC, including selecting and helping founders, writing essays, and working on YC's internal software. However, over time, he gradually reduced his involvement in certain areas, such as working on the programming language Arc, as YC's infrastructure became more dependent on it. Eventually, Paul Graham decided to hand over the reins of YC to someone else and recruited Sam Altman to take over as president.
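
Beyond printing the text, the summarizer's output is a full Response object, so you can also inspect which source nodes informed the answer; a small sketch, assuming the standard Response interface:

# inspect the source nodes behind the synthesized answer
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_content()[:100])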
# you can do async too
response = await p.arun(topic="YC")
print(str(response))
> Running module prompt_tmpl with input: 
topic: YC

> Running module llm with input: 
messages: Please generate a question about Paul Graham's life regarding the following topic YC

> Running module retriever with input: 
input: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?

> Running module reranker with input: 
query_str: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?
nodes: [NodeWithScore(node=TextNode(id_='543f958b-2c46-4c0f-b046-22e0a60ea950', embedding=None, metadata={'file_path': '../data/paul_graham/paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file...

> Running module summarizer with input: 
query_str: assistant: What role did Paul Graham play in the founding and development of Y Combinator (YC)?
nodes: [NodeWithScore(node=TextNode(id_='6b43fef2-821a-49a2-a043-9d816b70560f', embedding=None, metadata={'file_path': '../data/paul_graham/paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file...

Paul Graham played a significant role in the founding and development of Y Combinator (YC). He was one of the co-founders of YC and was actively involved in its early stages. He helped establish the Summer Founders Program (SFP) and was responsible for selecting and funding the initial batch of startups. Graham also played a key role in shaping the funding model for YC, based on previous deals and experiences. As YC grew, Graham's involvement shifted, and he focused more on writing essays and working on YC, while also being involved in other projects such as Hacker News. However, he eventually decided to hand over the reins of YC to Sam Altman and retire from his role in the organization.

2. RAG Pipeline without Query Rewriting#

Here we set up a RAG pipeline without the query rewriting step.

We need a way to link the input query to both the retriever and the summarizer. We can do this by defining a special InputComponent, allowing us to link a single input to multiple downstream modules.

from llama_index.response_synthesizers import TreeSummarize
from llama_index import ServiceContext
from llama_index.query_pipeline import InputComponent

retriever = index.as_retriever(similarity_top_k=5)
summarizer = TreeSummarize(
    service_context=ServiceContext.from_defaults(
        llm=OpenAI(model="gpt-3.5-turbo")
    )
)
p = QueryPipeline(verbose=True)
p.add_modules(
    {
        "input": InputComponent(),
        "retriever": retriever,
        "summarizer": summarizer,
    }
)
p.add_link("input", "retriever")
p.add_link("input", "summarizer", dest_key="query_str")
p.add_link("retriever", "summarizer", dest_key="nodes")
output = p.run(input="what did the author do in YC")
> Running module input with input: 
input: what did the author do in YC

> Running module retriever with input: 
input: what did the author do in YC

> Running module summarizer with input: 
query_str: what did the author do in YC
nodes: [NodeWithScore(node=TextNode(id_='d2fc236f-2120-4f43-92d9-6c8e8725b806', embedding=None, metadata={'file_path': '../data/paul_graham/paul_graham_essay.txt', 'file_name': 'paul_graham_essay.txt', 'file...


print(str(output))
The author had a diverse range of responsibilities at YC, including writing essays, working on YC's internal software, funding and supporting startups, dealing with disputes between cofounders, identifying dishonesty, and addressing issues with people mistreating startups. They were also involved in the Summer Founders Program and the batch model of funding startups. Additionally, the author mentioned that they wrote all of YC's internal software in Arc, although they gradually stopped working on it.

Defining a Custom Component in a Query Pipeline#

You can easily define a custom component. Simply subclass CustomQueryComponent, implement the validation and run functions plus some helpers, and plug it in.

Let’s wrap the related movie generation prompt+LLM chain from the first example into a custom component.

from llama_index.query_pipeline import (
    CustomQueryComponent,
    InputKeys,
    OutputKeys,
)
from typing import Dict, Any
from llama_index.llms.llm import BaseLLM
from pydantic import Field


class RelatedMovieComponent(CustomQueryComponent):
    """Related movie component."""

    llm: BaseLLM = Field(..., description="OpenAI LLM")

    def _validate_component_inputs(
        self, input: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Validate component inputs during run_component."""
        # NOTE: this is OPTIONAL but we show you here how to do validation as an example
        return input

    @property
    def _input_keys(self) -> set:
        """Input keys dict."""
        # NOTE: These are required inputs. If you have optional inputs please override
        # `optional_input_keys_dict`
        return {"movie"}

    @property
    def _output_keys(self) -> set:
        return {"output"}

    def _run_component(self, **kwargs) -> Dict[str, Any]:
        """Run the component."""
        # use QueryPipeline itself here for convenience
        prompt_str = "Please generate related movies to {movie_name}"
        prompt_tmpl = PromptTemplate(prompt_str)
        # use the component's own LLM (self.llm), not a global
        p = QueryPipeline(chain=[prompt_tmpl, self.llm])
        return {"output": p.run(movie_name=kwargs["movie"])}

Let’s try the custom component out! We’ll also add a step to rewrite the output in the voice of Shakespeare.

llm = OpenAI(model="gpt-3.5-turbo")
component = RelatedMovieComponent(llm=llm)

# let's add some subsequent prompts for fun
prompt_str = """\
Here's some text:

{text}

Can you rewrite this in the voice of Shakespeare?
"""
prompt_tmpl = PromptTemplate(prompt_str)

p = QueryPipeline(chain=[component, prompt_tmpl, llm], verbose=True)
output = p.run(movie="Love Actually")
> Running module 4987ea07-87d3-43ec-98dc-2bec13b2a237 with input: 
movie: Love Actually

> Running module 787b9daf-ee18-4e21-85ed-d04fe6dcc347 with input: 
text: assistant: 1. "Valentine's Day" (2010) - A star-studded ensemble cast explores interconnected love stories on Valentine's Day in Los Angeles.

2. "New Year's Eve" (2011) - Similar to "Love Actually," ...

> Running module cd470014-5b67-4e21-b769-bb5f3aa9f2b2 with input: 
messages: Here's some text:

assistant: 1. "Valentine's Day" (2010) - A star-studded ensemble cast explores interconnected love stories on Valentine's Day in Los Angeles.

2. "New Year's Eve" (2011) - Similar t...


print(str(output))
assistant: 1. "Valentine's Daye" (2010) - A troupe of stars doth explore interconnected love stories on Valentine's Daye in fair Los Angeles.

2. "New Year's Eve" (2011) - Similar to "Love Actually," this play doth followeth multiple characters as they navigate love and relationships on New Year's Eve in fair New York City.

3. "He's Just Not That Into Thee" (2009) - This romantic comedy doth feature intersecting storylines that explore the complexities of modern relationships and the search for true love.

4. "Crazy, Stupid, Love" (2011) - A middle-aged man's life doth unravel when his wife doth ask him for a divorce, leading him to seeketh guidance from a young bachelor who doth help him rediscover love.

5. "The Holiday" (2006) - Two women, one from fair Los Angeles and the other from England, doth swap homes during the Christmas season and unexpectedly findeth love in their new surroundings.

6. "Four Weddings and a Funeral" (1994) - This British romantic comedy doth followeth a group of friends as they attendeth various weddings and a funeral, exploring the ups and downs of love and relationships.

7. "Notting Hill" (1999) - A famous actress and a humble bookstore owner doth fall in love in the vibrant neighborhood of Notting Hill in London, leading to a series of comedic and heartfelt moments.

8. "Bridget Jones's Diary" (2001) - Bridget Jones, a quirky and relatable woman, doth navigate her love life whilst documenting her experiences in a diary, leading to unexpected romantic entanglements.

9. "About Time" (2013) - A young man doth discover he hath the ability to time travel and useth it to find love, but soon realizeth that altering the past can have unforeseen consequences.

10. "The Best Exotic Marigold Hotel" (2011) - A group of British retirees doth decide to spendeth their golden years in a seemingly luxurious hotel in India, where they findeth love, friendship, and new beginnings.