Node

Base schema for data structures.

pydantic model llama_index.schema.BaseComponent

Base component object to capture class names.

Show JSON schema
{
   "title": "BaseComponent",
   "description": "Base component object to caputure class names.",
   "type": "object",
   "properties": {}
}

abstract classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in the new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change or add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance
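
Example (illustrative sketch, not part of the generated reference; uses the concrete Document subclass documented below). copy(update=...) patches fields without re-validation:

from llama_index.schema import Document

doc = Document(text="original")
patched = doc.copy(update={"text": "patched"})  # update values are not validated
assert patched.text == "patched" and doc.text == "original"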

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_orm(obj: Any) Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model; include and exclude arguments behave as in dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are passed through to json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
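
Example (illustrative sketch, not part of the generated reference). BaseComponent itself is abstract, so this round-trips a concrete subclass (Document, documented below) through to_dict/from_dict and to_json/from_json:

from llama_index.schema import Document

doc = Document(text="hello world", metadata={"source": "example.txt"})

data = doc.to_dict()                  # plain dict; includes the class name
restored = Document.from_dict(data)
assert restored.text == doc.text

json_str = doc.to_json()              # JSON string via the same mechanism
restored2 = Document.from_json(json_str)
assert restored2.metadata == doc.metadata
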
pydantic model llama_index.schema.BaseNode

Base node Object.

Generic abstract interface for retrievable nodes

Show JSON schema
{
   "title": "BaseNode",
   "description": "Base node Object.\n\nGeneric abstract interface for retrievable nodes",
   "type": "object",
   "properties": {
      "id_": {
         "title": "Id ",
         "description": "Unique ID of the node.",
         "type": "string"
      },
      "embedding": {
         "title": "Embedding",
         "description": "Embedding of the node.",
         "type": "array",
         "items": {
            "type": "number"
         }
      },
      "extra_info": {
         "title": "Extra Info",
         "description": "A flat dictionary of metadata fields",
         "type": "object"
      },
      "excluded_embed_metadata_keys": {
         "title": "Excluded Embed Metadata Keys",
         "description": "Metadata keys that are exluded from text for the embed model.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "excluded_llm_metadata_keys": {
         "title": "Excluded Llm Metadata Keys",
         "description": "Metadata keys that are exluded from text for the LLM.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "relationships": {
         "title": "Relationships",
         "description": "A mapping of relationships to other node information.",
         "type": "object",
         "additionalProperties": {
            "anyOf": [
               {
                  "$ref": "#/definitions/RelatedNodeInfo"
               },
               {
                  "type": "array",
                  "items": {
                     "$ref": "#/definitions/RelatedNodeInfo"
                  }
               }
            ]
         }
      },
      "hash": {
         "title": "Hash",
         "description": "Hash of the node content.",
         "default": "",
         "type": "string"
      }
   },
   "definitions": {
      "ObjectType": {
         "title": "ObjectType",
         "description": "An enumeration.",
         "enum": [
            "1",
            "2",
            "3",
            "4"
         ],
         "type": "string"
      },
      "RelatedNodeInfo": {
         "title": "RelatedNodeInfo",
         "description": "Base component object to caputure class names.",
         "type": "object",
         "properties": {
            "node_id": {
               "title": "Node Id",
               "type": "string"
            },
            "node_type": {
               "$ref": "#/definitions/ObjectType"
            },
            "metadata": {
               "title": "Metadata",
               "type": "object"
            },
            "hash": {
               "title": "Hash",
               "type": "string"
            }
         },
         "required": [
            "node_id"
         ]
      }
   }
}

Config
  • allow_population_by_field_name: bool = True

Fields
field embedding: Optional[List[float]] = None

Embedding of the node.

field excluded_embed_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the embed model.

field excluded_llm_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the LLM.

field hash: str = ''

Hash of the node content.

field id_: str [Optional]

Unique ID of the node.

field metadata: Dict[str, Any] [Optional] (alias 'extra_info')

A flat dictionary of metadata fields. These fields are injected as part of the text shown to LLMs as context, injected as part of the text used for generating embeddings, and used by vector DBs for metadata filtering.

field relationships: Dict[NodeRelationship, Union[RelatedNodeInfo, List[RelatedNodeInfo]]] [Optional]

A mapping of relationships to other node information.

as_related_node_info() RelatedNodeInfo

Get node as RelatedNodeInfo.

abstract classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in the new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change or add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_orm(obj: Any) Model
abstract get_content(metadata_mode: MetadataMode = MetadataMode.ALL) str

Get object content.

get_embedding() List[float]

Get embedding.

Errors if embedding is None.
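
Example (illustrative sketch, not part of the generated reference). get_embedding() guards against a missing embedding:

from llama_index.schema import TextNode

node = TextNode(text="chunk")
node.embedding = [0.1, 0.2, 0.3]
assert node.get_embedding() == [0.1, 0.2, 0.3]
# If node.embedding were None, get_embedding() would raise instead of returning None.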

abstract get_metadata_str(mode: MetadataMode = MetadataMode.ALL) str

Metadata string.

abstract classmethod get_type() str

Get Object type.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model; include and exclude arguments behave as in dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are passed through to json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
abstract set_content(value: Any) None

Set the content of the node.

to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property child_nodes: Optional[List[RelatedNodeInfo]]

Child nodes.

property extra_info: Dict[str, Any]

Extra info.

Deprecated (TODO): use metadata instead.

property next_node: Optional[RelatedNodeInfo]

Next node.

property node_id: str
property parent_node: Optional[RelatedNodeInfo]

Parent node.

property prev_node: Optional[RelatedNodeInfo]

Prev node.

property ref_doc_id: Optional[str]

Deprecated: Get ref doc id.

property source_node: Optional[RelatedNodeInfo]

Source object node.

Extracted from the relationships field.
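
Example (illustrative sketch, not part of the generated reference). Wiring two TextNodes together through the relationships field and reading them back via the convenience properties:

from llama_index.schema import TextNode, NodeRelationship, RelatedNodeInfo

first = TextNode(text="chunk one", id_="node-1")
second = TextNode(text="chunk two", id_="node-2")

first.relationships[NodeRelationship.NEXT] = RelatedNodeInfo(node_id=second.node_id)
second.relationships[NodeRelationship.PREVIOUS] = RelatedNodeInfo(node_id=first.node_id)

assert first.next_node.node_id == "node-2"
assert second.prev_node.node_id == "node-1"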

pydantic model llama_index.schema.Document

Generic interface for a data document.

This document connects to data sources.

Show JSON schema
{
   "title": "Document",
   "description": "Generic interface for a data document.\n\nThis document connects to data sources.",
   "type": "object",
   "properties": {
      "doc_id": {
         "title": "Doc Id",
         "description": "Unique ID of the node.",
         "type": "string"
      },
      "embedding": {
         "title": "Embedding",
         "description": "Embedding of the node.",
         "type": "array",
         "items": {
            "type": "number"
         }
      },
      "extra_info": {
         "title": "Extra Info",
         "description": "A flat dictionary of metadata fields",
         "type": "object"
      },
      "excluded_embed_metadata_keys": {
         "title": "Excluded Embed Metadata Keys",
         "description": "Metadata keys that are exluded from text for the embed model.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "excluded_llm_metadata_keys": {
         "title": "Excluded Llm Metadata Keys",
         "description": "Metadata keys that are exluded from text for the LLM.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "relationships": {
         "title": "Relationships",
         "description": "A mapping of relationships to other node information.",
         "type": "object",
         "additionalProperties": {
            "anyOf": [
               {
                  "$ref": "#/definitions/RelatedNodeInfo"
               },
               {
                  "type": "array",
                  "items": {
                     "$ref": "#/definitions/RelatedNodeInfo"
                  }
               }
            ]
         }
      },
      "hash": {
         "title": "Hash",
         "description": "Hash of the node content.",
         "default": "",
         "type": "string"
      },
      "text": {
         "title": "Text",
         "description": "Text content of the node.",
         "default": "",
         "type": "string"
      },
      "start_char_idx": {
         "title": "Start Char Idx",
         "description": "Start char index of the node.",
         "type": "integer"
      },
      "end_char_idx": {
         "title": "End Char Idx",
         "description": "End char index of the node.",
         "type": "integer"
      },
      "text_template": {
         "title": "Text Template",
         "description": "Template for how text is formatted, with {content} and {metadata_str} placeholders.",
         "default": "{metadata_str}\n\n{content}",
         "type": "string"
      },
      "metadata_template": {
         "title": "Metadata Template",
         "description": "Template for how metadata is formatted, with {key} and {value} placeholders.",
         "default": "{key}: {value}",
         "type": "string"
      },
      "metadata_seperator": {
         "title": "Metadata Seperator",
         "description": "Seperator between metadata fields when converting to string.",
         "default": "\n",
         "type": "string"
      }
   },
   "definitions": {
      "ObjectType": {
         "title": "ObjectType",
         "description": "An enumeration.",
         "enum": [
            "1",
            "2",
            "3",
            "4"
         ],
         "type": "string"
      },
      "RelatedNodeInfo": {
         "title": "RelatedNodeInfo",
         "description": "Base component object to caputure class names.",
         "type": "object",
         "properties": {
            "node_id": {
               "title": "Node Id",
               "type": "string"
            },
            "node_type": {
               "$ref": "#/definitions/ObjectType"
            },
            "metadata": {
               "title": "Metadata",
               "type": "object"
            },
            "hash": {
               "title": "Hash",
               "type": "string"
            }
         },
         "required": [
            "node_id"
         ]
      }
   }
}

Config
  • allow_population_by_field_name: bool = True

Fields
field embedding: Optional[List[float]] = None

Embedding of the node.

Validated by
  • _check_hash

field end_char_idx: Optional[int] = None

End char index of the node.

Validated by
  • _check_hash

field excluded_embed_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the embed model.

Validated by
  • _check_hash

field excluded_llm_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the LLM.

Validated by
  • _check_hash

field hash: str = ''

Hash of the node content.

Validated by
  • _check_hash

field id_: str [Optional] (alias 'doc_id')

Unique ID of the node.

Validated by
  • _check_hash

field metadata: Dict[str, Any] [Optional] (alias 'extra_info')

A flat dictionary of metadata fields. These fields are injected as part of the text shown to LLMs as context, injected as part of the text used for generating embeddings, and used by vector DBs for metadata filtering.

Validated by
  • _check_hash

field metadata_seperator: str = '\n'

Separator between metadata fields when converting to string.

Validated by
  • _check_hash

field metadata_template: str = '{key}: {value}'

Template for how metadata is formatted, with {key} and {value} placeholders.

Validated by
  • _check_hash

field relationships: Dict[NodeRelationship, Union[RelatedNodeInfo, List[RelatedNodeInfo]]] [Optional]

A mapping of relationships to other node information.

Validated by
  • _check_hash

field start_char_idx: Optional[int] = None

Start char index of the node.

Validated by
  • _check_hash

field text: str = ''

Text content of the node.

Validated by
  • _check_hash

field text_template: str = '{metadata_str}\n\n{content}'

Template for how text is formatted, with {content} and {metadata_str} placeholders.

Validated by
  • _check_hash
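
Example (illustrative sketch, not part of the generated reference). The template and separator fields above control how metadata is folded into the text returned by get_content(); with the defaults shown, an excluded key is dropped from the LLM view:

from llama_index.schema import Document, MetadataMode

doc = Document(
    text="LlamaIndex is a data framework.",
    metadata={"file_name": "intro.txt", "page": "1"},
    excluded_llm_metadata_keys=["page"],
)

# metadata_template renders "file_name: intro.txt"; text_template then
# prepends it to the text, separated by a blank line.
print(doc.get_content(metadata_mode=MetadataMode.LLM))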

as_related_node_info() RelatedNodeInfo

Get node as RelatedNodeInfo.

classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in the new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change or add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod example() Document
classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_langchain_format(doc: Document) Document

Convert struct from LangChain document format.

classmethod from_orm(obj: Any) Model
get_content(metadata_mode: MetadataMode = MetadataMode.NONE) str

Get object content.

get_doc_id() str

TODO: Deprecated: Get document ID.

get_embedding() List[float]

Get embedding.

Errors if embedding is None.

get_metadata_str(mode: MetadataMode = MetadataMode.ALL) str

Metadata info string.

get_node_info() Dict[str, Any]

Get node info.

get_text() str
classmethod get_type() str

Get Document type.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model; include and exclude arguments behave as in dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are passed through to json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
set_content(value: str) None

Set the content of the node.

to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
to_langchain_format() Document

Convert struct to LangChain document format.
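
Example (illustrative sketch, not part of the generated reference; assumes the langchain package is installed):

from llama_index.schema import Document

doc = Document(text="some text", metadata={"source": "a.txt"})
lc_doc = doc.to_langchain_format()             # a LangChain Document
back = Document.from_langchain_format(lc_doc)
assert back.text == doc.text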

classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property child_nodes: Optional[List[RelatedNodeInfo]]

Child nodes.

property doc_id: str

Get document ID.

property extra_info: Dict[str, Any]

Extra info.

Deprecated (TODO): use metadata instead.

property next_node: Optional[RelatedNodeInfo]

Next node.

property node_id: str
property node_info: Dict[str, Any]

Deprecated: Get node info.

property parent_node: Optional[RelatedNodeInfo]

Parent node.

property prev_node: Optional[RelatedNodeInfo]

Prev node.

property ref_doc_id: Optional[str]

Deprecated: Get ref doc id.

property source_node: Optional[RelatedNodeInfo]

Source object node.

Extracted from the relationships field.

pydantic model llama_index.schema.ImageDocument

Data document containing an image.

Show JSON schema
{
   "title": "ImageDocument",
   "description": "Data document containing an image.",
   "type": "object",
   "properties": {
      "doc_id": {
         "title": "Doc Id",
         "description": "Unique ID of the node.",
         "type": "string"
      },
      "embedding": {
         "title": "Embedding",
         "description": "Embedding of the node.",
         "type": "array",
         "items": {
            "type": "number"
         }
      },
      "extra_info": {
         "title": "Extra Info",
         "description": "A flat dictionary of metadata fields",
         "type": "object"
      },
      "excluded_embed_metadata_keys": {
         "title": "Excluded Embed Metadata Keys",
         "description": "Metadata keys that are exluded from text for the embed model.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "excluded_llm_metadata_keys": {
         "title": "Excluded Llm Metadata Keys",
         "description": "Metadata keys that are exluded from text for the LLM.",
         "type": "array",
         "items": {
            "type": "string"
         }
      },
      "relationships": {
         "title": "Relationships",
         "description": "A mapping of relationships to other node information.",
         "type": "object",
         "additionalProperties": {
            "anyOf": [
               {
                  "$ref": "#/definitions/RelatedNodeInfo"
               },
               {
                  "type": "array",
                  "items": {
                     "$ref": "#/definitions/RelatedNodeInfo"
                  }
               }
            ]
         }
      },
      "hash": {
         "title": "Hash",
         "description": "Hash of the node content.",
         "default": "",
         "type": "string"
      },
      "text": {
         "title": "Text",
         "description": "Text content of the node.",
         "default": "",
         "type": "string"
      },
      "start_char_idx": {
         "title": "Start Char Idx",
         "description": "Start char index of the node.",
         "type": "integer"
      },
      "end_char_idx": {
         "title": "End Char Idx",
         "description": "End char index of the node.",
         "type": "integer"
      },
      "text_template": {
         "title": "Text Template",
         "description": "Template for how text is formatted, with {content} and {metadata_str} placeholders.",
         "default": "{metadata_str}\n\n{content}",
         "type": "string"
      },
      "metadata_template": {
         "title": "Metadata Template",
         "description": "Template for how metadata is formatted, with {key} and {value} placeholders.",
         "default": "{key}: {value}",
         "type": "string"
      },
      "metadata_seperator": {
         "title": "Metadata Seperator",
         "description": "Seperator between metadata fields when converting to string.",
         "default": "\n",
         "type": "string"
      },
      "image": {
         "title": "Image",
         "type": "string"
      }
   },
   "definitions": {
      "ObjectType": {
         "title": "ObjectType",
         "description": "An enumeration.",
         "enum": [
            "1",
            "2",
            "3",
            "4"
         ],
         "type": "string"
      },
      "RelatedNodeInfo": {
         "title": "RelatedNodeInfo",
         "description": "Base component object to caputure class names.",
         "type": "object",
         "properties": {
            "node_id": {
               "title": "Node Id",
               "type": "string"
            },
            "node_type": {
               "$ref": "#/definitions/ObjectType"
            },
            "metadata": {
               "title": "Metadata",
               "type": "object"
            },
            "hash": {
               "title": "Hash",
               "type": "string"
            }
         },
         "required": [
            "node_id"
         ]
      }
   }
}

Config
  • allow_population_by_field_name: bool = True

Fields
field embedding: Optional[List[float]] = None

Embedding of the node.

Validated by
  • _check_hash

field end_char_idx: Optional[int] = None

End char index of the node.

Validated by
  • _check_hash

field excluded_embed_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the embed model.

Validated by
  • _check_hash

field excluded_llm_metadata_keys: List[str] [Optional]

Metadata keys that are excluded from text for the LLM.

Validated by
  • _check_hash

field hash: str = ''

Hash of the node content.

Validated by
  • _check_hash

field id_: str [Optional] (alias 'doc_id')

Unique ID of the node.

Validated by
  • _check_hash

field image: Optional[str] = None
Validated by
  • _check_hash

field metadata: Dict[str, Any] [Optional] (alias 'extra_info')

A flat dictionary of metadata fields. These fields are injected as part of the text shown to LLMs as context, injected as part of the text used for generating embeddings, and used by vector DBs for metadata filtering.

Validated by
  • _check_hash

field metadata_seperator: str = '\n'

Separator between metadata fields when converting to string.

Validated by
  • _check_hash

field metadata_template: str = '{key}: {value}'

Template for how metadata is formatted, with {key} and {value} placeholders.

Validated by
  • _check_hash

field relationships: Dict[NodeRelationship, Union[RelatedNodeInfo, List[RelatedNodeInfo]]] [Optional]

A mapping of relationships to other node information.

Validated by
  • _check_hash

field start_char_idx: Optional[int] = None

Start char index of the node.

Validated by
  • _check_hash

field text: str = ''

Text content of the node.

Validated by
  • _check_hash

field text_template: str = '{metadata_str}\n\n{content}'

Template for how text is formatted, with {content} and {metadata_str} placeholders.

Validated by
  • _check_hash

as_related_node_info() RelatedNodeInfo

Get node as RelatedNodeInfo.

classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choosing which fields to include, exclude, or change.

Parameters
  • include – fields to include in the new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change or add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod example() Document
classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_langchain_format(doc: Document) Document

Convert struct from LangChain document format.

classmethod from_orm(obj: Any) Model
get_content(metadata_mode: MetadataMode = MetadataMode.NONE) str

Get object content.

get_doc_id() str

TODO: Deprecated: Get document ID.

get_embedding() List[float]

Get embedding.

Errors if embedding is None.

get_metadata_str(mode: MetadataMode = MetadataMode.ALL) str

Metadata info string.

get_node_info() Dict[str, Any]

Get node info.

get_text() str
classmethod get_type() str

Get Document type.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model; include and exclude arguments behave as in dict().

encoder is an optional function to supply as default to json.dumps(); other arguments are passed through to json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
set_content(value: str) None

Set the content of the node.

to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
to_langchain_format() Document

Convert struct to LangChain document format.

classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property child_nodes: Optional[List[RelatedNodeInfo]]

Child nodes.

property doc_id: str

Get document ID.

property extra_info: Dict[str, Any]

Extra info.

Deprecated (TODO): use metadata instead.

property next_node: Optional[RelatedNodeInfo]

Next node.

property node_id: str
property node_info: Dict[str, Any]

Deprecated: Get node info.

property parent_node: Optional[RelatedNodeInfo]

Parent node.

property prev_node: Optional[RelatedNodeInfo]

Prev node.

property ref_doc_id: Optional[str]

Deprecated: Get ref doc id.

property source_node: Optional[RelatedNodeInfo]

Source object node.

Extracted from the relationships field.
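
Example (illustrative sketch, not part of the generated reference). ImageDocument adds an optional image field on top of Document; the encoding of the string (for example, base64) is left to the caller:

from llama_index.schema import ImageDocument

img_doc = ImageDocument(
    text="caption: a cat on a mat",
    image="<base64-encoded image bytes>",  # hypothetical payload
    metadata={"source": "cat.png"},
)
assert img_doc.image is not None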

class llama_index.schema.MetadataMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
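
MetadataMode controls how much node metadata is folded into the text returned by get_content() and get_metadata_str(). Illustrative sketch (not part of the generated reference), assuming the standard members ALL, EMBED, LLM, and NONE:

from llama_index.schema import Document, MetadataMode

doc = Document(
    text="body",
    metadata={"author": "a", "internal_id": "x"},
    excluded_embed_metadata_keys=["internal_id"],
)

doc.get_content(metadata_mode=MetadataMode.NONE)   # text only
doc.get_content(metadata_mode=MetadataMode.EMBED)  # metadata minus excluded embed keys
doc.get_content(metadata_mode=MetadataMode.ALL)    # all metadata plus text
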
capitalize()

Return a capitalized version of the string.

More specifically, make the first character have upper case and the rest lower case.

casefold()

Return a version of the string suitable for caseless comparisons.

center(width, fillchar=' ', /)

Return a centered string of length width.

Padding is done using the specified fill character (default is a space).

count(sub[, start[, end]]) int

Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.

encode(encoding='utf-8', errors='strict')

Encode the string using the codec registered for encoding.

encoding

The encoding in which to encode the string.

errors

The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.

endswith(suffix[, start[, end]]) bool

Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.

expandtabs(tabsize=8)

Return a copy where all tab characters are expanded using spaces.

If tabsize is not given, a tab size of 8 characters is assumed.

find(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

format(*args, **kwargs) str

Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces ('{' and '}').

format_map(mapping) str

Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces ('{' and '}').

index(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

isalnum()

Return True if the string is an alpha-numeric string, False otherwise.

A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.

isalpha()

Return True if the string is an alphabetic string, False otherwise.

A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.

isascii()

Return True if all characters in the string are ASCII, False otherwise.

ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.

isdecimal()

Return True if the string is a decimal string, False otherwise.

A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.

isdigit()

Return True if the string is a digit string, False otherwise.

A string is a digit string if all characters in the string are digits and there is at least one character in the string.

isidentifier()

Return True if the string is a valid Python identifier, False otherwise.

Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.

islower()

Return True if the string is a lowercase string, False otherwise.

A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.

isnumeric()

Return True if the string is a numeric string, False otherwise.

A string is numeric if all characters in the string are numeric and there is at least one character in the string.

isprintable()

Return True if the string is printable, False otherwise.

A string is printable if all of its characters are considered printable in repr() or if it is empty.

isspace()

Return True if the string is a whitespace string, False otherwise.

A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.

istitle()

Return True if the string is a title-cased string, False otherwise.

In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.

isupper()

Return True if the string is an uppercase string, False otherwise.

A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.

join(iterable, /)

Concatenate any number of strings.

The string whose method is called is inserted in between each given string. The result is returned as a new string.

Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'

ljust(width, fillchar=' ', /)

Return a left-justified string of length width.

Padding is done using the specified fill character (default is a space).

lower()

Return a copy of the string converted to lowercase.

lstrip(chars=None, /)

Return a copy of the string with leading whitespace removed.

If chars is given and not None, remove characters in chars instead.

static maketrans()

Return a translation table usable for str.translate().

If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.

partition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing the original string and two empty strings.

removeprefix(prefix, /)

Return a str with the given prefix string removed if present.

If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.

removesuffix(suffix, /)

Return a str with the given suffix string removed if present.

If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.

replace(old, new, count=-1, /)

Return a copy with all occurrences of substring old replaced by new.

count

Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.

If the optional argument count is given, only the first count occurrences are replaced.

rfind(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

rindex(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

rjust(width, fillchar=' ', /)

Return a right-justified string of length width.

Padding is done using the specified fill character (default is a space).

rpartition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing two empty strings and the original string.

rsplit(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Splitting starts at the end of the string and works to the front.

rstrip(chars=None, /)

Return a copy of the string with trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

split(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.

splitlines(keepends=False)

Return a list of the lines in the string, breaking at line boundaries.

Line breaks are not included in the resulting list unless keepends is given and true.

startswith(prefix[, start[, end]]) bool

Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.

strip(chars=None, /)

Return a copy of the string with leading and trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

swapcase()

Convert uppercase characters to lowercase and lowercase characters to uppercase.

title()

Return a version of the string where each word is titlecased.

More specifically, words start with uppercased characters and all remaining cased characters have lower case.

translate(table, /)

Replace each character in the string using the given translation table.

table

Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.

The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.

upper()

Return a copy of the string converted to uppercase.

zfill(width, /)

Pad a numeric string with zeros on the left, to fill a field of the given width.

The string is never truncated.

llama_index.schema.Node

alias of TextNode

class llama_index.schema.NodeRelationship(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Node relationships used in BaseNode class.

SOURCE

The node is the source document.

PREVIOUS

The node is the previous node in the document.

NEXT

The node is the next node in the document.

PARENT

The node is the parent node in the document.

CHILD

The node is a child node in the document.
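
Example (illustrative sketch, not part of the generated reference; see also the relationships example under BaseNode above). CHILD may map to a list of RelatedNodeInfo, while the other relationships map to a single one:

from llama_index.schema import TextNode, NodeRelationship, RelatedNodeInfo

parent = TextNode(text="section", id_="parent-1")
child = TextNode(text="paragraph", id_="child-1")

parent.relationships[NodeRelationship.CHILD] = [RelatedNodeInfo(node_id=child.node_id)]
child.relationships[NodeRelationship.PARENT] = RelatedNodeInfo(node_id=parent.node_id)

assert parent.child_nodes[0].node_id == "child-1"
assert child.parent_node.node_id == "parent-1"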

NodeRelationship also inherits the full set of standard str methods (capitalize(), split(), strip(), and so on); see the identical listing under MetadataMode above.

pydantic model llama_index.schema.NodeWithScore

Show JSON schema
{
   "title": "NodeWithScore",
   "description": "Base component object to caputure class names.",
   "type": "object",
   "properties": {
      "node": {
         "$ref": "#/definitions/BaseNode"
      },
      "score": {
         "title": "Score",
         "type": "number"
      }
   },
   "required": [
      "node"
   ],
   "definitions": {
      "ObjectType": {
         "title": "ObjectType",
         "description": "An enumeration.",
         "enum": [
            "1",
            "2",
            "3",
            "4"
         ],
         "type": "string"
      },
      "RelatedNodeInfo": {
         "title": "RelatedNodeInfo",
         "description": "Base component object to caputure class names.",
         "type": "object",
         "properties": {
            "node_id": {
               "title": "Node Id",
               "type": "string"
            },
            "node_type": {
               "$ref": "#/definitions/ObjectType"
            },
            "metadata": {
               "title": "Metadata",
               "type": "object"
            },
            "hash": {
               "title": "Hash",
               "type": "string"
            }
         },
         "required": [
            "node_id"
         ]
      },
      "BaseNode": {
         "title": "BaseNode",
         "description": "Base node Object.\n\nGeneric abstract interface for retrievable nodes",
         "type": "object",
         "properties": {
            "id_": {
               "title": "Id ",
               "description": "Unique ID of the node.",
               "type": "string"
            },
            "embedding": {
               "title": "Embedding",
               "description": "Embedding of the node.",
               "type": "array",
               "items": {
                  "type": "number"
               }
            },
            "extra_info": {
               "title": "Extra Info",
               "description": "A flat dictionary of metadata fields",
               "type": "object"
            },
            "excluded_embed_metadata_keys": {
               "title": "Excluded Embed Metadata Keys",
               "description": "Metadata keys that are exluded from text for the embed model.",
               "type": "array",
               "items": {
                  "type": "string"
               }
            },
            "excluded_llm_metadata_keys": {
               "title": "Excluded Llm Metadata Keys",
               "description": "Metadata keys that are exluded from text for the LLM.",
               "type": "array",
               "items": {
                  "type": "string"
               }
            },
            "relationships": {
               "title": "Relationships",
               "description": "A mapping of relationships to other node information.",
               "type": "object",
               "additionalProperties": {
                  "anyOf": [
                     {
                        "$ref": "#/definitions/RelatedNodeInfo"
                     },
                     {
                        "type": "array",
                        "items": {
                           "$ref": "#/definitions/RelatedNodeInfo"
                        }
                     }
                  ]
               }
            },
            "hash": {
               "title": "Hash",
               "description": "Hash of the node content.",
               "default": "",
               "type": "string"
            }
         }
      }
   }
}

Fields
field node: BaseNode [Required]
field score: Optional[float] = None
classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_orm(obj: Any) Model
get_content(metadata_mode: MetadataMode = MetadataMode.NONE) str
get_embedding() List[float]
get_score(raise_error: bool = False) float

Get score.

get_text() str
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
property embedding: Optional[List[float]]
property id_: str
property metadata: Dict[str, Any]
property node_id: str
property text: str
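
A minimal usage sketch (TextNode is assumed here as a concrete BaseNode implementation from the same module; the IDs and score are illustrative):

from llama_index.schema import NodeWithScore, TextNode

node = TextNode(text="hello world", id_="node-1")
scored = NodeWithScore(node=node, score=0.82)

scored.get_score()    # 0.82; with raise_error=False (the default), a missing score is returned as 0.0
scored.get_content()  # "hello world", delegated to the wrapped node
scored.node_id        # "node-1", proxied from the wrapped node
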
class llama_index.schema.ObjectType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

An enumeration.
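
ObjectType subclasses str (which is why the inherited str methods are documented below), with the string values “1” through “4” shown in the schema above. The member name used in this sketch (TEXT) is an assumption for illustration, not something the schema guarantees:

from llama_index.schema import ObjectType

ObjectType.TEXT.value             # "1" -- member name assumed for illustration
isinstance(ObjectType.TEXT, str)  # True: a str-based enum inherits every str method
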
capitalize()

Return a capitalized version of the string.

More specifically, make the first character have upper case and the rest lower case.

casefold()

Return a version of the string suitable for caseless comparisons.

center(width, fillchar=' ', /)

Return a centered string of length width.

Padding is done using the specified fill character (default is a space).

count(sub[, start[, end]]) int

Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation.

encode(encoding='utf-8', errors='strict')

Encode the string using the codec registered for encoding.

encoding

The encoding in which to encode the string.

errors

The error handling scheme to use for encoding errors. The default is ‘strict’ meaning that encoding errors raise a UnicodeEncodeError. Other possible values are ‘ignore’, ‘replace’ and ‘xmlcharrefreplace’ as well as any other name registered with codecs.register_error that can handle UnicodeEncodeErrors.

endswith(suffix[, start[, end]]) bool

Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.

expandtabs(tabsize=8)

Return a copy where all tab characters are expanded using spaces.

If tabsize is not given, a tab size of 8 characters is assumed.

find(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

format(*args, **kwargs) str

Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces (‘{’ and ‘}’).

format_map(mapping) str

Return a formatted version of S, using substitutions from mapping. The substitutions are identified by braces (‘{’ and ‘}’).

index(sub[, start[, end]]) int

Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

isalnum()

Return True if the string is an alpha-numeric string, False otherwise.

A string is alpha-numeric if all characters in the string are alpha-numeric and there is at least one character in the string.

isalpha()

Return True if the string is an alphabetic string, False otherwise.

A string is alphabetic if all characters in the string are alphabetic and there is at least one character in the string.

isascii()

Return True if all characters in the string are ASCII, False otherwise.

ASCII characters have code points in the range U+0000-U+007F. Empty string is ASCII too.

isdecimal()

Return True if the string is a decimal string, False otherwise.

A string is a decimal string if all characters in the string are decimal and there is at least one character in the string.

isdigit()

Return True if the string is a digit string, False otherwise.

A string is a digit string if all characters in the string are digits and there is at least one character in the string.

isidentifier()

Return True if the string is a valid Python identifier, False otherwise.

Call keyword.iskeyword(s) to test whether string s is a reserved identifier, such as “def” or “class”.

islower()

Return True if the string is a lowercase string, False otherwise.

A string is lowercase if all cased characters in the string are lowercase and there is at least one cased character in the string.

isnumeric()

Return True if the string is a numeric string, False otherwise.

A string is numeric if all characters in the string are numeric and there is at least one character in the string.

isprintable()

Return True if the string is printable, False otherwise.

A string is printable if all of its characters are considered printable in repr() or if it is empty.

isspace()

Return True if the string is a whitespace string, False otherwise.

A string is whitespace if all characters in the string are whitespace and there is at least one character in the string.

istitle()

Return True if the string is a title-cased string, False otherwise.

In a title-cased string, upper- and title-case characters may only follow uncased characters and lowercase characters only cased ones.

isupper()

Return True if the string is an uppercase string, False otherwise.

A string is uppercase if all cased characters in the string are uppercase and there is at least one cased character in the string.

join(iterable, /)

Concatenate any number of strings.

The string whose method is called is inserted in between each given string. The result is returned as a new string.

Example: ‘.’.join([‘ab’, ‘pq’, ‘rs’]) -> ‘ab.pq.rs’

ljust(width, fillchar=' ', /)

Return a left-justified string of length width.

Padding is done using the specified fill character (default is a space).

lower()

Return a copy of the string converted to lowercase.

lstrip(chars=None, /)

Return a copy of the string with leading whitespace removed.

If chars is given and not None, remove characters in chars instead.

static maketrans()

Return a translation table usable for str.translate().

If there is only one argument, it must be a dictionary mapping Unicode ordinals (integers) or characters to Unicode ordinals, strings or None. Character keys will be then converted to ordinals. If there are two arguments, they must be strings of equal length, and in the resulting dictionary, each character in x will be mapped to the character at the same position in y. If there is a third argument, it must be a string, whose characters will be mapped to None in the result.
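
Example: str.maketrans(‘ab’, ‘xy’) -> {97: 120, 98: 121}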

partition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing the original string and two empty strings.
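
Example: ‘key=value’.partition(‘=’) -> (‘key’, ‘=’, ‘value’)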

removeprefix(prefix, /)

Return a str with the given prefix string removed if present.

If the string starts with the prefix string, return string[len(prefix):]. Otherwise, return a copy of the original string.
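
Example: ‘TestHook’.removeprefix(‘Test’) -> ‘Hook’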

removesuffix(suffix, /)

Return a str with the given suffix string removed if present.

If the string ends with the suffix string and that suffix is not empty, return string[:-len(suffix)]. Otherwise, return a copy of the original string.
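
Example: ‘MiscTests’.removesuffix(‘Tests’) -> ‘Misc’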

replace(old, new, count=-1, /)

Return a copy with all occurrences of substring old replaced by new.

count

Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences.

If the optional argument count is given, only the first count occurrences are replaced.
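
Example: ‘one one one’.replace(‘one’, ‘two’, 2) -> ‘two two one’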

rfind(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Return -1 on failure.

rindex(sub[, start[, end]]) int

Return the highest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation.

Raises ValueError when the substring is not found.

rjust(width, fillchar=' ', /)

Return a right-justified string of length width.

Padding is done using the specified fill character (default is a space).

rpartition(sep, /)

Partition the string into three parts using the given separator.

This will search for the separator in the string, starting at the end. If the separator is found, returns a 3-tuple containing the part before the separator, the separator itself, and the part after it.

If the separator is not found, returns a 3-tuple containing two empty strings and the original string.

rsplit(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Splitting starts at the end of the string and works to the front.
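
Example: ‘a,b,c’.rsplit(‘,’, 1) -> [‘a,b’, ‘c’]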

rstrip(chars=None, /)

Return a copy of the string with trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

split(sep=None, maxsplit=-1)

Return a list of the substrings in the string, using sep as the separator string.

sep

The separator used to split the string.

When set to None (the default value), will split on any whitespace character (including \n \r \t \f and spaces) and will discard empty strings from the result.

maxsplit

Maximum number of splits (starting from the left). -1 (the default value) means no limit.

Note, str.split() is mainly useful for data that has been intentionally delimited. With natural text that includes punctuation, consider using the regular expression module.
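
Example: ‘a,b,c’.split(‘,’, 1) -> [‘a’, ‘b,c’]; ‘ a  b ’.split() -> [‘a’, ‘b’]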

splitlines(keepends=False)

Return a list of the lines in the string, breaking at line boundaries.

Line breaks are not included in the resulting list unless keepends is given and true.
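
Example: ‘one\ntwo\r\nthree’.splitlines() -> [‘one’, ‘two’, ‘three’]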

startswith(prefix[, start[, end]]) bool

Return True if S starts with the specified prefix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.

strip(chars=None, /)

Return a copy of the string with leading and trailing whitespace removed.

If chars is given and not None, remove characters in chars instead.

swapcase()

Convert uppercase characters to lowercase and lowercase characters to uppercase.

title()

Return a version of the string where each word is titlecased.

More specifically, words start with uppercased characters and all remaining cased characters have lower case.

translate(table, /)

Replace each character in the string using the given translation table.

table

Translation table, which must be a mapping of Unicode ordinals to Unicode ordinals, strings, or None.

The table must implement lookup/indexing via __getitem__, for instance a dictionary or list. If this operation raises LookupError, the character is left untouched. Characters mapped to None are deleted.

upper()

Return a copy of the string converted to uppercase.

zfill(width, /)

Pad a numeric string with zeros on the left, to fill a field of the given width.

The string is never truncated.

pydantic model llama_index.schema.RelatedNodeInfo

Show JSON schema
{
   "title": "RelatedNodeInfo",
   "description": "Base component object to caputure class names.",
   "type": "object",
   "properties": {
      "node_id": {
         "title": "Node Id",
         "type": "string"
      },
      "node_type": {
         "$ref": "#/definitions/ObjectType"
      },
      "metadata": {
         "title": "Metadata",
         "type": "object"
      },
      "hash": {
         "title": "Hash",
         "type": "string"
      }
   },
   "required": [
      "node_id"
   ],
   "definitions": {
      "ObjectType": {
         "title": "ObjectType",
         "description": "An enumeration.",
         "enum": [
            "1",
            "2",
            "3",
            "4"
         ],
         "type": "string"
      }
   }
}

Fields
field hash: Optional[str] = None
field metadata: Dict[str, Any] [Optional]
field node_id: str [Required]
field node_type: Optional[ObjectType] = None
classmethod class_name() str

Get class name.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) Model

Creates a new model, setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) Model

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters
  • include – fields to include in new model

  • exclude – fields to exclude from the new model; this takes precedence over include

  • update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data

  • deep – set to True to make a deep copy of the model

Returns

new model instance

dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) DictStrAny

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

classmethod from_dict(data: Dict[str, Any], **kwargs: Any) Self
classmethod from_json(data_str: str, **kwargs: Any) Self
classmethod from_orm(obj: Any) Model
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) unicode

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod parse_obj(obj: Any) Model
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) Model
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') DictStrAny
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) unicode
to_dict(**kwargs: Any) Dict[str, Any]
to_json(**kwargs: Any) str
classmethod update_forward_refs(**localns: Any) None

Try to update ForwardRefs on fields based on this Model, globalns and localns.

classmethod validate(value: Any) Model
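
A minimal construction sketch; per the schema above, only node_id is required. The ObjectType member name is an assumption for illustration:

from llama_index.schema import ObjectType, RelatedNodeInfo

info = RelatedNodeInfo(
    node_id="abc123",
    node_type=ObjectType.TEXT,       # member name assumed; schema values are "1"-"4"
    metadata={"source": "doc.pdf"},  # defaults to an empty dict when omitted
)
info.to_json()  # serialize via the inherited BaseComponent helpers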