# Raphtory Documentation
# Generated for LLM consumption
# Source: https://docs.pometry.com
# Generated: 2026-02-05T23:22:56.802Z

---

# About Raphtory

Raphtory is an open-source temporal graph analytics engine built in Rust with Python bindings. It treats time as a first-class citizen - every edge carries a timestamp, enabling queries like "what did this network look like 6 months ago?" or "trace money flow respecting chronological order."

Key capabilities:

- Temporal graphs with full history on every node and edge
- High-performance ingestion (3M+ edges/second)
- Python-first API with pandas integration
- Temporal algorithms (PageRank, community detection, reachability, motifs)
- GraphQL server for serving graph intelligence via HTTP
- Persistent storage for datasets larger than RAM

---

============================================================
# Section: Getting Started
============================================================

---

## Getting Started > Cli

# Command line interface

The Raphtory CLI tool is included in the Python package and allows you to interact directly with the Raphtory server. This is useful for experimentation and scripting.

## Server

The server subcommand starts the GraphQL server with the specified configuration.

```sh
raphtory server --port 1736
```

| Command | Parameter(s) | Description |
|-----------------------------|---------------------------|------------------------------------------------------------------------------|
| -h, --help | | Show the help message and exit |
| --work-dir | WORK_DIR | Working directory |
| --cache-capacity | CACHE_CAPACITY | Cache capacity |
| --cache-tti-seconds | CACHE_TTI_SECONDS | Cache time-to-idle in seconds |
| --log-level | LOG_LEVEL | Log level |
| --tracing | | Enable tracing |
| --tracing-level | TRACING_LEVEL | Set the tracing level. Available options are: COMPLETE, ESSENTIAL, MINIMAL |
| --otlp-agent-host | OTLP_AGENT_HOST | OTLP agent host |
| --otlp-agent-port | OTLP_AGENT_PORT | OTLP agent port |
| --otlp-tracing-service-name | OTLP_TRACING_SERVICE_NAME | OTLP tracing service name |
| --auth-public-key | AUTH_PUBLIC_KEY | Public key for auth |
| --auth-enabled-for-reads | | Enable auth for reads |
| --config-path | CONFIG_PATH | Optional config path |
| --create-index | | Enable index creation |
| --port | PORT | Port for Raphtory to run on, defaults to 1736 |
| --timeout | TIMEOUT | Timeout for starting the server in milliseconds. Defaults to 180000ms |

### Tracing

Tracing provides the following levels of verbosity:

- COMPLETE - Provides full traces for each query.
- ESSENTIAL - Tracks the following key functions: addEdge, addEdges, deleteEdge, graph, updateGraph, addNode, node, nodes, edge, edges.
- MINIMAL - Provides summary execution times.

For more details on tracing configuration and other server options, see [Advanced Server Settings](/docs/graphql/advanced-settings).

## Helper functions

### Schema

The schema subcommand prints the current GraphQL schema.

```sh
raphtory schema
```

### Version

Prints the installed version of Raphtory.

```sh
raphtory --version
```

---

## Getting Started > Installation

# Installation

Raphtory is a library for Python and Rust. Installation is as simple as invoking the package manager of the corresponding programming language.

```bash
pip install raphtory
```

```shell
cargo add raphtory

# Or Cargo.toml
[dependencies]
raphtory = { version = "x" }
```

## Importing

To use the library, import it into your project:

```python
import raphtory as rp
```

```rust
use raphtory::prelude::*;
```

## Docker image

Both the Python and Rust packages are available as official Docker images from the [Pometry Docker Hub](https://hub.docker.com/r/pometry/raphtory) page. To download these using the docker CLI run:

```bash
docker pull pometry/raphtory:latest-python
```

```shell
docker pull pometry/raphtory
```

Running either container will start a Raphtory server by default; if this is all you need, the Rust image is sufficient. However, the Python image contains the Raphtory Python package and all the required dependencies. You should use this image if you want to develop using the Python APIs in a containerised environment.

You can run a Raphtory container with the following Docker command:

```docker
docker run --rm -p 1736:1736 -v "$(pwd):/home/raphtory_server" pometry/raphtory:latest-python
```

For more information about running and configuring containers see the [Docker documentation](https://docs.docker.com/).

---

## Getting Started > Quickstart

# Get started

Raphtory is a temporal graph database and analytics tool that you can use to investigate social graphs, detect fraud in financial transactions, power graphRAG AI systems, and much more. Our powerful visual interface allows analysts to explore data and trace the evolution of relationships across time. Data scientists can use APIs to create repeatable analytics pipelines using our built-in filters and metrics or add their own custom algorithms.

Raphtory is written in Rust for speed and safety. However, you can interact with your graphs using:

- [Python](/docs/reference/api/python) - Our Python APIs are the primary way to create workflows and are described in detail in this documentation.
- [GraphQL](/docs/graphql) - Start a GraphQL server that you can interact with programmatically or using the playground integrated in the Raphtory UI.
- [Rust](https://docs.rs/raphtory/latest/raphtory/) - Interact directly with the Rust library to add new algorithms or build into fully featured products. To cite Raphtory in your work refer to our paper [Raphtory: The temporal graph engine for Rust and Python](https://joss.theoj.org/papers/10.21105/joss.05940#). ## Ingest a simple dataset You can build graphs directly in Raphtory or import data from standard formats/tools (CSV, Parquet, Pandas, Duckdb, etc). In the following example we use the OBS baboon interaction dataset from [SocioPatterns](http://www.sociopatterns.org/datasets/baboons-interactions/) which is provided in a tab separated text file. We have added an additional column `weight` to the dataset as a numerical representation of the positive and negative interactions between baboons. ### The Data Model Raphtory can load data directly from CSV files. Create a new graph `g` and use the [`.load_edges()`](/docs/reference/api/python/raphtory/Graph#load_edges) method to load the edges, mapping columns to source, destination, time, properties, metadata and layers. By default, Raphtory will infer types from strings in CSV files. However, you can explicitly set the schema for columns using the `schema` parameter with [`PropType`](/docs/reference/api/python/raphtory/PropType) values. When loading from typed sources like Parquet, Pandas DataFrames, or DuckDB, their schema will be used automatically unless specified otherwise. You can print the graph object to verify it was loaded correctly. Note that the `earliest_time` and `latest_time` are given in Raphtory's [`EventTime`](/docs/reference/api/python/raphtory/EventTime) format. For more details, see [Creating a graph](/docs/ingestion). ## Query your data Once you have created a graph you can start to analyse it and isolate interesting features. ### Nodes and edges You can access individual nodes and edges directly using the [`.node()`](/docs/reference/api/python/raphtory/Graph#node) and [`.edge()`](/docs/reference/api/python/raphtory/Graph#edge) methods: For more details, see [Querying your graph](/docs/querying). ### Algorithms Raphtory includes many built-in graph algorithms. For example, you can use PageRank to find important nodes: For more algorithms, see [Running algorithms](/docs/algorithms). Once you have identified some interesting features, you can perform more detailed analysis by filtering your results or examining them across a [window of history](/docs/views/temporal-windows). ## Start the UI server To start the Raphtory UI you need to: 1. Create a [GraphServer](/docs/reference/api/python/graphql/GraphServer) and client. Every `GraphServer` needs a working directory, you can name this anything. 2. Start the server and get a [RaphtoryClient](/docs/reference/api/python/graphql/RaphtoryClient). 3. Send the relevant graphs to this client (in this case you only have one graph available). This will start the UI locally on the default port `1736`. You can also start a standalone server using the Raphtory CLI tool or Docker image. ### Querying via GraphQL You can query the graph programmatically using GraphQL. The GraphQL API mirrors the Python and Rust APIs, so the same methods and properties are available across all three interfaces. ### Using the UI When you navigate to the server URL, you should see the **Search** page by default: You can use the **Query Builder** to select the graph you created and identify which baboons attacked each other in the last month. 
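Putting the steps on this page together, here is a minimal end-to-end sketch: ingest the interaction file, query a node and an edge, run PageRank, and start the server. The file path, column names, and the exact `load_edges()` and `GraphServer` method names are assumptions based on the descriptions above, so check them against the API reference for your version.

```python
from raphtory import Graph, algorithms as algo
from raphtory import graphql

# Build a graph from the (preprocessed) baboon interaction file.
# Column names here are assumptions about the prepared dataset.
g = Graph()
g.load_edges(
    data="baboons.csv",
    src="Actor",
    dst="Recipient",
    time="DateTime",
    properties=["weight"],
    layer_col="Behavior",
)
print(g)  # node/edge counts plus earliest and latest times

# Inspect a single node and a single edge
felipe = g.node("FELIPE")
print(felipe.name, felipe.degree())
print(g.edge("FELIPE", "MAKO"))

# Run a built-in algorithm, for example PageRank
ranks = algo.pagerank(g)
print(ranks.top_k(5))

# Start the GraphQL/UI server and send the graph to it
# (method names assumed from the steps above; see the GraphServer reference)
server = graphql.GraphServer("graphs").start()   # "graphs" is the working directory
client = server.get_client()
client.send_graph(path="baboons", graph=g)
# The UI should now be available on the default port 1736
```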
For more information see the full [User Interface overview](/docs/visualisation) or the [GraphQL API reference](/docs/graphql).

============================================================
# Section: Core Concepts
============================================================

---

## Core Concepts > Index

# Key concepts

To use Raphtory effectively it is useful to understand the specific ways that the tool represents graph objects and the tools you have to manipulate them.

## Graphs

In the most general sense, a graph is a set of entities (nodes or vertices) where you can define relationships (links or edges) between pairs of entities. The relationship between a pair of entities may be directed or undirected, and the overall graph will have different properties as a result. In Raphtory we adopt the naming convention of nodes and edges. All Raphtory graphs are directed and edges have a source and destination, although loops are allowed so the source and destination node may be the same.

Additionally, Raphtory graphs are designed to capture temporal relationships. You can view temporal data as either a stream of discrete events or a persistent state over a duration of time, and Raphtory allows you to adopt either perspective. To represent your data as a stream you can use a [Graph](/docs/reference/api/python/raphtory/Graph) object and to adopt the extended events representation you can use a [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) object. Raphtory allows you to easily switch between these representations by calling [`.event_graph()`](/docs/reference/api/python/raphtory/Graph#event_graph) or [`.persistent_graph()`](/docs/reference/api/python/raphtory/Graph#persistent_graph). For a deeper explanation of the differences, see [Time semantics](/docs/persistent-graph).

### The Graph object

When beginning a new project you will [create a new `Graph` object](/docs/ingestion). You can add updates directly using methods like [`.add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`.add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge), or load data in bulk from sources like CSV, Parquet, Pandas DataFrames, or DuckDB.

Updates in Raphtory are temporal – each update carries a timestamp. You can add:

- **Node/Edge properties** – values that change over time (e.g., a node's score at different points)
- **Node/Edge metadata** – static values that don't vary with time (e.g., a node's name)
- **Graph properties/metadata** – global values attached to the graph itself

## Nodes

The [Node](/docs/reference/api/python/raphtory/Node) object in Raphtory typically represents some entity in your data. You create nodes using the [`.add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) function on your [Graph](/docs/reference/api/python/raphtory/Graph) object; subsequent updates can be made using a [MutableNode](/docs/reference/api/python/raphtory/MutableNode) that you get by calling `my_graph.node(node_id)`. Additionally, calling [`.add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) repeatedly will not overwrite the original object; instead, additional properties and events are added to the existing object.

Queries are performed using the [Node](/docs/reference/api/python/raphtory/Node) object or an appropriate view. To make queries more convenient Raphtory provides the [Nodes](/docs/reference/api/python/raphtory/Nodes) iterable that allows you to make queries over all the nodes in the current view.
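A brief sketch of that workflow (the timestamps and property key are made up):

```python
from raphtory import Graph

g = Graph()

# Create a node, then update it later; repeated calls append to its history
g.add_node(1, "FELIPE")
g.add_node(5, "FELIPE", properties={"score": 0.7})

# Query an individual node
felipe = g.node("FELIPE")
print(felipe.name, felipe.earliest_time, felipe.latest_time)

# Query across all nodes in the current view via the Nodes iterable
print(g.nodes.name.collect())   # executed in Rust, faster than a Python loop
print(g.nodes.degree())         # a tabular, NodeState-style result
```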
Typically, queries on an individual [Node](/docs/reference/api/python/raphtory/Node) will return a result directly, while queries over the [Nodes](/docs/reference/api/python/raphtory/Nodes) iterable will often return a view or [NodeState](/docs/reference/api/python/node_state). [NodeState](/docs/reference/api/python/node_state) objects are tabular representations of a collection across all nodes in the current view. You can easily transform a [NodeState](/docs/reference/api/python/node_state) into a dataframe or other tabular format to integrate into your existing workflow.

The [Nodes](/docs/reference/api/python/raphtory/Nodes) and [Edges](/docs/reference/api/python/raphtory/Edges) iterables have a [`.collect()`](/docs/reference/api/python/raphtory/Nodes#collect) function that returns a list of all your objects. This operates on the underlying Rust library and is much faster than creating the list manually in Python.

## Edges

An [Edge](/docs/reference/api/python/raphtory/Edge) object represents relationships between nodes. However, in Raphtory there is only ever one edge between any pair of nodes. This unified edge object combines interactions across all times and all relationships. Some graph tools create multiple edges for each relationship or time-dependent interaction between nodes; you can replicate this view in Raphtory by calling [`.explode()`](/docs/reference/api/python/raphtory/Edge#explode) on the unified edge object. To represent multiple relationships you can use the properties and metadata of an edge or create edges on specific layers which are aggregated into the unified edge object.

Similarly to nodes, you must create edges using [`.add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) and can make changes using a [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) object or by calling [`.add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) repeatedly. There is also an [Edges](/docs/reference/api/python/raphtory/Edges) iterable that allows you to make queries over all the edges in the current view.

An [Edge](/docs/reference/api/python/raphtory/Edge) object contains the combined information for that edge across all points in time. Often it is more useful to look at the changes across time. To do this Raphtory provides the option to [`.explode()`](/docs/reference/api/python/raphtory/Edge#explode) an edge, which returns an edge object for each update within the original edge.

## Layers

To further separate different types of relationships between the same types of entities, you can create separate layers in the same graph. The nodes of a graph exist across all layers but edges can be assigned a specific layer. In the [introductory example](/docs/querying) we use edges on separate layers to distinguish between different behaviours of a troop of baboons.

Queries will show edges across all layers by default and edges that exist on multiple layers are combined into a single edge with the combined properties and history. You can apply an appropriate [layer view](/docs/views/layers) to work with a specific layer or layers.

## Properties and metadata

Graphs, nodes, and edges can all have properties and metadata. In Raphtory, properties are metrics that can vary in time and each update carries a timestamp, while metadata are metrics that are constant across time (you can edit metadata but they have no history). This system is highly flexible and is the main way that you add complex data to the graph structure.
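A small sketch of that distinction, using the update functions covered later in [Direct Updates](/docs/ingestion/direct-updates) (the property and metadata keys are illustrative):

```python
from raphtory import Graph

g = Graph()

# Temporal properties: each value is recorded against a timestamp
g.add_node(1, "FELIPE", properties={"score": 0.2})
g.add_node(2, "FELIPE", properties={"score": 0.9})

# Metadata: constant values with no history
g.node("FELIPE").add_metadata({"species": "Guinea baboon"})

node = g.node("FELIPE")
print(node.properties.get("score"))                    # latest value (0.9)
print(node.properties.temporal.get("score").values())  # full timestamped series
print(node.metadata.get("species"))
```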
When making a query, entities return [Properties](/docs/reference/api/python/raphtory/Properties) or [Metadata](/docs/reference/api/python/raphtory/Metadata) objects that contain all the data within the current view. You can use these objects to fetch and manipulate any individual entries of interest. ## Views and filters Views are objects in Raphtory that provide a read-only subset of your data, this means you can have many views without expensive data duplication. A [GraphView](/docs/reference/api/python/raphtory/GraphView) provides a view of the graph reduced to a specific subset by a filter or time function. Similarly, many other functions provide filtered views of edges or nodes. Views can be an instance of a [Node](/docs/reference/api/python/raphtory/Node) or [Nodes](/docs/reference/api/python/raphtory/Nodes), [Edge](/docs/reference/api/python/raphtory/Edge) or [Edges](/docs/reference/api/python/raphtory/Edges), [GraphView](/docs/reference/api/python/raphtory/GraphView). For example, calling `edge_foo.layer("named_layer")` returns an [Edge](/docs/reference/api/python/raphtory/Edge) object that acts as a view of the edge restricted to the `named_layer`. ## History Updates to entities in Raphtory are recorded in a history of events which can be accessed through a dedicated [History](/docs/reference/api/python/raphtory/History) object. Individual events are represented by an [EventTime](/docs/reference/api/python/raphtory/EventTime) which combines a primary timestamp and a secondary index used to ensure fixed ordering when events occur at the same time. You can transform an [EventTime](/docs/reference/api/python/raphtory/EventTime) into a UTC datetime representation or Unix timestamp in milliseconds as needed. This will always be based on the primary timestamp, the secondary is only used for ordering and will be automatically generated by Raphtory unless specified explicitly by the user. This makes it easy to integrate Raphtory events into your existing tooling. When looking at an overall history the [History](/docs/reference/api/python/raphtory/History) object provides a lot of flexibility including iterables and views to work with datetimes and Unix times directly and methods to combine and compare histories. This allows you to analyse both individual entities and groups, for example, by combining the histories of nodes that share a common community. ============================================================ # Section: Ingestion ============================================================ --- ## Ingestion > Index # Data Ingestion Raphtory provides multiple ways to build your temporal graph from existing data. You can work with any columnar data source that implements the Arrow C Stream interface, make direct updates via the API, or import from other graphs. ## Creating a graph To get started, create a [`Graph`](/docs/reference/api/python/raphtory/Graph) object. Printing this graph will show it as empty with no nodes, edges, or update times: Raphtory supports different time semantics for how updates are recorded. By default, updates are treated as **events** (point-in-time occurrences). For state-based updates that persist until changed, see [Time Semantics](/docs/persistent-graph). 
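For instance, a minimal sketch of creating and printing an empty graph:

```python
from raphtory import Graph

g = Graph()   # event semantics by default; see Time Semantics for PersistentGraph
print(g)      # reports zero nodes and edges and no update times yet
```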
## Adding data Once you have a graph, you can add data using several approaches: | Approach | Description | |----------|-------------| | [Direct Updates](/docs/ingestion/direct-updates) | Add nodes and edges programmatically via [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) | | [Columnar Data](/docs/ingestion/dataframes) | Load from CSV, Parquet, Pandas, Polars, DuckDB, or any Arrow-compatible source | | [Importing](/docs/ingestion/importing) | Import nodes and edges from other graphs, including from views | ## Saving and loading Once you've built a graph, you can save it to disk to avoid re-parsing the original data: | Format | Description | |--------|-------------| | [`save_to_file()`](/docs/reference/api/python/raphtory/Graph#save_to_file) | Save to a directory (fastest for large graphs) | | [`save_to_zip()`](/docs/reference/api/python/raphtory/Graph#save_to_zip) | Save to a single zip file (portable) | Reload with [`Graph.load_from_file()`](/docs/reference/api/python/raphtory/Graph#load_from_file). See [Saving & Loading](/docs/ingestion/saving) for details. ## Remote graphs via server If you're running a [`RaphtoryServer`](/docs/reference/api/python/raphtory/RaphtoryServer), you can create, update, send, and download graphs remotely via the GraphQL API. This enables collaborative workflows where multiple clients can query and modify the same graph. See the [GraphQL section](/docs/graphql) for more details. --- ## Ingestion > Dataframes # Loading from columnar data If you prefer to initially manipulate your data before converting into a graph, or want to load from files directly, Raphtory can ingest columnar data and convert it into node and edge updates. Raphtory's [`load_edges()`](/docs/reference/api/python/raphtory/Graph#load_edges), [`load_nodes()`](/docs/reference/api/python/raphtory/Graph#load_nodes), [`load_edge_metadata()`](/docs/reference/api/python/raphtory/Graph#load_edge_metadata), and [`load_node_metadata()`](/docs/reference/api/python/raphtory/Graph#load_node_metadata) functions can ingest data from any columnar source that implements the [Arrow C Stream interface](https://arrow.apache.org/docs/format/CDataInterface.html). This includes: - **CSV files** (direct path) - **Parquet files** (direct path) - **Folders** containing mixed CSV and Parquet files - **Pandas DataFrames** - **Polars DataFrames** - **DuckDB query results** - Any other Arrow-compatible data source **Schema handling**: Raphtory can automatically infer types from typed sources (Parquet, Pandas, DuckDB). For untyped sources like CSV, you can either let Raphtory interpret the data or specify an explicit schema using the `schema` parameter with [`PropType`](/docs/reference/api/python/raphtory/PropType) values. This is also useful when adding data to a graph that already has properties with established types – the schema ensures new data is cast to match. ## Loading from CSV files The simplest approach is to pass a CSV file path directly to `load_edges()` or `load_nodes()`. No external libraries needed. You can also pass a **folder path** containing multiple CSV files (or a mix of CSV and Parquet files) – Raphtory will load all files in the folder. In this example we're ingesting network traffic data which includes different types of interactions between servers. For CSV files, we can specify an explicit schema to ensure numeric columns like `data_size_MB` are parsed as floats rather than strings. 
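A hedged sketch of that call; the file name and column names (`source`, `destination`, `timestamp`, `data_size_MB`, `transaction_type`) are placeholders for the network-traffic example, and the exact `PropType` spelling for the optional `schema` argument should be taken from its reference page.

```python
from raphtory import Graph

g = Graph()

# Load edges straight from a CSV file (or a folder of CSV/Parquet files)
g.load_edges(
    data="network_traffic_edges.csv",
    src="source",
    dst="destination",
    time="timestamp",
    properties=["data_size_MB"],
    layer_col="transaction_type",
    # For CSV sources an explicit schema of PropType values can also be passed
    # via the `schema` parameter so data_size_MB is parsed as a float.
)
print(g)
```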
You can also use `csv_options` to customize parsing (e.g., delimiter, quote character, escape character): ## Loading from Pandas DataFrames If you prefer to manipulate your data in Pandas first – for example to transform timestamps or filter rows – you can pass the DataFrame directly. Types are inferred from pandas dtypes: ## Loading from DuckDB For larger datasets or SQL-based transformations, DuckDB integrates seamlessly. DuckDB query results can be passed directly to Raphtory: ## Loading from Parquet files [Apache Parquet](https://parquet.apache.org/) files can be loaded directly by path. Parquet files include embedded type information, so Raphtory automatically uses the correct types: ## Adding metadata separately In some instances you may want to break the ingestion into multiple stages, adding metadata to existing nodes/edges in a separate step. This is common when you have metadata in a different data source than your main graph data. Use [`load_edge_metadata()`](/docs/reference/api/python/raphtory/Graph#load_edge_metadata) and [`load_node_metadata()`](/docs/reference/api/python/raphtory/Graph#load_node_metadata) for this: Metadata can only be added to nodes and edges which already exist in the graph. If you attempt to add metadata to non-existent entities, Raphtory will throw an error. ## Function parameters These functions have optional arguments to cover everything we have seen in the prior [direct updates example](direct-updates). Use `layer` or `node_type` when all rows in your data share the same value. Use `layer_col` or `node_type_col` when the values vary per row in your data. ### load_edges | Parameter | Description | |-----------|-------------| | `data` | File path, DataFrame, or Arrow-compatible source | | `src` | Source node column name | | `dst` | Destination node column name | | `time` | Timestamp column name | | `properties` | List of temporal property column names (values that change over time) | | `metadata` | List of constant property column names (values that don't change) | | `shared_metadata` | Dictionary of metadata applied to all edges | | `layer` | Explicit layer name for all edges | | `layer_col` | Column name to read layer from (cannot be used with `layer`) | | `schema` | Type mappings using [`PropType`](/docs/reference/api/python/raphtory/PropType) | | `csv_options` | CSV parsing options (delimiter, quote, escape, etc.) 
| ### load_nodes | Parameter | Description | |-----------|-------------| | `data` | File path, DataFrame, or Arrow-compatible source | | `id` | Node ID column name | | `time` | Timestamp column name | | `properties` | List of temporal property column names | | `metadata` | List of constant property column names | | `shared_metadata` | Dictionary of metadata applied to all nodes | | `node_type` | Explicit node type for all nodes | | `node_type_col` | Column name to read node type from (cannot be used with `node_type`) | | `schema` | Type mappings using [`PropType`](/docs/reference/api/python/raphtory/PropType) | | `csv_options` | CSV parsing options | ### load_edge_metadata | Parameter | Description | |-----------|-------------| | `data` | File path, DataFrame, or Arrow-compatible source | | `src` | Source node column name | | `dst` | Destination node column name | | `metadata` | List of metadata column names | | `shared_metadata` | Dictionary of metadata applied to all edges | | `layer` | Explicit layer name for all edges | | `layer_col` | Column name to read layer from (cannot be used with `layer`) | | `schema` | Type mappings using [`PropType`](/docs/reference/api/python/raphtory/PropType) | | `csv_options` | CSV parsing options | ### load_node_metadata | Parameter | Description | |-----------|-------------| | `data` | File path, DataFrame, or Arrow-compatible source | | `id` | Node ID column name | | `metadata` | List of metadata column names | | `shared_metadata` | Dictionary of metadata applied to all nodes | | `node_type` | Explicit node type for all nodes | | `node_type_col` | Column name to read node type from (cannot be used with `node_type`) | | `schema` | Type mappings using [`PropType`](/docs/reference/api/python/raphtory/PropType) | | `csv_options` | CSV parsing options | --- ## Ingestion > Direct Updates # Direct Updates Once you have a [`Graph`](/docs/reference/api/python/raphtory/Graph) you can directly update it with the [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) functions. ## Adding nodes To add a node we need a unique `id` to represent it and an update `timestamp` to specify when it was added to the graph. In the below example we are going to add node `10` at timestamp `1`. If your data doesn't have any timestamps, you can just set a constant value, such as `1`, for all additions into the graph. Printing out the graph and the returned [`MutableNode`](/docs/reference/api/python/raphtory/MutableNode) you can see the update was successful and the earliest and latest times have been updated. The timestamp you specified is used as the primary index of the [`EventTime`](/docs/reference/api/python/raphtory/EventTime) object. ### Node types You can optionally assign a `node_type` to categorise nodes. This is useful for heterogeneous graphs where you have different kinds of entities. Use [`get_all_node_types()`](/docs/reference/api/python/raphtory/Graph#get_all_node_types) to retrieve all node types in the graph: If a node was created without a type (or as a placeholder from an edge), you can set its type later using [`set_node_type()`](/docs/reference/api/python/raphtory/MutableNode#set_node_type). Note that this can only be called once per node – the type becomes permanent after being set: ### Creating nodes (fail if exists) If you want to ensure a node doesn't already exist, use [`create_node()`](/docs/reference/api/python/raphtory/Graph#create_node) instead of `add_node()`. 
This will raise an error if the node already exists:

## Adding edges

All graphs in Raphtory are [directed](https://en.wikipedia.org/wiki/Directed_graph), so edge additions must specify the `src` node the edge starts from and the `dst` node the edge ends at, as well as a `timestamp` (the same as for `add_node()`). In the example below we add an edge to the graph from `15` to `16` at timestamp `1`.

### Placeholder nodes

You will notice in the output that the graph has two nodes as well as the edge. Raphtory automatically creates the source and destination nodes at the same time if they do not currently exist in the graph. These auto-created nodes are **placeholders** – they exist to keep the graph consistent and avoid `hanging edges`.

Placeholder nodes only have history from their edges. If you apply a view that excludes all their edges, the placeholder nodes will also disappear unless they have their own direct updates. We'll cover views in detail in the [Views & Windows](/docs/views) section:

## Accepted ID types

The [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) functions will also accept strings for their `id`, `src` & `dst` arguments. This is useful when your node IDs are not integers. For example, node IDs could be unique strings like a person's username or a blockchain wallet hash. In this example, we are adding two nodes to the graph, `User 1` and `User 2`, and an edge between them.

A graph can index nodes by either integers or strings, not both at the same time. This means, for example, you cannot have `User 1` (a string) and `200` (an integer) as IDs in the same graph.

## Time input

While integer-based timestamps can represent both [logical time](https://en.wikipedia.org/wiki/Logical_clock) and [epoch time](https://en.wikipedia.org/wiki/Unix_time), datasets often have their timestamps stored in human-readable formats or special datetime objects. As such, [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) accept [`TimeInput`](/docs/reference/api/python/typing) which can be integers, datetime strings, or datetime objects:

In the output we can see the [`History`](/docs/reference/api/python/raphtory/History) of node `10` contains the two times at which we have added it into the graph (maintained in ascending order), returned in both Unix epoch (integer) and datetime format.

Internally, the history is tracked using [`EventTime`](/docs/reference/api/python/raphtory/EventTime) objects which combine a timestamp with an optional event ID. See [History & EventTime](/docs/querying/history) for more on working with temporal data.

### Event ordering with event_id

When multiple updates occur at the same timestamp, you can use the `event_id` parameter to specify their order. This is important when the same node or edge receives multiple updates at the same time – specifying distinct event IDs ensures values aren't overwritten and affects how property values are aggregated and which value is considered "latest".

By default, Raphtory automatically assigns a monotonically increasing event ID to each update globally. You only need to specify `event_id` manually if this secondary ordering has meaning in your domain – for example, using the transaction index within a blockchain block. The [`EventTime`](/docs/reference/api/python/raphtory/EventTime) object stores both the timestamp and event_id, enabling sub-timestamp ordering of events.
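Putting the ID, time-input, and event-ordering rules above together, a hedged sketch (the datetime string format and the `event_id` keyword follow the descriptions on this page and may differ slightly between versions):

```python
from datetime import datetime, timezone
from raphtory import Graph

g = Graph()

# Integer IDs with an integer timestamp
g.add_node(1, 10)
g.add_edge(1, 15, 16)

# String IDs also work, but a single graph cannot mix string and integer IDs
sg = Graph()
sg.add_node(1, "User 1")
sg.add_edge(2, "User 1", "User 2")

# TimeInput: integers, datetime strings, and datetime objects are all accepted
g.add_node("2021-02-03 14:01:00", 10)
g.add_node(datetime(2021, 2, 3, 14, 2, tzinfo=timezone.utc), 10)
print(g.node(10).history)

# Optional event_id to order updates that share the same timestamp
g.add_node(100, 10, properties={"status": "first"}, event_id=0)
g.add_node(100, 10, properties={"status": "second"}, event_id=1)
```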
## Properties Alongside the structural update history, Raphtory can maintain the changing value of properties associated with nodes and edges. Both the [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) functions have an optional parameter `properties` which takes a dictionary of key value pairs to be stored at the given timestamp. The graph itself may also have its own global properties added using the [`add_properties()`](/docs/reference/api/python/raphtory/Graph#add_properties) function which takes only a `timestamp` and a `properties` dictionary. Properties can consist of primitives (`Integer`, `Float`, `String`, `Boolean`, `Datetime`) and structures (`Dictionary`, `List`). This allows you to store both basic values as well as do complex hierarchical modelling depending on your use case. In the example below, we use all of these functions to add a mixture of properties to a node, an edge, and the graph. Please note that once a property key is associated with one of the above types for a given node/edge/graph, attempting to add a value of a different type under the same key will result in an error. For `Lists` the values must all be the same type and for `Dictionaries` the values for each key must always be the same type. When the output is printed only the latest property values are shown. The older values haven't been lost, in fact the history of all of these different property types can be queried, explored and aggregated, as you will see in [Property Queries](/docs/querying/properties). ### Explicit typing with Prop Raphtory is written in Rust and supports several numeric types that don't exist natively in Python (like 8-bit integers or 32-bit floats). The [`Prop`](/docs/reference/api/python/raphtory/Prop) class lets you explicitly specify these types for memory efficiency. When values are retrieved, they are automatically cast back to Python types: | Type | Description | Python Return Type | |------|-------------|-------------------| | `Prop.u8()`, `Prop.u16()`, `Prop.u32()`, `Prop.u64()` | Unsigned integers | `int` | | `Prop.i32()`, `Prop.i64()` | Signed integers | `int` | | `Prop.f32()`, `Prop.f64()` | Floating point | `float` | | `Prop.str()` | String | `str` | | `Prop.bool()` | Boolean | `bool` | ## Metadata Raphtory also provides metadata associated with nodes and edges which have immutable values. These are useful when you know a value won't change or is not associated with a specific time. You can use the [`add_metadata()`](/docs/reference/api/python/raphtory/Graph#add_metadata) function, which takes a single dictionary argument, to add metadata to a graph, node and edge as demonstrated below. ### Updating metadata The [`add_metadata()`](/docs/reference/api/python/raphtory/Graph#add_metadata) function will raise an error if you try to add a key that already exists. To update an existing metadata value, use [`update_metadata()`](/docs/reference/api/python/raphtory/Graph#update_metadata) instead: ## Edge layers If you have worked with other graph libraries you may be expecting two calls to [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) between the same nodes to generate two distinct edge objects. In Raphtory, these calls append the information together into the history of a single edge. 
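As a quick illustration of that behaviour, a minimal sketch: two `add_edge()` calls between the same pair of nodes produce a single edge whose history contains both updates.

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "Person 1", "Person 2", properties={"weight": 5})
g.add_edge(7, "Person 1", "Person 2", properties={"weight": 2})

e = g.edge("Person 1", "Person 2")
print(g.count_edges())         # 1 unique edge
print(e.history)               # both update times
print(len(list(e.explode())))  # 2 exploded edges, one per add_edge call
```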
Edges can be [exploded](/docs/querying/edge-metrics#exploded-edges) to interact with all updates independently and Raphtory also allows you to represent totally different relationships between the same nodes via edge layers. The [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) function takes a second optional parameter, `layer` that allows you to name the type of relationship being added. All calls to `add_edge` with the same `layer` value will be stored together allowing them to be accessed separately or merged with other layers as required. You can see this in the example below where we add five updates between `Person 1` and `Person 2` across the layers `Friends`, `Co Workers` and `Family`. When we query the history of the `weight` property on the edge we initially get all of the values back. However, by applying the [`layers()` GraphView](/docs/views/layers) we can return only updates from `Co Workers` and `Family`. ## Updating existing nodes and edges You can continue to call [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node) and [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge) on existing nodes and edges to insert additional updates – Raphtory will append the new data to their history. These functions return [`MutableNode`](/docs/reference/api/python/raphtory/MutableNode) and [`MutableEdge`](/docs/reference/api/python/raphtory/MutableEdge) objects, which also have an [`add_updates()`](/docs/reference/api/python/raphtory/MutableNode#add_updates) method that is synonymous: --- ## Ingestion > Importing # Importing nodes and edges Raphtory allows you to import [`Node`](/docs/reference/api/python/raphtory/Node) and [`Edge`](/docs/reference/api/python/raphtory/Edge) objects from one graph into another. The import functions support: - **Single or bulk imports** – import individual entities or entire collections - **Lists or iterables** – pass lists, [`Nodes`](/docs/reference/api/python/raphtory/Nodes), or [`Edges`](/docs/reference/api/python/raphtory/Edges) directly - **Views** – import from any [`GraphView`](/docs/reference/api/python/raphtory/GraphView) including temporal windows, layer filters, and subgraphs - **Renaming** – import with new IDs using the `_as` variants - **Merging** – combine histories when importing into existing entities The following sections provide worked examples of each capability. ## Setup First, let's create a source [`Graph`](/docs/reference/api/python/raphtory/Graph) to import from: ## Importing individual nodes and edges The simplest case is importing a single [`Node`](/docs/reference/api/python/raphtory/Node) or [`Edge`](/docs/reference/api/python/raphtory/Edge). Use [`import_node()`](/docs/reference/api/python/raphtory/Graph#import_node) and [`import_edge()`](/docs/reference/api/python/raphtory/Graph#import_edge) to copy entities with all their history and properties: ## Importing from lists or iterables For bulk imports, pass a list of [`Node`](/docs/reference/api/python/raphtory/Node)/[`Edge`](/docs/reference/api/python/raphtory/Edge) objects, or use [`Nodes`](/docs/reference/api/python/raphtory/Nodes) and [`Edges`](/docs/reference/api/python/raphtory/Edges) iterables directly. This is efficient for copying large portions of a graph: ## Importing from views Import works with any [`GraphView`](/docs/reference/api/python/raphtory/GraphView), letting you copy filtered subsets of your graph. This includes temporal windows, layer filters, subgraphs, and any combination of these. 
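A hedged sketch of the import workflow above (the source graph `src_g` and its contents are made up for illustration):

```python
from raphtory import Graph

# Source graph to import from
src_g = Graph()
src_g.add_edge(1, "A", "B", properties={"weight": 1})
src_g.add_edge(5, "A", "B", properties={"weight": 3})
src_g.add_node(2, "C")

# Import a single node and a single edge into a new graph
g1 = Graph()
g1.import_node(src_g.node("C"))
g1.import_edge(src_g.edge("A", "B"))

# Bulk import, here from a windowed view: only updates inside the window are copied
g2 = Graph()
g2.import_edges(src_g.window(0, 3).edges)
print(g2.edge("A", "B").history)   # only the update at time 1
```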
Importantly, when importing from a view, only the updates visible within that view are copied – not the full history of each node/edge. For more on creating and using views, see the [Views & Windows](/docs/views) section. ## Renaming on import Use the `_as` variants to import nodes and edges with new IDs. This works for both single entities and bulk imports: ## Copying within the same graph You can also use import functions on the same [`Graph`](/docs/reference/api/python/raphtory/Graph) to copy a node or edge with a new ID. This is useful for creating duplicates or versioned copies: ## Merging histories By default, importing a node or edge that already exists raises an error. Set `merge=True` to combine the histories instead, preserving all temporal data from both sources: ## Function reference ### Node functions | Function | Description | |----------|-------------| | [`import_node()`](/docs/reference/api/python/raphtory/Graph#import_node) | Import a single [`Node`](/docs/reference/api/python/raphtory/Node) | | [`import_nodes()`](/docs/reference/api/python/raphtory/Graph#import_nodes) | Import multiple nodes from a list or [`Nodes`](/docs/reference/api/python/raphtory/Nodes) iterable | | [`import_node_as()`](/docs/reference/api/python/raphtory/Graph#import_node_as) | Import a [`Node`](/docs/reference/api/python/raphtory/Node) with a new ID | | [`import_nodes_as()`](/docs/reference/api/python/raphtory/Graph#import_nodes_as) | Import multiple nodes with new IDs | ### Edge functions | Function | Description | |----------|-------------| | [`import_edge()`](/docs/reference/api/python/raphtory/Graph#import_edge) | Import a single [`Edge`](/docs/reference/api/python/raphtory/Edge) | | [`import_edges()`](/docs/reference/api/python/raphtory/Graph#import_edges) | Import multiple edges from a list or [`Edges`](/docs/reference/api/python/raphtory/Edges) iterable | | [`import_edge_as()`](/docs/reference/api/python/raphtory/Graph#import_edge_as) | Import an [`Edge`](/docs/reference/api/python/raphtory/Edge) with new source/destination IDs | | [`import_edges_as()`](/docs/reference/api/python/raphtory/Graph#import_edges_as) | Import multiple edges with new IDs | --- ## Ingestion > Saving # Saving and loading graphs The fastest way to ingest a graph is to load one from Raphtory's on-disk format using [`Graph.load_from_file()`](/docs/reference/api/python/raphtory/Graph#load_from_file). Once a graph has been created you can save it using: - [`save_to_file()`](/docs/reference/api/python/raphtory/Graph#save_to_file) – saves to a directory - [`save_to_zip()`](/docs/reference/api/python/raphtory/Graph#save_to_zip) – saves to a single zip file This means you don't need to parse the original data every time you run a script, which is especially useful for large datasets. You can also [pickle](https://docs.python.org/3/library/pickle.html) Raphtory graphs, which uses these functions under the hood. ============================================================ # Section: Querying ============================================================ --- ## Querying > Index # Introduction and dataset After reading data into Raphtory we can now make use of the graph representation to ask some interesting questions. This example will use a dataset from [SocioPatterns](http://www.sociopatterns.org/datasets/baboons-interactions/), comprising different behavioral interactions between a group of 22 baboons over a month. If you want to read more about the dataset, you can check it out in this paper: [V. Gelardi, J. Godard, D. 
Paleressompoulle, N. Claidière, A. Barrat, "Measuring social networks in primates: wearable sensors vs. direct observations", Proc. R. Soc. A 476:20190737 (2020)](https://royalsocietypublishing.org/doi/10.1098/rspa.2019.0737).

The code below loads this dataset into a dataframe and does a small amount of preprocessing to prepare it for loading into Raphtory. This includes dropping rows with blank fields and mapping the values of the `behavior category` into a `weight` which can be aggregated. The mapping consists of the following conversions:

- Affiliative (positive interaction) → `+1`
- Agonistic (negative interaction) → `-1`
- Other (neutral interaction) → `0`

Next we load this dataframe into Raphtory using the `load_edges()` function, modelling it as a weighted multi-layer graph, with a layer per unique `behavior`.

---

## Querying > Chaining

# Chaining Queries

The Raphtory iterables [`Nodes`](/docs/reference/api/python/raphtory/Nodes), [`Edges`](/docs/reference/api/python/raphtory/Edges), and [`Properties`](/docs/reference/api/python/raphtory/Properties) are [lazy](https://en.wikipedia.org/wiki/Lazy_evaluation) data structures which allow you to chain multiple functions together before a final execution.

For a node `v`, the chain `v.neighbours.neighbours` will return the two-hop neighbours. The first call of [`neighbours`](/docs/reference/api/python/raphtory/Node#neighbours) returns the immediate neighbours of `v`; the second applies the [`neighbours`](/docs/reference/api/python/raphtory/Node#neighbours) function to each of the nodes returned by the first call. You can continue this chain indefinitely with any functions in the [`Node`](/docs/reference/api/python/raphtory/Node), [`Edge`](/docs/reference/api/python/raphtory/Edge), or [`Properties`](/docs/reference/api/python/raphtory/Properties) API until either:

* Calling [`.collect()`](/docs/reference/api/python/raphtory/Nodes#collect), which will execute the chain and return the result.
* Executing the chain by handing it to a Python function such as `list()`, `set()`, `sum()`, etc.
* Iterating through the chain via a loop or list comprehension.

The following example gets the names of all the baboons and their two-hop neighbours:

## Views vs Filters in Chains

Raphtory provides two distinct approaches for restricting data in chains, each with different persistence semantics:

1. [Views](/docs/views) persist — Operations like [`window()`](/docs/reference/api/python/raphtory/GraphView#window) or [`layers()`](/docs/reference/api/python/raphtory/GraphView#layers) create a restricted view of the graph that applies to all subsequent operations in the chain. Once you create a view, it affects everything to the right of it.
2. Filters are one-off — Using `[]` with [filter expressions](/docs/views/filtering) applies a restriction to only that specific step in the chain.

The following example demonstrates three approaches using a time window:

- **View-based**: `g.node("FELIPE").window(start, end).neighbours` — Creates a view of Felipe restricted to the time window, and all returned neighbours are also restricted to that window.
- **Neighbour filter**: `g.node("FELIPE").neighbours[filter.Graph.window(start, end)]` — Returns neighbours that were active in the time window, but the returned nodes have their full history available (no view restriction).
- **Edge filter**: `g.node("FELIPE").edges[filter.Graph.window(start, end)].nbr` — Returns nodes that Felipe interacted with during the time window, but again with no view restriction on the returned nodes.

Notice in the output how the view-based approach restricts the neighbour's time range to within the window, while the filter-based approaches return nodes with their full history intact.

## Chains with properties

To demonstrate more complex queries using Raphtory, the following example combines property aggregation with chains. First we sum the `Weight` value of each of Felipe's out-neighbours to rank them by positive interactions he has initiated. Then we find the "most annoying" monkey by ranking who, on average, has had the most negative interactions initiated against them.

The chain `g.nodes.in_edges.properties.temporal.get("Weight").values().sum().mean()` demonstrates the power of lazy evaluation — we're aggregating across all nodes, their incoming edges, and the temporal property values, all without materializing intermediate results.

## Multi-layer Temporal Traversal

Filters can be combined using `&` to create powerful multi-step queries. In this example, we're investigating baboon social dynamics: who played with Felipe in the first week, and who did those playmates rest with in the following week?

By chaining `in_edges[week_1 & playing].nbr.out_edges[week_2 & resting]`, we traverse from Felipe through his play partners to their resting companions — each step filtered by both time window and interaction type, with property aggregation on the final edges.

---

## Querying > Edge Metrics

# Edge metrics and functions

Edges can be accessed by storing the object returned from a call to [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge), by directly asking for a specific edge via [`edge()`](/docs/reference/api/python/raphtory/Graph#edge), or by iterating over all edges with [`in_edges`](/docs/reference/api/python/raphtory/Node#in_edges), [`out_edges`](/docs/reference/api/python/raphtory/Node#out_edges), or [`edges`](/docs/reference/api/python/raphtory/Graph#edges).

## Edge structure and update history

By default an edge object in Raphtory will contain all updates over all layers between the given source and destination nodes. As an example, we can look at the two edges between `FELIPE` and `MAKO` (one for each direction).

In the code below we create the two edge objects by requesting them from the graph and then print out the layers each is involved in with [`layer_names`](/docs/reference/api/python/raphtory/Edge#layer_names). We can see that there are multiple behaviors in each direction represented within the edges. Following this we access the history to get the [`earliest_time`](/docs/reference/api/python/raphtory/Edge#earliest_time) and [`latest_time`](/docs/reference/api/python/raphtory/Edge#latest_time). This update [`history`](/docs/reference/api/python/raphtory/Edge#history) consists of all interactions across all layers. Note that we call `e.src.name` because [`src`](/docs/reference/api/python/raphtory/Edge#src) and [`dst`](/docs/reference/api/python/raphtory/Edge#dst) return a node object instead of just an ID or name.
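A sketch of those steps (assuming the baboon graph `g` from the [introduction](/docs/querying)):

```python
# Both directions between FELIPE and MAKO are separate edges
e_fm = g.edge("FELIPE", "MAKO")
e_mf = g.edge("MAKO", "FELIPE")

for e in (e_fm, e_mf):
    print(e.src.name, "->", e.dst.name)
    print("layers:", e.layer_names)
    print("first seen:", e.earliest_time, "last seen:", e.latest_time)

# Full update history of one direction, across all layers
print(e_fm.history)
```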
## Edge state functions

Edges have several functions for checking their state:

- [`is_active()`](/docs/reference/api/python/raphtory/Edge#is_active): whether the edge is currently active (has updates in the current view)
- [`is_valid()`](/docs/reference/api/python/raphtory/Edge#is_valid): whether the edge exists and is valid in the current view
- [`is_deleted()`](/docs/reference/api/python/raphtory/Edge#is_deleted): whether the edge has been deleted (at the time of the end of the view)
- [`is_self_loop()`](/docs/reference/api/python/raphtory/Edge#is_self_loop): whether the edge connects a node to itself

In an **Event Graph**, any edge that is present will be active, valid, and not deleted — unless you apply a view that removes all its history. These functions become more interesting with **Persistent Graphs** where edges can be explicitly deleted, which we discuss in the [Time Semantics](/docs/core-concepts/time-semantics) section.

## Exploded edges

Raphtory offers you three different ways to split an edge by layer, depending on your use case:

- [`layers()`](/docs/reference/api/python/raphtory/Edge#layers): takes a list of layer names and returns a new `Edge View` which contains updates for only the specified layers. This is discussed in more detail in the [Layer views](/docs/views/layers) chapter
- [`explode_layers()`](/docs/reference/api/python/raphtory/Edge#explode_layers): returns an iterable of `Edge Views`, each containing the updates for one layer
- [`explode()`](/docs/reference/api/python/raphtory/Edge#explode): returns an `Exploded Edge` containing only the information from one call to [`add_edge()`](/docs/reference/api/python/raphtory/Graph#add_edge), in this case an edge object for each update.

In the code below you can see an example of each of these functions. We first call [`explode_layers()`](/docs/reference/api/python/raphtory/Edge#explode_layers) to see which layer each edge object represents and output its update history. Next we fully [`explode()`](/docs/reference/api/python/raphtory/Edge#explode) the edge and see each update as an individual object. Thirdly we use the [`layers()`](/docs/reference/api/python/raphtory/Edge#layers) function to look at only the `Touching` and `Carrying` layers and chain this with a call to [`explode()`](/docs/reference/api/python/raphtory/Edge#explode) to see the separate updates.

Within the examples and in the API documentation you will see singular and plural versions of what appear to be the same function, for example [`layer_names`](/docs/reference/api/python/raphtory/Edge#layer_names) and [`layer_name`](/docs/reference/api/python/raphtory/Edge#layer_name). Singular functions such as [`.layer_name`](/docs/reference/api/python/raphtory/Edge#layer_name) or [`.time`](/docs/reference/api/python/raphtory/Edge#time) can be called on exploded edges, and plural functions such as [`.layer_names`](/docs/reference/api/python/raphtory/Edge#layer_names) and [`.history`](/docs/reference/api/python/raphtory/Edge#history) can be called on standard edges.

---

## Querying > Graph Metrics

# Graph metrics and functions

## Basic metrics

Using the baboons graph from the [Introduction](/docs/querying), we can probe it for some basic metrics such as how many nodes and edges it contains, its layers and node types, and the time range over which it exists. In the code below, [`count_edges()`](/docs/reference/api/python/raphtory/Graph#count_edges) and [`count_temporal_edges()`](/docs/reference/api/python/raphtory/Graph#count_temporal_edges) return different results.
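A hedged sketch of those calls (assuming the baboon graph `g`; the layer name passed to `has_layer()` is only illustrative):

```python
print("nodes:", g.count_nodes())
print("unique edges:", g.count_edges())
print("edge updates:", g.count_temporal_edges())

print("layers:", g.unique_layers)
print("node types:", g.get_all_node_types())
print("has 'Grooming' layer:", g.has_layer("Grooming"))

# earliest_time/latest_time return EventTime objects; .dt gives the datetime
print("starts:", g.earliest_time.dt, "ends:", g.latest_time.dt)
```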
This is because `count_edges()` returns the number of unique edges while `count_temporal_edges()` returns the total edge updates which have occurred. Using `count_temporal_edges()` is useful if you want to imagine each edge update as a separate connection between the two nodes. The edges can be accessed in this manner via [`edge.explode()`](/docs/reference/api/python/raphtory/Edge#explode), as is discussed in [edge metrics and functions](./edge-metrics). You can also inspect the graph's layers with [`unique_layers`](/docs/reference/api/python/raphtory/Graph#unique_layers) and its node types with [`get_all_node_types()`](/docs/reference/api/python/raphtory/Graph#get_all_node_types). To check if a specific layer exists, use [`has_layer()`](/docs/reference/api/python/raphtory/Graph#has_layer). Time functions like [`earliest_time`](/docs/reference/api/python/raphtory/Graph#earliest_time) and [`latest_time`](/docs/reference/api/python/raphtory/Graph#latest_time) return an `EventTime` object. You can access the datetime via `.dt` or the raw Unix epoch (milliseconds) via `.t`. These time bounds are essential for understanding when your data starts and ends, and for constructing [temporal views](/docs/views/temporal-windows). The property APIs are the same for the graph, nodes and edges, these are discussed together in [Property queries](./properties). ## Accessing nodes and edges Three types of functions are provided for accessing the nodes and edges within a graph: - **Existence check:** using [`has_node()`](/docs/reference/api/python/raphtory/Graph#has_node) and [`has_edge()`](/docs/reference/api/python/raphtory/Graph#has_edge) you can check if an entity is present within the graph. - **Direct access:** [`node()`](/docs/reference/api/python/raphtory/Graph#node) and [`edge()`](/docs/reference/api/python/raphtory/Graph#edge) will return a [`Node`](/docs/reference/api/python/raphtory/Node) or [`Edge`](/docs/reference/api/python/raphtory/Edge) object if the entity is present and `None` if it is not. - **Iterable access:** [`nodes`](/docs/reference/api/python/raphtory/Graph#nodes) and [`edges`](/docs/reference/api/python/raphtory/Graph#edges) will return iterables for all nodes/edges which can be used within a for loop or as part of a [function chain](./chaining). All of these functions are shown in the code below and will appear in several other examples throughout this tutorial. ### Selecting subsets with `[]` You can also use `[]` indexing with filter expressions to select a subset of nodes or edges. Unlike creating a filtered graph view, the filter **does not persist** on the returned entities — it simply selects which ones to return. This non-persistent behaviour is particularly powerful for **graph traversal** — for example, finding all baboons active on one day, then exploring their neighbours on the next. We cover these patterns in depth in [Chaining Queries](/docs/querying/chaining). For more details on filter expressions and combining filters with logical operators, see [Filtering](/docs/views/filtering). --- ## Querying > History # History and Intervals Every node and edge in Raphtory tracks its update history. The [`History`](/docs/reference/api/python/raphtory/History) object provides access to when updates occurred, and each entry is an [`EventTime`](/docs/reference/api/python/raphtory/EventTime) that combines a timestamp with an event ID. 
## Accessing history Nodes and edges have a [`.history`](/docs/reference/api/python/raphtory/Node#history) property that returns their update history: [`earliest_time`](/docs/reference/api/python/raphtory/Node#earliest_time) and [`latest_time`](/docs/reference/api/python/raphtory/Node#latest_time) return an [`OptionalEventTime`](/docs/reference/api/python/raphtory/OptionalEventTime) rather than `EventTime | None`. This means you can safely chain `.dt` or `.t` without checking for `None` first — if there's no history, these properties simply return `None` instead of raising an error. ## EventTime Each entry in a [`History`](/docs/reference/api/python/raphtory/History) is an [`EventTime`](/docs/reference/api/python/raphtory/EventTime) object. This combines a timestamp with an event ID to provide a unique, ordered timepoint. The event ID is used to order events that share the same timestamp. This is useful for data like blockchain transactions where multiple events occur in the same block but have a defined order: [`EventTime`](/docs/reference/api/python/raphtory/EventTime) has the following properties: - [`.t`](/docs/reference/api/python/raphtory/EventTime#t): Timestamp in milliseconds since epoch - [`.dt`](/docs/reference/api/python/raphtory/EventTime#dt): UTC datetime representation - [`.event_id`](/docs/reference/api/python/raphtory/EventTime#event_id): Event ID for sub-timestamp ordering - [`.as_tuple`](/docs/reference/api/python/raphtory/EventTime#as_tuple): Returns `(timestamp, event_id)` tuple ## History formats The [`History`](/docs/reference/api/python/raphtory/History) object provides several ways to access the time data. Each returns its own distinct object type — this allows Raphtory to perform conversions and filtering internally (in Rust) without having to materialize the full data in Python: - [`.t`](/docs/reference/api/python/raphtory/History#t): Returns a [`HistoryI64`](/docs/reference/api/python/raphtory/HistoryI64) — timestamps as integers (milliseconds since epoch) - [`.dt`](/docs/reference/api/python/raphtory/History#dt): Returns a [`HistoryDateTime`](/docs/reference/api/python/raphtory/HistoryDateTime) — timestamps as Python datetime objects (UTC) - [`.event_id`](/docs/reference/api/python/raphtory/History#event_id): Returns a [`HistoryEventId`](/docs/reference/api/python/raphtory/HistoryEventId) — event IDs for sub-timestamp ordering - [`.intervals`](/docs/reference/api/python/raphtory/History#intervals): Returns an [`Intervals`](/docs/reference/api/python/raphtory/Intervals) — time gaps between consecutive updates Each returns an iterable that you can [`.collect()`](/docs/reference/api/python/raphtory/History#collect) into an array. For integer-based data (`.t`, `.event_id`, `.intervals`), collect returns a NumPy array for efficient numerical operations. For `.dt`, it returns a Python list of datetime objects. Use `.to_list()` if you need a regular Python list. ## History operations The [`History`](/docs/reference/api/python/raphtory/History) object supports several operations for working with temporal data: - [`collect()`](/docs/reference/api/python/raphtory/History#collect) / [`collect_rev()`](/docs/reference/api/python/raphtory/History#collect_rev): Materialize the history into a list of [`EventTime`](/docs/reference/api/python/raphtory/EventTime) entries, either in chronological or reverse order. 
- [`earliest_time()`](/docs/reference/api/python/raphtory/History#earliest_time) / [`latest_time()`](/docs/reference/api/python/raphtory/History#latest_time): Get the time bounds of the history without collecting all entries. - [`is_empty()`](/docs/reference/api/python/raphtory/History#is_empty): Check if the history has any entries — useful for validation before processing. - [`reverse()`](/docs/reference/api/python/raphtory/History#reverse): Return a view that iterates in reverse order, enabling you to process recent events first without materializing the full history. - [`merge()`](/docs/reference/api/python/raphtory/History#merge): Combine two histories into a single interleaved timeline — useful for analyzing co-activity patterns. - [`compose_histories()`](/docs/reference/api/python/raphtory/History#compose_histories): Combine multiple histories at once (more than two) — ideal for aggregating activity across many entities. ## Intervals The [`Intervals`](/docs/reference/api/python/raphtory/Intervals) object (returned by [`history.intervals`](/docs/reference/api/python/raphtory/History#intervals)) provides analytical functions for working with time gaps between events: **Statistical functions:** - [`min()`](/docs/reference/api/python/raphtory/Intervals#min): Find the shortest gap between consecutive events — useful for detecting bursts of rapid activity. - [`max()`](/docs/reference/api/python/raphtory/Intervals#max): Find the longest gap — helps identify dormant periods or unusual pauses. - [`mean()`](/docs/reference/api/python/raphtory/Intervals#mean): Calculate the average interval — gives a sense of typical activity frequency. - [`median()`](/docs/reference/api/python/raphtory/Intervals#median): Get the middle interval value — more robust than mean when there are outliers. **Collection functions:** - [`collect()`](/docs/reference/api/python/raphtory/Intervals#collect) / [`collect_rev()`](/docs/reference/api/python/raphtory/Intervals#collect_rev): Get all intervals as a NumPy array (efficient for numerical operations). - [`to_list()`](/docs/reference/api/python/raphtory/Intervals#to_list) / [`to_list_rev()`](/docs/reference/api/python/raphtory/Intervals#to_list_rev): Get all intervals as a Python list. --- ## Querying > Node Metrics # Node metrics and functions Nodes can be accessed by storing the object returned from a call to [`add_node()`](/docs/reference/api/python/raphtory/Graph#add_node), by directly asking for a specific entity via [`node()`](/docs/reference/api/python/raphtory/Graph#node), or by iterating over all entities via [`nodes`](/docs/reference/api/python/raphtory/Graph#nodes). Once you have a node, you can ask it some questions. ## Update history Nodes have functions for querying their [`earliest_time`](/docs/reference/api/python/raphtory/Node#earliest_time) and [`latest_time`](/docs/reference/api/python/raphtory/Node#latest_time) (as an epoch or datetime) as well as for accessing their full history (using [`history`](/docs/reference/api/python/raphtory/Node#history) or `history.dt`). The `history` property returns all events on a node, including both direct node updates **and** events on connected edges. If you only want to count events that occurred on connected edges, use [`edge_history_count()`](/docs/reference/api/python/raphtory/Node#edge_history_count). In the example below, both values are the same because we only loaded edge updates when building the graph. If we had also called `add_node()` with timestamps, `history` would include those additional events. 
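For example, a short sketch along these lines (assuming `g` is the baboons graph and that `FELIPE` is one of its nodes):

```python
v = g.node("FELIPE")

# Time bounds of the node's activity
print(v.earliest_time, v.latest_time)

# Full update history: node updates plus events on connected edges
print(v.history.t.collect())

# Count only the events that occurred on connected edges
print(v.edge_history_count())
```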
For more details on working with temporal history, see [History and EventTime](./history). ## Neighbours, edges and paths To investigate who a node is connected with we can ask for its [`degree()`](/docs/reference/api/python/raphtory/Node#degree), [`edges`](/docs/reference/api/python/raphtory/Node#edges), or [`neighbours`](/docs/reference/api/python/raphtory/Node#neighbours). As Raphtory graphs are directed, all of these functions also have an `in_` and `out_` variation, allowing you to get only incoming and outgoing connections respectively. These functions return the following: - **degree:** A count of the number of unique connections a node has - **edges:** An [`Edges`](/docs/reference/api/python/raphtory/Edges) iterable of edge objects, one for each unique `(src,dst)` pair - **neighbours:** A [`PathFromNode`](/docs/reference/api/python/raphtory/PathFromNode) iterable of node objects, one for each entity the original node shares an edge with In the code below we call a selection of these functions to show the sort of questions you may ask. The final section of the code makes use of `v.neighbours.name.collect()` - this is a chain of functions which are run on each node in the [`PathFromNode`](/docs/reference/api/python/raphtory/PathFromNode) iterable. We will discuss these sorts of operations further in [Chaining functions](./chaining). ### Selecting subsets with `[]` You can also use `[]` indexing with filter expressions to select a subset of neighbours, edges, or nodes. Unlike creating a filtered graph view, the filter **does not persist** on the returned entities — it simply selects which ones to return. This non-persistent behaviour is particularly powerful for **graph traversal** — for example, finding all baboons that groom Felipe, then exploring who *they* play with. We cover these multi-hop patterns in depth in [Chaining Queries](/docs/querying/chaining). The [`.nbr`](/docs/reference/api/python/raphtory/Edge#nbr) property on an edge returns the "other" node — the neighbour from the node's perspective. When traversing from a node via its edges, you don't always know whether you're the source or destination of each edge. Using `.nbr` gives you the node at the opposite end, regardless of direction. For more details on filter expressions and combining filters with logical operators, see [Filtering](/docs/views/filtering). ## Node type and layers Use [`node_type`](/docs/reference/api/python/raphtory/Node#node_type) to get a node's type, and [`has_layer()`](/docs/reference/api/python/raphtory/Node#has_layer) to check if a node has edges in a specific layer. --- ## Querying > Properties # Properties and metadata In Raphtory, graphs, nodes and edges can all have temporal [`properties`](/docs/reference/api/python/raphtory/Node#properties) and constant [`metadata`](/docs/reference/api/python/raphtory/Node#metadata), consisting of a wide range of data types. This is also discussed in the [ingestion tutorial](/docs/ingestion/direct-updates). The [`Properties`](/docs/reference/api/python/raphtory/Properties) class offers several functions to access values in different formats. To demonstrate this you can create a simple graph with one node that has a variety of different properties. You can fetch a node's property object and call the following functions to access data: - [`keys()`](/docs/reference/api/python/raphtory/Properties#keys): Returns all of the property keys (names). - [`values()`](/docs/reference/api/python/raphtory/Properties#values): Returns the latest value for each property.
- [`items()`](/docs/reference/api/python/raphtory/Properties#items): Combines the [`keys()`](/docs/reference/api/python/raphtory/Properties#keys) and [`values()`](/docs/reference/api/python/raphtory/Properties#values) into a list of tuples. - [`get()`](/docs/reference/api/python/raphtory/Properties#get): Returns the latest value for a given key if the property exists or `None` if it does not. - [`as_dict()`](/docs/reference/api/python/raphtory/Properties#as_dict): Converts the [`Properties`](/docs/reference/api/python/raphtory/Properties) object into a standard Python dictionary. [`Metadata`](/docs/reference/api/python/raphtory/Node#metadata) supports the same access functions as [`properties`](/docs/reference/api/python/raphtory/Node#properties). ## Examining histories Properties have a history, which means you can do more than just look at the latest value. Calling [`get()`](/docs/reference/api/python/raphtory/TemporalProperties#get), [`values()`](/docs/reference/api/python/raphtory/TemporalProperties#values) or [`items()`](/docs/reference/api/python/raphtory/TemporalProperties#items) on [`Properties.temporal`](/docs/reference/api/python/raphtory/Properties#temporal) will return a [`TemporalProperty`](/docs/reference/api/python/raphtory/TemporalProperty) object which contains all of the value history. [`TemporalProperty`](/docs/reference/api/python/raphtory/TemporalProperty) has many helper functions for examining histories, including: * [`value()`](/docs/reference/api/python/raphtory/TemporalProperty#value) and [`values()`](/docs/reference/api/python/raphtory/TemporalProperty#values): Get the latest value or all values of the property. * [`at()`](/docs/reference/api/python/raphtory/TemporalProperty#at): Get the latest value of the property at the specified time. * [`history()`](/docs/reference/api/python/raphtory/TemporalProperty#history): Returns a [`History`](/docs/reference/api/python/raphtory/History) object with timestamps of all updates. Call `.dt` to get datetime format. * [`items()`](/docs/reference/api/python/raphtory/TemporalProperty#items): Merges [`values()`](/docs/reference/api/python/raphtory/TemporalProperty#values) and [`history()`](/docs/reference/api/python/raphtory/TemporalProperty#history) into a list of tuples. * [`mean()`](/docs/reference/api/python/raphtory/TemporalProperty#mean) and [`average()`](/docs/reference/api/python/raphtory/TemporalProperty#average): If the property is orderable, get the mean (average) value of the property. * [`median()`](/docs/reference/api/python/raphtory/TemporalProperty#median): If the property is orderable, get the median value of the property. * [`min()`](/docs/reference/api/python/raphtory/TemporalProperty#min) and [`max()`](/docs/reference/api/python/raphtory/TemporalProperty#max): If the property is orderable, get the minimum or maximum value. * [`count()`](/docs/reference/api/python/raphtory/TemporalProperty#count): Get the number of updates which have occurred. * [`sum()`](/docs/reference/api/python/raphtory/TemporalProperty#sum): If the property is additive, sum the values and return the result. In the code below, we call a subset of these functions on the `Weight` property of the edge between `FELIPE` and `MAKO` in our previous monkey graph example. ============================================================ # Section: Views ============================================================ --- ## Views > Index # Overview Many operations are executed on the whole graph, including the full history.
This section describes how to use a [`GraphView`](/docs/reference/api/python/raphtory/GraphView) to look at a subset of your data without having to re-ingest it. A [`GraphView`](/docs/reference/api/python/raphtory/GraphView) is a snapshot of a graph that is used when reading data and running algorithms. Generally, you will use a [`Graph`](/docs/reference/api/python/raphtory/Graph) object when building or mutating a graph and for global queries, but use a [`GraphView`](/docs/reference/api/python/raphtory/GraphView) to extract data from specific regions of interest. You can create graph views using either time functions like [`window()`](/docs/reference/api/python/raphtory/Graph#window), [`at()`](/docs/reference/api/python/raphtory/Graph#at), and [`before()`](/docs/reference/api/python/raphtory/Graph#before), or [filters](/docs/views/filtering) to select a narrower subset of your graph. You can then run your queries against the [`GraphView`](/docs/reference/api/python/raphtory/GraphView) instead of the [`Graph`](/docs/reference/api/python/raphtory/Graph) object. Raphtory can maintain hundreds of thousands of graph views in parallel and allows chaining view functions together to create as specific a view as is required for your use case. A unified API means that all functions that can be called on a [`Graph`](/docs/reference/api/python/raphtory/Graph), [`Node`](/docs/reference/api/python/raphtory/Node), or [`Edge`](/docs/reference/api/python/raphtory/Edge) can also be applied to this subset. ## View Propagation When you apply a view restriction — whether a time filter like [`window()`](/docs/reference/api/python/raphtory/GraphView#window), a layer with [`layer()`](/docs/reference/api/python/raphtory/GraphView#layer), or a node/edge filter — the restriction **persists to all subsequent operations** in the chain. This means any nodes, edges, or neighbours you access will also be filtered by the same view. For example, if you call `g.window(start, end).node("FELIPE").neighbours`, both Felipe and all returned neighbours are restricted to the specified window. The same applies to layers, subgraphs, and other view operations. If you need to apply different restrictions at different steps in a traversal, use the `[]` filter syntax with [filter expressions](/docs/views/filtering) instead. This applies a one-off filter without creating a persistent view, giving you fine-grained control over each step. For a detailed comparison of views vs filters and examples of multi-step queries, see the [Chaining Queries](/docs/querying/chaining#views-vs-filters-in-chains) documentation. This chapter will continue using the baboon graph described in the [Querying Introduction](/docs/querying). --- ## Views > Filtering # Filtering The [`filter`](/docs/reference/api/python/filter) module provides filter expressions for selecting subsets of nodes and edges based on their attributes, properties, and relationships. The filter module mirrors Raphtory's APIs. Anything you can do with temporal properties, layers, windows, or metadata can be expressed as a filter. This lets Raphtory handle filtering internally with optimised Rust code rather than requiring Python loops. If you find yourself writing a for-loop to filter entities, there's probably a filter expression that does it faster. ## Filtering vs Indexing There are two ways to apply filters: - **Indexing (`nodes[filter]`)** — applies a filter once to a collection, returning the matching items as a new collection. 
The original graph and any relationships remain unchanged, so traversing from filtered nodes still accesses all neighbours and edges. - **`.filter(expr)`** — creates a persistent filtered view of the graph. The filter propagates through all subsequent operations: accessing edges, neighbours, or properties only returns data that satisfies the filter condition. --- ## Filter Classes Raphtory provides four filter classes: | Filter Class | Purpose | |--------------|---------| | [`filter.Graph`](/docs/reference/api/python/filter/Graph) | Time and layer filters | | [`filter.Node`](/docs/reference/api/python/filter/Node) | Filter nodes by name, type, id, metadata, properties | | [`filter.Edge`](/docs/reference/api/python/filter/Edge) | Filter edges by layer, properties, and endpoint attributes | | [`filter.ExplodedEdge`](/docs/reference/api/python/filter/ExplodedEdge) | Filter individual edge updates | ### filter.Graph [`filter.Graph`](/docs/reference/api/python/filter/Graph) supports all view functions from [Graph Views](/docs/views). You can filter by time windows, layers, snapshots, and more. The example shows a social network with edges added at different times across work and social layers. We filter to edges before/after a timestamp, within a time window, and on specific layers. ### filter.Node [`filter.Node`](/docs/reference/api/python/filter/Node) filters nodes by their attributes and properties. You can also chain view functions (like `window()` or `layer()`) to filter properties within specific time or layer bounds. The example creates users with metadata and salary history. We filter by name prefix, node type, metadata values, and demonstrate `window()` chaining to find nodes where the max salary within a time range meets a condition. | Method | Description | |--------|-------------| | [`name()`](/docs/reference/api/python/filter/Node#name) | Access node name (supports string operations) | | [`node_type()`](/docs/reference/api/python/filter/Node#node_type) | Access node type | | [`id()`](/docs/reference/api/python/filter/Node#id) | Access node ID | | [`metadata(key)`](/docs/reference/api/python/filter/Node#metadata) | Access metadata | | [`property(key)`](/docs/reference/api/python/filter/Node#property) | Access temporal properties | ### filter.Edge [`filter.Edge`](/docs/reference/api/python/filter/Edge) filters edges by their properties and endpoints. The `src()` and `dst()` methods let you filter based on source and destination node attributes. Like `filter.Node`, you can chain view functions to scope property lookups. The example creates a graph of people and companies with relationship edges. We filter by layer + property conditions, use `window()` to scope temporal lookups, and filter by source/destination node types and metadata. | Method | Description | |--------|-------------| | [`metadata(key)`](/docs/reference/api/python/filter/Edge#metadata) | Access metadata | | [`property(key)`](/docs/reference/api/python/filter/Edge#property) | Access temporal properties | | [`src()`](/docs/reference/api/python/filter/Edge#src) | Access source node attributes | | [`dst()`](/docs/reference/api/python/filter/Edge#dst) | Access destination node attributes | ### filter.ExplodedEdge When you call `edge.explode()`, each temporal update becomes a separate item. [`filter.ExplodedEdge`](/docs/reference/api/python/filter/ExplodedEdge) lets you filter these individual updates. The example shows a sensor sending readings to a hub over time. Each reading has a temperature and status. 
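A sketch of that setup is shown below. The exact `filter.ExplodedEdge` methods, the import path of the filter module, and the `[]` indexing on exploded edges are assumptions here, based on how the other filter classes behave; check the filter reference for the precise API.

```python
from raphtory import Graph, filter  # import path assumed

g = Graph()
g.add_edge(1, "sensor-1", "hub", properties={"temperature": 21.5, "status": "ok"})
g.add_edge(2, "sensor-1", "hub", properties={"temperature": 35.0, "status": "critical"})
g.add_edge(3, "sensor-1", "hub", properties={"temperature": 28.0, "status": "ok"})

e = g.edge("sensor-1", "hub")

# Select individual updates rather than the edge as a whole (assumed API)
critical = e.explode()[filter.ExplodedEdge.property("status") == "critical"]
hot = e.explode()[filter.ExplodedEdge.property("temperature") > 30.0]
```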
After exploding the edge, we filter individual updates by their property values to find critical readings or readings above a threshold. If all updates for an edge are filtered out, the edge itself is removed from the result. --- ## Property Filtering ### Metadata Metadata are immutable values set on a node or edge. They don't change over time. Use [`metadata(key)`](/docs/reference/api/python/filter/Node#metadata) to filter by these values. The example below shows sensors with different locations, models, and calibration status. We filter by exact match, use `is_in()` to match multiple values, `is_not_in()` to exclude values, and `contains()` for substring matching. The final filter combines conditions to find calibrated sensors in priority warehouses. ### Temporal Property Operations Temporal properties store multiple values over time. Use [`.temporal()`](/docs/reference/api/python/filter/PropertyFilterOps#temporal) to access aggregation and quantifier operations that filter based on the entire history of a property, not just its current value. The example tracks temperature readings from sensors over time, then filters using `.max()`, `.avg()`, `.any()`, and `.all()` to answer questions like "which sensors ever exceeded 30°C?" or "which sensors always stayed below 25°C?". | Operation | Description | |-----------|-------------| | `sum()` | Sum of all values | | `avg()` | Average of all values | | `min()`, `max()` | Minimum/maximum value | | `first()`, `last()` | First/most recent value | | `all()` | All values must satisfy condition | | `any()` | At least one value must satisfy condition | | `len()` | Count of updates | ### Scoping with Window and Layer When filtering by temporal properties, you can scope the lookup using `window()` or `layer()`. This is useful for questions like "which servers had high CPU usage during the incident window?" or "which edges in the work layer have been active recently?" The pattern is `filter.Node.window(start, end).property("key").temporal().aggregation()`. Without a window, the filter considers the entire history. With a window, only values within that time range are used for aggregation. The example tracks CPU usage across servers. During normal operation, usage is low. During a high-traffic window (t=4-11), some servers spike. We compare filtering with and without windows to show how scoping changes the results. ### String Operations For string-valued attributes like names and metadata, you can use string operations to match patterns. These work on `name()`, `node_type()`, and string metadata/properties. | Operation | Description | |-----------|-------------| | `starts_with(prefix)` | String starts with prefix | | `ends_with(suffix)` | String ends with suffix | | `contains(substring)` | String contains substring | | `not_contains(substring)` | String doesn't contain substring | | `fuzzy_search(pattern, max_edits, prefix)` | Fuzzy matching: `max_edits` = allowed character edits, `prefix` = required exact prefix length | | `is_in([...])` | Value is in list | | `is_not_in([...])` | Value is not in list | ### Checking Property Existence Use [`is_some()`](/docs/reference/api/python/filter/PropertyFilterOps#is_some) and [`is_none()`](/docs/reference/api/python/filter/PropertyFilterOps#is_none) to filter based on whether a property exists. This is useful when properties may be missing on some entities. 
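A minimal sketch of existence checks (the `email` property and the import path for the filter module are illustrative assumptions):

```python
from raphtory import Graph, filter  # import path assumed

g = Graph()
g.add_node(1, "alice", properties={"email": "alice@example.com"})
g.add_node(1, "bob")  # no email property set

has_email = filter.Node.property("email").is_some()
missing_email = filter.Node.property("email").is_none()

print(g.nodes[has_email].name.collect())      # nodes where the property exists
print(g.nodes[missing_email].name.collect())  # nodes where it is missing
```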
--- ## Combining Filters Combine filter expressions with Python's bitwise operators: | Operator | Meaning | |----------|---------| | `&` | AND — both conditions must be true | | `\|` | OR — at least one condition must be true | | `~` | NOT — negates the condition | You must use bitwise operators `&`, `|`, and `~`. Python's `and`, `or`, and `not` keywords do not work with filter expressions. The example creates users with admin/user types and active/inactive status. We define base filters for `admin` and `active`, then combine them to find active admins, inactive admins, and active non-admins. --- ## The Prop Type When filtering on properties with specific internal types, use the [`Prop`](/docs/reference/api/python/raphtory/Prop) class to create typed values for comparison. This is essential when properties are stored as types Python doesn't have natively (like `u8` or `f32`). | Type Constructor | Description | |------------------|-------------| | `Prop.bool(v)` | Boolean | | `Prop.str(v)` | String | | `Prop.u8(v)`, `Prop.u16(v)`, `Prop.u32(v)`, `Prop.u64(v)` | Unsigned integers (8, 16, 32, 64-bit) | | `Prop.i32(v)`, `Prop.i64(v)` | Signed integers (32, 64-bit) | | `Prop.f32(v)`, `Prop.f64(v)` | Floating point (32, 64-bit) | | `Prop.list(v)` | List of values | | `Prop.map(v)` | Dictionary/map | **Comparison operators:** `==`, `!=`, `<`, `<=`, `>`, `>=` --- ## Views > Layers # Layered graphs ## Prerequisites Before reading this topic, please ensure you are familiar with: - [Edge layers](/docs/ingestion/direct-updates#edge-layers) - [Exploded Edges](/docs/querying/edge-metrics#exploded-edges) - [Multi-layer Temporal Traversal](/docs/querying/chaining#multi-layer-temporal-traversal) ## Creating layer views An edge object by default will contain information on all layers between its source and destination nodes. Often you are only interested in a subset of these relationships. To handle this, the [`Graph`](/docs/reference/api/python/raphtory/Graph), [`Node`](/docs/reference/api/python/raphtory/Node) and [`Edge`](/docs/reference/api/python/raphtory/Edge) provide the [`layers()`](/docs/reference/api/python/raphtory/GraphView#layers) function which takes a list of layer names and returns a view with only the edge updates that occurred on these layers. Layer views can also be used in combination with any other view function. In the example below, we look at the total edge weight over the full graph, then restrict this to the `Grooming` and `Resting` layers and then reduce this further by applying a window between the 13th and 20th of June. ## Filtering to specific layers ### Valid layers When working with layers, you may encounter situations where you're not sure which layers exist in your current view. Raphtory provides several functions to handle this gracefully: - **[`layers()`](/docs/reference/api/python/raphtory/GraphView#layers)** — Throws an error if you specify a layer that doesn't exist in the graph - **[`valid_layers()`](/docs/reference/api/python/raphtory/Node#valid_layers)** — Silently ignores invalid layer names and only uses the valid ones This makes [`valid_layers()`](/docs/reference/api/python/raphtory/Node#valid_layers) useful when you have a list of layers from external input and aren't sure which ones exist: ### Excluding layers Sometimes it's easier to specify which layers you *don't* want rather than listing all the ones you do.
The [`exclude_layers()`](/docs/reference/api/python/raphtory/GraphView#exclude_layers) function lets you remove specific layers from a view. Like the include functions, there's also an [`exclude_valid_layers()`](/docs/reference/api/python/raphtory/Node#exclude_valid_layers) variant that silently ignores invalid layer names. ### The default layer When you add an edge without specifying a layer, it goes to the *default layer*. You can access this layer using [`default_layer()`](/docs/reference/api/python/raphtory/GraphView#default_layer), or by using the string `"_default"` with the [`layers()`](/docs/reference/api/python/raphtory/GraphView#layers) function. ## Traversing the graph with layers Building on the [Multi-layer Temporal Traversal](/docs/querying/chaining#multi-layer-temporal-traversal) pattern, you can use layer filters with the `[]` syntax to traverse across different relationship types. In this example, we find LOME's grooming partners, then discover who those baboons have rested with. --- ## Views > Materialize # Materializing and Caching Views All [view functions](/docs/views/graph-views) hold zero updates of their own, simply providing a lens through which to look at a graph. This is by design so that you can have many views without expensive data duplication. ## Materialize If the original graph is updated, all views of it will also update. If you do not want this, you can call [`materialize()`](/docs/reference/api/python/raphtory/GraphView#materialize) on a view to create a new graph and copy all the updates the view contains into it. In the example below, we create a windowed view of the baboon interaction data for a single day, then materialize it. After adding a new interaction to the materialized graph, the original view is unchanged. ## Caching a view For large graphs, checking whether each node and edge belongs to a view (matching time windows, layers, filters) on every operation adds up. If you're going to run an iterative algorithm like PageRank, or perform multiple operations on the same view, you can use [`cache_view()`](/docs/reference/api/python/raphtory/GraphView#cache_view) to pre-compute which nodes and edges are included. Unlike `materialize()`, `cache_view()` does not copy any data. It builds an index of the graph structure with the view applied, so subsequent operations don't need to re-check view conditions. --- ## Views > Subgraphs # Subgraph For some use cases you may only be interested in a subset of nodes within the graph. One solution could be to call [`g.nodes`](/docs/reference/api/python/raphtory/Graph#nodes) and filter before continuing your workflow. However, this does not remove anything from future function calls — you would have to constantly recheck these lists. To handle this, Raphtory provides the [`subgraph()`](/docs/reference/api/python/raphtory/GraphView#subgraph) function which takes a list of nodes of interest. This returns a [`GraphView`](/docs/reference/api/python/raphtory/GraphView) where all nodes not in the list are hidden from all future function calls. This also hides any edges linked to hidden nodes to keep the subgraph consistent. In the example below we demonstrate this by looking at the neighbours of `FELIPE` in the full graph, compared to a subgraph of `FELIPE`, `LIPS`, `NEKKE`, `LOME` and `BOBO`. We also show how [`subgraph()`](/docs/reference/api/python/raphtory/GraphView#subgraph) can be combined with other view functions, in this case a window between the 17th and 18th of June. 
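A sketch of that comparison (assuming `g` is the baboons graph; the dates are illustrative and should match your dataset's time range):

```python
# Neighbours of FELIPE in the full graph
print(g.node("FELIPE").neighbours.name.collect())

# Restrict the view to five baboons of interest
sg = g.subgraph(["FELIPE", "LIPS", "NEKKE", "LOME", "BOBO"])
print(sg.node("FELIPE").neighbours.name.collect())

# Combine the subgraph with a one-day window (assumed year)
windowed = sg.window("2019-06-17", "2019-06-18")
print(windowed.node("FELIPE").neighbours.name.collect())
```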
## Excluding nodes Sometimes it's easier to specify which nodes you *don't* want rather than listing all the ones you do. The [`exclude_nodes()`](/docs/reference/api/python/raphtory/GraphView#exclude_nodes) function is the opposite of [`subgraph()`](/docs/reference/api/python/raphtory/GraphView#subgraph) — it removes the specified nodes from the view while keeping everything else. ## Helper subgraph functions Raphtory provides several convenience functions for common subgraph operations, saving you from writing boilerplate code. ### Largest connected component When analysing real-world graphs, it's common to find disconnected components — isolated nodes or small clusters that aren't connected to the main network. The [`largest_connected_component()`](/docs/reference/api/python/raphtory/Graph#largest_connected_component) function extracts only the largest connected subgraph, which is often the most interesting part of your data. ### Subgraph by node types For heterogeneous graphs with multiple node types (e.g., people, companies, products), you can use [`subgraph_node_types()`](/docs/reference/api/python/raphtory/GraphView#subgraph_node_types) to extract only nodes of specific types. This is useful when you want to focus on a particular category of entities. --- ## Views > Temporal Windows # Querying the graph over time Raphtory allows you to create windows that cover a specified time period and generate views from a window; this is sometimes called filtering. You can then get a final result by applying a function to the view object. This means you can run algorithms against a subset of your data and track the evolution of variables across time. All of the time view functions can be called on a [`Graph`](/docs/reference/api/python/raphtory/Graph), [`Node`](/docs/reference/api/python/raphtory/Node), or [`Edge`](/docs/reference/api/python/raphtory/Edge), returning an equivalent [`GraphView`](/docs/reference/api/python/raphtory/GraphView), [`NodeView`](/docs/reference/api/python/raphtory/NodeView) or [`EdgeView`](/docs/reference/api/python/raphtory/EdgeView), each of which has all the same functions as its unfiltered counterpart. This means that if you write a function which takes a Raphtory entity, it will work regardless of which filters have been applied. ## Creating views At, After and Before a specified time The simplest of these functions, [`before()`](/docs/reference/api/python/raphtory/GraphView#before), [`at()`](/docs/reference/api/python/raphtory/GraphView#at) and [`after()`](/docs/reference/api/python/raphtory/GraphView#after), take a single `time` argument in epoch (integer) or datetime (string or datetime object) format and return a view of the object which includes: - [`at()`](/docs/reference/api/python/raphtory/GraphView#at) - Only updates which happened at exactly the time specified. - [`after()`](/docs/reference/api/python/raphtory/GraphView#after) - All updates between the specified time and the end of the graph's history, exclusive of the time specified. - [`before()`](/docs/reference/api/python/raphtory/GraphView#before) - All updates between the beginning of the graph's history and the specified time, exclusive of the time specified. - [`latest()`](/docs/reference/api/python/raphtory/GraphView#latest) - A convenience function equivalent to `g.at(g.latest_time)`, returning only the most recent state of the graph.
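As a quick sketch of these four functions on a small toy graph (illustrative, not the baboon data):

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "A", "B")
g.add_edge(2, "B", "C")
g.add_edge(3, "A", "C")

print(g.before(3).count_edges())  # updates strictly before t=3 -> 2 edges
print(g.after(1).count_edges())   # updates strictly after t=1 -> 2 edges
print(g.at(2).count_edges())      # only updates at exactly t=2 -> 1 edge
print(g.latest().count_edges())   # only the most recent state -> 1 edge
```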
While [`before()`](/docs/reference/api/python/raphtory/GraphView#before) and [`after()`](/docs/reference/api/python/raphtory/GraphView#after) are more useful for continuous time datasets, [`at()`](/docs/reference/api/python/raphtory/GraphView#at) can be helpful when you have snapshots or logical timestamps and want to look at them individually to compare and contrast. The [`latest()`](/docs/reference/api/python/raphtory/GraphView#latest) function is particularly useful when you want to query the current state without needing to know the exact timestamp. In the example below we print the degree of `Lome` across the full dataset, before 12:17 on the 13th of June, after 9:07 on the 30th of June, at the earliest time, and at the latest time. We also use two time functions here, [`start`](/docs/reference/api/python/raphtory/GraphView#start) and [`end`](/docs/reference/api/python/raphtory/GraphView#end), which return information about a view. Notice in the output that the start and end show `None` for unbounded times - when using [`before()`](/docs/reference/api/python/raphtory/GraphView#before) we don't need a start bound (we include everything up until the specified point), and when using [`after()`](/docs/reference/api/python/raphtory/GraphView#after) we don't need an end bound (we include everything from the specified point onwards). ## Window The [`window()`](/docs/reference/api/python/raphtory/GraphView#window) function allows you to create a view that is restricted by both a `start` time and an `end` time, inclusive of the start and exclusive of the end time. This is useful for examining specific ranges of your graph's history. In the example below, we look at the number of times `Lome` interacts with `Nekke` within the full dataset and for one day between the 13th of June and the 14th of June. We use datetime objects in this example, but it would work the same with string dates and epoch integers. ### Shrinking windows Raphtory provides three functions for narrowing an existing window: - [`shrink_start()`](/docs/reference/api/python/raphtory/GraphView#shrink_start) — Moves the start time forward, keeping the end fixed - [`shrink_end()`](/docs/reference/api/python/raphtory/GraphView#shrink_end) — Moves the end time backward, keeping the start fixed - [`shrink_window()`](/docs/reference/api/python/raphtory/GraphView#shrink_window) — Shrinks both start and end simultaneously These functions **always stay within the bounds of the original window**. If you attempt to shrink to a time outside of the original window, the value is clamped to the original bounds (effectively ignored): ## Window Sets When analysing temporal graphs, you often want to examine how metrics evolve over time. Rather than manually creating individual windows, Raphtory provides two powerful functions that generate sequences of views automatically: [`expanding()`](/docs/reference/api/python/raphtory/GraphView#expanding) for cumulative analysis and [`rolling()`](/docs/reference/api/python/raphtory/GraphView#rolling) for sliding window analysis. Both functions support natural language time intervals like `"1 week"`, `"2 days, 3 hours"`, or `"1 month"`, making it easy to express complex temporal queries in a readable way. ### Expanding Use [`expanding()`](/docs/reference/api/python/raphtory/GraphView#expanding) to iterate over multiple time points with cumulative windows. Each view includes all history from the start up to that point, growing larger with each step.
Using [`expanding()`](/docs/reference/api/python/raphtory/GraphView#expanding) will return an iterable of views as if you called [`before()`](/docs/reference/api/python/raphtory/GraphView#before) from the earliest time to the latest time at increments of a given `step`. The start of each window is aligned with the smallest unit of time passed by the user within the `step`. Alternatively, you can explicitly specify an `alignment_unit` that determines when the window starts. The `step` can be specified using a simple epoch integer, or a natural language string describing the interval. For the latter, the string is converted into an iterator of datetimes, handling all corner cases like varying month length and leap years. Within the string you can reference `years`, `months`, `weeks`, `days`, `hours`, `minutes`, `seconds` and `milliseconds`. These can be singular or plural and the string can include `and`, spaces, and commas to improve readability. The example below demonstrates two cases. In the first case, we increment through the full history of the graph a week at a time. This creates four views, for each of which we ask how many monkey interactions it has seen. You will notice the start time does not change, but the end time increments by 7 days with each view. The second case shows the complexity of increments Raphtory can handle, stepping by `2 days, 3 hours, 12 minutes and 6 seconds` each time. We have additionally bounded this iterable using a window between the 13th and 23rd of June to demonstrate how these views may be chained. ### Rolling You can use [`rolling()`](/docs/reference/api/python/raphtory/GraphView#rolling) to create a rolling window instead of including all prior history. This function will return an iterable of views, incrementing by a `window` size and only including the history from inside the window period, inclusive of start, exclusive of end. This allows you to easily extract daily or monthly metrics. Alongside the window size, [`rolling()`](/docs/reference/api/python/raphtory/GraphView#rolling) takes an optional `step` argument which specifies how far along the timeline it should increment before applying the next window. By default this is the same as `window`, allowing all updates to be analyzed exactly once in non-overlapping windows. If you want overlapping or fully disconnected windows, you can set a `step` smaller or greater than the given `window` size. Optionally you can also specify an `alignment_unit` to determine where the window should begin (e.g., aligned to the start of a month or week). For example, you can take the code from [expanding](#expanding) and swap out the function for [`rolling()`](/docs/reference/api/python/raphtory/GraphView#rolling). In the loop you can see both the start date and end date increase by seven days each time, and the number of monkey interactions sometimes decreases as older data is dropped from the window. ## Snapshots When writing functions that may receive either a [`Graph`](/docs/reference/api/python/raphtory/Graph) or a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), you may want consistent behaviour regardless of the graph type. Raphtory provides two snapshot functions that give you known semantics across both: - [`snapshot_at(time)`](/docs/reference/api/python/raphtory/GraphView#snapshot_at) — Returns a view including all events that have not been explicitly deleted at the given time.
This is equivalent to `before(time + 1)` for a regular [`Graph`](/docs/reference/api/python/raphtory/Graph), and `at(time)` for a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph). - [`snapshot_latest()`](/docs/reference/api/python/raphtory/GraphView#snapshot_latest) — Returns a view including all events that have not been explicitly deleted at the latest time. This is a no-op for a regular [`Graph`](/docs/reference/api/python/raphtory/Graph), and equivalent to `latest()` for a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph). These functions are useful when you need to query the "current state" of a graph without knowing what type it is. For more details on the difference between `Graph` and `PersistentGraph`, see the [Time Semantics](/docs/views/time-semantics) section. ============================================================ # Section: Algorithms ============================================================ --- ## Algorithms > Index # Running algorithms Raphtory implements many of the standard algorithms you expect within a graph library, but also has several temporal algorithms such as [`temporal_reachability`](/docs/reference/api/python/algorithms#temporally_reachable_nodes) and [`temporal motifs`](/docs/reference/api/python/algorithms#local_temporal_three_node_motifs). Raphtory categorizes algorithms into `graphwide` and `node centric`: - **graphwide**: returns one value for the whole graph (e.g. [`directed_graph_density`](/docs/reference/api/python/algorithms#directed_graph_density), [`global_clustering_coefficient`](/docs/reference/api/python/algorithms#global_clustering_coefficient)) - **node centric**: returns one value for each node in the graph (e.g. [`pagerank`](/docs/reference/api/python/algorithms#pagerank), [`degree_centrality`](/docs/reference/api/python/algorithms#degree_centrality)) Looking for a specific algorithm? Check out the [Algorithm Library](/docs/reference/algorithms) for a complete list of all available algorithms, including centrality measures, community detection, path finding, and temporal analysis. For these examples we are going to use the [One graph to rule them all](https://arxiv.org/abs/2210.07871) dataset, which maps the co-occurrence of characters in the Lord of the Rings books. This dataset is a simple edge list, consisting of the source character, destination character and the sentence they occurred together in (which we use as a timestamp). The dataframe for this can be seen in the output below. --- ## Algorithms > Community # Community detection One important feature of graphs is the degree of clustering and presence of community structures. Groups of nodes that are densely connected amongst members of the group but have comparatively few connections with the rest of the graph can be considered distinct communities. Identifying clusters can be informative in social, biological and technological networks. For example, identifying clusters of web clients accessing a site can help optimise performance using a CDN, while spotting changes in the communities amongst a baboon pack over time might inform theories about group dynamics. Raphtory provides a variety of algorithms to analyse community structures in your graphs. ## Exploring Zachary's karate club network As an example, we use a data set from the paper "An Information Flow Model for Conflict and Fission in Small Groups" by Wayne W. Zachary, which captures social links between the 34 members of the club.
### Load data and run Louvain Raphtory provides multiple algorithms to perform community detection, including: - [`louvain`](/docs/reference/api/python/algorithms#louvain) - a commonly used and well-understood modularity-based algorithm - [`label_propagation`](/docs/reference/api/python/algorithms#label_propagation) - a more efficient cluster detection algorithm when used at scale Here we load the data, create a [`Graph`](/docs/reference/api/python/raphtory/Graph), and use the [`louvain`](/docs/reference/api/python/algorithms#louvain) algorithm to identify distinct clusters of nodes. The algorithm identifies four clusters of nodes which could be interpreted as four social groups amongst the students. ### Explore the data You can explore the results of our cluster detection algorithm in greater detail using the Raphtory UI. To do this, assign a type to nodes of each cluster using [`set_node_type()`](/docs/reference/api/python/raphtory/MutableNode#set_node_type) and start a [`GraphServer`](/docs/reference/api/python/graphql/GraphServer). Each unique node type will be assigned a colour in the **Graph canvas** so that you can distinguish them visually. You should see that there are four distinct communities. For each node you can see its node type in the **Node Statistics** panel of the **Selected** menu and by visual inspection verify that each node is connected mostly to its own group. You may also spot other features that could be investigated further; for example, the nodes with the highest degree are members of the 'Cobra Kai' cluster. ![UI Search page](/images/visualisation/raphtory_com_detection_ui.png) --- ## Algorithms > Node Centric # Node centric algorithms The second category of algorithms is `node centric`, which returns a value for each node in the graph. These results are stored within a [`NodeState`](/docs/reference/api/python/node_state) object which has functions for sorting, grouping, [`top_k`](/docs/reference/api/python/node_state/NodeStateUsize#top_k), and conversion to dataframes. ## Continuous Results: PageRank [PageRank](https://en.wikipedia.org/wiki/PageRank) is a centrality metric developed by Google's founders to rank web pages in search engine results based on their importance and relevance. This has since become a standard ranking algorithm for a whole host of other use cases. Raphtory's [`pagerank`](/docs/reference/api/python/algorithms#pagerank) implementation returns the score for each node. These are **continuous values**, meaning we can discover the most important characters in our Lord of the Rings dataset via [`top_k()`](/docs/reference/api/python/node_state/NodeStateF64#top_k). In the example below we first get the result of an individual character (Gandalf), followed by the values of the top 5 most important characters. ## Discrete Results: Connected Components [Weakly connected components](https://en.wikipedia.org/wiki/Component_(graph_theory)) in a directed graph are `subgraphs` where every node is reachable from every other node if edge direction is ignored. For each node, [`weakly_connected_components`](/docs/reference/api/python/algorithms#weakly_connected_components) finds which component it is a member of and returns the id of the component. These are **discrete values**, meaning we can use [`groups()`](/docs/reference/api/python/node_state/NodeStateUsize#groups) to find additional insights like the size of the [largest connected component](https://en.wikipedia.org/wiki/Giant_component).
The `component ID (value)` is generated from the lowest `node ID` in the component. In the example below we first run the algorithm and print the result so we can see what it looks like. Next we take the results and group the nodes by these IDs and calculate the size of the largest component. Almost all nodes are within this component (134 of the 139), as is typical for social networks. --- ## Algorithms > Running # Graph wide algorithms The following examples cover three example `graphwide` algorithms: - [Graph Density](https://en.wikipedia.org/wiki/Dense_graph) - which represents the ratio between the edges present in a graph and the maximum number of edges that the graph could contain. - [Clustering coefficient](https://en.wikipedia.org/wiki/Clustering_coefficient) - which is a measure of the degree to which nodes in a graph tend to cluster together e.g. how many of your friends are also friends with each other. - [Reciprocity](https://en.wikipedia.org/wiki/Reciprocity_(network_science)) - which is a measure of the likelihood of nodes in a directed network to be mutually connected e.g. if you follow someone on Twitter, what's the chance of them following you back. To run an algorithm you simply need to import the algorithm package, choose an algorithm to run, and give it your graph. --- ## Algorithms > Vectorisation # Vectorisation The `vectors` module allows you to transform a graph into a collection of documents and vectorise those documents using an embedding function. Since the AI space moves quickly, Raphtory allows you to plug in your preferred embedding model either locally or from an API. Using this you can perform [semantic search](https://en.wikipedia.org/wiki/Semantic_search) over your graph data and build powerful AI systems with graph-based RAG. ## Vectorise a graph To vectorise a graph you must create an embeddings function that takes a list of strings and returns a matching list of embeddings. This function can use any model or library you prefer; in this example we use the `openai` library and point it at a local, OpenAI API-compatible Ollama service. When you call [vectorise()](/docs/reference/api/python/vectors), Raphtory automatically creates documents for each node and edge entity in your graph. Optionally, you can provide template strings to `vectorise()` to control how these documents are formatted. This is useful when you know which properties are semantically relevant or want to present information in a specific format when retrieved by a human or machine user. Additionally, you can cache the embedded graph to disk to avoid having to recompute the vectors when nothing has changed. ### Document templates The templates for entity documents follow a subset of [Jinja](https://jinja.palletsprojects.com/en/stable/templates/) using [Mini Jinja](https://docs.rs/minijinja/latest/minijinja/). Additionally, graph attributes and properties are exposed so that you can use them in template expressions. The nesting of attributes reflects the Python interface and you can perform chains such as `properties.prop_name` or `src.name` which will follow the same typing as in Python. For `datetime` values, by default Raphtory converts these into milliseconds since the Unix epoch but provides an optional `datetimeformat` function to convert this to a human-readable format. ## Retrieve documents You can retrieve relevant information from the [VectorisedGraph](/docs/reference/api/python/vectors) by making selections.
A [VectorSelection](/docs/reference/api/python/vectors) is a general object for holding embedded documents. You can create an empty selection or perform a similarity query against a `VectorisedGraph` to populate a new selection. You can add to a selection by combining existing selections or by adding new documents associated with specific nodes and edges by their IDs. Additionally, you can [expand](/docs/reference/api/python/vectors) a selection by making similarity queries relative to the entities in the current selection; this uses the power of the graph relationships to constrain your query. Once you have a selection containing the information you want you can: - Get the associated graph entities using [nodes()](/docs/reference/api/python/vectors) or [edges()](/docs/reference/api/python/vectors). - Get the associated documents using [get_documents()](/docs/reference/api/python/vectors) or [get_documents_with_scores()](/docs/reference/api/python/vectors). Each [Document](/docs/reference/api/python/vectors) corresponds to a unique entity in the graph and holds the contents of the associated document and its vector representation. You can pull any of these out to retrieve information about an entity for a RAG system, compose a subgraph to analyse using Raphtory's algorithms, or feed into some more complex pipeline. ## Asking questions about your network Using the Network example from the [ingestion using dataframes](/docs/ingestion/dataframes) discussion you can set up a graph and add some simple AI tools in order to create a `VectorisedGraph`: Using this `VectorisedGraph` you can perform similarity queries and feed the results into an LLM to ground its responses in your data. However, you must always be aware that LLM responses are still statistical and variations will occur. In production systems you may want to use a structured output tool to enforce a specific format. --- ## Algorithms > Views # Running algorithms on GraphViews Both `graphwide` and `node centric` algorithms can be run on a [`GraphView`](/docs/reference/api/python/raphtory/GraphView). This allows us to see how results change over time, run algorithms on subsets of the layers, or remove specific nodes from the graph to see the impact this has. To demonstrate this, the following example shows how you could track Gandalf's importance over the course of the story using [`rolling()`](/docs/reference/api/python/raphtory/Graph#rolling) windows and the [`pagerank`](/docs/reference/api/python/algorithms#pagerank) algorithm. Within each windowed graph we use the [`NodeState`](/docs/reference/api/python/node_state) API to extract Gandalf's score and record it alongside the earliest timestamp in the window, which can then be plotted via matplotlib. ![Gandalf's importance over time](/images/gandalf-importance.png) ============================================================ # Section: Graphql ============================================================ --- ## Graphql > Index # Raphtory Server Raphtory Server runs over a working directory that organizes graphs into **namespaces**. Each namespace can contain multiple graphs, making it easy to manage different projects or datasets. ## Server Summary The server can be started in two ways: - **Pre-populated directory**: Run your Python script to create and save graphs, then start the server to expose them via GraphQL. - **Empty directory**: Launch the server on an empty directory and use Python or GraphQL to push graphs to it dynamically.
Once running, you can: - **Push and pull graphs** between Python and the server - **Query and update** graphs via GraphQL or Python clients - **Visualize** your graphs using the built-in UI The server includes a **built-in UI** at `localhost:1736` for writing GraphQL queries and visualizing your graphs interactively. ## What is GraphQL? [GraphQL](https://graphql.org/) is a query language for your API, and a server-side runtime for executing queries using a type system you define for your data. Using GraphQL can help you reduce over-fetching and under-fetching of data compared to REST APIs. The GraphQL server provides an IDE available at `localhost:1736/playground` where you can write and test GraphQL queries. Alternatively, you can write all your GraphQL queries in Python and easily update, send and receive Raphtory graphs from the server. This section will show you how to start the Raphtory Server and run your own queries on your data. --- ## Graphql > Advanced Settings # Advanced Server Settings This page covers advanced configuration options for the Raphtory Server. For basic server setup, see [Running the Server](/docs/graphql/running). For a complete list of CLI options, see [Command Line Interface](/docs/getting-started/cli). ## Logging and Tracing Raphtory supports OpenTelemetry (OTLP) for distributed tracing, allowing you to monitor and debug query execution in production environments. ### Enabling Tracing To enable tracing, use the `--tracing` flag when starting the server: ```sh raphtory server --tracing --tracing-level ESSENTIAL ``` ### Tracing Levels Raphtory provides three tracing levels to balance between detail and performance: | Level | Description | |-------|-------------| | **MINIMAL** | Query summary and execution time only. Best for production monitoring. | | **ESSENTIAL** | Tracks key functions (addEdge, addNode, node, edges, etc.) and stores the full query. Useful for monitoring what's broadly happening without performance overhead. | | **COMPLETE** | Includes ALL spans for every operation. **Use with caution** - deep Raphtory queries can generate thousands of spans per query. Excellent for debugging but not suitable for production. | When running large, deep Raphtory queries with `COMPLETE` tracing, there can be thousands of spans in a single query. This is invaluable for debugging complex issues but will significantly impact performance. Use `ESSENTIAL` or `MINIMAL` for production workloads. ### OTLP Configuration To send traces to an OpenTelemetry collector: ```sh raphtory server \ --tracing \ --tracing-level ESSENTIAL \ --otlp-agent-host localhost \ --otlp-agent-port 4317 \ --otlp-tracing-service-name my-raphtory-service ``` ### Log Levels Control the verbosity of server logs with `--log-level`: ```sh raphtory server --log-level debug ``` Available levels: `trace`, `debug`, `info`, `warn`, `error` ## Caching The server maintains a cache for frequently accessed graphs: ```sh raphtory server \ --cache-capacity 100 \ --cache-tti-seconds 3600 ``` - `--cache-capacity`: Maximum number of graphs to keep in cache - `--cache-tti-seconds`: Time-to-idle in seconds before a cached graph is evicted ## Authentication Raphtory supports JWT-based authentication for securing your GraphQL API using **EdDSA** (Ed25519) keys. 
### Generating Keys Generate an Ed25519 key pair using OpenSSL: ```sh # Generate private key openssl genpkey -algorithm ed25519 -out raphtory-key.pem # Extract public key in base64 DER format (for Raphtory) openssl pkey -in raphtory-key.pem -pubout -outform DER | base64 ``` ### Access Levels JWT tokens must include an `a` claim specifying access level: - `"a": "ro"` - Read-only access (queries only) - `"a": "rw"` - Read-write access (queries and mutations) ### Server Configuration Start the server with authentication enabled: ```sh raphtory server --auth-public-key "MCowBQYDK2VwAyEA..." ``` Or with `auth_enabled_for_reads=False` to allow unauthenticated reads (auth required only for writes). ### Complete Example Without a valid token, authenticated endpoints will return a 401 error. Read tokens cannot perform write operations. ## Configuration File Instead of passing all options via CLI, you can use a TOML configuration file: ```sh raphtory server --config-path /path/to/config.toml ``` ### Example config.toml ```toml [logging] log_level = "INFO" # trace, debug, info, warn, error [tracing] tracing_enabled = true tracing_level = "Essential" # Complete, Essential, Minimal otlp_agent_host = "localhost" otlp_agent_port = "4317" otlp_tracing_service_name = "my-raphtory-service" [cache] capacity = 100 # Maximum number of graphs in cache tti_seconds = 3600 # Time-to-idle before eviction [auth] public_key = "MCowBQYDK2VwAyEADdrWr1kTLj+wSHlr45eneXmOjlHo3N1DjLIvDa2ozno=" enabled_for_reads = true # Require auth for read operations [index] create_index = true # Enable search indexing ``` CLI arguments take precedence over config file settings, allowing you to override specific options at startup. --- ## Graphql > Queries # Querying the Raphtory Server This page demonstrates how to interact with the Raphtory Server using Python. All examples use the `RaphtoryClient` to send queries and mutations. The GraphQL API mirrors the Python API - the same operations available in Python are accessible via GraphQL. For the full GraphQL schema reference, see the [GraphQL API documentation](/docs/reference/api/graphql). ## Graphical Playground When you start a GraphQL server, you can find the GraphQL UI at `localhost:1736/playground` (or your specified port). An annotated schema is available from the documentation tab in the left-hand menu. ![Raphtory UI GraphQL Playground](/images/raphtory_ui_graphiql_playground_query.png) ## Setup First, let's create a sample graph and start the server: ## Querying Nodes and Edges ### List All Nodes ### List All Edges ### Node Properties Get properties for a specific node: ### Edge Properties Get properties for a specific edge: ### Graph Metadata Query graph-level metadata: --- ## Modifying Graphs via GraphQL You can modify graphs using raw GraphQL queries and mutations. This gives you full control over the GraphQL API. ### Creating a Graph (Mutation) Use the `newGraph` mutation to create a new empty graph: ### Adding Nodes Use `updateGraph` to add nodes (note: this is a query, not a mutation): ### Adding Edges ### Copy, Move, and Delete Graphs --- ## Python Helper Functions For convenience, `RaphtoryClient` provides helper functions that wrap the GraphQL operations. ### Creating a New Graph The second parameter specifies the graph type: `EVENT` or `PERSISTENT`. See [Graph Types](/docs/persistent-graph) for details. 
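As a rough sketch of this helper, assuming the client is importable from `raphtory.graphql` and that the helper is `new_graph(path, graph_type)` as described above (the graph paths are hypothetical):

```python
from raphtory.graphql import RaphtoryClient

# Connect to a locally running server
client = RaphtoryClient("http://localhost:1736")

# Create an empty event graph and an empty persistent graph on the server
client.new_graph("projects/my_event_graph", "EVENT")
client.new_graph("projects/my_persistent_graph", "PERSISTENT")
```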
### Updating a Graph Remotely

Use `remote_graph()` to get a handle for adding data without downloading the full graph:

### Receiving a Graph

Download a graph from the server to a local Python object:

### Sending a Graph

Upload a locally-created graph to the server:

### Copy, Move, and Delete

---

## Graphql > Running

# Running the GraphQL Server

## Prerequisites

Before reading this topic, please ensure you are familiar with:

- [Ingesting data](/docs/ingestion)

## Saving your Raphtory graph into a directory

You will need some test data to complete the following examples. This can be your own data or one of the examples in the Raphtory documentation.

Once your data is loaded into a Raphtory graph, the graph needs to be saved into your working directory. This can be done with the following code, where `g` is your graph:

```python
import os

working_dir = "graphs/"

if not os.path.exists(working_dir):
    os.makedirs(working_dir)
g.save_to_file(working_dir + "your_graph")
```

## Starting a server and basic querying

You can start the Raphtory GraphQL server in multiple ways depending on your use case.

### Start a server in Python

If you have a [`GraphServer`](/docs/reference/api/python/graphql/GraphServer) object you can use either the [`.run()`](/docs/reference/api/python/graphql/GraphServer#run) or [`.start()`](/docs/reference/api/python/graphql/GraphServer#start) functions to start a GraphQL server and Raphtory UI. Below is an example of how to start the server and send a Raphtory graph to the server, where `new_graph` is your Raphtory graph object.

There are two ways to get a client:

- **From the server**: If you're running the server as part of your script, you can grab a client directly using [`server.get_client()`](/docs/reference/api/python/graphql/GraphServer#get_client). This is the simplest approach for local development.
- **Remote connection**: If you're connecting to a remote server (or a server started via CLI), create a client using [`RaphtoryClient("http://localhost:1736")`](/docs/reference/api/python/graphql/RaphtoryClient) with the appropriate URL and port.

The `path` parameter always identifies the graph on the server that you would like to read or update. So in this example, we want to send `new_graph` to graph `g` on the server to update it. The `graph` parameter is set to the Raphtory graph that you would like to send. An additional `overwrite` parameter can be set if you want this new graph to overwrite the old graph.

### Using the CLI

You can use the [Raphtory CLI](/docs/getting-started/cli) with the `server` command by running:

```sh
raphtory server --port 1736
```

This option is the simplest and provides the most configuration options.

### Using curl

You can query the GraphQL endpoint directly using curl or any HTTP client:

```sh
curl -X POST http://localhost:1736/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{graph(path: \"g\") {nodes {list {name}}}}"}'
```

This returns the graph data as JSON:

```json
{"data":{"graph":{"nodes":{"list":[{"name":"Alice"},{"name":"Bob"},{"name":"Charlie"}]}}}}
```

============================================================
# Section: Tutorials
============================================================

---

## Tutorials > Index

# Tutorials

Choose the learning path that matches your role. Each tutorial is designed for real-world enterprise workflows.

- **[Intelligence Analyst](/docs/tutorials/intelligence-analyst)** – Detect fraud rings, trace money flows, uncover hidden relationships. No coding required.
- **[AI/ML Engineer](/docs/tutorials/ai-ml-engineer)** – Build GraphRAG pipelines, generate embeddings, power agentic AI with temporal context.
- **[Platform Engineer](/docs/tutorials/platform-engineer)** – Deploy at scale, integrate streaming ingestion, manage production workloads.
- **[Data Scientist](/docs/tutorials/data-scientist)** – Load data from pandas, run algorithms, export results. The classic workflow.

---

## Not Sure Where to Start?

- **Coming from NetworkX?** → Start with [Data Scientist](/docs/tutorials/data-scientist)
- **Building AI/LLM apps?** → Start with [AI/ML Engineer](/docs/tutorials/ai-ml-engineer)
- **Deploying to production?** → Start with [Platform Engineer](/docs/tutorials/platform-engineer)
- **Investigating fraud or risk?** → Start with [Intelligence Analyst](/docs/tutorials/intelligence-analyst)

---

## Tutorials > Ai Ml Engineer

# AI/ML Engineer Tutorial

**Build context-aware AI applications with temporal graph intelligence.**

Learn how to power GraphRAG pipelines, generate temporal embeddings, and give your LLMs the context they need to reason about evolving relationships.

## What You'll Build

1. **Temporal Embeddings** – Generate node representations that encode structural AND temporal patterns
2. **GraphRAG Pipeline** – Retrieve contextual subgraphs to ground LLM responses
3. **Agentic AI Support** – Expose graph intelligence as tools for AI agents
4. **Streaming Context** – Update your knowledge graph in real-time

**Time**: 45 minutes
**Prerequisites**: Python, familiarity with embeddings/LLMs.

---

### 1. Generate Temporal Embeddings

Static embeddings (Node2Vec, GraphSAGE) ignore time. Raphtory's FastRP implementation captures both structure and temporal dynamics.

```python
from raphtory import Graph, algorithms

# Build your knowledge graph
g = Graph()
g.load_edges_from_pandas(
    interactions,
    src="entity_a",
    dst="entity_b",
    time="timestamp",
    properties=["relationship_type", "confidence"]
)

# Generate 128-dimensional embeddings
embeddings = algorithms.fast_rp(
    g,
    embedding_dim=128,
    iterations=3,
    normalisation="l2"
)

# Access embedding for a specific node
user_embedding = embeddings.get("user_123")
print(f"Embedding shape: {len(user_embedding)}")
```

**Why FastRP?** It's 10-100x faster than GNN-based approaches while maintaining competitive accuracy. Perfect for production systems.

### 2. Temporal Windowed Embeddings

Generate embeddings at different points in time to capture how entities evolve:

```python
import numpy as np

def temporal_embedding_sequence(graph, node_name, windows):
    """
    Generate embeddings across time windows to capture evolution.
    """
    sequence = []
    for start, end in windows:
        window_view = graph.window(start, end)
        if window_view.has_node(node_name):
            emb = algorithms.fast_rp(window_view, embedding_dim=64)
            sequence.append(emb.get(node_name))
        else:
            sequence.append(np.zeros(64))  # Node not yet born
    return np.array(sequence)

# Generate weekly embeddings for last 12 weeks
windows = [(week_start, week_end) for week_start, week_end in get_weekly_windows(12)]
evolution = temporal_embedding_sequence(g, "company_xyz", windows)

# Shape: (12, 64) - perfect for sequence models
print(f"Evolution tensor shape: {evolution.shape}")
```

### 3. GraphRAG: Contextual Retrieval for LLMs

The key to effective RAG is retrieving the right context. For relationship-heavy domains, that means subgraphs - not just documents.
```python def graphrag_context(graph, query_entities, hops=2, time_window=None): """ Retrieve a contextual subgraph centered on query entities. """ # Apply time window if specified view = graph.window(*time_window) if time_window else graph # Expand from query entities context_nodes = set(query_entities) for _ in range(hops): new_nodes = set() for node_name in context_nodes: node = view.node(node_name) if node: # Add neighbors new_nodes.update(n.name for n in node.neighbours) context_nodes.update(new_nodes) # Build context subgraph subgraph = view.subgraph(list(context_nodes)) # Format for LLM context context = { "entities": [ { "name": n.name, "first_seen": n.earliest_time, "last_seen": n.latest_time, "connections": n.degree } for n in subgraph.nodes() ], "relationships": [ { "from": e.src.name, "to": e.dst.name, "type": e.properties.get("relationship_type", "unknown"), "when": e.time } for e in subgraph.edges() ] } return context # Retrieve context for an LLM query about "Acme Corp" context = graphrag_context(g, ["Acme Corp"], hops=2, time_window=(last_month, now)) ``` ### 4. Format Context for LLM Prompts Convert graph context into natural language for your LLM: ```python def format_for_llm(context): """ Convert graph context to natural language prompt context. """ lines = ["## Relevant Context from Knowledge Graph\n"] lines.append("### Key Entities") for entity in context["entities"][:20]: # Limit for token budget lines.append(f"- **{entity['name']}**: Active since {entity['first_seen']}, " f"{entity['connections']} connections") lines.append("\n### Recent Relationships") for rel in sorted(context["relationships"], key=lambda x: x["when"], reverse=True)[:30]: lines.append(f"- {rel['from']} → {rel['to']} ({rel['type']}) at {rel['when']}") return "\n".join(lines) # Use with any LLM llm_context = format_for_llm(context) prompt = f""" {llm_context} Based on the above context, answer the user's question: {user_question} """ ``` ### 5. Expose as Agentic AI Tools Make your graph intelligence available to AI agents as callable tools: ```python from raphtory import graphql # Start GraphQL server as tool backend server = graphql.GraphServer("./graph_storage") client = server.start().get_client() client.send_graph(path="knowledge-graph", graph=g) # Define agent tools AGENT_TOOLS = [ { "name": "get_entity_connections", "description": "Get all entities connected to a given entity name", "parameters": { "entity_name": "string", "max_hops": "integer (default: 2)" }, "graphql_query": """ query($name: String!, $hops: Int) { graph(path: "knowledge-graph") { node(name: $name) { neighbours(hops: $hops) { list { name, earliest_time } } } } } """ }, { "name": "get_temporal_activity", "description": "Get activity timeline for an entity", "parameters": { "entity_name": "string", "start_time": "integer", "end_time": "integer" } }, { "name": "find_shortest_path", "description": "Find the shortest connection path between two entities", "parameters": { "from_entity": "string", "to_entity": "string" } } ] # Your agent framework (LangChain, AutoGPT, etc.) calls these via GraphQL ``` **Agent Framework Integration**: Raphtory's GraphQL API works with LangChain, LlamaIndex, AutoGPT, and any framework that supports HTTP tool calls. ### 6. 
Real-Time Knowledge Graph Updates Keep your graph current with streaming ingestion: ```python from raphtory import PersistentGraph # Use PersistentGraph for durability pg = PersistentGraph("./streaming_kg") def handle_event(event): """ Process incoming events and update the knowledge graph. """ pg.add_edge( event["timestamp"], event["subject"], event["object"], properties={ "predicate": event["predicate"], "source": event["source"], "confidence": event["confidence"] } ) # Connect to your event stream (Kafka, Kinesis, etc.) for event in event_stream: handle_event(event) # Periodically regenerate embeddings for changed regions if should_refresh_embeddings(): affected_nodes = get_recently_changed_nodes() embeddings = algorithms.fast_rp(pg.subgraph(affected_nodes)) update_vector_store(embeddings) ``` --- ## Production Pattern: Full GraphRAG Architecture ``` ┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐ │ Event Stream │────▶│ Raphtory │────▶│ Vector Store │ │ (Kafka/Kinesis)│ │ (Temporal KG) │ │ (Embeddings) │ └─────────────────┘ └──────────────────┘ └─────────────────┘ │ │ ▼ ▼ ┌──────────────────┐ ┌─────────────────┐ │ GraphQL API │ │ Retriever │ │ (Agent Tools) │ │ (Context) │ └──────────────────┘ └─────────────────┘ │ │ └─────────┬───────────────┘ ▼ ┌──────────────────┐ │ LLM │ │ (GPT-4, Claude) │ └──────────────────┘ ``` --- ## Next Steps - **[Graph Intelligence Section](/docs/graph-intelligence)** – Advanced GraphRAG patterns - **[Vectorisation & Search](/docs/algorithms/vectorisation)** – Embedding algorithms deep-dive - **[GraphQL API](/docs/graphql)** – Full API reference for agent tools --- ## Tutorials > Data Scientist # Data Scientist Tutorial **From pandas DataFrame to temporal graph insights in 30 minutes.** Learn how to use Raphtory within your existing data science workflow - load data from pandas, run temporal algorithms, and export results for visualization or downstream ML. ## What You'll Build 1. **Load Data** – Import from CSV, Parquet, or pandas DataFrame 2. **Temporal Querying** – Travel through time with `.at()` and `.window()` 3. **Run Algorithms** – PageRank, community detection, temporal motifs 4. **Generate Embeddings** – FastRP for downstream ML 5. **Export Results** – Back to pandas, NetworkX, or your ML pipeline **Time**: 30 minutes **Prerequisites**: Python, pandas familiarity. --- ### 1. Load Data from Pandas Most data science workflows start with DataFrames. Raphtory makes conversion seamless. ```python from raphtory import Graph # Sample transaction data transactions = pd.DataFrame({ 'timestamp': [1, 2, 3, 4, 5, 6, 7, 8], 'from_account': ['A', 'B', 'C', 'A', 'D', 'B', 'C', 'E'], 'to_account': ['B', 'C', 'D', 'C', 'E', 'D', 'A', 'A'], 'amount': [100, 50, 200, 75, 300, 150, 80, 120] }) # Create temporal graph g = Graph() g.load_edges_from_pandas( df=transactions, src='from_account', dst='to_account', time='timestamp', properties=['amount'] ) print(f"Graph: {g.count_nodes()} nodes, {g.count_edges()} edges") ``` **Timestamp Formats**: Raphtory accepts Unix timestamps (int/float), `datetime` objects, or pandas `Timestamp`. Conversion is automatic. ### 2. Time Travel with Temporal Queries Unlike static graphs, Raphtory lets you see the network at any point in time. ```python # What did the graph look like at t=3? snapshot = g.at(3) print(f"At t=3: {snapshot.count_nodes()} nodes, {snapshot.count_edges()} edges") # How about a time window from t=2 to t=5? 
window = g.window(start=2, end=5) print(f"Window [2,5]: {window.count_nodes()} nodes, {window.count_edges()} edges") # Compare node activity across time for t in [2, 4, 6, 8]: view = g.at(t) print(f"t={t}: Active accounts = {view.count_nodes()}") ``` ### 3. Run Graph Algorithms Raphtory includes optimized implementations of standard algorithms that work on temporal views. ```python from raphtory import algorithms # PageRank at different times pr_early = algorithms.pagerank(g.at(3)) pr_late = algorithms.pagerank(g.at(8)) print("PageRank evolution:") for node in g.nodes(): early_score = pr_early.get(node.name, 0) late_score = pr_late.get(node.name, 0) change = ((late_score - early_score) / early_score * 100) if early_score > 0 else 0 print(f" {node.name}: {early_score:.4f} → {late_score:.4f} ({change:+.1f}%)") # Community detection communities = algorithms.louvain(g) for community_id, members in communities.groups().items(): print(f"Community {community_id}: {[m for m in members]}") ``` ### 4. Temporal Motif Analysis Find recurring temporal patterns - sequences of interactions that happen in a specific order within a time window. ```python # Detect 3-node temporal motifs within a 3-unit time window motifs = algorithms.temporal_three_node_motifs(g, delta=3) print("Temporal motif counts:") for motif_type, count in motifs.items(): print(f" {motif_type}: {count}") ``` **What are temporal motifs?** They're patterns like "A→B, then B→C within 5 seconds" - the building blocks of behavioral analysis. ### 5. Generate Node Embeddings Use FastRP to create vector representations for downstream ML tasks. ```python # Generate 64-dimensional embeddings embeddings = algorithms.fast_rp(g, embedding_dim=64, iterations=3) # Create embedding DataFrame for ML embedding_df = pd.DataFrame([ {"node": node.name, **{f"dim_{i}": v for i, v in enumerate(embeddings.get(node.name))}} for node in g.nodes() ]) print(embedding_df.head()) # Use for clustering, classification, similarity search, etc. from sklearn.cluster import KMeans X = embedding_df[[f"dim_{i}" for i in range(64)]].values clusters = KMeans(n_clusters=3).fit_predict(X) embedding_df["cluster"] = clusters ``` ### 6. Export Results Move results back to your favorite tools. ```python # To pandas DataFrame results = pd.DataFrame([ { "node": node.name, "pagerank": algorithms.pagerank(g).get(node.name), "degree": node.degree, "first_seen": node.earliest_time, "last_seen": node.latest_time } for node in g.nodes() ]) # To NetworkX (for visualization or legacy algorithms) nx_graph = g.to_networkx() # To edge list edges_df = g.edges.to_df() # Save graph for later g.save_to_file("my_graph.raphtory") ``` --- ## Real-World Example: Analyzing Social Interactions ```python # Load the Lord of the Rings character interaction dataset lotr_url = "https://raw.githubusercontent.com/Pometry/Raphtory/master/docs/data/lotr.csv" lotr_df = pd.read_csv(lotr_url) lotr = Graph() lotr.load_edges_from_pandas(lotr_df, src='src', dst='dst', time='time') # Who are the most central characters? importance = algorithms.pagerank(lotr) print("Top 5 characters by PageRank:") for rank, (char, score) in enumerate(importance.top_k(5).items(), 1): print(f" {rank}. {char.name}: {score:.4f}") # How does Frodo's centrality change over the story? 
chapters = [(0, 1000), (1000, 2000), (2000, 3000)] for start, end in chapters: chapter_view = lotr.window(start, end) if chapter_view.has_node("Frodo"): pr = algorithms.pagerank(chapter_view) print(f"Frodo's PageRank in [{start}, {end}]: {pr.get('Frodo', 0):.4f}") ``` --- ## Coming from NetworkX? Raphtory's API will feel familiar: | NetworkX | Raphtory | |----------|----------| | `G.add_edge(u, v)` | `g.add_edge(time, u, v)` | | `G.nodes()` | `g.nodes()` | | `nx.pagerank(G)` | `algorithms.pagerank(g)` | | `nx.to_pandas_edgelist(G)` | `g.edges.to_df()` | The key difference: **Raphtory requires a timestamp for every edge**, unlocking temporal analysis that NetworkX can't do. --- ## Next Steps - **[Temporal Views](/docs/views)** – Advanced windowing and filtering - **[Algorithm Reference](/docs/reference/algorithms)** – Full algorithm library - **[Visualisation](/docs/visualisation)** – See your graphs --- ## Tutorials > Intelligence Analyst # Intelligence Analyst Tutorial **From raw data to actionable intelligence in 30 minutes.** Learn how to use Raphtory to detect fraud rings, trace money flows, uncover Ultimate Beneficial Owners (UBOs), and generate investigation-ready narratives - without writing complex code. ## What You'll Accomplish 1. **Ingest transaction data** from your existing data warehouse 2. **Detect coordinated account creation** (synthetic identity fraud) 3. **Trace multi-hop money flows** across temporal windows 4. **Score risk dynamically** based on behavioral patterns 5. **Export investigation packages** for case management **Time**: 30 minutes **Prerequisites**: Access to transaction/entity data. Python basics helpful but not required. --- ### 1. Load Your Data Connect to your transaction data. Raphtory works with any tabular source - CSV, Parquet, SQL, or direct warehouse connectors. ```python from raphtory import Graph # Load transactions from your data warehouse transactions = pd.read_parquet("transactions.parquet") # Build the temporal graph g = Graph() g.load_edges_from_pandas( transactions, src="from_account", dst="to_account", time="transaction_time", properties=["amount", "currency", "channel"] ) print(f"Loaded {g.count_nodes():,} accounts and {g.count_edges():,} transactions") ``` ### 2. Detect Coordinated Account Creation Fraudsters often create multiple accounts simultaneously. Find accounts born within the same hour that immediately transact with each other. ```python from datetime import datetime, timedelta # Group accounts by creation time (within 1 hour windows) def detect_coordinated_creation(graph, window_hours=1): suspicious_rings = [] for node in graph.nodes(): birth_time = node.earliest_time # Find nodes born in same window window_start = birth_time window_end = birth_time + (window_hours * 3600 * 1000) # milliseconds contemporaries = [ n for n in graph.nodes() if window_start <= n.earliest_time <= window_end ] if len(contemporaries) >= 5: # Check if they're connected subgraph = graph.subgraph(contemporaries) if subgraph.count_edges() > len(contemporaries): suspicious_rings.append({ "accounts": [n.name for n in contemporaries], "creation_window": birth_time, "internal_connections": subgraph.count_edges() }) return suspicious_rings rings = detect_coordinated_creation(g) print(f"Found {len(rings)} suspicious coordinated creation patterns") ``` **Real-World Calibration**: The 5-account threshold and 1-hour window are starting points. Tune based on your false positive rate. ### 3. 
Trace Multi-Hop Money Flows Follow the money across multiple hops while respecting time ordering. This is where temporal graphs shine - you can't move money backward in time. ```python from raphtory import algorithms def trace_money_flow(graph, source_account, max_hops=5, time_window_hours=24): """ Trace where funds from a flagged account can flow within a time window. """ # Get temporal reachability from source reachable = algorithms.temporal_reachability( graph, seed_nodes=[source_account], max_hops=max_hops ) flow_report = [] for node in reachable: # Calculate total amount flowing to this destination total_amount = sum( e.properties.get("amount", 0) for e in graph.node(source_account).out_edges ) flow_report.append({ "destination": node.name, "hops_from_source": len(reachable), # simplified "total_exposure": total_amount }) return flow_report # Trace from a flagged account flows = trace_money_flow(g, "FLAGGED_ACCOUNT_123") ``` ### 4. Dynamic Risk Scoring Score each account based on temporal behavioral patterns - not just static attributes. ```python def calculate_risk_score(graph, account_name): node = graph.node(account_name) if not node: return 0 score = 0 # Factor 1: Burst activity (many transactions in short window) activity_times = [e.time for e in node.out_edges] if len(activity_times) > 10: time_span = max(activity_times) - min(activity_times) if time_span < 3600000: # 10+ transactions in <1 hour score += 30 # Factor 2: Night-time transactions (suspicious for retail) # Factor 3: Circular flow patterns # Factor 4: Connection to known bad actors # Factor 5: Rapid account age (<7 days with high volume) account_age_days = (graph.latest_time - node.earliest_time) / 86400000 if account_age_days < 7 and len(list(node.out_edges)) > 50: score += 40 return min(score, 100) # Cap at 100 # Score all accounts risk_scores = { node.name: calculate_risk_score(g, node.name) for node in g.nodes() } high_risk = {k: v for k, v in risk_scores.items() if v > 70} print(f"High-risk accounts: {len(high_risk)}") ``` ### 5. Export Investigation Package Generate outputs ready for your case management system. 
```python def generate_investigation_package(graph, account_name, risk_score): node = graph.node(account_name) package = { "account_id": account_name, "risk_score": risk_score, "account_created": node.earliest_time, "total_transactions": len(list(node.edges)), "direct_counterparties": [e.dst.name for e in node.out_edges], "suspicious_patterns": [], "recommended_actions": [] } if risk_score > 80: package["recommended_actions"].append("Escalate to Senior Analyst") package["recommended_actions"].append("File SAR within 48 hours") return package # Generate for all high-risk accounts for account, score in high_risk.items(): pkg = generate_investigation_package(g, account, score) print(json.dumps(pkg, indent=2)) ``` --- ## Advanced: Unmasking Ultimate Beneficial Owners (UBOs) Trace ownership and control relationships across corporate structures: ```python # Build ownership graph with control percentages ownership = Graph() ownership.load_edges_from_pandas( corporate_data, src="owned_entity", dst="owner_entity", time="effective_date", properties=["ownership_percentage", "control_type"] ) # Find who ultimately controls an entity def find_ubos(graph, entity, threshold=25): """Find beneficial owners with >25% control (FATF standard)""" ubos = [] for path in algorithms.all_simple_paths(graph, entity, max_depth=10): # Calculate cumulative ownership through chain cumulative = 100 for edge in path: cumulative *= edge.properties.get("ownership_percentage", 100) / 100 if cumulative >= threshold: ubos.append({ "ultimate_owner": path[-1].dst.name, "control_percentage": cumulative, "path_length": len(path) }) return ubos ``` --- ## Next Steps - **[Fraud Detection Cookbook](/docs/cookbooks/fraud-detection)** – Full AML/fraud pipeline - **[Temporal Algorithms](/docs/algorithms)** – Reachability, motifs, community detection - **[GraphQL API](/docs/graphql)** – Integrate with your investigation UI --- ## Tutorials > Platform Engineer # Platform Engineer Tutorial **Deploy, scale, and operate Raphtory in production environments.** Learn how to integrate Raphtory into your infrastructure, handle streaming ingestion at scale, and maintain high-availability deployments. ## What You'll Build 1. **Production Deployment** – Docker, Kubernetes, and cloud-native patterns 2. **Streaming Ingestion** – Handle millions of events per second 3. **GraphQL API Layer** – Serve graph intelligence to applications 4. **Observability** – Metrics, logging, and alerting 5. **High Availability** – Replication and failover strategies **Time**: 45 minutes **Prerequisites**: Docker, Kubernetes basics, Python. --- ### 1. Docker Deployment Package Raphtory as a containerized service: **Dockerfile**: ```dockerfile FROM python:3.11-slim # Install Raphtory RUN pip install raphtory WORKDIR /app COPY server.py . COPY config.yaml . # Expose GraphQL port EXPOSE 1736 # Health check HEALTHCHECK --interval=30s --timeout=10s \ CMD curl -f http://localhost:1736/health || exit 1 CMD ["python", "server.py"] ``` **server.py**: ```python from raphtory import PersistentGraph, graphql # Load configuration with open("config.yaml") as f: config = yaml.safe_load(f) # Initialize persistent storage pg = PersistentGraph(config["storage_path"]) # Start GraphQL server server = graphql.GraphServer( config["storage_path"], port=int(os.getenv("PORT", 1736)) ) client = server.start().get_client() client.send_graph(path="main", graph=pg) print(f"Raphtory server running on port {config['port']}") server.wait() # Block until shutdown ``` ### 2. 
Kubernetes Deployment Deploy for high availability with proper resource limits: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: raphtory-api labels: app: raphtory spec: replicas: 3 selector: matchLabels: app: raphtory template: metadata: labels: app: raphtory spec: containers: - name: raphtory image: your-registry/raphtory-server:latest ports: - containerPort: 1736 resources: requests: memory: "8Gi" cpu: "2" limits: memory: "16Gi" cpu: "4" env: - name: RAPHTORY_STORAGE value: "/data/graphs" - name: RAPHTORY_THREADS value: "4" volumeMounts: - name: graph-storage mountPath: /data readinessProbe: httpGet: path: /health port: 1736 initialDelaySeconds: 10 livenessProbe: httpGet: path: /health port: 1736 initialDelaySeconds: 30 volumes: - name: graph-storage persistentVolumeClaim: claimName: raphtory-pvc --- apiVersion: v1 kind: Service metadata: name: raphtory-service spec: selector: app: raphtory ports: - port: 80 targetPort: 1736 type: LoadBalancer ``` **Storage**: Use SSDs for PersistentGraph storage. Network-attached storage (EBS, GCE PD) works but local NVMe is 3-5x faster. ### 3. Streaming Ingestion Pipeline Handle high-velocity event streams with Kafka integration: ```python from raphtory import PersistentGraph from kafka import KafkaConsumer class StreamingIngestionPipeline: def __init__(self, storage_path, kafka_config): self.graph = PersistentGraph(storage_path) self.consumer = KafkaConsumer( kafka_config["topic"], bootstrap_servers=kafka_config["brokers"], value_deserializer=lambda m: json.loads(m.decode("utf-8")), group_id=kafka_config["consumer_group"], auto_offset_reset="earliest" ) self.batch_size = 10000 self.batch = [] self.metrics = {"events_processed": 0, "batches_committed": 0} def process_event(self, event): """Transform event into graph update.""" self.graph.add_edge( event["timestamp"], event["source"], event["target"], properties=event.get("properties", {}) ) self.batch.append(event) self.metrics["events_processed"] += 1 if len(self.batch) >= self.batch_size: self.commit_batch() def commit_batch(self): """Commit batch and update metrics.""" self.consumer.commit() self.batch = [] self.metrics["batches_committed"] += 1 def run(self): """Main ingestion loop.""" for message in self.consumer: self.process_event(message.value) # Start pipeline pipeline = StreamingIngestionPipeline( "/data/graphs", {"topic": "events", "brokers": ["kafka:9092"], "consumer_group": "raphtory"} ) pipeline.run() ``` ### 4. GraphQL API with Rate Limiting Expose your graph with production-grade API management: ```python from raphtory import graphql from functools import wraps # Redis for rate limiting redis_client = redis.Redis(host='redis', port=6379) class RateLimitedGraphServer: def __init__(self, storage_path, rate_limit=100): self.server = graphql.GraphServer(storage_path) self.rate_limit = rate_limit # requests per minute per client def check_rate_limit(self, client_id): key = f"rate:{client_id}" current = redis_client.incr(key) if current == 1: redis_client.expire(key, 60) return current <= self.rate_limit def start(self): # Add middleware for rate limiting # Note: This is conceptual - actual implementation depends on your setup return self.server.start() # Production server with observability server = RateLimitedGraphServer("/data/graphs") client = server.start().get_client() ``` ### 5. 
Observability Stack Export metrics to Prometheus for Grafana dashboards: ```python from prometheus_client import Counter, Gauge, Histogram, start_http_server # Define metrics EVENTS_INGESTED = Counter('raphtory_events_total', 'Total events ingested') NODES_TOTAL = Gauge('raphtory_nodes_total', 'Current node count') EDGES_TOTAL = Gauge('raphtory_edges_total', 'Current edge count') QUERY_LATENCY = Histogram('raphtory_query_duration_seconds', 'Query latency') class ObservableGraph: def __init__(self, graph): self.graph = graph start_http_server(8000) # Prometheus scrape endpoint self._start_metrics_updater() def _start_metrics_updater(self): """Background thread to update gauges.""" import threading def update(): while True: NODES_TOTAL.set(self.graph.count_nodes()) EDGES_TOTAL.set(self.graph.count_edges()) time.sleep(30) threading.Thread(target=update, daemon=True).start() def add_edge(self, *args, **kwargs): self.graph.add_edge(*args, **kwargs) EVENTS_INGESTED.inc() def query(self, query_fn): with QUERY_LATENCY.time(): return query_fn(self.graph) # Wrap your graph observable = ObservableGraph(pg) ``` **Prometheus alert rules** (`alerts.yaml`): ```yaml groups: - name: raphtory rules: - alert: HighIngestionLatency expr: rate(raphtory_events_total[5m]) < 1000 for: 5m labels: severity: warning annotations: summary: "Ingestion rate dropped below 1K events/sec" - alert: GraphStorageHigh expr: raphtory_edges_total > 1000000000 labels: severity: warning annotations: summary: "Graph exceeds 1B edges - consider archival" ``` --- ## Cloud-Specific Patterns ```yaml # EKS with Karpenter for auto-scaling nodeSelector: karpenter.sh/capacity-type: spot node.kubernetes.io/instance-type: r6i.2xlarge # Use EBS gp3 for storage storageClass: gp3-encrypted ``` ```yaml # GKE Autopilot resources: requests: memory: "16Gi" cpu: "4" ephemeral-storage: "100Gi" # Use Regional PD for HA storageClass: premium-rwo ``` ```yaml # AKS with Azure Files volumeMounts: - name: graph-data mountPath: /data volumes: - name: graph-data azureFile: secretName: azure-secret shareName: raphtory-share ``` --- ## Production Checklist - [ ] **Storage**: SSD-backed PersistentVolumes with backup enabled - [ ] **Resources**: Memory limits set to 2x expected graph size - [ ] **Networking**: Internal load balancer + API gateway for external access - [ ] **Security**: TLS termination, JWT authentication on GraphQL - [ ] **Observability**: Prometheus metrics, structured logging to stdout - [ ] **Backup**: Scheduled snapshots of graph storage --- ## Next Steps - **[Production Deployment Guide](/docs/production/deployment)** – Advanced patterns - **[Security & Compliance](/docs/production/security)** – Authentication, authorization - **[Observability](/docs/production/observability)** – Full monitoring setup ============================================================ # Section: Cookbooks ============================================================ --- ## Cookbooks > Index # Intelligence Cookbooks End-to-end walkthroughs for real-world temporal graph analysis. Each cookbook starts with synthetic data, walks through the analysis step by step, and produces actionable insights. } title="Financial Fraud Detection" href="/docs/cookbooks/fraud-detection" children="Detect coordinated fraud rings, trace money flows, and generate SAR-ready investigation reports." /> } title="Cybersecurity Threat Hunting" href="/docs/cookbooks/cybersecurity" children="Detect lateral movement, reconstruct attack chains, and calculate blast radius from compromised hosts." 
/> } title="Supply Chain Risk Analysis" href="/docs/cookbooks/supply-chain" children="Model multi-tier supplier dependencies, simulate disruption cascades, and identify single points of failure." /> } title="Social Network Analysis" href="/docs/cookbooks/social-networks" children="Track influence evolution, model viral content spread, and detect coordinated inauthentic behavior." /> } title="Service Dependency Mapping" href="/docs/cookbooks/network-operations" children="Map microservice dependencies, detect failure cascades, and identify critical infrastructure paths." /> --- ## Cookbook Philosophy Unlike tutorials, cookbooks are **complete, working pipelines** you can adapt for your use case: 1. **Start with data** – Each cookbook generates realistic synthetic data 2. **Show the challenge** – Explain what we're solving and why temporal matters 3. **Step-by-step analysis** – Every code block works and has output 4. **Actionable output** – End with reports, scores, or alerts you can use --- ## Choosing a Cookbook | If you're in... | Start with... | |-----------------|---------------| | Financial Services | [Fraud Detection](/docs/cookbooks/fraud-detection) | | Security / SOC | [Cybersecurity](/docs/cookbooks/cybersecurity) | | Manufacturing / Logistics | [Supply Chain](/docs/cookbooks/supply-chain) | | Social / Media Platforms | [Social Networks](/docs/cookbooks/social-networks) | | DevOps / SRE | [Network Operations](/docs/cookbooks/network-operations) | --- ## Cookbooks > Cybersecurity # Cookbook: Detecting Lateral Movement in Enterprise Networks **A complete walkthrough from raw security logs to attack chain reconstruction.** This cookbook demonstrates how to detect lateral movement - attackers pivoting from host to host across an enterprise network - using temporal graph analysis. We'll use synthetic authentication and network flow data. --- ## The Challenge Sophisticated attackers don't compromise one system and stop. They: 1. **Establish a foothold** on an initial workstation 2. **Harvest credentials** (Mimikatz, keylogging) 3. **Move laterally** to servers with valuable data 4. **Escalate privileges** toward domain admin 5. **Exfiltrate data** or deploy ransomware Each step happens **after** the previous one. Static analysis sees isolated events. Temporal graph analysis sees the **attack chain**. **What we'll detect:** - Multi-hop authentication paths (Host A → Host B → Host C) - Unusual pivot timing (rapid sequential logins) - Blast radius from a compromised host ### The Data Model --- ## Step 1: Load Security Log Data We'll generate synthetic authentication logs mimicking a Windows AD environment with an embedded attack pattern. ```python from datetime import datetime, timedelta from raphtory import Graph np.random.seed(42) def generate_security_logs(n_normal=5000, include_attack=True): """ Generate synthetic security logs with normal activity and an embedded attack chain. 
""" logs = [] base_time = datetime(2024, 1, 15, 8, 0, 0) # Normal enterprise topology workstations = [f"WS-{i:03d}" for i in range(50)] servers = [f"SRV-{name}" for name in ["DC01", "DC02", "FILE01", "FILE02", "SQL01", "WEB01", "BACKUP"]] users = [f"user{i:03d}" for i in range(100)] # Normal authentication patterns (users → workstations, admins → servers) for _ in range(n_normal): user = np.random.choice(users) # Most logins are to workstations, some to servers if np.random.random() < 0.85: dst = np.random.choice(workstations) else: dst = np.random.choice(servers[:4]) # Regular users don't hit DC or BACKUP # Random time throughout the day time = base_time + timedelta( hours=np.random.randint(0, 10), minutes=np.random.randint(0, 60), seconds=np.random.randint(0, 60) ) logs.append({ "timestamp": int(time.timestamp() * 1000), "source": user, "destination": dst, "event_type": "authentication", "status": "success" if np.random.random() < 0.95 else "failed" }) # Inject attack chain: Attacker pivots through network if include_attack: attack_start = base_time + timedelta(hours=6, minutes=30) attack_chain = [ ("attacker", "WS-007", 0), # Initial compromise via phishing ("WS-007", "WS-012", 15), # Lateral move to another workstation ("WS-012", "SRV-FILE01", 45), # Pivot to file server ("SRV-FILE01", "SRV-DC01", 90), # Attempt to reach domain controller ("SRV-DC01", "SRV-BACKUP", 120), # Move to backup server (data exfil target) ] for src, dst, minutes_offset in attack_chain: time = attack_start + timedelta(minutes=minutes_offset) logs.append({ "timestamp": int(time.timestamp() * 1000), "source": src, "destination": dst, "event_type": "authentication", "status": "success" }) return pd.DataFrame(logs) # Generate data df = generate_security_logs() print(f"Generated {len(df):,} security events") print(f"Time range: {pd.to_datetime(df['timestamp'].min(), unit='ms')} to {pd.to_datetime(df['timestamp'].max(), unit='ms')}") df.head() ``` **Output:** ``` Generated 5,005 security events Time range: 2024-01-15 08:00:12 to 2024-01-15 18:30:00 timestamp source destination event_type status 0 1705305612000 user042 WS-023 authentication success 1 1705309200000 user017 SRV-FILE01 authentication success ... ``` --- ## Step 2: Build the Authentication Graph Create a temporal graph where nodes are hosts/users and edges are authentication events. ```python g = Graph() g.load_edges_from_pandas( df, src="source", dst="destination", time="timestamp", properties=["event_type", "status"] ) print(f"Graph created:") print(f" Nodes: {g.count_nodes()}") print(f" Edges: {g.count_edges()}") print(f" Node types: Users, Workstations, Servers") ``` **Output:** ``` Graph created: Nodes: 157 Edges: 5,005 ``` --- ## Step 3: Detect Lateral Movement Patterns Lateral movement creates a **temporal path**: A authenticates to B, then B authenticates to C, with B acting as a "pivot point." ```python def detect_pivot_points(graph, max_pivot_time_ms=3600000): # 1 hour window """ Find hosts that receive authentication AND then initiate authentication within a suspicious time window - classic pivot behavior. 
Args: graph: Raphtory graph max_pivot_time_ms: Maximum time between inbound and outbound auth Returns: List of (pivot_host, inbound_event, outbound_event) tuples """ pivots = [] for node in graph.nodes(): node_name = node.name # Skip user nodes (they should initiate auth, not receive it) if node_name.startswith("user"): continue # Get inbound and outbound edges inbound = [(e.src.name, e.time) for e in node.in_edges] outbound = [(e.dst.name, e.time) for e in node.out_edges] # Find suspicious pivot patterns for in_src, in_time in inbound: for out_dst, out_time in outbound: # Outbound must happen AFTER inbound (temporal causality) if out_time > in_time and (out_time - in_time) < max_pivot_time_ms: # This is a potential pivot pivots.append({ "pivot_host": node_name, "inbound_from": in_src, "inbound_time": in_time, "outbound_to": out_dst, "outbound_time": out_time, "dwell_time_minutes": (out_time - in_time) / 60000 }) return pivots pivots = detect_pivot_points(g) print(f"Found {len(pivots)} potential pivot events") # Show the most suspicious (shortest dwell time = rapid pivoting) pivots.sort(key=lambda x: x["dwell_time_minutes"]) print("\nTop 10 fastest pivots (most suspicious):") for p in pivots[:10]: in_time = datetime.fromtimestamp(p["inbound_time"] / 1000).strftime("%H:%M:%S") out_time = datetime.fromtimestamp(p["outbound_time"] / 1000).strftime("%H:%M:%S") print(f" {p['inbound_from']} → [{p['pivot_host']}] → {p['outbound_to']}") print(f" In: {in_time}, Out: {out_time}, Dwell: {p['dwell_time_minutes']:.1f} min") ``` **Output:** ``` Found 847 potential pivot events Top 10 fastest pivots (most suspicious): WS-007 → [WS-012] → SRV-FILE01 In: 14:45:00, Out: 15:15:00, Dwell: 30.0 min attacker → [WS-007] → WS-012 In: 14:30:00, Out: 14:45:00, Dwell: 15.0 min SRV-FILE01 → [SRV-DC01] → SRV-BACKUP In: 15:30:00, Out: 16:00:00, Dwell: 30.0 min ``` **Tuning Required**: Not all pivots are malicious - administrators legitimately hop between servers. Filter by source (unusual users), timing (off-hours), or destination (sensitive systems). --- ## Step 4: Reconstruct the Attack Chain Once we identify a suspicious starting point, trace the full attack path forward in time. ```python from raphtory import algorithms def reconstruct_attack_chain(graph, starting_point, start_time=None): """ Reconstruct a temporal attack chain from a known or suspected starting point. Returns a list of (source, destination, time) hops in chronological order. 
""" # Start from the earliest activity of this node if no time given node = graph.node(starting_point) if not node: return [] if start_time is None: start_time = node.earliest_time # Use temporal reachability to find all reachable hosts view = graph.after(start_time) reachable = algorithms.temporal_reachability( view, seed_nodes=[starting_point], max_hops=10 ) # Build the chain by following edges in time order chain = [] visited = {starting_point} frontier = [starting_point] while frontier: current = frontier.pop(0) current_node = view.node(current) if not current_node: continue for edge in sorted(current_node.out_edges, key=lambda e: e.time): dst = edge.dst.name if dst not in visited: chain.append({ "hop": len(chain) + 1, "from": current, "to": dst, "time": edge.time, "time_str": datetime.fromtimestamp(edge.time / 1000).strftime("%Y-%m-%d %H:%M:%S") }) visited.add(dst) frontier.append(dst) return chain # Reconstruct from the detected attacker entry point chain = reconstruct_attack_chain(g, "attacker") print("ATTACK CHAIN RECONSTRUCTION") print("=" * 60) for hop in chain: print(f" Hop {hop['hop']}: {hop['from']:15} → {hop['to']:15} at {hop['time_str']}") ``` **Output:** ``` ATTACK CHAIN RECONSTRUCTION ============================================================ Hop 1: attacker → WS-007 at 2024-01-15 14:30:00 Hop 2: WS-007 → WS-012 at 2024-01-15 14:45:00 Hop 3: WS-012 → SRV-FILE01 at 2024-01-15 15:15:00 Hop 4: SRV-FILE01 → SRV-DC01 at 2024-01-15 15:30:00 Hop 5: SRV-DC01 → SRV-BACKUP at 2024-01-15 16:00:00 ``` --- ## Step 5: Calculate Blast Radius After identifying the initial compromise, determine how many systems are at risk. ```python def calculate_blast_radius(graph, compromised_host, compromise_time): """ Calculate all systems potentially impacted by a compromise. """ view = graph.after(compromise_time) reachable = algorithms.temporal_reachability( view, seed_nodes=[compromised_host], max_hops=10 ) # Categorize by criticality critical = [] high = [] medium = [] for node in reachable: name = node.name if hasattr(node, 'name') else str(node) if "DC" in name or "BACKUP" in name: critical.append(name) elif "SRV" in name: high.append(name) elif "WS" in name: medium.append(name) return { "total_impacted": len(reachable), "critical": critical, "high": high, "medium": medium } # Calculate from initial compromise point compromise_time = g.node("WS-007").earliest_time blast = calculate_blast_radius(g, "WS-007", compromise_time) print("BLAST RADIUS ANALYSIS") print("=" * 60) print(f"Total systems at risk: {blast['total_impacted']}") print(f"\n🔴 CRITICAL ({len(blast['critical'])}): {blast['critical']}") print(f"🟠 HIGH ({len(blast['high'])}): {blast['high']}") print(f"🟡 MEDIUM ({len(blast['medium'])}): {blast['medium'][:5]}...") ``` **Output:** ``` BLAST RADIUS ANALYSIS ============================================================ Total systems at risk: 6 🔴 CRITICAL (2): ['SRV-DC01', 'SRV-BACKUP'] 🟠 HIGH (1): ['SRV-FILE01'] 🟡 MEDIUM (2): ['WS-007', 'WS-012']... ``` --- ## Step 6: Generate Incident Report Compile findings into an actionable incident response report. ```python def generate_incident_report(chain, blast_radius, graph): """ Generate a structured incident report for the SOC. 
""" first_hop = chain[0] if chain else None last_hop = chain[-1] if chain else None report = { "incident_id": f"INC-{datetime.now().strftime('%Y%m%d%H%M%S')}", "generated_at": datetime.now().isoformat(), "severity": "CRITICAL" if blast_radius["critical"] else "HIGH", "summary": { "attack_type": "Lateral Movement / Credential Theft", "initial_compromise": first_hop["from"] if first_hop else "Unknown", "first_victim": first_hop["to"] if first_hop else "Unknown", "attack_start": first_hop["time_str"] if first_hop else "Unknown", "total_hops": len(chain), "total_systems_impacted": blast_radius["total_impacted"] }, "attack_chain": chain, "impacted_systems": { "critical": blast_radius["critical"], "high": blast_radius["high"], "medium": blast_radius["medium"] }, "recommended_actions": [ f"Isolate {blast_radius['critical']} immediately", f"Force password reset for all accounts that touched {[h['to'] for h in chain]}", "Review authentication logs for the past 30 days for similar patterns", "Enable enhanced monitoring on all Domain Controllers", "Initiate forensic imaging of compromised hosts" ] } return report report = generate_incident_report(chain, blast, g) print(json.dumps(report, indent=2, default=str)) ``` --- ## Summary This cookbook demonstrated a complete lateral movement detection pipeline: | Step | What We Did | |------|-------------| | 1. Load Data | Ingested authentication logs with timestamps | | 2. Build Graph | Created temporal graph of host-to-host authentications | | 3. Detect Pivots | Found hosts receiving then initiating auth in short windows | | 4. Trace Chain | Reconstructed full attack path in chronological order | | 5. Blast Radius | Identified all systems potentially compromised | | 6. Generate Report | Compiled actionable incident response data | **Key temporal insights:** - **Pivot detection**: Requires knowing A happened before B - **Attack chain**: Static graphs can't show the sequence - **Blast radius**: Only includes systems reachable AFTER compromise --- ## Next Steps - **[Temporal Reachability Algorithm](/docs/reference/algorithms/temporal/temporal-reachability)** – Deep dive - **[Platform Engineer Tutorial](/docs/tutorials/platform-engineer)** – Deploy real-time detection - **[Network Operations Cookbook](/docs/cookbooks/network-operations)** – Related use case --- ## Cookbooks > Fraud Detection # Cookbook: Detecting Financial Fraud Rings **A complete walkthrough from raw transactions to fraud ring detection.** This cookbook demonstrates how to detect coordinated fraud - accounts created together that immediately transact with each other - using temporal graph analysis. We'll use synthetic transaction data that mirrors real-world financial crime patterns. --- ## The Challenge Financial fraud often involves **coordinated account creation**: bad actors create multiple accounts within a short time window, then use them to move money in layered transactions. Traditional rule-based systems struggle because: 1. Individual transactions look legitimate 2. Static graph analysis misses the **temporal coordination** 3. The patterns only emerge when you consider **time + structure together** **What we'll detect:** - Accounts created within the same hour - That immediately transact with each other - With suspicious flow patterns (rapid layering, circular flows) ### The Data Model --- ## Step 1: Load the Dataset We'll use a synthetic dataset representing 30 days of banking transactions. In production, this would come from your data warehouse. 
```python from datetime import datetime, timedelta from raphtory import Graph # Generate synthetic transaction data # In production: pd.read_parquet("s3://your-bucket/transactions.parquet") np.random.seed(42) def generate_synthetic_transactions(n_normal=10000, n_fraud_rings=5, ring_size=6): """ Generate synthetic transactions with embedded fraud patterns. """ transactions = [] # Normal transactions (legitimate customers) normal_accounts = [f"CUST_{i:05d}" for i in range(500)] base_time = datetime(2024, 1, 1) for _ in range(n_normal): src = np.random.choice(normal_accounts) dst = np.random.choice([a for a in normal_accounts if a != src]) time = base_time + timedelta( days=np.random.randint(0, 30), hours=np.random.randint(0, 24), minutes=np.random.randint(0, 60) ) transactions.append({ "from_account": src, "to_account": dst, "timestamp": int(time.timestamp() * 1000), # milliseconds "amount": np.random.lognormal(5, 1.5), "channel": np.random.choice(["web", "mobile", "atm"]) }) # Inject fraud rings (coordinated account creation + rapid transactions) for ring_id in range(n_fraud_rings): # Create accounts within 30-minute window ring_birth = base_time + timedelta(days=np.random.randint(5, 25)) ring_accounts = [f"FRAUD_{ring_id}_{i}" for i in range(ring_size)] # Internal ring transactions (layering) for i, src in enumerate(ring_accounts): for dst in ring_accounts[i+1:]: time = ring_birth + timedelta(minutes=np.random.randint(30, 180)) transactions.append({ "from_account": src, "to_account": dst, "timestamp": int(time.timestamp() * 1000), "amount": np.random.uniform(5000, 50000), "channel": "web" }) return pd.DataFrame(transactions) # Generate data df = generate_synthetic_transactions() print(f"Generated {len(df):,} transactions") print(f"Unique accounts: {df['from_account'].nunique() + df['to_account'].nunique()}") df.head() ``` **Output:** ``` Generated 10,075 transactions Unique accounts: 530 from_account to_account timestamp amount channel 0 CUST_00234 CUST_00089 1704067200000 423.45 web 1 CUST_00156 CUST_00401 1704153600000 1205.87 mobile 2 CUST_00089 CUST_00234 1704240000000 892.12 atm ... ``` --- ## Step 2: Build the Temporal Graph Convert the DataFrame into a Raphtory temporal graph. Each edge carries the transaction timestamp - this is what enables temporal analysis. ```python # Create temporal graph g = Graph() g.load_edges_from_pandas( df, src="from_account", dst="to_account", time="timestamp", properties=["amount", "channel"] ) print(f"Graph created:") print(f" Nodes: {g.count_nodes():,}") print(f" Edges: {g.count_edges():,}") print(f" Time range: {g.earliest_time} to {g.latest_time}") ``` **Output:** ``` Graph created: Nodes: 530 Edges: 10,075 Time range: 1704067200000 to 1706659200000 ``` --- ## Step 3: Identify Coordinated Account Creation Fraud rings often involve accounts created within a tight time window. We look for nodes that "appear" in the graph at similar times. ```python from collections import defaultdict def find_coordinated_births(graph, window_ms=3600000): # 1 hour window """ Find groups of accounts that first appear within the same time window. 
Args: graph: Raphtory graph window_ms: Time window in milliseconds (default: 1 hour) Returns: List of (window_start, account_list) tuples """ # Get each node's first appearance time birth_times = { node.name: node.earliest_time for node in graph.nodes() } # Group by time window windows = defaultdict(list) for account, birth in birth_times.items(): window_key = birth // window_ms # Floor to window boundary windows[window_key].append(account) # Filter to windows with 3+ accounts suspicious = [ (window_key * window_ms, accounts) for window_key, accounts in windows.items() if len(accounts) >= 3 ] return sorted(suspicious, key=lambda x: len(x[1]), reverse=True) coordinated = find_coordinated_births(g) print(f"Found {len(coordinated)} time windows with 3+ account creations") for window_time, accounts in coordinated[:5]: dt = datetime.fromtimestamp(window_time / 1000) print(f"\n{dt.strftime('%Y-%m-%d %H:00')} - {len(accounts)} accounts:") print(f" {accounts[:5]}{'...' if len(accounts) > 5 else ''}") ``` **Output:** ``` Found 12 time windows with 3+ account creations 2024-01-08 14:00 - 6 accounts: ['FRAUD_0_0', 'FRAUD_0_1', 'FRAUD_0_2', 'FRAUD_0_3', 'FRAUD_0_4']... 2024-01-15 09:00 - 6 accounts: ['FRAUD_2_0', 'FRAUD_2_1', 'FRAUD_2_2', 'FRAUD_2_3', 'FRAUD_2_4']... ``` **Why time windows matter**: Legitimate accounts are created throughout the day. Fraud rings create accounts in bursts - often within the same session or scripted process. --- ## Step 4: Analyze Internal Connectivity Coordination alone isn't fraud. We need to check if these accounts **immediately transact with each other** - a key indicator of layering. ```python def analyze_ring_connectivity(graph, accounts, time_window_hours=24): """ Analyze how quickly and densely a group of accounts interact. Returns: dict with connectivity metrics """ # Create subgraph of just these accounts subgraph = graph.subgraph(accounts) if subgraph.count_edges() == 0: return None # Calculate metrics n_nodes = len(accounts) n_edges = subgraph.count_edges() max_possible_edges = n_nodes * (n_nodes - 1) # Directed graph density = n_edges / max_possible_edges if max_possible_edges > 0 else 0 # Time between first account creation and first internal transaction first_birth = min(graph.node(a).earliest_time for a in accounts if graph.has_node(a)) first_edge = subgraph.earliest_time time_to_first_tx = (first_edge - first_birth) / 3600000 # hours # Total internal volume total_volume = sum( e.properties.get("amount", 0) for e in subgraph.edges() ) return { "accounts": accounts, "internal_edges": n_edges, "density": density, "hours_to_first_tx": time_to_first_tx, "total_internal_volume": total_volume, "risk_score": calculate_ring_risk_score(density, time_to_first_tx, total_volume) } def calculate_ring_risk_score(density, hours_to_first_tx, volume): """ Calculate a risk score based on behavioral indicators. 
""" score = 0 # High internal density is suspicious if density > 0.3: score += 30 elif density > 0.1: score += 15 # Rapid first transaction is suspicious if hours_to_first_tx < 1: score += 40 elif hours_to_first_tx < 6: score += 20 # High volume is suspicious for new accounts if volume > 100000: score += 30 elif volume > 50000: score += 15 return min(score, 100) # Analyze each coordinated group rings = [] for window_time, accounts in coordinated: analysis = analyze_ring_connectivity(g, accounts) if analysis and analysis["internal_edges"] > 0: rings.append(analysis) # Sort by risk score rings.sort(key=lambda x: x["risk_score"], reverse=True) print(f"\n{'='*60}") print("POTENTIAL FRAUD RINGS (sorted by risk score)") print(f"{'='*60}\n") for ring in rings[:5]: print(f"Risk Score: {ring['risk_score']}/100") print(f" Accounts: {ring['accounts'][:3]}...") print(f" Internal transactions: {ring['internal_edges']}") print(f" Network density: {ring['density']:.1%}") print(f" Hours to first tx: {ring['hours_to_first_tx']:.1f}") print(f" Total internal volume: ${ring['total_internal_volume']:,.0f}") print() ``` **Output:** ``` ============================================================ POTENTIAL FRAUD RINGS (sorted by risk score) ============================================================ Risk Score: 100/100 Accounts: ['FRAUD_0_0', 'FRAUD_0_1', 'FRAUD_0_2']... Internal transactions: 15 Network density: 50.0% Hours to first tx: 0.5 Total internal volume: $187,432 Risk Score: 100/100 Accounts: ['FRAUD_2_0', 'FRAUD_2_1', 'FRAUD_2_2']... Internal transactions: 15 Network density: 50.0% Hours to first tx: 0.8 Total internal volume: $203,891 ``` --- ## Step 5: Trace Money Flow with Temporal Reachability Once we identify a suspicious source, we trace where the money goes - **respecting time order**. Money can't flow backward. ```python from raphtory import algorithms def trace_money_flow(graph, source_account, max_hops=4): """ Trace temporal money flow from a source account. Each hop must occur AFTER the previous hop (temporal causality). """ # Use temporal reachability to find all accounts that can be reached reachable = algorithms.temporal_reachability( graph, seed_nodes=[source_account], max_hops=max_hops ) # Build the flow tree flow = [] source_node = graph.node(source_account) for edge in source_node.out_edges: flow.append({ "from": source_account, "to": edge.dst.name, "time": edge.time, "amount": edge.properties.get("amount", 0), "hop": 1 }) return { "source": source_account, "reachable_accounts": len(reachable), "first_hop_flows": flow[:10] # Limit for display } # Trace from the highest-risk account if rings: suspect = rings[0]["accounts"][0] flow = trace_money_flow(g, suspect) print(f"Money flow from {suspect}:") print(f" Total reachable accounts: {flow['reachable_accounts']}") print(f"\n First-hop transactions:") for f in flow["first_hop_flows"]: dt = datetime.fromtimestamp(f["time"] / 1000) print(f" → {f['to']}: ${f['amount']:,.0f} at {dt.strftime('%Y-%m-%d %H:%M')}") ``` --- ## Step 6: Generate Investigation Report Compile findings into an actionable format for investigators. ```python from datetime import datetime def generate_sar_data(ring_analysis, graph): """ Generate structured data for a Suspicious Activity Report. 
""" accounts = ring_analysis["accounts"] # Get account details account_details = [] for acc in accounts: node = graph.node(acc) if node: account_details.append({ "account_id": acc, "first_seen": datetime.fromtimestamp(node.earliest_time / 1000).isoformat(), "total_outbound": len(list(node.out_edges)), "total_inbound": len(list(node.in_edges)) }) return { "report_id": f"SAR-{datetime.now().strftime('%Y%m%d%H%M%S')}", "generated_at": datetime.now().isoformat(), "risk_score": ring_analysis["risk_score"], "summary": { "pattern_type": "Coordinated Account Creation + Rapid Internal Transfers", "accounts_involved": len(accounts), "total_volume": ring_analysis["total_internal_volume"], "detection_confidence": "HIGH" if ring_analysis["risk_score"] > 80 else "MEDIUM" }, "accounts": account_details, "recommended_actions": [ "Freeze accounts pending investigation", "Request transaction history from correspondent banks", "Check for common device fingerprints", "Review KYC documentation for inconsistencies" ] } # Generate report for top ring if rings: report = generate_sar_data(rings[0], g) print(json.dumps(report, indent=2)) ``` **Output:** ```json { "report_id": "SAR-20240108143022", "generated_at": "2024-01-08T14:30:22.123456", "risk_score": 100, "summary": { "pattern_type": "Coordinated Account Creation + Rapid Internal Transfers", "accounts_involved": 6, "total_volume": 187432.45, "detection_confidence": "HIGH" }, "accounts": [...], "recommended_actions": [ "Freeze accounts pending investigation", ... ] } ``` --- ## Summary This cookbook demonstrated a complete fraud ring detection pipeline: | Step | What We Did | |------|-------------| | 1. Load Data | Ingested transactions from pandas DataFrame | | 2. Build Graph | Created temporal graph preserving transaction times | | 3. Find Coordination | Identified accounts created in tight time windows | | 4. Analyze Connectivity | Measured internal transaction density and speed | | 5. Trace Flow | Followed money movement respecting temporal order | | 6. Generate Report | Compiled findings into investigation-ready format | **Key temporal insights that static analysis misses:** - **Time-ordered flow**: Money can't move backward in time - **Creation coordination**: Fraud rings create accounts in bursts - **Rapid activation**: Legitimate accounts don't transact immediately at high volume --- ## Next Steps - **[Temporal Algorithms Reference](/docs/reference/algorithms/temporal)** – More temporal analysis tools - **[AI/ML Engineer Tutorial](/docs/tutorials/ai-ml-engineer)** – Add LLM-powered narrative generation - **[Production Deployment](/docs/tutorials/platform-engineer)** – Run this at scale --- ## Cookbooks > Network Operations # Cookbook: Service Dependency & Failure Analysis **A complete walkthrough from service logs to cascade failure detection.** This cookbook demonstrates how to model microservice dependencies as a temporal graph to detect failure cascades, identify bottlenecks, and understand how incidents propagate through your infrastructure. --- ## The Challenge Modern distributed systems fail in complex, cascading patterns: 1. **Database timeout** at 2:00 AM 2. **API gateway buffers fill** at 2:02 AM 3. **User-facing service errors** at 2:05 AM 4. **Load balancer health checks fail** at 2:08 AM Static monitoring sees four separate alerts. Temporal graph analysis sees **one incident with a root cause**. 
**What we'll analyze:** - Service dependency mapping - Failure cascade reconstruction - Latency degradation trends - Critical path identification ### The Data Model --- ## Step 1: Generate Service Mesh Data We'll create synthetic distributed tracing data representing a microservice architecture. ```python from datetime import datetime, timedelta from raphtory import Graph np.random.seed(42) def generate_service_mesh_data(n_calls=10000, inject_incident=True): """ Generate synthetic service-to-service call data. """ calls = [] base_time = datetime(2024, 1, 15, 0, 0, 0) # Service topology services = { "api-gateway": {"calls": ["auth-service", "user-service", "order-service"]}, "auth-service": {"calls": ["user-db"]}, "user-service": {"calls": ["user-db", "cache"]}, "order-service": {"calls": ["order-db", "payment-service", "inventory-service"]}, "payment-service": {"calls": ["payment-gateway"]}, "inventory-service": {"calls": ["inventory-db"]}, } # Normal operation patterns for _ in range(n_calls): src = np.random.choice(list(services.keys())) if not services[src]["calls"]: continue dst = np.random.choice(services[src]["calls"]) time = base_time + timedelta( hours=np.random.randint(0, 24), minutes=np.random.randint(0, 60), seconds=np.random.randint(0, 60) ) # Normal latency varies by call type base_latency = {"user-db": 15, "order-db": 20, "cache": 5, "payment-gateway": 100} latency = np.random.exponential(base_latency.get(dst, 30)) calls.append({ "timestamp": int(time.timestamp() * 1000), "source": src, "destination": dst, "latency_ms": round(latency, 2), "status": "success" if np.random.random() < 0.99 else "error" }) # Inject incident: order-db starts failing at 14:30 if inject_incident: incident_start = base_time + timedelta(hours=14, minutes=30) # Database starts returning errors for i in range(50): time = incident_start + timedelta(seconds=i * 10) calls.append({ "timestamp": int(time.timestamp() * 1000), "source": "order-service", "destination": "order-db", "latency_ms": 30000 + np.random.randint(0, 5000), # Timeout "status": "error" }) # Cascade: order-service starts failing for i in range(30): time = incident_start + timedelta(minutes=2) + timedelta(seconds=i * 15) calls.append({ "timestamp": int(time.timestamp() * 1000), "source": "api-gateway", "destination": "order-service", "latency_ms": 35000, "status": "error" }) # Cascade: api-gateway becomes slow for all requests for i in range(40): time = incident_start + timedelta(minutes=5) + timedelta(seconds=i * 10) dst = np.random.choice(["auth-service", "user-service"]) calls.append({ "timestamp": int(time.timestamp() * 1000), "source": "api-gateway", "destination": dst, "latency_ms": 5000 + np.random.randint(0, 3000), # Elevated latency "status": "success" # Not failing, just slow }) return pd.DataFrame(calls) df = generate_service_mesh_data() print(f"Generated {len(df):,} service calls") print(f"\nStatus distribution:") print(df["status"].value_counts()) ``` **Output:** ``` Generated 10,120 service calls Status distribution: success 9,963 error 157 ``` --- ## Step 2: Build the Service Dependency Graph ```python g = Graph() g.load_edges_from_pandas( df, src="source", dst="destination", time="timestamp", properties=["latency_ms", "status"] ) print(f"Service mesh graph:") print(f" Services: {g.count_nodes()}") print(f" Calls: {g.count_edges()}") ``` --- ## Step 3: Map Active Dependencies Identify which services actually call which (not just documented dependencies). 
```python def map_active_dependencies(graph, time_window=None): """ Map active service dependencies based on actual call patterns. """ if time_window: view = graph.window(*time_window) else: view = graph dependencies = {} for node in view.nodes(): name = node.name # Count calls per dependency outbound = {} for edge in node.out_edges: dst = edge.dst.name outbound[dst] = outbound.get(dst, 0) + 1 inbound = {} for edge in node.in_edges: src = edge.src.name inbound[src] = inbound.get(src, 0) + 1 if outbound or inbound: dependencies[name] = { "depends_on": outbound, "depended_by": inbound } return dependencies deps = map_active_dependencies(g) print("SERVICE DEPENDENCY MAP") print("=" * 60) for service, data in deps.items(): if data["depends_on"]: print(f"\n{service}:") for dep, count in data["depends_on"].items(): print(f" → {dep} ({count:,} calls)") ``` **Output:** ``` SERVICE DEPENDENCY MAP ============================================================ api-gateway: → auth-service (1,234 calls) → user-service (1,156 calls) → order-service (1,089 calls) order-service: → order-db (856 calls) → payment-service (423 calls) → inventory-service (389 calls) ``` --- ## Step 4: Detect the Failure Cascade Find the sequence of failures that propagated through the system. ```python def detect_failure_cascade(graph, error_threshold=0.1, window_minutes=30): """ Detect cascading failures by analyzing error rate spikes over time. """ # Find all error events errors = [] for edge in graph.edges(): if edge.properties.get("status") == "error": errors.append({ "time": edge.time, "source": edge.src.name, "destination": edge.dst.name }) if not errors: return None # Group by time windows errors.sort(key=lambda x: x["time"]) first_error = errors[0]["time"] # Build cascade timeline cascade = [] affected = set() for i, error in enumerate(errors): minutes_since_start = (error["time"] - first_error) / 60000 if error["destination"] not in affected: cascade.append({ "order": len(cascade) + 1, "service": error["destination"], "first_error_time": error["time"], "minutes_into_incident": round(minutes_since_start, 1), "caller": error["source"] }) affected.add(error["destination"]) return { "incident_start": first_error, "affected_services": list(affected), "cascade_timeline": cascade } cascade = detect_failure_cascade(g) print("FAILURE CASCADE ANALYSIS") print("=" * 60) print(f"Incident started: {datetime.fromtimestamp(cascade['incident_start'] / 1000)}") print(f"Total services affected: {len(cascade['affected_services'])}") print(f"\nCascade timeline:") for event in cascade["cascade_timeline"]: print(f" +{event['minutes_into_incident']:>5.1f} min: {event['service']}") print(f" Triggered by: {event['caller']}") ``` **Output:** ``` FAILURE CASCADE ANALYSIS ============================================================ Incident started: 2024-01-15 14:30:00 Total services affected: 3 Cascade timeline: + 0.0 min: order-db Triggered by: order-service + 2.0 min: order-service Triggered by: api-gateway + 5.0 min: auth-service Triggered by: api-gateway ``` **Root Cause Identified**: The cascade shows `order-db` failed first, causing `order-service` to fail 2 minutes later, which then caused resource exhaustion in `api-gateway` affecting all downstream services. --- ## Step 5: Analyze Latency Degradation Detect gradual performance problems before they become outages. ```python def analyze_latency_trends(graph, service_pair, bucket_minutes=30): """ Analyze latency trends between two services over time. 
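Calls are grouped into fixed buckets of bucket_minutes; each bucket reports mean latency, 95th-percentile latency, and the share of calls above 5000 ms (treated here as timeouts/errors).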
""" edges = [] for edge in graph.edges(): if edge.src.name == service_pair[0] and edge.dst.name == service_pair[1]: edges.append({ "time": edge.time, "latency": edge.properties.get("latency_ms", 0) }) if not edges: return [] # Bucket by time windows edges.sort(key=lambda x: x["time"]) bucket_ms = bucket_minutes * 60 * 1000 buckets = {} for edge in edges: bucket_key = edge["time"] // bucket_ms if bucket_key not in buckets: buckets[bucket_key] = [] buckets[bucket_key].append(edge["latency"]) # Calculate stats per bucket trends = [] for bucket_key in sorted(buckets.keys()): latencies = buckets[bucket_key] trends.append({ "time": bucket_key * bucket_ms, "avg_latency": np.mean(latencies), "p95_latency": np.percentile(latencies, 95), "error_rate": sum(1 for l in latencies if l > 5000) / len(latencies) }) return trends # Analyze order-service → order-db latency trends = analyze_latency_trends(g, ("order-service", "order-db")) print("LATENCY TREND: order-service → order-db") print("=" * 60) print(f"{'Time':<20} {'Avg (ms)':<12} {'P95 (ms)':<12} {'Error %':<10}") print("-" * 60) for t in trends: time_str = datetime.fromtimestamp(t["time"] / 1000).strftime("%H:%M") print(f"{time_str:<20} {t['avg_latency']:<12.0f} {t['p95_latency']:<12.0f} {t['error_rate']*100:<10.1f}") ``` **Output:** ``` LATENCY TREND: order-service → order-db ============================================================ Time Avg (ms) P95 (ms) Error % ------------------------------------------------------------ 00:00 18 45 0.0 00:30 21 52 0.0 ... 14:00 19 48 0.0 14:30 28000 35000 95.2 ← Incident! ``` --- ## Step 6: Identify Critical Paths Find which services are most critical to overall system health. ```python from raphtory import algorithms def identify_critical_services(graph): """ Rank services by their criticality to system operations. """ # PageRank identifies services that receive the most calls from important callers pagerank = algorithms.pagerank(graph) criticality = [] for node in graph.nodes(): name = node.name # Calculate metrics in_degree = len(list(node.in_edges)) out_degree = len(list(node.out_edges)) # Services with high in-degree that also call many downstream services # are critical - failure there cascades both up and down score = pagerank.get(name, 0) * 100 if out_degree > 0: score *= (1 + out_degree / 5) # Boost for downstream dependencies criticality.append({ "service": name, "criticality_score": score, "incoming_calls": in_degree, "outgoing_calls": out_degree, "is_leaf": out_degree == 0 }) return sorted(criticality, key=lambda x: x["criticality_score"], reverse=True) critical = identify_critical_services(g) print("CRITICAL SERVICE RANKING") print("=" * 60) print(f"{'Service':<25} {'Score':<10} {'In':<8} {'Out':<8} {'Leaf?'}") print("-" * 60) for svc in critical[:8]: leaf = "Yes" if svc["is_leaf"] else "No" print(f"{svc['service']:<25} {svc['criticality_score']:<10.2f} {svc['incoming_calls']:<8} {svc['outgoing_calls']:<8} {leaf}") ``` --- ## Summary This cookbook demonstrated a complete service dependency analysis pipeline: | Step | What We Did | |------|-------------| | 1. Load Data | Ingested distributed tracing / service mesh logs | | 2. Build Graph | Temporal graph of service-to-service calls | | 3. Map Dependencies | Active call patterns (not just config) | | 4. Detect Cascade | Traced failure propagation timeline | | 5. Latency Trends | Identified degradation before failure | | 6. 
Critical Paths | Ranked services by system criticality | **Key temporal insights:** - **Cascade timeline**: See exactly how failures propagate minute-by-minute - **Gradual degradation**: Latency increases before the outage - **Dynamic dependencies**: Runtime calls differ from architecture diagrams --- ## Next Steps - **[Platform Engineer Tutorial](/docs/tutorials/platform-engineer)** – Deploy monitoring at scale - **[PageRank Centrality](/docs/reference/algorithms/centrality/pagerank)** – Criticality ranking - **[Temporal Windows](/docs/views/temporal-windows)** – Point-in-time analysis --- ## Cookbooks > Social Networks # Cookbook: Social Network Influence Analysis **A complete walkthrough from interaction data to influence evolution tracking.** This cookbook demonstrates how to analyze social networks temporally - tracking how influence emerges, communities form, and content spreads over time. We'll use synthetic social interaction data. --- ## The Challenge Social platforms generate millions of interactions, but: 1. **Static follower counts lie** - engagement matters more 2. **Influence shifts over time** - yesterday's star is today's nobody 3. **Communities evolve** - echo chambers form and fragment 4. **Viral spread has a timeline** - tracking who spread what, when **What we'll analyze:** - Influence evolution (who's rising, who's falling?) - Information cascade modeling - Community formation over time - Coordinated inauthentic behavior (bot detection) ### The Data Model --- ## Step 1: Generate Social Interaction Data We'll create synthetic data representing a social platform with various interaction types. ```python from datetime import datetime, timedelta from raphtory import Graph np.random.seed(42) def generate_social_data(n_users=1000, n_days=30): """ Generate synthetic social interactions with varying activity levels. 
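The feed mixes 20 "influencer_*" power users (targeted by roughly 30% of interactions) with ordinary accounts, uses higher weekend volumes, and injects a viral cascade on day 15: influencer_0's post is shared by 50 users within two hours, followed by a second wave of ~100 share-of-share events.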
""" interactions = [] base_time = datetime(2024, 1, 1) # User tiers (power users, normal, lurkers) power_users = [f"influencer_{i}" for i in range(20)] normal_users = [f"user_{i:04d}" for i in range(n_users - 20)] all_users = power_users + normal_users # Generate organic interactions over 30 days for day in range(n_days): day_base = base_time + timedelta(days=day) # Daily interaction volume varies (weekends more active) is_weekend = day % 7 >= 5 daily_volume = np.random.randint(800, 1200) if is_weekend else np.random.randint(400, 700) for _ in range(daily_volume): # Power users get more attention if np.random.random() < 0.3: dst = np.random.choice(power_users) else: dst = np.random.choice(all_users) src = np.random.choice([u for u in all_users if u != dst]) time = day_base + timedelta( hours=np.random.randint(6, 24), minutes=np.random.randint(0, 60) ) # Different interaction types interaction_type = np.random.choice( ["follow", "like", "comment", "share"], p=[0.3, 0.4, 0.2, 0.1] ) interactions.append({ "timestamp": int(time.timestamp() * 1000), "from_user": src, "to_user": dst, "interaction_type": interaction_type }) # Inject a viral moment: influencer_0 posts something that spreads viral_start = base_time + timedelta(days=15, hours=10) viral_source = "influencer_0" # First wave: direct followers share for i, sharer in enumerate(normal_users[:50]): time = viral_start + timedelta(minutes=np.random.randint(5, 120)) interactions.append({ "timestamp": int(time.timestamp() * 1000), "from_user": sharer, "to_user": viral_source, "interaction_type": "share" }) # Second wave: shares of shares for i, sharer in enumerate(normal_users[50:150]): time = viral_start + timedelta(hours=np.random.randint(2, 8)) original_sharer = np.random.choice(normal_users[:50]) interactions.append({ "timestamp": int(time.timestamp() * 1000), "from_user": sharer, "to_user": original_sharer, "interaction_type": "share" }) return pd.DataFrame(interactions) df = generate_social_data() print(f"Generated {len(df):,} interactions over 30 days") print(f"\nInteraction type distribution:") print(df["interaction_type"].value_counts()) ``` **Output:** ``` Generated 17,342 interactions over 30 days Interaction type distribution: like 6,937 follow 5,203 comment 3,468 share 1,734 ``` --- ## Step 2: Build the Multi-Layer Social Graph Use layers to separate different interaction types for nuanced analysis. ```python g = Graph() g.load_edges_from_pandas( df, src="from_user", dst="to_user", time="timestamp", layer_col="interaction_type" ) print(f"Social graph created:") print(f" Users: {g.count_nodes():,}") print(f" Interactions: {g.count_edges():,}") print(f" Layers: follow, like, comment, share") ``` --- ## Step 3: Track Influence Evolution Compare PageRank scores over time to identify rising and falling influencers. ```python from raphtory import algorithms def track_influence_evolution(graph, windows): """ Calculate PageRank at different time points to track influence changes. 
Args: graph: Raphtory graph windows: List of (start_ms, end_ms) tuples Returns: Dict mapping users to their PageRank trajectory """ trajectories = {} for i, (start, end) in enumerate(windows): view = graph.window(start, end) pr = algorithms.pagerank(view) for node in view.nodes(): name = node.name if name not in trajectories: trajectories[name] = [] trajectories[name].append((i, pr.get(name, 0))) return trajectories # Analyze weekly influence changes start_ts = g.earliest_time week_ms = 7 * 24 * 3600 * 1000 windows = [ (start_ts + i * week_ms, start_ts + (i+1) * week_ms) for i in range(4) ] trajectories = track_influence_evolution(g, windows) # Find users with biggest influence gain def calculate_growth(trajectory): if len(trajectory) < 2: return 0 return trajectory[-1][1] - trajectory[0][1] growth = {user: calculate_growth(t) for user, t in trajectories.items() if len(t) >= 2} top_risers = sorted(growth.items(), key=lambda x: x[1], reverse=True)[:10] top_fallers = sorted(growth.items(), key=lambda x: x[1])[:5] print("INFLUENCE EVOLUTION REPORT") print("=" * 50) print("\n📈 Rising Stars:") for user, change in top_risers: print(f" {user}: +{change:.4f}") print("\n📉 Declining Influence:") for user, change in top_fallers: print(f" {user}: {change:.4f}") ``` **Output:** ``` INFLUENCE EVOLUTION REPORT ================================================== 📈 Rising Stars: influencer_0: +0.0847 user_0023: +0.0234 user_0156: +0.0189 ... 📉 Declining Influence: influencer_12: -0.0156 influencer_8: -0.0098 ... ``` --- ## Step 4: Model Viral Content Spread Trace how content spreads through the network in chronological order. ```python from raphtory import algorithms def model_viral_spread(graph, seed_user, start_time, max_hours=24): """ Model how content from a seed user spreads through shares. """ share_layer = graph.layer("share") end_time = start_time + (max_hours * 3600 * 1000) window = share_layer.window(start_time, end_time) # Use temporal reachability to find spread reachable = algorithms.temporal_reachability( window, seed_nodes=[seed_user], max_hops=5 ) # Build spread timeline spread_timeline = [] source_node = window.node(seed_user) if source_node: for edge in source_node.in_edges: # shares come TO the original poster spread_timeline.append({ "sharer": edge.src.name, "time": edge.time, "hop": 1 }) return { "seed": seed_user, "total_reach": len(reachable), "direct_shares": len(spread_timeline), "timeline": sorted(spread_timeline, key=lambda x: x["time"])[:20] } # Analyze the viral moment we injected viral_start = g.earliest_time + (15 * 24 * 3600 * 1000) # Day 15 spread = model_viral_spread(g, "influencer_0", viral_start) print("VIRAL SPREAD ANALYSIS") print("=" * 50) print(f"Seed: {spread['seed']}") print(f"Total reach: {spread['total_reach']} users") print(f"Direct shares: {spread['direct_shares']}") print(f"\nSpread timeline (first 10 shares):") for s in spread["timeline"][:10]: time = datetime.fromtimestamp(s["time"] / 1000).strftime("%H:%M") print(f" {time}: {s['sharer']} shared") ``` --- ## Step 5: Detect Community Formation Track how communities emerge and evolve over time. ```python def analyze_community_evolution(graph, time_points): """ Run community detection at different time points to see evolution. 
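Each time point is analysed as a snapshot via graph.at(time_point) with Louvain community detection; each entry records the number of communities, the size of the largest one, and the five largest community sizes.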
""" evolution = [] for label, time_point in time_points: view = graph.at(time_point) communities = algorithms.louvain(view) # Get community sizes community_sizes = {} for node in view.nodes(): comm_id = communities.get(node.name, -1) community_sizes[comm_id] = community_sizes.get(comm_id, 0) + 1 evolution.append({ "label": label, "num_communities": len(community_sizes), "largest_community": max(community_sizes.values()) if community_sizes else 0, "sizes": sorted(community_sizes.values(), reverse=True)[:5] }) return evolution # Check community structure at different points time_points = [ ("Week 1", g.earliest_time + (7 * 24 * 3600 * 1000)), ("Week 2", g.earliest_time + (14 * 24 * 3600 * 1000)), ("Week 3", g.earliest_time + (21 * 24 * 3600 * 1000)), ("Week 4", g.earliest_time + (28 * 24 * 3600 * 1000)), ] evolution = analyze_community_evolution(g, time_points) print("COMMUNITY EVOLUTION") print("=" * 50) for e in evolution: print(f"\n{e['label']}:") print(f" Total communities: {e['num_communities']}") print(f" Largest community: {e['largest_community']} members") print(f" Top 5 sizes: {e['sizes']}") ``` --- ## Step 6: Detect Coordinated Inauthentic Behavior Find potential bot networks by identifying accounts created together with synchronized activity. ```python from collections import defaultdict def detect_bot_networks(graph, creation_window_ms=3600000, min_size=5): """ Detect potential bot networks: - Created within same time window - Similar interaction patterns - Target the same users """ suspicious_clusters = [] # Group accounts by creation time creation_groups = defaultdict(list) for node in graph.nodes(): window_key = node.earliest_time // creation_window_ms creation_groups[window_key].append(node.name) # Analyze groups with suspicious size for window, accounts in creation_groups.items(): if len(accounts) >= min_size: # Check if they target the same users targets = defaultdict(set) for acc in accounts: node = graph.node(acc) for edge in node.out_edges: targets[acc].add(edge.dst.name) # Calculate target overlap if targets: first_targets = targets[accounts[0]] overlap_count = sum( 1 for acc in accounts[1:] if len(targets[acc] & first_targets) > len(first_targets) * 0.3 ) if overlap_count > len(accounts) * 0.5: suspicious_clusters.append({ "creation_window": window * creation_window_ms, "accounts": accounts, "size": len(accounts), "common_targets": list(first_targets)[:5] }) return suspicious_clusters bots = detect_bot_networks(g) print("COORDINATED INAUTHENTIC BEHAVIOR SCAN") print("=" * 50) print(f"Suspicious clusters found: {len(bots)}") for cluster in bots[:3]: creation = datetime.fromtimestamp(cluster["creation_window"] / 1000) print(f"\n Created: {creation.strftime('%Y-%m-%d %H:00')}") print(f" Size: {cluster['size']} accounts") print(f" Sample accounts: {cluster['accounts'][:3]}") print(f" Common targets: {cluster['common_targets']}") ``` --- ## Summary This cookbook demonstrated a complete social network analysis pipeline: | Step | What We Did | |------|-------------| | 1. Load Data | Multi-type interaction data with timestamps | | 2. Build Graph | Multi-layer graph (follow, like, comment, share) | | 3. Track Influence | PageRank evolution over weekly windows | | 4. Model Spread | Temporal reachability for viral content | | 5. Community Evolution | Louvain detection at multiple time points | | 6. 
Bot Detection | Coordinated creation + similar targets | **Key temporal insights:** - **Influence is dynamic** - weekly snapshots reveal who's rising/falling - **Virality has a timeline** - first shares within minutes, cascade over hours - **Communities evolve** - fragmentation and consolidation over time --- ## Next Steps - **[Louvain Community Detection](/docs/reference/algorithms/community/louvain)** – Algorithm details - **[PageRank Reference](/docs/reference/algorithms/centrality/pagerank)** – Influence ranking - **[Temporal Reachability](/docs/reference/algorithms/temporal/temporal-reachability)** – Spread modeling --- ## Cookbooks > Supply Chain # Cookbook: Supply Chain Disruption Analysis **A complete walkthrough from BOM data to disruption impact modeling.** This cookbook demonstrates how to model supply chain dependencies as a temporal graph to predict cascade effects when suppliers fail. We'll use synthetic Bill of Materials (BOM) and shipment data. --- ## The Challenge Modern supply chains are deep and interconnected. When a Tier-3 supplier in Asia experiences problems: 1. **The impact isn't immediate** - it takes weeks to propagate 2. **Multiple products may be affected** - shared components 3. **Alternative sourcing has lead times** - can't just switch 4. **Static models don't capture timing** - when will you feel it? **What we'll analyze:** - Multi-tier dependency mapping (who supplies your suppliers?) - Disruption propagation simulation - Time-to-impact calculations - Critical path identification ### The Data Model --- ## Step 1: Generate Supply Chain Data We'll create synthetic data representing a multi-tier electronics supply chain with realistic lead times. ```python from datetime import datetime, timedelta from raphtory import Graph np.random.seed(42) def generate_supply_chain_data(): """ Generate a multi-tier supply chain with realistic dependencies. 
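Each edge points from supplier to customer and carries the contract effective date (used as the edge timestamp), the supplier's lead time in days, and monthly capacity where it is tracked.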
Tiers: - T0: OEM (final product assembly) - T1: Major component suppliers - T2: Sub-component suppliers - T3: Raw material suppliers """ edges = [] base_time = datetime(2024, 1, 1) # Tier 0: OEM factories oems = ["OEM_FACTORY_US", "OEM_FACTORY_EU"] # Tier 1: Major component suppliers tier1 = { "DISPLAY_CORP": {"lead_time_days": 14, "capacity": 50000}, "BATTERY_INC": {"lead_time_days": 21, "capacity": 80000}, "CHIP_GLOBAL": {"lead_time_days": 28, "capacity": 100000}, "FRAME_MFG": {"lead_time_days": 7, "capacity": 120000}, } # Tier 2: Sub-component suppliers tier2 = { "GLASS_SUPPLIER_A": {"supplies": ["DISPLAY_CORP"], "lead_time_days": 10}, "GLASS_SUPPLIER_B": {"supplies": ["DISPLAY_CORP"], "lead_time_days": 12}, "CELL_CHEM_CO": {"supplies": ["BATTERY_INC"], "lead_time_days": 18}, "WAFER_FAB_TAIWAN": {"supplies": ["CHIP_GLOBAL"], "lead_time_days": 35}, "WAFER_FAB_KOREA": {"supplies": ["CHIP_GLOBAL"], "lead_time_days": 30}, "ALUMINUM_WORKS": {"supplies": ["FRAME_MFG"], "lead_time_days": 8}, } # Tier 3: Raw materials tier3 = { "RARE_EARTH_MINE_CN": {"supplies": ["CELL_CHEM_CO", "WAFER_FAB_TAIWAN"], "lead_time_days": 45}, "SILICA_MINE_AU": {"supplies": ["GLASS_SUPPLIER_A", "GLASS_SUPPLIER_B"], "lead_time_days": 20}, "BAUXITE_MINE_BR": {"supplies": ["ALUMINUM_WORKS"], "lead_time_days": 25}, } # Build edges with temporal information (contract effective dates) # Tier 1 → OEM for t1_name, t1_data in tier1.items(): for oem in oems: effective_date = base_time + timedelta(days=np.random.randint(0, 30)) edges.append({ "supplier": t1_name, "customer": oem, "effective_date": int(effective_date.timestamp() * 1000), "lead_time_days": t1_data["lead_time_days"], "monthly_capacity": t1_data["capacity"], "tier": 1 }) # Tier 2 → Tier 1 for t2_name, t2_data in tier2.items(): for t1_customer in t2_data["supplies"]: effective_date = base_time + timedelta(days=np.random.randint(-30, 0)) edges.append({ "supplier": t2_name, "customer": t1_customer, "effective_date": int(effective_date.timestamp() * 1000), "lead_time_days": t2_data["lead_time_days"], "monthly_capacity": 0, # Not tracked at this tier "tier": 2 }) # Tier 3 → Tier 2 for t3_name, t3_data in tier3.items(): for t2_customer in t3_data["supplies"]: effective_date = base_time + timedelta(days=np.random.randint(-60, -30)) edges.append({ "supplier": t3_name, "customer": t2_customer, "effective_date": int(effective_date.timestamp() * 1000), "lead_time_days": t3_data["lead_time_days"], "monthly_capacity": 0, "tier": 3 }) return pd.DataFrame(edges) df = generate_supply_chain_data() print(f"Supply chain model: {len(df)} relationships") print(f"\nTier distribution:") print(df["tier"].value_counts().sort_index()) df.head() ``` **Output:** ``` Supply chain model: 21 relationships Tier distribution: 1 8 2 7 3 6 supplier customer effective_date lead_time_days tier 0 DISPLAY_CORP OEM_FACTORY_US 1704067200000 14 1 1 DISPLAY_CORP OEM_FACTORY_EU 1704326400000 14 1 ... 
```

---

## Step 2: Build the Temporal Dependency Graph

```python
g = Graph()
g.load_edges_from_pandas(
    df,
    src="supplier",
    dst="customer",
    time="effective_date",
    properties=["lead_time_days", "monthly_capacity", "tier"]
)

print(f"Supply chain graph:")
print(f" Entities: {g.count_nodes()}")
print(f" Dependencies: {g.count_edges()}")

# Visualize the structure
print(f"\nNodes by type:")
for node in g.nodes():
    in_deg = len(list(node.in_edges))
    out_deg = len(list(node.out_edges))
    print(f" {node.name}: {in_deg} suppliers, {out_deg} customers")
```

**Output:**
```
Supply chain graph:
 Entities: 15
 Dependencies: 21

Nodes by type:
 OEM_FACTORY_US: 4 suppliers, 0 customers
 OEM_FACTORY_EU: 4 suppliers, 0 customers
 DISPLAY_CORP: 2 suppliers, 2 customers
 CHIP_GLOBAL: 2 suppliers, 2 customers
 ...
```

---

## Step 3: Simulate Disruption Propagation

When a Tier-3 supplier fails, trace the cascade through the network.

```python
from raphtory import algorithms

def simulate_disruption(graph, disrupted_node, disruption_date_ms):
    """
    Simulate the cascade effect of a supplier disruption.
    Returns list of impacted nodes with estimated time-to-impact.
    """
    impacted = []

    # BFS through the supply chain
    queue = [(disrupted_node, 0, 0)]  # (node, cumulative_lead_time, hop_count)
    visited = {disrupted_node}

    while queue:
        current, cumulative_lt, hops = queue.pop(0)
        node = graph.node(current)

        if not node:
            continue

        # For each customer of this supplier
        for edge in node.out_edges:
            customer = edge.dst.name
            edge_lead_time = edge.properties.get("lead_time_days", 0)
            new_cumulative = cumulative_lt + edge_lead_time

            if customer not in visited:
                visited.add(customer)
                impacted.append({
                    "entity": customer,
                    "hops_from_source": hops + 1,
                    "days_to_impact": new_cumulative,
                    "path": f"{disrupted_node} → ... → {customer}"
                })
                queue.append((customer, new_cumulative, hops + 1))

    return sorted(impacted, key=lambda x: x["days_to_impact"])

# Simulate: RARE_EARTH_MINE_CN goes offline
disruption = simulate_disruption(
    g,
    "RARE_EARTH_MINE_CN",
    disruption_date_ms=int(datetime(2024, 3, 1).timestamp() * 1000)
)

print("DISRUPTION IMPACT ANALYSIS")
print("=" * 60)
print(f"Disrupted entity: RARE_EARTH_MINE_CN (Tier 3)")
print(f"\nCascade timeline:")

# The disrupted node is a Tier-3 supplier, so each hop moves one tier
# closer to the OEMs: tier = 3 - hops_from_source.
for impact in disruption:
    print(f" Day {impact['days_to_impact']:3d}: {impact['entity']} (Tier {3 - impact['hops_from_source']})")
```

**Output:**
```
DISRUPTION IMPACT ANALYSIS
============================================================
Disrupted entity: RARE_EARTH_MINE_CN (Tier 3)

Cascade timeline:
 Day  45: CELL_CHEM_CO (Tier 2)
 Day  45: WAFER_FAB_TAIWAN (Tier 2)
 Day  63: BATTERY_INC (Tier 1)
 Day  80: CHIP_GLOBAL (Tier 1)
 Day  84: OEM_FACTORY_US (Tier 0)
 Day  84: OEM_FACTORY_EU (Tier 0)
```

**Critical Insight**: A Tier-3 disruption today won't hit your factories for 84+ days. This gives you a response window - but only if you detect it early.

---

## Step 4: Identify Critical Dependencies

Find "single points of failure" - suppliers that, if disrupted, impact everything.

```python
def calculate_criticality(graph):
    """
    Calculate how critical each supplier is based on downstream impact.
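Scoring: 10 points for every OEM reachable downstream (found by re-running simulate_disruption from the supplier), plus the supplier's average outbound lead time in days as a rough proxy for how hard it is to replace. OEM nodes themselves are skipped.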
""" criticality = {} for node in graph.nodes(): node_name = node.name # Skip OEMs (they're not suppliers) if "OEM" in node_name: continue # Count downstream OEMs reachable from this node disruption = simulate_disruption(graph, node_name, 0) oems_impacted = [d for d in disruption if "OEM" in d["entity"]] # Calculate criticality score score = len(oems_impacted) * 10 # Add penalty for long lead times (harder to replace) avg_lead_time = sum( e.properties.get("lead_time_days", 0) for e in node.out_edges ) / max(len(list(node.out_edges)), 1) score += avg_lead_time criticality[node_name] = { "score": score, "oems_impacted": len(oems_impacted), "avg_lead_time": avg_lead_time } return sorted(criticality.items(), key=lambda x: x[1]["score"], reverse=True) critical = calculate_criticality(g) print("SUPPLIER CRITICALITY RANKING") print("=" * 60) print(f"{'Supplier':<25} {'Score':>8} {'OEMs':>6} {'Lead Time':>10}") print("-" * 60) for supplier, data in critical[:8]: print(f"{supplier:<25} {data['score']:>8.0f} {data['oems_impacted']:>6} {data['avg_lead_time']:>10.0f}d") ``` **Output:** ``` SUPPLIER CRITICALITY RANKING ============================================================ Supplier Score OEMs Lead Time ------------------------------------------------------------ RARE_EARTH_MINE_CN 85 4 45d CHIP_GLOBAL 68 2 28d WAFER_FAB_TAIWAN 65 2 35d BATTERY_INC 61 2 21d ... ``` --- ## Step 5: Find Alternative Sourcing Paths When a primary supplier fails, identify backup routes. ```python def find_alternative_paths(graph, target_oem, avoid_supplier): """ Find supply paths that don't depend on a specific supplier. """ # Get all tier-1 suppliers of the OEM oem_node = graph.node(target_oem) tier1_suppliers = [e.src.name for e in oem_node.in_edges] alternatives = {} for t1 in tier1_suppliers: t1_node = graph.node(t1) # Get tier-2 suppliers of this tier-1 tier2_suppliers = [e.src.name for e in t1_node.in_edges] # Check if any tier-2 depends on the avoided supplier for t2 in tier2_suppliers: t2_node = graph.node(t2) tier3_suppliers = [e.src.name for e in t2_node.in_edges] if avoid_supplier not in tier3_suppliers: # This is a safe alternative path if t1 not in alternatives: alternatives[t1] = [] alternatives[t1].append({ "tier2": t2, "tier3_sources": tier3_suppliers }) return alternatives # Find alternatives to RARE_EARTH_MINE_CN for OEM_FACTORY_US alternatives = find_alternative_paths(g, "OEM_FACTORY_US", "RARE_EARTH_MINE_CN") print("ALTERNATIVE SOURCING PATHS") print("=" * 60) print(f"Avoiding: RARE_EARTH_MINE_CN") print(f"Target: OEM_FACTORY_US") print() for t1, paths in alternatives.items(): print(f"Component: {t1}") for path in paths: print(f" → {path['tier2']} ← {path['tier3_sources']}") ``` --- ## Step 6: Generate Risk Report Compile findings into an executive summary. ```python def generate_supply_chain_report(graph, disruption_analysis, criticality): """ Generate a supply chain risk report. 
""" return { "report_id": f"SCR-{datetime.now().strftime('%Y%m%d')}", "generated_at": datetime.now().isoformat(), "summary": { "total_suppliers": graph.count_nodes(), "total_dependencies": graph.count_edges(), "single_points_of_failure": len([c for c in criticality if c[1]["score"] > 50]) }, "top_risks": [ { "supplier": name, "criticality_score": data["score"], "oems_at_risk": data["oems_impacted"], "recommendation": "Identify backup supplier" if data["score"] > 60 else "Monitor" } for name, data in criticality[:5] ], "scenario_analysis": { "disrupted_supplier": "RARE_EARTH_MINE_CN", "time_to_oem_impact_days": max(d["days_to_impact"] for d in disruption_analysis), "entities_impacted": len(disruption_analysis) }, "recommended_actions": [ "Establish secondary source for rare earth materials", "Increase safety stock at CELL_CHEM_CO (2-week buffer)", "Qualify WAFER_FAB_KOREA as backup for WAFER_FAB_TAIWAN", "Implement real-time monitoring of Tier-3 suppliers" ] } report = generate_supply_chain_report(g, disruption, critical) print(json.dumps(report, indent=2, default=str)) ``` --- ## Summary This cookbook demonstrated a complete supply chain risk analysis pipeline: | Step | What We Did | |------|-------------| | 1. Model Data | Created multi-tier BOM with lead times | | 2. Build Graph | Temporal dependency network | | 3. Simulate Disruption | Traced cascade from Tier-3 to OEM | | 4. Find Critical Nodes | Ranked suppliers by downstream impact | | 5. Alternative Paths | Identified backup sourcing routes | | 6. Generate Report | Executive risk summary | **Key temporal insights:** - **Time-to-impact**: Know exactly when a disruption will hit - **Lead time accumulation**: Tier-3 delays compound through the chain - **Seasonal criticality**: Some nodes only matter during peak demand --- ## Next Steps - **[Temporal Reachability](/docs/reference/algorithms/temporal/temporal-reachability)** – Path tracing algorithm - **[PageRank Centrality](/docs/reference/algorithms/centrality/pagerank)** – Find critical nodes - **[Network Operations Cookbook](/docs/cookbooks/network-operations)** – Similar dependency analysis ============================================================ # Section: Graph Intelligence ============================================================ --- ## Graph Intelligence > Index # Graph Intelligence **Graph intelligence transforms temporal network data into actionable insights, predictions, and decisions - going far beyond mere relationship storage.** While traditional graph databases focus on the "what" (what nodes exist and how they are connected), Graph Intelligence focuses on the **"why"** and the **"what's next"**. By incorporating the temporal dimension, Raphtory allows you to track the evolution of a network and detect patterns that are invisible in static data. ## Intelligence vs. Storage | Feature | Traditional Graph Database | Graph Intelligence (Raphtory) | | :--- | :--- | :--- | | **Primary Goal** | Persistent relationship storage | **Automated pattern discovery** | | **Query Focus** | "What exists right now?" | **"Why did this happen? What will happen next?"** | | **Time Handling** | Static snapshots or metadata | **Native temporal causality** | | **Capability** | Correlations and links | **Predictive modeling and evolution tracking** | --- ## The Intelligence Stack Raphtory provides a full-stack approach to graph intelligence, moving from raw interactions to automated decision support. 1. **Decision Support**: Risk scoring, recommendations, and impact analysis. 2. 
**Analytics Workflows**: Real-time alerts, ML feature engineering, and reporting.
3. **Intelligence Layer**: Pattern discovery, anomaly detection, and motif analysis.
4. **Temporal Graph**: The foundation - time-aware relationships stored as immutable history.

---

## Intelligence Capabilities

Explore how Pometry delivers value across the intelligence lifecycle:

- **[NeuroSymbolic RAG](/docs/graph-intelligence/neurosymbolic-rag)** – Native integration of temporal graph logic and LLM reasoning.
- **[Generating Insights](/docs/graph-intelligence/workflows)** – Discover patterns, detect anomalies, and trace causality across time.
- **[Analytics Workflows](/docs/graph-intelligence/workflows)** – Deploy intelligence with automated alerts and ML integration.
- **[Decision Support](/docs/graph-intelligence/high-order-matching)** – Drive action with real-time risk scoring and impact analysis.

---

## The Maturity Model

Where is your organization on the journey to graph intelligence?

### Level 1: Descriptive
**"What happened?"** Traditional graph databases provide basic metrics and structural counts. You can see the network, but you can't see it move.

### Level 2: Diagnostic
**"Why did it happen?"** Raphtory enables pattern detection and anomaly identification by analyzing the sequence and rhythm of interactions.

### Level 3: Predictive
**"What will happen?"** Using temporal trends and growth trajectories to forecast future network states, churn, or viral spread.

### Level 4: Prescriptive
**"What should we do?"** Automated decision systems that use graph scores to trigger actions, optimize logistics, or block fraudulent transactions in real-time.

**The Raphtory Difference**: Most tools stop at Level 1 or 2. Raphtory is designed to take you to Level 4 by treating time as a fundamental structural element.

---

## Graph Intelligence > Advanced Pipelines

# Advanced Intelligence Pipelines

**Move beyond simple retrieval to autonomous strategic reasoning and complex investigative workflows.**

The Pometry ecosystem enables the construction of high-order intelligence pipelines that don't just "find data" - they **reason** about it. These pipelines leverage the temporal graph as a living world model, allowing agentic personas to perform multi-step analysis, self-correct their queries, and generate deep narrative insights.

**Commercial Capability:** The following examples showcase advanced features built using Pometry's managed Agentic APIs and enterprise UI components. These pipelines represent the target capability achievable by combining Raphtory's temporal engine with Pometry's intelligence orchestration.

---

## Intelligence Personas

Pometry pipelines are often organized into specialized "Intelligence Personas," each utilizing different dimensions of the temporal graph.

- **The Historian** – Specializes in temporal causality. Traces the evolution of an entity over years to detect "Long-Game" social engineering or gradual risk accumulation.
- **The Forecaster** – Predictive intelligence. Uses temporal graph metrics and FastRP embeddings to identify nodes showing symptoms of future failure or systemic bottlenecking.
- **The Investigator** – Pattern-based matching. Deploys complex multi-hop logic to flag schemes like circular transactions, scatter-gather, and account layering automatically.

---

## The Investigator: Pattern Matching

Investigative pipelines use Raphtory's temporal DFS (Depth First Search) to identify specific structural signatures.
Unlike SQL or standard Graph DBs, Pometry can match patterns that are separated by both **hops** and **time**. ### Detecting Circular Transactions This pipeline identifies "Round-Tripping" - where capital leaves an entity and returns via a convoluted chain of intermediaries within a specific temporal window. ```python def detect_circular_transactions(graph, time_window=168, amount_threshold=5000): # Filter for high-value transactional layers txn_layer = graph.layer("transactions") # Identify cycles within the temporal window (e.g., 7 days) for node in txn_layer.nodes: # Trace the path from the node, ensuring money returns # within 168 hours of the initial transaction window = node.window(node.earliest_time, node.earliest_time + time_window) circuits = window.out_edges.explode().find_cycles(max_depth=5) if circuits: # Raise an alert with high-fidelity grounding alert_handler.raise_scheme("circular_transaction", nodes=circuits.nodes) ``` --- ## The Classifier: Entity Resolution (ER) One of the most powerful pipelines is the **Temporal Identifier**. It merges disparate data sources into a unified entity by analyzing structural similarity over time. ### Classification Logic By comparing the temporal "fingerprint" of two nodes (who they talk to, when, and how frequently), Pometry can classify them as the same real-world entity even if their metadata (Name, Email) slightly differs. ```python # Semantic similarity + structural resolution potential_matches = graph.entities_by_similarity(target_node, limit=10) for match in potential_matches: # Compare temporal connection patterns (Edge Triplets) structural_score = compare_neighborhoods(target_node, match) if structural_score > 0.95: # Merge nodes into a single 'Global Identifier' graph.resolve_entities(target_node, match, strategy="accumulate") ``` --- ## Personas in Action ### The Historian: Temporal Lineage The Historian uses Raphtory's event-sourcing to re-verify the "Source of Wealth" by looking at the entire history of an entity's connections. ```python # Trace the provenance of funds over a 2-year window lineage = node.history().before(analysis_date) for event in lineage: # Identify the earliest large inflow if event.amount > THRESHOLD: investigate_source(event.src) ``` ### The Investigator: Advanced Schemes The Investigator deploys multi-hop DFS protocols to catch coordinated financial crimes. #### 1. Scatter-Gather (The Funnel) Detects when a single source "scatters" small amounts to multiple intermediaries who then "gather" the consolidated sum to a final destination. ```python def detect_scatter_gather(node, window_hours=48): # Phase 1: Identify the scatter (one -> many) out_txns = node.out_edges.window(window_hours).targets() if len(out_txns) >= MIN_TARGETS: # Phase 2: Identify the gather (many -> one) for target in out_txns: gather_point = target.out_edges.after(scatter_time).dst # Check if all roads lead to the same destination ``` #### 2. Dormant Activation Flags accounts that have been inactive for long periods (e.g., >120 days) and suddenly execute high-value outbound transfers. 
```python # Find the gap between the last two transactions gap = current_txn.time - previous_txn.time if gap > DORMANT_THRESHOLD_DAYS: # Flag as high-risk 'Sleeper Account' activation alert_handler.raise_aml("dormant_activation", node=node) ``` ### The Forecaster: Predictive Symptoms The Forecaster analyzes structural changes - like a sudden spike in **Temporal Degree Centrality** or a shift in **FastRP Embedding space** - to predict an entity's likelihood of being involved in a future risk event. --- ## Semantic Triplet & Similarity APIs Pometry provides specialized APIs for grounding AI in the **relationships** (edges) rather than just the objects (nodes). ### The Edge Triplet context When an AI agent searches for "suspicious interactions," the Pometry Vector API returns **Edge Triplets**: `(Source) --[Semantic Context]--> (Destination)`. This ensures the AI understands the *nature* of the link, not just its existence. Investigator, find any "unusual financial handshakes" between the shell company and the sanctioned entities. I have identified a sequence of transactions that match the 'Layering' structural pattern. The grounded truth involves a **Triple-Hop Chain**: * **Entity A** (Shell) -> **Intermediary B** (Retail) via 'Over-invoiced Service' * **Intermediary B** -> **Entity C** (Sanctioned) via 'Consolidated Transfer' The timing of these events (within 4 hours) suggests a coordinated flight of capital. --- ## Agentic UI: Rendering the Thought Process Pometry's `AiChat` components allow the AI to show its "work" as it navigates the graph. This creates a transparent audit trail for compliance. * **Thought Traces:** The AI shows which Python code it is about to run. * **Execution Logs:** Real-time feedback from the Raphtory engine. * **Grounded Visuals:** Automatically generated subgraphs that prove the narrative. --- ## Next Steps }> Deep dive into structural similarity and entity resolution. }> Build tools that write graph code for your agents. --- ## Core Capabilities Comparison | Feature | Standard RAG | Pometry Advanced Pipelines | | :--- | :--- | :--- | | **Agency** | Passive Retrieval | **Autonomous Tool Use (Python/GQL)** | | **Logic** | Fuzzy Similarity | **Hard Temporal & Structural Logic** | | **Output** | Text Summary | **Actionable Intelligence & Decision Support** | | **Traceability** | Black Box | **Visual Graph Grounding (Minimaps)** | --- ## Build Your Intelligence Stack }> Learn how to build the 'Python Tool' used by our agents. }> The theory behind grounding reasoning in temporal facts. --- ## Graph Intelligence > Agentic Rag Python # Agentic RAG with Python **Transform your Temporal Graph into an autonomous Intelligence Agent.** While standard RAG implementations rely on passive vector retrieval, **Agentic RAG** empowers your LLM to actively "drive" the graph. By providing the LLM with a Python execution environment and a Raphtory graph, it can write its own traversals, calculate metrics on-the-fly, and determine its own retrieval strategy. ## Core Philosophy: The LLM with a Steering Wheel In this architecture, the LLM is not just a consumer of data; it is an analyst with access to the `PersistentGraph`. ```mermaid graph LR User([User Query]) --> LLM[Agent] LLM -->|Writes & Executes Python| RG[(Raphtory PersistentGraph)] RG -->|Prints Execution Results| LLM LLM -->|Final Answer| User ``` --- ## Implementation Guide To implement Agentic RAG, you expose a Python tool to your LLM (using frameworks like LangChain, LlamaIndex, or Vercel AI SDK). ### 1. 
The Python Execution Tool Your backend should provide a secure environment where the LLM can execute Raphtory code. Here is a simplified pattern: ```python from raphtory import PersistentGraph # Load the production graph g = PersistentGraph.load_from_file("/path/to/graph/master") # The LLM will provide this 'code' input # Example LLM-generated code: # print(f"Nodes: {g.count_nodes()}") # print(f"Sample Nodes: {itertools.islice(g.nodes, 3).collect()}") ``` > [!IMPORTANT] > **Performance Tip:** Since graphs can be massive, always instruct your LLM (via the System Prompt) to use iterators like `itertools.islice` rather than collecting the entire graph into memory. ### 2. Teaching the Agent to Search (FastRP) You can significantly boost the agent's power by teaching it how to use Raphtory's native embedding algorithms like **FastRP** for similarity search. In your Tool's documentation for the LLM, provide this recipe: ```python from raphtory.algorithms import fast_rp # 1. Generate embeddings result = fast_rp(g, dimension=64, iterations=1.0) df = result.to_df() # 2. Get the vector for our target node query_vector = np.array(df.loc[df["node"] == "TARGET_ID", "value"].iloc[0]) # 3. Compute cosine similarity across candidates # (Simplified logic for the LLM to use) X = np.vstack(df["value"].to_numpy()) similarities = X @ query_vector / (np.linalg.norm(X, axis=1) * np.linalg.norm(query_vector)) # 4. Get Top 5 top5_idx = np.argsort(similarities)[-5:][::-1] print(df["node"].to_numpy()[top5_idx]) ``` --- ## Technical Patterns ### Dynamic Introspection Don't just give the LLM the graph. Give it the **Metadata**. Before the conversation starts, run a "Pre-flight" script to extract: * Unique node types and edge layers. * Property keys and sample values. * The current temporal range of the graph. Inject this into the System Prompt so the LLM knows exactly which properties it can filter on. ### Subgraph Slicing Instruct the agent to "Zoom in" before analyzing. A common pattern is to have the agent identify a small set of interesting "Seed Nodes" and then create a subgraph of their 2-hop neighborhood for detailed reasoning. ```python # LLM generated logic seed_nodes = [n.id for n in g.nodes if n.properties.get("high_risk")] neighborhood = g.subgraph(g.node(seed_nodes).neighbours.collect()) # Perform complex analysis on the smaller neighborhood ``` --- ## Comparison: Agentic vs. Standard | Strategy | Standard RAG | Agentic Raphtory RAG | | :--- | :--- | :--- | | **Logic** | Fixed Retrieval Chain | **Dynamic Discovery** | | **Calculation** | Pre-calculated vectors | **On-the-fly graph metrics** (Centrality, Communities) | | **Multi-hop** | Limited/Fixed | **Arbitrary depth traversal** | | **Temporal** | Static snapshot | **Time-traveling queries** (using `.at()` or `.window()`) | --- ## Next Steps }> Learn how to host a managed Vector Search API over your graph. }> Define how graph objects are converted to natural language. --- ## Graph Intelligence > High Order Matching # High-Order Matching & Entity Resolution **Resolve duplicate entities and detect coordinated groups by analyzing the structural "fingerprint" of the graph.** At scale, data is often fragmented. The same real-world entity might appear as five different nodes across different systems. Pometry's high-order matching pipelines use a **Multi-Level Similarity** approach to bridge the gap where simple string matching fails. --- ## Multi-Level Similarity Analysis True entity resolution requires balancing metadata accuracy with structural context. 
Pometry's classification pipeline analyzes entities across four distinct levels. ### 1. Linguistic & Fuzzy Matching We use a combination of `SequenceMatcher` and `Levenshtein` distance to handle typos in names, addresses, and document IDs. ```python # Handle flipped names, initials, and typos score = name_similarity(person_1_profile, person_2_profile) # Fuzzy match on addresses and postcodes geo_score = text_similarity_condition('address', p1, p2) ``` ### 2. Temporal Distance For fields like Date of Birth or Incorporation, we apply a **decay penalty**. Small differences (1-2 days) might just be timezone or entry errors, while large differences (years) trigger a hard rejection. ```python def date_penalty(diff_days, max_diff=365): # Linear decay: 1.0 at 0 days, 0.0 at max_diff return max(0.0, 1 - abs(diff_days) / max_diff) ``` ### 3. Structural "Fingerprinting" The most powerful level is the **Neighborhood Profile**. We compare the 1-hop connections of two nodes using Jaccard Similarity. If two "John Smiths" have never met, but they bank with the same branch, use the same lawyer, and transfer money to the same overseas shell - they are structurally identical. ```python def relationship_comparison(graph, p1, p2, layers=['banks_with', 'registered_at']): # Intersection / Union of first-hop neighbors n1 = set(graph.node(p1.id).neighbours) n2 = set(graph.node(p2.id).neighbours) jaccard = len(n1 & n2) / len(n1 | n2) return jaccard ``` --- ## Classification: Theft vs. Conflict When the resolution pipeline detects a mismatch (e.g., two people claiming the same passport), it classifies the event into one of two strategic buckets. ### Identity Theft Triggered when a minority group of nodes attempts to "link" to a document ID already claimed by a established majority. Pometry flags the minority nodes as `identity_thief`. ### Identity Conflict Triggered when two equally sized groups claim the same ID, suggesting a data collision or a systemic error that requires human intervention. Classifier, run a resolution pass on the 'Private Banking' segment. Resolution complete. **Identified Case:** ID_CONF_442 **Status:** Identity Conflict **Detail:** Two distinct entities (verified via different banking histories) are using the same National ID. **Structural Evidence:** Their neighborhoods have 0% overlap, suggesting this is a genuine ID collision rather than a duplicate person. --- ## The Entity Resolution API Pometry provides an atomic `resolve_entities` API to materialize these findings back into the graph. ```python # Strategy: 'accumulate' preserves the history of both entities er_graph.resolve_entities(node_primary, node_secondary, strategy="accumulate") ``` See how the Investigator uses these resolved entities. Find potential duplicates via semantic similarity. --- ## Graph Intelligence > Neurosymbolic Rag # NeuroSymbolic RAG **The next generation of RAG: Combining the precision of Symbolic temporal logic with the reasoning power of Neural language models.** Standard RAG (Retrieval-Augmented Generation) is often brittle when applied to complex, evolving datasets. It relies on vector similarity, which lacks the causal and temporal awareness required for mission-critical intelligence. Pometry's **NeuroSymbolic RAG** natively integrates the temporal graph (Symbolic) to provide facts, causality, and context directly to the LLM (Neural). ## Why NeuroSymbolic? 
| Feature | Standard Vector RAG | Pometry NeuroSymbolic RAG |
| :--- | :--- | :--- |
| **Logic** | Fuzzy similarity | **Hard temporal causality** |
| **Context** | Nearby chunks | **Full network state across time** |
| **Accuracy** | Prone to hallucinations | **Fact-grounded in graph structure** |
| **"When" Questions** | Near impossible | **Native capability** |

---

## The Architecture

NeuroSymbolic RAG treats the temporal graph as a **High-Fidelity World Model** that the LLM can query and reason about.

```mermaid
graph TD
    User([User Query]) --> LLM[Reasoning Agent]
    LLM -->|Symbolic Query| TG[(Temporal Graph)]
    TG -->|Causal Context| LLM
    LLM -->|Neural Synthesis| Response([Grounded Response])
```

### 1. Symbolic Retrieval (Temporal Graph)

Instead of searching for "similar text," the system evaluates symbolic rules over time.

*Example: "Find all accounts created 24h before transaction X that have a shared device ID."*

### 2. Neural Reasoning (LLM)

The LLM receives the **Temporal Path** as a structured fact. It does not need to guess if the connection exists; it performs higher-order reasoning on the *meaning* of that connection.

---

## Intelligence Workflow

### State the Intelligence Goal

Identify the "Why" behind a pattern.

*Query: "Why was this transaction flagged as high risk?"*

### Symbolic Temporal Trace

The system traverses the graph to find the sequence of events leading to the risk.

```python
# Symbolic trace of the attack chain (conceptual)
attack_chain = g.at(TX_TIME).node(SENDER).trace_backwards(hops=3)
```

### NeuroSymbolic Synthesis

The structured trace is passed to the LLM agent to generate a human-readable investigative narrative.

```python
# LLM Synthesis (Conceptual)
narrative = llm.generate(
    context=attack_chain.to_narrative_context(),
    task="Generate a SAR narrative explaining the temporal causality."
)
```

## Real-World Advantage

> [!TIP]
> While legacy systems require complex manual "wiring" of LLMs to their data silos, Pometry's architecture is **natively integrated**. The graph *is* the memory of the agent.

### Use Case: Cyber Threat Hunting

- **Standard RAG**: "Find logs similar to 'Unauthorized Login'." (Returns 10,000 hits).
- **NeuroSymbolic RAG**: "Analyze the lateral movement sequence starting from the initial breach at 10:04 AM. What systems were touched before the firewall was updated at 10:15 AM?"

---

## Next Steps

- Build autonomous agents that write graph code.
- Host a managed GraphQL + Vector search interface.

---

## Graph Intelligence > Semantic Templates

# Semantic Templating

**Bridge the gap between Graph Data and LLM Reasoning.**

To make your graph data useful for an LLM, it must be converted from raw properties and IDs into semantic natural language. Raphtory uses a powerful **Jinja2-based templating system** to define how nodes and edges should be "narrated" for retrieval.

## The Narrative Transformation

In a standard database, a node might look like this:

`ID: 1024, Type: Person, Props: {name: "John", dob: 1985...}`

Through **Semantic Templating**, it becomes:

*"John is a 39-year-old financial analyst based in London. He has been a customer since 2018."*

---

## Defining Node Templates

Node templates allow you to define a specific narrative for each node type. You can use logic (if/else) and filters to handle different shapes of data.
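Because these are plain Jinja2 templates, you can preview a rendering locally before registering anything with the server. A minimal sketch (the `NODE_TEMPLATE` it renders is defined just below; the sample property values are invented):

```python
from jinja2 import Template

# Render the node template for a single, made-up Person node
preview = Template(NODE_TEMPLATE).render(
    name="100092401",
    node_type="Person",
    properties={
        "full_name": "John Smith",
        "date_of_birth": 489024000000,  # epoch milliseconds (illustrative)
        "nationality": "GB",
        "occupation": "financial analyst",
        "annual_income": "£85,000",
    },
)
print(preview)
```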
```python
NODE_TEMPLATE = """
{% if node_type == "Person" %}
{{ name }} is a person with the following details:
Full name: {{ properties.full_name }}
{# 1761842000289 is a hard-coded "now" in epoch milliseconds; 31536000000 ms = 1 year #}
Age: {{ ((1761842000289 - properties.date_of_birth) / 31536000000) | int }}
Nationality: {{ properties.nationality }}
Employment: {{ properties.occupation }} earning {{ properties.annual_income }}
{% elif node_type == "Company" %}
{{ name }} is a company registered as {{ properties.trading_name }}.
Industry: {{ properties.primary_industry }}
Risk Status: {{ properties.aml_alert_flag }}
{% else %}
Entity ID {{ name }} is a {{ node_type }}.
{% endif %}
"""
```

---

## Defining Edge Templates

Edges represent the "History" and "Actions" between entities. Their templates often focus on the direction and temporal nature of the relationship.

```python
EDGE_TEMPLATE = """
{% if layers[0] == "transfer" %}
{{ src.name }} transferred funds to {{ dst.name }} on {{ history[0] | datetimeformat }}
{% elif layers[0] == "works_for" %}
{{ src.name }} has worked for {{ dst.name }} since {{ history[0] | datetimeformat }}
{% else %}
{{ src.name }} is connected to {{ dst.name }} via {{ layers[0] }}
{% endif %}
"""
```

---

## Technical Context Variables

When the template is rendered, the following variables are available:

| Variable | Description | Example |
| :--- | :--- | :--- |
| `name` | The unique ID of the node/edge. | `"100092401"` |
| `node_type` | (Node Only) The defined type of the node. | `"Person"` |
| `layers` | (Edge Only) List of layers this relationship exists on. | `["transfer"]` |
| `properties` | Dictionary of the latest properties. | `properties.email` |
| `history` | (Edge Only) Sorted list of timestamps for events on this edge. | `history[0]` |
| `src` / `dst` | (Edge Only) The source and destination node objects. | `src.properties.name` |

---

## Best Practices for LLM Clarity

### 1. Computed Properties

Don't just provide timestamps; provide **Duration** or **Age**. LLMs are often bad at temporal math, so calculate the "Years of relationship" inside the template using Jinja filters.

### 2. Semantic Labels

Use human-readable labels for categories. Instead of `category_id: 4`, map it to `Risk Level: Critical`.

### 3. Contextual Density

Don't include every property. Only include what the LLM needs for reasoning. Raw KYC dates or internal IDs often create noise that distracts from the core patterns.

---

## Implementation

The templates are registered when initializing the `GraphServer`:

```python
server = server.with_vectorised_graphs(
    graph_names=["master"],
    nodes=NODE_TEMPLATE,
    edges=EDGE_TEMPLATE,
)
```

---

## Next Steps

- Host these templates as a searchable API.
- Use these logic patterns directly in Python scripts.

---

## Graph Intelligence > Vectorised Graph Api

# Vectorised Graph API

**Host a high-performance semantic search interface over your temporal graph.**

The **Vectorised Graph API** (available in Pometry Enterprise) provides a managed bridge between Raphtory's temporal structures and external Vector Databases (like **Qdrant**, **Milvus**, or **Pinecone**). It allows you to query your graph using natural language via standard GraphQL endpoints.

## Why Vectorise a Graph?

While graph traversals are excellent for finding relationships, natural language queries often start with a concept (e.g., *"Show me accounts that look like shell companies"*). The Vectorised Graph API allows you to:

1. **Retrieve by Semantic Similarity:** Find nodes or edges that "meaningfully" match a query.
2. **Context-Aware Expansion:** Once a similar node is found, automatically pull its neighborhood and convert it into an LLM-ready narrative context.
3. **Proxy External Vector Stores:** Raphtory acts as the orchestrator, keeping the graph structure synchronized with the vector embeddings stored in your preferred DB.

---

## Technical Architecture

The architecture consists of the Raphtory **GraphServer** configured with an embedding provider and a connection to a vector data store.

```mermaid
graph TD
    User([User NLP Query]) --> GQL[Raphtory GraphQL API]
    GQL --> VDB[(External Vector DB)]
    VDB -->|Node IDs| Raphtory[(Raphtory Temporal Graph)]
    Raphtory -->|Semantic Context| GQL
    GQL --> Response([LLM-Ready Entities])
```

---

## Setting up the GraphServer

To enable vector search, you configure the `GraphServer` to manage embeddings for your nodes and edges.

```python
from raphtory import graphql

server = graphql.GraphServer(work_dir="graphs")

# Configure the embedding pipeline
server = server.set_embeddings(
    cache="/tmp/vector-cache",
    # You can specify custom models here
    # embedding_model="text-embedding-3-small",
    nodes=True,
    edges=True,
)

# Define which graphs to host as vectorised endpoints
server = server.with_vectorised_graphs(
    graph_names=["production_master"],
    nodes=NODE_TEMPLATE,
    edges=EDGE_TEMPLATE,
)

server.run()
```

---

## Querying via GraphQL

Once hosted, you can perform hybrid searches using the `entitiesBySimilarity` query. This is a special endpoint that combines vector retrieval with graph metadata.

### Example: Semantic Search

```graphql
query {
  vectorisedGraph(path: "production_master") {
    entitiesBySimilarity(query: "High risk financial activity", limit: 5) {
      expandEntitiesBySimilarity(query: "Suspicious", limit: 10) {
        getDocuments {
          content # Semantic text generated from Templates
          entity {
            ... on Node {
              name
              nodeType
            }
          }
        }
      }
    }
  }
}
```

> [!NOTE]
> **Pometry Enterprise:** This API is optimized for production workloads, featuring automatic re-indexing as your temporal graph grows and built-in integration with corporate LLM providers.

---

## Key Capabilities

### Hybrid Retrieval

Instead of just returning a list of nodes, the API can return **Entities with Context**. This means for every search result, you get the surrounding semantic connections (edges) that the LLM needs to understand *why* the node was retrieved.

### Pluggable Infrastructure

* **Vector DBs:** Support for Qdrant, Milvus, Chroma, and more.
* **Models:** Use local embeddings (via Sentence-Transformers) or remote models (OpenAI, Anthropic, Cohere).
* **Pipelines:** Compatible with **LangChain** and **LlamaIndex** data connectors.

---

## Next Steps

- Learn how to define the Jinja templates that turn nodes into text.
- The low-level Python approach for maximum flexibility.

---

## Graph Intelligence > Workflows

# Analytics Workflows

**Deploy graph intelligence into production. Move from manual exploration to automated, real-time analytics pipelines.**

Graph intelligence is only valuable when it is accessible and actionable for your team. Raphtory supports a variety of workflows, from ad-hoc data science to high-scale production monitoring.

**Advanced Agents:** For more autonomous workflows, see our [Advanced Intelligence Pipelines](./advanced-pipelines), which utilize agentic roles like the **Historian**, **Forecaster**, and **Investigator** to automate multi-step investigations.

## Common Workflows

### 1. Exploratory Analysis

Interactive investigation and hypothesis testing for data scientists and analysts.
* **Platform**: Jupyter Notebooks or the interactive GraphQL UI. * **Use Case**: "Is there a relationship between transaction velocity and account age in our recent fraud cases?" * **Result**: Refined temporal motifs that can be promoted to automated alerts. ### 2. Automated Alerting Real-time monitoring of live data streams to detect specific temporal patterns as they happen. * **Operation**: Raphtory runs continuous windowed queries against an incoming stream (e.g., Kafka). * **Use Case**: Alert a security analyst immediately when a "Beaconing" pattern is detected between an internal host and a known C2 server. * **Result**: Significant reduction in "Mean Time to Detect" (MTTD). ### 3. ML Feature Engineering Using the temporal graph to enrich machine learning models with sophisticated structural features. * **Operation**: Generating graph embeddings or node metrics (like temporal PageRank) to use as inputs for traditional ML classifiers. * **Use Case**: Feeding network centrality scores into a churn prediction model to account for the "social" nature of user departures. * **Result**: 15-25% improvement in model accuracy over non-graph alternatives. ### 4. Scheduled Reporting Generating periodic snapshots of network health and risk for executive dashboards. * **Operation**: Automated scripts that calculate community health scores or infrastructure bottleneck metrics every 24 hours. * **Use Case**: Daily compliance reports showing "High Risk" money flows that crossed international borders. --- ## Deployment Stages ### Exploration Connect to your data source and identify the temporal patterns that signify meaningful events in your domain. ### Validation Test your patterns against historical data using Raphtory's "point-in-time" playback to measure precision and recall. ### Automation Integrate Raphtory into your data production environment (e.g., via the [Python API](/docs/getting-started/quickstart)) and pipe results to your existing dashboarding or alerting tools. **Integration Tip**: Raphtory integrates seamlessly with the Python data stack (Pandas, PyTorch Geometric, Scikit-learn). See the [Ecosystem section](/docs/ecosystem) for more details. ============================================================ # Section: Production ============================================================ --- ## Production > Deployment > Index # Deployment Deploy Raphtory to production environments. } title="Docker Compose" href="/docs/production/deployment/docker-compose" children="Local and single-server deployments with Docker." /> } title="Helm Charts" href="/docs/production/deployment/helm-charts" children="Kubernetes deployments using Helm." /> } title="HPA Scaling" href="/docs/production/deployment/hpa-scaling" children="Horizontal Pod Autoscaling for Kubernetes." /> } title="Resource Limits" href="/docs/production/deployment/resource-limits" children="Configure memory and CPU limits." /> } title="AWS" href="/docs/production/deployment/aws" children="Deploy to Amazon Web Services." /> } title="GCP" href="/docs/production/deployment/gcp" children="Deploy to Google Cloud Platform." /> } title="Azure" href="/docs/production/deployment/azure" children="Deploy to Microsoft Azure." /> --- ## Production > Index # Production Raphtory **Deploy, monitor, and scale graph intelligence in production** From Docker Compose to Kubernetes, comprehensive guides for running Raphtory at scale. 
## Production Checklist Before deploying Raphtory to production: - [ ] **Deployment**: Containerized with resource limits - [ ] **Monitoring**: Metrics, logs, and alerts configured - [ ] **Performance**: Benchmarked for your data scale - [ ] **Security**: Authentication, authorization, network policies - [ ] **Disaster Recovery**: Backups and restoration tested --- ## Quick Start by Environment Single-server deployments with monitoring stack Scalable deployments with Helm charts and auto-scaling AWS, GCP, and Azure-specific deployment guides --- ## Production Topics ### 🚀 Deployment Get Raphtory running in your infrastructure: - **[Docker Compose](/docs/production/deployment/docker-compose)** - Single server setup with monitoring - **[Kubernetes with Helm](/docs/production/deployment/helm-charts)** - Production K8s deployment - **[Auto-Scaling](/docs/production/deployment/hpa-scaling)** - Horizontal pod autoscaling - **[Resource Limits](/docs/production/deployment/resource-limits)** - CPU/memory configuration - **[AWS Deployment](/docs/production/deployment/aws)** - EKS, ECS, EC2 - **[GCP Deployment](/docs/production/deployment/gcp)** - GKE, Cloud Run - **[Azure Deployment](/docs/production/deployment/azure)** - AKS, Container Instances --- ### 📊 Observability Monitor health and troubleshoot issues: - **[Metrics](/docs/production/observability/metrics)** - What to monitor for graph intelligence - **[Prometheus + Grafana](/docs/production/observability/prometheus-grafana)** - Complete monitoring stack - **[Logging](/docs/production/observability/logging)** - Structured logging best practices - **[Distributed Tracing](/docs/production/observability/tracing)** - Request tracing across services **Key Metrics to Track**: - Graph size (nodes, edges) - Algorithm runtime - Memory usage per algorithm - Query throughput - P95/P99 latency --- ### ⚡ Performance Optimize for your scale: - **[Benchmarking](/docs/production/performance/benchmarking)** - How to measure performance - **[Optimization Guide](/docs/production/performance/optimization-guide)** - Query and algorithm tuning - **[Scaling Patterns](/docs/production/performance/scaling-patterns)** - Horizontal vs vertical scaling **Performance Baselines** (single server, 16 cores, 64GB RAM): - **Graph building**: 1M edges/sec from Pandas - **PageRank**: 10M edges in ~5 seconds (20 iterations) - **Louvain**: 10M edges in ~8 seconds - **Memory**: ~500MB per 1M edges --- ### 🔒 Security Protect your graph intelligence: - **[Authentication](/docs/production/security/authentication)** - User authentication methods - **[Authorization](/docs/production/security/authorization)** - Role-based access control - **[Network Security](/docs/production/security/network-security)** - Network policies, TLS - **[Compliance](/docs/production/security/compliance)** - GDPR, SOC2, HIPAA considerations --- ## Architecture Patterns ### Pattern 1: Batch Intelligence **Use case**: Daily fraud detection, nightly risk scoring ```text ┌─────────────┐ ┌──────────────┐ ┌────────────┐ │ Data Lake │────▶│ Raphtory │────▶│ Results │ │ (Snowflake)│ │ (K8s Job) │ │ (BQ/S3) │ └─────────────┘ └──────────────┘ └────────────┘ │ │ │ └──────────────────┴────────────────────┘ Airflow/Dagster ``` --- ### Pattern 2: Real-Time Intelligence **Use case**: Live fraud detection, instant risk scoring ```text ┌────────────┐ ┌──────────────┐ ┌────────────┐ │ Kafka │────▶│ Raphtory │────▶│ Redis │ │ (Events) │ │ (Streaming) │ │ (Scores) │ └────────────┘ └──────────────┘ └────────────┘ │ ┌─────▼─────┐ │ 
Grafana │ │ Dashboard │ └───────────┘ ``` --- ### Pattern 3: Interactive Analytics **Use case**: GraphQL exploration, analyst workflows ```text ┌────────────┐ ┌──────────────┐ ┌────────────┐ │ Analysts │────▶│ Raphtory │────▶│ Graph DB │ │ (Browser) │ │ GraphQL API │ │ (Persist) │ └────────────┘ └──────────────┘ └────────────┘ │ ┌─────▼─────┐ │ Load │ │ Balancer │ └───────────┘ ``` --- ## Getting Started ### 1. Choose Your Deployment - **Small scale** ( 10M edges): Docker Compose - **Medium scale** (10M-100M edges): Kubernetes (3-5 nodes) - **Large scale** (100M+ edges): Kubernetes cluster with auto-scaling ### 2. Set Up Monitoring Start with [Prometheus + Grafana](/docs/production/observability/prometheus-grafana) to track: - Graph intelligence job completion - Memory usage trends - Algorithm performance ### 3. Benchmark Your Workload Use [benchmarking tools](/docs/production/performance/benchmarking) to: - Establish performance baselines - Identify bottlenecks - Plan capacity ### 4. Secure Your Deployment Implement [authentication](/docs/production/security/authentication) and [network policies](/docs/production/security/network-security). --- ## Production Best Practices ### Resource Management - **Memory**: Allocate 2x graph size in RAM for algorithms - **CPU**: Scale horizontally for parallel workloads - **Storage**: Use SSD for persistent graphs ### Reliability - **Graceful degradation**: Cache algorithm results - **Circuit breakers**: Protect downstream services - **Retries**: Idempotent graph operations ### Operations - **Version control**: Pin Raphtory versions in production - **Rolling updates**: Zero-downtime deployments - **Rollback plan**: Test rollback procedures --- ## Example: Production-Ready Docker Compose ```yaml version: '3.8' services: raphtory: image: raphtory/raphtory:latest container_name: raphtory-prod restart: unless-stopped ports: - "8000:8000" environment: - LOG_LEVEL=INFO - MAX_MEMORY=32G volumes: - ./graphs:/data/graphs - ./logs:/var/log/raphtory deploy: resources: limits: cpus: '8' memory: 32G reservations: cpus: '4' memory: 16G healthcheck: test: ["CMD", "curl", "-f", "http://localhost:8000/health"] interval: 30s timeout: 10s retries: 3 prometheus: image: prom/prometheus:latest ports: - "9090:9090" volumes: - ./prometheus.yml:/etc/prometheus/prometheus.yml - prometheus-data:/prometheus grafana: image: grafana/grafana:latest ports: - "3000:3000" volumes: - grafana-data:/var/lib/grafana - ./grafana-dashboards:/etc/grafana/provisioning/dashboards volumes: prometheus-data: grafana-data: ``` [Full deployment guide →](/docs/production/deployment/docker-compose) --- ## Support & Resources - **[Performance Tuning](/docs/production/performance/optimization-guide)** - Optimize for your use case - **[Troubleshooting](/docs/production/observability/logging)** - Common issues and solutions - **[Community Slack](https://join.slack.com/t/raphtory/...)** - Get help from the team --- ## Production > Observability > Index # Observability Monitor and debug Raphtory in production. } title="Metrics" href="/docs/production/observability/metrics" children="Key metrics and instrumentation." /> } title="Prometheus & Grafana" href="/docs/production/observability/prometheus-grafana" children="Set up monitoring dashboards." /> } title="Logging" href="/docs/production/observability/logging" children="Configure structured logging." /> } title="Tracing" href="/docs/production/observability/tracing" children="Distributed tracing with OpenTelemetry." 
/> --- ## Production > Performance > Index # Performance Optimize and tune Raphtory for production workloads. } title="Optimization Guide" href="/docs/production/performance/optimization-guide" children="Best practices for performance tuning." /> } title="Benchmarking" href="/docs/production/performance/benchmarking" children="Measure and compare performance." /> } title="Scaling Patterns" href="/docs/production/performance/scaling-patterns" children="Strategies for scaling graph workloads." /> --- ## Production > Security > Index # Security Secure your Raphtory deployment. } title="Authentication" href="/docs/production/security/authentication" children="Configure authentication mechanisms." /> } title="Authorization" href="/docs/production/security/authorization" children="Role-based access control." /> } title="Network Security" href="/docs/production/security/network-security" children="TLS, firewalls, and network policies." /> } title="Compliance" href="/docs/production/security/compliance" children="SOC2, GDPR, and audit logging." /> --- ## Production > Deployment > Aws # AWS Deployment **Deploy Raphtory on Amazon Web Services** Run graph intelligence on EKS, ECS, or EC2. ## EKS (Elastic Kubernetes Service) ### 1. Create EKS Cluster ```bash eksctl create cluster \ --name raphtory-cluster \ --region us-east-1 \ --nodegroup-name raphtory-nodes \ --node-type m5.2xlarge \ --nodes 3 \ --nodes-min 2 \ --nodes-max 10 \ --managed ``` ### 2. Deploy with Helm ```bash # Update kubeconfig aws eks update-kubeconfig --name raphtory-cluster --region us-east-1 # Deploy Raphtory helm install raphtory raphtory/raphtory \ --namespace raphtory \ --create-namespace ``` ## ECS (Elastic Container Service) ```json { "family": "raphtory", "containerDefinitions": [{ "name": "raphtory", "image": "raphtory/raphtory:latest", "memory": 32768, "cpu": 8192, "essential": true, "portMappings": [{ "containerPort": 8000, "protocol": "tcp" }], "environment": [ {"name": "LOG_LEVEL", "value": "INFO"} ] }], "requiresCompatibilities": ["FARGATE"], "networkMode": "awsvpc", "cpu": "8192", "memory": "32768" } ``` ## Integration with AWS Services ### S3 Data Lake ```python from raphtory import Graph # Load from S3 s3 = boto3.client('s3') obj = s3.get_object(Bucket='my-bucket', Key='transactions.csv') df = pd.read_csv(obj['Body']) # Build graph g = Graph() g.load_edges_from_pandas(df, src="from", dst="to", time="ts") # Write results back to S3 results.to_csv('s3://my-bucket/fraud-rings.csv') ``` ### CloudWatch Monitoring ```yaml # Enable CloudWatch Container Insights eksctl utils update-cluster-logging \ --name raphtory-cluster \ --enable-types all \ --approve ``` ## Best Practices - Use **EKS** for scalability - **EBS volumes** for persistent storage (gp3 recommended) - **Auto Scaling Groups** for node management - **VPC** isolation for security ## See Also - [Kubernetes Helm](./helm-charts) - EKS deployment --- ## Production > Deployment > Azure # Azure Deployment **Deploy Raphtory on Microsoft Azure** Run on AKS with Azure services integration. 
## AKS (Azure Kubernetes Service) ```bash # Create resource group az group create --name raphtory-rg --location eastus # Create AKS cluster az aks create \ --resource-group raphtory-rg \ --name raphtory-cluster \ --node-count 3 \ --node-vm-size Standard_D8s_v3 \ --enable-cluster-autoscaler \ --min-count 2 \ --max-count 10 # Get credentials az aks get-credentials --resource-group raphtory-rg --name raphtory-cluster # Deploy helm install raphtory raphtory/raphtory \ --namespace raphtory \ --create-namespace ``` ## Azure Blob Storage ```python from azure.storage.blob import BlobServiceClient from raphtory import Graph # Load from Blob blob_service = BlobServiceClient.from_connection_string(conn_str) blob_client = blob_service.get_blob_client(container="data", blob="transactions.csv") df = pd.read_csv(blob_client.download_blob()) g = Graph() g.load_edges_from_pandas(df, src="from", dst="to", time="ts") ``` ## Best Practices - Use **Premium SSD** for storage - **Azure Monitor** for observability - **Managed Identity** for authentication ## See Also - [Kubernetes Helm](./helm-charts) --- ## Production > Deployment > Docker Compose # Docker Compose Deployment **Single-server Raphtory with monitoring stack** Production-ready Docker Compose setup with Prometheus, Grafana, and best practices. ## Complete Stack ```yaml version: '3.8' services: raphtory: image: raphtory/raphtory:latest container_name: raphtory-prod restart: unless-stopped ports: - "8000:8000" environment: - LOG_LEVEL=INFO - RUST_LOG=raphtory=info - MAX_MEMORY=32G - WORKER_THREADS=8 volumes: - ./data/graphs:/data/graphs - ./logs:/var/log/raphtory - ./config:/etc/raphtory deploy: resources: limits: cpus: '8' memory: 32G reservations: cpus: '4' memory: 16G healthcheck: test: ["CMD", "curl", "-f", "http://localhost:8000/health"] interval: 30s timeout: 10s retries: 3 start_period: 40s networks: - raphtory-network prometheus: image: prom/prometheus:latest container_name: prometheus restart: unless-stopped ports: - "9090:9090" command: - '--config.file=/etc/prometheus/prometheus.yml' - '--storage.tsdb.path=/prometheus' - '--storage.tsdb.retention.time=30d' volumes: - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro - prometheus-data:/prometheus networks: - raphtory-network grafana: image: grafana/grafana:latest container_name: grafana restart: unless-stopped ports: - "3000:3000" environment: - GF_SECURITY_ADMIN_PASSWORD=admin - GF_USERS_ALLOW_SIGN_UP=false volumes: - grafana-data:/var/lib/grafana - ./grafana/dashboards:/etc/grafana/provisioning/dashboards:ro - ./grafana/datasources:/etc/grafana/provisioning/datasources:ro networks: - raphtory-network node-exporter: image: prom/node-exporter:latest container_name: node-exporter restart: unless-stopped ports: - "9100:9100" networks: - raphtory-network networks: raphtory-network: driver: bridge volumes: prometheus-data: grafana-data: ``` ## Prometheus Configuration Create `prometheus.yml`: ```yaml global: scrape_interval: 15s evaluation_interval: 15s scrape_configs: - job_name: 'raphtory' static_configs: - targets: ['raphtory:8000'] - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] - job_name: 'node' static_configs: - targets: ['node-exporter:9100'] ``` ## Grafana Datasource Create `grafana/datasources/prometheus.yml`: ```yaml apiVersion: 1 datasources: - name: Prometheus type: prometheus access: proxy url: http://prometheus:9090 isDefault: true ``` ## Deployment ```bash # Start stack docker-compose up -d # Check logs docker-compose logs -f raphtory # Access services # 
Raphtory: http://localhost:8000 # Grafana: http://localhost:3000 (admin/admin) # Prometheus: http://localhost:9090 # Stop stack docker-compose down # With volume cleanup docker-compose down -v ``` ## Resource Sizing | Graph Size | CPU | Memory | Storage | |------------|-----|--------|---------| | 10M edges | 4 cores | 16GB | 50GB SSD | | 10-50M edges | 8 cores | 32GB | 200GB SSD | | 50-100M edges | 16 cores | 64GB | 500GB SSD | ## Best Practices 1. **Persistence**: Mount `/data/graphs` for persistent storage 2. **Logging**: Centralize logs to `/var/log/raphtory` 3. **Backups**: Schedule regular backups of graph data 4. **Updates**: Use version pinning for production 5. **Monitoring**: Configure Grafana alerts ## See Also - [Kubernetes Deployment](./helm-charts) - Scale beyond single server - [Prometheus + Grafana](../observability/prometheus-grafana) - Detailed monitoring setup --- ## Production > Deployment > Gcp # GCP Deployment **Deploy Raphtory on Google Cloud Platform** Run on GKE with BigQuery and Cloud Storage integration. ## GKE (Google Kubernetes Engine) ```bash # Create GKE cluster gcloud container clusters create raphtory-cluster \ --region us-central1 \ --num-nodes 3 \ --machine-type n2-standard-8 \ --enable-autoscaling \ --min-nodes 2 \ --max-nodes 10 # Get credentials gcloud container clusters get-credentials raphtory-cluster # Deploy helm install raphtory raphtory/raphtory \ --namespace raphtory \ --create-namespace ``` ## BigQuery Integration ```python from google.cloud import bigquery from raphtory import Graph, algorithms client = bigquery.Client() # Load from BigQuery query = "SELECT * FROM `project.dataset.transactions` WHERE date = CURRENT_DATE()" df = client.query(query).to_dataframe() # Graph intelligence g = Graph() g.load_edges_from_pandas(df, src="from", dst="to", time="timestamp") communities = algorithms.louvain(g) # Write to BigQuery results_df = pd.DataFrame([ {"account": n.name, "community": communities.get(n.name)} for n in g.nodes() ]) job = client.load_table_from_dataframe(results_df, "project.dataset.communities") ``` ## Cloud Storage ```python from google.cloud import storage # Load graph bucket = storage.Client().bucket('my-bucket') blob = bucket.blob('graph-data.csv') df = pd.read_csv(blob.open('r')) ``` ## See Also - [BigQuery Integration](/docs/ecosystem/data-platforms/bigquery) - Data platform integration --- ## Production > Deployment > Helm Charts # Kubernetes Deployment with Helm **Production Raphtory on Kubernetes** Deploy Raphtory using Helm charts with auto-scaling, monitoring, and high availability. 
## Prerequisites ```bash # Install Helm curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash # Verify kubectl access kubectl cluster-info ``` ## Quick Start ```bash # Add Raphtory Helm repository (when available) helm repo add raphtory https://charts.raphtory.com helm repo update # Install Raphtory helm install my-raphtory raphtory/raphtory \ --namespace raphtory \ --create-namespace \ --set resources.limits.memory=32Gi \ --set resources.limits.cpu=8 \ --set replicaCount=3 ``` ## Helm Values Create `values.yaml`: ```yaml replicaCount: 3 image: repository: raphtory/raphtory tag: "latest" pullPolicy: IfNotPresent resources: limits: cpu: 8 memory: 32Gi requests: cpu: 4 memory: 16Gi autoscaling: enabled: true minReplicas: 2 maxReplicas: 10 targetCPUUtilizationPercentage: 70 targetMemoryUtilizationPercentage: 80 persistence: enabled: true storageClass: "fast-ssd" size: 100Gi accessMode: ReadWriteOnce service: type: LoadBalancer port: 8000 ingress: enabled: true className: nginx annotations: cert-manager.io/cluster-issuer: letsencrypt-prod hosts: - host: raphtory.example.com paths: - path: / pathType: Prefix tls: - secretName: raphtory-tls hosts: - raphtory.example.com monitoring: enabled: true serviceMonitor: enabled: true livenessProbe: httpGet: path: /health port: 8000 initialDelaySeconds: 30 periodSeconds: 10 readinessProbe: httpGet: path: /ready port: 8000 initialDelaySeconds: 20 periodSeconds: 5 env: - name: LOG_LEVEL value: "INFO" - name: RUST_LOG value: "raphtory=info" - name: MAX_MEMORY value: "28G" # Leave headroom ``` ## Install with Custom Values ```bash helm install my-raphtory raphtory/raphtory \ --namespace raphtory \ --create-namespace \ --values values.yaml ``` ## Verify Deployment ```bash # Check pods kubectl get pods -n raphtory # Check service kubectl get svc -n raphtory # View logs kubectl logs -f deployment/my-raphtory -n raphtory # Port forward for local testing kubectl port-forward svc/my-raphtory 8000:8000 -n raphtory ``` ## Upgrade ```bash # Update values helm upgrade my-raphtory raphtory/raphtory \ --namespace raphtory \ --values values.yaml \ --reuse-values # Rollback if needed helm rollback my-raphtory -n raphtory ``` ## Uninstall ```bash helm uninstall my-raphtory -n raphtory ``` ## Production Checklist - [ ] **Resource limits** set appropriately - [ ] **Persistent storage** configured - [ ] **Monitoring** enabled (Prometheus ServiceMonitor) - [ ] **Auto-scaling** configured for load - [ ] **Ingress** with TLS certificates - [ ] **Network policies** for security - [ ] **Pod disruption budgets** for HA - [ ] **Backup strategy** implemented ## See Also - [Resource Limits](./resource-limits) - Sizing guidance - [Auto-Scaling](./hpa-scaling) - HPA configuration - [Prometheus + Grafana](../observability/prometheus-grafana) - Monitoring --- ## Production > Deployment > Hpa Scaling # Horizontal Pod Autoscaling **Auto-scale Raphtory based on load** Configure HPA to automatically scale Raphtory pods based on CPU, memory, or custom metrics. 
## Basic HPA Configuration ```yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: raphtory-hpa namespace: raphtory spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: raphtory minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 70 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 80 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 50 periodSeconds: 60 scaleUp: stabilizationWindowSeconds: 0 policies: - type: Percent value: 100 periodSeconds: 30 - type: Pods value: 2 periodSeconds: 30 selectPolicy: Max ``` Apply: ```bash kubectl apply -f hpa.yaml ``` ## Custom Metrics (Graph Size) Scale based on graph size or queue depth: ```yaml apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: raphtory-hpa-custom spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: raphtory minReplicas: 2 maxReplicas: 20 metrics: - type: Pods pods: metric: name: raphtory_queue_depth target: type: AverageValue averageValue: "100" ``` ## Monitor HPA ```bash # Watch HPA status kubectl get hpa -n raphtory -w # Describe HPA kubectl describe hpa raphtory-hpa -n raphtory # View events kubectl get events -n raphtory --sort-by='.lastTimestamp' ``` ## Best Practices 1. **Min replicas**: Set to 2+ for HA 2. **Stabilization**: Prevent flapping with 5min cooldown 3. **Multiple metrics**: Combine CPU + memory + custom 4. **Test**: Load test to verify scaling behavior ## See Also - [Resource Limits](./resource-limits) - Sizing per pod - [Performance Optimization](../performance/optimization-guide) --- ## Production > Deployment > Resource Limits # Resource Limits & Sizing **Right-size your Raphtory deployment** Calculate CPU, memory, and storage requirements for your workload. 
## Quick Reference | Graph Size | CPU Cores | Memory | Storage | Use Case | |------------|-----------|--------|---------|----------| | 1M edges | 2 | 8GB | 20GB | Development, testing | | 1-10M edges | 4 | 16GB | 50GB | Small production | | 10-50M edges | 8 | 32GB | 200GB | Medium production | | 50-100M edges | 16 | 64GB | 500GB | Large production | | 100M+ edges | 32+ | 128GB+ | 1TB+ | Enterprise scale | ## Memory Calculation ### Base Graph Memory **Formula**: `memory = edges × 100 bytes` Example: 10M edges ≈ 1GB base memory ### Algorithm Overhead Algorithms need additional memory: | Algorithm | Overhead | |-----------|----------| | PageRank | 2× graph size | | Louvain | 2× graph size | | Betweenness | 3× graph size | | FastRP embeddings | dimension × nodes × 4 bytes | **Total memory needed**: `base + (2-3× base)` = **3-4× graph size** ### Example Calculation Graph: 10M edges Base: 1GB Algorithm overhead: 2-3GB **Recommended**: 8-16GB Add 20% buffer → **10-20GB total** ## CPU Sizing ### Parallelization Most algorithms parallelize well: - **PageRank**: Scales to 16+ cores - **Louvain**: Scales to 8-16 cores - **Component algorithms**: Scales linearly ### Recommendations - **Development**: 2-4 cores - **Production batch**: 8-16 cores - **Real-time**: 4-8 cores with fast single-thread performance ## Storage ### Graph Data - **In-memory only**: Minimal (logs only) - **Persistent graphs**: 2× graph size for working space - **Historical snapshots**: Size × number of snapshots ### Logs & Metrics - **Application logs**: ~1GB/day - **Metrics (Prometheus)**: ~100MB/day **Total storage**: Graph + logs + buffer (30% overhead) ## Kubernetes Resource Configuration ```yaml resources: requests: cpu: "4" memory: "16Gi" limits: cpu: "8" memory: "32Gi" # For production, match requests to limits for guaranteed QoS resources: requests: cpu: "8" memory: "32Gi" limits: cpu: "8" memory: "32Gi" ``` ## Monitoring Resource Usage ```python from raphtory import Graph # Before mem_before = psutil.Process().memory_info().rss / 1024**3 g = Graph() # ... load data ... 
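# Note: this snippet assumes `import psutil` alongside the raphtory import above.
# RSS is sampled before and after loading, so the difference approximates the
# graph's resident memory footprint.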
# After mem_after = psutil.Process().memory_info().rss / 1024**3 print(f"Graph memory: {mem_after - mem_before:.2f} GB") print(f"Nodes: {g.count_nodes()}, Edges: {g.count_edges()}") print(f"Bytes per edge: {(mem_after - mem_before) * 1024**3 / g.count_edges():.0f}") ``` ## Capacity Planning Calculator ```python def calculate_resources(num_edges, algorithm="pagerank"): # Base memory (bytes) base_memory_gb = (num_edges * 100) / (1024**3) # Algorithm overhead multiplier overhead = { "pagerank": 2, "louvain": 2, "betweenness": 3, "components": 1.5 } total_memory_gb = base_memory_gb * (1 + overhead.get(algorithm, 2)) # Add 20% buffer recommended_memory = total_memory_gb * 1.2 # CPU recommendation if num_edges < 10_000_000: recommended_cpu = 4 elif num_edges < 50_000_000: recommended_cpu = 8 else: recommended_cpu = 16 return { "memory_gb": round(recommended_memory, 1), "cpu_cores": recommended_cpu, "storage_gb": round(base_memory_gb * 2 + 50, 0) # 2× + logs } # Example usage resources = calculate_resources(10_000_000, "pagerank") print(f"Recommended: {resources['cpu_cores']} CPU, {resources['memory_gb']}GB RAM") ``` ## JVM/Runtime Tuning For JVM-based deployments: ```bash ``` For Rust (Raphtory default): ```bash ``` ## See Also - [Performance Optimization](../performance/optimization-guide) - [Auto-Scaling](./hpa-scaling) --- ## Production > Observability > Logging # Structured Logging **Production logging best practices** Configure structured logs for debugging and audit trails. ## Python Logging Setup ```python from raphtory import Graph, algorithms # Configure structured logging logging.basicConfig( level=logging.INFO, format='%(message)s' ) class JSONFormatter(logging.Formatter): def format(self, record): log_data = { "timestamp": self.formatTime(record), "level": record.levelname, "logger": record.name, "message": record.getMessage(), } if hasattr(record, 'graph_id'): log_data['graph_id'] = record.graph_id if hasattr(record, 'algorithm'): log_data['algorithm'] = record.algorithm return json.dumps(log_data) handler = logging.StreamHandler() handler.setFormatter(JSONFormatter()) logger = logging.getLogger('raphtory') logger.addHandler(handler) # Use in code logger.info("Building graph", extra={'graph_id': 'fraud-detection'}) g = Graph() # ... load data ... 
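# Note: this snippet also assumes `import logging` and `import json` at the top,
# alongside the raphtory import, for logging.basicConfig and the JSONFormatter above.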
logger.info("Running algorithm", extra={ 'algorithm': 'louvain', 'nodes': g.count_nodes(), 'edges': g.count_edges() }) ``` ## Log Levels - **DEBUG**: Algorithm internals, detailed traces - **INFO**: Graph operations, algorithm completion - **WARNING**: Performance issues, deprecated usage - **ERROR**: Failures, exceptions ## Kubernetes Logging ```yaml apiVersion: v1 kind: Pod metadata: name: raphtory spec: containers: - name: raphtory env: - name: LOG_LEVEL value: "INFO" - name: LOG_FORMAT value: "json" volumeMounts: - name: logs mountPath: /var/log/raphtory volumes: - name: logs emptyDir: {} ``` ## Centralized Logging (ELK Stack) Ship logs to Elasticsearch: ```yaml # Filebeat sidecar - name: filebeat image: elastic/filebeat:8.0.0 volumeMounts: - name: logs mountPath: /var/log/raphtory readOnly: true ``` ## Query Logs ```bash # Kubernetes logs kubectl logs -f deployment/raphtory -n raphtory # Filter by level kubectl logs deployment/raphtory -n raphtory | jq 'select(.level=="ERROR")' # Track algorithm performance kubectl logs deployment/raphtory | jq 'select(.algorithm) | {algorithm, duration}' ``` ## See Also - [Metrics](./metrics) - Prometheus metrics - [Tracing](./tracing) - Distributed tracing --- ## Production > Observability > Metrics # Metrics Reference **Complete list of Raphtory metrics** Monitor graph intelligence with these key metrics. ## Graph Metrics | Metric | Type | Description | |--------|------|-------------| | `raphtory_graph_nodes_total` | Gauge | Total nodes in graph | | `raphtory_graph_edges_total` | Gauge | Total edges in graph | | `raphtory_graph_memory_bytes` | Gauge | Graph memory usage | | `raphtory_graph_build_duration_seconds` | Histogram | Time to build graph | ## Algorithm Metrics | Metric | Type | Labels | Description | |--------|------|--------|-------------| | `raphtory_algorithm_duration_seconds` | Histogram | algorithm | Algorithm runtime | | `raphtory_algorithm_executions_total` | Counter | algorithm, status | Algorithm runs | | `raphtory_algorithm_memory_bytes` | Gauge | algorithm | Memory per algorithm | ## System Metrics | Metric | Type | Description | |--------|------|-------------| | `process_cpu_seconds_total` | Counter | CPU time | | `process_resident_memory_bytes` | Gauge | Memory usage | | `process_open_fds` | Gauge | Open file descriptors | ## Query Examples ```promql # Graph growth rate rate(raphtory_graph_edges_total[5m]) # P95 algorithm latency histogram_quantile(0.95, sum(rate(raphtory_algorithm_duration_seconds_bucket[5m])) by (le, algorithm)) # Memory per million edges raphtory_graph_memory_bytes / (raphtory_graph_edges_total / 1000000) # Algorithm throughput rate(raphtory_algorithm_executions_total[1m]) ``` ## Custom Metrics Add in your code: ```python from prometheus_client import Counter, Histogram fraud_detected = Counter('fraud_rings_detected', 'Fraud rings found') graph_analysis = Histogram('custom_analysis_seconds', 'Analysis time') with graph_analysis.time(): result = analyze_graph(g) if result['fraud']: fraud_detected.inc() ``` ## See Also - [Prometheus + Grafana](./prometheus-grafana) - Monitoring setup --- ## Production > Observability > Prometheus Grafana # Prometheus + Grafana Monitoring **Complete monitoring stack for Raphtory** Set up metrics, dashboards, and alerts for production graph intelligence. ## Quick Setup ### 1. 
Deploy Prometheus ```yaml # prometheus-values.yaml server: persistentVolume: enabled: true size: 50Gi retention: 30d global: scrape_interval: 15s evaluation_interval: 15s serverFiles: prometheus.yml: scrape_configs: - job_name: 'raphtory' kubernetes_sd_configs: - role: pod relabel_configs: - source_labels: [__meta_kubernetes_pod_label_app] action: keep regex: raphtory ``` ```bash helm install prometheus prometheus-community/prometheus \ --namespace monitoring \ --create-namespace \ --values prometheus-values.yaml ``` ### 2. Deploy Grafana ```bash helm install grafana grafana/grafana \ --namespace monitoring \ --set adminPassword=admin123 \ --set datasources."datasources\.yaml".apiVersion=1 \ --set datasources."datasources\.yaml".datasources[0].name=Prometheus \ --set datasources."datasources\.yaml".datasources[0].type=prometheus \ --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server \ --set datasources."datasources\.yaml".datasources[0].isDefault=true # Get Grafana password kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode # Port forward kubectl port-forward -n monitoring svc/grafana 3000:80 ``` ## Key Metrics to Monitor ### Graph Metrics ```promql # Graph size raphtory_graph_nodes_total raphtory_graph_edges_total # Growth rate rate(raphtory_graph_edges_total[5m]) # Memory usage raphtory_graph_memory_bytes / (1024^3) # Convert to GB ``` ### Algorithm Performance ```promql # Algorithm runtime raphtory_algorithm_duration_seconds{algorithm="pagerank"} # P95 latency histogram_quantile(0.95, rate(raphtory_algorithm_duration_seconds_bucket[5m])) # Throughput rate(raphtory_algorithm_executions_total[1m]) ``` ### System Metrics ```promql # CPU usage rate(process_cpu_seconds_total[5m]) # Memory usage process_resident_memory_bytes / (1024^3) # GC pauses (if applicable) rate(gc_duration_seconds_sum[1m]) ``` ## Grafana Dashboard Create dashboard JSON: ```json { "dashboard": { "title": "Raphtory Intelligence", "panels": [ { "title": "Graph Size", "targets": [ { "expr": "raphtory_graph_nodes_total", "legendFormat": "Nodes" }, { "expr": "raphtory_graph_edges_total", "legendFormat": "Edges" } ], "type": "graph" }, { "title": "Algorithm Performance", "targets": [ { "expr": "histogram_quantile(0.95, rate(raphtory_algorithm_duration_seconds_bucket[5m]))", "legendFormat": "P95 Runtime" } ], "type": "graph" }, { "title": "Memory Usage", "targets": [ { "expr": "process_resident_memory_bytes / (1024^3)", "legendFormat": "Memory (GB)" } ], "type": "graph" } ] } } ``` ## Alerting Rules Create `alerts.yaml`: ```yaml groups: - name: raphtory interval: 30s rules: - alert: HighMemoryUsage expr: process_resident_memory_bytes / node_memory_MemTotal_bytes > 0.9 for: 5m labels: severity: warning annotations: summary: "Raphtory memory usage above 90%" description: "Memory at {{ $value | humanizePercentage }}" - alert: SlowAlgorithm expr: histogram_quantile(0.95, rate(raphtory_algorithm_duration_seconds_bucket[5m])) > 300 for: 10m labels: severity: warning annotations: summary: "Algorithm taking > 5 minutes (P95)" - alert: GraphGrowthStalled expr: rate(raphtory_graph_edges_total[10m]) == 0 for: 30m labels: severity: info annotations: summary: "No new edges in 30 minutes" ``` ## Custom Metrics in Python ```python from prometheus_client import Counter, Histogram, Gauge, start_http_server from raphtory import Graph, algorithms # Define metrics graph_size = Gauge('raphtory_custom_graph_nodes', 'Number of nodes') fraud_rings_detected = 
Counter('fraud_rings_total', 'Fraud rings detected') algorithm_duration = Histogram('algorithm_runtime_seconds', 'Algorithm execution time', ['algorithm']) # Expose metrics start_http_server(8001) # Use in code g = Graph() # ... load data ... graph_size.set(g.count_nodes()) with algorithm_duration.labels(algorithm='louvain').time(): communities = algorithms.louvain(g) fraud_count = count_fraud_rings(communities) fraud_rings_detected.inc(fraud_count) ``` ## Access Dashboards ```bash # Prometheus kubectl port-forward -n monitoring svc/prometheus-server 9090:80 # Grafana kubectl port-forward -n monitoring svc/grafana 3000:80 ``` Visit: - Prometheus: http://localhost:9090 - Grafana: http://localhost:3000 (admin / password from secret) ## Best Practices 1. **Retention**: Keep 30 days of metrics minimum 2. **Alerts**: Start conservative, tune based on patterns 3. **Dashboards**: Create role-specific views (ops, data science, exec) 4. **Cardinality**: Avoid high-cardinality labels (don't use node IDs) ## See Also - [Metrics Reference](./metrics) - All available metrics - [Logging](./logging) - Structured logging setup --- ## Production > Observability > Tracing # Distributed Tracing **Trace requests across graph intelligence pipelines** Use OpenTelemetry for end-to-end visibility. ## Setup OpenTelemetry ```python from opentelemetry import trace from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import BatchSpanProcessor from opentelemetry.exporter.jaeger.thrift import JaegerExporter # Configure tracer trace.set_tracer_provider(TracerProvider()) jaeger_exporter = JaegerExporter( agent_host_name="jaeger", agent_port=6831, ) trace.get_tracer_provider().add_span_processor( BatchSpanProcessor(jaeger_exporter) ) tracer = trace.get_tracer(__name__) ``` ## Instrument Code ```python from raphtory import Graph, algorithms with tracer.start_as_current_span("fraud-detection-pipeline"): with tracer.start_as_current_span("load-data"): df = load_transactions() with tracer.start_as_current_span("build-graph"): g = Graph() g.load_edges_from_pandas(df, src="from", dst="to", time="ts") with tracer.start_as_current_span("run-louvain") as span: span.set_attribute("graph.nodes", g.count_nodes()) span.set_attribute("graph.edges", g.count_edges()) communities = algorithms.louvain(g) with tracer.start_as_current_span("detect-fraud"): fraud_rings = identify_fraud(communities) span.set_attribute("fraud.rings_detected", len(fraud_rings)) ``` ## Deploy Jaeger ```yaml # Jaeger all-in-one kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/deploy/crds/jaegertracing.io_jaegers_crd.yaml kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/deploy/service_account.yaml kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/deploy/role.yaml kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/deploy/role_binding.yaml kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/main/deploy/operator.yaml # Access UI kubectl port-forward -n observability svc/jaeger-query 16686:16686 ``` Visit Jaeger UI at http://localhost:16686 ## See Also - [Logging](./logging) - Structured logs - [Metrics](./metrics) - Prometheus metrics --- ## Production > Performance > Benchmarking # Performance Benchmarking **Measure and track graph intelligence performance** Establish baselines and identify bottlenecks. 
## Benchmarking Script ```python from raphtory import Graph, algorithms def benchmark_algorithm(g, algorithm_name, algorithm_fn): """Benchmark a single algorithm""" start = time.time() result = algorithm_fn() duration = time.time() - start return { "algorithm": algorithm_name, "nodes": g.count_nodes(), "edges": g.count_edges(), "duration_seconds": duration, "edges_per_second": g.count_edges() / duration } # Load test data g = Graph() # ... load_edges_from_pandas ... # Benchmark suite results = [] results.append(benchmark_algorithm( g, "pagerank", lambda: algorithms.pagerank(g, iterations=20) )) results.append(benchmark_algorithm( g, "louvain", lambda: algorithms.louvain(g, resolution=1.0) )) results.append(benchmark_algorithm( g, "weakly_connected_components", lambda: algorithms.weakly_connected_components(g) )) # Save results df = pd.DataFrame(results) df.to_csv("benchmark_results.csv", index=False) print(df) ``` ## Expected Performance Environment: 16 cores, 64GB RAM | Algorithm | 1M edges | 10M edges | 100M edges | |-----------|----------|-----------|------------| | PageRank (20 iter) | 0.5s | 5s | 50s | | Louvain | 0.8s | 8s | 90s | | Components | 0.1s | 1s | 10s | | Triangle Count | 1s | 15s | 300s | ## Continuous Benchmarking Track performance over time: ```python # Load historical results historical = pd.read_csv("historical_benchmarks.csv") plt.figure(figsize=(10, 6)) for algo in historical['algorithm'].unique(): data = historical[historical['algorithm'] == algo] plt.plot(data['date'], data['duration_seconds'], label=algo) plt.xlabel('Date') plt.ylabel('Duration (seconds)') plt.title('Algorithm Performance Over Time') plt.legend() plt.savefig('performance_trend.png') ``` ## Best Practices 1. **Consistent hardware**: Same env for comparisons 2. **Warm-up runs**: Discard first run (cold start) 3. **Multiple iterations**: Average over 3-5 runs 4. **Version tracking**: Pin Raphtory version ## See Also - [Optimization Guide](./optimization-guide) - Performance tuning - [Resource Limits](../deployment/resource-limits) --- ## Production > Performance > Optimization Guide # Performance Optimization **Tune Raphtory for your workload** Optimize graph building, algorithms, and queries. ## Graph Building Optimization ### Batch Loading ```python # Slow: Row-by-row for _, row in df.iterrows(): # DON'T DO THIS g.add_edge(row['time'], row['from'], row['to']) # Fast: Batch with Pandas g.load_edges_from_pandas(df, src="from", dst="to", time="time") # 100×+ faster ``` ### Pre-sort Data ```python # Sort by time for optimal ingestion df = df.sort_values('timestamp') g.load_edges_from_pandas(df, src="from", dst="to", time="timestamp") ``` ## Algorithm Optimization ### Use Sampling for Large Graphs ```python # Approximate betweenness (much faster) betweenness = algorithms.betweenness_centrality(g, k=100) # Sample 100 nodes # Exact (slow on large graphs) betweenness = algorithms.betweenness_centrality(g) # O(V × E) ``` ### Reduce Iterations ```python # PageRank converges quickly pagerank = algorithms.pagerank(g, iterations=10) # Usually sufficient # Default (20 iterations) pagerank = algorithms.pagerank(g) # More accurate, 2× slower ``` ### Temporal Windows ```python # Analyze recent data only recent = g.window(start_time, end_time) communities = algorithms.louvain(recent) # Faster than full graph ``` ## Memory Optimization ### Clear Unused Results ```python communities = algorithms.louvain(g) # ... use results ... 
del communities
gc.collect()  # Free memory (requires `import gc` at the top of your script)
```

### Stream Processing

```python
import pandas as pd

from raphtory import Graph

# Process in chunks for huge datasets
chunk_size = 1_000_000
for chunk in pd.read_csv('data.csv', chunksize=chunk_size):
    g = Graph()
    g.load_edges_from_pandas(chunk, src="from", dst="to", time="ts")
    results = process_chunk(g)
    save_results(results)
```

## Parallelization

Set thread count:

```bash
# Number of Rayon worker threads (matches the in-code example below)
export RAYON_NUM_THREADS=16
```

```python
import os

# Or in code
os.environ['RAYON_NUM_THREADS'] = '16'
```

## Query Optimization

### Cache Algorithm Results

```python
from raphtory import algorithms

# Cache expensive computations
_cache = {}

def get_pagerank(g, cache_key="default"):
    if cache_key not in _cache:
        _cache[cache_key] = algorithms.pagerank(g)
    return _cache[cache_key]

# Use cached result
scores = get_pagerank(g)
```

## Profiling

Find bottlenecks:

```python
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()

# Your code here
g = Graph()
g.load_edges_from_pandas(df, src="from", dst="to", time="ts")
result = algorithms.louvain(g)

profiler.disable()
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative')
stats.print_stats(20)  # Top 20 functions
```

## Performance Checklist

- [ ] Use `load_edges_from_pandas()` not row-by-row
- [ ] Sort data by time before ingestion
- [ ] Use sampling for large-graph algorithms
- [ ] Reduce iterations where acceptable
- [ ] Analyze temporal windows, not full history
- [ ] Cache algorithm results
- [ ] Set `RAYON_NUM_THREADS` appropriately
- [ ] Profile to find actual bottlenecks

## See Also

- [Benchmarking](./benchmarking) - Measure performance
- [Resource Limits](../deployment/resource-limits) - Sizing

---

## Production > Performance > Scaling Patterns

# Scaling Patterns

**Horizontal vs vertical scaling strategies**

Choose the right scaling approach for your workload.

## Vertical Scaling (Scale Up)

**Add more resources to single instance**

### When to Use

- Algorithms that don't parallelize well
- Small to medium graphs (< 50M edges)
- Simpler deployment

### Limits

- Single server capacity (typically 128 cores, 1TB RAM)
- No redundancy
- Expensive at large scale

### Configuration

```yaml
resources:
  limits:
    cpu: "32"
    memory: "256Gi"
```

**Pros**: Simple, no coordination overhead
**Cons**: Limited by hardware, single point of failure

---

## Horizontal Scaling (Scale Out)

**Add more instances**

### When to Use

- Large graphs (50M+ edges)
- Independent workloads (batch jobs)
- High availability required

### Patterns

#### 1. **Batch Job Parallelization**

Process different time windows in parallel:

```python
# load_data / analyze / merge_results are illustrative placeholders
# Job 1: Process January
g1 = load_data('2024-01')
result1 = analyze(g1)

# Job 2: Process February (parallel)
g2 = load_data('2024-02')
result2 = analyze(g2)

# Combine results
combined = merge_results([result1, result2])
```

Deploy as Kubernetes Jobs:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: raphtory-january
spec:
  template:
    spec:
      containers:
      - name: raphtory
        image: raphtory/raphtory:latest
        args: ["process", "2024-01"]
        resources:
          limits:
            cpu: "8"
            memory: "32Gi"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: raphtory-february
  # ... same for February
```

#### 2. **Stateless API Tier**

Multiple API servers behind load balancer:

```yaml
replicas: 5  # 5 identical instances
service:
  type: LoadBalancer
```

Each instance handles independent queries.

#### 3.
**Sharding by Entity**

Partition by customer, region, or another natural entity key:

```python
# Worker 1: North America customers
customers_na = df[df['region'] == 'NA']
g_na = build_graph(customers_na)

# Worker 2: Europe customers
customers_eu = df[df['region'] == 'EU']
g_eu = build_graph(customers_eu)
```

---

## Hybrid Approach

**Vertical + Horizontal**

- Scale up each instance for algorithm performance
- Scale out for throughput and availability

Example: 5 instances × 16 cores × 64GB = 320GB total capacity

```yaml
replicaCount: 5
resources:
  limits:
    cpu: "16"
    memory: "64Gi"
```

---

## Decision Matrix

| Graph Size | Workload | Pattern | Config |
|------------|----------|---------|--------|
| < 10M edges | Single analysis | Vertical | 1× 8 cores, 32GB |
| 10-50M edges | Batch pipeline | Vertical | 1× 16 cores, 64GB |
| 50-100M edges | API + batch | Hybrid | 3× 16 cores, 64GB |
| 100M+ edges | Distributed | Horizontal sharding | 10× 32 cores, 128GB |

---

## Auto-Scaling

Combine with HPA for dynamic scaling:

```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
```

---

## Cost Optimization

**Vertical**: Better CPU/memory efficiency, but more expensive at peak

**Horizontal**: Cheaper at rest (scale to zero possible), more complex

**Recommendation**: Start vertical, scale horizontal when needed

---

## See Also

- [HPA Scaling](../deployment/hpa-scaling) - Auto-scaling
- [Resource Limits](../deployment/resource-limits) - Sizing
- [Performance Optimization](./optimization-guide) - Tuning

---

## Production > Security > Authentication

# Authentication

**Secure access to Raphtory services**

Implement authentication for production graph intelligence deployments.

## API Key Authentication

Simple token-based authentication:

```python
from flask import Flask, request, abort
from functools import wraps

app = Flask(__name__)

API_KEYS = {
    "key123": "admin",
    "key456": "analyst"
}

def require_api_key(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        api_key = request.headers.get('X-API-Key')
        if api_key not in API_KEYS:
            abort(401)
        return f(*args, **kwargs)
    return decorated_function

@app.route('/analyze', methods=['POST'])
@require_api_key
def analyze():
    # Run graph intelligence
    return {"status": "success"}
```

## OAuth 2.0 / OIDC

Integrate with enterprise identity providers:

```python
from flask import Flask, redirect, url_for
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
oauth = OAuth(app)

# Configure OAuth provider (e.g., Okta, Auth0)
oauth.register(
    'auth0',
    client_id='YOUR_CLIENT_ID',
    client_secret='YOUR_CLIENT_SECRET',
    server_metadata_url='https://YOUR_DOMAIN/.well-known/openid-configuration',
    client_kwargs={'scope': 'openid profile email'}
)

@app.route('/login')
def login():
    return oauth.auth0.authorize_redirect(
        redirect_uri=url_for('callback', _external=True)
    )

@app.route('/callback')
def callback():
    token = oauth.auth0.authorize_access_token()
    user = oauth.auth0.parse_id_token(token)
    # Store user session
    return redirect('/dashboard')
```

## Kubernetes Service Account

For pod-to-pod authentication:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: raphtory-sa
  namespace: raphtory
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: raphtory-role
  namespace: raphtory
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: raphtory-binding
  namespace: raphtory
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: raphtory-role
subjects:
- kind: ServiceAccount
  name: raphtory-sa
  namespace: raphtory
```

Use in deployment:

```yaml
spec:
  serviceAccountName: raphtory-sa
  containers:
  - name: raphtory
    image: raphtory/raphtory:latest
```

## mTLS (Mutual TLS)

Certificate-based authentication:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: raphtory-tls
type: kubernetes.io/tls
data:
  tls.crt:
  tls.key:
  ca.crt:
---
apiVersion: v1
kind: Pod
metadata:
  name: raphtory
spec:
  containers:
  - name: raphtory
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: raphtory-tls
```

## Best Practices

1. **Rotate credentials** regularly (90 days)
2. **Use secrets management** (Vault, AWS Secrets Manager)
3. **Audit access logs** for suspicious activity
4. **Enforce MFA** for admin access
5. **Separate environments** (dev/staging/prod keys)

## Environment-Specific Keys

```bash
# Development
# Production (from secrets manager)
```

## See Also

- [Authorization](./authorization) - Role-based access control
- [Network Security](./network-security) - Network policies

---

## Production > Security > Authorization

# Authorization

**Role-based access control for graph intelligence**

Control who can access which graph intelligence features.

## Role Definitions

Define user roles:

```python
ROLES = {
    "admin": ["read", "write", "run_algorithms", "manage_users"],
    "analyst": ["read", "run_algorithms"],
    "viewer": ["read"]
}

def check_permission(user_role, required_permission):
    return required_permission in ROLES.get(user_role, [])
```

## Decorator-Based Authorization

```python
from functools import wraps
from flask import request, abort

def require_permission(permission):
    def decorator(f):
        @wraps(f)
        def decorated_function(*args, **kwargs):
            user_role = get_current_user_role()  # From session/token
            if not check_permission(user_role, permission):
                abort(403)  # Forbidden
            return f(*args, **kwargs)
        return decorated_function
    return decorator

@app.route('/analyze', methods=['POST'])
@require_permission('run_algorithms')
def run_analysis():
    # Only users with 'run_algorithms' permission can access
    result = algorithms.louvain(g)
    return {"result": result}

@app.route('/users', methods=['GET'])
@require_permission('manage_users')
def list_users():
    # Admin only
    return {"users": [...]}
```

## Kubernetes RBAC

```yaml
# Read-only role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: raphtory-viewer
  namespace: raphtory
rules:
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
---
# Admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: raphtory-admin
  namespace: raphtory
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["pods", "services", "secrets"]
  verbs: ["*"]
```

## Graph-Level Permissions

Control access to specific graphs:

```python
GRAPH_PERMISSIONS = {
    "fraud-detection": ["admin", "fraud-team"],
    "customer-network": ["admin", "analyst", "marketing"],
    "internal-audit": ["admin", "compliance"]
}

def can_access_graph(user_role, graph_id):
    allowed_roles = GRAPH_PERMISSIONS.get(graph_id, ["admin"])
    return user_role in allowed_roles

@app.route('/graph/<graph_id>/analyze')
@require_permission('run_algorithms')
def analyze_graph(graph_id):
    user_role = get_current_user_role()
    if not can_access_graph(user_role, graph_id):
        abort(403)
    g = load_graph(graph_id)
    result = run_intelligence(g)
    return {"result": result}
```

## Audit Logging

Track authorization decisions:

```python
import logging

audit_logger = logging.getLogger('audit')

def log_access(user, resource,
action, granted): audit_logger.info({ "timestamp": datetime.now().isoformat(), "user": user, "resource": resource, "action": action, "granted": granted }) @require_permission('run_algorithms') def run_analysis(): user = get_current_user() log_access(user, "algorithm.louvain", "execute", True) # ... run algorithm ... ``` ## Fine-Grained Permissions Algorithm-level permissions: ```python ALGORITHM_PERMISSIONS = { "pagerank": ["analyst", "admin"], "louvain": ["analyst", "admin"], "custom_proprietary": ["admin"] # Sensitive algorithm } def can_run_algorithm(user_role, algorithm_name): allowed = ALGORITHM_PERMISSIONS.get(algorithm_name, ["admin"]) return user_role in allowed ``` ## Best Practices 1. **Principle of least privilege**: Grant minimum required permissions 2. **Audit regularly**: Review and revoke unused permissions 3. **Separate duties**: Different roles for dev/ops/compliance 4. **Document permissions**: Maintain RBAC documentation 5. **Test access controls**: Automated tests for permission checks ## See Also - [Authentication](./authentication) - User authentication - [Compliance](./compliance) - Regulatory requirements --- ## Production > Security > Compliance # Compliance **Meet regulatory requirements** Ensure Raphtory deployments comply with GDPR, SOC2, HIPAA, and other regulations. ## GDPR Compliance **General Data Protection Regulation (EU)** ### Right to Erasure Implement node/edge deletion: ```python from raphtory import Graph def delete_user_data(g, user_id): """Delete all data for a user (GDPR Article 17)""" # Remove user's nodes if g.has_node(user_id): g.delete_node(user_id) # Remove edges involving user for edge in g.edges(): if edge.src().name == user_id or edge.dst().name == user_id: g.delete_edge(edge.src().name, edge.dst().name) # Log deletion for audit log_gdpr_action("deletion", user_id) ``` ### Data Minimization Only store necessary properties: ```python # Bad: Storing unnecessary PII g.add_edge(t, user1, user2, properties={ "name": "John Doe", "email": "john@example.com", "ssn": "123-45-6789" # Don't store if not needed! 
}) # Good: Pseudonymized identifiers g.add_edge(t, "user_abc123", "user_def456", properties={ "interaction_type": "purchase" }) ``` ### Data Access Logs Track who accessed what data: ```python gdpr_logger = logging.getLogger('gdpr-audit') def access_user_data(user_id, requester, purpose): """Log data access (GDPR Article 30)""" gdpr_logger.info({ "timestamp": datetime.now().isoformat(), "data_subject": user_id, "requester": requester, "purpose": purpose, "action": "access" }) # Return data return g.node(user_id) ``` ### Data Retention Automatically delete old data: ```python from datetime import datetime, timedelta def enforce_retention_policy(g, retention_days=365): """Delete data older than retention period""" cutoff = datetime.now() - timedelta(days=retention_days) cutoff_ts = int(cutoff.timestamp()) # Keep only recent data g = g.after(cutoff_ts) log_retention_action(f"Deleted data before {cutoff}") return g ``` --- ## SOC 2 Compliance **System and Organization Controls** ### Access Control ```python # Implement least privilege ROLES = { "admin": ["read", "write", "delete"], "analyst": ["read"], "auditor": ["read", "audit_logs"] } ``` ### Audit Trail ```python def audit_log(action, user, resource, result): """Comprehensive audit logging for SOC 2""" audit_entry = { "timestamp": datetime.now().isoformat(), "action": action, "user": user, "resource": resource, "result": result, "ip_address": get_client_ip(), "session_id": get_session_id() } # Write to immutable log storage append_to_audit_log(audit_entry) ``` ### Encryption at Rest ```yaml # Kubernetes encrypted secrets apiVersion: v1 kind: Secret metadata: name: raphtory-secrets type: Opaque data: api-key: db-password: ``` ### Encryption in Transit All communication over TLS (see [Network Security](./network-security)) --- ## HIPAA Compliance **Health Insurance Portability and Accountability Act (US Healthcare)** ### PHI Encryption ```python from cryptography.fernet import Fernet # Encrypt Protected Health Information cipher = Fernet(encryption_key) # Store encrypted PHI g.add_edge(t, patient_id, provider_id, properties={ "diagnosis": cipher.encrypt(b"condition_code").decode(), "notes": cipher.encrypt(b"treatment_notes").decode() }) ``` ### Access Logging ```python def log_phi_access(patient_id, accessor, reason): """HIPAA requires logging all PHI access""" hipaa_logger.info({ "timestamp": datetime.now().isoformat(), "patient_id": hash(patient_id), # Don't log actual ID "accessor": accessor, "reason": reason, "authorized": verify_authorization(accessor, patient_id) }) ``` ### Business Associate Agreement (BAA) Ensure cloud providers sign BAA (AWS, GCP, Azure all offer HIPAA-compliant services) --- ## PCI DSS **Payment Card Industry Data Security Standard** ### Tokenize Sensitive Data ```python # Never store raw credit card numbers in graph # Use tokens instead g.add_edge(t, customer_id, merchant_id, properties={ "payment_token": "tok_abc123def456", # Token, not card number "amount": 99.99 }) ``` ### Network Segmentation Run Raphtory in isolated network: ```yaml # Separate namespace apiVersion: v1 kind: Namespace metadata: name: pci-zone labels: compliance: pci-dss ``` --- ## General Compliance Best Practices ### 1. Data Classification Label data by sensitivity: ```python DATA_CLASSES = { "public": 0, "internal": 1, "confidential": 2, "restricted": 3 # PII, PHI, PCI } g.add_edge(t, user1, user2, properties={ "data_classification": DATA_CLASSES["restricted"] }) ``` ### 2. 
Retention Policies Document and enforce: ```yaml # ConfigMap for retention policy apiVersion: v1 kind: ConfigMap metadata: name: retention-policy data: gdpr_retention_days: "365" logs_retention_days: "90" audit_retention_years: "7" ``` ### 3. Regular Audits ```python def compliance_audit(): """Run periodic compliance checks""" report = { "timestamp": datetime.now(), "checks": [] } # Check encryption report["checks"].append({ "item": "encryption_at_rest", "status": verify_encryption_enabled() }) # Check access controls report["checks"].append({ "item": "rbac_configured", "status": verify_rbac() }) # Check audit logs report["checks"].append({ "item": "audit_logging", "status": verify_audit_logs() }) return report ``` ### 4. Incident Response ```python def security_incident(incident_type, details): """Log and alert on security incidents""" incident_id = generate_incident_id() # Log incident security_logger.critical({ "incident_id": incident_id, "type": incident_type, "details": details, "timestamp": datetime.now().isoformat() }) # Alert security team send_alert("security-team@company.com", f"Incident {incident_id}") # Create ticket create_jira_ticket(incident_id, details) ``` --- ## Compliance Checklist - [ ] **Data classification** implemented - [ ] **Encryption** at rest and in transit - [ ] **Access controls** (RBAC) enforced - [ ] **Audit logging** comprehensive and immutable - [ ] **Data retention** policies automated - [ ] **Right to erasure** implemented (GDPR) - [ ] **Security training** for team - [ ] **Incident response** plan documented - [ ] **Regular audits** scheduled - [ ] **Third-party agreements** (BAA, DPA) signed --- ## See Also - [Authentication](./authentication) - Access control - [Authorization](./authorization) - RBAC - [Network Security](./network-security) - Network policies - [Logging](../observability/logging) - Audit trails --- ## Production > Security > Network Security # Network Security **Secure network communication** Implement network policies, TLS, and traffic control. 
## Kubernetes Network Policies Restrict pod-to-pod communication: ```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: raphtory-policy namespace: raphtory spec: podSelector: matchLabels: app: raphtory policyTypes: - Ingress - Egress ingress: # Allow from same namespace only - from: - podSelector: {} ports: - protocol: TCP port: 8000 # Allow from ingress controller - from: - namespaceSelector: matchLabels: name: ingress-nginx ports: - protocol: TCP port: 8000 egress: # Allow DNS - to: - namespaceSelector: matchLabels: name: kube-system ports: - protocol: UDP port: 53 # Allow external databases - to: - ipBlock: cidr: 10.0.0.0/8 # Internal network ports: - protocol: TCP port: 5432 # PostgreSQL ``` ## TLS/SSL Configuration ### Ingress with TLS ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: raphtory-ingress annotations: cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: ingressClassName: nginx tls: - hosts: - raphtory.example.com secretName: raphtory-tls-cert rules: - host: raphtory.example.com http: paths: - path: / pathType: Prefix backend: service: name: raphtory port: number: 8000 ``` ### Internal TLS (Service Mesh) Using Istio: ```yaml apiVersion: security.istio.io/v1beta1 kind: PeerAuthentication metadata: name: raphtory-mtls namespace: raphtory spec: mtls: mode: STRICT # Enforce mTLS for all traffic ``` ## Firewall Rules (Cloud) ### AWS Security Groups ```bash # Allow inbound from load balancer only aws ec2 authorize-security-group-ingress \ --group-id sg-raphtory \ --protocol tcp \ --port 8000 \ --source-group sg-loadbalancer # Allow outbound to database aws ec2 authorize-security-group-egress \ --group-id sg-raphtory \ --protocol tcp \ --port 5432 \ --cidr 10.0.1.0/24 ``` ### GCP Firewall Rules ```bash gcloud compute firewall-rules create raphtory-ingress \ --network raphtory-vpc \ --allow tcp:8000 \ --source-ranges 10.0.0.0/8 \ --target-tags raphtory ``` ## Private Subnets Deploy in private subnet with NAT gateway: ```yaml # No public IP spec: template: metadata: annotations: kubernetes.io/ingress.class: "internal" spec: hostNetwork: false ``` ## API Rate Limiting Prevent abuse: ```yaml # Ingress rate limiting apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: raphtory-ingress annotations: nginx.ingress.kubernetes.io/limit-rps: "100" nginx.ingress.kubernetes.io/limit-connections: "10" ``` Application-level: ```python from flask_limiter import Limiter from flask_limiter.util import get_remote_address limiter = Limiter( app, key_func=get_remote_address, default_limits=["100 per hour"] ) @app.route('/analyze') @limiter.limit("10 per minute") def analyze(): # Limited to 10 requests/minute per IP pass ``` ## DDoS Protection - **Cloud providers**: AWS Shield, GCP Cloud Armor - **CDN**: Cloudflare, Akamai - **Web Application Firewall (WAF)**: Rate limiting, IP blocking ## Security Checklist - [ ] **Network policies** restrict pod traffic - [ ] **TLS enabled** for external traffic - [ ] **mTLS enabled** for internal (optional) - [ ] **Private subnets** for databases - [ ] **Rate limiting** configured - [ ] **Firewall rules** minimal and specific - [ ] **VPN/Bastion** for admin access only - [ ] **Security groups** reviewed quarterly ## See Also - [Authentication](./authentication) - Access control - [Compliance](./compliance) - Regulatory requirements ============================================================ # Section: Reference 
============================================================

---

## Reference > Algorithms > Centrality > Index

# Centrality Algorithms

Measure the importance of nodes in your temporal graph.

- [Degree Centrality](/docs/reference/algorithms/centrality/degree-centrality) - Count the number of connections for each node.
- [PageRank](/docs/reference/algorithms/centrality/pagerank) - Iterative algorithm to measure node influence.
- [Betweenness](/docs/reference/algorithms/centrality/betweenness) - Identify nodes that act as "bridges" in the network.
- [HITS](/docs/reference/algorithms/centrality/hits) - Hubs and Authorities algorithm for directed graphs.

---

## Reference > Algorithms > Community > Index

# Community Detection

Identify groups and clusters in your temporal graph.

- [Louvain](/docs/reference/algorithms/community/louvain) - Heuristic method for maximizing modularity.
- [Label Propagation](/docs/reference/algorithms/community/label-propagation) - Fast community detection based on node labels.

---

## Reference > Algorithms > Components > Index

# Components

Algorithms for finding connected components in graphs.

- [Weakly Connected](/docs/reference/algorithms/components/weakly-connected) - Find weakly connected components.
- [Strongly Connected](/docs/reference/algorithms/components/strongly-connected) - Find strongly connected components.
- [In-Components](/docs/reference/algorithms/components/in-components) - Find in-components of nodes.
- [Out-Components](/docs/reference/algorithms/components/out-components) - Find out-components of nodes.

---

## Reference > Algorithms > Embeddings > Index

# Graph Embeddings

Algorithms for generating vector representations of graph elements.

- [Fast Random Projection](/docs/reference/algorithms/embeddings/fast-rp) - Efficient node embeddings using random projection.

## Overview

Graph embeddings convert nodes, edges, or entire graphs into fixed-dimensional vectors that can be used for:

- **Machine Learning**: Feed embeddings into ML models for classification, clustering, or prediction
- **Similarity Search**: Find similar nodes using vector distance metrics
- **Visualization**: Project high-dimensional graph structure to 2D/3D
- **Link Prediction**: Predict missing edges based on embedding similarity

## Fast Random Projection (FastRP)

FastRP is a scalable algorithm for generating node embeddings:

```python
from raphtory import algorithms

# Generate 128-dimensional embeddings over three iterations
# (normalization_strength and iter_weights values are illustrative;
# see the fast_rp parameter reference for details)
embeddings = algorithms.fast_rp(
    g,
    embedding_dim=128,
    normalization_strength=0.5,
    iter_weights=[1.0, 1.0, 1.0]
)

# Access node vectors
for node in g.nodes():
    vector = embeddings[node.name]
```

---

## Reference > Algorithms > Index

# Algorithm Library

Raphtory's intelligence capabilities powered by battle-tested graph algorithms.

## Choose by Goal

What intelligence do you need?

Identify influential nodes, key connectors, or critical infrastructure points.
**Algorithms**: PageRank · Betweenness · Degree Centrality · HITS
**Use Cases**: Influencer identification, critical infrastructure, fraud ring leaders

Detect groups, clusters, or coordinated behavior automatically.
**Algorithms**: Louvain · Label Propagation
**Use Cases**: Customer segments, fraud rings, toxic communities, market clusters

Understand how networks evolve, spread, and change over time.
**Algorithms**: Temporal Reachability · Temporal 3-Node Motifs
**Use Cases**: Attack chain reconstruction, viral spread, churn propagation

Identify disconnected groups, isolated clusters, or network structure.
**Algorithms**: Weakly Connected · Strongly Connected · In/Out Components
**Use Cases**: Network fragmentation, isolated communities, dependency analysis

Find recurring subgraph patterns and structural motifs.
**Algorithms**: Triangle Count · 3-Node Motifs · Temporal Motifs **Use Cases**: Pattern recognition, structural analysis, fraud detection Quantify graph properties and structural characteristics. **Algorithms**: Clustering Coefficient · Graph Density · Reciprocity **Use Cases**: Network health, structural evolution, benchmark metrics --- ## All Algorithms ### Centrality Identify important nodes based on their position in the network. | Algorithm | What It Finds | Complexity | API | |-----------|---------------|------------|-----| | **[PageRank](/docs/reference/algorithms/centrality/pagerank)** | Global influence/authority | O(V + E) per iteration | `algorithms.pagerank()` | | **[Betweenness](/docs/reference/algorithms/centrality/betweenness)** | Bridge nodes, connectors | O(V × E) | `algorithms.betweenness_centrality()` | | **[Degree Centrality](/docs/reference/algorithms/centrality/degree-centrality)** | Directly connected hubs | O(V) | `algorithms.degree_centrality()` | | **[HITS](/docs/reference/algorithms/centrality/hits)** | Hubs and authorities | O(V + E) per iteration | `algorithms.hits()` | **Use Cases**: Influencer identification, critical infrastructure detection, fraud ring leaders --- ### Community Detection Find groups, clusters, or coordinated behaviors. | Algorithm | Detection Style | Complexity | API | |-----------|----------------|------------|-----| | **[Louvain](/docs/reference/algorithms/community/louvain)** | Modularity optimization | O(V log V) | `algorithms.louvain()` | | **[Label Propagation](/docs/reference/algorithms/community/label-propagation)** | Iterative label spreading | O(V + E) | `algorithms.label_propagation()` | **Use Cases**: Customer segmentation, fraud ring detection, community health monitoring --- ### Components Identify connectivity structure and isolated groups. | Algorithm | What It Finds | API | |-----------|---------------|-----| | **[Weakly Connected Components](/docs/reference/algorithms/components/weakly-connected)** | Undirected connectivity | `algorithms.weakly_connected_components()` | | **[Strongly Connected Components](/docs/reference/algorithms/components/strongly-connected)** | Directed connectivity | `algorithms.strongly_connected_components()` | | **[In-Components](/docs/reference/algorithms/components/in-components)** | Nodes that can reach target | `algorithms.in_components()` | | **[Out-Components](/docs/reference/algorithms/components/out-components)** | Nodes reachable from source | `algorithms.out_components()` | **Use Cases**: Network fragmentation analysis, dependency mapping, service reachability --- ### Temporal Algorithms Analyze how networks evolve and spread over time. | Algorithm | Temporal Insight | Use Case | API | |-----------|-----------------|----------|-----| | **[Temporal Reachability](/docs/reference/algorithms/temporal/temporal-reachability)** | Who can reach whom when | Attack chains, causality | `algorithms.temporal_reachability()` | | **[Temporal 3-Node Motifs](/docs/reference/algorithms/temporal/temporal-motifs)** | 3-node temporal patterns | Pattern discovery | `algorithms.temporal_three_node_motifs()` | | **[Temporal Rich Club](/docs/reference/algorithms/temporal/rich-club)** | Elite connectivity over time | Core group evolution | `algorithms.temporal_rich_club_coefficient()` | **Use Cases**: Cybersecurity (attack reconstruction), social analytics (viral spread), fraud (coordinated timing) --- ### Motifs & Patterns Find recurring structural patterns. 
| Algorithm | Pattern Type | API | |-----------|-------------|-----| | **[Triangle Count](/docs/reference/algorithms/motifs/triangle-count)** | 3-node closed triangles | `algorithms.triangle_count()` | | **[Local Triangle Count](/docs/reference/algorithms/motifs/local-triangle)** | Per-node triangles | `algorithms.local_triangle_count()` | | **[3-Node Motifs](/docs/reference/algorithms/motifs/three-node-motifs)** | All 3-node patterns | `algorithms.three_node_motifs()` | | **[Triplet Count](/docs/reference/algorithms/motifs/triplet-count)** | 3-node open patterns | `algorithms.triplet_count()` | **Use Cases**: Structural analysis, fraud pattern detection, social network analysis --- ### Metrics Measure graph-level properties. | Metric | What It Measures | API | |--------|-----------------|-----| | **[Clustering Coefficient](/docs/reference/algorithms/metrics/clustering)** | Triangle density | `algorithms.global_clustering_coefficient()` | | **[Graph Density](/docs/reference/algorithms/metrics/density)** | Edge saturation | `algorithms.directed_graph_density()` | | **[Reciprocity](/docs/reference/algorithms/metrics/reciprocity)** | Mutual connections | `algorithms.global_reciprocity()` | | **[Average Degree](/docs/reference/algorithms/metrics/average-degree)** | Mean connectivity | `algorithms.average_degree()` | **Use Cases**: Network health monitoring, structural evolution tracking, benchmark metrics --- ### Path Finding Trace connections, routes, and accessibility. | Algorithm | What It Finds | API | |-----------|---------------|-----| | **[Single Source Shortest Path](/docs/reference/algorithms/path-finding/shortest-paths)** | Shortest paths from source | `algorithms.single_source_shortest_path()` | | **[Dijkstra](/docs/reference/algorithms/path-finding/dijkstra)** | Weighted shortest paths | `algorithms.dijkstra()` | **Use Cases**: Money laundering path tracing, supply chain routing, social distance --- ### Embeddings Learn vector representations for ML integration. 
| Algorithm | Output | Use Case | API |
|-----------|--------|----------|-----|
| **[FastRP](/docs/reference/algorithms/embeddings/fast-rp)** | Node vectors | Graph ML features | `algorithms.fast_rp()` |

**Use Cases**: Feature engineering for ML models, similarity search, recommendations

---

## By Industry

### Financial Services

**Fraud Detection**:

- [PageRank](/docs/reference/algorithms/centrality/pagerank) - Identify money mule accounts
- [Louvain](/docs/reference/algorithms/community/louvain) - Find fraud rings
- [Temporal Motifs](/docs/reference/algorithms/temporal/temporal-motifs) - Detect coordinated timing

**Risk Scoring**:

- [Degree Centrality](/docs/reference/algorithms/centrality/degree-centrality) - Transaction velocity
- [Betweenness](/docs/reference/algorithms/centrality/betweenness) - Critical payment nodes

### Cybersecurity

**Threat Hunting**:

- [Temporal Reachability](/docs/reference/algorithms/temporal/temporal-reachability) - Attack chain analysis
- [Weakly Connected Components](/docs/reference/algorithms/components/weakly-connected) - Botnet identification
- [Betweenness](/docs/reference/algorithms/centrality/betweenness) - Critical attack paths

### Social Platforms

**Community Health**:

- [Louvain](/docs/reference/algorithms/community/louvain) - Detect communities
- [Triangle Count](/docs/reference/algorithms/motifs/triangle-count) - Measure cohesion
- [Label Propagation](/docs/reference/algorithms/community/label-propagation) - Fast clustering

### Operations

**Infrastructure Monitoring**:

- [Betweenness](/docs/reference/algorithms/centrality/betweenness) - Critical services
- [Strongly Connected Components](/docs/reference/algorithms/components/strongly-connected) - Dependency cycles
- [Graph Density](/docs/reference/algorithms/metrics/density) - System complexity

---

## Performance Guide

### Algorithm Selection by Scale

| Graph Size | Fast Algorithms | Moderate | Expensive |
|------------|----------------|----------|-----------|
| < 1M edges | Any algorithm | - | - |
| 1M-10M edges | PageRank, Louvain, Label Propagation | Betweenness | Full motif enumeration |
| 10M-100M edges | Degree, Label Propagation | PageRank, Triangle Count | Betweenness |
| 100M+ edges | Streaming/sampling | PageRank (parallel) | - |

### Optimization Tips

1. **Use temporal windows**: Analyze recent data for real-time insights
2. **Parallelize**: Most algorithms support multi-threading
3. **Sample strategically**: Label propagation works on samples
4. **Cache centrality**: Reuse scores across queries

---

## Getting Started

Quick example:

```python
from raphtory import Graph, algorithms

# Load your temporal graph
g = Graph()
# ... load data ...

# Run algorithms
pagerank_scores = algorithms.pagerank(g)
communities = algorithms.louvain(g)

# Get top results
top_10_influential = pagerank_scores.top_k(10)
for node, score in top_10_influential.items():
    print(f"{node.name}: {score:.4f}")
```

### Learn More

- **[How-To: Run Algorithms](/docs/algorithms/)** - Detailed usage guides
- **[Tutorials](/docs/tutorials/)** - Persona-based learning paths

---

## Need Help Choosing?

1. **[Case Studies](/docs/cookbooks/)** - Real-world algorithm combinations
2. **[Community Slack](https://join.slack.com/t/raphtory/...)** - Ask the Raphtory team

---

## Reference > Algorithms > Metrics > Index

# Graph Metrics

Algorithms for computing graph-level statistics.

- [Clustering Coefficient](/docs/reference/algorithms/metrics/clustering) - Measure graph transitivity.
- [Density](/docs/reference/algorithms/metrics/density) - Calculate edge density.
- [Reciprocity](/docs/reference/algorithms/metrics/reciprocity) - Measure mutual connections.
- [Average Degree](/docs/reference/algorithms/metrics/average-degree) - Compute mean node degree.

---

## Reference > Algorithms > Motifs > Index

# Motifs

Pattern detection algorithms for finding network motifs.

- [Triangle Count](/docs/reference/algorithms/motifs/triangle-count) - Count triangles in the graph.
- [Local Triangle Count](/docs/reference/algorithms/motifs/local-triangle) - Count triangles per node.
- [Three-Node Motifs](/docs/reference/algorithms/motifs/three-node-motifs) - Detect 3-node motif patterns.
- [Triplet Count](/docs/reference/algorithms/motifs/triplet-count) - Count connected triplets.

---

## Reference > Algorithms > Path Finding > Index

# Path Finding

Analyze how connectivity evolves and find paths through your temporal graph.

- [Single Source Shortest Path](/docs/reference/algorithms/path-finding/shortest-paths) - Find the most direct route between nodes.
- [Dijkstra](/docs/reference/algorithms/path-finding/dijkstra) - Find shortest paths in weighted graphs.

---

## Reference > Algorithms > Temporal > Index

# Temporal Algorithms

Algorithms specifically designed for temporal graph analysis.

- [Temporal Motifs](/docs/reference/algorithms/temporal/temporal-motifs) - Find repeating patterns of interaction over time.
- [Temporal Reachability](/docs/reference/algorithms/temporal/temporal-reachability) - Determine which nodes can reach others within specific time windows.
- [Temporal Rich Club](/docs/reference/algorithms/temporal/rich-club) - Analyze the connectivity between high-degree nodes.

---

## Reference > Api > Graphql > Index

---
title: "GraphQL API"
breadcrumb: "Reference / GraphQL"
---

# GraphQL API Reference

- Read operations for querying graphs, nodes, and edges
- Write operations for modifying graph data
- GraphQL object types representing graph entities
- Input types for mutations and complex queries
- Enumeration types for fixed value sets
- Scalar types including custom types
- Union types for polymorphic returns

---

## Reference > Api > Index

---
title: "API Reference"
breadcrumb: "Reference / API"
---

# API Reference

Comprehensive API documentation for Raphtory, auto-generated from source code.

- Core Python library for building and analyzing temporal graphs
- Query and mutate graphs via GraphQL over HTTP
- Rust crate documentation on crates.io (external)

---

## Reference > Api > Python > Algorithms > Index

---
title: "algorithms"
breadcrumb: "Reference / Python / algorithms"
---

# algorithms

Algorithmic functions that can be run on Raphtory graphs

## Classes

| Class | Description |
|-------|-------------|
| [Infected](/docs/reference/api/python/algorithms/Infected) | |
| [Matching](/docs/reference/api/python/algorithms/Matching) | A Matching (i.e., a set of edges that do not share any nodes) |

## Functions

| Function | Description |
|----------|-------------|
| [`all_local_reciprocity`](#all_local_reciprocity) | Local reciprocity - measure of the symmetry of relationships associated with a node |
| [`average_degree`](#average_degree) | The average (undirected) degree of all nodes in the graph. |
| [`balance`](#balance) | Sums the weights of edges in the graph based on the specified direction. |
| [`betweenness_centrality`](#betweenness_centrality) | Computes the betweenness centrality for nodes in a given graph. |
| [`cohesive_fruchterman_reingold`](#cohesive_fruchterman_reingold) | Cohesive version of `fruchterman_reingold` that adds virtual edges between isolated nodes |
| [`degree_centrality`](#degree_centrality) | Computes the degree centrality of all nodes in the graph. The values are normalized
The values are normalized | | [`dijkstra_single_source_shortest_paths`](#dijkstra_single_source_shortest_paths) | Finds the shortest paths from a single source to multiple targets in a graph. | | [`directed_graph_density`](#directed_graph_density) | Graph density - measures how dense or sparse a graph is. | | [`fast_rp`](#fast_rp) | Computes embedding vectors for each vertex of an undirected/bidirectional graph according to the Fast RP algorithm. | | [`fruchterman_reingold`](#fruchterman_reingold) | Fruchterman Reingold layout algorithm | | [`global_clustering_coefficient`](#global_clustering_coefficient) | Computes the global clustering coefficient of a graph. The global clustering coefficient is | | [`global_reciprocity`](#global_reciprocity) | Reciprocity - measure of the symmetry of relationships in a graph, the global reciprocity of | | [`global_temporal_three_node_motif`](#global_temporal_three_node_motif) | Computes the number of three edge, up-to-three node delta-temporal motifs in the graph, using the algorithm of Paranjape et al, Motifs in Temporal Networks (2017). | | [`global_temporal_three_node_motif_multi`](#global_temporal_three_node_motif_multi) | Computes the global counts of three-edge up-to-three node temporal motifs for a range of timescales. See `global_temporal_three_node_motif` for an interpretation of each row returned. | | [`hits`](#hits) | HITS (Hubs and Authority) Algorithm: | | [`in_component`](#in_component) | In component -- Finding the "in-component" of a node in a directed graph involves identifying all nodes that can be reached following only incoming edges. | | [`in_components`](#in_components) | In components -- Finding the "in-component" of a node in a directed graph involves identifying all nodes that can be reached following only incoming edges. | | [`k_core`](#k_core) | Determines which nodes are in the k-core for a given value of k | | [`label_propagation`](#label_propagation) | Computes components using a label propagation algorithm | | [`local_clustering_coefficient`](#local_clustering_coefficient) | Local clustering coefficient - measures the degree to which nodes in a graph tend to cluster together. | | [`local_clustering_coefficient_batch`](#local_clustering_coefficient_batch) | Returns the Local clustering coefficient (batch, intersection) for each specified node in a graph. This measures the degree to which one or multiple nodes in a graph tend to cluster together. | | [`local_temporal_three_node_motifs`](#local_temporal_three_node_motifs) | Computes the number of each type of motif that each node participates in. See global_temporal_three_node_motifs for a summary of the motifs involved. | | [`local_triangle_count`](#local_triangle_count) | Implementations of various graph algorithms that can be run on a graph. | | [`louvain`](#louvain) | Louvain algorithm for community detection | | [`max_degree`](#max_degree) | Returns the largest degree found in the graph | | [`max_in_degree`](#max_in_degree) | The maximum in degree of any node in the graph. | | [`max_out_degree`](#max_out_degree) | The maximum out degree of any node in the graph. | | [`max_weight_matching`](#max_weight_matching) | Compute a maximum-weighted matching in the general undirected weighted | | [`min_degree`](#min_degree) | Returns the smallest degree found in the graph | | [`min_in_degree`](#min_in_degree) | The minimum in degree of any node in the graph. | | [`min_out_degree`](#min_out_degree) | The minimum out degree of any node in the graph. 
| | [`out_component`](#out_component) | Out component -- Finding the "out-component" of a node in a directed graph involves identifying all nodes that can be reached following only outgoing edges. | | [`out_components`](#out_components) | Out components -- Finding the "out-component" of a node in a directed graph involves identifying all nodes that can be reached following only outgoing edges. | | [`pagerank`](#pagerank) | Pagerank -- pagerank centrality value of the nodes in a graph | | [`single_source_shortest_path`](#single_source_shortest_path) | Calculates the single source shortest paths from a given source node. | | [`strongly_connected_components`](#strongly_connected_components) | Strongly connected components | | [`temporal_SEIR`](#temporal_seir) | Simulate an SEIR dynamic on the network | | [`temporal_bipartite_graph_projection`](#temporal_bipartite_graph_projection) | Projects a temporal bipartite graph into an undirected temporal graph over the pivot node type. Let `G` be a bipartite graph with node types `A` and `B`. Given `delta 0`, the projection graph `G'` pivoting over type `B` nodes, | | [`temporally_reachable_nodes`](#temporally_reachable_nodes) | Temporally reachable nodes -- the nodes that are reachable by a time respecting path followed out from a set of seed nodes at a starting time. | | [`triplet_count`](#triplet_count) | Computes the number of connected triplets within a graph | | [`weakly_connected_components`](#weakly_connected_components) | Weakly connected components -- partitions the graph into node sets which are mutually reachable by an undirected path | --- ## Function Details ### [all_local_reciprocity](#all_local_reciprocity) **Signature:** `all_local_reciprocity(graph)` Local reciprocity - measure of the symmetry of relationships associated with a node This measures the proportion of a node's outgoing edges which are reciprocated with an incoming edge. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Mapping of nodes to their reciprocity value. | ### [average_degree](#average_degree) **Signature:** `average_degree(graph)` The average (undirected) degree of all nodes in the graph. Note that this treats the graph as simple and undirected and is equal to twice the number of undirected edges divided by the number of nodes. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a Raphtory graph | #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | the average degree of the nodes in the graph | ### [balance](#balance) **Signature:** `balance(graph, name='weight', direction='both')` Sums the weights of edges in the graph based on the specified direction. This function computes the sum of edge weights based on the direction provided, and can be executed in parallel using a given number of threads. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `direction` | [Direction](/docs/reference/api/python/typing), optional | `'both'` | Specifies the direction of the edges to be considered for summation. Defaults to "both". 
* "out": Only consider outgoing edges. * "in": Only consider incoming edges. * "both": Consider both outgoing and incoming edges. This is the default. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph view on which the operation is to be performed. | | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `'weight'` | The name of the edge property used as the weight. Defaults to "weight". | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Mapping of nodes to the computed sum of their associated edge weights. | ### [betweenness_centrality](#betweenness_centrality) **Signature:** `betweenness_centrality(graph, k=None, normalized=True)` Computes the betweenness centrality for nodes in a given graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A reference to the graph. | | `k` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | Specifies the number of nodes to consider for the centrality computation. All nodes are considered by default. | | `normalized` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | Indicates whether to normalize the centrality values. Defaults to True. | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Mapping from nodes to their betweenness centrality. | ### [cohesive_fruchterman_reingold](#cohesive_fruchterman_reingold) **Signature:** `cohesive_fruchterman_reingold(graph, iter_count=100, scale=1.0, node_start_size=1.0, cooloff_factor=0.95, dt=0.1)` Cohesive version of `fruchterman_reingold` that adds virtual edges between isolated nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `cooloff_factor` | [float](https://docs.python.org/3/library/functions.html#float), optional | `0.95` | Factor to reduce node movement in later iterations, helping stabilize the layout. Defaults to 0.95. | | `dt` | [float](https://docs.python.org/3/library/functions.html#float), optional | `0.1` | Time step or movement factor in each iteration. Defaults to 0.1. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A reference to the graph | | `iter_count` | [int](https://docs.python.org/3/library/functions.html#int), optional | `100` | The number of iterations to run. Defaults to 100. | | `node_start_size` | [float](https://docs.python.org/3/library/functions.html#float), optional | `1.0` | Initial size or movement range for nodes. Defaults to 1.0. | | `scale` | [float](https://docs.python.org/3/library/functions.html#float), optional | `1.0` | Global scaling factor to control the overall spread of the graph. Defaults to 1.0. | #### Returns | Type | Description | |------|-------------| | [NodeLayout](/docs/reference/api/python/node_state/NodeLayout) | A mapping from nodes to their [x, y] positions | ### [degree_centrality](#degree_centrality) **Signature:** `degree_centrality(graph)` Computes the degree centrality of all nodes in the graph. The values are normalized by dividing each result with the maximum possible degree. Graphs with self-loops can have values of centrality greater than 1. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph view on which the operation is to be performed. | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Mapping of nodes to their associated degree centrality. | ### [dijkstra_single_source_shortest_paths](#dijkstra_single_source_shortest_paths) **Signature:** `dijkstra_single_source_shortest_paths(graph, source, targets, direction='both', weight='weight')` Finds the shortest paths from a single source to multiple targets in a graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `direction` | [Direction](/docs/reference/api/python/typing), optional | `'both'` | The direction of the edges to be considered for the shortest path. Defaults to "both". | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph to search in. | | `source` | [NodeInput](/docs/reference/api/python/typing) | - | The source node. | | `targets` | list[[NodeInput](/docs/reference/api/python/typing)] | - | A list of target nodes. | | `weight` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `'weight'` | The name of the weight property for the edges. Defaults to "weight". | #### Returns | Type | Description | |------|-------------| | [NodeStateWeightedSP](/docs/reference/api/python/node_state/NodeStateWeightedSP) | Mapping from nodes to a tuple containing the total cost and the nodes representing the shortest path. | ### [directed_graph_density](#directed_graph_density) **Signature:** `directed_graph_density(graph)` Graph density - measures how dense or sparse a graph is. The ratio of the number of directed edges in the graph to the total number of possible directed edges (given by N * (N-1) where N is the number of nodes). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | Directed graph density of graph. | ### [fast_rp](#fast_rp) **Signature:** `fast_rp(graph, embedding_dim, normalization_strength, iter_weights, seed=None, threads=None)` Computes embedding vectors for each vertex of an undirected/bidirectional graph according to the Fast RP algorithm. Original Paper: https://doi.org/10.48550/arXiv.1908.11512 #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `embedding_dim` | [int](https://docs.python.org/3/library/functions.html#int) | - | The size (dimension) of the generated embeddings. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph view on which embeddings are generated. 
| | `iter_weights` | list[[float](https://docs.python.org/3/library/functions.html#float)] | - | The scalar weights to apply to the results of each iteration | | `normalization_strength` | [float](https://docs.python.org/3/library/functions.html#float) | - | The extent to which high-degree vertices should be discounted (range: 1-0) | | `seed` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The seed for initialisation of random vectors | | `threads` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The number of threads to be used for parallel execution. | #### Returns | Type | Description | |------|-------------| | [NodeStateListF64](/docs/reference/api/python/node_state/NodeStateListF64) | Mapping from nodes to embedding vectors. | ### [fruchterman_reingold](#fruchterman_reingold) **Signature:** `fruchterman_reingold(graph, iterations=100, scale=1.0, node_start_size=1.0, cooloff_factor=0.95, dt=0.1)` Fruchterman Reingold layout algorithm #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `cooloff_factor` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `0.95` | the cool off factor for the algorithm. Defaults to 0.95. | | `dt` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `0.1` | the time increment between iterations. Defaults to 0.1. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | the graph view | | `iterations` | [int](https://docs.python.org/3/library/functions.html#int) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `100` | the number of iterations to run. Defaults to 100. | | `node_start_size` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `1.0` | the start node size to assign random positions. Defaults to 1.0. | | `scale` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `1.0` | the scale to apply. Defaults to 1.0. | #### Returns | Type | Description | |------|-------------| | [NodeLayout](/docs/reference/api/python/node_state/NodeLayout) | A mapping from nodes to their [x, y] positions | ### [global_clustering_coefficient](#global_clustering_coefficient) **Signature:** `global_clustering_coefficient(graph)` Computes the global clustering coefficient of a graph. The global clustering coefficient is defined as the number of triangles in the graph divided by the number of triplets in the graph. Note that this is also known as transitivity and is different to the average clustering coefficient. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a Raphtory graph, treated as undirected | #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | the global clustering coefficient of the graph | ### [global_reciprocity](#global_reciprocity) **Signature:** `global_reciprocity(graph)` Reciprocity - measure of the symmetry of relationships in a graph, the global reciprocity of the entire graph. 
This calculates the number of reciprocal connections (edges that go in both directions) in a graph and normalizes it by the total number of directed edges. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | reciprocity of the graph between 0 and 1. | ### [global_temporal_three_node_motif](#global_temporal_three_node_motif) **Signature:** `global_temporal_three_node_motif(graph, delta, threads=None)` Computes the number of three edge, up-to-three node delta-temporal motifs in the graph, using the algorithm of Paranjape et al, Motifs in Temporal Networks (2017). We point the reader to this reference for more information on the algorithm and background, but provide a short summary below. Motifs included: Stars There are three classes (in the order they are outputted) of star motif on three nodes based on the switching behaviour of the edges between the two leaf nodes. - PRE: Stars of the form i↔j, i↔j, i↔k (ie two interactions with leaf j followed by one with leaf k) - MID: Stars of the form i↔j, i↔k, i↔j (ie switching interactions from leaf j to leaf k, back to j again) - POST: Stars of the form i↔j, i↔k, i↔k (ie one interaction with leaf j followed by two with leaf k) Within each of these classes is 8 motifs depending on the direction of the first to the last edge -- incoming "I" or outgoing "O". These are enumerated in the order III, IIO, IOI, IOO, OII, OIO, OOI, OOO (like binary with "I"-0 and "O"-1). Two node motifs: Also included are two node motifs, of which there are 8 when counted from the perspective of each node. These are characterised by the direction of each edge, enumerated in the above order. Note that for the global graph counts, each motif is counted in both directions (a single III motif for one node is an OOO motif for the other node). Triangles: There are 8 triangle motifs: 1. i → j, k → j, i → k 2. i → j, k → j, k → i 3. i → j, j → k, i → k 4. i → j, j → k, k → i 5. i → j, k → i, j → k 6. i → j, k → i, k → j 7. i → j, i → k, j → k 8. i → j, i → k, k → j #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `delta` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum time difference between the first and last edge of the motif. NB if time for edges was given as a UNIX epoch, this should be given in seconds, otherwise milliseconds should be used (if edge times were given as string) | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A directed raphtory graph | | `threads` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The number of threads to use when running the algorithm. | #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | A 40 dimensional array with the counts of each motif, given in the same order as described above. Note that the two-node motif counts are symmetrical so it may be more useful just to consider the first four elements. 
| ### [global_temporal_three_node_motif_multi](#global_temporal_three_node_motif_multi) **Signature:** `global_temporal_three_node_motif_multi(graph, deltas, threads=None)` Computes the global counts of three-edge up-to-three node temporal motifs for a range of timescales. See `global_temporal_three_node_motif` for an interpretation of each row returned. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `deltas` | list[[int](https://docs.python.org/3/library/functions.html#int)] | - | A list of delta values to use. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A directed raphtory graph | | `threads` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The number of threads to use. | #### Returns | Type | Description | |------|-------------| | list[list[[int](https://docs.python.org/3/library/functions.html#int)]] | A list of 40d arrays, each array is the motif count for a particular value of delta, returned in the order that the deltas were given as input. | ### [hits](#hits) **Signature:** `hits(graph, iter_count=20, threads=None)` HITS (Hubs and Authority) Algorithm: AuthScore of a node (A) = Sum of HubScore of all nodes pointing at node (A) from previous iteration / Sum of HubScore of all nodes in the current iteration HubScore of a node (A) = Sum of AuthScore of all nodes pointing away from node (A) from previous iteration / Sum of AuthScore of all nodes in the current iteration #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Graph to run the algorithm on | | `iter_count` | [int](https://docs.python.org/3/library/functions.html#int), optional | `20` | How many iterations to run the algorithm. Defaults to 20. | | `threads` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | Number of threads to use | #### Returns | Type | Description | |------|-------------| | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | A mapping from nodes their hub and authority scores | ### [in_component](#in_component) **Signature:** `in_component(node, filter=None)` In component -- Finding the "in-component" of a node in a directed graph involves identifying all nodes that can be reached following only incoming edges. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr`, optional | `None` | Optional filter | | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | The node whose in-component we wish to calculate | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Mapping of nodes in the in-component to the distance from the starting node. | ### [in_components](#in_components) **Signature:** `in_components(graph, filter=None)` In components -- Finding the "in-component" of a node in a directed graph involves identifying all nodes that can be reached following only incoming edges. 
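A minimal sketch of running `in_components` over a whole graph; the chain-shaped toy graph and its timestamps are illustrative only.

```python
from raphtory import Graph, algorithms

# Toy directed graph: a -> b -> c
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

# Maps each node to the nodes in its in-component,
# e.g. "c" can be reached from both "a" and "b"
in_comps = algorithms.in_components(g)
print(in_comps)
```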
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr`, optional | `None` | Optional filter | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph | #### Returns | Type | Description | |------|-------------| | [NodeStateNodes](/docs/reference/api/python/node_state/NodeStateNodes) | Mapping of nodes to the nodes in their 'in-component' | ### [k_core](#k_core) **Signature:** `k_core(graph, k, iter_count, threads=None)` Determines which nodes are in the k-core for a given value of k #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A reference to the graph | | `iter_count` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of iterations to run | | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | Value of k such that the returned nodes have degree k (recursively) | | `threads` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | number of threads to run on | #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | A list of nodes in the k core | ### [label_propagation](#label_propagation) **Signature:** `label_propagation(graph, seed=None)` Computes components using a label propagation algorithm #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A reference to the graph | | `seed` | [bytes](https://docs.python.org/3/library/stdtypes.html#bytes), optional | `None` | Array of 32 bytes of u8 which is set as the rng seed | #### Returns | Type | Description | |------|-------------| | list[[set](https://docs.python.org/3/library/stdtypes.html#set)] | A list of sets each containing nodes that have been grouped | ### [local_clustering_coefficient](#local_clustering_coefficient) **Signature:** `local_clustering_coefficient(graph, v)` Local clustering coefficient - measures the degree to which nodes in a graph tend to cluster together. The proportion of pairs of neighbours of a node who are themselves connected. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph, can be directed or undirected but will be treated as undirected. | | `v` | [NodeInput](/docs/reference/api/python/typing) | - | node id or name | #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | the local clustering coefficient of node v in graph. | ### [local_clustering_coefficient_batch](#local_clustering_coefficient_batch) **Signature:** `local_clustering_coefficient_batch(graph, v)` Returns the Local clustering coefficient (batch, intersection) for each specified node in a graph. This measures the degree to which one or multiple nodes in a graph tend to cluster together. Uses path-counting for its triangle-counting step. 
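The `v` parameter is untyped in the generated signature above; the sketch below assumes it accepts a list of node ids or names, mirroring the single-node `local_clustering_coefficient` call, and the toy graph is illustrative only.

```python
from raphtory import Graph, algorithms

# Triangle a-b-c plus a pendant node d (illustrative data)
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")
g.add_edge(3, "a", "c")
g.add_edge(4, "c", "d")

# Assumed: v is a list of node ids/names; a dict of node -> coefficient is returned
coeffs = algorithms.local_clustering_coefficient_batch(g, ["a", "b", "c", "d"])
print(coeffs)
```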
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `v` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | #### Returns | Type | Description | |------|-------------| | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | | ### [local_temporal_three_node_motifs](#local_temporal_three_node_motifs) **Signature:** `local_temporal_three_node_motifs(graph, delta, threads=None)` Computes the number of each type of motif that each node participates in. See global_temporal_three_node_motifs for a summary of the motifs involved. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `delta` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum time difference between the first and last edge of the motif. NB if time for edges was given as a UNIX epoch, this should be given in seconds, otherwise milliseconds should be used (if edge times were given as string) | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A directed raphtory graph | | `threads` | optional | `None` | | #### Returns | Type | Description | |------|-------------| | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | A mapping from nodes to lists of motif counts (40 counts in the same order as the global motif counts) with the number of each motif that node participates in. | ### [local_triangle_count](#local_triangle_count) **Signature:** `local_triangle_count(graph, v)` Implementations of various graph algorithms that can be run on a graph. To run an algorithm simply import the module and call the function with the graph as the argument Local triangle count - calculates the number of triangles (a cycle of length 3) a node participates in. This function returns the number of pairs of neighbours of a given node which are themselves connected. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph, this can be directed or undirected but will be treated as undirected | | `v` | [NodeInput](/docs/reference/api/python/typing) | - | node id or name | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | number of triangles associated with node v | ### [louvain](#louvain) **Signature:** `louvain(graph, resolution=1.0, weight_prop=None, tol=None)` Louvain algorithm for community detection #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | the graph view | | `resolution` | [float](https://docs.python.org/3/library/functions.html#float), optional | `1.0` | the resolution parameter for modularity. Defaults to 1.0. 
| | `tol` | [None](https://docs.python.org/3/library/constants.html#None) \| [float](https://docs.python.org/3/library/functions.html#float), optional | `None` | the floating point tolerance for deciding if improvements are significant (default: 1e-8) | | `weight_prop` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | the edge property to use for weights (has to be float) | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Mapping of nodes to their community assignment | ### [max_degree](#max_degree) **Signature:** `max_degree(graph)` Returns the largest degree found in the graph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph view on which the operation is to be performed. | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The largest degree | ### [max_in_degree](#max_in_degree) **Signature:** `max_in_degree(graph)` The maximum in degree of any node in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | value of the largest indegree | ### [max_out_degree](#max_out_degree) **Signature:** `max_out_degree(graph)` The maximum out degree of any node in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | value of the largest outdegree | ### [max_weight_matching](#max_weight_matching) **Signature:** `max_weight_matching(graph, weight_prop=None, max_cardinality=True, verify_optimum_flag=False)` Compute a maximum-weighted matching in the general undirected weighted graph given by "edges". If `max_cardinality` is true, only maximum-cardinality matchings are considered as solutions. The algorithm is based on "Efficient Algorithms for Finding Maximum Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986. Based on networkx implementation [https://github.com/networkx/networkx/blob/3351206a3ce5b3a39bb2fc451e93ef545b96c95b/networkx/algorithms/matching.py](https://github.com/networkx/networkx/blob/3351206a3ce5b3a39bb2fc451e93ef545b96c95b/networkx/algorithms/matching.py) With reference to the standalone prototype implementation from: [http://jorisvr.nl/article/maximum-matching](http://jorisvr.nl/article/maximum-matching) [http://jorisvr.nl/files/graphmatching/20130407/mwmatching.py](http://jorisvr.nl/files/graphmatching/20130407/mwmatching.py) The function takes time O(n**3) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph to compute the maximum weight matching for | | `max_cardinality` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | If set to True, consider only maximum-cardinality matchings.
Defaults to True. If True, finds the maximum-cardinality matching with maximum weight among all maximum-cardinality matchings, otherwise, finds the maximum weight matching irrespective of cardinality. | | `verify_optimum_flag` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | Whether the optimum should be verified. Defaults to False. If true prior to returning, an additional routine to verify the optimal solution was found will be run after computing the maximum weight matching. If it's true and the found matching is not an optimal solution this function will panic. This option should normally be only set true during testing. | | `weight_prop` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The property on the edge to use for the weight. If not provided, | #### Returns | Type | Description | |------|-------------| | [Matching](/docs/reference/api/python/algorithms/Matching) | The matching | ### [min_degree](#min_degree) **Signature:** `min_degree(graph)` Returns the smallest degree found in the graph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | The graph view on which the operation is to be performed. | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The smallest degree found | ### [min_in_degree](#min_in_degree) **Signature:** `min_in_degree(graph)` The minimum in degree of any node in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | value of the smallest indegree | ### [min_out_degree](#min_out_degree) **Signature:** `min_out_degree(graph)` The minimum out degree of any node in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a directed Raphtory graph | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | value of the smallest outdegree | ### [out_component](#out_component) **Signature:** `out_component(node, filter=None)` Out component -- Finding the "out-component" of a node in a directed graph involves identifying all nodes that can be reached following only outgoing edges. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr`, optional | `None` | Optional filter | | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | The node whose out-component we wish to calculate | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | A NodeState mapping the nodes in the out-component to their distance from the starting node. | ### [out_components](#out_components) **Signature:** `out_components(graph, filter=None)` Out components -- Finding the "out-component" of a node in a directed graph involves identifying all nodes that can be reached following only outgoing edges. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr`, optional | `None` | Optional filter | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph | #### Returns | Type | Description | |------|-------------| | [NodeStateNodes](/docs/reference/api/python/node_state/NodeStateNodes) | Mapping of nodes to the nodes within their 'out-component' | ### [pagerank](#pagerank) **Signature:** `pagerank(graph, iter_count=20, max_diff=None, use_l2_norm=True, damping_factor=0.85)` Pagerank -- pagerank centrality value of the nodes in a graph This function calculates the Pagerank value of each node in a graph. See https://en.wikipedia.org/wiki/PageRank for more information on PageRank centrality. A default damping factor of 0.85 is used. This is an iterative algorithm which terminates if the sum of the absolute difference in pagerank values between iterations is less than the max diff value given. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `damping_factor` | [float](https://docs.python.org/3/library/functions.html#float), optional | `0.85` | The damping factor for the PageRank calculation. Defaults to 0.85. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph | | `iter_count` | [int](https://docs.python.org/3/library/functions.html#int), optional | `20` | Maximum number of iterations to run. Note that this will terminate early if convergence is reached. Defaults to 20. | | `max_diff` | [float](https://docs.python.org/3/library/functions.html#float), optional | `None` | Optional parameter providing an alternative stopping condition. The algorithm will terminate if the sum of the absolute difference in pagerank values between iterations is less than the max diff value given. | | `use_l2_norm` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | Flag for choosing the norm to use for convergence checks, True for l2 norm, False for l1 norm. Defaults to True. | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Mapping of nodes to their pagerank value. | ### [single_source_shortest_path](#single_source_shortest_path) **Signature:** `single_source_shortest_path(graph, source, cutoff=None)` Calculates the single source shortest paths from a given source node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `cutoff` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | An optional cutoff level. The algorithm will stop if this level is reached. | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A reference to the graph. Must implement `GraphViewOps`. | | `source` | [NodeInput](/docs/reference/api/python/typing) | - | The source node. | #### Returns | Type | Description | |------|-------------| | [NodeStateNodes](/docs/reference/api/python/node_state/NodeStateNodes) | Mapping from end node to shortest path from the source node. 
| ### [strongly_connected_components](#strongly_connected_components) **Signature:** `strongly_connected_components(graph)` Strongly connected components Partitions the graph into node sets which are mutually reachable by a directed path #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Mapping of nodes to their component ids | ### [temporal_SEIR](#temporal_seir) **Signature:** `temporal_SEIR(graph, seeds, infection_prob, initial_infection, recovery_rate=None, incubation_rate=None, rng_seed=None)` Simulate an SEIR dynamic on the network The algorithm uses the event-based sampling strategy from https://doi.org/10.1371/journal.pone.0246961 #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | the graph view | | `incubation_rate` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | optional incubation rate (if None, simulates SI or SIR dynamics where infected nodes are infectious at the next time step) the actual incubation time is sampled from an exponential distribution with this rate | | `infection_prob` | [float](https://docs.python.org/3/library/functions.html#float) | - | the probability for a contact between infected and susceptible nodes to lead to a transmission | | `initial_infection` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | the time of the initial infection | | `recovery_rate` | [float](https://docs.python.org/3/library/functions.html#float) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | optional recovery rate (if None, simulates SEI dynamics where nodes never recover) the actual recovery time is sampled from an exponential distribution with this rate | | `rng_seed` | [int](https://docs.python.org/3/library/functions.html#int) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | optional seed for the random number generator | | `seeds` | [int](https://docs.python.org/3/library/functions.html#int) \| [float](https://docs.python.org/3/library/functions.html#float) \| list[[NodeInput](/docs/reference/api/python/typing)] | - | the seeding strategy to use for the initial infection (if `int`, choose fixed number of nodes at random, if `float` infect each node with this probability, if `list` initially infect the specified nodes) | #### Returns | Type | Description | |------|-------------| | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | Mapping from nodes to `Infected` objects for each infected node with attributes `infected`: the time stamp of the infection event `active`: the time stamp at which the node actively starts spreading the infection (i.e., the end of the incubation period) `recovered`: the time stamp at which the node recovered (i.e., stopped spreading the infection) | ### [temporal_bipartite_graph_projection](#temporal_bipartite_graph_projection) **Signature:** `temporal_bipartite_graph_projection(graph, delta, pivot_type)`
Projects a temporal bipartite graph into an undirected temporal graph over the pivot node type. Let `G` be a bipartite graph with node types `A` and `B`. Given `delta > 0`, the projection graph `G'`, pivoting over type `B` nodes, will make a connection between nodes `n1` and `n2` (of type `A`) at time `(t1 + t2)/2` if they respectively have an edge at time `t1`, `t2` with the same node of type `B` in `G`, and `|t2-t1| < delta`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `delta` | [int](https://docs.python.org/3/library/functions.html#int) | - | Time period | | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | A directed raphtory graph | | `pivot_type` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | node type to pivot over. If a bipartite graph has types `A` and `B`, and `B` is the pivot type, the new graph will consist of type `A` nodes. | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | Projected (unipartite) temporal graph. | ### [temporally_reachable_nodes](#temporally_reachable_nodes) **Signature:** `temporally_reachable_nodes(graph, max_hops, start_time, seed_nodes, stop_nodes=None)` Temporally reachable nodes -- the nodes that are reachable by a time respecting path followed out from a set of seed nodes at a starting time. This function starts at a set of seed nodes and follows all time respecting paths until either a) a maximum number of hops is reached, b) one of a set of stop nodes is reached, or c) no further time respecting edges exist. A time respecting path is a sequence of nodes v_1, v_2, ... , v_k such that there exists a sequence of edges (v_i, v_i+1, t_i) with t_i < t_i+1 for i = 1, ... , k - 1. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | directed Raphtory graph | | `max_hops` | [int](https://docs.python.org/3/library/functions.html#int) | - | maximum number of hops to propagate out | | `seed_nodes` | list[[NodeInput](/docs/reference/api/python/typing)] | - | list of node names or ids which should be the starting nodes | | `start_time` | [int](https://docs.python.org/3/library/functions.html#int) | - | time at which to start the path (such that t_1 > start_time for any path starting from these seed nodes) | | `stop_nodes` | list[[NodeInput](/docs/reference/api/python/typing)], optional | `None` | nodes at which a path shouldn't go any further | #### Returns | Type | Description | |------|-------------| | [NodeStateReachability](/docs/reference/api/python/node_state/NodeStateReachability) | Mapping of nodes to their reachability history. | ### [triplet_count](#triplet_count) **Signature:** `triplet_count(graph)` Computes the number of connected triplets within a graph A connected triplet (also known as a wedge, 2-hop path) is a pair of edges with one node in common. For example, the triangle made up of edges A-B, B-C, C-A is formed of three connected triplets.
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | a Raphtory graph, treated as undirected | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | the number of triplets in the graph | ### [weakly_connected_components](#weakly_connected_components) **Signature:** `weakly_connected_components(graph)` Weakly connected components -- partitions the graph into node sets which are mutually reachable by an undirected path This function assigns a component id to each node such that nodes with the same component id are mutually reachable by an undirected path. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [GraphView](/docs/reference/api/python/raphtory/GraphView) | - | Raphtory graph | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Mapping of nodes to their component ids. | --- ## Reference > Api > Python > Filter > Index --- title: "filter" breadcrumb: "Reference / Python / filter" --- # filter ## Classes | Class | Description | |-------|-------------| | [Edge](/docs/reference/api/python/filter/Edge) | | | [EdgeEndpoint](/docs/reference/api/python/filter/EdgeEndpoint) | | | [EdgeEndpointIdFilter](/docs/reference/api/python/filter/EdgeEndpointIdFilter) | | | [EdgeEndpointNameFilter](/docs/reference/api/python/filter/EdgeEndpointNameFilter) | | | [EdgeEndpointTypeFilter](/docs/reference/api/python/filter/EdgeEndpointTypeFilter) | | | [ExplodedEdge](/docs/reference/api/python/filter/ExplodedEdge) | | | [FilterExpr](/docs/reference/api/python/filter/FilterExpr) | | | [FilterOps](/docs/reference/api/python/filter/FilterOps) | | | [Graph](/docs/reference/api/python/filter/Graph) | | | [Node](/docs/reference/api/python/filter/Node) | | | [NodeIdFilterBuilder](/docs/reference/api/python/filter/NodeIdFilterBuilder) | | | [NodeNameFilterBuilder](/docs/reference/api/python/filter/NodeNameFilterBuilder) | | | [NodeTypeFilterBuilder](/docs/reference/api/python/filter/NodeTypeFilterBuilder) | | | [PropertyFilterOps](/docs/reference/api/python/filter/PropertyFilterOps) | | --- ## Reference > Api > Python > Graph_gen > Index --- title: "graph_gen" breadcrumb: "Reference / Python / graph_gen" --- # graph_gen Generate Raphtory graphs from attachment models ## Functions | Function | Description | |----------|-------------| | [`ba_preferential_attachment`](#ba_preferential_attachment) | Generates a graph using the preferential attachment model. | | [`random_attachment`](#random_attachment) | Generates a graph using the random attachment model | --- ## Function Details ### [ba_preferential_attachment](#ba_preferential_attachment) **Signature:** `ba_preferential_attachment(g, nodes_to_add, edges_per_step, seed=None)` Generates a graph using the preferential attachment model. Given a graph this function will add a user defined number of nodes, each with a user defined number of edges. This is an iterative algorithm where at each `step` a node is added and its neighbours are chosen from the pool of nodes already within the network. For this model the neighbours are chosen proportionally based upon their degree, favouring nodes with higher degree (more connections). This sampling is conducted without replacement. 
**Note:** If the provided graph doesn't have enough nodes/edges for the initial sample, the min number of both will be added before generation begins. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges_per_step` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `g` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `nodes_to_add` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `seed` | [Any](https://docs.python.org/3/library/typing.html#typing.Any), optional | `None` | | ### [random_attachment](#random_attachment) **Signature:** `random_attachment(g, nodes_to_add, edges_per_step, seed=None)` Generates a graph using the random attachment model. This function is a graph generation model based upon: Callaway, Duncan S., et al. "Are randomly grown graphs really random?." Physical Review E 64.4 (2001): 041902. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges_per_step` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `g` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `nodes_to_add` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `seed` | [Any](https://docs.python.org/3/library/typing.html#typing.Any), optional | `None` | | --- ## Reference > Api > Python > Graph_loader > Index --- title: "graph_loader" breadcrumb: "Reference / Python / graph_loader" --- # graph_loader Load and save Raphtory graphs from/to file(s) ## Functions | Function | Description | |----------|-------------| | [`karate_club_graph`](#karate_club_graph) | `karate_club_graph` constructs a karate club graph. | | [`lotr_graph`](#lotr_graph) | Load the Lord of the Rings dataset into a graph. | | [`lotr_graph_with_props`](#lotr_graph_with_props) | Same as `lotr_graph()` but with additional properties race and gender for some of the nodes | | [`neo4j_movie_graph`](#neo4j_movie_graph) | Returns the neo4j movie graph example. | | [`reddit_hyperlink_graph`](#reddit_hyperlink_graph) | Load (a subset of) Reddit hyperlinks dataset into a graph. | | [`reddit_hyperlink_graph_local`](#reddit_hyperlink_graph_local) | Returns the Reddit hyperlink graph example. | | [`stable_coin_graph`](#stable_coin_graph) | Returns the stablecoin graph example. | --- ## Function Details ### [karate_club_graph](#karate_club_graph) `karate_club_graph` constructs a karate club graph. This function uses Zachary's karate club dataset to create a graph object. Nodes represent members of the club, and edges represent relationships between them. Node properties indicate the club to which each member belongs. Background: These are data collected from the members of a university karate club by Wayne Zachary. The ZACHE matrix represents the presence or absence of ties among the members of the club; the ZACHC matrix indicates the relative strength of the associations (number of situations in and outside the club in which interactions occurred). Zachary (1977) used these data and an information flow model of network conflict resolution to explain the split-up of this group following disputes among the members. Reference: Zachary W. (1977). An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33, 452-473.
#### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [lotr_graph](#lotr_graph) Load the Lord of the Rings dataset into a graph. The dataset is available at https://raw.githubusercontent.com/Raphtory/Data/main/lotr.csv and is a list of interactions between characters in the Lord of the Rings books and movies. The dataset is a CSV file with the following columns: * src_id: The ID of the source character * dst_id: The ID of the destination character * time: The time of the interaction (in page) Dataset statistics: * Number of nodes (characters) 139 * Number of edges (interactions between characters) 701 #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | A Graph containing the LOTR dataset | ### [lotr_graph_with_props](#lotr_graph_with_props) Same as `lotr_graph()` but with additional properties race and gender for some of the nodes #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [neo4j_movie_graph](#neo4j_movie_graph) **Signature:** `neo4j_movie_graph(uri, username, password, database=...)` Returns the neo4j movie graph example. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `database` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `...` | | | `password` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | | | `uri` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | | | `username` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [reddit_hyperlink_graph](#reddit_hyperlink_graph) **Signature:** `reddit_hyperlink_graph(timeout_seconds=600)` Load (a subset of) Reddit hyperlinks dataset into a graph. The dataset is available at http://snap.stanford.edu/data/soc-redditHyperlinks-title.tsv The hyperlink network represents the directed connections between two subreddits (a subreddit is a community on Reddit). We also provide subreddit embeddings. The network is extracted from publicly available Reddit data of 2.5 years from Jan 2014 to April 2017. *NOTE: It may take a while to download the dataset* Dataset statistics: * Number of nodes (subreddits) 35,776 * Number of edges (hyperlink between subreddits) 137,821 * Timespan Jan 2014 - April 2017 Source: * S. Kumar, W.L. Hamilton, J. Leskovec, D. Jurafsky. Community Interaction and Conflict on the Web. World Wide Web Conference, 2018. Properties: * SOURCE_SUBREDDIT: the subreddit where the link originates * TARGET_SUBREDDIT: the subreddit where the link ends * POST_ID: the post in the source subreddit that starts the link * TIMESTAMP: time of the post * POST_LABEL: label indicating if the source post is explicitly negative towards the target post. The value is -1 if the source is negative towards the target, and 1 if it is neutral or positive. The label is created using crowd-sourcing and training a text based classifier, and is better than simple sentiment analysis of the posts. Please see the reference paper for details. * POST_PROPERTIES: a vector representing the text properties of the source post, listed as a list of comma separated numbers.
This can be found on the source website #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timeout_seconds` | [int](https://docs.python.org/3/library/functions.html#int), optional | `600` | The number of seconds to wait for the dataset to download. Defaults to 600. | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | A Graph containing the Reddit hyperlinks dataset | ### [reddit_hyperlink_graph_local](#reddit_hyperlink_graph_local) **Signature:** `reddit_hyperlink_graph_local(file_path)` Returns the Reddit hyperlink graph example. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `file_path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [stable_coin_graph](#stable_coin_graph) **Signature:** `stable_coin_graph(path=None, subset=None)` Returns the stablecoin graph example. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | | | `subset` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `None` | | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | --- ## Reference > Api > Python > Graphql > Index --- title: "graphql" breadcrumb: "Reference / Python / graphql" --- # graphql ## Classes | Class | Description | |-------|-------------| | [AllPropertySpec](/docs/reference/api/python/graphql/AllPropertySpec) | Specifies that **all** properties should be included when creating an index. | | [GraphServer](/docs/reference/api/python/graphql/GraphServer) | A class for defining and running a Raphtory GraphQL server | | [PropsInput](/docs/reference/api/python/graphql/PropsInput) | Create a PropsInput by choosing to include all/some properties explicitly. | | [RaphtoryClient](/docs/reference/api/python/graphql/RaphtoryClient) | A client for handling GraphQL operations in the context of Raphtory. | | [RemoteEdge](/docs/reference/api/python/graphql/RemoteEdge) | A remote edge reference | | [RemoteEdgeAddition](/docs/reference/api/python/graphql/RemoteEdgeAddition) | An edge update | | [RemoteGraph](/docs/reference/api/python/graphql/RemoteGraph) | | | [RemoteIndexSpec](/docs/reference/api/python/graphql/RemoteIndexSpec) | Create a RemoteIndexSpec specifying which node and edge properties to index. | | [RemoteNode](/docs/reference/api/python/graphql/RemoteNode) | | | [RemoteNodeAddition](/docs/reference/api/python/graphql/RemoteNodeAddition) | Node addition update | | [RemoteUpdate](/docs/reference/api/python/graphql/RemoteUpdate) | A temporal update | | [RunningGraphServer](/docs/reference/api/python/graphql/RunningGraphServer) | A Raphtory server handler that also enables querying the server | | [SomePropertySpec](/docs/reference/api/python/graphql/SomePropertySpec) | Create a SomePropertySpec by explicitly listing metadata and/or temporal property names. 
| ## Functions | Function | Description | |----------|-------------| | [`decode_graph`](#decode_graph) | Decode a Base64-encoded graph | | [`encode_graph`](#encode_graph) | Encode a graph using Base64 encoding | | [`schema`](#schema) | Returns the raphtory graphql server schema | --- ## Function Details ### [decode_graph](#decode_graph) **Signature:** `decode_graph(graph)` Decode a Base64-encoded graph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the encoded graph | ### [encode_graph](#encode_graph) **Signature:** `encode_graph(graph)` Encode a graph using Base64 encoding #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph` | [Graph](/docs/reference/api/python/raphtory/Graph) \| [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | - | the graph | #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | the encoded graph | ### [schema](#schema) Returns the raphtory graphql server schema Returns str: Graphql schema --- ## Reference > Api > Python > Index --- title: "Python API" breadcrumb: "Reference / Python" --- # Python API Reference Core graph classes: Graph, Node, Edge, and temporal views Graph algorithms: PageRank, community detection, paths Filter expressions for nodes and edges Random graph generators Load graphs from files and databases GraphQL server and client utilities Iterable wrappers for graph elements Node state management and algorithms nullmodels module plottingutils module typing module Vector embeddings and similarity search --- ## Reference > Api > Python > Iterables > Index --- title: "iterables" breadcrumb: "Reference / Python / iterables" --- # iterables ## Classes | Class | Description | |-------|-------------| | [ArcStringIterable](/docs/reference/api/python/iterables/ArcStringIterable) | | | [ArcStringVecIterable](/docs/reference/api/python/iterables/ArcStringVecIterable) | | | [BoolIterable](/docs/reference/api/python/iterables/BoolIterable) | | | [EventTimeIterable](/docs/reference/api/python/iterables/EventTimeIterable) | | | [GIDGIDIterable](/docs/reference/api/python/iterables/GIDGIDIterable) | | | [GIDIterable](/docs/reference/api/python/iterables/GIDIterable) | | | [HistoryDateTimeIterable](/docs/reference/api/python/iterables/HistoryDateTimeIterable) | | | [HistoryEventIdIterable](/docs/reference/api/python/iterables/HistoryEventIdIterable) | | | [HistoryIterable](/docs/reference/api/python/iterables/HistoryIterable) | | | [HistoryTimestampIterable](/docs/reference/api/python/iterables/HistoryTimestampIterable) | | | [I64Iterable](/docs/reference/api/python/iterables/I64Iterable) | | | [IntervalsIterable](/docs/reference/api/python/iterables/IntervalsIterable) | | | [MetadataListList](/docs/reference/api/python/iterables/MetadataListList) | | | [NestedArcStringIterable](/docs/reference/api/python/iterables/NestedArcStringIterable) | | | [NestedArcStringVecIterable](/docs/reference/api/python/iterables/NestedArcStringVecIterable) | | | [NestedBoolIterable](/docs/reference/api/python/iterables/NestedBoolIterable) | | | [NestedEventTimeIterable](/docs/reference/api/python/iterables/NestedEventTimeIterable) | | | [NestedGIDGIDIterable](/docs/reference/api/python/iterables/NestedGIDGIDIterable) | | | [NestedGIDIterable](/docs/reference/api/python/iterables/NestedGIDIterable) | | | 
[NestedHistoryDateTimeIterable](/docs/reference/api/python/iterables/NestedHistoryDateTimeIterable) | | | [NestedHistoryEventIdIterable](/docs/reference/api/python/iterables/NestedHistoryEventIdIterable) | | | [NestedHistoryIterable](/docs/reference/api/python/iterables/NestedHistoryIterable) | | | [NestedHistoryTimestampIterable](/docs/reference/api/python/iterables/NestedHistoryTimestampIterable) | | | [NestedI64Iterable](/docs/reference/api/python/iterables/NestedI64Iterable) | | | [NestedI64VecIterable](/docs/reference/api/python/iterables/NestedI64VecIterable) | | | [NestedIntervalsIterable](/docs/reference/api/python/iterables/NestedIntervalsIterable) | | | [NestedOptionArcStringIterable](/docs/reference/api/python/iterables/NestedOptionArcStringIterable) | | | [NestedOptionEventTimeIterable](/docs/reference/api/python/iterables/NestedOptionEventTimeIterable) | | | [NestedOptionI64Iterable](/docs/reference/api/python/iterables/NestedOptionI64Iterable) | | | [NestedOptionUsizeIterable](/docs/reference/api/python/iterables/NestedOptionUsizeIterable) | | | [NestedResultOptionUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedResultOptionUtcDateTimeIterable) | | | [NestedResultUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedResultUtcDateTimeIterable) | | | [NestedStringIterable](/docs/reference/api/python/iterables/NestedStringIterable) | | | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | | | [NestedUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedUtcDateTimeIterable) | | | [NestedVecUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedVecUtcDateTimeIterable) | | | [OptionArcStringIterable](/docs/reference/api/python/iterables/OptionArcStringIterable) | | | [OptionEventTimeIterable](/docs/reference/api/python/iterables/OptionEventTimeIterable) | | | [OptionI64Iterable](/docs/reference/api/python/iterables/OptionI64Iterable) | | | [OptionUsizeIterable](/docs/reference/api/python/iterables/OptionUsizeIterable) | | | [OptionUtcDateTimeIterable](/docs/reference/api/python/iterables/OptionUtcDateTimeIterable) | | | [OptionVecUtcDateTimeIterable](/docs/reference/api/python/iterables/OptionVecUtcDateTimeIterable) | | | [PyNestedPropsIterable](/docs/reference/api/python/iterables/PyNestedPropsIterable) | | | [ResultOptionUtcDateTimeIterable](/docs/reference/api/python/iterables/ResultOptionUtcDateTimeIterable) | | | [ResultUtcDateTimeIterable](/docs/reference/api/python/iterables/ResultUtcDateTimeIterable) | | | [StringIterable](/docs/reference/api/python/iterables/StringIterable) | | | [U64Iterable](/docs/reference/api/python/iterables/U64Iterable) | | | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | | --- ## Reference > Api > Python > Node_state > Index --- title: "node_state" breadcrumb: "Reference / Python / node_state" --- # node_state ## Classes | Class | Description | |-------|-------------| | [DegreeView](/docs/reference/api/python/node_state/DegreeView) | A lazy view over node values | | [EarliestDateTimeView](/docs/reference/api/python/node_state/EarliestDateTimeView) | A lazy view over EarliestDateTime values for each node. 
| | [EarliestEventIdView](/docs/reference/api/python/node_state/EarliestEventIdView) | A lazy view over node values | | [EarliestTimeView](/docs/reference/api/python/node_state/EarliestTimeView) | A lazy view over node values | | [EarliestTimestampView](/docs/reference/api/python/node_state/EarliestTimestampView) | A lazy view over node values | | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | A lazy view over node values | | [HistoryDateTimeView](/docs/reference/api/python/node_state/HistoryDateTimeView) | A lazy view over node values | | [HistoryEventIdView](/docs/reference/api/python/node_state/HistoryEventIdView) | A lazy view over node values | | [HistoryTimestampView](/docs/reference/api/python/node_state/HistoryTimestampView) | A lazy view over node values | | [HistoryView](/docs/reference/api/python/node_state/HistoryView) | A lazy view over History objects for each node. | | [IdView](/docs/reference/api/python/node_state/IdView) | A lazy view over node values | | [IntervalsFloatView](/docs/reference/api/python/node_state/IntervalsFloatView) | A lazy view over node values | | [IntervalsIntegerView](/docs/reference/api/python/node_state/IntervalsIntegerView) | A lazy view over node values | | [IntervalsView](/docs/reference/api/python/node_state/IntervalsView) | A lazy view over node values | | [LatestDateTimeView](/docs/reference/api/python/node_state/LatestDateTimeView) | A lazy view over LatestDateTime values for each node. | | [LatestEventIdView](/docs/reference/api/python/node_state/LatestEventIdView) | A lazy view over node values | | [LatestTimeView](/docs/reference/api/python/node_state/LatestTimeView) | A lazy view over node values | | [LatestTimestampView](/docs/reference/api/python/node_state/LatestTimestampView) | A lazy view over node values | | [NameView](/docs/reference/api/python/node_state/NameView) | A lazy view over node values | | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | | | [NodeLayout](/docs/reference/api/python/node_state/NodeLayout) | | | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | | | [NodeStateF64String](/docs/reference/api/python/node_state/NodeStateF64String) | | | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | | | [NodeStateHistory](/docs/reference/api/python/node_state/NodeStateHistory) | A NodeState of History objects for each node.
| | [NodeStateHistoryDateTime](/docs/reference/api/python/node_state/NodeStateHistoryDateTime) | | | [NodeStateHistoryEventId](/docs/reference/api/python/node_state/NodeStateHistoryEventId) | | | [NodeStateHistoryTimestamp](/docs/reference/api/python/node_state/NodeStateHistoryTimestamp) | | | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | | | [NodeStateIntervals](/docs/reference/api/python/node_state/NodeStateIntervals) | | | [NodeStateListDateTime](/docs/reference/api/python/node_state/NodeStateListDateTime) | | | [NodeStateListF64](/docs/reference/api/python/node_state/NodeStateListF64) | | | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | | | [NodeStateNodes](/docs/reference/api/python/node_state/NodeStateNodes) | | | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | | | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | | | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | | | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | | | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | | | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | | | [NodeStateReachability](/docs/reference/api/python/node_state/NodeStateReachability) | | | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | | | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | | | [NodeStateU64](/docs/reference/api/python/node_state/NodeStateU64) | | | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | | | [NodeStateWeightedSP](/docs/reference/api/python/node_state/NodeStateWeightedSP) | | | [NodeTypeView](/docs/reference/api/python/node_state/NodeTypeView) | A lazy view over node values | | [UsizeIterable](/docs/reference/api/python/node_state/UsizeIterable) | | --- ## Reference > Api > Python > Nullmodels > Index --- title: "nullmodels" breadcrumb: "Reference / Python / nullmodels" --- # nullmodels Generate randomised reference models for a temporal graph edgelist ## Functions | Function | Description | |----------|-------------| | [`permuted_timestamps_model`](#permuted_timestamps_model) | Returns a DataFrame with the time column shuffled. | | [`shuffle_column`](#shuffle_column) | Returns an edgelist with a given column shuffled. Exactly one of col_number or col_name should be specified. | | [`shuffle_multiple_columns`](#shuffle_multiple_columns) | Returns an edgelist with given columns shuffled. Exactly one of col_numbers or col_names should be specified. | --- ## Function Details ### [permuted_timestamps_model](#permuted_timestamps_model) **Signature:** `permuted_timestamps_model(graph_df, time_col=None, time_name=None, inplace=False, sorted=False)` Returns a DataFrame with the time column shuffled. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph_df` | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | - | The input DataFrame representing the graph. | | `inplace` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True, shuffles the time column in-place. Otherwise, creates a copy of the DataFrame. Default is False. | | `sorted` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True, sorts the DataFrame by the shuffled time column. 
Default is False. | | `time_col` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The column number of the time column to shuffle. Default is None. | | `time_name` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The column name of the time column to shuffle. Default is None. | ### [shuffle_column](#shuffle_column) **Signature:** `shuffle_column(graph_df, col_number=None, col_name=None, inplace=False)` Returns an edgelist with a given column shuffled. Exactly one of col_number or col_name should be specified. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `col_name` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The column name to shuffle. Default is None. | | `col_number` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The column number to shuffle. Default is None. | | `graph_df` | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | - | The input DataFrame representing the timestamped edgelist. | | `inplace` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True, shuffles the column in-place. Otherwise, creates a copy of the DataFrame. Default is False. | #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | The shuffled DataFrame with the specified column. | #### Raises | Exception | Description | |-----------|-------------| | `AssertionError` | If neither col_number nor col_name is provided. | | `AssertionError` | If both col_number and col_name are provided. | ### [shuffle_multiple_columns](#shuffle_multiple_columns) **Signature:** `shuffle_multiple_columns(graph_df, col_numbers=None, col_names=None, inplace=False)` Returns an edgelist with given columns shuffled. Exactly one of col_numbers or col_names should be specified. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `col_names` | [list](https://docs.python.org/3/library/stdtypes.html#list), optional | `None` | The list of column names to shuffle. Default is None. | | `col_numbers` | [list](https://docs.python.org/3/library/stdtypes.html#list), optional | `None` | The list of column numbers to shuffle. Default is None. | | `graph_df` | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | - | The input DataFrame representing the graph. | | `inplace` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True, shuffles the columns in-place. Otherwise, creates a copy of the DataFrame. Default is False. | #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | The shuffled DataFrame with the specified columns. | #### Raises | Exception | Description | |-----------|-------------| | `AssertionError` | If neither col_numbers nor col_names are provided. | | `AssertionError` | If both col_numbers and col_names are provided. | --- ## Reference > Api > Python > Plottingutils > Index --- title: "plottingutils" breadcrumb: "Reference / Python / plottingutils" --- # plottingutils Useful code snippets for making commonly used plots in Raphtory. 
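Before the function list, a minimal usage sketch, not taken from the Raphtory docs. Assumptions: the module is importable as `raphtory.plottingutils` (inferred from the breadcrumb above), `ccdf` returns an `(x, y)` pair as its description below suggests, and the observations are made-up example data.

```python
# Minimal sketch (assumptions noted above): plot a complementary CDF of
# some made-up observations with matplotlib.
import matplotlib.pyplot as plt

from raphtory import plottingutils  # import path assumed from the breadcrumb

observations = [1, 1, 2, 3, 5, 8, 13, 21]  # hypothetical numeric data
x, y = plottingutils.ccdf(observations, normalised=True)  # assumed (x, y) return value

plt.step(x, y, where="post")
plt.xlabel("value")
plt.ylabel("P(X >= x)")
plt.show()
```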
## Functions | Function | Description | |----------|-------------| | [`ccdf`](#ccdf) | Returns x coordinates and y coordinates for a ccdf (complementary cumulative density function) from a list of observations. | | [`cdf`](#cdf) | Returns x coordinates and y coordinates for a cdf (cumulative density function) from a list of observations. | | [`global_motif_heatplot`](#global_motif_heatplot) | Out-of-the-box plotting of global motif counts corresponding to the layout in Motifs in Temporal Networks (Paranjape et al) | | [`human_format`](#human_format) | Converts a number over 1000 to a string with 1 d.p and the corresponding letter. e.g. with input 24134, 24.1k as a string would be returned. This is used in the motif plots to make annotated heatmap cells more concise. | | [`lorenz`](#lorenz) | Returns x coordinates and y coordinates for a Lorenz Curve from a list of observations. | | [`ordinal_number`](#ordinal_number) | Returns ordinal number of integer input. | | [`to_motif_matrix`](#to_motif_matrix) | Converts a 40d vector of global motifs to a 2d grid of motifs corresponding to the layout in Motifs in Temporal Networks (Paranjape et al) | --- ## Function Details ### [ccdf](#ccdf) **Signature:** `ccdf(observations, normalised=True)` Returns x coordinates and y coordinates for a ccdf (complementary cumulative density function) from a list of observations. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `normalised` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | Defaults to True. If true, y coordinates normalised such that y is the probability of finding a value greater than or equal to x, if false y is the number of observations greater than or equal to x. | | `observations` | [list](https://docs.python.org/3/library/stdtypes.html#list) | - | list of observations, should be numeric | ### [cdf](#cdf) **Signature:** `cdf(observations, normalised=True)` Returns x coordinates and y coordinates for a cdf (cumulative density function) from a list of observations. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `normalised` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | if true, y coordinates normalised such that y is the probability of finding a value less than or equal to x, if false y is the number of observations less than or equal to x. Defaults to True. | | `observations` | [list](https://docs.python.org/3/library/stdtypes.html#list) | - | list of observations, should be numeric | ### [global_motif_heatplot](#global_motif_heatplot) **Signature:** `global_motif_heatplot(motifs, cmap='YlGnBu', kwargs=\{\})` Out-of-the-box plotting of global motif counts corresponding to the layout in Motifs in Temporal Networks (Paranjape et al) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `cmap` | optional | `'YlGnBu'` | | | `kwargs` | optional | `\{\}` | | | `motifs` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| `np.ndarray` | - | 1 dimensional length-40 array of motifs, which should be the list of motifs returned from the `global_temporal_three_node_motifs` function in Raphtory. **kwargs: arguments to | #### Returns | Type | Description | |------|-------------| | `matplotlib.axes.Axes` | ax item containing the heatmap with motif labels on the axes.
| ### [human_format](#human_format) **Signature:** `human_format(num)` Converts a number over 1000 to a string with 1 d.p and the corresponding letter. e.g. with input 24134, 24.1k as a string would be returned. This is used in the motif plots to make annotated heatmap cells more concise. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `num` | [int](https://docs.python.org/3/library/functions.html#int) | - | number to be abbreviated | #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | number in abbreviated string format. | ### [lorenz](#lorenz) **Signature:** `lorenz(observations)` Returns x coordinates and y coordinates for a Lorenz Curve from a list of observations. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `observations` | [list](https://docs.python.org/3/library/stdtypes.html#list) | - | list of observations, should be numeric | ### [ordinal_number](#ordinal_number) **Signature:** `ordinal_number(number)` Returns ordinal number of integer input. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `number` | [int](https://docs.python.org/3/library/functions.html#int) | - | input number | #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | ordinal for that number as string | ### [to_motif_matrix](#to_motif_matrix) **Signature:** `to_motif_matrix(motifs, data_type=int)` Converts a 40d vector of global motifs to a 2d grid of motifs corresponding to the layout in Motifs in Temporal Networks (Paranjape et al) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data_type` | optional | `int` | | | `motifs` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| `np.ndarray` | - | 1 dimensional length-40 array of motifs. | #### Returns | Type | Description | |------|-------------| | `np.ndarray` | 6x6 array of motifs whose ijth element is M_ij in Motifs in Temporal Networks (Paranjape et al). | --- ## Reference > Api > Python > Raphtory > Index --- title: "raphtory" breadcrumb: "Reference / Python / raphtory" --- # raphtory Raphtory graph analytics library ## Classes | Class | Description | |-------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | PyEdge is a Python class that represents an edge in the graph. | | [Edges](/docs/reference/api/python/raphtory/Edges) | A list of edges that can be iterated over. | | [EventTime](/docs/reference/api/python/raphtory/EventTime) | Raphtory’s EventTime. | | [Graph](/docs/reference/api/python/raphtory/Graph) | A temporal graph with event semantics. | | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Graph view is a read-only version of a graph at a certain point in time. | | [History](/docs/reference/api/python/raphtory/History) | History of updates for an object. Provides access to time entries and derived views such as timestamps, datetimes, event ids, and intervals. | | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime) | History view that exposes UTC datetimes. | | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId) | History view that exposes event ids of time entries. They are used for ordering within the same timestamp. 
| | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp) | History view that exposes timestamps in milliseconds since the Unix epoch. | | [IndexSpec](/docs/reference/api/python/raphtory/IndexSpec) | | | [IndexSpecBuilder](/docs/reference/api/python/raphtory/IndexSpecBuilder) | | | [Intervals](/docs/reference/api/python/raphtory/Intervals) | View over the intervals between consecutive timestamps, expressed in milliseconds. | | [Metadata](/docs/reference/api/python/raphtory/Metadata) | A view of metadata of an entity | | [MetadataView](/docs/reference/api/python/raphtory/MetadataView) | | | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) | | | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | | | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | | [Node](/docs/reference/api/python/raphtory/Node) | A node in the graph. | | [Nodes](/docs/reference/api/python/raphtory/Nodes) | A list of nodes that can be iterated over. | | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | Raphtory’s optional EventTime type. Instances of OptionalEventTime may contain an EventTime, or be empty. | | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | A temporal graph that allows edges and nodes to be deleted. | | [Prop](/docs/reference/api/python/raphtory/Prop) | | | [PropType](/docs/reference/api/python/raphtory/PropType) | PropType provides access to the types used by Raphtory. They can be used to specify the data type of different properties, | | [Properties](/docs/reference/api/python/raphtory/Properties) | A view of the properties of an entity | | [PropertiesView](/docs/reference/api/python/raphtory/PropertiesView) | | | [PyPropValueList](/docs/reference/api/python/raphtory/PyPropValueList) | | | [TemporalProperties](/docs/reference/api/python/raphtory/TemporalProperties) | A view of the temporal properties of an entity | | [TemporalProperty](/docs/reference/api/python/raphtory/TemporalProperty) | A view of a temporal property | | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | | ## Functions | Function | Description | |----------|-------------| | [`version`](#version) | Return Raphtory version. | --- ## Function Details ### [version](#version) Return Raphtory version.
#### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | | --- ## Reference > Api > Python > Typing > Index --- title: "typing" breadcrumb: "Reference / Python / typing" --- # typing ## Type Aliases | Name | Type | |------|------| | [`Direction`](#direction) | `Literal['in', 'out', 'both']` | | [`GID`](#gid) | `Union[int, str]` | | [`NodeInput`](#nodeinput) | `Union[int, str, 'Node']` | | [`PropInput`](#propinput) | `Mapping[str, PropValue]` | | [`PropValue`](#propvalue) | `Union[bool, int, float, datetime, str, 'Document', list['PropValue'], dict[str, 'PropValue']]` | | [`TimeInput`](#timeinput) | `Union[int, str, float, datetime, date, raphtory.EventTime, raphtory.OptionalEventTime]` | --- ## Type Alias Details ### Direction **Definition:** `Literal['in', 'out', 'both']` ### GID **Definition:** `Union[int, str]` ### NodeInput **Definition:** `Union[int, str, 'Node']` ### PropInput **Definition:** `Mapping[str, PropValue]` ### PropValue **Definition:** `Union[bool, int, float, datetime, str, 'Document', list['PropValue'], dict[str, 'PropValue']]` ### TimeInput **Definition:** `Union[int, str, float, datetime, date, raphtory.EventTime, raphtory.OptionalEventTime]` --- ## Reference > Api > Python > Vectors > Index --- title: "vectors" breadcrumb: "Reference / Python / vectors" --- # vectors ## Classes | Class | Description | |-------|-------------| | [Document](/docs/reference/api/python/vectors/Document) | A document corresponding to a graph entity. Used to generate embeddings. | | [Embedding](/docs/reference/api/python/vectors/Embedding) | | | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | | | [VectorisedGraph](/docs/reference/api/python/vectors/VectorisedGraph) | VectorisedGraph object that contains embedded documents that correspond to graph entities. | --- ## Reference > Index # Reference Complete reference documentation for Raphtory algorithms, APIs, and deployment options. - **[API Reference](/docs/reference/api)** – Complete Python and GraphQL API documentation with type signatures and examples. - **[Algorithm Library](/docs/reference/algorithms)** – Complete reference for all built-in graph algorithms: centrality, community detection, path finding, and more. - **[Compatibility](/docs/reference/compatibility)** – Python version support, platform compatibility, and interoperability. - **[Production & Ops](/docs/production)** – Deployment guides, security, monitoring, and performance tuning. ## Quick Links - **[Troubleshooting](/docs/reference/troubleshooting)** – Common issues and solutions - **[Ecosystem Integrations](/docs/ecosystem)** – BigQuery, Databricks, Airflow, and more --- ## Reference > Algorithms > Centrality > Betweenness # Betweenness Centrality **Find bridges and critical connectors** Betweenness measures how often a node appears on shortest paths between other nodes - high betweenness = critical connector or bottleneck. ## What It Computes For each node, a score (0 to 1) representing the fraction of shortest paths passing through it.
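For reference, the score corresponds to the standard betweenness sum over source-target pairs (a sketch of the textbook definition; the exact normalisation applied when `normalized` is set is an assumption based on the usual convention):

```latex
C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}
```

Here \(\sigma_{st}\) is the number of shortest paths from s to t and \(\sigma_{st}(v)\) is the number of those paths that pass through v. Dividing by the number of ordered source-target pairs, \((N-1)(N-2)\) for a directed graph, rescales the sum to the 0-1 range described above.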
## When to Use It - **Critical infrastructure**: Identify single points of failure - **Bottleneck detection**: Find network choke points - **Bridge identification**: Discover nodes connecting communities ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | `k` | int | Sample k nodes (optional, for approximation) | | `normalized` | bool | Normalize scores (default: true) | ## Performance **Time**: O(V × E) - Expensive! **Approximation**: O(k × E) with k samples **Scales to**: 1M edges (exact), 10M+ (approximate) ## Example ```python from raphtory import algorithms # Exact (slow on large graphs) scores = algorithms.betweenness_centrality(g) # Fast approximation scores = algorithms.betweenness_centrality(g, k=100) # Sample 100 nodes top_bridges = scores.top_k(10) for node, score in top_bridges.items(): print(f"{node.name}: {score:.4f}") ``` ## Use Cases ### Infrastructure Monitoring Find services whose failure breaks the system: ```python critical = [n for n, score in scores.items() if score > 0.1] print(f"Critical services: {[n.name for n in critical]}") ``` ### Attack Surface Cybersecurity: Which nodes provide access to most others? ### Network Optimization Remove bottlenecks for better flow ## Performance Tips 1. **Use k-sampling** for graphs >1M edges 2. **Progressive sampling**: Start with k=50, increase if needed 3. **Cache results**: Expensive to recompute ## See Also - [PageRank](./pagerank) - For influence vs bridging - [Degree Centrality](./degree-centrality) - Simple connectivity --- ## Reference > Algorithms > Centrality > Degree Centrality # Degree Centrality **Count direct connections - simplest centrality metric** Degree centrality measures how many connections a node has. For directed graphs, distinguishes in-degree (incoming) and out-degree (outgoing). 
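Per-node counts can also be read directly from the node view, which is convenient when you only need a handful of nodes rather than a full algorithm run. A minimal sketch, assuming the `degree()`, `in_degree()` and `out_degree()` methods on the `Node` class alongside the `g.node()` lookup used elsewhere in this reference:

```python
from raphtory import Graph

# Small directed example
g = Graph()
g.add_edge(1, "Alice", "Bob")
g.add_edge(2, "Carol", "Bob")
g.add_edge(3, "Bob", "Dave")

# Read degrees straight off the node view (assumed Node API, see note above)
for name in ["Alice", "Bob", "Carol", "Dave"]:
    node = g.node(name)
    print(f"{name}: in={node.in_degree()}, out={node.out_degree()}, total={node.degree()}")
```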
## What It Computes For each node: - **In-degree**: Number of incoming edges - **Out-degree**: Number of outgoing edges - **Total degree**: In + out (or just degree for undirected) ## When to Use It - **Quick analysis**: Fastest centrality metric - **Volume metrics**: Total activity level - **Hubs**: Nodes with many connections ## Performance **Time**: O(V) **Space**: O(V) **Scales to**: Unlimited (trivial operation) ## Example ```python from raphtory import algorithms scores = algorithms.degree_centrality(g) # Get highest degree nodes hubs = scores.top_k(10) for node, degree in hubs.items(): print(f"{node.name}: {degree} connections") ``` ## Use Cases ### Transaction Volume Financial: Accounts with highest transaction counts ### Social Popularity Social: Users with most followers (in-degree) ### Activity Monitoring Systems: Services with most connections ## Degree vs PageRank - **Degree**: Quantity of connections - **PageRank**: Quality of connections **Example**: - Node A: 1000 connections from low-value nodes - Node B: 10 connections from high-value nodes **Degree** → A wins **PageRank** → B likely wins ## When to Use Each **Use Degree when**: - Simple volume matters - Quick exploration needed - Baseline metric before sophisticated analysis **Use PageRank when**: - Quality matters - Influence propagation - Ranking by importance ## See Also - [PageRank](./pagerank) - Quality-weighted importance - [Betweenness](./betweenness) - Bridge position --- ## Reference > Algorithms > Centrality > Hits # HITS (Hubs and Authorities) **Discover hubs and authoritative sources** HITS identifies two types of important nodes: **hubs** (nodes that link to many authorities) and **authorities** (nodes linked to by many hubs). ## What It Computes Two scores per node: - **Hub score**: Quality as a curator/aggregator - **Authority score**: Quality as a source/expert ## When to Use It - **Web analysis**: Find authoritative pages and link aggregators - **Citation networks**: Discover influential papers and surveys - **Recommendation**: Separate content creators from curators ## Performance **Time**: O(E) per iteration **Typical**: 10-20 iterations **Scales to**: 10M+ edges ## Example ```python from raphtory import algorithms hub_scores, authority_scores = algorithms.hits(g, iterations=20) # Top authorities (experts/sources) print("Top Authorities:") for node, score in authority_scores.top_k(5).items(): print(f" {node.name}: {score:.4f}") # Top hubs (curators/aggregators) print("\nTop Hubs:") for node, score in hub_scores.top_k(5).items(): print(f" {node.name}: {score:.4f}") ``` ## Use Cases ### Web Search **Authority**: High-quality content pages **Hub**: Link directories, resource pages ### Academic Citations **Authority**: Seminal papers **Hub**: Survey papers, reviews ### Social Media **Authority**: Content creators, experts **Hub**: News aggregators, curators ## HITS vs PageRank | Aspect | HITS | PageRank | |--------|------|----------| | **Scores** | Hub + Authority | Single score | | **Best for** | Distinguishing roles | General importance | | **Use when** | Two-sided marketplace | Uniform ranking | ## See Also - [PageRank](./pagerank) - Single importance score - [Degree Centrality](./degree-centrality) - Simple connectivity --- ## Reference > Algorithms > Centrality > Pagerank # PageRank **Find the most influential nodes in your network** PageRank measures node importance based on the structure of incoming connections - nodes are important if they're connected to by other important nodes. 
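As a reference for the `damping`, `iterations` and `tolerance` parameters described below, the classic per-node update is (a sketch of the standard formulation; how the implementation treats dangling nodes is not specified on this page):

```latex
PR(v) = \frac{1 - d}{N} + d \sum_{u \in \mathrm{In}(v)} \frac{PR(u)}{\deg_{\mathrm{out}}(u)}
```

with damping factor d, node count N, and In(v) the set of nodes with edges into v. The update is repeated for `iterations` rounds or until the largest per-node change falls below `tolerance`.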
## What It Computes For each node, PageRank calculates an influence score (0 to 1) representing the probability that a random walker ends up at that node after many steps. **Key insight**: Quality of connections matters more than quantity. One link from a highly-ranked node is worth more than many links from low-ranked nodes. ## When to Use It Identify money mule accounts and fraud ring leaders Find critical nodes in attack infrastructure Discover influencers and opinion leaders Rank entities by importance ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `iterations` | int | 20 | Number of iterations to run | | `damping` | float | 0.85 | Probability of following an edge (vs random jump) | | `tolerance` | float | 0.0001 | Convergence threshold | **Tuning advice**: - **More iterations**: Better convergence, slower runtime (20-100 typical) - **Higher damping**: More weight on graph structure (0.85 is standard) - **Lower tolerance**: More precision, more iterations needed ## Performance **Time Complexity**: O(E) per iteration **Space Complexity**: O(V) **Typical Runtime**: 1-2 seconds for 1M edges (20 iterations) **Scales to**: 100M+ edges with good performance ## Example ```python from raphtory import Graph, algorithms # Create graph g = Graph() g.add_edge(1, "Alice", "Bob") g.add_edge(2, "Bob", "Carol") g.add_edge(3, "Carol", "Alice") # Cycle increases Alice's score g.add_edge(4, "Dave", "Alice") # Dave gives importance to Alice # Run PageRank scores = algorithms.pagerank(g, iterations=20, damping=0.85) # Get top 10 most important nodes top_10 = scores.top_k(10) for node, score in top_10.items(): print(f"{node.name}: {score:.4f}") ``` **Output**: ```text Alice: 0.3214 Carol: 0.2891 Bob: 0.2654 Dave: 0.1241 ``` ## Real-World Use Cases ### Financial Fraud Detection **Problem**: Identify money mule accounts in transaction networks **Solution**: High PageRank = receives money from many sources ```python # Find potential money mules fraud_candidates = scores.top_k(100) for node, score in fraud_candidates.items(): if score > 0.01 and node.out_degree() < 5: print(f"Suspicious: {node.name} receives funds but rarely sends") ``` ### Cybersecurity **Problem**: Identify C2 servers in botnet traffic **Solution**: High PageRank = many infected hosts communicate with it ### Social Media **Problem**: Find influencers for marketing campaigns **Solution**: PageRank identifies users whose content reaches the most people ## Temporal PageRank For temporal graphs, run PageRank on different time windows to track influence evolution: ```python # Track influence over time windows = [(0, 100), (100, 200), (200, 300)] for start, end in windows: window_graph = g.window(start, end) scores = algorithms.pagerank(window_graph) print(f"Time {start}-{end}: Top node = {scores.top_k(1)}") ``` **Use case**: Detect rising/falling influencers, emerging fraud networks ## Comparison with Other Centrality Metrics | Metric | What It Measures | When to Prefer PageRank | |--------|-----------------|------------------------| | **Degree** | Number of connections | PageRank better for quality over quantity | | **Betweenness** | Bridge position | PageRank for general influence | | **Closeness** | Average distance to others | PageRank for directed graphs | **Rule of thumb**: Use PageRank for influence, betweenness for critical connectors ## Performance Tips 1. **Use fewer iterations** for large graphs (10-15 sufficient) 2. 
**Parallelize**: Raphtory automatically uses multiple threads 3. **Cache results**: Store scores if querying repeatedly 4. **Sample**: For exploration, run on graph sample first ## See Also - **[Betweenness Centrality](./betweenness)** - Find bridges and connectors - **[Degree Centrality](./degree-centrality)** - Simple connection counting - **[User Guide: Algorithms](/docs/algorithms/node-centric)** - Detailed API reference --- ## Reference > Algorithms > Community > Label Propagation # Label Propagation **Fast community detection for large-scale networks** Label Propagation detects communities by iteratively propagating node labels through the network - nodes adopt the most common label among their neighbors. ## What It Computes Community ID for each node, optimized for speed over precision. ## When to Use It - **Large graphs** (10M+ edges) where speed matters - **Real-time** community detection - **Quick exploration** before running Louvain ## Performance **Time**: O(V + E) - Linear! **Space**: O(V) **Scales to**: 100M+ edges ## Example ```python from raphtory import algorithms communities = algorithms.label_propagation(g) groups = communities.groups() print(f"Found {len(groups)} communities") ``` ## vs Louvain - **Faster**: 5-10x faster than Louvain - **Less precise**: Slightly lower modularity - **Non-deterministic**: Results vary by iteration order **Use Label Propagation when**: Speed > precision **Use Louvain when**: Quality > speed ## See Also - [Louvain](./louvain) - Higher quality communities - [User Guide](/docs/algorithms/community) --- ## Reference > Algorithms > Community > Louvain # Louvain Community Detection **Discover clusters and coordinated groups automatically** Louvain detects communities by optimizing modularity - groups of nodes with dense internal connections but sparse connections to other groups. ## What It Computes For each node, assigns a community ID (integer). Nodes in the same community are more densely connected to each other than to the rest of the network. **Key insight**: Hierarchical approach finds communities at multiple scales, from small clusters to large groups. ## When to Use It Find coordinated fraud rings and money laundering networks Identify natural customer groups for targeted marketing Detect friend groups, echo chambers, toxic communities Partition networks for distributed processing ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `resolution` | float | 1.0 | Controls community size (higher = smaller communities) | | `weight_prop` | str | None | Edge weight property name | **Tuning advice**: - **resolution < 1.0**: Larger, fewer communities - **resolution > 1.0**: Smaller, more communities - **resolution = 1.0**: Standard modularity optimization ## Performance **Time Complexity**: O(V log V) **Space Complexity**: O(V + E) **Typical Runtime**: 2-5 seconds for 1M edges **Scales to**: 10M+ edges efficiently ## Example ```python from raphtory import Graph, algorithms # Create graph with communities g = Graph() # Add edges...
(fraud ring example) g.add_edge(1, "Account1", "Account2") # Ring A g.add_edge(2, "Account2", "Account3") g.add_edge(3, "Account4", "Account5") # Ring B g.add_edge(4, "Account5", "Account6") # Detect communities communities = algorithms.louvain(g, resolution=1.0) # Group by community groups = communities.groups() for community_id, members in groups.items(): print(f"Community {community_id}: {[n.name for n in members]}") ``` **Output**: ```text Community 0: ['Account1', 'Account2', 'Account3'] Community 1: ['Account4', 'Account5', 'Account6'] ``` ## Real-World Use Cases ### Fraud Ring Detection **Problem**: Identify coordinated fraud groups **Solution**: Louvain finds tightly connected accounts ```python communities = algorithms.louvain(g, resolution=1.0) # Find suspicious communities for comm_id, members in communities.groups().items(): if len(members) > 5: # Large groups # Check if created within short time creation_times = [g.node(n).earliest_time for n in members] time_span = max(creation_times) - min(creation_times) if time_span < 3600: # Created within 1 hour print(f"Suspicious ring: Community {comm_id}") ``` ### Customer Segmentation **Problem**: Group customers for targeted campaigns **Solution**: Natural communities = customer segments ### Social Network Moderation **Problem**: Identify toxic communities early **Solution**: Track community health metrics over time ## Temporal Community Evolution Track how communities form and evolve: ```python # Analyze community changes over time time_windows = [(0, 100), (100, 200), (200, 300)] for start, end in time_windows: snapshot = g.window(start, end) communities = algorithms.louvain(snapshot) print(f"Time {start}-{end}: {len(communities.groups())} communities") ``` **Use case**: Detect emerging fraud networks, community splits, growing clusters ## Comparison with Label Propagation | Aspect | Louvain | Label Propagation | |--------|---------|-------------------| | **Quality** | Higher modularity | Good, slightly lower | | **Speed** | O(V log V) | O(V + E) - faster | | **Deterministic** | No (random seed) | No (order dependent) | | **Best for** | Quality matters | Speed critical | **Rule of thumb**: Use Louvain for better quality, Label Propagation for massive graphs ## Performance Tips 1. **Adjust resolution** based on expected community size 2. **Use temporal windows** for evolving networks 3. **Run multiple times** with different seeds and compare 4. **Sample for exploration** on huge graphs (>100M edges) ## See Also - **[Label Propagation](./label-propagation)** - Faster community detection - **[User Guide: Community Detection](/docs/algorithms/community)** - Detailed guide --- ## Reference > Algorithms > Components > In Components # In-Components **Find all nodes that can reach a target** In-components identifies all nodes that have paths TO a specific target node (who can reach the target). ## What It Computes Given a target node, returns the set of all nodes that can reach it (following edge direction). ## When to Use It - **Access analysis**: "Who can access this resource?" - **Dependency mapping**: "What depends on this service?" - **Impact tracing**: "If I change this, what's affected?" ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | `target` | node/str | Target node to analyze | ## Performance **Time**: O(V + E) per query **Scales to**: 10M+ edges ## Example ```python from raphtory import algorithms # Who can reach the database? 
can_reach_db = algorithms.in_component(g, "Database") print(f"{len(can_reach_db)} nodes can access Database:") for node in can_reach_db: print(f" - {node.name}") ``` ## Use Cases ### Access Control Audit Which users/services can access sensitive resources? ```python sensitive_resources = ["CreditCardDB", "UserPII", "APIKeys"] for resource in sensitive_resources: accessors = algorithms.in_component(g, resource) print(f"\n{resource} accessible by:") for node in accessors: print(f" - {node.name}") ``` ### Dependency Analysis What services depend on this database? ```python dependencies = algorithms.in_component(g, "CriticalService") print(f"{len(dependencies)} services depend on CriticalService") ``` ### Vulnerability Impact If this node is compromised, what else is at risk? ## In-Component vs Out-Component **In-component**: Nodes that can reach TARGET (upstream) **Out-component**: Nodes reachable FROM source (downstream) **Example**: ```text A → B → Target ← C ↓ D ``` - **In-component(Target)**: {A, B, C} - **Out-component(B)**: {Target, D} ## See Also - [Out-Components](./out-components) - Downstream reachability - [Strongly Connected Components](./strongly-connected) - Mutual reachability --- ## Reference > Algorithms > Components > Out Components # Out-Components **Find all nodes reachable from a source** Out-components identifies all nodes reachable FROM a specific source node (where can the source reach). ## What It Computes Given a source node, returns the set of all nodes reachable from it (following edge direction). ## When to Use It - **Blast radius**: "If this fails, what breaks?" - **Propagation analysis**: "What can this node influence?" - **Reachability**: "What's accessible from here?" ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | `source` | node/str | Source node to analyze | ## Performance **Time**: O(V + E) per query **Scales to**: 10M+ edges ## Example ```python from raphtory import algorithms # What can the API server reach? reachable_from_api = algorithms.out_component(g, "APIServer") print(f"API can reach {len(reachable_from_api)} nodes:") for node in reachable_from_api: print(f" - {node.name}") ``` ## Use Cases ### Blast Radius Analysis If this service fails, what else is affected? ```python critical_services = ["LoadBalancer", "AuthService", "PaymentGateway"] for service in critical_services: affected = algorithms.out_component(g, service) print(f"\n{service} failure affects {len(affected)} nodes:") for node in affected[:10]: # Show first 10 print(f" - {node.name}") ``` ### Attack Surface From compromised host, what can attacker reach? ```python compromised = "WebServer_1" exposed = algorithms.out_component(g, compromised) critical_systems = ["Database", "AdminPanel", "BackupServer"] at_risk = [n for n in exposed if n.name in critical_systems] if at_risk: print(f"ALERT: {len(at_risk)} critical systems exposed!") ``` ### Dependency Chains Map full downstream dependency tree ```python # Starting from root service downstream = algorithms.out_component(g, "RootService") print(f"Total downstream dependencies: {len(downstream)}") ``` ## Out-Component vs In-Component **Out-component**: Nodes reachable FROM source (downstream) **In-component**: Nodes that can reach target (upstream) **Example**: ```text Source → B → C ↓ D → E ``` - **Out-component(Source)**: {B, C, D, E} - **In-component(E)**: {Source, D} ## Temporal Out-Components Track how reachability evolves: ```python # How does blast radius change over time?
time_windows = [(0, 100), (100, 200), (200, 300)] for start, end in time_windows: snapshot = g.window(start, end) reachable = algorithms.out_component(snapshot, "CriticalService") print(f"Time {start}-{end}: {len(reachable)} nodes reachable") ``` ## See Also - [In-Components](./in-components) - Upstream dependencies - [Temporal Reachability](../temporal/temporal-reachability) - Time-aware reachability - [Single Source Shortest Path](../path-finding/shortest-paths) - Distance calculation --- ## Reference > Algorithms > Components > Strongly Connected # Strongly Connected Components **Find nodes that can mutually reach each other** Strongly Connected Components identifies maximal subgraphs where every node can reach every other node *respecting edge direction*. ## What It Computes Component ID for each node - stricter than weakly connected (requires bidirectional reachability). ## When to Use It - **Dependency cycles**: Find circular dependencies - **Mutual reachability**: Nodes that can communicate both ways - **Core structures**: Tightly coupled subsystems ## Performance **Time**: O(V + E) **Scales to**: 100M+ edges ## Example ```python from raphtory import algorithms components = algorithms.strongly_connected_components(g) groups = components.groups() for comp_id, members in groups.items(): if len(members) > 1: # Cycles print(f"Cycle found: {[n.name for n in members]}") ``` ## Use Cases ### Find Dependency Cycles Systems with circular dependencies: ```python # Services with circular calls cycles = [m for m in groups.values() if len(m) > 1] if cycles: print("WARNING: Circular dependencies detected!") ``` ### Core Network Nodes that form the "core" (mutually reachable) ## vs Weakly Connected **Weakly**: A can reach B OR B can reach A **Strongly**: A can reach B AND B can reach A ## See Also - [Weakly Connected](./weakly-connected) - Undirected connectivity - [In/Out Components](./in-components) - Directional reachability --- ## Reference > Algorithms > Components > Weakly Connected # Weakly Connected Components **Find disconnected groups and network structure** Weakly Connected Components identifies maximal subgraphs where every node can reach every other node when edge direction is ignored. ## What It Computes For each node, assigns a component ID. All nodes with the same ID are part of the same connected group. **Key insight**: Reveals network fragmentation - how many isolated groups exist, and how large is the main component. ## When to Use It Identify disconnected network segments Find isolated records or orphaned entities Separate independent fraud rings Measure network cohesion ## Parameters No parameters - runs on full graph. 
## Performance **Time Complexity**: O(V + E) **Space Complexity**: O(V) **Typical Runtime**: 1 second for 1M edges **Scales to**: 100M+ edges efficiently ## Example ```python from raphtory import Graph, algorithms # Create graph with multiple components g = Graph() # Component 1 g.add_edge(1, "A", "B") g.add_edge(2, "B", "C") # Component 2 g.add_edge(3, "D", "E") # Component 3 (isolated node) g.add_node(4, "F") # Find components components = algorithms.weakly_connected_components(g) # Analyze structure groups = components.groups() print(f"Number of components: {len(groups)}") for comp_id, members in groups.items(): print(f"Component {comp_id}: {len(members)} nodes") # Find giant component largest = max(groups, key=lambda k: len(groups[k])) print(f"Largest component: {len(groups[largest])} nodes") ``` **Output**: ```text Number of components: 3 Component 0: 3 nodes # A, B, C Component 1: 2 nodes # D, E Component 2: 1 node # F Largest component: 3 nodes ``` ## Real-World Use Cases ### Network Fragmentation Analysis **Problem**: Is network fragmenting over time? **Solution**: Track component count evolution ```python # Track fragmentation over time time_windows = [(0, 100), (100, 200), (200, 300)] for start, end in time_windows: snapshot = g.window(start, end) components = algorithms.weakly_connected_components(snapshot) num_components = len(components.groups()) print(f"Time {start}-{end}: {num_components} components") ``` **Interpretation**: - **Decreasing components**: Network connecting over time - **Increasing components**: Network fragmenting ### Fraud Ring Separation **Problem**: Multiple independent fraud rings in data **Solution**: Each component = potential separate ring ```python components = algorithms.weakly_connected_components(g) groups = components.groups() for comp_id, members in groups.items(): if len(members) > 3: # Rings with 4+ accounts print(f"Investigating fraud ring {comp_id}: {len(members)} accounts") ``` ### Data Quality Check **Problem**: Orphaned records in ETL pipeline **Solution**: Isolated components indicate data issues ## Giant Component The [Giant Component](https://en.wikipedia.org/wiki/Giant_component) is the largest connected component, typically containing most nodes in social/technological networks. **Typical in social networks**: 95%+ nodes in giant component **Fragmented networks**: 50% nodes in giant component ```python components = algorithms.weakly_connected_components(g) groups = components.groups() # Calculate giant component percentage total_nodes = g.count_nodes() giant_size = max(len(members) for members in groups.values()) giant_pct = (giant_size / total_nodes) * 100 print(f"Giant component: {giant_size}/{total_nodes} nodes ({giant_pct:.1f}%)") ``` ## Weakly vs Strongly Connected **Weakly Connected**: Ignore edge direction **Strongly Connected**: Respect edge direction (stricter) **Example**: ```text A → B → C ↑___↓ ``` - **Weakly**: All 3 nodes in one component - **Strongly**: Depends on exact connections **When to use which**: - **Undirected graphs**: Always weakly connected - **Directed graphs**: Weakly for general connectivity, strongly for directed reachability ## Performance Tips 1. **Use temporal windows** for evolution tracking 2. **Filter small components** for analysis (size < 3) 3. **Sample inspection**: Check largest components first 4. 
**Parallel processing**: Components can be analyzed independently ## See Also - **[Strongly Connected Components](./strongly-connected)** - Directed connectivity - **[In/Out Components](./in-components)** - Directional reachability - **[User Guide](/docs/algorithms/node-centric)** - Component algorithms --- ## Reference > Algorithms > Embeddings > Fast Rp # FastRP (Fast Random Projection) **Generate node embeddings for machine learning** FastRP creates vector representations of nodes that preserve graph structure - similar nodes get similar vectors. ## What It Computes Fixed-size vector for each node (e.g., 128 dimensions). ## When to Use It - **ML features**: Use graph structure in ML models - **Similarity search**: Find similar nodes efficiently - **Clustering**: Cluster nodes in vector space - **Dimensionality reduction**: Compress graph structure ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `embedding_dim` | int | 128 | Vector dimension | | `normalization_strength` | float | 0 | L2 normalization | | `iteration_weights` | list | [0, 1, 1] | multi-hop weights | ## Performance **Time**: O(E × dim) **Very fast** compared to Node2Vec **Scales to**: 100M+ edges ## Example ```python from raphtory import algorithms # Generate 128-dim embeddings embeddings = algorithms.fast_rp( g, embedding_dim=128, iteration_weights=[0, 1, 1, 1] ) # Get vector for a node node_vec = embeddings.get("User123") print(f"Embedding shape: {node_vec.shape}") # Find similar nodes (cosine similarity) from scipy.spatial.distance import cosine target_vec = embeddings.get("TargetUser") similarities = {} for node, vec in embeddings.items(): similarities[node] = 1 - cosine(target_vec, vec) # Top 5 most similar similar = sorted(similarities.items(), key=lambda x: x[1], reverse=True)[:5] for node, sim in similar: print(f"{node.name}: {sim:.3f}") ``` ## Use Cases ### Churn Prediction ```python # Create feature matrix for ML features = [] labels = [] for node in g.nodes(): embedding = embeddings.get(node.name) features.append(embedding) labels.append(node.properties.get("churned", False)) # Train model from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier() model.fit(features, labels) ``` ### Recommendation Systems Find users similar to target user via embedding similarity ### Fraud Detection Cluster embeddings to find fraud rings ### Link Prediction Predict edges between nodes with similar embeddings ## FastRP vs Node2Vec | Aspect | FastRP | Node2Vec | |--------|--------|----------| | **Speed** | Very fast | Slow (random walks) | | **Quality** | Good | Better | | **Best for** | Large graphs, speed | Quality critical | **FastRP is Raphtory's available embedding** - Node2Vec not implemented ## Temporal Embeddings Generate embeddings for different time windows: ```python time_windows = [(0, 100), (100, 200), (200, 300)] embeddings_over_time = {} for start, end in time_windows: snapshot = g.window(start, end) embeddings_over_time[f"{start}-{end}"] = algorithms.fast_rp(snapshot) # Track embedding evolution # (useful for detecting behavior changes) ``` ## See Also - [PageRank](../centrality/pagerank) - Another feature for ML - [Louvain](../community/louvain) - Community features --- ## Reference > Algorithms > Metrics > Average Degree # Average Degree **Measure mean connectivity** Average degree is the mean number of connections per node. 
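The next section gives the formula (2 × E) / V, which can be cross-checked directly from the graph's counts. A minimal sketch, assuming `algorithms.average_degree` as documented below and a `count_edges()` counterpart to the `count_nodes()` method used earlier in this reference:

```python
from raphtory import Graph, algorithms

# Toy graph: 3 nodes, 3 edges
g = Graph()
g.add_edge(1, "A", "B")
g.add_edge(2, "B", "C")
g.add_edge(3, "A", "C")

# Built-in metric, as documented on this page
avg_deg = algorithms.average_degree(g)

# Manual cross-check: every edge contributes one connection to each endpoint,
# so the mean degree is (2 * E) / V (here 2 * 3 / 3 = 2.0)
manual = 2 * g.count_edges() / g.count_nodes()

print(f"average_degree: {avg_deg:.2f}, (2E)/V: {manual:.2f}")
```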
## What It Computes Average degree = (2 × E) / V For directed graphs: in-degree and out-degree averages ## When to Use It - **Baseline metric**: Quick network summary - **Comparison**: Compare networks - **Evolution**: Track connectivity over time ## Example ```python from raphtory import algorithms avg_deg = algorithms.average_degree(g) print(f"Average degree: {avg_deg:.2f}") ``` ## Use Cases ### Network Growth ```python for year in range(2020, 2024): snapshot = g.window(f"{year}-01-01", f"{year}-12-31") avg = algorithms.average_degree(snapshot) print(f"{year}: {avg:.2f} avg connections") ``` ### Network Comparison Compare different networks by average degree ## See Also - [Graph Density](./density) - [Degree Centrality](../centrality/degree-centrality) --- ## Reference > Algorithms > Metrics > Clustering # Clustering Coefficient **Measure how tightly knit your network is** Clustering coefficient quantifies the degree to which nodes cluster together - the prevalence of triangles compared to all possible triangles. ## What It Computes Global metric (0 to 1): Probability that two neighbors of a node are also connected. ## When to Use It - **Network health**: How cohesive is the network? - **Community strength**: Higher = stronger communities - **Compare networks**: Benchmark against similar networks ## Performance **Time**: O(E × avg_degree) **Scales to**: 10M edges ## Example ```python from raphtory import algorithms cc = algorithms.global_clustering_coefficient(g) print(f"Clustering coefficient: {cc:.3f}") ``` **Interpretation**: - **~0.0**: No clustering (tree-like) - **0.3-0.6**: Typical social networks - **~1.0**: Highly clustered (many triangles) ## Use Cases ### Social Network Health Healthy communities have high clustering (0.4+) ### Fraud Detection Fake account networks have LOW clustering (no mutual connections) ### Network Evolution Track clustering over time: ```python for year in range(2020, 2024): snapshot = g.window(f"{year}-01-01", f"{year}-12-31") cc = algorithms.global_clustering_coefficient(snapshot) print(f"{year}: {cc:.3f}") ``` ## See Also - [Triangle Count](../motifs/triangle-count) - [Triplet Count](../motifs/triplet-count) --- ## Reference > Algorithms > Metrics > Density # Graph Density **Measure edge saturation** Graph density is the ratio of actual edges to possible edges - how "full" is your graph? ## What It Computes Density = E / (V × (V-1)) for directed graphs Range: 0 (no edges) to 1 (complete graph) ## When to Use It - **Network comparison**: Compare graphs of different sizes - **Complexity metric**: How interconnected is the system? - **Evolution tracking**: Is network growing sparser/denser? ## Example ```python from raphtory import algorithms density = algorithms.directed_graph_density(g) print(f"Graph density: {density:.4f}") ``` **Interpretation**: - **< 0.01**: Sparse (most real-world networks) - **0.01-0.1**: Moderately dense - **> 0.5**: Very dense (rare) ## Use Cases ### System Complexity Higher density = more complex system interactions ### Network Evolution ```python densities = [] for month in range(12): snapshot = g.window(month*30, (month+1)*30) densities.append(algorithms.directed_graph_density(snapshot)) ``` ## See Also - [Average Degree](./average-degree) - [User Guide](/docs/algorithms/running) --- ## Reference > Algorithms > Metrics > Reciprocity # Reciprocity **Measure mutual connections in directed networks** Reciprocity quantifies the probability that if A→B exists, then B→A also exists.
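The idea can be illustrated on a toy graph by checking each edge for its reverse before reaching for the built-in metric below. A rough sketch, not the library's implementation; it assumes `g.edge(src, dst)` returns `None` when no such edge exists:

```python
from raphtory import Graph

# Toy directed graph: A<->B is mutual, B->C is one-way
edges = [("A", "B"), ("B", "A"), ("B", "C")]
g = Graph()
for t, (src, dst) in enumerate(edges, start=1):
    g.add_edge(t, src, dst)

# Fraction of edges whose reverse edge also exists
# (manual illustration only; assumes g.edge() returns None for missing edges)
reciprocated = sum(1 for src, dst in edges if g.edge(dst, src) is not None)
print(f"Reciprocity (manual): {reciprocated / len(edges):.3f}")  # 2 of 3 edges reciprocated
```

Note that conventions differ: counting reciprocated edges (as above) gives 2/3 here, while counting each mutual pair once would give 1/3; compare the output of the built-in `global_reciprocity` against whichever convention you need.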
## What It Computes Reciprocity = (# bidirectional edges) / (# total edges) Range: 0 to 1 ## When to Use It - **Social networks**: Mutual friendships vs one-way follows - **Communication**: Bidirectional vs broadcast - **Collaboration**: Mutual vs unidirectional relationships ## Example ```python from raphtory import algorithms reciprocity = algorithms.global_reciprocity(g) print(f"Reciprocity: {reciprocity:.3f}") ``` **Interpretation**: - **~0.0**: One-way relationships (broadcast) - **~0.5**: Mix of mutual and one-way - **~1.0**: Mostly mutual (friendships) ## Use Cases ### Social Network Type - **Twitter**: Low reciprocity (~0.2) - follows - **Facebook**: High reciprocity (~0.8) - friendships ### Community Health Higher reciprocity = more engaged community ## See Also - [Graph Density](./density) - [User Guide](/docs/algorithms/running) --- ## Reference > Algorithms > Motifs > Local Triangle # Local Triangle Count **Count triangles per node** For each node, counts how many triangles it participates in. ## What It Computes Triangle count for each node individually. ## When to Use It - **Node-level cohesion**: Which nodes are in tight clusters? - **Influence**: High triangle count = embedded in community - **Anomaly detection**: Zero triangles = isolated/suspicious ## Example ```python from raphtory import algorithms local_counts = algorithms.local_triangle_count(g) # Nodes with most triangles top_embedded = local_counts.top_k(10) for node, count in top_embedded.items(): print(f"{node.name}: {count} triangles") # Isolated nodes (suspicious?) isolated = [n for n, c in local_counts.items() if c == 0] ``` ## Use Cases ### Find Core Members Nodes with high triangle counts are community cores ### Detect Fake Accounts Zero triangles = no mutual friends = suspicious ## See Also - [Triangle Count](./triangle-count) - Global count - [Clustering Coefficient](../metrics/clustering) --- ## Reference > Algorithms > Motifs > Three Node Motifs # 3-Node Motifs **Discover all 3-node structural patterns** Identifies and counts all possible 3-node subgraph patterns (motifs). ## What It Computes Counts for each of the possible 3-node configurations. ## When to Use It - **Structural analysis**: What patterns dominate your network? - **Network comparison**: Compare motif distributions - **Anomaly detection**: Unusual motif frequencies ## Example ```python from raphtory import algorithms motif_counts = algorithms.three_node_motifs(g) for motif_id, count in motif_counts.items(): print(f"Motif {motif_id}: {count}") ``` ## Use Cases ### Network Fingerprinting Each network type has characteristic motif distribution ### Anomaly Detection Unusual motif patterns indicate problems ## See Also - [Temporal 3-Node Motifs](../temporal/temporal-motifs) - [Triangle Count](./triangle-count) --- ## Reference > Algorithms > Motifs > Triangle Count # Triangle Count **Measure network cohesion and clustering** Counts the number of triangles (3-node cycles) in the network. ## What It Computes Total number of closed triangles in the graph. ## When to Use It - **Cohesion measurement**: How tightly knit is the network? 
- **Community strength**: More triangles = stronger communities - **Social analysis**: "Friend of friend" connections ## Performance **Time**: O(E × avg_degree) **Scales to**: 10M edges ## Example ```python from raphtory import algorithms triangle_count = algorithms.triangle_count(g) print(f"Triangles: {triangle_count}") # Per node local_triangles = algorithms.local_triangle_count(g) top_nodes = local_triangles.top_k(10) ``` ## Use Cases ### Community Health More triangles = stronger community bonds ### Trust Networks Triangles indicate mutual connections (higher trust) ### Fraud Detection Lack of triangles = suspicious (fake accounts don't know each other) ## See Also - [Clustering Coefficient](../metrics/clustering) - [Local Triangle Count](./local-triangle) --- ## Reference > Algorithms > Motifs > Triplet Count # Triplet Count **Count 3-node open paths (non-closed triangles)** Counts triplets - 3-node paths that are NOT closed into triangles. ## What It Computes Number of 3-node paths: A→B→C where A and C are NOT directly connected. ## When to Use It - **Triangle potential**: Triplets that could close into triangles - **Clustering coefficient**: Used in calculation - **Network evolution**: Track triplet→triangle transitions ## Example ```python from raphtory import algorithms triplet_count = algorithms.triplet_count(g) triangle_count = algorithms.triangle_count(g) clustering = (3 * triangle_count) / (triplet_count + 3 * triangle_count) print(f"Clustering coefficient: {clustering:.3f}") ``` ## Use Cases ### Link Prediction Triplets likely to close into triangles ### Network Evolution Track triplet→triangle conversion rate ## See Also - [Triangle Count](./triangle-count) - [Clustering Coefficient](../metrics/clustering) --- ## Reference > Algorithms > Path Finding > Dijkstra # Dijkstra's Algorithm **Find weighted shortest paths** Dijkstra computes shortest paths considering edge weights - optimal for finding lowest-cost routes. ## What It Computes Minimum-weight path from source to all reachable nodes. ## When to Use It - **Weighted networks**: Edges have costs/distances/times - **Route optimization**: Find cheapest/fastest path - **Resource allocation**: Minimize cost ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | `source` | node/str | Starting node | | `weight_prop` | str | Edge weight property | | `target` | node/str | Optional destination | ## Performance **Time**: O((V + E) log V) **Scales to**: 10M edges ## Example ```python from raphtory import algorithms # Edges have 'cost' property distances = algorithms.dijkstra( g, source="Start", weight_prop="cost" ) for node, cost in distances.items(): print(f"{node.name}: ${cost:.2f}") ``` ## Use Cases ### Supply Chain Find lowest-cost shipping routes ### Network Routing Optimize packet routing by latency ### Resource Planning Minimize cost/time to reach objectives ## Dijkstra vs BFS **BFS (unweighted)**: Use when all edges equal **Dijkstra (weighted)**: Use when edges have costs ## See Also - [Single Source Shortest Path](./shortest-paths) - Unweighted (BFS) - [Temporal Reachability](../temporal/temporal-reachability) --- ## Reference > Algorithms > Path Finding > Shortest Paths # Single Source Shortest Path **Find shortest paths from a source to all nodes** Computes shortest paths (minimum hop count) from a source node to all reachable nodes using BFS. 
## What It Computes For each reachable node: - Distance (hops) from source - Optional: actual path ## When to Use It - **Accessibility**: What can source reach? - **Distance analysis**: How far are nodes from source? - **Network diameter**: Farthest reachable node ## Parameters | Parameter | Type | Description | |-----------|------|-------------| | `source` | node/str | Starting node | | `cutoff` | int | Max hops (optional) | ## Performance **Time**: O(V + E) **Very fast** - BFS is optimal ## Example ```python from raphtory import algorithms distances = algorithms.single_source_shortest_path(g, source="RootNode") for node, distance in distances.items(): print(f"{node.name}: {distance} hops away") # Find farthest node farthest = max(distances.items(), key=lambda x: x[1]) print(f"Diameter: {farthest[1]} hops") ``` ## Use Cases ### Service Reachability From root service, what's reachable? ### Network Diameter Maximum distance = network diameter ### Blast Radius From failure point, how far does impact spread? ## See Also - [Dijkstra](./dijkstra) - Weighted shortest paths - [Temporal Reachability](../temporal/temporal-reachability) - Time-aware paths --- ## Reference > Algorithms > Temporal > Rich Club # Temporal Rich Club Coefficient **Track elite connectivity evolution over time** Measures how well-connected the "elite" nodes (high-degree) are to each other over time. ## What It Computes Coefficient (0 to 1) indicating if high-degree nodes preferentially connect to each other. ## When to Use It - **Core group detection**: Find tightly-knit elite clusters - **Network evolution**: Track core formation over time - **Inequality analysis**: Measure concentration of connections ## Performance **Time**: O(E log V) **Scales to**: 10M edges ## Example ```python from raphtory import algorithms coefficient = algorithms.temporal_rich_club_coefficient(g, k=10) print(f"Rich club coefficient: {coefficient:.3f}") ``` **Interpretation**: - **>0.5**: Strong rich club effect (elites connect) - **~0**: Random connectivity - **High k**: Focus on very connected nodes ## Use Cases ### Social Networks Core influencer groups ### Financial Networks Major institutions' interconnectedness ### Scientific Collaboration Elite researcher collaboration patterns ## See Also - [PageRank](../centrality/pagerank) - [Degree Centrality](../centrality/degree-centrality) --- ## Reference > Algorithms > Temporal > Temporal Motifs # Temporal 3-Node Motifs **Detect recurring temporal patterns in your network** Identifies all 3-node temporal motifs - ordered sequences of edges between 3 nodes that respect time. ## What It Computes Counts of each temporal motif type (16 possible patterns for directed graphs). 
## When to Use It - **Pattern discovery**: Find common interaction sequences - **Fraud detection**: Identify coordinated timing patterns - **Behavior analysis**: Discover temporal signatures ## Performance **Time**: O(E × avg_degree) **Can be expensive** on dense graphs **Scales to**: 1M edges ## Example ```python from raphtory import algorithms motif_counts = algorithms.temporal_three_node_motifs(g) # Interpret results for motif_type, count in motif_counts.items(): print(f"Motif {motif_type}: {count} occurrences") ``` ## Use Cases ### Fraud Pattern Detection Coordinated account behavior: ```text A → B → C (sequential transactions) A → C, B → C (coordinated deposits) ``` ### Social Patterns Communication sequences revealing structure ### Attack Patterns Lateral movement signatures in security logs ## Temporal vs Static Motifs **Static**: Edges exist (ignore time) **Temporal**: Edges occur in specific order **Critical difference**: Temporal reveals causality ## See Also - [Temporal Reachability](./temporal-reachability) - [Triangle Count](../motifs/triangle-count) --- ## Reference > Algorithms > Temporal > Temporal Reachability # Temporal Reachability **Discover who can reach whom through time-respecting paths** Temporal reachability analyzes which nodes can reach which other nodes through paths that respect temporal ordering - edges must be traversed in chronological order. ## What It Computes From a source node and starting time, finds all nodes reachable via temporal paths where each edge timestamp is >= the previous edge's timestamp. **Key insight**: Temporal paths reveal causality - if A can reach B temporally, A's state could have influenced B. ## When to Use It Trace attack chains and lateral movement paths Model disease/information spread through contact networks Identify disruption propagation and dependencies Track how information/influence flows through networks ## Parameters | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `start_time` | int | 0 | Time to start the temporal walk | | `seed_nodes` | list | required | Source nodes to start from | | `max_hops` | int | None | Maximum path length (optional) | **Tuning advice**: - **start_time**: Usually earliest time of interest (e.g., compromise time) - **max_hops**: Limit for performance on large graphs - **seed_nodes**: Known infected/compromised nodes ## Performance **Time Complexity**: O(V + E) with early termination **Space Complexity**: O(V) **Typical Runtime**: 0.5-2 seconds for 1M edges **Scales to**: 10M+ edges ## Example ```python from raphtory import Graph, algorithms # Create temporal network (e.g., logins) g = Graph() g.add_edge(1, "Host_A", "Host_B") # Login at time 1 g.add_edge(5, "Host_B", "Host_C") # Login at time 5 g.add_edge(3, "Host_A", "Host_D") # Login at time 3 g.add_edge(10, "Host_C", "Host_E") # Login at time 10 # Find all hosts reachable from Host_A starting at time 0 reachable = algorithms.temporal_reachability( g, start_time=0, seed_nodes=["Host_A"], max_hops=5 ) print("Hosts compromised from Host_A:") for node in reachable: print(f" - {node.name}") ``` **Output**: ```text Hosts compromised from Host_A: - Host_A - Host_B - Host_C - Host_D - Host_E ``` **Note**: Host_E is reachable (A→B→C→E) but only via the temporal path respecting time order. ## Real-World Use Cases ### Cybersecurity: Lateral Movement **Problem**: Attacker compromised one host - what else can they reach? 
**Solution**: Temporal reachability shows attack surface ```python # Compromised host detected at time 1000 compromised_host = "WebServer_1" compromise_time = 1000 # What can attacker reach? reachable = algorithms.temporal_reachability( g, start_time=compromise_time, seed_nodes=[compromised_host] ) # Check if critical systems are reachable critical_systems = ["Database", "DomainController", "Backups"] exposed = [n for n in reachable if n.name in critical_systems] if exposed: print(f"CRITICAL: {len(exposed)} critical systems exposed!") print(f"Exposed systems: {[n.name for n in exposed]}") ``` ### Supply Chain Disruption **Problem**: Supplier failed - which production depends on them? **Solution**: Temporal reachability traces impact ### Information Spread **Problem**: Content went viral - trace propagation path **Solution**: Find all users reached from initial posters ## Temporal vs Static Reachability **Static reachability**: A and E are connected (path exists ignoring time) **Temporal reachability**: A can reach E only if path respects time ordering **Example where they differ**: ```text A --(t=10)--> B B --(t=5)---> C ``` - **Static**: A can reach C (path A→B→C exists) - **Temporal**: A CANNOT reach C (would require going backwards in time) **Why it matters**: Temporal reachability reveals true causality and influence paths ## Performance Tips 1. **Limit max_hops** for large graphs 2. **Use time windows** to focus on recent activity 3. **Multiple seeds** for efficiency vs running multiple queries 4. **Early termination**: Algorithm stops when no new nodes found ## See Also - **[Temporal 3-Node Motifs](./temporal-motifs)** - Temporal pattern detection --- ## Reference > Api > Graphql > Enums --- title: "Enums" breadcrumb: "Reference / GraphQL / Enums" --- # Enum Types Enumeration types for fixed value sets. ## Types | Type | Description | |------|-------------| | [`AlignmentUnit`](#alignmentunit) | Alignment unit used to align window boundaries. | | [`AllPropertySpec`](#allpropertyspec) | | | [`GraphType`](#graphtype) | | | [`NodeField`](#nodefield) | | | [`SortByTime`](#sortbytime) | | --- ## Type Details ### AlignmentUnit Alignment unit used to align window boundaries. #### Values | Value | Description | |-------|-------------| | `UNALIGNED` | | | `MILLISECOND` | | | `SECOND` | | | `MINUTE` | | | `HOUR` | | | `DAY` | | | `WEEK` | | | `MONTH` | | | `YEAR` | | ### AllPropertySpec #### Values | Value | Description | |-------|-------------| | `ALL` | All properties and metadata. | | `ALL_METADATA` | All metadata. | | `ALL_PROPERTIES` | All properties. | ### GraphType #### Values | Value | Description | |-------|-------------| | `PERSISTENT` | Persistent. | | `EVENT` | Event. | ### NodeField #### Values | Value | Description | |-------|-------------| | `NODE_ID` | Node id. | | `NODE_NAME` | Node name. | | `NODE_TYPE` | Node type. | ### SortByTime #### Values | Value | Description | |-------|-------------| | `LATEST` | Latest time | | `EARLIEST` | Earliest time | --- ## Reference > Api > Graphql > Inputs --- title: "Inputs" breadcrumb: "Reference / GraphQL / Inputs" --- # Input Types Input types for mutations and complex queries. 
## Types | Type | Description | |------|-------------| | [`EdgeAddition`](#edgeaddition) | | | [`EdgeFilter`](#edgefilter) | | | [`EdgeLayersExpr`](#edgelayersexpr) | | | [`EdgeSortBy`](#edgesortby) | | | [`EdgesViewCollection`](#edgesviewcollection) | | | [`EdgeTimeExpr`](#edgetimeexpr) | | | [`EdgeUnaryExpr`](#edgeunaryexpr) | | | [`EdgeViewCollection`](#edgeviewcollection) | | | [`EdgeWindowExpr`](#edgewindowexpr) | | | [`GraphFilter`](#graphfilter) | | | [`GraphLayersExpr`](#graphlayersexpr) | | | [`GraphTimeExpr`](#graphtimeexpr) | | | [`GraphUnaryExpr`](#graphunaryexpr) | | | [`GraphViewCollection`](#graphviewcollection) | | | [`GraphWindowExpr`](#graphwindowexpr) | | | [`IndexSpecInput`](#indexspecinput) | | | [`InputEdge`](#inputedge) | | | [`NodeAddition`](#nodeaddition) | | | [`NodeFieldCondition`](#nodefieldcondition) | | | [`NodeFieldFilterNew`](#nodefieldfilternew) | | | [`NodeFilter`](#nodefilter) | | | [`NodeLayersExpr`](#nodelayersexpr) | | | [`NodeSortBy`](#nodesortby) | | | [`NodesViewCollection`](#nodesviewcollection) | | | [`NodeTimeExpr`](#nodetimeexpr) | | | [`NodeUnaryExpr`](#nodeunaryexpr) | | | [`NodeViewCollection`](#nodeviewcollection) | | | [`NodeWindowExpr`](#nodewindowexpr) | | | [`ObjectEntry`](#objectentry) | | | [`PathFromNodeViewCollection`](#pathfromnodeviewcollection) | | | [`PropCondition`](#propcondition) | | | [`PropertyFilterNew`](#propertyfilternew) | | | [`PropertyInput`](#propertyinput) | | | [`PropsInput`](#propsinput) | | | [`SomePropertySpec`](#somepropertyspec) | SomePropertySpec object containing lists of metadata and property names. | | [`TemporalPropertyInput`](#temporalpropertyinput) | | | [`Value`](#value) | | | [`VectorisedGraphWindow`](#vectorisedgraphwindow) | | | [`Window`](#window) | | | [`WindowDuration`](#windowduration) | | --- ## Type Details ### EdgeAddition #### Fields | Field | Type | Description | |-------|------|-------------| | `src` | [String](/docs/reference/api/graphql/scalars#string)! | Source node. | | `dst` | [String](/docs/reference/api/graphql/scalars#string)! | Destination node. | | `layer` | [String](/docs/reference/api/graphql/scalars#string) | Layer. | | `metadata` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | Metadata. | | `updates` | [[TemporalPropertyInput](/docs/reference/api/graphql/inputs#temporalpropertyinput)!] | | ### EdgeFilter #### Fields | Field | Type | Description | |-------|------|-------------| | `src` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | Source node filter. | | `dst` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | Destination node filter. | | `property` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Property filter. | | `metadata` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Metadata filter. | | `temporalProperty` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Temporal property filter. | | `and` | [[EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)!] | AND operator. | | `or` | [[EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)!] | OR operator. | | `not` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | NOT operator. 
| | `window` | [EdgeWindowExpr](/docs/reference/api/graphql/inputs#edgewindowexpr) | | | `at` | [EdgeTimeExpr](/docs/reference/api/graphql/inputs#edgetimeexpr) | | | `before` | [EdgeTimeExpr](/docs/reference/api/graphql/inputs#edgetimeexpr) | | | `after` | [EdgeTimeExpr](/docs/reference/api/graphql/inputs#edgetimeexpr) | | | `latest` | [EdgeUnaryExpr](/docs/reference/api/graphql/inputs#edgeunaryexpr) | | | `snapshotAt` | [EdgeTimeExpr](/docs/reference/api/graphql/inputs#edgetimeexpr) | | | `snapshotLatest` | [EdgeUnaryExpr](/docs/reference/api/graphql/inputs#edgeunaryexpr) | | | `layers` | [EdgeLayersExpr](/docs/reference/api/graphql/inputs#edgelayersexpr) | | | `isActive` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Edge is active in the current view/window. | | `isValid` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Edge is valid (undeleted) in the current view/window. | | `isDeleted` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Edge is deleted in the current view/window. | | `isSelfLoop` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Edge is a self-loop in the current view/window. | ### EdgeLayersExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### EdgeSortBy #### Fields | Field | Type | Description | |-------|------|-------------| | `reverse` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Reverse order | | `src` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Source node | | `dst` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Destination | | `time` | [SortByTime](/docs/reference/api/graphql/enums#sortbytime) | Time | | `property` | [String](/docs/reference/api/graphql/scalars#string) | Property | ### EdgesViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Contains only the default layer. | | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Latest time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Snapshot at latest time. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Snapshot at specified time. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of included layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single excluded layer. | | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. 
| | `edgeFilter` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | Edge filter | ### EdgeTimeExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### EdgeUnaryExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### EdgeViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Contains only the default layer. | | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Latest time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Snapshot at latest time. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Snapshot at specified time. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of included layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single excluded layer. | | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. | | `edgeFilter` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | Edge filter | ### EdgeWindowExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### GraphFilter #### Fields | Field | Type | Description | |-------|------|-------------| | `window` | [GraphWindowExpr](/docs/reference/api/graphql/inputs#graphwindowexpr) | | | `at` | [GraphTimeExpr](/docs/reference/api/graphql/inputs#graphtimeexpr) | | | `before` | [GraphTimeExpr](/docs/reference/api/graphql/inputs#graphtimeexpr) | | | `after` | [GraphTimeExpr](/docs/reference/api/graphql/inputs#graphtimeexpr) | | | `latest` | [GraphUnaryExpr](/docs/reference/api/graphql/inputs#graphunaryexpr) | | | `snapshotAt` | [GraphTimeExpr](/docs/reference/api/graphql/inputs#graphtimeexpr) | | | `snapshotLatest` | [GraphUnaryExpr](/docs/reference/api/graphql/inputs#graphunaryexpr) | | | `layers` | [GraphLayersExpr](/docs/reference/api/graphql/inputs#graphlayersexpr) | | ### GraphLayersExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! 
| | | `expr` | [GraphFilter](/docs/reference/api/graphql/inputs#graphfilter) | | ### GraphTimeExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [GraphFilter](/docs/reference/api/graphql/inputs#graphfilter) | | ### GraphUnaryExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `expr` | [GraphFilter](/docs/reference/api/graphql/inputs#graphfilter) | | ### GraphViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Contains only the default layer. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of included layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single excluded layer. | | `subgraph` | [[String](/docs/reference/api/graphql/scalars#string)!] | Subgraph nodes. | | `subgraphNodeTypes` | [[String](/docs/reference/api/graphql/scalars#string)!] | Subgraph node types. | | `excludeNodes` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded nodes. | | `valid` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Valid state. | | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | View at the latest time. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Snapshot at specified time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Snapshot at latest time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. | | `nodeFilter` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | Node filter. | | `edgeFilter` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | Edge filter. | ### GraphWindowExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [GraphFilter](/docs/reference/api/graphql/inputs#graphfilter) | | ### IndexSpecInput #### Fields | Field | Type | Description | |-------|------|-------------| | `nodeProps` | [PropsInput](/docs/reference/api/graphql/inputs#propsinput)! | Node properties. | | `edgeProps` | [PropsInput](/docs/reference/api/graphql/inputs#propsinput)! | Edge properties. | ### InputEdge #### Fields | Field | Type | Description | |-------|------|-------------| | `src` | [String](/docs/reference/api/graphql/scalars#string)! | Source node. | | `dst` | [String](/docs/reference/api/graphql/scalars#string)! 
| Destination node. | ### NodeAddition #### Fields | Field | Type | Description | |-------|------|-------------| | `name` | [String](/docs/reference/api/graphql/scalars#string)! | Name. | | `nodeType` | [String](/docs/reference/api/graphql/scalars#string) | Node type. | | `metadata` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | Metadata. | | `updates` | [[TemporalPropertyInput](/docs/reference/api/graphql/inputs#temporalpropertyinput)!] | Updates. | ### NodeFieldCondition #### Fields | Field | Type | Description | |-------|------|-------------| | `eq` | [Value](/docs/reference/api/graphql/inputs#value) | | | `ne` | [Value](/docs/reference/api/graphql/inputs#value) | | | `gt` | [Value](/docs/reference/api/graphql/inputs#value) | | | `ge` | [Value](/docs/reference/api/graphql/inputs#value) | | | `lt` | [Value](/docs/reference/api/graphql/inputs#value) | | | `le` | [Value](/docs/reference/api/graphql/inputs#value) | | | `startsWith` | [Value](/docs/reference/api/graphql/inputs#value) | | | `endsWith` | [Value](/docs/reference/api/graphql/inputs#value) | | | `contains` | [Value](/docs/reference/api/graphql/inputs#value) | | | `notContains` | [Value](/docs/reference/api/graphql/inputs#value) | | | `isIn` | [Value](/docs/reference/api/graphql/inputs#value) | | | `isNotIn` | [Value](/docs/reference/api/graphql/inputs#value) | | ### NodeFieldFilterNew #### Fields | Field | Type | Description | |-------|------|-------------| | `field` | [NodeField](/docs/reference/api/graphql/enums#nodefield)! | | | `where` | [NodeFieldCondition](/docs/reference/api/graphql/inputs#nodefieldcondition)! | | ### NodeFilter #### Fields | Field | Type | Description | |-------|------|-------------| | `node` | [NodeFieldFilterNew](/docs/reference/api/graphql/inputs#nodefieldfilternew) | Node filter. | | `property` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Property filter. | | `metadata` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Metadata filter. | | `temporalProperty` | [PropertyFilterNew](/docs/reference/api/graphql/inputs#propertyfilternew) | Temporal property filter. | | `and` | [[NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)!] | AND operator. | | `or` | [[NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)!] | OR operator. | | `not` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | NOT operator. | | `window` | [NodeWindowExpr](/docs/reference/api/graphql/inputs#nodewindowexpr) | | | `at` | [NodeTimeExpr](/docs/reference/api/graphql/inputs#nodetimeexpr) | | | `before` | [NodeTimeExpr](/docs/reference/api/graphql/inputs#nodetimeexpr) | | | `after` | [NodeTimeExpr](/docs/reference/api/graphql/inputs#nodetimeexpr) | | | `latest` | [NodeUnaryExpr](/docs/reference/api/graphql/inputs#nodeunaryexpr) | | | `snapshotAt` | [NodeTimeExpr](/docs/reference/api/graphql/inputs#nodetimeexpr) | | | `snapshotLatest` | [NodeUnaryExpr](/docs/reference/api/graphql/inputs#nodeunaryexpr) | | | `layers` | [NodeLayersExpr](/docs/reference/api/graphql/inputs#nodelayersexpr) | | | `isActive` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Node is active in the current view/window. | ### NodeLayersExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! 
| | ### NodeSortBy #### Fields | Field | Type | Description | |-------|------|-------------| | `reverse` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Reverse order | | `id` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Unique Id | | `time` | [SortByTime](/docs/reference/api/graphql/enums#sortbytime) | Time | | `property` | [String](/docs/reference/api/graphql/scalars#string) | Property | ### NodesViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Contains only the default layer. | | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | View at the latest time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Snapshot at latest time. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of included layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single excluded layer. | | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Snapshot at specified time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. | | `nodeFilter` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | Node filter. | | `typeFilter` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of types. | ### NodeTimeExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### NodeUnaryExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### NodeViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Contains only the default layer. | | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | View at the latest time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Snapshot at latest time. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Snapshot at specified time. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of included layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single excluded layer. 
| | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. | | `nodeFilter` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | Node filter. | ### NodeWindowExpr #### Fields | Field | Type | Description | |-------|------|-------------| | `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### ObjectEntry #### Fields | Field | Type | Description | |-------|------|-------------| | `key` | [String](/docs/reference/api/graphql/scalars#string)! | Key. | | `value` | [Value](/docs/reference/api/graphql/inputs#value)! | Value. | ### PathFromNodeViewCollection #### Fields | Field | Type | Description | |-------|------|-------------| | `latest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Latest time. | | `snapshotLatest` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Latest snapshot. | | `snapshotAt` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Time. | | `layers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of layers. | | `excludeLayers` | [[String](/docs/reference/api/graphql/scalars#string)!] | List of excluded layers. | | `excludeLayer` | [String](/docs/reference/api/graphql/scalars#string) | Single layer to exclude. | | `window` | [Window](/docs/reference/api/graphql/inputs#window) | Window between a start and end time. | | `at` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View at a specified time. | | `before` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View before a specified time (end exclusive). | | `after` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | View after a specified time (start exclusive). | | `shrinkWindow` | [Window](/docs/reference/api/graphql/inputs#window) | Shrink a Window to a specified start and end time. | | `shrinkStart` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window start to a specified time. | | `shrinkEnd` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput) | Set the window end to a specified time. 
| ### PropCondition #### Fields | Field | Type | Description | |-------|------|-------------| | `eq` | [Value](/docs/reference/api/graphql/inputs#value) | | | `ne` | [Value](/docs/reference/api/graphql/inputs#value) | | | `gt` | [Value](/docs/reference/api/graphql/inputs#value) | | | `ge` | [Value](/docs/reference/api/graphql/inputs#value) | | | `lt` | [Value](/docs/reference/api/graphql/inputs#value) | | | `le` | [Value](/docs/reference/api/graphql/inputs#value) | | | `startsWith` | [Value](/docs/reference/api/graphql/inputs#value) | | | `endsWith` | [Value](/docs/reference/api/graphql/inputs#value) | | | `contains` | [Value](/docs/reference/api/graphql/inputs#value) | | | `notContains` | [Value](/docs/reference/api/graphql/inputs#value) | | | `isIn` | [Value](/docs/reference/api/graphql/inputs#value) | | | `isNotIn` | [Value](/docs/reference/api/graphql/inputs#value) | | | `isSome` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `isNone` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `and` | [[PropCondition](/docs/reference/api/graphql/inputs#propcondition)!] | | | `or` | [[PropCondition](/docs/reference/api/graphql/inputs#propcondition)!] | | | `not` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `first` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `last` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `any` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `all` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `sum` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `avg` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `min` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `max` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | | `len` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition) | | ### PropertyFilterNew #### Fields | Field | Type | Description | |-------|------|-------------| | `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `where` | [PropCondition](/docs/reference/api/graphql/inputs#propcondition)! | | ### PropertyInput #### Fields | Field | Type | Description | |-------|------|-------------| | `key` | [String](/docs/reference/api/graphql/scalars#string)! | Key. | | `value` | [Value](/docs/reference/api/graphql/inputs#value)! | Value. | ### PropsInput #### Fields | Field | Type | Description | |-------|------|-------------| | `all` | [AllPropertySpec](/docs/reference/api/graphql/enums#allpropertyspec) | All properties and metadata. | | `some` | [SomePropertySpec](/docs/reference/api/graphql/inputs#somepropertyspec) | Some properties and metadata. | ### SomePropertySpec SomePropertySpec object containing lists of metadata and property names. #### Fields | Field | Type | Description | |-------|------|-------------| | `metadata` | [[String](/docs/reference/api/graphql/scalars#string)!]! | List of metadata. | | `properties` | [[String](/docs/reference/api/graphql/scalars#string)!]! | List of properties. | ### TemporalPropertyInput #### Fields | Field | Type | Description | |-------|------|-------------| | `time` | [Int](/docs/reference/api/graphql/scalars#int)! | Time. | | `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | Properties. 
| ### Value #### Fields | Field | Type | Description | |-------|------|-------------| | `u8` | [Int](/docs/reference/api/graphql/scalars#int) | 8 bit unsigned integer. | | `u16` | [Int](/docs/reference/api/graphql/scalars#int) | 16 bit unsigned integer. | | `u32` | [Int](/docs/reference/api/graphql/scalars#int) | 32 bit unsigned integer. | | `u64` | [Int](/docs/reference/api/graphql/scalars#int) | 64 bit unsigned integer. | | `i32` | [Int](/docs/reference/api/graphql/scalars#int) | 32 bit signed integer. | | `i64` | [Int](/docs/reference/api/graphql/scalars#int) | 64 bit signed integer. | | `f32` | [Float](/docs/reference/api/graphql/scalars#float) | 32 bit float. | | `f64` | [Float](/docs/reference/api/graphql/scalars#float) | 64 bit float. | | `str` | [String](/docs/reference/api/graphql/scalars#string) | String. | | `bool` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | Boolean. | | `list` | [[Value](/docs/reference/api/graphql/inputs#value)!] | List. | | `object` | [[ObjectEntry](/docs/reference/api/graphql/inputs#objectentry)!] | Object. | ### VectorisedGraphWindow #### Fields | Field | Type | Description | |-------|------|-------------| | `start` | [Int](/docs/reference/api/graphql/scalars#int)! | Start time. | | `end` | [Int](/docs/reference/api/graphql/scalars#int)! | End time. | ### Window #### Fields | Field | Type | Description | |-------|------|-------------| | `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | Window start time. | | `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | Window end time. | ### WindowDuration #### Fields | Field | Type | Description | |-------|------|-------------| | `duration` | [String](/docs/reference/api/graphql/scalars#string) | Duration of the window period. | | `epoch` | [Int](/docs/reference/api/graphql/scalars#int) | Time. | --- ## Reference > Api > Graphql > Mutation # Mutation | Field | Type | Description | |-------|------|-------------| | [`plugins`](#plugins) | [MutationPlugin](/docs/reference/api/graphql/objects#mutationplugin)! | Returns a collection of mutation plugins. | | [`deleteGraph`](#deletegraph) | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Delete graph from a path on the server. | | [`newGraph`](#newgraph) | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Creates a new graph. | | [`moveGraph`](#movegraph) | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Move graph from a path on the server to a new_path on the server. | | [`copyGraph`](#copygraph) | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Copy graph from a path on the server to a new_path on the server. | | [`uploadGraph`](#uploadgraph) | [String](/docs/reference/api/graphql/scalars#string)! | Upload a graph file from a path on the client using GQL multipart uploading. | | [`sendGraph`](#sendgraph) | [String](/docs/reference/api/graphql/scalars#string)! | Send graph bincode as a base64-encoded string. | | [`createSubgraph`](#createsubgraph) | [String](/docs/reference/api/graphql/scalars#string)! | Returns a subgraph given a set of nodes from an existing graph on the server. | | [`createIndex`](#createindex) | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | (Experimental) Creates a search index. | --- ## Field Details ### plugins Returns a collection of mutation plugins. #### Returns **Type:** [MutationPlugin](/docs/reference/api/graphql/objects#mutationplugin)!
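The individual mutation fields are documented below. As a minimal sketch of how they are invoked (the server path is illustrative, and the `GraphType` value is passed as a variable whose variants are listed in the enums reference), creating and later deleting a graph looks like this:

```graphql
# Create an empty graph at an illustrative server path.
# $graphType takes a value from the GraphType enum (see the enums reference).
mutation CreateGraph($graphType: GraphType!) {
  newGraph(path: "projects/example_graph", graphType: $graphType)
}

# Remove the same graph once it is no longer needed.
mutation RemoveGraph {
  deleteGraph(path: "projects/example_graph")
}
```

Each named operation is executed separately by passing its name as the `operationName` of the request.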
### deleteGraph Delete graph from a path on the server. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [Boolean](/docs/reference/api/graphql/scalars#boolean)! ### newGraph Creates a new graph. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `graphType` | [GraphType](/docs/reference/api/graphql/enums#graphtype)! | | #### Returns **Type:** [Boolean](/docs/reference/api/graphql/scalars#boolean)! ### moveGraph Move graph from a path on the server to a new_path on the server. If namespace is not provided, it will be set to the current working directory. This applies to both the graph namespace and new graph namespace. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `newPath` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [Boolean](/docs/reference/api/graphql/scalars#boolean)! ### copyGraph Copy graph from a path on the server to a new_path on the server. If namespace is not provided, it will be set to the current working directory. This applies to both the graph namespace and new graph namespace. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `newPath` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [Boolean](/docs/reference/api/graphql/scalars#boolean)! ### uploadGraph Upload a graph file from a path on the client using GQL multipart uploading. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `graph` | [Upload](/docs/reference/api/graphql/scalars#upload)! | | | `overwrite` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | | #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! ### sendGraph Send graph bincode as a base64-encoded string. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `graph` | [String](/docs/reference/api/graphql/scalars#string)! | | | `overwrite` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | | #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! ### createSubgraph Returns a subgraph given a set of nodes from an existing graph on the server. #### Arguments | Name | Type | Description | |------|------|-------------| | `parentPath` | [String](/docs/reference/api/graphql/scalars#string)! | | | `nodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `newPath` | [String](/docs/reference/api/graphql/scalars#string)! | | | `overwrite` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | | #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! ### createIndex (Experimental) Creates a search index. #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `indexSpec` | [IndexSpecInput](/docs/reference/api/graphql/inputs#indexspecinput) | | | `inRam` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | | #### Returns **Type:** [Boolean](/docs/reference/api/graphql/scalars#boolean)!
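The object types in the next section describe what these mutations and the query root return. As a hedged sketch of how the input types above combine with graph views, the query below counts and lists the edges of a graph whose `weight` property exceeded a threshold inside a one-day window. It assumes the query root exposes a `graph(path: ...)` field as used elsewhere in this reference; the path, property name, and timestamps are illustrative.

```graphql
# Illustrative only: the graph path, the "weight" property name and the
# timestamps (milliseconds since the Unix epoch) are placeholders.
query HighWeightEdges {
  graph(path: "projects/example_graph") {
    window(start: 1719792000000, end: 1719878400000) {
      edges(select: { property: { name: "weight", where: { gt: { i64: 10 } } } }) {
        count
        list {
          id
        }
      }
    }
  }
}
```

The same `EdgeFilter` shape is accepted wherever the schema expects one, for example by the `expr` argument of `Edges.filter` and `Graph.filterEdges`.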
--- ## Reference > Api > Graphql > Objects # Objects GraphQL object types representing graph entities. ## Types | Type | Description | |------|-------------| | [`CollectionOfMetaGraph`](#collectionofmetagraph) | Collection of items | | [`CollectionOfNamespace`](#collectionofnamespace) | Collection of items | | [`CollectionOfNamespacedItem`](#collectionofnamespaceditem) | Collection of items | | [`Document`](#document) | Document in a vector graph | | [`Edge`](#edge) | Raphtory graph edge. | | [`Edges`](#edges) | | | [`EdgeSchema`](#edgeschema) | | | [`EdgesWindowSet`](#edgeswindowset) | | | [`EdgeWindowSet`](#edgewindowset) | | | [`EventTime`](#eventtime) | Raphtory’s EventTime. | | [`Graph`](#graph) | | | [`GraphAlgorithmPlugin`](#graphalgorithmplugin) | | | [`GraphSchema`](#graphschema) | | | [`GraphWindowSet`](#graphwindowset) | | | [`History`](#history) | History of updates for an object in Raphtory. | | [`HistoryDateTime`](#historydatetime) | History object that provides access to datetimes instead of `EventTime` entries. | | [`HistoryEventId`](#historyeventid) | History object that provides access to event ids instead of `EventTime` entries. | | [`HistoryTimestamp`](#historytimestamp) | History object that provides access to timestamps (milliseconds since the Unix epoch) instead of `EventTime` entries. | | [`IndexSpec`](#indexspec) | | | [`Intervals`](#intervals) | Provides access to the intervals between temporal entries of an object. | | [`LayerSchema`](#layerschema) | | | [`Metadata`](#metadata) | | | [`MetaGraph`](#metagraph) | | | [`MutableEdge`](#mutableedge) | | | [`MutableGraph`](#mutablegraph) | | | [`MutableNode`](#mutablenode) | | | [`MutationPlugin`](#mutationplugin) | | | [`Namespace`](#namespace) | | | [`Node`](#node) | Raphtory graph node. | | [`Nodes`](#nodes) | | | [`NodeSchema`](#nodeschema) | | | [`NodesWindowSet`](#nodeswindowset) | | | [`NodeWindowSet`](#nodewindowset) | | | [`PagerankOutput`](#pagerankoutput) | PageRank score. | | [`PathFromNode`](#pathfromnode) | | | [`PathFromNodeWindowSet`](#pathfromnodewindowset) | | | [`Properties`](#properties) | | | [`Property`](#property) | | | [`PropertySchema`](#propertyschema) | | | [`PropertyTuple`](#propertytuple) | | | [`QueryPlugin`](#queryplugin) | | | [`ShortestPathOutput`](#shortestpathoutput) | | | [`TemporalProperties`](#temporalproperties) | | | [`TemporalProperty`](#temporalproperty) | | | [`VectorisedGraph`](#vectorisedgraph) | | | [`VectorSelection`](#vectorselection) | | --- ## Type Details ### CollectionOfMetaGraph Collection of items #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[MetaGraph](/docs/reference/api/graphql/objects#metagraph)!]! | Returns a list of collection objects. | | `page` | [[MetaGraph](/docs/reference/api/graphql/objects#metagraph)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. The page_index sets the number of pages to skip (defaults to 0). | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns a count of collection objects.
| ### CollectionOfNamespace Collection of items #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[Namespace](/docs/reference/api/graphql/objects#namespace)!]! | Returns a list of collection objects. | | `page` | [[Namespace](/docs/reference/api/graphql/objects#namespace)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. The page_index sets the number of pages to skip (defaults to 0). | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns a count of collection objects. | ### CollectionOfNamespacedItem Collection of items #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[NamespacedItem](/docs/reference/api/graphql/unions#namespaceditem)!]! | Returns a list of collection objects. | | `page` | [[NamespacedItem](/docs/reference/api/graphql/unions#namespaceditem)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. The page_index sets the number of pages to skip (defaults to 0). | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns a count of collection objects. | ### Document Document in a vector graph #### Fields | Field | Type | Description | |-------|------|-------------| | `entity` | [DocumentEntity](/docs/reference/api/graphql/unions#documententity)! | Entity associated with document. | | `content` | [String](/docs/reference/api/graphql/scalars#string)! | Content of the document. | | `embedding` | [[Float](/docs/reference/api/graphql/scalars#float)!]! | Embedding of the document. | | `score` | [Float](/docs/reference/api/graphql/scalars#float)! | Similarity score with a specified query. | ### Edge Raphtory graph edge. #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of Edge containing only the default edge layer. | | `layers` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of Edge containing all layers in the list of names. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of Edge containing all layers except the excluded list of names. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of Edge containing the specified layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `excludeLayer` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of Edge containing all layers except the excluded layer specified. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rolling` | [EdgeWindowSet](/docs/reference/api/graphql/objects#edgewindowset)! | Creates a WindowSet with the given window duration and optional step using a rolling window. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)!
| | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [EdgeWindowSet](/docs/reference/api/graphql/objects#edgewindowset)! | Creates a WindowSet with the given step size using an expanding window. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events between the specified start (inclusive) and end (exclusive). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `at` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events at a specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [Edge](/docs/reference/api/graphql/objects#edge)! | Returns a view of the edge at the latest time of the graph. | | `snapshotAt` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events that are valid at time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events that are valid at the latest time. | | `before` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events before a specified end (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [Edge](/docs/reference/api/graphql/objects#edge)! | Creates a view of the Edge including all events after a specified start (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [Edge](/docs/reference/api/graphql/objects#edge)! | Shrinks both the start and end of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [Edge](/docs/reference/api/graphql/objects#edge)! | Set the start of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [Edge](/docs/reference/api/graphql/objects#edge)! | Set the end of the window. | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `applyViews` | [Edge](/docs/reference/api/graphql/objects#edge)! | Takes a specified selection of views and applies them in given order. | | ↳ `views` | [[EdgeViewCollection](/docs/reference/api/graphql/inputs#edgeviewcollection)!]! | | | `earliestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the earliest time of an edge. | | `firstUpdate` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | | | `latestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the latest time of an edge. | | `lastUpdate` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | | | `time` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the time of an exploded edge. Errors on an unexploded edge. | | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! 
| Returns the start time for rolling and expanding windows for this edge. Returns none if no window is applied. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the end time of the window. Returns none if no window is applied. | | `src` | [Node](/docs/reference/api/graphql/objects#node)! | Returns the source node of the edge. | | `dst` | [Node](/docs/reference/api/graphql/objects#node)! | Returns the destination node of the edge. | | `nbr` | [Node](/docs/reference/api/graphql/objects#node)! | Returns the node at the other end of the edge (same as dst() for out-edges and src() for in-edges). | | `id` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns the id of the edge. | | `properties` | [Properties](/docs/reference/api/graphql/objects#properties)! | Returns a view of the properties of the edge. | | `metadata` | [Metadata](/docs/reference/api/graphql/objects#metadata)! | Returns the metadata of an edge. | | `layerNames` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns the names of the layers that have this edge as a member. | | `layerName` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the layer name of an exploded edge; errors on an unexploded edge. | | `explode` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns an edge object for each update within the original edge. | | `explodeLayers` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns an edge object for each layer within the original edge. | | `history` | [History](/docs/reference/api/graphql/objects#history)! | Returns a History object with time entries for when an edge is added or a change to an edge is made. | | `deletions` | [History](/docs/reference/api/graphql/objects#history)! | Returns a history object with time entries for an edge's deletion times. | | `isValid` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Checks if the edge is currently valid and exists at the current time. | | `isActive` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Checks if the edge is currently active and has at least one update within the current period. | | `isDeleted` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Checks if the edge is deleted at the current time. | | `isSelfLoop` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Returns true if the edge source and destination nodes are the same. | | `filter` | [Edge](/docs/reference/api/graphql/objects#edge)! | | | ↳ `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### Edges #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a collection containing only edges in the default edge layer. | | `layers` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a collection containing only edges belonging to the listed layers. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a collection containing edges belonging to all layers except the excluded list of layers. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a collection containing edges belonging to the specified layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)!
| | | `excludeLayer` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a collection containing edges belonging to all layers except the excluded layer specified. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rolling` | [EdgesWindowSet](/docs/reference/api/graphql/objects#edgeswindowset)! | Creates a WindowSet with the given window duration and optional step using a rolling window. A rolling window is a window that moves forward by step size at each iteration. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [EdgesWindowSet](/docs/reference/api/graphql/objects#edgeswindowset)! | Creates a WindowSet with the given step size using an expanding window. An expanding window is a window that grows by step size at each iteration. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events between the specified start (inclusive) and end (exclusive). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `at` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events at a specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [Edges](/docs/reference/api/graphql/objects#edges)! | | | `snapshotAt` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events that are valid at time. This is equivalent to before(time + 1) for Graph and at(time) for PersistentGraph. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events that are valid at the latest time. This is equivalent to a no-op for Graph and latest() for PersistentGraph. | | `before` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events before a specified end (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [Edges](/docs/reference/api/graphql/objects#edges)! | Creates a view of the Edge including all events after a specified start (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [Edges](/docs/reference/api/graphql/objects#edges)! | Shrinks both the start and end of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [Edges](/docs/reference/api/graphql/objects#edges)! | Set the start of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [Edges](/docs/reference/api/graphql/objects#edges)! | Set the end of the window. | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `applyViews` | [Edges](/docs/reference/api/graphql/objects#edges)! 
| Takes a specified selection of views and applies them in order given. | | ↳ `views` | [[EdgesViewCollection](/docs/reference/api/graphql/inputs#edgesviewcollection)!]! | | | `explode` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns an edge object for each update within the original edge. | | `explodeLayers` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns an edge object for each layer within the original edge. | | `sorted` | [Edges](/docs/reference/api/graphql/objects#edges)! | Specify a sort order from: source, destination, property, time. You can also reverse the ordering. | | ↳ `sortBys` | [[EdgeSortBy](/docs/reference/api/graphql/inputs#edgesortby)!]! | | | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the start time of the window or none if there is no window. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the end time of the window or none if there is no window. | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of edges. | | `page` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! | Returns a list of all objects in the current selection of the collection. You should filter the collection first then call list. | | `filter` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns a filtered view that applies to list down the chain | | ↳ `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | | `select` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns filtered list of edges | | ↳ `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | ### EdgeSchema #### Fields | Field | Type | Description | |-------|------|-------------| | `srcType` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the type of source for these edges | | `dstType` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the type of destination for these edges | | `properties` | [[PropertySchema](/docs/reference/api/graphql/objects#propertyschema)!]! | Returns the list of property schemas for edges connecting these types of nodes | | `metadata` | [[PropertySchema](/docs/reference/api/graphql/objects#propertyschema)!]! | Returns the list of metadata schemas for edges connecting these types of nodes | ### EdgesWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Edges](/docs/reference/api/graphql/objects#edges)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Edges](/docs/reference/api/graphql/objects#edges)!]! | | ### EdgeWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! 
| Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! | | ### EventTime Raphtory’s EventTime. Represents a unique timepoint in the graph’s history as (timestamp, event_id). - timestamp: Number of milliseconds since the Unix epoch. - event_id: ID used for ordering between equal timestamps. Instances of EventTime may or may not contain time information. This is relevant for functions that may not return data (such as earliest_time and latest_time) because the data is unavailable. When empty, time operations (such as timestamp, datetime, and event_id) will return None. #### Fields | Field | Type | Description | |-------|------|-------------| | `timestamp` | [Int](/docs/reference/api/graphql/scalars#int) | Get the timestamp in milliseconds since the Unix epoch. | | `eventId` | [Int](/docs/reference/api/graphql/scalars#int) | Get the event id for the EventTime. Used for ordering within the same timestamp. | | `datetime` | [String](/docs/reference/api/graphql/scalars#string) | Access a datetime representation of the EventTime as a String. | | ↳ `formatString` | [String](/docs/reference/api/graphql/scalars#string) | | ### Graph #### Fields | Field | Type | Description | |-------|------|-------------| | `uniqueLayers` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns the names of all layers in the graphview. | | `defaultLayer` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view containing only the default layer. | | `layers` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view containing all the specified layers. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view containing all layers except the specified excluded layers. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view containing the layer specified. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `excludeLayer` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view containing all layers except the specified excluded layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `subgraph` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a subgraph of a specified set of nodes which contains only the edges that connect nodes of the subgraph to each other. | | ↳ `nodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `valid` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a view of the graph that only includes valid edges. | | `subgraphNodeTypes` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a subgraph filtered by the specified node types. | | ↳ `nodeTypes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeNodes` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a subgraph containing all nodes except the specified excluded nodes. | | ↳ `nodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `rolling` | [GraphWindowSet](/docs/reference/api/graphql/objects#graphwindowset)! 
| Creates a rolling window with the specified window size and an optional step. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [GraphWindowSet](/docs/reference/api/graphql/objects#graphwindowset)! | Creates an expanding window with the specified step size. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [Graph](/docs/reference/api/graphql/objects#graph)! | Return a graph containing only the activity between start and end, by default raphtory stores times in milliseconds from the unix epoch. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `at` | [Graph](/docs/reference/api/graphql/objects#graph)! | Creates a view including all events at a specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [Graph](/docs/reference/api/graphql/objects#graph)! | Creates a view including all events at the latest time. | | `snapshotAt` | [Graph](/docs/reference/api/graphql/objects#graph)! | Create a view including all events that are valid at the specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [Graph](/docs/reference/api/graphql/objects#graph)! | Create a view including all events that are valid at the latest time. | | `before` | [Graph](/docs/reference/api/graphql/objects#graph)! | Create a view including all events before a specified end (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [Graph](/docs/reference/api/graphql/objects#graph)! | Create a view including all events after a specified start (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [Graph](/docs/reference/api/graphql/objects#graph)! | Shrink both the start and end of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [Graph](/docs/reference/api/graphql/objects#graph)! | Set the start of the window to the larger of the specified value or current start. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [Graph](/docs/reference/api/graphql/objects#graph)! | Set the end of the window to the smaller of the specified value or current end. | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `created` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the timestamp for the creation of the graph. | | `lastOpened` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the graph's last opened timestamp according to system time. | | `lastUpdated` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the graph's last updated timestamp. | | `earliestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the time entry of the earliest activity in the graph. | | `latestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the time entry of the latest activity in the graph. 
| | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the start time of the window. Errors if there is no window. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the end time of the window. Errors if there is no window. | | `earliestEdgeTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the earliest time that any edge in this graph is valid. | | ↳ `includeNegative` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `latestEdgeTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the latest time that any edge in this graph is valid. | | ↳ `includeNegative` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `countEdges` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of edges in the graph. | | `countTemporalEdges` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of temporal edges in the graph. | | `countNodes` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of nodes in the graph. | | `hasNode` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Returns true if the graph contains the specified node. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `hasEdge` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Returns true if the graph contains the specified edge. Edges are specified by providing a source and destination node id. You can restrict the search to a specified layer. | | ↳ `src` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `dst` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `node` | [Node](/docs/reference/api/graphql/objects#node) | Gets the node with the specified id. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `nodes` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Gets (optionally a subset of) the nodes in the graph. | | ↳ `select` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | | | `edge` | [Edge](/docs/reference/api/graphql/objects#edge) | Gets the edge with the specified source and destination nodes. | | ↳ `src` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `dst` | [String](/docs/reference/api/graphql/scalars#string)! | | | `edges` | [Edges](/docs/reference/api/graphql/objects#edges)! | Gets the edges in the graph. | | ↳ `select` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | | | `properties` | [Properties](/docs/reference/api/graphql/objects#properties)! | Returns the properties of the graph. | | `metadata` | [Metadata](/docs/reference/api/graphql/objects#metadata)! | Returns the metadata of the graph. | | `name` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the graph name. | | `path` | [String](/docs/reference/api/graphql/scalars#string)! | Returns path of graph. | | `namespace` | [String](/docs/reference/api/graphql/scalars#string)! | Returns namespace of graph. | | `schema` | [GraphSchema](/docs/reference/api/graphql/objects#graphschema)! | Returns the graph schema. | | `algorithms` | [GraphAlgorithmPlugin](/docs/reference/api/graphql/objects#graphalgorithmplugin)! | | | `sharedNeighbours` | [[Node](/docs/reference/api/graphql/objects#node)!]! | | | ↳ `selectedNodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `exportTo` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! 
| Export all nodes and edges from this graph view to another existing graph | | ↳ `path` | [String](/docs/reference/api/graphql/scalars#string)! | | | `filter` | [Graph](/docs/reference/api/graphql/objects#graph)! | | | ↳ `expr` | [GraphFilter](/docs/reference/api/graphql/inputs#graphfilter) | | | `filterNodes` | [Graph](/docs/reference/api/graphql/objects#graph)! | | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | | `filterEdges` | [Graph](/docs/reference/api/graphql/objects#graph)! | | | ↳ `expr` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | | `getIndexSpec` | [IndexSpec](/docs/reference/api/graphql/objects#indexspec)! | (Experimental) Get index specification. | | `searchNodes` | [[Node](/docs/reference/api/graphql/objects#node)!]! | (Experimental) Searches for nodes which match the given filter expression. | | ↳ `filter` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `searchEdges` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! | (Experimental) Searches the index for edges which match the given filter expression. | | ↳ `filter` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `applyViews` | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns the specified graph view or if none is specified returns the default view. | | ↳ `views` | [[GraphViewCollection](/docs/reference/api/graphql/inputs#graphviewcollection)!]! | | ### GraphAlgorithmPlugin #### Fields | Field | Type | Description | |-------|------|-------------| | `pagerank` | [[PagerankOutput](/docs/reference/api/graphql/objects#pagerankoutput)!]! | | | ↳ `iterCount` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `threads` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `tol` | [Float](/docs/reference/api/graphql/scalars#float) | | | `shortest_path` | [[ShortestPathOutput](/docs/reference/api/graphql/objects#shortestpathoutput)!]! | | | ↳ `source` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `targets` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | ↳ `direction` | [String](/docs/reference/api/graphql/scalars#string) | | ### GraphSchema #### Fields | Field | Type | Description | |-------|------|-------------| | `nodes` | [[NodeSchema](/docs/reference/api/graphql/objects#nodeschema)!]! | | | `layers` | [[LayerSchema](/docs/reference/api/graphql/objects#layerschema)!]! | | ### GraphWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of items. | | `page` | [[Graph](/docs/reference/api/graphql/objects#graph)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Graph](/docs/reference/api/graphql/objects#graph)!]! | | ### History History of updates for an object in Raphtory. Provides access to temporal properties. 
#### Fields | Field | Type | Description | |-------|------|-------------| | `earliestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Get the earliest time entry associated with this history or None if the history is empty. | | `latestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Get the latest time entry associated with this history or None if the history is empty. | | `list` | [[EventTime](/docs/reference/api/graphql/objects#eventtime)!]! | List all time entries present in this history. | | `listRev` | [[EventTime](/docs/reference/api/graphql/objects#eventtime)!]! | List all time entries present in this history in reverse order. | | `page` | [[EventTime](/docs/reference/api/graphql/objects#eventtime)!]! | Fetch one page of EventTime entries with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `pageRev` | [[EventTime](/docs/reference/api/graphql/objects#eventtime)!]! | Fetch one page of EventTime entries with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `isEmpty` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Returns True if the history is empty. | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | Get the number of entries contained in the history. | | `timestamps` | [HistoryTimestamp](/docs/reference/api/graphql/objects#historytimestamp)! | Returns a HistoryTimestamp object which accesses timestamps (milliseconds since the Unix epoch) | | `datetimes` | [HistoryDateTime](/docs/reference/api/graphql/objects#historydatetime)! | Returns a HistoryDateTime object which accesses datetimes instead of EventTime entries. | | ↳ `formatString` | [String](/docs/reference/api/graphql/scalars#string) | | | `eventId` | [HistoryEventId](/docs/reference/api/graphql/objects#historyeventid)! | Returns a HistoryEventId object which accesses event ids of EventTime entries. | | `intervals` | [Intervals](/docs/reference/api/graphql/objects#intervals)! | Returns an Intervals object which calculates the intervals between consecutive EventTime timestamps. | ### HistoryDateTime History object that provides access to datetimes instead of `EventTime` entries. #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[String](/docs/reference/api/graphql/scalars#string)!]! | List all datetimes formatted as strings. | | ↳ `filterBroken` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `listRev` | [[String](/docs/reference/api/graphql/scalars#string)!]! | List all datetimes formatted as strings in reverse chronological order. | | ↳ `filterBroken` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `page` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Fetch one page of datetimes formatted as string with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `filterBroken` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | | `pageRev` | [[String](/docs/reference/api/graphql/scalars#string)!]! 
| Fetch one page of datetimes formatted as string in reverse chronological order with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `filterBroken` | [Boolean](/docs/reference/api/graphql/scalars#boolean) | | ### HistoryEventId History object that provides access to event ids instead of `EventTime` entries. #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | List event ids. | | `listRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | List event ids in reverse order. | | `page` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of event ids with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `pageRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of event ids in reverse chronological order with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | ### HistoryTimestamp History object that provides access to timestamps (milliseconds since the Unix epoch) instead of `EventTime` entries. #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | List all timestamps. | | `listRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | List all timestamps in reverse order. | | `page` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of timestamps with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `pageRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of timestamps in reverse order with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | ### IndexSpec #### Fields | Field | Type | Description | |-------|------|-------------| | `nodeMetadata` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns node metadata. | | `nodeProperties` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns node properties. | | `edgeMetadata` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns edge metadata. | | `edgeProperties` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns edge properties. | ### Intervals Provides access to the intervals between temporal entries of an object. #### Fields | Field | Type | Description | |-------|------|-------------| | `list` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | List time intervals between consecutive timestamps in milliseconds. | | `listRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! 
| List millisecond time intervals between consecutive timestamps in reverse order. | | `page` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of intervals between consecutive timestamps with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `pageRev` | [[Int](/docs/reference/api/graphql/scalars#int)!]! | Fetch one page of intervals between consecutive timestamps in reverse order with a number of items up to a specified limit, | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `mean` | [Float](/docs/reference/api/graphql/scalars#float) | Compute the mean interval between consecutive timestamps. Returns None if there are fewer than two timestamps. | | `median` | [Int](/docs/reference/api/graphql/scalars#int) | Compute the median interval between consecutive timestamps. Returns None if there are fewer than two timestamps. | | `max` | [Int](/docs/reference/api/graphql/scalars#int) | Compute the maximum interval between consecutive timestamps. Returns None if there are fewer than two timestamps. | | `min` | [Int](/docs/reference/api/graphql/scalars#int) | Compute the minimum interval between consecutive timestamps. Returns None if there are fewer than two timestamps. | ### LayerSchema #### Fields | Field | Type | Description | |-------|------|-------------| | `name` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the name of the layer with this schema. | | `edges` | [[EdgeSchema](/docs/reference/api/graphql/objects#edgeschema)!]! | Returns the list of edge schemas for this edge layer. | ### Metadata #### Fields | Field | Type | Description | |-------|------|-------------| | `get` | [Property](/docs/reference/api/graphql/objects#property) | Get metadata value matching the specified key. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `contains` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Check if the key is in the metadata. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `keys` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Return all metadata keys. | | `values` | [[Property](/docs/reference/api/graphql/objects#property)!]! | Return all metadata values. | | ↳ `keys` | [[String](/docs/reference/api/graphql/scalars#string)!] | | ### MetaGraph #### Fields | Field | Type | Description | |-------|------|-------------| | `name` | [String](/docs/reference/api/graphql/scalars#string) | Returns the graph name. | | `path` | [String](/docs/reference/api/graphql/scalars#string)! | Returns path of graph. | | `created` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the timestamp for the creation of the graph. | | `lastOpened` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the graph's last opened timestamp according to system time. | | `lastUpdated` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the graph's last updated timestamp. | | `nodeCount` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of nodes in the graph. | | `edgeCount` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of edges in the graph. | | `metadata` | [[Property](/docs/reference/api/graphql/objects#property)!]!
| Returns the metadata of the graph. | ### MutableEdge #### Fields | Field | Type | Description | |-------|------|-------------| | `success` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Use to check if adding the edge was successful. | | `edge` | [Edge](/docs/reference/api/graphql/objects#edge)! | Get the non-mutable edge for querying. | | `src` | [MutableNode](/docs/reference/api/graphql/objects#mutablenode)! | Get the mutable source node of the edge. | | `dst` | [MutableNode](/docs/reference/api/graphql/objects#mutablenode)! | Get the mutable destination node of the edge. | | `delete` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Mark the edge as deleted at the specified time. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `addMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add metadata to the edge (errors if the value already exists). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `updateMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Update metadata of the edge (existing values are overwritten). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `addUpdates` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add temporal property updates to the edge. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | ### MutableGraph #### Fields | Field | Type | Description | |-------|------|-------------| | `graph` | [Graph](/docs/reference/api/graphql/objects#graph)! | Get the non-mutable graph. | | `node` | [MutableNode](/docs/reference/api/graphql/objects#mutablenode) | Get a mutable existing node. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `addNode` | [MutableNode](/docs/reference/api/graphql/objects#mutablenode)! | Add a new node or add updates to an existing node. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | | | ↳ `nodeType` | [String](/docs/reference/api/graphql/scalars#string) | | | `createNode` | [MutableNode](/docs/reference/api/graphql/objects#mutablenode)! | Create a new node or fail if it already exists. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | | | ↳ `nodeType` | [String](/docs/reference/api/graphql/scalars#string) | | | `addNodes` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add a batch of nodes. | | ↳ `nodes` | [[NodeAddition](/docs/reference/api/graphql/inputs#nodeaddition)!]! | | | `edge` | [MutableEdge](/docs/reference/api/graphql/objects#mutableedge) | Get a mutable existing edge. | | ↳ `src` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `dst` | [String](/docs/reference/api/graphql/scalars#string)! | | | `addEdge` | [MutableEdge](/docs/reference/api/graphql/objects#mutableedge)!
| Add a new edge or add updates to an existing edge. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `src` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `dst` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `addEdges` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add a batch of edges. | | ↳ `edges` | [[EdgeAddition](/docs/reference/api/graphql/inputs#edgeaddition)!]! | | | `deleteEdge` | [MutableEdge](/docs/reference/api/graphql/objects#mutableedge)! | Mark an edge as deleted (creates the edge if it did not exist). | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `src` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `dst` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `layer` | [String](/docs/reference/api/graphql/scalars#string) | | | `addProperties` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add temporal properties to graph. | | ↳ `t` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | `addMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add metadata to graph (errors if the property already exists). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | `updateMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Update metadata of the graph (overwrites existing values). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | ### MutableNode #### Fields | Field | Type | Description | |-------|------|-------------| | `success` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Use to check if adding the node was successful. | | `node` | [Node](/docs/reference/api/graphql/objects#node)! | Get the non-mutable Node. | | `addMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add metadata to the node (errors if the property already exists). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | `setNodeType` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Set the node type (errors if the node already has a non-default type). | | ↳ `newType` | [String](/docs/reference/api/graphql/scalars#string)! | | | `updateMetadata` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Update metadata of the node (overwrites existing property values). | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!]! | | | `addUpdates` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Add temporal property updates to the node. | | ↳ `time` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `properties` | [[PropertyInput](/docs/reference/api/graphql/inputs#propertyinput)!] | | ### MutationPlugin #### Fields | Field | Type | Description | |-------|------|-------------| | `NoOps` | [String](/docs/reference/api/graphql/scalars#string)! | | ### Namespace #### Fields | Field | Type | Description | |-------|------|-------------| | `graphs` | [CollectionOfMetaGraph](/docs/reference/api/graphql/objects#collectionofmetagraph)! | | | `path` | [String](/docs/reference/api/graphql/scalars#string)! 
| | | `parent` | [Namespace](/docs/reference/api/graphql/objects#namespace) | | | `children` | [CollectionOfNamespace](/docs/reference/api/graphql/objects#collectionofnamespace)! | | | `items` | [CollectionOfNamespacedItem](/docs/reference/api/graphql/objects#collectionofnamespaceditem)! | | ### Node Raphtory graph node. #### Fields | Field | Type | Description | |-------|------|-------------| | `id` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the unique id of the node. | | `name` | [String](/docs/reference/api/graphql/scalars#string)! | Returns the name of the node. | | `defaultLayer` | [Node](/docs/reference/api/graphql/objects#node)! | Return a view of the node containing only the default layer. | | `layers` | [Node](/docs/reference/api/graphql/objects#node)! | Return a view of node containing all layers specified. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [Node](/docs/reference/api/graphql/objects#node)! | Returns a collection containing nodes belonging to all layers except the excluded list of layers. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [Node](/docs/reference/api/graphql/objects#node)! | Returns a collection containing nodes belonging to the specified layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `excludeLayer` | [Node](/docs/reference/api/graphql/objects#node)! | Returns a collection containing nodes belonging to all layers except the excluded layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rolling` | [NodeWindowSet](/docs/reference/api/graphql/objects#nodewindowset)! | Creates a WindowSet with the specified window size and optional step using a rolling window. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [NodeWindowSet](/docs/reference/api/graphql/objects#nodewindowset)! | Creates a WindowSet with the specified step size using an expanding window. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events between the specified start (inclusive) and end (exclusive). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `at` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events at a specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events at the latest time. | | `snapshotAt` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events that are valid at the specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events that are valid at the latest time. | | `before` | [Node](/docs/reference/api/graphql/objects#node)! 
| Create a view of the node including all events before the specified end time (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [Node](/docs/reference/api/graphql/objects#node)! | Create a view of the node including all events after the specified start time (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [Node](/docs/reference/api/graphql/objects#node)! | Shrink a Window to a specified start and end time, if these are earlier and later than the current start and end respectively. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [Node](/docs/reference/api/graphql/objects#node)! | Set the start of the window to the larger of a specified start time and self.start(). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [Node](/docs/reference/api/graphql/objects#node)! | Set the end of the window to the smaller of a specified end and self.end(). | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `applyViews` | [Node](/docs/reference/api/graphql/objects#node)! | | | ↳ `views` | [[NodeViewCollection](/docs/reference/api/graphql/inputs#nodeviewcollection)!]! | | | `earliestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the earliest time that the node exists. | | `firstUpdate` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the time of the first update made to the node. | | `latestTime` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the latest time that the node exists. | | `lastUpdate` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the time of the last update made to the node. | | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Gets the start time for the window. Errors if there is no window. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Gets the end time for the window. Errors if there is no window. | | `history` | [History](/docs/reference/api/graphql/objects#history)! | Returns a history object for the node, with time entries for node additions and changes made to the node. | | `edgeHistoryCount` | [Int](/docs/reference/api/graphql/scalars#int)! | Get the number of edge events for this node. | | `isActive` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Check if the node is active and its history is not empty. | | `nodeType` | [String](/docs/reference/api/graphql/scalars#string) | Returns the type of the node. | | `properties` | [Properties](/docs/reference/api/graphql/objects#properties)! | Returns the properties of the node. | | `metadata` | [Metadata](/docs/reference/api/graphql/objects#metadata)! | Returns the metadata of the node. | | `degree` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of unique counterparties for this node. | | `outDegree` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of edges with this node as the source. | | `inDegree` | [Int](/docs/reference/api/graphql/scalars#int)! | Returns the number of edges with this node as the destination. | | `inComponent` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | | | `outComponent` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | | | `edges` | [Edges](/docs/reference/api/graphql/objects#edges)!
| Returns all connected edges. | | ↳ `select` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | | | `outEdges` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns outgoing edges. | | ↳ `select` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | | | `inEdges` | [Edges](/docs/reference/api/graphql/objects#edges)! | Returns incoming edges. | | ↳ `select` | [EdgeFilter](/docs/reference/api/graphql/inputs#edgefilter) | | | `neighbours` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns neighbouring nodes. | | ↳ `select` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | | | `inNeighbours` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns the neighbouring nodes connected by at least one edge pointing into this node. | | ↳ `select` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | | | `outNeighbours` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns the neighbouring nodes connected by at least one edge pointing out of this node. | | ↳ `select` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter) | | | `filter` | [Node](/docs/reference/api/graphql/objects#node)! | | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### Nodes #### Fields | Field | Type | Description | |-------|------|-------------| | `defaultLayer` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Return a view of the nodes containing only the default edge layer. | | `layers` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Return a view of the nodes containing all layers specified. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Return a view of the nodes containing all layers except those specified. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Return a view of the nodes containing the specified layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `excludeLayer` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Return a view of the nodes containing all layers except the specified layer. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rolling` | [NodesWindowSet](/docs/reference/api/graphql/objects#nodeswindowset)! | Creates a WindowSet with the specified window size and optional step using a rolling window. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [NodesWindowSet](/docs/reference/api/graphql/objects#nodeswindowset)! | Creates a WindowSet with the specified step size using an expanding window. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events between the specified start (inclusive) and end (exclusive). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)!
| | | `at` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events at a specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events at the latest time. | | `snapshotAt` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events that are valid at the specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events that are valid at the latest time. | | `before` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events before specified end time (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Create a view of the nodes including all events after the specified start time (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Shrink both the start and end of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Set the start of the window to the larger of a specified start time and self.start(). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Set the end of the window to the smaller of a specified end and self.end(). | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `typeFilter` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Filter nodes by node type. | | ↳ `nodeTypes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `applyViews` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | | | ↳ `views` | [[NodesViewCollection](/docs/reference/api/graphql/inputs#nodesviewcollection)!]! | | | `sorted` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | | | ↳ `sortBys` | [[NodeSortBy](/docs/reference/api/graphql/inputs#nodesortby)!]! | | | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the start time of the window. Errors if there is no window. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the end time of the window. Errors if there is no window. | | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Node](/docs/reference/api/graphql/objects#node)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Node](/docs/reference/api/graphql/objects#node)!]! | | | `ids` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns a view of the node ids. | | `filter` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Returns a filtered view that applies to list down the chain | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! 
| | `select` | [Nodes](/docs/reference/api/graphql/objects#nodes)! | Returns a filtered list of nodes. | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### NodeSchema #### Fields | Field | Type | Description | |-------|------|-------------| | `typeName` | [String](/docs/reference/api/graphql/scalars#string)! | | | `properties` | [[PropertySchema](/docs/reference/api/graphql/objects#propertyschema)!]! | Returns the list of property schemas for this node | | `metadata` | [[PropertySchema](/docs/reference/api/graphql/objects#propertyschema)!]! | | ### NodesWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Nodes](/docs/reference/api/graphql/objects#nodes)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Nodes](/docs/reference/api/graphql/objects#nodes)!]! | | ### NodeWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Node](/docs/reference/api/graphql/objects#node)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Node](/docs/reference/api/graphql/objects#node)!]! | | ### PagerankOutput PageRank score. #### Fields | Field | Type | Description | |-------|------|-------------| | `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rank` | [Float](/docs/reference/api/graphql/scalars#float)! | | ### PathFromNode #### Fields | Field | Type | Description | |-------|------|-------------| | `layers` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns a view of PathFromNode containing the specified layers, errors if any of the layers do not exist. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `excludeLayers` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Return a view of PathFromNode containing all layers except the specified excluded layers, errors if any of the layers do not exist. | | ↳ `names` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `layer` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Return a view of PathFromNode containing the specified layer, errors if the layer does not exist. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `excludeLayer` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Return a view of PathFromNode containing all layers except the specified excluded layer, errors if the layer does not exist. | | ↳ `name` | [String](/docs/reference/api/graphql/scalars#string)! | | | `rolling` | [PathFromNodeWindowSet](/docs/reference/api/graphql/objects#pathfromnodewindowset)! | Creates a WindowSet with the given window size and optional step using a rolling window. | | ↳ `window` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)!
| | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration) | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `expanding` | [PathFromNodeWindowSet](/docs/reference/api/graphql/objects#pathfromnodewindowset)! | Creates a WindowSet with the given step size using an expanding window. | | ↳ `step` | [WindowDuration](/docs/reference/api/graphql/inputs#windowduration)! | | | ↳ `alignmentUnit` | [AlignmentUnit](/docs/reference/api/graphql/enums#alignmentunit) | | | `window` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events between a specified start (inclusive) and end (exclusive). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `at` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events at time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `snapshotLatest` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events that are valid at the latest time. | | `snapshotAt` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events that are valid at the specified time. | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events at the latest time. | | `before` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events before the specified end (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `after` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Create a view of the PathFromNode including all events after the specified start (exclusive). | | ↳ `time` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkWindow` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Shrink both the start and end of the window. | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkStart` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Set the start of the window to the larger of the specified start and self.start(). | | ↳ `start` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `shrinkEnd` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Set the end of the window to the smaller of the specified end and self.end(). | | ↳ `end` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `typeFilter` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Filter nodes by type. | | ↳ `nodeTypes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `start` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the earliest time that this PathFromNode is valid or None if the PathFromNode is valid for all times. | | `end` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | Returns the latest time that this PathFromNode is valid or None if the PathFromNode is valid for all times. 
| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[Node](/docs/reference/api/graphql/objects#node)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[Node](/docs/reference/api/graphql/objects#node)!]! | | | `ids` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Returns the node ids. | | `applyViews` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Takes a specified selection of views and applies them in given order. | | ↳ `views` | [[PathFromNodeViewCollection](/docs/reference/api/graphql/inputs#pathfromnodeviewcollection)!]! | | | `filter` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns a filtered view that applies to list down the chain | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | | `select` | [PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)! | Returns filtered list of neighbour nodes | | ↳ `expr` | [NodeFilter](/docs/reference/api/graphql/inputs#nodefilter)! | | ### PathFromNodeWindowSet #### Fields | Field | Type | Description | |-------|------|-------------| | `count` | [Int](/docs/reference/api/graphql/scalars#int)! | | | `page` | [[PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)!]! | Fetch one page with a number of items up to a specified limit, optionally offset by a specified amount. | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `offset` | [Int](/docs/reference/api/graphql/scalars#int) | | | ↳ `pageIndex` | [Int](/docs/reference/api/graphql/scalars#int) | | | `list` | [[PathFromNode](/docs/reference/api/graphql/objects#pathfromnode)!]! | | ### Properties #### Fields | Field | Type | Description | |-------|------|-------------| | `get` | [Property](/docs/reference/api/graphql/objects#property) | Get property value matching the specified key. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `contains` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Check if the key is in the properties. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `keys` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Return all property keys. | | `values` | [[Property](/docs/reference/api/graphql/objects#property)!]! | Return all property values. | | ↳ `keys` | [[String](/docs/reference/api/graphql/scalars#string)!] | | | `temporal` | [TemporalProperties](/docs/reference/api/graphql/objects#temporalproperties)! | | ### Property #### Fields | Field | Type | Description | |-------|------|-------------| | `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `asString` | [String](/docs/reference/api/graphql/scalars#string)! | | | `value` | [PropertyOutput](/docs/reference/api/graphql/scalars#propertyoutput)! | | ### PropertySchema #### Fields | Field | Type | Description | |-------|------|-------------| | `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `propertyType` | [String](/docs/reference/api/graphql/scalars#string)! | | | `variants` | [[String](/docs/reference/api/graphql/scalars#string)!]! 
| | ### PropertyTuple #### Fields | Field | Type | Description | |-------|------|-------------| | `time` | [EventTime](/docs/reference/api/graphql/objects#eventtime)! | | | `asString` | [String](/docs/reference/api/graphql/scalars#string)! | | | `value` | [PropertyOutput](/docs/reference/api/graphql/scalars#propertyoutput)! | | ### QueryPlugin #### Fields | Field | Type | Description | |-------|------|-------------| | `NoOps` | [String](/docs/reference/api/graphql/scalars#string)! | | ### ShortestPathOutput #### Fields | Field | Type | Description | |-------|------|-------------| | `target` | [String](/docs/reference/api/graphql/scalars#string)! | | | `nodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | ### TemporalProperties #### Fields | Field | Type | Description | |-------|------|-------------| | `get` | [TemporalProperty](/docs/reference/api/graphql/objects#temporalproperty) | Get property value matching the specified key. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `contains` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | Check if the key is in the properties. | | ↳ `key` | [String](/docs/reference/api/graphql/scalars#string)! | | | `keys` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Return all property keys. | | `values` | [[TemporalProperty](/docs/reference/api/graphql/objects#temporalproperty)!]! | Return all property values. | | ↳ `keys` | [[String](/docs/reference/api/graphql/scalars#string)!] | | ### TemporalProperty #### Fields | Field | Type | Description | |-------|------|-------------| | `key` | [String](/docs/reference/api/graphql/scalars#string)! | Key of a property. | | `history` | [History](/docs/reference/api/graphql/objects#history)! | | | `values` | [[String](/docs/reference/api/graphql/scalars#string)!]! | Return the values of the properties. | | `at` | [String](/docs/reference/api/graphql/scalars#string) | | | ↳ `t` | [TimeInput](/docs/reference/api/graphql/scalars#timeinput)! | | | `latest` | [String](/docs/reference/api/graphql/scalars#string) | | | `unique` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `orderedDedupe` | [[PropertyTuple](/docs/reference/api/graphql/objects#propertytuple)!]! | | | ↳ `latestTime` | [Boolean](/docs/reference/api/graphql/scalars#boolean)! | | ### VectorisedGraph #### Fields | Field | Type | Description | |-------|------|-------------| | `emptySelection` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Returns an empty selection of documents. | | `entitiesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Search the top scoring entities according to a specified query returning no more than a specified limit of entities. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | | `nodesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Search the top scoring nodes according to a specified query returning no more than a specified limit of nodes. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! 
| | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | | `edgesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Search the top scoring edges according to a specified query returning no more than a specified limit of edges. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | ### VectorSelection #### Fields | Field | Type | Description | |-------|------|-------------| | `nodes` | [[Node](/docs/reference/api/graphql/objects#node)!]! | Returns a list of nodes in the current selection. | | `edges` | [[Edge](/docs/reference/api/graphql/objects#edge)!]! | Returns a list of edges in the current selection. | | `getDocuments` | [[Document](/docs/reference/api/graphql/objects#document)!]! | Returns a list of documents in the current selection. | | `addNodes` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Adds all the documents associated with the specified nodes to the current selection. | | ↳ `nodes` | [[String](/docs/reference/api/graphql/scalars#string)!]! | | | `addEdges` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Adds all the documents associated with the specified edges to the current selection. | | ↳ `edges` | [[InputEdge](/docs/reference/api/graphql/inputs#inputedge)!]! | | | `expand` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Add all the documents a specified number of hops away to the selection. | | ↳ `hops` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | | `expandEntitiesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Adds documents, from the set of one hop neighbours to the current selection, to the selection based on their similarity score with the specified query. This function loops so that the set of one hop neighbours expands on each loop and number of documents added is determined by the specified limit. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | | `expandNodesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Add the adjacent nodes with higher score for query to the selection up to a specified limit. This function loops like expand_entities_by_similarity but is restricted to nodes. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! | | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | | `expandEdgesBySimilarity` | [VectorSelection](/docs/reference/api/graphql/objects#vectorselection)! | Add the adjacent edges with higher score for query to the selection up to a specified limit. This function loops like expand_entities_by_similarity but is restricted to edges. | | ↳ `query` | [String](/docs/reference/api/graphql/scalars#string)! | | | ↳ `limit` | [Int](/docs/reference/api/graphql/scalars#int)! 
| | | ↳ `window` | [VectorisedGraphWindow](/docs/reference/api/graphql/inputs#vectorisedgraphwindow) | | --- ## Reference > Api > Graphql > Query --- title: "Query" breadcrumb: "Reference / GraphQL / Query" --- # Query | Field | Type | Description | |-------|------|-------------| | [`hello`](#hello) | [String](/docs/reference/api/graphql/scalars#string)! | Hello world demo | | [`graph`](#graph) | [Graph](/docs/reference/api/graphql/objects#graph)! | Returns a graph | | [`updateGraph`](#updategraph) | [MutableGraph](/docs/reference/api/graphql/objects#mutablegraph)! | Update graph query, has side effects to update graph state | | [`vectorisedGraph`](#vectorisedgraph) | [VectorisedGraph](/docs/reference/api/graphql/objects#vectorisedgraph) | Create vectorised graph in the format used for queries | | [`namespaces`](#namespaces) | [CollectionOfNamespace](/docs/reference/api/graphql/objects#collectionofnamespace)! | Returns all namespaces using recursive search | | [`namespace`](#namespace) | [Namespace](/docs/reference/api/graphql/objects#namespace)! | Returns a specific namespace at a given path | | [`root`](#root) | [Namespace](/docs/reference/api/graphql/objects#namespace)! | Returns root namespace | | [`plugins`](#plugins) | [QueryPlugin](/docs/reference/api/graphql/objects#queryplugin)! | Returns a plugin. | | [`receiveGraph`](#receivegraph) | [String](/docs/reference/api/graphql/scalars#string)! | Encodes graph and returns as string | | [`version`](#version) | [String](/docs/reference/api/graphql/scalars#string)! | | --- ## Field Details ### hello Hello world demo #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! ### graph Returns a graph #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [Graph](/docs/reference/api/graphql/objects#graph)! ### updateGraph Update graph query, has side effects to update graph state #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [MutableGraph](/docs/reference/api/graphql/objects#mutablegraph)! ### vectorisedGraph Create vectorised graph in the format used for queries #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [VectorisedGraph](/docs/reference/api/graphql/objects#vectorisedgraph) ### namespaces Returns all namespaces using recursive search #### Returns **Type:** [CollectionOfNamespace](/docs/reference/api/graphql/objects#collectionofnamespace)! ### namespace Returns a specific namespace at a given path #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [Namespace](/docs/reference/api/graphql/objects#namespace)! ### root Returns root namespace #### Returns **Type:** [Namespace](/docs/reference/api/graphql/objects#namespace)! ### plugins Returns a plugin. #### Returns **Type:** [QueryPlugin](/docs/reference/api/graphql/objects#queryplugin)! ### receiveGraph Encodes graph and returns as string #### Arguments | Name | Type | Description | |------|------|-------------| | `path` | [String](/docs/reference/api/graphql/scalars#string)! | | #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! 
### version #### Returns **Type:** [String](/docs/reference/api/graphql/scalars#string)! --- ## Reference > Api > Graphql > Scalars --- title: "Scalars" breadcrumb: "Reference / GraphQL / Scalars" --- # Scalar Types Scalar types represent primitive values. ## Built-in Scalars | Type | Description | |------|-------------| | `Int` | A signed 32-bit integer. | | `Float` | A signed double-precision floating-point value. | | `String` | A UTF-8 character sequence. | | `Boolean` | `true` or `false`. | | `ID` | A unique identifier. | ## Custom Scalars ### PropertyOutput ### TimeInput Input for primary time component. Expects Int, DateTime formatted String, or Object \{ timestamp, eventId \} where the timestamp is either an Int or a DateTime formatted String, and eventId is a non-negative Int. Valid string formats are RFC3339, RFC2822, %Y-%m-%d, %Y-%m-%dT%H:%M:%S%.3f, %Y-%m-%dT%H:%M:%S%, %Y-%m-%d %H:%M:%S%.3f and %Y-%m-%d %H:%M:%S%. ### Upload A multipart file upload --- ## Reference > Api > Graphql > Unions --- title: "Unions" breadcrumb: "Reference / GraphQL / Unions" --- # Union Types Union types for polymorphic returns. ## Types | Type | Possible Types | |------|----------------| | [`DocumentEntity`](#documententity) | [Node](/docs/reference/api/graphql/objects#node), [Edge](/docs/reference/api/graphql/objects#edge) | | [`NamespacedItem`](#namespaceditem) | [Namespace](/docs/reference/api/graphql/objects#namespace), [MetaGraph](/docs/reference/api/graphql/objects#metagraph) | --- ## Type Details ### DocumentEntity Entity associated with document. **Possible types:** - [Node](/docs/reference/api/graphql/objects#node) - [Edge](/docs/reference/api/graphql/objects#edge) ### NamespacedItem **Possible types:** - [Namespace](/docs/reference/api/graphql/objects#namespace) - [MetaGraph](/docs/reference/api/graphql/objects#metagraph) --- ## Reference > Api > Python > Algorithms > Infected --- title: "Infected" breadcrumb: "Reference / Python / algorithms / Infected" --- # Infected ## Properties | Property | Description | |----------|-------------| | [`active`](#active) | The timestamp at which the infected node started spreading the infection | | [`infected`](#infected) | The timestamp at which the node was infected | | [`recovered`](#recovered) | The timestamp at which the infected node stopped spreading the infection | --- ## Property Details ### [active](#active) The timestamp at which the infected node started spreading the infection #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | | ### [infected](#infected) The timestamp at which the node was infected #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | | ### [recovered](#recovered) The timestamp at which the infected node stopped spreading the infection #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | | --- ## Reference > Api > Python > Algorithms > Matching --- title: "Matching" breadcrumb: "Reference / Python / algorithms / Matching" --- # Matching A Matching (i.e., a set of edges that do not share any nodes) ## Methods | Method | Description | |--------|-------------| | [`dst`](#dst) | Get the matched destination node for a source node | | [`edge_for_dst`](#edge_for_dst) | Get the matched edge for a destination node | | [`edge_for_src`](#edge_for_src) | Get the matched edge for a source node | | [`edges`](#edges) | Get a view of 
the matched edges | | [`src`](#src) | Get the matched source node for a destination node | --- ## Method Details ### [dst](#dst) **Signature:** `dst(src)` Get the matched destination node for a source node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [NodeInput](/docs/reference/api/python/typing) | - | The source node | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node), optional | The matched destination node if it exists | ### [edge_for_dst](#edge_for_dst) **Signature:** `edge_for_dst(dst)` Get the matched edge for a destination node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `dst` | [NodeInput](/docs/reference/api/python/typing) | - | The destination node | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge), optional | The matched edge if it exists | ### [edge_for_src](#edge_for_src) **Signature:** `edge_for_src(src)` Get the matched edge for a source node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [NodeInput](/docs/reference/api/python/typing) | - | The source node | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge), optional | The matched edge if it exists | ### [edges](#edges) Get a view of the matched edges #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The edges in the matching | ### [src](#src) **Signature:** `src(dst)` Get the matched source node for a destination node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `dst` | [NodeInput](/docs/reference/api/python/typing) | - | The destination node | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node), optional | The matched source node if it exists | --- ## Reference > Api > Python > Filter > Edge --- title: "Edge" breadcrumb: "Reference / Python / filter / Edge" --- # Edge ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | | | [`at`](#at) | | | [`before`](#before) | | | [`dst`](#dst) | | | [`latest`](#latest) | | | [`layer`](#layer) | | | [`layers`](#layers) | | | [`metadata`](#metadata) | | | [`property`](#property) | | | [`snapshot_at`](#snapshot_at) | | | [`snapshot_latest`](#snapshot_latest) | | | [`src`](#src) | | | [`window`](#window) | | --- ## Method Details ### [after](#after) **Signature:** `after(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [at](#at) **Signature:** `at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [before](#before) **Signature:** `before(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [dst](#dst) ### [latest](#latest) ### [layer](#layer) **Signature:** `layer(layer)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layer` |
[Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [layers](#layers) **Signature:** `layers(layers)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layers` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [metadata](#metadata) **Signature:** `metadata(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [property](#property) **Signature:** `property(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_latest](#snapshot_latest) ### [src](#src) ### [window](#window) **Signature:** `window(start, end)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `end` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Filter > EdgeEndpoint --- title: "EdgeEndpoint" breadcrumb: "Reference / Python / filter / EdgeEndpoint" --- # EdgeEndpoint ## Methods | Method | Description | |--------|-------------| | [`id`](#id) | | | [`metadata`](#metadata) | | | [`name`](#name) | | | [`node_type`](#node_type) | | | [`property`](#property) | | --- ## Method Details ### [id](#id) ### [metadata](#metadata) **Signature:** `metadata(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [name](#name) ### [node_type](#node_type) ### [property](#property) **Signature:** `property(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Filter > EdgeEndpointIdFilter --- title: "EdgeEndpointIdFilter" breadcrumb: "Reference / Python / filter / EdgeEndpointIdFilter" --- # EdgeEndpointIdFilter ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the string | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the string | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the endpoint ID | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the endpoint ID | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the string | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the string | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the string representation of the endpoint ID contains the given substring. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring to search for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the string representation of the endpoint ID ends with the given suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the string representation of the endpoint ID. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed Levenshtein distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | Whether to require a matching prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the endpoint ID is contained within the specified iterable of IDs. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[int](https://docs.python.org/3/library/functions.html#int)] | - | Iterable of node IDs to match against. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the endpoint ID is **not** contained within the specified iterable of IDs. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[int](https://docs.python.org/3/library/functions.html#int)] | - | Iterable of node IDs to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the string representation of the endpoint ID **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the string representation of the endpoint ID starts with the given prefix. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. | --- ## Reference > Api > Python > Filter > EdgeEndpointNameFilter --- title: "EdgeEndpointNameFilter" breadcrumb: "Reference / Python / filter / EdgeEndpointNameFilter" --- # EdgeEndpointNameFilter ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the entity's | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the entity's | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the entity's | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the entity's | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the entity's | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the entity's | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the entity's string value contains the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the entity's string value ends with the specified suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the entity's string value. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed edit distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | If true, the value must also match as a prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the entity's string value is contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of allowed string values. 
| #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the entity's string value is **not** contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of string values to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the entity's string value **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the entity's string value starts with the specified prefix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. | --- ## Reference > Api > Python > Filter > EdgeEndpointTypeFilter --- title: "EdgeEndpointTypeFilter" breadcrumb: "Reference / Python / filter / EdgeEndpointTypeFilter" --- # EdgeEndpointTypeFilter ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the entity's | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the entity's | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the entity's | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the entity's | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the entity's | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the entity's | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the entity's string value contains the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the entity's string value ends with the specified suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. 
| #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the entity's string value. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed edit distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | If true, the value must also match as a prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the entity's string value is contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of allowed string values. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the entity's string value is **not** contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of string values to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the entity's string value **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the entity's string value starts with the specified prefix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. 
| --- ## Reference > Api > Python > Filter > ExplodedEdge --- title: "ExplodedEdge" breadcrumb: "Reference / Python / filter / ExplodedEdge" --- # ExplodedEdge ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | | | [`at`](#at) | | | [`before`](#before) | | | [`latest`](#latest) | | | [`layer`](#layer) | | | [`layers`](#layers) | | | [`metadata`](#metadata) | | | [`property`](#property) | | | [`snapshot_at`](#snapshot_at) | | | [`snapshot_latest`](#snapshot_latest) | | | [`window`](#window) | | --- ## Method Details ### [after](#after) **Signature:** `after(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [at](#at) **Signature:** `at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [before](#before) **Signature:** `before(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [latest](#latest) ### [layer](#layer) **Signature:** `layer(layer)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layer` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [layers](#layers) **Signature:** `layers(layers)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layers` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [metadata](#metadata) **Signature:** `metadata(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [property](#property) **Signature:** `property(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_latest](#snapshot_latest) ### [window](#window) **Signature:** `window(start, end)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `end` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Filter > FilterExpr --- title: "FilterExpr" breadcrumb: "Reference / Python / filter / FilterExpr" --- # FilterExpr --- ## Reference > Api > Python > Filter > FilterOps --- title: "FilterOps" breadcrumb: "Reference / Python / filter / FilterOps" --- # FilterOps ## Methods | Method | Description | |--------|-------------| | [`all`](#all) | | | [`any`](#any) | | | [`avg`](#avg) | | | [`contains`](#contains) | Returns a filter expression that checks whether the property's | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the property's | | [`first`](#first) | | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a 
filter expression that checks whether the property | | [`is_none`](#is_none) | Returns a filter expression that checks whether the property | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the property | | [`is_some`](#is_some) | Returns a filter expression that checks whether the property | | [`last`](#last) | | | [`len`](#len) | | | [`max`](#max) | | | [`min`](#min) | | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the property's | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the property's | | [`sum`](#sum) | | --- ## Method Details ### [all](#all) ### [any](#any) ### [avg](#avg) ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the property's string representation contains the given value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Prop](/docs/reference/api/python/raphtory/Prop) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the property's string representation ends with the given value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Prop](/docs/reference/api/python/raphtory/Prop) | - | Suffix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [first](#first) ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(prop_value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the property's string value. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `prop_value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed Levenshtein distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | Whether to require a matching prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the property is contained within the specified iterable of values. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[Prop](/docs/reference/api/python/raphtory/Prop)] | - | Iterable of property values to match against. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_none](#is_none) Returns a filter expression that checks whether the property value is `None` / missing. #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating `value is None`. 
| ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the property is **not** contained within the specified iterable of values. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[Prop](/docs/reference/api/python/raphtory/Prop)] | - | Iterable of property values to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [is_some](#is_some) Returns a filter expression that checks whether the property value is present (not `None`). #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating `value is not None`. | ### [last](#last) ### [len](#len) ### [max](#max) ### [min](#min) ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the property's string representation **does not** contain the given value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Prop](/docs/reference/api/python/raphtory/Prop) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the property's string representation starts with the given value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Prop](/docs/reference/api/python/raphtory/Prop) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. 
| ### [sum](#sum) --- ## Reference > Api > Python > Filter > Graph --- title: "Graph" breadcrumb: "Reference / Python / filter / Graph" --- # Graph ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | | | [`at`](#at) | | | [`before`](#before) | | | [`latest`](#latest) | | | [`layer`](#layer) | | | [`layers`](#layers) | | | [`snapshot_at`](#snapshot_at) | | | [`snapshot_latest`](#snapshot_latest) | | | [`window`](#window) | | --- ## Method Details ### [after](#after) **Signature:** `after(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [at](#at) **Signature:** `at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [before](#before) **Signature:** `before(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [latest](#latest) ### [layer](#layer) **Signature:** `layer(layer)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layer` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [layers](#layers) **Signature:** `layers(layers)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layers` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_latest](#snapshot_latest) ### [window](#window) **Signature:** `window(start, end)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `end` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Filter > Node --- title: "Node" breadcrumb: "Reference / Python / filter / Node" --- # Node ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | | | [`at`](#at) | | | [`before`](#before) | | | [`id`](#id) | | | [`latest`](#latest) | | | [`layer`](#layer) | | | [`layers`](#layers) | | | [`metadata`](#metadata) | | | [`name`](#name) | | | [`node_type`](#node_type) | | | [`property`](#property) | | | [`snapshot_at`](#snapshot_at) | | | [`snapshot_latest`](#snapshot_latest) | | | [`window`](#window) | | --- ## Method Details ### [after](#after) **Signature:** `after(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [at](#at) **Signature:** `at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [before](#before) **Signature:** `before(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [id](#id) ### 
[latest](#latest) ### [layer](#layer) **Signature:** `layer(layer)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layer` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [layers](#layers) **Signature:** `layers(layers)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `layers` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [metadata](#metadata) **Signature:** `metadata(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [name](#name) ### [node_type](#node_type) ### [property](#property) **Signature:** `property(name)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [snapshot_latest](#snapshot_latest) ### [window](#window) **Signature:** `window(start, end)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | | `end` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Filter > NodeIdFilterBuilder --- title: "NodeIdFilterBuilder" breadcrumb: "Reference / Python / filter / NodeIdFilterBuilder" --- # NodeIdFilterBuilder ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the string | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the string | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the node ID | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the node ID | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the string | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the string | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the string representation of the node ID contains the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the string representation of the node ID ends with the given suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. 
| #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the string representation of the node ID. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed edit distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | If true, the value must also match as a prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the node ID is contained within the specified iterable of IDs. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[int](https://docs.python.org/3/library/functions.html#int)] | - | Iterable of node IDs to match against. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the node ID is **not** contained within the specified iterable of IDs. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[int](https://docs.python.org/3/library/functions.html#int)] | - | Iterable of node IDs to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the string representation of the node ID **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the string representation of the node ID starts with the given prefix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. 
| --- ## Reference > Api > Python > Filter > NodeNameFilterBuilder --- title: "NodeNameFilterBuilder" breadcrumb: "Reference / Python / filter / NodeNameFilterBuilder" --- # NodeNameFilterBuilder ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the entity's | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the entity's | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the entity's | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the entity's | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the entity's | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the entity's | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the entity's string value contains the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the entity's string value ends with the specified suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the entity's string value. Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed edit distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | If true, the value must also match as a prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the entity's string value is contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of allowed string values. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the entity's string value is **not** contained within the given iterable of strings. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of string values to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the entity's string value **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the entity's string value starts with the specified prefix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. | --- ## Reference > Api > Python > Filter > NodeTypeFilterBuilder --- title: "NodeTypeFilterBuilder" breadcrumb: "Reference / Python / filter / NodeTypeFilterBuilder" --- # NodeTypeFilterBuilder ## Methods | Method | Description | |--------|-------------| | [`contains`](#contains) | Returns a filter expression that checks whether the entity's | | [`ends_with`](#ends_with) | Returns a filter expression that checks whether the entity's | | [`fuzzy_search`](#fuzzy_search) | Returns a filter expression that performs fuzzy matching | | [`is_in`](#is_in) | Returns a filter expression that checks whether the entity's | | [`is_not_in`](#is_not_in) | Returns a filter expression that checks whether the entity's | | [`not_contains`](#not_contains) | Returns a filter expression that checks whether the entity's | | [`starts_with`](#starts_with) | Returns a filter expression that checks whether the entity's | --- ## Method Details ### [contains](#contains) **Signature:** `contains(value)` Returns a filter expression that checks whether the entity's string value contains the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring search. | ### [ends_with](#ends_with) **Signature:** `ends_with(value)` Returns a filter expression that checks whether the entity's string value ends with the specified suffix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Suffix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating suffix matching. | ### [fuzzy_search](#fuzzy_search) **Signature:** `fuzzy_search(value, levenshtein_distance, prefix_match)` Returns a filter expression that performs fuzzy matching against the entity's string value. 
Uses a specified Levenshtein distance and optional prefix matching. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | String to approximately match against. | | `levenshtein_distance` | [int](https://docs.python.org/3/library/functions.html#int) | - | Maximum allowed edit distance. | | `prefix_match` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | If true, the value must also match as a prefix. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression performing approximate text matching. | ### [is_in](#is_in) **Signature:** `is_in(values)` Returns a filter expression that checks whether the entity's string value is contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of allowed string values. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating membership. | ### [is_not_in](#is_not_in) **Signature:** `is_not_in(values)` Returns a filter expression that checks whether the entity's string value is **not** contained within the given iterable of strings. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | Iterable of string values to exclude. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating non-membership. | ### [not_contains](#not_contains) **Signature:** `not_contains(value)` Returns a filter expression that checks whether the entity's string value **does not** contain the given substring. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Substring that must not appear within the value. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating substring exclusion. | ### [starts_with](#starts_with) **Signature:** `starts_with(value)` Returns a filter expression that checks whether the entity's string value starts with the specified prefix. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | Prefix to check for. | #### Returns | Type | Description | |------|-------------| | `filter.FilterExpr` | A filter expression evaluating prefix matching. | --- ## Reference > Api > Python > Filter > PropertyFilterOps --- title: "PropertyFilterOps" breadcrumb: "Reference / Python / filter / PropertyFilterOps" --- # PropertyFilterOps ## Methods | Method | Description | |--------|-------------| | [`temporal`](#temporal) | | --- ## Method Details ### [temporal](#temporal) --- ## Reference > Api > Python > Graphql > AllPropertySpec --- title: "AllPropertySpec" breadcrumb: "Reference / Python / graphql / AllPropertySpec" --- # AllPropertySpec Specifies that **all** properties should be included when creating an index. Use one of the predefined variants: ALL , ALL_METADATA , or ALL_TEMPORAL . 
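As a hedged sketch of where this enum fits, the snippet below builds a [RemoteIndexSpec](/docs/reference/api/python/graphql/RemoteIndexSpec) that indexes every property and submits it through `RaphtoryClient.create_index`, whose signature is documented later in this reference. The `RemoteIndexSpec` keyword arguments and the graph path `"my_graph"` are assumptions, not confirmed API; consult the RemoteIndexSpec page for the exact constructor.

```python
from raphtory.graphql import AllPropertySpec, RaphtoryClient, RemoteIndexSpec

# Assumes a Raphtory GraphQL server is already running on the default port (1736).
client = RaphtoryClient("http://localhost:1736")

# Hypothetical keyword arguments: ALL asks for every property (metadata and temporal)
# to be included in the index.
spec = RemoteIndexSpec(node_props=AllPropertySpec.ALL, edge_props=AllPropertySpec.ALL)

# create_index(path, index_spec, in_ram=True) as documented under RaphtoryClient.
client.create_index("my_graph", spec, in_ram=True)
```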
--- ## Reference > Api > Python > Graphql > GraphServer --- title: "GraphServer" breadcrumb: "Reference / Python / graphql / GraphServer" --- # GraphServer A class for defining and running a Raphtory GraphQL server ## Methods | Method | Description | |--------|-------------| | [`run`](#run) | Run the server until completion. | | [`set_embeddings`](#set_embeddings) | Setup the server to vectorise graphs with a default template. | | [`start`](#start) | Start the server and return a handle to it. | | [`turn_off_index`](#turn_off_index) | Turn off index for all graphs | | [`with_vectorised_graphs`](#with_vectorised_graphs) | Vectorise a subset of the graphs of the server. | --- ## Method Details ### [run](#run) **Signature:** `run(port=1736, timeout_ms=180000)` Run the server until completion. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `port` | [int](https://docs.python.org/3/library/functions.html#int), optional | `1736` | The port to use. Defaults to 1736. | | `timeout_ms` | [int](https://docs.python.org/3/library/functions.html#int), optional | `180000` | Timeout for waiting for the server to start. Defaults to 180000. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [set_embeddings](#set_embeddings) **Signature:** `set_embeddings(cache, embedding=None, nodes=True, edges=True)` Setup the server to vectorise graphs with a default template. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `cache` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the directory to use as cache for the embeddings. | | `embedding` | [Callable](https://docs.python.org/3/library/typing.html#typing.Callable), optional | `None` | the embedding function to translate documents to embeddings. | | `nodes` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | if nodes have to be embedded or not or the custom template to use if a str is provided. Defaults to True. | | `edges` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | if edges have to be embedded or not or the custom template to use if a str is provided. Defaults to True. | #### Returns | Type | Description | |------|-------------| | [GraphServer](/docs/reference/api/python/graphql/GraphServer) | A new server object with embeddings setup. | ### [start](#start) **Signature:** `start(port=1736, timeout_ms=5000)` Start the server and return a handle to it. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `port` | [int](https://docs.python.org/3/library/functions.html#int), optional | `1736` | the port to use. Defaults to 1736. | | `timeout_ms` | [int](https://docs.python.org/3/library/functions.html#int), optional | `5000` | wait for server to be online. Defaults to 5000. The server is stopped if not online within timeout_ms but manages to come online as soon as timeout_ms finishes! 
| #### Returns | Type | Description | |------|-------------| | [RunningGraphServer](/docs/reference/api/python/graphql/RunningGraphServer) | The running server | ### [turn_off_index](#turn_off_index) Turn off index for all graphs #### Returns | Type | Description | |------|-------------| | [GraphServer](/docs/reference/api/python/graphql/GraphServer) | The server with indexing disabled | ### [with_vectorised_graphs](#with_vectorised_graphs) **Signature:** `with_vectorised_graphs(graph_names, nodes=True, edges=True)` Vectorise a subset of the graphs of the server. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph_names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | the names of the graphs to vectorise. All by default. | | `nodes` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | if nodes have to be embedded or not or the custom template to use if a str is provided. Defaults to True. | | `edges` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | if edges have to be embedded or not or the custom template to use if a str is provided. Defaults to True. | #### Returns | Type | Description | |------|-------------| | [GraphServer](/docs/reference/api/python/graphql/GraphServer) | A new server object containing the vectorised graphs. | --- ## Reference > Api > Python > Graphql > PropsInput --- title: "PropsInput" breadcrumb: "Reference / Python / graphql / PropsInput" --- # PropsInput Create a PropsInput by choosing to include all/some properties explicitly. --- ## Reference > Api > Python > Graphql > RaphtoryClient --- title: "RaphtoryClient" breadcrumb: "Reference / Python / graphql / RaphtoryClient" --- # RaphtoryClient A client for handling GraphQL operations in the context of Raphtory. ## Methods | Method | Description | |--------|-------------| | [`copy_graph`](#copy_graph) | Copy graph from a path path on the server to a new_path on the server | | [`create_index`](#create_index) | Create Index for graph on the server at 'path' | | [`delete_graph`](#delete_graph) | Delete graph from a path path on the server | | [`is_server_online`](#is_server_online) | Check if the server is online. | | [`move_graph`](#move_graph) | Move graph from a path path on the server to a new_path on the server | | [`new_graph`](#new_graph) | Create a new empty Graph on the server at path | | [`query`](#query) | Make a GraphQL query against the server. 
| | [`receive_graph`](#receive_graph) | Receive graph from a path path on the server | | [`remote_graph`](#remote_graph) | Get a RemoteGraph reference to a graph on the server at path | | [`send_graph`](#send_graph) | Send a graph to the server | | [`upload_graph`](#upload_graph) | Upload graph file from a path file_path on the client | --- ## Method Details ### [copy_graph](#copy_graph) **Signature:** `copy_graph(path, new_path)` Copy graph from a path path on the server to a new_path on the server #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be copied | | `new_path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the new path of the copied graph | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index](#create_index) **Signature:** `create_index(path, index_spec, in_ram=True)` Create Index for graph on the server at 'path' #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be indexed | | `index_spec` | [RemoteIndexSpec](/docs/reference/api/python/graphql/RemoteIndexSpec) | - | spec specifying the properties that need to be indexed | | `in_ram` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | create index in ram | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [delete_graph](#delete_graph) **Signature:** `delete_graph(path)` Delete graph from a path path on the server #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be deleted | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [is_server_online](#is_server_online) Check if the server is online. #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | Returns true if server is online otherwise false.
| ### [move_graph](#move_graph) **Signature:** `move_graph(path, new_path)` Move graph from a path path on the server to a new_path on the server #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be moved | | `new_path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the new path of the moved graph | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [new_graph](#new_graph) **Signature:** `new_graph(path, graph_type)` Create a new empty Graph on the server at path #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be created | | `graph_type` | `Literal["EVENT", "PERSISTENT"]` | - | the type of graph that should be created - this can be EVENT or PERSISTENT | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [query](#query) **Signature:** `query(query, variables=None)` Make a GraphQL query against the server. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the query to make. | | `variables` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [Any](https://docs.python.org/3/library/typing.html#typing.Any)], optional | `None` | a dict of variables present on the query and their values. | ### [receive_graph](#receive_graph) **Signature:** `receive_graph(path)` Receive graph from a path path on the server #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be received | ### [remote_graph](#remote_graph) **Signature:** `remote_graph(path)` Get a RemoteGraph reference to a graph on the server at path #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph to be created | #### Returns | Type | Description | |------|-------------| | [RemoteGraph](/docs/reference/api/python/graphql/RemoteGraph) | the remote graph reference | ### [send_graph](#send_graph) **Signature:** `send_graph(path, graph, overwrite=False)` Send a graph to the server #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph | | `graph` | [Graph](/docs/reference/api/python/raphtory/Graph) \| [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | - | the graph to send | | `overwrite` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | overwrite existing graph. Defaults to False. 
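
The graph-transfer methods are easiest to see side by side. The following sketch builds a small graph locally, pushes it with `send_graph`, and then creates an empty server-side graph to edit remotely; the server URL and graph paths are illustrative assumptions.

```python
# Illustrative sketch: upload a locally built graph, or create an empty remote
# one and obtain a RemoteGraph handle for incremental updates.
from raphtory import Graph
from raphtory.graphql import RaphtoryClient

client = RaphtoryClient("http://localhost:1736")

g = Graph()
g.add_edge(1, "alice", "bob", properties={"amount": 10.0})
g.add_edge(2, "bob", "carol", properties={"amount": 5.0})

# overwrite=True replaces any existing graph stored at the path.
client.send_graph("graphs/example", g, overwrite=True)

# Alternatively, start from an empty server-side graph and edit it remotely.
client.new_graph("graphs/example_empty", "EVENT")
rg = client.remote_graph("graphs/example_empty")
```
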
| ### [upload_graph](#upload_graph) **Signature:** `upload_graph(path, file_path, overwrite=False)` Upload graph file from a path file_path on the client #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the graph | | `file_path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the path of the graph on the client | | `overwrite` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | overwrite existing graph. Defaults to False. | --- ## Reference > Api > Python > Graphql > RemoteEdge --- title: "RemoteEdge" breadcrumb: "Reference / Python / graphql / RemoteEdge" --- # RemoteEdge A remote edge reference Returned by [RemoteGraph.edge][raphtory.graphql.RemoteGraph.edge], [RemoteGraph.add_edge][raphtory.graphql.RemoteGraph.add_edge], and [RemoteGraph.delete_edge][raphtory.graphql.RemoteGraph.delete_edge]. ## Methods | Method | Description | |--------|-------------| | [`add_metadata`](#add_metadata) | Add metadata to the edge within the remote graph. | | [`add_updates`](#add_updates) | Add updates to an edge in the remote graph at a specified time. | | [`delete`](#delete) | Mark the edge as deleted at the specified time. | | [`update_metadata`](#update_metadata) | Update metadata of an edge in the remote graph overwriting existing values. | --- ## Method Details ### [add_metadata](#add_metadata) **Signature:** `add_metadata(properties, layer=None)` Add metadata to the edge within the remote graph. This function is used to add metadata to an edge that does not change over time. This metadata is fundamental information of the edge. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | A dictionary of properties to be added to the edge. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want these properties to be added on to. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_updates](#add_updates) **Signature:** `add_updates(t, properties=None, layer=None)` Add updates to an edge in the remote graph at a specified time. This function allows for the addition of property updates to an edge within the graph. The updates are time-stamped, meaning they are applied at the specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp at which the updates should be applied. | | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)], optional | `None` | A dictionary of properties to update. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want the updates to be applied. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [delete](#delete) **Signature:** `delete(t, layer=None)` Mark the edge as deleted at the specified time. 
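
These methods are usually chained off the handle returned by `RemoteGraph.add_edge`. A hedged sketch follows; the server URL, graph path, and node ids are illustrative assumptions.

```python
# Hedged sketch of maintaining one edge's history through a RemoteEdge handle.
from raphtory.graphql import RaphtoryClient

rg = RaphtoryClient("http://localhost:1736").remote_graph("graphs/example")

edge = rg.add_edge(1, "alice", "bob", properties={"amount": 10.0})  # RemoteEdge

edge.add_updates(2, properties={"amount": 12.5})  # time-stamped property update
edge.add_metadata({"channel": "wire"})            # immutable metadata
edge.update_metadata({"channel": "card"})         # overwrite a metadata value
edge.delete(3)                                    # mark deleted at t=3 (most useful on PERSISTENT graphs)
```
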
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp at which the deletion should be applied. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want the deletion applied to. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(properties, layer=None)` Update metadata of an edge in the remote graph overwriting existing values. This function is used to add properties to an edge that does not change over time. These properties are fundamental attributes of the edge. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | A dictionary of properties to be added to the edge. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want these properties to be added on to. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Graphql > RemoteEdgeAddition --- title: "RemoteEdgeAddition" breadcrumb: "Reference / Python / graphql / RemoteEdgeAddition" --- # RemoteEdgeAddition An edge update --- ## Reference > Api > Python > Graphql > RemoteGraph --- title: "RemoteGraph" breadcrumb: "Reference / Python / graphql / RemoteGraph" --- # RemoteGraph ## Methods | Method | Description | |--------|-------------| | [`add_edge`](#add_edge) | Adds a new edge with the given source and destination nodes and properties to the remote graph. | | [`add_edges`](#add_edges) | Batch add edge updates to the remote graph | | [`add_metadata`](#add_metadata) | Adds metadata to the remote graph. | | [`add_node`](#add_node) | Adds a new node with the given id and properties to the remote graph. | | [`add_nodes`](#add_nodes) | Batch add node updates to the remote graph | | [`add_property`](#add_property) | Adds properties to the remote graph. | | [`create_node`](#create_node) | Create a new node with the given id and properties to the remote graph and fail if the node already exists. | | [`delete_edge`](#delete_edge) | Deletes an edge in the remote graph, given the timestamp, src and dst nodes and layer (optional) | | [`edge`](#edge) | Gets a remote edge with the specified source and destination nodes | | [`node`](#node) | Gets a remote node with the specified id | | [`update_metadata`](#update_metadata) | Updates metadata on the remote graph. | --- ## Method Details ### [add_edge](#add_edge) **Signature:** `add_edge(timestamp, src, dst, properties=None, layer=None)` Adds a new edge with the given source and destination nodes and properties to the remote graph. 
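
As a quick orientation before the parameter tables, the sketch below strings several of these methods together for incremental remote ingestion; the server URL and graph path are illustrative assumptions.

```python
# Hedged sketch of incremental ingestion through a RemoteGraph handle.
from raphtory.graphql import RaphtoryClient

rg = RaphtoryClient("http://localhost:1736").remote_graph("graphs/example")

# Nodes and edges are created (or updated) at explicit timestamps.
rg.add_node(1, "alice", properties={"balance": 100}, node_type="account")
rg.add_node(1, "bob", node_type="account")
rg.add_edge(2, "alice", "bob", properties={"amount": 25.0}, layer="transfer")

# Graph-level temporal properties and metadata.
rg.add_property(2, {"ingested_events": 3})
rg.add_metadata({"source": "demo"})

# Fetch handles to existing entities for further edits.
alice = rg.node("alice")
transfer = rg.edge("alice", "bob")
```
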
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp of the edge. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the destination node. | | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict), optional | `None` | The properties of the edge, as a dict of string and properties. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer of the edge. | #### Returns | Type | Description | |------|-------------| | [RemoteEdge](/docs/reference/api/python/graphql/RemoteEdge) | the remote edge | ### [add_edges](#add_edges) **Signature:** `add_edges(updates)` Batch add edge updates to the remote graph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `updates` | list[[RemoteEdgeAddition](/docs/reference/api/python/graphql/RemoteEdgeAddition)] | - | The list of updates you want to apply to the remote graph | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_metadata](#add_metadata) **Signature:** `add_metadata(properties)` Adds metadata to the remote graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The metadata of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_node](#add_node) **Signature:** `add_node(timestamp, id, properties=None, node_type=None)` Adds a new node with the given id and properties to the remote graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict), optional | `None` | The properties of the node. 
| | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type | #### Returns | Type | Description | |------|-------------| | [RemoteNode](/docs/reference/api/python/graphql/RemoteNode) | the new remote node | ### [add_nodes](#add_nodes) **Signature:** `add_nodes(updates)` Batch add node updates to the remote graph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `updates` | list[[RemoteNodeAddition](/docs/reference/api/python/graphql/RemoteNodeAddition)] | - | The list of updates you want to apply to the remote graph | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_property](#add_property) **Signature:** `add_property(timestamp, properties)` Adds properties to the remote graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp of the temporal property. | | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The temporal properties of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_node](#create_node) **Signature:** `create_node(timestamp, id, properties=None, node_type=None)` Create a new node with the given id and properties to the remote graph and fail if the node already exists. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict), optional | `None` | The properties of the node. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type | #### Returns | Type | Description | |------|-------------| | [RemoteNode](/docs/reference/api/python/graphql/RemoteNode) | the new remote node | ### [delete_edge](#delete_edge) **Signature:** `delete_edge(timestamp, src, dst, layer=None)` Deletes an edge in the remote graph, given the timestamp, src and dst nodes and layer (optional) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) | - | The timestamp of the edge. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the destination node. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer of the edge. 
| #### Returns | Type | Description | |------|-------------| | [RemoteEdge](/docs/reference/api/python/graphql/RemoteEdge) | the remote edge | ### [edge](#edge) **Signature:** `edge(src, dst)` Gets a remote edge with the specified source and destination nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the source node id | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the destination node id | #### Returns | Type | Description | |------|-------------| | [RemoteEdge](/docs/reference/api/python/graphql/RemoteEdge) | the remote edge reference | ### [node](#node) **Signature:** `node(id)` Gets a remote node with the specified id #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the node id | #### Returns | Type | Description | |------|-------------| | [RemoteNode](/docs/reference/api/python/graphql/RemoteNode) | the remote node reference | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(properties)` Updates metadata on the remote graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The metadata of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Graphql > RemoteIndexSpec --- title: "RemoteIndexSpec" breadcrumb: "Reference / Python / graphql / RemoteIndexSpec" --- # RemoteIndexSpec Create a RemoteIndexSpec specifying which node and edge properties to index. --- ## Reference > Api > Python > Graphql > RemoteNode --- title: "RemoteNode" breadcrumb: "Reference / Python / graphql / RemoteNode" --- # RemoteNode ## Methods | Method | Description | |--------|-------------| | [`add_metadata`](#add_metadata) | Add metadata to a node in the remote graph. | | [`add_updates`](#add_updates) | Add updates to a node in the remote graph at a specified time. | | [`set_node_type`](#set_node_type) | Set the type on the node. This only works if the type has not been previously set, otherwise will | | [`update_metadata`](#update_metadata) | Update metadata of a node in the remote graph overwriting existing values. | --- ## Method Details ### [add_metadata](#add_metadata) **Signature:** `add_metadata(properties)` Add metadata to a node in the remote graph. This function is used to add properties to a node that do not change over time. These properties are fundamental attributes of the node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | A dictionary of properties to be added to the node. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_updates](#add_updates) **Signature:** `add_updates(t, properties=None)` Add updates to a node in the remote graph at a specified time. 
This function allows for the addition of property updates to a node within the graph. The updates are time-stamped, meaning they are applied at the specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | - | The timestamp at which the updates should be applied. | | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)], optional | `None` | A dictionary of properties to update. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [set_node_type](#set_node_type) **Signature:** `set_node_type(new_type)` Set the type on the node. This only works if the type has not been previously set, otherwise will throw an error #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `new_type` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The new type to be set | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(properties)` Update metadata of a node in the remote graph overwriting existing values. This function is used to add properties to a node that does not change over time. These properties are fundamental attributes of the node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | A dictionary of properties to be added to the node. 
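
A short sketch of a typical RemoteNode workflow follows; the server URL, graph path, and node id are illustrative assumptions.

```python
# Hedged sketch of maintaining one node through a RemoteNode handle.
from raphtory.graphql import RaphtoryClient

rg = RaphtoryClient("http://localhost:1736").remote_graph("graphs/example")

node = rg.add_node(1, "carol")                     # returns a RemoteNode
node.set_node_type("account")                      # only if no type was set before
node.add_updates(2, properties={"balance": 50})    # time-stamped property update
node.add_metadata({"created_by": "demo"})          # immutable metadata
node.update_metadata({"created_by": "etl"})        # overwrite a metadata value
```
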
| #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Graphql > RemoteNodeAddition --- title: "RemoteNodeAddition" breadcrumb: "Reference / Python / graphql / RemoteNodeAddition" --- # RemoteNodeAddition Node addition update --- ## Reference > Api > Python > Graphql > RemoteUpdate --- title: "RemoteUpdate" breadcrumb: "Reference / Python / graphql / RemoteUpdate" --- # RemoteUpdate A temporal update --- ## Reference > Api > Python > Graphql > RunningGraphServer --- title: "RunningGraphServer" breadcrumb: "Reference / Python / graphql / RunningGraphServer" --- # RunningGraphServer A Raphtory server handler that also enables querying the server ## Methods | Method | Description | |--------|-------------| | [`get_client`](#get_client) | Get the client for the server | | [`stop`](#stop) | Stop the server and wait for it to finish | --- ## Method Details ### [get_client](#get_client) Get the client for the server #### Returns | Type | Description | |------|-------------| | [RaphtoryClient](/docs/reference/api/python/graphql/RaphtoryClient) | the client | ### [stop](#stop) Stop the server and wait for it to finish #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Graphql > SomePropertySpec --- title: "SomePropertySpec" breadcrumb: "Reference / Python / graphql / SomePropertySpec" --- # SomePropertySpec Create a SomePropertySpec by explicitly listing metadata and/or temporal property names. --- ## Reference > Api > Python > Iterables > ArcStringIterable --- title: "ArcStringIterable" breadcrumb: "Reference / Python / iterables / ArcStringIterable" --- # ArcStringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > ArcStringVecIterable --- title: "ArcStringVecIterable" breadcrumb: "Reference / Python / iterables / ArcStringVecIterable" --- # ArcStringVecIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > BoolIterable --- title: "BoolIterable" breadcrumb: "Reference / Python / iterables / BoolIterable" --- # BoolIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > EventTimeIterable --- title: "EventTimeIterable" breadcrumb: "Reference / Python / iterables / EventTimeIterable" --- # EventTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Change this Iterable of EventTime into an Iterable of corresponding UTC DateTimes. | | [`event_id`](#event_id) | Change this Iterable of EventTime into an Iterable of their associated event ids. | | [`t`](#t) | Change this Iterable of EventTime into an Iterable of corresponding Unix timestamps in milliseconds. | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Property Details ### [dt](#dt) Change this Iterable of EventTime into an Iterable of corresponding UTC DateTimes. 
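
Any accessor that yields an EventTimeIterable can be converted in bulk through these properties. The sketch below assumes, purely for illustration, that `g.nodes.earliest_time` returns an EventTimeIterable (or its Option variant documented later, which exposes the same properties); that accessor is not part of the reference above.

```python
# Hedged sketch: bulk conversion of EventTime values. The accessor
# `g.nodes.earliest_time` is an assumption used for illustration.
from raphtory import Graph

g = Graph()
g.add_edge(1_000, "a", "b")
g.add_edge(2_000, "b", "c")

times = g.nodes.earliest_time    # assumed to yield an EventTimeIterable
print(times.t.collect())         # millisecond Unix timestamps per node
print(times.dt.collect())        # the same instants as UTC datetimes
print(times.event_id.collect())  # tie-breaking event ids within a timestamp
```
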
#### Returns | Type | Description | |------|-------------| | [ResultUtcDateTimeIterable](/docs/reference/api/python/iterables/ResultUtcDateTimeIterable) | Iterable of UTC datetimes for each EventTime. | ### [event_id](#event_id) Change this Iterable of EventTime into an Iterable of their associated event ids. #### Returns | Type | Description | |------|-------------| | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | Iterable of event ids associated to each EventTime. | ### [t](#t) Change this Iterable of EventTime into an Iterable of corresponding Unix timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [I64Iterable](/docs/reference/api/python/iterables/I64Iterable) | Iterable of millisecond timestamps since the Unix epoch for each EventTime. | --- ## Reference > Api > Python > Iterables > GIDGIDIterable --- title: "GIDGIDIterable" breadcrumb: "Reference / Python / iterables / GIDGIDIterable" --- # GIDGIDIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > GIDIterable --- title: "GIDIterable" breadcrumb: "Reference / Python / iterables / GIDIterable" --- # GIDIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > HistoryDateTimeIterable --- title: "HistoryDateTimeIterable" breadcrumb: "Reference / Python / iterables / HistoryDateTimeIterable" --- # HistoryDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect datetimes for each history. | --- ## Method Details ### [collect](#collect) Collect datetimes for each history. #### Returns | Type | Description | |------|-------------| | list[list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)]] | UTC datetimes per history. | #### Raises | Exception | Description | |-----------|-------------| | `TimeError` | If a timestamp cannot be converted to a datetime. | --- ## Reference > Api > Python > Iterables > HistoryEventIdIterable --- title: "HistoryEventIdIterable" breadcrumb: "Reference / Python / iterables / HistoryEventIdIterable" --- # HistoryEventIdIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect event ids for each history into a NumPy array. | | [`to_list`](#to_list) | Collect event ids for each history into a list. | --- ## Method Details ### [collect](#collect) Collect event ids for each history into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[`NDArray[np.uintp]`] | NumPy NDArray of event ids per history. | ### [to_list](#to_list) Collect event ids for each history into a list. #### Returns | Type | Description | |------|-------------| | list[list[[int](https://docs.python.org/3/library/functions.html#int)]] | List of event ids per history. | --- ## Reference > Api > Python > Iterables > HistoryIterable --- title: "HistoryIterable" breadcrumb: "Reference / Python / iterables / HistoryIterable" --- # HistoryIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect time entries from each history in the iterable. 
| | [`flatten`](#flatten) | Flatten the iterable of history objects into a single list of all contained time entries. | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access history items as UTC datetimes. | | [`event_id`](#event_id) | Access event ids of history items. | | [`intervals`](#intervals) | Access intervals between consecutive timestamps in milliseconds. | | [`t`](#t) | Access history items as timestamps (milliseconds since the Unix epoch). | --- ## Method Details ### [collect](#collect) Collect time entries from each history in the iterable. #### Returns | Type | Description | |------|-------------| | list[list[[EventTime](/docs/reference/api/python/raphtory/EventTime)]] | Collected entries per history. | ### [flatten](#flatten) Flatten the iterable of history objects into a single list of all contained time entries. #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | List of time entries. | --- ## Property Details ### [dt](#dt) Access history items as UTC datetimes. #### Returns | Type | Description | |------|-------------| | [HistoryDateTimeIterable](/docs/reference/api/python/iterables/HistoryDateTimeIterable) | Iterable of HistoryDateTime objects, one for each item. | ### [event_id](#event_id) Access event ids of history items. #### Returns | Type | Description | |------|-------------| | [HistoryEventIdIterable](/docs/reference/api/python/iterables/HistoryEventIdIterable) | Iterable of HistoryEventId objects, one for each item. | ### [intervals](#intervals) Access intervals between consecutive timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [IntervalsIterable](/docs/reference/api/python/iterables/IntervalsIterable) | Iterable of Intervals objects, one for each item. | ### [t](#t) Access history items as timestamps (milliseconds since the Unix epoch). #### Returns | Type | Description | |------|-------------| | [HistoryTimestampIterable](/docs/reference/api/python/iterables/HistoryTimestampIterable) | Iterable of HistoryTimestamp objects, one for each item. | --- ## Reference > Api > Python > Iterables > HistoryTimestampIterable --- title: "HistoryTimestampIterable" breadcrumb: "Reference / Python / iterables / HistoryTimestampIterable" --- # HistoryTimestampIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect timestamps for each history into a NumPy array. | | [`to_list`](#to_list) | Collect timestamps for each history into a list. | --- ## Method Details ### [collect](#collect) Collect timestamps for each history into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[`NDArray[np.int64]`] | NumPy NDArray of timestamps in milliseconds per history. | ### [to_list](#to_list) Collect timestamps for each history into a list. #### Returns | Type | Description | |------|-------------| | list[list[[int](https://docs.python.org/3/library/functions.html#int)]] | List of timestamps in milliseconds per history. 
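
The history iterables are most useful for bulk temporal feature extraction. The sketch below assumes that `g.nodes.history()` returns a HistoryIterable as documented above; any other accessor yielding one behaves the same way.

```python
# Hedged sketch: per-node history timestamps and inter-event gaps.
import numpy as np
from raphtory import Graph

g = Graph()
for t, (src, dst) in enumerate([("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")], start=1):
    g.add_edge(t * 1_000, src, dst)

hist = g.nodes.history()         # assumed to yield a HistoryIterable

per_node_ts = hist.t.collect()   # list of NDArray[int64], one array per node
gaps = hist.intervals.to_list()  # inter-event gaps in milliseconds, per node
all_events = hist.flatten()      # single flat list of EventTime entries

# Aggregate across all nodes, e.g. the mean gap between consecutive events.
mean_gap = np.mean(np.concatenate(hist.intervals.collect()))
print(per_node_ts, gaps, len(all_events), mean_gap)
```
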
| --- ## Reference > Api > Python > Iterables > I64Iterable --- title: "I64Iterable" breadcrumb: "Reference / Python / iterables / I64Iterable" --- # I64Iterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Iterables > IntervalsIterable --- title: "IntervalsIterable" breadcrumb: "Reference / Python / iterables / IntervalsIterable" --- # IntervalsIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect intervals between each history's consecutive timestamps in milliseconds into a NumPy array. | | [`to_list`](#to_list) | Collect intervals between each history's consecutive timestamps in milliseconds into a list. | --- ## Method Details ### [collect](#collect) Collect intervals between each history's consecutive timestamps in milliseconds into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[`NDArray[np.int64]`] | NumPy NDArray of intervals per history. | ### [to_list](#to_list) Collect intervals between each history's consecutive timestamps in milliseconds into a list. #### Returns | Type | Description | |------|-------------| | list[list[[int](https://docs.python.org/3/library/functions.html#int)]] | List of intervals per history. | --- ## Reference > Api > Python > Iterables > MetadataListList --- title: "MetadataListList" breadcrumb: "Reference / Python / iterables / MetadataListList" --- # MetadataListList ## Methods | Method | Description | |--------|-------------| | [`as_dict`](#as_dict) | | | [`get`](#get) | | | [`items`](#items) | | | [`keys`](#keys) | | | [`values`](#values) | | --- ## Method Details ### [as_dict](#as_dict) ### [get](#get) **Signature:** `get(key)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [items](#items) ### [keys](#keys) ### [values](#values) --- ## Reference > Api > Python > Iterables > NestedArcStringIterable --- title: "NestedArcStringIterable" breadcrumb: "Reference / Python / iterables / NestedArcStringIterable" --- # NestedArcStringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedArcStringVecIterable --- title: "NestedArcStringVecIterable" breadcrumb: "Reference / Python / iterables / NestedArcStringVecIterable" --- # NestedArcStringVecIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedBoolIterable --- title: "NestedBoolIterable" breadcrumb: "Reference / Python / iterables / NestedBoolIterable" --- # NestedBoolIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedEventTimeIterable --- title: "NestedEventTimeIterable" breadcrumb: "Reference / Python / iterables / NestedEventTimeIterable" --- # NestedEventTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | ## 
Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Change this nested Iterable of EventTime into a nested Iterable of corresponding UTC DateTimes. | | [`event_id`](#event_id) | Change this nested Iterable of EventTime into a nested Iterable of their associated event ids. | | [`t`](#t) | Change this nested Iterable of EventTime into a nested Iterable of corresponding Unix timestamps in milliseconds. | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Property Details ### [dt](#dt) Change this nested Iterable of EventTime into a nested Iterable of corresponding UTC DateTimes. #### Returns | Type | Description | |------|-------------| | [NestedResultUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedResultUtcDateTimeIterable) | Nested iterable of UTC datetimes for each EventTime. | ### [event_id](#event_id) Change this nested Iterable of EventTime into a nested Iterable of their associated event ids. #### Returns | Type | Description | |------|-------------| | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | Nested iterable of event ids associated to each EventTime. | ### [t](#t) Change this nested Iterable of EventTime into a nested Iterable of corresponding Unix timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [NestedI64Iterable](/docs/reference/api/python/iterables/NestedI64Iterable) | Nested iterable of millisecond timestamps since the Unix epoch for each EventTime. | --- ## Reference > Api > Python > Iterables > NestedGIDGIDIterable --- title: "NestedGIDGIDIterable" breadcrumb: "Reference / Python / iterables / NestedGIDGIDIterable" --- # NestedGIDGIDIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > NestedGIDIterable --- title: "NestedGIDIterable" breadcrumb: "Reference / Python / iterables / NestedGIDIterable" --- # NestedGIDIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > NestedHistoryDateTimeIterable --- title: "NestedHistoryDateTimeIterable" breadcrumb: "Reference / Python / iterables / NestedHistoryDateTimeIterable" --- # NestedHistoryDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect datetimes for each history in each nested iterable. | | [`flatten`](#flatten) | Flatten the nested iterable of history objects into a single list of all contained datetimes. | --- ## Method Details ### [collect](#collect) Collect datetimes for each history in each nested iterable. #### Returns | Type | Description | |------|-------------| | list[list[list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)]]] | UTC datetimes per nested history. | #### Raises | Exception | Description | |-----------|-------------| | `TimeError` | If a timestamp cannot be converted to a datetime. | ### [flatten](#flatten) Flatten the nested iterable of history objects into a single list of all contained datetimes. 
#### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | List of UTC datetimes. | #### Raises | Exception | Description | |-----------|-------------| | `TimeError` | If a timestamp cannot be converted to a datetime. | --- ## Reference > Api > Python > Iterables > NestedHistoryEventIdIterable --- title: "NestedHistoryEventIdIterable" breadcrumb: "Reference / Python / iterables / NestedHistoryEventIdIterable" --- # NestedHistoryEventIdIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect event ids for each history in each nested iterable into a NumPy array. | | [`flatten`](#flatten) | Flatten the nested iterable of history objects into a single NumPy NDArray of all contained event ids. | | [`flattened_list`](#flattened_list) | Flatten the nested iterable of history objects into a single list of all contained event ids. | | [`to_list`](#to_list) | Collect event ids for each history in each nested iterable into a list. | --- ## Method Details ### [collect](#collect) Collect event ids for each history in each nested iterable into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[list[`NDArray[np.uintp]`]] | NumPy NDArray of event ids per nested history. | ### [flatten](#flatten) Flatten the nested iterable of history objects into a single NumPy NDArray of all contained event ids. #### Returns | Type | Description | |------|-------------| | `NDArray[np.uintp]` | NumPy NDArray of event ids. | ### [flattened_list](#flattened_list) Flatten the nested iterable of history objects into a single list of all contained event ids. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of timestamps in milliseconds. | ### [to_list](#to_list) Collect event ids for each history in each nested iterable into a list. #### Returns | Type | Description | |------|-------------| | list[list[list[[int](https://docs.python.org/3/library/functions.html#int)]]] | List of event ids per nested history. | --- ## Reference > Api > Python > Iterables > NestedHistoryIterable --- title: "NestedHistoryIterable" breadcrumb: "Reference / Python / iterables / NestedHistoryIterable" --- # NestedHistoryIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect time entries from each history within each nested iterable. | | [`flatten`](#flatten) | Flatten the nested iterable of history objects into a single list of all contained time entries. | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access nested histories as datetime views. | | [`event_id`](#event_id) | Access nested histories as event id views. | | [`intervals`](#intervals) | Access nested histories as intervals views. | | [`t`](#t) | Access nested histories as timestamp views. | --- ## Method Details ### [collect](#collect) Collect time entries from each history within each nested iterable. #### Returns | Type | Description | |------|-------------| | list[list[list[[EventTime](/docs/reference/api/python/raphtory/EventTime)]]] | Collected entries per nested history. | ### [flatten](#flatten) Flatten the nested iterable of history objects into a single list of all contained time entries. #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | List of time entries. 
| --- ## Property Details ### [dt](#dt) Access nested histories as datetime views. #### Returns | Type | Description | |------|-------------| | [NestedHistoryDateTimeIterable](/docs/reference/api/python/iterables/NestedHistoryDateTimeIterable) | Iterable of iterables of HistoryDateTime objects. | ### [event_id](#event_id) Access nested histories as event id views. #### Returns | Type | Description | |------|-------------| | [NestedHistoryEventIdIterable](/docs/reference/api/python/iterables/NestedHistoryEventIdIterable) | Iterable of iterables of HistoryEventId objects. | ### [intervals](#intervals) Access nested histories as intervals views. #### Returns | Type | Description | |------|-------------| | [NestedIntervalsIterable](/docs/reference/api/python/iterables/NestedIntervalsIterable) | Iterable of iterables of Intervals objects. | ### [t](#t) Access nested histories as timestamp views. #### Returns | Type | Description | |------|-------------| | [NestedHistoryTimestampIterable](/docs/reference/api/python/iterables/NestedHistoryTimestampIterable) | Iterable of iterables of HistoryTimestamp objects. | --- ## Reference > Api > Python > Iterables > NestedHistoryTimestampIterable --- title: "NestedHistoryTimestampIterable" breadcrumb: "Reference / Python / iterables / NestedHistoryTimestampIterable" --- # NestedHistoryTimestampIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect timestamps for each history in each nested iterable into a NumPy array. | | [`flatten`](#flatten) | Flatten the nested iterable of history objects into a single NumPy NDArray of all contained timestamps. | | [`flattened_list`](#flattened_list) | Flatten the nested iterable of history objects into a single list of all contained timestamps. | | [`to_list`](#to_list) | Collect timestamps for each history in each nested iterable into a list. | --- ## Method Details ### [collect](#collect) Collect timestamps for each history in each nested iterable into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[list[`NDArray[np.int64]`]] | NumPy NDArray of timestamps in milliseconds per nested history. | ### [flatten](#flatten) Flatten the nested iterable of history objects into a single NumPy NDArray of all contained timestamps. #### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | NumPy NDArray of timestamps in milliseconds. | ### [flattened_list](#flattened_list) Flatten the nested iterable of history objects into a single list of all contained timestamps. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of timestamps in milliseconds. | ### [to_list](#to_list) Collect timestamps for each history in each nested iterable into a list. #### Returns | Type | Description | |------|-------------| | list[list[list[[int](https://docs.python.org/3/library/functions.html#int)]]] | List of timestamps in milliseconds per nested history. 
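
The nested variants arise from two-level views, for example a per-node list of neighbours. The sketch below assumes that `g.nodes.neighbours.history()` yields a NestedHistoryIterable and therefore this timestamp view; that accessor chain is an assumption, not part of the reference above.

```python
# Hedged sketch: nested (node -> neighbour -> history) timestamp access.
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")
g.add_edge(3, "c", "a")

nested = g.nodes.neighbours.history()  # assumed to yield a NestedHistoryIterable

per_neighbour = nested.t.collect()     # list[list[NDArray[int64]]]
flat = nested.t.flatten()              # one NDArray[int64] with every timestamp
as_lists = nested.t.to_list()          # plain nested Python lists instead of arrays
print(len(per_neighbour), flat.shape, as_lists)
```
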
| --- ## Reference > Api > Python > Iterables > NestedI64Iterable --- title: "NestedI64Iterable" breadcrumb: "Reference / Python / iterables / NestedI64Iterable" --- # NestedI64Iterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Iterables > NestedI64VecIterable --- title: "NestedI64VecIterable" breadcrumb: "Reference / Python / iterables / NestedI64VecIterable" --- # NestedI64VecIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedIntervalsIterable --- title: "NestedIntervalsIterable" breadcrumb: "Reference / Python / iterables / NestedIntervalsIterable" --- # NestedIntervalsIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect intervals between each nested history's consecutive timestamps in milliseconds into a NumPy array. | | [`flatten`](#flatten) | Collect intervals between each nested history's consecutive timestamps in milliseconds into a single NumPy array. | | [`flattened_list`](#flattened_list) | Collect intervals between each nested history's consecutive timestamps in milliseconds into a single list. | | [`to_list`](#to_list) | Collect intervals between each nested history's consecutive timestamps in milliseconds into a list. | --- ## Method Details ### [collect](#collect) Collect intervals between each nested history's consecutive timestamps in milliseconds into a NumPy array. #### Returns | Type | Description | |------|-------------| | list[list[`NDArray[np.int64]`]] | NumPy NDArray of intervals per nested history. | ### [flatten](#flatten) Collect intervals between each nested history's consecutive timestamps in milliseconds into a single NumPy array. #### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | NumPy NDArray of intervals. | ### [flattened_list](#flattened_list) Collect intervals between each nested history's consecutive timestamps in milliseconds into a single list. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of intervals. | ### [to_list](#to_list) Collect intervals between each nested history's consecutive timestamps in milliseconds into a list. #### Returns | Type | Description | |------|-------------| | list[list[list[[int](https://docs.python.org/3/library/functions.html#int)]]] | List of intervals per nested history. 
| --- ## Reference > Api > Python > Iterables > NestedOptionArcStringIterable --- title: "NestedOptionArcStringIterable" breadcrumb: "Reference / Python / iterables / NestedOptionArcStringIterable" --- # NestedOptionArcStringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedOptionEventTimeIterable --- title: "NestedOptionEventTimeIterable" breadcrumb: "Reference / Python / iterables / NestedOptionEventTimeIterable" --- # NestedOptionEventTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Change this nested Iterable of Optional[EventTime] into a nested Iterable of corresponding UTC DateTimes. | | [`event_id`](#event_id) | Change this nested Iterable of Optional[EventTime] into a nested Iterable of their associated event ids. | | [`t`](#t) | Change this nested Iterable of Optional[EventTime] into a nested Iterable of corresponding Unix timestamps in milliseconds. | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Property Details ### [dt](#dt) Change this nested Iterable of Optional[EventTime] into a nested Iterable of corresponding UTC DateTimes. #### Returns | Type | Description | |------|-------------| | [NestedResultOptionUtcDateTimeIterable](/docs/reference/api/python/iterables/NestedResultOptionUtcDateTimeIterable) | Nested iterable of UTC datetimes for each EventTime, if available. | ### [event_id](#event_id) Change this nested Iterable of Optional[EventTime] into a nested Iterable of their associated event ids. #### Returns | Type | Description | |------|-------------| | [NestedOptionUsizeIterable](/docs/reference/api/python/iterables/NestedOptionUsizeIterable) | Nested iterable of event ids associated to each EventTime, if available. | ### [t](#t) Change this nested Iterable of Optional[EventTime] into a nested Iterable of corresponding Unix timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [NestedOptionI64Iterable](/docs/reference/api/python/iterables/NestedOptionI64Iterable) | Nested iterable of millisecond timestamps since the Unix epoch for each EventTime, if available. 
| --- ## Reference > Api > Python > Iterables > NestedOptionI64Iterable --- title: "NestedOptionI64Iterable" breadcrumb: "Reference / Python / iterables / NestedOptionI64Iterable" --- # NestedOptionI64Iterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > NestedOptionUsizeIterable --- title: "NestedOptionUsizeIterable" breadcrumb: "Reference / Python / iterables / NestedOptionUsizeIterable" --- # NestedOptionUsizeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > NestedResultOptionUtcDateTimeIterable --- title: "NestedResultOptionUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / NestedResultOptionUtcDateTimeIterable" --- # NestedResultOptionUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedResultUtcDateTimeIterable --- title: "NestedResultUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / NestedResultUtcDateTimeIterable" --- # NestedResultUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedStringIterable --- title: "NestedStringIterable" breadcrumb: "Reference / Python / iterables / NestedStringIterable" --- # NestedStringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedUsizeIterable --- title: "NestedUsizeIterable" breadcrumb: "Reference / Python / iterables / NestedUsizeIterable" --- # NestedUsizeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Iterables > NestedUtcDateTimeIterable --- title: "NestedUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / NestedUtcDateTimeIterable" --- # NestedUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > NestedVecUtcDateTimeIterable --- title: "NestedVecUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / NestedVecUtcDateTimeIterable" --- # NestedVecUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > OptionArcStringIterable --- title: "OptionArcStringIterable" breadcrumb: "Reference / Python / iterables / OptionArcStringIterable" --- # OptionArcStringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > OptionEventTimeIterable --- 
title: "OptionEventTimeIterable" breadcrumb: "Reference / Python / iterables / OptionEventTimeIterable" --- # OptionEventTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Change this Iterable of Optional[EventTime] into an Iterable of corresponding UTC DateTimes. | | [`event_id`](#event_id) | Change this Iterable of Optional[EventTime] into an Iterable of their associated event ids. | | [`t`](#t) | Change this Iterable of Optional[EventTime] into an Iterable of corresponding Unix timestamps in milliseconds. | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Property Details ### [dt](#dt) Change this Iterable of Optional[EventTime] into an Iterable of corresponding UTC DateTimes. #### Returns | Type | Description | |------|-------------| | [ResultOptionUtcDateTimeIterable](/docs/reference/api/python/iterables/ResultOptionUtcDateTimeIterable) | Iterable of UTC datetimes for each EventTime, if available. | ### [event_id](#event_id) Change this Iterable of Optional[EventTime] into an Iterable of their associated event ids. #### Returns | Type | Description | |------|-------------| | [OptionUsizeIterable](/docs/reference/api/python/iterables/OptionUsizeIterable) | Iterable of event ids associated to each EventTime, if available. | ### [t](#t) Change this Iterable of Optional[EventTime] into an Iterable of corresponding Unix timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [OptionI64Iterable](/docs/reference/api/python/iterables/OptionI64Iterable) | Iterable of millisecond timestamps since the Unix epoch for each EventTime, if available. 
| --- ## Reference > Api > Python > Iterables > OptionI64Iterable --- title: "OptionI64Iterable" breadcrumb: "Reference / Python / iterables / OptionI64Iterable" --- # OptionI64Iterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > OptionUsizeIterable --- title: "OptionUsizeIterable" breadcrumb: "Reference / Python / iterables / OptionUsizeIterable" --- # OptionUsizeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`min`](#min) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [min](#min) --- ## Reference > Api > Python > Iterables > OptionUtcDateTimeIterable --- title: "OptionUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / OptionUtcDateTimeIterable" --- # OptionUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > OptionVecUtcDateTimeIterable --- title: "OptionVecUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / OptionVecUtcDateTimeIterable" --- # OptionVecUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > PyNestedPropsIterable --- title: "PyNestedPropsIterable" breadcrumb: "Reference / Python / iterables / PyNestedPropsIterable" --- # PyNestedPropsIterable ## Methods | Method | Description | |--------|-------------| | [`as_dict`](#as_dict) | Convert properties view to a dict. | | [`get`](#get) | Get property value. | | [`items`](#items) | Get a list of key-value pairs. | | [`keys`](#keys) | Get the names for all properties. | | [`values`](#values) | Get the values of the properties. | ## Properties | Property | Description | |----------|-------------| | [`temporal`](#temporal) | Get a view of the temporal properties only. | --- ## Method Details ### [as_dict](#as_dict) Convert properties view to a dict. ### [get](#get) **Signature:** `get(key)` Get property value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property. | #### Returns | Type | Description | |------|-------------| | `PyPropValueListList` | | ### [items](#items) Get a list of key-value pairs. ### [keys](#keys) Get the names for all properties. #### Returns | Type | Description | |------|-------------| | list[`Str`] | | ### [values](#values) Get the values of the properties. #### Returns | Type | Description | |------|-------------| | list[list[list[[PropValue](/docs/reference/api/python/typing)]]] | | --- ## Property Details ### [temporal](#temporal) Get a view of the temporal properties only. 
#### Returns | Type | Description | |------|-------------| | list[list[`temporalprop`]] | | --- ## Reference > Api > Python > Iterables > ResultOptionUtcDateTimeIterable --- title: "ResultOptionUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / ResultOptionUtcDateTimeIterable" --- # ResultOptionUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > ResultUtcDateTimeIterable --- title: "ResultUtcDateTimeIterable" breadcrumb: "Reference / Python / iterables / ResultUtcDateTimeIterable" --- # ResultUtcDateTimeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > StringIterable --- title: "StringIterable" breadcrumb: "Reference / Python / iterables / StringIterable" --- # StringIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | --- ## Method Details ### [collect](#collect) --- ## Reference > Api > Python > Iterables > U64Iterable --- title: "U64Iterable" breadcrumb: "Reference / Python / iterables / U64Iterable" --- # U64Iterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Iterables > UsizeIterable --- title: "UsizeIterable" breadcrumb: "Reference / Python / iterables / UsizeIterable" --- # UsizeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Node_state > DegreeView --- title: "DegreeView" breadcrumb: "Reference / Python / node_state / DegreeView" --- # DegreeView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`mean`](#mean) | mean of values over all nodes | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`sum`](#sum) | sum of values over all nodes | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | 
|------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [mean](#mean) mean of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | mean value | ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[int]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. 
| #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The sorted node state | ### [sum](#sum) sum of values over all nodes #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | int: the sum | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[int]` | Iterator over values | --- ## Reference > Api > Python > Node_state > EarliestDateTimeView --- title: "EarliestDateTimeView" breadcrumb: "Reference / Python / node_state / EarliestDateTimeView" --- # EarliestDateTimeView A lazy view over EarliestDateTime values for each node. ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all DateTime values and return the result as a list | | [`collect_valid`](#collect_valid) | Compute all DateTime values and return the valid results as a list. Conversion errors and empty values are ignored | | [`compute`](#compute) | Compute all DateTime values and return the result as a NodeState. Fails if any DateTime error is encountered. | | [`compute_valid`](#compute_valid) | Compute all values and only return the valid results as a NodeState. DateTime errors are ignored. | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over DateTimes | | [`items_valid`](#items_valid) | Iterate over valid DateTimes only. Ignore error and None values. | | [`iter_valid`](#iter_valid) | Returns an iterator over all valid DateTime values. Conversion errors and empty values are ignored | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value. Note that 'None' values will always come after valid DateTime values | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id. Fails if any DateTime error is encountered. | | [`sorted_by_id_valid`](#sorted_by_id_valid) | Sort only non-error DateTimes by node id. DateTime errors are ignored. 
| | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over DateTimes | | [`values_valid`](#values_valid) | Iterate over valid DateTime values only. Ignore error and None values. | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k smallest values as a node state | ### [collect](#collect) Compute all DateTime values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional] | all values as a list | ### [collect_valid](#collect_valid) Compute all DateTime values and return the valid results as a list. Conversion errors and empty values are ignored #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | all values as a list | ### [compute](#compute) Compute all DateTime values and return the result as a NodeState. Fails if any DateTime error is encountered. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | the computed `NodeState` | ### [compute_valid](#compute_valid) Compute all values and only return the valid results as a NodeState. DateTime errors are ignored. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=...)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | `...` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over DateTimes ### [items_valid](#items_valid) Iterate over valid DateTimes only. Ignore error and None values. ### [iter_valid](#iter_valid) Returns an iterator over all valid DateTime values. Conversion errors and empty values are ignored #### Returns | Type | Description | |------|-------------| | `Iterator[datetime]` | Valid datetime values. 
| ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The median value or `None` if empty | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value. Note that 'None' values will always come after valid DateTime values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id. Fails if any DateTime error is encountered. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The sorted node state | ### [sorted_by_id_valid](#sorted_by_id_valid) Sort only non-error DateTimes by node id. DateTime errors are ignored. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | A Pandas DataFrame. | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k largest values as a node state | ### [values](#values) Iterate over DateTimes #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[datetime]]` | Iterator over datetimes | ### [values_valid](#values_valid) Iterate over valid DateTime values only. Ignore error and None values. 
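
As a sketch of the valid-only accessors (assuming `g.nodes.earliest_time` returns an `EarliestTimeView` whose `dt` property yields this view; see the `EarliestTimeView` page):

```python
from raphtory import Graph

# Tiny example graph; integer timestamps are milliseconds since the Unix epoch.
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

# Assumption: g.nodes.earliest_time returns an EarliestTimeView and its .dt
# property yields this EarliestDateTimeView (see the EarliestTimeView page).
earliest_dt = g.nodes.earliest_time.dt

print(earliest_dt.collect())        # keeps None/error slots
print(earliest_dt.collect_valid())  # drops conversion errors and empty values

for dt in earliest_dt.values_valid():  # iterate over valid datetimes only
    print(dt)
```
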
#### Returns | Type | Description | |------|-------------| | `Iterator[datetime]` | Iterator over values | --- ## Reference > Api > Python > Node_state > EarliestEventIdView --- title: "EarliestEventIdView" breadcrumb: "Reference / Python / node_state / EarliestEventIdView" --- # EarliestEventIdView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The median value or `None` if empty | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values.
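
As a sketch (assuming `g.nodes.earliest_time.event_id` yields this view, per the `event_id` property on the `EarliestTimeView` page), the per-node event ids can be materialised as a DataFrame:

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

# Assumption: g.nodes.earliest_time.event_id yields this EarliestEventIdView
# (see the event_id property on the EarliestTimeView page).
event_ids = g.nodes.earliest_time.event_id

df = event_ids.to_df()  # two columns: "node" and "value"
print(df)
```
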
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > EarliestTimestampView --- title: "EarliestTimestampView" breadcrumb: "Reference / Python / node_state / EarliestTimestampView" --- # EarliestTimestampView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
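
For example, the per-node earliest timestamps can be aggregated or materialised as a DataFrame (a sketch assuming `g.nodes.earliest_time.t` yields this view; see the `t` property on the `EarliestTimeView` page):

```python
from raphtory import Graph

g = Graph()
g.add_edge(10, "a", "b")
g.add_edge(20, "b", "c")

# Assumption: g.nodes.earliest_time.t yields this EarliestTimestampView
# (see the t property on the EarliestTimeView page).
earliest_ts = g.nodes.earliest_time.t

print(earliest_ts.min(), earliest_ts.max())  # smallest and largest per-node earliest timestamp
print(earliest_ts.to_df())                   # "node"/"value" DataFrame
```
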
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > EarliestTimeView --- title: "EarliestTimeView" breadcrumb: "Reference / Python / node_state / EarliestTimeView" --- # EarliestTimeView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access earliest times as UTC DateTimes. | | [`event_id`](#event_id) | Access the event ids of the earliest times. | | [`t`](#t) | Access earliest times as timestamps (milliseconds since the Unix epoch). 
| --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[EventTime]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. 
| #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[EventTime]]` | Iterator over values | --- ## Property Details ### [dt](#dt) Access earliest times as UTC DateTimes. #### Returns | Type | Description | |------|-------------| | [EarliestDateTimeView](/docs/reference/api/python/node_state/EarliestDateTimeView) | A lazy view over the earliest times for each node as datetimes. | ### [event_id](#event_id) Access the event ids of the earliest times. #### Returns | Type | Description | |------|-------------| | [EarliestEventIdView](/docs/reference/api/python/node_state/EarliestEventIdView) | A lazy view over the event ids of the earliest times for each node. | ### [t](#t) Access earliest times as timestamps (milliseconds since the Unix epoch). #### Returns | Type | Description | |------|-------------| | [EarliestTimestampView](/docs/reference/api/python/node_state/EarliestTimestampView) | A lazy view over the earliest times for each node as timestamps. 
| --- ## Reference > Api > Python > Node_state > EdgeHistoryCountView --- title: "EdgeHistoryCountView" breadcrumb: "Reference / Python / node_state / EdgeHistoryCountView" --- # EdgeHistoryCountView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`mean`](#mean) | mean of values over all nodes | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`sum`](#sum) | sum of values over all nodes | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [mean](#mean) mean of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | mean value | ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[int]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | The sorted node state | ### [sum](#sum) sum of values over all nodes #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | int: the sum | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[int]` | Iterator over values | --- ## Reference > Api > Python > Node_state > HistoryDateTimeView --- title: "HistoryDateTimeView" breadcrumb: "Reference / Python / node_state / HistoryDateTimeView" --- # HistoryDateTimeView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryDateTime](/docs/reference/api/python/node_state/NodeStateHistoryDateTime) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryDateTime](/docs/reference/api/python/node_state/NodeStateHistoryDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
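
For example (a sketch assuming `g.nodes.history()` returns a `HistoryView` whose `dt` property yields this view; see the `HistoryView` page below):

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(5, "a", "b")
g.add_edge(9, "b", "c")

# Assumption: g.nodes.history() returns a HistoryView and its .dt property
# yields this HistoryDateTimeView (see the HistoryView page).
history_dt = g.nodes.history().dt

print(history_dt.collect())  # one HistoryDateTime per node
print(history_dt.to_df())    # "node"/"value" DataFrame
```
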
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryDateTime]` | Iterator over values | --- ## Reference > Api > Python > Node_state > HistoryEventIdView --- title: "HistoryEventIdView" breadcrumb: "Reference / Python / node_state / HistoryEventIdView" --- # HistoryEventIdView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryEventId](/docs/reference/api/python/node_state/NodeStateHistoryEventId) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryEventId](/docs/reference/api/python/node_state/NodeStateHistoryEventId) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryEventId]` | Iterator over values | --- ## Reference > Api > Python > Node_state > HistoryTimestampView --- title: "HistoryTimestampView" breadcrumb: "Reference / Python / node_state / HistoryTimestampView" --- # HistoryTimestampView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryTimestamp](/docs/reference/api/python/node_state/NodeStateHistoryTimestamp) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryTimestamp](/docs/reference/api/python/node_state/NodeStateHistoryTimestamp) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryTimestamp]` | Iterator over values | --- ## Reference > Api > Python > Node_state > HistoryView --- title: "HistoryView" breadcrumb: "Reference / Python / node_state / HistoryView" --- # HistoryView A lazy view over History objects for each node. 
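
A short sketch of typical usage, assuming `g.nodes.history()` returns this view and using only the methods and properties listed below:

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(5, "a", "b")
g.add_edge(9, "b", "c")

# Assumption: g.nodes.history() returns this HistoryView.
history = g.nodes.history()

print(history.collect())                  # one History object per node
print(history.earliest_time().collect())  # earliest time entry per node
print(history.t.to_df())                  # per-node timestamps as a DataFrame
print(history.flatten())                  # all time entries merged into one ordered History
```
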
## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Compute all History objects and return the result as a list | | [`collect_time_entries`](#collect_time_entries) | Compute all History objects and return the contained time entries as a sorted list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`earliest_time`](#earliest_time) | Get the earliest time entry. | | [`flatten`](#flatten) | Flattens all history objects into a single history with all time entries ordered. | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over History objects | | [`latest_time`](#latest_time) | Get the latest time entry. | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over History objects | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access history events as UTC datetimes. | | [`event_id`](#event_id) | Access the unique event id of each time entry. | | [`intervals`](#intervals) | Access the intervals between consecutive timestamps in milliseconds. | | [`t`](#t) | Access history events as timestamps (milliseconds since the Unix epoch). | --- ## Method Details ### [collect](#collect) Compute all History objects and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[History](/docs/reference/api/python/raphtory/History)] | all History objects as a list | ### [collect_time_entries](#collect_time_entries) Compute all History objects and return the contained time entries as a sorted list #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | all time entries as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateHistory](/docs/reference/api/python/node_state/NodeStateHistory) | the computed `NodeState` | ### [earliest_time](#earliest_time) Get the earliest time entry. #### Returns | Type | Description | |------|-------------| | [EarliestTimeView](/docs/reference/api/python/node_state/EarliestTimeView) | A lazy view over the earliest time of each node as an EventTime. | ### [flatten](#flatten) Flattens all history objects into a single history with all time entries ordered. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | a history object containing all time entries | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [History](/docs/reference/api/python/raphtory/History), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History), optional | the History object for the node or the default value | ### [items](#items) Iterate over History objects ### [latest_time](#latest_time) Get the latest time entry. #### Returns | Type | Description | |------|-------------| | [LatestTimeView](/docs/reference/api/python/node_state/LatestTimeView) | A lazy view over the latest time of each node as an EventTime. 
| ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistory](/docs/reference/api/python/node_state/NodeStateHistory) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | A Pandas DataFrame. | ### [values](#values) Iterate over History objects #### Returns | Type | Description | |------|-------------| | `Iterator[History]` | Iterator over histories | --- ## Property Details ### [dt](#dt) Access history events as UTC datetimes. #### Returns | Type | Description | |------|-------------| | [HistoryDateTimeView](/docs/reference/api/python/node_state/HistoryDateTimeView) | A lazy view over HistoryDateTime objects for each node. | ### [event_id](#event_id) Access the unique event id of each time entry. #### Returns | Type | Description | |------|-------------| | [HistoryEventIdView](/docs/reference/api/python/node_state/HistoryEventIdView) | A lazy view over HistoryEventId objects for each node. | ### [intervals](#intervals) Access the intervals between consecutive timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [IntervalsView](/docs/reference/api/python/node_state/IntervalsView) | A lazy view over Intervals objects for each node. | ### [t](#t) Access history events as timestamps (milliseconds since the Unix epoch). #### Returns | Type | Description | |------|-------------| | [HistoryTimestampView](/docs/reference/api/python/node_state/HistoryTimestampView) | A lazy view over HistoryTimestamp objects for each node. 
| --- ## Reference > Api > Python > Node_state > IdView --- title: "IdView" breadcrumb: "Reference / Python / node_state / IdView" --- # IdView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[GID](/docs/reference/api/python/typing)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [GID](/docs/reference/api/python/typing), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | The median value or `None` if empty | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values.
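All of the `*View` classes in this module share this access pattern; a quick illustration for `IdView`, assuming the view is obtained from `g.nodes.id`:

```python
# Sketch of the common lazy-view operations on IdView, assuming `g.nodes.id`
# returns the view documented above.
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

ids = g.nodes.id
print(ids.collect())                     # materialise every node id as a list
print(ids.get("a"))                      # single-node lookup (None if missing)
print(ids.sorted(reverse=True).to_df())  # NodeStateGID -> pandas DataFrame
print(list(ids.top_k(2).items()))        # the two largest ids with their nodes
```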
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[GID]` | Iterator over values | --- ## Reference > Api > Python > Node_state > IntervalsFloatView --- title: "IntervalsFloatView" breadcrumb: "Reference / Python / node_state / IntervalsFloatView" --- # IntervalsFloatView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[float](https://docs.python.org/3/library/functions.html#float), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[float]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[float]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > IntervalsIntegerView --- title: "IntervalsIntegerView" breadcrumb: "Reference / Python / node_state / IntervalsIntegerView" --- # IntervalsIntegerView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > IntervalsView --- title: "IntervalsView" breadcrumb: "Reference / Python / node_state / IntervalsView" --- # IntervalsView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Calculate the maximum interval in milliseconds for each node. 
| | [`mean`](#mean) | Calculate the mean interval in milliseconds for each node. | | [`median`](#median) | Calculate the median interval in milliseconds for each node. | | [`min`](#min) | Calculate the minimum interval in milliseconds for each node. | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[Intervals](/docs/reference/api/python/raphtory/Intervals)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateIntervals](/docs/reference/api/python/node_state/NodeStateIntervals) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Intervals](/docs/reference/api/python/raphtory/Intervals), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [Intervals](/docs/reference/api/python/raphtory/Intervals), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Calculate the maximum interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [IntervalsIntegerView](/docs/reference/api/python/node_state/IntervalsIntegerView) | A lazy view over the maximum interval between consecutive timestamps for each node. The maximum is `None` if there are no intervals. | ### [mean](#mean) Calculate the mean interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [IntervalsFloatView](/docs/reference/api/python/node_state/IntervalsFloatView) | A lazy view over the mean interval between consecutive timestamps for each node. The mean is `None` if there are no intervals. | ### [median](#median) Calculate the median interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [IntervalsIntegerView](/docs/reference/api/python/node_state/IntervalsIntegerView) | A lazy view over the median interval between consecutive timestamps for each node. The median is `None` if there are no intervals. | ### [min](#min) Calculate the minimum interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [IntervalsIntegerView](/docs/reference/api/python/node_state/IntervalsIntegerView) | A lazy view over the minimum interval between consecutive timestamps for each node. The minimum is `None` if there are no intervals. | ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateIntervals](/docs/reference/api/python/node_state/NodeStateIntervals) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values.
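The interval statistics above can be combined to profile how regularly each node is updated. A sketch, assuming the `IntervalsView` is reached via `g.nodes.history.intervals` (the accessor chain is an assumption):

```python
# Sketch: summarising gaps between each node's updates. The accessor
# `g.nodes.history.intervals` is assumed; min/mean/max are documented above.
from raphtory import Graph

g = Graph()
for t in (1, 4, 10):
    g.add_edge(t, "a", "b")
g.add_node(7, "c")                      # single event -> no intervals -> None

gaps = g.nodes.history.intervals
print(gaps.min().collect())             # smallest gap per node (ms), None for "c"
print(gaps.mean().to_df())              # mean gap per node as a DataFrame
print(gaps.max().max_item())            # the node with the longest quiet period
```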
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Intervals]` | Iterator over values | --- ## Reference > Api > Python > Node_state > LatestDateTimeView --- title: "LatestDateTimeView" breadcrumb: "Reference / Python / node_state / LatestDateTimeView" --- # LatestDateTimeView A lazy view over LatestDateTime values for each node. ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all DateTime values and return the result as a list | | [`collect_valid`](#collect_valid) | Compute all DateTime values and return the valid results as a list. Conversion errors and empty values are ignored | | [`compute`](#compute) | Compute all DateTime values and return the result as a NodeState. Fails if any DateTime error is encountered. | | [`compute_valid`](#compute_valid) | Compute all DateTime values and only return the valid results as a NodeState. DateTime errors are ignored. | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`items_valid`](#items_valid) | Iterate over valid DateTime items only. Ignore error and None values. | | [`iter_valid`](#iter_valid) | Returns an iterator over all valid DateTime values. Conversion errors and empty values are ignored | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value. Note that 'None' values will always come after valid DateTime values | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id. Fails if any DateTime error is encountered. | | [`sorted_by_id_valid`](#sorted_by_id_valid) | Sort only non-error DateTimes by node id. DateTime errors are ignored. | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over DateTime values | | [`values_valid`](#values_valid) | Iterate over valid DateTime values only. Ignore error and None values. | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k smallest values as a node state | ### [collect](#collect) Compute all DateTime values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional] | all values as a list | ### [collect_valid](#collect_valid) Compute all DateTime values and return the valid results as a list.
Conversion errors and empty values are ignored #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | all values as a list | ### [compute](#compute) Compute all DateTime values and return the result as a NodeState. Fails if any DateTime error is encountered. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | the computed `NodeState` | ### [compute_valid](#compute_valid) Compute all DateTime values and only return the valid results as a NodeState. DateTime errors are ignored. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [items_valid](#items_valid) Iterate over valid DateTime items only. Ignore error and None values. ### [iter_valid](#iter_valid) Returns an iterator over all valid DateTime values. Conversion errors and empty values are ignored #### Returns | Type | Description | |------|-------------| | `Iterator[datetime]` | Valid DateTime values. | ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The median value or `None` if empty | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value. Note that 'None' values will always come after valid DateTime values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. 
| #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id. Fails if any DateTime error is encountered. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The sorted node state | ### [sorted_by_id_valid](#sorted_by_id_valid) Sort only non-error DateTimes by node id. DateTime errors are ignored. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | A Pandas DataFrame. | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k largest values as a node state | ### [values](#values) Iterate over DateTime values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[datetime]]` | Iterator over values | ### [values_valid](#values_valid) Iterate over valid DateTime values only. Ignore error and None values. 
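The `*_valid` variants make it easy to skip nodes whose datetime cannot be produced. A sketch, assuming the view is reached via `g.nodes.latest_time.dt` (see `LatestTimeView` below for the `dt` property):

```python
# Sketch: None-aware access to per-node datetimes. The accessor
# `g.nodes.latest_time.dt` is assumed (see LatestTimeView below).
from raphtory import Graph

g = Graph()
g.add_edge(1_700_000_000_000, "a", "b")   # epoch-millisecond timestamp

latest_dt = g.nodes.latest_time.dt
print(latest_dt.collect())        # one entry per node; may contain None
print(latest_dt.collect_valid())  # drops None and unconvertible values
print(latest_dt.sorted())         # None values sort after valid datetimes
```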
#### Returns | Type | Description | |------|-------------| | `Iterator[datetime]` | Iterator over values | --- ## Reference > Api > Python > Node_state > LatestEventIdView --- title: "LatestEventIdView" breadcrumb: "Reference / Python / node_state / LatestEventIdView" --- # LatestEventIdView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > LatestTimestampView --- title: "LatestTimestampView" breadcrumb: "Reference / Python / node_state / LatestTimestampView" --- # LatestTimestampView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > LatestTimeView --- title: "LatestTimeView" breadcrumb: "Reference / Python / node_state / LatestTimeView" --- # LatestTimeView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access latest times as UTC DateTimes. | | [`event_id`](#event_id) | Access the event ids of the latest times. | | [`t`](#t) | Access latest times as timestamps (milliseconds since the Unix epoch). 
| --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. 
| #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Property Details ### [dt](#dt) Access latest times as UTC DateTimes. #### Returns | Type | Description | |------|-------------| | [LatestDateTimeView](/docs/reference/api/python/node_state/LatestDateTimeView) | A lazy view over the latest times for each node as datetimes. | ### [event_id](#event_id) Access the event ids of the latest times. #### Returns | Type | Description | |------|-------------| | [LatestEventIdView](/docs/reference/api/python/node_state/LatestEventIdView) | A lazy view over the event ids of the latest times for each node. | ### [t](#t) Access latest times as timestamps (milliseconds since the Unix epoch). #### Returns | Type | Description | |------|-------------| | [LatestTimestampView](/docs/reference/api/python/node_state/LatestTimestampView) | A lazy view over the latest times for each node as timestamps. 
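A short sketch of typical use, assuming the `LatestTimeView` is exposed as `g.nodes.latest_time`:

```python
# Sketch: most recent activity per node, assuming `g.nodes.latest_time`
# returns the LatestTimeView documented above.
from raphtory import Graph

g = Graph()
g.add_edge(10, "a", "b")
g.add_edge(50, "b", "c")

latest = g.nodes.latest_time
print(latest.max_item())   # the most recently updated node and its timestamp
print(latest.median())     # median latest-update time across all nodes
print(latest.to_df())      # "node"/"value" DataFrame for further analysis
```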
| --- ## Reference > Api > Python > Node_state > NameView --- title: "NameView" breadcrumb: "Reference / Python / node_state / NameView" --- # NameView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[str]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
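A quick sketch using node names, assuming the view is obtained from `g.nodes.name`:

```python
# Sketch: node names as a lazy view; `g.nodes.name` is assumed to return
# the NameView documented above.
from raphtory import Graph

g = Graph()
g.add_edge(1, "alice", "bob")
g.add_edge(2, "bob", "carol")

names = g.nodes.name
print(names.collect())                 # every node name (order not guaranteed)
print(names.sorted())                  # alphabetically sorted node state
print(names.get("alice", default=""))  # lookup with a fallback
print(names.to_df())                   # "node"/"value" DataFrame
```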
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[str]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeGroups --- title: "NodeGroups" breadcrumb: "Reference / Python / node_state / NodeGroups" --- # NodeGroups ## Methods | Method | Description | |--------|-------------| | [`group`](#group) | Get group nodes and value | | [`group_subgraph`](#group_subgraph) | Get group as subgraph | | [`iter_subgraphs`](#iter_subgraphs) | Iterate over group subgraphs | --- ## Method Details ### [group](#group) **Signature:** `group(index)` Get group nodes and value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `index` | [int](https://docs.python.org/3/library/functions.html#int) | - | the group index | ### [group_subgraph](#group_subgraph) **Signature:** `group_subgraph(index)` Get group as subgraph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `index` | [int](https://docs.python.org/3/library/functions.html#int) | - | the group index | ### [iter_subgraphs](#iter_subgraphs) Iterate over group subgraphs --- ## Reference > Api > Python > Node_state > NodeLayout --- title: "NodeLayout" breadcrumb: "Reference / Python / node_state / NodeLayout" --- # NodeLayout ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | list[[float](https://docs.python.org/3/library/functions.html#float)], optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | list[[float](https://docs.python.org/3/library/functions.html#float)] | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeLayout](/docs/reference/api/python/node_state/NodeLayout) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
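The `NodeGroups` object documented above is what the `groups` method on the value views returns. A sketch of splitting a graph by a shared node value; `g.nodes.node_type.groups()` is an assumed way to obtain it, and any view above that documents `groups` behaves the same way:

```python
# Sketch: turning groups of nodes into subgraphs. `g.nodes.node_type.groups()`
# is an assumed accessor chain, used only for illustration.
from raphtory import Graph

g = Graph()
g.add_node(1, "a", node_type="account")
g.add_node(1, "b", node_type="account")
g.add_node(2, "x", node_type="device")

groups = g.nodes.node_type.groups()
print(groups.group(0))                  # the nodes in one group and their shared value
for subgraph in groups.iter_subgraphs():
    print(subgraph.count_nodes())       # analyse each group as its own graph
```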
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[list[float]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateF64 --- title: "NodeStateF64" breadcrumb: "Reference / Python / node_state / NodeStateF64" --- # NodeStateF64 ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`mean`](#mean) | mean of values over all nodes | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`sum`](#sum) | sum of values over all nodes | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [float](https://docs.python.org/3/library/functions.html#float), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [mean](#mean) mean of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | mean value | ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float), optional | The median value or `None` if empty | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | The sorted node state | ### [sum](#sum) sum of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | the sum of values over all nodes | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values.
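`NodeStateF64` is the shape in which many numeric algorithm results arrive. A sketch, assuming `algorithms.pagerank` returns a node state with the methods above (older releases wrapped results differently):

```python
# Sketch: working with algorithm scores as a NodeStateF64-style result.
# The return type of `algorithms.pagerank` is assumed here.
from raphtory import Graph, algorithms

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")
g.add_edge(3, "c", "a")

scores = algorithms.pagerank(g)
print(scores.sum(), scores.mean())    # aggregates over all nodes
print(scores.top_k(2).to_df())        # two highest-scoring nodes as a DataFrame
print(scores.get("a", default=0.0))   # single-node lookup with a fallback
```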
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateF64](/docs/reference/api/python/node_state/NodeStateF64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[float]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateF64String --- title: "NodeStateF64String" breadcrumb: "Reference / Python / node_state / NodeStateF64String" --- # NodeStateF64String ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | tuple[[float](https://docs.python.org/3/library/functions.html#float), [str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | the default value. Defaults to None. | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateF64String](/docs/reference/api/python/node_state/NodeStateF64String) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
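As a brief, hedged sketch of the accessors above: `state` is a hypothetical `NodeStateF64String` (one `(float, str)` pair per node) obtained from whichever algorithm produces this type in your pipeline.

```python
# `state` is a hypothetical NodeStateF64String; items() yields (node, value) pairs.
for node, (score, label) in state.items():
    print(node, score, label)

print(state.get("a", default=(0.0, "unknown")))  # lookup with a (float, str) default

df = state.to_df()    # "node" and "value" columns; each value is a (float, str) tuple
print(df.head())
```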
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values --- ## Reference > Api > Python > Node_state > NodeStateGID --- title: "NodeStateGID" breadcrumb: "Reference / Python / node_state / NodeStateGID" --- # NodeStateGID ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [GID](/docs/reference/api/python/typing), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[GID]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateGID](/docs/reference/api/python/node_state/NodeStateGID) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[GID]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateHistory --- title: "NodeStateHistory" breadcrumb: "Reference / Python / node_state / NodeStateHistory" --- # NodeStateHistory A NodeState of History objects for each node. ## Methods | Method | Description | |--------|-------------| | [`collect_time_entries`](#collect_time_entries) | Collect and return all the contained time entries as a sorted list. | | [`earliest_time`](#earliest_time) | Get the earliest time entry of all nodes. | | [`flatten`](#flatten) | Flattens all history objects into a single history object with all time entries ordered. | | [`get`](#get) | Get History object for the node. | | [`items`](#items) | Iterate over items | | [`latest_time`](#latest_time) | Get the latest time entry. 
| | [`nodes`](#nodes) | Iterate over nodes. | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over History objects. | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access history events as UTC datetimes. | | [`event_id`](#event_id) | Access the unique event id of each time entry. | | [`intervals`](#intervals) | Access the intervals between consecutive timestamps in milliseconds. | | [`t`](#t) | Access history events as timestamps (milliseconds since the Unix epoch). | --- ## Method Details ### [collect_time_entries](#collect_time_entries) Collect and return all the contained time entries as a sorted list. #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | All time entries as a list. | ### [earliest_time](#earliest_time) Get the earliest time entry of all nodes. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest event present in any of the nodes' histories. | ### [flatten](#flatten) Flattens all history objects into a single history object with all time entries ordered. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | A history object containing all time entries. | ### [get](#get) **Signature:** `get(node, default=None)` Get History object for the node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [History](/docs/reference/api/python/raphtory/History), optional | `None` | The default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History), optional | The value for the node or the default value. | ### [items](#items) Iterate over items ### [latest_time](#latest_time) Get the latest time entry. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest event present in any of the nodes' histories. | ### [nodes](#nodes) Iterate over nodes. #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes. | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistory](/docs/reference/api/python/node_state/NodeStateHistory) | The sorted node state. | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | A Pandas DataFrame. | ### [values](#values) Iterate over History objects. #### Returns | Type | Description | |------|-------------| | `Iterator[History]` | Iterator over History objects. | --- ## Property Details ### [dt](#dt) Access history events as UTC datetimes. #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryDateTime](/docs/reference/api/python/node_state/NodeStateHistoryDateTime) | A NodeState with the computed HistoryDateTime object for each node. 
| ### [event_id](#event_id) Access the unique event id of each time entry. #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryEventId](/docs/reference/api/python/node_state/NodeStateHistoryEventId) | A NodeState with the computed HistoryEventId object for each node. | ### [intervals](#intervals) Access the intervals between consecutive timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [NodeStateIntervals](/docs/reference/api/python/node_state/NodeStateIntervals) | A NodeState with the computed Intervals object for each node. | ### [t](#t) Access history events as timestamps (milliseconds since the Unix epoch). #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryTimestamp](/docs/reference/api/python/node_state/NodeStateHistoryTimestamp) | A NodeState with the computed HistoryTimestamp object for each node. | --- ## Reference > Api > Python > Node_state > NodeStateHistoryDateTime --- title: "NodeStateHistoryDateTime" breadcrumb: "Reference / Python / node_state / NodeStateHistoryDateTime" --- # NodeStateHistoryDateTime ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryDateTime](/docs/reference/api/python/node_state/NodeStateHistoryDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
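To tie the history-oriented states together, here is a sketch. It assumes per-node histories are exposed as a `NodeStateHistory`, here via `g.nodes.history()` (the exact accessor may differ in your version); the `.dt`, `.t` and `.intervals` properties and the aggregation calls are the ones documented on these pages.

```python
from raphtory import Graph

g = Graph()
g.add_edge(1000, "a", "b")
g.add_edge(5000, "a", "c")
g.add_edge(9000, "b", "c")

hist = g.nodes.history()                 # assumed to return a NodeStateHistory

print(hist.earliest_time(), hist.latest_time())  # earliest/latest event over all nodes
print(hist.collect_time_entries())               # every time entry, sorted

as_dt = hist.dt        # NodeStateHistoryDateTime: events as UTC datetimes
as_ms = hist.t         # NodeStateHistoryTimestamp: events as epoch milliseconds
gaps = hist.intervals  # NodeStateIntervals: gaps between consecutive events (ms)

print(as_dt.get("a"))                 # HistoryDateTime for node "a"
print(gaps.mean().to_df().head())     # mean gap per node (NodeStateOptionF64) as a DataFrame
```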
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryDateTime]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateHistoryEventId --- title: "NodeStateHistoryEventId" breadcrumb: "Reference / Python / node_state / NodeStateHistoryEventId" --- # NodeStateHistoryEventId ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryEventId](/docs/reference/api/python/node_state/NodeStateHistoryEventId) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryEventId]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateHistoryTimestamp --- title: "NodeStateHistoryTimestamp" breadcrumb: "Reference / Python / node_state / NodeStateHistoryTimestamp" --- # NodeStateHistoryTimestamp ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHistoryTimestamp](/docs/reference/api/python/node_state/NodeStateHistoryTimestamp) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[HistoryTimestamp]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateHits --- title: "NodeStateHits" breadcrumb: "Reference / Python / node_state / NodeStateHits" --- # NodeStateHits ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | tuple[[float](https://docs.python.org/3/library/functions.html#float), [float](https://docs.python.org/3/library/functions.html#float)], optional | `None` | the default value. Defaults to None. 
| ### [items](#items) Iterate over items ### [max](#max) Return the maximum value ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Tuple[float, float]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateHits](/docs/reference/api/python/node_state/NodeStateHits) | The k largest values as a node state | ### [values](#values) Iterate over values --- ## Reference > Api > Python > Node_state > NodeStateIntervals --- title: "NodeStateIntervals" breadcrumb: "Reference / Python / node_state / NodeStateIntervals" --- # NodeStateIntervals ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Calculate the maximum interval in milliseconds for each node. | | [`mean`](#mean) | Calculate the mean interval in milliseconds for each node. | | [`median`](#median) | Calculate the median interval in milliseconds for each node. | | [`min`](#min) | Calculate the minimum interval in milliseconds for each node. | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`to_list`](#to_list) | Collect all intervals in milliseconds into a list for each node. 
| | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Intervals](/docs/reference/api/python/raphtory/Intervals), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [Intervals](/docs/reference/api/python/raphtory/Intervals), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Calculate the maximum interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | A NodeState with the computed maximum interval between consecutive timestamps for each node. The maximum is None if the node has no intervals (fewer than two timestamps). | ### [mean](#mean) Calculate the mean interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | A NodeState with the computed mean interval between consecutive timestamps for each node. The mean is None if the node has no intervals (fewer than two timestamps). | ### [median](#median) Calculate the median interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | A NodeState with the computed median interval between consecutive timestamps for each node. The median is None if the node has no intervals (fewer than two timestamps). | ### [min](#min) Calculate the minimum interval in milliseconds for each node. #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | A NodeState with the computed minimum interval between consecutive timestamps for each node. The minimum is None if the node has no intervals (fewer than two timestamps). | ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateIntervals](/docs/reference/api/python/node_state/NodeStateIntervals) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [to_list](#to_list) Collect all intervals in milliseconds into a list for each node. #### Returns | Type | Description | |------|-------------| | list[list[[int](https://docs.python.org/3/library/functions.html#int)]] | List of intervals in milliseconds for each node.
| ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Intervals]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateListDateTime --- title: "NodeStateListDateTime" breadcrumb: "Reference / Python / node_state / NodeStateListDateTime" --- # NodeStateListDateTime ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateListDateTime](/docs/reference/api/python/node_state/NodeStateListDateTime) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)], optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[list[datetime]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateListDateTime](/docs/reference/api/python/node_state/NodeStateListDateTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateListDateTime](/docs/reference/api/python/node_state/NodeStateListDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
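For illustration, `dates` below is a hypothetical `NodeStateListDateTime` (one `list[datetime]` per node, e.g. per-node update datetimes); only methods documented above are used.

```python
from datetime import datetime, timezone

# `dates` is a hypothetical NodeStateListDateTime.
fallback = [datetime(1970, 1, 1, tzinfo=timezone.utc)]
print(dates.get("a", default=fallback))   # list of datetimes for node "a", or the fallback

print(dates.max())                        # the maximum value, or None if empty
print(dates.top_k(3).to_df())             # three largest values as a DataFrame

for node, ds in dates.items():            # iterate (node, list[datetime]) pairs
    print(node, len(ds))
```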
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateListDateTime](/docs/reference/api/python/node_state/NodeStateListDateTime) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[list[datetime]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateListF64 --- title: "NodeStateListF64" breadcrumb: "Reference / Python / node_state / NodeStateListF64" --- # NodeStateListF64 ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | list[[float](https://docs.python.org/3/library/functions.html#float)], optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | list[[float](https://docs.python.org/3/library/functions.html#float)] | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateListF64](/docs/reference/api/python/node_state/NodeStateListF64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[list[float]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateMotifs --- title: "NodeStateMotifs" breadcrumb: "Reference / Python / node_state / NodeStateMotifs" --- # NodeStateMotifs ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | list[[int](https://docs.python.org/3/library/functions.html#int)], optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[list[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
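As a hedged sketch, the example below assumes the per-node temporal motif algorithm `local_temporal_three_node_motifs` in `raphtory.algorithms` returns a `NodeStateMotifs` (one `list[int]` of motif counts per node); check your version's algorithm listing before relying on that name.

```python
from raphtory import Graph
from raphtory.algorithms import local_temporal_three_node_motifs  # assumed producer

g = Graph()
for t, (src, dst) in enumerate([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")], start=1):
    g.add_edge(t, src, dst)

motifs = local_temporal_three_node_motifs(g, 10)  # 10 = time window (delta) in the graph's units

print(motifs.get("a"))         # list[int] of motif counts for node "a"
print(motifs.top_k(2))         # the two largest count vectors, still a NodeStateMotifs
print(motifs.to_df().head())   # "node" / "value" DataFrame, one count list per row
```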
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateMotifs](/docs/reference/api/python/node_state/NodeStateMotifs) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[list[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateNodes --- title: "NodeStateNodes" breadcrumb: "Reference / Python / node_state / NodeStateNodes" --- # NodeStateNodes ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Nodes](/docs/reference/api/python/raphtory/Nodes), optional | `None` | the default value. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateNodes](/docs/reference/api/python/node_state/NodeStateNodes) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
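As a sketch of the accessors above, `per_node` is a hypothetical `NodeStateNodes` (one `Nodes` collection per node, e.g. a neighbourhood computed elsewhere); no specific producing call is assumed.

```python
# `per_node` is a hypothetical NodeStateNodes.
members = per_node.get("a")                  # Nodes for node "a", or None if absent
if members is not None:
    print([n for n in members])              # Nodes collections are iterable

for node, group in per_node.sorted_by_id().items():  # deterministic order by node id
    print(node, len(list(group)))

print(per_node.to_df().head())               # "node" / "value" DataFrame
```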
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Nodes]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionDateTime --- title: "NodeStateOptionDateTime" breadcrumb: "Reference / Python / node_state / NodeStateOptionDateTime" --- # NodeStateOptionDateTime ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[datetime]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
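Because every value here is optional, handling `None` is the main concern. A sketch, with `latest` as a hypothetical `NodeStateOptionDateTime` (e.g. per-node latest update datetimes):

```python
# `latest` is a hypothetical NodeStateOptionDateTime.
print(latest.max())                    # overall latest datetime, or None if empty
print(latest.get("a", default=None))   # Optional[datetime] for a single node

buckets = latest.groups()              # NodeGroups: nodes grouped by identical value

df = latest.to_df()                    # "value" column holds datetimes or None
print(df[df["value"].notna()].head())  # keep only nodes that have a value
```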
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionDateTime](/docs/reference/api/python/node_state/NodeStateOptionDateTime) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[datetime]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionEventTime --- title: "NodeStateOptionEventTime" breadcrumb: "Reference / Python / node_state / NodeStateOptionEventTime" --- # NodeStateOptionEventTime ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[EventTime]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
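A similar sketch for optional event times, with `times` as a hypothetical `NodeStateOptionEventTime` (e.g. a per-node earliest or latest event):

```python
# `times` is a hypothetical NodeStateOptionEventTime.
print(times.min())                      # smallest EventTime across nodes, or None if empty

ordered = times.sorted()                # ascending by value, still a NodeStateOptionEventTime
print(ordered.bottom_k(5).to_df())      # five smallest entries as a DataFrame

missing = [node for node, t in times.items() if t is None]  # nodes with no event time
print(missing)
```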
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionEventTime](/docs/reference/api/python/node_state/NodeStateOptionEventTime) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[EventTime]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionF64 --- title: "NodeStateOptionF64" breadcrumb: "Reference / Python / node_state / NodeStateOptionF64" --- # NodeStateOptionF64 ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[float]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
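`NodeStateOptionF64` typically comes out of aggregations that are undefined for some nodes; for instance, `NodeStateIntervals.mean()` documented earlier returns one. A sketch, assuming `intervals` is a `NodeStateIntervals` obtained from a node history:

```python
# `intervals` is assumed to be a NodeStateIntervals (e.g. a NodeStateHistory's .intervals).
mean_gap = intervals.mean()       # NodeStateOptionF64: mean gap in ms, None where undefined

print(mean_gap.max_item())        # node with the largest mean gap, plus that gap
print(mean_gap.top_k(3).to_df())  # three largest mean gaps as a DataFrame

quiet = [node for node, v in mean_gap.items() if v is None]  # nodes with no intervals
print(quiet)
```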
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionF64](/docs/reference/api/python/node_state/NodeStateOptionF64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[float]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionI64 --- title: "NodeStateOptionI64" breadcrumb: "Reference / Python / node_state / NodeStateOptionI64" --- # NodeStateOptionI64 ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionI64](/docs/reference/api/python/node_state/NodeStateOptionI64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionStr --- title: "NodeStateOptionStr" breadcrumb: "Reference / Python / node_state / NodeStateOptionStr" --- # NodeStateOptionStr ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[str]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[str]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateOptionUsize --- title: "NodeStateOptionUsize" breadcrumb: "Reference / Python / node_state / NodeStateOptionUsize" --- # NodeStateOptionUsize ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[int]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionUsize](/docs/reference/api/python/node_state/NodeStateOptionUsize) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[int]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateReachability --- title: "NodeStateReachability" breadcrumb: "Reference / Python / node_state / NodeStateReachability" --- # NodeStateReachability ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | list[tuple[[int](https://docs.python.org/3/library/functions.html#int), [str](https://docs.python.org/3/library/stdtypes.html#str)]], optional | `None` | the default value. Defaults to None. | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateReachability](/docs/reference/api/python/node_state/NodeStateReachability) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
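As a sketch of consuming these values, each of which is a list of `(time, node id)` events: the producing call below is an assumption (the name `temporally_reachable_nodes` and its positional arguments `max_hops`, `start_time`, `seed_nodes` should be checked against the algorithms reference before use).

```python
from raphtory import Graph
from raphtory import algorithms as algo

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")
g.add_edge(3, "c", "d")

# Assumed call: temporal reachability from seed "a", up to 2 hops, from time 1.
reachable = algo.temporally_reachable_nodes(g, 2, 1, ["a"])

# Each value is a list of (time, node id) tuples, matching the `get` default type above.
for node, events in reachable.items():
    print(node.name, events)

print(reachable.to_df())
```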
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values --- ## Reference > Api > Python > Node_state > NodeStateSEIR --- title: "NodeStateSEIR" breadcrumb: "Reference / Python / node_state / NodeStateSEIR" --- # NodeStateSEIR ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Infected](/docs/reference/api/python/algorithms/Infected), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Infected](/docs/reference/api/python/algorithms/Infected), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Infected](/docs/reference/api/python/algorithms/Infected), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Infected]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Infected](/docs/reference/api/python/algorithms/Infected), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateSEIR](/docs/reference/api/python/node_state/NodeStateSEIR) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Infected]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateString --- title: "NodeStateString" breadcrumb: "Reference / Python / node_state / NodeStateString" --- # NodeStateString ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[str]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
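A small sketch, assuming node names are exposed through this string-valued interface via `g.nodes.name` (the names may arrive as a lazy view rather than a materialised `NodeStateString`, but the methods shown are the same):

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "alice", "bob")
g.add_edge(2, "bob", "carol")

names = g.nodes.name        # assumed source of a string-valued node state
print(names.sorted())       # nodes ordered alphabetically by name
print(names.max())          # lexicographically largest name
print(names.to_df())        # "node" and "value" columns
```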
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateString](/docs/reference/api/python/node_state/NodeStateString) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[str]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateU64 --- title: "NodeStateU64" breadcrumb: "Reference / Python / node_state / NodeStateU64" --- # NodeStateU64 ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`mean`](#mean) | mean of values over all nodes | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`sum`](#sum) | sum of values over all nodes | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateU64](/docs/reference/api/python/node_state/NodeStateU64) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | the value for the node or the default value | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [mean](#mean) mean of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | mean value | ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[int]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateU64](/docs/reference/api/python/node_state/NodeStateU64) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateU64](/docs/reference/api/python/node_state/NodeStateU64) | The sorted node state | ### [sum](#sum) sum of values over all nodes #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | int: the sum | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateU64](/docs/reference/api/python/node_state/NodeStateU64) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[int]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateUsize --- title: "NodeStateUsize" breadcrumb: "Reference / Python / node_state / NodeStateUsize" --- # NodeStateUsize ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`mean`](#mean) | mean of values over all nodes | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`sum`](#sum) | sum of values over all nodes | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The k smallest values as a node state | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [mean](#mean) mean of values over all nodes #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float) | mean value | ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[int]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The sorted node state | ### [sum](#sum) sum of values over all nodes #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | int: the sum | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
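For instance, node degrees are a natural source of integer-valued state. The sketch below assumes `g.nodes.degree()` exposes this interface (possibly as a lazy view with the same methods):

```python
from raphtory import Graph

g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "a", "c")
g.add_edge(3, "b", "c")
g.add_edge(4, "a", "d")

degrees = g.nodes.degree()
print(degrees.mean())                        # average degree across all nodes
print(degrees.top_k(2))                      # the two best-connected nodes
print(degrees.groups())                      # NodeGroups: nodes bucketed by equal degree
print(degrees.sorted(reverse=True).to_df())  # sorted state flattened to pandas
```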
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateUsize](/docs/reference/api/python/node_state/NodeStateUsize) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[int]` | Iterator over values | --- ## Reference > Api > Python > Node_state > NodeStateWeightedSP --- title: "NodeStateWeightedSP" breadcrumb: "Reference / Python / node_state / NodeStateWeightedSP" --- # NodeStateWeightedSP ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get value for node | | [`items`](#items) | Iterate over items | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`values`](#values) | Iterate over values | --- ## Method Details ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | tuple[[float](https://docs.python.org/3/library/functions.html#float), [Nodes](/docs/reference/api/python/raphtory/Nodes)], optional | `None` | the default value. Defaults to None. | ### [items](#items) Iterate over items ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateWeightedSP](/docs/reference/api/python/node_state/NodeStateWeightedSP) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
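Each value here is a `(distance, path)` pair, with `path` being a `Nodes` collection. Below is a consumption-only sketch using a hypothetical helper; it assumes `sp` already holds a `NodeStateWeightedSP` produced by a weighted shortest-path algorithm (the producing call is omitted, see the algorithms reference for it):

```python
def summarise_shortest_paths(sp):
    """Print distance and path for every reachable node.

    `sp` is assumed to be a NodeStateWeightedSP; each value is a
    (total_weight, Nodes) tuple as described in `get` above.
    """
    for node, (distance, path) in sp.items():
        print(node.name, distance, [n.name for n in path])
    # Or flatten into pandas for further analysis.
    print(sp.to_df())
```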
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [values](#values) Iterate over values --- ## Reference > Api > Python > Node_state > NodeTypeView --- title: "NodeTypeView" breadcrumb: "Reference / Python / node_state / NodeTypeView" --- # NodeTypeView A lazy view over node values ## Methods | Method | Description | |--------|-------------| | [`bottom_k`](#bottom_k) | Compute the k smallest values | | [`collect`](#collect) | Compute all values and return the result as a list | | [`compute`](#compute) | Compute all values and return the result as a node view | | [`get`](#get) | Get value for node | | [`groups`](#groups) | Group by value | | [`items`](#items) | Iterate over items | | [`max`](#max) | Return the maximum value | | [`max_item`](#max_item) | Return largest value and corresponding node | | [`median`](#median) | Return the median value | | [`median_item`](#median_item) | Return median value and corresponding node | | [`min`](#min) | Return the minimum value | | [`min_item`](#min_item) | Return smallest value and corresponding node | | [`nodes`](#nodes) | Iterate over nodes | | [`sorted`](#sorted) | Sort by value | | [`sorted_by_id`](#sorted_by_id) | Sort results by node id | | [`to_df`](#to_df) | Convert results to pandas DataFrame | | [`top_k`](#top_k) | Compute the k largest values | | [`values`](#values) | Iterate over values | --- ## Method Details ### [bottom_k](#bottom_k) **Signature:** `bottom_k(k)` Compute the k smallest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The k smallest values as a node state | ### [collect](#collect) Compute all values and return the result as a list #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str), optional] | all values as a list | ### [compute](#compute) Compute all values and return the result as a node view #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | the computed `NodeState` | ### [get](#get) **Signature:** `get(node, default=None)` Get value for node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [NodeInput](/docs/reference/api/python/typing) | - | the node | | `default` | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | `None` | the default value. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | the value for the node or the default value | ### [groups](#groups) Group by value #### Returns | Type | Description | |------|-------------| | [NodeGroups](/docs/reference/api/python/node_state/NodeGroups) | The grouped nodes | ### [items](#items) Iterate over items ### [max](#max) Return the maximum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The maximum value or `None` if empty | ### [max_item](#max_item) Return largest value and corresponding node ### [median](#median) Return the median value #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | Optional[Optional[str]]: | ### [median_item](#median_item) Return median value and corresponding node ### [min](#min) Return the minimum value #### Returns | Type | Description | |------|-------------| | [Optional](https://docs.python.org/3/library/typing.html#typing.Optional), optional | The minimum value or `None` if empty | ### [min_item](#min_item) Return smallest value and corresponding node ### [nodes](#nodes) Iterate over nodes #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The nodes | ### [sorted](#sorted) **Signature:** `sorted(reverse=False)` Sort by value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `reverse` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If `True`, sort in descending order, otherwise ascending. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | Sorted node state | ### [sorted_by_id](#sorted_by_id) Sort results by node id #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The sorted node state | ### [to_df](#to_df) Convert results to pandas DataFrame The DataFrame has two columns, "node" with the node ids and "value" with the corresponding values. 
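In practice a lazy view like this typically comes from a nodes property such as `g.nodes.node_type`; that source, and the `node_type` keyword on `add_node`, are assumptions in the sketch below. Nothing is evaluated until `collect`, `compute`, or one of the aggregations is called.

```python
from raphtory import Graph

g = Graph()
# The node_type keyword on add_node is assumed here.
g.add_node(1, "alice", node_type="person")
g.add_node(1, "acme", node_type="company")
g.add_node(2, "bob")                 # no type: comes back as None

types = g.nodes.node_type            # lazy view, nothing computed yet
print(types.collect())               # plain Python list of Optional[str]
state = types.compute()              # materialised NodeStateOptionStr
print(state.groups())                # nodes grouped by their type
```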
#### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the pandas DataFrame | ### [top_k](#top_k) **Signature:** `top_k(k)` Compute the k largest values #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `k` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of values to return | #### Returns | Type | Description | |------|-------------| | [NodeStateOptionStr](/docs/reference/api/python/node_state/NodeStateOptionStr) | The k largest values as a node state | ### [values](#values) Iterate over values #### Returns | Type | Description | |------|-------------| | `Iterator[Optional[str]]` | Iterator over values | --- ## Reference > Api > Python > Node_state > UsizeIterable --- title: "UsizeIterable" breadcrumb: "Reference / Python / node_state / UsizeIterable" --- # UsizeIterable ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | | | [`max`](#max) | | | [`mean`](#mean) | | | [`min`](#min) | | | [`sum`](#sum) | | --- ## Method Details ### [collect](#collect) ### [max](#max) ### [mean](#mean) ### [min](#min) ### [sum](#sum) --- ## Reference > Api > Python > Raphtory > Edge --- title: "Edge" breadcrumb: "Reference / Python / raphtory / Edge" --- # Edge PyEdge is a Python class that represents an edge in the graph. An edge is a directed connection between two nodes. ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the Edge including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the Edge including all events at `time`. | | [`before`](#before) | Create a view of the Edge including all events before `end` (exclusive). | | [`default_layer`](#default_layer) | Return a view of Edge containing only the default edge layer | | [`exclude_layer`](#exclude_layer) | Return a view of Edge containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of Edge containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of Edge containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of Edge containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`explode`](#explode) | Explodes returns an edge object for each update within the original edge. | | [`explode_layers`](#explode_layers) | Explode layers returns an edge object for each layer within the original edge. These new edge object contains only updates from respective layers. | | [`has_layer`](#has_layer) | Check if Edge has the layer `"name"` | | [`is_active`](#is_active) | Check if the edge is currently active (has at least one update within this period). | | [`is_deleted`](#is_deleted) | Check if the edge is currently deleted | | [`is_self_loop`](#is_self_loop) | Check if the edge is on the same node | | [`is_valid`](#is_valid) | Check if the edge is currently valid (i.e., not deleted) | | [`latest`](#latest) | Create a view of the Edge including all events at the latest time. 
| | [`layer`](#layer) | Return a view of Edge containing the layer `"name"` | | [`layers`](#layers) | Return a view of Edge containing all layers `names` | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the Edge including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the Edge including all events that have not been explicitly deleted at the latest time. | | [`valid_layers`](#valid_layers) | Return a view of Edge containing all layers `names` | | [`window`](#window) | Create a view of the Edge including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`deletions`](#deletions) | Returns a history object with EventTime entries for an edge's deletion times. | | [`dst`](#dst) | Returns the destination node of the edge. | | [`earliest_time`](#earliest_time) | Gets the earliest time of an edge. | | [`end`](#end) | Gets the latest time that this Edge is valid. | | [`history`](#history) | Returns a history object with EventTime entries for when an edge is added or change to an edge is made. | | [`id`](#id) | The id of the edge. | | [`latest_time`](#latest_time) | Gets the latest time of an edge. | | [`layer_name`](#layer_name) | Gets the name of the layer this edge belongs to - assuming it only belongs to one layer. | | [`layer_names`](#layer_names) | Gets the names of the layers this edge belongs to. | | [`metadata`](#metadata) | Gets the metadata of an edge | | [`nbr`](#nbr) | Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) | | [`properties`](#properties) | Returns a view of the properties of the edge. | | [`src`](#src) | Returns the source node of the edge. | | [`start`](#start) | Gets the start time for rolling and expanding windows for this Edge | | [`time`](#time) | Gets the time of an exploded edge. | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this Edge. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the Edge including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [at](#at) **Signature:** `at(time)` Create a view of the Edge including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [before](#before) **Signature:** `before(end)` Create a view of the Edge including all events before `end` (exclusive). 
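For example, combining the time views above (only the graph-construction calls sit outside what is documented on this page):

```python
from raphtory import Graph

g = Graph()
# Three updates on the same edge at times 1, 5 and 10.
g.add_edge(1, "a", "b", properties={"amount": 10})
g.add_edge(5, "a", "b", properties={"amount": 20})
g.add_edge(10, "a", "b", properties={"amount": 30})

e = g.edge("a", "b")

print(e.before(10).history)     # events strictly before t=10 (times 1 and 5)
print(e.after(1).history)       # events strictly after t=1 (times 5 and 10)
print(e.at(5).history)          # only the event at t=5
print(e.window(1, 10).history)  # start inclusive, end exclusive (times 1 and 5)
```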
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [default_layer](#default_layer) Return a view of Edge containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of Edge containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of Edge containing all layers except the excluded `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of Edge containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of Edge containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. 
For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [explode](#explode) Explodes returns an edge object for each update within the original edge. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [explode_layers](#explode_layers) Explode layers returns an edge object for each layer within the original edge. These new edge object contains only updates from respective layers. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if Edge has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_active](#is_active) Check if the edge is currently active (has at least one update within this period). #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_deleted](#is_deleted) Check if the edge is currently deleted #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_self_loop](#is_self_loop) Check if the edge is on the same node #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_valid](#is_valid) Check if the edge is currently valid (i.e., not deleted) #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [latest](#latest) Create a view of the Edge including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of Edge containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | then name of the layer. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of Edge containing all layers `names` Errors if any of the layers do not exist. 
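A short example of the layer views, assuming the usual `layer` keyword on `add_edge`:

```python
from raphtory import Graph

g = Graph()
# The same pair of nodes connected on two different layers.
g.add_edge(1, "a", "b", layer="transfer")
g.add_edge(2, "a", "b", layer="message")

e = g.edge("a", "b")

print(e.layer_names)                # the layers this edge appears on
print(e.has_layer("transfer"))      # True
print(e.layer("transfer").history)  # only updates made on the "transfer" layer
print(e.layers(["transfer", "message"]).history)

# One edge object per layer, each restricted to that layer's updates.
for layered in e.explode_layers():
    print(layered.layer_name, layered.history)
```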
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. 
| ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the Edge including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [snapshot_latest](#snapshot_latest) Create a view of the Edge including all events that have not been explicitly deleted at the latest time. This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of Edge containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the Edge including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. 
| #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | | --- ## Property Details ### [deletions](#deletions) Returns a history object with EventTime entries for an edge's deletion times. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | A history object containing time entries about the edge's deletions | ### [dst](#dst) Returns the destination node of the edge. #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [earliest_time](#earliest_time) Gets the earliest time of an edge. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time of an edge | ### [end](#end) Gets the latest time that this Edge is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this Edge is valid or None if the Edge is valid for all times. | ### [history](#history) Returns a history object with EventTime entries for when an edge is added or change to an edge is made. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | A history object containing temporal entries about the edge | ### [id](#id) The id of the edge. #### Returns | Type | Description | |------|-------------| | [GID](/docs/reference/api/python/typing) | | ### [latest_time](#latest_time) Gets the latest time of an edge. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time of an edge | ### [layer_name](#layer_name) Gets the name of the layer this edge belongs to - assuming it only belongs to one layer. #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | The name of the layer | ### [layer_names](#layer_names) Gets the names of the layers this edge belongs to. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | The name of the layer | ### [metadata](#metadata) Gets the metadata of an edge #### Returns | Type | Description | |------|-------------| | [Metadata](/docs/reference/api/python/raphtory/Metadata) | | ### [nbr](#nbr) Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [properties](#properties) Returns a view of the properties of the edge. #### Returns | Type | Description | |------|-------------| | [Properties](/docs/reference/api/python/raphtory/Properties) | Properties on the Edge. | ### [src](#src) Returns the source node of the edge. #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [start](#start) Gets the start time for rolling and expanding windows for this Edge #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this Edge is valid or None if the Edge is valid for all times. | ### [time](#time) Gets the time of an exploded edge. 
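A small illustrative sketch of `explode` together with the `time` property (the node ids, timestamps, and `amount` property are made up for the example):

```python
from raphtory import Graph

# Illustrative data: two updates on the same edge.
g = Graph()
g.add_edge(1, "a", "b", properties={"amount": 10})
g.add_edge(5, "a", "b", properties={"amount": 20})

# explode() yields one Edge per update; each carries its own update time.
for update in g.edge("a", "b").explode():
    print(update.time, update.properties.get("amount"))
```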
#### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The time of an exploded edge | ### [window_size](#window_size) Get the window size (difference between start and end) for this Edge. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > Edges --- title: "Edges" breadcrumb: "Reference / Python / raphtory / Edges" --- # Edges A list of edges that can be iterated over. ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the Edges including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the Edges including all events at `time`. | | [`before`](#before) | Create a view of the Edges including all events before `end` (exclusive). | | [`collect`](#collect) | Collect all edges into a list | | [`count`](#count) | Returns the number of edges. | | [`default_layer`](#default_layer) | Return a view of Edges containing only the default edge layer | | [`exclude_layer`](#exclude_layer) | Return a view of Edges containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of Edges containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of Edges containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of Edges containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`explode`](#explode) | Explodes returns an edge object for each update within the original edge. | | [`explode_layers`](#explode_layers) | Explode layers returns an edge object for each layer within the original edge. These new edge object contains only updates from respective layers. | | [`has_layer`](#has_layer) | Check if Edges has the layer `"name"` | | [`is_active`](#is_active) | Check if the edges are active (there is at least one update during this time). | | [`is_deleted`](#is_deleted) | Check if the edges are deleted. | | [`is_self_loop`](#is_self_loop) | Check if the edges are on the same node. | | [`is_valid`](#is_valid) | Check if the edges are valid (i.e. not deleted). | | [`latest`](#latest) | Create a view of the Edges including all events at the latest time. | | [`layer`](#layer) | Return a view of Edges containing the layer `"name"` | | [`layers`](#layers) | Return a view of Edges containing all layers `names` | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the Edges including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the Edges including all events that have not been explicitly deleted at the latest time. | | [`to_df`](#to_df) | Converts the graph's edges into a Pandas DataFrame. 
| | [`valid_layers`](#valid_layers) | Return a view of Edges containing all layers `names` | | [`window`](#window) | Create a view of the Edges including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`deletions`](#deletions) | Returns a history object for each edge containing their deletion times. | | [`dst`](#dst) | Returns the destination node of the edge. | | [`earliest_time`](#earliest_time) | Returns the earliest time of the edges. | | [`end`](#end) | Gets the latest time that this Edges is valid. | | [`history`](#history) | Returns a history object for each edge containing time entries for when the edge is added or change to the edge is made. | | [`id`](#id) | Returns all ids of the edges. | | [`latest_time`](#latest_time) | Returns the latest times of the edges. | | [`layer_name`](#layer_name) | Get the layer name that all edges belong to - assuming they only belong to one layer | | [`layer_names`](#layer_names) | Get the layer names that all edges belong to - assuming they only belong to one layer. | | [`metadata`](#metadata) | Returns all the metadata of the edges | | [`nbr`](#nbr) | Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) | | [`properties`](#properties) | Returns all properties of the edges | | [`src`](#src) | Returns the source node of the edge. | | [`start`](#start) | Gets the start time for rolling and expanding windows for this Edges | | [`time`](#time) | Returns the times of exploded edges | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this Edges. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the Edges including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [at](#at) **Signature:** `at(time)` Create a view of the Edges including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [before](#before) **Signature:** `before(end)` Create a view of the Edges including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [collect](#collect) Collect all edges into a list #### Returns | Type | Description | |------|-------------| | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | the list of edges | ### [count](#count) Returns the number of edges. 
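A minimal sketch contrasting `count` and `collect` on an Edges view (the data is illustrative):

```python
from raphtory import Graph

# Illustrative data: two unique edges, one of them updated twice.
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")
g.add_edge(3, "a", "b")

print(g.edges.count())            # 2 unique edges
print(len(g.edges.collect()))     # 2, materialised as a list of Edge objects
print(g.edges.before(2).count())  # 1 -- only (a, b) has an update before t=2
```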
#### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | | ### [default_layer](#default_layer) Return a view of Edges containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of Edges containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of Edges containing all layers except the excluded `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of Edges containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of Edges containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. 
alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [explode](#explode) Explodes returns an edge object for each update within the original edge. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [explode_layers](#explode_layers) Explode layers returns an edge object for each layer within the original edge. These new edge object contains only updates from respective layers. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if Edges has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_active](#is_active) Check if the edges are active (there is at least one update during this time). #### Returns | Type | Description | |------|-------------| | [BoolIterable](/docs/reference/api/python/iterables/BoolIterable) | | ### [is_deleted](#is_deleted) Check if the edges are deleted. #### Returns | Type | Description | |------|-------------| | [BoolIterable](/docs/reference/api/python/iterables/BoolIterable) | | ### [is_self_loop](#is_self_loop) Check if the edges are on the same node. #### Returns | Type | Description | |------|-------------| | [BoolIterable](/docs/reference/api/python/iterables/BoolIterable) | | ### [is_valid](#is_valid) Check if the edges are valid (i.e. not deleted). #### Returns | Type | Description | |------|-------------| | [BoolIterable](/docs/reference/api/python/iterables/BoolIterable) | | ### [latest](#latest) Create a view of the Edges including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of Edges containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | then name of the layer. | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of Edges containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). 
A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the Edges including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. 
| #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [snapshot_latest](#snapshot_latest) Create a view of the Edges including all events that have not been explicitly deleted at the latest time. This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [to_df](#to_df) **Signature:** `to_df(include_property_history=True, convert_datetime=False, explode=False)` Converts the graph's edges into a Pandas DataFrame. This method will create a DataFrame with the following columns: - "src": The source node of the edge. - "dst": The destination node of the edge. - "layer": The layer of the edge. - "properties": The properties of the edge. - "update_history": The update history of the edge. This column will be included if `include_update_history` is set to `true`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `include_property_history` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | A boolean, if set to `True`, the history of each property is included, if `False`, only the latest value is shown. Ignored if exploded. Defaults to True. | | `convert_datetime` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean, if set to `True` will convert the timestamp to python datetimes. Defaults to False. | | `explode` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean, if set to `True`, will explode each edge update into its own row. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | If successful, this PyObject will be a Pandas DataFrame. | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of Edges containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the Edges including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | --- ## Property Details ### [deletions](#deletions) Returns a history object for each edge containing their deletion times. #### Returns | Type | Description | |------|-------------| | [HistoryIterable](/docs/reference/api/python/iterables/HistoryIterable) | An iterable of history objects, one for each edge. | ### [dst](#dst) Returns the destination node of the edge. 
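As an illustrative sketch, iterating an Edges view and reading `src` and `dst` on each edge (the node ids are made up, and the `name` property on the returned node views is assumed here rather than documented on this page):

```python
from raphtory import Graph

# Illustrative data.
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

# Iterating the Edges view yields individual Edge objects whose
# src and dst properties are node views.
for e in g.edges:
    print(e.src.name, "->", e.dst.name)
```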
#### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [earliest_time](#earliest_time) Returns the earliest time of the edges. #### Returns | Type | Description | |------|-------------| | [OptionEventTimeIterable](/docs/reference/api/python/iterables/OptionEventTimeIterable) | Iterable of `EventTime`s. | ### [end](#end) Gets the latest time that this Edges is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this Edges is valid or None if the Edges is valid for all times. | ### [history](#history) Returns a history object for each edge containing time entries for when the edge is added or change to the edge is made. #### Returns | Type | Description | |------|-------------| | [HistoryIterable](/docs/reference/api/python/iterables/HistoryIterable) | An iterable of history objects, one for each edge. | ### [id](#id) Returns all ids of the edges. #### Returns | Type | Description | |------|-------------| | [GIDGIDIterable](/docs/reference/api/python/iterables/GIDGIDIterable) | | ### [latest_time](#latest_time) Returns the latest times of the edges. #### Returns | Type | Description | |------|-------------| | [OptionEventTimeIterable](/docs/reference/api/python/iterables/OptionEventTimeIterable) | Iterable of `EventTime`s. | ### [layer_name](#layer_name) Get the layer name that all edges belong to - assuming they only belong to one layer #### Returns | Type | Description | |------|-------------| | [ArcStringIterable](/docs/reference/api/python/iterables/ArcStringIterable) | | ### [layer_names](#layer_names) Get the layer names that all edges belong to - assuming they only belong to one layer. #### Returns | Type | Description | |------|-------------| | [ArcStringVecIterable](/docs/reference/api/python/iterables/ArcStringVecIterable) | | ### [metadata](#metadata) Returns all the metadata of the edges #### Returns | Type | Description | |------|-------------| | [MetadataView](/docs/reference/api/python/raphtory/MetadataView) | | ### [nbr](#nbr) Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [properties](#properties) Returns all properties of the edges #### Returns | Type | Description | |------|-------------| | [PropertiesView](/docs/reference/api/python/raphtory/PropertiesView) | | ### [src](#src) Returns the source node of the edge. #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [start](#start) Gets the start time for rolling and expanding windows for this Edges #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this Edges is valid or None if the Edges is valid for all times. | ### [time](#time) Returns the times of exploded edges #### Returns | Type | Description | |------|-------------| | [EventTimeIterable](/docs/reference/api/python/iterables/EventTimeIterable) | Iterable of `EventTime`s. | ### [window_size](#window_size) Get the window size (difference between start and end) for this Edges. 
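A short sketch combining `window` and `window_size` on an Edges view (the timestamps are illustrative):

```python
from raphtory import Graph

# Illustrative data: updates at t=1 and t=10.
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(10, "b", "c")

windowed = g.edges.window(0, 5)  # events in [0, 5)
print(windowed.window_size)      # 5, the difference between end and start
print(windowed.count())          # 1 -- only the edge updated at t=1 is in view
```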
#### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > EventTime --- title: "EventTime" breadcrumb: "Reference / Python / raphtory / EventTime" --- # EventTime Raphtory’s EventTime. Represents a unique timepoint in the graph’s history as (timestamp, event_id). - timestamp: Number of milliseconds since the Unix epoch. - event_id: ID used for ordering between equal timestamps. Unless specified manually, the event ids are generated automatically by Raphtory to maintain a unique ordering of events. EventTime can be converted into a timestamp or a Python datetime, and compared either by timestamp (against ints/floats/datetimes/strings), by tuple of (timestamp, event_id), or against another EventTime. ## Properties | Property | Description | |----------|-------------| | [`as_tuple`](#as_tuple) | Return this entry as a tuple of (timestamp, event_id), where the timestamp is in milliseconds. | | [`dt`](#dt) | Returns the UTC datetime representation of this EventTime's timestamp. | | [`event_id`](#event_id) | Returns the event id used to order events within the same timestamp. | | [`t`](#t) | Returns the timestamp in milliseconds since the Unix epoch. | --- ## Property Details ### [as_tuple](#as_tuple) Return this entry as a tuple of (timestamp, event_id), where the timestamp is in milliseconds. #### Returns | Type | Description | |------|-------------| | tuple[[int](https://docs.python.org/3/library/functions.html#int), [int](https://docs.python.org/3/library/functions.html#int)] | (timestamp, event_id). | ### [dt](#dt) Returns the UTC datetime representation of this EventTime's timestamp. #### Returns | Type | Description | |------|-------------| | [datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime) | The UTC datetime. | ### [event_id](#event_id) Returns the event id used to order events within the same timestamp. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The event id. | ### [t](#t) Returns the timestamp in milliseconds since the Unix epoch. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | Milliseconds since the Unix epoch. | --- ## Reference > Api > Python > Raphtory > Graph --- title: "Graph" breadcrumb: "Reference / Python / raphtory / Graph" --- # Graph A temporal graph with event semantics. ## Methods | Method | Description | |--------|-------------| | [`add_edge`](#add_edge) | Adds a new edge with the given source and destination nodes and properties to the graph. | | [`add_metadata`](#add_metadata) | Adds static properties to the graph. | | [`add_node`](#add_node) | Adds a new node with the given id and properties to the graph. | | [`add_properties`](#add_properties) | Adds properties to the graph. | | [`cache`](#cache) | Write Graph to cache file and initialise the cache. | | [`create_index`](#create_index) | Create graph index | | [`create_index_in_ram`](#create_index_in_ram) | Creates a graph index in memory (RAM). | | [`create_index_in_ram_with_spec`](#create_index_in_ram_with_spec) | Creates a graph index in memory (RAM) with the provided index spec. | | [`create_index_with_spec`](#create_index_with_spec) | Create graph index with the provided index spec. | | [`create_node`](#create_node) | Creates a new node with the given id and properties to the graph. 
It fails if the node already exists. | | [`deserialise`](#deserialise) | Load Graph from serialised bytes. | | [`edge`](#edge) | Gets the edge with the specified source and destination nodes | | [`event_graph`](#event_graph) | View graph with event semantics | | [`from_parquet`](#from_parquet) | Read graph from parquet files | | [`get_all_node_types`](#get_all_node_types) | Returns all the node types in the graph. | | [`import_edge`](#import_edge) | Import a single edge into the graph. | | [`import_edge_as`](#import_edge_as) | Import a single edge into the graph with new id. | | [`import_edges`](#import_edges) | Import multiple edges into the graph. | | [`import_edges_as`](#import_edges_as) | Import multiple edges into the graph with new ids. | | [`import_node`](#import_node) | Import a single node into the graph. | | [`import_node_as`](#import_node_as) | Import a single node into the graph with new id. | | [`import_nodes`](#import_nodes) | Import multiple nodes into the graph. | | [`import_nodes_as`](#import_nodes_as) | Import multiple nodes into the graph with new ids. | | [`largest_connected_component`](#largest_connected_component) | Gives the large connected component of a graph. | | [`load_cached`](#load_cached) | Load Graph from a file and initialise it as a cache file. | | [`load_edge_metadata`](#load_edge_metadata) | Load edge metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_edges`](#load_edges) | Load edges into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_from_file`](#load_from_file) | Load Graph from a file. | | [`load_node_metadata`](#load_node_metadata) | Load node metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_nodes`](#load_nodes) | Load nodes into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`node`](#node) | Gets the node with the specified id | | [`persistent_graph`](#persistent_graph) | View graph with persistent semantics | | [`save_to_file`](#save_to_file) | Saves the Graph to the given path. | | [`save_to_zip`](#save_to_zip) | Saves the Graph to the given path. | | [`serialise`](#serialise) | Serialise Graph to bytes. | | [`to_parquet`](#to_parquet) | Persist graph to parquet files. | | [`update_metadata`](#update_metadata) | Updates static properties to the graph. | | [`write_updates`](#write_updates) | Persist the new updates by appending them to the cache file. | --- ## Method Details ### [add_edge](#add_edge) **Signature:** `add_edge(timestamp, src, dst, properties=None, layer=None, event_id=None)` Adds a new edge with the given source and destination nodes and properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the edge. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the destination node. 
| | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the edge, as a dict of string and properties. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer of the edge. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) | The added edge. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_metadata](#add_metadata) **Signature:** `add_metadata(metadata)` Adds static properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | The static properties of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_node](#add_node) **Signature:** `add_node(timestamp, id, properties=None, node_type=None, event_id=None)` Adds a new node with the given id and properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the node. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | The added node. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_properties](#add_properties) **Signature:** `add_properties(timestamp, properties, event_id=None)` Adds properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the temporal property. | | `properties` | [PropInput](/docs/reference/api/python/typing) | - | The temporal properties of the graph. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [cache](#cache) **Signature:** `cache(path)` Write Graph to cache file and initialise the cache. 
Future updates are tracked. Use `write_updates` to persist them to the cache file. If the file already exists its contents are overwritten. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the cache file | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index](#create_index) Create graph index #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_in_ram](#create_index_in_ram) Creates a graph index in memory (RAM). This is primarily intended for use in tests and should not be used in production environments, as the index will not be persisted to disk. #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_in_ram_with_spec](#create_index_in_ram_with_spec) **Signature:** `create_index_in_ram_with_spec(py_spec)` Creates a graph index in memory (RAM) with the provided index spec. This is primarily intended for use in tests and should not be used in production environments, as the index will not be persisted to disk. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `py_spec` | [IndexSpec](/docs/reference/api/python/raphtory/IndexSpec) | - | - The specification for the in-memory index to be created. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_with_spec](#create_index_with_spec) **Signature:** `create_index_with_spec(py_spec)` Create graph index with the provided index spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `py_spec` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_node](#create_node) **Signature:** `create_node(timestamp, id, properties=None, node_type=None, event_id=None)` Creates a new node with the given id and properties to the graph. It fails if the node already exists. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the node. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | The created node. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [deserialise](#deserialise) **Signature:** `deserialise(bytes)` Load Graph from serialised bytes. 
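A minimal round-trip sketch pairing `serialise` with `deserialise` (the graph contents are illustrative):

```python
from raphtory import Graph

# Illustrative graph with a single edge.
g = Graph()
g.add_edge(1, "a", "b")

blob = g.serialise()          # serialise the Graph to bytes
g2 = Graph.deserialise(blob)  # rebuild an equivalent Graph from those bytes
print(g2.edges.count())       # 1
```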
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `bytes` | [bytes](https://docs.python.org/3/library/stdtypes.html#bytes) | - | The serialised bytes to decode | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [edge](#edge) **Signature:** `edge(src, dst)` Gets the edge with the specified source and destination nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the source node id | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the destination node id | #### Returns | Type | Description | |------|-------------| | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) | the edge with the specified source and destination nodes, or None if the edge does not exist | ### [event_graph](#event_graph) View graph with event semantics #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | the graph with event semantics applied | ### [from_parquet](#from_parquet) **Signature:** `from_parquet(graph_dir)` Read graph from parquet files #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph_dir` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [PathLike](https://docs.python.org/3/library/os.html#os.PathLike) | - | the folder where the graph is stored as parquet | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | a view of the graph | ### [get_all_node_types](#get_all_node_types) Returns all the node types in the graph. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | the node types | ### [import_edge](#import_edge) **Signature:** `import_edge(edge, merge=False)` Import a single edge into the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edge` | [Edge](/docs/reference/api/python/raphtory/Edge) | - | A Edge object representing the edge to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if the imported edge already exists in the graph. If merge is True, the function merges the histories of the imported edge and the existing edge (in the graph). | #### Returns | Type | Description | |------|-------------| | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) | An Edge object if the edge was successfully imported. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edge_as](#import_edge_as) **Signature:** `import_edge_as(edge, new_id, merge=False)` Import a single edge into the graph with new id. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edge` | [Edge](/docs/reference/api/python/raphtory/Edge) | - | A Edge object representing the edge to be imported. | | `new_id` | [tuple](https://docs.python.org/3/library/stdtypes.html#tuple) | - | The ID of the new edge. 
It's a tuple of the source and destination node ids. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if the imported edge already exists in the graph. If merge is True, the function merges the histories of the imported edge and the existing edge (in the graph). | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | An Edge object if the edge was successfully imported. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edges](#import_edges) **Signature:** `import_edges(edges, merge=False)` Import multiple edges into the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges` | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | - | A list of Edge objects representing the edges to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if any of the imported edges already exists in the graph. If merge is True, the function merges the histories of the imported edges and the existing edges (in the graph). | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edges_as](#import_edges_as) **Signature:** `import_edges_as(edges, new_ids, merge=False)` Import multiple edges into the graph with new ids. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges` | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | - | A list of Edge objects representing the edges to be imported. | | `new_ids` | list[tuple[[int](https://docs.python.org/3/library/functions.html#int), [int](https://docs.python.org/3/library/functions.html#int)]] | - | The IDs of the new edges. It's a vector of tuples of the source and destination node ids. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if any of the imported edges already exists in the graph. If merge is True, the function merges the histories of the imported edges and the existing edges (in the graph). | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_node](#import_node) **Signature:** `import_node(node, merge=False)` Import a single node into the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | A Node object representing the node to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. 
If merge is False, the function will return an error if the imported node already exists in the graph. If merge is True, the function merges the histories of the imported node and the existing node (in the graph). | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | A node object if the node was successfully imported. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_node_as](#import_node_as) **Signature:** `import_node_as(node, new_id, merge=False)` Import a single node into the graph with a new id. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | A Node object representing the node to be imported. | | `new_id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The new node id. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if the imported node already exists in the graph. If merge is True, the function merges the histories of the imported node and the existing node (in the graph). | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | A node object if the node was successfully imported. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_nodes](#import_nodes) **Signature:** `import_nodes(nodes, merge=False)` Import multiple nodes into the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[Node](/docs/reference/api/python/raphtory/Node)] | - | A list of Node objects representing the nodes to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if any of the imported nodes already exists in the graph. If merge is True, the function merges the histories of the imported nodes and the existing nodes (in the graph). | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_nodes_as](#import_nodes_as) **Signature:** `import_nodes_as(nodes, new_ids, merge=False)` Import multiple nodes into the graph with new ids. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[Node](/docs/reference/api/python/raphtory/Node)] | - | A list of Node objects representing the nodes to be imported. | | `new_ids` | list[[str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int)] | - | A list of node IDs to use for the imported nodes. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag. Defaults to False. If merge is False, the function will return an error if any of the imported nodes already exists in the graph.
If merge is True, the function merges the histories of the imported nodes and the existing nodes (in the graph). | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [largest_connected_component](#largest_connected_component) Gives the largest connected component of a graph. Example usage: `g.largest_connected_component()` #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | sub-graph of the graph `g` containing the largest connected component | ### [load_cached](#load_cached) **Signature:** `load_cached(path)` Load Graph from a file and initialise it as a cache file. Future updates are tracked. Use `write_updates` to persist them to the cache file. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the cache file | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | the loaded graph with initialised cache | ### [load_edge_metadata](#load_edge_metadata) **Signature:** `load_edge_metadata(data, src, dst, metadata=None, shared_metadata=None, layer=None, layer_col=None, schema=None, csv_options=None)` Load edge metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an `__arrow_c_stream__()` method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing edge information. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the destination node. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every edge. Defaults to None. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer name. Defaults to None. | | `layer_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer column name in a dataframe. Defaults to None. | | `schema` | list[tuple[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropType](/docs/reference/api/python/raphtory/PropType) \| [str](https://docs.python.org/3/library/stdtypes.html#str)]] \| dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropType](/docs/reference/api/python/raphtory/PropType) \| [str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None.
| | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_edges](#load_edges) **Signature:** `load_edges(data, time, src, dst, properties=None, metadata=None, shared_metadata=None, layer=None, layer_col=None, schema=None, csv_options=None)` Load edges into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing the edges. | | `time` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the update timestamps. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the source node IDs. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the destination node IDs. | | `properties` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge property column names. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every edge. Defaults to None. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the layer for all edges. Cannot be used in combination with layer_col. Defaults to None. | | `layer_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer column name in a dataframe. Cannot be used in combination with layer. Defaults to None. | | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. 
| #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_from_file](#load_from_file) **Signature:** `load_from_file(path)` Load Graph from a file. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | | ### [load_node_metadata](#load_node_metadata) **Signature:** `load_node_metadata(data, id, node_type=None, node_type_col=None, metadata=None, shared_metadata=None, schema=None, csv_options=None)` Load node metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing node information. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the node IDs. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the node type for all nodes. Cannot be used in combination with node_type_col. Defaults to None. | | `node_type_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The node type column name in a dataframe. Cannot be used in combination with node_type. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every node. Defaults to None. | | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. 
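The bulk loaders above all follow the same pattern: pass an Arrow-compatible object or a CSV/Parquet path, then name the columns to map onto the graph. A minimal sketch of `load_edges` and `load_edge_metadata` reading a CSV file; the file name, column names, layer name, and CSV option values are illustrative assumptions, not part of the reference:

```python
import raphtory as rp

g = rp.Graph()

# Load temporal edges from a CSV file (path and column names are illustrative).
g.load_edges(
    "transactions.csv",
    time="time",
    src="sender",
    dst="receiver",
    properties=["amount"],
    layer="transfers",
    csv_options={"delimiter": ",", "has_header": True},
)

# Attach static metadata to the same edges (no time column is needed here).
g.load_edge_metadata(
    "transactions.csv",
    src="sender",
    dst="receiver",
    metadata=["currency"],
)

print(g.count_edges())
```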
| ### [load_nodes](#load_nodes) **Signature:** `load_nodes(data, time, id, node_type=None, node_type_col=None, properties=None, metadata=None, shared_metadata=None, schema=None, csv_options=None)` Load nodes into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing the nodes. | | `time` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the timestamps. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the node IDs. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the node type for all nodes. Cannot be used in combination with node_type_col. Defaults to None. | | `node_type_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The node type column name in a dataframe. Cannot be used in combination with node_type. Defaults to None. | | `properties` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node property column names. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every node. Defaults to None. | | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. 
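`load_nodes` works the same way for node updates. A minimal sketch using a pandas DataFrame; the column names and values are illustrative assumptions:

```python
import pandas as pd
import raphtory as rp

# Illustrative node updates; column names are assumptions for this example.
nodes_df = pd.DataFrame(
    {
        "time": [1, 2, 3],
        "id": ["alice", "bob", "alice"],
        "kind": ["person", "person", "person"],
        "balance": [100.0, 250.0, 80.0],
    }
)

g = rp.Graph()
g.load_nodes(
    nodes_df,
    time="time",
    id="id",
    node_type_col="kind",
    properties=["balance"],
)

print(g.count_nodes())  # 2 distinct nodes, each keeping its own update history
```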
### [node](#node) **Signature:** `node(id)` Gets the node with the specified id #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the node id | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | The node object with the specified id, or None if the node does not exist | ### [persistent_graph](#persistent_graph) View the graph with persistent semantics #### Returns | Type | Description | |------|-------------| | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | the graph with persistent semantics applied | ### [save_to_file](#save_to_file) **Signature:** `save_to_file(path)` Saves the Graph to the given path. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [save_to_zip](#save_to_zip) **Signature:** `save_to_zip(path)` Saves the Graph to the given path as a zip archive. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [serialise](#serialise) Serialise Graph to bytes. #### Returns | Type | Description | |------|-------------| | [bytes](https://docs.python.org/3/library/stdtypes.html#bytes) | | ### [to_parquet](#to_parquet) **Signature:** `to_parquet(graph_dir)` Persist graph to parquet files. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `graph_dir` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [PathLike](https://docs.python.org/3/library/os.html#os.PathLike) | - | the folder where the graph will be persisted as parquet | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(metadata)` Updates the static properties (metadata) of the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | The static properties of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [write_updates](#write_updates) Persist the new updates by appending them to the cache file. #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Raphtory > GraphView --- title: "GraphView" breadcrumb: "Reference / Python / raphtory / GraphView" --- # GraphView Graph view is a read-only version of a graph at a certain point in time. 
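The time-view methods documented below (`window`, `at`, `before`, `after`, `rolling`, `expanding`) all return further `GraphView`s rather than copying data. A minimal sketch, using the standard `Graph.add_edge(time, src, dst)` updater with illustrative timestamps:

```python
import raphtory as rp

g = rp.Graph()
for t, src, dst in [(1, "a", "b"), (5, "b", "c"), (9, "c", "a")]:
    g.add_edge(t, src, dst)

view = g.window(1, 6)              # events with 1 <= time < 6
print(view.count_edges())          # 2
print(g.before(9).count_edges())   # events strictly before time 9 -> 2
print(g.at(5).count_nodes())       # only the entities active at time 5
```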
## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the GraphView including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the GraphView including all events at `time`. | | [`before`](#before) | Create a view of the GraphView including all events before `end` (exclusive). | | [`cache_view`](#cache_view) | Applies the filters to the graph and retains the node ids and the edge ids | | [`count_edges`](#count_edges) | Number of edges in the graph | | [`count_nodes`](#count_nodes) | Number of nodes in the graph | | [`count_temporal_edges`](#count_temporal_edges) | Number of edges in the graph | | [`default_layer`](#default_layer) | Return a view of GraphView containing only the default edge layer | | [`edge`](#edge) | Gets the edge with the specified source and destination nodes | | [`exclude_layer`](#exclude_layer) | Return a view of GraphView containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of GraphView containing all layers except the excluded `names` | | [`exclude_nodes`](#exclude_nodes) | Returns a subgraph given a set of nodes that are excluded from the subgraph | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of GraphView containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of GraphView containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`filter`](#filter) | Return a filtered view that only includes nodes and edges that satisfy the filter | | [`find_edges`](#find_edges) | Get the edges that match the properties name and value | | [`find_nodes`](#find_nodes) | Get the nodes that match the properties name and value | | [`get_index_spec`](#get_index_spec) | Get index spec | | [`has_edge`](#has_edge) | Returns true if the graph contains the specified edge | | [`has_layer`](#has_layer) | Check if GraphView has the layer `"name"` | | [`has_node`](#has_node) | Returns true if the graph contains the specified node | | [`latest`](#latest) | Create a view of the GraphView including all events at the latest time. | | [`layer`](#layer) | Return a view of GraphView containing the layer `"name"` | | [`layers`](#layers) | Return a view of GraphView containing all layers `names` | | [`materialize`](#materialize) | Returns a 'materialized' clone of the graph view - i.e. a new graph with a copy of the data seen within the view instead of just a mask over the original graph | | [`node`](#node) | Gets the node with the specified id | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`search_edges`](#search_edges) | Searches for edges which match the given filter expression. This uses Tantivy's exact search. | | [`search_nodes`](#search_nodes) | Searches for nodes which match the given filter expression. This uses Tantivy's exact search. 
| | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the GraphView including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the GraphView including all events that have not been explicitly deleted at the latest time. | | [`subgraph`](#subgraph) | Returns a subgraph given a set of nodes | | [`subgraph_node_types`](#subgraph_node_types) | Returns a subgraph filtered by node types given a set of node types | | [`to_networkx`](#to_networkx) | Returns a graph with NetworkX. | | [`to_pyvis`](#to_pyvis) | Draw a graph with PyVis. | | [`valid`](#valid) | Return a view of the graph that only includes valid edges | | [`valid_layers`](#valid_layers) | Return a view of GraphView containing all layers `names` | | [`vectorise`](#vectorise) | Create a VectorisedGraph from the current graph. | | [`window`](#window) | Create a view of the GraphView including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`earliest_time`](#earliest_time) | Time entry of the earliest activity in the graph | | [`edges`](#edges) | Gets all edges in the graph | | [`end`](#end) | Gets the latest time that this GraphView is valid. | | [`latest_time`](#latest_time) | Time entry of the latest activity in the graph | | [`metadata`](#metadata) | Get all graph metadata | | [`nodes`](#nodes) | Gets the nodes in the graph | | [`properties`](#properties) | Get all graph properties | | [`start`](#start) | Gets the start time for rolling and expanding windows for this GraphView | | [`unique_layers`](#unique_layers) | Return all the layer ids in the graph | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this GraphView. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the GraphView including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [at](#at) **Signature:** `at(time)` Create a view of the GraphView including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [before](#before) **Signature:** `before(end)` Create a view of the GraphView including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. 
| #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [cache_view](#cache_view) Applies the filters to the graph and retains the node ids and the edge ids in the graph that satisfy the filters creates bitsets per layer for nodes and edges #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Returns the masked graph | ### [count_edges](#count_edges) Number of edges in the graph #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | the number of edges in the graph | ### [count_nodes](#count_nodes) Number of nodes in the graph #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | the number of nodes in the graph | ### [count_temporal_edges](#count_temporal_edges) Number of edges in the graph #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | the number of temporal edges in the graph | ### [default_layer](#default_layer) Return a view of GraphView containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [edge](#edge) **Signature:** `edge(src, dst)` Gets the edge with the specified source and destination nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [NodeInput](/docs/reference/api/python/typing) | - | the source node id | | `dst` | [NodeInput](/docs/reference/api/python/typing) | - | the destination node id | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge), optional | the edge with the specified source and destination nodes, or None if the edge does not exist | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of GraphView containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of GraphView containing all layers except the excluded `names` Errors if any of the layers do not exist. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [exclude_nodes](#exclude_nodes) **Signature:** `exclude_nodes(nodes)` Returns a subgraph given a set of nodes that are excluded from the subgraph #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[NodeInput](/docs/reference/api/python/typing)] | - | set of nodes | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Returns the subgraph | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of GraphView containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of GraphView containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [filter](#filter) **Signature:** `filter(filter)` Return a filtered view that only includes nodes and edges that satisfy the filter #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr` | - | The filter to apply to the nodes and edges. 
| #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The filtered view | ### [find_edges](#find_edges) **Signature:** `find_edges(properties_dict)` Get the edges that match the properties name and value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties_dict` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | the properties name and value | #### Returns | Type | Description | |------|-------------| | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | the edges that match the properties name and value | ### [find_nodes](#find_nodes) **Signature:** `find_nodes(properties_dict)` Get the nodes that match the properties name and value #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `properties_dict` | dict[[str](https://docs.python.org/3/library/stdtypes.html#str), [PropValue](/docs/reference/api/python/typing)] | - | the properties name and value | #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | the nodes that match the properties name and value | ### [get_index_spec](#get_index_spec) Get index spec #### Returns | Type | Description | |------|-------------| | [IndexSpec](/docs/reference/api/python/raphtory/IndexSpec) | | ### [has_edge](#has_edge) **Signature:** `has_edge(src, dst)` Returns true if the graph contains the specified edge #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [NodeInput](/docs/reference/api/python/typing) | - | the source node id | | `dst` | [NodeInput](/docs/reference/api/python/typing) | - | the destination node id | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | true if the graph contains the specified edge, false otherwise | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if GraphView has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [has_node](#has_node) **Signature:** `has_node(id)` Returns true if the graph contains the specified node #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | [NodeInput](/docs/reference/api/python/typing) | - | the node id | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | true if the graph contains the specified node, false otherwise | ### [latest](#latest) Create a view of the GraphView including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of GraphView containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | then name of the layer. 
| #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of GraphView containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [materialize](#materialize) Returns a 'materialized' clone of the graph view - i.e. a new graph with a copy of the data seen within the view instead of just a mask over the original graph #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Returns a graph clone | ### [node](#node) **Signature:** `node(id)` Gets the node with the specified id #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | [NodeInput](/docs/reference/api/python/typing) | - | the node id | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node), optional | the node with the specified id, or None if the node does not exist | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [search_edges](#search_edges) **Signature:** `search_edges(filter, limit=25, offset=0)` Searches for edges which match the given filter expression. This uses Tantivy's exact search. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `limit` | [int](https://docs.python.org/3/library/functions.html#int), optional | `25` | The maximum number of results to return. Defaults to 25. | | `offset` | [int](https://docs.python.org/3/library/functions.html#int), optional | `0` | The number of results to skip. This is useful for pagination. Defaults to 0. | | `filter` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | #### Returns | Type | Description | |------|-------------| | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | A list of edges which match the filter expression. The list will be empty if no edges match the query. | ### [search_nodes](#search_nodes) **Signature:** `search_nodes(filter, limit=25, offset=0)` Searches for nodes which match the given filter expression. This uses Tantivy's exact search. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `limit` | [int](https://docs.python.org/3/library/functions.html#int), optional | `25` | The maximum number of results to return. Defaults to 25. | | `offset` | [int](https://docs.python.org/3/library/functions.html#int), optional | `0` | The number of results to skip. This is useful for pagination. Defaults to 0. | | `filter` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | A list of nodes which match the filter expression. The list will be empty if no nodes match. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the GraphView including all events that have not been explicitly deleted at `time`. 
This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [snapshot_latest](#snapshot_latest) Create a view of the GraphView including all events that have not been explicitly deleted at the latest time. This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | ### [subgraph](#subgraph) **Signature:** `subgraph(nodes)` Returns a subgraph given a set of nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[NodeInput](/docs/reference/api/python/typing)] | - | set of nodes | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Returns the subgraph | ### [subgraph_node_types](#subgraph_node_types) **Signature:** `subgraph_node_types(node_types)` Returns a subgraph filtered by node types given a set of node types #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node_types` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | set of node types | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | Returns the subgraph | ### [to_networkx](#to_networkx) **Signature:** `to_networkx(explode_edges=False, include_node_properties=True, include_edge_properties=True, include_update_history=True, include_property_history=True)` Returns a graph with NetworkX. Network X is a required dependency. If you intend to use this function make sure that you install Network X with ``pip install networkx`` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `explode_edges` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean that is set to True if you want to explode the edges in the graph. Defaults to False. | | `include_node_properties` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | A boolean that is set to True if you want to include the node properties in the graph. Defaults to True. | | `include_edge_properties` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | A boolean that is set to True if you want to include the edge properties in the graph. Defaults to True. | | `include_update_history` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | A boolean that is set to True if you want to include the update histories in the graph. Defaults to True. | | `include_property_history` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | A boolean that is set to True if you want to include the histories in the graph. Defaults to True. | #### Returns | Type | Description | |------|-------------| | `nx.MultiDiGraph` | A Networkx MultiDiGraph. 
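Since `to_networkx` returns a standard `nx.MultiDiGraph`, any NetworkX routine can be run on the exported graph or on a windowed view of it. A minimal sketch with illustrative edges:

```python
import networkx as nx
import raphtory as rp

g = rp.Graph()
g.add_edge(1, "a", "b")
g.add_edge(2, "b", "c")

# Export with the default settings (properties and update histories included).
nxg = g.to_networkx()
print(nxg.number_of_nodes(), nxg.number_of_edges())

# Collapse parallel edges before running algorithms that expect a simple digraph.
print(nx.pagerank(nx.DiGraph(nxg)))
```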
| ### [to_pyvis](#to_pyvis) **Signature:** `to_pyvis(explode_edges=False, edge_color='#000000', shape='dot', node_image=None, edge_weight=None, edge_label=None, colour_nodes_by_type=False, directed=True, notebook=False, kwargs=\{\})` Draw a graph with PyVis. Pyvis is a required dependency. If you intend to use this function make sure that you install Pyvis with ``pip install pyvis`` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `explode_edges` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean that is set to True if you want to explode the edges in the graph. Defaults to False. | | `edge_color` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `'#000000'` | A string defining the colour of the edges in the graph. Defaults to "#000000". | | `shape` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `'dot'` | A string defining what the node looks like. Defaults to "dot". There are two types of nodes. One type has the label inside of it and the other type has the label underneath it. The types with the label inside of it are: ellipse, circle, database, box, text. The ones with the label outside of it are: image, circularImage, diamond, dot, star, triangle, triangleDown, square and icon. | | `node_image` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | An optional node property used as the url of a custom node image. Use together with `shape="image"`. | | `edge_weight` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | An optional string defining the name of the property where edge weight is set on your Raphtory graph. If provided, the default weight for edges that are missing the property is 1.0. | | `edge_label` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | An optional string defining the name of the property where edge label is set on your Raphtory graph. By default, the edge layer is used as the label. | | `colour_nodes_by_type` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True, nodes with different types have different colours. Defaults to False. | | `directed` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `True` | Visualise the graph as directed. Defaults to True. | | `notebook` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean that is set to True if using jupyter notebook. Defaults to False. kwargs: Additional keyword arguments that are passed to the pyvis Network class. 
| | `kwargs` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | `\{\}` | | #### Returns | Type | Description | |------|-------------| | `pyvis.network.Network` | A pyvis network | ### [valid](#valid) Return a view of the graph that only includes valid edges #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The filtered graph | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of GraphView containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | The layered view | ### [vectorise](#vectorise) **Signature:** `vectorise(embedding, nodes=True, edges=True, cache=None, verbose=False)` Create a VectorisedGraph from the current graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `embedding` | [Callable](https://docs.python.org/3/library/typing.html#typing.Callable) | - | Specify the embedding function used to vectorise documents into embeddings. | | `nodes` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | Enable for nodes to be embedded, disable for nodes to not be embedded or specify a custom document property to use if a string is provided. Defaults to True. | | `edges` | [bool](https://docs.python.org/3/library/functions.html#bool) \| [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `True` | Enable for edges to be embedded, disable for edges to not be embedded or specify a custom document property to use if a string is provided. Defaults to True. | | `cache` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | Path used to store the cache of embeddings. | | `verbose` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | Enable to print logs reporting progress. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [VectorisedGraph](/docs/reference/api/python/vectors/VectorisedGraph) | A VectorisedGraph with all the documents and their embeddings, with an initial empty selection. | ### [window](#window) **Signature:** `window(start, end)` Create a view of the GraphView including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. 
| #### Returns | Type | Description | |------|-------------| | [GraphView](/docs/reference/api/python/raphtory/GraphView) | | --- ## Property Details ### [earliest_time](#earliest_time) Time entry of the earliest activity in the graph #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | the time entry of the earliest activity in the graph | ### [edges](#edges) Gets all edges in the graph #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | the edges in the graph | ### [end](#end) Gets the latest time that this GraphView is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this GraphView is valid or None if the GraphView is valid for all times. | ### [latest_time](#latest_time) Time entry of the latest activity in the graph #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | the time entry of the latest activity in the graph | ### [metadata](#metadata) Get all graph metadata #### Returns | Type | Description | |------|-------------| | [Metadata](/docs/reference/api/python/raphtory/Metadata) | | ### [nodes](#nodes) Gets the nodes in the graph #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | the nodes in the graph | ### [properties](#properties) Get all graph properties #### Returns | Type | Description | |------|-------------| | [Properties](/docs/reference/api/python/raphtory/Properties) | Properties paired with their names | ### [start](#start) Gets the start time for rolling and expanding windows for this GraphView #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this GraphView is valid or None if the GraphView is valid for all times. | ### [unique_layers](#unique_layers) Return all the layer ids in the graph #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | the names of all layers in the graph | ### [window_size](#window_size) Get the window size (difference between start and end) for this GraphView. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > History --- title: "History" breadcrumb: "Reference / Python / raphtory / History" --- # History History of updates for an object. Provides access to time entries and derived views such as timestamps, datetimes, event ids, and intervals. ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect all time entries in chronological order. | | [`collect_rev`](#collect_rev) | Collect all time entries in reverse chronological order. | | [`compose_histories`](#compose_histories) | Compose multiple History objects into a single History by fusing their time entries in chronological order. | | [`earliest_time`](#earliest_time) | Get the earliest time entry. | | [`is_empty`](#is_empty) | Check whether the history has no entries. | | [`latest_time`](#latest_time) | Get the latest time entry. | | [`merge`](#merge) | Merge this History with another by interleaving entries in time order. 
| | [`reverse`](#reverse) | Return a History where iteration order is reversed. | ## Properties | Property | Description | |----------|-------------| | [`dt`](#dt) | Access history events as UTC datetimes. | | [`event_id`](#event_id) | Access the unique event id of each time entry. | | [`intervals`](#intervals) | Access the intervals between consecutive timestamps in milliseconds. | | [`t`](#t) | Access history events as timestamps (milliseconds since Unix the epoch). | --- ## Method Details ### [collect](#collect) Collect all time entries in chronological order. #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | Collected time entries. | ### [collect_rev](#collect_rev) Collect all time entries in reverse chronological order. #### Returns | Type | Description | |------|-------------| | list[[EventTime](/docs/reference/api/python/raphtory/EventTime)] | Collected time entries in reverse order. | ### [compose_histories](#compose_histories) **Signature:** `compose_histories(objects)` Compose multiple History objects into a single History by fusing their time entries in chronological order. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `objects` | `Iterable[History]` | - | History objects to compose. | #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | Composed History object containing entries from all inputs. | ### [earliest_time](#earliest_time) Get the earliest time entry. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | Earliest time entry, or None if empty. | ### [is_empty](#is_empty) Check whether the history has no entries. #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | True if empty, otherwise False. | ### [latest_time](#latest_time) Get the latest time entry. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | Latest time entry, or None if empty. | ### [merge](#merge) **Signature:** `merge(other)` Merge this History with another by interleaving entries in time order. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `other` | [History](/docs/reference/api/python/raphtory/History) | - | Right-hand history to merge. | #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | Merged history containing entries from both inputs. | ### [reverse](#reverse) Return a History where iteration order is reversed. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | History that yields items in reverse chronological order. | --- ## Property Details ### [dt](#dt) Access history events as UTC datetimes. #### Returns | Type | Description | |------|-------------| | [HistoryDateTime](/docs/reference/api/python/raphtory/HistoryDateTime) | Datetime view of this history. | ### [event_id](#event_id) Access the unique event id of each time entry. #### Returns | Type | Description | |------|-------------| | [HistoryEventId](/docs/reference/api/python/raphtory/HistoryEventId) | Event id view of this history. 
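The History views compose: the same time entries can be read back as raw timestamps, UTC datetimes, or the gaps between consecutive updates. A minimal sketch, assuming the history is obtained from a node via `node.history()` (that accessor name is an assumption made for this example; the History methods themselves are as documented here):

```python
import raphtory as rp

g = rp.Graph()
for t in [1_000, 4_000, 9_000]:  # illustrative timestamps in milliseconds
    g.add_edge(t, "alice", "bob")

h = g.node("alice").history()  # assumed accessor returning a History object

print(h.earliest_time(), h.latest_time())  # first and last time entries
print(h.t.to_list())                       # timestamps (ms since the Unix epoch)
print(h.dt.collect())                      # the same entries as UTC datetimes
print(h.intervals.mean())                  # mean gap between updates, in ms
```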
| ### [intervals](#intervals) Access the intervals between consecutive timestamps in milliseconds. #### Returns | Type | Description | |------|-------------| | [Intervals](/docs/reference/api/python/raphtory/Intervals) | Intervals view of this history. | ### [t](#t) Access history events as timestamps (milliseconds since Unix the epoch). #### Returns | Type | Description | |------|-------------| | [HistoryTimestamp](/docs/reference/api/python/raphtory/HistoryTimestamp) | Timestamp (as int) view of this history. | --- ## Reference > Api > Python > Raphtory > HistoryDateTime --- title: "HistoryDateTime" breadcrumb: "Reference / Python / raphtory / HistoryDateTime" --- # HistoryDateTime History view that exposes UTC datetimes. ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect all datetimes. | | [`collect_rev`](#collect_rev) | Collect all datetimes in reverse order. | --- ## Method Details ### [collect](#collect) Collect all datetimes. #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | Collected UTC datetimes. | #### Raises | Exception | Description | |-----------|-------------| | `TimeError` | If a timestamp cannot be converted to a datetime. | ### [collect_rev](#collect_rev) Collect all datetimes in reverse order. #### Returns | Type | Description | |------|-------------| | list[[datetime](https://docs.python.org/3/library/datetime.html#datetime.datetime)] | Collected UTC datetimes in reverse order. | #### Raises | Exception | Description | |-----------|-------------| | `TimeError` | If a timestamp cannot be converted to a datetime. | --- ## Reference > Api > Python > Raphtory > HistoryEventId --- title: "HistoryEventId" breadcrumb: "Reference / Python / raphtory / HistoryEventId" --- # HistoryEventId History view that exposes event ids of time entries. They are used for ordering within the same timestamp. ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect all event ids. | | [`collect_rev`](#collect_rev) | Collect all event ids in reverse order. | | [`to_list`](#to_list) | Collect all event ids into a list. | | [`to_list_rev`](#to_list_rev) | Collect all event ids into a list in reverse order. | --- ## Method Details ### [collect](#collect) Collect all event ids. #### Returns | Type | Description | |------|-------------| | `NDArray[np.uintp]` | Event ids. | ### [collect_rev](#collect_rev) Collect all event ids in reverse order. #### Returns | Type | Description | |------|-------------| | `NDArray[np.uintp]` | Event ids in reverse order. | ### [to_list](#to_list) Collect all event ids into a list. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of event ids. | ### [to_list_rev](#to_list_rev) Collect all event ids into a list in reverse order. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of event ids. | --- ## Reference > Api > Python > Raphtory > HistoryTimestamp --- title: "HistoryTimestamp" breadcrumb: "Reference / Python / raphtory / HistoryTimestamp" --- # HistoryTimestamp History view that exposes timestamps in milliseconds since the Unix epoch. ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect all timestamps into a NumPy ndarray. 
| | [`collect_rev`](#collect_rev) | Collect all timestamps into a NumPy ndarray in reverse order. | | [`to_list`](#to_list) | Collect all timestamps into a list. | | [`to_list_rev`](#to_list_rev) | Collect all timestamps into a list in reverse order. | --- ## Method Details ### [collect](#collect) Collect all timestamps into a NumPy ndarray. #### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | Timestamps in milliseconds since the Unix epoch. | ### [collect_rev](#collect_rev) Collect all timestamps into a NumPy ndarray in reverse order. #### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | Timestamps in milliseconds since the Unix epoch in reverse order. | ### [to_list](#to_list) Collect all timestamps into a list. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of timestamps. | ### [to_list_rev](#to_list_rev) Collect all timestamps into a list in reverse order. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of timestamps. | --- ## Reference > Api > Python > Raphtory > IndexSpec --- title: "IndexSpec" breadcrumb: "Reference / Python / raphtory / IndexSpec" --- # IndexSpec ## Properties | Property | Description | |----------|-------------| | [`edge_metadata`](#edge_metadata) | Get edge metadata. | | [`edge_properties`](#edge_properties) | Get edge properties. | | [`node_metadata`](#node_metadata) | Get node metadata. | | [`node_properties`](#node_properties) | Get node properties. | --- ## Property Details ### [edge_metadata](#edge_metadata) Get edge metadata. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [edge_properties](#edge_properties) Get edge properties. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [node_metadata](#node_metadata) Get node metadata. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [node_properties](#node_properties) Get node properties. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | --- ## Reference > Api > Python > Raphtory > IndexSpecBuilder --- title: "IndexSpecBuilder" breadcrumb: "Reference / Python / raphtory / IndexSpecBuilder" --- # IndexSpecBuilder ## Methods | Method | Description | |--------|-------------| | [`build`](#build) | Return a spec | | [`with_all_edge_metadata`](#with_all_edge_metadata) | Adds all edge metadata to the spec. | | [`with_all_edge_properties`](#with_all_edge_properties) | Adds all edge properties to the spec. | | [`with_all_edge_properties_and_metadata`](#with_all_edge_properties_and_metadata) | Adds all edge properties and metadata to the spec. | | [`with_all_node_metadata`](#with_all_node_metadata) | Adds all node metadata to the spec. | | [`with_all_node_properties`](#with_all_node_properties) | Adds all node properties to the spec. | | [`with_all_node_properties_and_metadata`](#with_all_node_properties_and_metadata) | Adds all node properties and metadata to the spec. | | [`with_edge_metadata`](#with_edge_metadata) | Adds specified edge metadata to the spec. | | [`with_edge_properties`](#with_edge_properties) | Adds specified edge properties to the spec. 
| | [`with_node_metadata`](#with_node_metadata) | Adds specified node metadata to the spec. | | [`with_node_properties`](#with_node_properties) | Adds specified node properties to the spec. | --- ## Method Details ### [build](#build) Return a spec #### Returns | Type | Description | |------|-------------| | [IndexSpec](/docs/reference/api/python/raphtory/IndexSpec) | | ### [with_all_edge_metadata](#with_all_edge_metadata) Adds all edge metadata to the spec. ### [with_all_edge_properties](#with_all_edge_properties) Adds all edge properties to the spec. ### [with_all_edge_properties_and_metadata](#with_all_edge_properties_and_metadata) Adds all edge properties and metadata to the spec. ### [with_all_node_metadata](#with_all_node_metadata) Adds all node metadata to the spec. ### [with_all_node_properties](#with_all_node_properties) Adds all node properties to the spec. ### [with_all_node_properties_and_metadata](#with_all_node_properties_and_metadata) Adds all node properties and metadata to the spec. ### [with_edge_metadata](#with_edge_metadata) **Signature:** `with_edge_metadata(props)` Adds specified edge metadata to the spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `props` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [with_edge_properties](#with_edge_properties) **Signature:** `with_edge_properties(props)` Adds specified edge properties to the spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `props` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [with_node_metadata](#with_node_metadata) **Signature:** `with_node_metadata(props)` Adds specified node metadata to the spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `props` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [with_node_properties](#with_node_properties) **Signature:** `with_node_properties(props)` Adds specified node properties to the spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `props` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Raphtory > Intervals --- title: "Intervals" breadcrumb: "Reference / Python / raphtory / Intervals" --- # Intervals View over the intervals between consecutive timestamps, expressed in milliseconds. ## Methods | Method | Description | |--------|-------------| | [`collect`](#collect) | Collect all interval values in milliseconds. | | [`collect_rev`](#collect_rev) | Collect all interval values in reverse order. | | [`max`](#max) | Calculate the maximum interval in milliseconds. | | [`mean`](#mean) | Calculate the mean interval in milliseconds. | | [`median`](#median) | Calculate the median interval in milliseconds. | | [`min`](#min) | Calculate the minimum interval in milliseconds. | | [`to_list`](#to_list) | Collect all interval values in milliseconds into a list. | | [`to_list_rev`](#to_list_rev) | Collect all interval values in milliseconds into a list in reverse order. | --- ## Method Details ### [collect](#collect) Collect all interval values in milliseconds. #### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | NumPy NDArray of interval values in milliseconds. | ### [collect_rev](#collect_rev) Collect all interval values in reverse order. 
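As an illustration of the `Intervals` view as a whole, the sketch below reads the inter-event gaps off a node's update history. It assumes a `Graph` populated with the usual `add_edge` calls and that `intervals` is reached through the node's `history` as a plain attribute; treat the exact accessor form as an assumption rather than a guarantee.

```python
import raphtory as rp

# Three updates to the edge (a, b) give node "a" three time entries and
# therefore two inter-event gaps (timestamps are epoch milliseconds).
g = rp.Graph()
g.add_edge(1, "a", "b")
g.add_edge(4, "a", "b")
g.add_edge(10, "a", "b")

intervals = g.node("a").history.intervals  # Intervals view (attribute access assumed)

print(intervals.to_list())               # e.g. [3, 6]
print(intervals.mean())                  # e.g. 4.5
print(intervals.min(), intervals.max())  # e.g. 3 6
```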
#### Returns | Type | Description | |------|-------------| | `NDArray[np.int64]` | Intervals in reverse order. | ### [max](#max) Calculate the maximum interval in milliseconds. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | Maximum interval, or None if there are no intervals. | ### [mean](#mean) Calculate the mean interval in milliseconds. #### Returns | Type | Description | |------|-------------| | [float](https://docs.python.org/3/library/functions.html#float), optional | Mean interval, or None if there are no intervals. | ### [median](#median) Calculate the median interval in milliseconds. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | Median interval, or None if there are no intervals. | ### [min](#min) Calculate the minimum interval in milliseconds. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | Minimum interval, or None if there are no intervals. | ### [to_list](#to_list) Collect all interval values in milliseconds into a list. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of intervals in milliseconds. | ### [to_list_rev](#to_list_rev) Collect all interval values in milliseconds into a list in reverse order. #### Returns | Type | Description | |------|-------------| | list[[int](https://docs.python.org/3/library/functions.html#int)] | List of intervals in milliseconds. | --- ## Reference > Api > Python > Raphtory > Metadata --- title: "Metadata" breadcrumb: "Reference / Python / raphtory / Metadata" --- # Metadata A view of the metadata of an entity. ## Methods | Method | Description | |--------|-------------| | [`as_dict`](#as_dict) | Convert the properties view to a Python dict. | | [`get`](#get) | Get a property value by key. | | [`items`](#items) | Lists the property keys together with the corresponding values. | | [`keys`](#keys) | Lists the available property keys. | | [`values`](#values) | Lists the property values. | --- ## Method Details ### [as_dict](#as_dict) **Signature:** `as_dict() -> dict[str, Any]` Convert the properties view to a Python dict. ### [get](#get) **Signature:** `get(key)` Get a property value by key. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property | #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | the property value, or `None` if no value exists for `key` | ### [items](#items) Lists the property keys together with the corresponding values. ### [keys](#keys) Lists the available property keys. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | the property keys | ### [values](#values) Lists the property values. #### Returns | Type | Description | |------|-------------| | list[[PropValue](/docs/reference/api/python/typing)] | | --- ## Reference > Api > Python > Raphtory > MetadataView --- title: "MetadataView" breadcrumb: "Reference / Python / raphtory / MetadataView" --- # MetadataView ## Methods | Method | Description | |--------|-------------| | [`as_dict`](#as_dict) | | | [`get`](#get) | | | [`items`](#items) | | | [`keys`](#keys) | | | [`values`](#values) | | --- ## Method Details ###
[as_dict](#as_dict) ### [get](#get) **Signature:** `get(key)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [items](#items) ### [keys](#keys) ### [values](#values) --- ## Reference > Api > Python > Raphtory > MutableEdge --- title: "MutableEdge" breadcrumb: "Reference / Python / raphtory / MutableEdge" --- # MutableEdge ## Methods | Method | Description | |--------|-------------| | [`add_metadata`](#add_metadata) | Add metadata to an edge in the graph. | | [`add_updates`](#add_updates) | Add updates to an edge in the graph at a specified time. | | [`delete`](#delete) | Mark the edge as deleted at the specified time. | | [`update_metadata`](#update_metadata) | Update metadata of an edge in the graph overwriting existing values. | --- ## Method Details ### [add_metadata](#add_metadata) **Signature:** `add_metadata(metadata, layer=None)` Add metadata to an edge in the graph. This function is used to add properties to an edge that do not change over time. These properties are fundamental attributes of the edge. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | A dictionary of properties to be added to the edge. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want these properties to be added on to. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_updates](#add_updates) **Signature:** `add_updates(t, properties=None, layer=None, event_id=None)` Add updates to an edge in the graph at a specified time. This function allows for the addition of property updates to an edge within the graph. The updates are time-stamped, meaning they are applied at the specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp at which the updates should be applied. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of properties to update. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want these properties to be added on to. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [delete](#delete) **Signature:** `delete(t, layer=None, event_id=None)` Mark the edge as deleted at the specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp at which the deletion should be applied. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want the deletion applied to. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The event id for the deletion's time entry. 
| #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(metadata, layer=None)` Update metadata of an edge in the graph overwriting existing values. This function is used to add properties to an edge that do not change over time. These properties are fundamental attributes of the edge. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | A dictionary of properties to be added to the edge. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer you want these properties to be added on to. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Raphtory > MutableNode --- title: "MutableNode" breadcrumb: "Reference / Python / raphtory / MutableNode" --- # MutableNode ## Methods | Method | Description | |--------|-------------| | [`add_metadata`](#add_metadata) | Add metadata to a node in the graph. | | [`add_updates`](#add_updates) | Add updates to a node in the graph at a specified time. | | [`set_node_type`](#set_node_type) | Set the type on the node. This only works if the type has not been previously set, otherwise it will raise an error. | | [`update_metadata`](#update_metadata) | Update metadata of a node in the graph overwriting existing values. | --- ## Method Details ### [add_metadata](#add_metadata) **Signature:** `add_metadata(metadata)` Add metadata to a node in the graph. This function is used to add properties to a node that do not change over time. These properties are fundamental attributes of the node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | A dictionary of properties to be added to the node. Each key is a string representing the property name, and each value is of type Prop representing the property value. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_updates](#add_updates) **Signature:** `add_updates(t, properties=None, event_id=None)` Add updates to a node in the graph at a specified time. This function allows for the addition of property updates to a node within the graph. The updates are time-stamped, meaning they are applied at the specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp at which the updates should be applied. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of properties to update. Each key is a string representing the property name, and each value is of type Prop representing the property value. If None, no properties are updated. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id.
| #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [set_node_type](#set_node_type) **Signature:** `set_node_type(new_type)` Set the type on the node. This only works if the type has not been previously set, otherwise it will raise an error. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `new_type` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The new type to be set | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(metadata)` Update metadata of a node in the graph overwriting existing values. This function is used to add properties to a node that do not change over time. These properties are fundamental attributes of the node. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [PropInput](/docs/reference/api/python/typing) | - | A dictionary of properties to be added to the node. Each key is a string representing the property name, and each value is of type Prop representing the property value. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Raphtory > NestedEdges --- title: "NestedEdges" breadcrumb: "Reference / Python / raphtory / NestedEdges" --- # NestedEdges ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the NestedEdges including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the NestedEdges including all events at `time`. | | [`before`](#before) | Create a view of the NestedEdges including all events before `end` (exclusive). | | [`collect`](#collect) | Collect all edges into a list | | [`default_layer`](#default_layer) | Return a view of NestedEdges containing only the default edge layer | | [`exclude_layer`](#exclude_layer) | Return a view of NestedEdges containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of NestedEdges containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of NestedEdges containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of NestedEdges containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`explode`](#explode) | Explodes returns an edge object for each update within the original edge. | | [`explode_layers`](#explode_layers) | Explode layers returns an edge object for each layer within the original edge. These new edge objects contain only updates from their respective layers. | | [`has_layer`](#has_layer) | Check if NestedEdges has the layer `"name"` | | [`is_active`](#is_active) | Check if the edges are active (there is at least one update during this time). | | [`is_deleted`](#is_deleted) | Check if edges are deleted. | | [`is_self_loop`](#is_self_loop) | Check if the edges are on the same node.
| | [`is_valid`](#is_valid) | Check if edges are valid (i.e., not deleted). | | [`latest`](#latest) | Create a view of the NestedEdges including all events at the latest time. | | [`layer`](#layer) | Return a view of NestedEdges containing the layer `"name"` | | [`layers`](#layers) | Return a view of NestedEdges containing all layers `names` | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the NestedEdges including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the NestedEdges including all events that have not been explicitly deleted at the latest time. | | [`valid_layers`](#valid_layers) | Return a view of NestedEdges containing all layers `names` | | [`window`](#window) | Create a view of the NestedEdges including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`deletions`](#deletions) | Returns a history object for each edge containing its deletion times. | | [`dst`](#dst) | Returns the destination node of the edge. | | [`earliest_time`](#earliest_time) | Returns the earliest time of the edges. | | [`end`](#end) | Gets the latest time that this NestedEdges is valid. | | [`history`](#history) | Returns a history object for each edge containing time entries for when the edge is added or a change to the edge is made. | | [`id`](#id) | Returns all ids of the edges. | | [`latest_time`](#latest_time) | Returns the latest time of the edges. | | [`layer_name`](#layer_name) | Returns the name of the layer the edges belong to, assuming they only belong to one layer. | | [`layer_names`](#layer_names) | Returns the names of the layers the edges belong to. | | [`metadata`](#metadata) | Get a view of the metadata only. | | [`nbr`](#nbr) | Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) | | [`properties`](#properties) | Returns all properties of the edges | | [`src`](#src) | Returns the source node of the edge. | | [`start`](#start) | Gets the start time for rolling and expanding windows for this NestedEdges | | [`time`](#time) | Returns the times of exploded edges. | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this NestedEdges. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the NestedEdges including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [at](#at) **Signature:** `at(time)` Create a view of the NestedEdges including all events at `time`.
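The `before`/`at`/`after`/`window` views documented here share their semantics with the rest of the API, including the graph itself, which makes them easy to try out. A minimal sketch, assuming the usual `Graph`, `add_edge` and `count_edges` calls:

```python
import raphtory as rp

g = rp.Graph()
g.add_edge(1, "a", "b")
g.add_edge(5, "b", "c")
g.add_edge(9, "a", "c")

print(g.count_edges())               # 3 edges over the full history
print(g.window(2, 8).count_edges())  # 1: only the update at t=5 falls in [2, 8)
print(g.after(5).count_edges())      # 1: `after` is exclusive, so only t=9 remains
print(g.at(5).count_edges())         # 1: events at exactly t=5
```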
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [before](#before) **Signature:** `before(end)` Create a view of the NestedEdges including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [collect](#collect) Collect all edges into a list #### Returns | Type | Description | |------|-------------| | list[list[[Edges](/docs/reference/api/python/raphtory/Edges)]] | the list of edges | ### [default_layer](#default_layer) Return a view of NestedEdges containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of NestedEdges containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of NestedEdges containing all layers except the excluded `names` Errors if any of the layers do not exist. 
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of NestedEdges containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of NestedEdges containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [explode](#explode) Explodes returns an edge object for each update within the original edge. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [explode_layers](#explode_layers) Explode layers returns an edge object for each layer within the original edge. These new edge objects contain only updates from their respective layers.
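For example, exploding a multi-layer edge separates its updates by layer, while `explode` yields one edge object per individual update. A sketch on a single edge, assuming the single-edge accessors (`g.edge`, `explode`, `explode_layers`, `layer_name`) mirror the nested ones documented here and that `add_edge` accepts a `layer` argument; the layer names are illustrative.

```python
import raphtory as rp

g = rp.Graph()
g.add_edge(1, "a", "b", layer="email")
g.add_edge(2, "a", "b", layer="payment")
g.add_edge(3, "a", "b", layer="payment")

e = g.edge("a", "b")

# One edge object per layer the edge participates in.
for layered in e.explode_layers():
    print(layered.layer_name)         # e.g. "email", then "payment"

# One edge object per individual update, regardless of layer.
print(sum(1 for _ in e.explode()))    # 3
```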
#### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if NestedEdges has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_active](#is_active) Check if the edges are active (there is at least one update during this time). #### Returns | Type | Description | |------|-------------| | [NestedBoolIterable](/docs/reference/api/python/iterables/NestedBoolIterable) | | ### [is_deleted](#is_deleted) Check if edges are deleted. #### Returns | Type | Description | |------|-------------| | [NestedBoolIterable](/docs/reference/api/python/iterables/NestedBoolIterable) | | ### [is_self_loop](#is_self_loop) Check if the edges are on the same node. #### Returns | Type | Description | |------|-------------| | [NestedBoolIterable](/docs/reference/api/python/iterables/NestedBoolIterable) | | ### [is_valid](#is_valid) Check if edges are valid (i.e., not deleted). #### Returns | Type | Description | |------|-------------| | [NestedBoolIterable](/docs/reference/api/python/iterables/NestedBoolIterable) | | ### [latest](#latest) Create a view of the NestedEdges including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of NestedEdges containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of NestedEdges containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window.
| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the NestedEdges including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [snapshot_latest](#snapshot_latest) Create a view of the NestedEdges including all events that have not been explicitly deleted at the latest time. 
This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of NestedEdges containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the NestedEdges including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | | --- ## Property Details ### [deletions](#deletions) Returns a history object for each edge containing its deletion times. #### Returns | Type | Description | |------|-------------| | [NestedHistoryIterable](/docs/reference/api/python/iterables/NestedHistoryIterable) | A nested iterable of history objects, one for each edge. | ### [dst](#dst) Returns the destination node of the edge. #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [earliest_time](#earliest_time) Returns the earliest time of the edges. #### Returns | Type | Description | |------|-------------| | [NestedOptionEventTimeIterable](/docs/reference/api/python/iterables/NestedOptionEventTimeIterable) | A nested iterable of `EventTime`s. | ### [end](#end) Gets the latest time that this NestedEdges is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this NestedEdges is valid or None if the NestedEdges is valid for all times. | ### [history](#history) Returns a history object for each edge containing time entries for when the edge is added or a change to the edge is made. #### Returns | Type | Description | |------|-------------| | [NestedHistoryIterable](/docs/reference/api/python/iterables/NestedHistoryIterable) | A nested iterable of history objects, one for each edge. | ### [id](#id) Returns all ids of the edges. #### Returns | Type | Description | |------|-------------| | [NestedGIDGIDIterable](/docs/reference/api/python/iterables/NestedGIDGIDIterable) | | ### [latest_time](#latest_time) Returns the latest time of the edges. #### Returns | Type | Description | |------|-------------| | [NestedOptionEventTimeIterable](/docs/reference/api/python/iterables/NestedOptionEventTimeIterable) | A nested iterable of `EventTime`s. | ### [layer_name](#layer_name) Returns the name of the layer the edges belong to, assuming they only belong to one layer.
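The deletion-related accessors (`deletions`, `is_deleted`, `is_valid`, `snapshot_at`) are easiest to see on a graph with deletion semantics. A hedged sketch using `PersistentGraph` and the single-edge counterparts of the nested accessors above, which are assumed to carry the same names:

```python
import raphtory as rp

g = rp.PersistentGraph()
g.add_edge(1, "a", "b")
g.delete_edge(7, "a", "b")   # mark the edge as deleted at t=7

e = g.edge("a", "b")
print(e.is_deleted())        # True: the most recent event on the edge is a deletion
print(e.is_valid())          # False

# Viewed at t=5 the deletion has not happened yet (persistent semantics assumed).
print(g.at(5).edge("a", "b").is_deleted())  # False
```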
#### Returns | Type | Description | |------|-------------| | [NestedArcStringIterable](/docs/reference/api/python/iterables/NestedArcStringIterable) | | ### [layer_names](#layer_names) Returns the names of the layers the edges belong to. #### Returns | Type | Description | |------|-------------| | [NestedArcStringVecIterable](/docs/reference/api/python/iterables/NestedArcStringVecIterable) | | ### [metadata](#metadata) Get a view of the metadata only. #### Returns | Type | Description | |------|-------------| | [MetadataListList](/docs/reference/api/python/iterables/MetadataListList) | | ### [nbr](#nbr) Returns the node at the other end of the edge (same as `dst()` for out-edges and `src()` for in-edges) #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [properties](#properties) Returns all properties of the edges #### Returns | Type | Description | |------|-------------| | [PyNestedPropsIterable](/docs/reference/api/python/iterables/PyNestedPropsIterable) | | ### [src](#src) Returns the source node of the edge. #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [start](#start) Gets the start time for rolling and expanding windows for this NestedEdges #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this NestedEdges is valid or None if the NestedEdges is valid for all times. | ### [time](#time) Returns the times of exploded edges. #### Returns | Type | Description | |------|-------------| | [NestedEventTimeIterable](/docs/reference/api/python/iterables/NestedEventTimeIterable) | A nested iterable of `EventTime`s. | ### [window_size](#window_size) Get the window size (difference between start and end) for this NestedEdges. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > Node --- title: "Node" breadcrumb: "Reference / Python / raphtory / Node" --- # Node A node (or vertex) in the graph. ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the Node including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the Node including all events at `time`. | | [`before`](#before) | Create a view of the Node including all events before `end` (exclusive). | | [`default_layer`](#default_layer) | Return a view of Node containing only the default edge layer | | [`degree`](#degree) | Get the degree of this node (i.e., the number of edges that are incident to it). | | [`edge_history_count`](#edge_history_count) | Get the number of edge events for this node | | [`exclude_layer`](#exclude_layer) | Return a view of Node containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of Node containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of Node containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of Node containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window.
| | [`filter`](#filter) | Return a filtered view that only includes nodes and edges that satisfy the filter | | [`has_layer`](#has_layer) | Check if Node has the layer `"name"` | | [`in_degree`](#in_degree) | Get the in-degree of this node (i.e., the number of edges that are incident to it from other nodes). | | [`is_active`](#is_active) | Check if the node is active (its history is not empty). | | [`latest`](#latest) | Create a view of the Node including all events at the latest time. | | [`layer`](#layer) | Return a view of Node containing the layer `"name"` | | [`layers`](#layers) | Return a view of Node containing all layers `names` | | [`out_degree`](#out_degree) | Get the out-degree of this node (i.e., the number of edges that are incident to it from this node). | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the Node including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the Node including all events that have not been explicitly deleted at the latest time. | | [`valid_layers`](#valid_layers) | Return a view of Node containing all layers `names` | | [`window`](#window) | Create a view of the Node including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`earliest_time`](#earliest_time) | Returns the earliest time that the node exists. | | [`edges`](#edges) | Get the edges that are incident to this node. | | [`end`](#end) | Gets the latest time that this Node is valid. | | [`history`](#history) | Returns the history of a node, including node additions and changes made to the node. | | [`id`](#id) | Returns the id of the node. | | [`in_edges`](#in_edges) | Get the edges that point into this node. | | [`in_neighbours`](#in_neighbours) | Get the neighbours of this node that point into this node. | | [`latest_time`](#latest_time) | Returns the latest time that the node exists. | | [`metadata`](#metadata) | The metadata of the node | | [`name`](#name) | Returns the name of the node. | | [`neighbours`](#neighbours) | Get the neighbours of this node. | | [`node_type`](#node_type) | Returns the type of node | | [`out_edges`](#out_edges) | Get the edges that point out of this node. | | [`out_neighbours`](#out_neighbours) | Get the neighbours of this node that point out of this node. | | [`properties`](#properties) | The properties of the node | | [`start`](#start) | Gets the start time for rolling and expanding windows for this Node | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this Node. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the Node including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window.
| #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [at](#at) **Signature:** `at(time)` Create a view of the Node including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [before](#before) **Signature:** `before(end)` Create a view of the Node including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [default_layer](#default_layer) Return a view of Node containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [degree](#degree) Get the degree of this node (i.e., the number of edges that are incident to it). #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The degree of this node. | ### [edge_history_count](#edge_history_count) Get the number of edge events for this node #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The number of edge events | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of Node containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of Node containing all layers except the excluded `names` Errors if any of the layers do not exist. 
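Layer views compose with the other node accessors, so per-layer degrees fall out directly. A minimal sketch; the layer names are illustrative and `add_edge` is assumed to accept a `layer` argument:

```python
import raphtory as rp

g = rp.Graph()
g.add_edge(1, "a", "b", layer="email")
g.add_edge(2, "a", "c", layer="payment")
g.add_edge(3, "a", "d", layer="payment")

n = g.node("a")
print(n.degree())                           # 3: neighbours across all layers
print(n.layer("payment").degree())          # 2: payment edges only
print(n.exclude_layer("payment").degree())  # 1: everything except payment
print(n.has_layer("email"))                 # True
```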
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of Node containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of Node containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [filter](#filter) **Signature:** `filter(filter)` Return a filtered view that only includes nodes and edges that satisfy the filter #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr` | - | The filter to apply to the nodes and edges. 
| #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The filtered view | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if Node has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [in_degree](#in_degree) Get the in-degree of this node (i.e., the number of edges that are incident to it from other nodes). #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The in-degree of this node. | ### [is_active](#is_active) Check if the node is active (its history is not empty). #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [latest](#latest) Create a view of the Node including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of Node containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of Node containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [out_degree](#out_degree) Get the out-degree of this node (i.e., the number of edges that are incident to it from this node). #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The out-degree of this node. | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`.
| | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the Node including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [snapshot_latest](#snapshot_latest) Create a view of the Node including all events that have not been explicitly deleted at the latest time. 
This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of Node containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the Node including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | | --- ## Property Details ### [earliest_time](#earliest_time) Returns the earliest time that the node exists. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that the node exists. | ### [edges](#edges) Get the edges that are incident to this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The incident edges. | ### [end](#end) Gets the latest time that this Node is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this Node is valid or None if the Node is valid for all times. | ### [history](#history) Returns the history of a node, including node additions and changes made to the node. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | A History object for the node, providing access to time entries. | ### [id](#id) Returns the id of the node. This is a unique identifier for the node. #### Returns | Type | Description | |------|-------------| | `(str` \| `int)` | The id of the node. | ### [in_edges](#in_edges) Get the edges that point into this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The inbound edges. | ### [in_neighbours](#in_neighbours) Get the neighbours of this node that point into this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The in-neighbours. | ### [latest_time](#latest_time) Returns the latest time that the node exists. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that the node exists. | ### [metadata](#metadata) The metadata of the node #### Returns | Type | Description | |------|-------------| | [Metadata](/docs/reference/api/python/raphtory/Metadata) | | ### [name](#name) Returns the name of the node.
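Putting the neighbourhood accessors together, a short sketch; the node names and the `amount` property are illustrative:

```python
import raphtory as rp

g = rp.Graph()
g.add_edge(1, "alice", "bob", properties={"amount": 10})
g.add_edge(2, "carol", "alice", properties={"amount": 25})

n = g.node("alice")
print(n.name)                                     # "alice"
print(n.degree(), n.in_degree(), n.out_degree())  # 2 1 1
print([x.name for x in n.out_neighbours])         # ["bob"]
print([x.name for x in n.in_neighbours])          # ["carol"]
```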
#### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | The id of the node as a string. | ### [neighbours](#neighbours) Get the neighbours of this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The neighbours (both inbound and outbound). | ### [node_type](#node_type) Returns the type of node #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | The node type if it is set or `None` otherwise. | ### [out_edges](#out_edges) Get the edges that point out of this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The outbound edges. | ### [out_neighbours](#out_neighbours) Get the neighbours of this node that point out of this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The out-neighbours. | ### [properties](#properties) The properties of the node #### Returns | Type | Description | |------|-------------| | [Properties](/docs/reference/api/python/raphtory/Properties) | A list of properties. | ### [start](#start) Gets the start time for rolling and expanding windows for this Node #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this Node is valid or None if the Node is valid for all times. | ### [window_size](#window_size) Get the window size (difference between start and end) for this Node. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > Nodes --- title: "Nodes" breadcrumb: "Reference / Python / raphtory / Nodes" --- # Nodes A list of nodes that can be iterated over. ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the Nodes including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the Nodes including all events at `time`. | | [`before`](#before) | Create a view of the Nodes including all events before `end` (exclusive). | | [`collect`](#collect) | Collect all nodes into a list | | [`default_layer`](#default_layer) | Return a view of Nodes containing only the default edge layer | | [`degree`](#degree) | Returns the number of edges of the nodes. | | [`edge_history_count`](#edge_history_count) | Return the number of edge updates for each node | | [`exclude_layer`](#exclude_layer) | Return a view of Nodes containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of Nodes containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of Nodes containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of Nodes containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`filter`](#filter) | Return a filtered view that only includes nodes and edges that satisfy the filter | | [`has_layer`](#has_layer) | Check if Nodes has the layer `"name"` | | [`in_degree`](#in_degree) | Returns the number of in edges of the nodes. 
| | [`latest`](#latest) | Create a view of the Nodes including all events at the latest time. | | [`layer`](#layer) | Return a view of Nodes containing the layer `"name"` | | [`layers`](#layers) | Return a view of Nodes containing all layers `names` | | [`out_degree`](#out_degree) | Returns the number of out edges of the nodes. | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the Nodes including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the Nodes including all events that have not been explicitly deleted at the latest time. | | [`to_df`](#to_df) | Converts the graph's nodes into a Pandas DataFrame. | | [`type_filter`](#type_filter) | Filter nodes by node type. | | [`valid_layers`](#valid_layers) | Return a view of Nodes containing all layers `names` | | [`window`](#window) | Create a view of the Nodes including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`earliest_time`](#earliest_time) | The earliest times nodes are active | | [`edges`](#edges) | Get the edges that are incident to this node. | | [`end`](#end) | Gets the latest time that this Nodes is valid. | | [`history`](#history) | Returns all history objects of nodes, with information on when a node is added or change to a node is made. | | [`id`](#id) | The node ids | | [`in_edges`](#in_edges) | Get the edges that point into this node. | | [`in_neighbours`](#in_neighbours) | Get the neighbours of this node that point into this node. | | [`latest_time`](#latest_time) | The latest time nodes are active | | [`metadata`](#metadata) | The metadata of the nodes. | | [`name`](#name) | The node names | | [`neighbours`](#neighbours) | Get the neighbours of this node. | | [`node_type`](#node_type) | The node types | | [`out_edges`](#out_edges) | Get the edges that point out of this node. | | [`out_neighbours`](#out_neighbours) | Get the neighbours of this node that point out of this node. | | [`properties`](#properties) | The properties of the node. | | [`start`](#start) | Gets the start time for rolling and expanding windows for this Nodes | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this Nodes. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the Nodes including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [at](#at) **Signature:** `at(time)` Create a view of the Nodes including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. 
| #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [before](#before) **Signature:** `before(end)` Create a view of the Nodes including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [collect](#collect) Collect all nodes into a list #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | the list of nodes | ### [default_layer](#default_layer) Return a view of Nodes containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [degree](#degree) Returns the number of edges of the nodes. #### Returns | Type | Description | |------|-------------| | [DegreeView](/docs/reference/api/python/node_state/DegreeView) | a view of the undirected node degrees. | ### [edge_history_count](#edge_history_count) Return the number of edge updates for each node #### Returns | Type | Description | |------|-------------| | [EdgeHistoryCountView](/docs/reference/api/python/node_state/EdgeHistoryCountView) | a view of the edge history counts | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of Nodes containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of Nodes containing all layers except the excluded `names` Errors if any of the layers do not exist. 
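As an illustrative sketch of how the strict layer selectors differ from their `valid_` counterparts (the graph and layer names below are made up):

```python
from raphtory import Graph

# Hypothetical two-layer graph
g = Graph()
g.add_edge(1, "a", "b", layer="transfers")
g.add_edge(2, "a", "c", layer="messages")

nodes = g.nodes

# Strict selectors error if a named layer does not exist
transfers_only = nodes.exclude_layers(["messages"])
# nodes.exclude_layers(["does_not_exist"])  # would raise an error

# The `valid_` variants silently ignore unknown layer names
transfers_only = nodes.exclude_valid_layers(["messages", "does_not_exist"])
print(transfers_only.out_degree())  # out-degrees computed on the remaining layer(s)
```

The `valid_` variants can be convenient when layer names come from user input or configuration and may not all be present in the graph.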
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of Nodes containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of Nodes containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [filter](#filter) **Signature:** `filter(filter)` Return a filtered view that only includes nodes and edges that satisfy the filter #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr` | - | The filter to apply to the nodes and edges. 
| #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The filtered view | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if Nodes has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [in_degree](#in_degree) Returns the number of in edges of the nodes. #### Returns | Type | Description | |------|-------------| | [DegreeView](/docs/reference/api/python/node_state/DegreeView) | a view of the in-degrees of the nodes | ### [latest](#latest) Create a view of the Nodes including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of Nodes containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of Nodes containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [out_degree](#out_degree) Returns the number of out edges of the nodes. #### Returns | Type | Description | |------|-------------| | [DegreeView](/docs/reference/api/python/node_state/DegreeView) | a view of the out-degrees of the nodes. | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. 
| | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the Nodes including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [snapshot_latest](#snapshot_latest) Create a view of the Nodes including all events that have not been explicitly deleted at the latest time. This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | ### [to_df](#to_df) **Signature:** `to_df(include_property_history=False, convert_datetime=False)` Converts the graph's nodes into a Pandas DataFrame. This method will create a DataFrame with the following columns: - "name": The name of the node. - "properties": The properties of the node. - "update_history": The update history of the node. 
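A brief, hypothetical usage sketch of `to_df` (the graph and property values below are made up for illustration):

```python
from raphtory import Graph

# Hypothetical graph with a node property that changes over time
g = Graph()
g.add_node(1, "alice", properties={"score": 0.5})
g.add_node(2, "alice", properties={"score": 0.9})
g.add_node(1, "bob")

# Latest property values only
df = g.nodes.to_df()
print(df[["name", "properties"]])

# Full property history, with timestamps converted to Python datetimes
df_hist = g.nodes.to_df(include_property_history=True, convert_datetime=True)
print(df_hist.head())
```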
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `include_property_history` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean, if set to `True`, the history of each property is included, if `False`, only the latest value is shown. Defaults to False. | | `convert_datetime` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | A boolean, if set to `True` will convert the timestamp to python datetimes. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) | the view of the node data as a pandas Dataframe. | ### [type_filter](#type_filter) **Signature:** `type_filter(node_types)` Filter nodes by node type. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node_types` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | the list of node types to keep. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | the filtered view of the nodes | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of Nodes containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the Nodes including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [Nodes](/docs/reference/api/python/raphtory/Nodes) | | --- ## Property Details ### [earliest_time](#earliest_time) The earliest times nodes are active #### Returns | Type | Description | |------|-------------| | [EarliestTimeView](/docs/reference/api/python/node_state/EarliestTimeView) | a view of the earliest active times | ### [edges](#edges) Get the edges that are incident to this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The incident edges. | ### [end](#end) Gets the latest time that this Nodes is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this Nodes is valid or None if the Nodes is valid for all times. | ### [history](#history) Returns all history objects of nodes, with information on when a node is added or change to a node is made. 
#### Returns | Type | Description | |------|-------------| | [HistoryView](/docs/reference/api/python/node_state/HistoryView) | a view of the node histories | ### [id](#id) The node ids #### Returns | Type | Description | |------|-------------| | [IdView](/docs/reference/api/python/node_state/IdView) | a view of the node ids | ### [in_edges](#in_edges) Get the edges that point into this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The inbound edges. | ### [in_neighbours](#in_neighbours) Get the neighbours of this node that point into this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The in-neighbours. | ### [latest_time](#latest_time) The latest time nodes are active #### Returns | Type | Description | |------|-------------| | [LatestTimeView](/docs/reference/api/python/node_state/LatestTimeView) | a view of the latest active times | ### [metadata](#metadata) The metadata of the nodes. #### Returns | Type | Description | |------|-------------| | [MetadataView](/docs/reference/api/python/raphtory/MetadataView) | A view of the node properties. | ### [name](#name) The node names #### Returns | Type | Description | |------|-------------| | [NameView](/docs/reference/api/python/node_state/NameView) | a view of the node names | ### [neighbours](#neighbours) Get the neighbours of this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The neighbours (both inbound and outbound). | ### [node_type](#node_type) The node types #### Returns | Type | Description | |------|-------------| | [NodeTypeView](/docs/reference/api/python/node_state/NodeTypeView) | a view of the node types | ### [out_edges](#out_edges) Get the edges that point out of this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The outbound edges. | ### [out_neighbours](#out_neighbours) Get the neighbours of this node that point out of this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The out-neighbours. | ### [properties](#properties) The properties of the node. #### Returns | Type | Description | |------|-------------| | [PropertiesView](/docs/reference/api/python/raphtory/PropertiesView) | A view of the node properties. | ### [start](#start) Gets the start time for rolling and expanding windows for this Nodes #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this Nodes is valid or None if the Nodes is valid for all times. | ### [window_size](#window_size) Get the window size (difference between start and end) for this Nodes. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > OptionalEventTime --- title: "OptionalEventTime" breadcrumb: "Reference / Python / raphtory / OptionalEventTime" --- # OptionalEventTime Raphtory’s optional EventTime type. Instances of OptionalEventTime may contain an EventTime, or be empty. This is used for functions that may not return data (such as earliest_time and latest_time) because the data is unavailable. 
If data is contained, OptionalEventTime instances can be used similarly to EventTime. If empty, time operations (such as .t, .dt, .event_id) will return None. An empty OptionalEventTime is considered smaller than any EventTime or OptionalEventTime with data. ## Methods | Method | Description | |--------|-------------| | [`get_event_time`](#get_event_time) | Returns the contained EventTime if it exists, or else None. | | [`is_none`](#is_none) | Returns true if the OptionalEventTime doesn't contain an EventTime. | | [`is_some`](#is_some) | Returns true if the OptionalEventTime contains an EventTime. | ## Properties | Property | Description | |----------|-------------| | [`as_tuple`](#as_tuple) | Return this entry as a tuple of (timestamp, event_id), where the timestamp is in milliseconds if an EventTime is contained, or else None. | | [`dt`](#dt) | Returns the UTC datetime representation of this EventTime's timestamp if an EventTime is contained, or else None. | | [`event_id`](#event_id) | Returns the event id used to order events within the same timestamp if an EventTime is contained, or else None. | | [`t`](#t) | Returns the timestamp in milliseconds since the Unix epoch if an EventTime is contained, or else None. | --- ## Method Details ### [get_event_time](#get_event_time) Returns the contained EventTime if it exists, or else None. ### [is_none](#is_none) Returns true if the OptionalEventTime doesn't contain an EventTime. #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [is_some](#is_some) Returns true if the OptionalEventTime contains an EventTime. #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | --- ## Property Details ### [as_tuple](#as_tuple) Return this entry as a tuple of (timestamp, event_id), where the timestamp is in milliseconds if an EventTime is contained, or else None. ### [dt](#dt) Returns the UTC datetime representation of this EventTime's timestamp if an EventTime is contained, or else None. ### [event_id](#event_id) Returns the event id used to order events within the same timestamp if an EventTime is contained, or else None. ### [t](#t) Returns the timestamp in milliseconds since the Unix epoch if an EventTime is contained, or else None. --- ## Reference > Api > Python > Raphtory > PathFromGraph --- title: "PathFromGraph" breadcrumb: "Reference / Python / raphtory / PathFromGraph" --- # PathFromGraph ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the PathFromGraph including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the PathFromGraph including all events at `time`. | | [`before`](#before) | Create a view of the PathFromGraph including all events before `end` (exclusive). | | [`collect`](#collect) | Collect all nodes into a list | | [`combined_history`](#combined_history) | Returns a single history object containing time entries for all nodes in the path. | | [`default_layer`](#default_layer) | Return a view of PathFromGraph containing only the default edge layer | | [`degree`](#degree) | Returns the node degrees. | | [`edge_history_count`](#edge_history_count) | Returns the number of edge updates for each node. 
| | [`exclude_layer`](#exclude_layer) | Return a view of PathFromGraph containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of PathFromGraph containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of PathFromGraph containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of PathFromGraph containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`filter`](#filter) | Return a filtered view that only includes nodes and edges that satisfy the filter | | [`has_layer`](#has_layer) | Check if PathFromGraph has the layer `"name"` | | [`in_degree`](#in_degree) | Returns the node in-degrees. | | [`latest`](#latest) | Create a view of the PathFromGraph including all events at the latest time. | | [`layer`](#layer) | Return a view of PathFromGraph containing the layer `"name"` | | [`layers`](#layers) | Return a view of PathFromGraph containing all layers `names` | | [`out_degree`](#out_degree) | Returns the node out-degrees. | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the PathFromGraph including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the PathFromGraph including all events that have not been explicitly deleted at the latest time. | | [`type_filter`](#type_filter) | filter nodes by type | | [`valid_layers`](#valid_layers) | Return a view of PathFromGraph containing all layers `names` | | [`window`](#window) | Create a view of the PathFromGraph including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`earliest_time`](#earliest_time) | The node earliest times. | | [`edges`](#edges) | Get the edges that are incident to this node. | | [`end`](#end) | Gets the latest time that this PathFromGraph is valid. | | [`history`](#history) | Returns a history object for each node with time entries for when a node is added or change to a node is made. | | [`id`](#id) | The node ids | | [`in_edges`](#in_edges) | Get the edges that point into this node. | | [`in_neighbours`](#in_neighbours) | Get the neighbours of this node that point into this node. | | [`latest_time`](#latest_time) | The node latest times. | | [`metadata`](#metadata) | Returns the node metadata. | | [`name`](#name) | The node names. | | [`neighbours`](#neighbours) | Get the neighbours of this node. | | [`node_type`](#node_type) | The node types. | | [`out_edges`](#out_edges) | Get the edges that point out of this node. | | [`out_neighbours`](#out_neighbours) | Get the neighbours of this node that point out of this node. | | [`properties`](#properties) | Returns the node properties. 
| | [`start`](#start) | Gets the start time for rolling and expanding windows for this PathFromGraph | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this PathFromGraph. | --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the PathFromGraph including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [at](#at) **Signature:** `at(time)` Create a view of the PathFromGraph including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [before](#before) **Signature:** `before(end)` Create a view of the PathFromGraph including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [collect](#collect) Collect all nodes into a list #### Returns | Type | Description | |------|-------------| | list[list[[Node](/docs/reference/api/python/raphtory/Node)]] | the list of nodes | ### [combined_history](#combined_history) Returns a single history object containing time entries for all nodes in the path. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | A history object with all time entries associated with the nodes. | ### [default_layer](#default_layer) Return a view of PathFromGraph containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [degree](#degree) Returns the node degrees. #### Returns | Type | Description | |------|-------------| | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | | ### [edge_history_count](#edge_history_count) Returns the number of edge updates for each node. #### Returns | Type | Description | |------|-------------| | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of PathFromGraph containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of PathFromGraph containing all layers except the excluded `names` Errors if any of the layers do not exist. 
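For orientation, a small hypothetical sketch of working with a PathFromGraph, obtained here from `g.nodes.neighbours` (the graph contents and layer names are made up, and the printed views are described only loosely):

```python
from raphtory import Graph

# Hypothetical two-layer graph
g = Graph()
g.add_edge(1, "a", "b", layer="transfers")
g.add_edge(2, "b", "c", layer="messages")

# Graph-level traversals such as g.nodes.neighbours yield a PathFromGraph
paths = g.nodes.neighbours

print(paths.name)                # nested view: neighbour names per source node
print(paths.degree())            # nested view: degree of each neighbour
print(paths.combined_history())  # one History covering every node on the paths

# The strict selector errors on unknown layers; the `valid_` variant ignores them
trusted = paths.exclude_valid_layers(["messages", "does_not_exist"])
```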
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of PathFromGraph containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of PathFromGraph containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [filter](#filter) **Signature:** `filter(filter)` Return a filtered view that only includes nodes and edges that satisfy the filter #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr` | - | The filter to apply to the nodes and edges. 
| #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The filtered view | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if PathFromGraph has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [in_degree](#in_degree) Returns the node in-degrees. #### Returns | Type | Description | |------|-------------| | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | | ### [latest](#latest) Create a view of the PathFromGraph including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of PathFromGraph containing the layer `"name"` Errors if the layer does not exist #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of PathFromGraph containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [out_degree](#out_degree) Returns the node out-degrees. #### Returns | Type | Description | |------|-------------| | [NestedUsizeIterable](/docs/reference/api/python/iterables/NestedUsizeIterable) | | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. 
| | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the PathFromGraph including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [snapshot_latest](#snapshot_latest) Create a view of the PathFromGraph including all events that have not been explicitly deleted at the latest time. 
This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | ### [type_filter](#type_filter) **Signature:** `type_filter(node_types)` filter nodes by type #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node_types` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | the node types to keep | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | the filtered view | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of PathFromGraph containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the PathFromGraph including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | | --- ## Property Details ### [earliest_time](#earliest_time) The node earliest times. #### Returns | Type | Description | |------|-------------| | [NestedOptionEventTimeIterable](/docs/reference/api/python/iterables/NestedOptionEventTimeIterable) | | ### [edges](#edges) Get the edges that are incident to this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The incident edges. | ### [end](#end) Gets the latest time that this PathFromGraph is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this PathFromGraph is valid or None if the PathFromGraph is valid for all times. | ### [history](#history) Returns a history object for each node with time entries for when a node is added or change to a node is made. #### Returns | Type | Description | |------|-------------| | [NestedHistoryIterable](/docs/reference/api/python/iterables/NestedHistoryIterable) | A nested iterable of history objects, one for each node. | ### [id](#id) The node ids #### Returns | Type | Description | |------|-------------| | [NestedGIDIterable](/docs/reference/api/python/iterables/NestedGIDIterable) | | ### [in_edges](#in_edges) Get the edges that point into this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The inbound edges. | ### [in_neighbours](#in_neighbours) Get the neighbours of this node that point into this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The in-neighbours. 
| ### [latest_time](#latest_time) The node latest times. #### Returns | Type | Description | |------|-------------| | [NestedOptionEventTimeIterable](/docs/reference/api/python/iterables/NestedOptionEventTimeIterable) | | ### [metadata](#metadata) Returns the node metadata. #### Returns | Type | Description | |------|-------------| | [MetadataListList](/docs/reference/api/python/iterables/MetadataListList) | | ### [name](#name) The node names. #### Returns | Type | Description | |------|-------------| | [NestedStringIterable](/docs/reference/api/python/iterables/NestedStringIterable) | | ### [neighbours](#neighbours) Get the neighbours of this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The neighbours (both inbound and outbound). | ### [node_type](#node_type) The node types. #### Returns | Type | Description | |------|-------------| | [NestedOptionArcStringIterable](/docs/reference/api/python/iterables/NestedOptionArcStringIterable) | | ### [out_edges](#out_edges) Get the edges that point out of this node. #### Returns | Type | Description | |------|-------------| | [NestedEdges](/docs/reference/api/python/raphtory/NestedEdges) | The outbound edges. | ### [out_neighbours](#out_neighbours) Get the neighbours of this node that point out of this node. #### Returns | Type | Description | |------|-------------| | [PathFromGraph](/docs/reference/api/python/raphtory/PathFromGraph) | The out-neighbours. | ### [properties](#properties) Returns the node properties. #### Returns | Type | Description | |------|-------------| | `NestedPropsIterable` | | ### [start](#start) Gets the start time for rolling and expanding windows for this PathFromGraph #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this PathFromGraph is valid or None if the PathFromGraph is valid for all times. | ### [window_size](#window_size) Get the window size (difference between start and end) for this PathFromGraph. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > PathFromNode --- title: "PathFromNode" breadcrumb: "Reference / Python / raphtory / PathFromNode" --- # PathFromNode ## Methods | Method | Description | |--------|-------------| | [`after`](#after) | Create a view of the PathFromNode including all events after `start` (exclusive). | | [`at`](#at) | Create a view of the PathFromNode including all events at `time`. | | [`before`](#before) | Create a view of the PathFromNode including all events before `end` (exclusive). | | [`collect`](#collect) | Collect all nodes into a list | | [`combined_history`](#combined_history) | Returns a single history object containing time entries for all nodes in the path. | | [`default_layer`](#default_layer) | Return a view of PathFromNode containing only the default edge layer | | [`degree`](#degree) | The node degrees. | | [`edge_history_count`](#edge_history_count) | Get the number of edge updates for each node. 
| | [`exclude_layer`](#exclude_layer) | Return a view of PathFromNode containing all layers except the excluded `name` | | [`exclude_layers`](#exclude_layers) | Return a view of PathFromNode containing all layers except the excluded `names` | | [`exclude_valid_layer`](#exclude_valid_layer) | Return a view of PathFromNode containing all layers except the excluded `name` | | [`exclude_valid_layers`](#exclude_valid_layers) | Return a view of PathFromNode containing all layers except the excluded `names` | | [`expanding`](#expanding) | Creates a `WindowSet` with the given `step` size using an expanding window. | | [`filter`](#filter) | Return a filtered view that only includes nodes and edges that satisfy the filter | | [`has_layer`](#has_layer) | Check if PathFromNode has the layer `"name"` | | [`in_degree`](#in_degree) | The node in-degrees. | | [`latest`](#latest) | Create a view of the PathFromNode including all events at the latest time. | | [`layer`](#layer) | Return a view of PathFromNode containing the layer `"name"` | | [`layers`](#layers) | Return a view of PathFromNode containing all layers `names` | | [`out_degree`](#out_degree) | The node out-degrees. | | [`rolling`](#rolling) | Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. | | [`shrink_end`](#shrink_end) | Set the end of the window to the smaller of `end` and `self.end()` | | [`shrink_start`](#shrink_start) | Set the start of the window to the larger of `start` and `self.start()` | | [`shrink_window`](#shrink_window) | Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) | | [`snapshot_at`](#snapshot_at) | Create a view of the PathFromNode including all events that have not been explicitly deleted at `time`. | | [`snapshot_latest`](#snapshot_latest) | Create a view of the PathFromNode including all events that have not been explicitly deleted at the latest time. | | [`type_filter`](#type_filter) | filter nodes by type | | [`valid_layers`](#valid_layers) | Return a view of PathFromNode containing all layers `names` | | [`window`](#window) | Create a view of the PathFromNode including all events between `start` (inclusive) and `end` (exclusive) | ## Properties | Property | Description | |----------|-------------| | [`earliest_time`](#earliest_time) | The earliest time of each node. | | [`edges`](#edges) | Get the edges that are incident to this node. | | [`end`](#end) | Gets the latest time that this PathFromNode is valid. | | [`id`](#id) | The node IDs. | | [`in_edges`](#in_edges) | Get the edges that point into this node. | | [`in_neighbours`](#in_neighbours) | Get the neighbours of this node that point into this node. | | [`latest_time`](#latest_time) | The latest time of each node. | | [`metadata`](#metadata) | The node metadata. | | [`name`](#name) | The node names. | | [`neighbours`](#neighbours) | Get the neighbours of this node. | | [`node_type`](#node_type) | The node types. | | [`out_edges`](#out_edges) | Get the edges that point out of this node. | | [`out_neighbours`](#out_neighbours) | Get the neighbours of this node that point out of this node. | | [`properties`](#properties) | The node properties. | | [`start`](#start) | Gets the start time for rolling and expanding windows for this PathFromNode | | [`window_size`](#window_size) | Get the window size (difference between start and end) for this PathFromNode. 
| --- ## Method Details ### [after](#after) **Signature:** `after(start)` Create a view of the PathFromNode including all events after `start` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [at](#at) **Signature:** `at(time)` Create a view of the PathFromNode including all events at `time`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [before](#before) **Signature:** `before(end)` Create a view of the PathFromNode including all events before `end` (exclusive). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [collect](#collect) Collect all nodes into a list #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | the list of nodes | ### [combined_history](#combined_history) Returns a single history object containing time entries for all nodes in the path. #### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | History object with all time entries for the nodes. | ### [default_layer](#default_layer) Return a view of PathFromNode containing only the default edge layer #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [degree](#degree) The node degrees. #### Returns | Type | Description | |------|-------------| | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | | ### [edge_history_count](#edge_history_count) Get the number of edge updates for each node. #### Returns | Type | Description | |------|-------------| | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | | ### [exclude_layer](#exclude_layer) **Signature:** `exclude_layer(name)` Return a view of PathFromNode containing all layers except the excluded `name` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [exclude_layers](#exclude_layers) **Signature:** `exclude_layers(names)` Return a view of PathFromNode containing all layers except the excluded `names` Errors if any of the layers do not exist. 
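For orientation, a small hypothetical sketch of a PathFromNode obtained from a single node's `neighbours` (graph contents are made up for illustration):

```python
from raphtory import Graph

# Hypothetical graph centred on one node
g = Graph()
g.add_edge(1, "alice", "bob")
g.add_edge(5, "alice", "carol")

# node.neighbours is a PathFromNode: the traversal starts from a single node
path = g.node("alice").neighbours

print([n.name for n in path.collect()])               # names of alice's neighbours
print(list(path.degree()))                            # degree of each neighbour
print([n.name for n in path.window(0, 3).collect()])  # neighbours seen in the window [0, 3) only
```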
#### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [exclude_valid_layer](#exclude_valid_layer) **Signature:** `exclude_valid_layer(name)` Return a view of PathFromNode containing all layers except the excluded `name` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | layer name that is excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [exclude_valid_layers](#exclude_valid_layers) **Signature:** `exclude_valid_layers(names)` Return a view of PathFromNode containing all layers except the excluded `names` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names that are excluded for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [expanding](#expanding) **Signature:** `expanding(step, alignment_unit=None)` Creates a `WindowSet` with the given `step` size using an expanding window. An expanding window is a window that grows by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The step size of the window. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step. For example, if the step is "1 month and 1 day", the windows will be aligned on days (00:00:00 to 23:59:59). If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [filter](#filter) **Signature:** `filter(filter)` Return a filtered view that only includes nodes and edges that satisfy the filter #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `filter` | `filter.FilterExpr` | - | The filter to apply to the nodes and edges. 
| #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The filtered view | ### [has_layer](#has_layer) **Signature:** `has_layer(name)` Check if PathFromNode has the layer `"name"` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer to check | #### Returns | Type | Description | |------|-------------| | [bool](https://docs.python.org/3/library/functions.html#bool) | | ### [in_degree](#in_degree) The node in-degrees. #### Returns | Type | Description | |------|-------------| | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | | ### [latest](#latest) Create a view of the PathFromNode including all events at the latest time. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [layer](#layer) **Signature:** `layer(name)` Return a view of PathFromNode containing the layer `"name"` Errors if the layer does not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `name` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the layer. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [layers](#layers) **Signature:** `layers(names)` Return a view of PathFromNode containing all layers `names` Errors if any of the layers do not exist. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [out_degree](#out_degree) The node out-degrees. #### Returns | Type | Description | |------|-------------| | [UsizeIterable](/docs/reference/api/python/iterables/UsizeIterable) | | ### [rolling](#rolling) **Signature:** `rolling(window, step=None, alignment_unit=None)` Creates a `WindowSet` with the given `window` size and optional `step` using a rolling window. If `alignment_unit` is not "unaligned" and a `step` larger than `window` is provided, some time entries may appear before the start of the first window and/or after the end of the last window (i.e. not included in any window). A rolling window is a window that moves forward by `step` size at each iteration. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `window` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The size of the window. | | `step` | [int](https://docs.python.org/3/library/functions.html#int) \| [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | The step size of the window. `step` defaults to `window`. | | `alignment_unit` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [None](https://docs.python.org/3/library/constants.html#None), optional | `None` | If no alignment_unit is passed, aligns the start of the first window to the smallest unit of time passed to step (or window if no step is passed). 
For example, if the step is "1 month and 1 day", the first window will begin at the start of the day of the first time event. If set to "unaligned", the first window will begin at the first time event. If any other alignment unit is passed, the windows will be aligned to that unit. alignment_unit defaults to None. | #### Returns | Type | Description | |------|-------------| | [WindowSet](/docs/reference/api/python/raphtory/WindowSet) | A `WindowSet` object. | ### [shrink_end](#shrink_end) **Signature:** `shrink_end(end)` Set the end of the window to the smaller of `end` and `self.end()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time of the window | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [shrink_start](#shrink_start) **Signature:** `shrink_start(start)` Set the start of the window to the larger of `start` and `self.start()` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time of the window | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [shrink_window](#shrink_window) **Signature:** `shrink_window(start, end)` Shrink both the start and end of the window (same as calling `shrink_start` followed by `shrink_end` but more efficient) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | the new start time for the window | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | the new end time for the window | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [snapshot_at](#snapshot_at) **Signature:** `snapshot_at(time)` Create a view of the PathFromNode including all events that have not been explicitly deleted at `time`. This is equivalent to `before(time + 1)` for `Graph` and `at(time)` for `PersistentGraph` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `time` | [TimeInput](/docs/reference/api/python/typing) | - | The time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [snapshot_latest](#snapshot_latest) Create a view of the PathFromNode including all events that have not been explicitly deleted at the latest time. 
This is equivalent to a no-op for `Graph` and `latest()` for `PersistentGraph` #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | ### [type_filter](#type_filter) **Signature:** `type_filter(node_types)` filter nodes by type #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node_types` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | the node types to keep | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | the filtered view | ### [valid_layers](#valid_layers) **Signature:** `valid_layers(names)` Return a view of PathFromNode containing all layers `names` Any layers that do not exist are ignored #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `names` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | - | list of layer names for the new view | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The layered view | ### [window](#window) **Signature:** `window(start, end)` Create a view of the PathFromNode including all events between `start` (inclusive) and `end` (exclusive) #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `start` | [TimeInput](/docs/reference/api/python/typing) | - | The start time of the window. | | `end` | [TimeInput](/docs/reference/api/python/typing) | - | The end time of the window. | #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | | --- ## Property Details ### [earliest_time](#earliest_time) The earliest time of each node. #### Returns | Type | Description | |------|-------------| | [OptionEventTimeIterable](/docs/reference/api/python/iterables/OptionEventTimeIterable) | An iterable of `EventTime`s. | ### [edges](#edges) Get the edges that are incident to this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The incident edges. | ### [end](#end) Gets the latest time that this PathFromNode is valid. #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The latest time that this PathFromNode is valid or None if the PathFromNode is valid for all times. | ### [id](#id) The node IDs. #### Returns | Type | Description | |------|-------------| | [GIDIterable](/docs/reference/api/python/iterables/GIDIterable) | | ### [in_edges](#in_edges) Get the edges that point into this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The inbound edges. | ### [in_neighbours](#in_neighbours) Get the neighbours of this node that point into this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The in-neighbours. | ### [latest_time](#latest_time) The latest time of each node. #### Returns | Type | Description | |------|-------------| | [OptionEventTimeIterable](/docs/reference/api/python/iterables/OptionEventTimeIterable) | An iterable of `EventTime`s. | ### [metadata](#metadata) The node metadata. 
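These per-node accessors are lazy views with one entry per node in the path; a small sketch of reading them (node names and timestamps are illustrative, and `metadata`/`properties` are assumed to follow the same per-node pattern):

```python
from raphtory import Graph

# Per-node accessors on a PathFromNode yield one entry per node in the path.
g = Graph()
g.add_edge(1, "a", "b")
g.add_edge(3, "a", "c")

nbrs = g.node("a").neighbours
for name, latest in zip(nbrs.name, nbrs.latest_time):
    print(name, latest)        # one (name, latest_time) pair per neighbour

meta = nbrs.metadata           # per-node metadata view
props = nbrs.properties        # per-node properties view
```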
#### Returns | Type | Description | |------|-------------| | [MetadataView](/docs/reference/api/python/raphtory/MetadataView) | | ### [name](#name) The node names. #### Returns | Type | Description | |------|-------------| | [StringIterable](/docs/reference/api/python/iterables/StringIterable) | | ### [neighbours](#neighbours) Get the neighbours of this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The neighbours (both inbound and outbound). | ### [node_type](#node_type) The node types. #### Returns | Type | Description | |------|-------------| | [OptionArcStringIterable](/docs/reference/api/python/iterables/OptionArcStringIterable) | | ### [out_edges](#out_edges) Get the edges that point out of this node. #### Returns | Type | Description | |------|-------------| | [Edges](/docs/reference/api/python/raphtory/Edges) | The outbound edges. | ### [out_neighbours](#out_neighbours) Get the neighbours of this node that point out of this node. #### Returns | Type | Description | |------|-------------| | [PathFromNode](/docs/reference/api/python/raphtory/PathFromNode) | The out-neighbours. | ### [properties](#properties) The node properties. #### Returns | Type | Description | |------|-------------| | [PropertiesView](/docs/reference/api/python/raphtory/PropertiesView) | | ### [start](#start) Gets the start time for rolling and expanding windows for this PathFromNode #### Returns | Type | Description | |------|-------------| | [OptionalEventTime](/docs/reference/api/python/raphtory/OptionalEventTime) | The earliest time that this PathFromNode is valid or None if the PathFromNode is valid for all times. | ### [window_size](#window_size) Get the window size (difference between start and end) for this PathFromNode. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int), optional | | --- ## Reference > Api > Python > Raphtory > PersistentGraph --- title: "PersistentGraph" breadcrumb: "Reference / Python / raphtory / PersistentGraph" --- # PersistentGraph A temporal graph that allows edges and nodes to be deleted. ## Methods | Method | Description | |--------|-------------| | [`add_edge`](#add_edge) | Adds a new edge with the given source and destination nodes and properties to the graph. | | [`add_metadata`](#add_metadata) | Adds metadata to the graph. | | [`add_node`](#add_node) | Adds a new node with the given id and properties to the graph. | | [`add_properties`](#add_properties) | Adds properties to the graph. | | [`cache`](#cache) | Write PersistentGraph to cache file and initialise the cache. | | [`create_index`](#create_index) | Create graph index | | [`create_index_in_ram`](#create_index_in_ram) | Creates a graph index in memory (RAM). | | [`create_index_in_ram_with_spec`](#create_index_in_ram_with_spec) | Creates a graph index in memory (RAM) with the provided index spec. | | [`create_index_with_spec`](#create_index_with_spec) | Create graph index with the provided index spec. | | [`create_node`](#create_node) | Creates a new node with the given id and properties to the graph. It fails if the node already exists. | | [`delete_edge`](#delete_edge) | Deletes an edge given the timestamp, src and dst nodes and layer (optional). | | [`deserialise`](#deserialise) | Load PersistentGraph from serialised bytes. 
| | [`edge`](#edge) | Gets the edge with the specified source and destination nodes | | [`event_graph`](#event_graph) | Get event graph | | [`get_all_node_types`](#get_all_node_types) | Returns all the node types in the graph. | | [`import_edge`](#import_edge) | Import a single edge into the graph. | | [`import_edge_as`](#import_edge_as) | Import a single edge into the graph with new id. | | [`import_edges`](#import_edges) | Import multiple edges into the graph. | | [`import_edges_as`](#import_edges_as) | Import multiple edges into the graph with new ids. | | [`import_node`](#import_node) | Import a single node into the graph. | | [`import_node_as`](#import_node_as) | Import a single node into the graph with new id. | | [`import_nodes`](#import_nodes) | Import multiple nodes into the graph. | | [`import_nodes_as`](#import_nodes_as) | Import multiple nodes into the graph with new ids. | | [`load_cached`](#load_cached) | Load PersistentGraph from a file and initialise it as a cache file. | | [`load_edge_deletions`](#load_edge_deletions) | Load edge deletions into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_edge_metadata`](#load_edge_metadata) | Load edge metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_edges`](#load_edges) | Load edges into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_from_file`](#load_from_file) | Load PersistentGraph from a file. | | [`load_node_metadata`](#load_node_metadata) | Load node metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`load_nodes`](#load_nodes) | Load nodes into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), | | [`node`](#node) | Gets the node with the specified id | | [`persistent_graph`](#persistent_graph) | Get persistent graph | | [`save_to_file`](#save_to_file) | Saves the PersistentGraph to the given path. | | [`save_to_zip`](#save_to_zip) | Saves the PersistentGraph to the given path. | | [`serialise`](#serialise) | Serialise PersistentGraph to bytes. | | [`update_metadata`](#update_metadata) | Updates metadata of the graph. | | [`write_updates`](#write_updates) | Persist the new updates by appending them to the cache file. | --- ## Method Details ### [add_edge](#add_edge) **Signature:** `add_edge(timestamp, src, dst, properties=None, layer=None, event_id=None)` Adds a new edge with the given source and destination nodes and properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) | - | The timestamp of the edge. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the destination node. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the edge, as a dict of string and properties. 
| | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer of the edge. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_metadata](#add_metadata) **Signature:** `add_metadata(metadata)` Adds metadata to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The static properties of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_node](#add_node) **Signature:** `add_node(timestamp, id, properties=None, node_type=None, event_id=None)` Adds a new node with the given id and properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the node. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [add_properties](#add_properties) **Signature:** `add_properties(timestamp, properties, event_id=None)` Adds properties to the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the temporal property. | | `properties` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The temporal properties of the graph. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [cache](#cache) **Signature:** `cache(path)` Write PersistentGraph to cache file and initialise the cache. Future updates are tracked. 
Use `write_updates` to persist them to the cache file. If the file already exists its contents are overwritten. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the cache file | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index](#create_index) Create graph index #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_in_ram](#create_index_in_ram) Creates a graph index in memory (RAM). This is primarily intended for use in tests and should not be used in production environments, as the index will not be persisted to disk. #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_in_ram_with_spec](#create_index_in_ram_with_spec) **Signature:** `create_index_in_ram_with_spec(py_spec)` Creates a graph index in memory (RAM) with the provided index spec. This is primarily intended for use in tests and should not be used in production environments, as the index will not be persisted to disk. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `py_spec` | [IndexSpec](/docs/reference/api/python/raphtory/IndexSpec) | - | The specification for the in-memory index to be created. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_index_with_spec](#create_index_with_spec) **Signature:** `create_index_with_spec(py_spec)` Create graph index with the provided index spec. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `py_spec` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [create_node](#create_node) **Signature:** `create_node(timestamp, id, properties=None, node_type=None, event_id=None)` Creates a new node with the given id and properties to the graph. It fails if the node already exists. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [TimeInput](/docs/reference/api/python/typing) | - | The timestamp of the node. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the node. | | `properties` | [PropInput](/docs/reference/api/python/typing), optional | `None` | The properties of the node. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The optional string which will be used as a node type. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode) | the newly created node. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. 
| ### [delete_edge](#delete_edge) **Signature:** `delete_edge(timestamp, src, dst, layer=None, event_id=None)` Deletes an edge given the timestamp, src and dst nodes and layer (optional). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `timestamp` | [int](https://docs.python.org/3/library/functions.html#int) | - | The timestamp of the edge. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The id of the destination node. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The layer of the edge. | | `event_id` | [int](https://docs.python.org/3/library/functions.html#int), optional | `None` | The optional integer which will be used as an event id. | #### Returns | Type | Description | |------|-------------| | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge) | The deleted edge | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [deserialise](#deserialise) **Signature:** `deserialise(bytes)` Load PersistentGraph from serialised bytes. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `bytes` | [bytes](https://docs.python.org/3/library/stdtypes.html#bytes) | - | The serialised bytes to decode | #### Returns | Type | Description | |------|-------------| | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | | ### [edge](#edge) **Signature:** `edge(src, dst)` Gets the edge with the specified source and destination nodes #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the source node id | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the destination node id | #### Returns | Type | Description | |------|-------------| | [MutableEdge](/docs/reference/api/python/raphtory/MutableEdge), optional | The edge with the specified source and destination nodes, or None if the edge does not exist | ### [event_graph](#event_graph) Get event graph #### Returns | Type | Description | |------|-------------| | [Graph](/docs/reference/api/python/raphtory/Graph) | the graph with event semantics applied | ### [get_all_node_types](#get_all_node_types) Returns all the node types in the graph. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | A list of node types | ### [import_edge](#import_edge) **Signature:** `import_edge(edge, merge=False)` Import a single edge into the graph. This function takes an edge object and an optional boolean flag. If the flag is set to true, the function will merge the import of the edge even if it already exists in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edge` | [Edge](/docs/reference/api/python/raphtory/Edge) | - | An edge object representing the edge to be imported. 
| | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the edge. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The imported edge. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edge_as](#import_edge_as) **Signature:** `import_edge_as(edge, new_id, merge=False)` Import a single edge into the graph with new id. This function takes an edge object, a new edge id and an optional boolean flag. If the flag is set to true, the function will merge the import of the edge even if it already exists in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edge` | [Edge](/docs/reference/api/python/raphtory/Edge) | - | An edge object representing the edge to be imported. | | `new_id` | [tuple](https://docs.python.org/3/library/stdtypes.html#tuple) | - | The ID of the new edge. It's a tuple of the source and destination node ids. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the edge. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [Edge](/docs/reference/api/python/raphtory/Edge) | The imported edge. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edges](#import_edges) **Signature:** `import_edges(edges, merge=False)` Import multiple edges into the graph. This function takes a vector of edge objects and an optional boolean flag. If the flag is set to true, the function will merge the import of the edges even if they already exist in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges` | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | - | A vector of edge objects representing the edges to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the edges. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_edges_as](#import_edges_as) **Signature:** `import_edges_as(edges, new_ids, merge=False)` Import multiple edges into the graph with new ids. This function takes a vector of edge objects, a list of new edge ids and an optional boolean flag. If the flag is set to true, the function will merge the import of the edges even if they already exist in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges` | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | - | A vector of edge objects representing the edges to be imported. 
| | `new_ids` | list[tuple[[GID](/docs/reference/api/python/typing), [GID](/docs/reference/api/python/typing)]] | - | The new edge ids | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the edges. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_node](#import_node) **Signature:** `import_node(node, merge=False)` Import a single node into the graph. This function takes a node object and an optional boolean flag. If the flag is set to true, the function will merge the import of the node even if it already exists in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | A node object representing the node to be imported. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the node. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | A Node object if the node was successfully imported, and an error otherwise. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_node_as](#import_node_as) **Signature:** `import_node_as(node, new_id, merge=False)` Import a single node into the graph with new id. This function takes a node object, a new node id and an optional boolean flag. If the flag is set to true, the function will merge the import of the node even if it already exists in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `node` | [Node](/docs/reference/api/python/raphtory/Node) | - | A node object representing the node to be imported. | | `new_id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | The new node id. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the node. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [Node](/docs/reference/api/python/raphtory/Node) | A Node object if the node was successfully imported, and an error otherwise. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_nodes](#import_nodes) **Signature:** `import_nodes(nodes, merge=False)` Import multiple nodes into the graph. This function takes a vector of node objects and an optional boolean flag. If the flag is set to true, the function will merge the import of the nodes even if they already exist in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[Node](/docs/reference/api/python/raphtory/Node)] | - | A vector of node objects representing the nodes to be imported. 
| | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the nodes. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [import_nodes_as](#import_nodes_as) **Signature:** `import_nodes_as(nodes, new_ids, merge=False)` Import multiple nodes into the graph with new ids. This function takes a vector of node objects, a list of new node ids and an optional boolean flag. If the flag is set to true, the function will merge the import of the nodes even if they already exist in the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | list[[Node](/docs/reference/api/python/raphtory/Node)] | - | A vector of node objects representing the nodes to be imported. | | `new_ids` | list[[str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int)] | - | A list of node IDs to use for the imported nodes. | | `merge` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | An optional boolean flag indicating whether to merge the import of the nodes. Defaults to False. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_cached](#load_cached) **Signature:** `load_cached(path)` Load PersistentGraph from a file and initialise it as a cache file. Future updates are tracked. Use `write_updates` to persist them to the cache file. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the cache file | #### Returns | Type | Description | |------|-------------| | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | the loaded graph with initialised cache | ### [load_edge_deletions](#load_edge_deletions) **Signature:** `load_edge_deletions(data, time, src, dst, layer=None, layer_col=None, schema=None, csv_options=None)` Load edge deletions into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing the edges. | | `time` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the update timestamps. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the source node ids. 
| | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the destination node ids. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the layer for all edges. Cannot be used in combination with layer_col. Defaults to None. | | `layer_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer col name in the data source. Cannot be used in combination with layer. Defaults to None. | | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_edge_metadata](#load_edge_metadata) **Signature:** `load_edge_metadata(data, src, dst, metadata=None, shared_metadata=None, layer=None, layer_col=None, schema=None, csv_options=None)` Load edge metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing edge information. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the source node. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the destination node. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every edge. Defaults to None. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer name. Defaults to None. | | `layer_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer column name in a dataframe. Defaults to None. 
| | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_edges](#load_edges) **Signature:** `load_edges(data, time, src, dst, properties=None, metadata=None, shared_metadata=None, layer=None, layer_col=None, schema=None, csv_options=None)` Load edges into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing the edges. | | `time` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the update timestamps. | | `src` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the source node IDs. | | `dst` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the destination node IDs. | | `properties` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge property column names. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of edge metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every edge. Defaults to None. | | `layer` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the layer for all edges. Cannot be used in combination with layer_col. Defaults to None. | | `layer_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The edge layer column name in a dataframe. Cannot be used in combination with layer. Defaults to None. 
| | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_from_file](#load_from_file) **Signature:** `load_from_file(path)` Load PersistentGraph from a file. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | | ### [load_node_metadata](#load_node_metadata) **Signature:** `load_node_metadata(data, id, node_type=None, node_type_col=None, metadata=None, shared_metadata=None, schema=None, csv_options=None)` Load node metadata into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing node information. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the node IDs. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the node type for all nodes. Cannot be used in combination with node_type_col. Defaults to None. | | `node_type_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The node type column name in a dataframe. Cannot be used in combination with node_type. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every node. Defaults to None. 
| | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. | | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [load_nodes](#load_nodes) **Signature:** `load_nodes(data, time, id, node_type=None, node_type_col=None, properties=None, metadata=None, shared_metadata=None, schema=None, csv_options=None)` Load nodes into the graph from any data source that supports the ArrowStreamExportable protocol (by providing an __arrow_c_stream__() method), a path to a CSV or Parquet file, or a directory containing multiple CSV or Parquet files. The following are known to support the ArrowStreamExportable protocol: Pandas dataframes, FireDucks(.pandas) dataframes, Polars dataframes, Arrow tables, DuckDB (e.g. DuckDBPyRelation obtained from running an SQL query). #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `data` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | The data source containing the nodes. | | `time` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the timestamps. | | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The column name for the node IDs. | | `node_type` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | A value to use as the node type for all nodes. Cannot be used in combination with node_type_col. Defaults to None. | | `node_type_col` | [str](https://docs.python.org/3/library/stdtypes.html#str), optional | `None` | The node type column name in a dataframe. Cannot be used in combination with node_type. Defaults to None. | | `properties` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node property column names. Defaults to None. | | `metadata` | list[[str](https://docs.python.org/3/library/stdtypes.html#str)], optional | `None` | List of node metadata column names. Defaults to None. | | `shared_metadata` | [PropInput](/docs/reference/api/python/typing), optional | `None` | A dictionary of metadata properties that will be added to every node. Defaults to None. | | `schema` | [list](https://docs.python.org/3/library/stdtypes.html#list) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]]` \| [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| [PropType](/docs/reference/api/python/raphtory/PropType) \| `str]`, optional | `None` | A list of (column_name, column_type) tuples or dict of \{"column_name": column_type\} to cast columns to. Defaults to None. 
| | `csv_options` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) \| `bool]`, optional | `None` | A dictionary of CSV reading options such as delimiter, comment, escape, quote, and terminator characters, as well as allow_truncated_rows and has_header flags. Defaults to None. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [node](#node) **Signature:** `node(id)` Gets the node with the specified id #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `id` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [int](https://docs.python.org/3/library/functions.html#int) | - | the node id | #### Returns | Type | Description | |------|-------------| | [MutableNode](/docs/reference/api/python/raphtory/MutableNode), optional | The node with the specified id, or None if the node does not exist | ### [persistent_graph](#persistent_graph) Get persistent graph #### Returns | Type | Description | |------|-------------| | [PersistentGraph](/docs/reference/api/python/raphtory/PersistentGraph) | the graph with persistent semantics applied | ### [save_to_file](#save_to_file) **Signature:** `save_to_file(path)` Saves the PersistentGraph to the given path. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [save_to_zip](#save_to_zip) **Signature:** `save_to_zip(path)` Saves the PersistentGraph to the given path. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `path` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | The path to the file. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [serialise](#serialise) Serialise PersistentGraph to bytes. #### Returns | Type | Description | |------|-------------| | [bytes](https://docs.python.org/3/library/stdtypes.html#bytes) | | ### [update_metadata](#update_metadata) **Signature:** `update_metadata(metadata)` Updates metadata of the graph. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `metadata` | [dict](https://docs.python.org/3/library/stdtypes.html#dict) | - | The static properties of the graph. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | This function does not return a value, if the operation is successful. | #### Raises | Exception | Description | |-----------|-------------| | `GraphError` | If the operation fails. | ### [write_updates](#write_updates) Persist the new updates by appending them to the cache file. 
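Taken together, `cache`, `write_updates`, and `load_cached` support an incremental persistence workflow. A minimal sketch, assuming `PersistentGraph` is imported from the top-level `raphtory` package (the file path and node names are illustrative):

```python
from raphtory import PersistentGraph

g = PersistentGraph()
g.add_edge(1, "alice", "bob", layer="transfers")

g.cache("/tmp/transfers.cache")   # write the graph to disk and start tracking new updates
g.delete_edge(5, "alice", "bob", layer="transfers")   # tracked, but not yet persisted
g.write_updates()                 # append the pending updates to the cache file

# Later, reload the graph and keep appending to the same cache file.
g2 = PersistentGraph.load_cached("/tmp/transfers.cache")
```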
#### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | --- ## Reference > Api > Python > Raphtory > Prop --- title: "Prop" breadcrumb: "Reference / Python / raphtory / Prop" --- # Prop ## Methods | Method | Description | |--------|-------------| | [`bool`](#bool) | | | [`dtype`](#dtype) | | | [`f32`](#f32) | | | [`f64`](#f64) | | | [`i32`](#i32) | | | [`i64`](#i64) | | | [`list`](#list) | | | [`map`](#map) | | | [`str`](#str) | | | [`u16`](#u16) | | | [`u32`](#u32) | | | [`u64`](#u64) | | | [`u8`](#u8) | | --- ## Method Details ### [bool](#bool) **Signature:** `bool(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [dtype](#dtype) ### [f32](#f32) **Signature:** `f32(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [f64](#f64) **Signature:** `f64(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [i32](#i32) **Signature:** `i32(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [i64](#i64) **Signature:** `i64(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [list](#list) **Signature:** `list(values)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `values` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [map](#map) **Signature:** `map(dict)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `dict` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [str](#str) **Signature:** `str(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [u16](#u16) **Signature:** `u16(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [u32](#u32) **Signature:** `u32(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [u64](#u64) **Signature:** `u64(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [u8](#u8) **Signature:** `u8(value)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `value` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | --- ## Reference > Api > Python > Raphtory > Properties --- title: "Properties" breadcrumb: "Reference / Python / raphtory / Properties" --- # Properties A view of the properties of an entity ## Methods | Method | Description | 
|--------|-------------| | [`as_dict`](#as_dict) | Convert properties view to a dict. | | [`get`](#get) | Get property value. | | [`get_dtype_of`](#get_dtype_of) | Get the PropType of a property. Specifically, returns the PropType of the latest value for this property if it exists. | | [`items`](#items) | Get a list of key-value pairs | | [`keys`](#keys) | Get the names for all properties | | [`values`](#values) | Get the values of the properties. | ## Properties | Property | Description | |----------|-------------| | [`temporal`](#temporal) | Get a view of the temporal properties only. | --- ## Method Details ### [as_dict](#as_dict) Convert properties view to a dict. ### [get](#get) **Signature:** `get(key)` Get property value. First searches temporal properties and returns latest value if it exists. If not, it falls back to static properties. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property. | #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | | ### [get_dtype_of](#get_dtype_of) **Signature:** `get_dtype_of(key)` Get the PropType of a property. Specifically, returns the PropType of the latest value for this property if it exists. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property. | #### Returns | Type | Description | |------|-------------| | [PropType](/docs/reference/api/python/raphtory/PropType) | | ### [items](#items) Get a list of key-value pairs ### [keys](#keys) Get the names for all properties #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [values](#values) Get the values of the properties. #### Returns | Type | Description | |------|-------------| | list[[PropValue](/docs/reference/api/python/typing)] | | --- ## Property Details ### [temporal](#temporal) Get a view of the temporal properties only. #### Returns | Type | Description | |------|-------------| | [TemporalProperties](/docs/reference/api/python/raphtory/TemporalProperties) | | --- ## Reference > Api > Python > Raphtory > PropertiesView --- title: "PropertiesView" breadcrumb: "Reference / Python / raphtory / PropertiesView" --- # PropertiesView ## Methods | Method | Description | |--------|-------------| | [`as_dict`](#as_dict) | Convert properties view to a dict. | | [`get`](#get) | Get property value. | | [`items`](#items) | Get a list of key-value pairs. | | [`keys`](#keys) | Get the names for all properties. | | [`values`](#values) | Get the values of the properties. | ## Properties | Property | Description | |----------|-------------| | [`temporal`](#temporal) | Get a view of the temporal properties only. | --- ## Method Details ### [as_dict](#as_dict) Convert properties view to a dict. ### [get](#get) **Signature:** `get(key)` Get property value. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property. | #### Returns | Type | Description | |------|-------------| | [PyPropValueList](/docs/reference/api/python/raphtory/PyPropValueList) | | ### [items](#items) Get a list of key-value pairs. ### [keys](#keys) Get the names for all properties. 
#### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [values](#values) Get the values of the properties. #### Returns | Type | Description | |------|-------------| | list[list[[PropValue](/docs/reference/api/python/typing)]] | | --- ## Property Details ### [temporal](#temporal) Get a view of the temporal properties only. #### Returns | Type | Description | |------|-------------| | list[`TemporalProp`] | | --- ## Reference > Api > Python > Raphtory > PropType --- title: "PropType" breadcrumb: "Reference / Python / raphtory / PropType" --- # PropType PropType provides access to the types used by Raphtory. They can be used to specify the data type of different properties, which is especially useful if one wishes to cast some input column from one type to another during ingestion. PropType can be used to define the schema in the various load_* functions used for data ingestion (i.e. Graph.load_nodes(...)/Graph.load_edges(...) etc.) ## Methods | Method | Description | |--------|-------------| | [`array`](#array) | | | [`bool`](#bool) | | | [`datetime`](#datetime) | | | [`f32`](#f32) | | | [`f64`](#f64) | | | [`i32`](#i32) | | | [`i64`](#i64) | | | [`list`](#list) | | | [`map`](#map) | | | [`naive_datetime`](#naive_datetime) | | | [`str`](#str) | | | [`u16`](#u16) | | | [`u32`](#u32) | | | [`u64`](#u64) | | | [`u8`](#u8) | | --- ## Method Details ### [array](#array) **Signature:** `array(p)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `p` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [bool](#bool) ### [datetime](#datetime) ### [f32](#f32) ### [f64](#f64) ### [i32](#i32) ### [i64](#i64) ### [list](#list) **Signature:** `list(p)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `p` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [map](#map) **Signature:** `map(hash_map)` #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `hash_map` | [Any](https://docs.python.org/3/library/typing.html#typing.Any) | - | | ### [naive_datetime](#naive_datetime) ### [str](#str) ### [u16](#u16) ### [u32](#u32) ### [u64](#u64) ### [u8](#u8) --- ## Reference > Api > Python > Raphtory > PyPropValueList --- title: "PyPropValueList" breadcrumb: "Reference / Python / raphtory / PyPropValueList" --- # PyPropValueList ## Methods | Method | Description | |--------|-------------| | [`average`](#average) | Compute the average of all property values. Alias for mean(). | | [`collect`](#collect) | | | [`count`](#count) | | | [`drop_none`](#drop_none) | Drop none. | | [`max`](#max) | Find the maximum property value and its associated time. | | [`mean`](#mean) | Compute the mean of all property values. | | [`median`](#median) | Compute the median of all property values. | | [`min`](#min) | Min property value. | | [`sum`](#sum) | Sum of property values. | --- ## Method Details ### [average](#average) Compute the average of all property values. Alias for mean(). #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | The average of each property values, or None if count is zero. | ### [collect](#collect) ### [count](#count) ### [drop_none](#drop_none) Drop none. 
#### Returns | Type | Description | |------|-------------| | list[[PropValue](/docs/reference/api/python/typing)] | | ### [max](#max) Find the maximum property value and its associated time. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | | ### [mean](#mean) Compute the mean of all property values. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | The mean of each property values, or None if count is zero. | ### [median](#median) Compute the median of all property values. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | | ### [min](#min) Min property value. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | | ### [sum](#sum) Sum of property values. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | | --- ## Reference > Api > Python > Raphtory > TemporalProperties --- title: "TemporalProperties" breadcrumb: "Reference / Python / raphtory / TemporalProperties" --- # TemporalProperties A view of the temporal properties of an entity ## Methods | Method | Description | |--------|-------------| | [`get`](#get) | Get property value for `key` if it exists. | | [`histories`](#histories) | Get the histories of all properties | | [`items`](#items) | List the property keys together with the corresponding values | | [`keys`](#keys) | List the available property keys. | | [`latest`](#latest) | Get the latest value of all properties | | [`values`](#values) | List the values of the properties | --- ## Method Details ### [get](#get) **Signature:** `get(key)` Get property value for `key` if it exists. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `key` | [str](https://docs.python.org/3/library/stdtypes.html#str) | - | the name of the property. | #### Returns | Type | Description | |------|-------------| | [TemporalProperty](/docs/reference/api/python/raphtory/TemporalProperty) | the property view if it exists, otherwise `None` | ### [histories](#histories) Get the histories of all properties ### [items](#items) List the property keys together with the corresponding values ### [keys](#keys) List the available property keys. #### Returns | Type | Description | |------|-------------| | list[[str](https://docs.python.org/3/library/stdtypes.html#str)] | | ### [latest](#latest) Get the latest value of all properties ### [values](#values) List the values of the properties #### Returns | Type | Description | |------|-------------| | list[[TemporalProperty](/docs/reference/api/python/raphtory/TemporalProperty)] | the list of property views | --- ## Reference > Api > Python > Raphtory > TemporalProperty --- title: "TemporalProperty" breadcrumb: "Reference / Python / raphtory / TemporalProperty" --- # TemporalProperty A view of a temporal property ## Methods | Method | Description | |--------|-------------| | [`at`](#at) | Get the value of the property at a specified time. | | [`average`](#average) | Compute the average of all property values. Alias for mean(). | | [`count`](#count) | Count the number of properties. | | [`items`](#items) | List update times and corresponding property values. | | [`max`](#max) | Find the maximum property value and its associated time. | | [`mean`](#mean) | Compute the mean of all property values. Alias for mean(). 
| | [`median`](#median) | Compute the median of all property values. | | [`min`](#min) | Find the minimum property value and its associated time. | | [`ordered_dedupe`](#ordered_dedupe) | List of ordered deduplicated property values. | | [`sum`](#sum) | Compute the sum of all property values. | | [`unique`](#unique) | List of unique property values. | | [`value`](#value) | Get the latest value of the property. | | [`values`](#values) | Get the property values for each update. | ## Properties | Property | Description | |----------|-------------| | [`history`](#history) | Returns a history object which contains time entries for when the property was updated. | --- ## Method Details ### [at](#at) **Signature:** `at(t)` Get the value of the property at a specified time. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `t` | [TimeInput](/docs/reference/api/python/typing) | - | time | #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing), optional | | ### [average](#average) Compute the average of all property values. Alias for mean(). #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | The average of each property values, or None if count is zero. | ### [count](#count) Count the number of properties. #### Returns | Type | Description | |------|-------------| | [int](https://docs.python.org/3/library/functions.html#int) | The number of properties. | ### [items](#items) List update times and corresponding property values. ### [max](#max) Find the maximum property value and its associated time. ### [mean](#mean) Compute the mean of all property values. Alias for mean(). #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | The mean of each property values, or None if count is zero. | ### [median](#median) Compute the median of all property values. ### [min](#min) Find the minimum property value and its associated time. ### [ordered_dedupe](#ordered_dedupe) **Signature:** `ordered_dedupe(latest_time)` List of ordered deduplicated property values. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `latest_time` | [bool](https://docs.python.org/3/library/functions.html#bool) | - | Enable to check the latest time only. | ### [sum](#sum) Compute the sum of all property values. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing) | The sum of all property values. | ### [unique](#unique) List of unique property values. #### Returns | Type | Description | |------|-------------| | list[[PropValue](/docs/reference/api/python/typing)] | | ### [value](#value) Get the latest value of the property. #### Returns | Type | Description | |------|-------------| | [PropValue](/docs/reference/api/python/typing), optional | | ### [values](#values) Get the property values for each update. #### Returns | Type | Description | |------|-------------| | `NumpyArray` | | --- ## Property Details ### [history](#history) Returns a history object which contains time entries for when the property was updated. 
#### Returns | Type | Description | |------|-------------| | [History](/docs/reference/api/python/raphtory/History) | | --- ## Reference > Api > Python > Raphtory > WindowSet --- title: "WindowSet" breadcrumb: "Reference / Python / raphtory / WindowSet" --- # WindowSet ## Methods | Method | Description | |--------|-------------| | [`time_index`](#time_index) | Returns the time index of this window set. | --- ## Method Details ### [time_index](#time_index) **Signature:** `time_index(center=False)` Returns the time index of this window set. It uses the last time of each window as the reference or the center of each if `center` is set to `True`. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `center` | [bool](https://docs.python.org/3/library/functions.html#bool), optional | `False` | If True time indexes are centered. Defaults to False. | #### Returns | Type | Description | |------|-------------| | `Iterable` | The time index. | --- ## Reference > Api > Python > Vectors > Document --- title: "Document" breadcrumb: "Reference / Python / vectors / Document" --- # Document A document corresponding to a graph entity. Used to generate embeddings. ## Properties | Property | Description | |----------|-------------| | [`content`](#content) | The document content. | | [`embedding`](#embedding) | The embedding of the document. | | [`entity`](#entity) | The graph entity corresponding to the document. | --- ## Property Details ### [content](#content) The document content. #### Returns | Type | Description | |------|-------------| | [str](https://docs.python.org/3/library/stdtypes.html#str) | Content of the document. | ### [embedding](#embedding) The embedding of the document. #### Returns | Type | Description | |------|-------------| | [Embedding](/docs/reference/api/python/vectors/Embedding), optional | The embedding of the document if it was computed. | ### [entity](#entity) The graph entity corresponding to the document. #### Returns | Type | Description | |------|-------------| | [Any](https://docs.python.org/3/library/typing.html#typing.Any), optional | | --- ## Reference > Api > Python > Vectors > Embedding --- title: "Embedding" breadcrumb: "Reference / Python / vectors / Embedding" --- # Embedding --- ## Reference > Api > Python > Vectors > VectorisedGraph --- title: "VectorisedGraph" breadcrumb: "Reference / Python / vectors / VectorisedGraph" --- # VectorisedGraph VectorisedGraph object that contains embedded documents that correspond to graph entities. ## Methods | Method | Description | |--------|-------------| | [`edges_by_similarity`](#edges_by_similarity) | Perform a similarity search between each edge's associated document and a specified `query`. Returns a number of edges up to a specified `limit` ranked in descending order of similarity score. | | [`empty_selection`](#empty_selection) | Return an empty selection of entities. | | [`entities_by_similarity`](#entities_by_similarity) | Perform a similarity search between each entity's associated document and a specified `query`. Returns a number of entities up to a specified `limit` ranked in descending order of similarity score. | | [`nodes_by_similarity`](#nodes_by_similarity) | Perform a similarity search between each node's associated document and a specified `query`. Returns a number of nodes up to a specified `limit` ranked in descending order of similarity score. 
| --- ## Method Details ### [edges_by_similarity](#edges_by_similarity) **Signature:** `edges_by_similarity(query, limit, window=None)` Perform a similarity search between each edge's associated document and a specified `query`. Returns a number of edges up to a specified `limit` ranked in descending order of similarity score. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The maximum number of new edges in the results. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | The vector selection resulting from the search. | ### [empty_selection](#empty_selection) Return an empty selection of entities. ### [entities_by_similarity](#entities_by_similarity) **Signature:** `entities_by_similarity(query, limit, window=None)` Perform a similarity search between each entity's associated document and a specified `query`. Returns a number of entities up to a specified `limit` ranked in descending order of similarity score. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The maximum number of new entities in the result. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | The vector selection resulting from the search. | ### [nodes_by_similarity](#nodes_by_similarity) **Signature:** `nodes_by_similarity(query, limit, window=None)` Perform a similarity search between each node's associated document and a specified `query`. Returns a number of nodes up to a specified `limit` ranked in descending order of similarity score. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The maximum number of new nodes in the result. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | The vector selection resulting from the search. 
| --- ## Reference > Api > Python > Vectors > VectorSelection --- title: "VectorSelection" breadcrumb: "Reference / Python / vectors / VectorSelection" --- # VectorSelection ## Methods | Method | Description | |--------|-------------| | [`add_edges`](#add_edges) | Add all the documents associated with the specified `edges` to the current selection. | | [`add_nodes`](#add_nodes) | Add all the documents associated with the specified `nodes` to the current selection. | | [`append`](#append) | Add all the documents in a specified `selection` to the current selection. | | [`edges`](#edges) | Returns the edges present in the current selection. | | [`expand`](#expand) | Add all the documents a specified number of `hops` away from the selection. | | [`expand_edges_by_similarity`](#expand_edges_by_similarity) | Add the top `limit` adjacent edges with higher score for `query` to the selection | | [`expand_entities_by_similarity`](#expand_entities_by_similarity) | Add the top `limit` adjacent entities with higher score for `query` to the selection | | [`expand_nodes_by_similarity`](#expand_nodes_by_similarity) | Add the top `limit` adjacent nodes with higher score for `query` to the selection | | [`get_documents`](#get_documents) | Returns the documents present in the current selection. | | [`get_documents_with_scores`](#get_documents_with_scores) | Returns the documents present in the current selection alongside their scores. | | [`nodes`](#nodes) | Returns the nodes present in the current selection. | --- ## Method Details ### [add_edges](#add_edges) **Signature:** `add_edges(edges)` Add all the documents associated with the specified `edges` to the current selection. Documents added by this call are assumed to have a score of 0. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `edges` | [list](https://docs.python.org/3/library/stdtypes.html#list) | - | List of the edge ids or edges to add. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [add_nodes](#add_nodes) **Signature:** `add_nodes(nodes)` Add all the documents associated with the specified `nodes` to the current selection. Documents added by this call are assumed to have a score of 0. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `nodes` | [list](https://docs.python.org/3/library/stdtypes.html#list) | - | List of the node ids or nodes to add. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [append](#append) **Signature:** `append(selection)` Add all the documents in a specified `selection` to the current selection. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `selection` | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | - | Selection to be added. | #### Returns | Type | Description | |------|-------------| | [VectorSelection](/docs/reference/api/python/vectors/VectorSelection) | The combined selection. | ### [edges](#edges) Returns the edges present in the current selection. #### Returns | Type | Description | |------|-------------| | list[[Edge](/docs/reference/api/python/raphtory/Edge)] | List of edges in the current selection. | ### [expand](#expand) **Signature:** `expand(hops, window=None)` Add all the documents a specified number of `hops` away from the selection. 
Two documents A and B are considered to be 1 hop away from each other if they are on the same entity or if they are on the same node/edge pair. Provided that two nodes A and C are n hops away of each other if there is a document B such that A is n - 1 hops away of B and B is 1 hop away of C. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `hops` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of hops to carry out the expansion. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [expand_edges_by_similarity](#expand_edges_by_similarity) **Signature:** `expand_edges_by_similarity(query, limit, window=None)` Add the top `limit` adjacent edges with higher score for `query` to the selection This function has the same behaviour as expand_entities_by_similarity but it only considers edges. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The maximum number of new edges to add. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [expand_entities_by_similarity](#expand_entities_by_similarity) **Signature:** `expand_entities_by_similarity(query, limit, window=None)` Add the top `limit` adjacent entities with higher score for `query` to the selection The expansion algorithm is a loop with two steps on each iteration: 1. All the entities 1 hop away of some of the entities included on the selection (and not already selected) are marked as candidates. 2. Those candidates are added to the selection in descending order according to the similarity score obtained against the `query`. This loops goes on until the number of new entities reaches a total of `limit` entities or until no more documents are available #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The number of documents to add. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. 
| #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [expand_nodes_by_similarity](#expand_nodes_by_similarity) **Signature:** `expand_nodes_by_similarity(query, limit, window=None)` Add the top `limit` adjacent nodes with higher score for `query` to the selection This function has the same behaviour as expand_entities_by_similarity but it only considers nodes. #### Parameters | Name | Type | Default | Description | |------|------|---------|-------------| | `query` | [str](https://docs.python.org/3/library/stdtypes.html#str) \| [list](https://docs.python.org/3/library/stdtypes.html#list) | - | The text or the embedding to score against. | | `limit` | [int](https://docs.python.org/3/library/functions.html#int) | - | The maximum number of new nodes to add. | | `window` | [Tuple](https://docs.python.org/3/library/typing.html#typing.Tuple) \| `str, int` \| `str]`, optional | `None` | The window that documents need to belong to in order to be considered. | #### Returns | Type | Description | |------|-------------| | [None](https://docs.python.org/3/library/constants.html#None) | | ### [get_documents](#get_documents) Returns the documents present in the current selection. #### Returns | Type | Description | |------|-------------| | list[[Document](/docs/reference/api/python/vectors/Document)] | List of documents in the current selection. | ### [get_documents_with_scores](#get_documents_with_scores) Returns the documents present in the current selection alongside their scores. ### [nodes](#nodes) Returns the nodes present in the current selection. #### Returns | Type | Description | |------|-------------| | list[[Node](/docs/reference/api/python/raphtory/Node)] | List of nodes in the current selection. | --- ## Reference > Compatibility # Compatibility When setting up a Raphtory environment you must consider the prerequisites of the specific version you are using and compatibilities between versions. This page provides information on the supported versions of Raphtory components and dependencies. ## Python versions The following python versions are supported: - 3.14 - 3.13 - 3.12 - 3.11 --- ## Reference > Troubleshooting # Troubleshooting This page covers common errors and misconfigurations in Raphtory. ## Specifying time measurements Internally all times in Raphtory are represented as milliseconds using unix epochs. When ingesting data you will need to convert your raw data into the appropriate format. Similarly queries made using the API should use timestamps relative to the unix epoch in milliseconds. ## Modeling transactions as state A common mistake is using [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) for financial transfers. If A pays B $100 at `t1` and you model it as state, the graph will see them as being "in a state of paying $100" forever. Transactions are events – use [`Graph()`](/docs/reference/api/python/raphtory/Graph) instead. ## Missing deletions in PersistentGraph In a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), if you don't call [`.delete_edge()`](/docs/reference/api/python/raphtory/PersistentGraph#delete_edge), every relationship ever created will stay active in your snapshots. Always plan your "end of life" events for persistent data. ## Docker graph storage When saving a graph to disk in the official Docker container, the default location is `/home/raphtory_server`. 
This is where the Raphtory server will look for graphs unless you specify an alternative working directory. When saving a file or sending a graph to the server you can always specify a custom path.

============================================================
# Section: Ecosystem
============================================================

---

## Ecosystem > Index

# Ecosystem Integrations

Raphtory fits seamlessly into the modern data stack. Here are guides for integrating with popular platforms.

---

## Data Platforms

- [Snowflake](/docs/ecosystem/data-platforms/snowflake) - Run graph intelligence on your Snowflake data warehouse.
- [BigQuery](/docs/ecosystem/data-platforms/bigquery) - Graph analytics on Google Cloud data.

---

## Orchestration

- [Apache Airflow](/docs/ecosystem/orchestration/airflow) - Schedule and automate graph intelligence pipelines.

---

## Development & Visualization

- [Jupyter Notebooks](/docs/ecosystem/notebooks/jupyter) - Interactive graph exploration and prototyping.
- [Grafana](/docs/ecosystem/visualization/grafana) - Real-time graph intelligence dashboards.

---

**Don't see your tool?** Raphtory is a standard Python library. It works with any system that supports Python or structured data files (CSV, Parquet, JSON).

---

## Ecosystem > Data Platforms > Bigquery

# Raphtory + Google BigQuery

**Graph intelligence on Google Cloud data.** Query BigQuery tables, run graph analytics, and export results back to BigQuery for visualization - all within your Python environment.

## Setup

Install the BigQuery client and Raphtory:

```bash
pip install raphtory google-cloud-bigquery db-dtypes
```

## Example: Customer Segmentation

### 1. Query BigQuery

Fetch interaction data directly from your BigQuery dataset.

```python
import pandas as pd
from google.cloud import bigquery
from raphtory import Graph, algorithms

client = bigquery.Client()
query = """
    SELECT user_id, friend_id, friendship_date
    FROM `project.dataset.friendships`
    WHERE DATE(friendship_date) >= CURRENT_DATE() - 7
"""
df = client.query(query).to_dataframe()
```

### 2. Build Temporal Graph

Ingest the results into Raphtory to build the customer network.

```python
g = Graph()
g.load_edges_from_pandas(
    df,
    src="user_id",
    dst="friend_id",
    time="friendship_date"
)
```

### 3. Run Community Detection

Use the Louvain algorithm to find groups of highly connected users.

```python
communities = algorithms.louvain(g)
```

### 4. Export to BigQuery

Write the community labels back to a BigQuery table for reporting.

```python
results = pd.DataFrame([
    {"user_id": node.name, "community": communities.get(node.name)}
    for node in g.nodes
])

table_id = "project.dataset.user_communities"
job = client.load_table_from_dataframe(results, table_id)
job.result()  # Wait for job to complete
```

## Use Cases

- **Customer Network Segmentation**: Group users based on interaction frequency and connectivity.
- **Supply Chain Risk**: Identify critical nodes in supply chain data stored in BQ.
- **Social Graph Analytics**: Track the spread of influence or content across user networks.

---

## Ecosystem > Data Platforms > Snowflake

# Raphtory + Snowflake

**Run graph intelligence on your Snowflake data warehouse.** Load transaction data, detect fraud rings, and write results back to Snowflake for BI dashboards - all in Python.

## Why Integrate?

- **Leverage existing data**: No ETL required, read directly from Snowflake.
- **Scale intelligence**: Process billions of interactions.
- **Unified analytics**: Combine SQL with graph intelligence.
- **BI-ready results**: Write back to Snowflake for Tableau/Looker.

## Setup

```bash
pip install raphtory snowflake-connector-python
```

## Example: Daily Fraud Detection

### 1. Connect to Snowflake

Initialize the connection using the Snowflake Python connector.

```python
import pandas as pd
import snowflake.connector
from raphtory import Graph, algorithms

conn = snowflake.connector.connect(
    user='YOUR_USER',
    password='YOUR_PASSWORD',
    account='YOUR_ACCOUNT',
    warehouse='COMPUTE_WH',
    database='TRANSACTIONS_DB',
    schema='PUBLIC'
)
```

### 2. Load Transactions

Query directly from your warehouse into a Pandas DataFrame.

```python
query = """
    SELECT from_account_id, to_account_id, transaction_time, amount
    FROM transactions
    WHERE DATE(transaction_time) = CURRENT_DATE
"""
df = pd.read_sql(query, conn)
```

### 3. Build Temporal Graph

Ingest the data into Raphtory.

```python
g = Graph()
g.load_edges_from_pandas(
    df,
    src="from_account_id",
    dst="to_account_id",
    time="transaction_time",
    properties={"amount": "amount"}
)
```

### 4. Detect Coordinated Behavior

Run community detection and temporal analysis to find suspicious rings.

```python
communities = algorithms.louvain(g, resolution=1.0)

fraud_rings = []
for community_id, members in communities.groups().items():
    if len(members) > 5:
        # Check timing coordination
        creation_times = [g.node(n).earliest_time for n in members]
        time_span = max(creation_times) - min(creation_times)
        if time_span < 3600000:  # Created within 1 hour (ms)
            fraud_rings.append({
                "ring_id": community_id,
                "size": len(members),
                # calculate_risk is your own scoring helper, not part of Raphtory
                "risk_score": calculate_risk(members, g)
            })
```

### 5. Write Back Results

Export your findings back to a Snowflake table for downstream consumption.

```python
from snowflake.connector.pandas_tools import write_pandas

results_df = pd.DataFrame(fraud_rings)
# Append the results to an existing Snowflake table using the connector's pandas helper
# (pandas.DataFrame.to_sql requires an SQLAlchemy engine rather than a raw connector connection)
write_pandas(conn, results_df, 'fraud_ring_alerts')
```

## Best Practices

1. **Incremental Loading**: Query only new data since your last run to minimize warehouse costs.
2. **Result Tables**: Write to dedicated alerts tables and use indexes for fast dashboarding.
3. **Warehouse Sizing**: Use a MEDIUM or LARGE warehouse for highly parallel queries if processing large time windows.

---

## Ecosystem > Notebooks > Jupyter

# Raphtory + Jupyter

**Interactive graph intelligence and prototyping.** Explore networks, test algorithms, and prototype full graph intelligence workflows in the most popular environment for data science.

## Why Use Jupyter for Raphtory?

- **Immediate Feedback**: See the results of complex temporal queries instantly.
- **Rich Visualization**: Integrate Matplotlib, Plotly, and network-specific visualization libraries.
- **Reproducible Research**: Document your analysis steps alongside your code.

## Setup

```bash
pip install raphtory jupyter matplotlib plotly
jupyter notebook
```

## Example Analysis Workflow

### 1. Ingest Data

Load your interaction data and build the temporal graph.

```python
import pandas as pd
from raphtory import Graph, algorithms

df = pd.read_csv("interactions.csv")
g = Graph()
g.load_edges_from_pandas(df, src="user_a", dst="user_b", time="timestamp")
```

### 2. Exploratory Statistics

Visualize the degree distribution to understand your network's connectivity.

```python
import matplotlib.pyplot as plt

degrees = [node.degree() for node in g.nodes]
plt.hist(degrees, bins=50)
plt.title("Degree Distribution")
plt.show()
```

### 3. Run Temporal Analysis

Analyze the network at different points in time to see how it evolves.
```python
for t in [1000, 2000, 3000]:
    snapshot = g.at(t)
    print(f"Time {t}: {snapshot.count_nodes()} nodes")
```

## Best Practices

1. **Memory Management**: Use `del g` and `gc.collect()` if you are iterating over many large graph versions in a single session.
2. **Modularize Early**: Once you are happy with a prototype, move the logic into a separate `.py` module to keep your notebooks clean.
3. **Save State**: Use `g.save_to_file()` to checkpoint your graph so you don't have to rebuild it every time you restart the kernel.

---

## Ecosystem > Orchestration > Airflow

# Raphtory + Apache Airflow

**Orchestrate your graph intelligence pipelines at scale.** Schedule daily fraud detection, trigger alerts, and automate complex graph analytics workflows with the industry standard for orchestration.

## Why Integrate?

- **Automation**: Run hourly or daily graph analysis without manual intervention.
- **Dependency Management**: Chain data loading, graph analysis, and alerting tasks.
- **Monitoring**: Built-in tracking for pipeline health and failure alerts.
- **Scalability**: Distribute graph workloads across workers using Celery or Kubernetes executors.

## Setup

```bash
pip install raphtory apache-airflow
```

## Example: Daily Fraud Ring Detection

### 1. Define Extraction Task

Load the latest transaction data from your source (e.g., Snowflake) and save it to temporary storage.

```python
import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago

def extract_transactions():
    # Load from your database (conn is your existing database connection)
    df = pd.read_sql("SELECT * FROM transactions WHERE date = CURRENT_DATE", conn)
    df.to_parquet('/tmp/daily_transactions.parquet')
```

### 2. Define Analytics Task

Ingest the data into Raphtory and run temporal community detection.

```python
from raphtory import Graph, algorithms

def detect_fraud_rings():
    df = pd.read_parquet('/tmp/daily_transactions.parquet')
    g = Graph()
    g.load_edges_from_pandas(df, src="from_account", dst="to_account", time="timestamp")
    communities = algorithms.louvain(g)
    # Filter for suspicious rings and save
```

### 3. Orchestrate the DAG

Link your tasks into a DAG that runs on a schedule.

```python
with DAG('fraud_detection_pipeline', schedule_interval='@daily', start_date=days_ago(1)) as dag:
    extract = PythonOperator(task_id='extract', python_callable=extract_transactions)
    analyze = PythonOperator(task_id='analyze', python_callable=detect_fraud_rings)

    extract >> analyze
```

## Best Practices

1. **Use XCom for Metadata**: Pass counts and IDs via XCom, but keep the actual graph data in persistent storage (S3, GCS, or Delta).
2. **Resource Limits**: In multi-tenant Airflow environments, use `Pools` or strict worker memory limits for heavy graph tasks.
3. **Idempotency**: Ensure your graph analysis tasks can be retried safely without duplicating results in your dashboard tables.

---

## Ecosystem > Visualization > Grafana

# Raphtory + Grafana

**Real-time graph intelligence dashboards.** Visualize network health metrics, algorithm results, and temporal trends with Grafana's powerful dashboarding capabilities.

## Setup

Export your graph metrics to a time-series database like Prometheus or InfluxDB, then connect Grafana to that data source.

## Example: Monitoring Fraud Rings

### 1. Define Prometheus Metrics

Use the `prometheus_client` library to define gauges for your graph metrics.
```python
from prometheus_client import Gauge, start_http_server
from raphtory import Graph, algorithms

fraud_rings_gauge = Gauge('fraud_rings_detected', 'Number of active fraud rings')
start_http_server(8000)
```

### 2. Update Metrics in Pipeline

In your scheduled analysis pipeline, calculate the metrics and update the gauges.

```python
def update_metrics():
    g = load_latest_graph()  # your own loader for the most recent graph
    communities = algorithms.louvain(g)
    # Logic to count fraud rings from communities
    count = analyze_communities(communities)
    fraud_rings_gauge.set(count)
```

### 3. Build Grafana Dashboard

Add a Prometheus data source in Grafana pointing to `localhost:8000` and create panels to track:

- **Fraud Rings detected over time**
- **Average community size**
- **Graph density and connectivity**

## Dashboard Use Cases

- **Security Operations**: Monitor for sudden spikes in "lateral movement" motifs.
- **Infrastructure Health**: Track the number of isolated components in your network topology.
- **Product Analytics**: Visualize the growth of user communities in real-time.

============================================================
# Section: Visualisation
============================================================

---

## Visualisation > Index

# UI overview

## Search page

The **Search page** consists of the following elements:

- **Global menu** - Switch between any of the main pages.
- **Search results** - Shows the results of your query.
- **Pinned results** - Shows any results you have pinned.
- **Query Builder** - Search a selected graph using the options provided.
- **Selected** - Shows contextual information about the current selection.

## Graphs page

The **Graphs page** consists of the following elements:

- **Global menu** - Switch between any of the main pages.
- **Graphs list** - Shows the available graphs.
- **Context menu** - Shows contextual information about the current selection.

## Graph view

The **Graph view** displays the graph or sub-graph you have selected and provides information on that selection. You can also refine your selection further or save it as a new graph. For more information, see the [Graph view](/docs/visualisation/graph-view) page.

## GraphQL playground

This page allows you to access the standard [GraphiQL](https://github.com/graphql/graphiql) playground.

---

## Visualisation > Graph View

# Graph view

The **Graph view** consists of the following elements:

- **Global menu** - Switch between any of the main pages.
- **Toolbar** - Manipulate the current selection.
- **Context menu** - Shows contextual information about the current selection.
- **Overview** - Information about the currently selected graph or sub-graph.
- **Layout** - Modify how the layout engine displays the graph.
- **Selected** - Information about the selected node or edge.
- **Graph settings** - Modify the style properties of a selected node or edge.
- **Graph canvas** - Displays the current graph or sub-graph. You can select a node or edge to show its information in the **Context menu**.
- **Temporal view** - Displays the edges of the current graph or sub-graph as a timeline of events. On longer timescales edges are shown as a heatmap instead of discrete events.

## Modifying the graph layout

The Raphtory UI gives you detailed control over how your graphs are displayed. You can use this to match your preferences, build custom visualisations for reports or better fit the shape of your data.
Raphtory's layout engine is built on [G6](https://github.com/antvis/G6) and many of the [D3 Force-Directed Layout](https://g6.antv.antgroup.com/en/manual/layout/d3-force-layout) parameters are exposed in the **Layout** tab of the **Context menu**.

You can select from the following layout algorithms:

- Default
- Concentric
- Force Based
- Hierarchical TB
- Hierarchical LR

For each layout, specific **Advanced Options** can be set to tune the algorithm.

### Default Layout

| Parameter | Description |
|-----------|-------------|
| Collision Radius | The collision force treats nodes as circles with a given radius, rather than points, and prevents nodes from overlapping. You can specify the effective radius. |
| Collision Strength | Sets the strength of the collision force. |
| Link Distance | Specify an ideal edge length. |
| Link Strength | Higher link strength results in distances closer to the ideal. |
| Many-Body Force | The mutual force between nodes, a positive value is attractive and a negative value is repulsive. |
| Many-Body Range | Set the maximum and minimum distance between nodes where many-body forces are applied. |
| Center Force | Applies a uniform force on all nodes towards the center. |
| Radial Force Strength | Applies a uniform force on all nodes within a specified radius towards the center. |
| Radial Force Radius | Specify a radius for the radial force. |

### Concentric Layout

| Parameter | Description |
|-----------|-------------|
| Use Clockwise | Enable to add nodes in a clockwise order. |
| Maintain Equidistant Rings | Enable to require equidistant rings. |
| Node Size | Effective node diameter. This affects ring spacing to avoid collision. |
| Node Spacing | Minimum spacing between rings. |
| Prevent Overlap | Enable to prevent overlap between nodes. Only works if Node Size is set. |
| Start Angle | Start angle where the first node is added. Specified in radians. |
| Sweep | Angle between the first and last nodes in the same ring. |

### Force Based Layout

| Parameter | Description |
|-----------|-------------|
| Gravity | Applies a force on all nodes towards the center proportional to their distance from the center. |
| Speed | Movement speed per iteration of the algorithm. |

### Hierarchical TB Layout

| Parameter | Description |
|-----------|-------------|
| Invert | Enable to invert the direction. |
| Direction | Specify how the node hierarchy should be aligned. |
| Node Separation | Separation of nodes in the same rank. |
| Rank Separation | Separation between ranks. |
| Rank algorithm | Specify the algorithm used to assign nodes to ranks. |
| Node Size | Node size used for collision. |
| Retain Edge Control Points | Enable to use control points. |

### Pre-layout algorithms

Optionally, you can set a pre-layout algorithm that runs before the primary layout algorithm:

- Concentric - arranged around the center.
- Dagre LR - arranged using the hierarchical Dagre algorithm from left to right.
- Dagre TB - arranged using the hierarchical Dagre algorithm from top to bottom.

For Concentric and Dagre TB algorithms you can also specify **Advanced Options** when used in the pre-layout phase.
## Modify styles

You can modify the styles applied to nodes and edges from the **Graph settings** tab of the **Context menu**. You can perform both global and local changes which are saved as metadata in the graph. Style metadata for node types is stored on the graph, style metadata for edge layers is stored on each edge, and style metadata for individual nodes is stored on the matching node.

The format for styles is as follows:

```python
# Graph styles
{
    '_style': {
        'node_types': {
            'Person': {'fill': '#1cb917'},
            'Company': {'fill': '#f8e61b'},
        }
    }
}

# Node styles
{
    '_style': {
        'fill': '#417505',
        'size': 12
    }
}

# Edge styles
{
    '_style': {
        'meets': {
            'startArrowSize': 4,
            'stroke': '#f8e61b',
            'lineWidth': 1,
            'endArrowSize': 4
        },
        'knows': {
            'stroke': '#631854',
            'endArrowSize': 4
        },
    }
}
```

### Set the styles for a specified node type

1. Clear all selections.
2. Switch to the **Graph settings** tab of the **Context menu**.
3. Click the **Select Node Type** drop down and choose a node type or 'None'.
4. Specify a colour using the **Node Colour** palette.
5. Specify a **Node Size** value.
6. Click **Save**.

### Set the styles for a specified edge layer

1. Select any edge.
2. Switch to the **Graph settings** tab of the **Context menu**.
3. Click the **Select Edge Layer** dropdown and choose a layer.
4. Specify a colour using the **Node Colour** palette.
5. Specify a **Node Size** value.
6. Click **Save**.

### Set the styles for the currently selected node

1. Select any node.
2. Switch to the **Graph settings** tab of the **Context menu**.
3. Specify a colour using the **Node Colour** palette.
4. Specify a **Node Size** value.
5. Click **Save**.

Styles set on an individual node override styles set on a node type. Additionally, styles can only be applied to individual nodes; if you have multiple nodes selected, the last node you selected will be updated.

============================================================
# Section: Building Apps
============================================================

---

## Building Apps > Index

# Building Graph Applications

**Transform raw temporal data into interactive, stakeholder-ready tools.**

The true value of a temporal graph is realized when it moves out of the notebook and into the hands of investigators, analysts, and decision-makers. Pometry and Raphtory are designed to be "headless" engines that power a new breed of **Graph Applications**. Whether you need a quick dashboard for a tactical investigation or a high-fidelity enterprise platform, the architecture remains consistent:

---

## The Application Stack

A typical Raphtory-powered application consists of three layers:

1. **The Intelligence Engine (Raphtory)**: Handles the heavy lifting of temporal windowing, multi-hop traversals, and pattern matching.
2. **The API Layer**: Exposes graph state via **GraphQL** or the **Python Agentic API**.
3. **The Interactive Frontend**: A user interface designed to navigate complex relationships and the arrow of time.

---

## Choosing Your Path

Depending on your requirements and timeline, there are two primary ways to build on Raphtory:

- **The Fast Path (Streamlit & Dash)** - Perfect for internal tools, rapid prototyping, and one-off investigative dashboards. Pure Python, no frontend knowledge required.
- **The Pro Path (Next.js & React)** - For production-grade intelligence platforms. High-fidelity visuals, custom temporal scrubbers, and complex agentic chat interfaces.

---

## Core UX Challenges

Building for temporal graphs introduces unique UI/UX challenges that traditional tables and charts don't solve.

* **Temporal Navigation**: How does a user "scroll" through the history of a billion-node network?
* **Relationship Density**: How do you prevent "hairballs" while showing deep multi-hop connections?
* **Contextual Grounding**: How do you prove that an AI's recommendation is backed by real graph events?

We address these in our [Visualizing Time & UX](./building-apps/visualizing-time) guide.

**Building with Pometry**: Pometry's commercial platform provides a pre-built "Workshop" of UI components specifically designed for these challenges, including managed WebGPU graph canvases and temporal timeline scrubbers.

============================================================
# Section: Export
============================================================

---

## Export > Index

# Exporting and visualising your graph

There are many different formats and libraries that you can use to export your graphs. In this section we explore two of these, Pandas dataframes and NetworkX graphs.

All functions mentioned in this section work on graphs, nodes, and edges and their matching views. This allows you to specify different windows, layers and subgraphs before conversion. By default, exporting will include all properties and all update history. However, this can be modified via flags on each export function, depending on use case requirements. You can find a description of these flags in the API docs for the function.

If we are missing a format that you believe to be important, please raise an [issue](https://github.com/Pometry/Raphtory/issues) and it will be available before you know it!

The following example reuses the network traffic dataset from the [ingestion tutorial](/docs/ingestion/dataframes) and the monkey interaction network from the [querying tutorial](/docs/querying). In the below code example you can get a refresher of what these datasets look like.
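The code block that originally accompanied this page is not included in this export. As a stand-in, here is a minimal sketch of previewing the two datasets with pandas; the file names are hypothetical placeholders for the files used in the ingestion and querying tutorials.

```python
import pandas as pd

# Hypothetical file names standing in for the tutorial datasets
traffic_df = pd.read_csv("network_traffic_edges.csv")
monkey_df = pd.read_csv("monkey_interactions.csv")

# Preview the first few rows of each dataset
print(traffic_df.head())
print(monkey_df.head())
```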
============================================================
# Section: Ontology
============================================================

---

## Ontology > Index

# Introduction to Temporal Ontologies

**Stop building schemas. Start modeling your world.**

In traditional systems, you define a **Schema**: a rigid structure of tables and columns optimized for storage. In Raphtory, you define an **Ontology**: a semantic model of your business objects, their relationships, and how they interact over time.

## What is an Ontology?

An Ontology is a "High-Fidelity Digital Twin" of your organization. It bridges the gap between technical data (rows/logs) and business reality (flights/transactions).

| Concept | SQL / Warehouse | Graph Database | Raphtory Ontology |
| :--- | :--- | :--- | :--- |
| **Unit of Data** | Row | Vertex | **Object (Node)** |
| **Connection** | Foreign Key | Edge | **Relationship (Edge)** |
| **Time** | Timestamp Column | Property | **First-Class Dimension** |
| **Goal** | Efficient Storage | Connectivity | **Causal Intelligence** |

## The Three Pillars

- Map your business entities (Airports, Accounts) directly to graph nodes, keeping their identity consistent across time (see the sketch after this list).
- Don't just store "current state": treat every update as an event to build a lossless history.
- (Coming Soon) Define how objects interact and trigger alerts based on complex temporal patterns.
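To make the first pillar concrete, here is a minimal sketch of mapping a business object onto a graph node; the identifier, property name, and `node_type` label are illustrative and not taken from the original page, and the example assumes the `properties` and `node_type` keyword arguments of `add_node`.

```python
from raphtory import Graph

g = Graph()

# Illustrative: the same business identifier is updated at two points in time,
# so the object keeps a single node identity with a full history
g.add_node(1, "ACC-1001", properties={"balance": 250.0}, node_type="Account")
g.add_node(5, "ACC-1001", properties={"balance": 900.0}, node_type="Account")

# Both updates are retained on the one node
print(g.node("ACC-1001").properties.temporal.get("balance").values())
```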
## Why "Temporal" Ontology?

Most graph databases give you a snapshot: *"Who is connected to whom right now?"* Raphtory gives you a movie: *"How did this connection evolve? Who was involved 5 minutes before the fraud event?"*

> [!IMPORTANT]
> **The 4th Dimension**: In a Raphtory Ontology, **Time** is not just a property you query; it is the fabric of the universe. Every node and edge exists validly only within specific time windows.

## Next Steps

Start by learning how to map your physical business assets into digital objects.

[**Go to Object-Centric Modeling →**](/docs/ontology/modeling-objects)

============================================================
# Section: Persistent Graph
============================================================

---

## Persistent Graph > Index

# Time semantics

One of the most important decisions in temporal graph modeling is how to interpret the *meaning* of time in your data. Raphtory supports two semantic models – **events** and **persistence** – giving you maximum flexibility in how you view and query your graphs.

**Under the hood, these representations share the same data timeline.** The difference is purely semantic – how you interpret updates. You can seamlessly switch between views using [`event_graph()`](/docs/reference/api/python/raphtory/PersistentGraph#event_graph) and [`persistent_graph()`](/docs/reference/api/python/raphtory/Graph#persistent_graph) without duplicating data.

## Event Graphs (Discrete)

In the temporal graph literature, graphs made up of instantaneous updates are known as *link streams*. Use [`Graph()`](/docs/reference/api/python/raphtory/Graph) when your data represents discrete occurrences – a payment hitting an account at `2024-03-15 10:32:47`, a network packet arriving at a router at `14:05:23.847`, or a user clicking on a page at `09:15:02`.

This representation also works naturally with snapshot-based data, whether you have precise timestamps or discrete periods like daily snapshots, monthly summaries, or weekly aggregates.

### Querying Event Graphs

In an Event Graph, the [`at(t)`](/docs/reference/api/python/raphtory/Graph#at) function returns only what happened at **exactly** that timestamp. Since events are instantaneous, this is most useful for coarse-grained snapshots (e.g., `g.at(day_1)`) where you know updates occur at specific intervals.

For most use cases, you'll query with **windows**: [`g.window(t1, t2)`](/docs/reference/api/python/raphtory/Graph#window) ("What happened during this hour?"). This captures all events within a time range, which is more practical when dealing with continuous timestamps.

### Deletions in Event Graphs

Deletions have no effect in an Event Graph because every update is seen as instantaneous – there's nothing to "end". If you call [`delete_edge()`](/docs/reference/api/python/raphtory/PersistentGraph#delete_edge) on a [`Graph`](/docs/reference/api/python/raphtory/Graph), the deletion itself is recorded as an event, but edges don't have duration to be terminated.

## Persistent Graphs (Continuous)

Not all relationships are instantaneous. When someone becomes a director of a company in 2023, that relationship persists – if you query the graph at 2025, they're still a director unless explicitly removed. The same applies to employees in a department, tenants in a property, or social media followers. Use [`PersistentGraph()`](/docs/reference/api/python/raphtory/PersistentGraph) when your edges represent ongoing state rather than discrete events.

### Querying Persistent Graphs

The key difference in a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) is that relationships have duration. When you call [`pg.add_edge(t1, "Alice", "Bob")`](/docs/reference/api/python/raphtory/PersistentGraph#add_edge), you're saying "Alice and Bob's relationship began at `t1`". From that point forward, any query using [`pg.at(t)`](/docs/reference/api/python/raphtory/PersistentGraph#at) where `t >= t1` will show them as connected, even if no further updates occurred. The relationship continues indefinitely until you explicitly end it with [`pg.delete_edge(t2, "Alice", "Bob")`](/docs/reference/api/python/raphtory/PersistentGraph#delete_edge), which marks the relationship as terminated at `t2`.

This means you can ask questions like "who was connected at midnight on January 1st?" and get meaningful answers, even if no edges were added at exactly that time.

Note that **nodes exist forever** once added to a `PersistentGraph`. Unlike edges which can be deleted, nodes persist indefinitely from the point of their creation.

## Creating a PersistentGraph

The example below shows how to create and manipulate a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) in Raphtory:
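The code sample from the original page is not included in this export; the sketch below is reconstructed from the description that follows it, with the second activation of the Alice-Bob edge at `t=2` added purely for illustration.

```python
from raphtory import PersistentGraph

pg = PersistentGraph()

# Two activations of the Alice-Bob edge and one activation of Bob-Charlie
pg.add_edge(1, "Alice", "Bob")
pg.add_edge(2, "Alice", "Bob")    # illustrative second activation
pg.add_edge(3, "Bob", "Charlie")

# Explicitly end Alice and Bob's relationship; Bob-Charlie is never deleted
pg.delete_edge(5, "Alice", "Bob")

print(pg.count_edges())           # 2 edges: Alice-Bob and Bob-Charlie
print(pg.edges.explode())         # 3 exploded edges, one per activation
```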
When you call [`pg.add_edge(t1, "Alice", "Bob")`](/docs/reference/api/python/raphtory/PersistentGraph#add_edge), you're saying "Alice and Bob's relationship began at `t1`". From that point forward, any query using [`pg.at(t)`](/docs/reference/api/python/raphtory/PersistentGraph#at) where `t >= t1` will show them as connected, even if no further updates occurred. The relationship continues indefinitely until you explicitly end it with [`pg.delete_edge(t2, "Alice", "Bob")`](/docs/reference/api/python/raphtory/PersistentGraph#delete_edge), which marks the relationship as terminated at `t2`. This means you can ask questions like "who was connected at midnight on January 1st?" and get meaningful answers, even if no edges were added at exactly that time.

Note that **nodes exist forever** once added to a `PersistentGraph`. Unlike edges, which can be deleted, nodes persist indefinitely from the point of their creation.

## Creating a PersistentGraph

The example below shows how to create and manipulate a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) in Raphtory. Here we have a graph with two edges – one connecting Alice and Bob, and one connecting Bob and Charlie – and three exploded edges: one for each activation of Alice and Bob's edge, plus one for the activation of Bob and Charlie's edge. If an edge is not explicitly deleted, it is assumed to last forever.

## Switching Between Semantics

Since both [`Graph`](/docs/reference/api/python/raphtory/Graph) and [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) store the same underlying timeline of updates, you can freely convert between them at any time. This is useful when you want to analyze the same data through different lenses – for example, viewing your social network as events to count daily interactions, then switching to persistent mode to see who was connected at a specific moment.

Use [`persistent_graph()`](/docs/reference/api/python/raphtory/Graph#persistent_graph) to convert a [`Graph`](/docs/reference/api/python/raphtory/Graph) to persistent semantics, and [`event_graph()`](/docs/reference/api/python/raphtory/PersistentGraph#event_graph) to convert a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph) to event semantics.

Notice how the same data produces different results depending on the semantic model. At `t=4`, no updates occurred, so the event view returns nothing. But in persistent mode, both edges created before `t=4` are still active (Alice-Bob at `t=1`, Bob-Charlie at `t=3`). After Alice-Bob is deleted at `t=5`, only Bob-Charlie remains in the persistent view at `t=6`.

Over the next few pages, we will explore how the persistent graph works, its behaviour and semantics, and how it can unlock some interesting analysis.

============================================================
# Section: Building Apps
============================================================

---

## Building Apps > Fast Path

# The Fast Path: Streamlit & Dash

**Go from a Raphtory query to a functional internal tool in minutes using pure Python.**

For data science teams and investigators, the "Fast Path" leverages Python-first web frameworks to wrap Raphtory logic in interactive UI elements like sliders, select boxes, and graph viewers.

---

## Why Streamlit or Dash?

* **Zero JavaScript**: Build complex dashboards without leaving the Python ecosystem.
* **Direct Access**: Use Raphtory objects (`g`, `nodes`, `edges`) directly in your app logic without an intermediate API layer.
* **Rapid Iteration**: Instant hot-reloading as you refine your temporal investigative logic.

---

## Building a Temporal Investigator in Streamlit

A common pattern is to create a dashboard where an analyst can select a node and a time window to see "the immediate neighborhood at that point in time."

```python
import streamlit as st
from raphtory import PersistentGraph

# Load the graph once (cached)
@st.cache_resource
def load_graph():
    return PersistentGraph.load_from_file("./data/master_graph")

g = load_graph()

st.title("🔎 Temporal Neighborhood Investigator")

# 1. Select the entity
target_node = st.text_input("Enter Account ID", value="customer_552")

# 2. Define the temporal window
st.sidebar.header("Temporal Controls")
window_days = st.sidebar.slider("Analysis Window (Days)", 1, 30, 7)
end_time = g.latest_time
start_time = end_time - (window_days * 24 * 60 * 60 * 1000)

# 3. Filter and Visualize
if target_node:
    # Use Raphtory's windowing for the "View"
    view = g.window(start_time, end_time).node(target_node).neighbourhood(1)

    # Render with PyVis or Streamlit-AgGraph
    st.write(f"Found {view.count_nodes()} neighbors in this window.")
    # (Visualisation logic here...)
```

---

## Best Practices for the Fast Path

### 1. Caching the Graph

Raphtory's `PersistentGraph` is memory-efficient, but you should avoid reloading it on every Streamlit script rerun. Use `@st.cache_resource` to keep the graph in memory.

### 2. Aggregating Before Visualizing

Don't try to render a million nodes in a web browser. Use Raphtory's algorithms (e.g., Degree, PageRank) or `subgraph()` filtering to reduce the dataset to a meaningful investigative core before passing it to the frontend.

### 3. Using PyVis for Interactivity

While building the UI in Python, you can use the `pyvis` library to generate interactive, draggable graph visualizations that can be embedded directly into Streamlit via `st.components.v1.html`.

**The Pometry Advantage**: While Streamlit is great for prototypes, Pometry's enterprise UI components provide a managed Python environment with high-performance WebGPU rendering out of the box, allowing you to visualize much larger temporal neighborhoods without performance degradation.

---

## Next Steps

Learn how to build production-grade platforms with Next.js and React. Discover UX patterns for timeline scrubbers and playback.

---

## Building Apps > Nextjs React

# The Pro Path: Next.js & React

**Build production-grade intelligence platforms using a modern web stack and Raphtory's GraphQL API.**

When performance, customization, and user experience are paramount, the "Pro Path" involves building a custom frontend with **React** or **Next.js**, communicating with a Raphtory backend via **GraphQL**.

---

## The Architecture

In a production environment, the frontend should never interact with the raw graph file. Instead, it interacts with a middleware or a managed service that exposes the graph via a high-performance API.

### 1. The GraphQL Backend

Raphtory's managed platform (Pometry) provides a native GraphQL layer. This allows your app to request exactly the data it needs - whether that's a node's properties, its temporal history, or a specialized algorithm result.

### 2. The Client-Side State

Use libraries like **Apollo Client** or **TanStack Query** to manage the lifecycle of your graph data. This is crucial for handling loading states during complex multi-hop queries.

### 3. The Visualization Layer
For the "Pro Path," move beyond basic libraries and utilize high-performance renderers like **Cosmograph**, **Sigma.js**, or **Pometry's WebGPU Canvas** to handle 100k+ nodes with smooth interactions.

---

## Connecting Next.js to Raphtory

Here is a conceptual example of a Next.js **Server Action** or **API Route** fetching a temporal subgraph for a specific entity.

```typescript
// app/api/graph/neighborhood/route.ts
export async function POST(req: Request) {
  const { nodeId, windowSize } = await req.json();

  // createClient comes from your GraphQL client of choice (e.g. a generated typed client)
  const client = createClient({ url: process.env.RAPHTORY_URL });

  // Query the temporal neighborhood via GraphQL
  const data = await client.query({
    graph: {
      node: {
        __args: { name: nodeId },
        neighbourhood: {
          __args: { depth: 1 },
          nodes: { name: true, nodeType: true, properties: true },
          edges: { src: { name: true }, dst: { name: true }, properties: true }
        }
      }
    }
  });

  return Response.json(data);
}
```

---

## Building an "Agentic" Interface

As seen in the Pometry platform, the modern "Pro" interface often includes a chat-based assistant that can execute graph code in the background.

* **Actionable Reasoning**: The AI generates a hypothesis -> writes a Raphtory query -> renders a subgraph for the user to verify.
* **Grounded Components**: Use specialized React components to render the AI's "thought process" and the resulting graph visuals side-by-side.

---

## The Pometry UI Library

For Pometry customers, we provide a pre-built library of high-fidelity React components:

* A high-performance WebGPU-accelerated graph renderer.
* A time-aware timeline component for navigating history.
* A side-panel for deep inspection of node and edge properties.

**Reference implementation**: If you are building a custom UI, we recommend looking at our [Open Source examples](https://github.com/pometry) for patterns on how to handle real-time graph updates and large-scale data fetching.

---

## Next Steps

Deep dive into UX patterns for temporal data. Learn the schema used to power the Pro Path.

---

## Building Apps > Visualizing Time

# Visualizing Time: UX Best Practices

**Building intuitive interfaces for the fourth dimension.**

Temporal graphs are "3D" data structures (Network + Time). Standard graph visualizations often fall short because they try to flatten history into a single snapshot. To build an effective temporal application, you must give the user a way to navigate the arrow of time.

---

## Core UX Patterns

### 1. The Temporal Scrubber (The "Time Machine")

A timeline component at the bottom of the interface that allows users to drag a slider to "travel" through the graph's history.

* **Behavior**: As the user drags the scrubber, the graph should dynamically update - nodes should appear/disappear, and edges should animate based on their activity in that specific window.
* **Visual Cue**: Use a "Ghosting" effect where inactive nodes become semi-transparent instead of vanishing completely, helping the user maintain spatial context.

### 2. Time-Travel Playback

Similar to a video player, playback allows a user to "watch" a network evolve. This is critical for detecting:

* **Coordinated Bursts**: Multiple nodes suddenly becoming active in a single millisecond.
* **Sequential Hopping**: Capital moving from Node A to B to C in strict chronological order.

### 3. Change Delta Indicators

Instead of just showing the new state, highlight **what changed**.

* **New Entities**: Pulse with a "glow" or a specific color (e.g., Green).
* **Vanished Entities**: Slowly fade out rather than snapping to invisible.
* **High Velocity**: Entities with the most updates in the current window should be visually emphasized (e.g., larger size or more vibrance). --- ## Designing the Timeline UX An effective timeline isn't just a slider; it's a data-dense component. 1. **Event Density Heatmap**: Behind the timeline slider, render a small bar chart showing the volume of transactions/events over time. This guides the user to the "interesting" parts of history. 2. **Point-in-Time Annotation**: Allow users to "bookmark" specific timestamps where they found evidence, creating a shareable investigative trail. 3. **Variable Zoom**: Support zooming into the timeline. Analyzing a year of activity requires a different scale than a second-by-second analysis of a cyber attack. --- ## High-Fidelity Interaction Examples ### The "Scrub-to-Reveal" In this pattern, the user highlights a suspicious node. They then scrub the timeline backward to see where its funds originated. The graph "replays" the incoming edges as the slider moves. ### The "Temporal Intersection" The user selects a subset of nodes and asks: *"When were all three of these entities active at once?"* The timeline highlights the intersection periods in a distinct color. --- ## The Pometry Design System Pometry's enterprise UI is built on these principles. If you are building your own, consider these technical requirements: * **Interpolation**: Calculate the positions of nodes between time snapshots to ensure smooth animations. * **State Management**: Cache recently viewed time windows in the browser's memory to ensure the scrubber feels "weightless." * **WebGPU Sync**: Ensure the timeline state is synced with the GPU's vertex buffers for instantaneous updates. **UX Golden Rule**: Never let the user get lost in time. Always display the current "Window Start" and "Window End" in a prominent location next to the graph canvas. --- ## Next Steps Apply these patterns in a custom React/Next.js interface. Review the underlying temporal ontology of Raphtory. ============================================================ # Section: Export ============================================================ --- ## Export > Dataframes # Exporting to Pandas dataframes Raphtory enables a powerful **table → graph → table** workflow: ingest data from DataFrames, apply graph algorithms and temporal operations, then export the results back to DataFrames for downstream ML pipelines and data tools. This makes Raphtory a natural extension of your existing data science workflow. Use [`to_df()`](/docs/reference/api/python/raphtory/Nodes#to_df) on [`Nodes`](/docs/reference/api/python/raphtory/Nodes) and [`Edges`](/docs/reference/api/python/raphtory/Edges) to convert your graph data back into tabular format. ## Node Dataframe To explore the use of [`to_df()`](/docs/reference/api/python/raphtory/Nodes#to_df) on the nodes we first call the function with default parameters. This exports only the latest property updates and utilises epoch timestamps - the output from this can be seen below. To demonstrate flags, we call [`to_df()`](/docs/reference/api/python/raphtory/Nodes#to_df) again, this time enabling [`include_property_history`](/docs/reference/api/python/raphtory/Nodes#to_df) and [`convert_datetime`](/docs/reference/api/python/raphtory/Nodes#to_df). The output for this can also be seen below. ## Edge Dataframe Exporting to an edge dataframe via [`to_df()`](/docs/reference/api/python/raphtory/Edges#to_df) generally works the same as for the nodes. 
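A minimal sketch of the node and edge calls discussed in this section, assuming `g` is the network traffic graph from the ingestion tutorial:

```python
# Nodes: latest property values with epoch timestamps (the defaults)
node_df = g.nodes.to_df()

# Nodes: full property history with human-readable datetimes
node_history_df = g.nodes.to_df(include_property_history=True, convert_datetime=True)

# Edges: property history per layer (the default), or one row per update with explode
edge_df = g.edges.to_df()
edge_events_df = g.edges.to_df(explode=True)
```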
However, by default this will export the property history for each edge, split by edge layer. This is because [`to_df()`](/docs/reference/api/python/raphtory/Edges#to_df) has an alternative [`explode`](/docs/reference/api/python/raphtory/Edges#to_df) flag to view each update individually (which will then ignore [`include_property_history`](/docs/reference/api/python/raphtory/Edges#to_df)). In the below example we first create a [`subgraph`](/docs/reference/api/python/raphtory/Graph#subgraph) of the monkey interactions, selecting `ANGELE` and `FELIPE` as the monkeys we are interested in. This isn't a required step, but helps to demonstrate the export of [`GraphViews`](/docs/reference/api/python/raphtory/GraphView). Then we call [`to_df()`](/docs/reference/api/python/raphtory/Edges#to_df) on the subgraph edges, setting no flags. In the output you can see the property history for each interaction type (layer) between `ANGELE` and `FELIPE`. Finally, we call [`to_df()`](/docs/reference/api/python/raphtory/Edges#to_df) again, turning off the property history and exploding the edges. In the output you can see each interaction that occurred between `ANGELE` and `FELIPE`. We have further reduced the graph to only one layer via [`layers()`](/docs/reference/api/python/raphtory/Graph#layers) to reduce the output size. --- ## Export > Networkx # Exporting to NetworkX When converting to a NetworkX graph there is only one function ([`to_networkx()`](/docs/reference/api/python/raphtory/Graph#to_networkx)), which has flags for node and edge history and for exploding edges. By default all history is included and the edges are separated by layer. In the below example we call [`to_networkx()`](/docs/reference/api/python/raphtory/Graph#to_networkx) on the network traffic graph, keeping all the default arguments so that it exports the full history. We extract `ServerA` from this graph and print to show how the history is modelled. The resulting graph is a NetworkX [`MultiDiGraph`](https://networkx.org/documentation/stable/reference/classes/multidigraph.html) since Raphtory graphs are both directed and have multiple edges between nodes. We call [`to_networkx()`](/docs/reference/api/python/raphtory/Graph#to_networkx) again, disabling [`include_property_history`](/docs/reference/api/python/raphtory/Graph#to_networkx) and [`include_update_history`](/docs/reference/api/python/raphtory/Graph#to_networkx), then reprint `ServerA` to show the difference. ## Visualisation Once converted into a NetworkX graph you have access to their full suite of functionality. For example, using their [drawing](https://networkx.org/documentation/stable/reference/drawing.html) library for visualising graphs. In the code snippet below we use this functionality to draw a network traffic graph, labelling the nodes with their Server ID. For more information, see the [NetworkX](https://networkx.org/documentation/stable/reference/drawing.html) documentation. ============================================================ # Section: Ontology ============================================================ --- ## Ontology > Events And Time # Events & Time: The 4th Dimension **Don't delete history. Accumulate it.** In most systems, an "Update" destroys the previous state. * *Old DB*: `UPDATE users SET balance = 50 WHERE id = 1` -> Old balance is gone forever. * *Raphtory*: `g.add_node(t=2, id=1, prop="balance", val=50)` -> Balance is 50 *at time 2*. It was still 100 *at time 1*. 
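A minimal sketch of this idea (IDs and values are illustrative):

```python
from raphtory import Graph

g = Graph()
g.add_node(timestamp=1, id=1, properties={"balance": 100})
g.add_node(timestamp=2, id=1, properties={"balance": 50})

# The unrestricted view sees the newest value...
print(g.node(1).properties.get("balance"))            # 50

# ...but restricting the view to before t=2 still sees the old one.
print(g.before(2).node(1).properties.get("balance"))  # 100
```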
## The "Event-Sourcing" Model Raphtory treats your graph like an **Event Log**. You don't "build" a graph; you "replay" history into it. ### 1. Events become Updates Every row in your dataset is an **Event**. * A transaction assumes a `TRANSFER` edge. * A GPS ping assumes a `LOCATION` property update. ```python # Stream of Events events = [ {"time": 1, "user": "Alice", "action": "login"}, {"time": 2, "user": "Alice", "balance": 500}, {"time": 3, "user": "Alice", "balance": 450}, # Use spent 50 ] for e in events: # We never overwrite. We just add new knowledge at a new time. g.add_node(timestamp=e["time"], id=e["user"], properties=e) ``` ## 2. Time-Travel Querying Because you stored the events, you can ask questions about the past. ```python # What is Alice's balance NOW? current = g.node("Alice").property("balance") # What was Alice's balance YESTERDAY? past = g.at(time_yesterday).node("Alice").property("balance") ``` ## 3. Exploding Edges (The Interaction Log) Sometimes, the "Relationship" isn't enough. You want the raw interaction history. * **Relationship**: "Alice knows Bob" (Edge exists). * **History**: "Alice emailed Bob 50 times in 2023." In Raphtory, an Edge is a **Container** for history. ```python alice_bob_edge = g.edge("Alice", "Bob") # .explode() gives you every single event that ever formed this edge for interaction in alice_bob_edge.explode(): print(f"Interaction at {interaction.time()} with props {interaction.properties()}") ``` > [!NOTE] > **History is Immutable**: Once an event happens, it is written in stone. You can add *new* events that "correct" old values (like a bank correction), but the original entry remains as a matter of record. --- ## Ontology > Modeling Objects # Object-Centric Data Modeling **Build your graph around the things that matter.** "Object-Centric" modeling means designing your data model around the **tangible assets** of your business (the "Objects") rather than the logs that generate them. In Raphtory, an **Object** is a **Node**. A **Relationship** is an **Edge**. ## 1. Defining Objects (Nodes) An Object is a persistent entity with a unique identity. * **Examples**: `User`, `Device`, `Flight`, `Company`, `Wallet`. * **Anti-Patterns**: Avoid making "dumb" nodes like `TransactionID` or `EventID` unless they are truly distinct entities you want to analyze. ### The Identity Key Every object needs a global unique identifier (ID). This allows Raphtory to stitch together data from 10 different CSVs into a single history for that object. ```python # GOOD: Using a stable business key g.add_node(timestamp=t, id="user_123", properties={"name": "Alice"}) # BAD: Using a row ID (creates disconnected dust) g.add_node(timestamp=t, id="row_998877", properties={"name": "Alice"}) ``` ## 2. Defining Relationships (Edges) Relationships define how Objects interact. In a temporal ontology, relationships are often **verbs**. * `User` -[TRANSFERRED]-> `User` * `Plane` -[LANDED_AT]-> `Airport` * `Device` -[CONNECTED_TO]-> `IP_Address` ### Multi-Graph Layers Real world objects have multiple types of relationships. Use **Layers** to keep them semantically distinct. ```python # Layer 1: Financial Flow g.add_edge(t, src="Alice", dst="Bob", layer="Financial", properties={"amount": 100}) # Layer 2: Social Connection g.add_edge(t, src="Alice", dst="Bob", layer="Social", properties={"relation": "Friend"}) ``` ## 3. Properties vs. Metadata * **Properties**: Are *temporal*. They change. (e.g., `balance`, `location`, `risk_score`). * **Metadata**: Are *static*. 
They define the object. (e.g., `dob`, `account_type`, `region`).

> [!TIP]
> **Modeling Tip**: If a value changes often (like a stock price), it's a Property. If it never changes (like a ticker symbol), it's Metadata.

## Example: The "Flight" Ontology

Let's model an aviation ontology.

### Identify Objects

* `Aircraft` (ID: Tail Number)
* `Airport` (ID: IATA Code)

### Identify Relationships

* `Aircraft` -> `Airport` (Edge: "FLIGHT")

### Map the Data

```python
# Data: "UA123 landed at JFK at 10:00 AM"

# 1. The Objects (Implicitly created)
src_plane = "UA123"
dst_airport = "JFK"

# 2. The Relationship (The Flight)
g.add_edge(
    timestamp="10:00",
    src=src_plane,
    dst=dst_airport,
    layer="Flight",
    properties={"flight_number": "UA123", "passengers": 150}
)
```

============================================================
# Section: Persistent Graph
============================================================

---

## Persistent Graph > Ambiguity

# Handling of ambiguous updates

Because Raphtory allows updates to be ingested in any order from multiple sources, we have to handle various corner cases once deletions are available. The following sections cover the most common scenarios.

## Order of resolving additions and deletions

In this example, two edges between Alice and Bob overlap in time: one starting at time 1 and ending at time 5, another starting at time 3 and ending at time 7. Event graphs in Raphtory allow edges between the same pair of nodes to happen at the same instant. However, when we look at the exploded edges of this [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), the following is returned: two edges are created, one that exists from time 1 to time 3 and another that exists from time 3 to time 5. The second deletion at time 7 is ignored.

The reason for this is that Raphtory's graph updates are inserted in chronological order, so that the same graph is constructed regardless of the order in which the updates are made, with an exception for events that have the same timestamp, which will be covered shortly.

## Hanging deletions

Adding an edge without a deletion afterwards results in an edge which lasts forever, while deleting an edge without a prior addition does not affect the history. However, hanging deletions are still tracked, and if the history is later modified to add the corresponding edge at an earlier time, the deletion becomes valid and takes effect as expected.

## Additions and deletions in the same instant

If the update times for an edge are all distinct from each other, the constructed graph is fully unambiguous. When events have the same timestamp, Raphtory tie-breaks the updates by the order in which they are executed. The execution order determines the final state: when we add then delete, the edge appears and disappears instantly. When we delete then add, the edge exists from that point onwards.

## Interaction with layers

Layering allows different types of interaction to exist, and edges on different layers can have overlapping times in a way that doesn't make sense for edges in the same layer or for edges with no layer.

Consider an example without layers:

Now take a look at a slightly modified example with layers:

By adding layer names to the different edge instances we produce a different result. Here we have two edges, one starting and ending at 1 and 5 respectively on the 'colleague' layer, the other starting and ending at 3 and 7 on the 'friends' layer.
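A minimal sketch of the layered example just described (this assumes `delete_edge` accepts the same `layer` argument as `add_edge`):

```python
from raphtory import PersistentGraph

pg = PersistentGraph()

# 'colleague' relationship lasting from t=1 to t=5
pg.add_edge(1, "Alice", "Bob", layer="colleague")
pg.delete_edge(5, "Alice", "Bob", layer="colleague")

# 'friends' relationship lasting from t=3 to t=7 on a separate layer
pg.add_edge(3, "Alice", "Bob", layer="friends")
pg.delete_edge(7, "Alice", "Bob", layer="friends")

# Each layer keeps its own interval, so the exploded edges should show
# one edge from 1 to 5 on 'colleague' and one from 3 to 7 on 'friends'.
for e in pg.edge("Alice", "Bob").explode():
    print(e)
```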
---

## Persistent Graph > Validity

# Graph validity

When you apply a temporal view like [`at()`](/docs/reference/api/python/raphtory/PersistentGraph#at) or [`latest()`](/docs/reference/api/python/raphtory/PersistentGraph#latest) to a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), you get a graph view filtered to that point in time. However, this collapses temporal history – property histories only contain the final value, so you can no longer access prior values, and the original `earliest_time`/`latest_time` are lost.

The [`valid()`](/docs/reference/api/python/raphtory/GraphView#valid) function provides an alternative: it filters out deleted edges **without collapsing history**. This gives you a graph where traversals and degree calculations exclude deleted edges, while still preserving the full temporal information of the remaining edges – including the complete history of property changes.

## valid() on windowed graphs

When applied to a windowed graph, [`valid()`](/docs/reference/api/python/raphtory/GraphView#valid) filters out edges that are deleted at the **end** of the window. An edge is included if it exists at any point during the window, but if it's deleted by the window's end time, it's excluded from the valid view.

## Edge validity methods

Individual edges have three methods for checking their status:

| Method | Description |
| :--- | :--- |
| `is_valid()` | Returns `True` if the edge is **not deleted** at the current view's end time |
| `is_deleted()` | Returns `True` if the edge **is deleted** at the current view's end time |
| `is_active()` | Returns `True` if the edge has **any updates** within the current window (including deletions) |

**Note on `is_active()`**: An edge can be valid but not active. For example, if an edge was added at `t=1` and you query a window from `t=100` to `t=200`, the edge has no updates in that window (`is_active() = False`), but it's still present (`is_valid() = True`).

---

## Persistent Graph > Views

# Views on a persistent graph

The same temporal view functions you use on a regular [`Graph`](/docs/reference/api/python/raphtory/Graph) – [`at()`](/docs/reference/api/python/raphtory/PersistentGraph#at), [`before()`](/docs/reference/api/python/raphtory/PersistentGraph#before), [`after()`](/docs/reference/api/python/raphtory/PersistentGraph#after), and [`window()`](/docs/reference/api/python/raphtory/PersistentGraph#window) – work on a [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), but they behave differently. Instead of filtering by update events, they filter by whether an edge was **active** at the queried time.

## Querying an instant with at()

As we can see, the edge's presence in the graph is _inclusive_ of the timestamp at which it was added, but _exclusive_ of the timestamp at which it was deleted. Equivalently, the Alice-Bob edge is present on the interval `2 ≤ t < 5`.

**Node visibility depends on activity within the view.** A node appears in a view if it either:

- Was explicitly added at or before the queried time (once added, nodes exist forever from that point onwards), OR
- Has an active edge within the time bounds

In the example, Charlie was explicitly added at `t=3`, so he appears at `t=3` and all later times. Alice and Bob were never explicitly added – they only exist because the edge created them. Once their edge is deleted at `t=5`, they have no activity and disappear from the view.
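A minimal sketch of the setup described above (Alice-Bob edge added at `t=2` and deleted at `t=5`, Charlie explicitly added at `t=3`):

```python
from raphtory import PersistentGraph

pg = PersistentGraph()
pg.add_edge(2, "Alice", "Bob")
pg.delete_edge(5, "Alice", "Bob")
pg.add_node(3, "Charlie")

print(pg.at(2).has_edge("Alice", "Bob"))  # True  – inclusive of the addition time
print(pg.at(4).has_edge("Alice", "Bob"))  # True  – still active
print(pg.at(5).has_edge("Alice", "Bob"))  # False – exclusive of the deletion time

# After the deletion, only Charlie (explicitly added) is still visible
print([n.name for n in pg.at(6).nodes])
```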
## Getting the graph before a point with before()

Here we see that [`before(T)`](/docs/reference/api/python/raphtory/PersistentGraph#before) is exclusive of the end point `T`: it intersects the time interval `-∞ < t < T`, where `T` is the argument, with the edge's active interval `2 ≤ t < 5`.

## Getting the graph after a point with after()

[`after(T)`](/docs/reference/api/python/raphtory/PersistentGraph#after) is also exclusive of the starting point `T`.

## Windowing the graph with window()

[`window(T1, T2)`](/docs/reference/api/python/raphtory/PersistentGraph#window) creates a half-open interval `T₁ ≤ t < T₂` intersecting the edge's active time (`2 ≤ t < 5` in this case). Whether the window lies completely inside the edge's active time or the edge's active time lies strictly inside the window, the edge is treated as present in the graph.

## Graph-type-agnostic snapshots

If you're writing code that needs to work with both [`Graph`](/docs/reference/api/python/raphtory/Graph) and [`PersistentGraph`](/docs/reference/api/python/raphtory/PersistentGraph), use [`snapshot_at()`](/docs/reference/api/python/raphtory/GraphView#snapshot_at) and [`snapshot_latest()`](/docs/reference/api/python/raphtory/GraphView#snapshot_latest) instead of [`at()`](/docs/reference/api/python/raphtory/PersistentGraph#at) or [`before()`](/docs/reference/api/python/raphtory/Graph#before). These methods adapt their behavior based on the underlying graph type:

| Method | On `Graph` | On `PersistentGraph` |
| :--- | :--- | :--- |
| `snapshot_at(t)` | Equivalent to `before(t + 1)` | Equivalent to `at(t)` |
| `snapshot_latest()` | No-op (returns the graph as-is) | Equivalent to `latest()` |

This is useful when you want to ask "what does the graph look like at time T?" without worrying about whether edges are events or persistent relationships.

---

# End of Documentation

For more information:
- Website: https://pometry.com
- GitHub: https://github.com/Pometry/Raphtory
- API Reference: https://docs.raphtory.com
- Contact: https://pometry.com/contact