Vectorised Graph API
Host a high-performance semantic search interface over your temporal graph.
The Vectorised Graph API (available in Pometry Enterprise) provides a managed bridge between Raphtory's temporal structures and external vector databases (such as Qdrant, Milvus, or Pinecone). It allows you to query your graph using natural language via standard GraphQL endpoints.
Why Vectorise a Graph?
While graph traversals are excellent for finding relationships, natural language queries often start with a concept (e.g., "Show me accounts that look like shell companies"). The Vectorised Graph API allows you to:
- Retrieve by Semantic Similarity: Find nodes or edges that "meaningfully" match a query.
- Context-Aware Expansion: Once a similar node is found, automatically pull its neighborhood and render it as an LLM-ready narrative context.
- Proxy External Vector Stores: Raphtory acts as the orchestrator, keeping the graph structure synchronized with the vector embeddings stored in your preferred DB.
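The core of semantic retrieval is ranking entities by vector similarity to an embedded query. The sketch below illustrates the idea with toy three-dimensional embeddings and plain cosine similarity; the node names and vectors are made up, and a real deployment would use an external embedding model and a vector database rather than an in-memory dict.

```python
# Minimal sketch of semantic retrieval over graph entities.
# Embeddings and node names here are illustrative stand-ins.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical node embeddings keyed by node name.
node_embeddings = {
    "acct_shell_1": [0.90, 0.10, 0.00],
    "acct_retail":  [0.10, 0.80, 0.30],
    "acct_shell_2": [0.85, 0.20, 0.05],
}

def entities_by_similarity(query_vec, limit=2):
    """Return the `limit` most similar node names to the query vector."""
    ranked = sorted(node_embeddings,
                    key=lambda n: cosine(query_vec, node_embeddings[n]),
                    reverse=True)
    return ranked[:limit]

# Pretend this vector is the embedding of
# "accounts that look like shell companies".
query = [1.0, 0.0, 0.0]
print(entities_by_similarity(query))  # → ['acct_shell_1', 'acct_shell_2']
```

In production, the ranking step is delegated to the vector store's approximate-nearest-neighbour index rather than an exhaustive scan.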
Technical Architecture
The architecture consists of the Raphtory GraphServer configured with an embedding provider and a connection to a vector data store.
Setting up the GraphServer
To enable vector search, you configure the GraphServer to manage embeddings for your nodes and edges.
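Conceptually, the server sits between an embedding provider and a vector store: each node (or edge) document is embedded and upserted into the store under the entity's ID. The sketch below simulates that wiring with stand-in classes; `EmbeddingProvider`, `VectorStore`, and `GraphServer` here are illustrative, not the real Raphtory classes, so consult the Pometry Enterprise documentation for the actual configuration API.

```python
# Illustrative wiring only: these classes are stand-ins for the real
# Raphtory/Pometry components, used to show the data flow.

class EmbeddingProvider:
    """Stand-in for a local or remote embedding model."""
    def embed(self, text):
        # Toy embedding: character-frequency vector.
        # Real providers return dense float vectors.
        return [text.count(c) for c in "abcde"]

class VectorStore:
    """Stand-in for an external vector DB (e.g. Qdrant or Milvus)."""
    def __init__(self):
        self.vectors = {}
    def upsert(self, entity_id, vector):
        self.vectors[entity_id] = vector

class GraphServer:
    """Stand-in server: embeds each entity document, keeps the store in sync."""
    def __init__(self, embedder, store):
        self.embedder = embedder
        self.store = store
    def index_node(self, node_id, document):
        self.store.upsert(node_id, self.embedder.embed(document))

server = GraphServer(EmbeddingProvider(), VectorStore())
server.index_node("acct_1", "dormant account with circular payments")
print(sorted(server.store.vectors))  # → ['acct_1']
```

The key design point this illustrates is that Raphtory owns the orchestration: when the temporal graph changes, the server re-embeds affected entities and upserts them, so the vector store never drifts from the graph structure.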
Querying via GraphQL
Once hosted, you can perform hybrid searches using the entitiesBySimilarity query, a dedicated endpoint that combines vector retrieval with graph metadata.
Example: Semantic Search
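A semantic search is an ordinary GraphQL request against the hosted server. The sketch below builds such a request in Python; the endpoint name entitiesBySimilarity comes from this page, but the variable names and selected fields (`name`, `score`) are illustrative and may differ in your deployment.

```python
# Build a GraphQL request for the entitiesBySimilarity endpoint.
# Field names other than entitiesBySimilarity are hypothetical.
import json

query = """
query Search($text: String!, $limit: Int!) {
  entitiesBySimilarity(query: $text, limit: $limit) {
    name
    score
  }
}
"""

payload = json.dumps({
    "query": query,
    "variables": {
        "text": "accounts that look like shell companies",
        "limit": 5,
    },
})

# POST `payload` (Content-Type: application/json) to your GraphServer's
# GraphQL endpoint using your preferred HTTP client.
print(json.loads(payload)["variables"]["limit"])  # → 5
```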
Pometry Enterprise: This API is optimized for production workloads, featuring automatic re-indexing as your temporal graph grows and built-in integration with corporate LLM providers.
Key Capabilities
Hybrid Retrieval
Instead of just returning a list of nodes, the API can return Entities with Context. This means for every search result, you get the surrounding semantic connections (edges) that the LLM needs to understand why the node was retrieved.
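The "entities with context" idea can be sketched as a post-retrieval expansion step: after a node is retrieved, its incident edges are gathered and rendered as a short narrative for the LLM. The entity names, edge labels, and rendering below are all made up for illustration.

```python
# Illustrative only: turn a retrieved node plus its immediate
# neighborhood into an LLM-ready narrative string.
edges = [
    ("acct_shell_1", "transfers_to", "acct_offshore"),
    ("acct_shell_1", "shares_director_with", "acct_shell_2"),
    ("acct_retail", "pays", "acct_utility"),
]

def narrative_context(node):
    """Render every edge touching `node` as a plain-English sentence."""
    lines = [f"{src} {rel.replace('_', ' ')} {dst}."
             for src, rel, dst in edges if node in (src, dst)]
    return " ".join(lines)

print(narrative_context("acct_shell_1"))
# → acct_shell_1 transfers to acct_offshore. acct_shell_1 shares director with acct_shell_2.
```

Attaching this context to each result is what lets a downstream LLM explain why an entity was retrieved, rather than receiving a bare node ID.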
Pluggable Infrastructure
- Vector DBs: Support for Qdrant, Milvus, Chroma, and more.
- Models: Use local embeddings (via Sentence-Transformers) or remote models (OpenAI, Anthropic, Cohere).
- Pipelines: Compatible with LangChain and LlamaIndex data connectors.