AI/ML Engineer Tutorial
Build context-aware AI applications with temporal graph intelligence.
Learn how to power GraphRAG pipelines, generate temporal embeddings, and give your LLMs the context they need to reason about evolving relationships.
What You'll Build
- Temporal Embeddings – Generate node representations that encode structural AND temporal patterns
- GraphRAG Pipeline – Retrieve contextual subgraphs to ground LLM responses
- Agentic AI Support – Expose graph intelligence as tools for AI agents
- Streaming Context – Update your knowledge graph in real time
Time: 45 minutes
Prerequisites: Python, plus familiarity with embeddings and LLMs.
1. Generate Temporal Embeddings
Static embeddings (Node2Vec, GraphSAGE) ignore time. Raphtory's FastRP implementation captures both structure and temporal dynamics.
Why FastRP? It's 10-100x faster than GNN-based approaches while maintaining competitive accuracy. Perfect for production systems.
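A minimal sketch of the idea is below. It assumes a `fast_rp` entry point in `raphtory.algorithms`; the parameter names (`embedding_dim`, `normalization_strength`, `iter_weights`) follow the standard FastRP formulation, so check the Vectorisation & Search reference for the exact signature in your Raphtory version.

```python
from raphtory import Graph, algorithms

# Build a small interaction graph; add_edge takes (time, src, dst, properties).
g = Graph()
g.add_edge(1, "alice", "bob", properties={"type": "payment"})
g.add_edge(2, "bob", "carol", properties={"type": "payment"})
g.add_edge(5, "alice", "carol", properties={"type": "message"})

# Illustrative parameters: 128-dimensional vectors, three propagation rounds.
embeddings = algorithms.fast_rp(
    g,
    embedding_dim=128,
    normalization_strength=0.5,
    iter_weights=[1.0, 1.0, 0.5],
)
# `embeddings` associates each node with its vector; the exact return
# container varies by version, so consult the API reference before indexing.
```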
2. Temporal Windowed Embeddings
Generate embeddings at different points in time to capture how entities evolve:
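The sketch below continues from the graph above and uses Raphtory's rolling windowed views; as with the previous step, treat the `fast_rp` call as an assumption to verify against the API reference.

```python
# Continuing with `g` and `algorithms` from the previous step.
# Compute one embedding set per window of 30 time units so you can track
# how a node's representation drifts as new interactions arrive.
snapshots = {}
for view in g.rolling(window=30):
    snapshots[view.end] = algorithms.fast_rp(
        view,                       # algorithms run on windowed views too
        embedding_dim=128,
        normalization_strength=0.5,
        iter_weights=[1.0, 1.0, 0.5],
    )

# Comparing an entity's vectors across `snapshots` surfaces behavioural
# change (e.g. a quiet account suddenly bridging two communities).
```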
3. GraphRAG: Contextual Retrieval for LLMs
The key to effective RAG is retrieving the right context. For relationship-heavy domains, that means subgraphs, not just documents.
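One way to do this, sketched below under the assumption of Raphtory's node and subgraph API (`g.node`, `neighbours`, `g.subgraph`), is to expand a k-hop neighbourhood around the entities mentioned in the question and take the induced subgraph as the retrieval context.

```python
def retrieve_context(g, seed_names, hops=2):
    """Collect all nodes within `hops` of the seed entities and return
    the induced subgraph to use as retrieval context."""
    frontier = {name for name in seed_names if g.node(name) is not None}
    seen = set(frontier)
    for _ in range(hops):
        next_frontier = set()
        for name in frontier:
            for neighbour in g.node(name).neighbours:
                if neighbour.name not in seen:
                    seen.add(neighbour.name)
                    next_frontier.add(neighbour.name)
        frontier = next_frontier
    return g.subgraph(list(seen))

# Continuing with `g` from the earlier steps.
ctx_graph = retrieve_context(g, ["alice"], hops=2)
```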
4. Format Context for LLM Prompts
Convert graph context into natural language for your LLM:
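A simple approach, continuing from the retrieval sketch above, is to render each edge as a one-line fact together with its first and last observation times, then prepend the result to the user's question.

```python
def context_to_text(subgraph):
    """Render the retrieved subgraph's edges as natural-language facts,
    one per line, ready to drop into a prompt."""
    lines = []
    for edge in subgraph.edges:
        rel = edge.properties.get("type") or "interacted with"
        lines.append(
            f"{edge.src.name} {rel} {edge.dst.name} "
            f"(first seen t={edge.earliest_time}, last seen t={edge.latest_time})"
        )
    return "\n".join(lines)

prompt = (
    "Answer using only the relationships below.\n\n"
    + context_to_text(ctx_graph)
    + "\n\nQuestion: How is alice connected to carol?"
)
```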
5. Expose as Agentic AI Tools
Make your graph intelligence available to AI agents as callable tools:
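One pattern is to wrap graph queries as plain Python functions with typed arguments and docstrings, which most tool-calling frameworks can register directly. The helpers below reuse the retrieval and formatting functions from the earlier steps; the LangChain wrapper named in the comment is one example of how a framework would pick them up.

```python
def get_entity_history(entity: str) -> list:
    """Tool: return the update timestamps recorded for an entity."""
    node = g.node(entity)
    return node.history() if node is not None else []

def get_relationships(entity: str, hops: int = 1) -> str:
    """Tool: return the entity's neighbourhood as plain-text facts."""
    return context_to_text(retrieve_context(g, [entity], hops=hops))

# Any tool-calling framework can wrap these plain functions; with
# LangChain, for example, StructuredTool.from_function(get_relationships).
TOOLS = {
    "get_entity_history": get_entity_history,
    "get_relationships": get_relationships,
}
```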
Agent Framework Integration: Raphtory's GraphQL API works with LangChain, LlamaIndex, AutoGPT, and any framework that supports HTTP tool calls.
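For frameworks that prefer HTTP, a generic GraphQL tool can be as small as the sketch below; the URL is a placeholder and the query string is supplied by the caller, so adapt both to the schema your Raphtory server exposes.

```python
import requests

# Placeholder endpoint: confirm the host, port, and schema for your
# Raphtory GraphQL deployment before wiring this into an agent.
GRAPHQL_URL = "http://localhost:1736/"

def graphql_tool(query: str, variables=None) -> dict:
    """Generic HTTP tool an agent can call against the Raphtory GraphQL API."""
    response = requests.post(
        GRAPHQL_URL,
        json={"query": query, "variables": variables or {}},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```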
6. Real-Time Knowledge Graph Updates
Keep your graph current with streaming ingestion:
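A minimal sketch, assuming events arrive as dictionaries from whatever stream consumer you already run (Kafka, webhooks, change-data-capture); each event becomes an `add_edge` call on a live graph.

```python
from raphtory import Graph

live_graph = Graph()

def handle_event(event: dict) -> None:
    """Apply one streamed interaction to the live graph.
    The payload shape {time, src, dst, type} is hypothetical."""
    live_graph.add_edge(
        event["time"],
        event["src"],
        event["dst"],
        properties={"type": event["type"]},
    )

# Wire `handle_event` into your consumer loop; every update is queryable
# immediately, and time-travel views such as live_graph.at(t) keep working
# as new data arrives.
```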
Production Pattern: Full GraphRAG Architecture
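The pieces above compose into a single pipeline: stream updates into the graph, retrieve a subgraph for each question, render it as text, and let the LLM answer from that context. Below is a hedged end-to-end sketch, reusing the earlier helpers, with `llm` standing in for any prompt-to-completion callable you already have.

```python
def graphrag_answer(question: str, seed_entities: list, llm) -> str:
    """Retrieve a subgraph around the seed entities, render it as text,
    and ground the LLM's answer in that context."""
    subgraph = retrieve_context(g, seed_entities, hops=2)
    context = context_to_text(subgraph)
    prompt = (
        "Use only the relationships below to answer.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```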
Next Steps
- Graph Intelligence Section – Advanced GraphRAG patterns
- Vectorisation & Search – Embedding algorithms deep-dive
- GraphQL API – Full API reference for agent tools