Platform Engineer Tutorial

Deploy, scale, and operate Raphtory in production environments.

Learn how to integrate Raphtory into your infrastructure, handle streaming ingestion at scale, and maintain high-availability deployments.

What You'll Build

  1. Production Deployment – Docker, Kubernetes, and cloud-native patterns
  2. Streaming Ingestion – Handle millions of events per second
  3. GraphQL API Layer – Serve graph intelligence to applications
  4. Observability – Metrics, logging, and alerting
  5. High Availability – Replication and failover strategies

Time: 45 minutes
Prerequisites: Docker, Kubernetes basics, Python.


1. Docker Deployment

Package Raphtory as a containerized service:

Dockerfile:
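A minimal image sketch. The `requirements.txt`, the default GraphQL port (1736), and the `/data` mount point are assumptions to adapt to your setup:

```dockerfile
# Slim Python base; raphtory ships prebuilt wheels, so no build toolchain needed
FROM python:3.11-slim

WORKDIR /app

# Pin raphtory and friends in requirements.txt for reproducible builds (assumed file)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY server.py .

# Graph storage lives on a mounted volume, not in the image layer
VOLUME /data

EXPOSE 1736
CMD ["python", "server.py"]
```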

server.py:
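A sketch of the entry point, assuming the `GraphServer` class from `raphtory.graphql` and its default port of 1736; both may differ between Raphtory releases, so check your installed version:

```python
# server.py -- minimal sketch; the GraphServer import path and default port
# are assumptions that may vary across raphtory releases.
import os

from raphtory.graphql import GraphServer

# Directory of graphs to serve; matches the VOLUME in the Dockerfile
WORK_DIR = os.environ.get("GRAPH_DIR", "/data/graphs")


def main():
    os.makedirs(WORK_DIR, exist_ok=True)
    # Serve every graph stored under WORK_DIR over GraphQL
    server = GraphServer(WORK_DIR)
    server.run()  # blocks; listens on port 1736 by default


if __name__ == "__main__":
    main()
```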

2. Kubernetes Deployment

Deploy for high availability with proper resource limits:
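A manifest sketch. The image name, port, storage class, and resource numbers are placeholders; size the memory limit at roughly 2x your expected graph size:

```yaml
# StatefulSet sketch -- image, port, and sizes are placeholders to tune.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: raphtory
spec:
  serviceName: raphtory
  replicas: 3
  selector:
    matchLabels:
      app: raphtory
  template:
    metadata:
      labels:
        app: raphtory
    spec:
      containers:
        - name: raphtory
          image: registry.example.com/raphtory:latest
          ports:
            - containerPort: 1736
          resources:
            requests:
              memory: "8Gi"
              cpu: "2"
            limits:
              memory: "16Gi"   # ~2x expected graph size
              cpu: "4"
          volumeMounts:
            - name: graph-data
              mountPath: /data
          readinessProbe:
            tcpSocket:
              port: 1736
            initialDelaySeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: graph-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd   # SSD-backed class (assumed name)
        resources:
          requests:
            storage: 100Gi
```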

Storage: Use SSDs for PersistentGraph storage. Network-attached storage (EBS, GCE PD) works, but local NVMe is 3-5x faster.

3. Streaming Ingestion Pipeline

Handle high-velocity event streams with Kafka integration:
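A consumer-loop sketch using confluent-kafka with micro-batching, so each commit covers many edge writes. The topic name, message schema, and batch size are assumptions; because a replayed batch re-applies the same timestamped updates, at-least-once delivery is safe here:

```python
# Kafka -> Raphtory ingestion sketch; topic, schema, and sizes are assumptions.
import json

from confluent_kafka import Consumer
from raphtory import Graph

graph = Graph()

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "raphtory-ingest",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,   # commit only after the batch is applied
})
consumer.subscribe(["events"])

BATCH_SIZE = 10_000


def apply_batch(events):
    # Batching amortises commit overhead across many cheap writes
    for e in events:
        graph.add_edge(e["timestamp"], e["src"], e["dst"], e.get("properties"))


def run():
    batch = []
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= BATCH_SIZE:
            apply_batch(batch)
            consumer.commit()      # at-least-once: a replay re-applies the batch
            batch.clear()


if __name__ == "__main__":
    run()
```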

4. GraphQL API with Rate Limiting

Expose your graph with production-grade API management:
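Raphtory's GraphQL server does not provide rate limiting itself (an assumption worth verifying for your version), so enforce it in a gateway or a small middleware layer in front of the endpoint. A stdlib token-bucket sketch, keyed per API key or client IP:

```python
# Token-bucket rate limiter sketch for middleware in front of the GraphQL endpoint.
import time


class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


# One bucket per client: sustained 10 req/s with bursts of 20 (tunable defaults)
_buckets: dict[str, TokenBucket] = {}


def check_limit(client_id: str, rate: float = 10.0, burst: float = 20.0) -> bool:
    bucket = _buckets.setdefault(client_id, TokenBucket(rate, burst))
    return bucket.allow()
```

Reject requests with HTTP 429 when `check_limit` returns False; expensive GraphQL queries can be charged a higher `cost` than cheap ones.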

5. Observability Stack

Export metrics to Prometheus for Grafana dashboards:
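A stdlib sketch of a `/metrics` endpoint emitting the Prometheus text exposition format; the metric names here are suggestions, not Raphtory built-ins. In practice the `prometheus_client` package does the same with less code:

```python
# Minimal /metrics endpoint in stdlib Python; metric names are placeholders.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

metrics = {
    "raphtory_edges_total": 0,
    "raphtory_ingest_lag_seconds": 0.0,
}
_lock = threading.Lock()


def render_metrics() -> str:
    # Prometheus text exposition: one "name value" pair per line
    with _lock:
        return "".join(f"{name} {value}\n" for name, value in metrics.items())


class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port: int = 9090):
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

Point a Prometheus scrape job at this port and build Grafana panels on the exposed series.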

Prometheus alert rules (alerts.yaml):
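A rules sketch; the metric names must match whatever your exporter actually emits, and the thresholds are placeholders to tune:

```yaml
# alerts.yaml sketch -- metric names and thresholds are assumptions.
groups:
  - name: raphtory
    rules:
      - alert: RaphtoryIngestLagHigh
        expr: raphtory_ingest_lag_seconds > 60
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Ingestion is more than 60s behind the stream"
      - alert: RaphtoryInstanceDown
        expr: up{job="raphtory"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "A Raphtory replica is unreachable"
```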


Cloud-Specific Patterns


Production Checklist

  • Storage: SSD-backed PersistentVolumes with backup enabled
  • Resources: Memory limits set to 2x expected graph size
  • Networking: Internal load balancer + API gateway for external access
  • Security: TLS termination, JWT authentication on GraphQL
  • Observability: Prometheus metrics, structured logging to stdout
  • Backup: Scheduled snapshots of graph storage
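The backup item can be a scheduled job that writes a timestamped snapshot and prunes old ones. A stdlib rotation helper; the save call itself (e.g. Raphtory's file export) and the `.snapshot` naming are assumptions:

```python
# Keep the newest N snapshot files in a directory; pair with a cron/CronJob
# that writes timestamped snapshots (the save call is deployment-specific).
from pathlib import Path


def prune_snapshots(directory: str, keep: int = 7) -> list[str]:
    """Delete all but the `keep` newest *.snapshot files; return removed names."""
    snaps = sorted(Path(directory).glob("*.snapshot"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    removed = []
    for old in snaps[keep:]:
        old.unlink()
        removed.append(old.name)
    return removed
```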

Next Steps