Analytics Workflows
Deploy graph intelligence into production. Move from manual exploration to automated, real-time analytics pipelines.
Graph intelligence is only valuable when it is accessible and actionable for your team. Raphtory supports a variety of workflows, from ad-hoc data science to high-scale production monitoring.
Advanced Agents: For more autonomous workflows, see our Advanced Intelligence Pipelines, which use agentic roles like the Historian, Forecaster, and Investigator to automate multi-step investigations.
Common Workflows
1. Exploratory Analysis
Interactive investigation and hypothesis testing for data scientists and analysts.
- Platform: Jupyter Notebooks or the interactive GraphQL UI.
- Use Case: "Is there a relationship between transaction velocity and account age in our recent fraud cases?"
- Result: Refined temporal motifs that can be promoted to automated alerts.
2. Automated Alerting
Real-time monitoring of live data streams to detect specific temporal patterns as they happen.
- Operation: Raphtory runs continuous windowed queries against an incoming stream (e.g., Kafka).
- Use Case: Alert a security analyst immediately when a "Beaconing" pattern is detected between an internal host and a known C2 server.
- Result: Significant reduction in "Mean Time to Detect" (MTTD).
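The core of a beaconing detector is checking whether connection times from one host to one destination are suspiciously regular. A minimal, library-agnostic sketch (the jitter threshold and timestamps are illustrative; in production this check would run inside a continuous windowed query over the stream):

```python
# "Beaconing": near-constant intervals between connections.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """True if inter-arrival times are suspiciously regular."""
    if len(timestamps) < 4:
        return False  # too few events to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation: stdev of the gaps relative to the mean gap.
    return pstdev(gaps) / mean(gaps) < max_jitter_ratio

# Connections every ~60s with small jitter -> flagged.
beacon = [0, 60, 121, 180, 241, 300]
# Human browsing: irregular gaps -> not flagged.
human = [0, 5, 90, 95, 400, 410]

print(looks_like_beaconing(beacon))  # True
print(looks_like_beaconing(human))   # False
```

Running this per (host, destination) pair inside each window keeps the check cheap enough for real-time streams.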
3. ML Feature Engineering
Using the temporal graph to enrich machine learning models with sophisticated structural features.
- Operation: Generating graph embeddings or node metrics (like temporal PageRank) to use as inputs for traditional ML classifiers.
- Use Case: Feeding network centrality scores into a churn prediction model to account for the "social" nature of user departures.
- Result: 15-25% improvement in model accuracy over non-graph alternatives.
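A simple version of this operation is computing a per-node metric over a time window and emitting it as a feature column. The sketch below uses a temporal degree count as a stand-in for richer metrics like temporal PageRank; the events and feature names are hypothetical toy data.

```python
# Graph-derived feature engineering: per-user interaction counts
# within a time window, ready to join into an ML feature table.
from collections import Counter

# (timestamp, src_user, dst_user) interaction events -- toy data
events = [
    (1, "alice", "bob"), (2, "bob", "carol"), (3, "alice", "carol"),
    (4, "dave", "alice"), (5, "carol", "bob"),
]

def degree_features(events, start, end):
    """Per-user interaction counts within [start, end]."""
    deg = Counter()
    for t, src, dst in events:
        if start <= t <= end:
            deg[src] += 1
            deg[dst] += 1
    return deg

# Feature for a churn model: recent-window degree per user.
features = degree_features(events, start=1, end=5)
print(features["alice"])  # 3
```

The resulting counts can be merged into a Pandas DataFrame alongside non-graph features before training.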
4. Scheduled Reporting
Generating periodic snapshots of network health and risk for executive dashboards.
- Operation: Automated scripts that calculate community health scores or infrastructure bottleneck metrics every 24 hours.
- Use Case: Daily compliance reports showing "High Risk" money flows that crossed international borders.
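The reporting job described above amounts to bucketing events into 24-hour windows and counting the flows that match a risk rule. A minimal sketch, with a hypothetical threshold, field names, and toy data:

```python
# Scheduled report: count cross-border transfers above a threshold
# per 24-hour bucket.
SECONDS_PER_DAY = 86_400

# (timestamp, amount, origin_country, dest_country) -- toy data
transfers = [
    (10_000, 50_000, "GB", "GB"),
    (20_000, 120_000, "GB", "KY"),
    (90_000, 75_000, "US", "CH"),
    (95_000, 4_000, "US", "US"),
]

def daily_high_risk_counts(transfers, min_amount=60_000):
    """Count cross-border transfers >= min_amount per 24h bucket."""
    counts = {}
    for ts, amount, origin, dest in transfers:
        if origin != dest and amount >= min_amount:
            day = ts // SECONDS_PER_DAY
            counts[day] = counts.get(day, 0) + 1
    return counts

report = daily_high_risk_counts(transfers)
print(report)  # {0: 1, 1: 1}
```

A scheduler (e.g., cron) runs the query once a day and pushes the counts to the dashboard.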
Deployment Stages
Exploration
Connect to your data source and identify the temporal patterns that signify meaningful events in your domain.
Validation
Test your patterns against historical data using Raphtory's "point-in-time" playback to measure precision and recall.
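Measuring precision and recall during validation reduces to replaying the detector over historical windows and comparing its flags to known incidents. A library-agnostic sketch, where the flagged windows and labeled incidents are hypothetical stand-ins for real point-in-time playback results:

```python
# Backtest a candidate pattern against labeled history.

def precision_recall(predicted, actual):
    """Precision and recall of flagged windows vs. labeled incidents."""
    predicted, actual = set(predicted), set(actual)
    tp = len(predicted & actual)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Windows the candidate pattern flagged vs. analyst-confirmed incidents.
flagged = {"w3", "w5", "w7", "w9"}
incidents = {"w3", "w5", "w8"}

p, r = precision_recall(flagged, incidents)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

Tuning the pattern's thresholds against these two numbers before deployment avoids flooding analysts with false positives.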
Automation
Integrate Raphtory into your production data environment (e.g., via the Python API) and pipe results to your existing dashboarding or alerting tools.
Integration Tip: Raphtory integrates seamlessly with the Python data stack (Pandas, PyTorch Geometric, Scikit-learn). See the Ecosystem section for more details.