Poly-LSM: High-Performance Graph Storage in Aster
- Poly-LSM is a graph-oriented LSM-tree storage engine that integrates a hybrid schema, adaptive edge operations, and skew-aware encoding for efficient graph management.
- It employs dynamic delta and pivot update mechanisms, using in-memory degree sketches and compaction strategies to optimize processing of billion-edge graphs.
- Integrated within the Aster graph database, Poly-LSM delivers up to 17× higher throughput with robust transactional support for scalable, evolving workloads.
Poly-LSM is a graph-oriented Log-Structured Merge tree (LSM-tree) storage engine designed for high-performance, large-scale, evolving graphs with intensive update and lookup workloads. Developed as the foundational storage layer of the Aster graph database, Poly-LSM integrates a hybrid storage schema, adaptive edge operation mechanisms, and skew-aware encoding, yielding both high-throughput updates and low-latency queries even at billion-edge scale. Aster is built atop Poly-LSM, providing a Gremlin-compatible interface and robust transactional support, resulting in up to 17× higher throughput than state-of-the-art graph databases on massive real-world datasets (Mo et al., 11 Jan 2025).
1. Hybrid LSM-Tree Architecture
Poly-LSM extends the classical LSM-tree paradigm with a graph-centric hybrid layout, distinguishing between pivot entries (vertex-centric adjacency lists) and delta entries (single-edge updates or deletions). The in-memory MemTable buffers both entry types, enabling efficient batch operations and high ingest rates.
Upon flushing, both entry types are persisted as SSTables in Level 0. As SSTables are compacted into progressively deeper levels, each a constant size ratio T larger than the preceding, Poly-LSM maintains LSM invariants: non-overlapping key ranges per level and global sorted order across the tree. Each pivot entry pairs a vertex identifier with a sorted adjacency list, optimizing locality for full-list queries. Delta entries encode atomic updates, enabling lightweight incremental edge modifications via the RocksDB Merge operator.
Data lookup employs a level-wise strategy: the MemTable is searched first; each on-disk level is then probed using Bloom filters and block indices to minimize I/O, with at most one block access per level, until the relevant pivot entry is found. Delta and pivot entries for a given vertex are merged dynamically to reconstruct the current adjacency list.
During compaction, overlapping delta and pivot entries are resolved via user-defined merge functions. Delta entries are eventually merged into pivots at depth, gradually converting edge-centric deltas into consolidated vertex-centric representations. Compaction thresholds follow the standard LSM rule: when the cumulative size of level i exceeds T times the maximum size of level i−1, a merge sort integrates the overlapping SSTables into level i+1.
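The way a vertex's outstanding delta entries collapse into its pivot entry can be sketched as follows. The entry shapes and function name are illustrative stand-ins for the user-defined merge function Poly-LSM registers as a RocksDB Merge operator:

```python
# Sketch: fold delta entries (single-edge adds/deletes) into a pivot
# entry (sorted adjacency list). Applied both at lookup time and when
# compaction pushes deltas into pivots at deeper levels.
import bisect

def merge_pivot(pivot, deltas):
    """pivot: sorted list of neighbor IDs; deltas: [(op, neighbor), ...]
    applied oldest-to-newest, where op is '+' (add) or '-' (delete)."""
    adj = list(pivot)
    for op, v in deltas:
        i = bisect.bisect_left(adj, v)
        present = i < len(adj) and adj[i] == v
        if op == '+' and not present:
            adj.insert(i, v)          # keep the adjacency list sorted
        elif op == '-' and present:
            adj.pop(i)
    return adj

# Reconstructing the current adjacency list of one vertex:
print(merge_pivot([2, 5, 9], [('+', 7), ('-', 5), ('+', 2)]))  # [2, 7, 9]
```

Because the merge is associative over delta batches, it can run incrementally at any level of the tree without reading the entire history at once.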
2. Adaptive Edge Operation Mechanism
To optimize the trade-off between update latency, read efficiency, and write amplification, Poly-LSM deploys two dynamic update strategies for each new edge (u, v):
- Delta-update (edge-based): If the estimated degree of u (provided by an in-memory Morris-counter degree sketch) is above a tunable threshold τ, a RocksDB Merge operation appends a delta entry for (u, v). This incurs no immediate read I/O.
- Pivot-update (vertex-based): Otherwise, a pivot update is invoked: all current entries for u (both delta and pivot) are read, merged, sorted, and deduplicated; the consolidated adjacency list is then written as a new pivot entry, superseding previous representations.
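A minimal sketch of this per-edge routing, using a toy dict-backed store and an exact degree counter in place of Poly-LSM's SSTables and Morris-counter sketch (all class and method names here are hypothetical):

```python
# Toy store: pivots maps vertex -> sorted adjacency list,
# deltas maps vertex -> pending single-edge updates.
class ToyStore:
    def __init__(self):
        self.pivots, self.deltas = {}, {}
    def merge_delta(self, u, d):
        self.deltas.setdefault(u, []).append(d)
    def read_and_merge(self, u):
        adj = set(self.pivots.get(u, []))
        for op, v in self.deltas.pop(u, []):
            adj.add(v) if op == '+' else adj.discard(v)
        return sorted(adj)
    def write_pivot(self, u, adj):
        self.pivots[u] = adj

def insert_edge(store, degree, u, v, tau):
    """Delta-update when u's (estimated) degree exceeds tau, else pivot-update."""
    if degree.get(u, 0) > tau:
        store.merge_delta(u, ('+', v))       # blind append: no read I/O
    else:
        adj = store.read_and_merge(u)        # consolidate pivot + deltas
        if v not in adj:
            adj.append(v); adj.sort()
        store.write_pivot(u, adj)            # rewrite consolidated pivot
    degree[u] = degree.get(u, 0) + 1

s, deg = ToyStore(), {}
for v in (3, 1, 4, 1, 5):
    insert_edge(s, deg, 0, v, tau=2)
print(s.pivots[0], s.deltas.get(0))  # [1, 3, 4] [('+', 1), ('+', 5)]
```

Low-degree vertices keep compact, read-optimized pivots, while high-degree vertices avoid rewriting ever-larger adjacency lists on every insertion.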
The core cost model is parameterized by the vertex ID size, the block size, the LSM level size ratio, the number of levels, the workload's lookup/update ratio, and the mean vertex degree; from these it derives the expected I/O cost of a delta-update versus a pivot-update.
The threshold τ is set where the expected costs of the two strategies are equal, so each incoming edge is routed to whichever update is cheaper in expectation under the current parameters.
Decision-making per edge takes constant time, and a formal competitive analysis bounds Poly-LSM's I/O cost relative to an optimal offline strategy under both uniform and skewed workload distributions.
3. Skewness Exploitation and Space-Efficient Encoding
Poly-LSM employs a Morris-counter degree sketch for each vertex, encoded as an 8-bit value (4-bit exponent e, 4-bit mantissa m). On each incident edge, m is incremented with probability 2^−e; on mantissa overflow, e is incremented and m renormalized. The estimated degree d̂ ≈ m · 2^e is unbiased (its expectation equals the true degree) and probabilistically concentrated around the true value.
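A toy rendering of this 8-bit sketch; the halve-on-overflow renormalization and the m · 2^e estimator are the natural reading of this encoding, sketched here rather than taken from the paper:

```python
# 8-bit Morris-counter degree sketch: 4-bit exponent e, 4-bit
# mantissa m. Increment m with probability 2^-e; on mantissa
# overflow, bump e and halve m (preserving the estimate m * 2^e).
import random

class DegreeSketch:
    def __init__(self, seed=42):
        self.e, self.m = 0, 0
        self.rng = random.Random(seed)   # seeded for reproducibility
    def increment(self):
        if self.rng.random() < 2.0 ** -self.e:
            self.m += 1
            if self.m == 16:             # 4-bit mantissa overflowed
                self.m, self.e = 8, self.e + 1
    def estimate(self):
        # Each successful increment at exponent e represents ~2^e edges.
        return self.m * 2 ** self.e

sk = DegreeSketch()
for _ in range(1000):
    sk.increment()
print(sk.estimate())  # close to 1000 in expectation
```

One byte per vertex keeps the full sketch table in memory even at billion-vertex scale, which is what makes the per-edge delta/pivot decision essentially free.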
To compress adjacency lists, Poly-LSM partitions each list into segments; within each segment, Elias–Fano encoding is applied over the smaller sub-universe spanned by that segment. For n elements drawn from a sub-universe of size u, space per element is approximately 2 + ⌈log₂(u/n)⌉ bits, plus a fixed-size header per segment. The segmentation parameter (equivalently, the shared prefix length) allows tuning between space and decompression speed.
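A self-contained sketch of plain Elias–Fano over a single segment; bit-level packing is elided, with Python lists of 0/1 bits standing in for real bitvectors:

```python
# Elias-Fano over one segment: the low ceil(log2(u/n)) bits of each
# value are stored verbatim; the high parts are delta-encoded in unary.
from math import ceil, log2

def ef_encode(values, universe):
    """values: sorted ints in [0, universe); returns (low_bits, lows, highs)."""
    n = len(values)
    low_bits = max(0, ceil(log2(universe / n))) if n else 0
    lows = [v & ((1 << low_bits) - 1) for v in values]
    highs, prev = [], 0
    for v in values:
        hi = v >> low_bits
        highs += [0] * (hi - prev) + [1]   # unary gap to next high part
        prev = hi
    return low_bits, lows, highs

def ef_decode(low_bits, lows, highs):
    out, hi, i = [], 0, 0
    for bit in highs:
        if bit == 0:
            hi += 1                        # advance the high part
        else:
            out.append((hi << low_bits) | lows[i]); i += 1
    return out

seg = [3, 4, 7, 13, 14, 15, 21, 43]
enc = ef_encode(seg, universe=64)
print(ef_decode(*enc))  # [3, 4, 7, 13, 14, 15, 21, 43]
```

Segmenting first shrinks the sub-universe u, so dense neighborhoods of high-degree vertices compress well while each segment remains independently decodable.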
4. Integration within the Aster Graph Database
Aster employs Poly-LSM as its storage engine and presents a Gremlin-compatible query interface. Queries traverse the following layers:
- Query Interface: Accepts Gremlin queries.
- Query Executor: Apache TinkerPop plans Gremlin queries as streaming computation graphs of traversal steps (e.g., `V()`, `out()`, `has()`).
- Storage Layer: Poly-LSM exposes procedural APIs (AddVertex, AddEdge, GetOutNeighbors, GetEdge, GetVertex, GetProperty, PutProperty, etc.), mapping high-level graph operations into efficient storage primitives.
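How a streaming Gremlin step might lower onto such procedural APIs can be illustrated as follows; the `Storage` interface and `DictStorage` below are hypothetical stand-ins, not Aster's actual bindings:

```python
# Each Gremlin step becomes a generator that pulls traversers from the
# previous step and issues one storage call per traverser.
from typing import Iterable, Protocol

class Storage(Protocol):
    def get_out_neighbors(self, v: int) -> list[int]: ...
    def get_property(self, v: int, key: str) -> object: ...

def out_step(store: Storage, frontier: Iterable[int]) -> Iterable[int]:
    """Gremlin out(): expand each traverser via one adjacency lookup."""
    for v in frontier:
        yield from store.get_out_neighbors(v)

def values_step(store: Storage, frontier: Iterable[int], key: str):
    """Gremlin values(key): fetch one property per traverser."""
    return (store.get_property(v, key) for v in frontier)

class DictStorage:
    """Toy in-memory storage satisfying the Storage protocol."""
    def __init__(self, adj, props):
        self.adj, self.props = adj, props
    def get_out_neighbors(self, v): return self.adj.get(v, [])
    def get_property(self, v, key): return self.props.get((v, key))

store = DictStorage({1: [2, 3], 2: [4], 3: [4]}, {(4, 'name'): 'd'})
# g.V(1).out().out() as composed streaming steps:
two_hop = list(out_step(store, out_step(store, [1])))
print(two_hop)  # [4, 4]
```

Because steps stream, a traversal touches only the adjacency lists it actually reaches, matching the at-most-one-block-per-level lookup path described earlier.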
Transactional support leverages RocksDB’s optimistic concurrency control: each transaction is timestamped, write operations utilize GetForUpdate, and GetSnapshot ensures repeatable reads. Multi-Version Concurrency Control (MVCC) is realized as the LSM-tree holds multiple data versions; compaction preserves historical snapshots relevant to active transactions.
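The optimistic protocol can be modeled in a few lines; this toy captures the validate-at-commit mechanism only (snapshot-consistent reads are elided) and stands in for RocksDB's OptimisticTransactionDB rather than reproducing its API:

```python
# Optimistic concurrency control: each transaction records a snapshot
# sequence number at start; commit aborts if any key it read was
# written by a later-committing transaction.
class KV:
    def __init__(self):
        self.data, self.version, self.seq = {}, {}, 0

class Txn:
    def __init__(self, kv):
        self.kv, self.snapshot = kv, kv.seq      # GetSnapshot analogue
        self.reads, self.writes = set(), {}
    def get_for_update(self, k):
        self.reads.add(k)                        # track key for validation
        return self.writes.get(k, self.kv.data.get(k))
    def put(self, k, v):
        self.writes[k] = v                       # buffered until commit
    def commit(self):
        for k in self.reads:                     # validation phase
            if self.kv.version.get(k, 0) > self.snapshot:
                return False                     # written since snapshot: abort
        self.kv.seq += 1
        for k, v in self.writes.items():
            self.kv.data[k] = v
            self.kv.version[k] = self.kv.seq
        return True

kv = KV()
t1, t2 = Txn(kv), Txn(kv)
t1.put('x', (t1.get_for_update('x') or 0) + 1)   # read-modify-write
t2.put('x', 1)                                    # blind write, no reads
assert t2.commit()          # t2 validates trivially and wins
assert not t1.commit()      # t1's read of 'x' is stale, so it aborts
```

MVCC falls out naturally in the LSM setting: older versions of a key persist in deeper levels until compaction discards those no longer visible to any active snapshot.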
5. Performance Evaluation and Scalability
Extensive experiments evaluated Aster (using Poly-LSM) on server-grade hardware (Intel i9-13900K, 128 GB RAM, 2 TB NVMe SSD) and diverse real-world graph datasets, including:
| Dataset Type | Example Datasets |
|---|---|
| Moderate | DBLP, Twitch, Cit-Patents |
| Large | Wikipedia, Orkut, Freebase |
| Massive | Twitter (billion-edge scale) |
Baselines included Neo4j, ArangoDB, OrientDB, SQLG (PostgreSQL), JanusGraph (BerkeleyDB), NebulaGraph (edge-LSM), DuckDB, Umbra, as well as Edge-LSM, Vertex-LSM, Delta-Poly, and Pivot-Poly engine variants.
Under a range of lookup/update ratios, Aster consistently achieved markedly higher throughput and lower latency: for the Twitter dataset, load time was under two days, with all baselines except Neo4j failing to complete or severely degrading. On all tested workloads, Aster delivered up to 17× higher throughput than the best competitor, with notably lower AddEdge and GetNeighbors latencies than Neo4j.
Scalability evaluations revealed Poly-LSM outperforms both Edge-LSM and Vertex-LSM schemes across all workload mixes and datasets; the adaptive threshold varies with observed workload characteristics, allowing dynamic shifting between delta and pivot updates. The analytic I/O cost model accurately predicted empirical performance, confirming the soundness of the strategy.
6. Analytical Query Processing and Limitations
Evaluation with LDBC Graphalytics benchmarks (algorithms: PageRank, CDLP, WCC, SSSP, BFS) demonstrated that Aster and Neo4j were the only graph databases completing all five algorithms within two hours on Cit-Patents and Wiki-Talk. Aster excelled particularly on local-traversal tasks (BFS, SSSP), but graph processors such as GridGraph and Mosaic remained an order of magnitude faster on full-scan analytics, indicating a current limitation of Poly-LSM in OLAP-style, whole-graph analytical workloads. A plausible implication is that further optimization or integration with analytical engines is needed to close this gap.
7. Summary and Outlook
Poly-LSM’s blend of a hybrid LSM-tree design, adaptive per-vertex update strategies, degree-aware encoding, and integration within Aster establishes a new baseline for scalable, high-throughput graph management. These architectural features jointly deliver major improvements in update and query performance for massive, evolving graphs as evidenced by empirical comparison on multi-billion edge datasets (Mo et al., 11 Jan 2025). Potential future directions include augmentation for OLAP workloads and further exploration of cross-engine optimizations to extend the analytical capabilities of Poly-LSM–backed systems.