Analyses

Since 2013, Jepsen has analyzed over two dozen databases, coordination services, and queues—and we’ve found replica divergence, data loss, stale reads, read skew, lock conflicts, and much more. Here’s every analysis we’ve published.

System                  Date        Version
Aerospike               2015-05-04  3.5.4
                        2018-03-07  3.99.0.3
Cassandra               2013-09-24  2.0.0
Chronos                 2015-08-10  2.4.0
CockroachDB             2017-02-16  beta-20160829
Crate                   2016-06-28  0.54.9
Dgraph                  2018-08-23  1.0.2
                        2020-04-30  1.1.1
Elasticsearch           2014-06-15  1.1.0
                        2015-04-27  1.5.0
etcd                    2014-06-09  0.4.1
                        2020-01-30  3.4.3
FaunaDB                 2019-03-05  2.5.4
Hazelcast               2017-10-06  3.8.3
Kafka                   2013-09-24  0.8 beta
MariaDB Galera          2015-09-01  10.0
MongoDB                 2013-05-18  2.4.3
                        2015-04-20  2.6.7
                        2017-02-07  3.4.0-rc3
                        2018-10-23  3.6.4
                        2020-05-15  4.2.6
MySQL                   2023-12-19  8.0.34
NuoDB                   2013-09-23  1.2
Percona XtraDB Cluster  2015-09-04  5.6.25
PostgreSQL              2020-06-12  12.3
RabbitMQ                2014-06-06  3.3.0
Radix DLT               2022-02-05  1.0-beta.35.1
RavenDB                 2024-01-31  6.0.2
Redis                   2013-05-18  2.6.13
                        2013-12-10  WAIT
Redis-Raft              2020-06-23  1b3fbf6
Redpanda                2022-04-29  21.10.1
RethinkDB               2016-01-04  2.1.5
                        2016-01-22  2.2.3
Riak                    2013-05-19  1.2.1
Scylla                  2020-12-23  4.2-rc3
Tendermint              2017-09-05  0.10.2
TiDB                    2019-06-12  2.1.7
VoltDB                  2016-07-12  6.3
YugaByte DB             2019-03-26  1.1.9
                        2019-09-05  1.3.1
Zookeeper               2013-09-23  3.4.5

Get Tested

Would you like Jepsen to analyze your distributed system? Contact aphyr@jepsen.io for pricing.

Analyses generally take one to four months, depending on scope, and are bound by our research ethics policy. We work with your team to understand the system’s guarantees, design a test for the properties you care about, and build a reproducible test harness. We then help you understand what any observed consistency anomalies mean, and work with your engineers to file bugs and develop fixes.

Jepsen can also provide assistance with your existing Jepsen tests—developing new features, performance improvements, visualizations, and more.

Techniques

Jepsen occupies a particular niche of the correctness testing landscape. We emphasize:

  • Opaque-box systems testing: we evaluate real binaries running on real clusters. This allows us to test systems without access to their source, and without requiring deep packet inspection, formal annotations, etc. Bugs reproduced in Jepsen are observable in production, not theoretical. However, we sacrifice some of the strengths of formal methods: tests are nondeterministic, and we cannot prove correctness, only find errors.

  • Testing under distributed systems failure modes: faulty networks, unsynchronized clocks, and partial failure. Many test suites only evaluate the behavior of healthy clusters, but production systems experience pathological failure modes. Jepsen shows how systems behave under strain; see the partition sketch after this list.

  • Generative testing: we generate random operations, apply them to the system, and record a concurrent history of their results. That history is checked against a model to establish its correctness. Generative (or property-based) tests often reveal edge cases triggered by subtle combinations of inputs; see the history-checking sketch below.
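
Concretely, one common way to induce a network partition is to install firewall DROP rules on the database nodes, which is roughly what Jepsen's built-in partition nemesis does (among many other faults). The sketch below is illustrative only: it assumes passwordless root SSH to five hypothetical hosts n1 through n5, and cuts traffic in both directions between two halves of the cluster.

    import subprocess

    NODES = ["n1", "n2", "n3", "n4", "n5"]  # hypothetical node hostnames

    def drop_traffic(node, src):
        """On `node`, drop inbound packets from `src`. Assumes root SSH."""
        subprocess.run(
            ["ssh", node, "iptables", "-A", "INPUT", "-s", src, "-j", "DROP"],
            check=True)

    def partition(group_a, group_b):
        """Bidirectionally cut traffic between two halves of the cluster."""
        for a in group_a:
            for b in group_b:
                drop_traffic(a, b)
                drop_traffic(b, a)

    def heal(nodes):
        """Flush each node's INPUT chain, restoring full connectivity.
        (Coarse: this removes any other INPUT rules too.)"""
        for n in nodes:
            subprocess.run(["ssh", n, "iptables", "-F", "INPUT"], check=True)

    partition(NODES[:2], NODES[2:])  # isolate a two-node minority
    # ... run client operations against the cluster while partitioned ...
    heal(NODES)

The interesting behavior usually appears not during the partition but around it: during the transitions, and after healing, when nodes reconcile divergent state.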
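
Jepsen itself is written in Clojure and checks histories with dedicated checkers such as Knossos and Elle. As a toy illustration of the idea only, the Python sketch below generates random reads and writes against a stand-in key-value store, records a history of completed operations, and asks, by brute force, whether any serial order of those operations satisfies a sequential register model. This is far weaker than Jepsen's real analyses, which also respect real-time ordering, but it shows the shape of the technique.

    import itertools
    import random
    import threading

    class KVStore:
        """Stand-in for the system under test: a dict guarded by a lock."""
        def __init__(self):
            self._data = {}
            self._lock = threading.Lock()

        def write(self, key, value):
            with self._lock:
                self._data[key] = value

        def read(self, key):
            with self._lock:
                return self._data.get(key)

    history = []                  # completed ops: (op, key, value) tuples
    history_lock = threading.Lock()

    def worker(store, n_ops):
        """Apply random reads and writes, recording each completed op."""
        for _ in range(n_ops):
            key = random.choice(["x", "y"])
            if random.random() < 0.5:
                value = random.randint(0, 9)
                store.write(key, value)
                record = ("write", key, value)
            else:
                record = ("read", key, store.read(key))
            with history_lock:
                history.append(record)

    def some_serial_order_exists(history):
        """Brute force: does any permutation of the history satisfy a
        sequential register model? Feasible only for tiny histories, and
        weaker than linearizability (real-time order is ignored)."""
        for perm in itertools.permutations(history):
            state = {}
            ok = True
            for op, key, value in perm:
                if op == "write":
                    state[key] = value
                elif state.get(key) != value:
                    ok = False
                    break
            if ok:
                return True
        return False

    store = KVStore()
    threads = [threading.Thread(target=worker, args=(store, 3))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("history:", history)
    print("model-consistent:", some_serial_order_exists(history))

Brute-force enumeration explodes combinatorially with history length, which is why practical checkers rely on smarter search and on characteristic anomaly patterns rather than trying every permutation.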