Over the past four years, Jepsen has analyzed over two dozen databases, coordination services, and queues—and we’ve found replica divergence, data loss, stale reads, lock conflicts, and much more. Here’s every analysis we’ve published.
| System | Date | Version |
|---|---|---|
| Percona XtraDB Cluster | 2015-09-04 | 5.6.25 |
Would you like Jepsen to analyze your distributed system? Contact firstname.lastname@example.org for pricing.
Analyses generally take one to four months, depending on scope, and are bound by our research ethics policy. We work with your team to understand the system’s guarantees, design a test for the properties you care about, and build a reproducible test harness. We then help you understand what any observed consistency anomalies mean, and work with your engineers to file bugs and develop fixes.
Jepsen can also provide assistance with your existing Jepsen tests—developing new features, performance improvements, visualizations, and more.
Jepsen occupies a particular niche of the correctness testing landscape. We emphasize:
Black-box systems testing: we evaluate real binaries running on real clusters. This allows us to test systems without access to their source, and without requiring deep packet inspection, formal annotations, etc. Bugs reproduced in Jepsen are observable in production, not theoretical. However, we sacrifice some of the strengths of formal methods: tests are nondeterministic, and we cannot prove correctness, only find errors.
Testing under distributed systems failure modes: faulty networks, unsynchronized clocks, and partial failure. Many test suites only evaluate the behavior of healthy clusters, but production systems experience pathological failure modes. Jepsen shows behavior under strain.
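To make the idea concrete, here is a toy Python simulation (not Jepsen itself, which injects real faults, e.g. via iptables, into real clusters) showing how a network partition can surface a stale read in a primary/secondary store. The `Cluster` class and its behavior are illustrative assumptions, not any particular database's semantics:

```python
class Replica:
    """A single node holding one value."""
    def __init__(self):
        self.value = None

class Cluster:
    """Toy two-replica store: writes land on the primary and are
    asynchronously copied to the secondary, unless a partition
    blocks replication."""
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()
        self.partitioned = False

    def write(self, v):
        self.primary.value = v
        if not self.partitioned:
            self.secondary.value = v  # replication succeeds

    def read_secondary(self):
        # A client routed to the secondary sees whatever it last received.
        return self.secondary.value

c = Cluster()
c.write(1)
c.partitioned = True        # inject the fault
c.write(2)                  # replication silently fails
stale = c.read_secondary()  # returns 1, not 2: a stale read
```

A healthy-cluster test would never exercise the `partitioned` branch, which is exactly why fault injection matters.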
Generative testing: we generate random operations, apply them to the system, and record a concurrent history of their results. That history is then checked against a model to establish its correctness. Generative (or property-based) tests often reveal edge cases arising from subtle combinations of inputs.
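The generate-apply-check loop can be sketched in a few lines of Python. This is a deliberately simplified, single-threaded illustration against a single-register model, not Jepsen's actual checker; all names here are hypothetical:

```python
import random

def generate_ops(n, seed=0):
    """Generate a random mix of read and write operations."""
    rng = random.Random(seed)
    return [("write", rng.randint(0, 9)) if rng.random() < 0.5
            else ("read", None)
            for _ in range(n)]

def apply_ops(system, ops):
    """Apply ops to the (toy) system under test, recording a history
    of each operation and its observed result."""
    history = []
    for op, arg in ops:
        if op == "write":
            system["value"] = arg
            history.append(("write", arg))
        else:
            history.append(("read", system["value"]))
    return history

def check(history):
    """Check the history against a model of a single register:
    every read must return the most recently written value."""
    model = None
    for op, val in history:
        if op == "write":
            model = val
        elif val != model:
            return False  # observed value diverges from the model
    return True

system = {"value": None}
history = apply_ops(system, generate_ops(100, seed=42))
assert check(history)
```

Real Jepsen tests run operations concurrently across many clients and use far richer models and checkers, but the shape is the same: generate, apply, record, and verify the history against a model.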