README
You are reading the Testground documentation for the stable release v0.5.3 branch. The Testground team maintains documentation for the master branch and for the latest stable release.
Testground is a platform for testing, benchmarking, and simulating distributed and peer-to-peer systems at scale. It's designed to be multi-lingual and runtime-agnostic, scaling gracefully from 2 to 10k instances, only when needed.
The Testground project was started at Protocol Labs because we couldn't find a platform that would allow us to reliably and reproducibly test and measure how changes to the IPFS and libp2p codebases would impact the performance and health of large networks (as well as individual nodes), so we decided to invent it.
No puppeteering necessary.
No need to package and ship the system as a separate daemon with an external API in order to puppeteer it.
No need to expose every internal setting over an external API, just for the sake of testing.
Communicate out-of-band information (such as endpoint addresses, peer ids, etc.).
Leverage synchronization and ordering primitives such as signals and barriers to model a distributed state machine.
Programmatically apply network traffic shaping policies, which you can alter during the execution of a test to simulate various network conditions.
The choreography and sequencing emerge from within the test plan itself.
Benchmark, simulate, experiment, run attacks, etc. against versions v1.1 and v1.2 of the components under test in order to compare results, or test compatibility.
Assemble hybrid test runs mixing various versions of the dependency graph.
You record observations, metrics, success/failure statuses.
You emit structured or unstructured assets you want collected, such as event logs, dumps, snapshots, binary files, etc.
Assemble a test run comprising groups of 2, 200, or 10000 test instances, each with different test parameters, or built against different dependency sets.
Schedule them to run locally (as executables or in Docker), or in a cluster (Kubernetes).
With a single command...
Use data processing scripts and platforms (such as the upcoming Jupyter notebooks integration) to draw conclusions.
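Such a run is described declaratively in a composition file. The TOML below is an illustrative sketch only — the plan, case, and group names are invented, and the authoritative schema lives in the Testground docs:

```toml
[metadata]
name = "example-comparison"        # hypothetical run name

[global]
plan            = "example-plan"   # hypothetical plan
case            = "example-case"
total_instances = 202
builder         = "docker:go"
runner          = "cluster:k8s"

# Two groups of different sizes, e.g. built against different
# versions of a component under test.
[[groups]]
id        = "old-version"
instances = { count = 2 }

[[groups]]
id        = "new-version"
instances = { count = 200 }
```

The whole run is then launched with a single command, e.g. `testground run composition -f composition.toml`.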
(🌕 = fully supported // 🌑 = planned)
Experimental/iterative development 🌖
The team at Protocol Labs has used Testground extensively to evaluate protocol changes in large networks, simulate attacks, measure algorithmic improvements across network boundaries, etc.
Debugging 🌗
Comparative testing 🌖
Backwards/forward-compatibility testing 🌖
Interoperability testing 🌑
Continuous integration 🌑
Stakeholder/acceptance testing 🌑
A test plan is a black box with a formal contract: Testground promises to inject a set of environment variables, and the test plan promises to emit events on stdout and to write assets to the output directory.
As such, a test plan can be any kind of program, written in Go, JavaScript, C, or shell.
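As an illustration of that contract, here is a minimal, SDK-free Go sketch. The environment variable names and event shape below are simplified assumptions for illustration only; real plans use the Testground SDK, which implements the actual protocol:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// event is a simplified stand-in for the structured events a test plan
// emits on stdout; the real event schema is defined by the Testground
// runtime, not by this sketch.
type event struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

// formatEvent renders one event as a single JSON line for stdout.
func formatEvent(typ, msg string) string {
	b, _ := json.Marshal(event{Type: typ, Message: msg})
	return string(b)
}

func main() {
	// Testground injects run parameters as environment variables
	// (names here are illustrative).
	plan := os.Getenv("TEST_PLAN")
	outputs := os.Getenv("TEST_OUTPUTS_PATH")

	fmt.Println(formatEvent("message", "starting plan "+plan))

	// Assets (logs, dumps, snapshots, ...) go into the output
	// directory, where Testground collects them after the run.
	if outputs != "" {
		_ = os.WriteFile(filepath.Join(outputs, "notes.txt"), []byte("hello"), 0o644)
	}

	fmt.Println(formatEvent("success", "done"))
}
```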
At present, we offer builders for Go, with TypeScript (Node and browser) support in the works.
For running test plans written in different languages, targeting different runtimes and levels of scale:
The exec:go and docker:go builders compile test plans written in Go into executables or containers.
The local:exec, local:docker, and cluster:k8s runners run executables or containers locally (suitable for 2-300 instances), or in a Kubernetes cloud environment (300-10k instances).
Got some spare cycles and would like to add support for writing test plans in Rust, Python, or X? It's easy! Open an issue, and the community will guide you!
Redis-backed lightweight API offering synchronisation primitives to coordinate and choreograph distributed test workloads across a fleet of nodes.
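The semantics of those primitives can be sketched in-process: an instance signals entry into a named state, and a barrier releases waiters once a target number of signals is reached. The stdlib-only model below is for illustration only — it is not the sdk-go API, and the real service coordinates across machines via Redis:

```go
package main

import (
	"fmt"
	"sync"
)

// barrier models the sync service's signal/barrier primitives
// in-process: each instance signals entry into a named state, and
// waiters unblock once the target number of signals is reached.
type barrier struct {
	mu     sync.Mutex
	counts map[string]int
	waits  map[string]chan struct{}
	target int
}

func newBarrier(target int) *barrier {
	return &barrier{
		counts: make(map[string]int),
		waits:  make(map[string]chan struct{}),
		target: target,
	}
}

// signalEntry records that one instance reached the named state and
// returns that instance's sequence number (1-based).
func (b *barrier) signalEntry(state string) int {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.counts[state]++
	if b.counts[state] == b.target {
		close(b.waitCh(state)) // release everyone waiting on this state
	}
	return b.counts[state]
}

// wait returns a channel that is closed once `target` instances have
// signalled entry into the named state.
func (b *barrier) wait(state string) <-chan struct{} {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.waitCh(state)
}

// waitCh lazily creates the per-state channel; callers must hold b.mu.
func (b *barrier) waitCh(state string) chan struct{} {
	ch, ok := b.waits[state]
	if !ok {
		ch = make(chan struct{})
		b.waits[state] = ch
	}
	return ch
}

func main() {
	const n = 3
	b := newBarrier(n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			seq := b.signalEntry("initialized")
			<-b.wait("initialized") // block until all n instances arrive
			fmt.Printf("instance %d (seq %d) proceeding\n", id, seq)
		}(i)
	}
	wg.Wait()
}
```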
Test instances are able to set connectedness, latency, jitter, bandwidth, duplication, packet corruption, etc. to simulate a variety of network conditions.
Create a k8s cluster ready to run Testground jobs on AWS by following the instructions at testground/infra.
Compile test plans against specific versions of upstream dependencies (e.g. moduleX v0.3, or commit 1a2b3c), so that a single test plan can work with a range of versions of the components under test as they evolve over time.
Diagnostics: Automatic diagnostics via pprof (for Go test plans), with metrics emitted to InfluxDB in real-time. Metrics can be raw data points or aggregated measurements, such as histograms, counters, gauges, moving averages, etc.
Results: When the test plan concludes, all results are pushed in batch to InfluxDB for later exploration, analysis, and visualization.
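To illustrate the difference between raw data points and aggregated measurements, here is a small, self-contained sketch — not the SDK's metrics API: raw points are recorded as they happen, and an aggregate such as a percentile is derived from them afterwards.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// point is a simplified raw metric data point; in a real run these
// would be pushed to InfluxDB rather than kept in memory.
type point struct {
	name  string
	value float64
}

// recorder accumulates raw points and derives aggregates from them.
type recorder struct{ points []point }

func (r *recorder) record(name string, v float64) {
	r.points = append(r.points, point{name, v})
}

// percentile returns the p-th percentile (0 < p <= 100) of the values
// recorded under name, using the nearest-rank method.
func (r *recorder) percentile(name string, p float64) float64 {
	var vals []float64
	for _, pt := range r.points {
		if pt.name == name {
			vals = append(vals, pt.value)
		}
	}
	if len(vals) == 0 {
		return math.NaN()
	}
	sort.Float64s(vals)
	idx := int(math.Ceil(p/100*float64(len(vals)))) - 1
	if idx < 0 {
		idx = 0
	}
	return vals[idx]
}

func main() {
	var r recorder
	// Record some invented dial latencies as raw points.
	for _, ms := range []float64{12, 48, 7, 103, 22} {
		r.record("dial_latency_ms", ms)
	}
	fmt.Printf("p50 = %v ms\n", r.percentile("dial_latency_ms", 50))
	fmt.Printf("p95 = %v ms\n", r.percentile("dial_latency_ms", 95))
}
```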
Create tailored test runs by composing scenarios declaratively, with different groups, cohorts, upstream deps, test params, etc.
Emit and collect/export/download test outputs (logs, assets, event trails, run events, etc.) from all participants in a run.
!> This docs site is work-in-progress! You're bound to find dragons 🐉 in some sections, so please bear with us! If something looks wrong, please open a docs issue on our main repo.