Observability, assets and metrics
Gathering data about your test run
In the Getting Started section, we showed you how to collect test run outputs: the events emitted by individual test instances during a run.
The Testground SDK supports a variety of metrics and outputs to help test plan developers gather data and understand the results of their test plans and test runs.
Availability
Events emitted by test instances via the APIs available from the RunEnv runtime environment are generally available as:

- output files, after the test run concludes
- entries in the metrics database, currently InfluxDB
Lifecycle events
Lifecycle events enable real-time progress monitoring of test runs, whether by a human or by the upcoming watchtower/supervisor service.
They are inserted into the metrics database immediately, via a direct call to the InfluxDB API.
API
sdk-go/runtime/runtime_events.go
Diagnostics
Diagnostics are inserted immediately into the metrics store via a direct call to the InfluxDB API. They are also recorded in the file diagnostics.out.
API
sdk-go/runtime/metrics_api.go
Results
Results record observations about the subsystems and components under test; conceptually, they are part of the test output.
Results are the end goal of running a test plan: they feed comparative series across runs of a plan, over time and across dependency sets.
They are batch-inserted into InfluxDB when the test run concludes.
API
sdk-go/runtime/metrics_api.go
Assets
Output assets will be saved when the test terminates. You can also manually create output assets/directories under runenv.TestOutputsPath.
API
sdk-go/runtime/runenv_assets.go