Observability, assets and metrics
Gathering data about your test run
In a previous section, we showed you how to collect test run outputs, i.e. the events emitted by individual test instances during the test run. Beyond those, the Testground SDK supports a variety of metrics and outputs to help test plan developers gather data and understand the results of their test plans and test runs.
Events emitted by test instances via the APIs available on the RunEnv runtime environment are generally available in two places:

- as output files, after the test run concludes
- in the metrics database, currently InfluxDB
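Before diving into each category, here is a minimal sketch of a test plan entrypoint using the Go SDK; the runenv handle passed to the test case is where all of the events and metrics described in this section are recorded. The message text is illustrative.

```go
package main

import (
	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
)

func main() {
	// run.Invoke constructs the RunEnv from the environment the
	// runner sets up, then executes the test case.
	run.Invoke(runTestCase)
}

func runTestCase(runenv *runtime.RunEnv) error {
	// RecordMessage writes a human-readable event into the run output.
	runenv.RecordMessage("hello from group %s", runenv.TestGroupID)
	return nil
}
```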
Lifecycle events

Lifecycle events facilitate real-time progress monitoring of test runs, either by a human or by the upcoming watchtower/supervisor service. They are inserted immediately into the metrics database via a direct call to the InfluxDB API.

sdk-go/runtime/runtime_events.go
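For illustration, a hedged sketch of emitting these events with the SDK's RecordStart, RecordSuccess and RecordFailure helpers. Note that run.Invoke already records start and success/failure around your test case, so most plans never call these directly; doWork is a hypothetical stand-in for real test logic.

```go
package main

import (
	"errors"

	"github.com/testground/sdk-go/runtime"
)

func lifecycle(runenv *runtime.RunEnv) error {
	runenv.RecordStart() // lifecycle event: this instance has started the test

	if err := doWork(); err != nil {
		runenv.RecordFailure(err) // lifecycle event: this instance failed
		return err
	}

	runenv.RecordSuccess() // lifecycle event: this instance succeeded
	return nil
}

// doWork is a hypothetical placeholder for the actual test logic.
func doWork() error { return errors.New("not implemented") }
```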
Diagnostics

Diagnostics are inserted immediately into the metrics store via a direct call to the InfluxDB API. They are also recorded in the file diagnostics.out.

sdk-go/runtime/metrics_api.go
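A minimal sketch of recording diagnostics, assuming the metrics accessors in metrics_api.go: runenv.D() exposes a RecordPoint helper plus go-metrics style instruments on the diagnostics series. The metric names are illustrative.

```go
package main

import (
	gort "runtime"

	"github.com/testground/sdk-go/runtime"
)

func recordDiagnostics(runenv *runtime.RunEnv) {
	var ms gort.MemStats
	gort.ReadMemStats(&ms)

	// A point-in-time observation on the diagnostics series.
	runenv.D().RecordPoint("mem-heap-alloc-bytes", float64(ms.HeapAlloc))

	// go-metrics style instruments are available too.
	runenv.D().Counter("dials-attempted").Inc(1)
	runenv.D().Gauge("peers-connected").Update(12)
}
```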
Results

Results record observations about the subsystems and components under test; conceptually speaking, they are part of the test output. Results are the end goal of running a test plan: they feed comparative series over runs of a test plan, along time and across dependency sets. They are batch-inserted into InfluxDB when the test run concludes.

sdk-go/runtime/metrics_api.go
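Recording results uses the same Metrics API via runenv.R(), the results counterpart of runenv.D(). A hedged sketch, with illustrative metric names:

```go
package main

import (
	"time"

	"github.com/testground/sdk-go/runtime"
)

func recordResults(runenv *runtime.RunEnv, elapsed time.Duration) {
	// A single observation about the component under test.
	runenv.R().RecordPoint("time-to-first-block-ms", float64(elapsed.Milliseconds()))

	// Aggregating instruments are flushed in one batch when the run concludes.
	runenv.R().Counter("blocks-fetched").Inc(1)
	runenv.R().Timer("fetch-duration").Update(elapsed)
}
```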
Output assets

Output assets are saved when the test terminates. You can also manually create output assets/directories under runenv.TestOutputsPath.

sdk-go/runtime/runenv_assets.go
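A sketch of creating an output asset, assuming the CreateRawAsset helper from runenv_assets.go, which opens a file under runenv.TestOutputsPath so that it is collected with the rest of the run outputs. The file name and contents are illustrative.

```go
package main

import (
	"fmt"

	"github.com/testground/sdk-go/runtime"
)

func writeAsset(runenv *runtime.RunEnv) error {
	// Opens an output file under runenv.TestOutputsPath.
	f, err := runenv.CreateRawAsset("peers.csv")
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = fmt.Fprintln(f, "peer_id,latency_ms")
	return err
}
```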