Debugging test plans
This document explains a few of the options available for finding bugs in test plans and troubleshooting failures.
While writing a test plan, there are a few ways to troubleshoot. On this page, I will introduce bugs intentionally so we can see how the system behaves and how to troubleshoot it.
Build errors
$ testground plan create --plan planbuggy

The command above will create a default planbuggy test plan. Unfortunately, for our purposes, the plan has no bugs. Edit main.go so it contains the following buggier code:
main.go
package main

import (
	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
)

func main() {
	run.Invoke(test)
}

func test(runenv *runtime.RunEnv) error {
	// No closing quote, will not build.
	runenv.RecordMessage("Hello Bugs)
	return nil
}

How it looks in terminal output
When this plan runs, the code is sent to the daemon to be built. Of course, this will fail. Notice that the output comes in several sections. The section labeled Server output shows us the error encountered by our builder.
In this case, the error is pretty straightforward, but in a more complex plan, this output can be difficult to parse. So what can you do?
Using standard debugging tools
Test plans are regular executables which accept configuration through environment variables. Because of this, the code can be compiled, tested, and run outside of Testground entirely, with the exception of anything that relies on the sync service. Let's test the code this time without sending it to the Testground daemon, and see what the same failure looks like locally.
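For example, assuming the plan was created in the default location under $TESTGROUND_HOME (typically ~/testground/plans/planbuggy; your path may differ), an ordinary go build surfaces the error directly:

$ cd ~/testground/plans/planbuggy
$ go build .
# Output along these lines; the exact line and column will vary:
# ./main.go:15:27: string literal not terminated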
Tip: if your plan relies on knowledge of the test plan or test case, this can be passed as an environment variable.
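As a sketch, assuming the TEST_PLAN and TEST_CASE variable names read by the sdk-go runtime: once the missing quote is fixed and the plan compiles, you can run the binary directly with something like:

$ TEST_PLAN=planbuggy TEST_CASE=quickstart ./planbuggy

Here quickstart is just a placeholder; substitute whatever test case name your plan expects, and set any further TEST_* variables your plan actually reads.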
Now that output is much more readable!
I can't claim that build errors will always be as easy to diagnose as this one, but this feature enables plan writers to employ traditional debugging techniques or other debugging tools with which they are already familiar.
Debugging with message output
The next technique is useful for plans which build correctly but whose runtime behaviour you want to observe. If you have ever debugged a program by adding logging or printing to the screen, you know exactly what I'm talking about. On Testground, plans can emit events and messages.
Events can be useful for debugging as well. Just like messages, events are a point-in-time capture of the current state. Events are included in the outputs collection, recorded in the order they occur for each plan instance. We also created the R() and D() metrics collectors (results and debugging, respectively). The difference between the two is that debugging metrics are sent to the metrics pipeline fairly quickly, whereas results are collected at the end of a test run.
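To make this concrete, here is a minimal sketch showing all three mechanisms in one test function (the metric names and values are invented for illustration):

package main

import (
	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
)

func main() {
	run.Invoke(test)
}

func test(runenv *runtime.RunEnv) error {
	// Messages are point-in-time captures; they show up in the
	// terminal output and in the outputs collection.
	runenv.RecordMessage("connected to %d peers", 4)

	// D() metrics are pushed to the metrics pipeline quickly,
	// which makes them handy while debugging.
	runenv.D().Counter("forks.acquired").Inc(1)

	// R() metrics are results, collected at the end of the run.
	runenv.R().RecordPoint("meals.eaten", 5)

	return nil
}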
To see how this works, let's use Ron Swanson's classic dilemma.
In the following plan, five philosopher Ron Swansons sit at a table with five forks between them. Unfortunately, there is an implementation bug, and these Ron Swansons will be here forever. Add some debugging messages using runenv.RecordMessage to see if you can straighten this whole thing out (hint: the answer is in the second tab).
Exercise
Solution
If you can successfully debug this code, you will see each Ron finish his meals and, finally, the end message "all rons have eaten".
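The actual exercise and solution live in the tabs above. As a rough stand-in (my own illustration, not the exercise code), here is a minimal sketch of the kind of deadlock involved, with debugging messages already added:

package main

import (
	"sync"

	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
)

func main() {
	run.Invoke(test)
}

func test(runenv *runtime.RunEnv) error {
	forks := make([]sync.Mutex, 5)
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(ron int) {
			defer wg.Done()
			left, right := ron, (ron+1)%5
			for meal := 1; meal <= 3; meal++ {
				runenv.RecordMessage("ron %d reaching for fork %d", ron, left)
				forks[left].Lock()
				// BUG: every Ron grabs his left fork first; if all five
				// do this at once, each holds one fork and waits forever.
				runenv.RecordMessage("ron %d reaching for fork %d", ron, right)
				forks[right].Lock()
				runenv.RecordMessage("ron %d is eating meal %d", ron, meal)
				forks[right].Unlock()
				forks[left].Unlock()
			}
		}(i)
	}

	wg.Wait()
	runenv.RecordMessage("all rons have eaten")
	return nil
}

When the run stalls, the last message each instance printed tells you which fork each Ron is stuck waiting for. The usual fix is to impose a global lock order, for example always picking up the lower-numbered fork first.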
Collecting outputs vs viewing messages in the terminal
When using the local runners with a relatively small number of plan instances, it is fairly easy to view outputs directly in the terminal; I recommend troubleshooting plans with a small instance count for exactly this reason. The same messages you can see in your terminal are also available in the outputs collection.
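To pull those collections down, use the testground collect subcommand. As a sketch, assuming the local:docker runner and the run ID printed at the end of a run:

$ testground collect --runner=local:docker <run id>

This should produce an archive named after the run ID containing, among other things, the per-instance message logs.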
For more information about this, see Analyzing the results.
Accessing profile data
All Go test plans have profiling enabled by default.
For information about using Go's pprof and generating graphs and reports, I recommend you start here.
On Testground, gaining access to the pprof port can sometimes be non-obvious. Allow me to explain how to get access to port 6060 on each of our runners:
local:exec
local:docker
cluster:k8s
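The tabs above cover the three runners. As a sketch of how I'd reach port 6060 on each (pod names and mapped host ports are placeholders and will differ on your system):

# local:exec — instances run directly on the host, so with a single
# instance pprof is reachable on localhost:
$ go tool pprof http://localhost:6060/debug/pprof/heap

# local:docker — port 6060 inside each container is published on a
# host port; find the mapping with docker ps, then target that port:
$ docker ps --format '{{.Names}} -> {{.Ports}}'
$ go tool pprof http://localhost:<mapped port>/debug/pprof/heap

# cluster:k8s — forward the pod's port to your machine first:
$ kubectl port-forward <pod name> 6060:6060
$ go tool pprof http://localhost:6060/debug/pprof/heap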