Testing in Go by example: Part 3
Conveying behavior with our approach to BDD in Go
Michael Whatcott
May 11, 2015

Welcome to part 3 of our "Testing in Go" series. If you're new here, feel free to catch up before reading on.

In part 1 of this series I alluded to our perceptions of the standard testing tools provided by the Go tool and the standard library, and what was missing for us. We all have different expectations of a testing tool, so it's no wonder that so many have been created.

Part 2 of the series focused on how we have made the act of running tests effortless and automatic.

Introduction

In this post and the next few posts I'll focus on our approach to writing actual tests. The way we write tests now was influenced by a set of important concepts that didn't make it into the standard go test tooling:

  1. Tests as Documentation: Conventions which facilitate documenting the behavior of the SUT (system under test) via tests.
  2. Assertions: A comprehensive assortment of helpful functions for comparing expected/actual results.
  3. Setup/teardown: Behavior invoked before and/or after each test case/function.
  4. Fine-grained control over runner: Ability to ignore one or a few cases or focus on nothing more than one or a few cases without having to (1) write extra code, (2) invoke the runner any differently, or (3) comment out any code (a tall order...).

We felt strongly enough about all of these items that we decided to build a package on top of the standard "testing" package to address them. The result was the convey package, which is part of the GoConvey project. By the end of this post you will have an in-depth understanding of how and why to use it.

Tests as Documentation

The convey package

You'll want to refer to the "Bowling Game" code samples found in part 1 of this series. What follows are the same test cases from part 1, but they have been rewritten using GoConvey's convey package:

package bowling

import (
	"testing"

	. "github.com/smartystreets/goconvey/convey"
)

func TestBowlingGameScoring(t *testing.T) {
	Convey("Given a fresh score card", t, func() {
		game := NewGame()

		Convey("When all gutter balls are thrown", func() {
			game.rollMany(20, 0)

			Convey("The score should be zero", func() {
				So(game.Score(), ShouldEqual, 0)
			})
		})

		Convey("When all throws knock down only one pin", func() {
			game.rollMany(20, 1)

			Convey("The score should be 20", func() {
				So(game.Score(), ShouldEqual, 20)
			})
		})

		Convey("When a spare is thrown", func() {
			game.rollSpare()
			game.Roll(3)
			game.rollMany(17, 0)

			Convey("The score should include a spare bonus.", func() {
				So(game.Score(), ShouldEqual, 16)
			})
		})

		Convey("When a strike is thrown", func() {
			game.rollStrike()
			game.Roll(3)
			game.Roll(4)
			game.rollMany(16, 0)

			Convey("The score should include a strike bonus.", func() {
				So(game.Score(), ShouldEqual, 24)
			})
		})

		Convey("When all strikes are thrown", func() {
			game.rollMany(21, 10)

			Convey("The score should be 300.", func() {
				So(game.Score(), ShouldEqual, 300)
			})
		})
	})
}
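The rollMany, rollSpare, and rollStrike helpers come from the part 1 code samples. For reference, here is a self-contained sketch of a Game implementation along with those helpers; the scoring logic below is our reconstruction of the classic bowling kata and may differ in detail from the part 1 code:

```go
package main

import "fmt"

// Game tracks every roll in a single game of bowling.
type Game struct {
	rolls []int
}

func NewGame() *Game { return &Game{} }

// Roll records how many pins were knocked down by a single throw.
func (g *Game) Roll(pins int) { g.rolls = append(g.rolls, pins) }

// Score walks the recorded rolls frame by frame, applying strike and spare bonuses.
func (g *Game) Score() int {
	score, i := 0, 0
	for frame := 0; frame < 10; frame++ {
		switch {
		case g.rolls[i] == 10: // strike: 10 plus the next two rolls
			score += 10 + g.rolls[i+1] + g.rolls[i+2]
			i++
		case g.rolls[i]+g.rolls[i+1] == 10: // spare: 10 plus the next roll
			score += 10 + g.rolls[i+2]
			i += 2
		default:
			score += g.rolls[i] + g.rolls[i+1]
			i += 2
		}
	}
	return score
}

// Test helpers referenced by the convey test cases.
func (g *Game) rollMany(times, pins int) {
	for roll := 0; roll < times; roll++ {
		g.Roll(pins)
	}
}
func (g *Game) rollSpare()  { g.Roll(5); g.Roll(5) }
func (g *Game) rollStrike() { g.Roll(10) }

func main() {
	game := NewGame()
	game.rollMany(21, 10) // a perfect game
	fmt.Println(game.Score())
}
```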

Here's the output of go test -v -run=TestBowlingGameScoring:

=== RUN TestBowlingGameScoring

  Given a fresh score card
    When all gutter balls are thrown
      The score should be zero ✔
    When all throws knock down only one pin
      The score should be 20 ✔
    When a spare is thrown
      The score should include a spare bonus. ✔
    When a strike is thrown
      The score should include a strike bonus. ✔
    When all strikes are thrown
      The score should be 300. ✔


5 assertions thus far

--- PASS: TestBowlingGameScoring (0.00s)
PASS
ok  	github.com/mdwhatcott/bowling	0.005s

What do you notice that is different about these tests compared to the version from part 1?

Well, there's an extra import statement at the top. Uh-oh, and you used a . to import all the exported names from the convey package right into the bowling package! Isn't that kind of a "no-no"?

That was very observant of you. Yes, we have taken a dependency on an external library for our tests. Please rest assured that we don't change the API for the convey package. It is very stable at this point. A simple go get github.com/smartystreets/goconvey/convey will ensure the package is available.

And yes, we've imported all exported names into the package under test. The Go Code Review Comments document warns against using import dots, and for good reason. It's not wise to pollute your namespace with too many names. Here are some reasons we encourage the use of an import dot when using the convey package:

  1. While there are several names in the convey package, I doubt you were thinking of using any of them in your own package. Just take a deep breath and everything will be ok.
  2. Those names will only be imported into the tests. They won't be included in the binary we put into production later because the test files won't be compiled in that scenario.
  3. Most importantly, the main point of the convey package was to allow the developer to describe test behavior. Having a gazillion instances of convey.Convey and convey.So was not what we wanted. Consider the following two blocks of functionally equivalent code:

1.

package noDot

import (
	"testing"

	"github.com/smartystreets/goconvey/convey"
)

func TestSomething(t *testing.T) {
	convey.Convey("Description of cool behavior", t, func() {
		convey.So(1, convey.ShouldEqual, 2)
	})
}

2.

package dot

import (
	"testing"

	. "github.com/smartystreets/goconvey/convey"
)

func TestSomething(t *testing.T) {
	Convey("Description of cool behavior", t, func() {
		So(1, ShouldEqual, 2)
	})
}

Can you spot the difference? Now, imagine that there were hundreds of Convey and So invocations. How would not using an import dot affect the readability of the code? Understand, we aren't afraid of typing, but in this case we want the tests to read almost like prose, so we prefer the import dot.

What else do you notice that is different about the convey test cases?

Well, each time you invoke Convey the first argument is a helpful description. Is that required?

Yes! Kudos to you again for being so observant. Tests can serve as a wonderful form of documentation of the behavior we are executing and the results we are checking. You'll notice in the part 1 tests we had to make use of test function names and numerous calls to t.Log(...) to achieve that result. The only problem with that approach is that it's easy to forget t.Log when you get in the groove and it's not a habit. Requiring a description as the first argument makes it incredibly hard to forget.

What else is different?

I only see you use t once, and all you're doing with it is passing it into the Convey block--but wait, it's only passed into the top-level Convey block. What's up with that?

Early on, we made the decision to build on top of the "testing" package rather than build our own framework from the ground up. So by passing t into the top-level Convey block, you relinquish the calling of methods on the *testing.T to the convey package so it can mark tests as failed when there are problems. The one thing you need to understand at this point is that you should not call methods on t within a Convey block, especially not methods like t.Fatal or t.Skip, because they invoke runtime.Goexit() and therefore don't allow the convey package to do its job.

You astutely noticed that we only pass t into top-level blocks. We wanted the DSL for this package to be very lean---just Convey, So, and some ShouldEqual kinds of assertions. We didn't want a top-level function that was different than nested Conveys, so we defined a variadic signature and parse the arguments according to the following convention:

Convey(description string, [toplevel: t *testing.T,] action func())
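To illustrate how a convention like that can be implemented, here is a hypothetical sketch (our own illustration, not GoConvey's actual parsing code) of disambiguating a variadic ...interface{} argument list with a type switch; conveyArgs is a made-up name:

```go
package main

import (
	"fmt"
	"testing"
)

// conveyArgs sorts a variadic argument list into its expected roles by type.
// The *testing.T is optional: it only appears on top-level calls.
func conveyArgs(items ...interface{}) (description string, t *testing.T, action func()) {
	for _, item := range items {
		switch value := item.(type) {
		case string:
			description = value
		case *testing.T:
			t = value
		case func():
			action = value
		}
	}
	return description, t, action
}

func main() {
	// Top-level style: description, *testing.T, action.
	desc, t, action := conveyArgs("top-level", new(testing.T), func() {})
	fmt.Println(desc, t != nil, action != nil)

	// Nested style: just description and action.
	desc, t, action = conveyArgs("nested", func() {})
	fmt.Println(desc, t != nil, action != nil)
}
```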

But that kind of variadic function is certainly not idiomatic Go code!

You're right, but then, neither is using anything other than the "testing" package for writing test cases, and yet here we are... Try it out and see if it bothers you over time.

Assertions

Notice anything else intriguing?

Ok, normally we would call t.Fail() or t.Error... to signal a failure but you're saying that the convey package is going to do that for me. How do I check for incorrect results in my tests?

I see where you are going but that's actually the wrong question! Nevertheless, it will now serve quite nicely to introduce one of the fundamental differences between "the Go way" of writing tests and how to write tests with the convey package.

Consider this simple test from the convey package itself (which uses the standard "testing" package to assert its behavior):

func TestDotReporterOnlyReportsAssertions(t *testing.T) {
	monochrome()
	file := newMemoryFile()
	printer := NewPrinter(file)
	reporter := NewDotReporter(printer)

	reporter.BeginStory(nil)
	reporter.Enter(nil)
	reporter.Exit()
	reporter.EndStory()

	if file.buffer != "" {
		t.Errorf("\nExpected: '(blank)'\nActual:  '%s'", file.buffer)
	}
}

Now consider how we would check for a correct result using the convey package:

func TestDotReporter(t *testing.T) {
	Convey("Subject: The Dot Reporter", t, func() {
		monochrome()
		file := newMemoryFile()
		printer := NewPrinter(file)
		reporter := NewDotReporter(printer)

		Convey("When no assertions are executed", func() {
			reporter.BeginStory(nil)
			reporter.Enter(nil)
			reporter.Exit()
			reporter.EndStory()

			Convey("The answer should be blank", func() {
				So(file.buffer, ShouldBeBlank)
			})
		})
	})
}

There are a few differences to notice, some of which we've already explored, but what is most interesting at this point is the way the results are inspected:

if file.buffer != "" {
	t.Errorf("\nExpected: '(blank)'\nActual:  '%s'", file.buffer)
}

vs.

So(file.buffer, ShouldBeBlank)

In a nutshell, here's the difference: We prefer to check that the results are what we expect, not that the results are not what we don't expect. It's a subtle but important point.

So, when you asked "How do I check for incorrect results in my tests?", that was the wrong question. The right question is "How do I check for correct results in my tests?". And you've just seen how. The So function receives as its first argument the value being checked, followed by an "assertion" function, followed by zero or more parameters against which the first argument should be compared. Here are the assertion functions in action:

Equality

type thing struct { a string }

thing1a := thing{a: "asdf"}
thing1b := thing{a: "asdf"}
thing2 := thing{a: "qwer"}

So(1, ShouldEqual, 1)
So("1", ShouldEqual, "1")
So(1, ShouldNotEqual, 2)
So(1, ShouldAlmostEqual, 1.000000000000001)
So(1, ShouldNotAlmostEqual, 2, 0.5)
So(thing1a, ShouldResemble, thing1b)
So(thing1a, ShouldNotResemble, thing2)
So(&thing1a, ShouldPointTo, &thing1a)
So(&thing1a, ShouldNotPointTo, &thing1b)
So(nil, ShouldBeNil)
So(1, ShouldNotBeNil)
So(true, ShouldBeTrue)
So(false, ShouldBeFalse)
So(0, ShouldBeZeroValue)

Numeric Comparison

So(1, ShouldBeGreaterThan, 0)
So(1, ShouldBeGreaterThanOrEqualTo, 1)
So(1, ShouldBeLessThan, 2)
So(1, ShouldBeLessThanOrEqualTo, 1)
So(1, ShouldBeBetween, 0, 2)
So(1, ShouldNotBeBetween, 2, 4)
So(1, ShouldBeBetweenOrEqual, 1, 2)
So(1, ShouldNotBeBetweenOrEqual, 2, 4)

Container inspection

So([]int{1, 2, 3}, ShouldContain, 2)
So([]int{1, 2, 3}, ShouldNotContain, 4)
So(1, ShouldBeIn, []int{1, 2, 3})
So(4, ShouldNotBeIn, []int{1, 2, 3})
So([]int{}, ShouldBeEmpty)
So([]int{1}, ShouldNotBeEmpty)

String-specific inspection

So("asdf", ShouldStartWith, "a")
So("asdf", ShouldNotStartWith, "z")
So("asdf", ShouldEndWith, "df")
So("asdf", ShouldNotEndWith, "as")
So("", ShouldBeBlank)
So("asdf", ShouldNotBeBlank)
So("asdf", ShouldContainSubstring, "sd")
So("asdf", ShouldNotContainSubstring, "af")

Panic Recovery

func panics() {
	panic("Goofy Gophers!")
}

So(panics, ShouldPanic)
So(func() {}, ShouldNotPanic)
So(panics, ShouldPanicWith, "Goofy Gophers!")
So(panics, ShouldNotPanicWith, "Guileless Gophers!")

Type checking

So(1, ShouldHaveSameTypeAs, 0)
So(1, ShouldNotHaveSameTypeAs, "1")
So(bytes.NewBufferString(""), ShouldImplement, (*io.Reader)(nil))
So("string", ShouldNotImplement, (*io.Reader)(nil))

Time

const timeLayout = "2006-01-02 15:04"
january1, _ := time.Parse(timeLayout, "2013-01-01 00:00")
january2, _ := time.Parse(timeLayout, "2013-01-02 00:00")
january3, _ := time.Parse(timeLayout, "2013-01-03 00:00")
january4, _ := time.Parse(timeLayout, "2013-01-04 00:00")
january5, _ := time.Parse(timeLayout, "2013-01-05 00:00")
oneDay, _ := time.ParseDuration("24h0m0s")

So(january1, ShouldHappenBefore, january4)
So(january1, ShouldHappenOnOrBefore, january1)
So(january2, ShouldHappenAfter, january1)
So(january2, ShouldHappenOnOrAfter, january2)
So(january3, ShouldHappenBetween, january2, january5)
So(january3, ShouldHappenOnOrBetween, january3, january5)
So(january1, ShouldNotHappenOnOrBetween, january2, january5)
So(january2, ShouldHappenWithin, oneDay, january3)
So(january5, ShouldNotHappenWithin, oneDay, january1)
So([]time.Time{january1, january2}, ShouldBeChronological)
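One more note while we're here: assertion functions like the ones above all share the same shape, a function that receives the actual value plus any expected values and returns an empty string on success or a failure message otherwise, which means you can plug in your own. Here's a sketch under that convention; shouldBeEven is our own invention, not part of the package:

```go
package main

import "fmt"

// shouldBeEven follows the convey assertion convention: return "" when the
// assertion passes, or a descriptive failure message when it does not.
func shouldBeEven(actual interface{}, expected ...interface{}) string {
	n, ok := actual.(int)
	if !ok {
		return fmt.Sprintf("%v should be an int", actual)
	}
	if n%2 != 0 {
		return fmt.Sprintf("%d should be even", n)
	}
	return "" // success
}

func main() {
	fmt.Printf("%q\n", shouldBeEven(4))
	fmt.Printf("%q\n", shouldBeEven(3))
}
```

With that in place, So(4, shouldBeEven) reads just like the built-in assertions.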

Ok, all these assertion functions seem interesting and useful, but aren't assertions a bad practice?

You're probably referring to classic C assertions, the kind that make a program blow up when not satisfied. The Go team has expressed their disdain for that construct. They have also explained why they chose not to provide helper/assertion functions in the "testing" package, citing the need for all tests to run regardless of a failure early on in the test suite.

Ultimately, each scenario is different and there very well may be situations where one approach is better than another. For example, most programmers agree that goto is not the best way to manage flow of control in a program, but there are times when it is appropriate, even necessary.

In the case of assertions, we agree with the spirit of what the Go team is teaching, but we also see things differently from the perspective of a testing environment. A helpful testing environment should provide constructs and tools that make the job of conceiving and writing tests easier to do. It should allow failures to be quickly understood so that solutions can be devised without delay. Having to come up with all your own failure messages over and over is tedious at best. We prefer to use a generic solution that, once published, can serve each and every test across all projects, not just a collection of table-driven test cases or worse, a single test case.

As a final example, consider this contrived test:

answer := Squares(4) // should return []int{0, 1, 4, 9} but actually returns: []int{}

So(len(answer), ShouldEqual, 4)
So(answer[0], ShouldEqual, 0)
So(answer[1], ShouldEqual, 1)
So(answer[2], ShouldEqual, 4)
So(answer[3], ShouldEqual, 9)

The first assertion will fail and show a helpful message. The following assertions won't even run because of a runtime panic (index out of range). Wouldn't it be more helpful for this particular test case to halt execution of the test after the first failed assertion? Fortunately the convey package allows either style of execution. Have it your way, whichever way that is.
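To make the halting style concrete, here is a simplified stdlib-only sketch of the idea behind it: a failed assertion panics with a sentinel value, and the test runner recovers from that sentinel so the rest of the suite carries on. This is our own illustration of the technique, not GoConvey's code:

```go
package main

import "fmt"

// halt is the sentinel we panic with to abandon a test after a failed assertion.
type halt struct{}

type checker struct {
	haltOnFailure bool
	failures      []string
}

// so records a failure and, in halting mode, panics to skip the rest of the test body.
func (c *checker) so(ok bool, message string) {
	if !ok {
		c.failures = append(c.failures, message)
		if c.haltOnFailure {
			panic(halt{})
		}
	}
}

// run executes a test body, recovering only from the halt sentinel.
func (c *checker) run(test func()) {
	defer func() {
		if r := recover(); r != nil {
			if _, stopped := r.(halt); !stopped {
				panic(r) // a genuine panic; re-raise it
			}
		}
	}()
	test()
}

func main() {
	answer := []int{} // imagine Squares(4) mistakenly returned an empty slice
	c := &checker{haltOnFailure: true}
	c.run(func() {
		c.so(len(answer) == 4, "length should be 4")
		c.so(answer[0] == 0, "answer[0] should be 0") // never reached, so no index panic
	})
	fmt.Println(c.failures)
}
```

Because the second so call never executes, the out-of-range index is never evaluated; flipping haltOnFailure to false gives the run-everything style instead.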

Hey, I'm hungry. Can we pick this up later?

You're right, this post is getting long-ish so we'll defer talking about items 3 (Setup/Teardown) and 4 (fine-grained control) some other time. Happy testing!
