I love assertions and use them everywhere in my production code. They help me detect actual runtime problems, so I do not have to test exhaustively. The assertions I use are lazy and even asynchronous, so there is very little performance penalty for using them.
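For example, here is roughly what the lazy part buys us. This is a minimal sketch, not the actual lazy-ass implementation; the convention of wrapping expensive messages in functions is an assumption for illustration.

```js
// minimal sketch of a lazy assertion: extra arguments are only
// serialized, and argument functions only invoked, on failure
function lazyAss(condition) {
  if (condition) {
    return; // nothing extra is computed on the happy path
  }
  var parts = Array.prototype.slice.call(arguments, 1).map(function (arg) {
    // wrap an expensive message in a function; it only runs here,
    // after the assertion has already failed
    return typeof arg === 'function' ? arg() : JSON.stringify(arg);
  });
  throw new Error(parts.join(' '));
}
```

An asynchronous variant could report the Error from a deferred callback instead of throwing immediately; the sketch above shows only the lazy part.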
After we had used this defensive coding approach at work for a while, a weird situation developed: the difference between the production and unit testing approaches broke the DRY (do not repeat yourself) principle twice!
Problem 1
In production code we use lazy-ass assertions with predicates from Angular, lodash and check-types.
```js
function add(a, b) {
  // check-types predicates guard the inputs; the extra arguments
  // are only serialized into the error message on failure
  lazyAss(check.number(a), 'expected a number, got', a);
  lazyAss(check.number(b), 'expected a number, got', b);
  return a + b;
}
```
We picked check-types after evaluating several assertion libraries, and have extended it with our own domain-specific predicates:
```js
// include after check-types.js
// an example domain-specific predicate added to the check namespace
check.arrayOfStrings = function (a) {
  return check.array(a) && a.every(check.string);
};
```
In our unit tests, however, we use the built-in Jasmine matchers plus Jamie Mason's extra matchers:
```js
describe('addition', function () {
  it('adds numbers', function () {
    expect(add(2, 3)).toEqual(5);    // built-in Jasmine matcher
    expect(add(2, 3)).toBeNumber();  // Jamie Mason's jasmine-matchers
  });
});
```
As time progressed, I often found myself wanting to use the assertions and predicates from production code in the unit specs. Having two different ways to express the same intent, "fail if this condition is false", is definitely a red flag.
We have decided to try NOT to use even the built-in Jasmine matchers in our unit test code, and are moving towards using the same lazy assertions in our specs. Jasmine detects a thrown Error, so the end result is the same: a failing unit test.
```js
describe('addition', function () {
  it('adds numbers', function () {
    // a thrown Error fails the spec, just like a failed matcher
    lazyAss(add(2, 3) === 5, 'expected 2 + 3 to equal 5');
  });
});
```
Problem 2
If an assertion or a matcher in the test 'adds numbers' fails, we have no idea which specific assertion has failed until we look at the stack trace and find the failed assertion's line number:
```js
describe('addition', function () {
  it('adds numbers', function () {
    lazyAss(add(2, 3) === 5);
    lazyAss(add(-2, 2) === 0);
    // if either condition is false, the thrown Error alone does
    // not tell us which assertion failed
  });
});
```
If there are multiple assertions, we are used to adding a text message to be displayed on failure; for example, take a look at a typical QUnit test in underscore-contrib:
1 | test("add", function() { |
We are again breaking the DRY principle by repeating the predicate as a text message, for example the 13 extra characters in line // 1. Why do we need this? The reason is that JavaScript evaluates each argument to a function and passes only the resulting value, so the lazyAss function has no idea what the original predicate that failed was. All it sees is the falsy value as the first argument:
```js
function lazyAss(condition) {
  // the expression has already been evaluated by the time we get
  // here; only its result (for example, false) is visible
  if (!condition) {
    throw new Error('assertion failed');
  }
}
```
It would be nice if we could pass the actual expression as text to lazyAss; then it could print it or use it as the error message. Remember: lazyAss serializes and prints all of its arguments on failure:
```js
// how can we automatically pass the condition expression?
lazyAss(add(2, 3) === 5);
// we would like the call above to behave as if we had written
lazyAss(add(2, 3) === 5, 'add(2, 3) === 5');
```
After exploring several approaches, I wrote lazy-ass-helpful. It provides a single function, lazyAssHelpful, that can transform any other function: it rewrites every lazyAss(...) call it finds, pushing the condition's source string onto the call as an extra argument:
```js
function foo() {
  lazyAss(check.function(foo));
}
// after lazyAssHelpful rewrites the source of foo, the
// transformed function is equivalent to
function foo() {
  lazyAss(check.function(foo), 'check.function(foo)'); // 1
}
```
The second argument to lazyAss in line // 1 was added automatically by lazyAssHelpful.
If there are any other assertion arguments, they are preserved:
```js
function foo() {
  lazyAss(check.function(foo), 'expected foo to be a function');
}
// after the rewrite, the original message argument is preserved
function foo() {
  lazyAss(check.function(foo), 'check.function(foo)',
    'expected foo to be a function');
}
```
We have plugged this automatic rewriting into our unit specs. lazy-ass-helpful includes lazy-ass-helpful-bdd.js, which you can include after the Jasmine or Mocha engine but before your specs. It wraps the describe function and provides a new function, helpDescribe. Just use helpDescribe in your specs to get helpful failure information automatically:
```js
helpDescribe('addition', function () {
  it('adds numbers', function () {
    // on failure, the error message automatically includes the
    // condition's source: 'add(2, 3) === 5'
    lazyAss(add(2, 3) === 5);
  });
});
```
If you use QUnit instead, please take a look at qunit-helpful, which wraps the QUnit.test function directly.
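Here is a sketch of the idea, assuming qunit-helpful applies the same source-rewriting trick to the test callback; the test body below is illustrative, not taken from the library's documentation.

```js
// before rewriting: a plain QUnit assertion without a message
QUnit.test('add', function () {
  ok(add(2, 3) === 5);
});

// after qunit-helpful wraps QUnit.test, the call behaves as if
// the condition's source had been passed as the message
QUnit.test('add', function () {
  ok(add(2, 3) === 5, 'add(2, 3) === 5');
});
```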
Performance
I used qunit-helpful to dynamically rewrite assertions in the underscore-contrib unit tests. The rewriting added about 20% to the unit test execution time (from 420ms to 500ms). I consider this a small price to pay for avoiding typing and updating the error messages manually. You can see the underscore-contrib pull request here.
Conclusions
We try to express our assumptions through assertions with custom predicates, as naturally and understandably as possible. We have picked assertions and removed matchers from our unit tests. By automatically modifying the unit test code, we avoid writing duplicate messages. If an assertion fails in a unit test, it provides plenty of helpful context for us to quickly diagnose the root cause.
In general, I am moving towards providing all the information the JavaScript runtime does not give me when a crash happens: local and global variables, expressions. We have to implement most of this ourselves in user space. The challenge is to do this on demand, using lazy evaluation, to avoid paying a huge performance penalty.