Helpful assertions

Use lazy and helpful assertions instead of Jasmine matchers in the unit tests.

I love assertions and use them everywhere in my production code. They help me detect actual runtime problems, so I do not have to test exhaustively. The assertions I use are lazy and even async, so there is very little performance penalty for using them.
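To show what I mean by lazy: the expensive part (evaluating and serializing the context for the error message) can be deferred until the assertion actually fails. Here is a minimal sketch of the idea, not the real lazy-ass source; the name lazyAssSketch is made up for illustration.

// minimal sketch of a lazy assertion, NOT the real lazy-ass implementation
function lazyAssSketch(condition) {
  if (condition) {
    return; // happy path: nothing is evaluated or serialized
  }
  // only on failure do we invoke function arguments and serialize everything
  var parts = Array.prototype.slice.call(arguments, 1).map(function (arg) {
    return JSON.stringify(typeof arg === 'function' ? arg() : arg);
  });
  throw new Error(parts.join(' '));
}
// config.retries > 0 is true, so the function argument is never even called
var config = { retries: 3, urls: ['a', 'b'] };
lazyAssSketch(config.retries > 0, 'bad config', function () { return config; });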

After using this defensive coding approach at work for a while, a weird situation developed: the difference between the production and unit testing approaches broke the DRY (do not repeat yourself) principle twice!

Problem 1

In production code we use lazy-ass assertions with predicates from Angular, lodash and check-types.

function add(a, b) {
  lazyAss(check.number(a), 'a should be a number', a);
  lazyAss(check.number(b), 'b should be a number', b);
  return a + b;
}

We picked check-types after evaluating several assertion libraries, and have extended check-types with our own domain-specific predicates.

// include after check-types.js
check.validNumbers = function (list) {
  return Array.isArray(list) &&
    list.every(check.number);
};
// production code
function add(a, b) {
  lazyAss(check.validNumbers([a, b]), 'expecting numbers to add');
  return a + b;
}

In our unit tests, however, we use the built-in Jasmine matchers plus Jamie Mason's extra matchers.

describe('addition', function () {
  it('adds numbers', function () {
    expect(add(2, 3)).toEqual(5);
    expect(add(-1, 1)).toEqual(0);
  });
});

As time progressed, I often found myself wanting to use the assertions and predicates from the production code in the unit specs. Having two different ways to express the same intent, "fail if this condition is false", is definitely a red flag.

We have decided to try NOT to use even the built-in Jasmine matchers in our unit test code. We are moving towards using the same lazy assertions in our specs. Jasmine detects a thrown Error, so the end result is the same: a failing unit test.

describe('addition', function () {
  it('adds numbers', function () {
    lazyAss(add(2, 3) === 5);
    lazyAss(add(-1, 1) === 0);
  });
});
// Error:
// module "addition" test "adds numbers"

Problem 2

If an assertion or a matcher in the test "adds numbers" fails, we have no idea which specific assertion failed until we look at the stack trace and find the line number of the failed assertion:

describe('addition', function () {
  it('adds numbers', function () {
    lazyAss(add(2, 3) === 5);
    lazyAss(add(-1, 1) === 0);
  });
});

If there are multiple assertions, we usually add a text message to be displayed on failure. For example, take a look at a typical QUnit test in underscore-contrib:

test("add", function() {
equal(_.add(1, 1), 2, '1 + 1 = 2'); // 1
equal(_.add(3, 5), 8, '3 + 5 = 8');
equal(_.add(1, 2, 3, 4), 10, 'adds multiple operands');
});

Once again we are breaking the DRY principle by repeating the predicate, for example adding 13 characters in line // 1. Why do we need this? The reason is that JavaScript evaluates each argument to the function and passes only the resulting value, so the lazyAss function has no idea what the original predicate was that failed. All it sees is a falsy value as the first argument:

function lazyAss(condition) {
  ...
}
lazyAss(1 + 1 === 3); // condition = false

It would be nice if we could pass the actual expression as text to lazyAss, so that it could print it or use it as the error message. Remember: lazyAss serializes and prints all its arguments on failure:

// how can we automatically pass the condition expression?
lazyAss(1 + 1 === 3, '1 + 1 === 3');
// output (text comes from the second argument)
Error: "1 + 1 === 3"

After exploring several approaches I wrote lazy-ass-helpful. It provides a single function, lazyAssHelpful, that can transform any other function: it rewrites every lazyAss(...) call it finds, pushing the condition's source code as an extra string argument:

function foo() {
  lazyAss(1 + 1 === 3);
}
var rewrittenFoo = lazyAssHelpful(foo);
console.log(rewrittenFoo.toString());
// output
function foo() {
  lazyAss(1 + 1 === 3, "condition [1 + 1 === 3]"); // 1
}
rewrittenFoo();
// Error: condition [1 + 1 === 3]

The second argument to lazyAss in line // 1 was added automatically by lazyAssHelpful. If there are any other assertion arguments, they are preserved:

function foo() {
  lazyAss(1 + 1 === 3, 'simple addition');
}
lazyAssHelpful(foo)();
// Error: condition [1 + 1 === 3] simple addition
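Under the hood the idea is a source-to-source transform: grab the function's source via toString(), find each lazyAss(...) call, append a string with the condition's source, and recreate the function. Here is a naive sketch of that idea only; the real lazy-ass-helpful parses the code properly, while a regex like this is confused by commas inside the condition and only handles simple single-line calls:

// naive sketch; lazyAssHelpfulSketch is a made-up name, not the library's code
function lazyAssHelpfulSketch(fn) {
  var source = fn.toString();
  var rewritten = source.replace(/lazyAss\(([^;]*)\);/g, function (match, args) {
    var condition = args.split(',')[0].trim();
    return 'lazyAss(' + args + ', "condition [' + condition + ']");';
  });
  // recreate the function from the rewritten source
  // (note: this sketch loses any variables the original function closed over)
  /* jshint evil:true */
  return new Function('return (' + rewritten + ');')();
}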

We have plugged this automatic rewriting into our unit specs. lazy-ass-helpful includes lazy-ass-helpful-bdd.js, which you can include after the Jasmine or Mocha engine but before your specs. It wraps the describe function and provides a new function helpDescribe. Just use helpDescribe in your specs to get the automatic helpful info:

helpDescribe('addition', function () {
  it('adds numbers', function () {
    lazyAss(add(2, 3) === 5);
    lazyAss(add(-1, 1) === 0);
  });
});
// Jasmine fails with
// Error: condition [add(2, 3) === 5]
// module "addition" test "adds numbers"
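Conceptually the wrapper is tiny: it runs the spec callback through the rewriter before handing it to the real describe, and because toString() returns the nested it callbacks too, every lazyAss inside them gets the extra argument. A hypothetical sketch (the actual lazy-ass-helpful-bdd.js may differ in details):

// hypothetical sketch of what the helpDescribe wrapper could look like
function helpDescribeSketch(name, specs) {
  // rewrite the whole spec callback (including nested it callbacks),
  // then register it with the usual Jasmine / Mocha describe
  return describe(name, lazyAssHelpful(specs));
}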

If you use QUnit instead, please take a look at qunit-helpful - it wraps the QUnit.test function directly.

Performance

I used qunit-helpful to dynamically rewrite the assertions in the underscore-contrib unit tests. The rewriting increased the unit test execution time from roughly 420ms to 500ms. I consider this a small price to pay for not having to type and update the error messages manually. You can see the underscore-contrib pull request here.

Conclusions

We try to express our assumptions through assertions with custom predicates, as naturally and as easy to understand as possible. We have picked assertions and removed matchers from our unit tests. By automatically modifying the unit test code, we avoid writing duplicate messages. If an assertion fails in a unit test, it provides plenty of helpful context for us to quickly diagnose the root cause.

In general I am moving towards providing all the information the JavaScript runtime does not give me when a crash happens: local and global variables, expressions. We have to implement most of this ourselves in user space. The challenge is to do it on demand, using lazy evaluation, to avoid paying a huge performance penalty.