Filtering Mocha tests

How to filter the collected Mocha unit tests before running them.

Imagine you have Mocha unit tests in a file spec.js, like these

it('test a', () => {})

it('test b', () => {})

it('test c', () => {})

You can run these tests by installing Mocha

$ npm i mocha
+ mocha@<version>
updated 1 package and audited 226 packages in 5.338s
found 0 vulnerabilities

and then running the command

$ npx mocha ./spec.js


✓ test a
✓ test b
✓ test c

3 passing (4ms)

Beautiful, but what if we want to change the list of collected tests before running them? What if we want to filter the tests and run only some of them? We could use the --grep command line option; for example, we could run just "test b".

$ npx mocha --grep "test b" ./spec.js


✓ test b

1 passing (4ms)

I want more. I want to run previously failed tests first, or run the slowest tests first, or randomize the test order to find hidden dependencies between tests. I have previously written tools for this, like rocha and locha, but those tools have a limitation: they wrap around Mocha rather than modifying its behavior. They do not reuse Mocha's CLI module, which means that as Mocha gains features, those tools fall behind.

I would like to plug right into Mocha's internals and add a new hook that runs after Mocha has collected the tests, but before they start running. Here is how to do it, using one of the most powerful and underused features of Node.js: the --require option. Mocha looks for the --require CLI option and loads those modules for us. So how can we take advantage of loading our own code to change Mocha's behavior?
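
As a quick illustration of the preloading idea (outside of Mocha), Node itself accepts --require and loads the given module before the main script runs. The file names below are made up just for this example.

log-first.js
// hypothetical file, preloaded before index.js runs
console.log('I run first')

$ node --require ./log-first ./index.js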

First, find in Mocha's code the place where collected tests are being executed. That's easy - it is the code inside the file lib/runner.js that has (surprise, surprise) prototype method run

lib/runner.js
/**
 * Run the root suite and invoke `fn(failures)`
 * on completion.
 *
 * @public
 * @memberof Runner
 * @param {Function} fn
 * @return {Runner} Runner instance.
 */
Runner.prototype.run = function(fn) {
  var self = this;
  var rootSuite = this.suite;
  ...

Great, and this is an asynchronous method too, so we can modify the list of tests in any way we want: load files, make HTTP calls, even ask the user which tests to run. All we need to do is overwrite this method, prefixing it with our own code. Start a new local file, let's call it reorder.js. Here is how to reverse the order of the tests in the root suite.

reorder.js
const Runner = require('mocha/lib/runner')
const originalRun = Runner.prototype.run
Runner.prototype.run = function (done) {
  this.suite.tests.reverse()
  originalRun.call(this, done)
}

Let's run Mocha and require this script - it will reverse the order of tests

$ npx mocha --require ./reorder ./spec.js


✓ test c
✓ test b
✓ test a

3 passing (4ms)

Because the run method receives a callback function, we can make our logic asynchronous. Here is how to reverse the test order after a 1 second delay.

reorder-async.js
const Runner = require('mocha/lib/runner')
const originalRun = Runner.prototype.run
Runner.prototype.run = function (done) {
  setTimeout(() => {
    this.suite.tests.reverse()
    originalRun.call(this, done)
  }, 1000)
}

It runs the same way, except for a 1 second pause before the tests start.
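
The same hook can also produce the random order I mentioned at the start. Here is a sketch that shuffles the tests of the root suite using a Fisher-Yates shuffle; a real implementation would probably shuffle nested suites too.

shuffle.js
// a sketch: randomize the order of the tests in the root suite
const Runner = require('mocha/lib/runner')
const originalRun = Runner.prototype.run

// Fisher-Yates shuffle, in place
function shuffle (list) {
  for (let i = list.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1))
    const tmp = list[i]
    list[i] = list[j]
    list[j] = tmp
  }
}

Runner.prototype.run = function (done) {
  shuffle(this.suite.tests)
  originalRun.call(this, done)
}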

We can even ask the user which test to run. Here is a script that prompts the user using the enquirer library.

select.js
const { Select } = require('enquirer')
const Runner = require('mocha/lib/runner')
const originalRun = Runner.prototype.run
Runner.prototype.run = function (done) {
  const prompt = new Select({
    name: 'run test',
    message: 'Which test should I run',
    choices: this.suite.tests.map(t => t.title)
  })
  prompt.run().then(answer => {
    this.suite.tests = this.suite.tests.filter(t => t.title === answer)
    originalRun.call(this, done)
  })
}
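
To try it, install enquirer and pass the script to Mocha through --require, following the same pattern as before:

$ npm i enquirer
$ npx mocha --require ./select ./spec.js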

And here is the hook in action

[image: Select test to run]

Just remember: in a real-world situation you need to look through every suite of tests to modify them, and you should probably consider the interplay between filtering tests and exclusive or skipped tests. For example, if there is an exclusive test declared with it.only, you probably do not want to run all the tests, right?!
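
Here is a minimal sketch of the recursive part, assuming the suite tree exposes tests and suites arrays; the predicate is an arbitrary example and the sketch does not yet account for it.only or skipped tests.

filter-suites.js
// a sketch: recursively filter the tests in every suite
// assumes each suite object has "tests" and "suites" arrays
const Runner = require('mocha/lib/runner')
const originalRun = Runner.prototype.run

// arbitrary example predicate - keep tests whose title contains "b"
const keepTest = test => test.title.includes('b')

function filterSuite (suite) {
  suite.tests = suite.tests.filter(keepTest)
  suite.suites.forEach(filterSuite)
}

Runner.prototype.run = function (done) {
  filterSuite(this.suite)
  originalRun.call(this, done)
}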