Accurate coverage number

Remove unit test coverage from the collected data.

I started sending code coverage information to coveralls. My favorite combination for generating line-by-line coverage for Node.js projects is the gt testing framework, which uses istanbul internally to instrument the code. The generated lcov file is then piped through node-coveralls. The entire setup is described in this blog post.
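A minimal Travis CI configuration for this kind of pipeline could look like the sketch below. The cover/lcov.info path and the Node version are assumptions - adjust them to wherever gt writes its lcov report in your project:

```yaml
language: node_js
node_js:
  - "0.10"
script:
  - npm test
after_success:
  # pipe the generated lcov report into node-coveralls
  # (cover/lcov.info is an assumed path - check your own setup)
  - cat ./cover/lcov.info | ./node_modules/.bin/coveralls
```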

Travis CI and Coveralls badges


After going through several feature updates, I have noticed a small problem: the single number reported by the code coverage badge is less meaningful than even the short summary coverage report printed by gt to STDOUT. Because coveralls reports an aggregate coverage percentage, it mixes the code coverage of the source code with the coverage of the unit tests themselves.


qunit-promises project: combined code coverage is 94%, but there are two files: the source qunit-promises.js and the unit test file node-tests.js


When I run the tests from the command line, I see the split and know that in reality only 80% of my source is covered by unit tests. I also know that almost all of my unit test code executes (99% coverage). But the average number shown by coveralls makes the project look much better covered than it is: 93%.
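The inflated average is easy to reproduce: the aggregate is covered lines over total lines across every instrumented file, so a large, almost fully executed test file pulls the combined percentage up. A quick sketch with made-up line counts (not the project's real numbers):

```javascript
// Illustration with invented line counts - coveralls-style aggregation
// is simply (all covered lines) / (all instrumented lines).
var source = { covered: 80, total: 100 };   // ~80% source coverage
var tests  = { covered: 198, total: 200 };  // ~99% test file coverage

var combined = (source.covered + tests.covered) /
               (source.total + tests.total);

// the big, well-covered test file drags the average upwards
console.log((combined * 100).toFixed(1) + '%'); // 92.7%
```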


I decided to keep instrumenting all files, but to limit what I send to coveralls by removing the code coverage for unit test files. So I wrote lcov-filter - it reads an lcov file and filters out the records for filenames matching a given regular expression. A typical lcov file looks like this:
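The fragment below is a generic sample in the standard lcov format - the filenames match the project, but the DA (line, hit count) entries and the LF/LH totals are made up for illustration:

```
TN:
SF:qunit-promises.js
DA:1,1
DA:2,1
DA:3,0
LF:3
LH:2
end_of_record
TN:
SF:node-tests.js
DA:1,1
DA:2,1
LF:2
LH:2
end_of_record
```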


All I needed to do was split the file at the TN: delimiters and test the filenames in the SF: lines against the regular expression. The output is printed to STDOUT so it can feed node-coveralls directly.
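The core of that logic can be sketched in a few lines of JavaScript. This is a simplified reconstruction, not the actual lcov-filter source, and filterLcov is a name I made up for illustration:

```javascript
// Simplified sketch (not the real lcov-filter code): split the lcov text
// into per-file records at the TN: delimiters and drop every record whose
// SF: filename matches the exclude pattern.
function filterLcov(text, excludeRe) {
  return text
    .split('TN:')                                   // one chunk per file record
    .filter(function (record) { return record.trim().length > 0; })
    .filter(function (record) {
      var match = /SF:(.*)/.exec(record);           // the source filename line
      return !(match && excludeRe.test(match[1].trim()));
    })
    .map(function (record) { return 'TN:' + record; })
    .join('');
}

// drop records for unit test files before sending anything to coveralls
var lcov = 'TN:\nSF:qunit-promises.js\nDA:1,1\nend_of_record\n' +
           'TN:\nSF:node-tests.js\nDA:1,1\nend_of_record\n';
process.stdout.write(filterLcov(lcov, /tests/));
```

In practice the filtered output would then be piped straight into node-coveralls, e.g. `node filter.js < lcov.info | coveralls` (the filenames here are hypothetical).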

After switching qunit-promises to send the filtered stats, the coverage number dropped, of course - but that is something I can improve!