I started sending code coverage information to coveralls.io. My favorite combination for generating line-by-line coverage for nodejs projects is the gt testing framework, which uses istanbul internally to instrument the code. The generated lcov.info file is then piped through node-coveralls. The entire setup is described in this blog post.
Travis-ci and coveralls badges
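To make the upload step concrete, here is a minimal sketch of piping the report into node-coveralls from a node script. The cover/lcov.info path is an assumption about where the instrumented run writes its report, and on travis-ci the repo token is typically supplied via the COVERALLS_REPO_TOKEN environment variable or a .coveralls.yml file.
// sketch: the programmatic equivalent of "cat cover/lcov.info | coveralls"
// the report path is an assumption, adjust it to your setup
var fs = require('fs');
var spawn = require('child_process').spawn;

var coveralls = spawn('./node_modules/.bin/coveralls', [], {
  stdio: ['pipe', 'inherit', 'inherit']
});
fs.createReadStream('cover/lcov.info').pipe(coveralls.stdin);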
After going through several feature updates, I have noticed a small problem: the single number reported by the code coverage badge is less meaningful than even the short summary coverage report printed by gt to STDOUT. Because coveralls reports an aggregate coverage percentage, it mixes the coverage of the source code with the coverage of the unit tests themselves.
Example
qunit-promises project: the combined code coverage is 94%, but there are two files: the source qunit-promises.js and the unit test file node-tests.js.
When I run the tests from the command line, I see the split and know that in reality only 80% of my source is covered by unit tests. I also know that all my unit tests execute (99% coverage). But the average number shown by coveralls makes it look much better than it is: 93%.
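A quick back-of-the-envelope calculation shows how the averaging hides the split. The line counts below are made up for illustration; only the percentages mirror the real report.
// hypothetical line counts - only the percentages match the actual report
var source = { covered: 80,  total: 100 };  // qunit-promises.js: 80% covered
var tests  = { covered: 198, total: 200 };  // node-tests.js: 99% covered
var combined = (source.covered + tests.covered) / (source.total + tests.total);
console.log((combined * 100).toFixed(1) + '%'); // ~92.7%, close to the single badge number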
lcov-filter
I decided to keep instrumenting all files, but limit how much I am sending to coveralls by removing the code coverage for unit test files. So I wrote lcov-filter: it reads the lcov.info file and filters out records for filenames matching a given regular expression. A typical lcov.info file looks like this:
TN:
SF:/Users/gleb/git/qunit-promises/qunit-promises.js
FN:1,(anonymous_1)
FN:4,verifyPromise
FN:18,alwaysName
...
TN:
SF:/Users/gleb/git/qunit-promises/test/node-tests.js
...
All I needed to do was split the file on the TN: delimiters and test the filenames in the SF: lines against a regular expression. The output is printed to STDOUT to feed node-coveralls directly.
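Here is a minimal sketch of that idea (not the actual lcov-filter source; the lcov.info filename and the regular expression are placeholders):
var fs = require('fs');

// keep only the records whose SF: filename does NOT match the exclude pattern
function filterLcov(text, exclude) {
  return text
    .split(/^TN:/m)               // one chunk per TN: record
    .filter(Boolean)
    .filter(function (record) {
      var sf = /^SF:(.*)$/m.exec(record);
      return !(sf && exclude.test(sf[1]));
    })
    .map(function (record) { return 'TN:' + record; })
    .join('');
}

// print the filtered report to STDOUT so it can be piped into node-coveralls
process.stdout.write(filterLcov(fs.readFileSync('lcov.info', 'utf8'), /node-tests/));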
After switching qunit-promises to send the filtered stats, the coverage number dropped, of course, but that is something I can improve!