I love writing tests, yet I do not like wasting time writing them. Instead of spending a big chunk of time writing all possible tests after the software has been written, I do the following:
Write tests before writing a small chunk of code
When I code a clearly defined small algorithm (like sorting), I start by writing the tests first. The tests help me understand the problem and ensure I stay within its scope. This is the classic Test-Driven Development approach. Unit testing frameworks with watch mode are super helpful for this.
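As a small illustration, here is how the first tests for a sorting function might look; this is only a sketch that assumes a Mocha-style runner and a `sortNumbers` module that does not exist yet.

```js
const assert = require('assert');
// the implementation does not exist yet; these failing tests define its scope
const sortNumbers = require('../src/sort-numbers');

describe('sortNumbers', () => {
  it('returns an empty array unchanged', () => {
    assert.deepStrictEqual(sortNumbers([]), []);
  });

  it('sorts numbers in ascending order', () => {
    assert.deepStrictEqual(sortNumbers([3, 1, 2]), [1, 2, 3]);
  });
});
```

With the runner in watch mode (for example `mocha --watch`), the tests re-run on every save and tell me the moment the implementation is done.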
When writing documentation
I like reading the little code examples showing how functions in the Lodash library work. Read the following description and see if you can quickly understand what the function does.
The inverse of _.toPairs; this method returns an object composed from key-value pairs.
Now look at the code example:
```js
_.fromPairs([['a', 1], ['b', 2]]);
```
It might be just me, but I understand the code example faster than the text!
Since documentation gets out of sync with the code really quickly, it is important to have tested examples in your documentation. Tools like xplain and comment-value are my solution to this problem. They keep the documentation accurate, yet only require a few unit tests. Usually these tests show the most user-friendly cases, which makes them great for teaching what the code does.
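For instance, the Lodash example above can be kept honest by a unit test that asserts exactly what the documentation shows; this is a sketch with plain Mocha and Node's assert, rather than the specific tools mentioned above.

```js
const assert = require('assert');
const _ = require('lodash');

// the documentation example, verified on every test run
it('_.fromPairs builds an object from key-value pairs', () => {
  assert.deepStrictEqual(
    _.fromPairs([['a', 1], ['b', 2]]),
    { a: 1, b: 2 }
  );
});
```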
Write functional (end-to-end) tests after deploying a system
When a complete system has been assembled from smaller modules (I really like the Twelve-factor app architecture) and deployed, I write simple "success path" tests that simulate a typical user (the first scenario is sketched in code below). A typical test would be something like:
- Login
- Create new document
- Logout
Another test would be:
- Login
- Create new document
- Logout
- Login again
- Find previously created document
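The first scenario, for example, fits in a single short test. This is only a sketch: `login`, `createDocument` and `logout` are hypothetical page helpers wrapping whatever end-to-end framework is in use, and the `describe` / `it` structure assumes a Mocha-style runner.

```js
const assert = require('assert');
// hypothetical page helpers that wrap the actual end-to-end framework calls
const { login, createDocument, logout } = require('./pages');

describe('documents', () => {
  it('creates a new document', async () => {
    await login('test-user', 'test-password');
    const doc = await createDocument('My first document');
    assert.ok(doc.id, 'new document has an id');
    await logout();
  });
});
```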
The tests are really the "use case scenarios" and describe how the system behaves when things go well. There are two requirements at this stage:
- the focus should be on test readability, because the tests will be maintained by any developer working on the system. If there are multiple people involved, some might be less proficient with the end-to-end testing framework, yet they must be able to effectively triage a failed test, update an existing one, or write a test for a new feature. A good example: asynchronous JavaScript tests that use the async / await feature are easier to read for people unfamiliar with JavaScript than tests written with promises (compare the two versions below), so pick your testing framework wisely.
- the test failure should carry all the information needed for triage: logs, screenshots, stack traces, linked crash reports, etc. Otherwise it can be hard to recreate the specific environment and conditions needed to reproduce the failure in a complex system.
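To make the async / await point concrete, here is the same step from the sketch above written twice, with the same hypothetical helpers; the difference in readability is exactly what a less experienced teammate will feel when triaging a failure.

```js
// promise chain: the flow is obscured by .then() plumbing
it('creates a new document (promises)', () => {
  return login('test-user', 'test-password')
    .then(() => createDocument('My first document'))
    .then((doc) => {
      assert.ok(doc.id, 'new document has an id');
      return logout();
    });
});

// async / await: reads almost like the plain-language scenario
it('creates a new document (async / await)', async () => {
  await login('test-user', 'test-password');
  const doc = await createDocument('My first document');
  assert.ok(doc.id, 'new document has an id');
  await logout();
});
```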
When a bug is reported
If a user of my library, API or website finds a bug, I write a test first, before working on a fix (see the example after this list).
- the test should recreate the failure, which is often hard to do manually
- I put the bug ID in the test name to easily find it
- the test prevents regressions in the future
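A minimal sketch of such a test, again assuming a Mocha-style runner; the issue number, module and values here are invented for illustration.

```js
const assert = require('assert');
// hypothetical module that contains the reported bug
const { formatPrice } = require('../src/format-price');

// issue #123 (invented id): formatPrice(0) threw instead of returning '$0.00'
it('formats a zero amount (issue #123)', () => {
  assert.strictEqual(formatPrice(0), '$0.00');
});
```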
Before code refactoring
If I decide to refactor code, even if it has unit tests, I usually write a few tests to clarify the behavior. This allows me to remember how the code works, fill in missing edge cases, and ensure I do not accidentally break an existing contract.
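For example, I might pin down the current contract with a few characterization tests before touching anything; `slugify` here is a hypothetical stand-in for the function about to be refactored.

```js
const assert = require('assert');
// hypothetical function about to be refactored
const { slugify } = require('../src/slugify');

// characterization tests: they record the current behavior, including
// edge cases, so the refactoring cannot silently change the contract
describe('slugify (current contract)', () => {
  it('lowercases and joins words with dashes', () => {
    assert.strictEqual(slugify('Hello World'), 'hello-world');
  });

  it('returns an empty string for empty input', () => {
    assert.strictEqual(slugify(''), '');
  });

  it('drops characters that are not letters or digits', () => {
    assert.strictEqual(slugify('Hello, World!'), 'hello-world');
  });
});
```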
Test redundancy is a problem in this case; use code coverage tools to avoid it. There are a lot of interesting things about code coverage, and I have written about some of them: