"TopCoder is interested in incorporating a code coverage tool in the component development process. The reason is to measure how well the unit test cases are covering all of the code. The ultimate goal is to improve the quality of our components."
That's how the idea started in the forums and, as usual, it stirred up a chorus of opinions. TopCoder members offered tool proposals, interesting views and concerns, along with the usual, often unspoken questions: is this a good idea? Is it a bad idea? Will it really help us? Well, code coverage isn't necessarily good or bad, but it is a necessary element of a modern test-driven software development paradigm.
Based on the users' proposals from the forum, TopCoder started to look for the right code coverage tools - ideally ones that were small and versatile, with a reasonable learning curve. After identifying a few likely candidates for both Java and .NET, TopCoder gathered a few coders to test drive the tools. This article will provide a summary of those results.
1. Code Coverage
Code coverage analysis, a.k.a. test coverage analysis, is all about how thoroughly your tests exercise your code base and, as mentioned above, can be considered an indirect measure of quality - indirect because it gauges the degree to which the tests cover the code, rather than directly measuring the quality of the code itself. Code coverage can be viewed as white-box or structural testing, because the "assertions" are made against the internals of classes, not against the system's interfaces or contracts. It helps to identify paths in your program that are not being tested, and is most useful and powerful when testing logic-intensive applications.
Here is a summary of the most common "measures" used in code coverage analysis - for more on the subject, review this nice introduction.
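To make the difference between these measures concrete, consider a small, hypothetical Java method (the class and numbers below are invented for illustration). A single test that exercises only one branch can yield high line coverage while leaving branch or basic-block coverage at 50%:

```java
public class Discount {
    // Returns the total price, with a 10% discount for bulk orders.
    static double finalPrice(double price, int quantity) {
        if (quantity >= 10) {                // branch A: bulk order
            return price * quantity * 0.9;
        }
        return price * quantity;             // branch B: regular order
    }

    public static void main(String[] args) {
        // This single call executes only branch B. A line-based report
        // shows most lines covered, but branch coverage is only 50%:
        // the bulk-discount path was never tested.
        System.out.println(finalPrice(5.0, 2)); // prints 10.0
    }
}
```

A second test with `quantity >= 10` would be needed to bring branch coverage to 100%, which is exactly the kind of gap these tools are designed to reveal.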
2. The tools
3. EMMA vs. Cobertura
EMMA takes a more complex approach and supports the class, method, line, and basic block coverage measures. It can instrument classes for coverage either offline (before they are loaded) or on the fly (using an instrumenting application classloader) and also handles entire .jar files without needing to expand them.
From the usage point of view both tools are easy to use and set up. "I found Emma to be easy to use," wrote madking, in his review of EMMA. "All I had to do was insert a couple of lines (which were provided in the documentation) into ant's build.xml file and I got coverage results. I didn't run into any problems during testing. I guess the only 'edge' case is when a single line is only partially covered, which, because Emma instruments byte code rather than source code, sometimes happens unexpectedly. It handles this fine, though, and shows the relevant line as partially covered."
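For readers who want to try the same setup, here is a rough sketch of the kind of build.xml fragment madking describes. The property names and paths below are placeholders, and the exact task attributes should be double-checked against the EMMA documentation:

```xml
<!-- Hypothetical build.xml fragment; all ${...} properties are placeholders. -->
<taskdef resource="emma_ant.properties" classpathref="emma.lib"/>

<target name="instrument">
  <emma enabled="true">
    <instr instrpath="${classes.dir}"
           destdir="${instrumented.dir}"
           metadatafile="${coverage.dir}/metadata.emma"
           merge="true"/>
  </emma>
</target>

<target name="coverage-report">
  <!-- Run after the tests have executed against the instrumented classes. -->
  <emma enabled="true">
    <report sourcepath="${src.dir}">
      <fileset dir="${coverage.dir}">
        <include name="*.emma"/>
      </fileset>
      <html outfile="${coverage.dir}/coverage.html"/>
    </report>
  </emma>
</target>
```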
I reviewed Cobertura, and found that I could make the most of it after spending just one or two hours studying the documentation and the examples (though, of course, my familiarity with Ant probably helped).
Both tools can generate reports in HTML and XML, though only EMMA provides plain text results. Both can be run from the command line, provide custom Ant tasks, and are test-framework agnostic, favoring none of the frameworks with special integration. Both are free and have a strong community behind them, which is comforting if you run into problems while using them. One other thing to note: the code must be compiled with debug info for Cobertura to generate reports, while "EMMA does not require access to the source code and degrades gracefully with decreasing amount of debug information available in the input classes."
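A comparable Cobertura setup might look like the sketch below. Again, the property names are placeholders; note the debug="true" on the compile step, which Cobertura needs in order to map coverage data back to source lines (verify the task attributes against the Cobertura Ant task reference):

```xml
<!-- Hypothetical build.xml fragment; all ${...} properties are placeholders. -->
<taskdef classpathref="cobertura.classpath" resource="tasks.properties"/>

<!-- Cobertura requires debug info in the compiled classes. -->
<javac srcdir="${src.dir}" destdir="${classes.dir}" debug="true"/>

<cobertura-instrument todir="${instrumented.dir}"
                      datafile="${coverage.dir}/cobertura.ser">
  <fileset dir="${classes.dir}" includes="**/*.class"/>
</cobertura-instrument>

<!-- Run after the tests have executed against the instrumented classes. -->
<cobertura-report format="html"
                  datafile="${coverage.dir}/cobertura.ser"
                  destdir="${coverage.dir}/report"
                  srcdir="${src.dir}"/>
```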
Next let's look at some real usage examples, based on TopCoder components.
As you can see, this isn't a full side-by-side comparison, since the tools use different measures. A few small observations about Cobertura can be made, though. It reports interfaces as classes with N/A test coverage (EMMA just ignores them), which can be a little confusing, especially at first. It can also stumble on classes with only static methods, and it reports complex one-liners as fully covered even if they aren't.
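The one-liner caveat is easiest to see with a hypothetical example: a ternary expression packs a condition and both outcomes onto a single line, so a purely line-based report can mark the line as covered even when tests exercise only one arm:

```java
public class OneLiner {
    // A "complex one-liner": condition and both outcomes share one line.
    static String label(int n) { return (n % 2 == 0) ? "even" : "odd"; }

    public static void main(String[] args) {
        // A test suite that only ever passes even numbers executes just one
        // arm of the ternary, yet a line-based report shows the line covered.
        // Block- or branch-based measures would flag the untested "odd" arm.
        System.out.println(label(4)); // prints even
    }
}
```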
EMMA is pretty fast, and the memory overhead is just a few hundred bytes. Cobertura isn't quite as speedy, and it occasionally incurred serious delays on the test runs (I noticed some tests that took a few seconds without Cobertura instrumenting the classes could take up to a few minutes with the instrumented classes).
4. NCover vs. TeamCoverage
NCover uses method, block and line coverage measures to instrument the code, while TeamCoverage uses only branch and line measures.
Both tools support .NET 1.1, .NET 2.0 and C# (though, according to the NCover site, the latest NCover build does not work on .NET 1.1 due to a bug).
NCover has support for NAnt as an exec task. real_vg, who reviewed TeamCoverage, noted that "even used through
Unfortunately, this isn't the end of the issues with TeamCoverage. "The reports can only be viewed in VS or accessed programmatically," wrote real_vg. The tool "is only available as a part of MS VS Team Suite, which has a high price." In terms of ease of use, the tool is "not really intuitive, very limited. The results cannot be converted into human-readable format without special effort (writing a tool). It is easy to use only when working in VS and using VS built-in functionalities for everything; it doesn't seem to be possible to combine the VS GUI's coverage capabilities with NUnit."
"Looks like a good solution only for those who already own the MS VS Team Suite and use it to do all jobs related to testing, i.e. testing is done using MS testing framework. Command line possibilities are marginal; to use the tool efficiently one needs to write his own programs which will access coverage data programmatically using the MS API."
Here's a sample of how TeamCoverage presents its results:
Here's a look at real usage examples, based on TopCoder components:
I'm sure you have noticed the discrepancies between the two reports (blocks and lines seem to be really different for the tools), so I'll appeal to real_vg again to figure this out:
"TopCoder components have one problem with Team Coverage. The issue is that the tests in TopCoder components are usually built into the same assembly as the main classes, instead of into a separate assembly. The problem is that it is impossible to skip instrumentation and further coverage data gathering of the tests, as they are built into the same assembly. It is possible to modify the build file, but many of the components will need further modification - assembly-internal methods will need to be changed to public, or the tests won't compile."
As far as NCover goes, reviewer fvillaf found that invoking it is easy, and the code only needed to be compiled with debug information. NCover fully supports partial line coverage and offers command-line capabilities, making it scriptable, but the only output available is XML (though fvillaf notes that an XSL stylesheet is provided, which can transform the XML into a nice HTML page). One note of caution: NCover requires that the target .dll of the code under test "must be previously compiled for the corresponding framework" (.NET 1.1 or .NET 2.0).
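As a rough illustration of that XML-to-HTML idea, here is a self-contained Java sketch that applies an XSLT stylesheet to an XML document in memory, using only the standard javax.xml.transform API. The coverage XML and the tiny stylesheet below are invented for demonstration; NCover's real report format and bundled XSL are far more elaborate:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslDemo {
    // Transforms a made-up coverage XML fragment with a made-up stylesheet.
    static String demo() throws Exception {
        String xml = "<coverage><module name='Demo' pct='87'/></coverage>";
        String xsl =
            "<xsl:stylesheet version='1.0' " +
            "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
            "<xsl:output method='text'/>" +
            "<xsl:template match='/coverage/module'>" +
            "<xsl:value-of select='@name'/>: <xsl:value-of select='@pct'/>%" +
            "</xsl:template>" +
            "</xsl:stylesheet>";

        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString().trim();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints Demo: 87%
    }
}
```

The same mechanism, pointed at NCover's XML report and its bundled stylesheet, is what produces the readable HTML page fvillaf mentions.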
As I stated at the beginning of this article, I believe this is the right next step for TopCoder to take. As we all begin to grapple with the day-to-day realities of implementing code coverage, though, I encourage you to maintain your perspective: code coverage tools provide valuable insight into unit testing, but remember that they can be misused. It's easy to mistake a high coverage rate for code that is flawless, but that isn't necessarily the case.
Code coverage, particularly in combination with automated testing, can be a powerful tool. One of the best pieces of advice that I've found on the matter, though, was a reminder that no tool is as valuable as your own best judgment. "Code coverage analysis tools," wrote Brian Marick, "will only be helpful if they're used to enhance thought, not replace it."