Ch 9. Unit Tests
Test Driven Development (TDD) dictates that unit tests be written before production code, adhering to three core principles:
- writing a failing unit test before production code,
- not writing more of a unit test than necessary to fail, and
- not writing more production code than needed to pass the failing test.
This produces a rapid cycle of test and code creation, with the tests running just slightly ahead of the production code.
This methodology also produces a great many tests, which can rival the production code in size and present a significant management challenge.
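As a minimal sketch of that red-green rhythm (the `add` example and helper names here are invented for illustration, not taken from the chapter), the failing test is written first, and only then the smallest amount of production code needed to pass it:

```java
// Hypothetical illustration of the TDD cycle: the smallest failing test
// is written first, then just enough production code to make it pass.
public class TddCycleSketch {

    // Step 2: the minimal production code needed to pass the test below.
    // It exists only because the test demanded it.
    static int add(int a, int b) {
        return a + b;
    }

    // Step 1: a test written before the production code existed; at that
    // point it did not even compile, which counts as failing.
    static void testAddition() {
        assertEquals(5, add(2, 3));
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        testAddition();
        System.out.println("test passed");
    }
}
```

The point is the ordering, not the arithmetic: neither the test nor the production code is allowed to get more than one small step ahead of the other.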
- A past experience with a team showed me the pitfalls of neglecting the quality of test code. They had decided that quick, dirty tests were acceptable as long as they worked. However, this mindset backfired.
- Dirty tests are hard to maintain and update as production code evolves, becoming a liability that grows with each release, and potentially leading to the abandonment of the test suite altogether.
The absence of a reliable test suite can lead to increased defects, reluctance to make changes, and ultimately, deteriorating production code.
- The fault lies not with the concept of testing but with the execution—permitting tests to be messy was the root of the problem. Successful teams I’ve coached have demonstrated that clean, well-maintained tests are crucial.
- The moral is clear: Test code is as vital as production code. It must be treated with the same level of care and design to remain effective and maintainable. Clean tests are the foundation of a robust and agile codebase.
```java
public void testGetPageHieratchyAsXml() throws Exception {
    crawler.addPage(root, PathParser.parse("PageOne"));
    crawler.addPage(root, PathParser.parse("PageOne.ChildOne"));
    crawler.addPage(root, PathParser.parse("PageTwo"));

    request.setResource("root");
    request.addInput("type", "pages");
    Responder responder = new SerializedPageResponder();
    SimpleResponse response =
        (SimpleResponse) responder.makeResponse(new FitNesseContext(root), request);
    String xml = response.getContent();

    assertEquals("text/xml", response.getContentType());
    assertSubString("<name>PageOne</name>", xml);
    assertSubString("<name>PageTwo</name>", xml);
    assertSubString("<name>ChildOne</name>", xml);
}

public void testGetPageHieratchyAsXmlDoesntContainSymbolicLinks() throws Exception {
    WikiPage pageOne = crawler.addPage(root, PathParser.parse("PageOne"));
    crawler.addPage(root, PathParser.parse("PageOne.ChildOne"));
    crawler.addPage(root, PathParser.parse("PageTwo"));

    PageData data = pageOne.getData();
    WikiPageProperties properties = data.getProperties();
    WikiPageProperty symLinks = properties.set(SymbolicPage.PROPERTY_NAME);
    symLinks.set("SymPage", "PageTwo");
    pageOne.commit(data);

    request.setResource("root");
    request.addInput("type", "pages");
    Responder responder = new SerializedPageResponder();
    SimpleResponse response =
        (SimpleResponse) responder.makeResponse(new FitNesseContext(root), request);
    String xml = response.getContent();

    assertEquals("text/xml", response.getContentType());
    assertSubString("<name>PageOne</name>", xml);
    assertSubString("<name>PageTwo</name>", xml);
    assertSubString("<name>ChildOne</name>", xml);
    assertNotSubString("SymPage", xml);
}

public void testGetDataAsHtml() throws Exception {
    crawler.addPage(root, PathParser.parse("TestPageOne"), "test page");

    request.setResource("TestPageOne");
    request.addInput("type", "data");
    Responder responder = new SerializedPageResponder();
    SimpleResponse response =
        (SimpleResponse) responder.makeResponse(new FitNesseContext(root), request);
    String xml = response.getContent();

    assertEquals("text/xml", response.getContentType());
    assertSubString("test page", xml);
    assertSubString("<Test", xml);
}
```
The initial code examples show tests encumbered by unnecessary detail and duplication, making it difficult for a reader to understand what is being tested. The author points out issues such as:
- Duplication of setup code, which makes it harder to see what is unique about each test.
- Irrelevant details, like the way `PathParser` is used, which obscure the purpose of the test.
- Incidental complexity in setting up the request and response objects, which is not central to what is being tested.
- The cumbersome way of building the request URL.
```java
public void testGetPageHierarchyAsXml() throws Exception {
    makePages("PageOne", "PageOne.ChildOne", "PageTwo");

    submitRequest("root", "type:pages");

    assertResponseIsXML();
    assertResponseContains(
        "<name>PageOne</name>", "<name>PageTwo</name>", "<name>ChildOne</name>");
}

public void testSymbolicLinksAreNotInXmlPageHierarchy() throws Exception {
    WikiPage page = makePage("PageOne");
    makePages("PageOne.ChildOne", "PageTwo");
    addLinkTo(page, "PageTwo", "SymPage");

    submitRequest("root", "type:pages");

    assertResponseIsXML();
    assertResponseContains(
        "<name>PageOne</name>", "<name>PageTwo</name>", "<name>ChildOne</name>");
    assertResponseDoesNotContain("SymPage");
}

public void testGetDataAsXml() throws Exception {
    makePageWithContent("TestPageOne", "test page");

    submitRequest("TestPageOne", "type:data");

    assertResponseIsXML();
    assertResponseContains("test page", "<Test");
}
```

In the revised examples, the tests have been refactored to eliminate the noise and focus on what's important.
This is achieved by:
- Extracting common setup code into helper methods (like `makePages` or `submitRequest`).
- Using descriptive names that convey the intent of the test.
- Structuring the tests with a clear BUILD-OPERATE-CHECK pattern, making it obvious what the test is setting up, what operation it is performing, and what outcomes it is asserting.
The BUILD-OPERATE-CHECK pattern simplifies the understanding of the test’s flow by organizing it into three distinct sections:
- Build: Prepare the necessary test environment or conditions.
- Operate: Execute the function or method under test.
- Check: Verify that the operation has produced the expected result.
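The three phases can be made visible with section comments. This is an invented example (a plain `Deque` as the system under test), sketched only to show the shape of the pattern:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal BUILD-OPERATE-CHECK sketch. The system under test (a Deque used
// as a stack) is invented for illustration; the three-phase layout is the point.
public class BuildOperateCheckSketch {
    public static void main(String[] args) {
        // BUILD: prepare the state the test needs.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");

        // OPERATE: perform the single action under test.
        String popped = stack.pop();

        // CHECK: verify the expected outcomes.
        if (!popped.equals("second"))
            throw new AssertionError("expected second but was " + popped);
        if (stack.size() != 1)
            throw new AssertionError("expected one element left, got " + stack.size());
        System.out.println("BUILD-OPERATE-CHECK passed");
    }
}
```

Keeping the three sections distinct (often separated by blank lines rather than comments) lets a reader answer "what state, what action, what outcome" at a glance.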
The refactored tests show a much cleaner approach, focusing only on the essential actions and assertions that reflect the intent of the tests. This makes them far more maintainable and understandable, which in turn makes the code base more resilient to changes since developers can quickly and confidently modify the code when needed without fearing that they will inadvertently introduce regressions.
The primary goal of unit tests is to ensure that the code remains flexible and that changes can be made without fear of introducing new bugs. Tests are considered the safety net that allows for continuous improvement and refactoring of the codebase.
- Importance of Tests: The author emphasizes that unit tests are essential for maintaining code flexibility, which is critical for the ongoing adaptability of the software. They allow developers to make changes to the code with confidence.
- Fear of Changes: Without tests, there is a natural fear of making changes due to the risk of introducing new bugs. Tests help mitigate this fear, especially when they are comprehensive (high test coverage).
- Dirty Tests: If tests are not clean — meaning they are hard to read, understand, and maintain — they become ineffective and may eventually be discarded, leading to "code rot." Clean tests are crucial for ensuring that they continue to serve their purpose over time.
- Clean Test Characteristics: Readability is highlighted as the most critical feature of a clean test. The author argues that tests should be clear, simple, and express a lot with as few expressions as possible, which is achieved by eliminating unnecessary details and focusing on the core purpose of the test.
- BUILD-OPERATE-CHECK Pattern: The author advocates for structuring tests in a way that clearly separates the setup (build), the execution (operate), and the verification (check) phases. This pattern enhances the clarity and readability of tests, making it easier for developers to understand what the test is doing and to ensure that it is doing the right thing.
- Refactoring for Clarity: The improved tests provided by the author show a refactored version that removes redundant code and irrelevant details, employing helper methods and clear naming conventions to convey the intent more effectively.
Domain-Specific Testing Language (DSL) for writing tests.
- Domain-Specific Testing Language:
  - It involves creating a set of functions and utilities for writing tests that are more readable and easier to write than using the raw APIs.
  - This testing language evolves from refactoring test code to make it less cluttered and more expressive.
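A rough sketch of how such a testing language grows out of a raw API (all names here, `FakeWiki`, `makePages`, `assertPageCount`, are invented for this illustration and merely echo the chapter's helper style):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a tiny domain-specific testing language: helper methods wrap a
// verbose raw API so the test body reads as intent. All names are invented.
public class TestingDslSketch {

    // Stand-in for the system's "raw" API that tests would otherwise call directly.
    static class FakeWiki {
        List<String> pages = new ArrayList<>();
        void addPageByPath(String path) { pages.add(path); }
    }

    static FakeWiki wiki = new FakeWiki();

    // DSL helper: one readable call replaces repeated raw-API setup lines.
    static void makePages(String... paths) {
        for (String path : paths)
            wiki.addPageByPath(path);
    }

    // DSL helper: assertion phrased in the domain's vocabulary.
    static void assertPageCount(int expected) {
        if (wiki.pages.size() != expected)
            throw new AssertionError("expected " + expected + " pages, got " + wiki.pages.size());
    }

    public static void main(String[] args) {
        // The "test" now reads as a sentence about pages, not about API plumbing.
        makePages("PageOne", "PageOne.ChildOne", "PageTwo");
        assertPageCount(3);
        System.out.println("DSL sketch passed");
    }
}
```

Such helpers are rarely designed up front; they accumulate as repeated setup and assertion code gets extracted during refactoring.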
- A Dual Standard:
  - The engineering standards for the testing API differ from production code, prioritizing readability over efficiency.
  - Testing code can afford to be less efficient because it does not run in a production environment.
- Example Tests:
  - Initial tests are verbose and force readers to jump between the state being checked and the assertion being made, which is cumbersome.
  - Improved tests use a compact string representation of the system state, making them quicker to read and understand.
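The chapter's environment-controller example encodes five device states in one short string, uppercase meaning "on", so a single assertion can check the whole system. A rough re-creation of that idea (the field names are paraphrased, not copied from the book's listing):

```java
// Rough re-creation of the chapter's compact state-string idea: five boolean
// device states encoded as one string, uppercase = on, lowercase = off.
public class HardwareStateSketch {
    boolean heater, blower, cooler, hiTempAlarm, loTempAlarm;

    // One glance at a string like "HBchL" tells the reader which devices are on.
    String getState() {
        return (heater      ? "H" : "h")
             + (blower      ? "B" : "b")
             + (cooler      ? "C" : "c")
             + (hiTempAlarm ? "H" : "h")
             + (loTempAlarm ? "L" : "l");
    }

    public static void main(String[] args) {
        HardwareStateSketch hw = new HardwareStateSketch();
        // Simulate a "way too cold" condition: heat and blow, alarm on low temp.
        hw.heater = true;
        hw.blower = true;
        hw.loTempAlarm = true;
        System.out.println(hw.getState()); // prints "HBchL"
    }
}
```

A test can then assert `assertEquals("HBchL", hw.getState())` instead of five separate boolean checks, trading a tiny decoding cost for much faster reading.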
- Efficiency vs. Cleanliness:
  - Testing code does not need to be as efficient as production code and can use simpler constructs, like string concatenation instead of `StringBuffer`, especially in a test environment with abundant resources.
- One Assert per Test:
  - There is a guideline recommending a single assert per test for clarity, but minimizing the number of asserts is more practical than strictly enforcing one.
- Single Concept per Test:
  - The author emphasizes testing a single concept per test function to avoid confusing tests that cover multiple concepts, as demonstrated by the `testAddMonths()` example.
  - Tests should be split so that each one focuses on a single concept, which clarifies what each test aims to verify.
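The split might look like the following sketch, which uses `java.time.LocalDate` in place of the book's `SerialDate` (note that `plusMonths` clamps to the last valid day of the shorter month, matching the behavior the original test exercises):

```java
import java.time.LocalDate;

// Splitting the chapter's testAddMonths() idea into one concept per test,
// sketched with java.time.LocalDate instead of the book's SerialDate.
public class AddMonthsSketch {

    // Concept 1: adding one month to May 31 lands on June 30, because
    // June has only 30 days and the day-of-month is clamped.
    static void testAddOneMonthToMonthEndWith31Days() {
        LocalDate result = LocalDate.of(2004, 5, 31).plusMonths(1);
        assertEquals(LocalDate.of(2004, 6, 30), result);
    }

    // Concept 2: adding two months to May 31 lands on July 31, since
    // July has 31 days and no clamping is needed.
    static void testAddTwoMonthsToMonthEndWith31Days() {
        LocalDate result = LocalDate.of(2004, 5, 31).plusMonths(2);
        assertEquals(LocalDate.of(2004, 7, 31), result);
    }

    static void assertEquals(LocalDate expected, LocalDate actual) {
        if (!expected.equals(actual))
            throw new AssertionError("expected " + expected + " but was " + actual);
    }

    public static void main(String[] args) {
        testAddOneMonthToMonthEndWith31Days();
        testAddTwoMonthsToMonthEndWith31Days();
        System.out.println("all addMonths tests passed");
    }
}
```

Each test now names and verifies exactly one month-end rule, so a failure points directly at the concept that broke.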
The F.I.R.S.T. principle
The F.I.R.S.T. principle outlines essential guidelines for writing clean and effective tests in software development.
- Fast: Tests should execute quickly. Slow tests discourage frequent running, which is essential for early problem detection and ease of code maintenance. Fast tests help prevent the degradation of the code base over time.
- Independent: Tests must not rely on each other. Each test should set up its own conditions and be executable independently, in any sequence. Dependencies among tests can lead to cascading failures, complicating the diagnosis of issues and obscuring problems in subsequent tests.
- Repeatable: Tests should consistently yield the same results in any environment, whether it's a production, QA, or a local development environment. Non-repeatable tests can give rise to excuses for failures and limit testing to specific environments, which is not ideal.
- Self-Validating: The outcome of a test should be clear-cut: either pass or fail. Tests should not require manual examination of log files or comparison of different outputs to determine their success. Tests that aren't self-validating can lead to subjective interpretations of results and time-consuming manual analysis.
- Timely: Tests should be written just before the production code they are meant to validate. Writing tests after developing production code can lead to difficulties in test creation and potentially untestable code. Timely test writing encourages testable, well-designed production code.
Conclusion
- While these principles only begin to cover the topic of clean testing, they highlight the crucial role tests play in maintaining the health of a project.
- Tests are vital not just for verifying functionality, but also for ensuring the long-term flexibility, maintainability, and reusability of production code. Regular attention and refinement of tests, along with the development of domain-specific testing languages, are key practices in maintaining code quality and project health. Neglecting test quality can lead to the deterioration of the entire codebase.