Core Principles
Core principles describe the general philosophy we use when considering adding features or making changes to JUnit.
This is a work in progress. There are likely things here that people will disagree with, and we have violated these principles ourselves in the past. Still, it's useful to have a place to bounce around ideas about how we decide to evolve JUnit.
JUnit is a simple framework for writing repeatable tests of Java code using Java.
It's better to enable new functionality by creating or augmenting an extension point rather than adding the functionality as a core feature.
- JUnit has never tried to be a Swiss Army knife.
- Third-party developers move more quickly than we do.
- Once we create an API, we often cannot easily modify it. Third-party libraries can make mistakes and fix them because they have fewer users.
We could have added built-in support for things like test names and temporary folders. Instead, we created extension points (`@Rule` and later `@ClassRule`) and provided the functionality via implementations of those extension points (`TestName` and `TemporaryFolder`).
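The `@Rule` mechanism boils down to wrapping the test in a decorator. The sketch below uses hand-rolled stand-ins for JUnit 4's `TestRule` and `Statement` interfaces (so it runs without JUnit on the classpath) to show how a `TemporaryFolder`-style rule plugs into such an extension point:

```java
import java.io.File;
import java.nio.file.Files;

// Simplified stand-ins for JUnit 4's Statement and TestRule (not the real API),
// so this sketch compiles without JUnit on the classpath.
interface Statement {
    void evaluate() throws Throwable;
}

interface TestRule {
    Statement apply(Statement base);
}

// A TemporaryFolder-like rule: set up a resource before the test runs,
// and tear it down afterwards, by wrapping the test Statement.
class TemporaryFolderRule implements TestRule {
    private File folder;

    public File getRoot() {
        return folder;
    }

    @Override
    public Statement apply(Statement base) {
        return () -> {
            folder = Files.createTempDirectory("junit").toFile();
            try {
                base.evaluate();
            } finally {
                // Delete the folder's contents, then the folder itself.
                for (File f : folder.listFiles()) {
                    f.delete();
                }
                folder.delete();
            }
        };
    }
}

public class RuleSketch {
    public static void main(String[] args) throws Throwable {
        TemporaryFolderRule folder = new TemporaryFolderRule();
        Statement test = () -> {
            File file = new File(folder.getRoot(), "data.txt");
            System.out.println("created: " + file.createNewFile());
        };
        folder.apply(test).evaluate();
        System.out.println("cleaned up: " + !folder.getRoot().exists());
    }
}
```

The shipped rules follow this general shape: the framework only knows how to call `apply`, and all of the interesting behavior lives in the rule implementation.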
This is both an example of what to do and a counterexample. We provided parameterized tests via our own `Runner`. On the one hand, third parties have been able to reuse the `Runner` interface to provide their own APIs for specifying test parameters. On the other hand, a test class has exactly one runner, so you can't combine `Parameterized` with one of the runners provided by Spring.
Instead, we could provide extension points that allow third-party developers to specify a strategy for parameterized tests.
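To make the trade-off concrete, here is a toy model (not JUnit's real `Runner` API) in which the run strategy is pluggable: a parameterized runner invokes the test body once per parameter, but a class can still only have one runner, so taking that slot excludes any other runner:

```java
import java.util.List;
import java.util.function.Consumer;

// A toy stand-in for JUnit's Runner abstraction (not the real API):
// a runner decides how many times, and with what data, a test body runs.
interface Runner {
    void run(Consumer<Object> testBody);
}

// A Parameterized-like runner: invoke the test once per parameter value.
class ParameterizedRunner implements Runner {
    private final List<?> parameters;

    ParameterizedRunner(List<?> parameters) {
        this.parameters = parameters;
    }

    @Override
    public void run(Consumer<Object> testBody) {
        for (Object parameter : parameters) {
            testBody.accept(parameter);
        }
    }
}

public class ParameterizedSketch {
    public static void main(String[] args) {
        // The class's single runner slot is taken by the parameterized
        // strategy, so it cannot simultaneously use another runner
        // (e.g. a Spring one) for the same test class.
        Runner runner = new ParameterizedRunner(List.of(1, 2, 3));
        runner.run(p -> System.out.println("running test with parameter " + p));
    }
}
```

A separate extension point for "how to compute parameters" would let the parameterization strategy compose with other runners instead of competing for the single runner slot.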
Tests often fail. In fact, their raison d'être is to fail when there are problems. If a test fails when run in one mode (e.g., from a build tool) but not in another (e.g., from an IDE), it is hard to debug problems and easy to introduce new failures. Examples of different modes:
- Running tests in an IDE vs a build tool
- Running all classes in a package vs a test suite vs a single class vs a single method
It should be possible to understand how JUnit will treat a class by reading the test class (and its base classes) and looking at the annotations.
To quote David Saff:
I have sometimes been in the situation in which someone used a test framework in an extremely clever way that I couldn't find without serious digging. Because of this cleverness, tests failed in ways that looked like they were finding bugs in the production code, or (much worse!) tests passed that should have failed, because it wasn't clear that there was a convention that needed to be followed for the tests to have any meaning at all.
- The quote from David was in a thread about supporting meta-annotations.
- This has come up when people have proposed adding behavior via package-level annotations.