Tuesday, May 27, 2008

The dark side of user stories

I've always had mixed feelings about user stories. Despite being critical of many drawbacks of the Use Case approach (the one I hate most is the time wasted filling a template with the same useless information repeated over and over), I think that User Stories have some limitations that a team willing to turn agile has to be aware of.

Interlude N°1

Seven years ago, a younger me was collecting requirements for a small application that was supposed to handle the operations required to set up a secure banking account. Every person interviewed told me a different story in terms of "special cases" (the whole story seemed a collection of if … ), and the resulting workflow looked messy and impossible to tame. I tried a different approach and started modeling in terms of states instead of actions. It turned out that the workflow was pretty simple: once I realized that there was no before-after correlation between some of the actions, a loose workflow could handle the whole thing with two screens and a bunch of checkboxes.

Interlude N°2

JAOO Conference 2006. Jeff Sutherland talking about Scrum to a captivated audience, saying something like: "If the product backlog is not ready, I give the whole development team a day off." Silence. People dazed and confused. "There is no chance that any line of code written without knowing its purpose will turn out useful. Instead we'll have to pay for writing that line, for testing that line, for refactoring that line and for convincing the team that this line shouldn't be used." Pause. "The other reason is that there is no better way to make management hurry up and finalize their decisions than giving the whole development team a day off." I liked that.

Interlude N°3

Business application. Very complex domain. Requirements were heavily tied to the Italian fiscal system (which means that the application should mimic the laws, but the laws are not meant to be understood, and many of them are logically flawed – but this is a whole different story that can't fit here…). It took the lead analyst a couple of weeks to nail down a specification for one of the complex use cases, and he was working together with the domain expert every day of the week.

Back to our story

One of the key points of user stories is that they act as placeholders for the real specification. Instead of writing a detailed spec, structured like a Use Case document, months in advance, a User Story provides the minimum amount of information required to identify the story, estimate it, prioritize it, and relate it to other stories. Before development of a specific story starts, Story Cards basically serve as a low-fi management tool (for estimation and prioritization), allowing the team to defer the detailed requirement specification to the very last moment. This way, requirements will be more precise (because they could have changed in the meantime and because the whole team is more mature and more expert in the problem domain) and will require less documentation, thanks to the shorter release cycle.

Unfortunately, there are some preconditions that make this magic happen. One is the availability of a domain expert. It's a key factor in many agile processes (not to mention the XP "cohabitation"), but it only reads like a simple thing in a book. In reality one might not be that lucky. Domain experts might not be so available (I agree that this is business suicide – don't get me wrong), or not so expert, or might simply not have the right perspective. Well… these are basically the reasons why the analyst role was invented. Forget what we saw in the "Rose years", when an analyst was somebody who produced useless diagrams. An analyst is a knowledge cruncher (the definition comes from the DDD book): somebody who becomes able to master the domain like the domain experts do (or possibly better) and to derive a model from fragments of information.

Many times, User Stories are seen as a way to leapfrog the analysis phase and the analyst altogether: a good design will "emerge" as a result of iterations. Well… the use case of Interlude 3 was one iteration. And if requirements are collected by developers straight from domain experts (as in Interlude 1), the result could leave you completely stuck with crappy code for ages. Many teams do not suffer much from this limitation. Maybe they have rare beasts like developers with brilliant analysis skills, or analysts who really embraced the agile mindset and fit perfectly in this role. Sometimes, instead, the analyst is perceived as "the person who writes the specs", and since the specs are not that necessary anymore, neither is the analyst.

Hmmm … stormy weather approaching.


Friday, May 23, 2008

Cleaning up our test code – Part 2

Assuming that we've created our test objects in a few moves (please read the previous post about it), the focus now switches to the way we use test code to assert the correctness of production code. The JUnit assertion family relies heavily on equals-based assertions. Unfortunately, the equals() method is far from being used consistently, so equals-based testing has some dangers we need to be aware of.

The equals() contract

The equals() method comes with a general behavioral contract, defined in the Java Specification, which is used heavily by the Collections classes to retrieve objects from the different container classes. As every good Java developer knows, overriding equals() requires us to adhere to that implicit behavioral contract, and also to override hashCode() to ensure consistent behavior across the different container types. So, to effectively test our domain objects, we need to override both methods. So far so good.

There are also a few convenient ways to do this: Eclipse allows you to generate equals() and hashCode() from a given set of attributes. The resulting code is quite good, but it reads like a grey area in your source code. The Jakarta Commons implementation is less "automatic" but gives the developer better control over the final result.
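As a minimal sketch of the Jakarta Commons approach (assuming a hypothetical Person class with name and surname fields), EqualsBuilder and HashCodeBuilder let you pick exactly which attributes take part in equality:

import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;

public class Person {

    private String name;
    private String surname;

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Person)) return false;
        Person that = (Person) other;
        // only the chosen attributes take part in equality
        return new EqualsBuilder()
                .append(name, that.name)
                .append(surname, that.surname)
                .isEquals();
    }

    @Override
    public int hashCode() {
        // the builder asks for two arbitrary non-zero odd numbers as seeds
        return new HashCodeBuilder(17, 37)
                .append(name)
                .append(surname)
                .toHashCode();
    }
}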

Enter a new player

If you're using Hibernate to persist your domain objects, you probably know that this requires some attention to the way equals() and hashCode() are defined in your application. This is primarily tied to the Hibernate persistence lifecycle (which generally populates id fields upon write operations) and to lazy loading (some fields are loaded only if they're explicitly accessed inside a Hibernate session). The Hibernate recommendation is to define equals() and hashCode() according to equality of the so-called business key, which can roughly be mapped to "a unique subset of required non-lazy fields, excluding the id". Id-based equality should be managed only by Hibernate, while business operations should rely on an equals() method based on the business key. To purists, this sounds like an undesirable implicit dependency on the Hibernate framework (your POJOs are still POJOs, but not exactly the same POJOs you would have had without Hibernate).
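Here is a sketch of what business-key equality could look like, assuming a hypothetical BankAccount entity whose accountNumber is the unique, required, non-lazy business key:

import java.math.BigDecimal;

public class BankAccount {

    private Long id;              // surrogate key, populated by Hibernate
    private String accountNumber; // business key: unique, required, non-lazy
    private BigDecimal balance;   // mutable state, excluded from equality

    protected BankAccount() {}    // no-arg constructor required by Hibernate

    public BankAccount(String accountNumber, BigDecimal balance) {
        this.accountNumber = accountNumber;
        this.balance = balance;
    }

    public BigDecimal getBalance() {
        return balance;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof BankAccount)) return false;
        BankAccount that = (BankAccount) other;
        // id and balance are deliberately ignored:
        // equality is defined by the business key alone
        return accountNumber.equals(that.accountNumber);
    }

    @Override
    public int hashCode() {
        return accountNumber.hashCode();
    }
}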

Equality as a business-dependent concept

So far, we have two separate equality implementations: id-based equality (which should be used only by Hibernate, behind the scenes) and business-key equality, which will be used in our business code and will be implicitly used whenever we use containers from the Collections framework. What should we use in testing? Unfortunately, there is no one-size-fits-all answer: the choice depends heavily on what we are testing and on what precisely the desired behavior of the application is. If we are adding information to some non-mandatory field, business-key equality simply won't check it. If we're changing the value of a mandatory field, and want to check that this doesn't trigger the creation of a new id, we need to check that field explicitly.

Often, applications with a nontrivial domain model can't rely on a single notion of equality (are two BankAccount instances equal if they have a different balance?). This is more or less clear during analysis, but the presence of an assertEquals() method in JUnit makes blindly using equals() so tempting…
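To make the trap concrete, here is a sketch of a JUnit test built on the hypothetical BankAccount above: the two instances are equal according to the business key, so a difference on balance slips silently through assertEquals() and has to be asserted explicitly.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

import java.math.BigDecimal;

import org.junit.Test;

public class BankAccountEqualityTest {

    @Test
    public void businessKeyEqualityIgnoresNonKeyFields() {
        BankAccount a = new BankAccount("123-456", new BigDecimal("100.00"));
        BankAccount b = new BankAccount("123-456", new BigDecimal("999.99"));

        // equal according to the business key...
        assertEquals(a, b);

        // ...even though the balances differ: this must be checked explicitly
        assertFalse(a.getBalance().equals(b.getBalance()));
    }
}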

Smarter predicates

Once we've realized that equality is too generic a concept to be applied blindly, the next step is to try to apply the right context-dependent equality in each context. The obvious way to do this is to decompose equality into attribute-level checks: so instead of having

assertEquals(a, b);

we end up with something like


assertEquals(a.getName(), b.getName());
assertEquals(a.getSurname(), b.getSurname());
// … you get the point

Which is longer, less maintainable, more tightly coupled, and … ugly. Most of the time, anyway, a business-relevant notion of equality doesn't show up only in tests. I would argue that 99% of the time the same equality is hidden somewhere in your code in the form of some business rule. Why not have the same rule emerge and be used in the test layer as well?

A good way to do this is to rely on the Specification pattern, described by Eric Evans and Martin Fowler, which basically delegates to a dedicated object the job of asserting whether a given business rule applies to a domain object. Put another way, Specifications are a way to express predicates, or to verify expectations, on domain objects, in a way that could look like:

assertTrue(spec.hasJustBeenUpdated(a));
assertTrue(spec.isEligibleForRefund(b));
assertTrue(spec.sameCoreInformations(a, b));
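
A minimal sketch of such a Specification, assuming a hypothetical Customer class with the obvious getters, could look like this (each method encapsulates one business rule as a reusable predicate):

import java.util.Date;

public class CustomerSpecification {

    // true if both customers carry the same core information,
    // whatever the non-mandatory fields say
    public boolean sameCoreInformations(Customer a, Customer b) {
        return a.getName().equals(b.getName())
                && a.getSurname().equals(b.getSurname())
                && a.getTaxCode().equals(b.getTaxCode());
    }

    // true if the customer has been modified after its creation
    public boolean hasJustBeenUpdated(Customer c) {
        Date lastUpdate = c.getLastUpdate();
        return lastUpdate != null && lastUpdate.after(c.getCreationDate());
    }
}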

After thoroughly testing the Specification (something we should have done anyway, since it is a business implementation), we can reuse the same logic as an assertion mechanism in our test layer, making our code shorter and cleaner. Not all business-oriented assertions will be that useful in the test layer, but some normally are. As I said in the previous posts, one of the main goals was to be able to write a lot of tests, and to write them in few minutes. Being able to rely on a higher level of abstraction definitely helps.

Friday, May 09, 2008

I thought I had an original idea....

... instead I discovered that the name Soap Opera Test had already been invented :-( . The good news is that no one blamed me or sued me for that. I just discovered it by googling the term: I found a 2003 reference in this article by Brian Marick, who credits Hans Buwalda for inventing the term, as can be seen in this paper.

However, the meaning of the term is slightly different. The original meaning refers to a comprehensive test strategy, while I focused on the antipattern perspective. Maybe the emergence of the Soap Opera Test term is an antipattern itself...