Showing posts with label tdd. Show all posts

Monday, June 16, 2014

Not Dead Yet

Like it or not, David Heinemeier Hansson did it. By starting the debate around the “Is TDD Dead” topic, he forced the whole agile community to re-think many issues that were taken for granted. You may like his opinion or not; you may prefer to choose his side, Kent Beck’s side, or the more radical one of Robert C. Martin. Or you may appreciate the way Martin Fowler proposed to sort out the issue, in a way that is a great example of “this is how we discuss things in the agile community”.

I personally liked the fact that many collateral discussions spread out from the topic; the best ones I had were with Francesco Fullone and Matteo Vaccari. Here are some of the thoughts they helped shape.

Who’s paying for what

I remember the old argument that 90% of the time spent on code is spent on maintenance, so it does make a lot of sense to write software that’s easier to read and maintain. This is what I normally do, or, better, this is what I relentlessly keep trying to do.

But the more I look around the more I realise that this statement leads to a quite naive approach. This is not how most developers are working right now.

Let me explain that.

It’s not about the code

Given 90% of time spent on code is in fact maintenance, it makes a lot of sense to try to improve this 90% instead of the 10% spent writing new stuff. Unless…

…unless we stop focusing on code and start looking at the human beings surrounding it. Some call them developers, but I am really more concerned about the human being, not the role.

Humans move, evolve, grow, switch to another team or another company. Humans will be far away the moment evolution is needed on a given piece of code. They won’t be there to take the praise for well-written maintainable code, just as they won’t be there to take the blame for horrible legacy code.

Even worse: once they’ve left, they might be blamed also for supposedly well-written code, if the new developer who’s now taking care of their beloved source code has a different programming style. Or simply doesn’t like their indentation style, or what was intended to be good coding practice at the time the code was written.

If I don’t plan to stay around a piece of software for a long time, writing maintainable code is dangerously close to an exercise in style. Many practices are said to repay themselves in the long term, but no one stays around long enough to reap the benefit.

Being short-term sustainable

Yep. In the long term.

That’s the thing I really don’t like. It sounded reasonable the first time I heard it, but now it has the sinister sound of I have no evidence to support it, just faith.

And in a world where the software development workforce is made up not only of internal developers, but also of consultants, freelancers, contractors, offshore developers and part-time hackers (just to name a few), the implicit assumption that

if you don’t code right, technical debt will bite your back 

is flawed.

Well, technical debt is a bad thing. It’s probably worse than we think. Companies deliver horrible services due to technical debt; they struggle, collapse and ultimately sink because of it.

But well… who cares! Big companies will survive anyway. For small companies… it’s just Darwinian selection. As a developer, I’ll move somewhere else soon (disclaimer: I have a bias against hiring developers who previously worked in companies that sank).

So, I guess the real story with technical debt should be rephrased like

if you don’t code right, technical debt will bite somebody’s back

meaning …not necessarily yours. Call me cynical, but by the time your crappy code unleashes its sinister powers you’ll probably be in another company, complaining about a different yet stinky legacy codebase. Time for a soundtrack: try this.

A little tragedy of the commons

If code stays around longer than the expected lifespan of developers, there’s not much we can do to prevent technical debt from flourishing. It’s just another version of the well-known tragedy of the commons, which economists know very well: people acting in their own interest will ultimately harm a greater good.

A system view

Thinking of a codebase as part of a larger system, if independent agents aren’t there to stay, they won’t improve the system. I remember my days as a university student. I hated it. I probably hated every single exam: the way it was led, the way it was taught, and the way students were evaluated. But the thing I hated most was that the whole system was hopeless: nobody liked it, but nobody really put any effort into changing it. Students were not there to stay; the winning strategy was to keep your mouth shut and pass the exam. It would be somebody else’s business, not ours.

Does it sound familiar?

If the time needed to be rewarded for an effort is longer than the expected time in the system, well …who cares?

Enter the Boy Scout Rule

Interestingly, Uncle Bob proposed a solution for that: the so-called Boy Scout Rule, which goes as follows:

always leave the campground cleaner than you’ve found it

I love it, for many reasons. It creates a sense of ethics and belonging. It provides a little quick reward: “I am doing something good here”. But the most interesting thing is that it turns a rational argument into an ethical and moral one. Boy Scouts do not rationally believe that every forest will be cleaned up if everybody starts behaving the way they do. But the feeling of doing something good and right is a good one. And morals - together with the fear that Uncle Bob in person will one day see my code and throw me into the flames of hell as a consequence - will probably do a better job.

A higher duty

Who should care for the system? Who should ‘enforce’ (I hate this term) the Boy Scout Rule? It’s pretty clear that this is the job of somebody who’s going to stay around for a longer time. Somebody who cares about the ecosystem and sees the whole, or at least tries to. Be it a CTO, a Technical Lead or the CEO, depending on your company.

For example, working towards a sustainable work environment with as little turnover as possible might be a really good long-term strategy. If developers feel the workplace is their own, continuous improvement policies might flourish, and people might stay around long enough to see them working, creating a virtuous cycle.

No hope in the short term?

Here we are. Now you’re probably thinking that - since the battle for maintainable code is very hard to win - I am depressed enough to give up and let the barbarian hordes commit whatever they want.

Not yet, guys.

The thing is, good practices like TDD and continuous refactoring should repay themselves in the short term too. And they do… if you know them well enough. Which means you’re correctly accounting for costs and benefits, including the cost of learning TDD, and keeping it distinct from the cost of doing TDD.

Personally, I would be lost without TDD. But I am a different beast: despite all of my efforts, coding is not my primary activity: booking hotels and flights is. So TDD is a way to keep a codebase manageable even in scenarios with no continuous work on a project. I guess this goes for many open source projects too.

But the most interesting thing is that the best developers I know are all doing TDD. Not for religion, but because they get benefits in return. Not potential benefits, just-in-case style. Real benefits, in the short term.

Safety, confidence and speed.

Very good developers can also recognise different needs in different contexts. There may be cases where TDD is overkill or brings little value: if all you need right now is to show a prototype, show a damn Rails prototype right now! It’s not a religion; it’s a continuous evaluation of return on investment and of the available options. Which means you can also be wrong: you may be too conservative and pay insurance for a disaster that never happens, or take some risks and end up smashing your face against a wall with your colleagues staring at you with an I-told-you-so look on their faces. That’s life, dude.

But in the land of continuous choices, being able to code in only one way means being weak. The only safe spot is being so good at TDD that you know when not to use it. Aren’t you there yet? Good luck. There are still people who go to a Japanese restaurant and ask for a fork and spoon. But you don’t want to be that one.

So please, stop raising arguments like “Yes, but if one day somebody wants to change the codebase”, because I won’t buy it. If you can’t provide short-term benefits for TDD and refactoring, then you probably don’t know them well enough yet. And at the same time, I don’t want to be the one trading a sure expense for a potential gain in the distant future.

Monday, March 23, 2009

The mechanics of continuous integration

I was reading this nice post about Continuous Integration on Distilled Brilliance, by John Feminella (thanks to Marco Abis for pointing me to it, via twitter), and I have to say I wholeheartedly agree: setting up a dedicated CI machine is absolutely pointless if mechanics and discipline are not addressed.
  • Unless the teams move to a smaller increment policy, some of the advantages of CI won't be appreciated.
  • To make consistent progress with small increments, TDD is your best friend.
  • If the team is using a CI server, but team members don't quickly fix the build when it breaks, then you're probably lost (I know this is rather obvious… but…).
  • If the team is using a CI server but nobody checks the build status before updating their local version of the code, you're probably lost again.
  • The team should be working on the trunk, and not on separate branches.
As John pointed out, it's much more a matter of discipline than of practice. And discipline takes time and effort. In this case it takes trust and coordination, the same kind you might find in a crowded bar at rush hour, where many bartenders communicate quickly and find things exactly where they should be, by adhering to team standards (such as "the small dish calls for an espresso" while "the large one calls for a cappuccino").

It takes time to get there, but once you're there, the benefits are great.

Friday, May 23, 2008

Cleaning up our test code – Part 2

Assuming that we've created our test objects in a few moves (please read the previous post about it), the focus now switches to the way we use test code to assert the correctness of production code. The JUnit assertion family relies heavily on equals-based assertions. Unfortunately, the equals() method is far from being used consistently, so equals-based testing has some dangers we need to be aware of.

The equals() contract

Talking about the equals() method, there is a general behavioral contract, defined in the Java specification, which is used heavily by the Collections classes to retrieve objects from the different container classes. As every good Java developer knows, overriding equals() requires us to adhere to that implicit behavioral contract, and also to override hashCode() to ensure consistent behavior across container types. So, to effectively test our domain objects, we need to override both methods. So far so good.
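As a minimal sketch, a contract-respecting pair might look like this (the Person class and its fields are hypothetical, made up for illustration): equals() and hashCode() are overridden together, on the same fields.

```java
import java.util.Objects;

// Hypothetical domain object: equals() and hashCode() are overridden
// together, on the same fields, so hash-based containers behave consistently.
class Person {
    private final String name;
    private final String surname;

    Person(String name, String surname) {
        this.name = name;
        this.surname = surname;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;               // reflexive
        if (!(o instanceof Person)) return false; // also rules out null
        Person other = (Person) o;
        return Objects.equals(name, other.name)
            && Objects.equals(surname, other.surname);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, surname); // same fields as equals()
    }
}
```

Two Person instances with the same fields now behave as equal keys in a HashMap or HashSet, which is exactly what the contract demands.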

There are also a few convenient ways to do this: Eclipse allows you to generate equals() and hashCode() from a given set of attributes. The resulting code is quite good, but it's a grey area in your source code in terms of readability. The Jakarta Commons implementation is less "automatic" but gives the developer better control over the final result.

Enter a new player

If you're using Hibernate to persist your domain objects, you'll probably know that this requires some attention to the way equals() and hashCode() are defined in your application. This is primarily tied to the Hibernate persistence lifecycle (which generally populates id fields upon write operations) and to lazy loading (some fields are loaded only if they're explicitly accessed inside a Hibernate session). The Hibernate recommendation is to define equals() and hashCode() according to equality of the so-called business key, which can roughly be mapped to "a unique subset of required non-lazy fields, excluding the id". Id-based equality should be managed only by Hibernate, while business operations should rely on an equals() method based on the business key. To purists, this sounds like an undesirable implicit dependency on the Hibernate framework (your POJOs are still POJOs, but not exactly the same POJOs you would have had without Hibernate).
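A minimal sketch of what business-key equality can look like, assuming a hypothetical BankAccount entity whose business key is its account number (the class, its fields and the setId() hook are all mine, for illustration only):

```java
import java.util.Objects;

// Sketch of business-key equality: the database id (populated by the
// persistence layer on save) is deliberately excluded from equals() and
// hashCode(); the hypothetical business key is the account number.
class BankAccount {
    private Long id;                    // assigned by Hibernate, never compared
    private final String accountNumber; // required, non-lazy: the business key
    private double balance;             // mutable state, not part of identity

    BankAccount(String accountNumber) {
        this.accountNumber = accountNumber;
    }

    void setId(Long id) { this.id = id; }   // simulates what the ORM does on save
    void deposit(double amount) { balance += amount; }
    double balance() { return balance; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof BankAccount)) return false;
        return Objects.equals(accountNumber, ((BankAccount) o).accountNumber);
    }

    @Override
    public int hashCode() {
        return Objects.hash(accountNumber);
    }
}
```

This way a saved instance and a detached copy with the same account number stay equal, whether or not the id has been populated yet.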

Equality as a business-dependent concept

So far, we have two separate equality implementations: id-based equality (which should be used only by Hibernate, behind the scenes) and business-key equality, which will be used in our business code and will be used implicitly if we use containers from the Collections framework. What should we use in testing? Unfortunately, there is no one-size-fits-all answer: the choice depends heavily on what we are testing and on the precise desired behavior of the application. If we are adding information to some non-mandatory field, then simple equality won't check it. If we're changing the value of a mandatory field, and want to check that this doesn't trigger the creation of a new id, we need to check that field explicitly.

Often, applications with a nontrivial domain model can't rely on a single notion of equality (are two BankAccount instances equal if they have different balances?). This is more or less clear during analysis, but the presence of an assertEquals() method in JUnit makes blindly using equals() so tempting…

Smarter predicates

Once we've realized that equality is too generic to be applied blindly, the following step is to apply the right context-dependent equality in the appropriate context. The obvious way to do this is to decompose equality into attribute-level checks: so instead of having

assertEquals(a, b);

we end up with something like


assertEquals(a.getName(), b.getName());
assertEquals(a.getSurname(), b.getSurname());
// … you get the point

Which is longer, less maintainable, more coupled, and… ugly. Most of the time, anyway, a business-relevant notion of equality doesn't show up only in tests. I would argue that 99% of the time the same equality is hidden somewhere in your code in the form of some business rule. Why not have the same rule emerge and be used in the test layer as well?

A good way to do this is to rely on the Specification pattern, described by Eric Evans and Martin Fowler, which basically delegates to a dedicated object the job of asserting that a given business rule applies to a domain object. Put another way, Specifications are a way to express predicates, or to verify expectations on domain objects, in a way that could look like:

assertTrue(spec.hasJustBeenUpdated(a));
assertTrue(spec.isEligibleForRefund(b));
assertTrue(spec.sameCoreInformations(a, b));

After thoroughly testing the Specification (something we should have done anyway, since it is a business implementation), we can reuse the same logic as an assertion mechanism in our test layer, making our code shorter and cleaner. Not all business-oriented assertions will be that useful in the test layer, but some normally are. As I said in the previous posts, one of the main goals was to be able to write a lot of tests, and to write them in a few minutes. Being able to rely on a higher level of abstraction definitely helps.
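A minimal sketch of such a Specification, using a hypothetical Customer class where the nickname is ancillary information excluded from the core (both classes are invented here for illustration):

```java
// Hypothetical Specification: a business rule extracted into its own object,
// reusable in production code and as an assertion helper in the test layer.
class Customer {
    final String name;
    final String surname;
    final String nickname; // ancillary information, outside the core

    Customer(String name, String surname, String nickname) {
        this.name = name;
        this.surname = surname;
        this.nickname = nickname;
    }
}

class CustomerSpecification {
    // A context-dependent equality: "same core informations" deliberately
    // ignores ancillary fields such as the nickname.
    boolean sameCoreInformations(Customer a, Customer b) {
        return a.name.equals(b.name) && a.surname.equals(b.surname);
    }
}
```

Once the rule lives in its own class, production code and test code exercise exactly the same definition of "same core informations", instead of two drifting copies.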

Thursday, April 24, 2008

Cleaning up our test code

In the last posts about testing in a TDD fashion, I tried to dig into the reasons why many developers tend to write Soap Opera tests, which end up being less effective and maintainable than they should be. As I said earlier, tests share a common logical structure; I stuck to this one:

  1. Set up
  2. Declare expected results
  3. Exercise the unit under test
  4. Get the actual result
  5. Assert that the actual results match the expected results

In the popular essay Mocks Aren't Stubs, Martin Fowler uses a differently grained structure (setup, exercise, verify, teardown), but I'll stick with the original structure, at least for now.

Since "refactor" is the weakest step of the TDD mantra, I tend to keep a slightly different approach, trying to think about my tests in the cleanest possible way. There is normally not much to do with step three - which is often no more than a single method call - but often a lot can be done to improve all the other phases.

Setting up the test

JUnit structurally imposes a separation between the common set up shared by a family of tests and the specific set up or initialization needed for a single test to run. The former is placed in the setUp() method, while the latter generally sits at the beginning of our test method. A common situation is to place here the definition of all the domain objects needed to perform the operation to be tested. This is also a good place to check whether your application has been properly designed: creating objects for testing purposes should be a straightforward activity. As a rule of thumb, creating the necessary objects shouldn't take more than three or four lines of code. Does it sound thin? Let me explain.

Objects should ideally be created in one shot. In other words, you should have robust constructors available, so you can create domain objects without having to bother with setter methods. This might not be a completely viable option if you have a lot of optional attributes in your entities, but those probably shouldn't be in your constructors anyway. You are definitely luckier if your architecture already has some Factories in place (such as the ones prescribed by DDD, by the way).

Complex object structures should be available off-the-shelf: if I need to test entity A, associated with B and a specific C instance, and this is a boundary condition for my application, I want this combination to be readily available for any single test that pops up in my brainless mind. Ok, you can achieve a similar result with cut-&-paste, but… please… (ok, the official reason is that you'd end up slowing test development and increasing unnecessary coupling). An off-the-shelf test object approach fits particularly well with the agile style of personalizing users and typical entities: if I am developing a banking system where Fred is my average user with a bank account and a VISA, while Randolph and Mortimer are very rich people with many accounts, investments and so on, I want my test framework to offer something like createFred(), createRandolph() or createMortimer(), to be used in many more short tests. Such convenience methods are particularly useful when business functions or business objects are complex, and that complexity ends up preventing people from writing the tests they should write.
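A sketch of such an off-the-shelf factory; the personas are the ones above, while the Client class and the account names are invented here for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical test-data factory: each method wires up a whole persona in one
// call, so no test ever repeats pages of setup code.
class Client {
    final String name;
    final List<String> accounts = new ArrayList<>();

    Client(String name) { this.name = name; }
}

class TestClients {
    // Fred: the average user, one bank account and a VISA.
    static Client createFred() {
        Client fred = new Client("Fred");
        fred.accounts.add("checking");
        fred.accounts.add("visa");
        return fred;
    }

    // Mortimer: a very rich user with many accounts and investments.
    static Client createMortimer() {
        Client mortimer = new Client("Mortimer");
        mortimer.accounts.add("checking");
        mortimer.accounts.add("savings");
        mortimer.accounts.add("bonds");
        mortimer.accounts.add("stocks");
        return mortimer;
    }
}
```

Any test that needs "an average user" or "a rich user" now starts from one line, and if Fred's definition changes, it changes in exactly one place.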

The worst-case scenario happens when you have only the empty JavaBean constructor and a plethora of setter methods. Setting up complex objects will take pages, and that code is a perfect breeding ground for small bugs in the test definition. In addition, I will hate you :-). In general, test code can be just as buggy as production code, so writing the shortest possible test sounds like good advice, from both a robustness and a readability point of view. Creating objects for testing greatly benefits from the presence of dedicated Factories, and this should be taken into account when designing the application. Creating objects should be easy, because we want to do it over and over and over.

In Java, Spring helps a lot in managing the creation of other types of objects, such as Services, Controllers, Managers or DAOs. After all, Spring is like a "pervasive factory" that takes care of object setup and initialization. Typically, services are Spring-managed while entities are not, so we have to deal with entities ourselves. If Factories are not available, I often end up writing factories for test-specific purposes; depending on the level of control I have over the application, they often make for a refactoring of production code as well. If Factories are already in place, we can subclass or wrap them with a test-layer factory class that provides the aforementioned convenience methods.


Tuesday, March 25, 2008

Why do test antipatterns emerge?

In the previous post I presented an example of what I call the Soap Opera Test antipattern, and some of its possible side effects, like having test code implicitly coupled to the application code. The reasons for this post arose from a discussion which is still going on in the Bologna XP mailing list, reinforced by this post by Jason Gorman. Of course, every methodology works perfectly well… in theory. But practice with testing systems leaves us with a bunch of challenging issues when it is applied (more or less blindly) to real-world situations.

So why do we end up having Soap Opera tests in our code? I think one reason is rooted in the heart of the TDD mantra "Red, Green, Refactor". Here's why:

  1. Red. You want to add a new requirement, and you do so by adding the corresponding test. You're done when you've added the test and running it results in a red bar.
  2. Green. You get to the green bar as quickly as possible. Hacks are allowed to get to green, because straying too far from green makes you dive too deep, with no idea of what it will take to get back. You're done when you have the green bar again in your xUnit test suite.
  3. Refactor. This is a green-to-green transition that allows you to clean up the code, remove duplication, and make the code look better than it did in step 2.

Step 3 looks a little weaker than the others, for a few reasons:

  • It's the third step. If you're time-boxed, this is where you're gonna cut, by telling your boss "done" even if you feel that something's still missing.
  • The termination condition is less defined than in steps 1 and 2: "green" is a lot less disputable than "clean". To declare step 3 over you have to satisfy your personal definition of code beauty, assuming you have one. Moreover, refactoring goals are often personal: the TDD book suggests writing them on paper and keeping them for the day. This means your refactoring goals are not shared with the team. This is not a mandatory approach; for example, I am the kind of guy who normally starts polluting the bug tracking system with refactoring suggestions. But I also know that very few of them will actually make it to production code (unless I am supremely in charge of the project…). Anyway, I think that most of the time refactoring notes are too trivial to be shared on the bug tracking system. The best way to deal with them is to have them fixed before they need to become reminders.
  • It's a matter of culture. If you're doing TDD but lack some crucial OOP skill, you're in danger of writing sloppy tests. There's a lot of good OO in a framework like JUnit, and its designers made it good enough that the OO part is well hidden behind the scenes. But this does not mean that developers should code like Neanderthals when it comes to writing tests.

Putting it all together, the result is often test code which is less effective than it should be.



Friday, March 14, 2008

The soap opera test antipattern

If you are coming from a romantic programmer attitude, or simply didn't care about testing your code, then every single line of test code is valuable and adds some stability to your system.

After a while, anyway, the testing code mass can increase significantly and become problematic if not correctly managed. I pointed you to the Coplien vs Martin video in my previous post. Now, I won't claim that I've found the solution to the issue, but some thoughts on the topic might be worth sharing.

Starting to test

When embracing TDD or test-first, or - less ambitiously - when starting to use xUnit frameworks for testing, you simply have to start somewhere. You choose the target class or component, define the test goal and code your test, using assertions to check the result. If the light is green the code is fine; if it's red… well, you have a problem. You solve the problem, refactor the solution to make it better in a green-to-green transition, then move on to the next feature, or the next test (which will be the same thing, if you are a TDD purist).

Every test adds stability and confidence to your code base, so it should be a good thing. Unfortunately, when the test code mass reaches a certain weight it starts making refactoring harder, because it looks like extra code to be affected by a refactoring process, making refactoring estimates more pessimistic and the whole application less flexible.

Why does this happen? I suspect testing skills tend to be a little underestimated. JUnit examples are pretty simple, and some urban legends (like "JUnit is only for unit tests") are misleading. Testing somehow is a lot better than not testing at all. Put it all together in a large-scale project and you're stuck.

The soap opera test antipattern

The most typical symptom of this situation is what I call the soap-opera test: a test that looks like an endless script.

@Test
public void testSomething() {
// create object A

// do something with this A

// assert something about A

// do something else with A

// assert something about A

// create object B

// assert something about B

// do something with B

// assert something about B

// do something with B and A

// assert something about B and A

}

The main reason why I named this one "soap opera" is straightforward: there is no clear plot, there are many characters whose roles are unclear, things happen slowly, conversations are filled with a lot of "do you really mean what you said?", and there is no defined end. The second reason is that I always dreamed of naming a pattern, or an antipattern… somehow.

Even if I was too lazy (or sensible) to put real code in there, some issues are pretty evident:

  • The test looks like a long script;
  • if you're lucky, the purpose of the test is in the method name or in the javadoc; the assertions are too many to keep the test readable, or to make out the purpose by simply reading the code;
  • I bet a beer that 90% of the lines in a test like this are simply cut&paste from another test in the same class (if this is the only test you have in your system, the bet is not valid);
  • The test can get red for too many reasons;
  • It really looks like the inertial test code mass mentioned before.

What's the point of "looks like a long script"? My opinion is simply that it doesn't have to look like that! A good test has a well-defined structure, which is:

  1. Set up
  2. Declare the expected results
  3. Exercise the unit under test
  4. Get the actual results
  5. Assert that the actual results match the expected results

I grabbed the list from here; the original article talks about many JUnit antipatterns (but calls the soap opera antipattern "the overly complex test", which is a lot less glamorous). Setting up can't be accomplished completely by the setUp() method, because some preparation is obviously test-specific. Steps 3 and 4 often overlap, especially if you're testing a function. But the whole point is that this definitely is a structure, while a script is something less formed.
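A small sketch of the five steps applied in order, on a hypothetical ShoppingCart; it is written as a plain method rather than a JUnit test so the structure is the only thing on display:

```java
// The five steps in order, on a hypothetical ShoppingCart (both classes are
// made up for illustration).
class ShoppingCart {
    private int total = 0;

    void add(int price) { total += price; }
    int total() { return total; }
}

class ShoppingCartTest {
    static void testTotalSumsItemPrices() {
        ShoppingCart cart = new ShoppingCart(); // 1. set up
        int expected = 30;                      // 2. declare expected result
        cart.add(10);                           // 3. exercise the unit
        cart.add(20);
        int actual = cart.total();              // 4. get the actual result
        if (actual != expected)                 // 5. assert they match
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```

One plot, one character, one assertion: the exact opposite of the soap opera above.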

Multiplying the asserts has a thrilling effect: when something goes wrong, all of your tests start turning red. In theory a test should test one and only one feature. There are obviously dependent features, but a well-formed test suite will help you a lot in problem determination by pointing straight to the root cause. If the testing code for a feature is duplicated all over the test suite… you just get a lot of red lights and no hint about where the problem is.

Testing against implicit interfaces

Even if you clean up your testing code and refactor to a one feature/one test situation, you'll still experience some inertia due to testing code. This definitely smells: we were told that unit tests are supposed to help refactoring, allowing us to change the implementation while controlling behavior at the interface. The problem is that we often do this only in step 3 of the above list, while we depend on the application's implicit interfaces when creating test objects, and sometimes also when asserting the correctness of the result. Creating a test object might be a nontrivial process - especially if the application does not provide you with a standard way of doing it, like Factories or the like - and it tends to be repeated all over the testing code. If you're depending on a convention, changing it will probably have a heavier impact.

In general, when writing a test, step 3 is very short: basically just a line of code, depending on the interface you've chosen. Dependencies and coupling sneak in from test preparation and test verification, and you've got to keep them under control to avoid getting stuck by your test code base.


Wednesday, March 12, 2008

TDD vs Architecture debate

Some days ago, I was watching this video on InfoQ, where James Coplien and Robert C. Martin discuss some undesired side effects of TDD, particularly on the architecture side. One of the key points was that testing code increases the overall weight of the code base, making it harder to eventually refactor the architecture.

Another interesting issue presented was that TDD doesn't necessarily enforce testing all the possible boundary conditions, but often ends up in a sort of heuristic testing, which is less effective than testing based on a design-by-contract assumption.

Honestly, the TDD book puts a lot of emphasis on removing duplication, including between production and testing code, but I have the impression that this part of the message is often lost on test writers. I've got some ruminations on the topic that will probably make up enough stuff for some more posts in the following days.



Monday, October 23, 2006

Worst marketing ever?


Quite often, while introducing quality assurance, testing and TDD to our customers or colleagues, I get some weird responses… something like: “Yes, we tried JUnit, but it can only do unit tests, and we need integration tests as well”, or “we started using JUnit, but we dropped it, ‘cause we didn’t have the time to write a test for every single method in every class of the system”.
Hmmm, clearly somebody got it quite wrong, ‘cause JUnit is a testing framework, and it just takes the right entry point (a façade, a web service, a main method) to have it run integration tests as well. Half of the people are misled by the name (but I never heard anybody saying that JBoss is only for smuggling applications…), the other half by the online documentation, which does nothing to prevent new readers from getting it wrong.

For many, the starting point is simply no tests at all. So the best possible choice is to start implementing the most useful tests: those that catch the highest number of errors. This means putting the interception point at the surface of the application (be it a presentation tier or a public API); starting small, maybe testing all the getters too, provides very little value and doesn’t focus on the way components interact to provide the application behaviour.

Unit tests provide value too, because they help in the problem determination phase, but when you have only a few resources for testing (as is sadly often the case) you want them to check the overall application before shipping, and not just some random component.
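As a sketch, a check driven through the application surface might look like this; the OrderFacade and its behavior are entirely hypothetical, and plain methods stand in for JUnit test methods:

```java
// Hypothetical entry point at the application surface: the facade is imagined
// to orchestrate validation, pricing and persistence behind one call, so a
// single test exercises how the components collaborate.
class OrderFacade {
    String placeOrder(String item, int quantity) {
        if (quantity <= 0) return "REJECTED";        // validation step
        return "CONFIRMED:" + item + ":" + quantity; // happy path through the stack
    }
}

class OrderIntegrationTest {
    static void testOrderGoesThroughTheWholeStack() {
        OrderFacade facade = new OrderFacade();
        String result = facade.placeOrder("book", 2);
        if (!result.startsWith("CONFIRMED")) throw new AssertionError(result);
    }

    static void testInvalidOrderIsRejected() {
        if (!new OrderFacade().placeOrder("book", 0).equals("REJECTED"))
            throw new AssertionError("zero quantity must be rejected");
    }
}
```

Same framework, different entry point: nothing here is unit-sized, and JUnit doesn't mind at all.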


Sunday, June 11, 2006

Why Object Oriented Designers should rule the world


I was drowning in the abyss of Italian bureaucracy over the last few days, going from one office to another to fetch the papers needed to ask for another paper. A notary public asked me to get papers from the local registry office. The data was on paper, so the officer wrote an official document with a typing machine (a mechanical one). Of course there was something wrong in it, but since the error was in the reference to the county of the notary who authored the original document, and that notary was the same one asking me for the document (in other words, he asked me to get something he wrote), I hope it’s a minor mistake…

The second paper (the same one I already had, but I needed a fresher timestamp on it) forced me to go to another municipal office, where they told me:
  • I had to go to the land registry office (is cadastre the right word?) asking for some maps

  • I had to pay €100,00 at a post office

  • I had to buy two special stamps worth €14,62 each
I searched the internet yellow pages for the land registry office address, but got the wrong one. I phoned a yellow pages service and got an almost right one. Once inside, the place looked like some kind of transit station on the way to hell: dozens of people camping in a large hall, waiting for their number. Luckily I had something simple to do and my queue was relatively fast. Of course I had to pay €35,00 for the maps I asked for, just by reading what was needed to complete the request for the other office (are you feeling lost? Me too).

Then I went to the post office to pay the €100,00 and buy the stamps. Obviously, there were no stamps of that value, so the lady started trying various combinations of different stamps (a variant of the knapsack problem) to reach the exact sum of €14,62. Welcome to the third millennium!
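As a side note for fellow geeks: the clerk’s stamp puzzle really is a small subset-sum problem – can some multiset of available stamp values total exactly €14,62? A brute-force sketch in Java, working in cents and with denominations invented for illustration:

```java
import java.util.*;

public class StampSum {
    // Returns a combination of stamps (values in cents, unlimited supply of each)
    // totalling exactly target, or null if no combination exists.
    static List<Integer> combine(int[] stamps, int target) {
        // from[v] = the stamp used last to reach value v (-1 = unreachable, 0 = start)
        int[] from = new int[target + 1];
        Arrays.fill(from, -1);
        from[0] = 0;
        for (int v = 1; v <= target; v++)
            for (int s : stamps)
                if (v >= s && from[v - s] != -1) { from[v] = s; break; }
        if (from[target] == -1) return null;
        List<Integer> out = new ArrayList<>();
        for (int v = target; v > 0; v -= from[v]) out.add(from[v]);
        return out;
    }

    public static void main(String[] args) {
        // prints one valid combination of stamps summing to 1462 cents
        System.out.println(combine(new int[]{520, 260, 62, 20}, 1462));
    }
}
```

The lady at the counter, of course, solved it by hand – which rather proves the point of this post.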

I then brought the maps back to the first office, where they told me most of them were not needed for my type of request (grrr), so they took just one. I am now waiting for a phone call from yet another officer telling me some papers are missing.

…So what?

Ok, I may be a bit of a strange guy, but I can’t help it: every time I go to a public office I start thinking about it in terms of OO modelling. It just doesn’t make sense to me that all of a transaction’s complexity is put on the user’s shoulders. It’s exactly the same problem you face with badly designed interfaces in server code.
A server class (as the name states) is supposed to serve multiple clients. A good design decision is to hide complexity behind the server class API, so that you can write simpler clients. Since clients are written in different contexts and at different times, you pay for complexity once, on the server side, and save every time somebody writes client software.
Bureaucracy normally does the opposite: to achieve thin processes on the server side (enabling eternal coffee breaks), officers move the burden of complexity onto the users, forcing citizens to locate external services and to pay in the silliest possible ways.
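A toy sketch of the contrast above, with every name (`RegistryService`, `issueCertificate` and friends) invented for illustration: the “server” absorbs the cross-office legwork, so every client interaction collapses to a single call.

```java
// The service pays the complexity cost once; each client stays a one-liner.
public class RegistryService {

    // Internally the service does all the legwork the citizen used to do.
    public String issueCertificate(String citizen) {
        String maps = fetchMapsFromLandRegistry(citizen); // no trip across town
        payFees(100);                                     // no post-office queue
        return stamp("certificate for " + citizen + " (" + maps + ")");
    }

    private String fetchMapsFromLandRegistry(String citizen) { return "maps:" + citizen; }
    private void payFees(int euros) { /* one internal payment, not a separate errand */ }
    private String stamp(String doc) { return "[stamped] " + doc; }

    public static void main(String[] args) {
        // the client-side "process interface" collapses to a single call
        // prints: [stamped] certificate for Alice (maps:Alice)
        System.out.println(new RegistryService().issueCertificate("Alice"));
    }
}
```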

The more I look at it, the less it looks like a way to save time at all: if a frequent connection has to be established between two separate offices, then routing it through the citizen (who 90% of the time is doing this for the first time) is the most inefficient way of all! Citizens have to be trained to go there and ask for this and that, a waste repeated on every request; and since citizens are not experts, they make more mistakes than trained officers would on a standard procedure, so the whole process ends up more error prone.

Can anybody do something about it? Unfortunately this is pretty hard, because it requires somebody with a pretty large scope, authority, and completeness of vision: a dictator? An emperor? An alien from a distant planet? Since crossing organizational borders and ruling the empty spaces in between is one of the most difficult management activities, I am pretty pessimistic about it.

Test Driven Bureaucracy

In the OO world, the best way to achieve a simple API is to develop in a test-first style. The developer first thinks from the client’s perspective (resulting in a low-complexity interface for the desired process – mostly a money-for-paper bargain), and then implements the service, keeping most of the complexity behind the server interface.
I just wonder how a similar approach would perform if applied to bureaucracy. I guess we’d get a lot of surprises (in Italy, whole institutions would turn out to be totally useless) by crossing a test-driven methodology with a strong OO-like role/responsibility analysis. As the picture shows, even if pessimistic, I haven’t lost all my hopes.
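For what it’s worth, here is a hedged sketch of that test-first flow in plain Java (all names invented): the check in `main` was written before `CertificateOffice` existed, so the API is shaped by the client’s wish – one call, money for paper – rather than by the office’s internal process.

```java
public class TestFirstSketch {

    // Step 2: the implementation, written to satisfy the check below.
    static class CertificateOffice {
        String request(String citizen, int euros) {
            if (euros < 100) throw new IllegalArgumentException("fee is EUR 100");
            return "certificate:" + citizen; // whatever legwork is needed stays in here
        }
    }

    // Step 1: the desired interaction, written first, from the client's perspective.
    public static void main(String[] args) {
        CertificateOffice office = new CertificateOffice();
        String paper = office.request("Alice", 100);
        if (!paper.equals("certificate:Alice"))
            throw new AssertionError("API does not match the client's wish");
        System.out.println("test-first sketch passed"); // prints: test-first sketch passed
    }
}
```

Writing the `main` first forces the single-call shape; any extra errand the office needs has to live behind `request`, not in front of the citizen.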



Tags: , ,

Thursday, March 30, 2006

Test Driven Development and the Romantic Programmer


Stressing test activities is part of my usual job as a consultant. Sometimes we can push all the way to the test-first practice recommended by XP and TDD; sometimes we just promote xUnit test cases to first-class project artifacts, meaning that an iteration is not complete unless there’s a working test in place to prove it. Testing differs from developing in many ways – honestly, you need a different mindset to be an effective tester – but after learning a couple of tricks, developers comply with the new methodology easily enough.

Even if you can achieve some success with heavy testing practices, you can’t leave the field unattended. What I have observed in many cases is that quite a few developers tend to shift back to the old way of working, even after recognizing that test-driven development was better. There is no rational explanation for this, except that the old way was more exciting. I know what they mean: in some ways it’s just gambling applied to software development. It’s the thrill of fixing a bug by changing a couple of lines of code and deploying it straight to production, or of assembling an integration build just the night before the final release. It’s a thousand miles away from being professional, but it’s the same feeling as the good old university days, or the first garage programming experience. Some people are just in search of this thrill – that’s why they like this job – and they probably get terribly bored if the system does nothing but grow smoothly, milestone after milestone, without a complete crash in the middle and a miracle hack recovery, of course…

Tags: , ,