Tuesday, January 30, 2007

The dark art of cheating software estimates

Most widely used techniques for estimating software projects are:
  • guess a number
  • ask a number and multiply it by pi
  • pick the release date, count the days backwards and multiply them by the available people
  • function point analysis
  • use case points analysis
  • ...
I personally prefer UCP, because it best fits our usual development process, where the use case list is also the starting point for the actual development phase.
In UCP the estimate is the result of two main factors: the application complexity (roughly measured via the number and complexity of the use cases and actors) and the environmental factors which, as everybody knows, heavily influence the outcome of the project.

The second reason I like the UCP methodology is that when I did retrospective analysis on finished projects, the results were pretty accurate. Which is obvious, if you think about it, because retrospective analysis is exactly how estimation methodologies were tuned. There are two important points so far:
  1. The UCP methodology is pretty accurate, if you correctly evaluate the starting factors
  2. You can still make mistakes, because factors might change (new use cases might be added during development, or environmental factors may change, or turn out to have been evaluated wrongly)
The outcome of the estimation process is a number, or a range of numbers, which can represent hours, days or months of development effort. It's a measure of the human time needed to construct the desired application. Here the real problems start: you pass the number to the management, they multiply it by the salaries and realize that the project will be too expensive. Put simply: you say time and they say money.
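As a concrete illustration, the core UCP arithmetic can be sketched in a few lines of Python. The complexity weights and the 20-hours-per-point ratio below are the classic ones from Karner's method; the sample counts and factor totals are made-up values for illustration, not numbers from any real project:

```python
# Classic UCP weights: use cases and actors are classified by complexity.
UC_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def use_case_points(use_cases, actors, tf_total, ef_total):
    """use_cases, actors: dicts mapping complexity class -> count.
    tf_total: sum of the weighted technical factor scores.
    ef_total: sum of the weighted environmental factor scores."""
    uucw = sum(UC_WEIGHTS[c] * n for c, n in use_cases.items())   # unadjusted use case weight
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())    # unadjusted actor weight
    tcf = 0.6 + 0.01 * tf_total    # Technical Complexity Factor
    ecf = 1.4 - 0.03 * ef_total    # Environmental Complexity Factor
    return (uucw + uaw) * tcf * ecf

# Hypothetical project: 14 use cases, 4 actors, middling factor totals.
ucp = use_case_points(
    use_cases={"simple": 4, "average": 8, "complex": 2},
    actors={"simple": 2, "average": 1, "complex": 1},
    tf_total=30,
    ef_total=20,
)
effort_hours = ucp * 20    # 20 hours/UCP is a commonly quoted productivity ratio
print(round(ucp, 1), round(effort_hours))    # prints: 98.6 1973
```

Note how the environmental factors enter with a negative coefficient: a favourable environment (high ef_total) shrinks the estimate, which is exactly the lever that gets silently "re-tuned" in the negotiations described below.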

Here comes the cheating
The official estimates will be close to reality, but reality is bad news... so what will you do? The problem is that the link between effort and price looks completely fixed before the project starts. During the project, anything might happen: you might be 2 months late, hire new people and train them, and so on. Each time, you add a random factor that invalidates your fixed ratio between worked hours and the overall price. Still, 99% of the time, when showing estimates to the management, you'll be asked to reduce them.

Sometimes there are business reasons for it. It's like buying a car: the salesman knows that the final price will be $50,000, but will tell you about $39,900 and the optional extras... Some other times it's like a Pavlovian reaction: time is a cost and must be compressed. As if that weren't sad enough, I've never heard a smart suggestion come out of this phase. Normally you get one of these three:

a) "Let's skip the analysis phase"
b) "Let's skip the design phase"
c) "Let's skip the test phase"

If you are still optimistic, I have to warn you that a) and b) almost always imply c). Or, put another way, c) is always implicit in such a situation.
The most annoying thing is that the same overall result could have been reached by talking only about money, just lowering the price. But everybody assumes that the other one is lying, or maybe the way the (presumed) cost reduction is achieved makes a difference for somebody.

But let's get back to our numbers. As I said before, the good news is that predictions are rather accurate; still, they might fail. One way is having a wrong count of use cases, meaning that more can appear along the way. But a new use case is a new feature (it's the $500 optional extra on our car), so it's not a problem, as long as the price varies accordingly, and the time does too. Often, what happens in the closed room is a bargain of money vs time, sort of "I'll pay you this extra money, but you'll have to deliver it all by the originally planned date...". Hmmm.
External factors are trickier, because they're harder to evaluate. Sometimes they're just mere assumptions, and it takes time to realize whether they're right or not. An example of a tricky one is "motivation": you can assume motivation is 3 on a 0 to 5 scale, because you simply don't know. Then it's hard to find the special moment in the project lifecycle when motivation drops to 2, triggering a recalculation of the estimates. You'll never have a boss saying "I noticed that the mood dropped in the development team, can you please update the estimates accordingly?".
So your initial assumptions are kept "locked", shielding the official numbers from the force of reality. But every time the estimates are kept safe, old, and untouched, you can assume that they're just a lie, and the distance from the truth will have to be filled somehow. The difference is that if the truth is exposed, people tend to behave accordingly; if the truth is swept under the carpet, everybody feels free to cheat a little bit more.

Wednesday, January 17, 2007

Designing to lower the TCO

I was reading this post from Steve Jones, and had mixed feelings: as I commented on his blog, I am trying hard not to be that type of architect. But forcing an organization to consider the Total Cost of Ownership for the whole project lifecycle is often a tough job.
Sometimes it is the organization itself that is badly shaped, maybe with separate budgets for development and production, so that managers have no incentive to save somebody else's budget. Sometimes the cost of ownership is perceived as "normal mess" and becomes alarming only when it's totally out of control, which can be ten times bigger than acceptable, or more, due to the "buffer effect" of the dedicated people.

Sometimes it is the overall project planning that plants the seeds for the "touch and go" architect. Time is allocated mainly before the whole development starts, so the architect can't see the developers in action. I mean, architecture is challenged by development, and by time too: developers might not behave as predicted and find different ways to solve coding problems, and evolving tools and frameworks could make the balance of forces that drove some design choices no longer valid (that's a good reason for architects to document the reasons behind their choices). It's often not a matter of being right or wrong, but rather a matter of seeing the whole picture, which is, obviously, much easier at the end. Following a project into its late stages clearly makes a better architect, but seeing everything possible from the early stages is what software architects are paid for.

There's some sort of analogy with the traditional drawbacks of the waterfall approach and the analyst role. Agile processes have put a lot of effort into introducing iterative release cycles, which are just a way to get real feedback as early as possible. Iterating on architecture, for some reason, seems to have a longer path, but I'd say it's probably the same problem, only with a different category of users.

Tuesday, January 09, 2007

One year of blogging

I started this blog just one year ago. Time for a quick review.
  • I didn't get rich by blogging. My AdSense account registers 9 US dollars, which makes $0.75 a month, and overall a beer in a fashionable pub (or two in a crappy one). Wow.
  • I am still blogging, even if in the last two months the rate of my posting dramatically decreased.
  • I had a ratio of one post per week, which is pretty good, considering how busy I am normally.
  • I haven't run out of ideas; instead, I get a lot of good hints and ideas from checking the blogosphere.
  • Some of my friends, colleagues, and customers read my blog. Sometimes, so does somebody I don't know. Not many, but it's more than nobody.
From a statistical point of view, more interesting than the published articles are the unpublished ones, and the reasons behind not publishing them.
  • A couple of them simply had no point. I thought I had something smart to say, I wrote it, and it didn't sound that smart.
  • Some posts were too personal. Publishing them would have meant shifting a "private" fact into the public, or maybe violating a non-disclosure agreement with the customer. As a consultant, the most interesting things you encounter are the ones you are not supposed to talk about.
  • Some of them kept bouncing around in my head, but I had no time to write them down; then I simply lost the right moment to talk about the topic, because it wasn't hot anymore.
  • I also thought about starting a different blog, in Italian, about software projects that simply don't work (there are plenty of them, especially in the public sector), but I thought it wouldn't be that wise to attack beasts of that size without having a good lawyer behind me.
I guess now I have a clearer idea of what blogging is all about. I am still wondering whether blogging in English, instead of Italian, was the right choice. I think it gives me more freedom and a bit less sarcasm, which is something I often abuse, and it generally makes me think about things from a different perspective, which is normally a good thing.

Ok, end of the plea. If anybody's reading this, I guess I can assume he or she is a reader, and so deserves my deepest thanks for being able to stand my endless whining (sounds a bit like Green Day's "Basket Case"...).


Sunday, January 07, 2007

How to become a communication paranoid

In the last week I found myself thinking and discussing about the most suitable container for different pieces of information. In one case it was a planning doc: was it better to have it shared in CVS or published on the wiki? A similar question arose about a to-do list for restructuring activities: keep sharing the same Excel list, or insert the items into a bug tracking system? Same question for a project glossary: wiki, Excel or a Word document?

One common choice factor among the different activities is ease of use. The shape of the container should make it easy for the users to feed in the needed information. One thing you should be aware of is that when people talk about "the users" they often mean themselves, or the people who provide the information, who should generally be a minority of the users, compared to the readers, who generally get the most benefit from the shared information. In this case, ease of use turns out to mean accessibility, and it's probably the primary factor to consider.

But what I am getting paranoid about lately is finding the optimal way to attach the necessary meta-information to the provided information. Context and collateral information might be in the e-mail with which I send a Word document as an attachment. But the user reads the mail, prints the doc, and shows the doc to somebody else, without some crucial extra info such as "this document is bullshit, the real one will be prepared next week". To avoid information abuse, I find myself using different tools just to ensure that information is handled in the proper way. So a common weakness of wikis, such as the not-so-easy printability of the contained information, becomes a plus when I don't want the information to be printed (because it's evolving too fast).

Clearly, meta-information can be attached in many ways, but sometimes implicit meta-information is a more efficient way to carry it.