- guess a number
- ask for a number and multiply it by pi
- pick the release date, count the days backwards and multiply them by the available people
- function point analysis
- use case points analysis
- ...
In UCP, the estimate is the product of two main factors: the application complexity (roughly measured via the number and complexity of the use cases and actors) and the environmental factors which - as everybody knows - heavily influence the outcome of the project.
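For the curious, the arithmetic behind the two factors can be sketched in a few lines. The weights and the hours-per-point productivity factor below are the commonly published UCP defaults (Karner's method), not values taken from this post; real teams calibrate them:

```python
# Minimal sketch of the standard Use Case Points (UCP) arithmetic,
# using the commonly published default weights. The 20 hours-per-point
# productivity factor is a textbook default, not a universal constant.

def ucp_estimate(simple_uc, avg_uc, complex_uc,
                 simple_actors, avg_actors, complex_actors,
                 tcf_score, ecf_score, hours_per_point=20):
    # Unadjusted Use Case Weight: use cases classified by complexity
    uucw = 5 * simple_uc + 10 * avg_uc + 15 * complex_uc
    # Unadjusted Actor Weight: actors classified by complexity
    uaw = 1 * simple_actors + 2 * avg_actors + 3 * complex_actors
    # Technical Complexity Factor and Environmental Complexity Factor:
    # tcf_score and ecf_score are the weighted sums of the factor ratings
    tcf = 0.6 + 0.01 * tcf_score
    ecf = 1.4 - 0.03 * ecf_score
    ucp = (uucw + uaw) * tcf * ecf
    return ucp * hours_per_point  # estimated effort in hours

# Invented example: 10 average use cases, 3 average actors,
# mid-range technical (50) and environmental (17) scores.
print(ucp_estimate(0, 10, 0, 0, 3, 0, tcf_score=50, ecf_score=17))
```

Note how the environmental score enters with a negative sign: a team scoring worse on the environmental factors inflates the whole estimate multiplicatively, which is exactly why getting those factors wrong matters.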
The second reason why I like the UCP methodology is that when I did retrospective analysis on finished projects, the results were pretty precise. Which is obvious, if you think about it, because retrospective analysis is exactly how estimation methodologies were tuned in the first place. There are two important points so far:
- The UCP methodology is pretty accurate, if you correctly evaluate the starting factors
- You can still make mistakes, because factors might change (new use cases might be added during development, or environmental factors may change, or turn out to be wrong)
Here comes the cheating
The official estimates will be close to reality, but reality is bad news... so what will you do? The problem is that the link between effort and price looks completely locked before the project starts. During the project, anything might happen: you might be two months late, hire new people and train them, and so on. Each time, you add a random factor that invalidates your fixed ratio between worked hours and the overall price. Still, what happens 99% of the time when showing estimates to the management is that you'll be asked to reduce them.
Sometimes there are business reasons for it. It's like buying a car: the salesman knows that the final price will be $50,000, but will talk to you about $39,900 and the optional extras... Some other times it's a Pavlovian reaction: time is a cost and must be compressed. As if that weren't sad enough, I've never heard a smart suggestion come out of this phase. Normally you get one of these three:
a)"Let's skip the analysis phase"
b)"Let's skip the design phase"
c)"Let's skip the test phase"
If you are still optimistic, I have to warn you that a) and b) almost always imply c). Or, put another way, c) is always implicit in such a situation.
The most annoying thing is that the same overall result could have been achieved by talking only about money - just lowering the price. But everybody assumes that the other party is lying, or maybe the way the (presumed) cost reduction is achieved makes a difference to somebody.
But let's get to our numbers. As I said before, the good news is that predictions are rather accurate; still, they might fail. One way is having a wrong count of use cases, meaning that more can appear along the way. But a new use case is a new feature (it's the $500 optional extra on our car), so it's not a problem as long as the price varies accordingly, and the time does too. Often, what happens behind closed doors is a bargain of money vs. time, something like "I'll pay you this extra money, but you'll have to deliver it all by the originally planned date..." Hmmm.
External factors are trickier, because they're harder to evaluate. Sometimes they're mere assumptions, and it takes time to realize whether they're right or not. An example of a tricky one is "motivation": you can assume motivation is 3 on a 0-to-5 scale, because you simply don't know. Then it's hard to find a special moment in the project lifecycle when motivation drops to 2, triggering a recalculation of the estimates. Have you ever had a boss saying "I noticed that the mood dropped in the development team, can you please update the estimates accordingly?" I haven't.
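To make the "motivation drops from 3 to 2" scenario concrete, here is a small sketch of how such a change would propagate through the standard UCP environmental factor formula (motivation carries weight 1.0 in the published factor list; the other weights and ratings below are invented for illustration):

```python
# Sketch: how a one-point drop in a single environmental factor
# moves a UCP estimate. ECF = 1.4 - 0.03 * sum(weight * rating);
# motivation has weight 1.0 in the published UCP factor list.
# All other numbers here are illustrative, not from the post.

def ecf(factor_scores):
    # factor_scores: list of (weight, rating on the 0-5 scale) pairs
    return 1.4 - 0.03 * sum(w * r for w, r in factor_scores)

base = [(1.5, 3), (0.5, 4), (1.0, 3)]   # ..., motivation rated 3
worse = [(1.5, 3), (0.5, 4), (1.0, 2)]  # same team, motivation at 2

pre_ecf_hours = 2000  # pretend the rest of the estimate is fixed
print(pre_ecf_hours * ecf(base))   # the official estimate
print(pre_ecf_hours * ecf(worse))  # what it should be recalculated to
```

Dropping motivation by a single point raises the effort by roughly 3% in this toy setup; the point of the post is precisely that this recalculation almost never happens, so the official number quietly drifts away from reality.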
So your initial assumptions are kept "locked", shielding the official numbers from the force of reality. But every time the estimates are kept safe, old, and untouched, you can assume that they're just a lie, and the distance from the truth will have to be filled somehow. The difference is that if the truth is exposed, people tend to behave accordingly; if the truth is swept under the carpet, everybody feels free to cheat a little bit more.