Showing posts with label project management. Show all posts

Monday, June 16, 2014

Not Dead Yet

Like it or not, David Heinemeier Hansson did it. By starting the debate around the “Is TDD Dead” topic, he forced the whole agile community to rethink many issues that were taken for granted. You may like his opinion or not, you may prefer to choose his side, Kent Beck’s, or the more radical one of Robert C. Martin. Or you may also appreciate the way Martin Fowler proposed to sort out the issue, in a way that is a great example of “this is how we discuss things in the agile community”.

I personally liked the fact that many collateral discussions spread out on the topic; the best ones I had were with Francesco Fullone and Matteo Vaccari. Here are some of the thoughts they helped shape.

Who’s paying for what

I remember the old argument that 90% of the time spent on code is spent on maintenance, so it makes a lot of sense to write software that’s easier to read and maintain. This is what I normally do, or, better, what I relentlessly keep trying to do.

But the more I look around, the more I realise that this statement leads to a rather naive approach. This is not how most developers are working right now.

Let me explain that.

It’s not about the code

Given that 90% of the time spent on code is in fact maintenance, it makes a lot of sense to try to improve that 90% instead of the 10% spent writing new stuff. Unless…

…unless we stop focusing on code and start looking at the human beings surrounding it. Some call them developers, but I am really more concerned about the human being, not the role.

Humans move, evolve, grow, switch to another team and another company. Humans will be far away by the time a given piece of code needs to evolve. They won’t be there to take the praise for well-written maintainable code, just as they won’t be there to take the blame for horrible legacy code.

Even worse: once they’ve left, they might be blamed even for supposedly well-written code, if the new developer now taking care of their beloved source code has a different programming style. Or simply doesn’t like their indentation style, or whatever was considered good coding practice at the time the code was written.

If I don’t plan to stay around a piece of software for a long time, writing maintainable code is dangerously close to an exercise in style. Many practices are said to repay themselves in the long term, but no one stays around long enough to reap the benefit.

Being short-term sustainable

Yep. In the long term.

That’s the thing I really don’t like. It sounded reasonable the first time I heard it, but now it has the sinister sound of I have no evidence to support it, just faith.

And in a world where the software development workforce is made up not only of internal developers, but also consultants, freelancers, contractors, offshore developers and part-time hackers (just to name a few), the implicit assumption that

if you don’t code right, technical debt will bite your back 

is flawed.

Well, technical debt is a bad thing. It’s probably worse than we think. Companies deliver horrible services due to technical debt; they struggle, collapse and ultimately sink because of technical debt.

But well… who cares! Big companies will survive anyway. For small companies… it’s just Darwinian selection. As a developer, I’ll move somewhere else soon (disclaimer: I have a bias against hiring developers who previously worked for companies that sank).

So, I guess the real story with technical debt should be rephrased as

if you don’t code right, technical debt will bite somebody’s back

meaning …not necessarily yours. Call me cynical, but by the time your crappy code unleashes its sinister powers you’ll probably be in another company, complaining about a different yet equally stinky legacy codebase. Time for a soundtrack: try this.

A little tragedy of the commons

If code stays around longer than the expected lifespan of developers, there’s not much we can do to prevent technical debt from flourishing. It’s just another version of the well-known tragedy of the commons that economists know very well: people acting in their own interest will ultimately harm a greater good.

A system view

Thinking of a codebase as part of a larger system, if independent agents aren’t there to stay, they won’t improve the system. I remember my days as a university student. I hated it. I probably hated every single exam: the way it was run, the way it was taught, and the way students were evaluated. But the thing I hated most was that the whole system was hopeless: nobody liked it, but nobody really put any effort into changing the system. Students were not there to stay; the winning strategy was to keep your mouth shut and pass the exam. It would be somebody else’s business, not ours.

Does it sound familiar?

If the time needed to be rewarded for an effort is longer than the expected time in the system, well …who cares?
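This incentive mismatch can be put in numbers. Here is a back-of-the-envelope model of it (all figures are invented for illustration): cleanup is rational for a given developer only if the savings they personally experience before leaving exceed the cost they personally pay.

```python
# A toy model of the incentive problem: every number here is made up.
# Cleanup pays off for *this* developer only if their personal savings,
# accumulated over their expected tenure, exceed the upfront cost.

def cleanup_pays_off(expected_tenure_months: float,
                     upfront_cost_days: float,
                     monthly_saving_days: float) -> bool:
    """Return True if the developer recoups the cleanup cost before leaving."""
    return monthly_saving_days * expected_tenure_months > upfront_cost_days

# A refactoring costing 10 days of work, saving half a day of maintenance per month:
print(cleanup_pays_off(18, 10, 0.5))   # 18-month tenure: False, not worth it for me
print(cleanup_pays_off(36, 10, 0.5))   # 36-month tenure: True, it repays itself
```

The point of the sketch: the code outlives the 18-month developer, so the rational individual choice and the good of the system diverge.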

Enter the Boy Scout Rule

Interestingly, Uncle Bob proposed a solution for that: the so-called Boy Scout Rule, which goes as follows:

always leave the campground cleaner than you’ve found it

I love it, for many reasons. It creates a sense of ethics and belonging. It provides a little quick reward: “I am doing something good here”. But the most interesting thing is that it turns a rational argument into an ethical and moral one. Boy Scouts do not rationally believe that every forest will be cleaned up if everybody starts behaving the way they do. But the feeling of doing something good and right is a powerful one. And morals, together with the fear that Uncle Bob in person will one day see my code and throw me into the flames of hell as a consequence, will probably do a better job.
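In code terms, the rule is about small opportunistic improvements: while you are inside a function anyway, leave it slightly better than you found it. A minimal, entirely hypothetical sketch:

```python
# Before the "boy scout" edit: it works, but the names explain nothing.
def calc(d, r):
    return d + d * r

# After: same behavior, but clearer names and a docstring for the next reader.
def price_with_tax(net_price: float, tax_rate: float) -> float:
    """Return the gross price for a given net price and tax rate."""
    return net_price * (1 + tax_rate)
```

The size of the change is not the point: the cleanup happens as a side effect of work you were already doing, so the small reward is immediate rather than deferred to some distant maintainer.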

A higher duty

Who should care for the system? Who should ‘enforce’ (I hate this term) the Boy Scout Rule? It’s pretty clear that this is the job of somebody who’s going to stay around for a longer time. Somebody who cares about the ecosystem, and sees the whole. Or at least tries to. Be it a CTO, a Technical Lead or the CEO, depending on your company.

For example, working towards a sustainable work environment with as little turnover as possible might be a really good strategy in the long term. If developers feel the workplace is their own, continuous improvement policies might flourish and people might stay around long enough to see them working, creating a virtuous cycle.

No hope in the short term?

Here we are. Now you’re probably thinking that, since the battle for maintainable code is very hard to win, I am depressed enough to give up and let the barbarian hordes commit whatever they want.

Not yet, guys.

The thing is, good practices like TDD and continuous refactoring should repay themselves in the short term too. And they do… if you know them well enough. Which means that you’re correctly accounting for costs and benefits, including the cost of learning TDD, and keeping it distinct from the cost of doing TDD.

Personally, I would be lost without TDD. But I am a different beast: despite all my efforts, coding is not my primary activity; booking hotels and flights is. So TDD is a way to keep a codebase manageable even in scenarios with no continuous work on a project. I guess this goes for many open source projects too.

But the most interesting thing is that the best developers I know are all doing TDD. Not out of religion, but because they get benefits in return. Not potential, just-in-case benefits. Real benefits, in the short term.

Safety, confidence and speed.
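As a sketch of what those short-term benefits look like in practice (the function and its spec are invented for the example): the tests are written first, pin down the intended behavior, and shout immediately when a refactoring breaks it.

```python
def slugify(title: str) -> str:
    """Turn a post title into a URL slug (hypothetical example)."""
    return "-".join(title.lower().split())

# Test-first: these assertions were written before the implementation,
# and they keep paying off every time slugify() is refactored.
def test_lowercases_and_joins_words():
    assert slugify("Not Dead Yet") == "not-dead-yet"

def test_collapses_internal_whitespace():
    assert slugify("  TDD   is  alive ") == "tdd-is-alive"

test_lowercases_and_joins_words()
test_collapses_internal_whitespace()
```

The confidence comes from the feedback loop being seconds long: change the implementation, rerun, and know right away whether you broke the contract.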

Very good developers can also recognise different needs in different contexts. There may be cases where TDD is overkill or brings little value: if all you need right now is to show a prototype, show a damn Rails prototype right now! It’s not religion; it’s a continuous evaluation of return on investment and of the available options. Which also means that you can be wrong: you may be too conservative and pay an insurance premium for a disaster that would never happen, or take some risks, just to end up smashing your face against a wall with your colleagues staring at you with an I-told-you-so look on their faces. That’s life, dude.

But in the land of continuous choices, being able to code in only one way means being weak. The only safe spot is being so good at TDD that you know when not to use TDD. Aren’t you there yet? Good luck. There are still people who go to a Japanese restaurant and ask for a fork and spoon. But you don’t want to be one of them.

So please, stop raising arguments like “Yes, but if one day somebody wants to change the codebase”, because I won’t buy it. If you can’t get short-term benefits out of TDD and refactoring, then you probably don’t know them well enough yet. And at the same time, I don’t want to be the one trading a sure expense for a potential gain in the distant future.

Wednesday, January 07, 2009

Subcontracting is a recipe for disaster

A friend of mine recently told me about a tricky situation he was trying to solve. A software company was supposed to develop an application, but subcontracted development to another company. After a while, the project started going out of control: features were late, quality was poor, bugs were not fixed and the release date slipped indefinitely. Things were so bad that lawyers started to warm up.

Like many of you, I had a sense of déjà vu hearing this story. The thing that struck me was that this type of project is doomed from the start! Why do people repeat the same mistakes over and over and over?

Money for nothing
Let’s dig into the scenario: a customer asks Company A to produce some piece of software.
The price can be defined in many ways, but it’s normally related to a rough estimation. Anyway, Company A gets the contract. Even if they are not going to write the code, they want to be paid for the marketing and everything else that comes before the project starts; somebody will be paid to “manage” the project, and some money will be allocated to cover project risk.

Budget splitting from Company A’s perspective.

Ideally, such a project should provide a good ecosystem for communication between the customer and the developers, allowing developers to explore and to build consistently upon frequent customer feedback.

The supposed development ecosystem: the project manager deals with high-level issues, while detailed communication happens informally between the team and the customer.

The subcontracting scenario
In a common subcontracting scenario, Company A decides to outsource development operations to Company B. However, since the original deal was between the Customer and Company A, a new deal is necessary between Company A and Company B. Sometimes this deal is completely visible to the customer, but sometimes Company A just pretends to develop the application while Company B develops it behind the scenes.

Anyway, the new deal between Company A and Company B is based on a different project budget, since some costs (marketing, risk and project management) are accounted for by Company A.

Now the project may be worth about one third less than its original budget (the exact figure may vary according to greediness and other human factors), but Company B still needs to allocate budget for its own project management and risk coverage: nobody wants to be sued for somebody else’s mistakes.

The project budget, now seen from Software Company B’s perspective.

Adding two separate management boxes around the project doesn’t provide any real value in return; the financial effect is that the project is now a low-budget project, so Company B will probably start looking for the cheapest software development resources available.

One would expect that the extra money put in Project Management and Risk Coverage would turn into a perfectly managed, risk free project. Well... not exactly so.

By the way, this is often the schema applied in offshore software development, where the common belief is that developers are so cheap that huge savings in product building will compensate for the extra costs related to offshoring. I am not going to dig into offshoring in this post, but it’s definitely not that simple.

Messin’ up with things

Unfortunately, this type of two-level management doesn’t just duplicate costs; it does more: it makes the whole process inefficient, and it actively works against project success.


The subcontracting development ecosystem: formally the referring person still belongs to Company A, but development is performed by Company B. The key communication channel is now obstructed.

  • Communication flows from the Customer to Company B through Company A. If Company A is the one being paid, then all of the key discussion must pass through A, turning the supposed Project Manager into a bottleneck.
  • Indirect communication means delay: important information may rot in a mailbox before being forwarded to the right person.
  • Indirect communication also means information loss. Some information must be understood, not just listened to or read. Sometimes forwarding it is not enough to ensure no key detail goes missing.
  • Since the information is sensitive for two different contracts, all parties become more inclined to put key issues in writing instead of using face-to-face communication. Some issues are then carefully evaluated and involve more negotiation at an official level. With three players, higher risk and no effective direct communication, paper (specifications, change requests, and so on) becomes more important than face-to-face communication, and the entire process becomes waterfall-ish, eventually slowing everything down.
  • The customer role becomes awkward. The customer is the real customer, but for Company B, Company A is ‘the customer’. Unfortunately, the quality of the information you can get from a customer who is not the user of the software is a lot lower, so the risk of doing the wrong thing is much higher.
  • If Company A is still pretending to be the developing company, direct communication between Company B and the Customer might even be forbidden, making things even more awkward.
The overall effect of all these factors combined is often pure havoc. A project with a decent budget becomes staffed with inadequate people, in an ecosystem that obstructs good communication between the customer and the developers, and where management roles are duplicated adding costs and subtracting value.
Is there any hope?
In theory, there is one scenario where it could still work: when Company B is so good at building software that its estimates are significantly smaller than Company A’s. But to put this scenario in the real world one should answer the question: “Why would a company that is so good at building software need marketing from another (and poorer) software company?”

As developers, we’d probably associate this situation with the decorator pattern, which simply adds features (more effective marketing) to another component (the development) without changing it. Well… this is simply not the case with subcontracting. The act of subcontracting has a negative impact on development itself, and should be avoided or limited. As a customer, it makes a lot of difference whether the software company is actually developing the software or simply playing the role of a broker. Having seen many of these situations go straight into a non-recoverable state, I really think special care should be taken about these issues in the pre-contract negotiation phase, to avoid otherwise unavoidable surprises.
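For readers who want the pattern spelled out: a decorator wraps a component and adds behavior without touching how the wrapped component works. A minimal sketch (class names are illustrative, not from any real codebase):

```python
class Development:
    """The wrapped component: the actual software building."""
    def deliver(self) -> str:
        return "working software"

class MarketingDecorator:
    """Adds marketing around development without changing it: the ideal
    Company A would be exactly this kind of transparent wrapper."""
    def __init__(self, wrapped: Development) -> None:
        self._wrapped = wrapped

    def deliver(self) -> str:
        return "well-marketed " + self._wrapped.deliver()

print(MarketingDecorator(Development()).deliver())  # well-marketed working software
```

Real subcontracting breaks the pattern’s contract: the wrapper does change the wrapped component, degrading its budget and its communication channels, which is exactly why the analogy fails.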

Tuesday, December 02, 2008

Does agile add costs?

Another interesting question that popped up during my “Agile: the B-Plan” presentation at IAD2008 regarded the different cost schema of an agile process compared with a traditional one. Somebody asked: “How can I sell an agile process, given that I need some kind of presence at the customer site, and this will trigger extra costs for travel and hotel expenses?”. This sounded like “if you want an agile process, it will cost more”. Strong objections arose: “This is not truly agile”. Which is true: just adding a communication channel towards the customer doesn’t make you agile. But I think the problem runs a little deeper than that. I’ll try to elaborate.

Selling the process
Does it make sense to sell your development process? I think it doesn’t. As software companies, we are expected to sell results, not the way we achieve them. Is agile better at achieving results? Great, that’s our business.

Unfortunately, the last statement is not entirely true: agile does not work without customer collaboration in place. So the customer must somehow be aware that starting an agile project involves some commitment (dedicated time from key resources) from their organization. Not all organizations are equally ready for this type of engagement, so you might find yourself desperately looking for some kind of feedback, and this feedback might arrive only close to the deadline. Ouch, we’re back to waterfall.

The myth of the on-site customer
XP approaches the customer collaboration problem by introducing the on-site customer role. Well, in practice this is not likely to happen, for dozens of reasons. So a negotiation phase is necessary to establish a communication channel with the customer, as well as a tuning phase to ensure results are good. This is an area where simply “playing by the book” will lead you nowhere: you’ll need to understand why you need this type of communication, how you can achieve it, and what the shortcomings and risks of the agreed trade-off are. And both sides must be aware of that. Scrum places much of this responsibility on the Product Owner. The efficiency of the communication channel between the team and the PO is critical to project success.

Frequent feedback loops are necessary for both sides. The team needs feedback to know they’re developing the right thing, and the customer needs to see how things are going, to reduce project risk. Risk management is probably one of the most powerful keywords if you still want to sell an agile process. Talking about risk management simply sounds better to management’s ears.

Project specific cost schemas
However, there are no dogmas, even in this area. Face-to-face communication with the customer is a lot more efficient, but it does have a price. Sending two people to the customer’s offices for a day or a week has a cost, which varies a lot depending on project-specific constraints. In Italy, hotel prices went up in recent years, while IT salaries went down. Traveling to other countries might be really expensive. In some circumstances, the cost of knowing that I am doing the right thing might be higher than the cost of “learning by mistakes” (but remember: there are some vicious hidden costs on that side to consider). Unfortunately, it almost always takes the right person in the right place to understand exactly what we are missing. So the only way to know whether we can safely reduce costs by accepting a less efficient communication channel towards the customer is to first have an efficient communication channel towards the customer. Otherwise we won’t know what we’re actually missing.

Getting back to the title question: I think agile processes have a different cost schema (you save a lot on printed paper and pay more for post-its…), but are generally more efficient overall, even in fixed-scope, fixed-cost projects.

Thursday, November 27, 2008

The dark side of self-organizing teams

Agile methodologies focus on the notion of self-organizing teams as a key to software development success. This works for a lot of reasons: talent is not constrained to follow a pre-defined process, the process is adaptive and tailored to the individuals, and so on.

But it looks so beautiful and simple in books, and is so hard to achieve in reality. One reason is that there are two main patterns for self-organizing team creation:
  • being already in one
  • spontaneously creating one
Penguins are self-organizing. Males stay in the middle of the Antarctic continent for months with an egg on their feet, keeping each other warm and discussing politics and soccer. At springtime the female penguins come back from their shopping and fishing, just in time to start feeding the newborn baby penguins. Young penguins follow the group habits without questioning them, and the ritual goes on like this year after year.

Creating a new group involves discovering a common passion; it’s just like starting to play baseball in a small town. Maybe two kids are just playing with a ball and a bat. Another one joins them, same day, same hour; then, later on, they discover they’re enough to form a real team. There can be some key moments in the growing phase, but most of the time the original vision is shared by the majority of the team.

What is really hard is trying to transform a team that has always been directed from above into a self-organizing one. There are so many things that can get in the way that it is often better to start with a newly formed group instead of trying to turn an existing team into a self-organizing one.

...not necessarily good
But the notion of a self-organizing team is not necessarily good in itself. Self-organizing teams can be pretty nasty (the Mafia, Al-Qaeda and the Ku Klux Klan are all examples of self-organizing teams, as this interesting post says). But the question is not only tied to team ethics; it’s also related to what the team can do to achieve its goals, which involves powers and responsibilities.

If you follow soccer a bit, you probably know the story of Antonio Cassano. One of the most talented Italian players, he played for AS Roma for a while. After some initial success, his role (not his skills) started being questioned. AS Roma sold Cassano to Real Madrid, and right after that began an impressive run of 11 victories in a row. After the embarrassing parenthesis in Spain, Cassano is now doing fine again at Sampdoria. This is a clear example of how a team can improve as a team by getting rid of some talent.

Talent is hard to manage (and this could be the subject of a dedicated post), but the key point here is that the team might decide to drop members who don’t share the same values as the rest of the team. You need to grant the team this right, otherwise shared vision and behavior will never emerge; but you’ll also need to be prepared for the consequences, as a manager or as a team member. And the consequences can be pretty nasty: like telling a colleague that he or she is not welcome (maybe with some “unless…”).
One key point is to be sure that the most influential team members have a positive influence on the team. If you can’t guarantee that, you’re probably doomed, right from the start.

Wednesday, November 26, 2008

So ...where’s the fun part?

I have the feeling I killed somebody’s dream in my last post, and that I was walking on the edge of being misinterpreted. I think I’ve got to blog about it again from a different angle.

I like developing software. I think it’s fun. Even though I like solving complicated puzzles, I think the best part of software development comes from working within a team. I have some friends working for the Toro Rosso F1 team (being from the same town makes it easy), and I know how everybody felt after winning in Monza. This is the kind of feeling a team should work for. Not every team can be so lucky, but… you get the point.

Are we just executors?
So, are we just implementing a specification? Nope. If that were the job, we could pick some MDA tool and have it do the job for us. I just asserted that coding is a lot less creative than many developers think. Finding a proven solution (hopefully a good one) is often preferable to an original one. And “original solutions” often degenerate into in-house framework lock-in.

But being mere executors doesn’t work either. Just following specifications is waterfall, and it’s also a waste of talent. Such a scenario will eventually lead to low motivation and talent loss within your company.

Creativity might slip in when the puzzle is more complex than usual, or when attempting something new. In these cases, you have to reach for the creativity tools: get away from the workstation, have a (time-boxed) brainstorming session, involve somebody else, be provocative, avoid censorship. The key is to recognize which type of problem you’re facing and to use the right tool for the job.

Learn and challenge
Another area of software development where you definitely need some creativity is interaction with users and stakeholders. Understanding the problem domain is an interesting activity, which will lead you into unexplored areas of knowledge. Understanding the way users interact with your application, and the reasons why, might lead you to propose features that come in handy, or that make the difference!

Software development teams are often a concentration of pretty smart minds. Pretending that the only focus is code is a waste of talent and, often, a recipe for a mess.

Tuesday, November 25, 2008

The alibi of creativity

One of the most intriguing parts of the discussion during my speech at IAD2008 was a question (interestingly, asked by one of the few non-developers) about the possible negative effects of time-boxing (especially in the form of the “pomodoro”) on a creative activity such as developing software.

I quickly dropped my opinion, but I was more interested in letting others’ opinions on the topic emerge. Then I had some time to think about it more carefully: here are my thoughts.

Coding is not a creative activity

I mean… there’s creativity involved, but most of the time we are solving problems in ways that have probably been explored by many others before us. So, basically, coding is a problem-solving activity. Creativity slips in when the problem to solve is new, unconventional or pretty hard, or does not strictly involve coding (in this respect, the process might be far more creative than the coding). Sometimes we are lucky enough to work on applications that are pioneering, but many more times this is not the case.

This doesn’t mean that we have to be mere followers of somebody else’s ideas. But “being creative”, or pretending to be, too often has the undesirable side effect of “reinventing the wheel”, which is definitely not what we want. Put another way, “being creative” is often just an excuse for being “too lazy to study”.

Time-boxing does not constrain creativity

Ok, there are times when the strict time constraints imposed by a time-boxed activity (a pairing session, a pomodoro, a day or a sprint) are not enough to allow for the “inspired” solution. As Simone Genini pointed out, “as a manager, you don’t want to wait for a developer’s inspiration”. So what you need is a repeatable approach to this problem-solving activity.

Back home, I realized that this happens in creative fields too: comics are written on a monthly schedule (even if not every author can keep up with it), ad campaigns are creative but organized as time-boxed projects, and so on.

Even in software development, the time-box constraint is less stringent than it might seem: if one of your best developers comes up with what he considers a sub-optimal solution, something he doesn’t really like, you can be sure that he’ll keep thinking about it, and will probably attack the problem again once a better solution appears to him in his dreams or in the shower. Simply allocating more coding time to that task probably won’t get the solution any closer.

Tuesday, November 18, 2008

The scrum mother - re-explained

Honestly, I never thought my last post could be that controversial. But I got a lot of diverging feedback: some loved the post, some demolished it and told me it was terrible, and some others liked it but got the meaning the other way round… Maybe it is necessary to clarify things a little bit.

I am quite convinced that authority gets in the way of being a good Scrum Master. The SM is not a leading role in Scrum; he or she just preserves the integrity of the process, without effectively taking part in it. The SM may participate in a discussion, but must not take any decision; the SM must ensure that the discussion ends with a shared decision.

That’s why I came up with the analogy of the Italian mamma and the old-style Italian family. The father has the authority and brings the money home (hopefully); the mother manages to keep everything running. Preparing the food does not deliver any value to the outside (unless you own a restaurant), but it allows family members to be healthy and do their work.

But I guess the metaphor turned out weak, because I was referring to a very specific type of family, and every single reader had their own idea of a mother, so the concepts overlapped, diverging a lot from my intentions.

How to raise kids
Probably, the flaw of the mother example is related to the different perceptions of the mother’s role within a family. I’ll explain what I meant, going straight to the SM role, to be as clear as possible. SMs are not supposed to prevent developers from hurting themselves. SMs are supposed to let the team grow, by letting them make and recognize mistakes, and by allowing them to develop self-confidence and do things on their own. You must be around when your kids are learning to ride a bicycle, but if you keep holding them, or force them to use those small extra wheels… they’ll learn later. The same applies to swimming, where confidence is just about everything. If you are always around, they’ll start crying the first time they’re alone. The result: delayed growth, lack of confidence and a lot of time spent just “being around” by senior management.

Wednesday, September 24, 2008

Me and MSProject

Yesterday evening I attended Craig Larman's brilliant talk at Skills Matter, in which he deeply criticized some common dysfunctions of traditional project management, and doomed MS Project and Gantt charts as useless and dangerous in software development.
I couldn't agree more with him, so imagine my surprise when today I discovered that my naymz page was actually showing an ad for MS Project!

Needless to say, I am quite disappointed. :-(
I know it's AdWords-driven, and that some keywords in my profile triggered the ad. But...

Wednesday, September 10, 2008

Sustainable Software Architecture

A software development project is a bounded activity. One of the key goals of software architecture is to find the best trade-off (or the sweet spot) for a given project ecosystem.
Such an ecosystem is the result of many combined factors, such as:
  • project size
  • team size
  • team location (e.g. co-located vs distributed or offshore team)
  • team skills (experienced developers, contractors, folks just hired, and so on)
  • team members availability (not all teams are well-formed at the time the project starts)
  • architect availability
  • turnover
  • logistics
  • marketing constraints
  • deadline constraints
  • etc. etc.
I have learned that keeping the software architecture clean and insulated from all this is a dead-end road. Those ecosystem constraints affect the way a project is carried out, and they also affect the optimal architecture for that project. Put another way, there's no optimal architecture for a given problem, unless you factor all the variables in.

As Kevin Seal pointed out at last week's Skills Matter event: "Architecture is about things that are expensive to change", and in the open source era, the most expensive things to change are time (which can't be reverted) and people, who normally have a long and inefficient learning cycle.
This makes a choice like "choosing the implementation language" an expensive one, because despite the free availability of development environments for different platforms, team skills might have to be built up.

Make the architecture fit the ecosystem
Answering questions like "What's the right tool for a given load requirement?" is a nontrivial job, but it's the one we're (hopefully) prepared to do. It's right in the software architect's mindset.
What I've found trickier is the need to define just the minimum affordable level of architecture for a given application. In an ideal world we would like to have the most reliable architecture, making coding easy and meeting all of the nonfunctional requirements. We would also like all the team members to get familiar with it, understanding the role of every architectural component and the reason it's been implemented the way it is.

Too much of a dream. Architecture definition is often a time-bounded activity too. For a consultant, the Architecture Specification Document might be a deliverable mentioned in the contract, so even if there are more efficient ways to deliver architecture information (podcasts, meetings, comedies, tattoos... just to name a few) a document must be prepared. But the contained information must be delivered some other way...

Pairing with programmers, or simply coding, are great ways to get a grasp of coding reality (meaning that architects might learn something really useful from the way the architecture and other coding tools are used), but also a bit naive when the matter is "How to deliver architecture information". Architects have (I mean, they must have) a broader scope than developers, taking into account long-term factors, while a developer is generally focused on a problem that has to be solved now. And developers are not all the same. Seasoned veterans and college guys have peculiar skills and interests, and a completely different background.
Moreover, properly training the team might be a goal for a company investing in its own development team, while it might not be a viable option for an external or contractors-based team. This might sound a bit cynical (it is), but even if I prefer teaching as much as I can about the architecture in place (after all, it's part of my job), I have to face the fact that information will flow over a limited-bandwidth channel: there will not be as many chances as I would like to discuss the architecture, there will not be as many questions, and developers might not be that curious, after all.

Finding the sweet spot
Generally, I think the ideal sweet spot is doing as much architecture as the team can manage (but I usually learn where that point is by overshooting it). This is not a fixed line. A good team will probably ask for a more elegant architecture as the project advances, pushing (or pulling) architecture evolution. In some critical situations, like long projects with a lot of turnover, the architecture should be robust enough to prevent developers from harming themselves, keeping the learning curve as small as possible. Some more work for the architecture team, but usually a bit sadder.


Thursday, October 25, 2007

Videogames for the Development Team - Updated


This is just a slightly different version of an old post...

I’ve got this stuff bouncing in my head, right after reading this
Joel Spolsky’s post trilogy about project management. You’ll realize that my head is pretty foggy lately…

The average developer's attitude is best expressed by first-person shooters, such as Quake. A developer has a dirty job to complete, and must not be afraid to dig into a bloodbath to finish it. Collaborative teamwork is encouraged for better results, but in the end it's you against the enemy.

Team leader’s attitude is best trained by strategy games such as Warcraft, or similar. You organize the teams, assign task, and make characters’ experience evolve, so they can perform more complex task. Controlling parallel activities is just the nature of the game.

A project manager's perspective is almost the same as the one a game like SimCity gives. You don't control characters anymore, but create the conditions for them to do stuff, such as providing houses, roads, and so on. Then you watch interesting things happen: if your city is a nice place to be, people will be glad to join it, otherwise they will leave.

You might be tempted to choose a different, Sims-like approach, but as Joel's article explains pretty well, it's just too fine-grained to work effectively (to be honest, I always have flies over the garbage can...).

The perfect game for the DBA is Dungeon Keeper. Well... you might guess that I am not a DBA, but the whole game is defined from an access provider's perspective.

Ok, somebody might be curious about my favourite game... well, I've always been a Civilization fan, ever since version 1. But the one and only game that really pleased my ego was Populous II, in the moments I could send a tidal wave and watch poor innocent human beings drown... Does this mean something?


Saturday, April 21, 2007

Gotta get going...

"Excuse me, sir. I think you got the wrong shoes on!"
"What do you mean?"
"Looks like you put the right foot into the left shoe and vice versa."
"Yeah, looks like you're right. That's why they were hurting so badly."
"Aren't you switching them?"
"No, I'm late, I gotta get going now."

One of the strongest advantages of iterative development is that the concept of iteration also leads to the concept of checkpoint. The moment you release an intermediate milestone is also a moment to think and verify whether you're going in the right direction. It's a moment where you take a breath and think, instead of just doing things.

Put another way, iteration doesn't mean repetition. Splitting the project timeline into iterations means there should be fixed moments reserved for approaching the project from another angle. Scrum puts the question "what's the biggest slowing factor of the project?" at the center of project leadership on a daily basis. Long-lasting waterfall projects normally ask this question far too late.

If you are in an agile or iterative project, and there's no difference or perceived change from iteration n to iteration n+1, this is generally a bad smell. Normally it means that this is not an agile project or, more precisely, that it's not an adaptive one. Checkpoints are probably used only to track elapsed effort and delays, and to re-adapt estimates. A more pervasive, Scrum-like review might instead lead to a different way of doing certain things, or to a suggestion about how to improve them.

It's just a larger-scale application of the tuning methodology: test (iterate), measure (finish the iteration), diagnose (the iteration meeting), and correct (plan for the next iteration). If the iteration borders are blurry, you have a lot less test information; if you're not having a meeting, you're probably not getting all the information you need for a good diagnosis; if you never stop and plan for a change, you'll never improve. And if you keep on postponing needed changes just because you are too busy... you always will be.

Saturday, April 14, 2007

The dark art of cheating software estimates - Part two.

Just a quick addendum to my previous post about software estimates (and how to fool yourself cheating with them).

The most effective trick to miss a deadline (and the following one) is to put unrealistic estimates on developers' activities. To achieve this result you basically have two ways.

Define the estimates at the wrong level
This normally happens when the team leader defines the activities and assigns estimates over the heads of the developers. Experienced developers know better what should be done and which activities take time, so you should rely on their point of view; after all, they're the experts in their domain. Imposing an estimate over the developer's head also has the undesired effect of causing an emotional drift when activities take longer than expected. A young developer could assume that he is the problem (when maybe the real problem is just something forgotten at a higher level) and feel responsible for the delay. If you just say "You have 3 days to finish this", you might end up with the only crappy piece of software the developer was able to write in those three days.
And, of course, skills and environmental factors are different, so the same activity could take 2 days if assigned to one developer and 4 to another one. If the estimates are coming from above, it's easier to forget about that, when quickly reassigning activities.

Ok, this is just something that might happen. I am not saying that one should rely only on developers' numbers. A good team leader has his own estimates in place, which can be used to manage risk related to optimistic developers, and so on. But this is an activity that should be performed behind the scenes (my first team leader always added 30% to my developer estimates). Comparing developers' estimates with yours can also help spot potential problems, like activities that shouldn't take that much.
By comparing different developers' estimates on similar tasks, you could also spot whether somebody has found a smarter way to do something, and have him teach it to the others (or discover that somebody is not finishing the activity and is leaving some dirt under the carpet as an undesired gift for the following iteration). Put another way, you should rely on the developers for the estimates, and take on yourself the burden of finding out how to speed up activities.

Asking for estimates in the wrong way
One thing you should never do is ask "Will it be finished by Tuesday?" and - even worse - the follow-up "You told me this was going to be finished by Tuesday". Of course, in the middle, everything can happen. If you want to be hated, you can ask the question one day, then interrupt with a higher-priority task, then on Tuesday ask the follow-up, with a blaming pitch.
The point is that, as a leader, you shouldn't forget the effect of your role. A milestone should be a leader's problem, not a developer's problem (and shouldn't be managed in terms of blame anyway). Asking the question this way is no different from a woman coming out of the hairdresser's with a new "transgressive" haircut asking "How do I look?". It's just a compelling invitation to lie.
In the end, the numbers are exactly the ones the team leader wanted, but now the blame is all on the developer. Playing this trick is unfair; doing it repeatedly is just a way to increase pressure and threaten team chemistry.

The right way to put the question is "How much time do you need to do this?", and then do everything possible to ensure that all of that time will be spent on the activity. A good leader manages pressure from above and shields the team from it. If the collected numbers are too high, there is probably a problem, which needs to be investigated and possibly solved, soon.

Don't forget that providing a detailed estimate of an activity is an activity in itself. It shouldn't take a day, but be careful when you get an immediate reply: it's a sign that somebody probably isn't thinking enough. So leave your developers the time to think about what should be done and how much time it will take. As Joel Spolsky states, this is design activity, after all.


Tuesday, January 30, 2007

The dark art of cheating software estimates

The most widely used techniques for estimating software projects are:
  • guess a number
  • ask a number and multiply it by pi
  • pick the release date, count the days backwards and multiply them by the available people
  • function point analysis
  • use case points analysis
  • ...
I personally prefer UCP, because it best fits our usual development process scenario, where the use case list will also be the starting point for the real development phase.
In UCP the estimate is the result of two main factors: the application complexity (roughly measured via the number and complexity of the use cases and actors) and the environmental factors which - as everybody knows - heavily influence the outcome of the project.

The second reason why I like the UCP methodology is that when I did retrospective analysis on finished projects, the results were pretty accurate. Which is obvious, if you think about it, because retrospective analysis is exactly the way estimation methodologies were tuned. There are two important points so far:
  1. The UCP methodology is pretty accurate, if you correctly evaluate the starting factors
  2. You can still make mistakes, because factors might change (new use cases might be added during development, or environmental factors may change, or reveal themselves wrong)
The outcome of the estimation process is a number, or a range of numbers, which can represent hours, days or months of development effort. It's a measure of the human time needed to construct the desired application. Here the real problems start: you pass the number to the management, they multiply it by the salaries and realize that the project will be too expensive. Put simply: you say time and they say money.
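To make the mechanics concrete, here is a minimal sketch of how those two factor families combine into a single number. The weights and coefficients are the commonly published Karner values, not something stated in this post; the function names and the 20 hours-per-UCP default are my own illustrative assumptions:

```python
# Hedged sketch of Use Case Points (UCP) estimation.
# Weights and coefficients are the commonly published Karner values;
# everything else (names, defaults) is illustrative.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def ucp(actors, use_cases, tech_factor_sum, env_factor_sum):
    """actors / use_cases: lists of 'simple' | 'average' | 'complex'.
    tech_factor_sum: weighted sum of ratings over the 13 technical factors.
    env_factor_sum:  weighted sum of ratings over the 8 environmental factors."""
    uaw = sum(ACTOR_WEIGHTS[a] for a in actors)         # Unadjusted Actor Weight
    uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)  # Unadjusted Use Case Weight
    tcf = 0.6 + 0.01 * tech_factor_sum                  # Technical Complexity Factor
    ecf = 1.4 - 0.03 * env_factor_sum                   # Environmental Complexity Factor
    return (uaw + uucw) * tcf * ecf

def effort_hours(points, hours_per_ucp=20):
    """Human time: 20 hours per UCP is a common (and debatable) default."""
    return points * hours_per_ucp
```

Note the negative coefficient on the environmental factors: a well-rated environment shrinks the estimate, which is exactly the kind of assumption that silently breaks when, say, motivation drops halfway through the project.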

Here comes the cheating
The official estimates will be close to reality, but reality is bad news... so what do you do? The problem is that the link between effort and price looks completely locked before the project starts. During the project, anything might happen: you might be 2 months late, hire new people and train them, and so on. Each time, you add a random factor that invalidates your fixed ratio between worked hours and the overall price. Still, what happens 99% of the time when showing estimates to the management is that you'll be asked to reduce them.

Sometimes there are business reasons for it. It's like buying a car: the salesman knows that the final price will be $50,000, but will tell you about the $39,900 model and the options... Some other times it's like a Pavlovian reaction: time is a cost and must be compressed. As if that weren't sad enough, I've never heard of a smart suggestion coming out of this phase. Normally you get one of these three:

a)"Let's skip the analysis phase"
b)"Let's skip the design phase"
c)"Let's skip the test phase"

If you are still optimistic, I have to warn you that a) and b) almost always imply c). Or, put another way, c) is always implicit in such a situation.
The most annoying thing is that the same overall result could have been reached by talking only about money. Just lowering prices. But everybody assumes that the other one is lying, or maybe the way the (presumed) cost reduction is achieved makes a difference for somebody.

But let's get back to our numbers. As I said before, the good news is that predictions are rather accurate; still, they might fail. One way is having a wrong count of use cases, meaning that more can appear along the way. But a new use case is a new feature (it's the $500 option on our car), so it's not a problem, as long as the price varies accordingly, and the time does too. Often, what happens in the closed room is a bargain of money vs time, sort of "I'll pay you this extra money, but you'll have to deliver it all at the originally planned date..." hmmm
External factors are trickier, because they're harder to evaluate. Sometimes they're mere assumptions, and it takes time to realize whether they're right or not. An example of a tricky one is "motivation": you can assume motivation is 3 on a 0-to-5 scale, because you simply don't know. Then it's hard to find the special moment in the project lifecycle when motivation drops to 2, triggering a recalculation of the estimates. Can you imagine a boss saying "I noticed that the mood dropped in the development team, can you please update the estimates accordingly?"?
So your initial assumptions are kept "locked", shielding the official numbers from the force of reality. But every time the estimates are kept safe, old, and untouched, you can assume that they're just a lie, and the distance from the truth will have to be filled somehow. The difference is that if the truth is exposed, people tend to behave accordingly; if the truth is swept under the carpet, everybody feels free to cheat a little bit more.

Wednesday, January 17, 2007

Designing to lower the TCO

I was reading this post from Steve Jones, and had mixed feelings: as I commented on his blog, I am trying hard not to be that type of architect. But forcing an organization to consider the Total Cost of Ownership for the whole project lifecycle is often a tough job.
Sometimes it's the organization itself that's badly shaped, maybe with separate budgets for development and production, so managers have no incentive to save somebody else's budget. Sometimes the cost of ownership is perceived as a "normal mess" and becomes alarming only when it's totally out of control, which can be ten times bigger than acceptable, or more, due to the "buffer effect" of the dedicated people.

Sometimes it's the overall project planning that plants the seeds for the "touch and go" architect. Time is allocated mainly before the whole development starts, so the architect can't see the developers in action. I mean, architecture is challenged by development, and by time too: developers might not behave as predicted and find different ways to solve coding problems, and evolving tools and frameworks could render the balance of forces that drove some design choices no longer valid (that's a good reason for architects to document the reasons behind their choices). It's often not a matter of being right or wrong, but a matter of seeing the whole picture, which is - obviously - much easier at the end. Following a project until the late stages clearly makes a better architect, but seeing everything possible from the early stages is what software architects are paid for.

There's some sort of analogy with the traditional drawbacks of the waterfall approach and the analyst role. Agile processes have put a lot of effort into introducing iterative release cycles, which are just a way to anticipate real feedback as much as possible. Iterating on architecture, for some reason, seems to have a longer path ahead, but I'd say it's probably the same problem, only with a different category of users.

Sunday, January 07, 2007

How to become a communication paranoid

In the last week I found myself thinking and discussing about which was the most suitable container for different pieces of information. In one case it was a planning doc: was it better to have it shared on the CVS or published on the wiki? A similar question arose about a to-do list for restructuring activities: keep sharing the same Excel list, or enter the items into a bug tracking system? Same question for a project glossary: wiki, Excel or a Word document?


One common choice factor among the different activities is ease of use. The shape of the container should make it easy for the users to feed in the needed information. One thing you should be aware of is that when people talk about “the users” they often mean themselves, or the people who provide the information, who are generally a minority of users compared to the readers, who generally get the most benefit from the shared information. In this case ease of use turns out to mean accessibility, and it’s probably the primary factor to consider.


But what I am getting paranoid about lately is finding the optimal way to attach the necessary meta-information to the provided information. Context and collateral information might be in the e-mail in which I am sending a Word document as an attachment. But the user is reading the mail, printing the doc, and showing the doc to somebody else, without some crucial extra info such as “this document is bullshit, the real one will be prepared next week”. To avoid information abuse, I find myself using different tools just to ensure that information is handled in the proper way. So a common weakness of wikis, such as the not-so-easy printability of the contained information, becomes a plus when I don’t want the information to be printed (because it’s evolving too fast).


Clearly, meta-information can be attached in many ways, but sometimes implicit meta-information is a more efficient way to carry it.

Friday, August 18, 2006

Keeping technology standing still


Recalling projects I’ve seen, I realized that there was a strong correlation between the technological landscape of the project and the actual number of developers leaving the project (and the company). The more the technology was fixed, the more it became boring and frustrating for developers. Software developers are not normal workers, but strange animals that actually like coding, and they can do a hell of a job as long as they get some fun out of it. Tom DeMarco’s Peopleware paints a perfect portrait of the software developer and his peculiar needs.

Trading fun for productivity?
Still, it looked like sort of a bargain: one can choose to keep the technology fixed, paying less in training and refactoring while increasing the risk of an early departure from the project. Unfortunately, as the project continues, the chances of this event steadily increase, so you must consider this option carefully. What is somehow hidden is that developers are not at all the same, and the one who is most prone to getting bored might be just the most curious one, the most brilliant, or the most passionate. So if you’re assuming that one in five of your developers might be leaving, chances are high that you’re losing 30% of the workforce instead.

The other hidden part of the bargain is that if somebody leaves in the middle, you have to pay a productivity price in training time as well. If the base technology is outdated, or – worse – proprietary, learning cost for newcomers will increase over time.

Updating the landscape
Ok, not all projects are intended to be “eternally open” (so this thought can’t be applied everywhere), but if you have a long-living project and you’re trying to save money by stopping its evolution, you are actually creating a huge cost area in the future, which will materialize in the form of “the big refactoring mayhem” or of “the application that can’t be changed”. Maybe it won’t be your personal problem as a PM, but either way you end up wasting money in the long term.


Saturday, June 24, 2006

Massive Learning Patterns


Recent research on learning patterns in humans and primates led to interesting results: the findings were that while young primates basically emulate the parent’s behaviour, human babies tend instead to imitate the parent’s behaviour uncritically. Put another way, they simply do what they see a parent doing. One example was that if the mother switched the light off with her forehead, babies would do basically the same thing, while young primates would act in a more conscious way.

It might look like primates are smarter, but the evolutionary answer is the opposite: imitation is the fastest way to learn a series of complex behaviours in a short time, especially if you’re still too young to understand the reasons. Incidentally, learning by imitation, without asking questions, is the same learning pattern adopted by armies all over the world, and they share the same goal: a lot of coordinated behaviours to be learned in a short time.

From primates to developers
Recently, we shifted some developers from Struts to JavaServer Faces in the presentation layer of a couple of projects. Newer RAD-oriented IDEs (such as Java Studio Creator and JDeveloper) should have had a positive impact on productivity, after the initial phase. Still, some of the developers claimed to be stuck in the learning phase, while others, who just skipped the “I-have-to-learn-the-framework” phase and went straight to the code, copying some code snippets and adapting them to the project’s needs, performed significantly better in both the short and the long term.

Mere imitation, or code pasting, isn’t the whole story: babies imitate parents, who have god-like authority over them, so their behaviour is accepted on trust before it can be understood. Similarly, developers look for trusted sources of information from which to extract some proven solution (and the ability to dig the net for it is becoming a distinguishing factor among developers). In a closed context, the perfect solution would be a robust prototype, providing a vertical example of a functionality.

As I experienced while coordinating offshore teams, a good prototype or an architectural skeleton removes whole families of problems, simply by providing a source for cut & paste: “if it comes from the architect it can’t be wrong”. Well… it obviously can be. But even a wrong repeated pattern is better than 20 differently wrong patterns spread all through the application.


Friday, June 16, 2006

Strive for progress observability


One of the key features of iterative development processes is the ability to make the work in progress observable. Clearly, different roles have different needs, and different levels of observability are required: for a contractor, milestones matter on the agreed deadlines and day-by-day evolution shouldn’t matter, while a team leader should also look at daily changes to better guess the direction every developer is moving in.

During the “pure development” phase, I normally ask my team to commit their work to the CVS or Subversion repository as soon as possible, at least on a daily basis, but I realized that this might sound odd to some of them. Personally, I can’t find any good reason (in this phase, at least) not to commit any code which is slightly better than the previous version. If the code won’t compile or breaks some test, then it’s a good choice not to share it, but that’s the only reason. By postponing a commit you’re just putting a slightly risky area (merging and integration) a bit closer to the deadline, giving a minor trouble a chance to become a bigger one.
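That policy can be sketched in a few lines. This is only an illustration of the gate described above (compile and tests are the sole barrier to sharing); the build and test commands are assumptions, not something prescribed in the original post, and a Subversion checkout is assumed:

```python
# Hedged sketch of the "commit early" policy: share any change that is
# strictly better than what is in the repository, gated only by the build
# and the tests. Command names below are illustrative assumptions.
import subprocess

def safe_to_commit(build_cmd=("make",), test_cmd=("make", "test")) -> bool:
    """Code that compiles and passes the tests is good enough to share."""
    builds = subprocess.run(build_cmd).returncode == 0
    return builds and subprocess.run(test_cmd).returncode == 0

def commit_now(message: str) -> None:
    # Subversion assumed; with CVS the final command would differ.
    if safe_to_commit():
        subprocess.run(("svn", "commit", "-m", message), check=True)
```

The point is not the script itself but the asymmetry it encodes: a commit is cheap to make and cheap to revert, while a postponed merge only gets more expensive.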

Savage time
During early stages, I sometimes try to enforce “competitive access” to shared resources: if more than one developer is accessing the same file, then the first one to commit pays a smaller integration burden. This might sound pretty naïve (and I’d never use this style for the whole development process…) but it’s normally part of a “shock therapy” to rapidly establish team mechanics, if there are new team members or developers still stuck with MS SourceSafe bad habits. Once everybody is confident they can get along with the versioning policy, the team can switch to a less aggressive style.

Don’t forget that even if you switch to a more coordinated policy, you might still need the practice for emergency recovery, production bugs, night fixes and all of those situations that can become nightmares if you add complexity on top of a mess. If you are skilled in troubled waters, then you’re less likely to panic.


Wednesday, May 24, 2006

Communication in the Agile Team


Agile methodologies, particularly XP, strive to achieve better process efficiency by reducing the amount of required documentation to a “no more than sufficient” level (among other things, of course). Having a relatively small team of people who get along quite well, and putting the team members spatially close together, should provide the desired background for informal but highly efficient communication to spread among team members.

What “spontaneously” happens is something slightly different: communication happens, but developers aren’t saying the right things. They’re asking each other
  • how do you configure this?
  • how do you setup the environment?
  • This page looks odd, how did you solve this in yours?
Which are good questions, but could be better answered by an agile HowTo, or a script. Or they’re problem-determination questions, which are good for sharing knowledge but still have a backward-looking perspective. What is less likely to happen is efficient communication between mates about what really matters (to me, at least), which is putting the pieces together to produce software that works. Developers tend to focus only on their own task, asking somebody else only when they’re in need. Otherwise development proceeds silently, or with headphones on, wasting all the competitive advantage of spatial proximity.

Constructive communication
Adding cooperative documentation didn’t prove to be an efficient glue. What did prove so was re-assigning tasks in a way that forced team mates to cooperate: instead of trying to have developers work simultaneously on two separate parallel tasks (which is what you normally do to avoid deadlocks), I asked them to work together on the same, short task. The result was a lot less messy than expected; in fact we had working code faster than expected. There was deadline pressure, to be honest, but it didn’t result in overtime. Instead it produced a spontaneous design discussion on the classes to be shared, something I hadn’t seen for a while.

Some might say that I just discovered the hot water of XP: pair programming and continuous integration. Which is partially true, but we still aren’t doing real pair programming (for a lot of reasons), while we normally do have continuous integration practices in place. Still, what we are talking about here is integration at the communication level. Asking is only one form of communication (and if DeMarco’s theory about the state of flow is true, asking is a sort of stealing), and a pretty primitive one. Forcing tasks to overlap might look like a suicidal choice, but it instead generates positive effects on development speed and on team mechanics.


Sunday, May 14, 2006

Sticky Standards (Coding, the IKEA way - part 2)


Right after publishing this post, I got the feeling something was missing that could still be extracted from the IKEA metaphor. Mounting the drawer handles was indeed only the second part of the job, after building the whole closet (well, in fact you could have chosen another way too: mounting the handles, or simply drilling, before building the whole thing).

So, how do you do that? You just unpack the pieces, and start following the instructions. Don’t believe those folks who blame the quality of IKEA instructions; they’re pretty good: only drawings, no text (to get rid of i18n issues), and if you follow the plan you can’t miss. Drawings provide details about which side goes up and which part to start from, reducing the degrees of freedom you might have in doing even the simplest stuff without a plan. This way they can avoid maintaining a huge F.A.Q. section answering things like “how to attach legs to a closet after you filled it up” and so on.

What’s the difference from coding? It’s the fact that many developers tend to favour copying some colleague’s code instead of following a detailed HowTo. A good reason for that is that you can ask the author of the code for clarifications if needed (which is efficient on a small scale, but not on a large one). A not-so-good reason is just in the developer’s mind: following a plan might be easy, and leads to predictable results. Put another way, it’s boring. Fun is solving a problem, and if there’s no problem there’s no fun.

Some might already have spotted the underlying danger in this practice, but to achieve a little more thrill, I’ll start telling a completely different story.

The unpredictable standard
The keyboard in front of you has an interesting story: the so-called QWERTY standard was originally developed for mechanical typewriters, replacing the hunt-and-peck system (requiring two actions for every key) that was dominant at the time. In the absence of standards there was plenty of freedom in choosing how to place the characters on the keyboard, and the main reason that led to the odd QWERTY layout was to protect the underlying mechanics. Put another way, it was designed to slow down typing to avoid collisions between mechanical parts. Well, it was still a lot faster than the previous system, but the decision was to sacrifice part of the potential for short-term needs. If you are interested in a full review of the QWERTY story, please read this article.

History did the rest. The QWERTY standard survived far beyond expectations, and once mechanics were no longer a bottleneck, the standard itself became the next one. But every attempt to move a step forward failed, due to the critical mass achieved by QWERTY users.

Drawing conclusions
Many things, and code is one of them, persist longer than they were intended to. Within a single project lifecycle, the thing you don’t want is an anti-pattern calling for refactoring. The most efficient way to shield yourself from this is to provide the team with a bullet-proof prototype that developers can sack in the usual way. This won’t ensure that the team got it right, but will greatly reduce the diffusion of “alternative solutions” to already-solved problems.

