Agile methodologies, particularly XP, strive for better process efficiency by reducing the required documentation to a "no more than sufficient" level (among other things, of course). Having a relatively small team of people who get along well, and putting the team members physically close together, should provide the right background for informal but highly efficient communication to spread between team members.

What "spontaneously" happens is something slightly different: communication does happen, but developers aren't saying the right things. They're asking each other things like:
- How do you configure this?
- How do you set up the environment?
- This page looks odd; how did you solve it in yours?
These are good questions, but they could be better answered by an agile HowTo or a script. Or they are problem-determination questions, which are good for sharing knowledge but still look backward. What is less likely to happen is efficient communication between teammates about what really matters (to me, at least): putting the pieces together to produce software that works. Developers tend to focus only on their own task, asking somebody else only when they are stuck. Otherwise development proceeds silently, or with headphones on, wasting the whole competitive advantage of physical proximity.
Constructive communication

Adding cooperative documentation did not prove to be an efficient glue. What did was re-assigning tasks in a way that forced teammates to cooperate: instead of having developers work simultaneously on two separate parallel tasks (which is what you normally do to avoid deadlocks), I asked them to work together on the same short task. The result was far less messy than expected; in fact we had working code sooner than expected. There was deadline pressure, to be honest, but it didn't result in overtime. Instead it produced a spontaneous design discussion about the classes to be shared, something I hadn't seen in a while.

Some might say that I just discovered XP's hot water: pair programming and continuous integration. That is partially true, but we still aren't doing real pair programming (for a lot of reasons), while we normally do have continuous integration practices in place. Still, what we are talking about here is integration at the communication level. Asking is only one form of communication, and a pretty primitive one (and if DeMarco's theory about the state of flow is true, asking is a sort of stealing). Forcing tasks to overlap might look like a suicidal choice, but it instead has positive effects on development speed and on team mechanics.
Tags: Project Management, Agile, XP
Right after publishing this post, I got the feeling that something was missing, something that could still be extracted from the IKEA metaphor. Mounting the drawer handles was indeed only the second part of the job, after building the whole closet (well, in fact you could have chosen another way too: mounting the handles first, or simply drilling before assembling the whole thing).

So, how do you do that? You just unpack the pieces and start following the instructions. Don't believe the folks who blame the quality of IKEA manuals; they're pretty good: only drawings, no text (to get rid of i18n issues), but if you follow the plan you can't miss. The drawings show which side goes up and which part to start from, reducing the degrees of freedom you would have doing even the simplest task without a plan. This way IKEA avoids maintaining a huge F.A.Q. section answering things like "how to attach legs to a closet after you have filled it up" and so on. What's the difference from coding? Many developers tend to favour copying a colleague's code over following a detailed HowTo. A good reason for that is that you can ask the author of the code for clarifications if needed (which is efficient on a small scale, but not on a large one). A not-so-good reason is simply in the developer's mind: following a plan might be easy and lead to predictable results. Put another way, it's boring. Fun is solving a problem, and if there's no problem there's no fun.

Some might already have spotted the underlying danger in this practice, but to build up a little more thrill, I'll start by telling a completely different story.

The unpredictable standard

The keyboard in front of you has an interesting history: the so-called QWERTY standard was originally developed for mechanical typewriters, replacing the hunt-and-peck approach (requiring two actions for every key) that was dominant at the time. In the absence of standards there was plenty of freedom in choosing how to place the characters on the keyboard, and the main reason behind the odd QWERTY layout was to protect the underlying mechanics. Put another way, it was designed to slow down typing to avoid collisions between mechanical parts. It was still a lot faster than the previous approach, but the decision was to sacrifice part of the potential to meet short-term needs. If you are interested in a full review of the QWERTY story, please read this article.

History did the rest. The QWERTY standard survived far beyond expectations, and once mechanics were no longer a bottleneck, the standard itself became the next one. But every attempt to move a step forward failed, due to the critical mass achieved by QWERTY users.

Drawing conclusions

Many things, and code is one of them, persist longer than they were intended to. Within a single project lifecycle, the thing you don't want is an anti-pattern calling for refactoring. The most efficient way to shield yourself from this is to provide the team with a bullet-proof prototype that developers can sack in the usual way. This won't ensure that the team gets it right, but it will greatly reduce the diffusion of "alternative solutions" to already solved problems.
Tags: Project Management, Agile, XP
Everybody knows that the average developer tends to (largely) underestimate development time, or to run out of the estimated time. A less investigated phenomenon is that, despite all the evolution in IDEs, developers also instinctively tend to overestimate refactoring time. When asked to change some features of existing code (except their own), estimates always jump up to the worst possible scenario on earth. There are several reasons for this; the most obvious one is a lack of trust, meaning that you don't trust somebody else's code. Having automated tests in place might of course help, but even if it makes the operation safer, it doesn't have a direct effect on the estimates: a 20-minute job is still perceived as a "half a day" job, while repeating a stupid 30-second workaround hundreds of times is not perceived as a waste. Still, I don't think this has much to do with safety and trust, but a lot more with the taste developers acquired when they were learning Java. I remember the "good old days" when even renaming a single class was a mess (remember JBuilder 1.0?) due to the classname=filename.java constraint, plus naming convention issues and OS issues as well. Well, let me say one thing: THOSE DAYS ARE GONE. It takes a minute to use a refactoring command to rename a class or move it to another package, and this is true in all the most used IDEs, even the free ones.

Not all refactoring tasks are that simple: changing an attribute from one type to another might require some extra work where IDEs stop being that helpful. But even in those cases, if you have a bunch of red compile-error markers in your code and solve them one by one, you are not going to lose that much time. You are going to lose a lot if your starting design was poor (OK, we have a code smell here), but if you have an average level of encapsulation in place we are still talking minutes, not hours.
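To illustrate the encapsulation point, here is a minimal sketch (the class and field names are hypothetical, not taken from any real project): switching a private attribute from double to BigDecimal stays confined to the class that owns it, and any remaining breakage shows up as obvious compile errors at the accessors rather than having to be chased across the codebase.

```java
import java.math.BigDecimal;

// Hypothetical domain class: the "total" attribute used to be a double.
// Because the field is private and callers go through methods, switching
// the type touches this class and the call sites of its accessors only,
// and every affected spot is flagged by the compiler.
public class Invoice {

    private BigDecimal total = BigDecimal.ZERO; // was: private double total;

    public BigDecimal getTotal() {              // was: public double getTotal()
        return total;
    }

    public void addLine(BigDecimal amount) {    // was: addLine(double amount)
        this.total = this.total.add(amount);    // arithmetic rewritten once, here
    }
}
```

With a public field instead, every class that touched the total directly would need the same rewrite, which is exactly the poor-starting-design case where the estimate really does explode.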
The domino effect

What's worse in this situation is that it lays the groundwork for development speed getting stuck in the hell of spaghetti code. The domino effect goes like this:

1) somebody writes some bad piece of code, maybe marking it as a todo;
2) somebody developing another use case looks for guidance in that code and copies the anti-pattern;
3) now the anti-pattern has spread like a virus, and shows heavier effects;
4) correcting it keeps being delayed, because there is not much time to correct it (for some obvious reasons, like the time that was lost before).

Hmmm, sounds like an XP evangelist paper, and honestly it is. The good news is that the domino effect also works in the other direction, but you have to try it before you believe it.

- Well-designed code makes refactoring tasks faster (which is a little different from the common belief that the first XP iteration should produce crap, just to be refactored later).
- A good starting point helps a lot, because younger developers are going to copy (if not reuse) a solution anyway, so it's better that they take inspiration from a good one.
- Tests build the safety net for refactorings, speeding up change time (see the sketch after this list).
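As a small illustration of that safety net (hypothetical names again, JUnit 4 assumed), a test like the following pins the current behaviour of the Invoice class sketched above, so a refactoring that breaks it fails immediately instead of surfacing later in somebody else's use case.

```java
import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import org.junit.Test;

// Hypothetical characterization test: it documents what the code does today,
// giving the team the confidence to refactor without worst-case estimates.
public class InvoiceTest {

    @Test
    public void addingTwoLinesSumsTheTotal() {
        Invoice invoice = new Invoice();
        invoice.addLine(new BigDecimal("10.50"));
        invoice.addLine(new BigDecimal("4.50"));
        assertEquals(new BigDecimal("15.00"), invoice.getTotal());
    }
}
```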
Only after you have applied changes quite a few times will the estimates start to shrink. If you're going for refactoring with a "shy" team, my suggestion is to go for it regardless of the opposition, and possibly make a bet out of it to turn it into a memorable lesson. Or use the heaven-sent extra time to throw a party.
Tags: Project Management, Agile, XP
I finished my last post grunting like an angry old man. Time to stop whining and start proposing something. OK, here's what I do: as a rule in my projects, I mark parameterless constructors in domain classes as @deprecated. This way I can monitor bad coding habits without affecting frameworks that need access to the default constructor. It normally raises questions, so I get a chance to explain to developers why I consider it an anti-pattern and to find a solution together (normally a robust constructor or a factory).

Another possible approach is to turn it into a check in a source code quality checker such as Checkstyle. That eases the burden of manually adding the deprecation comment to all the classes, but it requires the team to be familiar with the tool, while deprecation should already be in every developer's scope.
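Here is a minimal sketch of the rule described above (the class and field names are made up for illustration): the no-argument constructor stays available for frameworks, but any developer calling it from application code gets a deprecation warning and a pointer to the intended constructor.

```java
/** Hypothetical domain class showing the deprecated default constructor rule. */
public class Order {

    private String customerId;

    /**
     * Needed only by frameworks that instantiate the class reflectively.
     *
     * @deprecated application code should use {@link #Order(String)} so that
     *             an Order can never be created in an invalid state.
     */
    @Deprecated
    public Order() {
    }

    /** The robust constructor developers are expected to call. */
    public Order(String customerId) {
        if (customerId == null) {
            throw new IllegalArgumentException("customerId is required");
        }
        this.customerId = customerId;
    }

    public String getCustomerId() {
        return customerId;
    }
}
```

The @Deprecated annotation makes the compiler and the IDE flag every caller, so the monitoring comes for free in the build output.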
Tags: Java, OOP, AntiPatterns