Thursday, March 30, 2006

Test Driven Development and the Romantic Programmer


Stressing test activities is part of my usual job as a consultant. Sometimes we can push as far as the test-first practice recommended by XP and TDD; sometimes we just promote xUnit test cases to first-class project artifacts, meaning that an iteration is not complete if there's no working test in place to verify it. Testing is different from developing in many ways – honestly, you need a different mindset to be an effective tester – but after learning a couple of tricks, developers easily adapt to the new methodology.
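To make "a working test in place" concrete, here is a minimal sketch of such a first-class test artifact, written JUnit 3 style; the Invoice class and its behaviour are invented purely for illustration, not taken from any real project.

    import junit.framework.TestCase;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical domain class, included only to keep the example self-contained.
    class Invoice {
        private final List<Double> amounts = new ArrayList<Double>();

        void addItem(double amount) {
            amounts.add(amount);
        }

        double total() {
            double sum = 0.0;
            for (double amount : amounts) {
                sum += amount;
            }
            return sum;
        }
    }

    // The kind of test case that can act as a first-class project artifact:
    // the iteration is not complete until tests like these run green.
    public class InvoiceTest extends TestCase {
        public void testEmptyInvoiceHasZeroTotal() {
            assertEquals(0.0, new Invoice().total(), 0.001);
        }

        public void testTotalSumsAllItems() {
            Invoice invoice = new Invoice();
            invoice.addItem(100.0);
            invoice.addItem(20.0);
            assertEquals(120.0, invoice.total(), 0.001);
        }
    }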

Even if you can achieve some success with heavy testing practices, you can't leave the field unattended. What I observed in many cases is that quite a few developers tend to shift back to the usual methodology, even after recognizing that test driven development was better. There is no rational explanation for this, except that the old way was more exciting. I know what they mean: in some way it's gambling applied to software development. It's the thrill of fixing a bug by changing a couple of lines of code and deploying it straight to production, or of assembling an integration build just the night before the final release. It's a thousand miles away from being professional, but it's the same feeling as the good old university days, or the first garage programming experience. Some people are in search of exactly this thrill, and that's the reason they like this job; they probably get terribly bored if the system does nothing but grow smoothly, milestone after milestone, without a complete crash in the middle – and a miracle hack recovery, of course…


Sunday, March 26, 2006

UserFriendly Strip Comments

Found this a few days ago while browsing Raymond Chen's blog. It's not that far from reality...

UserFriendly Strip

Tuesday, March 21, 2006

Wiki Collective Ownership


Collective ownership of the wiki content, aside from sounding "too liberal" to many managers I know, has a limitation you should be aware of. In an XP development team, there are some factors that help make collective ownership effective.
  • Team size is relatively small, and the team often shares the same space: clarifying or correcting a piece of published information is a quick action, involving no ceremony;

  • Teams have a well-defined common goal, meaning that "what is good for the team" is close enough to "what is good for the individual". Frequent role switching also helps reinforce this perspective;

  • Development teams are constructing a system. Crafting something and seeing it grow with everybody's contribution helps people feel part of a greater whole, reinforcing the sense of community.
You can't expect these factors to be equally effective when you promote these tools at company level. In an average company, individual profiles and goals are much more varied than within a single development team, and they also have a longer timescale (XP-style documentation often provides little support for the next project).

Users and contributors
Yet the Wikipedia success story might make things look easier than they actually are. The underlying model sounds like this:
  • some pioneer contributors start writing about topics they know;

  • readers benefit from the information, and if they have something to add to make it more complete, they are invited to contribute.
What's missing here is that this model works on big numbers: let's say one contributor for every thousand readers (I have no stats, but you get the idea). Wikipedia is a worldwide service, so the approach works anyway. The same might apply to Linux: I know many Linux users, but I don't think any of them has ever added a line of code to it. So size is probably factor number one if you are expecting "spontaneous" contributions to the content.

On a smaller scale you won't get the same result unless you really have a shared "vision" and/or a common "goal", and you actively push for it.


Thursday, March 16, 2006

Wiki spontaneous structure gallery


Sort of a minor post here. While writing the follow-up to the last post, I found myself defining a classification of the structures wikis tend to assume when left to grow in an uncontrolled fashion. I was tempted to call them patterns, but they're far from being proven solutions, and often they tend to be part of the problem instead.

Tree-like structure
This is the most common shape when people are told to use wikis. Information is put on a deeply nested page, along a path whose intermediate pages contain nothing but links. The overall structure is no different from a deeply nested directory tree, but it's probably not an efficient way to use a wiki:
  • if you are exploring the wiki you have to click quite a few times to find meaningful information;

  • the information is always in the leaves, never in the nodes that point to them; sometimes the information is only in an attached document;

  • a nested directory tree is one person's interpretation of a structure for a set of information that doesn't yet have one. Put this way, it is a somewhat arbitrary act: I've never seen two people agree on a directory structure "at a glance", simply because people's brains are organized differently. It's easier to agree on a subject than on the way we file it in our personal mental categories.
You can bypass this limitation by accessing information through a search facility, but a wiki full of meaningless pages doesn't make the medium appealing at all. Like it or not, you have to make your wiki look useful in order to get the community to use it.

Blog-like structure
In this case, the author puts all the needed information on a single page that keeps getting bigger and bigger. Aside from readability issues, this approach fools feed readers, which keep marking the page as modified, leaving the reader with the burden of discovering where the page actually changed.
This structure might fit sequential information (like the sequence of steps in a how-to document), but tends to be discouraging for readers. Linking to the page as a memo also turns out to be less efficient, because you're linking to the whole page and not to the relevant section.

Document Aggregators
If you have a lot of external documentation to make available to the wiki community, aggregator pages can really make things a lot easier for newcomers: they can provide extra information about where to start reading, the context in which the documentation was written, the intended audience, and so on. Adding even a short note when attaching a document can really make a difference.

If the documentation is evolving, then we are probably at risk of duplicating information. Some wikis are oriented towards software development and allow integration with SCM systems; more often, though, attaching a document is a duplicated manual step, so misalignments are always possible.

Hypertext blob structure
This is my personal favourite. The information is divided into several pages, each focused on a single topic. Pages normally contain a lot of outgoing links, some pointing to related pages, some simply acting as "to do" markers. In this way the information is structured as a "pure hypertext" containing links to all the related pages.
Unfortunately, while this structure works fine if you are using it as a help file, it tends to make you lose your grip on where the important information is.
The worst situation arises when you have to linearize the content of the wiki: both structures described before can be represented as trees, with a root acting as a starting point, while pure hypertext has back links that turn the structure into a graph, making linearization harder (see the sketch below). To handle these situations, some wikis provide a more constrained tree-like structure that can be enhanced with extra links that have no impact on the main structure.
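A tiny sketch of why back links matter here, with all page names invented: a depth-first walk linearizes a tree naturally, but once links form a cycle you must track visited pages or the walk never terminates.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Toy linearization of a wiki whose links form a graph, not a tree.
    public class WikiLinearizer {
        public static void main(String[] args) {
            Map<String, List<String>> links = new HashMap<String, List<String>>();
            links.put("Home", Arrays.asList("Build", "Deploy"));
            links.put("Build", Arrays.asList("Deploy"));
            links.put("Deploy", Arrays.asList("Home")); // back link: creates a cycle

            List<String> order = new ArrayList<String>();
            visit("Home", links, new HashSet<String>(), order);
            System.out.println(order); // prints [Home, Build, Deploy]
        }

        static void visit(String page, Map<String, List<String>> links,
                          Set<String> seen, List<String> order) {
            if (!seen.add(page)) {
                return; // already linearized: this is where the cycle is cut
            }
            order.add(page);
            List<String> outgoing = links.get(page);
            if (outgoing == null) {
                return; // leaf page with no outgoing links
            }
            for (String linked : outgoing) {
                visit(linked, links, seen, order);
            }
        }
    }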

With such "fully floating" wikis, a technique I often use to help users navigate complex information structures is to provide one or more access pages that act as reading keys for the whole hypertext. A hypertext might have several of these index pages, each pointing to safe "starting points".


Sunday, March 12, 2006

Wiki Information Patterns


The recent generation of web tools (part of the so-called Web 2.0) offers amazing new ways to share information on the web. New tools enable new ways to shape company processes: software development is the most obvious one, but not the only one. As always, new tools also have a dark side: a tool might turn out to be a great productivity booster in one context and a total waste of time in an apparently similar one.


The shape of the information
The primary advantage of wikis is that they allow you to provide the desired information to a group of users "on the fly". In an agile scenario, useless information is simply not written, because nobody is asking for it.
Another positive side effect of making publishing as easy as possible is that different types of information may find their way to being effectively shared as well. Information no longer needs to be "framed" in a complete document and published only when the whole document is finished, or put on stand-by before being "officially released" because it doesn't fit the standard document structure. In the end, wikis are probably the most effective way to lower the information publishing threshold.

This allows smaller bits of meaningful information to reach the community of readers, but multiplies the question "where should I write this?" a zillion times.

Wikis impose no fixed structural constraints, so the information grows (bloats) spontaneously, like a pseudo-organic being, looking like a logical structure to some and like completely uncontrolled chaos to others.

The communication model
Publishing useful information is still a lot different from ensuring that every consumer is reached by a given bit of information. If you want instant, or official, communication to a group of people, then mailing lists are probably the best way. Wiki information is instead meant to be persistent, available to an open community of users, and consumed on demand. It fits the XP paradigm by ensuring collective ownership of the content, so everybody is free (and usually encouraged) to add extra information, or to correct the current content if it turns out to be wrong.

In this context, a reader looking for information about a given subject has several options:
  • navigate the wiki structure to go where the information should be,
  • search the whole wiki for the desired information, in a Google-like fashion,
  • check what has changed since the last visit, through a "what's new" service or an RSS feed.
Navigating the wiki is probably the weakest option: a wiki is deliberately not meant to have a strong, well-defined structure. The wiki growth model resembles the algorithm for storing data in a balanced tree: pages get split when they start holding too many different pieces of information, so what you are looking for might not be in the same place you found it last time. Conversely, if the structure is not changing much, a visitor might assume that the content hasn't changed much since the last visit either, making the wiki a less attractive place to visit.
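The balanced-tree analogy can be made concrete with a toy sketch; nothing here reflects how any real wiki engine works, and all names and thresholds are invented.

    import java.util.ArrayList;
    import java.util.List;

    // A page that accumulates too many topics gets "split": half of its
    // content moves to a brand-new page, so readers won't find those topics
    // where they used to be - just like entries moving during a node split
    // in a balanced tree.
    public class WikiPage {
        static final int MAX_TOPICS = 4; // arbitrary threshold

        final String title;
        final List<String> topics = new ArrayList<String>();

        WikiPage(String title) {
            this.title = title;
        }

        // Returns the newly created page when a split occurs, null otherwise.
        WikiPage addTopic(String topic) {
            topics.add(topic);
            if (topics.size() <= MAX_TOPICS) {
                return null;
            }
            WikiPage overflow = new WikiPage(title + " (continued)");
            List<String> tail = topics.subList(topics.size() / 2, topics.size());
            overflow.topics.addAll(tail);
            tail.clear(); // removes the moved topics from the original page
            return overflow;
        }

        public static void main(String[] args) {
            WikiPage page = new WikiPage("Build tools");
            for (String topic : new String[] {"ant", "maven", "make", "scons", "rake"}) {
                WikiPage split = page.addTopic(topic);
                if (split != null) {
                    System.out.println("Split! " + split.title + " now holds " + split.topics);
                }
            }
            System.out.println(page.title + " still holds " + page.topics);
        }
    }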

Once you admit that a superimposed structure is not the solution, you have to step back and rely on the search approach, which works whatever the current structure is. The only limit of this approach is that it lets you find things you already know about, but tells you nothing about things you don't know exist.

To "smell" current trends in your community, you'd better switch to the "what's new" approach, perhaps through an RSS feed reader (most wikis support RSS feeds). Unfortunately, this typical Web 2.0 approach might still be unfamiliar to some, and requires a bit of guidance to become effective. Be aware that even in an ideal world where everybody has an RSS feed reader pointed at your wiki, the effectiveness of communication still can't compare to old-fashioned Web 1.0 e-mail.



Thursday, March 09, 2006

Blame and Open Source Software

In the company I work for, open source software is generally the first choice for every new in-house project. But when I am consulting in other organizations, the situation is often quite different: a lot of money is spent on software licensing (even if the software is seldom used), and open source software is regarded with suspicion.

Some time ago, a manager asked me and a colleague: "What if [a big company] starts selling their version of Ant?" We both looked puzzled, but then we got his implicit point: when you buy software, you are in fact paying for the right to blame somebody else if something goes wrong (is this a Chain of Responsibility pattern?), or to call and say "Fix this immediately!". If you adopt open source instead, you save some money (aside from possible learning costs), but you are actually buying back responsibility: if something goes wrong, the only one to blame is you. Some managers accept this risk; some don't, and prefer a defensive strategy such as "we bought the best software on the market, what else can we do?", or have simply decided that persuading their bosses that open source doesn't necessarily mean two teenage hackers in a garage isn't yet a battle worth fighting.
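For what it's worth, the Chain of Responsibility reading can be taken literally; here is a tongue-in-cheek sketch, with every class name and message invented for the joke.

    // Blame travels along the chain until some handler absorbs it;
    // with open source there may be nobody at the end of the chain.
    abstract class BlameHandler {
        private BlameHandler next;

        BlameHandler linkTo(BlameHandler handler) {
            this.next = handler;
            return handler;
        }

        void handle(String incident) {
            if (absorbs(incident)) {
                System.out.println(who() + " takes the blame for: " + incident);
            } else if (next != null) {
                next.handle(incident); // pass the buck along the chain
            } else {
                System.out.println("Nobody left to blame but yourself: " + incident);
            }
        }

        abstract boolean absorbs(String incident);
        abstract String who();
    }

    class InHouseTeam extends BlameHandler {
        boolean absorbs(String incident) { return incident.contains("in-house"); }
        String who() { return "The in-house team"; }
    }

    class Vendor extends BlameHandler {
        boolean absorbs(String incident) { return incident.contains("licensed"); }
        String who() { return "The vendor"; }
    }

    public class BlameChain {
        public static void main(String[] args) {
            BlameHandler chain = new InHouseTeam();
            chain.linkTo(new Vendor());
            chain.handle("licensed app server crashed");   // ends up with the vendor
            chain.handle("open source build tool failed"); // comes back to you
        }
    }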
