Monday, March 23, 2009

The mechanics of continuous integration

I was reading this nice post about Continuous Integration, from Distilled Brilliance, by John Feminella (thanks to Marco Abis for pointing me to it, via Twitter), and I have to say I wholeheartedly agree: setting up a dedicated CI machine is absolutely pointless if the underlying mechanics and discipline aren't addressed.
  • Unless the team moves to a smaller-increment policy, some of the advantages of CI won't materialize.
  • To make consistent progress with small increments, TDD is your best friend.
  • If the team is using a CI server, but team members don't quickly fix the build when it breaks, then you're probably lost (I know this is rather obvious... but...).
  • If the team is using a CI server, but nobody checks the build status before updating the local version of the code, you're probably lost again.
  • The team should be working on the trunk, not on separate branches.
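The "check the build before you update" discipline can even be scripted. Here is a minimal sketch, assuming your CI server exposes its status as a simple dictionary; the field names are hypothetical, so adapt them to whatever your server actually reports:

```python
def safe_to_update(build_status):
    """Decide whether it's safe to update the local working copy.

    `build_status` is assumed to be a dict such as a CI server
    (CruiseControl, Hudson, ...) might expose, e.g.
    {"last_build": "success", "building": False}.
    """
    last_ok = build_status.get("last_build") == "success"
    in_progress = build_status.get("building", False)
    # Update only from a green, stable build: a broken build should be
    # fixed first, and a build in progress might still go red.
    return last_ok and not in_progress
```

Hooked into a tiny wrapper around `svn update` (or your VCS of choice), something like this turns the "look at the build light first" habit into the default, rather than an act of individual discipline.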
As John pointed out, it's much more a matter of discipline than of practice. And discipline takes time and effort. In this case it takes trust and coordination. The same kind you might find in a crowded bar at rush hour, where many bartenders communicate quickly and find things exactly where they should be, by adhering to team standards (such as "the small dish calls for an espresso" while "the large one calls for a cappuccino").

It takes time to get there, but once you're there, the benefits are great.

Friday, March 20, 2009

Toxic Code

These days, the word toxic is often associated with investments hidden in some bank vault: their risk has been underestimated, and the result is a backlash with devastating effects.

I think many institutions have the same toxicity problem in their software, but still fail to admit it: key portions of their software act like small time bombs, quietly undermining their businesses.

There are countless examples of what I consider toxic code; here is a small checklist some might be familiar with:
  • Useless code: it’s there, but nobody uses it.
  • Untested code: it’s there, it probably works, but nobody wants to touch it.
  • Valueless code: software which is not delivering any business value.
  • Annoying code: every change to the system takes more time than is reasonably necessary.
  • Annoying software: the code is “working”, but in a way that makes things harder instead of easier. Users waste time every time they perform an operation.
  • Leaking software: key portions of the application are not tested enough, and some exceptions are unhandled, resulting in random, untraced errors.
  • Countdown software: software with a time-related flaw that will wreak havoc at a given point in time.
  • ...(add yours here)
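The "leaking software" item deserves a code-level sketch. The example below is entirely hypothetical (names included), but the first function should look painfully familiar: the exception is swallowed, so the failure resurfaces later as a random, untraced error somewhere else.

```python
import logging

logger = logging.getLogger("payments")

def charge_leaky(account, amount):
    # Toxic: the exception is swallowed, the caller believes the
    # charge succeeded, and nothing is traced anywhere.
    try:
        account.charge(amount)
    except Exception:
        pass

def charge_traced(account, amount):
    # Safer: the failure is logged with its stack trace and re-raised,
    # so it is both visible to the caller and traceable afterwards.
    try:
        account.charge(amount)
    except Exception:
        logger.exception("charge of %s failed", amount)
        raise
```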

Well... it’s basically the same idea as technical debt (Ward Cunningham posted an excellent short video on that - thanks to Piergiuliano Bossi for pointing it out), which should be familiar to any agile developer and was coined as an analogy to the financial world. But my worries relate more to the business side of software than to its production process. Ideally, we should have 100% trust in our IT systems (don't laugh please... I said "ideally"), because every time one of them isn’t working there is waste somewhere (be it time, money, or a customer abandoning the company).

Sometimes companies are aware of the real quality of their running software, and decisions to fix or improve specific portions of the running application are just the result of conscious, strategic budget allocation. More often, this is not the case: like toxic assets, software with risky behavior is treated like normal production code, underestimating the associated risks and drawbacks.

Anyway, even when companies do take into account the costs of a less-than-optimal solution, they generally consider only the direct effects of the choice. Something like “this feature might be more user friendly, but it will cost us $xxx to rewrite and the possible revenue will be only $yyy, so it’s not worth developing”. Which is reasonable, but it doesn’t consider the cost of not developing it, which is shifted onto the users: they get a less-than-optimal experience (well... sometimes a miserable one) and waste time and/or money every time they use the system. It's like polluting: if you don't count it, it's cheaper to do dirty things.

I am still intimately convinced that computers should make life simpler and not more complicated. And I am also convinced that they can.
For some reason, they don’t.


Monday, March 02, 2009

SOA desperately needs DDD

Talking with colleagues involved in SOA projects, I often have the feeling there is a big hole in the perception of what SOA is really meant for, and of what it takes for SOA projects to be successful.

I do believe that SOA is a great idea; it just sounds like large-scale common sense. But I also believe that “large-scale common sense” is an oxymoron. Many dysfunctions in SOA projects are closely related to what normally happens in large-scale projects, and in many cases those dysfunctions turn the whole effort into a big loss of money.

A common mistake is to focus primarily on the architectural aspects of a SOA implementation. Which sounds like a rather obvious thing to do, considering what the A in SOA stands for. Unfortunately, this approach often ends up narrowed down to large-scale OOP, only with a different terminology. Services (and their underlying models) are not reusable the same way objects are (and if you remember the times when “reuse” was the buzzword and OO was hype, you probably know that object reuse also turned out to be a lot different from the premises).

Behind its interface, a service is implemented according to a specific model. SOA makes no assumptions about the model and allows for different implementation strategies: so far so good. Still, one of the primary drivers for SOA is the need to rationalize the enterprise landscape, by removing unnecessary duplication and enforcing reuse of enterprise-level services (we’re still in “large-scale common sense” territory here). Too often, the attempt to rationalize the landscape goes too far, trying to involve the model in the process as well. This is often linked to the way SOA is conceived and implemented within your organization: a policy like “every significant entity will have to be wrapped by a service” will likely lead to an enterprise-scale CRUD moloch.

Enter Strategic Domain Driven Design
The point is that a service might be a significant reusable entity throughout the enterprise, while a model might not. A model should be the optimal solution to one specific problem, valid within a specific context. The context must have boundaries to allow for an optimal solution. An enterprise-level model will rapidly become blurred, and some key abstractions will start to serve too many different purposes.

What is a Customer within an enterprise? Can you really find a one-size-fits-all model to represent a customer within the many contexts that must be supported by your enterprise-scale SOA? My take on that is ...nope. Well, you’ll always be able to find some trivial entity that can be the same throughout the whole enterprise, but the odds will be against you if you start looking at non-trivial entities that exist in multiple domains.
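Strategic DDD’s answer is to keep one model per context and translate explicitly at the boundary. A hypothetical sketch (the contexts, class names and fields are all made up for illustration):

```python
from dataclasses import dataclass

# The same real-world customer, modelled twice: each context keeps the
# model that is optimal for its own purpose.

@dataclass
class SalesCustomer:
    """Sales context: cares about contacts and opportunities."""
    customer_id: str
    name: str
    contact_email: str

@dataclass
class BillingCustomer:
    """Billing context: cares about invoicing and fiscal data."""
    customer_id: str
    legal_name: str
    vat_number: str

def to_billing(sales, vat_number):
    """Explicit translation at the context boundary, instead of one
    blurred, enterprise-wide Customer shared by everybody."""
    return BillingCustomer(
        customer_id=sales.customer_id,
        legal_name=sales.name,
        vat_number=vat_number,
    )
```

The translation function is where the two contexts meet; everything else stays local, so each team can evolve its own model without negotiating every field with the rest of the enterprise.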

In strategic Domain-Driven Design there are some key principles that address this situation.
  • “a model serves a specific use”: a model is a tool, and to be perfectly shaped for a specific use, the use must be well defined. Like a tool, a model can’t be really effective if the same object is used for many different purposes. Also, an effective model must be kept small enough to be coherent and manageable by a single skilled development team.
  • “a model lives within a well-defined context”: contexts and their boundaries are really important to define a coherent model. There are entities that can be used in different contexts, but sharing the same entity is not necessarily the best way to address the problem. Often the drawbacks are heavier than the advantages.
  • “there will always be multiple models”: despite this sounding rather obvious in a large-scale SOA, a lot of effort is often dedicated to fighting this situation, with minimal chances of winning.

A typical problem with SOA is that implicit communication costs are rarely accounted for. Sharing the vision of a model within a development team already has a cost, which can be kept small enough if the team size is reasonable. Having the same vision within a five-person team is feasible. Sharing the vision among 40 people (or more) from different consulting or body-rental companies (which is a common scenario in large-scale SOA development) is pure utopia.
