Friday, December 29, 2006

A day in a consultant's life...

Some days my job really looks like this:
Customer: “We are planning to build a LEGO spaceship to land on Saturn. Will you help us?”
Me: “Well… you can’t do that!”
Customer: “That’s why we hire consultants! We need expertise.”
Me: “But… You can’t build a LEGO spaceship to go to Saturn!!”
Customer: “Why not?”
Me: “You cannot seal the life module! You can’t build thrusters in Lego! Lego plastic will melt down when passing through the atmosphere!”
Customer: “See? You’re an expert! Come on let’s go”

While I normally admire this type of Blutarski-like attitude, when I am at work it really scares me….

Sunday, December 03, 2006

Signed the Agile Manifesto

Today I signed the Agile Manifesto. Something I should have done long ago...


Monday, November 20, 2006

Technorati Tagger Greasemonkey Plugin

Found this script that allows me to quickly add Technorati tags from the Blogger beta interface. You need Greasemonkey installed in your Firefox.

Sunday, November 05, 2006

Protecting sensitive data on test environments


James McGovern raised the issue in this post. I don’t think there is any such survey about the situation in Italy, but in my own experience as a consultant I have found only one customer that really cared about this stuff.

To be honest, sometimes the customer doesn’t even worry about the need for a fully equipped test environment, so we have to lower expectations: we get what we get and we have to be happy with it. So far we have only dealt with banking or insurance customers’ data: I wonder what developers would do if they had the opportunity to manage some more sensitive data (we don’t have fashion agencies or videogame manufacturers as customers, so I’ll never know).

Unfortunately, Italian law about sensitive data management is quite ambiguous, stating more or less that you should do everything possible to protect your data, which is …err, everything. So I could theoretically be sued because I didn’t use Navajo code talkers to translate my Skype conversations, or because I don’t keep my hard disk stored in the depths of a mountain. Which is something I could do, but it simply doesn’t make sense for the type of data I am currently managing. The overall result is that every simple software application that holds some personal data (almost everything, except the MP3 player) could be considered illegal. Spending money on improving security will make you just a little less illegal, so it’s pointless. So many simply don’t care, or wait for the next big scandal to learn where exactly the legal limit lies.


Monday, October 23, 2006

Worst marketing ever?


Quite often, while introducing quality assurance, testing and TDD to our customers or colleagues, I get some weird responses… something like: “Yes, we tried JUnit, but it can only do unit tests, and we need integration tests as well” or “we started using JUnit, but we dropped it, because we didn’t have the time to write a test for every single method in every class of the system”.
Hmmm, clearly somebody got it quite wrong, because JUnit is a testing framework, and it just takes the right entry point (a façade, a web service, a main method) to have it run integration tests as well. Half of the people are misled by the name (but I never heard anybody saying that JBoss is only for smuggling applications…), the other half by the online documentation, which does nothing to prevent new readers from getting it wrong.

For many, the starting point is simply no tests at all. So the best possible choice is to start implementing the most useful tests, namely those that catch the highest number of errors. This means putting the interception point at the surface of the application (be it a presentation tier or a public API); starting from the small, maybe testing all the getters too, provides very little value, and doesn’t focus on the way components interact to provide the application behaviour.
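
To make the point concrete, here is a minimal sketch of JUnit driving an integration-style test through an application façade. OrderFacade and its methods are hypothetical names I made up for illustration, not a real API; the point is only where the interception point sits.

import junit.framework.TestCase;

// Hypothetical façade standing in for the real application surface;
// in a real project it would front the whole stack (services, DB, ...).
class OrderFacade {
    private int stock = 10;
    void placeOrder(String customer, String item, int quantity) { stock -= quantity; }
    int getAvailableStock(String item) { return stock; }
}

public class OrderProcessingIntegrationTest extends TestCase {
    public void testPlacingAnOrderReducesStock() {
        // The test exercises the behaviour behind the façade,
        // not a single getter in isolation.
        OrderFacade facade = new OrderFacade();
        facade.placeOrder("customer-42", "ITEM-7", 2);
        assertEquals(8, facade.getAvailableStock("ITEM-7"));
    }
}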

Unit tests provide value too, because they help in the problem determination phase, but when you have only a few resources for testing (as is sadly often the case) you want them to check the overall application before shipping, and not just some random component.


Sunday, October 15, 2006

Plea of a Together Orphan


Despite all the criticism surrounding it at JAOO, I still work with UML sometimes (not that often, I am currently stuck between Excel, MS Project and – if I am lucky – PowerPoint). When asked which UML tool to use, I used to answer “Together is the best choice, if you can afford it”. But in the last couple of years I have simply tried to evade the answer.
In the recent past, Together proved itself the best for mainly three reasons:
  1. Full roundtrip – despite some “not pure UML” tricks, roundtrip really worked, so you could really keep the model and the implementation code in sync. You needed an OOP-aware modeller to get all the benefits from it, but that’s what I used to be. Rose’s reverse engineering procedure seemed terrible compared to this.

  2. Code inspection features – the audit section was pretty interesting and made Together a great tool for inspecting an unknown big project. As a consultant I used it proficiently, in a CSI fashion, every time I had to explore somebody else’s code (I am not a zealot in applying all of the rules, but you get quite a good outlook of the coding practices, or malpractices, the team used to adopt).

  3. Keyboard shortcuts – after a while, it really took me seconds to come up with a small domain model, almost without using the mouse. Adding classes, attributes, methods and interfaces was all done via keystrokes. Given that the classes’ source code was actually created at the same time, creating domain classes was faster than with an IDE. I ended up using it as a whiteboard while discussing the model with colleagues, because it was faster than the actual whiteboard.
Clearly there were things that you couldn’t really do with it, even if they were available, like designing a GUI (Swing or Web), debugging, and so on. There were also a lot of annoying small bugs in generating PDF documentation, cutting and pasting images, and so on.

Standing the test of time
Unfortunately, things never remain the same – panta rei – and so it went for CASE tools as well. Well, Rose didn’t change that much in a few years, showing the same inconsistencies in the user interface over and over again. Together, meanwhile, blew it completely: they restarted from scratch on the Eclipse platform. This was not a bad idea per se, given the weight of the original user interface, but they dropped all the keyboard shortcuts, and even if the GUI looks a bit nicer, it takes more and more time to come up with a model, because you’re always selecting-dragging-dropping, exactly the thing that made a lot of developers hate CASE tools in the first place. Auditing tools are not such an advantage anymore, when you can include Checkstyle or PMD checks both in the IDE and in the build script, so you end up with only the roundtrip… but at the same price.

The only tool that has really taken modellers’ productivity seriously so far is MagicDraw UML, which added some nice features to the user interface that actually make drawing a model much faster. Still, I can’t help myself… I don’t like it, it looks like a toy to me. It’s just a look-and-feel thing… The guys at Visual Paradigm tried to replicate the solution, but they got it from the wrong side, and everything there looks counterintuitive, but shiny (they have plenty of colours and gradients, I guess it’s UML in Colors 2.0). Sparx’s Enterprise Architect looks promising, at an interesting price, but it’s not yet what I would like to have.

Back to basics
Given the new landscape, I’ll have to find another tool… I don’t need (and honestly I hate) code generation tools. I need only a fast modelling tool, so I am switching back to the whiteboard, and I’ll be using a camera to take pictures of it. Hope it’ll be readable.


Saturday, October 07, 2006

Back from JAOO conference


Just came back from Aarhus, Denmark, where the JAOO conference was held, and spent the day trying to catch up (recover) from all the things (mess) that happened while I was away.
The conference was great! I had the pleasure to listen to some very cool speeches and meet some nice folks as well. There are plenty of things to blog about, but I’ll start by making a brief summary of the overall trends I smelled while jumping from one track to another.
  • UML (down): there’s been no hype at all, and notably some sarcasm about the latest evolutions of UML. The idea that UML is the way to design software via cool tools looks definitely doomed. UML should be used only as a way to communicate between humans, possibly on a whiteboard.

  • XP (down): though everybody was pretty happy about agile practices in general, there were no XP advocates as speakers, and no hype about XP practices, probably because too many have been touched by the chaotic dark side of a not-so-well-defined XP development process, or simply because XP doesn’t fit all needs.

  • SCRUM (up): despite Jeff Sutherland’s somewhat BORG-ish attitude, he made an impressive speech indeed. It looks like SCRUM might become the mainstream agile methodology, possibly because it can scale up better than XP.

  • Domain Specific Languages (up): they deserved a track of their own. As a general trend, more and more attention is being put on the core domain of software applications, and expressing the domain logic in the most suitable means – such as a specific language that captures and leverages the domain peculiarities – is one way to enhance development productivity and increase the delivered value.

  • Agile Software Development (stable): there was nothing really new in that area, even if the speeches from Alistair Cockburn and Kevlin Henney were simply great. The most important fact is that, when polled, 80% of the audience answered that they were using agile methodologies. Which is pretty high, even if some of them weren’t fully compliant with the agile manifesto.

  • Domain Driven Design (up): the book by Eric Evans is really going mainstream. Eric himself is a great communicator, and his slides were the best presented overall. The main idea is that the real asset of a complex software project is the model, not the technology, and that you should do your best to come up with the best possible model. In a certain way, Rod Johnson’s speech also drew the line on putting real OOP back at the centre of operations.

  • CASE Tools (down): nobody really uses them any more; they sound harmful rather than just useless.

  • MDA (down): same problem. Though MDA could be part of the “back to the model” trend, everybody was very keen to specify that they were not talking about MDA. The overall feeling is that the whole specification is too vendor oriented, and generally far from the needs of the average project.

  • AOP (stable): the good news is that AspectJ definitely works. But apart from massive adoption within a couple of tools, such as Spring 2.0, it looks like it won’t really make it to mainstream development.

  • Ruby (up): a lot of interest in the language, which has been pretty carefully designed to meet developers’ taste. Everybody wants to give it a try.

  • AJAX (up): it’s becoming compulsory mainstream, for both enterprise and Web 2.0 applications. The Google Web Toolkit made a big impact, and we’re probably going to see something really astonishing in the way applications are built pretty soon.

  • ORM (down): everybody looked so bored by the term. The same applies to XML as well.

An Italian version of this post has been published in the online Java magazine Mokabyte. You can read it here.


Saturday, September 23, 2006

The Flawed Interface Principle - Part 2


After re-reading my last post, I felt quite disappointed: the main point could have been squeezed into
  1. the more stuff you add to a system, the more likely you are to bring in mess

  2. if you do things by yourself you’ll probably get a better result (but in more time)

  3. interfaces are dangerous.
Ok, point number one is right, but it really sounds too obvious to deserve a post (I’ll never win a Nobel with that...). We’ll get back to it later.

Point number two is simply dangerous: you simply don’t start a project by developing everything from scratch; you start by choosing the available components on the market (let’s say… JBoss, Hibernate, Log4J etc.). No way am I telling you that you should re-invent everything from scratch! If time is a constraint (and in software development it always is) then the best trade-off is somewhere else; remember that we moved from two-minute microwaved junk food to a 6-8 hour preparation delicacy. Moreover, there is normally no reuse purpose in cooking, while reusable components are the key to successful projects.

Point three. The real meaning is that interfaces are tricky. In OOP theory, an interface is just a substitution point. As a client class, I declare the need for an object instance able to perform certain operations, namely the public methods of the interface. I can’t make any assumptions about the implementing class; I just expect that somebody (a factory method or the classloader) will provide the needed runtime implementation. What I don’t know is exactly what’s behind the interface. Which is both powerful and dangerous.
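
As a minimal sketch of the substitution point idea (all names here are illustrative, not from any real framework):

public interface MessageSender {
    void send(String recipient, String body);
}

class SmtpSender implements MessageSender {
    public void send(String recipient, String body) {
        // the real SMTP plumbing would live here
        System.out.println("SMTP -> " + recipient + ": " + body);
    }
}

class SenderFactory {
    // The client never learns which implementation it receives:
    // powerful (easy substitution) and dangerous (hidden complexity).
    public static MessageSender create() {
        return new SmtpSender();
    }
}

A client only sees MessageSender sender = SenderFactory.create(); whatever lies behind the interface stays out of sight.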

What lies behind
In complex systems, the value of interfaces is their ability to simplify the overall picture. A complex subsystem can be masked by an elegant Façade, allowing designers to ignore implementation details. I’ll make another example with a pretty simple interface: the electricity plug on your wall. It’s just three holes, 220V, 50Hz, and every electric device can work everywhere (at least within a single country). The plug masks the whole complexity that lies behind it, so you can happily ignore three-phase cables, power plants, and so on. Unless they start building a nuclear power plant near your home, or stocking plutonium in your garden…
We are getting closer to the point. Complexity is still there. Interfaces make it more manageable by allowing us to ignore details, not to delete them.

Crossing competence boundaries
While developing complex J2EE applications, a common problem to face is a version clash between (indirectly) required libraries. Xerces is a usual suspect in such circumstances, because it’s often referenced by the application server, by the application itself, and by one or more of the libraries in use. Normally the solution is nontrivial and has to do with fine tuning of the application server’s classloading mechanism. Still, you need an expert J2EE architect to drill into and solve the problem. Sometimes it’s not so easy to find a good one, but at least it’s a well defined professional profile.
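
A quick diagnostic I find handy in these situations is printing where a contested class was actually loaded from; this is plain JDK API, with the Xerces class name being just the usual suspect:

public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Ask the classloader which jar actually provided the class.
        // (getCodeSource() may be null for bootstrap-classpath classes.)
        Class<?> c = Class.forName("org.apache.xerces.jaxp.DocumentBuilderFactoryImpl");
        System.out.println(c.getName() + " loaded from "
                + c.getProtectionDomain().getCodeSource().getLocation());
    }
}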

Things are not that easy when you start crossing competence boundaries. One of the first borders is the Java-SQL one. How many applications have you seen that could be said to be optimized for both database and Java performance?

Things get worse if you start adding more layers, like running in a virtual environment such as VMware instead of a real one. How many details are you missing in the meanwhile?

Problem determination in Babylon
Here we go. We built our beautiful tower by putting freely available components together. The tower is high and majestic. Then we have a problem. If the problem is in the glue we used to put the things together, it’s an easy task: after all, gluing is all we did. If the problem is in one brick, we have to determine where the problem is (which can start to be a nontrivial task), and update, refactor or substitute the faulty component.
If the problem is in the interaction between two or more components, or in what happens behind the scenes (or behind the public interfaces), then you’re in deeper trouble: you need cross-cutting competence, and you probably won’t have it. And it’s not easy to find on the market either.


Saturday, September 16, 2006

Food processing chain and The Flawed Interface Principle


While discussing with a friend, preparing a speech about complex system management, I realized that there is a close analogy between the food processing industry (and its worst effects) and what usually happens in large complex systems.

Grandma’s good old taste
Suppose you want to make some pasta, tagliatelle Bolognese for example, the way you used to like it in your childhood (ok, it’s getting pretty clear that this post is gonna be targeted at Italian geeks who like to cook… a pretty narrow audience niche…).
You can go to the supermarket and buy the ingredients you need, but then you have to make a choice
  1. buy the “all-in-one” package that you just have to warm up (microwave or frying pan)

  2. buy the pasta and buy the sauce, cook the pasta and then mix it by yourself

  3. buy the eggs, the flour, the meat and the vegetables and actually make the pasta and the sauce
If you are a good enough cook, you’ll notice that there’s a close connection between how much time you spend cooking and the final result.

Still, even if good, the result doesn’t taste as good as your grandma’s. The next step is forgetting about the supermarket and buying the stuff straight from the farmer. Suddenly the eggs start to taste different, and so do the vegetables.

Ok, in the end you have to surrender: you can’t beat your grandma at cooking. It’s just an axiom. But sometimes you can get close. The interesting thing is that once you have managed to get that close to the taste of your childhood, you can’t even think about eating option number one at the top of the page. And even if the label (and the pictures) tries to tell you that it’s the same thing, you just know it’s completely different stuff from what you really want.

The question is: “How could it get that far?”
The answer is pretty simple: by systematically substituting the original ingredients with their industrial equivalents. And here is where we get back to software.

The Flawed Interface Principle
The keyword is “substitution”: how could you possibly do that (remember the OOP substitution principle…)? Because every ingredient’s decoy looks or smells (almost) like the original, and if you change only one ingredient the difference can be small enough to go unnoticed.
Put another way, food decoys implement just part of the original ingredient’s interface, generally the look (maybe with the help of some good photographer), but not other, less documented or testable parts, such as smell.
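
A toy illustration of the decoy idea in code (Ingredient and the two tomatoes are made-up names): both classes satisfy the same compile-time interface, but the decoy honours only the documented part of the contract.

interface Ingredient {
    String appearance();   // documented and easily tested: the "look"
    String flavour();      // implicit, rarely checked: the "smell"
}

class FarmTomato implements Ingredient {
    public String appearance() { return "red and round"; }
    public String flavour()    { return "rich"; }
}

class IndustrialTomato implements Ingredient {
    public String appearance() { return "red and round"; } // passes the visual check
    public String flavour()    { return "watery"; }        // contract silently broken
}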


If you start reading the labels of pre-prepared foods, you also realize that they have more than twice the ingredients needed by the original recipe: stabilizers, preservatives, enhancers, colourings (whose role is just to satisfy the visual interface), and so on.

The point is that nontrivial interfaces are seldom complete. Even if they can be thoroughly documented, there is probably nobody checking whether the whole system fits together (at least I’ve never seen it happen in the software development industry). Every component you add to a system has a burden of internal complexity (libraries, for example, but not only) that can’t be completely masked by encapsulation: only 99% of it can.
The net result in complex software systems is that, if you consider that every implemented interface between components has a certain probability of being flawed, you come to the conclusion that every time you add (or, more fashionably, plug in) a component to a complex system, you are getting one step closer to the whole system’s collapse.



Friday, August 18, 2006

Keeping technology standing still


Recalling projects I’ve seen, I realized that there was a strong correlation between the technological landscape of a project and the actual number of developers leaving the project (and the company). The more the technology was fixed, the more it became boring and frustrating for developers. Software developers are not normal workers, but strange animals that actually like coding, and they can do a hell of a job as long as they get some fun out of it. Tom DeMarco’s Peopleware paints a perfect portrait of the software developer and of his peculiar needs.

Trading fun for productivity?
Still, it looks like sort of a bargain: one can choose to keep the technology fixed, to pay less in training and refactoring, while increasing the risk of an early departure from the project. Unfortunately, as the project continues, the chances of this event steadily increase, so you must consider this option carefully. What is somehow hidden is that developers are not at all the same, and the ones most prone to getting bored might be just the most curious, the most brilliant or the most passionate. So if you’re assuming that one in five of your developers might be leaving, chances are high that you’re losing 30% of the workforce instead.

The other hidden part of the bargain is that if somebody leaves in the middle, you have to pay a productivity price in training time as well. If the base technology is outdated, or – worse – proprietary, the learning cost for newcomers will increase over time.

Updating the landscape
Ok, not all projects are intended to be “eternally open” (so this thought can’t be applied everywhere), but if you have a long-living project and you’re trying to save money by stopping its evolution, you are actually creating a huge cost area in the future, which will materialize in the form of “the big refactoring mayhem” or “the application that can’t be changed”. Maybe it won’t be your personal problem as a PM, but either way you end up wasting money in the long term.


Tuesday, August 08, 2006

RAD development and domain modeling


Java web programming has suffered from poor development productivity in recent years, mainly due to the complexity of the presentation layer. As a coordinator of J2EE learning classes, I often found it embarrassing to define a competence stack for web programmers, which included Java, J2EE concepts such as Servlets and JSPs, a bit of XML and HTML (and HTTP of course), some JavaScript, and an MVC framework (so some Design Pattern concepts got in as well), normally Struts. Given that in other languages, such as Delphi or Visual Basic, programming the presentation layer is made ultra-simple, this pile of necessary knowledge really looked scary.

Moving from Struts to JSF

Although the JSF approach is far from being perfect, it’s just a great leap forward compared to the old mainstream Struts:

- at the architectural level, JSF removed the need for an extra layer, making it possible for the presentation layer to manage model objects instead of dealing with extra-flat ActionForm classes. In many cases, the inability to handle data which was not a String led to an extra layer of DTOs simply to handle transportation and type translation between the web layer and the business layer (pretty boring stuff indeed);

- in the IDE, designing a web user interface has become simpler and more productive, since JSF support is built-in and not just a not-so-well-designed plugin. Many components are designed on top of the basic layers (Servlets and JSPs), so you normally shouldn’t have to bother about them.

Data Aware components

JSF toolkits also offer the possibility of using data-aware components in a web oriented paradigm. This way you can expose a RowSet directly on the web page and rely on the data-aware component to synchronize page data with the underlying database. It’s definitely not OOP, but it might come in handy in many cases.

The main message is probably intended for the average orphan Delphi programmer: “your paradigm works in Java as well, and on the web too!”.

Everybody happy?

So it looks like the Java web developer is now given two tools for constructing a simple web application. Anyway, my feeling is that data-aware components are just a bad idea for a developer with a J2EE background.

1) data-aware components are popular in not-so-OOP languages (ok, those languages have OO capabilities, but I’ve never seen a VB programmer define a domain model); in such a scenario your application is just a collection of CRUDs (create-read-update-delete), and this is not exactly a great idea.

2) You have to define a place to put your business logic anyway, and since JSF allows you to use POJOs… why not use the POJOs (see the sketch after this list)? If you prefer to put all of your domain logic in the presentation layer you’re free to do it, but I have to warn you… you’re probably going to hurt yourself.

3) Data-aware components fit well in closed client-server architectures. Here the architecture is not so closed: we’re already on the web, so the chances of exposing a service on another channel (be it a web service or a palm interface) are far higher than before.

4) Tools like Hibernate or Spring really make data-aware components less interesting. Off-the-shelf ORM tools weren’t that popular in the client-server days, but now they really are part of the scenario. It was different when EJBs were the hype, because “domain modelling” really sounded like “business layer”, and this sounded like “EJB” (or worse: “entity beans”), meaning finally “heavyweight mess”.
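
As a minimal sketch of the POJO-friendly style mentioned in point 2 (PersonBean, Person and the save outcome are illustrative assumptions, not a specific toolkit’s API):

class Person {
    private String name, surname;
    Person(String name, String surname) { this.name = name; this.surname = surname; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }
}

public class PersonBean {

    // A plain domain object, not an ActionForm: the page can bind its fields
    // directly to #{personBean.person.name} and #{personBean.person.surname}.
    private Person person = new Person("John", "Smith");

    public Person getPerson() { return person; }

    // Action method bound to a button; domain logic stays in the model,
    // not in the presentation layer.
    public String save() {
        // personService.save(person); // hypothetical service call
        return "success";              // navigation outcome
    }
}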

So, even if some debate is still going on, I really think that data-aware components aren’t that interesting for somebody with a Java background. They might come in handy for newcomers to the Java platform, who can postpone learning some heavy topics in the J2EE landscape for a while, but it really sounds like a short-term choice.



Sunday, July 30, 2006

Why CRUD use cases are intrinsically evil


When performing analysis for a new system, many people find it useful to shrink several use cases into a single one, tagging it with the CRUD stereotype. This choice has a clear advantage in terms of readability of the use case diagram, while the stereotype (often associated with a different color) just triggers a multiplication factor when defining estimates for development time.

Readability and estimates are just a small part of the game when it comes to actually designing the overall system. A use case is intended not only as a measurement of the overall complexity of the system, but also as a useful tool to capture the real goals of the different parties involved in it (for a good exploration of the matter, please read Alistair Cockburn’s Writing Effective Use Cases). Even a simple order-invoice combination cannot be well represented by two CRUDs, due to the many possible interactions between the two data types.

A system made up of a pile of CRUDs usually ends up as a data-centric application, where part of the complexity of the interaction between different data types becomes a burden on the user, instead of a system responsibility. A user-focused use case (or an XP-like user story) should instead capture the flow of the interactions between different data types. If you don’t model what lies between one CRUD and another, you’ll simply have to face it later on, in terms of a cluttered implementation or a not-so-usable application.

Separating use cases concerns
In nontrivial systems I have found it useful to separate the different concerns by defining two layers of use cases:
  • Business Level use cases focus on the user’s interaction with the system; they provide a detailed description of the user’s goal, as well as the other parties’, and span multiple data types;

  • Implementation Level use cases focus instead on the artifacts needed to realize the use case, such as web pages, persistence methods, complex data manipulation and so on, generally tied to one or a few data types.
The first layer depends on the second one for the implementation, and in trivial cases there is no need to separate them.

Testing the resulting system
A not-so-small difference between the two styles is that we can choose which use case layer is best suited to provide the specifications for building our test suite: at the business level we can probably test more with less effort, and catch some awkward combinations of data that tests based on the implementation level couldn’t catch. Of course, the best combination is to define (unit) tests for the implementation level, to be used as bricks for the business level tests.

If acceptance tests are to be run by a real user, they’ll probably follow some path defined in the business level use cases, instead of merely trying to add or modify some data and seeing what happens.

Enter the domain model
Such a modeling style prepares the background for a domain model type of system (as opposed to the data-centric one). If we are to model something which lies in the correlation between two or more different data types, the best place to define it is the domain model, be it in the component behaviour or in a use case controller class. The default choice in the data-centric systems I’ve seen so far has always been the presentation layer… :-(
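
A rough sketch of the use case controller option (Order, Invoice and CloseOrderController are deliberately minimal stand-ins): the correlation between the two data types lives in the domain model, not in the presentation layer.

class Order {
    private boolean closed;
    boolean isComplete() { return true; } // stub: all order lines fulfilled
    void markClosed() { closed = true; }
}

class Invoice {
    Invoice(Order order) { /* derive the invoice lines from the order */ }
}

public class CloseOrderController {
    // The flow spanning both data types is a system responsibility,
    // not something pushed onto the user.
    public Invoice closeOrder(Order order) {
        if (!order.isComplete()) {
            throw new IllegalStateException("Order still has open items");
        }
        Invoice invoice = new Invoice(order);
        order.markClosed();
        return invoice;
    }
}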

A significant exception
Apart from very simple systems, data-centric applications (and thus CRUD-like use case modeling) are the best choice when the overall business processes aren’t completely defined: instead of having a complex, but incomplete, system driving the user (possibly to a dead end), it’s probably better to have the expert users in control, and leave them the possibility to tweak the system as they need (a possibility greatly appreciated in call centers, for example). However, this situation is not a great symptom of the whole organization’s health (though it might be the driving force in a startup, for example); more often it is a sign of poorly done analysis.



Saturday, July 29, 2006

Java Conference presentation available

SUN has published the presentation we gave at the last Italian Java Conference. The downloadable material is here.


Monday, July 10, 2006

FIFA World Cup 2006


Campioni del mondo!!! (World champions!!!)

Campioni del mondo!!!
Campioni del mondo!!!
Campioni del mondo!!!
Campioni del mondo!!!
Campioni del mondo!!!
(Who cares about J2EE....)

Sunday, July 09, 2006

New challenges for the J2EE web developer


Recent technologies such as JavaServer Faces and AJAX are causing a positive earthquake in the Java web technologies arena. As far as I can see, the first victim is the Struts framework: just a year ago it was the de facto standard for a generic J2EE web application and was backed by good support from many different IDEs; now it looks like nobody is starting a Struts project any more.

The leading factor in establishing the prevalence of JSF over Struts has been the pursuit of productivity, which was the weak point of many web application frameworks. Overall team efficiency paid a double fee to open source standards:
  • high development speed was achievable only after long training: the stack of competencies necessary for a generic web developer was definitely too high and fragmented (Servlets, Java Server Pages, tag libraries, HTTP, HTML, MVC and the framework in use);

  • integration with IDEs has never been good: first comes the framework, and then comes some form of support from the IDE, far from sufficient. So the standard situation is to have the IDE perform some sort of trivial task for you and then start digging into runtime case-mismatch errors in some XML file (I guess you know what I mean).

JSF nowadays starts straight from the IDE, offering developers a way to efficiently design web applications that had been forgotten for a long time (most developers I know still find it more productive to manually edit web pages or tags instead of delegating it to the IDE). After a couple of projects, our experience shows that JSF impacts positively on development speed, so… goodbye Struts.

Re-enter usability
In the meanwhile, components are evolving, exploiting the possibilities allowed by AJAX technology. A new level of interactivity is possible on the web platform, where it has been neglected for years (with some niche exceptions like ActiveX, Flash etc.), particularly in the enterprise software field, where you don’t have to attract users to your software (“I pay my employees to w-o-r-k, not to have a satisfactory user experience”).

Now the situation is different: in this scenario web software can be comparable to a fat client in terms of interactivity and usability. The problem is that web developers have been working without these possibilities for years (forced to choose only between combos and radio buttons), forgetting what an easy-to-use user interface should be. The move to a new paradigm has been made only on the technology side, but doing “the same old stuff” with a new framework is only half of the job: the other half is using the new tools to produce better software.

The following months will tell whether this will be achieved by improving developers’ awareness of man-machine interaction mechanisms, or by having software analysts provide a more detailed level of specification in the early stages of the project.


Tuesday, July 04, 2006

Java Conference Afterthoughts


It took me quite some time to recover from the Italian Java Conference. I was involved in a 3-hour seminar and a 45-minute speech. The seminar’s title was “Migrating applications from Delphi to Java”, while the short speech was “Migrating development teams from Delphi to SUN Java Studio Creator”. The night before, I met with Filippo Bosi and Giuseppe Madaffari, my co-speakers, and with Mokabyte’s egghead Giovanni Puliti, to finalize the presentations. As expected, I went to sleep no earlier than 3.00am. And woke up angry and disappointed 4 hours later.

Not so many people attended the seminar, probably because
  • you had to pay SUN 200€ for that

  • James Gosling was speaking in another conference hall
We had some trouble configuring my laptop with the projector; this made us lose some time, which I tried to recover at the end… so we finished late. But then I was the first speaker of the afternoon, and guess what? No lunch (I hate that). The smaller speech was unexpectedly packed with people, probably waiting for the following speech about XP and TDD, I suppose. I regretted shaping the slides in too serious a way for this one, while I had funnier slides in our “private” performance. As a result the speech sounded a bit more boring than I wanted, but I really was too hungry and tired to add extra verve to it.

After the speech (and the long awaited lunch) we hung around the Mokabyte stand, where I bumped into Filippo Diotalevi, who was a speaker on the following day. He has been doing a lot of cool things since I met him at BPM… chapeau!


Saturday, June 24, 2006

Italian Java Conference

I'll be one of the speakers at the Italian Java Conference in Milan next Tuesday. Both events (a seminar and a speech) focus on how to move developers from Delphi to Java, from an organizational and architectural point of view.

By the way, this is mainly an excuse to experiment with Microformats on my blog.


June 27, 2006, 10:00 - 13:00 – Java Conference Seminar, Milano: “Migrare Applicazioni da Delphi a Java” (Migrating Applications from Delphi to Java)

By the way, the result of the experiment was that, after installing the Greasemonkey Firefox plugin, I could see an “Add to calendar” icon on the page. Clicking on it produced a .ics file ready to be imported into MS Outlook. Still, Outlook failed to import it...

Massive Learning Patterns


Recent research on learning patterns in humans and primates led to interesting results: the findings were that while young primates basically emulate their parents’ behaviour, human babies tend instead to imitate their parents’ behaviour uncritically. Put another way, they simply do whatever they see a parent doing. One example was that if the mother switched the light off with her forehead, babies would basically do the same thing, while young primates acted in a more conscious way.

It might look like primates are smarter, but the evolutionary answer is the opposite: imitation is the fastest way to learn a series of complex behaviours in a short time, especially if you’re still too young to understand the reasons. Incidentally, learning by imitation, without asking questions, is the same learning pattern adopted by armies all over the world, and they share the same goal: a lot of coordinated behaviours to be learned in a short time.

From primates to developers
Recently, we shifted some developers from Struts to JavaServer Faces in the presentation layer of a couple of projects. Newer RAD-oriented IDEs (such as Java Studio Creator and JDeveloper) should have had a positive impact on productivity after the initial phase. Still, some of the developers claimed to be stuck in the learning phase, while others, who just skipped the “I-have-to-learn-the-framework” phase and went straight to the code, copying some code snippets and adapting them to the project’s needs, performed significantly better in both the short and the long term.

Mere imitation, or code pasting, isn’t the whole story: babies imitate parents, who have god-like authority over them, so their behaviour is accepted on trust before it can be understood. Similarly, developers look for trusted sources of information from which to extract some proven solution (and the ability to dig the net for it is becoming a distinguishing factor among developers). In a closed context, the perfect solution would be a robust prototype, providing a vertical example of a functionality.

As I experienced while coordinating offshore teams, a good prototype or an architectural skeleton removes whole families of problems, simply by providing a source for cut & paste: “if it comes from the architect it can’t be wrong”. Well… it obviously can be. But even a wrong pattern repeated is better than 20 differently wrong patterns spread all through the application.


Friday, June 16, 2006

Strive for progress observability


One of the key features of iterative development processes is the ability to make work in progress observable. Clearly, different roles have different needs, and different levels of observability are required: for a contractor, milestones on the agreed deadlines are what matter, and day-by-day evolution shouldn’t; a team leader should instead also look at daily changes, to better guess the direction every developer is moving in.

During the “pure development” phase, I normally ask my team to commit their work to the CVS or Subversion repository as soon as possible, at least on a daily basis, but I have realized that this might sound odd to some of them. Personally, I can’t find any good reason (in this phase, at least) not to commit code which is slightly better than the previous version. If the code won’t compile, or breaks some test, then it’s a good choice not to share it, but that’s the only reason. By postponing a commit you’re just pushing a slightly risky area (merging and integration) a bit closer to the deadline, giving a minor trouble a chance to become a bigger one.

Savage time
During early stages, I sometimes try to enforce “competitive access” to shared resources: if more than one developer is accessing the same file, then the first one to commit pays a smaller integration burden. This might sound pretty naïve (and I’d never use this style for the whole development process…) but it’s normally part of a “shock therapy” to rapidly establish team mechanics, when there are new team members or developers still stuck with MS SourceSafe bad habits. Once everybody is confident they can get along with the versioning policy, the team can switch to a less aggressive style.

Don’t forget that even if you switch to a more coordinated policy, you might still need the practice for emergency recovery, production bugs, night fixes and all those situations which can become nightmares if you add complexity on top of a mess. If you are skilled in troubled waters, then you’re less likely to panic.


Sunday, June 11, 2006

Why Object Oriented Designers should rule the world


I was drowning in the abyss of Italian bureaucracy in the last few days, going from one office to another to fetch the papers needed to ask for another paper. A notary public asked me to get papers from the local registry office. The data was on paper, so the officer wrote an official document with a typewriter (a mechanical one). Of course there was something wrong in it, but since the error was in the reference to the county of the notary who authored the original document, and that notary was the same one who asked me for the document (in other words, he asked me to get something he himself wrote), I hope this is a minor mistake…

The second paper (the same papers I already had, but I needed a fresher timestamp on them) forced me to go to another municipal office, where they told me:
  • I had to go to the land registry office (is cadastre the right word?) asking for some maps

  • I had to pay €100,00 at a post office

  • I had to buy two special stamps worth €14,62 each
I searched the internet yellow pages for the land registry office address, but I got the wrong one. I phoned a yellow pages service and got almost the right one. Once I got in, the place looked like some kind of transit station on the way to hell: dozens of people camping in a large hall, waiting for their number. Luckily I had something simple to do and my queue was relatively fast. Of course, I had to pay €35,00 for the maps I asked for, just by reading what was needed to complete the request for the other office (are you feeling lost? Me too).

Then I went to the post office to pay the €100,00 and buy the stamps. Obviously, there were no stamps of that value, so the lady started trying various combinations of different stamps (in a variant of the knapsack problem) to achieve the exact sum of €14,62. Welcome to the third millennium!
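
Just for fun, the stamp lady’s task really is a tiny subset-sum search; here is a naive backtracking sketch (the denominations are made up, amounts are in cents):

import java.util.ArrayList;
import java.util.List;

public class StampKnapsack {

    // Depth-first search allowing repeated stamps; returns the first
    // combination that adds up exactly to the target, or null.
    static List<Integer> combine(int target, int[] stamps, int from, List<Integer> picked) {
        if (target == 0) return picked;
        if (target < 0) return null;
        for (int i = from; i < stamps.length; i++) {
            picked.add(stamps[i]);
            List<Integer> found = combine(target - stamps[i], stamps, i, picked);
            if (found != null) return found;
            picked.remove(picked.size() - 1); // backtrack
        }
        return null;
    }

    public static void main(String[] args) {
        int[] denominations = {1000, 500, 200, 100, 52, 26, 10, 2, 1};
        System.out.println(combine(1462, denominations, 0, new ArrayList<Integer>()));
    }
}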

I then brought the maps back to the first office, and they told me most of them were not needed for my type of request (grrr), so they just took one. I am now waiting for the phone call of another officer telling me some papers are missing.

…So what?

Ok, I might be sort of a strange guy, but I can’t help it: every time I go to a public office I start thinking about it in terms of OO modelling. It just doesn’t make sense to me that all of a transaction’s complexity is put on the user’s shoulders. It’s exactly the same problem you face with badly designed interfaces in server code.
A server class (as the name states) is supposed to serve multiple clients. A good design decision is to hide complexity behind the server class API, so that you can write simpler clients. Since clients are written in different contexts and times, you pay once for complexity on the server side and save every time somebody writes client software.
Bureaucracy normally does the opposite: to achieve thin processes on the server side (enabling eternal coffee breaks), officers move the burden of complexity onto the users, forcing citizens to locate external services and to pay (in the silliest possible ways).
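
A sketch of the two styles in code, with all names invented for the occasion. First the bureaucratic way, where every client has to orchestrate the steps itself; then a server-side façade that pays the complexity once, for all clients:

class BureaucraticOffice {
    byte[] fetchMaps(String parcelId) { return new byte[0]; } // stub
    void payFee(int cents) { }
    String issueCertificate(String parcelId, byte[] maps) {
        return "certificate for " + parcelId;
    }
}

class OneStopOffice {
    private final BureaucraticOffice backOffice = new BureaucraticOffice();

    // One call on the client side: the orchestration lives behind the API.
    String requestCertificate(String parcelId) {
        byte[] maps = backOffice.fetchMaps(parcelId);
        backOffice.payFee(10000); // the €100,00 fee, handled server-side
        return backOffice.issueCertificate(parcelId, maps);
    }
}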

The more I look at it, the more it doesn’t look like a way to save time at all: if a frequent connection has to be established between two separate offices, then using the citizen (who 90% of the time is doing this thing for the first time) as the channel is the most inefficient way of all! Citizens have to be trained to go there and ask for this and that, which is a repeated waste; citizens are not experts, so they make more mistakes than trained officers would on a standard procedure, and the whole process ends up more error prone.

Can anybody do something about it? Unfortunately this is pretty hard, because it requires somebody with a pretty large scope, authority, and completeness of vision: a dictator? An emperor? An alien from a distant planet? Since crossing organizational borders, or ruling the empty spaces between them, is one of the most difficult management activities, I am pretty pessimistic about that.

Test Driven Bureaucracy

In the OO world, the best way to achieve a simple API is to develop in a test-first style. This way the developer thinks first from the client’s perspective (resulting in a low-complexity process interface, mostly a money-for-paper bargain), and then implements the service, keeping most of the complexity behind the server interface.
I just wonder how a similar approach might perform if applied to bureaucracy. I guess we’d have a lot of surprises (in Italy whole institutions are totally useless) by crossing a test driven methodology with a strong OO-like role/responsibility analysis. As the picture shows, even if pessimistic, I haven’t lost all my hopes.
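
Continuing the toy example above, a test-first sketch: writing the test from the citizen’s perspective is what keeps the interface down to a single call (OneStopOffice is the hypothetical façade from the previous sketch):

import junit.framework.TestCase;

public class OneStopOfficeTest extends TestCase {

    public void testCitizenNeedsExactlyOneCall() {
        OneStopOffice office = new OneStopOffice();
        // If this single call is all the client code needs, the interface
        // is as thin as the citizen would like it to be.
        assertNotNull(office.requestCertificate("parcel-42"));
    }
}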




Wednesday, May 24, 2006

Communication in the Agile Team


Agile methodologies, particularly XP, strive to achieve better process efficiency by reducing the amount of required documentation to a “no more than sufficient” level (among other things, of course). Having a relatively small team of people who get along quite well, and putting the team members spatially close together, should provide the desired background for an informal but highly efficient communication to spread between team members.

What “spontaneously” happens is something slightly different: communication happens, but developers aren’t saying the right things. They’re asking each other:
  • how do you configure this?
  • how do you setup the environment?
  • This page looks odd, how did you solve this in yours?
These are good questions, but they could be better answered by an agile HowTo, or a script. Or they are problem determination questions, which are good for sharing knowledge, but still with a backward-looking perspective. What is less likely to happen is efficient communication between mates about what really matters (to me, at least), which is putting the pieces together to produce software that works. Developers tend to focus only on their own task, asking somebody else only if they are in need. Otherwise development proceeds silently, or with headphones on, wasting all the competitive advantage of spatial proximity.

Constructive communication
Adding cooperative documentation didn’t prove to be an efficient glue. What did prove so was re-assigning tasks in a way that forced team mates to cooperate: instead of trying to have developers work simultaneously on two separate parallel tasks (which is what you normally do to avoid deadlocks), I asked them to work together on the same, short task. The result was a lot less messy than expected; in fact we had working code faster than expected. There was deadline pressure, to be honest, but it didn’t result in overtime. Instead it produced a spontaneous design discussion on the classes to be shared, something I hadn’t seen for a while.

Some might say that I just discovered the XP hot water: pair programming and continuous integration. Which is partially true, but we still aren’t doing real pair programming (for a lot of reasons), while we normally do have continuous integration practices in place. Still, we are talking here about integration at the communication level. Asking is only one form of communication (and if DeMarco’s theory about the state of flow is true, asking is a sort of stealing), and a pretty primitive one. Forcing tasks to overlap might look like a suicidal choice, but it generates instead positive effects on development speed, and on team mechanics.


Sunday, May 14, 2006

Sticky Standards (Coding, the IKEA way - part 2)


Right after publishing this post, I had the feeling something was missing that could still be extracted from the IKEA metaphor. Mounting the drawer handles was indeed only the second part of the job, after building the whole closet (well, in fact you could have chosen another order too: mounting the handles first, or simply drilling before building the whole thing).

So, how do you do that? You just unpack the pieces and start following the instructions. Don’t believe those folks blaming the quality of IKEA papers: they’re pretty good. Only drawings, no text (to get rid of i18n issues), but if you follow the plan you can’t miss. Drawings provide details about which side goes up and which part to start from, reducing the degrees of freedom that you might have in doing even the simplest stuff without a plan. This way they can avoid maintaining a huge F.A.Q. section answering things like “how to attach legs to a closet after you filled it up” and so on.

What’s the difference from coding? It lies in the fact that many developers tend to favour copying some colleague’s code instead of following a detailed HowTo. A good reason for that is that you can ask the author of the code for clarifications if needed (which is efficient on a small scale, but not on a large one). A not-so-good reason is just in the developer’s mind: following a plan might be easy, and leads to predictable results. Put another way, it’s boring. Fun is solving a problem, and if there’s no problem there’s no fun.

Some might already have spotted the underlying danger in this practice, but to achieve a little more thrill, I’ll start telling a completely different story.

The unpredictable standard
The keyboard in front of you has an interesting story: the so-called QWERTY standard was originally developed for mechanical typewriters, replacing the hunt-and-peck system (requiring two actions for every key) which was dominant at that time. In the absence of standards there was plenty of freedom in choosing how to place the characters on the keyboard, and the main reason which led to the odd QWERTY layout was to protect the underlying mechanics. Put another way, it was designed to slow down typing, to avoid collisions between mechanical parts. It was still a lot faster than the previous standard, but the decision was to sacrifice part of the potential for short-term needs. If you are interested in a full review of the QWERTY story, please read this article.

History did the rest. The QWERTY standard survived far beyond expectations, and once mechanics were no longer a bottleneck, the standard itself became the next one. But every attempt to move a step forward failed, due to the critical mass achieved by QWERTY users.

Drawing conclusions
Many things, and code is one of them, persist longer than they were intended to. Within a single project lifecycle, the thing you don’t want is an anti-pattern calling for refactoring. The most efficient way to shield yourself from this is to provide the team with a bullet-proof prototype that developers can sack in the usual way. This won’t ensure that the team gets it right, but it will greatly reduce the diffusion of “alternative solutions” to already solved problems.



Sunday, May 07, 2006

Estimating Change


Everybody knows that the average developer tends to (largely) underestimate development time, or to run out of the estimated time. A less investigated phenomenon is that, despite all the evolution in IDEs, developers instinctively also tend to overestimate refactoring time. When asked to change some features of existing code – except their own – estimations always jump up to the worst-possible-scenario-on-earth.

There are several reasons for this phenomenon; the most obvious one is a lack of trust, meaning that you don’t trust somebody else’s code. Having automated tests in place might of course help but, even if it increases the operation’s safety, it doesn’t have a direct effect on the estimation numbers: a 20-minute job is still perceived as a “half a day” one, while repeating a stupid 30-second workaround hundreds of times is not perceived as a waste.

Still, I don’t think this has a lot to do with safety and trust, but a lot more with the taste developers acquired when they were learning Java. I remember the “good old days” when even renaming a single class was a mess (remember JBuilder 1.0?) due to the classname=filename.java constraint, plus naming convention issues and OS issues as well. Well, let me say one thing: THOSE DAYS ARE GONE. It just takes a minute to use a refactoring command to rename a class or move it to another package, and this is true in all the most used IDEs, even the free-of-charge ones.

Not all refactoring tasks are that simple: changing an attribute from one type to another might require some extra work where IDEs stop being that helpful. But even in those cases, if you have a bunch of red markers for compile errors in your code and solve them one by one, you are not going to lose that much time. You are going to lose a lot if your starting design was poor (ok, we have a code smell here), but if you have an average level of encapsulation in place we are still talking ’bout minutes, not hours.

The domino effect
What’s worst in this situation is that it lays the foundation for development speed getting stuck in the hell of spaghetti code. The domino effect goes like this: 1) somebody writes some bad piece of code, maybe marking it as a TODO; 2) somebody developing another use case looks for guidance in that code, and copies the anti-pattern; 3) now the anti-pattern is spread like a virus, and shows heavier effects; 4) correcting it is still delayed, because there is not so much time to correct it (for some obvious reasons, like the time that was lost before).

Hmmm, sounds like an XP evangelist paper, and honestly it is. The good news is that the domino effect works in the other direction as well, but you have to try it before you believe it.
  • Well-designed code makes refactoring tasks faster (which is a little different from the common belief that the first XP iteration should produce crap, just to be refactored).

  • A good starting point helps a lot, because younger developers are going to copy (if not reuse) a solution anyway, so it’s better that they get inspiration from a good one.

  • Tests help build the safety net for refactorings, speeding up change time.
Only after you have applied changes quite a few times will estimations get smaller. If you’re going for refactoring with a “shy” team, my suggestion is to go for it regardless of the opposition, and possibly make a bet, to turn it into a remarkable lesson. Or use the heaven-sent extra time to have a party.


Tuesday, May 02, 2006

Default Constructor: a possible solution


I finished my last post grunting like an angry old man. Time to stop whining and start proposing something. Ok, here’s what I do: as a rule in my projects, I mark parameterless constructors in domain classes as @deprecated. This way I can monitor bad coding habits without affecting frameworks that need to access the default constructor. This normally raises questions, so I get a chance to explain to developers why I consider it an anti-pattern, and to find a solution (normally a robust constructor or a factory).
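
In practice it looks like this, reusing the Person example from the other posts (a sketch of the convention, nothing more):

public class Person {

    private String name;
    private String surname;

    /**
     * Kept only for frameworks that require a default constructor.
     * @deprecated use {@link #Person(String, String)} instead
     */
    public Person() {}

    public Person(String name, String surname) {
        this.name = name;
        this.surname = surname;
    }
}

Every direct call to new Person() now shows up as a deprecation warning at compile time, which is exactly the monitoring hook described above.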

Another possible solution is to make it a check in a source code quality checker such as Checkstyle. This approach eases the burden of manually adding the deprecation comment to all the classes, but requires the team to be familiar with the tool, while deprecation should already be in every developer’s scope.


Tuesday, April 25, 2006

Why I hate the default constructor


Hmm, I think I should provide a more detailed (boring) explanation of the reasons why the default constructor is on the list of my most hated anti-patterns.

Back to basics
OO theory provided us with the gift of encapsulation: the ability to hide the details of the internal structure of an object by providing access control to attributes and methods, thus allowing us to split the exposed behaviour of an object (its interface) from its internal structure (the implementation).

The rule of thumb says that you should define all of your attributes private, and then provide public methods for the interactions with other classes. It’s a common practice (a short way to say that I quite disagree with it, but that would drive me out of scope) to also provide public methods to access the properties of the object, in a JavaBeans-like fashion.

JavaBeans specification
The JavaBeans specification defined a special category of Java objects, the beans, identified by the following properties:
  • presence of a public parameterless constructor

  • presence of public methods to access the properties of the object, namely getters and setters in the form getPropertyName() and setPropertyName(…)
Plenty of discussions arose on the subject “is the JavaBeans specification breaking encapsulation?”, arguing that making attributes public would have had the same effect.

Unfortunately, what most developers forgot was that the JavaBeans specification was meant to be used by tools like IDEs (one of the first implementations of the spec was used to add user-defined graphical objects to a graphical environment, so that you could create your components and add them to the JBuilder palette) or frameworks like Castor or Hibernate. Java Server Pages also use JavaBeans as the underlying objects for the pages. The JavaBeans specification allowed tools to dynamically access the properties of given (and unknown) objects by relying on introspection, driven by the coding conventions mentioned above.

Having a commonly accepted syntax for accessor methods was a good result indeed, but it was largely abused, and this gave new popularity to the empty constructor anti-pattern.

Why is the empty constructor evil?
Ok, I need a basic example to start from. Suppose we have a simple class like this:
public class Person {

private String name;
private String surname;

public Person() {}

public String getName() { return this.name; }
public String getSurname() {return this.surname; }

public void setName(String name) { this.name = name; }
public void setSurname(String surname) { this.surname = surname; }
}

By creating an empty object, and then setting its properties in a
Person p = new Person();
p.setName("John");
p.setSurname("Smith");
fashion, we make a series of mistakes. At the end of the execution the result is just like
Person p = new Person("John", "Smith");
assuming that we defined a constructor in Person.java like
public Person(String name, String surname) {
this.name = name;
this.surname = surname;
}
But unfortunately the result is not the same during execution. In other words, object initialization is not atomic in the former case, while it is in the latter. Moreover, the second constructor encapsulates creation logic, knowing that you need both a name and a surname to have a fully functional Person object. With the default constructor, anybody trying to initialize the object has to be aware of the internal details of the Person class.

Suppose then that you need instances of your class in different areas of your software. Every time, you should create your objects in the correct fashion. To do so, you would normally use the best replacement for OOP: cut & paste (which is a form of reuse, after all…). Suppose then that you need to add an extra attribute to the Person class, namely birthDate, and that this attribute is required. Guess what? Changing the robust constructor to
public Person (String name, String surname, Date birthDate) {
this.name = name;
this.surname = surname;
this.birthDate = birthDate;
}
raises compilation problems on every invocation of the constructor. Is this a bad thing? Not at all! It allows you to correct every single invocation. It can be a simple job, or a tricky one (if the newly needed parameter is not so easy to get), but it ensures that the object is in the correct state. The other way round raises no compilation error; you know that you have to correct the code anyway, so you have the following options:
  • search every instance of new Person() throughout the code base and check that all three setters are called (do you think that’s faster than fixing the compile errors?)

  • search some known instance of new Person(), add the new setter, and have a coffee

  • add the setter where we need it and let somebody else deal with the NullPointerException they are 99% likely to get.
Conclusion (where I explain my bitter mood)
What really puzzles me is that, even in the simplest situation, invoking the default constructor is almost always wrong! If your class has attributes, then you should favour a robust constructor instead, which also makes you write less code. If it doesn’t have any, then maybe you don’t need a constructor at all, and you could access static methods or use a factory to create your object. The only reason we abuse this anti-pattern so much is that we are just damn lazy.
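
For completeness, a minimal sketch of the factory alternative mentioned above (PersonFactory is an illustrative name, building on the Person class from earlier in this post):

public class PersonFactory {
    // The creation logic, including validation, lives in one place.
    public static Person create(String name, String surname) {
        if (name == null || surname == null) {
            throw new IllegalArgumentException("a Person needs both name and surname");
        }
        return new Person(name, surname);
    }
}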


Monday, April 24, 2006

I hate the empty constructor


I was looking for inspiration for a short post about the abuse of the parameterless constructor. Then I found this, and thought that there’s not much else to say. If there is somebody still convinced that

Person p = new Person();
p.setName("John");
p.setSurname("Smith");

is as good as

Person p = new Person("John", "Smith");

…please leave a comment.

Ok, then I thought this was a little bit too short, and wrote down some more explanation in the following post.


Monday, April 17, 2006

Coding, the IKEA way


Today I spent some time mounting handles on an IKEA wardrobe I bought some weeks ago. Mounting handles is somehow a precision job, because you don’t want your house to look like a cartoon room. So I took measurements of the drawers and of the handles, asked my wife in which position the handles looked better, tried to fit the preferred position with some basic proportion (so that the spaces above and below the handle are respectively ¼ and ¾ of the total drawer height), then calculated on paper the exact coordinates for the needed hole. The next step was to apply the measurements to the drawers. I didn’t have a professional measuring tape at home, so I double-checked all the measurements before marking the final target for the drill with a pencil. When I was satisfied with the position I made a small hole with a nail in the exact spot. Then I took the drill. Everything was in place, so the result was pretty close to perfection.

It’s surprising how far we can go from this metaphor if we just turn back to our beloved coding activity. When asked to perform a simple, yet nontrivial, task, the average developer will switch on the IDE (I am being optimistic: the IDE is obviously already on by default, and the mail client is now probably an Eclipse plugin) and start coding. A couple of lines here, a TODO marker when an ugly shortcut is taken, another couple of lines there, and the job looks almost complete. Then a failure during testing sets the need for another quick fix just before the deadline. Ooops! The time for removing TODOs has gone, the code seems to work, and there are more urgent things to do. In other words, the TODOs just made it to production. Cheers!

Let’s get back to my IKEA drawers. By that logic I should really have started from the drill (which is my favourite tool anyway)! Making a couple of holes in the wardrobe (just to check that the drill worked), then refining the hole positioning by successive attempts. This way the handles probably wouldn’t be that aligned, or perfectly fixed, so I would probably have had to add some stuff to keep the handles blocked somehow. Of course there is only one chance in a million that all the handles would end up vertically aligned and centered on all the drawers, and I would have had to use some tape to close the unnecessary holes I drilled at the beginning, but as long as the whole wardrobe doesn’t fall apart I should be pretty satisfied.

Ok, I probably went too far. Code can just be erased, while drill holes are irreversible. Well, code can be cleaned if you have time, and we all know that you never will…

Tags: ,