Wednesday, January 11, 2006
MDA Skeptic - part 2: Impact on roles
The first thought that struck me while digging deeper into the subject is that moving the focus from the code to the model means the development team must be reshaped accordingly. Roughly said: you need more modelers and fewer developers. Traditional development phases are affected as well: modeling actually becomes coding, so the borderline between design and implementation becomes blurred.
To cope with MDA, modelers have to be pretty skilled in UML. One might argue that they should have been anyway, but this is not always the case. Normally, you define a reasonable subset of the features you want to model at a given level of the development process, according to the available skills of the team, and you postpone or ignore the others. For example, a good UML modeler knows that there is a tight semantic relation between choosing an association or an aggregation and the resulting code for constructors, but in some contexts this can be an unnecessary complication for the expected result of the analysis phase. With an MDA approach, some of these "unnecessary details" become key modeling abstractions, so modelers need to manage them as well: their UML knowledge must be deeper than what they're used to, and since their model is actually executed, they need to adopt a bit more of the developer's perspective.
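To make the association-versus-aggregation point concrete, here is a minimal sketch of how the two modeling choices might drive different generated constructors. The class names (`Car`, `Engine`, `Driver`) are purely illustrative and not taken from any real MDA tool's output:

```java
// Part of a composite (strong aggregation) relationship.
class Engine {
    final int horsepower;
    Engine(int horsepower) { this.horsepower = horsepower; }
}

// Composition: the whole owns the part, so a generator would typically
// create the Engine inside the Car constructor.
class Car {
    private final Engine engine;
    Car(int horsepower) {
        this.engine = new Engine(horsepower); // part lives and dies with the whole
    }
    Engine getEngine() { return engine; }
}

// Plain association: the Driver merely references a Car that exists
// independently, so the generated constructor receives it as a parameter.
class Driver {
    private final Car car;
    Driver(Car car) { this.car = car; }
    Car getCar() { return car; }
}
```

A modeler who treats the diamond on an aggregation as decoration will produce a model whose generated constructors do not match the intended object lifecycles, which is exactly the kind of detail that stops being optional under MDA.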
There’s a sort of Darwinian selection here: some folks can keep on modeling only as long as somebody else is doing the dirty job of making things work (reviewing, or sometimes plainly ignoring, the given model); they probably are not going to make it with MDA – “Am I supposed to run my diagrams?” – while some others (probably those with developer skills) could face the challenge. Anyway, since you need to spend more time modeling than in a traditional process, the more reasonable choice is to train some of your developers in high-level UML so they can join your analysts in refining the model.
Sounds like I am getting too far on the skeptic’s side, but that’s not exactly the case. If you can have good modelers on your team, enforcing a strict link between the model (and making it look like a really good OO one) and the actual code is the best option. My favorite approach in this case is pretty close to Evans’ Domain-Driven Design. As a consultant, though, I have to say that this is not always possible: sometimes you have to work with pure modelers and let the team dynamics get the software done somehow.
We are asking a great sacrifice of the development team: we are asking them to abandon what they hold most sacred, their fully featured IDE. Moving the focus to the modeling area can result in a loss of productivity on the small scale (on the large scale the MDA promise is to write less code, so it might still pay off after all) due to lower confidence with the new tools. After all, Eclipse, XDoclet, Together and so on already generate some code, and good developers are really fast with that.
To be honest, development IDEs will still have to be used for the custom implementations that need to be mapped back into the model, but this looks more like an exception to the development process than the rule.
Probably the main point here is that the developer is now dropped into a more complex context than he is used to, having to leave a mono-dimensional development environment (the code) for a multi-dimensional one where the code is just one projection among different models, mappings, links, markers and so on. On the small scale the situation simply gets more complex for the developer, which resembles what happened during the transition from procedural to OO code (I still remember the panic of COBOL developers trying to follow the execution flow across class boundaries when debugging their first OO application).
Resistance is a common phenomenon before a revolution, and MDA aims to be a revolution, so it just makes sense. The question is: when will it reach the required critical mass?
Tags: MDA, Model Driven Architecture, Code Generation, Software Development Process