Functions were probably the first widely adopted unit of software reuse. Developers loved them because functions saved them time, but the scope of reuse was limited by application boundaries. OOP promised to increase reuse across the boundaries of a single application, and made it easier to share reusable units within a development team. Still, we saw many attempts to build company assets of reusable objects that simply didn’t succeed. Apparently this was due to the underestimation of two factors:
the communication costs of sharing the knowledge
the need for company support to make developers choose to learn somebody else’s code instead of writing their own.
There’s a bit of game theory in it: every developer may agree that OOP is a good thing, but 99% of the time
coding looks faster than learning
it’s certainly more interesting (developers simply love starting from scratch)
Damn! I can write better code than this!...
Ok, let’s get back to MDA. Here reuse is aimed at a higher level: platform independence, portability of the model between applications and technologies. To me, it looks like the time required to enjoy the positive effects is longer than in other evolutionary steps, and the advantages are expected mainly at the company level, while individuals and teams carry the heavy burden of the adoption threshold (learning curve and so on) without an immediate and/or personal reward. My personal experience (and maybe there is some typical Italian mood here) is that changing people’s attitude is a lot harder when benefits are targeted at a higher level than individuals or small groups.
A lot of emphasis has been put on the fact that MDA tools can generate pattern code straight from the model. The one thing that puzzles me is that in a full MDA scenario this is useless. Patterns are a great way to clean up your design and to assign the proper responsibilities to components. But they do so with two good weapons:
Readability: the code looks cleaner and clearer,
Ease of (re)use: further evolutions are easier because the pattern structure defines where to work when you need to add or change something.
Both are valueless in an MDA approach: the code is not written but generated, so it doesn’t make a big difference whether it’s cleaner or not; the edit point is in the model, or in the mapping, not in the generated code. Of course, generating the correct patterns would be fine in a one-shot MDA approach (which I tend to dislike, because the code will deviate from the model anyway), but the good point of MDA is exactly keeping the model in sync with the code: the full MDA round trip.
We can probably expect specific MDA patterns to arise in this context once MDA adoption gets wide enough, but design patterns (and architectural patterns as well) look too rooted in the traditional development process. If we move the focus to the definition of the model, then readability and maintenance cost are no longer key factors, while – for example – performance might be. Likely candidates for popular patterns in this new context are the Analysis Patterns collected by Fowler some years ago.
The first thought that struck me while digging into the subject is that moving the focus from the code to the model also means the development team must be reshaped accordingly. Roughly speaking: you need more modelers and fewer developers. Traditional development phases are affected as well: modeling actually becomes coding, so the borderline between modeling and implementation becomes blurred.
Modelers

To cope with MDA, modelers have to be pretty skilled in UML. One might argue that they should have been anyway, but this is not always the case. Normally you define a reasonable subset of the features you might want to model at a certain stage of the development process, according to the available skills of the team, and you postpone or ignore the others. For example, a good UML modeler knows that there is a tight semantic relation between choosing an association or an aggregation and the resulting code for constructors, but in some contexts this can be an unnecessary complication for the expected result of the analysis phase. With an MDA approach some of these sometimes “unnecessary details” become key modeling abstractions, so modelers need to manage them as well: their UML knowledge must be better than what they’re used to, and since their model is actually running, they need to adopt a bit more of the developer’s perspective.
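To make the association/aggregation point concrete, here is a minimal sketch of how a generator might map UML relationship kinds onto constructor code. All class names here are invented for illustration, not taken from any real MDA tool:

```java
// Hypothetical illustration: how an MDA code generator might map UML
// relationship kinds onto constructors. Class names are invented.

class Engine {
    private final String type;
    Engine(String type) { this.type = type; }
    String getType() { return type; }
}

class Car {
    // Composition (filled diamond): the whole owns the part,
    // so the generated constructor creates the Engine itself.
    private final Engine engine = new Engine("V6");
    Engine getEngine() { return engine; }
}

class Garage {
    // Plain association: the Garage merely references a Car that
    // lives independently, so the Car is passed in from outside.
    private final Car parked;
    Garage(Car parked) { this.parked = parked; }
    Car getParked() { return parked; }
}
```

The detail that looked optional during analysis (which diamond to draw) now decides who calls `new` — exactly the kind of “unnecessary detail” that becomes a key abstraction once the model is executable.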
There’s a sort of Darwinian selection here: some folks can keep on modeling only as long as somebody else does the dirty job of making things work (reviewing, or sometimes ignoring, the given model); they are probably not going to make it with MDA – “Am I supposed to run my diagrams?” – while others (probably with developer skills) could face the challenge. Anyway, since you need to spend more time modeling than in a normal process, the more reasonable choice is to have some of your developers trained in high-level UML so they can join your analysts in refining the model.
It sounds like I am getting too far onto the skeptic’s side, but that’s not exactly the case. If you have good modelers on your team, enforcing a strict link between the model (making it look like a really good OO one) and the actual code is the best option. My favorite approach in this case is pretty close to Evans’ Domain Driven Design. As a consultant, I have to say that this is not always possible: sometimes you have to work with pure modelers and let team dynamics get the software done somehow.
Developers

We are asking a great sacrifice of the development team: we are asking them to abandon what they hold most sacred, their fully featured IDE. Moving the focus to the modeling area can result in a loss of productivity on the small scale (on the large scale the MDA promise is to code less, so it might still be convenient after all) due to lower confidence with the development tools. After all, Eclipse, XDoclet, Together and so on already generate some code, and good developers are really fast with that.
To be honest, development IDEs will still have to be used for custom implementation code that has to be mapped back to the model, but this looks more like an exception to the development process than the rule.
Probably the main point here is that the developer is now dropped into a more complex context than he is used to, having to leave a mono-dimensional development environment (the code) for a multi-dimensional one where the code is just the projection of different models, mappings, links, markers and so on. On the small scale the situation just gets more complex for the developer, which resembles what happened during the transition from procedural to OO code (I still remember the panic of COBOL developers trying to follow the execution flow across class boundaries while debugging their first OO application).
Resistance is a common phenomenon before a revolution, and MDA aims to be a revolution, so it just makes sense. The question is: when will it reach the required critical mass?
Having been dealing with MDA lately, I am currently trying to decide which side I am on. As the title above states, I find myself still somewhat skeptical (for reasons I’ll try to explain here). Anyway, I am still digging into the MDA area, so my feelings of today may change or turn out to be wrong.
MDA vs. Agile UML Philosophy

A crucial difference between UML modeling and MDA is that UML is a great tool to share a system vision with somebody who is not a programmer. Agile approaches focus on UML as a communication tool, so you tend to keep the model as simple as possible, to avoid information overload. Another agile principle is to avoid redundancy, for example by including only a typical sequence diagram for a sample interaction with the system, without a detailed model of the others. MDA goes in exactly the opposite direction: a model must be formally correct in order to generate working code, so it must be complete. This means that all those not-so-different sequence diagrams now have to be modeled instead of simply coded (which also means it becomes a designer’s job instead of a developer’s). To be completely honest, developers deal with small differences from the sample interactions in two ways: a) refactoring common code into an elegant schema; b) cutting and pasting. The latter – unfortunately – more frequently. Both are intended to save time; I guess there will be ways to abstract common behavior at the model level to reuse part of the already modeled behavior somehow, but the efficiency of this might not be comparable with what you can do with good refactoring tools in a recent Java IDE.
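The refactoring half of that trade-off can be sketched in a few lines. This is a hypothetical example (the service and method names are invented) of the kind of duplication removal a Java IDE automates with one “Extract Method” click, and which a model-level tool would have to match:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: two near-identical interactions share one
// extracted method instead of two copied-and-pasted bodies.
class OrderService {
    final List<String> log = new ArrayList<>();

    // Before refactoring there would be two near-copies of the same
    // body, differing only in the action string.
    void submitOrder(String id) { process(id, "SUBMIT"); }
    void cancelOrder(String id) { process(id, "CANCEL"); }

    // Common behavior extracted once ("Extract Method" refactoring).
    private void process(String id, String action) {
        log.add(action + ":" + id);
    }
}
```

In code this takes seconds; the open question is whether modeling tools can make the equivalent abstraction over near-identical sequence diagrams anywhere near as cheap.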
Another element that affects normal modeling is the adoption of OCL as a way to define constraints in UML diagrams. The bad news is that OCL is probably not so readable to domain experts (which is the reason you normally prefer plain language). On the other hand, it enables formal checks for completeness or consistency of the logic conditions, improving software robustness. A great burden here resides on the level of usability achieved by the development tools, which could make the difference between a toy and a really useful tool (the more you have to click, select and drag, the more likely it is that just typing Java code would be faster).
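To see both sides of the readability argument at once, here is a hypothetical invariant written as an OCL constraint (in the comment) next to the plain Java check a generator might emit for it. The `Account` class and the invariant name are invented for illustration:

```java
// Hypothetical example: the same invariant expressed in OCL (comment)
// and as the runtime check a generator might produce from it.
//
// OCL, attached to a UML Account class:
//   context Account inv NonNegativeBalance:
//     self.balance >= 0
//
class Account {
    private int balance;

    void withdraw(int amount) {
        int newBalance = balance - amount;
        // Enforce the NonNegativeBalance invariant at runtime.
        if (newBalance < 0) {
            throw new IllegalStateException("balance must be >= 0");
        }
        balance = newBalance;
    }

    int getBalance() { return balance; }
}
```

A domain expert can probably nod along to “the balance never goes negative” more easily than to either notation, which is exactly the communication gap the paragraph above describes.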
Hi all, here is my personal blog. I am a software consultant, so most of the stuff here will be dedicated to the world of software and related topics, while trying not to overlap with what I already do as a job. If anyone’s interested... have a good read!