No silver domain modeling bullets

This past week, I attended a presentation on Object-Role Modeling (with the unfortunate acronym ORM) and its application to DDD modeling. The talk itself was interesting, but more interesting were some of the questions from the audience. The gist of the tool is to provide a better means of domain modeling than traditional ERM tools or UML class diagrams. ORM is a tool for fact-based analysis of information models, information being data plus semantics. I'm not an ORM expert, but there are plenty of resources on the web.

One of the outputs of this tool could be a complete database, with all constraints, relationships, tables, columns and whatnot built and enforced. However, the speaker, Josh Arnold, mentioned repeatedly that it was not a good idea to do so, or at least that it doesn't scale. It could be used as a starting point, but that's about it.

Several times at the end of the talk, the question came up: "can I use this to generate my domain model?" or "my database?" Tool-generated applications are a lofty but deeply flawed goal. Code generation is interesting as a one-time, one-way affair, but beyond that it does not work. We've seen it fail time and time again. Even as the tools get better, the underlying invalid assumption does not change.

The fundamental problem is that visual code design tools can never and will never be as expressive, flexible and powerful as actual code. There will always be a mismatch here, and it is a fool's errand to try to build anything more than a starting point. Instead, the ORM tool looked quite useful for generating conversation and validating a team's assumptions about its domain, rather than as a domain model builder.
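To make the expressiveness gap concrete, here is a minimal, hypothetical sketch (the `Order` entity and its rule are my invention, not from the talk): a generated data model can express that an order has a "paid" column, but the behavioral invariant — an order cannot ship until it is paid — lives only in code.

```python
class Order:
    """A hypothetical domain entity with a behavioral invariant."""

    def __init__(self):
        self.paid = False
        self.shipped = False

    def pay(self):
        self.paid = True

    def ship(self):
        # This business rule is trivial to state in code, but a
        # table- or diagram-centric model has no place to put it.
        if not self.paid:
            raise ValueError("cannot ship an unpaid order")
        self.shipped = True
```

A diagram could record the two boolean attributes, but the ordering constraint between `pay` and `ship` — the part the business actually cares about — only shows up once we write the behavior down.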

Ultimately, the only validation that our domain model is correct is working code. There is no silver bullet for writing code, as there is always some level of complexity in our applications that requires customization. And there's nothing that codegen tools hate more than modification of the generated code. However, I'm open to the idea that I'm wrong here, and I would love to be shown otherwise.


About Jimmy Bogard

I'm a technical architect with Headspring in Austin, TX. I focus on DDD, distributed systems, and any other acronym-centric design/architecture/methodology. I created AutoMapper and am a co-author of the ASP.NET MVC in Action books.
This entry was posted in Domain-Driven Design.
  • Josh Arnold

    I think you’re spot on here, Jimmy. I was once on the bandwagon of pushing for the code-generation aspect of ORM (both SQL and the domain model in code) but it didn’t take long for me to run into too many problems. As you said, it’s a great starting point. During my little journey through the world of ORM, I felt like it became pretty clear that it’s great at facilitating one thing: communication. Perhaps the best way to describe it is that it helps remove ambiguity and makes it easier to talk about your domain. Rather than spending time clarifying relationships, you have a structured way of discussing the meaning of the relationships.

    The title of this post is perfect. My fear would be that anyone who starts using ORM would view it as a silver bullet. It's simply another tool to help with the process of modeling a domain.

  • Henrik

    I would argue that there's a role for modelling if the model is also used for programmatic decision making. Now, I have no experience with the object-role modelling that you are talking about, but I understand the concept of creating an ontology – and an ontology can be used for algorithmic reasoning, such as argumentation or planning logic.

    On the other hand, you are correct in that the source of truth is the code.

    A lot of computation is about data structures encapsulating data, such as splay trees; other computation is about encoding human knowledge, like DDD; and still other computation is about acting on both the encoded human knowledge and the data structures.

    I think this will lead to languages that are more homoiconic in the long run, i.e. where the code IS the data IS the ontology.