What’s been happening in Fluent NHibernate land?
Fluent NHibernate has seen a flurry of development followed by a complete lack of commits, so I figure it’s time to let everyone know what’s going on.

The Short Version

We’re rewriting the internals. We have 100% of tests passing, but still can’t guarantee no regressions. Stuff may break; if it does, tell us. It’ll be worth it.

The Long Version

Once upon a time…

Fluent NHibernate at its core is a fluent interface over xml generation. We use the series of methods you call in your mappings to build up an in-memory hbm xml document that we then feed to NHibernate. Slowly, as we’ve started to support more and more of NHibernate’s features, we’ve found flaws in our architecture. It amounts to us not having enough separation of concerns: our fluent interface is generating xml directly, and I think most people would agree that’s not a good thing. Our xml is generated too early in the cycle to allow us to do the more clever things that’d improve the user’s experience. That’s where the quiet time comes in.

Paul Batum undertook the task of redesigning the internals of Fluent NHibernate into something much more scalable. I’ll leave the details out for now (maybe a future post), but it amounts to an intermediary layer being introduced between the fluent interface and the xml generation. We’ve dubbed this the Semantic Model, and it’s this model that the fluent interface now generates, which is later translated into xml. This added abstraction allows us to do things that weren’t possible while we were just generating straight xml.
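To make that shape concrete, here’s a minimal sketch of the idea in C#. Every type and method name below is a hypothetical illustration, not the actual Fluent NHibernate API: the fluent calls populate a plain, inspectable model object, and only a separate writer, run at the very end, turns that model into hbm-style xml.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical sketch only -- these are not the real Fluent NHibernate types.

// The semantic model: a plain object the fluent interface builds up,
// which can be inspected and modified before any xml exists.
public class ClassMapping
{
    public string Name;
    public string Table;
    public List<PropertyMapping> Properties = new List<PropertyMapping>();
}

public class PropertyMapping
{
    public string Name;
    public string Column;
}

// The fluent interface: records intent into the model instead of
// emitting xml directly.
public class ClassMap<T>
{
    public ClassMapping Mapping = new ClassMapping();

    public ClassMap() { Mapping.Name = typeof(T).Name; }

    public ClassMap<T> Table(string table)
    {
        Mapping.Table = table;
        return this;
    }

    public ClassMap<T> Map(string property, string column)
    {
        Mapping.Properties.Add(new PropertyMapping { Name = property, Column = column });
        return this;
    }
}

// A separate translation step: only at the very end does the model
// get written out as hbm-style xml.
public static class HbmWriter
{
    public static XElement Write(ClassMapping mapping)
    {
        return new XElement("class",
            new XAttribute("name", mapping.Name),
            new XAttribute("table", mapping.Table),
            mapping.Properties.Select(p =>
                new XElement("property",
                    new XAttribute("name", p.Name),
                    new XAttribute("column", p.Column))));
    }
}
```

With that split, a call chain like `new ClassMap<Customer>().Table("Customers").Map("Name", "name")` only builds a `ClassMapping`; the xml writer runs as a distinct final step, which is what makes inspecting or merging the model before xml generation possible.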

Whilst you won’t see any immediate improvements while everything is being converted, the extra layer allows us to inspect the model before it is converted to xml. This gives us some immense capabilities, such as allowing subclasses to be mapped separately from their parent, creating reusable component mappings, improving relationship support, and so on. We’re pretty much limited only by what we can imagine now, rather than by architectural decisions.

However, we have a dilemma. The semantic-model branch has deviated so much from trunk that we’ve got a major merge problem. A merge is pretty much out of the question, actually, because there’s barely any commonality between the two streams at all. It was then assumed that the branch would eventually replace trunk (we’d just rename trunk to a tag, then rename the semantic-model branch to trunk), and all we needed to do was get the branch up-to-date with the features of trunk. Little did we realise we were essentially committing to something I typically oppose: a complete rewrite! I’ve never been in a situation where that’s been a good idea, and yet here we were, responsible for our own destiny as it were, and we’d chosen a rewrite!

A few weeks went by with very little work happening. I think we’d all started feeling demoralised by the idea of re-implementing most of the features from trunk using the new design. It was a worthwhile endeavour, definitely, but just very uninteresting. We repeatedly swore that we’d get it done, but all the while we felt less inclined to support trunk, because every new feature there meant another feature to port too. We stopped.

At that point Hudson Akridge, our newest contributor, had the balls to tell me that what we were doing felt pretty futile. At that moment I gained a great deal of respect for him, as it was something that had been in the back of my mind for some time but that I wasn’t ready to face yet. It was that which got me moving again.

I sat down over a long weekend and took on the mammoth task of merging our rewrite branch and trunk, with the aim of allowing the existing code to live alongside the new code; this approach, if it worked, would let us convert existing features at our own pace while still writing new features against the cleaner codebase. After a lot of unpleasant hacking of nice code, I managed to get all our existing tests passing while utilising the semantic model behind the scenes. Our old code still directly generates xml, which then gets injected into the semantic model via a nasty shortcut. It feels dirty, because I’ve had to compromise some good code to get it working, but in a way I think that’s a good thing: if code is nasty, we’re all less likely to be content with it. The main goal was achieved though, which was to let us work at our own pace on trunk.

Where we stand now is three branches: trunk, integration, and semantic-model. Over the next day or so I am going to merge integration into trunk, which will mean the semantic model is in use for new features. From there we can slowly migrate all the original code over to the new semantic model, all the while adding new features with it, then eventually remove the duct tape that’s holding the legacy code to the new code and dump the old stuff.

This is where I’ll give you a little warning: although we have 100% of tests passing, we don’t have 100% coverage, so there may be some regressions that we aren’t aware of. If anyone finds anything broken that was working before, contact us immediately on the mailing list or via the issues list and we’ll correct it. Regressions will take priority over any other work.

That’s it; you now know more about what’s happening with Fluent NHibernate than you ever wanted to. I hope this sheds a little light on what’s been happening with us, and perhaps why your patch hasn’t been applied as quickly as you would’ve liked.

    This entry was posted in fluent nhibernate, nhibernate, open source.
    • http://therightstuff.de Alexander Groß

      James,

      It’s great to see FNH evolve, even if there might be regressions. I like reading others’ code and have learned a lot, even from the “old” FNH code base. FNH definitely jumpstarted me in NHibernate; the XML had kept me from using it for years.

      Keep up the good work!

      Alex

    • http://www.paulbatum.com Paul Batum

      In the unlikely event you could possibly want to know EVEN MORE than what James has told you here (ridiculous, I know!), I wrote a companion post:
      http://www.paulbatum.com/2009/04/fluent-nhibernate-update.html

    • http://www.lostechies.com/members/jagregory/default.aspx jagregory

      Alex,

      That’s great to hear, I’m glad FNH has been a stepping stone for improving your knowledge.

      Hopefully there won’t be any regressions, but you can never be certain. I felt it’d be wise to let people know that we are working deep in the guts, so if they do see any regressions it’s less of a surprise.

      James

    • http://devlicio.us Billy McCafferty

      Thanks for the update James…keep us posted.

    • http://www.lostechies.com/members/gnschenker/default.aspx Gabriel N. Schenker

      Again, I want to take this opportunity to thank you and all the other committers for the great work you do! Fluent NHibernate has brought NHibernate to many new developers who were hesitant previously.
      It’s now way easier to introduce and start using this ORM framework.

    • http://hackingon.net Liam McLennan

      This sounds like an important step for the long term success of FNH. I see FNH/Automapping/Linq2Nh as the future of NHibernate so it is great to see this project moving forward.