OO Do I Know Thee?


I was first introduced to the OO programming paradigm through Java in 1998.  Before that I did most of my work in C or shell scripts.  I liked that work well enough, but for business applications the procedural paradigm felt wrong.  Java was a breath of fresh air and – at first – it seemed so natural.  I was indoctrinated into the well-known 4 Concepts of OO and the whole Is A vs Has A relationships.  I dutifully created my Parent Objects which then had more specific Children (Dog Is A(n) Animal).  And it quickly began to feel as awkward as procedural programming.  In fact it was worse, because of a sense that my initial “this is so natural” reaction was correct and I must have taken a wrong turn.

This last year has seen a major shift in how I perceive what OO is and how well (or not so well) the languages I’ve used represent or support it.  The sequence went a bit like this:

First I went to Nothing But .Net and had many of my ideas about .Net stripped away.  It would be fair to say I left with more questions than answers, but they were questions I’d been looking for since my Java days.

Then I learned Objective-C, and with it came new questions about what an object is, what it does, and how objects are connected.  And do they always have to be connected?

Then I started playing with Ruby and found even more questions about objects, behaviors, abstraction, and what makes an application object-oriented.

All of which has led me to this – I still don’t know The Truth about OO, but I have a pretty good idea what it isn’t.  A better way to say this is: I have a better idea why my initial feeling was so strong and where I diverged from the “it just makes sense” view of OO.  Some of those errors (in no particular order):

An Object encapsulates data and the behaviors around that data.  Wrong.  Nope, not even going there.  Wrong.  Wrong.  Wrong.  This doesn’t mean an object can’t have data and behaviors.  Having and being are different things.  Don’t confuse them.
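To make the having-versus-being distinction concrete, here is a minimal Java sketch (the names are hypothetical, invented purely for illustration).  The first object carries no data at all and is still every bit an object; the second happens to have data, but the having is not what makes it one:

```java
// Hypothetical types for illustration only.

// An object with no data at all: it *is* its behavior.
interface Greeter {
    String greet(String name);
}

class PoliteGreeter implements Greeter {
    // No fields. Nothing encapsulated. Still a perfectly good object.
    public String greet(String name) {
        return "Good day, " + name + ".";
    }
}

// An object that happens to *have* data. Holding the count is not
// what makes it an object; responding to the message is.
class CountingGreeter implements Greeter {
    private int greetings = 0;

    public String greet(String name) {
        greetings++;
        return "Hello, " + name + " (greeting #" + greetings + ")";
    }
}
```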

Inheritance is a parent-child relationship.  Incomplete.  The parent-child metaphor carries far too much meaning, and too many side-effects, that have no place in the idea of Inheritance.  Don’t confuse them.
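One classic illustration of that extra baggage (my sketch, not anything from the post’s original examples) is the Square/Rectangle trap.  The real-world “Is A” reads naturally, yet the inherited contract drags in expectations the child cannot honor:

```java
// A hypothetical example of the parent-child metaphor going wrong.
class Rectangle {
    protected int width, height;

    public void setWidth(int w)  { this.width = w; }
    public void setHeight(int h) { this.height = h; }
    public int  area()           { return width * height; }
}

// A square "Is A" rectangle in the real world, but callers of
// Rectangle expect width and height to vary independently.
class Square extends Rectangle {
    @Override public void setWidth(int w)  { width = w; height = w; }
    @Override public void setHeight(int h) { width = h; height = h; }
}

class Demo {
    public static void main(String[] args) {
        Rectangle r = new Square();
        r.setWidth(2);
        r.setHeight(5);
        System.out.println(r.area()); // a reader of Rectangle expects 10; prints 25
    }
}
```

The metaphor promised a natural family tree; what the code actually needed was “honors the same promises,” which is a different question entirely.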

(C#) Interfaces are an abstract representation of a responsibility.  Too inflexible and limiting.  That view narrows the scope of what they can be by (ironically) assigning them too broad a role in an application.
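Here is a hedged sketch of the trap, in Java rather than C# since the idea translates directly (all names are hypothetical).  Treat an interface as a whole responsibility and it swells; keep interfaces as narrow roles and they stay flexible:

```java
// Hypothetical interfaces for illustration.

// An interface as "the reporting responsibility" grows and grows:
interface ReportManager {
    void load(String source);
    void render();
    void email(String recipient);
    void archive();
}

// Interfaces as small roles: a class can play several, and each
// caller depends only on the one sliver of behavior it uses.
interface Renderable { void render(); }
interface Archivable { void archive(); }

class QuarterlyReport implements Renderable, Archivable {
    public void render()  { /* draw the report */ }
    public void archive() { /* move it to cold storage */ }
}
```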

Abstraction is primarily expressed through Interfaces and Abstract Classes.  Perhaps those are the vehicles a particular language supplies, but Abstraction is not organically connected to any mechanism used to express it.  Thinking of it in programming-language terms limits its meaning and our ability to exercise it.
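As one small sketch of that point (the domain and names are mine, purely hypothetical): the abstraction below lives in a chosen boundary and a well-chosen name.  No interface or abstract class is involved:

```java
// Hypothetical class: abstraction without any special language mechanism.
class Order {
    private final double total;
    private final boolean expedited;

    Order(double total, boolean expedited) {
        this.total = total;
        this.expedited = expedited;
    }

    // Callers ask "is this order rush?" and never learn why.
    // The abstraction is the name and the boundary, not a keyword.
    boolean isRush() {
        return expedited || total > 10_000;
    }
}
```

The language supplied no keyword for this; the abstraction is the decision about what callers get to ignore.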

I could go on to list many of the design patterns and principles you may have heard of, including SOLID.  I’ve held most of these in an almost holy status at one point or another.  None of them holds The Key to OO nirvana, and many of them – when viewed from an inappropriate perspective (which I often did) – can distract you from the potential and power of OO.

Lastly, from the time of my euphoric introduction to JAVA until (painfully) recently I treated OO like a sacred hammer and saw most meaningful programming challenges as eager nails.  Sad-but-true.

I now see OO as a useful paradigm to consider for human-process-driven applications (e.g. an application designed to mimic or assist in tasks a human would or could do).  With that broad a definition I would understand people thinking I still have a magic hammer and see everything as a nail.  The reality for me is more subtle – so much so that I won’t elaborate in this post – and I’m beginning to see many problems I can solve with OO but don’t feel I have to solve with OO.

I know I haven’t supplied many answers in this post.  You don’t need my answers.  You don’t really even need my questions.  You need your own questions and your own answers.  I hope mine can help you find yours.
