- Why use the Entity Framework? Yeah, why exactly?
- Reaction from Jeremy Miller
- Why EF?
- More Entity Framework Thoughts
- EF Long Term Plans
- Entity Framework: Our Albatross
- Rewriting the Entity Framework Source Control Support

In short, the Entity Framework is a 1.0 release. Expect from that what you will.
Let’s just hope that the next version (or two) will hit somewhere near the mark–although I do have really big concerns about some of the design decisions they are pushing (such as reusing the models all over the place). It sounds like an expensive and delicate design to maintain. Smearing any kind of structure or model (data, object, or conceptual) across an application makes changing that model quite a bit more expensive.
That is, in fact, software design 101. David Parnas was writing about it all the way back in 1972.
Long-term we are working to build EDM awareness into a variety of other Microsoft products so that if you have an Entity Data Model, you should be able to automatically create REST-oriented web services over that model (ADO.Net Data Services aka Astoria), write reports against that model (Reporting Services), synchronize data between a server and an offline client store where the data is moved atomically as entities even if those entities draw from multiple database tables on the server, create workflows from entity-aware building blocks, etc. etc
While it often seems like a good idea to expose things like entities to the outside world (it’s reuse, right?), the illusion quickly gets shattered when you start looking at how things will change in the future. Sharing data models usually means spreading the associated logic around. Spreading logic around means duplication. Duplication is costly to maintain, and it brings very real coupling issues with it.
Also, if adding a new field to the Customer entity requires many changes (both in your app and in your consumers’ apps), you’ve done a piss-poor job of design (think: sharing internal models across service boundaries).
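To make that concrete, here’s a minimal C# sketch of the alternative: keep the internal entity behind an explicit, separately versioned contract. The type and member names here are hypothetical, invented purely for illustration.

```csharp
// Internal domain entity -- free to change as the business changes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CreditRating { get; set; } // new internal field; consumers never see it
}

// Explicit contract shared with consumers -- versioned deliberately.
public class CustomerSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerMapper
{
    // The only place that knows about both shapes.
    public static CustomerSummary ToSummary(Customer customer)
    {
        return new CustomerSummary { Id = customer.Id, Name = customer.Name };
    }
}
```

With that split, adding CreditRating touches the entity and (at most) the mapper; no consumer has to recompile or redeploy.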
And not only that: I find this type of thinking leads straight into CRUD.
I’m going to stop here because I’m quickly winding my way into a design death spiral. And I think you get my point.
Community Reality Check: Is INETA a concrete life raft?
I’m asking you, dear reader, is INETA still viable as an organization?
I don’t know about your local user groups, but here in Nashville, we don’t get nearly enough good speakers to fill our monthly timeslots. Worst of all, the presentations all seem to be the same. They are all basic introductions to whatever shiny new toy Microsoft has recently released. Way too much Microsoft, way too little programming.
There are, of course, some very good exceptions to that status quo. I personally enjoyed seeing Scott Cate when he came through town, and we were lucky enough to snag Ted Neward for DevLink. But overall, the quality seems to be falling further and further below what I would expect from a high-quality outreach organization–at least one whose goals aren’t completely in line with those of a single product vendor.
Another issue that came up recently was the organization’s refusal to work with corporate user groups. While this might make sense for companies that would spin up a group with only 2 or 3 people, it seems very silly when the organization in question employs 200+ .NET developers. It also makes no sense when you look at other organizations of the same size (say: FedEx).
So, here’s the question I would like you, the community, to answer.
Is INETA a concrete life raft?
For the curious, I currently play a small local role in the organization. As part of trying to continually improve my contributions to the community, I’m trying to determine the value I’m generating by volunteering my time.
Announcing: the Nashville ALT.NET User Group
This has been a long time coming, but it’s finally here!
Join the announcements list and jump in on the discussion. We’ll plan our first meeting for later this month via the discussion list.
[Book Review] The Sciences of the Artificial (3rd Ed)
In a nutshell, this book blew me away! I’m giving it 6 stars (out of 5).
After seeing this book referenced in another book I’m currently reading, and also seeing it on the SEI’s Essential Collection, I thought it would be a good idea to pick up a copy. The 3rd edition is the most recent printing (1996), but the original was published back in 1969. When a technical book is still being read and referenced almost 40 years on, you know you’ve found a good one.
The Artificial? You mean AI?
No, not even close. The first chapter does a great job describing the difference between the natural and the artificial. In short, anything that’s man-made is artificial. That includes everything from software to skyscrapers. The obvious conclusion reached at the start of the book is that our classical sciences only deal with things as they are. Given an existing structure, for example, the classical sciences allow us to reason about the various properties of the item in question. What they don’t tell us directly is how to design something artificial, from scratch, to best meet the needs of the intended user or consumer. From this point, the author lays out a very compelling mini-curriculum on the subject of design.
Overall, I found this book to be very thought provoking. It will definitely be one of those books I reread down the road.
A couple of other interesting facts about this particular book:
- The author is a professor of computer science and psychology at Carnegie Mellon University and the 1978 Nobel Laureate in Economics
- It was published by MIT Press
While I highly recommend this book, it will jump out considerably more if you put a couple of others under your belt first. In particular, start with a good study of Quality Attributes; you will quickly make the connections if you do. I recommend this one.
[Book Review] IT Architectures and Middleware (2 Ed)
What is Middleware?
Whether your aim is to build a single, large distributed system or to integrate multiple existing systems into a single, large system-of-systems, middleware is your key to success. When you begin to distribute across multiple processes, whether they are on the same or different machines, middleware is the stuff “in the middle” that allows the different parts to talk.
A few examples of middleware would include:
- .NET Remoting
- ASMX Web Services
- WCF
- MSMQ
- ADO.NET
- Oracle Data Provider
- .NET’s Common Language Runtime
- A Java Virtual Machine implementation
- COM+
Why would you want to learn about it?
There are many different implementations of middleware–as you can see from my example list above. If you want to be able to make an informed decision when choosing among multiple suitable implementations, it’s important to be able to recognize both the sweet spots and the pain points for each type. It’s this knowledge that will allow you to know which tool is most suitable–both for your current needs and, potentially, your future needs as well.
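To ground one of those flavors, here’s a minimal C# sketch of asynchronous message queuing over MSMQ via the System.Messaging API. The queue path and message are made up for illustration, and it assumes MSMQ is installed on the machine:

```csharp
using System;
using System.Messaging; // requires a reference to System.Messaging.dll

class OrderQueueSketch
{
    static void Main()
    {
        const string path = @".\private$\orders"; // hypothetical local queue

        // Create the queue if it doesn't already exist.
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path);

        using (var queue = new MessageQueue(path))
        {
            // The sender returns immediately; the receiver can be offline.
            queue.Send("Order #42: 3 widgets", "New order");
        }

        using (var queue = new MessageQueue(path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(); // blocks until a message arrives
            Console.WriteLine(message.Body);
        }
    }
}
```

The send returns whether or not a receiver is running, which is exactly the loose temporal coupling that distinguishes message queuing from RPC-style middleware like .NET Remoting.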
What about the book?
I picked up a copy of the 2nd edition, which was published in 2004. The original was published back in December of 2000. The driving reason I chose to read this book was that I found it referenced in several other things I had read previously (which I can’t remember at the moment).
Overall, I’d give this book 3 stars out of 5, although I know that if I had read it several years ago, I would no doubt have rated it higher. A few places in the book deal with specific vendor technologies, including .NET, and given the publish date, its age is starting to show a bit.
One thing I really liked about this book was the way the authors chose to explain the concept of middleware–even the diagrams were simple and elegant. They also took the time to touch on quality attributes: resiliency, performance, and scalability (to name a few).
One area that seemed a little rough was the topic of asynchronous message queuing. A few spots seemed to give conflicting assessments of the technique–although I’d guess some of that material just didn’t get scrutinized closely enough during the revision.
Although I liked the book overall, this particular review was a little tough to write. I didn’t feel like the book stretched me very far beyond my current understanding. Whether that’s a sign of me maturing or a reflection on the authors’ writing, I’ll leave for another discussion.
In short, if you are looking to get your feet wet with middleware and related concepts, this is a good book to reach for. Don’t think, however, that any single book will be sufficient to learn the concepts in depth. If you are new to the topic, this would serve as a good introduction. If you’ve been around the block a few times, it would serve as a nice refresher.
The Cost of Defects
I just ran across some great stats on the cost of software defects. These are quotable, so I thought I would share.
The following is a quote from Capers Jones in his book, Estimating Software Costs. For those unfamiliar with Capers’ work, he is one of the driving forces in the field of software estimation–especially in the world of traditional development processes. He’s been working full time in software estimation since 1971 and now runs a consulting company dedicated to it.
The most expensive and time-consuming work of software development is the work of finding bugs and fixing them. In the United States the number of bugs or defects in requirements, design, code, documents, and bad-fixes averages five per function point. Average defect removal efficiency before delivery is 85 percent. The cost of finding and repairing these defects averages about 35 percent of the total development cost of building the application. The schedule time required is about 30 percent of project development schedules. Defect repair costs and schedules are often larger than coding costs and schedules. Accuracy in software cost estimates is not possible if defects and defect removal are not included in the estimates. The author’s software cost-estimating tools include full defect-estimation capabilities, and support all known kinds of defect-removal activity. This is necessary because the total effort, time, and cost devoted to a full series of reviews, inspections, and multistage tests will cost far more than source code itself.
Emphasis is mine. Capers also asserts that producing paper documents often costs more than the actual coding.
As for requirements on a traditional project: they grow at an average rate of 2% per month of development. When measured in function points, requirements growth can often exceed 50% of the volume of the original requirements.
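To put those figures in perspective, here’s my own back-of-the-envelope arithmetic (not from the book): a 1,000 function point application would average about 5,000 defects; at 85 percent removal efficiency, roughly 4,250 get found and fixed before delivery, leaving around 750 latent defects in the shipped product. And 2% monthly requirements growth compounds: after about 21 months, 1.02^21 ≈ 1.52, which is how a long project blows past that 50% growth mark.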
Retaining Good People
This is largely common sense, but I’d bet there’s a large number of IT shops out there that haven’t picked up on this yet.
Do you want to keep the best people on your staff?
Make sure you have career paths for them. If there aren’t clear avenues for people to be promoted or to advance, they will be forced to look outside your organization to move up. Those who are talented or pursue advancement aggressively will leave as they become more skilled.
If you are in an industry where prior domain knowledge is prized during the hiring process, that also means you will likely lose them to your competitor. Ouch.
Estimating System Load
One of the initial steps that every non-trivial project should go through revolves around determining system usage requirements.
Here’s a no-nonsense method for tackling this issue head-on.
1. Anticipate Usage and Usage Patterns
A prudent developer or architect will find out who the end users will be. He will also find out the anticipated number of end users that will be accessing the system. In addition, he will look at the anticipated usage pattern for things like spikes (e.g., all users log into the app at 8 a.m. on Monday, or users only log in once per year).
Note: Don’t forget to include integrated applications in the system usage numbers. They generate load just like human end users.
2. Estimate Future Usage
Assuming the number of end users isn’t fixed over time, you should also estimate future growth. If the application will be used in house, ask what the hiring trend looks like for the next year. If this is a web startup, you might look at future revenue (e.g., $10/user per month) and determine an acceptable cap for the system (1,000 users = $10k/month). Spend time talking with the customer and find out what is acceptable and within reason. They will appreciate you setting realistic expectations.
Also take time to compare how the numbers change. Some systems (e.g., internal systems) may exhibit linear growth in usage (say, 5 additional users per year). Other applications may experience exponential growth (e.g., the next hot social networking application). It’s important to recognize the exponential-growth apps, as they will need extra care.
3. Calculate Anticipated Load
Taking your usage figures, try to give a rough estimate of the number of calls/requests per second they translate into. If this were an HR web app serving 35 people in HR (with no anticipated hiring in the next year), you might estimate the load for the entire department as:
35 people / 3 seconds between clicks ≈ 12 requests per second
4. Include a Margin of Safety
Taking your load estimate and multiplying it by 10 gives you a nice cushion for error. In the fictitious example above, that means validating that our system can handle at least 120 requests per second.
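Here’s the arithmetic from steps 1–4 as a tiny C# sketch. The numbers mirror the HR example above, and the growth factor is a knob I’ve added for step 2 rather than anything prescribed by the method:

```csharp
using System;

// Back-of-the-envelope load estimate, following steps 1-4 above.
class LoadEstimate
{
    static void Main()
    {
        const int users = 35;                   // anticipated concurrent users (step 1)
        const double secondsBetweenClicks = 3;  // think time per user
        const double growthFactor = 1.0;        // bump upward for expected growth (step 2)
        const double safetyMargin = 10;         // margin of safety (step 4)

        double baseLoad = users / secondsBetweenClicks * growthFactor; // ~12 req/s (step 3)
        double targetLoad = baseLoad * safetyMargin;                   // ~120 req/s

        Console.WriteLine("Base load:   {0:F1} requests/sec", baseLoad);
        Console.WriteLine("Target load: {0:F1} requests/sec", targetLoad);
    }
}
```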
5. Sanity Check Your Architecture
The initial development done on any system should be a threading of the architecture. This means wiring every piece of the entire system together, from the UI to the database and everything in between. This step allows you to vet the high-level design of the system. It also allows you to sanity check your baseline architecture against the usage requirements. Can the technology you’ve chosen and the deployment you’ve designed handle the anticipated load? If not, it’s better to find out now rather than develop the app and discover the problem during the initial deployment.
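Once the skeleton is wired end to end, even a crude single-client harness like this hypothetical one can tell you whether you’re in the right ballpark (the URL is a placeholder, and a real test would drive concurrent clients):

```csharp
using System;
using System.Diagnostics;
using System.Net;

class LoadSanityCheck
{
    static void Main()
    {
        const string url = "http://localhost/hr-app/home"; // placeholder endpoint
        const int requests = 500;

        using (var client = new WebClient())
        {
            var watch = Stopwatch.StartNew();

            for (int i = 0; i < requests; i++)
            {
                // Sequential requests understate capacity; treat the result as a floor.
                client.DownloadString(url);
            }

            watch.Stop();
            double requestsPerSecond = requests / watch.Elapsed.TotalSeconds;
            Console.WriteLine("Sustained ~{0:F1} requests/sec", requestsPerSecond);
        }
    }
}
```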
And You Thought Your Deployments Were Tough
Did you know:
- eBay deploys a new release every 2 weeks
- They add roughly 100 KLOC per week to the codebase
That should be a little extra motivation to work on removing waste and increasing flow in getting your application out the door. If they can do it with a 6-million-line codebase, why can’t you do it with yours (which is likely trivial in comparison)?
Source: The eBay Architecture (circa Nov 2006)
On Comparing Current Tools to Futureware
I’m going to take a quote from Daniel Simmons on why we should use the Entity Framework.
I’m not specifically interested in his comparison with NHibernate because I think the following is true of many current O/RMs (whatever your personal flavor happens to be). I am, however, going to quote it because it caught my attention.
The big difference between the EF and nHibernate is around the Entity Data Model (EDM) and the long-term vision for the data platform we are building around it.
The biggest problem with comparisons like this is that you can’t compare unwritten software with something that’s already in production today. The false assumption underlying this is that the current breed of O/RMs will stand still while Microsoft magically comes from behind to deliver an innovative platform. It just doesn’t happen like that.
So how does the EF differentiate from the current breed of O/RMs? Apparently, it doesn’t. That’s slated for the next version.
Here are a few other reactions that hit my RSS reader today, all of which I agree with:
- Entity Framework: Our Albatross
- EF Long Term Plans
- More Entity Framework Thoughts
- Why EF?
- Reaction from Jeremy Miller
Don’t Do That
Over lunch today, I learned about one of the company’s recent acquisitions. Here’s the short of it:
- The production system has between 6k-7k different Access databases (databases, not tables)
- Since the app creates new databases on the fly, no one is sure which are production dbs
Don’t ever do that. Please. And no, I couldn’t make this one up even if I tried. 😉