Create test data for Mongoose.js-backed MEAN stack applications
tl;dr: I’ve created a node library that creates test data with Mongoose. You can check it out here:
https://github.com/jcteague/mongoose-fixtures
Integration Tests FTW!
Because nodejs+mongodb applications are so fast to start up, most of my tests are integration tests. I test from the API level down, spinning up an instance of my application server with each test suite and using SuperTest to generate the http calls and verify the results and status codes they produce. Each test runs in about 50 ms, so I can run through all of my tests very quickly, get very complete code coverage, and reduce my testing footprint. This simply wouldn’t be possible with ASP.Net or a JVM web framework; it would take far too long. Where I would otherwise need 100-200 tests, I’m able to test an application thoroughly with 20 or 30. I drop down to unit-level tests only when I don’t know what I’m doing and need TDD to guide me through it.
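To give a rough idea (the route, the app module path, and the assertions here are made up for illustration), an API-level test with SuperTest looks something like this:

var request = require('supertest');
var app = require('../app');   // the express app under test (illustrative path)

describe('GET /api/users', function(){
  it('responds with the list of users as json', function(done){
    request(app)
      .get('/api/users')
      .expect('Content-Type', /json/)
      .expect(200, done);   // SuperTest checks the status code and ends the test
  });
});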
While working mostly with integration tests has helped me test more with less, you still face the problem of setting up the test data needed to verify your behavior is correct. With the asynchronous nature of node, this can clutter up your test suites very quickly, especially when you need more than one model or when the models are related.
To help remedy that, I’ve shared a module that creates test data with Mongoose.js, abstracting away all of the data creation and async goo to keep your test suites readable. It also makes the test data easily accessible for reference later in your tests.
Mongoose.js is a hard dependency. If you’re working with another mongodb library, I doubt this library would work without modification.
Using Mongoose-Fixtures
The module only exposes two functions:
- create, which creates the data
- clean_up, which deletes the data
A typical test suite setup / teardown would look like this:
var FixturePrep = require("mongoose-fixture-prep");
var fixtures = new FixturePrep();
describe("creating single fixture",function(){ before(function(done){ fixtures.create( [ {name:'user1', model: 'User', val:{firstName:'John',lastName:'Smith'}} ], done); }); after(function(done){fixtures.clean_up(done)});
The parameters passed to create are name, model, and val. The obvious fields here are model and val, where model is the name of the mongoosejs model you’ve defined and val is the object you want to save. What name does is create a new field on the fixtures object so that you can reference the saved record later in your tests. Here is a test that shows that it works:
it("should be able to access the object you saved", function(){ //user1 is attached to the fixtures.should.have.property('user1'); //it's has been saved and now has the _id field fixtures.user1.should.have.property('_id'); });
Multiple Objects
As you can see, create takes an array of these objects, so you can create multiple test data objects at once:
var FixturePrep = require("mongoose-fixture-prep");
var fixtures = new FixturePrep();
describe("creating a multiple fixtures", function(){ before(function(done){ fixtures.create([{name:'user1',model: 'User', val:{firstName:'John',lastName:'Smith'}}, {name:'admin',model: 'User', val:{firstName:'super',lastName:'user', roles:['admin']}} ], done) }) it("should be able to access the object you saved", function(){ //user1 is attached to the fixtures.should.have.property('user1'); fixtures.should.have.property('admin'); fixtures.admin.should.have.property('_id'); }); });
Arrays of Data
You can also create an array of data and have it associated with one test fixture property.
describe("creating an array of data", function(){ before(function(done){ fixtures.create([ {name:'users', model: 'User', val:[ {firstName:'John',lastName:'Smith'}, {firstName:'Jane',lastName:'Doe'} ]}, {name:'admin',model: 'User', val:{firstName:'super',lastName:'user', roles:['admin']}} ], done) }); it("should be able to access the object you saved", function(){ fixtures.users.length.should.eql(2); fixtures.should.have.property('admin'); }); });
Related Data
One of the hardest parts was coming up with a way to create inter-related test data. The cleanest way I could come up with is passing a function for the val parameter that has access to the fixtures already created.
describe("related data", function(){ before(function(done){ user_with_company = function(fixtures){ return {firstName:'John',lastName:'Smith', company: fixtures.testCompany} }; fixtures.create( [ {name:'testCompany',model:'Company',val:{name:'my company'}}, {name:'user1',model: 'User', val:user_with_company}], done) })it("should have the saved product in the line items", function(){ fixtures.user1.should.have.property('company'); fixtures.user1.company.should.eql(fixtures.testCompany._id); }); })
This is not as clean as I would like it, but it sure beats the alternative of trying to do all of that in the setup manually 😉
Use with AutoFixture
In a previous post, I introduced AutoFixture.js, a test fixture factory for generating test objects. In my applications I use these two together. In the spirit of having small composable modules they are separate libraries, but they work well together.
describe('using with autofixture', function(){
  before(function(done){
    factory.define('User', ['firstName', 'lastName']);
    fixtures.create([
      {name: 'user1', model: 'User', val: factory.create('User')}
    ], done);
  });
it("should create the fixtures from the factory",function(){ fixtures.should.have.property('user1'); //it's has been saved and now has the _id field fixtures.user1.should.have.property('_id'); }) })
These have been extracted out of a project I worked on recently, so they are being used in real life. I’d still like to do more to integrate it with mocha; for instance, it’d be cool if the fixtures were attached to the test suite directly instead of to the fixture class. Related data still needs some work too. It’s all available on github, so pull requests are welcome!!
AutoFixture — a Node.js Test Fixture Library
Working on a recent project that was on the MEAN stack, I needed to create test data quickly. I reviewed and tested some of the existing libraries that are out there, but none of them fit my specific style. My favorite fixture library of all time is NBuilder for C#. It generates fixtures from type information and fills them with pseudo-random data. If it’s possible to generate a fixture that way in JavaScript, I’m certainly not capable of it, so I took what I could from NBuilder and applied it to the factory-girl / factory-lady style of defining fixtures, combined with NBuilder’s pseudo-random data generation. The result is AutoFixture.js.
I’ve been dogfooding it for a while and it’s met my needs so far, but it’s far from done. Please check it out and provide feedback.
Installing
It is available from npm
npm install autofixture
Usage
Using the library consists of two parts: 1) defining the factories and 2) creating objects from them in your tests.
Creating a factory
You can define an object factory in one of two ways:
Fields Defined with an Array
Factory.define('User',['first_name','last_name','email'])
This will create an object with the fields specified in the array
var user = Factory.create('User')
The user object will have the fields specified in the definition with pseudo random values:
{ first_name: 'first_name1', last_name: 'last_name1', email: 'email1' }
When you create another instance of this object with the factory, the numbers will be incremented:
// second object:
var user2 = factory.create('User')

{ first_name: 'first_name2',
  last_name: 'last_name2',
  email: 'email2' }
You can also create an array of fixtures, each with unique values.
var users = factory.createListOf('User', 2)

[ { first_name: 'first_name1',
    last_name: 'last_name1',
    email: 'email1' },
  { first_name: 'first_name2',
    last_name: 'last_name2',
    email: 'email2' } ]
Overriding values
You can also override values at creation time:
factory.define('User', [
  'first_name',
  'roles'.asArray(1)
]);

var adminUser = factory.create('User', {roles: ['admin']});
To change the behavior of the factory and return specific data types, several helper methods are added to the String object:
Factory.define('User', [
  'first_name',
  'id'.asNumber(),
  'created'.asDate(),
  'roles'.asArray(2),
  'city'.withValue('MyCity')
]);

// created will be the current date
var user = Factory.create('user')

{ first_name: 'first_name1',
  id: 1,
  created: Date,
  roles: ['roles1', 'roles2'],
  city: 'MyCity1' }
Custom generators can be defined as well:
Factory.define('User', [
  'first_name',
  'email'.as(function(i){ return 'email' + i + '@email.com'; })
]);

var user = factory.create('User');

{ first_name: 'first_name1',
  email: 'email1@email.com' }
You can also use other factories to generate fields:
Factory.define('User', [
  'first_name'
]);

Factory.define('Order', [
  'id'.asNumber(),
  'order_date'.asDate(),
  'user'.fromFixture('User')
]);
Using Objects to Define a Factory
You can also use an object to define your fixtures. When you use an object, the values of each field are used as the basis for the data generated when you create the fixture:
factory.define('User', {first_name: 'first', created_at: new Date(), id: 1});

var user = factory.create('User');

{ first_name: 'first1',
  created_at: new Date(),
  id: 1 }
Creating a Fixtures file
Generally speaking, you’ll want to put the fixture definitions into a single file and reuse them across different tests. There’s no specific way you must do this, but this is how I’ve set mine up and it’s working well for me.
Create a module that takes the factory as a function dependency
// fixtures.js
// =============
module.exports = function(factory){
  factory.define ...
}
In your test files, require AutoFixture and then pass the factory variable to the fixtures module:
// tests.js
var factory = require('AutoFixture')
require('./fixtures')(factory)
Now you can use the factory to access your defined fixtures.
describe("my tests",functio(){ var user = factory.create('user'); });
Using Git subtrees to split a repository
We were in a position where we needed to create a new back-end server for an application. The current application is on a MEAN stack (Mongodb, Expressjs, Angularjs, Node.js), but a new client wanted the backend deployed onto a JBoss server. That left us needing a completely different backend while sharing the front-end between the two. The approach we opted for was using git’s subtree features to split the UI code into its own repository, shared between the nodejs repo and the Java repo.
To be clear, I would only use this for very specific situations like this. If possible, keeping things simple in a single repository is usually best. But if you’re in the same situation, hopefully this will be helpful for you.
Splitting the Original Repository
The subtree commands effectively take a folder and split it into another repository. Everything you want in the subtree repo will need to be in the same folder. For the sake of this example, let’s assume you have a /lib folder that you want to extract into a separate repo.
Create a new folder and initialize a bare git repo:
mkdir lib-repo
cd lib-repo
git init --bare
Create a remote repository on github (or wherever) for the lib project and add it as the origin remote.
From within your parent project folder, use the subtree split command and put the lib folder in a separate branch:
git subtree split --prefix=lib -b split
Push the contents of the split branch to your newly created bare repo, using the file path to the repository:
git push ~/lib-repo split:master
This will push the split branch to your new repo as the master branch. From lib-repo, push to your origin remote.
Now that the lib folder lives in its new repository, you need to remove it from the parent repository and add it back as a subtree from its new home:
git remote add lib <url_to_lib_remote>
git rm -r lib
git add -A
git commit -am "removing lib folder"
git subtree add --prefix=lib lib master
Setting up a new user with the subtree
When a new user wants to work on your repository, they will need to set up the subtree repo manually. What ends up happening is that the split-off folder lives in two repositories: the existing repo and the one set up as a subtree, and you need to explicitly push changes to the subtree. This is obviously a mixed blessing. If you have a repository with a few occasional committers, they can pull the original repository and push as if the subtree didn’t exist, and someone on the core team can occasionally push to the subtree.
If you want to set up a core member who pushes to the subtree, clone the repository as normal:
git clone <core_git_location>
You will also need to add a second remote that points to the lib repository:
git remote add lib <lib_git_location>
Once the repository is cloned, you need to remove the lib folder and commit the changes:
git rm -r lib
git add -A
git commit -am "removing lib folder and contents"
Now you need to add the lib folder back, but this time using the subtree commands and the lib repo:
git subtree add --prefix=lib lib master
Breakdown: --prefix defines the folder, lib is the name of the remote for the lib project, and master is the branch you are pulling from the lib remote.
Pushing to the lib repository
If all you are doing is working on non-lib-related items, you are done; continue pushing to the main repository as necessary. If you have made changes to the lib folder in a repository that uses it as a subtree and you want to push those changes upstream, use the following command:
git subtree push --prefix=lib <lib remote name> <branch name>
# following the example
git subtree push --prefix=lib lib master
Pulling from the lib repository
If there are changes in the lib repository that were not made in the main repository, you would use the corresponding subtree pull command:
git subtree pull --prefix=lib <lib remote name> <branch name>
# following the example
git subtree pull --prefix=lib lib master
References
Here is the list of references I used during the process. When doing additional research on this topic, be aware that there is a separate strategy called subtree merging, which is a different approach from the subtree commands used here.
https://github.com/apenwarr/git-subtree/blob/master/git-subtree.txt
http://blogs.atlassian.com/2013/05/alternatives-to-git-submodule-git-subtree/
http://makingsoftware.wordpress.com/2013/02/16/using-git-subtrees-for-repository-separation/
The Open Space Experience
Join us at Los Techies Fiesta Open Space Conference October 25-27
An Open Space conference is really a different experience from the traditional, presentation-driven conference. While both formats are about learning new things, they go about it in entirely different ways. I enjoy both, but to me an Open Space conference is more organic and focuses on a free flow of ideas rather than a structured lecture. The way I often describe it: if you’ve ever gone to a conference and the most stimulating moment was a conversation in the hallway or break room, that’s an Open Space conference all day long.
If you’ve ever gone to a conference and the most stimulating moment was a conversation in the hallway or break room, that’s an Open Space conference all day long.
The environment is very fluid and active, thanks to the Law of Two Feet and a schedule determined by the attendees. Unlike a presentation-focused event, where you tend to sit still and listen to someone for an hour or more, you are actively encouraged to move around and stay engaged. If a session isn’t going the way you want, just go to another, or start your own.
The self-organizing aspect of the format is really important. Even though I am one of the organizers of the event, I have almost no say in what the topics and sessions are about, other than proposing topics I’m interested in learning more about. And because speakers don’t submit their sessions months in advance, the topics are usually very current and relevant. For me, the first conference had a lot of conversations about noSql databases and distributed architecture with messaging, while our second conference had a lot more JavaScript conversations.
What would you like to talk about this year?
Pablo’s Fiesta is Back!!
The Details:
When: October 25 & 26
Where: Austin TX, St. Edwards PEC (location)
We took a little hiatus last year, but we’re coming back this year for our third Pablo’s Fiesta Open Space conference.
What is it?
Pablo’s Fiesta is an Open Space conference on Software Quality and Craftsmanship. It’s a chance for us to come together, learn from each other, and share our experiences and passion for what we do in an open and inclusive atmosphere.
If you have never been to an Open Space conference, it’s a conference experience like no other. The schedule is defined at the conference’s opening session, so it’s up to us to make it a great conference!! Unlike traditional, presentation-centric conferences, you are encouraged to move around and find a topic or session that is the most stimulating for you, or just start your own. It’s more of a conversation than presented material.
The Schedule
I’ll provide more detailed schedule information as I get more things planned, such as any parties and social events.
If you’ve been to previous Fiestas, this one is just a little different. The previous conferences started on Friday night with the opening session and schedule selection, and had sessions on Saturday and Sunday. This year, we will only have sessions on Saturday. We are going to plan some social activities for Sunday, October 27th, like a picnic and a hackfest, so you might want to plan on staying in Austin until late Sunday.
Friday October 25, around 6:00 PM. Opening Session. This is where we propose topics and define the schedule for the conference.
Saturday October 26, around 9:00 AM. Conference Sessions begin.
Sunday October 27, TBA. Social Activities. We’ve got some ideas we’re bouncing around, probably a picnic or some other activity, plus a hackfest or something.
It’s going to be a great time and I look forward to meeting you all there!
Polymorphism Part 2: Refactoring to Polymorphic Behavior
I spoke at the Houston C# User Group earlier this year. Before my talk Peter Seale did an introductory presentation on refactoring. He had sample code to calculate discounts on an order based on the number of items in the shopping cart. There were several opportunities for refactoring in his sample. He asked the audience how they thought the code sample could be improved. He got several responses like making it easier to read and reducing duplicated code. My response was a bit different; while the code worked just fine and did the job, it was very procedural in nature and did not take advantage of the object-oriented features available in the language.
One of the most important, but overlooked refactoring strategies is converting logic branches to polymorphic behavior. Reducing complicated branching can yield significant results in simplifying your code base, making it easier to test and read.
The Evils of the switch Statement
One of the first large applications where I had a substantial influence on the design had some code that looked like this:
private string SetDefaultEditableText()
{
    StringBuilder editableText = new StringBuilder();
    switch ( SurveyManager.CurrentSurvey.TypeID )
    {
        case 1:
            editableText.Append("<p>Text for Survey Type 1 Goes Here</p>");
            break;
        case 2:
            editableText.Append("<p>Text for Survey Type 2 Goes Here</p>");
            break;
        case 3:
        default:
            editableText.Append("<p>Text for Survey Type 3 Goes Here</p>");
            break;
    }
    return editableText.ToString();
}
Now there are a lot of problems with this code (a Singleton, really). But I want to focus on the use of the switch statement. As a language feature, the switch statement can be very useful, but when designing a large-scale application it can be crippling, and using it breaks a lot of OOD principles. For starters, if you use a switch statement like this in your code, chances are you are going to need to do it again, and now you’ve got duplicated logic scattered about your application. If you ever need to add a new case to the switch, you have to go through the entire application code base, find everywhere you used these statements, and change them.
What is really happening is that we are changing the behavior of our app based on some condition. We can do the same thing using polymorphism and make our system less complex and easier to maintain. Suppose you are running a Software as a Service application and you’ve got a couple of different premium services that you charge for. One of them is a flat fee, and the other’s fee is calculated from the number of users on the account. The procedural approach might be to create an enum for the service type and then use a switch statement to branch the logic.
public enum ServiceTypeEnum
{
    ServiceA,
    ServiceB
}

public class Account
{
    public int NumOfUsers { get; set; }
    public ServiceTypeEnum[] ServiceEnums { get; set; }
}

// calculate the service fee
public double CalculateServiceFeeUsingEnum(Account acct)
{
    double totalFee = 0;
    foreach (var service in acct.ServiceEnums)
    {
        switch (service)
        {
            case ServiceTypeEnum.ServiceA:
                totalFee += acct.NumOfUsers * 5;
                break;
            case ServiceTypeEnum.ServiceB:
                totalFee += 10;
                break;
        }
    }
    return totalFee;
}
This has all of the same problems as the code above. As the application gets bigger, the chances of having similar branch statements increase. Also, as you roll out more premium services, you’ll have to continually modify this code, which violates the Open-Closed Principle. There are other problems here too: the function that calculates the service fee should not need to know the actual amounts of each service. That is information that should be encapsulated.
A slight aside: enums are a very limited data structure. If you are not using an enum for what it really is, a labeled integer, you need a class to truly model the abstraction correctly. You can use Jimmy’s awesome Enumeration class to get real classes that can still be used as labels.
Let’s refactor this to use polymorphic behavior. What we need is an abstraction that contains the behavior necessary to calculate the fee for a service.
public interface ICalculateServiceFee
{
    double CalculateServiceFee(Account acct);
}
Several people asked on my previous post why I started with an interface, and whether by doing so it’s really polymorphism. My coding style generally favors composition over inheritance (which I hope to discuss later), so I generally don’t have deep inheritance trees. Going by the definition I provided, “Polymorphism lets you have different behavior for sub types, while keeping a consistent contract,” it really doesn’t matter whether it starts with an interface or a base class, as you get the same benefits. I would not introduce a base class until I really needed to.
Now we can create our concrete implementations of the interface and attach them to the account.
public class Account
{
    public int NumOfUsers { get; set; }
    public ICalculateServiceFee[] Services { get; set; }
}

public class ServiceA : ICalculateServiceFee
{
    double feePerUser = 5;

    public double CalculateServiceFee(Account acct)
    {
        return acct.NumOfUsers * feePerUser;
    }
}

public class ServiceB : ICalculateServiceFee
{
    double serviceFee = 10;

    public double CalculateServiceFee(Account acct)
    {
        return serviceFee;
    }
}
Now we can calculate the total service fee using these abstractions.
public double CalculateServiceFee(Account acct)
{
    double totalFee = 0;
    foreach (var svc in acct.Services)
    {
        totalFee += svc.CalculateServiceFee(acct);
    }
    return totalFee;
}
Now we’ve completely abstracted the details of how to calculate service fees into simple, easy-to-understand classes that are also much easier to test. Creating a new service type can be done without changing the code that calculates the total service fee.
public class ServiceC : ICalculateServiceFee
{
    double serviceFee = 15;

    public double CalculateServiceFee(Account acct)
    {
        return serviceFee;
    }
}
But now we have introduced some duplicated code, since the new service behaves the same as ServiceB. This is the point where a base class is useful. We can pull up the duplicated code into base classes.
public abstract class PerUserServiceFee : ICalculateServiceFee
{
    private double feePerUser;

    public PerUserServiceFee(double feePerUser)
    {
        this.feePerUser = feePerUser;
    }

    public double CalculateServiceFee(Account acct)
    {
        return feePerUser * acct.NumOfUsers;
    }
}
public abstract class MonthlyServiceFee : ICalculateServiceFee
{
    private double serviceFee;

    public MonthlyServiceFee(double serviceFee)
    {
        this.serviceFee = serviceFee;
    }

    public double CalculateServiceFee(Account acct)
    {
        return serviceFee;
    }
}
Now our concrete classes just need to pass the serviceFee value to their respective base classes by using the base keyword as part of their constructor.
public class ServiceA : PerUserServiceFee
{
    public ServiceA() : base(5) { }
}

public class ServiceB : MonthlyServiceFee
{
    public ServiceB() : base(10) { }
}

public class ServiceC : MonthlyServiceFee
{
    public ServiceC() : base(15) { }
}
Also, because we started with the interface and our base classes implement it, none of the existing code needs to change because of this refactor.
Next time you catch yourself using a switch statement, or even an if-else statement, consider using the object-oriented features at your disposal first. By creating abstractions for behavior, your application will be a lot easier to manage.
Polymorphism: Part 1
Note: I am teaching a course in Austin TX on Object Oriented Programming in March. I’ll also be speaking at the Austin .Net Users Group on this topic.
To say that understanding polymorphism is critical to effectively using an object-oriented language is a bit of an understatement. It’s not just a central concept; it’s the concept you need to understand in order to build anything of size and scope beyond the trivial. Yet, as important as it is, I feel it is often glossed over quickly in most computer science curricula. From my own experience, I took two courses that focused on OOP, one undergraduate and one graduate, but I don’t think I truly understood its importance until later.[1]
In C#, there are actually two forms of polymorphism available: subtype polymorphism through inheritance, and parametric polymorphism through the use of generics. Generics are an important concept and deserve their own discussion, but we are going to focus on subtype polymorphism for now.
Polymorphism lets you have different behavior for sub types, while keeping a consistent contract.
Now that sentence hides a lot of nuances and potential. What exactly does this do for you?
The Mechanics:
Let’s go through a real simple example, just to describe the mechanics of how to use this. Don’t worry, we’ll take the training wheels off real fast. There are three basic steps necessary at this point: 1) define the contract, 2) create concrete implementations, 3) leverage the abstraction.
Step 1: Define the Contract
You need to define the contract that establishes the abstraction the rest of your system will interact with. In C# you can use either an interface or a class, and the class can be either abstract or concrete. The main difference between a concrete class and an abstract class is that you can’t instantiate the abstract class directly, only concrete classes that inherit from it. You cannot create an instance of an interface either. The difference between an interface and an abstract class is that an interface can only define the contract; you cannot implement any of its members. With an abstract class, however, you can define a method and implement it, so that all subtypes inherit the same behavior.
We’re going to create a component that sends messages to users. We’ll start with the contract; in this case we’ll use an interface to define it.
It’s a very simple abstraction and only includes what we need it to do. Now what we’ll do is create some concrete implementations of the interface.
Now we’ve got two different behaviors while keeping a consistent contract for sending a message. Next we’ll leverage the abstraction somewhere else in our application.
Notice that the constructor takes a parameter of ISendMessages, not a concrete implementation. The OrderProcessor is not concerned with determining what type of message should be sent or the implementation details of sending it; it only needs to know that the contract requires a Message object. This is known as the Inversion of Control principle. By using only the abstraction and not the concrete types, the OrderProcessor has inverted control of how the message gets sent, handing that decision to whoever creates it. When the OrderProcessor is created, it must be told what implementation to use.
Also notice that because the contract is honored in both concrete implementations, we can substitute any subtype for the base type (in this case the interface). This is another design principle: the Liskov Substitution Principle. Let’s change the implementation a bit in a way that will break LSP.
Now that we need to set the Carrier on the SMSMessenger, the order processor has to set the carrier whenever it is dealing with an SMSMessenger. As you can see, it really complicates using the messenger. Now the OrderProcessor must have specific knowledge of how to use a concrete type, and so will every other component that needs to send a message. There are several ways to solve this problem while still maintaining LSP, and we’ll discuss some of them later.
[1] To be fair I took the graduate level course after a couple of years of experience, but I felt that I had a better grasp of the material than my professor did.
[2] While I don’t think I learned much about OOP in graduate school, I learned a lot of other things that I would never have been exposed to otherwise, and overall I think the experience was worthwhile.
Combining Modules in Require.js
Here’s a quick tip that I learned today the hard way, because it’s actually in the documentation.
In one of my projects, I’ve got a bunch of commands that I want to attach to an event based on what menu item is selected. My app object listens for menu events and then wires up the command based on the selected menu item. My first version looked like this:
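Roughly, it was shaped like this (the command module names here are made up for illustration):

// app.js -- every command is listed as a separate dependency
define(['commands/addItem', 'commands/removeItem', 'commands/saveOrder'],
  function(addItem, removeItem, saveOrder){
    var commands = {
      addItem: addItem,
      removeItem: removeItem,
      saveOrder: saveOrder
    };
    return {
      onMenuSelected: function(menuItem){
        // wire up the command that matches the selected menu item
        commands[menuItem.command].execute(menuItem);
      }
    };
  });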
This module definition is obviously going to get very messy as I add more available commands. The solution was to create a new module that combines all of the commands, using a different variation on the module definition where your only dependency is require itself.
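A sketch of that combining module, using the variation where the only dependency is require itself (module names again illustrative):

// commands/all.js -- gathers the individual command modules into one object
define(function(require){
  return {
    addItem:    require('commands/addItem'),
    removeItem: require('commands/removeItem'),
    saveOrder:  require('commands/saveOrder')
  };
});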
Now I can simplify my app module to look like this:
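Along these lines:

// app.js -- now depends only on the combined commands module
define(['commands/all'], function(commands){
  return {
    onMenuSelected: function(menuItem){
      commands[menuItem.command].execute(menuItem);
    }
  };
});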
Node.js Must Know Concepts: Asynchronous
When writing node applications, there are a few concepts that are important to understand in order to create large-scale applications. I’m going to cover a few that I think are important when building non-trivial sites with node.js. If you have suggestions for other important topics or concepts, or areas you are struggling with, let me know and I’ll try to cover them as well.
It’s asynchronous, duh
If you’ve done anything or read anything with node.js, I’m sure you are aware that it is built on an event-driven, asynchronous model. It’s one of the first things you have to come to grips with if you are building anything substantial. Because node.js applications are single threaded, it’s very important that you keep to the asynchronous model. When you do, your apps will be amazingly fast and responsive. If you don’t, your application will slow to a crawl. Let’s take the simplest web server example:
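Here’s the canonical hello-world server from the node docs:

var http = require('http');

// the handler runs as a callback each time a request event fires
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');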
This code runs on a single thread, waiting for a web request. When a request comes in, you want to pass the work off to an asynchronous callback handler, freeing the main thread to respond to more requests. If you block the main event loop, no more requests will be processed until your work completes.
It can take a while to get used to this model, especially coming from a blocking or multi-threaded paradigm, which uses a different approach to concurrency. The first time I ran into this was building the Austin code camp site. To save the results from the form, I abstracted the work into a separate function. In the request handler, I called the save function, then returned the response.
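In rough form, the mistake looked like this (the db call, route, and log statements are illustrative, not the original code):

// sketch only -- db and the route are stand-ins for the real code
function saveRegistration(data){
  db.collection('registrations').insert(data, function(err){
    console.log('saving the data');   // runs later, inside the async callback
  });
}

app.post('/register', function(req, res){
  console.log('calling save');
  saveRegistration(req.body);          // kicks off the async work and returns immediately
  console.log('returning response');
  res.send('thanks');                  // responds before the data is actually saved
});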
But I forgot that the work to save the data was done asynchronously, so my output log looked like this:
calling save
returning response
saving the data
Because the work to save the data was done asynchronously, the response was sent before the data was actually saved. (Keep in mind, this is not always a bad thing; think of writing a log statement without waiting to see whether it completes.) What I needed to do was use a continuation model and pass in a callback that completes the http request when the save completes, or sends back an error.
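A sketch of the corrected version, again with illustrative names:

function saveRegistration(data, callback){
  // hand the result (or error) back to whoever asked for the save
  db.collection('registrations').insert(data, callback);
}

app.post('/register', function(req, res){
  saveRegistration(req.body, function(err){
    if (err) {
      res.statusCode = 500;
      return res.send('something went wrong');
    }
    res.send('thanks');   // only sent once the save has completed
  });
});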
It can take a while to get used to the continuation model, and it can get really messy when you need to complete several operations before finishing a request. There are a lot of workflow modules you can use to make this easier, and it’s also relatively simple to build your own; in fact, doing so is practically a rite of passage for node developers. It’s also possible to abstract this by using EventEmitters, which we’ll discuss in a later topic.
Would you like to be employee #1 at a Software Company
Update: this position is located in Austin. However if you don’t live in the area and are interested in working with us, please contact me. I’m always interested in working with talented developers no matter where they live!
I’m expanding my company. I’m looking to hire a polyglot programmer who is comfortable in C#, JavaScript, Ruby, or whatever is the best tool for the job.
We’re a mix of consulting and software products (we’re just starting our first), and you’ll work with a good group of guys offshore as well. You’ll be setting the architecture for our and our clients’ applications, defining user stories and acceptance criteria, and making sure quality stays at an extremely high level. TDD and Agile are required.
Compensation will be salary plus equity. Chances are, if you’re a senior .Net developer, the salary will be lower than what you’re making now. But you’ll have a lot of responsibility, and a life (40-hour work weeks). You’ll need to be more excited about owning part of the company and making it into something exceptional.
You can contact me through the website, on twitter at @john_teague, or at john at avenidasoftware.com if you are interested.