Why Facebook’s Messaging Solution Matters


    This was originally posted on my company blog: “Why Facebook’s Messaging Solution Matters”.

    This is a pretty straightforward one – Facebook’s messaging solution matters because Facebook matters to its users. Facebook is a compelling platform – keeping in contact with your peers is undeniably important. But here’s the rub – people are more likely to check Facebook than check email. So if I can send someone a Facebook message in the same way I can send them an email – I’m gonna opt for the Facebook method. If I can combine the two… that’s a killer feature. I can email someone and have them receive it when they check Facebook? Yes please.

    What does this mean for business though? Facebook does remain a platform for people rather than professionals. Certainly some of the stuff I mention on my Facebook wall, to my trusted Facebook friends, is stuff I wouldn’t want exposed in a business context. But at the very least, this announcement means that Facebook users are more likely to receive your email communications, because when they check Facebook, they’re checking their email at the same time. In terms of enabling timely communication, this could be crucial. Facebook’s mobile platform has always been strong, so assuming they integrate this new feature, that provides another avenue for people to check their Facebook communications from any location.

    The consolidation of various messaging types is very Google Wave-like, but that’s as irrelevant as Google Wave was. The type of communication doesn’t matter – just the fact that it can be seen in a timely fashion in a highly-available interface – that’s the key. In fact, Apple have very slowly been driving at this – by combining MMS and standard text messages in their iPhone application – and ultimately their FaceTime platform will most likely merge with this. But they’ve missed the trick that Facebook have understood – the medium is not important. It’s all about the message.

    Where is this going to fall down? GMail was a revelation because of threaded messaging and because it disregarded storage limitations. GMail’s spam and phishing filters are very good indeed. So Facebook needs to heed this – organised, unlimited messages with strong filtering of malicious communications will swing this for them. Microsoft and Yahoo are still playing catch-up with GMail five years after its release – this is going to be a massive setback for them. GMail’s recent performance issues indicate that they could be struggling to stay still, let alone advance, so Facebook could capitalise on that.

    Ultimately though, Facebook is going to be for personal use and GMail can be happy with corporate usage – it’s just a question of whether Facebook’s move will push email into irrelevance in the medium-term.
    Ruby Method Parameter Default Values – A Shortcut


    Today was one of those moments where I thought to myself: “wouldn’t it be cool if I could”… and it turned out I could. In Ruby, you can give your method parameters default values:

    def my_method(a = 5)
    	puts a
    end
    
    my_method # outputs: 5
    

    So far, so ordinary. But you can also do something a bit more interesting – populate your parameter defaults from instance variables:

    class MyClass
    	def initialize
    		@default_a = 5
    	end
    
    	def my_method(a = @default_a)
    		puts a
    	end
    end
    
    MyClass.new.my_method # outputs: 5

    There are probably some Ruby gurus looking at this and thinking *OH MY GOD WHAT ARE YOU DOING*, but in my case it saved me having to create a method which “seeded” a second method with some default values; a few lines of code successfully saved.
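    One related trick worth knowing: defaults are evaluated at call time, left to right, so a later parameter’s default can even be computed from an earlier parameter. A quick sketch (the method and names here are purely illustrative):

```ruby
# Defaults are evaluated at call time, left to right, so a later
# parameter's default can reference an earlier parameter.
def repeat(text, times = 2, separator = " " * times)
  Array.new(times, text).join(separator)
end

repeat("hi")         # => "hi  hi"
repeat("hi", 3, "-") # => "hi-hi-hi"
```

    Handy, though anything much cleverer than this probably belongs in the method body.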

    PTOM: Breaking Free from HttpContext


    The System.Web.HttpContext class is a real heavyweight of the .NET Framework. It holds a wealth of information on the current server context, from the details of the current user request to a host of details about the server. It’s accessible from the HttpContext.Current static property, which means you can get hold of this information at any point in your code. Whether this is a strength or a weakness depends on your point of view, but consider the following code:

    public class AuthenticationService
    {
        public IRepository Repository { get; set; }

        public void Login(string username, string password)
        {
            User user = Repository.FindByLogin(username, password);

            HttpContext.Current.Session["currentuserid"] = user.Id;
        }
    }

    This type of code is probably pretty widespread – authentication logic separated out into its own class. The problem with this type of code comes when you want to test it. Consider this test snippet:

    [TestMethod]
    public void Should_Retrieve_User_From_Repo()
    {
        _authService.Login(username, password);
        _repo.AssertWasCalled(x => x.FindByLogin(username, password));
    }

    This will fail hard, because when your test runs, you don’t have a current HttpContext available to work with. In theory, you could fire up a webserver class and populate HttpContext.Current and everything would work just fine; with early versions of the Castle Monorail project, the Controller test support did something similar. However, this is pretty unwieldy, not to mention slow.

    Of course we do have some horrible situations in which teams don’t run these kinds of tests, so they’re probably thinking that they don’t care. They always run their code with a valid HttpContext available and are perfectly happy. Wait till you try and reuse your code to integrate with a third party which calls your authentication service. Ouch.

    So the bottom line is that we need to make sure HttpContext.Current is kept as far away from our code as possible. Another example of HttpContext usage is something like this:

    public void Log(string message)
    {
        WriteFile(message, DateTime.Now, HttpContext.Current.Request.Url);
    }

    So we’re writing a log message along with the time and the page where the message was logged. We need this information for debugging, so it’s understandable why this code arises, but again we can see testing issues with HttpContext. Fortunately in this case it’s easy to fix:

    public void Log(string message, string url)
    {
        WriteFile(message, DateTime.Now, url);
    }

    Of course this kind of solution applies to any static class you need to pull out, so let’s look at an example which is more closely related to HttpContext: cookies. A standard approach would see us doing something like this:

    private void PersistUser(string encryptedUserIdentifier)
    {
        HttpCookie cookie = new HttpCookie("user");
        cookie.Value = encryptedUserIdentifier;
        cookie.Expires = DateTime.Now.AddDays(14);
        Response.Cookies.Add(cookie);
    }

    This does the job, and adds a cookie to the response so that the browser will acknowledge it. The problem again lies in testing this code; without an HttpContext, we’re in trouble. Because a lot of new C# code is working with ASP.NET MVC and test-first practices, we need to take that into account in every part of our application. How about this instead:

    private readonly ICookieContainer _cookies;

    public Controller(ICookieContainer cookieContainer)
    {
        _cookies = cookieContainer;
    }

    private void PersistUser(string encryptedUserIdentifier)
    {
        _cookies.Set("user", encryptedUserIdentifier, DateTime.Now.AddDays(14));
    }

    Now, we don’t include any kind of code which references the cookie implementation directly, and that in turn means we don’t use HttpContext.Current. We provide an implementation of an ICookieContainer via the constructor. That interface and implementation could look like this:

    public interface ICookieContainer
    {
        void Set(string name, string value, DateTime expires);
        string Get(string name);
    }

    public class HttpCookieContainer : ICookieContainer
    {
        public void Set(string name, string value, DateTime expires)
        {
            HttpCookie cookie = new HttpCookie(name);
            cookie.Value = value;
            cookie.Expires = expires;
            HttpContext.Current.Response.Cookies.Add(cookie);
        }

        public string Get(string name)
        {
            HttpCookie cookie = HttpContext.Current.Request.Cookies[name];
            return cookie == null ? null : cookie.Value;
        }
    }

    Now, looking at this you might be wondering what on earth the point is – this is exactly the same code but in a different class! The important thing is that the first set of code is likely to be part of a bigger controller class, a class which you want to keep as thin as possible. So we pull the cookie handling code out and then the controller doesn’t have to be concerned about it at all.

    Similar approaches can be used wherever HttpContext touches your code. The important thing is that because HttpContext is such a heavyweight, we can break it apart and use only the parts that are needed by wrapping them up into custom classes which can be injected where they’re needed.
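    To show the payoff of this wrapping, here’s the same shape sketched in Ruby for brevity – a test can hand the class a fake container instead of anything HTTP-backed. All the names here are illustrative, not part of any framework:

```ruby
# The class under test depends on a small cookie interface, so a test
# can substitute an in-memory fake for the real HTTP-backed version.
class FakeCookieContainer
  attr_reader :cookies

  def initialize
    @cookies = {}
  end

  def set(name, value, expires)
    @cookies[name] = { value: value, expires: expires }
  end
end

class SessionPersister
  def initialize(cookie_container)
    @cookies = cookie_container
  end

  def persist_user(encrypted_id)
    # Two weeks from now, mirroring the AddDays(14) in the C# version.
    @cookies.set("user", encrypted_id, Time.now + 14 * 24 * 3600)
  end
end

fake = FakeCookieContainer.new
SessionPersister.new(fake).persist_user("abc123")
fake.cookies["user"][:value]  # => "abc123"
```

    No web server needed; the test just asserts against the fake’s state.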

    Castle MicroKernel Fluent Event Wiring


    The Castle MicroKernel Registration API is also used in Windsor, and both have a facility to allow components to subscribe to events exposed by each other. Right now, the only way to use the fluent API to configure the facility is to go right down and build the configuration nodes (taken from http://blogger.forgottenskies.com/?p=266):

    Then the actual business end of things is the Extension method which allows me to use this:

    The State of Windows Mobile


    “Have you done any Windows Mobile development?”

    “A tiny bit. Isn’t it just like Winforms but on a phone?”

    And from such an innocent beginning, a world of pain did explode into my universe. Just like Winforms on a phone is it? What’s the difference between the Compact Framework, Smartphone development, Pocket PC development, Windows Mobile? So many terms! So little time!

    Windows Mobile is the operating system, just like Windows Vista. The Compact Framework is just like the .NET Framework on the desktop. As for the difference between a Smartphone and a Pocket PC, well, you’ve got me there. I picked Smartphone because my device had phone functionality and it seems to be working so far. There are separate SDKs for each, so I assume there are some key differences which escape me. With Windows Mobile 6, the Smartphone and Pocket PC SDKs are now Windows Mobile 6 Standard and Windows Mobile 6 Professional, respectively. I think.

    Actually I think the real difference in these is the templates for projects you create and the emulators you are provided with. Professional, or Pocket PC, provides emulators for bigger screens. Microsoft has this to say about the naming kerfuffle:

    With Windows Mobile 6, we are revising our SKU taxonomy and naming to better align our brand and products with the realities of today’s mobile device marketplace. The historical form-factor based distinction between Windows Mobile powered Smartphone and Windows Mobile powered Pocket PC Phone Edition is blurring dramatically. We want our taxonomies and terminology to evolve to better reflect the evolution of the mobile device industry.

    So in order to reflect the blurring of the mobile device form factors, they’ve changed from having SDKs named after the types of device to SDKs named “Standard” and “Professional”. Hmm. How about having a single SDK called “Mobile Device SDK” and allow me to pick the device dimensions from within my project on the fly? Back at the start of this tale, I assumed that picking Windows Mobile for development would allow us to target a range of different devices, large and small, and in fact I can do that. I can deploy my application to a Windows Mobile phone with a big screen and to one with a small screen. The SDK split seems pretty artificial with that in mind.

    Naming conventions and confusions aside, it is nice to be able to write against a single API and deploy to any Windows Mobile device. Or it would be if it worked.

    My bugbear here is with a particular class: CameraCaptureDialog. Take the Samsung Omnia for example. You can certainly pop up the camera using CameraCaptureDialog.ShowDialog(), but can you retrieve the filename of the image you took? You cannot. That’s because the Omnia’s camera supports taking multiple images one after the other until you explicitly close it.

    How about the HTC Diamond? Well that opens fine, and returns a filename too, but if you try and re-open the camera straight after processing the filename, to allow the user to take another photo, it fails silently and doesn’t show the camera. If you try and do the same thing with the HTC Touch, it freezes.

    Part of the issue is that the Compact Framework leaves too much up to the manufacturers and doesn’t give enough control to the developer. We can set the resolution of the camera, for example, but we have no shortcut for setting it to the maximum resolution available. If you try and set it to a resolution which is not supported, some devices silently reset to a much lower resolution.

    Microsoft need to extend camera support for .NET developers and give a lower level of access. They need to push device manufacturers to adhere to the Windows Mobile APIs and be more precise in how they are specified. And they need to simplify and modernise their mobile development framework so that developers can be fully aware of all the options available to them.

    This post was also published on my personal blog.

    Parsing XML-like Files


    The quantity of data now stored in XML, HTML, and other similar formats must be absolutely huge. Fetching that data from XML-like files is largely seen as a solved problem on many platforms, but I’m going to look at the various alternatives and see where each would be appropriate and how you can improve the development process by using each method.

    The Manual Method

    This is the one which makes me go *yuck* in a big way. You can read in a file as a straightforward string or array of line strings, and skip to the part of the document you want. This is most definitely the manual method when it comes to parsing XML, because you’re not really taking an interest in whether the document is XML or not. You’re not taking advantage of the structure and you’re treating the file in the same way you would any flat file.

    I saw this approach in a PHP application, and it was being used to parse HTML. In this environment I can kind of understand why you’d think about the manual method: PHP doesn’t have a built-in means of parsing HTML into well-formed XML, so there’s no way of getting a structured document to even use PHP’s DOM support with. However, there are third-party libraries which support methods which are a bit less laborious.

    Taming HTML

    Many platforms have libraries which parse HTML into a well-formed state to allow further processing. In fact, using HTML Tidy, you can do this manually from pretty much any platform. Some libraries will wrap HTML Tidy and some will provide their own method, but the bottom line is that you’re likely to be able to leverage some tool to tame the horrific mess of HTML that is the world wide web.

    XPATH

    With your HTML under control, or your XML at the ready, you’ve now got a chance to pull data from your document with some more advanced techniques. Most developers will have touched on XPATH at some time or another, as it’s commonplace within the ecosystems of most development platforms. The name gives it away – XPATH is XML Path Language, and allows the use of specialized queries to pull out parts of an XML document:

    //li
    

    This XPATH expression specifies that we wish to find all <li> elements at any level of the document. More complex expressions are available, providing means of selecting elements based on attributes, values of nodes, and much more. The strength of XPATH is its availability. In the .NET world, for example, fast XML processing and XPATH are available as first-class members of the framework, only a using directive away. This makes the use of XPATH common-place.
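    This isn’t a .NET-only affair, of course. As a quick illustration, the same //li query works from Ruby’s standard library via REXML:

```ruby
require "rexml/document"

xml = "<ul><li>one</li><li>two</li><li>three</li></ul>"
doc = REXML::Document.new(xml)

# "//li" matches every <li> element at any depth of the document.
items = REXML::XPath.match(doc, "//li").map(&:text)
# => ["one", "two", "three"]
```

    The same expression, near enough word for word, works in .NET, Ruby, PHP, and almost anywhere else – which is exactly the availability argument above.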

    CSS Selectors

    This alternative approach is less common due to the lack of first-class support in most frameworks. However, choice is good, so let’s look at the way in which CSS can pull out data from your XML documents. I suspect there are still developers out there with an aversion to CSS, so let’s be clear: this approach discusses CSS selectors, not layout with CSS. That means all of the strange behaviour which comes with floating elements is not applicable here. We’re also not running CSS in the browser, so there are no incompatibility issues. This is a subset of CSS, working as advertised.

    Hpricot for Ruby, phpQuery for PHP and my own Fizzler for .NET are examples of this kind of solution. Here’s a simple sample using Fizzler:

    var engine = new Fizzler.Parser.SelectorEngine(html);
    engine.Parse(".content");
    

    We start up Fizzler by passing in some HTML string, and then select any nodes with a class name of “content”. Fizzler uses HTML Agility Pack underneath, so the result comes back as a collection of nodes which you can further manipulate.
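    If you’re wondering how a class selector relates to the XPATH approach above, “.content” boils down to a query on the class attribute. This sketch uses Ruby’s REXML rather than Fizzler itself, and the translation shown is only a rough equivalent of what these libraries do under the hood:

```ruby
require "rexml/document"

html = "<div><p class='content'>Hello</p><p>Other</p></div>"
doc = REXML::Document.new(html)

# ".content" roughly translates to: any element whose space-separated
# class list contains the token "content".
nodes = REXML::XPath.match(
  doc, "//*[contains(concat(' ', @class, ' '), ' content ')]"
)
nodes.map(&:text)  # => ["Hello"]
```

    Comparing the two, it’s easy to see why the CSS form is the friendlier one to write by hand.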

    Conclusion

    My rationale for developing Fizzler was a personal dislike of XPATH. Because I was strongly familiar with CSS, and because Fizzler supports some advanced CSS 3 selectors, it’s possible for me to achieve the same results using CSS selectors without the barrier for entry presented by XPATH. Your mileage may vary, depending on your experience with each technology, but Fizzler fills a gap in the .NET market and I hope some people will find it useful.

    Open Source Documentation


    Recently I’ve set up a network attached storage computer on my home network. As well as providing RAID storage for all the devices in the house, it acts as a central download server for everyone to use. Key to this strategy is SABnzbd, a Python application which downloads binaries from newsgroups, and which sits on the server as a daemon, grabbing files on a schedule or when we ask it to. The functionality of this software is incredible, but more than that, there is a great deal of documentation for each feature directly linked from the web interface. This enabled me to set up advanced features such as RSS feeds, categorisation, and post-download scripts, in order to shift SABnzbd from being handy to indispensable.

    This post is not about SABnzbd though – it’s about documentation. My latest project has been a very quick CMS solution using Monorail, and I’ve been taking advantage of the new features available in Castle’s trunk. The new routing in Monorail, the fluent API for component registration in Microkernel, and more new features, have all been making my life easier… once I’ve figured them out. I’m in awe of the people who have produced these features and I’m not averse to digging around test cases where I can, in order to find out how to use them.

    However, it would unarguably be better if the Castle documentation reflected these new changes. It’s understandable that the documentation lags behind these features, and since I don’t have the intimate Castle knowledge needed to contribute to fixing bugs or adding new code, I figured it’d be good to try and work on this documentation. Castle uses an XML-based documentation format which is just fine for final docs, but not that great for scribbling down notes and filling out information. For that, I’ve decided to use the using.castleproject.org wiki, a site designed to hold tips and tricks for the Castle Project.

    I’ve set up a simple system of tagging which allows people to search out stuff in need of documentation and then tag it when it’s complete. At that point, I plan on converting it into a patch for the official Castle documentation. In this way we can get the rapid prototyping of a wiki combined with an easy route to formal documentation. I think the barrier for entry is a definite problem for contributing on many projects, and documentation can be a good place to start. For Castle, I’m trying to make the barrier for entry for even that documentation very low. So if you can help out with the routing documentation or the validation documentation or anything else that’s missing or incomplete in the main Castle docs, please pitch in and try and help!

    (Also published on my personal blog)

    Common Interfaces for Tool Families


    There are a load of different tool “families” in use in the .NET ecosystem which I’m sure LosTechies readers will take advantage of pretty much every day. IoC containers. Logging infrastructures. URL routing mechanisms. Each of these families operates on broadly similar principles – taking the container example, we know that we need to add types to the container and resolve types which are already in there. For logging, we’d generally have the ability to log to different levels of severity. So you can see that while the implementations and underlying behaviour may be significantly different, there is a layer of abstraction which highlights commonality.

    Castle Project has a Castle.Core.Logging.ILogger interface which supports the use of a variety of different logging systems within your applications. It is a facade behind which log4net or NLog does the magic, while your application happily logs information without worrying about what is actually taking care of the logging. To me, this is a very interesting method of supporting a tool family – expose the most common methods which a tool supports and let the tool get on with its own business.

    What I’d like to see is a community effort to publish an ILogger interface to which various logging libraries can adhere, and an IContainer interface for IoC libraries, and other interfaces for various tool families which have enough common features. In this way, we can enable a new level of code sharing and integration between projects.
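    To make the idea concrete, here’s a minimal sketch of such a facade in Ruby – one logging interface, with the backend swappable behind it. The names are mine, not from any existing library:

```ruby
# Application code logs against one small interface; the backend
# underneath (console, file, a log4net-style adapter) is swappable.
class FacadeLogger
  LEVELS = [:debug, :info, :warn, :error]

  def initialize(backend)
    @backend = backend
  end

  # Generate one method per severity level, all delegating to the backend.
  LEVELS.each do |level|
    define_method(level) { |message| @backend.write(level, message) }
  end
end

# One possible backend, handy for tests: it just records entries.
class MemoryBackend
  attr_reader :entries

  def initialize
    @entries = []
  end

  def write(level, message)
    @entries << [level, message]
  end
end

backend = MemoryBackend.new
log = FacadeLogger.new(backend)
log.info("application started")
backend.entries  # => [[:info, "application started"]]
```

    Swapping the backend touches one constructor call, and nothing in the application code changes – which is exactly the property a shared ILogger-style interface would give us across libraries.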

    (Also published on my personal blog)

    Application Configuration


    I had cause to recently revisit an old ASP.NET application I’d written way back when I was a development newcomer. Digging around the web.config I found the appSettings section:

    <appSettings>
    <add key="systemEmailAddress" value="me@me.com" />
    <add key="adminEmailAddress" value="me@me.com" />
    <add key="templateDirectory" value="~/admin/templates/" />
    <add key="installPath" value="~/admin/" />
    </appSettings>

    You get the idea. There were loads of these, configuring many different aspects of the system. Many should have been configurable by site administrators from some kind of user interface. Technically this is possible – editing the web.config on the fly – but I really wouldn’t recommend it.

    Anyway, since then I’ve used this method a number of times, as well as having a Settings database table which stores key/value pairs:

    var email = SettingRepository.FindByKey("email");

    Or having a Settings table with a single row and columns for each setting to allow it to be mapped to an object:

    Settings settings = SettingsRepository.FindFirst();

    All three have upsides and downsides but none are particularly satisfying. I’m mulling over which approach to take in my next project which is going to need a fair few of these settings. Which method do you favour? Do you have a fourth way?
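    For comparison, here’s roughly what the key/value flavour looks like once the pairs are loaded, sketched in Ruby with merged defaults so that missing settings fail loudly rather than silently returning nothing. The names are illustrative:

```ruby
# A minimal key/value settings wrapper: pairs come from wherever they
# are stored (database table, config file), defaults cover the gaps,
# and unknown keys raise instead of quietly returning nil.
class Settings
  def initialize(pairs, defaults = {})
    @pairs = defaults.merge(pairs)
  end

  def [](key)
    @pairs.fetch(key) { raise KeyError, "unknown setting: #{key}" }
  end
end

settings = Settings.new(
  { "admin_email" => "me@me.com" },
  { "template_dir" => "~/admin/templates/" }
)
settings["admin_email"]   # => "me@me.com"
settings["template_dir"]  # => "~/admin/templates/"
```

    The fail-loudly behaviour is the main thing I’d want from any of the three approaches – a typo in a setting key should blow up, not limp along with a nil.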

    VMWare Optimization


    Further to my earlier post on development environments, I wanted to share some ideas on how to work with Visual Studio in VMWare. Remember that running VMWare isn’t cost-free; you need to make sure that you’ve got resources to allocate to your virtual machine and that the virtual machine in question isn’t going to grind to a halt because you’re trying to do too much with it.
