Is Functional Abstraction Too Clever?


I received a rather interesting comment on a recent Stack Overflow answer:

This code seems too clever by half. Is it art? – PeterAllenWebb

The code in question was a functional solution to an algorithm described approximately as follows:

Draw n−1 numbers at random, in the range 1 to m−1. Add 0 and m to the list and order these numbers. The difference between each two consecutive numbers gives you a return value.

Which I solved like this, with n = slots and m = max:

static int[] GetSlots(int slots, int max)
{
    return new Random().Values(1, max)
                       .Take(slots - 1)
                       .Append(0, max)
                       .OrderBy(i => i)
                       .Pairwise((x, y) => y - x)
                       .ToArray();
}

Using a few extension methods:

  • Values() returns an infinite sequence of random values within the specified range.
  • Append() takes a params array and appends its arguments to the original sequence.
  • Pairwise() generates a sequence from calculations on pairs of consecutive elements in the original sequence.
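
For example (my own illustration, not from the original answer), Pairwise over a small sorted sequence yields the gaps between neighboring elements:

var gaps = new[] { 0, 2, 5, 9 }.Pairwise((x, y) => y - x);
// yields 2, 3, 4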

I can see how one would think the code is clever; however, I’m not sure what would qualify it as too clever. Every method call has a well-defined purpose and maps directly to part of the original algorithm:

  1. From random numbers in the range 1 to m−1…
  2. …draw n−1.
  3. Add 0 and m to the list…
  4. …and order these numbers.
  5. The difference between each two consecutive numbers…
  6. …gives you a return value [in the array].
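
As a quick sanity check (again my own example, not from the post), the result always contains slots values that sum to max, because the pieces are exactly the gaps between 0, the random cut points, and max:

int[] parts = GetSlots(5, 100);
// e.g. { 13, 29, 8, 42, 8 } -- five pieces that sum to 100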

As far as I’m concerned, a solution couldn’t get much clearer than this, but that’s easy enough for me to say—what do you think? Is there a better way to express the algorithm? Would an imperative solution with shared state be more readable? How about maintainable?
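
For comparison, here is a rough sketch of an imperative version with shared state (the name and structure are mine, purely for illustration):

static int[] GetSlotsImperative(int slots, int max)
{
    var random = new Random();
    var numbers = new List<int> { 0, max };

    // draw n - 1 random numbers in the range 1 to m - 1
    for (int i = 0; i < slots - 1; i++)
        numbers.Add(random.Next(1, max));

    numbers.Sort();

    // the difference between each two consecutive numbers is a return value
    var result = new int[numbers.Count - 1];
    for (int i = 1; i < numbers.Count; i++)
        result[i - 1] = numbers[i] - numbers[i - 1];

    return result;
}

It does the same work, but every step has to manage indexes and intermediate state explicitly.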

For example, one could add the requirement that the random numbers not be repeated so that the difference between adjacent numbers is always nonzero. Updating the functional solution is as simple as adding a Distinct() call:

    return new Random().Values(1, max)
                       .Distinct()
                       .Take(slots - 1)
                       ...

To me, this is the value proposition of functional programming. By expressing the algorithm in terms of common operations, we’re able to spend more time thinking about the problem than the details of the solution. A similar change in an imperative implementation would almost certainly have been more involved and prone to error.
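
To make that concrete, here is roughly how the imperative sketch above would have to change to reject duplicates (again, just an illustration):

    // replace the simple drawing loop with one that tracks what has been seen
    var seen = new HashSet<int>();
    while (numbers.Count < slots + 1)
    {
        int value = random.Next(1, max);
        if (seen.Add(value))
            numbers.Add(value);
    }

The loop condition, the bookkeeping, and the termination reasoning all change, whereas the functional version only gained a single Distinct() call.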

For completeness, here are the implementations of the extension methods used:

public static IEnumerable<int> Values(this Random random, int minValue, int maxValue)
{
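    // Random.Next's upper bound is exclusive, so this yields values in [minValue, maxValue)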
    while (true)
        yield return random.Next(minValue, maxValue);
}

public static IEnumerable<TResult> Pairwise<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TSource, TResult> resultSelector)
{
    TSource previous = default(TSource);

    using (var it = source.GetEnumerator())
    {
        if (it.MoveNext())
            previous = it.Current;

        while (it.MoveNext())
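            // arguments evaluate left to right: the old 'previous' is passed first,
            // then 'previous' is advanced to the current element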
            yield return resultSelector(previous, previous = it.Current);
    }
}

public static IEnumerable<T> Append<T>(this IEnumerable<T> source, params T[] args)
{
    return source.Concat(args);
}

This also reminds me that Jimmy posted a similar Append() method as part of his latest post on missing LINQ operators. I used to use a version similar to his, but have found the params version to be more flexible (and easier to implement). Its Prepend() counterpart is similarly trivial:

public static IEnumerable<T> Prepend<T>(this IEnumerable<T> source, params T[] args)
{
    return args.Concat(source);
}
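
As a small illustration of that flexibility (assuming the extensions above are in scope), the params overloads let you add several values in one call:

var sequence = Enumerable.Range(1, 3).Append(7, 8).Prepend(0);
// 0, 1, 2, 3, 7, 8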