Monday, May 10, 2010

The Postman Cometh

Ahh code... you perplex whilst I define you. Like clay in my hands, shapes appear and fall, as time funnels through glass. I stand, ready and waiting, for your challenge. Who will spill blood or bits at the crossroads? Until we compile again, sincerely dave.

Thursday, July 2, 2009

Real-World Haskell

Normally I put more time into a blogpost, but this time I'm just going to rant for a bit instead.

I bought a book called Real World Haskell hoping that I would learn all kinds of wonderful things about the (in my book) top programming language Haskell. As it turns out, A Gentle Introduction to Haskell and the Ninety-Nine Haskell Problems are a WAY BETTER APPROACH to learning the language IMHO. The book does a relatively decent job of teaching the language in the first few chapters, but there is a distinct drop in quality in the later chapters.

Let Gentle Introduction to Haskell be GIH and Real World Haskell be RWH: RWH - GIH = (consider the cost of thunks, consider the readability of a where clause over a lambda). Consider the cost of thunks: when you left-fold a list lazily, Haskell builds up a chain of thunks, which can be expensive. For example, if you

foldl (+) 0 [1..1000000]

it will probably cause a stack overflow. Do the substitution and you'll notice that the accumulator isn't required until the final result is used, so Haskell delays the evaluation (in the process it creates a chain of thunks which ends up eating the stack when it's finally forced). Of course you can use the strict version (foldl') or foldr instead (foldr is primitive recursive for you computability geeks out there). So that is RWH good thing number one.
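A minimal sketch of the strict alternative (the name `total` is mine, not from the book):

```haskell
import Data.List (foldl')

-- foldl would build a million-deep chain of (+) thunks before anything is
-- forced; foldl' forces the accumulator at every step, so the fold runs in
-- constant stack space.
total :: Int
total = foldl' (+) 0 [1..1000000]

main :: IO ()
main = print total  -- 500000500000
```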

I like the Haskell take on lambda: \x -> x + 1. In particular the symbol '\' is the closest standard key on your keyboard to an actual lambda. This is not unlike the choice of '/=' for not equals, which looks much like the mathematical not-equals symbol. In RWH they introduce lambda expressions but mention that it's generally better to use a named function defined in a where clause instead of a lambda... I would agree with this. Readability should be considered very important because there are so many programmers who will look at your Haskell code and think "dude, what the hell is this?" anyway. Of course, I'm not taking a stab at Haskell... rather at the reader, because it's probable that they aren't familiar with functional programming (lambdas in particular) to start with.
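To illustrate the point, here's the same trivial map written both ways (the function names are just my own illustration):

```haskell
-- With a lambda: compact, but the reader has to parse the expression inline.
incAllLambda :: [Int] -> [Int]
incAllLambda xs = map (\x -> x + 1) xs

-- With a named function in a where clause: the name documents the step.
incAllWhere :: [Int] -> [Int]
incAllWhere xs = map increment xs
  where increment x = x + 1

main :: IO ()
main = print (incAllWhere [1, 2, 3])  -- [2,3,4]
```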

In summary, don't buy Real World Haskell unless you are a hack-programmer who likes to feel good about themselves after reading a book with 'Haskell' printed on the cover (this would be the same type of programmer that thinks C is a functional language because you can only define functions and not class methods... get your shit together). You are much better off reading GIH over and over until it sinks in than reading RWH (with the exception of the difference set I just detailed). Point is... just pull it together and read GIH until you understand it... RWH is just masturbation.

Wednesday, April 29, 2009

Functions, Pattern Matching, and Types

Haskell is strongly typed, which means you can't compile ill-typed expressions (using a string when a number is expected and so on). It is also statically typed, which means types are resolved at compile time rather than at runtime. Although the language is statically typed, you rarely need to "finger type" your code because Haskell will infer the types from your definitions. With type inference you can write nice compact programs and still have the performance and safety of static typing.

Before we look at some type inference we'll need to learn a few things about writing Haskell code. Since this language is functional you are primarily writing functions. You define a Haskell function very much like you do in a Mathematics class, as a series of equations. Consider this set of equations

fib(0) = 0
fib(1) = 1
fib(n) = fib(n-2) + fib(n-1)

The top equation says that the function named 'fib' reduces to 0 when the input is 0, the next says the reduction is 1 when the input is 1, and the last equation says that when the input is some number 'n', the reduction is the sum of the previous two... this should look familiar, it's the recurrence for the famous Fibonacci sequence. Let's have a look at this function written in Haskell:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n-2) + fib (n-1)

It's a pretty straightforward transition from the Mathematical definition to Haskell code... this is generally true in most programming languages, but particularly evident with Haskell. The first difference you see is the function signature, which tells you about the function. Just read :: as "has type" and know that 'Int -> Int' means you've got a function of one integer argument that returns an integer, so the first line reads as:

The function named 'fib' has type of a unary function from Int to Int.

The next three lines are the Fibonacci equations written in code. When the program executes it does pattern matching from the top to the bottom, so if Haskell sees a function call like the following (notice that in Haskell you don't use ( and ) when calling a function):

fib 3

It says "hey is 3 the same as 0?" and the answer is no, so it goes down a line and says "is 3 the same as 1?" again this is a no, so it goes to the next line and says "is 3 the same as n?" and this is actually a match this time (n is the argument name here, not a character literal). So the third equation is picked and then the n is bound to the value 3*. Next the right-side of the equation executes and you get the following after substitution:

fib (3-2) + fib (3-1)

which is really the same as

fib 1 + fib 2

and then you ask the same questions: "is 1 the same as 0?", no... "is 1 the same as 1?", yes, so we use that equation this time and it reduces to 1. Rinse, wash, repeat and in the end you've just computed the 4th Fibonacci number (2). Now let's do something crazy:

fib 0 = 0
fib 1 = 1
fib n = fib (n-2) + fib (n-1)

I've removed the function signature and left the equations, and that's cool because Haskell just says "hey man, I see what you're doing here, I'll just write the signature for you". This isn't done by some magical little programmer in your computer, it is type inference. If you think about it, it kinda makes sense... there is enough information in the equation set for you and I to figure out the types so why can't a computer do it too? That is what Haskell is doing here and it can do it pretty well all the time.
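In fact, if you leave the signature off, GHC infers something even more general than Int -> Int... roughly (Eq a, Num a) => a -> a (the exact constraints may vary by compiler version), so the very same equations also work at other numeric types:

```haskell
-- No signature: the type is inferred from the equations alone.
fib 0 = 0
fib 1 = 1
fib n = fib (n-2) + fib (n-1)

-- The inferred type is general enough to use fib at Integer:
main :: IO ()
main = print (fib (10 :: Integer))  -- 55
```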

So why does Haskell even have a way for you to write the type signature then? Well, even though it can be inferred, you might still want to write it out. It's a good way for beginners to get familiar with the type system and also serves as a nice "self documenting" part of the language (you can learn a lot about what the function does by looking at the signature).

So this is just the tip of the Haskell type system iceberg but it's a good starting point. We'll take off from here next time and look into the available types with more detail.

* This is actually a lie... Haskell doesn't bind that n to the value 3, it binds it to something called a "thunk". The thunk is a procedure that will produce a 3 when it is called. This means that if the n was never used in the equation, then the thunk is never called, and the input is never evaluated. This isn't a big deal when we've got a literal value being used like we do here, but it can be in other scenarios (think infinite lists). This is a feature called "non-strict evaluation" and it's a big deal in Haskell. We'll be talking about this much more in subsequent posts. I lied at first so that the details don't get in the way of the truth and you get to understand how Haskell functions work... at least I came clean and gave you a better idea of what the real deal is.

Monday, March 30, 2009

Haskell Series Kickoff

I know many people, but most only speak one language... programmers, on the other hand, usually know multiple computer languages. It's good to know more than one programming language, but I recommend picking one from a paradigm you don't already know. There is a fair chance you have some familiarity with an imperative and/or object-oriented language such as: C, Java, C++, Pascal, C#, and many more. Until a couple of years ago I was in the same boat, then I read the wonderful book The Structure and Interpretation of Computer Programs (aka the wizard book).

The wizard book uses a LISP dialect called Scheme, which is in the functional paradigm. Another popular functional language is Haskell, which is my favorite. I learned Haskell by reading A Gentle Introduction to Haskell and doing (most of) the Ninety-Nine Haskell Problems along the way. Combining these two resources with a decent amount of time proved to be a winning formula for learning Haskell.

I've always enjoyed programming with Haskell and using my new Haskell-inspired problem solving skills in other languages as well. I figured others might feel the same way so this post is the kickoff for a series on Haskell. I'll take you through the previously mentioned resources and add my own perspectives here and there.

I'll try to get a post up every week or so and I'd really appreciate your feedback so please leave comments. The first post will focus on the fundamentals of the Haskell type system.

Sunday, March 15, 2009

Using Template Specialization for Multi-format IO

I get the feeling that C++ templates have a reputation for being too difficult. That is how I felt about templates at first too. I think that everything seems difficult at first because you don't understand it yet. You just need to dig in, and eventually a lightbulb clicks on and it all feels natural because you understand it. A while ago I decided to learn the STL, and that is when I understood templates. Later on I even developed a pretty handy technique for doing multi-format object serialization using templates.

The main idea is that you have a type for each format you want to support and then write a template specialization for the stream extraction (>>) and stream insertion (<<) operators for each type. Check it out:


#include <iostream>
#include <fstream>
#include <string>

using namespace std;

class oxmlstream : public ofstream {};
class ojsonstream : public ofstream {};

class contact
{
public:
    contact(string name, string address)
    {
        _name = name;
        _address = address;
    }

    string get_name() const { return _name; }
    string get_address() const { return _address; }

private:
    string _name;
    string _address;
};

// Generic fallback: plain space-separated output for any other stream type.
template<typename stream_type>
stream_type& operator<<(stream_type& stream, const contact& contact)
{
    stream << contact.get_name() << ' ' << contact.get_address();
    return stream;
}

// XML specialization.
template<>
oxmlstream& operator<<(oxmlstream& xml, const contact& contact)
{
    xml << "<contact>" << endl;
    xml << "  <name>" << contact.get_name() << "</name>" << endl;
    xml << "  <address>" << contact.get_address() << "</address>" << endl;
    xml << "</contact>";

    return xml;
}

// JSON specialization.
template<>
ojsonstream& operator<<(ojsonstream& json, const contact& contact)
{
    json << "{" << endl;
    json << "  \"name\": \"" << contact.get_name() << "\"," << endl;
    json << "  \"address\": \"" << contact.get_address() << '\"' << endl;
    json << "}";

    return json;
}

int main(int argc, char* argv[])
{
    contact dave("dave", "dave's address");
    cout << dave << endl;

    oxmlstream xout;
    xout.open("dave.xml");
    xout << dave << endl;
    xout.close();

    ojsonstream jout;
    jout.open("dave.json");
    jout << dave << endl;
    jout.close();

    return 0;
}


I skipped the corresponding stream extraction (input) overloads here, but I think this should communicate the main idea. This keeps the input/output code outside of the class definition so you can focus on the information and behavior while working on the class. You can tackle the IO via the operator overloads after you've got the class working properly, and you are able to incrementally add support for the latest and greatest format (MGraph anyone?).

Sunday, March 1, 2009

Function Curry

Curry is good spicy food and it is also a handy programming technique. You can curry a function and apply it to a single argument to produce a new function that has one less parameter... that is, the arity is reduced by one (strictly speaking, supplying fewer arguments than a function takes is called partial application; currying is what makes it possible). Consider the following binary function:

add x y = x + y

If you call this function with two arguments, it will evaluate to the sum of the numbers as expected. What do you think should happen when you call the function with just one argument like this:

add 1

Should this be an error or is there a meaningful way to evaluate this? It doesn't make any sense to try adding 1 to nothing so we can't do that. What if we consider the type of the result, instead of a number why not return a function? This new function should be based on the add function but somehow incorporate that argument 1 into it. Why not fix the first parameter to be 1 and the resulting function would just have a single argument... essentially we create an increment function:

increment = add 1
increment 3 -- evaluates to 4
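Actually, every Haskell function of several arguments is already curried: the arrow associates to the right, so Int -> Int -> Int really means Int -> (Int -> Int). A quick sketch:

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Partially applying add is just ordinary function application,
-- because add 1 is already a function of type Int -> Int.
increment :: Int -> Int
increment = add 1

main :: IO ()
main = do
  print (increment 3)            -- 4
  print (map (add 10) [1, 2, 3]) -- [11,12,13]
```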

This is how you curry a function... it's pretty straightforward with Haskell, but how could we do this in C#? Consider the following extension method:

static class FuncExtensions
{
public static Func<B, R> Curry<A, B, R>(this Func<A, B, R> f, A x)
{
return y => f(x, y);
}
}

You can apply this extension method to a binary function and use the resulting "curried" unary function. Now we can write the previous Haskell code in C#:

class Program
{
static void Main(string[] args)
{
Func<int, int, int> add = (x, y) => x + y;
Func<int, int> increment = add.Curry(1);

var z = increment(3); // evaluates to 4
}
}

Currying is a way to build new functions that are based on existing ones; this allows for better code reuse and supports a stratified design.

Thursday, February 26, 2009

First!!

Welcome to the algotron, the one stop solution for the awesome things you've always wanted to do. I've been doing awesome things for a while now so I figured I might as well start sharing. You don't need to know much about me... I'm a coder, rocker, and I'm gonna rock some code out on this blog... let's get it going:

I wrote this Haskell code that defines an infinite list of prime numbers using the Sieve of Eratosthenes... I don't think this is the most efficient way to generate primes, but it's got a pretty elegant definition:

primes = let sieve (x:xs) = x:sieve [y | y <- xs, y `mod` x /= 0] in sieve [2..]

So I thought I'd try to port my code over to C# 3.0 and it looks something like the following:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static IEnumerable<int> From(int start)
    {
        while (true)
        {
            yield return start++;
        }
    }

    static IEnumerable<int> Sieve(IEnumerable<int> sequence)
    {
        var x = sequence.First();
        var xs = sequence.Skip(1);

        return sequence.Take(1).Concat(Sieve(xs.Where(y => (y % x) != 0)));
    }

    static IEnumerable<int> Primes
    {
        get { return Sieve(From(2)); }
    }

    static void Main(string[] args)
    {
        var xs = Primes.Take(10);
    }
}

When I load the Haskell code into the REPL and try something like:

> take 10 primes
[2,3,5,7,11,13,17,19,23,29]

it works just fine. However, when I execute the C# program it doesn't halt and eventually overflows the stack (when I step through with the debugger I see that it is generating the right sequence, it just doesn't stop when it's supposed to). I'm pretty sure the Haskell code works due to lazy evaluation, but I thought that IEnumerable was also lazy... I guess the catch is that Sieve isn't an iterator method, so while Concat defers enumerating its arguments, the recursive call to Sieve itself executes eagerly and the recursion never bottoms out.

This isn't a huge show stopper or anything... I think it would be nice if it worked in C# but I'm not gonna lose any sleep over it.