Sunday 14 August 2016

On functional programming: why tools and their quality matter

[Image: My rule-driven cellular automaton, MicroWorld, modelling human settlement in the British Isles]
Writing software is about solving problems, about finding elegant ways to solve problems. But it isn't, and shouldn't be, about finding elegant ways to solve the problems caused by the shoddy workmanship and poor technical design of the platform you're working on. If the platform you're working on is shoddy, don't work on it.

Java is a case in point. Java was designed as a special-purpose language for writing embedded software for small, 32-bit appliances with limited memory. For that special purpose, it's OK - it's certainly more usable than assembly.

Java inherits its syntax from C, which in turn inherits it from BCPL, which inherits it from Algol. It inherits the idea of a virtual machine from BCPL's Cintcode interpreter. It is, then, an imperative language, one step up from a macro-assembler. There's no mathematical formalism underlying its design.

There's a sort-of object-oriented layer bolted on top of the Algol-like language even in the earliest versions of Java, but it's done without conviction. Primitive data items are not objects; there's no orthogonality. And there's no multiple inheritance, either, so it has to be clumsily simulated with interfaces. On the very small, memory-limited devices for which Java was designed, these were reasonable choices. But we very rarely use Java in such constrained environments today.

The orthogonality problem was sort-of fixed in Java 5, with 'boxing' of primitives - if you referred to a primitive as if it were an object, the compiler would magically transform it into an object. But it's a hack.

And Java continues to be extended by hacks. Java 8, for example, hacked on functional programming, because functional programming is the fashion of the day. But once again it's 'sort-of' functional programming, just as Java 1 was 'sort-of' object oriented.

Functional programming is in fashion today because we've reached the physical limits of Moore's law: individual processor cores are no longer getting faster. To increase performance, we need to share processing over more and more processors: to share the load. That's simple if the data being acted on by the process on one processor cannot be changed by a process running on a different processor, and so functional languages with closures and immutable data make programming for parallel architectures trivial.
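
A quick illustrative sketch (the function and data here are invented, purely to make the point): because the data pmap consumes is immutable, no thread can change what another thread reads, so the parallel and sequential versions cannot disagree and no locking is needed.

    ;; A minimal sketch: parallelism over immutable data in Clojure.
    ;; `pmap` applies a function to a sequence in parallel; because the input
    ;; vector is immutable, the result is the same as the sequential map.
    (defn expensive-step            ; stand-in for some costly, side-effect-free work
      [n]
      (reduce + (map #(* % %) (range n))))

    (def inputs (vec (range 1000 2000)))    ; an immutable vector of inputs

    (= (map  expensive-step inputs)
       (pmap expensive-step inputs))        ; => true: only the scheduling differs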

But Java data is inherently mutable: that's the nature of object orientation. The state of an object is altered by setting its instance variables. Java also doesn't have first-class closures. So Java's 'functional programming' hack doesn't do what it implies on the tin: it doesn't make programming for parallel architectures trivial. On the contrary, it introduces more opportunities for subtle and intermittent bugs.

This is software design by marketing focus group: our customers want X, let's bolt something which looks superficially like X onto the side of our existing product.

C# is, in origin, an even worse story than Java. Java was designed to do something - to be used to create embedded programs for small appliances. C# is just a fit of pique, because a judge wouldn't let Microsoft release a non-standard Java and call it Java. So C# is a language like Java, inheriting all of Java's faults, but deliberately incompatible with it. And as each new feature has been bolted onto the side of either Java or C#, the other has slavishly (but subtly incompatibly) copied it.

Programming languages don't have to be like this. Structured Query Language is soundly based in relational algebra; Prolog is based on first order predicate calculus. But the only computer language anyone actually needs to know is a straightforward implementation of the mathematical basis of computation itself: the lambda calculus.
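
To make that concrete, here's a toy sketch (one of many possible encodings, and purely illustrative): Clojure's fn is the lambda of the lambda calculus, and even numbers and addition can be built from nothing but abstraction and application.

    ;; Church numerals: numbers represented purely as functions.
    ;; A numeral n is a function that applies f to x n times.
    (def zero (fn [f] (fn [x] x)))
    (def one  (fn [f] (fn [x] (f x))))
    (def two  (fn [f] (fn [x] (f (f x)))))

    ;; Addition: apply f n times, then m more times.
    (defn add [m n]
      (fn [f] (fn [x] ((m f) ((n f) x)))))

    ;; Convert back to an ordinary integer for inspection.
    (defn church->int [n] ((n inc) 0))

    (church->int (add one two))   ; => 3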

Yes, I know all computer systems of any significant size contain elements which seemed like a good idea at the time and which later turned out not to be. Interlisp, Portable Standard Lisp and Common Lisp are all LISP-2s - they have separate value and function pointers on every symbol. Interlisp and Portable Standard Lisp both also have subtly different symbol-binding mechanisms between interpreted and compiled code, which means that the scope of symbols differs between the two, leading to semantic differences: an uncommon but horrible pitfall, where code that worked when interpreted broke when compiled.

Interlisp had an awkward inorthogonality between older functions, which had abbreviated names in ALL UPPER CASE, and newer ones, which had LongerNamesInCamelCase. Oh, and there was the curious CLISP feature, which you could either see as magical or as a well of despair, depending on how you used it (I didn't).

Finally, in the Medley release, Interlisp had Common Lisp bodged onto the side. Common Lisp is a bit of a mess anyway, but the attempt to make functions written in two different syntaxes interoperate in one system must have made for interesting debugging. But I had moved on to Prolog before Medley was released, so I never really had to deal with that.

In most early Lisps, too, data wasn't really immutable. The functions rplaca and rplacd, which allowed the overwriting of pointers in data structures, were Lisp's dirty little secret. These 'destructive' functions made data mutable. But even back in the 1980s, when parallelism wasn't yet a significant issue, we knew that these functions were inelegant and shouldn't be used.

Modern Lisps, such as Clojure, don't have destructive functions. Data is immutable, and, consequently, parallelism really is trivial.
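
A trivial example (the map and its keys are invented): 'updating' a Clojure collection returns a new persistent structure and leaves the original untouched.

    ;; Clojure collections are persistent: assoc returns a new map,
    ;; and the original is never altered.
    (def settlement {:name "Dunadd" :population 40})

    (def grown (assoc settlement :population 55))

    settlement   ; => {:name "Dunadd", :population 40}   (unchanged)
    grown        ; => {:name "Dunadd", :population 55}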

Of course, Clojure too has its faults. It is partially crippled by the JVM's limited, fixed-size stack (back to that notional 32-bit, minimal-memory set-top box Java was designed for); but while I believe that for many functions the recursive expression is both the most expressive and the most natural, idiomatic use of Clojure's lazy sequences means that deeply recursive algorithms can usually be avoided.
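
A hedged sketch of what I mean (names invented, and the exact breaking point depends on your JVM's stack size): a naive recursion consumes a stack frame per element and will blow the stack on a large input, while the same computation over a lazy sequence, consumed iteratively, runs in constant stack space.

    ;; One stack frame per element: fine for small lists, StackOverflowError
    ;; on the JVM for large ones.
    (defn sum-recursive [coll]
      (if (empty? coll)
        0
        (+ (first coll) (sum-recursive (rest coll)))))

    ;; (sum-recursive (range 1000000))   ; likely StackOverflowError

    ;; The same sum, consuming a lazy sequence iteratively: constant stack space.
    (reduce + (range 1000000))           ; => 499999500000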

There have been some bizarre decisions in Clojure's language design:

  • the default arithmetic operators do not gracefully switch to bignums on arithmetic overflow (although there are alternative operators which sensibly do, if you know about them; see the sketch after this list);
  • nil is not the empty list (both (nil? '()) and (= nil '()) return false) and is also not false ((false? nil) returns false); also, bizarrely, the car of the empty list is nil ((first '()) returns nil) but the cdr is the empty list ((rest '()) returns ());
  • the decision to remove a layer of parentheses from cond and case means that pretty-printing does not give a rational code layout (and also makes translation of code from other Lisps that bit harder);
  • the decision to notate arg-lists and let binding lists as vectors is also bizarre and again introduces needless inorthogonality (yes, I know that as compiled for the JVM they're implemented as vectors, but that's an implementation detail)...
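
On the first of those points, a small sketch (behaviour as I understand it in recent Clojure versions; check it against the one you run):

    ;; The default operators throw on long overflow rather than promoting:
    ;; (* Long/MAX_VALUE 2)   ; => ArithmeticException: integer overflow

    ;; The primed operators promote to bignums, as older Lisps did by default:
    (*' Long/MAX_VALUE 2)     ; => 18446744073709551614N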

And so on. There is no perfect computer language.

But anyone who has looked at a fern-moss, or a frond of bracken, or a waterfall, or a birch tree, or a cloud, or a prawn, or the development of a human embryo, has to acknowledge that if God does not write Lisp, God writes some language so similar to Lisp as to make no difference. The physical world is built, in all its aspects and at all scales, out of simple functions recursively applied. It physically hurts me to work on systems whose underlying architecture is a bag of hacks kluged together.

A good craftsman chooses his tools.
