Saturday, 31 December 2016

How not to build your own Lisp

Occasionally one buys a book which is a disappointment. Usually, when I buy a book which is a disappointment, I don't review it, because it isn't nice trashing other people's hard work; and that's especially true when the writer has written as engagingly and sincerely as Daniel Holden has. He's written a book I'd like to like.

But sometimes it's important to explain why a book is a disappointment, what is wrong with it, and what residual merit it still has.

Lisp is the List Processing Language. The clue is in the name. A linked list is a very simple and primitive data structure - essentially a chain of cells, each a node with just two pointers in a directed graph - from which other data structures (including executable programs, in the form of recursive functions) can be built recursively. And it is this inherent recursive nature which enables the other critically interesting point about Lisp: it implements the lambda calculus, Alonzo Church's groundbreaking mathematical formalism which made it possible to reason about the nature and limits of computation.

The problem is that the language Holden is showing you how to write, while it has some of the surface level syntactic structure of Lisp, isn't a list processing language at all. There are no lists. So there are no list cells. So there can be no primitive to construct a cons cell, nor one to take the value of its first pointer, nor one to take the value of its second pointer.
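
To be concrete about what's missing, here is a minimal sketch, in C, of the primitives I mean. It's my own illustration, nothing to do with Holden's code, and it is simplified - in a real Lisp the first slot of a cell holds a tagged value of any type, not merely a pointer to another cell - but it shows how little machinery a genuine list cell needs:

    #include <stdlib.h>

    /* A cons cell is just a pair of pointers. Simplified: a real Lisp cell
       would hold tagged values, not only pointers to other cells. */
    typedef struct cell {
        struct cell *first;   /* traditionally the 'car' */
        struct cell *rest;    /* traditionally the 'cdr' */
    } cell;

    /* cons: allocate one fixed-size node referring to two existing values. */
    cell *cons(cell *first, cell *rest) {
        cell *c = malloc(sizeof(cell));
        if (c != NULL) {
            c->first = first;
            c->rest  = rest;
        }
        return c;
    }

    /* car and cdr simply follow a pointer; nothing is copied. */
    cell *car(cell *c) { return c == NULL ? NULL : c->first; }
    cell *cdr(cell *c) { return c == NULL ? NULL : c->rest; }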

Holden does mention his lack of list cells, in a boxout on page 88; he says

'This naturally leads to an implementation using linked lists, a different data structure to the one we are using. I choose to represent S-Expressions as a variable sized array in this book for the purposes of simplicity, but it is important to be aware that the official definition, and typical implementation are both subtly different'

It's not subtly different. It's crucially different. You can indeed make something that looks like a duck out of papier mache, but it won't walk like a duck and it won't quack like a duck. It's not a duck, and this is not a Lisp.

For example, take a list '(a b c). Let's call that list p: (let ((p '(a b c))) ...). Now take the tail of that list twice: (let ((q (cdr p)) (r (cdr p))) ...). Now suppose we test whether q and r (both, remember, being the tail of p) are the same thing:

* (let ((p '(a b c))) (let ((q (cdr p))(r (cdr p)))(eq q r)))

T

Yes, they are.

What about in Holden's language?

No, they're not.

Because his 'S-Expressions' are flat arrays rather than chains of cells, taking the 'tail' of a list means building a fresh copy; so q and r end up as identical copies of the same thing, but they are not the same thing.

We've lost the distinction between what is the same and what looks the same. We can no longer tell the difference between a duck and a papier mache copy of a duck.
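
To make the duck test concrete: where lists are built of shared cells, taking the tail twice hands back the very same object, and an identity test can see it; where every operation builds a fresh array, it cannot. A rough C illustration - again mine, not Holden's, with NULL standing in for the atoms:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct cell { struct cell *first; struct cell *rest; } cell;

    static cell *cons(cell *first, cell *rest) {
        cell *c = malloc(sizeof(cell));
        c->first = first;
        c->rest = rest;
        return c;
    }

    int main(void) {
        /* p = (a b c), with NULL standing in for the atoms and for nil. */
        cell *p = cons(NULL, cons(NULL, cons(NULL, NULL)));

        /* Taking the cdr twice yields the very same node in memory... */
        cell *q = p->rest;
        cell *r = p->rest;
        printf("%s\n", q == r ? "T" : "NIL");      /* prints T */

        /* ...whereas copying the tail, as an array-backed representation
           must, yields something which merely looks the same. */
        cell *copy = cons(q->first, q->rest);
        printf("%s\n", copy == q ? "T" : "NIL");   /* prints NIL */
        return 0;
    }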

Another crucial issue in the design of a Lisp is memory management, something I'm intensely interested in. Holden ignores this, simply delegating it to the C heap; but because he constructs variable-sized vectors rather than homogeneous list cells which all have the same size, he will fragment the heap, and will ultimately cease to be able to allocate more memory even when there is memory available. Of course, in modern machines with very large amounts of memory it's unlikely that anything written in a toy programming language is going to get into this situation, but it's still disappointing.
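
One traditional way of avoiding this, used by many Lisp implementations, is to allocate cons cells from a dedicated pool: because every cell has the same size, any freed cell can be reused for the next cons, and the pool cannot fragment. A very rough free-list sketch, purely illustrative and with no garbage collector attached:

    #include <stdlib.h>

    typedef struct cell { struct cell *first; struct cell *rest; } cell;

    #define POOL_SIZE 4096

    static cell pool[POOL_SIZE];    /* one contiguous block of identical cells */
    static cell *free_list = NULL;  /* chain of cells available for reuse */
    static int initialised = 0;

    static void init_pool(void) {
        /* Thread every cell onto the free list via its 'rest' pointer. */
        for (int i = 0; i < POOL_SIZE - 1; i++)
            pool[i].rest = &pool[i + 1];
        pool[POOL_SIZE - 1].rest = NULL;
        free_list = &pool[0];
        initialised = 1;
    }

    /* Allocate: pop a cell off the free list. Because every cell is the
       same size, any free cell will do, so the pool cannot fragment. */
    cell *alloc_cell(void) {
        if (!initialised) init_pool();
        if (free_list == NULL) return NULL;   /* pool exhausted: time to collect garbage */
        cell *c = free_list;
        free_list = c->rest;
        return c;
    }

    /* Free: push the cell back; it is immediately reusable by a later cons. */
    void free_cell(cell *c) {
        c->rest = free_list;
        free_list = c;
    }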

All of this raises the question of who the book is for. Lisp is not a popular language. It's a relatively obscure language, of interest to computer science geeks because of its simplicity and power. A book called 'Build Your Own Lisp' is likely to appeal to computer science geeks, and especially to computer science geeks who want to build their own Lisp. It's not likely to be of interest to beginner programmers, because beginner programmers won't know what Lisp is or why they should be interested in it.

So the people who will buy the book won't benefit from it, and are liable to be annoyed by it; while the people who might benefit from it are unlikely to buy it.

But they'll be missing a trick, because what this book is, is a very good, clear, engaging introduction to writing a non-trivial program in C.

Saturday, 24 December 2016

Those tyres: 650b x 42

A couple of weeks ago I posted my review of my Cannondale Slate. And, like more or less everyone who's reviewed the Slate, I said nasty things about the slick tyres. What's wrong with the tyres?

Well, on tarmac, they're excellent. They're fast rolling and supple, which, on tarmac, adds up to excellent grip. The problem is that the Slate isn't designed just for tarmac, and I don't use mine just for tarmac. It's a bike for roads, paths, tracks and trails. On damp grass, damp rocks, mud, or more or less any other off road surface in the wet west of Scotland, the degree of grip available is sketchy at best. To be fair, dropping the pressure does help a little bit.

Stopping at each junction between tarmac and non-tarmac to deflate or inflate the tyres would be stupid and impractical, and I've no plans to do it; but they'll run at 3.1bar/45psi on tarmac without any significant deterioration of performance, and this is a compromise which more or less works. For now.

But I've been looking for a replacement 650b x 42 tyre with a bit of tread, particularly on the shoulders, because sooner or later those tyres are going to let go unexpectedly and I'm going to go down hard. I'm a little old to enjoy that, now. And the truth is there really isn't much available.

Schwalbe have two options, the Marathon Supreme which is semi-slick and, as Schwalbe themselves say, has poor offroad grip; and the G1 (actually in 40mm width, but that will fit), which has a fine overall tread which would probably roll well on road but I imagine would clog rapidly in mud. Their Hurricane, which might be the sort of profile I'm looking for, isn't available in 650b at less than 50mm wide.

Panaracer offer their Gravelking tyre in 650b x 1.5", which is more or less the same fit. It has a very slight tread - very slight indeed. They also do a Col de la Vie touring tyre, which has a diamond tread pattern which might work.

There are some minor American brands offering 650b x 42. There's a thing called a Grand Bois Hetre, which has barely perceptible tread; I don't think it would work in Scottish conditions. There's a thing called a Babyshoe Pass, which has just a tiny bit more. Mind you, it claims to be designed for mountain conditions in Washington State, which I don't imagine are very dry. And finally in this group there's the Soma Cazadero, which has decidedly more bite and would definitely be better off road - but probably less good on road.

All these boutique American tyres are very expensive - the Cazadero comes in at US$73.99 per tyre, before you've paid shipping.

The thing which most closely resembles what I think I want is the Continental Tour Ride, which Continental describe as an 'urban tyre'; it has a smooth central tread and chunky shoulder blocks. However, it's discontinued and I cannot find that its replacement, the Ride Tour, is available in 650b. Also, although Continental are a German company, I cannot find any European retailer offering the 650b version of this tyre.

And that's all I can come up with. It's not a big range.

There are good technical reasons for choosing the 650b x 42 tyre size: it produces a higher volume tyre, able to cope with rougher surfaces, with the same outside diameter as a 700c x 23, and consequently with race bike geometry. But if companies like Cannondale want to sell bikes with 650b wheels, they're going to have to persuade more tyre manufacturers to support them.

Monday, 5 December 2016

Search, and you shall find

I tend towards the view that Google sets out to be, and believes itself to be, on the whole a force for good. Sergey Brin's original motto for the company was 'don't be evil'; Google now says that its mission is "to organise the world’s information and make it universally accessible and useful"; that its core aim is "to make it as easy as possible for you to find the information that you need and get the things that you need to do done."

I'm going to take that at face value; in this essay I shall write as though I believe these claims to be true (and, in fact, that is true: on the whole I do).

So when Carole Cadwalladr, working from original research by Jonathan Albright, forensically demonstrates that Google is acting as a potent amplifier for neo-fascist propaganda, we need to ask what is happening.

There are, essentially, three possibilities:

  1. That Google is aware of what it is doing and has tuned its algorithms to promote neo-fascist views (for reasons given above I do not believe this is currently the case);
  2. That the neo-fascist right, by superior intellect and cunning, have been able to game the Google ranking algorithms (for reasons I hope to give below I don't wholly believe this);
  3. That a combination of naivety of Google's algorithms and the structure of far-right websites has accidentally created the current mess. This is what I (mostly) believe, and I shall explain why.

(Note that today Google tweaked the search suggestions system so that it no longer offers the 'Are Jews evil' suggestion that Cadwalladr highlighted, but this is a solution only to the problem of that specific query: it isn't a general solution.)

How search works

People who are not computer scientists believe algorithms are inherently complex and magical. They're not. Most are very simple. Google's page-rank algorithm is now proprietary, and thus secret; it has presumably been tuned somewhat over the years since it was last open and public. But the core of it is extremely simple.

Previous search engines, like Alta Vista, had scored web pages based on the content of the page itself. Thus if a web page contained the word 'Jew' many times, these search engines would rank the page highly as a source of information about Jews (in this essay I am using 'Jew' and 'Jews' as an example of a word that has become gamed; this essay itself has nothing to do with Jews and says nothing either positive or negative about them). The more times the page repeated the word, the more highly it would be ranked for that word. This was based on the naive assumption that people writing web pages were honest, well intentioned, non-delusional and well informed. And as most people are honest, well intentioned, non-delusional, and don't write about subjects on which they're not well informed, for a while this algorithm worked well enough.
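
The whole of that scoring idea fits in a few lines. This is not Alta Vista's actual code, obviously - just my toy sketch of the 'count the words on the page, weight the headings' approach described above:

    #include <stdio.h>
    #include <string.h>

    /* Naive content-based scoring of the kind described above: count how
       often the query term appears on the page itself. Illustrative only. */
    static int count_term(const char *text, const char *term) {
        int count = 0;
        size_t len = strlen(term);
        for (const char *p = text; (p = strstr(p, term)) != NULL; p += len)
            count++;
        return count;
    }

    /* Headings are weighted more heavily than body text. */
    int score_page(const char *headings, const char *body, const char *term) {
        return 5 * count_term(headings, term) + count_term(body, term);
    }

    int main(void) {
        const char *headings = "widgets, more widgets, cheap widgets";
        const char *body = "a page that mentions widgets a lot: widgets widgets widgets";
        printf("score for 'widget': %d\n", score_page(headings, body, "widget"));
        /* prints 19: three heading hits at weight 5, plus four body hits */
        return 0;
    }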

But it was extraordinarily easy to game. The search engine believed what a web page said about itself. The search engine reads the text of the page, not the formatted image (that's still true of Google today). So Alta Vista, although it would give higher weight to words that were in headings than to words that were in body text, gave the same weight to words which were, for example, in white on a white background (and which therefore a normally sighted human reader using a normal browser wouldn't see) as to words which were black on a white background. 'Invisible' text could be inserted into pages - including as headings - which search engines would see but readers would not. Very often this invisible text would be a single repeated word: 'Jew Jew Jew Jew', or variations 'Jew Jewish Jews Judaism'.

Google's insight was that what a page says about itself is not trustworthy; Google's insight was to treat what other, unrelated sites said about a page as more trustworthy. The Web is a collection of linked pages; rather than counting the words on a page, Google counted the words in links to the page. So if a page contained the word 'Jew' a hundred times, Google (unlike Alta Vista) would not be any more likely to treat that page as an authoritative source of information on Jewishness than if it did not contain the word 'Jew' at all. But if pages on a hundred other sites - that is, sites with a different domain name - all have links to the page, and all those links contain the word 'Jew', then Google would rank the page highly as a source of information on Jews. The greater the number of such links, the higher Google would rate it.
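
A toy model of that link-counting idea - my own sketch, following the simplified description above rather than anything Google actually runs - scores a page by the number of distinct sites which link to it with the query term in the link text:

    #include <stdio.h>
    #include <string.h>

    /* Toy link-based scoring: a page is scored by how many *different*
       sites link to it with the query term in the link text. Purely
       illustrative; real ranking is vastly more elaborate. */

    struct link {
        const char *source_domain;   /* the site the link appears on */
        const char *anchor_text;     /* the clickable text of the link */
        const char *target;          /* the page the link points to */
    };

    static const struct link links[] = {
        { "blog-one.example",   "land reform in Scotland", "target.example/reform" },
        { "blog-two.example",   "why land reform matters", "target.example/reform" },
        { "blog-one.example",   "more on land reform",     "target.example/reform" },
        { "blog-three.example", "holiday photos",          "target.example/reform" },
    };

    enum { NLINKS = sizeof links / sizeof links[0] };

    /* Count distinct source domains linking to 'target' with 'term' in the
       link text; several links from the same domain count only once. */
    int link_score(const char *target, const char *term) {
        const char *seen[NLINKS];
        int seen_count = 0;

        for (int i = 0; i < NLINKS; i++) {
            if (strcmp(links[i].target, target) != 0) continue;
            if (strstr(links[i].anchor_text, term) == NULL) continue;

            int duplicate = 0;
            for (int j = 0; j < seen_count; j++)
                if (strcmp(seen[j], links[i].source_domain) == 0)
                    duplicate = 1;
            if (!duplicate)
                seen[seen_count++] = links[i].source_domain;
        }
        return seen_count;
    }

    int main(void) {
        printf("score: %d\n", link_score("target.example/reform", "land reform"));
        /* prints 2: blog-one and blog-two count once each; blog-one's second
           link is a duplicate, and blog-three's link text lacks the term. */
        return 0;
    }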

People, on the whole, are more likely to link to sites they agree with than to sites they disagree with. So for example, I create a lot of links to stuff by Andy Wightman, Lesley Riddoch, Cat Boyd, Vonny Moyes. Different communities of interest also use different vocabularies. So for example if you type 'land reform' into Google you'll get a set of results broadly favourable to land reform; if you type 'land grab' you'll get a set of results broadly unfavourable towards land reform. The reason is simple: those who oppose reform are much more likely to frame it as 'grabbing'.

So we have a situation in which a page which is linked to by a very large number of other pages with the word 'Jew' in the link text is rated highly as a source of information about Jews, and it happens that the majority of pages which use the word 'Jew' in link text use those links to point towards anti-semitic pages; and thus Google, using its very simple algorithm of counting the links which contain the word, treats those anti-semitic pages as authoritative about Jews. Google isn't being evil; it's simply being naive.

The question is why it happens that the majority of pages which use both the words 'Jew' and 'Evil' in links point to anti-semitic sites. Originally, I'm pretty sure it was happenstance. Thousands of rabid mouth-frothers created thousands of links on thousands of blogs, all using the word 'Jew'. Ordinary serious Jewish people, writing about Judaism, probably don't use the word 'Jew' very often, because in their discourse Jewishness is assumed; and in particular they're pretty unlikely to link it with the word 'evil', because people tend not to think of people within their own community as evil.

The Google game

But once this pattern emerges and is recognised, what happens? I can go out this morning and buy a hundred internet domains all with apparently unrelated names, all with a hundred apparently distinct registered owners. I can point those domains at servers I can hire cheaply in the cloud, and I can host a hundred different websites. On each of those websites I can host a page with a link with the text 'Jew', which points to a single, common page saying something negative about Jewishness. If I choose a page which is already fairly highly ranked on the word 'Jew', I can push it even further up the rankings.

This is a scheme which has already been used for years by spammers and scammers; it would be a miracle if conspiracists had not noticed and begun to exploit it. So, as I wrote above, I believe that the current situation where innocent searches can lead to extreme or malicious material has arisen by accident as a result of naivety on the part of an essentially-reasonably-honest Google; but I also believe that it has now begun to be gamed.

But beyond that, the search suggestion system can be gamed. The search suggestion system is just an ordered list of the most common search queries. It has some slight tweaks, but that's essentially it. So if a million monkeys sit at a million keyboards and type 'are Jews evil' into a Google search all day, then 'are Jews evil' quickly rises up the suggestion list and starts to be the first thing offered by the suggestion system when someone innocently types 'are Jews'. Of course, those monkeys don't need to be real monkeys - a bot-net of hacked computers could easily be programmed to repeatedly ask Google particular questions, forcing other phrases up the suggestion list.
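
On that simplified model - an ordered list of the most common queries, filtered by what has been typed so far - a toy suggestion engine looks like this (the query strings are deliberately silly placeholders), and it is obvious why a flood of automated queries pushes a phrase to the top:

    #include <stdio.h>
    #include <string.h>

    /* A toy suggestion engine on the simplified model above: keep a count
       for every query ever seen, and for a given prefix offer the matching
       query with the highest count. Illustrative only. */

    #define MAX_QUERIES 100
    #define MAX_LENGTH  64

    static char queries[MAX_QUERIES][MAX_LENGTH];
    static long counts[MAX_QUERIES];
    static int n_queries = 0;

    /* Record one search; a bot-net simply calls this very many times. */
    void record_query(const char *q) {
        for (int i = 0; i < n_queries; i++)
            if (strcmp(queries[i], q) == 0) { counts[i]++; return; }
        if (n_queries < MAX_QUERIES) {
            strncpy(queries[n_queries], q, MAX_LENGTH - 1);
            queries[n_queries][MAX_LENGTH - 1] = '\0';
            counts[n_queries] = 1;
            n_queries++;
        }
    }

    /* Suggest the most frequently recorded query starting with 'prefix'. */
    const char *suggest(const char *prefix) {
        const char *best = NULL;
        long best_count = 0;
        for (int i = 0; i < n_queries; i++) {
            if (strncmp(queries[i], prefix, strlen(prefix)) == 0 &&
                counts[i] > best_count) {
                best = queries[i];
                best_count = counts[i];
            }
        }
        return best;
    }

    int main(void) {
        record_query("are sheep dangerous");
        record_query("are sheep dangerous");
        record_query("are sheep intelligent");
        /* a million monkeys - or one bot-net - later... */
        for (long i = 0; i < 1000000; i++)
            record_query("are sheep evil");

        printf("suggestion for 'are sheep': %s\n", suggest("are sheep"));
        /* prints: are sheep evil */
        return 0;
    }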

Search in a capitalist society

I sat down yesterday evening to think about how international civic society might work with Google to limit this problem - how to algorithmically build a better notion of trustworthiness into the evaluation of links - when I stumbled on an even more potent problem: obvious once you have thought of it, but very disturbing when you first stumble upon it.

We live in a capitalist society. Capitalism is disentropic of wealth; people who have wealth have opportunities to accumulate more wealth which are not available to people who don't have wealth. This is true at all scales; a home owner has more economic opportunities than a tenant, a millionaire than a home owner, a billionaire than a millionaire. In normal functioning, in a capitalist society, wealth is concentrated more and more into fewer and fewer hands, and the rate at which this concentration happens accelerates over time. There is a stark tension between this fact and the idea of fairness which appears to be innate in human beings, which even very small children can clearly articulate. Historically, there have been events when capitalism has reached crisis, when wealth has been radically redistributed from the very rich to the rest of society; the most recent of these was during and immediately following the Second World War.

But since then, the ratchet has been working quietly away again, as simple mechanisms will.

One of the things which happens when capitalism reaches crisis is the rise of the right. This isn't in the least bit accidental. People who are very wealthy wish, almost by definition, to remain very wealthy: giving wealth away is easy, so those who keep it evidently intend to keep it. People with wealth can fund political campaigns, and political persuasion. It's no accident that, throughout the Western world, the bulk of mass media is owned, not by readers' or workers' co-operatives nor by civil society, but by individual plutocrats. It's no accident that very wealthy people stand for high office - and win.

'The immigrants are taking our jobs' is one explanation of why employment is getting harder to find. But in an age of globalisation and automation, it's hardly a very persuasive one. There are alternative, more persuasive, explanations: the investor class has offshored our jobs; the technologists have automated them out of existence. Yet in the narrative surrounding both the Brexit vote in the United Kingdom and the Trump victory in the United States, it is accepted that a significant proportion of the vote was driven by xenophobia against immigrants.

Why?

Well, certainly one explanation is that the right amplified that message at the expense of the alternatives. And the reason the right should choose to do that is that the right represents the interests of capitalism's winners - those who have, by luck, chance, dishonesty, inheritance, or by any other means accumulated more than their equal share of the world's wealth, and who want to hold onto it. The 'offshore' and 'automation' narratives both place responsibility for the loss of jobs in western economies on the heads of the investor class, which chooses where to place investments and which technologies to deploy. The right seeks to shift responsibility for the loss of jobs from the few powerful plutocrats to the many powerless migrants.

And the evidence is that they're succeeding, which is, tangentially, where we came in.

But the fact that the right is succeeding is not the horrible thought. On a level playing field we could counter the right's success in exploiting Google, either (which I would prefer) by working with Google to develop algorithms and architectures which would make it easier to assign a trustworthiness score to a link, or by creating a new left-oriented search engine, or by 'reverse gaming' the page rank algorithm, architecting a 'web of the left' to balance the existing web (whether accidental or designed) of the right.

But this isn't a level playing field.

There ain't no such thing as a free search

We don't pay for Internet search. We accept that search is, like so much else on the Internet, free to use. Of course, it isn't free to provide. To handle the billions of search requests Google receives each day, to run the spiders which continually explore the Web to keep search results up to date, to run the indexers which convert the pages collected by the spiders into ranked data that search responses can be collated from, takes a mass of hardware and an enormous quantity of bandwidth. But Google doesn't provide us with this free, rich search experience out of charity. It doesn't even provide it as a loss leader. On the contrary, it is the enormous profitability of search which cross subsidises Google's many more experimental ventures.

So how does free search convert into enormous profits? By building up a detailed picture of your interests in order to sell highly targeted advertising. To see what a search engine looks like without that revenue, look at Duck Duck Go. Duck Duck Go doesn't identify you, doesn't collect information on you, and doesn't sell the information that it doesn't collect to advertisers. It is also a commercial company, seeking to make a profit from search. Instead of collecting data about you to sell on, it sometimes (but not often) shows adverts at the top of the search results.

Duck Duck Go is there, it works, it's relatively unobtrusive. You could use it, but you don't. You don't use it partly because you know Google will find you what you want, and partly because you intuit (and, as it happens, correctly) that the results will not be so good.

What you don't see is how up to date the results are. In a typical week, Googlebot - Google's spider - reads more than 500 pages from my personal website. In the same period, DuckDuckBot reads one. And that differential represents the difference in resources the two companies have. Google crawls websites based on their own metric of how often a site changes, but nevertheless they check most pages on my site most days; my site changes rarely. Sites which change more frequently are crawled more intensively. Google clearly has the resource to scan the whole web very frequently: search results from Google will always be very up to date. DuckDuckGo don't say how their spider schedules its crawling, but it's clearly much less frequent.

But there's more that DuckDuckGo can't do that we've come to expect Google to do for us. Because Google collects and stores a lot of information about us, it can tailor its search results to be relevant to us. It knows what I've searched for in the past, where I live, what car I drive, which websites I visit, what items I've shopped for but not (yet) bought. It can show me things it thinks will interest me, and a lot of the time it's right. DuckDuckGo cannot do this, because of a choice - arguably an ethical choice - its designers have made: not to collect the data which would make that personalisation possible.

Who owns our searches?

Google is a commercial company which makes enormous profits by collecting a great deal of information about its users so it can target advertising at them. I continue to believe that Google is on the whole a relatively ethical company. At least one of its founders thinks seriously about the ethics of what Google does, and while his ethical judgements are not always the same as mine (and, it seems to me, do not always win out these days), I don't see the company as ethically vacuous in the way many now are, still less actually evil. I believe that if we could show Google how to develop referrer quality metrics and integrate them into search, they would do this. I believe that we could work with Google to make it harder for political interests (including ourselves) to manipulate search results.

As long as their mission is "to organise the world’s information and make it universally accessible and useful", as long as that is a sincere statement, we can work with Google, because improving the perceived political neutrality of their search (to the extent that there is such a thing as political neutrality) improves the quality of the product.

But Google is a publicly listed company. It can be bought. And Google is not necessarily the world's most popular librarian for ever; Facebook is coming up fast behind, and there's no pretence that Facebook is an ethical company. We cannot trust that the places people go to find information on the Web will be benevolent. On the contrary, like big media, they are likely to become targets for people - very wealthy people - who wish to influence public opinion, just as the major newspapers and television channels have been.

Google has restructured itself to be part of a new group, called Alphabet (although Google is by far the largest and most profitable company in that group). Alphabet's market valuation is something more than half a trillion US dollars. That's about equal to one third of the combined total wealth of the poorer half of the world's population. The poor cannot buy Google, or anything like Google. The left cannot buy Google. But as few as ten of the world's richest people could club together and buy Alphabet. It would be a good investment. It's still very profitable.

And, of course, many of the world's richest people are (very) right wing.

The library of lies

Control information - control the information it is possible to search for, possible to discover - and you control thought. Heterodox ideas - heresies - can be made unfindable. Books need not even be burned; they can simply be hidden, bowdlerised, altered; false, perverted copies can be passed off as the real thing. False 'news' can be mixed with true until the two become indistinguishable, as has already begun to happen to readers of some newspapers and viewers of some television channels.

People discover the Web largely through search. It does not matter how much true information, how many clear and logical expositions of interesting heterodox opinion there are out there on the Web, if search - the search we choose to use - does not find it for us. Network effects mean that at any one time one search engine will dominate search - the biggest search engine has the most resources, so can be most up to date and responsive, so everyone uses it - why would you use anything else? Thus Alta Vista supplanted Lycos and Google supplanted Alta Vista. Possibly someone will come up with an algorithm so much better than Google's that they will sweep Google from the Web; more likely, companies like Facebook and Apple will fragment the Web into separate walled gardens in which they control search, and into which they don't allow third party spiders.

But whether Google remains king of the hill, or whether it is supplanted, the politically ambitious rich must now be eyeing search in the same way that fifty years ago they viewed broadcasting and a hundred years ago they viewed newspapers. Control information, and you control thought. And the means by which people access information, in a capitalist economy, can be bought.

Yes, I believe that the left - and civil society generally - could work with Google to create 'politically neutral' search, for some value of politically neutral. We could because, I believe, Google is at its core still a well-intentioned company. But in a future - a future I think under capitalism more or less inevitable - in which search is owned by people like the owners of the Daily Mail, the owners of Fox News, could we then work towards 'politically neutral' search?

Well, only to the extent that Fox News is now politically neutral television.

Look to windward.

Creative Commons Licence
The fool on the hill by Simon Brooke is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License