Tuesday, 12 December 2017

On Catalunya, and Freedom

I don't know much about Catalunya. I've never been there; I have no friends from there. I've read, obviously, books coming out of the experience of the Spanish Civil War - George Orwell, Laurie Lee - and anarcho-syndicalist texts: Murray Bookchin, Robert Alexander and others. But that's all a long time ago; and my interest was in political theory and in how you build a good society, not particularly in the place itself. I don't know much about Catalunya.

But that is beside the point. The point, in Catalunya now, is the right of people in a place to define themselves as a community and a polis, and to achieve self-determination for that polis. Chapter One, Article One of the Charter of the United Nations - which all members have signed up to - asserts the right to self-determination. So does Article One of the International Covenant on Civil and Political Rights, to which Spain is also a signatory. It is a basic, fundamental right.

And it is a right we in Scotland claim. It is core to our Claim of Right. It is core to our right to choose whether we wish Scotland to become a nation again.

In Catalunya, at this moment, four people are in prison for peacefully asserting this right. They are:

Jordi Cuixart i Navarro is a civil society activist, not a politician. He leads a cultural organisation, Omnium Cultural, which promotes the Catalan language. He is, effectively, equivalent to the chair of An Comunn Gàidhealach. He's accused of organising passive resistance to the attempts by the Guardia Civil - the Spanish national police - to destroy the infrastructure for the first of October referendum on Catalan independence.

His address in prison is:
Jordi Cuixart i Navarro
Centro Penitenciario
Madrid V
Ctra. M-609, km 3,5
28791 Soto del Real
Madrid, Spain
Jordi Sànchez Picanyol is President of the Assemblea Nacional Catalana, which might be seen as roughly equivalent to the Scottish Independence Convention; so again, although he is a political activist, he's not a party politician. His closest Scottish equivalent might be Lesley Riddoch. Like Jordi Cuixart, he's accused of organising passive resistance to the attempts by the Guardia Civil to destroy the infrastructure for the referendum.

His address in prison is:
Jordi Sànchez Picanyol
Centro Penitenciario
Madrid V
Ctra. M-609, km 3,5
28791 Soto del Real
Madrid, Spain
Oriol Junqueras i Vies is a historian by profession, but he's an elected politician and serves as Vice-President of the Government of Catalonia; his nearest Scottish equivalent might be John Swinney. He leads the Republican Left party in the Catalan Parliament, and led the 'Yes Coalition' campaign in the last parliamentary elections - so he's also a bit like Blair Jenkins. He's accused of rebellion and sedition, for his responsibility for the unilateral declaration of Catalan independence.

His address in prison is:
Oriol Junqueras i Vies
Centro Penitenciario
Madrid VII
Ctra. M-241, km 5.750
28595 Estremera
Madrid, Spain
Joaquim Forn Chiariello is a lawyer turned liberal politician. He leads the Convergència Democràtica de Catalunya party, and is Minister of the Interior in the Catalan Government. In Scottish terms he's somewhere between Willie Rennie and Michael Matheson. He's in prison largely because his ministry was responsible for the Mossos d'Esquadra, the Catalan regional police (equivalent to Police Scotland) and for the Catalan firefighters, who together defended voters from the violence of the Guardia Civil on the first of October.

His address in prison is:
Joaquim Forn Chiariello
Centro Penitenciario
Madrid VII
Ctra. M-241, km 5.750
28595 Estremera
Madrid, Spain
So: four men. None of them revolutionaries; none of them has espoused violence. Each serving in a role which is quite familiar in Scotland; and each imprisoned for doing so.

If the Westminster Government sent the Metropolitan Police to smash up the baby boxes that the Scottish Government are distributing, wouldn't you hope that someone in Scottish civil society - Jonathon Shafi, perhaps, or Lesley Riddoch - would organise a peaceful demonstration to prevent them?

If the Holyrood parliament votes to organise a second independence referendum, and, if that referendum returns a strong 'Yes' vote, votes again to declare Scotland independent, wouldn't you hope that the Scottish Government would act on that mandate?

Of course you would.

And let's put to bed the claim that the referendum was 'illegal'. The right to self-determination is guaranteed by binding international agreements to which Spain is a voluntary signatory. Therefore it is legal in Spain to vote for self-determination. Self-determination cannot be legally protected if all the means to express it are forbidden. The referendum cannot, then, have been illegal. As Spain claims that its national law trumps Catalunya's regional law, so international law trumps Spain's national law.

I don't know much about Catalunya. I don't know whether Catalunya should be independent. That's none of my business; it's for the people of Catalunya to decide. But I do know this: if Scotland will not stand in solidarity with the Catalans, if we will not stand up to assert Catalunya's right to self-determination, why should anyone stand up for ours?

Llibertat Presos Polítics - freedom for the political prisoners. They aren't just Catalunya's political prisoners; they're also ours.

Saturday, 2 December 2017

Wildwood: Development

[What follows is the text of the 'Development' chapter - chapter five - of my thesis as it existed in June 1988, when I lost access to the machine on which the work was done. This is an unfinished chapter of an unfinished thesis; I am posting it because people have expressed interest in the explanation game. This is the last form of this chapter, but it isn't the end of my thinking on the matter, and if I were rewriting it now I would add further moves to the set, to allow the agents to argue not just about assumptions but also about the merit of the authorities from whom rules of inference are drawn.

A modern implementation would also, of course, have access to the semantic web in order to draw in additional sources of knowledge beyond that stored in a local knowledge base.]

Development

Explanation is a social behaviour

Games are social behaviours which can be modelled

Explanation can be viewed as a game playing activity

One of the models we frequently use in real-life situations where a decision has to be made on the basis of uncertain or conflicting evidence is the dialectic model. In this, one authority proposes a decision, giving an argument which supports it; another authority brings forward objections to the argument, and may propose an alternative decision. This method may continue recursively until there is no further argument which can be advanced.

This is the model which underlies both academic and parliamentary debate, and the proceedings of the law courts. The model supports a ‘decision support’ conception of the Expert System’s role, in addition to a ‘decision making’ one; in the decision support conception, the user is placed in a position analogous to that of a jury, considering the arguments presented by the advocates and making a decision on the basis of them.

We can view this dialectic approach to decision making as a game, with a fixed set of legitimate moves.

Hintikka’s game theoretic logic

This approach seems very promising. It may tie in with game-theoretic logics such as those of Ehrenfeucht [Ehrenfeucht 61] and Hintikka [Hintikka & Kulas, 83 and passim]. Of course, these logics were strictly monotonic. Problem areas where monotonic logics are adequate are of limited interest, and unlikely to give rise to interesting problems of explanation; but it should be possible to develop a provable semantics for a non-monotonic extension. The decision procedure for a game-theoretic logic should not raise any particular difficulties; the ‘games’ are zero-sum, under complete information.

However, the adoption of a game theoretic explanation mechanism does not commit us to adopting a game theoretic logic; so (although interesting) this line will be pursued only if it proves easy.

The object of an explanation game

If explanation is to be seen as a game, what is to be its objective? If we were to take a positivist approach, we might say that the purpose of explanation was to convey understanding; then the game would be a co-operative one, with the objective of both players being to achieve this end.
But since we have rejected this position, the objective of an explanation must be to persuade.

Furthermore, if we accept the argument advanced in Chapter 3 above that explanation is hegemonistic, the explainee must be seen to have an objective in resisting explanation. So now we have a clear idea of what the objective of an explanation game will be: the explainer will win if the explainee is brought to a point of no longer being able to produce a rational[1] reason for resisting the explanation. Otherwise, if, when all the supporting arguments the explainer can bring to bear have been advanced, the explainee is still unconvinced, the explainee has won.

Legitimate moves in an explanation game

A possible set of legitimate moves in an explanation game might be:

PRESENT

Present a case, supported by an argument, which may include one or more assumptions.
Generally, in a non-monotonic system, we are dealing with incomplete information (and, indeed, this is generally the case in ‘real life’). Consequently, any conclusion is likely to be based upon one or more unsupported assumptions.
[starting move]

DOUBT

Challenge one or more of the assumptions.
[basic response to PRESENT, REBUT or COUNTER-EXAMPLE]

DOUBT-IRRELEVANT

Claim that, even if the assumption were false, it would make no difference to the decision.
[blocking response to DOUBT]

ASSUMPTION-PROBABLE

Claim that the assumption is probable based on prior experience, or, for example, statistical data.
[weaker response to DOUBT]

REBUT

This is essentially the same as PRESENT, but of an argument based on the opposite assumption to that challenged in the preceding DOUBT.
[response to a PASS following a DOUBT]

COUNTER-EXAMPLE

Present a known case similar to the current one, where the predicate of the assumption which has been challenged holds the opposite value.
[response to ASSUMPTION-PROBABLE]

PASS

A null move in response to any other move. Forces the other player to move. In some sense this might be seen as equivalent to saying ‘so what?’: the opponent has made a move, but there is nothing further which the player wants to advance at this stage. This move is inherently dangerous, since the game ends when the two players pass in successive moves.
[weakest response to anything]

Implementing a game theoretic approach to explanation

Playing the explanation game

The game playing algorithm required to implement an explanation system need not be sophisticated; indeed, it must not be too sophisticated, at least as a game player. Consider: a good game playing algorithm looks ahead to calculate what the probable responses to the moves it makes will be. If it sees that one possible move leads to a dead end, it will not make that move.
But in generating an explanation, we need to explore any path whose conclusion is not immediately obvious. So either the game-playing algorithm should not look ahead more than, perhaps, one ply; or else, if it does look further ahead, then when it sees a closed path it should print out a message of the form:
Critic: Oh yes, I can see that even if the assumption that Carol hasTwoLegs is false it does not affect the assumption that Carol canSwim.
Initially, however, I propose to implement a game-playing algorithm which simply makes the strongest move available to the player, based on a scoring scheme which does not look ahead. The game will proceed something like tennis; the player who last made a PRESENT or REBUT move will have the initiative, the ‘serve’, as it were, and need only defend to win. The other player must try to break this serve, by forcing a DOUBT… PASS sequence, which allows a REBUT play.
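To make that concrete, here is a minimal sketch, in C, of the legality rules and the 'strongest available move' selection just described. It is purely illustrative - the actual prototypes were written in Lisp, and every name in it is invented:

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        PRESENT, REBUT, DOUBT, DOUBT_IRRELEVANT,
        COUNTER_EXAMPLE, ASSUMPTION_PROBABLE, PASS, N_MOVES
    } Move;

    static const char *move_name[N_MOVES] = {
        "PRESENT", "REBUT", "DOUBT", "DOUBT-IRRELEVANT",
        "COUNTER-EXAMPLE", "ASSUMPTION-PROBABLE", "PASS"
    };

    /* May 'reply' legally follow 'last'?  REBUT must also know that the
       move before the preceding PASS was a DOUBT. */
    static bool is_legal(Move reply, Move last, Move before_last) {
        switch (reply) {
        case DOUBT:               return last == PRESENT || last == REBUT ||
                                         last == COUNTER_EXAMPLE;
        case DOUBT_IRRELEVANT:
        case ASSUMPTION_PROBABLE: return last == DOUBT;
        case COUNTER_EXAMPLE:     return last == ASSUMPTION_PROBABLE;
        case REBUT:               return last == PASS && before_last == DOUBT;
        case PASS:                return true;  /* weakest response to anything */
        default:                  return false; /* PRESENT only opens the game */
        }
    }

    /* A player is modelled as a predicate saying whether the knowledge
       base gives it material to support a given move. */
    typedef bool (*CanSupport)(Move m);

    /* Strongest move first; no lookahead. */
    static const Move preference[] = {
        REBUT, DOUBT_IRRELEVANT, COUNTER_EXAMPLE, DOUBT, ASSUMPTION_PROBABLE
    };

    static Move choose(CanSupport can, Move last, Move before_last) {
        for (int i = 0; i < 5; i++)
            if (is_legal(preference[i], last, before_last) && can(preference[i]))
                return preference[i];
        return PASS; /* always legal, but inherently dangerous */
    }

    /* The explainer opens with PRESENT; two successive PASSes end the game. */
    void play(CanSupport explainer, CanSupport explainee) {
        Move before_last = PASS, last = PRESENT;
        printf("Explainer: PRESENT\n");
        for (int turn = 1; ; turn++) {
            Move m = choose((turn % 2) ? explainee : explainer,
                            last, before_last);
            printf("%s: %s\n", (turn % 2) ? "Explainee" : "Explainer",
                   move_name[m]);
            if (m == PASS && last == PASS) break;
            before_last = last;
            last = m;
        }
    }

PASS is always legal, which is what guarantees termination: sooner or later neither player has anything stronger left to offer, two PASSes follow in succession, and the game ends.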

Dialectic explanation: the two-player game

With this schema, we can mechanise a dialectic about a case. The game is played by the machine against itself; ‘agents’ in the machine play moves in turn until both sequentially pass, or until the user interrupts. Each move produces a natural language statement on the display. Ideally, the user could interrupt at any time, either to request further justification for a move or to halt the process.

As a (simple) example of what might be produced, here is some sample output from a very crude prototype system[2]:

User: (← Carol Explain CanSwim)
Consultant: What is the value of CanSwim for Carol?
User: Don’t know
Consultant: I found that CanSwim was true of Carol by Default using the following assumptions:
I assumed that CanSwim was true of Carol
My reasons are
CanSwim is true of most Mammals;
Carol is a Mammal;
Therefore it is probable that CanSwim is true of Carol.
Critic: The assumption that CanSwim is true of Carol is unsafe:
Henry is also of type(s) (Person) but CanSwim is false of Henry.

This example, which is from a system considerably less developed than the ideas given above, shows ‘Consultant’ PRESENTing a case based on a single assumption, and ‘Critic’ responding with a COUNTER-EXAMPLE – a move which would not be legal under the scheme given. However, this should give some indication of how the idea might be implemented.

The simple model described above has not achieved all we want, however. The machine, in the form of the agent ‘Consultant’ has attempted to explain to itself, wearing its ‘Critic’ hat. The user’s role has simply been that of passive observer. The first enhancement would be to give the user control of the ‘Critic’; but because the user cannot have the same view of the knowledge base as the machine, the ‘Critic’ will have to guide the user, firstly by showing the user the legal moves available, and secondly by finding supplementary information to support these moves.

Explanation as conversation – a three player game

A better design still might be to allow the machine to carry on its dialogue as before, but to allow the user to interrupt and challenge either of the agents. It seems probable that the most interesting moves the user could make would be DOUBT, and also COUNTER-EXAMPLE, where the user could present an example case not previously known to the machine. Now the user can sit back and watch, if the conversation brings out points which seem relevant; but equally, the user can intervene to steer the conversation.

These ideas have, needless to say, yet to be implemented.

[1] ‘Rational’, here, is of course a fix. In an explanation situation involving two human beings, the explainee might continue to resist even if there is no rational reason for doing so. This situation would be non-deterministic, though, and too difficult for me to deal with.

[2] This is output from Wildwood version Arden, my experimental prototype object-based inference mechanism. The game-playing model is a predecessor of, and differs from, the one given above.

Friday, 1 December 2017

Before Wildwood: the Arboretum engine

(This piece was written as an email in response to a question by Chas Emerick about whether it is possible to explain inference to ordinary people. It describes the Arboretum engine, which started me thinking about how you make a computable logic which people can understand.)

Right, let's try to set this out in some sensible detail. I'll start by giving you the bones of the story, and then move on to talk about the things you're actually interested in. I'm copying Peter Mott in so he can correct any misrememberings of mine (I have long term severe depression, and it really messes with your memory).

I did a minor course in logic and metaphysics when I was an undergrad. When I graduated in 1986, my logic tutor, Peter Mott, who was entirely deaf, was invited to join an AI project - the "Alvey DHSS Large Demonstrator" - essentially as the formal logic input to the process. He invited me to join him as his research associate, mainly I think because, having been in his seminars for two years, I was pretty fluent at discussing technicalities of logic in sign language, and could interpret for him. I was never the world's most brilliant logician, but I could more or less do it.

Peter was certainly responsible for the logic of the "D-Engine", and I think wrote the first prototype in Acornsoft Lisp himself. The D-Engine logic is a Popperian propositional logic, and the goal of the engine at any stage is to seek to falsify what it currently believes. Again I think it was Peter's insight that this would map naturally onto a systematic practice for interviewing domain experts, but, being deaf, he couldn't develop this interviewing practice himself.

A rule in D-Engine logic is naturally a tree, which we called a "D-Tree". Its nodes are propositions "coloured" with truth values, and edges essentially represent "unless" relationships. Because D-Engine logic naturally argues from the general to the particular, it's possible to recover extremely good natural language explanations of decisions from simple boilerplate fragments on the nodes.
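The shape of this is easy to sketch in code. The following fragment of C is illustrative only - the real engines were written in Lisp, and all the names are invented - but it shows the idea: each node is a proposition coloured with a truth value, each edge reads as 'unless', and deciding a case is a walk down the tree, with the boilerplate fragments along the path assembling into the explanation:

    #include <stdbool.h>
    #include <stdio.h>

    /* A D-Tree node: a proposition "coloured" with a truth value; each
       edge to a child reads as "unless". */
    typedef struct dnode {
        const char   *proposition;  /* e.g. "is a widow" */
        bool          colour;       /* the conclusion, if we stop here */
        const char   *boilerplate;  /* fragment used to build explanations */
        struct dnode *unless[4];    /* "unless" children; NULL-terminated */
    } DNode;

    /* The case data is consulted through a predicate. */
    typedef bool (*Holds)(const char *proposition);

    /* Descend while some "unless" proposition holds of the case; the
       conclusion is the colour of the deepest node reached, and the
       boilerplate printed on the way down is the explanation. */
    bool decide(const DNode *node, Holds holds) {
        for (int i = 0; node->unless[i] != NULL; i++)
            if (holds(node->unless[i]->proposition)) {
                printf("%s; ", node->unless[i]->boilerplate);
                return decide(node->unless[i], holds);
            }
        return node->colour;
    }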

There's a paper explaining all this: Mott, P. & Brooke, S. A Graphical Inference Mechanism. Expert Systems, May 1987. vol. 4. No. 2 pp. 106-117

(Golly. I've just - for the first time ever - done a Google search for that paper, to see if it was online anywhere. It isn't, but it's cited in lots of places - including by Marc Eisenstadt! I'm pleasantly surprised.)

I built the second prototype, Arboretum, in Interlisp on Xerox 1108 Dandelion and 1186 Daybreak machines, and that was the one that was used in the project; we demonstrated that it was possible to automate decision making on a piece of legislation (eligibility for widows' allowance), and, critically, to automatically generate letters to claimants which claimants could understand.

At the end of the project, Peter and I and a number of others of Peter's ex-students spun out a company to try to commercialise D-Engine logic. As (again) Peter was deaf, and as he intended to continue as a lecturer, he chose not to head up the company, and I was elected Chief Exec; I was also the technical lead. We knew we couldn't commercialise our work on the Xerox workstations, because they were way too specialist and expensive to sell into industry. I had a look at Windows, which was at version 2 at that stage, and it was just a horrible mess. We couldn't afford Macintosh (and, in any case, the M68000 Macs of those days were really not very powerful); so we decided to write the new version on Acorn Archimedes ARM machines, on the basis that I already had one. The plan was always to port it to UN*X later.

There was a good Lisp compiler for the Archimedes machines, but it was Portable Standard Lisp, which was going out of favour (Common Lisp had just been standardised). It suffered from another problem which was for us more serious: the garbage collector moved things in memory, and the Archimedes window manager held fixed pointers to things; so, while I did a lot of work on this and got somewhere, the solution wasn't going to be portable.

So we decided to rewrite the system in C (which at that stage none of us knew). We also decided to do something else ambitious: we decided that because our new version - called KnacqTools for "Knowledge Acquisition Tools" - would require a graphics workstation, and such machines were then uncommon in industry and expensive, we should compile the knowledge once elicited down to run on one of a number of rule engines that were then available for PCs; and we decided that these rule language compilers should be pluggable.

We got the Archimedes version of KnacqTools feature complete and working well within a year. We demonstrated it to Acorn, who gave us two of the UN*X (BSD 4.3) version of their ARM workstations, called R260. On these we ported KnacqTools to OSF-Motif, which was the then-dominant window manager for X11. We got this feature complete in not very long, and it looked extremely impressive - so impressive that NCR gave us UN*X workstations with the intention that they would adopt KnacqTools as a product. However, the OSF-Motif version suffered from persistent memory leaks which we never fully resolved.

But in the meantime the company was in financial trouble. We'd started without capital and had funded the company organically by doing consulting. Members of the team wanted me to concentrate on product development, but I was also the person who could do the sort of consulting work which brought in money. And in the winter of 1991-2, recession hit the UK economy and our customers stopped spending on experimental AI projects.

If I knew then what I know now, the company could have been saved. What we needed was an angel, and we actually had a very good story to tell to angels. Our customers included Courtaulds - for whom one of our systems ran a critical chemical plant - Ford Motor Company, Bull, De La Rue, and a number of other industrial big players. We had a very impressive product, which was very close to finished. And we were teaching successful courses on the adoption of artificial intelligence into industry. But I didn't know about angels and I didn't have the contacts, so the company went down.

OK, that's the background: onwards!

Over a period of about three years I taught three-day courses on knowledge acquisition to around a hundred people mainly drawn from industry and the civil service - I can't remember why we didn't do more with financial services, which would have been an obvious market, but it's a fact we didn't. In teaching knowledge acquisition I obviously also taught an understanding of representing and structuring knowledge. The people I taught it to were mostly degree educated, and many were extremely senior people - ranging from senior engineers in charge of industrial plant, to executives evaluating future technologies for their companies. But they weren't by any means universally people with advanced maths skills or any previous experience of formal logic. These courses were generally very successful, and at the end of them participants typically could build working systems to make decisions in a domain of knowledge.

What made this possible was the very simple logical schema: ask the domain expert to choose the question to be answered, the proposition whose truth value was to be determined:

This person satisfies the conditions for widows' benefit

Ask them what the default value of the proposition is - in this case, 'false': most people aren't eligible for widows' benefit.

We would notate this as follows:

        Satisfies conditions for Widows' Benefit -

So OK, you ask the expert, what's the first thing you can think of which would make you change your mind, make you think a given person was eligible?

If they were a widow.

And if they were a widow, would they then be eligible?

No, their husband's national insurance contributions would have to have been up to date

And if their husband's national insurance contributions were up to date, would they then be eligible?

No.

We can now extend the graph:

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -

So, given they are a widow, and their husband's contributions were up to date, what's the first thing you can think of which would make you change your mind?

If they were under pension age when bereaved.

And if they were under pension age when bereaved, would they then be eligible?

Yes.

We now have a condition which overturns our original assumption that the person isn't eligible, and we notate it thus:

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -
                            |
                            |
                   Under pension age +

Is there any further condition that would change your mind and lead you to conclude that they were not eligible?

No.

So that closes one path and we recurse back up:

OK, so we've got someone who is a widow and whose husband's contributions were OK, but was not under pension age when bereaved. Is there anything else that would make you think they satisfied the conditions for widows benefit?

Yes, if their husband wasn't entitled to a retirement pension they would be eligible.

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -
                         /     \
                        /       \
       Under pension age +     Husband not entitled to retirement pension +

Again, you probe for further conditions; if there are none, you close the path, and recurse back.

OK, so we've got someone who is a widow and whose husband's contributions were OK, but was not under pension age when bereaved, and their husband was entitled to a retirement pension. Is there anything else that would make you think they satisfied the conditions for widows benefit?

No.

Again, we've nowhere further to go from this node, so we recurse back up:

OK, so we've got someone who is a widow but whose husband's contributions were not OK. Is there anything else at all which would make you think they satisfied the conditions for widows benefit?

No.

Again recurse back up:

OK, so we've got someone who is not a widow. Is there anything else at all which would make you think they satisfied the conditions for widows benefit?

No.

And so we can close this tree: it's complete. As you can see, this is a very simple procedure: the interviewer always knows what the next question to ask is, and the answers can be systematically recorded. So it's very easy to teach, and people easily understand how this process maps on to real world decision making. There may still be some things which aren't completely clear - for example, what is entailed by 'Husband's contributions OK'? But we can use that as the start of a new tree and repeat the process. We had special notepads printed with a triangular grid to make drawing trees easier, but that was mainly sales collateral; essentially anyone who can ask questions and scribble on paper can conduct one of these interviews.

So, again, I didn't have the original idea, that was down to Peter. But I did develop the interviewing practice, and I was the person who mainly delivered the courses on it.

The nature of logic is that you can operate on it with logic. A logical formalism can be translated into another logical formalism with similar (or lesser) expressivity, so it was trivial to translate D-Trees into production rules which would run on a conventional backward chaining production rule engine. I'd also argue, from experience, that D-Trees are easier for non-logicians to understand than production rules, since a non-specialist can assess whether a D-Tree has complete coverage of an area of domain knowledge, whereas it's much harder to assess whether a corpus of production rules does.
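As an illustration of how mechanical that translation is, here is a naive sketch in C (again purely illustrative - the names are invented): every node of a D-Tree yields one production rule whose conditions are the propositions on the path from the root, and whose conclusion is the node's truth value; a rule engine which tries the most specific rule first then preserves the 'unless' semantics. Run on the widows' benefit tree developed above, it emits five rules:

    #include <stdio.h>

    typedef struct dnode {              /* the same shape as sketched before */
        const char   *proposition;
        char          colour;           /* '+' or '-' */
        struct dnode *unless[4];        /* "unless" children; NULL-terminated */
    } DNode;

    /* One rule per node: IF <propositions on the path> THEN <colour>. */
    static void emit(const DNode *node, const char *path[], int depth) {
        printf("IF ");
        if (depth == 0) printf("true");
        for (int i = 0; i < depth; i++)
            printf("%s%s", i ? " AND " : "", path[i]);
        printf(" THEN %c\n", node->colour);
        for (int i = 0; node->unless[i] != NULL; i++) {
            path[depth] = node->unless[i]->proposition;
            emit(node->unless[i], path, depth + 1);
        }
    }

    int main(void) {
        DNode under    = {"under pension age when bereaved", '+', {0}};
        DNode pension  = {"husband not entitled to retirement pension", '+', {0}};
        DNode contribs = {"husband's contributions OK", '-', {&under, &pension, 0}};
        DNode widow    = {"widow", '-', {&contribs, 0}};
        DNode root     = {"satisfies conditions for widows' benefit", '-', {&widow, 0}};
        const char *path[8];
        emit(&root, path, 0);
        return 0;
    }

The last rule out, read back, is exactly the sort of thing the domain expert said in the interview: IF widow AND husband's contributions OK AND husband not entitled to retirement pension THEN +.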

So to come back to your original question, yes, I know from experience that you can enable 'ordinary people' to understand inference, provided you can find a simple enough expression of it. Of course the D-Engine logic wasn't as sophisticated as a first-order predicate logic, and certainly wasn't as sophisticated as a constraint propagation logic, but I don't believe those are insuperable issues. You can't expect 'ordinary people' to understand dense code or complex formalisms; upside down 'A's and back turned 'E's, and all the various hooks of set notation, quickly alienate those who are not mathematically inclined. But all inference is the systematic performance of a limited number of individually simple legal moves, like chess; and 'ordinary people' can understand chess, even if many are (like me) not very good at it.

Thursday, 23 November 2017

Coffeeneuring

Myself in Ae forest on our second ride.
I went out for a ride with my old friend Andrew Crooks back in September, when I was - and partly because I was - still deep in depression. I rode into Castle Douglas to Gareth Montgomerie's shop, where I met up with Andrew, and we rode down to Tongland, out to Twynholm, over to Kirkcudbright, and back by Loch Fergus. We stopped for cake and coffee in the very excellent Earth's Crust bakery, and that's when Andrew broached the idea of Coffeeneuring.

There's this challenge, he said, on an American blog: ride to seven different coffee shops (and drink coffee) in six weeks, starting in October and ending in November.

So we did.

Myself and Andrew near New Abbey
The first day of the challenge was Friday the 13th of October, an auspicious date - so of course we had to start then. It was a wild, blustering day with sharp showers blowing down a stiff westerly (that will become a theme). We met up at Beeswing, and rode out by the beautiful road down to New Abbey; from there we took wee backroads up to Cargenbridge, and joined the Old Military Road back to Beeswing. We had coffee and cake at Loch Arthur farm shop, which was as always excellent.

We started our second ride, on the 22nd, from Andrew's house in Dumfries, riding through the town mainly on the old railway cycle path, and thence up to Ae Forest - a 10Km climb which could have been a fair old grind, but for yet another sturdy south-westerly which fairly lifted us up the hill. We did a wee bit of zooming around the forest on green-graded single track, had an excellent cake and coffee in the Ae Forest Cafe, and then... zoomed back down the long hill into Dumfries, our speed limited only by the westering sun glaring in our eyes.

For our third - Hallowe'en - ride we again met at Gareth's shop (and again I rode in), with the intention of riding out to the Cream of Galloway ice cream factory near Gatehouse of Fleet. Again the wind was westerly, and again it was driving rain before it. We rode down to Kirkcudbright by the Loch Fergus road, and thence out via Nunmill towards Borgue. But the wind was near gale, and the rain became more intense; after a brief discussion we decided to turn back to Mulberries in Kirkcudbright, and there had soup and coffee. We rode back via Tongland. It was an enjoyable but quite tough day - what I particularly remember about it was that Andrew (who is always faster than me uphill) was also faster down. That felt an injustice!

Pete White, in impossibly bling
retro-reflective jacket, buys coffee
in the unknown coffee shop. 
I was in Edinburgh on the weekend of the 5th November, staying at my sister's; and, on the Sunday, went for a ride with my friend Pete White. We had planned a tour of the reservoirs up in the Pentlands, but it was a cracking day, and so instead we rode out of Edinburgh by the Innocent Railway path, and took the coast road fast out to North Berwick. In North Berwick we got a cairry-oot coffee and cake from a wee cafe I've totally failed to identify from Google maps; and then rode up to the train station where we got a train back into Waverley. Riding back to my sister's from the station I made a sentimental detour to the Edinburgh Bicycle Co-op, one of my favourite shops.

That counted as my fourth Coffeeneuring trip; by agreement Andrew was also doing one that weekend. However, later that week I rode into Castle Douglas for Gareth to fit new non-slick tubeless tyres to the Slate, and I snuck in a coffee and cake at Streetlights Cafe, putting me one ride ahead. I rode home by the old iron-age road over the ridge by Dungairy, a long climb on mud and gravel. The new tyres made the bike feel quite different - assured and surefooted in conditions in which it had previously been decidedly sketchy.

The three mile dirt road descent from Dungairy to Culnaightrie was a complete blast.

From the start of the Coffeeneuring challenge Andrew and I had talked about doing a Roof of the World ride at some point if the weather allowed, but the weather didn't really allow and on the 13th of November, riding from Andrew's house, we decided to do the Birthplace of the Bicycle instead; Andrew promised me an interesting climb on the way home - one we've often talked about in the past, but I hadn't yet ridden.

Where the first ever pedal powered bicycle was built
It was actually a beautiful day, but cold; on the north slope of one of the first hills we encountered a patch of ice which completely covered the road for about twenty metres, and from there on our descending was a little more tentative than it would otherwise have been. But we had a good ride up Nithsdale and before too long came to the birthplace itself, where we stopped to make the customary obeisances.

Then on into Penpont for coffee in a surprisingly busy wee cafe, and back. Andrew led us across to the east side of the dale, crossing the A76 at Closeburn, onto wee back roads that were new to me.

Approaching the top of the Loch Ettrick climb - yes, it is
uphill, dammit!
Andrew is lots faster than me anyway, but on hills he's notoriously quite a lot faster. So when we came to the junction at the bottom of the "interesting" climb, he stopped for a comfort break and I went ahead. I was very pleased with myself with how far I'd got before he caught me, but the photograph he took of me near the top doesn't look at all dramatic.

However, we topped out at Loch Ettrick, high above Ae Forest, and Andrew promised me ten miles of continuous descent. It wasn't quite continuous, but pretty close to it, and apart from one huge tipper truck which wasn't taking any prisoners, it was quite a blast.

That made my sixth qualifying ride, but only Andrew's fifth.

For the next, which I number as 7.1, we met up at Mossdale on the 20th and rode up by the Raider's Road to the Otter Pool, where we enjoyed an al fresco coffee with some buns I'd bought in Castle Douglas on the way up. It was a soft, damp day with continuous fairly light rain, but not dreadfully cold; still, it wasn't weather we wanted to sit around in for too long. We'd had a plan to stop at the cafe at Clatteringshaws, but it was closed for the winter. We discussed the route back, and Andrew liked the idea of the loch shore to get us out of the wind (although it wasn't that severe).

Al fresco at the Otter Pool
So we made the glorious descent into New Galloway, where we actually could have bought a coffee at the CatStrand; but we agreed that if we stopped in our wet clothes we'd quickly get cold, so we rode on at good speed down the west shore of Loch Ken. The autumn colours were glorious, and in the shelter of the hill and the forest the loch was almost still. Burns tumbled down the hillside in wild spate. We got back to the cars at Mossdale with - for me, anyway - mixed feelings. We were droukit. It was cold. Dry clothes were delightful (although I'd carelessly forgotten dry socks). But it had been a magical ride, on largely empty roads (as, to be fair, all these rides have been), amid all the spectacular scenery of Galloway's high country.

Today's ride - 7.2 - completed the set. We met in Kirkcudbright on a day of westerly gales, although by 11:00, when we started, the rain was almost through. Andrew was concerned about his back tyre, which had lost a lot of air with a spectacular hiss in the car on the way over, and which neither of our mini-pumps could get hard; but, after checking he had a spare inner tube, he decided to set off anyway. Our aim, again, was the Ice Cream Factory, which as well as ice cream makes good coffee and spectacular cakes.

It's a mistake to start a ride with someone who's a much better grimpeur by going uphill, but given the wind I wanted to stay in shelter as much as possible on the way out, and get down onto the coast to take advantage of the wind on the way back. I controlled my competitiveness and didn't burn my legs on the first long climb, and, despite the weather, we had a remarkably swift and enjoyable ride out to Rainton.

Is there anybody there? said the traveller,
knocking at the moonlit door...
Sadly, the cafe was shut. We discussed alternatives, and decided to return to Kirkcudbright. I asked Andrew whether he thought his tyre would cope with an unmetalled section, intending to take the shore path from Sandgreen round to Carrick. He thought it would, but when we got down to Sandgreen felt it was just too risky. So we doubled back round to Knockbrex on the metalled road, and took a wee sentimental detour down to Carrick just because.

Then, with the sun beginning to shine and the wind behind us, we were whisked up the coast, past Knockbrex, past the Isles of Fleet, past the Coo Palace, past Kirkanders Borgue, and up the long hill to Borgue village, at good speed. We did stop a couple of times because the views out to the Isle of Whithorn and the Isle of Man were just so good!

In Borgue I had to stop to take off my jacket and change to fingerless gloves; despite a benselling wind the sun had become warm. And then swiftly on, past the Brighouse turn, down through Senwick to the Doon and Nunmill, and up the river to Kirkcudbright. We obviously couldn't go to Mulberries again, because we'd already bagged it; and Harbour Lights and Solway Tide were closed for the winter. But the Belfry Cafe was open, and we had soup, scones and coffee to finish up.

Eight rides - actually slightly more than eight over the period, because I've done another couple which didn't involve coffee and thus don't count - averaging around thirty miles/fifty Km, longest not more than fifty miles/eighty Km. None of them epic. All but one in company and in good company. All but the one I rode alone probably faster than I ride alone.

They haven't, by themselves, cured depression - it isn't yet completely cured, although it's greatly better. But they've certainly contributed. And one thing I realise this simple, light-hearted challenge has done for me - aided by my friends - is that it's given me an excuse to take time off to ride just for the pleasure of riding, of enjoying the weather and the views and the roads and most of all the company.

Myself and Andrew at Clatteringshaws - with coffee!
And the coffee, of course. Let's not forget the coffee.

More pictures here.


Monday, 6 November 2017

Catalunya, Rule, and Law

The Spanish courts seek to suppress the Republic of Catalunya, in the name of the rule of law. The EU refuses to intervene, because it claims it's an internal Spanish matter. 

The EU makes high claims to support fine-sounding principles, including human rights, democracy, subsidiarity, and the rule of law; and in claiming that, it finds itself - like a bullfighter in a fight which the Spanish courts denied Catalunya the right to ban - on the horns of a dilemma of its own breeding.

The Rule of Law is not synonymous with democracy; in fact, it is more often antagonistic to it. This is shown by the Catalan crisis.

Law is, at best, a lagging indicator of a social consensus - but only when passed by delegates voting in their constituents' interests. Law is more often passed by elites (the House of Lords, a Tory cabinet of millionaires, etc.) in their own interests, or by elected representatives excessively or corruptly influenced by powerful interests through 'think tanks' and 'lobbyists'.

This is particularly so in the Catalan case: the Spanish constitution was negotiated with fascists in Franco's dying days, and is fenced round with conditions which make it unalterable in practice. Even if there were a practical course to amend the constitution, the Catalans are a systematic minority in Spain: they do not constitute a majority, and they can never muster a supermajority. They cannot change it.

So where does that leave Carles Puigdemont and the Catalan Government? The courts said they could not hold a referendum. The electors, who elected them to office, said they must do so.  The Rule of Law did not support democracy. Rather, democracy and the rule of law are in direct conflict. No man can serve two masters; the government of Catalunya chose to obey their electors.

Here endeth the first part; the lemma, if you will. Now, let's move on to the thesis.

Lawyers will argue that the law solves this problem: that the UN Charter and the ECHR are incorporated into Spanish law, and somehow trump the constitution, making the judgement of the Constitutional Court wrong. I say that argument does not hold. 

It may be that in this particular case there are ambiguities and paradoxes in the corpus of law by which one can contort the law into appearing to agree with the democratic decision of the people. But what if there weren't? Should the rule of law trump democracy?

The principle of subsidiarity dictates that the people who should decide the governance of Catalunya are the people of Catalunya. The principle of democracy dictates that they must have a mechanism available to them to decide this. And in Catalunya especially, with its ancient tradition of civil society and its proud tradition of anarcho-syndicalism, the views of the people must surely trump the views of any governing elite.

So where does that leave Donald Tusk, Guy Verhofstadt and the rest of the sclerotic cabal in Brussels? They can support the Rule of Law. Or they can support Democracy. They can't do both. It's time to choose.

Monday, 9 October 2017

The place you call home

The place I call home
It's no secret that I live in an illegal house (you might call it a hut, a bothy, a shack, whatever). Most of my neighbours also live in informal dwellings - old vans and buses, old caravans, yurts. It's pretty common in this parish, because legal housing is unaffordable for many people. How common it is across Scotland I don't know, and I don't think anyone knows. I think it would be worth trying to find out.

So, if you live in an informal dwelling - that is, anything that isn't a legal house or flat that you legally own or rent - anywhere in Scotland, I'd like to talk to you. I've got a set of questions I'd like to ask you. Ideally I'd like to come and see you, but I can't come everywhere and see everyone, so some at least of this needs to be done by email or telephone. Ideally, if you'll permit it, I'd like to have a picture of where you live (although I understand that many people will feel anxious about this, so if you don't allow it I perfectly understand). But first, I've a web form you can fill in - and if you live in an informal dwelling I'd be really grateful if you would. It's completely anonymous.

Obviously, informal chats with self-selected folk don't produce hard, numeric data. That needs a census, and at the time of the last census, when I was sleeping rough, I couldn't even get the census people to send me a form, although I asked for one. But if we can put together at least some impressionistic data on what sorts of people are living in informal dwellings, and what (if anything) their needs are, I think that would be useful.

Anything I am told, I will anonymise. I won't publish who you are, or exactly where you live (although I'd like to be able to say, for example, "I spoke to twenty people living in Galloway").

If you live in an informal dwelling (including squatting, couch-surfing or sleeping rough) and are prepared to talk to me, please let me know - either by commenting on this blog, or by emailing me.

Tuesday, 19 September 2017

Implementing post-scarcity hardware

The address space hinted at by using 64 bit cons-space and a 64 bit vector space containing objects each of whose length may be up to 1.4e20 bytes (2^64 64-bit words) is so large that a completely populated post-scarcity hardware machine can probably never be built. But that doesn't mean I'm wrong to specify such an address space: if we can make this architecture work for machines that can't (yet, anyway) be built, it will work for machines that can; and changing the size of the pointers, which one might wish to do for storage economy, can be done with a few edits to consspaceobject.h.

But, for the moment, let's discuss a potential 32 bit PoSH machine, and how it might be built.

Pass one: a literal implementation

Let's say a processing node comprises a two core 32 bit processor, such as an ARM, 4GB of RAM, and a custom router chip. On each node, core zero is the actual processing node, and core one handles communications. We arrange these on a printed circuit board that is 4 nodes by 4 nodes. Each node is connected to the nodes in front, behind, left and right by tracks on the board, and by pins to the nodes on the boards above and below. On the edges of the board, the tracks which have no 'next neighbour' lead to some sort of reasonably high speed bidirectional serial connection - I'm imagining optical fibre (or possibly pairs of optical fibre, one for each direction). These boards are assembled in stacks of four, and the 'up' pins on the top board and the 'down' pins (or sockets) on the bottom board connect to similar high speed serial connectors.

This unit of 4 boards - 64 compute nodes - now forms both a logical and a physical cube. Let's call this cube module a crystal. Connect left to right, top to bottom and back to front, and you have a hypercube. But take another identical crystal, place it alongside, connect the right of crystal A to the left of crystal B and the right of B to the left of A, leaving the tops and bottoms and fronts and backs of those crystals still connected to themselves, and you have a larger cuboid with more compute power and address space but slightly lower path efficiency. Continue in this manner until you have four layers each of four by four crystals - 64 crystals in all - and you have a compute unit of 4096 nodes. So the basic 4x4x4 building block - the 'crystal' - is a good place to start, and it is in some measure affordable to build - low numbers of thousands of pounds, even for a prototype.

I imagine you could get away with a two layer board - you might need more, I'm no expert in these things, but the data tracks between nodes can all go on one layer, and then you can have a raster bus on the other layer which carries power, backup data, and common signals (if needed).

So, each node has 4GB of memory (or more, or less - 4GB here is just illustrative). How is that memory organised? It could be treated as a heap, or it could be treated as four separate pages, but it must store four logical blocks of data: its own curated conspage, from which other nodes can request copies of objects; its own private housekeeping data (which can also be a conspage, but from which other nodes can't request copies); its cache of copies of data copied from other nodes; and its heap.
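In outline - and this is only a sketch, with an arbitrary split into equal quarters and invented names - a node's memory map might look like this:

    #include <stdint.h>

    #define QUARTER (1ULL << 30)   /* 1GB; the equal split is arbitrary -
                                      treating the whole lot as one heap
                                      would also do */

    /* The four logical blocks of a node's memory: */
    typedef struct {
        uint8_t curated_conspage[QUARTER]; /* served to other nodes on request */
        uint8_t private_pages[QUARTER];    /* housekeeping; never served */
        uint8_t cache[QUARTER];            /* copies of other nodes' objects */
        uint8_t heap[QUARTER];             /* vector space objects */
    } NodeMemory;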

Note that a crystal of 64 nodes, each with 4GB of RAM, has a total memory of 256GB, which easily fits onto a single current generation hard disk or SSD module. So I'm envisaging that the nodes take turns to back up their memory to backing store continuously during normal operation. They (obviously) don't need to back up their cache, since they don't curate it.

What does this cost? About £15 per processor chip, plus £30 for memory, plus the router, which is custom but probably still in tens of pounds, plus a share of the cost of the board; probably under £100 per node, or £6500 for the 'crystal'.

Pass two: a virtual implementation

OK, OK, this crystal cube is a pretty concept, but let's get real. Using one core of each of 64 chips makes the architecture very concrete, but it's not necessarily efficient, either computationally or financially.

64 core ARM chips already exist:

1. Qualcomm Hydra - 64 64-bit cores;
2. Macom X-Gene - 64 64-bit cores;
3. Phytium Mars - 64 cores, but frustratingly the documentation does not say whether the cores are 32 or 64 bit.

There are other interesting chips which aren't strictly 64 core:

1. Cavium ThunderX - ARM; 96 cores, each 64 bit, in pairs, shipping now;
2. Sparc M8 - 32 64-bit cores, each capable of 8 concurrent threads; shipping now.

Implementing the virtual hypercube

Of course, these chips are not designed as hypercubes. We can't route our own network of physical connections into the chips, so our communications channels have to be virtual. But we can implement a communications channel as a pair of buffers: an 'upstream' buffer writable by the lower-numbered processor and readable by the higher, and a 'downstream' buffer writable by the higher-numbered processor and readable by the lower. Each buffer should be at least big enough to write a whole cons page object into, optionally including a cryptographic signature if that is implemented. Each pair of buffers also needs at least four bits of flags so that, for each direction, it can signal

0. Idle - the processor at the receiving end is idle and can accept work;
1. Busy writing - the processor at the sending end is writing data to the buffer, which is not yet complete;
2. Ready to read - the processor at the sending end has written data to the buffer, and it is complete;
3. Read - the processor at the receiving end has read the current contents of the buffer.

Thus I think it takes at least six clock ticks to write the buffer (set busy-writing, copy four 64 bit words into the buffer, set ready-to-read) and five to read it out - again, more if the messages are cryptographically signed - for an eleven clock tick transfer (the buffers may be allocated in main memory, but in practice they will always live in L2 cache). That's probably cheaper than making a stack frame. All communications channels within the 'crystal' cost exactly the same.
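One direction of such a channel might be sketched in C like this (illustrative only: the payload size is a guess, and a real implementation would need atomic flag updates and memory barriers):

    #include <stdbool.h>
    #include <stdint.h>

    /* One direction of a channel; a pair of these, with the writer and
       reader roles swapped, makes the 'upstream'/'downstream' pair.  The
       four states match the flags listed above, so two bits per direction
       suffice. */
    typedef enum { IDLE, BUSY_WRITING, READY_TO_READ, READ } Flag;

    typedef struct {
        volatile Flag     flag;
        volatile uint64_t words[4]; /* big enough for one cons page object */
    } HalfChannel;

    /* The writer's six-tick sequence: set busy-writing, copy four words,
       set ready-to-read. */
    bool channel_send(HalfChannel *c, const uint64_t payload[4]) {
        if (c->flag == BUSY_WRITING || c->flag == READY_TO_READ)
            return false;               /* previous message not yet read */
        c->flag = BUSY_WRITING;         /* tick 1 */
        for (int i = 0; i < 4; i++)
            c->words[i] = payload[i];   /* ticks 2..5 */
        c->flag = READY_TO_READ;        /* tick 6 */
        return true;
    }

    /* The reader's five-tick sequence: copy four words, set read. */
    bool channel_receive(HalfChannel *c, uint64_t out[4]) {
        if (c->flag != READY_TO_READ)
            return false;               /* nothing complete to read */
        for (int i = 0; i < 4; i++)
            out[i] = c->words[i];       /* ticks 1..4 */
        c->flag = READ;                 /* tick 5 */
        return true;
    }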

But note! As in the literal design, a single thread cannot at the same time execute user programs and listen to communications from neighbours. So a node has to be able to run two threads. Whether that's two threads on a single core, or two cores per node, is a detail. But it makes the ThunderX and Sparc M8 designs look particularly interesting.

But note that there's one huge advantage that this single-chip virtual crystal has over the literal design: all cores access the same memory pool. Consequently, vector space objects never have to be passed hop, hop, hop across the communications network, all can be accessed directly; and to pass a list, all you have to pass is its first cons cell. So any S-Expression can be passed from any node to any of its 6 proximal neighbours in one hop.

There are downsides to this, too. While communication inside the crystal is easier and quicker, communication between crystals becomes a lot more complex and I don't yet even have an idea how it might work. Also, contention on the main address bus, with 64 processors all trying to write to and read from the same memory at the same time, is likely to be horrendous, leading to much lower speed than the solution where each node has its own memory.

On the cost side, you could probably fit all of this onto one printed circuit board, as against the four of the 'literal' design; the single processor chip is likely to cost around £400; and the memory will probably be a little cheaper than on the literal design; and you don't need the custom routers, or the connection hardware, or the optical transceivers. So the cost probably looks more like £5,000. Note also that this virtual crystal has 64 bit processors (although address bus contention will almost certainly burn all that advantage and more).

Conclusion

An experimental post-scarcity machine can be built now - and I can almost afford to build it. I don't have the skills, of course; but I can learn.


Thursday, 14 September 2017

Hardware of the deep future

HAL9000, a visualisation of hardware of the deep future
In thinking about how to write a software architecture that won't quickly become obsolescent, I find that I'm thinking increasingly about the hardware on which it will run.

In Post Scarcity Hardware I envisaged a single privileged node which managed main memory. Since then I've come to think that this is a brittle design which will lead to bottlenecks, and that each cons page will be managed by a separate node. So there needs to be a hardware architecture which provides the shortest possible paths between nodes.

Well, actually... from a software point of view it doesn't matter. From a software point of view, provided it's possible for any node to request a memory item from any other node, that's enough, and, for the software to run (slowly), a linear serial bus would do. But part of the point of this thinking is to design hardware which is orders of magnitude faster than the von Neumann architecture allows. So for performance, cutting the number of hops to a minimum is important.

I've been reading Danny Hillis' thesis and his book The Connection Machine which, it transpires, is closely based on it. Danny Hillis was essentially trying to do what I am trying to do, but forty years ago, with the hardware limitations of forty years ago (but he was trying to do it in the right place, and with a useful amount of money that actually allowed him to build something physical, which I'm never likely to have).

Hillis' solution to the topology problem, as I understand it (and note - I may not understand it very well) is as follows:

If you take a square grid and place a processor at every intersection, it has at most four proximal neighbours, and, for a grid which is x cells in each direction, the longest path between two cells is 2x. If you join the nodes on the left hand edge of the grid to the corresponding nodes on the right hand edge, you have a cylinder, and the longest path between two nodes is 1.5x. If you then join the nodes on the top of the grid to the nodes on the bottom, you have a torus - a figure like a doughnut or a bagel. Every single node has four proximal neighbours, and the longest path between any two nodes is x.

So far so good. Now, let's take square grids and stack them. This gives each node at most six proximal neighbours. We form a cube, and the longest distance between two nodes is 3x. We can link the nodes on the left of the cube to the corresponding nodes on the right and form a (thick-walled) cylinder, and the longest distance between two nodes is 2.5x. Now join the nodes at the top of the cube to the corresponding nodes at the bottom, and we have a thick-walled torus. The maximum distance between two nodes is now 2x.

Let's stop for a moment and think about the difference between logical and physical topology. Suppose we have a printed circuit board with 100 processors on it in a regular grid. We probably could physically bend the circuit board to form a cylinder, but there's no need to do so. We achieve exactly the same connection architecture simply by using wires to connect the left side to the right. And if we use wires to connect those at the top with those at the bottom, we've formed a logical torus even though the board is still flat.

It doesn't even need to be a square board. We could have each processor on a separate board in a rack, with each board having four connectors probably all along the same edge, and use patch wires to connect the boards together into a logical torus.

So when we're converting our cube into a torus, the 'cube' could consist of a vertical stack of square boards each of which has a grid of processors on it. But it could also consist of a stack of boards in a rack, each of which has six connections, patched together to form the logical thick-walled torus. So now let's take additional patch leads and join the nodes that had been on the front of the logical cube to the corresponding nodes on the back of the logical cube, and we have a topology which has some of the properties of a torus and some of the properties of a sphere, and is just mind-bending if you try to visualise it.

This shape is what I believe Hillis means by a hypercube, although I have to say I've never found any of the visualisations of a hypercube in books or on the net at all helpful, and they certainly don't resemble the torusy-spherey thing which I visualise.

It has the very useful property, however, that the longest distance between any two nodes is 1.5x.

Why is 1.5x on the hypercube better than 1x on the torus? Suppose you want to build a machine with about 1000 nodes. The square root of a thousand is just less than 32, so let's throw in an extra 24 nodes to make it a round 32. We can lay out 1024 nodes on a 32 x 32 square, join left to right, top to bottom, and we have a maximum path between two of 1024 nodes of 32 hops. Suppose instead we arrange our processors on ten boards, each ten by ten, with vertical wires connecting each processor with the one above it and the one below it, as well as tracks on the board linking each with those east, west, north and south. Connect the left hand side to the right, the front to the back and the top to the bottom, and we have a maximum path between any two of 1000 nodes of fifteen hops. That's twice as good.

Obviously, if you increase the number of interconnectors to each processor above six, the paths shorten further but the logical topology becomes even harder to visualise. This doesn't matter - it doesn't actually have to be visualised - but wiring would become a nightmare.

I've been thinking today about topologies which would allow higher numbers of connections and thus shorter paths, and I've come to this tentative conclusion.

I can imagine topologies which tessellate triangle-tetrahedron-hypertetrahedron and pentagon-dodecahedron-hyperdodecahedron. There are possibly others. But the square-cube-hypercube model has one important property that those others don't (or, at least, it isn't obvious to me that they do). In the square-cube-hypercube model, every node can be addressed by a fixed number of coordinates, and the shortest path from any node to any other is absolutely trivial to compute.
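
To show just how trivial: along each wrapped axis you either go directly or the other way round the ring, whichever is shorter, and the total path length is simply the sum over the axes. A sketch, again my own, in Clojure:

    (defn axis-distance
      "Shortest hop count along one wrapped axis of size n, from
      coordinate a to coordinate b: either straight there, or the
      other way round the ring."
      [n a b]
      (let [d (Math/abs (- a b))]
        (min d (- n d))))

    (defn torus-distance
      "Shortest path length between two nodes addressed by coordinate
      vectors, on a lattice of side n wrapped in every dimension."
      [n from to]
      (reduce + (map (partial axis-distance n) from to)))

    ;; On a 10 x 10 x 10 'hypercube':
    (torus-distance 10 [0 0 0] [9 5 2]) ;=> 8 (1 + 5 + 2 hops)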

From this I conclude that the engineers who went before me - and who were a lot more thoughtful and expert than I am - were probably right: the square-cube-hypercube model, specifically toruses and hypercubes, is the right way to go.

Friday, 25 August 2017

Riddles in the dark

I'm continuing my series of essays on the future of news; this essay may look like a serious digression, but trust me, it's not. The future of news, in my opinion, is about cooperation; it's about allowing multiple voices to work together on the same story; it's about allowing users to critically compare different versions of the story and judge which is more trustworthy. This essay introduces some of the technical underpinnings that may make that possible.

In January 2015, I ran a workshop in Birnam on Land Reform. While planning the workshop, I realised I would need a mechanism for people who had attended the workshop to work cooperatively on the document which was the output of the event. So, obviously, I needed a Wiki. I could have put the wiki on a commercial Wiki site like Wikia, but if I wanted to control who could edit it I'd have to pay, and in any case it would be plastered with advertising.

So I decided to install a Wiki engine on my own server. I needed a Wiki engine which was small and easy to manage; which didn't have too many software dependencies; which was easy to back up; and on which I could control who could edit pages. So I did a quick survey of what was available.

Wiki engines store documents, and versions of those documents. As many people work on those documents, the version history can get complex. Most Wiki engines store their documents in relational databases. Relational databases are large, complex bits of software, and when they need to be upgraded it can be a pain. Furthermore, relational databases are not designed to store documents; they're at their best with fixed size data records. A Wiki backed by a relational database works, but it isn't a good fit.

Software is composed of documents too - and the documents of which software is composed are often revised many, many times. Consider the Linux kernel: in 2012 it comprised some 37,000 documents, containing 15,004,006 lines of code (it's now over twenty million lines of code). It's worked on by over 1,000 developers, many of whom are volunteers. Managing all that is a considerable task; it needs a revision control system.

In the early days, when Linus Torvalds, the original author of Linux, was working with a small group of other volunteers on the early versions of the kernel, revision control was managed with the system we all used back in the 1980s and 90s: the Concurrent Versions System, or CVS. CVS has a lot of faults, but it was a good system for small teams to use. However, Linux soon outgrew CVS, and so the kernel developers switched to a proprietary revision control system, Bitkeeper, which the copyright holders of that system allowed them to use for free.

However, in 2005, the copyright holders withdrew their permission to use Bitkeeper, claiming that some of the kernel developers had tried to reverse engineer it. This caused a crisis, because no other revision control system then available was sophisticated enough to manage distributed development on the scale of the Linux kernel.

What makes Linus Torvalds one of the great software engineers of all time is what he did then. He sat down on 3rd April 2005 to write a new revision control system from scratch, to manage the largest software development project in the world. By the 29th, he'd finished, and Git was working.

Git is extremely reliable, extremely secure, extremely sophisticated, extremely efficient - and at the same time, it's small. And what it does is manage revisions of documents. A Wiki must manage revisions of documents too - and display them. A Wiki backed with git as a document store looked a very good fit, and I found one: a thing called Gollum.

Gollum did everything I wanted, except that it did not authenticate users, so I couldn't control who could edit documents. Gollum is also written in a language called Ruby, which I don't like. So I thought about whether it was worth trying to write an authentication module in Ruby, or simply starting again from scratch in a language I do like - Clojure.

I started working on my new wiki, Smeagol, on 10th November 2014, and on the following day it was working. On the 14th, it was live on the Internet, acting as the website for the Birnam workshop.

It was, at that stage, crude. Authentication worked, but the security wasn't very good. The edit screen was very basic. But it was good enough. I've done a bit more work on it in the intervening time; it's now very much more secure, it shows changes between different versions of a document better (but still not as well as I'd like), and it has one or two other technically interesting features. It's almost at a stage of being 'finished', and is, in fact, already used by quite a lot of other people around the world.

But Smeagol exploits only a small part of the power of Git. Smeagol tracks versions of documents, and allows them to be backed up easily; it allows multiple people to edit them. But it doesn't allow multiple concurrent versions of the same document; in Git terms, it maintains only one branch. And Smeagol currently allows you to compare a version of the document only with the most recent version; it doesn't allow you to compare two arbitrary versions. Furthermore, the format of the comparison, while, I think, adequately clear, is not very pretty. Finally, because it maintains only one branch, Smeagol has no mechanism to merge branches.
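
For what it's worth, Git itself already provides the missing machinery. A hypothetical sketch in Clojure of comparing two arbitrary revisions of a page by shelling out to the git command line - the repository path, page name and commit hashes are invented for illustration, and this is not code from Smeagol:

    (require '[clojure.java.shell :refer [sh]])

    (defn diff-revisions
      "Return the unified diff of `page` between commits `rev1` and
      `rev2`, in the git repository at `repo`."
      [repo page rev1 rev2]
      (:out (sh "git" "diff" rev1 rev2 "--" page :dir repo)))

    ;; e.g. (diff-revisions "/srv/wiki-data" "LandReform.md"
    ;;                      "a1b2c3d" "e4f5a6b")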

But: does Smeagol - that simple Wiki engine I whipped up in a few days to solve the problem of collaboratively editing a single document - hold some lessons for how we build a collaborative news system?

Yes, I think it does.

The National publishes of the order of 120 stories per issue, and of the order of 300 issues per year. That's 36,000 documents per year, a volume on the scale of the Linux kernel. There are a few tens of blogs, each with at most a few tens of writers, contributing to Scottish political blogging; and at most a few hundred journalists.

So a collaborative news system for Scottish news is within the scale of technology we already know how to handle. How much further it could scale, I don't know. And how it would cope with federation I don't yet know - that might greatly increase the scaling problems. But I will deal with federation in another essay later.


Trust me

Reality is contested, and it is an essential part of reality that reality is contested. There are two reasons that reality is contested: one is, obviously, that people mendaciously misreport reality in order to deceive. That's a problem, and it's quite a big one; but (in my opinion) it pales into insignificance beside the other: honest people honestly perceive reality differently.

If you interview three witnesses to some real life event, you'll get three accounts; and it's highly likely that those accounts will be - possibly sharply - different. That doesn't mean that any of them are lying or untrustworthy; it's just that people perceive things differently. They perceive things differently partly because they have different viewpoints, but also because they have different understandings of the world.

Furthermore, there isn't a clear divide between an honest account from a different viewpoint and outright propaganda; rather, there's a spectrum. The conjugation is something like this:
  • I told the truth
  • You may have been stretching the point a little
  • He overstated his case
  • They were lying
Reality is hard. We see reality through a glass, darkly. There is no mechanism accessible to us which will reveal the perfect truth (except to religious extremists). We have to assess the viewpoints, and decide how much we trust them. We have to assess which accounts are trustworthy, and, from that, which witnesses are trustworthy.

A digression. I am mad. I say this quite frequently, but because most of the time I seem quite lucid I think people tend not to believe me. But one of the problems of being mad - for me - is that I sometimes have hallucinations, and I sometimes cannot disentangle in my memory what happened in dreams from what happened in reality. So I have to treat myself as an unreliable witness - I cannot even trust my own accounts of things which I have myself witnessed.

I suspect this probably isn't as rare as people like to think.

Never mind. Onwards.

The point of this, in constructing a model of news, is that we don't have access to a perfect account of what really happened. We have, at best, access to multiple contested accounts. How much can we trust them?

Among the trust issues on the Internet are
  1. Is this user who they say they are?
  2. Is this user one of many 'sock puppets' operated by a single entity?
  3. Is this user generally truthful?
  4. Does this user have an agenda?

Clues to trustworthiness

I wrote in the CollabPRES essay about webs of trust. I still think webs of trust are the best way to establish trustworthiness. But there have to be two parallel dimensions to the web of trust: there's 'real-life' trustworthiness, and there's reputational - online - trustworthiness. There are people I know only from Twitter whom I nevertheless trust highly; and there are people I know in real life as real people, but don't necessarily trust as much.

If I know someone in real life, I can answer the question 'is this person who they say they are?' If the implementation of the web of trust allows me to tag the fact that I know this person in real life, then, if you trust me, you can have confidence that this person exists in real life.

Obviously, if I don't know someone in real life, then I can still assess their trustworthiness through my evaluation of the information they post, and in scoring that evaluation I can allow the system to model the degree of my trust for them on each of the different subjects on which they post. If you trust my judgement of other people's trustworthiness, then how much I trust them affects how much you trust them, and the system can model that, too.
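
As a toy illustration of the kind of modelling I mean - all names, subjects and scores here are invented, and a real system would need something much richer - trust discounted along a path of acquaintance might look like this:

    (def trust
      "Map of truster -> trusted -> subject -> score in [0,1]."
      {:me    {:alice {:politics 0.9 :science 0.4}}
       :alice {:bob   {:politics 0.7}}})

    (defn direct-trust
      [who whom subject]
      (get-in trust [who whom subject] 0.0))

    (defn derived-trust
      "How much `who` trusts `whom` on `subject`, either directly or
      through one intermediary, taking the best available path. My
      trust in the intermediary's judgement discounts their trust in
      the target."
      [who whom subject]
      (apply max
             (direct-trust who whom subject)
             (for [[via subjects] (get trust who)]
               (* (get subjects subject 0.0)
                  (direct-trust via whom subject)))))

    ;; I don't know :bob directly, but :alice does:
    (derived-trust :me :bob :politics) ;=> 0.63 (0.9 * 0.7)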

[Image: alleged follower network of Twitter user DavidJo52951945, alleged to be a Russian sock-puppet]
However, the web of trust has limits. If thirty users of a system all claim to know one another in real life, but none (or only one) of them are known in real life by anyone outside the circle, they may be, for example, islanders on a remote island. But they may also all be sock-puppets operated by the same entity.

Also, a news system based in the southern United States, for example, will have webs of trust biased heavily towards creationist worldviews. So an article on, for example, a new discovery of a dinosaur fossil which influences understanding of the evolution of birds is unlikely to be scored highly for trustworthiness on that system.

I still think the web of trust is the best technical aid to assessing the trustworthiness of an author, but there are some other clues we can use.

[Image: alleged posting times of DavidJo52951945]
There is an interesting thread on Twitter this morning about a user who is alleged to be a Russian disinformation operative. The allegation is based (in part) on the claim that the user posts to Twitter only between 08:00 and 20:00 Moscow time. Without disputing other elements of the case, that point seems to me weak. The user may be in Moscow; or may be (as he claims) in Southampton, but a habitual early riser.

But modern technology allows us to get the location of the device from which a message was sent. If Twitter required that location tracking was enabled (which it doesn't, and, I would argue, shouldn't), then a person claiming to be tweeting from one location using a device in another location would be obviously less trustworthy.

There are more trust cues to be drawn from location. If the location from which a user communicates never moves, that's a cue; of course, the person may be housebound, for example, so it's not a strong one. Equally, people may have many valid reasons for choosing not to reveal their location; but simply having or not having a revealed location is itself a cue to trustworthiness. And location data can be spoofed, so it should never be trusted entirely; it is only another clue.

Allegedly also, the user claims to have a phone number which is the same as a UKIP phone number in Northern Ireland.

Databases are awfully good at searching for identical numbers. If multiple users all claim to have the same phone number, that must reduce their presumed trustworthiness, and their presumed independence from one another. Obviously, a system shouldn't publish a user's phone number (unless they give it specific permission to do so, which I think most of us wouldn't), but if they have a verified, distinct phone number known to the system, that fact could be published. If they enabled location sharing on the device with that phone number, and their claimed posting location was the same as the location reported by the device, then that fact could be reported.
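
A sketch of that search, with invented users (the 07700 900xxx numbers are the reserved fictional range): group accounts by claimed number, and flag any group of more than one as presumptively non-independent.

    (def users
      [{:id "anna"  :phone "+44 7700 900001"}
       {:id "bert"  :phone "+44 7700 900002"}
       {:id "carol" :phone "+44 7700 900002"}])

    (defn shared-phone-groups
      "Return groups of user ids which claim the same phone number."
      [users]
      (->> (group-by :phone users)
           vals
           (filter #(> (count %) 1))
           (map #(mapv :id %))))

    (shared-phone-groups users) ;=> (["bert" "carol"])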

Another digression; people use different identities on different internet systems. Fortunately there are now mechanisms which allow those identities to be tied together. For example, if you trust what I post as simon_brooke on Twitter, you can look me up on keybase.io and see that simon_brooke on Twitter is the same person as simon-brooke on GitHub, and also controls the domain journeyman.cc, which you'll note is the domain of this blog.

So webs of trust can extend across systems, provided users are prepared to tie their Internet identities together.

The values (and limits) of anonymity

Many people use pseudonyms on the Internet; it has become accepted. It's important for news gathering that anonymity is possible, because news is very often information that powerful interests wish to suppress or distort; without anonymity there would be no whistle-blowers and no leaks; and we'd have little reliable information from repressive regimes or war zones.

So I don't want to prevent anonymity. Nevertheless, a person with a (claimed) real world identity is more trustworthy than someone with no claimed real world identity; a person with an identity verified by other people with claimed real world identities is more trustworthy still; and a person with a claimed real world identity verified by someone I trust is yet more trustworthy.

So if I have two stories from the siege of Raqqa, for example, one from an anonymous user with no published location claiming to be in Raqqa, and the other from a journalist with a published location in Glasgow, who has a claimed real-world identity which is verified by people I know in the real world, and who claims in his story to have spoken by telephone to (anonymous) people whom he personally knows in Raqqa, which do I trust more? Undoubtedly the latter.

Of course, if the journalist in Glasgow who is known by someone I know endorses the identity of the anonymous user claiming to be in Raqqa, then the trustworthiness of the first story increases sharply.

So we must allow anonymity. We must allow users to hide their location, because unless they can hide their location anonymity is fairly meaningless (in any case, precise location is only really relevant to eye-witness accounts, so a person who allows their location to be published within a 10 km box may be considered more reliable than one who doesn't allow their location to be published at all).

Conclusion

We live in a world in which we have no access to 'objective reality', if such a thing even exists. Instead, we have access to multiple, contested accounts. Nevertheless, there are potential technical mechanisms for helping us to assess the trustworthiness of an account. A news system for the future should build on those mechanisms.

Creative Commons Licence
The fool on the hill by Simon Brooke is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License