Monday, 6 November 2017

Catalunya, Rule, and Law

The Spanish courts seek to suppress the Republic of Catalunya, in the name of the rule of law. The EU refuses to intervene, because it claims it's an internal Spanish matter. 

The EU claims to uphold fine-sounding principles, including human rights, democracy, subsidiarity, and the rule of law; and in claiming that, it finds itself, like a bullfighter in a fight the Spanish courts denied Catalunya the right to ban, on the horns of a dilemma of its own breeding.

The Rule of Law is not synonymous with democracy; in fact, it is more often antagonistic to it. This is shown by the Catalan crisis.

Law is, at best, a lagging indicator of a social consensus - but only when passed by delegates voting in their constituents' interests. Law is more often passed by elites (the House of Lords, a Tory cabinet of millionaires, etc.) in their own interests, or by elected representatives excessively or corruptly influenced by powerful interests through 'think tanks' and 'lobbyists'.

This is particularly so in the Catalan case: the Spanish constitution was negotiated with fascists in Franco's dying days, and is fenced round with conditions which make it unalterable in practice. Even if there were a practical course to amend the constitution, the Catalans are a systematic minority in Spain: they do not constitute a majority, and they can never muster a supermajority. They cannot change it.

So where does that leave Carles Puigdemont and the Catalan Government? The courts said they could not hold a referendum. The electors, who elected them to office, said they must do so.  The Rule of Law did not support democracy. Rather, democracy and the rule of law are in direct conflict. No man can serve two masters; the government of Catalunya chose to obey their electors.

Here endeth the first part; the lemma, if you will. Now, let's move on to the thesis.

Lawyers will argue that the law solves this problem: that the UN Charter and the ECHR are incorporated into Spanish law, and somehow trump the constitution, making the judgement of the Constitutional Court wrong. I say that argument does not hold. 

It may be that in this particular case there are ambiguities and paradoxes in the corpus of law by which one can contort the law into appearing to agree with the democratic decision of the people. But what if there weren't? Should the rule of law trump democracy?

The principle of subsidiarity dictates that the people who should decide the governance of Catalunya are the people of Catalunya. The principle of democracy dictates that they must have a mechanism available to them to decide this. And in Catalunya especially, with its ancient tradition of civil society and its proud tradition of anarcho-syndicalism, the views of the people must surely trump the views of any governing elite.

So where does that leave Donald Tusk, Guy Verhofstadt and the rest of the sclerotic cabal in Brussels? They can support the Rule of Law. Or they can support Democracy. They can't do both. It's time to choose.

Monday, 9 October 2017

The place you call home

The place I call home
It's no secret that I live in an illegal house (you might call it a hut, a bothy, a shack, whatever). Most of my neighbours also live in informal dwellings - old vans and buses, old caravans, yurts. It's pretty common in this parish, because legal housing is unaffordable for many people. How common it is across Scotland I don't know, and I don't think anyone knows. I think it would be worth trying to find out.

So, if you live in an informal dwelling - that is, anything that isn't a legal house or flat that you legally own or rent - anywhere in Scotland, I'd like to talk to you. I've got a set of questions I'd like to ask you. Ideally I'd like to come and see you, but I can't come everywhere and see everyone, so some at least of this needs to be done by email or telephone. Ideally, if you'll permit it, I'd like to have a picture of where you live (although I understand that many people will feel anxious about this, so if you don't allow it I perfectly understand). But first, I've a web form you can fill in - and if you live in an informal dwelling I'd be really grateful if you would. It's completely anonymous.

Obviously, informal chats with self-selected folk don't produce hard, numeric data. That needs a census, and at the time of the last census, when I was sleeping rough, I couldn't even get the census people to send me a form, although I asked for one. But if we can put together at least some impressionistic data on what sorts of people are living in informal dwellings, and what (if anything) their needs are, I think that would be useful.

Anything I am told, I will anonymise. I won't publish who you are, or exactly where you live (although I'd like to be able to say, for example, "I spoke to twenty people living in Galloway").

If you live in an informal dwelling (including squatting, couch-surfing or sleeping rough) and are prepared to talk to me, please let me know - either by commenting on this blog, or by emailing me.

Tuesday, 19 September 2017

Implementing post-scarcity hardware

The address space hinted at by using 64 bit cons-space and a 64 bit vector space containing objects each of whose length may be up to 1.4e20 bytes (2^64 64-bit words) is so large that a completely populated post-scarcity hardware machine can probably never be built. But that doesn't mean I'm wrong to specify such an address space: if we can make this architecture work for machines that can't (yet, anyway) be built, it will work for machines that can; and changing the size of the pointers, which one might wish to do for storage economy, can be done with a few edits to consspaceobject.h.
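That headline figure is simple arithmetic: 2^64 words of 8 bytes each (a throwaway check in Python, though the project itself is in C; the exact value is a shade under 1.5e20 bytes):

```python
# One word is 64 bits = 8 bytes; an object may be up to 2^64 words long.
words_per_object = 2 ** 64
bytes_per_word = 8
max_object_bytes = words_per_object * bytes_per_word

assert max_object_bytes == 147573952589676412928   # roughly 1.5e20 bytes
```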

But, for the moment, let's discuss a potential 32 bit PoSH machine, and how it might be built.

Pass one: a literal implementation

Let's say a processing node comprises a two core 32 bit processor, such as an ARM, 4GB of RAM, and a custom router chip. On each node, core zero is the actual processing node, and core one handles communications. We arrange these on a printed circuit board that is 4 nodes by 4 nodes. Each node is connected to the nodes in front, behind, left and right by tracks on the board, and by pins to the nodes on the boards above and below. On the edges of the board, the tracks which have no 'next neighbour' lead to some sort of reasonably high speed bidirectional serial connection - I'm imagining optical fibre (or possibly pairs of optical fibre, one for each direction). These boards are assembled in stacks of four, and the 'up' pins on the top board and the 'down' pins (or sockets) on the bottom board connect to similar high speed serial connectors.

This unit of 4 boards - 64 compute nodes - now forms both a logical and a physical cube. Let's call this cube module a crystal. Connect left to right, top to bottom and back to front, and you have a hypercube. But take another identical crystal, place it alongside, connect the right of crystal A to the left of crystal B and the right of B to the left of A, leaving the tops and bottoms and fronts and backs of those crystals still connected to themselves, and you have a larger cuboid with more compute power and address space but slightly lower path efficiency. Continue in this manner until you have a four by four by four arrangement of crystals, and you have a compute unit of 4096 nodes. So the basic 4x4x4 building block - the 'crystal' - is a good place to start, and it is in some measure affordable to build - low numbers of thousands of pounds, even for a prototype.
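The node addressing this implies can be sketched in a few lines (illustrative Python, not project code; the (x, y, z) coordinate scheme is my assumption):

```python
# Addressing nodes in a 4x4x4 'crystal' by (x, y, z) coordinate, with the
# edge connections wrapped round as described above.
SIDE = 4

def neighbours(x, y, z):
    """The six proximal neighbours of a node, edges wrapping round."""
    return [((x + dx) % SIDE, (y + dy) % SIDE, (z + dz) % SIDE)
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0),
                               (0, 1, 0), (0, -1, 0),
                               (0, 0, 1), (0, 0, -1))]

assert SIDE ** 3 == 64                    # 64 compute nodes per crystal
assert len(neighbours(0, 0, 0)) == 6      # every node has six links
assert (3, 0, 0) in neighbours(0, 0, 0)   # left edge wraps to the right
```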

I imagine you could get away with a two layer board - you might need more, I'm no expert in these things, but the data tracks between nodes can all go on one layer, and then you can have a raster bus on the other layer which carries power, backup data, and common signals (if needed).

So, each node has 4GB of memory (or more, or less - 4GB here is just illustrative). How is that memory organised? It could be treated as a heap, or it could be treated as four separate pages, but it must store four logical blocks of data: its own curated conspage, from which other nodes can request copies of objects; its own private housekeeping data (which can also be a conspage, but from which other nodes can't request copies); its cache of copies of data copied from other nodes; and its heap.

Note that a crystal of 64 nodes each with 4GB of RAM has a total memory of 256GB, which easily fits onto a single current generation hard disk or SSD module. So I'm envisaging that the nodes take turns to back up their memory to backing store all the time during normal operation. They (obviously) don't need to back up their cache, since they don't curate it.

What does this cost? About £15 per processor chip, plus £30 for memory, plus the router, which is custom but probably still in tens of pounds, plus a share of the cost of the board; probably under £100 per node, or £6500 for the 'crystal'.

Pass two: a virtual implementation

OK, OK, this crystal cube is a pretty concept, but let's get real. Using one core of each of 64 chips makes the architecture very concrete, but it's not necessarily efficient, either computationally or financially.

64 core ARM chips already exist:

1. Qualcomm Hydra - 64 of 64 bit cores;
2. Macom X-Gene - 64 of 64 bit cores;
3. Phytium Mars - 64 cores, but frustratingly the specification does not say whether the cores are 32 or 64 bit.

There are other interesting chips which aren't strictly 64 core:

1. Cavium ThunderX - ARM; 96 cores, each 64 bit, in pairs, shipping now;
2. Sparc M8 - 32 of 64 bit cores each capable of 8 concurrent threads; shipping now.

Implementing the virtual hypercube

Of course, these chips are not designed as hypercubes. We can't route our own network of physical connections into the chips, so our communications channels have to be virtual. But we can implement a communications channel as a pair of buffers: an 'upstream' buffer writable by the lower-numbered processor and readable by the higher, and a 'downstream' buffer writable by the higher-numbered processor and readable by the lower. Each buffer should be at least big enough to write a whole cons page object into, optionally including a cryptographic signature if that is implemented. Each pair of buffers also needs at least four bits of flags, in order to be able to signal, for each direction:

0. Idle - the processor at the receiving end is idle and can accept work;
1. Busy writing - the processor at the sending end is writing data to the buffer, which is not yet complete;
2. Ready to read - the processor at the sending end has written data to the buffer, and it is complete;
3. Read - the processor at the receiving end has read the current contents of the buffer.

Thus I think it takes at least six clock ticks to write the buffer (set busy-writing, copy four 64 bit words into the buffer, set ready-to-read) and five to read it out - again, more if the messages are cryptographically signed - for an eleven clock tick transfer (the buffers may be allocated in main memory, but in practice they will always live in L2 cache). That's probably cheaper than making a stack frame. All communications channels within the 'crystal' cost exactly the same.
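The handshake those flags implement can be sketched as a toy state machine (hypothetical Python, purely to illustrate the protocol; in hardware the flags would be bits in registers or cache lines):

```python
# One direction of an inter-node link: a buffer plus a two-bit status flag.
IDLE, BUSY_WRITING, READY_TO_READ, READ = range(4)

class Channel:
    def __init__(self):
        self.flag = IDLE
        self.buffer = None

    def write(self, words):
        """Sender side: one tick to set the flag, one per word copied,
        one to mark ready - six ticks for a four-word cons page object."""
        assert self.flag in (IDLE, READ), "previous message not yet read"
        self.flag = BUSY_WRITING
        self.buffer = list(words)
        self.flag = READY_TO_READ

    def read(self):
        """Receiver side: only valid once the sender has marked ready."""
        assert self.flag == READY_TO_READ, "nothing complete to read"
        words = self.buffer
        self.flag = READ
        return words

upstream = Channel()
upstream.write([0x1, 0x2, 0x3, 0x4])         # four 64 bit words
assert upstream.read() == [0x1, 0x2, 0x3, 0x4]
assert upstream.flag == READ                 # sender may now write again
```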

But note! As in the literal design, a single thread cannot at the same time execute user program and listen to communications from neighbours. So a node has to be able to run two threads. Whether that's two threads on a single core, or two cores per node, is a detail. But it makes the ThunderX and Sparc M8 designs look particularly interesting.

But note that there's one huge advantage that this single-chip virtual crystal has over the literal design: all cores access the same memory pool. Consequently, vector space objects never have to be passed hop, hop, hop across the communications network; all can be accessed directly, and to pass a list, all you have to pass is its first cons cell. So any S-Expression can be passed from any node to any of its 6 proximal neighbours in one hop.

There are downsides to this, too. While communication inside the crystal is easier and quicker, communication between crystals becomes a lot more complex and I don't yet even have an idea how it might work. Also, contention on the main address bus, with 64 processors all trying to write to and read from the same memory at the same time, is likely to be horrendous, leading to much lower speed than the solution where each node has its own memory.

On the cost side, you can probably fit this all onto one printed circuit board as against the 4 of the 'literal' design; the single processor chip is likely to cost around £400; the memory will probably be a little cheaper than on the literal design; and you don't need the custom routers, the connection hardware, or the optical transceivers. So the cost probably looks more like £5,000. Note also that this virtual crystal has 64 bit processors (although address bus contention will almost certainly burn all that advantage and more).


An experimental post-scarcity machine can be built now - and I can almost afford to build it. I don't have the skills, of course; but I can learn.

Thursday, 14 September 2017

Hardware of the deep future

HAL9000, a visualisation of hardware of the deep future
In thinking about how to write a software architecture that won't quickly become obsolescent, I find that I'm thinking increasingly about the hardware on which it will run.

In Post Scarcity Hardware I envisaged a single privileged node which managed main memory. Since then I've come to think that this is a brittle design which will lead to bottlenecks, and that each cons page will be managed by a separate node. So there needs to be a hardware architecture which provides the shortest possible paths between nodes.

Well, actually... from a software point of view it doesn't matter. From a software point of view, provided it's possible for any node to request a memory item from any other node, that's enough, and, for the software to run (slowly), a linear serial bus would do. But part of the point of this thinking is to design hardware which is orders of magnitude faster than the von Neumann architecture allows. So for performance, cutting the number of hops to a minimum is important.

I've been reading Danny Hillis' thesis and his book The Connection Machine which, it transpires, is closely based on it. Danny Hillis was essentially trying to do what I am trying to do, but forty years ago, with the hardware limitations of forty years ago (but he was trying to do it in the right place, and with a useful amount of money that actually allowed him to build something physical, which I'm never likely to have).

Hillis' solution to the topology problem, as I understand it (and note - I may not understand it very well) is as follows:

If you take a square grid and place a processor at every intersection, it has at most four proximal neighbours, and, for a grid which is x cells in each direction, the longest path between two cells is 2x. If you join the nodes on the left hand edge of the grid to the corresponding nodes on the right hand edge, you have a cylinder, and the longest path between two nodes is 1.5x. If you then join the nodes on the top of the grid to the nodes on the bottom, you have a torus - a figure like a doughnut or a bagel. Every single node has four proximal neighbours, and the longest path between any two nodes is x.
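Those longest-path figures can be checked empirically with a breadth-first search over a small example (a sketch; the 2x, 1.5x and x above are leading-order approximations of the exact 2(x-1), (x-1)+⌊x/2⌋ and 2⌊x/2⌋):

```python
from collections import deque
from itertools import product

def diameter(side, wrap_cols=False, wrap_rows=False):
    """Longest shortest-path, in hops, over a side x side grid of nodes.
    wrap_cols joins left to right (a cylinder); adding wrap_rows joins
    top to bottom as well (a torus)."""
    def neighbours(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if wrap_cols:
                nx %= side
            if wrap_rows:
                ny %= side
            if 0 <= nx < side and 0 <= ny < side:
                yield nx, ny
    worst = 0
    for start in product(range(side), repeat=2):
        hops = {start: 0}                 # breadth-first search from start
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for n in neighbours(*node):
                if n not in hops:
                    hops[n] = hops[node] + 1
                    queue.append(n)
        worst = max(worst, max(hops.values()))
    return worst

# For x = 8: flat grid 2(x-1) = 14; cylinder 11 (roughly 1.5x); torus 8 (= x).
assert diameter(8) == 14
assert diameter(8, wrap_cols=True) == 11
assert diameter(8, wrap_cols=True, wrap_rows=True) == 8
```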

So far so good. Now, let's take square grids and stack them. This gives each node at most six proximal neighbours. We form a cube, and the longest distance between two nodes is 3x. We can link the nodes on the left of the cube to the corresponding nodes on the right and form a (thick walled) cylinder, and the longest distance between two nodes is 2.5x. Now join the nodes at the top of the cube to the corresponding nodes at the bottom, and we have a thick walled torus. The maximum distance between two nodes is now 2x.

Let's stop for a moment and think about the difference between logical and physical topology. Suppose we have a printed circuit board with 100 processors on it in a regular grid. We probably could physically bend the circuit board to form a cylinder, but there's no need to do so. We achieve exactly the same connection architecture simply by using wires to connect the left side to the right. And if we use wires to connect those at the top with those at the bottom, we've formed a logical torus even though the board is still flat.

It doesn't even need to be a square board. We could have each processor on a separate board in a rack, with each board having four connectors probably all along the same edge, and use patch wires to connect the boards together into a logical torus.

So when we're converting our cube into a torus, the 'cube' could consist of a vertical stack of square boards each of which has a grid of processors on it. But it could also consist of a stack of boards in a rack, each of which has six connections, patched together to form the logical thick-walled torus. So now let's take additional patch leads and join the nodes that had been on the front of the logical cube to the corresponding nodes on the back of the logical cube, and we have a topology which has some of the properties of a torus and some of the properties of a sphere, and is just mind-bending if you try to visualise it.

This shape is what I believe Hillis means by a hypercube, although I have to say I've never found any of the visualisations of a hypercube in books or on the net at all helpful, and they certainly don't resemble the torusy-spherey thing which I visualise.

It has the very useful property, however, that the longest distance between any two nodes is 1.5x.

Why is 1.5x on the hypercube better than 1x on the torus? Suppose you want to build a machine with about 1000 nodes. The square root of a thousand is just less than 32, so let's throw in an extra 24 nodes to make it a round 32. We can lay out 1024 nodes on a 32 x 32 square, join left to right, top to bottom, and we have a maximum path between any two of 1024 nodes of 32 hops. Suppose instead we arrange our processors on ten boards each ten by ten, with vertical wires connecting each processor with the one above it and the one below it, as well as tracks on the board linking each with those east, west, north and south. Connect the left hand side to the right, the front to the back and the top to the bottom, and we have a maximum path between any two of 1000 nodes of fifteen hops. That's twice as good.

Obviously, if you increase the number of interconnectors to each processor above six, the paths shorten further but the logical topology becomes even harder to visualise. This doesn't matter - it doesn't actually have to be visualised - but wiring would become a nightmare.

I've been thinking today about topologies which would allow higher numbers of connections and thus shorter paths, and I've come to this tentative conclusion.

I can imagine topologies which tessellate triangle-tetrahedron-hypertetrahedron and pentagon-dodecahedron-hyperdodecahedron. There are possibly others. But the square-cube-hypercube model has one important property that those others don't (or, at least, it isn't obvious to me that they do). In the square-cube-hypercube model, every node can be addressed by a fixed number of coordinates, and the shortest path from any node to any other is absolutely trivial to compute.
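To make that triviality concrete: on a torus of any dimensionality, the hop distance between two coordinate vectors is just a per-axis choice of 'go round one way or the other', summed (a sketch):

```python
def torus_distance(a, b, side):
    """Hops between coordinate vectors a and b on a torus whose every
    dimension is `side` nodes round; works in any number of dimensions.
    Per axis, go round whichever way is shorter."""
    return sum(min(abs(p - q), side - abs(p - q)) for p, q in zip(a, b))

# The worked examples: 32 hops worst case on the 32 x 32 torus,
# fifteen on the 10 x 10 x 10 one.
assert torus_distance((0, 0), (16, 16), 32) == 32
assert torus_distance((0, 0, 0), (5, 5, 5), 10) == 15
```

Routing is equally trivial: at each hop, step any coordinate towards its target, mod `side`.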

From this I conclude that the engineers who went before me - and who were a lot more thoughtful and expert than I am - were probably right: the square-cube-hypercube model, specifically toruses and hypercubes, is the right way to go.

Friday, 25 August 2017

Riddles in the dark

I'm continuing my series of essays on the future of news; this essay may look like a serious digression, but trust me, it's not. The future of news, in my opinion, is about cooperation; it's about allowing multiple voices to work together on the same story; it's about allowing users to critically evaluate different versions of the story to evaluate which is more trustworthy. This essay introduces some of the technical underpinnings that may make that possible.

In January 2015, I ran a workshop in Birnam on Land Reform. While planning the workshop, I realised I would need a mechanism for people who had attended the workshop to work cooperatively on the document which was the output of the event. So, obviously, I needed a Wiki. I could have put the wiki on a commercial Wiki site like Wikia, but if I wanted to control who could edit it I'd have to pay, and in any case it would be plastered with advertising.

So I decided to install a Wiki engine on my own server. I needed a Wiki engine which was small and easy to manage; which didn't have too many software dependencies; which was easy to back up; and on which I could control who could edit pages. So I did a quick survey of what was available.

Wiki engines store documents, and versions of those documents. As many people work on those documents, the version history can get complex. Most Wiki engines store their documents in relational databases. Relational databases are large, complex bits of software, and when they need to be upgraded it can be a pain. Furthermore, relational databases are not designed to store documents; they're at their best with fixed size data records. A Wiki backed by a relational database works, but it isn't a good fit.

Software is composed of documents too - and the documents of which software is composed are often revised many, many times. In 2012 the Linux kernel comprised 37,000 documents, containing 15,004,006 lines of code (it's now over twenty million lines of code). It's worked on by over 1,000 developers, many of whom are volunteers. Managing all that is a considerable task; it needs a revision control system.

In the early days, when Linus Torvalds, the original author of Linux, was working with a small group of other volunteers on the early versions of the kernel, revision control was managed in the revision control system we all used back in the 1980s and 90s: the Concurrent Versions System, or CVS. CVS has a lot of faults but it was a good system for small teams to use. However, Linux soon outgrew CVS, and so the kernel developers switched to a proprietary revision control system, Bitkeeper, which the copyright holders of that system allowed them to use for free.

However, in 2005, the copyright holders withdrew their permission to use Bitkeeper, claiming that some of the kernel developers had tried to reverse engineer it. This caused a crisis, because there was no other revision control system then available sophisticated enough to manage distributed development on the scale of the Linux kernel.

What makes Linus Torvalds one of the great software engineers of all time is what he did then. He sat down on 3rd April 2005 to write a new revision control system from scratch, to manage the largest software development project in the world. By the 29th, he'd finished, and Git was working.

Git is extremely reliable, extremely secure, extremely sophisticated, extremely efficient - and at the same time, it's small. And what it does is manage revisions of documents. A Wiki must manage revisions of documents too - and display them. A Wiki backed with git as a document store looked a very good fit, and I found one: a thing called Gollum.

Gollum did everything I wanted, except that it did not authenticate users, so I couldn't control who could edit documents. Gollum is written in a language called Ruby, which I don't like. So I thought about whether it was worth trying to write an authentication module in Ruby, or simply starting again from scratch in a language I do like - Clojure.

I started working on my new wiki, Smeagol, on 10th November 2014, and on the following day it was working. On the 14th, it was live on the Internet, acting as the website for the Birnam workshop.

It was, at that stage, crude. Authentication worked, but the security wasn't very good. The edit screen was very basic. But it was good enough. I've done a bit more work on it in the intervening time; it's now very much more secure, it shows changes between different versions of a document better (but still not as well as I'd like), and it has one or two other technically interesting features. It's almost at a stage of being 'finished', and is, in fact, already used by quite a lot of other people around the world.

But Smeagol exploits only a small part of the power of Git. Smeagol tracks versions of documents, and allows them to be backed up easily; it allows multiple people to edit them. But it doesn't allow multiple concurrent versions of the same document; in Git terms, it maintains only one branch. And Smeagol currently allows you to compare a version of the document only with the most recent version; it doesn't allow you to compare two arbitrary versions. Furthermore, the format of the comparison, while, I think, adequately clear, is not very pretty. Finally, because it maintains only one branch, Smeagol has no mechanism to merge branches.

But: does Smeagol - that simple Wiki engine I whipped up in a few days to solve the problem of collaboratively editing a single document - hold some lessons for how we build a collaborative news system?

Yes, I think it does.

The National publishes of the order of 120 stories per issue, and of the order of 300 issues per year. That's 36,000 documents per year, a volume on the scale of the Linux kernel. There are a few tens of blogs, each with at most a few tens of writers, contributing to Scottish political blogging; and at most a few hundred journalists.

So a collaborative news system for Scottish news is within the scale of technology we already know how to handle. How much further it could scale, I don't know. And how it would cope with federation I don't yet know - that might greatly increase the scaling problems. But I will deal with federation in another essay later.

Trust me

Reality is contested, and it is an essential part of reality that reality is contested. There are two reasons that reality is contested: one is, obviously, that people mendaciously misreport reality in order to deceive. That's a problem, and it's quite a big one; but (in my opinion) it pales into insignificance beside another. Honest people honestly perceive reality differently.

If you interview three witnesses to some real life event, you'll get three accounts; and it's highly likely that those accounts will be - possibly sharply - different. That doesn't mean that any of them are lying or are untrustworthy; it's just that people perceive things differently. They perceive things differently partly because they have different viewpoints, but also because they have different understandings of the world.

Furthermore, there isn't a clear divide between an honest account from a different viewpoint and outright propaganda; rather, there's a spectrum. The conjugation is something like this:
  • I told the truth
  • You may have been stretching the point a little
  • He overstated his case
  • They were lying
Reality is hard. We see reality through a glass, darkly. There is no mechanism accessible to us which will reveal the perfect truth (except to religious extremists). We have to assess the viewpoints, and decide how much we trust them. We have to assess which accounts are trustworthy, and, from that, which witnesses are trustworthy.

A digression. I am mad. I say this quite frequently, but because most of the time I seem quite lucid I think people tend not to believe me. But one of the problems of being mad - for me - is that I sometimes have hallucinations, and I sometimes cannot disentangle in my memory what happened in dreams from what happened in reality. So I have to treat myself as an unreliable witness - I cannot even trust my own accounts of things which I have myself witnessed.

I suspect this probably isn't as rare as people like to think.

Never mind. Onwards.

The point of this, in constructing a model of news, is we don't have access to a perfect account of what really happened. We have at best access to multiple contested accounts. How much can we trust them?

Among the trust issues on the Internet are:
  1. Is this user who they say they are?
  2. Is this user one of many 'sock puppets' operated by a single entity?
  3. Is this user generally truthful?
  4. Does this user have an agenda?

Clues to trustworthiness

I wrote in the CollabPRES essay about webs of trust. I still think webs of trust are the best way to establish trustworthiness. But there have to be two parallel dimensions to the web of trust: there's 'real-life' trustworthiness, and there's reputational - online - trustworthiness. There are people I know only from Twitter whom I nevertheless trust highly; and there are people I know in real life as real people, but don't necessarily trust as much.

If I know someone in real life, I can answer the question 'is this person who they say they are?' If the implementation of the web of trust allows me to tag the fact that I know this person in real life, then, if you trust me, you have confidence that this person exists in real life.

Obviously, if I don't know someone in real life, then I can still assess their trustworthiness through my evaluation of the information they post, and in scoring that evaluation I can allow the system to model the degree of my trust for them on each of the different subjects on which they post. If you trust my judgement of other people's trustworthiness, then how much I trust them affects how much you trust them, and the system can model that, too.
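As a purely illustrative sketch of how a system might model that discounted, transitive trust (the names, scores and the multiplicative decay rule are all my inventions, not a worked-out design):

```python
# Direct trust scores in [0, 1], recorded per pair of users.
direct = {
    ("me", "alice"): 0.9,     # I know alice in real life
    ("alice", "bob"): 0.8,    # alice rates bob's posts highly
}

def trust(a, b, graph=direct):
    """a's trust in b: the direct score if recorded, otherwise the best
    one-intermediary path, discounted by trust in the intermediary."""
    if (a, b) in graph:
        return graph[(a, b)]
    via = [graph[(a, m)] * graph[(m, b)]
           for (x, m) in graph
           if x == a and (m, b) in graph]
    return max(via, default=0.0)

assert trust("me", "alice") == 0.9
assert abs(trust("me", "bob") - 0.72) < 1e-9   # 0.9 * 0.8, discounted
assert trust("me", "carol") == 0.0             # no path, no inferred trust
```

A real system would need per-subject scores and longer chains, but the shape of the calculation is the same.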

Alleged follower network of Twitter user DavidJo52951945,
alleged to be a Russian sock-puppet
However, the web of trust has limits. If thirty users of a system all claim to know one another in real life, but none (or only one) of them are known in real life by anyone outside the circle, they may be, for example, islanders on a remote island. But they may also all be sock-puppets operated by the same entity.

Also, a news system based in the southern United States, for example, will have webs of trust biased heavily towards creationist worldviews. So an article on, for example, a new discovery of a dinosaur fossil which influences understanding of the evolution of birds is unlikely to be scored highly for trustworthiness on that system.

I still think the web of trust is the best technical aid to assessing the trustworthiness of an author, but there are some other clues we can use.

Alleged posting times of DavidJo52951945
There is an interesting thread on Twitter this morning about a user who is alleged to be a Russian disinformation operative. The allegation is based (in part) on the claim that the user posts to Twitter only between 08:00 and 20:00 Moscow time. Without disputing other elements of the case, that point seems to me weak. The user may be in Moscow; or may be (as he claims) in Southampton, but a habitual early riser.

But modern technology allows us to get the location of the device from which a message was sent. If Twitter required that location tracking was enabled (which it doesn't, and, I would argue, shouldn't), then a person claiming to be tweeting from one location using a device in another location would be obviously less trustworthy.

There are more trust cues to be drawn from location. If the location from which a user communicates never moves, then that's a cue. Of course, the person may be housebound, for example, so it's not a strong cue. Equally, people may have many valid reasons for choosing not to reveal their location; but simply having or not having a revealed location is a cue to trustworthiness.  Of course, location data can be spoofed. It should never be trusted entirely; it is only another clue to trustworthiness.

Allegedly also, the user claims to have a phone number which is the same as a UKIP phone number in Northern Ireland.

Databases are awfully good at searching for identical numbers. If multiple users all claim to have the same phone number, that must reduce their presumed trustworthiness, and their presumed independence from one another. Obviously, a system shouldn't publish a user's phone number (unless they give it specific permission to do so, which I think most of us wouldn't), but if they have a verified, distinct phone number known to the system, that fact could be published. If they enabled location sharing on the device with that phone number, and their claimed posting location was the same as the location reported by the device, then that fact could be reported.
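Here is a sketch of that duplicate-number check, assuming a simple mapping of accounts to claimed phone numbers; all the names and numbers below are invented.

```python
# A sketch of the duplicate-number check, assuming a simple mapping of
# accounts to claimed phone numbers (all names and numbers invented).
from collections import defaultdict

def shared_numbers(accounts):
    """Return the numbers claimed by more than one account."""
    by_number = defaultdict(list)
    for user, number in accounts.items():
        if number:  # accounts with no number on record are skipped
            by_number[number].append(user)
    return {n: users for n, users in by_number.items() if len(users) > 1}

accounts = {
    "user_one": "+442890000000",
    "user_two": "+442890000000",  # the same number claimed twice
    "user_three": "+441387000000",
    "user_four": None,
}
print(shared_numbers(accounts))  # {'+442890000000': ['user_one', 'user_two']}
```

Any account appearing in that result would have its presumed independence, and trustworthiness, marked down without its actual number ever being published.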

Another digression: people use different identities on different internet systems. Fortunately there are now mechanisms which allow those identities to be tied together. For example, if you trust what I post as simon_brooke on Twitter, you can look me up and see that simon_brooke on Twitter is the same person as simon-brooke on GitHub, and also controls the domain on which this blog is published.

So webs of trust can extend across systems, provided users are prepared to tie their Internet identities together.

The values (and limits) of anonymity

Many people use pseudonyms on the Internet; it has become accepted. It's important for news gathering that anonymity is possible, because news is very often information that powerful interests wish to suppress or distort; without anonymity there would be no whistle-blowers and no leaks; and we'd have little reliable information from repressive regimes or war zones.

So I don't want to prevent anonymity. Nevertheless, a person with a (claimed) real world identity is more trustworthy than someone with no claimed real world identity, a person with an identity verified by other people with claimed real world identities is more trustworthy still, and a person with a claimed real world identity verified by someone I trust is yet more trustworthy.

So if I have two stories from the siege of Raqqa, for example, one from an anonymous user with no published location claiming to be in Raqqa, and the other from a journalist with a published location in Glasgow, who has a claimed real-world identity which is verified by people I know in the real world, and who claims in his story to have spoken by telephone to (anonymous) people whom he personally knows in Raqqa, which do I trust more? Undoubtedly the latter.

Of course, if the journalist in Glasgow who is known by someone I know endorses the identity of the anonymous user claiming to be in Raqqa, then the trustworthiness of the first story increases sharply.

So we must allow anonymity. We must allow users to hide their location, because unless they can hide their location anonymity is fairly meaningless (in any case, precise location is only really relevant to eye-witness accounts, so a person who allows their location to be published within a 10Km box may be considered more reliable than one who doesn't allow their location to be published at all).


We live in a world in which we have no access to 'objective reality', if such a thing even exists. Instead, we have access to multiple, contested accounts. Nevertheless, there are potential technical mechanisms for helping us to assess the trustworthiness of an account. A news system for the future should build on those mechanisms.

Thursday, 24 August 2017

Challenges in the news environment

Alistair Carmichael, who proved that politicians can lie with impunity. Picture by Stewart Bremner
Two days ago I reposted my CollabPRES essay, written twelve years ago, which addressed problems of the news publishing environment that I was aware of then: centrally, the collapse of revenue which, at that point, affected local media most harshly.

Time moves on; there are other problems in the news publishing environment which existed then but which are much more starkly apparent to me now; and there are some issues which are authentically new.

Why News Matters

Challenges to the way news media works are essentially challenges to democracy. Without a well-informed public, who understand the issues they are voting on and the potential consequences of their decisions, who have a clear view of the honesty, integrity and competence of their politicians, meaningful democracy becomes impossible.

We need to create a media which is fit for - and defends - democracy; this series of essays is an attempt to propose how we can do that.

So what's changed in the past twelve years? Let's start with those changes which have been incremental.

History Repeats Itself

Proprietors have always used media to promote their political interests, and, for at least a century, ownership of media has been concentrated in the hands of a very few, very rich white men whose interests are radically different from those of ordinary folk. That's not new. But over the past few years the power this gives has been used more blatantly and more shamelessly. Furthermore, there was once an assumption that things the media published would be more or less truthful, and that (if they weren't) that was a matter of shame. No longer. Fake News is all too real.

Politicians have always lied, back to the beginnings of democracy. But over the past decade we've seen a series of major elections and referenda swung by deliberate mendacity, and, as the Liar Carmichael scandal shows, politicians now lie with complete impunity. Not only are they not shamed into resigning, not only are they not sacked, successful blatant liars like Liar Carmichael, Boris Johnson, David Davis, Liam Fox and, most starkly of all, Donald Trump, are quite often promoted.

Governments have for many years tried to influence opinion in countries abroad, to use influence over public opinion as at least a diplomatic tool, at most a mechanism for regime change. That has been the purpose of the BBC World Service since its inception. But I think we arrogantly thought that as sophisticated democracies we were immune to such manipulation. The greater assertiveness and effectiveness of Russian-owned media over the past few years has clearly shown we're not.

The Shock of the New: Social Media

Other changes have been revolutionary. Most significant has been the rise of social media, and the data mining it enables. Facebook was a year old when I wrote my essay; Twitter didn't yet exist. There had been precursors - I myself had used Usenet since 1986, others had used AOL and bulletin boards. But these all had moderately difficult user interfaces, and predated the mass adoption of the Internet. It was the World Wide Web (1991) that made the Internet accessible to non-technical people, but adoption still took time.

Six Degrees, in 1997, was the first recognisably modern social media platform, which tracked users' relationships with one another. Social media has a very strong 'network effect': if your friends are all on a particular social media platform, there's a strong incentive for you to be on the same platform, because if you use a different one, you can't communicate with them. Thus social media is in effect a natural monopoly, and this makes the successful social media companies - which means, essentially, Facebook, Facebook and Facebook - immensely powerful.

What makes social media a game changer is that it allows - indeed facilitates - data mining. The opinions and social and psychological profiles of users are the product that the social media companies actually sell to fund their operations.

We've all sort of accepted that Google and Facebook use what they learn from their monitoring of our use of their services to sell us commercial advertising, as an acceptable cost of having free use of their services. It is, in effect, no more than an extension of, and in many ways less intrusive than, the 'commercial breaks' in commercial television broadcasts. We accept that as kind-of OK.

It becomes much more challenging, however, when they market that data to people who will use sophisticated analysis to precisely target (often mendacious) political messages at very precisely defined audiences. There are many worrying aspects to this, but one worrying aspect is that these messages are not public: we cannot challenge the truth of messages targeted at audiences we're not part of, because we don't see the messages. This is quite different from a billboard which is visible to everyone passing, or a television party-political broadcast which is visible to all television watchers.

There's also the related question of how Facebook and Twitter select the items they prioritise in the stream of messages they show us. Journalists say "it's an algorithm", as if that explained everything. Of course it's an algorithm; everything computers do is an algorithm. The question is: what is the algorithm, who controls it, and how can we audit it (we can't) and change it (see above)?

The same issue, of course, applies to Google. Google's search engine outcompeted all others (see below) because its ranking algorithm was better - specifically, was much harder to game. The algorithm which was used in Google's early days is now well known, and we believe that the current algorithm is an iterative development on that. But the exact algorithm is secret, and there is a reasonable argument that it has to be secret because if it were not we'd be back to the bad old days of gaming search engines.
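That well-known early algorithm is PageRank, and its core can be sketched in a few lines of Python; the toy link graph and iteration count below are illustrative only, and Google's current ranking is of course far more elaborate, and secret.

```python
# A minimal power-iteration sketch of PageRank over a toy link graph
# (damping factor 0.85, as in the original paper). Illustrative only.

def pagerank(links, d=0.85, iterations=50):
    """links: mapping of page -> list of pages it links to."""
    pages = set(links) | {p for out in links.values() for p in out}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / len(pages) for p in pages}
        for page, out in links.items():
            for target in out:
                # each page shares its rank equally among its outlinks
                new[target] += d * rank[page] / len(out)
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # c -- the most linked-to page
```

What made this hard to game was that a page's rank depends on the rank of the pages linking to it, not merely on their number; manufacturing links from worthless pages buys little.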

Search, in an Internet world, is a vital public service; for the sake of democracy it needs to be reasonably 'objective' and 'fair'. There's a current conspiracy theory that Google is systematically down rating left-wing sites. I don't know whether this is true, but if it is true it is serious; and, whatever the arguments about secrecy, for the sake of democracy we need Google's algorithm, too, to be auditable, so that it can be shown to be fair.

Tony Benn's five famous questions apply just as much to algorithms as to officials:

  1. What power have you got?
  2. Where do you get it from?
  3. In whose interests do you exercise it?
  4. To whom are you accountable?
  5. How do we get rid of you?

Facebook and Google need to answer these questions on behalf of their algorithms, of course; but in designing a system to provide the future of news, so do we.

To Everything, There is a Season

There's a lot to worry about in this situation, but there is some brightness to windward: we live in an era of rapid change, and we can, in aggregate, influence that change.

Changing how the news media works may not be all that hard, because the old media is caught in a tight financial trap between the rising costs of printing and distribution and the falling revenues from print advertising. Serious thinkers in the conventional media acknowledge that change is necessary and are searching for ways to make it happen.

Creating new technology and a new business model for the news media is the principal objective of this series of essays; but it isn't the only thing that needs to be done, if we are to create an environment in which Democracy can continue to thrive.

Changing how social media works is harder, because of the network effect. To topple Facebook would require a concerted effort by very many people in very many countries across the globe. That's hard. But looking at the history of search engines shows us it isn't impossible.

I was responsible for one of the first 'search engines' back in 1994; it was intended to be a directory of all Scottish websites. It was funded by the SDA and was manually curated. Yahoo appeared - also as a manually curated service - at about the same time. But AltaVista, at the same time, launched a search engine based on automated spidering of the whole Web (it wasn't the first engine to do this), which, because of its speed and its uncluttered interface, very rapidly became dominant.

Where is AltaVista now? Gone. Google came along with a search engine which was much harder to 'game', and which consequently produced higher-quality search results, and AltaVista died.

Facebook is now a hugely overmighty subject, with enormous political power; but Facebook will die. It will die surprisingly suddenly. It will die when a large enough mass of people find a large enough motivation to switch. How that may be done will be the subject of another essay.

Wednesday, 23 August 2017

How do we pay for search?

[This is taken from a Twitter thread I posted on June 27th. I'm reposting it here now because it's part of supporting argument for a larger project I'm working on about the future of news publishing]

Seriously, folks, how do we want to pay for search? It's a major public service of the Internet age. We all use it. Google (and Bing, and others) provide us with search for free and fund it by showing us adverts and links to their own services.

But search is not free. The effort of cataloging & indexing all the information on the web costs serious compute time. Literally thousands - possibly by now millions - of computers operate 24 hours a day every day just doing that and nothing else. Another vast army of computers sits waiting for our search queries and serving us responses quickly and efficiently. All this costs money: servers, buildings, vast amounts of electricity, bandwidth. It's not free. It's extremely expensive.

Google's implicit contract with us is that we supply them with information about ourselves through what we search for, and also look at the adverts they show us, and in return Google (or Bing, or...) supplies us with an extraordinarily powerful search service.

The EU say it's not OK for Google to show us adverts for their own services (specifically, their price comparison service). Why is it not OK? We all understand the implicit contract. It's always been OK for other media to show adverts for their own stuff. How many trailers for other programmes does the BBC (or ITV, or Sky...) show in a day? You can't watch BBC TV without seeing trails for other BBC services/programmes. The BBC don't show trails for Sky, nor Sky for BBC.

So I don't understand why it's wrong for Google to do this. But if we think it is wrong, how do we want to pay for search?  I can perfectly see an argument that search is too important to be entrusted to the private sector, that search ought to be provided by an impartial, apolitical public service - like e.g. a national library - funded out of taxation. 

But the Internet is international, so if search were a public sector service it would in effect be controlled by the UN. This is an imperfect world, with imperfect institutions. Not all members of the UN are democracies. The Saudis sit on the UN Women's Rights Commission. Do we want them controlling search?

Search is really important. It is important that it should be as far as possible unbiased, neutral, apolitical. Trusted. But Google knows that its whole business is based on trust. It is greatly in its interests to run an impartial search service. 

I am a communist. I inherently trust socialised institutions above private ones. But we all know the implicit contract with Google.  I value search; for me, it is worth being exposed to the adverts that Google shows me, and sharing a bit of personal information. But, if we as a society choose, as the EU implies, not to accept that contract, how do we as a society want to pay for search?

Tuesday, 22 August 2017

CollabPRES: Local news for an Internet age

(This is an essay I wrote on December 30th, 2005; it's a dozen years old. Please bear this in mind when reading this; things on the Internet do change awfully fast. I'm republishing it now because it contains a lot of ideas I want to develop over the next few weeks)

The slow death of newsprint

Local newspapers have always depended heavily on members of the community, largely unpaid, writing content. As advertising increasingly migrates to other media and the economic environment for local newspapers gets tighter, this dependency on volunteer contributors can only grow.

At the same time, the major costs on local papers are printing and distribution. In the long run, local news must move to some form of electronic delivery; but for the present, a significant proportion of the readership is aging and technology-averse, and will continue to prefer flattened dead trees.

Approaches to local Internet news sites

I've been building systems to publish local news on the Internet for six years now. In that time, most local news media have developed some form of Internet presence. Almost without exception, these Internet news sites have been modelled (as the ones I've written have) on the traditional model of a local newspaper or magazine: an editor and some journalists have written all the content, and presented it, static and unalterable, before the waiting public.

I've become less and less convinced by this model. Newer Internet systems in other areas than local news have been exploiting the technology in much more interesting ways. And reading recent essays by other people involved in this game whom I respect, it seems they're thinking the same. So how can we improve this?


PRES is a web application I built six years ago to serve small news sites. It's reasonably good, stable and reliable, but there's nothing particularly special about PRES, and it stands here merely as an example of a software system for driving a relatively conventional Internet news site. It's just a little bit more sophisticated than a 'blog' engine.

PRES revolves around the notion of an editor – who can approve stories; authors – who can contribute stories; subscribers – who are automatically alerted, by email, to new stories in their areas of interest, and who may be able to contribute responses or comments to stories; and general readers, who simply read the stories on the Web. It organises stories into a hierarchy of 'categories', each of which may have different presentation. Within each category one article may be nominated by the editor as the 'lead article', always appearing at the top of the category page. Other articles are listed in reverse chronological order, the most recent first. Only eight 'non-lead' stories are by default shown in any category, so articles 'age out' of the easily navigated part of the website automatically as new articles are added.

PRES also offers flexible presentation and does a number of other useful news-site related things such as automatically generating syndication feeds, automatically integrating Google advertising, and providing NITF (News Industry Text Format) versions of all articles; but all in all it's nothing very special, and it's included here mainly to illustrate one model of providing news.

This model is 'top down'. The editor – a special role - determines what is news, by approving stories. The authors – another special role – collect and write the news. The rest of the community are merely consumers. The problem with this approach is that it requires a significant commitment of time from the editor, particularly, and the authors; and that it isn't particularly sensitive to user interest.


A 'wiki' is a collaborative user-edited web site. In a classic wiki there are no special roles; every reader is equal and has equal power to edit any part of the web site. This model has been astoundingly successful; very large current wiki projects include Wikipedia, which, at the time of writing, after only five years, has over 2 million articles in over 100 languages. The general quality of these articles is very high, and I have come to use Wikipedia to get a first overview of subjects of which I know little. It is, by far, more comprehensive and more up to date than any print encyclopedia; and it is so precisely because it makes it so easy for people who are knowledgeable about a particular subject to contribute both original articles and corrections to existing articles.

However, as with all systems, in this strength is its weakness. Wikipedia allows anyone to contribute; it allows anyone to contribute anonymously, or simply by creating an account; and in common with many other web sites it has no means of verifying the identity of the person behind any account. It treats all edits as being equal (with, very exceptionally, administrative overrides). Wikipedia depends, then, on the principle that people are by and large well motivated and honest; and given that most people are by and large well motivated and honest, this works reasonably well.

But some people are not well motivated or honest, and Wikipedia is very vulnerable to malice, sabotage and vandalism, and copes poorly with controversial topics. Particular cases involve the malicious posting of information the author knows to be untrue. In May of this year an anonymous user, since identified, edited an article about an elderly and respected American journalist to suggest that he had been involved in the assassinations of John F and Robert Kennedy. This went uncorrected for several months, and led to substantial controversy about Wikipedia in the US.

Similarly, many articles concern things about which people hold sharply different beliefs. A problem with this is that two groups of editors, with different beliefs, persistently change the article, see-sawing it between two very different texts. Wikipedia's response to this is to lock such articles, and to head them with a warning, 'the neutrality of this article is disputed'. An example here at the time of writing is the article about Abu Bakr, viewed by Sunni Muslims as the legitimate leader of the faithful after the death of Mohamed, and by Shia Muslims as a usurper.

However, these problems are not insurmountable, and, indeed, Wikipedia seems to be coping with them very well by developing its own etiquette and rules of civil society.

Finally, using a wiki is a little intimidating for the newcomer, since special formatting of the text is needed to make (e.g.) links work.

The Wiki model is beginning to be applied to news, not least by Wikipedia's sister project WikiNews. This doesn't seem to me yet to be working well (see 'Interview with Jimmy Wales' in further reading). Problems include that most contributors are reporting, second hand, information they have gleaned from other news sources; and that attempting to produce one global user-contributed news system is out of scale with the level of commitment yet available, and with the organising capabilities of the existing software.

This doesn't mean, of course, that a local NewsWiki could not be successful; indeed, I believe it could. Local news is by definition what's going on around people; local people are the first-hand sources. And it takes only a relatively small commitment from a relatively small group of people to put local news together.

Karma and Webs of Trust

The problem of who to trust as a contributor is, of course, not unique to a wiki; on the contrary, it has been best tackled so far, I believe, by the discussion system Slashcode, developed to power the discussion site Slashdot. Slashcode introduces two mechanisms for scoring user-contributed content which are potentially useful to a local news system. The first is 'karma', a general score of the quality of a user's contributions. Trusted users (i.e., to a first approximation, those with high 'karma') are, from time to time, given 'moderation points'. They can spend these points by 'moderating' contributions – marking them as more, or less, valuable. The author of a contribution that is marked as valuable is given an increment to his or her karma; the author of a contribution marked down loses karma. When a new contribution is posted, its initial score depends on the karma of its author. This automatic calculation of 'karma' is of course not essential to a karma-based system – karma points could simply be awarded by an administrative process – but it shows that automatic karma is possible.
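A minimal sketch of how such a karma mechanism might work follows; the scoring bounds and field names here are my own invention, not Slashcode's actual values.

```python
# A minimal sketch of a Slashcode-style karma mechanism. The scoring
# bounds and field names are invented for illustration.

class Contributor:
    def __init__(self, karma=0):
        self.karma = karma

def initial_score(author):
    # A new contribution starts higher for trusted authors, within bounds
    return max(-1, min(2, author.karma // 10))

def moderate(post, author, delta):
    """Spend one moderation point marking a post up (+1) or down (-1)."""
    post["score"] += delta
    author.karma += delta  # the author gains or loses karma accordingly

alice = Contributor(karma=25)
post = {"score": initial_score(alice)}  # starts at 2: Alice is trusted
moderate(post, alice, +1)
print(post["score"], alice.karma)  # 3 26
```

The feedback loop is the important part: moderation adjusts karma, and karma sets the starting score of the author's next contribution.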

The other mechanism is the 'web of trust'. Slashcode's implementation of the web of trust idea is fairly simple and basic: any user can make any other user a 'friend' or a 'foe', and can decide to modify the scores of contributions by friends, friends of friends, foes, and foes of friends. For example, I modify contributions by my 'friends' +3, which tends to bring them to the top of the listing so I'm likely to see them. I modify contributions by 'friends of friends' by +1, so I'm slightly more likely to see them. I modify contributions by 'foes' by -3, so I'm quite unlikely to see them.

Slashdot's web of trust, of course, only operates if a user elects to trust other people, only operates at two steps remove (i.e. friends and friends of friends, but not friends of friends of friends) and is not additive (i.e. If you're a friend of three friends of mine, you aren't any more trusted than if you're a friend of only one friend of mine). Also, I can't qualify the strength of my trust for another user: I can't either say “I trust Andrew 100%, but Bill only 70%”, or say “I trust Andrew 100% when he's talking about agriculture, but only 10% when he's talking about rural transport”.

So to take these issues in turn. There's no reason why there should not be a 'default' web of trust, either maintained by an administrator or maintained automatically. And similarly, there's no reason why an individual's trust relationships should not be maintained at least semi-automatically.

Secondly, trust relationships can be subject-specific, and thus webs of trust can be subject-specific. If Andrew is highly trusted on agriculture, and Andrew trusts Bill highly on agriculture, then it's highly likely that Bill is trustworthy on agriculture. But if Andrew is highly trusted on agriculture, and Andrew trusts Bill highly on cars, it doesn't necessarily imply that Bill is to be trusted on cars. If a news site is divided into subject-specific sections (as most are), it makes sense that the subjects for the trust relationships should be the same as for the sections.
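That subject-specific propagation rule can be sketched like this; the names, weights, data shapes and the one-remove limit are all illustrative.

```python
# A sketch of subject-specific trust propagation at one remove: trust in
# Bill on a subject is carried only through people I trust on that same
# subject. Names, weights and data shapes are illustrative.

def derived_trust(trust, me, person, subject):
    """trust: {truster: {(trustee, subject): weight between 0 and 1}}.
    Direct trust wins; otherwise multiply my trust in an intermediary by
    the intermediary's trust in the person, within the same subject."""
    mine = trust.get(me, {})
    best = mine.get((person, subject), 0.0)
    for (friend, subj), weight in mine.items():
        if subj == subject:
            via = trust.get(friend, {}).get((person, subject), 0.0)
            best = max(best, weight * via)
    return best

trust = {
    "me": {("andrew", "agriculture"): 0.9},
    "andrew": {("bill", "agriculture"): 0.8, ("bill", "cars"): 0.9},
}
print(round(derived_trust(trust, "me", "bill", "agriculture"), 2))  # 0.72
print(derived_trust(trust, "me", "bill", "cars"))  # 0.0 -- trust doesn't cross subjects
```

Multiplying the weights makes derived trust decay with distance, and taking the maximum over intermediaries means trust through several friends is at least as strong as through one (a crude answer to Slashdot's non-additivity).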

What is news?

So what is news? News is what is true, current and interesting. Specifically, it is what is interesting to your readers. Thus it is possible to tune the selection of content in a news site by treating page reads as votes, and giving more frequently read articles more priority (or, a slightly more sophisticated variant of the same idea, giving pages a score based on a formula computed from the number of reads and a 'rate this page' scoring input).

The problem with a simple voting algorithm is that if you prioritise your front page (or subcategory – 'inner' pages) by reads, then your top story is simply your most read story, and top stories will tend to lock in (since they are what a casual reader sees first). There has to be some mechanism to attract very new stories to the attention of readers, so that they can start to be voted on. And there has to be some mechanism to value more recent reads higher than older ones.
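One possible such formula, purely as a sketch: treat each read as a vote whose weight halves as it ages, so that old reads cannot lock a story to the top. The 24-hour half-life is an arbitrary illustrative choice.

```python
# A sketch scoring formula: each page read is a vote whose weight decays
# with age, so old hits can't lock a story to the top of the front page.
# The 24-hour half-life is an arbitrary illustrative choice.

def article_score(read_times, now, half_life=24.0):
    """read_times: the times (in hours) at which the article was read."""
    return sum(0.5 ** ((now - t) / half_life) for t in read_times)

# A story read 100 times three days ago now scores below one read 30 times today
old = article_score([0.0] * 100, now=72.0)    # 100 * 0.5^3 = 12.5
fresh = article_score([72.0] * 30, now=72.0)  # 30 * 0.5^0 = 30.0
print(old < fresh)  # True
```

Decay solves the lock-in half of the problem; the 'breaking news' list discussed next solves the other half, getting brand-new stories in front of readers at all.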

So your front page needs to comprise an ordered list of your currently highest scoring articles, and a list of your 'breaking news' articles – the most recently added to your system. How do you determine an ordering for these recent stories?

They could simply be the most recent N articles submitted. However, there is a risk that the system could be 'spammed' by one contributor submitting large numbers of essentially similar articles. Since, without sophisticated text analysis, it is difficult to determine automatically whether articles are 'essentially similar', it might be reasonable to suggest that only highly trusted contributors should be able to have more than one article in the 'breaking news' section at a time.

The next issue is, who should be able to contribute to, and who edit, stories. The wiki  experience suggests that the answer to both these things should be 'everyone', with the possible proviso that, to prevent sabotage and vandalism you should probably require that users identify themselves to the system before being allowed to contribute.

As Robin Miller says
“No matter how much I or any other reporter or editor may know about a subject, some of the readers know more. What's more, if you give those readers an easy way to contribute their knowledge to a story, they will.”
Consequently, creating and editing new stories should be easy, and available to everyone. Particularly with important, breaking stories, new information may be becoming available all the time, and some new information will become available to people who are not yet trusted contributors. How, then, do you prevent a less well informed but highly opinionated contributor overwriting an article by a highly trusted one?

'Show newer, less trusted versions'

In wikis it is normal to hold all the revisions of an article. What is novel in what I am suggesting here is that rather than by default showing the newest revision of an article, as wikis typically do, the system should by default show the newest revision by the most trusted contributor, according to the reader's web of trust for the subject area the article is in if (s)he has one, or else according to the default web of trust for that subject. If there are newer revisions in the system, a link should be shown entitled 'show newer, less trusted versions'. Also, when a new revision of a story is added to the system, email should automatically be sent to the most trusted previous contributor to the article according to the default web of trust, and to the sub-editor of the section if there is one, or else to the contributor(s) most trusted in that section.

All this means that casual users will always see the most trusted information, but that less casual users will be able to see breaking, not yet trusted edits, and that expert contributors will be alerted to new information so that they can (if they choose) 'endorse' the new revisions and thus make them trusted.
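As a sketch of the selection rule, assuming revisions are stored newest first and the reader has per-contributor trust weights for the subject (the data shapes and threshold are invented for illustration):

```python
# A sketch of 'show newer, less trusted versions': show by default the
# newest revision whose contributor clears the reader's trust threshold,
# and link to anything newer. Data shapes and threshold are illustrative.

def select_revision(revisions, trust, threshold=0.5):
    """revisions: newest-first list of (timestamp, contributor, text)."""
    for i, (when, who, text) in enumerate(revisions):
        if trust.get(who, 0.0) >= threshold:
            # anything earlier in the list is newer, but less trusted
            return text, revisions[:i]
    return revisions[-1][2], []  # fall back to the oldest revision

revisions = [
    (3, "newcomer", "Breaking, unverified edit"),
    (2, "expert", "Checked version"),
    (1, "expert", "First draft"),
]
trust = {"expert": 0.9, "newcomer": 0.1}
shown, newer = select_revision(revisions, trust)
print(shown)       # Checked version
print(len(newer))  # 1 -- one 'newer, less trusted version' to link to
```

When the expert later endorses the newcomer's edit, the newcomer's effective trust rises and the breaking revision becomes the default view.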

Maintaining the Web of Trust

Whenever a contributor endorses the contribution of another contributor that's a strong indication of trust. Of course, you may think that a particular contribution is valuable without thinking that its author is generally reliable. So your trust for another contributor should not simply be a measure of your recent endorsement of their work. Furthermore we need to provide simple mechanisms for people who are not highly ranked contributors to maintain their own personal web of trust.

Fortunately, if we're already thinking of a 'rate this page' control, HTML gives us the rather neat but rarely used image control, a rectangular image which returns the X, Y co-ordinates of where it was clicked. This could easily be used to construct a one-click control which scores 'more trusted/less trusted' on one axis, and 'more interesting/less interesting' on the other.
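Server-side, mapping the submitted click coordinates to the two ratings is trivial. A sketch follows; the control's 100x100 pixel size is an arbitrary assumption, and an HTML image control would submit the coordinates as, for example, rate.x and rate.y.

```python
# A sketch of the two-axis one-click control: map the x,y submitted by
# an HTML <input type="image"> to (trust, interest) deltas. The 100x100
# size is an arbitrary assumption.

def click_to_ratings(x, y, width=100, height=100):
    """Map a click to (trust, interest) deltas in -1.0..+1.0.
    Horizontal axis: less/more trusted; vertical: less/more interesting
    (remembering that image y coordinates grow downwards)."""
    trust = (x / width) * 2.0 - 1.0
    interest = 1.0 - (y / height) * 2.0
    return trust, interest

print(click_to_ratings(100, 0))  # (1.0, 1.0): more trusted, more interesting
print(click_to_ratings(0, 100))  # (-1.0, -1.0): less trusted, less interesting
```

A single click thus records both judgements at once, which matters for casual readers who will never fill in a two-field form.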

Design of CollabPRES

CollabPRES is a proposal for a completely new version of PRES with some of the features of a wiki and an advanced web-of-trust system. There will still be one privileged role: an Administrator will be able to create and manage categories (sections), and, in exceptional circumstances, to remove articles and to remove privileges from other users. An article will not exist as a record in itself but as a collection of revisions. Each revision will be tagged with its creator (a contributor) and with an arbitrary number of endorsers (also contributors). In order to submit or edit an article, or to record an opinion of its trustworthiness, a contributor must first log in and identify themselves to the system. Contributors will not be first-class users authenticated against the RDBMS but second-class users authenticated against the application. There will probably not be a threaded discussion system: since the article itself is editable, a separate mechanism seems unnecessary.

Whether contributors are by default allowed to upload photographs will be an administrative decision for the site administrator. Where contributors are not by default permitted to upload images, the administrator will be able to grant that privilege to particular contributors.

In order to make it easier for unsophisticated users to add and edit stories, it will be possible to upload a pre-prepared text, HTML, OpenOffice, or (ideally, if possible) MS Word file as an alternative to editing text in an HTML textarea control.

To be successful, CollabPRES must have means of integrating both local and national advertising into the output. At present this paper does not address that need.

Finally, there must still be an interaction between the website and the printed page, because many of the consumers of local news still want hard copy, and will do for at least some years to come.

Whereas in most current local papers the website is at best an adjunct to the printed paper, CollabPRES turns that on its head by generating the layout of the printed paper automatically from the content currently on the website. At some point on press day, the system will use XSL to transform CollabPRES's native XML formats into PostScript, PDF, or whatever format the paper's desktop publishing software uses, producing the full content of the paper ready to be printed to film and exposed onto the litho plate. If the transform has been set up correctly for the paper's house style, there should be no need for any human intervention at all.
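Producing real PostScript or PDF needs an XSLT engine and a stylesheet matched to the paper's house style; as a stdlib-only sketch (the element names `paper`, `article`, `title` and `body` are invented here, not an actual CollabPRES schema), the shape of the press-day transform looks something like this, emitting plain text where the real pipeline would emit a print format:

```python
import xml.etree.ElementTree as ET

def layout_paper(xml_text):
    """Walk the article elements in page order and emit a plain-text
    galley; the real system would apply an XSL stylesheet to the same
    XML to produce PostScript or PDF instead."""
    root = ET.fromstring(xml_text)
    galley = []
    for article in root.findall("article"):
        galley.append((article.findtext("title") or "").upper())
        galley.append((article.findtext("body") or "").strip())
        galley.append("")  # rule between stories
    return "\n".join(galley)
```

The point of driving everything from one XML source is that the website and the printed page can never disagree: both are views of the same content.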

Obviously editors may not want to be muscled out of this process and may still want to have the option of some final manual adjustment of layout; but that should no longer be the role of the editor of a local paper in a CollabPRES world. Rather, the role of the editor must be to go out and recruit, encourage and advise volunteer contributors, cover (or employ reporters to cover) those stories which no volunteers are interested in, and monitor the quality of contributions to the system, being the contributor of last resort, automatically 100% trusted, who may tidy up any article.

CollabPRES and the local news enterprise

Technology is not a business plan. Technology is just technology. But technology can support a business plan. Local news media need two things, now. They need to lower their costs. And they need to engage their communities. CollabPRES is designed to support these needs. It provides a mechanism for offloading much of the gathering and authoring of news to community volunteers. It automates much of the editing and prioritisation of news. But it implies a whole new way of working for people in the industry, and the issue of streamlining the flow of advertising from the locality and from national campaigns into the system still needs to be addressed.


  1. PRES - of historical interest only, now.
  2. Wikipedia 
  3. WikiNews (see also interview with Jimmy Wales, founder of WikiMedia)
  4. Robin Miller's essay 'A Recipe for Newspaper Survival in the Internet Age'

Friday, 30 June 2017

Signposts, not weathercocks

Last night, Jeremy Corbyn whipped Labour MPs not to vote for continued membership of the Single Market. This is why he was wrong.

Corbyn enthused a lot of young people to vote at the last election. People who don't usually vote, many of them for the first time.

They voted at least partially against the Tories' vision of hard Brexit, which they understand will wreck their futures.

They voted at least partly because Corbyn presented himself as an authentic politician, a man of principle.

And yet, here Corbyn is proving himself to be just another tired, cynical, political game player. If he has a principled objection to free movement of labour, he should have said so at the general election; should have said 'I support Theresa.'  He didn't. He said he would “push to maintain full access to the European single market.”

And now? Now he sacks MPs who voted for what he said he would deliver. So why does this matter?

Folk don't vote because they think it won't make a difference;

Folk don't vote  because they think politicians lie;

Folk don't vote because they've had their hopes raised before;

Folk don't vote because they've had those hopes dashed.

No politician can deliver on all the dreams they inspire; we've seen leaders come in on a tide of hope and leave with a soiled legacy.

Blair with foreign war,

Obama with drone strikes,

Sturgeon with education and dither.

But Corbyn has a special responsibility to those young people who voted for him. If he leaves them cynical, they may not vote again.

If I were cynical I'd be pleased about this. With the SNP rudderless, a resurgent Labour party is the last thing Scotland needs. Both Scotland and England need strong parties of the left with clear platforms. We need leaders who say what they mean and deliver. We need politicians who will argue strongly for what they believe in, not what the latest focus group says.

We need, as the great Tony Benn said, signposts, not weathercocks.

Corbyn made a contract with his young voters. He said, I'm an honest, principled, straightforward person, I will protect your rights. He needs to deliver, or leave the stage.

Nicola Sturgeon, this applies to you also.

Friday, 16 June 2017

These boots aren't made for walking

Boots: L to R Mammut (newest),  Scarpa, Loveson
My lifestyle is probably tougher on boots than most people's; particularly in winter, but actually all year round. Consequently I've had (and worn out) a lot of pairs of boots. Three years ago I went into Tiso's mountaineering shop on Buchanan Street and asked for their strongest pair; they sold me a pair of Mammut boots, which I've worn ever since. But they're reaching the end of their life and it's time to decide what to wear next.

So I've been thinking a lot about why boots fail, and critically examining those old boots which I've not thrown away. The pair I wore before the Mammuts were by Scarpa, similar to these. They also lasted about three years. Prior to that I had a succession of pairs of Timberland boots much like this, which were light and comfortable (and much cheaper) but which wore out in nine months to a year. I've a much older pair of Loveson boots - bought more than thirty years ago while I was an undergraduate, and worn regularly for fifteen of those years; and still with some life in them.

So what causes these boots to fail? What do you look for in a boot which makes for durability?

Mammut boots: mainly good condition after three years wear,
but Vibram sole has delaminated and is falling apart.
My Mammuts are failing principally because the soles - by Vibram - are delaminating and falling apart. Consequently the boots are no longer stable, which is both uncomfortable and potentially risky on broken ground.

The uppers are also beginning to show cracks across just below the end of the tongue, but that's at least partly because once the soles started to come to bits I stopped taking care of the uppers.

The splendidly deep welts are still adhering to the upper all the way round; the boots are still waterproof and are still extremely comfortable; and the original laces have only recently worn through.

Vibram soles seem widely used on high end boots, but in recent years they've become much more complicated with many more layers, and these layers don't seem to be stuck together very well. Mind you; these may be a poor batch. But I think that Scottish winter conditions - always wet, rarely frosty, and with sharp rocks and gritty soils - are pretty tough on these over-complex multi-layer constructions where a funky, high-tech appearance seems to take priority over function. So for my next boots, I'll seek to avoid Vibram.

Left hand Scarpa boot, showing
damage to the upper.
The Scarpas were bought about six years ago, and were worn regularly for three years. They're mostly good, but have failed on the inner side of the left boot. The leather/Goretex sandwich cracked through just above the welt, allowing water and mud into the boot. The welt - much lower than on the Mammuts - also didn't adhere so well to the upper, and is loose in several places. Apart from that, they're still comfortable, and have some dry-weather wear left in them. Their soles are also Vibram-branded, but if they have a complex multi-layer construction it's well hidden and well protected. There's no damage to the sole, apart from acceptable wear.

Both the Mammuts and the Scarpas were made in Romania. The quality of construction of both pairs - despite the issues I've described above - is generally very good.

I haven't kept any of the Timberlands, because they weren't worth it. The thin leather fairly quickly wears through into holes where the toe flexes, and as I recall there were also often problems with the welts and with the soles, but they're relatively cheap and you get what you pay for.

Loveson boots, bought about 1984 and still good.
The Loveson boots are a quality apart. After many years of hard wear, they still have life in them. The tread on the soles is completely worn away, and they've always been a little loose in the heel for my feet, but nothing has failed. These boots don't have a Goretex membrane, but with dubbin they're adequately waterproof.

Unfortunately it seems Loveson no longer make walking boots, but these have taken a lot more wear than either the Scarpas or the Mammuts.

As a temporary measure I've bought a pair of Chinese-made Karrimor boots simply because they were cheap in a sale - but they're not very good and I doubt they'll last; they're also rather stiff and uncomfortable to walk in, and too narrow for my feet. I'm looking for another pair of quality boots. I'd be inclined to buy another pair of Scarpas, but this year's models seem to have one of those over-complicated laminated Vibram soles.

The deep welt on the Mammuts certainly helps protect the sides of the upper from cuts and wear; they've been good and comfortable boots. But both Scarpa and Mammut are expensive; if they do last three years then they cost about the same per month as Timberlands. I want a boot that can survive longer, but I don't know where I'm going to find it.

Wednesday, 14 June 2017

How to introduce yourself

A quick guide to how to introduce yourself to people from other EU countries. This is the first step towards a t-shirt design, if anyone is interested. Probably white on saltire blue.

BG: Аз съм шотландски, а не британски
CZ: Jsem skotský, ne britský
DA: Jeg er skotsk, ikke britisk
DE: Ich bin schottisch, nicht britisch
EE: Ma olen šotlane, mitte britt
FI: Olen skotlantilainen, ei brittiläinen
FR: Je suis écossais, pas britannique
GR: Είμαι Σκωτσέζος, όχι Βρετανός
HR: Ja sam škotski, a ne britanski
HU: Skót vagyok, nem brit
IE: Tá mé hAlban, ní Breataine (or 'Is Albanach mé, ní Briotanach mé'?)
IT: Sono scozzese, non britannico
LI: Esu škotų, o ne britų
LV: Es esmu Skotijas, nevis Britu
MT: Skoċċiżi, mhux Brittaniċi
NE: Ik ben Schots, niet Brits
PO: Jestem szkocka, a nie brytyjska
PT: Eu sou escocês, não britânico
RO: Sunt scoțian, nu britanic
SE: Jag är skotsk, inte brittisk
SI: Sem Škotska, ni britanski
SK: Som škótsky, nie britský
SP: Soy escocés, no británico

Friday, 9 June 2017

The end of May. And now?

I am become death - portrait of Theresa May by
Stewart Bremner
There was no possible good outcome from this general election; the outcome we've got is far from the worst we could have had. But it's time for the left in Scotland in general - and myself in particular - to reassess, and work out how we go forward.

First, let's be clear. For me at least, independence for Scotland is not an end in itself: it's a means towards achieving a more just, more equal and more peaceful world. If other means would achieve the same end more quickly or more certainly, independence would become much less important.

Secondly, independence is not the key political issue of our age. The key political issue of our age is ecocide - by which I mean global warming, certainly, but also all the other insults to the planet: our use of poisons; our dumping of waste into the oceans; our manufacture and use of non-biodegradable materials; our unsustainable depletion of topsoil; our deforestation; our destruction of biodiversity. These insults, taken together - our destruction of the biosphere of the planet on which we live - constitute the key political issue of our time.

An independent Scotland in an uninhabitable world is not a win.

But thirdly, independence isn't even the second most important issue of our time. Injustice is far more important; so is the arms trade, which is tightly bound up with injustice. An unjust world can never be a peaceful world; an unjust nation can never be a happy nation. We must radically redistribute both wealth and real power at all levels - locally, nationally and across the world.

An independent Scotland in which six hunner own half the land is not a win.

Noo Scotland's free! Watch in amaze
The Queen still in her palace stays
Across the sky the rockets blaze
The bankers gang their greedy ways
An ilka working karl still pays
Tae line the pokes o lairds who laze
On Cote d'Azure, Bahama keys
It's time tae rise as levellers again

So where are we now?

My take on it is (in Scotland) this is Corbyn's win and Corbyn's alone. He outflanked the SNP on the left, where elections, in Scotland, are won. But of course he also outflanks Scottish Labour on the left; we've elected MPs who are unlikely to be loyal to him. And if Labour don't now unite behind Corbyn, there is no silver lining from this night.

The SNP must offer Corbyn solid support, but must require from him an acceptance of at least direct Scottish representation in the EU negotiations, and a commitment to the single market. However, unless Sinn Fein come to Westminster, the game's pretty much a bogey. Neither Labour nor Tories can really govern.

Will Sinn Fein take their seats? I hae my doubts. It's critically in their interest - in the interest of the whole island of Ireland - for them to take them. A Tory/Unionist coalition will damage Ireland north and south, and will set back the possibility of unity a long way. A hard border across Ireland would be a very, very bad thing.

If Sinn Fein take their seats then a progressive majority at Westminster is possible. If there is a progressive majority at Westminster, there will be a much softer (and, frankly, an enormously more competent) Brexit negotiation; and consequently, there will be no hard border in Ireland. Against this is the republicans' strong and principled stance of not taking an oath of loyalty to the Queen. I agree with their stance. In a democracy, taking an oath of loyalty to a monarch is anathema. And as I wrote just this week, we do have to stop making compromises with things we know to be evil.

So I don't know which way Sinn Fein will jump. If they go to Westminster, a progressive alliance is (just) possible, but it would be very difficult.

A Tory/Unionist government is more likely. If Sinn Fein don't take their seats, it's a racing certainty. This is better than a Tory landslide, but not much better; it makes the Brexit negotiations more or less impossible. There is no settled will of the British people on Europe; the Brexit referendum was won on a slender majority of a low-turnout poll.

There are two ways forward for the Scottish left from this: one is to get behind Corbyn and try to build socialism in Britain. I have no faith in that being possible. I have no faith in the ability of Labour to unite behind the left. Blairites are too ambitious.

The alternative is for us to make a stronger push for independence, either by pushing the SNP left or else by working outside parliamentary politics to build pressure in the country. That's my preference but I've not much confidence.

The SNP lost this election. Yes, I know that in absolute terms they won; yes, I know that First Past the Post hugely magnifies small swings. But a setback of this magnitude stands as a relative loss; and they deserved to lose. They neither tried to make a case for independence, nor adequately defended their considerable achievements in office. They offered no clear radical programme. Triangulating to a mythical centre ground will not work: there is no majority in the centre. The majority for independence - if it can be gathered - is on the left.

Of course, the SNP faced a relentlessly hostile media, including the BBC, who deliberately conflated devolved with reserved issues and generally muddied the water. But that does not detract from the key point that the SNP did not offer a clear, radical vision that Scotland could unite behind.

I continue to believe that a press owned by an offshore cartel is a significant problem. First Past the Post is obviously a problem.  It's hard to believe, though, that Westminster will ever be able to resolve either of these. A political class running scared of an over-mighty media lack the courage to rein it in; a political establishment elected on First Past the Post are not motivated to reform it.

Sclerotic Westminster, with all its archaic and anti-democratic 'traditions', remains the central problem. We have to get rid of it.

Creative Commons Licence
The fool on the hill by Simon Brooke is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License