Tuesday 12 December 2017

On Catalunya, and Freedom

I don't know much about Catalunya. I've never been there; I have no friends from there. I've read, obviously, books coming out of the experience of the Spanish Civil War - George Orwell, Laurie Lee, and anarcho-syndicalist texts: Murray Bookchin, Robert Alexander and others. But that's all a long time ago; and my interest was in political theory and in how you build a good society, not particularly in the place itself. I don't know much about Catalunya.

But that is beside the point. The point, in Catalunya now, is the right of people in a place to define themselves as a community and a polis, and to achieve self determination for that polis. Chapter One, Article One of the Charter of the United Nations - which all members have signed up to - asserts the right to self determination. So does Article One of the International Covenant on Civil and Political Rights, of which Spain is also a signatory. It is a basic, fundamental right.

And it is a right we in Scotland claim. It is core to our Claim of Right. It is core to our right to choose whether we wish Scotland to become a nation again.

In Catalunya, at this moment, four people are in prison for peacefully asserting this right. They are:

Jordi Cuixart i Navarro is a civil society activist, not a politician. He leads a cultural organisation, Omnium Cultural, which promotes the Catalan language. He is, effectively, equivalent to the chair of An Comunn Gàidhealach. He's accused of organising passive resistance to the attempts by the Guardia Civil - the Spanish national police - to destroy the infrastructure for the first of October referendum on Catalan independence.

His address in prison is:
Jordi Cuixart i Navarro
Centro Penitenciario
Madrid V
Ctra. M-609, km 3,5
28791 Soto del Real
Madrid, Spain

Jordi Sànchez Picanyol is President of the Assemblea Nacional Catalana, which might be seen as roughly equivalent to the Scottish Independence Convention; so again, although he is a political activist, he's not a party politician. His closest Scottish equivalent might be Lesley Riddoch. Like Jordi Cuixart, he's accused of organising passive resistance to the attempts by the Guardia Civil to destroy the infrastructure for the referendum.

His address in prison is:
Jordi Sànchez Picanyol
Centro Penitenciario
Madrid V
Ctra. M-609, km 3,5
28791 Soto del Real
Madrid, Spain

Oriol Junqueras i Vies is a historian by profession, but he's an elected politician and serves as Vice-President of the Government of Catalonia; his nearest Scottish equivalent might be John Swinney. He leads the Republican Left party in the Catalan Parliament, and led the 'Yes Coalition' campaign in the last parliamentary elections - so he's also a bit like Blair Jenkins. He's accused of rebellion and sedition for his responsibility for the unilateral declaration of Catalan independence.

His address in prison is:
Oriol Junqueras i Vies
Centro Penitenciario
Madrid VII
Ctra. M-241, km 5.750
28595 Estremera
Madrid, Spain

Joaquim Forn Chiariello is a lawyer turned liberal politician. He leads the Convergència Democràtica de Catalunya party, and is Minister of the Interior in the Catalan Government. In Scottish terms he's somewhere between Willie Rennie and Michael Matheson. He's in prison largely because his ministry was responsible for the Mossos d'Esquadra, the Catalan regional police (equivalent to Police Scotland), and for the Catalan firefighters, who together defended voters from the violence of the Guardia Civil on the first of October.

His address in prison is:
Joaquim Forn Chiariello
Centro Penitenciario
Madrid VII
Ctra. M-241, km 5.750
28595 Estremera
Madrid, Spain

So, four men. None of them revolutionaries. None of them have espoused violence. Each serving in roles which are quite familiar in Scotland. And each, imprisoned for doing so.

If the Westminster Government sent the Metropolitan Police to smash up the baby boxes that the Scottish Government are distributing, wouldn't you hope that someone in Scottish civil society - Jonathon Shafi, perhaps, or Lesley Riddoch - would organise a peaceful demonstration to prevent them?

If the Holyrood parliament votes to organise a second independence referendum, and, if that referendum returns a strong 'Yes' vote, votes again to declare Scotland independent, wouldn't you hope that the Scottish Government would act on that mandate?

Of course you would.

And let's put to bed the claim that the referendum was 'illegal'. The right to self determination is guaranteed by binding international agreements to which Spain is a voluntary signatory. Therefore it is legal in Spain to vote for self determination. Self determination cannot be legally protected if all the means to express self determination are forbidden. The referendum cannot have been illegal. As Spain claims that its national law trumps Catalunya's regional law, so international law trumps Spain's national law.

I don't know much about Catalunya. I don't know whether Catalunya should be independent. That's none of my business; it's for the people of Catalunya to decide. But I do know this: if Scotland will not stand in solidarity with the Catalans, if we will not stand up to assert Catalunya's right to self determination, why should anyone stand up for ours?

Llibertat Presos Polítics. They aren't just Catalunya's political prisoners; they're also ours.

Saturday 2 December 2017

Wildwood: Development

[What follows is the text of the 'Development' chapter - chapter five - of my thesis as it existed in June 1988, when I lost access to the machine on which the work was done. This is an unfinished chapter of an unfinished thesis; I am posting it because people have expressed interest in the explanation game. This is the last form of this chapter, but it isn't the end of my thinking on the matter, and if I were rewriting it now I would add further moves to the set to allow the agents to argue not just about assumptions, but about the merit of the authorities from whom rules of inference are drawn. 

A modern implementation would also, of course, have access to the semantic web in order to draw in additional sources of knowledge beyond that stored in a local knowledge base.]

Development

Explanation is a social behaviour

Games are social behaviours which can be modelled

Explanation can be viewed as a game playing activity

One of the models we frequently use in real life situations where a decision has to be made on the basis of uncertain or conflicting evidence is the dialectic model. In this, one authority proposes a decision, giving an argument which supports it; another authority brings forward objections to the argument, and may propose an alternative decision. This method may continue recursively until there is no further argument which can be advanced.

This is the model which underlies both academic and parliamentary debate, and the proceedings of the law courts. The model supports a ‘decision support’ conception of the Expert System’s role, in addition to a ‘decision making’ one; in the decision support conception, the user is placed in a position analogous to that of a jury, considering the arguments presented by the advocates and making a decision on the basis of them.

We can view this dialectic approach to decision making as a game, with a fixed set of legitimate moves.

Hintikka’s game theoretic logic

This approach seems very promising. It may tie in with game-theoretic logics such as those of Ehrenfeucht [Ehrenfeucht 61] and Hintikka [Hintikka & Kulas, 83 and passim]. Of course, these logics were strictly monotonic. Problem areas where monotonic logics are adequate are of limited interest, and unlikely to give rise to interesting problems of explanation; but it should be possible to develop a provable semantics for a non-monotonic extension. The decision procedure for a game-theoretic logic should not raise any particular difficulties; the ‘games’ are zero sum, under complete information.

However, the adoption of a game theoretic explanation mechanism does not commit us to adopting a game theoretic logic; so (although interesting) this line will be pursued only if it proves easy.

The object of an explanation game

If explanation is to be seen as a game, what is to be its objective? If we were to take a positivist approach, we might say that the purpose of explanation was to convey understanding; then the game would be a co-operative one, with the objective of both players being to achieve this end.
But since we have rejected this position, the objective of an explanation must be to persuade.

Furthermore, if we accept the argument advanced in Chapter 3 above that explanation is hegemonistic, the explainee must be seen to have an objective in resisting explanation. So now we have a clear idea of what the objective of an explanation game will be: the explainer wins if the explainee is brought to a point of no longer being able to produce a rational [1] reason for resisting the explanation. Otherwise, if the explainee remains unconvinced when all the supporting arguments the explainer can bring to bear have been advanced, the explainee has won.

Legitimate moves in an explanation game

A possible set of legitimate moves in an explanation game might be as follows (a sketch of the set as a data structure is given after the list):

PRESENT

Present a case, supported by an argument, which may include one or more assumptions.
Generally, in a non-monotonic system, we are dealing with incomplete information (and, indeed, this is generally the case in ‘real life’). Consequently, any conclusion is likely to be based upon one or more unsupported assumptions.
[starting move]

DOUBT

Challenge one or more of the assumptions.
[basic response to PRESENT, REBUT or COUNTER-EXAMPLE]

DOUBT-IRRELEVANT

Claim that, even if the assumption were false, it would make no difference to the decision.
[blocking response to DOUBT]

ASSUMPTION-PROBABLE

Claim that the assumption is probable based on prior experience, or, for example, statistical data.
[weaker response to DOUBT]

REBUT

This is essentially the same as PRESENT, but of an argument based on the opposite assumption to that challenged in the preceding DOUBT.
[response to a PASS following a DOUBT]

COUNTER-EXAMPLE

Present a known case similar to the current one, where the predicate of the assumption which has been challenged holds the opposite value.
[response to ASSUMPTION-PROBABLE]

PASS

A null move in response to any other move. Forces the other player to move. In some sense this might be seen as equivalent to saying ‘so what?’: the opponent has made a move, but there is nothing further which the player wants to advance at this stage. This move is inherently dangerous, since the game ends when the two players pass in successive moves.
[weakest response to anything]
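
In modern terms, the move set and its legality relations might be sketched like this (a minimal illustration in Python; the move names are those above, but the representation and helper function are not part of the thesis):

LEGAL_RESPONSES = {
    # move: the moves it may legally respond to, per the bracketed annotations above
    "PRESENT": [],                                    # starting move only
    "DOUBT": ["PRESENT", "REBUT", "COUNTER-EXAMPLE"],
    "DOUBT-IRRELEVANT": ["DOUBT"],                    # blocking response
    "ASSUMPTION-PROBABLE": ["DOUBT"],                 # weaker response
    "REBUT": ["PASS"],                                # strictly, a PASS following a DOUBT
    "COUNTER-EXAMPLE": ["ASSUMPTION-PROBABLE"],
    "PASS": ["PRESENT", "DOUBT", "DOUBT-IRRELEVANT",  # weakest response to anything
             "ASSUMPTION-PROBABLE", "REBUT", "COUNTER-EXAMPLE", "PASS"],
}

def legal_moves(last_move=None):
    """Moves which may legally be played in reply to last_move."""
    if last_move is None:
        return ["PRESENT"]
    return [move for move, answers in LEGAL_RESPONSES.items() if last_move in answers]

A fuller representation would also carry the context REBUT needs (that the PASS it answers followed a DOUBT), but this is enough to drive the sketches which follow.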

Implementing a game theoretic approach to explanation

Playing the explanation game

The game-playing algorithm required to implement an explanation system need not be sophisticated; indeed, it must not be too sophisticated, at least as a game player. Consider: a good game-playing algorithm looks ahead to calculate what the probable responses to the moves it makes will be. If it sees that one possible move leads to a dead end, it will not make that move.
But in generating an explanation, we need to explore any path whose conclusion is not immediately obvious. So either the game-playing algorithm should not look ahead more than, perhaps, one ply; or else, if it does so, then when it sees a closed path it should print out a message of the form:
Critic: Oh yes, I can see that even if the assumption that Carol hasTwoLegs is false it does not affect the assumption that Carol canSwim.
Initially, however, I propose to implement a game-playing algorithm which simply makes the strongest move available to the player, based on a scoring scheme which does not look ahead. The game will proceed something like tennis; the player who last made a PRESENT or REBUT move will have the initiative, the ‘serve’, as it were, and need only defend to win. The other player must try to break this serve, by forcing a DOUBT… PASS sequence, which allows a REBUT play.
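
A minimal sketch of such a player might look like the following; the strength values are illustrative guesses, since no scoring scheme is fixed here, and only their ordering matters. 'Available' means the legal moves the player can actually support from its knowledge base.

MOVE_STRENGTH = {
    # Illustrative values only: the scoring scheme is not specified above.
    "PRESENT": 5,
    "REBUT": 5,
    "COUNTER-EXAMPLE": 4,
    "DOUBT": 3,
    "DOUBT-IRRELEVANT": 2,
    "ASSUMPTION-PROBABLE": 1,
    "PASS": 0,
}

def choose_move(available):
    """Pick the strongest move currently available; no look-ahead at all."""
    return max(available, key=MOVE_STRENGTH.get) if available else "PASS"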

Dialectic explanation: the two-player game

With this schema, we can mechanise a dialectic about a case. The game is played by the machine against itself; ‘agents’ in the machine play moves in turn until both sequentially pass, or until the user interrupts. Each move produces a natural language statement on the display. Ideally, the user could interrupt at any time, either to request further justification for a move or to halt the process.
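
A sketch of that loop, on the assumption that each agent is simply a function from the game history so far to its next move; the Move record and the 'serve'-based winner rule are one reading of the description above, not a specification.

from dataclasses import dataclass

@dataclass
class Move:
    player: str   # "Consultant" or "Critic"
    kind: str     # one of the move names listed earlier
    text: str     # the natural language statement shown on the display

def play(consultant, critic, interrupted=lambda: False):
    """Play the two agents against each other until both PASS in succession,
    or until the user interrupts."""
    history, agents, turn = [], (consultant, critic), 0
    while not interrupted():
        move = agents[turn % 2](history)
        print(f"{move.player}: {move.text}")
        history.append(move)
        if len(history) > 1 and history[-1].kind == history[-2].kind == "PASS":
            break
        turn += 1
    return history

def winner(history):
    """The player who last PRESENTed or REBUTted holds the 'serve' and,
    having only had to defend, wins when the game closes."""
    for move in reversed(history):
        if move.kind in ("PRESENT", "REBUT"):
            return move.player
    return None

On this reading, the explainer wins if it still holds the serve when the game closes, which matches the informal win condition given earlier.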

As a (simple) example of what might be produced, here is some sample output from a very crude prototype system [2]:

User: (← Carol Explain CanSwim)
Consultant: What is the value of CanSwim for Carol?
User: Don’t know
Consultant: I found that CanSwim was true of Carol by Default using the following assumptions:
I assumed that CanSwim was true of Carol
My reasons are
CanSwim is true of most Mammals;
Carol is a Mammal;
Therefore it is probable that CanSwim is true of Carol.
Critic: The assumption that CanSwim is true of Carol is unsafe:
Henry is also of type(s) (Person) but CanSwim is false of Henry.

This example, which is from a system considerably less developed than the ideas given above, shows ‘Consultant’ PRESENTing a case based on a single assumption, and ‘Critic’ responding with a COUNTER-EXAMPLE – a move which would not be legal under the scheme given. However, this should give some indication of how the idea might be implemented.

The simple model described above has not achieved all we want, however. The machine, in the form of the agent ‘Consultant’ has attempted to explain to itself, wearing its ‘Critic’ hat. The user’s role has simply been that of passive observer. The first enhancement would be to give the user control of the ‘Critic’; but because the user cannot have the same view of the knowledge base as the machine, the ‘Critic’ will have to guide the user, firstly by showing the user the legal moves available, and secondly by finding supplementary information to support these moves.

Explanation as conversation – a three player game

A better design still might be to allow the machine to carry on its dialogue as before, but to allow the user to interrupt and challenge either of the agents. It seems probable that the most interesting moves the user could make would be DOUBT, and also COUNTER-EXAMPLE, where the user could present an example case not previously known to the machine. Now the user can sit back and watch, if the conversation brings out points which seem relevant; but equally, the user can intervene to steer the conversation.

These ideas have, needless to say, yet to be implemented.

[1] ‘Rational’, here, is of course a fix. In an explanation situation involving two human beings, the explainee might continue to resist even if there is no rational reason for doing so. This situation would be non-deterministic, though, and too difficult for me to deal with.

[2] This is output from Wildwood version Arden, my experimental prototype object-based inference mechanism. The game-playing model it uses is a predecessor of, and differs from, the one given above.

Friday 1 December 2017

Before Wildwood: the Arboretum engine

(This piece was written as an email in response to a question by Chas Emerick about whether it is possible to explain inference to ordinary people. It describes the Arboretum engine, which started me thinking about how you make a computable logic which people can understand.)

Right, let's try to set this out in some sensible detail. I'll start by giving you the bones of the story, and then move on to talk about the things you're actually interested in. I'm copying Peter Mott in so he can correct any misrememberings of mine (I have long term severe depression, and it really messes with your memory).

I did a minor course in logic and metaphysics when I was an undergrad. When I graduated in 1986, my logic tutor, Peter Mott, who was entirely deaf, was invited to join an AI project - the "Alvey DHSS Large Demonstrator" - essentially as the formal logic input to the process. He invited me to join him as his research associate, mainly I think because, having been in his seminars for two years, I was pretty fluent at discussing technicalities of logic in sign language, and could interpret for him. I was never the world's most brilliant logician, but I could more or less do it.

Peter was certainly responsible for the logic of the "D-Engine", and I think wrote the first prototype in Acornsoft Lisp himself. The D-Engine logic is a Popperian propositional logic, and the goal of the engine at any stage is to seek to falsify what it currently believes. Again I think it was Peter's insight that this would map naturally onto a systematic practice for interviewing domain experts, but, being deaf, he couldn't develop this interviewing practice himself.

A rule in D-Engine logic is naturally a tree, which we called a "D-Tree". Its nodes are propositions "coloured" with truth values, and edges essentially represent "unless" relationships. Because D-Engine logic naturally argues from the general to the particular, it's possible to recover extremely good natural language explanations of decisions from simple boilerplate fragments on the nodes.
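
In modern terms, that structure might be sketched as follows; the field names and the evaluation order are illustrative (the original was Lisp on Xerox workstations), but they capture the idea of a default conclusion overturned by successively more particular "unless" conditions:

from dataclasses import dataclass, field

@dataclass
class DNode:
    proposition: str                              # e.g. "Widow"
    colour: bool                                  # truth value asserted for the conclusion
    boilerplate: str                              # fragment used to build the explanation
    unless: list = field(default_factory=list)    # child DNodes, the "unless" edges

def evaluate(node, facts, explanation):
    """Argue from the general to the particular: the node's colour stands
    unless some 'unless' child whose proposition holds in the facts overturns it."""
    explanation.append(node.boilerplate)
    for child in node.unless:
        if facts.get(child.proposition, False):
            return evaluate(child, facts, explanation)
    return node.colour

The widows' benefit interview described below builds exactly such a tree; a version of it in this notation is given after the worked example.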

There's a paper explaining all this: Mott, P. & Brooke, S., 'A Graphical Inference Mechanism', Expert Systems, May 1987, vol. 4, no. 2, pp. 106-117.

(Golly. I've just - for the first time ever - done a Google search for that paper, to see if it was online anywhere. It isn't, but it's cited in lots of places - including by Marc Eisenstadt! I'm pleasantly surprised.)

I built the second prototype, Arboretum, in Interlisp on Xerox 1108 Dandelion and 1186 Daybreak machines, and that was the one that was used in the project; we demonstrated that it was possible to automate decision making on a piece of legislation (eligibility for widows' allowance), and, critically, to automatically generate letters to claimants which claimants could understand.

At the end of the project, Peter and I and a number of others of Peter's ex-students spun out a company to try to commercialise D-Engine logic. As (again) Peter was deaf, and as he intended to continue as a lecturer, he chose not to head up the company, and I was elected Chief Exec; I was also the technical lead. We knew we couldn't commercialise our work on the Xerox workstations, because they were way too specialist and expensive to sell into industry. I had a look at Windows, which was at version 2 at that stage, and it was just a horrible mess. We couldn't afford Macintosh (and, in any case, the M68000 Macs of those days were really not very powerful); so we decided to write the new version on Acorn Archimedes ARM machines, on the basis that I already had one. The plan was always to port it to UN*X later.

There was a good Lisp compiler for the Archimedes machines, but it was Portable Standard Lisp which was going out of favour (Common Lisp had just been standardised). It suffered from another problem which was for us more serious: the garbage collector moved things in memory, and the Archimedes window manager held fixed pointers to things; so, while I did a lot of work on this and got somewhere, the solution wasn't going to be portable.

So we decided to rewrite the system in C (which at that stage none of us knew). We also decided to do something else ambitious: we decided that because our new version - called KnacqTools for "Knowledge Acquisition Tools" - would require a graphics workstation, and such machines were then uncommon in industry and expensive, we should compile the knowledge once elicited down to run on one of a number of rule engines that were then available for PCs; and we decided that these rule language compilers should be pluggable.

We got the Archimedes version of KnacqTools feature complete and working well within a year. We demonstrated it to Acorn, who gave us two of the UN*X (BSD 4.3) version of their ARM workstations, called R260. On these we ported KnacqTools to OSF-Motif, which was the then-dominant window manager for X11. We got this feature complete in not very long, and it looked extremely impressive - so impressive that NCR gave us UN*X workstations with the intention that they would adopt KnacqTools as a product. However, the OSF-Motif version suffered from persistent memory leaks which we never fully resolved.

But in the meantime the company was in financial trouble. We'd started without capital and had funded the company organically by doing consulting. Members of the team wanted me to concentrate on product development, but I was also the person who could do the sort of consulting work which brought in money. And in the winter of 1991-2, recession hit the UK economy and our customers stopped spending on experimental AI projects.

If I knew then what I know now, the company could have been saved. What we needed was an angel, and we actually had a very good story to tell to angels. Our customers included Courtaulds - for whom one of our systems ran a critical chemical plant - Ford Motor Company, Bull, De La Rue, and a number of other industrial big players. We had a very impressive product, which was very close to finished. And we were teaching successful courses on the adoption of artificial intelligence into industry. But I didn't know about angels and I didn't have the contacts, so the company went down.

OK, that's the background: onwards!

Over a period of about three years I taught three-day courses on knowledge acquisition to around a hundred people mainly drawn from industry and the civil service - I can't remember why we didn't do more with financial services, which would have been an obvious market, but it's a fact we didn't. In teaching knowledge acquisition I obviously also taught an understanding of representing and structuring knowledge. The people I taught it to were mostly degree educated, and many were extremely senior people - ranging from senior engineers in charge of industrial plant, to executives evaluating future technologies for their companies. But they weren't by any means universally people with advanced maths skills or any previous experience of formal logic. These courses were generally very successful, and at the end of them participants typically could build working systems to make decisions in a domain of knowledge.

What made this possible was the very simple logical schema: ask the domain expert to choose the question to be answered, the proposition whose truth value was to be determined:

This person satisfies the conditions for widows' benefit

Ask them what the default value of the proposition is - in this case, 'false': most people aren't eligible for widows' benefit.

We would notate this as follows:

        Satisfies conditions for Widows' Benefit -

So OK, you ask the expert, what's the first thing you can think of which would make you change your mind, make you think a given person was eligible?

If they were a widow.

And if they were a widow, would they then be eligible?

No, their husband's national insurance contributions would have to have been up to date

And if their husband's national insurance contributions were up to date, would they then be eligible?

No.

We can now extend the graph:

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -

So, given they are a widow, and their husband's contributions were up to date, what's the first thing you can think of which would make you change your mind?

If they were under pension age when bereaved.

And if they were under pension age when bereaved, would they then be eligible?

Yes.

We now have a condition which overturns our original assumption that the person isn't eligible, and we notate it thus:

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -
                            |
                            |
                   Under pension age +

Is there any further condition that would change your mind and lead you to conclude that they were not eligible?

No.

So that closes one path and we recurse back up:

OK, so we've got someone who is a widow and whose husband's contributions were OK, but was not under pension age when bereaved. Is there anything else that would make you think they satisfied the conditions for widows' benefit?

Yes, if their husband wasn't entitled to a retirement pension they would be eligible.

        Satisfies conditions for Widows' Benefit -
                            |
                            |
                         Widow -
                            |
                            |
               Husband's contributions OK -
                         /     \
                        /       \
       Under pension age +     Husband not entitled to retirement pension +

Again, you probe for further conditions; if there are none, you close the path, and recurse back.

OK, so we've got someone who is a widow and whose husband's contributions were OK, but was not under pension age when bereaved, and their husband was entitled to a retirement pension. Is there anything else that would make you think they satisfied the conditions for widows' benefit?

No.

Again, we've nowhere further to go from this node, so we recurse back up:

OK, so we've got someone who is a widow but whose husband's contributions were not OK. Is there anything else at all which would make you think they satisfied the conditions for widows' benefit?

No.

Again recurse back up:

OK, so we've got someone who is not a widow. Is there anything else at all which would make you think they satisfied the conditions for widows' benefit?

No.

And so we can close this tree: it's complete. As you can see, this is a very simple procedure: the interviewer always knows what the next question to ask is, and the answers can be systematically recorded. So it's very easy to teach, and people easily understand how this process maps on to real world decision making. There may still be some things which aren't completely clear - for example, what is entailed by 'Husband's contributions OK'? But we can use that as the start of a new tree and repeat the process. We had special notepads printed with a triangular grid to make drawing trees easier, but that was mainly sales collateral; essentially anyone who can ask questions and scribble on paper can conduct one of these interviews.
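
For concreteness, here is the completed tree in the DNode notation sketched earlier, together with one evaluation. The predicate names are paraphrases of the node labels above, not the project's actual identifiers:

widows_benefit = DNode(
    "Satisfies conditions for Widows' Benefit", False,
    "most people do not satisfy the conditions for widows' benefit",
    unless=[DNode(
        "Widow", False, "the claimant is a widow",
        unless=[DNode(
            "Husband's contributions OK", False,
            "her husband's national insurance contributions were up to date",
            unless=[
                DNode("Under pension age", True,
                      "she was under pension age when bereaved"),
                DNode("Husband not entitled to retirement pension", True,
                      "her husband was not entitled to a retirement pension"),
            ])])])

facts = {"Widow": True, "Husband's contributions OK": True, "Under pension age": True}
explanation = []
print(evaluate(widows_benefit, facts, explanation))   # True
print("; ".join(explanation))

Evaluating with different facts walks different paths, and the collected boilerplate fragments are the raw material for generated letters of the kind mentioned above.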

So, again, I didn't have the original idea, that was down to Peter. But I did develop the interviewing practice, and I was the person who mainly delivered the courses on it.

The nature of logic is that you can operate on it with logic. A logical formalism can be translated into another formalism of similar (or greater) expressivity, so it was trivial to translate D-Trees into production rules which would run on a conventional backward-chaining production rule engine. I'd also argue, from experience, that D-Trees are easier for non-logicians to understand than production rules, since a non-specialist can assess whether a D-Tree has complete coverage of an area of domain knowledge, whereas it's much harder to assess whether a corpus of production rules does.
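
A sketch of the kind of translation meant, again using the illustrative DNode structure above; the IF ... THEN text is an invented rule syntax for illustration, not any particular engine's language:

def tree_to_rules(root):
    """Flatten a D-Tree into production rules: each node yields one rule
    concluding the root proposition, conditioned on the path of 'unless'
    propositions leading to it. Deeper (more specific) rules are emitted
    first, so a first-match backward chainer tries them first."""
    rules = []

    def walk(node, path):
        for child in node.unless:
            walk(child, path + [child.proposition])
        conditions = " AND ".join(path) if path else "TRUE"
        value = "true" if node.colour else "false"
        rules.append(f"IF {conditions} THEN {root.proposition} IS {value}")

    walk(root, [])
    return rules

Applied to the widows' benefit tree above, this yields five rules, from the two fully specific '+' cases down to the catch-all default of 'false'.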

So to come back to your original question, yes, I know from experience that you can enable 'ordinary people' to understand inference, provided you can find a simple enough expression of it. Of course the D-Engine logic wasn't as sophisticated as a first-order predicate logic, and certainly wasn't as sophisticated as a constraint propagation logic, but I don't believe those are insuperable issues. You can't expect 'ordinary people' to understand dense code or complex formalisms; upside-down 'A's and back-to-front 'E's, and all the various hooks of set notation, quickly alienate those who are not mathematically inclined. But all inference is the systematic performance of a limited number of individually simple legal moves, like chess; and 'ordinary people' can understand chess, even if many are (like me) not very good at it.

Creative Commons Licence
The fool on the hill by Simon Brooke is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License