A modern implementation would also, of course, have access to the semantic web in order to draw in additional sources of knowledge beyond that stored in a local knowledge base.]
Development
Explanation is a social behaviour
Games are social behaviours which can be modelled
Explanation can be viewed as a game playing activity
One of the models we frequently use in real-life
situations where a decision has to be made on the basis of
uncertain or conflicting evidence is the dialectic model. In this,
one authority proposes a decision, giving an argument which supports
it; another authority brings forward objections to the argument, and
may propose an alternative decision. This method may continue
recursively until there is no further argument which can be advanced.
This is the model which underlies both academic
and parliamentary debate, and the proceedings of the law courts. The
model supports a ‘decision support’ conception of the Expert
System’s role, in addition to a ‘decision making’ one; in the
decision support conception, the user is placed in a position
analogous to that of a jury, considering the arguments presented by
the advocates and making a decision on the basis of them.
We can view this dialectic approach to decision
making as a game, with a fixed set of legitimate moves.
Hintikka’s game theoretic logic
This approach seems very promising. It may tie in
with game-theoretic logics such as those of Ehrenfeucht [Ehrenfeucht
61] and Hintikka [Hintikka & Kulas, 83 and passim]. Of course,
these logics were strictly monotonic. Problem areas where monotonic
logics are adequate are of limited interest, and unlikely to give
rise to interesting problems of explanation; but it should be
possible to develop a provable semantics for a non-monotonic
extension. The decision procedure for a game-theoretic logic should
not raise any particular difficulties; the ‘games’ are zero sum,
under complete information.
However, the adoption of a game theoretic
explanation mechanism does not commit us to adopting a game theoretic
logic; so (although interesting) this line will be pursued only if it
proves easy.
The object of an explanation game
If explanation is to be seen as a game, what is to
be its objective? If we were to take a positivist approach, we might
say that the purpose of explanation was to convey understanding; then
the game would be a co-operative one, with the objective of both
players being to achieve this end.
But since we have rejected this position, the
objective of an explanation must be to persuade.
Furthermore, if we accept the argument advanced in Chapter 3 above that explanation is hegemonistic, the explainee must be seen to have an objective in resisting explanation. So now we have a clear idea of what the objective of an explanation game will be: the explainer will win if the explainee is brought to a point of no longer being able to produce a rational1 reason for resisting the explanation. Otherwise, if, when all the supporting arguments the explainer can bring to bear have been advanced, the explainee is still unconvinced, the explainee has won.
Legitimate moves in an explanation game
A possible set of legitimate moves in an
explanation game might be:
PRESENT
Present a case, supported by an argument, which
may include one or more assumptions.
Generally, in a non-monotonic system, we are
dealing with incomplete information (and, indeed, this is generally
the case in ‘real life’). Consequently, any conclusion is likely
to be based upon one or more unsupported assumptions.
[starting move]
DOUBT
Challenge one or more of the assumptions.
[basic response to PRESENT, REBUT or
COUNTER-EXAMPLE]
DOUBT-IRRELEVANT
Claim that, even if the assumption were false, it
would make no difference to the decision.
[blocking response to DOUBT]
ASSUMPTION-PROBABLE
Claim that the assumption is probable based on
prior experience, or, for example, statistical data.
[weaker response to DOUBT]
REBUT
This is essentially the same as PRESENT, but of an
argument based on the opposite assumption to that challenged in the
preceding DOUBT.
[response to a PASS following a DOUBT]
COUNTER-EXAMPLE
Present a known case similar to the current one,
where the predicate of the assumption which has been challenged holds
the opposite value.
[response to ASSUMPTION-PROBABLE]
PASS
A null move in response to any other move. Forces
the other player to move. In some sense this might be seen as
equivalent to saying ‘so what?’: the opponent has made a move,
but there is nothing further which the player wants to advance at
this stage. This move is inherently dangerous, since the game ends
when the two players pass in successive moves.
[weakest response to anything]
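The move set and its response relationships could be captured in a small table. The following Python sketch is my own illustration, not part of the original scheme; note that the condition on REBUT (legal only after a PASS which followed a DOUBT) is a context condition a flat table cannot fully express.

```python
# Hypothetical encoding of the explanation-game moves listed above.
# Each move maps to the set of moves that may legally answer it.
LEGAL_RESPONSES = {
    "PRESENT": {"DOUBT", "PASS"},
    "DOUBT": {"DOUBT-IRRELEVANT", "ASSUMPTION-PROBABLE", "PASS"},
    "DOUBT-IRRELEVANT": {"PASS"},
    "ASSUMPTION-PROBABLE": {"COUNTER-EXAMPLE", "PASS"},
    "COUNTER-EXAMPLE": {"DOUBT", "PASS"},
    "REBUT": {"DOUBT", "PASS"},
    # REBUT answers a PASS only when that PASS followed a DOUBT;
    # this flat table over-approximates that rule.
    "PASS": {"REBUT", "PASS"},
}

def is_legal(previous: str, move: str) -> bool:
    """True if `move` is a legal answer to `previous`."""
    return move in LEGAL_RESPONSES.get(previous, {"PASS"})
```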
Implementing a game theoretic approach to explanation
Playing the explanation game
The game playing algorithm required to implement
an explanation system need not be sophisticated; indeed, it must not
be too sophisticated, at least as a game player. Consider: a good
game playing algorithm looks ahead to calculate what the probable
responses to the moves it makes will be. If it sees that one possible
move leads to a dead end, it will not make that move.
But in generating an explanation, we need to
explore any path whose conclusion is not immediately obvious. So
either the game-playing algorithm should not look ahead more than,
perhaps, one ply; or else, if it does so, then when it sees a closed
path it should print out a message of the form:
Critic: Oh yes, I can see that even if the assumption that Carol hasTwoLegs is false it does not affect the assumption that Carol canSwim.
Initially, however, I propose to implement a
game-playing algorithm which simply makes the strongest move
available to the player, based on a scoring scheme which does not
look ahead. The game will proceed something like tennis; the player
who last made a PRESENT or REBUT move will have the initiative, the
‘serve’, as it were, and need only defend to win. The other
player must try to break this serve, by forcing a DOUBT… PASS
sequence, which allows a REBUT play.
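The strongest-move scheme just described could be sketched as follows. The numeric strengths are invented for illustration; the text fixes only the ordering at the extremes (PASS is the weakest response to anything, and ASSUMPTION-PROBABLE is a weaker response to DOUBT than DOUBT-IRRELEVANT).

```python
# One-ply move chooser: each move type carries a fixed strength and
# the player simply plays the strongest move available, with no
# look-ahead. The values below are assumptions for illustration.
STRENGTH = {
    "PRESENT": 6,
    "REBUT": 6,
    "COUNTER-EXAMPLE": 5,
    "DOUBT": 4,
    "DOUBT-IRRELEVANT": 3,
    "ASSUMPTION-PROBABLE": 2,
    "PASS": 1,  # weakest, and inherently dangerous
}

def strongest_move(available):
    """Choose the strongest move currently available to the player."""
    return max(available, key=STRENGTH.get)
```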
Dialectic explanation: the two-player game
With this schema, we can mechanise a dialectic
about a case. The game is played by the machine against itself;
‘agents’ in the machine play moves in turn until both
sequentially pass, or until the user interrupts. Each move produces a
natural language statement on the display. Ideally, the user could
interrupt at any time, either to request further justification for a
move or to halt the process.
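The self-play loop might look like the following sketch, in which two machine agents alternate moves until both PASS in succession. The agent interface (a callable returning a move name and a natural-language statement) is my own assumption for illustration.

```python
# Hypothetical self-play loop for the two-player dialectic game.
def play_explanation_game(consultant, critic, opening):
    """`opening` is the Consultant's initial PRESENT, as a
    (move, statement) pair; each agent answers the previous move."""
    transcript = [opening]
    print(opening[1])                  # each move produces NL output
    agents = [critic, consultant]      # the Critic replies first
    passes, turn = 0, 0
    while passes < 2:                  # game ends on a double PASS
        move, statement = agents[turn % 2](transcript[-1])
        print(statement)
        transcript.append((move, statement))
        passes = passes + 1 if move == "PASS" else 0
        turn += 1
    return transcript
```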
As a (simple) example of what might be produced,
here is some sample output from a very crude prototype system2:
User: (← Carol Explain CanSwim)
Consultant: What is the value of CanSwim for Carol?
User: Don’t know
Consultant: I found that CanSwim was true of Carol by Default using the following assumptions:
    I assumed that CanSwim was true of Carol
    My reasons are
        CanSwim is true of most Mammals;
        Carol is a Mammal;
        Therefore it is probable that CanSwim is true of Carol.
Critic: The assumption that CanSwim is true of Carol is unsafe:
    Henry is also of type(s) (Person) but CanSwim is false of Henry.
This example, which is from a system considerably
less developed than the ideas given above, shows ‘Consultant’
PRESENTing a case based on a single assumption, and ‘Critic’
responding with a COUNTER-EXAMPLE – a move which would not be legal
under the scheme given. However, this should give some indication of
how the idea might be implemented.
The simple model described above has not achieved
all we want, however. The machine, in the form of the agent
‘Consultant’ has attempted to explain to itself, wearing its
‘Critic’ hat. The user’s role has simply been that of passive
observer. The first enhancement would be to give the user control of
the ‘Critic’; but because the user cannot have the same view of
the knowledge base as the machine, the ‘Critic’ will have to
guide the user, firstly by showing the user the legal moves
available, and secondly by finding supplementary information to
support these moves.
Explanation as conversation – a three player game
A better design still might be to allow the
machine to carry on its dialogue as before, but to allow the user to
interrupt and challenge either of the agents. It seems probable that
the most interesting moves the user could make would be DOUBT, and
also COUNTER-EXAMPLE, where the user could present an example case
not previously known to the machine. Now the user can sit back and
watch, if the conversation brings out points which seem relevant; but
equally, the user can intervene to steer the conversation.
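The user's part in this three-player variant could be sketched as an interjection hook, polled between machine moves. The prompt interface is assumed for illustration; per the text, only DOUBT and COUNTER-EXAMPLE are accepted from the user, and anything else means sitting back to watch.

```python
# Hypothetical user hook for the three-player game: between machine
# moves, offer the user a chance to challenge either agent.
def user_interjection(prompt_fn, last_move):
    """prompt_fn(last_move) returns a move name, or None to watch."""
    move = prompt_fn(last_move)
    return move if move in {"DOUBT", "COUNTER-EXAMPLE"} else None
```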
These ideas have, needless to say, yet to be
implemented.
1. ‘Rational’, here, is of course a fix. In an explanation situation involving two human beings, the explainee might continue to resist even if there is no rational reason for doing so. This situation would be non-deterministic, though, and too difficult for me to deal with.
2. This is output from Wildwood version Arden, my experimental prototype object-based inference mechanism. The game-playing model is different from (a predecessor of) the one given above.