Tuesday 11 April 2017

Peer to peer post-scarcity computing

[Image: Trinity College Library, Dublin]
In previous notes on post-scarcity hardware (here and here) I've assumed a single, privileged main memory manager which maintains the canonical memory pool. All requests for memory objects go to that manager, and new non-private memory objects must be created in the address space it manages. The memory manager thus becomes both a bottleneck and a single point of failure.

In the second note, I'd suggested that memory should be allocated in pages, with each page belonging to a single thread. On true post-scarcity hardware, that means each page could physically co-reside with the processor on which that thread runs. That processor would be responsible for curating its own memory objects (which in essence means updating their reference counts and garbage collecting them when they're no longer required).
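
As a minimal sketch of what 'curating its own memory objects' might mean on a single node, assuming a simple table keyed by object identifier; the names here (Node, allocate, add_ref, drop_ref) are illustrative inventions of mine, not part of any existing design:

    # Sketch only: per-node curation of canonically-owned objects.
    # Node, allocate, add_ref and drop_ref are my own illustrative names.
    import itertools

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.objects = {}     # object id -> immutable payload
            self.refcounts = {}   # object id -> references held anywhere
            self._serial = itertools.count()

        def allocate(self, payload):
            """Create a new immutable object in this node's segment of the map."""
            oid = (self.node_id, next(self._serial))
            self.objects[oid] = payload
            self.refcounts[oid] = 1
            return oid

        def add_ref(self, oid):
            self.refcounts[oid] += 1

        def drop_ref(self, oid):
            self.refcounts[oid] -= 1
            if self.refcounts[oid] == 0:
                # In the full scheme deletion would be deferred; see further down.
                del self.objects[oid], self.refcounts[oid]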

Memory objects would still be requested by other processors by outputting the request on the address bus. Because memory objects are immutable, any processor which currently holds a copy of the requested object can respond by outputting that copy onto the data bus. Whether this is a win depends on topology, but my own mental model of the internal arrangement of the processor array is that each node has direct communication with eight nodes on the same layer, and nine on each of the layers above and below.
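
For concreteness, that 8 + 9 + 9 = 26 neighbour relation could be written down like this, if nodes are addressed by (x, y, z) coordinates in a cubic lattice; the coordinate addressing is an assumption of mine, not something the hardware dictates:

    # The 26 neighbours of a node at (x, y, z) in a cubic lattice:
    # eight on the same layer, nine above and nine below.
    # Integer (x, y, z) addressing is my assumption here.
    def neighbours(x, y, z):
        return [(x + dx, y + dy, z + dz)
                for dx in (-1, 0, 1)
                for dy in (-1, 0, 1)
                for dz in (-1, 0, 1)
                if (dx, dy, dz) != (0, 0, 0)]

    assert len(neighbours(0, 0, 0)) == 26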

In this model there's no physical space to route a single address/data bus pair connecting every node, so the model is necessarily store-and-forward, like old-fashioned Usenet; it would therefore be a win for the topologically nearest node which has the data to respond with it. This does of course require that every node can trust every other node to obey the rules of the system.

Reference counts are of course not immutable, but no node other than the canonical owner of a memory object needs to know anything about them. When a node which is not the canonical owner of an object passes a copy of that object to a third node, it must tell the canonical owner to increment the reference count; and when a node holding a copy of an object deletes that copy, it must again tell the canonical owner that the copy no longer exists.
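
Continuing the Node sketch above, the bookkeeping traffic this implies might look something like the following; ADD_REF, DROP_REF and send() are placeholder names of my own, not a defined protocol:

    # Reference-count traffic between a copy-holder and the canonical owner.
    # ADD_REF, DROP_REF and send() are placeholders, not a defined protocol.
    def pass_copy(holder, oid, requester, send):
        """A non-owner hands its copy of oid on to a third node."""
        owner = oid[0]                     # owner's id is embedded in the oid
        send(requester, ("COPY", oid, holder.objects[oid]))
        send(owner, ("ADD_REF", oid))      # the owner must learn of the new copy

    def delete_copy(holder, oid, send):
        """A non-owner discards its copy of oid."""
        owner = oid[0]
        del holder.objects[oid]
        send(owner, ("DROP_REF", oid))     # the owner must learn the copy is gone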

It also means that when an object's reference count on its canonical node hits zero, the object must not be deleted immediately, because an 'add reference' message may still be propagating through the system; instead, it must be queued for deletion, and held in that queue for the maximum time it could reasonably take for a message to propagate.
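
In code, the deferral might be sketched like this; MAX_PROPAGATION_TICKS is an arbitrary stand-in for the longest plausible message transit time, and the doomed queue is my own invention:

    # Deferred deletion: an object whose count reaches zero is queued, not
    # freed, in case an 'add reference' message is still in flight.
    # MAX_PROPAGATION_TICKS and the doomed queue are illustrative only.
    MAX_PROPAGATION_TICKS = 64

    def drop_ref(node, oid, now):
        node.refcounts[oid] -= 1
        if node.refcounts[oid] == 0:
            node.doomed[oid] = now + MAX_PROPAGATION_TICKS  # earliest safe free

    def add_ref(node, oid):
        node.refcounts[oid] += 1
        node.doomed.pop(oid, None)          # a late 'add reference' reprieves it

    def collect(node, now):
        for oid, deadline in list(node.doomed.items()):
            if now >= deadline:
                del node.objects[oid], node.refcounts[oid], node.doomed[oid]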

There are some problems I haven't worked out, which may make this idea unworkable. Suppose a node (A) requests a memory object (1) from each of its 26 neighbours. None has it, so each passes the request on to each of its neighbours which haven't yet received the request. One node in this second shell, (B), has a copy of (1) and responds. How do we tell each of the nodes currently retransmitting the request that the request has been satisfied? If the 'cancel' message propagates through the system at the same speed as the original message, then it can never catch it.
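
A toy illustration of that last point, on a one-dimensional chain of nodes: if both messages advance one hop per tick, the cancel stays a constant number of hops behind the request front and never overtakes it.

    # Toy model on a chain of nodes: the request and the cancel each advance
    # one hop per tick, so the gap between their fronts never shrinks.
    CANCEL_SENT_AT = 3       # tick at which the requester learns it's satisfied
    request_front = 0        # hops travelled by the request
    cancel_front = None      # hops travelled by the cancel, once sent

    for tick in range(1, 10):
        request_front += 1
        if tick == CANCEL_SENT_AT:
            cancel_front = 0
        elif cancel_front is not None:
            cancel_front += 1
        print(tick, request_front, cancel_front)
    # After the cancel is sent, request_front - cancel_front is always 3.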

For sending 'update reference' messages to work effectively, each node must know which one of its neighbours is nearest to the target node of the message, but that does not seem to me in itself problematic. Obviously, broadcasting 'update reference' messages across a store-and-forward network would be dreadfully inefficient. But is broadcasting 'have you got' messages any more efficient than simply querying the nodes on the direct route to the canonical owner? I'm not sure.
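
The routing rule itself can be very simple if each node knows its own coordinates and those of the message's target: step towards whichever neighbour minimises the remaining Chebyshev ('king's move') distance. Again, coordinate addressing is my assumption.

    # Greedy routing towards a target node in the cubic lattice: move one
    # step along each axis on which we differ from the target.
    # Assumes nodes are addressed by (x, y, z) coordinates.
    def chebyshev(a, b):
        return max(abs(p - q) for p, q in zip(a, b))

    def next_hop(me, target):
        """The single neighbour of me which is strictly nearer to target."""
        return tuple(p + (q > p) - (q < p) for p, q in zip(me, target))

    assert chebyshev(next_hop((0, 0, 0), (5, -3, 2)), (5, -3, 2)) \
           < chebyshev((0, 0, 0), (5, -3, 2))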

And, when receiving a copy of a broadcast message, it's obviously desirable that each node should rebroadcast it only to nodes which have not yet received it. But is it computationally cheap to know which nodes those will be? I think so, but I'm not confident, and will have to prove it.
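
One cheap rule, if each forwarded request carries the coordinates of the node that originated it, would be to rebroadcast only to neighbours which are strictly farther from the origin than the forwarding node, so the flood only ever moves outward. This is a guess at what 'computationally cheap' might look like, not a proof that it is sufficient:

    # Cheap broadcast pruning: forward only to neighbours farther from the
    # originating node than we are, so the flood always moves outward.
    # A guessed rule; it bounds retransmission but isn't proven minimal.
    def chebyshev(a, b):
        return max(abs(p - q) for p, q in zip(a, b))

    def forward_targets(me, origin, my_neighbours):
        mine = chebyshev(me, origin)
        return [n for n in my_neighbours if chebyshev(n, origin) > mine]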

Finally, the single memory manager was a single point of failure. But a system in which every node is the canonical owner of a segment of the memory map means that the loss of any node could mean catastrophic failure, if that node was the canonical owner of some widely used object. Obviously, if an object is widely used it's likely that many nodes will have copies of it, so it's probably possible to rebuild at least the critical parts of the missing node's memory.

But how does one construct an algorithm to decide which node should take canonical responsibility for the orphaned memory objects? How does one construct an algorithm that would scale, as nodes are progressively lost?

This I don't know.

