IPC specs

IPC needs

Notation

Process: An execution instance; implemented as a POSIX thread, POSIX lightweight process, UNIX process, or whatever technology is appropriate.

Process management

There will be a SYS process, which is responsible for managing processes based on the system configuration. (For example, if the configuration specifies a two CPU system, SYS would create and manage two CPU processes.)

SYS will need to be able to create and destroy processes.

Receive notification of process death?

Establish an execution context that would allow multiple emulator instances on a single host without interference.

An error recovery strategy is needed; for example, if the SYS process crashes, cleanup of child processes and allocated resources should be as automatic and robust as is feasible, so that a restart does not stumble over duplicate instances or dangling locks.
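As a point of reference, a rough sketch of what SYS's process management could look like with plain POSIX primitives (fork/execv, kill, SIGCHLD + waitpid); all names here are illustrative, not settled:

// Hypothetical sketch of SYS process management with POSIX primitives.
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Reap children as they die; WNOHANG keeps the handler from blocking.
static void sigchld_handler (int signo)
  {
    (void) signo;
    int status;
    pid_t pid;
    while ((pid = waitpid (-1, & status, WNOHANG)) > 0)
      {
        // A real SYS would mark the process table entry for 'pid' dead
        // here (only async-signal-safe work belongs in a handler).
      }
  }

// Launch one emulated unit; SYS keeps the pid for later management.
static pid_t start_process (const char * path, char * const argv [])
  {
    pid_t pid = fork ();
    if (pid == 0)
      {
        execv (path, argv);
        _exit (127); // exec failed
      }
    return pid; // parent: child pid, or -1 on error
  }

// Ask a unit to shut down; escalation to SIGKILL after a timeout
// would be part of the error recovery strategy.
static void stop_process (pid_t pid)
  {
    kill (pid, SIGTERM);
  }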

Shared memory

Shared memory is required for efficient implementation of multiple CPUs, as the DPS8 hardware implements main memory as a shared resource.
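For illustration, a minimal sketch of creating that shared main memory with POSIX shm_open/mmap; the segment name, word representation, and MEMSIZE are assumptions (on Linux this links with -lrt):

// Hypothetical sketch: DPS8 main memory as a POSIX shared memory
// segment that every CPU/IOM process maps.
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEMSIZE (4 * 1024 * 1024) // words; illustrative size

typedef uint64_t word36; // a 36 bit word in the low bits of 64

int main (void)
  {
    // "/dps8m.0" is hypothetical; a per-instance suffix would keep
    // multiple emulators on one host from interfering.
    int fd = shm_open ("/dps8m.0", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
      { perror ("shm_open"); return 1; }
    if (ftruncate (fd, (off_t) (MEMSIZE * sizeof (word36))) < 0)
      { perror ("ftruncate"); return 1; }
    word36 * M = mmap (NULL, MEMSIZE * sizeof (word36),
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (M == MAP_FAILED)
      { perror ("mmap"); return 1; }
    M [0] = 0123456701234ULL; // now visible to every mapping process
    return 0;
  }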

Locks

Locks are needed to implement certain atomic dps8 main memory operations, and to manage thread-safe data structures if threads are part of the implementation (currently the ZMQ IPC library uses threads to deliver calls).
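One plausible realization of the cross-process case is a pthread mutex with the PTHREAD_PROCESS_SHARED attribute, placed in the shared segment; a sketch with illustrative names:

// Hypothetical sketch: a process-shared mutex in the shared segment,
// serializing DPS8 atomic R/M/W operations across processes.
#include <pthread.h>
#include <stdint.h>

struct shared_region
  {
    pthread_mutex_t mem_lock; // guards atomic R/M/W sequences
    // ... main memory words would follow ...
  };

// Called once, by whichever process creates the shared segment.
int init_mem_lock (struct shared_region * r)
  {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init (& attr);
    // This attribute is what makes the mutex valid across processes:
    pthread_mutexattr_setpshared (& attr, PTHREAD_PROCESS_SHARED);
    return pthread_mutex_init (& r -> mem_lock, & attr);
  }

// Shape of an atomic R/M/W (e.g. setting bits in a memory word):
void atomic_or (struct shared_region * r, uint64_t * word, uint64_t bits)
  {
    pthread_mutex_lock (& r -> mem_lock);
    * word |= bits;
    pthread_mutex_unlock (& r -> mem_lock);
  }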

Signals

The code currently uses the USR1 signal to indicate to the CPU that the Execute Fault button has been pressed. This functionality could be replaced with any other appropriate technology.
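For reference, a sketch of the SIGUSR1 pattern: the handler only sets a flag, and the CPU main loop polls it (the flag and function names are illustrative):

// Hypothetical sketch: SIGUSR1 signalling the Execute Fault button.
#include <signal.h>
#include <string.h>

// Set by the handler, polled by the CPU main loop.
static volatile sig_atomic_t exec_fault_pressed = 0;

static void usr1_handler (int signo)
  {
    (void) signo;
    exec_fault_pressed = 1; // just flag it; do the work in the loop
  }

void install_exec_fault_handler (void)
  {
    struct sigaction sa;
    memset (& sa, 0, sizeof sa);
    sa.sa_handler = usr1_handler;
    sigaction (SIGUSR1, & sa, NULL);
  }

// In the CPU main loop, something like:
//   if (exec_fault_pressed) { exec_fault_pressed = 0; /* take the fault */ }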

Specifics

Processes

(This is a conceptual list; it would probably be a good idea to combine several of these.)

SYS
CPU
SCU
IOM
MPC
tape drives
disk drives
DIA
FNP
card readers
card punches
printers
bulk storage
OPCON

List of IPC events (under construction)

Signalling events

Note: "from DEVi" and "to DEVj" might be better implemented as "from PORTi" dn "toPORTj"/

CIOC CPU -> SCU (from CPUi, to SCUj, port k)

SMCM CPU -> SCU (from CPUi, to SCUj, 72 bit mask value)

SMIC CPU -> SCU (from CPUi, to SCUj, 36 bit mask value)

SSCR CPU -> SCU (from CPUi, to SCUj, 72 bit register value)

SXC CPU -> SCU (from CPUi, to SCUj, 5 bit cell number)

SXC IOM -> SCU (from IOMi, to SCUj, 5 bit cell number)

Calls returning status or value

RCCL CPU -> SCU (from CPUi, to SCUj; return 52 bit clock value)

RMCM CPU -> SCU (from CPUi, to SCUj, return 72 bit mask value)

RSCR CPU -> SCU (from CPUi, to SCUj, return 72 bit register value)

XEC CPU -> SCU (from CPUi, to SCUj, 5 bit cell number, return trap pair address)
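All of these events share a shape (opcode, source, destination, port/cell, small payload), so a single tagged message struct could carry them; a sketch, with assumed field widths:

// Hypothetical sketch of a common wire format for the events above.
#include <stdint.h>

enum ipc_opcode
  {
    OP_CIOC, OP_SMCM, OP_SMIC, OP_SSCR, OP_SXC,
    OP_RCCL, OP_RMCM, OP_RSCR, OP_XEC
  };

struct ipc_event
  {
    uint8_t  opcode;      // enum ipc_opcode
    uint8_t  from_unit;   // CPUi or IOMi
    uint8_t  to_unit;     // SCUj
    uint8_t  port;        // port k, or the 5 bit cell number
    uint64_t payload [2]; // up to 72 bits of mask/register/clock value,
                          // 36 bits per element, low-justified
  };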

Other

The XIP (interrupt present) line is asserted by an SCU and presented to mask-enabled CPUs. I am unsure whether this needs to be level or edge detect.
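If it is level detect, one cheap realization is a per-SCU flag in shared memory that mask-enabled CPUs poll each cycle; a sketch using C11 atomics (names assumed), which also shows the distinction between the two behaviors:

// Hypothetical sketch: XIP as a level signal, a per-SCU flag in
// shared memory polled by mask-enabled CPUs.
#include <stdatomic.h>
#include <stdbool.h>

struct scu_shared
  {
    atomic_uint xip; // nonzero while any unmasked interrupt cell is set
  };

// SCU side: recompute as interrupt cells are set and cleared.
void scu_update_xip (struct scu_shared * scu, unsigned cells_pending)
  {
    atomic_store (& scu -> xip, cells_pending);
  }

// CPU side, once per cycle. Level detect: the CPU keeps seeing the
// interrupt until the SCU lowers the line; an edge detect design
// would instead deliver a one-shot event when the line rises.
bool cpu_sees_xip (struct scu_shared * scu)
  {
    return atomic_load (& scu -> xip) != 0;
  }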

IOM to DIA or MPC: device command, return status

DIA or MPC to IOM: list service

DIA or MPC to IOM: indirect list service

DIA or MPC to IOM: fault

DIA or MPC to IOM: status

MPC or DIA to and from devices

Shared Memory

DPS8 main memory

SCU interrupt bits

SYS cable network
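The cable network is static once configuration is read, so it could live in shared memory as a simple table; a sketch with assumed field names and an assumed upper bound on cable count:

// Hypothetical sketch: the cable network as a read-only table in
// shared memory.
#include <stdint.h>

enum unit_type { UT_CPU, UT_SCU, UT_IOM, UT_MPC, UT_DIA, UT_FNP, UT_DEV };

struct cable
  {
    uint8_t from_type, from_unit, from_port;
    uint8_t to_type,   to_unit,   to_port;
  };

struct cable_table
  {
    unsigned n_cables;
    struct cable cables [64]; // assumed upper bound
  };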

Locks

Main memory atomic R/M/W operations


Current Discussion

On Sun, Aug 9, 2015 at 5:39 PM, Harry Reed wrote:

Well, playing around with nanomsg brings no lasting joy. While nanomsg
works well, it lacks any kind of discovery service, which was a neat
feature of 0MQ: we could just start up the cpu & fnp and they would
automagically connect to each other and away we go.

I've been playing around with sysV IPC, viz. message queues, etc. They
work well, but again no discovery is available. Not that we couldn't come
up with a scheme for nanomsg and/or sysV IPC to implement some sort of
beacon/discovery service; I just don't think that it's really worth the effort.

So, asking you, dear developers, what kind of IPC do we want to use for
this beast? The socket based nanomsg would allow us to connect fnp's &
cpu's across the internet - which could be really neat. sysV IPC is
straightforward, involves no networking, but only allows connections
between processes/threads on the current host.

What say ye?

I am becoming more resolute in my opinion that the way forward is to better specify the exact IPC needs of dps8m, and to prototype them with the simplest components that will work on my development machine, in a package that abstracts all of the implementation details behind a simple, well-defined interface, à la:

typedef … IPChandle;
typedef … IPCservice;
typedef … IPCmsg;

// Initialize the IPC library; called once per process
IPChandle * IPCinitialize (…);

// Terminate the library; closes and cleans up
int IPCterminate (IPChandle *, …);

// Launch a process that will be managed by the IPC library;
// the return value is used to communicate with the process
IPCservice * IPCstartService (IPChandle *, …);

// Terminate a process launched by IPCstartService()
int IPCterminateService (IPCservice *, …);

// Send a message to a process; non-blocking
int IPCsendMsg (IPCservice *, …);

// Received message callback template
typedef int (* IPCrecvMsg) (IPCservice *, IPCmsg *);

// Register received message callback handler
int IPCRegisterMsgRcvr (IPCservice *, IPCrecvMsg, …);

// Send a UNIX style signal to a process
int IPCsignal (IPCservice *, int signo);

// Do we need signal handler wrappers, or will the POSIX signal handlers suffice?
// Received signal callback template
typedef int (* IPCrecvSig) (IPCservice *, int signo);

// Register received signal callback handler
int IPCRegisterSigRcvr (IPCservice *, IPCrecvSig, …);

// Create/open shared memory segments
void * IPCcreateSHM (…);
void * IPCopenSHM (…);

// RPC calls?

We can then get that interface working and the specification refined, making that IPC package the reference system. Then that simple, well-defined package can be ported, dropping in whatever implementation you would like (including making the whole thing a single process with a tangle of threads, buying into the Windoze philosophy).

If we familiarize ourselves with the various platforms' IPC capabilities and limitations, we should be able to keep the reference specification portable.

E.g., I would select POSIX message queues; I am given to understand that the OSX message passing system is a fundamental part of OSX, rock solid, and semantically quite similar to POSIX message queues.
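For concreteness, a minimal sketch of message passing over POSIX message queues; the queue name and sizes are illustrative (on Linux, link with -lrt):

// Hypothetical sketch: message passing with POSIX message queues.
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main (void)
  {
    struct mq_attr attr;
    memset (& attr, 0, sizeof attr);
    attr.mq_maxmsg = 16;
    attr.mq_msgsize = 128;

    // Each service would own one well-known queue, e.g. "/dps8m.0.scu-a".
    mqd_t q = mq_open ("/dps8m.0.scu-a", O_CREAT | O_RDWR, 0600, & attr);
    if (q == (mqd_t) -1)
      { perror ("mq_open"); return 1; }

    const char msg [] = "CIOC port 4"; // placeholder payload
    mq_send (q, msg, sizeof msg, 0);

    char buf [128]; // must be at least mq_msgsize
    ssize_t n = mq_receive (q, buf, sizeof buf, NULL);
    if (n >= 0)
      printf ("got %zd bytes: %s\n", n, buf);
    mq_close (q);
    return 0;
  }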

I also know that back circa 1990 we were supporting geophysics applications across nearly every platform known to the western world (IBM mainframes, UNIX workstations, VAXen, etc.), and one of the engineers whipped up a POSIX message queue implementation that ran on most of the machines and worked across the network to support client/server applications; so I am inclined to believe that most of our target platforms, including Windows, will have a 3rd party msg queue library that will just work.

The things that are needed (approximately):

message passing
some kind of mechanism for RPC, i.e. sending a message to a process and getting a reply, with failure detection*
signaling
shared memory
launching and managing processes
process death detection
process termination
session ID

* The RPC is subtle, and I am not sure of the exact behavior I want. RPC is inherently blocking, and I wouldn't want to block a CPU process while (for example) a 10 second timeout occurs on the RPC.
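One way to get RPC semantics without blocking: issue the request, record a completion callback and a deadline, and let the CPU main loop poll for the reply or the timeout. A sketch of the bookkeeping; poll_reply and the other names here are stand-ins, not a real transport:

// Hypothetical sketch: asynchronous RPC bookkeeping so the CPU never
// blocks waiting for a reply.
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

typedef void (* rpc_done) (void * reply, bool timed_out);

struct pending_rpc
  {
    bool     in_flight;
    time_t   deadline; // absolute time after which we give up
    rpc_done on_done;
  };

extern void * poll_reply (void); // assumed transport hook

// Called from the CPU main loop every cycle.
void rpc_tick (struct pending_rpc * p)
  {
    if (! p -> in_flight)
      return;
    void * reply = poll_reply ();
    if (reply != NULL)
      {
        p -> in_flight = false;
        p -> on_done (reply, false);
      }
    else if (time (NULL) > p -> deadline)
      {
        p -> in_flight = false;
        p -> on_done (NULL, true); // failure detection: timed out
      }
  }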

Signals and shared memory should be straightforward enough.

The zmq autoconfig was nice, but not essential, and perhaps a bad idea — we were pretty helpless if two 'fnp-d's showed up on the chat line.

The .ini cable commands tell the emulator the exact configuration needed; the SYS device peruses the cable tables, launches the various processes needed (cpu-a, cpu-b, cpu-c, cpu-d, scu-a, scu-b, scu-c, scu-d, iom-a, iom-b, tap-a, dsk-a, opcon).

Multics wants to do I/O to a disk; Multics consults its disk tables, decides it wants to talk to dsk_01a; consults the configuration deck, sees that dsk_01a is connected to iom-a, channel 12, and that iom-a is attached to scu-a, port 4. It builds a channel command program that specifies channel 12 and puts it in the iom-a mailbox in memory. It consults the cpu configuration switches, and determines that scu-a is attached to port 0. It then sends a connect command to the scu connected to port 0, telling it to forward the command to the scu's port 4.

The cpu emulator consults its cabling, sees that cpu port 0 is connected to scu-a, and sends the "connect to port 4" message to the scu-a process.

The scu-a process gets the message, consults the cable table, sees that the iom-a process is connected to port 4, and sends the connect message to the iom-a process.

The iom-a process gets the connect message, looks in its mailbox, and sees the channel command for channel 12. It consults the cable table, sees that the dsk-a process is connected to channel 12, and sends the channel command to the dsk-a process.

(Yes, I am worried that I can rattle that off the top of my head.)
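The forwarding in that walkthrough is just repeated cable-table lookups; a sketch of the scu-a step, with assumed types (send_connect stands in for the real IPC send primitive):

// Hypothetical sketch of the scu-a step: map a connect arriving from
// a CPU to the process cabled to the target port.
struct port_entry
  {
    int connected;    // nonzero if something is cabled here
    int peer_service; // handle used to send to the peer process
  };

struct scu_state
  {
    struct port_entry ports [8]; // a DPS8 SCU has 8 ports
  };

extern int send_connect (int peer_service, unsigned port); // assumed

// CPU said "connect to port 4"; forward it out our port 4.
int scu_handle_connect (struct scu_state * scu, unsigned target_port)
  {
    struct port_entry * p = & scu -> ports [target_port & 7];
    if (! p -> connected)
      return -1; // cable fault
    return send_connect (p -> peer_service, target_port);
  }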

My point is that the configuration is well-known at start time, and I think that having the SYS process explicitly manage the processes and connections is a better idea than a self-configuring network, à la Zyre.

— Charles
