QNX Neutrino is great — but it could be even better...

I've been using QNX products for the last two decades, starting with QNX 2, then QNX 4 (right from the beginning), through to the latest version, QNX Neutrino (started using that during the early days of the 0.3 limited beta).

The QNX Neutrino operating system is great — it's clean, the message passing model is very nice, and you just can't beat the ease with which you can develop everything from drivers to complete systems.

However, in spite of that, there are still things that I'd like to add. I've emailed QSSL (the makers of QNX) on these topics, and for the most part, was told either "Yup, we've already thought of that, and it's going into a later version", or "Huh. Neat, we'll consider it".

This is how InterruptAttachEvent was born. During a training course I was presenting, I passed the struct sigevent as the area parameter to InterruptAttach, and then returned it from the ISR whenever we wanted the event delivered. Dan Dodge saw this and said, "Cute." A few revisions later, there was InterruptAttachEvent :-).

So, here's the list — I'll be adding more as I get to them...

Extend struct sigevent to include semaphores

Currently, the struct sigevent can deliver a pulse or a signal. (It can do a few other things, such as unblocking an InterruptWait, but those aren't completely general.)

I propose that delivering the event should also be able to increment a semaphore. The reason I think this is a great idea is that it would let a thread wait on a semaphore as its sole means of synchronization with multiple sources, such as other threads and interrupt service routines. Right now, if that thread needs to wait for an interrupt or another thread, it either has to choose a synchronization mechanism that's common to both (pulses or signals), or it has to poll or run a converter thread. See below under "Create a pulse draining mechanism" for the reasons why waiting on a pulse is a bad idea. Signals are just plain nasty :-).
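To make that concrete, here's a rough sketch of what using such an extension might look like. The SIGEV_SEM_INIT() macro is a hypothetical name for the proposed notification type, and intno stands for whatever interrupt you're attaching to; the point is that one sem_wait() becomes the thread's single blocking point:

#include <semaphore.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>

sem_t            work_sem;
struct sigevent  sem_event;

sem_init (&work_sem, 0, 0);

// hypothetical: delivering this event increments work_sem
SIGEV_SEM_INIT (&sem_event, &work_sem);

// an ISR could deliver it...
InterruptAttachEvent (intno, &sem_event, 0);

// ...and so could another thread; the worker simply waits on
// the one semaphore for everything:
for (;;) {
    sem_wait (&work_sem);
    // figure out which source posted, and do the work
}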

Create a pulse draining mechanism

Under QNX 4, you could drain proxies from other processes, simply by choosing which process ID you listened to. Under QNX 4, the code to drain a proxy was:

while (Creceive (proxy_id, &buf, sizeof (buf)) != -1) ;

Under Neutrino, this is just not possible. The reason is that pulses and messages are all "mixed together" in one channel — the only way to drain off pulses (and be sure you had drained off all pulses for a given pulse code) would be to drain all pulses, and that's not necessarily a good idea, or easy to deal with.

The only clean way to drain off all pulses, and preserve pulse priorities and order, would be to set up a linked list of pulses, and stash them as they arrived. Then you are still faced with the unpleasant task of trying to "reprocess" the pulses — some of them might be kernel pulses intended for your resource manager, and good luck trying to recork that genie!
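For the record, here's roughly what that stashing would look like. This is a sketch only: chid is assumed to be the channel in question, pulse_node is a made-up type, and the one-nanosecond TimerTimeout() is just a trick to make MsgReceive() return with ETIMEDOUT instead of blocking once nothing is queued:

#include <sys/neutrino.h>
#include <stdint.h>
#include <stdlib.h>
#include <errno.h>
#include <time.h>

struct pulse_node {
    struct _pulse      pulse;
    struct pulse_node  *next;
};

void
drain_pulses (int chid)
{
    struct pulse_node  *head = NULL, **tail = &head;
    struct _pulse      pulse;
    uint64_t           timeout = 1;     // (nearly) zero; don't block
    int                rcvid;

    for (;;) {
        TimerTimeout (CLOCK_REALTIME, _NTO_TIMEOUT_RECEIVE, NULL, &timeout, NULL);
        rcvid = MsgReceive (chid, &pulse, sizeof (pulse), NULL);
        if (rcvid == -1 && errno == ETIMEDOUT) {
            break;                      // nothing left queued (for now)
        }
        if (rcvid == 0) {               // a pulse; stash it, preserving order
            struct pulse_node *node = malloc (sizeof (*node));
            node -> pulse = pulse;
            node -> next  = NULL;
            *tail = node;
            tail  = &node -> next;
        } else {
            // a real message arrived mid-drain; it has to be dealt with
            // somehow, which is part of why this scheme is unpleasant
            MsgError (rcvid, EAGAIN);
        }
    }
    // now walk the list and "reprocess" the stashed pulses, including any
    // kernel pulses that were really meant for the resource manager layer
}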

The reason you'd want to drain pulses became clear to me after working on a contract at a company that had a large system with many processes. The processes would send pulses to "worker" processes, telling them to do some kind of processing. The pulses had to be processed in order (and were prioritized as well). The problem was that when a process was restarted, all pulses from that process had to be drained, to ensure that any new pulses were coming from the new instance of the process rather than the old one.

Extend MsgDeliverEvent for local delivery

The MsgDeliverEvent function takes a receive ID and a struct sigevent. It would be a useful extension to allow a zero for the receive ID to mean "interpret this event in the context of this process." Where this comes into play is when a process needs to deliver some event, and the event may need to go to itself or to an external process. Currently, delivery to an external process uses MsgDeliverEvent, while local delivery has to use MsgSendPulse.

In large systems, this means additional tracking when an event needs to be generated by a list processor: the list processor has to look at the stored receive ID and call either MsgDeliverEvent or MsgSendPulse, depending on a flag value. Further, MsgSendPulse takes a connection ID, which must now be stored in the list as well as the receive ID and the struct sigevent. The pulse code and value can be stored in a "made up" struct sigevent and later fished out by the list processor and handed to MsgSendPulse, based on the flag. Kinda awkward. Compare (current method):

if (list -> flag == LOCAL) {
    MsgSendPulse (list -> coid,
                  list -> event.sigev_priority,
                  list -> event.sigev_code,
                  list -> event.sigev_value.sival_int);
} else {
    MsgDeliverEvent (list -> rcvid, &list -> event);
}

vs proposed method:

// if rcvid is zero, it means local delivery
MsgDeliverEvent (list -> rcvid, &list -> event);
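For what it's worth, filling in a list entry under the proposed scheme would look something like this. The list fields and the client_rcvid, client_event, local_coid, prio, code and value names are made up for illustration; SIGEV_PULSE_INIT() is the existing macro for initializing a pulse event:

// for an external consumer: store the rcvid and the event it gave us
list -> rcvid = client_rcvid;
list -> event = client_event;

// for a local (self-directed) event: pack the pulse into the event,
// and let a zero rcvid mean "deliver it to ourselves"
SIGEV_PULSE_INIT (&list -> event, local_coid, prio, code, value);
list -> rcvid = 0;

Either way, the list processor ends up making the single MsgDeliverEvent call shown above.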

Create a kernel third-party copy service

Intermediate level servers often find themselves copying data from one client to another server, and back:

char    xbuf [32768];
int     curoff;
int     nread;

curoff = 0;
while (length > 0) {
    // read a chunk from the source server...
    nread = read (coid, xbuf, sizeof (xbuf));
    if (nread <= 0) {
        break;          // EOF or error
    }
    // ...and copy it into the client's buffer at the right offset
    MsgWrite (rcvid, xbuf, nread, curoff + offset);
    curoff += nread;
    length -= nread;
}

It would be far more efficient if the kernel were to do the copy between the two address spaces for us:

MsgXferCoidToRcvid (coid, length, rcvid, offset);