Notes
Here we provide some general notes about the code.
- If a data message arrives when no one is waiting, or there's no room
in the list for a new waiter client, we print a message
to standard error, but never reply to the client.
This means that some clients could be sitting there, reply-blocked
forever (we've lost their receive ID, so we have no way to reply to them).
This is a deliberate design decision. You could modify the example to add MT_NO_WAITERS and MT_NO_SPACE messages, respectively, which could be returned whenever these errors are detected.
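The modification suggested above might look like the following minimal sketch. The MT_NO_SPACE constant, the clients[] table, and the add_waiter() helper are assumptions for illustration; MsgReply() is stubbed here so the sketch can run off-target (on QNX you'd call the real MsgReply() from <sys/neutrino.h>):

```c
#include <string.h>

/* Hypothetical message type and client table; these names are
 * assumptions, not taken from the original example. */
#define MT_NO_SPACE   101
#define MAX_CLIENTS   4

struct client {
    int in_use;
    int rcvid;
};

static struct client clients[MAX_CLIENTS];

/* Stand-in for QNX's MsgReply() so this sketch is testable off-target;
 * on QNX, MsgReply(rcvid, EOK, &reply, sizeof(reply)) unblocks the
 * reply-blocked client. */
static int last_reply_rcvid;
static int last_reply_type;
static int MsgReply(int rcvid, int status, const void *msg, size_t size) {
    last_reply_rcvid = rcvid;
    last_reply_type = *(const int *)msg;
    (void)status; (void)size;
    return 0;
}

/* Instead of silently dropping the client when clients[] is full,
 * reply with MT_NO_SPACE so the client unblocks with an error. */
static int add_waiter(int rcvid) {
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (!clients[i].in_use) {
            clients[i].in_use = 1;
            clients[i].rcvid = rcvid;
            return 0;   /* client stays reply-blocked until data arrives */
        }
    }
    int reply_type = MT_NO_SPACE;
    MsgReply(rcvid, 0, &reply_type, sizeof(reply_type));
    return -1;
}
```

An MT_NO_WAITERS reply in the data-message path would follow the same pattern.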
- When a waiter client is blocked and a data-supplying client sends, we reply to both clients. This is crucial, because we want both clients to unblock.
- We reuse the data-supplying client's buffer for both replies. Again, this is a style issue; in a larger application you'd probably have multiple types of return values, in which case you may not want to reuse the same buffer.
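The double reply described in the two points above can be sketched as follows. MsgReply() is stubbed so the sketch runs off-target, and handle_data() is a hypothetical helper, not the original example's code:

```c
#include <stddef.h>

/* Stand-in for QNX's MsgReply() so this sketch can run off-target;
 * on QNX it unblocks the reply-blocked client identified by rcvid. */
static int replies_sent;
static int MsgReply(int rcvid, int status, const void *msg, size_t size) {
    (void)rcvid; (void)status; (void)msg; (void)size;
    replies_sent++;
    return 0;
}

/* When a data-supplying client sends and a waiter is already blocked,
 * both clients must unblock, so we reply twice: once to the waiter
 * (with the data) and once to the supplier (as a status).  Reusing
 * the supplier's buffer for both replies is the style choice noted
 * above. */
static void handle_data(int supplier_rcvid, int waiter_rcvid,
                        void *buf, size_t nbytes) {
    MsgReply(waiter_rcvid, 0, buf, nbytes);    /* waiter gets the data */
    MsgReply(supplier_rcvid, 0, buf, nbytes);  /* supplier gets unblocked */
}
```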
- The implementation shown here uses a cheesy, fixed-length array with an in-use flag (clients[i].in_use). Since my goal here isn't to demonstrate tricks and techniques for singly-linked list management, I've shown the version that's the easiest to understand. Of course, in your production code, you'd probably use a linked list of dynamically managed storage blocks.
- When a message arrives in the MsgReceive(), our decision as to whether or not it was in fact our pulse is based on weak checking: we assume (as per the comments) that all pulses are the CODE_TIMER pulse. Again, in your production code you'd want to check the pulse's code value and report on any anomalies.
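The stricter pulse check suggested above might look like this sketch. The simplified struct _pulse and the constants here are stand-ins mirroring <sys/neutrino.h> (on QNX, you'd first check that MsgReceive() returned 0, indicating a pulse, before inspecting the code):

```c
#include <stdio.h>

/* Stand-ins mirroring QNX definitions so the sketch runs off-target. */
#define _PULSE_CODE_MINAVAIL 0
#define CODE_TIMER (_PULSE_CODE_MINAVAIL + 0)

struct _pulse { int code; };   /* simplified stand-in for the QNX struct */

/* Returns 1 if this is our timer pulse; reports the anomaly instead of
 * silently assuming every pulse is CODE_TIMER. */
static int is_timer_pulse(const struct _pulse *p) {
    if (p->code != CODE_TIMER) {
        fprintf(stderr, "unexpected pulse code %d\n", p->code);
        return 0;
    }
    return 1;
}
```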
Note that the example above shows just one way of implementing timeouts
for clients.
Later in this chapter (in "Kernel timeouts"), we'll talk about kernel timeouts,
which are another way of implementing almost the exact same thing,
except that the timeout is driven by the client, rather than by a timer.