## Broadcast Buffers

If, as you're reading this post, you find yourself thinking "ah, I've seen this pattern before, called XYZ!" please email me or something! Anyway.

A problem that comes up sometimes is this: you have a service that generates broadcast traffic to its set of clients as an ordered sequence of messages, and the clients can consume these messages at different rates - maybe some are local over IPC, some are over a slow network link, etc.

One way to do this is to impose per-client buffering. The involved data might look like this in C++:

```cpp
class Message { ... };

class Client {
  Queue<Message> messages;
};
```

with three operations like:

```cpp
bool Client::HasOutgoing() { return !messages.empty(); }

void Client::DequeueOutgoing() { Send(messages.pop_front()); }

void Client::EnqueueOutgoing(Message m) { messages.push_back(m); }
```

which is simple but can involve keeping lots of copies of the messages if there are lots of clients. We store O(n * m) data, where n is the number of clients and m is the number of messages.

### Sharing Message Bodies

One obvious thing to do here is to make the Message class very lightweight - like, a single pointer or two - and store the actual payload in some reference-counted backing structure. Then, while we do still have to store O(n * m) data, the constant factors are a lot better.

Is there a way to store only O(n) data in most cases, though?

### Using Ordering

Two important factors:

1) The messages are shared between all clients - i.e. each client receives an identical message
2) The messages are ordered

What this means is that we don't actually need to store a list of all unsent messages per client. We just need to store a reference to the first unsent message for each client! Instead, we can do this:

```cpp
class Message { ... };

class MessageQueue {
  List<Message> messages;
  int next() { return messages.count(); }
};

class Client {
  int first_unsent;
  MessageQueue *mq;
};

bool Client::HasOutgoing() { return first_unsent < mq->next(); }

void Client::DequeueOutgoing() {
  Send(mq->At(first_unsent));
  first_unsent++;
}

void Client::EnqueueOutgoing(Message m) { /* no-op */ }
```

So now we have one shared message queue, with each client keeping track of how far into the queue it has gotten so far. There's only one issue with this: the queue needs to know when it can forget about messages, but it can't currently do that without iterating through each client checking its first_unsent field. We can instead keep track, on each message, of how many clients have yet to see that message, like so:

```cpp
class Message {
  int unsent_by;
  ...
};

class MessageQueue {
  List<Message> messages;
  int offset;
  int next() { return messages.count() + offset; }
};

class Client {
  int first_unsent;
  MessageQueue *mq;
};

bool Client::HasOutgoing() { return first_unsent < mq->next(); }

void Client::DequeueOutgoing() {
  Message *m = mq->At(first_unsent);
  Send(m);
  m->unsent_by--;
  if (!m->unsent_by)
    mq->Drop(first_unsent);
  first_unsent++;
}

void Client::EnqueueOutgoing() {
  // I haven't sent the newly-appended message yet - it's in my queue.
  // Note that we bump the count on the newest message, not the one at
  // first_unsent, since this client may be lagging behind.
  mq->At(mq->next() - 1)->unsent_by++;
}
```

and voila :)

One further optimization is possible: because Client::EnqueueOutgoing() doesn't actually alter the state of the receiving client at all, we could hoist it out of the per-client loop and optimize it down to just a single:

```cpp
mq->At(mq->next() - 1)->unsent_by = clients.count();
```

$#t Broadcast Buffers
$#s A data structure for handling buffered messages being fanned out to many
$#s clients
$#o computer-science, data-structures
$#u ddc052a7-b834-4b42-8445-d76698c0958a
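As an aside, the "Sharing Message Bodies" idea can be sketched concretely with `std::shared_ptr` doing the reference counting. This is a minimal illustration with made-up names (`Payload`, `bytes()`, `use_count()` are mine, not from the post): every per-client copy of a Message is just one pointer, and the payload bytes exist exactly once.

```cpp
#include <memory>
#include <string>
#include <utility>

// Hypothetical sketch: a lightweight Message sharing its payload.
// We still store O(n * m) Message objects across n client queues, but
// each one is a single reference-counted pointer.
struct Payload {
  std::string bytes;
};

class Message {
 public:
  explicit Message(std::string bytes)
      : payload_(std::make_shared<Payload>(Payload{std::move(bytes)})) {}

  const std::string& bytes() const { return payload_->bytes; }
  long use_count() const { return payload_.use_count(); }

 private:
  std::shared_ptr<Payload> payload_;  // shared by every per-client copy
};
```

Copying such a Message into each client's queue only bumps a refcount; the last queue to drop it frees the payload.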
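Putting all the pieces together, here is a compilable sketch of the final design - one shared queue with per-message unsent counts and per-client cursors. The names that aren't in the post (`Append`, `DropSent`, `stored`, a `std::deque` backing store, and "sending" by returning the payload) are my own assumptions, and it bakes in the final optimization by setting `unsent_by` to the client count at append time. Dropping only from the front is safe because cursors move forward in order, so the front message's count always reaches zero no later than any message behind it.

```cpp
#include <deque>
#include <string>
#include <utility>

// A minimal sketch (assumed names throughout) of the shared broadcast
// queue: each client keeps only a cursor, and each message keeps a count
// of clients that have yet to send it.
struct Message {
  int unsent_by;        // clients that have yet to send this message
  std::string payload;
};

class MessageQueue {
 public:
  // One past the index of the newest message, over the queue's lifetime.
  int next() const { return static_cast<int>(messages_.size()) + offset_; }

  Message* At(int i) { return &messages_[i - offset_]; }

  // Broadcast: append a message that num_clients clients must still send
  // (the post's final optimization, done once instead of per client).
  void Append(std::string payload, int num_clients) {
    messages_.push_back(Message{num_clients, std::move(payload)});
  }

  // Forget leading messages that every client has already sent.
  void DropSent() {
    while (!messages_.empty() && messages_.front().unsent_by == 0) {
      messages_.pop_front();
      ++offset_;
    }
  }

  int stored() const { return static_cast<int>(messages_.size()); }

 private:
  std::deque<Message> messages_;
  int offset_ = 0;  // how many messages have been dropped so far
};

class Client {
 public:
  explicit Client(MessageQueue* mq) : mq_(mq) {}

  bool HasOutgoing() const { return first_unsent_ < mq_->next(); }

  // Returns the payload instead of calling a real Send(), for brevity.
  std::string DequeueOutgoing() {
    Message* m = mq_->At(first_unsent_);
    std::string payload = m->payload;  // "send" before the drop below
    m->unsent_by--;
    first_unsent_++;
    mq_->DropSent();
    return payload;
  }

 private:
  int first_unsent_ = 0;
  MessageQueue* mq_;
};
```

With two clients and two broadcast messages, the queue holds each message exactly once and forgets it the moment the slower client has sent it.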