The Purported Evils of Buffering

The hardest choice in IT and business may not be a choice at all. CTO and co-founder Raymond Russell explains why.

By Raymond Russell · 25 August 2016 · Thinking

The debate about buffering

I wrote a few weeks ago about the importance of continuous packet capture in data-driven operations, where complete visibility is vital for performance and security forensics. One strand of discussion that I ventured into briefly was that of architecture, and how packet capture fits into a streaming analytics system. Interestingly, this aside triggered a lot of discussion in our community, and in particular the topic of buffering caused some confusion, if not angst. At its core, the issue boiled down to the question of whether buffering is a good thing or a bad thing.

Buffering certainly gets a bad rap: when we're trying to watch our favorite TV show online, for example, we all hate to see the image freeze and the dreaded "buffering 99%" message pop up. That bad rap is somewhat justified by the fact that buffering and delay are closely linked. A buffer is needed only where the capacity (your shared internet connection) is exceeded by the demand (the bursts of video frames streaming out to your TV), in which case the buffer accumulates the excess until it can be processed.

That wait for processing is what constitutes the delay, and that delay is often undesirable. In VoIP (voice-over-IP), for example, it's much better to throw away delayed voice samples than to decode and play them back late. We've all had those confusing calls where you hear nothing from the person on the other end for a couple of seconds, so you start to speak just as their voice arrives, then back off, not wanting to interrupt them. The delay makes the whole conversation constantly stop-start, whereas discarding the delayed samples would just drop or distort a word here or there. The human ear is excellent at correcting for such distortion, and the drops can end up being imperceptible.
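To make that trade-off concrete, here is a minimal sketch of such a playout policy: samples arriving within a fixed delay budget are played, anything later is discarded rather than stalling the conversation. It is not drawn from any real VoIP stack; the threshold and names are purely illustrative.

```python
# Illustrative sketch only (not a real VoIP stack): a playout policy that
# discards voice samples arriving after a fixed delay budget instead of
# delaying the whole conversation. Threshold and names are hypothetical.

PLAYOUT_BUDGET_MS = 60  # maximum tolerable delay before a sample is discarded

def playout_decision(samples):
    """samples: iterable of (seq, sent_ms, arrived_ms) tuples in arrival order."""
    played, dropped = [], []
    for seq, sent_ms, arrived_ms in samples:
        if arrived_ms - sent_ms <= PLAYOUT_BUDGET_MS:
            played.append(seq)    # on time: decode and play
        else:
            dropped.append(seq)   # too late: discard rather than delay playback
    return played, dropped

# Sample 3 arrives 100 ms late and is dropped; the rest play on time.
print(playout_decision([(1, 0, 20), (2, 20, 45), (3, 40, 140), (4, 60, 85)]))
```

Tightening the budget drops more words but keeps the conversational rhythm natural; loosening it does the opposite.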

The indispensability of network buffers

So our direct experience with digital media prejudices us towards thinking of buffering as a bad thing. However, this is yet another example of how relying only on our intuition can lead us to the wrong conclusion: in most uses of technology, buffering is essential to the reliable and economical capture and delivery of data.

Let's discuss reliability first: there are many cases where a little buffering goes a long way towards smoothing over speed mismatches. Networked applications often exchange messages or chunks of content that are too large to fit in network packets; without buffers in the network stacks at both ends, that content couldn't be transmitted and received. Similarly, the internal fabrics of network switches typically run at much higher speeds than the external interfaces, to avoid head-of-line blocking; without at least some input buffering, packets couldn't be deserialized from the external links onto the fabric.
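As a toy illustration of the first case, consider a message larger than a single packet: the sender must fragment it, and the receiver must buffer the fragments until the whole message can be handed to the application. This is a sketch under assumed values (the MTU figure and class names are hypothetical), not a real network stack.

```python
# Illustrative sketch (not a real network stack): buffering is what lets a
# message larger than one packet be sent and received. The sender fragments
# the message; the receiver buffers chunks until reassembly is complete.

MTU = 1500  # bytes of payload per packet; a typical Ethernet-like value

def fragment(message: bytes):
    """Split a message into packet-sized chunks for transmission."""
    return [message[i:i + MTU] for i in range(0, len(message), MTU)]

class ReceiveBuffer:
    """Accumulates chunks; the application sees nothing until the message is complete."""
    def __init__(self, expected_len: int):
        self.expected_len = expected_len
        self.chunks = []

    def on_packet(self, chunk: bytes):
        self.chunks.append(chunk)
        buffered = b"".join(self.chunks)
        return buffered if len(buffered) >= self.expected_len else None

if __name__ == "__main__":
    msg = b"x" * 4000                      # too big for one packet
    rx = ReceiveBuffer(expected_len=len(msg))
    for pkt in fragment(msg):
        complete = rx.on_packet(pkt)
    assert complete == msg                 # delivered only once fully buffered
```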

Economics is critical too: buffering provides a way to build systems that would otherwise be prohibitively costly. The output buffers on WAN links provide a great example: the core networks in datacenters run at very high speeds, but the vast majority of the traffic stays local. The question is what to do with the occasional bursts of traffic that need to go to another datacenter or out to the internet. One option would be to build WAN links to every other site with the same bandwidth as the core network. Of course, that would be astronomically expensive, not to mention completely unnecessary - most of the time, that WAN bandwidth would lie idle. A far more cost-effective design is to add some buffering on the WAN interfaces to smooth bursts arriving at datacenter core bandwidth down to the available WAN bandwidth. The trade-off is a little extra latency in exchange for orders of magnitude less WAN bandwidth, and correspondingly lower cost.
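The scale of that trade-off is easy to estimate with a toy queue simulation. The rates and burst pattern below are purely hypothetical, chosen only to show how peak buffer occupancy and queueing delay fall out of the arithmetic.

```python
# Toy simulation (illustrative numbers only): an output buffer smoothing bursts
# from a fast datacenter core onto a much slower WAN link. It reports the peak
# buffer occupancy and worst-case queueing delay - the latency paid for not
# provisioning the WAN at core speed.

CORE_GBPS = 100.0   # hypothetical core burst rate
WAN_GBPS = 10.0     # hypothetical WAN link rate
TICK_MS = 1.0

# Offered load per 1 ms tick, in gigabits: mostly idle, occasional core-rate bursts.
arrivals_gbit = [0.0] * 50
for t in (5, 6, 7, 30, 31):
    arrivals_gbit[t] = CORE_GBPS * TICK_MS / 1000.0   # a full core-rate burst in that tick

drain_per_tick = WAN_GBPS * TICK_MS / 1000.0
backlog = peak_backlog = 0.0
for a in arrivals_gbit:
    backlog = max(0.0, backlog + a - drain_per_tick)
    peak_backlog = max(peak_backlog, backlog)

peak_delay_ms = peak_backlog / drain_per_tick * TICK_MS
print(f"peak buffer occupancy: {peak_backlog * 125:.1f} MB")   # 1 Gbit = 125 MB
print(f"worst-case queueing delay: {peak_delay_ms:.1f} ms")
```

With these made-up numbers, a few tens of megabytes of buffer and a few tens of milliseconds of delay substitute for a tenfold increase in WAN capacity.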

The role of buffering in streaming analytics

At Corvil, we face challenges not unlike those that arise in network design: we build systems that process raw data at high speed, decoding a dizzying array of network protocols, and derive detailed analytics on the behaviour of the applications driving that data. On the one hand, this requires significant processing power; on the other, we have a mandate to design systems that make the best use of resources to deliver those analytics.

Buffering is of course a highly effective tool for doing so, especially because the raw data we consume comes from the network and so varies hugely in rate, particularly at the sub-second level. We could build our appliances decked out with expensive FPGAs, ready to process sustained peak loads that a modern network will never actually deliver, but we know this would be wasteful and not cost-effective for our customers. Instead we equip our appliances with an appropriate amount of system memory, allowing our customers to make an informed and economical trade-off between their streaming analytics needs and the resources required to meet them.
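A back-of-the-envelope sizing calculation shows how that trade-off plays out. The figures below are hypothetical and are not Corvil sizing guidance; they simply relate burst rate, burst duration, and sustained processing rate to the memory needed to absorb the excess.

```python
# Back-of-the-envelope sketch (all figures hypothetical, not Corvil
# specifications): memory needed to absorb a sub-second traffic burst while a
# fixed analytics pipeline drains the buffer at its sustained rate.

def buffer_needed_gb(burst_gbps: float, burst_seconds: float, drain_gbps: float) -> float:
    """GB of buffer to absorb a burst arriving faster than it can be processed."""
    excess_gbps = max(0.0, burst_gbps - drain_gbps)
    return excess_gbps * burst_seconds / 8.0   # bits -> bytes

if __name__ == "__main__":
    # e.g. half-second bursts at 40 Gb/s against a 10 Gb/s sustained analytics rate
    print(f"{buffer_needed_gb(40.0, 0.5, 10.0):.2f} GB of buffer")   # ~1.88 GB
```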

So, the next time someone tries to denigrate a system or design simply on the basis that "it buffers," don't just fall back on your experience with Netflix; consider what benefits buffering might really have to offer.
