Why I'm watching the Commonwealth Games

July 24, 2014 at 10:42 PM | categories: people, python, IP Studio, javascript, bbc, C++, dayjob, CWG

People don't talk enough about things that go well, and I think I'm guilty of that sometimes, so I'll talk about something nice. I spent today watching lots and lots of the Commonwealth Games, and talking about the output too.

Not because I'm a sports person - far from it. But because the output itself was the result of hard work by an amazing team at work, paying off with results that on the surface look like... "those are nice pictures".

The reality is so much more than that - much as they say a swan looks serene gliding across a lake while paddling madly underneath. The pictures themselves - Ultra HD, or 4K - are really neat, quite incredible to look at. The video quality is pretty astounding, and given the data rates coming out of the cameras it's amazing really that it's happening at all. By R&D standards the team working on this is quite large; in industry terms it's quite small - almost microscopic - which makes this no mean feat.

Consider for a moment the data rate involved at the minimal raw data rate: 24 bits per pixel, 3840 x 2160 pixels per picture, 50 pictures per second. That's just shy of 10Gbit/s of raw data. We don't have just one such camera, but four. The pixels themselves have to be encoded/decoded in real time, meaning 10Gbit/s in plus whatever the encode rate is - which for production purposes is around 1-1.2Gbit/s. So that's around 45Gbit/s for capture. Then it's being stored as well, so that's 540Gbyte/hour - per camera - for the encoded versions, so over 2TB per hour for all four cameras. (Or of that order if my numbers are wrong - ie an awful lot)
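If you want to sanity-check those numbers yourself, a few lines of Python will do it (the 1.2Gbit/s figure is the production encode rate mentioned above; everything else is basic arithmetic):

```python
# Back-of-envelope check of the data rates quoted above.
bits_per_pixel = 24
width, height = 3840, 2160   # UHD / 4K
fps = 50

raw_bps = bits_per_pixel * width * height * fps
print(f"raw per camera:    {raw_bps / 1e9:.2f} Gbit/s")         # ~9.95 Gbit/s

encode_gbps = 1.2            # production encode rate, Gbit/s
cameras = 4
capture = cameras * (raw_bps / 1e9 + encode_gbps)
print(f"capture total:     {capture:.1f} Gbit/s")               # ~45 Gbit/s

per_camera_hour = encode_gbps / 8 * 3600                        # GByte/hour
print(f"stored per camera: {per_camera_hour:.0f} GByte/hour")   # 540 GByte/hour
print(f"all four cameras:  {cameras * per_camera_hour / 1000:.1f} TByte/hour")  # ~2.2 TB/hour
```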

By itself, just getting that working in real time - with hard realtime constraints - is an impressive feat.

However, we're not working with traditional broadcast kit - kit designed from the ground up with realtime constraints in mind, running over broadcast networks built for realtime constraints, with tight signalling for synchronisation. We're doing this with commodity computing kit over IP networks - albeit with ISP/carrier grade networking kit. The 4K cameras we use are connected to high end capture cards, and the data is then encoded using software encoders into something tractable (a 1.2Gbit/s data rate) to send around the network - the aim being to get the data into the IP world as quickly as possible.

This is distributed around the studio realtime core environment using multicast RTP. That means we can then have decoders, retransmitters, clean switchers, analysers, and so on, each individually picking up the data they're interested in to do processing. This is in stark contrast to telling a video router matrix that you'd like some dedicated circuits from A to B - it's more flexible right out of the box, and lower cost. The lower cost comes at the price of needing sufficient bandwidth.
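To make that concrete, here's a minimal sketch of what "picking up the data you're interested in" looks like: a receiver joining a multicast group and peeking at the RTP headers. The group address and port are made up for illustration - they're not our actual studio configuration:

```python
import socket
import struct

# Illustrative values only -- not the real studio configuration.
MCAST_GRP = "239.0.0.1"
MCAST_PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the multicast group. Any number of decoders/analysers/switchers can
# do this independently -- which is what replaces dedicated router circuits.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(65535)
    # The first 12 bytes are the fixed RTP header (RFC 3550).
    version = packet[0] >> 6
    payload_type = packet[1] & 0x7F
    seq = struct.unpack("!H", packet[2:4])[0]
    timestamp = struct.unpack("!I", packet[4:8])[0]
    print(f"RTP v{version} pt={payload_type} seq={seq} ts={timestamp}")
```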

At the edges of the realtime core we have stores that look for live audio/video/data and store them. We have services for transferring data (using DASH) from high speed storage on the capture nodes to lower speed, cheaper storage.
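The nice thing about using DASH for that is the transfer is just HTTP: the capture node serves its recording as segments, and the archival service fetches them. A rough sketch of the idea - the hostnames, paths and segment naming here are all hypothetical, and real code would enumerate segments from the DASH MPD manifest rather than guessing names:

```python
import requests
from pathlib import Path

# Hypothetical locations, purely for illustration.
CAPTURE_NODE = "http://capture-01.example.net/recordings/cwg-cam1"
ARCHIVE = Path("/srv/archive/cwg-cam1")
ARCHIVE.mkdir(parents=True, exist_ok=True)

# Pull encoded segments off the fast capture storage onto the cheap tier.
for n in range(1, 10_000):
    name = f"segment-{n:05d}.m4s"
    resp = requests.get(f"{CAPTURE_NODE}/{name}", timeout=10)
    if resp.status_code == 404:   # no more segments (real code reads the MPD)
        break
    resp.raise_for_status()
    (ARCHIVE / name).write_bytes(resp.content)
    print(f"archived {name}")
```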

It also means that production can happen wherever you extend your studio's routing to. So, for example, the audio mixing for the UHD work was not happening in Glasgow, but in London. The director/production gallery was in a different building from the Commonwealth Games themselves. The mix they produced was provided to another team for output over a test/experimental DVB-T transmission. Furthermore, it goes to some purely IP based receivers in some homes which are part of a test environment.

As well as this, each and every site that receives the data feeds can (and did) produce their own local mixes. Since this IS an experimental system doing something quite hard, we have naturally had some software, hardware and network related issues. However, when we stopped receiving the "mix" from the clean switch output in Glasgow, we were able to route around it and pick up the feeds directly locally. Scaling that up: rather than having a London feed and regional opt-outs during big TV events (telethons etc), each of the regions could take over their local broadcast, and pull in the best of all the other regions, including London. Whether they would, I don't know. But this approach means that they could.

While the data/video being shipped is itself remarkable, what is doing the processing and shipping is in many respects even more remarkable - it's a broadcast infrastructure completely rebuilt from the ground up. It provides significantly increased flexibility at reduced cost, with "just" more upfront cost at development time.

If that wasn't enough, every single aspect of the system is controllable from any site. The reason for this is that each box on the network is a node. Each node runs a bunch of "processors" which are connected via shared memory segments, forming arbitrary pipelines sharing raw binary data and JSON-structured metadata. All the processors on each node are made available as RESTful resources, allowing complete network configuration and control of the entire system. That means everything from vision mixing/data routing to system configuration gets done inside a browser. System builds are automated, and Debian packages are built via continuous integration servers. The whole thing is built using a mix of C++, Python and JavaScript, and the result?
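Controlling a processor therefore looks like plain HTTP. A sketch of the shape of it - the node address, resource paths and payloads here are invented for illustration, not the actual API:

```python
import requests

# Invented node address and resource layout, purely illustrative.
NODE = "http://node-07.studio.example.net:8080"

# Discover what's running on the node...
processors = requests.get(f"{NODE}/processors", timeout=5).json()
print(processors)

# ...then reconfigure one of them, e.g. point a vision mixer input at a
# different multicast RTP source. A browser-based control surface does
# exactly this kind of thing under the hood.
requests.put(
    f"{NODE}/processors/vision-mixer/inputs/1",
    json={"source": "rtp://239.0.0.1:5004"},
    timeout=5,
)
```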

"Pretty nice pictures"

And that, really, is the aim. Personally, I think 4K is pretty stunning, and if you get the chance to see it, you should. If you're in the BBC in Salford, come and have a look.

So why am I posting this? I don't normally talk about the day job here - that should normally be on the BBC blog or similar. It's because I made a Facebook post bigging up my colleagues at work in the context of this, so I'll finish on that.

Basically, I joined the IP Studio team (from one R&D team to another) 2 1/2 years ago. At that time many of the things I mention above were a pipe dream - and since then I've touched many parts of the system, only to have them ripped out and replaced with newer, better ways of doing things. Over that time I've learnt loads, worked with great people, and been pretty humbled by everyone on the team, each for a different reason.

If you ever feel "am I the only one to suffer from imposter syndrome?", the answer is no. Indeed, I think it's a good thing - it means you're lucky to be working with great people. If you're ever upset at someone pointing at your code - which you were dead proud of when you wrote it - and saying "that's a hideous awful mess", then you're missing the point: if someone isn't saying that, your codebase can't improve. After all, I've yet to meet anyone who looks at their old code and doesn't think it's a hideous awful mess. Most people are just too polite to mention it.

However, just for one moment, consider the simple elegance of the system: every capture device publishes to the network; every display device subscribes to the network; everything is network controllable. The parts of the system are distributed to wherever it's best for them to run, and built using largely commodity kit. The upshot? A completely reinvented studio infrastructure that is fit for, and native to, a purely IP based world, while still working for modern broadcast too. Why wouldn't I have imposter syndrome? The team is doing amazing things, and that really means every individual is doing amazing things.

And yes, THAT song from the Lego movie springs to mind for me too.

Normal cynicism will be resumed in a later post :-)

(Oh, and as usual, bear in mind that while I'm referring to work stuff, I'm obviously a) missing things out, b) not speaking for the BBC, c) simplifying some of the practical issues, so please don't try and pretend this is in any way official. Now where's that ass-covering close tag - ah, here it is -> )
