In the past, if you needed to add subtitles, closed captions or event triggers to your broadcast service, how did you do it?
- First it was done with lots of dedicated hardware, in a big 19″ rack unit connected with dedicated signal cables.
- Then it was done using embedded processors, in a load of 19″ rack units all connected with dedicated signal cables.
- Then it was done in software, in a few 19″ rack-mount PCs connected with dedicated signal cables, and later just network cables.
Now it’s moving to the cloud; but just saying it’s ‘In the Cloud’ can cover a multitude of approaches. Often it’s just the same old software, spun up on an AWS instance and marketed as a ‘New’ cloud service. It’s the same software and works in the same way, but it takes no advantage of the agility, flexibility and scalability of cloud delivery.
Often, pushing functionality into The Cloud can be counterproductive. Media processing usually involves working on very large files, and moving these about is slow, creates security risks, clogs up your available bandwidth and adds cost. This is especially true when working on ancillary services, where the data element can be tiny compared to the whole media file. Adding a subtitle track may only add a few hundred kilobytes to a multi-gigabyte MXF file, so it would be crazy to move the large file to the processing tool. Much better to move the small data file, and the processing tool, to wherever the MXF file is stored and do the job there.
This is a true microservice – used intelligently.
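As a rough illustration of the bandwidth argument above, here is a minimal sketch comparing the two approaches. The file sizes and link speed are illustrative assumptions (a 20 GiB programme master, a ~300 KiB subtitle track, a 1 Gbit/s link), not measurements from any real system:

```python
# Back-of-the-envelope comparison: moving the media to the tool
# versus moving the data (and the tool) to the media.
# All figures below are illustrative assumptions.

MXF_SIZE_BYTES = 20 * 1024**3      # assumed 20 GiB programme master
SUBTITLE_SIZE_BYTES = 300 * 1024   # assumed ~300 KiB subtitle track
LINK_BITS_PER_SEC = 10**9          # assumed 1 Gbit/s network link


def transfer_seconds(size_bytes: int, bits_per_sec: int) -> float:
    """Idealised transfer time, ignoring protocol overhead and latency."""
    return size_bytes * 8 / bits_per_sec


move_media = transfer_seconds(MXF_SIZE_BYTES, LINK_BITS_PER_SEC)
move_data = transfer_seconds(SUBTITLE_SIZE_BYTES, LINK_BITS_PER_SEC)

print(f"Move the MXF to the tool:      {move_media:8.2f} s")
print(f"Move the subtitles to the MXF: {move_data:8.4f} s")
print(f"Ratio: roughly {move_media / move_data:,.0f}x")
```

Even under these idealised numbers, shipping the media file takes minutes while shipping the subtitle data takes milliseconds, which is the whole case for sending the small job to the data rather than the data to the job.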