In the previous Cloud Broadcasting article, we looked at Agile software development and its relevance to cloud computing. In this article, we delve further into Cloud-Born systems and investigate the differences between public and private clouds and where you might want to use them.
Public and private clouds appear to be very similar; they both deliver IT infrastructure and network services to host software programs and provide data storage. A public cloud is an off-premise system that provides computing and storage over the internet from a third-party supplier; the broadcaster has little or no control over the physical aspects of the servers, storage or network. A private cloud refers to a similar computing and storage system under the direct control of a broadcaster; the infrastructure may be on or off their premises.
On or Off Premise
Datacenters are the underlying physical aspects of the cloud – the power, racks, air-conditioning, internet connections, building and security. They are often referred to as on-premise and off-premise, but these classifications do not add to the differentiation between public and private clouds. It’s possible to have a private cloud in an on-premise or off-premise datacenter, or a public cloud in an on-premise or off-premise datacenter.
Private cloud systems are provided by in-house IT and network teams, giving the broadcaster much more control over the infrastructure. They can fine-tune input-output speeds on servers, network bandwidths and the throughput of high-speed files to improve the efficiency and speed of the system.
A live sports production can place all video and audio processing in the public cloud; only the cameras, microphones and control interfaces sit outside of it. The finished programme is delivered to the viewer at home over the internet.
Where are Public Clouds?
Public cloud providers do not give access to the underlying hardware, and in the interests of security it’s generally not known what type of hardware is used or where it is located. Few people know where AWS datacenters are located, and as subscribers we cannot turn up at the front door of the US-East datacenter in Northern Virginia and ask to replace a server. In private clouds, we do this regularly; in public cloud systems, we do not.
Consequently, IT and network teams cannot tweak the servers or switches to fine-tune the system to broadcast needs. This is often seen as a disadvantage, especially when considering egress and ingress costs to the public cloud. However, data transfer costs and the need to fine-tune hardware diminish as more infrastructure is moved to the public cloud.
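As a rough illustration of how those egress charges accrue, here is a minimal sketch. The $0.09/GB rate and the 50 Mb/s programme feed are assumptions for illustration only, not any provider's actual pricing; check current rate cards before budgeting.

```python
# Rough egress-cost estimate for a feed streamed out of a public cloud.
# EGRESS_USD_PER_GB is an assumed figure for illustration only.
EGRESS_USD_PER_GB = 0.09


def egress_cost_usd(mbps: float, hours: float,
                    rate: float = EGRESS_USD_PER_GB) -> float:
    """Cost of streaming `mbps` megabits/s out of the cloud for `hours`."""
    gigabytes = mbps * 3600 * hours / 8 / 1000  # Mb/s over time -> GB
    return gigabytes * rate


# An assumed 50 Mb/s programme feed running for a 3-hour event:
print(f"${egress_cost_usd(50, 3):.2f}")  # prints "$6.08"
```

Note how modest the per-event egress figure is next to the cost of shipping a crew and an OB truck; this is the sense in which transfer costs diminish in importance as workflows move wholesale into the cloud.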
Backhaul Cameras to Public Cloud
Broadcasters have been resistant to moving playout centers to public cloud systems, but as business owners see the versatility, cost savings and reliability of public clouds, they will soon move. There has been some progress on this: the backhauling of camera and microphone feeds from sports events, allowing production to take place in the studio, has resulted in massive cost savings, as expensive crews do not need to be shipped around the world. Piping these feeds into public clouds and mixing live programs on AWS or Azure servers is only another small step, and one that is sure to be made soon.
Cloud Vision Mixer
In the new IP world, a feed from a camera would enter a network gateway to the public cloud, be stored and processed in a software vision switcher, processed by the playout system, then transcoded and delivered to the end user via CDNs (content delivery networks). When the sports event finished, all the servers associated with the game would be deleted, stopping further costs.
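The spin-up-and-delete lifecycle described above can be sketched as follows. The `CloudServer` class and the stage names are hypothetical stand-ins for what a provider SDK (e.g. instance creation and termination calls) would actually manage; this is a toy model of the idea, not a real deployment.

```python
# Toy model of an ephemeral cloud production chain: create the stages
# for the event, pass frames through them, then tear everything down
# so no further costs accrue. All names here are illustrative.

class CloudServer:
    def __init__(self, role: str):
        self.role = role
        self.running = True

    def stop(self) -> None:
        # Stands in for terminating/deleting the virtual machine.
        self.running = False


def run_event(frames):
    # Spin up the processing chain only for the duration of the event.
    chain = [CloudServer(role) for role in
             ("gateway", "vision-switcher", "playout", "transcoder")]
    # Each frame passes through every stage in order (simplified).
    processed = [f"{frame} via {'/'.join(s.role for s in chain)}"
                 for frame in frames]
    # Event over: delete every server so billing stops.
    for server in chain:
        server.stop()
    torn_down = all(not s.running for s in chain)
    return processed, torn_down


output, torn_down = run_event(["frame1", "frame2"])
print(torn_down)  # prints "True": nothing left running, nothing billing
```

The design point is the last loop: in a public cloud, decommissioning is a delete call rather than idle hardware still being depreciated in a machine room.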
A private cloud is a system dedicated entirely to one company and under its direct control; it’s a datacenter that provides cloud services such as resilience, virtualization, scalability and security. However, its ability to provide these services is limited compared to the public cloud equivalent.
The previously described live-sports scenario is still possible using private clouds, but the cost savings would not be passed on to the broadcaster, as they would have already invested in the underlying hardware and would still be paying for it even when the virtual machines were switched off.
Many users share public cloud resources, and even a major service provider may find themselves using the same physical server as a small business user. Subscribers never know who else is sharing their server, because of the security and data protection systems in place.
Service Level Agreements
Public clouds are well governed by service level agreements with guaranteed uptime, and providers are subject to penalty payments should those guarantees be breached. In the past, broadcast engineers tried to make systems as reliable as possible, attempting to reach one hundred percent delivery and uptime using redundant systems and backup paths. But no system is one hundred percent perfect; every system, however small or large, will eventually fail in a way nobody could predict.
Engineers designing infrastructure now think differently about the reliability and up-time of their public cloud systems. Using SLAs as a guide, they can estimate the uptime of a system with some certainty; this is presented to the business owners and CEO, who are left to decide how much reliability they want, accepting it will never be complete. In mathematical terms, the up-time of any system tends towards a limit as we add more and more resource, but it never reaches the asymptote, in this case one hundred percent, however much redundancy we add.
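That limit can be made concrete with a little arithmetic. Assuming n independent redundant paths, each meeting a hypothetical 99.9% SLA, the system is down only when all n paths fail at once, so the combined availability is 1 − (1 − a)^n. Each extra path pushes the figure closer to 100% without ever reaching it:

```python
# Combined availability of n independent redundant paths, each with
# per-path availability a. The 99.9% figure is an assumed SLA used
# purely for illustration.

def combined_availability(a: float, n: int) -> float:
    # Down only when every path is down simultaneously.
    return 1 - (1 - a) ** n


for n in range(1, 4):
    print(n, f"{combined_availability(0.999, n):.9f}")
# Each added path buys another "three nines", but the result
# stays strictly below 1.0.
```

This is the calculation engineers can put in front of business owners: each extra layer of redundancy has a real cost, and the decision of how many nines are worth paying for is a business risk judgment, not an engineering one.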
Public clouds have taught us to think in terms of SLAs and be realistic about our expectations of reliability. Thinking we can design a system with one hundred percent up-time is living with a false sense of security. Using public clouds gives engineers the opportunity to be realistic about reliability expectations and, more importantly, to communicate this to the business owners and CEOs so they can evaluate the risk to the business, something they do daily.
We cannot apply this SLA model equally to in-house designed private clouds; there’s simply too much emotional attachment and vested interest in the system, and there’s never enough resource available.
Rather than think of private and public clouds in terms of their physical manifestation and geographical location, it’s more useful to think in terms of service level agreements and who provides them: either a third-party supplier (public cloud) or the in-house IT and networks team (private cloud).