New delivery channels require broadcasters, cable and satellite providers to continually adapt to changing network conditions.
Building the technical core of a cable, satellite or broadcast system starts with predicting transport needs. Add to that constantly changing viewer demands and the need to support new technologies such as 4K, UHD and HDR. Now ask yourself: what state-of-the-art monitoring technology will be needed to ensure proper system performance?
If you spend any time in a gym, you may enjoy watching how other users approach the activity. Some of them look as if building a vast bulk of muscle is the sole aim, and you can’t help wondering what’s the point of putting so much effort into developing enormous biceps, quads, a ridiculously sculpted six-pack and all the rest. These muscle-bound hulks are all show; contrast them with a professional athlete, whose physique is developed for utility, to be faster, more agile, more injury-resistant, and with exactly the weight of muscle calculated to be fit for the task, without a spare ounce to cause drag.
Then there are the other gym-goers, whose priority is cardio-vascular fitness. No need for a lot of muscle there. And there are those who work mostly on core strength. They can usually be identified by a certain lean grace, beautiful carriage, correct posture, and fluid movement. But watch one of these elegant creatures zip through 200 ab-crunches apparently without effort and you realise they’ve got terrific strength where it counts.
The core is where the ‘chi’ flows from; a strong core is the reason why very slightly built martial arts practitioners can deliver such power at the point of impact yet remain nimble, defeating opponents of many times their body weight who have far less agility.
Fitness for purpose and efficient provisioning are now a mantra for media businesses. As in any area of business today, no media organization wants to carry spare weight or spend money on capacity that looks good in theory but which delivers no real performance benefit. So core strength is a good analogy to bring from the gym to the network infrastructure.
Today’s viewers and advertisers expect near-perfect image quality and 100% availability. If a service falls short – whether through technical difficulties or human error – it will quickly be perceived as problematic and unreliable. The solution is continuous multi-point monitoring, so that potential problems can be detected early and corrected quickly.
As delivery technology evolves (UHD, 4K, IP, streaming), each transition has to be closely monitored to give viewers the same Quality of Experience as established services, free from artifacts such as macroblocking.
Consider the core
Ingest and headend are of course important, as are the last mile and all the other links in the chain, but with the wildfire proliferation of services and channels, it’s the strength of the core network that underpins the ability to maintain high quality. So to stay ahead in the bandwidth race, providers are moving from 40 Gigabit core networks to 100 Gigabit as soon as the technology allows.
What’s driving all this need for core fitness? Media services are the tip of the iceberg – a huge tip, it’s true, but to a large degree the demand created by services like Netflix is predictable; there are only so many channels, and so many subscribers. These numbers change but they do not vary wildly overnight, so there are no great peaks and troughs. On top of that, media services are a one-way street, with content being served out to local caches and thence to the consumer, but no equivalent traffic in the opposite direction. So if it were only a case of provisioning core networks to deal with demand from this source, it would be a relatively straightforward task.
Less visible under the surface, the real load on core networks comes from symmetrical traffic, and the world is creating much, much more of this, with an exponential growth rate. What’s more, this type of traffic is unpredictable from minute to minute; it’s impossible to say how many more people will discover the joys of video communication next month, and when sudden peaks in video chat load will occur.
The same is true for other forms of video communication – news gathering services such as Periscope or other live streaming services like Meerkat and Snapchat. The primary attraction of these is the instantaneous access they give to events as they arise, often before conventional camera crews have set up. Peaks are therefore likely at any time as events unfold, and this means that unpredictability is an inescapable condition of this type of traffic.
In this monitoring scenario, a Bridge Technologies VB288 content extractor is inserted just prior to scrambling at a cable head-end. The extracted metadata and decoded imagery are then fed to the VideoBridge Controller. Such a solution enables operators to inspect massive amounts of content, well beyond human eyeball capability, with dependable alarming on objective parameters that may affect QoE.
With nearly everybody on the planet carrying a smartphone, it’s possible to envisage a situation, not too long from now, where significant events in public places are live-streamed by dozens of people at once. That is potentially a bonanza for broadcasters and other content providers, who will be able to choose from multiple camera angles and a variety of detailed views of an event – but it is also the cause of dramatic increases in load on the infrastructure.
Looking at video chat alone, if each chat uses between 400 and 700 kbit/s, a mere one thousand concurrent users will eat up most of a 1 Gig link. With figures like those, it’s not hard to see why providers are urgently beefing up their core networks to cope with the volatile surges in bandwidth demand that widespread use of these services is already creating. It’s essential to have plenty of headroom, because if a service provider can’t deliver the bandwidth when it’s demanded, users will go elsewhere.
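The arithmetic behind that claim is easy to check. This short sketch simply multiplies out the per-stream rates quoted above against a 1 Gig link; the rates and link capacity come from the article, the rest is plain arithmetic:

```python
def aggregate_load_mbps(users: int, kbps_per_stream: int) -> float:
    """Total load in Mbit/s for a number of concurrent streams."""
    return users * kbps_per_stream / 1000

LINK_CAPACITY_MBPS = 1000  # a 1 Gig link

# Per-stream video chat rates from the article: 400 and 700 kbit/s.
for rate in (400, 700):
    load = aggregate_load_mbps(1000, rate)
    share = load / LINK_CAPACITY_MBPS
    print(f"{rate} kbit/s x 1000 users = {load:.0f} Mbit/s "
          f"({share:.0%} of a 1 Gig link)")
```

At the upper end, 1,000 users consume 700 Mbit/s, i.e. 70% of the link, leaving little headroom once other traffic is added.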
However, no provider can afford to carry a lot of spare muscle that isn’t paying its way, so the provisioning of core networks still has to be executed with great care in order to remain competitive. Infrastructure investment has to be matched to potential demand, however unpredictable it may be. Good monitoring technology can be enormously helpful here, giving engineering staff detailed insight into load levels, correlated errors and service degradation, and providing a real aid to proactive planning. With the most advanced monitoring systems, engineers can not only track and rectify errors and analyse service failures, but also chart increasing levels of strain at all points in the network; that way, they can see where extra capacity is needed and head off problems before they become a reality and have a negative impact on the service level.