Lossless Compression: Rewriting Data More Efficiently
Lossless compression algorithms allow the original data to be perfectly reconstructed from the compressed data.
There are many types of codecs, each serving a specific purpose: reducing file sizes so that content is easier to distribute down a limited-bandwidth pipe. Lossy compression and lossless compression are the two most common categories of data compression used to reduce the size of data without significant loss of information.
However, if you are looking for a codec that does not affect the quality of the image, lossless compression is the technology you are after. Lossless compression algorithms allow the original data to be perfectly reconstructed from the compressed data.
This seemingly magical method of reducing file sizes can be applied to both image and audio files. While JPEGs and MP3s use lossy compression, newer compression algorithms, such as JPEG 2000, JPEG XS, H.265 and Apple Lossless, offer lossless modes that can be used to create losslessly compressed files.
Basically, lossless compression rewrites the data of the original file in a more efficient way. It is most likely to be used in acquisition: cameras and their proprietary RAW formats, or anywhere you need high bit-depth precision to maintain dynamic range. It is also common in the class of intermediate codecs, namely ProRes, DNx and Canopus. Those codecs are effectively lossless, and from there you are ready to go into an efficient post-production process.
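In principle the idea is simple enough to demonstrate in a few lines of code. The following is a minimal sketch in Python using the standard library's zlib as a stand-in for a production codec (an illustration only, not any of the broadcast codecs mentioned here): the compressed bytes decompress back to an exact copy of the original.

```python
import zlib

# Any byte string stands in here for a frame of high bit depth RAW data.
original = bytes(range(256)) * 4096             # roughly 1 MB of sample data

compressed = zlib.compress(original, level=9)   # rewrite the data more compactly
restored = zlib.decompress(compressed)          # reverse the process exactly

assert restored == original                     # bit-for-bit identical: lossless
print(f"compression ratio: {len(original) / len(compressed):.1f}:1")
```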
Lossless Files Are Bigger
However, because no quality is lost, the resulting files are typically much larger than image and audio files compressed with lossy compression. For example, a file compressed using lossy compression may be one tenth the size of the original, while lossless compression is unlikely to produce a file smaller than half of the original size.
Of course, video data can be compressed in various ways. Guntermann & Drunck GmbH (G&D), a German company that specializes in KVM systems for broadcast, uses pixel-perfect video compression developed in-house. This not only ensures very high video quality but also offers other advantages. By compressing video signals, the systems require lower bandwidth and allow for more cost-effective components. Thanks to simpler cabling, they are often also more flexible in their application.
KVM Uses Different Approaches To Compression
According to Rolf Milde, Head of Integrated Hardware Systems at G&D, KVM systems do not deal with the actual broadcast content but with the operation of the computer infrastructure in use, which brings a few other aspects into play. KVM uses two different approaches to video compression: pixel-perfect lossless compression or, as an alternative, visually lossless compression. However, visually lossless compression does involve some loss of information during transmission. Here, each frame is compressed, and signals are transmitted frame by frame. G&D's video codec is said to achieve peak compression factors of 800:1, significantly higher than JPEG 2000.
“As KVM manufacturers with many years of experience, G&D take a different approach,” Milde said. “For many years we have been developing our own compression algorithms, which enable pixel-perfect and lossless video transmission. G&D’s ‘High Dynamic Image Processing’ uses multi-level compression logic while maintaining fully sharp images and colour depth. Our codec contains more compression levels than the algorithms of other manufacturers. One of the advantages is that our codec works across frames. We therefore are able to leave the box of a single frame and look at several images in comparison. Our method ensures that lower bandwidths of 1Gbit/s or 3Gbit/s are sufficient to ensure that video data reaches users pixel-perfect, latency-free and unerringly.”
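G&D has not published the internals of HDIP, but the benefit of working across frames is easy to illustrate. The sketch below (using numpy and zlib as assumed stand-ins, not G&D's actual codec) losslessly encodes each frame as a difference against the previous one, so regions that do not change between frames compress almost to nothing, which suits computer-screen content in particular.

```python
import zlib
import numpy as np

def encode(frames):
    """Losslessly encode a sequence of frames as deltas against the previous frame."""
    prev = np.zeros_like(frames[0])
    packets = []
    for frame in frames:
        delta = np.bitwise_xor(frame, prev)            # unchanged pixels become zero
        packets.append(zlib.compress(delta.tobytes()))
        prev = frame
    return packets

def decode(packets, shape, dtype):
    """Rebuild the original frames exactly from the delta packets."""
    prev = np.zeros(shape, dtype=dtype)
    frames = []
    for packet in packets:
        delta = np.frombuffer(zlib.decompress(packet), dtype=dtype).reshape(shape)
        prev = np.bitwise_xor(prev, delta)
        frames.append(prev)
    return frames

# Two nearly identical "desktop" frames: only a small region changes between them.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
f1 = f0.copy()
f1[100:110, 100:110] ^= 0xFF

packets = encode([f0, f1])
restored = decode(packets, f0.shape, f0.dtype)
assert all(np.array_equal(a, b) for a, b in zip([f0, f1], restored))
print(len(packets[0]), "vs", len(packets[1]), "bytes")  # the second packet is tiny
```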
JPEG 2K Is The Most Common
The engineering team at Telestream takes a slightly different approach.
“In our product line, I think we probably use half a dozen different codecs that I would consider to be lossless, if not mathematically lossless, perceptually lossless, and yes there's a difference,” said Shawn Carnahan, CTO, Telestream. “There are a variety of true lossless codecs. JPEG 2000 is probably the most notable and broadly used mathematically lossless codec in that you can perform a perfect reconstruction of the original image while still providing some amount of compression.”
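The distinction Carnahan draws can be checked directly. In the hedged sketch below, PNG stands in for a mathematically lossless codec and JPEG for a lossy, perceptually lossless one (Pillow and numpy are assumed dependencies; the formats he names require specialist encoders):

```python
import io
import numpy as np
from PIL import Image

# Synthetic test image standing in for a captured frame.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
img = Image.fromarray(frame)

def roundtrip(image, fmt, **opts):
    """Encode the image to the given format in memory, then decode it again."""
    buf = io.BytesIO()
    image.save(buf, format=fmt, **opts)
    return np.asarray(Image.open(io.BytesIO(buf.getvalue())))

png_out = roundtrip(img, "PNG")               # mathematically lossless
jpg_out = roundtrip(img, "JPEG", quality=95)  # lossy, often "perceptually lossless"

print("PNG max pixel error:", np.abs(png_out.astype(int) - frame).max())   # 0
print("JPEG max pixel error:", np.abs(jpg_out.astype(int) - frame).max())  # > 0
```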
He added that a variety of lossless RAW codecs tend to sit more on the camera side where, again, the further back you go in the production food chain, the more likely it is that you want to use a RAW or a lossless codec, because you are trying to preserve as much image integrity as possible early in the process.
These days, with camera RAW footage and HDR, it's all about precision and dynamic range, Carnahan said.
“Those tend to be proprietary, and they're developed by the camera manufacturer for that particular RAW format. As a vendor whose job it is to try to make all of this stuff work together, we have to implement a lot of these codecs as part of the workflow. It's really about preserving as much fidelity as possible and giving you as much flexibility in post to go and make changes to the content after capture. The first thing we need to do is decompress it and then turn it into something that can be used in the post process. We've got to make it easily editable, which is one reason you don't see a lot of J2K actually in post. For a number of reasons, but mostly because it’s still really computationally challenging for an editing system trying to work with multiple layers even though most systems can play it back.”
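Telestream's own products handle that conversion internally. As a generic illustration of the decode-to-intermediate step, the sketch below shells out to FFmpeg and its prores_ks encoder (an assumed external tool with hypothetical file names, not Telestream's pipeline) to turn a JPEG 2000 master into a ProRes file that edit systems handle comfortably:

```python
import subprocess

def to_prores(src: str, dst: str) -> None:
    """Decode a JPEG 2000 master and re-encode it as ProRes 422 HQ for editing.

    Assumes an FFmpeg build with JPEG 2000 decoding and the prores_ks encoder.
    """
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes 422 HQ
            "-pix_fmt", "yuv422p10le",                # keep 10-bit precision
            "-c:a", "copy",                           # leave the audio untouched
            dst,
        ],
        check=True,
    )

# Hypothetical file names, for illustration only.
to_prores("master_j2k.mxf", "edit_friendly.mov")
```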
Telestream’s Wirecast Pro includes streaming and recording features that provide a choice of codecs and compression algorithms.
High Dynamic Image Processing
Particularly in broadcast applications, where uncompromising image quality, pixel perfection and latency-free operation are essential, G&D’s High Dynamic Image Processing offers users significant advantages, they say, and a much better visual result. It is ideally suited for demanding applications in OB vans during live productions, in post-production, in studios and in all control room applications supporting the workflow of broadcasting centers.
“The advantage of having our own compression is obvious: as manufacturers, we do not need to rely on external development institutes and are therefore much quicker and more flexible when it comes to adjusting the codec,” said Milde. “This agility is not only reflected in our product portfolio, but also ensures consistent compatibility with older, current and new products, which in the end benefits our users.”
Avoiding Image Degradation
The burning question is: How many times can you edit visually lossless content before you begin to see artifacts?
“The number of times you can edit content before you see artifacts clearly depends on the type of content and the compression rate,” Milde said. “However, you have to keep in mind that there is an additional loss with each data generation.”
He said that most codec producers try to develop strategies to keep the impact of data loss similar, but the image becomes slightly blurrier. A particular challenge for the codec is when the content is actually edited and not just re-encoded, as this changes the starting point for the codec. You also have to consider cases in which codecs from different producers or even different standards are used successively.
“How well these strategies ultimately work depends on the objective and the content itself,” Milde said. “For example, the quality of the content could be sufficient for streaming but may be unacceptable for other broadcast objectives. Some codec producers are good at compressing AV content, a few are more focused on content for computer screens, and very few are good at both. Therefore, the motto always has to be: ‘Know your application and your objective.’”
Telestream’s Carnahan said the number of generations before noticing a degradation in image quality is “probably more times than you would ever actually do in actual post production. Certainly ten generations or more,” he said. “I believe that was the number ProRes used to claim and that was back in the 4:2:2 days. For all intents and purposes, generational loss is no longer a factor for most of these high rate, particularly 4:4:4 style codecs.”
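Generation loss is straightforward to simulate. The sketch below (Pillow and numpy assumed; JPEG at quality 90 standing in for a far gentler intermediate codec) re-encodes the same image ten times and prints the accumulated error at each generation:

```python
import io
import numpy as np
from PIL import Image

# Synthetic frame; real footage would show the same pattern, just more gently.
rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
img = Image.fromarray(original)

for generation in range(1, 11):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=90)        # one encode/decode cycle
    img = Image.open(io.BytesIO(buf.getvalue()))
    err = np.abs(np.asarray(img).astype(int) - original.astype(int)).mean()
    print(f"generation {generation}: mean pixel error {err:.2f}")
```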
Latency Is Also An Issue
With lossless compression, users should expect some frame delay, which adds latency during transmission.
“For all the intraframe coded codecs, your latencies are usually measured in a frame,” Carnahan said. “For things like JPEG2000, you have a frame delay because you've got to compress the frame, so you're working on that frame and it takes a frame time to do it. Then you work on the next frame. And because you don’t have to do any temporal operations in most of these codecs, you don't have to delay the stream while you're waiting for new frames to show up to analyze against or compare against previous frames.”
That one-frame delay is insignificant for common post-production applications, but if you were using something like JPEG 2000 as a compression format for live event production, a one-frame latency is significant compared with what you would expect from SDI.
“There are ways of doing frame-based compression and coding kind of at the tile or macro block level where you can get substantially less than a frame of latency,” he said. “But that would be more if you were using something like JPEG 2000 in a live environment. In a live, real-time environment where you're trying to use a lossless codec in say a ST-2110 context, that's where latencies, even measured in frames can be significant because you're trying to time up multiple, or switch between multiple signals. You've also got non-trivial delays relative to audio. In a live environment it's a different deal.”
“Basically, every compressing system includes a certain latency,” said G&D’s Milde. “The G&D codec, for example, typically has a base latency of a few lines of the image. However, converted into milliseconds, this is so fast that it is not perceptible to the human eye. For us, the convenient operation of our systems is particularly important, as is the perfect hand-eye coordination, which in itself requires the lowest possible latency.”
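Both vendors' latency figures are easy to put into milliseconds. A quick sketch, using illustrative frame rates and line counts rather than numbers from either company:

```python
def frame_latency_ms(fps: float) -> float:
    """Latency from buffering one whole frame before it can be sent."""
    return 1000.0 / fps

def line_latency_ms(lines_buffered: int, total_lines: int, fps: float) -> float:
    """Latency from buffering only a few lines of the image, as line-based codecs do."""
    return (lines_buffered / total_lines) * frame_latency_ms(fps)

print(f"one frame at 50p:     {frame_latency_ms(50):.1f} ms")            # 20.0 ms
print(f"one frame at 60p:     {frame_latency_ms(60):.1f} ms")            # 16.7 ms
print(f"8 lines of 1080p50:   {line_latency_ms(8, 1080, 50):.2f} ms")    # ~0.15 ms
```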
Lossless Manufacturer Support
For both companies mentioned here and many more, building lossless compression support into their products has become a key differentiator.
“If a format is in use out there, we have to support it,” Carnahan said. “We deal with lossless compression in all of our ingest workflows, ingest from camera to post, and then from post to contribution. Many of our customers use our Vantage Media Processing Platform to create media that can easily be edited. Many media organizations have a ‘house format’ that they have standardized on and everything needs to be in that format. Vantage workflows automatically transcode and deposit media where it needs to be.
“After a piece is edited, the ProRes or DNX media frequently gets converted to J2K as part of an IMF distribution package. And then subsequently, somebody could receive that J2K distribution package and either need to turn it into yet another distribution package or convert it into 'distributables.' In that sense, they might be taking a lossless J2K IMF package and creating ABR mezzanines from it. Our value add is not only to provide the transformations, but to automate the whole process as well.”
When discussing file versus live, most agree that J2K in a file context is the de facto standard for file-based contribution. It’s part of the DCP standard for cinema and the IMF standard for television delivery. Next to that, in terms of visually lossless, ProRes would probably be the other common one that you see for content contribution, particularly with iTunes/Apple TV. Many professionals will commonly take J2K masters and convert them to ProRes XQ for delivery to certain vendors, most notably Apple.
G&D remains committed to its home-brewed codec because it looks at content across frames in order to avoid image loss.
“All compressing G&D KVM systems, regardless of whether you use CAT or fibre optic cabling or even IP networks, transmit data in pixel perfect quality and without loss,” Milde said. “Since the beginning, G&D has developed its HDIP multi-level video compression technology in-house.”
Type Of Lossless Compression Employed Is Up To The User
Ultimately, the type of lossless compression employed is up to the user and what they are trying to accomplish. The business model also matters: do you want better quality or more channels?
“It’s important to realize that you sometimes want an amount of compression or loss in there just to have more predictable data rates,” Telestream’s Carnahan said. “The problem with a truly lossless codec is, at that point, you have no rate control. If it's complex imagery, it's just going to take more bits, period. In most real-time systems, like an edit system, you do need to have some predictability in the bit rate. With the higher bit rate intermediate codecs, you're in the neighborhood of a five-to-one compression ratio, so you can pretty much do 5:1 for most footage without any actual loss.”
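Carnahan's 5:1 figure can be sanity-checked against an uncompressed signal. A quick sketch, using illustrative 1080p50 10-bit 4:2:2 parameters rather than figures from the article:

```python
def uncompressed_mbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Raw video data rate in megabits per second."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e6

# 1080p50, 10-bit 4:2:2: two samples per pixel on average (Y plus half-rate Cb/Cr).
raw = uncompressed_mbps(1920, 1080, 50, 10, 2)
print(f"uncompressed: {raw:.0f} Mbit/s")     # ~2074 Mbit/s
print(f"at 5:1:       {raw / 5:.0f} Mbit/s") # ~415 Mbit/s
```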