With WebRTC, we focus on lossy media compression codecs. These don’t preserve all of the data they compress, simply because we wouldn’t notice that data anyway.
[In this series of short articles, I’ll be going over some WebRTC-related quotes and trying to explain them]
The purpose of codecs – voice and video – is to compress and decompress the media that needs to be sent over the network. This was true before WebRTC and will stay true after WebRTC.
Generally speaking, there are two types of compression:
- Lossless compression – codecs where whatever goes into the encoder comes out of the decoder exactly the same. Nothing gets lost along the way. Think of it as a .zip file – it stores files and reproduces them bit-for-bit on the other end
- Lossy compression – codecs that don’t guarantee an exact match between what goes into the encoder and what comes out of the decoder. These types of codecs are quite common in audio and video processing
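The difference between the two can be shown in a few lines of Python. This is a toy sketch, not any real media codec: the "lossy encoder" here is plain quantization with a made-up step size, chosen only to illustrate that discarded detail can’t be recovered.

```python
import zlib

# Lossless: zlib round-trips the input exactly, bit for bit.
data = bytes(range(256)) * 4
assert zlib.decompress(zlib.compress(data)) == data

# Lossy (toy example): quantize audio-like samples to coarser steps.
# Small details are thrown away and cannot be recovered on decode.
samples = [103, 1047, -2051, 4096, 7]
step = 64  # quantization step: the detail we choose to discard

encoded = [round(s / step) for s in samples]  # "encoder"
decoded = [q * step for q in encoded]         # "decoder"

print(decoded)                 # close to the input, but not identical
assert decoded != samples      # some data was lost...
assert all(abs(a - b) <= step // 2
           for a, b in zip(samples, decoded))  # ...but the error is bounded
```

Real audio and video codecs are far more sophisticated, but the core trade is the same: accept a bounded error in exchange for far fewer bits.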
Audio and video tend to hold a lot of data. And since we want to send it over the network, we’d rather not waste network resources. So what do these codecs do? They try to remove anything and everything they can that our eyes and ears won’t notice much.
On a conceptual level, lossy compression has a virtual dial. You move the dial to decide how much of the data you are willing to lose. The encoder will do its best to lose things you wouldn’t notice, but at some point, you will.
This flexibility in setting the compression level is also used to manage the bitrate. By estimating the available bandwidth, we can instruct the encoder to turn the dial up or down – generating higher or lower compression so that the output fits the estimated available bandwidth.
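The dial-to-bitrate mapping can be sketched as a tiny rate-control function. Everything here is hypothetical – the function name, the reference bitrate, and the step bounds are invented for illustration and don’t correspond to any real encoder’s rate control.

```python
def pick_quantization_step(estimated_kbps: float,
                           reference_kbps: float = 2000.0,
                           min_step: int = 4,
                           max_step: int = 256) -> int:
    """Map a bandwidth estimate to a quantization step (the "dial").

    More available bandwidth -> finer step (less loss);
    less available bandwidth -> coarser step (more loss).
    All constants are illustrative assumptions, not real codec values.
    """
    if estimated_kbps <= 0:
        return max_step  # no budget: dial compression all the way up
    step = round(min_step * reference_kbps / estimated_kbps)
    return max(min_step, min(step, max_step))

# As the bandwidth estimate drops, the encoder dials compression up:
for kbps in (2000, 1000, 250, 50):
    print(kbps, "kbps -> step", pick_quantization_step(kbps))
```

In a real WebRTC stack this loop is driven by bandwidth estimation feedback from the network, and the encoder adjusts its quantization parameters continuously rather than through a single function call.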
👉 Compression is lossy in WebRTC, and the network can lose even more packets. Here’s an acronym soup explanation on WebRTC media resilience
👉 Looking to learn more about video codecs? Go ahead and read my WebRTC video basics article