DRED stands for Deep REDundancy. DRED is an Opus extension that lets a single packet carry up to a full second of past audio, so a long burst of packet loss can be reconstructed from whatever arrives next. It uses a neural network to compress that redundancy down to a level the network can actually afford.
Why it matters
Packet loss in real-world WebRTC calls is rarely uniform. It comes in bursts. Some examples:
- WiFi flakes for half a second
- A cellular handover loses three packets in a row
- Sudden congestion along the media path that gets handled by reducing bitrates
In such cases, the audio path drops part of a word or whole words, and packet loss concealment (PLC) is left guessing.
Opus has had a redundancy story for years: LBRR, the in-band FEC enabled with useinbandfec=1. LBRR is limited. It carries one extra frame per packet, encoded at a lower bitrate, so a single isolated loss becomes invisible, but it only covers 20 milliseconds of loss. A burst of three or four lost packets is a problem LBRR cannot solve.
DRED is the deep redundancy answer to that gap. Instead of one extra frame, DRED packs roughly one second of past audio into every packet, by compressing the acoustic features through a rate-distortion-optimized variational autoencoder (a mouthful). When a burst hits, the first packet that arrives carries enough information for the decoder to synthesize an approximation of the lost speech rather than mask it.
How it works
The encoder ships acoustic features alongside the regular Opus bitstream. These are not raw audio. They are a compact neural representation of what the speech looked like, frame by frame, going back about a second. On the decoder side, when packets are lost, the synthesizer uses those features to reconstruct what the listener should have heard.
The compression ratio is the headline. DRED transmits about 50x redundancy – every 20 ms frame is effectively sent 50 times – using only about 1/50 of the regular Opus bitrate. To be clear: the 50x is 50 redundant past frames carried at a small fraction of their original bitrate, not a 50x bandwidth multiplier. The actual cost is the kbps numbers below. In practical numbers, the Opus team’s measurements show a one-second redundancy window costing roughly 12 to 32 kbps of overhead, depending on quality target. The reference test points used 24 kbps for the base Opus layer, 16 kbps for LBRR, and 32 kbps for DRED on top.
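To make the budget concrete, here is a back-of-the-envelope check of those reference numbers. The constants are the figures quoted above; the arithmetic itself is purely illustrative:

```typescript
// Cost of DRED's redundancy window, using the reference figures from
// the text above. Constants are the article's numbers, not an API.
const FRAME_MS = 20;      // Opus frame duration
const WINDOW_MS = 1000;   // DRED redundancy window (~1 second)
const BASE_KBPS = 24;     // base Opus layer in the reference test
const DRED_KBPS = 32;     // DRED overhead at the high-quality point

// How many past frames fit in the redundancy window.
const redundantFrames = WINDOW_MS / FRAME_MS;               // 50

// Average bitrate spent per redundant frame copy.
const kbpsPerRedundantFrame = DRED_KBPS / redundantFrames;  // 0.64 kbps

// Compression relative to resending each frame at the base bitrate.
const compressionVsBase = BASE_KBPS / kbpsPerRedundantFrame; // ~37.5x

console.log({ redundantFrames, kbpsPerRedundantFrame, compressionVsBase });
```

In other words, each of the 50 redundant copies travels at well under 1 kbps, which is why the "50x redundancy" headline coexists with a total overhead in the tens of kbps rather than a 50x bandwidth multiplier.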
That cost is not free. On a tight mobile uplink, adding 32 kbps for redundancy is a real budget item. The tradeoff is against losing entire syllables to bursty loss, which is the situation where DRED actually earns its keep.
DRED also pairs naturally with Deep PLC, the neural packet loss concealment introduced in the same Opus 1.5 cycle. DRED gives the decoder real information to work with. Deep PLC handles whatever still has to be hallucinated.
Where it stands today (May 2026)
The standardization track is draft-ietf-mlcodec-opus-dred, currently at revision 05 (last updated January 19, 2026), inside the IETF mlcodec working group. The working group itself was created specifically to standardize the ML-based extensions to Opus: a generic extension mechanism, deep redundancy, and speech enhancement.
Opus 1.5 shipped DRED in March 2024. The reference implementation is in libopus and is gated behind the --enable-dred configure option, which also pulls in --enable-deep-plc. Build cost is about 2 MB of binary size. Runtime cost is around 1% CPU.
The increase in binary size, along with DRED's newness, is likely to keep it from being integrated officially into libwebrtc and the Chrome browser for the time being.
DRED vs LBRR vs RED
Three redundancy mechanisms, three different jobs.
- LBRR (Opus in-band FEC, enabled with useinbandfec=1). One previous frame, encoded at lower bitrate, in every packet. Cheap, always-on if you ask for it. Good for isolated single-packet loss. Runs out at burst losses
- RED (RFC 2198). The RTP-level redundancy mechanism. Each RTP packet carries one or more older payloads at full quality. Good for short bursts when bandwidth is not the constraint. Costs roughly 100% extra per redundant copy
- DRED. Up to a full second of past audio per packet, neurally compressed. Designed for long bursts (200 ms and up) where LBRR and RED both run out of headroom. Costs 12 to 32 kbps over the base Opus stream
If a deployment already runs Opus with useinbandfec=1, that is LBRR, and it is doing its job for the easy cases. RED is layered on top when single-frame redundancy is not enough. DRED enters the picture when the loss patterns include bursts that are too long for LBRR and the bitrate budget cannot afford a doubled stream from RED.
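That layering can be sketched as a small decision helper. The thresholds (20 ms frames, 200 ms bursts, the 12 to 32 kbps DRED overhead) come from the comparison above, but `pickRedundancy` and its inputs are invented for illustration, not a real API:

```typescript
// Illustrative sketch: choose a redundancy mechanism from observed
// loss characteristics and spare uplink bandwidth. Thresholds mirror
// the comparison in the text; this is not a real library function.
type Mechanism = "LBRR" | "RED" | "DRED" | "none";

function pickRedundancy(opts: {
  maxBurstMs: number;   // longest loss burst observed on this link
  headroomKbps: number; // spare uplink bandwidth
  baseKbps: number;     // base Opus bitrate
}): Mechanism {
  const { maxBurstMs, headroomKbps, baseKbps } = opts;
  // Isolated single-packet loss: LBRR's one extra frame covers it.
  if (maxBurstMs <= 20) return "LBRR";
  // Short bursts, bandwidth not a constraint: RED copies cost ~100%.
  if (maxBurstMs < 200 && headroomKbps >= baseKbps) return "RED";
  // Long bursts: DRED's 12-32 kbps overhead buys ~1 s of history.
  if (maxBurstMs >= 200 && headroomKbps >= 12) return "DRED";
  return "none";
}
```

For example, a link with 300 ms bursts and 32 kbps of headroom over a 24 kbps base stream would land on DRED, while the same headroom with 20 ms losses stays on LBRR.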
Should you care?
For most WebRTC applications today, the practitioner answer is: watch, do not deploy.
The right move for a WebRTC team in 2026 is to:
- Use useinbandfec=1 today. Free win, available in every browser. Check whether you see any noticeable improvement
- Decide whether RED is worth the bandwidth tax for the network conditions actually being seen
- Track DRED standardization. When it lands in libwebrtc and the spec freezes, revisit. That is when DRED becomes a real WebRTC option, not before
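For the useinbandfec=1 step, the standard browser-side approach is to set the fmtp parameter in the SDP before calling setLocalDescription. A minimal munging sketch, with the Opus payload type discovered from the rtpmap line rather than hard-coded:

```typescript
// Sketch: enable Opus in-band FEC (LBRR) by munging an SDP string.
// The fmtp parameter name `useinbandfec` is standard (RFC 7587);
// the helper itself is illustrative, with error handling omitted.
function enableOpusFec(sdp: string): string {
  // Find the Opus payload type from its rtpmap line.
  const rtpmap = sdp.match(/a=rtpmap:(\d+) opus\/48000\/2/);
  if (!rtpmap) return sdp; // no Opus in this SDP
  const pt = rtpmap[1];

  const fmtpRe = new RegExp(`a=fmtp:${pt} (.*)`);
  if (fmtpRe.test(sdp)) {
    // Append to the existing fmtp parameter list if not already set.
    return sdp.replace(fmtpRe, (line, params) =>
      params.includes("useinbandfec")
        ? line
        : `a=fmtp:${pt} ${params};useinbandfec=1`
    );
  }
  // No fmtp line yet: add one right after the rtpmap line.
  return sdp.replace(rtpmap[0], `${rtpmap[0]}\r\na=fmtp:${pt} useinbandfec=1`);
}
```

The function is idempotent, so it is safe to run on every renegotiation.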
The arrival of DRED in WebRTC will be a meaningful shift for voice quality on bad networks. It may well happen in 2026.


