Most teams monitor their WebRTC infrastructure. Few monitor the actual user experience. Here is why that distinction matters – and how to fix it with client-side observability.

From chatting with companies over the past few months while showcasing our rtcstats troubleshooting solution, something became quite apparent: their WebRTC monitoring focuses almost entirely on server infrastructure. Few of them collect client-side metrics such as packet loss and round-trip time.
The result is a monitoring apparatus that watches the servers but not the service. Let’s figure out what that means exactly and how we can remedy it.
Key Takeaways: TL;DR
- Monitoring WebRTC infrastructure focuses on servers and often ignores the actual user experience, leaving real quality problems invisible
- Client-side observability helps address this gap by collecting real user metrics like packet loss and jitter directly from users’ devices
- Implementing WebRTC client-side monitoring involves integrating libraries for data collection, streaming metrics to a backend, and setting up dashboards for user experience metrics
- Using tools like rtcStats enables comprehensive observation from collection to analysis, improving troubleshooting and user satisfaction
- Start monitoring client-side metrics from day one to catch issues early and maintain better service quality
Table of contents
- The WebRTC monitoring gap most teams ignore
- What server monitoring actually tells you
- What service monitoring in WebRTC gives you instead
- Why RTCP reports from the server are not enough
- The real-world cost of monitoring the wrong thing
- How to implement WebRTC client-side monitoring
- Where rtcStats fits in
- Three new areas worth monitoring
- When to start monitoring
- FAQ
- Need help?
The WebRTC monitoring gap most teams ignore

You have your dashboards set up. CPU looks fine. Memory is stable. Your media server is humming along. Everything looks green.
And then a customer calls: “we can’t hear each other” or “the video keeps freezing”.
How is that possible when all your metrics say everything is fine?
Because you are monitoring your servers. Not your service.
This distinction has been around since the early days of WebRTC – it isn’t the first time I am yapping about it. Here’s the thing: as WebRTC deployments scale – powering everything from telehealth to AI voice agents to live streaming – the gap between “servers are healthy” and “users are happy” keeps growing.
What server monitoring actually tells you

Server monitoring is what most teams start with. It is familiar, it is well-tooled, and it ticks the obvious boxes:
- Is the server alive? (Pingdom, uptime checks)
- Are CPU and memory within normal ranges? (Nagios, Datadog, New Relic)
- Are processes running? (Application monitoring)
- Are there network issues at the infrastructure level? (SNMP, traffic monitoring)
This is table stakes.
You absolutely need it.
But here is the problem – all of these can be perfectly green while your users are having a terrible experience.
A media server can report a healthy CPU while picking and sending the wrong layer of a simulcast stream, degrading video quality or worse – overloading client devices. Memory can look fine while jitter buffers overflow on the client side. Your infrastructure dashboard can be a wall of green while 15% of your users experience calls that sound like they are underwater.
👉 Server monitoring answers: “Is the infrastructure alive?”
💡 It does not answer: “Are users having good calls?”
How would you even know whether the experience is good for your users?
What service monitoring in WebRTC gives you instead

Service monitoring – or more precisely, client-side WebRTC monitoring – flips the question: Instead of asking “are the machines working?” you ask “is the user experience acceptable?”
This means collecting data from where the experience actually happens: the client.
In WebRTC, that means:
- getStats() metrics from the browser or native app – bitrate, packet loss, jitter, round trip time, resolution, frame rate, and dozens more
- WebRTC API events – every call to createOffer, setRemoteDescription, addTrack and their outcomes, …
- Machine context – CPU, memory, browser version, network type, codec negotiation results – whatever you can lay your hands on
The key insight is that WebRTC quality problems are almost always visible in client-side metrics before they show up anywhere else. A resolution drop, increasing jitter, or a codec fallback tells you something is wrong from the user perspective – even when the server thinks everything is fine.
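As a rough sketch of what working with these metrics looks like, the snippet below derives user-facing quality numbers from two consecutive getStats() snapshots of an inbound-rtp entry. The field names follow the standard WebRTC stats identifiers; the helper name and the mocked snapshots are ours, not part of any library:

```javascript
// Derive per-interval quality metrics from two consecutive getStats()
// snapshots of an inbound-rtp stats entry. Field names follow the WebRTC
// stats spec (packetsReceived, packetsLost, bytesReceived, jitter in
// seconds, timestamp in milliseconds).
function computeIntervalMetrics(prev, curr) {
  const seconds = (curr.timestamp - prev.timestamp) / 1000;
  const received = curr.packetsReceived - prev.packetsReceived;
  const lost = curr.packetsLost - prev.packetsLost;
  const bytes = curr.bytesReceived - prev.bytesReceived;
  return {
    bitrateKbps: seconds > 0 ? (bytes * 8) / 1000 / seconds : 0,
    // Fraction of packets lost during this interval, 0..1
    packetLossRatio: received + lost > 0 ? lost / (received + lost) : 0,
    jitterMs: (curr.jitter ?? 0) * 1000, // the spec reports jitter in seconds
  };
}

// Example with two mocked snapshots taken 2 seconds apart
const prev = { timestamp: 0, packetsReceived: 0, packetsLost: 0, bytesReceived: 0, jitter: 0.01 };
const curr = { timestamp: 2000, packetsReceived: 98, packetsLost: 2, bytesReceived: 50000, jitter: 0.03 };
console.log(computeIntervalMetrics(prev, curr));
// → { bitrateKbps: 200, packetLossRatio: 0.02, jitterMs: 30 }
```

In a real client you would feed this from a once-per-second `pc.getStats()` poll; computing deltas between snapshots is what turns cumulative counters into the per-interval numbers users actually feel.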
Why RTCP reports from the server are not enough

Some teams try to overcome this by using RTCP reports from the media server. After all, RTCP carries some quality metrics and it is already on the server side – easy to collect.
The problem? RTCP gives you a partial, delayed, network-only view:
- No codec or decoding information – you will not see CPU-related quality drops
- No client-side rendering data – a frame can arrive fine at the network level but stutter or drop on playback
- No information about non-network metrics that are important such as CPU and memory pressure
- Limited to what the server sees – which is not what the user sees, especially when TURN relays or packet loss recovery are in the mix
- No API-level context – you will not know if the application logic is causing issues (wrong constraints, bad ORTC handling, missing renegotiation)
And did we mention that you might want some of the calls (or all of them?) to run P2P? In such cases there are no server-side media metrics at all.
👉 Server RTCP reports are a useful signal. They are not a monitoring strategy.
The real-world cost of monitoring the wrong thing

Here is what happens when your WebRTC monitoring only covers servers: 🏓
A user reports “bad quality.” Your support team checks the server dashboard. Everything looks normal. Or even… there’s some packet loss. Great – but what do you actually do now, knowing there’s packet loss?
⬇️
So the support team asks the user for more details. The user says “it was laggy and the audio kept cutting out”.
Great… one of those… Support escalates to engineering.
⬇️
Engineering cannot reproduce it. Days pass. The user churns.
Sound familiar?
With WebRTC client-side observability, that same scenario plays out differently. The support ticket comes in, you look up the session, and you see high jitter starting at minute 3, correlating with a CPU spike from that moment on. Now there’s something concrete to check with the user. You’re closer to a root cause and a resolution in under 5 minutes 🤔
The difference is not just about speed. It is about being proactive instead of reactive. With proper WebRTC client-side monitoring, you can catch quality degradation patterns before users complain. You can set alerts on the metrics that actually matter to user experience, not just infrastructure health.
The end result is a better experience, happier users and lower churn.
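Alerting on experience metrics can start very small. The sketch below checks a metrics sample against quality thresholds – the threshold values and function names here are illustrative, not recommendations from any standard:

```javascript
// Illustrative quality thresholds – tune these for your own service and users.
const THRESHOLDS = {
  packetLossRatio: 0.05,  // sustained loss above ~5% is usually audible
  jitterMs: 50,
  roundTripTimeMs: 400,
};

// Return the metrics that crossed their threshold for a given sample, so an
// alerting pipeline can page on user experience rather than on CPU graphs.
function qualityViolations(sample) {
  return Object.keys(THRESHOLDS).filter(
    (metric) => sample[metric] !== undefined && sample[metric] > THRESHOLDS[metric]
  );
}

console.log(qualityViolations({ packetLossRatio: 0.08, jitterMs: 20, roundTripTimeMs: 120 }));
// → [ 'packetLossRatio' ]
```

Hook something like this up to the per-interval client metrics and you have a first proactive signal – degrading sessions surface before the support ticket does.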
How to implement WebRTC client-side monitoring

Getting started with WebRTC client-side monitoring does not need to be complex. The core steps are:
- Instrument your client – Integrate a library that collects getStats() data and WebRTC events from your application (or write one yourself). The rtcstats-js open source library does this with minimal integration effort
- Stream the data – Send the collected metrics to a backend that can store and process them. You can run your own rtcstats-server (part of the same open source project) or use a managed service
- Visualize and alert – Set up dashboards that show user-experience metrics (not just server metrics) and configure alerts based on quality thresholds that matter
The critical decision in any WebRTC monitoring setup is what to collect. Some services collect only high-level metrics – bitrate, packet loss, RTT. These tell you there is a problem but rarely point to the root cause.
My recommendation: collect everything meaningful. All getStats() metrics that change over the session lifetime, all WebRTC API calls and callbacks, and relevant machine data. Missing the data that would have identified a root cause is expensive.
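To make the “instrument and stream” steps concrete, here is a minimal collector sketch: buffer samples on the client and flush them to a backend in batches. The class and method names are ours – rtcstats-js has its own API – and the transport is injected so the sketch stays self-contained:

```javascript
// Minimal client-side collector sketch (names are illustrative, not the
// rtcstats-js API): buffer metric samples and flush them in batches.
class StatsCollector {
  constructor(send, batchSize = 10) {
    // send: e.g. (batch) => fetch('/rtcstats', { method: 'POST', body: JSON.stringify(batch) })
    this.send = send;
    this.batchSize = batchSize;
    this.buffer = [];
  }
  // In a real app this is fed from pc.getStats() roughly once per second.
  record(sample) {
    this.buffer.push({ ...sample, recordedAt: Date.now() });
    if (this.buffer.length >= this.batchSize) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    this.send(this.buffer.splice(0)); // hand off and empty the buffer
  }
}

// Usage with a stubbed transport:
const sent = [];
const collector = new StatsCollector((batch) => sent.push(batch), 2);
collector.record({ bitrateKbps: 180 });
collector.record({ bitrateKbps: 190 }); // second sample triggers a flush
console.log(sent.length); // → 1
```

Batching matters in practice: per-second samples from thousands of concurrent sessions add up, and you want the collection path to cost your users almost nothing.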
Where rtcStats fits in

rtcStats was built specifically for this problem. It is client-side WebRTC observability, from collection through to root cause analysis:
- Open source client and server for data collection – you own your data
- Visualization purpose-built for WebRTC debugging and troubleshooting – not generic Grafana dashboards adapted for WebRTC, but charts and views designed by WebRTC engineers for WebRTC engineers
- Observations that surface problems automatically – not just charts, but AI-powered summaries that tell you what went wrong and why
- An Experience Score that gives you a single number per session – something easy to look at and use to quantify quality, especially at scale
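To show why a single session score is useful, here is a toy scoring function. To be clear: this is NOT rtcStats’ actual model – the weights, caps, and metric choices below are invented purely to illustrate the idea of collapsing several metrics into one comparable number:

```javascript
// Toy "experience score": collapse several user-facing metrics into a single
// 0..100 number. Weights and caps are made up for illustration only.
function experienceScore({ packetLossRatio, jitterMs, roundTripTimeMs }) {
  let score = 100;
  score -= Math.min(40, packetLossRatio * 400); // 10% loss costs the full 40 points
  score -= Math.min(30, jitterMs / 5);          // 150ms jitter costs the full 30 points
  score -= Math.min(30, roundTripTimeMs / 20);  // 600ms RTT costs the full 30 points
  return Math.max(0, Math.round(score));
}

console.log(experienceScore({ packetLossRatio: 0.02, jitterMs: 25, roundTripTimeMs: 200 }));
// → 77
```

The value of any such score is less in its absolute number and more in comparability: you can sort a million sessions by it, alert on its percentiles, and spot regressions across releases.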
The free tier at rtcstats.com gives you access to the visualization layer. If you want the full power of analysis, there is the commercial option.
For alerting and aggregate analysis of your whole deployment, you can simply query the database populated by rtcstats-server as you see fit. The best thing about it? You own your data instead of handing it to a third party – which makes it private, secure, and far more flexible.
Three new areas worth monitoring

When it comes to WebRTC monitoring, there are 3 specific areas that deserve a bit more attention beyond the basics in 2026:
AI voice agent quality

With the explosion of Voice AI applications built on WebRTC, monitoring takes on new dimensions.
AI voice agents have unique quality requirements. Latency tolerance is tighter (users expect near-instant responses), silence detection matters more, and the interaction between speech-to-text, LLM processing, and text-to-speech creates entirely new failure modes.
What we’ve seen, though, is that many current Voice AI implementations can be improved, especially when examined closely under the hood.
Client-side monitoring helps you distinguish between WebRTC transport issues and AI pipeline latency. It also helps you spot hidden architectural weaknesses early on.
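One way to make that distinction concrete: with client-side timestamps plus the measured round-trip time, you can roughly split an agent’s response delay into transport versus AI pipeline time. Everything here – names, the one-RTT transport approximation – is an illustrative assumption, not a measured model:

```javascript
// Sketch: split an AI voice agent's response delay into network transport
// vs. AI pipeline (STT + LLM + TTS) time, given client-side timestamps and
// the measured RTT. All names and the one-RTT approximation are illustrative.
function decomposeAgentLatency({ userSpeechEndMs, agentAudioStartMs, roundTripTimeMs }) {
  const totalMs = agentAudioStartMs - userSpeechEndMs;
  // Transport is roughly one RTT: the user's audio going up plus the reply coming down.
  const transportMs = roundTripTimeMs;
  return { totalMs, transportMs, pipelineMs: Math.max(0, totalMs - transportMs) };
}

console.log(decomposeAgentLatency({ userSpeechEndMs: 1000, agentAudioStartMs: 2200, roundTripTimeMs: 200 }));
// → { totalMs: 1200, transportMs: 200, pipelineMs: 1000 }
```

Even a crude split like this tells you where to look: if `pipelineMs` dominates, no amount of WebRTC tuning will make the agent feel responsive.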
Multi-party call analytics

Group calls compound every quality issue. One participant on a bad network can degrade the experience for everyone else, especially in SFU topologies. Monitoring at the individual participant level – not just the room level – lets you pinpoint which participant is causing quality issues. Is it their device, their network, or your infrastructure?
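A simple way to operationalize per-participant monitoring is to rank everyone in the room by a badness measure derived from their client-side metrics. The function name, the weights, and the sample data below are all illustrative:

```javascript
// Sketch: rank group-call participants by a simple badness measure so support
// can see at a glance who is dragging the room down. Weights are illustrative.
function worstParticipants(participants) {
  return [...participants]
    .map((p) => ({ ...p, badness: p.packetLossRatio * 100 + p.jitterMs / 10 }))
    .sort((a, b) => b.badness - a.badness); // worst first
}

const room = [
  { id: 'alice', packetLossRatio: 0.01, jitterMs: 10 },
  { id: 'bob',   packetLossRatio: 0.12, jitterMs: 80 },
  { id: 'carol', packetLossRatio: 0.0,  jitterMs: 5 },
];
console.log(worstParticipants(room)[0].id); // → 'bob'
```

Once you know it is bob, the follow-up questions from the text – device, network, or infrastructure – become answerable by drilling into that one participant’s session data.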
Call center agents

With contact centers moving to the cloud, many cloud call center providers end up relying heavily on WebRTC for the client-side agent softphone. The reasons are compelling – no additional installation, fewer moving parts, and it runs right in the browser alongside the CRM.
The challenge, though: agents work from anywhere and need a setup that holds up for hours at a time. Being able to quickly assess, troubleshoot, and solve issues for agents is imperative. WebRTC client-side monitoring empowers you as a provider to do just that.
When to start monitoring

Do not wait until you have a quality crisis. Here is my take:
- Day one: Add the rtcstats client to your application. It collects passively and costs nothing
- Your first paying customer: Start looking at the data systematically. Set up basic quality alerts
- At scale: Invest in aggregate analytics, trend detection, and proactive monitoring
If you already have paying customers and you are not doing client-side WebRTC monitoring, you are running blind. When issues happen – and they will – you want the data already there, not scrambling to reproduce a problem that happened yesterday.
FAQ
What is the difference between server monitoring and WebRTC monitoring?
Server monitoring tracks infrastructure health – CPU, memory, uptime. WebRTC monitoring focuses on the actual user experience by collecting client-side metrics like bitrate, jitter, packet loss, and resolution from the browser or app. Your servers can be perfectly healthy while users have terrible calls.
How do you monitor WebRTC call quality?
Use getStats() from the browser or native app to collect real-time quality metrics. Integrate a client-side library (like rtcstats-js) to stream this data to a backend for storage and analysis. Focus on metrics that reflect user experience, not just infrastructure status.
Which WebRTC metrics matter most?
The most impactful metrics are bitrate (audio and video), packet loss, jitter, round trip time, resolution, and frame rate. But do not stop there – codec negotiation results, CPU usage, and WebRTC API events all help pinpoint root causes when quality drops.
When should you start monitoring?
Day one. Add client-side collection as early as possible. The data costs nothing to collect passively, and when your first quality incident happens, you will want historical data already in place rather than scrambling to instrument after the fact.
Need help?

If you are looking into WebRTC monitoring or optimizing your application and do not know where to start – reach out. Happy to point you in the right direction based on your specific setup.
