getDisplayMedia (in full, navigator.mediaDevices.getDisplayMedia()) is the WebRTC-adjacent API, defined by the W3C Screen Capture specification, for capturing the contents of a user’s screen, application window, or browser tab. It is the companion to getUserMedia, which captures camera and microphone input.
How getDisplayMedia works
The API prompts the user to select what to share:
```javascript
const stream = await navigator.mediaDevices.getDisplayMedia({
  video: true,
  audio: true // tab audio capture; Chrome/Edge only
});
```
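In practice the call belongs inside error handling: the user can dismiss or deny the picker (which rejects with a NotAllowedError), and can stop sharing at any time via the browser’s own UI, which fires an `ended` event on the video track. A minimal sketch; the helper name `startScreenShare` is illustrative, not part of the API:

```javascript
// Constraints passed to the capture prompt. Audio is best-effort:
// only some browsers honor it, as noted above.
const DISPLAY_CONSTRAINTS = { video: true, audio: true };

async function startScreenShare() {
  let stream;
  try {
    stream = await navigator.mediaDevices.getDisplayMedia(DISPLAY_CONSTRAINTS);
  } catch (err) {
    // NotAllowedError: the user dismissed or denied the picker.
    console.warn('Screen capture was not started:', err.name);
    return null;
  }
  const [videoTrack] = stream.getVideoTracks();
  // Fires when the user clicks the browser's "Stop sharing" button.
  videoTrack.addEventListener('ended', () => {
    console.log('User stopped screen sharing');
  });
  return stream;
}
```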
The user can choose between:
- Entire screen – captures everything on a display
- Application window – captures a specific application
- Browser tab – captures a single tab (with optional audio)
The returned MediaStream can be sent over an RTCPeerConnection alongside, or instead of, a camera track.
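A common pattern for the "instead of" case is to swap the screen track in for the camera track on an existing connection via RTCRtpSender.replaceTrack, which avoids SDP renegotiation. A sketch under the assumption that `pc` is an already-established RTCPeerConnection; the helper name is illustrative:

```javascript
// Replace the outgoing camera track with a screen-capture track.
// Assumes `pc` is an established RTCPeerConnection.
async function shareScreenOver(pc) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const [screenTrack] = stream.getVideoTracks();

  // Find the sender currently transmitting video, if any.
  const sender = pc.getSenders().find(
    (s) => s.track && s.track.kind === 'video'
  );
  if (sender) {
    // replaceTrack swaps the media without SDP renegotiation.
    await sender.replaceTrack(screenTrack);
  } else {
    // No video sender yet: add the track (this triggers renegotiation).
    pc.addTrack(screenTrack, stream);
  }
  return screenTrack;
}
```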
Screen sharing characteristics
Screen content has different properties than camera video:
- High resolution – typically 1080p or higher to keep text readable
- Low frame rate – typically 1-15 fps, since screen content changes infrequently and the high resolution would otherwise consume excessive CPU and network resources
- Text-heavy content – codecs like AV1 have screen content coding tools that handle sharp edges and text well
- Variable bitrate – can spike during animations or video playback on screen
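The properties above can be nudged from application code: setting a track’s contentHint to 'detail' or 'text' tells the browser to favor spatial quality over frame rate, and applyConstraints can cap the frame rate. A sketch with illustrative tuning values (5 fps, 1080p), not recommendations:

```javascript
// Illustrative tuning values for a screen-capture track.
const SCREEN_TRACK_SETTINGS = {
  contentHint: 'detail', // favor sharp text over motion smoothness
  constraints: { frameRate: 5, width: 1920, height: 1080 },
};

async function tuneScreenTrack(track) {
  // Hint the encoder that this is detailed, text-heavy content.
  track.contentHint = SCREEN_TRACK_SETTINGS.contentHint;
  // Cap the frame rate to save CPU and bandwidth.
  await track.applyConstraints(SCREEN_TRACK_SETTINGS.constraints);
  return track.getSettings();
}
```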
Related APIs
- getUserMedia – captures camera and microphone
- enumerateDevices – lists available media devices
- Screencasting – the broader concept of screen sharing in WebRTC