VoIP Packet Loss: Fix Choppy, Robotic, and Dropping Calls on Teams, Zoom, and Webex
Your internet speed test shows 200 Mbps. Your ping looks fine. But every call sounds like the other person is underwater, their voice cuts out mid-sentence, or you get that eerie robotic distortion that makes a normal conversation impossible.
The culprit is almost always packet loss, and speed has almost nothing to do with it.
VoIP and video conferencing use UDP, which means dropped packets are gone permanently. Unlike a file download, which TCP quietly retransmits, a voice call cannot go back and ask for missing audio. The codec fills the gap as best it can -- at low loss rates this sounds like brief silence, and at higher rates it produces the choppy or robotic distortion that makes calls unbearable.
This guide is for the people responsible for fixing it: IT admins managing office networks, remote workers troubleshooting their own connection, and business owners trying to understand why their calls sound worse than a 1990s mobile phone.
How Much Packet Loss Is Acceptable for VoIP?
The thresholds for VoIP are significantly tighter than for gaming or general internet use. Audio codecs can conceal a small amount of loss using Packet Loss Concealment (PLC) algorithms. But once loss climbs past a certain point, PLC stops hiding the problem and starts making it sound worse, generating the metallic robotic distortion that users describe.
| Loss % | Audio Impact |
|---|---|
| 0% | Perfect |
| 0.1-0.5% | Imperceptible to most people |
| 0.5-1% | Occasional brief dropout; most users notice this |
| 1-3% | Choppy audio, words missing; calls feel unreliable |
| 3-5% | Severe distortion; robotic voice artifact from PLC failure |
| 5%+ | Calls drop; real-time conversation becomes impossible |
Even 1% packet loss is noticeable on VoIP calls, and anything above 3% makes calls unusable.
The three metrics that matter for VoIP are:
- Packet loss: Anything above 1% needs attention. Above 3% is urgent.
- Jitter: Variation in packet arrival timing. Target under 15ms for Microsoft Teams. Under 30ms is the practical limit before audio becomes noticeably garbled.
- Round-trip latency: Target one-way under 50ms; RTT under 100ms. Above 150ms RTT and calls take on a walkie-talkie quality where both parties talk over each other.
These three are related. High jitter causes packets to miss their playback window and get discarded by the jitter buffer, so they register as lost even if they technically arrived. Fixing jitter often reduces effective packet loss without the underlying network actually losing fewer packets.
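For a quick first read before reaching for dedicated tools, plain ping against your own router approximates two of these metrics. A minimal sketch -- the gateway address 192.168.1.1 is an assumption (find yours with ip route), and ping uses ICMP rather than UDP, so treat this as a sanity check, not a substitute for the UDP test in Step 1:
# 100 pings at 200ms intervals to the local gateway (sub-second intervals need sudo on macOS)
ping -c 100 -i 0.2 192.168.1.1
# In the summary line: "packet loss" is your loss figure; "mdev" (or "stddev" on macOS) approximates jitter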
Step 1: Confirm and Locate the Loss
Before changing anything, measure what you actually have. A general speed test will not show packet loss. You need a UDP test.
Run a UDP packet loss test at openpacketloss.com. It measures packet loss in real time using WebRTC, which is the same transport layer Teams, Zoom, and Webex use for media. A clean result here strongly suggests the problem is either specific to that platform's routing, or only occurs under load.
Run the test while you're actively experiencing the problem. A test during a quiet moment proves nothing if your calls degrade only when other devices are active on the network.
Check Your Platform's Own Call Diagnostics
Each platform exposes per-call metrics. Pull these after a bad call to see what the platform actually measured.
Microsoft Teams -- Call Health panel:
During a meeting or call, click the three-dot More menu, select Settings, then select Call Health. A panel appears on the right side of the window showing live network, audio, video, and screen sharing statistics updated every 15 seconds. After the call ends, admins with Teams Admin Center access can review Call Quality Dashboard (CQD) reports per user, per subnet, and per building.
Note: Ctrl+Shift+Alt+1 triggers a diagnostic log export to your Downloads folder, not the live overlay. Use the More menu path above to get the on-screen stats during a call.
Zoom -- In-call statistics:
Click the shield or information icon in the bottom-left toolbar during a call, then select Statistics. The Audio tab shows packet loss percentage in both directions separately. After the call, account admins with a Business plan or higher can review per-participant stats in the Zoom Dashboard at zoom.us/dashboard.
Webex -- Call diagnostics:
During a call, click the three-dot menu and select Troubleshooting. After the call, admins can access Troubleshooting > Meetings in Control Hub to review jitter, packet loss, and latency per participant with timestamps.
Locate Which Hop Is Dropping Packets
If your OpenPacketLoss test shows loss, you need to know where in the path it's occurring. The most useful target for this test is your ISP's first gateway hop, not a generic public DNS server, because that isolates whether the loss is inside your network or upstream. If you can pull the media server IP from your platform's call stats, use that instead.
# Linux / macOS -- run for 2+ minutes during a bad call
mtr --report --report-cycles 120 8.8.8.8
# Windows -- use WinMTR (GUI) or pathping
pathping -n -q 100 8.8.8.8
Replace 8.8.8.8 with the Teams or Zoom media server IP from your call stats, or with your ISP's first gateway hop to isolate the local segment first. If loss appears at hop 1 or 2, it's local. If it first appears several hops out and persists, it's your ISP or upstream routing.
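If you want to identify the first ISP hop explicitly rather than eyeballing the mtr output, a short traceroute finds it. A sketch -- the hop numbering is typical, not guaranteed, and depends on your topology:
# Hop 1 is usually your router; hop 2 is usually your ISP's gateway
traceroute -n -m 3 8.8.8.8
# Then aim mtr at the hop-2 address it printed
mtr --report --report-cycles 120 <hop-2-address>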
How to Fix VoIP Packet Loss: 5 Proven Solutions
1. Switch to Ethernet
The single most impactful fix: connect via wired Ethernet, not Wi-Fi. Wi-Fi introduces variable latency, interference-driven retransmissions, and signal drop events that register as packet loss at the application layer. A 5 GHz connection in the same room as the router can still produce intermittent packet loss that's invisible to a speed test but catastrophic on a voice call.
If desk phones are involved, IP phones have Ethernet ports built in -- use them. For laptops used as softphones, plug into Ethernet for calls even if you use Wi-Fi for everything else.
If running a cable is genuinely impossible, a Powerline Ethernet adapter (which routes data through your building's electrical wiring) is typically more stable than Wi-Fi for voice traffic, though results depend on the quality and layout of the wiring.
2. Fix Bufferbloat with QoS
This is the most common cause of VoIP packet loss in home offices and small businesses, and it's almost never diagnosed correctly.
Bufferbloat happens when your router's upload queue fills with bulk traffic and starts dropping the small, time-sensitive UDP packets carrying your voice. Think of it like an ambulance (your voice packet) stuck behind a fleet of slow-moving trucks (a Dropbox upload). Even though the road is wide (high Mbps), the ambulance can't get through. The result is choppy audio on your end and silence on the other end -- at exactly the moment the upload is happening.
Diagnose it first: Run a continuous ping to 8.8.8.8 while simultaneously starting a large upload (a cloud backup, a file transfer, a software update). If your ping latency spikes from 10ms to 200ms or higher the moment the upload starts, you have bufferbloat. That spike is your router queuing bulk data and squeezing out your voice packets.
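A concrete way to run that test from two terminals -- the scp target here is a stand-in for any sustained upload; a cloud sync client works just as well:
# Terminal 1: watch latency continuously and note the baseline
ping 8.8.8.8
# Terminal 2: generate a 500 MB file and saturate the upload while the ping runs
dd if=/dev/zero of=/tmp/bloat-test.bin bs=1M count=500
scp /tmp/bloat-test.bin user@example.com:/tmp/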
The fix: Enable SQM (Smart Queue Management) in your router firmware. On consumer routers that support it natively (Asus with Merlin firmware, some Netgear and TP-Link models), look for "Adaptive QoS" or "CAKE" under advanced network settings. On OpenWrt, install luci-app-sqm and select the CAKE queue discipline -- it handles asymmetric connections (where upload is slower than download, typical in home broadband) better than fq_codel.
Set the bandwidth limits to 90-95% of your actual measured speeds, not the plan speeds from your ISP. If your upload tests at 18 Mbps, set SQM to 16-17 Mbps. This prevents full saturation, which is when drops start.
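On OpenWrt, the whole setup can be done from the shell. A minimal sketch, assuming measured speeds of 200 Mbps down / 18 Mbps up and the package's default config section (named eth1); check your actual WAN device with ip link:
# Install SQM and configure the CAKE queue discipline
opkg update && opkg install luci-app-sqm
uci set sqm.eth1.enabled='1'
uci set sqm.eth1.interface='eth1'     # your WAN device
uci set sqm.eth1.download='190000'    # ~95% of measured download, in kbit/s
uci set sqm.eth1.upload='16200'       # ~90% of measured upload, in kbit/s
uci set sqm.eth1.qdisc='cake'
uci set sqm.eth1.script='piece_of_cake.qos'
uci commit sqm && /etc/init.d/sqm restart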
If your router doesn't support SQM, manually cap upload bandwidth to 80% of your capacity and place VoIP traffic in the highest-priority queue. On most consumer routers, this means setting the device or application type to "Voice/VoIP" priority in the QoS settings panel.
Also configure QoS rules to prioritize traffic on UDP ports 5060 (SIP signaling) and 10000-20000 (RTP audio). For Teams specifically, prioritize UDP to destination ports 3478-3481.
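If the router is Linux-based, the equivalent marking rule is short. A sketch assuming nftables -- the table and chain names are arbitrary, and the ports are the ones listed above:
# Mark outbound VoIP media as Expedited Forwarding (DSCP 46)
nft add table inet qos
nft add chain inet qos postrouting '{ type filter hook postrouting priority mangle ; }'
nft add rule inet qos postrouting udp dport { 5060, 10000-20000, 3478-3481 } ip dscp set ef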
3. Kill Background Bandwidth Usage During Calls
Even without bufferbloat, saturating your upload during a call will cause packet loss. Common offenders that run silently during business hours:
- Cloud backup services (Backblaze, OneDrive, Google Drive, Dropbox) uploading large files
- Windows Update downloading and uploading across multiple machines
- Video surveillance systems uploading to cloud storage
- Another person on the same connection running a parallel video call
Schedule cloud backups overnight. Stagger Windows Update deployment in managed environments. If you're a remote worker on a residential connection, other household members uploading or streaming during working hours directly affects your call quality.
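To catch an offender in the act, a per-process network monitor names the culprit directly. A quick sketch -- the interface name eth0 is an assumption, and nethogs is a separate install on most distros:
# Linux: live per-process bandwidth (look for sustained "Sent" traffic during a bad call)
sudo nethogs eth0
# macOS: built-in per-process counters, one sample
nettop -P -l 1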
4. Disable SIP ALG on Your Router
SIP ALG (Application Layer Gateway) is enabled by default on most consumer routers. It was designed to help VoIP traffic traverse NAT by rewriting packet headers, but its implementation in most consumer firmware is broken. It corrupts SIP packets rather than helping them. The result is one-way audio, registration failures, and calls that drop after exactly 30 or 60 seconds.
Find it in your router's firewall or advanced settings -- it may appear as "SIP ALG," "SIP Transformations," "SIP Normalization," or "VoIP ALG" depending on the manufacturer -- and disable it. This fix alone resolves a large proportion of VoIP issues in small office and home office environments.
5. Open the Right Firewall Ports
If your firewall is blocking or deeply inspecting VoIP traffic, it will cause packet loss and call failures. Make sure these are explicitly allowed:
Microsoft Teams:
- UDP 3478-3481 outbound (media relay to Microsoft's network)
- TCP 80 and 443 (signaling and fallback)
- UDP source ports 50000-50059 to destination ports 3478-3481 (the Teams client's default media port range)
- Do not proxy or SSL-inspect Teams media traffic. Microsoft explicitly recommends bypassing proxies for Teams media, as inspection introduces latency and often converts UDP to TCP.
Zoom:
- UDP 8801-8802 (primary media)
- UDP/TCP 443 (fallback)
- Allow *.zoom.us and *.zoomgov.com
Webex:
- UDP 5004, 9000 (media)
- TCP/UDP 443
- Allow *.webex.com and *.cisco.com
If Teams falls back to TCP 443 because UDP is blocked, it works but poorly. TCP retransmits dropped packets, but the retransmission delay makes recovered audio arrive too late to be played -- so it sounds the same as packet loss. Always keep UDP paths open.
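If you manage a Linux-based edge firewall, a consolidated allow rule for the UDP media ports above might look like the following -- a sketch that assumes an existing inet filter table with a forward chain; translate to your vendor's syntax:
# Explicitly allow outbound UDP media for Teams, Zoom, and Webex
nft add rule inet filter forward udp dport { 3478-3481, 8801-8802, 5004, 9000 } accept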
Microsoft Teams: Platform-Specific Fixes
Enable QoS DSCP Marking
For audio traffic, set DSCP value 46 (Expedited Forwarding). This marks Teams audio packets so network equipment along the path prioritizes them over general data traffic. Without DSCP marking, your switch and router treat a voice packet identically to a file download chunk.
Deploy via Group Policy on Windows:
Computer Configuration > Windows Settings > Policy-based QoS
Application name: ms-teams.exe
DSCP value: 46
Protocol: TCP and UDP
Or via PowerShell:
New-NetQosPolicy -Name "Teams-Audio" -AppPathNameMatchCondition "ms-teams.exe" -IPProtocolMatchCondition Both -DSCPAction 46
For DSCP marking to have any effect beyond the local machine, the network infrastructure also needs to honor those markings. Purely endpoint-side marking only helps within a managed corporate LAN where you control the switches.
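To confirm the marking actually survives onto the wire (switches and upstream devices sometimes re-mark or strip DSCP), capture the TOS byte directly. A sketch using tcpdump on the client or any Linux box in the path:
# DSCP 46 shifted into the TOS byte is 0xb8; the 0xfc mask strips the ECN bits
sudo tcpdump -v -n -c 5 'udp and ip[1] & 0xfc == 0xb8'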
Use the Teams Network Assessment Tool
Microsoft publishes the Teams Network Assessment Tool for Windows. It simulates a real media stream to Teams relay servers and reports latency, jitter, and packet loss specific to Teams infrastructure rather than generic internet quality. Run this from affected machines:
NetworkAssessmentTool.exe /connectivitycheck
NetworkAssessmentTool.exe /qualitycheck
The quality check report measures packet loss, loss burstiness, jitter, and round-trip time against Microsoft's own published thresholds. This is the most useful first diagnostic when troubleshooting a specific user's Teams call quality -- it tests the actual path to Teams, not a proxy destination.
Confirm UDP Is Being Used, Not TCP
If Teams is falling back to TCP (visible in CQD reports as TCP stream usage), verify that Teams media relay IPs are whitelisted through your entire network stack -- firewall, proxy, and any inline inspection appliances. Teams uses UDP by default and only falls back to TCP when UDP is completely blocked.
Zoom: Platform-Specific Fixes
Disable Zoom's Audio Processing
Zoom's noise suppression, echo cancellation, and audio enhancement can introduce artifacts that sound like packet loss but are actually codec processing artifacts. If users describe robotic or metallic voice distortion specifically on Zoom -- but not on Teams or regular phone calls -- try disabling the processing:
Settings > Audio > uncheck "Suppress background noise" and "Suppress intermittent background noise" > enable "Show in-meeting option to enable Original Sound."
Then during calls, click "Original Sound" in the top-left of the Zoom window to bypass the processing. If the distortion clears, the issue was Zoom's audio processing, not network packet loss.
Use Zoom Dashboard for Admin Diagnosis
Account admins (Business plan and above) can access zoom.us/dashboard after any call and view per-participant metrics: packet loss sent and received, jitter, latency, and bitrate sampled at one-minute intervals throughout the call. This tells you exactly when the loss occurred and whether it was on the send or receive path.
If receive-side loss is high for one specific user across multiple calls, the problem is between Zoom's servers and that user's connection. If send-side loss is consistently high, it's their local connection or ISP upload path.
Webex: Platform-Specific Fixes
Control Hub Troubleshooting
Webex admins can access per-meeting and per-participant troubleshooting data in Control Hub at admin.webex.com > Troubleshooting. Each call shows a timeline of audio and video quality with packet loss, jitter, and latency graphed over time. The timeline makes it easy to distinguish consistent loss (infrastructure or ISP problem) from burst-correlated loss (a local congestion event like a backup starting mid-call).
Webex Media Health Connector
For larger deployments, Cisco's Media Health Connector proactively monitors call paths, identifies poor-quality legs before users report them, and pinpoints whether quality issues originate at a specific office subnet, an ISP peering point, or within Cisco's own infrastructure.
Remote Worker-Specific Issues
Remote workers have a different problem profile from office-based users. The corporate network may be perfectly configured while the remote worker's home connection causes every issue.
VPN Routing of Media Traffic
Many corporate VPNs route all traffic through the company network, including Teams and Zoom media. Your voice packets travel from your home to the corporate office and back out to Microsoft or Zoom's servers -- adding two long hops to every packet and significantly increasing both latency and the chance of loss along the way.
Most VPN clients support split tunneling. Configure it to exclude Teams, Zoom, and Webex media traffic so it routes directly to the platform's servers rather than through the corporate tunnel. For Teams, Microsoft explicitly recommends split tunneling for its Optimize-category URLs and IPs.
If your IT policy requires full tunnel VPN, raise the issue with your IT team and reference Microsoft's or Zoom's own network guidance. Routing real-time media through a VPN concentrator degrades call quality by design -- it's not a configuration problem, it's a fundamental mismatch between how VPN tunnels and real-time UDP media work.
Residential ISP Upload Asymmetry
Most home broadband plans are heavily asymmetric: 500 Mbps download, 20 Mbps upload. VoIP and video conferencing are primarily upload-constrained. If you're on a call while your partner is uploading to cloud storage, or your security cameras are streaming, that 20 Mbps upload fills fast.
Check your actual upload speed during business hours, not off-peak. If it's consistently below 5 Mbps, call quality will be marginal whenever other upload demands are present. Symmetric business fiber is the most reliable long-term fix.
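Rather than relying on a single spot check, log upload speed across the workday so you have evidence for an ISP complaint. A sketch assuming the speedtest-cli Python package (pip install speedtest-cli):
# crontab entry: hourly measurement, 9am-5pm on weekdays
0 9-17 * * 1-5 speedtest-cli --simple >> "$HOME/speedlog.txt" 2>&1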
Shared Wi-Fi in Co-Working Spaces and Hotels
Public Wi-Fi is a packet loss environment by nature: shared medium, unknown congestion, unknown QoS policies, often rate-limited per device. For important calls from these locations, use your phone's mobile hotspot rather than the venue Wi-Fi. Keep a mobile data plan with sufficient hotspot allowance as a standing fallback.
Office Network and IT Admin Fixes
Separate VoIP Traffic onto a Dedicated VLAN
Create a voice VLAN and assign desk phones and video conferencing room systems to it. This prevents large file transfers or Windows Update downloads from competing with voice traffic at the switch level, regardless of QoS settings. Most managed switches (Cisco, Juniper, HP Aruba, MikroTik) support voice VLANs and will assign phones automatically via LLDP-MED or CDP discovery.
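On Cisco switches, the per-port configuration is short. A sketch with example VLAN IDs (10 for data, 20 for voice); the interface name is illustrative:
# Cisco IOS -- access port with a dedicated voice VLAN
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 spanning-tree portfast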
Check for Duplex Mismatches
Duplex mismatches cause packet loss that worsens with traffic load, sometimes clearing temporarily after a reboot (which re-runs auto-negotiation) before recurring. One end negotiates full-duplex, the other half-duplex. Under load, the half-duplex side registers collisions and discards frames, producing loss that looks completely random from higher-level diagnostics.
Check port statistics on managed switches for CRC errors, runts, and input errors:
# Cisco IOS
show interfaces GigabitEthernet0/1
# Look for: input errors, CRC, runts, giants
Hard-code duplex and speed on both ends rather than relying on auto-negotiation, particularly for IP phones, VoIP gateways, and switch uplinks connecting to ISP equipment.
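The corresponding IOS commands, applied identically on both ends of the link -- a sketch using a 100 Mbps phone port as the example:
# Cisco IOS -- pin speed and duplex explicitly
interface GigabitEthernet0/1
 speed 100
 duplex full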
Bypass WAN Accelerators and SSL Inspection for Media Traffic
WAN optimization appliances (Riverbed, Silver Peak, Cisco WAAS) use TCP optimization and deduplication techniques that break real-time UDP media streams. Bypass VoIP and conferencing traffic by IP or port at the WAN accelerator.
The same applies to SSL inspection proxies. Routing Teams, Zoom, or Webex media through an inspection proxy adds decryption and re-encryption latency and in some configurations silently converts UDP to TCP -- both of which degrade call quality in ways that are hard to trace back to the proxy.
Diagnosing by Symptom
| Symptom | Most Likely Cause |
|---|---|
| Robotic or metallic voice | Packet loss 3-5%; PLC algorithm overwhelmed |
| Words and syllables missing | Packet loss 1-3%; PLC partially compensating |
| Audio cuts out during uploads or downloads | Bufferbloat; fix with SQM/QoS |
| One-way audio | SIP ALG corruption or NAT/firewall blocking RTP |
| Calls drop after exactly 30 or 60 seconds | SIP ALG killing the session; disable it |
| Bad quality only on VPN | Media routing through VPN concentrator; enable split tunnel |
| Bad quality only at specific times of day | ISP congestion or peak-hour bandwidth contention |
| Bad quality only on Wi-Fi | Interference or signal strength; switch to Ethernet |
| Bad quality only in one direction | Asymmetric path issue; run MTR in both directions |
| Echo | Acoustic feedback (speakers near mic) or high return-path latency |
| Quality degrades only in large meetings | Bandwidth saturation; video streams multiply with participant count |
Bandwidth vs. Packet Loss: Which One Actually Matters
A G.711 codec call uses roughly 87 kbps each way. Even at 720p video, Teams uses around 1.5 Mbps. The instinct when calls sound bad is to upgrade the internet plan. But a 1 Gbps connection with 3% packet loss produces worse call quality than a 10 Mbps connection with 0% packet loss.
If your tests show packet loss, adding bandwidth doesn't fix it. The packets are being dropped, not queued because the pipe is full. Fix the loss first. Bandwidth upgrades are only useful if your connection is genuinely saturated, and with modern broadband speeds, that's rarely the actual problem.
Test and Verify Your Fix
Run a UDP packet loss test at openpacketloss.com before and after each change you make. The test uses the same UDP/WebRTC transport as your conferencing platform and gives you a hard number rather than a subjective call quality impression.
For office network changes, test from multiple subnets -- a router-level fix may not help a specific subnet with its own problem. For remote workers, test during business hours when other household bandwidth is active, not late at night when results will be artificially clean.
If the test is clean but calls still have issues, the problem is specific to that platform's routing, or it only occurs under actual call conditions (multiple participants, screen sharing active, concurrent uploads). In that case, pull data from the actual call using the platform's own diagnostics: Teams Call Health panel, Zoom Dashboard, or Webex Control Hub.