Audio jitter buffers are required 101-level material for understanding VoIP. libWebRTC’s audio jitter buffer implementation – the one in Chromium – is known as NetEQ. NetEQ is anything but basic. This is good from a user perspective, since real-life network conditions are often challenging. However, it means NetEQ’s esoteric code is complex and difficult to parse. Luckily we found a volunteer who was up to this task 🙂
I am excited to welcome Fengdeng Lyu to webrtcHacks for a deep dive into NetEQ – the audio jitter buffer implementation that’s been quietly evolving to ensure smooth audio playback since the birth of WebRTC. Fengdeng is a software engineer at Meta, where he leads R&D efforts in RTC network resilience. This post is focused on audio, but Fengdeng is also involved in video FEC, which was recently featured at the @Scale conference.
In this post, Fengdeng will unpack how NetEQ handles packet jitter, loss, and concealment to maintain audio quality under real-world network conditions. The first half reviews some basics and the not-so-basics of audio jitter buffers. The second half is a function-by-function guide to the libWebRTC NetEQ implementation with insights into its numerous overlapping algorithms.
Thank you to Niklas Enbom and Fippo for their help reviewing this 🙇.
I learned a lot from this post and I am sure you will too!
{“editor”, “chad hart“}
- High-level Review of Audio Jitter Buffers in WebRTC
- libWebRTC’s NetEQ Implementation
NetEQ is Chromium’s WebRTC audio jitter buffer implementation. Jitter buffers are essential components for moderating the inherent variability of the Internet. They are even more important – and more complex – for real-time communications systems like WebRTC. Understanding this component is essential for debugging WebRTC audio issues. However, the NetEQ codebase is vast, with many complex algorithms, making it nearly impossible to understand quickly without a lengthy review.
This post provides a review of the NetEQ implementation. First, I will begin with a review of jitter buffers and WebRTC’s specific requirements for audio jitter buffers. Then we will jump into some of the specifics with links to the Chromium source.
In a Nutshell
- Whenever an audio packet arrives from the network, NetEQ must store it.
- Whenever audio is needed for playout, NetEQ must produce it.
NetEQ’s goal is to do these two tasks with the lowest possible latency and the highest possible audio quality.
Why NetEQ Deserves the Complexity
Different use cases have different jitter buffer implementations due to distinct product requirements:
- When watching movies on Netflix (i.e., HLS), the jitter buffer caches minutes of content, allowing for seamless playback even when the connection is lost for a noticeable amount of time.
- When watching live streaming on TikTok (i.e. LL-HLS), the jitter buffer only caches a few seconds of content, because the longer the buffer, the longer the delay between influencers and the audiences they interact with.
- When it comes to VoIP calling (i.e. WebRTC), the jitter buffer should only cache at most a few hundred milliseconds of content, otherwise there will be catastrophic effects like double-talk.
Building jitter buffers for RTC is challenging. Compared to video streaming, RTC’s underlying transport is unreliable and it has stricter latency requirements:
- RTP over UDP is unreliable – WebRTC uses the RTP protocol as a thin layer over UDP. There’s NO guarantee when a packet will arrive, nor if the packet will arrive at all. VoIP systems choose UDP over reliable transport protocols like TCP because it is faster. This trade-off means you need a buffer that handles lost, late, and out-of-order packets.
- Continuous audio is critical – when the receiving device plays audio, the continuity of the audio waveform is the top priority. Any audio glitches/underruns will be immediately noticed by the user. The buffer should minimize audio hiccups as much as possible.
- Small buffer size is critical – with streaming applications like YouTube and Spotify – even for live events – clients typically buffer at least several seconds of audio. As the listener, you don’t notice this delay since there is no reference feedback to tell you what that delay is. However, real-time audio conversations require latencies of less than 500ms for fluid interactivity. Beyond this, natural conversations with the other end become difficult; the delay becomes obvious when you start talking over one another. As a result, RTC needs to keep jitter buffers small and dynamically try to shrink them when it can while still delivering continuous audio.
Audio Encoding and Packetization
Wait – Aren’t We Talking About NetEQ?
To understand how NetEQ works, it’s essential to be familiar with basic audio concepts, so we will cover some important audio fundamentals before proceeding.
Digital Audio Concepts
Prerequisite: See this MDN article for a comprehensive and accurate review of audio digital representation and audio encoding. It covers a broad spectrum of audio applications.
Specific to WebRTC, here are some pragmatic principles around audio formats:
- 48kHz is the most prevalent sample rate for audio recording, packetization, and playback. I will use 48kHz as the default sample rate throughout this blog.
- Opus is the most used audio codec in WebRTC. WebRTC has mature support for advanced Opus features like DTX and in-band Forward Error Correction (FEC) that impact NetEQ.
- With mono-channel 16-bit raw audio representation, the uncompressed footprint is 48,000 samples/s × 16 bits = 768 kbps, which is a lot of bandwidth if you consider the need for multiple streams and even higher-bandwidth video too (see the quick calculation after this list).
- Opus can encode audio at different bitrates. Typically, a 20kbps-25kbps encoding bitrate provides fair perceptual quality. Anything between 10kbps and 40kbps also makes sense in RTC depending on network conditions.
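As a quick sanity check of the raw-audio math above, here is a small Python sketch (the Opus bitrates are just the typical values mentioned in this post):

```python
# Uncompressed PCM bitrate vs. typical Opus bitrates (values from this post).
SAMPLE_RATE_HZ = 48_000
BITS_PER_SAMPLE = 16
CHANNELS = 1  # mono

uncompressed_bps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS
print(f"Uncompressed PCM: {uncompressed_bps / 1000:.0f} kbps")  # 768 kbps

for opus_kbps in (10, 20, 25, 40):
    ratio = uncompressed_bps / (opus_kbps * 1000)
    print(f"Opus @ {opus_kbps} kbps -> {ratio:.0f}x smaller than raw PCM")
```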
Packetization Length and ptime
The chosen audio encoder (like Opus) encodes audio samples into the bitstream. Then the bitstream is packetized into different RTP packets and sent to the network.
Packetization length or “ptime” represents how many audio samples one packet contains. For instance, 20ms ptime means the sender accumulates 20 milliseconds of the audio bitstream in one packet, which adds 20ms to the sender-side audio latency.
In WebRTC, ptime is an attribute in the SDP that controls packetization length. It defaults to 20, representing a 20ms packetization interval. Typical values are 10ms, 20ms, 40ms, 60ms, and 120ms; 20ms and 60ms are seen most in VoIP applications.
As a rule of thumb, ptime should decrease as network delivery improves (the next section will dive into the reasoning), which explains the existence of the adaptive-ptime field in RTCConfiguration even in the browser. Beyond packet overhead, a 10ms ptime also has significantly lower coding efficiency than a 20ms ptime. You can get a sense of real-time audio bitrates of different codecs in the browser with this sample: Peer connection: audio only.
Audio Packetization Length Matters
Different packetization lengths imply a tradeoff among latency, bandwidth, and audio quality. Though smaller packetization lengths mean smaller sender-side latencies, smaller packetization requires more bandwidth due to a higher packet rate, and encoding efficiency decreases.
Surprisingly, the packet header size can be larger than the actual audio payload in many scenarios.
Packet header (IP/UDP/RTP) overhead consumes a considerable amount of bandwidth. Given a 20-byte IPv4 header, an 8-byte UDP header, a 12-byte RTP header, and a 10+ byte SRTP authentication tag, each packet carries roughly 50 bytes of header overhead.
Additionally, RTP header extensions bring considerable additional overhead but are crucial for advanced WebRTC features like Transport-Wide Congestion Control (TWCC) and absolute capture time. In practice, the RTP header extension feature incurs at least 20 bytes of additional overhead for each packet and can easily reach 40 bytes.
Let’s see what a 60ms ptime looks like vs. a 20ms one. Assuming 20 bytes of RTP header extensions, each packet carries 70 bytes of headers. With 20ms packetization, there are 50 packets per second (1 s / 20 ms); with 60ms packetization, there are 16.7 packets per second (1 s / 60 ms). So, the header overhead saved by moving from a 20ms to a 60ms ptime is (50 - 16.7 packets per second) × 70 bytes/packet × 8 bits/byte = 18,648 bits/second, or about 18.6 kbps. Adding 18.6 kbps to the codec’s encoding bitrate can result in a significant audio quality improvement.
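To make the arithmetic concrete, here is a short Python sketch of the overhead calculation, using the header sizes listed above and the same assumed 20 bytes of extensions:

```python
# Per-stream IP/UDP/RTP/SRTP header overhead at different ptimes.
# 20 (IPv4) + 8 (UDP) + 12 (RTP) + 10 (SRTP tag) + 20 (assumed extensions) = 70 bytes.
HEADER_BYTES = 20 + 8 + 12 + 10 + 20

def header_overhead_kbps(ptime_ms: float) -> float:
    packets_per_second = 1000 / ptime_ms
    return packets_per_second * HEADER_BYTES * 8 / 1000

for ptime in (10, 20, 60, 120):
    print(f"ptime={ptime:3d}ms -> {header_overhead_kbps(ptime):4.1f} kbps header overhead")

# Matches the ~18.6 kbps savings computed above (modulo rounding of 16.7 packets/s):
print(f"{header_overhead_kbps(20) - header_overhead_kbps(60):.1f} kbps saved")
```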
Audio Network Adaptor
Packetization length can be adjusted dynamically during a call. WebRTC uses a component called audio network adaptor (ANA) to achieve that. The idea is to send higher quality (via higher bitrate) and lower latency (via smaller packetization length) audio when the network is good, and vice versa.
As a rule of thumb, good network conditions should use a 20ms packetization length; poorer networks can use 60ms or even 120ms. Theoretically, 10ms packetization is also possible, but it’s rarely used.
Network Artifacts
The network is not perfect. There might be a router failure causing packet loss; network overuse can cause packet delay; there can even be threading issues in your device’s networking stack. NetEQ is designed to overcome these irregularities in packet arrival patterns.
Let’s categorize common network issues that NetEQ cares about.
Jitter
The bulk of NetEQ logic is to handle packet jitter. Packets depart evenly from the sender but do not necessarily arrive at the receiver evenly. In other words, the network travel time fluctuates from packet to packet. Network jitter can be categorized as follows:
- Bursty arrival. The receiver doesn’t receive any packets for a while. Then multiple packets arrive at the receiver all at the same time. People might find this pattern extreme, but it’s the most common network artifact.
- Minor jitter. One or multiple packets have a minor delay in arrival time. The packets that arrive afterward quickly catch up. Note that according to NetEQ’s logic, the delay must be less than the packetization length; otherwise it’s categorized as bursty arrival.
- Permanent network delay change. From one moment on, there’s a permanent network travel time increase/decrease. The receiver will observe one jump/drop in inter-arrival time, but no more jitter afterwards.
Loss
Packet loss recovery is a hot topic in WebRTC. NetEQ is only one of many components that deal with network loss. This blog focuses on NetEQ’s jitter compensation functionality. Still, I want to give a high-level overview of audio loss recovery to clear up basic questions.
Loss Detection
Every packet is assigned a sequence number. The receiver can detect packet loss by looking for gaps between those numbers. Similar loss information is also sent as feedback to the sender via RTCP receiver reports (RTCP_RR) and the Transport Wide Congestion Control (TWCC) algorithm.
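As an illustration – not NetEQ’s actual code – gap detection from sequence numbers can look like this, including the 16-bit wrap-around handling that RTP sequence numbers require:

```python
def detect_losses(prev_seq: int, new_seq: int) -> list[int]:
    """Return the sequence numbers missing between the last and newest packet."""
    gap = (new_seq - prev_seq) % 65536   # RTP sequence numbers are 16-bit
    if gap in (0, 1) or gap > 32768:     # duplicate, in-order, or reordered/old
        return []
    return [(prev_seq + i) % 65536 for i in range(1, gap)]

print(detect_losses(100, 101))   # [] - in order
print(detect_losses(100, 104))   # [101, 102, 103] - three packets missing
print(detect_losses(65534, 1))   # [65535, 0] - loss across the wrap-around
```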
Audio Loss Recovery Mechanism
There are two categories of audio loss recovery mechanisms commonly seen in WebRTC applications:
- Proactive: in-band Forward Error Correction (FEC), out-of-band FEC, RED
- Reactive: Negative Acknowledgement (NACK) / Retransmission
WebRTC natively supports Opus in-band FEC, RED, and NACK (retransmission). Out-of-band FEC – i.e. sending redundant audio outside of Opus’ built-in encoding mechanisms – is up to the developer’s implementation.
Retransmission and NetEQ
If developers explicitly negotiate NACK in the SDP, NetEQ requests audio retransmission via RTCP_NACK messages.
One common question is whether NetEQ proactively waits for retransmission; the answer is no. The biggest reason for this is that the receiver can’t differentiate retransmitted packets from regular packets.
The regular/retransmitted packet differentiation is done via an RTX stream on the video side, but this isn’t implemented for audio. In other words, retransmitted audio packets and regular packets have the same SSRC, so the receiver can’t identify which packets are the result of audio NACK.
Though NetEQ doesn’t purposely wait for retransmission, the jitter buffer length will eventually grow after receiving some amount of retransmitted packets. That’s because retransmitted packets are usually late, which resembles network jitter and enlarges NetEQ’s delay.
Others
Packet reordering (out-of-order arrival) is when packets arrive at the receiver in a different order than they were sent. One might assume this is a prevalent issue in the real world, but it is actually very rare outside of retransmitted packets from NACK.
Now we will take a deeper look into the specifics of the WebRTC NetEQ implementation.
NetEQ Architecture
NetEQ has two main APIs: InsertPacket and GetAudio:
- InsertPacket stores audio packets from the network.
- GetAudio returns exactly 10ms of audio samples. It MUST be called by the playout thread 100 times per second – ideally once every 10ms – for device playout.
Among the 100+ files inside the NetEQ folder, the backbone is just neteq_impl, delay_manager, and decision_logic:
- neteq_impl is the main orchestrator
- delay_manager takes care of delay estimation
- decision_logic is in charge of buffer management
You can find all NetEQ APIs here.
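To make this concrete, here is a pseudocode sketch of how the two APIs are typically driven; the wrapper object and thread functions are hypothetical, and the real C++ interface lives in the API reference linked above:

```python
import time

def network_thread(neteq, socket):
    """Feed NetEQ whenever the network delivers a packet - timing is irregular."""
    while True:
        rtp_packet = socket.receive()     # may be late, early, bursty, or lost
        neteq.InsertPacket(rtp_packet)    # NetEQ buffers it and updates delay stats

def playout_thread(neteq, audio_device):
    """Drain NetEQ on a strict clock - 10ms of samples, 100 times per second."""
    while True:
        frame = neteq.GetAudio()          # always returns exactly 10ms of audio
        audio_device.play(frame)
        time.sleep(0.010)                 # in practice the audio device paces this
```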
NetEQ Impl
neteq_impl.cc concretizes NetEQ APIs and serves as the main orchestrator.
When a packet arrives from the network, InsertPacket is called. This will:
- Split out FEC and RED for proactive protection (if there’s any)
- Call delay_manager to update delay estimation
- Insert packet into packet_buffer
- Update all internal states like sync buffer length, statistics, sampling rate, DTX, muted, etc
Note: Refer to RED: Improving Audio Quality with Redundancy for more context on audio RED. Though RED is negotiated as a separate codec rather than an attribute like FEC, on the receiver side NetEQ processes RED and FEC similarly by directly parsing the redundant content from the payload.
When the playout thread requests 10ms audio samples, the GetAudio function is invoked:
- Give internal states and TargetLevelMs (from delay_manager) to decision_logic.
- Depending on the instruction from decision_logic, neteq_impl might extract a packet from the packet buffer, decode the packet with the codec, use Digital Signal Processing (DSP) – expansion, acceleration, speech stretching, etc. – to post-process the decoded audio samples, and put the samples in the sync buffer (an intermediate short buffer). We will explain those concepts in the Decision Logic section.
- Extract 10ms from the sync buffer and return it from the GetAudio call.
Sync Buffer
The name “sync buffer” isn’t intuitive. It’s essentially an intermediate buffer hosting decoded audio samples. For example, with a 60ms packetization length, there are 60ms of audio samples in the sync buffer after a packet is decoded. Each GetAudio() call only requests 10ms of audio samples, no more, no less. So, the sync buffer holds the extra samples and allows an immediate return for the next GetAudio() calls.
The sync buffer also aids DSP operations. When a speech stretching operation is required, audio samples are extracted from the sync buffer, processed by the DSP, and put back. When NetEQ audio expansion is required, the DSP uses previously extracted audio samples for interpolation.
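Here is a minimal sketch of the sync buffer idea; it is deliberately simplified, as the real implementation also tracks timestamps and supports the DSP operations described above:

```python
from collections import deque

SAMPLES_PER_10MS = 48_000 // 100   # 480 samples at 48kHz

class SyncBuffer:
    """Holds decoded samples; whole packets go in, exactly 10ms comes out."""
    def __init__(self):
        self._samples = deque()

    def insert_decoded(self, samples):
        self._samples.extend(samples)   # e.g. 2880 samples for a 60ms packet

    def get_10ms(self):
        if len(self._samples) < SAMPLES_PER_10MS:
            return None                 # caller must decode more or conceal
        return [self._samples.popleft() for _ in range(SAMPLES_PER_10MS)]

buf = SyncBuffer()
buf.insert_decoded([0] * (48_000 * 60 // 1000))   # one decoded 60ms packet
print(len(buf.get_10ms()))   # 480 - the remaining 50ms stays buffered
```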
Evolution of NetEQ Packet Delay Measurement
In this blog, we use a 60ms fixed packetization length for easier illustration.
The delay_manager generates target_level (target delay) based on network conditions. When lots of packets are delayed, the delay_manager should output a higher target delay so that packets stay in the buffer longer. The decision_logic manages the packets in the buffer: if a packet is due for decoding, decision_logic removes it from the buffer for decoding.
But how does the delay_manager quantify how much a packet has been delayed? And how does decision_logic decide a packet is due for decoding? Let’s introduce the concept of packet relative delay.
Which Packet Travels Faster
Network conditions fluctuate every millisecond, so each packet has a different network travel time. Due to potential device clock shifts, we can’t calculate “exactly” how long it takes for an individual packet to travel from the sender to the receiver. However, it’s still possible to compare packet travel time (from sender to receiver) between two packets using the following formula.
P1_P2_travel_time_difference
= (P2_arrival_ntp_ms - P2_rtp_ts / sampling_rate + clock_shift_ms) - (P1_arrival_ntp_ms - P1_rtp_ts / sampling_rate + clock_shift_ms)
= (P2_arrival_ntp_ms - P2_rtp_ts / sampling_rate) - (P1_arrival_ntp_ms - P1_rtp_ts / sampling_rate)
When the formula returns a positive number, it means P2 travels slower than P1, and vice versa.
If P2 is the most recently arrived packet, then the choice of P1 represents the evolution of NetEQ packet delay measurement.
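Here is the same comparison as a small, illustrative Python helper. Note that rtp_ts counts samples, so dividing by the sample rate in kHz converts it to milliseconds:

```python
def send_time_ms(rtp_ts: int, sample_rate_hz: int = 48_000) -> float:
    return rtp_ts / (sample_rate_hz / 1000)   # RTP timestamp -> milliseconds

def travel_time_diff_ms(p1_arrival_ms, p1_rtp_ts, p2_arrival_ms, p2_rtp_ts):
    """Positive: P2 traveled slower than P1. The unknown clock shift cancels out."""
    return (p2_arrival_ms - send_time_ms(p2_rtp_ts)) - \
           (p1_arrival_ms - send_time_ms(p1_rtp_ts))

# P2 was sent 60ms (2880 samples at 48kHz) after P1 but arrived 100ms later:
print(travel_time_diff_ms(1000, 0, 1100, 2880))   # 40.0 -> P2 is 40ms slower
```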
The Classic Inter-arrival Delay
Inter-arrival delay is the arrival time difference between two consecutive packets. NetEQ used inter-arrival delay to update the delay_manager up until 2022.
With a 60ms packetization length, the receiver expects to receive one audio packet every 60ms. Imagine a scenario where for every 4 packets, the second packet is delayed by 40ms, so that its inter-arrival delay is 100ms. Naturally, an ideal NetEQ target delay should be 100ms.
Buffering 100ms of audio works fine for the above example. Right before P2’s arrival, the playout thread had just consumed all the audio inside the buffer; then P2 arrived. NetEQ never ran out of audio.
When Inter-arrival Delay Fails
What if network delay keeps getting worse?
In the example below, P3 and P4 suffer from accumulating network delay, but the target level is still 100ms using inter-arrival delay. Remember, for illustration we take the largest seen inter-arrival delay as the target delay.
Before P3 arrived, the buffer was short 40ms of audio; before P4 arrived, the jitter buffer was short 40ms of audio again. This is also called a jitter buffer underrun.
It’s evident that inter-arrival delay can’t handle accumulating network delay. We need a better algorithm to calculate packet delay – we’ll cover that in the next section.
*Note that speech stretching / preemptive-expansion could partially offset the underrun in this case, but the audio underrun isn’t completely eliminated.
Relative Delay
Relative delay replaced inter-arrival delay in 2022 to solve the exact issue above.
With relative delay, we choose the “fastest” packet in a time window as an anchor. See the previous “which packet travels faster” section above for how this is measured.
Denoting the current packet as Pn and the fastest packet as Pf, the relative delay for each packet is:
(Pn_arrival_ntp_ms - Pn_rtp_ts / sampling_rate) - (Pf_arrival_ntp_ms - Pf_rtp_ts / sampling_rate)
Let’s run the above example to see how relative delay wins over inter-arrival delay.
With relative delay calculation, the new target level will be 180ms. NetEQ no longer runs short of audio samples! Isn’t it magical?
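Here is a sketch of the relative delay bookkeeping run on the accumulating-delay example; the tracker class is hypothetical, and the window size mirrors the 2-second default discussed in the next section:

```python
WINDOW_MS = 2000   # history window for finding the "fastest" packet

class RelativeDelayTracker:
    def __init__(self, sample_rate_hz=48_000):
        self._khz = sample_rate_hz / 1000
        self._history = []   # (arrival_ms, arrival_ms - send_ms) per packet

    def on_packet(self, arrival_ms, rtp_ts):
        delay_proxy = arrival_ms - rtp_ts / self._khz   # clock shift cancels out
        self._history = [(a, d) for a, d in self._history
                         if arrival_ms - a <= WINDOW_MS]
        self._history.append((arrival_ms, delay_proxy))
        fastest = min(d for _, d in self._history)      # the anchor packet Pf
        return delay_proxy - fastest                    # relative delay >= 0

tracker = RelativeDelayTracker()
# 60ms ptime, accumulating network delay (the failing example above):
for arrival_ms, rtp_ts in [(0, 0), (100, 2880), (200, 5760), (300, 8640)]:
    print(tracker.on_packet(arrival_ms, rtp_ts))   # 0.0, 40.0, 80.0, 120.0
# A max relative delay of 120ms plus one 60ms packet lines up with the
# 180ms target level in the example above.
```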
Caveat – History Window Size
In the above example, P1 was chosen as the fastest packet in a history window. WebRTC NetEQ uses 2 seconds as the default window size, but is there an intuitive understanding of this history window?
Think about this question: If there’s only one permanent increase in packet travel time, is it meaningful at all to increase the jitter buffer size?
The answer is no, but the relative delay calculation still pushes 2 seconds’ worth of large values to the delay_manager. The window size is a heuristic value that distinguishes temporary jitter from a permanent delay change:
- If packet travel time increased within the past 2 seconds, large delay values are pushed to the delay_manager.
- Otherwise, the travel time increase is permanent / one-off, so we no longer update the delay_manager with large delay values.
In practice, the default 2s window isn’t meant to work for all scenarios; it is rather a tuning knob.
Delay Manager
The delay_manager aggregates the relative delay of previous packets and then outputs a target level to control jitter buffer delay.
At a high level, there are two packet arrival tracking histograms: the underrun histogram and the reorder histogram. delay_manager takes the maximum of the target levels from the two histograms as the final target level.
- Both histograms use a forget_factor (an exponential factor between 0 and 1) to control how fast the histogram forgets previous values. With a larger forget factor, the histogram accepts new relative delays more slowly.
- The underrun histogram uses a quantile to select the target level, while the reorder histogram uses a cost function.
Forget Factor
forget_factor controls how fast the histogram should forget previous delay values. The hardcoded 0.983 default is meant to capture low-frequency events.
All bucket values in the histogram sum up to 1. Whenever a new relative delay arrives, all values are multiplied by forget_factor, so that the new sum is exactly forget_factor. Then, we add (1 - forget_factor) to the bucket the new relative delay belongs to. See the following example for an easy understanding.
So how long does it take for the histogram to forget about an old relative delay? In the above example, with a 0.95 quantile (explained in the sections below) and a 20ms packetization length, it takes 3.5s (175 packets) for the new 20ms target delay to take over the old 140ms target delay: the old distribution’s mass decays as forget_factor^n, and 0.983^175 ≈ 0.05.
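You can verify that takeover time directly: with forget factor f, the old distribution’s remaining mass after n new packets is f^n, so the new bucket crosses the 0.95 quantile once 1 - f^n ≥ 0.95:

```python
import math

f = 0.983                         # the hardcoded forget factor
n = math.log(0.05) / math.log(f)  # smallest n with f**n <= 0.05
print(n)                          # ~174.7 -> 175 packets
print(n * 20 / 1000)              # ~3.5 seconds at a 20ms ptime
```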
Underrun Histogram
The Underrun histogram keeps track of the relative delays during the entire session. It’s meant to capture low-frequency events, e.g. “there is 100ms of packet delay every 10s during the entire session.”
20ms is a good bucket size for the histogram. Note that this bucket size has nothing to do with the packetization length – they are orthogonal! Each bucket contains the frequency (from 0 to 1) of the corresponding relative delay value.
In the following example, 20% of packets have a 20ms relative delay, 10% of packets have a 40ms relative delay, …, 6% of packets have a 120ms relative delay, and 4% of packets have a 140ms relative delay.
delay_manager takes a quantile of the underrun histogram as the target level. For example, if the quantile is 0.95, then 120ms is chosen as the target level because the cumulative frequency up to that bucket is 0.96, which is larger than 0.95. The quantile – effectively the percentage of network delay NetEQ should accommodate – is a good tuning knob in native applications.
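Putting the forget factor and the quantile together, here is a toy version of the underrun histogram. The class is illustrative, and the 60% in-order fraction is an assumption to make the example distribution sum to 1:

```python
FORGET_FACTOR = 0.983
BUCKET_MS = 20

class UnderrunHistogram:
    def __init__(self, num_buckets=100):
        self.buckets = [0.0] * num_buckets
        self.buckets[0] = 1.0                      # start fully "in order"

    def add(self, relative_delay_ms):
        self.buckets = [b * FORGET_FACTOR for b in self.buckets]
        idx = min(int(relative_delay_ms // BUCKET_MS), len(self.buckets) - 1)
        self.buckets[idx] += 1 - FORGET_FACTOR     # values still sum to 1

    def quantile(self, q=0.95):
        cumulative = 0.0
        for i, freq in enumerate(self.buckets):
            cumulative += freq
            if cumulative >= q:
                return i * BUCKET_MS               # target level in ms
        return (len(self.buckets) - 1) * BUCKET_MS

hist = UnderrunHistogram()
# The distribution from the example above (assumed 60% at 0ms so it sums to 1):
hist.buckets[:8] = [0.60, 0.20, 0.10, 0.0, 0.0, 0.0, 0.06, 0.04]
print(hist.quantile(0.95))   # 120 - the cumulative sum reaches 0.96 there
```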
reorder_optimizer
Under consecutive/bursty packet loss, FEC can’t compensate for all lost packets because it only contains the previous packet’s information. In this scenario, retransmission is the only mechanism to recover lost audio. This is when the reorder histogram helps.
reorder_optimizer keeps track of reordered packets and puts in-order packets in the first bucket. VoIP products usually experience packet reordering due to retransmission. In other words, reorder_optimizer is a tuning knob to accommodate retransmission packets.
There’s a tradeoff between latency and loss recovery. Even if every requested retransmission arrives at the receiver, those packets won’t be utilized unless the jitter buffer has enough buffering to accommodate the delay. reorder_optimizer provides a framework to make that tradeoff decision.
Instead of using a fixed quantile as the target level, reorder_optimizer uses a tradeoff function: cost = delay_ms + ms_per_loss_percent × loss_percent, where ms_per_loss_percent defaults to 20 but is worth tuning.
The tradeoff function computes a composite cost between latency and packet loss. Then, reorder_optimizer iterates through all potential latencies and picks the eventual target delay with the minimum cost.
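Here is a sketch of that minimization; the loss function below is a toy stand-in for what the reorder histogram would actually provide:

```python
MS_PER_LOSS_PERCENT = 20   # the tunable default mentioned above

def best_target_delay(loss_percent_at) -> int:
    """Pick the delay minimizing cost = delay + ms_per_loss_percent * loss%."""
    candidates = range(0, 500, 20)   # one candidate per histogram bucket
    return min(candidates,
               key=lambda d: d + MS_PER_LOSS_PERCENT * loss_percent_at(d))

# Toy case: 15% of packets (e.g. retransmissions) arrive ~200ms late, so any
# target delay below 200ms "loses" them.
toy_loss = lambda d: 15.0 if d < 200 else 0.0
print(best_target_delay(toy_loss))   # 200 - waiting beats eating 15% loss
```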
Decision Logic
delay_manager ingests network data and provides the target delay, while decision_logic does the heavy lifting to maintain jitter buffer latency.
decision_logic.cc decides the timing to decode audio packets, post-process audio samples, perform loss concealment, and perform jitter compensation. decision_logic was completely revamped in 2022 along with delay_manager, which we will dive into shortly.
Decision Logic State Machine
Inside decision logic, there is a finite set of operations NetEQ can perform. Correspondingly, there is also a finite set of scenarios that NetEQ can run into (condensed into a pseudocode sketch after this list):
- The current jitter buffer latency is larger than the target delay – NetEQ is introducing more delay than necessary. In other words, there are more packets in the buffer than desired. In this case, decision_logic performs the Accelerate Operation, which leverages the acceleration algorithm to shorten the playout duration of each packet.
- The current jitter buffer latency is smaller than the target delay, so there is a high risk of audio underrun. In contrast to Accelerate, decision_logic performs the Preemptive Expand Operation to slow down the audio playout.
- There are no more audio packets or audio samples available. decision_logic keeps performing the Expand Operation, which initially stretches out the last audio samples and eventually produces silence. Note that the Expand Operation can be done either by neteq_expand or by codec PLC.
- The next packet is unavailable but future packets are available, which represents packet loss. decision_logic performs the Expand Operation the same as above. After some amount of expansion, decision_logic moves on to decode the available packets.
- Unsurprisingly, the last scenario is when the jitter buffer has no loss and the delay is as delay_manager dictates. decision_logic just performs the Normal Operation, which decodes packets and plays audio samples without any time-stretching algorithm.
- There is also special handling for DTX, muted state, etc.
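Condensed into pseudocode, the scenarios reduce to roughly the following decision function. The threshold and operation names are simplified placeholders, not the actual decision_logic code:

```python
def choose_operation(next_packet_available: bool,
                     current_delay_ms: float,
                     target_delay_ms: float,
                     tolerance_ms: float = 10) -> str:
    if not next_packet_available:
        # Covers "buffer empty" and "next packet lost": conceal either way
        # (NetEQ expand or codec PLC), then decode future packets when due.
        return "EXPAND"
    if current_delay_ms > target_delay_ms + tolerance_ms:
        return "ACCELERATE"           # too much buffered delay: speed up playout
    if current_delay_ms < target_delay_ms - tolerance_ms:
        return "PREEMPTIVE_EXPAND"    # underrun risk: slow down playout
    return "NORMAL"                   # decode and play without time stretching

print(choose_operation(True, 180, 120))    # ACCELERATE
print(choose_operation(False, 60, 120))    # EXPAND (conceal the gap)
print(choose_operation(True, 120, 120))    # NORMAL
```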
Packet Delay vs. Buffer Length
Does decision_logic control individual packets’ delay, or control how many packets are cached in the buffer?
In today’s WebRTC NetEQ implementation, decision_logic controls current_playout_delay using a function akin to relative delay, which represents an individual packet’s delay. However, before 2022, NetEQ controlled the total buffer length to maintain jitter buffer latency.
The two mechanisms are fundamentally different and thus lead to different results. We will dive into the differences right away.
Before Relative Delay / 2022
Intuitively, target level means how many audio samples the buffer should contain:
- If the buffer contains more than target_level worth of audio, play audio at an accelerated pace.
- If the buffer contains less than target_level worth of audio, play audio at a decelerated pace.
- Otherwise, play audio at the original pace.
Downside of Directly Managing Buffer Length
Though directly managing the buffer length is most intuitive, it causes unnecessary audio distortion from speech acceleration and speech slowdowns.
Assume the target level is 120ms and the packetization length is 60ms. P1 is delayed by 60ms, so it arrives along with P2.
When P1 is due (at the 60ms timestamp), the buffer length is 60ms, so NetEQ starts doing preemptive expansion to play out audio at a slower pace. The preemptive expansion allows the buffer to retain 20ms of audio at the 120ms timestamp. Then P1 and P2 arrive together, which makes the buffer contain 20 + 120 = 140ms of audio and triggers speech acceleration.
In hindsight, the preemptive expansion wasn’t necessary, given the 120ms estimation accurately reflects how much P1 is delayed in the network.
The Better Way
Now, instead of matching the buffer size with the target level, decision_logic compares the current playout delay with the target level. Intuitively, the current playout delay indicates whether the current audio playout is ahead of or behind schedule.
Denote the next packet in the buffer as Pn, and the fastest packet in the time window as Pf (the same packet used in the relative delay calculation). Then the playout delay for Pn is:
(current_ntp_ms - Pn_rtp_ts / sampling_rate) - (Pf_arrival_ntp_ms - Pf_rtp_ts / sampling_rate)
Then decision_logic compares packet delay with target level:
- If Pn (the next packet in the buffer) doesn’t exist, then perform Packet Loss Concealment (PLC).
- When Pn_packet_delay < target_level, it’s too early to play out this packet, but its predecessors are all gone – we have to play it out at a slower pace.
- When Pn_packet_delay > target_level, this packet has been due for playout for a long time, and we should accelerate its playout.
In the following example, at the 60ms time mark, the next packet is P0, and its delay is exactly the same as the target delay. So P0 is normally decoded and played; the same goes for P1, P2, and P3. This is more efficient than “first slow down, then accelerate”.
*Note that the actual neteq code doesn’t necessarily grab the next packet to calculate the current delay. It could also grab the next audio samples in the sync buffer.
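Here is a sketch of the playout-delay comparison with illustrative numbers: the anchor packet Pf has a travel-time proxy of 0, so each packet’s playout delay is simply how long after its send time it is played:

```python
def playout_delay_ms(current_ms, pn_send_ms, pf_arrival_ms, pf_send_ms):
    return (current_ms - pn_send_ms) - (pf_arrival_ms - pf_send_ms)

PF_ARRIVAL_MS, PF_SEND_MS = 0, 0   # anchor packet Pf (fastest, proxy = 0)
TARGET_MS = 120

# Each packet is played exactly 120ms after it was sent - even if P1 and P2
# arrived together in a burst, no time stretching is triggered:
for now_ms, send_ms in [(120, 0), (180, 60), (240, 120), (300, 180)]:
    d = playout_delay_ms(now_ms, send_ms, PF_ARRIVAL_MS, PF_SEND_MS)
    op = ("NORMAL" if d == TARGET_MS else
          "ACCELERATE" if d > TARGET_MS else "PREEMPTIVE_EXPAND")
    print(f"packet sent at {send_ms}ms: playout delay {d}ms -> {op}")
```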
Conclusion
NetEQ provides a solid framework as an audio jitter buffer. It covers all critical components (packet management, delay estimation, DSP, orchestration, etc.) and is highly flexible for modification.
Out of the box, it would work reasonably well for most applications. When scaling to millions of users, high latency and audio glitch issues will likely arise on the long tail. There’s no fixed formula working for all scenarios.
Comprehending the fundamentals of delay_manager and decision_logic is just the start. To make NetEQ work flawlessly for your application, it takes careful thought about your application’s specific needs. Sometimes it’s other components (like congestion control or the device) that make NetEQ misbehave. Read logs, add logs, get your hands dirty, and enjoy debugging!
{“author”: “Fengdeng Lyu “}