# What is Network Latency?
Network latency is the time interval between sending a data packet and receiving a response. It is measured in milliseconds (ms) and directly affects how responsive a networked application feels to users.
## Types of Latency
- One-way latency — the time from sender to receiver
- Round-trip time (RTT) — the time from sender to receiver and back
- Jitter — variation in latency between successive packets, often caused by buffering along the path
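The distinction between RTT and jitter can be sketched in a few lines of Python; the sample values below are hypothetical ping results, not measurements from the text:

```python
import statistics

def summarize_latency(rtt_ms: list[float]) -> dict:
    """Summarize round-trip samples: mean RTT, estimated one-way latency, jitter."""
    return {
        "rtt_avg": statistics.mean(rtt_ms),
        # Rough estimate: halving RTT assumes the path is symmetric.
        "one_way_est": statistics.mean(rtt_ms) / 2,
        # Jitter here is the standard deviation of the samples.
        "jitter": statistics.stdev(rtt_ms),
    }

samples = [24.1, 25.3, 23.8, 30.2, 24.6]  # hypothetical ping results, in ms
print(summarize_latency(samples))
```

A link can have a low average RTT but high jitter, which is why the list above treats them as separate effects.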
## Causes of High Latency
| Cause | Impact |
|---|---|
| Long distance | Propagation delay bounded by the speed of light in optical fiber |
| Router overload | Queueing and congestion |
| DNS or NAT issues | Slow name resolution or address translation, longer routing paths |
| Use of VPN or proxy | Additional hops |
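The first row of the table can be quantified: light in fiber travels at roughly two-thirds of c, about 200 km per millisecond, which puts a hard floor on latency no matter how good the equipment is. A minimal sketch (the New York–London distance is an approximation used for illustration):

```python
# Speed of light in optical fiber is roughly 2/3 of c, i.e. ~200 km per ms.
FIBER_KM_PER_MS = 200.0

def propagation_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Minimum physically possible fiber delay, ignoring routing and queueing."""
    one_way = distance_km / FIBER_KM_PER_MS
    return one_way * 2 if round_trip else one_way

# New York to London is ~5,600 km: even a perfect link carries ~56 ms of RTT.
print(f"{propagation_delay_ms(5600):.0f} ms")
```

Real paths are longer than the great-circle distance and add queueing delay, so measured RTTs sit above this floor.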
## How to Measure Latency
- ping — the simplest method
- traceroute/mtr — route diagnostics
- BGP Looking Glass — for external point checks
- DPI and BNG — operator-level telemetry collection
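As a rough illustration of what ping-style tools measure, a TCP handshake can be timed from Python. This is a sketch, not a replacement for ICMP ping; the local listener below is a stand-in for a real remote host:

```python
import socket
import threading
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Estimate RTT by timing a TCP handshake (connect completes after one round trip)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the successful connect() is all we need
    return (time.perf_counter() - start) * 1000

# Demo against a throwaway local listener standing in for a real host.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: server.accept(), daemon=True).start()

print(f"RTT to localhost: {tcp_rtt_ms('127.0.0.1', port):.2f} ms")
```

Against a loopback address this reports near-zero values; pointed at a remote host it approximates what traceroute reports for the final hop.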
## What to Do About High Latency
- Check traceroute and DNS resolution
- Avoid congested network paths
- Use QoS to prioritize traffic
- Place nodes closer to users (edge infrastructure)
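The "check DNS resolution" step above can be scripted, for instance by timing `getaddrinfo`; the hostname here is just an example, and slow lookups mainly inflate first-connection latency:

```python
import socket
import time

def dns_lookup_ms(hostname: str) -> float:
    """Time a name resolution; slow DNS inflates the first connection's latency."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

# "localhost" resolves via the hosts file, so this demo works offline too.
print(f"lookup: {dns_lookup_ms('localhost'):.2f} ms")
```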
## FAQ
### What is considered normal latency?
For video calls, under 50 ms; for gaming, under 30 ms; for general web browsing, up to 100 ms is acceptable.
### How does 5G affect latency?
5G reduces latency to 1–10 ms under ideal conditions.
### Is latency the same as bandwidth?
No. Bandwidth is how much data can be transferred per unit of time; latency is how long each delivery takes. A link can be high-bandwidth and still feel slow if its latency is high.
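The difference shows up clearly if you model total fetch time as one round trip plus serialization time. This is a simplification that ignores TCP slow start and handshake overhead; the figures are illustrative:

```python
def transfer_time_ms(size_mb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Total time to fetch a payload: one round trip plus time to push the bits."""
    serialization_ms = (size_mb * 8 / bandwidth_mbps) * 1000  # MB -> megabits
    return rtt_ms + serialization_ms

# A 1 MB file on a 100 Mbit/s link takes 80 ms to serialize either way;
# only the RTT term changes between the two links.
print(transfer_time_ms(1, 100, rtt_ms=50))  # high-latency link
print(transfer_time_ms(1, 100, rtt_ms=5))   # low-latency link
```

For small payloads the RTT term dominates, which is why latency, not bandwidth, governs the feel of interactive traffic.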
## Conclusion
Network latency is a critical metric, especially for real-time services. Regular monitoring and optimization of latency help improve availability, performance, and user experience.