Bandwidth vs Throughput Explained
Understand why ISP speeds are in bits, downloads show bytes, and your actual transfer rate is less than both.
The Confusion
Your ISP promises 500 Mbps. You run a speed test and see 480 Mbps. You download a file and the browser shows 45 MB/s. Then you transfer a file between two servers and it moves at 30 MB/s. All of these numbers are "correct," but they measure different things using different units. Understanding the distinctions between bandwidth, throughput, and transfer rate is essential for capacity planning, performance debugging, and setting realistic expectations.
Bandwidth: The Theoretical Maximum
Bandwidth is the maximum data rate that a link can carry under ideal conditions. It is the theoretical ceiling, like the speed limit on a highway. An Ethernet link rated at 1 Gbps can carry at most 1 billion bits per second through the physical medium. This is a property of the hardware and the signaling protocol, not of any specific transfer.
Bandwidth is almost always measured in bits per second (bps, Kbps, Mbps, Gbps) using SI prefixes. This is a networking convention inherited from telecommunications, where the fundamental unit of transmission is a single signal pulse (a bit). Key bandwidths to know:
- Wi-Fi 5 (802.11ac): up to 866 Mbps per stream (theoretical, 160 MHz channel)
- Wi-Fi 6 (802.11ax): up to 1,201 Mbps per stream (160 MHz channel)
- Gigabit Ethernet: 1,000 Mbps = 1 Gbps
- 10G Ethernet: 10 Gbps
- USB 3.0: 5 Gbps (SuperSpeed)
- Thunderbolt 4: 40 Gbps
- PCIe 4.0 x16: ~256 Gbps
Throughput: What You Actually Get
Throughput is the measured data rate achieved in practice. It is always less than bandwidth because of protocol overhead, congestion, latency, packet loss, and processing delays. Continuing the highway analogy: throughput is your actual driving speed during rush hour, not the speed limit.
The gap between bandwidth and throughput comes from several sources:
- Protocol overhead: Every packet includes headers (Ethernet, IP, TCP/UDP). TCP alone adds 20-60 bytes of header per segment. For small packets, this overhead can consume a significant fraction of the bandwidth. On a gigabit Ethernet link carrying minimum-sized 64-byte frames, the preamble and inter-frame gap limit frame bits to about 76% of the line rate, and the usable payload within those frames is closer to 55%.
- TCP windowing: TCP's congestion control limits how fast data can be sent based on the round-trip time (RTT) and window size. A connection with 100 ms RTT and a 64 KB window can achieve at most about 5 Mbps, regardless of the link bandwidth. This is why downloading from a server on another continent is slower than from a nearby CDN.
- Packet loss and retransmission: Even 0.1% packet loss can reduce TCP throughput by 50% or more, because TCP interprets loss as congestion and sharply cuts its sending rate; in the worst case a lost packet triggers a retransmission timeout that stalls the connection entirely.
- Shared medium: Wi-Fi bandwidth is shared among all devices on the same channel. Cable internet bandwidth is shared among subscribers on the same node.
- ISP throttling: Some ISPs apply traffic shaping or bandwidth caps during peak hours.
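The TCP windowing limit described above is simple enough to check with a few lines of arithmetic. The sketch below (illustrative, not tied to any real connection) reproduces the 5 Mbps figure from the text: at most one window of data can be in flight per round trip.

```python
def tcp_window_limit_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum TCP throughput when limited by window size:
    at most one window of unacknowledged data per round trip."""
    return window_bytes * 8 / rtt_seconds

# 64 KB window, 100 ms RTT -- the example from the text
limit = tcp_window_limit_bps(64 * 1024, 0.100)
print(f"{limit / 1e6:.2f} Mbps")  # ~5.24 Mbps, regardless of link bandwidth
```

This is why increasing the window size (or lowering RTT via a CDN) raises throughput even when the link itself is far from saturated.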
Bits vs Bytes: The 8x Confusion
The single most common source of confusion in network speed discussions is the difference between bits and bytes. Here is the rule:
- Network speeds are measured in bits per second: Mbps, Gbps
- File sizes and download speeds are measured in bytes per second: MB/s, GB/s
- 1 byte = 8 bits, so divide by 8 to convert
This means a 100 Mbps connection can transfer at most 12.5 MB/s (100 ÷ 8 = 12.5). When your browser shows a download speed of 12 MB/s on a 100 Mbps connection, that is excellent; you are using 96% of your bandwidth. The numbers look different because they use different units.
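The divide-by-8 rule is trivial but worth encoding once so it is never done backwards. A minimal sketch:

```python
def mbps_to_mb_per_s(mbps: float) -> float:
    """Convert a network speed in megabits/s to megabytes/s (1 byte = 8 bits)."""
    return mbps / 8

print(mbps_to_mb_per_s(100))           # 12.5 -- max download rate on a 100 Mbps link
print(12 / mbps_to_mb_per_s(100))      # 0.96 -- 12 MB/s uses 96% of the link
```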
Quick Conversion: ISP Speed to Download Rate
| ISP Advertised | Max Download | Realistic |
|---|---|---|
| 25 Mbps | 3.1 MB/s | ~2.5 MB/s |
| 100 Mbps | 12.5 MB/s | ~10 MB/s |
| 300 Mbps | 37.5 MB/s | ~30 MB/s |
| 500 Mbps | 62.5 MB/s | ~50 MB/s |
| 1 Gbps | 125 MB/s | ~100 MB/s |
| 10 Gbps | 1.25 GB/s | ~1 GB/s |
"Realistic" assumes ~80% efficiency due to TCP overhead, latency, and minor congestion.
The convention of using bits for network speeds dates back to early telecommunications, where a modem sent one bit per signalling symbol, so the bit rate equaled the baud rate. Storage, by contrast, has always been byte-oriented because the smallest addressable unit in a computer is a byte. The two communities adopted their conventions independently, and we live with the resulting confusion to this day.
Calculating Real-World Transfer Times
To estimate how long a file transfer will take, you need three things: the file size (in bytes), the available throughput (in bits per second), and the overhead factor.
Transfer time (seconds) = file_size_bytes × 8 / throughput_bps
Example: 4.7 GB file on a 100 Mbps connection
4,700,000,000 bytes × 8 = 37,600,000,000 bits
37,600,000,000 / 100,000,000 = 376 seconds ≈ 6.3 minutes
With 80% efficiency:
376 / 0.80 = 470 seconds ≈ 7.8 minutes
Our Transfer Time Calculator does this math for you with preset speed profiles for common connection types, from 3G mobile to 100G datacenter links.
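The calculation above can be wrapped in a small function. This sketch uses decimal units (1 GB = 10^9 bytes), matching the worked example:

```python
def transfer_time_seconds(file_size_bytes: int, throughput_bps: float,
                          efficiency: float = 1.0) -> float:
    """Estimated transfer time: file size in bits divided by effective throughput."""
    return file_size_bytes * 8 / (throughput_bps * efficiency)

# 4.7 GB file on a 100 Mbps connection -- the example from the text
ideal = transfer_time_seconds(4_700_000_000, 100e6)        # 376 s
real = transfer_time_seconds(4_700_000_000, 100e6, 0.80)   # 470 s
print(f"{ideal:.0f} s ideal, {real:.0f} s at 80% efficiency")
```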
Throughput in Application Architecture
API Rate Limits
APIs are often rate-limited by both request count and bandwidth. A REST API returning 50 KB responses at 100 requests per second generates 5 MB/s (40 Mbps) of egress. At CDN egress pricing of $0.085/GB, that costs about $1.50 per hour, or roughly $37 per day, if sustained. Knowing the per-response size lets you estimate costs before they appear on your bill.
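The egress cost estimate is a one-line multiplication. A sketch, assuming the response size, request rate, and $0.085/GB price from the text (actual CDN pricing varies by provider and tier):

```python
def egress_cost_per_day(resp_bytes: int, requests_per_sec: float,
                        usd_per_gb: float = 0.085) -> float:
    """Daily egress cost for a sustained API response stream (decimal GB)."""
    bytes_per_day = resp_bytes * requests_per_sec * 86_400
    return bytes_per_day / 1e9 * usd_per_gb

# 50 KB responses at 100 req/s -- the example from the text
print(f"${egress_cost_per_day(50_000, 100):.2f}/day")  # ~$36.72/day
```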
Database Replication
Cross-region database replication throughput is limited by both bandwidth and latency. A MySQL replication stream at 10 MB/s (80 Mbps) can handle about 20,000 transactions per second with 500-byte average row changes. If your write throughput exceeds the replication bandwidth, the replica will fall behind. Monitoring replication lag is monitoring throughput.
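The replication ceiling is the stream rate divided by the average change size. A sketch reproducing the figure above:

```python
def replication_capacity_tps(stream_bytes_per_sec: float,
                             avg_change_bytes: int) -> float:
    """Transactions/s a replication stream can carry at a given row-change size."""
    return stream_bytes_per_sec / avg_change_bytes

# 10 MB/s stream, 500-byte average row change -- the example from the text
print(replication_capacity_tps(10e6, 500))  # 20000.0 transactions/s
```

If your sustained write rate approaches this number, the replica lags; the lag metric is the integral of the shortfall.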
Message Queues
Kafka brokers commonly handle 100-300 MB/s per partition. A single broker with 10 partitions can sustain 1-3 GB/s. Knowing these throughput ceilings helps you estimate how many brokers and partitions you need for a given event volume.
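Broker sizing follows directly from these ceilings. A rough sketch; the event rate, event size, and 1 GB/s per-broker ceiling below are illustrative assumptions, not measured figures:

```python
import math

def brokers_needed(event_rate_per_sec: float, avg_event_bytes: int,
                   broker_mb_per_sec: float = 1000) -> int:
    """Rough broker count for a target event volume, given a per-broker
    throughput ceiling (assumed 1 GB/s here, the low end of the 10-partition figure)."""
    total_mb_per_sec = event_rate_per_sec * avg_event_bytes / 1e6
    return math.ceil(total_mb_per_sec / broker_mb_per_sec)

# hypothetical workload: 2 million 1 KB events/s -> 2,000 MB/s -> 2 brokers
print(brokers_needed(2_000_000, 1_000))
```

Real sizing also accounts for replication factor and headroom for failover, which multiply the raw number.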
Measuring and Improving Throughput
To measure actual throughput, use tools designed for the purpose:
- iperf3: The standard tool for measuring network throughput between two endpoints
- speedtest-cli: Measures ISP throughput to the nearest test server
- curl -o /dev/null -w ... : Measures HTTP download throughput to a specific server
- dd + netcat: Raw throughput test bypassing HTTP overhead
Common throughput improvements:
- Compression: gzip/brotli reduces transfer size by 60-80% for text-based formats (JSON, HTML, CSS, JS). This effectively multiplies your throughput.
- CDN: Reduces latency by serving from nearby edge nodes, improving TCP window utilization.
- HTTP/2 or HTTP/3: Multiplexes requests on a single connection, eliminating head-of-line blocking and reducing round trips.
- TCP tuning: Increasing the TCP window size (net.core.rmem_max on Linux) helps on high-bandwidth, high-latency links.
- Parallel streams: Tools like aria2c use multiple TCP connections to work around single-connection throughput limits.
Summary
- Bandwidth = theoretical max speed of the link (bits/sec)
- Throughput = actual measured speed (bits/sec or bytes/sec)
- Transfer rate = throughput as seen by the application (bytes/sec, after protocol overhead)
- Divide by 8 to convert bits to bytes
- Expect 70-90% of advertised bandwidth in practice
Use the Bits/Bytes tab in our calculator to see exactly how your ISP speed translates to real download rates, and the Transfer tab to estimate how long specific files will take.
Further Reading
- Bandwidth vs Throughput — Cloudflare
Cloudflare's accessible explanation of bandwidth, throughput, and latency.
- TCP Throughput Calculator
Calculate maximum TCP throughput based on window size and RTT.
- Measuring Network Performance — iperf3
The standard open-source tool for active network throughput measurement.
- RFC 5136 — Network Capacity Measurement
IETF standard defining bandwidth, capacity, and throughput measurement methodology.
- What Is Throughput? — Cloudflare Learning
Clear explanation of throughput vs bandwidth with diagrams showing the pipe analogy.