
TCP and UDP: What’s Different and How to Choose

Xyla Huxley
Last updated on 2026-02-03 · 10 min read
A transport-layer guide to TCP vs UDP: connection model, reliability, ordering, control mechanisms, plus QUIC/HTTP/3 and proxy/acceleration selection.

Introduction

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are the two most common transport-layer protocols on today’s networks. They are not direct substitutes: each provides different transport semantics for different business goals.

A practical selection should be driven by measurable goals—integrity, latency, jitter, loss tolerance, and the cost of implementing and maintaining controls at the application layer.

Core differences between TCP and UDP

This section covers the four dimensions that most decisions hinge on, so you can map mechanisms to scenarios later.

Connection

TCP establishes and maintains a connection before sending data; UDP sends datagrams without a connection. The handshake typically increases initial (first-packet) overhead.

Reliability

TCP uses sequence numbers, ACK, and retransmissions to achieve eventual reliable delivery; UDP does not guarantee delivery or uniqueness. If integrity is a hard requirement, TCP is usually the default.

Ordering and message boundaries

TCP is a byte stream (applications must frame messages). UDP preserves message boundaries (datagrams) but may arrive out of order or be dropped.
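The byte-stream point is worth making concrete. A minimal length-prefix framing sketch in Python (the 4-byte big-endian prefix is one common convention, not the only one):

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix each message with a 4-byte big-endian length."""
    return struct.pack("!I", len(msg)) + msg

def deframe(stream: bytes):
    """Recover message boundaries from a contiguous byte stream."""
    msgs, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        if offset + 4 + length > len(stream):
            break  # partial message: wait for more bytes
        msgs.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return msgs

# TCP may deliver both messages in one read; framing restores boundaries.
stream = frame(b"hello") + frame(b"world")
assert deframe(stream) == [b"hello", b"world"]
```

With UDP, each `sendto` maps to one datagram and no such framing is needed; with TCP, skipping it is a classic source of "messages merged/split" bugs.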

Control mechanisms

TCP includes flow control (rwnd) and congestion control (cwnd) for stability. UDP has none by default—apps or higher-layer protocols must provide them.

TCP features and mechanisms

TCP’s reliability comes with costs: handshake round trips, retransmission delays under loss, and control loops that trade peak throughput for stability and fairness.

Three-way handshake and teardown

TCP’s three-way handshake confirms reachability and negotiates parameters. Teardown typically involves four steps. These extra round trips add RTT cost, especially noticeable for short-lived connections.
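The RTT cost can be sketched with simple arithmetic (illustrative numbers only; real stacks vary with TCP Fast Open, TLS session resumption, and 0-RTT):

```python
# Back-of-envelope first-byte cost: setup round trips before the first
# request, plus one RTT for the request/response itself. TCP needs one
# handshake RTT; TLS 1.3 over TCP adds another; QUIC folds transport and
# crypto setup into a single RTT (0-RTT on resumption).
def first_byte_ms(rtt_ms: float, setup_rtts: float) -> float:
    return (setup_rtts + 1) * rtt_ms

rtt = 50.0
tcp = first_byte_ms(rtt, 1)        # plain TCP handshake
tcp_tls13 = first_byte_ms(rtt, 2)  # TCP + TLS 1.3 handshake
quic = first_byte_ms(rtt, 1)       # QUIC combined handshake

assert (tcp, tcp_tls13, quic) == (100.0, 150.0, 100.0)
```

The fixed setup cost is why short-lived connections feel the handshake most: for a 50 ms RTT, one extra round trip is a third to a half of the total request time.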

Sequence numbers, ACK, and retransmission

TCP tracks bytes with sequence numbers and confirms reception with ACK. Missing ACKs trigger retransmissions (timeout or fast retransmit). This ensures eventual delivery, but loss introduces extra delay and throughput variability.

Flow and congestion control

Flow control prevents overrunning the receiver; congestion control adapts sending rate to network capacity. On the public Internet, these mechanisms often improve user-perceived stability more than chasing minimal latency.

UDP features and trade-offs

UDP is low-overhead because it does less: no connection setup, no retransmission, and no congestion control by default. The trade-off is that reliability and control are pushed to the application layer.

Datagrams and MTU risk

UDP sends datagrams. If a datagram exceeds the path MTU, IP fragmentation may occur; losing any fragment breaks the whole datagram, amplifying loss impact.
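A sketch of the sizing arithmetic, assuming a classic 1500-byte Ethernet MTU and an IPv4 header without options (real paths can have smaller MTUs due to tunnels or PPPoE, which is why conservative caps around 1200 bytes are common):

```python
# Conservative UDP payload sizing to avoid IP fragmentation.
ETHERNET_MTU = 1500
IPV4_HEADER = 20   # IPv4 header without options
UDP_HEADER = 8

max_payload = ETHERNET_MTU - IPV4_HEADER - UDP_HEADER  # 1472 bytes
safe_payload = 1200  # conservative cap that survives most tunneled paths

def needs_fragmentation(payload_len: int, path_mtu: int = ETHERNET_MTU) -> bool:
    """True if this UDP payload would be fragmented at the IP layer."""
    return IPV4_HEADER + UDP_HEADER + payload_len > path_mtu

assert max_payload == 1472
assert not needs_fragmentation(1472)
assert needs_fragmentation(1473)
```

Capping payloads below the effective path MTU keeps one network loss equal to one datagram loss, instead of one fragment loss invalidating a whole datagram.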

Handling loss, reordering, and jitter

Common techniques include jitter buffer smoothing, FEC, selective retransmission (only when needed), and timestamps/sequence numbers to reorder or drop late data.

Step-by-step recipe: Adding application-layer control on UDP

Below is a practical recipe we commonly use for realtime media / low-latency streams. The goal is to keep jitter, reordering, and loss within application tolerance while preserving UDP’s low overhead.

1. Define measurable KPIs (measure first, then optimize)
At minimum, record end-to-end latency (p50/p95/p99), jitter, loss rate (uplink/downlink), recovered-loss ratio (by FEC/retransmit), and goodput/throughput.

2. Add diagnostic fields to each datagram
Include seq (sequence number), ts (send timestamp), stream_id, and payload_type so logs can distinguish reordering, late packets, duplicates, and when NACK is appropriate.
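A minimal sketch of such a header in Python; the field names and widths are illustrative choices, not a wire standard:

```python
import struct

# Hypothetical 14-byte diagnostic header: seq (uint32), ts in ms (uint64),
# stream_id (uint8), payload_type (uint8), big-endian.
HEADER = struct.Struct("!IQBB")

def pack_header(seq, ts_ms, stream_id, payload_type, payload: bytes) -> bytes:
    """Prepend the diagnostic header to a payload."""
    return HEADER.pack(seq, ts_ms, stream_id, payload_type) + payload

def unpack_header(datagram: bytes):
    """Split a datagram back into header fields and payload."""
    seq, ts_ms, stream_id, payload_type = HEADER.unpack_from(datagram)
    return seq, ts_ms, stream_id, payload_type, datagram[HEADER.size:]

pkt = pack_header(42, 1700000000000, 1, 7, b"frame-data")
assert unpack_header(pkt) == (42, 1700000000000, 1, 7, b"frame-data")
```

Fourteen bytes of overhead per datagram is usually a fair price for being able to classify every anomaly in logs instead of guessing.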

3. Stabilize playback with an adaptive jitter buffer
Prefer a small adaptive buffer (e.g., 20–60 ms) rather than a fixed large buffer. Estimate target depth from recent inter-arrival statistics and shrink it when the network is stable.
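One common way to estimate jitter is the RFC 3550-style exponential moving average of transit-time deltas; the sketch below derives a clamped target depth from it (the 1/16 gain is the RFC's convention, and the `k` multiplier is an illustrative choice):

```python
# RFC 3550-style interarrival jitter: EMA of |transit-time deltas|,
# updated once per packet with gain 1/16.
def update_jitter(jitter: float, prev_transit: float, transit: float) -> float:
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

def target_depth_ms(jitter_ms: float, k: float = 3.0,
                    lo: float = 20.0, hi: float = 60.0) -> float:
    """Buffer roughly k * jitter, clamped to the configured window."""
    return max(lo, min(hi, k * jitter_ms))

jitter = 0.0
transits = [50, 52, 49, 55, 48, 60, 50]  # ms, one-way transit samples
for prev, cur in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, cur)

depth = target_depth_ms(jitter)
assert 20.0 <= depth <= 60.0
```

Because the estimate is an EMA, the buffer grows quickly under bursty jitter but drains back toward the floor when the network stabilizes, which is exactly the "shrink when stable" behavior above.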

4. Use lightweight FEC for “sporadic loss”
When loss is roughly 0.5%–3% and mostly non-bursty, FEC often beats retransmission because it avoids RTT waiting and reduces tail latency.

5. Use selective retransmission (NACK) only for packets that are critical and still useful
Only NACK keyframes / critical state, and only if the packet can arrive before its playout deadline. Drop packets that are already too late.

6. Regression-test with the same impairment model (before/after)
Run at least three profiles: 0% loss (baseline), 1% loss (common weak network), and 3% loss (stress). Keep each run long enough (10–20 minutes) to see p95/p99 behavior.

Example implementation snippets 

The snippets below are minimal and illustrative. Translate to your runtime and integrate with your observability/logging.

Jitter buffer (simplified: in-order playout + timeout drop)

buffer = map()
expected = 1
playout_delay_ms = adaptive_delay()   // e.g., 20–60 ms based on recent jitter

on_packet(pkt):
  buffer[pkt.seq] = pkt

playout_loop():
  while true:
    deadline = now() - playout_delay_ms
    if buffer contains expected and buffer[expected].ts <= deadline:
       render(buffer[expected])
       delete buffer[expected]
       expected += 1
    else if packet_too_late(expected):   // missed window
       expected += 1  // drop to keep realtime
    sleep(1ms)

FEC (XOR parity: 1 parity packet per K data packets)

K = 10
group = []
on_send(data_pkt):
  group.append(data_pkt.payload)
  send_udp(data_pkt)
  if len(group) == K:
     parity = xor_all(group)
     send_udp(FEC_PACKET(group_id, parity))
     group.clear()

on_receive(data_pkts, fec_pkt):
  if exactly_one_missing(data_pkts):
     missing = xor(fec_pkt.parity, xor_all(received_payloads))
     recover_missing_packet(missing)
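For reference, a runnable Python version of the XOR-parity idea above (payloads are zero-padded to a common length; only a single loss per group is recoverable, as with any single parity packet):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two byte strings, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """Parity packet = XOR of all payloads in the group."""
    parity = b""
    for payload in group:
        parity = xor_bytes(parity, payload)
    return parity

def recover(received, parity):
    """received: the group with exactly one None for the lost packet."""
    acc = parity
    for payload in received:
        if payload is not None:
            acc = xor_bytes(acc, payload)
    return acc

group = [b"pkt-1", b"pkt-2", b"pkt-3", b"pkt-4"]
parity = make_parity(group)
lost = list(group)
lost[2] = None  # simulate losing the third packet
assert recover(lost, parity) == b"pkt-3"
```

The recovery works because XOR is its own inverse: XORing the parity with every surviving payload cancels them out, leaving exactly the missing one.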

Selective retransmission (NACK: only for critical and still-playable packets)

missing = detect_missing_seq(window=last_200_packets)
for seq in missing:
  if is_key_packet(seq) and still_playable(seq):
     send_control(NACK(seq))

on_nack(seq):
  if cache_has(seq):
     resend_udp(seq)

Practical tip: make “still_playable” and “is_key_packet” explicit policies; otherwise retransmits can inflate tail latency without improving user experience.
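A hypothetical shape for such a policy, where a NACK is only issued if one RTT plus a safety margin still beats the playout deadline (the names and the 10 ms margin are illustrative):

```python
# Deadline-based "still_playable" policy sketch: a retransmitted packet
# is only worth requesting if its earliest plausible arrival (one RTT
# away, plus a safety margin) lands before its playout time.
def still_playable(playout_deadline_ms: float, now_ms: float,
                   rtt_ms: float, margin_ms: float = 10.0) -> bool:
    eta_ms = now_ms + rtt_ms + margin_ms
    return eta_ms < playout_deadline_ms

# 80 ms of budget, 50 ms RTT: worth a NACK.
assert still_playable(playout_deadline_ms=1080, now_ms=1000, rtt_ms=50)
# 40 ms of budget, 50 ms RTT: already too late, drop instead.
assert not still_playable(playout_deadline_ms=1040, now_ms=1000, rtt_ms=50)
```

Making the policy a pure function of deadline, clock, and RTT also makes it trivial to unit-test against the impairment profiles from the recipe.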

QUIC and HTTP/3 over UDP

When you want low latency but also stronger transport features, evaluate QUIC. It runs over UDP, adds reliable delivery, multiplexing, flow control, and connection migration, and is the foundation of HTTP/3.

Case study: Switching a video signaling path from TCP to QUIC reduced tail latency

In one of our video application deployments, we switched a “signaling/control path” (short request/response bursts with frequent small packets) from TCP (HTTP/2 over TLS) to QUIC (HTTP/3 over UDP) to reduce head-of-line blocking and long-tail delays under loss. We measured before/after with the same synthetic script and sampled real users (same regions and carrier mix), recording end-to-end request time, RTT, and loss/retransmission indicators.

Representative results (internal testing; for reference only):

Under 1% packet loss and 50 ms baseline RTT on a mobile-network emulator, p50 request time improved from 92 ms to 78 ms (-15%), and p95 improved from 310 ms to 190 ms (-39%). Timeout-driven retries decreased by ~30% over the same traffic window. We also observed fewer cases where a single loss event stalled multiple parallel requests, which reduced “stutter” in interactive UI flows.

( These numbers are from our internal environment and workload shape. Treat them as representative rather than universal benchmarks; validate with your own KPIs and network conditions.)

TCP vs UDP quick comparison

Use this table to align on transport semantics rather than a simplistic “fast vs slow” framing.

| Dimension | TCP | UDP |
| --- | --- | --- |
| Model | Connection-oriented, byte stream | Connectionless, datagrams |
| Reliability | Yes (ACK/retransmit) | No (best-effort) |
| Ordering | Guaranteed | Not guaranteed (may reorder/duplicate) |
| Flow control | Yes | No |
| Congestion control | Yes | No |
| First-packet latency | Usually higher (handshake) | Usually lower |
| Best for | Integrity/consistency | Realtime, controllable latency |

How to choose by scenario 

Turn “user experience” into metrics: must delivery be 100% complete and ordered, what are latency/jitter KPIs, what loss is acceptable, and can you afford app-layer reliability and control?

In our selection reviews, we write the decision as a KPI-to-mechanism mapping to avoid arguing from intuition:

• If the KPI is p95 interaction latency: reduce head-of-line blocking (e.g., QUIC multiplexing) and drop packets that miss deadlines.

• If the KPI is integrity/consistency (100% complete + ordered): TCP (or reliable QUIC streams) is usually the baseline.

• If the KPI is weak-network stability (jitter + loss): UDP needs a jitter buffer plus FEC and/or selective retransmission, and you must measure recovered-loss ratio and tail latency.

• If the KPI is auditability/operability: TCP/proxy chains are often easier to instrument and troubleshoot with existing tooling.

Prefer TCP when

• Web/API traffic (HTTP/1.1, HTTP/2)

• File transfers and email systems

• Remote login (SSH)

• Database connections and transaction systems

Prefer UDP (or UDP + QUIC) when

• Realtime voice/video calls and conferencing

• Low-latency streaming and media delivery

• Multiplayer game state updates

• Short request/response such as DNS (can fall back to TCP)

• Mobile/weak networks but still need reliability: evaluate QUIC/HTTP/3

Rule of thumb for proxies/acceleration

For stable, auditable connections (enterprise egress, secure browsing, data collection), HTTP/HTTPS proxies over TCP are typical. For interactive low-latency needs (game acceleration, some realtime apps), SOCKS5 proxies or other solutions with UDP forwarding are a better match.

Troubleshooting story: UDP looked “fast” but playback stuttered — MTU/fragmentation was the culprit

We ran into a classic issue: after switching a path to UDP, average latency improved, but users reported more playback stutter. At first we suspected the jitter buffer, but logs showed bursts of missing sequence numbers concentrated around larger datagrams.

How we debugged it:
• On the receiver, we bucketed loss by sequence number and saw “clustered gaps” rather than random loss.
• On the sender, we logged UDP payload sizes and found some packets were close to (or over) the effective path MTU, triggering IP fragmentation. With fragmentation, losing any fragment invalidates the whole datagram, which amplifies loss.
• We fixed it by capping payload size more conservatively and adjusting packetization for critical data. After rollout, loss patterns became less bursty and stutter events dropped while FEC recovery became more effective.

Summary

TCP prioritizes reliable, ordered delivery; UDP prioritizes low overhead and controllable latency. The real hinge is whether you can afford application-layer reliability and congestion handling.

 

Frequently asked questions

Is UDP always faster than TCP?

 

Not always. UDP has less overhead and often lower initial latency, but if your app must rebuild reliability and congestion control, end-to-end results may be slower or less stable.

Can TCP lose packets?

 

Yes at the network layer. TCP achieves eventual reliable delivery using ACK and retransmissions, but loss increases delay and can reduce throughput.

When should I choose QUIC?

 

Choose QUIC when you need low latency on mobile/weak networks and still want reliability, multiplexing, and connection migration. QUIC also underpins HTTP/3.

About the author

Xyla is a technical writer at Thordata who treats content creation as a problem-solving process grounded in real-world scenarios and data analysis.
