Step-by-Step Guide: Spot, Understand & Fix Common TCP Problems
Wireshark is the #1 free tool for seeing what TCP is really doing on your network.
This guide walks you through finding and solving the most common issues: failed connections, laggy/slow performance, packet drops, resets, hidden bottlenecks, and more.
| Problem | Wireshark Filter |
|---|---|
| All TCP issues at once | tcp.analysis.flags |
| Retransmissions | tcp.analysis.retransmission |
| Fast retransmissions | tcp.analysis.fast_retransmission |
| Zero window | tcp.analysis.zero_window |
| Resets | tcp.flags.reset == 1 |
| Duplicate ACKs | tcp.analysis.duplicate_ack |
| Out of order | tcp.analysis.out_of_order |
| Keep-alive probes | tcp.analysis.keep_alive |
| Lost segments | tcp.analysis.lost_segment |
| Spurious retransmissions | tcp.analysis.spurious_retransmission |
| Port reused too soon | tcp.analysis.reused_ports |
| SYN retransmissions only | tcp.flags.syn == 1 && tcp.analysis.retransmission |
Tip: All filters above are display filters — they work on live captures AND saved .pcapng files. They are not capture filters.
Download & install the latest Wireshark
→ https://www.wireshark.org/download.html (version 4.4 or newer recommended as of 2026)
Start a capture
→ Interface List → double-click your active interface (Wi-Fi / Ethernet) → click Start
Reproduce the problem
→ open the slow website, start the file transfer that hangs, try the app that won’t connect
Stop & save
→ Click red square to stop → File → Save As → choose .pcapng format (better than .pcap)
Your #1 filter
→ In the filter bar type: tcp.analysis.flags → press Enter
→ This instantly shows almost every problem Wireshark automatically detects (red/yellow icons)
→ Works on saved files too — not just live captures
⚠️ Capture position matters: Captures taken at the client vs. the server will look different. That asymmetry is diagnostic information, not noise — it tells you which side of the connection is causing the problem. When in doubt, capture on both ends simultaneously.
⚠️ Encrypted traffic (TLS/HTTPS): If payload looks garbled or unreadable, the connection is likely TLS-encrypted. To decrypt: Edit → Preferences → Protocols → TLS → add your RSA private key or pre-master secret log file. Without keys, you can still analyze TCP behavior (retransmissions, resets, window issues) — just not the application data.
Now let’s go through every major TCP problem one by one.
Section 1: Connection Never Establishes (SYN Retransmissions)
Symptom
App shows “connecting…” forever → timeout error → nothing loads.
Why it happens
Client sends SYN (“hello?”), server never answers with SYN-ACK (“hello back!”).
Common causes: server down/offline, firewall blocks port, wrong IP/port, routing broken, NAT issues.
How to spot it
Repeated [SYN] packets from client + [TCP Retransmission] on the SYN.
Filter: tcp.flags.syn == 1 && tcp.analysis.retransmission
Packet Example
No. Time Source Destination Protocol Length Info
1 0.000000 192.168.1.100 203.0.113.50 TCP 66 54321 → 443 [SYN] Seq=0 Win=64240 Len=0 MSS=1460
2 1.002345 192.168.1.100 203.0.113.50 TCP 66 [TCP Retransmission] 54321 → 443 [SYN] Seq=0 ...
3 3.004678 192.168.1.100 203.0.113.50 TCP 66 [TCP Retransmission] 54321 → 443 [SYN] Seq=0 ...
Note the ~1s and ~2s gaps — the client is doubling its retry wait (exponential backoff). No SYN-ACK ever arrives.
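You can reproduce the same symptom from code without Wireshark: a connect that times out means no SYN-ACK ever came back, while an immediate "refused" means the host answered with an RST. A minimal Python sketch (errno values are Linux-oriented; the host and port you pass in are placeholders):

```python
import errno
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP connect attempt the way a packet capture would."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        rc = s.connect_ex((host, port))  # returns an errno instead of raising
    if rc == 0:
        return "open (SYN-ACK received)"
    if rc == errno.ECONNREFUSED:
        return "refused (host sent RST: port closed, not firewall-dropped)"
    if rc in (errno.ETIMEDOUT, errno.EAGAIN, errno.EWOULDBLOCK, errno.EINPROGRESS):
        return "timeout (no SYN-ACK: host down, filtered, or unroutable)"
    return f"error {rc}: {errno.errorcode.get(rc, 'unknown')}"
```

A "refused" result rules out a silent firewall drop, which is exactly the distinction the SYN-retransmission pattern above cannot show you on its own.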
How to fix
→ On the server: ss -ltn (Linux) or netstat -an (Windows) → check if the port is in LISTEN state
→ Check firewall rules on the server and on any device in the path
→ Verify the client is connecting to the correct IP and port
Section 2: Slow Transfers & Retransmissions
Symptom
Pages load slowly, videos buffer, file transfers pause/retry.
Two types
→ Timeout retransmission: no ACK arrives before the RTO timer fires (slow recovery)
→ Fast retransmission: 3 duplicate ACKs signal a lost segment (quick recovery)
Why
Packets dropped: weak Wi-Fi, congested link, bad cable/switch/NIC, ISP problems.
How to spot
Filter: tcp.analysis.retransmission || tcp.analysis.fast_retransmission
To measure your retransmission rate:
→ Apply the filter above, then read the status bar at the bottom: Displayed ÷ total Packets = your retransmission rate
→ >3–5% retransmissions = fix urgently; >1% on a production link warrants investigation
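The thresholds above are easy to encode; a small sketch (the cutoffs come from this guide's rule of thumb, not from any standard):

```python
def retrans_health(retransmissions: int, total_packets: int) -> str:
    """Classify a capture's retransmission rate using rough rule-of-thumb cutoffs."""
    if total_packets == 0:
        return "no data"
    rate = 100.0 * retransmissions / total_packets
    if rate > 3.0:
        return f"{rate:.1f}% - fix urgently"
    if rate > 1.0:
        return f"{rate:.1f}% - investigate"
    return f"{rate:.1f}% - healthy"
```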
Timeout Retransmission Example
No. Time Source Destination Protocol Length Info
10 2.300000 Client Server TCP 1514 ... [PSH, ACK] Seq=1 Ack=1 Win=64240 Len=1460
11 4.800000 Client Server TCP 1514 [TCP Retransmission] ... Seq=1 ...
2.5 second gap — the RTO timer fired. This is a timeout retransmission.
Fast Retransmission Example
No. Time Source Destination Protocol Length Info
20 5.100000 Server Client TCP 60 ... [ACK] Seq=1 Ack=1461 ... (Dup #1)
21 5.100200 Server Client TCP 60 [TCP Dup ACK 20#2] ...
22 5.100500 Server Client TCP 60 [TCP Dup ACK 20#3] ...
23 5.101000 Client Server TCP 1514 [TCP Fast Retransmission] Seq=1461 ...
3 duplicate ACKs in <1ms → fast retransmit triggered. Much faster than waiting for RTO.
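The sender's trigger logic is simple to sketch: count ACKs that repeat the same acknowledgment number and retransmit at the third duplicate. This is a simplified model (real stacks also use SACK information), matching the Dup #1/#2/#3 pattern in the example above:

```python
def fast_retransmit_points(ack_numbers: list[int]) -> list[int]:
    """Return indices where a 3rd duplicate ACK would trigger fast retransmit."""
    triggers = []
    last_ack, dup_count = None, 0
    for i, ack in enumerate(ack_numbers):
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:      # third duplicate of the same ACK number
                triggers.append(i)
        else:
            last_ack, dup_count = ack, 0
    return triggers
```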
How to fix
→ ping -t google.com (Windows; on Linux just ping google.com) → >1% loss or jitter >50ms = issue
→ On Linux, try BBR congestion control: sysctl -w net.ipv4.tcp_congestion_control=bbr
→ modprobe tcp_bbr may be required first on some distros (kernel 4.9+)
Section 3: Duplicate ACKs & Out-of-Order Packets
Symptom
Receiver sends same ACK number multiple times → triggers fast retransmit.
Why
Packets arrive out of order (multi-path routing, load balancers, ECMP, Wi-Fi roaming) or some are lost.
How to spot
Filter: tcp.analysis.duplicate_ack or tcp.analysis.out_of_order
Example
No. Time Source Destination Protocol Length Info
30 6.200000 Server Client TCP 60 ... [ACK] Seq=1 Ack=2921 ... (Dup #1)
31 6.200300 Server Client TCP 60 [TCP Dup ACK 30#2] ...
32 6.200600 Server Client TCP 60 [TCP Dup ACK 30#3] ...
33 6.201000 Client Server TCP 1514 [TCP Fast Retransmission] Seq=2921 ...
Fix & Notes
→ Occasional duplicate ACKs are harmless; persistent runs mean loss or reordering: check multi-path routing, load balancers, ECMP, and Wi-Fi roaming
→ Retransmissions whose original actually arrived show up as tcp.analysis.spurious_retransmission
Section 4: Zero Window (Receiver Can't Keep Up)
Symptom
Transfer stops completely → sender sends tiny probes occasionally → throughput collapses.
Why
Receiver’s TCP buffer is full → application not reading data fast enough (slow disk, busy CPU, small buffer size). This is almost never a network problem — it is an endpoint problem.
How to spot
Filter: tcp.analysis.zero_window || tcp.analysis.window_full || tcp.analysis.zero_window_probe
Example
No. Time Source Destination Protocol Length Info
40 10.500000 Server Client TCP 60 ... [ACK] Seq=10001 Ack=10001 Win=0 [TCP Zero Window]
41 11.000000 Client Server TCP 66 ... [PSH, ACK] Seq=10001 ... [TCP Zero Window Probe]
42 11.500000 Server Client TCP 60 ... [ACK] Win=0 [TCP Zero Window]
43 12.000000 Client Server TCP 66 ... [TCP Zero Window Probe]
Sender probes every ~500ms waiting for receiver to open its window. Transfer is completely stalled.
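If you control the receiving application, it can also ask for a larger receive buffer itself rather than relying only on system-wide settings. A minimal sketch (on Linux the kernel typically doubles the requested value and caps it at net.core.rmem_max):

```python
import socket

# Request a bigger TCP receive buffer before connecting.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)   # ask for 64 KB
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"kernel granted {granted} bytes")   # Linux reports double the request
sock.close()
```

A bigger buffer only buys time: if the application still reads too slowly, the window will fill again.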
Fix
→ Find what is slowing the receiving application (slow disk, busy CPU): this is an endpoint problem
→ Linux: sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216" (last value = max buffer)
→ Windows: netsh int tcp set global autotuninglevel=normal (re-enables auto-tuning if disabled)
Section 5: Connection Resets (RST)
Symptom
“Connection reset by peer”, “broken pipe”, app crashes or disconnects instantly.
Why
Application error, port no longer open, firewall/security device terminates connection, half-open timeout, middlebox interference.
How to spot
Filter: tcp.flags.reset == 1
Example
No. Time Source Destination Protocol Length Info
50 15.800000 Client Server TCP 60 ... [RST, ACK] Seq=20001 Ack=20001 Win=0 Len=0
Key diagnostic question: who sent the RST?
Fix
→ If the server sent the RST: check the server application's logs and confirm the port is still open
→ If the client sent it: the client application gave up or crashed; check client-side logs
→ If neither endpoint sees the RST leaving its own interface, a firewall or middlebox injected it (capture on both ends to prove this)
Section 6: Silent Throughput Bottleneck (BDP / Window Size)
Symptom
No retransmissions, no resets, but throughput far below link speed (e.g., 5 Mbps on a gigabit line).
Why
TCP can only have a certain amount of unacknowledged data “in flight” at once, limited by the receiver window size. On high-latency links, if the window is small, the pipe stays mostly empty.
Formula: BDP = Bandwidth (bits/s) × RTT (seconds)
Example: 1 Gbps link with 100ms RTT = 12.5 MB must be in flight to saturate the link. If window is only 64 KB, you’ll get ~5 Mbps regardless of link speed.
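The same arithmetic in code (units: bits per second and seconds in, bytes out):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def window_limited_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Max throughput when the receive window, not the link, is the limit."""
    return window_bytes * 8 / rtt_s

# 1 Gbps link, 100 ms RTT -> 12.5 MB of in-flight data needed
assert bdp_bytes(1e9, 0.1) == 12.5e6
# A 64 KB window on that same link caps throughput at roughly 5.2 Mbps
print(f"{window_limited_throughput_bps(65536, 0.1) / 1e6:.1f} Mbps")
```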
How to spot
→ Inspect the handshake: look for the window scale option (WS=) in both SYN packets; a missing WS option or WS=1 means little or no scaling
→ Statistics → TCP Stream Graphs → Throughput: a flat line far below link speed with no errors is the classic signature
Packet Example
No. Time Source Destination Protocol Length Info
60 0.000000 Client Server TCP 66 [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=1
61 0.100000 Server Client TCP 66 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 WS=1
62 0.200000 Client Server TCP 1514 [PSH, ACK] Seq=1 Len=1460
63 0.201000 Client Server TCP 1514 [PSH, ACK] Seq=1461 Len=1460
64 0.202000 Client Server TCP 66 [ACK] — window exhausted, sender must wait
65 0.300000 Server Client TCP 60 [ACK] Ack=2921 Win=65535
Window scaling factor of 1 (WS=1) means no scaling — max window 65KB. On a 100ms RTT link, throughput is capped at ~5 Mbps even on gigabit hardware.
Fix
→ Ensure window scaling is enabled on both ends (Linux: sysctl net.ipv4.tcp_window_scaling should be 1)
→ Raise the maximum buffer sizes so the window can grow toward the BDP (tcp_rmem / tcp_wmem on Linux)
→ For high-latency bulk transfers, consider parallel streams
Section 7: Half-Open Connections & FIN Problems
Symptom
Connection appears closed to the application but sockets linger in FIN_WAIT or CLOSE_WAIT state. Port exhaustion, resource leaks, or reconnection failures follow.
Why
Normal TCP close requires a four-way FIN handshake. If one side never sends or acknowledges a FIN, the connection stays half-open indefinitely.
How to spot
Filter: tcp.flags.fin == 1
Check socket state on Linux: ss -tan | grep -E 'FIN_WAIT|CLOSE_WAIT'
Check on Windows: netstat -an | findstr "FIN_WAIT\|CLOSE_WAIT"
Packet Example – Clean Close
No. Time Source Destination Protocol Length Info
70 20.000000 Client Server TCP 60 [FIN, ACK] Seq=5001 Ack=3001
71 20.001000 Server Client TCP 60 [ACK] Ack=5002
72 20.002000 Server Client TCP 60 [FIN, ACK] Seq=3001 Ack=5002
73 20.003000 Client Server TCP 60 [ACK] Ack=3002
Four packets, clean close. Both sides confirm the end.
Packet Example – Half-Open (FIN never answered)
No. Time Source Destination Protocol Length Info
74 20.000000 Client Server TCP 60 [FIN, ACK] Seq=5001
75 20.001000 Server Client TCP 60 [ACK] Ack=5002
— no FIN from server ever arrives —
Server ACKs the client FIN but never sends its own. Client socket stays in FIN_WAIT_2. Server socket stays in CLOSE_WAIT — typically a bug in the server application.
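You can watch for stuck states programmatically by tallying ss output. A sketch (assumes Linux ss with its default column layout, where state is the first column):

```python
import subprocess
from collections import Counter

def count_states(ss_output: str) -> Counter:
    """Tally TCP socket states from `ss -tan`-style output."""
    counts = Counter()
    for line in ss_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if fields:
            counts[fields[0]] += 1
    return counts

# On a live Linux box:
# counts = count_states(
#     subprocess.run(["ss", "-tan"], capture_output=True, text=True).stdout)
# Sustained CLOSE-WAIT growth points at an app that never closes its sockets.
```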
Fix
→ CLOSE_WAIT on server = application bug — server code is not closing the socket after receiving FIN
→ FIN_WAIT_2 lingering = set TCP keepalive, or shorten tcp_fin_timeout on Linux: sysctl -w net.ipv4.tcp_fin_timeout=30
→ Audit the application's socket.close() calls
Keep-Alive / Probes
Filter: tcp.analysis.keep_alive
→ Long-idle connections send probes to detect dead peers. Normal behavior — only a concern if excessive.
Previous segment not captured
Filter: tcp.analysis.lost_segment
→ ⚠️ Common false positive: This fires at the start of any capture because Wireshark joins mid-stream and hasn’t seen earlier packets. If it only appears in the first few packets, ignore it. If it appears throughout a capture, there is real packet loss upstream of your capture point.
Port reused too soon
Filter: tcp.analysis.reused_ports
→ Old connection still lingering → new connection on same port conflicts. Often seen after rapid reconnections.
TCP Completeness
Filter: tcp.completeness
→ Bitmask shows whether handshake (7), data transfer (15+), and FIN/RST close are all present in the capture. Useful for verifying you captured the full session.
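The bitmask can be decoded by hand using the bit values Wireshark assigns (SYN=1, SYN-ACK=2, ACK=4, DATA=8, FIN=16, RST=32); a small sketch:

```python
# Bit values as assigned by Wireshark's tcp.completeness field.
FLAGS = {1: "SYN", 2: "SYN-ACK", 4: "ACK", 8: "DATA", 16: "FIN", 32: "RST"}

def decode_completeness(mask: int) -> list[str]:
    """Expand a tcp.completeness bitmask into the events seen in the capture."""
    return [name for bit, name in FLAGS.items() if mask & bit]

# 7  -> full three-way handshake captured
# 15 -> handshake plus data
assert decode_completeness(7) == ["SYN", "SYN-ACK", "ACK"]
assert decode_completeness(15) == ["SYN", "SYN-ACK", "ACK", "DATA"]
```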
Spurious Retransmissions
Filter: tcp.analysis.spurious_retransmission
→ Unnecessary resend — the original packet actually arrived, just late. Usually caused by aggressive RTO timers or high jitter. Not true packet loss.
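What counts as an "aggressive" timer depends on the RTO estimator: standard TCP derives it from smoothed RTT and RTT variance per RFC 6298. A sketch of that computation:

```python
class RtoEstimator:
    """RFC 6298 retransmission-timeout estimator (alpha=1/8, beta=1/4)."""

    def __init__(self, min_rto: float = 1.0):
        self.srtt = None        # smoothed round-trip time
        self.rttvar = None      # round-trip time variance
        self.min_rto = min_rto  # RFC 6298 recommends a 1 second floor

    def update(self, rtt: float) -> float:
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return max(self.min_rto, self.srtt + 4 * self.rttvar)
```

With steady samples the RTO settles near the floor; a single jitter spike inflates rttvar and the timeout with it, which is exactly why high-jitter paths produce spurious retransmissions.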
Statistics → TCP Stream Graphs (right-click any packet in a stream → choose graph type):
| Graph | What to look for |
|---|---|
| Time-Sequence (Stevens) | Slope = throughput. Steep and steady = healthy. Flat sections = stall. |
| Time-Sequence (tcptrace) | Adds receiver window line. Window shrinking = receiver bottleneck. |
| Throughput | Actual speed over time. Flat line well below link speed = BDP/window problem. |
| Round Trip Time | Spikes = delay or jitter. Climbing RTT = congestion building. |
| Window Scaling | Window drops to zero = zero-window event (see Section 4). |
Other useful views:
→ I/O Graph (Statistics → I/O Graph) with the filter tcp.analysis.retransmission to plot error rate over time
Start here every time:
1. Apply filter: tcp.analysis.flags
|
├── Results found?
│ |
│ ├── Open: Analyze → Expert Information
│ │ |
│ │ ├── RST packets? → Section 5
│ │ ├── Zero Window? → Section 4
│ │ ├── Retransmissions? → Section 2
│ │ ├── Duplicate ACKs? → Section 3
│ │ └── SYN retransmissions? → Section 1
│ |
│ └── Focus on RED items first, then YELLOW
|
└── No results / errors look fine but still slow?
→ Section 6 (silent BDP bottleneck)
→ Check Statistics → TCP Stream Graphs → Throughput
Step-by-step:
→ Apply tcp.analysis.flags → scan for highlighted packets

If you have suggestions, improvements, or additional examples to contribute, please open an issue or pull request.