Notes from "Networking and Online Games: Understanding and Engineering Multiplayer Internet Games", by Armitage, Claypool and Branch, 2006
Latency, jitter, and packet loss
- minimum (propagation) delay: link distance / signal speed (~3 ms per 1000 km at the speed of light; real links are somewhat slower)
- serialization delay: transmitting a 1500-byte IP packet takes 0.12 ms on a 100 Mbps link, ~23 ms through 512 kbps ADSL, and ~357 ms through a 33.6 kbps dial-up modem
- queuing delay: packets from different sources sharing the same link must wait their turn to be transmitted (Wi-Fi can add 50-100 ms to the RTT)
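The first two delay components above are pure arithmetic; a minimal sketch (helper names are mine, not from the book):

```python
# Per-packet delay components: propagation + serialization.
# Queuing delay is omitted because it depends on cross-traffic.

def propagation_delay_ms(distance_km: float, v_km_per_ms: float = 300.0) -> float:
    """Distance / signal speed (~300 km/ms for light in vacuum)."""
    return distance_km / v_km_per_ms

def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock all the packet's bits onto the wire."""
    return packet_bytes * 8 / link_bps * 1000

# The 1500-byte examples from the notes:
print(round(serialization_delay_ms(1500, 100e6), 2))   # 100 Mbps  -> 0.12 ms
print(round(serialization_delay_ms(1500, 512e3), 1))   # 512 kbps  -> 23.4 ms
print(round(serialization_delay_ms(1500, 33.6e3)))     # 33.6 kbps -> 357 ms
```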
Jitter: main source is burstiness of queuing delay.
Packet loss: main sources are (rare) data corruption, transient dynamic-routing changes, and proactive dropping by routers (the fuller a router's queue, the more likely it is to drop incoming packets; loss throttles TCP senders, while lost UDP packets are simply gone). Mitigations:
- Preferential queuing of packets based on their source, destination, and protocol.
- Larger router queues / better-tuned active queue management.
- Interleaving multiple IP packets over the same link via multiple virtual channels (ATM, and the 2 channels of ADSL), so small game packets are not stuck behind large ones.
"dumb client" = waits for server approval before rendering user inputs; no latency compensation. Types of latency compensation techniques:
- prediction = playing the consistency-responsiveness trade-off.
- Player prediction = client only predicts local player's units and fixes discrepancies when receiving server response.
- Opponent prediction = predict other players' units from their last known position and velocity; each player only sends updates on their units when the actual position differs from the predicted position by more than a given threshold (e.g. after a sudden right turn). Trade-off between fidelity (the notification threshold) and number of messages sent. Unfairness problem: players with higher latency receive updates later. Solution: Time Delay (see below).
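Opponent prediction (dead reckoning) with an update threshold can be sketched as below; the names and the 1-unit threshold are illustrative assumptions, not from the book:

```python
import math

THRESHOLD = 1.0  # fidelity knob: bigger -> fewer updates, worse accuracy

def predict(pos, vel, dt):
    """Extrapolate position from the last reported position and velocity."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def needs_update(actual, predicted, threshold=THRESHOLD):
    """The owner sends an update only when prediction error exceeds the threshold."""
    return math.dist(actual, predicted) > threshold

last_pos, last_vel = (0.0, 0.0), (2.0, 0.0)   # moving right at 2 units/s
pred = predict(last_pos, last_vel, dt=1.0)    # peers assume (2.0, 0.0)
print(needs_update((2.1, 0.0), pred))         # small drift  -> False, no message
print(needs_update((2.0, 3.0), pred))         # sudden turn  -> True, send update
```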
- time manipulation:
- Time Delay: server delays processing and sending of user commands to equalize latencies (or up to a maximum buffer size). Pro: fairness. Con: as responsive as the slowest client (or adds latency).
- Time Warp: the server knows the current lag to each client. When receiving a client input, the server computes the new state from the state at (current time - lag), and rolls back everyone's state if the computed state differs from the current one. Used in Valve's engine. Problem: a client can cheat by pretending to lag heavily while it isn't, sending actions that roll the state back to its advantage.
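A toy sketch of the Time Warp idea: the server keeps a short history of world snapshots and evaluates a client's action against the snapshot that was current when the client acted. All names and sizes are illustrative assumptions, not Valve's implementation:

```python
from collections import deque

HISTORY_TICKS = 20   # how far back we allow rewinding
TICK_MS = 50

history = deque(maxlen=HISTORY_TICKS)   # entries: (tick, {player: position})

def record(tick, world):
    history.append((tick, dict(world)))

def rewound_world(now_tick, client_lag_ms):
    """Pick the snapshot closest to when the client actually saw the world."""
    target = now_tick - round(client_lag_ms / TICK_MS)
    return min(history, key=lambda s: abs(s[0] - target))[1]

for t in range(10):
    record(t, {"victim": (t, 0)})       # victim moves 1 unit per tick

# A client with 150 ms lag shoots at tick 9; the server checks the hit
# against the world ~3 ticks earlier.
print(rewound_world(9, 150)["victim"])  # (6, 0)
```

The cheat described above falls out directly: if the client overstates its lag, `client_lag_ms` grows and the server rewinds further than it should.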
- bandwidth reduction (hence reduction of serialization delay): LZW compression, sending state deltas instead of the whole new state, interest management, using P2P for voice, and bucketing/update aggregation
- visual tricks: show an animation to cover part of the lag
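One of the bandwidth-reduction techniques listed above, sending state deltas, can be sketched with plain dicts (illustrative, not any engine's actual wire format): only fields that changed since the last acknowledged state are sent.

```python
def delta(old: dict, new: dict) -> dict:
    """Fields of `new` that differ from `old` (assumes no field deletions)."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(state: dict, d: dict) -> dict:
    """Reconstruct the full state on the receiving side."""
    return {**state, **d}

s0 = {"hp": 100, "pos": (0, 0), "ammo": 30}
s1 = {"hp": 100, "pos": (1, 0), "ammo": 29}
d = delta(s0, s1)
print(d)                          # {'pos': (1, 0), 'ammo': 29} -- hp unchanged, omitted
print(apply_delta(s0, d) == s1)   # True
```

In a real protocol the delta is computed against the last state the client acknowledged, so a lost delta does not desynchronize the client.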
Playability vs network conditions and cheats
2 ways to discover player tolerance to network: 1) lab experiment, or 2) monitoring public servers.
Network simulations: use custom routers to simulate latency and packet loss: NISTnet for Linux or dummynet for FreeBSD. ns-2 (a network simulator) can also be used to generate traffic.
Problem: the OS scheduler's timer granularity is typically 10 ms. Solution: rebuild the kernel with a 1 ms tick rate.
Other problem: simulation parameters are game-specific, change with the number of players, etc. (although server-to-client packet simulations seem to estimate reality quite correctly).
Use a normal (not uniform) distribution for latency for more realistic jitter. But careful: packet re-ordering may happen if modifying NISTnet's packet delay value on the fly.
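A sketch of both points above, with assumed parameters: draw each packet's delay from a Gaussian (clamped at a propagation floor), and enforce FIFO delivery so per-packet jitter cannot reorder packets the way the NISTnet caveat warns about.

```python
import random

def sample_delay_ms(mean=50.0, stddev=8.0, floor=20.0):
    """Gaussian one-way delay, clamped at the propagation floor."""
    return max(floor, random.gauss(mean, stddev))

def schedule(send_times_ms):
    """Assign delivery times while enforcing FIFO: a packet is never
    delivered before an earlier-sent packet, despite random jitter."""
    deliveries, last = [], 0.0
    for t in send_times_ms:
        last = max(last, t + sample_delay_ms())
        deliveries.append(last)
    return deliveries

random.seed(0)
out = schedule(range(0, 500, 10))   # one packet every 10 ms
print(out == sorted(out))           # True: order preserved despite jitter
```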
| Cheat level | Examples | Countermeasures |
|---|---|---|
| client-side | graphics (e.g. wallhacks); input (e.g. aimbots running as a proxy that rewrites 'shoot' command packets; 24/7 farm bots) | only send clients the state they need, and check on the client that it is running normally (e.g. PunkBuster) |
| server-side | escaping a game you're losing so it isn't ranked; brute-forcing an opponent's password to steal the account or lock them out | limit message rate |
| network | DDoS to increase the victim's latency | hide players' IPs |
Sniffing packets on the server side: use a hub, or a switch with port mirroring. Tools: tcpdump (command line) and Ethereal (GUI) to capture Ethernet frames in both directions. Problem: the accuracy of the sniffer's packet timestamps depends on CPU clock precision, IO throughput, CPU load, etc. Solution: calibrate the sniffer against a known source emitting packets at regular intervals.
Player information: reverse-DNS-lookup of a player's IP to find where they come from. Careful: 2 players behind the same NAT appear with the same public IP but different ports, so 1 player = (IP, port), not just IP.
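The NAT point above in code form, keying per-player state on the (IP, port) pair rather than IP alone (names and addresses are illustrative):

```python
players = {}

def on_packet(src_ip, src_port, payload):
    """Track per-player state keyed by (ip, port), not ip alone."""
    key = (src_ip, src_port)
    players.setdefault(key, {"packets": 0})
    players[key]["packets"] += 1

on_packet("203.0.113.7", 40001, b"move")   # two NAT'd players share src_ip...
on_packet("203.0.113.7", 40002, b"move")   # ...but differ in src_port
print(len(players))                        # 2 distinct players, same public IP
```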
Tick and bandwidth: in Wolfenstein Enemy Territory and Quake 3, the server ticks every 50 ms (20 Hz), while Half-Life 2 ticks every 15 ms (66 Hz). For both, however, the client requests updates every 50 ms. The client can choose to request packets more often (e.g. 100 packets/sec), but the server will still send at most at its tick rate (20 or 66 Hz). Clients can also specify a max downstream bitrate (e.g. 50 kbps); this is useful when update packets are too frequent or too big.
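Back-of-envelope for the rate cap above (the 200-byte update size is an assumed example, not from the book):

```python
def downstream_kbps(updates_per_sec, packet_bytes):
    """Downstream bitrate implied by an update rate and packet size."""
    return updates_per_sec * packet_bytes * 8 / 1000

print(downstream_kbps(20, 200))   # 20 Hz x 200 B -> 32.0 kbps, under a 50 kbps cap
print(downstream_kbps(66, 200))   # 66 Hz x 200 B -> 105.6 kbps, the cap would throttle
```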
Packet size and inter-arrival time:
Server-to-client packet size depends on the map design, number of players, ...
Packet inter-arrival times: if the server sends one packet per player, and each of the n players receives updates about the other n-1 players every server tick, then each client receives a burst of n-1 packets per tick. In other words, roughly (n-1)/n of the inter-arrival times should be in the "sub-millisecond region". For example, every 50 ms, a Quake 3 server with 15 players sends each client 14 packets back-to-back.
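The burst arithmetic above, made concrete with the Quake 3 example (n = 15 players, 50 ms ticks); the 0.1 ms within-burst spacing is an assumption for illustration:

```python
def burst_schedule(n_players, ticks, tick_ms=50, gap_ms=0.1):
    """Arrival times: each tick, a back-to-back burst of n-1 packets."""
    times = []
    for t in range(ticks):
        for i in range(n_players - 1):
            times.append(t * tick_ms + i * gap_ms)
    return times

times = burst_schedule(15, ticks=100)
gaps = [b - a for a, b in zip(times, times[1:])]
sub_ms = sum(1 for g in gaps if g < 1.0)
print(len(times))                     # 1400 packets (14 per tick x 100 ticks)
print(round(sub_ms / len(gaps), 2))   # 0.93: almost all gaps are sub-millisecond
```

Note the exact fraction is (n-2)/(n-1) per burst (13 small gaps out of 14 per tick here), which the notes approximate as (n-1)/n.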
Client-to-server packet size and inter-arrival times depend on client hardware, gameplay behavior, etc., but each client's distribution of packet sizes and inter-arrival times is quite distinct from other clients'. [It might be possible to identify clients that way, and eventually spot stolen accounts or power-levelers.]