1.1.e Explain TCP operations

1.1.e [iii] Latency

key causes:

propagation delay

serialization

data protocols

routing and switching

queueing and buffering

Propagation delay is the primary source of latency on long-distance links. It is the time it takes a signal to travel over the communications media from source to destination; the signal propagates at the speed of light in the medium, which in optical fibre or copper is roughly two-thirds of c.
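A quick back-of-the-envelope calculator for this, assuming a signal speed of about two-thirds of c in optical fibre (the assumed refractive index of ~1.5 and the example distance are mine, not from the notes):

```python
C = 299_792_458          # speed of light in vacuum, m/s
FIBRE_SPEED = C * 2 / 3  # approximate propagation speed in fibre, m/s

def propagation_delay_ms(distance_km):
    """One-way propagation delay over distance_km of fibre, in milliseconds."""
    return distance_km * 1_000 / FIBRE_SPEED * 1_000
```

For a ~5,600 km transatlantic fibre path this gives about 28 ms one way, before any other delay component is added.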

Serialization is the conversion of bytes of data into a serial bit stream for transmission over the media. For example, serializing a 1500-byte packet onto a 100 Mb/s LAN takes 120 microseconds (12,000 bits / 100,000,000 bits per second).
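The arithmetic behind that figure, as a small sketch (the function name is mine):

```python
def serialization_delay_us(packet_bytes, link_bps):
    """Time to clock a packet's bits onto the wire, in microseconds."""
    return packet_bytes * 8 / link_bps * 1_000_000
```

So a 1500-byte packet takes 120 µs on a 100 Mb/s link, but only 12 µs on a 1 Gb/s link: serialization delay shrinks linearly as link speed rises.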

Data communications protocols at various layers use handshakes to synchronize transmitter and receiver and to perform error detection and correction. These handshakes take time and therefore also add latency.

Routing and switching latency – routers and switches add roughly 200 microseconds of latency per hop for packet processing. This contributes about 5% of the overall latency on an average Internet link.

Queueing and buffer management can contribute 20 ms or more to latency. This occurs when packets must be queued because a link is over-utilized.
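The four causes above can be combined into a rough one-way latency budget. The figures below are the illustrative numbers from these notes (plus an assumed 1,000 km fibre distance), not measurements; real links vary enormously:

```python
# Illustrative one-way latency budget; shows how the components add up.
budget_ms = {
    "propagation (1,000 km of fibre)":      5.0,
    "serialization (1500 B at 100 Mb/s)":   0.12,
    "routing/switching (~200 us per hop)":  0.2,
    "queueing/buffering (congested link)": 20.0,
}
total_ms = sum(budget_ms.values())  # queueing dominates when a link is congested
```

Note how, on a congested link, queueing can dwarf every other component combined.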

from wiki: http://en.wikipedia.org/wiki/Latency_%28engineering%29

Network latency in a packet-switched network is measured either one-way (the time from the source sending a packet to the destination receiving it), or round-trip delay time (the one-way latency from source to destination plus the one-way latency from the destination back to the source). Round-trip latency is more often quoted, because it can be measured from a single point. Note that round trip latency excludes the amount of time that a destination system spends processing the packet. Many software platforms provide a service called ping that can be used to measure round-trip latency. Ping performs no packet processing; it merely sends a response back when it receives a packet (i.e. performs a no-op), thus it is a first rough way of measuring latency. Ping cannot perform accurate measurements,[2] principally because it uses the ICMP protocol that is used only for diagnostic or control purposes, and differs from real communication protocols such as TCP. Furthermore, routers and ISPs might apply different traffic shaping policies to different protocols.[3][4]
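Since ping needs ICMP (and thus raw sockets), a rough user-space alternative is to time a TCP three-way handshake: `connect()` returns once the SYN → SYN/ACK round trip completes. A minimal sketch (the function name is mine):

```python
import socket
import time

def tcp_connect_rtt(host, port, timeout=2.0):
    """Estimate round-trip latency by timing a TCP three-way handshake.

    connect() returns once the SYN -> SYN/ACK exchange completes, so the
    elapsed time approximates one round trip plus kernel overhead. Like
    ping, this ignores any application-level processing at the far end.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start
```

For example, `tcp_connect_rtt("example.com", 80)` gives a one-shot RTT estimate; like ping, it is still subject to per-protocol traffic shaping along the path.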

However, in a non-trivial network, a typical packet will be forwarded over many links via many gateways, each of which will not begin to forward the packet until it has been completely received. In such a network, the minimal latency is the sum of the minimum latency of each link, plus the transmission delay of each link except the final one, plus the forwarding latency of each gateway. In practice, this minimal latency is further augmented by queuing and processing delays. Queuing delay occurs when a gateway receives multiple packets from different sources heading towards the same destination. Since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay. Processing delays are incurred while a gateway determines what to do with a newly received packet. A new and emergent behavior called bufferbloat can also cause increased latency that is an order of magnitude or more. The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.
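The store-and-forward arithmetic in the paragraph above can be written out directly. This is a sketch of the quoted formula; the function and parameter names are mine:

```python
def min_end_to_end_latency(link_latency, link_tx_delay, gateway_fwd):
    """Minimal latency over a store-and-forward path, per the quoted formula:
    the minimum latency of every link, plus the transmission delay of every
    link except the final one, plus each gateway's forwarding latency.
    Queueing and processing delays are then added on top of this floor.
    """
    return sum(link_latency) + sum(link_tx_delay[:-1]) + sum(gateway_fwd)
```

For a two-link path (1 ms and 2 ms links, 0.5 ms transmission delay each, one gateway taking 0.1 ms to forward) the floor is 1 + 2 + 0.5 + 0.1 = 3.6 ms.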

Latency limits total bandwidth in reliable two-way communication systems as described by the bandwidth-delay product.
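A quick illustration of the bandwidth-delay product and the throughput ceiling it implies for a windowed protocol such as TCP (the example figures are mine):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes, rtt_s):
    """Throughput ceiling when at most one window can be unacknowledged per RTT."""
    return window_bytes * 8 / rtt_s

# A 100 Mb/s path with a 50 ms RTT needs 625,000 bytes in flight to stay full;
# a classic 64 KiB TCP window on that path caps throughput near 10.5 Mb/s,
# which is why high-BDP paths need TCP window scaling.
```

This is why latency, not just link speed, limits achievable throughput: halving the RTT doubles the ceiling for a fixed window.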