Let T (measured in RTTs) denote the time interval that a TCP connection takes to increase its congestion window size from W/2 to W, where W is the maximum congestion window size. Argue that T is a function of TCP's average throughput.
Let B denote the average throughput of the connection. If the connection has loss rate L, then its average throughput is approximately

$$B \approx \frac{1.22 \cdot MSS}{RTT \cdot \sqrt{L}}$$
Solving the above formula for the loss rate L gives

$$L = \left(\frac{1.22 \cdot MSS}{B \cdot RTT}\right)^{2}$$
The TCP sender transmits 1/L packets between two consecutive packet losses; this loss cycle is exactly the interval during which the congestion window grows from W/2 back to W.
Hence, the time interval T is the time needed to send these 1/L packets at the average throughput B:

$$T = \frac{(1/L) \cdot MSS}{B} = \frac{B \cdot RTT^{2}}{(1.22)^{2} \cdot MSS} \approx 0.67 \cdot \frac{B \cdot RTT^{2}}{MSS} \ \text{seconds},$$

or, measured in RTTs, $T \approx \frac{2}{3} \cdot \frac{B \cdot RTT}{MSS}$.
Therefore, T is a function of TCP’s average throughput.
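As a quick sanity check, the following Python sketch (with hypothetical values for W, MSS, and RTT) derives B from W, then L, then the 1/L packets per loss cycle, and confirms that the resulting T is about W/2 RTTs, the number of RTTs needed to grow the window from W/2 to W when the window increases by one segment per RTT:

# Sanity check of the derivation above, using hypothetical parameter values.
MSS = 1500          # bytes per segment (hypothetical)
RTT = 0.1           # round-trip time in seconds (hypothetical)
W   = 20            # maximum congestion window, in segments (hypothetical)

# Average throughput: the window oscillates between W/2 and W,
# so on average 0.75*W segments are in flight per RTT.
B = 0.75 * W * MSS / RTT                  # bytes per second

# Loss rate from B = 1.22*MSS / (RTT*sqrt(L)), solved for L.
L = (1.22 * MSS / (B * RTT)) ** 2

# 1/L packets are sent between consecutive losses, i.e. during one cycle.
packets_per_cycle = 1 / L

# Time to send those packets at throughput B, expressed in RTTs.
T_in_rtts = packets_per_cycle * MSS / B / RTT

print(f"T = {T_in_rtts:.2f} RTTs, W/2 = {W / 2} RTTs")
# Prints T = 10.08 RTTs vs. W/2 = 10.0 RTTs; the small gap comes from
# rounding sqrt(3/2) to 1.22 in the loss-rate formula.

The two values agree (up to the rounding of 1.22), which is consistent with T being determined by the average throughput B.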