RFC 3393 is for measuring the *variance* in the packet delay, not for measuring the delay itself.

To give an example: you're writing a video streaming application. You want to buffer as little video data as possible (so that the video starts playing as soon as possible). Let's say that data always always *always* takes 20ms to get from machine A to machine B. In this case (and assuming that machine A can send the video data as fast as it needs playing), you don't need any buffer at all. As soon as you receive the first frame, you can start playing, safe in the knowledge that by the time the next frame is needed, it will have arrived (because the data always takes exactly 20ms to arrive and machine A is sending at least as fast as you're playing).

This works no matter how long that 20ms is, as long as it's always the same. It could be 1000ms - the first frame takes 1000ms to arrive, but you can still start playing as soon as it arrives, because the next frame will also take 1000ms and was sent right behind the first frame - in other words, it's already on its way and will be here momentarily. Obviously the real world isn't like this.

Take the other extreme: most of the time, data arrives in 20ms. Except sometimes, when it takes 5000ms. If you keep no buffer and the delay on frames 1 through 50 is 20ms, then you get to play the first 50 frames without a problem. Then frame 51 takes 5000ms to arrive and you're left without any video data for 5000ms. The user goes and visits another site for their cute cat videos. What you really needed was a buffer of 5000ms of data - then you'd have been fine.

Long example, short point: you're not interested in what the *absolute* delay on the packets is, you're interested in what the *variance* in that delay is - that's how big your buffer has to be.

To measure the *absolute* delay, you'd have to have the clocks on both machines synchronised. Machine A would send a packet with timestamp 12337849227**28**, and when that arrived at machine B at time 12337849227**48**, you'd know the packet had taken 20ms to get there.

But since you're interested in the *variance*, you need (as RFC 3393 describes) several packets from machine A. Machine A sends packet 1 with timestamp 1233784922**72**8, then 10ms later sends packet 2 with timestamp 1233784922**73**8, then 10ms later sends packet 3 with timestamp 1233784922**74**8.

Machine B receives packet 1 at what it thinks is timestamp 1233784922**12**8. The one-way delay between machine A and machine B has in this case (from machine B's perspective) been -600ms. This is obviously complete rubbish, but we don't care. Machine B receives packet 2 at what it thinks is timestamp 1233784922**15**8. The one-way delay has been -580ms. Machine B receives packet 3 at what it thinks is timestamp 1233784922**16**8. The one-way delay was again -580ms.

As above, we don't care what the absolute delay is - so we don't even care if it's negative, or three hours, or whatever. What we care about is that the amount of delay varied by 20ms. So you need a buffer of 20ms of data.

Note that I'm entirely glossing over the issue of clock drift here (that is, the clocks on machines A and B running at slightly different rates, so that, for example, machine A's time advances at a rate of 1.00001 seconds for every second that actually passes). While this does introduce inaccuracy in the measurements, its practical effect isn't likely to be an issue in most applications.
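To make the arithmetic concrete, here's a minimal Python sketch of the same calculation, using the timestamps from the example above. The max-minus-min buffer estimate at the end is my own simplification for illustration - RFC 3393 itself defines delay variation for *pairs* of packets and leaves the choice of pairs (the selection function) to the application.

```python
# (send_timestamp, receive_timestamp) pairs for packets 1-3, in ms.
# The two clocks are NOT synchronised, so the absolute delays are
# meaningless - only their differences matter.
packets = [
    (1233784922728, 1233784922128),  # packet 1
    (1233784922738, 1233784922158),  # packet 2
    (1233784922748, 1233784922168),  # packet 3
]

# One-way delay per packet. Negative values are fine: they just mean
# the receiver's clock is behind the sender's.
delays = [recv - sent for sent, recv in packets]
print(delays)  # [-600, -580, -580]

# Delay variation per consecutive packet pair: the difference of the
# two one-way delays. The unknown clock offset cancels out here.
ipdv = [b - a for a, b in zip(delays, delays[1:])]
print(ipdv)  # [20, 0]

# A jitter buffer only has to absorb the *spread* of the delays,
# not the delays themselves (illustrative sizing rule, not RFC 3393's).
buffer_ms = max(delays) - min(delays)
print(buffer_ms)  # 20
```

Note how the clock offset between the two machines cancels out of the pairwise differences - which is exactly why the scheme works without synchronised clocks.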
 
