GT CS 8803 - Delay and Loss Modeling in Internet
Georgia Tech

Delay and Loss Modeling in the Internet
By Sanjeev Dwivedi
CS 8803 - Network Measurement

What causes delays and losses? [MoonJSAC03]
• Bugs in router implementations.
• Speed of EM waves in the transmission media.
• CPU power (e.g., processing routing updates).
• Packets taking the slow path.
• Congestion (queuing).
• Packet sizes.
• Noisy channels.
• Route flapping.

[Figure: trimodal delay distribution - the peaks correspond to the most frequent packet sizes. Delays within a router are proportional to packet size (store-and-forward architecture).]

Are delays symmetric on Internet paths? [Claffy]
All delays in the following sections assume end-to-end measurements.

[Figure: Transatlantic link I (1992), log scale - directional delay increases with RTT.]
[Figure: Transatlantic link II (1992), log scale - note the RTT.]
[Figure: USA route I (1992), log scale - the RTT is low by a factor.]
[Figure: USA route II (1992), log scale - the RTT is low by a factor.]
[Figure: TTL variation (route flapping) [Cottrell] - RTTs.]

Losses
What can cause this?
[Figure: loss - link speed contributions.]
[Figure: route changes.]
[Figure: ICMP ping measurements - effectiveness.]
[Figure: link speed effects.]

How to measure and sanitize? [Pax97]
• Measure between the same pairs of hosts.
• Measure at Poisson intervals, which gives unbiased measurements even if the sampling rate varies [Pax96].
• Account for network pathologies: reordering (violates the FIFO model), packet replication (how and why can this even happen? Link-layer replication, stack implementation faults, ...), and packet corruption.
• Remove TTL shifts (routing changes).
• Handle compressed timing.
• Discard suspect clocks.

Some stats

                               N1 (1994)   N2 (1995)
  Reorder (%)                  36          12
  Reorder (data pkt) %         2           0.3
  Reorder (ack pkt) %          0.6         0.1
  Packet replication (count)   1           65
  Packet corruption %          0.02        0.02

Do large windows cause higher loss rates?
• Hypothesis: this can be tested by comparing the loss rates of data packets with those of ack packets.
• Reason: data packets stress the forward path more than the smaller ack packets do, and the ack rate is usually lower than the data rate by some factor.
• But: TCP data packets adapt to current conditions, whereas the acks do not unless a whole flight of packets is lost, causing a sender timeout.
Hence ack losses give a clearer picture of overall Internet loss patterns, while data losses tell us specifically about the conditions as perceived by TCP connections.
• Supporting numbers (loss rates):

             N1      N2
  Data %     2.65    5.28
  Ack %      2.88    5.14

Geographical effects
Except for the links going into the US (check what this means: probably the "into Europe" case, since the delta is 73%), the proportion of quiescent connections is fairly stable. Hence increases in loss rates are primarily due to higher loss rates during the already-loaded "busy" periods.

Loss evolution and behavior over time
• Observing a zero-loss connection at a given point in time is quite a good predictor over longer time scales (hours, even days and weeks). The same holds for lossy periods.
• The predictive behavior of lossy/non-lossy periods suggests that the network lives in an "on-off" state of "quiescent" and "busy" periods, and that both kinds of period may be long-lived.
• The predictive power of observing a specific loss rate is much lower than that of observing the presence of zero/non-zero loss.

Losing data packets and acks

  Worst-case loss rate:   Loaded 47%   Unloaded 65%   Acks 68%

For all of these extremes (the stats above are for different connections - the worst case in each category), no packet was lost in the reverse direction (good for TCP!). Clearly, packet loss in the forward and reverse directions is sometimes completely independent.

The non-zero portions of both the unloaded and the loaded data-packet loss rates agree closely with exponential distributions, while the fit for acks is a less persuasive match.
[Figure: data-packet loss rates vs. exponential fit.]
[Figure: ack-packet loss rates vs. exponential fit.]

Loss bursts and independence
• The assumption that loss events are well modeled as independent does not hold.
• If successive packet losses (bursts) are grouped into outages, the outage durations span several orders of magnitude.
[Figure: outage durations span several orders of magnitude.]

Delay: why timing compression does not matter
Ack compression is rare and small in magnitude.
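The compression check can be illustrated with a short sketch (an illustration only, not the paper's exact test: the run length and the 4x spacing ratio below are arbitrary assumptions):

```python
# Hedged sketch: flag candidate "compressed" ack runs.
# Assumption: we have matched (send_time, recv_time) pairs per ack, and we
# call a run of acks "compressed" when its arrival span is much smaller than
# its sending span (factor `ratio`, chosen arbitrarily here).

def compressed_runs(send_times, recv_times, run_len=4, ratio=0.25):
    """Return index ranges [i, i+run_len) whose arrival span is
    < ratio * sending span, i.e. candidate timing compression."""
    runs = []
    for i in range(len(send_times) - run_len + 1):
        sent_span = send_times[i + run_len - 1] - send_times[i]
        recv_span = recv_times[i + run_len - 1] - recv_times[i]
        if sent_span > 0 and recv_span < ratio * sent_span:
            runs.append((i, i + run_len))
    return runs

# Acks sent 10 ms apart, but the last four arrive almost simultaneously:
send = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]
recv = [0.05, 0.06, 0.07, 0.071, 0.072, 0.073]
print(compressed_runs(send, recv))  # flags the compressed tail of the run
```

Scanning fixed-length runs keeps the check linear in trace length and yields index ranges that can be fed directly into the outlier filtering discussed above.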
Also, the upper extremes can be dealt with by removing outliers from sender-based measurements.
Compression is present if a group of packets arrives with markedly tighter spacing than that with which it was sent.
Data-packet compression is rarer than ack compression, but it recurs between specific pairs of hosts, which indicates that it happens repeatedly and is sometimes due to specific routers. It is rare enough not to present a problem, but outlier filtering should be applied in measurements.

Delays: queuing time scales
Are there particular timescales at which most queuing variations occur?
Methodology (for one-way transit times, OTTs):
1) Sanitize the trace.
2) For any time interval t, divide it into two halves holding N_l and N_r samples. If either half is empty, or the counts are badly unbalanced (N_l < (1/4)·N_r), discard the interval. Otherwise let M_l and M_r be the medians of the left and right halves, and let Q_t = |M_l − M_r| be the interval's queuing variation.
3) For each connection, find the value of t for which Q_t is greatest.
4) Find the frequency of each t.
5) Normalize by dividing the frequency of t by the number of all t's at least as large as this t.
6) Find the proportion of each t from the numbers obtained in the previous step.

Summary: Internet delay variations occur primarily on time scales of 0.1-1 s, but quite frequently extend out to much larger scales.
[Figure: queuing time scales - II.]

Getting modern
Data collection methodology [Moon99]
• End-to-end data collection.
• Loss measurement by sending probes into the network periodically (periods = 20 ms, 40 ms, 80 ms, 160 ms).
• Both unicast and multicast probes.
• Making sure that the trace segments are stationary, i.e., splitting the traces into 2-hour intervals and checking each for stationarity.
This is done using a moving-average filter (window size 2000 packets) to judge the extent of variation.
• Other non-stationary effects (sudden increases in loss rate, slow decays or increases in loss percentages) are also looked for, and traces exhibiting them are excluded from the analysis.

Trace sanitization
Analyzing the data
• A packet whose sequence number is not recorded is assumed to be lost.
• Under this assumption, the loss data can be represented as a binary time series {x_i}, i = 1, ..., n, where x_i = 1 if probe i is lost and x_i = 0 otherwise.

How to check whether the losses are independent?
• We should check whether the binary time series of the loss data has any dependence.
• To do that we need to learn a little math.
• We know that for an independent process the autocorrelations at all lags are zero. We could try this with our loss-data series.
• But since we have only an observed sample sequence, the sample autocorrelations will not be exactly zero even for a truly independent process; we must instead check whether they stay within the confidence bounds expected under independence.
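The independence check described above - comparing the sample autocorrelations of the binary loss series against the bounds expected for an independent process - can be sketched as follows (a minimal version; the ±1.96/√n band is the standard large-sample 95% bound for an i.i.d. sequence, an assumption not spelled out in this excerpt):

```python
import math
import random

# Sample autocorrelation of a binary loss series {x_i}. For an i.i.d.
# series, lags beyond 0 should fall within +/-1.96/sqrt(n) about 95% of
# the time, so values outside that band suggest dependence (burstiness).

def autocorr(x, lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0:
        return 0.0
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def dependent_lags(x, max_lag=10):
    """Lags whose sample autocorrelation exceeds the i.i.d. 95% band."""
    bound = 1.96 / math.sqrt(len(x))
    return [k for k in range(1, max_lag + 1) if abs(autocorr(x, k)) > bound]

random.seed(1)
iid = [1 if random.random() < 0.05 else 0 for _ in range(5000)]  # independent losses
bursty = [1 if (i % 200) < 5 else 0 for i in range(5000)]        # losses in bursts
print(dependent_lags(iid))     # few or no lags flagged
print(dependent_lags(bursty))  # short lags flagged: losses are correlated
```

The bursty series (five back-to-back losses every 200 probes) is flagged at short lags, while the independent series is not - exactly the kind of evidence that motivates modeling losses with dependence rather than as a Bernoulli process.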

