EE 450 HOMEWORK 2

Chapter 1

Problem 23
Let the first packet be packet 1 and the second packet be packet 2.
If the bottleneck link is the first link, then packet 2 is queued at the first link waiting for the transmission of packet 1, and the packet inter-arrival time at the destination is simply L/Rs.
If the second link is the bottleneck link and both packets are sent back to back, then the second packet arrives at the input queue of the second link before the second link finishes transmitting the first packet. That is,
    L/Rs + L/Rs + dprop < L/Rs + dprop + L/Rc
where the left-hand side is the time needed by the second packet to arrive at the input queue of the second link and the right-hand side is the time needed by the first packet to finish its transmission onto the second link. If we send the second packet T seconds later, there is no queuing delay for the second packet at the second link provided
    L/Rs + L/Rs + dprop + T >= L/Rs + dprop + L/Rc
    L/Rs + T >= L/Rc
    T >= L/Rc - L/Rs
Thus the minimum value of T is L/Rc - L/Rs.

Problem 31
a) 1. Time to send the message from the source host to the first packet switch = (8 x 10^6 bits) / (2 x 10^6 bps) = 4 sec.
   2. With store-and-forward switching, the total time to move the message from the source host to the destination host = 4 sec x 3 hops = 12 sec.
b) 1. Time to send the first packet from the source host to the first packet switch = (1 x 10^4 bits) / (2 x 10^6 bps) = 5 msec.
   2. The time at which the 2nd packet is fully received at the first switch = the time at which the 1st packet is fully received at the second switch = 2 x 5 msec = 10 msec.
c) 1. Time at which the 1st packet is received at the destination host = 5 msec x 3 hops = 15 msec. After this, one packet is received every 5 msec, so the time at which the last (800th) packet is received = 15 msec + 799 x 5 msec = 4.01 sec.
   2. The delay when using message segmentation is significantly less (roughly 1/3 of the delay without it).
d) i. Without message segmentation, if bit errors are not tolerated, a single bit error forces retransmission of the whole message rather than of a single packet.
   ii. Without message segmentation, huge packets are sent into the network. Routers have to accommodate these huge packets, and smaller packets have to queue behind the enormous packets and suffer unfair delays.
e) i. Packets have to be put back in sequence at the destination.
   ii. Message segmentation results in many smaller packets. Since the header size is usually the same for all packets regardless of their size, the total amount of header bytes is larger with message segmentation.

Chapter 2

Problem 7
The successive DNS visits incur RTTs of RTT1, ..., RTTn, so the total time to obtain the IP address is RTT1 + RTT2 + ... + RTTn. Once the IP address is known, one RTT0 elapses to set up the TCP connection and another RTT0 to make the request and receive the object. The total response time is therefore
    RTT0 + RTT0 + RTT1 + RTT2 + ... + RTTn = 2*RTT0 + RTT1 + RTT2 + ... + RTTn.

Problem 8
Assuming that the HTML file references eight very small objects on the same server, the time it takes to complete the request is different for each type of HTTP connection.
a) Non-persistent HTTP with no parallel TCP connections: each object costs an additional 2*RTT0 (one RTT0 to set up the TCP connection and one to request and receive the object), giving
    2*RTT0 + RTT1 + ... + RTTn + 8 x 2*RTT0 = 18*RTT0 + RTT1 + ... + RTTn.
b) Non-persistent HTTP with the browser configured for 6 parallel connections: the eight objects are fetched in two parallel rounds, giving
    2*RTT0 + RTT1 + ... + RTTn + 2 x 2*RTT0 = 6*RTT0 + RTT1 + ... + RTTn.
c) Persistent HTTP with pipelining: all eight objects are requested over the already-open connection in one additional round trip, giving
    2*RTT0 + RTT1 + ... + RTTn + RTT0 = 3*RTT0 + RTT1 + ... + RTTn.
Hence non-persistent HTTP with no parallel TCP connections takes the longest, followed by non-persistent HTTP with the browser configured for 6 parallel connections. The persistent HTTP connection is the quickest, since it can download the HTML file and all referenced objects over a single connection.
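As a rough sanity check of the Problem 8 expressions, the short Python sketch below plugs in made-up RTT values (the problem gives none, so these numbers are only placeholders) and evaluates the three totals derived above; the variable names are my own.

# Sanity check for the Problem 8 response-time expressions.
# The RTT values are arbitrary placeholders, not given in the assignment.
RTT0 = 50                  # ms, round-trip time to the web server (assumed)
dns_rtts = [20, 30, 40]    # ms, RTT1..RTTn for the successive DNS visits (assumed)

dns_time = sum(dns_rtts)   # RTT1 + ... + RTTn
base_page = 2 * RTT0       # TCP setup plus request/response for the HTML file

# a) non-persistent HTTP, no parallel connections: 2*RTT0 per referenced object
non_persistent_serial = dns_time + base_page + 8 * 2 * RTT0

# b) non-persistent HTTP, 6 parallel connections: 8 objects fetched in 2 rounds
non_persistent_parallel = dns_time + base_page + 2 * 2 * RTT0

# c) persistent HTTP with pipelining: one extra RTT0 covers all 8 objects
persistent = dns_time + base_page + RTT0

print(non_persistent_serial, non_persistent_parallel, persistent)
# With these placeholder values: 990 ms, 390 ms, 240 ms,
# matching the ordering argued above (a > b > c).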
Problem 10
This scenario compares non-persistent and persistent HTTP for concurrent downloads. Non-persistent HTTP opens a fresh connection for every object, adding the cost of connection setup and teardown each time; since the link bandwidth is shared evenly across all active connections, parallel instances of non-persistent HTTP do not provide an appreciable improvement in this situation. The computation estimates the time needed to download the base object and all referenced objects, taking into account the transmission rate, the packet sizes, and the object sizes. Persistent HTTP, by contrast, eliminates the connection setup and teardown overhead for each object and allows numerous objects to be transmitted over a single connection: the request for the base object can ask that all referenced objects be sent over that same connection. Because the connection setup no longer has to be repeated for each object, the download time is reduced.

Problem 27
a) If each packet carries L bytes of data plus 5 bytes of header, the total packet size is L + 5 bytes. The data is generated at a constant rate of 128 kbps, so the packetization delay (the time to accumulate the L data bytes) is
    8L / (128 x 10^3) sec = L/16 msec.
b) Using the packetization delay L/16 msec from part (a): for L = 1500 bytes it is 1500/16 = 93.75 msec, while for L = 50 bytes it is 50/16 = 3.125 msec.
c) The total packet size is L + 5 bytes, i.e., (L + 5) x 8 bits, so the store-and-forward delay on a 622 Mbps link is (L + 5) x 8 / (622 x 10^6) sec. For L = 1500 this is (1500 + 5) x 8 / (622 x 10^6) = 19.36 μsec; for L = 50 it is (50 + 5) x 8 / (622 x 10^6) = 0.71 μsec.
d) Compared with large packets, small packets incur a much smaller packetization delay.
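The Problem 27 numbers follow directly from the two formulas above; the short Python sketch below (the function and constant names are my own, not part of the assignment) recomputes them for L = 1500 and L = 50 bytes.

# Recompute the Problem 27 delays for the two packet sizes.
GEN_RATE = 128e3     # bps, constant rate at which the data bytes are generated
LINK_RATE = 622e6    # bps, transmission rate of the outgoing link
HEADER = 5           # header bytes added to each packet

def packetization_delay_ms(L):
    # Time to accumulate L data bytes at 128 kbps, in milliseconds.
    return 8 * L / GEN_RATE * 1e3

def store_and_forward_delay_us(L):
    # Time to transmit one (L + 5)-byte packet onto the 622 Mbps link, in microseconds.
    return 8 * (L + HEADER) / LINK_RATE * 1e6

for L in (1500, 50):
    print(L, packetization_delay_ms(L), store_and_forward_delay_us(L))
# L = 1500: 93.75 ms packetization, ~19.36 us store-and-forward
# L = 50:    3.125 ms packetization, ~0.71 us store-and-forward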

