CS 414 – Multimedia Systems Design
Lecture 29 – Media Server (Part 5)
Klara Nahrstedt
Spring 2011

Administrative
- MP3 is released (see the class website for information)

Outline
- VOD optimization techniques: caching, patching, batching
- Example of a multimedia file system – Symphony
  - Two-level architecture
  - Cello scheduling framework at the disk-management level
  - Buffer subsystem
  - Video module

True Video-On-Demand System
- True VOD: serve thousands of clients simultaneously and allow service at any time (variable access time)
- Goal: minimize the required resource consumption, such as
  - Server bandwidth (disk I/O and network) – amount of data per time unit sent from the server to clients
  - Client bandwidth – network bandwidth that a client must be able to receive
  - Client buffer requirements – amount of data a client must be able to store locally and temporarily
  - Start-up delay – time between issuing a playback request and the start of playback

VOD System Delivery Schemes (to handle a large number of clients)
- Periodic broadcast (data-centered approach)
  - A server channel is dedicated to a video object (movie channel) and broadcasts it periodically
- Scheduled multicast (user-centered approach)
  - The server dedicates channels to individual users
  - When a server channel becomes free, the server selects a batch of clients to multicast to according to some scheduling policy
- Server replication
  - Servers maintaining the same videos are placed in
multiple locations in the network
  - Server selection is a main issue

Caching for Streaming Media
- Caching is a common technique to enhance the scalability of general information dissemination
- Existing caching schemes are not designed for, and do not take advantage of, streaming characteristics
- New caching schemes are needed for streaming media

Techniques for Increasing Server Capacity
- Caching
  - Interval caching
  - Frequency caching
- Key point: in conventional systems, caching is used to improve program performance; in video servers, caching is used to increase server capacity

Caching
- Read-ahead buffering: blocks are read and buffered ahead of the time they are needed
- Early systems assumed separate buffers for each client
- Recent systems assume a global buffer cache, where cached data is shared among all clients

Caching in Media Servers
Source: "Preemptive, but Safe Interval Caching for Real-Time Multimedia Systems," Lee et al., 2003

Interval Caching
- Interval caching exploits the sequential nature of multimedia accesses
- Two streams Si and Sj are defined as consecutive if Si is the stream that next reads the data blocks that have just been read by Sj; such a pair is referred to as the preceding stream (Sj) and the following stream (Si)
- The interval caching scheme exploits the temporal locality of accesses to the same multimedia object by caching the intervals between successive streams (preceding and following)
- The interval caching policy orders all consecutive pairs in terms of increasing memory requirements
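The ordering-and-allocation step of this policy can be sketched roughly as follows (a minimal illustration, not from the lecture; the interval representation, field names, and the memory-requirement formula "interval length × play-out rate" are assumptions based on the description above):

```python
# Illustrative sketch of interval caching's greedy allocation.
# Each consecutive (preceding, following) stream pair forms an interval whose
# memory requirement is assumed to be interval length (sec) * play-out rate (B/s).

def allocate_interval_cache(pairs, cache_capacity):
    """pairs: list of (pair_id, interval_len_sec, playout_rate_Bps) tuples.
    Returns the set of pair_ids whose intervals are admitted to the cache."""
    # Memory needed to hold one interval entirely in the buffer cache
    need = [(length * rate, pid) for pid, length, rate in pairs]
    need.sort()                      # order pairs by increasing memory requirement
    cached, free = set(), cache_capacity
    for mem, pid in need:            # greedily admit as many pairs as possible
        if mem <= free:
            cached.add(pid)
            free -= mem
    return cached

# Example: three consecutive pairs at 4 MB/s competing for a 100 MB cache;
# the two smallest intervals (B: 20 MB, A: 40 MB) fit, C (120 MB) does not.
pairs = [("A", 10, 4e6), ("B", 5, 4e6), ("C", 30, 4e6)]
print(allocate_interval_cache(pairs, 100e6))
```

Because smaller intervals are admitted first, the cache serves as many following streams as possible from memory, which is the stated goal of the policy.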
- It then allocates memory to as many consecutive pairs as possible

Interval Caching
- The memory requirement of an interval is proportional to the length of the interval and the play-out rate of the streams involved
- When an interval is cached, the following stream does not have to go to disk, since all necessary data are in the cache
- Algorithm: order intervals by increasing space requirement – a smaller interval implies a shorter time to re-access
- Optimal for homogeneous clients
- Dynamically adapts to changes in workload

Frequency Caching
- Typical video accesses follow the 80-20 rule (i.e., 80% of requests access 20% of the video objects)
- Cache the most frequently accessed video objects
- Requires large buffer space
- Not dynamic: frequency determination is based on past history or future estimates (Zipf distribution)

Taxonomy of Cache Replacement Policies
- Recency of access: locality of reference
- Frequency-based: hot sets with independent accesses
- Optimal: requires knowledge of the time of next access
- Size-based: different-size objects
- Miss-cost-based: different times to fetch objects
- Resource-based: resource usage of different object classes

Patching
- Stream tapping, or patching, is a technique to support true VOD
- Patching assumes multicast transmission; clients arriving late miss the start of the main transmission
- These late clients immediately receive the main transmission and store it temporarily in a buffer
- In parallel, each client connects to the server via unicast and receives (patches in) the missing start of the video, which can be played immediately

Types of Patching
- Greedy patching
  - A new main transmission is started only when the old one has reached the end of the video
  - Clients arriving in between create only patching transmissions
- Grace patching
  - If a new client arrives and the ongoing main transmission is at least T seconds old, the server starts a new main transmission that plays the whole video from the start

Batching
- Batching groups clients that request the same video object and arrive within a short window of time, or groups them through adaptive piggybacking
- Increasing the batching window increases the number of clients served simultaneously, but also increases the reneging probability
  - Reneging time – the amount of time after which a client leaves the VOD service without delivery of the video
- Increasing the minimum wait time increases client reneging
- Performance metrics: latency, reneging probability, and fairness
- Policies: FCFS, MQL (Maximum Queue Length), FCFS-n

Batching Policies
- FCFS: schedules the batch whose first client arrived earliest, with the aim of achieving some level of fairness
- Maximum
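The batch-selection rules can be contrasted in a short sketch (illustrative only: the per-video queue representation is an assumption, and the MQL rule shown is inferred from the policy's name, Maximum Queue Length, since its description is cut off here):

```python
# Illustrative sketch of two batching policies over per-video request queues.
# queues maps a video id to the (non-empty) list of arrival times of clients
# currently waiting for that video.

def fcfs_pick(queues):
    """FCFS: serve the batch whose first client arrived earliest."""
    return min(queues, key=lambda vid: min(queues[vid]))

def mql_pick(queues):
    """MQL (assumed from the name): serve the batch with the most waiting clients."""
    return max(queues, key=lambda vid: len(queues[vid]))

# Example: arrival times (seconds) of pending requests per video
queues = {"movie1": [1.0, 9.0], "movie2": [2.0, 3.0, 8.0]}
print(fcfs_pick(queues))   # movie1 – its first client arrived at t=1.0
print(mql_pick(queues))    # movie2 – it has the longest queue (3 clients)
```

The example makes the trade-off above concrete: FCFS favors the longest-waiting client (fairness), while MQL favors the largest batch (throughput) at the risk of starving unpopular videos.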