Slide outline:
Slide 1
Optical Networks Change the Current Pyramid
New Networking Paradigm
The "Network" is a Prime Resource for Large-Scale Distributed Systems
From Super-computer to Super-network
Data Schlepping Scenario
Limitations of Solutions with Current Network Technology
Problem Statement
BIRN Network Limitations
Our proposed Solution
Slide 11
Goals of Our Investigation
Example: Lightpath Scheduling
Scheduling Example - Reroute
Generalization and Future Direction for Research
Goals
What used to be a bit-blasting race
Enabling new degrees of App/Net coupling
Teamwork
Slide 20
Slide 21
Slide 22
Conclusion
Slide 24
Slide 25
Bandwidth is nothing without control
Why are we here today?
Big Disk Availability
Bandwidth is Becoming Commodity
Technology Composition
Slide 31
Underlay Optical Networks
Service Composition
ASF with Leased pipes
Summary
Improvements in Large-Area Networks
Slide 37
Example: lambdaCAD (CO2 meets Grids)
End-to-end Nexus via DRAC
What happened if:
Slide 41

Slide 1
Tal [email protected]
www.nortel.com/drac
Advanced Technology Research, Nortel Networks
Pile of selected Slides, August 18th, 2005

Slide 2: Optical Networks Change the Current Pyramid
[Chart: George Stix, Scientific American, January 2001 — DWDM capacity growing ×10, a fundamental imbalance between computation and communication]

Slide 3: New Networking Paradigm
> Great vision
• LambdaGrid is one step toward this concept
> LambdaGrid
• A novel service architecture
• Lambda as a scheduled service
• Lambda as a prime resource, like storage and computation
• Changes our current system assumptions
• Potentially opens new horizons

"A global economy designed to waste transistors, power, and silicon area -- and conserve bandwidth above all -- is breaking apart and reorganizing itself to waste bandwidth and conserve power, silicon area, and transistors." -- George Gilder, Telecosm (2000) ☺

Slide 4: The "Network" is a Prime Resource for Large-Scale Distributed Systems
> An integrated SW system provides the "glue"
> Dynamic optical network as a fundamental Grid service in data-intensive Grid applications: to be scheduled, to be managed
and coordinated to support collaborative operations
[Diagram: Person, Instrumentation, Storage, Visualization, Computation, and Network as coordinated prime resources]

Slide 5: From Super-computer to Super-network
> In the past, computer processors were the fastest part
• peripheral bottlenecks
> In the future, optical networks will be the fastest part
• Computers, processors, storage, visualization, and instrumentation become the slower "peripherals"
> eScience cyber-infrastructure focuses on computation, storage, data, analysis, and workflow
• The network is vital for better eScience
> How can we improve the way we do eScience?

Slide 6: Data Schlepping Scenario
Mouse operation:
> The "BIRN Workflow" requires moving massive amounts of data:
• The simplest service: just copy from a remote DB to local storage at a mega-compute site
• Copy multi-terabytes (10-100 TB) of data
• Store first, compute later; not real time, a batch model
Mouse network limitations:
• Needs to copy ahead of time
• L3 networks can't handle these amounts effectively, predictably, in a short time window
• L3 networks provide full connectivity -- a major bottleneck
• Apps optimized to conserve bandwidth and waste storage
• The network does not fit the "BIRN Workflow" architecture

Slide 7: Limitations of Solutions with Current Network Technology
> BIRN networking is unpredictable and a major bottleneck, especially over the WAN; it limits the type, manner, and data sizes of biomedical research and prevents true Grid Virtual Organization (VO) research collaborations
> The network model doesn't fit the "BIRN Workflow" model; the network is not an integral resource of the BIRN cyber-infrastructure

Slide 8: Problem Statement
> Problems -- BIRN Mouse often:
• requires interaction and cooperation of resources distributed over many heterogeneous systems at many locations;
• requires analyses of large amounts of data (order of terabytes);
• requires the transport of large-scale data;
• requires sharing of data;
• requires support for the workflow cooperation model
Q: Do we need a new network abstraction?

Slide 9: BIRN Network Limitations
> Optimized to conserve bandwidth and waste storage
• Geographically dispersed data
• Data can easily scale up 10-100 times
> L3 networks can't handle multi-terabytes efficiently and cost-effectively
> The network does not fit the "BIRN Workflow" architecture
• Collaboration and information sharing are hard
> Mega-computation: not possible to move the computation to the data (instead, data moves to the computation site)
> Not interactive research: must first copy, then analyze
• Analysis is local, but with strong geographic limitations
• Don't know ahead of time where the data is
• Can't navigate the data interactively or in real time
• Can't "webify" the information of large volumes
> No cooperation/interaction between the storage and network middleware(s)

Slide 10: Our proposed Solution
> Switching technology:
• Lambda switching for data-intensive transfer
> New abstraction:
• The network resource encapsulated as a Grid service
> New middleware service architecture:
• The LambdaGrid service architecture

Slide 11: Our proposed Solution
> We propose a LambdaGrid service architecture that interacts with the BIRN cyber-infrastructure and overcomes BIRN's data limitations efficiently and effectively by:
• treating the "network" as a primary resource, just like "storage" and "computation"
• treating the "network" as a scheduled resource
• relying upon a massive, dynamic transport infrastructure: the dynamic optical network

Slide 12: Goals of Our Investigation
> Explore a new type of infrastructure that manages codependent storage and network resources
> Explore dynamic wavelength switching based on new optical technologies
> Explore protocols for managing dynamically provisioned wavelengths
> Encapsulate "optical network resources" into the Grid services framework to support dynamically provisioned, data-intensive transport services
> Explicitly represent future scheduling in the data and network resource management model
> Support a new set of application services that can intelligently
schedule/re-schedule/co-schedule resources
> Provide large-scale data transfer among multiple geographically distributed data locations, interconnected by paths with different attributes
> Provide inexpensive access to advanced computation capabilities and extremely large data sets

Slide 13: Example: Lightpath Scheduling
> A request for 1/2 hour between 4:00 and 5:30 on Segment D is granted to User W at 4:00
> A new request arrives from User X for the same segment for 1
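The lightpath-scheduling example above can be sketched as flexible-window reservation: each request carries a duration plus an acceptable time window, and the scheduler slides it along the segment's timeline until it fits next to existing bookings. All names here are illustrative, and since the slide's second request is cut off mid-sentence, User X is assumed, purely for illustration, to ask for one hour:

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    user: str
    start: float      # hours on a 24h clock, e.g. 4.0 == 4:00
    duration: float   # hours
    win_lo: float     # earliest acceptable start
    win_hi: float     # latest acceptable end

def overlaps(a: Reservation, b: Reservation) -> bool:
    return a.start < b.start + b.duration and b.start < a.start + a.duration

def try_place(res: Reservation, booked: list, step: float = 0.25) -> bool:
    """Slide `res` through its window in `step`-hour increments
    until it fits alongside the existing bookings on the segment."""
    t = res.win_lo
    while t + res.duration <= res.win_hi:
        res.start = t
        if not any(overlaps(res, other) for other in booked):
            booked.append(res)
            return True
        t += step
    return False

segment_d = []
# User W: 1/2 hour anywhere between 4:00 and 5:30 -> granted at 4:00
try_place(Reservation("W", 4.0, 0.5, 4.0, 5.5), segment_d)
# User X: assumed 1 hour on the same segment -> scheduler finds 4:30
try_place(Reservation("X", 4.0, 1.0, 4.0, 5.5), segment_d)
print([(r.user, r.start) for r in segment_d])  # [('W', 4.0), ('X', 4.5)]
```

The "Scheduling Example - Reroute" slide listed in the outline suggests the real scheduler also considers moving an existing reservation (W still fits anywhere in its window) or rerouting over another segment when a new request cannot be placed as-is; that step is omitted from this sketch.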