SnapMirror®: File System Based Asynchronous Mirroring for Disaster Recovery

Contents

Abstract
1 Introduction
  1.1 Outline for remainder of paper
  1.2 Requirements for Disaster Recovery
  1.3 Recovering from Off-line Data
  1.4 Remote Mirroring
2 Related Work
3 SnapMirror Design and Implementation
  3.1 Snapshots and the Active Map File
  3.2 SnapMirror Implementation
    3.2.1 Initializing the Mirror
    3.2.2 Block-Level Differences and Update Transfers
      Figure 1. SnapMirror's use of snapshots to identify blocks for transfer. SnapMirror uses a base ...
    3.2.3 Disaster Recovery and Aborted Transfers
    3.2.4 Update Scheduling and Transfer Rate Throttling
  3.3 SnapMirror Advantages and Limitations
4 Data Reduction through Asynchrony
  4.1 Tracing environment
    Table 1. Summary data for the traced file systems. We collected 24 hours of traces of block allo...
  4.2 Results
    Figure 2. Percentage of written blocks transferred by SnapMirror vs. update interval. These grap...
    Figure 3. Percentage of written blocks transferred with and without use of the active map to fil...
5 SnapMirror vs. Asynchronous Logical Mirroring
  5.1 Experimental Setup
    Table 2. Logical replication vs. SnapMirror incremental update performance. We measured incremen...
  5.2 Results
    Figure 4. Logical replication vs. SnapMirror incremental update times. By avoiding directory and...
6 SnapMirror on a loaded system
  6.1 Results
    Table 3. SnapMirror Update Interval Impact on System Resources. During SFS-like loads, resource ...
    Figure 5. SnapMirror Update Interval vs. NFS response time. We measured the effect of SnapMirror...
7 Conclusion
8 Acknowledgments
9 References

USENIX Association
Proceedings of the FAST 2002 Conference on File and Storage Technologies
Monterey, California, USA
January 28-30, 2002
THE ADVANCED COMPUTING SYSTEMS ASSOCIATION

© 2002 by The USENIX Association. All Rights Reserved.
For more information about the USENIX Association: Phone: 1 510 528 8649; FAX: 1 510 548 5738; Email: [email protected]; WWW: http://www.usenix.org
Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein.

Abstract

Computerized data has become critical to the survival of an enterprise. Companies must have a strategy for recovering their data should a disaster such as a fire destroy the primary data center. Current mechanisms offer data managers a stark choice: rely on affordable tape but risk the loss of a full day of data and face many hours or even days to recover, or have the benefits of a fully synchronized on-line remote mirror, but pay steep costs in both write latency and network bandwidth to maintain the mirror. In this paper, we argue that asynchronous mirroring, in which batches of updates are periodically sent to the remote mirror, can let data managers find a balance between these extremes. First, by eliminating the write latency issue, asynchrony greatly reduces the performance cost of a remote mirror. Second, by storing up batches of writes, asynchronous mirroring can avoid sending deleted or overwritten data and thereby reduce network bandwidth requirements. Data managers can tune the update frequency to trade network bandwidth against the potential loss of more data. We present SnapMirror, an asynchronous mirroring technology that leverages file system snapshots to ensure the consistency of the remote mirror and optimize data transfer.
We use traces of production filers to show that even updating an asynchronous mirror every 15 minutes can reduce data transferred by 30% to 80%. We find that exploiting file system knowledge of deletions is critical to achieving any reduction for no-overwrite file systems such as WAFL and LFS. Experiments on a running system show that using file system metadata can reduce the time to identify changed blocks from minutes to seconds compared to purely logical approaches. Finally, we show that using SnapMirror to update every 30 minutes increases the response time of a heavily loaded system by only 22%.

1 Introduction

As reliance on computerized data storage has grown, so too has the cost of data unavailability. A few hours of downtime can cost from thousands to millions of dollars, depending on the size of the enterprise and the role of the data. With increasing frequency, companies are instituting disaster recovery plans to ensure appropriate data availability in the event of a catastrophic failure or disaster that destroys a site (e.g., flood, fire, or earthquake). It is relatively easy to provide redundant server and storage hardware to protect against the loss of physical resources. Without the data, however, the redundant hardware is of little use.

The problem is that current strategies for data protection and recovery either offer inadequate protection or are too expensive in performance and/or network bandwidth. Tape backup and restore is the traditional approach. Although favored for its low cost, restoring from a nightly backup is too slow, and the restored data is up to a day old. Remote synchronous and semi-synchronous mirroring are more recent alternatives. Mirrors keep backup data on-line and fully synchronized with the primary store, but they do so at a high cost in performance (write latency) and network bandwidth. Semi-synchronous mirrors can reduce the write-latency penalty, but can result in inconsistent, unusable data unless write ordering across the entire data set, not just within one storage device, is guaranteed. Data managers are thus forced to choose between two extremes: synchronized at great expense, or affordable with a day of data loss.

In this paper, we show that by letting a mirror volume lag behind the primary volume it is possible to substantially reduce the performance and network costs of maintaining a mirror while bounding the amount of data loss. The greater the lag, the greater the data loss, but the lower the cost of maintaining the mirror. Such asynchronous mirrors let data managers tune their systems to strike the right balance between potential data loss and cost.

We present SnapMirror, a technology that implements asynchronous mirrors on Network Appliance filers. SnapMirror periodically transfers self-consistent snapshots of the data from a source volume to the destination volume. The mirror is on-line, so disaster recovery ...
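The core of the approach, developed in detail in Section 3, is to compare the block-allocation maps of two snapshots and ship only the blocks that are new since the last transfer. The sketch below is a simplified, hypothetical illustration of that idea, not the actual SnapMirror code: it assumes a no-overwrite file system (such as WAFL), in which a modified block is always written to a new block number, and the names Snapshot, send_block, and incremental_update are invented for the example.

```python
# Illustrative sketch only: snapshot-to-snapshot block differencing under a
# no-overwrite assumption. All names here are hypothetical, not SnapMirror APIs.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class Snapshot:
    """A point-in-time image of a volume: its allocated-block map and block contents."""
    allocated: Set[int] = field(default_factory=set)        # "active map": block numbers allocated in this snapshot
    blocks: Dict[int, bytes] = field(default_factory=dict)  # block number -> block contents


def incremental_update(base: Snapshot, new: Snapshot,
                       send_block: Callable[[int, bytes], None]) -> int:
    """Send only the blocks needed to advance the mirror from `base` to `new`;
    return the number of blocks transferred."""
    transferred = 0
    for blkno in sorted(new.allocated):
        if blkno in base.allocated:
            # Allocated in both snapshots: under the no-overwrite assumption the
            # block is unchanged and already present on the mirror, so skip it.
            continue
        send_block(blkno, new.blocks[blkno])
        transferred += 1
    # Blocks allocated in `base` but not in `new` were freed (deleted or
    # superseded) in the interim; the mirror simply marks them free, no data moves.
    return transferred


# Example: one block survives, one is freed, one is newly written.
if __name__ == "__main__":
    base = Snapshot(allocated={1, 2}, blocks={1: b"a", 2: b"b"})
    new = Snapshot(allocated={1, 3}, blocks={1: b"a", 3: b"c"})   # block 2 freed, block 3 new
    sent = incremental_update(base, new, lambda n, data: print(f"send block {n}"))
    print(f"{sent} block(s) transferred")   # only block 3 is sent
```

On each update interval the source would take a fresh snapshot, run something like incremental_update(base, new, send), and retain the new snapshot as the base for the next transfer. The longer the interval, the more short-lived blocks are freed before they are ever sent, which is the bandwidth saving quantified in Section 4.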

