Storage Data Protection: RAID Technology and Dynamic Disk Pools (DDP)
SANtricity RAID Protection

[Diagram: host LUNs map to volumes, which reside on volume groups built from SSDs]

- Volume groups
  - RAID 0, 1, 10, 5, and 6
  - RAID levels can be intermixed
  - Various group sizes
SANtricity RAID Levels

- RAID 0 – striped: data blocks are striped across all drives
- RAID 1 (10) – mirrored and striped: data blocks are mirrored across drive pairs
- RAID 5 – data disks and rotating parity: block-level striping with a distributed parity block in each stripe
- RAID 6 (P+Q) – data disks and rotating dual parity: block-level striping with two distributed parity blocks (P and Q) in each stripe

[Diagram: stripe-by-stripe layouts in which the Parity block (RAID 5) or the Parity and Q Parity blocks (RAID 6) rotate across the drives]
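To make the rotating-parity layouts concrete, here is a minimal Python sketch (illustrative only, not SANtricity code) of a left-symmetric RAID 5 layout: the parity block steps one drive to the left on each successive stripe, and parity is the bytewise XOR of the stripe's data blocks, which is what allows any single lost block to be rebuilt from the survivors. RAID 6 adds a second, independently computed Q block (typically Reed-Solomon), omitted here for brevity.

```python
from functools import reduce

def raid5_layout(num_drives: int, num_stripes: int) -> list[list[str]]:
    """Left-symmetric RAID 5: parity rotates one drive left per stripe."""
    layout = []
    for stripe in range(num_stripes):
        parity_drive = (num_drives - 1 - stripe) % num_drives
        layout.append(["P" if d == parity_drive else "D"
                       for d in range(num_drives)])
    return layout

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Bytewise XOR of equal-length blocks (RAID 5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Parity placement for a 5-drive group over 5 stripes:
for row in raid5_layout(num_drives=5, num_stripes=5):
    print(" ".join(row))           # P moves one column left each stripe

# Rebuild demo: lose one data block, recover it from survivors + parity.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]
parity = xor_blocks(data)
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]        # XOR recovers the missing block
```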
Traditional RAID Volumes

- Disk drives are organized into RAID groups
- Volumes reside across the drives in a RAID group
- Performance is dictated by the number of spindles
- Hot spares sit idle until a drive fails
- Spare capacity is "stranded"

Example: a 24-drive system with two 10-drive (8+2) groups and 4 hot spares

Traditional RAID—Drive Failure

- Data is reconstructed onto a hot spare
- A single drive is responsible for all writes (a bottleneck)
- Reconstruction happens linearly (one stripe at a time)
- All volumes in that group are significantly impacted

Example: a 24-drive system with two 10-drive (8+2) groups and 4 hot spares
The Problem: The Large-Disk-Drive Challenge

- Staggering amounts of data to store, protect, and access
  - Some sites have thousands of large-capacity (4 TB+) drives
  - Drive failures are continual, particularly with NL-SAS drives
- Drive transfer rates have not kept up with capacities
  - Larger drives mean longer rebuilds, anywhere from 10+ hours to several days
- Production I/O is impacted during rebuilds
  - By up to 40% in many cases
- As drive capacities continue to grow, traditional RAID protection is pushed to its limit
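The "10+ hours to several days" range follows from simple arithmetic: a traditional rebuild cannot finish faster than the single spare drive can absorb writes. A back-of-the-envelope sketch in Python, using illustrative (assumed, not measured) sustained write rates:

```python
def rebuild_hours(capacity_tb: float, write_mb_per_s: float) -> float:
    """Lower bound on rebuild time when one spare absorbs every write."""
    capacity_mb = capacity_tb * 1_000_000   # decimal TB -> MB
    return capacity_mb / write_mb_per_s / 3600

# Assumed rates for illustration; the rebuild also competes with
# production I/O, so real-world times typically exceed these bounds.
print(f"{rebuild_hours(4, 100):.1f} h")   # ~11.1 h: 4 TB drive, 100 MB/s
print(f"{rebuild_hours(4, 25):.1f} h")    # ~44.4 h: only 25 MB/s left over
```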
Dynamic Disk Pools

[Diagram: host LUNs map to volumes, which reside on a disk pool built from SSDs]

- Dynamic disk pools
  - Minimum of 11 SSDs per pool
  - Maximum of 120 SSDs per pool
  - Up to 10 disk pools per system
- A large pool of spindles for every volume reduces hot spots
- Each volume is spread across all drives in the pool
- Dynamic distribution and redistribution is a nondisruptive background operation
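NetApp does not publish the exact placement algorithm, so the following is a hypothetical toy model of the idea behind these bullets: each 8+2 stripe is placed on a pseudo-random 10-drive subset of the pool, so every drive carries a share of data, parity, and spare capacity, and a drive failure touches only some stripes while the repair work fans out across all survivors.

```python
import random
from collections import Counter

POOL_DRIVES = 24       # drives in the pool (DDP minimum is 11)
STRIPE_WIDTH = 10      # 8 data + 2 parity pieces per stripe (8+2)
NUM_STRIPES = 10_000

random.seed(0)
# Place every stripe on a pseudo-random 10-drive subset of the pool.
stripes = [random.sample(range(POOL_DRIVES), STRIPE_WIDTH)
           for _ in range(NUM_STRIPES)]

# Balance: every drive ends up holding roughly the same number of pieces.
load = Counter(d for s in stripes for d in s)
print("pieces per drive:", min(load.values()), "-", max(load.values()))

# Drive 0 fails: only the stripes that touch it need repair, and the
# repair is spread over 23 survivors instead of one dedicated spare.
affected = [s for s in stripes if 0 in s]
print(f"stripes to repair: {len(affected)} of {NUM_STRIPES}")
```

With 10-wide stripes on 24 drives, roughly 10/24 (about 42%) of the stripes touch any given drive, and no single drive becomes the rebuild bottleneck.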
Maintain SLAs During Drive Failure

- Stay in the green
- The performance drop following a drive failure is minimized
- Dynamic rebalance completes up to 8x faster than traditional RAID in random environments and up to 2x faster in sequential environments
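The 8x and 2x figures are NetApp's claims and depend on the workload, but the scaling intuition is straightforward: a traditional rebuild funnels every reconstructed byte onto one spare, while a pool spreads the same bytes across all surviving drives. A minimal sketch (drive capacity and pool size are assumptions for illustration):

```python
def rebuild_gb_per_drive(failed_capacity_gb: float, writers: int) -> float:
    """GB each participating drive must write to restore redundancy."""
    return failed_capacity_gb / writers

# Traditional RAID group: the single hot spare writes everything.
print(rebuild_gb_per_drive(4000, writers=1))              # 4000.0 GB

# 24-drive disk pool: 23 survivors share the writes, ~1/23 the work each.
print(round(rebuild_gb_per_drive(4000, writers=23), 1))   # 173.9 GB
```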
Traditional RAID Technology vs. Innovative Dynamic Disk Pools

- Balanced: an algorithm randomly spreads data across all drives, balancing the workload and rebuilding when necessary
- Easy: no RAID groups or idle spares to manage; spare capacity is active on all drives

"With Dynamic Disk Pools, you can add or lose disk drives without impact, reconfiguration, or headaches."