The chunk-size deserves an explanation. You can never write completely in parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk; in fact, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest ``atomic'' amount of data that can be written to the devices. A write of 16 KB with a chunk-size of 4 KB will cause the first and the third 4 KB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk-size.

Chunk-sizes can be specified for all RAID levels except Linear mode. For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.

The argument to the chunk-size option in /etc/raidtab specifies the chunk-size in kilobytes, so ``4'' means ``4 KB''.

3.8.1. RAID-0

Data is written ``almost'' in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially. If you specify a 4 KB chunk-size and write 16 KB to an array of three disks, the RAID system will write 4 KB to disks 0, 1 and 2 in parallel, then the remaining 4 KB to disk 0.

A 32 KB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the contents of the filesystem you put on it, and many other factors. Experiment with it to get the best performance.

3.8.2. RAID-1

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads, however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, reads can be done in a parallel, RAID-0-like manner.

3.8.3. RAID-4

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well. The chunk-size is the size of the parity blocks. If one byte is written to a RAID-4 array, then chunk-size bytes will be read from the N-1 data disks, the parity information will be calculated, and chunk-size bytes will be written to the parity disk.

The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.

3.8.4. RAID-5

On RAID-5 the chunk-size has exactly the same meaning as in RAID-4.

A reasonable chunk-size for RAID-5 is 128 KB, but as always, you may want to experiment with this.

Also see the section on special options for mke2fs; this affects RAID-5 performance.

3.9. Options for mke2fs

There is a special option available when formatting RAID-4 or -5 devices with mke2fs. The -R stride=nn option allows mke2fs to place the different ext2-specific data structures intelligently on the RAID device.

If the chunk-size is 32 KB, it means that 32 KB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with a 4 KB block-size, we realize that there will be eight filesystem blocks in one array chunk. We can pass this information on to the mke2fs utility when creating the filesystem:

  mke2fs -b 4096 -R stride=8 /dev/md0

RAID-{4,5} performance is severely influenced by this option.

I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction.
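By way of illustration only (the RAID level, disk count and device names below are invented for the example, not taken from this HOWTO), here is how the chunk-size in /etc/raidtab and the stride value given to mke2fs fit together. With a 32 KB chunk and a 4 KB ext2 block-size, stride = 32 / 4 = 8:

  # /etc/raidtab -- hypothetical four-disk RAID-5 entry
  raiddev /dev/md0
          raid-level              5
          nr-raid-disks           4
          nr-spare-disks          0
          persistent-superblock   1
          parity-algorithm        left-symmetric
          chunk-size              32
          device                  /dev/sda1
          raid-disk               0
          device                  /dev/sdb1
          raid-disk               1
          device                  /dev/sdc1
          raid-disk               2
          device                  /dev/sdd1
          raid-disk               3

  # stride = chunk-size / ext2 block-size = 32 KB / 4 KB = 8
  mke2fs -b 4096 -R stride=8 /dev/md0

If you change either the chunk-size or the filesystem block-size, recompute the stride accordingly.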
3.10. Autodetection

Autodetection allows the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done.

This requires several things:

1. You need autodetection support in the kernel; check that it is enabled.

2. You must have created the RAID devices using the persistent-superblock option.

3. The partition types of the devices used in the RAID must be set to 0xFD (use fdisk and set the type to ``fd'').

NOTE: Be sure that your RAID is NOT RUNNING before changing the partition types. Use raidstop /dev/md0 to stop the device.

If you have set up 1, 2 and 3 from above, autodetection should work. Try rebooting. When the system comes up, cat'ing /proc/mdstat should tell you that your RAID is running.

During boot, you could see messages similar to these:

  Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512 bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
  Oct 22 00:51:59 malthe kernel: Partition check:
  Oct 22 00:51:59 malthe kernel: sda: sda1 sda2 sda3 sda4
  Oct 22 00:51:59 malthe kernel: sdb: sdb1 sdb2
  Oct 22 00:51:59 malthe kernel: sdc: sdc1 sdc2
  Oct 22 00:51:59 malthe kernel: sdd: sdd1 sdd2
  Oct 22 00:51:59 malthe kernel: sde: sde1 sde2
  Oct 22 00:51:59 malthe kernel: sdf: sdf1 sdf2
  Oct 22 00:51:59 malthe kernel: sdg: sdg1 sdg2
  Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
  Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
  Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
  Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
  Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
  Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
  Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
  Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
  Oct 22 00:51:59 malthe kernel: bind<sde1,4>
  Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
  Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
  Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
  Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
  Oct 22 00:51:59 malthe kernel: autorunning md0
  Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
  Oct 22 00:51:59 malthe kernel: now!
  Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean -- starting background reconstruction

This is output from the autodetection of a RAID-5 array that was not cleanly shut down (e.g. the machine crashed). Reconstruction is automatically initiated. Mounting this device is perfectly safe, since reconstruction is transparent and all data are consistent (it's only the parity information that is inconsistent, but that isn't needed until a device fails).

Autostarted devices are also automatically stopped at shutdown. Don't worry about init scripts; just use the /dev/md devices as you would any other /dev/sd or /dev/hd devices.

Yes, it really is that easy.

You may want to look in your init-scripts for any raidstart/raidstop commands. These are often found in the standard RedHat init scripts. They are used for old-style RAID and have no use in new-style RAID with autodetection. Just remove the lines, and everything will be just fine.
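To make the partition-type requirement above concrete, the procedure looks roughly like this; /dev/sdb and /dev/sdc are placeholder member disks, and raidstop comes from the raidtools package:

  raidstop /dev/md0     # the array must NOT be running while you change types
  fdisk /dev/sdb        # press `t', select the partition, enter `fd', then `w'
  fdisk /dev/sdc        # repeat for every disk that is part of the array
  # reboot; the kernel should now autostart the array
  cat /proc/mdstat      # verify that md0 is listed as running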
3.11. Booting on RAID

More detail will be added here in the near future. The really, really short nano-howto goes:

o Put two identical disks in a system.

o Put in a third disk, on which you install a complete Linux system.

o Now set up the two identical disks, each with a /boot, swap and / partition.

o Configure RAID-1 on the two / partitions.

o Copy the entire installation from the third disk to the RAID (just using tar, no raw copying!).

o Set up /boot on the first disk. Run lilo. You probably want to set the root fs device to be 900, since LILO doesn't really handle the /dev/md devices; /dev/md0 is major 9, minor 0, so root=900 should work.

o Set up /boot on the second disk just like you did on the first.

o In the BIOS, in the case of IDE disks, set the disk types to autodetect. In the fstab, make sure you are not mounting any of the /boot filesystems. You don't need them, and in case of a device failure you would just get stuck in the boot sequence trying to mount a non-existent device.

o Try booting on just one of the disks, then try booting on the other disk only. If this works, you're up and running.

o Document what you did, mail it to me, and I'll put it in here.

3.12. Pitfalls

Never NEVER never re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is part of a RAID, stop the array first, then repartition.

It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s, which is less than many disks can do alone today. Putting six such disks on the bus will of course not give you the expected performance boost.

More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.

If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab.

If a RAID-5 fails to reconstruct after a disk was removed and re-inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first ``device ...'' and ``raid-disk ...'' pair to the bottom of the array description in the raidtab file.
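As a sketch of the persistent-superblock fix mentioned above: the commands below come from the raidtools package, the device name is a placeholder, and mkraid may refuse to run without --really-force when it finds existing data, so double-check the raidtab before re-creating anything:

  raidstop /dev/md0     # stop the array if it is still (partially) running
  vi /etc/raidtab       # add ``persistent-superblock 1'' to the md0 entry
  mkraid /dev/md0       # re-create the array as described in /etc/raidtab
  cat /proc/mdstat      # the array should now be up, and should survive a stop/start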
4. Credits

The following people contributed to the creation of this documentation:

o Ingo Molnar
o Jim Warren
o Louis Mandelstam
o Allan Noah
o Yasunori Taniike
o The Linux-RAID mailing list
o The ones I forgot, sorry :)

Please submit corrections, suggestions etc. to the author. It's the only way this HOWTO can improve.