or re-start it using the <VERB>raidstop /dev/md0</VERB> or <VERB>raidstart /dev/md0</VERB> commands.

<P>Instead of putting these into init-files and rebooting a zillion times to make that work, read on, and get autodetection running.

<P><SECT1>The Persistent Superblock

<P>Back in ``The Good Old Days'' (TM), the raidtools would read your &raidtab; file, and then initialize the array. However, this would require that the filesystem on which &raidtab; resided was mounted. This is unfortunate if you want to boot on a RAID.

<P>Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the &fstab; file as usual, but would have to be mounted from the init-scripts.

<P>The persistent superblocks solve these problems. When an array is initialized with the <TT>persistent-superblock</TT> option in the &raidtab; file, a special superblock is written at the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.

<P>You should, however, still maintain a consistent &raidtab; file, since you may need this file for later reconstruction of the array.

<P>The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the <BF>Autodetection</BF> section.

<P><SECT1>Chunk sizes

<P>The chunk-size deserves an explanation. You can never write completely in parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk; actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest ``atomic'' mass of data that can be written to the devices.
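Both options discussed above live in the &raidtab; file. A minimal sketch of such a file for a two-disk RAID-0 with a 4 KB chunk-size (device names and geometry here are illustrative assumptions, not taken from any particular system):

```
raiddev /dev/md0
    raid-level            0
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/sdb1
    raid-disk             0
    device                /dev/sdc1
    raid-disk             1
```

With persistent-superblock set to 1, initializing the array writes the superblock onto each member disk, so the kernel can later assemble the array without consulting this file.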
A write of 16 KB with a chunk-size of 4 KB will cause the first and the third 4 KB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that primarily hold small files may benefit more from a smaller chunk-size.

<P>Chunk sizes can be specified for all RAID levels except the Linear mode.

<P>For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.

<P>The argument to the chunk-size option in &raidtab; specifies the chunk-size in kilobytes. So ``4'' means ``4 KB''.

<P><SECT2>RAID-0

<P>Data is written ``almost'' in parallel to the disks in the array. Actually, <TT>chunk-size</TT> bytes are written to each disk, serially.

<P>If you specify a 4 KB chunk-size, and write 16 KB to an array of three disks, the RAID system will write 4 KB to disks 0, 1 and 2, in parallel, then the remaining 4 KB to disk 0.

<P>A 32 KB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the filesystem you put on it, and many other factors. Experiment with it, to get the best performance.

<P><SECT2>RAID-1

<P>For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads, however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, reads can be done in a parallel, RAID-0-like manner.

<P><SECT2>RAID-4

<P>When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well. The chunk-size is the size of the parity blocks.
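The parity information itself is just a bytewise XOR across the corresponding data chunks. A toy shell sketch (the byte values are arbitrary illustrations; real arrays do this per chunk, inside the kernel):

```shell
# XOR of the data bytes gives the parity byte ...
d0=$(( 0xA5 )); d1=$(( 0x3C ))
p=$(( d0 ^ d1 ))
printf 'parity: 0x%02X\n' "$p"
# ... and XOR-ing the parity with a surviving byte recovers the lost one.
printf 'recovered: 0x%02X\n' $(( p ^ d0 ))
```

This is also why the read-modify-write cycle described next is needed: changing any data in a stripe invalidates the stored parity for that stripe.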
If one byte is written to a RAID-4 array, then <TT>chunk-size</TT> bytes will be read from the N-1 disks, the parity information will be calculated, and <TT>chunk-size</TT> bytes written to the parity disk.

<P>The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.

<P><SECT2>RAID-5

<P>On RAID-5 the chunk-size has exactly the same meaning as in RAID-4.

<P>A reasonable chunk-size for RAID-5 is 128 KB but, as always, you may want to experiment with this.

<P>Also see the section on special options for mke2fs. This affects RAID-5 performance.

<P><SECT1>Options for mke2fs

<P>There is a special option available when formatting RAID-4 or -5 devices with mke2fs. The <TT>-R stride=nn</TT> option will allow mke2fs to place the different ext2-specific data structures in an intelligent way on the RAID device.

<P>If the chunk-size is 32 KB, it means that 32 KB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 KB block-size, we realize that there will be eight filesystem blocks in one array chunk. We can pass this information to the mke2fs utility when creating the filesystem:

<VERB>
mke2fs -b 4096 -R stride=8 /dev/md0
</VERB>

<P>RAID-{4,5} performance is severely influenced by this option. I am unsure how the stride option will affect other RAID levels. If anyone has information on this, please send it in my direction.

<P><SECT1>Autodetection

<P>Autodetection allows the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done.

<P>This requires several things:

<ENUM>
<ITEM>You need autodetection support in the kernel. Check this.
<ITEM>You must have created the RAID devices using the persistent-superblock option.
<ITEM>The partition types of the devices used in the RAID must be set to <BF>0xFD</BF> (use fdisk and set the type to ``fd'').
</ENUM>

<P>NOTE: Be sure that your RAID is NOT RUNNING before changing the partition types.
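As an aside, the stride arithmetic from the mke2fs section above is simple to sketch in shell (shown for the 32 KB chunk / 4 KB block example; adjust to your own geometry):

```shell
# stride = chunk-size / filesystem block-size, both in KB
chunk_kb=32
block_kb=4
stride=$(( chunk_kb / block_kb ))
# Print the resulting command rather than formatting anything
echo "mke2fs -b $(( block_kb * 1024 )) -R stride=$stride /dev/md0"
```

The echoed command matches the one given earlier; this sketch writes nothing to disk.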
Use <TT>raidstop /dev/md0</TT> to stop the device.

<P>If you have taken care of items 1, 2 and 3 above, autodetection should be set up. Try rebooting. When the system comes up, cat'ing &mdstat; should tell you that your RAID is running.

<P>During boot, you could see messages similar to these:

<VERB>
Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512 bytes. Sectors= 12657717 [6180 MB] [6.2 GB]
Oct 22 00:51:59 malthe kernel: Partition check:
Oct 22 00:51:59 malthe kernel:  sda: sda1 sda2 sda3 sda4
Oct 22 00:51:59 malthe kernel:  sdb: sdb1 sdb2
Oct 22 00:51:59 malthe kernel:  sdc: sdc1 sdc2
Oct 22 00:51:59 malthe kernel:  sdd: sdd1 sdd2
Oct 22 00:51:59 malthe kernel:  sde: sde1 sde2
Oct 22 00:51:59 malthe kernel:  sdf: sdf1 sdf2
Oct 22 00:51:59 malthe kernel:  sdg: sdg1 sdg2
Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays
Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdb1,1>
Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdc1,2>
Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sdd1,3>
Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872
Oct 22 00:51:59 malthe kernel: bind<sde1,4>
Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376
Oct 22 00:51:59 malthe kernel: bind<sdf1,5>
Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376
Oct 22 00:51:59 malthe kernel: bind<sdg1,6>
Oct 22 00:51:59 malthe kernel: autorunning md0
Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1>
Oct 22 00:51:59 malthe kernel: now!
Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean -- starting background reconstruction
</VERB>

This is output from the autodetection of a RAID-5 array that was not cleanly shut down (e.g. the machine crashed). Reconstruction is automatically initiated.
Mounting this device is perfectly safe, since reconstruction is transparent and all data are consistent (it's only the parity information that is inconsistent - but that isn't needed until a device fails).

<P>Autostarted devices are also automatically stopped at shutdown. Don't worry about init-scripts. Just use the /dev/md devices as any other /dev/sd or /dev/hd devices.

<P>Yes, it really is that easy.

<P>You may want to look in your init-scripts for any raidstart/raidstop commands. These are often found in the standard RedHat init-scripts. They are used for old-style RAID, and have no use in new-style RAID with autodetection. Just remove the lines, and everything will be just fine.

<P><SECT1>Booting on RAID

<P>This will be expanded in the near future.

<P>The really really short nano-howto goes:

<ITEMIZE>
<ITEM>Put two identical disks in a system.
<ITEM>Put in a third disk, on which you install a complete Linux system.
<ITEM>Now set up the two identical disks, each with a /boot, swap and / partition.
<ITEM>Configure RAID-1 on the two / partitions.
<ITEM>Copy the entire installation from the third disk to the RAID (just using tar, no raw copying!).
<ITEM>Set up /boot on the first disk. Run lilo. You probably want to set the root fs device to be 900, since LILO doesn't really handle the /dev/md devices. /dev/md0 is major 9, minor 0, so root=900 should work.
<ITEM>Set up /boot on the second disk just like you did on the first.
<ITEM>In the BIOS, in the case of IDE disks, set the disk types to autodetect. In the fstab, make sure you are not mounting any of the /boot filesystems. You don't need them, and in case of a device failure, you would just get stuck in the boot sequence when trying to mount a non-existent device.
<ITEM>Try booting on just one of the disks. Then try booting on the other disk only.
If this works, you're up and running.
<ITEM>Document what you did, mail it to me, and I'll put it in here.
</ITEMIZE>

<SECT1>Pitfalls

<P>Never, NEVER, <BF>never</BF> re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is part of a RAID, stop the array first, then repartition.

<P>It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s, which is less than many disks can deliver alone today. Putting six such disks on the bus will of course not give you the expected performance boost.

<P>More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.

<P>If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab.

<P>If a RAID-5 fails to reconstruct after a disk was removed and re-inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first ``device ...'' and ``raid-disk ...'' pair to the bottom of the array description in the raidtab file.

<P><SECT>Credits

<P>The following people contributed to the creation of this documentation:

<ITEMIZE>
<ITEM>Ingo Molnar
<ITEM>Jim Warren
<ITEM>Louis Mandelstam
<ITEM>Allan Noah
<ITEM>Yasunori Taniike
<ITEM>The Linux-RAID mailing list
<ITEM>The ones I forgot, sorry :)
</ITEMIZE>

<P>Please submit corrections, suggestions etc. to the author. It's the only way this HOWTO can improve.

</ARTICLE>