I think I have solved my issue and would like some input from anyone who has done this -- pitfalls, errors, or if I am just wrong.

CentOS 5.x, software raid, 250GB drives. Two drives in mirror, one spare, all the same size. Two devices in the mirror: one boot (about 100MB), and one that fills the rest of the disk and contains the LVM partitions.

I was thinking of taking out the spare and adding a 500GB drive. I would run this:

# sfdisk -d /dev/sda | sfdisk /dev/sdc

(drive 'a's setup is cloned onto drive 'c'.) Then I would add drive 'c' to the raid array as the spare. I would then pull out drive 'b' and allow drive 'c' to be synced with 'a'. At this point drive 'c' has 250GB worth of partitioned space in raid/LVM.

I then add a 500GB drive 'b' and repeat the above, pulling out drive 'a'. Now I have two 500GB drives ('b' and 'c') with 250GB worth of partitions mirrored.

I am thinking I would next forget about md0 (the boot part) and concentrate on md1, where the whole system lies:

# mdadm --grow /dev/md1 --size=max

I believe this will grow md1 out to fill the 500GB of the drive. I would then wrestle with expanding the LVMs that fill md1 as I wish. After that, I would add a 500GB drive 'a' to the mix by cloning the partitions and then adding it as a spare.

So... does this sound like the way to upgrade from 250GB to 500GB drives on a software raid 1? Or is it back to the drawing board?
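In commands, and assuming the new drive shows up as /dev/sdc (device names here are illustrative), I picture each swap going something like:

  # clone sda's partition table onto the new 500GB disk (wipes sdc's table)
  sfdisk -d /dev/sda | sfdisk /dev/sdc

  # add the new disk's partitions to both arrays as spares
  mdadm /dev/md0 --add /dev/sdc1
  mdadm /dev/md1 --add /dev/sdc2

  # fail and remove drive b so c syncs from a, then watch the rebuild
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
  watch cat /proc/mdstat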
Hi,

On Thu, Jul 2, 2009 at 12:52, Bob Hoffman <bob at bobhoffman.com> wrote:
> # mdadm --grow /dev/md1 --size=max

If /dev/md1 is made out of /dev/sda2 and /dev/sdb2, it will not work, as those partitions will still be the same size as they were before...

I think it would be easier to just create new partitions /dev/sda3 and /dev/sdb3, then create a new RAID1 /dev/md2, pvcreate it, and then use vgextend to add /dev/md2 to the same volume group that already contains /dev/md1.

Does that make sense to you?

Cheers,
Filipe
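P.S. Roughly, and assuming the CentOS default volume group name VolGroup00 (check yours with vgs), that would be:

  # carve a third partition out of the unused space on each disk first
  # (fdisk/sfdisk), then mirror them
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

  # make the new array an LVM physical volume and add it to the existing VG
  pvcreate /dev/md2
  vgextend VolGroup00 /dev/md2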
> If /dev/md1 is made out of /dev/sda2 and /dev/sdb2, it will not work,
> as those partitions will still be the same size as they were before...

I have md0 as the boot, md1 as the rest. Inside md1 are all my LVM partitions. The grow command is supposed to expand the size of that md1 from its current 200+GB to fill the rest of the 500GB drive.

I thought you could then expand the LVM partitions inside of that.

If the grow command does not make the raid device larger, then what does it do?

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdc2[2](S) sdb2[1] sda2[0]
      245007232 blocks [2/2] [UU]

unused devices: <none>

Thanks for the input... glad I posted before I did this.
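I suppose I can at least confirm what is limiting the array size by comparing it against its components, something like:

  # report the array size and each member's state
  mdadm --detail /dev/md1

  # show each component partition's size in 1K blocks
  sfdisk -s /dev/sda2
  sfdisk -s /dev/sdb2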
> > I have md0 as the boot, md1 as the rest. Inside md1 are all my LVM
> > partitions. The grow command is supposed to expand the size of that
> > md1 from its current 200+GB to fill the rest of the 500GB drive.
> >
> > I thought you could then expand the LVM partitions inside of that.
> >
> > If the grow command does not make the raid device larger, then what
> > does it do?
>
> It only works once you have replaced all of the smaller drives with
> larger ones. If /dev/sdc is your spareset drive, when you replace it
> you'd partition it (manually) to have a larger #2 partition. Then you
> would fail over say sdb to sdc and make sdb the spareset drive and
> then pull it and replace it with a large disk and partition it like
> sdc, then you would fail sda over to sdb and replace sda with a large
> disk, and partition it like sdc. NOW you can grow the raid set & LVM
> vg.

That was my intent... take out all of them and leave two that have been switched to 500GB drives. Then grow them, then change things, then add the spare 500GB, cloning the new sizes. Or just change them all to 500GB drives with the spare, having cloned the raids, then grow the raid device.

So... that is the way to do it then? Sound good?
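If so, once both active mirrors are on larger #2 partitions, I take it the grow would go roughly like this (VolGroup00/LogVol00 are the CentOS default LVM names and the size is illustrative; substitute your own):

  # grow the array to the size of its smallest member, then let it resync
  mdadm --grow /dev/md1 --size=max
  cat /proc/mdstat

  # tell LVM the physical volume got bigger
  pvresize /dev/md1

  # hand the new space to a logical volume and grow its filesystem
  lvextend -L +230G /dev/VolGroup00/LogVol00
  resize2fs /dev/VolGroup00/LogVol00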
On 07/02/2009 09:52 AM, Bob Hoffman wrote:
> I am thinking I would next forget about md0 (the boot part) and concentrate
> on the md1, where the whole system lies.

Don't completely forget about it. You'll probably need to reinstall grub when you replace each disk.
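Something like this for each new disk, from a grub shell (grub legacy on CentOS 5; the hd0/sdc mapping is just an example, match it to the disk you replaced):

  grub
  grub> device (hd0) /dev/sdc
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit

Otherwise the machine may not boot if the only disk with a boot sector is the one you pulled.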