Christian Wahlgren
2006-Oct-15 17:14 UTC
[CentOS] Proper partition/LVM growth after RAID migration
Hi,

This topic is perhaps not for this list, but I'm running CentOS 4.4 and it seems that a lot of people here use 3ware cards and RAID volumes.

I did a RAID migration on a 3ware 9590SE-12, so that an exported disk grew from 700GB to 1400GB. The exported disk is managed by LVM. The problem is that I don't really know what to do now to let LVM and my logical volume make use of this new disk size, and probably future disk size growth as well.

I initially imagined that I could let the physical partition grow and then grow the LVM PV, the LV, and the filesystem (ext3) to fill up the new exported disk size. Then I realized that this is not how LVM is designed - you add PVs and let the LV and FS grow. But what if the underlying exported disk from a RAID card grows as you add disks to its RAID volume? Should you create new physical partitions on the exported disk - and specifically, partitions inside an extended partition, since you might in the future need more than 4 partitions?

Secondly, it seems there is a problem after this migration: the graphical LVM tool shows that I have "Unpartitioned space" on the exported disk, but I can't click the "Initialize entry" button. The property window on the right says: "Not initializable: Partition manually".

parted shows only one partition:

[root@acrux ~]# parted /dev/sdb print
Disk geometry for /dev/sdb: 0.000-1430490.000 megabytes
Disk label type: msdos
Minor    Start        End         Type      Filesystem  Flags
1        0.000        715245.000  primary               lvm

fdisk in expert mode (x) shows that the fields for partitions 2-4 are all zeros.

Should I now make an LVM partition inside an extended partition in parted, and then, hopefully, the graphical LVM tool will be able to add the new physical partition to my logical volume, after which I can resize my ext3 filesystem?

Thanks in advance for any input on this,
Christian
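Before repartitioning, it can help to confirm what the kernel and LVM currently see on the grown unit. A minimal read-only sketch, assuming the exported 3ware unit is /dev/sdb with the existing PV on /dev/sdb1 (device names taken from the parted output above; adjust for your system):

```shell
# Read-only inspection of the grown disk - needs root, changes nothing.
parted /dev/sdb print     # partition table vs. the new total disk size
pvdisplay /dev/sdb1       # size of the existing PV on the old partition
vgdisplay                 # volume group totals; note "Free PE / Size"
lvdisplay                 # current logical volume sizes
```

If parted reports the new 1400GB geometry but the partition table still ends at the old 700GB boundary, the free space is available for a new partition.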
Shawn K. O'Shea
2006-Oct-16 19:50 UTC
[CentOS] Proper partition/LVM growth after RAID migration
> I did a RAID migration on a 3Ware 9590SE-12, so that an exported disk
> grew from 700GB to 1400GB. The exported disk is managed by LVM. The
> problem now is that I don't really know what to do now to let LVM and
> my logical volume make use of this new disk size, and probably
> future disk size growth.

I've been doing this recently with VMware ESX Server. To save space, I create base disk images of clean OS installs on a minimally sized disk. If I need more space, I use VMware's tools to make the virtual disk bigger, and then grow the bits inside Linux with LVM. I used the following two documents for info:

http://fedoranews.org/mediawiki/index.php/Expanding_Linux_Partitions_with_LVM
http://www.knoppix.net/wiki/LVM2

In my case, I was growing the root filesystem, so I needed to boot into something like Knoppix (hence the 2nd link above).

To summarize the links... (usual caveats: back up your data, etc.)

-Create a new partition of type 8e (Linux LVM) on the new empty space.

-Add that partition to LVM as a PV. If the new partition is /dev/sda3, then this would look like:
    pvcreate /dev/sda3

-Extend the volume group that contains the logical volume you want to add this space to. If the VG is VolGroup00, then:
    vgextend VolGroup00 /dev/sda3

-Here I usually run vgdisplay and get the amount of free disk space that now exists (look for the line that says "Free PE / Size"). If Free PE / Size says there are 2.2GB free and the LV is LogVol00, you could do:
    lvextend -L+2.2G /dev/VolGroup00/LogVol00

-Extend the filesystem. For ext2/ext3, use resize2fs (you may want to fsck before this; this is done with the filesystem unmounted):
    resize2fs /dev/VolGroup00/LogVol00

-Fsck:
    e2fsck -fy /dev/VolGroup00/LogVol00
 (you may not want to use -y)

The trick I had when doing this in Knoppix for existing LVMs was, after fdisk'ing, to run:
    vgscan; vgchange -a y
to get the existing LVM partitions recognized and the /dev entries created.

-Shawn
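Applied to the original poster's setup, the steps above can be sketched as one shell sequence. This is a hedged sketch, not a tested recipe: it assumes the grown 3ware unit is /dev/sdb, the new partition created in the free space becomes /dev/sdb2, the VG is VolGroup00, the LV is LogVol00, and the filesystem mounts at /mountpoint - all of these are placeholders to substitute for your system. Every command here needs root and modifies the disk; back up first.

```shell
# 1. Create a new partition of type 8e (Linux LVM) in the free space.
fdisk /dev/sdb            # interactive: n (new), t (type 8e), w (write)
partprobe /dev/sdb        # re-read the partition table (or reboot)

# 2. Turn the new partition into a PV and add it to the existing VG.
pvcreate /dev/sdb2
vgextend VolGroup00 /dev/sdb2
vgdisplay VolGroup00      # note the "Free PE / Size" line

# 3. Grow the LV by the free space reported above (example: 700GB).
lvextend -L+700G /dev/VolGroup00/LogVol00

# 4. Offline ext3 resize: unmount, check, grow, check again, remount.
umount /mountpoint
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00
e2fsck -f /dev/VolGroup00/LogVol00
mount /mountpoint
```

Because the LV spans two PVs on the same physical RAID unit after this, future capacity growth on the same card would repeat the sequence with a third partition - which is why creating an extended partition now, as the original post suggests, avoids the 4-primary-partition limit of an msdos disk label.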