I'm new to lvm. I decided to decrease the space of a logical volume.
So I did a:
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
1953 251 1602 14% /
/dev/sda2 494 21 448 5% /boot
tmpfs 1014 0 1014 0% /dev/shm
/dev/mapper/VolGroup00-LogVol05
48481 6685 39295 15% /home
/dev/mapper/VolGroup00-LogVol03
961 18 894 2% /tmp
/dev/mapper/VolGroup00-LogVol01
7781 2051 5329 28% /usr
/dev/mapper/VolGroup00-LogVol02
5239 327 4642 7% /var
$ sudo lvm lvreduce -L -1000M /dev/VolGroup00/LogVol05
Rounding up size to full physical extent 992.00 MB
WARNING: Reducing active and open logical volume to 47.91 GB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol05? [y/n]: y
Reducing logical volume LogVol05 to 47.91 GB
Logical volume LogVol05 successfully resized
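An aside on the "Rounding up size to full physical extent 992.00 MB" message: LVM can only allocate and free whole physical extents, and this volume group appears to use 32 MB extents, so a request to remove 1000 MB gets rounded to a whole number of extents. A sketch of the arithmetic (the 32 MB extent size is inferred from the message, not shown in the thread):

```shell
# LVM rounds the requested reduction to whole physical extents.
# With 32 MB extents, 1000 MB -> 31 whole extents -> 992 MB removed.
extent_mb=32
request_mb=1000
extents=$(( request_mb / extent_mb ))   # integer division: 31 extents
rounded_mb=$(( extents * extent_mb ))   # 992 MB actually removed
echo "$rounded_mb"
```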
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
1953 251 1602 14% /
/dev/sda2 494 21 448 5% /boot
tmpfs 1014 0 1014 0% /dev/shm
/dev/mapper/VolGroup00-LogVol05
48481 6685 39295 15% /home
/dev/mapper/VolGroup00-LogVol03
961 18 894 2% /tmp
/dev/mapper/VolGroup00-LogVol01
7781 2051 5329 28% /usr
/dev/mapper/VolGroup00-LogVol02
5239 327 4642 7% /var
Note that "df" shows the same size available. This probably means
that the two "systems" aren't talking to each other (or my lvm
command failed).
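(What df is actually reporting here is the filesystem's own size, which lvreduce does not touch: lvreduce shrinks only the block device underneath. The safe order is to shrink the filesystem first, then the LV. A sketch of that order, assuming ext2/ext3 and example sizes not taken from the thread; all steps need root and the filesystem offline:

```shell
# Safe shrink order for an ext2/3 filesystem on LVM (sketch; 47G is an
# example target, chosen smaller than the final LV size).
umount /home
e2fsck -f /dev/VolGroup00/LogVol05        # resize2fs requires a clean check first
resize2fs /dev/VolGroup00/LogVol05 47G    # shrink the FILESYSTEM first...
lvreduce -L 47G /dev/VolGroup00/LogVol05  # ...then the LV, to the same size or larger
mount /home
```

Doing it in the opposite order, as above, truncates the device out from under the filesystem.)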
When I rebooted, things failed, going into "repair filesystem" mode.
I tried
fsck /dev/VolGroup00/LogVol05
but after a while, it started giving block errors, specifically:
"Error reading block <block-number> (Invalid argument) while doing
inode scan. Ignore error<y>?"
I held down the <Enter> key for a while in hopes that I'd be able to
get through the errors, but no joy. I finally cancelled the thing.
I can rebuild the server; it's no big deal. In fact, the logical
volume that went bad isn't a big deal data-wise, and I shouldn't need
that data to bring up the server itself. I shouldn't need to mount
it. So can I still save this?
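(One recovery avenue worth trying before rebuilding, with no guarantees: lvreduce only truncated the LV's extent mapping; on a simple linear, single-PV layout, extending the LV back by exactly the amount removed will often re-attach the same physical extents in the same place, making the filesystem whole again. A sketch, using the 992 MB figure from the "Rounding up" message:

```shell
# Grow the LV back by exactly what lvreduce removed, then re-check.
# No guarantees: if the freed extents were reused, the data is gone.
lvextend -L +992M /dev/VolGroup00/LogVol05
e2fsck -f -y /dev/VolGroup00/LogVol05   # -y answers yes to every prompt,
                                        # instead of holding down <Enter>
```

If the extents come back in the right place, fsck should find a mostly intact filesystem.)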
=== Al
On Wed, May 02, 2007 at 06:59:26PM -0700, Al Sparks enlightened us:
> [original message quoted in full; snipped]

Did you resize the filesystem, too?

Matt

--
Matt Hyclak
Department of Mathematics
Department of Social Work
Ohio University
(740) 593-1263
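(One way to see the mismatch Matt is asking about, sketched here for an ext2/3 filesystem: compare the filesystem's own recorded size against the LV's size.

```shell
# The filesystem's idea of its size vs. the LV's actual size (needs root).
dumpe2fs -h /dev/VolGroup00/LogVol05 | grep -E 'Block count|Block size'
lvdisplay /dev/VolGroup00/LogVol05 | grep 'LV Size'
# If block count * block size exceeds the LV size, the filesystem now
# extends past the end of the device -- exactly this failure mode.
```
)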
Al Sparks spake the following on 5/2/2007 6:59 PM:
> [snip]
> $ sudo lvm lvreduce -L -1000M /dev/VolGroup00/LogVol05
> Rounding up size to full physical extent 992.00 MB
> WARNING: Reducing active and open logical volume to 47.91 GB
> THIS MAY DESTROY YOUR DATA (filesystem etc.)
> [snip]

LVM even warned you -- IN CAPS -- "THIS MAY DESTROY YOUR DATA". I
guess it was right.

I haven't had much luck with reducing a volume below its initial
size. I usually make a new LV and rsync or cp -a the data over to it.
I try to leave some free space just for this. Or add a drive
temporarily.

--
MailScanner is like deodorant...
You hope everybody uses it, and you notice quickly if they don't!!!!
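(The new-LV-and-copy approach above might look like this; all names and sizes here are examples, not from the thread, and every step needs root:

```shell
# Make a smaller LV, copy the data over, then swap mount points.
lvcreate -L 20G -n LogVol05new VolGroup00
mkfs.ext3 /dev/VolGroup00/LogVol05new
mkdir -p /mnt/newhome
mount /dev/VolGroup00/LogVol05new /mnt/newhome
rsync -aH /home/ /mnt/newhome/     # or: cp -a /home/. /mnt/newhome/
# Then update /etc/fstab to mount the new LV at /home, remount, and
# remove the old LV with: lvremove /dev/VolGroup00/LogVol05
```

This sidesteps the shrink entirely, at the cost of needing enough free space in the VG, or a temporary drive, for the copy.)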