Gordon Messmer
2015-Jun-24 01:42 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 09:15 AM, Jason Warr wrote:
>> That said, I prefer virtual machines over multiboot environments, and I
>> absolutely despise LVM --- that cursed thing is never getting on my
>> drives. Never again, that is...
>
> I'm curious what has made some people hate LVM so much.

I wondered the same thing, especially in the context of someone who prefers
virtual machines. LV-backed VMs have *dramatically* better disk performance
than file-backed VMs.
Marko Vojinovic
2015-Jun-24 03:10 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
On Tue, 23 Jun 2015 18:42:13 -0700
Gordon Messmer <gordon.messmer at gmail.com> wrote:
> I wondered the same thing, especially in the context of someone who
> prefers virtual machines. LV-backed VMs have *dramatically* better
> disk performance than file-backed VMs.

Ok, you made me curious. Just how dramatic can it be? From where I'm
sitting, a read/write to a disk takes the amount of time it takes, the
hardware has a certain physical speed, regardless of the presence of
LVM. What am I missing?

For concreteness, let's say I have a guest machine with a dedicated
physical partition for it, on a single drive. Or I have the same thing,
only the dedicated partition is inside LVM. Why is there a performance
difference, and how dramatic is it?

If you convince me, I might just change my opinion about LVM. :-)

Oh, and just please don't tell me that the load can be spread across
two or more hard drives, cutting the file access time by a factor of two
(or more). I can do that with RAID, no need for LVM. Stick to a
single-hard-drive scenario, please.

Best, :-)
Marko
Chris Adams
2015-Jun-24 13:14 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
Once upon a time, Marko Vojinovic <vvmarko at gmail.com> said:
> On Tue, 23 Jun 2015 18:42:13 -0700
> Gordon Messmer <gordon.messmer at gmail.com> wrote:
> > I wondered the same thing, especially in the context of someone who
> > prefers virtual machines. LV-backed VMs have *dramatically* better
> > disk performance than file-backed VMs.
>
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?

File-backed images have to go through the filesystem layer. They are not
allocated contiguously, so what appear to be sequential reads inside the VM
can be widely scattered across the underlying disk. There are plenty of
people who have documented the performance differences; just Google it.
-- 
Chris Adams <linux at cmadams.net>
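As a rough illustration of that scattering (the image path, volume group,
and size below are only examples, not taken from this thread), the extent
map of a file-backed image can be compared with the layout of a plain
logical volume:

    # file-backed image: the extent list is typically long and fragmented
    # (the path is hypothetical)
    filefrag -v /var/lib/libvirt/images/guest.img

    # LV-backed disk: usually a handful of contiguous segments in the VG
    lvcreate -L 20G -n guest-disk vg0
    lvs --segments vg0/guest-disk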
Robert Heller
2015-Jun-24 13:45 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
At Wed, 24 Jun 2015 04:10:35 +0100 CentOS mailing list <centos at centos.org> wrote:
> On Tue, 23 Jun 2015 18:42:13 -0700
> Gordon Messmer <gordon.messmer at gmail.com> wrote:
> > I wondered the same thing, especially in the context of someone who
> > prefers virtual machines. LV-backed VMs have *dramatically* better
> > disk performance than file-backed VMs.
>
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?
>
> For concreteness, let's say I have a guest machine with a dedicated
> physical partition for it, on a single drive. Or I have the same thing,
> only the dedicated partition is inside LVM. Why is there a performance
> difference, and how dramatic is it?
>
> If you convince me, I might just change my opinion about LVM. :-)

Well, if you are comparing direct partitions to LVM, there is no real
difference. OTOH, if you have more than a few VMs (e.g. more than the
limits imposed by the partitioning scheme) and/or want to create
[temporary] ones on the fly, LVM makes that trivially possible. Otherwise,
you have to repartition the disk and reboot the host, which puts you back
in the old-school reality of a multi-boot system. Partitioning a RAID
array is tricky and cumbersome, and resizing physical partitions is also
non-trivial.

Basically, LVM gives you on-the-fly 'partitioning', without rebooting. It
is not always possible (AFAIK) to update the partition tables of a running
system (and never if the disk is the system disk). Most partitioning tools
are not designed for dynamic resizing of partitions, and it is a highly
error-prone process: they are built for dealing with a 'virgin' disk (or a
re-virgined one) on the assumption that the partitioning won't be revisited
once the O/S has been installed. LVM is all about creating and managing
*dynamic* 'partitions' (which is what Logical Volumes effectively are).

And no, there is little advantage in using multiple PVs. To get performance
gains (and/or redundancy, etc.), one uses real RAID (e.g. kernel software
RAID -- md -- or hardware RAID), then layers LVM on top of that. The other
*alternative* is to use virtual container disks (i.e. image files as
disks), which have horrible performance (compared to LVM or hard
partitions) and are hard to back up.

The *additional* feature: with LVM you can take a snapshot of the VM's disk
and back it up safely. Otherwise you *have* to shut down the VM and remount
the VM's disk to back it up, OR you have to install backup software (e.g.
amanda-client or the like) on the VM and back it up over the virtual
network. In some cases (many cases!) it is not possible to either shut down
the VM or install backup software on it (e.g. the VM is running a 'foreign'
or otherwise incompatible O/S).

> Oh, and just please don't tell me that the load can be spread across
> two or more hard drives, cutting the file access time by a factor of two
> (or more). I can do that with RAID, no need for LVM. Stick to a
> single-hard-drive scenario, please.
>
> Best, :-)
> Marko

-- 
Robert Heller -- 978-544-6933
Deepwoods Software -- Custom Software Services
http://www.deepsoft.com/ -- Linux Administration Services
heller at deepsoft.com -- Webhosting Services
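A minimal sketch of that workflow, assuming a volume group named vg0 and
made-up LV names and sizes (all illustrative, not from this thread):

    # carve out a new 'partition' for a guest -- no partition table edits,
    # no reboot of the host
    lvcreate -L 20G -n vm1-disk vg0

    # later: snapshot the running guest's disk and back the snapshot up
    lvcreate -s -L 5G -n vm1-snap /dev/vg0/vm1-disk
    dd if=/dev/vg0/vm1-snap bs=4M | gzip > /backup/vm1-disk.img.gz
    lvremove -f /dev/vg0/vm1-snap

Note that a snapshot taken while the guest is running is only
crash-consistent unless the guest's filesystems are quiesced first, but it
can be taken and backed up without shutting the VM down.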
Gordon Messmer
2015-Jun-24 17:40 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
On 06/23/2015 08:10 PM, Marko Vojinovic wrote:
> Ok, you made me curious. Just how dramatic can it be? From where I'm
> sitting, a read/write to a disk takes the amount of time it takes, the
> hardware has a certain physical speed, regardless of the presence of
> LVM. What am I missing?

Well, there are best- and worst-case scenarios. Best case for file-backed
VMs is pre-allocated files. That takes up more space, and takes a while to
set up initially, but it skips block allocation and probably some
fragmentation performance hits later.

Worst case, though, is sparse files. In such a setup, when you write a new
file in a guest, the kernel writes the metadata to the journal, then writes
the file's data block, then flushes the journal to the filesystem. Every
one of those writes goes through the host filesystem layer, often
allocating new blocks, which goes through the host's filesystem journal. If
each of those three writes hits blocks not previously used, then the host
may do three writes for each of them. In that case, one write() in an
application in a VM becomes nine disk writes on the VM host.

The first time I benchmarked a sparse-file-backed guest vs. an LVM-backed
guest, bonnie++ measured block write bandwidth at about 12.5% (1/8) of
native disk write performance.

Yesterday I moved a bunch of VMs from a file-backed virt server (set up by
someone else) to one that uses logical volumes. Block write speed on the
old server, measured with bonnie++, was about 21.6 MB/s in the guest and
about 39 MB/s on the host. So, less bad than a few years prior, but still
bad. (And yes, all of those numbers are bad. It's a 3ware controller, what
do you expect?)

LVM-backed guests measure very nearly the same as bare-metal performance.
After migration, bonnie++ reports about 180 MB/s block write speed.

> For concreteness, let's say I have a guest machine with a dedicated
> physical partition for it, on a single drive. Or I have the same thing,
> only the dedicated partition is inside LVM. Why is there a performance
> difference, and how dramatic is it?

Well, I said that there's a big performance hit to file-backed guests, not
partition-backed guests. You should see exactly the same disk performance
on partition-backed guests as on LV-backed guests. However, partitions have
other penalties relative to LVM.

1) If you have a system with a single disk, you have to reboot to add
partitions for new guests. Linux won't refresh the partition table on the
disk it boots from.

2) If you have two disks, you can allocate new partitions on the second
disk without a reboot. However, your partition has to be contiguous, which
may be a problem, especially over time if you allocate VMs of different
sizes.

3) If you want redundancy, partitions on top of RAID is more complex than
LVM on top of RAID. As far as I know, partitions on top of RAID are subject
to the same limitation as in #1.

4) As far as I know, Anaconda can't set up a logical volume that's a
redundant type, so LVM on top of RAID is the only practical way to support
redundant storage of your host filesystems.

If you use LVM, you don't have to remember any oddball rules. You don't
have to reboot to set up new VMs when you have one disk. You don't have to
manage partition fragmentation. Every system, whether it's one disk or a
RAID set, behaves the same way.
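For reference, the kind of comparison described above can be reproduced
with something along these lines (the volume group, guest name, sizes, and
ISO path are hypothetical; only the disk-related virt-install options are
shown):

    # measure block write throughput inside the guest and again on the host
    bonnie++ -d /tmp/bench -u nobody

    # LV-backed guest: hand the hypervisor a raw logical volume
    lvcreate -L 40G -n guest1 vg0
    virt-install --name guest1 --memory 2048 --vcpus 2 \
        --disk path=/dev/vg0/guest1,bus=virtio \
        --cdrom /tmp/CentOS-7-x86_64-Minimal.iso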
Gordon Messmer
2015-Jun-26 16:51 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
On 06/26/2015 07:58 AM, Mark Milhollan wrote:
> On Wed, 24 Jun 2015, Gordon Messmer wrote:
>
>> 1) If you have a system with a single disk, you have to reboot to add
>> partitions for new guests. Linux won't refresh the partition table on the
>> disk it boots from.
>
> I'm not sure this is still true, but I use LVM almost everywhere so I
> seldom need to try.

It's definitely still true on CentOS 7.

>> 3) If you want redundancy, partitions on top of RAID is more complex than
>> LVM on top of RAID. As far as I know, partitions on top of RAID are
>> subject to the same limitation as in #1.
>
> They look the same to me, and share the same limitations (WRT the PV).

Create a RAID1 volume on two drives. Partition that volume. Where is your
partition table? Is it in a spot where your BIOS/UEFI or another OS will
see it? Will that non-Linux system try to open or modify the partitions
inside your RAID? It depends on what metadata version you use. If you set
this up in Anaconda, it's going to be version 0.90, and your partition
table will be in a spot where a non-Linux system will read it.

There's no ambiguity with LVM. That's what I mean when I say that it's less
complicated. The format of MBR and GPT partition tables is imposed by the
design of BIOS and UEFI. There is no good reason to use them for any
purpose other than identifying the location of a filesystem that BIOS or
UEFI must be able to read.

The limitation I was referring to was that, as far as I know, if Linux has
mounted filesystems from a partitioned RAID set, you can't modify the
partitions without rebooting. That limitation doesn't affect LVM.

> Either can be partitioned, but making more LVs is indeed simpler than
> using DM to partition a partition or MD. I'd like to use LVM RAID and
> never again have RAIDed partitions, so that I can choose the RAID level
> per LV; alas, LVM RAID MDs don't appear in /proc/mdstat, so monitoring
> them is somewhat more annoying.
>
>> 4) As far as I know, Anaconda can't set up a logical volume that's a
>> redundant type, so LVM on top of RAID is the only practical way to
>> support redundant storage of your host filesystems.
>
> Anaconda has many deficiencies, and indeed I am annoyed enough with it
> that I often skip trying to use its new disk manager, but making the PV
> on an MD RAID isn't impossible

I know, that's what I said was the only practical way to support redundant
storage (when using LVM).

> , or alternatively making the LVs
> redundant after install is a single command (each), and you can choose
> whether it should be mere mirroring or some MD-managed RAID level (modulo
> the LVM RAID MD monitoring issue).

I hadn't realized that. That's an interesting alternative to MD RAID,
particularly for users who want LVs with different RAID levels.
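A sketch of that per-LV flexibility, using made-up volume group and LV
names and sizes (the commands themselves are standard LVM ones):

    # mix redundancy levels within one volume group:
    # a mirrored LV, a plain LV, and a striped-with-parity LV
    lvcreate --type raid1 -m 1 -L 20G -n guests vg0
    lvcreate -L 10G -n scratch vg0
    lvcreate --type raid5 -i 2 -L 30G -n data vg0   # needs >= 3 PVs in vg0

    # LVM RAID state doesn't show up in /proc/mdstat; query LVM instead
    lvs -a -o name,segtype,sync_percent,raid_mismatch_count vg0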
Chris Murphy
2015-Jun-26 17:34 UTC
[CentOS] LVM hatred, was Re: /boot on a separate partition?
On Fri, Jun 26, 2015 at 10:51 AM, Gordon Messmer
<gordon.messmer at gmail.com> wrote:
>> , or alternatively making the LVs
>> redundant after install is a single command (each), and you can choose
>> whether it should be mere mirroring or some MD-managed RAID level (modulo
>> the LVM RAID MD monitoring issue).
>
> I hadn't realized that. That's an interesting alternative to MD RAID,
> particularly for users who want LVs with different RAID levels.

LVM RAID uses the md kernel code, but it is managed by LVM tools and LVM
metadata rather than by mdadm and its metadata format. It supports all the
same RAID levels these days. The gotcha is that it's obscure enough that
you won't find nearly as much documentation or help when you get to
disaster recovery and need to figure out what to do. And anyone who lurks
on the linux-raid@ list knows that a huge pile of data loss comes from
users who do the wrong thing; maybe top of the list is that they, for some
ungodly reason, read somewhere to use mdadm -C to overwrite the mdadm
metadata on one of their drives, which obliterates important information
needed for recovery, and now they have actually caused a bigger problem.

At the moment, LVM RAID is only supported with conventional/thick
provisioning. So if you want to do software RAID and also use LVM thin
provisioning, you still need to use mdadm (or hardware RAID).

-- 
Chris Murphy
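A rough sketch of that combination, with hypothetical device names, volume
group, and sizes: build the array with mdadm, then layer LVM and a thin
pool on top of it:

    # mirror two partitions with md, then put LVM (including a thin pool)
    # on top of the array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate --type thin-pool -L 100G -n pool0 vg0
    lvcreate --thin -V 20G -n guest1 vg0/pool0   # thin volume for one guest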