Hi,

Is there a way to speed up LVM performance while a logical volume has one or more snapshots?

I have several domUs on LVM volumes. I'm taking a snapshot of each volume, then backing up each volume with the help of dd. Write performance on a snapshotted LV seems to go down badly, in my case from 600 MB/s to 78-80 MB/s. I tried the suggested fix of setting archive = 0 in lvm.conf, with no result.

Is there a way to speed it up, or are there more interesting live domU backup solutions out there? Thanks in advance.
Denis J. Cirulis wrote:
> Is there a way to speed up LVM performance while a logical volume has
> one or more snapshots?
> [...]
> Is there a way to speed it up, or are there more interesting live
> domU backup solutions out there?

Are you saying your performance drops by that much before you even start backing up?

Why not use ntfsclone/partclone as opposed to dd? They copy only the allocated blocks, so that will certainly improve performance and reduce your exposure.
On Mon, Nov 28, 2011 at 01:30:02PM -0500, Errol Neal wrote:
> Are you saying your performance drops by that much before you even
> start backing up?
> Why not use ntfsclone/partclone as opposed to dd?

For example:

suse-cloud:~ # vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  nova-volumes   1   0   0 wz--n-  83.43G  83.43G
  test-vg        2   1   0 wz--n- 596.17G 586.17G
suse-cloud:~ # lvs
  LV       VG      Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  test-vol test-vg -wi-a- 10.00G
suse-cloud:~ # mount /dev/test-vg/test-vol /mnt/test/
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.38892 s, 439 MB/s
suse-cloud:~ # lvcreate -s -n test-vol-snap /dev/test-vg/test-vol -L3G
  Logical volume "test-vol-snap" created
suse-cloud:~ # lvs
  LV            VG      Attr   LSize  Origin   Snap%  Move Log Copy%  Convert
  test-vol      test-vg owi-ao 10.00G
  test-vol-snap test-vg swi-a-  3.00G test-vol   0.00
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1-1g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 6.60005 s, 159 MB/s
suse-cloud:~ # lvremove /dev/test-vg/test-vol-snap
Do you really want to remove active logical volume "test-vol-snap"? [y/n]: y
  Logical volume "test-vol-snap" successfully removed
suse-cloud:~ # dd if=/dev/zero of=/mnt/test/file.1-2g bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.43745 s, 305 MB/s
suse-cloud:~ #

These results are from a test system. With only one snapshot of test-vg/test-vol I get roughly a 3x performance drop; the more snapshots there are, the lower the write speed on the origin volume.
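The snapshot-then-dd backup loop under discussion can be sketched as a short script that keeps each snapshot alive only for the duration of its own copy, since the write penalty lasts exactly as long as the snapshot exists. The VG and LV names below are hypothetical, and RUN=echo makes this a dry run that only prints the commands; clear RUN to actually execute them:

```shell
#!/bin/sh
# Sketch of the snapshot + dd backup loop described in this thread.
# VG/LV names are hypothetical. RUN=echo prints commands instead of
# running them; set RUN= (empty) on a real system.
set -eu
RUN=echo
VG=test-vg
BACKUP_DIR=/backup

for LV in domu1 domu2; do
    SNAP="${LV}-snap"
    # A small snapshot is enough: it only has to absorb the changes
    # made while this one backup runs.
    $RUN lvcreate -s -n "$SNAP" -L 3G "/dev/$VG/$LV"
    # Copy the frozen snapshot, not the live volume.
    $RUN dd if="/dev/$VG/$SNAP" of="$BACKUP_DIR/$LV.img" bs=1M
    # Drop the snapshot immediately to restore full write speed.
    $RUN lvremove -f "/dev/$VG/$SNAP"
done
```

Backing up one LV at a time this way also avoids having several snapshots active at once, which (as noted below) multiplies the copy-on-write cost.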
----------------------------------------------------------------------------------

I'm not sure what you were expecting; that's how snapshots work, and there's always a downside. Every write to the origin triggers a copy-on-write: LVM copies the existing data into the snapshot before the new data hits the origin disk. Snapshots are also sparse, so they take the added performance hit of allocating that space as needed, up to their limit. If you create a snap of a 20 GB origin disk and specify -L 1G during the snap create, it will effectively store 1 GB worth of changes until it is either dropped or auto-extended, depending on your settings. (Note that lvcreate requires a size for a snapshot, so you always pass -L or -l.) Snapshots are effectively branches of the origin volume, in much the same way as branches are used in typical source control systems (svn, cvs, etc.).

On Mon, Nov 28, 2011 at 2:06 PM, Denis J. Cirulis <denis@opensource.lv> wrote:
> While I have only one snapshot of test-vg/test-vol I get roughly a 3x
> performance drop; the more snapshots there are, the lower the write
> speed on the origin volume.

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
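The auto-extension mentioned above is configured in lvm.conf's activation section and needs dmeventd monitoring to take effect. A minimal sketch, with illustrative values (70%/20% are examples, not recommendations):

```shell
# /etc/lvm/lvm.conf (activation section)
activation {
    # When a monitored snapshot passes 70% full...
    snapshot_autoextend_threshold = 70
    # ...grow it by 20% of its current size.
    snapshot_autoextend_percent = 20
}
```

With the threshold left at its default of 100, snapshots are never auto-extended and a full snapshot is simply invalidated.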
David Della Vecchia wrote:
> I'm not sure what you were expecting; that's how snapshots work, and
> there's always a downside. [...]

David beat me to it. I won't be redundant, but I will say you are artificially inducing an issue unless you plan on writing your backups to the same device you've snapped. Your reads (both random and sequential) should be largely unaffected by the presence of a snap.
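The point about not writing the backup back onto the snapped device can be sketched as follows. Device paths, mount points, and the backup hostname are hypothetical, and RUN=echo keeps this as a dry run that only prints the commands:

```shell
#!/bin/sh
# Send the backup image somewhere other than the snapshotted storage,
# so backup writes do not themselves trigger copy-on-write traffic.
# Paths and hostnames are hypothetical; RUN=echo prints the commands.
set -eu
RUN=echo
SNAP=/dev/test-vg/test-vol-snap

# Bad: writing the image onto the same VG/device doubles the write load
# and every backup write can also trigger a copy-on-write.
#   dd if=$SNAP of=/mnt/test/backup.img bs=1M

# Better: a separately attached disk...
$RUN dd if="$SNAP" of=/mnt/usb-backup/test-vol.img bs=1M
# ...or stream the image to another host entirely.
$RUN sh -c "dd if=$SNAP bs=1M | gzip | ssh backup-host 'cat > /srv/backups/test-vol.img.gz'"
```

Reads from the snapshot are cheap; it is the extra writes landing on the snapped device that compound the slowdown.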
On 29/11/11 08:24, David Della Vecchia wrote:
> I'm not sure what you were expecting; that's how snapshots work, and
> there's always a downside. [...]

Not only that: any write to the source volume also requires a copy of the existing data from the source to each snapshot, in addition to the write itself. So each write to the source amounts to:

  1 read from the source,
  N writes to snapshots (where N is the number of snapshots),
  1 write to the source.

You can easily see how the I/O speed is affected. The advantage you get with snapshots is that they are instantaneous, you don't risk corrupting your source data, and your data won't change in the middle of making a backup or copy. These benefits are usually considered to outweigh the performance loss.

Peter
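Peter's accounting above can be turned into a tiny first-order calculation of I/O amplification. This counts I/Os for a write to a chunk that has not yet been copied out; chunks already copied are not re-copied, so this is a worst-case per-chunk figure:

```shell
#!/bin/sh
# First-order model of Peter's accounting: for a write to a
# not-yet-copied chunk, LVM does 1 read from the source, one copy-out
# write per snapshot, and the actual write to the source.
set -eu
for n in 1 2 3; do
    ios=$((1 + n + 1))   # read + N snapshot writes + source write
    echo "snapshots=$n -> $ios I/Os per source write"
done
```

Running it prints `snapshots=1 -> 3 I/Os per source write` and so on, which lines up with the roughly 3x slowdown measured earlier in the thread with a single snapshot.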