Are there any ways to improve/manage the speed of pvmove? The man page
doesn't show any documented switches for priority scheduling. iostat
shows the system way underutilized, even though the LV whose PEs are
being migrated is continuously (if slowly) being written to.

Thanks!
jlc
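A minimal sketch of the kind of check being described, assuming
sysstat's iostat (the interval is illustrative):

    # Extended per-device statistics every 5 seconds; low %util on the
    # disks involved is the "underutilized" symptom described above:
    iostat -x 5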
Sorry 'bout that previous one. Wrong key combo hit!

On Tue, 2008-02-12 at 19:57 -0700, Joseph L. Casale wrote:
> Are there any ways to improve/manage the speed of pvmove?

Not that I am aware of. Keep in mind that a *lot* of work is being
done. You could "nice" it ("man nice"), but since there is likely to
be a lot of I/O happening, it may not help much.

> Man doesn't show any documented switches for priority scheduling.
> Iostat shows the system way underutilized even though the lv whose
> pe's are being migrated is continuously being written (slowly) to.

If the drives are on the same channel, or other devices on the channel
are also flooding it, that would be expected. Does "swapon -s" show a
lot of swap being used? Does top give a clue? I suspect a lot of CPU
may also be involved.

-- Bill
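A sketch of the renicing idea, plus an I/O-priority variant; the PID
is hypothetical, and ionice only has an effect if it is installed and
the kernel's I/O scheduler (e.g. CFQ) honors it:

    # Lower the CPU priority of an already-running pvmove (PID 1234 is
    # made up; find the real one with "pgrep pvmove"):
    renice +19 -p 1234

    # Lower its I/O priority to the "idle" class, if ionice is available:
    ionice -c3 -p 1234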
On Tue, 2008-02-12 at 19:57 -0700, Joseph L. Casale wrote:
> <snip>
> Iostat shows the system way underutilized even though the lv whose
> pe's are being migrated is continuously being written (slowly) to.

I finally thought about that last line. It makes sense, because
metadata tracking must be done as the various pieces are moved and a
checkpoint is written (note the comment in the man page about being
able to restart without providing any parameters). And that is the
drive that is failing, too! There may be a lot of write failures
followed by alternate block assignments going on at the hardware
level. Just a SWAG (Scientific Wild-Assed Guess).

-- Bill
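That checkpointing is what makes an interrupted move recoverable; a
minimal illustration of what the man page comment above refers to:

    # Resume an interrupted pvmove; with no arguments it picks up any
    # move recorded in the LVM metadata at the last checkpoint:
    pvmove

    # Or abandon the move and return the extents to the source PV:
    pvmove --abort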
Joseph L. Casale wrote:
> Not very impressive :) Two different SATA II based arrays on an LSI
> controller, 5% complete in ~7 hours == a week to complete! I ran this
> command from an ssh session from my workstation (that was clearly a
> dumb move). Given the robustness of the pvmove command I have gleaned
> from reading, if the session bails, how much time am I likely to lose
> by restarting? Are the checkpoints frequent?

I always use screen when running updates, installs, disk maintenance,
etc. That way, if the ssh session dies for whatever reason, the
command keeps going and I can reconnect to the session later. It's
also useful to be able to start the run at the office and then
reconnect from home to check on it.

-- Bowie
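A minimal sketch of that workflow; the session name and PV paths are
illustrative:

    # Start a named screen session and kick off the move inside it:
    screen -S migration
    pvmove /dev/sdb1 /dev/sdc1

    # Detach with Ctrl-a d; the move keeps running. Later, from any
    # ssh session, reattach to check on it:
    screen -r migration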
Joseph L. Casale wrote:
> Are there any ways to improve/manage the speed of pvmove? Man
> doesn't show any documented switches for priority scheduling.
> Iostat shows the system way underutilized even though the lv
> whose pe's are being migrated is continuously being written
> (slowly) to.

I don't believe pvmove actually does any of the lifting. pvmove merely
creates a mirrored PV area in device-mapper and then hangs around
monitoring its progress; once the mirror is synced up, it throws a
couple of barriers and removes the original PV from the mirror,
leaving the new PV as the new location for the data. That is how the
move continues through reboots: all the lifting is actually done in
device-mapper, and its state is preserved there. On restart, LVM will
read its metadata to determine whether a pvmove is in progress, and
then spawn a pvmove to wait for it to complete so it can remove the
mirror.

Any slowness is due to disk I/O errors and retries being thrown
around. You should really run LVM on top of a RAID1 (software or
hardware makes no difference); LVM is more for storage management
than for fault tolerance and redundancy.

-Ross
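A couple of ways to watch the device-mapper side of this while it
runs; the temporary mirror device's name varies, but it typically
contains "pvmove":

    # Show the sync state of the temporary mirror pvmove created:
    dmsetup status | grep pvmove

    # pvmove can also print its own progress at a chosen interval
    # (in seconds); the PV paths here are illustrative:
    pvmove -i 10 /dev/sdb1 /dev/sdc1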
Good suggestion, no it's not ESX, but it does do snapshots.

-Ross

----- Original Message -----
From: centos-bounces at centos.org <centos-bounces at centos.org>
To: 'CentOS mailing list' <centos at centos.org>
Sent: Wed Feb 13 17:30:39 2008
Subject: RE: [CentOS] pvmove speed

> I am facing the same issue with a migration of our VM machines
> to a new iSCSI setup this year; around 1TB of VMs need to be
> fork-lifted over. I thought about exotic ways to move it over,
> but I think in the end it will be by good ole Backup Exec and
> tape.

You're not running ESX, are you? Heh, I just did the same thing on a
much smaller scale. I couldn't afford the long downtime while a copy
took place, so I shut the VMs off, snapped them and restarted them. I
then scripted all files *without* 00000 in the name to rsync over
(ssssslowly). Then I only had to shut the VMs off, sync the small
snapshots and restart the VMs on the other storage. It only took a
few minutes.

jlc
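A sketch of the trick described above; the paths are made up, and the
00000 pattern assumes the snapshot delta files are the ones named
that way:

    # While the VMs run on their snapshots, bulk-copy everything
    # except the (still-changing) snapshot delta files:
    rsync -av --exclude='*00000*' /vmstore/old/ /vmstore/new/

    # After shutting the VMs down, a second pass picks up the small
    # delta files, then the VMs restart from the new storage:
    rsync -av /vmstore/old/ /vmstore/new/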