Matthew Booth
2010-Apr-06 15:44 UTC
[Libguestfs] [PATCH] RHEV: Use dd and direct io to write to NFS
I've been experiencing severe stability issues writing large amounts of data to
an NFS export (I have never successfully written 8GB of data without having to
reboot my machine). This patch alleviates the problem. I have successfully
exported an 8GB disk with this patch in place.
---
 lib/Sys/VirtV2V/Target/RHEV.pm |   29 ++++++-----------------------
 1 files changed, 6 insertions(+), 23 deletions(-)

diff --git a/lib/Sys/VirtV2V/Target/RHEV.pm b/lib/Sys/VirtV2V/Target/RHEV.pm
index 4b663ef..9b4b73a 100644
--- a/lib/Sys/VirtV2V/Target/RHEV.pm
+++ b/lib/Sys/VirtV2V/Target/RHEV.pm
@@ -285,29 +285,12 @@ sub open
                          path => "$path.meta",
                          error => $!)));
 
-        # Open the data file for writing
-        my $data;
-        open($data, '>', $path)
-            or die(__x("Unable to open {path} for writing: {error}",
-                       path => "$path",
-                       error => $!));
-
-        # Write all data received to the data file
-        my $buffer;
-
-        for(;;) {
-            my $ret = sysread(STDIN, $buffer, 64*1024);
-            die("Error in NFSHelper reading from stdin: $!")
-                unless (defined($ret));
-            last if ($ret == 0);
-
-            print $data $buffer;
-        }
-
-        close($data)
-            or die(user_message(__x("Error closing {path}: {error}",
-                                    path => "$path",
-                                    error => $!)));
+        # Write the remainder of the data using dd in 2MB chunks
+        # XXX - mbooth at redhat.com 06/04/2010 (Fedora 12 writing to RHEL 5 NFS)
+        # Use direct IO as writing a large amount of data to NFS regularly
+        # crashes my machine. Using direct io crashes less.
+        exec('dd', 'obs='.1024*1024*2, 'oflag=direct', 'of='.$path)
+            or die("Unable to execute dd: $!");
     });
 }
-- 
1.6.6.1
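For illustration only (a minimal standalone sketch, not part of the patch; the
script name and argument handling are assumptions), the same technique can be
exercised outside the module: replace the current process with dd so it streams
stdin to an NFS-backed file in 2MB chunks using direct IO.

    #!/usr/bin/perl
    # Sketch: stream stdin to a file via dd with a 2MB output block size and
    # direct IO (oflag=direct), bypassing the page-cache writeback that causes
    # the NFS instability described above. Path handling is illustrative.
    use strict;
    use warnings;

    my $path = shift @ARGV
        or die("usage: $0 <output path>\n");

    # exec replaces this process with dd, which inherits our stdin and writes
    # everything it reads to $path.
    exec('dd', 'obs='.(1024*1024*2), 'oflag=direct', 'of='.$path)
        or die("Unable to execute dd: $!");

Using exec rather than system here means the data never passes through a Perl
buffer at all: dd inherits the caller's stdin and does the copying itself.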
Richard W.M. Jones
2010-Apr-08 08:56 UTC
[Libguestfs] [PATCH] RHEV: Use dd and direct io to write to NFS
On Tue, Apr 06, 2010 at 04:44:54PM +0100, Matthew Booth wrote:
> I've been experiencing severe stability issues writing large amounts of data to
> an NFS export (I have never successfully written 8GB of data without having to
> reboot my machine). This patch alleviates the problem. I have successfully
> exported an 8GB disk with this patch in place.
> ---
>  lib/Sys/VirtV2V/Target/RHEV.pm |   29 ++++++-----------------------
>  1 files changed, 6 insertions(+), 23 deletions(-)
> 
> diff --git a/lib/Sys/VirtV2V/Target/RHEV.pm b/lib/Sys/VirtV2V/Target/RHEV.pm
> index 4b663ef..9b4b73a 100644
> --- a/lib/Sys/VirtV2V/Target/RHEV.pm
> +++ b/lib/Sys/VirtV2V/Target/RHEV.pm
> @@ -285,29 +285,12 @@ sub open
>                           path => "$path.meta",
>                           error => $!)));
> 
> -        # Open the data file for writing
> -        my $data;
> -        open($data, '>', $path)
> -            or die(__x("Unable to open {path} for writing: {error}",
> -                       path => "$path",
> -                       error => $!));
> -
> -        # Write all data received to the data file
> -        my $buffer;
> -
> -        for(;;) {
> -            my $ret = sysread(STDIN, $buffer, 64*1024);
> -            die("Error in NFSHelper reading from stdin: $!")
> -                unless (defined($ret));
> -            last if ($ret == 0);
> -
> -            print $data $buffer;
> -        }
> -
> -        close($data)
> -            or die(user_message(__x("Error closing {path}: {error}",
> -                                    path => "$path",
> -                                    error => $!)));
> +        # Write the remainder of the data using dd in 2MB chunks
> +        # XXX - mbooth at redhat.com 06/04/2010 (Fedora 12 writing to RHEL 5 NFS)
> +        # Use direct IO as writing a large amount of data to NFS regularly
> +        # crashes my machine. Using direct io crashes less.
> +        exec('dd', 'obs='.1024*1024*2, 'oflag=direct', 'of='.$path)
> +            or die("Unable to execute dd: $!");
>      });

Good old NFS.  ACK.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat  http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://et.redhat.com/~rjones/libguestfs/
See what it can do: http://et.redhat.com/~rjones/libguestfs/recipes.html