Albert Chin
2007-May-21 20:16 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
We're testing an X4100M2, 4GB RAM, with a 2-port 4Gb Fibre Channel QLogic connected to a 2Gb Fibre Channel 6140 array. The X4100M2 is running OpenSolaris b63. We have 8 drives in the Sun 6140 configured as individual RAID-0 arrays and have a ZFS RAID-Z2 array comprising 7 of the drives (for testing, we're treating the 6140 as JBOD for now). The RAID-0 stripe size is 128k.

We're testing updates to the X4100M2 using rsync across the network with ssh and using NFS:

1. [copy 400MB of gcc-3.4.3 via rsync/NFS]
   # mount file-server:/opt/test /mnt
   # rsync -vaHR --delete --stats gcc343 /mnt
   ...
   sent 409516941 bytes  received 80590 bytes  5025736.58 bytes/sec

2. [copy 400MB of gcc-3.4.3 via rsync/ssh]
   # rsync -vaHR -e 'ssh' --delete --stats gcc343 file-server:/opt/test
   ...
   sent 409516945 bytes  received 80590 bytes  9637589.06 bytes/sec

The network is 100Mbit. /etc/system on the file server is:

  set maxphys = 0x800000
  set ssd:ssd_max_throttle = 64
  set zfs:zfs_nocacheflush = 1

Why can't the NFS performance match that of SSH?

-- 
albert chin (china at thewrittenword.com)
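A quick sanity check of the rates above against the 100Mbit link (a sketch using only numbers quoted in the message): ssh is already close to the wire ceiling, while NFS is well below it.

```python
# Compare the reported rsync throughput with the 100 Mbit/s link ceiling.
# Both bytes/sec figures are taken verbatim from the rsync --stats output above.

LINK_BITS_PER_SEC = 100_000_000          # 100 Mbit/s Ethernet
wire_ceiling = LINK_BITS_PER_SEC / 8     # ~12.5 MB/s theoretical maximum

nfs_rate = 5_025_736.58                  # rsync over NFS (bytes/sec)
ssh_rate = 9_637_589.06                  # rsync over ssh (bytes/sec)

print(f"link ceiling : {wire_ceiling / 1e6:.1f} MB/s")
print(f"NFS          : {nfs_rate / wire_ceiling * 100:.0f}% of ceiling")
print(f"ssh          : {ssh_rate / wire_ceiling * 100:.0f}% of ceiling")
```

So the question in the thread is really why NFS leaves roughly half the link idle.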
Marion Hakanson
2007-May-21 20:23 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
opensolaris-zfs-discuss at mlists.thewrittenword.com said:
> Why can't the NFS performance match that of SSH?

Hi Albert,

My first guess is the NFS vs array cache-flush issue. Have you configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll make a huge difference for NFS clients of ZFS file servers.

Also, you might make your ssh connection go faster if you change the rsync arg from "-e 'ssh'" to "-e 'ssh -c blowfish'". Depends, of course, on how fast both client and server CPUs are.

Regards,

Marion
Robert Thurlow
2007-May-21 20:55 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Albert Chin wrote:
> Why can't the NFS performance match that of SSH?

One big reason is that the sending CPU has to do all the comparisons to compute the list of files to be sent - it has to fetch the attributes from both local and remote and compare timestamps. With ssh, local processes at each end do lstat() calls in parallel and chatter about the timestamps, and the lstat() calls are much cheaper. I would wonder how long the attr-chatter takes in your two cases before bulk data starts to be sent - deducting that should reduce the imbalance you're seeing. If rsync were more multi-threaded and could manage multiple lstat() calls in parallel, NFS would be closer.

Rob T
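Robert's point can be put into a back-of-the-envelope model (illustrative only: the 0.2 ms round trip and 5 µs lstat() costs below are assumptions, not measurements from this thread). If every remote attribute fetch costs a network round trip and they are issued serially, the metadata pass alone can dwarf a cheap local scan.

```python
# Toy model: serial per-file attribute round trips (the NFS case) vs.
# local lstat() calls running on each end (the ssh/rsync case).
# All costs are assumed values, chosen only to show the shape of the effect.

n_files = 10_000               # files in the tree being compared
rtt = 0.2e-3                   # assumed LAN round trip: 0.2 ms
local_lstat = 5e-6             # assumed local lstat(): 5 us

nfs_metadata_pass = n_files * (local_lstat + rtt)   # remote attrs, one at a time
ssh_metadata_pass = n_files * local_lstat           # both ends scan locally

print(f"NFS-style metadata pass: {nfs_metadata_pass:.2f} s")
print(f"ssh-style metadata pass: {ssh_metadata_pass:.2f} s")
```

Under these assumptions the serial round trips cost two orders of magnitude more, which is why issuing lstat()/GETATTR calls in parallel would narrow the gap.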
Albert Chin
2007-May-21 22:40 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 02:55:18PM -0600, Robert Thurlow wrote:
> One big reason is that the sending CPU has to do all the comparisons to
> compute the list of files to be sent - it has to fetch the attributes
> from both local and remote and compare timestamps. With ssh, local
> processes at each end do lstat() calls in parallel and chatter about
> the timestamps, and the lstat() calls are much cheaper.

Well, there is no data on the file server as this is an initial copy, so there is very little for rsync to do. To compare the rsync overhead, I conducted some more tests, using tar:

1. [copy 400MB of gcc-3.4.3 via tar/NFS to ZFS file system]
   # mount file-server:/opt/test /mnt
   # time tar cf - gcc343 | (cd /mnt; tar xpf - )
   ...
   419721216 bytes in 1:08.65 => 6113928.86 bytes/sec

2. [copy 400MB of gcc-3.4.3 via tar/ssh to ZFS file system]
   # time tar cf - gcc343 | ssh -oForwardX11=no file-server \
       'cd /opt/test; tar xpf -'
   ...
   419721216 bytes in 0:35.82 => 11717510.21 bytes/sec

3. [copy 400MB of gcc-3.4.3 via tar/NFS to Fibre-attached file system]
   # mount file-server:/opt/fibre-disk /mnt
   # time tar cf - gcc343 | (cd /mnt; tar xpf - )
   ...
   419721216 bytes in 0:56.87 => 7380362.51 bytes/sec

4. [copy 400MB of gcc-3.4.3 via tar/ssh to Fibre-attached file system]
   # time tar cf - gcc343 | ssh -oForwardX11=no file-server \
       'cd /opt/fibre-disk; tar xpf -'
   ...
   419721216 bytes in 0:35.89 => 11694656.34 bytes/sec

So, it would seem from #1 and #2 that NFS performance can stand some improvement. And I'd have thought that since #2/#4 were similar, #1/#3 should be as well. Maybe some NFS/ZFS issue would explain the discrepancy.
I think the bigger problem is the NFS performance penalty, so we'll go lurk somewhere else to find out what the problem is.

-- 
albert chin (china at thewrittenword.com)
Robert Thurlow
2007-May-21 22:55 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Albert Chin wrote:
> Well, there is no data on the file server as this is an initial copy,

Sorry Albert, I should have noticed that from your e-mail :-(

> I think the bigger problem is the NFS performance penalty so we'll go
> lurk somewhere else to find out what the problem is.

Is this with Solaris 10 or OpenSolaris on the client as well? I guess this goes back to some of the "why is tar slow over NFS" discussions we've had, some here and some on nfs-discuss. A more multi-threaded workload would help; so will planned work to focus on performance of NFS and ZFS together, which can sometimes be slower than expected.

Rob T
Albert Chin
2007-May-21 23:09 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 04:55:35PM -0600, Robert Thurlow wrote:
> Albert Chin wrote:
> > I think the bigger problem is the NFS performance penalty so we'll go
> > lurk somewhere else to find out what the problem is.
>
> Is this with Solaris 10 or OpenSolaris on the client as well?

Client is RHEL 4/x86_64. But we just ran a concurrent tar/SSH across Solaris 10, HP-UX 11.23/PA, 11.23/IA, AIX 5.2, 5.3, RHEL 4/x86, and 4/x86_64, and the average was ~4562187 bytes/sec. But the gcc343 copy on each of these machines isn't the same size. It's certainly less than 400MBx7 though. While performance on one system is fine, things degrade when you add clients.

> I guess this goes back to some of the "why is tar slow over NFS"
> discussions we've had, some here and some on nfs-discuss. A more
> multi-threaded workload would help; so will planned work to focus
> on performance of NFS and ZFS together, which can sometimes be
> slower than expected.

But still, how is tar/SSH any more multi-threaded than tar/NFS? I've posted to nfs-discuss so maybe someone knows something.

-- 
albert chin (china at thewrittenword.com)
Nicolas Williams
2007-May-21 23:11 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> But still, how is tar/SSH any more multi-threaded than tar/NFS?

It's not that it is, but that NFS sync semantics and ZFS sync semantics conspire against single-threaded performance.
Albert Chin
2007-May-21 23:21 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how is tar/SSH any more multi-threaded than tar/NFS?
>
> It's not that it is, but that NFS sync semantics and ZFS sync
> semantics conspire against single-threaded performance.

That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But that only helps ZFS. Is there something similar for NFS?

-- 
albert chin (china at thewrittenword.com)
Nicolas Williams
2007-May-21 23:30 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
> On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> > It's not that it is, but that NFS sync semantics and ZFS sync
> > semantics conspire against single-threaded performance.
>
> That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But
> that only helps ZFS. Is there something similar for NFS?

NFS's semantics for open() and friends are that they are synchronous, whereas POSIX's semantics are that they are not. You're paying for a sync() after every open.

Nico
--
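The per-create penalty Nico describes can be sketched locally. This is a rough stand-in, not an NFS measurement: an fsync() after each create approximates the stable-storage commit an NFS server must make (a faithful analogue would also sync the directory), and the absolute numbers depend entirely on the disk underneath.

```python
import os
import tempfile
import time

def create_files(directory, count, sync):
    """Create `count` empty files; optionally fsync each one, mimicking
    the synchronous-create obligation an NFS server is under.
    Returns elapsed wall-clock seconds."""
    start = time.perf_counter()
    for i in range(count):
        fd = os.open(os.path.join(directory, f"f{i}"),
                     os.O_CREAT | os.O_WRONLY)
        if sync:
            os.fsync(fd)   # force the create to stable storage before moving on
        os.close(fd)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d1, \
         tempfile.TemporaryDirectory() as d2:
        async_t = create_files(d1, 500, sync=False)
        sync_t = create_files(d2, 500, sync=True)
        print(f"async creates: {async_t:.3f}s  sync creates: {sync_t:.3f}s")
```

On rotating media the sync variant is typically dramatically slower, which is the single-threaded cost a tar-over-NFS extraction pays on every file.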
Frank Cusack
2007-May-22 01:18 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On May 21, 2007 6:30:42 PM -0500 Nicolas Williams <Nicolas.Williams at sun.com> wrote:
> NFS's semantics for open() and friends are that they are synchronous,
> whereas POSIX's semantics are that they are not. You're paying for a
> sync() after every open.

nocto?
Paul Armstrong
2007-May-22 03:26 UTC
[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over
Given you're not using compression for rsync, the only thing I can think of would be that the stream compression of SSH is helping here.

This message posted from opensolaris.org
Albert Chin
2007-May-22 03:51 UTC
[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over
On Mon, May 21, 2007 at 08:26:37PM -0700, Paul Armstrong wrote:
> Given you're not using compression for rsync, the only thing I can
> think of would be that the stream compression of SSH is helping
> here.

SSH compresses by default? I thought you had to specify -oCompression and/or -oCompressionLevel?

-- 
albert chin (china at thewrittenword.com)
Casper.Dik at Sun.COM
2007-May-22 08:04 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
> On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
> > That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But
> > that only helps ZFS. Is there something similar for NFS?
>
> NFS's semantics for open() and friends are that they are synchronous,
> whereas POSIX's semantics are that they are not. You're paying for a
> sync() after every open.

I'm not sure the semantics of NFS are at all relevant for the complete performance picture.

NFS writes are(/used to be) synchronous, but the client hides that from processes; similarly, the client could hide the fact that creates are synchronous, but that's a bit trickier because creates can fail.

Casper
Pål Baltzersen
2007-May-22 11:20 UTC
[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over NFS?
Try mounting the other way, so you read from NFS and write to ZFS (~DAS). That should perform significantly better. NFS write is slow (compared to read) because of synchronous acks. If you for some reason can't mount the other way, then you may want to play with NFS mount options for write-buffer sizes (wsize=) and async (risky but faster!). I guess rsync has a wider ack-window that makes it faster than NFS write (i.e. buffers more without wait-state). File sizes may have an influence on the picture.

Pål
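Pål's mount-option suggestion, spelled out as hypothetical Linux-side fragments. The hostname and paths are placeholders, and note that on Linux the risky `async` behavior is a server-side export option; the server in this thread is Solaris, where the closest analogue is the ZIL/cache-flush tuning discussed elsewhere in the thread.

```shell
# Client side (Linux /etc/fstab), illustrative values only -- tune to taste:
file-server:/opt/test  /mnt  nfs  rw,rsize=32768,wsize=32768  0  0

# Server side (Linux /etc/exports): 'async' acknowledges writes before they
# reach stable storage -- faster, but risks silent data loss on a crash:
/opt/test  *(rw,async)
```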
Darren J Moffat
2007-May-22 11:26 UTC
[zfs-discuss] Re: Rsync update to ZFS server over SSH faster than over NFS?
Pål Baltzersen wrote:
> Try mounting the other way, so you read from NFS and write to ZFS (~DAS).
> That should perform significantly better. NFS write is slow (compared to
> read) because of synchronous acks.

Could also try NFS over SSH :-)

-- 
Darren J Moffat
Dick Davies
2007-May-22 14:23 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
<allyourbase>
Take off every ZIL!

http://number9.hellooperator.net/articles/2007/02/12/zil-communication
</allyourbase>

On 22/05/07, Albert Chin <opensolaris-zfs-discuss at mlists.thewrittenword.com> wrote:
> That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But
> that only helps ZFS. Is there something similar for NFS?
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
Nicolas Williams
2007-May-22 15:13 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Tue, May 22, 2007 at 10:04:34AM +0200, Casper.Dik at Sun.COM wrote:
> I'm not sure the semantics of NFS are at all relevant for the
> complete performance picture.
>
> NFS writes are(/used to be) synchronous, but the client hides that
> from processes; similarly, the client could hide the fact that creates
> are synchronous, but that's a bit trickier because creates can fail.

But it sounds tricky enough that it can't be pulled off. It'd be nice to have async versions of all fs-related syscalls...
Albert Chin
2007-May-22 15:40 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Tue, May 22, 2007 at 03:23:46PM +0100, Dick Davies wrote:
> <allyourbase>
> Take off every ZIL!
>
> http://number9.hellooperator.net/articles/2007/02/12/zil-communication
> </allyourbase>

Interesting. With "set zfs:zil_disable = 1", I get:

1. [copy 400MB of gcc-3.4.3 via rsync/NFS]
   # mount file-server:/opt/test /mnt
   # rsync -vaHR --delete --stats gcc343 /mnt
   ...
   (old) sent 409516941 bytes  received 80590 bytes  5025736.58 bytes/sec
   (new) sent 409516941 bytes  received 80590 bytes  7380135.69 bytes/sec

2. [copy 400MB of gcc-3.4.3 via tar/NFS to ZFS file system]
   # mount file-server:/opt/test /mnt
   # time tar cf - gcc343 | (cd /mnt; tar xpf - )
   ...
   (old) 419721216 bytes in 1:08.65 => 6113928.86 bytes/sec
   (new) 419721216 bytes in 0:44.67 => 9396042.44 bytes/sec

-- 
albert chin (china at thewrittenword.com)
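Expressed as speedups, the zil_disable figures above work out to roughly 1.5x for both tests:

```python
# Throughput before/after "set zfs:zil_disable = 1", from the figures above.
cases = {
    "rsync/NFS": (5_025_736.58, 7_380_135.69),
    "tar/NFS":   (6_113_928.86, 9_396_042.44),
}
for name, (old, new) in cases.items():
    print(f"{name}: {new / old:.2f}x  ({(new / old - 1) * 100:.0f}% faster)")
```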
Albert Chin
2007-May-22 16:35 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On Mon, May 21, 2007 at 13:23:48 -0800, Marion Hakanson wrote:
> My first guess is the NFS vs array cache-flush issue. Have you
> configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll
> make a huge difference for NFS clients of ZFS file servers.

Doesn't setting zfs:zfs_nocacheflush=1 achieve the same result?

  http://blogs.digitar.com/jjww/?itemid=44

The 6140 has a non-volatile cache. Dunno if it's order-preserving though.

-- 
albert chin (china at thewrittenword.com)
Marion Hakanson
2007-May-22 17:45 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
opensolaris-zfs-discuss at mlists.thewrittenword.com said:
> Doesn't setting zfs:zfs_nocacheflush=1 achieve the same result?
> http://blogs.digitar.com/jjww/?itemid=44

Yes, it should. You're running a more modern (future) Solaris release than I've got here (10U3).

> The 6140 has a non-volatile cache. Dunno if it's order-preserving though.

Yikes. I didn't even want to think about that. For ZFS, I'd think it shouldn't matter, though. Either "the" ueber-block gets written, or it doesn't. One then only depends on the whole cache getting flushed to disk eventually, right?

Marion
Paul Armstrong
2007-May-22 23:07 UTC
[zfs-discuss] Re: Re: Rsync update to ZFS server over SSH faster than over
> SSH compresses by default? I thought you had to specify -oCompression
> and/or -oCompressionLevel?

Depends on how it was compiled. Looking at the man pages for Solaris, it looks like it's turned off, so yes, you'd have to set -oCompression.

Paul
Roch Bourbonnais
2007-May-25 16:22 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Le 22 mai 07 à 01:11, Nicolas Williams a écrit :
> It's not that it is, but that NFS sync semantics and ZFS sync
> semantics conspire against single-threaded performance.

Hi Nic, I don't agree with the blanket statement, so to clarify: there are 2 independent things at play here.

a) NFS sync semantics conspire against single-thread performance with any backend filesystem. However, NVRAM normally offers some relief of the issue.

b) ZFS sync semantics, along with the storage software and the imprecise protocol in between, conspire against ZFS performance of some workloads on NVRAM-backed storage. NFS is one of the affected workloads.

The conjunction of the 2 causes worse than expected NFS performance over a ZFS backend running __on NVRAM-backed storage__. If you are not considering NVRAM storage, then I know of no ZFS/NFS specific problems.

Issue b) is being dealt with, by both Solaris and storage vendors (we need a refined protocol); issue a) is not related to ZFS and is rather a fundamental NFS issue. Maybe a future NFS protocol will help.

Net net: if one finds a way to 'disable cache flushing' on the storage side, then one reaches the state we'll be in, out of the box, when b) is implemented by Solaris _and_ the storage vendor. At that point, ZFS becomes a fine NFS server not only on JBOD, as it is today, but also on NVRAM-backed storage.

It's complex enough, I thought it was worth repeating.

-r
Roch Bourbonnais
2007-May-25 16:30 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Le 22 mai 07 à 01:21, Albert Chin a écrit :
> That's why we have "set zfs:zfs_nocacheflush = 1" in /etc/system. But
> that only helps ZFS. Is there something similar for NFS?

With this set, we also reach a state where NFS/ZFS/NVRAM works as it should. So it should speed things up.

The problem is: once it starts to go in /etc/system it will spread. Customers with no NVRAM storage will use it and some will experience pool corruption.

-r
Roch Bourbonnais
2007-May-25 16:32 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Le 22 mai 07 à 03:18, Frank Cusack a écrit :
> > NFS's semantics for open() and friends are that they are synchronous,
> > whereas POSIX's semantics are that they are not. You're paying for a
> > sync() after every open.
>
> nocto?

I think it's after every client close. But on the server side, there are lots of operations that also require a commit. So nocto is not the silver bullet.

-r
Roch Bourbonnais
2007-May-25 16:40 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
Le 22 mai 07 à 16:23, Dick Davies a écrit :
> <allyourbase>
> Take off every ZIL!
>
> http://number9.hellooperator.net/articles/2007/02/12/zil-communication
> </allyourbase>

This causes client corruption, but also database corruption and corruption of just about anything that carefully manages data. Yes, the zpool will survive, but it may be the only thing that does. So please don't do this.

-r
Spencer Shepler
2007-May-25 18:03 UTC
[zfs-discuss] Rsync update to ZFS server over SSH faster than over NFS?
On May 25, 2007, at 11:22 AM, Roch Bourbonnais wrote:
> There are 2 independent things at play here.
>
> a) NFS sync semantics conspire against single-thread performance
> with any backend filesystem. However, NVRAM normally offers some
> relief of the issue.
>
> b) ZFS sync semantics, along with the storage software and the
> imprecise protocol in between, conspire against ZFS performance of
> some workloads on NVRAM-backed storage. NFS is one of the affected
> workloads.

I will add a third category: response time of individual requests.

One can think of the ssh stream of filesystem data as one large remote procedure call that says "put this directory tree and contents on the server". The time it takes is essentially the time it takes to transfer the filesystem data. The latency on the very last of the requests, amortized across the entire stream, is zero.
For the NFS client, there is response time injected at each request, and the best way to amortize this is through parallelism, which is very difficult for some applications. Add the items in a) and b) and there is a lot to deal with. Not insurmountable, but it takes a little more effort to build an effective solution.

Spencer
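Spencer's amortization argument in toy-model form (all numbers below are assumptions for illustration, not measurements from this thread): one large streamed transfer pays its latency once, while N small synchronous requests pay it N times unless issued in parallel.

```python
# Toy model of per-request latency amortization. All values are assumed.
n_requests = 10_000        # individual NFS operations making up the copy
rtt = 0.2e-3               # assumed per-request round trip: 0.2 ms
transfer_time = 35.0       # seconds to move the bulk data (either transport)

stream_total = transfer_time + rtt                 # one big "RPC": latency paid once
serial_total = transfer_time + n_requests * rtt    # latency paid on every request
parallel8 = transfer_time + n_requests * rtt / 8   # 8-way parallelism hides most of it

print(f"streamed : {stream_total:.2f} s")
print(f"serial   : {serial_total:.2f} s")
print(f"8-way    : {parallel8:.2f} s")
```

Even a modest per-request round trip adds seconds to the serial case, and parallelism recovers most of it, which matches the thread's repeated point that a more multi-threaded workload would help NFS.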