Hello zfs-discuss,
Just a note to everyone experimenting with this - if you change it
online it only takes effect when pools are exported and then imported.
ps. I didn't use it for my last posted benchmarks - with it I get about
35,000 IOPS and 0.2 ms latency - but it's meaningless.
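For reference, the tunable under discussion is zil_disable, which at the time could also be set persistently via /etc/system (a sketch; as noted above, the setting is only picked up when the file systems are next mounted, e.g. after an export/import):

```
* /etc/system fragment: disable the ZFS Intent Log (test systems only -
* synchronous write semantics are lost while this is set)
set zfs:zil_disable = 1
```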
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Robert -
This isn't surprising (either the switch or the results). Our long term
fix for tweaking this knob is:
6280630 zil synchronicity
Which would add 'zfs set sync' as a per-dataset option. A cut from the
comments (which aren't visible on opensolaris):
sync={deferred,standard,forced}
Controls synchronous semantics for the dataset.
When set to 'standard' (the default), synchronous
operations such as fsync(3C) behave precisely as defined
in fcntl.h(3HEAD).
When set to 'deferred', requests for synchronous
semantics are ignored. However, ZFS still guarantees
that ordering is preserved -- that is, consecutive
operations reach stable storage in order. (If a thread
performs operation A followed by operation B, then the
moment that B reaches stable storage, A is guaranteed to
be on stable storage as well.) ZFS also guarantees that
all operations will be scheduled for write to stable
storage within a few seconds, so that an unexpected
power loss only takes the last few seconds of change
with it.
When set to 'forced', all operations become synchronous.
No operation will return until all previous operations
have been committed to stable storage. This option can
be useful if an application is found to depend on
synchronous semantics without actually requesting them;
otherwise, it will just make everything slow, and is not
recommended.
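As a rough illustration of the difference between 'standard' and 'deferred' (a generic sketch, not ZFS code - the file path is arbitrary): under 'standard', fsync must not return until everything written to the file so far is on stable storage; under 'deferred', the same call would return immediately, with only the ordering guarantee described above.

```python
import os
import tempfile

# Write "operation A" then "operation B", then request synchronous
# semantics with fsync. Under sync=standard, fsync blocks until both
# writes are durable. Under sync=deferred, fsync would be a no-op,
# but ZFS still guarantees A reaches stable storage no later than B.
path = os.path.join(tempfile.gettempdir(), "zfs_sync_demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
os.write(fd, b"operation A\n")
os.write(fd, b"operation B\n")
os.fsync(fd)   # 'standard': returns only once A and B are durable
os.close(fd)
```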
There was a thread describing the usefulness of this (for builds where
all-or-nothing behavior over a long period of time is wanted), but I can't find it.
- Eric
On Mon, Aug 07, 2006 at 06:07:53PM +0200, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> Just a note to everyone experimenting with this - if you change it
> online it only takes effect when pools are exported and then imported.
>
>
> ps. I didn't use it for my last posted benchmarks - with it I get about
> 35,000 IOPS and 0.2 ms latency - but it's meaningless.
>
>
>
> --
> Best regards,
> Robert mailto:rmilkowski at task.gda.pl
> http://milek.blogspot.com
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Eric Schrock, Solaris Kernel Development http://blogs.sun.com/eschrock
Not quite, zil_disable is inspected on file system mounts. It's also
looked at dynamically on every write for zvols.
Neil.
Robert Milkowski wrote On 08/07/06 10:07,:
> Hello zfs-discuss,
>
> Just a note to everyone experimenting with this - if you change it
> online it only takes effect when pools are exported and then imported.
>
> ps. I didn't use it for my last posted benchmarks - with it I get about
> 35,000 IOPS and 0.2 ms latency - but it's meaningless.
Hello Eric,
Monday, August 7, 2006, 6:29:45 PM, you wrote:
ES> Robert -
ES> This isn't surprising (either the switch or the results). Our long term
ES> fix for tweaking this knob is:
ES> 6280630 zil synchronicity
ES> Which would add 'zfs set sync' as a per-dataset option. A cut from the
ES> comments (which aren't visible on opensolaris):
ES> sync={deferred,standard,forced}
ES> Controls synchronous semantics for the dataset.
ES>
ES> When set to 'standard' (the default), synchronous
ES> operations such as fsync(3C) behave precisely as defined
ES> in fcntl.h(3HEAD).
ES> When set to 'deferred', requests for synchronous
ES> semantics are ignored. However, ZFS still guarantees
ES> that ordering is preserved -- that is, consecutive
ES> operations reach stable storage in order. (If a thread
ES> performs operation A followed by operation B, then the
ES> moment that B reaches stable storage, A is guaranteed to
ES> be on stable storage as well.) ZFS also guarantees that
ES> all operations will be scheduled for write to stable
ES> storage within a few seconds, so that an unexpected
ES> power loss only takes the last few seconds of change
ES> with it.
ES> When set to 'forced', all operations become synchronous.
ES> No operation will return until all previous operations
ES> have been committed to stable storage. This option can
ES> be useful if an application is found to depend on
ES> synchronous semantics without actually requesting them;
ES> otherwise, it will just make everything slow, and is not
ES> recommended.
ES> There was a thread describing the usefulness of this (for builds where
ES> all-or-nothing over a long period of time), but I can't find it.
I remember the thread. Do you know if anyone is currently working on
it and when it is expected to be integrated into snv?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Hello Neil,
Monday, August 7, 2006, 6:40:01 PM, you wrote:
NP> Not quite, zil_disable is inspected on file system mounts.
I guess you're right that umount/mount will suffice - I just hadn't had time
to check it, and export/import worked.
Anyway, is there a way for file systems to make it take effect without
unmount/mount in current nevada?
NP> It's also looked at dynamically on every write for zvols.
Good to know, thank you.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
Robert Milkowski wrote:
> Hello Neil,
>
> Monday, August 7, 2006, 6:40:01 PM, you wrote:
>
> NP> Not quite, zil_disable is inspected on file system mounts.
>
> I guess you're right that umount/mount will suffice - I just hadn't had time
> to check it, and export/import worked.
>
> Anyway, is there a way for file systems to make it take effect without
> unmount/mount in current nevada?
No, sorry.
Neil
Robert Milkowski wrote:
> Hello Eric,
>
> Monday, August 7, 2006, 6:29:45 PM, you wrote:
>
> ES> This isn't surprising (either the switch or the results). Our long term
> ES> fix for tweaking this knob is:
>
> ES> 6280630 zil synchronicity
>
> ES> Which would add 'zfs set sync' as a per-dataset option. A cut from the
> ES> comments (which aren't visible on opensolaris):
[...]
> I remember the thread. Do you know if anyone is currently working on
> it and when it is expected to be integrated into snv?
I'm slated to work on it after I finish up some other ZIL bugs and
performance fixes.
Neil
Hello Neil,
Tuesday, August 8, 2006, 3:54:31 PM, you wrote:
NP> Robert Milkowski wrote:
>> Hello Neil,
>>
>> Monday, August 7, 2006, 6:40:01 PM, you wrote:
>>
>> NP> Not quite, zil_disable is inspected on file system mounts.
>>
>> I guess you're right that umount/mount will suffice - I just hadn't had time
>> to check it, and export/import worked.
>>
>> Anyway, is there a way for file systems to make it take effect without
>> unmount/mount in current nevada?
NP> No, sorry.
ok, thank you for the info.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com