I'm trying to add a pair of new cache devices to my zpool, but I'm getting the following error:

  # zpool add space cache c10t7d0
  Assertion failed: nvlist_lookup_string(cnv, "path", &path) == 0, file zpool_vdev.c, line 650
  Abort (core dumped)

I replaced a failed disk a few minutes before trying this, so the zpool is still resilvering. The pool also has an existing cache device, so this will be the second (with a third waiting at c10t6d0). The error message is kind of opaque, and I don't have the ZFS source handy to look at the assertion and see what it's checking. Is this caused by the resilvering, or is something wrong?


Scott
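As it turns out later in the thread, the add succeeds once the resilver finishes. A minimal sketch of checking for that before retrying, assuming the pool is named "space" and the device names are the ones quoted above:

  # zpool status -v space                    # wait until "resilver in progress" is gone from the scrub: line
  # zpool add space cache c10t7d0 c10t6d0    # retry the add once the resilver has completed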
As for source, here you go :)

http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650
--
This message posted from opensolaris.org
On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai <mritun+opensolaris at gmail.com> wrote:
> As for source, here you go :)
>
> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool/zpool_vdev.c#650

Thanks. It's in the middle of get_replication(), so I suspect it's a bug--zpool tries to check the replication status of the existing vdevs and croaks in the process. As it turns out, I was able to add the cache devices just fine once the resilver completed.

Out of curiosity, what's the easiest way to shove a file into the L2ARC? Repeated reads with dd if=file of=/dev/null don't appear to do the trick.


Scott
Scott Laird wrote:
> Thanks. It's in the middle of get_replication(), so I suspect it's a
> bug--zpool tries to check the replication status of the existing vdevs
> and croaks in the process. As it turns out, I was able to add the
> cache devices just fine once the resilver completed.

It is a bug because the assertion failed. Please file one.
http://en.wikipedia.org/wiki/Assertion_(computing)
http://bugs.opensolaris.org

> Out of curiosity, what's the easiest way to shove a file into the
> L2ARC? Repeated reads with dd if=file of=/dev/null don't appear to
> do the trick.

To put something in the L2ARC, it has to be purged from the ARC. So until you run out of space in the ARC, nothing will be placed into the L2ARC.

[note to self: arcstat should know about the l2 kstats...]
 -- richard
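One rough way to confirm whether anything is actually landing in the L2ARC is to watch the l2_* counters in the arcstats kstat while re-reading the working set. A sketch (the exact statistic names may vary between builds; these are the arcstats counters on builds with L2ARC support):

  # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
  # kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
  # kstat -p zfs:0:arcstats:l2_size 5    # re-sample every 5 seconds

If l2_size stays at 0 while the ARC is full, nothing is being fed to the cache devices.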
On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling <Richard.Elling at sun.com> wrote:
> Scott Laird wrote:
>> Out of curiosity, what's the easiest way to shove a file into the
>> L2ARC? Repeated reads with dd if=file of=/dev/null don't appear to
>> do the trick.
>
> To put something in the L2ARC, it has to be purged from the ARC.
> So until you run out of space in the ARC, nothing will be placed into
> the L2ARC.

I have a ~50G working set and 8 GB of RAM, so I'm out of space in my ARC. My read rate is low enough for the disks to keep up, but I'd like to see lower latency. Also, 30G SSDs were cheap last week :-).

My big problem is that dd if=file of=/dev/null doesn't appear to actually read the whole file--I can loop over 50G of data in about 20 seconds while doing under 100 MB/sec of disk I/O. Does Solaris's dd have some sort of of=/dev/null optimization? Adding conv=swab seems to make it work better, but I'm still only seeing write rates of ~1 MB/sec per SSD, even though they're mostly empty.


Scott
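A quick sanity check on how much data dd actually moved is its own exit summary: it prints "N+M records in" / "records out" lines, and the record count times the block size is the number of bytes read. A sketch, with a hypothetical file name, using ptime for wall-clock timing:

  # ptime dd if=/space/somefile of=/dev/null bs=1024k

If the byte count comes out far below the file size, the reads are being satisfied from the ARC (or the file is sparse) rather than coming off disk.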
[no Sun folks replying to this? ok, let me do more spam then...]

Scott, thank you so much for the testing spirit and for sharing the results with the list! We architects can talk all day long and still have no idea how the open bits would behave on "any box", not just the poster-boy kind of expensive boxes with tons of hardware.

However, I would just like to suggest that the SSD performance gain will show up mostly in rates (IOPS), not in throughput (MB/s). If you measure the gain in terms of rates, you might be (actually should be, by our architecting theory) much more impressed. [well, only if you care about database applications, beyond just our personal digital media files on the company network... :-)]

Please see the testing below, done before the official 10/2008 Sun announcement of SSD availability for the 7000 series, as well as the tech talk by Brendan--a bit long (and less fun than my spam), but I am sure it is worth the time to study.

http://blogs.sun.com/brendan/entry/test

Best,
z
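A rough back-of-envelope illustration of the rates-vs-throughput point (ballpark numbers, not measurements from this thread): a 7,200 RPM disk delivers on the order of 100-150 random read IOPS, which at 8 KB per read is only about 1 MB/s, so an SSD serving the same random reads at several thousand IOPS still posts a modest MB/s figure even though per-read latency drops from milliseconds to well under one.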
Scott Laird writes:
> I have a ~50G working set and 8 GB of RAM, so I'm out of space in my
> ARC. My read rate is low enough for the disks to keep up, but I'd
> like to see lower latency. Also, 30G SSDs were cheap last week :-).
>
> My big problem is that dd if=file of=/dev/null doesn't appear to
> actually read the whole file--I can loop over 50G of data in about 20
> seconds while doing under 100 MB/sec of disk I/O. Does Solaris's dd
> have some sort of of=/dev/null optimization?

Not that I know of, so this result is very strange.

> Adding conv=swab seems to make it work better, but I'm still only
> seeing write rates of ~1 MB/sec per SSD, even though they're mostly
> empty.

About installing in the L2ARC: since the L2ARC is designed to help with latency (vs. throughput), we only install a buffer in the L2ARC if we don't detect sequential access. Buffers that were streamed in will not be installed in the L2ARC (unless l2arc_noprefetch is unset).

-r
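A hedged sketch of how that tunable is commonly changed, assuming the variable is still named l2arc_noprefetch in your build (setting it to 0 allows prefetched/streamed buffers to be installed in the L2ARC):

  # echo l2arc_noprefetch/W0t0 | mdb -kw                  # change the running kernel
  # echo "set zfs:l2arc_noprefetch = 0" >> /etc/system    # persist; takes effect at next boot

Flip it back to 1 afterwards if you only want it off while priming the cache devices.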