Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
===================================
Hello,

I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which uses zfs version 13. I
had an existing zpool:
------------------------------
[ethan at opensolaris ~]$ zpool status -v data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t1d0p0  ONLINE       0     0     0
            c4t9d0p0  ONLINE       0     0     0
        ...
        cache
          c4t15d0p0   ONLINE       0     0     0

errors: No known data errors
------------------------------
The cache device (c4t15d0p0) is an Intel SSD. To test zil, I removed
the cache device, and added it as a log device:
----------------------------------
[ethan at opensolaris ~]$ pfexec zpool remove data c4t15d0p0
[ethan at opensolaris ~]$ pfexec zpool add data log c4t15d0p0
[ethan at opensolaris ~]$ zpool status -v data
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        data          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t1d0p0  ONLINE       0     0     0
            c4t9d0p0  ONLINE       0     0     0
        ...
        logs          ONLINE       0     0     0
          c4t15d0p0   ONLINE       0     0     0

errors: No known data errors
----------------------------------
The device is working fine. I then said, that was fun, time to remove
it and add it back as a cache device. But that doesn't seem possible:
----------------------------------
[ethan at opensolaris ~]$ pfexec zpool remove data c4t15d0p0
cannot remove c4t15d0p0: only inactive hot spares or cache devices can
be removed
----------------------------------
I've also tried detach and offline; each fails in other, more obvious
ways. The man page does say that those devices should be
removable/replaceable. At this point the only way to reclaim my SSD
is to destroy the zpool.
Just in case you are wondering about versions:
----------------------------------
[ethan at opensolaris ~]$ zpool upgrade data
This system is currently running ZFS pool version 13.
Pool 'data' is already formatted using the current version.
[ethan at opensolaris ~]$ uname -a
SunOS opensolaris 5.11 snv_99 i86pc i386 i86pc
----------------------------------
Any ideas?
Thanks,
Ethan
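As an aside, the failure can be reproduced on a throwaway pool built from file-backed vdevs, which makes it safe to experiment without risking real data. This is a sketch; the pool name and file paths are illustrative, and it requires root (pfexec) on a Solaris build with ZFS:

```shell
# Create three 100 MB backing files for a scratch pool (Solaris mkfile).
mkfile 100m /var/tmp/v1 /var/tmp/v2 /var/tmp/slog

# Build a mirrored pool with a separate log device from the files.
pfexec zpool create scratch mirror /var/tmp/v1 /var/tmp/v2 log /var/tmp/slog

# Attempting to remove the log device fails the same way on snv_99.
pfexec zpool remove scratch /var/tmp/slog

# Destroying the scratch pool is the only way to free the slog file here.
pfexec zpool destroy scratch
```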
CR 6574286 removing a slog doesn't work
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
 -- richard
Ethan,
It is still not possible to remove a slog from a pool. This is bug:

6574286 removing a slog doesn't work

The error message:

"cannot remove c4t15d0p0: only inactive hot spares or cache devices can be
removed"

is correct, and this is the same as documented in the zpool man page:

     zpool remove pool device ...

         Removes the specified device from the pool. This command
         currently only supports removing hot spares and cache
         devices.

It's actually relatively easy to implement removal of slogs. We simply
flush the outstanding transactions and start using the main pool for the
Intent Logs. Thus the vacated device can be removed.

However, we wanted to make sure it fit into the framework for the
removal of any device. That is a much harder problem, on which we have
made progress, but it's not there yet...
Neil.
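For readers finding this thread later: CR 6574286 was eventually fixed, and log device removal is supported from ZFS pool version 19 onward. On a build recent enough to support it, the round trip Ethan attempted works roughly as sketched below (pool and device names are taken from the thread; the behavior assumes the pool has been upgraded to version 19 or later):

```shell
# Hypothetical session on a later build where "zpool remove" handles
# log devices: outstanding ZIL records are flushed to the main pool
# and the vacated slog can be detached.
pfexec zpool upgrade data               # bring the pool to version 19+
pfexec zpool remove data c4t15d0p0      # now succeeds for a log device
pfexec zpool add data cache c4t15d0p0   # return the SSD to L2ARC duty
```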
On 10/26/08 11:41, Ethan Erchinger wrote:
Hello Neil,

Tuesday, October 28, 2008, 10:34:28 PM, you wrote:

NP> However, we wanted to make sure it fit into the framework for
NP> the removal of any device. This a much harder problem which we
NP> have made progress, but it's not there yet...

I think a lot of people here would be interested in more details and any
ETA (no commitments) on this - could you write a couple of sentences on
what you guys are actually doing re disk eviction? For example - would
it be possible to change raidz2 -> raidz1 or raid10 on the fly?
--
Best regards,
Robert Milkowski mailto:milek at task.gda.pl
http://milek.blogspot.com