Hi guys, after reading yesterday's mailings I noticed someone was also after
upgrading to ZFS pool version 21 (deduplication), and I'm after the same. I installed
osol-dev-127 earlier, which comes with v19, and then followed the instructions on
http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date.
However, the system reports that no updates are available and stays at pool v19.
Any ideas?
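
In case it helps pin down where I went wrong, my understanding of the procedure
on that page is roughly the following (a sketch, not copied verbatim from the
page; run with pfexec or as root as appropriate):

  # point the image at the dev repository and bring everything up to date
  pfexec pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pfexec pkg image-update

  # after rebooting into the new boot environment, see what the pool code supports
  zpool upgrade -v    # lists the pool versions this build knows about
  zpool upgrade -a    # only once v21 actually appears in that list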
On 17 Nov 2009, at 10:28, zfs-discuss-request at opensolaris.org wrote:
> Send zfs-discuss mailing list submissions to
> zfs-discuss at opensolaris.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> or, via email, send a message with subject or body 'help' to
> zfs-discuss-request at opensolaris.org
>
> You can reach the person managing the list at
> zfs-discuss-owner at opensolaris.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of zfs-discuss digest..."
>
>
> Today's Topics:
>
> 1. Re: permanent files error, unable to access pool
> (Cindy Swearingen)
> 2. Re: hung pool on iscsi (Tim Cook)
> 3. Re: hung pool on iscsi (Richard Elling)
> 4. Re: hung pool on iscsi (Jacob Ritorto)
> 5. Re: permanent files error, unable to access pool
> (daniel.rodriguez.delgado at gmail.com)
> 6. building zpools on device aliases (sean walmsley)
> 7. Re: Best config for different sized disks (Erik Trimble)
> 8. [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot
> obsoletion in snv 128] (Tim Foster)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 16 Nov 2009 15:46:26 -0700
> From: Cindy Swearingen <Cindy.Swearingen at Sun.COM>
> To: "daniel.rodriguez.delgado at gmail.com"
> <daniel.rodriguez.delgado at gmail.com>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] permanent files error, unable to access
> pool
> Message-ID: <4B01D642.4010407 at Sun.COM>
> Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
>
> Hi Daniel,
>
> In some cases, when I/O is suspended, permanent errors are logged and
> you need to run a zpool scrub to clear the errors.
>
> Are you saying that a zpool scrub cleared the errors that were
> displayed in the zpool status output? Or, did you also use zpool
> clear?
>
> Metadata is duplicated even in a one-device pool but recovery
> must depend on the severity of metadata errors.
>
> Thanks,
>
> Cindy
>
> On 11/16/09 13:18, daniel.rodriguez.delgado at gmail.com wrote:
>> Thanks Cindy,
>>
>> In fact, after some research, I ran into the scrub suggestion and it
>> worked perfectly. Now I think that the automated message in
>> http://www.sun.com/msg/ZFS-8000-8A should mention something about scrub as a
>> worthy attempt.
>>
>> It was related to an external USB disk. I guess I am happy it happened
>> now, before I invested in getting a couple of other external disks as mirrors of
>> the existing one. I guess I am better off installing an extra internal disk.
>>
>> Is this something common on USB disks? Would it get improved in later
>> versions of osol, or is it somewhat of an incompatibility/unfriendliness of ZFS
>> with external USB disks?
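
(For anyone else who lands on this thread: the recovery sequence being discussed
is roughly the one below. "tank" is just a placeholder pool name, and whether a
scrub alone is enough will depend on how severe the errors are.)

  zpool status -v tank    # lists the files/objects with permanent errors
  zpool scrub tank        # re-reads everything and repairs what it can
  zpool status -v tank    # check the result once the scrub completes
  zpool clear tank        # then clear the logged error counters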
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 16 Nov 2009 17:04:48 -0600
> From: Tim Cook <tim at cook.ms>
> To: Martin Vool <mardicas at gmail.com>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] hung pool on iscsi
> Message-ID:
> <fbef46640911161504y6e193b32obe3d7bebaf2d9448 at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool <mardicas at gmail.com> wrote:
>
>> I already got my files back actually, and the disc already contains new
>> pools, so I have no idea how it was set.
>>
>> I have to make a virtualbox installation and test it.
>> Can you please tell me how-to set the failmode?
>>
>>
>>
>
>
> http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20091116/f51a8a6e/attachment-0001.html>
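
(For reference, the failmode property discussed in that blog post can be checked
and changed like this; "tank" is a placeholder pool name, and which value is
right depends on how you want the pool to behave when its devices disappear.)

  zpool get failmode tank            # one of: wait (default), continue, panic
  zpool set failmode=continue tank   # return EIO instead of blocking forever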
>
> ------------------------------
>
> Message: 3
> Date: Mon, 16 Nov 2009 15:13:49 -0800
> From: Richard Elling <richard.elling at gmail.com>
> To: Martin Vool <mardicas at gmail.com>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] hung pool on iscsi
> Message-ID: <7ADFC8E2-3BE0-48E6-8D5A-506D975A21CC at gmail.com>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
>
>
> On Nov 16, 2009, at 2:00 PM, Martin Vool wrote:
>
>> I already got my files back actually, and the disc already contains
>> new pools, so I have no idea how it was set.
>>
>> I have to make a virtualbox installation and test it.
>
> Don't forget to change VirtualBox's default cache flush setting.
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#OpenSolaris.2FZFS.2FVirtual_Box_Recommendations
> -- richard
>
>
>
>
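
(If memory serves, the setting the guide refers to is the IgnoreFlush extradata
key, something along these lines; "myvm" and the LUN number are placeholders
for the actual VM and disk.)

  # tell the emulated IDE controller to stop ignoring cache flushes for disk 0
  VBoxManage setextradata "myvm" \
      "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0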
> ------------------------------
>
> Message: 4
> Date: Mon, 16 Nov 2009 18:22:17 -0500
> From: Jacob Ritorto <jacob.ritorto at gmail.com>
> To: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] hung pool on iscsi
> Message-ID:
> <1f3f8f1d0911161522u55db284bw5668cbe48f321082 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook <tim at cook.ms> wrote:
>> Is your failmode set to wait?
>
> Yes. This box has like ten prod zones and ten corresponding zpools
> that initiate to iscsi targets on the filers. We can't panic the
> whole box just because one {zone/zpool/iscsi target} fails. Are there
> undocumented commands to reset a specific zpool or something?
>
> thx
> jake
>
>
> ------------------------------
>
> Message: 5
> Date: Mon, 16 Nov 2009 15:53:50 PST
> From: "daniel.rodriguez.delgado at gmail.com"
> <daniel.rodriguez.delgado at gmail.com>
> To: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] permanent files error, unable to access
> pool
> Message-ID: <317370742.281258415660371.JavaMail.Twebapp at sf-app1>
> Content-Type: text/plain; charset=UTF-8
>
> To the best of my recollection, I only needed to run zpool scrub and reboot,
> and the disk became operational again....
>
> The irony was that the error message was asking me to recover from backup,
> but the disk involved was my backup of my working pool.....
> --
> This message posted from opensolaris.org
>
>
> ------------------------------
>
> Message: 6
> Date: Mon, 16 Nov 2009 18:17:39 PST
> From: sean walmsley <sean at fpp.nuclearsafetysolutions.com>
> To: zfs-discuss at opensolaris.org
> Subject: [zfs-discuss] building zpools on device aliases
> Message-ID: <1647062936.311258424289885.JavaMail.Twebapp at sf-app1>
> Content-Type: text/plain; charset=UTF-8
>
> We have a number of Sun J4200 SAS JBOD arrays which we have multipathed
> using Sun's MPxIO facility. While this is great for reliability, it
> results in the /dev/dsk device IDs changing from cXtYd0 to something virtually
> unreadable like "c4t5000C5000B21AC63d0s3".
>
> Since the entries in /dev/{rdsk,dsk} are simply symbolic links anyway,
> would there be any problem with adding "alias" links to /devices there
> and building our zpools on them? We've tried this and it seems to work
> fine producing a zpool status similar to the following:
>
> ...
>         NAME        STATE     READ WRITE CKSUM
>         vol01       ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             top00   ONLINE       0     0     0
>             bot00   ONLINE       0     0     0
>           mirror    ONLINE       0     0     0
>             top01   ONLINE       0     0     0
>             bot01   ONLINE       0     0     0
> ...
>
> Here our aliases are "topnn" and "botnn" to denote the
> disks in the top and bottom JBODs.
>
> The obvious question is "what happens if the alias link
> disappears?". We've tested this, and ZFS seems to handle it quite
> nicely by finding the "normal" /dev/dsk link and simply working with
> that (although it's more difficult to get ZFS to use the alias again
> once it is recreated).
>
> If anyone can think of anything really nasty that we've missed,
> we'd appreciate knowing about it. Alternatively, if there is a better
> supported means of having ZFS display human-readable device ids we're
> all ears :-)
>
> Perhaps an MPxIO RFE for "vanity" device names would be in order?
> --
> This message posted from opensolaris.org
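
(To make the aliasing idea above concrete, this is roughly what it looks like;
the /devices target shown is hypothetical - copy the real one from the output
of ls -l on the existing c4t...d0 links - and, as noted, the whole approach is
outside what is officially supported.)

  # create alias links alongside the normal MPxIO names
  ln -s ../../devices/scsi_vhci/disk@g5000c5000b21ac63:a     /dev/dsk/top00
  ln -s ../../devices/scsi_vhci/disk@g5000c5000b21ac63:a,raw /dev/rdsk/top00
  # ...repeat for bot00, top01, bot01, then build the pool on the aliases
  zpool create vol01 mirror top00 bot00 mirror top01 bot01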
>
>
> ------------------------------
>
> Message: 7
> Date: Tue, 17 Nov 2009 00:51:05 -0800
> From: Erik Trimble <Erik.Trimble at Sun.COM>
> To: Tim Cook <tim at cook.ms>
> Cc: Les Pritchard <les.pritchard at gmail.com>,
> zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Best config for different sized disks
> Message-ID: <4B0263F9.1060208 at sun.com>
> Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
>
> Tim Cook wrote:
>>
>>
>> On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn
>> <bfriesen at simple.dallas.tx.us <mailto:bfriesen at simple.dallas.tx.us>>
>> wrote:
>>
>> On Sun, 15 Nov 2009, Tim Cook wrote:
>>
>>
>> Once again I question why you're wasting your time with
>> raid-z. You might as well just stripe across all the drives.
>> You're taking a performance penalty for a setup that
>> essentially has 0 redundancy. You lose a 500gb drive, you
>> lose everything.
>>
>>
>> Why do you say that this user will lose everything? The two
>> concatenated/striped devices on the local system are no different
>> than if they were concatenated on SAN array and made available as
>> one LUN. If one of those two drives fails, then it would have the
>> same effect as if one larger drive failed.
>>
>> Bob
>>
>>
>> Can I blame it on too many beers? I was thinking losing half of one
>> drive, rather than an entire vdev, would just cause "weirdness" in the
>> pool, rather than a clean failure. I suppose without experimentation
>> there's no way to really know; in theory though, zfs should be able to
>> handle it.
>>
>> --Tim
>>
> Back to the original question: the "concat using SVM" method works OK
> if the disks you have are all integer multiples of each other (that is,
> this worked because he had 2 500GB drives to make a 1TB drive out of).
> It certainly seems the best method - both for performance and maximum
> disk space - that I can think of. However, it won't work well in other
> cases: e.g. a couple of 250GB drives and a couple of 1.5TB drives.
>
> In cases of serious mismatch between the drive sizes, especially when
> there's not a real good way to concat to get a metadrive big enough to
> match the others, I'd recommend going for multiple zpools, and slicing up
> the bigger drives to allow for RAIDZ-ing with the smaller ones "natively".
>
> E.g.
>
> Let's say you have 3 250GB drives and 3 1.5TB drives. You could
> partition the 1.5TB drives into 250GB and 1.25TB partitions, and then
> RAIDZ the 3 250GB drives together with the 250GB partitions as one
> zpool, then the 1.25TB partitions as another zpool.
>
> You'll have some problems with contending I/O if you try to write to
> both zpools at once, but it's the best way I can think of to maximize
> space and at the same time maximize performance for single-pool I/O.
>
> I think it would be a serious performance mistake to combine the two
> pools as vdevs in a single pool, though it's perfectly possible.
>
> I.e.
> (preferred)
> zpool create smalltank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0 c1t2d0s0
> zpool create largetank raidz c1t0d0s1 c1t1d0s1 c1t2d0s1
>
> instead of
>
> zpool create supertank raidz c0t0d0 c0t1d0 c0t2d0 c1t0d0s0 c1t1d0s0
> c1t2d0s0 raidz c1t0d0s1 c1t1d0s1 c1t2d0s1
>
>
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
> Timezone: US/Pacific (GMT-0800)
>
>
>
> ------------------------------
>
> Message: 8
> Date: Tue, 17 Nov 2009 10:27:38 +0000
> From: Tim Foster <Tim.Foster at Sun.COM>
> To: zfs-discuss <zfs-discuss at opensolaris.org>
> Subject: [zfs-discuss] [Fwd: [zfs-auto-snapshot] Heads up:
> SUNWzfs-auto-snapshot obsoletion in snv 128]
> Message-ID: <1258453658.2502.28.camel at igyo>
> Content-Type: text/plain; CHARSET=US-ASCII
>
> Hi all,
>
> Just forwarding Niall's heads-up message about the impending removal of
> the existing zfs-auto-snapshot implementation in nv_128.
>
> I've not been involved in the rewrite, but from what I've read about the new
> code, it'll be a lot more efficient than the old ksh-based code, and
> will fix many of the problems with the old solution.
>
> The new code currently lacks support for the 'zfs/backup' functionality,
> which allowed you to specify a command to tell the service to generate
> incremental or full send streams at each time interval, along with a
> commandline to process that stream (sshing to a remote server and doing
> a zfs recv, for example).
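
(For context, what that zfs/backup hook effectively did can be done by hand
with zfs send/receive, roughly as below; the host, pool and snapshot names are
placeholders.)

  # send an incremental stream between two snapshots to a remote pool
  zfs send -i tank/home@snap1 tank/home@snap2 | \
      ssh backuphost pfexec /usr/sbin/zfs receive -d backuppool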
>
> If you do use that functionality, it'd be good to drop a mail to the
> thread[1] on the zfs-auto-snapshot alias.
>
>
> It's been a wild ride, but my work on zfs-auto-snapshot is done I
> think :-)
>
> cheers,
> tim
>
> [1]
> http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2009-November/thread.html#199
>
> -------- Forwarded Message --------
> From: Niall Power <Niall.Power at Sun.COM>
> To: zfs-auto-snapshot at opensolaris.org
> Subject: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot obsoletion
> in snv 128
> Date: Mon, 16 Nov 2009 18:26:28 +1100
>
> This is a heads up for users of Tim's zfs-auto-snapshot auto-snapshot
> service currently delivered in Solaris Nevada and OpenSolaris. As of
> build 128 the zfs-auto-snapshot scripts are being replaced by a
> rewritten daemon (time-sliderd).
> Time-sliderd will continue to support the existing SMF
> auto-snapshot:<schedule> service instances as its configuration
> mechanism, so for most users there should be no significant
> differences noticeable.
> Some of the options will no longer be supported, however, since they are
> either obsolete or too specifically tied to the zfs-auto-snapshot
> implementation to make them portable.
>
> Stuff that will work:
> - Default out of the box schedules (frequent, hourly, daily, weekly,
> monthly)
> - Custom schedules
>
> SMF properties that will be supported:
> - interval
> - period
> - keep
>
> SMF properties that won't be supported:
> - "offset". The time-slider daemon schedules snapshots based on relative
> times rather than absolute times, which makes it more suitable for systems
> that do not have constant 24/7 uptime, so this feature isn't so relevant
> anymore (it only got implemented recently in zfs-auto-snapshot also)
>
> - "label" Dropping it allows naming shemes to be simplified and
> enforces uniqueness when SMF tries to import an auto-snapshot instance
>
> - backup/backup-save-cmd/backup-lock
> We plan to implement an rsync based backup mechanism that allows backups
> to be browsed like a proper filesystem instead of having binary
> snapshot blobs
> that are explicitly classified as unstable/volatile by zfs(1)
>
> For those who want to use time-slider without going through the GUI, simply
> enable/configure (or create) the auto-snapshot instances you need, then
> enable the time-slider SMF service. time-slider will pick up the enabled
> auto-snapshot instances and schedule snapshots for them.
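
(A sketch of the non-GUI route just described; the FMRIs below are my best
recollection of the service names on recent builds - check svcs -a for the
exact ones on your system.)

  # enable the snapshot schedules you want, then the daemon itself
  pfexec svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
  pfexec svcadm enable svc:/system/filesystem/zfs/auto-snapshot:weekly
  pfexec svcadm enable svc:/application/time-slider:default
  svcs -l time-slider   # confirm it is online and picking up the instances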
>
> For folks who prefer to continue using zfs-auto-snapshot, you will need to
> remove SUNWgnome-time-slider and install the existing zfs-auto-snapshot
> packages instead.
>
> Comments/questions welcome ;)
>
> Cheers,
> Niall.
>
> _______________________________________________
> zfs-auto-snapshot mailing list
> zfs-auto-snapshot at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-auto-snapshot
>
>
>
>
> ------------------------------
>
> _______________________________________________
> zfs-discuss mailing list
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
> End of zfs-discuss Digest, Vol 49, Issue 78
> *******************************************