Chun, Peter non Unisys
2006-Sep-19 04:40 UTC
[zfs-discuss] slow zpool create ( and format )
Hi,

I am running Solaris 10 6/06. The system was upgraded from Solaris 10 via
live upgrade.

zpool create mirror c0t10d0 c0t11d0

takes about 30 minutes to complete, or even just to produce the error
message "invalid vdev specification, use -f to override......"

Strangely enough, the format command also takes about 30 minutes to respond
when I select a disk (to print the partition table). Actual timing clocks
it at 32 minutes.

Any ideas why we now have this delay?

Thanks
Peter
Is there anything in /var/adm/messages? This sounds like some flaky
hardware causing I/O retries.

After you create the pool, is it usable in any sense of the word? Does
'zpool status' show any errors after running a scrub?

- Eric

On Tue, Sep 19, 2006 at 02:40:43PM +1000, Chun, Peter non Unisys wrote:
> Hi,
> I am running Solaris 10 6/06.
> system was upgraded from Solaris10 via live upgrade.
>
> zpool create mirror c0t10d0 c0t11d0
> takes about 30 minutes to complete
> or even just to produce the error message "invalid vdev specification,
> use -f to override......"
>
> Strangely enough the format command also takes about 30 minutes to
> respond
> when I select a disk ( to print the partition table )
>
> actual timings clocks it to 32 minutes.
>
> Any ideas why we now have this delay?
>
> Thanks
> Peter
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Chun, Peter non Unisys
2006-Sep-19 22:03 UTC
[zfs-discuss] slow zpool create ( and format )
Hi,

There is (quite literally) nothing in /var/adm/messages; it is 0 size.
syslog is configured correctly. There is no evidence of flakiness in
/var/adm/messages.*

After the 32 minute delay the zpool status is all good:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz      ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0

errors: No known data errors

Francois' suggestion of "export NOINUSE_CHECK=1" did the trick for format.
It also takes care of the zpool create delay too. Though I would imagine
this setting is quite dangerous, as it went ahead and destroyed metadb
info without warning (on a 3rd disk in the raidz).

Thanks
Peter

-----Original Message-----
From: Eric Schrock [mailto:eric.schrock at sun.com]
Sent: Tuesday, 19 September 2006 3:03 PM
To: Chun, Peter non Unisys
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] slow zpool create ( and format )

Is there anything in /var/adm/messages? This sounds like some flaky
hardware causing I/O retries.

After you create the pool, is it usable in any sense of the word? Does
'zpool status' show any errors after running a scrub?

- Eric
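For reference, the workaround Peter describes can be sketched as below. NOINUSE_CHECK is the environment variable honored by Solaris's in-use device checking (libdiskmgt); setting it skips the device scan that was taking ~30 minutes, which is also why it skips the safety checks (such as detecting SVM metadb replicas) and is dangerous, as noted above. The actual format/zpool invocations are shown as comments, since they only make sense on the affected host; the pool name "tank" follows the thread's example.

```shell
# Hedged sketch of the workaround: export NOINUSE_CHECK before running
# format or zpool, so libdiskmgt skips its slow in-use scan.
NOINUSE_CHECK=1
export NOINUSE_CHECK
echo "NOINUSE_CHECK=${NOINUSE_CHECK}"    # confirm it is set in this shell

# With the variable exported, these respond immediately instead of after
# ~30 minutes -- but with NO in-use safety checks (run on the affected host):
#   format                                     # select disk, print partition table
#   zpool create tank mirror c0t10d0 c0t11d0
```

Because the variable is per-process, exporting it in one shell session (rather than system-wide) limits the window in which the safety checks are disabled.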
Hi,

Just wondering what the recommended filesystem structure is under ZFS.
I am playing/learning...

Firstly: if we follow the suggestions in the quick start guide, then
/export/home is set up so that each user gets a "partition". This is nice,
as you can set quotas and get inheritance etc. It also means the "df"
output can be very large if you have a large number of users. Is there
some sort of automount-like feature, where the ZFS "partition" is mounted
only when used?

Secondly: is it desirable to split off a ZFS filesystem for /var, /usr
and /opt? Without zfsboot we can't get a ZFS /. Or has the state of play
changed so that we can boot off a ZFS partition?

Thanks
Peter
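The per-user layout from the quick start guide can be sketched as a small generator script. The pool name "tank", the 10g quota, and the user names are assumptions for illustration only; on a real system you would run the emitted commands (or pipe them to sh).

```shell
# Emit the zfs commands for a one-filesystem-per-user /export/home layout.
# Each user becomes a child dataset of tank/home, so properties set on
# tank/home (mountpoint, compression, ...) are inherited by every user --
# the inheritance mentioned above. Names here are illustrative assumptions.
POOL=tank
for user in alice bob peter; do
    echo "zfs create ${POOL}/home/${user}"
    echo "zfs set quota=10g ${POOL}/home/${user}"
done
```

The "df gets very large" observation follows directly from this layout: every dataset is a separate mounted filesystem, so each one contributes a line to df's output.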
Chun, Peter non Unisys wrote:
> Hi,
> Just wondering what is the recommended filesystem structure under ZFS.

I think I don't understand the question, because the obvious answer is
"ZFS."

> Secondly
> Is it desirable to split off a zfs for /var, /usr, /opt.

I believe that those are prohibited until we have zfsboot.

> Without zfsboot we can't get zfs /.
> Or has the state of play changed so that we can boot off a zfs
> partition?

Not yet.

--------------------------------------------------------------------------
Jeff VICTOR              Sun Microsystems            jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ: http://www.opensolaris.org/os/community/zones/faq
--------------------------------------------------------------------------