Arlina Goce-Capiral
2006-Aug-23 23:41 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
I need help on this and don't know what to give to the customer. The system is a V40z running Solaris 10 x86, and the customer is trying to create a raidz pool from 3 disks. After creating the pool and looking at the disk space and configuration, he thinks this is not a raidz pool but rather a stripe. This is exactly what he told me, so I'm not sure if it makes sense to all of you.

Any assistance and help is greatly appreciated.

Thank you in advance,
Arlina

NOTE: Please email me directly as I'm not on this alias.

Below is more information.
================
Command used:
# zpool create pool raidz c1t2d0 c1t3d0 c1t4d0

From the format command:
       0. c1t0d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@0,0
       1. c1t2d0 <FUJITSU-MAT3073NC-0104-68.49GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@2,0
       2. c1t3d0 <FUJITSU-MAT3073NC-0104-68.49GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@3,0
       3. c1t4d0 <FUJITSU-MAT3073NC-0104-68.49GB>
          /pci@0,0/pci1022,7450@a/pci17c2,20@4/sd@4,0

The pool status:
# zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors

The df -k output of the newly created raidz pool:
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
pool               210567168      49 210567033     1%    /pool

I can create a file that is as large as the stripe of the 3 disks, so the information reported is correct. Also, if I pull a disk out, the whole zpool fails! There is no degraded pool; it just fails.
===========================================
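(Editorial aside, not part of the original mail.) The vdev layout can be read straight from the `zpool status` config section rather than inferred from `df`: a raidz pool shows a `raidz` vdev line with the member disks indented beneath it, while a plain stripe lists the disks at the top level. A minimal sketch, run here against a pasted status snippet rather than a live pool:

```shell
# Check a saved `zpool status` config section for a raidz vdev line.
# On a live system you would pipe `zpool status <pool>` instead of
# using this pasted snippet from the thread.
status='        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz     ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0'
if printf '%s\n' "$status" | grep -q '^ *raidz '; then
  echo "raidz vdev present"
else
  echo "disks are striped at top level"
fi
```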
James Dickens
2006-Aug-24 00:02 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
On 8/23/06, Arlina Goce-Capiral <Arlina.Capiral at sun.com> wrote:
[...]
> The pool status:
> # zpool status
>   pool: pool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         pool        ONLINE       0     0     0
>           raidz     ONLINE       0     0     0
>             c1t2d0  ONLINE       0     0     0
>             c1t3d0  ONLINE       0     0     0
>             c1t4d0  ONLINE       0     0     0
>
> errors: No known data errors

This right here shows it's a raidz pool. There is a bug in Update 2 that makes new pools show up with the wrong available disk space; as he adds files to the pool, it will fix itself. I believe the fix is slated to go into Update 3.

James Dickens
uadmin.blogspot.com
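(Editorial aside, not part of the original mail.) To make the discrepancy concrete: the `df -k` figure above is close to the raw size of all three disks, whereas a healthy 3-disk raidz should expose roughly two disks' worth of usable space, with one disk's worth going to parity. A rough sketch of the two figures, using the 68.49 GB disk size from the format listing:

```shell
# Raw vs. usable capacity for a 3-disk raidz of 68.49 GB disks.
# raidz keeps one disk's worth of parity, so usable ~= (n - 1) * disk.
awk 'BEGIN {
  n = 3; disk_gb = 68.49
  printf "raw sum: %.2f GB\n", n * disk_gb        # roughly what the buggy df reports
  printf "usable:  %.2f GB\n", (n - 1) * disk_gb  # what the pool can actually hold
}'
```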
Arlina Goce-Capiral
2006-Aug-24 00:14 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
Hello James,

Thanks for the response.

Yes, I got the bug ID and forwarded it to the customer. But the customer said he can create a file as large as the stripe of the 3 disks. And if he pulls a disk, the whole zpool fails, so there's no degraded pool, it just fails.

Any ideas on this?

Thank you,
Arlina
Boyd Adamson
2006-Aug-24 04:43 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
On 24/08/2006, at 10:14 AM, Arlina Goce-Capiral wrote:
> Yes, I got the bug ID and forwarded it to the customer. But the
> customer said he can create a file as large as the stripe of the 3
> disks. And if he pulls a disk, the whole zpool fails, so there's no
> degraded pool, it just fails.
>
> Any ideas on this?

The output of your zpool command certainly shows a raidz pool. It may be that the failing pool and the size issues are unrelated.

How are they creating a huge file? It's not sparse, is it? Is compression involved?

As to the failure mode, you may like to include any relevant /var/adm/messages lines and errors from fmdump -e.

Boyd
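(Editorial aside, not part of the original mail.) On the sparse-file question: a sparse file's logical size, which `ls -l` reports, can be far larger than the blocks actually allocated, which `du` reports. That is how a single file can appear bigger than the pool's real capacity. A quick demonstration, independent of ZFS; the /tmp path is only for the demo:

```shell
# Write one byte at a ~1 GB offset: ls reports the logical size,
# du reports the handful of blocks actually allocated.
demo=/tmp/sparse.demo
dd if=/dev/zero of="$demo" bs=1 count=1 seek=1073741823 2>/dev/null
ls -l "$demo"   # size column shows 1073741824
du -k "$demo"   # only a few KB actually allocated
rm -f "$demo"
```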
Arlina Goce-Capiral
2006-Aug-24 16:12 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
Boyd and all,

Just an update of what happened and what the customer found out regarding the issue.
==========================
It does appear that the disk is filled up by 140G.

I think I now know what happened. I created a raidz pool and did not write any data to it before I pulled out a disk. So I believe the zfs filesystem had not initialized yet, and this is why my zfs filesystem was unusable. Can you confirm this?

But when I created a zfs filesystem and wrote data to it, it could now lose a disk and just be degraded. I tested this part by removing the disk partition in format. I will try this same test to re-duplicate my issue, but can you confirm for me whether my zfs filesystem as a raidz requires me to write data to it first before it's really ready?

[root at caroldaps10.nss.vzwnet.com]# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0    4136995 2918711 1176915    72%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 5563996     616 5563380     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1
                     4136995 2918711 1176915    72%    /lib/libc.so.1
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c1t0d0s5    4136995   78182 4017444     2%    /var
/dev/dsk/c1t0d0s7    4136995    4126 4091500     1%    /tmp
swap                 5563400      20 5563380     1%    /var/run
/dev/dsk/c1t0d0s6    4136995   38674 4056952     1%    /opt
pool               210567315 210566773       0   100%   /pool

[root at caroldaps10.nss.vzwnet.com]# cd /pool
[root at caroldaps10.nss.vzwnet.com]# ls -la
total 421133452
drwxr-xr-x   2 root     sys             3 Aug 23 17:19 .
drwxr-xr-x  25 root     root          512 Aug 23 20:34 ..
-rw-------   1 root     root 171798691840 Aug 23 17:43 nullfile

[root at caroldaps10.nss.vzwnet.com]# zpool status
  pool: pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist
        for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        DEGRADED     0     0     0
          raidz     DEGRADED     0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  UNAVAIL  15.12 10.27     0  cannot open

errors: No known data errors

AND SECOND EMAIL:

I'm unable to re-duplicate my failed zfs pool using raidz.

As for the disk size bug (6288488 and 2140116), I have a few questions. The developer said that it would be fixed in U3. When is U3 supposed to be released? U2 just came out. Also, can or will
============================================

Any idea when Solaris 10 Update 3 (11/06) will be released? And will this be fixed in Solaris 10 Update 2 (6/06)?

Thanks to all of you.
Arlina
Matthew Ahrens
2006-Aug-24 16:56 UTC
[zfs-discuss] Need Help: didn't create the pool as radiz but stripes
On Thu, Aug 24, 2006 at 10:12:12AM -0600, Arlina Goce-Capiral wrote:
> It does appear that the disk is filled up by 140G.

So this confirms what I was saying: they are only able to write ndisks-1 disks' worth of data (in this case, ~68GB * (3-1) == ~136GB). So there is no unexpected behavior with respect to the size of their raid-z pool, just the known (and now fixed) bug.

> I think I now know what happened. I created a raidz pool and did not
> write any data to it before I pulled out a disk. So I believe the zfs
> filesystem had not initialized yet, and this is why my zfs filesystem
> was unusable. Can you confirm this?

No, that should not be the case. As soon as the 'zfs' or 'zpool' command completes, everything will be on disk for the requested action.

> But when I created a zfs filesystem and wrote data to it, it could now
> lose a disk and just be degraded. I tested this part by removing the
> disk partition in format.

Well, it sounds like you are testing two different things: first you tried physically pulling out a disk, then you tried re-partitioning a disk. It sounds like there was a problem when you pulled out the disk. If you can describe the problem further (Did the machine panic? What was the panic message?) then perhaps we can diagnose it.

> I will try this same test to re-duplicate my issue, but can you confirm
> for me whether my zfs filesystem as a raidz requires me to write data
> to it first before it's really ready?

No, that should not be the case.

> Any idea when Solaris 10 Update 3 (11/06) will be released?

I'm not sure, but November or December sounds about right. And of course, if they want the fix sooner they can always use Solaris Express or OpenSolaris!

--matt