Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror

I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete or back up anywhere else).

> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> ambry   592G   132K   592G    0%  ONLINE  -

I get this (592G???). I can bring the virtual device offline, and the pool
becomes degraded, but at that size I won't be able to copy my data over.
I was wondering if anyone else had a solution.

Thanks, Jonny

P.S. Please let me know if you need any extra information.
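A note on what is probably going on here: raidz sizes every member to the
smallest device in the set, so if the lofi device is backed by an undersized
sparse file, the whole pool shrinks to match it. A quick way to check,
assuming the lofi device was created from a backing file named /fakedisk
(the name is illustrative):

    ls -l /fakedisk    # logical (apparent) size of the backing file
    du -h /fakedisk    # blocks actually allocated; near zero if sparse
    lofiadm            # lists each lofi device and its backing file

A 592G pool across four raidz1 members works out to roughly 148G per
member, which is consistent with the ~150GB backing file guessed at in the
reply below.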
Hi Jonny,
So far there are no Sun comments here or at the blog site, I guess your
approach is good by the Sun folks.

I also noticed that the blog hit today is only 5. If I tell my folks to
visit the blog often, can they also do chinese? most of them are doing
blogging in chinese, not english today. And how would non-china folks be
able to visit the blog without getting hit by all chinese text?

So, if you would like more visitors, you would have to have a solution to
deal with the chinese traffic. Just some thoughts if you are serious about
global open storage.

[BTW, I see Zhang in the URL. That is the name I honor with Zhou. For
that, if you need help, just let me know.]

Best,
z

----- Original Message -----
From: "Jonny Gerold" <jg at thermeon.com>
To: <zfs-discuss at opensolaris.org>
Sent: Thursday, January 15, 2009 5:20 PM
Subject: [zfs-discuss] 4 disk raidz1 with 3 disks...

> Hello,
> I was hoping that this would work:
> http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
>
> I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
> can't delete or back up anywhere else).
> [...]
Beloved Jonny,

I am just like you.

There was a day, I was hungry, and went for a job interview for sysadmin.
They asked me - what is a "protocol"?
I could not give a definition, and they said, no, not qualified.

But they did not ask me about CICS and mainframe. Too bad.

baby, even there is a day you can break daddy's pride, you won't want to,
I am sure. ;-)

[if you want a solution, ask Orvar, I would guess he thinks on his own
now, not baby no more, teen now...]

best,
z

----- Original Message -----
From: "Jonny Gerold" <jg at thermeon.com>
To: "JZ" <jz at excelsioritsolutions.com>
Sent: Thursday, January 15, 2009 10:19 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

> Sorry that I broke your pride (all knowing) bubble by "challenging" you.
> But you're just as stupid as I am since you did not give me a "solution."
> Find a solution, and I will rock with your Zhou style, otherwise you're
> just like me :) I am in the U.S. Great weather...
>
> Thanks, Jonny
Hi James,
I have done nothing wrong. It was ok in my religion. Sue me if you care.

He asked for a solution to a ZFS problem.
I was calling for help, Zhou style.

All my C and Z and J folks, are we going to help Jonny or what???

darn!!! Do I have to put down my other work to make a solution that may
not be open?

best,
z

----- Original Message -----
From: "James C. McPherson" <James.McPherson at Sun.COM>
To: "JZ" <jz at excelsioritsolutions.com>
Sent: Thursday, January 15, 2009 10:35 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

> Hello JZ,
> I fail to see what your email has to do with ZFS.
>
> I am also at a loss as to why you appear to think that
> it is acceptable to include public mailing lists on
> what are clearly personal emails.
>
> James C. McPherson
> --
> Senior Kernel Software Engineer, Solaris
> Sun Microsystems
> http://blogs.sun.com/jmcp    http://www.jmcp.homeunix.com/blog
This seems to have worked. But it is showing an abnormal amount of space.

root at FSK-Backup:~# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
ambry  3.62T   132K  3.62T    0%  ONLINE  -

root at FSK-Backup:~# df -h | grep ambry
ambry           2.7T   27K  2.7T   1% /ambry

This happened the last time I created a raidz1... Meh, before I continue,
is this incredibly abnormal? Or is there something that I'm missing and
this is normal procedure?

Thanks, Jonny

Wes Morgan wrote:
> On Thu, 15 Jan 2009, Jonny Gerold wrote:
>> I get this (592G???) [...]
>
> Are you certain that you created the sparse file as the correct size?
> If I had to guess, it is only in the range of about 150gb. The
> smallest device size will limit the total size of your array. Try
> using this for your sparse file and recreating the raidz:
>
> dd if=/dev/zero of=fakedisk bs=1k seek=976762584 count=0
> lofiadm -a fakedisk
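Wes's seek value checks out for a typical "1TB" drive, which is
1,000,204,886,016 bytes (1953525168 512-byte sectors):

    976762584 KiB x 1024 = 1,000,204,886,016 bytes ~= 931.5 GiB

With count=0, dd writes no data and only sets the file length, so the
result is a sparse file that advertises a full 1TB while allocating
almost nothing on disk:

    ls -l fakedisk    # apparent size: ~1TB
    du -h fakedisk    # actual allocation: effectively zero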
Very nice. Ok.

If I don't see any post to promise some help in solving Jonny's problem
in the next 8 minutes -- I would go to chinatown and get some commitment.
I would have that commitment in 48 hours and a working and tested blog
site in 60 days. But it will not be open.

Please, open folks, are you going to help Jonny or what?

Best,
z

----- Original Message -----
From: "JZ" <jz at excelsioritsolutions.com>
To: "James C. McPherson" <James.McPherson at Sun.COM>
Cc: <zfs-discuss at opensolaris.org>
Sent: Thursday, January 15, 2009 10:42 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

> Hi James,
> I have done nothing wrong. It was ok in my religion. Sue me if you care.
>
> He asked for a solution to a ZFS problem.
> I was calling for help, Zhou style.
> [...]
On Thu, 15 Jan 2009, Jonny Gerold wrote:
> I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
> can't delete or back up anywhere else).
>
>> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
>> root at FSK-Backup:~# zpool list
>> NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
>> ambry   592G   132K   592G    0%  ONLINE  -
>
> I get this (592G???) [...]

Are you certain that you created the sparse file as the correct size?
If I had to guess, it is only in the range of about 150gb. The smallest
device size will limit the total size of your array. Try using this for
your sparse file and recreating the raidz:

dd if=/dev/zero of=fakedisk bs=1k seek=976762584 count=0
lofiadm -a fakedisk
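Pulling the thread's pieces together, the end-to-end procedure from the
blog post looks roughly like this. This is a sketch, not a tested recipe:
the device names match this thread except c5t2d0, which stands in for the
fourth (data-filled) disk and is an assumption.

    # 1. Create a sparse backing file the size of a real 1TB disk and
    #    attach it as a lofi device (prints e.g. /dev/lofi/1)
    dd if=/dev/zero of=/fakedisk bs=1k seek=976762584 count=0
    lofiadm -a /fakedisk

    # 2. Build the 4-way raidz1 from three real disks plus the lofi device
    zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1

    # 3. Take the fake member offline; the pool runs degraded but writable
    zpool offline ambry /dev/lofi/1

    # 4. Copy the 800GB into the pool, then hand the now-empty fourth
    #    disk (assumed c5t2d0) to the pool as the permanent replacement
    zpool replace ambry /dev/lofi/1 c5t2d0
    zpool status ambry    # watch the resilver; pool returns to ONLINE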
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?

root at FSK-Backup:~# df -h | grep ambry
ambry           2.7T   27K  2.7T   1% /ambry

root at FSK-Backup:~# zpool list
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
ambry  3.62T   132K  3.62T    0%  ONLINE  -

root at FSK-Backup:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
ambry  92.0K  2.67T  26.9K  /ambry

Thanks, Jonny

Bob Friesenhahn wrote:
> On Fri, 16 Jan 2009, Matt Harrison wrote:
>> Is this guy seriously for real? [...]
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously. A reduction in crystal
> meth intake could have a profound effect though.
>
> Bob
BTW, is there any difference between raidz & raidz1 (is the one for
one-disk parity), or does raidz have a parity disk too?

Thanks, Jonny

Tim wrote:
> On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold <jg at thermeon.com> wrote:
>> Meh, this is retarded. It looks like zpool list shows an incorrect
>> calculation? Can anyone agree that this looks like a bug?
>> [...]
>
> From what I understand:
>
> zpool list shows total capacity of all the drives in the pool. df
> shows usable capacity after parity.
>
> I wouldn't really call that retarded, it allows you to see what kind
> of space you're chewing up with parity fairly easily.
>
> --Tim
That's what I figured: raidz & raidz1 are the same thing. The one is just
put there to collect confusion ;)

Thanks, Jonny

Tim wrote:
> On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold <jg at thermeon.com> wrote:
>> BTW, is there any difference between raidz & raidz1 (is the one for
>> one-disk parity), or does raidz have a parity disk too?
>
> It depends on who you're talking to, I suppose.
>
> I would expect generally "raidz" is describing that you're using Sun's
> raid algorithm, which can be either "raidz1" (one parity drive) or
> "raidz2" (two parity drives).
>
> It may also be that people are just interchanging the terms "raidz" and
> "raidz1" as well. I guess most documentation I've seen officially
> addresses them as "raidz" or "raidz2"; there is no "raidz1".
>
> --Tim
JZ wrote:
> Beloved Jonny,
>
> I am just like you.
> [...]
> [if you want a solution, ask Orvar, I would guess he thinks on his own
> now, not baby no more, teen now...]

Is this guy seriously for real? It's getting hard to stay on the list
with all this going on. No list etiquette, completely irrelevant
ramblings, need I go on?

~Matt
On Fri, 16 Jan 2009, Matt Harrison wrote:
> Is this guy seriously for real? It's getting hard to stay on the list
> with all this going on. No list etiquette, completely irrelevant
> ramblings, need I go on?

The ZFS discussion list has produced its first candidate for the rubber
room that I mentioned here previously. A reduction in crystal meth intake
could have a profound effect though.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
> JZ wrote:
[...]
> Is this guy seriously for real? It's getting hard to stay on the list
> with all this going on. No list etiquette, completely irrelevant
> ramblings, need I go on?

He probably has nothing better to do. Just ignore him; that's what they
dislike most. He will go away eventually. Just put him in your killfile.

Don't feed the trolls.

Regards -- Volker

--
------------------------------------------------------------------------
Volker A. Brandt                 Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim                   Email: vab at bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513             Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
On Fri, Jan 16, 2009 at 9:59 AM, Bob Friesenhahn
<bfriesen at simple.dallas.tx.us> wrote:
> On Fri, 16 Jan 2009, Matt Harrison wrote:
>> Is this guy seriously for real? [...]
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously. A reduction in crystal
> meth intake could have a profound effect though.
>
> Bob

Just the product of English as a second language + intentional trolling.

--Tim
Jonny Gerold wrote:
> Meh, this is retarded. It looks like zpool list shows an incorrect
> calculation? Can anyone agree that this looks like a bug?
>
> root at FSK-Backup:~# zpool list
> NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T    0%  ONLINE  -
>
> root at FSK-Backup:~# zfs list
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> ambry  92.0K  2.67T  26.9K  /ambry

Bug or not, I am not the person to say, but it's done that ever since
I've used ZFS. zpool list shows the total space regardless of redundancy,
whereas zfs list shows the actual available space. It was confusing at
first, but now I just ignore it.

Matt
On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold <jg at thermeon.com> wrote:
> Meh, this is retarded. It looks like zpool list shows an incorrect
> calculation? Can anyone agree that this looks like a bug?
>
> root at FSK-Backup:~# df -h | grep ambry
> ambry           2.7T   27K  2.7T   1% /ambry
>
> root at FSK-Backup:~# zpool list
> NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T    0%  ONLINE  -
>
> root at FSK-Backup:~# zfs list
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> ambry  92.0K  2.67T  26.9K  /ambry

From what I understand:

zpool list shows total capacity of all the drives in the pool. df shows
usable capacity after parity.

I wouldn't really call that retarded, it allows you to see what kind of
space you're chewing up with parity fairly easily.

--Tim
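The numbers in this thread line up with that, as a back-of-the-envelope
check (approximate, not an exact accounting):

    4 x 1TB ~= 4 x 931.5 GiB ~= 3.64 TiB raw       -> zpool list: 3.62T
    raidz1 keeps 3 of 4 for data: 3.62T x 3/4 ~= 2.72 TiB -> df: 2.7T

The small remaining gap (zfs list reports 2.67T available) is ZFS's
internal accounting reservation, which the man page excerpts later in the
thread describe.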
On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold <jg at thermeon.com> wrote:
> BTW, is there any difference between raidz & raidz1 (is the one for
> one-disk parity), or does raidz have a parity disk too?
>
> Thanks, Jonny

It depends on who you're talking to, I suppose.

I would expect generally "raidz" is describing that you're using Sun's
raid algorithm, which can be either "raidz1" (one parity drive) or
"raidz2" (two parity drives).

It may also be that people are just interchanging the terms "raidz" and
"raidz1" as well. I guess most documentation I've seen officially
addresses them as "raidz" or "raidz2"; there is no "raidz1".

--Tim
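For what it's worth, the zpool command itself accepts both spellings for
single parity and builds the same vdev either way; the pool and device
names below are just examples:

    zpool create tank raidz  c0t0d0 c0t1d0 c0t2d0
    zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0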
Tim wrote:
> On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold <jg at thermeon.com> wrote:
>> root at FSK-Backup:~# zpool list
>> NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
>> ambry  3.62T   132K  3.62T    0%  ONLINE  -
>>
>> root at FSK-Backup:~# zfs list
>> NAME    USED  AVAIL  REFER  MOUNTPOINT
>> ambry  92.0K  2.67T  26.9K  /ambry
>
> From what I understand:
>
> zpool list shows total capacity of all the drives in the pool. df shows
> usable capacity after parity.

More specifically, from zpool(1m):

     These space usage properties report actual physical space
     available to the storage pool. The physical space can be
     different from the total amount of space that any contained
     datasets can actually use. The amount of space used in a
     raidz configuration depends on the characteristics of the
     data being written. In addition, ZFS reserves some space for
     internal accounting that the zfs(1M) command takes into
     account, but the zpool command does not. For non-full pools
     of a reasonable size, these effects should be invisible. For
     small pools, or pools that are close to being completely
     full, these discrepancies may become more noticeable.

Similarly, from zfs(1m):

     The amount of space available to the dataset and all its
     children, assuming that there is no other activity in the
     pool. Because space is shared within a pool, availability
     can be limited by any number of factors, including physical
     pool size, quotas, reservations, or other datasets within
     the pool.

IMHO, this is a little bit wordy, in an already long man page.
If you come up with a better way to say the same thing in fewer words,
then please file a bug against the man page.
-- richard
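For anyone following along at home, the two accountings are easy to
compare side by side (pool name from this thread):

    zpool list ambry    # raw pool size, parity blocks included
    zfs list ambry      # space the datasets can actually use
    df -h /ambry        # POSIX view; tracks zfs list, not zpool list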
Hi Rich,
This is the best summary I have seen.
[china folks say, older ginger more satisfying, true]

Just one thing I would like to add -
It also depends on the encryption technique and algorithm.

Today we are doing private key encryption that without the key, you
cannot read the data. Some used a public "public key" approach, that you
can read the data without the key, but just misleading.

The private key approach saves a lot of blocks in data writing, but
carries the risk of cannot duplicate or duplicating too many of the key.
The public "public key" approach takes much much more storage space to
store the real data, but less risky, in some views.

Again, how to do data storage is an art. Sun folks can guide with a good
taste, but they are not limiting anyone's free will to do IT.

Best,
z, going chinatown for dinner soon

----- Original Message -----
From: "Richard Elling" <Richard.Elling at Sun.COM>
To: "Tim" <tim at tcsac.net>
Cc: <zfs-discuss at opensolaris.org>
Sent: Friday, January 16, 2009 5:28 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

> More specifically, from zpool(1m):
> [...]
> IMHO, this is a little bit wordy, in an already long man page.
> If you come up with a better way to say the same thing in fewer words,
> then please file a bug against the man page.
> -- richard
On Fri, 16 Jan 2009, Bob Friesenhahn wrote:
> On Fri, 16 Jan 2009, Matt Harrison wrote:
>> Is this guy seriously for real? It's getting hard to stay on the list
>> with all this going on. No list etiquette, completely irrelevant
>> ramblings, need I go on?
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously. A reduction in crystal
> meth intake could have a profound effect though.

I'm halfway inclined to believe he/it is a silly "artificial
intelligence" script.
Hi Wes, I now have a real question.

How do you define "silly", and "artificial intelligence", and "script"?

And "halfway inclined to believe" to me means 25%. (believe is 100%,
inclined is 50%, and halfway is 25% in crystal math, and maybe even less
in storage math, including the RAID and HA and DR and BC...)

Is my understanding correct?

But my confusion is only toward the Wes statement; all other posts by Sun
folks made clear sense to me.

So, I am going out for dinner, hope dear Wes can help me out here.

Ciao,
z

----- Original Message -----
From: "Wes Morgan" <morganw at chemikals.org>
To: "Bob Friesenhahn" <bfriesen at simple.dallas.tx.us>
Cc: <zfs-discuss at opensolaris.org>
Sent: Friday, January 16, 2009 5:48 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

> I'm halfway inclined to believe he/it is a silly "artificial
> intelligence" script.