Rob Clark
2008-Jul-06 07:48 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?
I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.

I have eight 10GB drives. When I installed SX:CE (snv_91) I chose "3" ("Solaris Interactive Text (Desktop Session)") and the installer found all my drives, but I told it to use only two - giving me a 10GB mirrored rpool.

Immediately before the installation commenced I dropped to a shell and typed this to enable compression:

# zfs set compression=on rpool

(Note: the forum editor removes multiple spaces and compresses these charts, wrecking the nice layout.)

That trick gives me this (great for tiny drives):

# zfs get -r compression
NAME                   PROPERTY     VALUE  SOURCE
rpool                  compression  on     local
rpool/ROOT             compression  on     inherited from rpool
rpool/ROOT/snv_91      compression  on     inherited from rpool
rpool/ROOT/snv_91/var  compression  on     inherited from rpool
rpool/dump             compression  off    local
rpool/export           compression  on     inherited from rpool
rpool/export/home      compression  on     inherited from rpool
rpool/swap             compression  on     inherited from rpool

# zfs get -r compressratio
NAME                   PROPERTY       VALUE  SOURCE
rpool                  compressratio  1.56x  -
rpool/ROOT             compressratio  1.68x  -
rpool/ROOT/snv_91      compressratio  1.68x  -
rpool/ROOT/snv_91/var  compressratio  2.05x  -
rpool/dump             compressratio  1.00x  -
rpool/export           compressratio  1.47x  -
rpool/export/home      compressratio  1.47x  -
rpool/swap             compressratio  1.00x  -

I mention that so you don't wonder why my installation seems too small.

My "Bug Report" (or confusion about ZFS) is this: I have six remaining 10GB drives and I want to "raid" three of them and "mirror" them to the other three, giving me raid security and integrity with mirrored-drive performance. I then want to move my "/export" directory to the new pool.

My SCSI drives are numbered c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t8d0 (drive c1t7d0 was reserved, so that number is skipped).

I will type a few commands that I hope will provide some basic info (remember I am new to this, so don't hesitate to ask for more info - nor should you flame me for my foolishness :) ).

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,30@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,30@10/sd@1,0
       2. c1t2d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@2,0
       3. c1t3d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@3,0
       4. c1t4d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@4,0
       5. c1t5d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@5,0
       6. c1t6d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@6,0
       7. c1t8d0 <VMware,-VMware Virtual S-1.0-10.00GB>
          /pci@0,0/pci1000,30@10/sd@8,0
Specify disk (enter its number): ^C

# zfs list
NAME                   USED   AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.7M  5.42G  84.7M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           13.9M  5.42G    19K  /export
rpool/export/home      13.9M  5.42G  13.9M  /export/home
rpool/swap              640M  6.05G    16K  -

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

=========

The "Bug" is that the drive sizes don't seem to add up correctly when I raid+mirror my drives.

The following displays the sizes of three drives when mirrored or in raid configuration:

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zfs list | grep temparray
temparray  97.2K  19.5G  1.33K  /temparray
# zpool destroy temparray

# zpool create temparray mirror c1t2d0 c1t4d0 c1t5d0
# zfs list | grep temparray
temparray  89.5K  9.78G  1K  /temparray
# zpool destroy temparray

So far so good. Now for what I wanted to do (raid + mirror, and move "/export" to the new pool). Some web page suggested I could do this (wrong):

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zpool attach temparray mirror c1t2d0 c1t3d0
too many arguments

The man page says the correct syntax is this (still no cigar):

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zpool attach temparray c1t2d0 c1t3d0
cannot attach c1t3d0 to c1t2d0: can only attach to mirrors and top-level disks

So let's combine everything on one line (like that's gonna work - but it did, sort of):

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 c1t8d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both raidz and mirror vdevs are present

# zpool create -f temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 c1t8d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        temparray   ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                   USED   AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           12.9M  5.42G    19K  /export
rpool/export/home      12.9M  5.42G  12.9M  /export/home
rpool/swap              640M  6.05G    16K  -
temparray               115K  29.3G  21.0K  /temparray

The question (Bug?) is: shouldn't I get this instead?

# zfs list | grep temparray
temparray  97.2K  19.5G  1.33K  /temparray

Why do I get 29.3G instead of 19.5G?

Thanks for any help,
Rob
Peter Tribble
2008-Jul-06 08:20 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?
On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark <rob1weld@aol.com> wrote:
> I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.
>
> I have eight 10GB drives.
...
> I have 6 remaining 10 GB drives and I desire to "raid" 3 of them and
> "mirror" them to the other 3 to give me raid security and integrity
> with mirrored drive performance. I then want to move my "/export"
> directory to the new drive.
...
> # zpool create -f temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 c1t8d0
...
> The question (Bug?) is: shouldn't I get this instead?
>
> # zfs list | grep temparray
> temparray  97.2K  19.5G  1.33K  /temparray
>
> Why do I get 29.3G instead of 19.5G?

Because what you've created is a pool containing two components:

 - a 3-drive raidz
 - a 3-drive mirror

concatenated together. (Hence the numbers: the ~19.5G usable from the
3-drive raidz plus the ~9.78G from the 3-way mirror gives your 29.3G.)

I think that what you're trying to do, based on your description, is to
create one raidz and mirror that to another raidz. (Or create a raidz
out of mirrored drives.) You can't do that. You can't layer raidz and
mirroring. You'll either have to use raidz for the lot, or just use
mirroring:

zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
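For completeness - the original goal was also to move /export onto the new
pool - here is a rough, untested sketch of that step. It assumes the
striped-mirror layout above and that this build's zfs send supports the
recursive -R flag; the snapshot name is arbitrary:

# zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0
# zfs snapshot -r rpool/export@move                # "move" is an arbitrary name
# zfs send -R rpool/export@move | zfs recv -d temparray
# zfs set mountpoint=none rpool/export             # retire the old copy
# zfs set mountpoint=/export temparray/export      # mount the new one in its place

The recv -d strips the source pool name, so the datasets land at
temparray/export and temparray/export/home; once the copy is verified, the
old rpool/export can be destroyed.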
Rob Clark
2008-Jul-06 09:13 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correc
> Peter Tribble wrote:
> Because what you've created is a pool containing two components:
>  - a 3-drive raidz
>  - a 3-drive mirror
> concatenated together.

OK. Seems odd that ZFS would allow that (would people really want that
configuration instead of what I am attempting to do?).

> I think that what you're trying to do based on your description is to create
> one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
> drives.) You can't do that. You can't layer raidz and mirroring.
> You'll either have to use raidz for the lot, or just use mirroring:
> zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0

Bummer.

Curiously, I can get that same odd size with either of these two commands
(the second attempt sort of looks like it is raid + mirroring):

# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror c1t6d0 c1t8d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: temparray1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        temparray1  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                   USED   AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           14.0M  5.42G    19K  /export
rpool/export/home      14.0M  5.42G  14.0M  /export/home
rpool/swap              640M  6.05G    16K  -
temparray1             92.5K  29.3G     1K  /temparray1
# zpool destroy temparray1

And the pretty one:

# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 c1t8d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        temparray   ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                   USED   AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.6M  5.42G  84.6M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           14.0M  5.42G    19K  /export
rpool/export/home      14.0M  5.42G  14.0M  /export/home
rpool/swap              640M  6.05G    16K  -
temparray                94K  29.3G     1K  /temparray
# zpool destroy temparray

That second attempt leads this newcomer to imagine that they have 3 raid
drives mirrored to 3 raid drives.

Is there a way to get mirror performance (double speed) with raid integrity
(one drive can fail and you are OK)? I can't imagine that there exists no
one who would want that configuration.

Thanks for your comment Peter.
Peter Tribble
2008-Jul-06 12:15 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correc
On Sun, Jul 6, 2008 at 10:13 AM, Rob Clark <rob1weld@aol.com> wrote:
> Is there a way to get mirror performance (double speed) with raid integrity
> (one drive can fail and you are OK)? I can't imagine that there exists no
> one who would want that configuration.

That's what mirroring does - you have redundant data. The extra
performance is just a side-effect.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Ross
2008-Jul-06 13:46 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correc
I'm no expert in ZFS, but I think I can explain what you've created there:

# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror c1t6d0 c1t8d0

This creates a stripe of three mirror sets (or in old-fashioned terms, a
raid-0 stripe made up of three two-disk raid-1 sets). It'll give you 30GB
of capacity, and every disk is mirrored to another (so your data is safe if
any one drive fails). I believe it will give you 3x the write performance
(as data will be streamed across the three sets), and should give 2x the
read performance (as data can be read from either drive of a mirror).

I don't really understand why you're trying to mix raid-z and mirroring,
but from what you say about performance, I suspect this may be the setup
you are looking for.

For your second one I'm less sure what's going on:

# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 c1t8d0

This creates three two-disk raid-z sets and stripes the data across them.
The problem is that a two-disk raid-z makes no sense: traditionally this
level of raid needs a minimum of three disks to work. I suspect ZFS may be
interpreting raid-z as requiring one parity drive, in which case this will
effectively mirror the drives, but without the read performance boost that
mirroring would give you.

The way zpool create works is that you can specify raid or mirror sets, but
if you list a bunch of these one after the other, it simply stripes the
data across them.
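One quick way to test that two-disk raid-z theory is to compare the usable
space each layout reports. An untested sketch, reusing two of the spare
drives from this thread (note that zpool list reports raw rather than
usable capacity for raidz, so the AVAIL column of zfs list is the fairer
comparison):

# zpool create testz raidz c1t2d0 c1t4d0     # testz/testm are invented pool names
# zfs list testz
# zpool destroy testz
# zpool create testm mirror c1t2d0 c1t4d0
# zfs list testm
# zpool destroy testm

If both pools report roughly the same AVAIL - about one disk's worth - the
two-disk raid-z is spending half its space on parity, i.e. mirror-like
space efficiency without the mirror read path.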
Johan Hartzenberg
2008-Jul-06 13:54 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correc
On Sun, Jul 6, 2008 at 3:46 PM, Ross <myxiplx@hotmail.com> wrote:
>
> For your second one I'm less sure what's going on:
> # zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz
> c1t6d0 c1t8d0
>
> This creates three two-disk raid-z sets and stripes the data across them.
> The problem is that a two-disk raid-z makes no sense. Traditionally this
> level of raid needs a minimum of three disks to work. I suspect ZFS may be
> interpreting raid-z as requiring one parity drive, in which case this will
> effectively mirror the drives, but without the read performance boost that
> mirroring would give you.
>
> The way zpool create works is that you can specify raid or mirror sets,
> but if you list a bunch of these one after the other, it simply stripes
> the data across them.

I read somewhere, a long time ago when ZFS documentation was still mostly
speculation, that raidz will use "mirroring" when the amount of data to be
written is less than what justifies 2+parity. E.g. instead of 1+parity, you
get mirrored data for small writes and essentially raid-5 for big writes,
with intermediate sizes getting a raid-5-like spread of blocks across disks
but using fewer than the total number of disks in the set.

If that still holds true, then a raidz of 2 disks is probably just a mirror?
Rob Clark
2008-Jul-20 01:39 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> On Sun, Jul 6, 2008 at 3:46 PM, Ross myxiplx@hotmail.com wrote:
> For your second one I'm less sure what's going on:
> ... The problem is that a two-disk raid-z makes no sense.
> Traditionally this level of raid needs a minimum of three disks to work.
> I suspect ZFS may be interpreting raid-z as requiring one parity
> drive, in which case this will effectively mirror the drives, but
> without the read performance boost that mirroring would give you.
>
> The way zpool create works is that you can specify
> raid or mirror sets, but if you list a bunch of
> these one after the other, it simply stripes the data
> across them.

> I read somewhere, a long time ago when ZFS documentation was still
> mostly speculation, that raidz will use "mirroring" when the amount of
> data to be written is less than what justifies 2+parity. E.g. instead
> of 1+parity, you get mirrored data for small writes, and essentially
> raid-5 for big writes, with intermediate sizes getting a raid-5-like
> spread of blocks across disks but using fewer than the total number of
> disks in the set.
> If that still holds true, then a raidz of 2 disks is probably just a
> mirror?

Exactly. I wish we could either have the commands do as we ask _OR_ tell
us that we made an error, instead of doing whatever they think we wanted.
The displayed result can also be confusing (to us newcomers), and since the
filesystem (ZFS) is new (and under development) there is bound to be some
confusion.

I tried a few configurations of raid / mirroring and was using IOzone to
test them when things slowed to a crawl and eventually (many hours later)
crashed. I've now installed snv_93 (so some prior observations may no
longer apply) and will be testing that now. System load is definitely
going to factor into my configuration choice.

Thanks for all the replies (this post seems to go to the
zfs-discuss@opensolaris.org mailing list, but posts there don't seem to
end up here).

Sincerely,
Rob
Rob Clark
2008-Jul-20 11:50 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> -Peter Tribble wrote:
>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>> I have eight 10GB drives.
>> ...
>> I have 6 remaining 10 GB drives and I desire to
>> "raid" 3 of them and "mirror" them to the other 3 to
>> give me raid security and integrity with mirrored
>> drive performance. I then want to move my "/export"
>> directory to the new drive.
>> ...
> You can't do that. You can't layer raidz and mirroring.
> You'll either have to use raidz for the lot, or just use mirroring:
> zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0
> -Peter Tribble

Solaris may not allow me to do that, but the concept is not unheard of.

Quoting the Proceedings of the Third USENIX Conference on File and Storage
Technologies:
http://www.usenix.org/publications/library/proceedings/fast04/tech/corbett/corbett.pdf

"Mirrored RAID-4 and RAID-5 protect against higher order failures [4].
However, the efficiency of the array as measured by its data capacity
divided by its total disk space is reduced."

[4] Qin Xin, E. Miller, T. Schwarz, D. Long, S. Brandt, W. Litwin,
"Reliability mechanisms for very large storage systems", 20th IEEE/11th
NASA Goddard Conference on Mass Storage Systems and Technologies, San
Diego, CA, pp. 146-156, Apr. 2003.

Rob
Richard Elling
2008-Jul-20 15:46 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
Rob Clark wrote:
>> -Peter Tribble wrote:
>>> On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark wrote:
>>> I have eight 10GB drives.
>>> ...
>>> I have 6 remaining 10 GB drives and I desire to
>>> "raid" 3 of them and "mirror" them to the other 3 to
>>> give me raid security and integrity with mirrored
>>> drive performance. I then want to move my "/export"
>>> directory to the new drive.
>>> ...
>
>> You can't do that. You can't layer raidz and mirroring.
>> You'll either have to use raidz for the lot, or just use mirroring:
>> zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror c1t6d0 c1t8d0
>> -Peter Tribble
>
> Solaris may not allow me to do that but the concept is not unheard of:

Solaris will allow you to do this, but you'll need to use SVM instead of
ZFS. Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
 -- richard
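In outline, that layered setup might look something like this. An untested
sketch: the metadevice numbers, slice assignments, and pool name are all
invented, and SVM needs state-database replicas before any metadevice can
be created:

# metadb -a -f -c3 c1t2d0s7 c1t3d0s7           # state database replicas (slices invented)
# metainit d10 -r c1t2d0s0 c1t4d0s0 c1t5d0s0   # SVM RAID-5 metadevice #1
# metainit d20 -r c1t3d0s0 c1t6d0s0 c1t8d0s0   # SVM RAID-5 metadevice #2
# zpool create temparray mirror /dev/md/dsk/d10 /dev/md/dsk/d20

ZFS simply sees the two RAID-5 metadevices as ordinary block devices and
mirrors them.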
Rob Clark
2008-Jul-21 13:21 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
> -- richard

Or run Linux ...

Richard, the ZFS Best Practices Guide says not to:

"Do not use the same disk or slice in both an SVM and ZFS configuration."
Volker A. Brandt
2008-Jul-21 16:06 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> > Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
>
> Richard, The ZFS Best Practices Guide says not.
>
> "Do not use the same disk or slice in both an SVM and ZFS configuration."

Hmmm... my guess is that this means one shouldn't layer SVM and ZFS
devices. I can't see any problems with just using the same disk. For
Solaris 10 (without the ZFS root feature) I have been doing this routinely:
root and swap are a mirrored metadevice, and the rest of the root disks are
a mirrored zpool providing /var, /opt, etc. Works Just Fine(TM)


Regards -- Volker
-- 
------------------------------------------------------------------------
Volker A. Brandt                  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH                   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim                    Email: vab@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513             Schuhgröße: 45
Geschäftsführer: Rainer J. H. Brandt und Volker A. Brandt
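For reference, the layout Volker describes would look roughly like this -
a sketch with invented slice assignments (s0 root, s7 metadb replicas, s4
for the pool), omitting the swap mirror and the reboot needed between
metaroot and metattach:

# metadb -a -f -c3 c1t0d0s7 c1t1d0s7
# metainit d11 1 1 c1t0d0s0        # submirror on the first root slice
# metainit d12 1 1 c1t1d0s0        # submirror on the second root slice
# metainit d10 -m d11              # one-way mirror for root
# metaroot d10                     # update /etc/vfstab and /etc/system
# metattach d10 d12                # attach the second half (after a reboot)
# zpool create datapool mirror c1t0d0s4 c1t1d0s4
# zfs create -o mountpoint=/opt datapool/opt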
Richard Elling
2008-Jul-21 17:00 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
Rob Clark wrote:
>> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
>> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
>> -- richard
>
> Or run Linux ...
>
> Richard, The ZFS Best Practices Guide says not.
>
> "Do not use the same disk or slice in both an SVM and ZFS configuration."

Though possible, I don't think we would classify it as a best practice.
 -- richard
Carson Gaspar
2008-Jul-21 17:19 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
Richard Elling wrote:
> Rob Clark wrote:
>>> Solaris will allow you to do this, but you'll need to use SVM instead of ZFS.
>>> Or, I suppose, you could use SVM for RAID-5 and ZFS to mirror those.
>>> -- richard
>>
>> Or run Linux ...
>>
>> Richard, The ZFS Best Practices Guide says not.
>>
>> "Do not use the same disk or slice in both an SVM and ZFS configuration."
>
> Though possible, I don't think we would classify it as a best practice.

Is it possible? What will stop ZFS from auto-detecting the underlying
devices? Does it have inside knowledge of ODS/SDS/SVM/Name_du_jour?

In a simple example: mirror c1d1s2 and c1d2s2 into md30, then create a
zpool on md30. When ZFS scans for pools, it will see 2 or 3 copies
(depending on SVM/ZFS start ordering). What happens?

-- 
Carson
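Spelled out, Carson's example looks like this (d31 and d32 are invented
names for the submirrors; d30 is from his post):

# metainit d31 1 1 c1d1s2
# metainit d32 1 1 c1d2s2
# metainit d30 -m d31              # one-way mirror
# metattach d30 d32                # attach the second submirror
# zpool create tank /dev/md/dsk/d30

After this, the same ZFS labels are readable through c1d1s2, c1d2s2, and
/dev/md/dsk/d30 - which is exactly the ambiguity being asked about.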
Bob Friesenhahn
2008-Jul-21 20:56 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
On Mon, 21 Jul 2008, Rob Clark wrote:

> "Do not use the same disk or slice in both an SVM and ZFS configuration."

It seems that the main reason for this is that responding to faults
becomes haphazard and unsynchronized. Unlike the space shuttle, there are
not three flight computers with cross-checking. SVM and ZFS are completely
different pieces of software, developed in different eras. If SVM and ZFS
make opposite decisions, then the system cannot recover.

Bob
======================================
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Rob Clark
2008-Jul-22 16:01 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive
> Though possible, I don't think we would classify it as a best practice.
> -- richard

Looking at http://opensolaris.org/os/community/volume_manager/ I see
"Supports RAID-0, RAID-1, RAID-5", "Root mirroring", and "Seamless upgrades
and live upgrades" (that would go nicely with my ZFS root mirror - right?).
I also don't see a nice GUI for those that desire one ...

Looking at http://evms.sourceforge.net/gui_screen/ I see some great
screenshots, and the page http://evms.sourceforge.net/ says it supports
Ext2/3, JFS, ReiserFS, XFS, Swap, OCFS2, NTFS, and FAT -- so it might be
better to suggest adding ZFS there instead of focusing on non-ZFS solutions
in this ZFS discussion group.

Rob
Rob Clark
2008-Jul-29 14:54 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctl
There may be some work being done to fix this:

zpool should support raidz of mirrors
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689

Discussed in this thread:
Mirrored Raidz (Posted: Oct 19, 2006 9:02 PM)
http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
Rob Clark
2008-Nov-29 17:35 UTC
[zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctl
Bump. Some of the threads on this were last posted to over a year ago.
I checked 6485689 and it is not fixed yet; is there any work being done
in this area?

Thanks,
Rob

> There may be some work being done to fix this:
>
> zpool should support raidz of mirrors
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6485689
>
> Discussed in this thread:
> Mirrored Raidz (Posted: Oct 19, 2006 9:02 PM)
> http://opensolaris.org/jive/thread.jspa?threadID=15854&tstart=0
>
> The suggested solution (by jone,
> http://opensolaris.org/jive/thread.jspa?messageID=66279) is:
>
> # zpool create a1pool raidz c0t0d0 c0t1d0 c0t2d0 ..
> # zpool create a2pool raidz c1t0d0 c1t1d0 c1t2d0 ..
> # zfs create -V a1pool/vol
> # zfs create -V a2pool/vol
> # zpool create mzdata mirror /dev/zvol/dsk/a1pool/vol /dev/zvol/dsk/a2pool/vol
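As quoted, those zfs create -V lines are missing the volume size that the
flag requires. A corrected, untested sketch (the 9g size and three-disk
raidz sets are just examples):

# zpool create a1pool raidz c0t0d0 c0t1d0 c0t2d0
# zpool create a2pool raidz c1t0d0 c1t1d0 c1t2d0
# zfs create -V 9g a1pool/vol      # -V needs an explicit size
# zfs create -V 9g a2pool/vol
# zpool create mzdata mirror /dev/zvol/dsk/a1pool/vol /dev/zvol/dsk/a2pool/vol

Note that layering one pool on zvols from other pools on the same host is
generally discouraged: among other problems, the outer pool cannot come up
until the inner pools have been imported.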