Andrew Watkins
2005-Nov-24 21:04 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS. I see ZFS being used in two ways:

1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."

2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?

Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?

Also, I want to hear what sort of feedback we get from people who use ZFS on small systems where there are not many spare CPU cycles left (we don't all have big machines)?

Any comments?

Andrew
Torrey McMahon
2005-Nov-24 21:29 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
Andrew Watkins wrote:
> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS.

What part are you finding hard to believe? Do you think people are advocating trading in all your storage arrays for JBODs?

> I see ZFS being used in two ways:
>
> 1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."
>
> 2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?
>
> Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?

Sticking a filesystem, and some other bits, into a storage array turns it into a NAS box. That's a whole other discussion... but probably an interesting one. ;-)
James C. McPherson
2005-Nov-24 22:08 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
Andrew Watkins wrote:
> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS. I see ZFS being used in two ways:
>
> 1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."

Yes, this is already happening.

> 2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?

When you can remove a layer of management complexity from storage -- with no additional $$ cost to your organisation -- why would you _not_ want to use ZFS? If you've got two accessible paths to your HW RAID array, you can take advantage of the safety features of ZFS. You can also, at the same time, remove the need to manually chunk the presented storage the way a lot of folks do for databases (redo logs go in this fast group, tablespaces over here... archive logs on that group... but it's a real pain to add space to what you've got configured already).

> Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?

I can't speak for future plans (check The Register!), but what I can say is that while I would have been ecstatic if we'd been able to get ZFS shipped by April 2005 (six months ago), it's still not too late. In fact, if you think about the awareness of complexity issues which has snowballed over the last year, then ZFS is here right at the point where it can make a significant difference. With Tom's Hardware doing reviews of 1TB SATA disks, which will inevitably make their way into storage arrays of one sort or another, would you want to be using ufs or vxfs on them? I certainly wouldn't.

> Also, I want to hear what sort of feedback we get from people who use ZFS on small systems where there are not many spare CPU cycles left (we don't all have big machines)?

I have an Ultra 60 in the office with two 360MHz CPUs. It runs ZFS on two attached MultiPacks hanging off glm SCSI instances (the old, limited SCSI UltraWide controllers) _just fine_. Are you wondering "how low-end can we go and still have ZFS usable?" I haven't tried it on my old Duron 650, but I reckon that machine also would not have a problem. Of course I wouldn't be using it for interactive work in JDS, but that's a completely different matter!

best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
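[A minimal sketch of the database layout James describes, assuming a pool named "dbpool" built from two hypothetical array LUNs, c2t0d0 and c3t0d0, one on each path. Each database area becomes a ZFS dataset, so "adding space" is a property change rather than a re-slicing exercise:

    # one pool, mirrored across both array paths
    zpool create dbpool mirror c2t0d0 c3t0d0

    # datasets instead of hand-carved volume slices
    zfs create dbpool/redologs
    zfs create dbpool/tablespaces
    zfs create dbpool/archivelogs

    # guarantee space for the redo logs, cap the archive logs;
    # growing either later is just another "zfs set"
    zfs set reservation=10G dbpool/redologs
    zfs set quota=200G dbpool/archivelogs

All datasets draw from the same pooled free space, which is what removes the "pain to add space" James mentions.]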
Peter Tribble
2005-Nov-24 22:08 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
On Thu, 2005-11-24 at 21:04, Andrew Watkins wrote:
> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS. I see ZFS being used in two ways:

Who says they have to convert? You can use ZFS on top of hardware RAID, and get all the administrative advantages, and a bit of extra performance. But ZFS does allow people to get many of the advantages of expensive RAID hardware without having to buy the expensive hardware. And gain a few extra advantages along the way.

> 1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."

Or people who have relatively cheap EIDE/ATA/SATA drives that have good throughput but are relatively poor on latency. Or people who have huge RAID arrays and want a dead easy way to manage the data sitting on them.

> 2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?

Hardware RAID of itself isn't any great protection. Anyone can take advantage of the ZFS features and administrative model, and its end-to-end integrity. (In fact, given the extra complexity and potential for error in higher-end RAID systems [simply due to the number of components along the path], that's one place where you do want the error checking and recovery capability of ZFS.)

> Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?

Too late? I don't think so. Not that we couldn't have done with ZFS earlier (much earlier...), but it's so much of an advance on what we've had to struggle with for the last decade or more. And it's actually being released at an interesting time, when data storage is starting to spiral out of control. (I don't know about anybody else, but I've noticed the rate of growth in storage has really accelerated in the last year or so.)

> Also, I want to hear what sort of feedback we get from people who use ZFS on small systems where there are not many spare CPU cycles left (we don't all have big machines)?

What do you class as a small or big machine? While it's true that ZFS seems to like a few CPU cycles, compared to the growth in resource requirements of desktop environments and applications (and web browsers) it's pretty lightweight.

--
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
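[A sketch of the hybrid setup Peter describes, assuming the array exports one RAID-protected LUN under the hypothetical name c4t0d0. The array supplies the redundancy; ZFS checksums every block, and a scrub verifies the whole path from platter to host:

    # pool on top of a hardware-RAID LUN; block checksums are on by default
    zpool create tank c4t0d0

    # walk every allocated block and verify its checksum end to end
    zpool scrub tank

    # report any checksum errors found along the path
    zpool status -v tank

With only array-level redundancy, ZFS can detect corruption but not repair it; mirroring two array LUNs in the pool would add self-healing on top of detection.]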
Gavin Maltby
2005-Nov-24 22:35 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
On 11/24/05 21:04, Andrew Watkins wrote:
> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS. I see ZFS being used in two ways:

I hope, and believe, you're wrong here :-)

> 1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."

Well, I'm looking forward to improving the performance of more sophisticated storage, too. On my build system and on our home directory servers, our standard "super safe" volume setup (mirror everything into separate arrays on separate controllers etc., with the submirrors made up of hardware RAID-5 volumes) just makes things miserably slow. And it doesn't make them truly safe, either.

> 2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?

Have a read of the blogs on end-to-end checksumming in ZFS (I forget whose they were now, but they should not be difficult to find (*)). There are more than a "few" companies who value their data. ZFS is not just an "extra layer of protection", as pointed out elsewhere.

(*)
- http://blogs.sun.com/roller/page/bonwick?entry=raid_z
- http://blogs.sun.com/roller/page/elowe?entry=zfs_saves_the_day_ta
- and the email to this alias from Bill Moore, 11/16/05, on the subject "Re: [zfs-discuss] ZFS and SAN"

> Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?
>
> Also, I want to hear what sort of feedback we get from people who use ZFS on small systems where there are not many spare CPU cycles left (we don't all have big machines)?

I have a build NFS server which is a V240 with 2 x USIIIi CPUs and 4GB of memory - hardly bundles of CPU cycles. Multiple widely parallel builds from big build servers to its ZFS volumes have made no dent in build times and no noticeable increase in CPU utilization. The bottleneck is still those hardware RAID-5 arrays that are connected. Now that a ZFS upgrade no longer means a likely full backup and restore (as it did during development, when the on-disk format changed), I'll be knocking the RAID-5 volumes on the head and using RAID-Z.

Cheers
Gavin
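[A sketch of the switch Gavin has in mind, assuming the arrays can present their member disks individually (the device names here are hypothetical). A single-parity RAID-Z group gives RAID-5-style capacity, with ZFS checksumming closing the RAID-5 write hole:

    # single-parity RAID-Z across four disks presented by the array
    zpool create build raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0

    # a filesystem for the build area; no newfs, no /etc/vfstab edits
    zfs create build/ws

    # watch throughput while the parallel builds run
    zpool iostat build 5
]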
George William Herbert
2005-Nov-24 23:22 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
James McPherson wrote:
> Andrew Watkins wrote:
>> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS. I see ZFS being used in two ways:
>>
>> 1) People (maybe universities) who have got lots of disk packs (non-RAID) and want to improve the performance of their systems at no cost, i.e. "I have an E450 full of disks, so I may give it a go."
>
> Yes, this is already happening.
>
>> 2) A "few" companies with big systems who want that extra layer of protection, by sticking ZFS between the O/S and hardware RAID?

More likely: O/S, volume management, plus HW RAID (or JBOD).

> When you can remove a layer of management complexity from storage -- with no additional $$ cost to your organisation -- why would you _not_ want to use ZFS? If you've got two accessible paths to your HW RAID array, you can take advantage of the safety features of ZFS. You can also, at the same time, remove the need to manually chunk the presented storage the way a lot of folks do for databases (redo logs go in this fast group, tablespaces over here... archive logs on that group... but it's a real pain to add space to what you've got configured already).

I have seen this assertion repeatedly, and I am hopeful that it turns out to be true. However...

I have built a couple or three major enterprise-class storage installations a year, for a while now, for customers or consulting clients. These often had to include both HW RAID and SW RAID / volume management to meet single-point-of-failure avoidance requirements (and, not uncommonly, geographical distribution of storage for disaster recovery). They were terrible things to have to design and implement, what with trying to optimize it all and chunk out the volume slices for the various DB role users per above, etc.

As ugly as those are to design and manage, the performance and stability of ZFS remain untested for that level of enterprise storage use, in publicly available test cases etc. I don't doubt that Sun internally, and some early-access users, have done that sort of installation; I haven't seen it and talked to the people who did it yet. I would not specify ZFS, or make a hardware buy contingent on ZFS being "the" software solution, until such time as I have at least seen and talked to some of the field test example sites, and preferably been able to build at least one large unit in a lab. ZFS is moving from "research toy" into "limited production beta test" right now. It has to pass that deployed beta...

>> Personally I think ZFS may have come too late, unless Sun is going to stick it into the StorageTek boxes?
>
> I can't speak for future plans (check The Register!), but what I can say is that while I would have been ecstatic if we'd been able to get ZFS shipped by April 2005 (six months ago), it's still not too late. In fact, if you think about the awareness of complexity issues which has snowballed over the last year, then ZFS is here right at the point where it can make a significant difference. With Tom's Hardware doing reviews of 1TB SATA disks, which will inevitably make their way into storage arrays of one sort or another, would you want to be using ufs or vxfs on them? I certainly wouldn't.

s/ufs or vxfs/ufs or vxfs plus SVM or VxVM/

The other key required capability is bringing in shared, clustered filesystem support.
One thing that concerns me is that it's not really publicly clear how much of that was on the design team's minds when ZFS was being conceived and built up. What ZFS appears to be right now is a beta of a really good, really manageable combined advanced local FS and volume manager. As I stated above, the solutions for building large storage installations with prior tools like VxVM/SVM and ufs/vxfs are a pain... but they're just a pain. They don't make me sweat thinking about them. What makes me sweat are projects where globally shared filesystems are required, and they're becoming more common over time. If ZFS can't play in that space, then it will not be the last major filesystem built and deployed in the first half of this century, and it will not have a particularly long lifetime either.

-george william herbert
gherbert at retro.com / gherbert at taos.com
Richard Elling - PAE
2005-Nov-28 19:40 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
Andrew Watkins wrote:
> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS.

When we speak of reliable data storage, we need the ability to both detect and correct faulty data. Even if you use the array for data redundancy, and do not use ZFS for data redundancy, you still get ZFS's ability to detect faulty data. I expect that we will see more and more service calls as people roll out ZFS, because of this additional data fault detection. Using older file systems, which did not detect these silent faults, leads to a false sense of security. Unfortunately, some will blame the messenger...

Another huge benefit of ZFS's redundancy technology is the migration of data across storage. In the past, if I wanted to add a new whizzy storage device and copy a file system to it, I would have needed to monkey around with a volume manager (and hoped that I had preplanned such a move, and thus already had the file system running on a volume manager) or dump & restore. With ZFS, I can just add the new device and, later, remove the old. Simple and flexible.

-- richard
Cyril Plisko
2005-Nov-28 19:51 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
On 11/28/05, Richard Elling - PAE <Richard.Elling at sun.com> wrote:
> Andrew Watkins wrote:
>> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS.

...

> Another huge benefit of ZFS's redundancy technology is the migration of data across storage. In the past, if I wanted to add a new whizzy storage device and copy a file system to it, I would have needed to monkey around with a volume manager (and hoped that I had preplanned such a move, and thus already had the file system running on a volume manager) or dump & restore. With ZFS, I can just add the new device and, later, remove the old. Simple and flexible.

Richard,

this functionality isn't available with ZFS (at least today) unless you are talking strictly about mirrors. (In which case the volume manager would be as easy as ZFS in this respect.) I recall that someone on this list mentioned that storage migration is on the roadmap, but no certain plans were discussed. If you know something specific about storage migration, please share it with the rest of the community. Supporting migration within ZFS looks like a natural thing to do, BTW.

--
Regards,
Cyril
Richard Elling - PAE
2005-Nov-29 00:25 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
comment below...

Cyril Plisko wrote:
> On 11/28/05, Richard Elling - PAE <Richard.Elling at sun.com> wrote:
>> Andrew Watkins wrote:
>>> I still find it hard to believe that people are going to convert their expensive RAID boxes into disk storage boxes running software ZFS.
>
> ...
>
>> Another huge benefit of ZFS's redundancy technology is the migration of data across storage. In the past, if I wanted to add a new whizzy storage device and copy a file system to it, I would have needed to monkey around with a volume manager (and hoped that I had preplanned such a move, and thus already had the file system running on a volume manager) or dump & restore. With ZFS, I can just add the new device and, later, remove the old. Simple and flexible.
>
> Richard,
>
> this functionality isn't available with ZFS (at least today) unless you are talking strictly about mirrors.

Well, if you are migrating from one storage device to another, space isn't a problem...

> (In which case the volume manager would be as easy as ZFS in this respect.)

I disagree, for two reasons:

1. LVMs don't know anything about the data stored, so mirror silvering will *always* take longer. This is really a manifestation of the end-to-end capabilities of ZFS, as opposed to an interposing layer of software pretending to look like a disk. You will always be better off managing the data closer to the layer which knows the context of the data.

2. LVMs today pretty much all require disks. ZFS doesn't have that limitation, which opens up some interesting options for migrating data.

> I recall that someone on this list mentioned that storage migration is on the roadmap, but no certain plans were discussed. If you know something specific about storage migration, please share it with the rest of the community.
> Supporting migration within ZFS looks like a natural thing to do, BTW.

Perhaps you're thinking of migration between ZFS and other file systems? In my example, I'm concerned with migrating data across storage. IMHO, the former is largely a copy-out/copy-in exercise - not very technically challenging, though sometimes logistically constrained.

-- richard
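[For the mirrored case both posters agree works today, a minimal sketch of the migration Richard describes, with hypothetical device names c2t0d0 (old) and c5t0d0 (new). The resilver copies only allocated blocks, which is the point of item 1 above:

    # attach the new LUN as a mirror of the old device
    zpool attach tank c2t0d0 c5t0d0

    # watch the resilver; it walks live data, not every raw block
    zpool status tank

    # once the new side is current, retire the old device
    zpool detach tank c2t0d0
]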
Jeff Bonwick
2005-Nov-29 04:15 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
> Supporting migration within ZFS looks like a natural thing to do, BTW.

Indeed it is. I'll blog about this in some detail in a few weeks.

Jeff
Torrey McMahon
2005-Nov-29 08:10 UTC
[zfs-discuss] I can believe that ZFS is better than hardware RAID?
Richard Elling - PAE wrote:
> 1. LVMs don't know anything about the data stored, so mirror silvering will *always* take longer. This is really a manifestation of the end-to-end capabilities of ZFS, as opposed to an interposing layer of software pretending to look like a disk. You will always be better off managing the data closer to the layer which knows the context of the data.

At what point is it faster to simply copy a large swath of disk drive or LUN than to go back and forth, taking speed hits, by copying only the data? And how can you tell? Could you re-order your read/write operations to increase the speed of the overall copy operation, yet also maintain the requisite amount of consistency? And do it all while other operations are ongoing? Things to ponder, eh?